diff --git a/docs/integrations/data-ingestion/clickpipes/mysql/faq.md b/docs/integrations/data-ingestion/clickpipes/mysql/faq.md
index a35ea90091b..99008c9744a 100644
--- a/docs/integrations/data-ingestion/clickpipes/mysql/faq.md
+++ b/docs/integrations/data-ingestion/clickpipes/mysql/faq.md
@@ -34,3 +34,7 @@ You have several options to resolve these issues:
3. **Configure server certificate** - Update your server's SSL certificate to include all connection hostnames and use a trusted Certificate Authority.
4. **Skip certificate verification** - For self-hosted MySQL or MariaDB, whose default configurations provision a self-signed certificate we can't validate ([MySQL](https://dev.mysql.com/doc/refman/8.4/en/creating-ssl-rsa-files-using-mysql.html#creating-ssl-rsa-files-using-mysql-automatic), [MariaDB](https://mariadb.com/kb/en/securing-connections-for-client-and-server/#enabling-tls-for-mariadb-server)). Relying on this certificate encrypts the data in transit but runs the risk of server impersonation. We recommend properly signed certificates for production environments, but this option is useful for testing on a one-off instance or connecting to legacy infrastructure.
+
+### Do you support schema changes? {#do-you-support-schema-changes}
+
+Please refer to the [ClickPipes for MySQL: Schema Changes Propagation Support](./schema-changes) page for more information.
\ No newline at end of file
diff --git a/docs/integrations/data-ingestion/clickpipes/mysql/index.md b/docs/integrations/data-ingestion/clickpipes/mysql/index.md
index 1b0a2a00a9f..1d7572646ca 100644
--- a/docs/integrations/data-ingestion/clickpipes/mysql/index.md
+++ b/docs/integrations/data-ingestion/clickpipes/mysql/index.md
@@ -1,8 +1,8 @@
---
-sidebar_label: 'ClickPipes for MySQL'
+sidebar_label: 'Ingesting Data from MySQL to ClickHouse'
description: 'Describes how to seamlessly connect your MySQL to ClickHouse Cloud.'
slug: /integrations/clickpipes/mysql
-title: 'Ingesting Data from MySQL to ClickHouse (using CDC)'
+title: 'Ingesting data from MySQL to ClickHouse (using CDC)'
---
import BetaBadge from '@theme/badges/BetaBadge';
@@ -15,20 +15,15 @@ import select_destination_db from '@site/static/images/integrations/data-ingesti
import ch_permissions from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/ch-permissions.jpg'
import Image from '@theme/IdealImage';
-# Ingesting data from MySQL to ClickHouse using CDC
+# Ingesting data from MySQL to ClickHouse (using CDC)
-:::info
-Currently, ingesting data from MySQL to ClickHouse Cloud via ClickPipes is in Private Preview.
-:::
-
-
-You can use ClickPipes to ingest data from your source MySQL database into ClickHouse Cloud. The source MySQL database can be hosted on-premises or in the cloud.
+You can use ClickPipes to ingest data from your source MySQL database into ClickHouse Cloud. The source MySQL database can be hosted on-premises or in the cloud using services like Amazon RDS, Google Cloud SQL, and others.
## Prerequisites {#prerequisites}
-To get started, you first need to make sure that your MySQL database is set up correctly. Depending on your source MySQL instance, you may follow any of the following guides:
+To get started, you first need to ensure that your MySQL database is correctly configured for binlog replication. The configuration steps depend on how you're deploying MySQL, so please follow the relevant guide below:
1. [Amazon RDS MySQL](./mysql/source/rds)
@@ -44,7 +39,7 @@ To get started, you first need to make sure that your MySQL database is set up c
Once your source MySQL database is set up, you can continue creating your ClickPipe.
-## Create your ClickPipe {#creating-your-clickpipe}
+## Create your ClickPipe {#create-your-clickpipe}
Make sure you are logged in to your ClickHouse Cloud account. If you don't have an account yet, you can sign up [here](https://cloud.clickhouse.com/).
@@ -61,20 +56,18 @@ Make sure you are logged in to your ClickHouse Cloud account. If you don't have
-### Add your source MySQL database connection {#adding-your-source-mysql-database-connection}
+### Add your source MySQL database connection {#add-your-source-mysql-database-connection}
4. Fill in the connection details for your source MySQL database which you configured in the prerequisites step.
:::info
-
Before you start adding your connection details make sure that you have whitelisted ClickPipes IP addresses in your firewall rules. On the following page you can find a [list of ClickPipes IP addresses](../index.md#list-of-static-ips).
For more information refer to the source MySQL setup guides linked at [the top of this page](#prerequisites).
-
:::
-#### (Optional) Set up SSH tunneling {#optional-setting-up-ssh-tunneling}
+#### (Optional) Set up SSH tunneling {#optional-set-up-ssh-tunneling}
You can specify SSH tunneling details if your source MySQL database is not publicly accessible.
@@ -88,12 +81,10 @@ You can specify SSH tunneling details if your source MySQL database is not publi
4. Click on "Verify Connection" to verify the connection.
:::note
-
Make sure to whitelist [ClickPipes IP addresses](../clickpipes#list-of-static-ips) in your firewall rules for the SSH bastion host so that ClickPipes can establish the SSH tunnel.
-
:::
-Once the connection details are filled in, click on "Next".
+Once the connection details are filled in, click `Next`.
#### Configure advanced settings {#advanced-settings}
@@ -106,7 +97,7 @@ You can configure the advanced settings if needed. A brief description of each s
- **Snapshot number of tables in parallel**: This is the number of tables that will be fetched in parallel during the initial snapshot. This is useful when you have a large number of tables and you want to control the number of tables fetched in parallel.
-### Configure the tables {#configuring-the-tables}
+### Configure the tables {#configure-the-tables}
5. Here you can select the destination database for your ClickPipe. You can either select an existing database or create a new one.
@@ -121,3 +112,9 @@ You can configure the advanced settings if needed. A brief description of each s
Finally, please refer to the ["ClickPipes for MySQL FAQ"](/integrations/clickpipes/mysql/faq) page for more information about common issues and how to resolve them.
+
+## What's next? {#whats-next}
+
+[//]: # "TODO Write a MySQL-specific migration guide and best practices similar to the existing one for PostgreSQL. The current migration guide points to the MySQL table engine, which is not ideal."
+
+Once you've set up your ClickPipe to replicate data from MySQL to ClickHouse Cloud, you can focus on how to query and model your data for optimal performance. For common questions around MySQL CDC and troubleshooting, see the [MySQL FAQs page](/integrations/data-ingestion/clickpipes/mysql/faq.md).
diff --git a/docs/integrations/data-ingestion/clickpipes/mysql/schema-changes.md b/docs/integrations/data-ingestion/clickpipes/mysql/schema-changes.md
new file mode 100644
index 00000000000..d2c8e37d0df
--- /dev/null
+++ b/docs/integrations/data-ingestion/clickpipes/mysql/schema-changes.md
@@ -0,0 +1,15 @@
+---
+title: 'Schema Changes Propagation Support'
+slug: /integrations/clickpipes/mysql/schema-changes
+description: 'Page describing schema change types detectable by ClickPipes in the source tables'
+---
+
+ClickPipes for MySQL can detect schema changes in the source tables and, in some cases, automatically propagate the changes to the destination tables. The way each DDL operation is handled is documented below:
+
+[//]: # "TODO Extend this page with behavior on rename, data type changes, and truncate + guidance on how to handle incompatible schema changes."
+
+| Schema Change Type | Behaviour |
+| ----------------------------------------------------------------------------------- | ------------------------------------- |
+| Adding a new column (`ALTER TABLE ADD COLUMN ...`) | Propagated automatically. The new column(s) will be populated for all rows replicated after the schema change |
+| Adding a new column with a default value (`ALTER TABLE ADD COLUMN ... DEFAULT ...`) | Propagated automatically. The new column(s) will be populated for all rows replicated after the schema change, but existing rows will not show the default value without a full table refresh |
+| Dropping an existing column (`ALTER TABLE DROP COLUMN ...`) | Detected, but **not** propagated. The dropped column(s) will be populated with `NULL` for all rows replicated after the schema change |
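+
+For illustration, the two classes of change might look like the following (the table and column names here are hypothetical):
+
+```sql
+-- Additive change: propagated automatically. Rows replicated after this
+-- statement will carry the new column in the destination table.
+ALTER TABLE orders ADD COLUMN discount DECIMAL(5,2) DEFAULT 0.00;
+
+-- Destructive change: detected but NOT propagated. Rows replicated after
+-- this statement will have NULL in the dropped column on the destination.
+ALTER TABLE orders DROP COLUMN legacy_flag;
+```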
diff --git a/docs/integrations/data-ingestion/clickpipes/mysql/source/aurora.md b/docs/integrations/data-ingestion/clickpipes/mysql/source/aurora.md
index 395d7e18feb..ec11c9bff48 100644
--- a/docs/integrations/data-ingestion/clickpipes/mysql/source/aurora.md
+++ b/docs/integrations/data-ingestion/clickpipes/mysql/source/aurora.md
@@ -19,83 +19,91 @@ import Image from '@theme/IdealImage';
# Aurora MySQL source setup guide
-This is a step-by-step guide on how to configure your Aurora MySQL instance for replicating its data via the MySQL ClickPipe.
-
-:::info
-We also recommend going through the MySQL FAQs [here](/integrations/data-ingestion/clickpipes/mysql/faq.md). The FAQs page is being actively updated.
-:::
+This step-by-step guide shows you how to configure Amazon Aurora MySQL to replicate data into ClickHouse Cloud using the [MySQL ClickPipe](../index.md). For common questions around MySQL CDC, see the [MySQL FAQs page](/integrations/data-ingestion/clickpipes/mysql/faq.md).
## Enable binary log retention {#enable-binlog-retention-aurora}
-The binary log is a set of log files that contain information about data modifications made to an MySQL server instance, and binary log files are required for replication. Both of the below steps must be followed:
-### 1. Enable binary logging via automated backup {#enable-binlog-logging-aurora}
-The automated backups feature determines whether binary logging is turned on or off for MySQL. It can be set in the AWS console:
+The binary log is a set of log files that contain information about data modifications made to a MySQL server instance, and binary log files are required for replication. To configure binary log retention in Aurora MySQL, you must [enable binary logging](#enable-binlog-logging) and [increase the binlog retention interval](#binlog-retention-interval).
+
+### 1. Enable binary logging via automated backup {#enable-binlog-logging}
+
+The automated backups feature determines whether binary logging is turned on or off for MySQL. Automated backups can be configured for your instance in the RDS Console by navigating to **Modify** > **Additional configuration** > **Backup** and selecting the **Enable automated backups** checkbox (if not selected already).
-Setting backup retention to a reasonably long value depending on the replication use-case is advisable.
+We recommend setting the **Backup retention period** to a reasonably long value, depending on the replication use case.
+
+### 2. Increase the binlog retention interval {#binlog-retention-interval}
+
+:::warning
+If ClickPipes tries to resume replication and the required binlog files have been purged due to the configured binlog retention value, the ClickPipe will enter an errored state and a resync is required.
+:::
+
+By default, Aurora MySQL purges the binary log as soon as possible (i.e., _lazy purging_). We recommend increasing the binlog retention interval to at least **72 hours** to ensure availability of binary log files for replication under failure scenarios. To set an interval for binary log retention ([`binlog retention hours`](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/mysql-stored-proc-configuring.html#mysql_rds_set_configuration-usage-notes.binlog-retention-hours)), use the [`mysql.rds_set_configuration`](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/mysql-stored-proc-configuring.html#mysql_rds_set_configuration) procedure:
-### 2. Binlog retention hours {#binlog-retention-hours-aurora}
-The procedure below must be called to ensure availability of binary logs for replication:
+[//]: # "NOTE Most CDC providers recommend the maximum retention period for Aurora RDS (7 days/168 hours). Since this has an impact on disk usage, we conservatively recommend a minimum of 3 days/72 hours."
```text
-mysql=> call mysql.rds_set_configuration('binlog retention hours', 24);
+mysql=> call mysql.rds_set_configuration('binlog retention hours', 72);
```
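+
+To double-check the configured value, you can call the [`mysql.rds_show_configuration`](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/mysql-stored-proc-configuring.html#mysql_rds_show_configuration) procedure:
+
+```text
+mysql=> call mysql.rds_show_configuration;
+```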
-If this configuration isn't set, Amazon RDS purges the binary logs as soon as possible, leading to gaps in the binary logs.
-## Configure binlog settings in the parameter group {#binlog-parameter-group-aurora}
+If this configuration isn't set or is set to a low interval, it can lead to gaps in the binary logs, compromising ClickPipes' ability to resume replication.
+
+## Configure binlog settings {#binlog-settings}
-The parameter group can be found when you click on your MySQL instance in the RDS Console, and then heading over to the `Configurations` tab.
+To find the parameter group, click on your MySQL instance in the RDS Console and navigate to the **Configuration** tab.
+
+:::tip
+If you have a MySQL cluster, the parameters below can be found in the [DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithParamGroups.CreatingCluster.html) parameter group instead of the DB instance group.
+:::
-Upon clicking on the parameter group link, you will be taken to the page for it. You will see an Edit button in the top-right.
+
+Click the parameter group link, which will take you to its dedicated page. You should see an **Edit** button in the top right.
-The following settings need to be set as follows:
+
+The following parameters need to be set as follows:
1. `binlog_format` to `ROW`.
-2. `binlog_row_metadata` to `FULL`
+2. `binlog_row_metadata` to `FULL`.
-3. `binlog_row_image` to `FULL`
+3. `binlog_row_image` to `FULL`.
-Then click on `Save Changes` in the top-right. You may need to reboot your instance for the changes to take effect - a way of knowing this is if you see `Pending reboot` next to the parameter group link in the Configurations tab of the RDS instance.
+Then, click on **Save Changes** in the top right corner. You may need to reboot your instance for the changes to take effect — a way of knowing this is if you see `Pending reboot` next to the parameter group link in the **Configuration** tab of the Aurora instance.
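+
+Once the instance is back up, you can verify from a MySQL client that the parameters have taken effect:
+
+```text
+mysql=> SHOW VARIABLES WHERE Variable_name IN ('binlog_format', 'binlog_row_metadata', 'binlog_row_image');
+```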
+
+## Enable GTID mode (recommended) {#gtid-mode}
+
:::tip
-If you have a MySQL cluster, the above parameters would be found in a [DB Cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithParamGroups.CreatingCluster.html) parameter group and not the DB instance group.
+The MySQL ClickPipe also supports replication without GTID mode. However, enabling GTID mode is recommended for better performance and easier troubleshooting.
:::
-## Enabling GTID mode {#gtid-mode-aurora}
-Global Transaction Identifiers (GTIDs) are unique IDs assigned to each committed transaction in MySQL. They simplify binlog replication and make troubleshooting more straightforward.
+[Global Transaction Identifiers (GTIDs)](https://dev.mysql.com/doc/refman/8.0/en/replication-gtids.html) are unique IDs assigned to each committed transaction in MySQL. They simplify binlog replication and make troubleshooting more straightforward. We **recommend** enabling GTID mode, so that the MySQL ClickPipe can use GTID-based replication.
-If your MySQL instance is MySQL 5.7, 8.0 or 8.4, we recommend enabling GTID mode so that the MySQL ClickPipe can use GTID replication.
+GTID-based replication is supported for Amazon Aurora MySQL v2 (MySQL 5.7) and v3 (MySQL 8.0), as well as Aurora Serverless v2. To enable GTID mode for your Aurora MySQL instance, follow these steps:
-To enable GTID mode for your MySQL instance, follow the steps as follows:
1. In the RDS Console, click on your MySQL instance.
-2. Click on the `Configurations` tab.
+2. Click on the **Configuration** tab.
3. Click on the parameter group link.
-4. Click on the `Edit` button in the top-right corner.
+4. Click on the **Edit** button in the top right corner.
5. Set `enforce_gtid_consistency` to `ON`.
6. Set `gtid-mode` to `ON`.
-7. Click on `Save Changes` in the top-right corner.
+7. Click on **Save Changes** in the top right corner.
8. Reboot your instance for the changes to take effect.
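+
+After the reboot, you can confirm from a MySQL client that GTID mode is active:
+
+```text
+mysql=> SELECT @@gtid_mode, @@enforce_gtid_consistency;
+```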
-
-:::info
-The MySQL ClickPipe also supports replication without GTID mode. However, enabling GTID mode is recommended for better performance and easier troubleshooting.
-:::
-
-## Configure a database user {#configure-database-user-aurora}
+## Configure a database user {#configure-database-user}
Connect to your Aurora MySQL instance as an admin user and execute the following commands:
@@ -122,7 +130,7 @@ Connect to your Aurora MySQL instance as an admin user and execute the following
### IP-based access control {#ip-based-access-control}
-If you want to restrict traffic to your Aurora instance, please add the [documented static NAT IPs](../../index.md#list-of-static-ips) to the `Inbound rules` of your Aurora security group as shown below:
+To restrict traffic to your Aurora MySQL instance, add the [documented static NAT IPs](../../index.md#list-of-static-ips) to the **Inbound rules** of your Aurora security group.
@@ -130,4 +138,8 @@ If you want to restrict traffic to your Aurora instance, please add the [documen
### Private access via AWS PrivateLink {#private-access-via-aws-privatelink}
-To connect to your Aurora instance through a private network, you can use AWS PrivateLink. Follow our [AWS PrivateLink setup guide for ClickPipes](/knowledgebase/aws-privatelink-setup-for-clickpipes) to set up the connection.
+To connect to your Aurora MySQL instance through a private network, you can use AWS PrivateLink. Follow the [AWS PrivateLink setup guide for ClickPipes](/knowledgebase/aws-privatelink-setup-for-clickpipes) to set up the connection.
+
+## What's next? {#whats-next}
+
+Now that your Amazon Aurora MySQL instance is configured for binlog replication and can securely connect to ClickHouse Cloud, you can [create your first MySQL ClickPipe](/integrations/clickpipes/mysql/#create-your-clickpipe). For common questions around MySQL CDC, see the [MySQL FAQs page](/integrations/data-ingestion/clickpipes/mysql/faq.md).
\ No newline at end of file
diff --git a/docs/integrations/data-ingestion/clickpipes/mysql/source/rds.md b/docs/integrations/data-ingestion/clickpipes/mysql/source/rds.md
index e3267954868..de711c3ac28 100644
--- a/docs/integrations/data-ingestion/clickpipes/mysql/source/rds.md
+++ b/docs/integrations/data-ingestion/clickpipes/mysql/source/rds.md
@@ -19,42 +19,52 @@ import Image from '@theme/IdealImage';
# RDS MySQL source setup guide
-This is a step-by-step guide on how to configure your RDS MySQL instance for replicating its data via the MySQL ClickPipe.
-
-:::info
-We also recommend going through the MySQL FAQs [here](/integrations/data-ingestion/clickpipes/mysql/faq.md). The FAQs page is being actively updated.
-:::
+This step-by-step guide shows you how to configure Amazon RDS MySQL to replicate data into ClickHouse Cloud using the [MySQL ClickPipe](../index.md). For common questions around MySQL CDC, see the [MySQL FAQs page](/integrations/data-ingestion/clickpipes/mysql/faq.md).
## Enable binary log retention {#enable-binlog-retention-rds}
-The binary log is a set of log files that contain information about data modifications made to an MySQL server instance, and binary log files are required for replication. Both of the below steps must be followed:
-### 1. Enable binary logging via automated backup{#enable-binlog-logging-rds}
-The automated backups feature determines whether binary logging is turned on or off for MySQL. It can be set in the AWS console:
+The binary log is a set of log files that contain information about data modifications made to a MySQL server instance, and binary log files are required for replication. To configure binary log retention in RDS MySQL, you must [enable binary logging](#enable-binlog-logging) and [increase the binlog retention interval](#binlog-retention-interval).
+
+### 1. Enable binary logging via automated backup {#enable-binlog-logging}
+
+The automated backups feature determines whether binary logging is turned on or off for MySQL. Automated backups can be configured for your instance in the RDS Console by navigating to **Modify** > **Additional configuration** > **Backup** and selecting the **Enable automated backups** checkbox (if not selected already).
-Setting backup retention to a reasonably long value depending on the replication use-case is advisable.
+We recommend setting the **Backup retention period** to a reasonably long value, depending on the replication use case.
+
+### 2. Increase the binlog retention interval {#binlog-retention-interval}
+
+:::warning
+If ClickPipes tries to resume replication and the required binlog files have been purged due to the configured binlog retention value, the ClickPipe will enter an errored state and a resync is required.
+:::
-### 2. Binlog retention hours{#binlog-retention-hours-rds}
-Amazon RDS for MySQL has a different method of setting binlog retention duration, which is the amount of time a binlog file containing changes is kept. If some changes are not read before the binlog file is removed, replication will be unable to continue. The default value of binlog retention hours is NULL, which means binary logs aren't retained.
+By default, Amazon RDS purges the binary log as soon as possible (i.e., _lazy purging_). We recommend increasing the binlog retention interval to at least **72 hours** to ensure availability of binary log files for replication under failure scenarios. To set an interval for binary log retention ([`binlog retention hours`](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-stored-proc-configuring.html#mysql_rds_set_configuration-usage-notes.binlog-retention-hours)), use the [`mysql.rds_set_configuration`](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-stored-proc-configuring.html#mysql_rds_set_configuration) procedure:
-To specify the number of hours to retain binary logs on a DB instance, use the mysql.rds_set_configuration function with a binlog retention period long enough for replication to occur. `24 hours` is the recommended minimum.
+[//]: # "NOTE Most CDC providers recommend the maximum retention period for RDS (7 days/168 hours). Since this has an impact on disk usage, we conservatively recommend a minimum of 3 days/72 hours."
```text
-mysql=> call mysql.rds_set_configuration('binlog retention hours', 24);
+mysql=> call mysql.rds_set_configuration('binlog retention hours', 72);
```
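+
+To double-check the configured value, you can call the [`mysql.rds_show_configuration`](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-stored-proc-configuring.html#mysql_rds_show_configuration) procedure:
+
+```text
+mysql=> call mysql.rds_show_configuration;
+```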
-## Configure binlog settings in the parameter group {#binlog-parameter-group-rds}
+If this configuration isn't set or is set to a low interval, it can lead to gaps in the binary logs, compromising ClickPipes' ability to resume replication.
+
+## Configure binlog settings {#binlog-settings}
+
+To find the parameter group, click on your MySQL instance in the RDS Console and navigate to the **Configuration** tab.
-The parameter group can be found when you click on your MySQL instance in the RDS Console, and then heading over to the `Configurations` tab.
+:::tip
+If you have a MySQL cluster, the parameters below can be found in the [DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithParamGroups.CreatingCluster.html) parameter group instead of the DB instance group.
+:::
-Upon clicking on the parameter group link, you will be taken to the page for it. You will see an Edit button in the top-right.
+
+Click the parameter group link, which will take you to its dedicated page. You should see an **Edit** button in the top right.
-The following settings need to be set as follows:
+The following parameters need to be set as follows:
1. `binlog_format` to `ROW`.
@@ -68,37 +78,31 @@ The following settings need to be set as follows:
-Then click on `Save Changes` in the top-right. You may need to reboot your instance for the changes to take effect - a way of knowing this is if you see `Pending reboot` next to the parameter group link in the Configurations tab of the RDS instance.
-
+Then, click on **Save Changes** in the top right corner. You may need to reboot your instance for the changes to take effect — a way of knowing this is if you see `Pending reboot` next to the parameter group link in the **Configuration** tab of the RDS instance.
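+
+Once the instance is back up, you can verify from a MySQL client that the binlog parameters configured above have taken effect:
+
+```text
+mysql=> SHOW VARIABLES WHERE Variable_name IN ('binlog_format', 'binlog_row_metadata', 'binlog_row_image');
+```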
+
+## Enable GTID mode {#gtid-mode}
+
:::tip
-If you have a MySQL cluster, the above parameters would be found in a [DB Cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithParamGroups.CreatingCluster.html) parameter group and not the DB instance group.
+The MySQL ClickPipe also supports replication without GTID mode. However, enabling GTID mode is recommended for better performance and easier troubleshooting.
:::
-## Enabling GTID Mode {#gtid-mode-rds}
-Global Transaction Identifiers (GTIDs) are unique IDs assigned to each committed transaction in MySQL. They simplify binlog replication and make troubleshooting more straightforward.
+[Global Transaction Identifiers (GTIDs)](https://dev.mysql.com/doc/refman/8.0/en/replication-gtids.html) are unique IDs assigned to each committed transaction in MySQL. They simplify binlog replication and make troubleshooting more straightforward. We **recommend** enabling GTID mode, so that the MySQL ClickPipe can use GTID-based replication.
-If your MySQL instance is MySQL 5.7, 8.0 or 8.4, we recommend enabling GTID mode so that the MySQL ClickPipe can use GTID replication.
+GTID-based replication is supported for Amazon RDS for MySQL versions 5.7, 8.0, and 8.4. To enable GTID mode for your RDS MySQL instance, follow these steps:
-To enable GTID mode for your MySQL instance, follow the steps as follows:
1. In the RDS Console, click on your MySQL instance.
-2. Click on the `Configurations` tab.
+2. Click on the **Configuration** tab.
3. Click on the parameter group link.
-4. Click on the `Edit` button in the top-right corner.
+4. Click on the **Edit** button in the top right corner.
5. Set `enforce_gtid_consistency` to `ON`.
6. Set `gtid-mode` to `ON`.
-7. Click on `Save Changes` in the top-right corner.
+7. Click on **Save Changes** in the top right corner.
8. Reboot your instance for the changes to take effect.
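+
+After the reboot, you can confirm from a MySQL client that GTID mode is active:
+
+```text
+mysql=> SELECT @@gtid_mode, @@enforce_gtid_consistency;
+```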
-
-:::tip
-The MySQL ClickPipe also supports replication without GTID mode. However, enabling GTID mode is recommended for better performance and easier troubleshooting.
-:::
-
-
-## Configure a database user {#configure-database-user-rds}
+## Configure a database user {#configure-database-user}
Connect to your RDS MySQL instance as an admin user and execute the following commands:
@@ -125,7 +129,7 @@ Connect to your RDS MySQL instance as an admin user and execute the following co
### IP-based access control {#ip-based-access-control}
-If you want to restrict traffic to your RDS instance, please add the [documented static NAT IPs](../../index.md#list-of-static-ips) to the `Inbound rules` of your RDS security group.
+To restrict traffic to your RDS MySQL instance, add the [documented static NAT IPs](../../index.md#list-of-static-ips) to the **Inbound rules** of your RDS security group.
@@ -133,4 +137,8 @@ If you want to restrict traffic to your RDS instance, please add the [documented
### Private access via AWS PrivateLink {#private-access-via-aws-privatelink}
-To connect to your RDS instance through a private network, you can use AWS PrivateLink. Follow our [AWS PrivateLink setup guide for ClickPipes](/knowledgebase/aws-privatelink-setup-for-clickpipes) to set up the connection.
+To connect to your RDS instance through a private network, you can use AWS PrivateLink. Follow the [AWS PrivateLink setup guide for ClickPipes](/knowledgebase/aws-privatelink-setup-for-clickpipes) to set up the connection.
+
+## Next steps {#next-steps}
+
+Now that your Amazon RDS MySQL instance is configured for binlog replication and can securely connect to ClickHouse Cloud, you can [create your first MySQL ClickPipe](/integrations/clickpipes/mysql/#create-your-clickpipe). For common questions around MySQL CDC, see the [MySQL FAQs page](/integrations/data-ingestion/clickpipes/mysql/faq.md).
\ No newline at end of file
diff --git a/docs/integrations/data-ingestion/clickpipes/postgres/index.md b/docs/integrations/data-ingestion/clickpipes/postgres/index.md
index 85923d822ec..8151f1a3ba0 100644
--- a/docs/integrations/data-ingestion/clickpipes/postgres/index.md
+++ b/docs/integrations/data-ingestion/clickpipes/postgres/index.md
@@ -145,6 +145,6 @@ You can configure the Advanced settings if needed. A brief description of each s
## What's next? {#whats-next}
-Once you've moved data from Postgres to ClickHouse, the next obvious question is how to query and model your data in ClickHouse to make the most of it. Please refer to the [migration guide](/migrations/postgresql/overview) to a step by step approaches on how to migrate from PostgreSQL to ClickHouse. Alongside the migration guide, make sure to check the pages about [Deduplication strategies (using CDC)](/integrations/clickpipes/postgres/deduplication) and [Ordering Keys](/integrations/clickpipes/postgres/ordering_keys) to understand how to handle duplicates and customize ordering keys when using CDC.
+Once you've set up your ClickPipe to replicate data from PostgreSQL to ClickHouse Cloud, you can focus on how to query and model your data for optimal performance. See the [migration guide](/migrations/postgresql/overview) to assess which strategy best suits your requirements, as well as the [Deduplication strategies (using CDC)](/integrations/clickpipes/postgres/deduplication) and [Ordering Keys](/integrations/clickpipes/postgres/ordering_keys) pages for best practices on CDC workloads.
-Finally, please refer to the ["ClickPipes for Postgres FAQ"](/integrations/clickpipes/postgres/faq) page for more information about common issues and how to resolve them.
+For common questions around PostgreSQL CDC and troubleshooting, see the [Postgres FAQs page](/integrations/clickpipes/postgres/faq).
diff --git a/docs/integrations/data-ingestion/clickpipes/postgres/schema-changes.md b/docs/integrations/data-ingestion/clickpipes/postgres/schema-changes.md
index 1c5943ca93c..96903ac3e45 100644
--- a/docs/integrations/data-ingestion/clickpipes/postgres/schema-changes.md
+++ b/docs/integrations/data-ingestion/clickpipes/postgres/schema-changes.md
@@ -4,10 +4,12 @@ slug: /integrations/clickpipes/postgres/schema-changes
description: 'Page describing schema change types detectable by ClickPipes in the source tables'
---
-ClickPipes for Postgres can detect schema changes in the source tables. It can propagate some of these changes to the corresponding destination tables as well. The way each schema change is handled is documented below:
+ClickPipes for Postgres can detect schema changes in the source tables and, in some cases, automatically propagate the changes to the destination tables. The way each DDL operation is handled is documented below:
+
+[//]: # "TODO Extend this page with behavior on rename, data type changes, and truncate + guidance on how to handle incompatible schema changes."
| Schema Change Type | Behaviour |
| ----------------------------------------------------------------------------------- | ------------------------------------- |
-| Adding a new column (`ALTER TABLE ADD COLUMN ...`) | Propagated automatically, all rows after the change will have all columns filled |
-| Adding a new column with a default value (`ALTER TABLE ADD COLUMN ... DEFAULT ...`) | Propagated automatically, all rows after the change will have all columns filled but existing rows will not show the DEFAULT value without a full table refresh |
-| Dropping an existing column (`ALTER TABLE DROP COLUMN ...`) | Detected, but not propagated. All rows after the change will have NULL for the dropped columns |
+| Adding a new column (`ALTER TABLE ADD COLUMN ...`) | Propagated automatically. The new column(s) will be populated for all rows replicated after the schema change |
+| Adding a new column with a default value (`ALTER TABLE ADD COLUMN ... DEFAULT ...`) | Propagated automatically. The new column(s) will be populated for all rows replicated after the schema change, but existing rows will not show the default value without a full table refresh |
+| Dropping an existing column (`ALTER TABLE DROP COLUMN ...`) | Detected, but **not** propagated. The dropped column(s) will be populated with `NULL` for all rows replicated after the schema change |
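+
+For illustration, the two classes of change might look like the following (the table and column names here are hypothetical):
+
+```sql
+-- Additive change: propagated automatically. Rows replicated after this
+-- statement will carry the new column in the destination table.
+ALTER TABLE orders ADD COLUMN discount NUMERIC(5,2) DEFAULT 0.00;
+
+-- Destructive change: detected but NOT propagated. Rows replicated after
+-- this statement will have NULL in the dropped column on the destination.
+ALTER TABLE orders DROP COLUMN legacy_flag;
+```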
diff --git a/sidebars.js b/sidebars.js
index 66230ab576c..3a171613aef 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -764,6 +764,7 @@ const sidebars = {
items: [
"integrations/data-ingestion/clickpipes/mysql/index",
"integrations/data-ingestion/clickpipes/mysql/faq",
+ "integrations/data-ingestion/clickpipes/mysql/schema-changes",
{
type: "category",
label: "Source",