+
{% include molt/molt-setup.md %}
## Start Fetch
diff --git a/src/current/molt/migrate-to-cockroachdb.md b/src/current/molt/migrate-to-cockroachdb.md
index a64832549da..13b28bd1cc3 100644
--- a/src/current/molt/migrate-to-cockroachdb.md
+++ b/src/current/molt/migrate-to-cockroachdb.md
@@ -11,8 +11,8 @@ MOLT Fetch supports various migration flows using [MOLT Fetch modes]({% link mol
| Migration flow | Mode | Description | Best for |
|---------------------------------------------------------------------|------------------------------|---------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
-| [Bulk load]({% link molt/migrate-bulk-load.md %}) | `--mode data-load` | Perform a one-time bulk load of source data into CockroachDB. | Testing, migrations with [planned downtime]({% link molt/migration-strategy.md %}#approach-to-downtime) |
-| [Load and replicate]({% link molt/migrate-load-replicate.md %}) | MOLT Fetch + MOLT Replicator | Load source data using MOLT Fetch, then replicate subsequent changes using MOLT Replicator. | [Minimal downtime]({% link molt/migration-strategy.md %}#approach-to-downtime) migrations |
+| [Bulk load]({% link molt/migrate-bulk-load.md %}) | `--mode data-load` | Perform a one-time bulk load of source data into CockroachDB. | Testing, migrations with [planned downtime]({% link molt/migration-considerations.md %}#permissible-downtime) |
+| [Load and replicate]({% link molt/migrate-load-replicate.md %}) | MOLT Fetch + MOLT Replicator | Load source data using MOLT Fetch, then replicate subsequent changes using MOLT Replicator. | [Minimal downtime]({% link molt/migration-considerations.md %}#permissible-downtime) migrations |
| [Resume replication]({% link molt/migrate-resume-replication.md %}) | `--mode replication-only` | Resume replication from a checkpoint after interruption. | Resuming interrupted migrations, post-load sync |
| [Failback]({% link molt/migrate-failback.md %}) | `--mode failback` | Replicate changes from CockroachDB back to the source database. | [Rollback]({% link molt/migrate-failback.md %}) scenarios |
diff --git a/src/current/molt/migration-considerations-cutover.md b/src/current/molt/migration-considerations-cutover.md
new file mode 100644
index 00000000000..eac110f84d2
--- /dev/null
+++ b/src/current/molt/migration-considerations-cutover.md
@@ -0,0 +1,8 @@
+---
+title: Cutover Plan
+summary: Learn about the different approaches to cutover, and how to think about this for your migration.
+toc: true
+docs_area: migrate
+---
+
+TBD
\ No newline at end of file
diff --git a/src/current/molt/migration-considerations-phases.md b/src/current/molt/migration-considerations-phases.md
new file mode 100644
index 00000000000..35d451dc0fd
--- /dev/null
+++ b/src/current/molt/migration-considerations-phases.md
@@ -0,0 +1,101 @@
+---
+title: Migration Granularity
+summary: Learn how to think about phased data migration, and whether or not to approach your migration in phases.
+toc: true
+docs_area: migrate
+---
+
+You may choose to migrate all of your data into a CockroachDB cluster at once. However, for larger data stores it's recommended that you migrate data in separate phases. This can help break the migration down into manageable slices, and it can help limit the effects of migration difficulties.
+
+This page explains when to choose each approach, how to define phases, and how to use MOLT tools effectively in either context.
+
+In general:
+
+- Choose to migrate your data **all at once** if your data volume is modest, if you want to minimize migration complexity, or if you can accept the greater risk of a migration issue affecting the entire system.
+
+- Choose a **phased migration** if your data volume is large, especially if you can naturally partition workload by tenant, service/domain, table/shard, geography, or time. A phased migration helps to reduce risk by limiting the workloads that would be adversely affected by a migration failure. It also helps to limit the downtime per phase, and allows the application to continue serving unaffected subsets of the data during the migration of a phase.
+
+## How to divide migrations into phases
+
+Here are some common ways to divide migrations:
+
+* **Per-tenant**: Multi-tenant apps route traffic and data per customer/tenant. Migrate a small cohort first (canary), then progressively larger cohorts. This aligns with access controls and limits the blast radius.
+
+* **Per-service/domain**: In microservice architectures, migrate data owned by a service or domain (e.g., billing, catalog) and route only that service to CockroachDB while others continue on the source. Requires clear data ownership and integration contracts.
+
+* **Per-table or shard**: Start with non-critical tables, large-but-isolated tables, or shard ranges. For monolith schemas, you can still phase by tables with few foreign-key dependencies and clear read/write paths.
+
+* **Per-region/market**: If traffic is regionally segmented, migrate one region/market at a time and validate latency, capacity, and routing rules before expanding.
+
+Tips for picking slices:
+
+- Prefer slices with clear routing keys (such as `tenant_id` or `region_id`) to simplify cutover and verification.
+
+- Start with lower-impact slices to exercise the migration process before migrating high-value cohorts.
+
+## Tradeoffs
+
+| | All at once | Phased |
+|---|---|---|
+| Downtime | A single downtime window, but it affects the whole database | Multiple short windows, each with limited impact |
+| Risk | Higher blast radius if issues surface post-cutover | Lower blast radius, issues confined to a slice |
+| Complexity | Simpler orchestration, enables a single cutover | More orchestration, repeated verify and cutover steps |
+| Validation | One-time, system-wide | Iterative per slice; faster feedback loops |
+| Timeline | Shorter migration time | Longer calendar time but safer path |
+| Best for | Small/medium datasets, simple integrations | Larger datasets, data with natural partitions or multiple tenants, risk-averse migrations |
+
+## Decision framework
+
+Use these questions to guide your approach:
+
+**How large is your dataset and how long will a full migration take?**
+If you can migrate the entire dataset within an acceptable downtime window, all-at-once is simpler. If the migration would take hours or days, phased migrations reduce the risk and downtime per phase.
+
+**Does your data have natural partitions?**
+If you can clearly partition by tenant, service, region, or table with minimal cross-dependencies, phased migration is well-suited. If your data is highly interconnected with complex foreign-key relationships, all-at-once may be easier.
+
+**What is your risk tolerance?**
+If a migration failure affecting the entire system is unacceptable, phased migration limits the blast radius. If you can afford to roll back the entire migration in case of issues, all-at-once is faster.
+
+**How much downtime can you afford per cutover?**
+Phased migrations spread downtime across multiple smaller windows, each affecting only a subset of users or services. All-at-once requires a single larger window affecting everyone.
+
+**What is your team's capacity for orchestration?**
+Phased migrations require repeated cycles of migration, validation, and cutover, with careful coordination of routing and monitoring. All-at-once is a single coordinated event.
+
+**Do you need to validate incrementally?**
+If you want fast feedback loops and the ability to adjust your migration strategy based on early phases, phased migration provides incremental validation. All-at-once validates everything once at the end.
+
+**Can you route traffic selectively?**
+Phased migrations require the ability to route specific tenants, services, or regions to CockroachDB while others remain on the source. If your application can't easily support this, all-at-once may be necessary.
+
+## MOLT toolkit support
+
+Phased and unphased migrations are both supported natively by MOLT.
+
+By default, [MOLT Fetch]({% link molt/molt-fetch.md %}) moves all data from the source database to CockroachDB. However, you can use the `--schema-filter`, `--table-filter`, and `--filter-path` flags to selectively migrate data from the source to the target. Learn more about [schema and table selection]({% link molt/molt-fetch.md %}#schema-and-table-selection) and [selective data movement]({% link molt/molt-fetch.md %}#selective-data-movement), both of which can enable a phased migration.
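+
+For example, one phase scoped by table name might be loaded as follows. This is a minimal sketch, assuming placeholder connection strings and a hypothetical `billing_.*` table-name pattern; adapt the flags to your environment:
+
+~~~ shell
+# Load only the tables that belong to this phase.
+molt fetch \
+  --source 'postgres://user:pass@source-host:5432/app' \
+  --target 'postgres://root@crdb-host:26257/app?sslmode=verify-full' \
+  --mode data-load \
+  --table-filter 'billing_.*'
+~~~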
+
+Similarly, you can use [MOLT Verify]({% link molt/molt-verify.md %})'s `--schema-filter` and `--table-filter` flags to run validation checks on subsets of the data in your source and target databases. In a phased migration, you will likely want to verify data at the end of each migration phase, rather than at the end of the entire migration.
+
+[MOLT Replicator]({% link molt/molt-replicator.md %}) replicates full tables by default. If you choose to combine phased migration with [continuous replication]({% link molt/migration-considerations-replication.md %}), you will either need to select phases that include whole tables, or else use [userscripts]({% link molt/molt-replicator.md %}#flags) to select rows to replicate.
+
+## Example sequences
+
+#### Migrating all data at once
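+
+A minimal sketch of an all-at-once flow, assuming placeholder connection strings in `$SOURCE` and `$TARGET`:
+
+~~~ shell
+# 1. Bulk load the entire source dataset in one pass.
+molt fetch \
+  --source "$SOURCE" \
+  --target "$TARGET" \
+  --mode data-load
+
+# 2. Validate the full dataset before cutover.
+molt verify \
+  --source "$SOURCE" \
+  --target "$TARGET"
+~~~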
+
+
+
+
+
+#### Phased migration
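+
+A minimal sketch of a single phase, assuming placeholder connection strings and a hypothetical per-cohort table pattern; repeat the load-and-verify cycle for each phase:
+
+~~~ shell
+# Load only the slice that belongs to this phase.
+molt fetch \
+  --source "$SOURCE" \
+  --target "$TARGET" \
+  --mode data-load \
+  --table-filter 'tenant_cohort1_.*'
+
+# Verify the same slice before starting the next phase.
+molt verify \
+  --source "$SOURCE" \
+  --target "$TARGET" \
+  --table-filter 'tenant_cohort1_.*'
+~~~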
+
+
+
+
+
+## See also
+
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migration Considerations]({% link molt/migration-considerations.md %})
+- [Continuous Replication]({% link molt/migration-considerations-replication.md %})
+- [MOLT Fetch]({% link molt/molt-fetch.md %})
diff --git a/src/current/molt/migration-considerations-replication.md b/src/current/molt/migration-considerations-replication.md
new file mode 100644
index 00000000000..e9cd08d9cb2
--- /dev/null
+++ b/src/current/molt/migration-considerations-replication.md
@@ -0,0 +1,120 @@
+---
+title: Continuous Replication
+summary: Learn when and how to use continuous replication during data migration to minimize downtime and keep the target synchronized with the source.
+toc: true
+docs_area: migrate
+---
+
+Continuous replication can be used during a migration to keep a CockroachDB target cluster synchronized with a live source database. This is often used to minimize downtime at cutover. It can complement bulk data loading or be used independently.
+
+This page explains when to choose continuous replication, how to combine it with bulk loading, and how to use MOLT tools effectively for each approach.
+
+In general:
+
+- Choose to **bulk load only** if you can schedule a downtime window long enough to complete the entire data load and do not need to capture ongoing changes during migration.
+
+- Choose a **hybrid approach (bulk load + continuous replication)** when you need to minimize downtime and keep the target synchronized with ongoing source database changes until cutover.
+
+- You can choose **continuous replication only** for tables with transient data, or in other contexts where you only need to capture ongoing changes and are not concerned with migrating a large initial dataset.
+
+## Permissible downtime
+
+Downtime is the primary factor to consider in determining your migration's approach to continuous replication.
+
+If your migration can accommodate a window of **planned downtime** that's made known to your users in advance, a bulk load approach is simpler. A pure bulk load approach is well-suited for test or pre-production refreshes, or for migrations that can move all data within the planned downtime window.
+
+If your migration needs to **minimize downtime**, you will likely need to keep the source database live for as long as possible, continuing to allow write traffic to the source until cutover. In this case, an initial bulk load will need to be followed by a replication period, during which you stream incremental changes from the source to the target CockroachDB cluster. This is ideal for large datasets that are impractical to move within a narrow downtime window, or when you need validation time with a live, continuously synced target before switching traffic. The final downtime is minimized to a brief pause to let replication drain before switching traffic, with the pause length driven by write volume and observed replication lag.
+
+If you're migrating your data [in multiple phases]({% link molt/migration-considerations-phases.md %}), consider that each phase can have its own downtime window and cutover, and that migrating in phases can reduce the length of each individual downtime window.
+
+## Tradeoffs
+
+| | Bulk load only | Hybrid (bulk + replication) | Continuous replication only |
+|---|---|---|---|
+| **Downtime** | Requires full downtime for entire load | Minimal final downtime (brief pause to drain) | Minimal if resuming from checkpoint |
+| **Performance** | Fastest overall if window allows | Spreads work: bulk moves mass, replication handles ongoing changes | Depends on catch-up time from checkpoint |
+| **Complexity** | Fewer moving parts, simpler orchestration | Requires replication infrastructure and monitoring | Requires checkpoint management |
+| **Risk management** | Full commit at once; rollback more disruptive | Supports failback flows for rollback options | Lower risk when resuming known state |
+| **Cutover** | Traffic off until entire load completes | Traffic paused briefly while replication drains | Brief pause to verify sync |
+| **Timeline** | Shortest migration time if downtime permits | Longer preparation but safer path | Short catch-up phase |
+| **Best for** | Simple moves, test environments, scheduled maintenance | Production migrations, large datasets, high availability requirements | Recovery scenarios, post-load sync |
+
+## Decision framework
+
+Use these questions to guide your approach:
+
+**What downtime can you tolerate?**
+If you can't guarantee a window long enough for the full load, favor the hybrid approach to minimize downtime at cutover.
+
+**How large is the dataset and how fast can you bulk-load it?**
+If load time fits inside downtime, bulk-only is simplest. Otherwise, hybrid.
+
+**How active is the source (write rate and burstiness)?**
+Higher write rates mean a longer final drain; this pushes you toward hybrid with close monitoring of replication lag before cutover.
+
+**Do you need a safety net?**
+If rollback is a requirement, design for replication and failback pathways, which the MOLT flow supports.
+
+**How much validation do you require pre-cutover?**
+Hybrid gives you time to validate a live, synchronized target before switching traffic.
+
+## MOLT toolkit support
+
+The MOLT toolkit provides two complementary tools for data migration: [MOLT Fetch]({% link molt/molt-fetch.md %}) for bulk loading the initial dataset, and [MOLT Replicator]({% link molt/molt-replicator.md %}) for continuous replication. These tools work independently or together depending on your chosen replication approach.
+
+### Bulk load only
+
+Use MOLT Fetch to export and load data to CockroachDB.
+
+For pure bulk migrations, set the `--ignore-replication-check` flag to skip gathering replication checkpoints. This simplifies the workflow when you don't need to track change positions for subsequent replication.
+
+MOLT Fetch supports both `IMPORT INTO` (default, for highest throughput with offline tables) and `COPY FROM` (for online tables) loading methods. Because a pure bulk load approach will likely involve substantial application downtime, you may benefit from using `IMPORT INTO`. In this case, do not use the `--use-copy` flag. Learn more about Fetch's [data load modes]({% link molt/molt-fetch.md %}#data-load-mode).
+
+A migration that does not use continuous replication does not need MOLT Replicator.
+
+
+
+### Hybrid (bulk load + continuous replication)
+
+Use MOLT Fetch to export and load the initial dataset to CockroachDB. Then start MOLT Replicator to begin streaming changes from the source database to CockroachDB.
+
+When you run MOLT Fetch without `--ignore-replication-check`, it emits a checkpoint value that marks the point in time when the bulk load snapshot was taken. After MOLT Fetch completes, the checkpoint is stored in the target database. MOLT Replicator then uses this checkpoint to begin streaming changes from exactly that point, ensuring no data is missed between the bulk load and continuous replication. Learn more about [replication checkpoints]({% link molt/molt-replicator.md %}#replication-checkpoints).
+
+MOLT Fetch supports both `IMPORT INTO` (default, for highest throughput with offline tables) and `COPY FROM` (for online tables) loading methods. Because a hybrid approach will likely aim to have less downtime, you may need to use `COPY FROM` if your tables remain online. In this case, use the `--use-copy` flag. Learn more about Fetch's [data load modes]({% link molt/molt-fetch.md %}#data-load-mode).
+
+MOLT Replicator replicates full tables by default. If you choose to combine continuous replication with a [phased migration]({% link molt/migration-considerations-phases.md %}), you will either need to select phases that include whole tables, or else use [userscripts]({% link molt/molt-replicator.md %}#flags) to select rows to replicate.
+
+MOLT Replicator can be stopped after cutover, or it can remain online to continue streaming changes indefinitely.
+
+### Continuous replication only
+
+If you're only interested in capturing recent changes, skip MOLT Fetch entirely and just use MOLT Replicator.
+
+## Example sequences
+
+#### Bulk load only
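+
+A minimal sketch, assuming placeholder connection strings and that no replication phase follows the load:
+
+~~~ shell
+# One-time bulk load during the planned downtime window.
+# --ignore-replication-check skips gathering replication checkpoints.
+molt fetch \
+  --source "$SOURCE" \
+  --target "$TARGET" \
+  --mode data-load \
+  --ignore-replication-check
+
+# Confirm parity before bringing the application back online.
+molt verify \
+  --source "$SOURCE" \
+  --target "$TARGET"
+~~~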
+
+
+
+
+
+#### Hybrid approach
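+
+A minimal sketch, assuming a PostgreSQL source and placeholder connection strings. The Replicator subcommand and its source-specific flags (for example, replication slot settings) vary by source database, so treat this as an outline rather than a complete invocation:
+
+~~~ shell
+# 1. Initial load with source tables kept online. Without
+#    --ignore-replication-check, Fetch records a replication checkpoint.
+molt fetch \
+  --source "$SOURCE" \
+  --target "$TARGET" \
+  --mode data-load \
+  --use-copy
+
+# 2. Stream subsequent changes from the checkpoint until cutover.
+replicator pglogical \
+  --sourceConn "$SOURCE" \
+  --targetConn "$TARGET" \
+  --stagingCreateSchema
+~~~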
+
+
+
+
+
+## See also
+
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migration Considerations]({% link molt/migration-considerations.md %})
+- [Migration Granularity]({% link molt/migration-considerations-phases.md %})
+- [MOLT Fetch]({% link molt/molt-fetch.md %})
+- [MOLT Replicator]({% link molt/molt-replicator.md %})
+- [MOLT Verify]({% link molt/molt-verify.md %})
diff --git a/src/current/molt/migration-considerations-rollback.md b/src/current/molt/migration-considerations-rollback.md
new file mode 100644
index 00000000000..5db81cf11c0
--- /dev/null
+++ b/src/current/molt/migration-considerations-rollback.md
@@ -0,0 +1,97 @@
+---
+title: Rollback Plan
+summary: Learn how to plan rollback options to limit risk and preserve data integrity during migration.
+toc: true
+docs_area: migrate
+---
+
+A rollback plan defines how you will undo or recover from a failed migration. A clear rollback strategy limits risk during migration, minimizes business impact, and preserves data integrity so that you can retry the migration with confidence.
+
+This page explains four common rollback options, their trade-offs, and how the MOLT toolkit supports each approach.
+
+In general:
+
+- **Manual reconciliation** is sufficient for low-risk systems or low-complexity migrations where automated rollback is not necessary.
+
+- Utilize **failback replication** to maintain synchronization between the CockroachDB cluster and the original source database after cutover to CockroachDB.
+
+- Utilize **bidirectional replication** (simultaneous forward and failback replication) to maximize database synchronization without requiring app changes, accepting the operational overhead of running two replication streams.
+
+- Choose a **dual-write** strategy for the fastest rollback with minimal orchestration, accepting higher application complexity during the trial window.
+
+## Why plan for rollback
+
+Many things can go wrong during a migration. Performance issues may surface under production load that didn't appear in testing. Application compatibility problems might emerge that require additional code changes. Data discrepancies could be discovered that necessitate investigation and remediation. In any of these scenarios, the ability to quickly and safely return to the source database is critical to minimizing business impact.
+
+Your rollback strategy should align with your migration's risk profile, downtime tolerance, and operational capabilities. High-stakes production migrations typically require faster rollback paths with minimal data loss, while test environments or low-traffic systems can tolerate simpler manual approaches.
+
+### Failback replication
+
+[Continuous (forward) replication]({% link molt/migration-considerations-replication.md %}), which serves to minimize downtime windows, keeps two databases in sync by replicating changes from the source to the target. In contrast, **failback replication** synchronizes data in the opposite direction, from the target back to the source.
+
+Failback replication is useful for rollback because it keeps the source database synchronized with writes that occur on CockroachDB after cutover. If problems emerge during your trial period and you need to roll back, the source database already has all the data that was written to CockroachDB. This enables a quick rollback without data loss.
+
+Failback and forward replication can be used simultaneously (**bidirectional replication**). This is especially useful if the source and target databases can receive simultaneous but disparate write traffic; in that case, bidirectional replication is necessary to keep both databases in sync. It's also useful if downtime windows are long or cutover is gradual, which increases the likelihood that the two databases receive independent writes.
+
+### Dual-write
+
+Failback replication requires an external replication system (like [MOLT Replicator]({% link molt/molt-replicator.md %})) to keep two databases synchronized. Alternatively, you can modify the application code itself to enable **dual-writes**, wherein the application writes to both the source database and CockroachDB during a trial window. If rollback is needed, you can then redirect traffic to the source without additional data movement.
+
+This enables faster rollback, but increases application complexity as you need to manage two write paths.
+
+## Tradeoffs
+
+| | Manual reconciliation | Failback replication (on-demand) | Bidirectional replication | Dual-write |
+|---|---|---|---|---|
+| **Rollback speed (RTO)** | Slow | Moderate | Fast | Fast |
+| **Data loss risk (RPO)** | Medium-High | Low | Low | Low-Medium (app-dependent) |
+| **Synchronization mechanism** | None (backups/scripts) | Activate failback when needed | Continuous forward + failback | Application writes to both |
+| **Application changes** | None | None | None | Required |
+| **Operational complexity** | Low (tooling), High (manual) | Medium (runbook activation) | High (two replication streams) | High (app layer) |
+| **Overhead during trial** | Low | Low-Medium | High (two replication streams) | Medium (two write paths) |
+| **Best for** | Low-risk systems, simple migrations | Moderate RTO tolerance, lower ongoing cost | Strict RTO/RPO, long or complex cutovers | Short trials, resilient app teams |
+
+## Decision framework
+
+Use these questions to guide your rollback strategy:
+
+**How quickly do you need to roll back if problems occur?**
+If you need immediate rollback, choose dual-write or bidirectional replication. If you can tolerate some delay to activate failback replication, one-way failback replication is sufficient. For low-risk migrations with generous time windows, manual reconciliation may be acceptable.
+
+**How much data can you afford to lose during rollback?**
+If you cannot lose any data written after cutover, choose bidirectional replication or on-demand failback (both preserve all writes). Dual-write can also preserve data if implemented carefully. Manual reconciliation typically accepts some data loss.
+
+**Will writes occur to both databases during the trial period?**
+If traffic might split between source and target (e.g., during gradual cutover or in multi-region scenarios), bidirectional replication keeps both databases synchronized. If traffic cleanly shifts from source to target, on-demand failback or dual-write is sufficient.
+
+**Can you modify the application code?**
+If application changes are expensive or risky, use database-level replication (bidirectional or on-demand failback) instead of dual-write.
+
+**What is your team's operational capacity?**
+Bidirectional replication requires monitoring and managing two active replication streams. On-demand failback requires a tested runbook for activating failback quickly. Dual-write requires application-layer resilience and observability. Manual reconciliation has the lowest operational complexity.
+
+**What are your database capabilities?**
+Ensure your source database supports the change data capture requirements for the migration window. Verify that CockroachDB changefeeds can provide the necessary failback support for your environment.
+
+## MOLT toolkit support
+
+[MOLT Replicator]({% link molt/molt-replicator.md %}) uses change data capture to stream changes from one database to another. It's used for both [forward replication]({% link molt/migration-considerations-replication.md %}) and [failback replication](#failback-replication).
+
+To use MOLT Replicator in failback mode, run the [`replicator start`]({% link molt/molt-replicator.md %}#commands) command with its various [flags]({% link molt/molt-replicator.md %}#start-failback-flags).
+
+When enabling failback replication, the original source database becomes the replication target, and the original target CockroachDB cluster becomes the replication source. Use the `--sourceConn` flag to indicate the CockroachDB cluster, and use the `--targetConn` flag to indicate the PostgreSQL, MySQL, or Oracle database from which data is being migrated.
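+
+A minimal failback sketch with the roles reversed, assuming placeholder connection strings; additional flags are described in the MOLT Replicator documentation:
+
+~~~ shell
+# CockroachDB is now the replication source; the original source
+# database is the replication target.
+replicator start \
+  --sourceConn 'postgres://root@crdb-host:26257/defaultdb?sslmode=verify-full' \
+  --targetConn 'postgres://user:pass@original-source-host:5432/app' \
+  --stagingCreateSchema
+~~~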
+
+MOLT Replicator can be stopped after cutover, or it can remain online to continue streaming changes indefinitely. This can be useful for as long as you want the migration source database to serve as a backup to the new CockroachDB cluster.
+
+Rollback plans that do not utilize failback replication will require external tooling, or, in the case of a dual-write strategy, changes to application code. You can still use [MOLT Verify]({% link molt/molt-verify.md %}) to ensure parity between the two databases.
+
+## See also
+
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migration Considerations]({% link molt/migration-considerations.md %})
+- [Continuous Replication]({% link molt/migration-considerations-replication.md %})
+- [Validation Strategy]({% link molt/migration-considerations-validation.md %})
+- [MOLT Replicator]({% link molt/molt-replicator.md %})
+- [MOLT Fetch]({% link molt/molt-fetch.md %})
+- [MOLT Verify]({% link molt/molt-verify.md %})
+- [Migrate with Failback]({% link molt/migrate-failback.md %})
diff --git a/src/current/molt/migration-considerations-transformation.md b/src/current/molt/migration-considerations-transformation.md
new file mode 100644
index 00000000000..7d517761dad
--- /dev/null
+++ b/src/current/molt/migration-considerations-transformation.md
@@ -0,0 +1,88 @@
+---
+title: Data Transformation Strategy
+summary: Learn about the different approaches to applying data transformations during a migration and how to choose the right strategy for your use case.
+toc: true
+docs_area: migrate
+---
+
+Data transformations are applied to data as it moves from the source system to the target system. Transformations ensure that the data is compatible, consistent, and valuable in the destination. They are a key part of a migration to CockroachDB. When planning a migration, it's important to determine **what** transformations are necessary and **where** they need to occur.
+
+This page explains the types of transformations to expect, where they can be applied, and how these choices shape your use of MOLT tooling.
+
+## Common transformation types
+
+If the source and target schemas are not identical, some sort of transformation is likely to be necessary during a migration. The set of necessary transformations will depend on the differences between your source database schema and your target CockroachDB schema, as well as any data quality or formatting requirements for your application.
+
+- **Type mapping**: Align source types with CockroachDB types, especially for dialect-specific types.
+- **Format conversion**: Change the format or encoding of certain values to align with the target schema (for example, `2024-03-01T00:00:00Z` to `03/01/2024`).
+- **Field renaming**: Rename fields to fit target schemas or conventions.
+- **Primary key strategy**: Replace source sequences or auto-increment patterns with CockroachDB-friendly IDs (UUIDs, sequences).
+- **Table reshaping**: Consolidate partitioned tables, rename tables, or retarget to different schemas.
+- **Column changes**: Exclude deprecated columns, or map computed columns.
+- **Row filtering**: Move only a subset of rows by tenant, region, or timeframe.
+- **Null/default handling**: Replace, remove, or infer missing values.
+- **Constraints and indexes**: Drop non-primary-key constraints and secondary indexes before bulk load for performance, then recreate after.
+
+## Where to transform
+
+Transformations can occur in the source database, in the target database, or in flight (between the source and the target). Where you perform transformations is largely determined by technical constraints, including the mutability of the source database and the choice of tooling.
+
+#### Transform in the source database
+
+Apply transformations directly on the source database before migrating data. This is only possible if the source database can be modified to accommodate the transformations suited for the target database.
+
+This approach provides ample time to perform transformations before the downtime window begins, but it is often ruled out by technical constraints.
+
+#### Transform in the target database
+
+Apply transformations in the CockroachDB cluster after data has been loaded. Any transformations in the target cluster should ideally occur before cutover, to ensure that live data complies with CockroachDB best practices; however, transformations performed before cutover may extend downtime.
+
+#### Transform in flight
+
+Apply transformations within the migration pipeline, between the source and target databases. This allows the source database to remain as it is, and it allows the target database to be designed using CockroachDB best practices. It also enables testability by separating transformations from either database.
+
+However, in-flight transformations may require more complex tooling. In-flight transformation is largely supported by the [MOLT toolkit](#molt-toolkit-support).
+
+## Decision framework
+
+Use these questions to guide your transformation strategy:
+
+- **What is your downtime tolerance?** Near-zero downtime pushes you toward in-flight transforms that apply consistently to bulk and streaming loads.
+- **Will transformation logic be reused post-cutover?** If you need ongoing sync or failback, prefer deterministic, version-controlled in-flight transformations.
+- **How complex are the transformations?** Simple schema reshaping favors MOLT Fetch transformations or target DDL. Complex value normalization or routing favors in-flight userscripts.
+- **Can you modify the source database?** Source-side transformations require permission and capacity to create views, staging tables, or run transformation queries.
+
+## MOLT toolkit support
+
+The MOLT toolkit provides functionality for implementing transformations at each stage of the migration pipeline.
+
+### MOLT Schema Conversion Tool
+
+While not a part of the transformation process itself, the [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) automates the creation of the target database schema based on the schema of the source database. This reduces downstream transformation pressure by addressing DDL incompatibilities upfront.
+
+### MOLT Fetch
+
+[MOLT Fetch]({% link molt/molt-fetch.md %}) supports transformations during a bulk data load (a combined sketch follows this list):
+
+- **Row filtering**: `--filter-path` specifies a JSON file with table-to-SQL-predicate mappings evaluated in the source dialect before export. Ensure filtered columns are indexed for performance.
+- **Schema shaping**: `--transformations-file` defines table renames, n→1 merges (consolidate partitioned tables), and column exclusions. For n→1 merges, use `--use-copy` or `--direct-copy` and pre-create the target table.
+- **Type alignment**: `--type-map-file` specifies explicit type mappings when auto-creating target tables.
+- **Table lifecycle**: `--table-handling` controls whether to truncate, drop-and-recreate, or assume tables exist.
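+
+A combined sketch, assuming placeholder connection strings. The JSON shape below is illustrative only; refer to the MOLT Fetch documentation for the exact `--filter-path` file format:
+
+~~~ shell
+# Hypothetical row filter: migrate only recent orders.
+cat > filter.json <<'EOF'
+{
+  "public.orders": "created_at >= '2024-01-01'"
+}
+EOF
+
+molt fetch \
+  --source "$SOURCE" \
+  --target "$TARGET" \
+  --mode data-load \
+  --filter-path filter.json \
+  --table-handling truncate-if-exists
+~~~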
+
+### MOLT Replicator
+
+[MOLT Replicator]({% link molt/molt-replicator.md %}) uses TypeScript **userscripts** to implement in-flight transformations for continuous replication:
+
+- **Capabilities**: Transform or normalize values, route rows to different tables, enrich data, filter rows, merge partitioned sources.
+- **Structure**: Userscripts export functions (`configureTargetTables`, `onRowUpsert`, `onRowDelete`) that process change data before commit to CockroachDB.
+- **Staging schema**: Replicator uses a CockroachDB staging schema to store replication state and buffered mutations (`--stagingSchema` and `--stagingCreateSchema`).
+
+## See also
+
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migration Considerations]({% link molt/migration-considerations.md %})
+- [Migration Granularity]({% link molt/migration-considerations-phases.md %})
+- [Continuous Replication]({% link molt/migration-considerations-replication.md %})
+- [MOLT Fetch]({% link molt/molt-fetch.md %})
+- [MOLT Replicator]({% link molt/molt-replicator.md %})
+- [MOLT Verify]({% link molt/molt-verify.md %})
diff --git a/src/current/molt/migration-considerations-validation.md b/src/current/molt/migration-considerations-validation.md
new file mode 100644
index 00000000000..0a890fada67
--- /dev/null
+++ b/src/current/molt/migration-considerations-validation.md
@@ -0,0 +1,161 @@
+---
+title: Validation Strategy
+summary: Learn when and how to validate data during migration to ensure correctness, completeness, and consistency.
+toc: true
+docs_area: migrate
+---
+
+Validation strategies are critical to ensuring a successful data migration. They're how you confirm that the right data has been moved correctly, is complete, and is usable in the new environment. A validation strategy is defined by **what** validations you want to run and **when** you want to run them.
+
+This page explains how to think about different validation strategies and how to use MOLT tooling to enable validation.
+
+
+
+## What to validate
+
+Running any of the following validations can help you feel confident that the data in the CockroachDB cluster matches the data in the migration source database.
+
+- **Row Count Validation**: Ensures the number of records matches between source and target.
+
+- **Checksum/Hash Validation**: Compares hashed values of rows or columns to detect changes or corruption.
+
+- **Data Sampling**: Randomly sample and manually compare rows between systems.
+
+- **Column-Level Comparison**: Validate individual field values across systems.
+
+- **Business Rule Validation**: Apply domain rules to validate logic or derived values.
+
+- **Boundary Testing**: Ensure edge-case data (nulls, max values, etc.) are correctly migrated.
+
+- **Referential Integrity**: Validate that relationships (foreign keys) are intact in the target.
+
+- **Data Type Validation**: Confirm that fields conform to expected types/formats.
+
+- **Null/Default Value Checks**: Validate expected default values or NULLs post-migration.
+
+- **ETL Process Validation**: Check logs, counts, or errors from migration tools.
+
+- **Automated Testing**: Use scripts or tools to compare results and flag mismatches.
+
+The rigor of your validations (the set of validations you perform) will depend on your organization's risk tolerance and the complexity of the migration.
+
+## When to validate
+
+A migration can be a long process, and depending on the choices made in designing a migration, it can be complex. If the dataset is small or the migration is low in complexity, it may be sufficient to simply run validations when you're ready to cut over application traffic to CockroachDB. However, there are several opportunities to validate your data in advance of cutover.
+
+It's often useful to find natural checkpoints in your migration flow to run validations, and to increase the rigor of those validations as you approach cutover.
+
+If performing a migration [in phases]({% link molt/migration-considerations-phases.md %}), the checkpoints below can be considered in the context of each individual phase. A rigorous validation approach might choose to run validations after each phase, while a more risk-tolerant approach might choose to run them after all of the phases have been migrated but before cutover.
+
+#### Pre-migration (design and dry-run)
+
+Validate converted schema and resolve type mapping issues. Run a dry-run migration on test data and begin query validation to catch behavioral differences early.
+
+#### After a bulk data load
+
+Run comprehensive validations to confirm schema and row-level parity before re-adding constraints and indexes that were dropped to accelerate load.
+
+#### During continuous replication
+
+If using [continuous replication]({% link molt/migration-considerations-replication.md %}), run validation periodically to ensure the target converges with the source. Use live-aware validation to reduce false positives from in-flight changes. This gives you confidence that replication is working correctly.
+
+#### Before cutover
+
+Once replication has drained, run final validation on the complete cutover scope and verify critical application queries.
+
+#### Post-cutover confidence checks
+
+After traffic moves to CockroachDB, run targeted validation on critical tables and application smoke tests to confirm steady state.
+
+## Decision framework
+
+Use these questions to help you determine what validations you want to perform, and when you want to perform them:
+
+**What is your data volume and validation timeline?**
+Larger datasets require more validation time. Consider concurrency tuning, phased validation, or off-peak runs to fit within windows.
+
+**Is the source database active during migration?**
+Active sources require live-aware validation and continuous monitoring during replication. Plan for replication drain before final validation.
+
+**Are there intentional schema or type differences?**
+Expect validation to flag type conversions and collation differences. Decide upfront whether to accept conditional successes or redesign to enable strict parity.
+
+**Which tables are most critical?**
+Prioritize critical data (compliance, transactions, authentication) for comprehensive validation. Use targeted validation loops on high-churn tables during replication.
+
+**Do you have unsupported column types?**
+For types that cannot be compared automatically (e.g., geospatial), plan alternative checks like row counts or application-level validation.
+
+## MOLT toolkit support
+
+[MOLT Verify]({% link molt/molt-verify.md %}) performs structural and row-level comparison between the source database and the CockroachDB cluster. MOLT Verify performs the following verifications to ensure data integrity during a migration:
+
+- Table Verification: Check that the structure of tables is the same in the source database and the target database.
+
+- Column Definition Verification: Check that column names, data types, constraints, nullability, and other attributes are the same in the source database and the target database.
+
+- Row Value Verification: Check that the actual data in the tables is the same in the source database and the target database.
+
+Other validations beyond those supported by MOLT Verify would need to be run by a third-party tool, but could be run in tandem with MOLT Verify.
+
+If performing a [phased migration]({% link molt/migration-considerations-phases.md %}), you can use MOLT Verify's `--schema-filter` and `--table-filter` flags to restrict validation to specific schemas or tables.
+
+If using [continuous replication]({% link molt/migration-considerations-replication.md %}), you can use MOLT Verify's `--continuous` and `--live` flags to enable continuous verification.
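+
+A sketch of a scoped, replication-aware verification run, assuming placeholder connection strings:
+
+~~~ shell
+# Re-verify one schema continuously while replication applies changes.
+molt verify \
+  --source "$SOURCE" \
+  --target "$TARGET" \
+  --schema-filter 'public' \
+  --continuous \
+  --live
+~~~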
+
+Check MOLT Verify's [known limitations]({% link molt/molt-verify.md %}#known-limitations) to ensure the tool's suitability for your validation strategy.
+
+
+
+
+
+
+## See also
+
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migration Considerations]({% link molt/migration-considerations.md %})
+- [Migration Granularity]({% link molt/migration-considerations-phases.md %})
+- [Continuous Replication]({% link molt/migration-considerations-replication.md %})
+- [Data Transformation Strategy]({% link molt/migration-considerations-transformation.md %})
+- [MOLT Verify]({% link molt/molt-verify.md %})
+- [MOLT Fetch]({% link molt/molt-fetch.md %})
+- [MOLT Replicator]({% link molt/molt-replicator.md %})
diff --git a/src/current/molt/migration-considerations.md b/src/current/molt/migration-considerations.md
new file mode 100644
index 00000000000..dcaa15853e1
--- /dev/null
+++ b/src/current/molt/migration-considerations.md
@@ -0,0 +1,73 @@
+---
+title: Migration Considerations
+summary: Learn what to consider when making high-level decisions about a migration.
+toc: true
+docs_area: migrate
+---
+
+When planning a migration to CockroachDB, you need to make several high-level decisions that will shape your migration approach. This page provides an overview of key migration variables and the factors that influence them. Each variable has multiple options, and the combination you choose will largely define your migration strategy.
+
+For migration sequencing and tool usage, see [Migration Overview]({% link molt/migration-overview.md %}). For planning guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}).
+
+## Migration variables
+
+Learn more about each migration variable by clicking the links in the left-hand column.
+
+| Variable | Description |
+|---|---|
+| [**Migration granularity**]({% link molt/migration-considerations-phases.md %}) | Do you want to migrate all of your data at once, or do you want to split your data up into phases and migrate one phase at a time? |
+| [**Continuous replication**]({% link molt/migration-considerations-replication.md %}) | After the initial data load (or after the initial load of each phase), do you want to stream further changes to that data from the source to the target? |
+| [**Data transformation strategy**]({% link molt/migration-considerations-transformation.md %}) | If there are discrepancies between the source and target schema, how will you define those data transformations, and when will those transformations occur? |
+| [**Validation strategy**]({% link molt/migration-considerations-validation.md %}) | How and when will you verify that the data in CockroachDB matches the source database? |
+| [**Rollback plan**]({% link molt/migration-considerations-rollback.md %}) | What approach will you use to roll back the migration if issues arise during or after cutover? |
+
+The combination of these variables largely defines your migration approach. While you'll typically choose one primary option for each variable, some migrations may involve a hybrid approach depending on your specific requirements.
+
+## Factors to consider
+
+When deciding on the options for each migration variable, consider the following business and technical requirements:
+
+### Permissible downtime
+
+How much downtime can your application tolerate during the migration? This is one of the most critical factors in determining your migration approach, and it may influence your choices for [migration granularity]({% link molt/migration-considerations-phases.md %}), [continuous replication]({% link molt/migration-considerations-replication.md %}), and [cutover strategy]({% link molt/migration-considerations-cutover.md %}).
+
+- **Planned downtime** is made known to your users in advance. It involves taking the application offline, conducting the migration, and bringing the application back online on CockroachDB.
+
+ To succeed, you should estimate the amount of downtime required to migrate your data, and ideally schedule the downtime outside of peak hours. Scheduling downtime is easiest if your application traffic is "periodic", meaning that it varies by the time of day, day of week, or day of month.
+
+ If you can support planned downtime, you may want to migrate your data all at once, and _without_ continuous replication.
+
+- **Minimal downtime** impacts as few customers as possible, ideally without impacting their regular usage. If your application is intentionally offline at certain times (e.g., outside business hours), you can migrate the data without users noticing. Alternatively, if your application's functionality is not time-sensitive (e.g., it sends batched messages or emails), you can queue requests while the system is offline and process them after completing the migration to CockroachDB.
+
+- **Near-zero downtime** is necessary for mission-critical applications. For these migrations, consider cutover strategies that keep applications online for as long as possible, and which utilize continuous replication.
+
+In addition to downtime duration, consider whether your application could support windows of **reduced functionality** in which some, but not all, application functionality is brought offline. For example, you can disable writes but not reads while you migrate the application data, and queue data to be written after completing the migration.
+
+### Migration timeframe and allowable complexity
+
+When do you need to complete the migration? How many team members can be allocated for this effort? How much complex orchestration can your team manage? These factors may influence your choices for [migration granularity]({% link molt/migration-considerations-phases.md %}), [continuous replication]({% link molt/migration-considerations-replication.md %}), and [cutover strategy]({% link molt/migration-considerations-cutover.md %}).
+
+- Migrations with a short timeline, or that cannot accommodate high complexity, may favor migrating data all at once, without continuous replication, relying on manual reconciliation in the event of migration failure.
+
+- Migrations with a long timeline, or that can accommodate complexity, may favor migrating data in phases. If the migration requires minimal downtime, it may also utilize continuous replication; if it has a low tolerance for risk, it may also enable failback.
+
+### Risk tolerance
+
+How much risk is your organization willing to accept during the migration? This may influence your choices for [migration granularity]({% link molt/migration-considerations-phases.md %}), [validation strategy]({% link molt/migration-considerations-validation.md %}), and [rollback plan]({% link molt/migration-considerations-rollback.md %}).
+
+- Risk-averse migrations should prefer phased migrations that limit the blast radius of any issues. Start with low-risk slices (e.g., a small cohort of tenants or a non-critical service), validate thoroughly, and progressively expand to higher-value workloads. These migrations may also prefer rollback plans that enable quick recovery in the event of migration issues.
+
+- For risk-tolerant migrations, it may be acceptable to migrate all of your data at once. Less stringent validation strategies and manual reconciliation in the event of a migration failure may also be acceptable.
+
+___
+
+The factors above are only a subset of what you'll want to consider when making decisions about your CockroachDB migration, alongside your specific business requirements and technical constraints. It's recommended that you document these decisions and the reasoning behind them as part of your [migration plan]({% link molt/migration-strategy.md %}#develop-a-migration-plan).
+
+## See also
+
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migration Strategy]({% link molt/migration-strategy.md %})
+- [Migration Granularity]({% link molt/migration-considerations-phases.md %})
+- [MOLT Fetch]({% link molt/molt-fetch.md %})
+- [MOLT Replicator]({% link molt/molt-replicator.md %})
+- [MOLT Verify]({% link molt/molt-verify.md %})
diff --git a/src/current/molt/migration-overview.md b/src/current/molt/migration-overview.md
index 163dc26d8aa..1bc8b86892b 100644
--- a/src/current/molt/migration-overview.md
+++ b/src/current/molt/migration-overview.md
@@ -5,34 +5,47 @@ toc: true
docs_area: migrate
---
-The MOLT (Migrate Off Legacy Technology) toolkit enables safe, minimal-downtime database migrations to CockroachDB. MOLT combines schema transformation, distributed data load, continuous replication, and row-level validation into a highly configurable workflow that adapts to diverse production environments.
+A migration involves transferring data from a pre-existing **source** database to a **target** CockroachDB cluster. Migrating data is a complex, multi-step process, and a data migration can take many different forms depending on your specific business and technical constraints.
+
+Cockroach Labs provides a MOLT (Migrate Off Legacy Technology) toolkit to aid in migrations.
This page provides an overview of the following:
- Overall [migration sequence](#migration-sequence)
- [MOLT tools](#molt-tools)
-- Supported [migration flows](#migration-flows)
## Migration sequence
-{{site.data.alerts.callout_success}}
-Before you begin the migration, review [Migration Strategy]({% link molt/migration-strategy.md %}).
-{{site.data.alerts.end}}
-
A migration to CockroachDB generally follows this sequence:
-
+
+
+1. **Assess and discover**: Inventory the source database, flag unsupported features, and make a migration plan.
+1. **Prepare the environment**: Configure networking, users and permissions, bucket locations, replication settings, and more.
+1. **Convert the source schema**: Generate CockroachDB-compatible [DDL]({% link {{ site.current_cloud_version }}/sql-statements.md %}#data-definition-statements). Apply the converted schema to the target database. Drop constraints and indexes to facilitate data load.
+1. **Load data into CockroachDB**: Bulk load the source data into the CockroachDB cluster.
+1. **Finalize target schema**: Recreate indexes or constraints on CockroachDB that you previously dropped to facilitate data load.
+1. **_(Optional)_ Replicate ongoing changes**: Keep CockroachDB in sync with the source. This may be necessary for migrations that minimize downtime.
+1. **Stop application traffic**: Limit user read/write traffic to the source database. _This begins application downtime._
+1. **Verify data consistency**: Confirm that the CockroachDB data is consistent with the source.
+1. **_(Optional)_ Enable failback**: Replicate data from the target back to the source, enabling a reversion to the source database in the event of migration failure.
+1. **Cut over application traffic**: Resume normal application use, with the CockroachDB cluster as the target database. _This ends application downtime._
-1. Prepare the source database: Configure users, permissions, and replication settings as needed.
-1. Convert the source schema: Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to generate CockroachDB-compatible [DDL]({% link {{ site.current_cloud_version }}/sql-statements.md %}#data-definition-statements). Apply the converted schema to the target database. Drop constraints and indexes to facilitate data load.
-1. Load data into CockroachDB: Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to bulk-ingest your source data.
-1. (Optional) Verify consistency before replication: Use [MOLT Verify]({% link molt/molt-verify.md %}) to confirm that the data loaded into CockroachDB is consistent with the source.
-1. Finalize target schema: Recreate indexes or constraints on CockroachDB that you previously dropped to facilitate data load.
-1. Replicate ongoing changes: Enable continuous replication with [MOLT Replicator]({% link molt/molt-replicator.md %}) to keep CockroachDB in sync with the source.
-1. Verify consistency before cutover: Use [MOLT Verify]({% link molt/molt-verify.md %}) to confirm that the CockroachDB data is consistent with the source.
-1. Cut over to CockroachDB: Redirect application traffic to the CockroachDB cluster.
+The MOLT toolkit enables safe, minimal-downtime database migrations to CockroachDB. MOLT combines schema transformation, distributed data load, continuous replication, and row-level validation into a highly configurable workflow that adapts to diverse production environments.
-For more details, refer to [Migration flows](#migration-flows).
+
+
+
+
## MOLT tools
@@ -87,7 +100,7 @@ The [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
[MOLT Fetch]({% link molt/molt-fetch.md %}) performs the initial data load to CockroachDB. It supports:
-- [Multiple migration flows](#migration-flows) via `IMPORT INTO` or `COPY FROM`.
+- Multiple migration flows via `IMPORT INTO` or `COPY FROM`.
- Data movement via [cloud storage, local file servers, or direct copy]({% link molt/molt-fetch.md %}#data-path).
- [Concurrent data export]({% link molt/molt-fetch.md %}#best-practices) from multiple source tables and shards.
- [Schema transformation rules]({% link molt/molt-fetch.md %}#transformations).
@@ -110,7 +123,11 @@ The [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- Column definition.
- Row-level data.
-## Migration flows
+
+
+## Migration variables
+
+You must decide how you want your migration to handle each of the following variables. These decisions will depend on your specific business and technical considerations. The MOLT toolkit supports any combination of these decisions for the [supported source databases](#molt-tools).
+
+### Migration granularity
-### Bulk load
+You may choose to migrate all of your data into a CockroachDB cluster at once. However, for larger data stores it's recommended that you migrate data in separate phases. This can help break the migration down into manageable slices, and it can help limit the effects of migration difficulties.
-For migrations that tolerate downtime, use MOLT Fetch in `data-load` mode to perform a one-time bulk load of source data into CockroachDB. Refer to [Bulk Load]({% link molt/migrate-bulk-load.md %}).
+### Continuous replication
-### Migrations with minimal downtime
+After data is migrated from the source into CockroachDB, you may choose to continue streaming changes to that data from the source to the target. This is important for migrations that aim to minimize application downtime, as they may require the source database to continue receiving writes until application traffic is fully cut over to CockroachDB.
-To minimize downtime during migration, use MOLT Fetch for initial data loading followed by MOLT Replicator for continuous replication. Instead of loading all data during a planned downtime window, you can run an initial load followed by continuous replication. Writes are paused only briefly to allow replication to drain before the final cutover. The duration of this pause depends on the volume of write traffic and the replication lag between the source and CockroachDB.
+### Data transformation strategy
-Refer to [Load and Replicate]({% link molt/migrate-load-replicate.md %}) for detailed instructions.
+If there are discrepancies between the source and target schemas, the rules that determine necessary data transformations need to be defined. These transformations can be applied in the source database, in flight, or in the target database.
-### Recovery and rollback strategies
+### Validation strategy
-If the migration is interrupted or cutover must be aborted, MOLT Replicator provides safe recovery options:
+There are several different ways of verifying that the data in the source and the target match one another. You must decide what validation checks you want to perform, and when in the migration process you want to perform them.
+
+### Rollback plan
+
+If issues arise before the migration is complete, you may decide to roll application traffic back entirely to the source database. You may therefore need a way of keeping the source database up to date with new writes to the target. This is especially important for risk-averse migrations that aim to minimize downtime.
+
+---
-- Resume a previously interrupted replication stream. Refer to [Resume Replication]({% link molt/migrate-resume-replication.md %}).
-- Use failback mode to reverse the migration, synchronizing changes from CockroachDB back to the original source. This ensures data consistency on the source so that you can retry the migration later. Refer to [Migration Failback]({% link molt/migrate-failback.md %}).
+[Learn more about the different migration variables]({% link molt/migration-considerations.md %}), how you should consider the different options for each variable, and how to use the MOLT toolkit for each variable.
## See also
diff --git a/src/current/molt/migration-strategy.md b/src/current/molt/migration-strategy.md
index eb2b46f65f5..b6e76d0c14f 100644
--- a/src/current/molt/migration-strategy.md
+++ b/src/current/molt/migration-strategy.md
@@ -10,10 +10,9 @@ A successful migration to CockroachDB requires planning for downtime, applicatio
This page outlines key decisions, infrastructure considerations, and best practices for a resilient and repeatable high-level migration strategy:
- [Develop a migration plan](#develop-a-migration-plan).
-- Evaluate your [downtime approach](#approach-to-downtime).
- [Size the target CockroachDB cluster](#capacity-planning).
- Implement [application changes](#application-changes) to address necessary [schema changes](#schema-design-best-practices), [transaction contention](#handling-transaction-contention), and [unimplemented features](#unimplemented-features-and-syntax-incompatibilities).
-- [Prepare for migration](#prepare-for-migration) by running a [pre-mortem](#run-a-migration-pre-mortem), setting up [metrics](#set-up-monitoring-and-alerting), [loading test data](#load-test-data), [validating application queries](#validate-queries) for correctness and performance, performing a [migration dry run](#perform-a-dry-run), and reviewing your [cutover strategy](#cutover-strategy).
+- [Prepare for migration](#prepare-for-migration) by running a [pre-mortem](#run-a-migration-pre-mortem), setting up [metrics](#set-up-monitoring-and-alerting), [loading test data](#load-test-data), [validating application queries](#validate-queries) for correctness and performance, performing a [migration dry run](#perform-a-dry-run), and reviewing your cutover strategy.
{% assign variable = value %}
{{site.data.alerts.callout_success}}
For help migrating to CockroachDB, contact our sales team.
@@ -31,20 +30,6 @@ Consider the following as you plan your migration:
Create a document summarizing the migration's purpose, technical details, and team members involved.
-## Approach to downtime
-
-It's important to fully [prepare the migration](#prepare-for-migration) in order to be certain that the migration can be completed successfully during the downtime window.
-
-- *Planned downtime* is made known to your users in advance. Once you have [prepared for the migration](#prepare-for-migration), you take the application offline, [conduct the migration]({% link molt/migration-overview.md %}), and bring the application back online on CockroachDB. To succeed, you should estimate the amount of downtime required to migrate your data, and ideally schedule the downtime outside of peak hours. Scheduling downtime is easiest if your application traffic is "periodic", meaning that it varies by the time of day, day of week, or day of month.
-
- Migrations with planned downtime are only recommended if you can complete the bulk data load (e.g., using the MOLT Fetch [`data-load` mode]({% link molt/molt-fetch.md %}#fetch-mode)) within the downtime window. Otherwise, you can [minimize downtime using continuous replication]({% link molt/migration-overview.md %}#migrations-with-minimal-downtime).
-
-- *Minimal downtime* impacts as few customers as possible, ideally without impacting their regular usage. If your application is intentionally offline at certain times (e.g., outside business hours), you can migrate the data without users noticing. Alternatively, if your application's functionality is not time-sensitive (e.g., it sends batched messages or emails), you can queue requests while the system is offline and process them after completing the migration to CockroachDB.
-
- MOLT enables [migrations with minimal downtime]({% link molt/migration-overview.md %}#migrations-with-minimal-downtime), using [MOLT Replicator]({% link molt/molt-replicator.md %}) for continuous replication of source changes to CockroachDB.
-
-- *Reduced functionality* takes some, but not all, application functionality offline. For example, you can disable writes but not reads while you migrate the application data, and queue data to be written after completing the migration.
-
## Capacity planning
To size the target CockroachDB cluster, consider your data volume and workload characteristics:
@@ -110,7 +95,7 @@ Based on the error budget you [defined in your migration plan](#develop-a-migrat
### Load test data
-It's useful to load test data into CockroachDB so that you can [test your application queries](#validate-queries). Refer to [Migration flows]({% link molt/migration-overview.md %}#migration-flows).
+It's useful to load test data into CockroachDB so that you can [test your application queries](#validate-queries).
MOLT Fetch [supports both `IMPORT INTO` and `COPY FROM`]({% link molt/molt-fetch.md %}#data-load-mode) for loading data into CockroachDB:
@@ -147,7 +132,7 @@ To further minimize potential surprises when you conduct the migration, practice
Performing a dry run is highly recommended. In addition to demonstrating how long the migration may take, a dry run also helps to ensure that team members understand what they need to do during the migration, and that changes to the application are coordinated.
-## Cutover strategy
+
## See also