diff --git a/src/current/_data/redirects.yml b/src/current/_data/redirects.yml
index 71807b980d5..c1d8bf3ffb9 100644
--- a/src/current/_data/redirects.yml
+++ b/src/current/_data/redirects.yml
@@ -286,26 +286,30 @@
- 'migrate-from-serverless-to-dedicated.md'
versions: ['cockroachcloud']
-- destination: migrate-to-cockroachdb.md?filters=mysql
- sources: ['migrate-from-mysql.md']
- versions: ['v25.1']
+- destination: molt/migrate-to-cockroachdb.md?filters=mysql
+ sources: [':version/migrate-from-mysql.md']
-- destination: migrate-to-cockroachdb.md
- sources: ['migrate-from-postgres.md']
- versions: ['v25.1']
+- destination: molt/migrate-to-cockroachdb.md
+ sources: [':version/migrate-from-postgres.md']
-- destination: migration-overview.md
+- destination: molt/migration-overview.md
+ sources: [':version/migration-overview.md']
+
+- destination: molt/migration-overview.md
+ sources: ['molt/molt-overview.md']
+
+- destination: molt/migration-overview.md
sources: ['import-data.md']
versions: ['v2.1', 'v19.1', 'v19.2', 'v20.1', 'v20.2', 'v21.1']
-- destination: molt/molt-fetch.md
- sources: [':version/molt-fetch.md']
-
-- destination: molt/molt-overview.md
+- destination: molt/migration-overview.md
sources:
- molt/live-migration-service.md
- :version/live-migration-service.md
+- destination: molt/molt-fetch.md
+ sources: [':version/molt-fetch.md']
+
- destination: molt/molt-verify.md
sources: [':version/molt-verify.md']
diff --git a/src/current/_includes/molt/fetch-data-load-modes.md b/src/current/_includes/molt/fetch-data-load-modes.md
index 92fec1dd184..0ea7d273307 100644
--- a/src/current/_includes/molt/fetch-data-load-modes.md
+++ b/src/current/_includes/molt/fetch-data-load-modes.md
@@ -1,5 +1,5 @@
-The following example migrates a single `employees` table. The table is exported to an Amazon S3 bucket and imported to CockroachDB using the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement, which is the [default MOLT Fetch mode]({% link molt/molt-fetch.md %}#data-movement).
+The following example migrates a single `employees` table. The table is exported to an Amazon S3 bucket and imported to CockroachDB using the [`IMPORT INTO`]({% link {{ site.current_cloud_version }}/import-into.md %}) statement, which is the [default MOLT Fetch mode]({% link molt/molt-fetch.md %}#data-movement).
-- `IMPORT INTO` [takes the target CockroachDB tables offline]({% link {{ page.version.version }}/import-into.md %}#considerations) to maximize throughput. The tables come back online once the [import job]({% link {{site.current_cloud_version}}/import-into.md %}#view-and-control-import-jobs) completes successfully. If you need to keep the target tables online, add the `--use-copy` flag to export data with [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}) instead. For more details, refer to [Data movement]({% link molt/molt-fetch.md %}#data-movement).
+- `IMPORT INTO` [takes the target CockroachDB tables offline]({% link {{ site.current_cloud_version }}/import-into.md %}#considerations) to maximize throughput. The tables come back online once the [import job]({% link {{site.current_cloud_version}}/import-into.md %}#view-and-control-import-jobs) completes successfully. If you need to keep the target tables online, add the `--use-copy` flag to export data with [`COPY FROM`]({% link {{ site.current_cloud_version }}/copy.md %}) instead. For more details, refer to [Data movement]({% link molt/molt-fetch.md %}#data-movement).
- If you cannot move data to a public cloud, specify `--direct-copy` instead of `--bucket-path` in the `molt fetch` command. This flag instructs MOLT Fetch to use `COPY FROM` to move the source data directly to CockroachDB without an intermediate store. For more information, refer to [Direct copy]({% link molt/molt-fetch.md %}#direct-copy).
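+
+For example, a minimal `molt fetch` invocation in the default `IMPORT INTO` mode might look like the following sketch (the connection strings, table name, and bucket path are illustrative placeholders; substitute your own values):
+
+{% include_cached copy-clipboard.html %}
+~~~ shell
+molt fetch \
+--source 'postgres://migration_user:password@source-host:5432/defaultdb' \
+--target 'postgres://root@crdb-host:26257/defaultdb?sslmode=verify-full' \
+--table-filter 'employees' \
+--bucket-path 's3://migration-bucket'
+~~~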
\ No newline at end of file
diff --git a/src/current/_includes/molt/fetch-replication-output.md b/src/current/_includes/molt/fetch-replication-output.md
index f6f9fa6801b..8be689becec 100644
--- a/src/current/_includes/molt/fetch-replication-output.md
+++ b/src/current/_includes/molt/fetch-replication-output.md
@@ -12,7 +12,7 @@
{"level":"info","time":"2025-02-10T14:28:13-05:00","message":"staging database name: _replicator_1739215693817700000"}
~~~
- The staging schema provides a replication marker for streaming changes. You will need the staging schema name in case replication fails and must be [resumed]({% link molt/molt-fetch.md %}#resume-replication), or [failback to the source database]({% link {{ page.version.version }}/migrate-failback.md %}) is performed.
+	The staging schema provides a replication marker for streaming changes. You will need the staging schema name if replication fails and must be [resumed]({% link molt/molt-fetch.md %}#resume-replication), or if you perform a [failback to the source database]({% link molt/migrate-failback.md %}).
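+
+	For example, a resumed run might reuse the staging schema name from the log output above (a sketch; the exact flags are described in [Resume replication]({% link molt/molt-fetch.md %}#resume-replication), and `$SOURCE` and `$TARGET` are placeholder connection strings):
+
+	{% include_cached copy-clipboard.html %}
+	~~~ shell
+	molt fetch \
+	--source $SOURCE \
+	--target $TARGET \
+	--table-filter 'employees' \
+	--mode replication-only \
+	--replicator-flags '--stagingSchema _replicator_1739215693817700000'
+	~~~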
`upserted rows` log messages indicate that changes were replicated to CockroachDB:
diff --git a/src/current/_includes/molt/migration-modify-target-schema.md b/src/current/_includes/molt/migration-modify-target-schema.md
index 6ddc7e24965..001422e018c 100644
--- a/src/current/_includes/molt/migration-modify-target-schema.md
+++ b/src/current/_includes/molt/migration-modify-target-schema.md
@@ -4,6 +4,6 @@ If you need the best possible [replication](#step-6-replicate-changes-to-cockroa
{{site.data.alerts.end}}
{% endif %}
-You can now add any constraints or indexes that you previously [removed from the CockroachDB schema](#step-3-load-data-into-cockroachdb).
+You can now add any constraints or indexes that you previously [removed from the CockroachDB schema](#step-3-load-data-into-cockroachdb) to speed up the initial data load. If you loaded data with the `--table-handling drop-on-target-and-recreate` option, you **must** manually recreate all indexes and constraints other than [`PRIMARY KEY`]({% link {{ site.current_cloud_version }}/primary-key.md %}) and [`NOT NULL`]({% link {{ site.current_cloud_version }}/not-null.md %}).
-For the appropriate SQL syntax, refer to [`ALTER TABLE ... ADD CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#add-constraint) and [`CREATE INDEX`]({% link {{ page.version.version }}/create-index.md %}). Review the [best practices for creating secondary indexes]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices) on CockroachDB.
\ No newline at end of file
+For the appropriate SQL syntax, refer to [`ALTER TABLE ... ADD CONSTRAINT`]({% link {{ site.current_cloud_version }}/alter-table.md %}#add-constraint) and [`CREATE INDEX`]({% link {{ site.current_cloud_version }}/create-index.md %}). Review the [best practices for creating secondary indexes]({% link {{ site.current_cloud_version }}/schema-design-indexes.md %}#best-practices) on CockroachDB.
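+
+For instance, the following sketch restores a foreign key and a secondary index (the table, column, and index names are hypothetical placeholders):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+ALTER TABLE employees ADD CONSTRAINT fk_dept FOREIGN KEY (dept_id) REFERENCES departments (id);
+CREATE INDEX ON employees (last_name, first_name);
+~~~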
\ No newline at end of file
diff --git a/src/current/_includes/molt/migration-prepare-schema.md b/src/current/_includes/molt/migration-prepare-schema.md
index 8ada64f4d16..1b7e7f083c1 100644
--- a/src/current/_includes/molt/migration-prepare-schema.md
+++ b/src/current/_includes/molt/migration-prepare-schema.md
@@ -1,24 +1,26 @@
{{site.data.alerts.callout_info}}
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax. For syntax differences, refer to [Features that differ from PostgreSQL]({% link {{ page.version.version }}/postgresql-compatibility.md %}#features-that-differ-from-postgresql).
+CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax. For syntax differences, refer to [Features that differ from PostgreSQL]({% link {{ site.current_cloud_version }}/postgresql-compatibility.md %}#features-that-differ-from-postgresql).
{{site.data.alerts.end}}
+{% include molt/migration-schema-design-practices.md %}
+
1. Convert your database schema to an equivalent CockroachDB schema.
- The simplest method is to use the [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert your schema line-by-line. The tool accepts `.sql` files and will convert the syntax, identify [unimplemented features and syntax incompatibilities]({% link {{ page.version.version }}/migration-overview.md %}#unimplemented-features-and-syntax-incompatibilities) in the schema, and suggest edits according to [CockroachDB best practices]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ The simplest method is to use the [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert your schema line-by-line. The tool accepts `.sql` files and will convert the syntax, identify [unimplemented features and syntax incompatibilities]({% link molt/migration-strategy.md %}#unimplemented-features-and-syntax-incompatibilities) in the schema, and suggest edits according to [CockroachDB best practices]({% link molt/migration-strategy.md %}#schema-design-best-practices).
The Schema Conversion Tool requires a free [CockroachDB {{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}). If this is not an option for you, do one of the following:
- Enable automatic schema creation when [loading data](#step-3-load-data-into-cockroachdb) with MOLT Fetch. The [`--table-handling drop-on-target-and-recreate`]({% link molt/molt-fetch.md %}#target-table-handling) option creates one-to-one [type mappings]({% link molt/molt-fetch.md %}#type-mapping) between the source database and CockroachDB and works well when the source schema is well-defined.
- - Manually convert the schema according to the [schema design best practices]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices){% comment %}and data type mappings{% endcomment %}. You can also [export a partially converted schema]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool to finish the conversion manually.
+ - Manually convert the schema according to the [schema design best practices]({% link molt/migration-strategy.md %}#schema-design-best-practices){% comment %}and data type mappings{% endcomment %}. You can also [export a partially converted schema]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool to finish the conversion manually.
For additional help, contact your account team.
1. Import the converted schema to a CockroachDB cluster.
- When migrating to CockroachDB {{ site.data.products.cloud }}, use the Schema Conversion Tool to [migrate the converted schema to a new {{ site.data.products.cloud }} database]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema).
- - When migrating to a {{ site.data.products.core }} CockroachDB cluster, pipe the [data definition language (DDL)]({% link {{ page.version.version }}/sql-statements.md %}#data-definition-statements) directly into [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}). You can [export a converted schema file]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool.
+    - When migrating to a {{ site.data.products.core }} CockroachDB cluster, pipe the [data definition language (DDL)]({% link {{ site.current_cloud_version }}/sql-statements.md %}#data-definition-statements) directly into [`cockroach sql`]({% link {{ site.current_cloud_version }}/cockroach-sql.md %}), as shown in the example after these steps. You can [export a converted schema file]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool.
{{site.data.alerts.callout_success}}
- For the fastest performance, you can use a [local, single-node CockroachDB cluster]({% link {{ page.version.version }}/cockroach-start-single-node.md %}#start-a-single-node-cluster) to convert your schema.
+ For the fastest performance, you can use a [local, single-node CockroachDB cluster]({% link {{ site.current_cloud_version }}/cockroach-start-single-node.md %}#start-a-single-node-cluster) to convert your schema.
{{site.data.alerts.end}}
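+
+For example, a minimal sketch of piping an exported schema file into a locally running cluster (`schema.sql` and the connection URL are placeholders for your own file and deployment):
+
+{% include_cached copy-clipboard.html %}
+~~~ shell
+cockroach sql --url 'postgres://root@localhost:26257/defaultdb?sslmode=disable' < schema.sql
+~~~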
@@ -28,15 +30,15 @@ When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.m
Strings are case-insensitive in MySQL and case-sensitive in CockroachDB. You may need to edit your MySQL data to get the results you expect from CockroachDB. For example, you may have been doing string comparisons in MySQL that will need to be changed to work with CockroachDB.
-For more information about the case sensitivity of strings in MySQL, refer to [Case Sensitivity in String Searches](https://dev.mysql.com/doc/refman/8.0/en/case-sensitivity.html) from the MySQL documentation. For more information about CockroachDB strings, refer to [`STRING`]({% link {{ page.version.version }}/string.md %}).
+For more information about the case sensitivity of strings in MySQL, refer to [Case Sensitivity in String Searches](https://dev.mysql.com/doc/refman/8.0/en/case-sensitivity.html) from the MySQL documentation. For more information about CockroachDB strings, refer to [`STRING`]({% link {{ site.current_cloud_version }}/string.md %}).
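+
+A brief illustration (hypothetical table and values): an equality comparison that matches under MySQL's default case-insensitive collation matches only the exact case in CockroachDB, so you may need to normalize case explicitly:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+-- Matches 'Alice' under MySQL's default collation, but not in CockroachDB.
+SELECT * FROM users WHERE name = 'alice';
+
+-- Case-insensitive comparison in CockroachDB: normalize both sides.
+SELECT * FROM users WHERE lower(name) = lower('Alice');
+~~~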
#### Identifier case sensitivity
-Identifiers are case-sensitive in MySQL and [case-insensitive in CockroachDB]({% link {{ page.version.version }}/keywords-and-identifiers.md %}#identifiers). When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either keep case sensitivity by enclosing identifiers in double quotes, or make identifiers case-insensitive by converting them to lowercase.
+Identifiers are case-sensitive in MySQL and [case-insensitive in CockroachDB]({% link {{ site.current_cloud_version }}/keywords-and-identifiers.md %}#identifiers). When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either keep case sensitivity by enclosing identifiers in double quotes, or make identifiers case-insensitive by converting them to lowercase.
#### `AUTO_INCREMENT` attribute
-The MySQL [`AUTO_INCREMENT`](https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html) attribute, which creates sequential column values, is not supported in CockroachDB. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), columns with `AUTO_INCREMENT` can be converted to use [sequences]({% link {{ page.version.version }}/create-sequence.md %}), `UUID` values with [`gen_random_uuid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions), or unique `INT8` values using [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, refer to [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
+The MySQL [`AUTO_INCREMENT`](https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html) attribute, which creates sequential column values, is not supported in CockroachDB. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), columns with `AUTO_INCREMENT` can be converted to use [sequences]({% link {{ site.current_cloud_version }}/create-sequence.md %}), `UUID` values with [`gen_random_uuid()`]({% link {{ site.current_cloud_version }}/functions-and-operators.md %}#id-generation-functions), or unique `INT8` values using [`unique_rowid()`]({% link {{ site.current_cloud_version }}/functions-and-operators.md %}#id-generation-functions). Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, refer to [Unique ID best practices]({% link {{ site.current_cloud_version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
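+
+For example, a sketch of a converted table that replaces `AUTO_INCREMENT` with a generated `UUID` primary key (the table and column names are illustrative):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE TABLE orders (
+  id UUID NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY,
+  total DECIMAL(10,2)
+);
+~~~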
{{site.data.alerts.callout_info}}
Changing a column type during schema conversion will cause [MOLT Verify]({% link molt/molt-verify.md %}) to identify a type mismatch during data validation. This is expected behavior.
@@ -44,19 +46,19 @@ Changing a column type during schema conversion will cause [MOLT Verify]({% link
#### `ENUM` type
-MySQL `ENUM` types are defined in table columns. On CockroachDB, [`ENUM`]({% link {{ page.version.version }}/enum.md %}) is a standalone type. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either deduplicate the `ENUM` definitions or create a separate type for each column.
+MySQL `ENUM` types are defined in table columns. On CockroachDB, [`ENUM`]({% link {{ site.current_cloud_version }}/enum.md %}) is a standalone type. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either deduplicate the `ENUM` definitions or create a separate type for each column.
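+
+For example, a deduplicated standalone `ENUM` type shared across table columns might look like this sketch (the type and table names are illustrative):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE TYPE status AS ENUM ('open', 'in_progress', 'closed');
+CREATE TABLE tickets (
+  id UUID NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY,
+  state status
+);
+~~~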
#### `TINYINT` type
-`TINYINT` data types are not supported in CockroachDB. The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) automatically converts `TINYINT` columns to [`INT2`]({% link {{ page.version.version }}/int.md %}) (`SMALLINT`).
+`TINYINT` data types are not supported in CockroachDB. The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) automatically converts `TINYINT` columns to [`INT2`]({% link {{ site.current_cloud_version }}/int.md %}) (`SMALLINT`).
#### Geospatial types
-MySQL geometry types are not converted to CockroachDB [geospatial types]({% link {{ page.version.version }}/spatial-data-overview.md %}#spatial-objects) by the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql). They should be manually converted to the corresponding types in CockroachDB.
+MySQL geometry types are not converted to CockroachDB [geospatial types]({% link {{ site.current_cloud_version }}/spatial-data-overview.md %}#spatial-objects) by the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql). They should be manually converted to the corresponding types in CockroachDB.
#### `FIELD` function
-The MYSQL `FIELD` function is not supported in CockroachDB. Instead, you can use the [`array_position`]({% link {{ page.version.version }}/functions-and-operators.md %}#array-functions) function, which returns the index of the first occurrence of element in the array.
+The MySQL `FIELD` function is not supported in CockroachDB. Instead, you can use the [`array_position`]({% link {{ site.current_cloud_version }}/functions-and-operators.md %}#array-functions) function, which returns the index of the first occurrence of an element in an array.
Example usage:
@@ -72,7 +74,7 @@ SELECT array_position(ARRAY[4,1,3,2],1);
(1 row)
~~~
-While MySQL returns 0 when the element is not found, CockroachDB returns `NULL`. So if you are using the `ORDER BY` clause in a statement with the `array_position` function, the caveat is that sort is applied even when the element is not found. As a workaround, you can use the [`COALESCE`]({% link {{ page.version.version }}/functions-and-operators.md %}#conditional-and-function-like-operators) operator.
+While MySQL returns 0 when the element is not found, CockroachDB returns `NULL`. As a result, if you use the `array_position` function in a statement with an `ORDER BY` clause, the sort is applied even when the element is not found. As a workaround, you can use the [`COALESCE`]({% link {{ site.current_cloud_version }}/functions-and-operators.md %}#conditional-and-function-like-operators) operator.
{% include_cached copy-clipboard.html %}
~~~ sql
diff --git a/src/current/_includes/molt/migration-schema-design-practices.md b/src/current/_includes/molt/migration-schema-design-practices.md
new file mode 100644
index 00000000000..cb92b826e95
--- /dev/null
+++ b/src/current/_includes/molt/migration-schema-design-practices.md
@@ -0,0 +1,7 @@
+Follow these recommendations when converting your schema for compatibility with CockroachDB.
+
+- Define an explicit primary key on every table. For more information, refer to [Primary key best practices]({% link {{ site.current_cloud_version }}/schema-design-table.md %}#primary-key-best-practices).
+
+- Do **not** use a sequence to define a primary key column. Instead, Cockroach Labs recommends that you use [multi-column primary keys]({% link {{ site.current_cloud_version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or [auto-generating unique IDs]({% link {{ site.current_cloud_version }}/performance-best-practices-overview.md %}#use-functions-to-generate-unique-ids) for primary key columns.
+
+- By default on CockroachDB, `INT` is an alias for `INT8`, which creates 64-bit signed integers. PostgreSQL and MySQL default to 32-bit integers. Depending on your source database or application requirements, you may need to change the integer size to `4`. For more information, refer to [Considerations for 64-bit signed integers]({% link {{ site.current_cloud_version }}/int.md %}#considerations-for-64-bit-signed-integers).
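+
+For example, one way to match the 32-bit default is to set the `default_int_size` session variable so that `INT` resolves to `INT4` (a sketch; confirm the correct size against your application's requirements):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SET default_int_size = 4;
+~~~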
\ No newline at end of file
diff --git a/src/current/_includes/releases/v23.1/v23.1.0-alpha.3.md b/src/current/_includes/releases/v23.1/v23.1.0-alpha.3.md
index 1499be57ff4..c4f44c35165 100644
--- a/src/current/_includes/releases/v23.1/v23.1.0-alpha.3.md
+++ b/src/current/_includes/releases/v23.1/v23.1.0-alpha.3.md
@@ -19,7 +19,7 @@ Release Date: February 21, 2023
SQL language changes
- Added [latency information]({% link cockroachcloud/statements-page.md %}#statement-statistics) in seconds to the statement statistics on `crdb_internal.statement_statistics`, `system.statement_statistics`, and `crdb_internal.cluster_statement_statistics`, with information about: `min`, `max`, `p50`, `p90`, and `p99`. Also added the columns: `latency_seconds_min`, `latency_seconds_max`, `latency_seconds_p50`, `latency_seconds_p90`, and `latency_seconds_p99` to `crdb_internal.node_statement_statistics`.[#96396][#96396]
-- Deprecated the `PGDUMP` and `MYSQLDUMP` formats for [`IMPORT`]({% link v23.1/import.md %}). They are still present, but will be removed in a future release. See the [Migration Overview]({% link v23.1/migration-overview.md %}) page for alternatives. [#96386][#96386]
+- Deprecated the `PGDUMP` and `MYSQLDUMP` formats for [`IMPORT`]({% link v23.1/import.md %}). They are still present, but will be removed in a future release. See the [Migration Overview]({% link molt/migration-overview.md %}) page for alternatives. [#96386][#96386]
- [`COPY ... FROM ... QUOTE '"'`]({% link v23.1/copy.md %}) will no longer error. [#96572][#96572]
- Added `last_error_code` column to the `crdb_internal.node_statement_statistics` table. Added `last_error_code` field to the `statistics` JSON blob in the `crdb_internal.statement_statistics` and `system.statement_statistics` tables. [#96436][#96436]
- Added support for expressions of the form `COLLATE "default"`, `COLLATE "C"`, and `COLLATE "POSIX"`. Since the default [collation]({% link v23.1/collate.md %}) cannot be changed currently, these expressions are all equivalent. The expressions are evaluated by treating the input as a normal string, and ignoring the collation. This means that comparisons between strings and collated strings that use `"default"`, `"C"`, or `"POSIX"` are now supported. Creating a column with the `"C"` or `"POSIX"` collations is still not supported. [#96828][#96828]
diff --git a/src/current/_includes/releases/v23.1/v23.1.0.md b/src/current/_includes/releases/v23.1/v23.1.0.md
index d171aec3b9c..0abdc0a6a19 100644
--- a/src/current/_includes/releases/v23.1/v23.1.0.md
+++ b/src/current/_includes/releases/v23.1/v23.1.0.md
@@ -481,7 +481,7 @@ The following changes should be reviewed prior to upgrading. Default cluster set
- The `SELECT` privilege on a set of tables allows a user to run core changefeeds against them.
- The `CHANGEFEED` privilege on a set of tables allows a user to run enterprise changefeeds on them, and also manage the underlying changefeed job (ie. view, pause, cancel, and resume the job).
Notably, a new [cluster setting]({% link v23.1/cluster-settings.md %}) `changefeed.permissions.require_external_connection_sink.enabled` is added and set to `false` by default. Enabling this setting restricts users with `CHANGEFEED` on a set of tables to create enterprise changefeeds into external connections only. To use a given external connection, a user typically needs the `USAGE` privilege on it. Note that `ALTER DEFAULT PRIVILEGES` can be used with both the `CHANGEFEED` and `SELECT` privileges to assign coarse-grained permissions (i.e., assign permissions to all tables in a schema rather than manually assign them for each table). [#94796][#94796]
-- Deprecated the `PGDUMP` and `MYSQLDUMP` formats for [`IMPORT`]({% link v23.1/import.md %}). They are still present, but will be removed in a future release. See the [Migration Overview]({% link v23.1/migration-overview.md %}) page for alternatives. [#96386][#96386]
+- Deprecated the `PGDUMP` and `MYSQLDUMP` formats for [`IMPORT`]({% link v23.1/import.md %}). They are still present, but will be removed in a future release. See the [Migration Overview]({% link molt/migration-overview.md %}) page for alternatives. [#96386][#96386]
Known limitations
@@ -495,7 +495,7 @@ Cockroach University | [Introduction to Distributed SQL and CockroachDB](https:/
Cockroach University | [Practical First Steps with CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+practical-first-steps-with-crdb+self-paced/about) | This course will give you the tools you need to get started with CockroachDB. During the course, you will learn how to spin up a cluster, use the Admin UI to monitor cluster activity, and use SQL shell to solve a set of hands-on exercises.
Cockroach University | [Building a Highly Resilient Multi-region Database using CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+intro-to-resilience-in-multi-region+self-paced/about) | This course is part of a series introducing solutions to running low-latency, highly resilient applications for data-intensive workloads on CockroachDB. In this course we focus on surviving large-scale infrastructure failures like losing an entire cloud region without losing data during recovery. We’ll show you how to use CockroachDB survival goals in a multi-region cluster to implement a highly resilient database that survives node or network failures across multiple regions with zero data loss.
Cockroach University | [Introduction to Serverless Databases and CockroachDB Serverless](https://university.cockroachlabs.com/courses/course-v1:crl+intro-to-serverless+self-paced/about) | This course introduces the core concepts behind serverless databases and gives you the tools you need to get started with CockroachDB Serverless. You will learn how serverless databases remove the burden of configuring, sizing, provisioning, securing, maintaining and dynamically scaling your database based on load. This means you simply pay for the serverless database resources you use.
-Docs | [Migration Overview]({% link v23.1/migration-overview.md %}) | This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
+Docs | [Migration Overview]({% link molt/migration-overview.md %}) | This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
Docs | [Developer Guide Overview]({% link v23.1/developer-guide-overview.md %}) | This page provides an overview of resources available to developers building applications on CockroachDB.
Docs | [Security Overview](https://www.cockroachlabs.com/docs/v23.1/security-reference/security-overview) | The 23.1 release encapsulates a number of security milestones. See the security overview for a summary.
Docs | [Architecture Overview](https://www.cockroachlabs.com/docs/v23.1/architecture/overview) | This page provides a starting point for understanding the architecture and design choices that enable CockroachDB's scalability and consistency capabilities.
diff --git a/src/current/_includes/releases/v23.2/v23.2.0.md b/src/current/_includes/releases/v23.2/v23.2.0.md
index 3e454b126a4..ab40aea5ac6 100644
--- a/src/current/_includes/releases/v23.2/v23.2.0.md
+++ b/src/current/_includes/releases/v23.2/v23.2.0.md
@@ -384,7 +384,7 @@ Cockroach University | [Introduction to Distributed SQL and CockroachDB](https:/
Cockroach University | [Practical First Steps with CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+practical-first-steps-with-crdb+self-paced/about) | This course will give you the tools you need to get started with CockroachDB. During the course, you will learn how to spin up a cluster, use the Admin UI to monitor cluster activity, and use SQL shell to solve a set of hands-on exercises.
Cockroach University | [Enterprise Application Development with CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+client-side-txn-handling+self-paced/about) | This course is the first in a series designed to equip you with best practices for mastering application-level (client-side) transaction management in CockroachDB. We'll dive deep on common differences between CockroachDB and legacy SQL databases and help you sidestep challenges you might encounter when migrating to CockroachDB from Oracle, PostgreSQL, and MySQL.
Cockroach University | [Building a Highly Resilient Multi-region Database using CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+intro-to-resilience-in-multi-region+self-paced/about) | This course is part of a series introducing solutions to running low-latency, highly resilient applications for data-intensive workloads on CockroachDB. In this course we focus on surviving large-scale infrastructure failures like losing an entire cloud region without losing data during recovery. We’ll show you how to use CockroachDB survival goals in a multi-region cluster to implement a highly resilient database that survives node or network failures across multiple regions with zero data loss.
-Docs | [Migration Overview]({% link v23.2/migration-overview.md %}) | This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
+Docs | [Migration Overview]({% link molt/migration-overview.md %}) | This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
Docs | [Architecture Overview](https://www.cockroachlabs.com/docs/v23.2/architecture/overview) | This page provides a starting point for understanding the architecture and design choices that enable CockroachDB's scalability and consistency capabilities.
Docs | [SQL Feature Support]({% link v23.2/sql-feature-support.md %}) | The page summarizes the standard SQL features CockroachDB supports as well as common extensions to the standard.
Docs | [Change Data Capture Overview]({% link v23.2/change-data-capture-overview.md %}) | This page summarizes CockroachDB's data streaming capabilities. Change data capture (CDC) provides efficient, distributed, row-level changefeeds into a configurable sink for downstream processing such as reporting, caching, or full-text indexing.
diff --git a/src/current/_includes/releases/v24.1/v24.1.0.md b/src/current/_includes/releases/v24.1/v24.1.0.md
index 83f743894ae..d9fa47fa190 100644
--- a/src/current/_includes/releases/v24.1/v24.1.0.md
+++ b/src/current/_includes/releases/v24.1/v24.1.0.md
@@ -525,7 +525,7 @@ Cockroach University | [Introduction to Distributed SQL and CockroachDB](https:/
Cockroach University | [Practical First Steps with CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+practical-first-steps-with-crdb+self-paced/about) | This course will give you the tools you need to get started with CockroachDB. During the course, you will learn how to spin up a cluster, use the Admin UI to monitor cluster activity, and use SQL shell to solve a set of hands-on exercises.
Cockroach University | [Enterprise Application Development with CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+client-side-txn-handling+self-paced/about) | This course is the first in a series designed to equip you with best practices for mastering application-level (client-side) transaction management in CockroachDB. We'll dive deep on common differences between CockroachDB and legacy SQL databases and help you sidestep challenges you might encounter when migrating to CockroachDB from Oracle, PostgreSQL, and MySQL.
Cockroach University | [Building a Highly Resilient Multi-region Database using CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+intro-to-resilience-in-multi-region+self-paced/about) | This course is part of a series introducing solutions to running low-latency, highly resilient applications for data-intensive workloads on CockroachDB. In this course we focus on surviving large-scale infrastructure failures like losing an entire cloud region without losing data during recovery. We’ll show you how to use CockroachDB survival goals in a multi-region cluster to implement a highly resilient database that survives node or network failures across multiple regions with zero data loss.
-Docs | [Migration Overview]({% link v24.1/migration-overview.md %}) | This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
+Docs | [Migration Overview]({% link molt/migration-overview.md %}) | This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
Docs | [Architecture Overview](https://www.cockroachlabs.com/docs/v24.1/architecture/overview) | This page provides a starting point for understanding the architecture and design choices that enable CockroachDB's scalability and consistency capabilities.
Docs | [SQL Feature Support]({% link v24.1/sql-feature-support.md %}) | The page summarizes the standard SQL features CockroachDB supports as well as common extensions to the standard.
Docs | [Change Data Capture Overview]({% link v24.1/change-data-capture-overview.md %}) | This page summarizes CockroachDB's data streaming capabilities. Change data capture (CDC) provides efficient, distributed, row-level changefeeds into a configurable sink for downstream processing such as reporting, caching, or full-text indexing.
diff --git a/src/current/_includes/releases/v24.2/v24.2.0.md b/src/current/_includes/releases/v24.2/v24.2.0.md
index 182a84f13d1..9a8262c6127 100644
--- a/src/current/_includes/releases/v24.2/v24.2.0.md
+++ b/src/current/_includes/releases/v24.2/v24.2.0.md
@@ -137,7 +137,7 @@ Cockroach University | [Introduction to Distributed SQL and CockroachDB](https:/
Cockroach University | [Practical First Steps with CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+practical-first-steps-with-crdb+self-paced/about) | This course will give you the tools you need to get started with CockroachDB. During the course, you will learn how to spin up a cluster, use the Admin UI to monitor cluster activity, and use SQL shell to solve a set of hands-on exercises.
Cockroach University | [Enterprise Application Development with CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+client-side-txn-handling+self-paced/about) | This course is the first in a series designed to equip you with best practices for mastering application-level (client-side) transaction management in CockroachDB. We'll dive deep on common differences between CockroachDB and legacy SQL databases and help you sidestep challenges you might encounter when migrating to CockroachDB from Oracle, PostgreSQL, and MySQL.
Cockroach University | [Building a Highly Resilient Multi-region Database using CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+intro-to-resilience-in-multi-region+self-paced/about) | This course is part of a series introducing solutions to running low-latency, highly resilient applications for data-intensive workloads on CockroachDB. In this course we focus on surviving large-scale infrastructure failures like losing an entire cloud region without losing data during recovery. We'll show you how to use CockroachDB survival goals in a multi-region cluster to implement a highly resilient database that survives node or network failures across multiple regions with zero data loss.
-Docs | [Migration Overview]({% link v24.2/migration-overview.md %}) | This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
+Docs | [Migration Overview]({% link molt/migration-overview.md %}) | This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
Docs | [Architecture Overview](https://www.cockroachlabs.com/docs/v24.2/architecture/overview) | This page provides a starting point for understanding the architecture and design choices that enable CockroachDB's scalability and consistency capabilities.
Docs | [SQL Feature Support]({% link v24.2/sql-feature-support.md %}) | The page summarizes the standard SQL features CockroachDB supports as well as common extensions to the standard.
Docs | [Change Data Capture Overview]({% link v24.2/change-data-capture-overview.md %}) | This page summarizes CockroachDB's data streaming capabilities. Change data capture (CDC) provides efficient, distributed, row-level changefeeds into a configurable sink for downstream processing such as reporting, caching, or full-text indexing.
diff --git a/src/current/_includes/releases/v24.3/v24.3.0.md b/src/current/_includes/releases/v24.3/v24.3.0.md
index 40e776d759c..863ffe8625e 100644
--- a/src/current/_includes/releases/v24.3/v24.3.0.md
+++ b/src/current/_includes/releases/v24.3/v24.3.0.md
@@ -187,7 +187,7 @@ Cockroach University | [Introduction to Distributed SQL and CockroachDB](https:/
Cockroach University | [Practical First Steps with CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+practical-first-steps-with-crdb+self-paced/about) | This course will give you the tools you need to get started with CockroachDB. During the course, you will learn how to spin up a cluster, use the Admin UI to monitor cluster activity, and use SQL shell to solve a set of hands-on exercises.
Cockroach University | [Enterprise Application Development with CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+client-side-txn-handling+self-paced/about) | This course is the first in a series designed to equip you with best practices for mastering application-level (client-side) transaction management in CockroachDB. We'll dive deep on common differences between CockroachDB and legacy SQL databases and help you sidestep challenges you might encounter when migrating to CockroachDB from Oracle, PostgreSQL, and MySQL.
Cockroach University | [Building a Highly Resilient Multi-region Database using CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+intro-to-resilience-in-multi-region+self-paced/about) | This course is part of a series introducing solutions to running low-latency, highly resilient applications for data-intensive workloads on CockroachDB. In this course we focus on surviving large-scale infrastructure failures like losing an entire cloud region without losing data during recovery. We'll show you how to use CockroachDB survival goals in a multi-region cluster to implement a highly resilient database that survives node or network failures across multiple regions with zero data loss.
-Docs | [Migration Overview]({% link v24.3/migration-overview.md %}) | This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
+Docs | [Migration Overview]({% link molt/migration-overview.md %}) | This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
Docs | [Architecture Overview](https://www.cockroachlabs.com/docs/v24.3/architecture/overview) | This page provides a starting point for understanding the architecture and design choices that enable CockroachDB's scalability and consistency capabilities.
Docs | [SQL Feature Support]({% link v24.3/sql-feature-support.md %}) | The page summarizes the standard SQL features CockroachDB supports as well as common extensions to the standard.
Docs | [Change Data Capture Overview]({% link v24.3/change-data-capture-overview.md %}) | This page summarizes CockroachDB's data streaming capabilities. Change data capture (CDC) provides efficient, distributed, row-level changefeeds into a configurable sink for downstream processing such as reporting, caching, or full-text indexing.
diff --git a/src/current/_includes/releases/v25.1/v25.1.0.md b/src/current/_includes/releases/v25.1/v25.1.0.md
index b3d2f6869e7..fa0b952bda2 100644
--- a/src/current/_includes/releases/v25.1/v25.1.0.md
+++ b/src/current/_includes/releases/v25.1/v25.1.0.md
@@ -89,7 +89,7 @@ Cockroach University | [Introduction to Distributed SQL and CockroachDB](https:/
Cockroach University | [Practical First Steps with CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+practical-first-steps-with-crdb+self-paced/about) | This course will give you the tools you need to get started with CockroachDB. During the course, you will learn how to spin up a cluster, use the Admin UI to monitor cluster activity, and use SQL shell to solve a set of hands-on exercises.
Cockroach University | [Enterprise Application Development with CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+client-side-txn-handling+self-paced/about) | This course is the first in a series designed to equip you with best practices for mastering application-level (client-side) transaction management in CockroachDB. We'll dive deep on common differences between CockroachDB and legacy SQL databases and help you sidestep challenges you might encounter when migrating to CockroachDB from Oracle, PostgreSQL, and MySQL.
Cockroach University | [Building a Highly Resilient Multi-region Database using CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+intro-to-resilience-in-multi-region+self-paced/about) | This course is part of a series introducing solutions to running low-latency, highly resilient applications for data-intensive workloads on CockroachDB. In this course we focus on surviving large-scale infrastructure failures like losing an entire cloud region without losing data during recovery. We'll show you how to use CockroachDB survival goals in a multi-region cluster to implement a highly resilient database that survives node or network failures across multiple regions with zero data loss.
-Docs | [Migration Overview]({% link v25.1/migration-overview.md %}) | This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
+Docs | [Migration Overview]({% link molt/migration-overview.md %}) | This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
Docs | [Architecture Overview](https://www.cockroachlabs.com/docs/v25.1/architecture/overview) | This page provides a starting point for understanding the architecture and design choices that enable CockroachDB's scalability and consistency capabilities.
Docs | [SQL Feature Support]({% link v25.1/sql-feature-support.md %}) | The page summarizes the standard SQL features CockroachDB supports as well as common extensions to the standard.
Docs | [Change Data Capture Overview]({% link v25.1/change-data-capture-overview.md %}) | This page summarizes CockroachDB's data streaming capabilities. Change data capture (CDC) provides efficient, distributed, row-level changefeeds into a configurable sink for downstream processing such as reporting, caching, or full-text indexing.
diff --git a/src/current/_includes/sidebar-data-cockroachcloud.json b/src/current/_includes/sidebar-data-cockroachcloud.json
index 189a8c14532..f9c670a79ef 100644
--- a/src/current/_includes/sidebar-data-cockroachcloud.json
+++ b/src/current/_includes/sidebar-data-cockroachcloud.json
@@ -134,7 +134,7 @@
{
"title": "Migration Overview",
"urls": [
- "/{{site.current_cloud_version}}/migration-overview.html"
+ "/molt/migration-overview.html"
]
},
{
@@ -146,13 +146,13 @@
{
"title": "Migrate from PostgreSQL",
"urls": [
- "/{{site.current_cloud_version}}/migrate-from-postgres.html"
+ "/molt/migrate-to-cockroachdb.html"
]
},
{
"title": "Migrate from MySQL",
"urls": [
- "/{{site.current_cloud_version}}/migrate-from-mysql.html"
+ "/molt/migrate-to-cockroachdb.html?filters=mysql"
]
},
{
diff --git a/src/current/_includes/v23.1/sidebar-data/migrate.json b/src/current/_includes/v23.1/sidebar-data/migrate.json
index 4a09e7dcd3d..c3a9397fefe 100644
--- a/src/current/_includes/v23.1/sidebar-data/migrate.json
+++ b/src/current/_includes/v23.1/sidebar-data/migrate.json
@@ -5,7 +5,31 @@
{
"title": "Overview",
"urls": [
- "/${VERSION}/migration-overview.html"
+ "/molt/migration-overview.html"
+ ]
+ },
+ {
+ "title": "Migration Strategy",
+ "urls": [
+ "/molt/migration-strategy.html"
+ ]
+ },
+ {
+ "title": "Migrate to CockroachDB",
+ "urls": [
+ "/molt/migrate-to-cockroachdb.html"
+ ]
+ },
+ {
+ "title": "Migrate in Phases",
+ "urls": [
+ "/molt/migrate-in-phases.html"
+ ]
+ },
+ {
+ "title": "Migration Failback",
+ "urls": [
+ "/molt/migrate-failback.html"
]
},
{
@@ -107,29 +131,11 @@
}
]
},
- {
- "title": "Migrate from PostgreSQL",
- "urls": [
- "/${VERSION}/migrate-from-postgres.html"
- ]
- },
- {
- "title": "Migrate from MySQL",
- "urls": [
- "/${VERSION}/migrate-from-mysql.html"
- ]
- },
{
"title": "Migrate from Oracle",
"urls": [
"/${VERSION}/migrate-from-oracle.html"
]
- },
- {
- "title": "Migration Strategy: Lift and Shift",
- "urls": [
- "/${VERSION}/migration-strategy-lift-and-shift.html"
- ]
}
]
}
diff --git a/src/current/_includes/v23.2/sidebar-data/migrate.json b/src/current/_includes/v23.2/sidebar-data/migrate.json
index 48e1670082a..5448baf143e 100644
--- a/src/current/_includes/v23.2/sidebar-data/migrate.json
+++ b/src/current/_includes/v23.2/sidebar-data/migrate.json
@@ -5,18 +5,36 @@
{
"title": "Overview",
"urls": [
- "/${VERSION}/migration-overview.html"
+ "/molt/migration-overview.html"
+ ]
+ },
+ {
+ "title": "Migration Strategy",
+ "urls": [
+ "/molt/migration-strategy.html"
+ ]
+ },
+ {
+ "title": "Migrate to CockroachDB",
+ "urls": [
+ "/molt/migrate-to-cockroachdb.html"
+ ]
+ },
+ {
+ "title": "Migrate in Phases",
+ "urls": [
+ "/molt/migrate-in-phases.html"
+ ]
+ },
+ {
+ "title": "Migration Failback",
+ "urls": [
+ "/molt/migrate-failback.html"
]
},
{
"title": "MOLT Tools",
"items": [
- {
- "title": "Overview",
- "urls": [
- "/molt/molt-overview.html"
- ]
- },
{
"title": "Schema Conversion Tool",
"urls": [
@@ -119,29 +137,11 @@
}
]
},
- {
- "title": "Migrate from PostgreSQL",
- "urls": [
- "/${VERSION}/migrate-from-postgres.html"
- ]
- },
- {
- "title": "Migrate from MySQL",
- "urls": [
- "/${VERSION}/migrate-from-mysql.html"
- ]
- },
{
"title": "Migrate from Oracle",
"urls": [
"/${VERSION}/migrate-from-oracle.html"
]
- },
- {
- "title": "Migration Strategy: Lift and Shift",
- "urls": [
- "/${VERSION}/migration-strategy-lift-and-shift.html"
- ]
}
]
}
diff --git a/src/current/_includes/v24.1/sidebar-data/migrate.json b/src/current/_includes/v24.1/sidebar-data/migrate.json
index 48e1670082a..5448baf143e 100644
--- a/src/current/_includes/v24.1/sidebar-data/migrate.json
+++ b/src/current/_includes/v24.1/sidebar-data/migrate.json
@@ -5,18 +5,36 @@
{
"title": "Overview",
"urls": [
- "/${VERSION}/migration-overview.html"
+ "/molt/migration-overview.html"
+ ]
+ },
+ {
+ "title": "Migration Strategy",
+ "urls": [
+ "/molt/migration-strategy.html"
+ ]
+ },
+ {
+ "title": "Migrate to CockroachDB",
+ "urls": [
+ "/molt/migrate-to-cockroachdb.html"
+ ]
+ },
+ {
+ "title": "Migrate in Phases",
+ "urls": [
+ "/molt/migrate-in-phases.html"
+ ]
+ },
+ {
+ "title": "Migration Failback",
+ "urls": [
+ "/molt/migrate-failback.html"
]
},
{
"title": "MOLT Tools",
"items": [
- {
- "title": "Overview",
- "urls": [
- "/molt/molt-overview.html"
- ]
- },
{
"title": "Schema Conversion Tool",
"urls": [
@@ -119,29 +137,11 @@
}
]
},
- {
- "title": "Migrate from PostgreSQL",
- "urls": [
- "/${VERSION}/migrate-from-postgres.html"
- ]
- },
- {
- "title": "Migrate from MySQL",
- "urls": [
- "/${VERSION}/migrate-from-mysql.html"
- ]
- },
{
"title": "Migrate from Oracle",
"urls": [
"/${VERSION}/migrate-from-oracle.html"
]
- },
- {
- "title": "Migration Strategy: Lift and Shift",
- "urls": [
- "/${VERSION}/migration-strategy-lift-and-shift.html"
- ]
}
]
}
diff --git a/src/current/_includes/v24.2/sidebar-data/migrate.json b/src/current/_includes/v24.2/sidebar-data/migrate.json
index 48e1670082a..5448baf143e 100644
--- a/src/current/_includes/v24.2/sidebar-data/migrate.json
+++ b/src/current/_includes/v24.2/sidebar-data/migrate.json
@@ -5,18 +5,36 @@
{
"title": "Overview",
"urls": [
- "/${VERSION}/migration-overview.html"
+ "/molt/migration-overview.html"
+ ]
+ },
+ {
+ "title": "Migration Strategy",
+ "urls": [
+ "/molt/migration-strategy.html"
+ ]
+ },
+ {
+ "title": "Migrate to CockroachDB",
+ "urls": [
+ "/molt/migrate-to-cockroachdb.html"
+ ]
+ },
+ {
+ "title": "Migrate in Phases",
+ "urls": [
+ "/molt/migrate-in-phases.html"
+ ]
+ },
+ {
+ "title": "Migration Failback",
+ "urls": [
+ "/molt/migrate-failback.html"
]
},
{
"title": "MOLT Tools",
"items": [
- {
- "title": "Overview",
- "urls": [
- "/molt/molt-overview.html"
- ]
- },
{
"title": "Schema Conversion Tool",
"urls": [
@@ -119,29 +137,11 @@
}
]
},
- {
- "title": "Migrate from PostgreSQL",
- "urls": [
- "/${VERSION}/migrate-from-postgres.html"
- ]
- },
- {
- "title": "Migrate from MySQL",
- "urls": [
- "/${VERSION}/migrate-from-mysql.html"
- ]
- },
{
"title": "Migrate from Oracle",
"urls": [
"/${VERSION}/migrate-from-oracle.html"
]
- },
- {
- "title": "Migration Strategy: Lift and Shift",
- "urls": [
- "/${VERSION}/migration-strategy-lift-and-shift.html"
- ]
}
]
}
diff --git a/src/current/_includes/v24.3/sidebar-data/migrate.json b/src/current/_includes/v24.3/sidebar-data/migrate.json
index 48e1670082a..5448baf143e 100644
--- a/src/current/_includes/v24.3/sidebar-data/migrate.json
+++ b/src/current/_includes/v24.3/sidebar-data/migrate.json
@@ -5,18 +5,36 @@
{
"title": "Overview",
"urls": [
- "/${VERSION}/migration-overview.html"
+ "/molt/migration-overview.html"
+ ]
+ },
+ {
+ "title": "Migration Strategy",
+ "urls": [
+ "/molt/migration-strategy.html"
+ ]
+ },
+ {
+ "title": "Migrate to CockroachDB",
+ "urls": [
+ "/molt/migrate-to-cockroachdb.html"
+ ]
+ },
+ {
+ "title": "Migrate in Phases",
+ "urls": [
+ "/molt/migrate-in-phases.html"
+ ]
+ },
+ {
+ "title": "Migration Failback",
+ "urls": [
+ "/molt/migrate-failback.html"
]
},
{
"title": "MOLT Tools",
"items": [
- {
- "title": "Overview",
- "urls": [
- "/molt/molt-overview.html"
- ]
- },
{
"title": "Schema Conversion Tool",
"urls": [
@@ -119,29 +137,11 @@
}
]
},
- {
- "title": "Migrate from PostgreSQL",
- "urls": [
- "/${VERSION}/migrate-from-postgres.html"
- ]
- },
- {
- "title": "Migrate from MySQL",
- "urls": [
- "/${VERSION}/migrate-from-mysql.html"
- ]
- },
{
"title": "Migrate from Oracle",
"urls": [
"/${VERSION}/migrate-from-oracle.html"
]
- },
- {
- "title": "Migration Strategy: Lift and Shift",
- "urls": [
- "/${VERSION}/migration-strategy-lift-and-shift.html"
- ]
}
]
}
diff --git a/src/current/_includes/v25.1/sidebar-data/migrate.json b/src/current/_includes/v25.1/sidebar-data/migrate.json
index 15eee5be2d4..5448baf143e 100644
--- a/src/current/_includes/v25.1/sidebar-data/migrate.json
+++ b/src/current/_includes/v25.1/sidebar-data/migrate.json
@@ -5,36 +5,36 @@
{
"title": "Overview",
"urls": [
- "/${VERSION}/migration-overview.html"
+ "/molt/migration-overview.html"
+ ]
+ },
+ {
+ "title": "Migration Strategy",
+ "urls": [
+ "/molt/migration-strategy.html"
]
},
{
"title": "Migrate to CockroachDB",
"urls": [
- "/${VERSION}/migrate-to-cockroachdb.html"
+ "/molt/migrate-to-cockroachdb.html"
]
},
{
"title": "Migrate in Phases",
"urls": [
- "/${VERSION}/migrate-in-phases.html"
+ "/molt/migrate-in-phases.html"
]
},
{
"title": "Migration Failback",
"urls": [
- "/${VERSION}/migrate-failback.html"
+ "/molt/migrate-failback.html"
]
},
{
"title": "MOLT Tools",
"items": [
- {
- "title": "Overview",
- "urls": [
- "/molt/molt-overview.html"
- ]
- },
{
"title": "Schema Conversion Tool",
"urls": [
@@ -142,12 +142,6 @@
"urls": [
"/${VERSION}/migrate-from-oracle.html"
]
- },
- {
- "title": "Migration Strategy: Lift and Shift",
- "urls": [
- "/${VERSION}/migration-strategy-lift-and-shift.html"
- ]
}
]
}
diff --git a/src/current/_includes/v25.2/sidebar-data/migrate.json b/src/current/_includes/v25.2/sidebar-data/migrate.json
index 15eee5be2d4..5448baf143e 100644
--- a/src/current/_includes/v25.2/sidebar-data/migrate.json
+++ b/src/current/_includes/v25.2/sidebar-data/migrate.json
@@ -5,36 +5,36 @@
{
"title": "Overview",
"urls": [
- "/${VERSION}/migration-overview.html"
+ "/molt/migration-overview.html"
+ ]
+ },
+ {
+ "title": "Migration Strategy",
+ "urls": [
+ "/molt/migration-strategy.html"
]
},
{
"title": "Migrate to CockroachDB",
"urls": [
- "/${VERSION}/migrate-to-cockroachdb.html"
+ "/molt/migrate-to-cockroachdb.html"
]
},
{
"title": "Migrate in Phases",
"urls": [
- "/${VERSION}/migrate-in-phases.html"
+ "/molt/migrate-in-phases.html"
]
},
{
"title": "Migration Failback",
"urls": [
- "/${VERSION}/migrate-failback.html"
+ "/molt/migrate-failback.html"
]
},
{
"title": "MOLT Tools",
"items": [
- {
- "title": "Overview",
- "urls": [
- "/molt/molt-overview.html"
- ]
- },
{
"title": "Schema Conversion Tool",
"urls": [
@@ -142,12 +142,6 @@
"urls": [
"/${VERSION}/migrate-from-oracle.html"
]
- },
- {
- "title": "Migration Strategy: Lift and Shift",
- "urls": [
- "/${VERSION}/migration-strategy-lift-and-shift.html"
- ]
}
]
}
diff --git a/src/current/cockroachcloud/create-a-basic-cluster.md b/src/current/cockroachcloud/create-a-basic-cluster.md
index d680038c8fc..eec9b6a9287 100644
--- a/src/current/cockroachcloud/create-a-basic-cluster.md
+++ b/src/current/cockroachcloud/create-a-basic-cluster.md
@@ -109,7 +109,7 @@ Click **Create cluster**. Your cluster will be created in a few seconds.
- [Manage access]({% link cockroachcloud/managing-access.md %})
- [Learn CockroachDB SQL]({% link cockroachcloud/learn-cockroachdb-sql.md %}).
- Explore our [example apps]({% link {{site.current_cloud_version}}/example-apps.md %}) for examples on how to build applications using your preferred driver or ORM and run it on CockroachDB.
-- [Migrate your existing data]({% link {{site.current_cloud_version}}/migration-overview.md %}).
+- [Migrate your existing data]({% link molt/migration-overview.md %}).
- Build a simple CRUD application in [Go]({% link {{site.current_cloud_version}}/build-a-go-app-with-cockroachdb.md %}), [Java]({% link {{site.current_cloud_version}}/build-a-java-app-with-cockroachdb.md %}), [Node.js]({% link {{site.current_cloud_version}}/build-a-nodejs-app-with-cockroachdb.md %}), or [Python]({% link {{site.current_cloud_version}}/build-a-python-app-with-cockroachdb.md %}).
- For examples of applications that use free CockroachDB {{ site.data.products.cloud }} clusters, check out the following [Hack the North](https://hackthenorth.com/) projects:
- [flock](https://devpost.com/software/flock-figure-out-what-film-to-watch-with-friends)
diff --git a/src/current/cockroachcloud/migrate-from-standard-to-advanced.md b/src/current/cockroachcloud/migrate-from-standard-to-advanced.md
index 7b5f22dbb11..d510efaa7b6 100644
--- a/src/current/cockroachcloud/migrate-from-standard-to-advanced.md
+++ b/src/current/cockroachcloud/migrate-from-standard-to-advanced.md
@@ -204,7 +204,7 @@ ALTER TABLE tpcc.district ADD CONSTRAINT fk_d_w_id_ref_warehouse FOREIGN KEY (d_
## See also
- [`IMPORT INTO`]({% link {{site.current_cloud_version}}/import-into.md %})
-- [Migration Overview]({% link {{site.current_cloud_version}}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Migrate from CSV]({% link {{site.current_cloud_version}}/migrate-from-csv.md %})
- [Import Performance Best Practices]({% link {{site.current_cloud_version}}/import-performance-best-practices.md %})
- [Use the Built-in SQL Client]({% link {{site.current_cloud_version}}/cockroach-sql.md %})
diff --git a/src/current/cockroachcloud/migrations-page.md b/src/current/cockroachcloud/migrations-page.md
index c86c79f5e38..603d156ac8a 100644
--- a/src/current/cockroachcloud/migrations-page.md
+++ b/src/current/cockroachcloud/migrations-page.md
@@ -15,7 +15,7 @@ The **Migrations** page on the CockroachDB {{ site.data.products.cloud }} Consol
- [Export the converted schema.](#export-the-schema) {% include cockroachcloud/migration/sct-self-hosted.md %}
{{site.data.alerts.callout_info}}
- The **Migrations** page is used to convert a schema for use with CockroachDB and to create a new database that uses the schema. It does not include moving data to the new database. For details on all steps required to complete a database migration, see the [Migration Overview]({% link {{version_prefix}}migration-overview.md %}).
+ The **Migrations** page is used to convert a schema for use with CockroachDB and to create a new database that uses the schema. It does not include moving data to the new database. For details on all steps required to complete a database migration, see the [Migration Overview]({% link molt/migration-strategy.md %}).
{{site.data.alerts.end}}
To view this page, select a cluster from the [**Clusters** page]({% link cockroachcloud/cluster-management.md %}#view-clusters-page), and click **Migration** in the **Data** section of the left side navigation.
@@ -51,7 +51,7 @@ The steps to convert your schema depend on your source dialect.
-INT type conversion: On CockroachDB, INT is an alias for INT8, which creates 64-bit signed integers. On PostgreSQL, INT defaults to INT4. For details, see Schema design best practices.
+INT type conversion: On CockroachDB, INT is an alias for INT8, which creates 64-bit signed integers. On PostgreSQL, INT defaults to INT4. For details, see Schema design best practices.
@@ -59,16 +59,16 @@ The steps to convert your schema depend on your source dialect.
-AUTO_INCREMENT Conversion Option: We do not recommend using a sequence to define a primary key column. For details, see Schema design best practices. To understand the differences between the UUID and unique_rowid() options, see the SQL FAQs.
+AUTO_INCREMENT Conversion Option: We do not recommend using a sequence to define a primary key column. For details, see Schema design best practices. To understand the differences between the UUID and unique_rowid() options, see the SQL FAQs.
 Enum Preferences: On CockroachDB, ENUMS are a standalone type. On MySQL, they are part of column definitions. You can select to either deduplicate the ENUM definitions or create a separate type for each column.
-GENERATED AS IDENTITY Conversion Option: We do not recommend using a sequence to define a primary key column. For details, see Schema design best practices. To understand the differences between the UUID and unique_rowid() options, see the SQL FAQs.
+GENERATED AS IDENTITY Conversion Option: We do not recommend using a sequence to define a primary key column. For details, see Schema design best practices. To understand the differences between the UUID and unique_rowid() options, see the SQL FAQs.
-IDENTITY Conversion Option: We do not recommend using a sequence to define a primary key column. For details, see Schema design best practices. To understand the differences between the UUID and unique_rowid() options, see the SQL FAQs.
+IDENTITY Conversion Option: We do not recommend using a sequence to define a primary key column. For details, see Schema design best practices. To understand the differences between the UUID and unique_rowid() options, see the SQL FAQs.
@@ -166,7 +166,7 @@ The banner at the top of the page displays:
The number of Compatibility Notes on differences in SQL syntax. Although these statements do not block schema migration, you should [update](#update-the-schema) them before migrating the schema.
### Summary Report
@@ -198,7 +198,7 @@ After updating the schema, click [**Retry Migration**](#retry-the-migration). If
| Column | Description |
|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Description | A summary of the error type. |
-| Category | The category of error:<br><br>**Unimplemented Feature:** A statement that uses an [unimplemented feature]({% link {{version_prefix}}migration-overview.md %}#unimplemented-features-and-syntax-incompatibilities).<br><br>**Uncreated User:** A statement that references a nonexistent user.<br><br>**Incompatibility:** (Non-PostgreSQL schemas) A statement that could not be converted because it has no equivalent syntax on CockroachDB.<br><br>**Uncategorized:** A statement that the tool did not or could not execute.<br><br>**Incidental:** A statement that failed because another SQL statement encountered one of the preceding error types. |
+| Category | The category of error:<br><br>**Unimplemented Feature:** A statement that uses an [unimplemented feature]({% link molt/migration-strategy.md %}#unimplemented-features-and-syntax-incompatibilities).<br><br>**Uncreated User:** A statement that references a nonexistent user.<br><br>**Incompatibility:** (Non-PostgreSQL schemas) A statement that could not be converted because it has no equivalent syntax on CockroachDB.<br><br>**Uncategorized:** A statement that the tool did not or could not execute.<br><br>**Incidental:** A statement that failed because another SQL statement encountered one of the preceding error types. |
| Complexity | The estimated difficulty of addressing the error. |
| Remaining Instances | The number of times the error still occurs on the provided schema. This number will change as you [update the schema](#update-the-schema) to fix errors. Click the `+` icon on the row to view up to 20 individual statements where this occurs. |
| Actions | The option to **Add User** to add a missing SQL user, or **Delete** all statements that contain the error type. This cannot be undone after you [retry the migration](#retry-the-migration). |
@@ -216,7 +216,7 @@ After updating the schema, click [**Retry Migration**](#retry-the-migration). If
#### Suggestions
-**Suggestions** relate to [schema design best practices]({% link {{version_prefix}}migration-overview.md %}#schema-design-best-practices). They do not block [schema migration](#migrate-the-schema).
+**Suggestions** relate to [schema design best practices]({% link molt/migration-strategy.md %}#schema-design-best-practices). They do not block [schema migration](#migrate-the-schema).
| Column | Description |
|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -298,7 +298,7 @@ To update the schema:
| Category | Solution | Bulk Actions | Required for schema migration? |
|----------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------|--------------------------------|
-| Unimplemented feature | The feature does not yet exist on CockroachDB. Implement a workaround by editing the statement and adding statements. Otherwise, remove the statement from the schema. If a link to a tracking issue is included, click the link for further context. For more information about unimplemented features, see the [Migration Overview]({% link {{version_prefix}}migration-overview.md %}#unimplemented-features-and-syntax-incompatibilities). | Delete | Yes |
+| Unimplemented feature | The feature does not yet exist on CockroachDB. Implement a workaround by editing the statement and adding statements. Otherwise, remove the statement from the schema. If a link to a tracking issue is included, click the link for further context. For more information about unimplemented features, see the [Migration Overview]({% link molt/migration-strategy.md %}#unimplemented-features-and-syntax-incompatibilities). | Delete | Yes |
| Uncreated user | Click the **Add User** button next to the error message. You must be a member of the [`admin` role]({% link cockroachcloud/managing-access.md %}). This adds the missing user to the cluster. | Add User, Delete | Yes |
| Incidental | Resolve the error in the earlier failed statement that caused the incidental error. | Delete | Yes |
| Incompatibility (non-PostgreSQL schemas) | There is no equivalent syntax on CockroachDB. Implement a workaround by replacing the statement. Otherwise, remove the statement from the schema. Then check **Acknowledge**. | Delete | Yes |
@@ -327,7 +327,7 @@ To migrate the schema, click **Migrate Schema** when viewing the **Summary Repor
1. Name the new database and select a SQL user to own the database.
1. Click **Migrate**.
-After migrating the schema and creating the new database, you can [load some test data]({% link {{version_prefix}}migration-overview.md %}#load-test-data) and [validate your queries]({% link {{version_prefix}}migration-overview.md %}#validate-queries).
+After migrating the schema and creating the new database, you can [load some test data]({% link molt/migration-strategy.md %}#load-test-data) and [validate your queries]({% link molt/migration-strategy.md %}#validate-queries).
## Schemas table
@@ -359,8 +359,7 @@ To delete or verify a set of credentials, select the appropriate option in the *
## See also
-- [Migration Overview]({% link {{version_prefix}}migration-overview.md %})
-- [Migrate to CockroachDB]({% link {{ site.current_cloud_version }}/migrate-to-cockroachdb.md %})
-- [MOLT Overview]({% link molt/molt-overview.md %})
+- [Migration Overview]({% link molt/migration-strategy.md %})
+- [Migrate to CockroachDB]({% link molt/migrate-to-cockroachdb.md %})
- [MOLT Fetch]({% link molt/molt-fetch.md %})
- [MOLT Verify]({% link molt/molt-verify.md %})
\ No newline at end of file
diff --git a/src/current/cockroachcloud/private-clusters.md b/src/current/cockroachcloud/private-clusters.md
index 66e86cda615..9f22068bd51 100644
--- a/src/current/cockroachcloud/private-clusters.md
+++ b/src/current/cockroachcloud/private-clusters.md
@@ -46,4 +46,4 @@ Egress traffic from a private cluster to non-cloud external resources will alway
## Limitations
-- An existing cluster can't be migrated in-place to a private cluster. Instead, migrate the existing cluster's data to a new private cluster. Refer to [Migrate Your Database to CockroachDB]({% link {{ site.current_cloud_version }}/migration-overview.md %}).
+- An existing cluster can't be migrated in-place to a private cluster. Instead, migrate the existing cluster's data to a new private cluster. Refer to [Migrate Your Database to CockroachDB]({% link molt/migration-overview.md %}).
diff --git a/src/current/cockroachcloud/quickstart.md b/src/current/cockroachcloud/quickstart.md
index ccee1e88044..0d8dd9f3799 100644
--- a/src/current/cockroachcloud/quickstart.md
+++ b/src/current/cockroachcloud/quickstart.md
@@ -202,7 +202,7 @@ Now that you have a CockroachDB {{ site.data.products.standard }} cluster runnin
- [Learn CockroachDB SQL]({% link cockroachcloud/learn-cockroachdb-sql.md %}).
- [Create and manage SQL users]({% link cockroachcloud/managing-access.md %}).
- Explore our [example apps]({% link {{site.current_cloud_version}}/example-apps.md %}) for examples on how to build applications using your preferred driver or ORM and run it on CockroachDB.
-- [Migrate your existing data]({% link {{site.current_cloud_version}}/migration-overview.md %}).
+- [Migrate your existing data]({% link molt/migration-overview.md %}).
This page highlights just one way you can get started with CockroachDB. For information on other options that are available when creating a CockroachDB cluster, see the following:
diff --git a/src/current/cockroachcloud/resource-usage-basic.md b/src/current/cockroachcloud/resource-usage-basic.md
index 2001acfdf30..c0aac71bac3 100644
--- a/src/current/cockroachcloud/resource-usage-basic.md
+++ b/src/current/cockroachcloud/resource-usage-basic.md
@@ -108,7 +108,7 @@ You might also see [multiple open connections](#excessive-number-of-connections)
### Data migration
-An initial data load during a migration may consume a high number of RUs. Generally in this case, optimized performance will also coincide with optimized RU consumption. For more information about migrations, refer to the [Migration Overview]({% link {{ site.current_cloud_version }}/migration-overview.md %}).
+An initial data load during a migration may consume a high number of RUs. Generally in this case, optimized performance will also coincide with optimized RU consumption. For more information about migrations, refer to the [Migration Overview]({% link molt/migration-overview.md %}).
### Changefeeds (CDC)
diff --git a/src/current/cockroachcloud/resource-usage.md b/src/current/cockroachcloud/resource-usage.md
index 8a12a14cee0..da19f7285d9 100644
--- a/src/current/cockroachcloud/resource-usage.md
+++ b/src/current/cockroachcloud/resource-usage.md
@@ -112,7 +112,7 @@ You might also see [multiple open connections](#excessive-number-of-connections)
### Data migration
-An initial data load during a migration may consume a high number of RUs. Generally in this case, optimized performance will also coincide with optimized RU consumption. For more information about migrations, refer to the [Migration Overview]({% link {{ site.current_cloud_version }}/migration-overview.md %}).
+An initial data load during a migration may consume a high number of RUs. Generally in this case, optimized performance will also coincide with optimized RU consumption. For more information about migrations, refer to the [Migration Overview]({% link molt/migration-overview.md %}).
### Changefeeds (CDC)
diff --git a/src/current/images/molt/migration_flow.svg b/src/current/images/molt/migration_flow.svg
new file mode 100644
index 00000000000..63462dc2dd4
--- /dev/null
+++ b/src/current/images/molt/migration_flow.svg
@@ -0,0 +1,882 @@
+
\ No newline at end of file
diff --git a/src/current/molt/migrate-failback.md b/src/current/molt/migrate-failback.md
new file mode 100644
index 00000000000..672b47718cc
--- /dev/null
+++ b/src/current/molt/migrate-failback.md
@@ -0,0 +1,120 @@
+---
+title: Migration Failback
+summary: Learn how to fail back from a CockroachDB cluster to a PostgreSQL or MySQL database.
+toc: true
+docs_area: migrate
+---
+
+Failback can be performed after you have loaded data into CockroachDB and are replicating ongoing changes. Failing back to the source database ensures that data remains consistent on the source in case you need to roll back the migration.
+
+{% assign tab_names_html = "Load and replicate;Phased migration;Failback" %}
+{% assign html_page_filenames = "migrate-to-cockroachdb.html;migrate-in-phases.html;migrate-failback.html" %}
+
+{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder="molt" %}
+
+## Before you begin
+
+- [Enable rangefeeds]({% link {{ site.current_cloud_version }}/create-and-configure-changefeeds.md %}#enable-rangefeeds) in the CockroachDB SQL shell:
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ sql
+ SET CLUSTER SETTING kv.rangefeed.enabled = true;
+ ~~~
+
+- Ensure that your CockroachDB deployment has a valid [Enterprise license]({% link {{ site.current_cloud_version }}/licensing-faqs.md %}).
+
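+For self-hosted clusters, a minimal sketch of applying a license in the SQL shell (the values are placeholders; refer to the licensing page linked above):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SET CLUSTER SETTING cluster.organization = 'Example Org';
+SET CLUSTER SETTING enterprise.license = 'xxxxx.xxxxx';
+~~~
+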
+Select the source dialect you migrated to CockroachDB:
+
+
+
+
+
+
+## Step 1. Stop replication to CockroachDB
+
+Cancel replication to CockroachDB by entering `ctrl-c` in the terminal running the `fetch` process. This sends the process an interrupt signal (`SIGINT`), and the process exits with code `0`.
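+
+If the `fetch` process is running in the background or in another terminal, you can send the equivalent signal directly. A sketch, assuming the process was started with `molt fetch` (adjust the pattern to your environment):
+
+{% include_cached copy-clipboard.html %}
+~~~ shell
+# Send an interrupt signal (the ctrl-c equivalent) to the fetch process
+pkill -INT -f 'molt fetch'
+~~~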
+
+## Step 2. Fail back from CockroachDB
+
+The following example watches the `employees` table for change events.
+
+1. Issue the [MOLT Fetch]({% link molt/molt-fetch.md %}) command to fail back to the source database, specifying `--mode failback`. For details on this mode, refer to the [MOLT Fetch]({% link molt/molt-fetch.md %}#fail-back-to-source-database) page.
+
+ {{site.data.alerts.callout_success}}
+ Be mindful when specifying the connection strings: `--source` is the CockroachDB connection string and `--target` is the connection string of the database you migrated from.
+ {{site.data.alerts.end}}
+
+ Use the `--stagingSchema` replication flag to provide the name of the staging schema. This is found in the `staging database name` message that is written at the beginning of the [replication task]({% link molt/migrate-in-phases.md %}#step-6-replicate-changes-to-cockroachdb).
+
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ shell
+ molt fetch \
+ --source 'postgres://root@localhost:26257/defaultdb?sslmode=verify-full' \
+ --target 'postgres://postgres:postgres@localhost:5432/molt?sslmode=verify-full' \
+ --table-filter 'employees' \
+ --non-interactive \
+ --replicator-flags "--stagingSchema _replicator_1739996035106984000" \
+ --mode failback \
+ --changefeeds-path 'changefeed-secure.json'
+ ~~~
+
+
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ shell
+ molt fetch \
+ --source 'postgres://root@localhost:26257/defaultdb?sslmode=verify-full' \
+ --target 'mysql://user:password@localhost/molt?sslcert=.%2fsource_certs%2fclient.root.crt&sslkey=.%2fsource_certs%2fclient.root.key&sslmode=verify-full&sslrootcert=.%2fsource_certs%2fca.crt' \
+ --table-filter 'employees' \
+ --non-interactive \
+ --replicator-flags "--stagingSchema _replicator_1739996035106984000" \
+ --mode failback \
+ --changefeeds-path 'changefeed-secure.json'
+ ~~~
+
+
+ `--changefeeds-path` specifies a path to `changefeed-secure.json`, which should contain the following setting override:
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ json
+ {
+ "sink_query_parameters": "client_cert={base64 cert}&client_key={base64 key}&ca_cert={base64 CA cert}"
+ }
+ ~~~
+
+ `client_cert`, `client_key`, and `ca_cert` are [webhook sink parameters]({% link {{ site.current_cloud_version }}/changefeed-sinks.md %}#webhook-parameters) that must be base64- and URL-encoded (for example, use the command `base64 -i ./client.crt | jq -R -r '@uri'`).
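+
+    For example, a sketch that encodes each file using the command from the preceding sentence (the key and CA filenames are placeholders):
+
+    {% include_cached copy-clipboard.html %}
+    ~~~ shell
+    # Base64-encode, then URL-encode, each TLS file for sink_query_parameters
+    base64 -i ./client.crt | jq -R -r '@uri'
+    base64 -i ./client.key | jq -R -r '@uri'
+    base64 -i ./ca.crt | jq -R -r '@uri'
+    ~~~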
+
+ {{site.data.alerts.callout_success}}
+ For details on the default changefeed settings and how to override them, refer to [Changefeed override settings]({% link molt/molt-fetch.md %}#changefeed-override-settings).
+ {{site.data.alerts.end}}
+
+1. Check the output to observe `fetch progress`.
+
+ A `starting replicator` message indicates that the task has started:
+
+ ~~~ json
+ {"level":"info","time":"2025-02-20T15:55:44-05:00","message":"starting replicator"}
+ ~~~
+
+ The `staging database name` message contains the name of the staging schema:
+
+ ~~~ json
+ {"level":"info","time":"2025-02-11T14:56:20-05:00","message":"staging database name: _replicator_1739303283084207000"}
+ ~~~
+
+ A `creating changefeed` message indicates that a changefeed will be passing change events from CockroachDB to the failback target:
+
+ ~~~ json
+ {"level":"info","time":"2025-02-20T15:55:44-05:00","message":"creating changefeed on the source CRDB database"}
+ ~~~
+
+## See also
+
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
+- [MOLT Fetch]({% link molt/molt-fetch.md %})
+- [MOLT Verify]({% link molt/molt-verify.md %})
+- [Migrate to CockroachDB]({% link molt/migrate-to-cockroachdb.md %})
+- [Migrate to CockroachDB in Phases]({% link molt/migrate-in-phases.md %})
\ No newline at end of file
diff --git a/src/current/molt/migrate-in-phases.md b/src/current/molt/migrate-in-phases.md
new file mode 100644
index 00000000000..84df6d5aa21
--- /dev/null
+++ b/src/current/molt/migrate-in-phases.md
@@ -0,0 +1,158 @@
+---
+title: Migrate to CockroachDB in Phases
+summary: Learn how to migrate data in phases from a PostgreSQL or MySQL database into a CockroachDB cluster.
+toc: true
+docs_area: migrate
+---
+
+A phased migration to CockroachDB uses the [MOLT tools]({% link molt/migration-overview.md %}) to [convert your source schema](#step-2-prepare-the-source-schema), incrementally [load source data](#step-3-load-data-into-cockroachdb) and [verify the results](#step-4-verify-the-data-load), and finally [replicate ongoing changes](#step-6-replicate-changes-to-cockroachdb) before performing cutover.
+
+{% assign tab_names_html = "Load and replicate;Phased migration;Failback" %}
+{% assign html_page_filenames = "migrate-to-cockroachdb.html;migrate-in-phases.html;migrate-failback.html" %}
+
+{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder="molt" %}
+
+## Before you begin
+
+- Review the [Migration Overview]({% link molt/migration-overview.md %}).
+- Install the [MOLT (Migrate Off Legacy Technology)]({% link releases/molt.md %}#installation) tools.
+- Review the MOLT Fetch [setup]({% link molt/molt-fetch.md %}#setup) and [best practices]({% link molt/molt-fetch.md %}#best-practices).
+{% include molt/fetch-secure-cloud-storage.md %}
+
+Select the source dialect you will migrate to CockroachDB:
+
+
+
+
+
+
+## Step 1. Prepare the source database
+
+{% include molt/migration-prepare-database.md %}
+
+## Step 2. Prepare the source schema
+
+{% include molt/migration-prepare-schema.md %}
+
+## Step 3. Load data into CockroachDB
+
+{{site.data.alerts.callout_success}}
+To optimize performance of [data load](#step-3-load-data-into-cockroachdb), Cockroach Labs recommends dropping any [constraints]({% link {{ site.current_cloud_version }}/alter-table.md %}#drop-constraint) and [indexes]({% link {{site.current_cloud_version}}/drop-index.md %}) on the target CockroachDB database. You can [recreate them after the data is loaded](#step-5-modify-the-cockroachdb-schema).
+{{site.data.alerts.end}}
+
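+For example, a minimal sketch of dropping a secondary index and a foreign key constraint on the target before the load (the table, constraint, and index names are hypothetical):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+-- Hypothetical names; run on the target CockroachDB database
+ALTER TABLE employees DROP CONSTRAINT fk_employees_department;
+DROP INDEX employees@employees_last_name_idx;
+~~~
+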
+Perform an initial load of data into the target database. This can be a subset of the source data that you wish to verify, or it can be the entire dataset.
+
+{% include molt/fetch-data-load-modes.md %}
+
+1. Issue the [MOLT Fetch]({% link molt/molt-fetch.md %}) command to move the source data to CockroachDB, specifying `--mode data-load` to perform a one-time data load. For details on this mode, refer to the [MOLT Fetch]({% link molt/molt-fetch.md %}#load-data) page.
+
+ {{site.data.alerts.callout_info}}
+ Ensure that the `--source` and `--target` [connection strings]({% link molt/molt-fetch.md %}#connection-strings) are URL-encoded.
+ {{site.data.alerts.end}}
+
+
+    Be sure to specify `--pglogical-replication-slot-name`, which is required for replication in [Step 6](#step-6-replicate-changes-to-cockroachdb).
+
+    {% include_cached copy-clipboard.html %}
+ ~~~ shell
+ molt fetch \
+ --source 'postgres://postgres:postgres@localhost:5432/molt?sslmode=verify-full' \
+ --target 'postgres://root@localhost:26257/defaultdb?sslmode=verify-full' \
+ --table-filter 'employees' \
+ --bucket-path 's3://molt-test' \
+ --table-handling truncate-if-exists \
+ --non-interactive \
+ --pglogical-replication-slot-name cdc_slot \
+ --mode data-load
+ ~~~
+
+
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ shell
+ molt fetch \
+ --source 'mysql://user:password@localhost/molt?sslcert=.%2fsource_certs%2fclient.root.crt&sslkey=.%2fsource_certs%2fclient.root.key&sslmode=verify-full&sslrootcert=.%2fsource_certs%2fca.crt' \
+ --target 'postgres://root@localhost:26257/defaultdb?sslmode=verify-full' \
+ --table-filter 'employees' \
+ --bucket-path 's3://molt-test' \
+ --table-handling truncate-if-exists \
+ --non-interactive \
+ --mode data-load
+ ~~~
+
+
+{% include molt/fetch-data-load-output.md %}
+
+## Step 4. Verify the data load
+
+{% include molt/verify-output.md %}
+
+Repeat [Step 3](#step-3-load-data-into-cockroachdb) and [Step 4](#step-4-verify-the-data-load) to migrate any remaining tables.
+
+## Step 5. Modify the CockroachDB schema
+
+{% include molt/migration-modify-target-schema.md %}
+
+## Step 6. Replicate changes to CockroachDB
+
+With initial load complete, start replication of ongoing changes on the source to CockroachDB.
+
+The following example specifies that the `employees` table should be watched for change events.
+
+1. Issue the [MOLT Fetch]({% link molt/molt-fetch.md %}) command to start replication on CockroachDB, specifying `--mode replication-only` to replicate ongoing changes on the source to CockroachDB. For details on this mode, refer to the [MOLT Fetch]({% link molt/molt-fetch.md %}#replicate-changes) page.
+
+
+ Be sure to specify the same `--pglogical-replication-slot-name` value that you provided in [Step 3](#step-3-load-data-into-cockroachdb).
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ shell
+ molt fetch \
+ --source 'postgres://postgres:postgres@localhost:5432/molt?sslmode=verify-full' \
+ --target 'postgres://root@localhost:26257/defaultdb?sslmode=verify-full' \
+ --table-filter 'employees' \
+ --non-interactive \
+ --mode replication-only \
+ --pglogical-replication-slot-name cdc_slot
+ ~~~
+
+
+
+ Use the `--defaultGTIDSet` replication flag to specify the GTID set. To find your GTID record, run `SELECT source_uuid, min(interval_start), max(interval_end) FROM mysql.gtid_executed GROUP BY source_uuid;` on MySQL.
+
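+    For readability, the same query as a sketch you can run in the MySQL shell:
+
+    {% include_cached copy-clipboard.html %}
+    ~~~ sql
+    SELECT source_uuid, min(interval_start), max(interval_end)
+    FROM mysql.gtid_executed
+    GROUP BY source_uuid;
+    ~~~
+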
+ {% include_cached copy-clipboard.html %}
+ ~~~ shell
+ molt fetch \
+ --source 'mysql://user:password@localhost/molt?sslcert=.%2fsource_certs%2fclient.root.crt&sslkey=.%2fsource_certs%2fclient.root.key&sslmode=verify-full&sslrootcert=.%2fsource_certs%2fca.crt' \
+ --target 'postgres://root@localhost:26257/defaultdb?sslmode=verify-full' \
+ --table-filter 'employees' \
+ --non-interactive \
+ --mode replication-only \
+ --replicator-flags '--defaultGTIDSet 4c658ae6-e8ad-11ef-8449-0242ac140006:1-29'
+ ~~~
+
+
+{% include molt/fetch-replication-output.md %}
+
+## Step 7. Stop replication and verify data
+
+{% include molt/migration-stop-replication.md %}
+
+1. Repeat [Step 4](#step-4-verify-the-data-load) to verify the updated data.
+
+{{site.data.alerts.callout_success}}
+If you encountered issues with replication, you can now use [`failback`]({% link molt/migrate-failback.md %}) mode to replicate changes on CockroachDB back to the initial source database. In case you need to roll back the migration, this ensures that data is consistent on the initial source database.
+{{site.data.alerts.end}}
+
+## Step 8. Cutover
+
+Perform the cutover by resuming application traffic, this time routing it to CockroachDB.
+
+## See also
+
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
+- [MOLT Fetch]({% link molt/molt-fetch.md %})
+- [MOLT Verify]({% link molt/molt-verify.md %})
+- [Migrate to CockroachDB]({% link molt/migrate-to-cockroachdb.md %})
+- [Migration Failback]({% link molt/migrate-failback.md %})
\ No newline at end of file
diff --git a/src/current/molt/migrate-to-cockroachdb.md b/src/current/molt/migrate-to-cockroachdb.md
new file mode 100644
index 00000000000..e275a0eb17c
--- /dev/null
+++ b/src/current/molt/migrate-to-cockroachdb.md
@@ -0,0 +1,118 @@
+---
+title: Migrate to CockroachDB
+summary: Learn how to migrate data from a PostgreSQL or MySQL database into a CockroachDB cluster.
+toc: true
+docs_area: migrate
+---
+
+A migration to CockroachDB uses the [MOLT tools]({% link molt/migration-overview.md %}) to [convert your source schema](#step-2-prepare-the-source-schema), [load source data](#step-3-load-data-into-cockroachdb) into CockroachDB and immediately [replicate ongoing changes](#step-4-replicate-changes-to-cockroachdb), and [verify consistency](#step-5-stop-replication-and-verify-data) on the CockroachDB cluster before performing cutover.
+
+{% assign tab_names_html = "Load and replicate;Phased migration;Failback" %}
+{% assign html_page_filenames = "migrate-to-cockroachdb.html;migrate-in-phases.html;migrate-failback.html" %}
+
+{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder="molt" %}
+
+## Before you begin
+
+- Review the [Migration Overview]({% link molt/migration-overview.md %}).
+- Install the [MOLT (Migrate Off Legacy Technology)]({% link releases/molt.md %}#installation) tools.
+- Review the MOLT Fetch [setup]({% link molt/molt-fetch.md %}#setup) and [best practices]({% link molt/molt-fetch.md %}#best-practices).
+{% include molt/fetch-secure-cloud-storage.md %}
+
+Select the source dialect you will migrate to CockroachDB:
+
+
+
+
+
+
+## Step 1. Prepare the source database
+
+{% include molt/migration-prepare-database.md %}
+
+## Step 2. Prepare the source schema
+
+{% include molt/migration-prepare-schema.md %}
+
+## Step 3. Load data into CockroachDB
+
+{{site.data.alerts.callout_success}}
+To optimize performance of [data load](#step-3-load-data-into-cockroachdb), Cockroach Labs recommends dropping any [constraints]({% link {{ site.current_cloud_version }}/alter-table.md %}#drop-constraint) and [indexes]({% link {{site.current_cloud_version}}/drop-index.md %}) on the target CockroachDB database. You can [recreate them after the data is loaded](#step-6-modify-the-cockroachdb-schema).
+{{site.data.alerts.end}}
+
+Start the initial load of data into the target database. Continuous replication of changes will start once the data load is complete.
+
+{% include molt/fetch-data-load-modes.md %}
+
+1. Issue the [MOLT Fetch]({% link molt/molt-fetch.md %}) command to move the source data to CockroachDB, specifying `--mode data-load-and-replication` to perform an initial load followed by continuous replication. For details on this mode, refer to the [MOLT Fetch]({% link molt/molt-fetch.md %}#load-data-and-replicate-changes) page.
+
+ {{site.data.alerts.callout_info}}
+ Ensure that the `--source` and `--target` [connection strings]({% link molt/molt-fetch.md %}#connection-strings) are URL-encoded.
+ {{site.data.alerts.end}}
+
+
+ Be sure to specify `--pglogical-replication-slot-name`, which is required for replication.
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ shell
+ molt fetch \
+ --source "postgres://postgres:postgres@localhost:5432/molt?sslmode=verify-full" \
+ --target "postgres://root@localhost:26257/defaultdb?sslmode=verify-full" \
+ --table-filter 'employees' \
+ --bucket-path 's3://molt-test' \
+ --table-handling truncate-if-exists \
+ --non-interactive \
+ --mode data-load-and-replication \
+ --pglogical-replication-slot-name cdc_slot
+ ~~~
+
+
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ shell
+ molt fetch \
+ --source 'mysql://user:password@localhost/molt?sslcert=.%2fsource_certs%2fclient.root.crt&sslkey=.%2fsource_certs%2fclient.root.key&sslmode=verify-full&sslrootcert=.%2fsource_certs%2fca.crt' \
+ --target "postgres://root@localhost:26257/defaultdb?sslmode=verify-full" \
+ --table-filter 'employees' \
+ --bucket-path 's3://molt-test' \
+ --table-handling truncate-if-exists \
+ --non-interactive \
+ --mode data-load-and-replication
+ ~~~
+
+
+{% include molt/fetch-data-load-output.md %}
+
+## Step 4. Replicate changes to CockroachDB
+
+1. Watch the output: continuous replication begins immediately after the data load completes and `fetch complete` is logged.
+
+{% include molt/fetch-replication-output.md %}
+
+## Step 5. Stop replication and verify data
+
+{% include molt/migration-stop-replication.md %}
+
+{% include molt/verify-output.md %}
+
+{{site.data.alerts.callout_success}}
+If you encountered issues with replication, you can now use [`failback`]({% link molt/migrate-failback.md %}) mode to replicate changes on CockroachDB back to the initial source database. In case you need to roll back the migration, this ensures that data is consistent on the initial source database.
+{{site.data.alerts.end}}
+
+## Step 6. Modify the CockroachDB schema
+
+{% include molt/migration-modify-target-schema.md %}
+
+## Step 7. Cutover
+
+Perform the cutover by resuming application traffic, this time routing it to CockroachDB.
+
+## See also
+
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
+- [MOLT Fetch]({% link molt/molt-fetch.md %})
+- [MOLT Verify]({% link molt/molt-verify.md %})
+- [Migrate to CockroachDB in Phases]({% link molt/migrate-in-phases.md %})
+- [Migration Failback]({% link molt/migrate-failback.md %})
\ No newline at end of file
diff --git a/src/current/molt/migration-overview.md b/src/current/molt/migration-overview.md
new file mode 100644
index 00000000000..948847f1b1f
--- /dev/null
+++ b/src/current/molt/migration-overview.md
@@ -0,0 +1,136 @@
+---
+title: Migration Overview
+summary: Learn how to migrate your database to a CockroachDB cluster.
+toc: true
+docs_area: migrate
+---
+
+The MOLT (Migrate Off Legacy Technology) toolkit enables safe, minimal-downtime database migrations to CockroachDB. MOLT combines schema transformation, distributed data load, continuous replication, and row-level validation into a highly configurable workflow that adapts to diverse production environments.
+
+This page provides an overview of the following:
+
+- Overall [migration flow](#migration-flow)
+- [MOLT tools](#molt-tools)
+- Supported [migration and failback modes](#migration-modes)
+
+## Migration flow
+
+{{site.data.alerts.callout_success}}
+Before you begin the migration, review [Migration Strategy]({% link molt/migration-strategy.md %}).
+{{site.data.alerts.end}}
+
+A migration to CockroachDB generally follows this sequence:
+
+
+
+
+
+1. Prepare the source database: Configure users, permissions, and replication settings as needed.
+1. Convert the source schema: Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to generate CockroachDB-compatible [DDL]({% link {{ site.current_cloud_version }}/sql-statements.md %}#data-definition-statements). Apply the converted schema to the target database. Drop constraints and indexes to facilitate data load.
+1. Load data into CockroachDB: Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to bulk-ingest your source data.
+1. (Optional) Verify consistency before replication: Use [MOLT Verify]({% link molt/molt-verify.md %}) to confirm that the data loaded into CockroachDB is consistent with the source.
+1. Replicate ongoing changes: Enable continuous replication to keep CockroachDB in sync with the source.
+1. Verify consistency before cutover: Use MOLT Verify to confirm that the CockroachDB data is consistent with the source.
+1. Finalize target schema: Recreate indexes or constraints on CockroachDB that you previously dropped to facilitate data load.
+1. Cut over to CockroachDB: Redirect application traffic to the CockroachDB cluster.
+
+For a practical example of the preceding steps, refer to [Migrate to CockroachDB]({% link molt/migrate-to-cockroachdb.md %}).
+
+## MOLT tools
+
+[MOLT (Migrate Off Legacy Technology)]({% link releases/molt.md %}) is a set of tools for schema conversion, data load, replication, and validation. Migrations with MOLT are resilient, restartable, and scale to large data sets.
+
+MOLT [Fetch](#fetch) and [Verify](#verify) are CLI-based to maximize control, automation, and visibility during the data load and replication stages.
+
+
+
+### Schema Conversion Tool
+
+The [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) converts a source database schema to a CockroachDB-compatible schema. The tool:
+
+- Identifies [unimplemented features]({% link molt/migration-strategy.md %}#unimplemented-features-and-syntax-incompatibilities)
+- Rewrites unsupported [DDL syntax]({% link {{ site.current_cloud_version }}/sql-statements.md %}#data-definition-statements)
+- Applies CockroachDB [schema best practices]({% link molt/migration-strategy.md %}#schema-design-best-practices)
+
+### Fetch
+
+[MOLT Fetch]({% link molt/molt-fetch.md %}) performs the core data migration to CockroachDB. It supports:
+
+- [Multiple migration modes](#migration-modes) via `IMPORT INTO` or `COPY FROM`
+- Data movement via [cloud storage, local file servers, or direct copy]({% link molt/molt-fetch.md %}#data-path)
+- [Concurrent data export]({% link molt/molt-fetch.md %}#best-practices) from multiple source tables and shards
+- [Continuous replication]({% link molt/molt-fetch.md %}#replicate-changes), enabling you to minimize downtime before cutover
+- [Schema transformation rules]({% link molt/molt-fetch.md %}#transformations)
+- Safe [continuation]({% link molt/molt-fetch.md %}#fetch-continuation) to retry failed or interrupted tasks from specific checkpoints
+- [Failback]({% link molt/molt-fetch.md %}#fail-back-to-source-database) to replicate changes from CockroachDB back to the original source via a secure changefeed
+
+### Verify
+
+[MOLT Verify]({% link molt/molt-verify.md %}) checks for data and schema discrepancies between the source database and CockroachDB. It performs:
+
+- Table structure verification
+- Column definition verification
+- Row-level data verification
+- Continuous, live, or one-time verification
+
+## Migration modes
+
+MOLT Fetch supports [multiple data migration modes]({% link molt/molt-fetch.md %}#fetch-mode). These can be combined based on your testing and cutover strategy, as shown in the sketch after the table.
+
+| Mode | Description | Best For |
+|---------------------------------------------|------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|
+| `--mode data-load` | Performs one-time load of source data into CockroachDB | Testing, migrations with planned downtime, [phased migrations]({% link molt/migrate-in-phases.md %}) |
+| `--mode data-load-and-replication` | Loads source data and starts continuous replication from the source database | [Migrations with minimal downtime]({% link molt/migrate-to-cockroachdb.md %}) |
+| `--mode replication-only` | Starts replication from a previously loaded source | [Phased migrations]({% link molt/migrate-in-phases.md %}), post-load sync |
+| `--mode failback` | Replicates changes on CockroachDB back to the original source | [Rollback scenarios]({% link molt/migrate-failback.md %}) |
+| `--mode export-only` / `--mode import-only` | Separates data export and import phases | Large-scale migrations, custom storage pipelines |
+| `--direct-copy`                             | Loads data without intermediate storage using `COPY FROM`                     | Local testing, environments with limited infrastructure                                               |
+
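+For example, a minimal sketch of a one-time load (the connection strings, table name, and bucket are placeholders; swap the `--mode` value to combine stages):
+
+{% include_cached copy-clipboard.html %}
+~~~ shell
+# One-time data load; use data-load-and-replication to continue replicating
+molt fetch \
+--source 'postgres://user:password@localhost:5432/source' \
+--target 'postgres://root@localhost:26257/defaultdb?sslmode=verify-full' \
+--table-filter 'employees' \
+--bucket-path 's3://bucket-name' \
+--non-interactive \
+--mode data-load
+~~~
+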
+## Migrations with minimal downtime
+
+MOLT streamlines the following migration patterns, which use a replication stream to minimize downtime. Rather than load all data into CockroachDB during a planned downtime window, you perform an initial data load and continuously replicate subsequent changes to CockroachDB. Writes are paused only briefly to let replication drain before the final cutover.
+
+### Full migration with minimal downtime
+
+Run MOLT Fetch in `data-load-and-replication` mode to load the initial source data into CockroachDB. Continuous replication starts automatically after the initial load. When ready, pause application traffic to allow replication to drain, validate data consistency with MOLT Verify, then cut over to CockroachDB. For example steps, refer to [Migrate to CockroachDB]({% link molt/migrate-to-cockroachdb.md %}).
+
+### Phased migration with minimal downtime
+
+Run MOLT Fetch in `data-load` mode to incrementally load and validate data in batches. After loading the initial source data, switch to `replication-only` mode to sync ongoing changes. When ready, pause application traffic to allow replication to drain, validate again with MOLT Verify, then cut over to CockroachDB. For example steps, refer to [Migrate in Phases]({% link molt/migrate-in-phases.md %}).
+
+## Migration failback
+
+If issues arise post-cutover, run MOLT Fetch in `failback` mode to replicate changes from CockroachDB back to the original source database. This ensures that data is consistent on the original source so that you can retry the migration later. For example steps, refer to [Migration Failback]({% link molt/migrate-failback.md %}).
+
+## See also
+
+- [Migrate to CockroachDB]({% link molt/migrate-to-cockroachdb.md %})
+- [Migrate to CockroachDB in Phases]({% link molt/migrate-in-phases.md %})
+- [Migration Failback]({% link molt/migrate-failback.md %})
+- [Migration Strategy]({% link molt/migration-strategy.md %})
+- [MOLT Releases]({% link releases/molt.md %})
\ No newline at end of file
diff --git a/src/current/molt/migration-strategy.md b/src/current/molt/migration-strategy.md
new file mode 100644
index 00000000000..a88630572db
--- /dev/null
+++ b/src/current/molt/migration-strategy.md
@@ -0,0 +1,172 @@
+---
+title: Migration Strategy
+summary: Build a migration strategy before performing a database migration to CockroachDB.
+toc: true
+docs_area: migrate
+---
+
+A successful migration to CockroachDB requires planning for downtime, application changes, observability, and rollback.
+
+This page outlines key decisions, infrastructure considerations, and best practices for a resilient and repeatable high-level migration strategy:
+
+- [Develop a migration plan](#develop-a-migration-plan).
+- Evaluate your [downtime approach](#approach-to-downtime).
+- [Size the target CockroachDB cluster](#capacity-planning).
+- Implement [application changes](#application-changes) to address necessary [schema changes](#schema-design-best-practices), [transaction contention](#handling-transaction-contention), and [unimplemented features](#unimplemented-features-and-syntax-incompatibilities).
+- [Prepare for migration](#prepare-for-migration) by running a [pre-mortem](#run-a-migration-pre-mortem), setting up [metrics](#set-up-monitoring-and-alerting), [loading test data](#load-test-data), [validating application queries](#validate-queries) for correctness and performance, performing a [migration dry run](#perform-a-dry-run), and reviewing your [cutover strategy](#cutover-strategy).
+
+{{site.data.alerts.callout_success}}
+For help migrating to CockroachDB, contact our sales team.
+{{site.data.alerts.end}}
+
+## Develop a migration plan
+
+Consider the following as you plan your migration:
+
+- Who will lead and perform the migration? Which teams are involved, and which aspects are they responsible for?
+- Which internal and external parties do you need to inform about the migration?
+- Which external or third-party tools (e.g., microservices, analytics, payment processors, aggregators, CRMs) must be tested and migrated along with your application?
+- When is the best time to perform this migration to be minimally disruptive to the database's users?
+- What is your target date for completing the migration?
+
+Create a document that summarizes the intent of the migration, the technical details, and the team members involved.
+
+## Approach to downtime
+
+Consider the following approaches to downtime. In each case, fully [prepare the migration](#prepare-for-migration) so that you can be confident the migration will complete during the planned window.
+
+- *Scheduled downtime* is made known to your users in advance. Once you have [prepared for the migration](#prepare-for-migration), you take the application offline, [conduct the migration]({% link molt/migration-overview.md %}), and bring the application back online on CockroachDB. To succeed, you should estimate the amount of downtime required to migrate your data, and ideally schedule the downtime outside of peak hours. Scheduling downtime is easiest if your application traffic is "periodic", meaning that it varies by the time of day, day of week, or day of month.
+
+- *Unscheduled downtime* impacts as few customers as possible, ideally without impacting their regular usage. If your application is intentionally offline at certain times (e.g., outside business hours), you can migrate the data without users noticing. Alternatively, if your application's functionality is not time-sensitive (e.g., it sends batched messages or emails), then you can queue requests while your system is offline, and process those requests after completing the migration to CockroachDB.
+
+- *Reduced functionality* takes some, but not all, application functionality offline. For example, you can disable writes but not reads while you migrate the application data, and queue data to be written after completing the migration.
+
+The [MOLT tools]({% link molt/migration-overview.md %}) enable migrations with minimal downtime. Refer to [Cutover strategy](#cutover-strategy).
+
+## Capacity planning
+
+Determine the size of the target CockroachDB cluster. To do this, consider your data volume and workload characteristics:
+
+- What is the total size of the data you will migrate?
+- How many active [application connections]({% link {{ site.current_cloud_version }}/recommended-production-settings.md %}#connection-pooling) will be running in the CockroachDB environment?
+
+Use this information to size the CockroachDB cluster you will create. If you are migrating to a CockroachDB {{ site.data.products.cloud }} cluster, see [Plan Your Cluster]({% link cockroachcloud/plan-your-cluster.md %}) for details:
+
+- For CockroachDB {{ site.data.products.standard }} and {{ site.data.products.basic }}, your cluster will scale automatically to meet your storage and usage requirements. Refer to the [CockroachDB {{ site.data.products.standard }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) and [CockroachDB {{ site.data.products.basic }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) documentation to learn about how to limit your resource consumption.
+- For CockroachDB {{ site.data.products.advanced }}, refer to the [example]({% link cockroachcloud/plan-your-cluster-advanced.md %}#example) that shows how your data volume, storage requirements, and replication factor affect the recommended node size (number of vCPUs per node) and total number of nodes on the cluster.
+- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.cloud }} [Production Checklist]({% link cockroachcloud/production-checklist.md %}#sql-connection-handling).
+
+If you are migrating to a CockroachDB {{ site.data.products.core }} cluster:
+
+- Refer to our [sizing methodology]({% link {{ site.current_cloud_version }}/recommended-production-settings.md %}#sizing) to determine the total number of vCPUs on the cluster and the number of vCPUs per node (which determines the number of nodes on the cluster).
+- Refer to our [storage recommendations]({% link {{ site.current_cloud_version }}/recommended-production-settings.md %}#storage) to determine the amount of storage to provision on each node.
+- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.core }} [Production Checklist]({% link {{ site.current_cloud_version }}/recommended-production-settings.md %}#connection-pooling).
+
+## Application changes
+
+As you develop your migration plan, consider the application changes that you will need to make. These may relate to the following:
+
+- [Handling transaction contention.](#handling-transaction-contention)
+- [Unimplemented features and syntax incompatibilities.](#unimplemented-features-and-syntax-incompatibilities)
+
+### Schema design best practices
+
+{% include molt/migration-schema-design-practices.md %}
+
+### Handling transaction contention
+
+Optimize your queries against [transaction contention]({% link {{ site.current_cloud_version }}/performance-best-practices-overview.md %}#transaction-contention). You may encounter [transaction retry errors]({% link {{ site.current_cloud_version }}/transaction-retry-error-reference.md %}) when you [test application queries](#validate-queries), as well as transaction contention due to long-running transactions when you [conduct the migration]({% link molt/migration-overview.md %}) and bulk load data.
+
+Transaction retry errors are more frequent under CockroachDB's default [`SERIALIZABLE` isolation level]({% link {{ site.current_cloud_version }}/demo-serializable.md %}). If you are migrating an application that was built at a `READ COMMITTED` isolation level, you should first [enable `READ COMMITTED` isolation]({% link {{ site.current_cloud_version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
+
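+A sketch of enabling `READ COMMITTED` (the cluster setting name below is current as of recent CockroachDB versions; verify it against the linked page):
+
+~~~ sql
+-- Allow sessions to use READ COMMITTED isolation
+SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = true;
+-- Then, per session or in the connection string:
+SET default_transaction_isolation = 'read committed';
+~~~
+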
+### Unimplemented features and syntax incompatibilities
+
+Update your queries to resolve differences in functionality and SQL syntax.
+
+CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely [compatible with PostgreSQL syntax]({% link {{ site.current_cloud_version }}/postgresql-compatibility.md %}).
+
+For full compatibility with CockroachDB, you may need to implement workarounds in your schema design, in your [data manipulation language (DML)]({% link {{ site.current_cloud_version }}/sql-statements.md %}#data-manipulation-statements), or in your application code.
+
+For more details on the CockroachDB SQL implementation, refer to [SQL Feature Support]({% link {{ site.current_cloud_version }}/sql-feature-support.md %}).
+
+## Prepare for migration
+
+Once you have a migration plan, prepare the team, application, source database, and CockroachDB cluster for the migration.
+
+### Run a migration "pre-mortem"
+
+To minimize issues after cutover, compose a migration "pre-mortem":
+
+1. Clearly describe the roles and processes of each team member performing the migration.
+1. List the likely failure points and issues that you may encounter as you [conduct the migration]({% link molt/migration-overview.md %}).
+1. Rank potential issues by severity, and identify ways to reduce risk.
+1. Create a plan for implementing the actions that would most effectively reduce risk.
+
+### Set up monitoring and alerting
+
+Based on the success criteria you [defined in your migration plan](#develop-a-migration-plan), identify the metrics that you can use to measure migration success and set up monitoring for the migration. These metrics may be identical to those you normally use in production, but can also be specific to your migration needs.
+
+### Load test data
+
+It's useful to load test data into CockroachDB so that you can [test your application queries](#validate-queries). Refer to the steps in [Migrate to CockroachDB in Phases]({% link molt/migrate-in-phases.md %}).
+
+MOLT Fetch [supports both `IMPORT INTO` and `COPY FROM`]({% link molt/molt-fetch.md %}#data-movement) for loading data into CockroachDB:
+
+- Use `IMPORT INTO` for maximum throughput when the target tables can be offline. For a bulk data migration, most users should use `IMPORT INTO` because the tables will be offline anyway, and `IMPORT INTO` can [perform the data import much faster]({% link {{ site.current_cloud_version }}/import-performance-best-practices.md %}) than `COPY FROM`.
+- Use `COPY FROM` (or `--direct-copy`) when the target must remain queryable during load.
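+
+A sketch of the corresponding MOLT Fetch invocations (the connection strings are placeholders; `--use-copy` per the MOLT Fetch documentation):
+
+~~~ shell
+# Default path: IMPORT INTO (target tables are offline during the load)
+molt fetch --source $SOURCE --target $TARGET --table-filter 'employees' --bucket-path 's3://bucket-name' --mode data-load
+
+# Keep target tables online: COPY FROM
+molt fetch --source $SOURCE --target $TARGET --table-filter 'employees' --bucket-path 's3://bucket-name' --mode data-load --use-copy
+~~~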
+
+### Validate queries
+
+After you [load the test data](#load-test-data), validate your queries on CockroachDB. You can do this by [shadowing](#shadowing) or by [manually testing](#test-query-results-and-performance) the queries.
+
+Note that CockroachDB defaults to the [`SERIALIZABLE`]({% link {{ site.current_cloud_version }}/demo-serializable.md %}) transaction isolation level. If you are migrating an application that was built at a `READ COMMITTED` isolation level on the source database, you must [enable `READ COMMITTED` isolation]({% link {{ site.current_cloud_version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
+
+#### Shadowing
+
+You can "shadow" your production workload by executing your source SQL statements on CockroachDB in parallel. You can then [validate the queries](#test-query-results-and-performance) on CockroachDB for consistency, performance, and potential issues with the migration.
+
+#### Test query results and performance
+
+You can manually validate your queries by testing a subset of "critical queries" on an otherwise idle CockroachDB cluster:
+
+- Check the application logs for error messages and the API response time. If application requests are slower than expected, use the **SQL Activity** page on the [CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/statements-page.md %}) or [DB Console]({% link {{ site.current_cloud_version }}/ui-statements-page.md %}) to find the longest-running queries that are part of that application request. If necessary, tune the queries according to our best practices for [SQL performance]({% link {{ site.current_cloud_version }}/performance-best-practices-overview.md %}).
+
+- Compare the results of the queries and check that they are identical in both the source database and CockroachDB. To do this, you can use [MOLT Verify]({% link molt/molt-verify.md %}).
+
+Test performance on a CockroachDB cluster that is appropriately [sized](#capacity-planning) for your workload:
+
+1. Run the application at low concurrency and verify that its performance is acceptable. The cluster should be provisioned with more than enough resources to handle this workload, because you need to verify that the queries will be fast enough when there are zero resource bottlenecks.
+
+1. Run stress tests with at least the production concurrency and rate, but ideally higher in order to verify that the system can handle unexpected spikes in load. This can also uncover [contention]({% link {{ site.current_cloud_version }}/performance-best-practices-overview.md %}#transaction-contention) issues that will appear during spikes in app load, which may require [application design changes](#handling-transaction-contention) to avoid.
+
+### Perform a dry run
+
+To further minimize potential surprises when you conduct the migration, practice cutover using your application and similar volumes of data on a "dry-run" environment. Use a test or development environment that is as similar as possible to production.
+
+Performing a dry run is highly recommended. In addition to demonstrating how long the migration may take, a dry run also helps to ensure that team members understand what they need to do during the migration, and that changes to the application are coordinated.
+
+### Cutover strategy
+
+*Cutover* is the process of switching application traffic from the source database to CockroachDB. Once the source data is fully migrated to CockroachDB, you "flip the switch" to route application traffic to the new database, thus ending downtime.
+
+MOLT enables [migrations with minimal downtime]({% link molt/migration-overview.md %}#migrations-with-minimal-downtime), using continuous replication of source changes to CockroachDB.
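+
+For example, a single `molt fetch` run can perform the initial data load and then replicate ongoing source changes. The connection strings and bucket path are placeholders:
+
+{% include_cached copy-clipboard.html %}
+~~~ shell
+# Placeholder connection strings; --mode data-load-and-replication loads the
+# source data and then streams subsequent changes to CockroachDB.
+molt fetch \
+--source 'postgres://{username}:{password}@{host}:{port}/{source_database}' \
+--target 'postgres://{username}:{password}@{host}:{port}/{target_database}' \
+--bucket-path 's3://migration-bucket' \
+--mode data-load-and-replication
+~~~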
+
+To safely cut over when using replication:
+
+1. Stop application traffic on the source database.
+1. Wait for the replication stream to drain.
+1. When your [monitoring](#set-up-monitoring-and-alerting) indicates that replication is idle, use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the CockroachDB data.
+1. Start application traffic on CockroachDB.
+
+When you are ready to migrate, refer to [Migrate to CockroachDB]({% link molt/migrate-to-cockroachdb.md %}) or [Migrate to CockroachDB in Phases]({% link molt/migrate-in-phases.md %}) for practical examples of the migration steps.
+
+## See also
+
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate to CockroachDB]({% link molt/migrate-to-cockroachdb.md %})
+- [Migrate to CockroachDB in Phases]({% link molt/migrate-in-phases.md %})
+- [Migration Failback]({% link molt/migrate-failback.md %})
+- [Schema Design Overview]({% link {{ site.current_cloud_version }}/schema-design-overview.md %})
+- [Primary key best practices]({% link {{ site.current_cloud_version }}/schema-design-table.md %}#primary-key-best-practices)
+- [Secondary index best practices]({% link {{ site.current_cloud_version }}/schema-design-indexes.md %}#best-practices)
+- [Transaction contention best practices]({% link {{ site.current_cloud_version }}/performance-best-practices-overview.md %}#transaction-contention)
\ No newline at end of file
diff --git a/src/current/molt/molt-fetch.md b/src/current/molt/molt-fetch.md
index 2cb6ba41e53..d3116d5c73c 100644
--- a/src/current/molt/molt-fetch.md
+++ b/src/current/molt/molt-fetch.md
@@ -5,7 +5,7 @@ toc: true
docs_area: migrate
---
-MOLT Fetch moves data from a source database into CockroachDB as part of a [database migration]({% link {{site.current_cloud_version}}/migration-overview.md %}).
+MOLT Fetch moves data from a source database into CockroachDB as part of a [database migration]({% link molt/migration-overview.md %}).
MOLT Fetch uses [`IMPORT INTO`]({% link {{site.current_cloud_version}}/import-into.md %}) or [`COPY FROM`]({% link {{site.current_cloud_version}}/copy.md %}) to move the source data to cloud storage (Google Cloud Storage or Amazon S3), a local file server, or local memory. Once the data is exported, MOLT Fetch can load the data into a target CockroachDB database and replicate changes from the source database. For details, see [Usage](#usage).
@@ -182,7 +182,7 @@ To verify that your connections and configuration work properly, run MOLT Fetch
| `--log-file` | Write messages to the specified log filename. If no filename is provided, messages write to `fetch-{datetime}.log`. If `"stdout"` is provided, messages write to `stdout`. |
| `--logging` | Level at which to log messages (`trace`/`debug`/`info`/`warn`/`error`/`fatal`/`panic`).<br>**Default:** `info` |
| `--metrics-listen-addr` | Address of the Prometheus metrics endpoint, which has the path `{address}/metrics`. For details on important metrics to monitor, see [Metrics](#metrics).<br>**Default:** `'127.0.0.1:3030'` |
-| `--mode` | Configure the MOLT Fetch behavior: `data-load`, `data-load-and-replication`, `replication-only`, `export-only`, or `import-only`. For details, refer to [Fetch mode](#fetch-mode).<br>**Default:** `data-load` |
+| `--mode` | Configure the MOLT Fetch behavior: `data-load`, `data-load-and-replication`, `replication-only`, `export-only`, `import-only`, or `failback`. For details, refer to [Fetch mode](#fetch-mode).<br>**Default:** `data-load` |
| `--non-interactive` | Run the fetch task without interactive prompts. This is recommended **only** when running `molt fetch` in an automated process (i.e., a job or continuous integration). |
| `--pglogical-publication-name` | If set, the name of the [publication](https://www.postgresql.org/docs/current/logical-replication-publication.html) that will be created or used for replication. Used in [`replication-only`](#replicate-changes) mode.<br>**Default:** `molt_fetch` |
| `--pglogical-publication-and-slot-drop-and-recreate` | If set, drops the [publication](https://www.postgresql.org/docs/current/logical-replication-publication.html) and slots if they exist and then recreates them. Used in [`replication-only`](#replicate-changes) mode. |
@@ -282,7 +282,7 @@ In case you need to rename your [publication](https://www.postgresql.org/docs/cu
Before using this option, the source PostgreSQL or MySQL database **must** be configured for continuous replication, as described in [Setup](#replication-setup). MySQL 5.7 and later are supported.
{{site.data.alerts.end}}
-`data-load-and-replication` instructs MOLT Fetch to load the source data into CockroachDB, and replicate any subsequent changes on the source.
+`data-load-and-replication` instructs MOLT Fetch to load the source data into CockroachDB, and replicate any subsequent changes on the source. This enables [migrations with minimal downtime]({% link molt/migration-overview.md %}#migrations-with-minimal-downtime).
{% include_cached copy-clipboard.html %}
~~~
@@ -322,7 +322,7 @@ Before using this option:
- The `replicator` binary **must** be located either in the same directory as `molt` or in a directory beneath `molt`.
{{site.data.alerts.end}}
-`replication-only` instructs MOLT Fetch to replicate ongoing changes on the source to CockroachDB, using the specified replication marker. This assumes you have already run [`--mode data-load`](#load-data) to load the source data into CockroachDB.
+`replication-only` instructs MOLT Fetch to replicate ongoing changes on the source to CockroachDB, using the specified replication marker. This assumes you have already run [`--mode data-load`](#load-data) to load the source data into CockroachDB. This enables [migrations with minimal downtime]({% link molt/migration-overview.md %}#migrations-with-minimal-downtime).
- For a PostgreSQL source, you should have already created a replication slot when [loading data](#load-data). Specify the same replication slot name using `--pglogical-replication-slot-name`. For example:
@@ -617,7 +617,7 @@ To drop existing tables and create new tables before loading the data, use `drop
--table-handling drop-on-target-and-recreate
~~~
-When using the `drop-on-target-and-recreate` option, MOLT Fetch creates a new CockroachDB table to load the source data if one does not already exist. To guide the automatic schema creation, you can [explicitly map source types to CockroachDB types](#type-mapping).
+When using the `drop-on-target-and-recreate` option, MOLT Fetch creates a new CockroachDB table to load the source data if one does not already exist. To guide the automatic schema creation, you can [explicitly map source types to CockroachDB types](#type-mapping). `drop-on-target-and-recreate` does **not** create indexes or constraints other than [`PRIMARY KEY`]({% link {{site.current_cloud_version}}/primary-key.md %}) and [`NOT NULL`]({% link {{site.current_cloud_version}}/not-null.md %}).
#### Mismatch handling
@@ -1120,8 +1120,6 @@ DEBUG [Sep 11 11:04:01] httpReque
## See also
-- [MOLT Verify]({% link molt/molt-verify.md %})
-- [Migration Overview]({% link {{site.current_cloud_version}}/migration-overview.md %})
-- [Migrate from PostgreSQL]({% link {{site.current_cloud_version}}/migrate-from-postgres.md %})
-- [Migrate from MySQL]({% link {{site.current_cloud_version}}/migrate-from-mysql.md %})
-- [Migrate from CSV]({% link {{site.current_cloud_version}}/migrate-from-csv.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate to CockroachDB]({% link molt/migrate-to-cockroachdb.md %})
+- [MOLT Verify]({% link molt/molt-verify.md %})
\ No newline at end of file
diff --git a/src/current/molt/molt-overview.md b/src/current/molt/molt-overview.md
deleted file mode 100644
index fa03556bf9f..00000000000
--- a/src/current/molt/molt-overview.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-title: MOLT Overview
-summary: Learn how to use the MOLT tools to migrate to CockroachDB.
-toc: true
-docs_area: migrate
----
-
-This page describes the MOLT (Migrate Off Legacy Technology) tools. For instructions on migrating to CockroachDB, refer to [Migrate to CockroachDB]({% link {{ site.current_cloud_version }}/migrate-to-cockroachdb.md %}).
-
-Use the MOLT tools to:
-
-- Convert a schema for compatibility with CockroachDB.
-- Load test and production data into CockroachDB.
-- Validate queries on CockroachDB.
-
-## MOLT tools
-
-
-
-| Tool | Usage | Supported sources | Release status |
-|---------------------------------------------------|-------------------------------------------|---------------------------------------|--------------------------------------------------------------------------------------------|
-| [Schema Conversion Tool](#schema-conversion-tool) | Schema conversion | PostgreSQL, MySQL, Oracle, SQL Server | GA |
-| [Fetch](#fetch) | Initial data load; continuous replication | PostgreSQL, MySQL, CockroachDB | GA |
-| [Verify](#verify) | Data validation | PostgreSQL, MySQL, CockroachDB | [Preview]({% link {{ site.current_cloud_version }}/cockroachdb-feature-availability.md %}) |
-
-### Schema Conversion Tool
-
-The [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) converts a source database schema to a CockroachDB-compatible schema. The supported Schema Conversion Tool sources are PostgreSQL, MySQL, Oracle, and SQL Server.
-
-The tool will convert [data definition (DDL) syntax]({% link {{ site.current_cloud_version }}/sql-statements.md %}#data-definition-statements) (excluding destructive statements such as `DROP`), identify [unimplemented features and syntax incompatibilities]({% link {{ site.current_cloud_version }}/migration-overview.md %}#unimplemented-features-and-syntax-incompatibilities) in the schema, and suggest edits according to CockroachDB [best practices]({% link {{ site.current_cloud_version }}/migration-overview.md %}#schema-design-best-practices).
-
-### Fetch
-
-[MOLT Fetch]({% link molt/molt-fetch.md %}) moves data from a source database into CockroachDB. Data is moved via one-time bulk ingestion, which is optionally followed by continuously streaming replication. The supported Fetch sources are PostgreSQL, MySQL, and CockroachDB.
-
-You can use MOLT Fetch to [load test data]({% link {{ site.current_cloud_version }}/migration-overview.md %}#load-test-data) into CockroachDB, enabling you to [test your application queries]({% link {{ site.current_cloud_version }}/migration-overview.md %}#validate-queries) on CockroachDB. When you're ready to [conduct the migration]({% link {{ site.current_cloud_version }}/migration-overview.md %}#conduct-the-migration) in a production environment, use MOLT Fetch to move your source data to CockroachDB. You can also enable continuous replication of any changes on the source database to CockroachDB.
-
-### Verify
-
-[MOLT Verify]({% link molt/molt-verify.md %}) checks for discrepancies between the source and target schemas and data. It verifies that table structures, column definitions, and row values match between the source and target. The supported Verify sources are PostgreSQL, MySQL, and CockroachDB.
-
-Use MOLT Verify after loading data with [MOLT Fetch](#fetch) to confirm that the CockroachDB data matches the source.
-
-## See also
-
-- [Migration Overview]({% link {{ site.current_cloud_version }}/migration-overview.md %})
-- [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [MOLT Fetch]({% link molt/molt-fetch.md %})
-- [MOLT Verify]({% link molt/molt-verify.md %})
\ No newline at end of file
diff --git a/src/current/molt/molt-verify.md b/src/current/molt/molt-verify.md
index 1276a4d715e..aade818f5dc 100644
--- a/src/current/molt/molt-verify.md
+++ b/src/current/molt/molt-verify.md
@@ -9,7 +9,7 @@ docs_area: migrate
{% include feature-phases/preview.md %}
{{site.data.alerts.end}}
-MOLT Verify checks for data discrepancies between a source database and CockroachDB during a [database migration]({% link {{site.current_cloud_version}}/migration-overview.md %}).
+MOLT Verify checks for data discrepancies between a source database and CockroachDB during a [database migration]({% link molt/migration-overview.md %}).
The tool performs the following verifications to ensure data integrity during a migration:
@@ -25,8 +25,8 @@ For a demo of MOLT Verify, watch the following video:
The following databases are currently supported:
-- [PostgreSQL]({% link {{site.current_cloud_version}}/migrate-from-postgres.md %})
-- [MySQL]({% link {{site.current_cloud_version}}/migrate-from-mysql.md %})
+- PostgreSQL 12-14
+- MySQL 5.7, 8.0, and later
- CockroachDB
## Installation
@@ -115,7 +115,7 @@ When verification completes, the output displays a summary message like the foll
## Docker usage
-{% include {{ page.version.version }}/molt/molt-docker.md %}
+{% include molt/molt-docker.md %}
## Known limitations
@@ -130,8 +130,6 @@ The following limitation is specific to MySQL:
## See also
-- [MOLT Fetch]({% link molt/molt-fetch.md %})
-- [Migration Overview]({% link {{site.current_cloud_version}}/migration-overview.md %})
-- [Migrate from PostgreSQL]({% link {{site.current_cloud_version}}/migrate-from-postgres.md %})
-- [Migrate from MySQL]({% link {{site.current_cloud_version}}/migrate-from-mysql.md %})
-- [Migrate from CSV]({% link {{site.current_cloud_version}}/migrate-from-csv.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate to CockroachDB]({% link molt/migrate-to-cockroachdb.md %})
+- [MOLT Fetch]({% link molt/molt-fetch.md %})
\ No newline at end of file
diff --git a/src/current/releases/molt.md b/src/current/releases/molt.md
index b4a04359aca..7870b71852a 100644
--- a/src/current/releases/molt.md
+++ b/src/current/releases/molt.md
@@ -5,7 +5,7 @@ toc: true
docs_area: releases
---
-This page has details about each release of the following [MOLT (Migrate Off Legacy Technology) tools]({% link molt/molt-overview.md %}):
+This page has details about each release of the following [MOLT (Migrate Off Legacy Technology) tools]({% link molt/migration-overview.md %}):
- [Fetch]({% link molt/molt-fetch.md %})
- [Verify]({% link molt/molt-verify.md %})
diff --git a/src/current/v21.2/migrate-from-mysql.md b/src/current/v21.2/migrate-from-mysql.md
index 811e9e5631a..b7287459663 100644
--- a/src/current/v21.2/migrate-from-mysql.md
+++ b/src/current/v21.2/migrate-from-mysql.md
@@ -5,9 +5,6 @@ toc: true
docs_area: migrate
---
-{{site.data.alerts.callout_danger}}
-The instructions on this page require updates. For updated guidance, see the [current documentation](../stable/migrate-from-mysql.html).
-{{site.data.alerts.end}}
This page has instructions for migrating data from MySQL to CockroachDB using [`IMPORT`](import.html)'s support for reading [`mysqldump`][mysqldump] files.
diff --git a/src/current/v21.2/migrate-from-postgres.md b/src/current/v21.2/migrate-from-postgres.md
index 47d5576123d..59d14c3a43e 100644
--- a/src/current/v21.2/migrate-from-postgres.md
+++ b/src/current/v21.2/migrate-from-postgres.md
@@ -5,9 +5,6 @@ toc: true
docs_area: migrate
---
-{{site.data.alerts.callout_danger}}
-The instructions on this page require updates. For updated guidance, see the [current documentation](../stable/migrate-from-postgres.html).
-{{site.data.alerts.end}}
This page has instructions for migrating data from PostgreSQL to CockroachDB using [`IMPORT`][import]'s support for reading [`pg_dump`][pgdump] files.
diff --git a/src/current/v22.1/migrate-from-mysql.md b/src/current/v22.1/migrate-from-mysql.md
index f79108be377..a3db34e73d9 100644
--- a/src/current/v22.1/migrate-from-mysql.md
+++ b/src/current/v22.1/migrate-from-mysql.md
@@ -6,9 +6,6 @@ keywords: load data infile
docs_area: migrate
---
-{{site.data.alerts.callout_danger}}
-The instructions on this page require updates. For updated guidance, see the [current documentation](../stable/migrate-from-mysql.html).
-{{site.data.alerts.end}}
This page has instructions for migrating data from MySQL to CockroachDB using [`IMPORT`](import.html)'s support for reading [`mysqldump`][mysqldump] files.
diff --git a/src/current/v22.1/migrate-from-postgres.md b/src/current/v22.1/migrate-from-postgres.md
index 43c57bf0880..b96afc3bba9 100644
--- a/src/current/v22.1/migrate-from-postgres.md
+++ b/src/current/v22.1/migrate-from-postgres.md
@@ -6,9 +6,6 @@ keywords: copy
docs_area: migrate
---
-{{site.data.alerts.callout_danger}}
-The instructions on this page require updates. For updated guidance, see the [current documentation](../stable/migrate-from-postgres.html).
-{{site.data.alerts.end}}
This page has instructions for migrating data from PostgreSQL to CockroachDB using [`IMPORT`][import]'s support for reading [`pg_dump`][pgdump] files.
diff --git a/src/current/v22.2/migrate-from-mysql.md b/src/current/v22.2/migrate-from-mysql.md
index d935649f9fb..f3f04e7d270 100644
--- a/src/current/v22.2/migrate-from-mysql.md
+++ b/src/current/v22.2/migrate-from-mysql.md
@@ -6,9 +6,6 @@ keywords: load data infile
docs_area: migrate
---
-{{site.data.alerts.callout_danger}}
-The instructions on this page require updates. For updated guidance, see the [current documentation](../stable/migrate-from-mysql.html).
-{{site.data.alerts.end}}
This page has instructions for migrating data from MySQL to CockroachDB using [`IMPORT`](import.html)'s support for reading [`mysqldump`][mysqldump] files.
diff --git a/src/current/v22.2/migrate-from-postgres.md b/src/current/v22.2/migrate-from-postgres.md
index 2ba0503bb07..c08cb748e7b 100644
--- a/src/current/v22.2/migrate-from-postgres.md
+++ b/src/current/v22.2/migrate-from-postgres.md
@@ -6,9 +6,6 @@ keywords: copy
docs_area: migrate
---
-{{site.data.alerts.callout_danger}}
-The instructions on this page require updates. For updated guidance, see the [current documentation](../stable/migrate-from-postgres.html).
-{{site.data.alerts.end}}
This page has instructions for migrating data from PostgreSQL to CockroachDB using [`IMPORT`][import]'s support for reading [`pg_dump`][pgdump] files.
diff --git a/src/current/v23.1/aws-dms.md b/src/current/v23.1/aws-dms.md
index dcd06b60d5b..acd79cd79e9 100644
--- a/src/current/v23.1/aws-dms.md
+++ b/src/current/v23.1/aws-dms.md
@@ -41,7 +41,7 @@ Complete the following items before starting the DMS migration:
- Manually create all schema objects in the target CockroachDB cluster. If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, you can [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema.
- - All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ - All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
- Drop all [constraints]({% link {{ page.version.version }}/constraints.md %}) per the [AWS DMS best practices](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.Performance). You can recreate them after the [full load completes](#step-3-verify-the-migration). AWS DMS can create a basic schema, but does not create [indexes]({% link {{ page.version.version }}/indexes.md %}) or constraints such as [foreign keys]({% link {{ page.version.version }}/foreign-key.md %}) and [defaults]({% link {{ page.version.version }}/default-value.md %}).
@@ -412,7 +412,7 @@ The `BatchApplyEnabled` setting can improve replication performance and is recom
## See Also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %})
- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
diff --git a/src/current/v23.1/build-a-java-app-with-cockroachdb-hibernate.md b/src/current/v23.1/build-a-java-app-with-cockroachdb-hibernate.md
index c17157046d3..f6e4c60db7f 100644
--- a/src/current/v23.1/build-a-java-app-with-cockroachdb-hibernate.md
+++ b/src/current/v23.1/build-a-java-app-with-cockroachdb-hibernate.md
@@ -130,9 +130,9 @@ APP: getAccountBalance(2) --> 350.00
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v23.1/build-a-java-app-with-cockroachdb.md b/src/current/v23.1/build-a-java-app-with-cockroachdb.md
index 0de093fc87b..441fb2ad148 100644
--- a/src/current/v23.1/build-a-java-app-with-cockroachdb.md
+++ b/src/current/v23.1/build-a-java-app-with-cockroachdb.md
@@ -269,9 +269,9 @@ props.setProperty("options", "-c sql_safe_updates=true -c statement_timeout=30")
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v23.1/build-a-python-app-with-cockroachdb-sqlalchemy.md b/src/current/v23.1/build-a-python-app-with-cockroachdb-sqlalchemy.md
index 41053f8988d..69ef20d0bc1 100644
--- a/src/current/v23.1/build-a-python-app-with-cockroachdb-sqlalchemy.md
+++ b/src/current/v23.1/build-a-python-app-with-cockroachdb-sqlalchemy.md
@@ -204,9 +204,9 @@ Instead, we recommend breaking your transaction into smaller units of work (or "
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code that uses an ORM and use the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}) such as are generated by calls to [`session.bulk_save_objects()`](https://docs.sqlalchemy.org/orm/session_api.html?highlight=bulk_save_object#sqlalchemy.orm.session.Session.bulk_save_objects).
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Prefer the query builder
diff --git a/src/current/v23.1/cockroach-import.md b/src/current/v23.1/cockroach-import.md
index 9874f3d3016..ec1c29096b2 100644
--- a/src/current/v23.1/cockroach-import.md
+++ b/src/current/v23.1/cockroach-import.md
@@ -37,8 +37,8 @@ $ cockroach import --help
## Supported Formats
-- [`pgdump`]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [`mysqldump`]({% link {{ page.version.version }}/migrate-from-mysql.md %})
+- [`pgdump`]({% link molt/migrate-to-cockroachdb.md %})
+- [`mysqldump`]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
## Flags
@@ -110,5 +110,5 @@ successfully imported table test_table from pgdump file /Users/maxroach/Desktop/
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
- [`IMPORT`]({% link {{ page.version.version }}/import.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
diff --git a/src/current/v23.1/cockroach-nodelocal-upload.md b/src/current/v23.1/cockroach-nodelocal-upload.md
index 890ecbcb9db..f36afd6284d 100644
--- a/src/current/v23.1/cockroach-nodelocal-upload.md
+++ b/src/current/v23.1/cockroach-nodelocal-upload.md
@@ -95,6 +95,5 @@ Then, you can use the file to [`IMPORT`]({% link {{ page.version.version }}/impo
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
- [Troubleshooting Overview]({% link {{ page.version.version }}/troubleshooting-overview.md %})
-- [Import Data]({% link {{ page.version.version }}/migration-overview.md %})
- [`IMPORT`]({% link {{ page.version.version }}/import.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
diff --git a/src/current/v23.1/copy.md b/src/current/v23.1/copy.md
index aa825fdc003..144da73b4ac 100644
--- a/src/current/v23.1/copy.md
+++ b/src/current/v23.1/copy.md
@@ -358,10 +358,10 @@ You can copy CSV data into CockroachDB using the following methods:
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [`EXPORT`]({% link {{ page.version.version }}/export.md %})
- [Install a Driver or ORM Framework]({% link {{ page.version.version }}/install-client-drivers.md %})
{% comment %}
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
{% endcomment %}
\ No newline at end of file
diff --git a/src/current/v23.1/debezium.md b/src/current/v23.1/debezium.md
index ea2eb513b50..a9d14707ea4 100644
--- a/src/current/v23.1/debezium.md
+++ b/src/current/v23.1/debezium.md
@@ -116,7 +116,7 @@ Once all of the [prerequisite steps](#before-you-begin) are completed, you can u
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v23.1/frequently-asked-questions.md b/src/current/v23.1/frequently-asked-questions.md
index 1e2cae3d8e5..06d79245016 100644
--- a/src/current/v23.1/frequently-asked-questions.md
+++ b/src/current/v23.1/frequently-asked-questions.md
@@ -149,7 +149,7 @@ Note, however, that the protocol used doesn't significantly impact how easy it i
### Can a PostgreSQL or MySQL application be migrated to CockroachDB?
-Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}) or [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
+Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}) or [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
We also fully support [importing your data via CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}).
diff --git a/src/current/v23.1/goldengate.md b/src/current/v23.1/goldengate.md
index 7fd572094e4..30ee23c17df 100644
--- a/src/current/v23.1/goldengate.md
+++ b/src/current/v23.1/goldengate.md
@@ -514,7 +514,7 @@ Run the steps in this section on a machine and in a directory where Oracle Golde
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v23.1/import-into.md b/src/current/v23.1/import-into.md
index e6d6960dcbf..3e05357cb0e 100644
--- a/src/current/v23.1/import-into.md
+++ b/src/current/v23.1/import-into.md
@@ -158,7 +158,7 @@ You can control the `IMPORT` process's behavior using any of the following key-v
For examples showing how to use these options, see the [Examples section]({% link {{ page.version.version }}/import-into.md %}#examples).
-For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link molt/migration-overview.md %}).
## View and control import jobs
@@ -297,6 +297,6 @@ For more information about importing data from Avro, including examples, see [Mi
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
diff --git a/src/current/v23.1/import-performance-best-practices.md b/src/current/v23.1/import-performance-best-practices.md
index 7834c7650ba..a52f3cef522 100644
--- a/src/current/v23.1/import-performance-best-practices.md
+++ b/src/current/v23.1/import-performance-best-practices.md
@@ -162,9 +162,9 @@ If you cannot both split and sort your dataset, the performance of either split
## See also
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Migrate from Oracle]({% link {{ page.version.version }}/migrate-from-oracle.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
diff --git a/src/current/v23.1/import.md b/src/current/v23.1/import.md
index 16e6904ff34..290fdfccca4 100644
--- a/src/current/v23.1/import.md
+++ b/src/current/v23.1/import.md
@@ -30,7 +30,7 @@ To import data into a new table, use [`CREATE TABLE`]({% link {{ page.version.ve
- `IMPORT` is a blocking statement. To run an import job asynchronously, use the [`DETACHED`](#options-detached) option.
- `IMPORT` cannot be used within a [rolling upgrade]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}).
- Certain `IMPORT TABLE` statements that defined the table schema inline are **not** supported in v22.1 and later versions. These include running `IMPORT TABLE ... CREATE USING` and `IMPORT TABLE` with any non-bundle format (`CSV`, `DELIMITED`, `PGCOPY`, or `AVRO`) data types. Instead, use `CREATE TABLE` and `IMPORT INTO`; see this [example]({% link {{ page.version.version }}/import-into.md %}#import-into-a-new-table-from-a-csv-file) for more detail.
-- For instructions and working examples on how to migrate data from other databases, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+- For instructions and working examples on how to migrate data from other databases, see the [Migration Overview]({% link molt/migration-overview.md %}).
- `IMPORT` cannot directly import data to `REGIONAL BY ROW` tables that are part of [multi-region databases]({% link {{ page.version.version }}/multiregion-overview.md %}). Instead, use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) which supports importing into `REGIONAL BY ROW` tables.
{{site.data.alerts.callout_success}}
@@ -98,7 +98,7 @@ Key | Context | Value
For examples showing how to use these options, see the [Examples](#examples) section below.
-For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link molt/migration-overview.md %}).
## Requirements
@@ -184,7 +184,7 @@ IMPORT PGDUMP 's3://{BUCKET NAME}/{customers.sql}?AWS_ACCESS_KEY_ID={ACCESS KEY}
WITH ignore_unsupported_statements;
~~~
-For this command to succeed, you need to have created the dump file with specific flags to `pg_dump`, and use the `WITH ignore_unsupported_statements` clause. For more information, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For this command to succeed, you need to have created the dump file with specific flags to `pg_dump`, and use the `WITH ignore_unsupported_statements` clause. For more information, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
### Import a table from a PostgreSQL database dump
@@ -197,7 +197,7 @@ IMPORT TABLE employees
If the table schema specifies foreign keys into tables that do not exist yet, the `WITH skip_foreign_keys` option may be needed. For more information, see the list of [import options](#import-options).
-For this command to succeed, you need to have created the dump file with specific flags to `pg_dump`. For more information, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For this command to succeed, you need to have created the dump file with specific flags to `pg_dump`. For more information, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
### Import a MySQL database dump
@@ -206,7 +206,7 @@ For this command to succeed, you need to have created the dump file with specifi
IMPORT MYSQLDUMP 's3://{BUCKET NAME}/{employees-full.sql}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';
~~~
-For more detailed information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more detailed information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Import a table from a MySQL database dump
@@ -218,7 +218,7 @@ IMPORT TABLE employees
If the table schema specifies foreign keys into tables that do not exist yet, the `WITH skip_foreign_keys` option may be needed. For more information, see the list of [import options](#import-options).
-For more detailed information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more detailed information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Import a limited number of rows
@@ -376,12 +376,12 @@ CSV DATA ('s3://{BUCKET NAME}/{customer-data}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS
## See also
- [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
{% comment %} Reference Links {% endcomment %}
-[postgres]: {% link {{ page.version.version }}/migrate-from-postgres.md %}
-[mysql]: {% link {{ page.version.version }}/migrate-from-mysql.md %}
+[postgres]: {% link molt/migrate-to-cockroachdb.md %}
+[mysql]: {% link molt/migrate-to-cockroachdb.md %}?filters=mysql
diff --git a/src/current/v23.1/index.md b/src/current/v23.1/index.md
index 2008d323859..010a71f91f4 100644
--- a/src/current/v23.1/index.md
+++ b/src/current/v23.1/index.md
@@ -99,11 +99,11 @@ docs_area:
diff --git a/src/current/v23.1/insert-data.md b/src/current/v23.1/insert-data.md
index 2115fb4068c..b0964de875d 100644
--- a/src/current/v23.1/insert-data.md
+++ b/src/current/v23.1/insert-data.md
@@ -101,7 +101,7 @@ conn.commit()
## Bulk insert
-If you need to get a lot of data into a CockroachDB cluster quickly, use the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement instead of sending SQL [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) from application code. It will be much faster because it bypasses the SQL layer altogether and writes directly to the data store using low-level commands. For instructions, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+If you need to get a lot of data into a CockroachDB cluster quickly, use the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement instead of sending SQL [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) from application code. It will be much faster because it bypasses the SQL layer altogether and writes directly to the data store using low-level commands. For instructions, see the [Migration Overview]({% link molt/migration-overview.md %}).
{% include {{page.version.version}}/sql/limit-row-size.md %}
@@ -109,7 +109,7 @@ If you need to get a lot of data into a CockroachDB cluster quickly, use the [`I
Reference information related to this task:
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`IMPORT`]({% link {{ page.version.version }}/import.md %})
- [Import performance]({% link {{ page.version.version }}/import.md %}#performance)
- [`INSERT`]({% link {{ page.version.version }}/insert.md %})
diff --git a/src/current/v23.1/known-limitations.md b/src/current/v23.1/known-limitations.md
index d4f373a1ddb..44cc38e3b9d 100644
--- a/src/current/v23.1/known-limitations.md
+++ b/src/current/v23.1/known-limitations.md
@@ -454,7 +454,7 @@ CockroachDB supports efficiently storing and querying [spatial data]({% link {{
- CockroachDB does not support using [schema name prefixes]({% link {{ page.version.version }}/sql-name-resolution.md %}#how-name-resolution-works) to refer to [data types]({% link {{ page.version.version }}/data-types.md %}) with type modifiers (e.g., `public.geometry(linestring, 4326)`). Instead, use fully-unqualified names to refer to data types with type modifiers (e.g., `geometry(linestring,4326)`).
- Note that, in [`IMPORT PGDUMP`]({% link {{ page.version.version }}/migrate-from-postgres.md %}) output, [`GEOMETRY` and `GEOGRAPHY`]({% link {{ page.version.version }}/export-spatial-data.md %}) data type names are prefixed by `public.`. If the type has a type modifier, you must remove the `public.` from the type name in order for the statements to work in CockroachDB.
+ Note that, in [`IMPORT PGDUMP`]({% link molt/migrate-to-cockroachdb.md %}) output, [`GEOMETRY` and `GEOGRAPHY`]({% link {{ page.version.version }}/export-spatial-data.md %}) data type names are prefixed by `public.`. If the type has a type modifier, you must remove the `public.` from the type name in order for the statements to work in CockroachDB.
[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/56492)
diff --git a/src/current/v23.1/migrate-from-avro.md b/src/current/v23.1/migrate-from-avro.md
index 8e161b48acd..0ff5cc230bc 100644
--- a/src/current/v23.1/migrate-from-avro.md
+++ b/src/current/v23.1/migrate-from-avro.md
@@ -218,8 +218,8 @@ You will need to run [`ALTER TABLE ... ADD CONSTRAINT`]({% link {{ page.version.
- [`IMPORT`][import]
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV][csv]
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.1/migrate-from-csv.md b/src/current/v23.1/migrate-from-csv.md
index fb8fe7adaa8..c47ba89adba 100644
--- a/src/current/v23.1/migrate-from-csv.md
+++ b/src/current/v23.1/migrate-from-csv.md
@@ -178,8 +178,8 @@ IMPORT INTO employees (emp_no, birth_date, first_name, last_name, gender, hire_d
- [`IMPORT`][import]
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.1/migrate-from-geojson.md b/src/current/v23.1/migrate-from-geojson.md
index dc27add32b0..26fffdca0d4 100644
--- a/src/current/v23.1/migrate-from-geojson.md
+++ b/src/current/v23.1/migrate-from-geojson.md
@@ -92,9 +92,9 @@ IMPORT PGDUMP ('http://localhost:3000/tanks.sql');
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.1/migrate-from-geopackage.md b/src/current/v23.1/migrate-from-geopackage.md
index 0eaffa746ae..bf7c73f27a2 100644
--- a/src/current/v23.1/migrate-from-geopackage.md
+++ b/src/current/v23.1/migrate-from-geopackage.md
@@ -104,9 +104,9 @@ IMPORT PGDUMP ('http://localhost:3000/springs.sql');
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.1/migrate-from-mysql.md b/src/current/v23.1/migrate-from-mysql.md
deleted file mode 100644
index 50a7133eaf3..00000000000
--- a/src/current/v23.1/migrate-from-mysql.md
+++ /dev/null
@@ -1,410 +0,0 @@
----
-title: Migrate from MySQL
-summary: Learn how to migrate data from MySQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page describes basic considerations and provides a basic [example](#example-migrate-world-to-cockroachdb) of migrating data from MySQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [MySQL migration example](#example-migrate-world-to-cockroachdb) on this page demonstrates how to use [MOLT tooling]({% link {{ page.version.version }}/migration-overview.md %}#molt) to update the MySQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-You will likely need to make application changes due to differences in syntax between MySQL and CockroachDB. Along with the [general considerations in the migration overview]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), also consider the following MySQL-specific information as you develop your migration plan.
-
-When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), MySQL syntax that cannot automatically be converted will be displayed in the [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report). These may include the following.
-
-#### String case sensitivity
-
-Strings are case-insensitive in MySQL and case-sensitive in CockroachDB. You may need to edit your MySQL data to get the results you expect from CockroachDB. For example, you may have been doing string comparisons in MySQL that will need to be changed to work with CockroachDB.
-
-For more information about the case sensitivity of strings in MySQL, see [Case Sensitivity in String Searches](https://dev.mysql.com/doc/refman/8.0/en/case-sensitivity.html) from the MySQL documentation. For more information about CockroachDB strings, see [`STRING`]({% link {{ page.version.version }}/string.md %}).
-
-#### Identifier case sensitivity
-
-Identifiers are case-sensitive in MySQL and [case-insensitive in CockroachDB]({% link {{ page.version.version }}/keywords-and-identifiers.md %}#identifiers). When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either keep case sensitivity by enclosing identifiers in double quotes, or make identifiers case-insensitive by converting them to lowercase.
-
-#### `AUTO_INCREMENT` attribute
-
-The MySQL [`AUTO_INCREMENT`](https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html) attribute, which creates sequential column values, is not supported in CockroachDB. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), columns with `AUTO_INCREMENT` can be converted to use [sequences]({% link {{ page.version.version }}/create-sequence.md %}), `UUID` values with [`gen_random_uuid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions), or unique `INT8` values using [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
-
-{{site.data.alerts.callout_info}}
-Changing a column type during schema conversion will cause [MOLT Verify]({% link molt/molt-verify.md %}) to identify a type mismatch during [data validation](#step-3-validate-the-migrated-data). This is expected behavior.
-{{site.data.alerts.end}}
-
-#### `ENUM` type
-
-MySQL `ENUM` types are defined in table columns. On CockroachDB, [`ENUM`]({% link {{ page.version.version }}/enum.md %}) is a standalone type. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either deduplicate the `ENUM` definitions or create a separate type for each column.
-
-#### `TINYINT` type
-
-`TINYINT` data types are not supported in CockroachDB. The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) automatically converts `TINYINT` columns to [`INT2`]({% link {{ page.version.version }}/int.md %}) (`SMALLINT`).
-
-#### Geospatial types
-
-MySQL geometry types are not converted to CockroachDB [geospatial types]({% link {{ page.version.version }}/spatial-data-overview.md %}#spatial-objects) by the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql). They should be manually converted to the corresponding types in CockroachDB.
-
-#### `FIELD` function
-
-The MYSQL `FIELD` function is not supported in CockroachDB. Instead, you can use the [`array_position`]({% link {{ page.version.version }}/functions-and-operators.md %}#array-functions) function, which returns the index of the first occurrence of element in the array.
-
-Example usage:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT array_position(ARRAY[4,1,3,2],1);
-~~~
-
-~~~
- array_position
-------------------
- 2
-(1 row)
-~~~
-
-While MySQL returns 0 when the element is not found, CockroachDB returns `NULL`. So if you are using the `ORDER BY` clause in a statement with the `array_position` function, the caveat is that sort is applied even when the element is not found. As a workaround, you can use the [`COALESCE`]({% link {{ page.version.version }}/functions-and-operators.md %}#conditional-and-function-like-operators) operator.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT * FROM table_a ORDER BY COALESCE(array_position(ARRAY[4,1,3,2],5),999);
-~~~
-
-## Load MySQL data
-
-You can use one of the following methods to migrate MySQL data to CockroachDB:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-The [following example](#example-migrate-world-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `world` to CockroachDB
-
-The following steps demonstrate [converting a schema]({% link {{ page.version.version }}/migration-overview.md %}#convert-the-schema), performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that MySQL data can be properly migrated to CockroachDB and your application queries tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses the [MySQL `world` data set](https://dev.mysql.com/doc/index-other.html) and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the [`world` data set](https://dev.mysql.com/doc/index-other.html).
-
-1. Create the `world` database on your MySQL instance, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqlsh -uroot --sql --file {path}/world-db/world.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) and create the {{ site.data.products.standard }} cluster.
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the MySQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) to convert the `world` schema for compatibility with CockroachDB. The schema has three tables: `city`, `country`, and `countrylanguage`.
-
-1. Dump the MySQL `world` schema with the following [`mysqldump`](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-sql-format.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot --no-data world > world_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) in the {{ site.data.products.cloud }} Console and [add a new MySQL schema]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema).
-
- For **AUTO_INCREMENT Conversion Option**, select the [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions) option. This will convert the `ID` column in the `city` table, which has MySQL type `int` and `AUTO_INCREMENT`, to a CockroachDB [`INT8`]({% link {{ page.version.version }}/int.md %}) type with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). For context on this option, see [`AUTO_INCREMENT` attribute](#auto_increment-attribute).
-
- The `UUID` and `unique_rowid()` options are each preferred for [different use cases]({% link {{ page.version.version }}/sql-faqs.md %}#what-are-the-differences-between-uuid-sequences-and-unique_rowid). For this example, selecting the `unique_rowid()` option makes [loading the data](#step-2-load-the-mysql-data) more straightforward in a later step, since both the source and target columns will have integer types.
-
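-    For context, the converted `city` definition will look approximately like the following sketch. The column list here is based on the standard MySQL `world` schema, and the tool's exact output may differ:
-
-    ~~~ sql
-    CREATE TABLE city (
-        id INT8 NOT NULL DEFAULT unique_rowid(),
-        name VARCHAR(35) DEFAULT '' NOT NULL,
-        countrycode VARCHAR(3) DEFAULT '' NOT NULL,
-        district VARCHAR(20) DEFAULT '' NOT NULL,
-        population INT8 DEFAULT 0 NOT NULL,
-        PRIMARY KEY (id)
-    );
-    ~~~
-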
-1. [Upload `world_schema.sql`]({% link cockroachcloud/migrations-page.md %}?filters=mysql#upload-file) to the Schema Conversion Tool.
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}?filters=mysql#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report) shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your MySQL database credentials]({% link cockroachcloud/migrations-page.md %}?filters=mysql#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the MySQL database.
- {{site.data.alerts.end}}
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Before you migrate the converted schema, click the **Statements** tab to view the [Statements list]({% link cockroachcloud/migrations-page.md %}?filters=mysql#statements-list). Scroll down to the `CREATE TABLE countrylanguage` statement and edit the statement to add a [collation]({% link {{ page.version.version }}/collate.md %}) (`COLLATE en_US`) on the `language` column:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE TABLE countrylanguage (
- countrycode VARCHAR(3) DEFAULT '' NOT NULL,
- language VARCHAR(30) COLLATE en_US DEFAULT '' NOT NULL,
- isofficial countrylanguage_isofficial_enum
- DEFAULT 'F'
- NOT NULL,
- percentage DECIMAL(4,1) DEFAULT '0.0' NOT NULL,
- PRIMARY KEY (countrycode, language),
- INDEX countrycode (countrycode),
- CONSTRAINT countrylanguage_ibfk_1
- FOREIGN KEY (countrycode) REFERENCES country (code)
- )
- ~~~
-
- Click **Save**.
-
-    This is a workaround to prevent [data validation](#step-3-validate-the-migrated-data) from failing due to collation mismatches. For more details, see the [MOLT Verify]({% link molt/molt-verify.md %}#known-limitations) documentation.
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#migrate-the-schema) to create a new {{ site.data.products.standard }} cluster with the converted schema. Name the database `world`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
-1. Open a SQL shell to the CockroachDB `world` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. For large imports, Cockroach Labs recommends [removing indexes prior to loading data]({% link {{ page.version.version }}/import-performance-best-practices.md %}#import-into-a-schema-with-secondary-indexes) and recreating them afterward. This provides increased visibility into the import progress and the ability to retry each step independently.
-
- Show the indexes on the `world` database:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW INDEXES FROM DATABASE world;
- ~~~
-
- The `countrycode` [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) indexes on the `city` and `countrylanguage` tables can be removed for now:
-
- ~~~
- table_name | index_name | index_schema | non_unique | seq_in_index | column_name | definition | direction | storing | implicit | visible
- ---------------------------------+-------------------------------------------------+--------------+------------+--------------+-----------------+-----------------+-----------+---------+----------+----------
- ...
- city | countrycode | public | t | 2 | id | id | ASC | f | t | t
- city | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- ...
- countrylanguage | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- countrylanguage | countrycode | public | t | 2 | language | language | ASC | f | t | t
- ...
- ~~~
-
-1. Drop the `countrycode` indexes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX city@countrycode;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX countrylanguage@countrycode;
- ~~~
-
- You will recreate the indexes after [loading the data](#step-2-load-the-mysql-data).
-
-### Step 2. Load the MySQL data
-
-Load the `world` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-When MySQL dumps data, the tables are not ordered by [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) dependencies. It is therefore best to disable foreign key checks when loading data into CockroachDB, and to revalidate the foreign keys on each table after the data is loaded.
-
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump the MySQL `world` data with the following [`mysqldump` command](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-delimited-text.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot -T /{path}/world-data --fields-terminated-by ',' --fields-enclosed-by '"' --fields-escaped-by '\' --no-create-info world
- ~~~
-
- This dumps each table in your database to the path `/{path}/world-data` as a `.txt` file in CSV format.
- - `--fields-terminated-by` specifies that values are separated by commas instead of tabs.
- - `--fields-enclosed-by` and `--fields-escaped-by` specify the characters that enclose and escape column values, respectively.
- - `--no-create-info` dumps only the [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements).
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
-    Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `world` cluster, using the same command as before:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each MySQL dump file into the corresponding table in the `world` database.
-
- The following commands point to a public S3 bucket where the `world` data dump files are hosted for this example. The `nullif='\N'` clause specifies that `\N` values, which are produced by the `mysqldump` command, should be read as [`NULL`]({% link {{ page.version.version }}/null-handling.md %}).
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
-
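-    Before running the full imports, you can optionally smoke-test one file with `row_limit`. The following trial command is illustrative only and is not part of this example; if you run it, clear the imported rows (for example, with `TRUNCATE`) before the full import so the table starts empty:
-
-    ~~~ sql
-    IMPORT INTO countrylanguage
-      CSV DATA (
-        'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/countrylanguage.txt'
-      )
-      WITH
-        nullif='\N',
-        row_limit = '10';
-    ~~~
-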
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
- IMPORT INTO countrylanguage
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/countrylanguage.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782070812344321 | succeeded | 1 | 984 | 984 | 171555
- ~~~
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
- IMPORT INTO country
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/country.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 887782114360819713 | succeeded | 1 | 239 | 0 | 33173
- ~~~
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
- IMPORT INTO city
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/city.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782154421567489 | succeeded | 1 | 4079 | 4079 | 288140
- ~~~
-
- {{site.data.alerts.callout_info}}
- After [converting the schema](#step-1-convert-the-mysql-schema) to work with CockroachDB, the `id` column in `city` is an [`INT8`]({% link {{ page.version.version }}/int.md %}) with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). However, `unique_rowid()` values are only generated when new rows are [inserted]({% link {{ page.version.version }}/insert.md %}) without an `id` value. The MySQL data dump still includes the sequential `id` values generated by the MySQL [`AUTO_INCREMENT` attribute](#auto_increment-attribute), and these are imported with the `IMPORT INTO` command.
-
- In an actual migration, you can either update the primary key into a [multi-column key]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or add a new primary key column that [generates unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
- {{site.data.alerts.end}}
-
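-    As a purely hypothetical illustration (not performed in this example), converting `city` to a multi-column primary key could look like the following:
-
-    ~~~ sql
-    ALTER TABLE city ALTER PRIMARY KEY USING COLUMNS (countrycode, id);
-    ~~~
-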
-1. Recreate the indexes that you deleted before importing the data:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON city (countrycode, id);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON countrylanguage (countrycode, language);
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `city` and `countrylanguage`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM city;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+-----------------+-----------------+--------------------------------------------------------------+------------
- city | city_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- city | city_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM countrylanguage;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- ------------------+------------------------+-----------------+--------------------------------------------------------------+------------
- countrylanguage | countrylanguage_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- countrylanguage | countrylanguage_pkey | PRIMARY KEY | PRIMARY KEY (countrycode ASC, language ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE city VALIDATE CONSTRAINT city_ibfk_1;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE countrylanguage VALIDATE CONSTRAINT countrylanguage_ibfk_1;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on MySQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the [JDBC connection string for MySQL](https://dev.mysql.com/doc/connector-j/8.1/en/connector-j-reference-jdbc-url-format.html) with `--source` and the SQL connection string for CockroachDB with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- ./molt verify --source 'jdbc:mysql://{user}:{password}@tcp({host}:{port})/world' --target 'postgresql://{user}:{password}@{host}:{port}/world?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following warnings indicate that the MySQL and CockroachDB columns have different types. This is an expected result, since some columns were [changed to `ENUM` types](#enum-type) when you [converted the schema](#step-1-convert-the-mysql-schema):
-
- ~~~
- WRN mismatching table definition mismatch_info="column type mismatch on continent: text vs country_continent_enum" table_name=country table_schema=public
- WRN mismatching table definition mismatch_info="column type mismatch on isofficial: text vs countrylanguage_isofficial_enum" table_name=countrylanguage table_schema=public
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.country (shard 1/1): truth rows seen: 239, success: 239, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.countrylanguage (shard 1/1): truth rows seen: 984, success: 984, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.city (shard 1/1): truth rows seen: 4079, success: 4079, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v23.1/migrate-from-openstreetmap.md b/src/current/v23.1/migrate-from-openstreetmap.md
index 27e4012bf91..14d88a94aac 100644
--- a/src/current/v23.1/migrate-from-openstreetmap.md
+++ b/src/current/v23.1/migrate-from-openstreetmap.md
@@ -129,9 +129,9 @@ Osm2pgsql took 2879s overall
- [Migrate from GeoPackages]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.1/migrate-from-oracle.md b/src/current/v23.1/migrate-from-oracle.md
index b69940a37c8..bd69674970d 100644
--- a/src/current/v23.1/migrate-from-oracle.md
+++ b/src/current/v23.1/migrate-from-oracle.md
@@ -388,8 +388,8 @@ You will have to refactor Oracle SQL and functions that do not comply with [ANSI
- [`IMPORT`]({% link {{ page.version.version }}/import.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.1/migrate-from-postgres.md b/src/current/v23.1/migrate-from-postgres.md
deleted file mode 100644
index c7f76a26208..00000000000
--- a/src/current/v23.1/migrate-from-postgres.md
+++ /dev/null
@@ -1,295 +0,0 @@
----
-title: Migrate from PostgreSQL
-summary: Learn how to migrate data from PostgreSQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page describes basic considerations and provides an [example](#example-migrate-frenchtowns-to-cockroachdb) of migrating data from PostgreSQL to CockroachDB. The information on this page assumes that you have read the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [PostgreSQL migration example](#example-migrate-frenchtowns-to-cockroachdb) on this page demonstrates how to use [MOLT tooling]({% link {{ page.version.version }}/migration-overview.md %}#molt) to update the PostgreSQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax.
-
-For syntax differences, refer to [Features that differ from PostgreSQL]({% link {{ page.version.version }}/postgresql-compatibility.md %}#features-that-differ-from-postgresql).
-
-### Unsupported features
-
-The following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-## Load PostgreSQL data
-
-You can use one of the following methods to migrate PostgreSQL data to CockroachDB:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
-
-The [following example](#example-migrate-frenchtowns-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `frenchtowns` to CockroachDB
-
-The following steps demonstrate [converting a schema]({% link {{ page.version.version }}/migration-overview.md %}#convert-the-schema), performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that PostgreSQL data can be properly migrated to CockroachDB and your application queries tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses a modified version of the PostgreSQL `french-towns-communes-francais` data set and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the `frenchtowns` data set:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- curl -O https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/frenchtowns.sql
- ~~~
-
-1. Create a `frenchtowns` database on your PostgreSQL instance:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- createdb frenchtowns
- ~~~
-
-1. Load the `frenchtowns` data into PostgreSQL, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -a -f frenchtowns.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) and [create the CockroachDB {{ site.data.products.standard }} cluster]({% link cockroachcloud/create-your-cluster.md %}).
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the PostgreSQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert the `frenchtowns` schema for compatibility with CockroachDB. The schema has three tables: `regions`, `departments`, and `towns`.
-
-1. Dump the PostgreSQL `frenchtowns` schema with the following [`pg_dump`](https://www.postgresql.org/docs/15/app-pgdump.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- pg_dump --schema-only frenchtowns > frenchtowns_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) in the {{ site.data.products.cloud }} Console and [add a new PostgreSQL schema]({% link cockroachcloud/migrations-page.md %}#convert-a-schema).
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}#summary-report) shows that there are errors under **Required Fixes**. You must resolve these in order to migrate the schema to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your PostgreSQL database credentials]({% link cockroachcloud/migrations-page.md %}#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the PostgreSQL database.
- {{site.data.alerts.end}}
-
-1. `Missing user: postgres` errors indicate that the SQL user `postgres` is missing from CockroachDB. Click **Add User** to create the user.
-
-1. `Miscellaneous Errors` includes a `SELECT pg_catalog.set_config('search_path', '', false)` statement that can safely be removed. Click **Delete** to remove the statement from the schema.
-
-1. Review the `CREATE SEQUENCE` statements listed under **Suggestions**. Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
-
- For this example, **Acknowledge** the suggestion without making further changes. In practice, after [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration) to CockroachDB, you would modify your CockroachDB schema to use unique and non-sequential primary keys.
-
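-    For example, a hypothetical refactor could replace a sequence-backed integer key with a generated UUID (`uuid_id` is an illustrative column name, not part of this example):
-
-    ~~~ sql
-    ALTER TABLE towns ADD COLUMN uuid_id UUID NOT NULL DEFAULT gen_random_uuid();
-    ALTER TABLE towns ALTER PRIMARY KEY USING COLUMNS (uuid_id);
-    ~~~
-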
-1. Click **Retry Migration**. The **Summary Report** now shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema) to create a new CockroachDB {{ site.data.products.standard }} cluster with the converted schema. Name the database `frenchtowns`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
-### Step 2. Load the PostgreSQL data
-
-Load the `frenchtowns` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump each table in the PostgreSQL `frenchtowns` database to a CSV-formatted file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY regions TO stdout DELIMITER ',' CSV;" > regions.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY departments TO stdout DELIMITER ',' CSV;" > departments.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY towns TO stdout DELIMITER ',' CSV;" > towns.csv
- ~~~
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
-    Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `frenchtowns` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/frenchtowns?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each PostgreSQL dump file into the corresponding table in the `frenchtowns` database.
-
- The following commands point to a public S3 bucket where the `frenchtowns` data dump files are hosted for this example.
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO regions
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/regions.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753132185026561 | succeeded | 1 | 26 | 52 | 2338
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO departments
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/departments.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753147892465665 | succeeded | 1 | 100 | 300 | 11166
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO towns
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/towns.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+-------+---------------+----------
- 893753162225680385 | succeeded | 1 | 36684 | 36684 | 2485007
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `departments` and `towns`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM departments;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- --------------+-------------------------+-----------------+---------------------------------------------------------+------------
- departments | departments_capital_key | UNIQUE | UNIQUE (capital ASC) | t
- departments | departments_code_key | UNIQUE | UNIQUE (code ASC) | t
- departments | departments_name_key | UNIQUE | UNIQUE (name ASC) | t
- departments | departments_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- departments | departments_region_fkey | FOREIGN KEY | FOREIGN KEY (region) REFERENCES regions(code) NOT VALID | f
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM towns;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+---------------------------+-----------------+-----------------------------------------------------------------+------------
- towns | towns_code_department_key | UNIQUE | UNIQUE (code ASC, department ASC) | t
- towns | towns_department_fkey | FOREIGN KEY | FOREIGN KEY (department) REFERENCES departments(code) NOT VALID | f
- towns | towns_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE departments VALIDATE CONSTRAINT departments_region_fkey;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE towns VALIDATE CONSTRAINT towns_department_fkey;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on PostgreSQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the PostgreSQL connection string with `--source` and the CockroachDB connection string with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
-    ./molt verify --source 'postgresql://{username}:{password}@{host}:{port}/frenchtowns' --target 'postgresql://{username}:{password}@{host}:{port}/frenchtowns?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.regions (shard 1/1): truth rows seen: 26, success: 26, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.departments (shard 1/1): truth rows seen: 100, success: 100, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 10000, success: 10000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 20000, success: 20000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 30000, success: 30000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.towns (shard 1/1): truth rows seen: 36684, success: 36684, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v23.1/migrate-from-shapefiles.md b/src/current/v23.1/migrate-from-shapefiles.md
index 0e9d9e2befe..1783f435cdf 100644
--- a/src/current/v23.1/migrate-from-shapefiles.md
+++ b/src/current/v23.1/migrate-from-shapefiles.md
@@ -114,9 +114,9 @@ IMPORT PGDUMP ('http://localhost:3000/tornado-points.sql') WITH ignore_unsupport
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.1/migration-overview.md b/src/current/v23.1/migration-overview.md
deleted file mode 100644
index b21e68e707b..00000000000
--- a/src/current/v23.1/migration-overview.md
+++ /dev/null
@@ -1,353 +0,0 @@
----
-title: Migration Overview
-summary: Learn how to migrate your database to a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page provides an overview of how to migrate a database to CockroachDB.
-
-A database migration broadly consists of the following phases:
-
-1. [Develop a migration plan:](#develop-a-migration-plan) Evaluate your [downtime requirements](#approach-to-downtime) and [cutover strategy](#cutover-strategy), [size the CockroachDB cluster](#capacity-planning) that you will migrate to, and become familiar with the [application changes](#application-changes) that you need to make for CockroachDB.
-1. [Prepare for migration:](#prepare-for-migration) Run a [pre-mortem](#run-a-migration-pre-mortem) (optional), set up [metrics](#set-up-monitoring-and-alerting) (optional), [convert your schema](#convert-the-schema), perform an [initial load of test data](#load-test-data), [validate your application queries](#validate-queries) for correctness and performance, and [perform a dry run](#perform-a-dry-run) of the migration.
-1. [Conduct the migration:](#conduct-the-migration) Use a [lift-and-shift](#lift-and-shift) or ["zero-downtime"](#zero-downtime) method to migrate your data, application, and users to CockroachDB.
-1. [Complete the migration:](#complete-the-migration) Notify the appropriate parties and summarize the details.
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Develop a migration plan
-
-Consider the following as you plan your migration:
-
-- Who will lead and perform the migration? Which teams are involved, and which aspects are they responsible for?
-- Which internal and external parties do you need to inform about the migration?
-- Which external or third-party tools (e.g., microservices, analytics, payment processors, aggregators, CRMs) must be tested and migrated along with your application?
-- What portion of the data can be inconsistent, and for how long? What is the tolerable percentage of latency and application errors? This comprises your "error budget".
-- What is the tolerable [downtime](#approach-to-downtime), and what [cutover strategy](#cutover-strategy) will you use to switch users to CockroachDB?
-- Will you set up a "dry-run" environment to test the migration? How many [dry-run migrations](#perform-a-dry-run) will you perform?
-- When is the best time to perform this migration to be minimally disruptive to the database's users?
-- What is your target date for completing the migration?
-
-Create a document that summarizes the intent of the migration, the technical details, and the team members involved.
-
-### Approach to downtime
-
-A primary consideration is whether your application can tolerate downtime:
-
-- What types of operations can you suspend: reads, writes, or both?
-- How long can operations be suspended: seconds, minutes, or hours?
-- Should writes be queued while service is suspended?
-
-Take the following two use cases:
-
-- An application that is primarily in use during daytime business hours may be able to be taken offline during a predetermined timeframe without disrupting the user experience and business continuity. In this case, your migration can occur in a [downtime window](#downtime-window).
-- An application that must serve writes continuously cannot tolerate a long downtime window. In this case, you will aim for [zero or near-zero downtime](#minimal-downtime).
-
-#### Downtime window
-
-If your application can tolerate downtime, then it will likely be easiest to take your application offline, load a snapshot of the data into CockroachDB, and perform a [cutover](#cutover-strategy) to CockroachDB once the data is migrated. This is known as a *lift-and-shift* migration.
-
-A lift-and-shift approach is the most straightforward. However, it's important to fully [prepare the migration](#prepare-for-migration) in order to be certain that it can be completed successfully during the downtime window.
-
-- *Scheduled downtime* is made known to your users in advance. Once you have [prepared for the migration](#prepare-for-migration), you take the application offline, [conduct the migration](#conduct-the-migration), and bring the application back online on CockroachDB. To succeed, you should estimate the amount of downtime required to migrate your data, and ideally schedule the downtime outside of peak hours. Scheduling downtime is easiest if your application traffic is "periodic", meaning that it varies by the time of day, day of week, or day of month.
-
-- *Unscheduled downtime* impacts as few customers as possible, ideally without impacting their regular usage. If your application is intentionally offline at certain times (e.g., outside business hours), you can migrate the data without users noticing. Alternatively, if your application's functionality is not time-sensitive (e.g., it sends batched messages or emails), then you can queue requests while your system is offline, and process those requests after completing the migration to CockroachDB.
-
-- *Reduced functionality* takes some, but not all, application functionality offline. For example, you can disable writes but not reads while you migrate the application data, and queue data to be written after completing the migration.
-
-For an overview of lift-and-shift migrations to CockroachDB, see [Lift and Shift](#lift-and-shift). For considerations and details about the pros and cons of this approach, see [Migration Strategy: Lift and Shift]({% link {{ page.version.version }}/migration-strategy-lift-and-shift.md %}).
-
-#### Minimal downtime
-
-If your application cannot tolerate downtime, then you should aim for a "zero-downtime" approach. "Zero" means that downtime is reduced to either an absolute minimum or zero, such that users do not notice the migration.
-
-The minimum possible downtime depends on whether you can tolerate inconsistency in the migrated data:
-
-- *Consistent* migrations reduce downtime to an absolute minimum (from roughly 30 seconds down to sub-second) while keeping data synchronized between the source database and CockroachDB. **Consistency requires downtime.** In this approach, downtime occurs right before [cutover](#cutover-strategy), as you drain the remaining transactions from the source database to CockroachDB.
-
-- *Inconsistent* migrations can reduce downtime to zero. These require the most preparation, and typically allow read/write traffic to both databases for at least a small amount of time, thereby sacrificing consistency for availability. Without stopping application traffic, you perform an immediate [cutover](#cutover-strategy), while assuming that some writes will not be replicated to CockroachDB. You may want to manually reconcile these data inconsistencies after switching over.
-
-For an overview of zero-downtime migrations to CockroachDB, see [Zero Downtime](#zero-downtime). {% comment %}For details, see [Migration Strategy: Zero Downtime](migration-strategy-zero-downtime).{% endcomment %}
-
-### Cutover strategy
-
-*Cutover* is the process of switching application traffic from the source database to CockroachDB. Consider the following:
-
-- Will you perform the cutover all at once, or incrementally (e.g., by a subset of users, workloads, or tables)?
-
- - Switching all at once generally follows a [downtime window](#downtime-window) approach. Once the data is migrated to CockroachDB, you "flip the switch" to route application traffic to the new database, thus ending downtime.
-
- - Migrations with [zero or near-zero downtime](#minimal-downtime) can switch either all at once or incrementally, since writes are being synchronously replicated and the system can be gradually migrated as you [validate the queries](#validate-queries).
-
-- Will you have a fallback plan that allows you to reverse ("roll back") the migration from CockroachDB to the source database? A fallback plan enables you to fix any issues or inconsistencies that you encounter during or after cutover, then retry the migration.
-
-#### All at once (no rollback)
-
-This is the simplest cutover method, since you won't need to develop and execute a fallback plan.
-
-As part of [migration preparations](#prepare-for-migration), you will have already [tested your queries and performance](#test-query-results-and-performance) to have confidence to migrate without a rollback option. After moving all of the data from the source database to CockroachDB, you switch application traffic to CockroachDB.
-
-#### All at once (rollback)
-
-This method adds a fallback plan to the simple [all-at-once](#all-at-once-no-rollback) cutover.
-
-In addition to moving data to CockroachDB, data is also replicated from CockroachDB back to the source database in case you need to roll back the migration. Continuous replication is already possible when performing a [zero-downtime migration](#zero-downtime) that dual writes to both databases. Otherwise, you will need to ensure that data is replicated in the reverse direction at cutover. The challenge is to find a point at which both the source database and CockroachDB are in sync, so that you can roll back to that point. You should also avoid falling into a circular state where updates continuously travel back and forth between the source database and CockroachDB.
-
-#### Phased rollout
-
-Also known as the ["strangler fig"](https://en.wikipedia.org/wiki/Strangler_fig) approach, a phased rollout migrates a portion of your users, workloads, or tables over time. Until all users, workloads, and/or tables are migrated, the application will continue to write to both databases.
-
-This approach enables you to take your time with the migration, and to pause or roll back as you [monitor the migration](#set-up-monitoring-and-alerting) for issues and performance. Rolling back the migration involves the same caveats and considerations as for the [all-at-once](#all-at-once-rollback) method. Because you can control the blast radius of your migration by routing traffic for a subset of users or services, a phased rollout has reduced business risk and user impact at the cost of increased implementation risk. You will need to figure out how to migrate in phases while ensuring that your application is unaffected.
-
-### Capacity planning
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-Determine the size of the target CockroachDB cluster. To do this, consider your data volume and workload characteristics:
-
-- What is the total size of the data you will migrate?
-- How many active [application connections]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling) will be running in the CockroachDB environment?
-
-Use this information to size the CockroachDB cluster you will create. If you are migrating to a CockroachDB {{ site.data.products.cloud }} cluster, see [Plan Your Cluster]({% link cockroachcloud/plan-your-cluster.md %}) for details:
-
-- For CockroachDB {{ site.data.products.standard }} and {{ site.data.products.basic }}, your cluster will scale automatically to meet your storage and usage requirements. Refer to the [CockroachDB {{ site.data.products.standard }}]({% link cockroachcloud/plan-your-cluster.md %}) and [CockroachDB {{ site.data.products.basic }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) documentation to learn how to limit your resource consumption.
-- For CockroachDB {{ site.data.products.advanced }}, refer to the [example]({% link cockroachcloud/plan-your-cluster-advanced.md %}#example) that shows how your data volume, storage requirements, and replication factor affect the recommended node size (number of vCPUs per node) and total number of nodes on the cluster.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.cloud }} [Production Checklist]({% link cockroachcloud/production-checklist.md %}#sql-connection-handling).
-
-If you are migrating to a CockroachDB {{ site.data.products.core }} cluster:
-
-- Refer to our [sizing methodology]({% link {{ page.version.version }}/recommended-production-settings.md %}#sizing) to determine the total number of vCPUs on the cluster and the number of vCPUs per node (which determines the number of nodes on the cluster).
-- Refer to our [storage recommendations]({% link {{ page.version.version }}/recommended-production-settings.md %}#storage) to determine the amount of storage to provision on each node.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.core }} [Production Checklist]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling).
-
-### Application changes
-
-As you develop your migration plan, consider the application changes that you will need to make. These may relate to the following:
-
-- [Designing a schema that is compatible with CockroachDB.](#schema-design-best-practices)
-- [Creating effective indexes on CockroachDB.](#index-creation-best-practices)
-- [Handling transaction contention.](#handling-transaction-contention)
-- [Unimplemented features and syntax incompatibilities.](#unimplemented-features-and-syntax-incompatibilities)
-
-#### Schema design best practices
-
-Follow these recommendations when [converting your schema](#convert-the-schema) for compatibility with CockroachDB.
-
-{{site.data.alerts.callout_success}}
-The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) automatically identifies potential improvements to your schema.
-{{site.data.alerts.end}}
-
-- You should define an explicit primary key on every table. For more information, see [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices).
-
-- Do not use a sequence to define a primary key column. Instead, Cockroach Labs recommends that you use [multi-column primary keys]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or [auto-generating unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-functions-to-generate-unique-ids) for primary key columns.
-
-- By default on CockroachDB, `INT` is an alias for `INT8`, which creates 64-bit signed integers. Depending on your source database or application requirements, you may need to change the default integer size to `4` (32-bit), as shown in the sketch after this list. For example, [PostgreSQL defaults to 32-bit integers](https://www.postgresql.org/docs/9.6/datatype-numeric.html). For more information, see [Considerations for 64-bit signed integers]({% link {{ page.version.version }}/int.md %}#considerations-for-64-bit-signed-integers).
-
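-For example, the following minimal sketch makes `INT` resolve to a 32-bit integer for the current session by setting the `default_int_size` [session variable]({% link {{ page.version.version }}/set-vars.md %}):
-
-~~~ sql
-SET default_int_size = 4;
-~~~
-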
-#### Index creation best practices
-
-Review the [best practices for creating secondary indexes]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices) on CockroachDB.
-
-{% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %}
-
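-For instance, a hash-sharded secondary index on a sequential timestamp column might look like the following sketch (`events`, `ts`, and the index name are hypothetical):
-
-~~~ sql
-CREATE INDEX events_ts_idx ON events (ts) USING HASH;
-~~~
-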
-#### Handling transaction contention
-
-Optimize your queries against [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). You will likely encounter [transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) related to CockroachDB's [`SERIALIZABLE` isolation level]({% link {{ page.version.version }}/demo-serializable.md %}) when you [test application queries](#validate-queries), as well as transaction contention due to long-running transactions when you [conduct the migration](#conduct-the-migration) and bulk load data.
-
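-One common mitigation is to wrap transactions in CockroachDB's client-side retry protocol, sketched below. If any statement fails with a retry error (SQLSTATE `40001`), issue `ROLLBACK TO SAVEPOINT cockroach_restart` and re-run the transaction body:
-
-~~~ sql
-BEGIN;
-SAVEPOINT cockroach_restart;
-  -- Application reads and writes go here.
-RELEASE SAVEPOINT cockroach_restart;
-COMMIT;
-~~~
-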
-#### Unimplemented features and syntax incompatibilities
-
-Update your queries to resolve differences in functionality and SQL syntax.
-
-{{site.data.alerts.callout_success}}
-The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) automatically flags syntax incompatibilities and unimplemented features in your schema.
-{{site.data.alerts.end}}
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax. However, the following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-If your source database uses any of the preceding features, you may need to implement workarounds in your schema design, in your [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements), or in your application code.
-
-For more details on the CockroachDB SQL implementation, see [SQL Feature Support]({% link {{ page.version.version }}/sql-feature-support.md %}).
-
-## Prepare for migration
-
-Once you have a migration plan, prepare the team, application, source database, and CockroachDB cluster for the migration.
-
-### Run a migration "pre-mortem"
-
-{{site.data.alerts.callout_success}}
-This step is optional.
-{{site.data.alerts.end}}
-
-To minimize issues after [cutover](#cutover-strategy), compose a migration "pre-mortem":
-
-- Clearly describe the roles and processes of each team member performing the migration.
-- List the likely failure points and issues that you may encounter as you [conduct the migration](#conduct-the-migration).
-- Rank potential issues by severity, and identify ways to reduce risk.
-- Create a plan for implementing the actions that would most effectively reduce risk.
-
-### Set up monitoring and alerting
-
-{{site.data.alerts.callout_success}}
-This step is optional.
-{{site.data.alerts.end}}
-
-Based on the error budget you [defined in your migration plan](#develop-a-migration-plan), identify the metrics that you can use to measure your success criteria and set up monitoring for the migration. These metrics may be identical to those you normally use in production, but can also be specific to your migration needs.
-
-### Update the schema and queries
-
-In the following order:
-
-1. [Convert your schema](#convert-the-schema).
-1. [Load test data](#load-test-data).
-1. [Validate your application queries](#validate-queries).
-
-
-You can use the following MOLT (Migrate Off Legacy Technology) tools to simplify these steps:
-
-- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [MOLT Verify]({% link molt/molt-verify.md %})
-
-#### Convert the schema
-
-First, convert your database schema to an equivalent CockroachDB schema:
-
-- Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert your schema line-by-line. This requires a free [CockroachDB {{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}). The tool will convert the syntax, identify [unimplemented features and syntax incompatibilities](#unimplemented-features-and-syntax-incompatibilities) in the schema, and suggest edits according to CockroachDB [best practices](#schema-design-best-practices).
- {{site.data.alerts.callout_info}}
- The Schema Conversion Tool accepts `.sql` files from PostgreSQL, MySQL, Oracle, and Microsoft SQL Server.
- {{site.data.alerts.end}}
-
-- Alternatively, manually convert the schema according to our [schema design best practices](#schema-design-best-practices){% comment %}and data type mappings{% endcomment %}. You can also [export a partially converted schema]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool to finish the conversion manually.
-
-Then import the converted schema to a CockroachDB cluster:
-
-- For CockroachDB {{ site.data.products.cloud }}, use the Schema Conversion Tool to [migrate the converted schema to a new {{ site.data.products.cloud }} database]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema).
-- For CockroachDB {{ site.data.products.core }}, pipe the [data definition language (DDL)]({% link {{ page.version.version }}/sql-statements.md %}#data-definition-statements) directly into [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}). You can [export a converted schema file]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool. For a minimal example, see the sketch after this list.
- {{site.data.alerts.callout_success}}
- For the fastest performance, you can use a [local, single-node CockroachDB cluster]({% link {{ page.version.version }}/cockroach-start-single-node.md %}#start-a-single-node-cluster) to convert your schema and [check the results of queries](#test-query-results-and-performance).
- {{site.data.alerts.end}}
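-
-For example, here is a minimal sketch of piping a converted schema file into the target cluster. The connection string and the `converted_schema.sql` file name are placeholders:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Pipe the converted DDL into the cluster (connection string is a placeholder).
-cockroach sql --url "postgresql://root@localhost:26257/defaultdb?sslmode=disable" < converted_schema.sql
-~~~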
-
-#### Load test data
-
-{{site.data.alerts.callout_success}}
-Before moving data, Cockroach Labs recommends [dropping any indexes]({% link {{ page.version.version }}/drop-index.md %}) on the CockroachDB database and [recreating]({% link {{ page.version.version }}/create-index.md %}) them after the data is loaded. This avoids the overhead of maintaining indexes during the bulk load and speeds up the import.
-{{site.data.alerts.end}}
-
-After [converting the schema](#convert-the-schema), load your data into CockroachDB so that you can [test your application queries](#validate-queries). Then use one of the following methods to migrate the data (you may need to use additional tooling to extract and/or convert the data to an appropriate file format):
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %} Initial data loading during a migration typically does not run concurrently with application traffic, so the fact that `IMPORT INTO` takes the table offline usually has no observable availability impact. For a minimal example, see the sketch after this list.
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %} Within the tool, you can select the database tables to migrate to the test cluster.
-- {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
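-
-For example, here is a minimal `IMPORT INTO` sketch that loads CSV test data from cloud storage. The table, columns, and bucket path are placeholders; adjust them to your converted schema:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-IMPORT INTO employees (emp_no, first_name, last_name)
-    CSV DATA ('s3://{BUCKET NAME}/employees.csv?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}');
-~~~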
-
-#### Validate queries
-
-After you [load the test data](#load-test-data), validate your queries on CockroachDB. You can do this by [shadowing](#shadowing) or by [manually testing](#test-query-results-and-performance) the queries.
-
-##### Shadowing
-
-You can "shadow" your production workload by executing your source SQL statements on CockroachDB in parallel. You can then [test the queries](#test-query-results-and-performance) on CockroachDB for consistency, performance, and potential issues with the migration.
-
-Shadowing may not be necessary or practical for your workload. For example, because transactions are serialized on CockroachDB, shadowing will be of limited use for validating the performance of a high-throughput workload.
-
-##### Test query results and performance
-
-You can manually validate your queries by testing a subset of "critical queries" on an otherwise idle CockroachDB cluster:
-
-- Check the application logs for error messages and the API response time. If application requests are slower than expected, use the **SQL Activity** page on the [CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/statements-page.md %}) or [DB Console]({% link {{ page.version.version }}/ui-statements-page.md %}) to find the longest-running queries that are part of that application request. If necessary, tune the queries according to our best practices for [SQL performance]({% link {{ page.version.version }}/performance-best-practices-overview.md %}).
-
-- Compare the results of the queries and check that they are identical in both the source database and CockroachDB. To do this, you can use [MOLT Verify]({% link molt/molt-verify.md %}), as in the sketch below.
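-
-A minimal sketch of a MOLT Verify invocation, assuming a PostgreSQL source (the connection strings are placeholders):
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-molt verify \
-  --source 'postgres://{username}:{password}@{host}:{port}/{database}' \
-  --target 'postgres://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full'
-~~~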
-
-Test performance on a CockroachDB cluster that is appropriately [sized](#capacity-planning) for your workload:
-
-1. Run the application at single or very low concurrency and verify that its performance is acceptable. The cluster should be provisioned with more than enough resources to handle this workload, because you need to verify that the queries will be fast enough when there are zero resource bottlenecks.
-
-1. Run stress tests at, and ideally above, production concurrency and rate to verify that the system can handle unexpected spikes in load. This can also uncover [contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) issues that will appear during spikes in app load, which may require [application design changes](#handling-transaction-contention) to avoid. For one way to generate such load, see the sketch below.
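-
-One way to generate concurrent test load is the built-in [`cockroach workload`]({% link {{ page.version.version }}/cockroach-workload.md %}) tool. This is a hedged sketch; the concurrency value, duration, and connection string are placeholders, and replaying your own application workload is a more representative test:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Initialize and run the kv workload against the test cluster.
-cockroach workload init kv 'postgresql://root@localhost:26257?sslmode=disable'
-cockroach workload run kv --concurrency=64 --duration=5m 'postgresql://root@localhost:26257?sslmode=disable'
-~~~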
-
-### Perform a dry run
-
-To further minimize potential surprises when you conduct the migration, practice [cutover](#cutover-strategy) using your application and similar volumes of data on a "dry-run" environment. Use a test or development environment that is as similar as possible to production.
-
-Performing a dry run is highly recommended. In addition to demonstrating how long the migration may take, a dry run also helps to ensure that team members understand what they need to do during the migration, and that changes to the application are coordinated.
-
-## Conduct the migration
-
-Before proceeding, double-check that you are [prepared to migrate](#prepare-for-migration).
-
-Once you are ready to migrate, optionally [drop the database]({% link {{ page.version.version }}/drop-database.md %}) and delete the test cluster so that you can get a clean start:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP DATABASE {database-name} CASCADE;
-~~~
-
-Alternatively, [truncate]({% link {{ page.version.version }}/truncate.md %}) each table you used for testing to avoid having to recreate your schema:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-TRUNCATE {table-name} CASCADE;
-~~~
-
-Migrate your data to CockroachDB using the method that is appropriate for your [downtime requirements](#approach-to-downtime) and [cutover strategy](#cutover-strategy).
-
-### Lift and Shift
-
-With this method, consistency is achieved by performing the cutover only after all writes have been replicated from the source database to CockroachDB. This requires downtime, during which application traffic is stopped.
-
-The following is a high-level overview of the migration steps. For considerations and details about the pros and cons of this approach, see [Migration Strategy: Lift and Shift]({% link {{ page.version.version }}/migration-strategy-lift-and-shift.md %}).
-
-1. Stop application traffic to your source database. **This begins downtime.**
-1. Move data in one of the following ways:
- - {% include {{ page.version.version }}/migration/load-data-import-into.md %}
- - {% include {{ page.version.version }}/migration/load-data-third-party.md %}
- - {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
-1. After the data is migrated, you can use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. Perform a [cutover](#cutover-strategy) by resuming application traffic, now to CockroachDB.
-{% comment %}1. If you want the ability to [roll back](#all-at-once-rollback) the migration, replicate data back to the source database.{% endcomment %}
-
-### Zero Downtime
-
-With this method, downtime is minimized by performing the cutover while writes are still being replicated from the source database to CockroachDB. Any resulting inconsistencies are resolved through manual reconciliation.
-
-The following is a high-level overview of the migration steps. {% comment %}For details on this migration strategy, see [Migration Strategy: Zero Downtime]({% link {{ page.version.version }}/migration-strategy-lift-and-shift.md %}).{% endcomment %}
-
-To prioritize consistency and minimize downtime:
-
-1. {% include {{ page.version.version }}/migration/load-data-third-party.md %} Select the tool's option to **replicate ongoing changes** after performing the initial load of data into CockroachDB.
-1. As the data is migrating, you can use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. Once nearly all data from your source database has been moved to CockroachDB (for example, with a <1 second delay or <1000 rows), stop application traffic to your source database. **This begins downtime.**
-1. Wait for replication to CockroachDB to complete.
-1. Perform a [cutover](#cutover-strategy) by resuming application traffic, now to CockroachDB.
-
-To achieve zero downtime with inconsistency:
-
-1. {% include {{ page.version.version }}/migration/load-data-third-party.md %} Select the tool's option to replicate ongoing changes after performing the initial load of data into CockroachDB.
-1. As the data is migrating, you can use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. Once nearly all data from your source database has been moved to CockroachDB (for example, with a <1 second delay or <1000 rows), perform a [cutover](#cutover-strategy) by pointing application traffic to CockroachDB.
-1. Manually reconcile any inconsistencies caused by writes that were not replicated during the cutover.
-1. Close the connection to the source database when you are ready to finish the migration.
-
-## Complete the migration
-
-After you have successfully [conducted the migration](#conduct-the-migration):
-
-- Notify the teams and other stakeholders impacted by the migration.
-- Retire any test or development environments used to verify the migration.
-- Extend the document you created when [developing your migration plan](#develop-a-migration-plan) with any issues encountered and follow-up work that needs to be done.
-
-## See also
-
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
-- [PostgreSQL Compatibility]({% link {{ page.version.version }}/postgresql-compatibility.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Create a User-defined Schema]({% link {{ page.version.version }}/schema-design-schema.md %})
-- [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices)
-- [Secondary index best practices]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices)
-- [Transaction contention best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention)
-- [Back Up and Restore]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
diff --git a/src/current/v23.1/migration-strategy-lift-and-shift.md b/src/current/v23.1/migration-strategy-lift-and-shift.md
deleted file mode 100644
index 402f7126bec..00000000000
--- a/src/current/v23.1/migration-strategy-lift-and-shift.md
+++ /dev/null
@@ -1,139 +0,0 @@
----
-title: "Migration Strategy: Lift and Shift"
-summary: Learn about the 'Lift and Shift' data migration strategy
-toc: true
-docs_area: migrate
----
-
-There are multiple strategies for [migrating off legacy technology]({% link {{ page.version.version }}/migration-overview.md %}) to CockroachDB.
-
-This page discusses the ["Lift and Shift" strategy]({% link {{ page.version.version }}/migration-overview.md %}#lift-and-shift) for migrating your database, a commonly used approach also known as "Big Bang" (among other names), in which your data is moved in its entirety from a source system to a target system within a defined period of time. This typically involves some application downtime and can involve some service degradation.
-
-Lift and Shift may not be the right approach if strong application service continuity is required during the migration. It is a viable method if application downtime is permitted.
-
-{{site.data.alerts.callout_info}}
-The information on this page assumes you have already reviewed the [migration overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-## Pros and Cons
-
-On the spectrum of different data migration strategies, Lift and Shift has the following pros and cons. The terms "lower" and "higher" are not absolute, but relative to other approaches.
-
-Pros:
-
-- Conceptually straightforward.
-- Less complex: If you can afford some downtime, the overall effort and the chance of errors are usually lower.
-- Shorter time start-to-finish: In general, the more downtime you can afford, the shorter the overall migration project timeframe can be.
-- Lower technical risk: It does not involve running multiple systems alongside each other for an extended period of time.
-- Easy to practice [dry runs]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) of import/export using testing/non-production systems.
-- Good import/export tooling is available (e.g., external tools like: [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %}), [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %}), [Striim]({% link {{ page.version.version }}/striim.md %}); or internal tools like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), [`cockroach userfile upload`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %})).
-- If your application already has regularly scheduled maintenance windows, the migration downtime can occur within a window your customers already expect.
-
-Cons:
-
-- All or nothing: It either works or does not work; once you start, you have to finish or [roll back]({% link {{ page.version.version }}/migration-overview.md %}#all-at-once-rollback).
-- Higher project risk: The project **must** be completed to meet a given [downtime / service degradation window]({% link {{ page.version.version }}/migration-overview.md %}#downtime-window).
-- Application service continuity requirements must be relaxed (that is, application downtime or increased latency may be needed).
-
-## Process design considerations
-
-{{site.data.alerts.callout_info}}
-The high-level considerations in this section only refer to the data-loading portion of your migration. They assume you are following the steps in the overall migration process described in [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-Keep in mind the following considerations when designing a Lift and Shift data migration process.
-
-- [Decide on your data migration tooling.](#managed-migration)
-- [Decide which data formats you will use.](#data-formats)
-- [Design a restartable process.](#restartable)
-- [Design a scalable process.](#scalable)
-
-
-<a name="managed-migration"></a>
-### Decide on your data migration tooling
-
-If you plan to do your bulk data migration using a managed migration service, you must have a secure, publicly available CockroachDB cluster. CockroachDB supports the following [third-party migration services]({% link {{ page.version.version }}/third-party-database-tools.md %}#data-migration-tools):
-
-- [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Striim]({% link {{ page.version.version }}/striim.md %})
-
-{{site.data.alerts.callout_info}}
-Depending on the migration service you choose, [long-running transactions]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}#hanging-or-stuck-queries) can occur. In some cases, these queries will cause [transaction retry errors]({% link {{ page.version.version }}/common-errors.md %}#restart-transaction). If you encounter these errors while migrating to CockroachDB using a managed migration service, please reach out to our [Support Resources]({% link {{ page.version.version }}/support-resources.md %}).
-{{site.data.alerts.end}}
-
-If you will not be using a managed migration service, see the following sections for more information on how to use SQL statements like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), etc.
-
-
-<a name="data-formats"></a>
-### Decide which data formats and storage media you will use
-
-It's important to decide which data formats, storage media, and database features you will use to migrate your data.
-
-Data formats that can be imported by CockroachDB include:
-
-- [SQL]({% link {{ page.version.version }}/schema-design-overview.md %}) for the [schema import]({% link cockroachcloud/migrations-page.md %}).
-- [CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}) for table data.
-- [Avro]({% link {{ page.version.version }}/migrate-from-avro.md %}) for table data.
-
-The storage media you use for export and import can be intermediate data files or data streamed over the network. Options include:
-
-- Local `userfile` storage for small tables (see [`cockroach userfile upload`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %}), [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})).
-- Cloud blob storage (see [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}), [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})).
-- Direct wire transfers (see [managed migration services](#managed-migration)).
-
-Database features for export/import from the source and target databases can include:
-
-- Tools for exporting from the source database may include `pg_dump --schema-only` and `COPY ... TO`, `mysqldump`, `expdp`, etc.
-- For import into CockroachDB, use [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}) or [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}). For a bulk data migration, most users should use `IMPORT INTO` because the tables will be offline anyway, and `IMPORT INTO` can [perform the data import much faster]({% link {{ page.version.version }}/import-performance-best-practices.md %}) than `COPY FROM`. For a minimal export sketch, see the example after this list.
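-
-For example, here is a minimal sketch of exporting a PostgreSQL schema and one table's data for import. The database and table names are placeholders:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Export the schema only, for conversion with the Schema Conversion Tool.
-pg_dump --schema-only {database} > schema.sql
-# Export one table's data as CSV for use with IMPORT INTO.
-psql {database} -c "COPY {table} TO STDOUT WITH CSV" > {table}.csv
-~~~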
-
-Decide which of the options above will meet your requirements while resulting in a process that is [restartable](#restartable) and [scalable](#scalable).
-
-
-<a name="restartable"></a>
-### Design a restartable process
-
-To have a higher chance of success, design your data migration process so it can be stopped and restarted from an intermediate state at any time during the process. This will help minimize errors and avoid wasted effort.
-
-Keep the following requirements in mind as you design a restartable import/export process:
-
-- Bulk migrate data in manageably sized batches for your source and target systems.
- - This is a best practice. If something happens to the target cluster during import, the amount of wasted work will be minimized.
-- Implement progress/state keeping with process restart capabilities, as in the sketch after this list.
-- Make sure your export process is idempotent: the same input to your export process should return the same output data.
-- If possible, export and import the majority of your data before taking down the source database. This can ensure that you only have to deal with the incremental changes from your last import to complete the migration process.
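-
-The following is a minimal sketch of a restartable import loop. The batch file names, table name, and state file are hypothetical; each successfully imported batch is recorded, so a restart skips completed batches:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Hypothetical: data was exported in batches and uploaded to the bucket.
-for f in employees-0.csv employees-1.csv employees-2.csv; do
-  # Skip batches already recorded in the state file.
-  grep -qx "$f" imported.state 2>/dev/null && continue
-  cockroach sql --url "$TARGET_URL" \
-    -e "IMPORT INTO employees CSV DATA ('s3://{BUCKET NAME}/${f}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}');" \
-    && echo "$f" >> imported.state
-done
-~~~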
-
-
-<a name="scalable"></a>
-### Design a scalable and performant process
-
-Once your process is [restartable and resilient to failures](#design-a-restartable-process), it's important to also make sure it will scale to the needs of your data set. The larger the data set you are migrating to CockroachDB, the more important the performance and scalability of your process will be.
-
-Keep the following requirements in mind:
-
-- Schema and data should be imported separately.
-- Your process should handle multiple files across multiple export/import streams concurrently (see the example after this list).
- - For best performance, these files should contain presorted, disjoint data sets.
-- Benchmark the performance of your migration process to help ensure it will complete within the allotted downtime window.
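-
-For example, `IMPORT INTO` can consume multiple presorted, disjoint files in a single statement, which allows CockroachDB to ingest them in parallel. The bucket path and file names are placeholders:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-IMPORT INTO employees (emp_no, first_name, last_name)
-    CSV DATA (
-      's3://{BUCKET NAME}/employees-0.csv?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}',
-      's3://{BUCKET NAME}/employees-1.csv?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
-    );
-~~~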
-
-For more information about import performance, see [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Migrate with AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
-- [Migrate and Replicate Data with Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Migrate and Replicate Data with Striim]({% link {{ page.version.version }}/striim.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Back Up and Restore]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
-- [Export data with Changefeeds]({% link {{ page.version.version }}/export-data-with-changefeeds.md %})
-- [`COPY`]({% link {{ page.version.version }}/copy.md %})
-- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
-- [Client connection parameters]({% link {{ page.version.version }}/connection-parameters.md %})
-
-
-{% comment %} eof {% endcomment %}
diff --git a/src/current/v23.1/qlik.md b/src/current/v23.1/qlik.md
index bdb646d7b20..da53e969bfd 100644
--- a/src/current/v23.1/qlik.md
+++ b/src/current/v23.1/qlik.md
@@ -68,7 +68,7 @@ Complete the following items before using Qlik Replicate:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables, or add transformation rules. If you make substantial schema changes, the Qlik Replicate migration may fail.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -96,7 +96,7 @@ In the Qlik Replicate interface, CockroachDB is configured as a PostgreSQL **sou
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v23.1/sql-faqs.md b/src/current/v23.1/sql-faqs.md
index 1d971591089..98b8c51eb34 100644
--- a/src/current/v23.1/sql-faqs.md
+++ b/src/current/v23.1/sql-faqs.md
@@ -13,7 +13,7 @@ docs_area: get_started
{{site.data.alerts.callout_info}}
You can also use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement to bulk-insert CSV data into an existing table.
{{site.data.alerts.end}}
-- To bulk-insert data into a new table, the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement performs better than `INSERT`. `IMPORT` can also be used to [migrate data from other databases]({% link {{ page.version.version }}/migration-overview.md %}) like MySQL, Oracle, and PostgreSQL.
+- To bulk-insert data into a new table, the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement performs better than `INSERT`. `IMPORT` can also be used to [migrate data from other databases]({% link molt/migration-overview.md %}) like MySQL, Oracle, and PostgreSQL.
## How do I auto-generate unique row IDs in CockroachDB?
diff --git a/src/current/v23.1/striim.md b/src/current/v23.1/striim.md
index 436d7440cbd..67526c8ff9d 100644
--- a/src/current/v23.1/striim.md
+++ b/src/current/v23.1/striim.md
@@ -37,7 +37,7 @@ Complete the following items before using Striim:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -110,7 +110,7 @@ To perform continuous replication of ongoing changes, create a Striim applicatio
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v23.1/take-and-restore-encrypted-backups.md b/src/current/v23.1/take-and-restore-encrypted-backups.md
index 75bca9d9ae1..3df3d641c4e 100644
--- a/src/current/v23.1/take-and-restore-encrypted-backups.md
+++ b/src/current/v23.1/take-and-restore-encrypted-backups.md
@@ -407,7 +407,7 @@ To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({%
- [Take Full and Incremental Backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Take and Restore Locality-aware Backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %})
- [Take Backups with Revision History and Restore from a Point-in-time]({% link {{ page.version.version }}/take-backups-with-revision-history-and-restore-from-a-point-in-time.md %})
-- [`IMPORT`]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.1/take-and-restore-locality-aware-backups.md b/src/current/v23.1/take-and-restore-locality-aware-backups.md
index cb6736f61a5..bdff82ab7c5 100644
--- a/src/current/v23.1/take-and-restore-locality-aware-backups.md
+++ b/src/current/v23.1/take-and-restore-locality-aware-backups.md
@@ -237,6 +237,6 @@ RESUME JOB 27536791415282;
- [Take Full and Incremental Backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Take and Restore Encrypted Backups]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %})
- [Take Backups with Revision History and Restore from a Point-in-time]({% link {{ page.version.version }}/take-backups-with-revision-history-and-restore-from-a-point-in-time.md %})
-- [`IMPORT`]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.1/take-backups-with-revision-history-and-restore-from-a-point-in-time.md b/src/current/v23.1/take-backups-with-revision-history-and-restore-from-a-point-in-time.md
index 4904d50f203..cc59f5c060a 100644
--- a/src/current/v23.1/take-backups-with-revision-history-and-restore-from-a-point-in-time.md
+++ b/src/current/v23.1/take-backups-with-revision-history-and-restore-from-a-point-in-time.md
@@ -53,7 +53,7 @@ To view the available backup subdirectories you can restore from, use [`SHOW BAC
- [Take Full and Incremental Backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Take and Restore Encrypted Backups]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %})
- [Take and Restore Locality-aware Backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %})
-- [`IMPORT`]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.1/take-full-and-incremental-backups.md b/src/current/v23.1/take-full-and-incremental-backups.md
index 4d9b5ef36d4..ae06c5e39f2 100644
--- a/src/current/v23.1/take-full-and-incremental-backups.md
+++ b/src/current/v23.1/take-full-and-incremental-backups.md
@@ -384,7 +384,7 @@ To create a table with `exclude_data_from_backup`, see [Create a table with data
- [Take and Restore Encrypted Backups]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %})
- [Take and Restore Locality-aware Backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %})
- [Take Backups with Revision History and Restore from a Point-in-time]({% link {{ page.version.version }}/take-backups-with-revision-history-and-restore-from-a-point-in-time.md %})
-- [`IMPORT`]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
- [Expire Past Backups]({% link {{ page.version.version }}/expire-past-backups.md %})
diff --git a/src/current/v23.2/aws-dms.md b/src/current/v23.2/aws-dms.md
index c3ed0f071a3..da914d0b0e7 100644
--- a/src/current/v23.2/aws-dms.md
+++ b/src/current/v23.2/aws-dms.md
@@ -41,7 +41,7 @@ Complete the following items before starting the DMS migration:
- Manually create all schema objects in the target CockroachDB cluster. If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, you can [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema.
- - All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ - All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
- Drop all [constraints]({% link {{ page.version.version }}/constraints.md %}) per the [AWS DMS best practices](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.Performance). You can recreate them after the [full load completes](#step-3-verify-the-migration). AWS DMS can create a basic schema, but does not create [indexes]({% link {{ page.version.version }}/indexes.md %}) or constraints such as [foreign keys]({% link {{ page.version.version }}/foreign-key.md %}) and [defaults]({% link {{ page.version.version }}/default-value.md %}).
@@ -408,7 +408,7 @@ The `BatchApplyEnabled` setting can improve replication performance and is recom
## See Also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %})
- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
diff --git a/src/current/v23.2/build-a-java-app-with-cockroachdb-hibernate.md b/src/current/v23.2/build-a-java-app-with-cockroachdb-hibernate.md
index c17157046d3..f6e4c60db7f 100644
--- a/src/current/v23.2/build-a-java-app-with-cockroachdb-hibernate.md
+++ b/src/current/v23.2/build-a-java-app-with-cockroachdb-hibernate.md
@@ -130,9 +130,9 @@ APP: getAccountBalance(2) --> 350.00
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v23.2/build-a-java-app-with-cockroachdb.md b/src/current/v23.2/build-a-java-app-with-cockroachdb.md
index 0de093fc87b..441fb2ad148 100644
--- a/src/current/v23.2/build-a-java-app-with-cockroachdb.md
+++ b/src/current/v23.2/build-a-java-app-with-cockroachdb.md
@@ -269,9 +269,9 @@ props.setProperty("options", "-c sql_safe_updates=true -c statement_timeout=30")
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v23.2/build-a-python-app-with-cockroachdb-sqlalchemy.md b/src/current/v23.2/build-a-python-app-with-cockroachdb-sqlalchemy.md
index 41053f8988d..69ef20d0bc1 100644
--- a/src/current/v23.2/build-a-python-app-with-cockroachdb-sqlalchemy.md
+++ b/src/current/v23.2/build-a-python-app-with-cockroachdb-sqlalchemy.md
@@ -204,9 +204,9 @@ Instead, we recommend breaking your transaction into smaller units of work (or "
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code that uses an ORM and use the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}) such as are generated by calls to [`session.bulk_save_objects()`](https://docs.sqlalchemy.org/orm/session_api.html?highlight=bulk_save_object#sqlalchemy.orm.session.Session.bulk_save_objects).
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Prefer the query builder
diff --git a/src/current/v23.2/cockroach-import.md b/src/current/v23.2/cockroach-import.md
index 9874f3d3016..ec1c29096b2 100644
--- a/src/current/v23.2/cockroach-import.md
+++ b/src/current/v23.2/cockroach-import.md
@@ -37,8 +37,8 @@ $ cockroach import --help
## Supported Formats
-- [`pgdump`]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [`mysqldump`]({% link {{ page.version.version }}/migrate-from-mysql.md %})
+- [`pgdump`]({% link molt/migrate-to-cockroachdb.md %})
+- [`mysqldump`]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
## Flags
@@ -110,5 +110,5 @@ successfully imported table test_table from pgdump file /Users/maxroach/Desktop/
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
- [`IMPORT`]({% link {{ page.version.version }}/import.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
diff --git a/src/current/v23.2/cockroach-nodelocal-upload.md b/src/current/v23.2/cockroach-nodelocal-upload.md
index 890ecbcb9db..f36afd6284d 100644
--- a/src/current/v23.2/cockroach-nodelocal-upload.md
+++ b/src/current/v23.2/cockroach-nodelocal-upload.md
@@ -95,6 +95,5 @@ Then, you can use the file to [`IMPORT`]({% link {{ page.version.version }}/impo
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
- [Troubleshooting Overview]({% link {{ page.version.version }}/troubleshooting-overview.md %})
-- [Import Data]({% link {{ page.version.version }}/migration-overview.md %})
- [`IMPORT`]({% link {{ page.version.version }}/import.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
diff --git a/src/current/v23.2/copy.md b/src/current/v23.2/copy.md
index aa825fdc003..144da73b4ac 100644
--- a/src/current/v23.2/copy.md
+++ b/src/current/v23.2/copy.md
@@ -358,10 +358,10 @@ You can copy CSV data into CockroachDB using the following methods:
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [`EXPORT`]({% link {{ page.version.version }}/export.md %})
- [Install a Driver or ORM Framework]({% link {{ page.version.version }}/install-client-drivers.md %})
{% comment %}
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
{% endcomment %}
\ No newline at end of file
diff --git a/src/current/v23.2/debezium.md b/src/current/v23.2/debezium.md
index ea2eb513b50..a9d14707ea4 100644
--- a/src/current/v23.2/debezium.md
+++ b/src/current/v23.2/debezium.md
@@ -116,7 +116,7 @@ Once all of the [prerequisite steps](#before-you-begin) are completed, you can u
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v23.2/frequently-asked-questions.md b/src/current/v23.2/frequently-asked-questions.md
index 3ca30f68e1d..510a53851b6 100644
--- a/src/current/v23.2/frequently-asked-questions.md
+++ b/src/current/v23.2/frequently-asked-questions.md
@@ -149,7 +149,7 @@ Note, however, that the protocol used doesn't significantly impact how easy it i
### Can a PostgreSQL or MySQL application be migrated to CockroachDB?
-Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}) or [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
+Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}) or [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
We also fully support [importing your data via CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}).
diff --git a/src/current/v23.2/goldengate.md b/src/current/v23.2/goldengate.md
index 7fd572094e4..30ee23c17df 100644
--- a/src/current/v23.2/goldengate.md
+++ b/src/current/v23.2/goldengate.md
@@ -514,7 +514,7 @@ Run the steps in this section on a machine and in a directory where Oracle Golde
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v23.2/import-into.md b/src/current/v23.2/import-into.md
index e6d6960dcbf..3e05357cb0e 100644
--- a/src/current/v23.2/import-into.md
+++ b/src/current/v23.2/import-into.md
@@ -158,7 +158,7 @@ You can control the `IMPORT` process's behavior using any of the following key-v
For examples showing how to use these options, see the [Examples section]({% link {{ page.version.version }}/import-into.md %}#examples).
-For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link molt/migration-overview.md %}).
## View and control import jobs
@@ -297,6 +297,6 @@ For more information about importing data from Avro, including examples, see [Mi
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
diff --git a/src/current/v23.2/import-performance-best-practices.md b/src/current/v23.2/import-performance-best-practices.md
index 7834c7650ba..a52f3cef522 100644
--- a/src/current/v23.2/import-performance-best-practices.md
+++ b/src/current/v23.2/import-performance-best-practices.md
@@ -162,9 +162,9 @@ If you cannot both split and sort your dataset, the performance of either split
## See also
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Migrate from Oracle]({% link {{ page.version.version }}/migrate-from-oracle.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
diff --git a/src/current/v23.2/import.md b/src/current/v23.2/import.md
index 16e6904ff34..290fdfccca4 100644
--- a/src/current/v23.2/import.md
+++ b/src/current/v23.2/import.md
@@ -30,7 +30,7 @@ To import data into a new table, use [`CREATE TABLE`]({% link {{ page.version.ve
- `IMPORT` is a blocking statement. To run an import job asynchronously, use the [`DETACHED`](#options-detached) option.
- `IMPORT` cannot be used within a [rolling upgrade]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}).
- Certain `IMPORT TABLE` statements that defined the table schema inline are **not** supported in v22.1 and later versions. These include running `IMPORT TABLE ... CREATE USING` and `IMPORT TABLE` with any non-bundle format (`CSV`, `DELIMITED`, `PGCOPY`, or `AVRO`) data types. Instead, use `CREATE TABLE` and `IMPORT INTO`; see this [example]({% link {{ page.version.version }}/import-into.md %}#import-into-a-new-table-from-a-csv-file) for more detail.
-- For instructions and working examples on how to migrate data from other databases, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+- For instructions and working examples on how to migrate data from other databases, see the [Migration Overview]({% link molt/migration-overview.md %}).
- `IMPORT` cannot directly import data to `REGIONAL BY ROW` tables that are part of [multi-region databases]({% link {{ page.version.version }}/multiregion-overview.md %}). Instead, use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) which supports importing into `REGIONAL BY ROW` tables.
{{site.data.alerts.callout_success}}
@@ -98,7 +98,7 @@ Key | Context | Value
For examples showing how to use these options, see the [Examples](#examples) section below.
-For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link molt/migration-overview.md %}).
## Requirements
@@ -184,7 +184,7 @@ IMPORT PGDUMP 's3://{BUCKET NAME}/{customers.sql}?AWS_ACCESS_KEY_ID={ACCESS KEY}
WITH ignore_unsupported_statements;
~~~
-For this command to succeed, you need to have created the dump file with specific flags to `pg_dump`, and use the `WITH ignore_unsupported_statements` clause. For more information, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For this command to succeed, you need to have created the dump file with specific flags to `pg_dump`, and use the `WITH ignore_unsupported_statements` clause. For more information, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
### Import a table from a PostgreSQL database dump
@@ -197,7 +197,7 @@ IMPORT TABLE employees
If the table schema specifies foreign keys into tables that do not exist yet, the `WITH skip_foreign_keys` option may be needed. For more information, see the list of [import options](#import-options).
-For this command to succeed, you need to have created the dump file with specific flags to `pg_dump`. For more information, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For this command to succeed, you need to have created the dump file with specific flags to `pg_dump`. For more information, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
### Import a MySQL database dump
@@ -206,7 +206,7 @@ For this command to succeed, you need to have created the dump file with specifi
IMPORT MYSQLDUMP 's3://{BUCKET NAME}/{employees-full.sql}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';
~~~
-For more detailed information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more detailed information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Import a table from a MySQL database dump
@@ -218,7 +218,7 @@ IMPORT TABLE employees
If the table schema specifies foreign keys into tables that do not exist yet, the `WITH skip_foreign_keys` option may be needed. For more information, see the list of [import options](#import-options).
-For more detailed information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more detailed information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Import a limited number of rows
@@ -376,12 +376,12 @@ CSV DATA ('s3://{BUCKET NAME}/{customer-data}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS
## See also
- [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
{% comment %} Reference Links {% endcomment %}
-[postgres]: {% link {{ page.version.version }}/migrate-from-postgres.md %}
-[mysql]: {% link {{ page.version.version }}/migrate-from-mysql.md %}
+[postgres]: {% link molt/migrate-to-cockroachdb.md %}
+[mysql]: {% link molt/migrate-to-cockroachdb.md %}?filters=mysql
diff --git a/src/current/v23.2/index.md b/src/current/v23.2/index.md
index 2008d323859..010a71f91f4 100644
--- a/src/current/v23.2/index.md
+++ b/src/current/v23.2/index.md
@@ -99,11 +99,11 @@ docs_area:
diff --git a/src/current/v23.2/insert-data.md b/src/current/v23.2/insert-data.md
index 2115fb4068c..b0964de875d 100644
--- a/src/current/v23.2/insert-data.md
+++ b/src/current/v23.2/insert-data.md
@@ -101,7 +101,7 @@ conn.commit()
## Bulk insert
-If you need to get a lot of data into a CockroachDB cluster quickly, use the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement instead of sending SQL [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) from application code. It will be much faster because it bypasses the SQL layer altogether and writes directly to the data store using low-level commands. For instructions, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+If you need to get a lot of data into a CockroachDB cluster quickly, use the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement instead of sending SQL [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) from application code. It will be much faster because it bypasses the SQL layer altogether and writes directly to the data store using low-level commands. For instructions, see the [Migration Overview]({% link molt/migration-overview.md %}).
{% include {{page.version.version}}/sql/limit-row-size.md %}
@@ -109,7 +109,7 @@ If you need to get a lot of data into a CockroachDB cluster quickly, use the [`I
Reference information related to this task:
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`IMPORT`]({% link {{ page.version.version }}/import.md %})
- [Import performance]({% link {{ page.version.version }}/import.md %}#performance)
- [`INSERT`]({% link {{ page.version.version }}/insert.md %})
diff --git a/src/current/v23.2/known-limitations.md b/src/current/v23.2/known-limitations.md
index 880a4500997..031b079d04d 100644
--- a/src/current/v23.2/known-limitations.md
+++ b/src/current/v23.2/known-limitations.md
@@ -436,7 +436,7 @@ CockroachDB supports efficiently storing and querying [spatial data]({% link {{
- CockroachDB does not support using [schema name prefixes]({% link {{ page.version.version }}/sql-name-resolution.md %}#how-name-resolution-works) to refer to [data types]({% link {{ page.version.version }}/data-types.md %}) with type modifiers (e.g., `public.geometry(linestring, 4326)`). Instead, use fully-unqualified names to refer to data types with type modifiers (e.g., `geometry(linestring,4326)`).
- Note that, in [`IMPORT PGDUMP`]({% link {{ page.version.version }}/migrate-from-postgres.md %}) output, [`GEOMETRY` and `GEOGRAPHY`]({% link {{ page.version.version }}/export-spatial-data.md %}) data type names are prefixed by `public.`. If the type has a type modifier, you must remove the `public.` from the type name in order for the statements to work in CockroachDB.
+ Note that, in [`IMPORT PGDUMP`]({% link molt/migrate-to-cockroachdb.md %}) output, [`GEOMETRY` and `GEOGRAPHY`]({% link {{ page.version.version }}/export-spatial-data.md %}) data type names are prefixed by `public.`. If the type has a type modifier, you must remove the `public.` from the type name in order for the statements to work in CockroachDB.
[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/56492)
diff --git a/src/current/v23.2/migrate-from-avro.md b/src/current/v23.2/migrate-from-avro.md
index fb47d9b6868..676f5274bff 100644
--- a/src/current/v23.2/migrate-from-avro.md
+++ b/src/current/v23.2/migrate-from-avro.md
@@ -218,8 +218,8 @@ You will need to run [`ALTER TABLE ... ADD CONSTRAINT`]({% link {{ page.version.
- [`IMPORT`][import]
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV][csv]
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.2/migrate-from-csv.md b/src/current/v23.2/migrate-from-csv.md
index fb8fe7adaa8..c47ba89adba 100644
--- a/src/current/v23.2/migrate-from-csv.md
+++ b/src/current/v23.2/migrate-from-csv.md
@@ -178,8 +178,8 @@ IMPORT INTO employees (emp_no, birth_date, first_name, last_name, gender, hire_d
- [`IMPORT`][import]
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.2/migrate-from-geojson.md b/src/current/v23.2/migrate-from-geojson.md
index 6c83c906215..6e9d29ef456 100644
--- a/src/current/v23.2/migrate-from-geojson.md
+++ b/src/current/v23.2/migrate-from-geojson.md
@@ -92,9 +92,9 @@ IMPORT PGDUMP ('http://localhost:3000/tanks.sql');
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.2/migrate-from-geopackage.md b/src/current/v23.2/migrate-from-geopackage.md
index 8be52473601..bc8ff047aa0 100644
--- a/src/current/v23.2/migrate-from-geopackage.md
+++ b/src/current/v23.2/migrate-from-geopackage.md
@@ -104,9 +104,9 @@ IMPORT PGDUMP ('http://localhost:3000/springs.sql');
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.2/migrate-from-mysql.md b/src/current/v23.2/migrate-from-mysql.md
deleted file mode 100644
index 767c0a27ce5..00000000000
--- a/src/current/v23.2/migrate-from-mysql.md
+++ /dev/null
@@ -1,412 +0,0 @@
----
-title: Migrate from MySQL
-summary: Learn how to migrate data from MySQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page describes basic considerations and provides a basic [example](#example-migrate-world-to-cockroachdb) of migrating data from MySQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [MySQL migration example](#example-migrate-world-to-cockroachdb) on this page demonstrates how to use [MOLT tooling]({% link {{ page.version.version }}/migration-overview.md %}#molt) to update the MySQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-You will likely need to make application changes due to differences in syntax between MySQL and CockroachDB. Along with the [general considerations in the migration overview]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), also consider the following MySQL-specific information as you develop your migration plan.
-
-When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), MySQL syntax that cannot automatically be converted will be displayed in the [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report). Such differences may include the following.
-
-#### String case sensitivity
-
-Strings are case-insensitive in MySQL and case-sensitive in CockroachDB. You may need to edit your MySQL data to get the results you expect from CockroachDB. For example, you may have been doing string comparisons in MySQL that will need to be changed to work with CockroachDB.
-
-For more information about the case sensitivity of strings in MySQL, see [Case Sensitivity in String Searches](https://dev.mysql.com/doc/refman/8.0/en/case-sensitivity.html) from the MySQL documentation. For more information about CockroachDB strings, see [`STRING`]({% link {{ page.version.version }}/string.md %}).
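-
-For example, the following comparison is case-insensitive in MySQL but case-sensitive in CockroachDB (a minimal illustration):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT 'Apple' = 'apple';
-~~~
-
-MySQL returns `1` (true) under its default collation, while CockroachDB returns `false`.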
-
-#### Identifier case sensitivity
-
-Identifiers are case-sensitive in MySQL and [case-insensitive in CockroachDB]({% link {{ page.version.version }}/keywords-and-identifiers.md %}#identifiers). When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either keep case sensitivity by enclosing identifiers in double quotes, or make identifiers case-insensitive by converting them to lowercase.
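-
-For example, in CockroachDB the following statements create two distinct tables, `mytable` and `MyTable` (a minimal sketch; the table names are hypothetical):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE MyTable (id INT8 PRIMARY KEY);
-CREATE TABLE "MyTable" (id INT8 PRIMARY KEY);
-~~~
-
-The unquoted name is folded to lowercase, while the double-quoted name keeps its case.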
-
-#### `AUTO_INCREMENT` attribute
-
-The MySQL [`AUTO_INCREMENT`](https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html) attribute, which creates sequential column values, is not supported in CockroachDB. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), columns with `AUTO_INCREMENT` can be converted to use [sequences]({% link {{ page.version.version }}/create-sequence.md %}), `UUID` values with [`gen_random_uuid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions), or unique `INT8` values using [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
-
-{{site.data.alerts.callout_info}}
-Changing a column type during schema conversion will cause [MOLT Verify]({% link molt/molt-verify.md %}) to identify a type mismatch during [data validation](#step-3-validate-the-migrated-data). This is expected behavior.
-{{site.data.alerts.end}}
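-
-For example, a MySQL column defined as `id INT AUTO_INCREMENT` might convert to the following when the `unique_rowid()` option is selected (a sketch; the table definition is hypothetical):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE users (
-    id INT8 NOT NULL DEFAULT unique_rowid(),
-    name STRING,
-    PRIMARY KEY (id)
-);
-~~~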
-
-#### `ENUM` type
-
-MySQL `ENUM` types are defined in table columns. On CockroachDB, [`ENUM`]({% link {{ page.version.version }}/enum.md %}) is a standalone type. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either deduplicate the `ENUM` definitions or create a separate type for each column.
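-
-For example, a MySQL inline definition such as `isofficial ENUM('T','F')` maps to a standalone CockroachDB type, which the column then references (a sketch; the type name matches the converted schema used later on this page):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TYPE countrylanguage_isofficial_enum AS ENUM ('T', 'F');
-~~~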
-
-#### `TINYINT` type
-
-`TINYINT` data types are not supported in CockroachDB. The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) automatically converts `TINYINT` columns to [`INT2`]({% link {{ page.version.version }}/int.md %}) (`SMALLINT`).
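-
-For example, a MySQL column defined as `rating TINYINT` becomes `rating INT2` in the converted schema (a minimal sketch; the column name is hypothetical):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE reviews (rating INT2);
-~~~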
-
-#### Geospatial types
-
-MySQL geometry types are not converted to CockroachDB [geospatial types]({% link {{ page.version.version }}/spatial-data-overview.md %}#spatial-objects) by the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql). They should be manually converted to the corresponding types in CockroachDB.
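-
-For example, a MySQL column such as `location POINT SRID 4326` could be rewritten manually as follows (a sketch; the table and column names are hypothetical):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE places (
-    id INT8 PRIMARY KEY,
-    location GEOMETRY(POINT, 4326)
-);
-~~~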
-
-#### `FIELD` function
-
-The MySQL `FIELD` function is not supported in CockroachDB. Instead, you can use the [`array_position`]({% link {{ page.version.version }}/functions-and-operators.md %}#array-functions) function, which returns the index of the first occurrence of an element in an array.
-
-Example usage:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT array_position(ARRAY[4,1,3,2],1);
-~~~
-
-~~~
- array_position
-------------------
- 2
-(1 row)
-~~~
-
-While MySQL returns 0 when the element is not found, CockroachDB returns `NULL`. As a result, if you use `array_position` in an `ORDER BY` clause, rows where the element is not found are still sorted, positioned according to how `NULL` values sort. To control where those rows appear, you can substitute a default value using the [`COALESCE`]({% link {{ page.version.version }}/functions-and-operators.md %}#conditional-and-function-like-operators) operator.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT * FROM table_a ORDER BY COALESCE(array_position(ARRAY[4,1,3,2],5),999);
-~~~
-
-## Load MySQL data
-
-You can use [MOLT Fetch]({% link molt/molt-fetch.md %}) to migrate MySQL data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-The [following example](#example-migrate-world-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `world` to CockroachDB
-
-The following steps demonstrate [converting a schema]({% link {{ page.version.version }}/migration-overview.md %}#convert-the-schema), performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that MySQL data can be properly migrated to CockroachDB and that your application queries can be tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses the [MySQL `world` data set](https://dev.mysql.com/doc/index-other.html) and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the [`world` data set](https://dev.mysql.com/doc/index-other.html).
-
-1. Create the `world` database on your MySQL instance, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqlsh -uroot --sql --file {path}/world-db/world.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) and create the {{ site.data.products.standard }} cluster.
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the MySQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) to convert the `world` schema for compatibility with CockroachDB. The schema has three tables: `city`, `country`, and `countrylanguage`.
-
-1. Dump the MySQL `world` schema with the following [`mysqldump`](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-sql-format.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot --no-data world > world_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) in the {{ site.data.products.cloud }} Console and [add a new MySQL schema]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema).
-
- For **AUTO_INCREMENT Conversion Option**, select the [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions) option. This will convert the `ID` column in the `city` table, which has MySQL type `int` and `AUTO_INCREMENT`, to a CockroachDB [`INT8`]({% link {{ page.version.version }}/int.md %}) type with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). For context on this option, see [`AUTO_INCREMENT` attribute](#auto_increment-attribute).
-
- The `UUID` and `unique_rowid()` options are each preferred for [different use cases]({% link {{ page.version.version }}/sql-faqs.md %}#what-are-the-differences-between-uuid-sequences-and-unique_rowid). For this example, selecting the `unique_rowid()` option makes [loading the data](#step-2-load-the-mysql-data) more straightforward in a later step, since both the source and target columns will have integer types.
-
-1. [Upload `world_schema.sql`]({% link cockroachcloud/migrations-page.md %}?filters=mysql#upload-file) to the Schema Conversion Tool.
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}?filters=mysql#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report) shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your MySQL database credentials]({% link cockroachcloud/migrations-page.md %}?filters=mysql#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the MySQL database.
- {{site.data.alerts.end}}
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Before you migrate the converted schema, click the **Statements** tab to view the [Statements list]({% link cockroachcloud/migrations-page.md %}?filters=mysql#statements-list). Scroll down to the `CREATE TABLE countrylanguage` statement and edit the statement to add a [collation]({% link {{ page.version.version }}/collate.md %}) (`COLLATE en_US`) on the `language` column:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE TABLE countrylanguage (
- countrycode VARCHAR(3) DEFAULT '' NOT NULL,
- language VARCHAR(30) COLLATE en_US DEFAULT '' NOT NULL,
- isofficial countrylanguage_isofficial_enum
- DEFAULT 'F'
- NOT NULL,
- percentage DECIMAL(4,1) DEFAULT '0.0' NOT NULL,
- PRIMARY KEY (countrycode, language),
- INDEX countrycode (countrycode),
- CONSTRAINT countrylanguage_ibfk_1
- FOREIGN KEY (countrycode) REFERENCES country (code)
- )
- ~~~
-
- Click **Save**.
-
- This is a workaround to prevent [data validation](#step-3-validate-the-migrated-data) from failing due to collation mismatches. For more details, see the [MOLT Verify]({% link molt/molt-verify.md %}#known-limitations) documentation.
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#migrate-the-schema) to create a new {{ site.data.products.standard }} cluster with the converted schema. Name the database `world`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
-1. Open a SQL shell to the CockroachDB `world` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. For large imports, Cockroach Labs recommends [removing indexes prior to loading data]({% link {{ page.version.version }}/import-performance-best-practices.md %}#import-into-a-schema-with-secondary-indexes) and recreating them afterward. This provides increased visibility into the import progress and the ability to retry each step independently.
-
- Show the indexes on the `world` database:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW INDEXES FROM DATABASE world;
- ~~~
-
- The `countrycode` [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) indexes on the `city` and `countrylanguage` tables can be removed for now:
-
- ~~~
- table_name | index_name | index_schema | non_unique | seq_in_index | column_name | definition | direction | storing | implicit | visible
- ---------------------------------+-------------------------------------------------+--------------+------------+--------------+-----------------+-----------------+-----------+---------+----------+----------
- ...
- city | countrycode | public | t | 2 | id | id | ASC | f | t | t
- city | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- ...
- countrylanguage | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- countrylanguage | countrycode | public | t | 2 | language | language | ASC | f | t | t
- ...
- ~~~
-
-1. Drop the `countrycode` indexes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX city@countrycode;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX countrylanguage@countrycode;
- ~~~
-
- You will recreate the indexes after [loading the data](#step-2-load-the-mysql-data).
-
-### Step 2. Load the MySQL data
-
-Load the `world` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-When MySQL dumps data, the tables are not ordered by [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints, and foreign keys are not placed in the correct dependency order. It is best to disable foreign key checks when loading data into CockroachDB, and revalidate foreign keys on each table after the data is loaded.
-
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump the MySQL `world` data with the following [`mysqldump` command](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-delimited-text.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot -T /{path}/world-data --fields-terminated-by ',' --fields-enclosed-by '"' --fields-escaped-by '\' --no-create-info world
- ~~~
-
- This dumps each table in your database to the path `/{path}/world-data` as a `.txt` file in CSV format.
- - `--fields-terminated-by` specifies that values are separated by commas instead of tabs.
- - `--fields-enclosed-by` and `--fields-escaped-by` specify the characters that enclose and escape column values, respectively.
- - `--no-create-info` dumps only the [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements).
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
- Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `world` cluster, using the same command as before:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each MySQL dump file into the corresponding table in the `world` database.
-
- The following commands point to a public S3 bucket where the `world` data dump files are hosted for this example. The `nullif='\N'` clause specifies that `\N` values, which are produced by the `mysqldump` command, should be read as [`NULL`]({% link {{ page.version.version }}/null-handling.md %}).
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
-
- ~~~ sql
- IMPORT INTO countrylanguage
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/countrylanguage.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782070812344321 | succeeded | 1 | 984 | 984 | 171555
- ~~~
-
- ~~~ sql
- IMPORT INTO country
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/country.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 887782114360819713 | succeeded | 1 | 239 | 0 | 33173
- ~~~
-
- ~~~ sql
- IMPORT INTO city
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/city.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782154421567489 | succeeded | 1 | 4079 | 4079 | 288140
- ~~~
-
- {{site.data.alerts.callout_info}}
- After [converting the schema](#step-1-convert-the-mysql-schema) to work with CockroachDB, the `id` column in `city` is an [`INT8`]({% link {{ page.version.version }}/int.md %}) with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). However, `unique_rowid()` values are only generated when new rows are [inserted]({% link {{ page.version.version }}/insert.md %}) without an `id` value. The MySQL data dump still includes the sequential `id` values generated by the MySQL [`AUTO_INCREMENT` attribute](#auto_increment-attribute), and these are imported with the `IMPORT INTO` command.
-
- In an actual migration, you can either update the primary key into a [multi-column key]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or add a new primary key column that [generates unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
- {{site.data.alerts.end}}
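-
- For example, a multi-column primary key could be applied after the import with a statement like the following (a sketch only, not one of this example's steps):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE city ALTER PRIMARY KEY USING COLUMNS (countrycode, id);
- ~~~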
-
-1. Recreate the indexes that you deleted before importing the data:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON city (countrycode, id);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON countrylanguage (countrycode, language);
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `city` and `countrylanguage`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM city;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+-----------------+-----------------+--------------------------------------------------------------+------------
- city | city_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- city | city_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM countrylanguage;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- ------------------+------------------------+-----------------+--------------------------------------------------------------+------------
- countrylanguage | countrylanguage_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- countrylanguage | countrylanguage_pkey | PRIMARY KEY | PRIMARY KEY (countrycode ASC, language ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE city VALIDATE CONSTRAINT city_ibfk_1;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE countrylanguage VALIDATE CONSTRAINT countrylanguage_ibfk_1;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on MySQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the [JDBC connection string for MySQL](https://dev.mysql.com/doc/connector-j/8.1/en/connector-j-reference-jdbc-url-format.html) with `--source` and the SQL connection string for CockroachDB with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- ./molt verify --source 'jdbc:mysql://{user}:{password}@tcp({host}:{port})/world' --target 'postgresql://{user}:{password}@{host}:{port}/world?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following warnings indicate that the MySQL and CockroachDB columns have different types. This is an expected result, since some columns were [changed to `ENUM` types](#enum-type) when you [converted the schema](#step-1-convert-the-mysql-schema):
-
- ~~~
- WRN mismatching table definition mismatch_info="column type mismatch on continent: text vs country_continent_enum" table_name=country table_schema=public
- WRN mismatching table definition mismatch_info="column type mismatch on isofficial: text vs countrylanguage_isofficial_enum" table_name=countrylanguage table_schema=public
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.country (shard 1/1): truth rows seen: 239, success: 239, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.countrylanguage (shard 1/1): truth rows seen: 984, success: 984, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.city (shard 1/1): truth rows seen: 4079, success: 4079, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v23.2/migrate-from-openstreetmap.md b/src/current/v23.2/migrate-from-openstreetmap.md
index db717368941..0d21840d358 100644
--- a/src/current/v23.2/migrate-from-openstreetmap.md
+++ b/src/current/v23.2/migrate-from-openstreetmap.md
@@ -129,9 +129,9 @@ Osm2pgsql took 2879s overall
- [Migrate from GeoPackages]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.2/migrate-from-oracle.md b/src/current/v23.2/migrate-from-oracle.md
index 505f2699bf6..dffe4182df6 100644
--- a/src/current/v23.2/migrate-from-oracle.md
+++ b/src/current/v23.2/migrate-from-oracle.md
@@ -388,8 +388,8 @@ You will have to refactor Oracle SQL and functions that do not comply with [ANSI
- [`IMPORT`]({% link {{ page.version.version }}/import.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.2/migrate-from-postgres.md b/src/current/v23.2/migrate-from-postgres.md
deleted file mode 100644
index 795d303e656..00000000000
--- a/src/current/v23.2/migrate-from-postgres.md
+++ /dev/null
@@ -1,297 +0,0 @@
----
-title: Migrate from PostgreSQL
-summary: Learn how to migrate data from PostgreSQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page describes basic considerations and provides a basic [example](#example-migrate-frenchtowns-to-cockroachdb) of migrating data from PostgreSQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [PostgreSQL migration example](#example-migrate-frenchtowns-to-cockroachdb) on this page demonstrates how to use [MOLT tooling]({% link {{ page.version.version }}/migration-overview.md %}#molt) to update the PostgreSQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax.
-
-For syntax differences, refer to [Features that differ from PostgreSQL]({% link {{ page.version.version }}/postgresql-compatibility.md %}#features-that-differ-from-postgresql).
-
-### Unsupported features
-
-The following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-## Load PostgreSQL data
-
-You can use [MOLT Fetch]({% link molt/molt-fetch.md %}) to migrate PostgreSQL data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
-
-The [following example](#example-migrate-frenchtowns-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `frenchtowns` to CockroachDB
-
-The following steps demonstrate [converting a schema]({% link {{ page.version.version }}/migration-overview.md %}#convert-the-schema), performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that PostgreSQL data can be properly migrated to CockroachDB and that your application queries can be tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses a modified version of the PostgreSQL `french-towns-communes-francais` data set and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the `frenchtowns` data set:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- curl -O https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/frenchtowns.sql
- ~~~
-
-1. Create a `frenchtowns` database on your PostgreSQL instance:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- createdb frenchtowns
- ~~~
-
-1. Load the `frenchtowns` data into PostgreSQL, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -a -f frenchtowns.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) and [create the CockroachDB {{ site.data.products.standard }} cluster]({% link cockroachcloud/create-your-cluster.md %}).
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the PostgreSQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert the `frenchtowns` schema for compatibility with CockroachDB. The schema has three tables: `regions`, `departments`, and `towns`.
-
-1. Dump the PostgreSQL `frenchtowns` schema with the following [`pg_dump`](https://www.postgresql.org/docs/15/app-pgdump.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- pg_dump --schema-only frenchtowns > frenchtowns_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) in the {{ site.data.products.cloud }} Console and [add a new PostgreSQL schema]({% link cockroachcloud/migrations-page.md %}#convert-a-schema).
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}#summary-report) shows that there are errors under **Required Fixes**. You must resolve these in order to migrate the schema to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your PostgreSQL database credentials]({% link cockroachcloud/migrations-page.md %}#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the PostgreSQL database.
- {{site.data.alerts.end}}
-
-1. `Missing user: postgres` errors indicate that the SQL user `postgres` is missing from CockroachDB. Click **Add User** to create the user.
-
-1. `Miscellaneous Errors` includes a `SELECT pg_catalog.set_config('search_path', '', false)` statement that can safely be removed. Click **Delete** to remove the statement from the schema.
-
-1. Review the `CREATE SEQUENCE` statements listed under **Suggestions**. Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
-
- For this example, **Acknowledge** the suggestion without making further changes. In practice, after [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration) to CockroachDB, you would modify your CockroachDB schema to use unique and non-sequential primary keys.
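-
- For example, a sequence-backed integer key could later be replaced with a generated UUID key using statements like the following (a sketch; the new column name is hypothetical):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE towns ADD COLUMN uuid_id UUID NOT NULL DEFAULT gen_random_uuid();
- ALTER TABLE towns ALTER PRIMARY KEY USING COLUMNS (uuid_id);
- ~~~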
-
-1. Click **Retry Migration**. The **Summary Report** now shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema) to create a new CockroachDB {{ site.data.products.standard }} cluster with the converted schema. Name the database `frenchtowns`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
-### Step 2. Load the PostgreSQL data
-
-Load the `frenchtowns` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump each table in the PostgreSQL `frenchtowns` database to a CSV-formatted file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY regions TO stdout DELIMITER ',' CSV;" > regions.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY departments TO stdout DELIMITER ',' CSV;" > departments.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY towns TO stdout DELIMITER ',' CSV;" > towns.csv
- ~~~
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
- Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `frenchtowns` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/frenchtowns?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each PostgreSQL dump file into the corresponding table in the `frenchtowns` database.
-
- The following commands point to a public S3 bucket where the `frenchtowns` data dump files are hosted for this example.
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO regions
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/regions.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753132185026561 | succeeded | 1 | 26 | 52 | 2338
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO departments
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/departments.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753147892465665 | succeeded | 1 | 100 | 300 | 11166
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO towns
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/towns.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+-------+---------------+----------
- 893753162225680385 | succeeded | 1 | 36684 | 36684 | 2485007
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `departments` and `towns`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM departments;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- --------------+-------------------------+-----------------+---------------------------------------------------------+------------
- departments | departments_capital_key | UNIQUE | UNIQUE (capital ASC) | t
- departments | departments_code_key | UNIQUE | UNIQUE (code ASC) | t
- departments | departments_name_key | UNIQUE | UNIQUE (name ASC) | t
- departments | departments_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- departments | departments_region_fkey | FOREIGN KEY | FOREIGN KEY (region) REFERENCES regions(code) NOT VALID | f
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM towns;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+---------------------------+-----------------+-----------------------------------------------------------------+------------
- towns | towns_code_department_key | UNIQUE | UNIQUE (code ASC, department ASC) | t
- towns | towns_department_fkey | FOREIGN KEY | FOREIGN KEY (department) REFERENCES departments(code) NOT VALID | f
- towns | towns_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE departments VALIDATE CONSTRAINT departments_region_fkey;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE towns VALIDATE CONSTRAINT towns_department_fkey;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on PostgreSQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the PostgreSQL connection string with `--source` and the CockroachDB connection string with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- ./molt verify --source 'postgresql://{user}:{password}@{host}:{port}/frenchtowns' --target 'postgresql://{user}:{password}@{host}:{port}/frenchtowns?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.regions (shard 1/1): truth rows seen: 26, success: 26, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.departments (shard 1/1): truth rows seen: 100, success: 100, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 10000, success: 10000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 20000, success: 20000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 30000, success: 30000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.towns (shard 1/1): truth rows seen: 36684, success: 36684, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v23.2/migrate-from-shapefiles.md b/src/current/v23.2/migrate-from-shapefiles.md
index bfc3cc7fa34..b2f2d9f6bdf 100644
--- a/src/current/v23.2/migrate-from-shapefiles.md
+++ b/src/current/v23.2/migrate-from-shapefiles.md
@@ -114,9 +114,9 @@ IMPORT PGDUMP ('http://localhost:3000/tornado-points.sql') WITH ignore_unsupport
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.2/migration-overview.md b/src/current/v23.2/migration-overview.md
deleted file mode 100644
index 2c434da5373..00000000000
--- a/src/current/v23.2/migration-overview.md
+++ /dev/null
@@ -1,355 +0,0 @@
----
-title: Migration Overview
-summary: Learn how to migrate your database to a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page provides an overview of how to migrate a database to CockroachDB.
-
-A database migration broadly consists of the following phases:
-
-1. [Develop a migration plan:](#develop-a-migration-plan) Evaluate your [downtime requirements](#approach-to-downtime) and [cutover strategy](#cutover-strategy), [size the CockroachDB cluster](#capacity-planning) that you will migrate to, and become familiar with the [application changes](#application-changes) that you need to make for CockroachDB.
-1. [Prepare for migration:](#prepare-for-migration) Run a [pre-mortem](#run-a-migration-pre-mortem) (optional), set up [metrics](#set-up-monitoring-and-alerting) (optional), [convert your schema](#convert-the-schema), perform an [initial load of test data](#load-test-data), [validate your application queries](#validate-queries) for correctness and performance, and [perform a dry run](#perform-a-dry-run) of the migration.
-1. [Conduct the migration:](#conduct-the-migration) Use a [lift-and-shift](#lift-and-shift) or ["zero-downtime"](#zero-downtime) method to migrate your data, application, and users to CockroachDB.
-1. [Complete the migration:](#complete-the-migration) Notify the appropriate parties and summarize the details.
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Develop a migration plan
-
-Consider the following as you plan your migration:
-
-- Who will lead and perform the migration? Which teams are involved, and which aspects are they responsible for?
-- Which internal and external parties do you need to inform about the migration?
-- Which external or third-party tools (e.g., microservices, analytics, payment processors, aggregators, CRMs) must be tested and migrated along with your application?
-- What portion of the data can be inconsistent, and for how long? What is the tolerable percentage of latency and application errors? This comprises your "error budget".
-- What is the tolerable [downtime](#approach-to-downtime), and what [cutover strategy](#cutover-strategy) will you use to switch users to CockroachDB?
-- Will you set up a "dry-run" environment to test the migration? How many [dry-run migrations](#perform-a-dry-run) will you perform?
-- When is the best time to perform this migration to be minimally disruptive to the database's users?
-- What is your target date for completing the migration?
-
-Create a document that summarizes the intent of the migration, the technical details, and the team members involved.
-
-### Approach to downtime
-
-A primary consideration is whether your application can tolerate downtime:
-
-- What types of operations can you suspend: reads, writes, or both?
-- How long can operations be suspended: seconds, minutes, or hours?
-- Should writes be queued while service is suspended?
-
-Take the following two use cases:
-
-- An application that is primarily in use during daytime business hours may be able to be taken offline during a predetermined timeframe without disrupting the user experience and business continuity. In this case, your migration can occur in a [downtime window](#downtime-window).
-- An application that must serve writes continuously cannot tolerate a long downtime window. In this case, you will aim for [zero or near-zero downtime](#minimal-downtime).
-
-#### Downtime window
-
-If your application can tolerate downtime, then it will likely be easiest to take your application offline, load a snapshot of the data into CockroachDB, and perform a [cutover](#cutover-strategy) to CockroachDB once the data is migrated. This is known as a *lift-and-shift* migration.
-
-A lift-and-shift approach is the most straightforward. However, it's important to fully [prepare the migration](#prepare-for-migration) in order to be certain that it can be completed successfully during the downtime window.
-
-- *Scheduled downtime* is made known to your users in advance. Once you have [prepared for the migration](#prepare-for-migration), you take the application offline, [conduct the migration](#conduct-the-migration), and bring the application back online on CockroachDB. To succeed, you should estimate the amount of downtime required to migrate your data, and ideally schedule the downtime outside of peak hours. Scheduling downtime is easiest if your application traffic is "periodic", meaning that it varies by the time of day, day of week, or day of month.
-
-- *Unscheduled downtime* impacts as few customers as possible, ideally without impacting their regular usage. If your application is intentionally offline at certain times (e.g., outside business hours), you can migrate the data without users noticing. Alternatively, if your application's functionality is not time-sensitive (e.g., it sends batched messages or emails), then you can queue requests while your system is offline, and process those requests after completing the migration to CockroachDB.
-
-- *Reduced functionality* takes some, but not all, application functionality offline. For example, you can disable writes but not reads while you migrate the application data, and queue data to be written after completing the migration.
-
-For an overview of lift-and-shift migrations to CockroachDB, see [Lift and Shift](#lift-and-shift).
-
-#### Minimal downtime
-
-If your application cannot tolerate downtime, then you should aim for a "zero-downtime" approach. This reduces downtime to an absolute minimum, such that users do not notice the migration.
-
-The minimum possible downtime depends on whether you can tolerate inconsistency in the migrated data:
-
-- Migrations performed using *consistent cutover* reduce downtime to an absolute minimum (i.e., seconds or less) while keeping data synchronized between the source database and CockroachDB. **Consistency requires downtime.** In this approach, downtime occurs right before [cutover](#cutover-strategy), as you drain the remaining transactions from the source database to CockroachDB.
-
-- Migrations performed using *immediate cutover* can reduce downtime to zero. These require the most preparation, and typically allow read/write traffic to both databases for at least a short period of time, sacrificing consistency for availability. Without stopping application traffic, you perform an **immediate** [cutover](#cutover-strategy), while assuming that some writes will not be replicated to CockroachDB. You may want to manually reconcile these data inconsistencies after switching over.
-
-For an overview of zero-downtime migrations to CockroachDB, see [Zero Downtime](#zero-downtime). {% comment %}For details, see [Migration Strategy: Zero Downtime](migration-strategy-zero-downtime).{% endcomment %}
-
-### Cutover strategy
-
-*Cutover* is the process of switching application traffic from the source database to CockroachDB. Consider the following:
-
-- Will you perform the cutover all at once, or incrementally (e.g., by a subset of users, workloads, or tables)?
-
- - Switching all at once generally follows a [downtime window](#downtime-window) approach. Once the data is migrated to CockroachDB, you "flip the switch" to route application traffic to the new database, thus ending downtime.
-
- - Migrations with [zero or near-zero downtime](#minimal-downtime) can switch either all at once or incrementally, since writes are continuously replicated and the system can be gradually migrated as you [validate the queries](#validate-queries).
-
-- Will you have a fallback plan that allows you to reverse ("roll back") the migration from CockroachDB to the source database? A fallback plan enables you to fix any issues or inconsistencies that you encounter during or after cutover, then retry the migration.
-
-#### All at once (no rollback)
-
-This is the simplest cutover method, since you won't need to develop and execute a fallback plan.
-
-As part of [migration preparations](#prepare-for-migration), you will have already [tested your queries and performance](#test-query-results-and-performance) to be confident migrating without a rollback option. After moving all of the data from the source database to CockroachDB, you switch application traffic to CockroachDB.
-
-#### All at once (rollback)
-
-This method adds a fallback plan to the simple [all-at-once](#all-at-once-no-rollback) cutover.
-
-In addition to moving data to CockroachDB, data is also replicated from CockroachDB back to the source database in case you need to roll back the migration. Continuous replication is already possible when performing a [zero-downtime migration](#zero-downtime) that dual writes to both databases. Otherwise, you will need to ensure that data is replicated in the reverse direction at cutover. The challenge is to find a point at which both the source database and CockroachDB are in sync, so that you can roll back to that point. You should also avoid falling into a circular state where updates continuously travel back and forth between the source database and CockroachDB.
-
-#### Phased rollout
-
-Also known as the ["strangler fig"](https://en.wikipedia.org/wiki/Strangler_fig) approach, a phased rollout migrates a portion of your users, workloads, or tables over time. Until all users, workloads, and/or tables are migrated, the application will continue to write to both databases.
-
-This approach enables you to take your time with the migration, and to pause or roll back as you [monitor the migration](#set-up-monitoring-and-alerting) for issues and performance. Rolling back the migration involves the same caveats and considerations as for the [all-at-once](#all-at-once-rollback) method. Because you can control the blast radius of your migration by routing traffic for a subset of users or services, a phased rollout has reduced business risk and user impact at the cost of increased implementation risk. You will need to figure out how to migrate in phases while ensuring that your application is unaffected.
-
-### Capacity planning
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-Determine the size of the target CockroachDB cluster. To do this, consider your data volume and workload characteristics:
-
-- What is the total size of the data you will migrate?
-- How many active [application connections]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling) will be running in the CockroachDB environment?
-
-Use this information to size the CockroachDB cluster you will create. If you are migrating to a CockroachDB {{ site.data.products.cloud }} cluster, see [Plan Your Cluster]({% link cockroachcloud/plan-your-cluster.md %}) for details:
-
-- For CockroachDB {{ site.data.products.standard }} and {{ site.data.products.basic }}, your cluster will scale automatically to meet your storage and usage requirements. Refer to the [CockroachDB {{ site.data.products.standard }}]({% link cockroachcloud/plan-your-cluster.md %}) and [CockroachDB {{ site.data.products.basic }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) documentation to learn how to limit your resource consumption.
-- For CockroachDB {{ site.data.products.advanced }}, refer to the [example]({% link cockroachcloud/plan-your-cluster-advanced.md %}#example) that shows how your data volume, storage requirements, and replication factor affect the recommended node size (number of vCPUs per node) and total number of nodes on the cluster.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.cloud }} [Production Checklist]({% link cockroachcloud/production-checklist.md %}#sql-connection-handling).
-
-If you are migrating to a CockroachDB {{ site.data.products.core }} cluster:
-
-- Refer to our [sizing methodology]({% link {{ page.version.version }}/recommended-production-settings.md %}#sizing) to determine the total number of vCPUs on the cluster and the number of vCPUs per node (which determines the number of nodes on the cluster).
-- Refer to our [storage recommendations]({% link {{ page.version.version }}/recommended-production-settings.md %}#storage) to determine the amount of storage to provision on each node.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.core }} [Production Checklist]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling).
-
-### Application changes
-
-As you develop your migration plan, consider the application changes that you will need to make. These may relate to the following:
-
-- [Designing a schema that is compatible with CockroachDB.](#schema-design-best-practices)
-- [Creating effective indexes on CockroachDB.](#index-creation-best-practices)
-- [Handling transaction contention.](#handling-transaction-contention)
-- [Unimplemented features and syntax incompatibilities.](#unimplemented-features-and-syntax-incompatibilities)
-
-#### Schema design best practices
-
-Follow these recommendations when [converting your schema](#convert-the-schema) for compatibility with CockroachDB.
-
-{{site.data.alerts.callout_success}}
-The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) automatically identifies potential improvements to your schema.
-{{site.data.alerts.end}}
-
-- You should define an explicit primary key on every table. For more information, see [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices).
-
-- Do not use a sequence to define a primary key column. Instead, Cockroach Labs recommends that you use [multi-column primary keys]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or [auto-generating unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-functions-to-generate-unique-ids) for primary key columns.
-
-- By default on CockroachDB, `INT` is an alias for `INT8`, which creates 64-bit signed integers. Depending on your source database or application requirements, you may need to change the integer size to `4`. For example, [PostgreSQL defaults to 32-bit integers](https://www.postgresql.org/docs/9.6/datatype-numeric.html). For more information, see [Considerations for 64-bit signed integers]({% link {{ page.version.version }}/int.md %}#considerations-for-64-bit-signed-integers). A sketch combining these recommendations follows this list.
-
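-The following minimal sketch assumes a hypothetical `employees` table migrated from a PostgreSQL source:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE employees (
-    -- An explicit primary key with auto-generated unique IDs avoids
-    -- the write hotspots that sequential keys can create.
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name STRING NOT NULL,
-    -- INT4 preserves the 32-bit semantics of a PostgreSQL integer
-    -- column; CockroachDB's INT is an alias for INT8.
-    vacation_days INT4 DEFAULT 0
-);
-~~~
-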
-#### Index creation best practices
-
-Review the [best practices for creating secondary indexes]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices) on CockroachDB.
-
-{% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %}
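-
-As a minimal sketch, assuming a hypothetical `events` table whose `ts` column is written in increasing order, a hash-sharded index spreads those writes across the keyspace:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- Hash-sharding avoids a write hotspot on an index over a
--- sequentially increasing column.
-CREATE INDEX events_ts_idx ON events (ts) USING HASH;
-~~~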
-
-#### Handling transaction contention
-
-Optimize your queries against [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). You may encounter [transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) when you [test application queries](#validate-queries), as well as transaction contention due to long-running transactions when you [conduct the migration](#conduct-the-migration) and bulk load data.
-
-Transaction retry errors are more frequent under CockroachDB's default [`SERIALIZABLE` isolation level]({% link {{ page.version.version }}/demo-serializable.md %}). If you are migrating an application that was built at a `READ COMMITTED` isolation level, you should first [enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
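-
-As a minimal sketch, assuming a cluster version that supports `READ COMMITTED` (refer to the linked page for the exact steps in your version):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- Allow READ COMMITTED transactions cluster-wide, then make it the
--- default isolation level for the current session.
-SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = true;
-SET default_transaction_isolation = 'read committed';
-~~~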
-
-#### Unimplemented features and syntax incompatibilities
-
-Update your queries to resolve differences in functionality and SQL syntax.
-
-{{site.data.alerts.callout_success}}
-The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) automatically flags syntax incompatibilities and unimplemented features in your schema.
-{{site.data.alerts.end}}
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax. However, the following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-If your source database uses any of the preceding features, you may need to implement workarounds in your schema design, in your [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements), or in your application code.
-
-For more details on the CockroachDB SQL implementation, see [SQL Feature Support]({% link {{ page.version.version }}/sql-feature-support.md %}).
-
-## Prepare for migration
-
-Once you have a migration plan, prepare the team, application, source database, and CockroachDB cluster for the migration.
-
-### Run a migration "pre-mortem"
-
-{{site.data.alerts.callout_success}}
-This step is optional.
-{{site.data.alerts.end}}
-
-To minimize issues after [cutover](#cutover-strategy), compose a migration "pre-mortem":
-
-- Clearly describe the roles and processes of each team member performing the migration.
-- List the likely failure points and issues that you may encounter as you [conduct the migration](#conduct-the-migration).
-- Rank potential issues by severity, and identify ways to reduce risk.
-- Create a plan for implementing the actions that would most effectively reduce risk.
-
-### Set up monitoring and alerting
-
-{{site.data.alerts.callout_success}}
-This step is optional.
-{{site.data.alerts.end}}
-
-Based on the error budget you [defined in your migration plan](#develop-a-migration-plan), identify the metrics that you can use to measure your success criteria and set up monitoring for the migration. These metrics may be identical to those you normally use in production, but can also be specific to your migration needs.
-
-### Update the schema and queries
-
-In the following order:
-
-1. [Convert your schema](#convert-the-schema).
-1. [Load test data](#load-test-data).
-1. [Validate your application queries](#validate-queries).
-
-
-
-You can use the following [MOLT (Migrate Off Legacy Technology) tools]({% link molt/molt-overview.md %}) to simplify these steps:
-
-- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [MOLT Fetch]({% link molt/molt-fetch.md %})
-- [MOLT Verify]({% link molt/molt-verify.md %})
-
-#### Convert the schema
-
-First, convert your database schema to an equivalent CockroachDB schema:
-
-- Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert your schema line-by-line. This requires a free [CockroachDB {{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}). The tool will convert the syntax, identify [unimplemented features and syntax incompatibilities](#unimplemented-features-and-syntax-incompatibilities) in the schema, and suggest edits according to CockroachDB [best practices](#schema-design-best-practices).
- {{site.data.alerts.callout_info}}
- The Schema Conversion Tool accepts `.sql` files from PostgreSQL, MySQL, Oracle, and Microsoft SQL Server.
- {{site.data.alerts.end}}
-
-- Alternatively, manually convert the schema according to our [schema design best practices](#schema-design-best-practices){% comment %}and data type mappings{% endcomment %}. You can also [export a partially converted schema]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool to finish the conversion manually.
-
-Then import the converted schema to a CockroachDB cluster:
-
-- For CockroachDB {{ site.data.products.cloud }}, use the Schema Conversion Tool to [migrate the converted schema to a new {{ site.data.products.cloud }} database]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema).
-- For CockroachDB {{ site.data.products.core }}, pipe the [data definition language (DDL)]({% link {{ page.version.version }}/sql-statements.md %}#data-definition-statements) directly into [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}). You can [export a converted schema file]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool.
- {{site.data.alerts.callout_success}}
- For the fastest performance, you can use a [local, single-node CockroachDB cluster]({% link {{ page.version.version }}/cockroach-start-single-node.md %}#start-a-single-node-cluster) to convert your schema and [check the results of queries](#test-query-results-and-performance).
- {{site.data.alerts.end}}
-
-#### Load test data
-
-{{site.data.alerts.callout_success}}
-Before moving data, Cockroach Labs recommends [dropping any indexes]({% link {{ page.version.version }}/drop-index.md %}) on the CockroachDB database. The indexes can be [recreated]({% link {{ page.version.version }}/create-index.md %}) after the data is loaded, as sketched below. Doing so speeds up the initial data load.
-{{site.data.alerts.end}}
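-
-A minimal sketch of this pattern, assuming a hypothetical secondary index named `employees_name_idx`:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- Drop the secondary index before the bulk load.
-DROP INDEX employees@employees_name_idx;
--- Recreate it after the load completes.
-CREATE INDEX employees_name_idx ON employees (name);
-~~~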
-
-After [converting the schema](#convert-the-schema), load your data into CockroachDB so that you can [test your application queries](#validate-queries). Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data. Additional tooling may be required to extract or convert the data to a supported file format.
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %} Data is typically loaded before foreground application traffic is served, so the impact of taking the table offline when running `IMPORT INTO` may be minimal.
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %} Within the tool, you can select the database tables to migrate to the test cluster.
-- {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
-
-#### Validate queries
-
-After you [load the test data](#load-test-data), validate your queries on CockroachDB. You can do this by [shadowing](#shadowing) or by [manually testing](#test-query-results-and-performance) the queries.
-
-Note that CockroachDB defaults to the [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) transaction isolation level. If you are migrating an application that was built at a `READ COMMITTED` isolation level on the source database, you must [enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
-
-##### Shadowing
-
-You can "shadow" your production workload by executing your source SQL statements on CockroachDB in parallel. You can then [validate the queries](#test-query-results-and-performance) on CockroachDB for consistency, performance, and potential issues with the migration. Shadowing should **not** be used in production when performing a [live migration](#zero-downtime).
-
-##### Test query results and performance
-
-You can manually validate your queries by testing a subset of "critical queries" on an otherwise idle CockroachDB cluster:
-
-- Check the application logs for error messages and the API response time. If application requests are slower than expected, use the **SQL Activity** page on the [CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/statements-page.md %}) or [DB Console]({% link {{ page.version.version }}/ui-statements-page.md %}) to find the longest-running queries that are part of that application request. If necessary, tune the queries according to our best practices for [SQL performance]({% link {{ page.version.version }}/performance-best-practices-overview.md %}).
-
-- Compare the results of the queries and check that they are identical in both the source database and CockroachDB. To do this, you can use [MOLT Verify]({% link molt/molt-verify.md %}). A quick spot check is sketched after this list.
-
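-For example, you can compare simple aggregates on both databases; MOLT Verify performs this kind of comparison comprehensively. A minimal sketch, assuming a hypothetical `employees` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- Run on both the source database and CockroachDB;
--- the counts should match.
-SELECT count(*) FROM employees;
-~~~
-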
-Test performance on a CockroachDB cluster that is appropriately [sized](#capacity-planning) for your workload:
-
-1. Run the application at single or very low concurrency and verify that its performance is acceptable. The cluster should be provisioned with more than enough resources to handle this workload, because you need to verify that the queries will be fast enough when there are zero resource bottlenecks.
-
-1. Run stress tests with at least the production concurrency and rate, but ideally higher in order to verify that the system can handle unexpected spikes in load. This can also uncover [contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) issues that will appear during spikes in app load, which may require [application design changes](#handling-transaction-contention) to avoid.
-
-### Perform a dry run
-
-To further minimize potential surprises when you conduct the migration, practice [cutover](#cutover-strategy) using your application and similar volumes of data on a "dry-run" environment. Use a test or development environment that is as similar as possible to production.
-
-Performing a dry run is highly recommended. In addition to demonstrating how long the migration may take, a dry run also helps to ensure that team members understand what they need to do during the migration, and that changes to the application are coordinated.
-
-## Conduct the migration
-
-Before proceeding, double-check that you are [prepared to migrate](#prepare-for-migration).
-
-Once you are ready to migrate, optionally [drop the database]({% link {{ page.version.version }}/drop-database.md %}) and delete the test cluster so that you can get a clean start:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP DATABASE {database-name} CASCADE;
-~~~
-
-Alternatively, [truncate]({% link {{ page.version.version }}/truncate.md %}) each table you used for testing to avoid having to recreate your schema:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-TRUNCATE {table-name} CASCADE;
-~~~
-
-Migrate your data to CockroachDB using the method that is appropriate for your [downtime requirements](#approach-to-downtime) and [cutover strategy](#cutover-strategy).
-
-### Lift and Shift
-
-With this method, you achieve consistency by performing the cutover only after all writes have been replicated from the source database to CockroachDB. This requires downtime, during which application traffic is stopped.
-
-The following is a high-level overview of the migration steps. For considerations and details about the pros and cons of this approach, see [Migration Strategy: Lift and Shift]({% link {{ page.version.version }}/migration-strategy-lift-and-shift.md %}).
-
-1. Stop application traffic to your source database. **This begins downtime.**
-1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB.
-1. After the data is migrated, use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. Perform a [cutover](#cutover-strategy) by resuming application traffic, now routed to CockroachDB.
-{% comment %}1. If you want the ability to [roll back](#all-at-once-rollback) the migration, replicate data back to the source database.{% endcomment %}
-
-### Zero Downtime
-
-During a "live migration", downtime is minimized by performing the cutover while writes are still being replicated from the source database to CockroachDB. Inconsistencies are resolved through manual reconciliation.
-
-The following is a high-level overview of the migration steps. The two approaches are mutually exclusive, and each has [tradeoffs](#minimal-downtime). {% comment %}For details on this migration strategy, see [Migration Strategy: Zero Downtime]({% link {{ page.version.version }}/migration-strategy-lift-and-shift.md %}).{% endcomment %}
-
-To prioritize consistency and minimize downtime:
-
-1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB. Enable [**continuous replication**]({% link molt/molt-fetch.md %}#load-data-and-replicate-changes) after it performs the initial load of data into CockroachDB.
-1. As the data is migrating, use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. Once nearly all data from your source database has been moved to CockroachDB (for example, when replication lag is under 1 second or fewer than 1,000 rows remain to be replicated), stop application traffic to your source database. **This begins downtime.**
-1. Wait for MOLT Fetch to finish replicating changes to CockroachDB.
-1. Perform a [cutover](#cutover-strategy) by resuming application traffic, now routed to CockroachDB.
-
-To achieve zero downtime while tolerating some inconsistency:
-
-1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB. Use the tool to **replicate ongoing changes** after performing the initial load of data into CockroachDB.
-1. As the data is migrating, you can use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. After nearly all data from your source database has been moved to CockroachDB (for example, when replication lag is under 1 second or fewer than 1,000 rows remain to be replicated), perform an [*immediate cutover*](#cutover-strategy) by pointing application traffic to CockroachDB.
-1. Manually reconcile any inconsistencies caused by writes that were not replicated during the cutover.
-1. Close the connection to the source database when you are ready to finish the migration.
-
-## Complete the migration
-
-After you have successfully [conducted the migration](#conduct-the-migration):
-
-- Notify the teams and other stakeholders impacted by the migration.
-- Retire any test or development environments used to verify the migration.
-- Extend the document you created when [developing your migration plan](#develop-a-migration-plan) with any issues encountered and follow-up work that needs to be done.
-
-## See also
-
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
-- [PostgreSQL Compatibility]({% link {{ page.version.version }}/postgresql-compatibility.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Create a User-defined Schema]({% link {{ page.version.version }}/schema-design-schema.md %})
-- [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices)
-- [Secondary index best practices]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices)
-- [Transaction contention best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention)
-- [Back Up and Restore]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
diff --git a/src/current/v23.2/migration-strategy-lift-and-shift.md b/src/current/v23.2/migration-strategy-lift-and-shift.md
deleted file mode 100644
index 402f7126bec..00000000000
--- a/src/current/v23.2/migration-strategy-lift-and-shift.md
+++ /dev/null
@@ -1,139 +0,0 @@
----
-title: "Migration Strategy: Lift and Shift"
-summary: Learn about the 'Lift and Shift' data migration strategy
-toc: true
-docs_area: migrate
----
-
-There are multiple strategies for [migrating off legacy technology]({% link {{ page.version.version }}/migration-overview.md %}) to CockroachDB.
-
-This page discusses the ["Lift and Shift" strategy]({% link {{ page.version.version }}/migration-overview.md %}#lift-and-shift) for migrating your database, a commonly used approach also known as "Big Bang": your data is moved in its entirety from a source system to a target system within a defined period of time. This typically involves some application downtime and can involve some service degradation.
-
-Lift and Shift may not be the right approach if strong application service continuity is required during the migration. It can be a viable method if application downtime is permitted.
-
-{{site.data.alerts.callout_info}}
-The information on this page assumes you have already reviewed the [migration overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-## Pros and Cons
-
-On the spectrum of different data migration strategies, Lift and Shift has the following pros and cons. The terms "lower" and "higher" are not absolute, but relative to other approaches.
-
-Pros:
-
-- Conceptually straightforward.
-- Less complex: If you can afford some downtime, the overall effort will usually be lower, as will the chance of errors.
-- Shorter time start-to-finish: In general, the more downtime you can afford, the shorter the overall migration project timeframe can be.
-- Lower technical risk: It does not involve running multiple systems alongside each other for an extended period of time.
-- Easy to practice [dry runs]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) of import/export using testing/non-production systems.
-- Good import/export tooling is available (e.g., external tools like [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %}), [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %}), and [Striim]({% link {{ page.version.version }}/striim.md %}); or internal tools like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), and [`cockroach userfile upload`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %})).
-- If your application already has regularly scheduled maintenance windows, you can perform the migration during one of them without causing your customers additional downtime.
-
-Cons:
-
-- All or nothing: It either works or does not work; once you start, you have to finish or [roll back]({% link {{ page.version.version }}/migration-overview.md %}#all-at-once-rollback).
-- Higher project risk: The project **must** be completed to meet a given [downtime / service degradation window]({% link {{ page.version.version }}/migration-overview.md %}#downtime-window).
-- Application service continuity requirements must be relaxed (that is, application downtime or increased latency must be tolerated).
-
-## Process design considerations
-
-{{site.data.alerts.callout_info}}
-The high-level considerations in this section only refer to the data-loading portion of your migration. They assume you are following the steps in the overall migration process described in [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-Keep in mind the following considerations when designing a Lift and Shift data migration process.
-
-- [Decide on your data migration tooling.](#managed-migration)
-- [Decide which data formats you will use.](#data-formats)
-- [Design a restartable process.](#restartable)
-- [Design a scalable process.](#scalable)
-
-
-<a name="managed-migration"></a>
-### Decide on your data migration tooling
-
-If you plan to do your bulk data migration using a managed migration service, you must have a secure, publicly available CockroachDB cluster. CockroachDB supports the following [third-party migration services]({% link {{ page.version.version }}/third-party-database-tools.md %}#data-migration-tools):
-
-- [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Striim]({% link {{ page.version.version }}/striim.md %})
-
-{{site.data.alerts.callout_info}}
-Depending on the migration service you choose, [long-running transactions]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}#hanging-or-stuck-queries) can occur. In some cases, these queries will cause [transaction retry errors]({% link {{ page.version.version }}/common-errors.md %}#restart-transaction). If you encounter these errors while migrating to CockroachDB using a managed migration service, please reach out to our [Support Resources]({% link {{ page.version.version }}/support-resources.md %}).
-{{site.data.alerts.end}}
-
-If you will not be using a managed migration service, see the following sections for more information on how to use SQL statements like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), etc.
-
-
-<a name="data-formats"></a>
-### Decide which data formats and storage media you will use
-
-It's important to decide which data formats, storage media, and database features you will use to migrate your data.
-
-Data formats that can be imported by CockroachDB include:
-
-- [SQL]({% link {{ page.version.version }}/schema-design-overview.md %}) for the [schema import]({% link cockroachcloud/migrations-page.md %}).
-- [CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}) for table data.
-- [Avro]({% link {{ page.version.version }}/migrate-from-avro.md %}) for table data.
-
-The storage media you use for export and import can be intermediate data files or data streamed over the network. Options include:
-
-- Local "userdata" storage for small tables (see [`cockroach userdata`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %}), [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})).
-- Cloud blob storage (see [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}), [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})).
-- Direct wire transfers (see [managed migration services](#managed-migration)).
-
-Database features for export/import from the source and target databases can include:
-
-- Tools for exporting from the source database may include `pg_dump --schema-only`, `COPY ... TO`, `mysqldump`, `expdp`, etc.
-- For import into CockroachDB, use [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}) or [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}). For bulk data migrations, most users should use `IMPORT INTO` because the tables will be offline anyway, and `IMPORT INTO` can [perform the data import much faster]({% link {{ page.version.version }}/import-performance-best-practices.md %}) than `COPY FROM`. A sketch follows this list.
-
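-As a minimal sketch, assuming a hypothetical `employees` table and CSV files exported to a cloud storage bucket (the values in curly braces are placeholders):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-IMPORT INTO employees (id, name, start_date)
-    CSV DATA ('s3://{BUCKET NAME}/employees.csv?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}');
-~~~
-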
-Decide which of the options above will meet your requirements while resulting in a process that is [restartable](#restartable) and [scalable](#scalable).
-
-
-<a name="restartable"></a>
-### Design a restartable process
-
-To have a higher chance of success, design your data migration process so it can be stopped and restarted from an intermediate state at any time during the process. This will help minimize errors and avoid wasted effort.
-
-Keep the following requirements in mind as you design a restartable import/export process:
-
-- Bulk migrate data in manageably sized batches for your source and target systems.
- - This is a best practice. If something happens to the target cluster during import, the amount of wasted work will be minimized.
-- Implement progress/state keeping with process restart capabilities (a sketch follows this list).
-- Make sure your export process is idempotent: the same input to your export process should return the same output data.
-- If possible, export and import the majority of your data before taking down the source database. This can ensure that you only have to deal with the incremental changes from your last import to complete the migration process.
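-
-One way to implement progress/state keeping is a small checkpoint table on the target cluster that records which export files have been imported. This is a minimal sketch under assumed names, not a prescribed design:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- Hypothetical checkpoint table: one row per exported file or batch.
-CREATE TABLE IF NOT EXISTS migration_progress (
-    batch_file STRING PRIMARY KEY,
-    status STRING NOT NULL DEFAULT 'pending',
-    imported_at TIMESTAMPTZ
-);
--- Record a batch as done after its import commits, so a restarted
--- process can skip files already marked 'done'.
-UPSERT INTO migration_progress (batch_file, status, imported_at)
-    VALUES ('employees_0001.csv', 'done', now());
-~~~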
-
-
-<a name="scalable"></a>
-### Design a scalable and performant process
-
-Once your process is [restartable and resilient to failures](#design-a-restartable-process), it's important to also make sure it will scale to the needs of your data set. The larger the data set you are migrating to CockroachDB, the more important the performance and scalability of your process will be.
-
-Keep the following requirements in mind:
-
-- Schema and data should be imported separately.
-- Your process should handle multiple files across multiple export/import streams concurrently.
- - For best performance, these files should contain presorted, disjoint data sets.
-- Benchmark the performance of your migration process to help ensure it will complete within the allotted downtime window.
-
-For more information about import performance, see [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Migrate with AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
-- [Migrate and Replicate Data with Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Migrate and Replicate Data with Striim]({% link {{ page.version.version }}/striim.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Back Up and Restore]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
-- [Export data with Changefeeds]({% link {{ page.version.version }}/export-data-with-changefeeds.md %})
-- [`COPY`]({% link {{ page.version.version }}/copy.md %})
-- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
-- [Client connection parameters]({% link {{ page.version.version }}/connection-parameters.md %})
-
-
-{% comment %} eof {% endcomment %}
diff --git a/src/current/v23.2/qlik.md b/src/current/v23.2/qlik.md
index bdb646d7b20..da53e969bfd 100644
--- a/src/current/v23.2/qlik.md
+++ b/src/current/v23.2/qlik.md
@@ -68,7 +68,7 @@ Complete the following items before using Qlik Replicate:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables, or add transformation rules. If you make substantial schema changes, the Qlik Replicate migration may fail.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -96,7 +96,7 @@ In the Qlik Replicate interface, CockroachDB is configured as a PostgreSQL **sou
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v23.2/read-committed.md b/src/current/v23.2/read-committed.md
index 3b6ac6c051f..94c697270f2 100644
--- a/src/current/v23.2/read-committed.md
+++ b/src/current/v23.2/read-committed.md
@@ -15,7 +15,7 @@ docs_area: deploy
- Your application needs to maintain a high workload concurrency with minimal [transaction retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and it can tolerate potential [concurrency anomalies](#concurrency-anomalies). Predictable query performance at high concurrency is more valuable than guaranteed transaction [serializability]({% link {{ page.version.version }}/developer-basics.md %}#serializability-and-transaction-contention).
-- You are [migrating an application to CockroachDB]({% link {{ page.version.version }}/migration-overview.md %}) that was built at a `READ COMMITTED` isolation level on the source database, and it is not feasible to modify your application to use `SERIALIZABLE` isolation.
+- You are [migrating an application to CockroachDB]({% link molt/migration-overview.md %}) that was built at a `READ COMMITTED` isolation level on the source database, and it is not feasible to modify your application to use `SERIALIZABLE` isolation.
Whereas `SERIALIZABLE` isolation guarantees data correctness by placing transactions into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}), `READ COMMITTED` isolation permits some [concurrency anomalies](#concurrency-anomalies) in exchange for minimizing transaction aborts, [retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and blocking. Compared to `SERIALIZABLE` transactions, `READ COMMITTED` transactions return fewer [serialization errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) that require client-side handling. See [`READ COMMITTED` transaction behavior](#read-committed-transaction-behavior).
@@ -942,4 +942,4 @@ The following affect the performance of `READ COMMITTED` transactions:
- [Serializable Transactions]({% link {{ page.version.version }}/demo-serializable.md %})
- [What Write Skew Looks Like](https://www.cockroachlabs.com/blog/what-write-skew-looks-like/)
- [Read Committed RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20230122_read_committed_isolation.md)
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
diff --git a/src/current/v23.2/sql-faqs.md b/src/current/v23.2/sql-faqs.md
index 1d971591089..98b8c51eb34 100644
--- a/src/current/v23.2/sql-faqs.md
+++ b/src/current/v23.2/sql-faqs.md
@@ -13,7 +13,7 @@ docs_area: get_started
{{site.data.alerts.callout_info}}
You can also use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement to bulk-insert CSV data into an existing table.
{{site.data.alerts.end}}
-- To bulk-insert data into a new table, the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement performs better than `INSERT`. `IMPORT` can also be used to [migrate data from other databases]({% link {{ page.version.version }}/migration-overview.md %}) like MySQL, Oracle, and PostgreSQL.
+- To bulk-insert data into a new table, the [`IMPORT`]({% link {{ page.version.version }}/import.md %}) statement performs better than `INSERT`. `IMPORT` can also be used to [migrate data from other databases]({% link molt/migration-overview.md %}) like MySQL, Oracle, and PostgreSQL.
## How do I auto-generate unique row IDs in CockroachDB?
diff --git a/src/current/v23.2/striim.md b/src/current/v23.2/striim.md
index 436d7440cbd..67526c8ff9d 100644
--- a/src/current/v23.2/striim.md
+++ b/src/current/v23.2/striim.md
@@ -37,7 +37,7 @@ Complete the following items before using Striim:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -110,7 +110,7 @@ To perform continuous replication of ongoing changes, create a Striim applicatio
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v23.2/take-and-restore-encrypted-backups.md b/src/current/v23.2/take-and-restore-encrypted-backups.md
index e08e0b51f36..d7b3472d34b 100644
--- a/src/current/v23.2/take-and-restore-encrypted-backups.md
+++ b/src/current/v23.2/take-and-restore-encrypted-backups.md
@@ -407,7 +407,7 @@ To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({%
- [Take Full and Incremental Backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Take and Restore Locality-aware Backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %})
- [Take Backups with Revision History and Restore from a Point-in-time]({% link {{ page.version.version }}/take-backups-with-revision-history-and-restore-from-a-point-in-time.md %})
-- [`IMPORT`]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.2/take-and-restore-locality-aware-backups.md b/src/current/v23.2/take-and-restore-locality-aware-backups.md
index cb6736f61a5..bdff82ab7c5 100644
--- a/src/current/v23.2/take-and-restore-locality-aware-backups.md
+++ b/src/current/v23.2/take-and-restore-locality-aware-backups.md
@@ -237,6 +237,6 @@ RESUME JOB 27536791415282;
- [Take Full and Incremental Backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Take and Restore Encrypted Backups]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %})
- [Take Backups with Revision History and Restore from a Point-in-time]({% link {{ page.version.version }}/take-backups-with-revision-history-and-restore-from-a-point-in-time.md %})
-- [`IMPORT`]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.2/take-backups-with-revision-history-and-restore-from-a-point-in-time.md b/src/current/v23.2/take-backups-with-revision-history-and-restore-from-a-point-in-time.md
index 4904d50f203..cc59f5c060a 100644
--- a/src/current/v23.2/take-backups-with-revision-history-and-restore-from-a-point-in-time.md
+++ b/src/current/v23.2/take-backups-with-revision-history-and-restore-from-a-point-in-time.md
@@ -53,7 +53,7 @@ To view the available backup subdirectories you can restore from, use [`SHOW BAC
- [Take Full and Incremental Backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Take and Restore Encrypted Backups]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %})
- [Take and Restore Locality-aware Backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %})
-- [`IMPORT`]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v23.2/take-full-and-incremental-backups.md b/src/current/v23.2/take-full-and-incremental-backups.md
index 97f55aa4e4c..d994a8a7c37 100644
--- a/src/current/v23.2/take-full-and-incremental-backups.md
+++ b/src/current/v23.2/take-full-and-incremental-backups.md
@@ -384,7 +384,7 @@ To create a table with `exclude_data_from_backup`, see [Create a table with data
- [Take and Restore Encrypted Backups]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %})
- [Take and Restore Locality-aware Backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %})
- [Take Backups with Revision History and Restore from a Point-in-time]({% link {{ page.version.version }}/take-backups-with-revision-history-and-restore-from-a-point-in-time.md %})
-- [`IMPORT`]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
- [Expire Past Backups]({% link {{ page.version.version }}/expire-past-backups.md %})
diff --git a/src/current/v24.1/aws-dms.md b/src/current/v24.1/aws-dms.md
index c7b5a992383..962c66cabb9 100644
--- a/src/current/v24.1/aws-dms.md
+++ b/src/current/v24.1/aws-dms.md
@@ -41,7 +41,7 @@ Complete the following items before starting the DMS migration:
- Manually create all schema objects in the target CockroachDB cluster. If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, you can [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema.
- - All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ - All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
- Drop all [constraints]({% link {{ page.version.version }}/constraints.md %}) per the [AWS DMS best practices](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.Performance). You can recreate them after the [full load completes](#step-3-verify-the-migration). AWS DMS can create a basic schema, but does not create [indexes]({% link {{ page.version.version }}/indexes.md %}) or constraints such as [foreign keys]({% link {{ page.version.version }}/foreign-key.md %}) and [defaults]({% link {{ page.version.version }}/default-value.md %}).
@@ -406,7 +406,7 @@ The `BatchApplyEnabled` setting can improve replication performance and is recom
## See Also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %})
- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
diff --git a/src/current/v24.1/build-a-java-app-with-cockroachdb-hibernate.md b/src/current/v24.1/build-a-java-app-with-cockroachdb-hibernate.md
index deaf6ead590..f1f2ca4ec16 100644
--- a/src/current/v24.1/build-a-java-app-with-cockroachdb-hibernate.md
+++ b/src/current/v24.1/build-a-java-app-with-cockroachdb-hibernate.md
@@ -130,9 +130,9 @@ APP: getAccountBalance(2) --> 350.00
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer]({% link {{ page.version.version }}/architecture/storage-layer.md %}) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v24.1/build-a-java-app-with-cockroachdb.md b/src/current/v24.1/build-a-java-app-with-cockroachdb.md
index 38d4b02bb14..f7eca7a7245 100644
--- a/src/current/v24.1/build-a-java-app-with-cockroachdb.md
+++ b/src/current/v24.1/build-a-java-app-with-cockroachdb.md
@@ -269,9 +269,9 @@ props.setProperty("options", "-c sql_safe_updates=true -c statement_timeout=30")
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer]({% link {{ page.version.version }}/architecture/storage-layer.md %}) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v24.1/build-a-python-app-with-cockroachdb-sqlalchemy.md b/src/current/v24.1/build-a-python-app-with-cockroachdb-sqlalchemy.md
index b50f73477af..c96ccfab463 100644
--- a/src/current/v24.1/build-a-python-app-with-cockroachdb-sqlalchemy.md
+++ b/src/current/v24.1/build-a-python-app-with-cockroachdb-sqlalchemy.md
@@ -204,9 +204,9 @@ Instead, we recommend breaking your transaction into smaller units of work (or "
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code that uses an ORM and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}) such as are generated by calls to [`session.bulk_save_objects()`](https://docs.sqlalchemy.org/orm/session_api.html?highlight=bulk_save_object#sqlalchemy.orm.session.Session.bulk_save_objects).
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Prefer the query builder
diff --git a/src/current/v24.1/copy.md b/src/current/v24.1/copy.md
index aa825fdc003..144da73b4ac 100644
--- a/src/current/v24.1/copy.md
+++ b/src/current/v24.1/copy.md
@@ -358,10 +358,10 @@ You can copy CSV data into CockroachDB using the following methods:
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [`EXPORT`]({% link {{ page.version.version }}/export.md %})
- [Install a Driver or ORM Framework]({% link {{ page.version.version }}/install-client-drivers.md %})
{% comment %}
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
{% endcomment %}
\ No newline at end of file
diff --git a/src/current/v24.1/debezium.md b/src/current/v24.1/debezium.md
index ea2eb513b50..a9d14707ea4 100644
--- a/src/current/v24.1/debezium.md
+++ b/src/current/v24.1/debezium.md
@@ -116,7 +116,7 @@ Once all of the [prerequisite steps](#before-you-begin) are completed, you can u
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v24.1/frequently-asked-questions.md b/src/current/v24.1/frequently-asked-questions.md
index 07e00b6959f..c94ba15582b 100644
--- a/src/current/v24.1/frequently-asked-questions.md
+++ b/src/current/v24.1/frequently-asked-questions.md
@@ -147,7 +147,7 @@ Note, however, that the protocol used doesn't significantly impact how easy it i
### Can a PostgreSQL or MySQL application be migrated to CockroachDB?
-Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}) or [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
+Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}) or [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
We also fully support [importing your data via CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}).
diff --git a/src/current/v24.1/goldengate.md b/src/current/v24.1/goldengate.md
index 7fd572094e4..30ee23c17df 100644
--- a/src/current/v24.1/goldengate.md
+++ b/src/current/v24.1/goldengate.md
@@ -514,7 +514,7 @@ Run the steps in this section on a machine and in a directory where Oracle Golde
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v24.1/import-into.md b/src/current/v24.1/import-into.md
index ad4a2c9ca5c..780b9195104 100644
--- a/src/current/v24.1/import-into.md
+++ b/src/current/v24.1/import-into.md
@@ -158,7 +158,7 @@ You can control the `IMPORT` process's behavior using any of the following key-v
For examples showing how to use these options, see the [Examples section]({% link {{ page.version.version }}/import-into.md %}#examples).
-For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link molt/migration-overview.md %}).
## View and control import jobs
@@ -285,6 +285,6 @@ For more information about importing data from Avro, including examples, see [Mi
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
diff --git a/src/current/v24.1/import-performance-best-practices.md b/src/current/v24.1/import-performance-best-practices.md
index b85d28f1fe6..ca69a57fc4a 100644
--- a/src/current/v24.1/import-performance-best-practices.md
+++ b/src/current/v24.1/import-performance-best-practices.md
@@ -160,9 +160,9 @@ If you cannot both split and sort your dataset, the performance of either split
## See also
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Migrate from Oracle]({% link {{ page.version.version }}/migrate-from-oracle.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
diff --git a/src/current/v24.1/index.md b/src/current/v24.1/index.md
index 2008d323859..010a71f91f4 100644
--- a/src/current/v24.1/index.md
+++ b/src/current/v24.1/index.md
@@ -99,11 +99,11 @@ docs_area:
diff --git a/src/current/v24.1/insert-data.md b/src/current/v24.1/insert-data.md
index a7e76489a0c..dc3c26ee1f5 100644
--- a/src/current/v24.1/insert-data.md
+++ b/src/current/v24.1/insert-data.md
@@ -105,7 +105,7 @@ conn.commit()
Reference information related to this task:
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`INSERT`]({% link {{ page.version.version }}/insert.md %})
- [`UPSERT`]({% link {{ page.version.version }}/upsert.md %})
- [Transaction Contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention)
diff --git a/src/current/v24.1/migrate-from-avro.md b/src/current/v24.1/migrate-from-avro.md
index 39b7bdbc9aa..de42d232917 100644
--- a/src/current/v24.1/migrate-from-avro.md
+++ b/src/current/v24.1/migrate-from-avro.md
@@ -216,8 +216,8 @@ You will need to run [`ALTER TABLE ... ADD CONSTRAINT`]({% link {{ page.version.
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV][csv]
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.1/migrate-from-csv.md b/src/current/v24.1/migrate-from-csv.md
index e0e1d92da9c..d30eeb2e98c 100644
--- a/src/current/v24.1/migrate-from-csv.md
+++ b/src/current/v24.1/migrate-from-csv.md
@@ -176,8 +176,8 @@ IMPORT INTO employees (emp_no, birth_date, first_name, last_name, gender, hire_d
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.1/migrate-from-geojson.md b/src/current/v24.1/migrate-from-geojson.md
index 2c3326af39e..e0d804f0be7 100644
--- a/src/current/v24.1/migrate-from-geojson.md
+++ b/src/current/v24.1/migrate-from-geojson.md
@@ -122,9 +122,9 @@ IMPORT INTO underground_storage_tank CSV DATA ('http://localhost:3000/tanks.csv'
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.1/migrate-from-geopackage.md b/src/current/v24.1/migrate-from-geopackage.md
index 53acdaff4a7..c3fabeb57ef 100644
--- a/src/current/v24.1/migrate-from-geopackage.md
+++ b/src/current/v24.1/migrate-from-geopackage.md
@@ -114,9 +114,9 @@ IMPORT INTO busstops CSV DATA ('http://localhost:3000/busstops.csv') WITH skip =
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.1/migrate-from-mysql.md b/src/current/v24.1/migrate-from-mysql.md
deleted file mode 100644
index 29cbebde972..00000000000
--- a/src/current/v24.1/migrate-from-mysql.md
+++ /dev/null
@@ -1,412 +0,0 @@
----
-title: Migrate from MySQL
-summary: Learn how to migrate data from MySQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page describes key considerations and provides a basic [example](#example-migrate-world-to-cockroachdb) of migrating data from MySQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [MySQL migration example](#example-migrate-world-to-cockroachdb) on this page demonstrates how to use [MOLT tooling]({% link {{ page.version.version }}/migration-overview.md %}#molt) to update the MySQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-You will likely need to make application changes due to differences in syntax between MySQL and CockroachDB. Along with the [general considerations in the migration overview]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), also consider the following MySQL-specific information as you develop your migration plan.
-
-When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), MySQL syntax that cannot automatically be converted will be displayed in the [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report). Such syntax may include the following.
-
-#### String case sensitivity
-
-Strings are case-insensitive in MySQL and case-sensitive in CockroachDB. You may need to edit your MySQL data or queries to get the results you expect from CockroachDB. For example, string comparisons that relied on MySQL's case-insensitive matching will need to be rewritten for CockroachDB.
-
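-For example, the following comparison is true in MySQL under its default collation, but false in CockroachDB:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- True in MySQL (case-insensitive default collation), false in CockroachDB:
-SELECT 'ABC' = 'abc';
-~~~
-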
-For more information about the case sensitivity of strings in MySQL, see [Case Sensitivity in String Searches](https://dev.mysql.com/doc/refman/8.0/en/case-sensitivity.html) from the MySQL documentation. For more information about CockroachDB strings, see [`STRING`]({% link {{ page.version.version }}/string.md %}).
-
-#### Identifier case sensitivity
-
-Identifiers are case-sensitive in MySQL and [case-insensitive in CockroachDB]({% link {{ page.version.version }}/keywords-and-identifiers.md %}#identifiers). When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either keep case sensitivity by enclosing identifiers in double quotes, or make identifiers case-insensitive by converting them to lowercase.
-
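-For example, unquoted identifiers are folded to lowercase in CockroachDB, while quoted identifiers preserve case:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- These statements reference the same table in CockroachDB:
-SELECT * FROM MyTable;
-SELECT * FROM mytable;
--- This references a distinct, case-sensitive identifier:
-SELECT * FROM "MyTable";
-~~~
-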
-#### `AUTO_INCREMENT` attribute
-
-The MySQL [`AUTO_INCREMENT`](https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html) attribute, which creates sequential column values, is not supported in CockroachDB. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), columns with `AUTO_INCREMENT` can be converted to use [sequences]({% link {{ page.version.version }}/create-sequence.md %}), `UUID` values with [`gen_random_uuid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions), or unique `INT8` values using [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
-
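-As an illustration, a hypothetical MySQL table that uses `AUTO_INCREMENT` might be converted as follows when the `unique_rowid()` option is selected (a sketch; the exact output of the Schema Conversion Tool may differ):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- MySQL source:
----   CREATE TABLE users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(50));
--- Possible CockroachDB conversion:
-CREATE TABLE users (
-    id INT8 NOT NULL DEFAULT unique_rowid(),
-    name VARCHAR(50),
-    PRIMARY KEY (id)
-);
-~~~
-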
-{{site.data.alerts.callout_info}}
-Changing a column type during schema conversion will cause [MOLT Verify]({% link molt/molt-verify.md %}) to identify a type mismatch during [data validation](#step-3-validate-the-migrated-data). This is expected behavior.
-{{site.data.alerts.end}}
-
-#### `ENUM` type
-
-MySQL `ENUM` types are defined in table columns. On CockroachDB, [`ENUM`]({% link {{ page.version.version }}/enum.md %}) is a standalone type. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either deduplicate the `ENUM` definitions or create a separate type for each column.
-
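-For example, a hypothetical column-level MySQL `ENUM` corresponds to a standalone type in CockroachDB:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- MySQL source column:  status ENUM('active', 'inactive')
--- CockroachDB equivalent, using a standalone type:
-CREATE TYPE status_enum AS ENUM ('active', 'inactive');
-CREATE TABLE accounts (
-    id INT8 PRIMARY KEY,
-    status status_enum
-);
-~~~
-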
-#### `TINYINT` type
-
-`TINYINT` data types are not supported in CockroachDB. The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) automatically converts `TINYINT` columns to [`INT2`]({% link {{ page.version.version }}/int.md %}) (`SMALLINT`).
-
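-For example, a hypothetical MySQL column `flag TINYINT` is converted as follows:
-
-~~~ sql
--- MySQL source:  CREATE TABLE t (flag TINYINT);
--- Converted CockroachDB definition:
-CREATE TABLE t (flag INT2);
-~~~
-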
-#### Geospatial types
-
-MySQL geometry types are not converted to CockroachDB [geospatial types]({% link {{ page.version.version }}/spatial-data-overview.md %}#spatial-objects) by the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql). They should be manually converted to the corresponding types in CockroachDB.
-
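-For example, a hypothetical MySQL `POINT` column could be manually redefined with a CockroachDB spatial type:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- Hypothetical `places` table: add a spatial column using a CockroachDB geometry type.
-ALTER TABLE places ADD COLUMN location GEOMETRY(POINT, 4326);
-~~~
-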
-#### `FIELD` function
-
-The MySQL `FIELD` function is not supported in CockroachDB. Instead, you can use the [`array_position`]({% link {{ page.version.version }}/functions-and-operators.md %}#array-functions) function, which returns the index of the first occurrence of an element in an array.
-
-Example usage:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT array_position(ARRAY[4,1,3,2],1);
-~~~
-
-~~~
- array_position
-------------------
- 2
-(1 row)
-~~~
-
-While MySQL returns 0 when the element is not found, CockroachDB returns `NULL`. If you use `array_position` in an `ORDER BY` clause, be aware that the sort is still applied when the element is not found, with `NULL` values sorting first. As a workaround, you can use the [`COALESCE`]({% link {{ page.version.version }}/functions-and-operators.md %}#conditional-and-function-like-operators) operator.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT * FROM table_a ORDER BY COALESCE(array_position(ARRAY[4,1,3,2],5),999);
-~~~
-
-## Load MySQL data
-
-You can use [MOLT Fetch]({% link molt/molt-fetch.md %}) to migrate MySQL data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-The [following example](#example-migrate-world-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `world` to CockroachDB
-
-The following steps demonstrate [converting a schema]({% link {{ page.version.version }}/migration-overview.md %}#convert-the-schema), performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that MySQL data can be properly migrated to CockroachDB and your application queries tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses the [MySQL `world` data set](https://dev.mysql.com/doc/index-other.html) and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the [`world` data set](https://dev.mysql.com/doc/index-other.html).
-
-1. Create the `world` database on your MySQL instance, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqlsh -uroot --sql --file {path}/world-db/world.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) and [create the CockroachDB {{ site.data.products.standard }} cluster]({% link cockroachcloud/create-your-cluster.md %}).
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the MySQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) to convert the `world` schema for compatibility with CockroachDB. The schema has three tables: `city`, `country`, and `countrylanguage`.
-
-1. Dump the MySQL `world` schema with the following [`mysqldump`](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-sql-format.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot --no-data world > world_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) in the {{ site.data.products.cloud }} Console and [add a new MySQL schema]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema).
-
- For **AUTO_INCREMENT Conversion Option**, select the [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions) option. This will convert the `ID` column in the `city` table, which has MySQL type `int` and `AUTO_INCREMENT`, to a CockroachDB [`INT8`]({% link {{ page.version.version }}/int.md %}) type with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). For context on this option, see [`AUTO_INCREMENT` attribute](#auto_increment-attribute).
-
- The `UUID` and `unique_rowid()` options are each preferred for [different use cases]({% link {{ page.version.version }}/sql-faqs.md %}#what-are-the-differences-between-uuid-sequences-and-unique_rowid). For this example, selecting the `unique_rowid()` option makes [loading the data](#step-2-load-the-mysql-data) more straightforward in a later step, since both the source and target columns will have integer types.
-
-1. [Upload `world_schema.sql`]({% link cockroachcloud/migrations-page.md %}?filters=mysql#upload-file) to the Schema Conversion Tool.
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}?filters=mysql#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report) shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your MySQL database credentials]({% link cockroachcloud/migrations-page.md %}?filters=mysql#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the MySQL database.
- {{site.data.alerts.end}}
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Before you migrate the converted schema, click the **Statements** tab to view the [Statements list]({% link cockroachcloud/migrations-page.md %}?filters=mysql#statements-list). Scroll down to the `CREATE TABLE countrylanguage` statement and edit the statement to add a [collation]({% link {{ page.version.version }}/collate.md %}) (`COLLATE en_US`) on the `language` column:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE TABLE countrylanguage (
- countrycode VARCHAR(3) DEFAULT '' NOT NULL,
- language VARCHAR(30) COLLATE en_US DEFAULT '' NOT NULL,
- isofficial countrylanguage_isofficial_enum
- DEFAULT 'F'
- NOT NULL,
- percentage DECIMAL(4,1) DEFAULT '0.0' NOT NULL,
- PRIMARY KEY (countrycode, language),
- INDEX countrycode (countrycode),
- CONSTRAINT countrylanguage_ibfk_1
- FOREIGN KEY (countrycode) REFERENCES country (code)
- )
- ~~~
-
- Click **Save**.
-
-    This is a workaround to prevent [data validation](#step-3-validate-the-migrated-data) from failing due to collation mismatches. For more details, see the [MOLT Verify]({% link molt/molt-verify.md %}#known-limitations) documentation.
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#migrate-the-schema) to create a new {{ site.data.products.standard }} cluster with the converted schema. Name the database `world`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
-1. Open a SQL shell to the CockroachDB `world` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. For large imports, Cockroach Labs recommends [removing indexes prior to loading data]({% link {{ page.version.version }}/import-performance-best-practices.md %}#import-into-a-schema-with-secondary-indexes) and recreating them afterward. This provides increased visibility into the import progress and the ability to retry each step independently.
-
- Show the indexes on the `world` database:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW INDEXES FROM DATABASE world;
- ~~~
-
- The `countrycode` [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) indexes on the `city` and `countrylanguage` tables can be removed for now:
-
- ~~~
- table_name | index_name | index_schema | non_unique | seq_in_index | column_name | definition | direction | storing | implicit | visible
- ---------------------------------+-------------------------------------------------+--------------+------------+--------------+-----------------+-----------------+-----------+---------+----------+----------
- ...
- city | countrycode | public | t | 2 | id | id | ASC | f | t | t
- city | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- ...
- countrylanguage | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- countrylanguage | countrycode | public | t | 2 | language | language | ASC | f | t | t
- ...
- ~~~
-
-1. Drop the `countrycode` indexes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX city@countrycode;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX countrylanguage@countrycode;
- ~~~
-
- You will recreate the indexes after [loading the data](#step-2-load-the-mysql-data).
-
-### Step 2. Load the MySQL data
-
-Load the `world` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-When MySQL dumps data, the tables are not ordered by [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) dependencies, so a referenced table may be dumped after a table that references it. It is best to disable foreign key checks when loading data into CockroachDB, and to revalidate the foreign keys on each table after the data is loaded.
-
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump the MySQL `world` data with the following [`mysqldump` command](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-delimited-text.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot -T /{path}/world-data --fields-terminated-by ',' --fields-enclosed-by '"' --fields-escaped-by '\' --no-create-info world
- ~~~
-
- This dumps each table in your database to the path `/{path}/world-data` as a `.txt` file in CSV format.
- - `--fields-terminated-by` specifies that values are separated by commas instead of tabs.
- - `--fields-enclosed-by` and `--fields-escaped-by` specify the characters that enclose and escape column values, respectively.
- - `--no-create-info` dumps only the [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements).
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
-    Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `world` cluster, using the same command as before:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each MySQL dump file into the corresponding table in the `world` database.
-
- The following commands point to a public S3 bucket where the `world` data dump files are hosted for this example. The `nullif='\N'` clause specifies that `\N` values, which are produced by the `mysqldump` command, should be read as [`NULL`]({% link {{ page.version.version }}/null-handling.md %}).
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
-
- ~~~ sql
- IMPORT INTO countrylanguage
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/countrylanguage.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782070812344321 | succeeded | 1 | 984 | 984 | 171555
- ~~~
-
- ~~~ sql
- IMPORT INTO country
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/country.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 887782114360819713 | succeeded | 1 | 239 | 0 | 33173
- ~~~
-
- ~~~ sql
- IMPORT INTO city
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/city.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782154421567489 | succeeded | 1 | 4079 | 4079 | 288140
- ~~~
-
- {{site.data.alerts.callout_info}}
- After [converting the schema](#step-1-convert-the-mysql-schema) to work with CockroachDB, the `id` column in `city` is an [`INT8`]({% link {{ page.version.version }}/int.md %}) with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). However, `unique_rowid()` values are only generated when new rows are [inserted]({% link {{ page.version.version }}/insert.md %}) without an `id` value. The MySQL data dump still includes the sequential `id` values generated by the MySQL [`AUTO_INCREMENT` attribute](#auto_increment-attribute), and these are imported with the `IMPORT INTO` command.
-
- In an actual migration, you can either update the primary key into a [multi-column key]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or add a new primary key column that [generates unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
- {{site.data.alerts.end}}
-
-1. Recreate the indexes that you deleted before importing the data:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON city (countrycode, id);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON countrylanguage (countrycode, language);
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `city` and `countrylanguage`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM city;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+-----------------+-----------------+--------------------------------------------------------------+------------
- city | city_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- city | city_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM countrylanguage;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- ------------------+------------------------+-----------------+--------------------------------------------------------------+------------
- countrylanguage | countrylanguage_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- countrylanguage | countrylanguage_pkey | PRIMARY KEY | PRIMARY KEY (countrycode ASC, language ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE city VALIDATE CONSTRAINT city_ibfk_1;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE countrylanguage VALIDATE CONSTRAINT countrylanguage_ibfk_1;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on MySQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the [JDBC connection string for MySQL](https://dev.mysql.com/doc/connector-j/8.1/en/connector-j-reference-jdbc-url-format.html) with `--source` and the SQL connection string for CockroachDB with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- ./molt verify --source 'jdbc:mysql://{user}:{password}@tcp({host}:{port})/world' --target 'postgresql://{user}:{password}@{host}:{port}/world?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following warnings indicate that the MySQL and CockroachDB columns have different types. This is an expected result, since some columns were [changed to `ENUM` types](#enum-type) when you [converted the schema](#step-1-convert-the-mysql-schema):
-
- ~~~
- WRN mismatching table definition mismatch_info="column type mismatch on continent: text vs country_continent_enum" table_name=country table_schema=public
- WRN mismatching table definition mismatch_info="column type mismatch on isofficial: text vs countrylanguage_isofficial_enum" table_name=countrylanguage table_schema=public
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.country (shard 1/1): truth rows seen: 239, success: 239, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.countrylanguage (shard 1/1): truth rows seen: 984, success: 984, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.city (shard 1/1): truth rows seen: 4079, success: 4079, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v24.1/migrate-from-openstreetmap.md b/src/current/v24.1/migrate-from-openstreetmap.md
index 843c335beba..a4bcdc56eef 100644
--- a/src/current/v24.1/migrate-from-openstreetmap.md
+++ b/src/current/v24.1/migrate-from-openstreetmap.md
@@ -128,9 +128,9 @@ Osm2pgsql took 2879s overall
- [Migrate from GeoPackages]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.1/migrate-from-oracle.md b/src/current/v24.1/migrate-from-oracle.md
index 2979cd47a36..a62e9ae2c11 100644
--- a/src/current/v24.1/migrate-from-oracle.md
+++ b/src/current/v24.1/migrate-from-oracle.md
@@ -390,8 +390,8 @@ You will have to refactor Oracle SQL and functions that do not comply with [ANSI
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.1/migrate-from-postgres.md b/src/current/v24.1/migrate-from-postgres.md
deleted file mode 100644
index 795d303e656..00000000000
--- a/src/current/v24.1/migrate-from-postgres.md
+++ /dev/null
@@ -1,297 +0,0 @@
----
-title: Migrate from PostgreSQL
-summary: Learn how to migrate data from PostgreSQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page describes key considerations and provides a basic [example](#example-migrate-frenchtowns-to-cockroachdb) of migrating data from PostgreSQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [PostgreSQL migration example](#example-migrate-frenchtowns-to-cockroachdb) on this page demonstrates how to use [MOLT tooling]({% link {{ page.version.version }}/migration-overview.md %}#molt) to update the PostgreSQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax.
-
-For syntax differences, refer to [Features that differ from PostgreSQL]({% link {{ page.version.version }}/postgresql-compatibility.md %}#features-that-differ-from-postgresql).
-
-### Unsupported features
-
-The following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-## Load PostgreSQL data
-
-You can use [MOLT Fetch]({% link molt/molt-fetch.md %}) to migrate PostgreSQL data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
-
-The [following example](#example-migrate-frenchtowns-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `frenchtowns` to CockroachDB
-
-The following steps demonstrate [converting a schema]({% link {{ page.version.version }}/migration-overview.md %}#convert-the-schema), performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that PostgreSQL data can be properly migrated to CockroachDB and your application queries tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses a modified version of the PostgreSQL `french-towns-communes-francais` data set and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the `frenchtowns` data set:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- curl -O https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/frenchtowns.sql
- ~~~
-
-1. Create a `frenchtowns` database on your PostgreSQL instance:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- createdb frenchtowns
- ~~~
-
-1. Load the `frenchtowns` data into PostgreSQL, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -a -f frenchtowns.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) and [create the CockroachDB {{ site.data.products.standard }} cluster]({% link cockroachcloud/create-your-cluster.md %}).
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the PostgreSQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert the `frenchtowns` schema for compatibility with CockroachDB. The schema has three tables: `regions`, `departments`, and `towns`.
-
-1. Dump the PostgreSQL `frenchtowns` schema with the following [`pg_dump`](https://www.postgresql.org/docs/15/app-pgdump.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- pg_dump --schema-only frenchtowns > frenchtowns_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) in the {{ site.data.products.cloud }} Console and [add a new PostgreSQL schema]({% link cockroachcloud/migrations-page.md %}#convert-a-schema).
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}#summary-report) shows that there are errors under **Required Fixes**. You must resolve these in order to migrate the schema to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your PostgreSQL database credentials]({% link cockroachcloud/migrations-page.md %}#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the PostgreSQL database.
- {{site.data.alerts.end}}
-
-1. `Missing user: postgres` errors indicate that the SQL user `postgres` is missing from CockroachDB. Click **Add User** to create the user.
-
-1. `Miscellaneous Errors` includes a `SELECT pg_catalog.set_config('search_path', '', false)` statement that can safely be removed. Click **Delete** to remove the statement from the schema.
-
-1. Review the `CREATE SEQUENCE` statements listed under **Suggestions**. Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
-
-    For this example, **Acknowledge** the suggestion without making further changes. In practice, after [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration) to CockroachDB, you would modify your CockroachDB schema to use unique and non-sequential primary keys. A sketch of such a change appears after these steps.
-
-1. Click **Retry Migration**. The **Summary Report** now shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema) to create a new CockroachDB {{ site.data.products.standard }} cluster with the converted schema. Name the database `frenchtowns`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
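-The following is a minimal sketch of such a post-migration schema change, using a hypothetical `uuid_id` column; the exact statements depend on your application:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- Add a non-sequential key column and make it the primary key:
-ALTER TABLE towns ADD COLUMN uuid_id UUID NOT NULL DEFAULT gen_random_uuid();
-ALTER TABLE towns ALTER PRIMARY KEY USING COLUMNS (uuid_id);
-~~~
-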
-### Step 2. Load the PostgreSQL data
-
-Load the `frenchtowns` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump each table in the PostgreSQL `frenchtowns` database to a CSV-formatted file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY regions TO stdout DELIMITER ',' CSV;" > regions.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY departments TO stdout DELIMITER ',' CSV;" > departments.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY towns TO stdout DELIMITER ',' CSV;" > towns.csv
- ~~~
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
-    Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `frenchtowns` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/frenchtowns?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each PostgreSQL dump file into the corresponding table in the `frenchtowns` database.
-
- The following commands point to a public S3 bucket where the `frenchtowns` data dump files are hosted for this example.
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO regions
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/regions.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753132185026561 | succeeded | 1 | 26 | 52 | 2338
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO departments
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/departments.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753147892465665 | succeeded | 1 | 100 | 300 | 11166
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO towns
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/towns.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+-------+---------------+----------
- 893753162225680385 | succeeded | 1 | 36684 | 36684 | 2485007
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `departments` and `towns`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM departments;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- --------------+-------------------------+-----------------+---------------------------------------------------------+------------
- departments | departments_capital_key | UNIQUE | UNIQUE (capital ASC) | t
- departments | departments_code_key | UNIQUE | UNIQUE (code ASC) | t
- departments | departments_name_key | UNIQUE | UNIQUE (name ASC) | t
- departments | departments_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- departments | departments_region_fkey | FOREIGN KEY | FOREIGN KEY (region) REFERENCES regions(code) NOT VALID | f
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM towns;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+---------------------------+-----------------+-----------------------------------------------------------------+------------
- towns | towns_code_department_key | UNIQUE | UNIQUE (code ASC, department ASC) | t
- towns | towns_department_fkey | FOREIGN KEY | FOREIGN KEY (department) REFERENCES departments(code) NOT VALID | f
- towns | towns_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE departments VALIDATE CONSTRAINT departments_region_fkey;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE towns VALIDATE CONSTRAINT towns_department_fkey;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on PostgreSQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the PostgreSQL connection string with `--source` and the CockroachDB connection string with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
-    ./molt verify --source 'postgresql://{username}:{password}@{host}:{port}/frenchtowns' --target 'postgresql://{username}:{password}@{host}:{port}/frenchtowns?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.regions (shard 1/1): truth rows seen: 26, success: 26, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.departments (shard 1/1): truth rows seen: 100, success: 100, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 10000, success: 10000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 20000, success: 20000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 30000, success: 30000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.towns (shard 1/1): truth rows seen: 36684, success: 36684, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v24.1/migrate-from-shapefiles.md b/src/current/v24.1/migrate-from-shapefiles.md
index ea4e7a368e6..44f366d5a69 100644
--- a/src/current/v24.1/migrate-from-shapefiles.md
+++ b/src/current/v24.1/migrate-from-shapefiles.md
@@ -140,9 +140,9 @@ IMPORT INTO tornadoes CSV DATA ('http://localhost:3000/tornadoes.csv') WITH skip
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.1/migration-overview.md b/src/current/v24.1/migration-overview.md
deleted file mode 100644
index 832ed1de75a..00000000000
--- a/src/current/v24.1/migration-overview.md
+++ /dev/null
@@ -1,355 +0,0 @@
----
-title: Migration Overview
-summary: Learn how to migrate your database to a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page provides an overview of how to migrate a database to CockroachDB.
-
-A database migration broadly consists of the following phases:
-
-1. [Develop a migration plan:](#develop-a-migration-plan) Evaluate your [downtime requirements](#approach-to-downtime) and [cutover strategy](#cutover-strategy), [size the CockroachDB cluster](#capacity-planning) that you will migrate to, and become familiar with the [application changes](#application-changes) that you need to make for CockroachDB.
-1. [Prepare for migration:](#prepare-for-migration) Run a [pre-mortem](#run-a-migration-pre-mortem) (optional), set up [metrics](#set-up-monitoring-and-alerting) (optional), [convert your schema](#convert-the-schema), perform an [initial load of test data](#load-test-data), [validate your application queries](#validate-queries) for correctness and performance, and [perform a dry run](#perform-a-dry-run) of the migration.
-1. [Conduct the migration:](#conduct-the-migration) Use a [lift-and-shift](#lift-and-shift) or ["zero-downtime"](#zero-downtime) method to migrate your data, application, and users to CockroachDB.
-1. [Complete the migration:](#complete-the-migration) Notify the appropriate parties and summarize the details.
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Develop a migration plan
-
-Consider the following as you plan your migration:
-
-- Who will lead and perform the migration? Which teams are involved, and which aspects are they responsible for?
-- Which internal and external parties do you need to inform about the migration?
-- Which external or third-party tools (e.g., microservices, analytics, payment processors, aggregators, CRMs) must be tested and migrated along with your application?
-- What portion of the data can be inconsistent, and for how long? What amount of additional latency, and what rate of application errors, can you tolerate? This comprises your "error budget".
-- What is the tolerable [downtime](#approach-to-downtime), and what [cutover strategy](#cutover-strategy) will you use to switch users to CockroachDB?
-- Will you set up a "dry-run" environment to test the migration? How many [dry-run migrations](#perform-a-dry-run) will you perform?
-- When is the best time to perform this migration to be minimally disruptive to the database's users?
-- What is your target date for completing the migration?
-
-Create a document that summarizes the intent of the migration, the technical details, and the team members involved.
-
-### Approach to downtime
-
-A primary consideration is whether your application can tolerate downtime:
-
-- What types of operations can you suspend: reads, writes, or both?
-- How long can operations be suspended: seconds, minutes, or hours?
-- Should writes be queued while service is suspended?
-
-Take the following two use cases:
-
-- An application that is primarily in use during daytime business hours can likely be taken offline during a predetermined timeframe without disrupting the user experience and business continuity. In this case, your migration can occur in a [downtime window](#downtime-window).
-- An application that must serve writes continuously cannot tolerate a long downtime window. In this case, you will aim for [zero or near-zero downtime](#minimal-downtime).
-
-#### Downtime window
-
-If your application can tolerate downtime, then it will likely be easiest to take your application offline, load a snapshot of the data into CockroachDB, and perform a [cutover](#cutover-strategy) to CockroachDB once the data is migrated. This is known as a *lift-and-shift* migration.
-
-A lift-and-shift approach is the most straightforward. However, it's important to fully [prepare the migration](#prepare-for-migration) in order to be certain that it can be completed successfully during the downtime window.
-
-- *Scheduled downtime* is made known to your users in advance. Once you have [prepared for the migration](#prepare-for-migration), you take the application offline, [conduct the migration](#conduct-the-migration), and bring the application back online on CockroachDB. To succeed, you should estimate the amount of downtime required to migrate your data, and ideally schedule the downtime outside of peak hours. Scheduling downtime is easiest if your application traffic is "periodic", meaning that it varies by the time of day, day of week, or day of month.
-
-- *Unscheduled downtime* affects as few customers as possible, ideally without disrupting their regular usage. If your application is intentionally offline at certain times (e.g., outside business hours), you can migrate the data without users noticing. Alternatively, if your application's functionality is not time-sensitive (e.g., it sends batched messages or emails), then you can queue requests while your system is offline, and process those requests after completing the migration to CockroachDB.
-
-- *Reduced functionality* takes some, but not all, application functionality offline. For example, you can disable writes but not reads while you migrate the application data, and queue data to be written after completing the migration.
-
-For an overview of lift-and-shift migrations to CockroachDB, see [Lift and Shift](#lift-and-shift).
-
-#### Minimal downtime
-
-If your application cannot tolerate downtime, then you should aim for a "zero-downtime" approach. This reduces downtime to an absolute minimum, such that users do not notice the migration.
-
-The minimum possible downtime depends on whether you can tolerate inconsistency in the migrated data:
-
-- Migrations performed using *consistent cutover* reduce downtime to an absolute minimum (i.e., seconds or sub-seconds) while keeping data synchronized between the source database and CockroachDB. **Consistency requires downtime.** In this approach, downtime occurs right before [cutover](#cutover-strategy), as you drain the remaining transactions from the source database to CockroachDB.
-
-- Migrations performed using *immediate cutover* can reduce downtime to zero. These require the most preparation, and typically allow read/write traffic to both databases for at least a short period of time, sacrificing consistency for availability. Without stopping application traffic, you perform an **immediate** [cutover](#cutover-strategy), while assuming that some writes will not be replicated to CockroachDB. You may want to manually reconcile these data inconsistencies after switching over.
-
-For an overview of zero-downtime migrations to CockroachDB, see [Zero Downtime](#zero-downtime). {% comment %}For details, see [Migration Strategy: Zero Downtime](migration-strategy-zero-downtime).{% endcomment %}
-
-### Cutover strategy
-
-*Cutover* is the process of switching application traffic from the source database to CockroachDB. Consider the following:
-
-- Will you perform the cutover all at once, or incrementally (e.g., by a subset of users, workloads, or tables)?
-
- - Switching all at once generally follows a [downtime window](#downtime-window) approach. Once the data is migrated to CockroachDB, you "flip the switch" to route application traffic to the new database, thus ending downtime.
-
- - Migrations with [zero or near-zero downtime](#minimal-downtime) can switch either all at once or incrementally, since writes are being synchronously replicated and the system can be gradually migrated as you [validate the queries](#validate-queries).
-
-- Will you have a fallback plan that allows you to reverse ("roll back") the migration from CockroachDB to the source database? A fallback plan enables you to fix any issues or inconsistencies that you encounter during or after cutover, then retry the migration.
-
-#### All at once (no rollback)
-
-This is the simplest cutover method, since you won't need to develop and execute a fallback plan.
-
-As part of [migration preparations](#prepare-for-migration), you will have already [tested your queries and performance](#test-query-results-and-performance), giving you the confidence to migrate without a rollback option. After moving all of the data from the source database to CockroachDB, you switch application traffic to CockroachDB.
-
-#### All at once (rollback)
-
-This method adds a fallback plan to the simple [all-at-once](#all-at-once-no-rollback) cutover.
-
-In addition to moving data to CockroachDB, data is also replicated from CockroachDB back to the source database in case you need to roll back the migration. Continuous replication is already possible when performing a [zero-downtime migration](#zero-downtime) that dual writes to both databases. Otherwise, you will need to ensure that data is replicated in the reverse direction at cutover. The challenge is to find a point at which both the source database and CockroachDB are in sync, so that you can roll back to that point. You should also avoid falling into a circular state where updates continuously travel back and forth between the source database and CockroachDB.
-
-#### Phased rollout
-
-Also known as the ["strangler fig"](https://en.wikipedia.org/wiki/Strangler_fig) approach, a phased rollout migrates a portion of your users, workloads, or tables over time. Until all users, workloads, and/or tables are migrated, the application will continue to write to both databases.
-
-This approach enables you to take your time with the migration, and to pause or roll back as you [monitor the migration](#set-up-monitoring-and-alerting) for issues and performance. Rolling back the migration involves the same caveats and considerations as for the [all-at-once](#all-at-once-rollback) method. Because you can control the blast radius of your migration by routing traffic for a subset of users or services, a phased rollout has reduced business risk and user impact at the cost of increased implementation risk. You will need to figure out how to migrate in phases while ensuring that your application is unaffected.
-
-### Capacity planning
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-Determine the size of the target CockroachDB cluster. To do this, consider your data volume and workload characteristics:
-
-- What is the total size of the data you will migrate?
-- How many active [application connections]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling) will be running in the CockroachDB environment?
-
-Use this information to size the CockroachDB cluster you will create. If you are migrating to a CockroachDB {{ site.data.products.cloud }} cluster, see [Plan Your Cluster]({% link cockroachcloud/plan-your-cluster.md %}) for details:
-
-- For CockroachDB {{ site.data.products.standard }} and {{ site.data.products.basic }}, your cluster will scale automatically to meet your storage and usage requirements. Refer to the [CockroachDB {{ site.data.products.standard }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) and [CockroachDB {{ site.data.products.basic }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) documentation to learn about how to limit your resource consumption.
-- For CockroachDB {{ site.data.products.advanced }}, refer to the [example]({% link cockroachcloud/plan-your-cluster-advanced.md %}#example) that shows how your data volume, storage requirements, and replication factor affect the recommended node size (number of vCPUs per node) and total number of nodes on the cluster.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.cloud }} [Production Checklist]({% link cockroachcloud/production-checklist.md %}#connection-pooling).
-
-If you are migrating to a CockroachDB {{ site.data.products.core }} cluster:
-
-- Refer to our [sizing methodology]({% link {{ page.version.version }}/recommended-production-settings.md %}#sizing) to determine the total number of vCPUs on the cluster and the number of vCPUs per node (which determines the number of nodes on the cluster).
-- Refer to our [storage recommendations]({% link {{ page.version.version }}/recommended-production-settings.md %}#storage) to determine the amount of storage to provision on each node.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.core }} [Production Checklist]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling).
-
-### Application changes
-
-As you develop your migration plan, consider the application changes that you will need to make. These may relate to the following:
-
-- [Designing a schema that is compatible with CockroachDB.](#schema-design-best-practices)
-- [Creating effective indexes on CockroachDB.](#index-creation-best-practices)
-- [Handling transaction contention.](#handling-transaction-contention)
-- [Unimplemented features and syntax incompatibilities.](#unimplemented-features-and-syntax-incompatibilities)
-
-#### Schema design best practices
-
-Follow these recommendations when [converting your schema](#convert-the-schema) for compatibility with CockroachDB.
-
-{{site.data.alerts.callout_success}}
-The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) automatically identifies potential improvements to your schema.
-{{site.data.alerts.end}}
-
-- You should define an explicit primary key on every table. For more information, see [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices).
-
-- Do not use a sequence to define a primary key column. Instead, Cockroach Labs recommends that you use [multi-column primary keys]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or [auto-generating unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-functions-to-generate-unique-ids) for primary key columns.
-
-- By default on CockroachDB, `INT` is an alias for `INT8`, which creates 64-bit signed integers. Depending on your source database or application requirements, you may need to change the integer size to `4`. For example, [PostgreSQL defaults to 32-bit integers](https://www.postgresql.org/docs/9.6/datatype-numeric.html). For more information, see [Considerations for 64-bit signed integers]({% link {{ page.version.version }}/int.md %}#considerations-for-64-bit-signed-integers).
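-
-For example, the following sketch applies these recommendations to a hypothetical `users` table, using an auto-generated `UUID` primary key rather than a sequence, and an `INT4` column sized to match a 32-bit integer column on the source database:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
---- Hypothetical table illustrating the recommendations above.
-CREATE TABLE users (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(), -- auto-generated unique ID, recommended instead of a sequence
-    login_count INT4 NOT NULL DEFAULT 0,           -- 32-bit integer to match the source column
-    username STRING NOT NULL
-);
-~~~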
-
-#### Index creation best practices
-
-Review the [best practices for creating secondary indexes]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices) on CockroachDB.
-
-{% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %}
-
-#### Handling transaction contention
-
-Optimize your queries against [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). You may encounter [transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) when you [test application queries](#validate-queries), as well as transaction contention due to long-running transactions when you [conduct the migration](#conduct-the-migration) and bulk load data.
-
-Transaction retry errors are more frequent under CockroachDB's default [`SERIALIZABLE` isolation level]({% link {{ page.version.version }}/demo-serializable.md %}). If you are migrating an application that was built at a `READ COMMITTED` isolation level, you should first [enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
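-
-As a minimal sketch of this compatibility step, enabling `READ COMMITTED` involves a cluster setting plus a session (or per-transaction) isolation setting, per the linked instructions:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
---- Allow READ COMMITTED transactions on the cluster.
-SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = true;
-
---- Use READ COMMITTED for the current session.
-SET default_transaction_isolation = 'read committed';
-~~~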
-
-#### Unimplemented features and syntax incompatibilities
-
-Update your queries to resolve differences in functionality and SQL syntax.
-
-{{site.data.alerts.callout_success}}
-The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) automatically flags syntax incompatibilities and unimplemented features in your schema.
-{{site.data.alerts.end}}
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax. However, the following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-If your source database uses any of the preceding features, you may need to implement workarounds in your schema design, in your [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements), or in your application code.
-
-For more details on the CockroachDB SQL implementation, see [SQL Feature Support]({% link {{ page.version.version }}/sql-feature-support.md %}).
-
-## Prepare for migration
-
-Once you have a migration plan, prepare the team, application, source database, and CockroachDB cluster for the migration.
-
-### Run a migration "pre-mortem"
-
-{{site.data.alerts.callout_success}}
-This step is optional.
-{{site.data.alerts.end}}
-
-To minimize issues after [cutover](#cutover-strategy), compose a migration "pre-mortem":
-
-- Clearly describe the roles and processes of each team member performing the migration.
-- List the likely failure points and issues that you may encounter as you [conduct the migration](#conduct-the-migration).
-- Rank potential issues by severity, and identify ways to reduce risk.
-- Create a plan for implementing the actions that would most effectively reduce risk.
-
-### Set up monitoring and alerting
-
-{{site.data.alerts.callout_success}}
-This step is optional.
-{{site.data.alerts.end}}
-
-Based on the error budget you [defined in your migration plan](#develop-a-migration-plan), identify the metrics that you can use to measure your success criteria and set up monitoring for the migration. These metrics may be identical to those you normally use in production, but can also be specific to your migration needs.
-
-### Update the schema and queries
-
-In the following order:
-
-1. [Convert your schema](#convert-the-schema).
-1. [Load test data](#load-test-data).
-1. [Validate your application queries](#validate-queries).
-
-
-
-You can use the following [MOLT (Migrate Off Legacy Technology) tools]({% link molt/molt-overview.md %}) to simplify these steps:
-
-- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [MOLT Fetch]({% link molt/molt-fetch.md %})
-- [MOLT Verify]({% link molt/molt-verify.md %})
-
-#### Convert the schema
-
-First, convert your database schema to an equivalent CockroachDB schema:
-
-- Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert your schema line-by-line. This requires a free [CockroachDB {{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}). The tool will convert the syntax, identify [unimplemented features and syntax incompatibilities](#unimplemented-features-and-syntax-incompatibilities) in the schema, and suggest edits according to CockroachDB [best practices](#schema-design-best-practices).
- {{site.data.alerts.callout_info}}
- The Schema Conversion Tool accepts `.sql` files from PostgreSQL, MySQL, Oracle, and Microsoft SQL Server.
- {{site.data.alerts.end}}
-
-- Alternatively, manually convert the schema according to our [schema design best practices](#schema-design-best-practices){% comment %}and data type mappings{% endcomment %}. You can also [export a partially converted schema]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool to finish the conversion manually.
-
-Then import the converted schema to a CockroachDB cluster:
-
-- For CockroachDB {{ site.data.products.cloud }}, use the Schema Conversion Tool to [migrate the converted schema to a new {{ site.data.products.cloud }} database]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema).
-- For CockroachDB {{ site.data.products.core }}, pipe the [data definition language (DDL)]({% link {{ page.version.version }}/sql-statements.md %}#data-definition-statements) directly into [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}). You can [export a converted schema file]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool.
- {{site.data.alerts.callout_success}}
- For the fastest performance, you can use a [local, single-node CockroachDB cluster]({% link {{ page.version.version }}/cockroach-start-single-node.md %}#start-a-single-node-cluster) to convert your schema and [check the results of queries](#test-query-results-and-performance).
- {{site.data.alerts.end}}
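-
-For example, the following sketch pipes a schema file exported from the Schema Conversion Tool (assumed here to be saved as `schema.sql`) into the target cluster:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# schema.sql is the DDL exported from the Schema Conversion Tool.
-cockroach sql --url 'postgresql://{username}:{password}@{host}:{port}/{database}' < schema.sql
-~~~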
-
-#### Load test data
-
-{{site.data.alerts.callout_success}}
-Before moving data, Cockroach Labs recommends [dropping any indexes]({% link {{ page.version.version }}/drop-index.md %}) on the CockroachDB database. The indexes can be [recreated]({% link {{ page.version.version }}/create-index.md %}) after the data is loaded. Doing so optimizes import performance.
-{{site.data.alerts.end}}
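-
-As a sketch of this recommendation, assuming a hypothetical secondary index `employees_name_idx` on an `employees` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
---- Drop the (hypothetical) secondary index before the bulk load...
-DROP INDEX employees@employees_name_idx;
---- ...then recreate it after the data is loaded.
-CREATE INDEX employees_name_idx ON employees (name);
-~~~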
-
-After [converting the schema](#convert-the-schema), use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB so that you can [test your application queries](#validate-queries).
-
-Alternatively, you can use one of the following methods to migrate the data. Additional tooling may be required to extract or convert the data to a supported file format.
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %} Typically during a migration, data is initially loaded before foreground application traffic begins to be served, so the impact of taking the table offline when running `IMPORT INTO` may be minimal.
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %} Within the tool, you can select the database tables to migrate to the test cluster.
-- {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
-
-#### Validate queries
-
-After you [load the test data](#load-test-data), validate your queries on CockroachDB. You can do this by [shadowing](#shadowing) or by [manually testing](#test-query-results-and-performance) the queries.
-
-Note that CockroachDB defaults to the [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) transaction isolation level. If you are migrating an application that was built at a `READ COMMITTED` isolation level on the source database, you must [enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
-
-##### Shadowing
-
-You can "shadow" your production workload by executing your source SQL statements on CockroachDB in parallel. You can then [validate the queries](#test-query-results-and-performance) on CockroachDB for consistency, performance, and potential issues with the migration. Shadowing should **not** be used in production when performing a [live migration](#zero-downtime).
-
-##### Test query results and performance
-
-You can manually validate your queries by testing a subset of "critical queries" on an otherwise idle CockroachDB cluster:
-
-- Check the application logs for error messages and the API response time. If application requests are slower than expected, use the **SQL Activity** page on the [CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/statements-page.md %}) or [DB Console]({% link {{ page.version.version }}/ui-statements-page.md %}) to find the longest-running queries that are part of that application request. If necessary, tune the queries according to our best practices for [SQL performance]({% link {{ page.version.version }}/performance-best-practices-overview.md %}).
-
-- Compare the results of the queries and check that they are identical in both the source database and CockroachDB. To do this, you can use [MOLT Verify]({% link molt/molt-verify.md %}).
-
-Test performance on a CockroachDB cluster that is appropriately [sized](#capacity-planning) for your workload:
-
-1. Run the application at very low concurrency (for example, a single connection) and verify that its performance is acceptable. The cluster should be provisioned with more than enough resources to handle this workload, because you need to verify that the queries will be fast enough when there are zero resource bottlenecks.
-
-1. Run stress tests with at least the production concurrency and rate, but ideally higher in order to verify that the system can handle unexpected spikes in load. This can also uncover [contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) issues that will appear during spikes in app load, which may require [application design changes](#handling-transaction-contention) to avoid.
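-
-To investigate a query that surfaces as slow during these tests, you can prepend [`EXPLAIN ANALYZE`]({% link {{ page.version.version }}/explain-analyze.md %}) to it and inspect the resulting statement plan (the query shown here is hypothetical):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-EXPLAIN ANALYZE SELECT * FROM employees WHERE department_id = 42; -- hypothetical query
-~~~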
-
-### Perform a dry run
-
-To further minimize potential surprises when you conduct the migration, practice [cutover](#cutover-strategy) using your application and similar volumes of data on a "dry-run" environment. Use a test or development environment that is as similar as possible to production.
-
-Performing a dry run is highly recommended. In addition to demonstrating how long the migration may take, a dry run also helps to ensure that team members understand what they need to do during the migration, and that changes to the application are coordinated.
-
-## Conduct the migration
-
-Before proceeding, double-check that you are [prepared to migrate](#prepare-for-migration).
-
-Once you are ready to migrate, optionally [drop the database]({% link {{ page.version.version }}/drop-database.md %}) and delete the test cluster so that you can get a clean start:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP DATABASE {database-name} CASCADE;
-~~~
-
-Alternatively, [truncate]({% link {{ page.version.version }}/truncate.md %}) each table you used for testing to avoid having to recreate your schema:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-TRUNCATE {table-name} CASCADE;
-~~~
-
-Migrate your data to CockroachDB using the method that is appropriate for your [downtime requirements](#approach-to-downtime) and [cutover strategy](#cutover-strategy).
-
-### Lift and Shift
-
-With this method, consistency is achieved by performing the cutover only after all writes have been replicated from the source database to CockroachDB. This requires downtime, during which application traffic is stopped.
-
-The following is a high-level overview of the migration steps. For considerations and details about the pros and cons of this approach, see [Migration Strategy: Lift and Shift]({% link {{ page.version.version }}/migration-strategy-lift-and-shift.md %}).
-
-1. Stop application traffic to your source database. **This begins downtime.**
-1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB.
-1. After the data is migrated, use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. Perform a [cutover](#cutover-strategy) by resuming application traffic, now to CockroachDB.
-{% comment %}1. If you want the ability to [roll back](#all-at-once-rollback) the migration, replicate data back to the source database.{% endcomment %}
-
-### Zero Downtime
-
-During a "live migration", downtime is minimized by performing the cutover while writes are still being replicated from the source database to CockroachDB. Inconsistencies are resolved through manual reconciliation.
-
-The following is a high-level overview of the migration steps. The two approaches are mutually exclusive, and each has [tradeoffs](#minimal-downtime). {% comment %}For details on this migration strategy, see [Migration Strategy: Zero Downtime]({% link {{ page.version.version }}/migration-strategy-lift-and-shift.md %}).{% endcomment %}
-
-To prioritize consistency and minimize downtime:
-
-1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB. Enable [**continuous replication**]({% link molt/molt-fetch.md %}#load-data-and-replicate-changes) after it performs the initial load of data into CockroachDB (see the sketch after this list).
-1. As the data is migrating, use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. Once nearly all data from your source database has been moved to CockroachDB (for example, with less than 1 second of replication lag or fewer than 1,000 rows remaining), stop application traffic to your source database. **This begins downtime.**
-1. Wait for MOLT Fetch to finish replicating changes to CockroachDB.
-1. Perform a [cutover](#cutover-strategy) by resuming application traffic, now to CockroachDB.
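-
-As a sketch of the initial data load in step 1, the following command stages data in a cloud storage bucket (connection strings and the bucket name are placeholders). To enable continuous replication, add the replication option described in the linked MOLT Fetch documentation:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Initial load via an intermediate S3 bucket; values in braces are placeholders.
-./molt fetch \
-  --source 'postgresql://{username}:{password}@{host}:{port}/{database}' \
-  --target 'postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full' \
-  --bucket-path 's3://{bucket-name}'
-~~~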
-
-To achieve zero downtime with inconsistency:
-
-1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB. Use the tool to **replicate ongoing changes** after performing the initial load of data into CockroachDB.
-1. As the data is migrating, you can use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. After nearly all data from your source database has been moved to CockroachDB (for example, with less than 1 second of replication lag or fewer than 1,000 rows remaining), perform an [*immediate cutover*](#cutover-strategy) by pointing application traffic to CockroachDB.
-1. Manually reconcile any inconsistencies caused by writes that were not replicated during the cutover.
-1. Close the connection to the source database when you are ready to finish the migration.
-
-## Complete the migration
-
-After you have successfully [conducted the migration](#conduct-the-migration):
-
-- Notify the teams and other stakeholders impacted by the migration.
-- Retire any test or development environments used to verify the migration.
-- Extend the document you created when [developing your migration plan](#develop-a-migration-plan) with any issues encountered and follow-up work that needs to be done.
-
-## See also
-
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
-- [PostgreSQL Compatibility]({% link {{ page.version.version }}/postgresql-compatibility.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Create a User-defined Schema]({% link {{ page.version.version }}/schema-design-schema.md %})
-- [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices)
-- [Secondary index best practices]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices)
-- [Transaction contention best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention)
-- [Back Up and Restore]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
diff --git a/src/current/v24.1/migration-strategy-lift-and-shift.md b/src/current/v24.1/migration-strategy-lift-and-shift.md
deleted file mode 100644
index 402f7126bec..00000000000
--- a/src/current/v24.1/migration-strategy-lift-and-shift.md
+++ /dev/null
@@ -1,139 +0,0 @@
----
-title: "Migration Strategy: Lift and Shift"
-summary: Learn about the 'Lift and Shift' data migration strategy
-toc: true
-docs_area: migrate
----
-
-There are multiple strategies for [migrating off legacy technology]({% link {{ page.version.version }}/migration-overview.md %}) to CockroachDB.
-
-This page discusses the ["Lift and Shift" strategy]({% link {{ page.version.version }}/migration-overview.md %}#lift-and-shift) for migrating your database, a commonly used approach. In this approach, also known as "Big Bang" (among other names), your data is moved in its entirety from a source system to a target system within a defined period of time. This typically involves some application downtime and can involve some service degradation.
-
-Lift and Shift may not be the right approach if strong application service continuity is required during the migration. It may be a viable method if application downtime is permitted.
-
-{{site.data.alerts.callout_info}}
-The information on this page assumes you have already reviewed the [migration overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-## Pros and Cons
-
-On the spectrum of different data migration strategies, Lift and Shift has the following pros and cons. The terms "lower" and "higher" are not absolute, but relative to other approaches.
-
-Pros:
-
-- Conceptually straightforward.
-- Less complex: If you can afford some downtime, the overall effort will usually be lower, and the chance of errors is lower.
-- Shorter time from start to finish: In general, the more downtime you can afford, the shorter the overall migration project timeframe can be.
-- Lower technical risk: It does not involve running multiple systems alongside each other for an extended period of time.
-- Easy to practice [dry runs]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) of import/export using testing/non-production systems.
-- Good import/export tooling is available (e.g., external tools like [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %}), [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %}), and [Striim]({% link {{ page.version.version }}/striim.md %}); or internal tools like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), and [`cockroach userfile upload`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %})).
-- If your application already has regularly scheduled maintenance windows, your customers will not encounter unexpected application downtime.
-
-Cons:
-
-- All or nothing: It either works or does not work; once you start, you have to finish or [roll back]({% link {{ page.version.version }}/migration-overview.md %}#all-at-once-rollback).
-- Higher project risk: The project **must** be completed to meet a given [downtime / service degradation window]({% link {{ page.version.version }}/migration-overview.md %}#downtime-window).
-- Application service continuity requirements must be relaxed (that is, application downtime or increased latency may be needed).
-
-## Process design considerations
-
-{{site.data.alerts.callout_info}}
-The high-level considerations in this section only refer to the data-loading portion of your migration. They assume you are following the steps in the overall migration process described in [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-Keep in mind the following considerations when designing a Lift and Shift data migration process.
-
-- [Decide on your data migration tooling.](#managed-migration)
-- [Decide which data formats you will use.](#data-formats)
-- [Design a restartable process.](#restartable)
-- [Design a scalable process.](#scalable)
-
-
-
-### Decide on your data migration tooling
-
-If you plan to do your bulk data migration using a managed migration service, you must have a secure, publicly available CockroachDB cluster. CockroachDB supports the following [third-party migration services]({% link {{ page.version.version }}/third-party-database-tools.md %}#data-migration-tools):
-
-- [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Striim]({% link {{ page.version.version }}/striim.md %})
-
-{{site.data.alerts.callout_info}}
-Depending on the migration service you choose, [long-running transactions]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}#hanging-or-stuck-queries) can occur. In some cases, these queries will cause [transaction retry errors]({% link {{ page.version.version }}/common-errors.md %}#restart-transaction). If you encounter these errors while migrating to CockroachDB using a managed migration service, please reach out to our [Support Resources]({% link {{ page.version.version }}/support-resources.md %}).
-{{site.data.alerts.end}}
-
-If you will not be using a managed migration service, see the following sections for more information on how to use SQL statements like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), etc.
-
-
-
-### Decide which data formats and storage media you will use
-
-It's important to decide which data formats, storage media, and database features you will use to migrate your data.
-
-Data formats that can be imported by CockroachDB include:
-
-- [SQL]({% link {{ page.version.version }}/schema-design-overview.md %}) for the [schema import]({% link cockroachcloud/migrations-page.md %}).
-- [CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}) for table data.
-- [Avro]({% link {{ page.version.version }}/migrate-from-avro.md %}) for table data.
-
-The storage media you export to and import from can be intermediate data files or data streamed over the network. Options include:
-
-- Local "userdata" storage for small tables (see [`cockroach userdata`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %}), [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})).
-- Cloud blob storage (see [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}), [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})).
-- Direct wire transfers (see [managed migration services](#managed-migration)).
-
-Database features for export/import from the source and target databases can include:
-
-- Tools for exporting from the source database may include `pg_dump --schema-only` and `COPY ... TO` (PostgreSQL), `mysqldump` (MySQL), `expdp` (Oracle), etc.
-- For import into CockroachDB, use [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}) or [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}). For bulk data migrations, most users should use `IMPORT INTO` because the tables will be offline anyway, and `IMPORT INTO` can [perform the data import much faster]({% link {{ page.version.version }}/import-performance-best-practices.md %}) than `COPY FROM`.
-
-Decide which of the options above will meet your requirements while resulting in a process that is [restartable](#restartable) and [scalable](#scalable).
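-
-For example, a minimal `IMPORT INTO` sketch that loads a CSV file from cloud storage into a hypothetical `employees` table (bucket and credential values are placeholders):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
---- Hypothetical table; placeholder bucket and credentials.
-IMPORT INTO employees (id, name, department_id)
-    CSV DATA ('s3://{bucket-name}/employees.csv?AWS_ACCESS_KEY_ID={access-key}&AWS_SECRET_ACCESS_KEY={secret-key}')
-    WITH skip = '1'; -- skip the CSV header row
-~~~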
-
-
-
-### Design a restartable process
-
-To have a higher chance of success, design your data migration process so that it can be stopped and restarted from an intermediate state at any point. This will help minimize errors and avoid wasted effort.
-
-Keep the following requirements in mind as you design a restartable import/export process:
-
-- Bulk migrate data in manageably sized batches for your source and target systems.
- - This is a best practice. If something happens to the target cluster during import, the amount of wasted work will be minimized.
-- Implement progress/state tracking with process-restart capability (see the sketch after this list).
-- Make sure your export process is idempotent: the same input should always produce the same output data.
-- If possible, export and import the majority of your data before taking down the source database. This can ensure that you only have to deal with the incremental changes from your last import to complete the migration process.
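-
-As an illustration, a restartable driver can be as simple as a shell loop that records per-table progress to a state file (the table names and the per-table migration script are hypothetical):
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-#!/bin/sh
-# Re-runnable: tables recorded in completed.txt are skipped, so the
-# process can be stopped and restarted at any point.
-for table in regions departments towns; do
-  if grep -qx "$table" completed.txt 2>/dev/null; then
-    continue # already migrated in a previous run
-  fi
-  # migrate-one-table.sh is a hypothetical script that exports and imports
-  # one table; record success only after it completes.
-  ./migrate-one-table.sh "$table" && echo "$table" >> completed.txt
-done
-~~~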
-
-
-
-### Design a scalable and performant process
-
-Once your process is [restartable and resilient to failures](#design-a-restartable-process), it's important to also make sure it will scale to the needs of your data set. The larger the data set you are migrating to CockroachDB, the more important the performance and scalability of your process will be.
-
-Keep the following requirements in mind:
-
-- Schema and data should be imported separately.
-- Your process should handle multiple files across multiple export/import streams concurrently.
- - For best performance, these files should contain presorted, disjoint data sets.
-- Benchmark the performance of your migration process to help ensure it will complete within the allotted downtime window.
-
-For more information about import performance, see [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Migrate with AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
-- [Migrate and Replicate Data with Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Migrate and Replicate Data with Striim]({% link {{ page.version.version }}/striim.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Back Up and Restore]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
-- [Export data with Changefeeds]({% link {{ page.version.version }}/export-data-with-changefeeds.md %})
-- [`COPY`]({% link {{ page.version.version }}/copy.md %})
-- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
-- [Client connection parameters]({% link {{ page.version.version }}/connection-parameters.md %})
-
-
-{% comment %} eof {% endcomment %}
diff --git a/src/current/v24.1/qlik.md b/src/current/v24.1/qlik.md
index 8d219d381b5..53949f99392 100644
--- a/src/current/v24.1/qlik.md
+++ b/src/current/v24.1/qlik.md
@@ -68,7 +68,7 @@ Complete the following items before using Qlik Replicate:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables, or add transformation rules. If you make substantial schema changes, the Qlik Replicate migration may fail.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -96,7 +96,7 @@ In the Qlik Replicate interface, CockroachDB is configured as a PostgreSQL **sou
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v24.1/read-committed.md b/src/current/v24.1/read-committed.md
index 83d7f9ad7e6..bcf0a5d9f92 100644
--- a/src/current/v24.1/read-committed.md
+++ b/src/current/v24.1/read-committed.md
@@ -13,7 +13,7 @@ docs_area: deploy
- Your application needs to maintain a high workload concurrency with minimal [transaction retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and it can tolerate potential [concurrency anomalies](#concurrency-anomalies). Predictable query performance at high concurrency is more valuable than guaranteed transaction [serializability]({% link {{ page.version.version }}/developer-basics.md %}#serializability-and-transaction-contention).
-- You are [migrating an application to CockroachDB]({% link {{ page.version.version }}/migration-overview.md %}) that was built at a `READ COMMITTED` isolation level on the source database, and it is not feasible to modify your application to use `SERIALIZABLE` isolation.
+- You are [migrating an application to CockroachDB]({% link molt/migration-overview.md %}) that was built at a `READ COMMITTED` isolation level on the source database, and it is not feasible to modify your application to use `SERIALIZABLE` isolation.
Whereas `SERIALIZABLE` isolation guarantees data correctness by placing transactions into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}), `READ COMMITTED` isolation permits some [concurrency anomalies](#concurrency-anomalies) in exchange for minimizing transaction aborts, [retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and blocking. Compared to `SERIALIZABLE` transactions, `READ COMMITTED` transactions do **not** return [serialization errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) that require client-side handling. See [`READ COMMITTED` transaction behavior](#read-committed-transaction-behavior).
@@ -919,4 +919,4 @@ SELECT * FROM schedules
- [Serializable Transactions]({% link {{ page.version.version }}/demo-serializable.md %})
- [What Write Skew Looks Like](https://www.cockroachlabs.com/blog/what-write-skew-looks-like/)
- [Read Committed RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20230122_read_committed_isolation.md)
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
diff --git a/src/current/v24.1/striim.md b/src/current/v24.1/striim.md
index 436d7440cbd..67526c8ff9d 100644
--- a/src/current/v24.1/striim.md
+++ b/src/current/v24.1/striim.md
@@ -37,7 +37,7 @@ Complete the following items before using Striim:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -110,7 +110,7 @@ To perform continuous replication of ongoing changes, create a Striim applicatio
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v24.2/aws-dms.md b/src/current/v24.2/aws-dms.md
index bb3b53b22a2..99a42ac0532 100644
--- a/src/current/v24.2/aws-dms.md
+++ b/src/current/v24.2/aws-dms.md
@@ -41,7 +41,7 @@ Complete the following items before starting the DMS migration:
- Manually create all schema objects in the target CockroachDB cluster. If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, you can [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema.
- - All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ - All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
- Drop all [constraints]({% link {{ page.version.version }}/constraints.md %}) per the [AWS DMS best practices](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.Performance). You can recreate them after the [full load completes](#step-3-verify-the-migration). AWS DMS can create a basic schema, but does not create [indexes]({% link {{ page.version.version }}/indexes.md %}) or constraints such as [foreign keys]({% link {{ page.version.version }}/foreign-key.md %}) and [defaults]({% link {{ page.version.version }}/default-value.md %}).
@@ -406,7 +406,7 @@ The `BatchApplyEnabled` setting can improve replication performance and is recom
## See Also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %})
- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
diff --git a/src/current/v24.2/build-a-java-app-with-cockroachdb-hibernate.md b/src/current/v24.2/build-a-java-app-with-cockroachdb-hibernate.md
index c19bdcaed7b..c4b53d27629 100644
--- a/src/current/v24.2/build-a-java-app-with-cockroachdb-hibernate.md
+++ b/src/current/v24.2/build-a-java-app-with-cockroachdb-hibernate.md
@@ -130,9 +130,9 @@ APP: getAccountBalance(2) --> 350.00
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v24.2/build-a-java-app-with-cockroachdb.md b/src/current/v24.2/build-a-java-app-with-cockroachdb.md
index eae610ab364..d798e9dbaee 100644
--- a/src/current/v24.2/build-a-java-app-with-cockroachdb.md
+++ b/src/current/v24.2/build-a-java-app-with-cockroachdb.md
@@ -269,9 +269,9 @@ props.setProperty("options", "-c sql_safe_updates=true -c statement_timeout=30")
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v24.2/build-a-python-app-with-cockroachdb-sqlalchemy.md b/src/current/v24.2/build-a-python-app-with-cockroachdb-sqlalchemy.md
index 73e5dfa528b..e56b9c136a8 100644
--- a/src/current/v24.2/build-a-python-app-with-cockroachdb-sqlalchemy.md
+++ b/src/current/v24.2/build-a-python-app-with-cockroachdb-sqlalchemy.md
@@ -204,9 +204,9 @@ Instead, we recommend breaking your transaction into smaller units of work (or "
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code that uses an ORM and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}) such as are generated by calls to [`session.bulk_save_objects()`](https://docs.sqlalchemy.org/orm/session_api.html?highlight=bulk_save_object#sqlalchemy.orm.session.Session.bulk_save_objects).
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Prefer the query builder
diff --git a/src/current/v24.2/copy.md b/src/current/v24.2/copy.md
index aa825fdc003..144da73b4ac 100644
--- a/src/current/v24.2/copy.md
+++ b/src/current/v24.2/copy.md
@@ -358,10 +358,10 @@ You can copy CSV data into CockroachDB using the following methods:
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [`EXPORT`]({% link {{ page.version.version }}/export.md %})
- [Install a Driver or ORM Framework]({% link {{ page.version.version }}/install-client-drivers.md %})
{% comment %}
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
{% endcomment %}
\ No newline at end of file
diff --git a/src/current/v24.2/debezium.md b/src/current/v24.2/debezium.md
index ea2eb513b50..a9d14707ea4 100644
--- a/src/current/v24.2/debezium.md
+++ b/src/current/v24.2/debezium.md
@@ -116,7 +116,7 @@ Once all of the [prerequisite steps](#before-you-begin) are completed, you can u
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v24.2/frequently-asked-questions.md b/src/current/v24.2/frequently-asked-questions.md
index 07e00b6959f..c94ba15582b 100644
--- a/src/current/v24.2/frequently-asked-questions.md
+++ b/src/current/v24.2/frequently-asked-questions.md
@@ -147,7 +147,7 @@ Note, however, that the protocol used doesn't significantly impact how easy it i
### Can a PostgreSQL or MySQL application be migrated to CockroachDB?
-Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}) or [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
+Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}) or [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
We also fully support [importing your data via CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}).
diff --git a/src/current/v24.2/goldengate.md b/src/current/v24.2/goldengate.md
index 7fd572094e4..30ee23c17df 100644
--- a/src/current/v24.2/goldengate.md
+++ b/src/current/v24.2/goldengate.md
@@ -514,7 +514,7 @@ Run the steps in this section on a machine and in a directory where Oracle Golde
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v24.2/import-into.md b/src/current/v24.2/import-into.md
index ad4a2c9ca5c..780b9195104 100644
--- a/src/current/v24.2/import-into.md
+++ b/src/current/v24.2/import-into.md
@@ -158,7 +158,7 @@ You can control the `IMPORT` process's behavior using any of the following key-v
For examples showing how to use these options, see the [Examples section]({% link {{ page.version.version }}/import-into.md %}#examples).
-For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link molt/migration-overview.md %}).
## View and control import jobs
@@ -285,6 +285,6 @@ For more information about importing data from Avro, including examples, see [Mi
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
diff --git a/src/current/v24.2/import-performance-best-practices.md b/src/current/v24.2/import-performance-best-practices.md
index b85d28f1fe6..ca69a57fc4a 100644
--- a/src/current/v24.2/import-performance-best-practices.md
+++ b/src/current/v24.2/import-performance-best-practices.md
@@ -160,9 +160,9 @@ If you cannot both split and sort your dataset, the performance of either split
## See also
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Migrate from Oracle]({% link {{ page.version.version }}/migrate-from-oracle.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
diff --git a/src/current/v24.2/index.md b/src/current/v24.2/index.md
index eff4d508353..78ac87b9f4c 100644
--- a/src/current/v24.2/index.md
+++ b/src/current/v24.2/index.md
@@ -99,11 +99,11 @@ docs_area:
diff --git a/src/current/v24.2/insert-data.md b/src/current/v24.2/insert-data.md
index a7e76489a0c..dc3c26ee1f5 100644
--- a/src/current/v24.2/insert-data.md
+++ b/src/current/v24.2/insert-data.md
@@ -105,7 +105,7 @@ conn.commit()
Reference information related to this task:
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`INSERT`]({% link {{ page.version.version }}/insert.md %})
- [`UPSERT`]({% link {{ page.version.version }}/upsert.md %})
- [Transaction Contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention)
diff --git a/src/current/v24.2/migrate-from-avro.md b/src/current/v24.2/migrate-from-avro.md
index 39b7bdbc9aa..de42d232917 100644
--- a/src/current/v24.2/migrate-from-avro.md
+++ b/src/current/v24.2/migrate-from-avro.md
@@ -216,8 +216,8 @@ You will need to run [`ALTER TABLE ... ADD CONSTRAINT`]({% link {{ page.version.
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV][csv]
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.2/migrate-from-csv.md b/src/current/v24.2/migrate-from-csv.md
index e0e1d92da9c..d30eeb2e98c 100644
--- a/src/current/v24.2/migrate-from-csv.md
+++ b/src/current/v24.2/migrate-from-csv.md
@@ -176,8 +176,8 @@ IMPORT INTO employees (emp_no, birth_date, first_name, last_name, gender, hire_d
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.2/migrate-from-geojson.md b/src/current/v24.2/migrate-from-geojson.md
index 2c3326af39e..e0d804f0be7 100644
--- a/src/current/v24.2/migrate-from-geojson.md
+++ b/src/current/v24.2/migrate-from-geojson.md
@@ -122,9 +122,9 @@ IMPORT INTO underground_storage_tank CSV DATA ('http://localhost:3000/tanks.csv'
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.2/migrate-from-geopackage.md b/src/current/v24.2/migrate-from-geopackage.md
index 53acdaff4a7..c3fabeb57ef 100644
--- a/src/current/v24.2/migrate-from-geopackage.md
+++ b/src/current/v24.2/migrate-from-geopackage.md
@@ -114,9 +114,9 @@ IMPORT INTO busstops CSV DATA ('http://localhost:3000/busstops.csv') WITH skip =
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.2/migrate-from-mysql.md b/src/current/v24.2/migrate-from-mysql.md
deleted file mode 100644
index 29cbebde972..00000000000
--- a/src/current/v24.2/migrate-from-mysql.md
+++ /dev/null
@@ -1,412 +0,0 @@
----
-title: Migrate from MySQL
-summary: Learn how to migrate data from MySQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page describes basic considerations and provides a basic [example](#example-migrate-world-to-cockroachdb) of migrating data from MySQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [MySQL migration example](#example-migrate-world-to-cockroachdb) on this page demonstrates how to use [MOLT tooling]({% link {{ page.version.version }}/migration-overview.md %}#molt) to update the MySQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-You will likely need to make application changes due to differences in syntax between MySQL and CockroachDB. Along with the [general considerations in the migration overview]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), also consider the following MySQL-specific information as you develop your migration plan.
-
-When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), MySQL syntax that cannot automatically be converted will be displayed in the [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report). These may include the following.
-
-#### String case sensitivity
-
-Strings are case-insensitive in MySQL and case-sensitive in CockroachDB. You may need to edit your MySQL data or queries to get the results you expect. For example, string comparisons that relied on MySQL's case-insensitive matching will need to be rewritten to work with CockroachDB.
-
-For more information about the case sensitivity of strings in MySQL, see [Case Sensitivity in String Searches](https://dev.mysql.com/doc/refman/8.0/en/case-sensitivity.html) from the MySQL documentation. For more information about CockroachDB strings, see [`STRING`]({% link {{ page.version.version }}/string.md %}).
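-
-As a minimal illustration (assuming MySQL's default case-insensitive collation), the same comparison returns different results on each database:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- Returns 1 (true) on MySQL under a case-insensitive collation;
--- returns false on CockroachDB, where strings are case-sensitive.
-SELECT 'Paris' = 'PARIS';
-~~~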
-
-#### Identifier case sensitivity
-
-Identifiers are case-sensitive in MySQL and [case-insensitive in CockroachDB]({% link {{ page.version.version }}/keywords-and-identifiers.md %}#identifiers). When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either keep case sensitivity by enclosing identifiers in double quotes, or make identifiers case-insensitive by converting them to lowercase.
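-
-For example, the two conversion choices might look like the following sketch (hypothetical table names):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- Double quotes preserve the MySQL casing on CockroachDB:
-CREATE TABLE "CountryCode" (id INT PRIMARY KEY);
--- Unquoted identifiers are folded to lowercase (countrycode):
-CREATE TABLE CountryCode (id INT PRIMARY KEY);
-~~~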
-
-#### `AUTO_INCREMENT` attribute
-
-The MySQL [`AUTO_INCREMENT`](https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html) attribute, which creates sequential column values, is not supported in CockroachDB. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), columns with `AUTO_INCREMENT` can be converted to use [sequences]({% link {{ page.version.version }}/create-sequence.md %}), `UUID` values with [`gen_random_uuid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions), or unique `INT8` values using [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
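-
-As an illustration, the `unique_rowid()` conversion of a hypothetical MySQL column might look like the following sketch:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- MySQL:
---   id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
--- Converted for CockroachDB:
-CREATE TABLE example (
-    id INT8 NOT NULL DEFAULT unique_rowid() PRIMARY KEY
-);
-~~~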
-
-{{site.data.alerts.callout_info}}
-Changing a column type during schema conversion will cause [MOLT Verify]({% link molt/molt-verify.md %}) to identify a type mismatch during [data validation](#step-3-validate-the-migrated-data). This is expected behavior.
-{{site.data.alerts.end}}
-
-#### `ENUM` type
-
-MySQL `ENUM` types are defined in table columns. On CockroachDB, [`ENUM`]({% link {{ page.version.version }}/enum.md %}) is a standalone type. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either deduplicate the `ENUM` definitions or create a separate type for each column.
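-
-For example, the `isofficial` column used later in this example is defined inline in MySQL, while the converted schema creates a standalone type (a sketch):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- MySQL (inline in the column definition):
---   isofficial ENUM('T','F') NOT NULL DEFAULT 'F'
--- CockroachDB (standalone type):
-CREATE TYPE countrylanguage_isofficial_enum AS ENUM ('T', 'F');
-~~~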
-
-#### `TINYINT` type
-
-`TINYINT` data types are not supported in CockroachDB. The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) automatically converts `TINYINT` columns to [`INT2`]({% link {{ page.version.version }}/int.md %}) (`SMALLINT`).
-
-#### Geospatial types
-
-MySQL geometry types are not converted to CockroachDB [geospatial types]({% link {{ page.version.version }}/spatial-data-overview.md %}#spatial-objects) by the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql). They should be manually converted to the corresponding types in CockroachDB.
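-
-For example (hypothetical table and column names), a MySQL `POINT` column could be manually redefined using a CockroachDB spatial type:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- MySQL:
---   location POINT
--- Manual CockroachDB equivalent:
-ALTER TABLE places ADD COLUMN location GEOMETRY(POINT);
-~~~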
-
-#### `FIELD` function
-
-The MySQL `FIELD` function is not supported in CockroachDB. Instead, you can use the [`array_position`]({% link {{ page.version.version }}/functions-and-operators.md %}#array-functions) function, which returns the index of the first occurrence of an element in the array.
-
-Example usage:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT array_position(ARRAY[4,1,3,2],1);
-~~~
-
-~~~
- array_position
-------------------
- 2
-(1 row)
-~~~
-
-While MySQL returns 0 when the element is not found, CockroachDB returns `NULL`. As a result, if you use `array_position` in an `ORDER BY` clause, rows where the element is not found are still sorted, according to how `NULL` values are ordered. As a workaround, you can wrap the call in the [`COALESCE`]({% link {{ page.version.version }}/functions-and-operators.md %}#conditional-and-function-like-operators) operator to substitute a default position:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT * FROM table_a ORDER BY COALESCE(array_position(ARRAY[4,1,3,2],5),999);
-~~~
-
-## Load MySQL data
-
-You can use [MOLT Fetch]({% link molt/molt-fetch.md %}) to migrate MySQL data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-The [following example](#example-migrate-world-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `world` to CockroachDB
-
-The following steps demonstrate [converting a schema]({% link {{ page.version.version }}/migration-overview.md %}#convert-the-schema), performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that MySQL data can be properly migrated to CockroachDB and your application queries tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses the [MySQL `world` data set](https://dev.mysql.com/doc/index-other.html) and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the [`world` data set](https://dev.mysql.com/doc/index-other.html).
-
-1. Create the `world` database on your MySQL instance, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqlsh -uroot --sql --file {path}/world-db/world.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) and [create the CockroachDB {{ site.data.products.standard }} cluster]({% link cockroachcloud/create-your-cluster.md %}).
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the MySQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) to convert the `world` schema for compatibility with CockroachDB. The schema has three tables: `city`, `country`, and `countrylanguage`.
-
-1. Dump the MySQL `world` schema with the following [`mysqldump`](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-sql-format.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot --no-data world > world_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) in the {{ site.data.products.cloud }} Console and [add a new MySQL schema]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema).
-
- For **AUTO_INCREMENT Conversion Option**, select the [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions) option. This will convert the `ID` column in the `city` table, which has MySQL type `int` and `AUTO_INCREMENT`, to a CockroachDB [`INT8`]({% link {{ page.version.version }}/int.md %}) type with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). For context on this option, see [`AUTO_INCREMENT` attribute](#auto_increment-attribute).
-
- The `UUID` and `unique_rowid()` options are each preferred for [different use cases]({% link {{ page.version.version }}/sql-faqs.md %}#what-are-the-differences-between-uuid-sequences-and-unique_rowid). For this example, selecting the `unique_rowid()` option makes [loading the data](#step-2-load-the-mysql-data) more straightforward in a later step, since both the source and target columns will have integer types.
-
-1. [Upload `world_schema.sql`]({% link cockroachcloud/migrations-page.md %}?filters=mysql#upload-file) to the Schema Conversion Tool.
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}?filters=mysql#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report) shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your MySQL database credentials]({% link cockroachcloud/migrations-page.md %}?filters=mysql#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the MySQL database.
- {{site.data.alerts.end}}
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Before you migrate the converted schema, click the **Statements** tab to view the [Statements list]({% link cockroachcloud/migrations-page.md %}?filters=mysql#statements-list). Scroll down to the `CREATE TABLE countrylanguage` statement and edit the statement to add a [collation]({% link {{ page.version.version }}/collate.md %}) (`COLLATE en_US`) on the `language` column:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE TABLE countrylanguage (
- countrycode VARCHAR(3) DEFAULT '' NOT NULL,
- language VARCHAR(30) COLLATE en_US DEFAULT '' NOT NULL,
- isofficial countrylanguage_isofficial_enum
- DEFAULT 'F'
- NOT NULL,
- percentage DECIMAL(4,1) DEFAULT '0.0' NOT NULL,
- PRIMARY KEY (countrycode, language),
- INDEX countrycode (countrycode),
- CONSTRAINT countrylanguage_ibfk_1
- FOREIGN KEY (countrycode) REFERENCES country (code)
- )
- ~~~
-
- Click **Save**.
-
- This is a workaround to prevent [data validation](#step-3-validate-the-migrated-data) from failing due to collation mismatches. For more details, see the [MOLT Verify]({% link molt/molt-verify.md %}#known-limitations) documentation.
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#migrate-the-schema) to create a new {{ site.data.products.standard }} cluster with the converted schema. Name the database `world`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
-1. Open a SQL shell to the CockroachDB `world` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. For large imports, Cockroach Labs recommends [removing indexes prior to loading data]({% link {{ page.version.version }}/import-performance-best-practices.md %}#import-into-a-schema-with-secondary-indexes) and recreating them afterward. This provides increased visibility into the import progress and the ability to retry each step independently.
-
- Show the indexes on the `world` database:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW INDEXES FROM DATABASE world;
- ~~~
-
- The `countrycode` [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) indexes on the `city` and `countrylanguage` tables can be removed for now:
-
- ~~~
- table_name | index_name | index_schema | non_unique | seq_in_index | column_name | definition | direction | storing | implicit | visible
- ---------------------------------+-------------------------------------------------+--------------+------------+--------------+-----------------+-----------------+-----------+---------+----------+----------
- ...
- city | countrycode | public | t | 2 | id | id | ASC | f | t | t
- city | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- ...
- countrylanguage | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- countrylanguage | countrycode | public | t | 2 | language | language | ASC | f | t | t
- ...
- ~~~
-
-1. Drop the `countrycode` indexes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX city@countrycode;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX countrylanguage@countrycode;
- ~~~
-
- You will recreate the indexes after [loading the data](#step-2-load-the-mysql-data).
-
-### Step 2. Load the MySQL data
-
-Load the `world` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-When MySQL dumps data, the tables are not ordered by [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) dependencies, so a table can be dumped before a table it references. It is best to disable foreign key checks when loading data into CockroachDB, and to revalidate foreign keys on each table after the data is loaded.
-
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump the MySQL `world` data with the following [`mysqldump` command](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-delimited-text.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot -T /{path}/world-data --fields-terminated-by ',' --fields-enclosed-by '"' --fields-escaped-by '\' --no-create-info world
- ~~~
-
- This dumps each table in your database to the path `/{path}/world-data` as a `.txt` file in CSV format.
- - `--fields-terminated-by` specifies that values are separated by commas instead of tabs.
- - `--fields-enclosed-by` and `--fields-escaped-by` specify the characters that enclose and escape column values, respectively.
- - `--no-create-info` dumps only the [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements).
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
- Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `world` cluster, using the same command as before:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each MySQL dump file into the corresponding table in the `world` database.
-
- The following commands point to a public S3 bucket where the `world` data dump files are hosted for this example. The `nullif='\N'` clause specifies that `\N` values, which are produced by the `mysqldump` command, should be read as [`NULL`]({% link {{ page.version.version }}/null-handling.md %}).
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
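-
- For example, a hypothetical trial run of the first import below might load only the first 10 rows:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO countrylanguage
-   CSV DATA (
-     'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/countrylanguage.txt'
-   )
-   WITH
-     nullif='\N',
-     row_limit='10';
- ~~~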
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO countrylanguage
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/countrylanguage.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782070812344321 | succeeded | 1 | 984 | 984 | 171555
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO country
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/country.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 887782114360819713 | succeeded | 1 | 239 | 0 | 33173
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO city
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/city.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782154421567489 | succeeded | 1 | 4079 | 4079 | 288140
- ~~~
-
- {{site.data.alerts.callout_info}}
- After [converting the schema](#step-1-convert-the-mysql-schema) to work with CockroachDB, the `id` column in `city` is an [`INT8`]({% link {{ page.version.version }}/int.md %}) with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). However, `unique_rowid()` values are only generated when new rows are [inserted]({% link {{ page.version.version }}/insert.md %}) without an `id` value. The MySQL data dump still includes the sequential `id` values generated by the MySQL [`AUTO_INCREMENT` attribute](#auto_increment-attribute), and these are imported with the `IMPORT INTO` command.
-
- In an actual migration, you can either update the primary key into a [multi-column key]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or add a new primary key column that [generates unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
- {{site.data.alerts.end}}
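-
- For illustration only (not run in this example), the multi-column approach for `city` might look like the following sketch:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE city ALTER PRIMARY KEY USING COLUMNS (countrycode, id);
- ~~~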
-
-1. Recreate the indexes that you deleted before importing the data:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON city (countrycode, id);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON countrylanguage (countrycode, language);
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `city` and `countrylanguage`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM city;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+-----------------+-----------------+--------------------------------------------------------------+------------
- city | city_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- city | city_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM countrylanguage;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- ------------------+------------------------+-----------------+--------------------------------------------------------------+------------
- countrylanguage | countrylanguage_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- countrylanguage | countrylanguage_pkey | PRIMARY KEY | PRIMARY KEY (countrycode ASC, language ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE city VALIDATE CONSTRAINT city_ibfk_1;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE countrylanguage VALIDATE CONSTRAINT countrylanguage_ibfk_1;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on MySQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the [JDBC connection string for MySQL](https://dev.mysql.com/doc/connector-j/8.1/en/connector-j-reference-jdbc-url-format.html) with `--source` and the SQL connection string for CockroachDB with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- ./molt verify --source 'jdbc:mysql://{user}:{password}@tcp({host}:{port})/world' --target 'postgresql://{user}:{password}@{host}:{port}/world?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following warnings indicate that the MySQL and CockroachDB columns have different types. This is an expected result, since some columns were [changed to `ENUM` types](#enum-type) when you [converted the schema](#step-1-convert-the-mysql-schema):
-
- ~~~
- WRN mismatching table definition mismatch_info="column type mismatch on continent: text vs country_continent_enum" table_name=country table_schema=public
- WRN mismatching table definition mismatch_info="column type mismatch on isofficial: text vs countrylanguage_isofficial_enum" table_name=countrylanguage table_schema=public
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.country (shard 1/1): truth rows seen: 239, success: 239, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.countrylanguage (shard 1/1): truth rows seen: 984, success: 984, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.city (shard 1/1): truth rows seen: 4079, success: 4079, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v24.2/migrate-from-openstreetmap.md b/src/current/v24.2/migrate-from-openstreetmap.md
index 843c335beba..a4bcdc56eef 100644
--- a/src/current/v24.2/migrate-from-openstreetmap.md
+++ b/src/current/v24.2/migrate-from-openstreetmap.md
@@ -128,9 +128,9 @@ Osm2pgsql took 2879s overall
- [Migrate from GeoPackages]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.2/migrate-from-oracle.md b/src/current/v24.2/migrate-from-oracle.md
index 2979cd47a36..a62e9ae2c11 100644
--- a/src/current/v24.2/migrate-from-oracle.md
+++ b/src/current/v24.2/migrate-from-oracle.md
@@ -390,8 +390,8 @@ You will have to refactor Oracle SQL and functions that do not comply with [ANSI
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.2/migrate-from-postgres.md b/src/current/v24.2/migrate-from-postgres.md
deleted file mode 100644
index 795d303e656..00000000000
--- a/src/current/v24.2/migrate-from-postgres.md
+++ /dev/null
@@ -1,297 +0,0 @@
----
-title: Migrate from PostgreSQL
-summary: Learn how to migrate data from PostgreSQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page describes basic considerations and provides a basic [example](#example-migrate-frenchtowns-to-cockroachdb) of migrating data from PostgreSQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [PostgreSQL migration example](#example-migrate-frenchtowns-to-cockroachdb) on this page demonstrates how to use [MOLT tooling]({% link {{ page.version.version }}/migration-overview.md %}#molt) to update the PostgreSQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax.
-
-For syntax differences, refer to [Features that differ from PostgreSQL]({% link {{ page.version.version }}/postgresql-compatibility.md %}#features-that-differ-from-postgresql).
-
-### Unsupported features
-
-The following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-## Load PostgreSQL data
-
-You can use [MOLT Fetch]({% link molt/molt-fetch.md %}) to migrate PostgreSQL data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
-
-The [following example](#example-migrate-frenchtowns-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `frenchtowns` to CockroachDB
-
-The following steps demonstrate [converting a schema]({% link {{ page.version.version }}/migration-overview.md %}#convert-the-schema), performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that PostgreSQL data can be properly migrated to CockroachDB and your application queries tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses a modified version of the PostgreSQL `french-towns-communes-francais` data set and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the `frenchtowns` data set:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- curl -O https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/frenchtowns.sql
- ~~~
-
-1. Create a `frenchtowns` database on your PostgreSQL instance:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- createdb frenchtowns
- ~~~
-
-1. Load the `frenchtowns` data into PostgreSQL, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -a -f frenchtowns.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) and [create the CockroachDB {{ site.data.products.standard }} cluster]({% link cockroachcloud/create-your-cluster.md %}).
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the PostgreSQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert the `frenchtowns` schema for compatibility with CockroachDB. The schema has three tables: `regions`, `departments`, and `towns`.
-
-1. Dump the PostgreSQL `frenchtowns` schema with the following [`pg_dump`](https://www.postgresql.org/docs/15/app-pgdump.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- pg_dump --schema-only frenchtowns > frenchtowns_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) in the {{ site.data.products.cloud }} Console and [add a new PostgreSQL schema]({% link cockroachcloud/migrations-page.md %}#convert-a-schema).
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}#summary-report) shows that there are errors under **Required Fixes**. You must resolve these in order to migrate the schema to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your PostgreSQL database credentials]({% link cockroachcloud/migrations-page.md %}#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the PostgreSQL database.
- {{site.data.alerts.end}}
-
-1. `Missing user: postgres` errors indicate that the SQL user `postgres` is missing from CockroachDB. Click **Add User** to create the user.
-
-1. `Miscellaneous Errors` includes a `SELECT pg_catalog.set_config('search_path', '', false)` statement that can safely be removed. Click **Delete** to remove the statement from the schema.
-
-1. Review the `CREATE SEQUENCE` statements listed under **Suggestions**. Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
-
- For this example, **Acknowledge** the suggestion without making further changes. In practice, after [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration) to CockroachDB, you would modify your CockroachDB schema to use unique and non-sequential primary keys (a sketch follows these steps).
-
-1. Click **Retry Migration**. The **Summary Report** now shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema) to create a new CockroachDB {{ site.data.products.standard }} cluster with the converted schema. Name the database `frenchtowns`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
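-
-As a sketch of the primary key change suggested above (hypothetical; not run in this example), a sequence-backed key could later be replaced with generated UUIDs:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER TABLE towns ADD COLUMN new_id UUID NOT NULL DEFAULT gen_random_uuid();
-ALTER TABLE towns ALTER PRIMARY KEY USING COLUMNS (new_id);
-~~~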
-
-### Step 2. Load the PostgreSQL data
-
-Load the `frenchtowns` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump each table in the PostgreSQL `frenchtowns` database to a CSV-formatted file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY regions TO stdout DELIMITER ',' CSV;" > regions.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY departments TO stdout DELIMITER ',' CSV;" > departments.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY towns TO stdout DELIMITER ',' CSV;" > towns.csv
- ~~~
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
- Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `frenchtowns` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/frenchtowns?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each PostgreSQL dump file into the corresponding table in the `frenchtowns` database.
-
- The following commands point to a public S3 bucket where the `frenchtowns` data dump files are hosted for this example.
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO regions
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/regions.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753132185026561 | succeeded | 1 | 26 | 52 | 2338
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO departments
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/departments.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753147892465665 | succeeded | 1 | 100 | 300 | 11166
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO towns
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/towns.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+-------+---------------+----------
- 893753162225680385 | succeeded | 1 | 36684 | 36684 | 2485007
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `departments` and `towns`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM departments;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- --------------+-------------------------+-----------------+---------------------------------------------------------+------------
- departments | departments_capital_key | UNIQUE | UNIQUE (capital ASC) | t
- departments | departments_code_key | UNIQUE | UNIQUE (code ASC) | t
- departments | departments_name_key | UNIQUE | UNIQUE (name ASC) | t
- departments | departments_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- departments | departments_region_fkey | FOREIGN KEY | FOREIGN KEY (region) REFERENCES regions(code) NOT VALID | f
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM towns;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+---------------------------+-----------------+-----------------------------------------------------------------+------------
- towns | towns_code_department_key | UNIQUE | UNIQUE (code ASC, department ASC) | t
- towns | towns_department_fkey | FOREIGN KEY | FOREIGN KEY (department) REFERENCES departments(code) NOT VALID | f
- towns | towns_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE departments VALIDATE CONSTRAINT departments_region_fkey;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE towns VALIDATE CONSTRAINT towns_department_fkey;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on PostgreSQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the PostgreSQL connection string with `--source` and the CockroachDB connection string with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- ./molt verify --source 'postgresql://{username}:{password}@{host}:{port}/frenchtowns' --target 'postgresql://{username}:{password}@{host}:{port}/frenchtowns?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.regions (shard 1/1): truth rows seen: 26, success: 26, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.departments (shard 1/1): truth rows seen: 100, success: 100, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 10000, success: 10000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 20000, success: 20000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 30000, success: 30000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.towns (shard 1/1): truth rows seen: 36684, success: 36684, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v24.2/migrate-from-shapefiles.md b/src/current/v24.2/migrate-from-shapefiles.md
index ea4e7a368e6..44f366d5a69 100644
--- a/src/current/v24.2/migrate-from-shapefiles.md
+++ b/src/current/v24.2/migrate-from-shapefiles.md
@@ -140,9 +140,9 @@ IMPORT INTO tornadoes CSV DATA ('http://localhost:3000/tornadoes.csv') WITH skip
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.2/migration-overview.md b/src/current/v24.2/migration-overview.md
deleted file mode 100644
index b1e02d03254..00000000000
--- a/src/current/v24.2/migration-overview.md
+++ /dev/null
@@ -1,355 +0,0 @@
----
-title: Migration Overview
-summary: Learn how to migrate your database to a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page provides an overview of how to migrate a database to CockroachDB.
-
-A database migration broadly consists of the following phases:
-
-1. [Develop a migration plan:](#develop-a-migration-plan) Evaluate your [downtime requirements](#approach-to-downtime) and [cutover strategy](#cutover-strategy), [size the CockroachDB cluster](#capacity-planning) that you will migrate to, and become familiar with the [application changes](#application-changes) that you need to make for CockroachDB.
-1. [Prepare for migration:](#prepare-for-migration) Run a [pre-mortem](#run-a-migration-pre-mortem) (optional), set up [metrics](#set-up-monitoring-and-alerting) (optional), [convert your schema](#convert-the-schema), perform an [initial load of test data](#load-test-data), [validate your application queries](#validate-queries) for correctness and performance, and [perform a dry run](#perform-a-dry-run) of the migration.
-1. [Conduct the migration:](#conduct-the-migration) Use a [lift-and-shift](#lift-and-shift) or ["zero-downtime"](#zero-downtime) method to migrate your data, application, and users to CockroachDB.
-1. [Complete the migration:](#complete-the-migration) Notify the appropriate parties and summarize the details.
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Develop a migration plan
-
-Consider the following as you plan your migration:
-
-- Who will lead and perform the migration? Which teams are involved, and which aspects are they responsible for?
-- Which internal and external parties do you need to inform about the migration?
-- Which external or third-party tools (e.g., microservices, analytics, payment processors, aggregators, CRMs) must be tested and migrated along with your application?
-- What portion of the data can be inconsistent, and for how long? What is the tolerable percentage of latency and application errors? This comprises your "error budget".
-- What is the tolerable [downtime](#approach-to-downtime), and what [cutover strategy](#cutover-strategy) will you use to switch users to CockroachDB?
-- Will you set up a "dry-run" environment to test the migration? How many [dry-run migrations](#perform-a-dry-run) will you perform?
-- When is the best time to perform this migration to be minimally disruptive to the database's users?
-- What is your target date for completing the migration?
-
-Create a document that summarizes the intent of the migration, the technical details, and the team members involved.
-
-### Approach to downtime
-
-A primary consideration is whether your application can tolerate downtime:
-
-- What types of operations can you suspend: reads, writes, or both?
-- How long can operations be suspended: seconds, minutes, or hours?
-- Should writes be queued while service is suspended?
-
-Take the following two use cases:
-
-- An application that is primarily in use during daytime business hours can likely be taken offline during a predetermined timeframe without disrupting the user experience or business continuity. In this case, your migration can occur in a [downtime window](#downtime-window).
-- An application that must serve writes continuously cannot tolerate a long downtime window. In this case, you will aim for [zero or near-zero downtime](#minimal-downtime).
-
-#### Downtime window
-
-If your application can tolerate downtime, then it will likely be easiest to take your application offline, load a snapshot of the data into CockroachDB, and perform a [cutover](#cutover-strategy) to CockroachDB once the data is migrated. This is known as a *lift-and-shift* migration.
-
-A lift-and-shift approach is the most straightforward. However, it's important to fully [prepare the migration](#prepare-for-migration) in order to be certain that it can be completed successfully during the downtime window.
-
-- *Scheduled downtime* is made known to your users in advance. Once you have [prepared for the migration](#prepare-for-migration), you take the application offline, [conduct the migration](#conduct-the-migration), and bring the application back online on CockroachDB. To succeed, you should estimate the amount of downtime required to migrate your data, and ideally schedule the downtime outside of peak hours. Scheduling downtime is easiest if your application traffic is "periodic", meaning that it varies by the time of day, day of week, or day of month.
-
-- *Unscheduled downtime* impacts as few customers as possible, ideally without impacting their regular usage. If your application is intentionally offline at certain times (e.g., outside business hours), you can migrate the data without users noticing. Alternatively, if your application's functionality is not time-sensitive (e.g., it sends batched messages or emails), then you can queue requests while your system is offline, and process those requests after completing the migration to CockroachDB.
-
-- *Reduced functionality* takes some, but not all, application functionality offline. For example, you can disable writes but not reads while you migrate the application data, and queue data to be written after completing the migration.
-
-For an overview of lift-and-shift migrations to CockroachDB, see [Lift and Shift](#lift-and-shift).
-
-#### Minimal downtime
-
-If your application cannot tolerate downtime, then you should aim for a "zero-downtime" approach. This reduces downtime to an absolute minimum, such that users do not notice the migration.
-
-The minimum possible downtime depends on whether you can tolerate inconsistency in the migrated data:
-
-- Migrations performed using *consistent cutover* reduce downtime to an absolute minimum (i.e., seconds or sub-seconds) while keeping data synchronized between the source database and CockroachDB. **Consistency requires downtime.** In this approach, downtime occurs right before [cutover](#cutover-strategy), as you drain the remaining transactions from the source database to CockroachDB.
-
-- Migrations performed using *immediate cutover* can reduce downtime to zero. These require the most preparation, and typically allow read/write traffic to both databases for at least a short period of time, sacrificing consistency for availability. Without stopping application traffic, you perform an **immediate** [cutover](#cutover-strategy), while assuming that some writes will not be replicated to CockroachDB. You may want to manually reconcile these data inconsistencies after switching over.
-
-For an overview of zero-downtime migrations to CockroachDB, see [Zero Downtime](#zero-downtime). {% comment %}For details, see [Migration Strategy: Zero Downtime](migration-strategy-zero-downtime).{% endcomment %}
-
-### Cutover strategy
-
-*Cutover* is the process of switching application traffic from the source database to CockroachDB. Consider the following:
-
-- Will you perform the cutover all at once, or incrementally (e.g., by a subset of users, workloads, or tables)?
-
- - Switching all at once generally follows a [downtime window](#downtime-window) approach. Once the data is migrated to CockroachDB, you "flip the switch" to route application traffic to the new database, thus ending downtime.
-
- - Migrations with [zero or near-zero downtime](#minimal-downtime) can switch either all at once or incrementally, since writes are being synchronously replicated and the system can be gradually migrated as you [validate the queries](#validate-queries).
-
-- Will you have a fallback plan that allows you to reverse ("roll back") the migration from CockroachDB to the source database? A fallback plan enables you to fix any issues or inconsistencies that you encounter during or after cutover, then retry the migration.
-
-#### All at once (no rollback)
-
-This is the simplest cutover method, since you won't need to develop and execute a fallback plan.
-
-As part of [migration preparations](#prepare-for-migration), you will have already [tested your queries and performance](#test-query-results-and-performance), giving you the confidence to migrate without a rollback option. After moving all of the data from the source database to CockroachDB, you switch application traffic to CockroachDB.
-
-#### All at once (rollback)
-
-This method adds a fallback plan to the simple [all-at-once](#all-at-once-no-rollback) cutover.
-
-In addition to moving data to CockroachDB, data is also replicated from CockroachDB back to the source database in case you need to roll back the migration. Continuous replication is already possible when performing a [zero-downtime migration](#zero-downtime) that dual writes to both databases. Otherwise, you will need to ensure that data is replicated in the reverse direction at cutover. The challenge is to find a point at which both the source database and CockroachDB are in sync, so that you can roll back to that point. You should also avoid falling into a circular state where updates continuously travel back and forth between the source database and CockroachDB.
-
-#### Phased rollout
-
-Also known as the ["strangler fig"](https://en.wikipedia.org/wiki/Strangler_fig) approach, a phased rollout migrates a portion of your users, workloads, or tables over time. Until all users, workloads, and/or tables are migrated, the application will continue to write to both databases.
-
-This approach enables you to take your time with the migration, and to pause or roll back as you [monitor the migration](#set-up-monitoring-and-alerting) for issues and performance. Rolling back the migration involves the same caveats and considerations as for the [all-at-once](#all-at-once-rollback) method. Because you can control the blast radius of your migration by routing traffic for a subset of users or services, a phased rollout has reduced business risk and user impact at the cost of increased implementation risk. You will need to figure out how to migrate in phases while ensuring that your application is unaffected.
-
-### Capacity planning
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-Determine the size of the target CockroachDB cluster. To do this, consider your data volume and workload characteristics:
-
-- What is the total size of the data you will migrate?
-- How many active [application connections]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling) will be running in the CockroachDB environment?
-
-Use this information to size the CockroachDB cluster you will create. If you are migrating to a CockroachDB {{ site.data.products.cloud }} cluster, see [Plan Your Cluster]({% link cockroachcloud/plan-your-cluster.md %}) for details:
-
-- For CockroachDB {{ site.data.products.standard }} and {{ site.data.products.basic }}, your cluster will scale automatically to meet your storage and usage requirements. Refer to the [CockroachDB {{ site.data.products.standard }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) and [CockroachDB {{ site.data.products.basic }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) documentation to learn how to limit your resource consumption.
-- For CockroachDB {{ site.data.products.advanced }}, refer to the [example]({% link cockroachcloud/plan-your-cluster-advanced.md %}#example) that shows how your data volume, storage requirements, and replication factor affect the recommended node size (number of vCPUs per node) and total number of nodes on the cluster.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.cloud }} [Production Checklist]({% link cockroachcloud/production-checklist.md %}#sql-connection-handling).
-
-If you are migrating to a CockroachDB {{ site.data.products.core }} cluster:
-
-- Refer to our [sizing methodology]({% link {{ page.version.version }}/recommended-production-settings.md %}#sizing) to determine the total number of vCPUs on the cluster and the number of vCPUs per node (which determines the number of nodes on the cluster).
-- Refer to our [storage recommendations]({% link {{ page.version.version }}/recommended-production-settings.md %}#storage) to determine the amount of storage to provision on each node.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.core }} [Production Checklist]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling).
-
-### Application changes
-
-As you develop your migration plan, consider the application changes that you will need to make. These may relate to the following:
-
-- [Designing a schema that is compatible with CockroachDB.](#schema-design-best-practices)
-- [Creating effective indexes on CockroachDB.](#index-creation-best-practices)
-- [Handling transaction contention.](#handling-transaction-contention)
-- [Unimplemented features and syntax incompatibilities.](#unimplemented-features-and-syntax-incompatibilities)
-
-#### Schema design best practices
-
-Follow these recommendations when [converting your schema](#convert-the-schema) for compatibility with CockroachDB.
-
-{{site.data.alerts.callout_success}}
-The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) automatically identifies potential improvements to your schema.
-{{site.data.alerts.end}}
-
-- You should define an explicit primary key on every table. For more information, see [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices).
-
-- Do not use a sequence to define a primary key column. Instead, Cockroach Labs recommends that you use [multi-column primary keys]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or [auto-generating unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-functions-to-generate-unique-ids) for primary key columns.
-
-- By default on CockroachDB, `INT` is an alias for `INT8`, which creates 64-bit signed integers. Depending on your source database or application requirements, you may need to change the integer size to `4`. For example, [PostgreSQL defaults to 32-bit integers](https://www.postgresql.org/docs/9.6/datatype-numeric.html). For more information, see [Considerations for 64-bit signed integers]({% link {{ page.version.version }}/int.md %}#considerations-for-64-bit-signed-integers).
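-
-The following is a minimal sketch of a table definition that follows these recommendations; the `users` table and its columns are illustrative, not part of any example dataset:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Define an explicit primary key backed by an auto-generated UUID rather
-# than a sequence, and use INT4 where the source schema used 32-bit integers.
-cockroach sql --url '{connection-string}' --execute "
-CREATE TABLE users (
-    id UUID NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY,
-    org_id INT4 NOT NULL,
-    email STRING NOT NULL
-);"
-~~~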
-
-#### Index creation best practices
-
-Review the [best practices for creating secondary indexes]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices) on CockroachDB.
-
-{% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %}
-
-#### Handling transaction contention
-
-Optimize your queries against [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). You may encounter [transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) when you [test application queries](#validate-queries), as well as transaction contention due to long-running transactions when you [conduct the migration](#conduct-the-migration) and bulk load data.
-
-Transaction retry errors are more frequent under CockroachDB's default [`SERIALIZABLE` isolation level]({% link {{ page.version.version }}/demo-serializable.md %}). If you are migrating an application that was built at a `READ COMMITTED` isolation level, you should first [enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
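-
-As a sketch, the following enables `READ COMMITTED` using the cluster setting described on the linked page, then selects it for subsequent transactions in a session; the connection string is a placeholder:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Allow READ COMMITTED transactions on the cluster.
-cockroach sql --url '{connection-string}' --execute \
-  "SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = 'true';"
-
-# Use READ COMMITTED for subsequent transactions in a session.
-cockroach sql --url '{connection-string}' --execute \
-  "SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ COMMITTED;"
-~~~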
-
-#### Unimplemented features and syntax incompatibilities
-
-Update your queries to resolve differences in functionality and SQL syntax.
-
-{{site.data.alerts.callout_success}}
-The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) automatically flags syntax incompatibilities and unimplemented features in your schema.
-{{site.data.alerts.end}}
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax. However, the following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-If your source database uses any of the preceding features, you may need to implement workarounds in your schema design, in your [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements), or in your application code.
-
-For more details on the CockroachDB SQL implementation, see [SQL Feature Support]({% link {{ page.version.version }}/sql-feature-support.md %}).
-
-## Prepare for migration
-
-Once you have a migration plan, prepare the team, application, source database, and CockroachDB cluster for the migration.
-
-### Run a migration "pre-mortem"
-
-{{site.data.alerts.callout_success}}
-This step is optional.
-{{site.data.alerts.end}}
-
-To minimize issues after [cutover](#cutover-strategy), compose a migration "pre-mortem":
-
-- Clearly describe the roles and processes of each team member performing the migration.
-- List the likely failure points and issues that you may encounter as you [conduct the migration](#conduct-the-migration).
-- Rank potential issues by severity, and identify ways to reduce risk.
-- Create a plan for implementing the actions that would most effectively reduce risk.
-
-### Set up monitoring and alerting
-
-{{site.data.alerts.callout_success}}
-This step is optional.
-{{site.data.alerts.end}}
-
-Based on the error budget you [defined in your migration plan](#develop-a-migration-plan), identify the metrics that you can use to measure your success criteria and set up monitoring for the migration. These metrics may be identical to those you normally use in production, but can also be specific to your migration needs.
-
-### Update the schema and queries
-
-In the following order:
-
-1. [Convert your schema](#convert-the-schema).
-1. [Load test data](#load-test-data).
-1. [Validate your application queries](#validate-queries).
-
-
-
-You can use the following [MOLT (Migrate Off Legacy Technology) tools]({% link molt/molt-overview.md %}) to simplify these steps:
-
-- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [MOLT Fetch]({% link molt/molt-fetch.md %})
-- [MOLT Verify]({% link molt/molt-verify.md %})
-
-#### Convert the schema
-
-First, convert your database schema to an equivalent CockroachDB schema:
-
-- Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert your schema line-by-line. The Schema Conversion Tool accepts `.sql` files from PostgreSQL, MySQL, Oracle, and Microsoft SQL Server. This requires a free [CockroachDB {{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}). The tool will convert the syntax, identify [unimplemented features and syntax incompatibilities](#unimplemented-features-and-syntax-incompatibilities) in the schema, and suggest edits according to CockroachDB [best practices](#schema-design-best-practices).
- {{site.data.alerts.callout_success}}
-  If the Schema Conversion Tool is not an option when migrating from PostgreSQL or MySQL, you can enable automatic schema creation when [loading data](#load-test-data) with MOLT Fetch. The [`--table-handling drop-on-target-and-recreate`]({% link molt/molt-fetch.md %}#target-table-handling) option creates a one-to-one mapping between the source database and CockroachDB, and works well when the source schema is well-defined. For additional help, contact your account team.
- {{site.data.alerts.end}}
-
-- Alternatively, manually convert the schema according to our [schema design best practices](#schema-design-best-practices){% comment %}and data type mappings{% endcomment %}. You can also [export a partially converted schema]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool to finish the conversion manually.
-
-Then import the converted schema to a CockroachDB cluster:
-
-- For CockroachDB {{ site.data.products.cloud }}, use the Schema Conversion Tool to [migrate the converted schema to a new {{ site.data.products.cloud }} database]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema).
-- For CockroachDB {{ site.data.products.core }}, pipe the [data definition language (DDL)]({% link {{ page.version.version }}/sql-statements.md %}#data-definition-statements) directly into [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}). You can [export a converted schema file]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool.
- {{site.data.alerts.callout_success}}
- For the fastest performance, you can use a [local, single-node CockroachDB cluster]({% link {{ page.version.version }}/cockroach-start-single-node.md %}#start-a-single-node-cluster) to convert your schema and [check the results of queries](#test-query-results-and-performance).
- {{site.data.alerts.end}}
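-
-For example, a converted schema file can be piped straight into the target cluster; `schema.sql` is a hypothetical file exported from the Schema Conversion Tool:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Apply the converted DDL to the target CockroachDB cluster.
-cat schema.sql | cockroach sql --url '{connection-string}'
-~~~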
-
-#### Load test data
-
-{{site.data.alerts.callout_success}}
-Before moving data, Cockroach Labs recommends [dropping any indexes]({% link {{ page.version.version }}/drop-index.md %}) on the CockroachDB database. The indexes can be [recreated]({% link {{ page.version.version }}/create-index.md %}) after the data is loaded. Doing so will optimize performance.
-{{site.data.alerts.end}}
-
-After [converting the schema](#convert-the-schema), load your data into CockroachDB so that you can [test your application queries](#validate-queries). Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data. Additional tooling may be required to extract or convert the data to a supported file format.
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %} Typically during a migration, data is initially loaded before foreground application traffic begins to be served, so the impact of taking the table offline when running `IMPORT INTO` may be minimal.
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %} Within the tool, you can select the database tables to migrate to the test cluster.
-- {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
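-
-A minimal sketch of the `IMPORT INTO` path above, with `{placeholder}` values standing in for your own names and credentials:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Drop secondary indexes before the load; recreate them after it completes.
-cockroach sql --url '{connection-string}' --execute "DROP INDEX {index-name};"
-
-# Bulk-load a CSV file from cloud storage. The table is offline until the import job completes.
-cockroach sql --url '{connection-string}' --execute \
-  "IMPORT INTO {table-name} CSV DATA ('s3://{bucket-name}/{file-name}.csv?AWS_ACCESS_KEY_ID={access-key}&AWS_SECRET_ACCESS_KEY={secret-key}');"
-~~~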
-
-#### Validate queries
-
-After you [load the test data](#load-test-data), validate your queries on CockroachDB. You can do this by [shadowing](#shadowing) or by [manually testing](#test-query-results-and-performance) the queries.
-
-Note that CockroachDB defaults to the [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) transaction isolation level. If you are migrating an application that was built at a `READ COMMITTED` isolation level on the source database, you must [enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
-
-##### Shadowing
-
-You can "shadow" your production workload by executing your source SQL statements on CockroachDB in parallel. You can then [validate the queries](#test-query-results-and-performance) on CockroachDB for consistency, performance, and potential issues with the migration. Shadowing should **not** be used in production when performing a [live migration](#zero-downtime).
-
-##### Test query results and performance
-
-You can manually validate your queries by testing a subset of "critical queries" on an otherwise idle CockroachDB cluster:
-
-- Check the application logs for error messages and the API response time. If application requests are slower than expected, use the **SQL Activity** page on the [CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/statements-page.md %}) or [DB Console]({% link {{ page.version.version }}/ui-statements-page.md %}) to find the longest-running queries that are part of that application request. If necessary, tune the queries according to our best practices for [SQL performance]({% link {{ page.version.version }}/performance-best-practices-overview.md %}).
-
-- Compare the results of the queries and check that they are identical in both the source database and CockroachDB. To do this, you can use [MOLT Verify]({% link molt/molt-verify.md %}).
-
-Test performance on a CockroachDB cluster that is appropriately [sized](#capacity-planning) for your workload:
-
-1. Run the application with single or very low concurrency and verify that the application's performance is acceptable. The cluster should be provisioned with more than enough resources to handle this workload, because you need to verify that the queries will be fast enough when there are zero resource bottlenecks.
-
-1. Run stress tests with at least the production concurrency and rate, but ideally higher in order to verify that the system can handle unexpected spikes in load. This can also uncover [contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) issues that will appear during spikes in app load, which may require [application design changes](#handling-transaction-contention) to avoid.
-
-### Perform a dry run
-
-To further minimize potential surprises when you conduct the migration, practice [cutover](#cutover-strategy) using your application and similar volumes of data on a "dry-run" environment. Use a test or development environment that is as similar as possible to production.
-
-Performing a dry run is highly recommended. In addition to demonstrating how long the migration may take, a dry run also helps to ensure that team members understand what they need to do during the migration, and that changes to the application are coordinated.
-
-## Conduct the migration
-
-Before proceeding, double-check that you are [prepared to migrate](#prepare-for-migration).
-
-Once you are ready to migrate, optionally [drop the database]({% link {{ page.version.version }}/drop-database.md %}) and delete the test cluster so that you can get a clean start:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP DATABASE {database-name} CASCADE;
-~~~
-
-Alternatively, [truncate]({% link {{ page.version.version }}/truncate.md %}) each table you used for testing to avoid having to recreate your schema:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-TRUNCATE {table-name} CASCADE;
-~~~
-
-Migrate your data to CockroachDB using the method that is appropriate for your [downtime requirements](#approach-to-downtime) and [cutover strategy](#cutover-strategy).
-
-### Lift and Shift
-
-With this method, consistency is achieved by performing the cutover only after all writes have been replicated from the source database to CockroachDB. This requires downtime, during which application traffic is stopped.
-
-The following is a high-level overview of the migration steps. For considerations and details about the pros and cons of this approach, see [Migration Strategy: Lift and Shift]({% link {{ page.version.version }}/migration-strategy-lift-and-shift.md %}).
-
-1. Stop application traffic to your source database. **This begins downtime.**
-1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB.
-1. After the data is migrated, use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. Perform a [cutover](#cutover-strategy) by resuming application traffic, now to CockroachDB.
-{% comment %}1. If you want the ability to [roll back](#all-at-once-rollback) the migration, replicate data back to the source database.{% endcomment %}
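-
-A compressed sketch of steps 2 and 3, using placeholder connection strings and bucket names:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Step 2: move a snapshot of the source data to CockroachDB.
-molt fetch \
-  --source 'postgresql://{username}:{password}@{host}:{port}/{database}' \
-  --target 'postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full' \
-  --bucket-path 's3://{bucket-name}'
-
-# Step 3: validate consistency between the source database and CockroachDB.
-molt verify \
-  --source 'postgresql://{username}:{password}@{host}:{port}/{database}' \
-  --target 'postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full'
-~~~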
-
-### Zero Downtime
-
-During a "live migration", downtime is minimized by performing the cutover while writes are still being replicated from the source database to CockroachDB. Inconsistencies are resolved through manual reconciliation.
-
-The following is a high-level overview of the migration steps. The two approaches are mutually exclusive, and each has [tradeoffs](#minimal-downtime). {% comment %}For details on this migration strategy, see [Migration Strategy: Zero Downtime]({% link {{ page.version.version }}/migration-strategy-lift-and-shift.md %}).{% endcomment %}
-
-To prioritize consistency and minimize downtime:
-
-1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB. Enable [**continuous replication**]({% link molt/molt-fetch.md %}#load-data-and-replicate-changes) after the initial load of data into CockroachDB completes (see the sketch after these steps).
-1. As the data is migrating, use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. Once nearly all data from your source database has been moved to CockroachDB (for example, with a <1 second delay or <1000 rows), stop application traffic to your source database. **This begins downtime.**
-1. Wait for MOLT Fetch to finish replicating changes to CockroachDB.
-1. Perform a [cutover](#cutover-strategy) by resuming application traffic, now to CockroachDB.
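-
-A sketch of step 1 of the preceding steps. The `--mode data-load-and-replication` flag is an assumption based on [Load data and replicate changes]({% link molt/molt-fetch.md %}#load-data-and-replicate-changes); confirm the exact flags against the MOLT Fetch documentation for your version:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Perform the initial load, then keep replicating source changes until cutover.
-molt fetch \
-  --source 'postgresql://{username}:{password}@{host}:{port}/{database}' \
-  --target 'postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full' \
-  --bucket-path 's3://{bucket-name}' \
-  --mode data-load-and-replication
-~~~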
-
-To achieve zero downtime with inconsistency:
-
-1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB. Use the tool to **replicate ongoing changes** after performing the initial load of data into CockroachDB.
-1. As the data is migrating, you can use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. After nearly all data from your source database has been moved to CockroachDB (for example, with a <1 second delay or <1000 rows), perform an [*immediate cutover*](#cutover-strategy) by pointing application traffic to CockroachDB.
-1. Manually reconcile any inconsistencies caused by writes that were not replicated during the cutover.
-1. Close the connection to the source database when you are ready to finish the migration.
-
-## Complete the migration
-
-After you have successfully [conducted the migration](#conduct-the-migration):
-
-- Notify the teams and other stakeholders impacted by the migration.
-- Retire any test or development environments used to verify the migration.
-- Extend the document you created when [developing your migration plan](#develop-a-migration-plan) with any issues encountered and follow-up work that needs to be done.
-
-## See also
-
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
-- [PostgreSQL Compatibility]({% link {{ page.version.version }}/postgresql-compatibility.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Create a User-defined Schema]({% link {{ page.version.version }}/schema-design-schema.md %})
-- [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices)
-- [Secondary index best practices]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices)
-- [Transaction contention best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention)
-- [Back Up and Restore]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
diff --git a/src/current/v24.2/migration-strategy-lift-and-shift.md b/src/current/v24.2/migration-strategy-lift-and-shift.md
deleted file mode 100644
index 402f7126bec..00000000000
--- a/src/current/v24.2/migration-strategy-lift-and-shift.md
+++ /dev/null
@@ -1,139 +0,0 @@
----
-title: "Migration Strategy: Lift and Shift"
-summary: Learn about the 'Lift and Shift' data migration strategy
-toc: true
-docs_area: migrate
----
-
-There are multiple strategies for [migrating off legacy technology]({% link {{ page.version.version }}/migration-overview.md %}) to CockroachDB.
-
-This page discusses the ["Lift and Shift" strategy]({% link {{ page.version.version }}/migration-overview.md %}#lift-and-shift) for migrating your database, a commonly used approach that is also known as "Big Bang": your data is moved in its entirety from a source system to a target system within a defined period of time. This typically involves some application downtime and can involve some service degradation.
-
-Lift and Shift may not be the right approach if strong application service continuity is required during the migration. It may be a viable method if application downtime is permitted.
-
-{{site.data.alerts.callout_info}}
-The information on this page assumes you have already reviewed the [migration overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-## Pros and Cons
-
-On the spectrum of different data migration strategies, Lift and Shift has the following pros and cons. The terms "lower" and "higher" are not absolute, but relative to other approaches.
-
-Pros:
-
-- Conceptually straightforward.
-- Less complex: If you can afford some downtime, the overall effort will usually be lower, and the chance of errors is lower.
-- Shorter time start-to-finish: In general, the more downtime you can afford, the shorter the overall migration project timeframe can be.
-- Lower technical risk: It does not involve running multiple systems alongside each other for an extended period of time.
-- Easy to practice [dry runs]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) of import/export using testing/non-production systems.
-- Good import/export tooling is available (e.g., external tools like [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %}), [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %}), and [Striim]({% link {{ page.version.version }}/striim.md %}); or internal tools like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), and [`cockroach userfile upload`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %})).
-- If your application already has regularly scheduled maintenance windows, your customers will not encounter unexpected application downtime.
-
-Cons:
-
-- All or nothing: It either works or does not work; once you start, you have to finish or [roll back]({% link {{ page.version.version }}/migration-overview.md %}#all-at-once-rollback).
-- Higher project risk: The project **must** be completed to meet a given [downtime / service degradation window]({% link {{ page.version.version }}/migration-overview.md %}#downtime-window).
-- Application service continuity requirements must be relaxed (that is, application downtime or increased latency may be needed).
-
-## Process design considerations
-
-{{site.data.alerts.callout_info}}
-The high-level considerations in this section only refer to the data-loading portion of your migration. They assume you are following the steps in the overall migration process described in [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-Keep in mind the following considerations when designing a Lift and Shift data migration process.
-
-- [Decide on your data migration tooling.](#managed-migration)
-- [Decide which data formats you will use.](#data-formats)
-- [Design a restartable process.](#restartable)
-- [Design a scalable process.](#scalable)
-
-<a name="managed-migration"></a>
-
-### Decide on your data migration tooling
-
-If you plan to do your bulk data migration using a managed migration service, you must have a secure, publicly available CockroachDB cluster. CockroachDB supports the following [third-party migration services]({% link {{ page.version.version }}/third-party-database-tools.md %}#data-migration-tools):
-
-- [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Striim]({% link {{ page.version.version }}/striim.md %})
-
-{{site.data.alerts.callout_info}}
-Depending on the migration service you choose, [long-running transactions]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}#hanging-or-stuck-queries) can occur. In some cases, these queries will cause [transaction retry errors]({% link {{ page.version.version }}/common-errors.md %}#restart-transaction). If you encounter these errors while migrating to CockroachDB using a managed migration service, please reach out to our [Support Resources]({% link {{ page.version.version }}/support-resources.md %}).
-{{site.data.alerts.end}}
-
-If you will not be using a managed migration service, see the following sections for more information on how to use SQL statements like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), etc.
-
-<a name="data-formats"></a>
-
-### Decide which data formats and storage media you will use
-
-It's important to decide which data formats, storage media, and database features you will use to migrate your data.
-
-Data formats that can be imported by CockroachDB include:
-
-- [SQL]({% link {{ page.version.version }}/schema-design-overview.md %}) for the [schema import]({% link cockroachcloud/migrations-page.md %}).
-- [CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}) for table data.
-- [Avro]({% link {{ page.version.version }}/migrate-from-avro.md %}) for table data.
-
-The storage media you use to export / import from can be intermediate data files or streaming data coming over the network. Options include:
-
-- Local "userfile" storage for small tables (see [`cockroach userfile upload`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %}), [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})).
-- Cloud blob storage (see [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}), [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})).
-- Direct wire transfers (see [managed migration services](#managed-migration)).
-
-Database features for export/import from the source and target databases can include:
-
-- Tools for exporting from the source database may include `pg_dump --schema-only` and `COPY ... TO`, `mysqldump`, `expdp`, etc.
-- For import into CockroachDB, use [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}) or [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}). For bulk data migrations, most users should use `IMPORT INTO` because the tables will be offline anyway, and `IMPORT INTO` can [perform the data import much faster]({% link {{ page.version.version }}/import-performance-best-practices.md %}) than `COPY FROM`.
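-
-For example, the PostgreSQL-side export might look like the following; the file names are illustrative:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Export the schema only, for conversion with the Schema Conversion Tool.
-pg_dump --schema-only {database} > schema.sql
-
-# Export one table's data to CSV using psql's client-side \copy.
-psql {database} -c "\copy {table-name} TO '{table-name}.csv' WITH CSV"
-~~~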
-
-Decide which of the options above will meet your requirements while resulting in a process that is [restartable](#restartable) and [scalable](#scalable).
-
-<a name="restartable"></a>
-
-### Design a restartable process
-
-To have a higher chance of success, design your data migration process so it can be stopped and restarted from an intermediate state at any point. This helps minimize errors and avoid wasted effort.
-
-Keep the following requirements in mind as you design a restartable import/export process:
-
-- Bulk migrate data in manageable size batches for your source and target systems.
- - This is a best practice. If something happens to the target cluster during import, the amount of wasted work will be minimized.
-- Implement progress/state keeping with process restart capabilities.
-- Make sure your export process is idempotent: the same input should always produce the same output data.
-- If possible, export and import the majority of your data before taking down the source database. This can ensure that you only have to deal with the incremental changes from your last import to complete the migration process.
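-
-One possible shape for such a process (a hand-rolled sketch, not MOLT; `files.txt` and `imported.log` are hypothetical bookkeeping files):
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Restartable, batched load: one IMPORT INTO per file, with a state file
-# recording completed files so the loop can be safely re-run after a failure.
-# files.txt lists CSV files already uploaded to the bucket, one per line.
-while read -r f; do
-  grep -qx "$f" imported.log 2>/dev/null && continue   # already loaded; skip
-  cockroach sql --url '{connection-string}' --execute \
-    "IMPORT INTO {table-name} CSV DATA ('s3://{bucket-name}/$f?AWS_ACCESS_KEY_ID={access-key}&AWS_SECRET_ACCESS_KEY={secret-key}');" \
-    && echo "$f" >> imported.log
-done < files.txt
-~~~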
-
-<a name="scalable"></a>
-
-### Design a scalable and performant process
-
-Once your process is [restartable and resilient to failures](#design-a-restartable-process), it's important to also make sure it will scale to the needs of your data set. The larger the data set you are migrating to CockroachDB, the more important the performance and scalability of your process will be.
-
-Keep the following requirements in mind:
-
-- Schema and data should be imported separately.
-- Your process should handle multiple files across multiple export/import streams concurrently.
- - For best performance, these files should contain presorted, disjoint data sets.
-- Benchmark the performance of your migration process to help ensure it will complete within the allotted downtime window.
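-
-Building on the restartable loop above, multiple import streams can run concurrently; `import-one.sh` is a hypothetical wrapper that loads a single file and records it in the state file:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Run up to four concurrent import streams over presorted, disjoint files.
-printf '%s\n' part-*.csv | xargs -n 1 -P 4 ./import-one.sh
-~~~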
-
-For more information about import performance, see [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Migrate with AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
-- [Migrate and Replicate Data with Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Migrate and Replicate Data with Striim]({% link {{ page.version.version }}/striim.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Back Up and Restore]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
-- [Export data with Changefeeds]({% link {{ page.version.version }}/export-data-with-changefeeds.md %})
-- [`COPY`]({% link {{ page.version.version }}/copy.md %})
-- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
-- [Client connection parameters]({% link {{ page.version.version }}/connection-parameters.md %})
-
-
-{% comment %} eof {% endcomment %}
diff --git a/src/current/v24.2/qlik.md b/src/current/v24.2/qlik.md
index bdb646d7b20..da53e969bfd 100644
--- a/src/current/v24.2/qlik.md
+++ b/src/current/v24.2/qlik.md
@@ -68,7 +68,7 @@ Complete the following items before using Qlik Replicate:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables, or add transformation rules. If you make substantial schema changes, the Qlik Replicate migration may fail.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -96,7 +96,7 @@ In the Qlik Replicate interface, CockroachDB is configured as a PostgreSQL **sou
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v24.2/read-committed.md b/src/current/v24.2/read-committed.md
index 83d7f9ad7e6..bcf0a5d9f92 100644
--- a/src/current/v24.2/read-committed.md
+++ b/src/current/v24.2/read-committed.md
@@ -13,7 +13,7 @@ docs_area: deploy
- Your application needs to maintain a high workload concurrency with minimal [transaction retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and it can tolerate potential [concurrency anomalies](#concurrency-anomalies). Predictable query performance at high concurrency is more valuable than guaranteed transaction [serializability]({% link {{ page.version.version }}/developer-basics.md %}#serializability-and-transaction-contention).
-- You are [migrating an application to CockroachDB]({% link {{ page.version.version }}/migration-overview.md %}) that was built at a `READ COMMITTED` isolation level on the source database, and it is not feasible to modify your application to use `SERIALIZABLE` isolation.
+- You are [migrating an application to CockroachDB]({% link molt/migration-overview.md %}) that was built at a `READ COMMITTED` isolation level on the source database, and it is not feasible to modify your application to use `SERIALIZABLE` isolation.
Whereas `SERIALIZABLE` isolation guarantees data correctness by placing transactions into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}), `READ COMMITTED` isolation permits some [concurrency anomalies](#concurrency-anomalies) in exchange for minimizing transaction aborts, [retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and blocking. Compared to `SERIALIZABLE` transactions, `READ COMMITTED` transactions do **not** return [serialization errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) that require client-side handling. See [`READ COMMITTED` transaction behavior](#read-committed-transaction-behavior).
@@ -919,4 +919,4 @@ SELECT * FROM schedules
- [Serializable Transactions]({% link {{ page.version.version }}/demo-serializable.md %})
- [What Write Skew Looks Like](https://www.cockroachlabs.com/blog/what-write-skew-looks-like/)
- [Read Committed RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20230122_read_committed_isolation.md)
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
diff --git a/src/current/v24.2/striim.md b/src/current/v24.2/striim.md
index 436d7440cbd..67526c8ff9d 100644
--- a/src/current/v24.2/striim.md
+++ b/src/current/v24.2/striim.md
@@ -37,7 +37,7 @@ Complete the following items before using Striim:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -110,7 +110,7 @@ To perform continuous replication of ongoing changes, create a Striim applicatio
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v24.3/aws-dms.md b/src/current/v24.3/aws-dms.md
index bb3b53b22a2..99a42ac0532 100644
--- a/src/current/v24.3/aws-dms.md
+++ b/src/current/v24.3/aws-dms.md
@@ -41,7 +41,7 @@ Complete the following items before starting the DMS migration:
- Manually create all schema objects in the target CockroachDB cluster. If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, you can [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema.
- - All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ - All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
- Drop all [constraints]({% link {{ page.version.version }}/constraints.md %}) per the [AWS DMS best practices](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.Performance). You can recreate them after the [full load completes](#step-3-verify-the-migration). AWS DMS can create a basic schema, but does not create [indexes]({% link {{ page.version.version }}/indexes.md %}) or constraints such as [foreign keys]({% link {{ page.version.version }}/foreign-key.md %}) and [defaults]({% link {{ page.version.version }}/default-value.md %}).
@@ -406,7 +406,7 @@ The `BatchApplyEnabled` setting can improve replication performance and is recom
## See Also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %})
- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
diff --git a/src/current/v24.3/build-a-java-app-with-cockroachdb-hibernate.md b/src/current/v24.3/build-a-java-app-with-cockroachdb-hibernate.md
index c19bdcaed7b..c4b53d27629 100644
--- a/src/current/v24.3/build-a-java-app-with-cockroachdb-hibernate.md
+++ b/src/current/v24.3/build-a-java-app-with-cockroachdb-hibernate.md
@@ -130,9 +130,9 @@ APP: getAccountBalance(2) --> 350.00
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v24.3/build-a-java-app-with-cockroachdb.md b/src/current/v24.3/build-a-java-app-with-cockroachdb.md
index eae610ab364..d798e9dbaee 100644
--- a/src/current/v24.3/build-a-java-app-with-cockroachdb.md
+++ b/src/current/v24.3/build-a-java-app-with-cockroachdb.md
@@ -269,9 +269,9 @@ props.setProperty("options", "-c sql_safe_updates=true -c statement_timeout=30")
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v24.3/build-a-python-app-with-cockroachdb-sqlalchemy.md b/src/current/v24.3/build-a-python-app-with-cockroachdb-sqlalchemy.md
index 73e5dfa528b..e56b9c136a8 100644
--- a/src/current/v24.3/build-a-python-app-with-cockroachdb-sqlalchemy.md
+++ b/src/current/v24.3/build-a-python-app-with-cockroachdb-sqlalchemy.md
@@ -204,9 +204,9 @@ Instead, we recommend breaking your transaction into smaller units of work (or "
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code that uses an ORM and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}) such as are generated by calls to [`session.bulk_save_objects()`](https://docs.sqlalchemy.org/orm/session_api.html?highlight=bulk_save_object#sqlalchemy.orm.session.Session.bulk_save_objects).
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Prefer the query builder
diff --git a/src/current/v24.3/copy.md b/src/current/v24.3/copy.md
index aa825fdc003..144da73b4ac 100644
--- a/src/current/v24.3/copy.md
+++ b/src/current/v24.3/copy.md
@@ -358,10 +358,10 @@ You can copy CSV data into CockroachDB using the following methods:
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [`EXPORT`]({% link {{ page.version.version }}/export.md %})
- [Install a Driver or ORM Framework]({% link {{ page.version.version }}/install-client-drivers.md %})
{% comment %}
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
{% endcomment %}
\ No newline at end of file
diff --git a/src/current/v24.3/debezium.md b/src/current/v24.3/debezium.md
index ea2eb513b50..a9d14707ea4 100644
--- a/src/current/v24.3/debezium.md
+++ b/src/current/v24.3/debezium.md
@@ -116,7 +116,7 @@ Once all of the [prerequisite steps](#before-you-begin) are completed, you can u
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v24.3/frequently-asked-questions.md b/src/current/v24.3/frequently-asked-questions.md
index 07e00b6959f..c94ba15582b 100644
--- a/src/current/v24.3/frequently-asked-questions.md
+++ b/src/current/v24.3/frequently-asked-questions.md
@@ -147,7 +147,7 @@ Note, however, that the protocol used doesn't significantly impact how easy it i
### Can a PostgreSQL or MySQL application be migrated to CockroachDB?
-Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}) or [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
+Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}) or [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
We also fully support [importing your data via CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}).
diff --git a/src/current/v24.3/goldengate.md b/src/current/v24.3/goldengate.md
index 7fd572094e4..30ee23c17df 100644
--- a/src/current/v24.3/goldengate.md
+++ b/src/current/v24.3/goldengate.md
@@ -514,7 +514,7 @@ Run the steps in this section on a machine and in a directory where Oracle Golde
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v24.3/import-into.md b/src/current/v24.3/import-into.md
index ad4a2c9ca5c..780b9195104 100644
--- a/src/current/v24.3/import-into.md
+++ b/src/current/v24.3/import-into.md
@@ -158,7 +158,7 @@ You can control the `IMPORT` process's behavior using any of the following key-v
For examples showing how to use these options, see the [Examples section]({% link {{ page.version.version }}/import-into.md %}#examples).
-For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link molt/migration-overview.md %}).
## View and control import jobs
@@ -285,6 +285,6 @@ For more information about importing data from Avro, including examples, see [Mi
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
diff --git a/src/current/v24.3/import-performance-best-practices.md b/src/current/v24.3/import-performance-best-practices.md
index b85d28f1fe6..ca69a57fc4a 100644
--- a/src/current/v24.3/import-performance-best-practices.md
+++ b/src/current/v24.3/import-performance-best-practices.md
@@ -160,9 +160,9 @@ If you cannot both split and sort your dataset, the performance of either split
## See also
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Migrate from Oracle]({% link {{ page.version.version }}/migrate-from-oracle.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
diff --git a/src/current/v24.3/index.md b/src/current/v24.3/index.md
index eff4d508353..78ac87b9f4c 100644
--- a/src/current/v24.3/index.md
+++ b/src/current/v24.3/index.md
@@ -99,11 +99,11 @@ docs_area:
diff --git a/src/current/v24.3/insert-data.md b/src/current/v24.3/insert-data.md
index a7e76489a0c..dc3c26ee1f5 100644
--- a/src/current/v24.3/insert-data.md
+++ b/src/current/v24.3/insert-data.md
@@ -105,7 +105,7 @@ conn.commit()
Reference information related to this task:
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`INSERT`]({% link {{ page.version.version }}/insert.md %})
- [`UPSERT`]({% link {{ page.version.version }}/upsert.md %})
- [Transaction Contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention)
diff --git a/src/current/v24.3/migrate-from-avro.md b/src/current/v24.3/migrate-from-avro.md
index 39b7bdbc9aa..de42d232917 100644
--- a/src/current/v24.3/migrate-from-avro.md
+++ b/src/current/v24.3/migrate-from-avro.md
@@ -216,8 +216,8 @@ You will need to run [`ALTER TABLE ... ADD CONSTRAINT`]({% link {{ page.version.
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV][csv]
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.3/migrate-from-csv.md b/src/current/v24.3/migrate-from-csv.md
index e0e1d92da9c..d30eeb2e98c 100644
--- a/src/current/v24.3/migrate-from-csv.md
+++ b/src/current/v24.3/migrate-from-csv.md
@@ -176,8 +176,8 @@ IMPORT INTO employees (emp_no, birth_date, first_name, last_name, gender, hire_d
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.3/migrate-from-geojson.md b/src/current/v24.3/migrate-from-geojson.md
index 2c3326af39e..e0d804f0be7 100644
--- a/src/current/v24.3/migrate-from-geojson.md
+++ b/src/current/v24.3/migrate-from-geojson.md
@@ -122,9 +122,9 @@ IMPORT INTO underground_storage_tank CSV DATA ('http://localhost:3000/tanks.csv'
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.3/migrate-from-geopackage.md b/src/current/v24.3/migrate-from-geopackage.md
index 53acdaff4a7..c3fabeb57ef 100644
--- a/src/current/v24.3/migrate-from-geopackage.md
+++ b/src/current/v24.3/migrate-from-geopackage.md
@@ -114,9 +114,9 @@ IMPORT INTO busstops CSV DATA ('http://localhost:3000/busstops.csv') WITH skip =
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.3/migrate-from-mysql.md b/src/current/v24.3/migrate-from-mysql.md
deleted file mode 100644
index 29cbebde972..00000000000
--- a/src/current/v24.3/migrate-from-mysql.md
+++ /dev/null
@@ -1,412 +0,0 @@
----
-title: Migrate from MySQL
-summary: Learn how to migrate data from MySQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page describes basic considerations and provides an [example](#example-migrate-world-to-cockroachdb) of migrating data from MySQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [MySQL migration example](#example-migrate-world-to-cockroachdb) on this page demonstrates how to use [MOLT tooling]({% link {{ page.version.version }}/migration-overview.md %}#molt) to update the MySQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-You will likely need to make application changes due to differences in syntax between MySQL and CockroachDB. Along with the [general considerations in the migration overview]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), also consider the following MySQL-specific information as you develop your migration plan.
-
-When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), MySQL syntax that cannot be automatically converted is displayed in the [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report). Such syntax may include the following:
-
-#### String case sensitivity
-
-Strings are case-insensitive in MySQL and case-sensitive in CockroachDB. You may need to edit your MySQL data or queries to get the results you expect. For example, string comparisons that relied on case-insensitive matching in MySQL will need to be changed to work with CockroachDB.
-
-For more information about the case sensitivity of strings in MySQL, see [Case Sensitivity in String Searches](https://dev.mysql.com/doc/refman/8.0/en/case-sensitivity.html) from the MySQL documentation. For more information about CockroachDB strings, see [`STRING`]({% link {{ page.version.version }}/string.md %}).
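-
-For example, a minimal check of comparison behavior in CockroachDB (the same comparison evaluates to true in MySQL under its default case-insensitive collation):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT 'Alice' = 'alice' AS equal;
-~~~
-
-~~~
-  equal
----------
-  false
-(1 row)
-~~~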
-
-#### Identifier case sensitivity
-
-Identifiers are case-sensitive in MySQL and [case-insensitive in CockroachDB]({% link {{ page.version.version }}/keywords-and-identifiers.md %}#identifiers). When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either keep case sensitivity by enclosing identifiers in double quotes, or make identifiers case-insensitive by converting them to lowercase.
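-
-For example, a minimal illustration (hypothetical table names) of how quoting affects identifier case in CockroachDB:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE Accounts (id INT8 PRIMARY KEY);   -- unquoted: folded to lowercase as accounts
-CREATE TABLE "Accounts" (id INT8 PRIMARY KEY); -- quoted: case preserved, a distinct table
-~~~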
-
-#### `AUTO_INCREMENT` attribute
-
-The MySQL [`AUTO_INCREMENT`](https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html) attribute, which creates sequential column values, is not supported in CockroachDB. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), columns with `AUTO_INCREMENT` can be converted to use [sequences]({% link {{ page.version.version }}/create-sequence.md %}), `UUID` values with [`gen_random_uuid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions), or unique `INT8` values using [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
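-
-For example, a sketch (hypothetical table) of a converted column that uses `unique_rowid()` in place of `AUTO_INCREMENT`:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE users (
-  id INT8 NOT NULL DEFAULT unique_rowid() PRIMARY KEY,
-  name STRING
-);
-~~~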
-
-{{site.data.alerts.callout_info}}
-Changing a column type during schema conversion will cause [MOLT Verify]({% link molt/molt-verify.md %}) to identify a type mismatch during [data validation](#step-3-validate-the-migrated-data). This is expected behavior.
-{{site.data.alerts.end}}
-
-#### `ENUM` type
-
-MySQL `ENUM` types are defined in table columns. On CockroachDB, [`ENUM`]({% link {{ page.version.version }}/enum.md %}) is a standalone type. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either deduplicate the `ENUM` definitions or create a separate type for each column.
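-
-For example, a sketch (hypothetical type and table names) of a standalone `ENUM` that can be shared across columns:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TYPE ticket_status AS ENUM ('open', 'closed');
-CREATE TABLE tickets (id INT8 PRIMARY KEY, status ticket_status);
-~~~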
-
-#### `TINYINT` type
-
-`TINYINT` data types are not supported in CockroachDB. The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) automatically converts `TINYINT` columns to [`INT2`]({% link {{ page.version.version }}/int.md %}) (`SMALLINT`).
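-
-For example, a MySQL column declared as `flag TINYINT` appears as `flag INT2` in the converted schema.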
-
-#### Geospatial types
-
-MySQL geometry types are not converted to CockroachDB [geospatial types]({% link {{ page.version.version }}/spatial-data-overview.md %}#spatial-objects) by the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql). They should be manually converted to the corresponding types in CockroachDB.
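-
-For example, a sketch (hypothetical table) of a manually converted column using a CockroachDB spatial type:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE places (id INT8 PRIMARY KEY, boundary GEOMETRY);
-~~~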
-
-#### `FIELD` function
-
-The MySQL `FIELD` function is not supported in CockroachDB. Instead, you can use the [`array_position`]({% link {{ page.version.version }}/functions-and-operators.md %}#array-functions) function, which returns the index of the first occurrence of an element in an array.
-
-Example usage:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT array_position(ARRAY[4,1,3,2],1);
-~~~
-
-~~~
- array_position
-------------------
- 2
-(1 row)
-~~~
-
-MySQL returns 0 when the element is not found, whereas CockroachDB returns `NULL`. As a result, if you use `array_position` in an `ORDER BY` clause, rows where the element is not found are sorted by `NULL`. To control where those rows sort, wrap the call in the [`COALESCE`]({% link {{ page.version.version }}/functions-and-operators.md %}#conditional-and-function-like-operators) operator:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT * FROM table_a ORDER BY COALESCE(array_position(ARRAY[4,1,3,2],5),999);
-~~~
-
-## Load MySQL data
-
-You can use [MOLT Fetch]({% link molt/molt-fetch.md %}) to migrate MySQL data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-The [following example](#example-migrate-world-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `world` to CockroachDB
-
-The following steps demonstrate [converting a schema]({% link {{ page.version.version }}/migration-overview.md %}#convert-the-schema), performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that MySQL data can be properly migrated to CockroachDB and that your application queries can be tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses the [MySQL `world` data set](https://dev.mysql.com/doc/index-other.html) and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the [`world` data set](https://dev.mysql.com/doc/index-other.html).
-
-1. Create the `world` database on your MySQL instance, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqlsh -uroot --sql --file {path}/world-db/world.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) and [create the CockroachDB {{ site.data.products.standard }} cluster]({% link cockroachcloud/create-your-cluster.md %}).
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the MySQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) to convert the `world` schema for compatibility with CockroachDB. The schema has three tables: `city`, `country`, and `countrylanguage`.
-
-1. Dump the MySQL `world` schema with the following [`mysqldump`](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-sql-format.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot --no-data world > world_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) in the {{ site.data.products.cloud }} Console and [add a new MySQL schema]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema).
-
- For **AUTO_INCREMENT Conversion Option**, select the [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions) option. This will convert the `ID` column in the `city` table, which has MySQL type `int` and `AUTO_INCREMENT`, to a CockroachDB [`INT8`]({% link {{ page.version.version }}/int.md %}) type with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). For context on this option, see [`AUTO_INCREMENT` attribute](#auto_increment-attribute).
-
- The `UUID` and `unique_rowid()` options are each preferred for [different use cases]({% link {{ page.version.version }}/sql-faqs.md %}#what-are-the-differences-between-uuid-sequences-and-unique_rowid). For this example, selecting the `unique_rowid()` option makes [loading the data](#step-2-load-the-mysql-data) more straightforward in a later step, since both the source and target columns will have integer types.
-
-1. [Upload `world_schema.sql`]({% link cockroachcloud/migrations-page.md %}?filters=mysql#upload-file) to the Schema Conversion Tool.
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}?filters=mysql#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report) shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your MySQL database credentials]({% link cockroachcloud/migrations-page.md %}?filters=mysql#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the MySQL database.
- {{site.data.alerts.end}}
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Before you migrate the converted schema, click the **Statements** tab to view the [Statements list]({% link cockroachcloud/migrations-page.md %}?filters=mysql#statements-list). Scroll down to the `CREATE TABLE countrylanguage` statement and edit the statement to add a [collation]({% link {{ page.version.version }}/collate.md %}) (`COLLATE en_US`) on the `language` column:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE TABLE countrylanguage (
- countrycode VARCHAR(3) DEFAULT '' NOT NULL,
- language VARCHAR(30) COLLATE en_US DEFAULT '' NOT NULL,
- isofficial countrylanguage_isofficial_enum
- DEFAULT 'F'
- NOT NULL,
- percentage DECIMAL(4,1) DEFAULT '0.0' NOT NULL,
- PRIMARY KEY (countrycode, language),
- INDEX countrycode (countrycode),
- CONSTRAINT countrylanguage_ibfk_1
- FOREIGN KEY (countrycode) REFERENCES country (code)
- )
- ~~~
-
- Click **Save**.
-
-    This is a workaround to prevent [data validation](#step-3-validate-the-migrated-data) from failing due to collation mismatches. For more details, see the [MOLT Verify]({% link molt/molt-verify.md %}#known-limitations) documentation.
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#migrate-the-schema) to create a new {{ site.data.products.standard }} cluster with the converted schema. Name the database `world`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
-1. Open a SQL shell to the CockroachDB `world` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. For large imports, Cockroach Labs recommends [removing indexes prior to loading data]({% link {{ page.version.version }}/import-performance-best-practices.md %}#import-into-a-schema-with-secondary-indexes) and recreating them afterward. This provides increased visibility into the import progress and the ability to retry each step independently.
-
- Show the indexes on the `world` database:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW INDEXES FROM DATABASE world;
- ~~~
-
- The `countrycode` [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) indexes on the `city` and `countrylanguage` tables can be removed for now:
-
- ~~~
- table_name | index_name | index_schema | non_unique | seq_in_index | column_name | definition | direction | storing | implicit | visible
- ---------------------------------+-------------------------------------------------+--------------+------------+--------------+-----------------+-----------------+-----------+---------+----------+----------
- ...
- city | countrycode | public | t | 2 | id | id | ASC | f | t | t
- city | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- ...
- countrylanguage | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- countrylanguage | countrycode | public | t | 2 | language | language | ASC | f | t | t
- ...
- ~~~
-
-1. Drop the `countrycode` indexes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX city@countrycode;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX countrylanguage@countrycode;
- ~~~
-
- You will recreate the indexes after [loading the data](#step-2-load-the-mysql-data).
-
-### Step 2. Load the MySQL data
-
-Load the `world` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-When MySQL dumps data, the tables are not ordered by [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) dependencies, so a referenced table may be loaded after a table that depends on it. It is best to disable foreign key checks when loading data into CockroachDB, and to revalidate the foreign keys on each table after the data is loaded.
-
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump the MySQL `world` data with the following [`mysqldump` command](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-delimited-text.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot -T /{path}/world-data --fields-terminated-by ',' --fields-enclosed-by '"' --fields-escaped-by '\' --no-create-info world
- ~~~
-
- This dumps each table in your database to the path `/{path}/world-data` as a `.txt` file in CSV format.
- - `--fields-terminated-by` specifies that values are separated by commas instead of tabs.
- - `--fields-enclosed-by` and `--fields-escaped-by` specify the characters that enclose and escape column values, respectively.
- - `--no-create-info` dumps only the [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements).
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
-    Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `world` cluster, using the same command as before:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each MySQL dump file into the corresponding table in the `world` database.
-
- The following commands point to a public S3 bucket where the `world` data dump files are hosted for this example. The `nullif='\N'` clause specifies that `\N` values, which are produced by the `mysqldump` command, should be read as [`NULL`]({% link {{ page.version.version }}/null-handling.md %}).
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
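-
-    For example, a sketch (reusing the `countrylanguage` file below) of a limited trial import. Clear the table afterward so that the full import does not collide with the trial rows:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    IMPORT INTO countrylanguage
-      CSV DATA (
-        'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/countrylanguage.txt'
-      )
-      WITH
-        nullif='\N',
-        row_limit='10';
-    ~~~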
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
- IMPORT INTO countrylanguage
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/countrylanguage.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782070812344321 | succeeded | 1 | 984 | 984 | 171555
- ~~~
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
- IMPORT INTO country
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/country.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 887782114360819713 | succeeded | 1 | 239 | 0 | 33173
- ~~~
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
- IMPORT INTO city
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/city.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782154421567489 | succeeded | 1 | 4079 | 4079 | 288140
- ~~~
-
- {{site.data.alerts.callout_info}}
- After [converting the schema](#step-1-convert-the-mysql-schema) to work with CockroachDB, the `id` column in `city` is an [`INT8`]({% link {{ page.version.version }}/int.md %}) with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). However, `unique_rowid()` values are only generated when new rows are [inserted]({% link {{ page.version.version }}/insert.md %}) without an `id` value. The MySQL data dump still includes the sequential `id` values generated by the MySQL [`AUTO_INCREMENT` attribute](#auto_increment-attribute), and these are imported with the `IMPORT INTO` command.
-
- In an actual migration, you can either update the primary key into a [multi-column key]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or add a new primary key column that [generates unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
- {{site.data.alerts.end}}
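-
-    For example, a sketch (not run in this tutorial) of the first option, changing `city` to a multi-column primary key:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    ALTER TABLE city ALTER PRIMARY KEY USING COLUMNS (countrycode, id);
-    ~~~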
-
-1. Recreate the indexes that you deleted before importing the data:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON city (countrycode, id);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON countrylanguage (countrycode, language);
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `city` and `countrylanguage`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM city;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+-----------------+-----------------+--------------------------------------------------------------+------------
- city | city_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- city | city_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM countrylanguage;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- ------------------+------------------------+-----------------+--------------------------------------------------------------+------------
- countrylanguage | countrylanguage_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- countrylanguage | countrylanguage_pkey | PRIMARY KEY | PRIMARY KEY (countrycode ASC, language ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE city VALIDATE CONSTRAINT city_ibfk_1;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE countrylanguage VALIDATE CONSTRAINT countrylanguage_ibfk_1;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on MySQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the [JDBC connection string for MySQL](https://dev.mysql.com/doc/connector-j/8.1/en/connector-j-reference-jdbc-url-format.html) with `--source` and the SQL connection string for CockroachDB with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- ./molt verify --source 'jdbc:mysql://{user}:{password}@tcp({host}:{port})/world' --target 'postgresql://{user}:{password}@{host}:{port}/world?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following warnings indicate that the MySQL and CockroachDB columns have different types. This is an expected result, since some columns were [changed to `ENUM` types](#enum-type) when you [converted the schema](#step-1-convert-the-mysql-schema):
-
- ~~~
- WRN mismatching table definition mismatch_info="column type mismatch on continent: text vs country_continent_enum" table_name=country table_schema=public
- WRN mismatching table definition mismatch_info="column type mismatch on isofficial: text vs countrylanguage_isofficial_enum" table_name=countrylanguage table_schema=public
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.country (shard 1/1): truth rows seen: 239, success: 239, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.countrylanguage (shard 1/1): truth rows seen: 984, success: 984, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.city (shard 1/1): truth rows seen: 4079, success: 4079, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v24.3/migrate-from-openstreetmap.md b/src/current/v24.3/migrate-from-openstreetmap.md
index 843c335beba..a4bcdc56eef 100644
--- a/src/current/v24.3/migrate-from-openstreetmap.md
+++ b/src/current/v24.3/migrate-from-openstreetmap.md
@@ -128,9 +128,9 @@ Osm2pgsql took 2879s overall
- [Migrate from GeoPackages]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.3/migrate-from-oracle.md b/src/current/v24.3/migrate-from-oracle.md
index 2979cd47a36..a62e9ae2c11 100644
--- a/src/current/v24.3/migrate-from-oracle.md
+++ b/src/current/v24.3/migrate-from-oracle.md
@@ -390,8 +390,8 @@ You will have to refactor Oracle SQL and functions that do not comply with [ANSI
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.3/migrate-from-postgres.md b/src/current/v24.3/migrate-from-postgres.md
deleted file mode 100644
index 795d303e656..00000000000
--- a/src/current/v24.3/migrate-from-postgres.md
+++ /dev/null
@@ -1,297 +0,0 @@
----
-title: Migrate from PostgreSQL
-summary: Learn how to migrate data from PostgreSQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page describes basic considerations and provides an [example](#example-migrate-frenchtowns-to-cockroachdb) of migrating data from PostgreSQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [PostgreSQL migration example](#example-migrate-frenchtowns-to-cockroachdb) on this page demonstrates how to use [MOLT tooling]({% link {{ page.version.version }}/migration-overview.md %}#molt) to update the PostgreSQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax.
-
-For syntax differences, refer to [Features that differ from PostgreSQL]({% link {{ page.version.version }}/postgresql-compatibility.md %}#features-that-differ-from-postgresql).
-
-### Unsupported features
-
-The following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-## Load PostgreSQL data
-
-You can use [MOLT Fetch]({% link molt/molt-fetch.md %}) to migrate PostgreSQL data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
-
-The [following example](#example-migrate-frenchtowns-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `frenchtowns` to CockroachDB
-
-The following steps demonstrate [converting a schema]({% link {{ page.version.version }}/migration-overview.md %}#convert-the-schema), performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that PostgreSQL data can be properly migrated to CockroachDB and that your application queries can be tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses a modified version of the PostgreSQL `french-towns-communes-francais` data set and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the `frenchtowns` data set:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- curl -O https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/frenchtowns.sql
- ~~~
-
-1. Create a `frenchtowns` database on your PostgreSQL instance:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- createdb frenchtowns
- ~~~
-
-1. Load the `frenchtowns` data into PostgreSQL, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -a -f frenchtowns.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) and [create the CockroachDB {{ site.data.products.standard }} cluster]({% link cockroachcloud/create-your-cluster.md %}).
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the PostgreSQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert the `frenchtowns` schema for compatibility with CockroachDB. The schema has three tables: `regions`, `departments`, and `towns`.
-
-1. Dump the PostgreSQL `frenchtowns` schema with the following [`pg_dump`](https://www.postgresql.org/docs/15/app-pgdump.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- pg_dump --schema-only frenchtowns > frenchtowns_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) in the {{ site.data.products.cloud }} Console and [add a new PostgreSQL schema]({% link cockroachcloud/migrations-page.md %}#convert-a-schema).
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}#summary-report) shows that there are errors under **Required Fixes**. You must resolve these in order to migrate the schema to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your PostgreSQL database credentials]({% link cockroachcloud/migrations-page.md %}#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the PostgreSQL database.
- {{site.data.alerts.end}}
-
-1. `Missing user: postgres` errors indicate that the SQL user `postgres` is missing from CockroachDB. Click **Add User** to create the user.
-
-1. `Miscellaneous Errors` includes a `SELECT pg_catalog.set_config('search_path', '', false)` statement that can safely be removed. Click **Delete** to remove the statement from the schema.
-
-1. Review the `CREATE SEQUENCE` statements listed under **Suggestions**. Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
-
- For this example, **Acknowledge** the suggestion without making further changes. In practice, after [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration) to CockroachDB, you would modify your CockroachDB schema to use unique and non-sequential primary keys.
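-
-    For example, a sketch (hypothetical table) of the kind of non-sequential primary key you would adopt in place of a sequence:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    CREATE TABLE events (
-      id UUID NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY,
-      payload STRING
-    );
-    ~~~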
-
-1. Click **Retry Migration**. The **Summary Report** now shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema) to create a new CockroachDB {{ site.data.products.standard }} cluster with the converted schema. Name the database `frenchtowns`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
-### Step 2. Load the PostgreSQL data
-
-Load the `frenchtowns` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump each table in the PostgreSQL `frenchtowns` database to a CSV-formatted file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY regions TO stdout DELIMITER ',' CSV;" > regions.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY departments TO stdout DELIMITER ',' CSV;" > departments.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY towns TO stdout DELIMITER ',' CSV;" > towns.csv
- ~~~
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
-    Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `frenchtowns` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/frenchtowns?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each PostgreSQL dump file into the corresponding table in the `frenchtowns` database.
-
- The following commands point to a public S3 bucket where the `frenchtowns` data dump files are hosted for this example.
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
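-
-    For example, a sketch (reusing the `regions` file below) of a limited trial import. Clear the table afterward so that the full import does not collide with the trial rows:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    IMPORT INTO regions
-      CSV DATA (
-        'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/regions.csv'
-      )
-      WITH
-        row_limit='10';
-    ~~~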
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO regions
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/regions.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753132185026561 | succeeded | 1 | 26 | 52 | 2338
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO departments
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/departments.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753147892465665 | succeeded | 1 | 100 | 300 | 11166
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO towns
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/towns.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+-------+---------------+----------
- 893753162225680385 | succeeded | 1 | 36684 | 36684 | 2485007
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `departments` and `towns`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM departments;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- --------------+-------------------------+-----------------+---------------------------------------------------------+------------
- departments | departments_capital_key | UNIQUE | UNIQUE (capital ASC) | t
- departments | departments_code_key | UNIQUE | UNIQUE (code ASC) | t
- departments | departments_name_key | UNIQUE | UNIQUE (name ASC) | t
- departments | departments_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- departments | departments_region_fkey | FOREIGN KEY | FOREIGN KEY (region) REFERENCES regions(code) NOT VALID | f
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM towns;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+---------------------------+-----------------+-----------------------------------------------------------------+------------
- towns | towns_code_department_key | UNIQUE | UNIQUE (code ASC, department ASC) | t
- towns | towns_department_fkey | FOREIGN KEY | FOREIGN KEY (department) REFERENCES departments(code) NOT VALID | f
- towns | towns_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE departments VALIDATE CONSTRAINT departments_region_fkey;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE towns VALIDATE CONSTRAINT towns_department_fkey;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on PostgreSQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the PostgreSQL connection string with `--source` and the CockroachDB connection string with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
-    ./molt verify --source 'postgresql://{user}:{password}@{host}:{port}/frenchtowns' --target 'postgresql://{user}:{password}@{host}:{port}/frenchtowns?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.regions (shard 1/1): truth rows seen: 26, success: 26, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.departments (shard 1/1): truth rows seen: 100, success: 100, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 10000, success: 10000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 20000, success: 20000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 30000, success: 30000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.towns (shard 1/1): truth rows seen: 36684, success: 36684, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v24.3/migrate-from-shapefiles.md b/src/current/v24.3/migrate-from-shapefiles.md
index ea4e7a368e6..44f366d5a69 100644
--- a/src/current/v24.3/migrate-from-shapefiles.md
+++ b/src/current/v24.3/migrate-from-shapefiles.md
@@ -140,9 +140,9 @@ IMPORT INTO tornadoes CSV DATA ('http://localhost:3000/tornadoes.csv') WITH skip
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v24.3/migration-overview.md b/src/current/v24.3/migration-overview.md
deleted file mode 100644
index b1e02d03254..00000000000
--- a/src/current/v24.3/migration-overview.md
+++ /dev/null
@@ -1,355 +0,0 @@
----
-title: Migration Overview
-summary: Learn how to migrate your database to a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page provides an overview of how to migrate a database to CockroachDB.
-
-A database migration broadly consists of the following phases:
-
-1. [Develop a migration plan:](#develop-a-migration-plan) Evaluate your [downtime requirements](#approach-to-downtime) and [cutover strategy](#cutover-strategy), [size the CockroachDB cluster](#capacity-planning) that you will migrate to, and become familiar with the [application changes](#application-changes) that you need to make for CockroachDB.
-1. [Prepare for migration:](#prepare-for-migration) Run a [pre-mortem](#run-a-migration-pre-mortem) (optional), set up [metrics](#set-up-monitoring-and-alerting) (optional), [convert your schema](#convert-the-schema), perform an [initial load of test data](#load-test-data), [validate your application queries](#validate-queries) for correctness and performance, and [perform a dry run](#perform-a-dry-run) of the migration.
-1. [Conduct the migration:](#conduct-the-migration) Use a [lift-and-shift](#lift-and-shift) or ["zero-downtime"](#zero-downtime) method to migrate your data, application, and users to CockroachDB.
-1. [Complete the migration:](#complete-the-migration) Notify the appropriate parties and summarize the details.
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Develop a migration plan
-
-Consider the following as you plan your migration:
-
-- Who will lead and perform the migration? Which teams are involved, and which aspects are they responsible for?
-- Which internal and external parties do you need to inform about the migration?
-- Which external or third-party tools (e.g., microservices, analytics, payment processors, aggregators, CRMs) must be tested and migrated along with your application?
-- What portion of the data can be inconsistent, and for how long? What increase in latency and rate of application errors can you tolerate? Together, these limits form your "error budget".
-- What is the tolerable [downtime](#approach-to-downtime), and what [cutover strategy](#cutover-strategy) will you use to switch users to CockroachDB?
-- Will you set up a "dry-run" environment to test the migration? How many [dry-run migrations](#perform-a-dry-run) will you perform?
-- When is the best time to perform this migration to be minimally disruptive to the database's users?
-- What is your target date for completing the migration?
-
-Create a document that summarizes the intent of the migration, the technical details, and the team members involved.
-
-### Approach to downtime
-
-A primary consideration is whether your application can tolerate downtime:
-
-- What types of operations can you suspend: reads, writes, or both?
-- How long can operations be suspended: seconds, minutes, or hours?
-- Should writes be queued while service is suspended?
-
-Take the following two use cases:
-
-- An application that is primarily used during daytime business hours can likely be taken offline during a predetermined timeframe without disrupting the user experience or business continuity. In this case, your migration can occur in a [downtime window](#downtime-window).
-- An application that must serve writes continuously cannot tolerate a long downtime window. In this case, you will aim for [zero or near-zero downtime](#minimal-downtime).
-
-#### Downtime window
-
-If your application can tolerate downtime, then it will likely be easiest to take your application offline, load a snapshot of the data into CockroachDB, and perform a [cutover](#cutover-strategy) to CockroachDB once the data is migrated. This is known as a *lift-and-shift* migration.
-
-A lift-and-shift approach is the most straightforward. However, it's important to fully [prepare the migration](#prepare-for-migration) in order to be certain that it can be completed successfully during the downtime window.
-
-- *Scheduled downtime* is made known to your users in advance. Once you have [prepared for the migration](#prepare-for-migration), you take the application offline, [conduct the migration](#conduct-the-migration), and bring the application back online on CockroachDB. To succeed, you should estimate the amount of downtime required to migrate your data, and ideally schedule the downtime outside of peak hours. Scheduling downtime is easiest if your application traffic is "periodic", meaning that it varies by the time of day, day of week, or day of month.
-
-- *Unscheduled downtime* impacts as few customers as possible, ideally without impacting their regular usage. If your application is intentionally offline at certain times (e.g., outside business hours), you can migrate the data without users noticing. Alternatively, if your application's functionality is not time-sensitive (e.g., it sends batched messages or emails), then you can queue requests while your system is offline, and process those requests after completing the migration to CockroachDB.
-
-- *Reduced functionality* takes some, but not all, application functionality offline. For example, you can disable writes but not reads while you migrate the application data, and queue data to be written after completing the migration.
-
-For an overview of lift-and-shift migrations to CockroachDB, see [Lift and Shift](#lift-and-shift).
-
-#### Minimal downtime
-
-If your application cannot tolerate downtime, then you should aim for a "zero-downtime" approach. This reduces downtime to an absolute minimum, such that users do not notice the migration.
-
-The minimum possible downtime depends on whether you can tolerate inconsistency in the migrated data:
-
-- Migrations performed using *consistent cutover* reduce downtime to an absolute minimum (i.e., seconds or sub-seconds) while keeping data synchronized between the source database and CockroachDB. **Consistency requires downtime.** In this approach, downtime occurs right before [cutover](#cutover-strategy), as you drain the remaining transactions from the source database to CockroachDB.
-
-- Migrations performed using *immediate cutover* can reduce downtime to zero. These require the most preparation, and typically allow read/write traffic to both databases for at least a short period of time, sacrificing consistency for availability. Without stopping application traffic, you perform an **immediate** [cutover](#cutover-strategy), while assuming that some writes will not be replicated to CockroachDB. You may want to manually reconcile these data inconsistencies after switching over.
-
-For an overview of zero-downtime migrations to CockroachDB, see [Zero Downtime](#zero-downtime). {% comment %}For details, see [Migration Strategy: Zero Downtime](migration-strategy-zero-downtime).{% endcomment %}
-
-### Cutover strategy
-
-*Cutover* is the process of switching application traffic from the source database to CockroachDB. Consider the following:
-
-- Will you perform the cutover all at once, or incrementally (e.g., by a subset of users, workloads, or tables)?
-
- - Switching all at once generally follows a [downtime window](#downtime-window) approach. Once the data is migrated to CockroachDB, you "flip the switch" to route application traffic to the new database, thus ending downtime.
-
- - Migrations with [zero or near-zero downtime](#minimal-downtime) can switch either all at once or incrementally, since writes are being synchronously replicated and the system can be gradually migrated as you [validate the queries](#validate-queries).
-
-- Will you have a fallback plan that allows you to reverse ("roll back") the migration from CockroachDB to the source database? A fallback plan enables you to fix any issues or inconsistencies that you encounter during or after cutover, then retry the migration.
-
-#### All at once (no rollback)
-
-This is the simplest cutover method, since you won't need to develop and execute a fallback plan.
-
-As part of [migration preparations](#prepare-for-migration), you will have already [tested your queries and performance](#test-query-results-and-performance) to have confidence to migrate without a rollback option. After moving all of the data from the source database to CockroachDB, you switch application traffic to CockroachDB.
-
-#### All at once (rollback)
-
-This method adds a fallback plan to the simple [all-at-once](#all-at-once-no-rollback) cutover.
-
-In addition to moving data to CockroachDB, data is also replicated from CockroachDB back to the source database in case you need to roll back the migration. Continuous replication is already possible when performing a [zero-downtime migration](#zero-downtime) that dual writes to both databases. Otherwise, you will need to ensure that data is replicated in the reverse direction at cutover. The challenge is to find a point at which both the source database and CockroachDB are in sync, so that you can roll back to that point. You should also avoid falling into a circular state where updates continuously travel back and forth between the source database and CockroachDB.
-
-#### Phased rollout
-
-Also known as the ["strangler fig"](https://en.wikipedia.org/wiki/Strangler_fig) approach, a phased rollout migrates a portion of your users, workloads, or tables over time. Until all users, workloads, and/or tables are migrated, the application will continue to write to both databases.
-
-This approach enables you to take your time with the migration, and to pause or roll back as you [monitor the migration](#set-up-monitoring-and-alerting) for issues and performance. Rolling back the migration involves the same caveats and considerations as for the [all-at-once](#all-at-once-rollback) method. Because you can control the blast radius of your migration by routing traffic for a subset of users or services, a phased rollout has reduced business risk and user impact at the cost of increased implementation risk. You will need to figure out how to migrate in phases while ensuring that your application is unaffected.
-
-### Capacity planning
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-Determine the size of the target CockroachDB cluster. To do this, consider your data volume and workload characteristics:
-
-- What is the total size of the data you will migrate?
-- How many active [application connections]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling) will be running in the CockroachDB environment?
-
-Use this information to size the CockroachDB cluster you will create. If you are migrating to a CockroachDB {{ site.data.products.cloud }} cluster, see [Plan Your Cluster]({% link cockroachcloud/plan-your-cluster.md %}) for details:
-
-- For CockroachDB {{ site.data.products.standard }} and {{ site.data.products.basic }}, your cluster will scale automatically to meet your storage and usage requirements. Refer to the [CockroachDB {{ site.data.products.standard }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) and [CockroachDB {{ site.data.products.basic }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) documentation to learn how to limit your resource consumption.
-- For CockroachDB {{ site.data.products.advanced }}, refer to the [example]({% link cockroachcloud/plan-your-cluster-advanced.md %}#example) that shows how your data volume, storage requirements, and replication factor affect the recommended node size (number of vCPUs per node) and total number of nodes on the cluster.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.cloud }} [Production Checklist]({% link cockroachcloud/production-checklist.md %}#sql-connection-handling).
-
-If you are migrating to a CockroachDB {{ site.data.products.core }} cluster:
-
-- Refer to our [sizing methodology]({% link {{ page.version.version }}/recommended-production-settings.md %}#sizing) to determine the total number of vCPUs on the cluster and the number of vCPUs per node (which determines the number of nodes on the cluster).
-- Refer to our [storage recommendations]({% link {{ page.version.version }}/recommended-production-settings.md %}#storage) to determine the amount of storage to provision on each node.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.core }} [Production Checklist]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling).
-
-### Application changes
-
-As you develop your migration plan, consider the application changes that you will need to make. These may relate to the following:
-
-- [Designing a schema that is compatible with CockroachDB.](#schema-design-best-practices)
-- [Creating effective indexes on CockroachDB.](#index-creation-best-practices)
-- [Handling transaction contention.](#handling-transaction-contention)
-- [Unimplemented features and syntax incompatibilities.](#unimplemented-features-and-syntax-incompatibilities)
-
-#### Schema design best practices
-
-Follow these recommendations when [converting your schema](#convert-the-schema) for compatibility with CockroachDB.
-
-{{site.data.alerts.callout_success}}
-The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) automatically identifies potential improvements to your schema.
-{{site.data.alerts.end}}
-
-- You should define an explicit primary key on every table, as shown in the example after this list. For more information, see [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices).
-
-- Do not use a sequence to define a primary key column. Instead, Cockroach Labs recommends that you use [multi-column primary keys]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or [auto-generating unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-functions-to-generate-unique-ids) for primary key columns.
-
-- By default on CockroachDB, `INT` is an alias for `INT8`, which creates 64-bit signed integers. Depending on your source database or application requirements, you may need to change the integer size to `4`. For example, [PostgreSQL defaults to 32-bit integers](https://www.postgresql.org/docs/9.6/datatype-numeric.html). For more information, see [Considerations for 64-bit signed integers]({% link {{ page.version.version }}/int.md %}#considerations-for-64-bit-signed-integers).
-
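-For example, the following hypothetical `orders` table (the table and column names are placeholders, not from a real schema) applies these recommendations:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE orders (
-    user_id UUID NOT NULL,
-    order_id UUID NOT NULL DEFAULT gen_random_uuid(), -- auto-generated unique ID instead of a sequence
-    quantity INT4 NOT NULL,                           -- 32-bit integer to match a PostgreSQL INTEGER column
-    PRIMARY KEY (user_id, order_id)                   -- explicit multi-column primary key
-);
-~~~
-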
-#### Index creation best practices
-
-Review the [best practices for creating secondary indexes]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices) on CockroachDB.
-
-{% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %}
-
-#### Handling transaction contention
-
-Optimize your queries against [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). You may encounter [transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) when you [test application queries](#validate-queries), as well as transaction contention due to long-running transactions when you [conduct the migration](#conduct-the-migration) and bulk load data.
-
-Transaction retry errors are more frequent under CockroachDB's default [`SERIALIZABLE` isolation level]({% link {{ page.version.version }}/demo-serializable.md %}). If you are migrating an application that was built at a `READ COMMITTED` isolation level, you should first [enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
-
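-For example, the following statements enable `READ COMMITTED` cluster-wide and set it as the session default. This is a minimal sketch; refer to [Enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) for details:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- Allow READ COMMITTED transactions on the cluster.
-SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = true;
--- Use READ COMMITTED for subsequent transactions in this session.
-SET default_transaction_isolation = 'read committed';
-~~~
-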
-#### Unimplemented features and syntax incompatibilities
-
-Update your queries to resolve differences in functionality and SQL syntax.
-
-{{site.data.alerts.callout_success}}
-The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) automatically flags syntax incompatibilities and unimplemented features in your schema.
-{{site.data.alerts.end}}
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax. However, the following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-If your source database uses any of the preceding features, you may need to implement workarounds in your schema design, in your [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements), or in your application code.
-
-For more details on the CockroachDB SQL implementation, see [SQL Feature Support]({% link {{ page.version.version }}/sql-feature-support.md %}).
-
-## Prepare for migration
-
-Once you have a migration plan, prepare the team, application, source database, and CockroachDB cluster for the migration.
-
-### Run a migration "pre-mortem"
-
-{{site.data.alerts.callout_success}}
-This step is optional.
-{{site.data.alerts.end}}
-
-To minimize issues after [cutover](#cutover-strategy), compose a migration "pre-mortem":
-
-- Clearly describe the roles and processes of each team member performing the migration.
-- List the likely failure points and issues that you may encounter as you [conduct the migration](#conduct-the-migration).
-- Rank potential issues by severity, and identify ways to reduce risk.
-- Create a plan for implementing the actions that would most effectively reduce risk.
-
-### Set up monitoring and alerting
-
-{{site.data.alerts.callout_success}}
-This step is optional.
-{{site.data.alerts.end}}
-
-Based on the error budget you [defined in your migration plan](#develop-a-migration-plan), identify the metrics that you can use to measure your success criteria and set up monitoring for the migration. These metrics may be identical to those you normally use in production, but can also be specific to your migration needs.
-
-### Update the schema and queries
-
-In the following order:
-
-1. [Convert your schema](#convert-the-schema).
-1. [Load test data](#load-test-data).
-1. [Validate your application queries](#validate-queries).
-
-You can use the following [MOLT (Migrate Off Legacy Technology) tools]({% link molt/molt-overview.md %}) to simplify these steps:
-
-- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [MOLT Fetch]({% link molt/molt-fetch.md %})
-- [MOLT Verify]({% link molt/molt-verify.md %})
-
-#### Convert the schema
-
-First, convert your database schema to an equivalent CockroachDB schema:
-
-- Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert your schema line-by-line. The Schema Conversion Tool accepts `.sql` files from PostgreSQL, MySQL, Oracle, and Microsoft SQL Server. This requires a free [CockroachDB {{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}). The tool will convert the syntax, identify [unimplemented features and syntax incompatibilities](#unimplemented-features-and-syntax-incompatibilities) in the schema, and suggest edits according to CockroachDB [best practices](#schema-design-best-practices).
- {{site.data.alerts.callout_success}}
-    If the Schema Conversion Tool is not an option when migrating from PostgreSQL or MySQL, you can enable automatic schema creation when [loading data](#load-test-data) with MOLT Fetch. The [`--table-handling drop-on-target-and-recreate`]({% link molt/molt-fetch.md %}#target-table-handling) option creates a one-to-one mapping between the source database and CockroachDB, and works well when the source schema is well-defined. For additional help, contact your account team.
- {{site.data.alerts.end}}
-
-- Alternatively, manually convert the schema according to our [schema design best practices](#schema-design-best-practices){% comment %}and data type mappings{% endcomment %}. You can also [export a partially converted schema]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool to finish the conversion manually.
-
-Then import the converted schema to a CockroachDB cluster:
-
-- For CockroachDB {{ site.data.products.cloud }}, use the Schema Conversion Tool to [migrate the converted schema to a new {{ site.data.products.cloud }} database]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema).
-- For CockroachDB {{ site.data.products.core }}, pipe the [data definition language (DDL)]({% link {{ page.version.version }}/sql-statements.md %}#data-definition-statements) directly into [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}), as in the sketch below. You can [export a converted schema file]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool.
- {{site.data.alerts.callout_success}}
- For the fastest performance, you can use a [local, single-node CockroachDB cluster]({% link {{ page.version.version }}/cockroach-start-single-node.md %}#start-a-single-node-cluster) to convert your schema and [check the results of queries](#test-query-results-and-performance).
- {{site.data.alerts.end}}
-
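-For example, the following sketch pipes an exported schema file into `cockroach sql`; the file name `schema.sql` and the connection string are placeholders:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-cockroach sql --url 'postgresql://{username}:{password}@{host}:{port}/{database}' < schema.sql
-~~~
-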
-#### Load test data
-
-{{site.data.alerts.callout_success}}
-Before moving data, Cockroach Labs recommends [dropping any indexes]({% link {{ page.version.version }}/drop-index.md %}) on the CockroachDB database. The indexes can be [recreated]({% link {{ page.version.version }}/create-index.md %}) after the data is loaded. Doing so will optimize performance.
-{{site.data.alerts.end}}
-
-After [converting the schema](#convert-the-schema), load your data into CockroachDB so that you can [test your application queries](#validate-queries). Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB, as in the sketch below.
-
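-The following is a minimal sketch of a `molt fetch` invocation; the connection strings and bucket path are placeholders, and your migration may require additional flags (refer to [MOLT Fetch]({% link molt/molt-fetch.md %})):
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-molt fetch \
-  --source 'postgresql://{username}:{password}@{host}:{port}/{database}' \
-  --target 'postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full' \
-  --bucket-path 's3://{bucket-name}'
-~~~
-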
-Alternatively, you can use one of the following methods to migrate the data. Additional tooling may be required to extract or convert the data to a supported file format.
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %} Typically during a migration, data is initially loaded before foreground application traffic begins to be served, so the impact of taking the table offline when running `IMPORT INTO` may be minimal.
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %} Within the tool, you can select the database tables to migrate to the test cluster.
-- {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
-
-#### Validate queries
-
-After you [load the test data](#load-test-data), validate your queries on CockroachDB. You can do this by [shadowing](#shadowing) or by [manually testing](#test-query-results-and-performance) the queries.
-
-Note that CockroachDB defaults to the [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) transaction isolation level. If you are migrating an application that was built at a `READ COMMITTED` isolation level on the source database, you must [enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
-
-##### Shadowing
-
-You can "shadow" your production workload by executing your source SQL statements on CockroachDB in parallel. You can then [validate the queries](#test-query-results-and-performance) on CockroachDB for consistency, performance, and potential issues with the migration. Shadowing should **not** be used in production when performing a [live migration](#zero-downtime).
-
-##### Test query results and performance
-
-You can manually validate your queries by testing a subset of "critical queries" on an otherwise idle CockroachDB cluster:
-
-- Check the application logs for error messages and the API response time. If application requests are slower than expected, use the **SQL Activity** page on the [CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/statements-page.md %}) or [DB Console]({% link {{ page.version.version }}/ui-statements-page.md %}) to find the longest-running queries that are part of that application request. If necessary, tune the queries according to our best practices for [SQL performance]({% link {{ page.version.version }}/performance-best-practices-overview.md %}).
-
-- Compare the results of the queries and check that they are identical in both the source database and CockroachDB. To do this, you can use [MOLT Verify]({% link molt/molt-verify.md %}).
-
-Test performance on a CockroachDB cluster that is appropriately [sized](#capacity-planning) for your workload:
-
-1. Run the application at single or very low concurrency and verify that the application's performance is acceptable. The cluster should be provisioned with more than enough resources to handle this workload, because you need to verify that the queries will be fast enough when there are zero resource bottlenecks.
-
-1. Run stress tests with at least the production concurrency and rate, but ideally higher in order to verify that the system can handle unexpected spikes in load. This can also uncover [contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) issues that will appear during spikes in app load, which may require [application design changes](#handling-transaction-contention) to avoid.
-
-### Perform a dry run
-
-To further minimize potential surprises when you conduct the migration, practice [cutover](#cutover-strategy) using your application and similar volumes of data on a "dry-run" environment. Use a test or development environment that is as similar as possible to production.
-
-Performing a dry run is highly recommended. In addition to demonstrating how long the migration may take, a dry run also helps to ensure that team members understand what they need to do during the migration, and that changes to the application are coordinated.
-
-## Conduct the migration
-
-Before proceeding, double-check that you are [prepared to migrate](#prepare-for-migration).
-
-Once you are ready to migrate, optionally [drop the database]({% link {{ page.version.version }}/drop-database.md %}) and delete the test cluster so that you can get a clean start:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP DATABASE {database-name} CASCADE;
-~~~
-
-Alternatively, [truncate]({% link {{ page.version.version }}/truncate.md %}) each table you used for testing to avoid having to recreate your schema:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-TRUNCATE {table-name} CASCADE;
-~~~
-
-Migrate your data to CockroachDB using the method that is appropriate for your [downtime requirements](#approach-to-downtime) and [cutover strategy](#cutover-strategy).
-
-### Lift and Shift
-
-With this method, consistency is achieved by performing the cutover only after all writes have been replicated from the source database to CockroachDB. This requires downtime during which application traffic is stopped.
-
-The following is a high-level overview of the migration steps. For considerations and details about the pros and cons of this approach, see [Migration Strategy: Lift and Shift]({% link {{ page.version.version }}/migration-strategy-lift-and-shift.md %}).
-
-1. Stop application traffic to your source database. **This begins downtime.**
-1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB.
-1. After the data is migrated, use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. Perform a [cutover](#cutover-strategy) by resuming application traffic, now to CockroachDB.
-{% comment %}1. If you want the ability to [roll back](#all-at-once-rollback) the migration, replicate data back to the source database.{% endcomment %}
-
-### Zero Downtime
-
-During a "live migration", downtime is minimized by performing the cutover while writes are still being replicated from the source database to CockroachDB. Inconsistencies are resolved through manual reconciliation.
-
-The following is a high-level overview of the migration steps. The two approaches described below are mutually exclusive, and each has [tradeoffs](#minimal-downtime). {% comment %}For details on this migration strategy, see [Migration Strategy: Zero Downtime]({% link {{ page.version.version }}/migration-strategy-lift-and-shift.md %}).{% endcomment %}
-
-To prioritize consistency and minimize downtime:
-
-1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB. Enable [**continuous replication**]({% link molt/molt-fetch.md %}#load-data-and-replicate-changes) after it performs the initial load of data into CockroachDB (see the sketch after these steps).
-1. As the data is migrating, use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. Once nearly all data from your source database has been moved to CockroachDB (for example, when the replication delay is under 1 second or fewer than 1,000 rows remain), stop application traffic to your source database. **This begins downtime.**
-1. Wait for MOLT Fetch to finish replicating changes to CockroachDB.
-1. Perform a [cutover](#cutover-strategy) by resuming application traffic, now to CockroachDB.
-
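-The following sketch adds continuous replication to the initial data load. The `--mode data-load-and-replication` value is an assumption here; confirm the exact flag in [MOLT Fetch]({% link molt/molt-fetch.md %}#load-data-and-replicate-changes). Connection strings and the bucket path are placeholders:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-molt fetch \
-  --source 'postgresql://{username}:{password}@{host}:{port}/{database}' \
-  --target 'postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full' \
-  --bucket-path 's3://{bucket-name}' \
-  --mode data-load-and-replication
-~~~
-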
-To achieve zero downtime with inconsistency:
-
-1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB. Use the tool to **replicate ongoing changes** after performing the initial load of data into CockroachDB.
-1. As the data is migrating, you can use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB.
-1. After nearly all data from your source database has been moved to CockroachDB (for example, when the replication delay is under 1 second or fewer than 1,000 rows remain), perform an [*immediate cutover*](#cutover-strategy) by pointing application traffic to CockroachDB.
-1. Manually reconcile any inconsistencies caused by writes that were not replicated during the cutover.
-1. Close the connection to the source database when you are ready to finish the migration.
-
-## Complete the migration
-
-After you have successfully [conducted the migration](#conduct-the-migration):
-
-- Notify the teams and other stakeholders impacted by the migration.
-- Retire any test or development environments used to verify the migration.
-- Extend the document you created when [developing your migration plan](#develop-a-migration-plan) with any issues encountered and follow-up work that needs to be done.
-
-## See also
-
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
-- [PostgreSQL Compatibility]({% link {{ page.version.version }}/postgresql-compatibility.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Create a User-defined Schema]({% link {{ page.version.version }}/schema-design-schema.md %})
-- [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices)
-- [Secondary index best practices]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices)
-- [Transaction contention best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention)
-- [Back Up and Restore]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
diff --git a/src/current/v24.3/migration-strategy-lift-and-shift.md b/src/current/v24.3/migration-strategy-lift-and-shift.md
deleted file mode 100644
index 402f7126bec..00000000000
--- a/src/current/v24.3/migration-strategy-lift-and-shift.md
+++ /dev/null
@@ -1,139 +0,0 @@
----
-title: "Migration Strategy: Lift and Shift"
-summary: Learn about the 'Lift and Shift' data migration strategy
-toc: true
-docs_area: migrate
----
-
-There are multiple strategies for [migrating off legacy technology]({% link {{ page.version.version }}/migration-overview.md %}) to CockroachDB.
-
-This page discusses the ["Lift and Shift" strategy]({% link {{ page.version.version }}/migration-overview.md %}#lift-and-shift) for migrating your database, which is a commonly used approach. This approach, also known as "Big Bang", refers to the process in which your data is moved in its entirety from a source system to a target system within a defined period of time. This typically involves some application downtime and can involve some service degradation.
-
-Lift and Shift may not be the right approach if strong application service continuity is required during the migration. It can be a viable method if application downtime is permitted.
-
-{{site.data.alerts.callout_info}}
-The information on this page assumes you have already reviewed the [migration overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-## Pros and Cons
-
-On the spectrum of different data migration strategies, Lift and Shift has the following pros and cons. The terms "lower" and "higher" are not absolute, but relative to other approaches.
-
-Pros:
-
-- Conceptually straightforward.
-- Less complex: If you can afford some downtime, the overall effort will usually be lower, and the chance of errors is lower.
-- Lower time start-to-finish: In general, the more downtime you can afford, the shorter the overall migration project timeframe can be.
-- Lower technical risk: It does not involve running multiple systems alongside each other for an extended period of time.
-- Easy to practice [dry runs]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) of import/export using testing/non-production systems.
-- Good import/export tooling is available (e.g., external tools like: [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %}), [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %}), [Striim]({% link {{ page.version.version }}/striim.md %}); or internal tools like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), [`cockroach userfile upload`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %})).
-- If your application already has regularly scheduled maintenance windows, you can perform the migration during one of them so that your customers do not encounter additional application downtime.
-
-Cons:
-
-- All or nothing: It either works or does not work; once you start, you have to finish or [roll back]({% link {{ page.version.version }}/migration-overview.md %}#all-at-once-rollback).
-- Higher project risk: The project **must** be completed to meet a given [downtime / service degradation window]({% link {{ page.version.version }}/migration-overview.md %}#downtime-window).
-- Application service continuity requirements must be relaxed (that is, you must tolerate some application downtime or increased latency).
-
-## Process design considerations
-
-{{site.data.alerts.callout_info}}
-The high-level considerations in this section only refer to the data-loading portion of your migration. They assume you are following the steps in the overall migration process described in [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-Keep in mind the following considerations when designing a Lift and Shift data migration process.
-
-- [Decide on your data migration tooling.](#managed-migration)
-- [Decide which data formats you will use.](#data-formats)
-- [Design a restartable process.](#restartable)
-- [Design a scalable process.](#scalable)
-
-
-<a id="managed-migration"></a>
-
-### Decide on your data migration tooling
-
-If you plan to do your bulk data migration using a managed migration service, you must have a secure, publicly available CockroachDB cluster. CockroachDB supports the following [third-party migration services]({% link {{ page.version.version }}/third-party-database-tools.md %}#data-migration-tools):
-
-- [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Striim]({% link {{ page.version.version }}/striim.md %})
-
-{{site.data.alerts.callout_info}}
-Depending on the migration service you choose, [long-running transactions]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}#hanging-or-stuck-queries) can occur. In some cases, these queries will cause [transaction retry errors]({% link {{ page.version.version }}/common-errors.md %}#restart-transaction). If you encounter these errors while migrating to CockroachDB using a managed migration service, contact us using our [Support Resources]({% link {{ page.version.version }}/support-resources.md %}).
-{{site.data.alerts.end}}
-
-If you will not be using a managed migration service, see the following sections for more information on how to use SQL statements like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), etc.
-
-
-<a id="data-formats"></a>
-
-### Decide which data formats and storage media you will use
-
-It's important to decide which data formats, storage media, and database features you will use to migrate your data.
-
-Data formats that can be imported by CockroachDB include:
-
-- [SQL]({% link {{ page.version.version }}/schema-design-overview.md %}) for the [schema import]({% link cockroachcloud/migrations-page.md %}).
-- [CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}) for table data.
-- [Avro]({% link {{ page.version.version }}/migrate-from-avro.md %}) for table data.
-
-The storage media you export to and import from can be intermediate data files or data streamed over the network. Options include:
-
-- Local `userfile` storage for small tables (see [`cockroach userfile upload`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %}), [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})).
-- Cloud blob storage (see [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}), [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})).
-- Direct wire transfers (see [managed migration services](#managed-migration)).
-
-Database features for exporting from the source database and importing into the target database can include:
-
-- Tools for exporting from the source database may include: `pg_dump --schema-only` and `COPY ... TO`, `mysqldump`, `expdp`, etc. (see the sketch after this list).
-- For import into CockroachDB, use [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}) or [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}). For bulk data migrations, most users should use `IMPORT INTO` because the tables will be offline anyway, and `IMPORT INTO` can [perform the data import much faster]({% link {{ page.version.version }}/import-performance-best-practices.md %}) than `COPY FROM`.
-
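-For example, a minimal export from PostgreSQL might look like the following; the database, table, and file names are placeholders:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Export the schema only; table data is exported separately as CSV.
-pg_dump --schema-only {database} > schema.sql
-psql {database} -c "\copy {table} TO '{table}.csv' WITH CSV"
-~~~
-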
-Decide which of the options above will meet your requirements while resulting in a process that is [restartable](#restartable) and [scalable](#scalable).
-
-
-<a id="restartable"></a>
-
-### Design a restartable process
-
-To have a higher chance of success, design your data migration process so that it can be stopped and restarted from an intermediate state at any point. This will help minimize errors and avoid wasted effort.
-
-Keep the following requirements in mind as you design a restartable import/export process:
-
-- Bulk migrate data in manageable size batches for your source and target systems.
- - This is a best practice. If something happens to the target cluster during import, the amount of wasted work will be minimized.
-- Implement progress/state keeping with process restart capabilities.
-- Make sure your export process is idempotent: the same input to your export process should produce the same output data.
-- If possible, export and import the majority of your data before taking down the source database. This can ensure that you only have to deal with the incremental changes from your last import to complete the migration process.
-
-
-<a id="scalable"></a>
-
-### Design a scalable and performant process
-
-Once your process is [restartable and resilient to failures](#design-a-restartable-process), it's important to also make sure it will scale to the needs of your data set. The larger the data set you are migrating to CockroachDB, the more important the performance and scalability of your process will be.
-
-Keep the following requirements in mind:
-
-- Schema and data should be imported separately.
-- Your process should handle multiple files across multiple export/import streams concurrently.
-  - For best performance, these files should contain presorted, disjoint data sets (see the sketch after this list).
-- Benchmark the performance of your migration process to help ensure it will complete within the allotted downtime window.
-
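-For example, the following sketch imports presorted, disjoint CSV shards in a single statement; the table, bucket, and file names are placeholders:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-IMPORT INTO {table} CSV DATA (
-    's3://{bucket-name}/{table}.1.csv',
-    's3://{bucket-name}/{table}.2.csv'
-);
-~~~
-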
-For more information about import performance, see [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Migrate with AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
-- [Migrate and Replicate Data with Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Migrate and Replicate Data with Striim]({% link {{ page.version.version }}/striim.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Back Up and Restore]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
-- [Export data with Changefeeds]({% link {{ page.version.version }}/export-data-with-changefeeds.md %})
-- [`COPY`]({% link {{ page.version.version }}/copy.md %})
-- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
-- [Client connection parameters]({% link {{ page.version.version }}/connection-parameters.md %})
-
-
-{% comment %} eof {% endcomment %}
diff --git a/src/current/v24.3/qlik.md b/src/current/v24.3/qlik.md
index bdb646d7b20..da53e969bfd 100644
--- a/src/current/v24.3/qlik.md
+++ b/src/current/v24.3/qlik.md
@@ -68,7 +68,7 @@ Complete the following items before using Qlik Replicate:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables, or add transformation rules. If you make substantial schema changes, the Qlik Replicate migration may fail.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -96,7 +96,7 @@ In the Qlik Replicate interface, CockroachDB is configured as a PostgreSQL **sou
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v24.3/read-committed.md b/src/current/v24.3/read-committed.md
index 83d7f9ad7e6..bcf0a5d9f92 100644
--- a/src/current/v24.3/read-committed.md
+++ b/src/current/v24.3/read-committed.md
@@ -13,7 +13,7 @@ docs_area: deploy
- Your application needs to maintain a high workload concurrency with minimal [transaction retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and it can tolerate potential [concurrency anomalies](#concurrency-anomalies). Predictable query performance at high concurrency is more valuable than guaranteed transaction [serializability]({% link {{ page.version.version }}/developer-basics.md %}#serializability-and-transaction-contention).
-- You are [migrating an application to CockroachDB]({% link {{ page.version.version }}/migration-overview.md %}) that was built at a `READ COMMITTED` isolation level on the source database, and it is not feasible to modify your application to use `SERIALIZABLE` isolation.
+- You are [migrating an application to CockroachDB]({% link molt/migration-overview.md %}) that was built at a `READ COMMITTED` isolation level on the source database, and it is not feasible to modify your application to use `SERIALIZABLE` isolation.
Whereas `SERIALIZABLE` isolation guarantees data correctness by placing transactions into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}), `READ COMMITTED` isolation permits some [concurrency anomalies](#concurrency-anomalies) in exchange for minimizing transaction aborts, [retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and blocking. Compared to `SERIALIZABLE` transactions, `READ COMMITTED` transactions do **not** return [serialization errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) that require client-side handling. See [`READ COMMITTED` transaction behavior](#read-committed-transaction-behavior).
@@ -919,4 +919,4 @@ SELECT * FROM schedules
- [Serializable Transactions]({% link {{ page.version.version }}/demo-serializable.md %})
- [What Write Skew Looks Like](https://www.cockroachlabs.com/blog/what-write-skew-looks-like/)
- [Read Committed RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20230122_read_committed_isolation.md)
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
diff --git a/src/current/v24.3/striim.md b/src/current/v24.3/striim.md
index 436d7440cbd..67526c8ff9d 100644
--- a/src/current/v24.3/striim.md
+++ b/src/current/v24.3/striim.md
@@ -37,7 +37,7 @@ Complete the following items before using Striim:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -110,7 +110,7 @@ To perform continuous replication of ongoing changes, create a Striim applicatio
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v25.1/aws-dms.md b/src/current/v25.1/aws-dms.md
index bb3b53b22a2..99a42ac0532 100644
--- a/src/current/v25.1/aws-dms.md
+++ b/src/current/v25.1/aws-dms.md
@@ -41,7 +41,7 @@ Complete the following items before starting the DMS migration:
- Manually create all schema objects in the target CockroachDB cluster. If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, you can [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema.
- - All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ - All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
- Drop all [constraints]({% link {{ page.version.version }}/constraints.md %}) per the [AWS DMS best practices](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.Performance). You can recreate them after the [full load completes](#step-3-verify-the-migration). AWS DMS can create a basic schema, but does not create [indexes]({% link {{ page.version.version }}/indexes.md %}) or constraints such as [foreign keys]({% link {{ page.version.version }}/foreign-key.md %}) and [defaults]({% link {{ page.version.version }}/default-value.md %}).
@@ -406,7 +406,7 @@ The `BatchApplyEnabled` setting can improve replication performance and is recom
## See Also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %})
- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
diff --git a/src/current/v25.1/build-a-java-app-with-cockroachdb-hibernate.md b/src/current/v25.1/build-a-java-app-with-cockroachdb-hibernate.md
index c19bdcaed7b..c4b53d27629 100644
--- a/src/current/v25.1/build-a-java-app-with-cockroachdb-hibernate.md
+++ b/src/current/v25.1/build-a-java-app-with-cockroachdb-hibernate.md
@@ -130,9 +130,9 @@ APP: getAccountBalance(2) --> 350.00
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v25.1/build-a-java-app-with-cockroachdb.md b/src/current/v25.1/build-a-java-app-with-cockroachdb.md
index eae610ab364..d798e9dbaee 100644
--- a/src/current/v25.1/build-a-java-app-with-cockroachdb.md
+++ b/src/current/v25.1/build-a-java-app-with-cockroachdb.md
@@ -269,9 +269,9 @@ props.setProperty("options", "-c sql_safe_updates=true -c statement_timeout=30")
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v25.1/build-a-python-app-with-cockroachdb-sqlalchemy.md b/src/current/v25.1/build-a-python-app-with-cockroachdb-sqlalchemy.md
index 73e5dfa528b..e56b9c136a8 100644
--- a/src/current/v25.1/build-a-python-app-with-cockroachdb-sqlalchemy.md
+++ b/src/current/v25.1/build-a-python-app-with-cockroachdb-sqlalchemy.md
@@ -204,9 +204,9 @@ Instead, we recommend breaking your transaction into smaller units of work (or "
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code that uses an ORM and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}) such as are generated by calls to [`session.bulk_save_objects()`](https://docs.sqlalchemy.org/orm/session_api.html?highlight=bulk_save_object#sqlalchemy.orm.session.Session.bulk_save_objects).
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Prefer the query builder
diff --git a/src/current/v25.1/copy.md b/src/current/v25.1/copy.md
index aa825fdc003..144da73b4ac 100644
--- a/src/current/v25.1/copy.md
+++ b/src/current/v25.1/copy.md
@@ -358,10 +358,10 @@ You can copy CSV data into CockroachDB using the following methods:
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [`EXPORT`]({% link {{ page.version.version }}/export.md %})
- [Install a Driver or ORM Framework]({% link {{ page.version.version }}/install-client-drivers.md %})
{% comment %}
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
{% endcomment %}
\ No newline at end of file
diff --git a/src/current/v25.1/debezium.md b/src/current/v25.1/debezium.md
index ea2eb513b50..a9d14707ea4 100644
--- a/src/current/v25.1/debezium.md
+++ b/src/current/v25.1/debezium.md
@@ -116,7 +116,7 @@ Once all of the [prerequisite steps](#before-you-begin) are completed, you can u
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v25.1/frequently-asked-questions.md b/src/current/v25.1/frequently-asked-questions.md
index 07e00b6959f..c94ba15582b 100644
--- a/src/current/v25.1/frequently-asked-questions.md
+++ b/src/current/v25.1/frequently-asked-questions.md
@@ -147,7 +147,7 @@ Note, however, that the protocol used doesn't significantly impact how easy it i
### Can a PostgreSQL or MySQL application be migrated to CockroachDB?
-Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}) or [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
+Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}) or [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check that page for details.
We also fully support [importing your data via CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}).
diff --git a/src/current/v25.1/goldengate.md b/src/current/v25.1/goldengate.md
index 7fd572094e4..30ee23c17df 100644
--- a/src/current/v25.1/goldengate.md
+++ b/src/current/v25.1/goldengate.md
@@ -514,7 +514,7 @@ Run the steps in this section on a machine and in a directory where Oracle Golde
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v25.1/import-into.md b/src/current/v25.1/import-into.md
index ad4a2c9ca5c..780b9195104 100644
--- a/src/current/v25.1/import-into.md
+++ b/src/current/v25.1/import-into.md
@@ -158,7 +158,7 @@ You can control the `IMPORT` process's behavior using any of the following key-v
For examples showing how to use these options, see the [Examples section]({% link {{ page.version.version }}/import-into.md %}#examples).
-For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link molt/migration-overview.md %}).
## View and control import jobs
@@ -285,6 +285,6 @@ For more information about importing data from Avro, including examples, see [Mi
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
diff --git a/src/current/v25.1/import-performance-best-practices.md b/src/current/v25.1/import-performance-best-practices.md
index b85d28f1fe6..ca69a57fc4a 100644
--- a/src/current/v25.1/import-performance-best-practices.md
+++ b/src/current/v25.1/import-performance-best-practices.md
@@ -160,9 +160,9 @@ If you cannot both split and sort your dataset, the performance of either split
## See also
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Migrate from Oracle]({% link {{ page.version.version }}/migrate-from-oracle.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
diff --git a/src/current/v25.1/index.md b/src/current/v25.1/index.md
index eff4d508353..78ac87b9f4c 100644
--- a/src/current/v25.1/index.md
+++ b/src/current/v25.1/index.md
@@ -99,11 +99,11 @@ docs_area:
diff --git a/src/current/v25.1/insert-data.md b/src/current/v25.1/insert-data.md
index a7e76489a0c..dc3c26ee1f5 100644
--- a/src/current/v25.1/insert-data.md
+++ b/src/current/v25.1/insert-data.md
@@ -105,7 +105,7 @@ conn.commit()
Reference information related to this task:
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`INSERT`]({% link {{ page.version.version }}/insert.md %})
- [`UPSERT`]({% link {{ page.version.version }}/upsert.md %})
- [Transaction Contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention)
diff --git a/src/current/v25.1/migrate-failback.md b/src/current/v25.1/migrate-failback.md
index acc45022370..98d7844bcf6 100644
--- a/src/current/v25.1/migrate-failback.md
+++ b/src/current/v25.1/migrate-failback.md
@@ -111,10 +111,10 @@ The following example watches the `employees` table for change events.
## See also
-- [MOLT Overview]({% link molt/molt-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [MOLT Fetch]({% link molt/molt-fetch.md %})
- [MOLT Verify]({% link molt/molt-verify.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
- [Migrate to CockroachDB]({% link {{ page.version.version }}/migrate-to-cockroachdb.md %})
- [Migrate to CockroachDB in Phases]({% link {{ page.version.version }}/migrate-in-phases.md %})
\ No newline at end of file
diff --git a/src/current/v25.1/migrate-from-avro.md b/src/current/v25.1/migrate-from-avro.md
index 39b7bdbc9aa..de42d232917 100644
--- a/src/current/v25.1/migrate-from-avro.md
+++ b/src/current/v25.1/migrate-from-avro.md
@@ -216,8 +216,8 @@ You will need to run [`ALTER TABLE ... ADD CONSTRAINT`]({% link {{ page.version.
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV][csv]
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.1/migrate-from-csv.md b/src/current/v25.1/migrate-from-csv.md
index e0e1d92da9c..d30eeb2e98c 100644
--- a/src/current/v25.1/migrate-from-csv.md
+++ b/src/current/v25.1/migrate-from-csv.md
@@ -176,8 +176,8 @@ IMPORT INTO employees (emp_no, birth_date, first_name, last_name, gender, hire_d
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.1/migrate-from-geojson.md b/src/current/v25.1/migrate-from-geojson.md
index 2c3326af39e..e0d804f0be7 100644
--- a/src/current/v25.1/migrate-from-geojson.md
+++ b/src/current/v25.1/migrate-from-geojson.md
@@ -122,9 +122,9 @@ IMPORT INTO underground_storage_tank CSV DATA ('http://localhost:3000/tanks.csv'
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.1/migrate-from-geopackage.md b/src/current/v25.1/migrate-from-geopackage.md
index 53acdaff4a7..c3fabeb57ef 100644
--- a/src/current/v25.1/migrate-from-geopackage.md
+++ b/src/current/v25.1/migrate-from-geopackage.md
@@ -114,9 +114,9 @@ IMPORT INTO busstops CSV DATA ('http://localhost:3000/busstops.csv') WITH skip =
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.1/migrate-from-mysql.md b/src/current/v25.1/migrate-from-mysql.md
deleted file mode 100644
index 1fd506ca8c7..00000000000
--- a/src/current/v25.1/migrate-from-mysql.md
+++ /dev/null
@@ -1,416 +0,0 @@
----
-title: Migrate from MySQL
-summary: Learn how to migrate data from MySQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-{{site.data.alerts.callout_info}}
-For current migration instructions using the [MOLT tools]({% link molt/molt-overview.md %}), refer to [Migrate to CockroachDB]({% link {{ page.version.version }}/migrate-to-cockroachdb.md %}).
-{{site.data.alerts.end}}
-
-This page describes basic considerations and provides an [example](#example-migrate-world-to-cockroachdb) of migrating data from MySQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [MySQL migration example](#example-migrate-world-to-cockroachdb) on this page demonstrates how to update the MySQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-You will likely need to make application changes due to differences in syntax between MySQL and CockroachDB. Along with the [general considerations in the migration overview]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), also consider the following MySQL-specific information as you develop your migration plan.
-
-When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), MySQL syntax that cannot automatically be converted will be displayed in the [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report). These may include the following.
-
-#### String case sensitivity
-
-Strings are case-insensitive in MySQL and case-sensitive in CockroachDB. You may need to edit your MySQL data or queries to get the results you expect: for example, string comparisons that rely on MySQL's case-insensitive matching will need to be rewritten to work with CockroachDB.
-
-For more information about the case sensitivity of strings in MySQL, see [Case Sensitivity in String Searches](https://dev.mysql.com/doc/refman/8.0/en/case-sensitivity.html) from the MySQL documentation. For more information about CockroachDB strings, see [`STRING`]({% link {{ page.version.version }}/string.md %}).
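-
-For example, assuming the default collations on each database, the same comparison returns different results (a minimal sketch):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT 'MySQL' = 'mysql';
-~~~
-
-MySQL's default case-insensitive collation returns `1` (true), while CockroachDB returns `false`.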
-
-#### Identifier case sensitivity
-
-Identifiers are case-sensitive in MySQL and [case-insensitive in CockroachDB]({% link {{ page.version.version }}/keywords-and-identifiers.md %}#identifiers). When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either keep case sensitivity by enclosing identifiers in double quotes, or make identifiers case-insensitive by converting them to lowercase.
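-
-For example, if you choose to keep case sensitivity, the identifier must be double-quoted in every reference (a sketch; the table name is illustrative):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE "CityInfo" (id INT8 PRIMARY KEY);
-SELECT * FROM "CityInfo"; -- succeeds: the quoted identifier preserves case
-SELECT * FROM CityInfo;   -- fails: the unquoted identifier is converted to lowercase
-~~~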
-
-#### `AUTO_INCREMENT` attribute
-
-The MySQL [`AUTO_INCREMENT`](https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html) attribute, which creates sequential column values, is not supported in CockroachDB. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), columns with `AUTO_INCREMENT` can be converted to use [sequences]({% link {{ page.version.version }}/create-sequence.md %}), `UUID` values with [`gen_random_uuid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions), or unique `INT8` values using [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
-
-{{site.data.alerts.callout_info}}
-Changing a column type during schema conversion will cause [MOLT Verify]({% link molt/molt-verify.md %}) to identify a type mismatch during [data validation](#step-3-validate-the-migrated-data). This is expected behavior.
-{{site.data.alerts.end}}
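-
-For example, a MySQL column defined as `id INT AUTO_INCREMENT PRIMARY KEY` might be converted as follows when you select the `unique_rowid()` option (a sketch; the table name is illustrative):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE example (
-    id INT8 NOT NULL DEFAULT unique_rowid(),
-    PRIMARY KEY (id)
-);
-~~~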
-
-#### `ENUM` type
-
-MySQL `ENUM` types are defined in table columns. On CockroachDB, [`ENUM`]({% link {{ page.version.version }}/enum.md %}) is a standalone type. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either deduplicate the `ENUM` definitions or create a separate type for each column.
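-
-For example, a MySQL column defined inline as `status ENUM('active','inactive')` corresponds to a standalone type in CockroachDB (a sketch; the type and table names are illustrative):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TYPE status_type AS ENUM ('active', 'inactive');
-
-CREATE TABLE accounts (
-    id INT8 PRIMARY KEY,
-    status status_type NOT NULL DEFAULT 'active'
-);
-~~~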
-
-#### `TINYINT` type
-
-`TINYINT` data types are not supported in CockroachDB. The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) automatically converts `TINYINT` columns to [`INT2`]({% link {{ page.version.version }}/int.md %}) (`SMALLINT`).
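-
-For example, a MySQL column defined as `flag TINYINT` appears in the converted schema as an `INT2` column (a sketch; the table name is illustrative):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE flags (
-    id INT8 PRIMARY KEY,
-    flag INT2 -- converted from MySQL TINYINT
-);
-~~~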
-
-#### Geospatial types
-
-MySQL geometry types are not converted to CockroachDB [geospatial types]({% link {{ page.version.version }}/spatial-data-overview.md %}#spatial-objects) by the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql). They should be manually converted to the corresponding types in CockroachDB.
-
-#### `FIELD` function
-
-The MySQL `FIELD` function is not supported in CockroachDB. Instead, you can use the [`array_position`]({% link {{ page.version.version }}/functions-and-operators.md %}#array-functions) function, which returns the index of the first occurrence of an element in an array.
-
-Example usage:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT array_position(ARRAY[4,1,3,2],1);
-~~~
-
-~~~
- array_position
-------------------
- 2
-(1 row)
-~~~
-
-While MySQL returns 0 when the element is not found, CockroachDB returns `NULL`. As a result, if you use `array_position` in an `ORDER BY` clause, rows where the element is not found are sorted on `NULL` rather than a numeric value. As a workaround, you can use the [`COALESCE`]({% link {{ page.version.version }}/functions-and-operators.md %}#conditional-and-function-like-operators) operator to substitute a fallback value:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT * FROM table_a ORDER BY COALESCE(array_position(ARRAY[4,1,3,2],5),999);
-~~~
-
-## Load MySQL data
-
-You can use [MOLT Fetch]({% link molt/molt-fetch.md %}) to migrate MySQL data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-The [following example](#example-migrate-world-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `world` to CockroachDB
-
-The following steps demonstrate converting a schema, performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that MySQL data can be properly migrated to CockroachDB and your application queries tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses the [MySQL `world` data set](https://dev.mysql.com/doc/index-other.html) and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the [`world` data set](https://dev.mysql.com/doc/index-other.html).
-
-1. Create the `world` database on your MySQL instance, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqlsh -uroot --sql --file {path}/world-db/world.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) and [create the CockroachDB {{ site.data.products.standard }} cluster]({% link cockroachcloud/create-your-cluster.md %}).
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the MySQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) to convert the `world` schema for compatibility with CockroachDB. The schema has three tables: `city`, `country`, and `countrylanguage`.
-
-1. Dump the MySQL `world` schema with the following [`mysqldump`](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-sql-format.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot --no-data world > world_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) in the {{ site.data.products.cloud }} Console and [add a new MySQL schema]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema).
-
- For **AUTO_INCREMENT Conversion Option**, select the [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions) option. This will convert the `ID` column in the `city` table, which has MySQL type `int` and `AUTO_INCREMENT`, to a CockroachDB [`INT8`]({% link {{ page.version.version }}/int.md %}) type with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). For context on this option, see [`AUTO_INCREMENT` attribute](#auto_increment-attribute).
-
- The `UUID` and `unique_rowid()` options are each preferred for [different use cases]({% link {{ page.version.version }}/sql-faqs.md %}#what-are-the-differences-between-uuid-sequences-and-unique_rowid). For this example, selecting the `unique_rowid()` option makes [loading the data](#step-2-load-the-mysql-data) more straightforward in a later step, since both the source and target columns will have integer types.
-
-1. [Upload `world_schema.sql`]({% link cockroachcloud/migrations-page.md %}?filters=mysql#upload-file) to the Schema Conversion Tool.
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}?filters=mysql#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report) shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your MySQL database credentials]({% link cockroachcloud/migrations-page.md %}?filters=mysql#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the MySQL database.
- {{site.data.alerts.end}}
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Before you migrate the converted schema, click the **Statements** tab to view the [Statements list]({% link cockroachcloud/migrations-page.md %}?filters=mysql#statements-list). Scroll down to the `CREATE TABLE countrylanguage` statement and edit the statement to add a [collation]({% link {{ page.version.version }}/collate.md %}) (`COLLATE en_US`) on the `language` column:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE TABLE countrylanguage (
- countrycode VARCHAR(3) DEFAULT '' NOT NULL,
- language VARCHAR(30) COLLATE en_US DEFAULT '' NOT NULL,
- isofficial countrylanguage_isofficial_enum
- DEFAULT 'F'
- NOT NULL,
- percentage DECIMAL(4,1) DEFAULT '0.0' NOT NULL,
- PRIMARY KEY (countrycode, language),
- INDEX countrycode (countrycode),
- CONSTRAINT countrylanguage_ibfk_1
- FOREIGN KEY (countrycode) REFERENCES country (code)
- )
- ~~~
-
- Click **Save**.
-
-    This is a workaround to prevent [data validation](#step-3-validate-the-migrated-data) from failing due to collation mismatches. For more details, see the [MOLT Verify]({% link molt/molt-verify.md %}#known-limitations) documentation.
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#migrate-the-schema) to create a new {{ site.data.products.standard }} cluster with the converted schema. Name the database `world`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
-1. Open a SQL shell to the CockroachDB `world` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. For large imports, Cockroach Labs recommends [removing indexes prior to loading data]({% link {{ page.version.version }}/import-performance-best-practices.md %}#import-into-a-schema-with-secondary-indexes) and recreating them afterward. This provides increased visibility into the import progress and the ability to retry each step independently.
-
- Show the indexes on the `world` database:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW INDEXES FROM DATABASE world;
- ~~~
-
- The `countrycode` [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) indexes on the `city` and `countrylanguage` tables can be removed for now:
-
- ~~~
- table_name | index_name | index_schema | non_unique | seq_in_index | column_name | definition | direction | storing | implicit | visible
- ---------------------------------+-------------------------------------------------+--------------+------------+--------------+-----------------+-----------------+-----------+---------+----------+----------
- ...
- city | countrycode | public | t | 2 | id | id | ASC | f | t | t
- city | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- ...
- countrylanguage | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- countrylanguage | countrycode | public | t | 2 | language | language | ASC | f | t | t
- ...
- ~~~
-
-1. Drop the `countrycode` indexes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX city@countrycode;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX countrylanguage@countrycode;
- ~~~
-
- You will recreate the indexes after [loading the data](#step-2-load-the-mysql-data).
-
-### Step 2. Load the MySQL data
-
-Load the `world` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-When MySQL dumps data, the tables are not ordered by [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints, and foreign keys are not placed in the correct dependency order. It is best to disable foreign key checks when loading data into CockroachDB, and revalidate foreign keys on each table after the data is loaded.
-
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump the MySQL `world` data with the following [`mysqldump` command](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-delimited-text.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot -T /{path}/world-data --fields-terminated-by ',' --fields-enclosed-by '"' --fields-escaped-by '\' --no-create-info world
- ~~~
-
- This dumps each table in your database to the path `/{path}/world-data` as a `.txt` file in CSV format.
- - `--fields-terminated-by` specifies that values are separated by commas instead of tabs.
- - `--fields-enclosed-by` and `--fields-escaped-by` specify the characters that enclose and escape column values, respectively.
- - `--no-create-info` dumps only the [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements).
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
-    Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `world` cluster, using the same command as before:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each MySQL dump file into the corresponding table in the `world` database.
-
- The following commands point to a public S3 bucket where the `world` data dump files are hosted for this example. The `nullif='\N'` clause specifies that `\N` values, which are produced by the `mysqldump` command, should be read as [`NULL`]({% link {{ page.version.version }}/null-handling.md %}).
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
- IMPORT INTO countrylanguage
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/countrylanguage.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782070812344321 | succeeded | 1 | 984 | 984 | 171555
- ~~~
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
- IMPORT INTO country
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/country.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 887782114360819713 | succeeded | 1 | 239 | 0 | 33173
- ~~~
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
- IMPORT INTO city
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/city.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782154421567489 | succeeded | 1 | 4079 | 4079 | 288140
- ~~~
-
- {{site.data.alerts.callout_info}}
- After [converting the schema](#step-1-convert-the-mysql-schema) to work with CockroachDB, the `id` column in `city` is an [`INT8`]({% link {{ page.version.version }}/int.md %}) with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). However, `unique_rowid()` values are only generated when new rows are [inserted]({% link {{ page.version.version }}/insert.md %}) without an `id` value. The MySQL data dump still includes the sequential `id` values generated by the MySQL [`AUTO_INCREMENT` attribute](#auto_increment-attribute), and these are imported with the `IMPORT INTO` command.
-
- In an actual migration, you can either update the primary key into a [multi-column key]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or add a new primary key column that [generates unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
- {{site.data.alerts.end}}
-
-1. Recreate the indexes that you deleted before importing the data:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON city (countrycode, id);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON countrylanguage (countrycode, language);
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `city` and `countrylanguage`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM city;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+-----------------+-----------------+--------------------------------------------------------------+------------
- city | city_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- city | city_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM countrylanguage;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- ------------------+------------------------+-----------------+--------------------------------------------------------------+------------
- countrylanguage | countrylanguage_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- countrylanguage | countrylanguage_pkey | PRIMARY KEY | PRIMARY KEY (countrycode ASC, language ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE city VALIDATE CONSTRAINT city_ibfk_1;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE countrylanguage VALIDATE CONSTRAINT countrylanguage_ibfk_1;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on MySQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the [JDBC connection string for MySQL](https://dev.mysql.com/doc/connector-j/8.1/en/connector-j-reference-jdbc-url-format.html) with `--source` and the SQL connection string for CockroachDB with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- ./molt verify --source 'jdbc:mysql://{user}:{password}@tcp({host}:{port})/world' --target 'postgresql://{user}:{password}@{host}:{port}/world?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following warnings indicate that the MySQL and CockroachDB columns have different types. This is an expected result, since some columns were [changed to `ENUM` types](#enum-type) when you [converted the schema](#step-1-convert-the-mysql-schema):
-
- ~~~
- WRN mismatching table definition mismatch_info="column type mismatch on continent: text vs country_continent_enum" table_name=country table_schema=public
- WRN mismatching table definition mismatch_info="column type mismatch on isofficial: text vs countrylanguage_isofficial_enum" table_name=countrylanguage table_schema=public
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.country (shard 1/1): truth rows seen: 239, success: 239, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.countrylanguage (shard 1/1): truth rows seen: 984, success: 984, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.city (shard 1/1): truth rows seen: 4079, success: 4079, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v25.1/migrate-from-openstreetmap.md b/src/current/v25.1/migrate-from-openstreetmap.md
index 843c335beba..a4bcdc56eef 100644
--- a/src/current/v25.1/migrate-from-openstreetmap.md
+++ b/src/current/v25.1/migrate-from-openstreetmap.md
@@ -128,9 +128,9 @@ Osm2pgsql took 2879s overall
- [Migrate from GeoPackages]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.1/migrate-from-oracle.md b/src/current/v25.1/migrate-from-oracle.md
index 2979cd47a36..a62e9ae2c11 100644
--- a/src/current/v25.1/migrate-from-oracle.md
+++ b/src/current/v25.1/migrate-from-oracle.md
@@ -390,8 +390,8 @@ You will have to refactor Oracle SQL and functions that do not comply with [ANSI
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.1/migrate-from-postgres.md b/src/current/v25.1/migrate-from-postgres.md
deleted file mode 100644
index e8dcf51f649..00000000000
--- a/src/current/v25.1/migrate-from-postgres.md
+++ /dev/null
@@ -1,301 +0,0 @@
----
-title: Migrate from PostgreSQL
-summary: Learn how to migrate data from PostgreSQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-{{site.data.alerts.callout_info}}
-For current migration instructions using the [MOLT tools]({% link molt/molt-overview.md %}), refer to [Migrate to CockroachDB]({% link {{ page.version.version }}/migrate-to-cockroachdb.md %}).
-{{site.data.alerts.end}}
-
-This page describes basic considerations and provides an [example](#example-migrate-frenchtowns-to-cockroachdb) of migrating data from PostgreSQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [PostgreSQL migration example](#example-migrate-frenchtowns-to-cockroachdb) on this page demonstrates how to update the PostgreSQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax.
-
-For syntax differences, refer to [Features that differ from PostgreSQL]({% link {{ page.version.version }}/postgresql-compatibility.md %}#features-that-differ-from-postgresql).
-
-### Unsupported features
-
-The following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-## Load PostgreSQL data
-
-You can use [MOLT Fetch]({% link molt/molt-fetch.md %}) to migrate PostgreSQL data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
-
-The [following example](#example-migrate-frenchtowns-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `frenchtowns` to CockroachDB
-
-The following steps demonstrate converting a schema, performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that PostgreSQL data can be properly migrated to CockroachDB and your application queries tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses a modified version of the PostgreSQL `french-towns-communes-francais` data set and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the `frenchtowns` data set:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- curl -O https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/frenchtowns.sql
- ~~~
-
-1. Create a `frenchtowns` database on your PostgreSQL instance:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- createdb frenchtowns
- ~~~
-
-1. Load the `frenchtowns` data into PostgreSQL, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -a -f frenchtowns.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) and [create the CockroachDB {{ site.data.products.standard }} cluster]({% link cockroachcloud/create-your-cluster.md %}).
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the PostgreSQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert the `frenchtowns` schema for compatibility with CockroachDB. The schema has three tables: `regions`, `departments`, and `towns`.
-
-1. Dump the PostgreSQL `frenchtowns` schema with the following [`pg_dump`](https://www.postgresql.org/docs/15/app-pgdump.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- pg_dump --schema-only frenchtowns > frenchtowns_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) in the {{ site.data.products.cloud }} Console and [add a new PostgreSQL schema]({% link cockroachcloud/migrations-page.md %}#convert-a-schema).
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}#summary-report) shows that there are errors under **Required Fixes**. You must resolve these in order to migrate the schema to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your PostgreSQL database credentials]({% link cockroachcloud/migrations-page.md %}#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the PostgreSQL database.
- {{site.data.alerts.end}}
-
-1. `Missing user: postgres` errors indicate that the SQL user `postgres` does not yet exist on CockroachDB. Click **Add User** to create the user.
-
-1. `Miscellaneous Errors` includes a `SELECT pg_catalog.set_config('search_path', '', false)` statement that can safely be removed. Click **Delete** to remove the statement from the schema.
-
-1. Review the `CREATE SEQUENCE` statements listed under **Suggestions**. Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
-
- For this example, **Acknowledge** the suggestion without making further changes. In practice, after [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration) to CockroachDB, you would modify your CockroachDB schema to use unique and non-sequential primary keys.
-
-1. Click **Retry Migration**. The **Summary Report** now shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema) to create a new CockroachDB {{ site.data.products.standard }} cluster with the converted schema. Name the database `frenchtowns`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
-### Step 2. Load the PostgreSQL data
-
-Load the `frenchtowns` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump each table in the PostgreSQL `frenchtowns` database to a CSV-formatted file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY regions TO stdout DELIMITER ',' CSV;" > regions.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY departments TO stdout DELIMITER ',' CSV;" > departments.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY towns TO stdout DELIMITER ',' CSV;" > towns.csv
- ~~~
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
-    Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `frenchtowns` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/frenchtowns?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each PostgreSQL dump file into the corresponding table in the `frenchtowns` database.
-
- The following commands point to a public S3 bucket where the `frenchtowns` data dump files are hosted for this example.
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO regions
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/regions.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753132185026561 | succeeded | 1 | 26 | 52 | 2338
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO departments
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/departments.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753147892465665 | succeeded | 1 | 100 | 300 | 11166
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO towns
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/towns.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+-------+---------------+----------
- 893753162225680385 | succeeded | 1 | 36684 | 36684 | 2485007
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `departments` and `towns`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM departments;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- --------------+-------------------------+-----------------+---------------------------------------------------------+------------
- departments | departments_capital_key | UNIQUE | UNIQUE (capital ASC) | t
- departments | departments_code_key | UNIQUE | UNIQUE (code ASC) | t
- departments | departments_name_key | UNIQUE | UNIQUE (name ASC) | t
- departments | departments_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- departments | departments_region_fkey | FOREIGN KEY | FOREIGN KEY (region) REFERENCES regions(code) NOT VALID | f
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM towns;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+---------------------------+-----------------+-----------------------------------------------------------------+------------
- towns | towns_code_department_key | UNIQUE | UNIQUE (code ASC, department ASC) | t
- towns | towns_department_fkey | FOREIGN KEY | FOREIGN KEY (department) REFERENCES departments(code) NOT VALID | f
- towns | towns_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE departments VALIDATE CONSTRAINT departments_region_fkey;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE towns VALIDATE CONSTRAINT towns_department_fkey;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on PostgreSQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the PostgreSQL connection string with `--source` and the CockroachDB connection string with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
-    ./molt verify --source 'postgresql://{username}:{password}@{host}:{port}/frenchtowns' --target 'postgresql://{username}:{password}@{host}:{port}/frenchtowns?sslmode=verify-full'
- ~~~
-
-    The initial output indicates that verification is in progress:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.regions (shard 1/1): truth rows seen: 26, success: 26, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.departments (shard 1/1): truth rows seen: 100, success: 100, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 10000, success: 10000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 20000, success: 20000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 30000, success: 30000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.towns (shard 1/1): truth rows seen: 36684, success: 36684, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v25.1/migrate-from-shapefiles.md b/src/current/v25.1/migrate-from-shapefiles.md
index ea4e7a368e6..44f366d5a69 100644
--- a/src/current/v25.1/migrate-from-shapefiles.md
+++ b/src/current/v25.1/migrate-from-shapefiles.md
@@ -140,9 +140,9 @@ IMPORT INTO tornadoes CSV DATA ('http://localhost:3000/tornadoes.csv') WITH skip
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.1/migrate-in-phases.md b/src/current/v25.1/migrate-in-phases.md
index 0f7ab399fc7..03b3d5e9b9c 100644
--- a/src/current/v25.1/migrate-in-phases.md
+++ b/src/current/v25.1/migrate-in-phases.md
@@ -5,7 +5,7 @@ toc: true
docs_area: migrate
---
-A phased migration to CockroachDB uses the [MOLT tools]({% link molt/molt-overview.md %}) to [convert your source schema](#step-2-prepare-the-source-schema), incrementally [load source data](#step-3-load-data-into-cockroachdb) and [verify the results](#step-4-verify-the-data-load), and finally [replicate ongoing changes](#step-6-replicate-changes-to-cockroachdb) before performing cutover.
+A phased migration to CockroachDB uses the [MOLT tools]({% link molt/migration-overview.md %}) to [convert your source schema](#step-2-prepare-the-source-schema), incrementally [load source data](#step-3-load-data-into-cockroachdb) and [verify the results](#step-4-verify-the-data-load), and finally [replicate ongoing changes](#step-6-replicate-changes-to-cockroachdb) before performing cutover.
{% assign tab_names_html = "Load and replicate;Phased migration;Failback" %}
{% assign html_page_filenames = "migrate-to-cockroachdb.html;migrate-in-phases.html;migrate-failback.html" %}
@@ -14,7 +14,7 @@ A phased migration to CockroachDB uses the [MOLT tools]({% link molt/molt-overvi
## Before you begin
-- Review the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+- Review the [Migration Overview]({% link molt/migration-overview.md %}).
- Install the [MOLT (Migrate Off Legacy Technology)]({% link releases/molt.md %}#installation) tools.
- Review the MOLT Fetch [setup]({% link molt/molt-fetch.md %}#setup) and [best practices]({% link molt/molt-fetch.md %}#best-practices).
{% include molt/fetch-secure-cloud-storage.md %}
@@ -149,10 +149,10 @@ Perform a cutover by resuming application traffic, now to CockroachDB.
## See also
-- [MOLT Overview]({% link molt/molt-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [MOLT Fetch]({% link molt/molt-fetch.md %})
- [MOLT Verify]({% link molt/molt-verify.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Migrate to CockroachDB]({% link {{ page.version.version }}/migrate-to-cockroachdb.md %})
- [Migration Failback]({% link {{ page.version.version }}/migrate-failback.md %})
\ No newline at end of file
diff --git a/src/current/v25.1/migrate-to-cockroachdb.md b/src/current/v25.1/migrate-to-cockroachdb.md
index be5561c12d8..202fe2c6651 100644
--- a/src/current/v25.1/migrate-to-cockroachdb.md
+++ b/src/current/v25.1/migrate-to-cockroachdb.md
@@ -5,7 +5,7 @@ toc: true
docs_area: migrate
---
-A migration to CockroachDB uses the [MOLT tools]({% link molt/molt-overview.md %}) to [convert your source schema](#step-2-prepare-the-source-schema), [load source data](#step-3-load-data-into-cockroachdb) into CockroachDB and immediately [replicate ongoing changes](#step-4-replicate-changes-to-cockroachdb), and [verify consistency](#step-5-stop-replication-and-verify-data) on the CockroachDB cluster before performing cutover.
+A migration to CockroachDB uses the [MOLT tools]({% link molt/migration-overview.md %}) to [convert your source schema](#step-2-prepare-the-source-schema), [load source data](#step-3-load-data-into-cockroachdb) into CockroachDB and immediately [replicate ongoing changes](#step-4-replicate-changes-to-cockroachdb), and [verify consistency](#step-5-stop-replication-and-verify-data) on the CockroachDB cluster before performing cutover.
{% assign tab_names_html = "Load and replicate;Phased migration;Failback" %}
{% assign html_page_filenames = "migrate-to-cockroachdb.html;migrate-in-phases.html;migrate-failback.html" %}
@@ -14,7 +14,7 @@ A migration to CockroachDB uses the [MOLT tools]({% link molt/molt-overview.md %
## Before you begin
-- Review the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+- Review the [Migration Overview]({% link molt/migration-overview.md %}).
- Install the [MOLT (Migrate Off Legacy Technology)]({% link releases/molt.md %}#installation) tools.
- Review the MOLT Fetch [setup]({% link molt/molt-fetch.md %}#setup) and [best practices]({% link molt/molt-fetch.md %}#best-practices).
{% include molt/fetch-secure-cloud-storage.md %}
@@ -109,10 +109,10 @@ Perform a cutover by resuming application traffic, now to CockroachDB.
## See also
-- [MOLT Overview]({% link molt/molt-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [MOLT Fetch]({% link molt/molt-fetch.md %})
- [MOLT Verify]({% link molt/molt-verify.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Migrate to CockroachDB in Phases]({% link {{ page.version.version }}/migrate-in-phases.md %})
- [Migration Failback]({% link {{ page.version.version }}/migrate-failback.md %})
\ No newline at end of file
diff --git a/src/current/v25.1/migration-overview.md b/src/current/v25.1/migration-overview.md
deleted file mode 100644
index 4e0d7a35e26..00000000000
--- a/src/current/v25.1/migration-overview.md
+++ /dev/null
@@ -1,169 +0,0 @@
----
-title: Migration Overview
-summary: Learn how to migrate your database to a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page provides a high-level overview of database migration.
-
-A database migration broadly consists of the following phases:
-
-1. [Develop a migration plan:](#develop-a-migration-plan) Evaluate your [downtime approach](#approach-to-downtime), [size the CockroachDB cluster](#capacity-planning) that you will migrate to, and become familiar with the [application changes](#application-changes) that you need to make for CockroachDB.
-1. [Prepare for migration:](#prepare-for-migration) Run a [pre-mortem](#run-a-migration-pre-mortem), set up [metrics](#set-up-monitoring-and-alerting), [load test data](#load-test-data), [validate your application queries](#validate-queries) for correctness and performance, and [perform a dry run](#perform-a-dry-run) of the migration.
-1. [Conduct the migration:](#conduct-the-migration) Use the [MOLT tools]({% link molt/molt-overview.md %}) to migrate the source data to CockroachDB, replicate ongoing changes, and verify consistency on CockroachDB.
-1. [Complete the migration:](#complete-the-migration) Notify the appropriate parties and summarize the details.
-
-{{site.data.alerts.callout_success}}
-For help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Develop a migration plan
-
-Consider the following as you plan your migration:
-
-- Who will lead and perform the migration? Which teams are involved, and which aspects are they responsible for?
-- Which internal and external parties do you need to inform about the migration?
-- Which external or third-party tools (e.g., microservices, analytics, payment processors, aggregators, CRMs) must be tested and migrated along with your application?
-- What portion of the data can be inconsistent, and for how long? What percentage of latency and application errors is tolerable? Together, these limits comprise your "error budget".
-- When is the best time to perform this migration to be minimally disruptive to the database's users?
-- What is your target date for completing the migration?
-
-Create a document that summarizes the intent of the migration, the technical details, and the team members involved.
-
-### Approach to downtime
-
-It's important to fully [prepare for the migration](#prepare-for-migration) so that you can be confident the migration will complete successfully within the downtime window. Consider the following approaches to downtime:
-
-- *Scheduled downtime* is made known to your users in advance. Once you have [prepared for the migration](#prepare-for-migration), you take the application offline, [conduct the migration](#conduct-the-migration), and bring the application back online on CockroachDB. To succeed, you should estimate the amount of downtime required to migrate your data, and ideally schedule the downtime outside of peak hours. Scheduling downtime is easiest if your application traffic is "periodic", meaning that it varies by the time of day, day of week, or day of month.
-
-- *Unscheduled downtime* impacts as few customers as possible, ideally without impacting their regular usage. If your application is intentionally offline at certain times (e.g., outside business hours), you can migrate the data without users noticing. Alternatively, if your application's functionality is not time-sensitive (e.g., it sends batched messages or emails), then you can queue requests while your system is offline, and process those requests after completing the migration to CockroachDB.
-
-- *Reduced functionality* takes some, but not all, application functionality offline. For example, you can disable writes but not reads while you migrate the application data, and queue data to be written after completing the migration.
-
-### Capacity planning
-
-Determine the size of the target CockroachDB cluster. To do this, consider your data volume and workload characteristics:
-
-- What is the total size of the data you will migrate?
-- How many active [application connections]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling) will be running in the CockroachDB environment?
-
-Use this information to size the CockroachDB cluster you will create. If you are migrating to a CockroachDB {{ site.data.products.cloud }} cluster, see [Plan Your Cluster]({% link cockroachcloud/plan-your-cluster.md %}) for details:
-
-- For CockroachDB {{ site.data.products.standard }} and {{ site.data.products.basic }}, your cluster will scale automatically to meet your storage and usage requirements. Refer to the [CockroachDB {{ site.data.products.standard }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) and [CockroachDB {{ site.data.products.basic }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) documentation to learn how to limit your resource consumption.
-- For CockroachDB {{ site.data.products.advanced }}, refer to the [example]({% link cockroachcloud/plan-your-cluster-advanced.md %}#example) that shows how your data volume, storage requirements, and replication factor affect the recommended node size (number of vCPUs per node) and total number of nodes on the cluster.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.cloud }} [Production Checklist]({% link cockroachcloud/production-checklist.md %}#sql-connection-handling).
-
-If you are migrating to a CockroachDB {{ site.data.products.core }} cluster:
-
-- Refer to our [sizing methodology]({% link {{ page.version.version }}/recommended-production-settings.md %}#sizing) to determine the total number of vCPUs on the cluster and the number of vCPUs per node (which determines the number of nodes on the cluster).
-- Refer to our [storage recommendations]({% link {{ page.version.version }}/recommended-production-settings.md %}#storage) to determine the amount of storage to provision on each node.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.core }} [Production Checklist]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling).
-
-### Application changes
-
-As you develop your migration plan, consider the application changes that you will need to make. These may relate to the following:
-
-- [Designing a schema that is compatible with CockroachDB.](#schema-design-best-practices)
-- [Handling transaction contention.](#handling-transaction-contention)
-- [Unimplemented features and syntax incompatibilities.](#unimplemented-features-and-syntax-incompatibilities)
-
-#### Schema design best practices
-
-Follow these recommendations when converting your schema for compatibility with CockroachDB.
-
-- Define an explicit primary key on every table. For more information, see [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices).
-
-- Do not use a sequence to define a primary key column. Instead, Cockroach Labs recommends that you use [multi-column primary keys]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or [auto-generated unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-functions-to-generate-unique-ids) for primary key columns (see the example after this list).
-
-- By default on CockroachDB, `INT` is an alias for `INT8`, which creates 64-bit signed integers. Depending on your source database or application requirements, you may need to change the integer size to `4`. For example, [PostgreSQL defaults to 32-bit integers](https://www.postgresql.org/docs/9.6/datatype-numeric.html). For more information, see [Considerations for 64-bit signed integers]({% link {{ page.version.version }}/int.md %}#considerations-for-64-bit-signed-integers).
-
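-For example, the following minimal sketch uses a generated UUID instead of a sequence for the primary key (the table and columns are hypothetical):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
---- gen_random_uuid() produces values that are spread evenly across the
---- keyspace, avoiding the write hotspot that a monotonically increasing
---- sequence creates.
-CREATE TABLE users (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    email STRING NOT NULL
-);
-~~~
-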
-#### Handling transaction contention
-
-Optimize your queries against [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). You may encounter [transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) when you [test application queries](#validate-queries), as well as transaction contention due to long-running transactions when you [conduct the migration](#conduct-the-migration) and bulk load data.
-
-Transaction retry errors are more frequent under CockroachDB's default [`SERIALIZABLE` isolation level]({% link {{ page.version.version }}/demo-serializable.md %}). If you are migrating an application that was built at a `READ COMMITTED` isolation level, you should first [enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
-
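-As a minimal sketch, assuming a cluster version that exposes the `sql.txn.read_committed_isolation.enabled` setting:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
---- Allow READ COMMITTED transactions cluster-wide.
-SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = true;
-
---- Run an individual transaction at READ COMMITTED.
-BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
---- ... statements ...
-COMMIT;
-~~~
-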
-#### Unimplemented features and syntax incompatibilities
-
-Update your queries to resolve differences in functionality and SQL syntax.
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax. However, the following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-If your source database uses any of the preceding features, you may need to implement workarounds in your schema design, in your [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements), or in your application code.
-
-For more details on the CockroachDB SQL implementation, see [SQL Feature Support]({% link {{ page.version.version }}/sql-feature-support.md %}).
-
-## Prepare for migration
-
-Once you have a migration plan, prepare the team, application, source database, and CockroachDB cluster for the migration.
-
-### Run a migration "pre-mortem"
-
-To minimize issues after cutover, compose a migration "pre-mortem":
-
-1. Clearly describe the roles and processes of each team member performing the migration.
-1. List the likely failure points and issues that you may encounter as you [conduct the migration](#conduct-the-migration).
-1. Rank potential issues by severity, and identify ways to reduce risk.
-1. Create a plan for implementing the actions that would most effectively reduce risk.
-
-### Set up monitoring and alerting
-
-Based on the error budget you [defined in your migration plan](#develop-a-migration-plan), identify the metrics that you can use to measure your success criteria and set up monitoring for the migration. These metrics may be identical to those you normally use in production, but can also be specific to your migration needs.
-
-### Load test data
-
-It's useful to load test data into CockroachDB so that you can [test your application queries](#validate-queries). You can use the steps in [Migrate to CockroachDB in Phases]({% link {{ page.version.version }}/migrate-in-phases.md %}) to load and verify test data.
-
-### Validate queries
-
-After you [load the test data](#load-test-data), validate your queries on CockroachDB. You can do this by [shadowing](#shadowing) or by [manually testing](#test-query-results-and-performance) the queries.
-
-Note that CockroachDB defaults to the [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) transaction isolation level. If you are migrating an application that was built at a `READ COMMITTED` isolation level on the source database, you must [enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
-
-#### Shadowing
-
-You can "shadow" your production workload by executing your source SQL statements on CockroachDB in parallel. You can then [validate the queries](#test-query-results-and-performance) on CockroachDB for consistency, performance, and potential issues with the migration.
-
-#### Test query results and performance
-
-You can manually validate your queries by testing a subset of "critical queries" on an otherwise idle CockroachDB cluster:
-
-- Check the application logs for error messages and the API response time. If application requests are slower than expected, use the **SQL Activity** page on the [CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/statements-page.md %}) or [DB Console]({% link {{ page.version.version }}/ui-statements-page.md %}) to find the longest-running queries that are part of that application request. If necessary, tune the queries according to our best practices for [SQL performance]({% link {{ page.version.version }}/performance-best-practices-overview.md %}).
-
-- Compare the results of the queries and check that they are identical in both the source database and CockroachDB. To do this, you can use [MOLT Verify]({% link molt/molt-verify.md %}).
-
-Test performance on a CockroachDB cluster that is appropriately [sized](#capacity-planning) for your workload:
-
-1. Run the application with single or very low concurrency and verify that the app's performance is acceptable. The cluster should be provisioned with more than enough resources to handle this workload, because you need to verify that the queries will be fast enough when there are zero resource bottlenecks.
-
-1. Run stress tests with at least the production concurrency and rate, but ideally higher in order to verify that the system can handle unexpected spikes in load. This can also uncover [contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) issues that will appear during spikes in app load, which may require [application design changes](#handling-transaction-contention) to avoid.
-
-### Perform a dry run
-
-To further minimize potential surprises when you conduct the migration, practice cutover using your application and similar volumes of data on a "dry-run" environment. Use a test or development environment that is as similar as possible to production.
-
-Performing a dry run is highly recommended. In addition to demonstrating how long the migration may take, a dry run also helps to ensure that team members understand what they need to do during the migration, and that changes to the application are coordinated.
-
-## Conduct the migration
-
-Once you are ready to migrate, follow the steps in [Migrate to CockroachDB]({% link {{ page.version.version }}/migrate-to-cockroachdb.md %}) or [Migrate to CockroachDB in Phases]({% link {{ page.version.version }}/migrate-in-phases.md %}).
-
-## Complete the migration
-
-After you have successfully [conducted the migration](#conduct-the-migration):
-
-- Notify the teams and other stakeholders impacted by the migration.
-- Retire any test or development environments used to verify the migration.
-- Extend the document you created when [developing your migration plan](#develop-a-migration-plan) with any issues encountered and follow-up work that needs to be done.
-
-## See also
-
-- [Migrate to CockroachDB]({% link {{ page.version.version }}/migrate-to-cockroachdb.md %})
-- [Migrate to CockroachDB in Phases]({% link {{ page.version.version }}/migrate-in-phases.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices)
-- [Secondary index best practices]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices)
-- [Transaction contention best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention)
\ No newline at end of file
diff --git a/src/current/v25.1/migration-strategy-lift-and-shift.md b/src/current/v25.1/migration-strategy-lift-and-shift.md
deleted file mode 100644
index 98935b546b6..00000000000
--- a/src/current/v25.1/migration-strategy-lift-and-shift.md
+++ /dev/null
@@ -1,139 +0,0 @@
----
-title: "Migration Strategy: Lift and Shift"
-summary: Learn about the 'Lift and Shift' data migration strategy
-toc: true
-docs_area: migrate
----
-
-There are multiple strategies for [migrating off legacy technology]({% link {{ page.version.version }}/migration-overview.md %}) to CockroachDB.
-
-This page discusses "Lift and Shift", a commonly used strategy for migrating your database. In this approach, also known as "Big Bang", your data is moved in its entirety from a source system to a target system within a defined period of time. This typically involves some application downtime and can involve some service degradation.
-
-Lift and Shift may not be the right approach if strong application service continuity is required during the migration. It is a viable method if application downtime is permitted.
-
-{{site.data.alerts.callout_info}}
-The information on this page assumes you have already reviewed the [migration overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-## Pros and Cons
-
-On the spectrum of different data migration strategies, Lift and Shift has the following pros and cons. The terms "lower" and "higher" are not absolute, but relative to other approaches.
-
-Pros:
-
-- Conceptually straightforward.
-- Less complex: If you can afford some downtime, the overall effort is usually lower and errors are less likely.
-- Shorter time start-to-finish: In general, the more downtime you can afford, the shorter the overall migration project timeframe can be.
-- Lower technical risk: It does not involve running multiple systems alongside each other for an extended period of time.
-- Easy to practice [dry runs]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) of import/export using testing/non-production systems.
-- Good import/export tooling is available (e.g., external tools like [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %}), [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %}), [Striim]({% link {{ page.version.version }}/striim.md %}); or internal tools like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), [`cockroach userfile upload`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %})).
-- If your application already has regularly scheduled maintenance windows, your customers will not encounter application downtime.
-
-Cons:
-
-- All or nothing: It either works or does not work; once you start, you have to finish or roll back.
-- Higher project risk: The project **must** be completed to meet a given [downtime / service degradation window]({% link {{ page.version.version }}/migration-overview.md %}#approach-to-downtime).
-- Application service continuity requirements must be relaxed (that is, application downtime or increased latency may be needed).
-
-## Process design considerations
-
-{{site.data.alerts.callout_info}}
-The high-level considerations in this section only refer to the data-loading portion of your migration. They assume you are following the steps in the overall migration process described in [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-Keep in mind the following considerations when designing a Lift and Shift data migration process.
-
-- [Decide on your data migration tooling.](#managed-migration)
-- [Decide which data formats you will use.](#data-formats)
-- [Design a restartable process.](#restartable)
-- [Design a scalable process.](#scalable)
-
-
-
-### Decide on your data migration tooling
-
-If you plan to do your bulk data migration using a managed migration service, you must have a secure, publicly available CockroachDB cluster. CockroachDB supports the following [third-party migration services]({% link {{ page.version.version }}/third-party-database-tools.md %}#data-migration-tools):
-
-- [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Striim]({% link {{ page.version.version }}/striim.md %})
-
-{{site.data.alerts.callout_info}}
-Depending on the migration service you choose, [long-running transactions]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}#hanging-or-stuck-queries) can occur. In some cases, these queries will cause [transaction retry errors]({% link {{ page.version.version }}/common-errors.md %}#restart-transaction). If you encounter these errors while migrating to CockroachDB using a managed migration service, please reach out to our [Support Resources]({% link {{ page.version.version }}/support-resources.md %}).
-{{site.data.alerts.end}}
-
-If you will not be using a managed migration service, see the following sections for more information on how to use SQL statements like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), etc.
-
-
-
-### Decide which data formats and storage media you will use
-
-It's important to decide which data formats, storage media, and database features you will use to migrate your data.
-
-Data formats that can be imported by CockroachDB include:
-
-- [SQL]({% link {{ page.version.version }}/schema-design-overview.md %}) for the [schema import]({% link cockroachcloud/migrations-page.md %}).
-- [CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}) for table data.
-- [Avro]({% link {{ page.version.version }}/migrate-from-avro.md %}) for table data.
-
-The storage media you export to and import from can be intermediate data files or streaming data sent over the network. Options include:
-
-- Local "userdata" storage for small tables (see [`cockroach userdata`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %}), [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})).
-- Cloud blob storage (see [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}), [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})).
-- Direct wire transfers (see [managed migration services](#managed-migration)).
-
-Database features for export/import from the source and target databases can include:
-
-- Tools for exporting from the source database may include `pg_dump --schema-only`, `COPY ... TO`, `mysqldump`, `expdp`, etc.
-- For import into CockroachDB, use [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}) or [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}). For bulk data migrations, most users should use `IMPORT INTO` because the tables will be offline anyway, and `IMPORT INTO` can [perform the data import much faster]({% link {{ page.version.version }}/import-performance-best-practices.md %}) than `COPY FROM` (see the sketch after this list).
-
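-As a minimal sketch of the two import paths into CockroachDB (the bucket path and table are hypothetical):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
---- Bulk path: fastest, but takes the target table offline during the import.
-IMPORT INTO employees
-    CSV DATA ('s3://{bucket}/employees.csv');
-
---- Online path: slower, but the table remains available for reads and writes.
-COPY employees FROM STDIN WITH CSV;
-~~~
-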
-Decide which of the options above will meet your requirements while resulting in a process that is [restartable](#restartable) and [scalable](#scalable).
-
-
-
-### Design a restartable process
-
-To have a higher chance of success, design your data migration process so that it can be stopped and restarted from an intermediate state at any point. This will help minimize errors and avoid wasted effort.
-
-Keep the following requirements in mind as you design a restartable import/export process:
-
-- Bulk migrate data in manageable size batches for your source and target systems.
- - This is a best practice. If something happens to the target cluster during import, the amount of wasted work will be minimized.
-- Implement progress/state keeping with process restart capabilities (see the sketch after this list).
-- Make sure your export process is idempotent: the same input should always produce the same output data.
-- If possible, export and import the majority of your data before taking down the source database. This can ensure that you only have to deal with the incremental changes from your last import to complete the migration process.
-
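-A sketch of batch-level progress keeping (the file layout is hypothetical): import one manageable file per statement, and check job state after a restart to determine which batches already completed.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
---- Import a single batch; if this statement fails, only this batch needs
---- to be retried.
-IMPORT INTO employees
-    CSV DATA ('s3://{bucket}/employees/part-0001.csv');
-
---- After a restart, inspect job history to see which batches completed.
-SHOW JOBS;
-~~~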
-
-
-### Design a scalable and performant process
-
-Once your process is [restartable and resilient to failures](#design-a-restartable-process), it's also important to make sure it scales to the needs of your data set. The larger the data set you are migrating to CockroachDB, the more important the performance and scalability of your process become.
-
-Keep the following requirements in mind:
-
-- Schema and data should be imported separately.
-- Your process should handle multiple files across multiple export/import streams concurrently (see the sketch after this list).
- - For best performance, these files should contain presorted, disjoint data sets.
-- Benchmark the performance of your migration process to help ensure it will complete within the allotted downtime window.
-
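-As a sketch, a single `IMPORT INTO` statement can consume multiple presorted files concurrently (the file names are hypothetical):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
---- Listing several files in one statement lets CockroachDB ingest them
---- in parallel.
-IMPORT INTO employees
-    CSV DATA (
-        's3://{bucket}/employees/part-0001.csv',
-        's3://{bucket}/employees/part-0002.csv',
-        's3://{bucket}/employees/part-0003.csv'
-    );
-~~~
-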
-For more information about import performance, see [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Migrate with AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
-- [Migrate and Replicate Data with Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Migrate and Replicate Data with Striim]({% link {{ page.version.version }}/striim.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Back Up and Restore]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
-- [Export data with Changefeeds]({% link {{ page.version.version }}/export-data-with-changefeeds.md %})
-- [`COPY`]({% link {{ page.version.version }}/copy.md %})
-- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
-- [Client connection parameters]({% link {{ page.version.version }}/connection-parameters.md %})
-
-
-{% comment %} eof {% endcomment %}
diff --git a/src/current/v25.1/qlik.md b/src/current/v25.1/qlik.md
index bdb646d7b20..da53e969bfd 100644
--- a/src/current/v25.1/qlik.md
+++ b/src/current/v25.1/qlik.md
@@ -68,7 +68,7 @@ Complete the following items before using Qlik Replicate:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables, or add transformation rules. If you make substantial schema changes, the Qlik Replicate migration may fail.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -96,7 +96,7 @@ In the Qlik Replicate interface, CockroachDB is configured as a PostgreSQL **sou
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v25.1/read-committed.md b/src/current/v25.1/read-committed.md
index 83d7f9ad7e6..bcf0a5d9f92 100644
--- a/src/current/v25.1/read-committed.md
+++ b/src/current/v25.1/read-committed.md
@@ -13,7 +13,7 @@ docs_area: deploy
- Your application needs to maintain a high workload concurrency with minimal [transaction retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and it can tolerate potential [concurrency anomalies](#concurrency-anomalies). Predictable query performance at high concurrency is more valuable than guaranteed transaction [serializability]({% link {{ page.version.version }}/developer-basics.md %}#serializability-and-transaction-contention).
-- You are [migrating an application to CockroachDB]({% link {{ page.version.version }}/migration-overview.md %}) that was built at a `READ COMMITTED` isolation level on the source database, and it is not feasible to modify your application to use `SERIALIZABLE` isolation.
+- You are [migrating an application to CockroachDB]({% link molt/migration-overview.md %}) that was built at a `READ COMMITTED` isolation level on the source database, and it is not feasible to modify your application to use `SERIALIZABLE` isolation.
Whereas `SERIALIZABLE` isolation guarantees data correctness by placing transactions into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}), `READ COMMITTED` isolation permits some [concurrency anomalies](#concurrency-anomalies) in exchange for minimizing transaction aborts, [retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and blocking. Compared to `SERIALIZABLE` transactions, `READ COMMITTED` transactions do **not** return [serialization errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) that require client-side handling. See [`READ COMMITTED` transaction behavior](#read-committed-transaction-behavior).
@@ -919,4 +919,4 @@ SELECT * FROM schedules
- [Serializable Transactions]({% link {{ page.version.version }}/demo-serializable.md %})
- [What Write Skew Looks Like](https://www.cockroachlabs.com/blog/what-write-skew-looks-like/)
- [Read Committed RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20230122_read_committed_isolation.md)
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
diff --git a/src/current/v25.1/striim.md b/src/current/v25.1/striim.md
index 436d7440cbd..67526c8ff9d 100644
--- a/src/current/v25.1/striim.md
+++ b/src/current/v25.1/striim.md
@@ -37,7 +37,7 @@ Complete the following items before using Striim:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -110,7 +110,7 @@ To perform continuous replication of ongoing changes, create a Striim applicatio
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v25.2/aws-dms.md b/src/current/v25.2/aws-dms.md
index bb3b53b22a2..99a42ac0532 100644
--- a/src/current/v25.2/aws-dms.md
+++ b/src/current/v25.2/aws-dms.md
@@ -41,7 +41,7 @@ Complete the following items before starting the DMS migration:
- Manually create all schema objects in the target CockroachDB cluster. If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, you can [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema.
- - All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ - All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
- Drop all [constraints]({% link {{ page.version.version }}/constraints.md %}) per the [AWS DMS best practices](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.Performance). You can recreate them after the [full load completes](#step-3-verify-the-migration). AWS DMS can create a basic schema, but does not create [indexes]({% link {{ page.version.version }}/indexes.md %}) or constraints such as [foreign keys]({% link {{ page.version.version }}/foreign-key.md %}) and [defaults]({% link {{ page.version.version }}/default-value.md %}).
@@ -406,7 +406,7 @@ The `BatchApplyEnabled` setting can improve replication performance and is recom
## See Also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %})
- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
diff --git a/src/current/v25.2/build-a-java-app-with-cockroachdb-hibernate.md b/src/current/v25.2/build-a-java-app-with-cockroachdb-hibernate.md
index c19bdcaed7b..c4b53d27629 100644
--- a/src/current/v25.2/build-a-java-app-with-cockroachdb-hibernate.md
+++ b/src/current/v25.2/build-a-java-app-with-cockroachdb-hibernate.md
@@ -130,9 +130,9 @@ APP: getAccountBalance(2) --> 350.00
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v25.2/build-a-java-app-with-cockroachdb.md b/src/current/v25.2/build-a-java-app-with-cockroachdb.md
index eae610ab364..d798e9dbaee 100644
--- a/src/current/v25.2/build-a-java-app-with-cockroachdb.md
+++ b/src/current/v25.2/build-a-java-app-with-cockroachdb.md
@@ -269,9 +269,9 @@ props.setProperty("options", "-c sql_safe_updates=true -c statement_timeout=30")
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}). It bypasses the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Use `reWriteBatchedInserts` for increased speed
diff --git a/src/current/v25.2/build-a-python-app-with-cockroachdb-sqlalchemy.md b/src/current/v25.2/build-a-python-app-with-cockroachdb-sqlalchemy.md
index 73e5dfa528b..e56b9c136a8 100644
--- a/src/current/v25.2/build-a-python-app-with-cockroachdb-sqlalchemy.md
+++ b/src/current/v25.2/build-a-python-app-with-cockroachdb-sqlalchemy.md
@@ -204,9 +204,9 @@ Instead, we recommend breaking your transaction into smaller units of work (or "
If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code that uses an ORM and use the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement instead. It is much faster and more efficient than making a series of [`INSERT`s]({% link {{ page.version.version }}/insert.md %}) and [`UPDATE`s]({% link {{ page.version.version }}/update.md %}) such as are generated by calls to [`session.bulk_save_objects()`](https://docs.sqlalchemy.org/orm/session_api.html?highlight=bulk_save_object#sqlalchemy.orm.session.Session.bulk_save_objects).
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}).
+For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}).
-For more information about importing data from MySQL, see [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}).
+For more information about importing data from MySQL, see [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql).
### Prefer the query builder
diff --git a/src/current/v25.2/copy.md b/src/current/v25.2/copy.md
index aa825fdc003..144da73b4ac 100644
--- a/src/current/v25.2/copy.md
+++ b/src/current/v25.2/copy.md
@@ -358,10 +358,10 @@ You can copy CSV data into CockroachDB using the following methods:
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [`EXPORT`]({% link {{ page.version.version }}/export.md %})
- [Install a Driver or ORM Framework]({% link {{ page.version.version }}/install-client-drivers.md %})
{% comment %}
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
{% endcomment %}
\ No newline at end of file
diff --git a/src/current/v25.2/debezium.md b/src/current/v25.2/debezium.md
index ea2eb513b50..a9d14707ea4 100644
--- a/src/current/v25.2/debezium.md
+++ b/src/current/v25.2/debezium.md
@@ -116,7 +116,7 @@ Once all of the [prerequisite steps](#before-you-begin) are completed, you can u
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v25.2/frequently-asked-questions.md b/src/current/v25.2/frequently-asked-questions.md
index 07e00b6959f..c94ba15582b 100644
--- a/src/current/v25.2/frequently-asked-questions.md
+++ b/src/current/v25.2/frequently-asked-questions.md
@@ -147,7 +147,7 @@ Note, however, that the protocol used doesn't significantly impact how easy it i
### Can a PostgreSQL or MySQL application be migrated to CockroachDB?
-Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}) or [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
+Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %}) or [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
We also fully support [importing your data via CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}).
diff --git a/src/current/v25.2/goldengate.md b/src/current/v25.2/goldengate.md
index 7fd572094e4..30ee23c17df 100644
--- a/src/current/v25.2/goldengate.md
+++ b/src/current/v25.2/goldengate.md
@@ -514,7 +514,7 @@ Run the steps in this section on a machine and in a directory where Oracle Golde
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v25.2/import-into.md b/src/current/v25.2/import-into.md
index ad4a2c9ca5c..780b9195104 100644
--- a/src/current/v25.2/import-into.md
+++ b/src/current/v25.2/import-into.md
@@ -158,7 +158,7 @@ You can control the `IMPORT` process's behavior using any of the following key-v
For examples showing how to use these options, see the [Examples section]({% link {{ page.version.version }}/import-into.md %}#examples).
-For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview]({% link molt/migration-overview.md %}).
## View and control import jobs
@@ -285,6 +285,6 @@ For more information about importing data from Avro, including examples, see [Mi
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
diff --git a/src/current/v25.2/import-performance-best-practices.md b/src/current/v25.2/import-performance-best-practices.md
index b85d28f1fe6..ca69a57fc4a 100644
--- a/src/current/v25.2/import-performance-best-practices.md
+++ b/src/current/v25.2/import-performance-best-practices.md
@@ -160,9 +160,9 @@ If you cannot both split and sort your dataset, the performance of either split
## See also
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Migrate from Oracle]({% link {{ page.version.version }}/migrate-from-oracle.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
diff --git a/src/current/v25.2/index.md b/src/current/v25.2/index.md
index eff4d508353..78ac87b9f4c 100644
--- a/src/current/v25.2/index.md
+++ b/src/current/v25.2/index.md
@@ -99,11 +99,11 @@ docs_area:
diff --git a/src/current/v25.2/insert-data.md b/src/current/v25.2/insert-data.md
index a7e76489a0c..dc3c26ee1f5 100644
--- a/src/current/v25.2/insert-data.md
+++ b/src/current/v25.2/insert-data.md
@@ -105,7 +105,7 @@ conn.commit()
Reference information related to this task:
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [`INSERT`]({% link {{ page.version.version }}/insert.md %})
- [`UPSERT`]({% link {{ page.version.version }}/upsert.md %})
- [Transaction Contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention)
diff --git a/src/current/v25.2/migrate-failback.md b/src/current/v25.2/migrate-failback.md
index acc45022370..98d7844bcf6 100644
--- a/src/current/v25.2/migrate-failback.md
+++ b/src/current/v25.2/migrate-failback.md
@@ -111,10 +111,10 @@ The following example watches the `employees` table for change events.
## See also
-- [MOLT Overview]({% link molt/molt-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [MOLT Fetch]({% link molt/molt-fetch.md %})
- [MOLT Verify]({% link molt/molt-verify.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Migrate to CockroachDB]({% link {{ page.version.version }}/migrate-to-cockroachdb.md %})
- [Migrate to CockroachDB in Phases]({% link {{ page.version.version }}/migrate-in-phases.md %})
\ No newline at end of file
diff --git a/src/current/v25.2/migrate-from-avro.md b/src/current/v25.2/migrate-from-avro.md
index 39b7bdbc9aa..de42d232917 100644
--- a/src/current/v25.2/migrate-from-avro.md
+++ b/src/current/v25.2/migrate-from-avro.md
@@ -216,8 +216,8 @@ You will need to run [`ALTER TABLE ... ADD CONSTRAINT`]({% link {{ page.version.
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV][csv]
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.2/migrate-from-csv.md b/src/current/v25.2/migrate-from-csv.md
index e0e1d92da9c..d30eeb2e98c 100644
--- a/src/current/v25.2/migrate-from-csv.md
+++ b/src/current/v25.2/migrate-from-csv.md
@@ -176,8 +176,8 @@ IMPORT INTO employees (emp_no, birth_date, first_name, last_name, gender, hire_d
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.2/migrate-from-geojson.md b/src/current/v25.2/migrate-from-geojson.md
index 2c3326af39e..e0d804f0be7 100644
--- a/src/current/v25.2/migrate-from-geojson.md
+++ b/src/current/v25.2/migrate-from-geojson.md
@@ -122,9 +122,9 @@ IMPORT INTO underground_storage_tank CSV DATA ('http://localhost:3000/tanks.csv'
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.2/migrate-from-geopackage.md b/src/current/v25.2/migrate-from-geopackage.md
index 53acdaff4a7..c3fabeb57ef 100644
--- a/src/current/v25.2/migrate-from-geopackage.md
+++ b/src/current/v25.2/migrate-from-geopackage.md
@@ -114,9 +114,9 @@ IMPORT INTO busstops CSV DATA ('http://localhost:3000/busstops.csv') WITH skip =
- [Spatial indexes]({% link {{ page.version.version }}/spatial-indexes.md %})
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.2/migrate-from-mysql.md b/src/current/v25.2/migrate-from-mysql.md
deleted file mode 100644
index 1fd506ca8c7..00000000000
--- a/src/current/v25.2/migrate-from-mysql.md
+++ /dev/null
@@ -1,416 +0,0 @@
----
-title: Migrate from MySQL
-summary: Learn how to migrate data from MySQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-{{site.data.alerts.callout_info}}
-For current migration instructions using the [MOLT tools]({% link molt/molt-overview.md %}), refer to [Migrate to CockroachDB]({% link {{ page.version.version }}/migrate-to-cockroachdb.md %}).
-{{site.data.alerts.end}}
-
-This page describes basic considerations and provides an [example](#example-migrate-world-to-cockroachdb) of migrating data from MySQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [MySQL migration example](#example-migrate-world-to-cockroachdb) on this page demonstrates how to update the MySQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-You will likely need to make application changes due to differences in syntax between MySQL and CockroachDB. Along with the [general considerations in the migration overview]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), also consider the following MySQL-specific information as you develop your migration plan.
-
-When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), MySQL syntax that cannot be converted automatically is displayed in the [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report). Such issues may include the following:
-
-#### String case sensitivity
-
-Strings are case-insensitive in MySQL and case-sensitive in CockroachDB. You may need to edit your MySQL data to get the results you expect from CockroachDB. For example, string comparisons you wrote for MySQL may need to be changed to produce the same results in CockroachDB.
-
-For more information about the case sensitivity of strings in MySQL, see [Case Sensitivity in String Searches](https://dev.mysql.com/doc/refman/8.0/en/case-sensitivity.html) from the MySQL documentation. For more information about CockroachDB strings, see [`STRING`]({% link {{ page.version.version }}/string.md %}).
-
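-For example, with MySQL's default case-insensitive collation, the following comparison returns `1` (true), while in CockroachDB it returns `false`:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT 'MIGRATION' = 'migration';  -- returns 1 (true) in MySQL, false in CockroachDB
-~~~
-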
-#### Identifier case sensitivity
-
-Identifiers are case-sensitive in MySQL and [case-insensitive in CockroachDB]({% link {{ page.version.version }}/keywords-and-identifiers.md %}#identifiers). When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either keep case sensitivity by enclosing identifiers in double quotes, or make identifiers case-insensitive by converting them to lowercase.
-
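-For example, the following statements behave differently in CockroachDB because quoting preserves case (a minimal sketch using a hypothetical `Users` table):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE "Users" (id INT8 PRIMARY KEY);  -- quoted: case is preserved
-SELECT * FROM Users;  -- unquoted: folds to lowercase, so this looks for "users", not "Users"
-~~~
-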
-#### `AUTO_INCREMENT` attribute
-
-The MySQL [`AUTO_INCREMENT`](https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html) attribute, which creates sequential column values, is not supported in CockroachDB. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), columns with `AUTO_INCREMENT` can be converted to use [sequences]({% link {{ page.version.version }}/create-sequence.md %}), `UUID` values with [`gen_random_uuid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions), or unique `INT8` values using [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
-
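-For example, a MySQL column defined as `id INT AUTO_INCREMENT` might be converted as follows (a minimal sketch using a hypothetical `users` table):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE users (
-    id INT8 NOT NULL DEFAULT unique_rowid(),  -- non-sequential unique IDs replace AUTO_INCREMENT
-    name STRING,
-    PRIMARY KEY (id)
-);
-~~~
-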
-{{site.data.alerts.callout_info}}
-Changing a column type during schema conversion will cause [MOLT Verify]({% link molt/molt-verify.md %}) to identify a type mismatch during [data validation](#step-3-validate-the-migrated-data). This is expected behavior.
-{{site.data.alerts.end}}
-
-#### `ENUM` type
-
-MySQL `ENUM` types are defined in table columns. On CockroachDB, [`ENUM`]({% link {{ page.version.version }}/enum.md %}) is a standalone type. When [using the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema), you can either deduplicate the `ENUM` definitions or create a separate type for each column.
-
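-For example, a MySQL column defined as `status ENUM('active','inactive')` might be converted as follows (a minimal sketch using a hypothetical `accounts` table):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TYPE account_status AS ENUM ('active', 'inactive');  -- standalone type, reusable across columns
-CREATE TABLE accounts (id INT8 PRIMARY KEY, status account_status);
-~~~
-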
-#### `TINYINT` type
-
-`TINYINT` data types are not supported in CockroachDB. The [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) automatically converts `TINYINT` columns to [`INT2`]({% link {{ page.version.version }}/int.md %}) (`SMALLINT`).
-
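-For example, a MySQL column defined as `age TINYINT` might look as follows after conversion (a sketch using a hypothetical `people` table):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE people (id INT8 PRIMARY KEY, age INT2);  -- TINYINT becomes INT2 (SMALLINT)
-~~~
-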
-#### Geospatial types
-
-MySQL geometry types are not converted to CockroachDB [geospatial types]({% link {{ page.version.version }}/spatial-data-overview.md %}#spatial-objects) by the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql). They should be manually converted to the corresponding types in CockroachDB.
-
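-For example, a MySQL `POINT` column could be rewritten by hand as follows (a minimal sketch using a hypothetical `locations` table and SRID 4326):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE locations (
-    id INT8 PRIMARY KEY,
-    position GEOMETRY(POINT, 4326)  -- manually converted from the MySQL POINT type
-);
-~~~
-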
-#### `FIELD` function
-
-The MySQL `FIELD` function is not supported in CockroachDB. Instead, you can use the [`array_position`]({% link {{ page.version.version }}/functions-and-operators.md %}#array-functions) function, which returns the index of the first occurrence of an element in an array.
-
-Example usage:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT array_position(ARRAY[4,1,3,2],1);
-~~~
-
-~~~
- array_position
-------------------
- 2
-(1 row)
-~~~
-
-While MySQL returns 0 when the element is not found, CockroachDB returns `NULL`. As a result, if you use `array_position` in an `ORDER BY` clause, rows where the element is not found are still sorted, according to how `NULL` values are ordered. As a workaround, you can use the [`COALESCE`]({% link {{ page.version.version }}/functions-and-operators.md %}#conditional-and-function-like-operators) operator.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT * FROM table_a ORDER BY COALESCE(array_position(ARRAY[4,1,3,2],5),999);
-~~~
-
-## Load MySQL data
-
-You can use [MOLT Fetch]({% link molt/molt-fetch.md %}) to migrate MySQL data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-The [following example](#example-migrate-world-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `world` to CockroachDB
-
-The following steps demonstrate converting a schema, performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that MySQL data can be properly migrated to CockroachDB and that your application queries can be tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses the [MySQL `world` data set](https://dev.mysql.com/doc/index-other.html) and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the [`world` data set](https://dev.mysql.com/doc/index-other.html).
-
-1. Create the `world` database on your MySQL instance, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqlsh -uroot --sql --file {path}/world-db/world.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) and [create the CockroachDB {{ site.data.products.standard }} cluster]({% link cockroachcloud/create-your-cluster.md %}).
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the MySQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) to convert the `world` schema for compatibility with CockroachDB. The schema has three tables: `city`, `country`, and `countrylanguage`.
-
-1. Dump the MySQL `world` schema with the following [`mysqldump`](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-sql-format.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot --no-data world > world_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}?filters=mysql) in the {{ site.data.products.cloud }} Console and [add a new MySQL schema]({% link cockroachcloud/migrations-page.md %}?filters=mysql#convert-a-schema).
-
- For **AUTO_INCREMENT Conversion Option**, select the [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions) option. This will convert the `ID` column in the `city` table, which has MySQL type `int` and `AUTO_INCREMENT`, to a CockroachDB [`INT8`]({% link {{ page.version.version }}/int.md %}) type with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). For context on this option, see [`AUTO_INCREMENT` attribute](#auto_increment-attribute).
-
- The `UUID` and `unique_rowid()` options are each preferred for [different use cases]({% link {{ page.version.version }}/sql-faqs.md %}#what-are-the-differences-between-uuid-sequences-and-unique_rowid). For this example, selecting the `unique_rowid()` option makes [loading the data](#step-2-load-the-mysql-data) more straightforward in a later step, since both the source and target columns will have integer types.
-
-1. [Upload `world_schema.sql`]({% link cockroachcloud/migrations-page.md %}?filters=mysql#upload-file) to the Schema Conversion Tool.
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}?filters=mysql#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#summary-report) shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your MySQL database credentials]({% link cockroachcloud/migrations-page.md %}?filters=mysql#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the MySQL database.
- {{site.data.alerts.end}}
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Before you migrate the converted schema, click the **Statements** tab to view the [Statements list]({% link cockroachcloud/migrations-page.md %}?filters=mysql#statements-list). Scroll down to the `CREATE TABLE countrylanguage` statement and edit the statement to add a [collation]({% link {{ page.version.version }}/collate.md %}) (`COLLATE en_US`) on the `language` column:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE TABLE countrylanguage (
- countrycode VARCHAR(3) DEFAULT '' NOT NULL,
- language VARCHAR(30) COLLATE en_US DEFAULT '' NOT NULL,
- isofficial countrylanguage_isofficial_enum
- DEFAULT 'F'
- NOT NULL,
- percentage DECIMAL(4,1) DEFAULT '0.0' NOT NULL,
- PRIMARY KEY (countrycode, language),
- INDEX countrycode (countrycode),
- CONSTRAINT countrylanguage_ibfk_1
- FOREIGN KEY (countrycode) REFERENCES country (code)
- )
- ~~~
-
- Click **Save**.
-
-    This is a workaround to prevent [data validation](#step-3-validate-the-migrated-data) from failing due to collation mismatches. For more details, see the [MOLT Verify]({% link molt/molt-verify.md %}#known-limitations) documentation.
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}?filters=mysql#migrate-the-schema) to create a new {{ site.data.products.standard }} cluster with the converted schema. Name the database `world`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
-1. Open a SQL shell to the CockroachDB `world` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. For large imports, Cockroach Labs recommends [removing indexes prior to loading data]({% link {{ page.version.version }}/import-performance-best-practices.md %}#import-into-a-schema-with-secondary-indexes) and recreating them afterward. This provides increased visibility into the import progress and the ability to retry each step independently.
-
- Show the indexes on the `world` database:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW INDEXES FROM DATABASE world;
- ~~~
-
- The `countrycode` [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) indexes on the `city` and `countrylanguage` tables can be removed for now:
-
- ~~~
- table_name | index_name | index_schema | non_unique | seq_in_index | column_name | definition | direction | storing | implicit | visible
- ---------------------------------+-------------------------------------------------+--------------+------------+--------------+-----------------+-----------------+-----------+---------+----------+----------
- ...
- city | countrycode | public | t | 2 | id | id | ASC | f | t | t
- city | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- ...
- countrylanguage | countrycode | public | t | 1 | countrycode | countrycode | ASC | f | f | t
- countrylanguage | countrycode | public | t | 2 | language | language | ASC | f | t | t
- ...
- ~~~
-
-1. Drop the `countrycode` indexes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX city@countrycode;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- DROP INDEX countrylanguage@countrycode;
- ~~~
-
- You will recreate the indexes after [loading the data](#step-2-load-the-mysql-data).
-
-### Step 2. Load the MySQL data
-
-Load the `world` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-When MySQL dumps data, tables are not written in [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) dependency order. It is best to disable foreign key checks when loading data into CockroachDB, and to revalidate foreign keys on each table after the data is loaded.
-
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump the MySQL `world` data with the following [`mysqldump` command](https://dev.mysql.com/doc/refman/8.0/en/mysqldump-delimited-text.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- mysqldump -uroot -T /{path}/world-data --fields-terminated-by ',' --fields-enclosed-by '"' --fields-escaped-by '\' --no-create-info world
- ~~~
-
- This dumps each table in your database to the path `/{path}/world-data` as a `.txt` file in CSV format.
- - `--fields-terminated-by` specifies that values are separated by commas instead of tabs.
- - `--fields-enclosed-by` and `--fields-escaped-by` specify the characters that enclose and escape column values, respectively.
- - `--no-create-info` dumps only the [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements).
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
-    Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `world` cluster, using the same command as before:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/world?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each MySQL dump file into the corresponding table in the `world` database.
-
- The following commands point to a public S3 bucket where the `world` data dump files are hosted for this example. The `nullif='\N'` clause specifies that `\N` values, which are produced by the `mysqldump` command, should be read as [`NULL`]({% link {{ page.version.version }}/null-handling.md %}).
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
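-
-    For example, the following would test-import only the first 10 rows of `countrylanguage` (a minimal sketch, reusing the example bucket; not part of the example workflow):
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    IMPORT INTO countrylanguage
-      CSV DATA (
-        'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/countrylanguage.txt'
-      )
-      WITH
-        nullif='\N',
-        row_limit='10';
-    ~~~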
-
- ~~~ sql
- IMPORT INTO countrylanguage
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/countrylanguage.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782070812344321 | succeeded | 1 | 984 | 984 | 171555
- ~~~
-
- ~~~ sql
- IMPORT INTO country
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/country.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 887782114360819713 | succeeded | 1 | 239 | 0 | 33173
- ~~~
-
- ~~~ sql
- IMPORT INTO city
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/mysql/world/city.txt'
- )
- WITH
- nullif='\N';
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+---------
- 887782154421567489 | succeeded | 1 | 4079 | 4079 | 288140
- ~~~
-
- {{site.data.alerts.callout_info}}
- After [converting the schema](#step-1-convert-the-mysql-schema) to work with CockroachDB, the `id` column in `city` is an [`INT8`]({% link {{ page.version.version }}/int.md %}) with default values generated by [`unique_rowid()`]({% link {{ page.version.version }}/functions-and-operators.md %}#id-generation-functions). However, `unique_rowid()` values are only generated when new rows are [inserted]({% link {{ page.version.version }}/insert.md %}) without an `id` value. The MySQL data dump still includes the sequential `id` values generated by the MySQL [`AUTO_INCREMENT` attribute](#auto_increment-attribute), and these are imported with the `IMPORT INTO` command.
-
- In an actual migration, you can either update the primary key into a [multi-column key]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or add a new primary key column that [generates unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
- {{site.data.alerts.end}}
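-
-    For example, a new primary key column that generates unique IDs could be added as follows (a minimal sketch; `new_id` is a hypothetical column name, and this is not part of the example workflow):
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    ALTER TABLE city ADD COLUMN new_id UUID NOT NULL DEFAULT gen_random_uuid();  -- new_id is a hypothetical column name
-    ALTER TABLE city ALTER PRIMARY KEY USING COLUMNS (new_id);  -- the old primary key becomes a secondary index
-    ~~~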
-
-1. Recreate the indexes that you deleted before importing the data:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON city (countrycode, id);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE INDEX countrycode ON countrylanguage (countrycode, language);
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `city` and `countrylanguage`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM city;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+-----------------+-----------------+--------------------------------------------------------------+------------
- city | city_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- city | city_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM countrylanguage;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- ------------------+------------------------+-----------------+--------------------------------------------------------------+------------
- countrylanguage | countrylanguage_ibfk_1 | FOREIGN KEY | FOREIGN KEY (countrycode) REFERENCES country(code) NOT VALID | f
- countrylanguage | countrylanguage_pkey | PRIMARY KEY | PRIMARY KEY (countrycode ASC, language ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE city VALIDATE CONSTRAINT city_ibfk_1;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE countrylanguage VALIDATE CONSTRAINT countrylanguage_ibfk_1;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on MySQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the [JDBC connection string for MySQL](https://dev.mysql.com/doc/connector-j/8.1/en/connector-j-reference-jdbc-url-format.html) with `--source` and the SQL connection string for CockroachDB with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `world` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- ./molt verify --source 'jdbc:mysql://{user}:{password}@tcp({host}:{port})/world' --target 'postgresql://{user}:{password}@{host}:{port}/world?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following warnings indicate that the MySQL and CockroachDB columns have different types. This is an expected result, since some columns were [changed to `ENUM` types](#enum-type) when you [converted the schema](#step-1-convert-the-mysql-schema):
-
- ~~~
- WRN mismatching table definition mismatch_info="column type mismatch on continent: text vs country_continent_enum" table_name=country table_schema=public
- WRN mismatching table definition mismatch_info="column type mismatch on isofficial: text vs countrylanguage_isofficial_enum" table_name=countrylanguage table_schema=public
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.country (shard 1/1): truth rows seen: 239, success: 239, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.countrylanguage (shard 1/1): truth rows seen: 984, success: 984, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.city (shard 1/1): truth rows seen: 4079, success: 4079, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v25.2/migrate-from-openstreetmap.md b/src/current/v25.2/migrate-from-openstreetmap.md
index 843c335beba..a4bcdc56eef 100644
--- a/src/current/v25.2/migrate-from-openstreetmap.md
+++ b/src/current/v25.2/migrate-from-openstreetmap.md
@@ -128,9 +128,9 @@ Osm2pgsql took 2879s overall
- [Migrate from GeoPackages]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from Shapefiles]({% link {{ page.version.version }}/migrate-from-shapefiles.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.2/migrate-from-oracle.md b/src/current/v25.2/migrate-from-oracle.md
index 2979cd47a36..a62e9ae2c11 100644
--- a/src/current/v25.2/migrate-from-oracle.md
+++ b/src/current/v25.2/migrate-from-oracle.md
@@ -390,8 +390,8 @@ You will have to refactor Oracle SQL and functions that do not comply with [ANSI
- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.2/migrate-from-postgres.md b/src/current/v25.2/migrate-from-postgres.md
deleted file mode 100644
index e8dcf51f649..00000000000
--- a/src/current/v25.2/migrate-from-postgres.md
+++ /dev/null
@@ -1,301 +0,0 @@
----
-title: Migrate from PostgreSQL
-summary: Learn how to migrate data from PostgreSQL into a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-{{site.data.alerts.callout_info}}
-For current migration instructions using the [MOLT tools]({% link molt/molt-overview.md %}), refer to [Migrate to CockroachDB]({% link {{ page.version.version }}/migrate-to-cockroachdb.md %}).
-{{site.data.alerts.end}}
-
-This page describes basic considerations and provides an [example](#example-migrate-frenchtowns-to-cockroachdb) of migrating data from PostgreSQL to CockroachDB. The information on this page assumes that you have read [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}), which describes the broad phases and considerations of migrating a database to CockroachDB.
-
-The [PostgreSQL migration example](#example-migrate-frenchtowns-to-cockroachdb) on this page demonstrates how to update the PostgreSQL schema, perform an initial load of data, and validate the data. These steps are essential when [preparing for a full migration]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-{{site.data.alerts.callout_success}}
-If you need help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Syntax differences
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax.
-
-For syntax differences, refer to [Features that differ from PostgreSQL]({% link {{ page.version.version }}/postgresql-compatibility.md %}#features-that-differ-from-postgresql).
-
-### Unsupported features
-
-The following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-## Load PostgreSQL data
-
-You can use [MOLT Fetch]({% link molt/molt-fetch.md %}) to migrate PostgreSQL data to CockroachDB.
-
-Alternatively, you can use one of the following methods to migrate the data:
-
-- {% include {{ page.version.version }}/migration/load-data-import-into.md %}
-
- {% include {{ page.version.version }}/misc/import-perf.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-third-party.md %}
-
-- {% include {{ page.version.version }}/migration/load-data-copy-from.md %}
-
-The [following example](#example-migrate-frenchtowns-to-cockroachdb) uses `IMPORT INTO` to perform the initial data load.
-
-## Example: Migrate `frenchtowns` to CockroachDB
-
-The following steps demonstrate converting a schema, performing an [initial load of data]({% link {{ page.version.version }}/migration-overview.md %}#load-test-data), and [validating data consistency]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries) during a migration.
-
-In the context of a full migration, these steps ensure that PostgreSQL data can be properly migrated to CockroachDB and that your application queries can be tested against the cluster. For details, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#prepare-for-migration).
-
-### Before you begin
-
-The example uses a modified version of the PostgreSQL `french-towns-communes-francais` data set and demonstrates how to migrate the schema and data to a CockroachDB {{ site.data.products.standard }} cluster. To follow along with these steps:
-
-1. Download the `frenchtowns` data set:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- curl -O https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/frenchtowns.sql
- ~~~
-
-1. Create a `frenchtowns` database on your PostgreSQL instance:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- createdb frenchtowns
- ~~~
-
-1. Load the `frenchtowns` data into PostgreSQL, specifying the path of the downloaded file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -a -f frenchtowns.sql
- ~~~
-
-1. Create a free [{{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}), which is used to access the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) and [create the CockroachDB {{ site.data.products.standard }} cluster]({% link cockroachcloud/create-your-cluster.md %}).
-
-{{site.data.alerts.callout_success}}
-{% include cockroachcloud/migration/sct-self-hosted.md %}
-{{site.data.alerts.end}}
-
-### Step 1. Convert the PostgreSQL schema
-
-Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert the `frenchtowns` schema for compatibility with CockroachDB. The schema has three tables: `regions`, `departments`, and `towns`.
-
-1. Dump the PostgreSQL `frenchtowns` schema with the following [`pg_dump`](https://www.postgresql.org/docs/15/app-pgdump.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- pg_dump --schema-only frenchtowns > frenchtowns_schema.sql
- ~~~
-
-1. Open the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) in the {{ site.data.products.cloud }} Console and [add a new PostgreSQL schema]({% link cockroachcloud/migrations-page.md %}#convert-a-schema).
-
- After conversion is complete, [review the results]({% link cockroachcloud/migrations-page.md %}#review-the-schema). The [**Summary Report**]({% link cockroachcloud/migrations-page.md %}#summary-report) shows that there are errors under **Required Fixes**. You must resolve these in order to migrate the schema to CockroachDB.
-
- {{site.data.alerts.callout_success}}
- You can also [add your PostgreSQL database credentials]({% link cockroachcloud/migrations-page.md %}#use-credentials) to have the Schema Conversion Tool obtain the schema directly from the PostgreSQL database.
- {{site.data.alerts.end}}
-
-1. `Missing user: postgres` errors indicate that the SQL user `postgres` is missing from CockroachDB. Click **Add User** to create the user.
-
-1. `Miscellaneous Errors` includes a `SELECT pg_catalog.set_config('search_path', '', false)` statement that can safely be removed. Click **Delete** to remove the statement from the schema.
-
-1. Review the `CREATE SEQUENCE` statements listed under **Suggestions**. Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
-
- For this example, **Acknowledge** the suggestion without making further changes. In practice, after [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration) to CockroachDB, you would modify your CockroachDB schema to use unique and non-sequential primary keys.
-
-1. Click **Retry Migration**. The **Summary Report** now shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.
-
- This example migrates directly to a CockroachDB {{ site.data.products.standard }} cluster. {% include cockroachcloud/migration/sct-self-hosted.md %}
-
-1. Click [**Migrate Schema**]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema) to create a new CockroachDB {{ site.data.products.standard }} cluster with the converted schema. Name the database `frenchtowns`.
-
- You can view this database on the [**Databases** page]({% link cockroachcloud/databases-page.md %}) of the {{ site.data.products.cloud }} Console.
-
-### Step 2. Load the PostgreSQL data
-
-Load the `frenchtowns` data into CockroachDB using [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) with CSV-formatted data. {% include {{ page.version.version }}/sql/export-csv-tsv.md %}
-
-{{site.data.alerts.callout_info}}
-By default, [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table.
-{{site.data.alerts.end}}
-
-1. Dump each table in the PostgreSQL `frenchtowns` database to a CSV-formatted file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY regions TO stdout DELIMITER ',' CSV;" > regions.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY departments TO stdout DELIMITER ',' CSV;" > departments.csv
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- psql frenchtowns -c "COPY towns TO stdout DELIMITER ',' CSV;" > towns.csv
- ~~~
-
-1. Host the files where the CockroachDB cluster can access them.
-
- Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) can pull from, see the following:
- - [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})
- - [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})
-
-    Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.
-
- The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.
-
-1. Open a SQL shell to the CockroachDB `frenchtowns` cluster. To find the command, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and **CockroachDB Client** option. It will look like:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cockroach sql --url "postgresql://{username}@{hostname}:{port}/frenchtowns?sslmode=verify-full"
- ~~~
-
-1. Use [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) to import each PostgreSQL dump file into the corresponding table in the `frenchtowns` database.
-
- The following commands point to a public S3 bucket where the `frenchtowns` data dump files are hosted for this example.
-
- {{site.data.alerts.callout_success}}
- You can add the `row_limit` [option]({% link {{ page.version.version }}/import-into.md %}#import-options) to specify the number of rows to import. For example, `row_limit = '10'` will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO regions
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/regions.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753132185026561 | succeeded | 1 | 26 | 52 | 2338
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO departments
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/departments.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+------+---------------+--------
- 893753147892465665 | succeeded | 1 | 100 | 300 | 11166
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- IMPORT INTO towns
- CSV DATA (
- 'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/towns.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | bytes
- ---------------------+-----------+--------------------+-------+---------------+----------
- 893753162225680385 | succeeded | 1 | 36684 | 36684 | 2485007
- ~~~
-
-1. Recall that `IMPORT INTO` invalidates all [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) constraints on the target table. View the constraints that are defined on `departments` and `towns`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM departments;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- --------------+-------------------------+-----------------+---------------------------------------------------------+------------
- departments | departments_capital_key | UNIQUE | UNIQUE (capital ASC) | t
- departments | departments_code_key | UNIQUE | UNIQUE (code ASC) | t
- departments | departments_name_key | UNIQUE | UNIQUE (name ASC) | t
- departments | departments_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- departments | departments_region_fkey | FOREIGN KEY | FOREIGN KEY (region) REFERENCES regions(code) NOT VALID | f
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SHOW CONSTRAINTS FROM towns;
- ~~~
-
- ~~~
- table_name | constraint_name | constraint_type | details | validated
- -------------+---------------------------+-----------------+-----------------------------------------------------------------+------------
- towns | towns_code_department_key | UNIQUE | UNIQUE (code ASC, department ASC) | t
- towns | towns_department_fkey | FOREIGN KEY | FOREIGN KEY (department) REFERENCES departments(code) NOT VALID | f
- towns | towns_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t
- ~~~
-
-1. To validate the foreign keys, issue an [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) statement for each table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE departments VALIDATE CONSTRAINT departments_region_fkey;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- ALTER TABLE towns VALIDATE CONSTRAINT towns_department_fkey;
- ~~~
-
-### Step 3. Validate the migrated data
-
-Use [MOLT Verify]({% link molt/molt-verify.md %}) to check that the data on PostgreSQL and CockroachDB are consistent.
-
-1. [Install MOLT Verify]({% link molt/molt-verify.md %}).
-
-1. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the PostgreSQL connection string with `--source` and the CockroachDB connection string with `--target`:
-
- {{site.data.alerts.callout_success}}
- To find the CockroachDB connection string, open the **Connect** dialog in the {{ site.data.products.cloud }} Console and select the `frenchtowns` database and the **General connection string** option.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- ./molt verify --source 'postgresql://{username}:{password}@{host}:{port}/frenchtowns' --target 'postgresql://{user}:{password}@{host}:{port}/frenchtowns?sslmode=verify-full'
- ~~~
-
- You will see the initial output:
-
- ~~~
- INF verification in progress
- ~~~
-
- The following output indicates that MOLT Verify has completed verification:
-
- ~~~
- INF finished row verification on public.regions (shard 1/1): truth rows seen: 26, success: 26, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.departments (shard 1/1): truth rows seen: 100, success: 100, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 10000, success: 10000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 20000, success: 20000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF progress on public.towns (shard 1/1): truth rows seen: 30000, success: 30000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF finished row verification on public.towns (shard 1/1): truth rows seen: 36684, success: 36684, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
- INF verification complete
- ~~~
-
-With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary [application changes]({% link {{ page.version.version }}/migration-overview.md %}#application-changes), [validate application queries]({% link {{ page.version.version }}/migration-overview.md %}#validate-queries), and [perform a dry run]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) before [conducting the full migration]({% link {{ page.version.version }}/migration-overview.md %}#conduct-the-migration).
-
-To learn more, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Use the MOLT Verify tool]({% link molt/molt-verify.md %})
-- [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %})
-- [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb)
diff --git a/src/current/v25.2/migrate-from-shapefiles.md b/src/current/v25.2/migrate-from-shapefiles.md
index ea4e7a368e6..44f366d5a69 100644
--- a/src/current/v25.2/migrate-from-shapefiles.md
+++ b/src/current/v25.2/migrate-from-shapefiles.md
@@ -140,9 +140,9 @@ IMPORT INTO tornadoes CSV DATA ('http://localhost:3000/tornadoes.csv') WITH skip
- [Migrate from OpenStreetMap]({% link {{ page.version.version }}/migrate-from-openstreetmap.md %})
- [Migrate from GeoJSON]({% link {{ page.version.version }}/migrate-from-geojson.md %})
- [Migrate from GeoPackage]({% link {{ page.version.version }}/migrate-from-geopackage.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Migrate from MySQL][mysql]
-- [Migrate from PostgreSQL][postgres]
+- [Migration Overview]({% link molt/migration-overview.md %})
+- [Migrate from MySQL]({% link molt/migrate-to-cockroachdb.md %}?filters=mysql)
+- [Migrate from PostgreSQL]({% link molt/migrate-to-cockroachdb.md %})
- [Back Up and Restore Data]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
- [Use the Built-in SQL Client]({% link {{ page.version.version }}/cockroach-sql.md %})
- [`cockroach` Commands Overview]({% link {{ page.version.version }}/cockroach-commands.md %})
diff --git a/src/current/v25.2/migrate-in-phases.md b/src/current/v25.2/migrate-in-phases.md
index 0f7ab399fc7..03b3d5e9b9c 100644
--- a/src/current/v25.2/migrate-in-phases.md
+++ b/src/current/v25.2/migrate-in-phases.md
@@ -5,7 +5,7 @@ toc: true
docs_area: migrate
---
-A phased migration to CockroachDB uses the [MOLT tools]({% link molt/molt-overview.md %}) to [convert your source schema](#step-2-prepare-the-source-schema), incrementally [load source data](#step-3-load-data-into-cockroachdb) and [verify the results](#step-4-verify-the-data-load), and finally [replicate ongoing changes](#step-6-replicate-changes-to-cockroachdb) before performing cutover.
+A phased migration to CockroachDB uses the [MOLT tools]({% link molt/migration-overview.md %}) to [convert your source schema](#step-2-prepare-the-source-schema), incrementally [load source data](#step-3-load-data-into-cockroachdb) and [verify the results](#step-4-verify-the-data-load), and finally [replicate ongoing changes](#step-6-replicate-changes-to-cockroachdb) before performing cutover.
{% assign tab_names_html = "Load and replicate;Phased migration;Failback" %}
{% assign html_page_filenames = "migrate-to-cockroachdb.html;migrate-in-phases.html;migrate-failback.html" %}
@@ -14,7 +14,7 @@ A phased migration to CockroachDB uses the [MOLT tools]({% link molt/molt-overvi
## Before you begin
-- Review the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+- Review the [Migration Overview]({% link molt/migration-overview.md %}).
- Install the [MOLT (Migrate Off Legacy Technology)]({% link releases/molt.md %}#installation) tools.
- Review the MOLT Fetch [setup]({% link molt/molt-fetch.md %}#setup) and [best practices]({% link molt/molt-fetch.md %}#best-practices).
{% include molt/fetch-secure-cloud-storage.md %}
@@ -149,10 +149,9 @@ Perform a cutover by resuming application traffic, now to CockroachDB.
## See also
-- [MOLT Overview]({% link molt/molt-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [MOLT Fetch]({% link molt/molt-fetch.md %})
- [MOLT Verify]({% link molt/molt-verify.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
- [Migrate to CockroachDB]({% link {{ page.version.version }}/migrate-to-cockroachdb.md %})
- [Migration Failback]({% link {{ page.version.version }}/migrate-failback.md %})
\ No newline at end of file
diff --git a/src/current/v25.2/migrate-to-cockroachdb.md b/src/current/v25.2/migrate-to-cockroachdb.md
index be5561c12d8..202fe2c6651 100644
--- a/src/current/v25.2/migrate-to-cockroachdb.md
+++ b/src/current/v25.2/migrate-to-cockroachdb.md
@@ -5,7 +5,7 @@ toc: true
docs_area: migrate
---
-A migration to CockroachDB uses the [MOLT tools]({% link molt/molt-overview.md %}) to [convert your source schema](#step-2-prepare-the-source-schema), [load source data](#step-3-load-data-into-cockroachdb) into CockroachDB and immediately [replicate ongoing changes](#step-4-replicate-changes-to-cockroachdb), and [verify consistency](#step-5-stop-replication-and-verify-data) on the CockroachDB cluster before performing cutover.
+A migration to CockroachDB uses the [MOLT tools]({% link molt/migration-overview.md %}) to [convert your source schema](#step-2-prepare-the-source-schema), [load source data](#step-3-load-data-into-cockroachdb) into CockroachDB and immediately [replicate ongoing changes](#step-4-replicate-changes-to-cockroachdb), and [verify consistency](#step-5-stop-replication-and-verify-data) on the CockroachDB cluster before performing cutover.
{% assign tab_names_html = "Load and replicate;Phased migration;Failback" %}
{% assign html_page_filenames = "migrate-to-cockroachdb.html;migrate-in-phases.html;migrate-failback.html" %}
@@ -14,7 +14,7 @@ A migration to CockroachDB uses the [MOLT tools]({% link molt/molt-overview.md %
## Before you begin
-- Review the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
+- Review the [Migration Overview]({% link molt/migration-overview.md %}).
- Install the [MOLT (Migrate Off Legacy Technology)]({% link releases/molt.md %}#installation) tools.
- Review the MOLT Fetch [setup]({% link molt/molt-fetch.md %}#setup) and [best practices]({% link molt/molt-fetch.md %}#best-practices).
{% include molt/fetch-secure-cloud-storage.md %}
@@ -109,10 +109,9 @@ Perform a cutover by resuming application traffic, now to CockroachDB.
## See also
-- [MOLT Overview]({% link molt/molt-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [MOLT Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [MOLT Fetch]({% link molt/molt-fetch.md %})
- [MOLT Verify]({% link molt/molt-verify.md %})
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
- [Migrate to CockroachDB in Phases]({% link {{ page.version.version }}/migrate-in-phases.md %})
- [Migration Failback]({% link {{ page.version.version }}/migrate-failback.md %})
\ No newline at end of file
diff --git a/src/current/v25.2/migration-overview.md b/src/current/v25.2/migration-overview.md
deleted file mode 100644
index 4e0d7a35e26..00000000000
--- a/src/current/v25.2/migration-overview.md
+++ /dev/null
@@ -1,169 +0,0 @@
----
-title: Migration Overview
-summary: Learn how to migrate your database to a CockroachDB cluster.
-toc: true
-docs_area: migrate
----
-
-This page provides a high-level overview of database migration.
-
-A database migration broadly consists of the following phases:
-
-1. [Develop a migration plan:](#develop-a-migration-plan) Evaluate your [downtime approach](#approach-to-downtime), [size the CockroachDB cluster](#capacity-planning) that you will migrate to, and become familiar with the [application changes](#application-changes) that you need to make for CockroachDB.
-1. [Prepare for migration:](#prepare-for-migration) Run a [pre-mortem](#run-a-migration-pre-mortem), set up [metrics](#set-up-monitoring-and-alerting), [load test data](#load-test-data), [validate your application queries](#validate-queries) for correctness and performance, and [perform a dry run](#perform-a-dry-run) of the migration.
-1. [Conduct the migration:](#conduct-the-migration) Use the [MOLT tools]({% link molt/molt-overview.md %}) to migrate the source data to CockroachDB, replicate ongoing changes, and verify consistency on CockroachDB.
-1. [Complete the migration:](#complete-the-migration) Notify the appropriate parties and summarize the details.
-
-{{site.data.alerts.callout_success}}
-For help migrating to CockroachDB, contact our sales team.
-{{site.data.alerts.end}}
-
-## Develop a migration plan
-
-Consider the following as you plan your migration:
-
-- Who will lead and perform the migration? Which teams are involved, and which aspects are they responsible for?
-- Which internal and external parties do you need to inform about the migration?
-- Which external or third-party tools (e.g., microservices, analytics, payment processors, aggregators, CRMs) must be tested and migrated along with your application?
-- What portion of the data can be inconsistent, and for how long? What level of latency and rate of application errors can you tolerate? This comprises your "error budget".
-- When is the best time to perform this migration to be minimally disruptive to the database's users?
-- What is your target date for completing the migration?
-
-Create a document that summarizes the intent of the migration, the technical details, and the team members involved.
-
-### Approach to downtime
-
-It's important to fully [prepare the migration](#prepare-for-migration) so that you can be confident the migration will complete successfully within the downtime window.
-
-- *Scheduled downtime* is made known to your users in advance. Once you have [prepared for the migration](#prepare-for-migration), you take the application offline, [conduct the migration](#conduct-the-migration), and bring the application back online on CockroachDB. To succeed, you should estimate the amount of downtime required to migrate your data, and ideally schedule the downtime outside of peak hours. Scheduling downtime is easiest if your application traffic is "periodic", meaning that it varies by the time of day, day of week, or day of month.
-
-- *Unscheduled downtime* impacts as few customers as possible, ideally without impacting their regular usage. If your application is intentionally offline at certain times (e.g., outside business hours), you can migrate the data without users noticing. Alternatively, if your application's functionality is not time-sensitive (e.g., it sends batched messages or emails), then you can queue requests while your system is offline, and process those requests after completing the migration to CockroachDB.
-
-- *Reduced functionality* takes some, but not all, application functionality offline. For example, you can disable writes but not reads while you migrate the application data, and queue data to be written after completing the migration.
-
-### Capacity planning
-
-Determine the size of the target CockroachDB cluster. To do this, consider your data volume and workload characteristics:
-
-- What is the total size of the data you will migrate?
-- How many active [application connections]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling) will be running in the CockroachDB environment?
-
-Use this information to size the CockroachDB cluster you will create. If you are migrating to a CockroachDB {{ site.data.products.cloud }} cluster, see [Plan Your Cluster]({% link cockroachcloud/plan-your-cluster.md %}) for details:
-
-- For CockroachDB {{ site.data.products.standard }} and {{ site.data.products.basic }}, your cluster will scale automatically to meet your storage and usage requirements. Refer to the [CockroachDB {{ site.data.products.standard }}]({% link cockroachcloud/plan-your-cluster.md %}) and [CockroachDB {{ site.data.products.basic }}]({% link cockroachcloud/plan-your-cluster-basic.md %}#request-units) documentation to learn how to limit your resource consumption.
-- For CockroachDB {{ site.data.products.advanced }}, refer to the [example]({% link cockroachcloud/plan-your-cluster-advanced.md %}#example) that shows how your data volume, storage requirements, and replication factor affect the recommended node size (number of vCPUs per node) and total number of nodes on the cluster.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.cloud }} [Production Checklist]({% link cockroachcloud/production-checklist.md %}#sql-connection-handling).
-
-If you are migrating to a CockroachDB {{ site.data.products.core }} cluster:
-
-- Refer to our [sizing methodology]({% link {{ page.version.version }}/recommended-production-settings.md %}#sizing) to determine the total number of vCPUs on the cluster and the number of vCPUs per node (which determines the number of nodes on the cluster).
-- Refer to our [storage recommendations]({% link {{ page.version.version }}/recommended-production-settings.md %}#storage) to determine the amount of storage to provision on each node.
-- For guidance on sizing for connection pools, see the CockroachDB {{ site.data.products.core }} [Production Checklist]({% link {{ page.version.version }}/recommended-production-settings.md %}#connection-pooling).
-
-### Application changes
-
-As you develop your migration plan, consider the application changes that you will need to make. These may relate to the following:
-
-- [Designing a schema that is compatible with CockroachDB.](#schema-design-best-practices)
-- [Handling transaction contention.](#handling-transaction-contention)
-- [Unimplemented features and syntax incompatibilities.](#unimplemented-features-and-syntax-incompatibilities)
-
-#### Schema design best practices
-
-Follow these recommendations when converting your schema for compatibility with CockroachDB.
-
-- Define an explicit primary key on every table. For more information, see [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices).
-
-- Do not use a sequence to define a primary key column. Instead, Cockroach Labs recommends that you use [multi-column primary keys]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-multi-column-primary-keys) or [auto-generating unique IDs]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#use-functions-to-generate-unique-ids) for primary key columns.
-
-- By default on CockroachDB, `INT` is an alias for `INT8`, which creates 64-bit signed integers. Depending on your source database or application requirements, you may need to change the integer size to `4`. For example, [PostgreSQL defaults to 32-bit integers](https://www.postgresql.org/docs/9.6/datatype-numeric.html). For more information, see [Considerations for 64-bit signed integers]({% link {{ page.version.version }}/int.md %}#considerations-for-64-bit-signed-integers). A brief example that combines these recommendations follows this list.
-
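-For example, a minimal table definition that follows these recommendations might look like the following sketch (the `employees` table and its columns are illustrative, not drawn from any particular source schema):
-
-~~~ sql
-CREATE TABLE employees (
-    -- Auto-generated unique ID instead of a sequence-based primary key.
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    -- INT4 matches a 32-bit PostgreSQL INTEGER column.
-    department_id INT4 NOT NULL,
-    name STRING NOT NULL
-);
-~~~
-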
-#### Handling transaction contention
-
-Optimize your queries against [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). You may encounter [transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) when you [test application queries](#validate-queries), as well as transaction contention due to long-running transactions when you [conduct the migration](#conduct-the-migration) and bulk load data.
-
-Transaction retry errors are more frequent under CockroachDB's default [`SERIALIZABLE` isolation level]({% link {{ page.version.version }}/demo-serializable.md %}). If you are migrating an application that was built at a `READ COMMITTED` isolation level, you should first [enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
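-
-For example, the following sketch enables `READ COMMITTED` on the cluster and makes it the default for new sessions (refer to the linked page for the authoritative, version-specific procedure):
-
-~~~ sql
--- Allow READ COMMITTED transactions on the cluster.
-SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = true;
-
--- Use READ COMMITTED by default for all new sessions.
-ALTER ROLE ALL SET default_transaction_isolation = 'read committed';
-~~~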
-
-#### Unimplemented features and syntax incompatibilities
-
-Update your queries to resolve differences in functionality and SQL syntax.
-
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and is largely compatible with PostgreSQL syntax. However, the following PostgreSQL features do not yet exist in CockroachDB:
-
-{% include {{page.version.version}}/sql/unsupported-postgres-features.md %}
-
-If your source database uses any of the preceding features, you may need to implement workarounds in your schema design, in your [data manipulation language (DML)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements), or in your application code.
-
-For more details on the CockroachDB SQL implementation, see [SQL Feature Support]({% link {{ page.version.version }}/sql-feature-support.md %}).
-
-## Prepare for migration
-
-Once you have a migration plan, prepare the team, application, source database, and CockroachDB cluster for the migration.
-
-### Run a migration "pre-mortem"
-
-To minimize issues after cutover, compose a migration "pre-mortem":
-
-1. Clearly describe the roles and processes of each team member performing the migration.
-1. List the likely failure points and issues that you may encounter as you [conduct the migration](#conduct-the-migration).
-1. Rank potential issues by severity, and identify ways to reduce risk.
-1. Create a plan for implementing the actions that would most effectively reduce risk.
-
-### Set up monitoring and alerting
-
-Based on the error budget you [defined in your migration plan](#develop-a-migration-plan), identify the metrics that you can use to measure your success criteria and set up monitoring for the migration. These metrics may be identical to those you normally use in production, but can also be specific to your migration needs.
-
-### Load test data
-
-It's useful to load test data into CockroachDB so that you can [test your application queries](#validate-queries). You can use the steps in [Migrate to CockroachDB in Phases]({% link {{ page.version.version }}/migrate-in-phases.md %}) to load and verify test data.
-
-### Validate queries
-
-After you [load the test data](#load-test-data), validate your queries on CockroachDB. You can do this by [shadowing](#shadowing) or by [manually testing](#test-query-results-and-performance) the queries.
-
-Note that CockroachDB defaults to the [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) transaction isolation level. If you are migrating an application that was built at a `READ COMMITTED` isolation level on the source database, you must [enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) on the CockroachDB cluster for compatibility.
-
-#### Shadowing
-
-You can "shadow" your production workload by executing your source SQL statements on CockroachDB in parallel. You can then [validate the queries](#test-query-results-and-performance) on CockroachDB for consistency, performance, and potential issues with the migration.
-
-#### Test query results and performance
-
-You can manually validate your queries by testing a subset of "critical queries" on an otherwise idle CockroachDB cluster:
-
-- Check the application logs for error messages and the API response time. If application requests are slower than expected, use the **SQL Activity** page on the [CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/statements-page.md %}) or [DB Console]({% link {{ page.version.version }}/ui-statements-page.md %}) to find the longest-running queries that are part of that application request. If necessary, tune the queries according to our best practices for [SQL performance]({% link {{ page.version.version }}/performance-best-practices-overview.md %}).
-
-- Compare the results of the queries and check that they are identical in both the source database and CockroachDB. To do this, you can use [MOLT Verify]({% link molt/molt-verify.md %}); for a quick manual spot check, see the sketch after this list.
-
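-For example, a quick manual spot check could compare row counts or simple aggregates for each table on both databases (an illustrative sketch; MOLT Verify performs more thorough comparisons):
-
-~~~ sql
--- Run the same statement on the source database and on CockroachDB,
--- then compare the output.
-SELECT count(*) FROM employees;
-~~~
-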
-Test performance on a CockroachDB cluster that is appropriately [sized](#capacity-planning) for your workload:
-
-1. Run the application at single- or very low concurrency and verify that its performance is acceptable. The cluster should be provisioned with more than enough resources to handle this workload, because you need to verify that the queries will be fast enough when there are no resource bottlenecks.
-
-1. Run stress tests with at least the production concurrency and rate, but ideally higher in order to verify that the system can handle unexpected spikes in load. This can also uncover [contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) issues that will appear during spikes in app load, which may require [application design changes](#handling-transaction-contention) to avoid.
-
-### Perform a dry run
-
-To further minimize potential surprises when you conduct the migration, practice cutover using your application and similar volumes of data in a "dry-run" environment. Use a test or development environment that is as similar as possible to production.
-
-Performing a dry run is highly recommended. In addition to demonstrating how long the migration may take, a dry run also helps to ensure that team members understand what they need to do during the migration, and that changes to the application are coordinated.
-
-## Conduct the migration
-
-Once you are ready to migrate, follow the steps in [Migrate to CockroachDB]({% link {{ page.version.version }}/migrate-to-cockroachdb.md %}) or [Migrate to CockroachDB in Phases]({% link {{ page.version.version }}/migrate-in-phases.md %}).
-
-## Complete the migration
-
-After you have successfully [conducted the migration](#conduct-the-migration):
-
-- Notify the teams and other stakeholders impacted by the migration.
-- Retire any test or development environments used to verify the migration.
-- Extend the document you created when [developing your migration plan](#develop-a-migration-plan) with any issues encountered and follow-up work that needs to be done.
-
-## See also
-
-- [Migrate to CockroachDB]({% link {{ page.version.version }}/migrate-to-cockroachdb.md %})
-- [Migrate to CockroachDB in Phases]({% link {{ page.version.version }}/migrate-in-phases.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices)
-- [Secondary index best practices]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices)
-- [Transaction contention best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention)
\ No newline at end of file
diff --git a/src/current/v25.2/migration-strategy-lift-and-shift.md b/src/current/v25.2/migration-strategy-lift-and-shift.md
deleted file mode 100644
index 98935b546b6..00000000000
--- a/src/current/v25.2/migration-strategy-lift-and-shift.md
+++ /dev/null
@@ -1,139 +0,0 @@
----
-title: "Migration Strategy: Lift and Shift"
-summary: Learn about the 'Lift and Shift' data migration strategy
-toc: true
-docs_area: migrate
----
-
-There are multiple strategies for [migrating off legacy technology]({% link {{ page.version.version }}/migration-overview.md %}) to CockroachDB.
-
-This page discusses the commonly used "Lift and Shift" strategy for migrating your database. In this approach, also known as "Big Bang", your data is moved in its entirety from a source system to a target system within a defined period of time. This typically involves some application downtime and can involve some service degradation.
-
-Lift and Shift may not be the right approach if strong application service continuity is required during the migration. It is a viable method if application downtime is permitted.
-
-{{site.data.alerts.callout_info}}
-The information on this page assumes you have already reviewed the [migration overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-## Pros and cons
-
-On the spectrum of different data migration strategies, Lift and Shift has the following pros and cons. The terms "lower" and "higher" are not absolute, but relative to other approaches.
-
-Pros:
-
-- Conceptually straightforward.
-- Less complex: If you can afford some downtime, the overall effort will usually be lower, with less chance of error.
-- Shorter time start-to-finish: In general, the more downtime you can afford, the shorter the overall migration project timeframe can be.
-- Lower technical risk: It does not involve running multiple systems alongside each other for an extended period of time.
-- Easy to practice [dry runs]({% link {{ page.version.version }}/migration-overview.md %}#perform-a-dry-run) of import/export using testing/non-production systems.
-- Good import/export tooling is available (e.g., external tools like [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %}), [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %}), [Striim]({% link {{ page.version.version }}/striim.md %}); or internal tools like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), [`cockroach userfile upload`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %})).
-- If your application already has regularly scheduled maintenance windows, your customers will not encounter application downtime.
-
-Cons:
-
-- All or nothing: It either works or does not work; once you start, you have to finish or roll back.
-- Higher project risk: The project **must** be completed to meet a given [downtime / service degradation window]({% link {{ page.version.version }}/migration-overview.md %}#approach-to-downtime).
-- Application service continuity requirements must be relaxed (that is, application downtime or increased latency may be needed).
-
-## Process design considerations
-
-{{site.data.alerts.callout_info}}
-The high-level considerations in this section only refer to the data-loading portion of your migration. They assume you are following the steps in the overall migration process described in [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}).
-{{site.data.alerts.end}}
-
-Keep in mind the following considerations when designing a Lift and Shift data migration process.
-
-- [Decide on your data migration tooling.](#managed-migration)
-- [Decide which data formats you will use.](#data-formats)
-- [Design a restartable process.](#restartable)
-- [Design a scalable process.](#scalable)
-
-
-
-### Decide on your data migration tooling
-
-If you plan to do your bulk data migration using a managed migration service, you must have a secure, publicly available CockroachDB cluster. CockroachDB supports the following [third-party migration services]({% link {{ page.version.version }}/third-party-database-tools.md %}#data-migration-tools):
-
-- [AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Striim]({% link {{ page.version.version }}/striim.md %})
-
-{{site.data.alerts.callout_info}}
-Depending on the migration service you choose, [long-running transactions]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}#hanging-or-stuck-queries) can occur. In some cases, these queries will cause [transaction retry errors]({% link {{ page.version.version }}/common-errors.md %}#restart-transaction). If you encounter these errors while migrating to CockroachDB using a managed migration service, please reach out to our [Support Resources]({% link {{ page.version.version }}/support-resources.md %}).
-{{site.data.alerts.end}}
-
-If you will not be using a managed migration service, see the following sections for more information on how to use SQL statements like [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}), etc.
-
-
-
-### Decide which data formats and storage media you will use
-
-It's important to decide which data formats, storage media, and database features you will use to migrate your data.
-
-Data formats that can be imported by CockroachDB include:
-
-- [SQL]({% link {{ page.version.version }}/schema-design-overview.md %}) for the [schema import]({% link cockroachcloud/migrations-page.md %}).
-- [CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}) for table data.
-- [Avro]({% link {{ page.version.version }}/migrate-from-avro.md %}) for table data.
-
-The storage media you export to and import from can be intermediate data files or data streamed over the network. Options include:
-
-- Local "userdata" storage for small tables (see [`cockroach userdata`]({% link {{ page.version.version }}/cockroach-userfile-upload.md %}), [Use a Local File Server]({% link {{ page.version.version }}/use-a-local-file-server.md %})).
-- Cloud blob storage (see [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}), [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}), [Use Cloud Storage]({% link {{ page.version.version }}/use-cloud-storage.md %})).
-- Direct wire transfers (see [managed migration services](#managed-migration)).
-
-Database features for exporting from the source database and importing into the target database can include:
-
-- Tools for exporting from the source database may include `pg_dump --schema-only`, `COPY ... TO`, `mysqldump`, `expdp`, etc.
-- For import into CockroachDB, use [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}) or [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}). For bulk data migrations, most users should use `IMPORT INTO` because the tables will be offline anyway, and `IMPORT INTO` can [perform the data import much faster]({% link {{ page.version.version }}/import-performance-best-practices.md %}) than `COPY FROM`. A brief example follows below.
-
-Decide which of the options above will meet your requirements while resulting in a process that is [restartable](#restartable) and [scalable](#scalable).
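-
-For example, a bulk load with [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) from cloud storage might look like the following sketch (the table, columns, and bucket path are illustrative):
-
-~~~ sql
-IMPORT INTO employees (id, name, start_date)
-    CSV DATA (
-        's3://{bucket}/employees/export-1.csv?AWS_ACCESS_KEY_ID={access key}&AWS_SECRET_ACCESS_KEY={secret key}',
-        's3://{bucket}/employees/export-2.csv?AWS_ACCESS_KEY_ID={access key}&AWS_SECRET_ACCESS_KEY={secret key}'
-    );
-~~~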
-
-
-
-### Design a restartable process
-
-To have a higher chance of success, design your data migration process so that it can be stopped and restarted from an intermediate state at any point. This will help minimize errors and avoid wasted effort.
-
-Keep the following requirements in mind as you design a restartable import/export process:
-
-- Bulk migrate data in batches of a manageable size for your source and target systems.
- - This is a best practice. If something happens to the target cluster during import, the amount of wasted work will be minimized.
-- Implement progress/state keeping with process restart capabilities (see the sketch after this list).
-- Make sure your export process is idempotent: the same input should always produce the same output data.
-- If possible, export and import the majority of your data before taking down the source database. This can ensure that you only have to deal with the incremental changes from your last import to complete the migration process.
-
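-For example, progress/state keeping can be as simple as a tracking table on the target cluster that records the status of each batch (a sketch; the table, statuses, and batch ranges are illustrative):
-
-~~~ sql
-CREATE TABLE IF NOT EXISTS migration_progress (
-    batch_id INT4 PRIMARY KEY,
-    source_range STRING NOT NULL,               -- e.g., a primary key range for the batch
-    status STRING NOT NULL DEFAULT 'pending',   -- e.g., 'pending', 'exported', 'imported', 'verified'
-    updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
-);
-
--- After each batch completes, record it so a restarted process can skip finished work.
-UPSERT INTO migration_progress (batch_id, source_range, status)
-    VALUES (42, 'id >= 1000 AND id < 2000', 'imported');
-~~~
-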
-
-
-### Design a scalable and performant process
-
-Once your process is [restartable and resilient to failures](#design-a-restartable-process), it's also important to make sure that it will scale to the needs of your data set. The larger the data set you are migrating to CockroachDB, the more important the performance and scalability of your process will be.
-
-Keep the following requirements in mind:
-
-- Schema and data should be imported separately.
-- Your process should handle multiple files across multiple export/import streams concurrently.
- - For best performance, these files should contain presorted, disjoint data sets.
-- Benchmark the performance of your migration process to help ensure it will complete within the allotted downtime window.
-
-For more information about import performance, see [Import Performance Best Practices]({% link {{ page.version.version }}/import-performance-best-practices.md %}).
-
-## See also
-
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
-- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
-- [Migrate with AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %})
-- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
-- [Migrate and Replicate Data with Qlik Replicate]({% link {{ page.version.version }}/qlik.md %})
-- [Migrate and Replicate Data with Striim]({% link {{ page.version.version }}/striim.md %})
-- [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %})
-- [Back Up and Restore]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
-- [Export data with Changefeeds]({% link {{ page.version.version }}/export-data-with-changefeeds.md %})
-- [`COPY`]({% link {{ page.version.version }}/copy.md %})
-- [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %})
-- [Migrate from CSV]({% link {{ page.version.version }}/migrate-from-csv.md %})
-- [Migrate from Avro]({% link {{ page.version.version }}/migrate-from-avro.md %})
-- [Client connection parameters]({% link {{ page.version.version }}/connection-parameters.md %})
-
-
-{% comment %} eof {% endcomment %}
diff --git a/src/current/v25.2/qlik.md b/src/current/v25.2/qlik.md
index bdb646d7b20..da53e969bfd 100644
--- a/src/current/v25.2/qlik.md
+++ b/src/current/v25.2/qlik.md
@@ -68,7 +68,7 @@ Complete the following items before using Qlik Replicate:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables, or add transformation rules. If you make substantial schema changes, the Qlik Replicate migration may fail.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -96,7 +96,7 @@ In the Qlik Replicate interface, CockroachDB is configured as a PostgreSQL **sou
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})
diff --git a/src/current/v25.2/read-committed.md b/src/current/v25.2/read-committed.md
index 83d7f9ad7e6..bcf0a5d9f92 100644
--- a/src/current/v25.2/read-committed.md
+++ b/src/current/v25.2/read-committed.md
@@ -13,7 +13,7 @@ docs_area: deploy
- Your application needs to maintain a high workload concurrency with minimal [transaction retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and it can tolerate potential [concurrency anomalies](#concurrency-anomalies). Predictable query performance at high concurrency is more valuable than guaranteed transaction [serializability]({% link {{ page.version.version }}/developer-basics.md %}#serializability-and-transaction-contention).
-- You are [migrating an application to CockroachDB]({% link {{ page.version.version }}/migration-overview.md %}) that was built at a `READ COMMITTED` isolation level on the source database, and it is not feasible to modify your application to use `SERIALIZABLE` isolation.
+- You are [migrating an application to CockroachDB]({% link molt/migration-overview.md %}) that was built at a `READ COMMITTED` isolation level on the source database, and it is not feasible to modify your application to use `SERIALIZABLE` isolation.
Whereas `SERIALIZABLE` isolation guarantees data correctness by placing transactions into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}), `READ COMMITTED` isolation permits some [concurrency anomalies](#concurrency-anomalies) in exchange for minimizing transaction aborts, [retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and blocking. Compared to `SERIALIZABLE` transactions, `READ COMMITTED` transactions do **not** return [serialization errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) that require client-side handling. See [`READ COMMITTED` transaction behavior](#read-committed-transaction-behavior).
@@ -919,4 +919,4 @@ SELECT * FROM schedules
- [Serializable Transactions]({% link {{ page.version.version }}/demo-serializable.md %})
- [What Write Skew Looks Like](https://www.cockroachlabs.com/blog/what-write-skew-looks-like/)
- [Read Committed RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20230122_read_committed_isolation.md)
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
diff --git a/src/current/v25.2/striim.md b/src/current/v25.2/striim.md
index 436d7440cbd..67526c8ff9d 100644
--- a/src/current/v25.2/striim.md
+++ b/src/current/v25.2/striim.md
@@ -37,7 +37,7 @@ Complete the following items before using Striim:
- If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables.
{{site.data.alerts.callout_info}}
- All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices).
+ All tables must have an explicitly defined primary key. For more guidance, see [Migration Strategy]({% link molt/migration-strategy.md %}#schema-design-best-practices).
{{site.data.alerts.end}}
## Migrate and replicate data to CockroachDB
@@ -110,7 +110,7 @@ To perform continuous replication of ongoing changes, create a Striim applicatio
## See also
-- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %})
+- [Migration Overview]({% link molt/migration-overview.md %})
- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %})
- [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %})
- [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %})