CREATE LOGICAL REPLICATION STREAM

Note:

This feature is in preview. This feature is subject to change. To share feedback and/or issues, contact Support.

Logical data replication is only supported in CockroachDB self-hosted clusters.

New in v24.3: The CREATE LOGICAL REPLICATION STREAM statement starts logical data replication (LDR) that runs between a source and destination cluster in an active-active setup.

This page is a reference for the CREATE LOGICAL REPLICATION STREAM SQL statement, which includes information on its parameters and possible options. For a step-by-step guide to set up LDR, refer to the Set Up Logical Data Replication page.

Required privileges

CREATE LOGICAL REPLICATION STREAM requires the REPLICATION system privilege. To grant it, use the GRANT SYSTEM statement:

GRANT SYSTEM REPLICATION TO user;

Synopsis

CREATE LOGICAL REPLICATION STREAM
  FROM {TABLE db_object_name | TABLES ( logical_replication_resources_list )}
  ON source_connection_string
  INTO {TABLE db_object_name | TABLES ( logical_replication_resources_list )}
  [WITH logical_replication_options [, ...]]

Parameters

Parameter Description
db_object_name The fully qualified name of the table on the source or destination cluster. Refer to Examples.
logical_replication_resources_list A list of the fully qualified table names on the source or destination cluster to include in the LDR stream. Refer to the LDR with multiple tables example.
source_connection_string The connection string to the source cluster. Use an external connection to store the source cluster's connection URI. To start LDR, you run CREATE LOGICAL REPLICATION STREAM from the destination cluster.
logical_replication_options Options to modify the behavior of the LDR stream.
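Before starting LDR, the source cluster's connection URI is typically stored as an external connection on the destination cluster. The following is a minimal sketch; the connection name `source_cluster` and the URI are placeholders, so substitute your own values:

```sql
-- Run on the destination cluster. Stores the source cluster's
-- connection URI under a named external connection.
CREATE EXTERNAL CONNECTION source_cluster
  AS 'postgresql://{user}@{source_host}:26257/?sslmode=verify-full';
```

The CREATE LOGICAL REPLICATION STREAM statement can then reference it as 'external://source_cluster'.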

Options

Option Description
cursor Emits any changes after the specified timestamp. With the cursor option, LDR does not perform an initial backfill; it streams only the changes that occur after the specified timestamp. The LDR job will encounter an error if you specify a cursor timestamp that is earlier than the configured garbage collection window for that table. Warning: Apply the cursor option carefully to LDR streams. Specifying an incorrect timestamp could cause data loss.
discard Ignores row-level TTL deletes in an LDR stream. Use discard = ttl-deletes. Note: To ignore row-level TTL deletes, you must also set the ttl_disable_changefeed_replication storage parameter on the source table. Refer to the Ignore row-level TTL deletes example.
label Tracks LDR metrics at the job level. Add a user-specified string with label. Refer to Metrics labels.
mode Determines how LDR replicates the data to the destination cluster. Possible values: immediate, validated. For more details refer to LDR modes.
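Options can be combined in a single WITH clause. As an illustrative sketch only (the label string is a placeholder, and the cursor timestamp placeholder must be replaced with a timestamp inside the table's garbage collection window):

```sql
CREATE LOGICAL REPLICATION STREAM
  FROM TABLE {database.public.table_name}
  ON 'external://{source_external_connection}'
  INTO TABLE {database.public.table_name}
  -- cursor skips the initial backfill; label tags job-level metrics.
  WITH cursor = '{timestamp}', label = 'ldr_orders';
```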

LDR modes

Modes determine how LDR replicates the data to the destination cluster. There are two modes:

  • immediate (default): Attempts to replicate the changed row directly into the destination table, without re-running constraint validations. It does not support writing into tables with foreign key constraints.
  • validated: Attempts to apply the write in the same way as a user-run query, re-running all constraint validations and triggers relevant to the destination table(s). If the write fails validation, LDR rejects the change and sends the row to the dead letter queue (DLQ) instead of writing it.
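For example, to opt into validated mode rather than the default, pass it as an option when starting the stream (a sketch with placeholder names):

```sql
CREATE LOGICAL REPLICATION STREAM
  FROM TABLE {database.public.table_name}
  ON 'external://{source_external_connection}'
  INTO TABLE {database.public.table_name}
  -- Re-run constraint validations on the destination; failed
  -- writes are routed to the DLQ instead of being applied.
  WITH mode = validated;
```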

Bidirectional LDR

Bidirectional LDR consists of two clusters with two LDR jobs running in opposite directions between the clusters. If you're setting up bidirectional LDR, both clusters will act as a source and a destination in the respective LDR jobs.

LDR supports starting with two empty tables, or one empty and one non-empty table. LDR does not support starting with two non-empty tables. When you set up bidirectional LDR starting with one non-empty table, start the first LDR job from the empty table into the non-empty table. That is, run CREATE LOGICAL REPLICATION STREAM from the destination cluster, where the non-empty table exists.
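As a sketch of the bidirectional setup, assuming two clusters A and B with placeholder external connection names pointing at each other, you run one statement on each cluster so that each acts as the destination for one stream:

```sql
-- Run on cluster B (destination for the first stream):
CREATE LOGICAL REPLICATION STREAM
  FROM TABLE {database.public.table_name}
  ON 'external://{cluster_a_connection}'
  INTO TABLE {database.public.table_name};

-- Then run on cluster A (destination for the second stream):
CREATE LOGICAL REPLICATION STREAM
  FROM TABLE {database.public.table_name}
  ON 'external://{cluster_b_connection}'
  INTO TABLE {database.public.table_name};
```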

Examples

To start LDR, you must run the CREATE LOGICAL REPLICATION STREAM statement from the destination cluster. Use the fully qualified table name(s) and ensure that the source and destination table names are identical. The following examples show statement usage with different options and use cases.

Start an LDR stream

There are tradeoffs between running one table per LDR job and replicating multiple tables in one LDR job. Multiple tables in one job can be easier to operate: for example, pausing and resuming the single job stops and resumes LDR for all of its tables. However, observability is then limited to the job level. Running one table per LDR job allows for table-level observability.
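To illustrate the job-level operational model (the job ID below is a placeholder for the ID returned when the stream is created), pausing a multi-table LDR job stops replication for every table it includes:

```sql
-- Pause replication for all tables in the LDR job, then resume it.
PAUSE JOB {ldr_job_id};
RESUME JOB {ldr_job_id};
```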

Single table

CREATE LOGICAL REPLICATION STREAM FROM TABLE {database.public.table_name} ON 'external://{source_external_connection}' INTO TABLE {database.public.table_name};

Multiple tables

CREATE LOGICAL REPLICATION STREAM FROM TABLES ({database.public.table_name},{database.public.table_name},...) ON 'external://{source_external_connection}' INTO TABLES ({database.public.table_name},{database.public.table_name},...);

Ignore row-level TTL deletes

If you would like to ignore row-level TTL deletes in the LDR stream, set the ttl_disable_changefeed_replication storage parameter on the table. On the source cluster, alter the table to set the table storage parameter:

ALTER TABLE {table_name} SET (ttl_disable_changefeed_replication = 'true');

When you start LDR on the destination cluster, include the discard = ttl-deletes option in the statement:

CREATE LOGICAL REPLICATION STREAM FROM TABLE {database.public.table_name} ON 'external://{source_external_connection}' INTO TABLE {database.public.table_name} WITH discard = ttl-deletes;
