Known Limitations in CockroachDB v24.2

New limitations in v24.2

This section describes newly identified limitations in CockroachDB v24.2.

Generic query plans

  • Because generic query plans use lookup joins instead of the scans and revscans used by custom query plans, generic query plans do not perform as well as custom query plans in some cases. #128916
  • Generic query plans are not included in the plan cache. This means a generic query plan built and optimized for a prepared statement in one session cannot be used by another session. To reuse generic query plans for maximum performance, a prepared statement should be executed multiple times instead of prepared and executed once. #128911
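
To get plan reuse within a single session, prepare the statement once and execute it repeatedly. The following is a minimal sketch; it assumes the plan_cache_mode session variable is used to opt into generic plans, and the statement and table names are hypothetical:

SET plan_cache_mode = force_generic_plan;
PREPARE find_user AS SELECT * FROM users WHERE id = $1;
-- Repeated executions in this session reuse the generic plan built for the prepared statement.
-- Another session cannot reuse it.
EXECUTE find_user (1);
EXECUTE find_user (2);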

Limitations from v24.1 and earlier

This section describes limitations from previous CockroachDB versions that still impact v24.2.

SQL statements

Syntax and behavior differences from PostgreSQL

CockroachDB supports the PostgreSQL wire protocol and the majority of its syntax. For a list of known differences in syntax and behavior between CockroachDB and PostgreSQL, see Features that differ from PostgreSQL.

AS OF SYSTEM TIME limitations

  • CockroachDB does not support placeholders in AS OF SYSTEM TIME. The time value must be a constant value embedded in the SQL string. #30955
  • The ANALYZE alias of CREATE STATISTICS does not support specifying an AS OF SYSTEM TIME timestamp. ANALYZE statements use AS OF SYSTEM TIME '-0.001ms' automatically. For more control over the statistics interval, use the CREATE STATISTICS syntax instead. #96430
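
For example, the following is a minimal sketch of collecting statistics as of a past timestamp with CREATE STATISTICS; the statistics and table names are hypothetical:

CREATE STATISTICS users_stats FROM users AS OF SYSTEM TIME '-1m';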

COPY syntax not supported by CockroachDB

CockroachDB does not yet support the following COPY syntax:

  • COPY ... WITH FREEZE. #85573
  • COPY ... WITH QUOTE. #85574
  • COPY ... FROM ... WHERE <expr>. #54580

IMPORT INTO limitations

IMPORT INTO has the following limitations:

  • While data is being imported into an existing table, the table is taken offline.
  • After importing into an existing table, constraints will be un-validated and need to be re-validated.
  • Imported rows must not conflict with existing rows in the table or any unique secondary indexes.
  • IMPORT INTO works for only a single existing table.
  • IMPORT INTO can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the kv.bulk_io_write.max_rate cluster setting to a value below your max disk write speed. For example, to set it to 10MB/s, execute:

    SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';
    

ALTER VIEW limitations

ALTER VIEW does not currently support:

  • Changing the SELECT statement executed by a view. Instead, you must drop the existing view and create a new view, as sketched below.
  • Renaming a view that other views depend on. This feature may be added in the future. #10083
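
The following is a minimal sketch of the drop-and-recreate workaround; the view definition is hypothetical:

DROP VIEW user_emails;
CREATE VIEW user_emails AS SELECT id, email FROM users WHERE email IS NOT NULL;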

Row-Level TTL limitations

  • Any queries you run against tables with Row-Level TTL enabled (or against tables that have foreign keys that reference TTL-enabled tables) do not filter out expired rows from the result set (this includes UPDATEs and DELETEs). This feature may be added in a future release. For now, follow the instructions in Filter out expired rows from a selection query; a minimal sketch appears after this list.
  • Enabling Row-Level TTL on a table with multiple secondary indexes can have negative performance impacts on a cluster, including increased latency and contention. This is particularly true for large tables with terabytes of data and billions of rows that are split up into multiple ranges across separate nodes.
    • Increased latency may occur because secondary indexes aren't necessarily stored on the same underlying ranges as a table's primary indexes. Further, the secondary indexes' ranges may have leaseholders located on different nodes than the primary index.
    • Increased contention may occur because intents must be written as part of performing the deletions.
    • Finally, secondary indexes can also have a negative impact on the overall performance of TTL jobs. According to internal testing, the TTL job processing rate is worse on tables with secondary indexes. If you encounter this situation, decreasing the ttl_delete_batch_size storage parameter may help by decreasing the number of ranges that need to be accessed by the job.
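
As noted in the first bullet above, expired rows must be filtered out manually. The following is a minimal sketch that assumes the table was created with the ttl_expire_after storage parameter, which adds the hidden crdb_internal_expiration column; the table name is hypothetical:

SELECT * FROM events WHERE crdb_internal_expiration > now();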

CAST expressions containing a subquery with an ENUM target are not supported

Casting subqueries to ENUMs in views and UDFs is not supported. #108184

Statements containing multiple modification subqueries of the same table are disallowed

Statements containing multiple modification subqueries mutating the same row could cause corruption. These statements are disallowed by default, but you can enable them by setting the sql.multiple_modifications_of_table.enabled cluster setting to true.

If multiple mutations inside the same statement affect different tables with FOREIGN KEY relations and ON CASCADE clauses between them, the results will be different from what is expected in PostgreSQL. #70731

Using default_int_size session variable in batch of statements

When setting the default_int_size session variable in a batch of statements such as SET default_int_size='int4'; SELECT 1::INT, the default_int_size variable will not take effect until the next statement. This is because statement parsing is asynchronous with respect to statement execution.

As a workaround, set default_int_size via your database driver, or ensure that SET default_int_size is in its own statement. #32846
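
A minimal sketch of the workaround, sending the SET in its own statement rather than batching it with the query:

SET default_int_size='int4';
SELECT 1::INT;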

Overload resolution for collated strings

Many string operations are not properly overloaded for collated strings, for example:

> SELECT 'string1' || 'string2';
     ?column?
------------------
  string1string2
(1 row)
> SELECT ('string1' collate en) || ('string2' collate en);
pq: unsupported binary operator: <collatedstring{en}> || <collatedstring{en}>

#10679

Current sequence value not checked when updating min/max value

Altering the minimum or maximum value of a sequence does not check the current value of the sequence. This means that it is possible to silently set the maximum to a value less than, or the minimum to a value greater than, the current value. #23719
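
A minimal sketch of the behavior; the sequence name is hypothetical:

CREATE SEQUENCE s;
SELECT setval('s', 100);
ALTER SEQUENCE s MAXVALUE 10;  -- succeeds silently even though the current value is 100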

null_ordered_last does not produce correct results with tuples

By default, CockroachDB orders NULLs before all other values. For compatibility with PostgreSQL, the null_ordered_last session variable was added, which changes the default to order NULL values after all other values. This works in most cases, due to some transformations CockroachDB makes in the optimizer to add extra ordering columns. However, it does not work when the ordering column is a tuple. #93558

Functions and procedures

PL/pgSQL support

  • It is not possible to use a variable as a target more than once in the same INTO clause. For example, SELECT 1, 2 INTO x, x;. #121605
  • PLpgSQL variable declarations cannot inherit the type of a table row or column using %TYPE or %ROWTYPE syntax. #114676
  • PL/pgSQL arguments cannot be referenced with ordinals (e.g., $1, $2). #114701
  • The following statements are not supported:
    • FOR loops, including FOR cursor loops, FOR query loops, and FOREACH loops. #105246
    • RETURN NEXT and RETURN QUERY. #117744
    • PERFORM, EXECUTE, GET DIAGNOSTICS, and CASE. #117744
  • PL/pgSQL exception blocks cannot catch transaction retry errors. #111446
  • RAISE statements cannot be annotated with names of schema objects related to the error (i.e., using COLUMN, CONSTRAINT, DATATYPE, TABLE, or SCHEMA). #106237
  • RAISE statements message the client directly, and do not produce log output. #117750
  • ASSERT debugging checks are not supported. #117744
  • RECORD parameters and variables are not supported in user-defined functions. #105713
  • Variable shadowing (e.g., declaring a variable with the same name in an inner block) is not supported in PL/pgSQL. #117508
  • Syntax for accessing members of composite types without parentheses is not supported. #114687
  • NOT NULL variable declarations are not supported. #105243
  • Cursors opened in PL/pgSQL execute their queries on opening, affecting performance and resource usage. #111479
  • Cursors in PL/pgSQL cannot be declared with arguments. #117746
  • OPEN FOR EXECUTE is not supported for opening cursors. #117744
  • The print_strict_params option is not supported in PL/pgSQL. #123671
  • The FOUND local variable, which checks whether a statement affected any rows, is not supported in PL/pgSQL. #122306
  • By default, when a PL/pgSQL variable conflicts with a column name, CockroachDB resolves the ambiguity by treating it as a column reference rather than a variable reference. This behavior differs from PostgreSQL, where an ambiguous column error is reported, and it is possible to change the plpgsql.variable_conflict setting in order to prefer either columns or variables. #115680
  • It is not possible to define a RECORD-returning PL/pgSQL function that returns different-typed expressions from different RETURN statements. CockroachDB requires a consistent return type for RECORD-returning functions. #115384
  • Variables cannot be declared with an associated collation using the COLLATE keyword. #105245
  • Variables cannot be accessed using the label.var_name pattern. #122322

UDF and stored procedure support

  • Routines cannot be invoked with named arguments, e.g., SELECT foo(a => 1, b => 2); or SELECT foo(b := 1, a := 2);. #122264
  • Routines cannot be created if they reference temporary tables. #121375
  • Routines cannot be created with unnamed INOUT parameters. For example, CREATE PROCEDURE p(INOUT INT) AS $$ BEGIN NULL; END; $$ LANGUAGE PLpgSQL;. #121251
  • Routines cannot be created if they return fewer columns than declared. For example, CREATE FUNCTION f(OUT sum INT, INOUT a INT, INOUT b INT) LANGUAGE SQL AS $$ SELECT (a + b, b); $$;. #121247
  • Routines cannot be created with an OUT parameter of type RECORD. #123448
  • DDL statements (e.g., CREATE TABLE, CREATE INDEX) are not allowed within UDFs or stored procedures. #110080
  • Polymorphic types cannot be cast to other types (e.g., TEXT) within routine parameters. #123536
  • Routine parameters and return types cannot be declared using the ANYENUM polymorphic type, which is able to match any ENUM type. #123048

  • COMMIT and ROLLBACK statements are not supported within nested procedures. #122266

  • A RECORD-returning UDF cannot be created without a RETURN statement in the root block, which would restrict the wildcard type to a concrete one. #122945

  • User-defined functions are not currently supported in:

    • Expressions (column, index, constraint) in tables. #87699
    • Views. #87699
  • User-defined functions cannot call themselves recursively. #93049

  • Common table expressions (CTE), recursive or non-recursive, are not supported in user-defined functions (UDF). That is, you cannot use a WITH clause in the body of a UDF. #92961

  • The setval function cannot be resolved when used inside UDF bodies. #110860

  • Casting subqueries to user-defined types in UDFs is not supported. #108184

Transactions

Read Committed features and performance

Read Committed isolation has the following limitations:

  • Schema changes (e.g., CREATE TABLE, CREATE SCHEMA, CREATE INDEX) cannot be performed within explicit READ COMMITTED transactions when the autocommit_before_ddl session setting is set to off, and will cause the transaction to abort. As a workaround, set the transaction's isolation level to SERIALIZABLE, as sketched after this list. #114778
  • READ COMMITTED transactions performing INSERT, UPDATE, or UPSERT cannot access REGIONAL BY ROW tables in which UNIQUE and PRIMARY KEY constraints exist, the region is not included in the constraint, and the region cannot be computed from the constraint columns. #110873
  • Multi-column-family checks during updates are not supported under READ COMMITTED isolation. #112488
  • Because locks acquired by foreign key checks, SELECT FOR UPDATE, and SELECT FOR SHARE are fully replicated under READ COMMITTED isolation, some queries experience a delay for Raft replication.
  • Foreign key checks are not performed in parallel under READ COMMITTED isolation.
  • SELECT FOR UPDATE and SELECT FOR SHARE statements are less optimized under READ COMMITTED isolation than under SERIALIZABLE isolation. Under READ COMMITTED isolation, SELECT FOR UPDATE and SELECT FOR SHARE usually perform an extra lookup join for every locked table when compared to the same queries under SERIALIZABLE. In addition, some optimization steps (such as de-correlation of correlated subqueries) are not currently performed on these queries.
  • Regardless of isolation level, SELECT FOR UPDATE and SELECT FOR SHARE statements in CockroachDB do not prevent insertion of new rows matching the search condition (i.e., phantom reads). This matches PostgreSQL behavior at all isolation levels. #120673
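
A minimal sketch of the workaround noted in the first bullet, running the DDL in a SERIALIZABLE transaction; the table definition is hypothetical:

BEGIN ISOLATION LEVEL SERIALIZABLE;
CREATE TABLE audit_log (id INT PRIMARY KEY, msg STRING);
COMMIT;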

Follower reads

Exact staleness reads and long-running writes

Long-running write transactions will create write intents with a timestamp near when the transaction began. When an exact staleness follower read encounters a write intent, it will often end up in a "transaction wait queue", waiting for the operation to complete; however, this runs counter to the benefit exact staleness reads provide.

To counteract this, you can issue all follower reads in explicit transactions set with HIGH priority:

BEGIN PRIORITY HIGH AS OF SYSTEM TIME follower_read_timestamp();
SELECT ...
SELECT ...
COMMIT;

Exact staleness read timestamps must be far enough in the past

If an exact staleness read is not using an AS OF SYSTEM TIME value far enough in the past, CockroachDB cannot perform a follower read. Instead, the read must access the leaseholder replica. This adds network latency if the leaseholder is not the closest replica to the gateway node. Most users will use the follower_read_timestamp() function to get a timestamp far enough in the past that there is a high probability of getting a follower read.
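
A minimal sketch of such a read; the table name is hypothetical:

SELECT * FROM orders AS OF SYSTEM TIME follower_read_timestamp();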

Bounded staleness read limitations

Bounded staleness reads have the following limitations:

  • They can only be used for queries that read a single row.
  • They cannot be used for queries that require an index join.

For example, let's look at a read query that cannot be served as a bounded staleness read. We will use a demo cluster, which automatically loads the MovR dataset.

cockroach demo
SELECT code FROM promo_codes AS OF SYSTEM TIME with_max_staleness('10s') LIMIT 1;
ERROR: unimplemented: cannot use bounded staleness for queries that may touch more than one row or require an index join
SQLSTATE: 0A000
HINT: You have attempted to use a feature that is not yet implemented.
See: https://go.crdb.dev/issue-v/67562/v23.2

As noted by the error message, this query cannot be served as a bounded staleness read because in this case it would touch more than one row. Even though we used a LIMIT 1 clause, the query would still have to touch more than one row in order to filter out the additional results.

We can verify that more than one row would be touched by issuing EXPLAIN on the same query, but without the AS OF SYSTEM TIME clause:

EXPLAIN SELECT code FROM promo_codes LIMIT 1;
                                     info
-------------------------------------------------------------------------------
  distribution: full
  vectorized: true

  • scan
    estimated row count: 1 (0.10% of the table; stats collected 1 minute ago)
    table: promo_codes@primary
    spans: LIMITED SCAN
    limit: 1
(8 rows)

The output verifies that this query performs a scan of the primary index on the promo_codes table, which is why it cannot be used for a bounded staleness read.

For an example showing how to successfully perform a bounded staleness read, see Run queries that use bounded staleness follower reads.

SELECT FOR UPDATE locks are dropped on lease transfers and range splits/merges

By default under SERIALIZABLE isolation, locks acquired using SELECT ... FOR UPDATE and SELECT ... FOR SHARE are implemented as fast, in-memory unreplicated locks. If a lease transfer or range split/merge occurs on a range held by an unreplicated lock, the lock is dropped. The following behaviors can occur:

  • The desired ordering of concurrent accesses to one or more rows of a table expressed by your use of SELECT ... FOR UPDATE may not be preserved (that is, a transaction B against some table T that was supposed to wait behind another transaction A operating on T may not wait for transaction A).
  • The transaction that acquired the (now dropped) unreplicated lock may fail to commit, leading to transaction retry errors with code 40001 and the restart transaction error message.

When running under SERIALIZABLE isolation, SELECT ... FOR UPDATE and SELECT ... FOR SHARE locks should be thought of as best-effort, and should not be relied upon for correctness. Note that serialization is preserved despite this limitation. This limitation is fixed when the enable_durable_locking_for_serializable session setting is set to true.
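
A minimal sketch of enabling that session setting for the current session:

SET enable_durable_locking_for_serializable = true;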

Note:

This limitation does not apply to READ COMMITTED transactions.

SET does not ROLLBACK in a transaction

SET does not properly apply ROLLBACK within a transaction. For example, in the following transaction, showing the TIME ZONE variable does not return 2 as expected after the rollback:

SET TIME ZONE +2;
BEGIN;
SET TIME ZONE +3;
ROLLBACK;
SHOW TIME ZONE;
timezone
------------
3

#69396

ROLLBACK TO SAVEPOINT in high-priority transactions containing DDL

Transactions with priority HIGH that contain DDL and ROLLBACK TO SAVEPOINT are not supported, as they could result in a deadlock. For example:

> BEGIN PRIORITY HIGH; SAVEPOINT s; CREATE TABLE t(x INT); ROLLBACK TO SAVEPOINT s;
ERROR: unimplemented: cannot use ROLLBACK TO SAVEPOINT in a HIGH PRIORITY transaction containing DDL
SQLSTATE: 0A000
HINT: You have attempted to use a feature that is not yet implemented.
See: https://github.com/cockroachdb/cockroach/issues/46414

#46414

CANCEL JOB limitations

  • To avoid transaction states that cannot properly roll back, the following statements cannot be cancelled with CANCEL JOB:

  • When an Enterprise RESTORE is canceled, partially restored data is properly cleaned up. This can have a minor, temporary impact on cluster performance.

SQL cursor support

CockroachDB implements SQL cursor support with the following limitations:

  • DECLARE only supports forward cursors. Reverse cursors created with DECLARE SCROLL are not supported. #77102
  • FETCH supports forward, relative, and absolute variants, but only for forward cursors. #77102
  • BINARY CURSOR, which returns data in the Postgres binary format, is not supported. #77099
  • WITH HOLD, which allows a cursor to remain open after its parent transaction commits by buffering its results, is accepted as valid syntax within a single transaction but is not supported. It acts as a no-op: the cursor does not actually live beyond its parent transaction, so you must still close the cursor within the transaction in which it was created. #77101

    • This syntax is accepted (but does not have any effect):

      BEGIN;
      DECLARE test_cur CURSOR WITH HOLD FOR SELECT * FROM foo ORDER BY bar;
      CLOSE test_cur;
      COMMIT;
      
    • This syntax is not accepted, and will result in an error:

      BEGIN;
      DECLARE test_cur CURSOR WITH HOLD FOR SELECT * FROM foo ORDER BY bar;
      COMMIT; -- This will fail with an error because CLOSE test_cur was not called inside the transaction.
      
  • Scrollable cursor (also known as reverse FETCH) is not supported. #77102

  • SELECT ... FOR UPDATE with a cursor is not supported. #77103

  • Respect for SAVEPOINTs is not supported. Cursor definitions do not disappear properly if rolled back to a SAVEPOINT from before they were created. #77104

Materialized views inside transactions

  • CockroachDB cannot refresh materialized views inside explicit transactions. Trying to refresh a materialized view inside an explicit transaction will result in an error.

    1. Start cockroach demo with the sample bank data set:

      cockroach demo bank
      
    2. Create the materialized view described in Usage.

    3. Start a new multi-statement transaction with BEGIN TRANSACTION:

        BEGIN TRANSACTION;
      
    4. Inside the open transaction, attempt to refresh the view. This will result in an error.

        REFRESH MATERIALIZED VIEW overdrawn_accounts;
      
        ERROR: cannot refresh view in an explicit transaction
        SQLSTATE: 25000
      

    #66008

Schemas and indexes

Online schema change limitations

Schema changes within transactions

Most schema changes should not be performed within an explicit transaction with multiple statements, as they do not have the same atomicity guarantees as other SQL statements. Execute schema changes either as single statements (as an implicit transaction), or in an explicit transaction consisting of the single schema change statement. There are some exceptions to this, detailed below.

Schema changes keep your data consistent at all times, but they do not run inside transactions in the general case. Making schema changes transactional would mean requiring a given schema change to propagate across all the nodes of a cluster. This would block all user-initiated transactions being run by your application, since the schema change would have to commit before any other transactions could make progress. This would prevent the cluster from servicing reads and writes during the schema change, requiring application downtime.

Tip:

Some tools and applications may be able to work around CockroachDB's lack of transactional schema changes by enabling a setting that automatically commits before running schema changes inside transactions.

Some schema change operations can be run within explicit, multiple statement transactions. CREATE TABLE and CREATE INDEX statements can be run within the same transaction with the same atomicity guarantees as other SQL statements. There are no performance or rollback issues when using these statements within a multiple statement transaction.
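
A minimal sketch of combining these statements in one explicit transaction; the table and column names are hypothetical:

BEGIN;
CREATE TABLE customers (id INT PRIMARY KEY, name STRING);
CREATE INDEX ON customers (name);
COMMIT;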

Within a single transaction:

Note:

If a schema change within a transaction fails, manual intervention may be needed to determine which statement has failed. After determining which schema change(s) failed, you can then retry the schema change.

Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed

Most schema change DDL statements that run inside a multi-statement transaction with non-DDL statements can fail at COMMIT time, even if other statements in the transaction succeed. This leaves such transactions in a "partially committed, partially aborted" state that may require manual intervention to determine whether the DDL statements succeeded.

Some DDL statements do not have this limitation. CREATE TABLE and CREATE INDEX statements have the same atomicity guarantees as other statements within a transaction.

If such a failure occurs, CockroachDB will emit a CockroachDB-specific error code, XXA00, and the following error message:

transaction committed but schema change aborted with error: <description of error>
HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
Manual inspection may be required to determine the actual state of the database.
Warning:

If you must execute schema change DDL statements inside a multi-statement transaction, we strongly recommend checking for this error code and handling it appropriately every time you execute such transactions.

This error will occur in various scenarios, including but not limited to:

  • Creating a unique index fails because values aren't unique.
  • The evaluation of a computed value fails.
  • Adding a constraint (or a column with a constraint) fails because the constraint is violated for the default/computed values in the column.

To see an example of this error, start by creating the following table.

CREATE TABLE T(x INT);
INSERT INTO T(x) VALUES (1), (2), (3);

Then, enter the following multi-statement transaction, which will trigger the error.

BEGIN;
ALTER TABLE t ADD CONSTRAINT unique_x UNIQUE(x);
INSERT INTO T(x) VALUES (3);
COMMIT;
pq: transaction committed but schema change aborted with error: (23505): duplicate key value (x)=(3) violates unique constraint "unique_x"
HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
Manual inspection may be required to determine the actual state of the database.

In this example, the INSERT statement committed, but the ALTER TABLE statement adding a UNIQUE constraint failed. We can verify this by looking at the data in table t and seeing that the additional non-unique value 3 was successfully inserted.

SELECT * FROM t;
  x
+---+
  1
  2
  3
  3
(4 rows)

No online schema changes if primary key change in progress

You cannot start an online schema change on a table if a primary key change is currently in progress on the same table.

No online schema changes between executions of prepared statements

When the schema of a table targeted by a prepared statement changes after the prepared statement is created, future executions of the prepared statement could result in an error. For example, adding a column to a table referenced in a prepared statement with a SELECT * clause will result in an error:

CREATE TABLE users (id INT PRIMARY KEY);
PREPARE prep1 AS SELECT * FROM users;
ALTER TABLE users ADD COLUMN name STRING;
INSERT INTO users VALUES (1, 'Max Roach');
EXECUTE prep1;
ERROR: cached plan must not change result type
SQLSTATE: 0A000

It's therefore recommended to explicitly list result columns instead of using SELECT * in prepared statements, when possible.

Adding a column with sequence-based DEFAULT values

It is currently not possible to add a column to a table when the column uses a sequence as the DEFAULT value, for example:

> CREATE TABLE t (x INT);
> INSERT INTO t(x) VALUES (1), (2), (3);
> CREATE SEQUENCE s;
> ALTER TABLE t ADD COLUMN y INT DEFAULT nextval('s');
ERROR: failed to construct index entries during backfill: nextval(): unimplemented: cannot evaluate scalar expressions containing sequence operations in this context
SQLSTATE: 0A000
HINT: You have attempted to use a feature that is not yet implemented.
See: https://go.crdb.dev/issue-v/42508/v24.2

#42508

Dropping a column referenced by a partial index

CockroachDB prevents a column from being dropped using ALTER TABLE ... DROP COLUMN if it is referenced by a partial index predicate. To drop such a column, the partial indexes need to be dropped first using DROP INDEX. #97813.
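
A minimal sketch of the required order of operations; table, column, and index names are hypothetical:

CREATE TABLE t3 (a INT, b INT);
CREATE INDEX partial_idx ON t3 (a) WHERE b > 0;
-- ALTER TABLE t3 DROP COLUMN b;  -- rejected while partial_idx references column b
DROP INDEX t3@partial_idx;
ALTER TABLE t3 DROP COLUMN b;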

Declarative schema changer does not track rows in system.privileges

The declarative schema changer does not track rows in the system.privileges table, which prevents the declarative schema changer from successfully running the DROP OWNED BY statement. #88149

New values generated by DEFAULT expressions during ALTER TABLE ADD COLUMN

When executing an ALTER TABLE ADD COLUMN statement with a DEFAULT expression, new values generated:

  • use the default search path regardless of the search path configured in the current session via SET SEARCH_PATH.
  • use the UTC time zone regardless of the time zone configured in the current session via SET TIME ZONE.
  • have no default database regardless of the default database configured in the current session via SET DATABASE, so you must specify the database of any tables they reference.
  • use the transaction timestamp for the statement_timestamp() function regardless of the time at which the ALTER statement was issued.

Some column-dropping schema changes do not roll back properly

Some schema changes that drop columns cannot be rolled back properly.

In some cases, the rollback will succeed, but the column data might be partially or totally missing, or stale due to the asynchronous nature of the schema change. #46541

In other cases, the rollback will fail in such a way that will never be cleaned up properly, leaving the table descriptor in a state where no other schema changes can be run successfully. #47712

To reduce the chance that a column drop will roll back incorrectly:

  • Perform column drops in transactions separate from other schema changes. This ensures that other schema change failures will not cause the column drop to be rolled back.

  • Drop all constraints (including unique indexes) on the column in a separate transaction, before dropping the column.

  • Drop any default values or computed expressions on a column before attempting to drop the column. This prevents conflicts between constraints and default/computed values during a column drop rollback.
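
A minimal sketch of that sequence, with each statement run as its own implicit transaction; the table, column, and index names are hypothetical:

ALTER TABLE accounts ALTER COLUMN notes DROP DEFAULT;
DROP INDEX accounts@accounts_notes_key CASCADE;
ALTER TABLE accounts DROP COLUMN notes;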

If you think a rollback of a column-dropping schema change has occurred, check the jobs table. Schema changes with an error prefaced by cannot be reverted, manual cleanup may be required might require manual intervention.

ALTER COLUMN limitations

You cannot alter the data type of a column if:

  • The column is part of an index.
  • The column has CHECK constraints.
  • The column owns a sequence.
  • The column has a DEFAULT expression. This will result in an ERROR: ... column ... cannot also have a DEFAULT expression with SQLSTATE: 42P16.
  • The ALTER COLUMN TYPE statement is part of a combined ALTER TABLE statement.
  • The ALTER COLUMN TYPE statement is inside an explicit transaction.
Note:

Most ALTER COLUMN TYPE changes are finalized asynchronously. Schema changes on the table with the altered column may be restricted, and writes to the altered column may be rejected until the schema change is finalized.

CREATE TABLE AS limitations

The primary key of tables created with CREATE TABLE ... AS is not automatically derived from the query results. You must specify new primary keys at table creation. For examples, see Specify a primary key.
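
A minimal sketch of specifying a primary key in a CREATE TABLE ... AS statement; the table and column names are hypothetical:

CREATE TABLE users_copy (id, name, PRIMARY KEY (id)) AS SELECT id, name FROM users;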

Remove a UNIQUE index created as part of CREATE TABLE

UNIQUE indexes created as part of a CREATE TABLE statement cannot be removed without using CASCADE. Unique indexes created with CREATE INDEX do not have this limitation.
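
A minimal sketch; the auto-generated index name is an assumption:

CREATE TABLE t2 (a INT UNIQUE);
DROP INDEX t2@t2_a_key;          -- rejected without CASCADE
DROP INDEX t2@t2_a_key CASCADE;  -- drops the index along with the UNIQUE constraint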

Max size of a single column family

When creating or updating a row, if the combined size of all values in a single column family exceeds the max range size for the table, the operation may fail, or cluster performance may suffer.

As a workaround, you can either manually split a table's columns into multiple column families, or you can create a table-specific zone configuration with an increased max range size.
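
Minimal sketches of both workarounds; the table, column, family names, and size are hypothetical:

CREATE TABLE docs (
    id INT PRIMARY KEY,
    body STRING,
    FAMILY f_small (id),
    FAMILY f_large (body)
);

ALTER TABLE docs CONFIGURE ZONE USING range_max_bytes = 1073741824;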

Dropping a single partition

CockroachDB does not currently support dropping a single partition from a table. In order to remove partitions, you can repartition the table.

Placeholders in PARTITION BY

When defining a table partition, either during table creation or table alteration, it is not possible to use placeholders in the PARTITION BY clause. #19464

Unsupported trigram syntax

The following PostgreSQL syntax and features are currently unsupported for trigrams:

  • word_similarity() built-in function.
  • strict_word_similarity() built-in function.
  • %> and <% comparisons and acceleration.
  • <<% and %>> comparisons and acceleration.
  • <->, <<->, <->>, <<<->, and <->>> comparisons.
  • Acceleration on regex string matching.
  • % comparisons, show_trgm, and trigram index creation on collated strings.

#41285

Unsupported full-text search features

The following PostgreSQL syntax and features are currently unsupported for full-text search:

  • Aspects of text search configurations other than the specified dictionary.
  • websearch_to_tsquery() built-in function.
  • tsquery_phrase() built-in function.
  • ts_rank_cd() built-in function.
  • setweight() built-in function.
  • Inverted joins on TSVECTOR values.
  • tsvector || tsvector comparisons.
  • tsquery || tsquery comparisons.
  • tsquery && tsquery comparisons.
  • tsquery <-> tsquery comparisons.
  • !! tsquery comparisons.
  • tsquery @> tsquery and tsquery <@ tsquery comparisons.

#41288

CockroachDB does not allow inverted indexes with STORING

CockroachDB does not allow inverted indexes with a STORING column. #88278

Multiple arbiter indexes for INSERT ON CONFLICT DO UPDATE

CockroachDB does not currently support multiple arbiter indexes for INSERT ON CONFLICT DO UPDATE, and will return an error if there are multiple unique or exclusion constraints matching the ON CONFLICT DO UPDATE specification. #53170

Expression index limitations

  • The expression cannot reference columns outside the index's table.
  • Functional expression output must be determined by the input arguments. For example, you can't use the volatile function now() to create an index because its output depends on more than just the function arguments.
  • CockroachDB does not allow expression indexes to reference computed columns. #67900
  • CockroachDB does not support expressions as ON CONFLICT targets. This means that unique expression indexes cannot be selected as arbiters for INSERT .. ON CONFLICT statements. For example:

    CREATE TABLE t (a INT, b INT, UNIQUE INDEX ((a + b)));
    
    CREATE TABLE
    
    INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO NOTHING;
    
    invalid syntax: statement ignored: at or near "(": syntax error
    SQLSTATE: 42601
    DETAIL: source SQL:
    INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO NOTHING
                                        ^
    HINT: try \h INSERT
    
    INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO UPDATE SET a = 10;
    
    invalid syntax: statement ignored: at or near "(": syntax error
    SQLSTATE: 42601
    DETAIL: source SQL:
    INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO UPDATE SET a = 10
                                        ^
    HINT: try \h INSERT
    

    #67893

Data types

Spatial support limitations

CockroachDB supports efficiently storing and querying spatial data, with the following limitations:

OID limitations

Refer to OID best practices.

Limitations for composite types

ALTER TYPE limitations

  • When running the ALTER TYPE statement, you can only reference a user-defined type from the database that contains the type.
  • You can only cancel ALTER TYPE schema change jobs that drop values. This is because when you drop a value, CockroachDB searches through every row that could contain the type's value, which could take a long time. All other ALTER TYPE schema change jobs are non-cancellable.

JSONB limitations

Security and privileges

GRANT/REVOKE limitations

User/role management operations (such as GRANT and REVOKE) are schema changes. As such, they inherit the limitations of schema changes.

For example, schema changes wait for concurrent transactions using the same resources as the schema changes to complete. In the case of role memberships being modified inside a transaction, most transactions need access to the set of role memberships. Using the default settings, role modifications require schema leases to expire, which can take up to 5 minutes.

This means that long-running transactions elsewhere in the system can cause user/role management operations inside transactions to take several minutes to complete. This can have a cascading effect. When a user/role management operation inside a transaction takes a long time to complete, it can in turn block all user-initiated transactions being run by your application, since the user/role management operation in the transaction has to commit before any other transactions that access role memberships (i.e., most transactions) can make progress.

If you want user/role management operations to finish more quickly, and do not care whether concurrent transactions will immediately see the side effects of those operations, set the session variable allow_role_memberships_to_change_during_transaction to true.

When this session variable is enabled, any user/role management operations issued in the current session will only need to wait for the completion of statements in other sessions where allow_role_memberships_to_change_during_transaction is not enabled.

To accelerate user/role management operations across your entire application, you have the following options:

  1. Set the session variable in all sessions by passing it in the client connection string.
  2. Apply the allow_role_memberships_to_change_during_transaction setting globally to an entire cluster using the ALTER ROLE ALL statement:

    ALTER ROLE ALL SET allow_role_memberships_to_change_during_transaction = true;
    

DROP OWNED BY limitations

  • ENUM types are not dropped.
  • DROP OWNED BY drops all owned objects as well as any grants on objects not owned by the role.
  • If the role for which you are trying to DROP OWNED BY was granted a system-level privilege (i.e., using the GRANT SYSTEM ... statement), the following error will be signalled:

    ERROR: cannot perform drop owned by if role has synthetic privileges; foo has entries in system.privileges
    SQLSTATE: 0A000
    HINT: perform REVOKE SYSTEM ... for the relevant privileges foo has in system.privileges
    

    The phrase "synthetic privileges" in the error message refers to system-level privileges.

    The workaround is to use SHOW SYSTEM GRANTS FOR {role} and then use REVOKE SYSTEM ... for each privilege in the result. #88149

Privileges for DELETE and UPDATE

Every DELETE or UPDATE statement constructs a SELECT statement, even when no WHERE clause is involved. As a result, the user executing DELETE or UPDATE requires both the DELETE and SELECT or UPDATE and SELECT privileges on the table.
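
For example, a minimal sketch of granting the privileges a role needs to run DELETE and UPDATE; the table and role names are hypothetical:

GRANT SELECT, DELETE, UPDATE ON TABLE accounts TO app_user;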

Deployment and operations

Admission control

Admission control works on the level of each node, not at the cluster level. The admission control system queues requests until the operations are processed or the request exceeds the timeout value (for example by using SET statement_timeout). If you specify aggressive timeout values, the system may operate correctly but have low throughput as the operations exceed the timeout value while only completing part of the work. There is no mechanism for preemptively rejecting requests when the work queues are long.
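
A minimal sketch of a session-level timeout of the kind described above; the value is hypothetical:

SET statement_timeout = '10s';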

Organizing operations by priority can mean that higher priority operations consume all the available resources while lower priority operations remain in the queue until the operation times out.

Data domiciling

  • When columns are indexed, a subset of data from the indexed columns may appear in meta ranges or other system tables. CockroachDB synchronizes these system ranges and system tables across nodes. This synchronization does not respect any multi-region settings applied via either the multi-region SQL statements, or the low-level zone configs mechanism.
  • Zone configs can be used for data placement but these features were historically built for performance, not for domiciling. The replication system's top priority is to prevent the loss of data and it may override the zone configurations if necessary to ensure data durability. For more information, see Replication Controls.
  • If your log files are kept in the region where they were generated, there is some cross-region leakage (like the system tables described previously), but the majority of user data that makes it into the logs is going to be homed in that region. If that's not strong enough, you can use the log redaction functionality to strip all raw data from the logs. You can also limit your log retention entirely.
  • If you start a node with a --locality flag that says the node is in region A, but the node is actually running in some region B, data domiciling based on the inferred node placement will not work. A CockroachDB node only knows its locality based on the text supplied to the --locality flag; it cannot ensure that it is actually running in that physical location.

CockroachDB does not test for all connection failure scenarios

CockroachDB servers rely on the network to report when a TCP connection fails. In most scenarios when a connection fails, the network immediately reports a connection failure, resulting in a Connection refused error.

However, if there is no host at the target IP address, or if a firewall rule blocks traffic to the target address and port, a TCP handshake can linger while the client network stack waits for a TCP packet in response to network requests. To work around this kind of scenario, we recommend the following:

  • When migrating a node to a new machine, keep the server listening at the previous IP address until the cluster has completed the migration.
  • Configure any active network firewalls to allow node-to-node traffic.
  • Verify that orchestration tools (e.g., Kubernetes) are configured to use the correct network connection information.

#53410

No guaranteed state switch from DECOMMISSIONING to DECOMMISSIONED if node decommission is interrupted

There is no guaranteed state switch from DECOMMISSIONING to DECOMMISSIONED if node decommission is interrupted in one of the following ways:

  • The cockroach node decommission --wait-all command was run and then interrupted
  • The cockroach node decommission --wait=none command was run

This is because the state flip is effected by the CLI program at the end. Only the CLI (or its underlying API call) is able to finalize the "decommissioned" state. If the command is interrupted, or --wait=none is used, the state will only flip to "decommissioned" when the CLI program is run again after decommissioning has done all its work. #94430

Simultaneous client connections and running queries on a single node

When a node has both a high number of client connections and running queries, the node may crash due to memory exhaustion. This is due to CockroachDB not accurately limiting the number of clients and queries based on the amount of available RAM on the node.

To prevent memory exhaustion, monitor each node's memory usage and ensure there is some margin between maximum CockroachDB memory usage and available system RAM. For more details about memory usage in CockroachDB, see this blog post.

To control the maximum number of connections a gateway node can have open at one time from non-superusers (users other than root or another admin role), use the server.max_connections_per_gateway cluster setting. If a new non-superuser connection would exceed this limit, the error message "sorry, too many clients already" is returned, along with error code 53300. This may be useful in addition to your memory monitoring.
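
A minimal sketch of setting that limit; the value is hypothetical:

SET CLUSTER SETTING server.max_connections_per_gateway = 1000;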

Load-based lease rebalancing in uneven latency deployments

When nodes are started with the --locality flag, CockroachDB attempts to place the replica lease holder (the replica that client requests are forwarded to) on the node closest to the source of the request. This means as client requests move geographically, so too does the replica lease holder.

However, you might see increased latency caused by a consistently high rate of lease transfers between datacenters in the following case:

  • Your cluster runs in datacenters which are very different distances away from each other.
  • Each node was started with a single tier of --locality, e.g., --locality=datacenter=a.
  • Most client requests get sent to a single datacenter because that's where all your application traffic is.

To detect if this is happening, open the DB Console, select the Queues dashboard, hover over the Replication Queue graph, and check the Leases Transferred / second data point. If the value is consistently larger than 0, you should consider stopping and restarting each node with additional tiers of locality to improve request latency.

For example, let's say that latency is 10ms from nodes in datacenter A to nodes in datacenter B but is 100ms from nodes in datacenter A to nodes in datacenter C. To ensure A's and B's relative proximity is factored into lease holder rebalancing, you could restart the nodes in datacenter A and B with a common region, --locality=region=foo,datacenter=a and --locality=region=foo,datacenter=b, while restarting nodes in datacenter C with a different region, --locality=region=bar,datacenter=c.

Size limits on statement input from SQL clients

CockroachDB imposes a hard limit of 16MiB on the data input for a single statement passed to CockroachDB from a client (including the SQL shell). We do not recommend attempting to execute statements from clients with large input.

Using \| to perform a large input in the SQL shell

In the built-in SQL shell, using the \| operator to perform a large number of inputs from a file can cause the server to close the connection. This is because \| sends the entire file as a single query to the server, which can exceed the upper bound on the size of a packet the server can accept from any client (16 MiB).

As a workaround, execute the file from the command line with cat data.sql | cockroach sql instead of from within the interactive shell.

Spatial features disabled for ARM Macs

Spatial features are disabled due to an issue with macOS code signing for the GEOS libraries. Users needing spatial features on an ARM Mac may instead use Rosetta to run the Intel binary or use the Docker image distribution. GitHub tracking issue

Logging system limitations

Log files can only be accessed in the DB Console if they are stored in the same directory as the file sink for the DEV channel.

Per-replica circuit breaker limitations

Per-replica circuit breakers have the following limitations:

  • They cannot prevent requests from hanging when the node's liveness range is unavailable. For more information about troubleshooting a cluster that's having node liveness issues, see Node liveness issues.
  • They are not tripped if all replicas of a range become unavailable, because the circuit breaker mechanism operates per-replica. This means at least one replica needs to be available to receive the request in order for the breaker to trip.

Kubernetes limitations

Refer to Kubernetes best practices.

Observability

Datadog

The integration of your CockroachDB self-hosted cluster with Datadog only supports displaying cluster-wide averages of reported metrics. Filtering by a specific node is unsupported.

DB Console may become inaccessible for secure clusters

Accessing the DB Console for a secure cluster requires login information (i.e., username and password). This login information is stored in a system table that is replicated like other data in the cluster. If a majority of the nodes with the replicas of the system table data go down, users will be locked out of the DB Console.

Available capacity metric in the DB Console

If you are testing your deployment locally with multiple CockroachDB nodes running on a single machine (this is not recommended in production), you must explicitly set the store size per node in order to display the correct capacity. Otherwise, the machine's actual disk capacity will be counted as a separate store for each node, thus inflating the computed capacity.

Disaster recovery

Physical cluster replication

  • Physical cluster replication is supported only on CockroachDB self-hosted in new clusters on v23.2 or above. Physical Cluster Replication cannot be enabled on clusters that have been upgraded from a previous version of CockroachDB.
  • Read queries are not supported on the standby cluster before cutover.
  • The primary and standby cluster cannot have different region topology. For example, replicating a multi-region primary cluster to a single-region standby cluster is not supported. Mismatching regions between a multi-region primary and standby cluster is also not supported.
  • Cutting back to the primary cluster after a cutover is a manual process. Refer to Cut back to the primary cluster. In addition, after cutover, to continue using physical cluster replication, you must configure it again.
  • Before cutover to the standby, the standby cluster does not support running backups or changefeeds.
  • Large data imports, such as those produced by RESTORE or IMPORT INTO, may dramatically increase replication lag.

  • After the cutover process for physical cluster replication, scheduled changefeeds will continue on the promoted cluster. You will need to manage pausing or canceling the schedule on the promoted standby cluster to avoid two clusters running the same changefeed to one sink. #123776

  • After a cutover, there is no mechanism to stop applications from connecting to the original primary cluster. It is necessary to redirect application traffic manually, such as by using a network load balancer or adjusting DNS records.

RESTORE limitations

  • RESTORE will not restore a table that references a UDF, unless you skip restoring the function with the skip_missing_udfs option. Alternatively, take a database-level backup to include everything needed to restore the table. #118195
  • Restoring GLOBAL and REGIONAL BY TABLE tables into a non-multi-region database is not supported. #71502

  • REGIONAL BY TABLE and REGIONAL BY ROW tables can be restored only if the regions of the backed-up table match those of the target database. All of the following must be true for RESTORE to be successful:

    • The source database and the destination database have the same set of regions.
    • The regions were added to each of the databases in the same order.
    • The databases have the same primary region.

    The following example would be considered as having mismatched regions because the database regions were not added in the same order and the primary regions do not match.

    Running on the source database:

    ALTER DATABASE source_database SET PRIMARY REGION "us-east1";
    
    ALTER DATABASE source_database ADD region "us-west1";  
    

    Running on the destination database:

    ALTER DATABASE destination_database SET PRIMARY REGION "us-west1";
    
    ALTER DATABASE destination_database ADD region "us-east1";  
    

    In addition, the following scenario has mismatched regions between the databases since the regions were not added to the database in the same order.

    Running on the source database:

    ALTER DATABASE source_database SET PRIMARY REGION "us-east1";
    
    ALTER DATABASE source_database ADD region "us-west1";  
    

    Running on the destination database:

    ALTER DATABASE destination_database SET PRIMARY REGION "us-west1";
    
    ALTER DATABASE destination_database ADD region "us-east1";
    
    ALTER DATABASE destination_database SET PRIMARY REGION "us-east1";    
    

    #71071

Enterprise BACKUP does not capture database/table/column comments

The COMMENT ON statement associates comments to databases, tables, or columns. However, the internal table (system.comments) in which these comments are stored is not captured by a BACKUP of a table or database.

As a workaround, take a cluster backup instead, as the system.comments table is included in cluster backups. #44396
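
A minimal sketch of the workaround; the storage URI is hypothetical:

BACKUP INTO 'nodelocal://1/backups';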

SHOW BACKUP does not support listing backups if the nodelocal storage location is a symlink. #70260

Change data capture

Change data capture (CDC) provides efficient, distributed, row-level changefeeds into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing. It has the following known limitations:

  • Changefeed target options are limited to tables and column families. #73435
  • VPC Peering and AWS PrivateLink in CockroachDB Advanced clusters do not support connecting to a Kafka sink's internal IP addresses for changefeeds. To connect to a Kafka sink from CockroachDB Advanced, it is necessary to expose the Kafka cluster's external IP address and open ports with firewall rules to allow access from a CockroachDB Advanced cluster.

  • Webhook sinks only support HTTPS. Use the insecure_tls_skip_verify parameter when testing to disable certificate verification; however, this still requires HTTPS and certificates. #73431

  • Formats for changefeed messages are not supported by all changefeed sinks. Refer to the Changefeed Sinks page for details on compatible formats with each sink and the format option to specify a changefeed message format. #73432

  • Using the split_column_families and resolved options on the same changefeed will cause an error when using the following sinks: Kafka and Google Cloud Pub/Sub. Instead, use the individual FAMILY keyword to specify column families when creating a changefeed, as sketched after this list. #79452

  • Changefeed types are not fully integrated with user-defined composite types. Running changefeeds with user-defined composite types is in Preview. Certain changefeed types do not support user-defined composite types. Refer to the change data capture Known Limitations for more detail. The following limitations apply:

  • After the cutover process for physical cluster replication, scheduled changefeeds will continue on the promoted cluster. You will need to manage pausing or canceling the schedule on the promoted standby cluster to avoid two clusters running the same changefeed to one sink. #123776

  • You can only apply CDC queries on a single table in each statement.

  • Some stable functions, notably functions that return MVCC timestamps, are overridden to return the MVCC timestamp of the event (e.g., transaction_timestamp or statement_timestamp). Additionally, some time-based functions, such as now(), are not supported. We recommend using the transaction_timestamp() function or the crdb_internal_mvcc_timestamp column instead, as shown in the sketch after this list.

  • Certain functions and expressions are not permitted in CDC queries; refer to the change data capture documentation for detail.

  • Changefeed messages for delete events contain only the primary key; all other columns are emitted as NULL. See Capture delete messages for detail on running a CDC query that emits the deleted values. #83835

  • Creating a changefeed with CDC queries on tables with more than one column family is not supported. #127761

  • When you create a changefeed on a table with more than one column family, the changefeed will emit messages per column family in separate streams. As a result, changefeed messages for different column families will arrive at the sink under separate topics. #127736
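For example, a minimal CDC query sketch (the orders table and Kafka URI are hypothetical) that emits the event's MVCC timestamp instead of relying on the unsupported now():

CREATE CHANGEFEED INTO 'kafka://broker.example.com:9092'
  AS SELECT *, crdb_internal_mvcc_timestamp AS event_ts
  FROM orders;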

ALTER CHANGEFEED limitations

  • It is necessary to PAUSE the changefeed before performing any ALTER CHANGEFEED statement; see the sketch after this list. #77171
  • CockroachDB does not keep track of the initial_scan option applied to tables when it is set to yes or only. For example:

    ALTER CHANGEFEED {job_ID} ADD table WITH initial_scan = 'yes';
    

    This triggers an initial scan of the table, and the changefeed will track table. However, the changefeed does not track initial_scan specified as an option, so it will not display in the output of a SHOW CHANGEFEED JOB statement.

  • ALTER CHANGEFEED is not fully supported with changefeeds that use CDC queries. You can alter the options that a changefeed uses, but you cannot alter the changefeed target tables. #83033
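As a sketch of the required sequence (the job ID and table name are hypothetical), pause the changefeed before altering it and resume it afterward:

PAUSE JOB 123456789;

ALTER CHANGEFEED 123456789 ADD movr.rides WITH initial_scan = 'yes';

RESUME JOB 123456789;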

Performance optimization

Optimizer and locking behavior

The SQL optimizer has limitations under certain isolation levels:

  • The new implementation of SELECT FOR UPDATE is not yet the default setting under SERIALIZABLE isolation. It can be used under SERIALIZABLE isolation by setting the optimizer_use_lock_op_for_serializable session setting to true, as shown in the sketch after this list. #114737
  • SELECT FOR UPDATE does not lock completely-NULL column families in multi-column-family tables. #116836
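A minimal sketch of opting in to the newer SELECT FOR UPDATE implementation under SERIALIZABLE isolation (the orders table is hypothetical):

SET optimizer_use_lock_op_for_serializable = true;

BEGIN;
SELECT * FROM orders WHERE id = 1 FOR UPDATE;
COMMIT;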

Statistics limitations

  • The automatic statistics refresher checks, on startup of each node in the cluster, whether it needs to refresh statistics for every table in the database. If statistics for a table have not been refreshed in a while, this triggers a statistics collection for that table; if statistics have been refreshed recently, it does not force a refresh. As a result, the automatic statistics refresher does not necessarily refresh statistics after an upgrade. This could cause a problem, for example, if the upgrade moves from a version without histograms to a version with histograms. To refresh statistics manually, use CREATE STATISTICS, as shown in the sketch after this list. #54816

  • Some statistics forecasting cluster settings do not take effect immediately; they only take effect when new statistics are collected for a table.

    Although SHOW STATISTICS WITH FORECAST shows the settings taking effect immediately, they do not actually take effect until new statistics are collected (as can be verified with EXPLAIN).

    As a workaround, disable and enable forecasting at the cluster or table level. This will invalidate the statistics cache and cause these settings to take effect immediately. #123852
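For example, the following sketch (the statistics name and table are hypothetical) refreshes statistics manually and toggles forecasting at the cluster level so that forecast-related settings take effect immediately:

-- Manually refresh statistics for a table, for example after an upgrade:
CREATE STATISTICS post_upgrade_stats FROM movr.rides;

-- Toggle forecasting to invalidate the statistics cache:
SET CLUSTER SETTING sql.stats.forecasts.enabled = false;
SET CLUSTER SETTING sql.stats.forecasts.enabled = true;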

Incorrect query plans for partitions with NULL values

In cases where the partition definition includes a comparison with NULL and a query constraint, incorrect query plans are returned. However, this case uses non-standard partitioning that defines partitions that could never hold values, so it is not likely to occur in production environments. #82774

Vectorized engine limitations

transaction_rows_read_err and transaction_rows_written_err do not halt query execution

The transaction_rows_read_err and transaction_rows_written_err session settings limit the number of rows read or written by a single transaction. These session settings will fail the transaction with an error, but not until the current query finishes executing and the results have been returned to the client. #70473
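A minimal sketch (the limit and table name are hypothetical); the error surfaces only after the offending statement has returned its results:

SET transaction_rows_read_err = 1000;

BEGIN;
-- If this reads more than 1,000 rows, the rows are still returned to the
-- client before the transaction fails with an error:
SELECT * FROM orders;
COMMIT;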

sql.guardrails.max_row_size_err misses indexed virtual computed columns

The sql.guardrails.max_row_size_err cluster setting misses large rows caused by indexed virtual computed columns. This is because the guardrail only checks the size of primary key rows, not secondary index rows. #69540
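For illustration, a hypothetical table where a large value lives only in a secondary index on a virtual computed column, which the guardrail does not check:

SET CLUSTER SETTING sql.guardrails.max_row_size_err = '512KiB';

CREATE TABLE docs (
  id INT PRIMARY KEY,
  body STRING,
  -- The virtual column is not stored in the primary index; its value exists
  -- only in the secondary index below:
  body_upper STRING AS (upper(body)) VIRTUAL,
  INDEX (body_upper)
);

-- A row whose body_upper value exceeds the limit in the secondary index is
-- not caught, because the guardrail checks only the primary index row.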

Using LIKE...ESCAPE in WHERE and HAVING constraints

CockroachDB tries to optimize most comparison operators in WHERE and HAVING clauses into constraints on SQL indexes so that only the selected rows are accessed. This optimization is applied to LIKE clauses when a common prefix for all selected rows can be determined from the search pattern (e.g., ... LIKE 'Joe%'). However, it is not yet available if the ESCAPE keyword is also used. #30192
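A sketch of the difference (the users table and index are hypothetical):

CREATE TABLE users (id INT PRIMARY KEY, name STRING, INDEX name_idx (name));

-- The common prefix 'Joe' can constrain a scan of name_idx:
SELECT * FROM users WHERE name LIKE 'Joe%';

-- With ESCAPE, the prefix optimization is not applied, so this may require a full scan:
SELECT * FROM users WHERE name LIKE 'Joe!%%' ESCAPE '!';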

Import with a high amount of disk contention

IMPORT INTO can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the kv.bulk_io_write.max_rate cluster setting to a value below your max disk write speed. For example, to set it to 10MB/s, execute:

> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';

CockroachDB does not properly optimize some left and anti joins with GIN indexes

Left joins and anti joins involving JSONB, ARRAY, or spatial-typed columns with a multi-column or partitioned GIN index will not take advantage of the index if the prefix columns of the index are unconstrained, or if they are constrained to multiple, constant values.

To work around this limitation, make sure that the prefix columns of the index are either constrained to single constant values, or are part of an equality condition with an input column (e.g., col1 = col2, where col1 is a prefix column and col2 is an input column).

For example, suppose you have the following multi-region database and tables:

CREATE DATABASE multi_region_test_db PRIMARY REGION "europe-west1" REGIONS "us-west1", "us-east1" SURVIVE REGION FAILURE;
USE multi_region_test_db;

CREATE TABLE t1 (
  k INT PRIMARY KEY,
  geom GEOMETRY
);

CREATE TABLE t2 (
  k INT PRIMARY KEY,
  geom GEOMETRY,
  INVERTED INDEX geom_idx (geom)
) LOCALITY REGIONAL BY ROW;

And you insert some data into the tables:

INSERT INTO t1 SELECT generate_series(1, 1000), 'POINT(1.0 1.0)';
INSERT INTO t2 (crdb_region, k, geom) SELECT 'us-east1', generate_series(1, 1000), 'POINT(1.0 1.0)';
INSERT INTO t2 (crdb_region, k, geom) SELECT 'us-west1', generate_series(1001, 2000), 'POINT(2.0 2.0)';
INSERT INTO t2 (crdb_region, k, geom) SELECT 'europe-west1', generate_series(2001, 3000), 'POINT(3.0 3.0)';

If you attempt a left join between t1 and t2 on only the geometry columns, CockroachDB will not be able to plan an inverted join:

> EXPLAIN SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom);
                info
------------------------------------
  distribution: full
  vectorized: true

  • cross join (right outer)
  │ pred: st_contains(geom, geom)
  │
  ├── • scan
  │     estimated row count: 3,000
  │     table: t2@primary
  │     spans: FULL SCAN
  │
  └── • scan
        estimated row count: 1,000
        table: t1@primary
        spans: FULL SCAN
(15 rows)

However, if you constrain the crdb_region column to a single value, CockroachDB can plan an inverted join:

> EXPLAIN SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom) AND t2.crdb_region = 'us-east1';
                       info
--------------------------------------------------
  distribution: full
  vectorized: true

  • lookup join (left outer)
  │ table: t2@primary
  │ equality: (crdb_region, k) = (crdb_region,k)
  │ equality cols are key
  │ pred: st_contains(geom, geom)
  │
  └── • inverted join (left outer)
      │ table: t2@geom_idx
      │
      └── • render
          │
          └── • scan
                estimated row count: 1,000
                table: t1@primary
                spans: FULL SCAN
(18 rows)

If you do not know which region to use, you can combine queries with UNION ALL:

> EXPLAIN SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom) AND t2.crdb_region = 'us-east1'
UNION ALL SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom) AND t2.crdb_region = 'us-west1'
UNION ALL SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom) AND t2.crdb_region = 'europe-west1';
                           info
----------------------------------------------------------
  distribution: full
  vectorized: true

  • union all
  │
  ├── • union all
  │   │
  │   ├── • lookup join (left outer)
  │   │   │ table: t2@primary
  │   │   │ equality: (crdb_region, k) = (crdb_region,k)
  │   │   │ equality cols are key
  │   │   │ pred: st_contains(geom, geom)
  │   │   │
  │   │   └── • inverted join (left outer)
  │   │       │ table: t2@geom_idx
  │   │       │
  │   │       └── • render
  │   │           │
  │   │           └── • scan
  │   │                 estimated row count: 1,000
  │   │                 table: t1@primary
  │   │                 spans: FULL SCAN
  │   │
  │   └── • lookup join (left outer)
  │       │ table: t2@primary
  │       │ equality: (crdb_region, k) = (crdb_region,k)
  │       │ equality cols are key
  │       │ pred: st_contains(geom, geom)
  │       │
  │       └── • inverted join (left outer)
  │           │ table: t2@geom_idx
  │           │
  │           └── • render
  │               │
  │               └── • scan
  │                     estimated row count: 1,000
  │                     table: t1@primary
  │                     spans: FULL SCAN
  │
  └── • lookup join (left outer)
      │ table: t2@primary
      │ equality: (crdb_region, k) = (crdb_region,k)
      │ equality cols are key
      │ pred: st_contains(geom, geom)
      │
      └── • inverted join (left outer)
          │ table: t2@geom_idx
          │
          └── • render
              │
              └── • scan
                    estimated row count: 1,000
                    table: t1@primary
                    spans: FULL SCAN
(54 rows)

#59649

Locality optimized search limitations

Query plans for materialized views

  • The optimizer may not select the best query plan when querying materialized views, because CockroachDB does not collect statistics on materialized views. #78181

Inverted join for tsvector and tsquery types is not supported

CockroachDB cannot index-accelerate queries with @@ predicates when both sides of the operator are variables. #102731
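For example, a join of this form (the tables and columns are hypothetical) cannot use the inverted index because both operands of @@ are variables:

CREATE TABLE docs (id INT PRIMARY KEY, body_tsv TSVECTOR, INVERTED INDEX (body_tsv));
CREATE TABLE searches (id INT PRIMARY KEY, query_tsq TSQUERY);

-- Both sides of @@ are columns, so the inverted index on body_tsv cannot
-- accelerate this join:
SELECT d.id, s.id
FROM docs AS d JOIN searches AS s ON d.body_tsv @@ s.query_tsq;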

