cockroach start-single-node


This page explains the cockroach start-single-node command, which you use to start a single-node cluster with replication disabled. A single-node cluster is appropriate for quick SQL testing or app development.

Warning:

A single-node cluster is not appropriate for use in production or for performance testing. To run a cluster with replicated data for availability, consistency, and resiliency, including load balancing across multiple nodes, use cockroach start and cockroach init to start a multi-node cluster with a minimum of three nodes instead.

Synopsis

Start a single-node cluster:

$ cockroach start-single-node <flags>

View help:

$ cockroach start-single-node --help

Flags

The cockroach start-single-node command supports the following general-use, networking, security, and logging flags.

Many flags have useful defaults that can be overridden by specifying the flags explicitly. If you specify flags explicitly, however, be sure to do so each time the node is restarted, as they will not be remembered.

Note:

The cockroach start-single-node flags are identical to the cockroach start flags. However, many of them are not relevant for single-node clusters; they are provided for users who want to test concepts that apply to multi-node clusters, and are called out as such. In most cases, accepting the defaults is sufficient (see the examples below).

General

Flag Description
--attrs Not relevant for single-node clusters. Arbitrary strings, separated by colons, specifying node capability, which might include specialized hardware or number of cores, for example:

--attrs=ram:64gb

These can be used to influence the location of data replicas. See Replication Controls for full details.
--background Runs the node in the background. Control is returned to the shell only once the node is ready to accept requests, so this is recommended over appending & to the command. This flag is not available in Windows environments.

Note: --background is suitable for writing automated test suites or maintenance procedures that need a temporary server process running in the background. It is not intended to be used to start a long-running server, because it does not fully detach from the controlling terminal. Consider using a service manager or a tool like daemon(8) instead. If you use --background, using --pid-file is also recommended. To gracefully stop the cockroach process, send the SIGTERM signal to the process ID in the PID file. To gracefully restart the process, send the SIGHUP signal.
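For example, a test script might start a throwaway background node, record its process ID, and shut it down when finished; a minimal sketch (the file name background.pid is illustrative):

$ cockroach start-single-node \
--insecure \
--background \
--pid-file=background.pid
# ... run tests against the node ...
$ kill -TERM "$(cat background.pid)"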
--cache The total size for caches, shared evenly if there are multiple storage devices. This can be a percentage (notated as a decimal or with %) or any bytes-based unit, for example:

--cache=.25
--cache=25%
--cache=1000000000 ----> 1000000000 bytes
--cache=1GB ----> 1000000000 bytes
--cache=1GiB ----> 1073741824 bytes

Note: If you use the % notation, you might need to escape the % sign, for instance, while configuring CockroachDB through systemd service files. For this reason, it's recommended to use the decimal notation instead.

Note: The sum of --cache, --max-sql-memory, and --max-tsdb-memory should not exceed 75% of the memory available to the cockroach process.

Default: 128MiB

The default cache size is reasonable for local development clusters. For production deployments, this should be increased to 25% or higher. Increasing the cache size will generally improve the node's read performance. See Recommended Production Settings for more details.
--external-io-dir The path of the external IO directory with which the local file access paths are prefixed while performing backup and restore operations using local node directories or NFS drives. If set to disabled, backups and restores using local node directories and NFS drives are disabled.

Default: extern subdirectory of the first configured store.

To set the --external-io-dir flag to the locations you want to use without needing to restart nodes, create symlinks to the desired locations from within the extern directory.
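For example, to expose an NFS mount for backups without restarting the node, you might symlink it from inside the default extern directory (the paths are illustrative):

$ ln -s /mnt/nfs-backups cockroach-data/extern/nfs-backups

Backup and restore operations that use local node directories can then reference the linked location.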
--listening-url-file The file to which the node's SQL connection URL will be written on successful startup, in addition to being printed to the standard output.

This is particularly helpful in identifying the node's port when an unused port is assigned automatically (--port=0).
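For example, a test harness might let the node choose an unused port and then read the connection URL back from the file; a sketch only (the file name conn-url.txt is illustrative):

$ cockroach start-single-node \
--insecure \
--listen-addr=localhost:0 \
--listening-url-file=conn-url.txt \
--background
$ cockroach sql --insecure --url="$(cat conn-url.txt)"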
--locality Not relevant for single-node clusters. Arbitrary key-value pairs that describe the location of the node. Locality might include country, region, datacenter, rack, etc. For more details, see Locality below.
--locality-file A file that contains arbitrary key-value pairs that describe the location of the node, as an alternative to the --locality flag.
--max-disk-temp-storage The maximum on-disk storage capacity available to store temporary data for SQL queries that exceed the memory budget (see --max-sql-memory). This ensures that JOINs, sorts, and other memory-intensive SQL operations are able to spill intermediate results to disk. This can be a percentage (notated as a decimal or with %) or any bytes-based unit (e.g., .25, 25%, 500GB, 1TB, 1TiB).

Note: If you use the % notation, you might need to escape the % sign, for instance, while configuring CockroachDB through systemd service files. For this reason, it's recommended to use the decimal notation instead. Also, if expressed as a percentage, this value is interpreted relative to the size of the first store. However, the temporary space usage is never counted towards any store usage; therefore, when setting this value, it's important to ensure that the size of this temporary storage plus the size of the first store doesn't exceed the capacity of the storage device.

The temporary files are located in the path specified by the --temp-dir flag, or in the subdirectory of the first store (see --store) by default.

Default: 32GiB
--max-go-memory The maximum soft memory limit for the Go runtime, which influences the behavior of Go's garbage collection. Defaults to --max-sql-memory x 2.25, but cannot exceed 90% of the node's available RAM. To disable the soft memory limit, set --max-go-memory to 0 (not recommended).
--max-sql-memory The maximum in-memory storage capacity available to store temporary data for SQL queries, including prepared queries and intermediate data rows during query execution. This can be a percentage (notated as a decimal or with %) or any bytes-based unit; for example:

--max-sql-memory=.25
--max-sql-memory=25%
--max-sql-memory=1000000000 ----> 1000000000 bytes
--max-sql-memory=1GB ----> 1000000000 bytes
--max-sql-memory=1GiB ----> 1073741824 bytes

The temporary files are located in the path specified by the --temp-dir flag, or in the subdirectory of the first store (see --store) by default.

Note: If you use the % notation, you might need to escape the % sign (for instance, while configuring CockroachDB through systemd service files). For this reason, it's recommended to use the decimal notation instead.

Note: The sum of --cache, --max-sql-memory, and --max-tsdb-memory should not exceed 75% of the memory available to the cockroach process.

Default: 25%

The default SQL memory size is suitable for production deployments but can be raised to increase the number of simultaneous client connections the node allows as well as the node's capacity for in-memory processing of rows when using ORDER BY, GROUP BY, DISTINCT, joins, and window functions. For local development clusters with memory-intensive workloads, reduce this value to, for example, 128MiB to prevent out-of-memory errors.
--max-tsdb-memory Maximum memory capacity available to store temporary data for use by the time-series database to display metrics in the DB Console. Consider raising this value if your cluster is comprised of a large number of nodes where individual nodes have very limited memory available (e.g., under 8 GiB). Insufficient memory capacity for the time-series database can constrain the ability of the DB Console to process the time-series queries used to render metrics for the entire cluster. This capacity constraint does not affect SQL query execution. This flag accepts numbers interpreted as bytes, size suffixes (e.g., 1GB and 1GiB) or a percentage of physical memory (e.g., 0.01).

Note: The sum of --cache, --max-sql-memory, and --max-tsdb-memory should not exceed 75% of the memory available to the cockroach process.

Default: 0.01 (i.e., 1%) of physical memory or 64 MiB, whichever is greater.
--pid-file The file to which the node's process ID will be written on successful startup. When this flag is not set, the process ID is not written to file.
--store
-s
The file path to a storage device and, optionally, store attributes and maximum size. When using multiple storage devices for a node, this flag must be specified separately for each device, for example:

--store=/mnt/ssd01 --store=/mnt/ssd02

For more details, see Store below.
--temp-dir The path of the node's temporary store directory. On node startup, the location for the temporary files is printed to the standard output.

Default: Subdirectory of the first store
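Taken together, a production-style invocation might set the memory-related flags above explicitly while keeping their sum within the 75% guidance; a sketch only (the store path is illustrative):

$ cockroach start-single-node \
--certs-dir=certs \
--store=path=/mnt/ssd01 \
--cache=.25 \
--max-sql-memory=.25 \
--listen-addr=localhost:26257 \
--http-addr=localhost:8080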

Networking

Flag Description
--listen-addr The IP address/hostname and port to listen on for connections from clients. For IPv6, use the notation [...], e.g., [::1] or [fe80::f6f2:::].

Default: Listen on all IP addresses on port 26257
--http-addr The IP address/hostname and port to listen on for DB Console HTTP requests. For IPv6, use the notation [...], e.g., [::1]:8080 or [fe80::f6f2:::]:8080.

Default: Listen on the address part of --listen-addr on port 8080
--socket-dir The directory path on which to listen for Unix domain socket connections from clients installed on the same Unix-based machine. For an example, see Connect to a cluster listening for Unix domain socket connections.
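For example, you might start a node that also listens on a Unix domain socket in /tmp and connect through that socket; a sketch only, assuming the libpq-style URL form in which host points at the socket directory:

$ cockroach start-single-node \
--insecure \
--socket-dir=/tmp \
--listen-addr=localhost:26257
$ cockroach sql --insecure --url='postgres://root@?host=/tmp&port=26257'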

Security

Flag Description
--certs-dir The path to the certificate directory. The directory must contain valid certificates if running in secure mode.

Default: ${HOME}/.cockroach-certs/
--insecure Note: The --insecure flag is intended for non-production testing only.

Run in insecure mode, skipping all TLS encryption and authentication. If this flag is not set, the --certs-dir flag must point to valid certificates.

Note the following risks: An insecure cluster is open to any client that can access any node's IP addresses; client connections must also be made insecurely; any user, even root, can log in without providing a password; any user, connecting as root, can read or write any data in your cluster; there is no network encryption or authentication, and thus no confidentiality.

Default: false
--accept-sql-without-tls This flag (in preview) allows you to connect to the cluster using a SQL user's password without validating the client's certificate. When connecting using the built-in SQL client, use the --insecure flag with the cockroach sql command.
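As a sketch of how this might be used (the user name maxroach is illustrative and must already exist with a password):

$ cockroach start-single-node \
--certs-dir=certs \
--accept-sql-without-tls \
--listen-addr=localhost:26257
$ cockroach sql --insecure --user=maxroach --host=localhost:26257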
--cert-principal-map A comma-separated list of cert-principal:db-principal mappings used to map the certificate principals to IP addresses, DNS names, and SQL users. This allows the use of certificates generated by Certificate Authorities that place restrictions on the contents of the commonName field. For usage information, see Create Security Certificates using OpenSSL.
--enterprise-encryption This optional flag specifies the encryption options for one of the stores on the node. If multiple stores exist, the flag must be specified for each store.

This flag takes a number of options. For a complete list of options, and usage instructions, see Encryption at Rest.

Store

The --store flag supports the following fields. Note that commas are used to separate fields, and so are forbidden in all field values.

Note:

In-memory storage is not suitable for production deployments at this time.

Field Description
type For in-memory storage, set this field to mem; otherwise, leave this field out. The path field must not be set when type=mem.
path The file path to the storage device. When not setting attrs, size, or ballast-size, the path field label can be left out:

--store=/mnt/ssd01

When any of those fields is set, however, the path field label must be used:

--store=path=/mnt/ssd01,size=20GB

Default: cockroach-data
attrs Arbitrary strings, separated by colons, specifying disk type or capability. These can be used to influence the location of data replicas. See Replication Controls for full details.

In most cases, node-level --locality or --attrs are preferable to store-level attributes, but this field can be used to match capabilities for storage of individual databases or tables. For example, an OLTP database would probably want to allocate space for its tables only on solid-state devices, whereas append-only time series might prefer cheaper spinning drives. Typical attributes include whether the store is flash (ssd) or spinning disk (hdd), as well as speeds and other specs, for example:

--store=path=/mnt/hda1,attrs=hdd:7200rpm
size The maximum size allocated to the node. When this size is reached, CockroachDB attempts to rebalance data to other nodes with available capacity. When no other nodes have available capacity, this limit will be exceeded. Data may also be written to the node faster than the cluster can rebalance it away; as long as capacity is available elsewhere, CockroachDB will gradually rebalance data down to the store limit.

The size can be specified either in a bytes-based unit or as a percentage of hard drive space (notated as a decimal or with %), for example:

--store=path=/mnt/ssd01,size=10000000000 ----> 10000000000 bytes
--store=path=/mnt/ssd01,size=20GB ----> 20000000000 bytes
--store=path=/mnt/ssd01,size=20GiB ----> 21474836480 bytes
--store=path=/mnt/ssd01,size=0.02TiB ----> 21474836480 bytes
--store=path=/mnt/ssd01,size=20% ----> 20% of available space
--store=path=/mnt/ssd01,size=0.2 ----> 20% of available space
--store=path=/mnt/ssd01,size=.2 ----> 20% of available space

Default: 100%

For an in-memory store, the size field is required and must be set to the true maximum bytes or percentage of available memory, for example:

--store=type=mem,size=20GB
--store=type=mem,size=90%

Note: If you use the % notation, you might need to escape the % sign, for instance, while configuring CockroachDB through systemd service files. For this reason, it's recommended to use the decimal notation instead.
ballast-size Configure the size of the automatically created emergency ballast file. Accepts the same value formats as the size field. For more details, see Automatic ballast files.

To disable automatic ballast file creation, set the value to 0:

--store=path=/mnt/ssd01,ballast-size=0
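Combining several of the fields above, a single store specification might look like the following; a sketch only (the path, attribute, and sizes are illustrative):

$ cockroach start-single-node \
--certs-dir=certs \
--store=path=/mnt/ssd01,attrs=ssd,size=80%,ballast-size=1GiB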

Logging

By default, cockroach start-single-node writes all messages to log files, and prints nothing to stderr. This includes events with INFO severity and higher. However, you can customize the logging behavior of this command by using the --log flag:

Flag Description
--log Configure logging parameters by specifying a YAML payload. For details, see Configure logs. If a YAML configuration is not specified, the default configuration is used.

--log-config-file can also be used.

Note: The logging flags below cannot be combined with --log, but can be defined instead in the YAML payload.
--log-config-file Configure logging parameters by specifying a path to a YAML file. For details, see Configure logs. If a YAML configuration is not specified, the default configuration is used.

--log can also be used.

Note: The logging flags below cannot be combined with --log-config-file, but can be defined instead in the YAML file.
--log-dir An alias for the --log flag, for configuring the log directory where log files are stored and written to. Specifically, --log-dir=XXX is an alias for --log='file-defaults: {dir: XXX}'.

Setting --log-dir to a blank directory (--log-dir=) disables logging to files. Do not use --log-dir=""; this creates a new directory named "" and stores log files in that directory.
--log-group-max-size An alias for the --log flag, for configuring the maximum size for a logging group (for example, cockroach, cockroach-sql-audit, cockroach-auth, cockroach-sql-exec, cockroach-pebble), after which the oldest log file is deleted. --log-group-max-size=XXX is an alias for --log='file-defaults: {max-group-size: XXX}'. Accepts a valid file size, such as --log-group-max-size=1GiB.

Default: 100MiB
--log-file-max-size An alias for --log, used to specify the maximum size that a log file can grow before a new log file is created. --log-file-max-size=XXX is an alias for --log='file-defaults: {max-file-size: XXX}'. Accepts a valid file size, such as --log-file-max-size=2MiB. Requires logging to files.

Default: 10MiB
--log-file-verbosity An alias for --log, used to specify the minimum severity level of messages that are logged. --log-file-verbosity=XXX is an alias for --log='file-defaults: {filter: XXX}'. When a severity is specified, such as --log-file-verbosity=WARNING, log messages that are below the specified severity level are not written to the target log file. Requires logging to files.

Default: INFO
--logtostderr An alias for --log, to optionally output log messages at or above the configured severity level to the stderr sink. --logtostderr=XXX is an alias for --log='sinks: {stderr: {filter: XXX}}'. Accepts a valid severity level. If no value is specified, by default messages related to server commands are logged to stderr at INFO severity and above, and messages related to client commands are logged to stderr at WARNING severity and above.

Setting --logtostderr=NONE disables logging to stderr.

Default: UNKNOWN
--no-color An alias for the --log flag, used to control whether log output to the stderr sink is colorized. --no-color=XXX is an alias for --log='sinks: {stderr: {no-color: XXX}}'. Accepts either true or false.

When set to false, messages logged to stderr are colorized based on severity level.

Default: false
--redactable-logs An alias for the --log flag, used to specify whether redaction markers are used in place of secret or sensitive information in log messages. --redactable-logs=XXX is an alias for --log='file-defaults: {redactable: XXX}'. Accepts true or false.

Default: false
--sql-audit-dir An alias for --log, used to optionally confine log output of the SENSITIVE_ACCESS logging channel to a separate directory. --sql-audit-dir=XXX is an alias for --log='sinks: {file-groups: {sql-audit: {channels: SENSITIVE_ACCESS, dir: ...}}}'.

Enabling SENSITIVE_ACCESS logs can negatively impact performance. As a result, we recommend using the SENSITIVE_ACCESS channel for security purposes only. For more information, refer to Security and Audit Monitoring.

Defaults

See the default logging configuration.
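As an illustration of the YAML payload accepted by --log, the following combines the file-directory, file-size, and stderr-filter settings whose aliases are described above into a single flow-style configuration; a sketch only (the directory path is illustrative):

$ cockroach start-single-node \
--insecure \
--log='{file-defaults: {dir: /var/log/cockroach, max-file-size: 2MiB}, sinks: {stderr: {filter: WARNING}}}'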

Docker-specific features of single-node clusters

When you use the cockroach start-single-node command to start a single-node cluster with Docker, some additional features are available to help with testing and development. Refer to Start a local cluster in Docker (Linux) and Start a local cluster in Docker (macOS).

Standard output

When you run cockroach start-single-node, some helpful details are printed to the standard output:

CockroachDB node starting at 
build:               CCL v24.3.0-beta.3 @ 2024-11-05 go1.22.5
webui:               http://localhost:8080
sql:                 postgresql://root@localhost:26257?sslmode=disable
sql (JDBC):          jdbc:postgresql://localhost:26257/defaultdb?sslmode=disable&user=root
RPC client flags:    cockroach <client cmd> --host=localhost:26257 --insecure
logs:                /Users/<username>/node1/logs
temp dir:            /Users/<username>/node1/cockroach-temp242232154
external I/O path:   /Users/<username>/node1/extern
store[0]:            path=/Users/<username>/node1
status:              initialized new cluster
clusterID:           8a681a16-9623-4fc1-a537-77e9255daafd
nodeID:              1
Tip:

These details are also written to the INFO log in the /logs directory. You can retrieve them with a command like grep 'node starting' node1/logs/cockroach.log -A 11.

Field Description
build The version of CockroachDB you are running.
webui The URL for accessing the DB Console.
sql The connection URL for your client.
RPC client flags The flags to use when connecting to the node via cockroach client commands.
logs The directory containing debug log data.
temp dir The temporary store directory of the node.
external I/O path The external IO directory with which the local file access paths are prefixed while performing backup and restore operations using local node directories or NFS drives.
attrs If node-level attributes were specified in the --attrs flag, they are listed in this field. These details are potentially useful for configuring replication zones.
locality If values describing the locality of the node were specified in the --locality field, they are listed in this field. These details are potentially useful for configuring replication zones.
store[n] The directory containing store data, where [n] is the index of the store, e.g., store[0] for the first store, store[1] for the second store.

If store-level attributes were specified in the attrs field of the --store flag, they are listed in this field as well. These details are potentially useful for configuring replication zones.
status Whether the node is the first in the cluster (initialized new cluster), joined an existing cluster for the first time (initialized new node, joined pre-existing cluster), or rejoined an existing cluster (restarted pre-existing node).
clusterID The ID of the cluster.

When trying to join a node to an existing cluster, if this ID is different than the ID of the existing cluster, the node has started a new cluster. This may be due to conflicting information in the node's data directory. For additional guidance, see the troubleshooting docs.
nodeID The ID of the node.

Examples

Start a single-node cluster

  1. Create two directories for certificates:

    $ mkdir certs my-safe-directory
    
    Directory Description
    certs You'll generate your CA certificate and all node and client certificates and keys in this directory.
    my-safe-directory You'll generate your CA key in this directory and then reference the key when generating node and client certificates.
  2. Create the CA (Certificate Authority) certificate and key pair:

    $ cockroach cert create-ca \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  3. Create the certificate and key pair for the node:

    $ cockroach cert create-node \
    localhost \
    $(hostname) \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  4. Create a client certificate and key pair for the root user:

    $ cockroach cert create-client \
    root \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  5. Start the single-node cluster:

    $ cockroach start-single-node \
    --certs-dir=certs \
    --listen-addr=localhost:26257 \
    --http-addr=localhost:8080
    

Alternatively, start the single-node cluster in insecure mode (for non-production testing only; no certificates are required):
$ cockroach start-single-node \
--insecure \
--listen-addr=localhost:26257 \
--http-addr=localhost:8080

Scale to multiple nodes

Scaling a cluster started with cockroach start-single-node involves restarting the first node with the cockroach start command instead, and then adding new nodes with that command as well, all using a --join flag that forms them into a single multi-node cluster. Since replication is disabled in clusters started with start-single-node, you also need to enable replication to get CockroachDB's availability and consistency guarantees.

  1. Stop the single-node cluster:

    Get the process ID of the node:

    ps -ef | grep cockroach | grep -v grep
    
      501 19584     1   0  6:13PM ttys001    0:01.27 cockroach start-single-node --certs-dir=certs --listen-addr=localhost:26257 --http-addr=localhost:8080
    

    Gracefully shut down the node, specifying its process ID:

    kill -TERM 19584
    
    initiating graceful shutdown of server
    server drained and shutdown completed
    
  2. Restart the node with the cockroach start command:

    $ cockroach start \
    --certs-dir=certs \
    --listen-addr=localhost:26257 \
    --http-addr=localhost:8080 \
    --join=localhost:26257,localhost:26258,localhost:26259
    

    The new flag to note is --join, which specifies the addresses and ports of the nodes that will initially comprise your cluster. You'll use this exact --join flag when starting other nodes as well.

    For a cluster in a single region, set 3-5 --join addresses. Each starting node attempts to contact one of the join hosts. If a join host cannot be reached, the node tries another address on the list until it can join the gossip network.

  3. In new terminal windows, add two more nodes:

    $ cockroach start \
    --certs-dir=certs \
    --store=node2 \
    --listen-addr=localhost:26258 \
    --http-addr=localhost:8081 \
    --join=localhost:26257,localhost:26258,localhost:26259
    
    $ cockroach start \
    --certs-dir=certs \
    --store=node3 \
    --listen-addr=localhost:26259 \
    --http-addr=localhost:8082 \
    --join=localhost:26257,localhost:26258,localhost:26259
    

    These commands are the same as before but with unique --store, --listen-addr, and --http-addr flags, since all nodes are running on the same machine. Also, since all nodes use the same hostname (localhost), you can use the first node's certificate. Note that this is different from running a production cluster, where you would need to generate a certificate and key for each node, issued to all common names and IP addresses you might use to refer to the node as well as to any load balancer instances.

  4. Open the built-in SQL shell:

    $ cockroach sql --certs-dir=certs --host=localhost:26257
    
  5. Update preconfigured replication zones to replicate user data 3 times and important internal data 5 times:

    ALTER RANGE default CONFIGURE ZONE USING num_replicas = 3;
    ALTER DATABASE system CONFIGURE ZONE USING num_replicas = 5;
    ALTER RANGE meta CONFIGURE ZONE USING num_replicas = 5;
    ALTER RANGE system CONFIGURE ZONE USING num_replicas = 5;
    ALTER RANGE liveness CONFIGURE ZONE USING num_replicas = 5;
    ALTER TABLE system.public.replication_constraint_stats CONFIGURE ZONE DISCARD;
    ALTER TABLE system.public.replication_constraint_stats CONFIGURE ZONE USING gc.ttlseconds = 600, constraints = '[]', lease_preferences = '[]';
    ALTER TABLE system.public.replication_stats CONFIGURE ZONE DISCARD;
    ALTER TABLE system.public.replication_stats CONFIGURE ZONE USING gc.ttlseconds = 600, constraints = '[]', lease_preferences = '[]';
    
Alternatively, if the cluster was started in insecure mode, the procedure is the same but uses the --insecure flag in place of --certs-dir:

  1. Stop the single-node cluster:

    Get the process ID of the node:

    ps -ef | grep cockroach | grep -v grep
    
      501 19584     1   0  6:13PM ttys001    0:01.27 cockroach start-single-node --insecure --listen-addr=localhost:26257 --http-addr=localhost:8080
    

    Gracefully shut down the node, specifying its process ID:

    kill -TERM 19584
    
    initiating graceful shutdown of server
    server drained and shutdown completed
    
  2. Restart the node with the cockroach start command:

    $ cockroach start \
    --insecure \
    --listen-addr=localhost:26257 \
    --http-addr=localhost:8080 \
    --join=localhost:26257,localhost:26258,localhost:26259
    

    The new flag to note is --join, which specifies the addresses and ports of the nodes that will comprise your cluster. You'll use this exact --join flag when starting other nodes as well.

  3. In new terminal windows, add two more nodes:

    $ cockroach start \
    --insecure \
    --store=node2 \
    --listen-addr=localhost:26258 \
    --http-addr=localhost:8081 \
    --join=localhost:26257,localhost:26258,localhost:26259
    
    $ cockroach start \
    --insecure \
    --store=node3 \
    --listen-addr=localhost:26259 \
    --http-addr=localhost:8082 \
    --join=localhost:26257,localhost:26258,localhost:26259
    

    These commands are the same as before but with unique --store, --listen-addr, and --http-addr flags, since all nodes are running on the same machine.

  4. Open the built-in SQL shell:

    $ cockroach sql --insecure --host=localhost:26257
    
  5. Update preconfigured replication zones to replicate user data 3 times and important internal data 5 times:

    ALTER RANGE default CONFIGURE ZONE USING num_replicas = 3;
    ALTER DATABASE system CONFIGURE ZONE USING num_replicas = 5;
    ALTER RANGE meta CONFIGURE ZONE USING num_replicas = 5;
    ALTER RANGE system CONFIGURE ZONE USING num_replicas = 5;
    ALTER RANGE liveness CONFIGURE ZONE USING num_replicas = 5;
    ALTER TABLE system.public.replication_constraint_stats CONFIGURE ZONE DISCARD;
    ALTER TABLE system.public.replication_constraint_stats CONFIGURE ZONE USING gc.ttlseconds = 600, constraints = '[]', lease_preferences = '[]';
    ALTER TABLE system.public.replication_stats CONFIGURE ZONE DISCARD;
    ALTER TABLE system.public.replication_stats CONFIGURE ZONE USING gc.ttlseconds = 600, constraints = '[]', lease_preferences = '[]';
    ALTER TABLE system.public.tenant_usage CONFIGURE ZONE DISCARD;
    ALTER TABLE system.public.tenant_usage CONFIGURE ZONE USING gc.ttlseconds = 7200, constraints = '[]', lease_preferences = '[]';
    

See also

