RESTORE

Warning:
As of May 12, 2021, CockroachDB v19.2 is no longer supported. For more details, refer to the Release Support Policy.
Warning:
The RESTORE feature is only available to enterprise users. For non-enterprise restores, see Restore Data.

The RESTORE statement restores your cluster's schemas and data from an enterprise BACKUP stored on a service such as AWS S3, Google Cloud Storage, NFS, or HTTP storage.

Because CockroachDB is designed with high fault tolerance, restores are designed primarily for disaster recovery, i.e., restarting your cluster if it loses a majority of its nodes. Isolated issues (such as small-scale node outages) do not require any intervention.

Functional details

Restore targets

You can restore entire tables (which automatically includes their indexes) or views from a backup. This process uses the data stored in the backup to create entirely new tables or views in the target database.

Restoring a database simply restores all of the tables and views that belong to that database; it does not create the database. For more information, see Target Database.

Note:
RESTORE only offers table-level granularity; it does not support restoring subsets of a table.
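
Because restoring a database does not create it, you must create the target database before running the restore. A minimal sketch, reusing the backup URL from the examples later on this page:

> CREATE DATABASE bank;

> RESTORE DATABASE bank FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly';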

Because this process is designed for disaster recovery, CockroachDB expects that the tables do not currently exist in the target database. This means the target database must not have tables or views with the same name as the restored table or view. If any of the restore target's names are being used, you can:

  • Drop or rename the existing tables or views with conflicting names.
  • Restore the tables or views into a different database using the into_db option.

Object dependencies

Dependent objects must be restored at the same time as the objects they depend on.

Object | Depends On
Table with foreign key constraints | The table it REFERENCES (however, this dependency can be removed during the restore).
Table with a sequence | The sequence.
Views | The tables used in the view's SELECT statement.
Interleaved tables | The parent table in the interleaved hierarchy.

Target database

By default, tables and views are restored into a database with the name of the database from which they were backed up. However, also consider:

  • You can control the target database with the into_db option.
  • The target database must not have tables or views with the same name as the tables or views you're restoring.

Users and privileges

Table and view users/privileges are not restored. Restored tables and views instead inherit the privileges of the database into which they're restored.

However, every backup includes system.users, so you can restore users and their passwords.

Table-level privileges must be granted to users after the restore is complete.
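
For example, you might re-grant privileges on a restored table once the restore completes (the user maxroach is hypothetical):

> GRANT SELECT, INSERT ON TABLE bank.customers TO maxroach;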

Restore types

You can either restore from a full backup or from a full backup with incremental backups, based on the backup files you include.

Restore Type | Parameters
Full backup | Include only the path to the full backup.
Full backup + incremental backups | Include the path to the full backup as the first argument and the subsequent incremental backups from oldest to newest as the following arguments.

Point-in-time restore

Warning:

This is a beta feature. It is currently undergoing continued testing. Please file a GitHub issue with us if you identify a bug.

If the full or incremental backup was taken with revision history, you can restore the data as it existed at the specified point-in-time within the revision history captured by that backup.

If you do not specify a point-in-time, the data will be restored to the backup timestamp; that is, the restore will work as if the data was backed up without revision history.

Performance

The RESTORE process minimizes its impact on the cluster's performance by distributing work to all nodes. Subsets of the restored data (known as ranges) are evenly distributed among randomly selected nodes, with each range initially restored to only one node. Once a range is restored, the node begins replicating it to other nodes.

Note:
When a RESTORE fails or is canceled, partially restored data is properly cleaned up. This can have a minor, temporary impact on cluster performance.

Viewing and controlling restore jobs

After CockroachDB successfully initiates a restore, it registers the restore as a job, which you can view with SHOW JOBS.

After the restore has been initiated, you can control it with PAUSE JOB, RESUME JOB, and CANCEL JOB.
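
For example, a minimal sketch of inspecting and then pausing a running restore (the job ID shown is hypothetical):

> SHOW JOBS;

> PAUSE JOB 27536791415282688;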

Note:

If initiated correctly, the statement returns when the restore is finished or if it encounters an error. In some cases, the restore can continue after an error has been returned (the error message will tell you that the restore has resumed in the background).

Synopsis

RESTORE TABLE table_pattern [, ...] | DATABASE database_name [, ...]
    FROM full_backup_location [, incremental_backup_location [, ...]]
    [AS OF SYSTEM TIME timestamp]
    [WITH kv_option_list]
Note:
The RESTORE statement cannot be used within a transaction.

Required privileges

Only members of the admin role can run RESTORE. By default, the root user belongs to the admin role.

Parameters

Parameter | Description
table_pattern | The table or view you want to restore.
database_name | The name of the database you want to restore (i.e., restore all tables and views in the database). You can restore an entire database only if you had backed up the entire database.
full_backup_location | The URL where the full backup is stored. For information about this URL structure, see Backup File URLs.
incremental_backup_location | The URL where an incremental backup is stored. Lists of incremental backups must be sorted from oldest to newest. The newest incremental backup's timestamp must be within the table's garbage collection period. For information about this URL structure, see Backup File URLs. For more information about garbage collection, see Configure Replication Zones.
AS OF SYSTEM TIME timestamp | Restore data as it existed as of timestamp. You can restore point-in-time data only if you had taken a full or incremental backup with revision history.
kv_option_list | Control the restore's behavior with these options.

Backup file URLs

The URL for your backup's locations must use the following format:

[scheme]://[host]/[path]?[parameters]
Location | Scheme | Host | Parameters
Amazon | s3 | Bucket name | AUTH 1 (optional; can be implicit or specified), AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
Azure | azure | N/A (see Example file URLs) | AZURE_ACCOUNT_KEY, AZURE_ACCOUNT_NAME
Google Cloud 2 | gs | Bucket name | AUTH (optional; can be default, implicit, or specified), CREDENTIALS
HTTP 3 | http | Remote host | N/A
NFS/Local 4 | nodelocal | Empty or nodeID 5 (see Example file URLs) | N/A
S3-compatible services 6 | s3 | Bucket name | AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, AWS_REGION 7 (optional), AWS_ENDPOINT
Warning:

If you write to nodelocal storage in a multi-node cluster, individual data files will be written to the extern directories of arbitrary nodes and will likely not work as intended. To work correctly, each node must have the --external-io-dir flag point to the same NFS mount or other network-backed, shared storage.

Note:

The location parameters often contain special characters that need to be URI-encoded. Use JavaScript's encodeURIComponent function or Go's url.QueryEscape function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters.
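
For example, a secret key containing / must have it percent-encoded as %2F before being placed in the URL. The credentials below are AWS's documented sample values, not real keys:

> RESTORE bank.customers \
FROM 's3://acme-co-backup/database-bank-2017-03-27-weekly?AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE&AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI%2FK7MDENG%2FbPxRfiCYEXAMPLEKEY';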

Note:

If your environment requires an HTTP or HTTPS proxy server for outgoing connections, you can set the standard HTTP_PROXY and HTTPS_PROXY environment variables when starting CockroachDB.

  • 1 If the AUTH parameter is not provided, AWS connections default to specified and the access keys must be provided in the URI parameters. If the AUTH parameter is implicit, the access keys can be omitted and the credentials will be loaded from the environment.

  • 2 If the AUTH parameter is not specified, the cloudstorage.gs.default.key cluster setting will be used if it is non-empty, otherwise the implicit behavior is used. If the AUTH parameter is implicit, all GCS connections use Google's default authentication strategy. If the AUTH parameter is default, the cloudstorage.gs.default.key cluster setting must be set to the contents of a service account file which will be used during authentication. If the AUTH parameter is specified, GCS connections are authenticated on a per-statement basis, which allows the JSON key object to be sent in the CREDENTIALS parameter. The JSON key object should be base64-encoded (using the standard encoding in RFC 4648).

  • 3 You can create your own HTTP server with Caddy or nginx. A custom root CA can be appended to the system's default CAs by setting the cloudstorage.http.custom_ca cluster setting, which will be used when verifying certificates from HTTPS URLs.

  • 4 The file system backup location on the NFS drive is relative to the path specified by the --external-io-dir flag set while starting the node. If the flag is set to disabled, then imports from local directories and NFS drives are disabled.

  • 5 The host component of NFS/Local can either be empty or the nodeID. If the nodeID is specified, it is currently ignored (i.e., any node can be sent work and it will look in its local input/output directory); however, the nodeID will likely be required in the future.

  • 6 A custom root CA can be appended to the system's default CAs by setting the cloudstorage.http.custom_ca cluster setting, which will be used when verifying certificates from an S3-compatible service.

  • 7 The AWS_REGION parameter is optional since it is not a required parameter for most S3-compatible services. Specify the parameter only if your S3-compatible service requires it.

Example file URLs

Location | Example
Amazon S3 | s3://acme-co/employees.sql?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456
Azure | azure://employees.sql?AZURE_ACCOUNT_KEY=123&AZURE_ACCOUNT_NAME=acme-co
Google Cloud | gs://acme-co/employees.sql
HTTP | http://localhost:8080/employees.sql
NFS/Local | nodelocal:///path/employees, nodelocal://2/path/employees


Restore option list

You can include the following options as key-value pairs in the kv_option_list to control the restore process's behavior.

into_db

  • Description: If you want to restore a table or view into a database other than the one it originally existed in, you can change the target database. This is useful if you want to restore a table that currently exists, but do not want to drop it.
  • Key: into_db
  • Value: The name of the database you want to use
  • Example: WITH into_db = 'newdb'

skip_missing_foreign_keys

  • Description: If you want to restore a table with a foreign key but do not want to restore the table it references, you can drop the foreign key constraint from the table and then have it restored.
  • Key: skip_missing_foreign_keys
  • Value: No value
  • Example: WITH skip_missing_foreign_keys

skip_missing_sequences

  • Description: If you want to restore a table that depends on a sequence but do not want to restore the sequence it references, you can drop the sequence dependency from a table (i.e., the DEFAULT expression that uses the sequence) and then have the table restored, as shown in the example after this list.
  • Key: skip_missing_sequences
  • Value: No value
  • Example: WITH skip_missing_sequences
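
A hypothetical restore that drops a table's sequence dependency, reusing the backup URL from the examples below:

> RESTORE bank.accounts \
FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \
WITH skip_missing_sequences;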

skip_missing_views

  • Description: New in v19.2 If you want to restore a table with a view but do not want to restore the view's dependencies, you can drop the view and then have the table restored, as shown in the example after this list.
  • Key: skip_missing_views
  • Value: No value
  • Example: WITH skip_missing_views
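
A hypothetical restore that drops dependent views rather than restoring their dependencies, reusing the backup URL from the examples below:

> RESTORE bank.customers \
FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \
WITH skip_missing_views;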

Examples

Restore a single table

> RESTORE bank.customers FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly';

Restore multiple tables

> RESTORE bank.customers, bank.accounts FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly';

Restore an entire database

> RESTORE DATABASE bank FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly';
Note:
RESTORE DATABASE can only be used if the entire database was backed up.

Point-in-time restore

> RESTORE bank.customers FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \
AS OF SYSTEM TIME '2017-02-26 10:00:00';

Restore from incremental backups

> RESTORE bank.customers \
FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly', 'gs://acme-co-backup/database-bank-2017-03-28-nightly', 'gs://acme-co-backup/database-bank-2017-03-29-nightly';
Tip:

Restoring from incremental backups requires previous full and incremental backups. In this example, -weekly is the full backup and the two -nightly are incremental backups.

Point-in-time restore from incremental backups

> RESTORE bank.customers \
FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly', 'gs://acme-co-backup/database-bank-2017-03-28-nightly', 'gs://acme-co-backup/database-bank-2017-03-29-nightly' \
AS OF SYSTEM TIME '2017-02-28 10:00:00';

Restore into a different database

By default, tables and views are restored to the database they originally belonged to. However, using the into_db option, you can control the target database.

> RESTORE bank.customers \
FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \
WITH into_db = 'newdb';

Remove the foreign key before restore

By default, tables with foreign key constraints must be restored at the same time as the tables they reference. However, using the skip_missing_foreign_keys option, you can remove the foreign key constraint from the table and then restore it.

> RESTORE bank.accounts \
FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \
WITH skip_missing_foreign_keys;

Restoring users from system.users backup

The system.users table stores your cluster's usernames and their hashed passwords. To restore them, you must restore the system.users table into a new database because you cannot drop the existing system.users table.

After it's restored into a new database, you can write the restored users table data to the cluster's existing system.users table.

> RESTORE system.users \
FROM 'azure://acme-co-backup/table-users-2017-03-27-full?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co' \
WITH into_db = 'newdb';
> INSERT INTO system.users SELECT * FROM newdb.users;
> DROP TABLE newdb.users;

Restore from a locality-aware backup

New in v19.2: You can create locality-aware backups such that each node writes files only to the backup destination that matches the node locality configured at node startup.

A locality-aware backup is specified by a list of URIs, each of which has a COCKROACH_LOCALITY URL parameter whose single value is either default or a single locality key-value pair such as region=us-east. At least one COCKROACH_LOCALITY must be the default. Given a list of URIs that together contain the locations of all of the files for a single locality-aware backup, RESTORE can read in that backup.

Note that the list of URIs passed to RESTORE may be different from the URIs originally passed to BACKUP. This is because it's possible to move the contents of one of the parts of a locality-aware backup (i.e., the files written to that destination) to a different location, or even to consolidate all the files for a locality-aware backup into a single location.

Note:

RESTORE is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen because BACKUP does not back up zone configurations, so RESTORE has no way of knowing how to take node localities into account when restoring data from a backup.

Example - Restore from a locality-aware backup

For example, a backup created with

BACKUP DATABASE bank TO
    ('s3://us-east-bucket?COCKROACH_LOCALITY=default', 's3://us-west-bucket?COCKROACH_LOCALITY=region%3Dus-west');

can be restored by running:

RESTORE DATABASE bank FROM ('s3://us-east-bucket', 's3://us-west-bucket');

Note that the first URI in the list has to be the URI specified as the default URI when the backup was created. If you have moved your backups to a different location since the backup was originally taken, the first URI must be the new location of the files originally written to the default location.

Example - Restore from an incremental locality-aware backup

A locality-aware backup URI can also be used in place of any incremental backup URI in RESTORE.

For example, an incremental locality-aware backup created with

BACKUP DATABASE bank TO
    ('s3://us-east-bucket/database-bank-2019-10-08-nightly?COCKROACH_LOCALITY=default', 's3://us-west-bucket/database-bank-2019-10-08-nightly?COCKROACH_LOCALITY=region%3Dus-west')
INCREMENTAL FROM
    's3://us-east-bucket/database-bank-2019-10-07-weekly';

can be restored by running:

RESTORE DATABASE bank FROM
    ('s3://us-east-bucket/database-bank-2019-10-07-weekly', 's3://us-west-bucket/database-bank-2019-10-07-weekly'),
    ('s3://us-east-bucket/database-bank-2019-10-08-nightly', 's3://us-west-bucket/database-bank-2019-10-08-nightly');

Note: Restoring from incremental backups requires previous full and incremental backups. In this example, -weekly is the full backup and the two -nightly are incremental backups.

See also

BACKUP
Configure Replication Zones
PAUSE JOB
RESUME JOB
CANCEL JOB
SHOW JOBS