IMPORT INTO

Warning:
As of November 24, 2023, CockroachDB v22.1 is no longer supported. For more details, refer to the Release Support Policy.

The IMPORT INTO statement imports CSV, Avro, or delimited data into an existing table, by appending new rows into the table.

Considerations

  • IMPORT INTO works for existing tables. To import data into new tables, read the following Import into a new table from a CSV file example.
  • IMPORT INTO takes the table offline before importing the data. The table will be online again once the job has completed successfully.
  • IMPORT INTO cannot be used during a rolling upgrade.
  • IMPORT INTO is a blocking statement. To run an IMPORT INTO job asynchronously, use the DETACHED option.
  • IMPORT INTO invalidates all foreign keys on the target table. To validate the foreign key(s), use the VALIDATE CONSTRAINT statement, as shown in the example after this list.
  • IMPORT INTO is an insert-only statement; it cannot be used to update existing rows—see UPDATE. Imported rows cannot conflict with primary keys in the existing table, or any other UNIQUE constraint on the table.
  • IMPORT INTO does not offer SELECT or WHERE clauses to specify subsets of rows. To do this, use INSERT.
  • IMPORT INTO will cause any changefeeds running on the targeted table to fail.
  • See the IMPORT page for guidance on importing PostgreSQL and MySQL dump files.
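
For example, a minimal sketch of re-validating a foreign key after an import, assuming a hypothetical customers table with a foreign key constraint named customers_city_fkey (use SHOW CONSTRAINTS to find the actual constraint names on your table):

> SHOW CONSTRAINTS FROM customers;

> ALTER TABLE customers VALIDATE CONSTRAINT customers_city_fkey;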

IMPORT INTO now supports importing into REGIONAL BY ROW tables.

Note:

Optimize import operations in your applications by following our Import Performance Best Practices.

Required privileges

Table privileges

The user must have the INSERT and DROP privileges on the specified table. (DROP is required because the table is taken offline during the IMPORT INTO.)
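
For example, a minimal sketch granting the required table privileges to a hypothetical user named exampleuser before that user runs IMPORT INTO on a customers table:

> GRANT INSERT, DROP ON TABLE customers TO exampleuser;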

Source privileges

The source file URL does not require the ADMIN role in the following scenarios:

  • S3 and GS using SPECIFIED (and not IMPLICIT) credentials. Azure is always SPECIFIED by default.
  • Userfile
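
For example, a minimal sketch of an import from userfile storage, which does not require the ADMIN role, assuming customers.csv was previously uploaded to the current user's userfile storage (for example, with cockroach userfile upload):

> IMPORT INTO customers (id, name)
    CSV DATA ('userfile:///customers.csv');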

The source file URL does require the ADMIN role in the following scenarios:

  • S3 or GS using IMPLICIT credentials.
  • Use of a custom endpoint on S3.
  • Nodelocal.
  • HTTP or HTTPS.

Warning:

While Cockroach Labs actively tests Amazon S3, Google Cloud Storage, and Azure Storage, we do not test S3-compatible services (e.g., MinIO, Red Hat Ceph).

Learn more about cloud storage for bulk operations.

Synopsis

IMPORT INTO table_name [( column_name [, ...] )]
    { CSV | AVRO | DELIMITED } DATA ( file_location [, ...] )
    [ WITH option [= value] [, ...] ]

Note:

While importing into an existing table, the table is taken offline.

Parameters

  • table_name: The name of the table you want to import into.
  • column_name: The table columns you want to import. Note: Currently, target columns are not enforced.
  • file_location: The URL of a CSV or Avro file containing the table data. This can be a comma-separated list of URLs. For an example, see Import into an existing table from multiple CSV files below.
  • <option> [= <value>]: Control your import's behavior with import options.

Delimited data files

The DELIMITED DATA format can be used to import delimited data from any text file type, while ignoring characters that need to be escaped, like the following:

  • The file's delimiter (\t by default)
  • Double quotes (")
  • Newline (\n)
  • Carriage return (\r)

For examples showing how to use the DELIMITED DATA format, see the Examples section below.

Import options

You can control the IMPORT process's behavior using any of the following key-value pairs as an <option> [= <value>].

  • delimiter (CSV DATA): The unicode character that delimits columns in your rows. Default: ,.
    Example: To use tab-delimited values: IMPORT INTO foo (..) CSV DATA ('file.csv') WITH delimiter = e'\t';
  • comment (CSV DATA): The unicode character that identifies rows to skip.
    Example: IMPORT INTO foo (..) CSV DATA ('file.csv') WITH comment = '#';
  • nullif (CSV DATA, DELIMITED DATA): The string that should be converted to NULL.
    Example: To use empty columns as NULL: IMPORT INTO foo (..) CSV DATA ('file.csv') WITH nullif = '';
  • skip (CSV DATA, DELIMITED DATA): The number of rows to be skipped while importing a file. Default: '0'.
    Example: To import CSV files with column headers: IMPORT INTO ... CSV DATA ('file.csv') WITH skip = '1';
  • decompress (General): The decompression codec to be used: gzip, bzip, auto, or none. Default: 'auto', which guesses based on file extension (.gz, .bz, .bz2). none disables decompression.
    Example: IMPORT INTO ... WITH decompress = 'bzip';
  • rows_terminated_by (DELIMITED DATA): The unicode character that indicates new lines in the input file. Default: \n
    Example: IMPORT INTO ... WITH rows_terminated_by='\m';
  • fields_terminated_by (DELIMITED DATA): The unicode character used to separate fields in each input line. Default: \t
    Example: IMPORT INTO ... WITH fields_terminated_by='.';
  • fields_enclosed_by (DELIMITED DATA): The unicode character that encloses fields. Default: "
    Example: IMPORT INTO ... WITH fields_enclosed_by='"';
  • fields_escaped_by (DELIMITED DATA): The unicode character that, when it precedes one of the above DELIMITED DATA characters, causes that character to be interpreted literally.
    Example: IMPORT INTO ... WITH fields_escaped_by='\';
  • strict_validation (AVRO DATA): Rejects Avro records that do not have a one-to-one mapping between Avro fields and the target CockroachDB schema. By default, CockroachDB ignores unknown Avro fields and sets missing SQL fields to NULL. CockroachDB will also attempt to convert the Avro field to the CockroachDB data type; otherwise, it will report an error.
    Example: IMPORT INTO foo (..) AVRO DATA ('file.avro') WITH strict_validation;
  • records_terminated_by (AVRO DATA): The unicode character that indicates new lines in the input binary or JSON file. Not needed for Avro OCF. Default: \n
    Example: To use tab-terminated records: IMPORT INTO foo (..) AVRO DATA ('file.csv') WITH records_terminated_by = e'\t';
  • data_as_binary_records (AVRO DATA): Use when importing a binary file containing Avro records. The schema is not included in the file, so you need to specify the schema with either the schema or schema_uri option.
    Example: IMPORT INTO foo (..) AVRO DATA ('file.bjson') WITH data_as_binary_records, schema_uri='..';
  • data_as_json_records (AVRO DATA): Use when importing a JSON file containing Avro records. The schema is not included in the file, so you need to specify the schema with either the schema or schema_uri option.
    Example: IMPORT INTO foo (..) AVRO DATA ('file.bjson') WITH data_as_json_records, schema='{ "type": "record",..}';
  • schema (AVRO DATA): The schema of the Avro records included in the binary or JSON file. Not needed for Avro OCF. See the data_as_json_records example above.
  • schema_uri (AVRO DATA): The URI of the file containing the schema of the Avro records included in the binary or JSON file. Not needed for Avro OCF. See the data_as_binary_records example above.
  • DETACHED: When an import runs in DETACHED mode, it executes asynchronously, and the job ID is returned immediately without waiting for the job to finish. Note that with DETACHED specified, further job information and the job completion status will not be returned. To check on the job status, use the SHOW JOBS statement. To run an import within a transaction, use the DETACHED option.
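
For instance, a minimal sketch of starting an import asynchronously with the DETACHED option and then checking on the job, assuming a hypothetical customers table and the same S3 placeholders used in the examples below:

> IMPORT INTO customers (id, name)
    CSV DATA (
      's3://{BUCKET NAME}/{customers.csv}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
    )
    WITH DETACHED;

> SHOW JOBS;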

For examples showing how to use these options, see the Examples section.

For instructions and working examples showing how to migrate data from other databases and formats, see the Migration Overview.

Requirements

Before you begin

Before using IMPORT INTO, you should have:

  • An existing table to import into (use CREATE TABLE).

    IMPORT INTO supports computed columns and the DEFAULT expressions listed below.

  • The CSV or Avro data you want to import, preferably hosted on cloud storage. This location must be equally accessible to all nodes using the same import file location. This is necessary because the IMPORT INTO statement is issued once by the client, but is executed concurrently across all nodes of the cluster. For more information, see the Import file location section below.

Supported DEFAULT expressions

IMPORT INTO supports computed columns and the following DEFAULT expressions:

  • DEFAULT expressions with user-defined types.

  • Constant DEFAULT expressions, which are expressions that return the same value in different statements. Examples include:

    • Literals (booleans, strings, integers, decimals, dates)
    • Functions where each argument is a constant expression and the functions themselves depend solely on their arguments (e.g., arithmetic operations, boolean logical operations, string operations).
  • Current TIMESTAMP functions that record the transaction timestamp, which include:

    • current_date()
    • current_timestamp()
    • localtimestamp()
    • now()
    • statement_timestamp()
    • timeofday()
    • transaction_timestamp()
  • random()

  • gen_random_uuid()

  • unique_rowid()

  • nextval()
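
For instance, a minimal sketch (using hypothetical table and file names) in which the columns omitted from the import are filled in by supported DEFAULT expressions and a computed column:

> CREATE TABLE orders (
    id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
    created_at TIMESTAMPTZ DEFAULT now(),
    amount DECIMAL,
    amount_with_tax DECIMAL AS (amount * 1.1) STORED
  );

> IMPORT INTO orders (amount)
    CSV DATA (
      's3://{BUCKET NAME}/{orders.csv}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
    );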

Available storage

Each node in the cluster is assigned an equal part of the imported data, and so must have enough temp space to store it. In addition, data is persisted as a normal table, and so there must also be enough space to hold the final, replicated data. The node's first-listed/default store directory must have enough available storage to hold its portion of the data.

On cockroach start, if you set --max-disk-temp-storage, it must also be greater than the portion of the data a node will store in temp space.

Import file location

CockroachDB uses the URL provided to construct a secure API call to the service you specify. The URL structure depends on the type of file storage you are using. For more information, see Use Cloud Storage for Bulk Operations.

Performance

  • All nodes are used during the import job, which means all nodes' CPU and RAM will be partially consumed by the IMPORT task in addition to serving normal traffic.
  • To improve performance, import at least as many files as you have nodes (i.e., there is at least one file for each node to import) to increase parallelism.
  • To further improve performance, order the data in the imported files by primary key and ensure the primary keys do not overlap between files.
  • New in v22.1: An import job will pause if a node in the cluster runs out of disk space. See Viewing and controlling import jobs for information on resuming and showing the progress of import jobs.
  • New in v22.1: An import job will pause instead of entering a failed state if it continues to encounter transient errors once it has retried a maximum number of times. Once the import has paused, you can either resume or cancel it.

For more detail on optimizing import performance, see Import Performance Best Practices.

Viewing and controlling import jobs

After CockroachDB successfully initiates an import into an existing table, it registers the import as a job, which you can view with SHOW JOBS.

After the import has been initiated, you can control it with PAUSE JOB, RESUME JOB, and CANCEL JOB.

If initiated correctly, the statement returns when the import is finished or if it encounters an error. In some cases, the import can continue after an error has been returned (the error message will tell you that the import has resumed in the background).
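
For example, a minimal sketch of listing import jobs and pausing one, using a hypothetical job ID:

> WITH jobs AS (SHOW JOBS) SELECT job_id, status, description FROM jobs WHERE job_type = 'IMPORT';

> PAUSE JOB 27536791415282;  -- replace with the job ID returned by SHOW JOBS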

Warning:

Pausing and then resuming an IMPORT INTO job will cause it to restart from the beginning.

Examples

We recommend reading the Considerations section for important details when working with IMPORT INTO.

The following examples make use of Amazon S3 cloud storage.

Import into a new table from a CSV file

To import into a new table, use CREATE TABLE followed by IMPORT INTO.

Note:

Certain IMPORT TABLE statements that defined the table schema inline are not supported in v22.1+. We recommend using the following example to import data into a new table.

First, create the new table with the necessary columns and data types:

CREATE TABLE users (
        id UUID PRIMARY KEY,
        city STRING,
        name STRING,
        address STRING,
        credit_card STRING
      );

Next, use IMPORT INTO to import the data into the new table:

IMPORT INTO users (id, city, name, address, credit_card)
     CSV DATA (
       's3://{BUCKET NAME}/{customers.csv}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
     );

Import into an existing table from a CSV file

> IMPORT INTO customers (id, name)
    CSV DATA (
      's3://{BUCKET NAME}/{customers.csv}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
    );
Note:

The column order in your IMPORT statement must match the column order in the CSV being imported, regardless of the order in the existing table's schema.
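
For illustration, a minimal sketch assuming a hypothetical customers_name_first.csv whose columns appear in the order name,id (the reverse of the table's schema order); the column list in the statement follows the file:

> IMPORT INTO customers (name, id)
    CSV DATA (
      's3://{BUCKET NAME}/{customers_name_first.csv}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
    );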

Import into an existing table from multiple CSV files

> IMPORT INTO customers (id, name)
    CSV DATA (
      's3://{BUCKET NAME}/{customers.csv}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}',
      's3://{BUCKET NAME}/{customers2.csv}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}',
      's3://{BUCKET NAME}/{customers3.csv}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}',
      's3://{BUCKET NAME}/{customers4.csv}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
    );

Import into an existing table using a wildcard

You can specify file patterns to match instead of explicitly listing every file. Paths are matched using the * wildcard character to include matching files directly under the specified path. Use a wildcard to include:

  • All files in a given directory (e.g., s3://bucket-name/path/to/data/*).
  • All files in a given directory that end with a given string (e.g., s3://bucket-name/files/*.csv).
  • All files in a given directory that start with a given string (e.g., s3://bucket-name/files/data*).
  • All files in a given directory that start and end with a given string (e.g., s3://bucket-name/files/data*.csv).

These only match files directly under the specified path and do not descend into additional directories recursively.

IMPORT INTO users (id, city, name, address, credit_card)
  CSV DATA (
    's3://{BUCKET NAME}/*.csv?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
  );

Import into an existing table from an Avro file

Avro OCF data, JSON records, or binary records can be imported. The following example imports Avro OCF data into an existing table:

> IMPORT INTO customers
    AVRO DATA (
      's3://{BUCKET NAME}/{customers.avro}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
    );

For more information about importing data from Avro, including examples, see Migrate from Avro.

Import into an existing table from a delimited data file

> IMPORT INTO customers
    DELIMITED DATA (
      's3://{BUCKET NAME}/{customers.csv}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
    )
    WITH
      fields_terminated_by='|',
      fields_enclosed_by='"',
      fields_escaped_by='\';

Known limitations

  • You cannot import into a table with partial indexes.
  • While importing into an existing table, the table is taken offline.
  • After importing into an existing table, constraints will be un-validated and need to be re-validated.
  • Imported rows must not conflict with existing rows in the table or any unique secondary indexes.
  • IMPORT INTO works for only a single existing table.
  • IMPORT INTO can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the kv.bulk_io_write.max_rate cluster setting to a value below your max disk write speed. For example, to set it to 10MB/s, execute:

    > SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';
    
