SHOW JOBS

The SHOW JOBS statement lists the long-running tasks your cluster has performed in the last 12 hours, such as schema changes, backups, restores, imports, and changefeeds.

Details for enterprise changefeeds, including the sink URI and full table name, are not displayed in the output of the SHOW JOBS statement. For details about enterprise changefeeds, including the sink URI and the full table name, use SHOW CHANGEFEED JOBS.

To block a call to SHOW JOBS until all specified job ID(s) reach a terminal state, use SHOW JOBS WHEN COMPLETE. The statement returns one row per job ID with details of the job execution. Note that while this statement blocks, it will time out after 24 hours.
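
For example, the following sketch blocks until every currently running RESTORE job finishes, supplying the job IDs through a nested selection query (the filter is illustrative; substitute your own):

> SHOW JOBS WHEN COMPLETE SELECT job_id FROM [SHOW JOBS] WHERE job_type = 'RESTORE' AND status = 'running';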

Considerations

  • The SHOW JOBS statement shows only long-running tasks.
  • For jobs older than 12 hours, query the crdb_internal.jobs table.
  • For the SHOW JOBS statement, jobs are deleted after 14 days. This interval can be changed via the jobs.retention_time cluster setting. See Show changefeed jobs for changefeed job retention time.
  • While the SHOW JOBS WHEN COMPLETE statement is blocking, it will time out after 24 hours.
  • Garbage collection jobs are created for dropped tables and dropped indexes, and will execute after the GC TTL has elapsed. These jobs cannot be canceled.
  • CockroachDB automatically retries jobs that fail due to retry errors or job coordination failures, with exponential backoff. The jobs.registry.retry.initial_delay cluster setting sets the initial delay between retries and jobs.registry.retry.max_delay sets the maximum delay.
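
As an illustration, the retention and retry-delay settings above can be adjusted with SET CLUSTER SETTING. The values shown are examples, not recommendations:

> SET CLUSTER SETTING jobs.retention_time = '720h';
> SET CLUSTER SETTING jobs.registry.retry.initial_delay = '30s';
> SET CLUSTER SETTING jobs.registry.retry.max_delay = '10m';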

Required privileges

You must have at least one of the following to run SHOW JOBS:

  • The VIEWJOB privilege, which allows you to view all jobs (including admin-owned jobs).
  • Membership in the admin role.
  • The CONTROLJOB role option.
  • For changefeeds, users with the CHANGEFEED privilege on a set of tables can view changefeed jobs running on those tables.

Synopsis

SHOW [AUTOMATIC] JOBS [select_stmt] [WITH show_job_options_list]
SHOW JOBS WHEN COMPLETE select_stmt
SHOW JOBS for_schedules_clause
SHOW CHANGEFEED JOBS [select_stmt]
SHOW JOB job_id [WITH show_job_options_list]
SHOW JOB WHEN COMPLETE job_id

Parameters

Parameter Description
SHOW AUTOMATIC JOBS Show jobs performed for internal CockroachDB operations. See Show automatic jobs.
SHOW JOBS WHEN COMPLETE Block SHOW JOBS until the provided job ID(s) reach a terminal state. For an example, see Show job when complete.
select_stmt A selection query that specifies the job_id(s) to view.
job_id The ID of the job to view.
for_schedules_clause The schedule you want to view jobs for. You can view jobs for a specific schedule (FOR SCHEDULE id) or view jobs for multiple schedules by nesting a SELECT clause in the statement (FOR SCHEDULES <select_clause>). For an example, see Show jobs for a schedule.
SHOW CHANGEFEED JOBS Show details about enterprise changefeeds, including the sink URI and the full table name. For an example, see Show changefeed jobs.

Response

The output of SHOW JOBS lists ongoing jobs first, then completed jobs within the last 12 hours. The list of ongoing jobs is sorted by starting time, whereas the list of completed jobs is sorted by finished time.

The following fields are returned for each job:

Field Description
job_id A unique ID to identify each job. This value is used if you want to control jobs (i.e., to pause, resume, or cancel a job).
job_type The type of job: SCHEMA CHANGE, NEW SCHEMA CHANGE, KEY VISUALIZER, MIGRATION, BACKUP, RESTORE, IMPORT, CHANGEFEED, CREATE STATS, ROW LEVEL TTL, REPLICATION STREAM INGESTION, REPLICATION STREAM PRODUCER (physical cluster replication or logical data replication), LOGICAL REPLICATION.

For job types of automatic jobs, see Show automatic jobs.
description The statement that started the job, or a textual description of the job. When you run SHOW JOBS, the description field is limited to 100 characters. To view the full description for a job, run SHOW JOB {job ID}.
statement When description is a textual description of the job, the statement that started the job is returned in this column. Currently, this field is populated only for the automatic table statistics jobs.
user_name The name of the user who started the job.
status The job's current state. Possible values: pending, paused, pause-requested, failed, succeeded, canceled, cancel-requested, running, retry-running, retry-reverting, reverting, revert-failed.

Refer to Jobs status for a description of each status.
running_status The job's detailed running status, which provides visibility into the progress of dropping or truncating tables (i.e., DROP TABLE, DROP DATABASE, or TRUNCATE). For dropping or truncating jobs, the detailed running status is determined by the status of the table at the earliest stage of the schema change. The job is completed when the GC TTL expires and both the table data and ID are deleted for each of the tables involved. Possible values: waiting for MVCC GC, deleting data, waiting for GC TTL, waiting in DELETE-ONLY, waiting in DELETE-AND-WRITE_ONLY, waiting in MERGING, populating schema, validating schema, or NULL (when the status cannot be determined).

For the SHOW AUTOMATIC JOBS statement, the value of this field is NULL.
created The TIMESTAMPTZ when the job was created.
started The TIMESTAMPTZ when the job first began running.
finished The TIMESTAMPTZ when the job succeeded, failed, or was canceled.
modified The TIMESTAMPTZ when the job record was last updated with the job's progress, or when the job was paused or resumed.
fraction_completed The fraction (between 0.00 and 1.00) of the job that's been completed.
error If the job failed with a terminal error, this column will contain the error generated by the failure.
coordinator_id The ID of the node running the job.
trace_id The job's internal trace ID for inflight debugging. Note: This ID can only be used by the Cockroach Labs support team for internal observability.
execution_errors A list of any retryable errors that a job may have encountered during its lifetime.
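
For instance, to check the progress of in-flight jobs, you might select the progress-related fields described above using the filtering pattern shown later on this page (a sketch; adjust the columns and filter to suit):

> WITH x AS (SHOW JOBS) SELECT job_id, job_type, status, fraction_completed FROM x WHERE status = 'running';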

For details of changefeed-specific responses, see SHOW CHANGEFEED JOBS.

Job status

Status Description
pending Job is created but has not started running.
paused Job is paused.
pause-requested A request has been issued to pause the job. The status will move to paused when the node running the job registers the request.
failed Job failed to complete.
succeeded Job successfully completed.
canceled Job was canceled.
cancel-requested A request has been issued to cancel the job. The status will move to canceled when the node running the job registers the request.
running Job is running. A job that is running will be displayed with its percent completion and time remaining, rather than the RUNNING status.
retry-running Job is retrying another job that failed.
retry-reverting The retry failed or was canceled and its changes are being reverted.
reverting Job failed or was canceled and its changes are being reverted.
revert-failed Job encountered a non-retryable error when reverting the changes. It is necessary to manually clean up a job with this status.
Note:

We recommend monitoring paused jobs to protect historical data from garbage collection, or potential data accumulation in the case of changefeeds. See Monitoring paused jobs for detail on metrics to track paused jobs and protected timestamps.
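
For example, a quick way to list currently paused jobs and when they were last modified (a sketch; for metrics-based monitoring, see Monitoring paused jobs):

> WITH x AS (SHOW JOBS) SELECT job_id, job_type, modified FROM x WHERE status = 'paused';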

Examples

Show jobs

> SHOW JOBS;
    job_id      | job_type  |               description                      |...
+---------------+-----------+------------------------------------------------+...
 27536791415282 |  RESTORE  | RESTORE db.* FROM 'azure-blob://backup/db/tbl' |...

Filter jobs

You can filter jobs by using SHOW JOBS as the data source for a SELECT statement, and then filtering the values with the WHERE clause.

> WITH x as (SHOW JOBS) SELECT * FROM x WHERE job_type = 'RESTORE' AND status IN ('running', 'failed') ORDER BY created DESC;
    job_id      | job_type  |              description                       |...
+---------------+-----------+------------------------------------------------+...
 27536791415282 |  RESTORE  | RESTORE db.* FROM 'azure-blob://backup/db/tbl' |...

Show automatic jobs

> SHOW AUTOMATIC JOBS;
    job_id           |       job_type                  |                    description                       |...
+--------------------+---------------------------------+------------------------------------------------------+...
  786475982730133505 | AUTO SPAN CONFIG RECONCILIATION | reconciling span configurations                      |...
  786483120403382274 | AUTO SQL STATS COMPACTION       | automatic SQL Stats compaction                       |...
  786476180299579393 | AUTO CREATE STATS               | Table statistics refresh for movr.public.promo_codes |...
...
(8 rows)

The job types of automatic jobs are:

  • AUTO SPAN CONFIG RECONCILIATION: A continuously running job that ensures that all declared zone configurations (ALTER … CONFIGURE ZONE …) are applied. For example, when num_replicas = 7 is set on a table, the reconciliation job listens in on those changes and then informs the underlying storage layer to maintain 7 replicas for the table.
  • AUTO SQL STATS COMPACTION: An hourly job that truncates the internal system.statement_statistics and system.transaction_statistics tables to the row count set by the sql.stats.persisted_rows.max cluster setting. These tables back the crdb_internal.statement_statistics and crdb_internal.transaction_statistics tables, respectively.
  • AUTO CREATE STATS: Creates and updates table statistics.

Filter automatic jobs

You can filter jobs by using SHOW AUTOMATIC JOBS as the data source for a SELECT statement, and then filtering the values with the WHERE clause.

> WITH x AS (SHOW AUTOMATIC JOBS) SELECT * FROM x WHERE status = ('succeeded') ORDER BY created DESC;
        job_id       |         job_type          |                             description                             |                                         statement                                          | user_name |  status   | ...
  786483120403382274 | AUTO SQL STATS COMPACTION | automatic SQL Stats compaction                                      |                                                                                            | node      | succeeded | ...
  786476180299579393 | AUTO CREATE STATS         | Table statistics refresh for movr.public.promo_codes                | CREATE STATISTICS __auto__ FROM [110] WITH OPTIONS THROTTLING 0.9 AS OF SYSTEM TIME '-30s' | root      | succeeded | ...
...
(7 rows)

Show changefeed jobs

You can display specific fields relating to changefeed jobs by running SHOW CHANGEFEED JOBS. These fields include:

  • high_water_timestamp: Guarantees all changes before or at this time have been emitted.
  • sink_uri: The destination URI of the configured sink for a changefeed.
  • full_table_names: The full name resolution for a table. For example, defaultdb.public.mytable refers to the defaultdb database, the public schema, and the table mytable.
  • topics: The topic name to which Kafka and Google Cloud Pub/Sub changefeed messages are emitted. If you start a changefeed with the split_column_families option targeting a table with multiple column families, the SHOW CHANGEFEED JOBS output will show the topic name with a family placeholder. For example, topic.{family}.
  • format: The format of the changefeed messages, e.g., json, avro.

All changefeed jobs display regardless of whether or when they completed. You can define a retention time after which completed jobs are deleted by using the jobs.retention_time cluster setting.

SHOW CHANGEFEED JOBS;
    job_id             |                                                                                   description                                                                  | ...
+----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+ ...
  685724608744325121   | CREATE CHANGEFEED FOR TABLE mytable INTO 'kafka://localhost:9092' WITH confluent_schema_registry = 'http://localhost:8081', format = 'avro', resolved, updated | ...
  685723987509116929   | CREATE CHANGEFEED FOR TABLE mytable INTO 'kafka://localhost:9092' WITH confluent_schema_registry = 'http://localhost:8081', format = 'avro', resolved, updated | ...
(2 rows)

To show an individual enterprise changefeed:

SHOW CHANGEFEED JOB {job_id};
        job_id       |                                     description                                      | user_name | status  |              running_status              |          created           |          started           | finished |          modified          |      high_water_timestamp      | error |    sink_uri    |  full_table_names   | topics | format
---------------------+--------------------------------------------------------------------------------------+-----------+---------+------------------------------------------+----------------------------+----------------------------+----------+----------------------------+--------------------------------+-------+----------------+---------------------+--------+----------
  866218332400680961 | CREATE CHANGEFEED FOR TABLE movr.users INTO 'external://aws' WITH format = 'parquet' | root      | running | running: resolved=1684438482.937939878,0 | 2023-05-18 14:14:16.323465 | 2023-05-18 14:14:16.360245 | NULL     | 2023-05-18 19:35:16.120407 | 1684438482937939878.0000000000 |       | external://aws | {movr.public.users} | NULL   | parquet
(1 row)

Changefeed jobs can be paused, resumed, altered, or canceled.

Filter changefeed jobs

You can filter jobs by using SHOW CHANGEFEED JOBS as the data source for a SELECT statement, and then filtering the values with a WHERE clause. For example, you can filter by the status of changefeed jobs:

WITH x AS (SHOW CHANGEFEED JOBS) SELECT * FROM x WHERE status = ('paused');
    job_id           |                                                              description         | ...
+--------------------+----------------------------------------------------------------------------------+ ...
  685723987509116929 | CREATE CHANGEFEED FOR TABLE mytable INTO 'kafka://localhost:9092' WITH confluent | ...
(1 row)

You can filter the columns that SHOW CHANGEFEED JOBS displays using a SELECT statement:

SELECT job_id, sink_uri, status, format FROM [SHOW CHANGEFEED JOBS] WHERE job_id = 997306743028908033;
        job_id       |    sink_uri      | status   | format
---------------------+------------------+----------+---------
  997306743028908033 | external://kafka | running  | json

Show schema changes

You can show just schema change jobs by using SHOW JOBS as the data source for a SELECT statement, and then filtering the job_type value with the WHERE clause:

> WITH x AS (SHOW JOBS) SELECT * FROM x WHERE job_type = 'SCHEMA CHANGE';
    job_id       | job_type        |              description                           |...
+----------------+-----------------+----------------------------------------------------+...
  27536791415282 |  SCHEMA CHANGE  | ALTER TABLE test.public.foo ADD COLUMN bar VARCHAR |...

Schema change jobs can be paused, resumed, and canceled.

Show job when complete

To block SHOW JOB until the provided job ID reaches a terminal state, use SHOW JOB WHEN COMPLETE:

> SHOW JOB WHEN COMPLETE 27536791415282;
    job_id       | job_type  |               description                      |...
+----------------+-----------+------------------------------------------------+...
  27536791415282 |  RESTORE  | RESTORE db.* FROM 'azure-blob://backup/db/tbl' |...

Show jobs for a schedule

To view jobs for a specific backup schedule, use the schedule's id:

> SHOW JOBS FOR SCHEDULE 590204387299262465;
    job_id           | job_type |              description                                          |...
+--------------------+----------+-------------------------------------------------------------------+...
  590205481558802434 | BACKUP   | BACKUP INTO '/2020/09/15-161444.99' IN 's3://test/scheduled-backup| ...
(1 row)

You can also view multiple schedules by nesting a SELECT clause that retrieves id(s) inside the SHOW JOBS statement:

> SHOW JOBS FOR SCHEDULES WITH x AS (SHOW SCHEDULES) SELECT id FROM x WHERE label = 'test_schedule';
    job_id           | job_type  |              description                  |...
+--------------------+-----------+-------------------------------------------+...
  590204496007299074 | BACKUP    | BACKUP INTO '/2020/09/15-161444.99' IN'   |...
(2 rows)

See also

