The SHOW JOBS statement lists all of the types of long-running tasks your cluster has performed in the last 12 hours, including:

- Schema changes through ALTER TABLE, DROP DATABASE, DROP TABLE, and TRUNCATE.
- IMPORT.
- Enterprise BACKUP and RESTORE.
- Scheduled backups.
- User-created table statistics created for use by the cost-based optimizer. To view automatic table statistics, use SHOW AUTOMATIC JOBS.

SHOW JOBS now displays newly added columns from crdb_internal.jobs (last_run, next_run, num_runs, and execution_errors). These columns capture state related to retries, failures, and exponential backoff.

These details can help you understand the status of crucial tasks that can impact the performance of your cluster, as well as help you control them.
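For example, to surface jobs that have been retried, you can project these columns using the WITH ... (SHOW JOBS) pattern shown in the examples below (a minimal sketch; adjust the filter as needed):

> WITH x AS (SHOW JOBS) SELECT job_id, job_type, status, num_runs, last_run, next_run FROM x WHERE num_runs > 1 ORDER BY next_run;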
Details for enterprise changefeeds, including the sink URI and the full table name, are not displayed when you run the SHOW JOBS statement. To view these changefeed details, use SHOW CHANGEFEED JOBS.
To block a call to SHOW JOBS until all specified job IDs reach a terminal state, use SHOW JOBS WHEN COMPLETE. The statement returns a row per job ID with details of the job execution. Note that while this statement is blocking, it will time out after 24 hours.
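For example, to wait for all currently running backup jobs to finish, you can pass a selection query that returns their job IDs (a sketch based on the select_stmt parameter described below; substitute your own filter):

> SHOW JOBS WHEN COMPLETE SELECT job_id FROM [SHOW JOBS] WHERE job_type = 'BACKUP' AND status = 'running';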
Considerations
- The SHOW JOBS statement shows only long-running tasks.
- For jobs older than 12 hours, query the crdb_internal.jobs table (see the example after this list).
- For the SHOW JOBS statement, jobs are deleted after 14 days. This interval can be changed via the jobs.retention_time cluster setting. See Show changefeed jobs for changefeed job retention time.
- While the SHOW JOBS WHEN COMPLETE statement is blocking, it will time out after 24 hours.
- Garbage collection jobs are created for dropped tables and dropped indexes, and will execute after the GC TTL has elapsed. These jobs cannot be canceled.
- CockroachDB automatically retries jobs that fail due to retry errors or job coordination failures, with exponential backoff. The jobs.registry.retry.initial_delay cluster setting sets the initial delay between retries and jobs.registry.retry.max_delay sets the maximum delay.
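As a sketch of the query suggested in the second consideration above, the following lists jobs created more than 12 hours ago (assuming they have not yet been removed per jobs.retention_time):

> SELECT job_id, job_type, status, created FROM crdb_internal.jobs WHERE created < now() - INTERVAL '12 hours' ORDER BY created DESC;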
Required privileges
You must have at least one of the following to run SHOW JOBS:

- The VIEWJOB privilege, which can view all jobs (including admin-owned jobs). See the example after this list for granting it.
- Membership in the admin role.
- The CONTROLJOB role option.
- For changefeeds, users with the CHANGEFEED privilege on a set of tables can view changefeed jobs running on those tables.
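For reference, granting the VIEWJOB privilege or the CONTROLJOB role option looks roughly like the following, assuming a hypothetical user maxroach and a CockroachDB version that supports system-level privilege grants:

> GRANT SYSTEM VIEWJOB TO maxroach;
> ALTER USER maxroach WITH CONTROLJOB;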
Synopsis
Parameters
Parameter | Description |
---|---|
SHOW AUTOMATIC JOBS | Show jobs performed for internal CockroachDB operations. See Show automatic jobs. |
SHOW JOBS WHEN COMPLETE | Block SHOW JOB until the provided job ID reaches a terminal state. For an example, see Show job when complete. |
select_stmt | A selection query that specifies the job_id(s) to view. |
job_id | The ID of the job to view. |
for_schedules_clause | The schedule you want to view jobs for. You can view jobs for a specific schedule (FOR SCHEDULE id) or view jobs for multiple schedules by nesting a SELECT clause in the statement (FOR SCHEDULES <select_clause>). For an example, see Show jobs for a schedule. |
SHOW CHANGEFEED JOBS | Show details about enterprise changefeeds, including the sink URI and the full table name. For an example, see Show changefeed jobs. |
Response
The output of SHOW JOBS lists ongoing jobs first, then completed jobs within the last 12 hours. The list of ongoing jobs is sorted by starting time, whereas the list of completed jobs is sorted by finished time.
The following fields are returned for each job:
Field | Description |
---|---|
job_id | A unique ID to identify each job. This value is used if you want to control a job (i.e., pause, resume, or cancel it). |
job_type | The type of job (e.g., SCHEMA CHANGE, NEW SCHEMA CHANGE, KEY VISUALIZER, MIGRATION, BACKUP, RESTORE, IMPORT, CHANGEFEED, CREATE STATS, ROW LEVEL TTL, REPLICATION STREAM INGESTION, or REPLICATION STREAM PRODUCER). For the job types of automatic jobs, see Show automatic jobs. |
description | The statement that started the job, or a textual description of the job. |
statement | When description is a textual description of the job, the statement that started the job is returned in this column. Currently, this field is populated only for the automatic table statistics jobs. |
user_name | The name of the user who started the job. |
status | The job's current state. Possible values: pending, paused, pause-requested, failed, succeeded, canceled, cancel-requested, running, retry-running, retry-reverting, reverting, revert-failed. Refer to Job status for a description of each status. |
running_status | The job's detailed running status, which provides visibility into the progress of the dropping or truncating of tables (i.e., DROP TABLE, DROP DATABASE, or TRUNCATE). For dropping or truncating jobs, the detailed running status is determined by the status of the table at the earliest stage of the schema change. The job is complete when the GC TTL expires and both the table data and ID are deleted for each of the tables involved. Possible values: waiting for MVCC GC, deleting data, waiting for GC TTL, waiting in DELETE-ONLY, waiting in DELETE-AND-WRITE_ONLY, waiting in MERGING, populating schema, validating schema, or NULL (when the status cannot be determined). For the SHOW AUTOMATIC JOBS statement, the value of this field is NULL. |
created | The TIMESTAMPTZ when the job was created. |
started | The TIMESTAMPTZ when the job first began running. |
finished | The TIMESTAMPTZ when the job succeeded, failed, or was canceled. |
modified | The TIMESTAMPTZ when the job record was last updated with the job's progress, or when the job was paused or resumed. |
fraction_completed | The fraction (between 0.00 and 1.00) of the job that has been completed. |
error | If the job failed with a terminal error, this column will contain the error generated by the failure. |
coordinator_id | The ID of the node running the job. |
trace_id | The job's internal trace ID for inflight debugging. Note: This ID can only be used by the Cockroach Labs support team for internal observability. |
last_run | When a job fails with a retryable error, this column will contain the TIMESTAMPTZ of the last attempt to run the job. |
next_run | When a job fails with a retryable error, this column will contain the TIMESTAMPTZ of the next attempt to run the job. |
num_runs | The number of attempts to run the job. |
execution_errors | A list of any retryable errors that the job encountered during its lifetime. |
For details of changefeed-specific responses, see SHOW CHANGEFEED JOBS.
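Because SHOW JOBS can be used as a data source, you can also project just the fields you need with the [SHOW JOBS] pattern used in the examples below (a minimal sketch):

> SELECT job_id, job_type, status, fraction_completed FROM [SHOW JOBS] WHERE status = 'running' ORDER BY fraction_completed;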
Job status
Status | Description |
---|---|
pending | Job is created but has not started running. |
paused | Job is paused. |
pause-requested | A request has been issued to pause the job. The status will move to paused when the node running the job registers the request. |
failed | Job failed to complete. |
succeeded | Job successfully completed. |
canceled | Job was canceled. |
cancel-requested | A request has been issued to cancel the job. The status will move to canceled when the node running the job registers the request. |
running | Job is running. A job that is running will be displayed with its percent completion and time remaining, rather than the RUNNING status. |
retry-running | Job is retrying another job that failed. |
retry-reverting | The retry failed or was canceled and its changes are being reverted. |
reverting | Job failed or was canceled and its changes are being reverted. |
revert-failed | Job encountered a non-retryable error when reverting the changes. It is necessary to manually clean up a job with this status. |
We recommend monitoring paused jobs to protect historical data from garbage collection, or to avoid potential data accumulation in the case of changefeeds. See Monitoring paused jobs for detail on metrics to track paused jobs and protected timestamps.
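As a starting point, you can list jobs that are paused or about to be paused with a filter like the following (a minimal sketch using the WITH ... (SHOW JOBS) pattern from the examples below):

> WITH x AS (SHOW JOBS) SELECT job_id, job_type, description, status FROM x WHERE status IN ('paused', 'pause-requested');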
Examples
Show jobs
> SHOW JOBS;
job_id | job_type | description |...
+---------------+-----------+------------------------------------------------+...
27536791415282 | RESTORE | RESTORE db.* FROM 'azure-blob://backup/db/tbl' |...
Filter jobs
You can filter jobs by using SHOW JOBS as the data source for a SELECT statement, and then filtering the values with the WHERE clause.
> WITH x as (SHOW JOBS) SELECT * FROM x WHERE job_type = 'RESTORE' AND status IN ('running', 'failed') ORDER BY created DESC;
job_id | job_type | description |...
+---------------+-----------+------------------------------------------------+...
27536791415282 | RESTORE | RESTORE db.* FROM 'azure-blob://backup/db/tbl' |...
Show automatic jobs
> SHOW AUTOMATIC JOBS;
job_id | job_type | description |...
+--------------------+---------------------------------+------------------------------------------------------+...
786475982730133505 | AUTO SPAN CONFIG RECONCILIATION | reconciling span configurations |...
786483120403382274 | AUTO SQL STATS COMPACTION | automatic SQL Stats compaction |...
786476180299579393 | AUTO CREATE STATS | Table statistics refresh for movr.public.promo_codes |...
...
(8 rows)
The job types of automatic jobs are:
- AUTO SPAN CONFIG RECONCILIATION: A continuously running job that ensures that all declared zone configurations (ALTER … CONFIGURE ZONE …) are applied. For example, when num_replicas = 7 is set on a table, the reconciliation job listens in on those changes and then informs the underlying storage layer to maintain 7 replicas for the table.
- AUTO SQL STATS COMPACTION: An hourly job that truncates the internal system.statement_statistics and system.transaction_statistics table row counts to the value of the sql.stats.persisted_rows.max cluster setting. Both tables contribute to the crdb_internal.statement_statistics and crdb_internal.transaction_statistics tables, respectively.
- AUTO CREATE STATS: Creates and updates table statistics.
Filter automatic jobs
You can filter jobs by using SHOW AUTOMATIC JOBS as the data source for a SELECT statement, and then filtering the values with the WHERE clause.
> WITH x AS (SHOW AUTOMATIC JOBS) SELECT * FROM x WHERE status = ('succeeded') ORDER BY created DESC;
job_id | job_type | description | statement | user_name | status | ...
786483120403382274 | AUTO SQL STATS COMPACTION | automatic SQL Stats compaction | | node | succeeded | ...
786476180299579393 | AUTO CREATE STATS | Table statistics refresh for movr.public.promo_codes | CREATE STATISTICS __auto__ FROM [110] WITH OPTIONS THROTTLING 0.9 AS OF SYSTEM TIME '-30s' | root | succeeded | ...
...
(7 rows)
Show changefeed jobs
You can display specific fields relating to changefeed jobs by running SHOW CHANGEFEED JOBS. These fields include:

- high_water_timestamp: Guarantees all changes before or at this time have been emitted.
- sink_uri: The destination URI of the configured sink for a changefeed.
- full_table_names: The full name resolution for a table. For example, defaultdb.public.mytable refers to the defaultdb database, the public schema, and the mytable table.
- topics: The topic name to which Kafka and Google Cloud Pub/Sub changefeed messages will emit. If you start a changefeed with the split_column_families option targeting a table with multiple column families, the SHOW CHANGEFEED JOBS output will show the topic name with a family placeholder, for example, topic.{family}.
- format: The format of the changefeed messages, e.g., json, avro.
All changefeed jobs will display regardless of whether the job has completed and when it completed. You can define a retention time and delete completed jobs by using the jobs.retention_time cluster setting.
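For example, to keep completed jobs for three days (a sketch; 72h is an arbitrary value, and the setting applies to all job types):

SET CLUSTER SETTING jobs.retention_time = '72h';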
SHOW CHANGEFEED JOBS;
job_id | description | ...
+----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+ ...
685724608744325121 | CREATE CHANGEFEED FOR TABLE mytable INTO 'kafka://localhost:9092' WITH confluent_schema_registry = 'http://localhost:8081', format = 'avro', resolved, updated | ...
685723987509116929 | CREATE CHANGEFEED FOR TABLE mytable INTO 'kafka://localhost:9092' WITH confluent_schema_registry = 'http://localhost:8081', format = 'avro', resolved, updated | ...
(2 rows)
To show an individual Enterprise changefeed:
SHOW CHANGEFEED JOB {job_id};
job_id | description | user_name | status | running_status | created | started | finished | modified | high_water_timestamp | error | sink_uri | full_table_names | topics | format
---------------------+--------------------------------------------------------------------------------------+-----------+---------+------------------------------------------+----------------------------+----------------------------+----------+----------------------------+--------------------------------+-------+----------------+---------------------+--------+----------
866218332400680961 | CREATE CHANGEFEED FOR TABLE movr.users INTO 'external://aws' WITH format = 'parquet' | root | running | running: resolved=1684438482.937939878,0 | 2023-05-18 14:14:16.323465 | 2023-05-18 14:14:16.360245 | NULL | 2023-05-18 19:35:16.120407 | 1684438482937939878.0000000000 | | external://aws | {movr.public.users} | NULL | parquet
(1 row)
Changefeed jobs can be paused, resumed, altered, or canceled.
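For example, using the first job ID from the output above, you can pause and later resume the changefeed with the standard job control statements:

PAUSE JOB 685724608744325121;
RESUME JOB 685724608744325121;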
Filter changefeed jobs
You can filter jobs by using SHOW CHANGEFEED JOBS as the data source for a SELECT statement, and then filtering the values with a WHERE clause. For example, you can filter by the status of changefeed jobs:
WITH x AS (SHOW CHANGEFEED JOBS) SELECT * FROM x WHERE status = ('paused');
job_id | description | ...
+--------------------+----------------------------------------------------------------------------------+ ...
685723987509116929 | CREATE CHANGEFEED FOR TABLE mytable INTO 'kafka://localhost:9092' WITH confluent | ...
(1 row)
You can filter the columns that SHOW CHANGEFEED JOBS displays using a SELECT statement:
SELECT job_id, sink_uri, status, format FROM [SHOW CHANGEFEED JOBS] WHERE job_id = 997306743028908033;
job_id | sink_uri | status | format
---------------------+------------------+----------+---------
997306743028908033 | external://kafka | running | json
Show schema changes
You can show just schema change jobs by using SHOW JOBS as the data source for a SELECT statement, and then filtering the job_type value with the WHERE clause:
> WITH x AS (SHOW JOBS) SELECT * FROM x WHERE job_type = 'SCHEMA CHANGE';
job_id | job_type | description |...
+----------------+-----------------+----------------------------------------------------+...
27536791415282 | SCHEMA CHANGE | ALTER TABLE test.public.foo ADD COLUMN bar VARCHAR |...
Schema change jobs can be paused, resumed, and canceled.
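For example, a sketch that pauses all currently running schema change jobs by combining PAUSE JOBS with the [SHOW JOBS] pattern used above:

> PAUSE JOBS (SELECT job_id FROM [SHOW JOBS] WHERE job_type = 'SCHEMA CHANGE' AND status = 'running');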
Show job when complete
To block SHOW JOB until the provided job ID reaches a terminal state, use SHOW JOB WHEN COMPLETE:
> SHOW JOB WHEN COMPLETE 27536791415282;
job_id | job_type | description |...
+----------------+-----------+------------------------------------------------+...
27536791415282 | RESTORE | RESTORE db.* FROM 'azure-blob://backup/db/tbl' |...
Show jobs for a schedule
To view jobs for a specific backup schedule, use the schedule's id:
> SHOW JOBS FOR SCHEDULE 590204387299262465;
job_id | job_type | description |...
+--------------------+----------+-------------------------------------------------------------------+...
590205481558802434 | BACKUP | BACKUP INTO '/2020/09/15-161444.99' IN 's3://test/scheduled-backup| ...
(1 row)
You can also view jobs for multiple schedules by nesting a SELECT clause that retrieves id(s) inside the SHOW JOBS statement:
> SHOW JOBS FOR SCHEDULES WITH x AS (SHOW SCHEDULES) SELECT id FROM x WHERE label = 'test_schedule';
job_id | job_type | description |...
+--------------------+-----------+-------------------------------------------+...
590204496007299074 | BACKUP | BACKUP INTO '/2020/09/15-161444.99' IN' |...
(2 rows)