Monitor account activity with system tables
This article explains the concept of system tables in Databricks and highlights resources you can use to get the most out of your system tables data.
What are system tables?
System tables are a Databricks-hosted analytical store of your account's operational data, found in the `system` catalog. System tables can be used for historical observability across your account.
Note
For documentation on `system.information_schema`, see Information schema.
Requirements
To access system tables, your workspace must be enabled for Unity Catalog. For more information, see Enable system table schemas.
For system tables GCP project numbers and identities, see Private Service Connect (PSC) attachment URIs and project numbers.
Which system tables are available?
Currently, Databricks hosts the following system tables:
| Table | Description | Location | Supports streaming | Free retention period | Includes global or regional data |
|---|---|---|---|---|---|
| Audit logs (Public Preview) | Includes records for all audit events across your Databricks account. For a list of available audit events, see Audit log reference. | | Yes | 365 days | Regional for workspace-level events. Global for account-level events. |
| Table lineage (Public Preview) | Includes a record for each read or write event on a Unity Catalog table or path. | | Yes | 365 days | Regional |
| Column lineage (Public Preview) | Includes a record for each read or write event on a Unity Catalog column (but does not include events that do not have a source). | | Yes | 365 days | Regional |
| Billable usage | Includes records for all billable usage across your account. | | Yes | 365 days | Global |
| Pricing | A historical log of SKU pricing. A record is added each time a SKU price changes. | | No | N/A | Global |
| Clusters (Public Preview) | A slow-changing dimension table that contains the full history of compute configurations over time for any cluster. | | Yes | None | Regional |
| Node timeline (Public Preview) | Captures the utilization metrics of your all-purpose and jobs compute resources. | | Yes | 30 days | Regional |
| Node types (Public Preview) | Captures the currently available node types with their basic hardware information. | | No | N/A | Regional |
| SQL warehouses (Public Preview) | Contains the full history of configurations over time for any SQL warehouse. | | Yes | 365 days | Regional |
| SQL warehouse events (Public Preview) | Captures events related to SQL warehouses, such as starting, stopping, running, and scaling up or down. | | Yes | 365 days | Regional |
| Jobs (Public Preview) | Tracks all jobs created in the account. | | Yes | 365 days | Regional |
| Job tasks (Public Preview) | Tracks all job tasks that run in the account. | | Yes | 365 days | Regional |
| Job run timeline (Public Preview) | Tracks the start and end times of job runs. | | Yes | 365 days | Regional |
| Job task timeline (Public Preview) | Tracks the start and end times and compute resources used for job run tasks. | | Yes | 365 days | Regional |
| Marketplace funnel events (Public Preview) | Includes consumer impression and funnel data for your listings. | | Yes | 365 days | Regional |
| Marketplace listing access (Public Preview) | Includes consumer info for completed request data or get data events on your listings. | | Yes | 365 days | Regional |
| Assistant events (Public Preview) | Tracks user messages sent to the Databricks Assistant. | | No | 365 days | Regional |
| Query history (Public Preview) | Captures records for all queries run on SQL warehouses. | | No | 90 days | Regional |
| Model serving endpoint usage (Public Preview) | Captures token counts for each request to a model serving endpoint and its responses. To capture endpoint usage in this table, you must enable usage tracking on your serving endpoint. | | Yes | 90 days | Regional |
| Model serving endpoint data (Public Preview) | A slow-changing dimension table that stores metadata for each served external model in a model serving endpoint. | | Yes | 365 days | Regional |
The billable usage and pricing tables are free to use. Tables in Public Preview are also free to use during the preview but could incur a charge in the future.
Note
You may see other system tables in your account besides the ones listed above. Those tables are currently in Private Preview and are empty by default. If you are interested in using any of these tables, please reach out to your Databricks account team.
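As a sketch of how these tables are typically queried together, the following estimates spend per SKU over the last 30 days by joining billable usage against list prices. The table paths (`system.billing.usage`, `system.billing.list_prices`) and column names (`usage_quantity`, `usage_start_time`, `pricing.default`, `price_start_time`, `price_end_time`) are assumptions based on the commonly documented schemas; verify them against the billing schema in your account before relying on the results:

```sql
-- Hypothetical cost estimate per SKU for the last 30 days.
-- Table and column names are assumptions; check the billable usage
-- and pricing table schemas in your own system catalog.
SELECT
  u.sku_name,
  SUM(u.usage_quantity * p.pricing.default) AS estimated_cost
FROM system.billing.usage AS u
JOIN system.billing.list_prices AS p
  ON u.sku_name = p.sku_name
 AND u.usage_start_time >= p.price_start_time
 AND (p.price_end_time IS NULL OR u.usage_start_time < p.price_end_time)
WHERE u.usage_date >= current_date() - INTERVAL 30 DAYS
GROUP BY u.sku_name
ORDER BY estimated_cost DESC;
```

Because the pricing table is a historical log, the join restricts each usage record to the price that was in effect when the usage occurred.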
Enable system table schemas
Since system tables are governed by Unity Catalog, you need to have at least one Unity Catalog-enabled workspace in your account to enable and access system tables. System tables include data from all workspaces in your account but they can only be accessed from a Unity Catalog-enabled workspace.
System tables are enabled at the schema level. If you enable a system schema, you enable all the tables within that schema. When new schemas are released, an account admin needs to manually enable the schema.
System tables must be enabled by an account admin. You can enable system tables using the `system-schemas` commands in the Databricks CLI or using the SystemSchemas API.
Note
The `billing` schema is enabled by default. Other schemas must be enabled manually.
Use the List system schemas API to view which schemas are available and enabled in your account. Each schema has one of the following states:

- `state: AVAILABLE`: The system schema is available but has not yet been enabled.
- `state: EnableCompleted`: You have enabled the system schema and it is visible in Catalog Explorer.

Use the Enable system schema API to add a schema to your `system` catalog. Use the Disable system schema API to remove a schema from your `system` catalog.
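Once a schema is enabled, you can confirm it is visible from any Unity Catalog-enabled workspace with standard SQL, for example:

```sql
-- List the system schemas that are enabled and visible to you.
SHOW SCHEMAS IN system;

-- List the tables inside an enabled schema (billing is enabled
-- by default).
SHOW TABLES IN system.billing;
```

Schemas that are available but not yet enabled do not appear in this listing.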
Grant access to system tables
Access to system tables is governed by Unity Catalog. No user has access to these system schemas by default. To grant access, a user who is both a metastore admin and an account admin must grant `USE` and `SELECT` permissions on the system schemas. See Manage privileges in Unity Catalog.
System tables are read-only and cannot be modified.
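A minimal sketch of such a grant, assuming an `access` system schema and a hypothetical `data-analysts` group (the group name is an illustration, not part of your account by default):

```sql
-- Run as a user who is both a metastore admin and an account admin.
-- `data-analysts` is a hypothetical group name.
GRANT USE CATALOG ON CATALOG system TO `data-analysts`;
GRANT USE SCHEMA ON SCHEMA system.access TO `data-analysts`;
GRANT SELECT ON SCHEMA system.access TO `data-analysts`;
```

Granting at the schema level covers every table in that schema, including tables added to it later.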
Note
If your account was created after March 6, 2024, you might not have a metastore admin by default. For more information, see Set up and manage Unity Catalog.
Do system tables contain data for all workspaces in your account?
System tables contain operational data for all workspaces in your account deployed within the same cloud region. Billing system tables contain account-wide data.
Even though system tables can only be accessed through a Unity Catalog-enabled workspace, they also include operational data for the non-Unity Catalog workspaces in your account.
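For example, a query along these lines summarizes audit events per workspace across the region. It assumes the audit log table is exposed at `system.access.audit` with `workspace_id` and `event_date` columns; verify those names against your account's audit log schema:

```sql
-- Count audit events per workspace over the last 7 days.
-- Workspaces that are not Unity Catalog-enabled still appear here.
SELECT workspace_id, COUNT(*) AS events
FROM system.access.audit
WHERE event_date >= current_date() - INTERVAL 7 DAYS
GROUP BY workspace_id
ORDER BY events DESC;
```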
Where is system table data stored?
Your account’s system table data is stored in a Databricks-hosted storage account located in the same region as your metastore. The data is securely shared with you using Delta Sharing.
Each table has a free data retention period. For information on extending the retention period, contact your Databricks account team.
Where are system tables located in Catalog Explorer?
The system tables in your account are located in a catalog called `system`, which is included in every Unity Catalog metastore. In the `system` catalog you'll see schemas such as `access` and `billing` that contain the system tables.
Considerations for streaming system tables
Databricks uses Delta Sharing to share system table data with customers. Be aware of the following considerations when streaming with Delta Sharing:
- If you are using streaming with system tables, set the `skipChangeCommits` option to `true`. This ensures the streaming job is not disrupted by deletes in the system tables. See Ignore updates and deletes.
- `Trigger.AvailableNow` is not supported with Delta Sharing streaming. It will be converted to `Trigger.Once`.
If you use a trigger in your streaming job and find it isn’t catching up to the latest system table version, Databricks recommends increasing the scheduled frequency of the job.