What is Unity Catalog?

This article introduces Unity Catalog, a unified governance solution for data and AI assets on the Lakehouse.

Overview of Unity Catalog

Unity Catalog provides centralized access control, auditing, and data discovery capabilities across Databricks workspaces.

[Diagram: Unity Catalog overview]

Key features of Unity Catalog include:

  • Define once, secure everywhere: Unity Catalog offers a single place to administer data access policies that apply across all workspaces.

  • Standards-compliant security model: Unity Catalog’s security model is based on standard ANSI SQL and allows administrators to grant permissions in their existing data lake using familiar syntax, at the level of catalogs, databases (also called schemas), tables, and views (see the example after this list).

  • Built-in auditing: Unity Catalog automatically captures user-level audit logs that record access to your data.

  • Data discovery: Unity Catalog lets you tag and document data assets, and provides a search interface to help data consumers find data.
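
As a minimal sketch of the security model described above (the catalog, schema, table, and group names are hypothetical):

    -- Grant a group browse access to a catalog and schema, and query
    -- access to one table. Because the policy is defined in Unity Catalog,
    -- it applies in every workspace attached to the metastore.
    GRANT USE CATALOG ON CATALOG main TO `data-consumers`;
    GRANT USE SCHEMA ON SCHEMA main.default TO `data-consumers`;
    GRANT SELECT ON TABLE main.default.department TO `data-consumers`;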

The Unity Catalog object model

In Unity Catalog, the hierarchy of primary data objects flows from metastore to table or volume:

  • Metastore: The top-level container for metadata. Each metastore exposes a three-level namespace (catalog.schema.table) that organizes your data.

  • Catalog: The first layer of the object hierarchy, used to organize your data assets.

  • Schema: Also known as databases, schemas are the second layer of the object hierarchy and contain tables and views.

  • Volume: Volumes sit alongside tables and views at the lowest level of the object hierarchy and provide governance for non-tabular data.

  • Table: At the lowest level in the object hierarchy are tables and views.

[Diagram: Unity Catalog object model]

This is a simplified view of securable Unity Catalog objects. For more details, see Securable objects in Unity Catalog.

You reference all data in Unity Catalog using a three-level namespace: catalog.schema.table.
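
For example, a query that uses the three-level namespace (the names are illustrative):

    -- catalog = main, schema = default, table = department
    SELECT * FROM main.default.department;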

Metastores

A metastore is the top-level container of objects in Unity Catalog. It stores metadata about data assets (tables and views) and the permissions that govern access to them. Databricks account admins should create one metastore for each region in which they operate and assign each metastore to the Databricks workspaces in that region. For a workspace to use Unity Catalog, it must have a Unity Catalog metastore attached.

Each metastore is configured with a managed storage location in a GCS bucket in your Google Cloud account. See Managed storage.

Note

This metastore is distinct from the Hive metastore included in Databricks workspaces that have not been enabled for Unity Catalog. If your workspace includes a legacy Hive metastore, the data in that metastore will still be available alongside data defined in Unity Catalog, in a catalog named hive_metastore. Note that the hive_metastore catalog is not managed by Unity Catalog and does not benefit from the same feature set as catalogs defined in Unity Catalog.

See Create a Unity Catalog metastore.

Managed storage

When an account admin creates a metastore, they must associate a storage location in a GCS bucket in your Google Cloud account to use as managed storage. Unity Catalog also allows users to associate managed storage locations with catalogs and schemas.

Managed storage has the following properties:

  • Managed tables and managed volumes store data and metadata files in managed storage.

  • Managed storage cannot overlap with external tables, external volumes, or other managed storage.

The following list describes how managed storage is declared and associated with Unity Catalog objects:

  • Metastore: Configured by an account admin during metastore creation. Cannot overlap an external location.

  • Catalog: Specified during catalog creation using the MANAGED LOCATION keyword. Must be contained within an external location.

  • Schema: Specified during schema creation using the MANAGED LOCATION keyword. Must be contained within an external location.

Unity Catalog uses the following rules to determine which managed storage location stores data and metadata for a managed table or managed volume (see the sketch after this list):

  • If the containing schema has a managed location, the data is stored in the schema managed location.

  • If the containing schema does not have a managed location but the catalog has a managed location, the data is stored in the catalog managed location.

  • If neither the containing schema nor the containing catalog have a managed location, data is stored in the metastore managed location.
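
A hedged sketch of setting managed locations at the catalog and schema levels (the bucket paths and names are hypothetical, and each path must be contained within an existing external location):

    -- Catalog-level managed location: managed tables and volumes in this
    -- catalog are stored here unless their schema overrides it.
    CREATE CATALOG sales MANAGED LOCATION 'gs://my-company-data/sales';

    -- Schema-level managed location: takes precedence over the catalog's.
    CREATE SCHEMA sales.q3 MANAGED LOCATION 'gs://my-company-data/sales/q3';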

Storage credentials and external locations

To manage access to the underlying cloud storage for external tables, external volumes, and managed storage, Unity Catalog introduces the following object types (see the sketch after this list):

  • Storage credentials encapsulate a long-term cloud credential that provides access to cloud storage. For example, a service account that can access GCS buckets.

  • External locations contain a reference to a storage credential and a cloud storage path.
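
A minimal sketch of creating an external location (the storage credential is assumed to already exist, and all names and paths are hypothetical):

    -- An external location pairs a storage credential with a cloud storage
    -- path, making that path governable through Unity Catalog.
    CREATE EXTERNAL LOCATION sales_data
      URL 'gs://my-company-data/sales'
      WITH (STORAGE CREDENTIAL my_gcs_credential);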

See Manage external locations and storage credentials.

Catalogs

A catalog is the first layer of Unity Catalog’s three-level namespace. It’s used to organize your data assets. Users can see all catalogs on which they have been assigned the USE CATALOG data permission.

All users have the USE CATALOG permission on the main catalog. The main catalog is intended for organizations that are just getting started with Unity Catalog. As you add users and data, you should add catalogs to maintain a data hierarchy that enables efficient control over access.
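
For example, a minimal sketch of creating a catalog and making it visible to a group (names are hypothetical):

    CREATE CATALOG IF NOT EXISTS sales;
    -- Users can only see catalogs on which they have USE CATALOG.
    GRANT USE CATALOG ON CATALOG sales TO `sales-analysts`;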

See Create and manage catalogs.

Schemas

A schema (also called a database) is the second layer of Unity Catalog’s three-level namespace. A schema organizes tables and views. Users can see all schemas on which they have been granted the USE SCHEMA permission, provided they also have the USE CATALOG permission on the schema’s parent catalog. To access or list a table or view in a schema, users must also have the SELECT permission on the table or view.
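
For example (names are hypothetical):

    CREATE SCHEMA IF NOT EXISTS sales.q3;
    GRANT USE SCHEMA ON SCHEMA sales.q3 TO `sales-analysts`;
    -- SELECT on a specific table or view is still required to query it:
    GRANT SELECT ON TABLE sales.q3.orders TO `sales-analysts`;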

See Create and manage schemas (databases).

Volumes

Preview

This feature is in Public Preview.

A volume resides in the third layer of Unity Catalog’s three-level namespace. Volumes are siblings to tables, views, and other objects organized under a schema in Unity Catalog.

Volumes contain directories and files for data stored in any format. Volumes provide non-tabular access to data, meaning that files in volumes cannot be registered as tables.

  • To create a volume, users must have CREATE VOLUME and USE SCHEMA permissions on the schema, and they must have the USE CATALOG permission on its parent catalog (see the sketch after this list).

  • To read files and directories stored inside a volume, users must have the READ VOLUME permission, the USE SCHEMA permission on its parent schema, and the USE CATALOG permission on its parent catalog.

  • To add, remove, or modify files and directories stored inside a volume, users must have WRITE VOLUME permission, the USE SCHEMA permission on its parent schema, and the USE CATALOG permission on its parent catalog.
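
A hedged sketch of creating a managed volume and granting read access (all names are hypothetical):

    CREATE VOLUME sales.q3.staging_files;
    GRANT READ VOLUME ON VOLUME sales.q3.staging_files TO `sales-analysts`;
    -- Files in a volume are addressed by path, for example:
    -- /Volumes/sales/q3/staging_files/<file-name>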

A volume can be managed or external.

Note

When you define a volume, you can no longer use external locations in Catalog Explorer or cloud URIs to access paths that overlap the volume location.

Managed volumes

Managed volumes store files in the Unity Catalog default storage location for the schema in which they’re contained. Managed volumes are a convenient solution when you want to provision a governed location for working with files without the overhead of creating and managing external locations and storage credentials.

The following precedence governs which location is used for a managed volume:

  • Schema location

  • Catalog location

  • Unity Catalog root storage location

When you delete a managed volume, the files stored in this volume are also deleted from your cloud tenant within 30 days.

See What is a managed volume?.

External volumes

An external volume is registered to a Unity Catalog external location and provides access to existing files in cloud storage without requiring data migration. Users must have the CREATE EXTERNAL VOLUME permission on the external location to create an external volume.

External volumes support scenarios where files are produced by other systems and staged for access from within Databricks using object storage or where tools outside Databricks require direct file access.

Unity Catalog does not manage the lifecycle and layout of the files in external volumes. When you drop an external volume, Unity Catalog does not delete the underlying data.
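
A minimal sketch (the path is assumed to be covered by an existing external location; names are hypothetical):

    CREATE EXTERNAL VOLUME sales.q3.raw_files
      LOCATION 'gs://my-company-data/raw/q3';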

See What is an external volume?.

Tables

A table resides in the third layer of Unity Catalog’s three-level namespace. It contains rows of data. To create a table, users must have CREATE and USE SCHEMA permissions on the schema, and they must have the USE CATALOG permission on its parent catalog. To query a table, users must have the SELECT permission on the table, the USE SCHEMA permission on its parent schema, and the USE CATALOG permission on its parent catalog.
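
For example (names are hypothetical):

    CREATE TABLE sales.q3.orders (order_id BIGINT, amount DOUBLE);
    -- Querying requires SELECT on the table plus USE SCHEMA and
    -- USE CATALOG on its parents:
    GRANT SELECT ON TABLE sales.q3.orders TO `sales-analysts`;
    SELECT order_id, amount FROM sales.q3.orders;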

A table can be managed or external.

Managed tables

Managed tables are the default way to create tables in Unity Catalog. Unity Catalog manages the lifecycle and file layout for these tables. You should not use tools outside of Databricks to manipulate files in these tables directly.

By default, managed tables are stored in the root storage location that you configure when you create a metastore. You can optionally specify managed table storage locations at the catalog or schema levels, overriding the root storage location. Managed tables always use the Delta table format.

When a managed table is dropped, its underlying data is deleted from your cloud tenant within 30 days.

See Managed tables.

External tables

External tables are tables whose data lifecycle and file layout are not managed by Unity Catalog. Use external tables to register large amounts of existing data in Unity Catalog, or if you require direct access to the data using tools outside of Databricks clusters or Databricks SQL warehouses.

When you drop an external table, Unity Catalog does not delete the underlying data. You can manage privileges on external tables and use them in queries in the same way as managed tables.

External tables can use the following file formats (see the example after this list):

  • DELTA

  • CSV

  • JSON

  • AVRO

  • PARQUET

  • ORC

  • TEXT
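
A hedged example of registering existing Parquet files as an external table (the path must be covered by an external location; names are hypothetical):

    CREATE TABLE sales.q3.orders_raw
      USING PARQUET
      LOCATION 'gs://my-company-data/raw/orders';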

See External tables.

Views

A view is a read-only object created from one or more tables and views in a metastore. It resides in the third layer of Unity Catalog’s three-level namespace. A view can be created from tables and other views in multiple schemas and catalogs. You can create dynamic views to enable row- and column-level permissions.
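
For example, a dynamic view that redacts a column for users outside a group (is_account_group_member is a built-in function; the other names are hypothetical):

    CREATE VIEW sales.q3.orders_redacted AS
    SELECT
      order_id,
      CASE WHEN is_account_group_member('auditors') THEN amount
           ELSE NULL END AS amount
    FROM sales.q3.orders;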

See Create a dynamic view.

Models

A model resides in the third layer of Unity Catalog’s three-level namespace. In this context, “model” refers to a machine learning model that is registered in the MLflow Model Registry. To create a model in Unity Catalog, users must have the CREATE MODEL privilege for the catalog or schema. The user must also have the USE CATALOG privilege on the parent catalog and USE SCHEMA on the parent schema.
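
For example, granting the privileges needed to register models in a schema (names are hypothetical):

    GRANT USE CATALOG ON CATALOG ml TO `ml-engineers`;
    GRANT USE SCHEMA ON SCHEMA ml.prod TO `ml-engineers`;
    GRANT CREATE MODEL ON SCHEMA ml.prod TO `ml-engineers`;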

Identity management for Unity Catalog

Unity Catalog uses the identities in the Databricks account to resolve users, service principals, and groups, and to enforce permissions.

To configure identities in the account, follow the instructions in Manage users, service principals, and groups. Refer to those users, service principals, and groups when you create access-control policies in Unity Catalog.

Unity Catalog users, service principals, and groups must also be added to workspaces to access Unity Catalog data in a notebook, a Databricks SQL query, Catalog Explorer, or a REST API command. The assignment of users, service principals, and groups to workspaces is called identity federation.

All workspaces that have a Unity Catalog metastore attached to them are enabled for identity federation.

Special considerations for groups

Any groups that already exist in the workspace are labeled Workspace local in the account console. These workspace-local groups cannot be used in Unity Catalog to define access policies. You must use account-level groups. If a workspace-local group is referenced in a command, that command will return an error that the group was not found. If you previously used workspace-local groups to manage access to notebooks and other artifacts, these permissions remain in effect.

See Manage groups.

Admin roles for Unity Catalog

The following admin roles are required for managing Unity Catalog:

  • Account admins can manage identities, cloud resources, and the creation of workspaces and Unity Catalog metastores.

    Account admins can enable workspaces for Unity Catalog. They can grant both workspace and metastore admin permissions.

  • Metastore admins can manage privileges and ownership for all securable objects within a metastore, such as who can create catalogs or query a table.

    The account admin who creates the Unity Catalog metastore becomes the initial metastore admin. The metastore admin can also choose to delegate this role to another user or group. We recommend assigning the metastore admin to a group, in which case any member of the group receives the privileges of the metastore admin. See (Recommended) Transfer ownership of your metastore to a group.

  • Workspace admins can add users to a Databricks workspace, assign them the workspace admin role, and manage access to objects and functionality in the workspace, such as the ability to create clusters and change job ownership.

See Manage users, service principals, and groups.

Data permissions in Unity Catalog

In Unity Catalog, data is secure by default. Initially, users have no access to data in a metastore. Access can be granted by either a metastore admin, the owner of an object, or the owner of the catalog or schema that contains the object. Securable objects in Unity Catalog are hierarchical and privileges are inherited downward.

You can assign and revoke permissions using Catalog Explorer, SQL commands, or REST APIs.
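
For example, in SQL (names are hypothetical):

    -- Privileges are inherited downward: granting SELECT on a catalog
    -- implicitly grants it on every schema and table in that catalog.
    GRANT SELECT ON CATALOG sales TO `sales-analysts`;
    SHOW GRANTS ON CATALOG sales;
    REVOKE SELECT ON CATALOG sales FROM `sales-analysts`;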

See Manage privileges in Unity Catalog.

Cluster access modes for Unity Catalog

To access data in Unity Catalog, clusters must be configured with the correct access mode. Unity Catalog is secure by default. If a cluster is not configured with one of the Unity-Catalog-capable access modes (that is, shared or single user), the cluster can’t access data in Unity Catalog.

See Create clusters & SQL warehouses with Unity Catalog access.

Lakehouse Federation and Unity Catalog

Lakehouse Federation is the query federation platform for Databricks. The term query federation describes a collection of features that enable users and systems to run queries against multiple siloed data sources without needing to migrate all data to a unified system.

Databricks uses Unity Catalog to manage query federation. You use Unity Catalog to configure read-only connections to popular external database systems and create foreign catalogs that mirror external databases. Unity Catalog’s data governance and data lineage tools ensure that data access is managed and audited for all federated queries made by the users in your Databricks workspaces.
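
A hedged sketch of a read-only connection and a foreign catalog (the host, credentials, and database are placeholders; in practice, store credentials as secrets rather than literals):

    CREATE CONNECTION postgres_conn TYPE postgresql
      OPTIONS (
        host 'example-host.example.com',
        port '5432',
        user 'readonly_user',
        password 'example-password'
      );

    -- The foreign catalog mirrors the external database so that its tables
    -- can be queried through the usual three-level namespace.
    CREATE FOREIGN CATALOG postgres_sales
      USING CONNECTION postgres_conn
      OPTIONS (database 'sales_db');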

See Run queries using Lakehouse Federation.

How do I set up Unity Catalog for my organization?

To set up Unity Catalog for your organization, you do the following:

  1. Configure a GCS bucket that Unity Catalog can use to store and access data in your GCP account.

    As part of metastore creation (in the next step), Databricks generates a service account that you will use to grant access to this GCS bucket.

  2. Create a metastore for each region in which your organization operates.

  3. Attach workspaces to the metastore. Each workspace will have the same view of the data you manage in Unity Catalog.

  4. If you have a new account, add users, groups, and service principals to your Databricks account.

Next, you create and grant access to catalogs, schemas, and tables.

For complete setup instructions, see Get started using Unity Catalog.

Supported compute

Unity Catalog is supported on clusters that run Databricks Runtime 11.3 LTS or above. Unity Catalog is supported by default on all SQL warehouse compute versions.

Clusters running on earlier versions of Databricks Runtime do not provide support for all Unity Catalog GA features and functionality.

Some Unity Catalog functionality requires Databricks Runtime versions above 11.3 LTS. See Unity Catalog limitations.

For detailed information about Unity Catalog functionality changes in each Databricks Runtime version, see the release notes. For more information about Unity Catalog compute requirements, see Create clusters & SQL warehouses with Unity Catalog access.

Supported regions

For the list of regions that support Unity Catalog, see Databricks clouds and regions.

Supported data file formats

Unity Catalog supports the following table formats:

  • Managed tables must use the Delta table format.

  • External tables can use Delta, CSV, JSON, Avro, Parquet, ORC, or text. See External tables.

Unity Catalog limitations

Unity Catalog has the following limitations.

Note

If your cluster is running on a Databricks Runtime version below 11.3 LTS, there may be additional limitations not listed here. Unity Catalog is supported on Databricks Runtime 11.3 LTS or above.

General Unity Catalog limitations

  • R is supported only on clusters that use single user access mode. Workloads in R do not support the use of dynamic views for row-level or column-level security.

  • On Databricks Runtime 13.2 and below, Scala is supported only on clusters that use single user access mode. To use Scala on a cluster that uses shared access mode, the cluster must be on Databricks Runtime 13.3 or above.

  • Workloads that use Databricks Runtime for Machine Learning are supported only on clusters that use single user access mode.

  • In Databricks Runtime 13.1 and above, shallow clones are supported to create Unity Catalog managed tables from existing Unity Catalog managed tables. In Databricks Runtime 13.0 and below, there is no support for shallow clones in Unity Catalog. See Shallow clone for Unity Catalog managed tables.

  • Bucketing is not supported for Unity Catalog tables. Commands that try to create a bucketed table in Unity Catalog throw an exception.

  • Writing to the same path or Delta Lake table from workspaces in multiple regions can lead to unreliable performance if some clusters access Unity Catalog and others do not.

  • Custom partition schemes created using commands like ALTER TABLE ADD PARTITION are not supported for tables in Unity Catalog. Unity Catalog can access tables that use directory-style partitioning.

  • Overwrite mode for DataFrame write operations into Unity Catalog is supported only for Delta tables, not for other file formats. The user must have the CREATE privilege on the parent schema and must be the owner of the existing object or have the MODIFY privilege on the object.

  • Spark-submit jobs are supported on clusters that use single user access mode, but not on shared clusters. See What is cluster access mode?.

  • In Databricks Runtime 13.1 and below, you cannot use Python UDFs, including UDAFs, UDTFs, and Pandas on Spark (applyInPandas and mapInPandas). In Databricks Runtime 13.2 and above, Python UDFs are supported.

  • Groups that were previously created in a workspace (that is, workspace-level groups) cannot be used in Unity Catalog GRANT statements. This is to ensure a consistent view of groups that can span across workspaces. To use groups in GRANT statements, create your groups at the account level and update any automation for principal or group management (such as SCIM, Okta and AAD connectors, and Terraform) to reference account endpoints instead of workspace endpoints. See Difference between account groups and workspace-local groups.

  • Standard Scala thread pools are not supported. Instead, use the special thread pools in org.apache.spark.util.ThreadUtils, for example, org.apache.spark.util.ThreadUtils.newDaemonFixedThreadPool. However, the following thread pools in ThreadUtils are not supported: ThreadUtils.newForkJoinPool and any ScheduledExecutorService thread pool.

The following limitations apply to all object names in Unity Catalog:

  • Object names cannot exceed 255 characters.

  • The following special characters are not allowed:

    • Period (.)

    • Space ( )

    • Forward slash (/)

    • All ASCII control characters (00-1F hex)

    • The DELETE character (7F hex)

  • Unity Catalog stores all object names as lowercase.

  • When referencing Unity Catalog names in SQL, you must use backticks to escape names that contain special characters such as hyphens (-).

Note

Column names can use special characters, but the name must be escaped with backticks in all SQL statements if special characters are used. Unity Catalog preserves column name casing, but queries against Unity Catalog tables are case-insensitive.
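
For example (names are hypothetical):

    -- Names that contain special characters such as hyphens must be
    -- escaped with backticks:
    SELECT `zip-code` FROM main.default.`customer-addresses`;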

Structured Streaming support in Unity Catalog

Support for Structured Streaming on Unity Catalog tables (managed or external) depends on the Databricks Runtime version that you are running and on whether you are using shared or single user access mode.

Support for shared clusters requires Databricks Runtime 12.2 LTS and above, with the following limitations:

  • Python only.

  • Apache Spark continuous processing mode is not supported. See Continuous Processing in the Spark Structured Streaming Programming Guide.

  • applyInPandasWithState is not supported.

  • Working with socket sources is not supported.

  • StreamingQueryListener cannot use credentials or interact with objects managed by Unity Catalog.

  • The sourceArchiveDir must be in the same external location as the source when you use option("cleanSource", "archive") with a data source managed by Unity Catalog.

  • For Kafka sources and sinks, the following options are unsupported:

    • kafka.sasl.client.callback.handler.class

    • kafka.sasl.login.callback.handler.class

    • kafka.sasl.login.class

    • kafka.partition.assignment.strategy

  • The following Kafka options are supported in Databricks Runtime 13.0 and above but unsupported in Databricks Runtime 12.2 LTS. You can specify only external locations managed by Unity Catalog for these options:

    • kafka.ssl.truststore.location

    • kafka.ssl.keystore.location

Support for single user access mode is available on Databricks Runtime 11.3 LTS and above, with the following limitations:

  • Apache Spark continuous processing mode is not supported. See Continuous Processing in the Spark Structured Streaming Programming Guide.

  • StreamingQueryListener cannot use credentials or interact with objects managed by Unity Catalog.

  • Asynchronous checkpointing is not supported in Databricks Runtime 11.3 LTS and below. It is supported in Databricks Runtime 12.0 and above.

See also Using Unity Catalog with Structured Streaming.

Limitations on support for models in Unity Catalog

See Limitations on Unity Catalog support.

Resource quotas

Unity Catalog enforces resource quotas on all securable objects. Limits respect the same hierarchical organization throughout Unity Catalog. If you expect to exceed these resource limits, contact your Databricks account representative.

Quota values below are expressed relative to the parent object in Unity Catalog.

  Object               Parent              Value
  table                schema              10000
  volume               schema              10000
  function             schema              10000
  model                schema              1000
  model version        registered model    10000
  schema               catalog             10000
  catalog              metastore           1000
  connection           metastore           1000
  storage credential   metastore           200
  external location    metastore           500

For Delta Sharing limits, see Resource quotas.