Read and write data in Delta Live Tables pipelines
You can make the data created in your Delta Live Tables pipelines available for discovery and querying outside of Delta Live Tables by publishing the datasets to external data governance systems. You can also use data managed by external data governance systems as source data for your pipelines. This article introduces the supported data governance solutions and links to articles with more detail on using each solution with Delta Live Tables.
All tables and views created in Delta Live Tables are local to the pipeline by default; to make output datasets available outside the pipeline, you must publish them. Delta Live Tables supports publishing to Unity Catalog and the Hive metastore, which persists the output data and makes it discoverable and available to query. You can also use data stored in Unity Catalog or the Hive metastore as source data for Delta Live Tables pipelines.
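For example, the following is a minimal Python sketch of a pipeline dataset, assuming the Databricks sample table `samples.nyctaxi.trips` is available as a source; the function name `trip_counts` becomes the table name. The table stays local to the pipeline unless a publishing target is configured in the pipeline settings.

```python
import dlt

# `spark` is provided by the Delta Live Tables runtime when the pipeline runs.
@dlt.table(
    comment="Trip counts by pickup ZIP code; published only if the pipeline has a target configured."
)
def trip_counts():
    # `samples.nyctaxi.trips` is an assumed source table name.
    return (
        spark.read.table("samples.nyctaxi.trips")
             .groupBy("pickup_zip")
             .count()
    )
```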
The articles in this section detail how to use data governance solutions to read and write data with your pipelines.
Use Unity Catalog to read and write data with Delta Live Tables pipelines (Public Preview)
Unity Catalog is the data governance solution for the Databricks Platform and is the recommended way to manage the output datasets from Delta Live Tables pipelines. You can also use Unity Catalog as a data source for pipelines. To learn how to use Unity Catalog with your pipelines, see Use Unity Catalog with your Delta Live Tables pipelines.
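As a sketch of what this can look like, the following Python example reads a Unity Catalog table as a pipeline source; the three-level name `main.default.customers_raw` is a placeholder. The output table is published to the catalog and schema configured as the pipeline's target.

```python
import dlt

# `spark` is provided by the Delta Live Tables runtime.
@dlt.table(comment="Customers with a valid email address.")
def customers_clean():
    # `main.default.customers_raw` is a placeholder Unity Catalog table name.
    return (
        spark.read.table("main.default.customers_raw")
             .where("email IS NOT NULL")
    )
```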
Publish data to the Hive metastore from Delta Live Tables pipelines
You can also read source data from the Hive metastore into a Delta Live Tables pipeline, and publish output data from a pipeline to the Hive metastore to make it available to external systems. See Publish data from Delta Live Tables to the Hive metastore.
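As a minimal sketch, the following Python example reads a Hive metastore table as a pipeline source; the names `raw_db.events` are placeholders. Publishing the pipeline's output tables to the Hive metastore is done by configuring a target schema in the pipeline settings.

```python
import dlt

# `spark` is provided by the Delta Live Tables runtime.
@dlt.table(comment="Click events read from a Hive metastore source table.")
def click_events():
    # `raw_db.events` is a placeholder Hive metastore table; in workspaces enabled
    # for Unity Catalog, it can also be referenced as `hive_metastore.raw_db.events`.
    return (
        spark.read.table("raw_db.events")
             .where("event_type = 'click'")
    )
```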