Migration guide
Migrate workloads to Delta Lake
When you migrate workloads to Delta Lake, you should be aware of the following simplifications and differences compared with the data sources provided by Apache Spark and Apache Hive.
Delta Lake handles the following operations automatically, which you should never perform manually:
REFRESH TABLE: Delta tables always return the most up-to-date information, so there is no need to manually call REFRESH TABLE after changes.
Add and remove partitions: Delta Lake automatically tracks the set of partitions present in a table and updates the list as data is added or removed. As a result, there is no need to run ALTER TABLE [ADD|DROP] PARTITION or MSCK.
Load a single partition: As an optimization, you may sometimes directly load the partition of data you are interested in. For example, spark.read.format("parquet").load("/data/date=2017-01-01"). This is unnecessary with Delta Lake, since it can quickly read the list of files from the transaction log to find the relevant ones. If you are interested in a single partition, specify it using a WHERE clause. For example, spark.read.delta("/data").where("date = '2017-01-01'"). For large tables with many files in the partition, this can be much faster than loading a single partition from a Parquet table (with a direct partition path or with WHERE), because listing the files in the directory is often slower than reading the list of files from the transaction log.
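To make the contrast concrete, here is a minimal PySpark sketch of the two read patterns, using the paths from the snippets above (the date value is illustrative):
# Parquet: load a single partition by pointing directly at its directory.
parquet_df = spark.read.format("parquet").load("/data/date=2017-01-01")
# Delta Lake: read the table root and filter with WHERE; the relevant files
# are resolved from the transaction log instead of a directory listing.
delta_df = spark.read.format("delta").load("/data").where("date = '2017-01-01'")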
When you port an existing application to Delta Lake, you should avoid the following operations, which bypass the transaction log:
Manually modify data: Delta Lake uses the transaction log to atomically commit changes to the table. Because the log is the source of truth, files that are written out but not added to the transaction log are not read by Spark. Similarly, even if you manually delete a file, a pointer to the file is still present in the transaction log. Instead of manually modifying files stored in a Delta table, always use the commands that are described in this guide.
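For instance, instead of deleting Parquet files from the table directory by hand, remove the rows through Delta Lake so the change is committed to the transaction log. A minimal sketch, assuming the delta-spark Python package; the path and the predicate are illustrative:
from delta.tables import DeltaTable

# The delete is recorded as an atomic commit in the transaction log.
delta_table = DeltaTable.forPath(spark, "/tmp/delta/data-pipeline/")
delta_table.delete("date < '2017-01-01'")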
Example
Suppose you have Parquet data stored in a directory named /data-pipeline
, and you want to create a Delta table named events
.
The first example shows how to:
Read the Parquet data from its original location, /data-pipeline, into a DataFrame.
Save the DataFrame’s contents in Delta format in a separate location, /tmp/delta/data-pipeline/.
Create the events table based on that separate location, /tmp/delta/data-pipeline/.
The second example shows how to use CONVERT TO DELTA to convert data from Parquet to Delta format without changing its original location, /data-pipeline/.
Each of these examples creates an unmanaged table, where you continue to manage the data in its specified location. Databricks records the table’s name and its specified location in the metastore.
Save as Delta table
Read the Parquet data into a DataFrame and then save the DataFrame’s contents to a new directory in delta format:
data = spark.read.format("parquet").load("/data-pipeline")
data.write.format("delta").save("/tmp/delta/data-pipeline/")
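If the source data is partitioned and you want to keep the same layout in the Delta table, you can partition the write as well; a sketch, assuming a date partition column (the column name is illustrative):
# Preserve the partitioning of the source data in the Delta output.
data.write.format("delta").partitionBy("date").save("/tmp/delta/data-pipeline/")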
Create a Delta table named events that refers to the files in the new directory:
spark.sql("CREATE TABLE events USING DELTA LOCATION '/tmp/delta/data-pipeline/'")
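To verify that the table resolves to the new location, you can query it and inspect its metadata; a quick check, assuming the steps above completed:
# DESCRIBE DETAIL reports the table's format, location, and file count.
spark.sql("DESCRIBE DETAIL events").show(truncate=False)
spark.table("events").show(5)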
Convert to Delta table
You have three options for converting a Parquet table to a Delta table:
Convert files to Delta Lake format and then create a Delta table:
CONVERT TO DELTA parquet.`/data-pipeline/`
CREATE TABLE events USING DELTA LOCATION '/data-pipeline/'
Create a Parquet table and then convert it to a Delta table:
CREATE TABLE events USING PARQUET OPTIONS (path '/data-pipeline/')
CONVERT TO DELTA events
Convert a Parquet table to a Delta table:
CONVERT TO DELTA events
This assumes that the table named events is a Parquet table.
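The same conversion can also be run from PySpark; a minimal sketch, assuming the delta-spark package and unpartitioned Parquet data at /data-pipeline/ (partitioned data also needs a partition schema argument):
from delta.tables import DeltaTable

# Convert the Parquet files at the original location in place.
DeltaTable.convertToDelta(spark, "parquet.`/data-pipeline/`")

# Register a table over the converted location.
spark.sql("CREATE TABLE events USING DELTA LOCATION '/data-pipeline/'")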
For details, see Generate a manifest file.