Optimize data file layout
The OPTIMIZE command rewrites data files to improve data layout for Delta tables. For tables with liquid clustering enabled, OPTIMIZE rewrites data files to group data by liquid clustering keys. For tables with partitions defined, file compaction and data layout optimization are performed within partitions. Tables without liquid clustering can optionally include a ZORDER BY clause to improve data clustering on rewrite. Databricks recommends using liquid clustering instead of partitions, ZORDER, or other data layout approaches. See OPTIMIZE.
Important
In Databricks Runtime 16.0 and above, you can use OPTIMIZE FULL to force reclustering for tables with liquid clustering enabled. See Force reclustering for all records.
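For example, a minimal sketch run from a Python notebook (the table name events is hypothetical, and spark is the session provided by the notebook):

# Force reclustering of all records in a liquid-clustered table.
# Requires Databricks Runtime 16.0 or above; "events" is a hypothetical table name.
spark.sql("OPTIMIZE events FULL")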
Syntax examples
You trigger compaction by running the OPTIMIZE command:
SQL:
OPTIMIZE table_name

Python:
from delta.tables import *
deltaTable = DeltaTable.forName(spark, "table_name")
deltaTable.optimize().executeCompaction()

Scala:
import io.delta.tables._
val deltaTable = DeltaTable.forName(spark, "table_name")
deltaTable.optimize().executeCompaction()
If you have a large amount of data and only want to optimize a subset of it, you can specify an optional partition predicate using WHERE:

SQL:
OPTIMIZE table_name WHERE date >= '2022-11-18'
Python:
from delta.tables import *
deltaTable = DeltaTable.forName(spark, "table_name")
deltaTable.optimize().where("date >= '2022-11-18'").executeCompaction()

Scala:
import io.delta.tables._
val deltaTable = DeltaTable.forName(spark, "table_name")
deltaTable.optimize().where("date >= '2022-11-18'").executeCompaction()
Note
Bin-packing optimization is idempotent: if it is run twice on the same dataset, the second run has no effect.
Bin-packing aims to produce evenly balanced data files with respect to their size on disk, but not necessarily the number of tuples per file. However, the two measures are most often correlated.
Python and Scala APIs for executing the OPTIMIZE operation are available in Databricks Runtime 11.3 LTS and above.
Readers of Delta tables use snapshot isolation, which means that they are not interrupted when OPTIMIZE removes unnecessary files from the transaction log. OPTIMIZE makes no data-related changes to the table, so a read before and after an OPTIMIZE has the same results. Performing OPTIMIZE on a table that is a streaming source does not affect any current or future streams that treat this table as a source. OPTIMIZE returns the file statistics (min, max, total, and so on) for the files removed and the files added by the operation. The OPTIMIZE stats also contain the Z-Ordering statistics, the number of batches, and the number of partitions optimized.
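A minimal sketch of reading those statistics from Python, assuming a Delta table named table_name; executeCompaction() returns a DataFrame of operation metrics, and the metrics.numFilesAdded and metrics.numFilesRemoved fields shown here should be verified against the output on your runtime:

from delta.tables import *

deltaTable = DeltaTable.forName(spark, "table_name")

# executeCompaction() returns a DataFrame describing the operation.
metrics = deltaTable.optimize().executeCompaction()
metrics.select("metrics.numFilesAdded", "metrics.numFilesRemoved").show()

# Because bin-packing is idempotent, running it again on unchanged data
# should report zero files added and zero files removed.
metrics_again = deltaTable.optimize().executeCompaction()
metrics_again.select("metrics.numFilesAdded", "metrics.numFilesRemoved").show()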
You can also compact small files automatically using auto compaction. See Auto compaction for Delta Lake on Databricks.
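One common way to enable it is through the delta.autoOptimize.autoCompact table property, sketched here from Python with a hypothetical table name; see the auto compaction documentation for the supported values on your runtime:

# Enable auto compaction for a single table ("table_name" is hypothetical).
spark.sql("""
    ALTER TABLE table_name
    SET TBLPROPERTIES (delta.autoOptimize.autoCompact = true)
""")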
How often should I run OPTIMIZE?
When you choose how often to run OPTIMIZE, there is a trade-off between performance and cost. For better end-user query performance, run OPTIMIZE more often. This will incur a higher cost because of the increased resource usage. To optimize cost, run it less often.
Databricks recommends that you start by running OPTIMIZE on a daily basis (preferably at night when spot prices are low), and then adjust the frequency to balance cost and performance trade-offs.