October 2021
These features and Databricks platform improvements were released in October 2021.
Note
Releases are staged. Your Databricks account may not be updated until a week or more after the initial release date.
Databricks Runtime 8.2 series support ends
October 22, 2021
Support for Databricks Runtime 8.2 and Databricks Runtime 8.2 for Machine Learning ended on October 22. See Databricks support lifecycles.
Access Databricks File System (DBFS) mounts using the local file system
October 20, 2021
You can now access Databricks File System mounts using the local file system within a notebook or job. This feature is known as DBFS FUSE. Databricks configures each cluster node with a FUSE mount, /dbfs, that allows processes running on cluster nodes to read and write to the underlying distributed storage layer with local file APIs using paths under /dbfs.
You can create a DBFS mount of a GCS bucket and then call normal Python file system APIs or use the %sh magic command for shell commands to access the data. For example, list files in a mount named my-mount:
%sh ls /dbfs/my-mount/
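The same mount can also be read with standard Python file APIs through the FUSE path. A minimal sketch, in which the file name data.csv is a hypothetical example:

import os

# List files in the mount through the FUSE path, just like a local directory.
print(os.listdir("/dbfs/my-mount/"))

# Read a file with the built-in open(); data.csv is a hypothetical file name.
with open("/dbfs/my-mount/data.csv", "r") as f:
    print(f.read())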
Databricks Runtime 10.0 and 10.0 ML are GA; 10.0 Photon is Public Preview
October 20, 2021
Databricks Runtime 10.0 and 10.0 ML are now generally available. 10.0 Photon is in Public Preview.
See the release notes at Databricks Runtime 10.0 (EoS) and Databricks Runtime 10.0 for ML (EoS).
Databricks is available in region asia-southeast1
October 20, 2021
Databricks is now available in region asia-southeast1. See Databricks clouds and regions.
User interface improvements for Delta Live Tables (Public Preview)
October 18-25, 2021: Version 3.57
This release includes the following enhancements to the Delta Live Tables UI:
A Notebook not found message is now displayed in the Create Pipeline dialog or the datasets detail panel when an invalid notebook path is provided.
You can now provide feedback on Delta Live Tables by clicking the Provide Feedback link on the Pipelines page or the Pipeline Details page. When you click the Provide Feedback link, a customer feedback survey opens in a new window.
Specify a fixed-size cluster when you create a new pipeline in Delta Live Tables (Public Preview)
October 18-25, 2021: Version 3.57
You can now create a cluster for your Delta Live Tables pipeline with a fixed number of worker nodes, providing more control over the cluster resources used by the pipeline. To create a cluster with a fixed number of nodes, disable Enable autoscaling and enter the number of nodes in the Workers field when you create a new pipeline.
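If you manage pipelines programmatically rather than through the UI, the equivalent is to set num_workers on the pipeline cluster and omit the autoscale block. A minimal sketch against the /api/2.0/pipelines endpoint; the workspace URL, token, pipeline name, and notebook path are hypothetical:

import requests

host = "https://<workspace-url>"      # hypothetical workspace URL
token = "<personal-access-token>"     # hypothetical token

pipeline_spec = {
    "name": "my-fixed-size-pipeline",                                  # hypothetical name
    "libraries": [{"notebook": {"path": "/Repos/me/dlt_notebook"}}],   # hypothetical path
    "clusters": [{
        "label": "default",
        "num_workers": 4,   # fixed-size cluster: a worker count instead of an autoscale block
    }],
}

resp = requests.post(
    f"{host}/api/2.0/pipelines",
    headers={"Authorization": f"Bearer {token}"},
    json=pipeline_spec,
)
print(resp.json())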
View data quality metrics for tables in Delta Live Tables triggered pipelines (Public Preview)
October 18-25, 2021: Version 3.57
You can now see data quality metrics for tables when your pipeline runs in triggered mode, including the number of records written, the number of records dropped, and the number of records that passed or failed each data quality constraint. To view data quality metrics, select the table in the Graph tab on the Pipeline Details page.
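These metrics come from the expectations you declare on pipeline datasets. A minimal sketch of a table whose pass, fail, and drop counts would appear in the Graph tab; the table, dataset, and rule names are hypothetical:

import dlt
from pyspark.sql.functions import col

@dlt.table
@dlt.expect("valid_id", "id IS NOT NULL")              # failing records are counted but kept
@dlt.expect_or_drop("positive_amount", "amount > 0")   # failing records are dropped and counted
def cleaned_orders():
    # raw_orders is a hypothetical upstream dataset in the same pipeline
    return dlt.read("raw_orders").select(col("id"), col("amount"))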
Improved security for cluster connectivity in all regions
October 14, 2021
With secure cluster connectivity, customer VPCs in the compute plane have no open ports and Databricks Runtime cluster nodes have no public IP addresses. Databricks secure cluster connectivity on Google Cloud is implemented by two features: no public IP addresses on cluster nodes, which is enabled by default, and the new secure cluster connectivity relay. See Secure cluster connectivity.
All clusters automatically use the secure cluster connectivity relay, which is now generally available in all regions.
Jobs orchestration is now GA
October 14, 2021
Databricks is pleased to announce the general availability of Databricks jobs orchestration. Jobs orchestration allows you to define and run a job with multiple tasks, simplifying the creation, scheduling, execution, and monitoring of complex data and machine learning applications. Jobs orchestration needs to be enabled by an administrator and is disabled by default. See Schedule and orchestrate workflows.
Databricks is also pleased to announce general availability of version 2.1 of the Jobs API. This version includes updates that fully support the orchestration of multiple tasks with Databricks jobs. See Updating from Jobs API 2.0 to 2.1 for information on updating clients to support jobs that orchestrate multiple tasks.
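As an illustration of the multi-task support in Jobs API 2.1, the sketch below creates a job with two tasks where the second depends on the first. The workspace URL, token, cluster ID, and notebook paths are hypothetical:

import requests

host = "https://<workspace-url>"      # hypothetical workspace URL
token = "<personal-access-token>"     # hypothetical token

job_spec = {
    "name": "example-multi-task-job",
    "tasks": [
        {
            "task_key": "ingest",
            "existing_cluster_id": "<cluster-id>",
            "notebook_task": {"notebook_path": "/Jobs/ingest"},
        },
        {
            "task_key": "transform",
            "depends_on": [{"task_key": "ingest"}],   # runs only after "ingest" succeeds
            "existing_cluster_id": "<cluster-id>",
            "notebook_task": {"notebook_path": "/Jobs/transform"},
        },
    ],
}

resp = requests.post(
    f"{host}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json=job_spec,
)
print(resp.json())  # returns the new job_id on success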
Databricks Connector for Power BI
October 13, 2021
A new version of the Power BI connector is now available. This release adds support for navigation through three-level namespaces in the Unity Catalog, ensures that query execution can be cancelled, and enables native query passthrough for reduced latency on Databricks SQL and Databricks Runtime 8.3 and above.
More detailed job run output with the Jobs API
October 4-11, 2021: Version 3.56
The response from the Jobs API Runs get output request now includes new fields that provide additional output, logging, and error detail for job runs.
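A minimal sketch of retrieving that output for a run; the workspace URL, token, and run ID are hypothetical:

import requests

host = "https://<workspace-url>"      # hypothetical workspace URL
token = "<personal-access-token>"     # hypothetical token
run_id = 12345                        # hypothetical run ID

resp = requests.get(
    f"{host}/api/2.1/jobs/runs/get-output",
    headers={"Authorization": f"Bearer {token}"},
    params={"run_id": run_id},
)
# The JSON response carries the run output together with the expanded logging and error detail.
print(resp.json())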
Improved readability of notebook paths in the Jobs UI
October 4-11, 2021: Version 3.56
Notebook paths are now truncated from the left when viewing job details in the jobs UI, ensuring visibility of the notebook name. Previously, notebook paths were truncated from the right, often obscuring the notebook name.
Open your Delta Live Tables pipeline in a new tab or window
October 4-11, 2021: Version 3.56
Pipeline names are now rendered as a link when you view the pipelines list in the Delta Live Tables UI, providing access to context menu options such as opening the pipeline details in a new tab or window.
New escape sequence for $ in legacy input widgets in SQL
October 4-11, 2021: Version 3.56
To escape the $ character in legacy input widgets in SQL, use \$. If you have used $\ in existing widgets, it continues to work, but Databricks recommends that you update widgets to use the new escape sequence. See Databricks widgets.
Faster model deployment with automatically generated batch inference notebook
October 4-11, 2021: Version 3.56
After a model is registered in Model Registry, you can automatically generate a notebook to use the model for batch inference. For details, see Use model for inference.
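The generated notebook follows the common pattern of loading the registered model as a Spark UDF and applying it to a table. A minimal sketch of that pattern, assuming a registered model named MyModel at version 1 and a hypothetical input table; spark is the SparkSession available in Databricks notebooks:

import mlflow
from pyspark.sql.functions import struct

# Load version 1 of a registered model as a Spark UDF; model name and version are hypothetical.
predict_udf = mlflow.pyfunc.spark_udf(spark, model_uri="models:/MyModel/1")

# input_table and its feature columns are hypothetical.
df = spark.table("input_table")
scored = df.withColumn("prediction", predict_udf(struct(*df.columns)))
scored.write.mode("overwrite").saveAsTable("batch_predictions")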