Migrate data applications to Databricks

This article provides an introduction to migrating existing data applications to Databricks. Databricks provides a unified approach that lets you work with data from many source systems on a single platform.

For an overview of platform capabilities, see What is Databricks?.

For information on migrating between Databricks Runtime versions, see the Databricks Runtime migration guide.

Migrate ETL jobs to Databricks

You can migrate Apache Spark jobs used to extract, transform, and load data from on-premises or cloud-native implementations to Databricks with just a few steps. See Adapt your existing Apache Spark code for Databricks.
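A common first step when adapting existing Spark code is removing cluster-specific session and configuration boilerplate, because Databricks compute provides a preconfigured SparkSession named spark. The following is a minimal sketch of that pattern; the paths and table names are hypothetical placeholders.

```python
# Before: a self-managed Spark job typically builds its own session.
# from pyspark.sql import SparkSession
# spark = (
#     SparkSession.builder
#     .master("yarn")
#     .appName("daily_etl")
#     .getOrCreate()
# )

# After: on Databricks, `spark` is already available, so the job body
# usually ports over unchanged. Paths and table names are hypothetical.
raw = spark.read.json("/Volumes/main/raw/events/")                          # extract
cleaned = raw.dropDuplicates(["event_id"]).where("event_ts IS NOT NULL")    # transform
cleaned.write.mode("append").saveAsTable("main.analytics.events")           # load
```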

Databricks extends the functionality of Spark SQL with pre-configured open source integrations, partner integrations, and enterprise product offerings. If your ETL workloads are written in SQL or Hive, you can migrate to Databricks with minimal refactoring. Learn more about Databricks SQL offerings.
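For example, a Hive-style query can often run on Databricks with little or no change, either in a SQL editor or wrapped in spark.sql from Python. This is a minimal sketch; the schema and table names are hypothetical.

```python
# Existing HiveQL often runs as-is on Databricks. The schema and table
# names here are hypothetical placeholders.
daily_totals = spark.sql("""
    SELECT order_date, SUM(amount) AS total_sales
    FROM sales.orders
    WHERE order_date >= '2024-01-01'
    GROUP BY order_date
""")

# Persist the result as a managed table for downstream consumers.
daily_totals.write.mode("overwrite").saveAsTable("sales.daily_totals")
```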

For specific instructions on migrating from various source systems to Databricks, see Migrate ETL pipelines to Databricks.

Replace your enterprise data warehouse with a lakehouse

Databricks provides optimal value and performance when workloads align around data stored in the lakehouse. Many enterprise data stacks include both a data lake and an enterprise data warehouse, and organizations create complex ETL workflows to try to keep these systems and data in sync. The lakehouse allows you to use the same data, stored in the data lake, across queries and systems that usually rely on a separate data warehouse. For more on the lakehouse, see What is a data lakehouse?. For more on data warehousing on Databricks, see What is data warehousing on Databricks?.
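As a rough illustration of this idea, the same Delta table can back both a warehouse-style SQL query and a Spark pipeline, with no separate warehouse copy to keep in sync. The catalog, schema, and table names below are hypothetical.

```python
from pyspark.sql import functions as F

# One copy of the data, stored as a Delta table in the lakehouse.
# Catalog, schema, and table names are hypothetical placeholders.
orders = spark.read.parquet("/Volumes/main/landing/orders/")
orders.write.format("delta").mode("overwrite").saveAsTable("main.sales.orders")

# A warehouse-style aggregate reads the table directly...
spark.sql("SELECT region, SUM(amount) AS total FROM main.sales.orders GROUP BY region").show()

# ...and a downstream Spark transformation reads the same table, with no
# extra ETL needed to keep a separate warehouse copy in sync.
enriched = spark.table("main.sales.orders").withColumn("amount_usd", F.col("amount"))
```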

Migrating from an enterprise data warehouse to the lakehouse generally involves reducing the complexity of your data architecture and workflows, but there are some caveats and best practices to keep in mind while completing this work. See Migrate your data warehouse to the Databricks lakehouse.

Unify your ML, data science, and analytics workloads

Because the lakehouse provides optimized access to cloud-based data files through table queries or file paths, you can do ML, data science, and analytics on a single copy of your data. Databricks makes it easy to move workloads from both open source and proprietary tools, and maintains updated versions of many of the open source libraries used by analysts and data scientists.
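For instance, a single governed table can feed both a quick analytical summary and a model-training workflow. This is a sketch only: the table and column names are hypothetical, and it assumes scikit-learn is available, as it is in Databricks Runtime for Machine Learning.

```python
from sklearn.linear_model import LogisticRegression

# Read one governed copy of the data; the table and column names are
# hypothetical placeholders.
features = spark.table("main.ml.customer_features")

# Analytics: summarize directly with Spark.
features.groupBy("segment").count().show()

# Data science: pull the same data into pandas and train a model.
pdf = features.select("tenure_months", "monthly_spend", "churned").toPandas()
model = LogisticRegression().fit(pdf[["tenure_months", "monthly_spend"]], pdf["churned"])
```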

Pandas workloads in Jupyter notebooks can be synced and run using Databricks Git folders. Databricks provides native support for pandas in all Databricks Runtime versions, and configures many popular ML and deep learning libraries in Databricks Runtime for Machine Learning. If you sync your local workloads using Git and workspace files in Git folders, you can use the same relative paths for data and custom libraries present in your local environment.
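As a sketch of that workflow, a notebook synced through a Git folder can keep using the relative paths it used locally; the data file and helper module names here are hypothetical.

```python
import pandas as pd

# Hypothetical helper module committed alongside the notebook in the repo.
from utils.cleaning import drop_outliers

# The same relative path works on a laptop and in a Databricks Git folder,
# because workspace files preserve the repository layout.
df = pd.read_csv("data/measurements.csv")
df = drop_outliers(df, column="reading")
```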

Note

By default, Databricks maintains the .ipynb extension for Jupyter notebooks synced with Databricks Git folders, but automatically converts Jupyter notebooks to Databricks notebooks when they are imported through the UI. Databricks notebooks are saved with a .py extension, so they can live side by side with Jupyter notebooks in a Git repository.