Developer tools and guidance

Learn about tools and guidance you can use to work with Databricks assets and data and to develop Databricks applications.

Use an IDE

You can connect many popular third-party IDEs to a Databricks cluster. This allows you to write code on your local development machine by using the Spark APIs and then run that code as jobs remotely on a Databricks cluster.

Several popular third-party IDEs provide documented integrations with Databricks.
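
The code you write locally uses the standard Spark APIs. As a rough sketch (the samples.nyctaxi.trips table and its columns are examples, and the remote Spark session is assumed to come from a tool such as Databricks Connect), a small PySpark job might look like this:

```python
# A minimal sketch of Spark code written locally in an IDE and run against a
# remote Databricks cluster. Assumes the remote session is supplied by a tool
# such as Databricks Connect; the table and column names are examples only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

trips = spark.table("samples.nyctaxi.trips")
daily_fares = (
    trips.groupBy(F.to_date("tpep_pickup_datetime").alias("pickup_date"))
    .agg(F.sum("fare_amount").alias("total_fares"))
    .orderBy("pickup_date")
)
daily_fares.show()
```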

Use a connector or driver

You can use connectors and drivers, such as the Databricks ODBC and JDBC drivers, to connect your code to a Databricks cluster.

For additional information about connecting your code through JDBC or ODBC, see the JDBC and ODBC configuration guidance.
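
One example is the Databricks SQL Connector for Python (the databricks-sql-connector package). A minimal sketch, with placeholder connection values:

```python
# Sketch: query Databricks from local Python code with the
# Databricks SQL Connector for Python (pip install databricks-sql-connector).
# All connection values below are placeholders that you must supply.
from databricks import sql

with sql.connect(
    server_hostname="<workspace-hostname>",
    http_path="<cluster-or-warehouse-http-path>",
    access_token="<personal-access-token>",
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT current_date() AS today")
        for row in cursor.fetchall():
            print(row)
```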

Use a notebook

To work with file systems, libraries, and secrets from a Databricks cluster by running Python, R, or Scala code in a notebook, see Databricks Utilities.
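
For example, in a Python notebook cell (where dbutils is predefined), you might list files and read a secret. The directory, scope, and key names below are placeholders:

```python
# Runs in a Databricks notebook cell, where `dbutils` is predefined.

# File systems: list the contents of a DBFS directory.
files = dbutils.fs.ls("/databricks-datasets")
for f in files:
    print(f.path)

# Secrets: read a secret without exposing its value in the notebook.
# The scope and key names here are placeholders.
token = dbutils.secrets.get(scope="my-scope", key="my-api-token")
```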

Call Databricks REST APIs

You can use popular third-party tools such as curl and Postman to work with Databricks resources directly through the Databricks REST APIs.

REST API (latest): Data Science & Engineering workspace assets such as clusters, global init scripts, groups, pools, jobs, libraries, permissions, secrets, and tokens, by using the latest version of the Databricks REST API.
REST API 2.1: Data Science & Engineering workspace assets such as jobs, by using version 2.1 of the Databricks REST API.
REST API 2.0: Data Science & Engineering workspace assets such as clusters, global init scripts, groups, pools, jobs, libraries, permissions, secrets, and tokens, by using version 2.0 of the Databricks REST API.
REST API 1.2: Command executions and execution contexts, by using version 1.2 of the Databricks REST API.
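
For example, the same request you might issue with curl can be made from Python with the requests library. A minimal sketch using the REST API 2.0 clusters list endpoint; the workspace URL and token are placeholders:

```python
# Sketch: call the Databricks REST API with a personal access token.
# The workspace URL and token are placeholders; the endpoint shown is the
# REST API 2.0 call that lists clusters in the workspace.
import requests

DATABRICKS_HOST = "https://<workspace-instance>"
TOKEN = "<personal-access-token>"

response = requests.get(
    f"{DATABRICKS_HOST}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()

for cluster in response.json().get("clusters", []):
    print(cluster["cluster_id"], cluster["cluster_name"])
```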

Provision infrastructure

You can use an infrastructure-as-code (IaC) approach to programmatically provision Databricks infrastructure and assets such as workspaces, clusters, jobs, groups, and users. For details, see Databricks Terraform provider.

Follow patterns and practices

To manage the lifecycle of Databricks assets and data, you can use continuous integration and delivery (CI/CD), data pipeline, and data engineering tools.

Continuous integration and delivery on Databricks using Jenkins: Develop a CI/CD pipeline for Databricks that uses Jenkins.
Managing dependencies in data pipelines: Manage and schedule a data pipeline that uses Apache Airflow (see the sketch after this list).
dbt Core integration with Databricks: Transform data in Databricks by writing SELECT statements on your local development machine. dbt turns these SELECT statements into tables and views.
dbt Cloud integration with Databricks: Transform data in Databricks by writing SELECT statements in your web browser. dbt turns these SELECT statements into tables and views.
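
As a sketch of the Airflow pattern referenced above, the snippet below uses the DatabricksSubmitRunOperator from the apache-airflow-providers-databricks package to schedule a daily notebook run. The connection name, notebook path, and cluster settings are placeholder assumptions:

```python
# Sketch: schedule a Databricks notebook run from Apache Airflow.
# Assumes the apache-airflow-providers-databricks package is installed and an
# Airflow connection (here "databricks_default") points at your workspace.
# The notebook path and cluster settings below are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

with DAG(
    dag_id="databricks_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ scheduling argument
    catchup=False,
) as dag:
    run_notebook = DatabricksSubmitRunOperator(
        task_id="run_etl_notebook",
        databricks_conn_id="databricks_default",
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "i3.xlarge",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/Repos/etl/daily_load"},
    )
```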

Use a SQL database tool

You can use these third-party tools to run SQL commands and scripts and to browse database objects in Databricks.

DataGrip integration with Databricks: Use a query console, schema navigation, smart code completion, and other features to run SQL commands and scripts and to browse database objects in Databricks.
DBeaver integration with Databricks: Run SQL commands and browse database objects in Databricks by using this client software application and database administration tool.
SQL Workbench/J: Run SQL scripts (either interactively or as a batch) in Databricks by using this SQL query tool.

Use other tools

You can connect many popular third-party tools to clusters to access data in Databricks. See the Databricks integrations guide.