Databricks concepts

This article introduces the set of fundamental concepts you need to understand in order to use Databricks effectively.

Some concepts are general to Databricks, and others are specific to the persona-based Databricks environment you are using:

  • Databricks Data Science & Engineering

  • Databricks Machine Learning

General concepts

This section describes concepts and terms that apply across all Databricks persona-based environments.

Accounts and workspaces

In Databricks, the term workspace has two meanings:

  1. A Databricks deployment in the cloud that functions as the unified environment that your team uses for accessing all of their Databricks assets. Your organization can choose to have multiple workspaces or just one: it depends on your needs.

  2. The UI for the Databricks persona-based environments. For example, the “workspace browser” refers to the UI that lets you browse notebooks, libraries, and other files in the persona-based environments.

A Databricks account represents a single subscription for purposes of billing and support; it can include multiple workspaces.



Databricks bills based on Databricks units (DBUs), a unit of processing capability per hour that varies by VM instance type.

See the Databricks on Google Cloud pricing page.
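To make the billing unit concrete, here is a toy cost calculation. Both the DBU consumption rate and the per-DBU price below are invented placeholders, not real prices; consult the pricing page for actual rates.

```python
# Back-of-the-envelope DBU cost estimate. The per-hour DBU figure and the
# per-DBU rate are made-up placeholders for illustration only.
dbu_per_hour = 2.0    # DBUs a hypothetical VM instance type consumes per hour
hours = 5             # how long the cluster runs
rate_per_dbu = 0.15   # USD per DBU (placeholder)

cost = dbu_per_hour * hours * rate_per_dbu
print(f"${cost:.2f}")  # -> $1.50
```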

Authentication and authorization

This section describes concepts that you need to know when you manage Databricks identities and their access to Databricks assets.


User

A unique individual who has access to the system. User identities are represented by email addresses.

Service principal

A service identity for use with jobs, automated tools, and systems such as scripts, apps, and CI/CD platforms. Service principals are represented by an application ID.


Group

A collection of identities. Groups simplify identity management, making it easier to assign access to workspaces, data, and other securable objects. All Databricks identities can be assigned as members of groups.

Access control list (ACL)

A list of permissions attached to the workspace, cluster, job, table, or experiment. An ACL specifies which users or system processes are granted access to the objects, as well as what operations are allowed on the assets. Each entry in a typical ACL specifies a subject and an operation.
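As an illustration, each ACL entry can be thought of as a (subject, permission) pair. The sketch below mimics the shape of a Permissions API payload; the principal names and permission levels are illustrative placeholders:

```python
import json

# Each entry pairs a subject (a user or a group here) with an allowed
# operation on the object the ACL is attached to. Field names follow the
# Permissions API; the values are placeholders.
acl = {
    "access_control_list": [
        {"user_name": "alice@example.com", "permission_level": "CAN_MANAGE"},
        {"group_name": "data-engineers", "permission_level": "CAN_ATTACH_TO"},
    ]
}
print(json.dumps(acl, indent=2))
```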

Personal access token

An opaque string used to authenticate to the REST API and by tools in the Databricks integrations to connect to SQL warehouses.
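For REST API calls, the token is sent as a bearer token in the `Authorization` header. The sketch below only builds the request without sending it; the workspace URL and token are placeholders:

```python
from urllib.request import Request

# A personal access token authenticates REST API calls via the
# Authorization header. WORKSPACE_URL and TOKEN are placeholders.
WORKSPACE_URL = "https://<your-workspace>.gcp.databricks.com"
TOKEN = "dapi-example-token"  # placeholder personal access token

def authed_request(path: str) -> Request:
    """Build (but do not send) an authenticated REST API request."""
    return Request(
        url=f"{WORKSPACE_URL}{path}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )

req = authed_request("/api/2.0/clusters/list")
print(req.get_header("Authorization"))  # -> Bearer dapi-example-token
```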

Databricks Data Science & Engineering

Databricks Data Science & Engineering is the classic Databricks environment for collaboration among data scientists, data engineers, and data analysts. This section describes the fundamental concepts you need to understand in order to work effectively in the Databricks Data Science & Engineering environment.


Workspace

A workspace is an environment for accessing all of your Databricks assets. A workspace organizes objects (notebooks, libraries, dashboards, and experiments) into folders and provides access to data objects and computational resources.

This section describes the objects contained in the Databricks workspace folders.


Notebook

A web-based interface to documents that contain runnable commands, visualizations, and narrative text.


Dashboard

An interface that provides organized access to visualizations.


Library

A package of code available to the notebook or job running on your cluster. Databricks runtimes include many libraries, and you can add your own.


Repo

A folder whose contents are co-versioned together by syncing them to a remote Git repository.


Experiment

A collection of MLflow runs for training a machine learning model.

Data Science & Engineering interface

This section describes the interfaces that Databricks supports for accessing your assets: UI and API.


UI

The Databricks UI provides an easy-to-use graphical interface to workspace folders and their contained objects, data objects, and computational resources.


REST API

There are three versions of the REST API: 2.1, 2.0, and 1.2. REST APIs 2.1 and 2.0 support most of the functionality of REST API 1.2, as well as additional functionality, and are preferred.

Data management in Data Science & Engineering

This section describes the objects that hold the data on which you perform analytics and feed into machine learning algorithms.

Databricks File System (DBFS)

A filesystem abstraction layer over a blob store. It contains directories, which can contain files (data files, libraries, and images), and other directories. DBFS is automatically populated with some datasets that you can use to learn Databricks.
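The same DBFS object can be addressed two ways: Spark APIs use the `dbfs:/` scheme, while code running on the driver sees DBFS under a local `/dbfs` mount. A minimal sketch of that path mapping (the sample-dataset path is illustrative):

```python
# One DBFS object, two addressing conventions: dbfs:/ for Spark APIs,
# /dbfs/ for local-filesystem access on the driver.
spark_path = "dbfs:/databricks-datasets/README.md"
local_path = spark_path.replace("dbfs:/", "/dbfs/", 1)
print(local_path)  # -> /dbfs/databricks-datasets/README.md
```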


Database

A collection of information that is organized so that it can be easily accessed, managed, and updated.


Table

A representation of structured data. You query tables with Apache Spark SQL and Apache Spark APIs.


Metastore

The component that stores all the structure information of the various tables and partitions in the data warehouse, including column and column type information, the serializers and deserializers necessary to read and write data, and the corresponding files where the data is stored. Every Databricks deployment has a central Hive metastore accessible by all clusters to persist table metadata. You also have the option to use an existing external Hive metastore.

Computation management in Data Science & Engineering

This section describes concepts that you need to know to run computations in Databricks Data Science & Engineering.


Cluster

A set of computation resources and configurations on which you run notebooks and jobs. There are two types of clusters: all-purpose and job.

  • You create an all-purpose cluster using the UI, CLI, or REST API. You can manually terminate and restart an all-purpose cluster. Multiple users can share such clusters to do collaborative interactive analysis.

  • The Databricks job scheduler creates a job cluster when you run a job on a new job cluster and terminates the cluster when the job is complete. You cannot restart a job cluster.
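A minimal spec for creating an all-purpose cluster through the Clusters API might look like the following. The field names match the public API, but every value here is an illustrative assumption; pick a runtime version and node type available in your workspace:

```python
import json

# Illustrative cluster spec for POST /api/2.0/clusters/create.
# All values are placeholders.
cluster_spec = {
    "cluster_name": "shared-analysis",     # hypothetical name
    "spark_version": "10.4.x-scala2.12",   # example Databricks Runtime version
    "node_type_id": "n1-standard-4",       # example GCP machine type
    "num_workers": 2,
    "autotermination_minutes": 60,         # terminate after 60 idle minutes
}
print(json.dumps(cluster_spec, indent=2))
```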


Pool

A set of idle, ready-to-use instances that reduce cluster start and auto-scaling times. When attached to a pool, a cluster allocates its driver and worker nodes from the pool. If the pool does not have sufficient idle resources to accommodate the cluster’s request, the pool expands by allocating new instances from the instance provider. When an attached cluster is terminated, the instances it used are returned to the pool and can be reused by a different cluster.
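The relationship between a pool and a cluster can be sketched as two specs: the pool keeps instances warm, and a cluster opts in by referencing the pool's ID. Field names follow the Instance Pools and Clusters APIs; all values are placeholders:

```python
# Illustrative instance pool spec (Instance Pools API field names;
# values are placeholders).
pool_spec = {
    "instance_pool_name": "warm-nodes",
    "node_type_id": "n1-standard-4",              # example GCP machine type
    "min_idle_instances": 2,                      # instances kept warm
    "idle_instance_autotermination_minutes": 30,  # reclaim long-idle instances
}

# A cluster draws its driver and worker nodes from the pool by referencing
# the pool ID returned when the pool is created.
pooled_cluster = {
    "cluster_name": "pooled-cluster",
    "spark_version": "10.4.x-scala2.12",
    "instance_pool_id": "<pool-id>",              # placeholder
    "num_workers": 2,
}
print(pooled_cluster["instance_pool_id"])
```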

Databricks runtime

The set of core components that run on the clusters managed by Databricks. Databricks offers several types of runtimes, including:

  • Databricks Runtime includes Apache Spark but also adds a number of components and updates that substantially improve the usability, performance, and security of big data analytics.

  • Databricks Runtime for Machine Learning is built on Databricks Runtime and provides a ready-to-go environment for machine learning and data science. It contains multiple popular libraries, including TensorFlow, Keras, PyTorch, and XGBoost.


Workflows

Frameworks to develop and run data processing pipelines.


Workload

Databricks identifies two types of workloads subject to different pricing schemes: data engineering (job) and data analytics (all-purpose).

  • Data engineering: an (automated) workload that runs on a job cluster, which the Databricks job scheduler creates for each workload.

  • Data analytics: an (interactive) workload that runs on an all-purpose cluster. Interactive workloads typically run commands within a Databricks notebook. However, running a job on an existing all-purpose cluster is also treated as an interactive workload.

Execution context

The state for a REPL environment for each supported programming language. The languages supported are Python, R, Scala, and SQL.

Databricks Machine Learning

The Databricks Machine Learning environment starts with the features provided in the Data Science & Engineering workspace and adds functionality. Important concepts include:


Experiments

The main unit of organization for tracking machine learning model development. Experiments organize, display, and control access to individual logged runs of model training code.

Feature Store

A centralized repository of features. Databricks Feature Store enables feature sharing and discovery across your organization and also ensures that the same feature computation code is used for model training and inference.


Models

A trained machine learning or deep learning model that has been registered in Model Registry.