Manage clusters
This article describes how to manage Databricks clusters, including displaying, editing, starting, terminating, deleting, controlling access, and monitoring performance and logs.
Display clusters
To view the clusters in your workspace, click Compute in the sidebar.
On the left side are two columns indicating whether the cluster is pinned and the status of the cluster. Hover over the status to get more information.
Pin a cluster
30 days after a cluster is terminated, it is permanently deleted. To keep an all-purpose cluster configuration after a cluster has been terminated for more than 30 days, an administrator can pin the cluster. Up to 100 clusters can be pinned.
Admins can pin a cluster from the cluster list or the cluster detail page by clicking the pin icon.
You can also invoke the Clusters API endpoint to pin a cluster programmatically.
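As a minimal sketch, assuming the Python requests library, your workspace URL, a personal access token, and a hypothetical cluster ID, the pin call looks like this:

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # assumed workspace URL
TOKEN = "<personal-access-token>"  # assumed personal access token
CLUSTER_ID = "1234-567890-abcde123"  # hypothetical cluster ID

# Pin the cluster so its configuration is retained beyond the 30-day window.
resp = requests.post(
    f"{HOST}/api/2.0/clusters/pin",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"cluster_id": CLUSTER_ID},
)
resp.raise_for_status()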
View a cluster configuration as a JSON file
Sometimes it can be helpful to view your cluster configuration as JSON. This is especially useful when you want to create similar clusters using the Clusters API. When you view an existing cluster, go to the Configuration tab, click JSON in the top right of the tab, copy the JSON, and paste it into your API call. JSON view is read-only.
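For example, a minimal sketch of passing a copied configuration to the create endpoint, with placeholder values standing in for the JSON you copy:

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # assumed workspace URL
TOKEN = "<personal-access-token>"  # assumed personal access token

# Paste the JSON copied from the Configuration tab here (placeholder values shown).
cluster_spec = {
    "cluster_name": "similar-cluster",
    "spark_version": "<runtime-version>",
    "node_type_id": "<node-type>",
    "num_workers": 2,
}

resp = requests.post(
    f"{HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
)
resp.raise_for_status()
print(resp.json()["cluster_id"])  # ID of the newly created cluster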
Edit a cluster
You can edit a cluster configuration from the cluster details UI. You can also invoke the Clusters API endpoint to edit the cluster programmatically.
Note
Notebooks and jobs that were attached to the cluster remain attached after editing.
Libraries installed on the cluster remain installed after editing.
If you edit any attribute of a running cluster (except for the cluster size and permissions), you must restart it. This can disrupt users who are currently using the cluster.
You can only edit running or terminated clusters. You can, however, update permissions for clusters that are not in those states, on the cluster details page.
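As a minimal sketch of the programmatic path, assuming a workspace URL, a personal access token, and placeholder configuration values:

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # assumed workspace URL
TOKEN = "<personal-access-token>"  # assumed personal access token

# The edit endpoint takes a full cluster specification, not a partial patch
# (placeholder values shown). Editing a running cluster restarts it.
resp = requests.post(
    f"{HOST}/api/2.0/clusters/edit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "cluster_id": "1234-567890-abcde123",  # hypothetical cluster ID
        "cluster_name": "edited-cluster",
        "spark_version": "<runtime-version>",
        "node_type_id": "<node-type>",
        "num_workers": 4,
    },
)
resp.raise_for_status()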
Clone a cluster
To clone an existing cluster, select Clone from the cluster’s kebab menu (also known as the three-dot menu).
After you select clone, the cluster creation UI opens pre-populated with the cluster configuration. The following attributes are not included in the clone:
Cluster permissions
Installed libraries
Attached notebooks
Control access to clusters
Cluster access control within the admin settings page allows workspace admins to give fine-grained cluster access to other users. There are two types of cluster access control:
Cluster-creation permission: Workspace admins can choose which users are allowed to create clusters.
Cluster-level permissions: A user who has the Can manage permission for a cluster can configure whether other users can attach to, restart, resize, and manage that cluster.
To edit permissions for a cluster, select Edit Permissions from that cluster’s kebab menu.
For more on cluster access control and cluster-level permissions, see Cluster access control.
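One way to script cluster-level permissions is the Permissions API; the following sketch assumes a hypothetical cluster ID and an assumed group name:

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # assumed workspace URL
TOKEN = "<personal-access-token>"  # assumed personal access token
CLUSTER_ID = "1234-567890-abcde123"  # hypothetical cluster ID

# Grant attach rights to an assumed group; PATCH adds to existing grants.
resp = requests.patch(
    f"{HOST}/api/2.0/permissions/clusters/{CLUSTER_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "access_control_list": [
            {"group_name": "data-engineers", "permission_level": "CAN_ATTACH_TO"}
        ]
    },
)
resp.raise_for_status()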
Terminate a cluster
To save cluster resources, you can terminate a cluster. The terminated cluster’s configuration is stored so that it can be reused (or, in the case of jobs, autostarted) at a later time. You can manually terminate a cluster or configure the cluster to terminate automatically after a specified period of inactivity. When the number of terminated clusters exceeds 150, the oldest clusters are deleted.
Unless a cluster is pinned or restarted, it is automatically and permanently deleted 30 days after termination.
Terminated clusters appear in the cluster list with a gray circle at the left of the cluster name.
Note
When you run a job on a New Job Cluster (which is usually recommended), the cluster terminates and is unavailable for restarting when the job is complete. On the other hand, if you schedule a job to run on an Existing All-Purpose Cluster that has been terminated, that cluster will autostart.
Manual termination
You can manually terminate a cluster from the cluster list (by clicking the square on the cluster’s row) or the cluster detail page (by clicking Terminate).
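The programmatic equivalent is a minimal sketch like the following, assuming a hypothetical cluster ID:

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # assumed workspace URL
TOKEN = "<personal-access-token>"  # assumed personal access token

# Terminate the cluster; its configuration is kept and it can be restarted later.
resp = requests.post(
    f"{HOST}/api/2.0/clusters/delete",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"cluster_id": "1234-567890-abcde123"},  # hypothetical cluster ID
)
resp.raise_for_status()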
Automatic termination
You can also set auto termination for a cluster. During cluster creation, you can specify an inactivity period in minutes after which you want the cluster to terminate.
If the difference between the current time and the last command run on the cluster is more than the inactivity period specified, Databricks automatically terminates that cluster.
A cluster is considered inactive when all commands on the cluster, including Spark jobs, Structured Streaming, and JDBC calls, have finished executing.
Warning
Clusters do not report activity resulting from the use of DStreams. This means that an auto-terminating cluster may be terminated while it is running DStreams. Turn off auto termination for clusters running DStreams or consider using Structured Streaming.
The auto termination feature monitors only Spark jobs, not user-defined local processes. Therefore, if all Spark jobs have completed, a cluster may be terminated, even if local processes are running.
Idle clusters continue to accumulate DBU and cloud instance charges during the inactivity period before termination.
Configure automatic termination
You can configure automatic termination in the create cluster UI. Ensure that the box is checked, and enter the number of minutes in the Terminate after ___ minutes of inactivity setting.
You can opt out of auto termination by clearing the Auto Termination checkbox or by specifying an inactivity period of 0.
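If you create or edit clusters through the API, the same setting is the autotermination_minutes field of the cluster specification; a sketch with placeholder values:

# Fragment of a cluster specification (placeholder values) that terminates the
# cluster after 60 minutes of inactivity; a value of 0 opts out of auto termination.
cluster_spec = {
    "cluster_name": "auto-terminating-cluster",
    "spark_version": "<runtime-version>",
    "node_type_id": "<node-type>",
    "num_workers": 2,
    "autotermination_minutes": 60,
}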
Note
Auto termination is best supported in the latest Spark versions. Older Spark versions have known limitations which can result in inaccurate reporting of cluster activity. For example, clusters running JDBC, R, or streaming commands can report a stale activity time that leads to premature cluster termination. Please upgrade to the most recent Spark version to benefit from bug fixes and improvements to auto termination.
Delete a cluster
Deleting a cluster terminates the cluster and removes its configuration. To delete a cluster, select Delete from the cluster’s menu.
Warning
You cannot undo this action.
To delete a pinned cluster, it must first be unpinned by an administrator.
You can also invoke the Clusters API endpoint to delete a cluster programmatically.
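A minimal sketch of the programmatic call, assuming a hypothetical cluster ID:

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # assumed workspace URL
TOKEN = "<personal-access-token>"  # assumed personal access token

# Permanently delete the cluster and remove its configuration. This cannot be undone.
resp = requests.post(
    f"{HOST}/api/2.0/clusters/permanent-delete",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"cluster_id": "1234-567890-abcde123"},  # hypothetical cluster ID
)
resp.raise_for_status()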
Restart a cluster
You can restart a previously terminated cluster from the cluster list, the cluster detail page, or a notebook. You can also invoke the Clusters API endpoint to start a cluster programmatically.
Databricks identifies a cluster using its unique cluster ID. When you start a terminated cluster, Databricks re-creates the cluster with the same ID, automatically installs all the libraries, and reattaches the notebooks.
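A minimal sketch of starting a terminated cluster programmatically, assuming a hypothetical cluster ID:

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # assumed workspace URL
TOKEN = "<personal-access-token>"  # assumed personal access token

# Start a previously terminated cluster; it keeps the same cluster ID.
resp = requests.post(
    f"{HOST}/api/2.0/clusters/start",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"cluster_id": "1234-567890-abcde123"},  # hypothetical cluster ID
)
resp.raise_for_status()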
Restart a cluster to update it with the latest images
When you restart a cluster, it gets the latest images for the compute resource containers and the VM hosts. It is important to schedule regular restarts for long-running clusters such as those used for processing streaming data.
It is your responsibility to restart all compute resources regularly to keep the images up to date with the latest version.
Notebook example: Find long-running clusters
If you are a workspace admin, you can run a script that determines how long each of your clusters has been running, and optionally, restart them if they are older than a specified number of days. Databricks provides this script as a notebook.
Note
If your workspace is part of the public preview of automatic cluster update, you might not need this script. Clusters restart automatically if needed during the scheduled maintenance windows.
The first lines of the script define configuration parameters:
min_age_output: The maximum number of days that a cluster can run. Default is 1.
perform_restart: If True, the script restarts clusters with age greater than the number of days specified by min_age_output. The default is False, which identifies the long-running clusters but does not restart them.
secret_configuration: Replace REPLACE_WITH_SCOPE and REPLACE_WITH_KEY with a secret scope and key name. For more details on setting up the secrets, see the notebook.
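The exact contents of the notebook may differ, but the configuration cells might look roughly like this sketch:

# Sketch of the configuration parameters described above; the notebook itself may differ.
min_age_output = 1       # maximum number of days a cluster is allowed to run
perform_restart = False  # True restarts clusters older than min_age_output days
secret_configuration = "REPLACE_WITH_SCOPE/REPLACE_WITH_KEY"  # secret scope and key name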
Warning
If you set perform_restart to True, the script automatically restarts eligible clusters, which can cause active jobs to fail and reset open notebooks. To reduce the risk of disrupting your workspace’s business-critical jobs, plan a scheduled maintenance window and be sure to notify the workspace users.
Cluster autostart for jobs and JDBC/ODBC queries
When a job assigned to a terminated cluster is scheduled to run, or you connect to a terminated cluster from a JDBC/ODBC interface, the cluster is automatically restarted. See Create a job and JDBC connect.
Cluster autostart allows you to configure clusters to auto-terminate without requiring manual intervention to restart the clusters for scheduled jobs. Furthermore, you can schedule cluster initialization by scheduling a job to run on a terminated cluster.
Before a cluster is restarted automatically, cluster and job access control permissions are checked.
Note
If your cluster was created in Databricks platform version 2.70 or earlier, there is no autostart: jobs scheduled to run on terminated clusters will fail.
View cluster information in the Apache Spark UI
You can view detailed information about Spark jobs by selecting the Spark UI tab on the cluster details page.
If you restart a terminated cluster, the Spark UI displays information for the restarted cluster, not the historical information for the terminated cluster.
View cluster logs
Databricks provides three kinds of logging of cluster-related activity:
Cluster event logs, which capture cluster lifecycle events like creation, termination, and configuration edits.
Apache Spark driver and worker logs, which you can use for debugging.
Cluster init-script logs, which are valuable for debugging init scripts.
This section discusses cluster event logs and driver and worker logs. For details about init-script logs, see Init script logging.
Cluster event logs
The cluster event log displays important cluster lifecycle events that are triggered manually by user actions or automatically by Databricks. Such events affect the operation of a cluster as a whole and the jobs running in the cluster.
For supported event types, see the Clusters API data structure.
Events are stored for 60 days, which is comparable to other data retention times in Databricks.
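A minimal sketch of retrieving recent event log entries through the Clusters API, assuming a hypothetical cluster ID:

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # assumed workspace URL
TOKEN = "<personal-access-token>"  # assumed personal access token

# Fetch the most recent lifecycle events for a cluster.
resp = requests.post(
    f"{HOST}/api/2.0/clusters/events",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"cluster_id": "1234-567890-abcde123", "limit": 25},
)
resp.raise_for_status()
for event in resp.json().get("events", []):
    print(event["timestamp"], event["type"])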
Cluster driver and worker logs
The direct print and log statements from your notebooks, jobs, and libraries go to the Spark driver logs. You can access these log files from the Driver logs tab on the cluster details page. Click the name of a log file to download it.
These logs have three outputs:
Standard output
Standard error
Log4j logs
To view Spark worker logs, use the Spark UI tab. You can also configure a log delivery location for the cluster. Both worker and cluster logs are delivered to the location you specify.
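If you configure the cluster through the API, the delivery location is the cluster_log_conf field of the cluster specification; a sketch with a placeholder DBFS path:

# Fragment of a cluster specification (placeholder path) that delivers driver and
# worker logs to a DBFS location.
cluster_spec = {
    "cluster_name": "logged-cluster",
    "spark_version": "<runtime-version>",
    "node_type_id": "<node-type>",
    "num_workers": 2,
    "cluster_log_conf": {"dbfs": {"destination": "dbfs:/cluster-logs"}},
}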
Monitor performance
You can install Datadog agents on cluster nodes to send Datadog metrics to your Datadog account.
Notebook example: Datadog metrics

The following notebook demonstrates how to install a Datadog agent on a cluster using a cluster-scoped init script.
To install the Datadog agent on all clusters, manage the cluster-scoped init script using a cluster policy.
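As a sketch of the general mechanism rather than the Datadog notebook itself, a cluster-scoped init script is referenced from the cluster specification; the path below is a placeholder:

# Fragment of a cluster specification (placeholder path) that runs a cluster-scoped
# init script, such as an agent installer, on every node when the cluster starts.
cluster_spec = {
    "cluster_name": "monitored-cluster",
    "spark_version": "<runtime-version>",
    "node_type_id": "<node-type>",
    "num_workers": 2,
    "init_scripts": [
        {"workspace": {"destination": "/Users/<you>/install-datadog-agent.sh"}}
    ],
}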