Manage cluster policies

A cluster policy is a tool used to limit a user's or group's cluster creation permissions based on a set of policy rules.

Cluster policies let you:

  • Limit users to creating clusters with prescribed settings.

  • Limit users to creating a certain number of clusters.

  • Simplify the user interface and enable more users to create their own clusters (by fixing and hiding some values).

  • Control cost by limiting per cluster maximum cost (by setting limits on attributes whose values contribute to hourly price).

For an introduction to cluster policies and configuration recommendations, see the Databricks cluster policies video.

This article focuses on managing policies using the UI. You can also use the Cluster Policies API and the Permissions API to manage policies.

Requirements

Cluster policies require the Premium plan.

Enforcement rules

You can express the following types of constraints in policy rules:

  • Fixed value with disabled control element

  • Fixed value with control hidden in the UI (value is visible in the JSON view)

  • Attribute value limited to a set of values (either allow list or block list)

  • Attribute value matching a given regex

  • Numeric attribute limited to a certain range

  • Default value used by the UI with control enabled

Managed cluster attributes

Cluster policies support all cluster attributes controlled with the Clusters API. The specific types of restrictions supported may vary per field, based on the field type and its relation to the cluster form UI elements.

In addition, cluster policies support the following synthetic attributes:

  • A “max DBU-hour” metric, which is the maximum DBUs a cluster can use on an hourly basis. This metric is a direct way to control cost at the individual cluster level.

  • A limit on the source that creates the cluster: the Jobs service (job clusters), or the Clusters UI and Clusters REST API (all-purpose clusters). See the sketch after this list.
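
For example, a policy can combine both synthetic attributes to cap hourly cost and restrict cluster creation to the Jobs service. A minimal sketch (the limit value is illustrative; see Cluster policy virtual attribute paths for details):

{
  "dbus_per_hour": { "type": "range", "maxValue": 10 },
  "cluster_type": { "type": "fixed", "value": "job" }
}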

Unmanaged cluster attributes

The following cluster attributes cannot be restricted in a cluster policy:

  • Libraries, which are handled by the Libraries API. A workaround is to use a custom container or an init script.

  • Cluster permissions (ACLs), which are handled by the Permissions API.

Define a cluster policy

You define a cluster policy in a JSON policy definition, which you add when you create the cluster policy.

Create a cluster policy

You create a cluster policy using the cluster policies UI or the Cluster Policies API. To create a cluster policy using the UI:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Click Create Cluster Policy.

  4. Name the policy. Policy names are case insensitive.

  5. Optionally, select the policy family from the Family dropdown. This determines the template from which you build the policy. See Cluster policy families.

  6. Enter a Description of the policy. This helps others know the purpose of the policy.

  7. In the Definition tab, paste a policy definition.

  8. Click Create.
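
If you use the Cluster Policies API instead, the create request takes the policy name and the policy definition as a JSON-encoded string. A minimal sketch of a request body for POST /api/2.0/policies/clusters/create (the name and rule are illustrative):

{
  "name": "team-cluster-policy",
  "definition": "{\"autotermination_minutes\": {\"type\": \"fixed\", \"value\": 30}}"
}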

Clone an existing cluster policy

You can create a cluster policy by cloning an existing policy. To clone a cluster policy using the UI:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Select the policy you want to clone.

  4. Click Clone.

  5. In the next page, all fields are pre-populated with values from the existing policy. Change the values of the fields that you want to modify, then click Create.

Manage cluster policy permissions using the UI

Workspace admins have permission to all policies.

When creating a cluster, non-admins can only select policies for which they have been granted permission. If a user has cluster create permission, then they can also select the Unrestricted policy, allowing them to create fully configurable clusters.

Note

If the user doesn’t have access to any policies, the policy dropdown does not display.

Add a cluster policy permission

To add a cluster policy permission using the UI:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Select the policy you want to update.

  4. Click the Permissions tab.

  5. In the Name column, select a principal.

  6. In the Permission column, select a permission.

  7. Click Add.
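
To grant the same permission with the Permissions API, send an access control list to the cluster policy permissions endpoint. A sketch of a request body for PATCH /api/2.0/permissions/cluster-policies/<policy-id> (the group name is a placeholder; CAN_USE lets the principal create clusters with the policy):

{
  "access_control_list": [
    {
      "group_name": "data-engineering",
      "permission_level": "CAN_USE"
    }
  ]
}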

Delete a cluster policy permission

To delete a cluster policy permission using the UI:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Select the policy you want to update.

  4. Click the Permissions tab.

  5. Click the delete icon in the permission row.

Restrict the number of clusters per user using the UI

Policy permissions allow you to set a max number of clusters per user. This determines how many clusters a user can create using that policy. If the user exceeds the limit, the operation fails.

To restrict the number of clusters a user can create using a policy, use the Max clusters per user setting under the Permissions tab in the cluster policies UI.

Note

Databricks doesn’t proactively terminate clusters to maintain the limit. If a user has three clusters running with the policy and the admin reduces the limit to one, the three clusters will continue to run. Extra clusters must be manually terminated to comply with the limit.
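
You can also set this limit through the Cluster Policies API by including the max_clusters_per_user field when you create or edit a policy. A sketch of a request body for POST /api/2.0/policies/clusters/edit (the ID, name, and rule are placeholders):

{
  "policy_id": "<policy-id>",
  "name": "team-cluster-policy",
  "definition": "{\"autotermination_minutes\": {\"type\": \"fixed\", \"value\": 30}}",
  "max_clusters_per_user": 3
}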

Edit a cluster policy using the UI

You edit a cluster policy using the cluster policies UI or the Cluster Policies API. To edit a cluster policy using the UI:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Select the policy you want to edit.

  4. Click Edit.

  5. In the Definition tab, edit the policy definition.

  6. Click Update.
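
Editing through the Cluster Policies API follows the same shape as creating: send the policy ID along with the full updated name and definition to POST /api/2.0/policies/clusters/edit. A sketch with placeholder values:

{
  "policy_id": "<policy-id>",
  "name": "team-cluster-policy",
  "definition": "{\"autotermination_minutes\": {\"type\": \"fixed\", \"value\": 60}}"
}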

Delete a cluster policy using the UI

You delete a cluster policy using the cluster policies UI or the Cluster Policies API. To delete a cluster policy using the UI:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Select the policy you want to delete.

  4. Click Delete.

  5. Click Delete to confirm.
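
Deleting through the Cluster Policies API requires only the policy ID. A sketch of a request body for POST /api/2.0/policies/clusters/delete:

{
  "policy_id": "<policy-id>"
}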

Cluster policy families

When you create a cluster policy, you can choose to use a policy family. Policy families provide you with pre-populated policy rules for common compute use cases.

When using a policy family, the rules for your policy are inherited from the policy family. After selecting a policy family, you can create the policy as-is, or choose to add rules or override the given rules.

Create a custom policy using a policy family

To customize a policy using a policy family:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Click Create Cluster Policy.

  4. Name the policy. Policy names are case insensitive.

  5. Select the policy family from the Family dropdown.

  6. Under the Definition tab, click Edit.

  7. A modal appears where you can override policy definitions. In the Overrides section, add the updated definitions then click OK.

Cluster policy definitions

A cluster policy definition is a collection of individual policy definitions expressed in JSON.

Policy definitions

A policy definition is a map between a path string defining an attribute and a limit type. There can only be one limitation per attribute. A path is specific to the type of resource and reflects the resource creation API attribute name. If the resource creation uses nested attributes, the path concatenates the nested attribute names using dots. Attributes that aren’t defined in the policy definition are unlimited when you create a cluster using the policy.

interface Policy {
  [path: string]: PolicyElement
}
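
For example, the Clusters API nests the max_workers attribute inside autoscale, so the policy path joins the two names with a dot:

{
  "autoscale.max_workers": { "type": "range", "maxValue": 10 }
}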

Policy elements

A policy element specifies one of the supported limit types on a given attribute and optionally a default value. You can specify a default value without defining a limit on the attribute in the policy.

type PolicyElement = FixedPolicy | ForbiddenPolicy | (LimitingPolicyBase & LimitingPolicy);
type LimitingPolicy = AllowlistPolicy | BlocklistPolicy | RegexPolicy | RangePolicy | UnlimitedPolicy;

This section describes the policy types:

Fixed policy

Limits the value to the specified value. Attribute values other than numbers and booleans must be represented as, or convertible to, a string.

Optionally the attribute can be hidden in the UI when the hidden flag is present and set to true. A fixed policy cannot specify a defaultValue attribute since the value attribute already determines the default value.

interface FixedPolicy {
    type: "fixed";
    value: string | number | boolean;
    hidden?: boolean;
}
Example
{
  "spark_version": { "type": "fixed", "value": "auto:latest-ml", "hidden": true }
}

Forbidden policy

Prevents use of an optional attribute.

interface ForbiddenPolicy {
    type: "forbidden";
}
Example

This policy forbids attaching a pool to the cluster.

{
  "instance_pool_id": { "type": "forbidden" }
}

Limiting policies: common fields

In a limiting policy you can specify two additional fields:

  • defaultValue - the value that populates the cluster creation form in the UI.

  • isOptional - a limiting policy on an attribute makes it required. To make the attribute optional, set the isOptional field to true.

interface LimitingPolicyBase {
    defaultValue?: string | number | boolean;
    isOptional?: boolean;
}

Note

Default values don’t automatically get applied to clusters created with the Clusters API. To apply default values when creating a cluster with the API, add the parameter apply_policy_default_values to the cluster definition and set it to true. This is not needed for fixed policies.

Example
{
  "instance_pool_id": { "type": "unlimited", "isOptional": true, "defaultValue": "id1" }
}

This example policy specifies the default value id1 for the pool, but makes it optional. When creating the cluster, you can select a different pool or choose not to use one.
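
For example, a Clusters API create request (POST /api/2.0/clusters/create) that picks up this pool default might look like the following sketch; the cluster name and versions are placeholders:

{
  "cluster_name": "analytics-cluster",
  "policy_id": "<policy-id>",
  "apply_policy_default_values": true,
  "spark_version": "auto:latest-lts",
  "num_workers": 2
}

Because apply_policy_default_values is true and instance_pool_id is omitted, the cluster is created in pool id1.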

Allow list policy

A list of allowed values.

interface AllowlistPolicy {
  type: "allowlist";
  values: (string | number | boolean)[];
}
Example
{
  "spark_version":  { "type": "allowlist", "values": [ "11.3.x-scala2.12", "10.4.x-scala2.12" ] }
}

Block list policy

A list of disallowed values. Since the values must be exact matches, this policy may not work as expected when the attribute is lenient in how the value is represented (for example, allowing leading and trailing spaces).

interface BlocklistPolicy {
  type: "blocklist";
  values: (string | number | boolean)[];
}
Example
{
  "spark_version":  { "type": "blocklist", "values": [ "7.3.x-scala2.12" ] }
}

Regex policy

Limits the value to values matching the regex. For safety, the regex is always anchored to the beginning and end of the string value when matching.

interface RegexPolicy {
  type: "regex";
  pattern: string;
}
Example
{
  "spark_version":  { "type": "regex", "pattern": "5\\.[3456].*" }
}

Range policy

Limits the value to the range specified by the minValue and maxValue attributes. The value must be a decimal number. The numeric limits must be representable as a double-precision floating-point value. To indicate the lack of a specific limit, you can omit either minValue or maxValue.

interface RangePolicy {
  type: "range";
  minValue?: number;
  maxValue?: number;
}
Example
{
  "num_workers":  { "type": "range", "maxValue": 10 }
}

Unlimited policy

Does not define value limits. You can use this policy type to make attributes required or to set the default value in the UI.

interface UnlimitedPolicy {
  type: "unlimited";
}
Example

To require adding the COST_BUCKET tag:

{
  "custom_tags.COST_BUCKET":  { "type": "unlimited" }
}

To set a default value for a Spark configuration variable, but also allow omitting (removing) it:

{
  "spark_conf.spark.my.conf":  { "type": "unlimited", "isOptional": true, "defaultValue": "my_value" }
}

Cluster policy attribute paths

The following table lists the supported cluster policy attribute paths.

| Attribute path | Type | Description |
|----------------|------|-------------|
| autoscale.max_workers | optional number | When hidden, removes the maximum worker number field from the UI. |
| autoscale.min_workers | optional number | When hidden, removes the minimum worker number field from the UI. |
| autotermination_minutes | number | A value of 0 represents no auto termination. When hidden, removes the auto termination checkbox and value input from the UI. |
| cluster_name | string | The cluster name. |
| custom_tags.* | string | Controls specific tag values by appending the tag name, for example: custom_tags.<mytag>. |
| data_security_mode | string | Sets the security features of the cluster. |
| driver_node_type_id | optional string | When hidden, removes the driver node type selection from the UI. |
| instance_pool_id | string | Controls the pool used by worker nodes if driver_instance_pool_id is also defined, or for all cluster nodes otherwise. If you use pools for worker nodes, you must also use pools for the driver node. When hidden, removes pool selection from the UI. |
| driver_instance_pool_id | string | If specified, configures a different pool for the driver node than for worker nodes. If not specified, inherits instance_pool_id. If you use pools for worker nodes, you must also use pools for the driver node. When hidden, removes driver pool selection from the UI. |
| gcp_attributes.availability | string | Controls GCP availability (PREEMPTIBLE_GCP, PREEMPTIBLE_WITH_FALLBACK_GCP, or ON_DEMAND_GCP). |
| gcp_attributes.use_preemptible_executors | boolean | Set to true to allow workers to be launched using preemptible VM instances. |
| gcp_attributes.zone_id | string | Controls the GCP zone ID. |
| node_type_id | string | When hidden, removes the worker node type selection from the UI. |
| num_workers | optional number | When hidden, removes the worker number specification from the UI. |
| single_user_name | string | The user name for credential passthrough single user access. |
| spark_conf.* | optional string | Controls specific configuration values by appending the configuration key name, for example: spark_conf.spark.executor.memory. |
| spark_env_vars.* | optional string | Controls specific Spark environment variable values by appending the environment variable, for example: spark_env_vars.<environment variable name>. |
| spark_version | string | The Spark image version name (as specified through the API). |

Cluster policy virtual attribute paths

| Attribute path | Type | Description |
|----------------|------|-------------|
| dbus_per_hour | number | Calculated attribute representing the DBU cost of the cluster, including the driver node (for autoscaling clusters, the maximum DBU cost). For use with range limitation. |
| cluster_type | string | Represents the type of cluster that can be created: all-purpose for Databricks all-purpose clusters, job for job clusters created by the job scheduler, and dlt for clusters created for Delta Live Tables pipelines. Allows or blocks the specified types of clusters to be created from the policy. If the all-purpose value is not allowed, the policy is not shown in the all-purpose cluster creation form. If the job value is not allowed, the policy is not shown in the job new cluster form. |

Array attributes

You can specify policies for array attributes in two ways:

  • Generic limitations for all array elements. These limitations use the * wildcard symbol in the policy path.

  • Specific limitations for an array element at a specific index. These limitations use a number in the path, as shown in the sketch below.
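
A sketch combining both forms, assuming init scripts stored in Google Cloud Storage (the init_scripts attribute of the Clusters API; the bucket paths are placeholders). The wildcard rule constrains every script to one bucket, while the indexed rule pins the first script:

{
  "init_scripts.*.gcs.destination": {
    "type": "regex",
    "pattern": "gs://my-scripts-bucket/.*"
  },
  "init_scripts.0.gcs.destination": {
    "type": "fixed",
    "value": "gs://my-scripts-bucket/setup.sh"
  }
}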

Cluster policy examples

General cluster policy

A general-purpose cluster policy meant to guide users and restrict some functionality, while requiring tags, restricting the maximum number of instances, and enforcing a timeout.

{
  "spark_conf.spark.databricks.cluster.profile": {
    "type": "fixed",
    "value": "serverless",
    "hidden": true
  },
  "instance_pool_id": {
    "type": "forbidden",
    "hidden": true
  },
  "spark_version": {
    "type": "regex",
    "pattern": "12\\.[0-9]+\\.x-scala.*"
  },
  "node_type_id": {
    "type": "allowlist",
    "values": [
      "n2-highmem-4",
      "n2-highmem-8",
      "n2-highmem-16"
    ],
    "defaultValue": "n2-highmem-4"
  },
  "driver_node_type_id": {
    "type": "fixed",
    "value": "n2-highmem-8",
    "hidden": true
  },
  "autoscale.min_workers": {
    "type": "fixed",
    "value": 1,
    "hidden": true
  },
  "autoscale.max_workers": {
    "type": "range",
    "maxValue": 25,
    "defaultValue": 5
  },
  "autotermination_minutes": {
    "type": "fixed",
    "value": 30,
    "hidden": true
  },
  "custom_tags.team": {
    "type": "fixed",
    "value": "product"
  }
}

Define limits on Delta Live Tables pipeline clusters

Note

When using cluster policies to configure Delta Live Tables clusters, Databricks recommends applying a single policy to both the default and maintenance clusters.

To configure a cluster policy for a pipeline cluster, create a policy with the cluster_type field set to dlt. The following example creates a minimal policy for a Delta Live Tables cluster:

{
  "cluster_type": {
    "type": "fixed",
    "value": "dlt"
  },
  "num_workers": {
    "type": "unlimited",
    "defaultValue": 3,
    "isOptional": true
  },
  "node_type_id": {
    "type": "unlimited",
    "isOptional": true
  },
  "spark_version": {
    "type": "unlimited",
    "hidden": true
  }
}

Simple medium-sized policy

Allows users to create a medium-sized cluster with minimal configuration. The only required field at creation time is cluster name; the rest is fixed and hidden.

{
  "instance_pool_id": {
    "type": "forbidden",
    "hidden": true
  },
  "spark_conf.spark.databricks.cluster.profile": {
    "type": "forbidden",
    "hidden": true
  },
  "autoscale.min_workers": {
    "type": "fixed",
    "value": 1,
    "hidden": true
  },
  "autoscale.max_workers": {
    "type": "fixed",
    "value": 10,
    "hidden": true
  },
  "autotermination_minutes": {
    "type": "fixed",
    "value": 60,
    "hidden": true
  },
  "node_type_id": {
    "type": "fixed",
    "value": "n2-highmem-4",
    "hidden": true
  },
  "driver_node_type_id": {
    "type": "fixed",
    "value": "i3.xlarge",
    "hidden": true
  },
  "spark_version": {
    "type": "fixed",
    "value": "auto:latest-ml",
    "hidden": true
  },
  "custom_tags.team": {
    "type": "fixed",
    "value": "product"
  }
}

Job-only policy

Allows users to create job clusters and run jobs using the cluster. Users cannot create an all-purpose cluster using this policy.

{
  "cluster_type": {
    "type": "fixed",
    "value": "job"
  },
  "dbus_per_hour": {
    "type": "range",
    "maxValue": 100
  },
  "instance_pool_id": {
    "type": "forbidden",
    "hidden": true
  },
  "num_workers": {
    "type": "range",
    "minValue": 1
  },
  "node_type_id": {
    "type": "regex",
    "pattern": "[na][1-2]d?-(?:standard|highmem)-[0-96]"
  },
  "driver_node_type_id": {
    "type": "regex",
    "pattern": "[na][1-2]d?-(?:standard|highmem)-[0-96]"
  },
  "spark_version": {
    "type": "unlimited",
    "defaultValue": "auto:latest-lts"
  },
  "custom_tags.team": {
    "type": "fixed",
    "value": "product"
  }
}

External metastore policy

Allows users to create a cluster with an admin-defined metastore already attached. This is useful to allow users to create their own clusters without requiring additional configuration.

{
  "spark_conf.spark.hadoop.javax.jdo.option.ConnectionURL": {
      "type": "fixed",
      "value": "jdbc:sqlserver://<jdbc-url>"
  },
  "spark_conf.spark.hadoop.javax.jdo.option.ConnectionDriverName": {
      "type": "fixed",
      "value": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
  },
  "spark_conf.spark.databricks.delta.preview.enabled": {
      "type": "fixed",
      "value": "true"
  },
  "spark_conf.spark.hadoop.javax.jdo.option.ConnectionUserName": {
      "type": "fixed",
      "value": "<metastore-user>"
  },
  "spark_conf.spark.hadoop.javax.jdo.option.ConnectionPassword": {
      "type": "fixed",
      "value": "<metastore-password>"
  }
}