Secrets
A secret is a key-value pair that stores secret material, with a key name unique within a secret scope. Each scope is limited to 1000 secrets. The maximum allowed secret value size is 128 KB. To access secrets using Databricks Utilities, see Secrets utility (dbutils.secrets).
See also the Secrets API.
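Because the 128 KB limit applies to the stored secret value, it can be worth checking a candidate value's byte length before calling the API. A minimal local sketch (the helper name is hypothetical; only the 128 KB limit comes from this page):

```python
# Databricks limits a secret value to 128 KB; check the UTF-8 byte
# length of a candidate value before attempting to store it.
MAX_SECRET_BYTES = 128 * 1024

def fits_secret_limit(value: str) -> bool:
    """Return True if the value fits within the 128 KB secret limit."""
    return len(value.encode("utf-8")) <= MAX_SECRET_BYTES

print(fits_secret_limit("hunter2"))                    # True
print(fits_secret_limit("x" * (MAX_SECRET_BYTES + 1)))  # False
```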
Create a secret
Secret names are case insensitive.
Create a secret in a Databricks-backed scope
To create a secret in a Databricks-backed scope using the Databricks CLI (version 0.205 and above):
databricks secrets put-secret --json '{
"scope": "<scope-name>",
"key": "<key-name>",
"string_value": "<secret>"
}'
If you are creating a multi-line secret, you can pass the secret using standard input. For example:
(cat << EOF
this
is
a
multi
line
secret
EOF
) | databricks secrets put-secret <secret_scope> <secret_key>
You can also provide a secret from a file. For more information about writing secrets, see What is the Databricks CLI?.
List secrets
To list secrets in a given scope:
databricks secrets list-secrets <scope-name>
The response displays metadata about the secrets, such as their key names. You can also use the Secrets utility (dbutils.secrets) in a notebook or job to list this metadata. For example:
dbutils.secrets.list('my-scope')
Read a secret
You create secrets using the REST API or CLI, but you must use the Secrets utility (dbutils.secrets) in a notebook or job to read a secret.
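In a notebook, reading a secret is a call to dbutils.secrets.get with a scope and key. The sketch below shows the call shape; because dbutils is provided only by the Databricks runtime, a stand-in object is defined here so the example runs locally, and the scope, key, and value are placeholders:

```python
# On Databricks, `dbutils` is provided by the runtime and you would
# simply call: dbutils.secrets.get(scope="my-scope", key="my-key")
# The stand-in classes below exist only so this sketch runs outside
# Databricks; they are not part of any Databricks API.
class _FakeSecrets:
    def __init__(self, store):
        self._store = store

    def get(self, scope, key):
        # Mirrors the signature of dbutils.secrets.get
        return self._store[(scope, key)]

class _FakeDbutils:
    def __init__(self, store):
        self.secrets = _FakeSecrets(store)

dbutils = _FakeDbutils({("my-scope", "my-key"): "s3cr3t"})
password = dbutils.secrets.get(scope="my-scope", key="my-key")
print(password)  # s3cr3t (in a real notebook, output is redacted)
```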
Delete a secret
To delete a secret from a scope with the Databricks CLI:
databricks secrets delete-secret <scope-name> <key-name>
You can also use the Secrets API.
Use a secret in a Spark configuration property or environment variable
Preview
This feature is in Public Preview.
You can reference a secret in a Spark configuration property or environment variable. Retrieved secrets are redacted from notebook output and Spark driver and executor logs.
Important
Keep the following security implications in mind when referencing secrets in a Spark configuration property or environment variable:
If table access control is not enabled on a cluster, any user with Can Attach To permissions on a cluster or Run permissions on a notebook can read Spark configuration properties from within the notebook. This includes users who do not have direct permission to read a secret. Databricks recommends enabling table access control on all clusters or managing access to secrets using secret scopes.
Even when table access control is enabled, users with Can Attach To permissions on a cluster or Run permissions on a notebook can read cluster environment variables from within the notebook. Databricks does not recommend storing secrets in cluster environment variables if they must not be available to all users on the cluster.
Secrets are not redacted from the Spark driver log stdout and stderr streams. To protect sensitive data, by default, Spark driver logs are viewable only by users with CAN MANAGE permission on job, single user access mode, and shared access mode clusters. To allow users with CAN ATTACH TO or CAN RESTART permission to view the logs on these clusters, set the following Spark configuration property in the cluster configuration: spark.databricks.acl.needAdminPermissionToViewLogs false. On No Isolation Shared access mode clusters, the Spark driver logs can be viewed by users with CAN ATTACH TO or CAN MANAGE permission. To limit who can read the logs to only users with the CAN MANAGE permission, set spark.databricks.acl.needAdminPermissionToViewLogs to true.
Requirements and limitations
The following requirements and limitations apply to referencing secrets in Spark configuration properties and environment variables:
Cluster owners must have CAN READ permission on the secret scope.
Only cluster owners can add a reference to a secret in a Spark configuration property or environment variable and edit the existing scope and name. Owners change a secret value using the Secrets API. You must restart your cluster to fetch the updated secret.
Users with the CAN MANAGE permission on the cluster can delete a secret Spark configuration property or environment variable.
Syntax for referencing secrets in a Spark configuration property or environment variable
You can refer to a secret using any valid variable name or Spark configuration property. Databricks enables special behavior for variables referencing secrets based on the syntax of the value being set, not the variable name.
The value of the Spark configuration property or environment variable must be {{secrets/<scope-name>/<secret-name>}}: it must start with {{secrets/ and end with }}.
The variable portions of the Spark configuration property or environment variable are:
<scope-name>: The name of the scope the secret is associated with.
<secret-name>: The unique name of the secret in the scope.
For example, {{secrets/scope1/key1}}.
Note
Do not include spaces between the curly brackets. If there are spaces, they are treated as part of the scope or secret name.
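To make the strictness of this syntax concrete, the hypothetical helper below (not a Databricks API) accepts a value only when it exactly matches the {{secrets/<scope-name>/<secret-name>}} pattern; a space after the opening brackets breaks the required {{secrets/ prefix, so the value is not treated as a reference:

```python
import re

# Matches exactly {{secrets/<scope-name>/<secret-name>}}.
# A stray space after "{{" breaks the required "{{secrets/" prefix;
# spaces inside the scope or secret portion are kept as part of the name.
_SECRET_REF = re.compile(r"^\{\{secrets/([^/{}]+)/([^/{}]+)\}\}$")

def parse_secret_ref(value: str):
    """Return (scope, secret) if value is a secret reference, else None."""
    m = _SECRET_REF.match(value)
    return (m.group(1), m.group(2)) if m else None

print(parse_secret_ref("{{secrets/scope1/key1}}"))    # ('scope1', 'key1')
print(parse_secret_ref("{{ secrets/scope1/key1 }}"))  # None
```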
Reference a secret with a Spark configuration property
You specify a reference to a secret in a Spark configuration property in the following format:
spark.<property-name> {{secrets/<scope-name>/<secret-name>}}
Any Spark configuration property <property-name> can reference a secret. Each Spark configuration property can reference only one secret, but you can configure multiple Spark properties to reference secrets.
For example:
You set a Spark configuration to reference a secret:
spark.password {{secrets/scope1/key1}}
To fetch the secret in the notebook and use it:
spark.conf.get("spark.password")
SELECT ${spark.password};
Reference a secret in an environment variable
You specify a secret path in an environment variable in the following format:
<variable-name>={{secrets/<scope-name>/<secret-name>}}
You can use any valid variable name when you reference a secret. Access to secrets referenced in environment variables is determined by the permissions of the user who configured the cluster. Secrets stored in environment variables are accessible to all users of the cluster, but are redacted from plaintext display like secrets referenced elsewhere.
Environment variables that reference secrets are accessible from a cluster-scoped init script. See Set and use environment variables with init scripts.
For example:
You set an environment variable to reference a secret:
SPARKPASSWORD={{secrets/scope1/key1}}
To fetch the secret in an init script, access $SPARKPASSWORD using the following pattern:
if [ -n "$SPARKPASSWORD" ]; then
# code to use ${SPARKPASSWORD}
fi
Manage secrets permissions
This section describes how to manage secret access control using the Databricks CLI (version 0.205 and above). You can also use the Secrets API or the Databricks Terraform provider. For secret permission levels, see Secret ACLs.
Create a secret ACL
To create a secret ACL for a given secret scope using the Databricks CLI:
databricks secrets put-acl <scope-name> <principal> <permission>
Making a put request for a principal that already has an applied permission overwrites the existing permission level.
The principal field specifies an existing Databricks principal. A user is specified using their email address, a service principal using its applicationId value, and a group using its group name. For more information, see Principal.
View secret ACLs
To view all secret ACLs for a given secret scope:
databricks secrets list-acls <scope-name>
To get the secret ACL applied to a principal for a given secret scope:
databricks secrets get-acl <scope-name> <principal>
If no ACL exists for the given principal and scope, this request will fail.