Databricks Utilities
Databricks Utilities (dbutils) make it easy to perform powerful combinations of tasks. You can use the utilities to work with object storage efficiently, to chain and parameterize notebooks, and to work with secrets. dbutils is not supported outside of notebooks.
Important
Calling dbutils inside of executors can produce unexpected results. To learn more about limitations of dbutils and alternatives that could be used instead, see Limitations.
dbutils utilities are available in Python, R, and Scala notebooks.
How to: List utilities, list commands, display command help
Utilities: data, fs, jobs, library, notebook, secrets, widgets
List available utilities
To list available utilities along with a short description for each utility, run dbutils.help() for Python or Scala.
This example lists available commands for the Databricks Utilities.
dbutils.help()
This module provides various utilities for users to interact with the rest of Databricks.
fs: DbfsUtils -> Manipulates the Databricks filesystem (DBFS) from the console
jobs: JobsUtils -> Utilities for leveraging jobs features
library: LibraryUtils -> Utilities for session isolated libraries
notebook: NotebookUtils -> Utilities for the control flow of a notebook (EXPERIMENTAL)
secrets: SecretUtils -> Provides utilities for leveraging secrets within notebooks
widgets: WidgetsUtils -> Methods to create and get bound value of input widgets inside notebooks
List available commands for a utility
To list available commands for a utility along with a short description of each command, run .help() after the programmatic name for the utility.
This example lists available commands for the Databricks File System (DBFS) utility.
dbutils.fs.help()
dbutils.fs provides utilities for working with FileSystems. Most methods in this package can take either a DBFS path (e.g., "/foo" or "dbfs:/foo"), or another FileSystem URI. For more info about a method, use dbutils.fs.help("methodName"). In notebooks, you can also use the %fs shorthand to access DBFS. The %fs shorthand maps straightforwardly onto dbutils calls. For example, "%fs head --maxBytes=10000 /file/path" translates into "dbutils.fs.head("/file/path", maxBytes = 10000)".
fsutils
cp(from: String, to: String, recurse: boolean = false): boolean -> Copies a file or directory, possibly across FileSystems
head(file: String, maxBytes: int = 65536): String -> Returns up to the first 'maxBytes' bytes of the given file as a String encoded in UTF-8
ls(dir: String): Seq -> Lists the contents of a directory
mkdirs(dir: String): boolean -> Creates the given directory if it does not exist, also creating any necessary parent directories
mv(from: String, to: String, recurse: boolean = false): boolean -> Moves a file or directory, possibly across FileSystems
put(file: String, contents: String, overwrite: boolean = false): boolean -> Writes the given String out to a file, encoded in UTF-8
rm(dir: String, recurse: boolean = false): boolean -> Removes a file or directory
mount
mount(source: String, mountPoint: String, encryptionType: String = "", owner: String = null, extraConfigs: Map = Map.empty[String, String]): boolean -> Mounts the given source directory into DBFS at the given mount point
mounts: Seq -> Displays information about what is mounted within DBFS
refreshMounts: boolean -> Forces all machines in this cluster to refresh their mount cache, ensuring they receive the most recent information
unmount(mountPoint: String): boolean -> Deletes a DBFS mount point
updateMount(source: String, mountPoint: String, encryptionType: String = "", owner: String = null, extraConfigs: Map = Map.empty[String, String]): boolean -> Similar to mount(), but updates an existing mount point instead of creating a new one
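As a rough illustration of the %fs-to-dbutils mapping described in the help text above, the translation can be sketched as a small parser. This is not Databricks' actual implementation, just a sketch of the documented correspondence:

```python
# Rough sketch of how a "%fs" line maps onto a dbutils.fs call, per the
# mapping described in the help text. Not Databricks' actual parser.
def translate_fs_magic(line: str) -> str:
    tokens = line.split()
    assert tokens[0] == "%fs"
    command = tokens[1]
    kwargs, args = [], []
    for tok in tokens[2:]:
        if tok.startswith("--"):
            # "--maxBytes=10000" becomes the keyword argument "maxBytes = 10000"
            name, value = tok[2:].split("=", 1)
            kwargs.append(f"{name} = {value}")
        else:
            # bare tokens such as paths become quoted positional arguments
            args.append(f'"{tok}"')
    return f'dbutils.fs.{command}({", ".join(args + kwargs)})'

print(translate_fs_magic("%fs head --maxBytes=10000 /file/path"))
# dbutils.fs.head("/file/path", maxBytes = 10000)
```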
Display help for a command
To display help for a command, run .help("<command-name>") after the command name.
This example displays help for the DBFS copy command.
dbutils.fs.help("cp")
/**
* Copies a file or directory, possibly across FileSystems.
*
* Example: cp("/mnt/my-folder/a", "dbfs:/a/b")
*
* @param from FileSystem URI of the source file or directory
* @param to FileSystem URI of the destination file or directory
* @param recurse if true, all files and directories will be recursively copied
* @return true if all files were successfully copied
*/
cp(from: java.lang.String, to: java.lang.String, recurse: boolean = false): boolean
Data utility (dbutils.data)
Preview
This feature is in Public Preview.
Note
Available in Databricks Runtime 9.0 and above.
Commands: summarize
The data utility allows you to understand and interpret datasets. To list the available commands, run dbutils.data.help().
dbutils.data provides utilities for understanding and interpreting datasets. This module is currently in preview and may be unstable. For more info about a method, use dbutils.data.help("methodName").
summarize(df: Object, precise: boolean): void -> Summarize a Spark DataFrame and visualize the statistics to get quick insights
summarize command (dbutils.data.summarize)
Calculates and displays summary statistics of an Apache Spark DataFrame or pandas DataFrame. This command is available for Python, Scala and R.
To display help for this command, run dbutils.data.help("summarize").
In Databricks Runtime 10.1 and above, you can use the additional precise parameter to adjust the precision of the computed statistics.
Note
This feature is in Public Preview.
When precise is set to false (the default), some returned statistics include approximations to reduce run time:
The number of distinct values for categorical columns may have ~5% relative error for high-cardinality columns.
The frequent value counts may have an error of up to 0.01% when the number of distinct values is greater than 10000.
The histograms and percentile estimates may have an error of up to 0.01% relative to the total number of rows.
When precise is set to true, the statistics are computed with higher precision. All statistics except for the histograms and percentiles for numeric columns are now exact:
The histograms and percentile estimates may have an error of up to 0.0001% relative to the total number of rows.
The tooltip at the top of the data summary output indicates the mode of the current run.
This example displays summary statistics for an Apache Spark DataFrame with approximations enabled by default. To see the results, run this command in a notebook. This example is based on Sample datasets.
Python:
df = spark.read.format('csv').load(
  '/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv',
  header=True,
  inferSchema=True
)
dbutils.data.summarize(df)

R:
df <- read.df("/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv", source = "csv", header="true", inferSchema = "true")
dbutils.data.summarize(df)

Scala:
val df = spark.read.format("csv")
  .option("inferSchema", "true")
  .option("header", "true")
  .load("/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv")
dbutils.data.summarize(df)
Note that the visualization uses SI notation to concisely render numerical values smaller than 0.01 or larger than 10000. As an example, the numerical value 1.25e-15 will be rendered as 1.25f. One exception: the visualization uses "B" for 1.0e9 (giga) instead of "G".
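As a hedged illustration of the rendering rule just described (not the actual visualization code; the suffix table beyond the documented "f" and "B" cases is an assumption):

```python
from math import floor, log10

# Illustrative sketch of the SI-notation rendering rule described above.
# The "B" entry replaces the usual "G" for giga, per the documented exception.
SUFFIXES = {
    -15: "f", -12: "p", -9: "n", -6: "u", -3: "m",
    3: "k", 6: "M", 9: "B", 12: "T",
}

def render_si(value: float) -> str:
    if value == 0 or 0.01 <= abs(value) <= 10000:
        return str(value)  # small/medium magnitudes are rendered as-is
    exp3 = 3 * floor(log10(abs(value)) / 3)  # nearest lower multiple of 3
    return f"{value / 10 ** exp3:g}{SUFFIXES[exp3]}"

print(render_si(1.25e-15))  # 1.25f
print(render_si(1.0e9))     # 1B
```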
File system utility (dbutils.fs)
Warning
The Python implementation of all dbutils.fs methods uses snake_case rather than camelCase for keyword formatting.
For example: while dbutils.fs.help() displays the option extraConfigs for dbutils.fs.mount(), in Python you would use the keyword extra_configs.
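The camelCase-to-snake_case correspondence can be illustrated with a small helper (the conversion function is for illustration only; the authoritative keyword names come from dbutils.fs.help() itself):

```python
import re

# Illustrates the documented naming convention: Scala-style camelCase keywords
# (as shown by dbutils.fs.help()) become snake_case in the Python API.
def to_snake_case(camel: str) -> str:
    # insert "_" before each interior uppercase letter, then lowercase
    return re.sub(r"(?<!^)(?=[A-Z])", "_", camel).lower()

print(to_snake_case("extraConfigs"))    # extra_configs
print(to_snake_case("encryptionType"))  # encryption_type
print(to_snake_case("mountPoint"))      # mount_point
```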
Commands: cp, head, ls, mkdirs, mount, mounts, mv, put, refreshMounts, rm, unmount, updateMount
The file system utility allows you to access DBFS (see What is the Databricks File System (DBFS)?), making it easier to use Databricks as a file system. To list the available commands, run dbutils.fs.help().
dbutils.fs provides utilities for working with FileSystems. Most methods in this package can take either a DBFS path (e.g., "/foo" or "dbfs:/foo"), or another FileSystem URI. For more info about a method, use dbutils.fs.help("methodName"). In notebooks, you can also use the %fs shorthand to access DBFS. The %fs shorthand maps straightforwardly onto dbutils calls. For example, "%fs head --maxBytes=10000 /file/path" translates into "dbutils.fs.head("/file/path", maxBytes = 10000)".
fsutils
cp(from: String, to: String, recurse: boolean = false): boolean -> Copies a file or directory, possibly across FileSystems
head(file: String, maxBytes: int = 65536): String -> Returns up to the first 'maxBytes' bytes of the given file as a String encoded in UTF-8
ls(dir: String): Seq -> Lists the contents of a directory
mkdirs(dir: String): boolean -> Creates the given directory if it does not exist, also creating any necessary parent directories
mv(from: String, to: String, recurse: boolean = false): boolean -> Moves a file or directory, possibly across FileSystems
put(file: String, contents: String, overwrite: boolean = false): boolean -> Writes the given String out to a file, encoded in UTF-8
rm(dir: String, recurse: boolean = false): boolean -> Removes a file or directory
mount
mount(source: String, mountPoint: String, encryptionType: String = "", owner: String = null, extraConfigs: Map = Map.empty[String, String]): boolean -> Mounts the given source directory into DBFS at the given mount point
mounts: Seq -> Displays information about what is mounted within DBFS
refreshMounts: boolean -> Forces all machines in this cluster to refresh their mount cache, ensuring they receive the most recent information
unmount(mountPoint: String): boolean -> Deletes a DBFS mount point
updateMount(source: String, mountPoint: String, encryptionType: String = "", owner: String = null, extraConfigs: Map = Map.empty[String, String]): boolean -> Similar to mount(), but updates an existing mount point instead of creating a new one
cp command (dbutils.fs.cp)
Copies a file or directory, possibly across filesystems.
To display help for this command, run dbutils.fs.help("cp").
This example copies the file named old_file.txt from /FileStore to /tmp/new, renaming the copied file to new_file.txt.
Python:
dbutils.fs.cp("/FileStore/old_file.txt", "/tmp/new/new_file.txt")
# Out[4]: True

R:
dbutils.fs.cp("/FileStore/old_file.txt", "/tmp/new/new_file.txt")
# [1] TRUE

Scala:
dbutils.fs.cp("/FileStore/old_file.txt", "/tmp/new/new_file.txt")
// res3: Boolean = true
head command (dbutils.fs.head)
Returns up to the specified maximum number of bytes of the given file. The bytes are returned as a UTF-8 encoded string.
To display help for this command, run dbutils.fs.help("head").
This example displays the first 25 bytes of the file my_file.txt located in /tmp.
Python:
dbutils.fs.head("/tmp/my_file.txt", 25)
# [Truncated to first 25 bytes]
# Out[12]: 'Apache Spark is awesome!\n'

R:
dbutils.fs.head("/tmp/my_file.txt", 25)
# [1] "Apache Spark is awesome!\n"

Scala:
dbutils.fs.head("/tmp/my_file.txt", 25)
// [Truncated to first 25 bytes]
// res4: String =
// "Apache Spark is awesome!
// "
ls command (dbutils.fs.ls)
Lists the contents of a directory.
To display help for this command, run dbutils.fs.help("ls").
This example displays information about the contents of /tmp. The modificationTime field is available in Databricks Runtime 10.2 and above. In R, modificationTime is returned as a string.
Python:
dbutils.fs.ls("/tmp")
# Out[13]: [FileInfo(path='dbfs:/tmp/my_file.txt', name='my_file.txt', size=40, modificationTime=1622054945000)]

R:
dbutils.fs.ls("/tmp")
# For prettier results from dbutils.fs.ls(<dir>), please use `%fs ls <dir>`
# [[1]]
# [[1]]$path
# [1] "dbfs:/tmp/my_file.txt"
# [[1]]$name
# [1] "my_file.txt"
# [[1]]$size
# [1] 40
# [[1]]$isDir
# [1] FALSE
# [[1]]$isFile
# [1] TRUE
# [[1]]$modificationTime
# [1] "1622054945000"

Scala:
dbutils.fs.ls("/tmp")
// res6: Seq[com.databricks.backend.daemon.dbutils.FileInfo] = WrappedArray(FileInfo(dbfs:/tmp/my_file.txt, my_file.txt, 40, 1622054945000))
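The modificationTime value in the sample output above is a Unix timestamp in milliseconds. As a plain-Python aside (using the doc's sample value), it can be converted to a readable UTC time:

```python
from datetime import datetime, timezone

# modificationTime from the sample ls output above, in epoch milliseconds,
# converted to a timezone-aware UTC datetime.
mtime_ms = 1622054945000
dt = datetime.fromtimestamp(mtime_ms / 1000, tz=timezone.utc)
print(dt.isoformat())  # 2021-05-26T18:49:05+00:00
```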
mkdirs command (dbutils.fs.mkdirs)
Creates the given directory if it does not exist. Also creates any necessary parent directories.
To display help for this command, run dbutils.fs.help("mkdirs").
This example creates the directory structure /parent/child/grandchild within /tmp.
Python:
dbutils.fs.mkdirs("/tmp/parent/child/grandchild")
# Out[15]: True

R:
dbutils.fs.mkdirs("/tmp/parent/child/grandchild")
# [1] TRUE

Scala:
dbutils.fs.mkdirs("/tmp/parent/child/grandchild")
// res7: Boolean = true
mount command (dbutils.fs.mount)
Mounts the specified source directory into DBFS at the specified mount point.
To display help for this command, run dbutils.fs.help("mount").
Python:
bucket_name = "my-bucket"
mount_name = "gs-my-bucket"
dbutils.fs.mount("gs://%s" % bucket_name, "/mnt/%s" % mount_name)

Scala:
val BucketName = "my-bucket"
val MountName = "gs-my-bucket"
dbutils.fs.mount(s"gs://$BucketName", s"/mnt/$MountName")
For additional code examples, see Google Cloud Storage.
mounts command (dbutils.fs.mounts)
Displays information about what is currently mounted within DBFS.
To display help for this command, run dbutils.fs.help("mounts").
Warning
Call dbutils.fs.refreshMounts() on all other running clusters to propagate the new mount. See refreshMounts command (dbutils.fs.refreshMounts).
dbutils.fs.mounts()
For additional code examples, see Google Cloud Storage.
mv command (dbutils.fs.mv)
Moves a file or directory, possibly across filesystems. A move is a copy followed by a delete, even for moves within filesystems.
To display help for this command, run dbutils.fs.help("mv").
This example moves the file my_file.txt from /FileStore to /tmp/parent/child/grandchild.
Python:
dbutils.fs.mv("/FileStore/my_file.txt", "/tmp/parent/child/grandchild")
# Out[2]: True

R:
dbutils.fs.mv("/FileStore/my_file.txt", "/tmp/parent/child/grandchild")
# [1] TRUE

Scala:
dbutils.fs.mv("/FileStore/my_file.txt", "/tmp/parent/child/grandchild")
// res1: Boolean = true
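As a plain-Python analogy of the copy-then-delete semantics described above (this uses the local filesystem, not DBFS, and the file names are illustrative):

```python
import os
import shutil
import tempfile

# Local-filesystem analogy of dbutils.fs.mv: a move is a copy followed by
# a delete, even when source and destination are on the same filesystem.
def mv(src: str, dst: str) -> bool:
    shutil.copyfile(src, dst)  # 1. copy the file to the destination
    os.remove(src)             # 2. delete the original
    return True

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "my_file.txt")
dst = os.path.join(workdir, "moved.txt")
with open(src, "w") as f:
    f.write("Apache Spark is awesome!\n")

print(mv(src, dst))          # True
print(os.path.exists(src))   # False: the original is gone
print(os.path.exists(dst))   # True: the copy remains
```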
put command (dbutils.fs.put)
Writes the specified string to a file. The string is UTF-8 encoded.
To display help for this command, run dbutils.fs.help("put").
This example writes the string Hello, Databricks! to a file named hello_db.txt in /tmp. If the file exists, it will be overwritten.
Python:
dbutils.fs.put("/tmp/hello_db.txt", "Hello, Databricks!", True)
# Wrote 18 bytes.
# Out[6]: True

R:
dbutils.fs.put("/tmp/hello_db.txt", "Hello, Databricks!", TRUE)
# [1] TRUE

Scala:
dbutils.fs.put("/tmp/hello_db.txt", "Hello, Databricks!", true)
// Wrote 18 bytes.
// res2: Boolean = true
refreshMounts command (dbutils.fs.refreshMounts)
Forces all machines in the cluster to refresh their mount cache, ensuring they receive the most recent information.
To display help for this command, run dbutils.fs.help("refreshMounts").
dbutils.fs.refreshMounts()
For additional code examples, see Google Cloud Storage.
rm command (dbutils.fs.rm)
Removes a file or directory.
To display help for this command, run dbutils.fs.help("rm").
This example removes the file named hello_db.txt in /tmp.
Python:
dbutils.fs.rm("/tmp/hello_db.txt")
# Out[8]: True

R:
dbutils.fs.rm("/tmp/hello_db.txt")
# [1] TRUE

Scala:
dbutils.fs.rm("/tmp/hello_db.txt")
// res6: Boolean = true
unmount command (dbutils.fs.unmount)
Deletes a DBFS mount point.
Warning
To avoid errors, never modify a mount point while other jobs are reading or writing to it. After modifying a mount, always run dbutils.fs.refreshMounts() on all other running clusters to propagate any mount updates. See refreshMounts command (dbutils.fs.refreshMounts).
To display help for this command, run dbutils.fs.help("unmount").
dbutils.fs.unmount("/mnt/<mount-name>")
For additional code examples, see Google Cloud Storage.
updateMount command (dbutils.fs.updateMount)
Similar to the dbutils.fs.mount command, but updates an existing mount point instead of creating a new one. Returns an error if the mount point is not present.
To display help for this command, run dbutils.fs.help("updateMount").
Warning
To avoid errors, never modify a mount point while other jobs are reading or writing to it. After modifying a mount, always run dbutils.fs.refreshMounts() on all other running clusters to propagate any mount updates. See refreshMounts command (dbutils.fs.refreshMounts).
This command is available in Databricks Runtime 10.2 and above.
Python:
bucket_name = "my-bucket"
mount_name = "gs-my-bucket"
dbutils.fs.updateMount("gs://%s" % bucket_name, "/mnt/%s" % mount_name)

Scala:
val BucketName = "my-bucket"
val MountName = "gs-my-bucket"
dbutils.fs.updateMount(s"gs://$BucketName", s"/mnt/$MountName")
Jobs utility (dbutils.jobs)
Subutilities: taskValues
Note
Available in Databricks Runtime 7.3 and above.
This utility is available only for Python.
The jobs utility allows you to leverage jobs features. To display help for this utility, run dbutils.jobs.help().
Provides utilities for leveraging jobs features.
taskValues: TaskValuesUtils -> Provides utilities for leveraging job task values
taskValues subutility (dbutils.jobs.taskValues)
Note
Available in Databricks Runtime 7.3 and above.
This subutility is available only for Python.
Provides commands for leveraging job task values.
Use this subutility to set and get arbitrary values during a job run. These values are called task values. You can access task values in downstream tasks in the same job run. For example, you can communicate identifiers or metrics, such as information about the evaluation of a machine learning model, between different tasks within a job run. Each task can set multiple task values, get them, or both. Each task value has a unique key within the same task, known as the task value's key. A task value is accessed with the task name and the task value's key.
To display help for this subutility, run dbutils.jobs.taskValues.help().
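The addressing scheme described above (a task value is identified by the task name plus the key) can be modeled in plain Python; this illustrates the semantics only, and the task names are hypothetical:

```python
# Plain-Python model of the task-values semantics described above:
# each value is addressed by (task name, key), so two tasks may use
# the same key without colliding.
task_values: dict[tuple[str, str], object] = {}

def set_value(task_key: str, key: str, value: object) -> None:
    task_values[(task_key, key)] = value

def get_value(task_key: str, key: str, default: object) -> object:
    return task_values.get((task_key, key), default)

set_value("ingest", "row-count", 1024)  # hypothetical task "ingest" sets a metric
set_value("train", "row-count", 512)    # same key, different task: no clash

print(get_value("ingest", "row-count", 0))  # 1024
print(get_value("train", "row-count", 0))   # 512
print(get_value("train", "auc", 0.0))       # 0.0 (key not set, default used)
```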
get command (dbutils.jobs.taskValues.get)
Note
Available in Databricks Runtime 7.3 and above.
This command is available only for Python.
On Databricks Runtime 10.4 and earlier, if get cannot find the task, a Py4JJavaError is raised instead of a ValueError.
Gets the contents of the specified task value for the specified task in the current job run.
To display help for this command, run dbutils.jobs.taskValues.help("get").
For example:
dbutils.jobs.taskValues.get(taskKey = "my-task", \
key = "my-key", \
default = 7, \
debugValue = 42)
In the preceding example:
taskKey is the name of the task that set the task value. If the command cannot find this task, a ValueError is raised.
key is the name of the task value's key that you set with the set command (dbutils.jobs.taskValues.set). If the command cannot find this task value's key, a ValueError is raised (unless default is specified).
default is an optional value that is returned if key cannot be found. default cannot be None.
debugValue is an optional value that is returned if you try to get the task value from within a notebook that is running outside of a job. This can be useful during debugging when you want to run your notebook manually and return some value instead of raising a TypeError by default. debugValue cannot be None.
If you try to get a task value from within a notebook that is running outside of a job, this command raises a TypeError by default. However, if the debugValue argument is specified in the command, the value of debugValue is returned instead of raising a TypeError.
set command (dbutils.jobs.taskValues.set)
Note
Available in Databricks Runtime 7.3 and above.
This command is available only for Python.
Sets or updates a task value. You can set up to 250 task values for a job run.
To display help for this command, run dbutils.jobs.taskValues.help("set").
Some examples include:
dbutils.jobs.taskValues.set(key = "my-key", \
value = 5)
dbutils.jobs.taskValues.set(key = "my-other-key", \
value = "my other value")
In the preceding examples:
key is the task value's key. This key must be unique to the task. That is, if two different tasks each set a task value with key K, these are two different task values that have the same key K.
value is the value for this task value's key. This command must be able to represent the value internally in JSON format. The size of the JSON representation of the value cannot exceed 48 KiB.
If you try to set a task value from within a notebook that is running outside of a job, this command does nothing.
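The JSON-representability and 48 KiB constraints mentioned above can be checked before calling set. This is a hedged sketch: the limit is enforced by Databricks itself, and the helper name here is illustrative:

```python
import json

# Illustrative pre-flight check for the documented task-value constraints:
# the value must be JSON-serializable and its JSON form must stay under 48 KiB.
MAX_TASK_VALUE_BYTES = 48 * 1024

def check_task_value(value) -> int:
    encoded = json.dumps(value).encode("utf-8")  # raises TypeError if not JSON-able
    if len(encoded) > MAX_TASK_VALUE_BYTES:
        raise ValueError(f"task value is {len(encoded)} bytes; limit is 48 KiB")
    return len(encoded)

print(check_task_value({"auc": 0.91, "rows": 1024}))  # small payload: fine
try:
    check_task_value("x" * (48 * 1024))  # JSON quoting pushes this over the limit
except ValueError as e:
    print("rejected:", e)
```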
Library utility (dbutils.library)
Note
The library utility is deprecated.
Commands: install, installPyPI, list, restartPython, updateCondaEnv
The library utility allows you to install Python libraries and create an environment scoped to a notebook session. The libraries are available both on the driver and on the executors, so you can reference them in user defined functions. This enables:
Library dependencies of a notebook to be organized within the notebook itself.
Notebook users with different library dependencies to share a cluster without interference.
Detaching a notebook destroys this environment. However, you can recreate it by re-running the library install API commands in the notebook. See the restartPython API for how you can reset your notebook state without losing your environment.
Important
Library utilities are not available on Databricks Runtime ML. Instead, see Notebook-scoped Python libraries.
Databricks recommends using %pip magic commands to install notebook-scoped libraries. See Notebook-scoped Python libraries.
Library utilities are enabled by default. Therefore, by default the Python environment for each notebook is isolated by using a separate Python executable that is created when the notebook is attached to the cluster and that inherits the cluster's default Python environment. Libraries installed through an init script into the Databricks Python environment are still available. You can disable this feature by setting spark.databricks.libraryIsolation.enabled to false.
This API is compatible with the existing cluster-wide library installation through the UI and Libraries API. Libraries installed through this API have higher priority than cluster-wide libraries.
To list the available commands, run dbutils.library.help().
install(path: String): boolean -> Install the library within the current notebook session
installPyPI(pypiPackage: String, version: String = "", repo: String = "", extras: String = ""): boolean -> Install the PyPI library within the current notebook session
list: List -> List the isolated libraries added for the current notebook session via dbutils
restartPython: void -> Restart python process for the current notebook session
updateCondaEnv(envYmlContent: String): boolean -> Update the current notebook's Conda environment based on the specification (content of environment.yml)
install command (dbutils.library.install)
Given a path to a library, installs that library within the current notebook session. Libraries installed by calling this command are available only to the current notebook.
To display help for this command, run dbutils.library.help("install").
This example installs a .egg or .whl library within a notebook.
Important
dbutils.library.install is removed in Databricks Runtime 11.0 and above.
Databricks recommends that you put all your library install commands in the first cell of your notebook and call restartPython at the end of that cell. The Python notebook state is reset after running restartPython; the notebook loses all state including but not limited to local variables, imported libraries, and other ephemeral states. Therefore, we recommend that you install libraries and reset the notebook state in the first notebook cell.
The accepted library sources are dbfs and gs.
dbutils.library.install("dbfs:/path/to/your/library.egg")
dbutils.library.restartPython() # Removes Python state, but some libraries might not work without calling this command.
dbutils.library.install("dbfs:/path/to/your/library.whl")
dbutils.library.restartPython() # Removes Python state, but some libraries might not work without calling this command.
Note
You can directly install custom wheel files using %pip. In the following example we are assuming you have uploaded your library wheel file to DBFS:
%pip install /dbfs/path/to/your/library.whl
Egg files are not supported by pip, and wheel is considered the standard for build and binary packaging for Python. See Wheel vs Egg for more details. However, if you want to use an egg file in a way that's compatible with %pip, you can use the following workaround:
# This step is only needed if no %pip commands have been run yet.
# It will trigger setting up the isolated notebook environment
%pip install <any-lib> # This doesn't need to be a real library; for example "%pip install any-lib" would work
import sys
# Assuming the preceding step was completed, the following command
# adds the egg file to the current notebook environment
sys.path.append("/local/path/to/library.egg")
installPyPI command (dbutils.library.installPyPI)
Given a Python Package Index (PyPI) package, installs that package within the current notebook session. Libraries installed by calling this command are isolated among notebooks.
To display help for this command, run dbutils.library.help("installPyPI").
This example installs a PyPI package in a notebook. version, repo, and extras are optional. Use the extras argument to specify the Extras feature (extra requirements).
dbutils.library.installPyPI("pypipackage", version="version", repo="repo", extras="extras")
dbutils.library.restartPython() # Removes Python state, but some libraries might not work without calling this command.
Important
dbutils.library.installPyPI is removed in Databricks Runtime 11.0 and above.
The version and extras keys cannot be part of the PyPI package string. For example: dbutils.library.installPyPI("azureml-sdk[databricks]==1.19.0") is not valid. Use the version and extras arguments to specify the version and extras information as follows:
dbutils.library.installPyPI("azureml-sdk", version="1.19.0", extras="databricks")
dbutils.library.restartPython() # Removes Python state, but some libraries might not work without calling this command.
Note
When replacing dbutils.library.installPyPI commands with %pip commands, the Python interpreter is automatically restarted. You can run the install command as follows:
%pip install azureml-sdk[databricks]==1.19.0
This example specifies library requirements in one notebook and installs them by using %run in the other. To do this, first define the libraries to install in a notebook. This example uses a notebook named InstallDependencies.
dbutils.library.installPyPI("torch")
dbutils.library.installPyPI("scikit-learn", version="1.19.1")
dbutils.library.installPyPI("azureml-sdk", extras="databricks")
dbutils.library.restartPython() # Removes Python state, but some libraries might not work without calling this command.
Then install them in the notebook that needs those dependencies.
%run /path/to/InstallDependencies # Install the dependencies in the first cell.
import torch
from sklearn.linear_model import LinearRegression
import azureml
...
This example resets the Python notebook state while maintaining the environment. This technique is available only in Python notebooks. For example, you can use this technique to reload libraries Databricks preinstalled with a different version:
dbutils.library.installPyPI("numpy", version="1.15.4")
dbutils.library.restartPython()
# Make sure you start using the library in another cell.
import numpy
You can also use this technique to install libraries such as tensorflow that need to be loaded on process start up:
dbutils.library.installPyPI("tensorflow")
dbutils.library.restartPython()
# Use the library in another cell.
import tensorflow
list command (dbutils.library.list)
Lists the isolated libraries added for the current notebook session through the library utility. This does not include libraries that are attached to the cluster.
To display help for this command, run dbutils.library.help("list").
This example lists the libraries installed in a notebook.
dbutils.library.list()
Note
The equivalent of this command using %pip is:
%pip freeze
restartPython command (dbutils.library.restartPython)
Restarts the Python process for the current notebook session.
To display help for this command, run dbutils.library.help("restartPython").
This example restarts the Python process for the current notebook session.
dbutils.library.restartPython() # Removes Python state, but some libraries might not work without calling this command.
updateCondaEnv command (dbutils.library.updateCondaEnv)
Updates the current notebook's Conda environment based on the contents of environment.yml. This method is supported only for Databricks Runtime on Conda.
To display help for this command, run dbutils.library.help("updateCondaEnv").
This example updates the current notebook’s Conda environment based on the contents of the provided specification.
dbutils.library.updateCondaEnv(
"""
channels:
- anaconda
dependencies:
- gensim=3.4
- nltk=3.4
""")
Notebook utility (dbutils.notebook)
The notebook utility allows you to chain together notebooks and act on their results. See Run a Databricks notebook from another notebook.
To list the available commands, run dbutils.notebook.help().
exit(value: String): void -> This method lets you exit a notebook with a value
run(path: String, timeoutSeconds: int, arguments: Map): String -> This method runs a notebook and returns its exit value.
exit command (dbutils.notebook.exit)
Exits a notebook with a value.
To display help for this command, run dbutils.notebook.help("exit").
This example exits the notebook with the value Exiting from My Other Notebook.
Python:
dbutils.notebook.exit("Exiting from My Other Notebook")
# Notebook exited: Exiting from My Other Notebook

R:
dbutils.notebook.exit("Exiting from My Other Notebook")
# Notebook exited: Exiting from My Other Notebook

Scala:
dbutils.notebook.exit("Exiting from My Other Notebook")
// Notebook exited: Exiting from My Other Notebook
Note
If the run has a query with structured streaming running in the background, calling dbutils.notebook.exit() does not terminate the run. The run will continue to execute for as long as the query is executing in the background. You can stop the query running in the background by clicking Cancel in the cell of the query or by running query.stop(). When the query stops, you can terminate the run with dbutils.notebook.exit().
run command (dbutils.notebook.run)
Runs a notebook and returns its exit value. The notebook will run in the current cluster by default.
Note
The maximum length of the string value returned from the run command is 5 MB. See Get the output for a single run (GET /jobs/runs/get-output).
To display help for this command, run dbutils.notebook.help("run").
This example runs a notebook named My Other Notebook in the same location as the calling notebook. The called notebook ends with the line of code dbutils.notebook.exit("Exiting from My Other Notebook"). If the called notebook does not finish running within 60 seconds, an exception is thrown.
Python:
dbutils.notebook.run("My Other Notebook", 60)
# Out[14]: 'Exiting from My Other Notebook'

Scala:
dbutils.notebook.run("My Other Notebook", 60)
// res2: String = Exiting from My Other Notebook
Secrets utility (dbutils.secrets)
Commands: get, getBytes, list, listScopes
The secrets utility allows you to store and access sensitive credential information without making it visible in notebooks. See Secret management and Use the secrets in a notebook. To list the available commands, run dbutils.secrets.help().
get(scope: String, key: String): String -> Gets the string representation of a secret value with scope and key
getBytes(scope: String, key: String): byte[] -> Gets the bytes representation of a secret value with scope and key
list(scope: String): Seq -> Lists secret metadata for secrets within a scope
listScopes: Seq -> Lists secret scopes
get command (dbutils.secrets.get)
Gets the string representation of a secret value for the specified secrets scope and key.
Warning
Administrators, secret creators, and users granted permission can read Databricks secrets. While Databricks makes an effort to redact secret values that might be displayed in notebooks, it is not possible to prevent such users from reading secrets. For more information, see Secret redaction.
To display help for this command, run dbutils.secrets.help("get").
This example gets the string representation of the secret value for the scope named my-scope and the key named my-key.
dbutils.secrets.get(scope="my-scope", key="my-key")
# Out[14]: '[REDACTED]'
dbutils.secrets.get(scope="my-scope", key="my-key")
# [1] "[REDACTED]"
dbutils.secrets.get(scope="my-scope", key="my-key")
// res0: String = [REDACTED]
getBytes command (dbutils.secrets.getBytes)
Gets the bytes representation of a secret value for the specified scope and key.
To display help for this command, run dbutils.secrets.help("getBytes").
This example gets the byte representation of the secret value (in this example, a1!b2@c3#) for the scope named my-scope and the key named my-key.
dbutils.secrets.getBytes(scope="my-scope", key="my-key")
# Out[1]: b'a1!b2@c3#'
dbutils.secrets.getBytes(scope="my-scope", key="my-key")
# [1] 61 31 21 62 32 40 63 33 23
dbutils.secrets.getBytes(scope="my-scope", key="my-key")
// res1: Array[Byte] = Array(97, 49, 33, 98, 50, 64, 99, 51, 35)
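The three outputs above are the same bytes in different renderings: Python shows a bytes literal, R shows hexadecimal values, and Scala shows decimal values. A small sketch verifying that correspondence locally with the example secret (this only demonstrates UTF-8 encoding; it does not call dbutils):

```python
secret = "a1!b2@c3#"
raw = secret.encode("utf-8")

print(raw)                              # b'a1!b2@c3#'  (the Python rendering)
print([format(b, "02x") for b in raw])  # hex values, as in the R output: 61 31 21 ...
print(list(raw))                        # decimal values, as in the Scala output: 97, 49, 33, ...
```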
list command (dbutils.secrets.list)
Lists the metadata for secrets within the specified scope.
To display help for this command, run dbutils.secrets.help("list").
This example lists the metadata for secrets within the scope named my-scope.
dbutils.secrets.list("my-scope")
# Out[10]: [SecretMetadata(key='my-key')]
dbutils.secrets.list("my-scope")
# [[1]]
# [[1]]$key
# [1] "my-key"
dbutils.secrets.list("my-scope")
// res2: Seq[com.databricks.dbutils_v1.SecretMetadata] = ArrayBuffer(SecretMetadata(my-key))
listScopes command (dbutils.secrets.listScopes)
Lists the available scopes.
To display help for this command, run dbutils.secrets.help("listScopes").
This example lists the available scopes.
dbutils.secrets.listScopes()
# Out[14]: [SecretScope(name='my-scope')]
dbutils.secrets.listScopes()
# [[1]]
# [[1]]$name
# [1] "my-scope"
dbutils.secrets.listScopes()
// res3: Seq[com.databricks.dbutils_v1.SecretScope] = ArrayBuffer(SecretScope(my-scope))
Widgets utility (dbutils.widgets)
Commands: combobox, dropdown, get, getArgument, multiselect, remove, removeAll, text
The widgets utility allows you to parameterize notebooks. See Databricks widgets.
To list the available commands, run dbutils.widgets.help().
combobox(name: String, defaultValue: String, choices: Seq, label: String): void -> Creates a combobox input widget with a given name, default value and choices
dropdown(name: String, defaultValue: String, choices: Seq, label: String): void -> Creates a dropdown input widget with a given name, default value and choices
get(name: String): String -> Retrieves current value of an input widget
getArgument(name: String, optional: String): String -> (DEPRECATED) Equivalent to get
multiselect(name: String, defaultValue: String, choices: Seq, label: String): void -> Creates a multiselect input widget with a given name, default value and choices
remove(name: String): void -> Removes an input widget from the notebook
removeAll: void -> Removes all widgets in the notebook
text(name: String, defaultValue: String, label: String): void -> Creates a text input widget with a given name and default value
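The commands above share a common shape: a programmatic name, a default value, optional choices, and an optional label, with get always returning the bound value. To unit-test notebook logic outside Databricks, a hypothetical local stand-in (this class is not part of dbutils; it only mimics the create/get/remove semantics and renders nothing) might look like:

```python
class WidgetStub:
    """Hypothetical local stand-in for dbutils.widgets, for testing notebook
    logic outside Databricks. Mimics create/get/remove semantics; renders nothing."""

    def __init__(self):
        self._values = {}

    def text(self, name, defaultValue, label=None):
        # setdefault mirrors widget behavior: re-creating an existing widget
        # does not overwrite a value the user has already set
        self._values.setdefault(name, defaultValue)

    def dropdown(self, name, defaultValue, choices, label=None):
        if defaultValue not in choices:
            raise ValueError("defaultValue must be one of the choices")
        self._values.setdefault(name, defaultValue)

    def get(self, name):
        return self._values[name]  # always a string, like the real API

    def remove(self, name):
        self._values.pop(name, None)

    def removeAll(self):
        self._values.clear()

widgets = WidgetStub()
widgets.dropdown("toys_dropdown", "basketball",
                 ["alphabet blocks", "basketball", "cape", "doll"], "Toys")
print(widgets.get("toys_dropdown"))  # basketball
```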
combobox command (dbutils.widgets.combobox)
Creates and displays a combobox widget with the specified programmatic name, default value, choices, and optional label.
To display help for this command, run dbutils.widgets.help("combobox").
This example creates and displays a combobox widget with the programmatic name fruits_combobox. It offers the choices apple, banana, coconut, and dragon fruit, and is set to the initial value of banana. This combobox widget has an accompanying label Fruits. This example ends by printing the initial value of the combobox widget, banana.
dbutils.widgets.combobox(
name='fruits_combobox',
defaultValue='banana',
choices=['apple', 'banana', 'coconut', 'dragon fruit'],
label='Fruits'
)
print(dbutils.widgets.get("fruits_combobox"))
# banana
dbutils.widgets.combobox(
name='fruits_combobox',
defaultValue='banana',
choices=list('apple', 'banana', 'coconut', 'dragon fruit'),
label='Fruits'
)
print(dbutils.widgets.get("fruits_combobox"))
# [1] "banana"
dbutils.widgets.combobox(
"fruits_combobox",
"banana",
Array("apple", "banana", "coconut", "dragon fruit"),
"Fruits"
)
print(dbutils.widgets.get("fruits_combobox"))
// banana
dropdown command (dbutils.widgets.dropdown)
Creates and displays a dropdown widget with the specified programmatic name, default value, choices, and optional label.
To display help for this command, run dbutils.widgets.help("dropdown").
This example creates and displays a dropdown widget with the programmatic name toys_dropdown. It offers the choices alphabet blocks, basketball, cape, and doll, and is set to the initial value of basketball. This dropdown widget has an accompanying label Toys. This example ends by printing the initial value of the dropdown widget, basketball.
dbutils.widgets.dropdown(
name='toys_dropdown',
defaultValue='basketball',
choices=['alphabet blocks', 'basketball', 'cape', 'doll'],
label='Toys'
)
print(dbutils.widgets.get("toys_dropdown"))
# basketball
dbutils.widgets.dropdown(
name='toys_dropdown',
defaultValue='basketball',
choices=list('alphabet blocks', 'basketball', 'cape', 'doll'),
label='Toys'
)
print(dbutils.widgets.get("toys_dropdown"))
# [1] "basketball"
dbutils.widgets.dropdown(
"toys_dropdown",
"basketball",
Array("alphabet blocks", "basketball", "cape", "doll"),
"Toys"
)
print(dbutils.widgets.get("toys_dropdown"))
// basketball
get command (dbutils.widgets.get)
Gets the current value of the widget with the specified programmatic name. This programmatic name can be either:
The name of a custom widget in the notebook, for example fruits_combobox or toys_dropdown.
The name of a custom parameter passed to the notebook as part of a notebook task, for example name or age. For more information, see the coverage of parameters for notebook tasks in the Create a job UI or the notebook_params field in the Trigger a new job run (POST /jobs/run-now) operation in the Jobs API.
To display help for this command, run dbutils.widgets.help("get").
This example gets the value of the widget that has the programmatic name fruits_combobox.
dbutils.widgets.get('fruits_combobox')
# banana
dbutils.widgets.get('fruits_combobox')
# [1] "banana"
dbutils.widgets.get("fruits_combobox")
// res6: String = banana
This example gets the value of the notebook task parameter that has the programmatic name age. This parameter was set to 35 when the related notebook task was run.
dbutils.widgets.get('age')
# 35
dbutils.widgets.get('age')
# [1] "35"
dbutils.widgets.get("age")
// res6: String = 35
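Note that get returns a string even for numeric-looking task parameters (the R output above shows "35" in quotes), so cast explicitly before doing arithmetic. A small sketch, with the literal standing in for what dbutils.widgets.get("age") would return:

```python
age = "35"            # stand-in for dbutils.widgets.get("age"), always a string
next_year = int(age) + 1  # cast before arithmetic; int("35") + 1
print(next_year)      # 36
```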
getArgument command (dbutils.widgets.getArgument)
Gets the current value of the widget with the specified programmatic name. If the widget does not exist, an optional message can be returned.
Note
This command is deprecated. Use dbutils.widgets.get instead.
To display help for this command, run dbutils.widgets.help("getArgument").
This example gets the value of the widget that has the programmatic name fruits_combobox. If this widget does not exist, the message Error: Cannot find fruits combobox is returned.
dbutils.widgets.getArgument('fruits_combobox', 'Error: Cannot find fruits combobox')
# Deprecation warning: Use dbutils.widgets.text() or dbutils.widgets.dropdown() to create a widget and dbutils.widgets.get() to get its bound value.
# Out[3]: 'banana'
dbutils.widgets.getArgument('fruits_combobox', 'Error: Cannot find fruits combobox')
# Deprecation warning: Use dbutils.widgets.text() or dbutils.widgets.dropdown() to create a widget and dbutils.widgets.get() to get its bound value.
# [1] "banana"
dbutils.widgets.getArgument("fruits_combobox", "Error: Cannot find fruits combobox")
// command-1234567890123456:1: warning: method getArgument in trait WidgetsUtils is deprecated: Use dbutils.widgets.text() or dbutils.widgets.dropdown() to create a widget and dbutils.widgets.get() to get its bound value.
// dbutils.widgets.getArgument("fruits_combobox", "Error: Cannot find fruits combobox")
// ^
// res7: String = banana
multiselect command (dbutils.widgets.multiselect)
Creates and displays a multiselect widget with the specified programmatic name, default value, choices, and optional label.
To display help for this command, run dbutils.widgets.help("multiselect").
This example creates and displays a multiselect widget with the programmatic name days_multiselect. It offers the choices Monday through Sunday and is set to the initial value of Tuesday. This multiselect widget has an accompanying label Days of the Week. This example ends by printing the initial value of the multiselect widget, Tuesday.
dbutils.widgets.multiselect(
name='days_multiselect',
defaultValue='Tuesday',
choices=['Monday', 'Tuesday', 'Wednesday', 'Thursday',
'Friday', 'Saturday', 'Sunday'],
label='Days of the Week'
)
print(dbutils.widgets.get("days_multiselect"))
# Tuesday
dbutils.widgets.multiselect(
name='days_multiselect',
defaultValue='Tuesday',
choices=list('Monday', 'Tuesday', 'Wednesday', 'Thursday',
'Friday', 'Saturday', 'Sunday'),
label='Days of the Week'
)
print(dbutils.widgets.get("days_multiselect"))
# [1] "Tuesday"
dbutils.widgets.multiselect(
"days_multiselect",
"Tuesday",
Array("Monday", "Tuesday", "Wednesday", "Thursday",
"Friday", "Saturday", "Sunday"),
"Days of the Week"
)
print(dbutils.widgets.get("days_multiselect"))
// Tuesday
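When the user selects more than one choice, get returns the selections as a single comma-separated string, so split it before iterating. A sketch, with the literal standing in for the value a user's selections would produce:

```python
# Stand-in for dbutils.widgets.get("days_multiselect") after a user has
# selected two days; multiselect values come back as one comma-separated string
selected = "Tuesday,Friday"
days = selected.split(",")
print(days)  # ['Tuesday', 'Friday']
```

For this reason, avoid using commas inside the choice values themselves.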
remove command (dbutils.widgets.remove)
Removes the widget with the specified programmatic name.
To display help for this command, run dbutils.widgets.help("remove").
Important
If you add a command to remove a widget, you cannot add a subsequent command to create a widget in the same cell. You must create the widget in another cell.
This example removes the widget with the programmatic name fruits_combobox
.
dbutils.widgets.remove('fruits_combobox')
dbutils.widgets.remove('fruits_combobox')
dbutils.widgets.remove("fruits_combobox")
removeAll command (dbutils.widgets.removeAll)
Removes all widgets from the notebook.
To display help for this command, run dbutils.widgets.help("removeAll").
Important
If you add a command to remove all widgets, you cannot add a subsequent command to create any widgets in the same cell. You must create the widgets in another cell.
This example removes all widgets from the notebook.
dbutils.widgets.removeAll()
dbutils.widgets.removeAll()
dbutils.widgets.removeAll()
text command (dbutils.widgets.text)
Creates and displays a text widget with the specified programmatic name, default value, and optional label.
To display help for this command, run dbutils.widgets.help("text").
This example creates and displays a text widget with the programmatic name your_name_text. It is set to the initial value of Enter your name. This text widget has an accompanying label Your name. This example ends by printing the initial value of the text widget, Enter your name.
dbutils.widgets.text(
name='your_name_text',
defaultValue='Enter your name',
label='Your name'
)
print(dbutils.widgets.get("your_name_text"))
# Enter your name
dbutils.widgets.text(
name='your_name_text',
defaultValue='Enter your name',
label='Your name'
)
print(dbutils.widgets.get("your_name_text"))
# [1] "Enter your name"
dbutils.widgets.text(
"your_name_text",
"Enter your name",
"Your name"
)
print(dbutils.widgets.get("your_name_text"))
// Enter your name
Limitations
Calling dbutils inside of executors can produce unexpected results or potentially result in errors.
If you need to run file system operations on executors using dbutils, there are several faster and more scalable alternatives:
For file copy or move operations, see the faster approach described in Parallelize filesystem operations.
For file system list and delete operations, see the parallel listing and delete methods that use Spark in How to list and delete files faster in Databricks.
For information about executors, see Cluster Mode Overview on the Apache Spark website.
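One common driver-side alternative is to fan many small file operations out over a thread pool instead of calling dbutils on executors. A minimal sketch of the shape, where copy_one is a hypothetical helper that in a real notebook might wrap dbutils.fs.cp (the paths and helper here are illustrative only):

```python
from concurrent.futures import ThreadPoolExecutor

def copy_one(pair):
    # Placeholder for a real call such as dbutils.fs.cp(src, dst);
    # here it just reports what it would have done
    src, dst = pair
    return f"copied {src} -> {dst}"

# Hypothetical list of source/destination pairs to process in parallel
pairs = [(f"/src/file{i}", f"/dst/file{i}") for i in range(4)]

# Threads run on the driver, so dbutils calls inside copy_one stay supported
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(copy_one, pairs))

print(results[0])  # copied /src/file0 -> /dst/file0
```

Because the work stays on the driver, this avoids the executor limitation entirely while still overlapping the I/O latency of many small operations.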