Libraries

To make third-party or custom code available to notebooks and jobs running on your clusters, you can install a library. Libraries can be written in Python, Java, Scala, and R. You can upload Java, Scala, and Python libraries and point to external packages in PyPI, Maven, and CRAN repositories.

This article focuses on performing library tasks in the workspace UI. You can also manage libraries using the Libraries CLI or the Libraries API.

Tip

Databricks includes many common libraries in Databricks Runtime. To see which libraries are included in Databricks Runtime, look at the System Environment subsection of the Databricks Runtime release notes for your Databricks Runtime version.

Important

Databricks does not invoke Python atexit functions when your notebook or job completes processing. If you use a Python library that registers atexit handlers, you must ensure your code calls required functions before exiting.
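For example, a minimal sketch of the workaround, where flush_metrics is a hypothetical cleanup function standing in for whatever your library registers:

```python
import atexit

# Hypothetical cleanup function; in practice this is whatever the library registers.
def flush_metrics():
    print("Flushing buffered metrics before the job ends")

# The library (or your code) registers the handler as usual...
atexit.register(flush_metrics)

# ...but because Databricks does not run atexit handlers when the notebook or
# job finishes, call the cleanup explicitly as the last step of your code.
flush_metrics()
```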

Installing Python eggs is deprecated and will be removed in a future Databricks Runtime release. Use Python wheels or install packages from PyPI instead.

Note

Unity Catalog has some limitations on library usage. On Databricks Runtime 13.0 and below, cluster-scoped libraries are not supported on clusters that use shared access mode in a Unity Catalog-enabled workspace. On Databricks Runtime 13.1 and above, cluster-scoped Python libraries are supported, including Python wheels that are uploaded as workspace files. Libraries that are referenced using DBFS filepaths are not supported, whether in the DBFS root or an external location mounted to DBFS. Non-Python libraries are not supported. See Cluster libraries.

You can install libraries in three modes: workspace, cluster-installed, and notebook-scoped.

  • Workspace libraries serve as a local repository from which you create cluster-installed libraries. A workspace library might be custom code created by your organization, or might be a particular version of an open-source library that your organization has standardized on.

  • Cluster libraries can be used by all notebooks running on a cluster. You can install a cluster library directly from a public repository such as PyPI or Maven, or create one from a previously installed workspace library.

  • Notebook-scoped libraries, available for Python and R, allow you to install libraries and create an environment scoped to a notebook session. These libraries do not affect other notebooks running on the same cluster. Notebook-scoped libraries do not persist and must be re-installed for each session. Use notebook-scoped libraries when you need a custom environment for a specific notebook.
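For example, a notebook-scoped install is a %pip magic run in a notebook cell; the package and version below are arbitrary placeholders:

```python
# Run in a notebook cell: the package is installed into an environment scoped
# to this notebook's session and does not affect other notebooks on the cluster.
%pip install requests==2.31.0
```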


Python environment management

The following table provides an overview of options you can use to install Python libraries in Databricks.

Note

  • Notebook-scoped libraries using the %pip magic command are enabled by default in all supported Databricks Runtime and Databricks Runtime ML versions. See Requirements for details.

  • Notebook-scoped libraries with the library utility are deprecated.

| Python package source | Notebook-scoped libraries with %pip | Notebook-scoped libraries with the library utility (deprecated) | Cluster libraries | Job libraries with Jobs API |
| --- | --- | --- | --- | --- |
| PyPI | Use %pip install. See example. | Use dbutils.library.installPyPI. | Select PyPI as the source. | Add a new pypi object to the job libraries and specify the package field. |
| Private PyPI mirror, such as Nexus or Artifactory | Use %pip install with the --index-url option. Secret management is available. See example. | Use dbutils.library.installPyPI and specify the repo argument. | Not supported. | Not supported. |
| VCS, such as GitHub, with raw source | Use %pip install and specify the repository URL as the package name. See example. | Not supported. | Select PyPI as the source and specify the repository URL as the package name. | Add a new pypi object to the job libraries and specify the repository URL as the package field. |
| Private VCS with raw source | Use %pip install and specify the repository URL with basic authentication as the package name. Secret management is available. See example. | Not supported. | Not supported. | Not supported. |
| DBFS | Use %pip install. See example. | Use dbutils.library.install(dbfs_path). | Select DBFS/GCS as the source. | Add a new egg or whl object to the job libraries and specify the DBFS path as the package field. |
| GCS | Use %pip install together with a pre-signed URL. Paths with the GCS protocol gs:// are not supported. | Use dbutils.library.install(gs_path). | Select DBFS/GCS as the source. | Add a new egg or whl object to the job libraries and specify the GCS path as the package field. |
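For illustration, the following notebook cells sketch some of the %pip variants from the table; the package names, index URL, repository, and paths are placeholders, not real endpoints:

```python
# PyPI (notebook-scoped).
%pip install simplejson

# Private PyPI mirror such as Nexus or Artifactory (placeholder URL; pair this
# with secret management rather than hard-coding credentials).
%pip install my-internal-package --index-url https://nexus.example.com/repository/pypi/simple

# VCS with raw source (placeholder repository URL used as the package name).
%pip install git+https://github.com/example-org/example-repo.git

# Wheel uploaded to DBFS (placeholder path).
%pip install /dbfs/path/to/my_package-0.1.0-py3-none-any.whl
```

For the Jobs API column, the libraries are declared in the job settings; a minimal sketch of that fragment expressed as a Python literal (values are placeholders):

```python
# Passed as the "libraries" field of a Jobs API request body.
job_libraries = [
    {"pypi": {"package": "simplejson==3.18.0"}},
    {"whl": "dbfs:/path/to/my_package-0.1.0-py3-none-any.whl"},
]
```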

Python library precedence

You might encounter a situation where you need to override the version of a built-in library, or have a custom library whose name conflicts with another library installed on the cluster. When you run import <library>, the library with the highest precedence is imported.

Important

Libraries stored in workspace files have different precedence depending on how they are added to the Python sys.path. Databricks Repos adds the current working directory to the path before all other libraries, while notebooks outside Repos add the current working directory after other libraries are installed. If you manually append workspace directories to your path, these always have the lowest precedence.

The following list orders precedence from highest to lowest; a lower number means higher precedence.

  1. Libraries in the current working directory (Repos only).

  2. Notebook-scoped libraries (%pip install in notebooks).

  3. Cluster libraries (using the UI, CLI, or API).

  4. Libraries included in Databricks Runtime.

    • Libraries installed with init scripts might resolve before or after built-in libraries, depending on how they are installed. Databricks does not recommend installing libraries with init scripts.

  5. Libraries in the current working directory (not in Repos).

  6. Workspace files appended to the sys.path.
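To confirm which copy of a library wins on a running cluster, inspect the imported module and the interpreter's search path; a minimal sketch (pandas is just an example package):

```python
import sys
import pandas

# The module's file path shows whether it resolved from a notebook-scoped
# environment, a cluster library, or the version built into Databricks Runtime.
print(pandas.__version__)
print(pandas.__file__)

# sys.path lists the import search order; directories you append manually
# (such as workspace directories) sit at the end and resolve last.
print(sys.path)
```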