Repos for Git integration

To support best practices for data science and engineering code development, Databricks Repos provides repository-level integration with Git providers. You can develop code in a Databricks notebook and sync it with a remote Git repository. Databricks Repos lets you use Git functionality such as cloning a remote repo, managing branches, pushing and pulling changes, and visually comparing differences upon commit.

Databricks Repos also provides an API that you can integrate with your CI/CD pipeline. For example, you can programmatically update a Databricks repo so that it always has the most recent code version.

Databricks Repos provides security features such as allow lists to control access to Git repositories and detection of clear text secrets in source code.

For more information about best practices for code development using Databricks Repos, see Best practices for integrating Databricks Repos with CI/CD workflows.


Databricks supports these Git providers:

  • GitHub

  • Bitbucket

  • GitLab

  • Azure DevOps (not available in Azure China regions)

  • AWS CodeCommit

  • GitHub AE

The Git server must be accessible from Databricks. Databricks does not support private Git servers, such as Git servers behind a VPN.

Support for arbitrary files in Databricks Repos is available in Databricks Runtime 8.4 and above.

Configure your Git integration with Databricks


  • Databricks recommends that you set an expiration date for all personal access tokens.

  1. Click Settings in your Databricks workspace and select User Settings from the menu.

  2. On the User Settings page, go to the Git Integration tab.

  3. Follow the instructions for integration with GitHub, Bitbucket Cloud, GitLab, Azure DevOps, AWS CodeCommit, or GitHub AE.

    For Azure DevOps, Git integration does not support Azure Active Directory tokens. You must use an Azure DevOps personal access token.

  4. If your organization has SAML SSO enabled in GitHub, ensure that you have authorized your personal access token for SSO.

Enable support for arbitrary files in Databricks Repos


This feature is in Public Preview.

In addition to syncing notebooks with a remote Git repository, Files in Repos lets you sync any type of file, such as .py files, data files in .csv or .json format, or .yaml configuration files. You can import and read these files within a Databricks repo. You can also view and edit plain text files in the UI.

If support for this feature is not enabled, you will still see non-notebook files in your repo, but you will not be able to work with them.


To work with non-notebook files in Databricks Repos, you must be running Databricks Runtime 8.4 or above.

Enable Files in Repos

An admin can enable this feature as follows:

  1. Go to the Admin Console.

  2. Click the Workspace Settings tab.

  3. In the Repos section, click the Files in Repos toggle.

After the feature has been enabled, you must restart your cluster and refresh your browser before you can use Files in Repos.

Additionally, the first time you access a repo after Files in Repos is enabled, you must open the Git dialog. A dialog appears indicating that you must perform a pull operation to sync non-notebook files in the repo. Select Agree and Pull to sync files. If there are any merge conflicts, another dialog appears giving you the option of discarding your conflicting changes or pushing your changes to a new branch.

Confirm Files in Repos is enabled

You can use the command %sh pwd in a notebook inside a Repo to check if Files in Repos is enabled.

  • If Files in Repos is not enabled, the response is /databricks/driver.

  • If Files in Repos is enabled, the response is /Workspace/Repos/<path to notebook directory>.

Clone a remote Git repository

You can clone a remote Git repository and work on your notebooks or files in Databricks. You can create notebooks, edit notebooks and other files, and sync with the remote repository. You can also create new branches for your development work. For some tasks you must work in your Git provider, such as creating a PR, resolving conflicts, merging or deleting branches, or rebasing a branch.

  1. Click Repos in the sidebar.

  2. Click Add Repo.

    Add repo
  3. In the Add Repo dialog, click Clone remote Git repo and enter the repository URL. Select your Git provider from the drop-down menu, optionally change the name to use for the Databricks repo, and click Create. The contents of the remote repository are cloned to the Databricks repo.

    Clone from repo

Work with notebooks in a Databricks repo

To create a new notebook or folder in a repo, click the down arrow next to the repo name, and select Create > Notebook or Create > Folder from the menu.

Repo create menu

To move a notebook or folder in your workspace into a repo, navigate to the notebook or folder and select Move from the drop-down menu:

Move object

In the dialog, select the repo to which you want to move the object:

Move repo

You can import a SQL or Python file as a single-cell Databricks notebook.

  • Add the comment line -- Databricks notebook source at the top of a SQL file.

  • Add the comment line # Databricks notebook source at the top of a Python file.
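For example, a Python file like the following imports as a one-cell notebook (the function body is illustrative):

```python
# Databricks notebook source
def greet(name):
    """Return a greeting; a placeholder for real notebook code."""
    return f"Hello, {name}"

print(greet("Databricks"))
```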

Work with non-notebook files in a Databricks repo

This section covers how to add files to a repo and view and edit files.


This feature is in Public Preview.


Databricks Runtime 8.4 or above.

Create a new file

The most common way to create a file in a repo is to clone a Git repository. You can also create a new file directly from the Databricks repo. Click the down arrow next to the repo name, and select Create > File from the menu.

repos create file

Import a file

To import a file, click the down arrow next to the repo name, and select Import.

repos import file

The import dialog appears. You can drag files into the dialog or click browse to select files.

repos import dialog
  • Only notebooks can be imported from a URL.

  • When you import a .zip file, Databricks automatically unzips the file and imports each file and notebook that is included in the .zip file.

Edit a file

To edit a file in a repo, click the filename in the Repos browser. The file opens and you can edit it. Changes are saved automatically.

When you open a Markdown (.md) file, the rendered view is displayed by default. To edit the file, click in the file editor. To return to preview mode, click anywhere outside of the file editor.

Refactor code

A best practice for code development is to modularize code so it can be easily reused. You can create custom Python files in a repo and make the code in those files available to a notebook using the import statement. For an example, see the example notebook.

To refactor notebook code into reusable files:

  1. From the Repos UI, create a new branch.

  2. Create a new source code file for your code.

  3. Add Python import statements to the notebook to make the code in your new file available to the notebook.

  4. Commit and push your changes to your Git provider.
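As a minimal sketch of steps 2 and 3, suppose you move a helper into a file utils.py at the repo root (the file and function names are hypothetical):

```python
# Contents of a hypothetical utils.py at the repo root.
def clean_column_name(name):
    """Normalize a column name: strip, lowercase, spaces to underscores."""
    return name.strip().lower().replace(" ", "_")

# In a notebook in the same repo (step 3), you would then write:
#   from utils import clean_column_name
print(clean_column_name("  Fixed Acidity "))
```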

Access files in a repo programmatically

You can programmatically read small data files in a repo, such as .csv or .json files, directly from a notebook. You cannot programmatically create or edit files from a notebook.

import pandas as pd
df = pd.read_csv("./data/winequality-red.csv")

You can use Spark to access files in a repo. Spark requires absolute file paths for file data. The absolute file path for a file in a repo is file:/Workspace/Repos/<user_folder>/<repo_name>/file.

You can copy the absolute or relative path to a file in a repo from the drop-down menu next to the file:

file drop down menu

The example below uses an f-string with os.getcwd() to build the full path.

import os
df = spark.read.format("csv").load(f"file:{os.getcwd()}/my_data.csv")

Example notebook

This notebook shows examples of working with arbitrary files in Databricks Repos.

Arbitrary Files in Repos example notebook

Open notebook in new tab

Work with Python and R modules


This feature is in Public Preview.


Databricks Runtime 8.4 or above.

Import Python and R modules

The current working directory of your repo and notebook is automatically added to the Python path. When you work in the repo root, you can import modules from the root directory and all subdirectories.

To import modules from another repo, you must add that repo to sys.path. For example:

import sys
sys.path.append("/Workspace/Repos/<user_folder>/<repo_name>")

# to use a relative path
import sys
import os
sys.path.append(os.path.abspath(".."))

You import functions from a module in a repo just as you would from a module saved as a cluster library or notebook-scoped library:

from sample import power
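The snippet below simulates this mechanism outside Databricks by writing a sample module to a temporary directory and appending that directory to sys.path (the directory stands in for another repo's root; names are illustrative):

```python
import os
import sys
import tempfile

# Create a directory standing in for another repo's root.
other_repo = tempfile.mkdtemp()
with open(os.path.join(other_repo, "sample.py"), "w") as f:
    f.write("def power(n, m):\n    return n ** m\n")

# Appending the directory to sys.path makes its modules importable.
sys.path.append(other_repo)
from sample import power

print(power(3, 4))
```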

Import Databricks Python notebooks

To distinguish between a regular Python file and a Databricks Python-language notebook exported in source-code format, Databricks adds the line # Databricks notebook source at the top of the notebook source code file.

When you import the notebook, Databricks recognizes it and imports it as a notebook, not as a Python module.

If you want to import the notebook as a Python module, you must edit the notebook in a code editor and remove the line # Databricks notebook source. Removing that line converts the notebook to a regular Python file.

Import precedence rules

When you use an import statement in a notebook in a repo, the library in the repo takes precedence over a library or wheel with the same name that is installed on the cluster.

Autoreload for Python modules

While developing Python code, if you are editing multiple files, you can use the following commands in any cell to force a reload of all modules.

%load_ext autoreload
%autoreload 2

Use Databricks web terminal for testing

You can use Databricks web terminal to test modifications to your Python or R code without having to import the file to a notebook and execute the notebook.

  1. Open web terminal.

  2. Change to the Repo directory: cd /Workspace/Repos/<path_to_repo>/.

  3. Run the Python or R file: python file_name.py or Rscript file_name.r.

Sync with a remote Git repository

To sync with Git, use the Git dialog. The Git dialog lets you pull changes from your remote Git repository and push and commit changes. You can also change the branch you are working on or create a new branch.


Git operations that pull in upstream changes clear the notebook state. For more information, see Incoming changes clear the notebook state.

Open the Git dialog

You can access the Git dialog from a notebook or from the Databricks Repos browser.

  • From a notebook, click the button at the top left of the notebook that identifies the current Git branch.

    Git dialog button on notebook
  • From the Databricks Repos browser, click the button to the right of the repo name:

    Git dialog button in repo browser

    You can also click the down arrow next to the repo name, and select Git… from the menu.

    Repos menu 2

Pull changes from the remote Git repository

To pull changes from the remote Git repository, click Pull in the Git dialog. Notebooks and other files are updated automatically to the latest version in your remote repository.

See Merge conflicts for instructions on resolving merge conflicts.

Merge conflicts

To resolve a merge conflict, you must either discard conflicting changes or commit your changes to a new branch and then merge them into the original feature branch using a pull request.

  1. If there is a merge conflict, the Repos UI shows a notice allowing you to cancel the pull or resolve the conflict. If you select Resolve conflict using PR, a dialog appears that lets you create a new branch and commit your changes to it.

    resolve conflict dialog
  2. When you click Commit to new branch, a notice appears with a link: Create a pull request to resolve merge conflicts. Click the link to open your Git provider.

    merge conflict create PR message
  3. In your Git provider, create the PR, resolve the conflicts, and merge the new branch into the original branch.

  4. Return to the Repos UI. Use the Git dialog to pull changes from the Git repository to the original branch.

Commit and push changes to the remote Git repository

When you have added new notebooks or files, or made changes to existing notebooks or files, the Git dialog highlights the changes.

git dialog

Enter a summary of the changes (required), and click Commit & Push to push these changes to the remote Git repository.

If you don’t have permission to commit to the default branch, such as main, create a new branch and use your Git provider interface to create a pull request (PR) to merge it into the default branch.


  • Results are not included with a notebook commit. All results are cleared before the commit is made.

  • For instructions on resolving merge conflicts, see Merge conflicts.

Create a new branch

You can create a new branch based on an existing branch from the Git dialog:

Git dialog new branch

Run jobs using notebooks in a remote repository

You can run jobs in Databricks using notebooks located in a remote Git repository. This is especially useful for managing CI/CD for production runs. See Run jobs using notebooks in a remote Git repository.

Control access to Databricks Repos

Manage permissions

When you create a repo, you have Can Manage permission. This lets you perform Git operations or modify the remote repository. You can clone public remote repositories without Git credentials (personal access token and username). To modify a public remote repository, or to clone or modify a private remote repository, you must have a Git provider username and personal access token with read and write permissions for the remote repository.

Use allow lists

An admin can limit which remote repositories users can commit and push to.

  1. Go to the Admin Console.

  2. Click the Workspace Settings tab.

  3. In the Advanced section, click the Enable Repos Git URL Allow List toggle.

  4. Click Confirm.

  5. In the field next to Repos Git URL Allow List: Empty list, enter a comma-separated list of URL prefixes.

  6. Click Save.

Users can only commit and push to Git repositories that start with one of the URL prefixes you specify. The default setting is “Empty list”, which disables access to all repositories. To allow access to all repositories, disable Enable Repos Git URL Allow List.


  • Users can load and pull remote repositories even if they are not on the allow list.

  • The list you save overwrites the existing set of saved URL prefixes.

  • It may take about 15 minutes for changes to take effect.
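The prefix semantics can be sketched as follows (an illustration of prefix matching, not Databricks' implementation):

```python
def push_allowed(repo_url, allow_list):
    """Return True if the repo URL starts with any allowed prefix.

    An empty allow list disables commit and push for all repositories.
    """
    return any(repo_url.startswith(prefix) for prefix in allow_list)

allow_list = ["https://github.com/my-org/"]
print(push_allowed("https://github.com/my-org/demo.git", allow_list))
print(push_allowed("https://github.com/other-org/demo.git", allow_list))
```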

Secrets detection

Databricks Repos scans code for access key IDs that begin with the prefix AKIA and warns the user before committing.
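A local check along the same lines could look like this (a sketch only; Databricks' scanner is not public, and the matched string below is AWS's documented example key ID):

```python
import re

# AWS access key IDs are "AKIA" followed by 16 uppercase alphanumerics.
AKIA_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

def find_candidate_keys(source):
    """Return substrings that look like AWS access key IDs."""
    return AKIA_PATTERN.findall(source)

print(find_candidate_keys('aws_key = "AKIAIOSFODNN7EXAMPLE"'))
```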

Repos API

The Repos API allows you to programmatically manage Databricks Repos. For details, see Repos API 2.0.
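For example, the update endpoint (PATCH /api/2.0/repos/{repo_id}) checks a repo out to the latest commit of a branch. A minimal sketch using only the standard library; the workspace URL, token, and repo ID are placeholders you supply:

```python
import json
import urllib.request

def build_repo_update_request(host, token, repo_id, branch):
    """Build a PATCH request for the Repos API 2.0 update endpoint."""
    return urllib.request.Request(
        url=f"{host}/api/2.0/repos/{repo_id}",
        data=json.dumps({"branch": branch}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )

req = build_repo_update_request(
    "https://example.cloud.databricks.com", "<token>", "123", "main"
)
print(req.get_method(), req.full_url)
# To send it: urllib.request.urlopen(req)
```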

Terraform integration

You can manage Databricks Repos in a fully automated setup using Databricks Terraform provider and databricks_repo:

resource "databricks_repo" "this" {
  url = "https://<git-server>/<org>/<repo-name>.git"
}

Best practices for integrating Databricks Repos with CI/CD workflows

This section includes best practices for integrating Databricks Repos with your CI/CD workflow. The following figure shows an overview of the steps.

Best practices overview

Admin workflow

Databricks Repos have user-level folders and non-user top-level folders. User-level folders are automatically created when users first clone a remote repository. You can think of Databricks Repos in user folders as “local checkouts” that are individual for each user and where users make changes to their code.

Set up top-level folders

Admins can create non-user top-level folders. The most common use case for these top-level folders is to create Dev, Staging, and Production folders that contain Databricks Repos for the appropriate versions or branches for development, staging, and production. For example, if your company uses the Main branch for production, the Production folder would contain Repos configured to be at the Main branch.

Typically permissions on these top-level folders are read-only for all non-admin users within the workspace.

Top-level repo folders

Set up Git automation to update Databricks Repos on merge

To ensure that Databricks Repos are always at the latest version, you can set up Git automation to call the Repos API. In your Git provider, set up automation that, after every successful merge of a PR into the main branch, calls the Repos API endpoint on the appropriate repo in the Production folder to bring that repo to the latest version.

For example, on GitHub this can be achieved with GitHub Actions. For more information, see the Repos API.

Developer workflow

In your user folder in Databricks Repos, clone your remote repository. A best practice is to create a new feature branch, or select a previously created branch, for your work, instead of directly committing and pushing changes to the main branch. You can make changes, commit, and push changes in that branch. When you are ready to merge your code, create a pull request and follow the review and merge processes in Git.

Here is an example workflow.


This workflow requires that you have already configured your Git integration.


Databricks recommends that each developer work on their own feature branch. Sharing feature branches among developers can cause merge conflicts, which must be resolved using your Git provider. For information about how to resolve merge conflicts, see Merge conflicts.


  1. Clone your existing Git repository to your Databricks workspace.

  2. Use the Repos UI to create a feature branch from the main branch. This example uses a single feature branch feature-b for simplicity. You can create and use multiple feature branches to do your work.

  3. Make your modifications to Databricks notebooks and files in the Repo.

  4. Commit and push your changes to your Git provider.

  5. Coworkers can now clone the Git repository into their own user folder.

    1. Working on a new branch, a coworker makes changes to the notebooks and files in the Repo.

    2. The coworker commits and pushes their changes to the Git provider.

  6. To merge changes from other branches or rebase the feature branch, you must use the Git command line or an IDE on your local system. Then, in the Repos UI, use the Git dialog to pull changes into the feature-b branch in the Databricks Repo.

  7. When you are ready to merge your work to the main branch, use your Git provider to create a PR to merge the changes from feature-b.

  8. In the Repos UI, pull changes to the main branch.

Production job workflow

You can point a job directly to a notebook in a Databricks Repo. When a job kicks off a run, it uses the current version of the code in the repo.

If the automation is set up as described in Admin workflow, every successful merge calls the Repos API to update the repo. As a result, jobs that are configured to run code from a repo always use the latest version available when the job run was created.

Migration tips


This feature is in Public Preview.

If you are using %run commands to make Python or R functions defined in a notebook available to another notebook, or are installing custom .whl files on a cluster, consider including those custom modules in a Databricks repo. In this way, you can keep your notebooks and other code modules in sync, ensuring that your notebook always uses the correct version.

Migrate from %run commands

%run commands let you include one notebook within another and are often used to make supporting Python or R code available to a notebook. In this example, a notebook named power includes the code below.

# This code is in a notebook named "power".
def n_to_mth(n,m):
  print(n, "to the", m, "th power is", n**m)

You can then make functions defined in power available to a different notebook with a %run command:

# This notebook uses a %run command to access the code in "power".
%run ./power
n_to_mth(3, 4)

Using Files in Repos, you can directly import the module that contains the Python code and run the function.

from power import n_to_mth
n_to_mth(3, 4)

Migrate from installing custom Python .whl files

You can install custom .whl files onto a cluster and then import them into a notebook attached to that cluster. For code that is frequently updated, this process is cumbersome and error-prone. Files in Repos lets you keep these Python files in the same repo with the notebooks that use the code, ensuring that your notebook always uses the correct version.

For more information about packaging Python projects, see this tutorial.

Limitations and FAQ

Incoming changes clear the notebook state

Git operations that alter the notebook source code result in the loss of the notebook state, including cell results, comments, revision history, and widgets. For example, Git pull can change the source code of a notebook. In this case, Databricks Repos must overwrite the existing notebook to import the changes. Git commit and push or creating a new branch do not affect the notebook source code, so the notebook state is preserved in these operations.

Prevent data loss in MLflow experiments

MLflow experiment data in a notebook might be lost in this scenario: You rename the notebook and then, before calling any MLflow commands, change to a branch that doesn’t contain the notebook.

To prevent this situation, Databricks recommends you avoid renaming notebooks in repos.

Can I create an MLflow experiment in a repo?

No. You can only create an MLflow experiment in the workspace.

What happens if a job starts running on a notebook while a Git operation is in progress?

At any point while a Git operation is in progress, some notebooks in the Repo may have been updated while others have not. This can cause unpredictable behavior.

For example, suppose notebook A calls notebook Z using a %run command. If a job running during a Git operation starts the most recent version of notebook A, but notebook Z has not yet been updated, the %run command in notebook A might start the older version of notebook Z. During the Git operation, the notebook states are not predictable and the job might fail or run notebook A and notebook Z from different commits.

How can I run non-Databricks notebook files in a repo? For example, a .py file?

You can use any of the following:

  • Bundle and deploy as a library on the cluster.

  • Pip install the Git repository directly. This requires a credential in secrets manager.

  • Use %run with inline code in a notebook.

Can I create top-level folders that are not user folders?

Yes, admins can create top-level folders to a single depth. Repos does not support additional folder levels.

How and where are the GitHub tokens stored in Databricks? Who would have access from Databricks?

  • The authentication tokens are stored in the Databricks control plane, and a Databricks employee can only gain access through a temporary credential that is audited.

  • Databricks logs the creation and deletion of these tokens, but not their usage. Databricks has logging that tracks Git operations that could be used to audit the usage of the tokens by the Databricks application.

  • GitHub Enterprise audits token usage. Other Git services may also have Git server auditing.

Does Repos support Git submodules?

No. You can clone a repo that contains Git submodules, but the submodule is not cloned.

Does Repos support SSH?

No, only HTTPS.

Does Repos support .gitignore files?

Yes. If you add a file to your repo and do not want it to be tracked by Git, create a .gitignore file or use one cloned from your remote repository and add the filename, including the extension.

.gitignore works only for files that are not already tracked by Git. If you add a file that is already tracked by Git to a .gitignore file, the file is still tracked by Git.
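For example, a minimal .gitignore that keeps local artifacts out of the repo (the patterns are illustrative):

```
# .gitignore at the repo root
*.log
.ipynb_checkpoints/
data/raw/
```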

Can I pull the latest version of a repository from Git before running a job without relying on an external orchestration tool?

No. Typically you can integrate this as a pre-commit on the Git server so that every push to a branch (main/prod) updates the Production repo.

Can I pull in .ipynb files?

Yes. The file renders in .json format, not notebook format.

Can I export a Repo?

You can export notebooks, folders, or an entire Repo. You cannot export non-notebook files, and if you export an entire Repo, non-notebook files are not included. To export, use the Workspace CLI or the Workspace API 2.0.

Are there limits on the size of a repo or the number of files?

Databricks doesn’t enforce a limit on the size of a repo. However:

  • Working branches are limited to 200 MB.

  • Individual files are limited to 100 MB.

Databricks recommends that in a repo:

  • The total number of all files not exceed 10,000.

  • The total number of notebooks not exceed 5,000.

You may receive an error message if your repo exceeds these limits. You may also receive a timeout error when you clone the repo, but the operation might complete in the background.

Does Repos support branch merging?

No. Databricks recommends that you create a pull request and merge through your Git provider.

Are the contents of Databricks Repos encrypted?

The contents of Databricks Repos are encrypted by Databricks using a default key.

Can I delete a branch from a Databricks repo?

No. To delete a branch, you must work in your Git provider.

Where is Databricks repo content stored?

The contents of a repo are temporarily cloned onto disk in the control plane. Databricks notebook files are stored in the control plane database just like notebooks in the main workspace. Non-notebook files may be stored on disk for up to 30 days.

How can I disable Repos in my workspace?

Follow these steps to disable Repos for Git in your workspace.

  1. Go to the Admin Console.

  2. Click the Workspace Settings tab.

  3. In the Advanced section, click the Repos toggle.

  4. Click Confirm.

  5. Refresh your browser.

Files in Repos limitations


This feature is in Public Preview.

  • In Databricks Runtime 10.1 and below, Files in Repos is not compatible with Spark Streaming. To use Spark Streaming on a cluster running Databricks Runtime 10.1 or below, you must disable Files in Repos on the cluster. Set the Spark configuration spark.databricks.enableWsfs to false.

  • Native file reads are supported in Python and R notebooks. Native file reads are not supported in Scala notebooks, but you can use Scala notebooks with DBFS as you do today.

  • The diff view in the Git dialog is not available for files.

  • Only text-encoded files are rendered in the UI. To view files in Databricks, the files must not be larger than 10 MB.

  • You cannot create or edit a file from your notebook.

  • You can only export notebooks. You cannot export non-notebook files from a repo.


Troubleshooting

Error message: Invalid credentials

Try the following:

  • Confirm that the settings in the Git integration tab (User Settings > Git Integration) are correct.

    • You must enter both your Git provider username and token. Legacy Git integrations did not require a username, so you may need to add a username to work with Databricks Repos.

  • Confirm that you have selected the correct Git provider in the Add Repo dialog.

  • Ensure your personal access token or app password has the correct repo access.

  • If SSO is enabled on your Git provider, authorize your tokens for SSO.

  • Test your token with command line Git. Both of these options should work:

    git clone https://<username>:<personal-access-token>@<git-server>/<org>/<repo-name>.git
    git clone -c http.sslVerify=false -c http.extraHeader='Authorization: Bearer <personal-access-token>' https://<git-server>/<org>/<repo-name>.git

Error message: Secure connection could not be established because of SSL problems

<link>: Secure connection to <link> could not be established because of SSL problems

This error occurs if your Git server is not accessible from Databricks. Private Git servers are not supported.

Timeout errors

Expensive operations such as cloning a large repo or checking out a large branch may hit timeout errors, but the operation might complete in the background. You can also try again later if the workspace was under heavy load at the time.

404 errors

If you get a 404 error when you try to open a non-notebook file, try waiting a few minutes and then trying again. There is a delay of a few minutes between when the workspace is enabled and when the webapp picks up the configuration flag.

Resource not found errors after pulling non-notebook files into a Databricks repo

This error can occur if you are not using Databricks Runtime 8.4 or above. A cluster running Databricks Runtime 8.4 or above is required to work with non-notebook files in a repo.

Errors suggesting re-cloning

There was a problem with deleting folders. The repo could be in an inconsistent state and re-cloning is recommended.

This error indicates that a problem occurred while deleting folders from the repo. This could leave the repo in an inconsistent state, where folders that should have been deleted still exist. If this error occurs, Databricks recommends deleting and re-cloning the repo to reset its state.

Unable to set repo to most recent state. This may be due to force pushes overriding commit history on the remote repo. Repo may be out of sync and re-cloning is recommended.

This error indicates that the local and remote Git state have diverged. This can happen when a force push on the remote overrides recent commits that still exist on the local repo. Databricks does not support a hard reset within Repos and recommends deleting and re-cloning the repo if this error occurs.

My admin enabled Files in Repos, but expected files do not appear after cloning a remote repository or pulling files into an existing one

  • You must refresh your browser and restart your cluster to pick up the new configuration.

  • Your cluster must be running Databricks Runtime 8.4 or above.