Access audit logs

Note

This feature requires the Databricks Premium Plan.

Warning

Audit logging is temporarily disabled for Databricks SQL.

Databricks provides access to audit logs of activities performed by Databricks users, allowing your enterprise to monitor detailed Databricks usage patterns.

There are two types of logs:

  • Workspace-level audit logs with workspace-level events.

  • Account-level audit logs with account-level events.

For a list of each of these types of events and the associated services, see Audit events.

As a Databricks account owner or account admin, you can configure delivery of audit logs in JSON file format to a Google Cloud Storage (GCS) bucket, where you can make the data available for usage analysis. Databricks delivers a separate JSON file for each workspace in your account and a separate file for account-level events.

To configure audit log delivery, you must set up a GCS bucket, give Databricks access to the bucket, and then use the account console to define a log delivery configuration that tells Databricks where to deliver your logs.

You cannot edit a log delivery configuration after creation, but you can temporarily or permanently disable it using the account console. You can have a maximum of two enabled audit log delivery configurations at a time.

To configure log delivery, see Configure audit log delivery.

Configure verbose audit logs

In addition to the default events, you can configure a workspace to generate additional events by enabling verbose audit logs.

Additional notebook actions

Additional actions in audit log category notebook:

  • Action name runCommand, emitted after Databricks runs a command in a notebook. A command corresponds to a cell in a notebook. An example query over these events appears after this list.

    Request parameters:

    • notebookId: Notebook ID

    • executionTime: The duration of the command in seconds. This is a decimal value such as 13.789.

    • status: Status of the command. Possible values are finished (the command finished), skipped (the command was skipped), cancelled (the command was cancelled), or failed (the command failed).

    • commandId: The unique ID for this command.

    • commandText: The text of the command. For multi-line commands, lines are separated by newline characters.
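For example, once verbose audit logs are delivered and loaded into the audit_logs temp table (see Analyze audit logs), a query along the following lines lists the slowest notebook commands. This is a sketch; the CAST assumes that requestParams values are inferred as strings.

%sql
SELECT requestParams.notebookId, requestParams.commandId, requestParams.status,
       CAST(requestParams.executionTime AS DOUBLE) AS executionSeconds
FROM audit_logs
WHERE serviceName = "notebook" AND actionName = "runCommand"
ORDER BY executionSeconds DESC
LIMIT 20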

Additional Databricks SQL actions

Additional actions in audit log category databrickssql:

  • Action name commandSubmit, emitted when a command is submitted to Databricks SQL.

    Request parameters:

    • commandText: User-specified SQL statement or command.

    • warehouseId: ID for the SQL warehouse.

    • commandId: ID of the command.

  • Action name commandFinish, emitted when a command completes or is cancelled. An example query over these events appears after this list.

    Request parameters:

    • warehouseId: ID for the SQL warehouse.

    • commandId: ID of the command.

    Check the response field for additional information related to the command result:

    • statusCode: The HTTP response code. This is 400 if it is a general error.

    • errorMessage: The error message.

      Note

      In some cases for certain long-running commands, the errorMessage field may not be populated on failure.

    • result: This field is empty.
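For example, with audit logs loaded into the audit_logs temp table (see Analyze audit logs), a query like the following lists Databricks SQL commands that finished with an error. This is a sketch; treating statusCode as numeric is an assumption about the inferred schema.

%sql
SELECT requestParams.warehouseId, requestParams.commandId,
       response.statusCode, response.errorMessage
FROM audit_logs
WHERE serviceName = "databrickssql" AND actionName = "commandFinish"
  AND response.statusCode <> 200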

Enable or disable verbose audit logs

  1. As an admin, go to the Databricks admin console.

  2. Click Workspace settings.

  3. Next to Verbose Audit Logs, enable or disable the feature.

When you enable or disable verbose logging, an auditable event is emitted in the category workspace with action workspaceConfEdit. The request parameter workspaceConfKeys is enableVerboseAuditLogs, and the request parameter workspaceConfValues is true (feature enabled) or false (feature disabled).
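To verify who toggled the feature, you can query the delivered logs for this event. This is a sketch that assumes the audit_logs temp table created in Analyze audit logs below.

%sql
SELECT timestamp, userIdentity.email,
       requestParams.workspaceConfKeys, requestParams.workspaceConfValues
FROM audit_logs
WHERE serviceName = "workspace" AND actionName = "workspaceConfEdit"
  AND requestParams.workspaceConfKeys = "enableVerboseAuditLogs"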

Latency

  • Audit log delivery begins within one hour of creating a log delivery configuration, at which point you can access the JSON files.

  • After audit log delivery begins, auditable events are typically logged within one hour. New JSON files may overwrite existing files for each workspace; overwriting ensures exactly-once semantics without requiring read or delete access to your account.

  • Enabling or disabling a log delivery configuration can take up to an hour to take effect.

Location

The delivery location is:

gs://<bucket-name>/<delivery-path-prefix>/workspaceId=<workspaceId>/date=<yyyy-mm-dd>/auditlogs_<internal-id>.json

If the optional delivery path prefix is omitted, the delivery path does not include <delivery-path-prefix>/.

Account-level audit events that are not associated with any single workspace are delivered to the workspaceId=0 partition.
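Because the delivery path is partitioned by workspaceId and date, you can read a single workspace and day directly. The workspace ID and date below are hypothetical placeholders; substitute your own values along with your bucket name and prefix.

val oneDay = spark.read.format("json")
  .load("gs://<bucket-name>/<delivery-path-prefix>/workspaceId=1234567890/date=2021-06-01/*.json")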

For more information about accessing these files and analyzing them using Databricks, see Analyze audit logs.

Schema

Databricks delivers audit logs in JSON format. The schema of audit log records is as follows. An example query that projects these fields appears after the list.

  • version: The schema version of the audit log format.

  • timestamp: UTC timestamp of the action.

  • workspaceId: ID of the workspace this event relates to. May be set to “0” for account-level events that apply to no workspace.

  • sourceIPAddress: The IP address of the source request.

  • userAgent: The browser or API client used to make the request.

  • sessionId: Session ID of the action.

  • userIdentity: Information about the user who made the request.

    • email: User email address.

  • serviceName: The service that logged the request.

  • actionName: The action, such as login, logout, read, write, and so on.

  • requestId: Unique request ID.

  • requestParams: Parameter key-value pairs used in the audited event.

  • response: Response to the request.

    • errorMessage: The error message if there was an error.

    • result: The result of the request.

    • statusCode: HTTP status code that indicates whether the request succeeded.

  • auditLevel: Specifies if this is a workspace-level event (WORKSPACE_LEVEL) or account-level event (ACCOUNT_LEVEL).

  • accountId: Account ID of this Databricks account.
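As a quick check that delivered records match this schema, you can project the documented fields. This sketch assumes the audit_logs temp table created in Analyze audit logs below.

%sql
SELECT version, timestamp, workspaceId, sourceIPAddress, userAgent, sessionId,
       userIdentity.email, serviceName, actionName, requestId, requestParams,
       response.statusCode, response.errorMessage, response.result,
       auditLevel, accountId
FROM audit_logs
LIMIT 10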

Audit events

The serviceName and actionName properties identify an audit event in an audit log record.

Workspace-level audit logs are available for these services:

  • accounts

  • clusters

  • clusterPolicies

  • databrickssql

  • dbfs

  • genie

  • gitCredentials

  • globalInitScripts

  • groups

  • iamRole

  • instancePools

  • jobs

  • mlflowExperiment

  • mlflowModelRegistry

  • notebook

  • repos

  • secrets

  • sqlanalytics

  • sqlPermissions, which contains all audit logs for table access when table access control lists are enabled.

  • ssh

  • webTerminal

  • workspace

Account-level audit logs are available for these services:

  • accountBillableUsage: Access to billable usage for the account.

  • logDelivery: Log delivery configurations.

  • accountsManager: Actions performed in the accounts console.

  • accounts: Account-level login and logout events.

  • ssoConfigBackend: Single sign-on configuration for the account.

Account-level events have the workspaceId field set to a valid workspace ID if they reference workspace-related events like creating or deleting a workspace. If they are not associated with any workspace, the workspaceId field is set to 0.

Note

  • If actions take a long time, the request and response are logged separately, but the request and response pair have the same requestId. An example query that surfaces such pairs appears after this list.

  • With the exception of mount-related operations, Databricks audit logs do not include DBFS-related operations.

  • Automated actions such as resizing a cluster due to autoscaling or launching a job due to scheduling are performed by the user System-User.
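For example, to surface actions whose request and response were logged as separate entries, group on requestId. This sketch assumes the audit_logs temp table created in Analyze audit logs below.

%sql
SELECT requestId, COUNT(*) AS entries,
       MIN(timestamp) AS firstLogged, MAX(timestamp) AS lastLogged
FROM audit_logs
GROUP BY requestId
HAVING COUNT(*) > 1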

Request parameters

The request parameters in the field requestParams for each supported service and action are listed in the following sections, grouped by workspace-level events and account-level events.

The requestParams field is subject to truncation. If the size of its JSON representation exceeds 100 KB, values are truncated and the string ... truncated is appended to truncated entries. In rare cases where a truncated map is still larger than 100 KB, a single TRUNCATED key with an empty value is present instead.
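A rough way to find truncated entries is to search the rendered requestParams for the appended marker. This is a heuristic sketch; casting the requestParams struct to a string depends on the inferred schema, and assumes the audit_logs temp table from Analyze audit logs below.

%sql
SELECT serviceName, actionName, requestId
FROM audit_logs
WHERE CAST(requestParams AS STRING) LIKE "%... truncated%"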

Workspace-level audit log events

Each service is listed below with its actions and the request parameters recorded in requestParams for each action.

Service: accounts

  • add: [“targetUserName”, “endpoint”, “targetUserId”]
  • addPrincipalToGroup: [“targetGroupId”, “endpoint”, “targetUserId”, “targetGroupName”, “targetUserName”]
  • changePassword: [“newPasswordSource”, “targetUserId”, “serviceSource”, “wasPasswordChanged”, “userId”]
  • createGroup: [“endpoint”, “targetGroupId”, “targetGroupName”]
  • delete: [“targetUserId”, “targetUserName”, “endpoint”]
  • garbageCollectDbToken: [“tokenExpirationTime”, “userId”]
  • generateDbToken: [“userId”, “tokenExpirationTime”]
  • jwtLogin: [“user”]
  • login: [“user”]
  • logout: [“user”]
  • removeAdmin: [“targetUserName”, “endpoint”, “targetUserId”]
  • removeGroup: [“targetGroupId”, “targetGroupName”, “endpoint”]
  • resetPassword: [“serviceSource”, “userId”, “endpoint”, “targetUserId”, “targetUserName”, “wasPasswordChanged”, “newPasswordSource”]
  • revokeDbToken: [“userId”]
  • samlLogin: [“user”]
  • setAdmin: [“endpoint”, “targetUserName”, “targetUserId”]
  • tokenLogin: [“tokenId”, “user”]
  • validateEmail: [“endpoint”, “targetUserName”, “targetUserId”]

Service: clusters

  • changeClusterAcl: [“shardName”, “aclPermissionSet”, “targetUserId”, “resourceId”]
  • create: [“cluster_log_conf”, “num_workers”, “enable_elastic_disk”, “driver_node_type_id”, “start_cluster”, “docker_image”, “ssh_public_keys”, “aws_attributes”, “acl_path_prefix”, “node_type_id”, “instance_pool_id”, “spark_env_vars”, “init_scripts”, “spark_version”, “cluster_source”, “autotermination_minutes”, “cluster_name”, “autoscale”, “custom_tags”, “cluster_creator”, “enable_local_disk_encryption”, “idempotency_token”, “spark_conf”, “organization_id”, “no_driver_daemon”, “user_id”]
  • createResult: [“clusterName”, “clusterState”, “clusterId”, “clusterWorkers”, “clusterOwnerUserId”]
  • delete: [“cluster_id”]
  • deleteResult: [“clusterWorkers”, “clusterState”, “clusterId”, “clusterOwnerUserId”, “clusterName”]
  • edit: [“spark_env_vars”, “no_driver_daemon”, “enable_elastic_disk”, “aws_attributes”, “driver_node_type_id”, “custom_tags”, “cluster_name”, “spark_conf”, “ssh_public_keys”, “autotermination_minutes”, “cluster_source”, “docker_image”, “enable_local_disk_encryption”, “cluster_id”, “spark_version”, “autoscale”, “cluster_log_conf”, “instance_pool_id”, “num_workers”, “init_scripts”, “node_type_id”]
  • permanentDelete: [“cluster_id”]
  • resize: [“cluster_id”, “num_workers”, “autoscale”]
  • resizeResult: [“clusterWorkers”, “clusterState”, “clusterId”, “clusterOwnerUserId”, “clusterName”]
  • restart: [“cluster_id”]
  • restartResult: [“clusterId”, “clusterState”, “clusterName”, “clusterOwnerUserId”, “clusterWorkers”]
  • start: [“init_scripts_safe_mode”, “cluster_id”]
  • startResult: [“clusterName”, “clusterState”, “clusterWorkers”, “clusterOwnerUserId”, “clusterId”]

Service: clusterPolicies

  • create: [“name”]
  • edit: [“policy_id”, “name”]
  • delete: [“policy_id”]
  • changeClusterPolicyAcl: [“shardName”, “targetUserId”, “resourceId”, “aclPermissionSet”]

Service: dbfs

  • addBlock: [“handle”, “data_length”]
  • create: [“path”, “bufferSize”, “overwrite”]
  • delete: [“recursive”, “path”]
  • getSessionCredentials: [“mountPoint”]
  • mkdirs: [“path”]
  • mount: [“mountPoint”, “owner”]
  • move: [“dst”, “source_path”, “src”, “destination_path”]
  • put: [“path”, “overwrite”]
  • unmount: [“mountPoint”]

Service: databrickssql

  • addDashboardWidget: [“dashboardId”, “widgetId”]
  • cancelQueryExecution: [“queryExecutionId”]
  • changeWarehouseAcls: [“aclPermissionSet”, “resourceId”, “shardName”, “targetUserId”]
  • changePermissions: [“granteeAndPermission”, “objectId”, “objectType”]
  • cloneDashboard: [“dashboardId”]
  • commandSubmit (only for verbose audit logs): [“orgId”, “sourceIpAddress”, “timestamp”, “userAgent”, “userIdentity”, “shardName” (see details)]
  • commandFinish (only for verbose audit logs): [“orgId”, “sourceIpAddress”, “timestamp”, “userAgent”, “userIdentity”, “shardName” (see details)]
  • createAlertDestination: [“alertDestinationId”, “alertDestinationType”]
  • createDashboard: [“dashboardId”]
  • createDataPreviewDashboard: [“dashboardId”]
  • createWarehouse: [“auto_resume”, “auto_stop_mins”, “channel”, “cluster_size”, “conf_pairs”, “custom_cluster_confs”, “enable_databricks_compute”, “enable_photon”, “enable_serverless_compute”, “instance_profile_arn”, “max_num_clusters”, “min_num_clusters”, “name”, “size”, “spot_instance_policy”, “tags”, “test_overrides”]
  • createQuery: [“queryId”]
  • createQueryDraft: [“queryId”]
  • createQuerySnippet: [“querySnippetId”]
  • createRefreshSchedule: [“alertId”, “dashboardId”, “refreshScheduleId”]
  • createSampleDashboard: [“sampleDashboardId”]
  • createSubscription: [“dashboardId”, “refreshScheduleId”, “subscriptionId”]
  • createVisualization: [“queryId”, “visualizationId”]
  • deleteAlert: [“alertId”]
  • deleteAlertDestination: [“alertDestinationId”]
  • deleteDashboard: [“dashboardId”]
  • deleteDashboardWidget: [“widgetId”]
  • deleteWarehouse: [“id”]
  • deleteExternalDatasource: [“dataSourceId”]
  • deleteQuery: [“queryId”]
  • deleteQueryDraft: [“queryId”]
  • deleteQuerySnippet: [“querySnippetId”]
  • deleteRefreshSchedule: [“alertId”, “dashboardId”, “refreshScheduleId”]
  • deleteSubscription: [“subscriptionId”]
  • deleteVisualization: [“visualizationId”]
  • downloadQueryResult: [“fileType”, “queryId”, “queryResultId”]
  • editWarehouse: [“auto_stop_mins”, “channel”, “cluster_size”, “confs”, “enable_photon”, “enable_serverless_compute”, “id”, “instance_profile_arn”, “max_num_clusters”, “min_num_clusters”, “name”, “spot_instance_policy”, “tags”]
  • executeAdhocQuery: [“dataSourceId”]
  • executeSavedQuery: [“queryId”]
  • executeWidgetQuery: [“widgetId”]
  • favoriteDashboard: [“dashboardId”]
  • favoriteQuery: [“queryId”]
  • forkQuery: [“originalQueryId”, “queryId”]
  • listQueries: [“filter_by”, “include_metrics”, “max_results”, “page_token”]
  • moveDashboardToTrash: [“dashboardId”]
  • moveQueryToTrash: [“queryId”]
  • muteAlert: [“alertId”]
  • publishBatch: [“statuses”]
  • publishDashboardSnapshot: [“dashboardId”, “hookId”, “subscriptionId”]
  • restoreDashboard: [“dashboardId”]
  • restoreQuery: [“queryId”]
  • setWarehouseConfig: [“data_access_config”, “enable_serverless_compute”, “instance_profile_arn”, “security_policy”, “serverless_agreement”, “sql_configuration_parameters”, “try_create_databricks_managed_starter_warehouse”]
  • snapshotDashboard: [“dashboardId”]
  • startWarehouse: [“id”]
  • stopWarehouse: [“id”]
  • subscribeAlert: [“alertId”, “destinationId”]
  • transferObjectOwnership: [“newOwner”, “objectId”, “objectType”]
  • unfavoriteDashboard: [“dashboardId”]
  • unfavoriteQuery: [“queryId”]
  • unmuteAlert: [“alertId”]
  • unsubscribeAlert: [“alertId”, “subscriberId”]
  • updateAlert: [“alertId”, “queryId”]
  • updateAlertDestination: [“alertDestinationId”]
  • updateDashboard: [“dashboardId”]
  • updateDashboardWidget: [“widgetId”]
  • updateOrganizationSetting: [“has_configured_data_access”, “has_explored_sql_warehouses”, “has_granted_permissions”]
  • updateQuery: [“queryId”]
  • updateQueryDraft: [“queryId”]
  • updateQuerySnippet: [“querySnippetId”]
  • updateRefreshSchedule: [“alertId”, “dashboardId”, “refreshScheduleId”]
  • updateVisualization: [“visualizationId”]

Service: genie

  • databricksAccess: [“duration”, “approver”, “reason”, “authType”, “user”]

Service: gitCredentials

  • getGitCredential: [“id”]
  • listGitCredentials: []
  • deleteGitCredential: [“id”]
  • updateGitCredential: [“id”, “git_provider”, “git_username”]
  • createGitCredential: [“git_provider”, “git_username”]

Service: globalInitScripts

  • create: [“name”, “position”, “script-SHA256”, “enabled”]
  • update: [“script_id”, “name”, “position”, “script-SHA256”, “enabled”]
  • delete: [“script_id”]

Service: groups

  • addPrincipalToGroup: [“user_name”, “parent_name”]
  • createGroup: [“group_name”]
  • getGroupMembers: [“group_name”]
  • removeGroup: [“group_name”]

Service: iamRole

  • changeIamRoleAcl: [“targetUserId”, “shardName”, “resourceId”, “aclPermissionSet”]

Service: instancePools

  • changeInstancePoolAcl: [“shardName”, “resourceId”, “targetUserId”, “aclPermissionSet”]
  • create: [“enable_elastic_disk”, “preloaded_spark_versions”, “idle_instance_autotermination_minutes”, “instance_pool_name”, “node_type_id”, “custom_tags”, “max_capacity”, “min_idle_instances”, “aws_attributes”]
  • delete: [“instance_pool_id”]
  • edit: [“instance_pool_name”, “idle_instance_autotermination_minutes”, “min_idle_instances”, “preloaded_spark_versions”, “max_capacity”, “enable_elastic_disk”, “node_type_id”, “instance_pool_id”, “aws_attributes”]

Service: jobs

  • cancel: [“run_id”]
  • cancelAllRuns: [“job_id”]
  • changeJobAcl: [“shardName”, “aclPermissionSet”, “resourceId”, “targetUserId”]
  • create: [“spark_jar_task”, “email_notifications”, “notebook_task”, “spark_submit_task”, “timeout_seconds”, “libraries”, “name”, “spark_python_task”, “job_type”, “new_cluster”, “existing_cluster_id”, “max_retries”, “schedule”]
  • delete: [“job_id”]
  • deleteRun: [“run_id”]
  • reset: [“job_id”, “new_settings”]
  • resetJobAcl: [“grants”, “job_id”]
  • runFailed: [“jobClusterType”, “jobTriggerType”, “jobId”, “jobTaskType”, “runId”, “jobTerminalState”, “idInJob”, “orgId”]
  • runNow: [“notebook_params”, “job_id”, “jar_params”, “workflow_context”]
  • runSucceeded: [“idInJob”, “jobId”, “jobTriggerType”, “orgId”, “runId”, “jobClusterType”, “jobTaskType”, “jobTerminalState”]
  • submitRun: [“shell_command_task”, “run_name”, “spark_python_task”, “existing_cluster_id”, “notebook_task”, “timeout_seconds”, “libraries”, “new_cluster”, “spark_jar_task”]
  • update: [“fields_to_remove”, “job_id”, “new_settings”]

Service: mlflowExperiment

  • deleteMlflowExperiment: [“experimentId”, “path”, “experimentName”]
  • moveMlflowExperiment: [“newPath”, “experimentId”, “oldPath”]
  • restoreMlflowExperiment: [“experimentId”, “path”, “experimentName”]

Service: mlflowModelRegistry

  • listModelArtifacts: [“name”, “version”, “path”, “page_token”]
  • getModelVersionSignedDownloadUri: [“name”, “version”, “path”]
  • createRegisteredModel: [“name”, “tags”]
  • deleteRegisteredModel: [“name”]
  • renameRegisteredModel: [“name”, “new_name”]
  • setRegisteredModelTag: [“name”, “key”, “value”]
  • deleteRegisteredModelTag: [“name”, “key”]
  • createModelVersion: [“name”, “source”, “run_id”, “tags”, “run_link”]
  • deleteModelVersion: [“name”, “version”]
  • getModelVersionDownloadUri: [“name”, “version”]
  • setModelVersionTag: [“name”, “version”, “key”, “value”]
  • deleteModelVersionTag: [“name”, “version”, “key”]
  • createTransitionRequest: [“name”, “version”, “stage”]
  • deleteTransitionRequest: [“name”, “version”, “stage”, “creator”]
  • approveTransitionRequest: [“name”, “version”, “stage”, “archive_existing_versions”]
  • rejectTransitionRequest: [“name”, “version”, “stage”]
  • transitionModelVersionStage: [“name”, “version”, “stage”, “archive_existing_versions”]
  • transitionModelVersionStageDatabricks: [“name”, “version”, “stage”, “archive_existing_versions”]
  • createComment: [“name”, “version”]
  • updateComment: [“id”]
  • deleteComment: [“id”]

Service: notebook

  • attachNotebook: [“path”, “clusterId”, “notebookId”]
  • createNotebook: [“notebookId”, “path”]
  • deleteFolder: [“path”]
  • deleteNotebook: [“notebookId”, “notebookName”, “path”]
  • detachNotebook: [“notebookId”, “clusterId”, “path”]
  • downloadLargeResults: [“notebookId”, “notebookFullPath”]
  • downloadPreviewResults: [“notebookId”, “notebookFullPath”]
  • importNotebook: [“path”]
  • moveNotebook: [“newPath”, “oldPath”, “notebookId”]
  • renameNotebook: [“newName”, “oldName”, “parentPath”, “notebookId”]
  • restoreFolder: [“path”]
  • restoreNotebook: [“path”, “notebookId”, “notebookName”]
  • runCommand (only for verbose audit logs): [“notebookId”, “executionTime”, “status”, “commandId”, “commandText” (see details)]
  • takeNotebookSnapshot: [“path”]

Service: repos

  • createRepo: [“url”, “provider”, “path”]
  • updateRepo: [“id”, “branch”, “tag”, “git_url”, “git_provider”]
  • getRepo: [“id”]
  • listRepos: [“path_prefix”, “next_page_token”]
  • deleteRepo: [“id”]
  • pull: [“id”]
  • commitAndPush: [“id”, “message”, “files”, “checkSensitiveToken”]
  • checkoutBranch: [“id”, “branch”]
  • discard: [“id”, “file_paths”]

Service: secrets

  • createScope: [“scope”]
  • deleteScope: [“scope”]
  • deleteSecret: [“key”, “scope”]
  • getSecret: [“scope”, “key”]
  • listAcls: [“scope”]
  • listSecrets: [“scope”]
  • putSecret: [“string_value”, “scope”, “key”]

Service: sqlanalytics

  • createEndpoint
  • startEndpoint
  • stopEndpoint
  • deleteEndpoint
  • editEndpoint
  • changeEndpointAcls
  • setEndpointConfig
  • createQuery: [“queryId”]
  • updateQuery: [“queryId”]
  • forkQuery: [“queryId”, “originalQueryId”]
  • moveQueryToTrash: [“queryId”]
  • deleteQuery: [“queryId”]
  • restoreQuery: [“queryId”]
  • createDashboard: [“dashboardId”]
  • updateDashboard: [“dashboardId”]
  • moveDashboardToTrash: [“dashboardId”]
  • deleteDashboard: [“dashboardId”]
  • restoreDashboard: [“dashboardId”]
  • createAlert: [“alertId”, “queryId”]
  • updateAlert: [“alertId”, “queryId”]
  • deleteAlert: [“alertId”]
  • createVisualization: [“visualizationId”, “queryId”]
  • updateVisualization: [“visualizationId”]
  • deleteVisualization: [“visualizationId”]
  • changePermissions: [“objectType”, “objectId”, “granteeAndPermission”]
  • createAlertDestination: [“alertDestinationId”, “alertDestinationType”]
  • updateAlertDestination: [“alertDestinationId”]
  • deleteAlertDestination: [“alertDestinationId”]
  • createQuerySnippet: [“querySnippetId”]
  • updateQuerySnippet: [“querySnippetId”]
  • deleteQuerySnippet: [“querySnippetId”]
  • downloadQueryResult: [“queryId”, “queryResultId”, “fileType”]

Service: sqlPermissions

  • createSecurable: [“securable”]
  • grantPermission: [“permission”]
  • removeAllPermissions: [“securable”]
  • requestPermissions: [“requests”]
  • revokePermission: [“permission”]
  • showPermissions: [“securable”, “principal”]

Service: ssh

  • login: [“containerId”, “userName”, “port”, “publicKey”, “instanceId”]
  • logout: [“userName”, “containerId”, “instanceId”]

Service: webTerminal

  • startSession: [“socketGUID”, “clusterId”, “serverPort”, “ProxyTargetURI”]
  • closeSession: [“socketGUID”, “clusterId”, “serverPort”, “ProxyTargetURI”]

Service: workspace

  • changeWorkspaceAcl: [“shardName”, “targetUserId”, “aclPermissionSet”, “resourceId”]
  • fileCreate: [“path”]
  • fileDelete: [“path”]
  • moveWorkspaceNode: [“destinationPath”, “path”]
  • purgeWorkspaceNodes: [“treestoreId”]
  • workspaceConfEdit: [“workspaceConfKeys (values: enableResultsDownloading, enableExportNotebook)”, “workspaceConfValues”]
  • workspaceExport: [“workspaceExportFormat”, “notebookFullPath”]

Account-level audit log events

As above, each service is listed with its actions and the request parameters recorded in requestParams.

Service: accountBillableUsage

  • getAggregatedUsage: [“account_id”, “window_size”, “start_time”, “end_time”, “meter_name”, “workspace_ids_filter”]
  • getDetailedUsage: [“account_id”, “start_month”, “end_month”, “with_pii”]

Service: accounts

  • login: [“user”]
  • gcpWorkspaceBrowserLogin: [“user”]
  • logout: [“user”]

Service: accountsManager

  • updateAccount: [“account_id”, “account”]
  • changeAccountOwner: [“account_id”, “first_name”, “last_name”, “email”]
  • updateSubscription: [“account_id”, “subscription_id”, “subscription”]
  • listSubscriptions: [“account_id”]
  • createWorkspaceConfiguration: [“workspace”]
  • getWorkspaceConfiguration: [“account_id”, “workspace_id”]
  • listWorkspaceConfigurations: [“account_id”]
  • updateWorkspaceConfiguration: [“account_id”, “workspace_id”]
  • deleteWorkspaceConfiguration: [“account_id”, “workspace_id”]
  • listWorkspaceEncryptionKeyRecords: [“account_id”, “workspace_id”]
  • listWorkspaceEncryptionKeyRecordsForAccount: [“account_id”]
  • createVpcEndpoint: [“vpc_endpoint”]
  • getVpcEndpoint: [“account_id”, “vpc_endpoint_id”]
  • listVpcEndpoints: [“account_id”]
  • deleteVpcEndpoint: [“account_id”, “vpc_endpoint_id”]
  • createPrivateAccessSettings: [“private_access_settings”]
  • getPrivateAccessSettings: [“account_id”, “private_access_settings_id”]
  • listPrivateAccessSettingss: [“account_id”]
  • deletePrivateAccessSettings: [“account_id”, “private_access_settings_id”]

Service: logDelivery

  • createLogDeliveryConfiguration: [“account_id”, “config_id”]
  • updateLogDeliveryConfiguration: [“config_id”, “account_id”, “status”]
  • getLogDeliveryConfiguration: [“log_delivery_configuration”]
  • listLogDeliveryConfigurations: [“account_id”, “storage_configuration_id”, “credentials_id”, “status”]

Service: ssoConfigBackend

  • create: [“account_id”, “sso_type”, “config”]
  • update: [“account_id”, “sso_type”, “config”]
  • get: [“account_id”, “sso_type”]

Analyze audit logs

You can analyze audit logs using Databricks. The following example uses logs to report on Databricks access and Apache Spark versions.

Load audit logs as a DataFrame and register the DataFrame as a temp table.

// Load the delivered JSON audit log files and register them as a temp table for SQL queries.
val df = spark.read.format("json").load("gs://bucketName/path/to/your/audit-logs")
df.createOrReplaceTempView("audit_logs")

List the users who accessed Databricks and from where.

%sql
SELECT DISTINCT userIdentity.email, sourceIPAddress
FROM audit_logs
WHERE serviceName = "accounts" AND actionName LIKE "%login%"

Check the Apache Spark versions used.

%sql
SELECT requestParams.spark_version, COUNT(*)
FROM audit_logs
WHERE serviceName = "clusters" AND actionName = "create"
GROUP BY requestParams.spark_version

Check table data access.

%sql
SELECT *
FROM audit_logs
WHERE serviceName = "sqlPermissions" AND actionName = "requestPermissions"
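As a further example, you can report on actions that returned an error status. This follows the same pattern as the queries above and assumes the same audit_logs view; treating statusCode as numeric is an assumption about the inferred schema.

%sql
SELECT serviceName, actionName, response.statusCode, COUNT(*) AS errorCount
FROM audit_logs
WHERE response.statusCode >= 400
GROUP BY serviceName, actionName, response.statusCode
ORDER BY errorCount DESC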