Billable usage log schema (legacy)

Note

This article includes details about the legacy usage logs, which do not record usage for all products. Databricks recommends using the billable usage system table to access and query complete usage data.

This article explains how to read and analyze the usage log data downloaded from the account console.

You can view and download billable usage directly in the account console, or by using the Account API.
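If you want to pull the CSV files programmatically, the Account API exposes a billable usage download endpoint. The following sketch is illustrative only: the endpoint path, query parameters, credentials, and account ID are assumptions to adapt to your account, so confirm them against the Account API reference.

import requests

# Illustrative only: download billable usage CSV data with the Account API.
# The endpoint path, parameters, and authentication shown here are assumptions --
# confirm them against the Account API reference for your account.
ACCOUNT_ID = "<account-id>"
BASE_URL = f"https://accounts.cloud.databricks.com/api/2.0/accounts/{ACCOUNT_ID}"

response = requests.get(
    f"{BASE_URL}/usage/download",
    params={"start_month": "2019-01", "end_month": "2019-02", "personal_data": "false"},
    auth=("<account-admin-username>", "<password>"),  # placeholder credentials
)
response.raise_for_status()

# Save the returned CSV so it can be imported into Databricks for analysis.
with open("usage_data.csv", "wb") as f:
    f.write(response.content)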

CSV file schema

| Column | Type | Description | Example |
| --- | --- | --- | --- |
| workspaceId | string | ID of the workspace. | 1234567890123456 |
| timestamp | datetime | End of the hour for the provided usage. | 2019-02-22T09:59:59.999Z |
| clusterId | string | ID of the cluster (for a cluster) or of the warehouse (for a SQL warehouse). | Cluster example: 0406-020048-brawl507. SQL warehouse example: 8e00f0c8b392983e |
| clusterName | string | User-provided name for the cluster/warehouse. | Shared Autoscaling |
| clusterNodeType | string | Instance type of the cluster/warehouse. | Cluster example: m4.16xlarge. SQL warehouse example: db.xlarge |
| clusterOwnerUserId | string | ID of the user who created the cluster/warehouse. | 12345678901234 |
| clusterCustomTags | string ("-escaped JSON) | Custom tags associated with the cluster/warehouse during this hour. | "{""dept"":""mktg"",""op_phase"":""dev""}" |
| sku | string | Billing SKU. See the Billing SKUs list below for possible values. | STANDARD_ALL_PURPOSE_COMPUTE |
| dbus | double | Number of DBUs used by the user during this hour. | 1.2345 |
| machineHours | double | Total number of machine hours used by all containers in the cluster/warehouse. | 12.345 |
| clusterOwnerUserName | string | Username (email) of the user who created the cluster/warehouse. | user@yourcompany.com |
| tags | string ("-escaped JSON) | Default and custom cluster/warehouse tags, and default and custom instance pool tags (if applicable) associated with the cluster during this hour. See Cluster tags, Warehouse tags, and Pool tags. This is a superset of the clusterCustomTags column. | "{""dept"":""mktg"",""op_phase"":""dev"", ""Vendor"":""Databricks"", ""ClusterId"":""0405-020048-brawl507"", ""Creator"":""user@yourcompany.com""}" |

Billing SKUs

  • ENTERPRISE_ALL_PURPOSE_COMPUTE

  • ENTERPRISE_ALL_PURPOSE_COMPUTE_(PHOTON)

  • ENTERPRISE_JOBS_COMPUTE

  • ENTERPRISE_JOBS_COMPUTE_(PHOTON)

  • ENTERPRISE_JOBS_LIGHT_COMPUTE

  • ENTERPRISE_SQL_COMPUTE

  • ENTERPRISE_DLT_CORE_COMPUTE

  • ENTERPRISE_DLT_CORE_COMPUTE_(PHOTON)

  • ENTERPRISE_DLT_PRO_COMPUTE

  • ENTERPRISE_DLT_PRO_COMPUTE_(PHOTON)

  • ENTERPRISE_DLT_ADVANCED_COMPUTE

  • ENTERPRISE_DLT_ADVANCED_COMPUTE_(PHOTON)

  • PREMIUM_ALL_PURPOSE_COMPUTE

  • PREMIUM_ALL_PURPOSE_COMPUTE_(PHOTON)

  • PREMIUM_JOBS_COMPUTE

  • PREMIUM_JOBS_COMPUTE_(PHOTON)

  • PREMIUM_JOBS_LIGHT_COMPUTE

  • PREMIUM_SQL_COMPUTE

  • PREMIUM_DLT_CORE_COMPUTE

  • PREMIUM_DLT_CORE_COMPUTE_(PHOTON)

  • PREMIUM_DLT_PRO_COMPUTE

  • PREMIUM_DLT_PRO_COMPUTE_(PHOTON)

  • PREMIUM_DLT_ADVANCED_COMPUTE

  • PREMIUM_DLT_ADVANCED_COMPUTE_(PHOTON)

  • STANDARD_ALL_PURPOSE_COMPUTE

  • STANDARD_ALL_PURPOSE_COMPUTE_(PHOTON)

  • STANDARD_JOBS_COMPUTE

  • STANDARD_JOBS_COMPUTE_(PHOTON)

  • STANDARD_JOBS_LIGHT_COMPUTE

  • STANDARD_DLT_CORE_COMPUTE

  • STANDARD_DLT_CORE_COMPUTE_(PHOTON)

  • STANDARD_DLT_PRO_COMPUTE

  • STANDARD_DLT_PRO_COMPUTE_(PHOTON)

  • STANDARD_DLT_ADVANCED_COMPUTE

  • STANDARD_DLT_ADVANCED_COMPUTE_(PHOTON)

Analyze usage data in Databricks

This section describes how to make the data in the billable usage CSV file available to Databricks for analysis.

The CSV file uses a format that is standard for commercial spreadsheet applications, but it requires one extra option to be read correctly by Apache Spark: you must set option("escape", "\"") when you create the usage table in Databricks, so that the quote-escaped JSON columns are parsed as single fields.

Total DBUs are the sum of the dbus column.

Import the log using the Create Table UI

You can import the CSV file into Databricks for analysis with the add data UI. See Load data using the add data UI.

Create a Spark DataFrame

You can also use the following code to create the usage table from a path to the CSV file:

df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .option("escape", "\"")  # required so the quote-escaped JSON columns are read as single fields
      .csv("/FileStore/tables/usage_data.csv"))

df.createOrReplaceTempView("usage")
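
For example, to compute total DBUs (the sum of the dbus column, as noted above) overall and per billing SKU, you can aggregate the DataFrame created above. A minimal sketch:

from pyspark.sql import functions as F

# Total DBUs across the whole file.
df.agg(F.sum("dbus").alias("total_dbus")).show()

# Total DBUs broken down by billing SKU.
(df.groupBy("sku")
   .agg(F.sum("dbus").alias("total_dbus"))
   .orderBy(F.desc("total_dbus"))
   .show())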

Create a Delta table

To create a Delta table from the DataFrame (df) in the previous example, use the following code:

(df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("database_name.table_name")
)

Warning

The saved Delta table is not updated automatically when you add new CSV files or replace existing ones. If you need the latest data, re-run these commands before you use the Delta table.
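
To attribute usage to individual tags, you can parse the quote-escaped JSON in the tags column into a map and explode it into key/value pairs. This is an illustrative sketch that assumes the df DataFrame created above and uses the dept tag from the schema example; adjust the tag keys to match your own tagging scheme.

from pyspark.sql import functions as F
from pyspark.sql.types import MapType, StringType

# Parse the tags column (a JSON string) into a map and explode it into
# one row per (tag_key, tag_value) pair.
tagged = (df
          .withColumn("tag_map", F.from_json("tags", MapType(StringType(), StringType())))
          .select("workspaceId", "sku", "dbus",
                  F.explode("tag_map").alias("tag_key", "tag_value")))

# Example: total DBUs per value of the dept tag.
(tagged.filter(F.col("tag_key") == "dept")
       .groupBy("tag_value")
       .agg(F.sum("dbus").alias("total_dbus"))
       .show())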