Error conditions in Databricks

This is a list of common, named error conditions returned by Databricks.

Databricks Runtime and Databricks SQL

AMBIGUOUS_CONSTRAINT

Ambiguous reference to constraint <constraint>.

ARITHMETIC_OVERFLOW

<message>.<alternative> If necessary set <config> to “false” (except for ANSI interval type) to bypass this error.

BUILT_IN_CATALOG

<operation> doesn’t support built-in catalogs.

CANNOT_CAST_DATATYPE

Cannot cast <sourceType> to <targetType>.

CANNOT_CHANGE_DECIMAL_PRECISION

<value> cannot be represented as Decimal(<precision>, <scale>). If necessary set <ansiConfig> to “false” to bypass this error.

CANNOT_COPY_STATE

Cannot copy catalog state like current database and temporary views from Unity Catalog to a legacy catalog.

CANNOT_DELETE_SYSTEM_OWNED

System owned <resourceType> cannot be deleted.

CANNOT_DROP_AMBIGUOUS_CONSTRAINT

Cannot drop the constraint with the name <constraintName> shared by a CHECK constraint and a PRIMARY KEY or FOREIGN KEY constraint. You can drop the PRIMARY KEY or FOREIGN KEY constraint by queries: ALTER TABLE .. DROP PRIMARY KEY or ALTER TABLE .. DROP FOREIGN KEY ..
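
For illustration, a minimal sketch of the two drop statements, assuming a hypothetical table persons with a primary key and a foreign key on a hypothetical column parent_id:

  -- Drops only the PRIMARY KEY constraint, leaving a same-named CHECK constraint in place
  ALTER TABLE persons DROP PRIMARY KEY;

  -- Drops a FOREIGN KEY constraint defined on the column parent_id
  ALTER TABLE persons DROP FOREIGN KEY (parent_id);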

CANNOT_INFER_DATE

Cannot infer date in schema inference when LegacyTimeParserPolicy is “LEGACY”. Legacy Date formatter does not support strict date format matching which is required to avoid inferring timestamps and other non-date entries to date.

CANNOT_PARSE_DECIMAL

Cannot parse decimal

CANNOT_PARSE_TIMESTAMP

<message>. If necessary set <ansiConfig> to “false” to bypass this error.

CANNOT_READ_SENSITIVE_KEY_FROM_SECURE_PROVIDER

Cannot read sensitive key ‘<key>’ from secure provider

CANNOT_REFERENCE_UC_IN_HMS

Cannot reference a Unity Catalog <objType> in Hive Metastore objects.

CANNOT_RENAME_ACROSS_METASTORE

Renaming a table across metastore services is not allowed.

CANNOT_UP_CAST_DATATYPE

Cannot up cast <expression> from <sourceType> to <targetType>.

<details>

CAST_INVALID_INPUT

The value <expression> of the type <sourceType> cannot be cast to <targetType> because it is malformed. Correct the value as per the syntax, or change its target type. Use try_cast to tolerate malformed input and return NULL instead. If necessary set <ansiConfig> to “false” to bypass this error.
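
For example, a strict CAST on a malformed literal raises this error, while try_cast degrades to NULL (the values here are illustrative):

  -- Raises CAST_INVALID_INPUT under ANSI mode
  SELECT CAST('not_a_number' AS INT);

  -- Returns NULL instead of failing
  SELECT try_cast('not_a_number' AS INT);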

CAST_OVERFLOW

The value <value> of the type <sourceType> cannot be cast to <targetType> due to an overflow. Use try_cast to tolerate overflow and return NULL instead. If necessary set <ansiConfig> to “false” to bypass this error.

CAST_OVERFLOW_IN_TABLE_INSERT

Fail to insert a value of <sourceType> type into the <targetType> type column <columnName> due to an overflow. Use try_cast on the input value to tolerate overflow and return NULL instead.

CONCURRENT_QUERY

Another instance of this query was just started by a concurrent session.

CONSTRAINTS_REQUIRE_UNITY_CATALOG

Table constraints are only supported in Unity Catalog.

COPY_INTO_CREDENTIALS_NOT_ALLOWED_ON

Invalid scheme <scheme>. COPY INTO source encryption currently only supports s3/s3n/s3a/wasbs/abfss.

COPY_INTO_CREDENTIALS_REQUIRED

COPY INTO source credentials must specify <keyList>.

COPY_INTO_DUPLICATED_FILES_COPY_NOT_ALLOWED

Duplicated files were committed in a concurrent COPY INTO operation. Please try again later.

COPY_INTO_ENCRYPTION_NOT_ALLOWED_ON

Invalid scheme <scheme>. COPY INTO source encryption currently only supports s3/s3n/s3a/abfss.

COPY_INTO_ENCRYPTION_NOT_SUPPORTED_FOR_AZURE

COPY INTO encryption only supports ADLS Gen2, or abfss:// file scheme

COPY_INTO_ENCRYPTION_REQUIRED

COPY INTO source encryption must specify ‘<key>’.

COPY_INTO_ENCRYPTION_REQUIRED_WITH_EXPECTED

Invalid encryption option <requiredKey>. COPY INTO source encryption must specify ‘<requiredKey>’ = ‘<keyValue>’.

COPY_INTO_NON_BLIND_APPEND_NOT_ALLOWED

COPY INTO other than appending data is not allowed to run concurrently with other transactions. Please try again later.

COPY_INTO_ROCKSDB_MAX_RETRY_EXCEEDED

COPY INTO failed to load its state, maximum retries exceeded.

COPY_INTO_SOURCE_FILE_FORMAT_NOT_SUPPORTED

The format of the source files must be one of CSV, JSON, AVRO, ORC, PARQUET, TEXT, or BINARYFILE. Using COPY INTO on Delta tables as the source is not supported as duplicate data may be ingested after OPTIMIZE operations. This check can be turned off by running the SQL command set spark.databricks.delta.copyInto.formatCheck.enabled = false.
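
As a sketch, a COPY INTO statement that names a supported format explicitly; the table name and path are hypothetical:

  COPY INTO my_target_table
  FROM 's3://my-bucket/raw-data/'
  -- Must be one of CSV, JSON, AVRO, ORC, PARQUET, TEXT, or BINARYFILE
  FILEFORMAT = PARQUET;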

DATETIME_OVERFLOW

Datetime operation overflow: <operation>.

DIVIDE_BY_ZERO

Division by zero. Use try_divide to tolerate divisor being 0 and return NULL instead. If necessary set <ansiConfig> to “false” (except for ANSI interval type) to bypass this error.
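
For example, the / operator raises this error under ANSI mode, while try_divide returns NULL for a zero divisor:

  -- Raises DIVIDE_BY_ZERO under ANSI mode
  SELECT 10 / 0;

  -- Returns NULL instead of failing
  SELECT try_divide(10, 0);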

DUPLICATE_KEY

Found duplicate keys <keyColumn>

ELEMENT_AT_BY_INDEX_ZERO

The index 0 is invalid. An index shall be either < 0 or > 0 (the first element has index 1).

EXCEPT_NESTED_COLUMN_INVALID_TYPE

EXCEPT column <columnName> was resolved and expected to be StructType, but found type <dataType>.

EXCEPT_OVERLAPPING_COLUMNS

Columns in an EXCEPT list must be distinct and non-overlapping.

EXCEPT_UNRESOLVED_COLUMN_IN_STRUCT_EXPANSION

The column/field name <objectName> in the EXCEPT clause cannot be resolved. Did you mean one of the following: [<objectList>]?

Note: nested columns in the EXCEPT clause may not include qualifiers (table name, parent struct column name, etc.) during a struct expansion; try removing qualifiers if they are used with nested columns.

EXT_TABLE_INVALID_SCHEME

External tables don’t support the <scheme> scheme.

FAILED_EXECUTE_UDF

Failed to execute user defined function (<functionName>: (<signature>) => <result>)

FAILED_RENAME_PATH

Failed to rename <sourcePath> to <targetPath> as destination already exists

FORBIDDEN_OPERATION

The operation <statement> is not allowed on the <objectType>: <objectName>

FOREIGN_KEY_MISMATCH

Foreign key parent columns <parentColumns> do not match primary key child columns <childColumns>.

GRAPHITE_SINK_INVALID_PROTOCOL

Invalid Graphite protocol: <protocol>

GRAPHITE_SINK_PROPERTY_MISSING

Graphite sink requires ‘<property>’ property.

GROUPING_COLUMN_MISMATCH

Column of grouping (<grouping>) can’t be found in grouping columns <groupingColumns>

GROUPING_ID_COLUMN_MISMATCH

Columns of grouping_id (<groupingIdColumn>) do not match grouping columns (<groupByColumns>)

GROUPING_SIZE_LIMIT_EXCEEDED

Grouping sets size cannot be greater than <maxSize>

INCOMPARABLE_PIVOT_COLUMN

Invalid pivot column <columnName>. Pivot columns must be comparable.

INCOMPATIBLE_DATASOURCE_REGISTER

Detected an incompatible DataSourceRegister. Please remove the incompatible library from classpath or upgrade it. Error: <message>

INCONSISTENT_BEHAVIOR_CROSS_VERSION

You may get a different result due to the upgrading to

For more details see INCONSISTENT_BEHAVIOR_CROSS_VERSION

INCORRECT_NUMBER_OF_ARGUMENTS

<failure>, <functionName> requires at least <minArgs> arguments and at most <maxArgs> arguments.

INDEX_OUT_OF_BOUNDS

Index <indexValue> must be between 0 and the length of the ArrayData.

INSUFFICIENT_PERMISSIONS_EXT_LOC

User ‘<user>’ has insufficient privileges for external location ‘<location>’.

INSUFFICIENT_PERMISSIONS_STORAGE_CRED

Storage credential ‘<credentialName>’ has insufficient privileges.

INTERNAL_ERROR

<message>

INVALID_AGGREGATE_FUNCTION_USAGE_IN_SQL_FUNCTION

Invalid aggregate function usage in SQL function: <functionName>

INVALID_ARRAY_INDEX

The index <indexValue> is out of bounds. The array has <arraySize> elements. Use try_element_at and increase the array index by 1(the starting array index is 1 for try_element_at) to tolerate accessing element at invalid index and return NULL instead. If necessary set <ansiConfig> to “false” to bypass this error.

INVALID_ARRAY_INDEX_IN_ELEMENT_AT

The index <indexValue> is out of bounds. The array has <arraySize> elements. Use try_element_at to tolerate accessing element at invalid index and return NULL instead. If necessary set <ansiConfig> to “false” to bypass this error.
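
For example, with a three-element array, index 4 raises this error from element_at but returns NULL from try_element_at (both functions use 1-based indexing):

  -- Raises INVALID_ARRAY_INDEX_IN_ELEMENT_AT under ANSI mode
  SELECT element_at(array(1, 2, 3), 4);

  -- Returns NULL instead of failing
  SELECT try_element_at(array(1, 2, 3), 4);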

INVALID_BUCKET_FILE

Invalid bucket file: <path>

INVALID_CLONE_PATH

The target location for CLONE needs to be an absolute path or table name. Use an absolute path instead of <path>.

INVALID_COLUMN_OR_FIELD_DATA_TYPE

Column or field <name> is of type <type> while it’s required to be <expectedType>.

INVALID_DEST_CATALOG

Destination catalog of the SYNC command must be within Unity Catalog. Found <catalog>.

INVALID_FIELD_NAME

Field name <fieldName> is invalid: <path> is not a struct.

INVALID_FRACTION_OF_SECOND

The fraction of sec must be zero. Valid range is [0, 60]. If necessary set <ansiConfig> to “false” to bypass this error.

INVALID_IDENTIFIER

Invalid identifier <identifier>.

INVALID_JSON_SCHEMA_MAP_TYPE

Input schema <jsonSchema> can only contain STRING as a key type for a MAP.

INVALID_PANDAS_UDF_PLACEMENT

The group aggregate pandas UDF <functionList> cannot be invoked together with other, non-pandas aggregate functions.

INVALID_PARAMETER_VALUE

The value of parameter(s) ‘<parameter>’ in <functionName> is invalid: <expected>

INVALID_PRIVILEGE

Privilege <privilege> is not valid for <securable>.

INVALID_PROPERTY_KEY

<key> is an invalid property key, please use quotes, e.g. SET <key>=<value>

INVALID_PROPERTY_VALUE

<value> is an invalid property value, please use quotes, e.g. SET <key>=<value>

INVALID_S3_COPY_CREDENTIALS

COPY INTO credentials must include AWS_ACCESS_KEY, AWS_SECRET_KEY, and AWS_SESSION_TOKEN.
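
A minimal sketch of a COPY INTO statement that supplies all three keys; the table, bucket, and credential values are hypothetical placeholders:

  COPY INTO my_target_table
  FROM 's3://my-bucket/raw-data/' WITH (
    -- All three keys must be present for temporary S3 credentials
    CREDENTIAL (
      AWS_ACCESS_KEY = '...',
      AWS_SECRET_KEY = '...',
      AWS_SESSION_TOKEN = '...'
    )
  )
  FILEFORMAT = CSV;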

INVALID_SCHEME

Unity Catalog does not support <name> as the default file scheme.

INVALID_SOURCE_CATALOG

Source catalog must not be within Unity Catalog for the SYNC command. Found <catalog>.

INVALID_SQL_FUNCTION_PLAN_STRUCTURE

Invalid SQL function plan structure

<plan>

INVALID_SQL_SYNTAX

Invalid SQL syntax: <inputString>

INVALID_TIMESTAMP_FORMAT

The provided timestamp <timestamp> doesn’t match the expected syntax <format>.

MANAGED_TABLE_WITH_CRED

Create managed table with storage credential is not supported.

MISSING_NAME_FOR_CHECK_CONSTRAINT

CHECK constraint must have a name.

MISSING_STATIC_PARTITION_COLUMN

Unknown static partition column: <columnName>

MODIFY_BUILTIN_CATALOG

Modifying built-in catalog <catalogName> is not supported.

MULTIPLE_LOAD_PATH

Databricks Delta does not support multiple input paths in the load() API. paths: <pathList>. To build a single DataFrame by loading multiple paths from the same Delta table, please load the root path of the Delta table with the corresponding partition filters. If the multiple paths are from different Delta tables, please use Dataset’s union()/unionByName() APIs to combine the DataFrames generated by separate load() API calls.

MULTIPLE_MATCHING_CONSTRAINTS

Found at least two matching constraints with the given condition.

MULTI_UDF_INTERFACE_ERROR

Not allowed to implement multiple UDF interfaces, UDF class <className>

MULTI_VALUE_SUBQUERY_ERROR

more than one row returned by a subquery used as an expression: <plan>

NON_LITERAL_PIVOT_VALUES

Literal expressions required for pivot values, found <expression>.

NON_PARTITION_COLUMN

PARTITION clause cannot contain the non-partition column: <columnName>.

NOT_A_TABLE_FUNCTION

<functionName> is not a table function. Please check the function usage: DESCRIBE FUNCTION <functionName>

NOT_A_VALID_DEFAULT_EXPRESSION

The DEFAULT expression of <functionName>.<parameterName> is not supported because it contains a subquery.

NOT_A_VALID_DEFAULT_PARAMETER_POSITION

In routine <functionName> parameter <parameterName> with DEFAULT must not be followed by parameter <nextParameterName> without DEFAULT.

NOT_SUPPORTED_WITH_DB_SQL

<operation> is not supported on a SQL <warehouse>.

NO_HANDLER_FOR_UDAF

No handler for UDAF ‘<functionName>’. Use sparkSession.udf.register(…) instead.

NO_UDF_INTERFACE_ERROR

UDF class <className> doesn’t implement any UDF interface

NULLABLE_ARRAY_OR_MAP_ELEMENT

Array or map at <columnPath> contains nullable element while it’s required to be non-nullable.

NULLABLE_COLUMN_OR_FIELD

Column or field <name> is nullable while it’s required to be non-nullable.

NULL_COMPARISON_RESULT

The comparison result is null. If you want to handle null as 0 (equal), you can set “spark.sql.legacy.allowNullComparisonResultInArraySort” to “true”.

OPERATION_REQUIRES_UNITY_CATALOG

Operation <operation> requires Unity Catalog enabled.

OP_NOT_SUPPORTED_READ_ONLY

<plan> is not supported in read-only session mode.

PARSE_CHAR_MISSING_LENGTH

DataType <type> requires a length parameter, for example <type>(10). Please specify the length.

PARSE_EMPTY_STATEMENT

Syntax error, unexpected empty statement

PARSE_SYNTAX_ERROR

Syntax error at or near <error><hint>

PARTITION_METADATA

<action> is not allowed on table <tableName> since storing partition metadata is not supported in Unity Catalog.

PARTITION_SCHEMA_IN_ICEBERG_TABLES

Partition schema cannot be specified when converting Iceberg tables

PIVOT_VALUE_DATA_TYPE_MISMATCH

Invalid pivot value ‘<value>’: value data type <valueType> does not match pivot column data type <pivotType>

RENAME_SRC_PATH_NOT_FOUND

Failed to rename as <sourcePath> was not found

RESERVED_CDC_COLUMNS_ON_WRITE

The write contains reserved columns <columnList> that are used internally as metadata for Change Data Feed. To write to the table either rename/drop these columns or disable Change Data Feed on the table by setting <config> to false.

RESET_PERMISSION_TO_ORIGINAL

Failed to set original permission <permission> back to the created path: <path>. Exception: <message>

SAMPLE_TABLE_PERMISSIONS

Permissions not supported on sample databases/tables.

SECOND_FUNCTION_ARGUMENT_NOT_INTEGER

The second argument of <functionName> function needs to be an integer.

SYNC_METADATA_DELTA_ONLY

Repair table sync metadata command is only supported for Delta tables.

SYNC_METADATA_NOT_SUPPORTED

Repair table sync metadata command is only supported for Unity Catalog tables.

SYNC_SRC_TARGET_TBL_NOT_SAME

Source table name <srcTable> must be same as destination table name <destTable>.

UC_BUCKETED_TABLES

Bucketed tables are not supported in Unity Catalog.

UC_CATALOG_NAME_NOT_PROVIDED

For Unity Catalog, please specify the catalog name explicitly. E.g. SHOW GRANT your.address@email.com ON CATALOG main.

UC_COMMAND_NOT_SUPPORTED

<commandName> <isOrAre> not supported in Unity Catalog.

UC_DATASOURCE_NOT_SUPPORTED

Data source format <dataSourceFormatName> is not supported in Unity Catalog.

UC_DATASOURCE_OPTIONS_NOT_SUPPORTED

Data source options are not supported in Unity Catalog.

UC_INVALID_NAMESPACE

Nested or empty namespaces are not supported in Unity Catalog.

UC_INVALID_REFERENCE

Non-Unity-Catalog object <name> can’t be referenced in Unity Catalog objects.

UC_NOT_ENABLED

Unity Catalog is not enabled on this cluster.

UNABLE_TO_ACQUIRE_MEMORY

Unable to acquire <requestedBytes> bytes of memory, got <receivedBytes>

UNKNOWN_TABLE_TYPE

Unsupported table type <type>.

UNPIVOT_REQUIRES_VALUE_COLUMNS

At least one value column needs to be specified for UNPIVOT; all columns were specified as ids.

UNPIVOT_VALUE_DATA_TYPE_MISMATCH

Unpivot value columns must share a least common type, some types do not: [<types>]

UNRECOGNIZED_SQL_TYPE

Unrecognized SQL type <typeName>

UNRESOLVED_COLUMN

A column or function parameter with name <objectName> cannot be resolved. Did you mean one of the following? [<objectList>]

UNRESOLVED_FIELD

A field with name <fieldName> cannot be resolved with the struct-type column <columnPath>. Did you mean one of the following? [<proposal>]

UNRESOLVED_MAP_KEY

Cannot resolve column <columnName> as a map key. If the key is a string literal, please add single quotes around it. Otherwise, did you mean one of the following column(s)? [<proposal>]

UNSUPPORTED_CONSTRAINT_CLAUSES

Constraint clauses <clauses> are unsupported.

UNSUPPORTED_CONSTRAINT_TYPE

Unsupported constraint type. Only <supportedConstraintTypes> are supported

UNSUPPORTED_DATATYPE

Unsupported data type <typeName>

UNSUPPORTED_DESERIALIZER

The deserializer is not supported:

For more details see UNSUPPORTED_DESERIALIZER

UNSUPPORTED_FEATURE

The feature is not supported:

For more details see UNSUPPORTED_FEATURE

UNSUPPORTED_FN_TYPE

Unsupported user defined function type: <language>

UNSUPPORTED_GENERATOR

The generator is not supported:

For more details see UNSUPPORTED_GENERATOR

UNSUPPORTED_GROUPING_EXPRESSION

grouping()/grouping_id() can only be used with GroupingSets/Cube/Rollup

UNSUPPORTED_SAVE_MODE

The save mode <saveMode> is not supported for:

For more details see UNSUPPORTED_SAVE_MODE

UNTYPED_SCALA_UDF

You’re using an untyped Scala UDF, which does not have the input type information. Spark may blindly pass null to the Scala closure with a primitive-type argument, and the closure will see the default value of the Java type for the null argument, e.g. udf((x: Int) => x, IntegerType) returns 0 for a null input. To get rid of this error, you could:

  1. use typed Scala UDF APIs (without return type parameter), e.g. udf((x: Int) => x)

  2. use Java UDF APIs, e.g. udf(new UDF1[String, Integer] { override def call(s: String): Integer = s.length() }, IntegerType), if input types are all non-primitive

  3. set “spark.sql.legacy.allowUntypedScalaUDF” to “true” and use this API with caution

UPGRADE_NOT_SUPPORTED

Table is not eligible for upgrade from Hive Metastore to Unity Catalog. Reason:

For more details see UPGRADE_NOT_SUPPORTED

WITH_CREDENTIAL

WITH CREDENTIAL syntax is not supported for <type>.

WRITING_JOB_ABORTED

Writing job aborted

ZORDERBY_COLUMN_DOES_NOT_EXIST

ZOrderBy column <columnName> doesn’t exist.

Delta Lake

DELTA_ACTIVE_SPARK_SESSION_NOT_FOUND

Could not find active SparkSession

DELTA_ACTIVE_TRANSACTION_ALREADY_SET

Cannot set a new txn as active when one is already active

DELTA_ADD_COLUMN_AT_INDEX_LESS_THAN_ZERO

Index <columnIndex> to add column <columnName> is lower than 0

DELTA_ADD_COLUMN_STRUCT_NOT_FOUND

Struct not found at position <position>

DELTA_ADD_CONSTRAINTS

Please use ALTER TABLE ADD CONSTRAINT to add CHECK constraints.
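
For example, adding a named CHECK constraint to a hypothetical Delta table events:

  -- Existing rows are validated against the constraint before it is added
  ALTER TABLE events ADD CONSTRAINT valid_date CHECK (event_date > '1900-01-01');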

DELTA_AGGREGATE_IN_GENERATED_COLUMN

Found <sqlExpr>. A generated column cannot use an aggregate expression

DELTA_AGGREGATION_NOT_SUPPORTED

Aggregate functions are not supported in the <operation> <predicate>.

DELTA_ALTER_TABLE_CHANGE_COL_NOT_SUPPORTED

ALTER TABLE CHANGE COLUMN is not supported for changing column <currentType> to <newType>

DELTA_ALTER_TABLE_RENAME_NOT_ALLOWED

Operation not allowed: ALTER TABLE RENAME TO is not allowed for managed Delta tables on S3, as eventual consistency on S3 may corrupt the Delta transaction log. If you insist on doing so and are sure that there has never been a Delta table with the new name <newName> before, you can enable this by setting <key> to be true.

DELTA_AMBIGUOUS_PARTITION_COLUMN

Ambiguous partition column <column> can be <colMatches>.

DELTA_AMBIGUOUS_PATHS_IN_CREATE_TABLE

CREATE TABLE contains two different locations: <identifier> and <location>. You can remove the LOCATION clause from the CREATE TABLE statement, or set <config> to true to skip this check.

DELTA_BLOCK_CDF_COLUMN_MAPPING_READS

Change data feed (CDF) reads are currently not supported on tables with column mapping enabled.

DELTA_BLOCK_COLUMN_MAPPING_AND_CDC_OPERATION

Operation “<opName>” is not allowed when the table has enabled change data feed (CDF) and has undergone schema changes using DROP COLUMN or RENAME COLUMN.

DELTA_BLOOM_FILTER_DROP_ON_NON_EXISTING_COLUMNS

Cannot drop bloom filter indices for the following non-existent column(s): <unknownColumns>

DELTA_CANNOT_CHANGE_DATA_TYPE

Cannot change data type: <dataType>

DELTA_CANNOT_CHANGE_LOCATION

Cannot change the ‘location’ of the Delta table using SET TBLPROPERTIES. Please use ALTER TABLE SET LOCATION instead.

DELTA_CANNOT_CHANGE_PROVIDER

‘provider’ is a reserved table property, and cannot be altered.

DELTA_CANNOT_CONVERT_TO_FILEFORMAT

Cannot convert <className> to FileFormat.

DELTA_CANNOT_CREATE_BLOOM_FILTER_NON_EXISTING_COL

Cannot create bloom filter indices for the following non-existent column(s): <unknownCols>

DELTA_CANNOT_CREATE_LOG_PATH

Cannot create <path>

DELTA_CANNOT_DESCRIBE_VIEW_HISTORY

Cannot describe the history of a view.

DELTA_CANNOT_DROP_BLOOM_FILTER_ON_NON_INDEXED_COLUMN

Cannot drop bloom filter index on a non indexed column: <columnName>

DELTA_CANNOT_EVALUATE_EXPRESSION

Cannot evaluate expression: <expression>

DELTA_CANNOT_FIND_BUCKET_SPEC

Expecting a bucketing Delta table but cannot find the bucket spec in the table

DELTA_CANNOT_FIND_VERSION

Cannot find ‘sourceVersion’ in <json>

DELTA_CANNOT_GENERATE_CODE_FOR_EXPRESSION

Cannot generate code for expression: <expression>

DELTA_CANNOT_GENERATE_UPDATE_EXPRESSIONS

Calling without generated columns should always return an update expression for each column

DELTA_CANNOT_MODIFY_APPEND_ONLY

This table is configured to only allow appends. If you would like to permit updates or deletes, use ‘ALTER TABLE <table_name> SET TBLPROPERTIES (<config>=false)’.
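
A minimal sketch of the suggested fix, assuming the property behind <config> is delta.appendOnly and a hypothetical table events:

  -- Re-enables UPDATE and DELETE on an append-only Delta table
  ALTER TABLE events SET TBLPROPERTIES ('delta.appendOnly' = 'false');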

DELTA_CANNOT_RECONSTRUCT_PATH_FROM_URI

A URI (<uri>) which can’t be turned into a relative path was found in the transaction log.

DELTA_CANNOT_RELATIVIZE_PATH

A path (<path>) which can’t be relativized with the current input was found in the transaction log. Please re-run this as:

%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog(“<userPath>”, true)

and then also run:

%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog(“<path>”)

DELTA_CANNOT_RENAME_PATH

Cannot rename <currentPath> to <newPath>

DELTA_CANNOT_REPLACE_MISSING_TABLE

Table <tableName> cannot be replaced as it does not exist. Use CREATE OR REPLACE TABLE to create the table.

DELTA_CANNOT_RESOLVE_COLUMN

Can’t resolve column <columnName> in <schema>

DELTA_CANNOT_RESOLVE_SOURCE_COLUMN

Couldn’t resolve qualified source column <columnName> within the source query. Please contact Databricks support.

DELTA_CANNOT_RESTORE_TABLE_VERSION

Cannot restore table to version <version>. Available versions: [<startVersion>, <endVersion>].

DELTA_CANNOT_RESTORE_TIMESTAMP_GREATER

Cannot restore table to timestamp (<requestedTimestamp>) as it is after the latest version available. Please use a timestamp before (<latestTimestamp>)

DELTA_CANNOT_SET_LOCATION_MULTIPLE_TIMES

Can’t set location multiple times. Found <location>

DELTA_CANNOT_UPDATE_ARRAY_FIELD

Cannot update %1$s field %2$s type: update the element by updating %2$s.element

DELTA_CANNOT_UPDATE_MAP_FIELD

Cannot update %1$s field %2$s type: update a map by updating %2$s.key or %2$s.value

DELTA_CANNOT_UPDATE_STRUCT_FIELD

Cannot update <tableName> field <fieldName> type: update struct by adding, deleting, or updating its fields

DELTA_CANNOT_USE_ALL_COLUMNS_FOR_PARTITION

Cannot use all columns for partition columns

DELTA_CDC_NOT_ALLOWED_IN_THIS_VERSION

Configuration delta.enableChangeDataFeed cannot be set. Change data feed from Delta is not yet available.

DELTA_CHANGE_TABLE_FEED_DISABLED

Cannot write to table with delta.enableChangeDataFeed set. Change data feed from Delta is not available.

DELTA_CHECKPOINT_SNAPSHOT_MISMATCH

State of the checkpoint doesn’t match that of the snapshot.

DELTA_CLONE_AMBIGUOUS_TARGET

Two paths were provided as the CLONE target so it is ambiguous which to use. An external location for CLONE was provided at <externalLocation> at the same time as the path <targetIdentifier>.

DELTA_COLUMN_NOT_FOUND

Unable to find the column <columnName> given [<columnList>]

DELTA_COLUMN_NOT_FOUND_IN_MERGE

Unable to find the column ‘<targetCol>’ of the target table from the INSERT columns: <colNames>. INSERT clause must specify value for all the columns of the target table.

DELTA_COLUMN_NOT_FOUND_IN_SCHEMA

Couldn’t find column <columnName> in:

<tableSchema>

DELTA_COLUMN_STRUCT_TYPE_MISMATCH

Struct column <source> cannot be inserted into a <targetType> field <targetField> in <targetTable>.

DELTA_COMPLEX_TYPE_COLUMN_CONTAINS_NULL_TYPE

Found nested NullType in column <columName> which is of <dataType>. Delta doesn’t support writing NullType in complex types.

DELTA_CONFIGURE_SPARK_SESSION_WITH_EXTENSION_AND_CATALOG

This Delta operation requires the SparkSession to be configured with the DeltaSparkSessionExtension and the DeltaCatalog. Please set the necessary configurations when creating the SparkSession as shown below.

  SparkSession.builder()
    .config("spark.sql.extensions", "<sparkSessionExtensionName>")
    .config("<catalogKey>", "<catalogClassName>")
    ...
    .getOrCreate()

DELTA_CONFLICT_SET_COLUMN

There is a conflict from these SET columns: <columnList>.

DELTA_CONSTRAINT_ALREADY_EXISTS

Constraint ‘<constraintName>’ already exists. Please delete the old constraint first.

Old constraint:

<oldConstraint>

DELTA_CONSTRAINT_DOES_NOT_EXIST

Cannot drop nonexistent constraint <constraintName> from table <tableName>. To avoid throwing an error, provide the parameter IF EXISTS or set the SQL session configuration <config> to <confValue>.
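
For example, dropping a possibly absent constraint on a hypothetical table events without raising this error:

  -- IF EXISTS turns the drop into a no-op when the constraint is missing
  ALTER TABLE events DROP CONSTRAINT IF EXISTS valid_date;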

DELTA_CONVERSION_UNSUPPORTED_COLUMN_MAPPING

The configuration ‘<config>’ cannot be set to <mode> when using CONVERT TO DELTA.

DELTA_CONVERT_NON_PARQUET_TABLE

CONVERT TO DELTA only supports parquet tables, but you are trying to convert a <sourceName> source: <tableId>

DELTA_CREATE_EXTERNAL_TABLE_WITHOUT_SCHEMA

You are trying to create an external table <tableName> from <path> using Delta, but the schema is not specified when the input path is empty.

To learn more about Delta, see <docLink>

DELTA_CREATE_EXTERNAL_TABLE_WITHOUT_TXN_LOG

You are trying to create an external table <tableName> from %2$s using Delta, but there is no transaction log present at %2$s/_delta_log. Check the upstream job to make sure that it is writing using format(“delta”) and that the path is the root of the table.

To learn more about Delta, see <docLink>

DELTA_CREATE_TABLE_SCHEME_MISMATCH

The specified schema does not match the existing schema at <path>.

== Specified ==

<specifiedSchema>

== Existing ==

<existingSchema>

== Differences ==

<schemaDifferences>

If your intention is to keep the existing schema, you can omit the schema from the create table command. Otherwise please ensure that the schema matches.

DELTA_CREATE_TABLE_WITH_DIFFERENT_PROPERTY

The specified properties do not match the existing properties at <path>.

== Specified ==

<specifiedProperties>

== Existing ==

<existingProperties>

DELTA_CREATE_TABLE_WITH_NON_EMPTY_LOCATION

Cannot create table (‘<tableId>’). The associated location (‘<tableLocation>’) is not empty but it’s not a Delta table

DELTA_DATA_CHANGE_FALSE

Cannot change table metadata because the ‘dataChange’ option is set to false. Attempted operation: ‘<op>’.

DELTA_DUPLICATE_COLUMNS_FOUND

Found duplicate column(s) <coltype>: <duplicateCols>

DELTA_DUPLICATE_COLUMNS_ON_INSERT

Duplicate column names in INSERT clause

DELTA_DUPLICATE_COLUMNS_ON_UPDATE_TABLE

<message>

Please remove duplicate columns before you update your table.

DELTA_EMPTY_DATA

Data used in creating the Delta table doesn’t have any columns.

DELTA_EMPTY_DIRECTORY

No file found in the directory: <directory>.

DELTA_EXCEED_CHAR_VARCHAR_LIMIT

Exceeds char/varchar type length limitation. Failed check: <expr>.

DELTA_EXPRESSIONS_NOT_FOUND_IN_GENERATED_COLUMN

Cannot find the expressions in the generated column <columnName>

DELTA_EXTRACT_REFERENCES_FIELD_NOT_FOUND

Field <fieldName> could not be found when extracting references.

DELTA_FAILED_CAST_PARTITION_VALUE

Failed to cast partition value <value> to <dataType>

DELTA_FAILED_FIND_ATTRIBUTE_IN_OUTPUT_COLLUMNS

Could not find <newAttributeName> among the existing target output <targetOutputCollumns>

DELTA_FAILED_INFER_SCHEMA

Failed to infer schema from the given list of files.

DELTA_FAILED_MERGE_SCHEMA_FILE

Failed to merge schema of file <file>:

<schema>

DELTA_FAILED_RECOGNIZE_PREDICATE

Cannot recognize the predicate ‘<predicate>’

DELTA_FAILED_SCAN_WITH_HISTORICAL_VERSION

Expect a full scan of the latest version of the Delta source, but found a historical scan of version <historicalVersion>

DELTA_FAILED_TO_MERGE_FIELDS

Failed to merge fields ‘<field>’ and ‘<fieldRoot>’. <fieldChild>

DELTA_FAIL_RELATIVIZE_PATH

Failed to relativize the path (<path>). This can happen when absolute paths make it into the transaction log, which start with the scheme s3://, wasbs:// or adls://. This is a bug that has existed before DBR 5.0. To fix this issue, please upgrade your writer jobs to DBR 5.0 and please run:

%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog(“<path>”).

If this table was created with a shallow clone across file systems (different buckets/containers) and this table is NOT USED IN PRODUCTION, you can set the SQL configuration <config> to true. Using this SQL configuration could lead to accidental data loss, therefore we do not recommend the use of this flag unless this is a shallow clone for testing purposes.

DELTA_FILE_ALREADY_EXISTS

Existing file path <path>

DELTA_FILE_LIST_AND_PATTERN_STRING_CONFLICT

Cannot specify both file list and pattern string.

DELTA_FILE_NOT_FOUND

File path <path>

DELTA_FILE_OR_DIR_NOT_FOUND

No such file or directory: <path>

DELTA_FILE_TO_OVERWRITE_NOT_FOUND

File (<path>) to be rewritten not found among candidate files:

<pathList>

DELTA_FOUND_MAP_TYPE_COLUMN

A MapType was found. In order to access the key or value of a MapType, specify one of <key> or <value>, followed by the name of the column (only if that column is a struct type), e.g. mymap.key.mykey. If the column is a basic type, mymap.key or mymap.value is sufficient.

DELTA_GENERATED_COLUMNS_DATA_TYPE_MISMATCH

Column <columnName> is a generated column or a column used by a generated column. The data type is <columnType>. It doesn’t accept data type <dataType>

DELTA_GENERATED_COLUMNS_EXPR_TYPE_MISMATCH

The expression type of the generated column <columnName> is <expressionType>, but the column type is <columnType>

DELTA_ILLEGAL_FILE_FOUND

Illegal files found in a dataChange = false transaction. Files: <file>

DELTA_ILLEGAL_USAGE

The usage of <option> is not allowed when <operation> a Delta table.

DELTA_INCOMPLETE_FILE_COPY

File (<fileName>) not copied completely. Expected file size: <expectedSize>, found: <actualSize>. To continue with the operation by ignoring the file size check set spark.databricks.delta.clone.checkWrite to false.

DELTA_INCONSISTENT_BUCKET_SPEC

BucketSpec on Delta bucketed table does not match BucketSpec from metadata. Expected: <expected>. Actual: <actual>.

DELTA_INCONSISTENT_LOGSTORE_CONFS

(<setKeys>) cannot be set to different values. Please only set one of them, or set them to the same value.

DELTA_INCORRECT_ARRAY_ACCESS

Incorrectly accessing an ArrayType. Use arrayname.element.elementname position to add to an array.

DELTA_INCORRECT_ARRAY_ACCESS_BY_NAME

An ArrayType was found. In order to access elements of an ArrayType, specify <rightName> instead of <wrongName>.

DELTA_INCORRECT_GET_CONF

Use `getConf()` instead of `conf.getConf()`.

DELTA_INCORRECT_LOG_STORE_IMPLEMENTATION

The error typically occurs when the default LogStore implementation, that is, HDFSLogStore, is used to write into a Delta table on a non-HDFS storage system. In order to get the transactional ACID guarantees on table updates, you have to use the correct implementation of LogStore that is appropriate for your storage system. See <docLink> for details.

DELTA_INDEX_LARGER_OR_EQUAL_THAN_STRUCT

Index <position> to drop column is equal to or larger than struct length: <length>

DELTA_INDEX_LARGER_THAN_STRUCT

Index <index> to add column <columnName> is larger than struct length: <length>

DELTA_INSERT_COLUMN_ARITY_MISMATCH

Cannot write to ‘<tableName>’, <columnName>; target table has <numColumns> column(s) but the inserted data has <insertColumns> column(s)

DELTA_INSERT_COLUMN_MISMATCH

Column <columnName> is not specified in INSERT

DELTA_INVALID_BUCKET_COUNT

Invalid bucket count: <invalidBucketCount>. Bucket count should be a positive number that is a power of 2 and at least 8. You can use <validBucketCount> instead.

DELTA_INVALID_BUCKET_INDEX

Cannot find the bucket column in the partition columns

DELTA_INVALID_CALENDAR_INTERVAL_EMPTY

Interval cannot be null or blank.

DELTA_INVALID_CDC_RANGE

CDC range from start <start> to end <end> was invalid. End cannot be before start.

DELTA_INVALID_CHARACTERS_IN_COLUMN_NAME

Attribute name “<columnName>” contains invalid character(s) among ” ,;{}()\n\t=”. Please use alias to rename it.

DELTA_INVALID_CHARACTERS_IN_COLUMN_NAMES

Found invalid character(s) among ‘ ,;{}()\n\t=’ in the column names of your schema. <advice>

DELTA_INVALID_COMMITTED_VERSION

The committed version is <committedVersion> but the current version is <currentVersion>. Please contact Databricks support.

DELTA_INVALID_FORMAT

Incompatible format detected.

A transaction log for Delta was found at <deltaRootPath>/_delta_log, but you are trying to <operation> <path> using format(“<format>”). You must use ‘format(“delta”)’ when reading and writing to a Delta table.

To disable this check, SET spark.databricks.delta.formatCheck.enabled=false

To learn more about Delta, see <docLink>

DELTA_INVALID_FORMAT_FROM_SOURCE_VERSION

Unsupported format. Expected version should be smaller than or equal to <expectedVersion> but was <realVersion>. Please upgrade to newer version of Delta.

DELTA_INVALID_GENERATED_COLUMN_REFERENCES

A generated column cannot use a non-existent column or another generated column

DELTA_INVALID_IDEMPOTENT_WRITES_OPTIONS

Invalid options for idempotent Dataframe writes: <reason>

DELTA_INVALID_INTERVAL

<interval> is not a valid INTERVAL.

DELTA_INVALID_ISOLATION_LEVEL

invalid isolation level ‘<isolationLevel>’

DELTA_INVALID_LOGSTORE_CONF

(<classConfig>) and (<schemeConfig>) cannot be set at the same time. Please set only one group of them.

DELTA_INVALID_MANAGED_TABLE_SYNTAX_NO_SCHEMA

You are trying to create a managed table <tableName> using Delta, but the schema is not specified.

To learn more about Delta, see <docLink>

DELTA_INVALID_PARTITIONING_SCHEMA

The AddFile contains partitioning schema different from the table’s partitioning schema

expected: <neededPartitioning>

actual: <specifiedPartitioning>

To disable this check set <config> to “false”

DELTA_INVALID_PARTITION_COLUMN

<columnName> is not a valid partition column in table <tableName>.

DELTA_INVALID_PARTITION_COLUMN_NAME

Found partition columns having invalid character(s) among ” ,;{}()\n\t=”. Please change the name of your partition columns. This check can be turned off by setting spark.conf.set(“spark.databricks.delta.partitionColumnValidity.enabled”, false); however, this is not recommended as other features of Delta may not work properly.

DELTA_INVALID_PARTITION_COLUMN_TYPE

Using column <name> of type <dataType> as a partition column is not supported.

DELTA_INVALID_PARTITION_PATH

A partition path fragment should be of the form part1=foo/part2=bar. The partition path: <path>

DELTA_INVALID_PROTOCOL_DOWNGRADE

Protocol version cannot be downgraded from <oldProtocol> to <newProtocol>

DELTA_INVALID_SOURCE_VERSION

sourceVersion(<version>) is invalid

DELTA_INVALID_TABLE_VALUE_FUNCTION

Function <function> is an unsupported table valued function for CDC reads.

DELTA_INVALID_TIMESTAMP_FORMAT

The provided timestamp <timestamp> does not match the expected syntax <format>.

DELTA_INVALID_V1_TABLE_CALL

<callVersion> call is not expected with path based <tableVersion>

DELTA_ITERATOR_ALREADY_CLOSED

Iterator is closed

DELTA_LOG_ALREADY_EXISTS

A Delta log already exists at <path>

DELTA_MAX_ARRAY_SIZE_EXCEEDED

Please use a limit less than Int.MaxValue - 8.

DELTA_MAX_COMMIT_RETRIES_EXCEEDED

This commit has failed as it has been tried <numAttempts> times but did not succeed.

This can be caused by the Delta table being committed continuously by many concurrent commits.

Commit started at version: <startVersion>

Commit failed at version: <failVersion>

Number of actions attempted to commit: <numActions>

Total time spent attempting this commit: <timeSpent> ms

DELTA_MAX_LIST_FILE_EXCEEDED

File list must have at most <maxFileListSize> entries, had <numFiles>.

DELTA_MERGE_INCOMPATIBLE_DECIMAL_TYPE

Failed to merge decimal types with incompatible <decimalRanges>

DELTA_MERGE_INVALID_WHEN_NOT_MATCHED_CLAUSE

<clause> clauses cannot be part of the WHEN NOT MATCHED clause in MERGE INTO.

DELTA_MERGE_MISSING_WHEN

There must be at least one WHEN clause in a MERGE statement.

DELTA_MERGE_UNEXPECTED_ASSIGNMENT_KEY

Unexpected assignment key: <unexpectedKeyClass> - <unexpectedKeyObject>

DELTA_METADATA_ABSENT

Couldn’t find Metadata while committing the first version of the Delta table. To disable this check set <deltaCommitValidationEnabled> to “false”

DELTA_MISSING_CHANGE_DATA

Error getting change data for range [<startVersion> , <endVersion>] as change data was not recorded for version [<version>]. If you’ve enabled change data feed on this table, use DESCRIBE HISTORY to see when it was first enabled. Otherwise, to start recording change data, use `ALTER TABLE table_name SET TBLPROPERTIES (<key>=true)`.
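
A minimal sketch of that remediation, assuming the property behind <key> is delta.enableChangeDataFeed and a hypothetical table events:

  -- Starts recording change data from the next table version onward
  ALTER TABLE events SET TBLPROPERTIES (delta.enableChangeDataFeed = true);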

DELTA_MISSING_COLUMN

Cannot find <columnName> in table columns: <columnList>

DELTA_MISSING_DELTA_TABLE

<tableName> is not a Delta table.

DELTA_MISSING_FILES_UNEXPECTED_VERSION

The stream from your Delta table was expecting to process data from version <startVersion>, but the earliest available version in the deltalog directory is <earliestVersion>. The files in the transaction log may have been deleted due to log cleanup. In order to avoid losing data, we recommend that you restart your stream with a new checkpoint location and increase your delta.logRetentionDuration setting, if you have explicitly set it below 30 days.

If you would like to ignore the missed data and continue your stream from where it left off, you can set the .option(“<option>”, “false”) as part of your readStream statement.

DELTA_MISSING_NOT_NULL_COLUMN_VALUE

Column <columnName>, which has a NOT NULL constraint, is missing from the data being written into the table.

DELTA_MISSING_PARTITION_COLUMN

Partition column <columnName> not found in schema <columnList>

DELTA_MISSING_PART_FILES

Couldn’t find all part files of the checkpoint version: <version>

DELTA_MISSING_PROVIDER_FOR_CONVERT

CONVERT TO DELTA only supports parquet tables. Please rewrite your target as parquet.<path> if it’s a parquet directory.

DELTA_MISSING_SET_COLUMN

SET column <columnName> not found given columns: <columnList>.

DELTA_MISSING_TRANSACTION_LOG

Incompatible format detected.

You are trying to <operation> <path> using Delta, but there is no transaction log present. Check the upstream job to make sure that it is writing using format(“delta”) and that you are trying to %1$s the table base path.

To disable this check, SET spark.databricks.delta.formatCheck.enabled=false

To learn more about Delta, see <docLink>

DELTA_MODE_NOT_SUPPORTED

Specified mode ‘<mode>’ is not supported. Supported modes are: <supportedModes>

DELTA_MULTIPLE_CDC_BOUNDARY

Multiple <startingOrEnding> arguments provided for CDC read. Please provide one of either <startingOrEnding>Timestamp or <startingOrEnding>Version.
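
For example, a batch CDC read through the table_changes function that pins only a starting version; the table name is hypothetical:

  -- Supply either a starting version or a starting timestamp, never both
  SELECT * FROM table_changes('events', 5);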

DELTA_MULTIPLE_CONF_FOR_SINGLE_COLUMN_IN_BLOOM_FILTER

Multiple bloom filter index configurations passed to command for column: <columnName>

DELTA_MULTIPLE_SOURCE_ROW_MATCHING_TARGET_ROW_IN_MERGE

Cannot perform Merge as multiple source rows matched and attempted to modify the same target row in the Delta table in possibly conflicting ways. By SQL semantics of Merge, when multiple source rows match on the same target row, the result may be ambiguous as it is unclear which source row should be used to update or delete the matching target row. You can preprocess the source table to eliminate the possibility of multiple matches. Please refer to <usageReference>

DELTA_NAME_CONFLICT_IN_BUCKETED_TABLE

The following column name(s) are reserved for Delta bucketed table internal usage only: <names>

DELTA_NESTED_FIELDS_NEED_RENAME

Nested fields need renaming to avoid data loss. Fields:

<fields>.

Original schema:

<schema>

DELTA_NESTED_SUBQUERY_NOT_SUPPORTED

Nested subquery is not supported in the <operation> condition.

DELTA_NEW_CHECK_CONSTRAINT_VIOLATION

<numRows> rows in <tableName> violate the new CHECK constraint (<checkConstraint>)

DELTA_NEW_NOT_NULL_VIOLATION

<numRows> rows in <tableName> violate the new NOT NULL constraint on <colName>

DELTA_NON_BOOLEAN_CHECK_CONSTRAINT

CHECK constraint ‘<name>’ (<expr>) should be a boolean expression.

DELTA_NON_DETERMINISTIC_FUNCTION_NOT_SUPPORTED

Non-deterministic functions are not supported in the <operation> <expression>

DELTA_NON_GENERATED_COLUMN_MISSING_UPDATE_EXPR

<columnName> is not a generated column but is missing its update expression

DELTA_NON_LAST_MATCHED_CLAUSE_OMIT_CONDITION

When there is more than one MATCHED clause in a MERGE statement, only the last MATCHED clause can omit the condition.

DELTA_NON_LAST_NOT_MATCHED_CLAUSE_OMIT_CONDITION

When there is more than one NOT MATCHED clause in a MERGE statement, only the last NOT MATCHED clause can omit the condition.

DELTA_NON_PARSABLE_TAG

Could not parse tag <tag>.

File tags are: <tagList>

DELTA_NON_PARTITION_COLUMN_ABSENT

Data written into Delta needs to contain at least one non-partitioned column.<details>

DELTA_NON_PARTITION_COLUMN_REFERENCE

Predicate references non-partition column ‘<columnName>’. Only the partition columns may be referenced: [<columnList>]

DELTA_NON_PARTITION_COLUMN_SPECIFIED

Non-partitioning column(s) <columnList> are specified where only partitioning columns are expected: <fragment>.

DELTA_NOT_A_DATABRICKS_DELTA_TABLE

<table> is not a Delta table. Please drop this table first if you would like to create it with Databricks Delta.

DELTA_NOT_A_DELTA_TABLE

<tableName> is not a Delta table. Please drop this table first if you would like to recreate it with Delta Lake.

DELTA_NOT_NULL_COLUMN_NOT_FOUND_IN_STRUCT

Non-nullable column not found in struct: <struct>

DELTA_NOT_NULL_CONSTRAINT_VIOLATED

NOT NULL constraint violated for column: <columnName>.

DELTA_NOT_NULL_NESTED_FIELD

A non-nullable nested field can’t be added to a nullable parent. Please set the nullability of the parent column accordingly.

DELTA_NO_COMMITS_FOUND

No commits found at <logPath>

DELTA_NO_NEW_ATTRIBUTE_ID

Could not find a new attribute ID for column <columnName>. This should have been checked earlier.

DELTA_NO_START_FOR_CDC_READ

No startingVersion or startingTimestamp provided for CDC read.

DELTA_NULL_SCHEMA_IN_STREAMING_WRITE

Delta doesn’t accept NullTypes in the schema for streaming writes.

DELTA_ONEOF_IN_TIMETRAVEL

Please either provide ‘timestampAsOf’ or ‘versionAsOf’ for time travel.
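
For example, the SQL equivalents each pin exactly one time-travel coordinate on a hypothetical table events:

  -- By version
  SELECT * FROM events VERSION AS OF 10;

  -- Or by timestamp, but never both for the same read
  SELECT * FROM events TIMESTAMP AS OF '2023-01-01T00:00:00';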

DELTA_ONLY_OPERATION

<operation> is only supported for Delta tables.

DELTA_OPERATION_MISSING_PATH

Please provide the path or table identifier for <operation>.

DELTA_OPERATION_NOT_ALLOWED

Operation not allowed: <operation> is not supported for Delta tables

DELTA_OPERATION_NOT_ALLOWED_DETAIL

Operation not allowed: <operation> is not supported for Delta tables: <tableName>

DELTA_OPERATION_ON_TEMP_VIEW_WITH_GENERATED_COLS_NOT_SUPPORTED

<operation> command on a temp view referring to a Delta table that contains generated columns is not supported. Please run the <operation> command on the Delta table directly

DELTA_OVERWRITE_MUST_BE_TRUE

Copy option overwriteSchema cannot be specified without setting OVERWRITE = ‘true’.

DELTA_PARTITION_COLUMN_CAST_FAILED

Failed to cast value <value> to <dataType> for partition column <columnName>

DELTA_PATH_DOES_NOT_EXIST

<path> doesn’t exist

DELTA_PATH_EXISTS

Cannot write to already existent path <path> without setting OVERWRITE = ‘true’.

DELTA_POST_COMMIT_HOOK_FAILED

Committing to the Delta table version <version> succeeded, but an error occurred while executing post-commit hook <name><message>

DELTA_PROTOCOL_PROPERTY_NOT_INT

Protocol property <key> needs to be an integer. Found <value>

DELTA_READ_TABLE_WITHOUT_COLUMNS

You are trying to read a table <tableName> without columns using Delta. Write some data with option mergeSchema = true to enable subsequent read access.

DELTA_REGEX_OPT_SYNTAX_ERROR

Please recheck your syntax for ‘<regExpOption>’

DELTA_REMOVE_FILE_CDC_MISSING_EXTENDED_METADATA

RemoveFile created without extended metadata is ineligible for CDC:

<file>

DELTA_REPLACE_WHERE_IN_OVERWRITE

You can’t use replaceWhere in conjunction with an overwrite by filter

DELTA_REPLACE_WHERE_MISMATCH

Data written out does not match replaceWhere ‘<replaceWhere>’.

<message>

DELTA_REPLACE_WHERE_WITH_DYNAMIC_PARTITION_OVERWRITE

A ‘replaceWhere’ expression and ‘partitionOverwriteMode’=’dynamic’ cannot both be set in the DataFrameWriter options.

DELTA_REPLACE_WHERE_WITH_FILTER_DATA_CHANGE_UNSET

‘replaceWhere’ cannot be used with data filters when ‘dataChange’ is set to false. Filters: <dataFilters>

DELTA_SCHEMA_CHANGE_SINCE_ANALYSIS

The schema of your Delta table has changed in an incompatible way since your DataFrame or DeltaTable object was created. Please redefine your DataFrame or DeltaTable object.

Changes:

<schemaDiff><legacyFlagMessage>

DELTA_SCHEMA_NOT_CONSISTENT_WITH_TARGET

The table schema <tableSchema> is not consistent with the target attributes: <targetAttrs>

DELTA_SCHEMA_NOT_SET

Table schema is not set. Write data into it or use CREATE TABLE to set the schema.

DELTA_SET_LOCATION_SCHEMA_MISMATCH

The schema of the new Delta location is different than the current table schema.

original schema:

<original>

destination schema:

<destination>

If this is an intended change, you may turn this check off by running: %%sql set <config> = true

DELTA_SHOW_PARTITION_IN_NON_PARTITIONED_TABLE

SHOW PARTITIONS is not allowed on a table that is not partitioned: <tableName>

DELTA_SOURCE_IGNORE_DELETE

Detected deleted data (for example <removedFile>) from streaming source at version <version>. This is currently not supported. If you’d like to ignore deletes, set the option ‘ignoreDeletes’ to ‘true’.

DELTA_SOURCE_TABLE_IGNORE_CHANGES

Detected a data update (for example <file>) in the source table at version <version>. This is currently not supported. If you’d like to ignore updates, set the option ‘ignoreChanges’ to ‘true’. If you would like the data update to be reflected, please restart this query with a fresh checkpoint directory.

DELTA_SPARK_SESSION_NOT_SET

Active SparkSession not set.

DELTA_SPARK_THREAD_NOT_FOUND

Not running on a Spark task thread

DELTA_STATE_RECOVER_ERROR

The <operation> of your Delta table could not be recovered while reconstructing version: <version>. Did you manually delete files in the deltalog directory? Set <config> to “false” to skip validation.

DELTA_TABLE_ALREADY_CONTAINS_CDC_COLUMNS

Unable to enable Change Data Capture on the table. The table already contains reserved columns <columnList> that will be used internally as metadata for the table’s Change Data Feed. To enable Change Data Feed on the table rename/drop these columns.

DELTA_TABLE_ALREADY_EXISTS

Table <tableName> already exists.

DELTA_TABLE_FOUND_IN_EXECUTOR

DeltaTable cannot be used in executors

DELTA_TABLE_NOT_FOUND

Delta table <tableName> doesn’t exist. Please delete your streaming query checkpoint and restart.

DELTA_TABLE_NOT_SUPPORTED_IN_OP

Table is not supported in <operation>. Please use a path instead.

DELTA_TABLE_ONLY_OPERATION

<tableName> is not a Delta table. <operation> is only supported for Delta tables.

DELTA_TIMESTAMP_GREATER_THAN_COMMIT

The provided timestamp (<providedTimestamp>) is after the latest version available to this table (<tableName>). Please use a timestamp before or at <maximumTimestamp>.

DELTA_TIME_TRAVEL_INVALID_BEGIN_VALUE

<timeTravelKey> needs to be a valid begin value.

DELTA_TRUNCATED_TRANSACTION_LOG

<path>: Unable to reconstruct state at version <version> as the transaction log has been truncated due to manual deletion or the log retention policy (<logRetentionKey>=<logRetention>) and checkpoint retention policy (<checkpointRetentionKey>=<checkpointRetention>)

DELTA_TRUNCATE_TABLE_PARTITION_NOT_SUPPORTED

Operation not allowed: TRUNCATE TABLE on Delta tables does not support partition predicates; use DELETE to delete specific partitions or rows.
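
For example, instead of a partitioned TRUNCATE, an equivalent DELETE on a hypothetical table events partitioned by event_date:

  -- Removes all rows of the targeted partition
  DELETE FROM events WHERE event_date = '2023-01-01';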

DELTA_TXN_LOG_FAILED_INTEGRITY

The transaction log has failed integrity checks. Failed verification at version <version> of:

<mismatchStringOpt>

DELTA_UNEXPECTED_ACTION_IN_OPTIMIZE

Unexpected action <action> with type <actionClass>. Optimize should only have AddFiles and RemoveFiles.

DELTA_UNEXPECTED_ALIAS

Expected Alias but got <alias>

DELTA_UNEXPECTED_ATTRIBUTE_REFERENCE

Expected AttributeReference but got <ref>

DELTA_UNEXPECTED_CHANGE_FILES_FOUND

Change files found in a dataChange = false transaction. Files:

<fileList>

DELTA_UNEXPECTED_NUM_PARTITION_COLUMNS_FROM_FILE_NAME

Expecting <expectedColsSize> partition column(s): <expectedCols>, but found <parsedColsSize> partition column(s): <parsedCols> from parsing the file name: <path>

DELTA_UNEXPECTED_PARTIAL_SCAN

Expect a full scan of Delta sources, but found a partial scan. path:<path>

DELTA_UNEXPECTED_PARTITION_SCHEMA_FROM_USER

CONVERT TO DELTA was called with a partition schema different from the partition schema inferred from the catalog. Please avoid providing the schema so that the partition schema can be chosen from the catalog.

catalog partition schema:

<catalogPartitionSchema>

provided partition schema:

<userPartitionSchema>

DELTA_UNKNOWN_CONFIGURATION

Unknown configuration was specified: <config>

DELTA_UNRECOGNIZED_COLUMN_CHANGE

Unrecognized column change <otherClass>. You may be running an out-of-date Delta Lake version.

DELTA_UNRECOGNIZED_FILE_ACTION

Unrecognized file action <action> with type <actionClass>.

DELTA_UNRECOGNIZED_INVARIANT

Unrecognized invariant. Please upgrade your Spark version.

DELTA_UNRECOGNIZED_LOGFILE

Unrecognized log file <fileName>

DELTA_UNSET_NON_EXISTENT_PROPERTY

Attempted to unset non-existent property ‘<property>’ in table <tableName>

DELTA_UNSUPPORTED_ABS_PATH_ADD_FILE

<path> does not support adding files with an absolute path

DELTA_UNSUPPORTED_ALTER_TABLE_REPLACE_COL_OP

Unsupported ALTER TABLE REPLACE COLUMNS operation. Reason: <details>

Failed to change schema from:

<oldSchema>

to:

<newSchema>

DELTA_UNSUPPORTED_CLONE_REPLACE_SAME_TABLE

You tried to REPLACE an existing table (<tableName>) with CLONE. This operation is unsupported. Try a different target for CLONE or delete the table at the current target.

DELTA_UNSUPPORTED_COLUMN_MAPPING_MODE_CHANGE

Changing column mapping mode from ‘<oldMode>’ to ‘<newMode>’ is not supported.

DELTA_UNSUPPORTED_COLUMN_MAPPING_PROTOCOL

Your current table protocol version does not support changing column mapping modes using <config>.

Required Delta protocol version for column mapping:

<requiredVersion>

Your table’s current Delta protocol version:

<currentVersion>

<advice>

DELTA_UNSUPPORTED_COLUMN_MAPPING_SCHEMA_CHANGE

Schema change is detected:

old schema:

<oldTableSchema>

new schema:

<newTableSchema>

Schema changes are not allowed during the change of column mapping mode.

DELTA_UNSUPPORTED_COLUMN_MAPPING_STREAMING_READS

Streaming reads from a Delta table with column mapping enabled are not supported.

DELTA_UNSUPPORTED_COLUMN_MAPPING_WRITE

Writing data with column mapping mode is not supported.

DELTA_UNSUPPORTED_COLUMN_TYPE_IN_BLOOM_FILTER

Creating a bloom filter index on a column with type <dataType> is unsupported: <columnName>

DELTA_UNSUPPORTED_DATA_TYPES

Found columns using unsupported data types: <dataTypeList>. You can set ‘<config>’ to ‘false’ to disable the type check. Disabling this type check may allow users to create unsupported Delta tables and should only be used when trying to read/write legacy tables.

DELTA_UNSUPPORTED_DESCRIBE_DETAIL_VIEW

<view> is a view. DESCRIBE DETAIL is only supported for tables.

DELTA_UNSUPPORTED_DROP_COLUMN

DROP COLUMN is not supported for your Delta table. <advice>

DELTA_UNSUPPORTED_DROP_NESTED_COLUMN_FROM_NON_STRUCT_TYPE

Can only drop nested columns from StructType. Found <struct>

DELTA_UNSUPPORTED_DROP_PARTITION_COLUMN

Dropping partition columns (<columnList>) is not allowed.

DELTA_UNSUPPORTED_EXPRESSION

Unsupported expression type(<expType>) for <causedBy>. The supported types are [<supportedTypes>].

DELTA_UNSUPPORTED_EXPRESSION_GENERATED_COLUMN

<expression> cannot be used in a generated column

DELTA_UNSUPPORTED_FIELD_UPDATE_NON_STRUCT

Updating nested fields is only supported for StructType, but you are trying to update a field of <columnName>, which is of type: <dataType>.

DELTA_UNSUPPORTED_GENERATE_WITH_DELETION_VECTORS

The ‘GENERATE symlink_format_manifest’ command is not supported on table versions with deletion vectors.

In order to produce a version of the table without deletion vectors, run ‘REORG TABLE table APPLY (PURGE)’. Then re-run the ‘GENERATE’ command.

Make sure that no concurrent transactions are adding deletion vectors again between REORG and GENERATE.

If you need to generate manifests regularly, or you cannot prevent concurrent transactions, consider disabling deletion vectors on this table using ‘ALTER TABLE table SET TBLPROPERTIES (createDeletionVectors = false)’.
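
As a sketch, the sequence described above on a hypothetical table events:

  -- Rewrites files so the latest version carries no deletion vectors
  REORG TABLE events APPLY (PURGE);

  -- Then regenerate the manifest from the purged version
  GENERATE symlink_format_manifest FOR TABLE events;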

DELTA_UNSUPPORTED_INVARIANT_NON_STRUCT

Invariants on nested fields other than StructTypes are not supported.

DELTA_UNSUPPORTED_LIST_KEYS_WITH_PREFIX

listKeywithPrefix not available

DELTA_UNSUPPORTED_MANIFEST_GENERATION_WITH_COLUMN_MAPPING

Manifest generation is not supported for tables that leverage column mapping, as external readers cannot read these Delta tables. See Delta documentation for more details.

DELTA_UNSUPPORTED_MERGE_SCHEMA_EVOLUTION_WITH_CDC

MERGE INTO operations with schema evolution do not currently support writing CDC output.

DELTA_UNSUPPORTED_MULTI_COL_IN_PREDICATE

Multi-column In predicates are not supported in the <operation> condition.

DELTA_UNSUPPORTED_NESTED_COLUMN_IN_BLOOM_FILTER

Creating a bloom filter index on a nested column is currently unsupported: <columnName>

DELTA_UNSUPPORTED_NESTED_FIELD_IN_OPERATION

Nested field is not supported in the <operation> (field = <fieldName>).

DELTA_UNSUPPORTED_OUTPUT_MODE

Data source <dataSource> does not support <mode> output mode

DELTA_UNSUPPORTED_PARTITION_COLUMN_IN_BLOOM_FILTER

Creating a bloom filter index on a partitioning column is unsupported: <columnName>

DELTA_UNSUPPORTED_RENAME_COLUMN

Column rename is not supported for your Delta table. <advice>

DELTA_UNSUPPORTED_SCHEMA_DURING_READ

Delta does not support specifying the schema at read time.

DELTA_UNSUPPORTED_SORT_ON_BUCKETED_TABLES

SORTED BY is not supported for Delta bucketed tables

DELTA_UNSUPPORTED_SOURCE

<operation> destination only supports Delta sources.

<plan>

DELTA_UNSUPPORTED_STATIC_PARTITIONS

Specifying static partitions in the partition spec is currently not supported during inserts

DELTA_UNSUPPORTED_STRATEGY_NAME

Unsupported strategy name: <strategy>

DELTA_UNSUPPORTED_SUBQUERY

Subqueries are not supported in the <operation> (condition = <cond>).

DELTA_UNSUPPORTED_SUBQUERY_IN_PARTITION_PREDICATES

Subquery is not supported in partition predicates.

DELTA_UNSUPPORTED_TIME_TRAVEL_MULTIPLE_FORMATS

Cannot specify time travel in multiple formats.

DELTA_UNSUPPORTED_TIME_TRAVEL_VIEWS

Cannot time travel views, subqueries or streams.

DELTA_UNSUPPORTED_TRUNCATE_SAMPLE_TABLES

Truncate sample tables is not supported

DELTA_UNSUPPORTED_VACUUM_SPECIFIC_PARTITION

Please provide the base path (<baseDeltaPath>) when Vacuuming Delta tables. Vacuuming specific partitions is currently not supported.

DELTA_UNSUPPORTED_WRITES_STAGED_TABLE

Table implementation does not support writes: <tableName>

DELTA_UNSUPPORTED_WRITE_SAMPLE_TABLES

Write to sample tables is not supported

DELTA_UPDATE_SCHEMA_MISMATCH_EXPRESSION

Cannot cast <fromCatalog> to <toCatalog>. All nested columns must match.

DELTA_VERSIONS_NOT_CONTIGUOUS

Versions (<versionList>) are not contiguous.

DELTA_VERSION_NOT_CONTIGUOUS

Versions (<versionList>) are not contiguous. This can happen when files have been manually removed from the Delta log. Please contact Databricks support to repair the table.

DELTA_VIOLATE_CONSTRAINT_WITH_VALUES

CHECK constraint <constraintName> <expression> violated by row with values:

<values>

DELTA_VIOLATE_TABLE_PROPERTY_VALIDATION_FAILED

The validation of the properties of table <table> has been violated:

For more details see DELTA_VIOLATE_TABLE_PROPERTY_VALIDATION_FAILED

DELTA_WRITE_INTO_VIEW_NOT_SUPPORTED

<viewIdentifier> is a view. You may not write data into a view.

DELTA_ZORDERING_COLUMN_DOES_NOT_EXIST

Z-Ordering column <columnName> does not exist in data schema.

DELTA_ZORDERING_ON_COLUMN_WITHOUT_STATS

Z-Ordering on <cols> will be ineffective, because we currently do not collect stats for these columns. Please refer to <link> for more information on data skipping and z-ordering. You can disable this check by setting ‘%%sql set <zorderColStatKey> = false’

DELTA_ZORDERING_ON_PARTITION_COLUMN

<colName> is a partition column. Z-Ordering can only be performed on data columns

Autoloader

CF_ADD_NEW_NOT_SUPPORTED

Schema evolution mode <addNewColumnsMode> is not supported when the schema is specified. To use this mode, you can provide the schema through cloudFiles.schemaHints instead.
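
A minimal sketch of the suggested remediation, assuming the concrete option key is cloudFiles.schemaHints (as in current Auto Loader releases); the path and hint string are placeholders:

  # Sketch: pin only the columns whose types you care about via schemaHints;
  # /input and the hint string are hypothetical.
  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  df = (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaHints", "id BIGINT, amount DECIMAL(18, 2)")
        .load("/input"))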

CF_AMBIGUOUS_AUTH_OPTIONS_ERROR

Found notification-setup authentication options for the (default) directory listing mode:

<options>

If you wish to use the file notification mode, please explicitly set:

  .option("cloudFiles.<useNotificationsKey>", "true")

Alternatively, if you want to skip the validation of your options and ignore these authentication options, you can set:

  .option("cloudFiles.<validateOptionsKey>", "false")

CF_AMBIGUOUS_INCREMENTAL_LISTING_MODE_ERROR

Incremental listing mode (cloudFiles.<useIncrementalListingKey>) and file notification (cloudFiles.<useNotificationsKey>) have been enabled at the same time. Please make sure that you select only one.

CF_AZURE_STORAGE_SUFFIXES_REQUIRED

Require adlsBlobSuffix and adlsDfsSuffix for Azure

CF_BUCKET_MISMATCH

The <storeType> in the file event <fileEvent> is different from the one expected by the source: <source>.

CF_CANNOT_EVOLVE_SCHEMA_LOG_EMPTY

Cannot evolve schema when the schema log is empty. Schema log location: <logPath>

CF_CANNOT_RESOLVE_CONTAINER_NAME

Cannot resolve container name from path: <path>, resolved uri: <uri>

CF_CANNOT_RUN_DIRECTORY_LISTING

Cannot run directory listing when there is an async backfill thread running

CF_CLEAN_SOURCE_ALLOW_OVERWRITES_BOTH_ON

Cannot turn on cloudFiles.cleanSource and cloudFiles.allowOverwrites at the same time.

CF_DUPLICATE_COLUMN_IN_DATA

There was an error when trying to infer the partition schema of your table. You have the same column duplicated in your data and partition paths. To ignore the partition value, please provide your partition columns explicitly by using: .option(“cloudFiles.<partitionColumnsKey>”, “{comma-separated-list}”)

CF_EMPTY_DIR_FOR_SCHEMA_INFERENCE

Cannot infer schema when the input path <path> is empty. Please try to start the stream when there are files in the input path, or specify the schema.
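
A hedged sketch of the second remediation, supplying an explicit schema so the stream can start against an empty path (all names are placeholders):

  from pyspark.sql import SparkSession
  from pyspark.sql.types import LongType, StringType, StructField, StructType

  spark = SparkSession.builder.getOrCreate()

  # With an explicit schema, nothing needs to be inferred from /input.
  schema = StructType([
      StructField("id", LongType()),
      StructField("name", StringType()),
  ])

  df = (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .schema(schema)
        .load("/input"))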

CF_EVENT_GRID_AUTH_ERROR

Failed to create an Event Grid subscription. Please make sure that your service principal has <permissionType> Event Grid Subscriptions. See more details at: <docLink>

CF_EVENT_GRID_CREATION_FAILED

Failed to create an Event Grid subscription. Please ensure that Microsoft.EventGrid is registered as a resource provider in your subscription. See more details at: <docLink>

CF_EVENT_GRID_NOT_FOUND_ERROR

Failed to create an Event Grid subscription. Please make sure that your storage account (<storageAccount>) is under your resource group (<resourceGroup>) and that the storage account is a “StorageV2 (general purpose v2)” account. See more details at: <docLink>

CF_EVENT_NOTIFICATION_NOT_SUPPORTED

Event notification setup for <cloudStore> is not supported.

CF_FAILED_TO_CHECK_STREAM_NEW

Failed to check if the stream is new

CF_FAILED_TO_CREATED_PUBSUB_SUBSCRIPTION

Failed to create subscription: <subscriptionName>. A subscription with the same name already exists and is associated with another topic: <otherTopicName>. The desired topic is <proposedTopicName>. Either delete the existing subscription or create a subscription with a new resource suffix.

CF_FAILED_TO_CREATED_PUBSUB_TOPIC

Failed to create topic: <topicName>. A topic with the same name already exists.<reason> Remove the existing topic or try again with another resource suffix.

CF_FAILED_TO_DELETE_GCP_NOTIFICATION

Failed to delete notification with id <notificationId> on bucket <bucketName> for topic <topicName>. Please retry or manually remove the notification through the GCP console.

CF_FAILED_TO_DESERIALIZE_PERSISTED_SCHEMA

Failed to deserialize persisted schema from string: ‘<jsonSchema>’

CF_FAILED_TO_EVOLVE_SCHEMA

Cannot evolve schema without a schema log.

CF_FAILED_TO_FIND_PROVIDER

Failed to find provider for <fileFormatInput>

CF_FAILED_TO_INFER_SCHEMA

Failed to infer schema for format <fileFormatInput> from existing files in input path <path>. Please ensure you configured the options properly or explicitly specify the schema.

CF_FAILED_TO_WRITE_TO_SCHEMA_LOG

Failed to write to the schema log at location <path>.

CF_FILE_FORMAT_REQUIRED

Could not find required option: cloudFiles.format.

CF_FOUND_MULTIPLE_AUTOLOADER_PUBSUB_SUBSCRIPTIONS

Found multiple (<num>) subscriptions with the Auto Loader prefix for topic <topicName>:

<subscriptionList>

There should only be one subscription per topic. Please manually ensure that your topic does not have multiple subscriptions.

CF_GCP_AUTHENTICATION

Please either provide all of the following: <clientEmail>, <client>, <privateKey>, and <privateKeyId>, or provide none of them in order to use the default GCP credential provider chain for authenticating with GCP resources.

CF_GCP_LABELS_COUNT_EXCEEDED

Received too many labels (<num>) for GCP resource. The maximum label count per resource is <maxNum>.

CF_GCP_RESOURCE_TAGS_COUNT_EXCEEDED

Received too many resource tags (<num>) for GCP resource. The maximum resource tag count per resource is <maxNum>, as resource tags are stored as GCP labels on resources, and Databricks specific tags consume some of this label quota.

CF_INCOMPLETE_LOG_FILE_IN_SCHEMA_LOG

Incomplete log file in the schema log

CF_INCOMPLETE_METADATA_FILE_IN_CHECKPOINT

Incomplete metadata file in the Auto Loader checkpoint

CF_INCORRECT_SQL_PARAMS

The cloud_files method accepts two required string parameters: the path to load from, and the file format. File reader options must be provided in a string key-value map. e.g. cloud_files(“path”, “json”, map(“option1”, “value1”)). Received: <params>

CF_INVALID_ARN

Invalid ARN: <arn>

CF_INVALID_CHECKPOINT

This checkpoint is not a valid CloudFiles source

CF_INVALID_CLEAN_SOURCE_MODE

Invalid mode for clean source option <value>.

CF_INVALID_GCP_RESOURCE_TAG_KEY

Invalid resource tag key for GCP resource: <key>. Keys must start with a lowercase letter, be within 1 to 63 characters long, and contain only lowercase letters, numbers, underscores (_), and hyphens (-).

CF_INVALID_GCP_RESOURCE_TAG_VALUE

Invalid resource tag value for GCP resource: <value>. Values must be within 0 to 63 characters long and must contain only lowercase letters, numbers, underscores (_), and hyphens (-).

CF_INVALID_SCHEMA_EVOLUTION_MODE

cloudFiles.<schemaEvolutionModeKey> must be one of {
 "<addNewColumns>"
 "<failOnNewColumns>"
 "<rescue>"
 "<noEvolution>"}

CF_INVALID_SCHEMA_HINTS_OPTION

Schema hints can only specify a particular column once. In this case, redefining column: <columnName> multiple times in schemaHints: <schemaHints>

CF_INVALID_SCHEMA_HINT_COLUMN

Schema hints cannot be used to override maps’ and arrays’ nested types. Conflicted column: <columnName>

CF_LATEST_OFFSET_READ_LIMIT_REQUIRED

latestOffset should be called with a ReadLimit on this source.

CF_LOG_FILE_MALFORMED

Log file was malformed: failed to read correct log version from <fileName>.

CF_MAX_MUST_BE_POSITIVE

max must be positive

CF_METADATA_FILE_CONCURRENTLY_USED

Multiple streaming queries are concurrently using <metadataFile>

CF_MISSING_METADATA_FILE_ERROR

The metadata file in the streaming source checkpoint directory is missing. This metadata file contains important default options for the stream, so the stream cannot be restarted right now. Please contact Databricks support for assistance.

CF_MISSING_PARTITION_COLUMN_ERROR

Partition column <columnName> does not exist in the provided schema:

<schema>

CF_MISSING_SCHEMA_IN_PATHLESS_MODE

Please specify a schema using .schema() if a path is not provided to the CloudFiles source while using file notification mode. Alternatively, to have Auto Loader infer the schema, please provide a base path in .load().
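
A hedged sketch of the second remediation, providing a base path in .load() so Auto Loader can infer the schema while in file notification mode (the path is a placeholder):

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # The base path gives schema inference something to read, even though
  # new files arrive through notifications.
  df = (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.useNotifications", "true")
        .load("/base/path"))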

CF_MULTIPLE_PUBSUB_NOTIFICATIONS_FOR_TOPIC

Found existing notifications for topic <topicName> on bucket <bucketName>:

notification,id

<notificationList>

To avoid polluting the subscriber with unintended events, please delete the above notifications and retry.

CF_NEW_PARTITION_ERROR

New partition columns were inferred from your files: [<filesList>]. Please provide all partition columns in your schema or provide a list of partition columns which you would like to extract values for by using: .option(“cloudFiles.partitionColumns”, “{comma-separated-list|empty-string}”)
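
A hedged sketch of pinning the partition columns explicitly; the column list is hypothetical, and passing an empty string instead tells Auto Loader to ignore partition values entirely:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # year/month/day are hypothetical; use "" to ignore partition values.
  df = (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "parquet")
        .option("cloudFiles.partitionColumns", "year,month,day")
        .load("/input"))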

CF_PARTITON_INFERENCE_ERROR

There was an error when trying to infer the partition schema of the current batch of files. Please provide your partition columns explicitly by using: .option(“cloudFiles.<partitionColumnOption>”, “{comma-separated-list}”)

CF_PERIODIC_BACKFILL_NOT_SUPPORTED

Periodic backfill is not supported if asynchronous backfill is disabled. You can enable asynchronous backfill/directory listing by setting spark.databricks.cloudFiles.asyncDirListing to true
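
A minimal sketch of enabling that configuration before starting the stream:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Re-enable asynchronous directory listing so periodic backfill is allowed.
  spark.conf.set("spark.databricks.cloudFiles.asyncDirListing", "true")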

CF_PREFIX_MISMATCH

Found mismatched event: key <key> doesn’t have the prefix: <prefix>

CF_PROTOCOL_MISMATCH

<message>

If you don’t need to make any other changes to your code, then please set the SQL configuration: ‘<sourceProtocolVersionKey> = <value>’ to resume your stream. Please refer to: <docLink> for more details.

CF_REGION_NOT_FOUND_ERROR

Could not get default AWS Region. Please specify a region using the cloudFiles.region option.
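
For example, a sketch of pinning the region explicitly (the region value and path are placeholders):

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Name the AWS region instead of relying on the default region lookup.
  df = (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.region", "us-west-2")
        .load("/input"))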

CF_RESOURCE_SUFFIX_EMPTY

Failed to create notification services: the resource suffix cannot be empty.

CF_RESOURCE_SUFFIX_INVALID_CHAR_AWS

Failed to create notification services: the resource suffix can only have alphanumeric characters, hyphens (-) and underscores (_).

CF_RESOURCE_SUFFIX_INVALID_CHAR_AZURE

Failed to create notification services: the resource suffix can only have lowercase letters, numbers, and dashes (-).

CF_RESOURCE_SUFFIX_INVALID_CHAR_GCP

Failed to create notification services: the resource suffix can only have alphanumeric characters, hyphens (-), underscores (_), periods (.), tildes (~), plus signs (+), and percent signs (<percentSign>).

CF_RESOURCE_SUFFIX_LIMIT

Failed to create notification services: the resource suffix cannot have more than <limit> characters.

CF_RESOURCE_SUFFIX_LIMIT_GCP

Failed to create notification services: the resource suffix must be between <lowerLimit> and <upperLimit> characters.

CF_RESTRICTED_GCP_RESOURCE_TAG_KEY

Found restricted GCP resource tag key (<key>). The following GCP resource tag keys are restricted for Auto Loader: [<restrictedKeys>]

CF_RETENTION_GREATER_THAN_MAX_FILE_AGE

cloudFiles.cleanSource.retentionDuration cannot be greater than cloudFiles.maxFileAge.

CF_SAME_PUB_SUB_TOPIC_NEW_KEY_PREFIX

Failed to create notification for topic: <topic> with prefix: <prefix>. There is already a topic with the same name with another prefix: <oldPrefix>. Try using a different resource suffix for setup or delete the existing setup.

CF_SOURCE_DIRECTORY_PATH_REQUIRED

Please provide the source directory path with option path

CF_SOURCE_UNSUPPORTED

The cloud files source only supports S3, Azure Blob Storage (wasb/wasbs), Azure Data Lake Gen1 (adl), and Gen2 (abfs/abfss) paths right now. path: ‘<path>’, resolved uri: ‘<uri>’

CF_THREAD_IS_DEAD

<threadName> thread is dead.

CF_UNABLE_TO_DERIVE_STREAM_CHECKPOINT_LOCATION

Unable to derive the stream checkpoint location from the source checkpoint location: <checkPointLocation>

CF_UNABLE_TO_EXTRACT_BUCKET_INFO

Unable to extract bucket information. Path: ‘<path>’, resolved uri: ‘<uri>’.

CF_UNABLE_TO_EXTRACT_KEY_INFO

Unable to extract key information. Path: ‘<path>’, resolved uri: ‘<uri>’.

CF_UNABLE_TO_EXTRACT_STORAGE_ACCOUNT_INFO

Unable to extract storage account information; path: ‘<path>’, resolved uri: ‘<uri>’

CF_UNABLE_TO_LIST_EFFICIENTLY

Received a directory rename event for the path <path>, but we are unable to list this directory efficiently. In order for the stream to continue, set the option ‘cloudFiles.ignoreDirRenames’ to true, and consider enabling regular backfills with cloudFiles.backfillInterval for this data to be processed.
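
A hedged sketch combining both suggestions; the backfill interval value is a hypothetical choice:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Ignore directory rename events, and pick the renamed data up via
  # a regular backfill instead.
  df = (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.ignoreDirRenames", "true")
        .option("cloudFiles.backfillInterval", "1 day")
        .load("/input"))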

CF_UNKNOWN_OPTION_KEYS_ERROR

Found unknown option keys:

<optionList>

Please make sure that all provided option keys are correct. If you want to skip the validation of your options and ignore these unknown options, you can set:

  .option("cloudFiles.<validateOptions>", "false")

CF_UNKNOWN_READ_LIMIT

Unknown ReadLimit: <readLimit>

CF_UNSUPPORTED_FORMAT_FOR_SCHEMA_INFERENCE

Schema inference is not supported for format: <format>. Please specify the schema.

CF_UNSUPPORTED_LOG_VERSION

UnsupportedLogVersion: maximum supported log version is v<maxVersion>, but encountered v<version>. The log file was produced by a newer version of DBR and cannot be read by this version. Please upgrade.

CF_UNSUPPORTED_SCHEMA_EVOLUTION_MODE

Schema evolution mode <mode> is not supported for format: <format>.

CF_USE_DELTA_FORMAT

If you would like to consume data from Delta, please use ‘format(“delta”)’ instead of ‘format(“cloudFiles”)’. The streaming source from Delta is already optimized for incremental consumption of data.
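
A minimal sketch of the recommended pattern; the table path is a placeholder:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Stream from the Delta table directly; the Delta source is already
  # incremental, so cloudFiles adds nothing here.
  df = (spark.readStream
        .format("delta")
        .load("/delta/events"))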

Geospatial

GEOJSON_PARSE_ERROR

Error parsing GeoJSON: <parseError> at position <pos>

H3_INVALID_CELL_ID

<h3Cell> is not a valid H3 cell ID

H3_INVALID_GRID_DISTANCE_VALUE

H3 grid distance <k> must be non-negative

H3_INVALID_RESOLUTION_VALUE

H3 resolution <r> must be between <minR> and <maxR>, inclusive
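
For reference, H3 resolutions are integers from 0 through 15. A hedged sketch using the Databricks h3_longlatash3 function, assuming an environment where H3 expressions are enabled (see H3_NOT_ENABLED below):

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Resolution 7 is valid; a value such as 16 would raise this error.
  spark.sql("SELECT h3_longlatash3(-122.4, 37.7, 7) AS cell").show()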

H3_NOT_ENABLED

<h3Expression> is disabled or unsupported. Consider enabling Photon or switching to a tier that supports H3 expressions.

H3_PENTAGON_ENCOUNTERED_ERROR

A pentagon was encountered while computing the hex ring of <h3Cell> with grid distance <k>

H3_UNDEFINED_GRID_DISTANCE

H3 grid distance between <h3Cell1> and <h3Cell2> is undefined

WKB_PARSE_ERROR

Error parsing WKB: <parseError> at position <pos>

WKT_PARSE_ERROR

Error parsing WKT: <parseError> at position <pos>