Compute access mode limitations for Unity Catalog
Databricks recommends using Unity Catalog and shared access mode for most workloads. This article outlines limitations and requirements for each access mode with Unity Catalog. For details on access modes, see Access modes.
Databricks recommends using compute policies to simplify configuration options for most users. See Create and manage compute policies.
Note
No-isolation shared is a legacy access mode that does not support Unity Catalog.
Single user access mode limitations on Unity Catalog
Single user access mode on Unity Catalog has the following limitations. These are in addition to the general limitations for all Unity Catalog access modes. See General limitations for Unity Catalog.
Fine-grained access control on single user compute is not supported. Specifically:
You cannot access a table that has a row filter or column mask.
You cannot access dynamic views.
To read from any view, you must have SELECT on all tables and views that are referenced by the view.
To query dynamic views, views on which you don’t have SELECT on the underlying tables and views, and tables with row filters or column masks, use one of the following:
A SQL warehouse.
Compute with shared access mode.
Compute with single user access mode on Databricks Runtime 15.4 LTS or above.
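For example, a read that succeeds on shared access mode fails on single user compute when the table carries a row filter. A minimal sketch, with a hypothetical table name:

```python
# Hypothetical Unity Catalog table that has a row filter attached.
# On single user compute (without one of the workarounds above) this read
# fails; on shared access mode it returns the filtered rows.
df = spark.table("main.sales.orders_filtered")
df.show()
```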
Streaming limitations for Unity Catalog single user access mode
Asynchronous checkpointing is not supported in Databricks Runtime 11.3 LTS and below.
StreamingQueryListener requires Databricks Runtime 15.1 or above to use credentials or interact with objects managed by Unity Catalog on single user compute.
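A minimal sketch of registering a StreamingQueryListener from Python; the runtime gate above applies only when the listener uses credentials or touches Unity Catalog-managed objects.

```python
from pyspark.sql.streaming import StreamingQueryListener

class ProgressLogger(StreamingQueryListener):
    # Log streaming lifecycle events to the driver logs.
    def onQueryStarted(self, event):
        print(f"Query started: {event.id}")

    def onQueryProgress(self, event):
        print(f"Processed rows/sec: {event.progress.processedRowsPerSecond}")

    def onQueryTerminated(self, event):
        print(f"Query terminated: {event.id}")

spark.streams.addListener(ProgressLogger())
```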
Shared access mode limitations on Unity Catalog
Shared access mode in Unity Catalog has the following limitations. These are in addition to the general limitations for all Unity Catalog access modes. See General limitations for Unity Catalog.
Databricks Runtime ML and Spark Machine Learning Library (MLlib) are not supported.
Spark-submit jobs are not supported.
In Databricks Runtime 13.3 and above, individual rows must not exceed 128 MB.
PySpark UDFs cannot access Git folders, workspace files, or volumes to import modules in Databricks Runtime 14.2 and below.
DBFS root and mounts do not support FUSE.
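Because FUSE paths into the DBFS root and mounts are unavailable, local-file APIs need a different target. Unity Catalog volumes expose a POSIX-style path that does work under shared access mode; a minimal sketch, with a hypothetical volume name:

```python
# Hypothetical Unity Catalog volume. /Volumes paths behave like local files
# under shared access mode, unlike /dbfs paths into the DBFS root or mounts.
with open("/Volumes/main/default/landing/sample.csv") as f:
    print(f.readline())
```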
Language support for Unity Catalog shared access mode
R is not supported.
Scala is supported on Databricks Runtime 13.3 and above.
Spark API limitations and requirements for Unity Catalog shared access mode
RDD APIs are not supported.
DBUtils and other clients that directly read data from cloud storage are only supported when you use an external location to access the storage location, as shown in the sketch at the end of this list. See Create an external location to connect cloud storage to Databricks.
Spark Context (sc), spark.sparkContext, and sqlContext are not supported for Scala in any Databricks Runtime and are not supported for Python in Databricks Runtime 14.0 and above. Databricks recommends using the spark variable to interact with the SparkSession instance, as shown in the sketch at the end of this list.
The following sc functions are also not supported: emptyRDD, range, init_batched_serializer, parallelize, pickleFile, textFile, wholeTextFiles, binaryFiles, binaryRecords, sequenceFile, newAPIHadoopFile, newAPIHadoopRDD, hadoopFile, hadoopRDD, union, runJob, setSystemProperty, uiWebUrl, stop, setJobGroup, setLocalProperty, and getConf.
The following Scala Dataset API operations require Databricks Runtime 15.4 LTS or above: map, mapPartitions, foreachPartition, flatMap, reduce, and filter.
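Two of the items above have direct workarounds: use the spark variable instead of sc, and route dbutils storage access through an external location. A minimal sketch, assuming hypothetical storage paths registered as a Unity Catalog external location:

```python
# Use the SparkSession entry point instead of the unsupported SparkContext.
df = spark.range(10)  # instead of sc.range(0, 10)
text_df = spark.read.text(
    "abfss://raw@myaccount.dfs.core.windows.net/landing/notes.txt"
)  # instead of sc.textFile(...)

# dbutils can list cloud storage only through an external location.
files = dbutils.fs.ls("abfss://raw@myaccount.dfs.core.windows.net/landing/")
```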
UDF limitations and requirements for Unity Catalog shared access mode
User-defined functions (UDFs) have the following limitations with shared access mode:
Hive UDFs are not supported.
applyInPandas and mapInPandas require Databricks Runtime 14.3 or above.
Scala scalar UDFs require Databricks Runtime 14.2 or above. Other Scala UDFs and UDAFs are not supported.
In Databricks Runtime 14.2 and below, using a custom version of grpc, pyarrow, or protobuf in a PySpark UDF through notebook-scoped or cluster-scoped libraries is not supported because the installed version is always preferred. To find the version of installed libraries, see the System Environment section of the specific Databricks Runtime version release notes.
Non-scalar Python and Pandas UDFs, including UDAFs, UDTFs, and Pandas on Spark, require Databricks Runtime 14.3 LTS or above.
See User-defined functions (UDFs) in Unity Catalog.
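As one illustration of the version gates above, the following applyInPandas sketch assumes Databricks Runtime 14.3 or above on shared access mode; the data and column names are hypothetical.

```python
import pandas as pd

df = spark.createDataFrame(
    [("a", 1.0), ("a", 2.0), ("b", 3.0)], ["key", "value"]
)

def center(pdf: pd.DataFrame) -> pd.DataFrame:
    # Subtract each group's mean from its values.
    pdf["value"] = pdf["value"] - pdf["value"].mean()
    return pdf

centered = df.groupBy("key").applyInPandas(center, schema="key string, value double")
centered.show()
```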
Streaming limitations and requirements for Unity Catalog shared access mode
For Scala, foreach, foreachBatch, and flatMapGroupsWithState are not supported.
For Python, foreachBatch has the following behavior changes in Databricks Runtime 14.0 and above:
print() commands write output to the driver logs.
You cannot access the dbutils.widgets submodule inside the function.
Any files, modules, or objects referenced in the function must be serializable and available on Spark.
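A minimal Python foreachBatch sketch that stays within these rules: the function references only serializable objects, and its print() output lands in the driver logs on Databricks Runtime 14.0 and above. The table names and checkpoint path are hypothetical.

```python
def write_batch(batch_df, batch_id):
    # Output from print() goes to the driver logs on DBR 14.0 and above.
    print(f"Processing batch {batch_id} with {batch_df.count()} rows")
    batch_df.write.mode("append").saveAsTable("main.default.events")

query = (
    spark.readStream.table("main.default.raw_events")
    .writeStream
    .foreachBatch(write_batch)
    .option("checkpointLocation", "/Volumes/main/default/checkpoints/events")
    .start()
)
```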
For Scala, from_avro requires Databricks Runtime 14.2 or above.
applyInPandasWithState requires Databricks Runtime 14.3 LTS or above.
Working with socket sources is not supported.
The sourceArchiveDir must be in the same external location as the source when you use option("cleanSource", "archive") with a data source managed by Unity Catalog.
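A minimal sketch of the archive option with both paths in the same hypothetical external location:

```python
# Both the source directory and sourceArchiveDir live in the same Unity
# Catalog external location (hypothetical paths).
stream = (
    spark.readStream.format("json")
    .schema("id INT, payload STRING")
    .option("cleanSource", "archive")
    .option("sourceArchiveDir", "abfss://raw@myaccount.dfs.core.windows.net/archive/")
    .load("abfss://raw@myaccount.dfs.core.windows.net/incoming/")
)
```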
For Kafka sources and sinks, the following options are not supported:
kafka.sasl.client.callback.handler.class
kafka.sasl.login.callback.handler.class
kafka.sasl.login.class
kafka.partition.assignment.strategy
The following Kafka options are supported in Databricks Runtime 13.3 LTS and above but unsupported in Databricks Runtime 12.2 LTS. You can only specify external locations managed by Unity Catalog for these options:
ssl.truststore.location
ssl.keystore.location
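A minimal Kafka source sketch that avoids the unsupported options and points the SSL truststore at a Unity Catalog external location; the broker, topic, and paths are hypothetical.

```python
kafka_df = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1.example.com:9093")
    .option("subscribe", "events")
    .option("kafka.security.protocol", "SSL")
    # Must resolve to a Unity Catalog external location on DBR 13.3 LTS+.
    .option("kafka.ssl.truststore.location",
            "abfss://certs@myaccount.dfs.core.windows.net/kafka/truststore.jks")
    .load()
)
```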
For Scala, StreamingQueryListener requires Databricks Runtime 16.1 and above.
For Python, StreamingQueryListener requires Databricks Runtime 14.3 LTS or above to use credentials or interact with objects managed by Unity Catalog on shared compute.
Network and file system access limitations and requirements for Unity Catalog shared access mode
You must run commands on compute nodes as a low-privilege user forbidden from accessing sensitive parts of the filesystem.
In Databricks Runtime 11.3 LTS and below, you can only create network connections to ports 80 and 443.
General limitations for Unity Catalog
The following limitations apply to all Unity Catalog-enabled access modes.