Advanced usage of Databricks Connect for Python

Note

This article covers Databricks Connect for Databricks Runtime 14.0 and above.

This article describes topics that go beyond the basic setup of Databricks Connect.

Logging and debug logs

Databricks Connect for Python produces logs using standard Python logging.

Logs are emitted to the standard error stream (stderr), and by default only log messages at the WARN level and higher are emitted.

Setting the environment variable SPARK_CONNECT_LOG_LEVEL=debug changes this default and prints all log messages at the DEBUG level and higher.
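
For example, to enable debug logging for a single run of a Python script, the variable can be set inline in the shell (the script name my_app.py is only illustrative):

SPARK_CONNECT_LOG_LEVEL=debug python my_app.py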

PySpark shell

Databricks Connect for Python ships with a pyspark binary, which is a PySpark REPL configured to use Databricks Connect. The REPL can be started by running:

pyspark

When started with no additional parameters, it picks up default credentials from the environment (for example, the DATABRICKS_ environment variables or the DEFAULT configuration profile) to connect to the Databricks cluster.
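
For example, when authenticating with a Databricks personal access token, the connection details can be supplied through the standard Databricks client environment variables before launching the shell (the placeholder values are illustrative and mirror those used later in this article):

export DATABRICKS_HOST=https://<workspace-instance-name>
export DATABRICKS_TOKEN=<access-token-value>
export DATABRICKS_CLUSTER_ID=<cluster-id>
pyspark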

Once the REPL starts up, the spark object is available and configured to run Apache Spark commands on the Databricks cluster.

>>> spark.range(3).show()
+---+
| id|
+---+
|  0|
|  1|
|  2|
+---+

The REPL can be configured to connect to a different remote by passing the --remote parameter with a Spark Connect connection string.

pyspark --remote "sc://<workspace-instance-name>:443/;token=<access-token-value>;x-databricks-cluster-id=<cluster-id>"

Additional HTTP headers

Databricks Connect communicates with Databricks clusters via gRPC over HTTP/2.

Some advanced users may choose to install a proxy service between the client and the Databricks cluster to gain better control over the requests coming from their clients.

In some cases, these proxies may require custom headers in the HTTP requests.

The header() method on the session builder can be used to add custom headers to these HTTP requests.

from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.header('x-custom-header', 'value').getOrCreate()
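
Since header() returns the builder, as the example above suggests, multiple custom headers can be added by chaining calls before getOrCreate(). The following is a minimal sketch, assuming default credentials are available in the environment; the header names and values are illustrative:

from databricks.connect import DatabricksSession

# Build a session that attaches two custom headers to every request,
# for example headers expected by a proxy in front of the workspace.
spark = (
    DatabricksSession.builder
    .header('x-custom-header', 'value')          # header required by the proxy (illustrative)
    .header('x-request-source', 'etl-pipeline')  # second illustrative header
    .getOrCreate()
)

# Verify that the session works end to end
spark.range(3).show()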