Model serving with Databricks
Preview
This feature is in Public Preview and is supported in us-east1 and us-central1.
This article describes Mosaic AI Model Serving, including its advantages and limitations.
What is Mosaic AI Model Serving?
Mosaic AI Model Serving provides a unified interface to deploy, govern, and query AI models for real-time inference. Each model you serve is available as a REST API that you can integrate into your web or client application.
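Each serving endpoint exposes an `invocations` REST route. The following is a minimal sketch of building a scoring request against it; the workspace host, endpoint name, token, and input shape are all placeholders, and the final send is left commented out because it requires a live workspace.

```python
import json
import urllib.request

WORKSPACE_URL = "https://<workspace-host>"   # placeholder workspace host
ENDPOINT_NAME = "my-model-endpoint"          # placeholder endpoint name
TOKEN = "<personal-access-token>"            # placeholder credential

# Build the scoring request for the endpoint's invocations route.
url = f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations"
payload = json.dumps({"inputs": [[1.0, 2.0, 3.0]]}).encode()  # shape depends on your model signature
request = urllib.request.Request(
    url,
    data=payload,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
)
# response = urllib.request.urlopen(request)  # uncomment against a live endpoint
print(url)
```

The same request pattern applies to any served model, which is what makes the endpoint easy to integrate into a web or client application.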
Model Serving provides a highly available and low-latency service for deploying models. The service automatically scales up or down to meet demand changes, saving infrastructure costs while optimizing latency performance. This functionality uses serverless compute. See the Model Serving pricing page for more details.
Model Serving supports serving the following model types:
Custom models. These are Python models packaged in the MLflow format. They must be registered in Unity Catalog. Examples include scikit-learn, XGBoost, PyTorch, and Hugging Face transformer models.
External models. These are generative AI models that are hosted outside of Databricks. Examples include OpenAI’s GPT-4, Anthropic’s Claude, and others. Endpoints that serve external models can be centrally governed, and customers can establish rate limits and access control for them.
Model Serving offers a unified REST API and the MLflow Deployments API for CRUD and querying tasks. In addition, it provides a single UI to manage all your models and their respective serving endpoints.
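As a hedged sketch of what an endpoint definition looks like through these APIs, the configuration below is shown as a plain dict so it can be inspected without a live workspace. The entity name, endpoint name, and version are placeholders.

```python
# Endpoint configuration for a custom model registered in Unity Catalog.
# "main.default.my_model" and "my-endpoint" are placeholder names.
endpoint_config = {
    "served_entities": [
        {
            "entity_name": "main.default.my_model",  # catalog.schema.model
            "entity_version": "1",
            "workload_size": "Small",
            "scale_to_zero_enabled": True,           # scale down when idle
        }
    ]
}

# With workspace credentials configured, the MLflow Deployments client
# would create and query the endpoint along these lines:
# from mlflow.deployments import get_deploy_client
# client = get_deploy_client("databricks")
# client.create_endpoint(name="my-endpoint", config=endpoint_config)
# client.predict(endpoint="my-endpoint", inputs={"inputs": [[1.0, 2.0, 3.0]]})
print(endpoint_config["served_entities"][0]["entity_name"])
```

The same client covers listing, updating, and deleting endpoints, which is what the CRUD tasks above refer to.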
For an introductory tutorial on how to serve custom models on Databricks, see Tutorial: Deploy and query a custom model.
Why use Model Serving?
Deploy and query any models: Model Serving provides a unified interface so you can manage all models in one location and query them with a single API, regardless of whether they are hosted on Databricks or externally. This approach simplifies the process of experimenting with, customizing, and deploying models in production across various clouds and providers.
Reduce cost with optimized inference and fast scaling: Databricks has implemented a range of optimizations to ensure you get the best throughput and latency for large models. The endpoints automatically scale up or down to meet demand changes, saving infrastructure costs while optimizing latency performance. Monitor model serving costs.
Bring reliability and security to Model Serving: Model Serving is designed for high-availability, low-latency production use and can support over 25K queries per second with an overhead latency of less than 50 ms. The serving workloads are protected by multiple layers of security, ensuring a secure and reliable environment for even the most sensitive tasks.
Note
Model Serving does not provide security patches to existing model images because of the risk of destabilization to production deployments. A new model image created from a new model version will contain the latest patches. Reach out to your Databricks account team for more information.
Requirements
Registered model in Unity Catalog.
Permissions on the registered models as described in Serving endpoint ACLs.
MLflow 1.29 or higher.
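To illustrate the Unity Catalog requirement: registered models use a three-level `catalog.schema.model` name. The sketch below builds such a name; the catalog, schema, and model names are placeholders, and the actual MLflow registration calls are commented out because they require a real workspace.

```python
# Placeholder three-level Unity Catalog name: catalog.schema.model
CATALOG, SCHEMA, MODEL = "main", "default", "my_model"
registered_model_name = f"{CATALOG}.{SCHEMA}.{MODEL}"

# Against a live workspace, MLflow would log and register the model:
# import mlflow
# mlflow.set_registry_uri("databricks-uc")  # target Unity Catalog, not
#                                           # the workspace model registry
# with mlflow.start_run():
#     mlflow.sklearn.log_model(
#         sk_model=model,
#         artifact_path="model",
#         registered_model_name=registered_model_name,
#     )
print(registered_model_name)
```

Once the model is registered this way and you have the serving endpoint ACL permissions above, no further setup is needed before creating an endpoint.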
Enable Model Serving for your workspace
No additional steps are required to enable Model Serving in your workspace.
Limitations and region availability
Mosaic AI Model Serving imposes default limits to ensure reliable performance. See Model Serving limits and regions. If you have feedback on these limits or an endpoint in an unsupported region, reach out to your Databricks account team.
Data protection in Model Serving
Databricks takes data security seriously and understands the importance of the data you analyze with Mosaic AI Model Serving. It implements the following security controls to protect your data.
Every customer request to Model Serving is logically isolated, authenticated, and authorized.
Mosaic AI Model Serving encrypts all data at rest (AES-256) and in transit (TLS 1.2+).
For all paid accounts, Mosaic AI Model Serving does not use user inputs submitted to the service or outputs from the service to train any models or improve any Databricks services.
For Databricks Foundation Model APIs, as part of providing the service, Databricks may temporarily process and store inputs and outputs for the purposes of preventing, detecting, and mitigating abuse or harmful uses. Your inputs and outputs are isolated from those of other customers, stored in the same region as your workspace for up to thirty (30) days, and only accessible for detecting and responding to security or abuse concerns.