Monitor model quality and endpoint health
Preview

Mosaic AI Model Serving is in Public Preview and is supported in us-east1 and us-central1.
Mosaic AI Model Serving provides advanced tooling for monitoring the quality and health of models and their deployments. The following table gives an overview of each available monitoring tool.
| Tool | Description | Purpose | Access |
| --- | --- | --- | --- |
| Service logs | Captures the stdout and stderr streams from the model serving endpoint. | Useful for debugging during model deployment. Use `print(..., flush=True)` so output appears in the logs immediately. | Accessible using the Logs tab in the Serving UI. Logs are streamed in real time and can be exported through the API (see the sketch after this table). |
| Build logs | Displays output from the process that automatically creates a production-ready Python environment for the model serving endpoint. | Useful for diagnosing model deployment and dependency issues. | Available upon completion of the model serving build under Build logs in the Logs tab. Logs can be exported through the API. |
| Endpoint health metrics | Provides insights into infrastructure metrics such as latency, request rate, error rate, CPU usage, and memory usage. | Important for understanding the performance and health of the serving infrastructure. | Available by default in the Serving UI for the last 14 days. Data can also be streamed to observability tools in real time (see the sketch after this table). |
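
Both service logs and build logs can be pulled programmatically. The following is a minimal sketch, assuming the REST paths `/api/2.0/serving-endpoints/{name}/served-models/{served_model_name}/logs` and `.../build-logs` and using placeholder endpoint and served model names; confirm the exact paths against the Serving API reference for your workspace.

```python
import os

import requests

# Placeholders: workspace URL and personal access token are read from the environment.
HOST = os.environ["DATABRICKS_HOST"]    # for example, https://<workspace-url>
TOKEN = os.environ["DATABRICKS_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

ENDPOINT_NAME = "my-endpoint"        # hypothetical serving endpoint name
SERVED_MODEL_NAME = "my-model-1"     # hypothetical served model name


def get_service_logs() -> dict:
    """Fetch the latest service logs (stdout/stderr) for a served model."""
    url = f"{HOST}/api/2.0/serving-endpoints/{ENDPOINT_NAME}/served-models/{SERVED_MODEL_NAME}/logs"
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    return response.json()


def get_build_logs() -> dict:
    """Fetch the build logs produced while creating the model's Python environment."""
    url = f"{HOST}/api/2.0/serving-endpoints/{ENDPOINT_NAME}/served-models/{SERVED_MODEL_NAME}/build-logs"
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(get_service_logs())
    print(get_build_logs())
```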
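
Endpoint health metrics can likewise be scraped into external observability tools. The sketch below assumes a `/api/2.0/serving-endpoints/{name}/metrics` path that returns metrics in a Prometheus-style text format; the endpoint name is a placeholder, and the exact path and response format should be confirmed against the Serving API reference.

```python
import os

import requests

HOST = os.environ["DATABRICKS_HOST"]
TOKEN = os.environ["DATABRICKS_TOKEN"]
ENDPOINT_NAME = "my-endpoint"  # hypothetical serving endpoint name


def get_endpoint_metrics() -> str:
    """Fetch latency, request rate, error rate, CPU, and memory metrics
    for a serving endpoint as Prometheus-style text."""
    url = f"{HOST}/api/2.0/serving-endpoints/{ENDPOINT_NAME}/metrics"
    response = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
    response.raise_for_status()
    return response.text


if __name__ == "__main__":
    # The raw text can be written to a file or forwarded to an observability tool.
    print(get_endpoint_metrics())
```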