You must set up monitoring separately for each model that you want to monitor. Before you begin, confirm the following prerequisites:
- When you trained the model, you registered or used a Domino Training Set in the same project from which you plan to publish the Model API. A registration sketch follows this list.
- If you will perform Model Quality analysis, set up your data sources for ground truth data. See Connect a Data Source.
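For the Training Set prerequisite, the following is a minimal sketch of registering a Training Set version from a training workload. It assumes the `domino.training_sets` client from Domino's Python library; the file path, Training Set name, and column lists are placeholders for your own data.

```python
import pandas as pd

from domino.training_sets import TrainingSetClient, model

# Placeholder path: the data the model was trained on.
training_df = pd.read_csv("data/training_data.csv")

# Register a Training Set version in the current project so it can later
# be linked to the Model API for drift detection.
version = TrainingSetClient.create_training_set_version(
    training_set_name="my-training-set",   # placeholder name
    df=training_df,
    key_columns=[],                        # unique-ID columns, if any
    target_columns=["label"],              # placeholder target column
    exclude_columns=[],
    monitoring_meta=model.MonitoringMeta(
        categorical_columns=["label"],     # placeholder categorical columns
        timestamp_columns=[],
        ordinal_columns=[],
    ),
)
print(version)
```

Register the version in the same project where you will publish the Model API; drift detection later compares live prediction data against the columns captured here.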
Set up monitoring for Model APIs
The following topics explain the steps:
- Set up Prediction Data Capture. Prediction data is required for monitoring both data drift and model quality. See the first sketch after this list.
- Publish the Model API. Domino starts to log prediction data (and convert it to Parquet files), but it does not yet produce monitoring data.
- Set up Drift Detection. Register a Training Set to monitor data drift.
- Optional: Configure notifications (see Set up Notifications) or change the scheduled checks.
- Validate Your Setup. Confirm that your prediction data is being captured.
- Set up Model Quality Monitoring. Ingest ground truth data to monitor the quality of the model’s predictions. See the second sketch after this list.
- Optional: Configure notifications (see Set up Notifications) or change the scheduled checks.
- Set up Cohort Analysis. Get insights into model quality so you can find underperforming data hotspots for model remediation.
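To illustrate Prediction Data Capture, here is a minimal sketch of a Model API scoring function that records each prediction. It assumes the `domino_data_capture` package shown in Domino's examples; the feature names, prediction name, and the stand-in `score` function are placeholders.

```python
import uuid

from domino_data_capture.data_capture_client import DataCaptureClient

# Placeholder feature and prediction names; use your model's schema.
feature_names = ["sepal_length", "sepal_width", "petal_length", "petal_width"]
predict_names = ["variety"]

# One client per Model API; it records each prediction event so Domino
# can convert the captured data to Parquet files for monitoring.
capture_client = DataCaptureClient(feature_names, predict_names)

def score(features):
    # Stand-in for your trained model's predict() call.
    return "setosa"

def predict(sepal_length, sepal_width, petal_length, petal_width):
    feature_values = [sepal_length, sepal_width, petal_length, petal_width]
    variety = score(feature_values)

    # A unique event ID lets ground truth labels be matched back to this
    # prediction later for Model Quality analysis.
    event_id = str(uuid.uuid4())
    capture_client.capturePrediction(feature_values, [variety], event_id=event_id)

    return {"variety": variety, "event_id": event_id}
```

Returning the event ID to the caller makes it easier to record ground truth for the same event later.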
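For Model Quality Monitoring, ground truth ingestion pairs each captured event ID with the label that was eventually observed. The following is a minimal sketch of preparing such a file with pandas; the column names and file name are placeholders and must match what you register in your ground truth configuration.

```python
import pandas as pd

# Each row pairs a prediction's event ID (captured above) with the label
# observed later. Column names are placeholders.
ground_truth = pd.DataFrame(
    {
        "event_id": ["event-0001", "event-0002"],
        "variety_ground_truth": ["setosa", "versicolor"],
    }
)

# Write the file to the data source you connected for ground truth data
# (see Connect a Data Source), then register it for the monitored model.
ground_truth.to_csv("ground_truth_2024_01_15.csv", index=False)
```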