Set up monitoring for Model APIs

Monitoring is configured per model, so repeat this setup for each Model API that you want to monitor.

Prerequisites

  • The model was trained with a Domino Training Set that is registered in the same project from which you plan to publish the Model API.

  • If you plan to perform Model Quality analysis, set up the data sources that hold your ground truth data. See Connect a Data Source.

Steps

The following topics explain each step in order:

  1. Set up Prediction Data Capture. Prediction data is required for monitoring both data drift and model quality (see the first sketch after this list).

  2. Publish the Model API. Domino starts to log prediction data (and convert it to Parquet files), but it does not yet produce monitoring data.

  3. Set up Drift Detection. Register a training set to use as the baseline for data drift monitoring (see the second sketch after this list).

  4. Validate your Setup. Confirm that your prediction data is being captured.

  5. Set up Model Quality Monitoring. Ingest ground truth data to monitor the quality of the model’s predictions (see the third sketch after this list).

  6. Set up Cohort Analysis. Get insights into model quality so you can find underperforming data hotspots for model remediation.
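
The first sketch shows how prediction data capture (step 1) is typically wired into a Model API scoring function with Domino's Python data-capture library. The DataCaptureClient import and capturePrediction call follow the library's documented usage, but the feature names, stand-in scoring logic, and response shape are illustrative assumptions, not a definitive implementation.

    import uuid
    from domino_data_capture.data_capture_client import DataCaptureClient

    # Declare the capture schema once, at module load time. These are
    # example feature and output names; use your model's actual columns.
    feature_names = ["age", "bmi", "blood_pressure"]
    prediction_names = ["probability_of_readmission"]
    capture_client = DataCaptureClient(feature_names, prediction_names)

    def predict(age, bmi, blood_pressure):
        # Stand-in for calling your trained model.
        score = min(1.0, 0.005 * age + 0.01 * bmi)

        # Capture each prediction so Domino can log it (and convert it to
        # Parquet) for drift and model-quality monitoring. The event_id
        # lets ground truth labels be joined back to this prediction later.
        event_id = str(uuid.uuid4())
        capture_client.capturePrediction(
            [age, bmi, blood_pressure],  # feature values, same order as names
            [score],                     # prediction values, same order as names
            event_id=event_id,
        )
        return {"probability_of_readmission": score, "event_id": event_id}

Returning the event ID to the caller is a deliberate design choice: whoever records the real outcome can quote it back, which makes ground truth collection (step 5) straightforward.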
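For drift detection (step 3), the baseline is a registered training set. The second sketch uses Domino's Python TrainingSets client; the training-set name, the DataFrame contents, and the column roles are assumptions for illustration, and the exact signature may vary by Domino version, so check the TrainingSets reference.

    import pandas as pd
    from domino.training_sets import TrainingSetClient, model

    # Example training data; replace with the DataFrame the model was
    # actually trained on.
    training_df = pd.DataFrame({
        "age": [54, 61, 47],
        "bmi": [27.1, 31.4, 24.8],
        "blood_pressure": [130, 142, 118],
        "probability_of_readmission": [0.22, 0.41, 0.13],
    })

    # Register a version of the training set, marking the target column
    # and any monitoring metadata (empty here) such as categorical columns.
    version = TrainingSetClient.create_training_set_version(
        training_set_name="readmission-training-set",  # example name
        df=training_df,
        key_columns=[],
        target_columns=["probability_of_readmission"],
        exclude_columns=[],
        monitoring_meta=model.MonitoringMeta(
            categorical_columns=[],
            timestamp_columns=[],
            ordinal_columns=[],
        ),
    )
    print(f"Registered version {version.number} of {version.training_set_name}")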
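For model quality monitoring (step 5), the ground truth data you ingest must be joinable back to captured predictions, typically on the event ID recorded at prediction time. The third sketch assembles such a file; the column names are assumptions and must match whatever you configure when you register the ground truth dataset. Upload the result to the data source you connected as a prerequisite.

    import pandas as pd

    # Pair each captured prediction's event_id with the outcome that was
    # eventually observed. Column names are illustrative assumptions.
    ground_truth = pd.DataFrame({
        "event_id": ["1f2a3b4c-5d6e-4f70-8a9b-0c1d2e3f4a5b"],
        "was_readmitted": [1],
    })

    # Write the file, then upload it to the connected data source so
    # Domino can ingest it and compute model-quality metrics.
    ground_truth.to_csv("ground_truth.csv", index=False)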