Domino’s Model Monitoring ingests data from supported data sources to analyze models in production, and alerts you when a model’s performance falls outside a specified range.
Model monitoring compares training data against prediction data to track drift in the model’s input features and prediction variables, and alerts you when any feature’s drift exceeds a configurable threshold. If ground truth data is available for the model’s predicted values, Domino can ingest it to produce model quality metrics using standard measures such as accuracy, precision, and recall, and alert you when any metric crosses its threshold.
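To make these two kinds of checks concrete, the sketch below illustrates the general idea: per-feature drift measured with a two-sample statistical test and quality metrics computed from ground truth labels. This is not Domino’s implementation; the Kolmogorov–Smirnov test, the threshold value, and the column handling are assumptions chosen for illustration.

```python
# Illustrative sketch only -- not Domino's internal drift or metric computation.
import pandas as pd
from scipy.stats import ks_2samp
from sklearn.metrics import accuracy_score, precision_score, recall_score

DRIFT_THRESHOLD = 0.1  # assumed per-feature drift threshold


def drifted_features(training: pd.DataFrame, predictions: pd.DataFrame) -> list[str]:
    """Flag every shared feature whose KS drift statistic exceeds the threshold."""
    flagged = []
    for col in training.columns.intersection(predictions.columns):
        stat, _ = ks_2samp(training[col], predictions[col])
        if stat > DRIFT_THRESHOLD:
            flagged.append(col)
    return flagged


def quality_metrics(ground_truth, predicted) -> dict:
    """Standard model quality measures once ground truth labels are available."""
    return {
        "accuracy": accuracy_score(ground_truth, predicted),
        "precision": precision_score(ground_truth, predicted, average="weighted"),
        "recall": recall_score(ground_truth, predicted, average="weighted"),
    }
```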
Use the monitoring dashboard to observe your data science models.
You can set up continuous monitoring for your model: use APIs to ingest prediction data and ground truth data as it becomes available, and define scheduled checks that periodically alert on drift and model quality metrics.
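The following is a minimal sketch of what API-driven ingestion might look like from a client script. The endpoint paths, payload shape, dataset locations, and environment variables are assumptions for illustration; consult the Model Monitoring API reference for the actual routes and schemas.

```python
# Hypothetical ingestion sketch -- routes and payloads are placeholders.
import os
import requests

DOMINO_HOST = os.environ["DOMINO_HOST"]    # e.g. https://domino.example.com (assumed)
API_KEY = os.environ["DOMINO_API_KEY"]     # assumed authentication mechanism
MODEL_ID = "my-monitored-model"            # hypothetical model identifier


def register_dataset(kind: str, dataset_path: str) -> None:
    """Register a prediction or ground truth dataset for monitoring (illustrative)."""
    resp = requests.put(
        f"{DOMINO_HOST}/api/model-monitor/{MODEL_ID}/datasets/{kind}",  # hypothetical route
        headers={"X-Domino-Api-Key": API_KEY},
        json={"datasetPath": dataset_path},
        timeout=30,
    )
    resp.raise_for_status()


# Ingest new prediction data as it is produced, and ground truth when it arrives.
register_dataset("prediction", "s3://bucket/predictions/2024-06-01.csv")
register_dataset("ground-truth", "s3://bucket/labels/2024-06-01.csv")
```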
Use the APIs to integrate monitoring into existing business processes and to retrain models programmatically.
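As one example of closing the loop, a script could read the latest quality metrics and start a retraining job when a metric breaches its threshold. The sketch below is hypothetical: the endpoints, metric names, threshold, and job command are all assumptions, not Domino’s documented API.

```python
# Hypothetical retraining trigger -- routes, metric names, and job payload are placeholders.
import os
import requests

DOMINO_HOST = os.environ["DOMINO_HOST"]
HEADERS = {"X-Domino-Api-Key": os.environ["DOMINO_API_KEY"]}
MODEL_ID = "my-monitored-model"            # hypothetical identifier
ACCURACY_FLOOR = 0.85                      # assumed alert threshold

# Fetch the latest model quality metrics (hypothetical route).
metrics = requests.get(
    f"{DOMINO_HOST}/api/model-monitor/{MODEL_ID}/metrics",
    headers=HEADERS,
    timeout=30,
).json()

if metrics.get("accuracy", 1.0) < ACCURACY_FLOOR:
    # Start a retraining job; the project and command are placeholders.
    requests.post(
        f"{DOMINO_HOST}/api/jobs/start",   # hypothetical route
        headers=HEADERS,
        json={"project": "my-project", "command": "python retrain.py"},
        timeout=30,
    ).raise_for_status()
```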