Automated checks run validation logic as bundles progress through approval stages. Policies can include two types of automated checks: metrics checks and scripted checks.
Results are recorded in the governance notebook and attached as evidence for approvers to review. All check results are logged and traceable, supporting compliance requirements and reproducibility.
You need the GovernanceAdmin role to define checks in policies. See Roles and security for details on role assignment.
For scripted checks, you also need:
- An environment with required dependencies
- Access to a hardware tier
- Sufficient volume storage for script execution
Metrics checks validate model performance using metadata captured during model registration or experimentation. Each check defines metric names, optional aliases, and threshold conditions.
Domino evaluates metrics checks automatically when a bundle enters a stage that requires them. If thresholds aren't met, Domino creates findings for approvers, who review the results and decide whether to proceed.
How metrics checks work
When a bundle reaches a stage with metrics checks, Domino queries the model metadata for matching metric names or aliases. If a threshold is defined, Domino compares the actual value against the expected value using the specified operator.
- Metric meets threshold: Check passes
- Metric doesn't meet threshold: Check fails, approver notified
- Metric not found: Check fails, missing metric logged
- No threshold defined: Metric value displayed for review
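The evaluation rules above can be sketched in a few lines of Python. This is a simplified illustration of the decision logic, not Domino's actual implementation; the function name, operator set, and result format are assumptions.

```python
# Hypothetical sketch of metrics-check evaluation; not Domino's actual code.
import operator
from typing import Optional

# Comparison operators a threshold condition might specify (assumed set).
OPERATORS = {
    ">=": operator.ge,
    "<=": operator.le,
    ">": operator.gt,
    "<": operator.lt,
    "==": operator.eq,
}

def evaluate_metrics_check(metadata: dict, metric: str,
                           op: Optional[str] = None,
                           threshold: Optional[float] = None) -> dict:
    """Compare a registered metric against an optional threshold."""
    if metric not in metadata:
        # Metric not found: the check fails and the missing metric is logged.
        return {"status": "fail", "reason": f"metric '{metric}' not found"}
    value = metadata[metric]
    if op is None or threshold is None:
        # No threshold defined: surface the value for the approver to review.
        return {"status": "review", "value": value}
    passed = OPERATORS[op](value, threshold)
    return {"status": "pass" if passed else "fail", "value": value}

print(evaluate_metrics_check({"accuracy": 0.93}, "accuracy", ">=", 0.9))
# → {'status': 'pass', 'value': 0.93}
```

A metric alias would simply be a second name looked up in `metadata` before declaring the metric missing.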
Governance administrators define these inputs when configuring policies. See Define metrics checks in Governance policy components for YAML configuration details.
Scripted checks run custom validation logic as part of a policy. Use them to standardize complex evaluations like fairness assessments, bias detection, or compliance reporting. Scripts run in a specified environment and generate evidence that’s attached to the governance notebook.
Unlike metrics checks that evaluate existing metadata, scripted checks execute custom code in a controlled environment. You define the script command, input parameters, and expected outputs. When the check runs, Domino launches a job, executes your script, and captures the results as evidence in the governance notebook.
This approach lets you implement organization-specific validation logic while maintaining consistent execution and audit trails across all governed models.
How scripted checks work
When a bundle enters a stage with scripted checks, Domino launches a job in the specified environment and hardware tier. The script runs with the provided parameters, generates output, and attaches results to the governance notebook. A link to the execution run is included for reproducibility.
- Execution: Job runs in the specified environment with command-line parameters
- Capture: Output files (TXT, PNG, JSON, CSV) are uploaded to the governance notebook
- Evidence: Results appear inline for approvers, with a link to reproduce the run
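A scripted check's entry point might look like the following sketch: a script that accepts its parameters on the command line and writes a result file for Domino to capture as evidence. The parameter names, file name, and placeholder fairness logic are all hypothetical.

```python
# Hypothetical scripted-check skeleton; names and logic are illustrative only.
import argparse
import json

def run_bias_check(threshold: float) -> dict:
    # Placeholder validation logic; a real check would load the model
    # and evaluation data, then compute an actual fairness metric.
    disparity = 0.03  # stand-in for a computed demographic disparity
    return {"disparity": disparity, "passed": disparity <= threshold}

def main(argv=None):
    parser = argparse.ArgumentParser(description="Example fairness check")
    parser.add_argument("--threshold", type=float, default=0.05)
    args = parser.parse_args(argv)
    result = run_bias_check(args.threshold)
    # Write a JSON output file that Domino uploads to the governance notebook.
    with open("bias_check_result.json", "w") as f:
        json.dump(result, f, indent=2)
    return result

if __name__ == "__main__":
    main()
```

Because the script is an ordinary command-line program, the same code can be run locally for debugging and re-run from the linked execution for reproducibility.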
See Define scripted checks in Policy components for YAML configuration details.
- Manual evidence: add text responses, files, and artifacts to bundles
- Monitoring checks: connect to Domino Model Monitoring
- Policy components: YAML configuration for metrics and scripted checks
