Domino Governance lets you define how projects are reviewed, approved, and classified using policies written in YAML. These policies are managed in the Governance Console and applied to governed bundles to enforce consistent, auditable workflows.
Each policy defines what evidence needs to be collected, who must approve it, how risk is assessed, and whether any warnings or gates should apply. Once a policy is published, it can’t be edited. This guarantees that any bundle stays tied to the exact policy version it was approved under.
You’ll need the GovernanceAdmin role to create or edit policies. This role is already included in the CloudAdmins group.
Policies are defined in YAML using a modular structure. Each policy is made up of components like stages, approvals, inputs, and checks. Each component is declared with an artifactType and a details section that sets its behavior.
Writing policies in YAML makes them easy to version, reuse, and review, whether you’re building one from scratch or editing an existing template.
This page explains how to define each type of policy component using YAML. It includes examples for:
- Organizing stages and approvals
- Grouping and reusing evidence
- Running metrics checks and scripted checks
- Creating input fields and guidance elements
- Defining classification logic and visibility rules
- Gating high-risk actions
Governance Admins define stages in YAML to organize evidence and approvals. Each stage can include:
- One or more evidence sets (for direct evidence)
- One or more approvals, each with a name, list of approvers, and optional evidence
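Putting these pieces together, a stage might be sketched as follows. This is an illustrative sketch only: the evidenceSet and approvals keys, and all names shown, are assumptions based on the patterns in the examples on this page, not a verified schema.

```yaml
stages:
  - name: Validation                 # illustrative stage name
    evidenceSet:                     # assumed key for the stage's direct evidence
      - id: Local.validation-evidence
    approvals:                       # assumed key for the stage's approvals
      - name: 'Stage 4: validation sign off'
        approvers:
          - model-gov-org
```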
Approvals
Approvals are defined within a stage. Each approval includes:
- A name
- A list of approvers (users or organizations)
- Optional evidence, which must be satisfied before approval can be granted
Evidence may be used to support the approval process, but it is not required in all cases.
Example: Define an approval with evidence
In this example, the stage includes an approval named Stage 4: validation sign off, which includes a checklist prompt for the approver:
- name: 'Stage 4: validation sign off'
approvers:
- model-gov-org
evidence:
id: Local.validation-approval-body
name: Sign-off
description: The checklist for approvals
definition:
- artifactType: input
details:
label: "Have you read the model validation reports?"
type: radio
options:
- "Yes"
- "No"
Sequential workflows
Sequential workflows define multi-stage approval processes where each stage must be completed before the next becomes available. Stages unlock in order, mirroring real-world review flows and helping maintain control throughout the governance process.
Progression is gated by required fields. All mandatory information must be provided before a bundle can move forward, supporting structured, auditable workflows and built-in compliance.
Sequential workflows are currently defined in YAML using the enforceSequentialOrder field.
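As a sketch, the flag might be combined with ordered stages like this. The top-level placement of enforceSequentialOrder and the stage names are assumptions for illustration:

```yaml
enforceSequentialOrder: true     # assumed top-level placement
stages:
  - name: Development review     # must be completed first
  - name: Validation sign off    # unlocks after the previous stage
```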
Evidence defines the inputs, approvals, or checks required during a stage. You can define evidence directly in a stage, or organize it into an evidenceSet for reuse and clarity.
When you define local evidence, it must be declared in full the first time it’s used. Later references can reuse it by id.
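As an illustrative sketch (the stage names and the evidenceSet key are assumptions), evidence declared in full in one stage can later be referenced by its id alone:

```yaml
stages:
  - name: First review                   # illustrative name
    evidenceSet:                         # assumed key
      - id: Local.risk-questionnaire     # full declaration on first use
        name: Risk Questionnaire
        description: Questions about model risk
        definition:
          - artifactType: input
            details:
              type: textinput
              label: "Describe the key model risks"
  - name: Final review
    evidenceSet:
      - id: Local.risk-questionnaire     # later reference reuses the id
```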
Metrics checks
Metrics checks use model metadata to validate performance against thresholds. These checks run automatically and reduce manual review.
Each check supports aliases to match metric names, as well as threshold logic to compare against expected values.
Example: Define metrics checks
This check validates model accuracy using multiple aliases and a minimum threshold of 0.8.
metrics:
- id: Local.model-quality
name: Model Quality
description: Describe the model quality
definition:
- artifactType: metadata
details:
type: modelmetric
metrics:
- name: Acc
label: Accuracy
aliases:
- acc
- Correct Classification Rate
- Percentage Correct
threshold:
operator: '>='
value: 0.8
Scripted checks
Scripted checks run custom validation logic as part of a policy. Each one defines a command, parameters, and expected outputs.
Scripts run in a specified environment and can generate evidence. Use them to standardize complex reviews like fairness evaluations.
Example: Define scripted checks
This scripted check runs a command-line model evaluation with input parameters and produces text and image output:
- artifactType: policyScriptedCheck
details:
name: Ethic and Fairness Evaluation
label: Ethic and Fairness Evaluation
command: evaluate_model.py create --model-hub ${model_hub} --model-name ${model_name}
parameters:
- name: model_hub
type: text
default: openai
- name: model_name
type: text
default: gpt-4
outputTypes:
- txt
- png
environmentId: [your environmentID]
hardwareTierId: small-k8s
volumeSizeGiB: 4
Input artifacts define form elements used to collect user input during policy execution. These inputs appear in the governed bundle approval interface and can be used as evidence, classification references, or dynamic visibility triggers.
Radio buttons
Radio buttons present users with a single-select list of labeled options. Each is defined by a display label and submitted value.
Example: Define radio buttons
This example defines a radio group with three labeled choices:
- artifactType: input
details:
type: radio
label: "How would you rate the model risk?"
options:
- label: "High"
value: "High"
- label: "Medium"
value: "Medium"
- label: "Low"
value: "Low"
tooltip: "Guidance text"
Example: Define text inputs
This example defines a single-line input field with placeholder and help text (helpText):
- artifactType: input
details:
type: textinput
label: "What are the expected business benefits?"
placeholder: "Explain the benefit"
helpText: "The text under the input box to help the user"
Example: Define multi-select dropdowns
This example defines a multi-select input for selecting data sets:
- artifactType: input
details:
type: multiSelect
label: "Please select the data sets used in the model."
options:
- label: "data set1"
value: "dataset1"
- label: "data set2"
value: "dataset2"
- label: "data set3"
value: "dataset3"
File uploads
File upload artifacts let reviewers attach a file as evidence in the approval interface. The file is stored in the evidence notebook and automatically uploaded to the project directory.
Use file uploads when reviewers must submit supporting documents, such as validation reports or compliance forms. Unlike text inputs, file uploads capture and store the document itself.
Example: Define file uploads
This example defines a file input field labeled Model Validation Report. The uploaded file becomes part of the governed evidence.
- artifactType: metadata
details:
type: file
label: Model Validation Report
description: Upload the model validation report. Make sure the file size does not exceed 500MB.
Guidance artifacts provide informational content to users, such as Markdown-formatted instructions or visual banners. These elements do not collect input but help orient users during policy execution.
Example: Define text blocks
This example shows how to include Markdown-formatted context as part of the policy:
- artifactType: guidance
details:
type: textblock
text: >-
[Map 1.4](https://ournistpolicyreferenceurl.com) The business value or
context of business use has been clearly defined or - in the case of
assessing existing AI systems - re-evaluated.
Example: Define text banners
This example shows how to display a banner with a policy reference:
- artifactType: guidance
details:
type: banner
text: >-
[Map 1.4](https://ournistpolicyreferenceurl.com) The business value or
context of business use has been clearly defined or - in the case of
assessing existing AI systems - re-evaluated.
Classification is a top-level policy feature used to assign a risk tier (such as low, medium, or high) to a governed bundle. You can base classifications on user inputs, define rules to evaluate them, and display tooltips to guide decisions.
Classification inputs
You can link input artifacts to a classification by assigning an alias. These inputs appear in stages like any other artifact but can be referenced in classification logic.
Example: Define classification artifacts
This example defines a radio button input for model risk and links it to a classification alias:
classification:
rule:
artifacts:
- model-risk
stages:
- name: classificationExample
artifacts:
- id: Local.model-risk
name: Model Risk
description: Describe the risk of the model
definition:
- artifactType: input
aliasForClassification: model-risk
details:
label: "How would you rate the model risk?"
type: radio
options:
- High
- Low
Gates define policy controls for high-impact actions, like launching an app or creating a service endpoint. Each gate specifies:
- What action(s) the gate applies to, like CreateApp or CreateEndpoint
- Which parameter values trigger it, like hardwareTierId or dataPlaneId
- What approval(s) are required for the action to proceed
Gates are optional, but they provide a powerful way to keep risky operations from proceeding without oversight.
Example: Define a gate for resource-heavy deployments
In this example, the gate blocks creation of apps that use the large-k8s or gpu-small-k8s hardware tiers unless the Validation sign off stage has been completed:
gates:
- name: Prod
rules:
- action: CreateApp
parameters:
hardwareTierId:
- large-k8s
- gpu-small-k8s
approvals:
- Validation sign off