Domino Governance policy components

Domino Governance lets you define how projects are reviewed, approved, and classified using policies written in YAML. These policies are managed in the Governance Console and applied to governed bundles to enforce consistent, auditable workflows.

Each policy defines what evidence needs to be collected, who must approve it, how risk is assessed, and whether any warnings or gates should apply. Once a policy is published, it can’t be edited. This guarantees that any bundle stays tied to the exact policy version it was approved under.

You’ll need the GovernanceAdmin role to create or edit policies. This role is already included in the CloudAdmins group.

How policies are written

Policies are defined in YAML using a modular structure. Each policy is made up of components such as stages, approvals, inputs, and checks. Individual artifacts are declared with an artifactType and a details section that sets their behavior.

Writing policies in YAML makes them easy to version, reuse, and review, whether you’re building one from scratch or editing an existing template.

What this page covers

This page explains how to define each type of policy component using YAML. It includes examples for:

  • Organizing stages and approvals

  • Grouping and reusing evidence

  • Running metrics checks and scripted checks

  • Creating input fields and guidance elements

  • Defining classification logic and visibility rules

  • Gating high-risk actions

Stages

Governance Admins define stages in YAML to organize evidence and approvals. Each stage can include:

  • One or more evidence sets (for direct evidence)

  • One or more approvals, each with a name, list of approvers, and optional evidence

Example: Define stages in YAML

This example defines two stages by name:

 stages:
  - name: stage1
  - name: stage2
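
A stage usually carries more than a name. The sketch below combines an evidence artifact and an approval in one stage, using the artifacts key that appears in the classification examples later on this page; the approvals key, the Local.intake-evidence id, and the stage contents are assumptions for illustration:

 stages:
  - name: stage1
    artifacts:
      - id: Local.intake-evidence
        name: Intake evidence
        description: Evidence collected before review
        definition:
          - artifactType: input
            details:
              type: textinput
              label: "Summarize the intended use of the model."
    approvals:
      - name: 'Stage 1: intake review'
        approvers:
          - model-gov-org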

Approvals

Approvals are defined within a stage. Each approval includes:

  • A name

  • A list of approvers (users or organizations)

  • Optional evidence, which must be satisfied before approval can be granted

Evidence may be used to support the approval process, but it is not required in all cases.

Example: Define an approval with evidence

In this example, the stage includes an approval named Stage 4: validation sign off, which includes a checklist prompt for the approver:

 - name: 'Stage 4: validation sign off'
  approvers:
    - model-gov-org
  evidence:
    id: Local.validation-approval-body
    name: Sign-off
    description: The checklist for approvals
    definition:
      - artifactType: input
        details:
          label: "Have you read the model validation reports?"
          type: radio
          options:
            - "Yes"
            - "No"

Sequential workflows

Sequential workflows define multi-stage approval processes where each stage must be completed before the next becomes available. Stages unlock in order, mirroring real-world review flows and helping maintain control throughout the governance process.

Progression is gated by required fields. All mandatory information must be provided before a bundle can move forward, supporting structured, auditable workflows and built-in compliance.

Sequential workflows are currently defined in YAML using the enforceSequentialOrder field.

Example: Define a sequential approval workflow

The following YAML example enforces sequential approvals across designated approvers within the model-gov-org group:

 enforceSequentialOrder: true
approvers:
  - name: model-gov-org
    showByDefault: true
    editable: false
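
With enforceSequentialOrder enabled, listing more than one approver makes the ordering visible: each approver is expected to sign off in list order before the next becomes available. This sketch reuses the fields from the example above; the data-science-leads organization is hypothetical:

 enforceSequentialOrder: true
approvers:
  - name: data-science-leads
    showByDefault: true
    editable: false
  - name: model-gov-org
    showByDefault: true
    editable: false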

Evidence and evidence sets

Evidence defines the inputs, approvals, or checks required during a stage. You can define evidence directly in a stage, or organize it into an evidenceSet for reuse and clarity.

When you define local evidence, it must be declared in full the first time it’s used. Later references can reuse it by id.

Example: Define evidence sets

This example defines a local evidence set named sample local evidence:

 evidenceSet:
  - id: Local.sample
    name: sample local evidence
    description: Describe the sample local evidence
    definition: Define the evidence
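
Once declared in full, the same evidence can be attached elsewhere by id alone, per the reuse rule described above. The exact shape of a later reference is an assumption in this sketch; laterStage is a hypothetical stage name:

 stages:
  - name: laterStage
    artifacts:
      - id: Local.sample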

Metrics checks

Metrics checks use model metadata to validate performance against thresholds. These checks run automatically and reduce manual review.

Each check supports aliases to match metric names, as well as threshold logic to compare against expected values.

Example: Define metrics checks

This check validates model accuracy using multiple aliases and a minimum threshold of 0.8.

 metrics:
- id: Local.model-quality
  name: Model Quality
  description: Describe the model quality
  definition:
    - artifactType: metadata
      details:
        type: modelmetric
        metrics:
          - name: Acc
            label: Accuracy
            aliases:
              - acc
              - Correct Classification Rate
              - Percentage Correct
            threshold:
              operator: '>='
              value: 0.8

Scripted checks

Scripted checks run custom validation logic as part of a policy. Each one defines a command, parameters, and expected outputs.

Scripts run in a specified environment and can generate evidence. Use them to standardize complex reviews like fairness evaluations.

Example: Define scripted checks

This scripted check runs a command-line model evaluation with input parameters and produces text and image output:

 - artifactType: policyScriptedCheck
  details:
    name: Ethic and Fairness Evaluation
    label: Ethic and Fairness Evaluation
    command: evaluate_model.py create --model-hub ${model_hub} --model-name ${model_name}
    parameters:
      - name: model_hub
        type: text
        default: openai
      - name: model_name
        type: text
        default: gpt-4
    outputTypes:
      - txt
      - png
    environmentId: [your environmentID]
    hardwareTierId: small-k8s
    volumeSizeGiB: 4

Input artifacts

Input artifacts define form elements used to collect user input during policy execution. These inputs appear in the governed bundle approval interface and can be used as evidence, classification references, or dynamic visibility triggers.

Radio buttons

Radio buttons present users with a single-select list of labeled options. Each option is defined by a display label and a submitted value.

Example: Define radio buttons

This example defines a radio group with three labeled choices:

 - artifactType: input
  details:
    type: radio
    label: "How would you rate the model risk?"
    options:
      - label: "High"
        value: "High"
      - label: "Medium"
        value: "Medium"
      - label: "Low"
        value: "Low"
    tooltip: "Guidance text"

Text inputs

Text inputs collect short, freeform written responses from users.

Example: Define text inputs

This example defines a single-line input field with placeholder and help text (helpText):

 - artifactType: input
  details:
    type: textinput
    label: "What are the expected business benefits?"
    placeholder: "Explain the benefit"
    helpText: "The text under the input box to help the user"

Text areas

Text areas capture longer, multi-line responses. You can customize the visible height of the input.

Example: Define text areas

This example defines a 10-line textarea input:

 - artifactType: input
  details:
    type: textarea
    label: "What are the expected business benefits?"
    height: 10
    placeholder: "Explain the benefit"
    helpText: "The text under the input box to help the user"

Select dropdowns

Select dropdowns provide a list of options where only one can be selected.

Example: Define select dropdowns

This example defines a dropdown to select a base model template:

 - artifactType: input
  details:
    type: select
    label: "Please select the base model template."
    options:
      - label: "base model1"
        value: "baseModel1"
      - label: "base model2"
        value: "baseModel2"

Multi-select dropdowns

Multi-select dropdowns allow selection of multiple options from a list.

Example: Define multi-select dropdowns

This example defines a multi-select input for selecting data sets:

 - artifactType: input
  details:
    type: multiSelect
    label: "Please select the data sets used in the model."
    options:
      - label: "data set1"
        value: "dataset1"
      - label: "data set2"
        value: "dataset2"
      - label: "data set3"
        value: "dataset3"

Checkbox groups

Checkbox groups display multiple options with checkboxes. Users can select any combination.

Example: Define checkbox groups

This example defines checkboxes for department selection:

 - artifactType: input
  details:
    type: checkbox
    label: "Please select the departments that will use the model."
    options:
      - label: "Sales"
        value: "DEPT001"
      - label: "Customer Success"
        value: "DEPT002"

Date inputs

Date inputs allow users to select or enter a date. You can specify a start date and format.

Example: Define date inputs

This example defines a date input in ISO8601 format:

 - artifactType: input
  details:
    type: date
    label: "What is the scheduled release date?"
    startDate: 20240612
    format: ISO8601

Numeric inputs

Numeric inputs collect numbers within a defined range.

Example: Define numeric inputs

This example defines an input for F-score with min and max constraints:

 - artifactType: input
  details:
    type: numeric
    label: "What is the allowed F-score for the model to be deployed?"
    min: 0
    max: 1

File uploads

File upload artifacts let reviewers attach a file as evidence in the approval interface. The file is stored in the evidence notebook and automatically uploaded to the project directory.

Use file uploads when reviewers must submit supporting documents, such as validation reports or compliance forms. Unlike text inputs, file uploads capture and store the document itself.

Example: Define file uploads

This example defines a file input field labeled Model Validation Report. The uploaded file becomes part of the governed evidence.

 - artifactType: metadata
  details:
    type: file
    label: Model Validation Report
    description: Upload the model validation report. Make sure the file size does not exceed 500MB.

Guidance artifacts

Guidance artifacts provide informational content to users, such as Markdown-formatted instructions or visual banners. These elements do not collect input but help orient users during policy execution.

Text blocks

Text blocks display long-form guidance using Markdown syntax.

Example: Define text blocks

This example shows how to include Markdown-formatted context as part of the policy:

 - artifactType: guidance
  details:
    type: textblock
    text: >-
      [Map 1.4](https://ournistpolicyreferenceurl.com) The business value or
      context of business use has been clearly defined or - in the case of
      assessing existing AI systems - re-evaluated.

Text banners

Text banners provide high-visibility messages to users.

Example: Define text banners

This example shows how to display a banner with a policy reference:

 - artifactType: guidance
  details:
    type: banner
    text: >-
      [Map 1.4](https://ournistpolicyreferenceurl.com) The business value or
      context of business use has been clearly defined or - in the case of
      assessing existing AI systems - re-evaluated.

Classification

Classification is a top-level policy feature used to assign a risk tier (such as low, medium, or high) to a governed bundle. You can base classifications on user inputs, define rules to evaluate them, and display tooltips to guide decisions.

Classification inputs

You can link input artifacts to a classification by assigning an alias. These inputs appear in stages like any other artifact but can be referenced in classification logic.

Example: Define classification artifacts

This example defines a radio button input for model risk and links it to a classification alias:

 classification:
  rule:
  artifacts:
    - model-risk

stages:
  - name: classificationExample
    artifacts:
      - id: Local.model-risk
        name: Model Risk
        description: Describe the risk of the model
        definition:
          - artifactType: input
            aliasForClassification: model-risk
            details:
              label: "How would you rate the model risk?"
              type: radio
              options:
                - High
                - Low

Classification rules

Use rules to evaluate classification inputs and assign a final label. You can define simple conditions or write custom logic using scripts.

Example: Define a classification rule

This example returns "High" if the sum of all input values is greater than or equal to 1:

 func() string {
    var sum float64
    for _, value := range inputs {
        sum += value
    }
    if sum >= 1 {
        return "High"
    }
    return "Low"
}()

Visibility rules

Visibility rules control whether certain artifacts, such as evidence sets, appear in the interface. You can base visibility on classification outcomes, user input, or any other evaluated condition.

Example: Define a visibility rule

 evidenceSet:
  - id: Global.type-of-development
    visibility:
      conditions:
        - when: classification == "High"

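
Conditions are not limited to classification outcomes; as noted above, user input can also drive visibility. The sketch below assumes an input alias (uses-pii, hypothetical) can be referenced in a when expression the same way classification is; treat the expression syntax as an assumption:

 evidenceSet:
  - id: Global.pii-review
    visibility:
      conditions:
        - when: uses-pii == "Yes"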
Gating

Gates define policy controls for high-impact actions, like launching an app or creating a service endpoint. Each gate specifies:

  • What action(s) the gate applies to, such as CreateApp or CreateEndpoint

  • Which parameter values trigger it, like hardwareTierId or dataPlaneId

  • What approval(s) are required for the action to proceed

Gates are optional, but they provide a powerful way to keep risky operations from proceeding without the required approvals.

Example: Define a gate for resource-heavy deployments

In this example, the gate blocks creation of apps that use the large-k8s or gpu-small-k8s hardware tiers unless the Validation sign off approval has been granted:

 gates:
  - name: Prod
    rules:
      - action: CreateApp
        parameters:
          hardwareTierId:
            - large-k8s
            - gpu-small-k8s
    approvals:
      - Validation sign off
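
Gates can also target endpoint creation. This variant follows the same structure as the example above but keys on dataPlaneId for the CreateEndpoint action; the dp-prod-eu id is hypothetical:

 gates:
  - name: Restricted data plane
    rules:
      - action: CreateEndpoint
        parameters:
          dataPlaneId:
            - dp-prod-eu
    approvals:
      - Validation sign off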