Monitoring Config JSON

The Monitoring Config JSON captures all of the information required to register a model, a prediction dataset, or a ground truth dataset. This section describes the structure of the Monitoring Config JSON file. The following is a sample config:

{
        "variables": [
                {
                        "name": "age",
                        "valueType": "numerical",
                        "variableType": "feature",
                        "featureImportance": 0.9
                },
                {
                        "name": "y",
                        "valueType": "categorical",
                        "variableType": "prediction"
                },
                {
                        "name": "date",
                        "valueType": "datetime",
                        "variableType": "timestamp"
                },
                {
                        "name": "RowId",
                        "valueType": "string",
                        "variableType": "row_identifier"
                }
        ],
        "datasetDetails": {
                "name": "TrainingData.csv",
                "datasetType": "file",
                "datasetConfig": {
                        "path": "TrainingData.csv",
                        "fileFormat": "csv"
                },
                "datasourceName": "abc-shared-bucket",
                "datasourceType": "s3"
        },
        "modelMetadata": {
                "name": "test_psg",
                "modelType": "classification",
                "version": "2",
                "description": "",
                "author": "testadmin"
        }
}
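The same config can be assembled programmatically. As a minimal sketch in Python (all values mirror the sample above):

```python
import json

# Build the sample Monitoring Config shown above as a Python dict and
# serialize it to JSON. All values mirror the sample config.
config = {
    "variables": [
        {"name": "age", "valueType": "numerical",
         "variableType": "feature", "featureImportance": 0.9},
        {"name": "y", "valueType": "categorical", "variableType": "prediction"},
        {"name": "date", "valueType": "datetime", "variableType": "timestamp"},
        {"name": "RowId", "valueType": "string", "variableType": "row_identifier"},
    ],
    "datasetDetails": {
        "name": "TrainingData.csv",
        "datasetType": "file",
        "datasetConfig": {"path": "TrainingData.csv", "fileFormat": "csv"},
        "datasourceName": "abc-shared-bucket",
        "datasourceType": "s3",
    },
    "modelMetadata": {
        "name": "test_psg",
        "modelType": "classification",
        "version": "2",
        "description": "",
        "author": "testadmin",
    },
}

text = json.dumps(config, indent=4)
```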
variables

An array of variables that declare all features and prediction columns that you want to analyze. For each member in the array, specify the name, variableType, and valueType.

name

Name of the column.

variableType

Provides the attribute that identifies the column. Supported types are:

feature
  1. Can only be of valueType numerical or categorical.

  2. Input feature of the model.

  3. Data drift will be calculated for this data column.

  4. Must be declared while registering the model along with its training data.

  5. The column must be present in all training and prediction datasets registered with the model.

prediction (optional)
  1. When declared, there can be only one prediction column.

  2. Can only be of valueType numerical or categorical.

  3. Output prediction of the model.

  4. Data drift and model quality metrics are calculated for this data column. Include this column when you register the model so that both data drift and model quality analyses can run; if it is not included, model quality metrics are not computed.

timestamp (optional)
  1. When present, there can be only one timestamp column.

  2. Can only be of datetime valueType.

  3. Although you can declare this column when adding prediction data for the first time, Domino recommends that it be declared during model registration.

  4. Identifies the column that contains the timestamp for the prediction made. If not declared, the ingestion time of the data in the Model Monitor is used as the timestamp of the prediction.

  5. Must contain the date/time when the prediction was made. Column values must follow the ISO 8601 time format.

  6. To use automatic ingestion for Snowflake, you must include this column. Snowflake’s documentation recommends setting the timezone to UTC for both the Spark cluster and the Snowflake deployment.
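Timestamp column values must be ISO 8601 strings. As a minimal sketch in Python (the specific date is illustrative), a UTC timestamp for the column can be produced like this:

```python
from datetime import datetime, timezone

# Format a prediction time as an ISO 8601 string for the timestamp column.
# Using UTC also matches the Snowflake recommendation above; the specific
# date here is illustrative.
prediction_time = datetime(2022, 3, 15, 10, 30, 0, tzinfo=timezone.utc)
iso_value = prediction_time.isoformat()  # "2022-03-15T10:30:00+00:00"
```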

row_identifier (optional)
  1. Can only be of string valueType.

  2. Uniquely identifies each prediction row. Typically referred to as prediction ID, transaction ID, and so on.

  3. When present, there can be only one row_identifier column.

  4. Although you can declare this column when adding prediction data for the first time, Domino recommends that it be declared during model registration.

  5. Values are used to match ground truth data to the predictions to calculate model quality metrics. Model quality metrics will not be calculated if this column is not present. If used, must be present in both prediction and ground truth datasets.

ground_truth

Identifies the column that contains the ground truth labels in the ground truth datasets.

sample_weight

Column that contains the weight to be associated with each prediction to calculate the Gini Norm metric.

prediction_probability

Column that contains the probability value for the model’s prediction. Can be a single value (maps to the probability value of the positive class) or a list of values (the length of the list must match the number of unique prediction labels or classes present in the training dataset).

Note

For a field of this type, include forPredictionOutput to indicate the prediction column for which you are specifying the probability. This column is required if you want to measure AUC ROC, Log Loss, and Gini Norm as part of the model quality analysis.
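As a sketch (assuming a binary classifier; the column name y_prob is hypothetical, and y is the prediction column from the sample config), a prediction_probability variable might be declared like this:

```python
# Hypothetical variable entry: "y_prob" is an illustrative column name,
# not part of the sample config above.
prediction_probability_variable = {
    "name": "y_prob",
    "valueType": "numerical",  # assumption: probabilities are numerical values
    "variableType": "prediction_probability",
    # Names the prediction column this probability refers to.
    "forPredictionOutput": "y",
}
```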

valueType

Identifies the type of values in the column. Supported types are categorical, numerical, datetime, and string. Only one column can have the datetime valueType.

forPredictionOutput

Specifies which prediction column the ground truth variable represents in Ground Truth Config.
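Putting this together, a ground truth dataset’s config might declare its label column like the following sketch (y_actual is an illustrative name, and y is the prediction column from the sample config):

```python
# Hypothetical ground truth variable entry: "y_actual" is an illustrative
# column name for the ground truth labels.
ground_truth_variable = {
    "name": "y_actual",
    "valueType": "categorical",
    "variableType": "ground_truth",
    # Maps this label column back to the "y" prediction column.
    "forPredictionOutput": "y",
}
```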

datasetDetails
name

The name to associate with this dataset instance. You can use the same name as the file you are selecting.

datasetType

The supported type is “file”.

datasetConfig

Defines the actual location of the file.

  1. path: The name of the file. If you are using Snowflake, the path is the name of the Snowflake table.

  2. fileFormat: csv and parquet are supported. If you are using Snowflake, the fileFormat must be snowflake.
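For example, a datasetDetails block for a Snowflake table might look like the following sketch (the table name PREDICTIONS and data source name my-snowflake-source are hypothetical):

```python
# Hypothetical Snowflake dataset registration; "PREDICTIONS" and
# "my-snowflake-source" are illustrative names.
dataset_details = {
    "name": "PREDICTIONS",
    "datasetType": "file",
    "datasetConfig": {
        "path": "PREDICTIONS",      # for Snowflake, the table name
        "fileFormat": "snowflake",  # required for Snowflake sources
    },
    "datasourceName": "my-snowflake-source",
    "datasourceType": "snowflake",
}
```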

datasourceName

Name you provided when you created the data source.

datasourceType

One of the supported data source types: s3, gcs, azure_blob, azure_data_lake_gen1, azure_data_lake_gen2, hdfs, or snowflake.

modelMetadata

Captures metadata related to the model. Specify the name, modelVersion, modelType, dataset, dateCreated, description, and author attributes. dateCreated must be in a valid UTC format (ISO 8601). Valid values for modelType are classification and regression.
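As a sketch (all values are illustrative; note that the sample config above uses the key version for the model version), a modelMetadata block might look like this:

```python
# Illustrative modelMetadata block; the "version" key follows the sample
# config above, and dateCreated is an ISO 8601 UTC timestamp.
model_metadata = {
    "name": "credit_risk_model",
    "modelType": "regression",  # must be "classification" or "regression"
    "version": "1",
    "dateCreated": "2022-01-01T00:00:00Z",
    "description": "Example model for illustration",
    "author": "jane.doe",
}
```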

featureImportance

Highlights the overall impact of the feature on the predictions made by the model, relative to other features.

Copyright © 2022 Domino Data Lab. All rights reserved.