Domino Data Sources

Domino data sources provide a mechanism to create and manage connection properties to a supported external data service. Data sources can be created by both Domino administrators and users, and can be shared among collaborators. Connection properties are stored securely, and there is no need to install data source-specific drivers or libraries. A tightly coupled library provides a consistent access pattern for both tabular and file-based data.

To learn about using the Data API to work with data sources, see Data Source Use Cases.

Create a data source as an admin

One common configuration pattern is for Domino administrators to create, configure, and manage broadly used data sources, which are then exposed to all users or to a subset of them.

  1. Go to Admin > Data > Data Sources.

  2. Click Create a Data Source.

  3. Select your data store.

  4. Enter the configuration details for accessing your data store.

    The details vary depending on your data store type. Connect to External Data has instructions for each supported type.

  5. Click Next.

  6. Select the credential type:

    • Individual - each user is required to provide their own credentials before using a data source

    • Service Account - Domino administrators provide a set of credentials that will be automatically applied on behalf of users with permissions to a given data source. End users cannot access or extract the credentials.

      Note
      OAuth authentication is not supported for service accounts. OAuth-authenticated connections can be used for any execution type except Model APIs.

    Data source credentials are stored securely in the Domino secret store, which is backed by HashiCorp Vault.

  7. Click Next.

  8. Enter your credentials for data store authentication.

    You can click Test Credentials to verify that authentication works, or click Skip for Now to continue.

  9. Select whether Everyone can use this data source or just Specific users or organizations.

  10. Click Finish Setup.

Create a data source as a user

Alternatively, individual users can create data sources directly when they need access to a data source that an administrator has not already set up on the deployment.

  1. Go to Data in the left nav or in your project.

    Regardless of where the creation is initiated, the resulting data sources can be used in any project by users with the appropriate permissions.

  2. Click Create a Data Source.

  3. Select your data store.

  4. Enter the configuration details for accessing your data store.

    The details vary depending on your data store type. Connect to External Data has instructions for each supported type.

  5. Click Next.

  6. Select the credential type.

  7. Click Next.

  8. Enter your credentials for data store authentication.

    You can click Test Credentials to verify that authentication works.

  9. Select whether Only you can use this data source or whether it is shared with Specific users or organizations.

  10. Click Finish Setup.

Reference data sources in projects

Data sources have global scope in a Domino deployment and are accessible, in any project, to any user with the appropriate permissions. Users can add a data source to a project explicitly (the Add a data source option on the project's Data page) or implicitly, by using the data source directly in code from the project. Either way, users can see which data sources are used in each of their projects.

When multiple users collaborate on a project, a data source used by one user may not be properly configured for another. Domino proactively surfaces such problems both on the project's Data page and on the Data tab in Domino Workspaces.

Domino notifies users when they do not have permission to use a data source. A user who sees this message should request access from the data source owner.

When users have access but have not configured their individual credentials, they will also see a notification and will be able to add their credentials. For a given data source, individual credentials need to be added only once.


Retrieve data

After a data source is properly configured, the Domino Data API allows users to retrieve data through a uniform interface without having to install drivers or data source-specific libraries.

There are two types of data sources, tabular and file-based, each exposing a slightly different mechanism for retrieving data. The simplest way to get started is with the automatically generated code snippet.


For more detailed information, see the Domino Data API reference.
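
For orientation, the following is a minimal sketch of both access patterns using the Python data access library. The data source names (my_tabular_source, my_object_store), the table name, and the object key are placeholders; the code snippet generated for your data source will use your actual names.

  from domino_data.data_sources import DataSourceClient

  # Tabular data source: run a query and load the result into a pandas DataFrame.
  # "my_tabular_source" is a placeholder for your data source name.
  tabular = DataSourceClient().get_datasource("my_tabular_source")
  result = tabular.query("SELECT * FROM my_table LIMIT 100")
  df = result.to_pandas()

  # File-based (object store) data source: download an object to a local file.
  # "my_object_store" and the object key are placeholders.
  object_store = DataSourceClient().get_datasource("my_object_store")
  object_store.download_file("path/to/object.csv", "object.csv")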

Note

When using the Domino Data API from a Domino execution, user identity verification for the purposes of enforcing Domino permissions happens automatically. The library will first attempt to use a Domino JWT token, or, if not available, a user API key.

The following is a summary of the user identity that will be used for data source access based on Domino execution type.

  • Workspaces and Jobs - user who started the execution

  • Launchers - user who started the launcher run, regardless of who created the launcher

  • Domino Apps - user who published the app, regardless of who is accessing the app

  • Model API - no user identity

For Model APIs and other advanced use cases that require establishing a different user identity, it is possible to inject an API key into an execution through an environment variable and then use it explicitly when retrieving a data source.

For more detailed information, see Custom Authentication from the API documentation.
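
The following sketch shows that pattern, assuming an API key has already been injected into the execution as an environment variable. The variable name MY_DOMINO_API_KEY and the data source name are placeholders chosen for this example.

  import os

  from domino_data.data_sources import DataSourceClient

  # The environment variable name is a placeholder; use whichever variable
  # you configured for the execution (for example, in the Model API settings).
  api_key = os.environ["MY_DOMINO_API_KEY"]

  # Pass the key explicitly so data source access uses that user identity
  # instead of the execution's automatic identity.
  client = DataSourceClient(api_key=api_key)
  ds = client.get_datasource("my_tabular_source")
  df = ds.query("SELECT * FROM my_table LIMIT 100").to_pandas()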

Domino Training Sets


Domino Training Sets allow you to persist dataframes for model training and other analysis. You can store and load multiple versions of a given dataframe from a training set, allowing you to connect a model to the specific version of a dataframe that was used to train it.

The dataframe used as the basis for a Training Set can be constructed using the result of a Domino Data Source query (as described above) or through any other construction method.

In addition to storing the underlying dataframe, training sets can be used to capture additional monitoring metadata. When this additional data is present, Training Set versions will be available as sources of baseline training data when publishing model APIs in Domino.

Training sets are scoped to the project in which they are created. Users with project contributor permissions can create, load, and delete Training Set versions in that project.

Training sets are available as an API-only feature. For information on how to use the API, see the Domino Data API documentation, especially these topics (a brief usage sketch follows the list):

  • Create Training Sets

  • Retrieve Training Sets

  • Update Training Sets

  • Delete Training Sets
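
For orientation, the sketch below shows roughly what creating and loading a Training Set version looks like with the training_sets module of the Domino data access library. The training set name, column roles, and dataframe are placeholders, and the exact signatures may differ slightly from this sketch; treat the API documentation above as authoritative.

  import pandas as pd

  from domino_data.training_sets import client

  # Placeholder training data; in practice this might be the result of a
  # data source query or any other dataframe construction.
  df = pd.DataFrame({"id": [1, 2], "feature_a": [0.1, 0.7], "label": [0, 1]})

  # Create a new version of a Training Set in the current project.
  # "fraud-training-data" and the column roles are example values.
  version = client.create_training_set_version(
      training_set_name="fraud-training-data",
      df=df,
      key_columns=["id"],
      target_columns=["label"],
      exclude_columns=[],
  )

  # Later, load that specific version back for training or analysis.
  ts_version = client.get_training_set_version("fraud-training-data", number=version.number)
  training_df = ts_version.load_training_pandas()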
