Step 7: Deploy your model

After you have developed your model and judged it useful enough, you will want to deploy it. No single deployment method is best for all models, so Domino offers four deployment options; depending on your use case, one may fit your needs better than the others.

The available deployment methods are:

  • Scheduled reports

  • Launchers

  • Web applications

  • Model APIs

The remaining sections of this tutorial are not dependent on each other. For example, you will not need to complete the Scheduled report section to understand and complete the Web application section.

Compute Environments

In Step 5, we installed the Prophet package in RStudio to train the model. In Domino, packages installed during one work session do not persist to the next. To avoid re-installing Prophet each time you need it, add it to a custom compute environment.

  1. Create a new compute environment.


    1. Go to the Environments page in Domino.

    2. Click Create Environment.


    3. Name the new environment and enter a description.


    4. Click Create Environment.

    5. Click Edit Definition.


    6. In the Dockerfile Instructions section, enter the following:

      RUN R --no-save -e "install.packages(c('prophet'))"


    7. Scroll to the bottom of the page and click Build.

      This builds your new compute environment. The added packages are installed into the environment image and will be available whenever you start a job or workspace session with this environment selected.

    8. Go back to your project and open its Settings page.

    9. Select your newly created environment from the Compute Environments menu.

If you want to learn more about how to customize your environment, see the Environments tutorials. You can also learn more about what’s included in our default environment, the Domino Standard Environment.
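
As an aside, if you plan to complete the Web application section later in this step, the Shiny app there also uses the dygraphs package. A sketch of a combined Dockerfile instruction you could use instead, assuming the default CRAN mirror is available from your environment build:

    RUN R --no-save -e "install.packages(c('prophet', 'dygraphs'))"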

Scheduled reports

The Scheduled Jobs feature in Domino lets you run a script on a regular basis. Using the R packages knitr and rmarkdown, you can blend text, code, and plots in an R Markdown document to produce attractive HTML or PDF reports automatically.

In our case, imagine that we receive new power usage data each day and want to email out a visualization of the latest data daily.

  1. Start a new RStudio session.

  2. Create a new R Markdown file named power_report.Rmd and select HTML as the desired output.


  3. RStudio automatically creates a sample R Markdown file for you. Replace its contents entirely with the following, which reuses code from the power.R script from Step 5.

    title: "Power_Report"
    output: html_document
    ---
    
    ```{r setup, include=FALSE}
    knitr::opts_chunk$set(echo = TRUE)
    library(tidyverse)
    library(lubridate)
    
    col_names <-  c('HDF', 'date', 'half_hour_increment',
                               'CCGT', 'OIL', 'COAL', 'NUCLEAR',
                               'WIND', 'PS', 'NPSHYD', 'OCGT',
                               'OTHER', 'INTFR', 'INTIRL', 'INTNED',
                               'INTEW', 'BIOMASS', 'INTEM')
    df <- read.csv('data.csv', header = FALSE, col.names = col_names, stringsAsFactors = FALSE)
    
    #remove the first and last row
    df <- df[-1,]
    df <- df[-nrow(df),]
    
    #Tidy the data
    df_tidy <- df %>% gather('CCGT', 'OIL', 'COAL', 'NUCLEAR',
                           'WIND', 'PS', 'NPSHYD', 'OCGT',
                           'OTHER', 'INTFR', 'INTIRL', 'INTNED',
                           'INTEW', 'BIOMASS', 'INTEM', key="fuel", value="megawatt")
    ```

    ## R Markdown

    Combining R Markdown, Knitr and Domino allows you to create attractive scheduled reports that mix text, code and plots.

    ```{r, echo=FALSE, warning=FALSE}
    df_tidy <- df_tidy %>% mutate(datetime=as.POSIXct(as.Date(date, "%Y%m%d"))+minutes(30*(half_hour_increment-1)))
    print(head(df_tidy))
    ```

    ## Including Plots

    You can also embed plots, for example:

    ```{r, echo=FALSE}
    p <- ggplot(data=df_tidy, aes(x=datetime, y=megawatt, group=fuel)) +
        geom_line(aes(color=fuel))
    print(p)
    ```
  4. With your new R Markdown file, you can knit it into an HTML file and preview it directly in Domino by clicking the Knit button.


  5. To create a repeatable report, create a script that you can schedule to render your R Markdown file to HTML automatically. Create a new R script named render.R with the following code (a variation that writes dated output files is sketched after this list):

    rmarkdown::render("power_report.Rmd")
  6. Save your files and Stop and Commit your workspace.

  7. Go to the Scheduled Jobs page.


  8. Enter the file that you want to run. This will be the render.R script you created earlier.

  9. Select how often and when to run the file.

  10. Enter the email addresses of the people who should receive the resulting file(s).


  11. Click Schedule.
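
As mentioned in step 5 above, rmarkdown::render also accepts an output_file argument. Here is a sketch of a render.R variation that writes a dated report file on each scheduled run; the naming scheme is just an illustration:

    rmarkdown::render("power_report.Rmd",
                      output_file = paste0("power_report_", Sys.Date(), ".html"))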

For tips on customizing the resulting email, see Set Notification Preferences.

Launchers

Launchers are simple web forms that let users run templatized scripts. They are especially useful when your script takes command line arguments that change how it executes. For heavily customized scripts, those arguments can quickly get complicated; a Launcher exposes them as a simple web form.

Typically, we parameterize script files (files that end in .py, .R, or .sh). Since we have been working in R, we will parameterize and reuse the R script that we created in Step 5.

To do so, we will insert a few new lines of code into a copy of the R script, and configure a Launcher.
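
Before editing the script, it helps to see how a command line argument reaches an R script. Here is a minimal sketch, using a hypothetical file named args_demo.R:

    # args_demo.R -- run from a terminal with: Rscript args_demo.R WIND
    # commandArgs(trailingOnly = TRUE) returns only the arguments that follow
    # the script name, so args[1] is the string "WIND" here.
    args <- commandArgs(trailingOnly = TRUE)
    fuel_type <- args[1]
    cat("Selected fuel type:", fuel_type, "\n")

A Launcher works the same way: each ${...} placeholder in the "Command to run" becomes a form field, and its value is passed to the script as a command line argument.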

  1. Parameterize your R script by setting it to take command line arguments:

    1. Start an RStudio session.

    2. Create a script named Power_for_Launcher.R with the following:

      library(tidyverse)
      library(lubridate)
      
      #Pass in command line arguments
      args <- commandArgs(trailingOnly = TRUE)
      fuel_type <- args[1]
      
      col_names <-  c('HDF', 'date', 'half_hour_increment',
                      'CCGT', 'OIL', 'COAL', 'NUCLEAR',
                      'WIND', 'PS', 'NPSHYD', 'OCGT',
                      'OTHER', 'INTFR', 'INTIRL', 'INTNED',
                      'INTEW', 'BIOMASS', 'INTEM')
      df <- read.csv('data.csv', header = FALSE, col.names = col_names, stringsAsFactors = FALSE)
      
      #remove the first and last row
      df <- df[-1,]
      df <- df[-nrow(df),]
      
      #Tidy the data
      df_tidy <- df %>% gather('CCGT', 'OIL', 'COAL', 'NUCLEAR',
                               'WIND', 'PS', 'NPSHYD', 'OCGT',
                               'OTHER', 'INTFR', 'INTIRL', 'INTNED',
                               'INTEW', 'BIOMASS', 'INTEM', key="fuel", value="megawatt" )
      
      #Create a new column datetime that represents the starting datetime of the measured increment.
      df_tidy <- df_tidy %>% mutate(datetime=as.POSIXct(as.Date(date, "%Y%m%d"))+minutes(30*(half_hour_increment-1)))
      
      #Filter the data
      df_fuel_type <- df_tidy %>% filter(fuel==fuel_type) %>% select(datetime,megawatt)
      
      #Save out data as csv
      write.csv(df_fuel_type, paste(fuel_type,"_",Sys.Date(),".csv",sep=""))
    3. Notice the lines in the script that read the command line argument into an object:

      args <- commandArgs(trailingOnly = TRUE)
      fuel_type <- args[1]
    4. Save the files and Stop and Commit the workspace session.

  2. Configure the Launcher.

    1. Go to the Launchers page under the Publish menu of the project page.


    2. Click New Launcher.

    3. Name the launcher "Power Generation Forecast Data".

    4. Copy and paste the following into the field "Command to run":

      Power_for_Launcher.R ${fuel_type}

      Domino detects the ${fuel_type} placeholder and lists fuel_type as a parameter.


    5. Select the fuel_type parameter and change the type to Select (Drop-down menu).

    6. Copy and paste the following into the Allowed Values field:

      CCGT, OIL, COAL, NUCLEAR, WIND, PS, NPSHYD, OCGT, OTHER, INTFR, INTIRL, INTNED, INTEW, BIOMASS, INTEM
    7. Click Save Launcher.

  3. Try out the Launcher.

    1. Go back to the main Launcher page.

    2. Click Run for the "Power Generation Forecast Data" launcher.

    3. Select a fuel type from the dropdown.

    4. Click Run.


      This executes the parameterized R script with the parameters that you selected. In this launcher, the dataset is filtered by your input parameter and the results are returned as a CSV. When the run completes, an email with the resulting files is sent to you and to anyone else you specified in the launcher. See Set Custom Execution Notifications to learn how to customize the resulting email.

Model APIs

If you want your model to serve another application, expose it as an API endpoint. Model APIs are scalable REST APIs that Domino can create from any function in a Python or R script. They are commonly used when you need to query a model in near real-time.
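
When a request arrives, Domino maps the keys of the JSON data object in the request body to the named arguments of your function. Here is a minimal sketch with a hypothetical function, not part of this tutorial:

    # A request body of {"data": {"x": 2, "y": 3}} invokes this function as
    # my_endpoint(x = 2, y = 3), and the returned value is serialized into
    # the JSON response.
    my_endpoint <- function(x, y) {
      list(sum = x + y)
    }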

In Step 5, we created a model to forecast the power generated by combined cycle gas turbines in the UK.

In this section, we will deploy an API that uses that model to predict the generated power for a future date. To do so, we will create a new file containing the function that we want to expose as an API, and then deploy the API. The custom compute environment that we built earlier provides the Prophet package that the model needs.

  1. Create a new file with the function we want to expose as an API.

    1. From the Files page of your project, click Add File.


    2. Name your file forecast_predictor.R.

    3. Enter the following contents:

      library("prophet") m <- readRDS(file = "model.rds") model_api <- function(year, month, day, hour, minute) { date <- paste(year, "-", month, "-", day, " ", hour, ":", minute, sep="") date = as.POSIXct(date, "%Y-%m-%d %H:%M") df_api <- data.frame(ds=date) df2 <- predict(m, df_api) return(df2["yhat"]) }
      library("prophet")
      m <- readRDS(file = "model.rds")
      
      model_api <- function(year, month, day, hour, minute) {
        date <- paste(year, "-", month, "-", day, " ", hour, ":", minute, sep="")
        date <- as.POSIXct(date, format = "%Y-%m-%d %H:%M")
        df_api <- data.frame(ds=date)
        df2 <- predict(m, df_api)
        return(df2["yhat"])
      }
    4. Click Save.


  2. Deploy the API.

    1. Go to the Publish/Model APIs page in your project.

    2. Click New Model.


    3. Name your model, provide a description, and click Next.


    4. Enter the name of the file that you created in the previous step.

    5. Enter the name of the function that you want to expose as an API.

    6. Click Create Model.


  3. Test the API.

    1. Wait for the Model API status to change to Running. This might take a few minutes.

    2. Click the Overview tab.

    3. Enter the following into the tester:

      { "data": { "year": 2019, "month": 10, "day": 15, "hour": 8, "minute": 15 } }
      {
        "data": {
          "year": 2019,
          "month": 10,
          "day": 15,
          "hour": 8,
          "minute": 15
        }
      }
    4. Click Send. If successful, you will see the response on the right panel.


Because the Model API is a REST API, you can call it from any common programming language. Code snippets for some popular languages are listed in the other tabs of the tester.
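
For example, you could call the endpoint from R with the httr package. This is a sketch only: the host, model ID, and access token below are placeholders, so copy the real URL and credentials, and the exact authentication scheme, from your Model API's overview page:

    library(httr)

    # Placeholders -- take the real values from the Model API overview page.
    url <- "https://<your-domino-host>/models/<model-id>/latest/model"

    response <- POST(url,
                     authenticate("<access-token>", "<access-token>"),
                     body = list(data = list(year = 2019, month = 10, day = 15,
                                             hour = 8, minute = 15)),
                     encode = "json")
    content(response)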

Model APIs are built as Docker images and deployed on Domino. You can export model images to your external container registry and deploy them in any other hosting environment using your custom CI/CD pipeline. Domino also supports REST APIs that enable you to programmatically build new model images on Domino and export them to your external container registry.

Web applications

When experiments in Domino yield interesting results that you want to share with your colleagues, you can easily do so with a Domino App. Domino supports hosting Apps built with many popular frameworks, including Flask, Shiny, and Dash.

While Apps can be significantly more sophisticated and provide far more functionality than a Launcher, they also require more code and knowledge of at least one framework. In this section, we will adapt some of the code that we developed in Step 5 into a Shiny app.

  1. Add an app.R file, which defines the Shiny app, to the project:

    library(shiny)
    library(tidyverse)
    library(lubridate)
    library(prophet)
    library(dygraphs)
    
    col_names <-  c('HDF', 'date', 'half_hour_increment',
                    'CCGT', 'OIL', 'COAL', 'NUCLEAR',
                    'WIND', 'PS', 'NPSHYD', 'OCGT',
                    'OTHER', 'INTFR', 'INTIRL', 'INTNED',
                    'INTEW', 'BIOMASS', 'INTEM')
    df <- read.csv('data.csv',header = FALSE,col.names = col_names,stringsAsFactors = FALSE)
    
    #remove the first and last row
    df <- df[-1,]
    df <- df[-nrow(df),]
    
    fuels <- c('CCGT', 'OIL', 'COAL', 'NUCLEAR',
               'WIND', 'PS', 'NPSHYD', 'OCGT',
               'OTHER', 'INTFR', 'INTIRL', 'INTNED',
               'INTEW', 'BIOMASS', 'INTEM')
    
    predict_ln <- round((nrow(df))*.2)
    
    #Tidy the data and split by fuel
    df_tidy <- df %>%
      mutate(ds=as.POSIXct(as.Date(date, "%Y%m%d"))+minutes(30*(half_hour_increment-1))) %>%
      select(-c('HDF', 'date', 'half_hour_increment')) %>%
      gather("fuel", "y", -ds) %>%
      split(.$fuel)
    
    #remove unused column
    df_tidy <- lapply(df_tidy, function(x) { x["fuel"] <- NULL; x })
    
    #Train the model
    m_list <- map(df_tidy, prophet)
    
    #Create dataframes of future dates
    future_list <- map(m_list, make_future_dataframe, periods = predict_ln,freq = 1800 )
    
    #(Optional) Pre-calculate forecasts for every fuel up front.
    #We predict on demand in the server function instead, so this stays commented out.
    #forecast_list <- map2(m_list, future_list, predict) # map2 because we have two inputs
    
    
    
    ui <- fluidPage(
        verticalLayout(
          h2(textOutput("text1")),
          selectInput(inputId = "fuel_type",
                     label = "Fuel Type",
                     choices = fuels,
                     selected = "CCGT"),
          dygraphOutput("plot1")))
    
    server <- function(input, output) {
      output$plot1 <- renderDygraph({
        forecast <- predict(m_list[[input$fuel_type]], future_list[[input$fuel_type]])
        dyplot.prophet(m_list[[input$fuel_type]], forecast)
      })
      output$text1 <- renderText({ input$fuel_type })
    }
    
    shinyApp(ui = ui, server = server)
  2. Add an app.sh file to the project, which provides the command to start the app. Domino serves Apps on host 0.0.0.0 and port 8888:

    R -e 'shiny::runApp("app.R", port=8888, host="0.0.0.0")'
  3. Publish the App.

    1. Go to the App page under the Publish menu of your project.

    2. Enter a title and a description for your app.


    3. Click Publish.

    4. After your app starts successfully, which might take a few minutes, you can click View App to open it.


  4. Share your app with your colleagues.

    1. Back on the Publish/App page, click the App Permissions tab.

    2. Invite your colleagues by username or email.

    3. Or, toggle the Access Permissions level to make it publicly available.


See Domino Apps for more information.
