Model publishing overview


Domino models are REST API endpoints that run your Domino code. These endpoints are automatically served and scaled by Domino to provide programmatic access to your R and Python data science code. You can use Domino models to quickly and easily put data science code into production.

A Domino model is a REST API endpoint wrapped around a function in your code. The arguments to your function are supplied as parameters in the request payload, and the response from the API includes the return value from your function.

When a model is published, Domino first runs the script containing the function. The process then waits for input, so any objects or functions from the script remain available in memory. Each call to the endpoint runs the function within this same process. Because the script is only sourced once — at publish time — any more expensive initialization can happen up front, rather than happening on each call.
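For example, a model script can do its expensive setup at the top level so each request only runs the endpoint function. This is a minimal, self-contained sketch (a real script might unpickle a trained model here instead of building a lookup table):

```python
# model.py -- sourced once when the model version starts.
# Expensive initialization happens here, at the top level,
# not inside the endpoint function.
SQUARES = {n: n * n for n in range(1000)}  # stands in for loading a trained model

def predict(x):
    # Called once per request; SQUARES is already in memory.
    return {"square": SQUARES[x]}
```

Each call to the endpoint then invokes predict directly, without repeating the initialization.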

Key features

  • Production-ready infrastructure

    Models are reliable, available, and scalable for most businesses’ production use cases.
    Read about model deployment configuration for more details.
  • Versioning and reproducibility

    Models are versioned, and each version can be redeployed, giving you the ability to revert to a previous known-good state.
  • Discoverability and access control

    Domino models are first-class objects in Domino, separate from projects.
    • Models have their own permissions. Read about model access and collaboration for more details.

    • Models have audit logging so you can track usage, management, and maintenance activity.

    • Model endpoints can be set up to require access token authentication. Read about model invocation settings for more details.

  • Promote-to-production workflow

    Domino supports an advanced routing mode which allows for a promote-to-production workflow where you can test with one version of a model and take production traffic on another version. Read about model deployment configuration for more details.

Environments for models

Models run in Domino Environments, just as Runs and Workspaces do. However, there are a few important details to note.

  • Model hosts do not read requirements.txt files or execute commands defined in the pre-setup, post-setup, pre-run, or post-run scripts of your environment. If your project currently uses requirements.txt or any of these setup scripts to install certain packages or repositories, you must add them to the Dockerfile instructions of your environment.

  • Your model does not inherit environment variables set at the project level. However, you can set model-specific environment variables on the model settings page. This is intended to decouple the management of projects and models. See this page for more details.
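Because project-level variables are not inherited, code that runs both in a Run and behind a model endpoint should read its configuration defensively. A minimal sketch (MY_API_KEY is a hypothetical variable name, standing in for one you would set on the model settings page):

```python
import os

# MY_API_KEY is a hypothetical model-specific environment variable.
# Falling back to a default keeps the same code working in contexts
# where the variable has not been set.
api_key = os.environ.get("MY_API_KEY", "default-key")
```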

Project files in models

Your model has access to the project files for the project from which it was published. The project files are loaded onto the model host like they would be for an executor hosting a Run or Workspace, with a few important differences:

  • The project files are added to the model image when the model version is built. Stopping and starting an existing model version will not cause the files available to that model version to change. If your project files have changed since your current model version was built, you need to build a new version of the model if you want it to see those changes.

  • Model hosts mount your project files at /mnt/<username>/<project_name>. This is different from the default behavior of a Run or Workspace, which hosts your project files at /mnt. There is a default Domino environment variable called DOMINO_WORKING_DIR that always points to the directory where your project is mounted, and allows you to easily write code that can work in both the standard run and model host environments.

  • Git repositories attached to projects are only pulled when a model version is built, not every time a model is started. If your external Git repository changes, and you want to pick up those changes in your model, you should build a new version.

  • Any project files matching patterns in the .modelignore file in the project’s root directory are excluded from the generated model image, so those files are not available on the model host.
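Because the project mount point differs between executors and model hosts, code that reads project files can build paths from DOMINO_WORKING_DIR so it works unchanged in both contexts. A small sketch (the data/input.csv filename is illustrative, and the /mnt fallback is only an assumption for running outside Domino):

```python
import os

# DOMINO_WORKING_DIR points at the project mount: /mnt on an executor,
# /mnt/<username>/<project_name> on a model host. The fallback below is
# an assumption for local testing outside Domino.
working_dir = os.environ.get("DOMINO_WORKING_DIR", "/mnt")
data_path = os.path.join(working_dir, "data", "input.csv")  # illustrative file
```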

Publishing a model

There are three ways to publish a model.

(1) from the Domino web application

  1. Click Publish from the project menu.

  2. Click to open the Models tab.

  3. Click New Model.

  4. Fill in the first page of model setup by choosing a name for the model, supplying an optional description, setting the environment you want the model to run in, and choosing a logging mode. By default, Domino only logs basic origin and response code information about requests to the model. If you check the Log HTTP requests and responses to model instance logs box, Domino will also log the contents of the requests and responses, allowing you to see model inputs and outputs in the instance logs.

  5. Click Next to advance to the second page of model setup. Enter the filename that contains your model code, and the function that you want called when the model handles a request. Optionally, if you want to exclude any files from the model image, you can list those file patterns in a file named .modelignore saved in the project’s root folder. Click Publish when finished. Your model will build, and upon a successful build it will deploy automatically.
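For example, a .modelignore in the project root might look like the following. The entries are file patterns, one per line; the specific patterns below are illustrative, assuming gitignore-style matching:

```
raw_data/
*.ipynb
training_logs/
```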

(2) with a scheduled run

When setting up a scheduled run, you will see an option to Publish Model after Complete. This setting will use the state of the project’s files after the run to build and deploy a new model version. You can use this option with a script that pulls fresh data from sources your model depends on to keep the model up-to-date automatically.

(3) with the Domino API

Read the API docs for more information on programmatic model publishing.

Calling a model

On the overview page of a model, you will find a model tester. This can be used to make calls to the model from the Domino web application. You will find additional tabs on the overview with example code for calling the model with other tools and in various programming languages.

These examples all show a sample JSON input schema. To construct your input JSON, you can use either a dictionary of named arguments or an array of positional arguments; each element is passed to your function as the corresponding argument. The elements themselves may be lists or arbitrary objects, as long as they are valid JSON.

If you’re using named parameters in your function definition, for example:

my_function(x, y, z)

You can use either a data dictionary or a parameter array:

{"data": {"x": 1, "y": 2, "z": 3}}
{"parameters": [1, 2, 3]}

If you’re using a single dictionary argument in your function definition, for example:

my_function(dict)

and your function then uses dict["x"], dict["y"], and so on, you can use only a parameter array:

{"parameters": [{"x": 1, "y": 2, "z": 3}]}

In Python, you can also use **kwargs to accept a variable number of named arguments. If your function definition looks like this:

my_function(x, **kwargs)

and your function then uses kwargs["y"] and kwargs["z"], you can use a data dictionary to call your model:

{"data": {"x": 1, "y": 2, "z": 3}}
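Putting this together, a request to a model endpoint can be made with any HTTP client. The sketch below uses only the Python standard library; the URL and access token are placeholders you would copy from the model’s overview page, and it assumes the endpoint accepts HTTP basic auth with the access token supplied as both username and password, as shown in the generated example code:

```python
import base64
import json
import urllib.request

def call_model(model_url, access_token, payload):
    # The access token is sent as both the basic-auth username and
    # password (an assumption based on the generated example code).
    token = base64.b64encode(f"{access_token}:{access_token}".encode()).decode()
    req = urllib.request.Request(
        model_url,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# Example (placeholders -- copy the real URL and token from the
# model's overview page):
# result = call_model("https://<domino-host>/models/<id>/latest/model",
#                     "<access_token>", {"data": {"x": 1, "y": 2, "z": 3}})
```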

Domino will take care of converting the inputs to the proper types in the language of your endpoint function.


JSON Type        Python Type    R Type

dictionary       dict           named list
array            list           list
string           str            character
number (int)     int            integer
number (real)    float          numeric
true/false       bool           logical
The model’s output is contained in the result object, which can be a literal, an array, or a dictionary.

Updating a model

You can publish a new version of your model at any time. For example, you may want to re-train the model with new data, or switch to a different machine learning algorithm. Click New Version from the model overview page. The process is similar to when publishing a model for the first time.

You can also unpublish the model and Domino will stop serving it.


Troubleshooting

dataTypeError: don't know how to serialize class

You may see this error from Python endpoints if you return values that are NumPy objects rather than native Python primitives. To fix this, convert your NumPy values to Python primitives, for example by calling .item() on them. (numpy.asscalar served the same purpose but has been removed from recent NumPy releases.)

TypeError: <result> is not JSON serializable

The result of your endpoint function gets serialized to JSON before being sent in the response. However, some object types, such as Decimal or certain NumPy data types, are not JSON serializable by default. For Decimal values, cast them to float. For NumPy data types, convert the values to Python primitives with .item() as described above.
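A minimal sketch of both conversions (the example values are illustrative):

```python
from decimal import Decimal

import numpy as np

price = Decimal("19.99")    # not JSON serializable as-is
score = np.float64(0.875)   # NumPy scalar, not a native float

result = {
    "price": float(price),  # Decimal -> native Python float
    "score": score.item(),  # NumPy scalar -> native Python float
}
```

Returning result from the endpoint function now serializes cleanly, since both values are native Python floats.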

Exception in thread "Thread-9" Server returned HTTP response code: 500 for URL <model_url>

You may encounter this and other storage-related errors in the build log when your model is too large. Currently, there is a project size limit of 500 MB for models. If your project includes large files, such as training data sets, we recommend excluding them (for example, with a .modelignore entry) when publishing a new version of your model.