Domino is designed to give data science teams the flexibility to use the tools they work with most effectively. By supporting your preferred IDEs instead of requiring a proprietary interface, Domino makes it easy to use coding assistants that are built to work within those environments.
To see a full example in action, check out our Vibe Modeling article. It walks you through how to use GitHub Copilot with Domino, either by running VS Code inside a Domino Workspace or by connecting your local IDE to Domino through the MCP Server.
This guide explains how to enable commonly used coding assistants in Domino. The general process includes:
Step 1: Choose an LLM backend
Determine which large language model (LLM) your organization has approved for use with source code. This could be a commercial offering like ChatGPT, or a self-hosted open-source model such as Llama.
Most modern coding assistants support configuration options that let you connect them to the LLM of your choice.
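Many assistants that speak an OpenAI-compatible API can be pointed at a self-hosted backend through environment variables. A minimal sketch, assuming a hypothetical internal endpoint (the exact variable names vary by assistant, so check your tool's configuration docs):

```shell
# Point an OpenAI-compatible client at a self-hosted model.
# The endpoint URL below is a placeholder for illustration only.
export OPENAI_API_BASE="https://llm.example.internal/v1"
export OPENAI_API_KEY="<your-org-issued-key>"
```

Set these in the Domino compute environment (or the Workspace's environment variables) so the assistant picks them up at startup.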
Step 2: Choose your IDE(s)
Select the IDEs where you want to enable coding assistants. Any compatible IDE and assistant pairing works with Domino; the following combinations are recommended:
| IDE / Language | Typical Usage | Recommended Assistant |
|---|---|---|
| JupyterLab | Generate or revise code in notebooks via prompts. | Jupyter AI |
| VS Code (in a Domino Workspace) | Auto-complete and scaffold code inside Domino. | GitHub Copilot |
| VS Code (local, via SSH) | Use Copilot with local VS Code connected to Domino compute. SSH in Domino can support other IDEs as well. | GitHub Copilot |
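For the local-IDE route, VS Code's Remote - SSH extension reads standard `~/.ssh/config` entries. A sketch with placeholder values (copy the real hostname, port, user, and key path from your Workspace's SSH connection details in Domino):

```
# ~/.ssh/config -- all values below are placeholders for illustration.
Host domino-workspace
    HostName workspace.example-domino.com
    Port 2222
    User ubuntu
    IdentityFile ~/.ssh/domino_workspace_key
```

Once the entry resolves, connect from VS Code with "Remote-SSH: Connect to Host" and select `domino-workspace`.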
- Use GitHub Copilot in VS Code directly within Domino or through Remote SSH.
- Jupyter AI provides an AI-powered assistant that you can configure to use with Domino.
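To try the Jupyter AI route, the extension can be installed into a Domino compute environment. A minimal sketch (`jupyter-ai` is the real package name; your base image and model provider may require different extras):

```shell
# Install Jupyter AI into the environment used by your Workspace.
pip install jupyter-ai
# Jupyter AI's OpenAI provider reads this variable; the value is a placeholder.
export OPENAI_API_KEY="<your-org-issued-key>"
```

After restarting JupyterLab, the Jupyter AI chat panel and `%%ai` cell magic become available for prompting against the configured provider.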
