Use coding assistants

Domino is designed to give data science teams the flexibility to use the tools they work with most effectively. By supporting your preferred IDEs instead of requiring a proprietary interface, Domino makes it easy to use coding assistants that are built to work within those environments.

Prerequisites

This guide explains how to enable commonly used coding assistants in Domino. The general process includes:

Step 1: Choose an LLM backend

Determine which large language model (LLM) your organization has approved for use with source code. This could be a commercial offering such as OpenAI's GPT models, or a self-hosted open-source model such as Meta's Llama.

Most modern coding assistants support configuration options that let you connect them to the LLM of your choice.
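Because most self-hosted LLM servers expose an OpenAI-compatible chat API, connecting an assistant usually amounts to pointing it at your endpoint. As a minimal sketch, the request such an assistant would send looks like the following; the endpoint URL, model id, and prompt are placeholder assumptions for your deployment, not real values:

```python
import json

# Sketch: the shape of a request to an OpenAI-compatible chat endpoint,
# as exposed by many self-hosted Llama servers. The URL and model id
# below are hypothetical placeholders for your own deployment.
BASE_URL = "http://llm.internal.example:8000/v1/chat/completions"

payload = {
    "model": "llama-3-8b-instruct",  # hypothetical model id on your server
    "messages": [
        {"role": "user", "content": "Explain what this function does."}
    ],
    "temperature": 0.2,  # low temperature favors deterministic code answers
}

# An assistant configured with BASE_URL would POST this JSON body.
request_body = json.dumps(payload)
```

In practice you rarely build this payload by hand; the assistant's settings simply take the base URL, model name, and an API key, and it issues requests of this shape for you.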

Step 2: Choose your IDE(s)

Select the IDEs where you want to enable coding assistants. Domino supports any IDE that can run in, or connect to, a Domino workspace; the following combinations are recommended:

| IDE / Language | Typical Usage | Recommended Assistant |
| --- | --- | --- |
| Jupyter / Python | Generate or revise code in notebooks via prompts. | Jupyter AI |
| In-browser VS Code | Auto-complete and scaffold code inside Domino. | GitHub Copilot |
| Local VS Code via SSH | Use Copilot with local VS Code connected to Domino compute. SSH in Domino can support other IDEs as well. | GitHub Copilot |

Step 3: Modify your compute environment

Install the appropriate assistant tool in your Compute Environment and configure it to use your selected LLM.

Using a shared Compute Environment ensures all users benefit from a consistent, organization-approved setup.
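Domino Compute Environments are defined by Dockerfile instructions, so installing an assistant is a matter of adding the install and configuration lines to the environment definition. The sketch below shows one way to install Jupyter AI and point it at a self-hosted, OpenAI-compatible endpoint; the package names are real, but the endpoint URL, model id, and key value are placeholder assumptions for your deployment:

```dockerfile
# Sketch: lines to add to a Domino Compute Environment's Dockerfile
# instructions. The endpoint URL and key below are hypothetical
# placeholders for your own deployment.
RUN pip install --no-cache-dir jupyter-ai langchain-openai

# OpenAI-compatible providers typically read these standard environment
# variables; pointing them at a self-hosted server keeps source code
# inside your network. Assumption: your server accepts any key value.
ENV OPENAI_API_BASE=http://llm.internal.example:8000/v1
ENV OPENAI_API_KEY=placeholder-key
```

Baking the configuration into a shared environment, rather than per-user settings, is what keeps the setup consistent and organization-approved across the team.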

Next steps