Fine-tune large language models (LLMs) and computer vision models with Domino Foundation Models to affordably take advantage of large, general-purpose models for your domain-specific needs.
Domino Code Assist provides a friendly UI for selecting a foundation model and fine-tuning it on a new dataset. It generates the code that preprocesses the data and runs the model trainer. After training, Domino publishes the fine-tuned model to the Experiments page along with the experiment’s parameters and logged metrics.
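The generated code varies with the model and dataset you select; the snippet below is only a minimal sketch of the same preprocess-then-train pattern, using the Hugging Face `transformers` and `datasets` libraries with MLflow tracking. The model name, dataset, experiment name, and hyperparameters are illustrative assumptions, not Domino Code Assist output.

```python
# Minimal sketch of a preprocess-then-train fine-tuning flow; the model,
# dataset, experiment name, and hyperparameters are illustrative only.
import mlflow
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"                 # assumed foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load a Hugging Face dataset (small slice for a quick illustrative run)
# and preprocess it into model-ready inputs.
dataset = load_dataset("imdb", split="train[:1000]")

def preprocess(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(preprocess, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Track parameters and metrics in an MLflow experiment; the Trainer reports
# to MLflow when report_to includes "mlflow".
mlflow.set_experiment("fine-tune-foundation-model")    # experiment name is an assumption

args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    logging_steps=10,
    report_to=["mlflow"],
)

with mlflow.start_run():
    trainer = Trainer(model=model, args=args, train_dataset=tokenized)
    trainer.train()

    # Log the fine-tuned model to the run so it sits alongside the logged
    # parameters and metrics.
    mlflow.transformers.log_model(
        transformers_model={"model": trainer.model, "tokenizer": tokenizer},
        artifact_path="model",
    )
```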
Domino Foundation Models give you:
- Full transparency and control of training code.
- Rapid testing and iteration with processed data.
- Integration with MLflow to easily monitor results, register, and deploy models (see the sketch after this list).
- Rapid setup and configuration.
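For example, once a fine-tuning run has finished, the MLflow client can be used to compare runs in an experiment and register the best model. This is a sketch only; the experiment name, metric name, and registered model name are assumptions carried over from the example above.

```python
# Sketch of inspecting fine-tuning runs and registering the best model with
# MLflow; the experiment, metric, and registry names are assumptions.
import mlflow

runs = mlflow.search_runs(experiment_names=["fine-tune-foundation-model"])

# Pick the run with the lowest training loss. Metric columns follow MLflow's
# "metrics.<name>" convention; adjust to the metrics your runs actually log.
best = runs.sort_values("metrics.train_loss").iloc[0]
print("Best run:", best["run_id"], "train_loss:", best["metrics.train_loss"])

# Register the fine-tuned model logged by that run so it can be deployed.
mlflow.register_model(
    model_uri=f"runs:/{best['run_id']}/model",
    name="my-fine-tuned-model",   # registry name is an assumption
)
```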
Domino Foundation Models leverage models and datasets from Hugging Face.
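To explore which foundation models or datasets are available before you start a fine-tuning run, you can query the Hugging Face Hub directly. The snippet below is a small sketch using the `huggingface_hub` client; the task filter and sort options are illustrative, and the exact arguments may vary by library version.

```python
# Browse candidate foundation models and datasets on the Hugging Face Hub.
# The filter/sort arguments here are illustrative; adjust them for your task.
from huggingface_hub import HfApi

api = HfApi()

# Five most-downloaded models tagged for text classification.
for model in api.list_models(filter="text-classification", sort="downloads", limit=5):
    print("model:", model.id)

# Five most-downloaded datasets tagged for text classification.
for ds in api.list_datasets(filter="task_categories:text-classification", sort="downloads", limit=5):
    print("dataset:", ds.id)
```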
For more information about prerequisites and how to choose a model to fine-tune, choose a dataset, view the experiment, and customize and bootstrap, see Domino’s Fine-tune Foundation Models documentation.