- Before you restore a backup, you must put Domino into maintenance mode. See the domino-maintenance-mode readme.
- Run the following to edit the domino-data-importer statefulset:

  kubectl -n domino-platform edit sts domino-data-importer

  Set the replicas to 1.
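If you prefer a non-interactive alternative to opening an editor, the same change can be made with kubectl scale (namespace and statefulset names as in the step above):

```shell
# Scale the importer statefulset to one replica without opening an editor
kubectl -n domino-platform scale sts domino-data-importer --replicas=1

# Wait until the pod reports Ready before exec'ing into it
kubectl -n domino-platform rollout status sts domino-data-importer
```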
- When the domino-data-importer pod is up, run the following to exec into it:

  kubectl -n domino-platform exec -it domino-data-importer-0 -- /bin/bash
  The restore can take a long time, depending on the size of your backup. To prevent a timeout from interrupting it, Domino recommends using a terminal multiplexer such as tmux (available in the pod at /app/tmux) so that if your session is disconnected, the command continues running in the background and you can return to the session when you reconnect. For instructions about how to use tmux, see its website or man page (man tmux).

- Create the directory where the restore process expects to find the backup tarball:

  mkdir -p /opt/scratch/migration-sessions/
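As a sketch of the tmux workflow mentioned above (the session name restore is an arbitrary choice; adjust the tmux path if the binary is on your PATH):

```shell
# Start a named tmux session before launching long-running commands
/app/tmux new -s restore

# ... run the restore steps inside the session ...

# If your connection drops, the session keeps running in the background.
# After reconnecting to the pod, reattach with:
/app/tmux attach -t restore
```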
- Copy the backup tarball into the new directory /opt/scratch/migration-sessions/. If you're restoring directly into the original deployment, copy it straight from the backup location (for example, with aws s3 cp).

- Untar the backup:

  cd /opt/scratch/migration-sessions && tar xvf YYYYMMDD-HHMMSS.tar.gz
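Before extracting, you can optionally list the archive's contents as a quick integrity check (the filename is the timestamp placeholder from the step above):

```shell
# List the tarball's contents without extracting it;
# a corrupt or truncated archive makes tar exit non-zero
tar tzf /opt/scratch/migration-sessions/YYYYMMDD-HHMMSS.tar.gz | head
```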
- From the main app directory, run the importer with the path to the backup. You can run it in dry mode first as a confidence check:

  importer restore --into-cluster same --backup-dir /opt/scratch/migration-sessions/YYYYMMDD-HHMMSS --dry

  Then run it for real:

  importer restore --into-cluster same --backup-dir /opt/scratch/migration-sessions/YYYYMMDD-HHMMSS
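If you are scripting this step, a small guard ensures the real restore only runs when the dry run succeeds. This wrapper is a sketch, not part of the product; the importer flags are the ones shown above:

```shell
#!/bin/bash
set -euo pipefail

# Placeholder path from the earlier steps
BACKUP_DIR=/opt/scratch/migration-sessions/YYYYMMDD-HHMMSS

# Dry run first; set -e aborts the script if it exits non-zero
importer restore --into-cluster same --backup-dir "$BACKUP_DIR" --dry

# Only reached when the dry run exited successfully
importer restore --into-cluster same --backup-dir "$BACKUP_DIR"
```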
The importer restores the data from the backup into the current deployment.
Important: Backup bundles only contain MongoDB, Git, Keycloak, Vault, the experiment management (MLflow) tracking database, and model secrets.
You can restore the data elements that are not included in the backup bundle (object storage, shared storage, and external Docker registries) by putting the files back in their original locations. However, you cannot do this with project "blobs" backed by S3.
Project blobs stored in S3 are unique because Domino relies on S3 object metadata; losing this metadata renders the blob store inoperable. Because of this, backups of the blobs S3 bucket should stay within S3, and you must use the Importer's S3-to-S3 copy mechanism to transfer blobs from one S3 bucket to another so that the metadata is preserved.
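To see the kind of per-object metadata at stake, you can inspect any blob with the AWS CLI. The bucket and key here are placeholders, and the exact metadata keys Domino uses are not documented in this section:

```shell
# Show the user-defined metadata attached to one blob object.
# Plain file-level copies (download, then re-upload) drop this metadata,
# which is why the Importer's S3-to-S3 copy must be used instead.
aws s3api head-object --bucket <blobs-bucket> --key <blob-key>
```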