To restore a backup:
Run the following command to edit the domino-data-importer StatefulSet:
kubectl -n domino-platform edit sts domino-data-importer
Set the replicas to 1.
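If you prefer a non-interactive alternative to editing the StatefulSet, the same change can be sketched with kubectl scale (standard kubectl flags; the helper name and 5-minute timeout below are illustrative, not part of Domino's tooling):

```shell
# Hypothetical helper: scale the importer StatefulSet to 1 replica and wait
# for the pod to become ready. Flags are standard kubectl; timeout is arbitrary.
scale_up_importer() {
  kubectl -n domino-platform scale statefulset domino-data-importer --replicas=1
  kubectl -n domino-platform rollout status statefulset/domino-data-importer --timeout=5m
}
```

Run it anywhere kubectl has access to the cluster.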
When the domino-data-importer pod is up, run the following command to exec into it:
kubectl -n domino-platform exec -it domino-data-importer-0 -- /bin/bash
The restore command can take a long time, depending on the size of your backup. To prevent a disconnect from interrupting it, Domino recommends using a terminal multiplexer tool such as /app/tmux, so that if your session is disconnected the command continues running in the background and you can return to the session when you reconnect. For instructions on how to use tmux, see its website or man page.
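One common tmux workflow is sketched below (the session name "restore" and the helper function are illustrative; the binary path /app/tmux is as noted above):

```shell
# Run a long command in a detached tmux session so a dropped connection
# does not kill it. Session name "restore" is arbitrary.
run_in_tmux() {
  /app/tmux new-session -d -s restore "$@"
}
# Detach from an attached session with the key sequence: Ctrl-b d
# After reconnecting to the pod, reattach with:
#   /app/tmux attach -t restore
```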
Create the directory where the restore process expects to find the backup tarball:
mkdir -p /opt/scratch/migration-sessions/
Copy the backup tarball into the new directory /opt/scratch/migration-sessions/. If you're restoring directly into the original deployment, copy straight from the backup location (for example, with aws s3 cp).
Untar the backup:
cd /opt/scratch/migration-sessions && tar xvf YYYYMMDD-HHMMSS.tar.gz
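The create-directory, copy, and untar steps above can be rehearsed end to end in a local sandbox. All paths, the fabricated tarball, and the sample timestamp below are illustrative stand-ins, not real backup data:

```shell
# Sandbox stand-in for /opt/scratch/migration-sessions/ (illustrative path)
SESSION_DIR=$(mktemp -d)/migration-sessions
mkdir -p "$SESSION_DIR"

# Fabricate a stand-in backup tarball; in the real flow this comes from
# your backup location (for example, via `aws s3 cp`)
SRC_DIR=$(mktemp -d)
echo "demo" > "$SRC_DIR/config.yaml"
tar czf "$SRC_DIR/20240101-120000.tar.gz" -C "$SRC_DIR" config.yaml

# Copy the tarball into the session directory and untar it there
cp "$SRC_DIR/20240101-120000.tar.gz" "$SESSION_DIR/"
( cd "$SESSION_DIR" && tar xvf 20240101-120000.tar.gz )
ls "$SESSION_DIR"
```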
From the main app directory, run the importer with the path to the backup configuration. You can run it in dry mode first as a confidence check:
./importer -c /opt/scratch/migration-sessions/YYYYMMDD-HHMMSS/config.yaml --dry
Then run it for real:
./importer -c /opt/scratch/migration-sessions/YYYYMMDD-HHMMSS/config.yaml
The importer imports data from the backup into the current deployment.
You can restore the data elements that are not included in the backup bundle (object storage, shared storage, and external Docker registries) by putting the files back in their original locations. However, this does not work for project "blobs" when they are backed by S3.
Project blobs stored in S3 are a special case because Domino relies on S3 object metadata; losing that metadata renders the blob store inoperable. For this reason, backups of the blobs S3 bucket must stay within S3, and you must use the Importer's S3-to-S3 copy mechanism to transfer blobs from one S3 bucket to another so that the metadata is preserved.
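To see the metadata in question, you can inspect a single blob object directly. The helper below is a sketch using the standard AWS CLI aws s3api head-object command; the bucket and key arguments are hypothetical placeholders, not real Domino names:

```shell
# Print the user-defined metadata attached to one S3 object.
# Bucket and key are placeholders supplied by the caller.
show_blob_metadata() {
  aws s3api head-object --bucket "$1" --key "$2" --query Metadata
}
# Example (hypothetical): show_blob_metadata my-domino-blobs-bucket some/blob/key
```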