When your storage account is filling up, you can configure an additional storage location where Domino can store all future datasets and snapshots. Updates to existing datasets and snapshots are made in their original storage location, while new datasets and snapshots are stored in the secondary location. You can do this again if the secondary storage location also fills up.
Note: We recommend reaching out to your Domino field team to understand whether this solution is appropriate for your scenario.
This topic explains how to set up additional storage accounts for datasets. For the steps below, we recommend working with the Domino field team to ensure that your configuration is accurate.
1. Create a new storage account.
   The storage can live anywhere, in the cloud or on premises, as long as it can be registered as a Kubernetes PersistentVolume resource and made accessible to the Domino deployments (namely nucleus- and run-).
2. Create the appropriate PV/PVC pairs for the new storage account in the Domino cluster.
   Two PV/PVC pairs should be created in this step: one in the platform namespace and one in the compute namespace. The contents of the PV/PVC vary according to your choice of storage.
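For illustration only, if the new storage account were exposed as an NFS share, the compute-namespace pair might look like the sketch below. Every name, namespace, server, path, and capacity here is a placeholder, not a value from your deployment; create an analogous pair in the platform namespace and adapt the volume source to your actual storage type.

# Hypothetical NFS-backed PV/PVC pair for the compute namespace.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: domino-extra-storage-compute
spec:
  capacity:
    storage: 10Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.internal
    path: /exports/domino-datasets
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: domino-extra-storage
  namespace: domino-compute   # placeholder; use your *-compute namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: domino-extra-storage-compute   # bind statically to the PV above
  resources:
    requests:
      storage: 10Ti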
3. Make the PV/PVC accessible to other resources within the cluster.
   The newly created PV/PVC must be accessible by pods in both the platform and compute namespaces. Depending on your choice of storage, this can mean creating the appropriate Kubernetes secrets for the new storage, editing the storage's network policies to authorize write access, and so on.
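What "accessible" means depends entirely on the storage backend. As one hypothetical illustration, a share mounted through the Kubernetes azureFile volume type needs a credentials Secret in every namespace whose pods mount it; the name below is a placeholder.

apiVersion: v1
kind: Secret
metadata:
  name: domino-extra-storage-creds   # hypothetical; create it in the platform namespace as well
  namespace: domino-compute
type: Opaque
stringData:
  azurestorageaccountname: <storage account name>
  azurestorageaccountkey: <storage account key>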
4. Edit the Kubernetes nucleus-* deployments to mount the newly registered PV at a specific mountPoint.
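The exact edit depends on how your deployments are managed, but conceptually each nucleus-* deployment gains a volume backed by the platform-namespace PVC plus a matching volumeMount. A minimal sketch of the relevant fragment, reusing the placeholder names from the earlier examples:

spec:
  template:
    spec:
      containers:
        - name: nucleus-frontend            # container name varies per deployment
          volumeMounts:
            - name: domino-extra-storage
              mountPath: /domino/extra-storage   # this becomes the mountPoint referenced later
      volumes:
        - name: domino-extra-storage
          persistentVolumeClaim:
            claimName: domino-extra-storage      # the platform-namespace PVC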
5. Once the pods have restarted, test that the PV can be accessed correctly:
   - Exec into one of the nucleus-frontend pods.
   - Navigate to the specified mountPoint.
   - Try to write a test file.
Start this procedure only after successfully completing step 5 above. At this point, you should have a new storage account that is accessible to Domino via its own PV/PVC pairs. To instruct Domino to write every future dataset and snapshot to the new storage account, follow these steps:
1. Go to Admin > Advanced > Central Config and add the following key/value pairs in the common namespace:
   - com.cerebro.domino.datacache.pvc.names - A comma-separated list of all compute-PVC names used for dataset/snapshot storage in Domino. If you added only one additional storage account, this list contains two PVCs: the original one and the newly created one.
   - com.cerebro.domino.datacache.pvc.originalName - The compute-PVC name corresponding to the original Domino storage (this defaults to the first PVC in the comma-separated names list).
   - com.cerebro.domino.datacache.pvc.primaryName - The compute-PVC name of the volume in which all future datasets and snapshots should be stored.
   - com.cerebro.domino.datacache.pvc.<COMPUTE-PVC-NAME>.mountPoint - Configure one such key for each PVC name in the names list; its value is the mount point at which that PVC is mounted in the nucleus-* deployments.
   Note: All the PersistentVolumeClaims (PVCs) specified in the Central Configuration settings above must correspond to claim names in the *-compute namespace.
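As a purely hypothetical illustration, if the original compute PVC were named domino-shared-store-vol (mounted at /domino/datasets) and the new one domino-extra-storage (mounted at /domino/extra-storage), the entries added through the Central Config UI would be; substitute your own PVC names and mount points:

com.cerebro.domino.datacache.pvc.names = domino-shared-store-vol,domino-extra-storage
com.cerebro.domino.datacache.pvc.originalName = domino-shared-store-vol
com.cerebro.domino.datacache.pvc.primaryName = domino-extra-storage
com.cerebro.domino.datacache.pvc.domino-shared-store-vol.mountPoint = /domino/datasets
com.cerebro.domino.datacache.pvc.domino-extra-storage.mountPoint = /domino/extra-storage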
2. Restart the necessary services as specified at the top of the Configuration Management page.
3. Go to Admin > Advanced > Feature Flags and set ShortLived.EnableDatasetMultiStorageSupport to true if it is not already set.
4. Test that all dataset-related features work properly in the new storage account:
   - Create a dataset and upload data to it.
   - Take a snapshot.
   - Download data from a dataset/snapshot.
   - Delete a dataset/snapshot.
   - Mount datasets/snapshots to executions.
   Verify that all these actions took place in the newly configured storage account as expected.
To preserve the changes made to the nucleus-* deployments and ensure that multi-storage support continues to work properly after a Domino upgrade, you must add the following specs to the custom agent.yaml, inside the nucleus > chart_values section:
config:
  multiStorageSupportEnabled: true
  additionalStorages:
    - name: "<name of the additional volume(s) as per the modified nucleus-* deployment>"
      mountPath: "<mount path at which volume was mounted>"
      platformPvc: "<platform-PVC name corresponding to the new volume>"
      computePvc: "<compute-PVC name corresponding to the new volume>"
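For example, with the hypothetical volume name and mount path used in the sketches above, the section would read:

config:
  multiStorageSupportEnabled: true
  additionalStorages:
    - name: "domino-extra-storage"
      mountPath: "/domino/extra-storage"
      platformPvc: "domino-extra-storage"
      computePvc: "domino-extra-storage"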