If the concurrent execution capacity of the standard size is insufficient, use the following recommendations to deploy a large size.
Consider this resource sizing a baseline recommendation. You may need to adjust resources beyond the recommendation to support each customer's unique use of Domino. It is critical to pair this sizing with what you observe in monitoring and adjust accordingly.
Domino recommends that you consult your Customer Success Manager for customizations and advice before you deploy this model.
| Parameter | Value |
| --- | --- |
| Number of platform nodes | 5 |
| CPU per node | 16 cores |
| Memory per node | 64 GB |
| Maximum concurrent executions | 600 |
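At this size, the platform node pool provides a total of 80 CPU cores (5 × 16) and 320 GB of memory (5 × 64) for Domino's platform services.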
If you use this sizing model, ensure that your platform node pool can scale up to the number of platform nodes listed above (a command to verify the node count follows the configuration below). Then, add the following resource request and limit overrides to the fleetcommand-agent configuration file:
release_overrides:
  cluster-autoscaler:
    chart_values:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 50m
          memory: 100Mi
  cost-analyzer-prometheus:
    chart_values:
      server:
        resources:
          limits:
            memory: 8Gi
          requests:
            memory: 5Gi
  data-plane:
    chart_values:
      agent:
        deployment:
          resources:
            limits:
              memory: 4Gi
            requests:
              memory: 2Gi
  elasticsearch:
    chart_values:
      esJavaOpts: '-Xms4g -Xmx4g'
      replicas: 3
      resources:
        requests:
          cpu: 1
          memory: 8Gi
  git:
    chart_values:
      persistence:
        size: 160Gi
      resources:
        limits:
          cpu: 4
          memory: 4Gi
        requests:
          memory: 2Gi
  mongodb-replicaset:
    chart_values:
      persistentVolume:
        size: 120Gi
      resources:
        requests:
          cpu: 2
          memory: 4Gi
  newrelic-events:
    chart_values:
      resources:
        limits:
          memory: 1Gi
        requests:
          memory: 500Mi
  newrelic-infrastructure:
    chart_values:
      kubelet:
        resources:
          limits:
            memory: 1Gi
          requests:
            memory: 128Mi
      ksm:
        resources:
          requests:
            cpu: 1
  newrelic-logging:
    chart_values:
      resources:
        limits:
          cpu: 1
          memory: 600Mi
  newrelic-open-metrics:
    chart_values:
      resources:
        limits:
          cpu: 2
          memory: 4Gi
        requests:
          cpu: 1
          memory: 2Gi
  nucleus:
    chart_values:
      config:
        javaMaxHeap:
          dispatcher: 5G
      keycloak: {}
      replicaCount:
        frontend: 3
      resources:
        dispatcher:
          limits:
            cpu: 5
            memory: 9Gi
          requests:
            cpu: 4
            memory: 9Gi
        train:
          limits:
            memory: 3Gi
          requests:
            memory: 3Gi
  prometheus:
    chart_values:
      server:
        resources:
          limits:
            memory: 14Gi
          requests:
            cpu: 1100m
            memory: 7Gi
  rabbitmq-ha:
    chart_values:
      resources:
        limits:
          cpu: 2
          memory: 6Gi
        requests:
          cpu: 1
          memory: 6Gi
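To confirm that the platform node pool has scaled to the expected number of nodes before you apply these overrides, list the nodes that carry the platform node-pool label (the same label used in the allocation check below):

kubectl get nodes -l dominodatalab.com/node-pool=platform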
To check the status of your available platform resources and allocation, run the following:
kubectl describe nodes -l dominodatalab.com/node-pool=platform | grep -A 10 Allocated
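If the Kubernetes metrics API (metrics-server) is available in your cluster, you can also compare actual platform node usage against the requests and limits above:

kubectl top nodes -l dominodatalab.com/node-pool=platform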
Learn how to manage compute resources.