If the concurrent execution capability of the standard size is insufficient, use the following recommendations to deploy an XL size platform.
Consider this resource sizing a base recommendation. You may need to adjust resources beyond the recommendation to support each customer's unique use of Domino. It is critical to pair this sizing with what you observe in monitoring and adjust accordingly.
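For example, a quick way to observe live platform node utilization and compare it against this sizing (this assumes metrics-server is installed in the cluster):

```shell
# Show current CPU and memory usage for nodes in the platform node pool.
kubectl top nodes -l dominodatalab.com/node-pool=platform
```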
Domino recommends that you consult your Customer Success Manager for customizations and advice before you deploy this model.
| Parameter | Value |
| --- | --- |
| Number of platform nodes | 6 |
| CPU per node | 16 cores |
| Memory per node | 64 GB |
| Maximum concurrent executions | 1500 |
If you are running more than 1500 executions, consult your Customer Success Manager for customizations and advice.
If you use this sizing model, ensure that your platform node pool can scale up to the number of platform nodes shown above. Merge the following resource request and limit overrides into the fleetcommand-agent configuration file, then run the installer to apply them.
Important: It is critical to preserve these values by running the installer. Applying ad hoc changes directly to the cluster does not persist them across installer runs.
```yaml
release_overrides:
  cert-manager:
    chart_values:
      resources:
        limits:
          cpu: 500m
          memory: 500Mi
  cluster-autoscaler:
    chart_values:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 50m
          memory: 500Mi
  cost-analyzer-prometheus:
    chart_values:
      server:
        resources:
          limits:
            memory: 10Gi
          requests:
            memory: 5Gi
  data-plane:
    chart_values:
      agent:
        deployment:
          resources:
            limits:
              memory: 2Gi
            requests:
              memory: 1Gi
      workloads:
        controller:
          manager:
            resources:
              limits:
                memory: 2Gi
              requests:
                memory: 1Gi
  mongodb-replicaset:
    chart_values:
      persistentVolume:
        size: 120Gi
      resources:
        requests:
          cpu: 2
          memory: 8Gi
  elasticsearch:
    chart_values:
      replicas: 5
      esJavaOpts: '-Xms4g -Xmx4g'
      resources:
        requests:
          cpu: 1
          memory: 8Gi
  git:
    chart_values:
      resources:
        limits:
          cpu: 6
          memory: 8Gi
        requests:
          memory: 4Gi
      persistence:
        size: 160Gi
  prometheus:
    chart_values:
      server:
        resources:
          requests:
            cpu: 1500m
            memory: 12Gi
          limits:
            cpu: 2500m
            memory: 24Gi
  pusher-service:
    chart_values:
      resources:
        requests:
          cpu: 50m
          memory: 2Gi
        limits:
          cpu: 1
          memory: 2Gi
  nucleus:
    chart_values:
      keycloak: {}
      replicaCount:
        frontend: 5
      config:
        javaMaxHeap:
          dispatcher: 6G
          train: 1500m
      resources:
        dispatcher:
          limits:
            cpu: 5
            memory: 11Gi
          requests:
            memory: 11Gi
        train:
          limits:
            memory: 3500Mi
          requests:
            cpu: 500m
            memory: 3500Mi
        develop:
          limits:
            memory: 2500Mi
          requests:
            memory: 2500Mi
  rabbitmq-ha:
    chart_values:
      resources:
        limits:
          memory: 10Gi
        requests:
          memory: 10Gi
  newrelic-events:
    chart_values:
      resources:
        limits:
          memory: 1Gi
        requests:
          memory: 1Gi
  newrelic-open-metrics:
    chart_values:
      resources:
        limits:
          cpu: 4
          memory: 8Gi
        requests:
          cpu: 1
          memory: 4Gi
  newrelic-prometheus-agent:
    chart_values:
      resources:
        prometheus:
          requests:
            cpu: 1000m
            memory: 2Gi
          limits:
            cpu: 1500m
            memory: 4Gi
  newrelic-logging:
    chart_values:
      resources:
        limits:
          cpu: 1
          memory: 600Mi
  newrelic-infrastructure:
    chart_values:
      kubelet:
        resources:
          limits:
            memory: 1Gi
          requests:
            memory: 256Mi
      ksm:
        resources:
          requests:
            cpu: 1
  redis-ha:
    chart_values:
      resources:
        limits:
          cpu: 1200m
          memory: 6Gi
```
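As an illustration of the merge step, the following sketch deep-merges the overrides into an existing configuration file. The file names are hypothetical, and the command assumes mikefarah yq v4 is installed; after merging, run the fleetcommand-agent installer against the merged file as you normally would.

```shell
# Deep-merge the XL overrides into the existing fleetcommand-agent config.
# File names (domino.yml, xl-overrides.yml) are hypothetical; requires yq v4.
yq eval-all '. as $item ireduce ({}; . * $item)' domino.yml xl-overrides.yml > domino-merged.yml
```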
To check the status of your available platform resources and allocation, run the following:
```shell
kubectl describe nodes -l dominodatalab.com/node-pool=platform | grep -A 10 Allocated
```
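To spot-check that a specific workload picked up its override after the installer run, you can inspect its deployed resource spec. The namespace and deployment name below are illustrative; substitute the ones from your installation:

```shell
# Print the resource requests/limits of one platform deployment.
# "domino-platform" and "nucleus-frontend" are illustrative names.
kubectl -n domino-platform get deploy nucleus-frontend \
  -o jsonpath='{.spec.template.spec.containers[*].resources}'
```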
Learn how to manage compute resources.