If the large deployment size is insufficient for your concurrent execution needs, create a larger deployment by increasing the number of platform nodes and increasing the resource requests and limits for the platform services accordingly.
Treat this resource sizing as a baseline: you might need to adjust resources relative to it. It is critical to pair this sizing with what you observe in monitoring and adjust accordingly.
Domino recommends that you consult your Customer Success Manager for customizations and advice before you deploy this model.
| Parameter | Value |
|---|---|
| Number of platform nodes | 5 |
| CPU per node | 16 cores |
| Memory per node | 64 GB |
| Maximum concurrent executions | 1000 |
If you need to run more than 1000 concurrent executions, consult your Customer Success Manager for customizations and advice.
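Before you apply this model, you can confirm the current size of your platform node pool against the table above. The following check assumes the default Domino platform node-pool label (the same label used in the verification command at the end of this section):

kubectl get nodes -l dominodatalab.com/node-pool=platform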
If you use this sizing model, ensure that your platform node pool can scale up to the number of platform nodes shown above. Merge the following resource request and limit overrides into the fleetcommand-agent configuration file, then run the installer to apply them (see the example after the overrides).
Important
It is critical to preserve these values by applying them with the installer. Ad hoc changes made directly to the cluster are not preserved and are overwritten the next time the installer runs.
release_overrides:
  cert-manager:
    chart_values:
      resources:
        limits:
          cpu: 500m
          memory: 500Mi
  cluster-autoscaler:
    chart_values:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 50m
          memory: 100Mi
  data-plane:
    chart_values:
      agent:
        deployment:
          resources:
            limits:
              cpu: 4
              memory: 4Gi
            requests:
              cpu: 2
              memory: 2Gi
  mongodb-replicaset:
    chart_values:
      persistentVolume:
        size: 120Gi
      resources:
        requests:
          cpu: 2
          memory: 8Gi
  elasticsearch:
    chart_values:
      replicas: 5
      esJavaOpts: '-Xms8g -Xmx8g'
      resources:
        requests:
          cpu: 1
          memory: 8Gi
  git:
    chart_values:
      resources:
        limits:
          cpu: 6
          memory: 8Gi
        requests:
          cpu: 3
          memory: 4Gi
      persistence:
        size: 160Gi
  prometheus:
    chart_values:
      server:
        resources:
          requests:
            cpu: 1100m
            memory: 7Gi
          limits:
            memory: 14Gi
  pusher-service:
    chart_values:
      resources:
        requests:
          cpu: 50m
          memory: 2Gi
        limits:
          cpu: 1000m
          memory: 2Gi
  nucleus:
    chart_values:
      keycloak: {}
      replicaCount:
        frontend: 5
      config:
        javaMaxHeap:
          dispatcher: 6G
          train: 1500m
      resources:
        dispatcher:
          limits:
            cpu: 5
            memory: 11Gi
          requests:
            cpu: 4
            memory: 11Gi
        train:
          limits:
            memory: 3Gi
          requests:
            memory: 3Gi
  newrelic-logging:
    chart_values:
      resources:
        limits:
          cpu: 1
          memory: 600Mi
  rabbitmq-ha:
    chart_values:
      resources:
        limits:
          cpu: 2
          memory: 9Gi
        requests:
          cpu: 1
          memory: 9Gi
  newrelic-events:
    chart_values:
      resources:
        limits:
          memory: 1Gi
        requests:
          memory: 500Mi
  newrelic-open-metrics:
    chart_values:
      resources:
        limits:
          cpu: 4
          memory: 8Gi
        requests:
          cpu: 1
          memory: 2Gi
  newrelic-infrastructure:
    chart_values:
      kubelet:
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 200m
            memory: 256Mi
  redis-ha:
    chart_values:
      resources:
        limits:
          cpu: 1200m
          memory: 6Gi
        requests:
          cpu: 250m
          memory: 4Gi
  cost-analyzer-prometheus:
    chart_values:
      server:
        resources:
          limits:
            cpu: 800m
            memory: 10Gi
          requests:
            cpu: 500m
            memory: 5Gi
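After you merge these overrides, apply them by running the fleetcommand-agent installer. The following is a minimal sketch, not a definitive invocation: it assumes a Docker-based installer run, a configuration file named domino.yml in the current directory, an overrides file named overrides.yml containing the block above, and yq v4 for the merge. Substitute the file names, registry path, and image version used in your deployment.

# Deep-merge the overrides into the existing configuration (yq v4 multi-file merge idiom).
yq eval-all '. as $item ireduce ({}; . * $item)' domino.yml overrides.yml > domino-merged.yml

# Run the installer against the merged configuration.
docker run --rm -it \
  -v $(pwd):/install \
  quay.io/domino/fleetcommand-agent:<version> \
  run -f /install/domino-merged.yml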
To check the status of your available platform resources and allocation, run the following:
kubectl describe nodes -l dominodatalab.com/node-pool=platform | grep -A 10 Allocated
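If metrics-server is available in the cluster, you can also compare live node usage against these requests and limits. This sketch assumes the same node-pool label as above:

kubectl top nodes -l dominodatalab.com/node-pool=platform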
Learn how to manage compute resources.