Domino runs in Kubernetes, an orchestration framework for delivering applications to a distributed compute cluster. The Domino application runs the following types of workloads in Kubernetes; each follows different principles for sizing infrastructure:
- Domino Platform
These always-on components provide the user interfaces, the Domino API server, orchestration, metadata, and supporting services. The standard architecture runs the platform on a stable set of nodes for high availability, and platform capacity is managed principally through vertical scaling: changing the CPU and memory available on the platform nodes and the resources requested by the platform components. See Size the Domino Platform.
- Domino Compute Grid
These on-demand components run users' data science, engineering, and machine learning workflows. Compute workloads run on customizable collections of nodes organized into node pools. The number of nodes is elastic, and compute capacity is managed principally through horizontal scaling: changing the number of nodes. However, compute nodes with more resources can handle additional workloads, so vertical scaling also has benefits.
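As a sketch of how node pools work in Kubernetes, a compute node carries a pool label and workload pods select nodes by that label. The label key below follows Domino's node-pool labeling convention; the node name and the pool name `default` are illustrative:

```yaml
# Illustrative sketch: a compute node labeled as a member of a node pool.
apiVersion: v1
kind: Node
metadata:
  name: compute-node-1                  # illustrative node name
  labels:
    dominodatalab.com/node-pool: default
---
# A workload pod that schedules only onto nodes in that pool.
apiVersion: v1
kind: Pod
metadata:
  name: example-execution               # illustrative pod name
spec:
  nodeSelector:
    dominodatalab.com/node-pool: default
  containers:
  - name: run
    image: example/workload:latest      # illustrative image
```

Horizontal scaling then means adding or removing labeled nodes in a pool; pods continue to target the pool by label regardless of how many nodes back it.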
Domino uses Kubernetes requests and limits to manage the CPU and memory that Domino pods consume. These requests and limits can be scaled to adjust resource consumption and performance. Container workloads such as databases and search systems, whose data integrity can be compromised when limits are enforced, ship without limits in their configuration, and you must not add limits to them. See Manage Compute Resources for sizing information.
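A minimal sketch of how requests and limits appear in a Kubernetes container spec (the pod name and resource values here are illustrative, not Domino defaults). The scheduler reserves the `requests` amounts when placing the pod, and the kubelet enforces the `limits` as a ceiling; a database-style workload would carry only the `requests` stanza:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-workload      # illustrative name
spec:
  containers:
  - name: app
    image: example/app:latest # illustrative image
    resources:
      requests:               # guaranteed at scheduling time
        cpu: "1"
        memory: 4Gi
      limits:                 # enforced ceiling; omit for limit-sensitive workloads
        cpu: "2"
        memory: 8Gi
```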
- Asynchronous Domino endpoints
Asynchronous Domino endpoints may impose additional storage requirements for MongoDB and RabbitMQ. See Asynchronous Domino endpoints Capacity Planning for sizing information.