In the context of Kubernetes and Domino, multi-tenancy means a Kubernetes cluster (hereinafter simply referred to as “cluster” unless otherwise disambiguated) that supports multiple applications and is not dedicated just to Domino (that is, each application is an individual cluster tenant).
Domino supports multi-tenant clusters (or multi-tenancy) by adhering to a set of principles that ensure it does not interfere with other applications or with other cluster-wide services that might exist. This also applies to installing Domino into an existing multi-tenant cluster, assuming typical best-practice multi-tenancy constraints are in place.
On-premises and capacity-constrained environments. In this case, you are trying to maximize the use of limited, often physical, infrastructure.
Minimize administration costs.
Shared Resource Loading. Multi-tenant clusters still share common resources, such as the Kubernetes control plane (for example, the API server), DNS, and ingress. As a result, other applications can impact Domino, and vice versa.
Imperfect Compute Isolation and Predictability. Unless you restrict node-level usage per application, there is no isolation at the node level, so Domino Runs can share compute with other applications. Ill-behaved tenants can impact Domino Runs by hogging resources, reducing the resources available to Domino or, in the worst case, bringing down the node. In most cases this will not happen; however, if particular Domino Runs need predictability or strict isolation, it can be an issue. You can reserve nodes in your cluster exclusively for Domino, but doing so weakens the argument for multi-tenancy.
Increased Security Complexity and Risk. Cluster administrators will likely have to manage a larger, or finer-grained, set of RBAC objects and rules. Shared resources and node-level coupling expose an additional attack surface to any malicious tenants.
Shared Cluster Maintenance. All applications in the cluster are subject to the same maintenance windows. Hence, if cluster maintenance is driven by one particular application, all applications will experience the same downtime even though they do not require that maintenance.
If two or more applications map a file from the host path and read or modify that file, problems can arise. The use of host paths is generally frowned upon except for monitoring software, and currently the only place Domino requires a host mount is for fluentd to monitor container logs. Because this is standard practice for fluentd and an explicitly read-only operation, it will not interfere with other applications.
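To sketch why this is safe, a read-only hostPath mount for container logs looks roughly like the following. The DaemonSet name, image, and paths here are illustrative, not the exact manifest Domino ships:

```yaml
# Illustrative fluentd DaemonSet fragment: the host's container log
# directory is mounted read-only, so fluentd can tail logs but can
# never modify files that other applications depend on.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd                       # hypothetical name
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16 # example image tag
          volumeMounts:
            - name: varlog
              mountPath: /var/log/containers
              readOnly: true          # the key property: read-only
      volumes:
        - name: varlog
          hostPath:
            path: /var/log/containers
```

Because the mount is read-only, the worst a misconfigured collector can do is read logs; it cannot corrupt files shared with other tenants.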
Applications that require system settings be modified for performance or reliability can interfere with or overwrite other applications' settings.
Currently, the only service that requires an updated system setting for Domino is Elasticsearch, and that change can be disabled if the cluster operators already have an acceptable setting in place.
vm.max_map_count must be set for Elasticsearch to work; this is not a Domino requirement, but a mandatory requirement from the upstream Elasticsearch Helm chart.
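For reference, upstream Elasticsearch requires vm.max_map_count to be at least 262144, and the Helm chart typically applies this through a privileged init container. An illustrative fragment (image and container name are examples):

```yaml
# Illustrative sysctl init container, similar in spirit to the
# upstream Elasticsearch Helm chart's sysctl init container.
initContainers:
  - name: sysctl
    image: busybox:1.36    # example image
    securityContext:
      privileged: true     # required to change kernel settings on the node
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
```

Operators who manage this setting at the node level (for example, via /etc/sysctl.conf) can disable the init container instead.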
We currently deploy the following DaemonSets for a standard install.
docker-registry Certificate Management. This allows the underlying Docker daemon to pull from the Domino-deployed Docker registry, which backs Domino Compute Environments. The service mounts the underlying /etc/docker/certs.d directory and creates additional files to support the Domino Docker registry. This is not something that will necessarily interfere with other applications, but it might cause concern for cluster operators, and any host-level operation is inherently risky.
image-cache-agent. This handles look-ahead caching and image management for the cluster Docker daemon, allowing for shorter Domino execution start-up times. This must not be deployed on non-Domino nodes.
fluentd. This monitors logs from users' compute containers, which are pushed through the system to feed the Jobs and Workspaces dashboards. See Files.
prometheus-node-exporter. This monitors node metrics, such as network statistics, and it is polled by the Domino deployed Prometheus server. This can be disabled with the
All DaemonSets can be limited by a nodeSelector flag, which causes their pods to be scheduled only on the subset of nodes with a specific label. Depending on the cluster operator's needs, we will require a categorical label on nodes for Domino's use that we can target for deployment.
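As a sketch, a DaemonSet restricted to labeled nodes might look like the following; the label key and value are illustrative, and the actual label would be agreed on with the cluster operator:

```yaml
# Illustrative nodeSelector on a DaemonSet pod template: pods are
# scheduled only on nodes carrying this (hypothetical) label.
spec:
  template:
    spec:
      nodeSelector:
        dominodatalab.com/domino-node: "true"
```

Operators would then apply the matching label to the intended nodes, for example with `kubectl label node <node-name> dominodatalab.com/domino-node=true`.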
Domino creates separate namespaces for its services and requires communication between these namespaces. Domino creates several ClusterRoles and bindings that control access to its namespaces or into global resources.
All Domino-created ClusterRoles are prefixed by the deployment name, which is specified by the name key in the domino.yml configuration file (see Configuration Reference).
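As a minimal sketch, if domino.yml sets a deployment name like the following (the value is illustrative), the resulting ClusterRoles carry that prefix:

```yaml
# Illustrative domino.yml fragment: a deployment named "acme-domino"
# yields ClusterRoles named "acme-domino-...".
name: acme-domino
```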
Domino uses pod security policies (PSPs) to ensure that, by default, pods cannot use system-level permissions they have not been granted. Because PSPs are cluster-scoped, they too are prefixed with the deployment name. Applications cannot use these PSPs without explicitly being granted access through a Role or ClusterRole.
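Granting access to a PSP uses the standard RBAC `use` verb scoped to a named policy. An illustrative grant (the role and policy names are hypothetical):

```yaml
# Illustrative ClusterRole granting "use" of one named PSP; without a
# binding to a role like this, workloads cannot adopt the policy.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-example-psp                         # hypothetical name
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["mydeployment-example-psp"] # hypothetical PSP name
    verbs: ["use"]
```

Because the rule is restricted by resourceNames, other tenants cannot accidentally inherit Domino's policies.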
Domino does not make extensive use of Custom Resource Definitions (CRDs), except for the on-demand Spark feature in 4.x. Our CRD is uniquely named sparkclusters.apps.dominodatalab.com and should not interfere with other applications.
Domino uses persistent volumes extensively throughout the system to ensure that data storage is abstracted and durable. With the exception of two shared storage mounts, which both incorporate namespaces to ensure uniqueness, we strictly use dynamic volume provisioning through persistent volume claims, which generates volume names that will not conflict with any other application.
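A dynamically provisioned claim is namespaced, and the provisioner generates a unique name for the backing volume. An illustrative claim (the claim name, namespace, and storage class are hypothetical):

```yaml
# Illustrative PVC: the bound PV gets a provisioner-generated name
# (typically pvc-<uid>), so it cannot collide with volumes owned by
# other tenants in the cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim          # hypothetical name
  namespace: domino-compute    # hypothetical namespace
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard   # example storage class
  resources:
    requests:
      storage: 10Gi
```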
Separate Node Pool for Platform and Compute. Even if Domino is installed in a multi-tenant cluster, we prefer to have a separate node pool for our Platform and Compute Nodes. This is not always possible, but is preferred. Domino does set resource limits and requests so that it cannot overwhelm individual nodes.
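Resource requests and limits are the mechanism that keeps Domino pods from overwhelming a shared node: the scheduler reserves the requests, and the kubelet enforces the limits. An illustrative container fragment (the values are examples, not Domino's actual defaults):

```yaml
# Illustrative container resources: requests bound what the scheduler
# reserves on a node; limits cap what the container may consume.
resources:
  requests:
    cpu: "500m"
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 2Gi
```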