This topic describes how to deploy Domino components on Amazon Elastic Kubernetes Service (EKS). EKS is hosted on Amazon Web Services (AWS).
- Get the $FLEETCOMMAND_AGENT_TAG for your target release from the releases page.

- Use environment variables to set some values used by the ddlctl CLI. This simplifies the commands you'll run while installing Domino components:

    unset HISTFILE
    export QUAY_USERNAME=<quay.io username provided by Domino>
    export QUAY_PASSWORD=<quay.io password provided by Domino>
    export FLEETCOMMAND_AGENT_TAG=<Tag that corresponds to the version of Domino deployed>
- Generate an EKS configuration file.

  For a standard control plane install:

    ddlctl create config --agent-version $FLEETCOMMAND_AGENT_TAG --preset eks

  For a data plane install:

    ddlctl create config --agent-version $FLEETCOMMAND_AGENT_TAG --preset data-plane-eks

  Important: Changing the defaults in the generated configuration can affect the deployment. If you must adjust its parameters, contact a Domino representative.
- Gather Terraform values to update your configuration file:

  - If you deployed your infrastructure using the terraform-aws-eks module version v3.0.1 or above, you can use the included tf.sh script:

      ./tf.sh infra output domino_config_values

  - Otherwise, use:

      terraform output domino_config_values
- For both control plane and data plane installs, open the domino.yml file and edit the attributes as follows (an illustrative example follows this list):

  - name: The name of the deployment. This can't be changed post-deployment.
  - hostname: The hostname for the Domino install (for example, domino.example.com).
  - storage_classes.block.parameters.kmsKeyId: KMS key for block storage.
  - storage_classes.shared.efs.region: AWS region for the EFS file system.
  - storage_classes.shared.efs.filesystem_id: EFS file system ID.
  - storage_classes.shared.efs.access_point_id: EFS access point ID.

  For control plane installs, also configure:

  - autoscaler.auto_discovery.cluster_name: Name of the Kubernetes cluster.
  - autoscaler.aws.region: The AWS deployment region.

  Note: Configure only one of external_docker_registry or internal_docker_registry. The external_docker_registry should only be set during new installations. If you are upgrading and have previously configured internal_docker_registry, you must continue to use it.

  - external_docker_registry: The ECR repository URL.
  - internal_docker_registry.s3_override.region: AWS region for the S3 registry bucket.
  - internal_docker_registry.s3_override.bucket: S3 bucket name for the internal Docker registry.
  - internal_docker_registry.s3_override.sse_kms_key_id: KMS key for the S3 internal Docker registry bucket.
  - blob_storage.projects.s3.region: S3 bucket region for projects.
  - blob_storage.projects.s3.bucket: S3 bucket name for projects.
  - blob_storage.projects.s3.sse_kms_key_id: KMS key for the S3 projects bucket.
  - blob_storage.logs.s3.region: S3 bucket region for logs.
  - blob_storage.logs.s3.bucket: S3 bucket name for logs.
  - blob_storage.logs.s3.sse_kms_key_id: KMS key for the S3 logs bucket.
  - blob_storage.backups.s3.region: S3 bucket region for backups.
  - blob_storage.backups.s3.bucket: S3 bucket name for backups.
  - blob_storage.backups.s3.sse_kms_key_id: KMS key for the S3 backups bucket.
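  For orientation only, here is a minimal sketch of how those attributes might look once filled in for a control plane install. It assumes the dotted paths above map directly to nested YAML keys; every value is a placeholder, and the file generated by ddlctl remains the authoritative template:

    name: domino-deployment                 # placeholder; cannot be changed after deployment
    hostname: domino.example.com
    autoscaler:
      auto_discovery:
        cluster_name: domino-eks-cluster    # placeholder EKS cluster name
      aws:
        region: us-west-2
    storage_classes:
      block:
        parameters:
          kmsKeyId: arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID
      shared:
        efs:
          region: us-west-2
          filesystem_id: fs-0123456789abcdef0
          access_point_id: fsap-0123456789abcdef0
    # Set only one registry option; external_docker_registry is shown here for a new install.
    external_docker_registry: 111122223333.dkr.ecr.us-west-2.amazonaws.com
    blob_storage:
      projects:
        s3:
          region: us-west-2
          bucket: example-domino-projects
          sse_kms_key_id: arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID
      logs:
        s3:
          region: us-west-2
          bucket: example-domino-logs
          sse_kms_key_id: arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID
      backups:
        s3:
          region: us-west-2
          bucket: example-domino-backups
          sse_kms_key_id: arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID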
- Configure the load balancer by adding the code below to the end of the file. Replace <SSL certificate arn> and <Monitoring bucket name> with the values for your installation. Note that the CIDR 0.0.0.0/0 for loadBalancerSourceRanges can be updated to restrict access to certain CIDR blocks.

    release_overrides:
      nginx-ingress:
        chart_values:
          controller:
            kind: Deployment
            hostNetwork: false
            service:
              enabled: true
              externalTrafficPolicy: "Local"
              type: LoadBalancer
              annotations:
                service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: 'ELBSecurityPolicy-TLS-1-2-2017-01'
                service.beta.kubernetes.io/aws-load-balancer-ssl-cert: '<SSL certificate arn>'
                service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
                service.beta.kubernetes.io/aws-load-balancer-internal: 'false'
                service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'ssl'
                service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
                service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
                service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
                service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: 'true'
                service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: '5'
                service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: '<Monitoring bucket name>'
                service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: 'ELBAccessLogs'
              loadBalancerSourceRanges:
                - 0.0.0.0/0 # should always be 0.0.0.0/0, networkPolicy does the enforcement
            config:
              use-forwarded-headers: 'false'
              ssl-ciphers: 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES128-SHA256:AES256-GCM-SHA384:AES256-SHA256:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA'
              ssl-protocols: 'TLSv1.2 TLSv1.3'
          networkPolicy:
            ipCidrs:
              - 10.0.0.0/16 # node network
              - 100.64.0.0/16 # pod network
              - 0.0.0.0/0 # access list starts here
Note: We have switched to Network Load Balancers. Existing installs should either continue to use the existing Classic Load Balancer configuration or be prepared to adjust DNS for the replacement Network Load Balancer on upgrade, because Kubernetes will destroy the currently provisioned Classic Load Balancer when the configuration changes.
For reference, this is the old Classic Load Balancer configuration:
    release_overrides:
      nginx-ingress:
        chart_values:
          controller:
            kind: Deployment
            hostNetwork: false
            service:
              enabled: true
              type: LoadBalancer
              annotations:
                service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-2017-01
                service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <SSL certificate arn>
                service.beta.kubernetes.io/aws-load-balancer-internal: false
                service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
                service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
                service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
                service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
                service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: 'true'
                service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: '5'
                service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: <Monitoring bucket name>
                service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: ELBAccessLogs
              loadBalancerSourceRanges:
                - 0.0.0.0/0
              targetPorts:
                http: http
                https: http
            config:
              use-proxy-protocol: 'true'

- Install the Domino components by running:

    ddlctl create domino --config {filepath-of-config-created-in-previous-step} --agent-version $FLEETCOMMAND_AGENT_TAG

  If you use your own NGINX ingress controller by specifying ingress_controller.install = false, you need to create a network policy in both the Domino platform and compute namespaces.
Here is an example of a network policy that allows ingress from the nginx namespace:
kubectl -n <domino-namespace> apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: external-nginx
spec:
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: nginx
  podSelector: {}
  policyTypes:
    - Ingress
EOF

Run the following to get the external IP to access your instance's Domino management plane:

  kubectl -n domino-platform get svc nginx-ingress-controller

You can use this to update your DNS records accordingly.
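If you want to script the DNS update, here is a minimal sketch that prints only the load balancer hostname for that same service. It uses standard kubectl jsonpath output; no Domino-specific flags are assumed:

  kubectl -n domino-platform get svc nginx-ingress-controller \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'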
Important: Create a canonical name (CNAME) to this host in your DNS, not an address record (A record).
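For illustration, a CNAME record in BIND zone-file syntax, assuming domino.example.com is your Domino hostname and a TTL of 300 seconds (both placeholders); the target is the load balancer hostname returned by the previous command:

  domino.example.com.   300   IN   CNAME   <load balancer hostname>.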
- Go to https://<YOUR-DOMAIN>/auth/.

- Log in with the username keycloak and the password from the keycloak-http secret in the domino-platform namespace.

- Use the following command to get the password:

    echo -e "\nyour password is: $(kubectl get secret keycloak-http -n domino-platform --template={{.data.password}} | base64 -d)\n"
- Go to Users in the navigation pane and click Add User.
- Enter the username, first name, last name, and email address, and then click Save.
- Go to the Credentials tab and add a password.
- Optional: Disable Temporary.
- Click Set Password.
- Go to Role Mappings.
- From Client Roles, select domino-play.
- Select the User role and add it to your user.
- Go to the main page for your Domino deployment (for example, https://<YOUR-DOMAIN>) and sign in with your new Domino user.
- Go to Environments > Domino Standard Environment Py3.8 R4.1 > Revisions and make sure the revision is active. If not, use Build Logs to try to solve the problem.
- Go to Projects > Quick-start > Workspaces and launch a new workspace using Jupyter (this can take a while).
- When the new workspace is created, open main.ipynb and confirm that you can execute the script without errors.
Use Keycloak to enable user registration, so users can access your fresh Domino install. Keycloak is a user authentication service that runs on a pod in your cluster.