This topic describes how to deploy Domino components on Amazon Elastic Kubernetes Service (EKS). EKS is hosted on Amazon Web Services (AWS).
The Amazon Web Services Command Line Interface (AWS CLI) doesn’t support SOCKS5 proxies, so you must run these commands directly from a machine that has network access to the Kubernetes cluster.
Note
| Before installing the following binaries, check for existing versions. |
- To connect to the bastion host, run the command generated by the Terraform output.
If you deployed your infrastructure using version v3.0.1 or later of the terraform-aws-eks module, you can use the included tf.sh script:
./tf.sh infra output ssh_bastion_command
Otherwise:
terraform output ssh_bastion_command
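The output is an SSH command that you can run as-is. As a purely illustrative sketch (the key file, user, and host below are placeholders, not values from your deployment), it looks similar to:
ssh -i ./domino.pem ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com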
- Verify that the following binaries are installed:
# kubectl
kubectl version --client=true --short=true
# aws-cli
aws --version
# docker daemon is installed and running
docker --version
docker ps
If any are missing, follow these steps to install them:
- Update aws-cli to version 2:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update && rm -fr aws awscliv2.zip
- Install Docker, then exit the terminal. When you log in again, the group change you just made takes effect:
sudo yum install -y docker
sudo systemctl start docker
sudo usermod -a -G docker ec2-user
exit
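After you log back in, you can confirm that Docker runs without sudo; this simply exercises the group membership added above:
docker info --format '{{.ServerVersion}}'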
- Install kubectl:
curl -LO https://dl.k8s.io/release/v1.24.10/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
Important
| The kubectl version you install must be within one minor version of your EKS control plane. For example, if your EKS cluster runs Kubernetes 1.21, install a 1.20, 1.21, or 1.22 kubectl client. |
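Optionally, you can verify the kubectl download against its published checksum before running the chmod and mv steps above; the matching .sha256 file is published alongside the binary at the same dl.k8s.io path:
curl -LO https://dl.k8s.io/release/v1.24.10/bin/linux/amd64/kubectl.sha256
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check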
- Use environment variables to set the values of IDs, names, and labels. This simplifies the commands you’ll run while installing Domino components:
unset HISTFILE
export DOMINO_VER=<The Domino version to deploy>
export FLEET_COMMAND_TAG=<Tag that corresponds to the version of Domino deployed>
export DEPLOY_NAME=<Name of deployment>
export AWS_REGION=<The region to deploy the resources>
export AWS_ACCESS_KEY_ID=<Your AWS access key ID>
export AWS_SECRET_ACCESS_KEY=<Your AWS secret key>
export QUAY_USERNAME=<`quay.io` username provided by Domino>
export QUAY_PASSWORD=<`quay.io` password provided by Domino>
Tip
| You can get the deployment tag from the fleetcommand-agent release notes. |
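For illustration only, a filled-in set of values might look like the following; every value shown is a made-up placeholder, so substitute the version, tag, names, and region for your own deployment (and never commit real credentials to a file):
export DOMINO_VER=5.3.0
export FLEET_COMMAND_TAG=v42
export DEPLOY_NAME=domino-prod
export AWS_REGION=us-west-2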
- To retrieve the credentials for your Kubernetes cluster, first set the location of the local kubeconfig file to write to:
export KUBECONFIG=$(pwd)/kubeconfig
- Update the kubeconfig:
aws eks update-kubeconfig --kubeconfig $KUBECONFIG --region $AWS_REGION --name $DEPLOY_NAME --alias $DEPLOY_NAME
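Because KUBECONFIG is exported, kubectl picks up the new context automatically. A quick sanity check that the credentials work and the cluster is reachable:
kubectl get nodes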
- If you aren’t already logged into quay.io, run the following:
docker login -u $QUAY_USERNAME -p $QUAY_PASSWORD quay.io
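If you prefer to keep the password out of the process list and shell history, Docker also accepts it on standard input; this is an equivalent login using the standard --password-stdin flag:
echo "$QUAY_PASSWORD" | docker login -u "$QUAY_USERNAME" --password-stdin quay.io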
- Generate a configuration file named domino.yml in your working directory:
Caution
| This overwrites existing domino.yml files. |
docker run --rm -it -v $(pwd):/install quay.io/domino/fleetcommand-agent:$FLEET_COMMAND_TAG init --file /install/domino.yml --version $DOMINO_VER --preset eks
Note
| Changing the defaults in domino.yml can affect the deployment. If you must adjust its parameters, contact a Domino representative. |
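A quick check that the file was generated in your working directory, plus a first look at its contents:
ls -l domino.yml
head -n 20 domino.yml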
- Gather the Terraform values needed to update the domino.yml file.
If you deployed your infrastructure using version v3.0.1 or later of the terraform-aws-eks module, you can use the included tf.sh script:
./tf.sh infra output domino_config_values
Otherwise:
terraform output domino_config_values
- Open the domino.yml file and edit the attributes as follows (an illustrative example of the resulting fragment follows this list):
name: The name of the deployment. This can’t be changed post-deployment.
hostname: The hostname for the Domino install (for example, domino.example.com).
autoscaler.auto_discovery.cluster_name: Name of the Kubernetes cluster.
autoscaler.aws.region: AWS deployment region.
internal_docker_registry.s3_override.region: AWS region for the S3 registry bucket.
internal_docker_registry.s3_override.bucket: S3 bucket name for the internal Docker registry.
internal_docker_registry.s3_override.sse_kms_key_id: KMS key for the S3 internal Docker registry bucket.
storage_classes.block.parameters.kmsKeyId: KMS key for block storage.
storage_classes.shared.efs.region: AWS region for the EFS file system.
storage_classes.shared.efs.filesystem_id: EFS file system ID.
storage_classes.shared.efs.access_point_id: EFS access point ID.
blob_storage.projects.s3.region: S3 bucket region for projects.
blob_storage.projects.s3.bucket: S3 bucket name for projects.
blob_storage.projects.s3.sse_kms_key_id: KMS key for the S3 projects bucket.
blob_storage.logs.s3.region: S3 bucket region for logs.
blob_storage.logs.s3.bucket: S3 bucket name for logs.
blob_storage.logs.s3.sse_kms_key_id: KMS key for the S3 logs bucket.
blob_storage.backups.s3.region: S3 bucket region for backups.
blob_storage.backups.s3.bucket: S3 bucket name for backups.
blob_storage.backups.s3.sse_kms_key_id: KMS key for the S3 backups bucket.
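For orientation only, the dotted attribute paths above correspond to nested YAML keys. A sketch of what a few edited sections might look like, with entirely made-up names, IDs, and ARNs:
name: domino-prod
hostname: domino.example.com
autoscaler:
  auto_discovery:
    cluster_name: domino-prod
  aws:
    region: us-west-2
storage_classes:
  shared:
    efs:
      region: us-west-2
      filesystem_id: fs-0123456789abcdef0
      access_point_id: fsap-0123456789abcdef0
blob_storage:
  projects:
    s3:
      region: us-west-2
      bucket: domino-prod-projects
      sse_kms_key_id: arn:aws:kms:us-west-2:111122223333:key/00000000-0000-0000-0000-000000000000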
- Configure the LoadBalancer by adding the following code to the end of the file. Replace <SSL certificate arn> and <Monitoring bucket name> with the values for your installation.
Note
| The CIDR 0.0.0.0/0 for loadBalancerSourceRanges can be updated to restrict access to certain CIDR blocks. |
Note
| We have switched to Network Load Balancers. Existing installs should either continue to use the existing Classic Load Balancer configuration or be prepared to adjust DNS for the replacement Network Load Balancer on upgrade, because Kubernetes destroys the currently provisioned Classic Load Balancer when the configuration changes. |
release_overrides:
  nginx-ingress:
    chart_values:
      controller:
        kind: Deployment
        hostNetwork: false
        service:
          enabled: true
          externalTrafficPolicy: "Local"
          type: LoadBalancer
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: 'ELBSecurityPolicy-TLS-1-2-2017-01'
            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: '<SSL certificate arn>'
            service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
            service.beta.kubernetes.io/aws-load-balancer-internal: 'false'
            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'ssl'
            service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
            service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
            service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
            service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: 'true'
            service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: '5'
            service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: '<Monitoring bucket name>'
            service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: 'ELBAccessLogs'
          loadBalancerSourceRanges:
            - 0.0.0.0/0 # should always be 0.0.0.0/0, networkPolicy does the enforcement
        config:
          use-forwarded-headers: 'false'
          ssl-ciphers: 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES128-SHA256:AES256-GCM-SHA384:AES256-SHA256:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA'
          ssl-protocols: 'TLSv1.2 TLSv1.3'
      networkPolicy:
        ipCidrs:
          - 10.0.0.0/16 # node network
          - 100.64.0.0/16 # pod network
          - 0.0.0.0/0 # access list starts here
For reference, this is the old Classic Load Balancer configuration:
release_overrides:
  nginx-ingress:
    chart_values:
      controller:
        kind: Deployment
        hostNetwork: false
        service:
          enabled: true
          type: LoadBalancer
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-2017-01
            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <SSL certificate arn>
            service.beta.kubernetes.io/aws-load-balancer-internal: false
            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
            service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
            service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
            service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
            service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: 'true'
            service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: '5'
            service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: <Monitoring bucket name>
            service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: ELBAccessLogs
          loadBalancerSourceRanges:
            - 0.0.0.0/0
          targetPorts:
            http: http
            https: http
        config:
          use-proxy-protocol: 'true'
Install Domino with fleetcommand-agent
fleetcommand-agent installs and configures Domino components. It uses the installation template to gather the required parameters for the environment and sets them when installing Domino components.
To install Domino on the infrastructure you prepared, run the following:
curl -o fleetcommand-agent-install.sh https://docs.dominodatalab.com/attachments/fleetcommand-agent-install.sh
bash fleetcommand-agent-install.sh $DOMINO_VER
See fleetcommand-agent-install.sh Downloads for more information.
Tip
| If you encounter errors, investigate and resolve the root cause before you run fleetcommand-agent-install.sh again. Failures are often related to resource quotas and limits. Contact a Domino representative for assistance. |
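While the installer runs, one way to follow progress is to watch pods come up in the platform namespace. This is a minimal sketch that assumes the default domino-platform namespace used elsewhere in this guide:
kubectl -n domino-platform get pods --watch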
Set up DNS
Run the following to get the external hostname used to access your instance’s Domino management plane:
kubectl -n domino-platform get svc nginx-ingress-controller
Use this value to update your DNS records accordingly.
Important
| Create a canonical name (CNAME) record to this host in your DNS, not an address record (A record). |
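If you only want the load balancer hostname itself (for example, to paste into your DNS console or feed into automation), a jsonpath query can extract it directly using standard kubectl output formatting:
kubectl -n domino-platform get svc nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'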
Validate your installation
Use Keycloak to enable user registration so that users can access your fresh Domino install. Keycloak is a user authentication service that runs on a pod in your cluster.
- Log in with the username keycloak and the password from the keycloak-http secret in the domino-platform namespace.
- Use the following command to get the password:
echo -e "your password is: $(kubectl get secret keycloak-http -n domino-platform --template={{.data.password}} | base64 -d)"
- Go to Users in the navigation pane and click Add User.
- Enter the username, first name, last name, and email address, and then click Save.
- Go to the Credentials tab and add a password.
- Optional: Disable Temporary.
- Click Set Password.
- Go to Role Mappings.
- From Client Roles, select domino-play.
- Select the User role and add it to your user.
- Go to the main page for your Domino deployment (for example, https://<YOUR-DOMAIN>) and sign in with your new Domino user.
- Go to Environments > Domino Standard Environment Py3.8 R4.1 > Revisions and make sure the revision is active. If it isn’t, use Build Logs to troubleshoot.
- Go to Projects > Quick-start > Workspaces and launch a new workspace using Jupyter (this can take a while).
- When the new workspace is created, open main.ipynb and confirm that you can execute the script without errors.