EKS Clusters using Karpenter on Fargate deployed with the Terraform EKS Blueprints and GitHub Actions Workflows

Warning You are responsible for the cost of the AWS services used while running this sample deployment. There is no additional cost for using this sample. For full details, see the pricing pages for each AWS service you will be using in this sample. Prices are subject to change. This sample code should only be used for demonstration purposes and should not be used in a production environment.

This example provides the following capabilities:

  • Deploy EKS clusters with GitHub Actions workflows
  • Execute test suites on ephemeral test clusters
  • Leverage Karpenter for Autoscaling

The following resources will be deployed by this example.

  • EKS cluster control plane with one Fargate profile for the following namespaces:
    • kube-system
    • karpenter
    • argocd
    • external-dns
  • Karpenter add-on deployed through a Helm chart
  • One Karpenter default provisioner
  • AWS Load Balancer Controller add-on deployed through a Helm chart
  • External DNS add-on deployed through a Helm chart
  • The game-2048 application deployed through a Helm chart from this repository to demonstrate how Karpenter scales nodes based on workload requirements and how to configure the Ingress so that an application can be accessed over the internet.

How to Deploy Manually

Manual Deployment Prerequisites

Ensure that the tools required to work with this module (such as Terraform, the AWS CLI, and kubectl) are installed on your Linux, macOS, or Windows machine before you run Terraform plan and apply.
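
As a quick sanity check, you can confirm the tooling is available before you begin (a sketch; it assumes Terraform, the AWS CLI, and kubectl are the tools you need):

    terraform version
    aws --version
    kubectl version --client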

The following resources need to be available in the AWS accounts where you want to deploy EKS clusters:

  • EKS Cluster Administrators IAM Role
  • VPC with private and public subnets with the appropriate ELB tags
  • Route 53 hosted zone
  • Wildcard certificate issued in ACM
  • S3 bucket in which to store the Terraform state

You also need to provide a GitHub Personal Access Token (PAT) to access the application Helm chart in this repository.
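
If you want to double-check these prerequisites before running Terraform, the AWS CLI can be used along the following lines (a sketch; the VPC name, domain, bucket name, and region are placeholders you must replace):

    # VPC exists (looked up by its Name tag)
    aws ec2 describe-vpcs --filters "Name=tag:Name,Values=<VpcName>" --region <Region>
    # Route 53 hosted zone exists
    aws route53 list-hosted-zones-by-name --dns-name "<Domain>"
    # Wildcard certificate is issued in ACM
    aws acm list-certificates --certificate-statuses ISSUED --region <Region>
    # S3 bucket for the Terraform state is reachable
    aws s3api head-bucket --bucket "<BucketName>"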

Manual Deployment Steps

  1. Clone the repo using the command below

    git clone https://github.com/aws-samples/eks-blueprints-actions-workflow.git
  2. Run Terraform init to initialize a working directory with configuration files

    export S3_BUCKET_NAME="<BucketName>"
    export CLUSTER_NAME="<BucketKey>"
    export AWS_REGION="us-west-2"
    export CLUSTER_ID="01"
    export ENVIRONMENT="dev"
    export TEAM_NAME="demo"
    
    terraform init \
      -backend-config="bucket=${S3_BUCKET_NAME}" \
      -backend-config="key=${CLUSTER_NAME}/tfstate" \
      -backend-config="region=${AWS_REGION}"
  3. Create a tfvars file in the clusters folder with the values for your EKS cluster.

    Use clusters/demo-dev-01.tfvars as a reference. Replace all values contained in the demo example with the required cluster configuration.
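
    For example, you could start from the demo file and edit it, since the plan and apply commands below load ./clusters/${CLUSTER_NAME}.tfvars (a sketch; every value in the copied file must be replaced):

    cp clusters/demo-dev-01.tfvars "clusters/${CLUSTER_NAME}.tfvars"
    ${EDITOR:-vi} "clusters/${CLUSTER_NAME}.tfvars"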

  4. Run Terraform plan to review the resources that will be created by this execution

    # Personal Access Token (PAT) required to access the application helm chart repo
    export WORKLOADS_PAT="<github_token>"
    
    terraform plan \
      -var-file="./clusters/${CLUSTER_NAME}.tfvars" \
      -var="region=${AWS_REGION}" \
      -var="cluster_id=${CLUSTER_ID}" \
      -var="environment=${ENVIRONMENT}" \
      -var="team_name=${TEAM_NAME}" \
      -var="workloads_pat=${WORKLOADS_PAT}" \
      -out=tfplan
  5. Finally, run Terraform apply.

    terraform apply tfplan
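
Once the apply completes, the module outputs can help you connect to the new cluster, for example:

    terraform output cluster_name
    terraform output configure_kubectl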

Manual Destroy Steps

To clean up your environment, delete the sample workload and then destroy the Terraform modules in reverse order.

  1. Run Terraform init to initialize a working directory with configuration files

    export S3_BUCKET_NAME="<BucketName>"
    export CLUSTER_NAME="<BucketKey>"
    export AWS_REGION="us-west-2"
    export CLUSTER_ID="01"
    export ENVIRONMENT="dev"
    export TEAM_NAME="demo"
    
    terraform init \
      -backend-config="bucket=${S3_BUCKET_NAME}" \
      -backend-config="key=${CLUSTER_NAME}/tfstate" \
      -backend-config="region=${AWS_REGION}"
  2. Run Terraform destroy to remove Argo CD, the Karpenter provisioner and IAM role, the Kubernetes add-ons, the EKS cluster, and the VPC and subnet tags, in that order.

    # Argo CD
    terraform destroy \
      -target="module.eks_blueprints_kubernetes_addons.module.argocd" \
      -target="aws_secretsmanager_secret.argocd" \
      -target="bcrypt_hash.argo" \
      -var-file="./clusters/${CLUSTER_NAME}.tfvars" \
      -var="region=${AWS_REGION}" \
      -var="cluster_id=${CLUSTER_ID}" \
      -var="environment=${ENVIRONMENT}" \
      -var="team_name=${TEAM_NAME}" \
      -var="workloads_pat=${WORKLOADS_PAT}" \
      -auto-approve
    # Wait for 1-2 minutes to allow Karpenter to delete the empty nodes
    # Karpenter Provisioner
    terraform destroy \
      -target="kubectl_manifest.karpenter_provisioner" \
      -var-file="./clusters/${CLUSTER_NAME}.tfvars" \
      -var="region=${AWS_REGION}" \
      -var="cluster_id=${CLUSTER_ID}" \
      -var="environment=${ENVIRONMENT}" \
      -var="team_name=${TEAM_NAME}" \
      -var="workloads_pat=${WORKLOADS_PAT}" \
      -auto-approve
    # Kubernetes Add-Ons
    terraform destroy \
      -target="module.eks_blueprints_kubernetes_addons" \
      -var-file="./clusters/${CLUSTER_NAME}.tfvars" \
      -var="region=${AWS_REGION}" \
      -var="cluster_id=${CLUSTER_ID}" \
      -var="environment=${ENVIRONMENT}" \
      -var="team_name=${TEAM_NAME}" \
      -var="workloads_pat=${WORKLOADS_PAT}" \
      -auto-approve
    # EKS Cluster
    terraform destroy \
      -target="module.eks_blueprints" \
      -var-file="./clusters/${CLUSTER_NAME}.tfvars" \
      -var="region=${AWS_REGION}" \
      -var="cluster_id=${CLUSTER_ID}" \
      -var="environment=${ENVIRONMENT}" \
      -var="team_name=${TEAM_NAME}" \
      -var="workloads_pat=${WORKLOADS_PAT}" \
      -auto-approve
    # Karpenter IAM Role
    terraform destroy \
      -target="aws_iam_role.karpenter" \
      -var-file="./clusters/${CLUSTER_NAME}.tfvars" \
      -var="region=${AWS_REGION}" \
      -var="cluster_id=${CLUSTER_ID}" \
      -var="environment=${ENVIRONMENT}" \
      -var="team_name=${TEAM_NAME}" \
      -var="workloads_pat=${WORKLOADS_PAT}" \
      -auto-approve
    # VPC & Subnets Tags
    terraform destroy \
      -target="aws_ec2_tag.private_subnet_cluster_karpenter_tag" \
      -target="aws_ec2_tag.vpc_tag " \
      -var-file="./clusters/${CLUSTER_NAME}.tfvars" \
      -var="region=${AWS_REGION}" \
      -var="cluster_id=${CLUSTER_ID}" \
      -var="environment=${ENVIRONMENT}" \
      -var="team_name=${TEAM_NAME}" \
      -var="workloads_pat=${WORKLOADS_PAT}" \
      -auto-approve
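
The common flags repeated in each destroy command above can optionally be collected in a shell variable first, a minimal sketch assuming bash:

    TF_ARGS=(
      -var-file="./clusters/${CLUSTER_NAME}.tfvars"
      -var="region=${AWS_REGION}"
      -var="cluster_id=${CLUSTER_ID}"
      -var="environment=${ENVIRONMENT}"
      -var="team_name=${TEAM_NAME}"
      -var="workloads_pat=${WORKLOADS_PAT}"
    )
    # Example: destroy the Kubernetes add-ons using the shared flags
    terraform destroy -target="module.eks_blueprints_kubernetes_addons" "${TF_ARGS[@]}" -auto-approve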

How to Deploy with a GitHub Actions Workflow

GitHub Actions Workflow Prerequisites

The same AWS resources listed in the manual deployment prerequisites need to be available in the AWS accounts where you want to deploy EKS clusters.

Ensure that all the required Actions secrets are present in the repository's Secrets > Actions settings before creating a workflow to deploy an EKS cluster.

For example, to deploy a cluster in two environments named Dev and Staging, you will need the following GitHub Actions encrypted secrets:

  • DEMO_WORKLOADS_PAT
  • DEV_AWS_ACCOUNT
  • DEV_AWS_IAM_ROLE
  • STAGING_AWS_ACCOUNT
  • STAGING_AWS_IAM_ROLE
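
If you prefer the command line over the web UI, the GitHub CLI can create these secrets (a sketch; gh is not required by this repo, and the values shown are placeholders):

    gh secret set DEMO_WORKLOADS_PAT --body "<github_token>"
    gh secret set DEV_AWS_ACCOUNT --body "<aws_account_id>"
    gh secret set DEV_AWS_IAM_ROLE --body "<iam_role_name>"
    gh secret set STAGING_AWS_ACCOUNT --body "<aws_account_id>"
    gh secret set STAGING_AWS_IAM_ROLE --body "<iam_role_name>"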

Create the Environments you want to manage in your GitHub repository. This example uses the following GitHub Environments:

  • DEV
  • TEST
  • PR-TEST
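
These environments can be created in the repository settings, or with the GitHub CLI against the REST API (a sketch; replace <owner>/<repo> with your repository):

    for env in DEV TEST PR-TEST; do
      gh api --method PUT "repos/<owner>/<repo>/environments/${env}"
    done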

Workflow Deployment Steps

  1. Clone the repo using the command below

    git clone https://github.com/aws-samples/eks-blueprints-actions-workflow.git
  2. Create a new repo in your own GitHub organization using the cloned repo.

  3. Create a branch.

  4. If it doesn't exist already, create a .yml file in the .github/workflows folder containing the information required by the cluster you want to deploy:

    Use .github/workflows/terraform-deploy-eks-demo-01.yml as a reference. Replace all values contained in the demo example with the required cluster configuration.

  5. If it doesn't exist already, create the tfvars files in the clusters folder with the values for your EKS clusters.

    Use clusters/demo-dev-01.tfvars as a reference. Replace all values contained in the demo example with the required cluster configuration.

  6. Commit your changes and publish your branch.

  7. Create a Pull Request. This will trigger the workflow and add a comment with the expected plan outcome to the PR. The Terraform Apply step will not be executed at this stage.

  8. A workflow that triggers the deployment of an ephemeral cluster in the PR-TEST environment will be waiting for an approval. Add a required reviewer to approve workflow runs so you can decide when to deploy the ephemeral test cluster.

  9. Ask someone to review the PR and make the appropriate changes if necessary.

  10. Once the PR is approved and the code is merged to the main branch, the workflow will be triggered automatically and start the deploy job. The Terraform Apply step will only be executed if changes are required.

  11. When the PR is closed, a workflow to destroy the ephemeral test cluster in the PR-TEST environment is triggered. Approve the workflow run to destroy the EKS cluster.

Validation Steps

  1. Configure kubectl.

    EKS cluster details can be extracted from the Terraform output or from the AWS Console to get the name of the cluster. The following command updates the ~/.kube/config file on your local machine so that you can run kubectl commands against your EKS cluster.

    aws eks --region <region> update-kubeconfig --name <cluster-name>
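
    If you still have the Terraform working directory from the deployment, the cluster name can also be read directly from the Terraform state, for example:

    aws eks --region "${AWS_REGION}" update-kubeconfig --name "$(terraform output -raw cluster_name)"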
  2. You can access the ArgoCD UI by running the following command:

    kubectl port-forward svc/argo-cd-argocd-server 8080:443 -n argocd

    Then, open your browser and navigate to https://localhost:8080/. The username should be admin.

    The password is generated by the random_password resource and stored in AWS Secrets Manager. You can retrieve it by running the following command:

    aws secretsmanager get-secret-value --secret-id <SECRET_NAME> --region <REGION>

    Replace <SECRET_NAME> with the name of the secret (if you haven't changed it, it should be argocd), and replace <REGION> with the region you are using.

    Pick up the secret value from the SecretString field.
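
    To print only the password instead of the full JSON response, you can add the CLI's --query option, for example:

    aws secretsmanager get-secret-value --secret-id argocd --region <REGION> --query SecretString --output text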

  3. List all the worker nodes. You should see multiple Fargate nodes and one node provisioned by Karpenter up and running

    kubectl get nodes
  4. List all the pods running in the karpenter namespace

    kubectl get pods -n karpenter
  5. List the Karpenter provisioner that was deployed.

    kubectl get provisioners
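
    To inspect the provisioner's instance requirements and limits, you can describe it (this sketch assumes it is named default, as is typical for a single default provisioner):

    kubectl describe provisioner default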
  6. Check that the demo app workload is deployed on nodes provisioned by Karpenter provisioners.

    You can run this command to view the Karpenter Controller logs while the nodes are provisioned.

    kubectl logs --selector app.kubernetes.io/name=karpenter -n karpenter
  7. After a couple of minutes, you should see new nodes being added by Karpenter to accommodate the game-2048 application's EC2 instance family, capacity type, availability zone placement, and pod anti-affinity requirements.

    Warning Because of known limitations with topology spread constraints, the pods might not spread evenly across availability zones.

    kubectl get node \
      --selector=karpenter.sh/initialized=true \
      -L karpenter.sh/provisioner-name \
      -L topology.kubernetes.io/zone \
      -L karpenter.sh/capacity-type \
      -L karpenter.k8s.aws/instance-family
  8. Test by listing the game-2048 pods. You should see that all the pods are running on different nodes because of the pod anti-affinity rule.

    kubectl get pods -o wide
  9. Test that the sample application is now available.

    kubectl get ingress/ingress-2048 -n game-2048

    Open the browser to access the application via the ALB address https://game-2048-<ClusterName>.<Domain>/

    Warning You might need to wait a few minutes, and then refresh your browser.
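
    From the command line, the ALB hostname can also be read from the Ingress status and the application probed with curl (a sketch; DNS and the load balancer may take a few minutes to become ready):

    kubectl get ingress ingress-2048 -n game-2048 \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
    curl -I "https://game-2048-<ClusterName>.<Domain>/"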

Workflow Destroy Steps

  1. Create a branch.

  2. Create a .yml file in the .github/workflows folder containing the information required by the cluster you want to destroy:

    Use .github/workflows/terraform-destroy-eks-demo-01.yml as a reference. Replace all values contained in the demo example with the required cluster configuration.

  3. Commit your changes and publish your branch.

  4. Create a Pull Request. This will trigger the workflow and add a comment with the expected plan outcome to the PR. The Terraform Destroy step will not be executed at this stage.

  5. Ask someone to review the PR and make the appropriate changes if necessary.

  6. Once the PR is approved and the code is merged to the main branch, the workflow will have to be triggered manually to start the destroy job.

Requirements

Name | Version
terraform | >= 1.0.0
aws | >= 3.72
bcrypt | >= 0.1.2
helm | >= 2.4.1
kubectl | >= 1.14
kubernetes | >= 2.10
random | 3.3.2

Providers

Name | Version
aws | 4.47.0
bcrypt | 0.1.2
kubectl | 1.14.0
random | 3.3.2

Modules

Name | Source | Version
eks_blueprints | github.com/aws-ia/terraform-aws-eks-blueprints | v4.17.0
eks_blueprints_kubernetes_addons | github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons | v4.17.0

Resources

Name | Type
aws_ec2_tag.private_subnet_cluster_alb_tag | resource
aws_ec2_tag.private_subnet_cluster_karpenter_tag | resource
aws_ec2_tag.public_subnet_cluster_alb_tag | resource
aws_ec2_tag.vpc_tag | resource
aws_iam_instance_profile.karpenter | resource
aws_iam_role.karpenter | resource
aws_iam_role_policy_attachment.karpenter_ecr_read_only | resource
aws_iam_role_policy_attachment.karpenter_eks_cni | resource
aws_iam_role_policy_attachment.karpenter_eks_worker_node | resource
aws_iam_role_policy_attachment.karpenter_instance_core | resource
aws_kms_key.argocd_secret | resource
aws_secretsmanager_secret.argocd | resource
aws_secretsmanager_secret_version.argocd | resource
bcrypt_hash.argo | resource
kubectl_manifest.karpenter_provisioner | resource
random_password.argocd | resource
aws_acm_certificate.issued | data source
aws_availability_zones.available | data source
aws_caller_identity.current | data source
aws_eks_cluster_auth.this | data source
aws_iam_policy.ecr_read_only | data source
aws_iam_policy.eks_cni | data source
aws_iam_policy.eks_worker_node | data source
aws_iam_policy.instance_core | data source
aws_iam_role.eks_admins | data source
aws_partition.current | data source
aws_secretsmanager_secret_version.admin_password_version | data source
aws_subnet.eks_private_subnets | data source
aws_subnet.eks_public_subnets | data source
aws_subnets.eks_selected_private_subnets | data source
aws_subnets.eks_selected_public_subnets | data source
aws_vpc.eks | data source
kubectl_path_documents.karpenter_provisioners | data source

Inputs

Name | Description | Type | Default | Required
acm_certificate_domain | Route53 certificate domain | string | n/a | yes
argocd_version | Argo CD version | string | n/a | yes
aws_load_balancer_controller_version | AWS Load Balancer Controller version | string | n/a | yes
cluster_id | The EKS Cluster ID | string | n/a | yes
cluster_proportional_autoscaler_version | Cluster Proportional Autoscaler version | string | n/a | yes
eks_admins_iam_role | The EKS Admins IAM Role Name | string | n/a | yes
eks_cluster_domain | Route53 domain for the cluster | string | n/a | yes
environment | The environment of the EKS Cluster | string | n/a | yes
external_dns_version | External DNS version | string | n/a | yes
k8s_version | Kubernetes version | string | n/a | yes
karpenter_version | Karpenter version | string | n/a | yes
kube_proxy_version | Kube Proxy add-on version | string | n/a | yes
region | The AWS region where to deploy the EKS Cluster | string | n/a | yes
team_name | The name of the team that will own the EKS Cluster | string | n/a | yes
vpc_cni_version | VPC CNI add-on version | string | n/a | yes
vpc_name | The name of the VPC where to deploy the EKS Cluster Worker Nodes | string | n/a | yes
workloads_org | The Workloads GitHub Organization | string | n/a | yes
workloads_pat | The Workloads GitHub Personal Access Token | string | n/a | yes
workloads_path | The Workloads Helm Chart Path | string | n/a | yes
workloads_repo_url | The Workloads GitHub Repository URL | string | n/a | yes
workloads_target_revision | The Workloads Git Repository Target Revision (Tag or Branch) | string | "main" | no

Outputs

Name | Description
cluster_name | EKS Cluster Name
configure_kubectl | Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.
