This repo deploys a highly available EKS cluster with an Auto Scaling Group in region us-east-1, spread across 3 Availability Zones.

Since I used AWS KodeKloud Playgrounds to build the contents of this repo, some actions and resources were restricted or subject to technical limitations. Actions such as using specific Terraform modules, selecting instance types other than t2.micro, setting a custom EKS cluster name, or using an up-to-date EKS-optimized AMI were either restricted or not possible.
```
📁 /
├── 📁 terraform/
│   ├── aws-auth-cm.yaml
│   ├── controlplane.tf
│   ├── dataplane.tf
│   ├── iam.tf
│   ├── networking.tf
│   ├── providers.tf
│   ├── variables.tf
│   └── userdata.sh
├── script.sh
├── kubelet.service
└── README.md
```
Run `$ ./script.sh` to verify that you meet all the prerequisites. The script must be executable: `$ sudo chmod u+x script.sh`.

The script:

- Checks if the AWS CLI is installed and downloads it if it's not.
- Checks if Terraform is installed and downloads it if it's not.
- Checks if kubectl is installed and downloads it if it's not.
- Creates an AWS S3 bucket with versioning enabled to store the Terraform state. Feel free to modify the `BUCKET` variable, but remember it must match the name of the bucket in the S3 backend within the `providers.tf` file (see the verification sketch after this list).
- Generates a key pair, `dataplane-kp.pem` and `dataplane-kp.pem.pub`, for the EC2 Auto Scaling Group.
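If you want to double-check what the script set up, here is a minimal verification sketch; the `BUCKET` value below is a placeholder and must match whatever name `script.sh` created and `providers.tf` references.

```sh
# Placeholder -- replace with the BUCKET value used by script.sh / providers.tf
BUCKET="my-eks-demo-tfstate"

# Confirm the tooling the script installs is on the PATH
aws --version
terraform version
kubectl version --client

# Confirm the state bucket exists and has versioning enabled
aws s3api head-bucket --bucket "${BUCKET}"
aws s3api get-bucket-versioning --bucket "${BUCKET}"
```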
Make sure that the script generates a key pair. Alternatively, you can generate a new key pair with the commands below.

```sh
$ ssh-keygen -t rsa -N "" -f ${HOME}/dataplane-kp.pem
$ export TF_VAR_dataplane_public_key=$(cat "${HOME}/dataplane-kp.pem.pub")
```
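Terraform reads any environment variable prefixed with `TF_VAR_` as the value of the matching input variable, so `TF_VAR_dataplane_public_key` is expected to map to a `dataplane_public_key` variable (presumably declared in `variables.tf`). A quick sanity check:

```sh
# Confirm both key files were generated
ls -l "${HOME}/dataplane-kp.pem" "${HOME}/dataplane-kp.pem.pub"

# Confirm the public key is exported for Terraform (maps to var.dataplane_public_key)
printenv TF_VAR_dataplane_public_key
```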
Navigate into the `/terraform` directory and start the Terraform deployment cycle. It will take time, so grab a cup of coffee and let it work for 8 minutes or so.

```sh
$ terraform init
$ terraform plan
$ terraform apply
```
Once the deployment is complete, copy the output value `dataplane-role-arn` shown in the console and paste it into the `rolearn` field of `aws-auth-cm.yaml`.
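If the output has scrolled away, you can print it again with `terraform output`. The `sed` one-liner below is only a sketch: it assumes `aws-auth-cm.yaml` contains a `<dataplane-role-arn>` placeholder, which may not match how the file in this repo is written.

```sh
# Print the role ARN again (run inside the terraform/ directory)
terraform output -raw dataplane-role-arn

# Hypothetical shortcut -- only valid if aws-auth-cm.yaml uses a
# <dataplane-role-arn> placeholder on the rolearn line
sed -i "s|<dataplane-role-arn>|$(terraform output -raw dataplane-role-arn)|" aws-auth-cm.yaml
```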
Update your kubeconfig to access the EKS control plane with:

```sh
$ aws eks update-kubeconfig --region us-east-1 --name eks-demo
```
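To confirm that the kubeconfig update worked, run a quick check against the control plane (standard kubectl commands, nothing specific to this repo):

```sh
# Should print the EKS API server endpoint
kubectl cluster-info

# Should list the default kubernetes service
kubectl get svc
```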
Join the dataplane nodes to the control plane with kubectl and allow EKS about 60 seconds to detect the Auto Scaling Group.

```sh
$ kubectl apply -f aws-auth-cm.yaml
```
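You can watch the worker nodes register and move to the `Ready` state with:

```sh
kubectl get nodes --watch
```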
A few notes:

- The worker nodes of the dataplane use an EKS-optimized AMI that comes with `bootstrap.sh` pre-installed. However, since this EKS deployment uses Kubernetes 1.32, the AMI needs an upgrade.
- The script `userdata.sh` that is passed to the EC2 launch template contains a workaround to make communication between the dataplane and the EKS control plane possible (a generic bootstrap sketch follows this list).
- The AWS Load Balancer Controller creates an AWS Application Load Balancer (ALB) when you create a Kubernetes Ingress resource, and a Network Load Balancer (NLB) when you create a Kubernetes Service of type LoadBalancer (see the example after this list).
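For context, user data on an EKS-optimized AMI typically ends up invoking `bootstrap.sh` roughly as in the sketch below. This is a generic illustration, not the actual workaround in `userdata.sh`; the cluster endpoint and CA values are placeholders that the real deployment derives from Terraform.

```sh
#!/bin/bash
# Generic example of how an EKS-optimized AMI joins a cluster; the repo's
# userdata.sh layers its own workaround on top of a flow like this.
CLUSTER_NAME="eks-demo"
API_ENDPOINT="https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com"  # placeholder
CLUSTER_CA="BASE64_ENCODED_CA"                                  # placeholder

/etc/eks/bootstrap.sh "${CLUSTER_NAME}" \
  --apiserver-endpoint "${API_ENDPOINT}" \
  --b64-cluster-ca "${CLUSTER_CA}"
```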
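Assuming the AWS Load Balancer Controller is already installed in the cluster and configured to handle Services of type LoadBalancer, a minimal way to exercise that path is shown below; the deployment name and image are arbitrary examples.

```sh
# Create a throwaway deployment and expose it through a LoadBalancer Service;
# with the AWS Load Balancer Controller handling the Service, this provisions an NLB
kubectl create deployment lb-demo --image=nginx
kubectl expose deployment lb-demo --port=80 --type=LoadBalancer

# The EXTERNAL-IP column shows the load balancer DNS name once provisioning finishes
kubectl get service lb-demo
```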
In time, I will refactor the contents of this repo, implement them as part of a CI/CD pipeline using Jenkins, and hopefully switch everything to Terraform modules.
