Full deployment making use of Terraform, Ansible, Docker and Kubernetes.
First, Terraform needs to initialise the EKS cluster. It takes in these inputs as variables:
- access_key: String
- secret_key: String
- db_password: String
- key_name: String
- docker_user: String
Three of these are pulled from system environment variables named:
- TF_VAR_access_key
- TF_VAR_secret_key
- TF_VAR_db_password
The key_name variable is set by default to terraforminit and docker_user defaults to mine: stratcastor. You can change these in the variables file.
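For example, the three sensitive values could be exported in your shell before deploying. A minimal sketch with placeholder values (the variable names are the ones listed above):
export TF_VAR_access_key="<YOUR_AWS_ACCESS_KEY>"
export TF_VAR_secret_key="<YOUR_AWS_SECRET_KEY>"
export TF_VAR_db_password="<YOUR_DB_PASSWORD>"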
Once the environment variables are set, you can run all the required commands with:
cd infrastructure/
./.deploy.sh
Terraform will create:
1 VPC containing:
- 3 Public Subnets across 3 Availability Zones
- 5 Private Subnets across 3 Availability Zones
  - 3 for the EKS Node Groups
  - 2 for the Database
- 1 Internet Gateway
- 1 NAT Gateway
- All required Route Tables
- 1 Subnet Group
1 EKS Cluster making use of the previously created VPC, containing:
- 2 Auto-Scaling Worker Node Groups
  - First starts at a capacity of 2 t2.small nodes
  - Second starts at a capacity of 1 t2.medium node
  - Both are configured to scale out or in as required.
2 EC2 Instances
- First preconfigured to install Ansible
- Second preconfigured to be used as a Jenkins CI/CD server
1 RDS Instance
- Configured to launch into the VPC's subnet group
Then, Terraform will generate some needed files and attempt to push the changes to GitHub.
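If you want to read back the generated values before carrying on, the standard terraform output command will list them. A minimal sketch, assuming the RDS endpoint is exposed as a root-level output (the exact output names may differ in your configuration):
terraform output          # list every exported value, including the cluster name
terraform output db_ip    # hypothetical output name for the RDS endpoint, used later as <RDS_ENDPOINT>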
Second, you'll have to SSH onto the Jenkins instance and configure the AWS CLI for the jenkins user:
sudo useradd -m jenkins
sudo su jenkins
aws configure
Next, set up kubectl to use the Terraform-generated cluster:
aws eks --region eu-west-2 update-kubeconfig --name dnd-eks
# See outputs for cluster name
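A quick sanity check that the kubeconfig works before moving on (plain kubectl, nothing project-specific):
kubectl get nodes   # should list the workers from both node groups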
Add some environment variables:
sudo su
echo 'export MYSQL_PWD=<YOUR_PASSWORD>' >> ~/.bashrc
echo 'MYSQL_PWD=<YOUR_PASSWORD>' >> ~/.profile
echo 'MYSQL_PWD=<YOUR_PASSWORD>' >> /etc/environment
echo 'export MYSQL_IP=<RDS_ENDPOINT>' >> ~/.bashrc
echo 'MYSQL_IP=<RDS_ENDPOINT>' >> ~/.profile
echo 'MYSQL_IP=<RDS_ENDPOINT>' >> /etc/environment
source ~/.bashrc
source ~/.profile
exit
Back to technology-assisted configuration:
SSH onto the bastion instance and clone down the updated version of the repo.
git clone <YOUR-FORK>.git
Provide your bastion with the SSH key that can access Jenkins. Don't forget to
chmod 600 key.pem
Terraform should have generated an inventory file for you and automatically pushed it up.
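Before running the playbook, you can sanity-check that Ansible can reach the Jenkins instance through the generated inventory. A minimal sketch, assuming the inventory sits at the path used below:
eval "$(ssh-agent)"
ssh-add key.pem
ansible -i ./infrastructure/ansible/inventory all -m ping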
Running the ansible playbook should be as easy as:
cd ./infrastructure/ansible
ansible-playbook -i inventory Playbook.yaml
Ansible will install Docker and Jenkins, and configure an artifact repository.
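Once the playbook finishes, a quick way to confirm Jenkins is up is to hit it over HTTP. Port 8080 is only the Jenkins default and an assumption here; adjust if your playbook configures something else:
curl -s -o /dev/null -w "%{http_code}\n" http://<JENKINS_PUBLIC_IP>:8080   # expect a 200 or 403 before initial setup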
The app is 4 services, each managed by a Kubernetes deployment, behind an NGINX load balancer.
The Jenkinsfile has been generated by Terraform to work out of the box.
Simply set up a new Job to pull from the repository and look for a Jenkinsfile as the pipeline.
Whenever the Job triggers, Jenkins will build new versions of the images, push them to the artifact repository (held on the Jenkins instance) and then perform a rolling update on the deployment.
SSH onto the Jenkins instance and execute the following commands, replacing values where necessary:
sudo service docker restart
sudo su jenkins
docker login -u "<YOUR_USERNAME>" -p "<YOUR_PASSWORD>" docker.io
cd ~
git clone <YOUR_REPO>.git
repo_name=<YOUR_REPO_NAME>
perl -pe "s/{MYSQL_PWD}/$MYSQL_PWD/g" ./$repo_name/kubernetes/secret.yaml | perl -pe "s/{MYSQL_IP}/$MYSQL_IP/g" - | kubectl apply -f -
cd $repo_name
docker-compose build
docker-compose push
kubectl apply -f ./kubernetes/frontend.yaml
kubectl apply -f ./kubernetes/backend.yaml
kubectl apply -f ./kubernetes/randapp1.yaml
kubectl apply -f ./kubernetes/randapp2.yaml
kubectl apply -f ./kubernetes/nginx.yaml
I know people are going to ask: the docker-compose is so each image doesn't have to be built separately.
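After this first apply you can check that everything came up and find the public entry point. Standard kubectl, with the assumption that the NGINX manifest exposes a LoadBalancer-type Service:
kubectl get pods   # every pod should reach Running
kubectl get svc    # the EXTERNAL-IP/hostname on the NGINX service is the public endpoint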
This deployment makes use of two very well documented modules provided by HashiCorp. Much more in-depth documentation can be found by clicking on either VPC Module or EKS Module.
This use of the VPC module creates a high-availability capable network.
The use of the EKS module is a bit more complex. For more information on everything EKS actually does for us, I'd recommend the AWS documentation itself, or delving into the AWS GUI and reading what all the options will do for you. The Terraform documentation is very good, but not at a beginner level like the rest of the documentation; a certain amount of understanding is required beforehand.
The only hand-made module used was the EC2 module:
The EC2 module is configured to create two EC2 instances in a public subnet. Then, it creates an RDS instance running MySQL with the required initial database name.
- instance_type: String
- ami_id: String
- security_group: Security Group ID
- public_net_id: Public Subnet ID
- key_name: String
- av_zone: String
- db_password: String
- sec_group_ids: IDs of Security groups for database
- subnet_group_name: Subnet Group Name
- public_net_id: ID of public subnet to launch instance to
- nat_ip: NAT Gateway IP address
- bastion_ip: Public IP of bastion host instance on launch
- jenk_ip: Public IP of jenkins instance on launch
- db_ip: DNS Address of RDS instance on launch
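These addresses are what the earlier SSH and <RDS_ENDPOINT> steps rely on. A minimal sketch of using them, assuming they are surfaced as root-level Terraform outputs under these names and that the AMI's default user is ubuntu (both assumptions; adjust for your ami_id):
ssh -i key.pem ubuntu@$(terraform output -raw bastion_ip)
ssh -i key.pem ubuntu@$(terraform output -raw jenk_ip)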
With this module our infrastructure looks like this:
The Jenkinsfile is pretty much the same few commands repeated.
So, let's take one stage:
stage('--Front End--'){
    steps{
        sh '''
            image="<YOUR-DOCKERHUBUSERNAME>/frontend:build-$BUILD_NUMBER"
            docker build -t $image /var/lib/jenkins/workspace/$JOB_BASE_NAME/frontend
            docker push $image
            kubectl set image deployment/frontend frontend=$image
        '''
    }
}
Firstly, Jenkins sets a variable called image; it creates the value based on this pattern:
<Name_Of_Repository>/<Image_Name>:<Version_Tag>
You'll notice the version tag is being set using a Jenkins variable called BUILD_NUMBER; this variable changes based on the number of times the Job has run.
You'll also notice Jenkins is using the Docker username you set in Terraform as the location for the image; this is because we will pull the images from Docker Hub.
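For example, with the default docker_user the seventh build of this stage would tag the image roughly like so (the number is just whatever Jenkins' build counter has reached):
stratcastor/frontend:build-7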
Next, Jenkins builds a new image based on the fresh version of the repo it has automatically cloned, naming the image the way outlined above.
Then, Docker pushes the image to its own repo. (It goes to its own repo because of how the image was named.)
Finally, the stage interacts with the EKS control plane to update the image of the appropriate service, pulling the image from the repo, saving us the hassle of transferring images off the Jenkins instance.
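If you want to watch a rolling update land after a build, a standard check from the Jenkins instance would be something like:
kubectl rollout status deployment/frontend   # and similarly for the other deployments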