3 changes: 3 additions & 0 deletions .gitignore
@@ -0,0 +1,3 @@
.env
result
.direnv
3 changes: 3 additions & 0 deletions .vscode/settings.json
@@ -0,0 +1,3 @@
{
"ansible.python.interpreterPath": "/nix/store/99hl269v1igvjbp1znfk5jcarhzgy822-python3-3.12.8/bin/python"
}
157 changes: 143 additions & 14 deletions README.md
@@ -1,21 +1,150 @@
# UR1/ESIR DevOps Course
This repository contains the material and content of the DevOps course at the engineering school ESIR of the University of Rennes 1.
# Configuration Management Tools and Infrastructure as Code

## Year 2024-2025
# Infrastructure Declaration using Terraform

### Scheduling
We chose to declare all the machines in our infrastructure using Terraform, and we use it to provision our infrastructure on OVHcloud.

- Introduction to the course and DevOps: March 25th, 2025
- Quick overview of DevSecOps and MLSecOps: May 23rd, 2025
- Final presentations: May 16th, 2025 (8h-11h)
## Terraform

### Material
[Terraform](https://developer.hashicorp.com/) is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp that uses a cloud-provider-agnostic, declarative language to define and manage infrastructure. It allows users to provision resources across multiple cloud platforms like AWS, Azure, and Google Cloud without needing to learn each provider's GUI; only Terraform's documentation is needed. Since version 1.5, Terraform has been licensed under the Business Source License (BSL).
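To illustrate the declarative style, here is a hedged sketch of how an OVHcloud instance can be declared through the OpenStack provider (the resource name, image, flavor, and key pair below are assumptions, not our actual configuration):

```hcl
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

# Hypothetical worker instance; pick image/flavor from
# `openstack image list --public` and `openstack flavor list`
resource "openstack_compute_instance_v2" "worker" {
  name        = "worker-1"
  image_name  = "Ubuntu 24.04"
  flavor_name = "d2-4"
  key_pair    = "my-keypair" # an existing SSH key pair is assumed

  network {
    name = "Ext-Net" # OVHcloud's public network
  }
}
```

`terraform plan` then shows the actions Terraform would take to reconcile the declared state with the cloud account.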

All the material can be found on the Moodle module dedicated to this class.
## OVH

### Tutorial activities
[OVHcloud](https://www.ovhcloud.com/en/) is a European cloud service provider offering a wide range of infrastructure solutions, including virtual machines, dedicated servers, web hosting, and public/private cloud services. Known for its strong data privacy practices and competitive pricing, OVHcloud operates its own data centers and global fiber network, providing scalable and secure cloud solutions.

Students have to choose a system with micro-services to apply some DevOps-related tools on it;
if they cannot think of such a system/project, they can go to the [doodle](https://github.com/barais/doodlestudent) GitHub page and use it.
You can also find a "detailed" pull request to launch the application in "dev mode".
This is the kind of pull request that is expected to be __sent on THIS repo__ for the evaluation of your technical realisation.
We chose OVHcloud because it is French and because we get a €200 credit to try different things with it. The web interface is quite convenient.

## Configuration

We followed [this](https://help.ovhcloud.com/csm/fr-public-cloud-compute-terraform?id=kb_article_view&sysparm_article=KB0050792) OVH tutorial to set up Terraform with OVHcloud.

When using Terraform, we grant it access to perform actions (creating a network, a VM, a bucket, ...) under our cloud provider account. We therefore need to give identification tokens to the Terraform CLI.

<a href="https://api.ovh.com/createToken/?GET=/*&POST=/*&PUT=/*&DELETE=/*">
<img src="./assets/ovhKey.png" alt="drawing" width="200"/>
</a>
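The `.env` file (created from `.template.env` in the commands below) is not shown in this diff; a plausible shape, assuming it exports the environment variables the OVH Terraform provider reads, with placeholder values:

```sh
# Hypothetical .env contents; the actual keys come from api.ovh.com/createToken
export OVH_ENDPOINT="ovh-eu"
export OVH_APPLICATION_KEY="xxxxxxxxxxxx"
export OVH_APPLICATION_SECRET="xxxxxxxxxxxx"
export OVH_CONSUMER_KEY="xxxxxxxxxxxx"
```

Sourcing this file makes the tokens available to the Terraform CLI without committing them to the repository (hence `.env` in `.gitignore`).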

We can then deploy our infrastructure with the following commands:

```sh
# Enter a bash compliant shell

# Create workspace
terraform workspace new test_terraform

# Create .env file using .template.env and https://api.ovh.com/createToken

# Source .env
. .env
# Source OpenRC variable from openrc.sh
source ./openrc.sh

# Init terraform project
terraform init
terraform plan

# Create infra
terraform apply

# Delete infra (Optional)
terraform destroy

# Print available Openstack Images
openstack image list --public

# Print available Openstack Flavors (virtual machine type)
openstack flavor list

# Generate Ansible inventory
terraform output -json | jq -r .ansible_inventory.value > ../ansible/environments/production/hosts
```
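The inventory-generation step extracts a pre-rendered string from the Terraform outputs. Assuming an `ansible_inventory` output of that shape, the extraction can be checked locally against a stubbed `terraform output -json` (the sample value below is hypothetical):

```sh
# Hypothetical stub of what `terraform output -json` would print
tf_json='{"ansible_inventory":{"value":"[master]\n1.2.3.4 ansible_user=ubuntu"}}'

# `jq -r` prints the raw string, so the \n escapes in the JSON value
# become real newlines, i.e. a ready-to-use Ansible inventory
echo "$tf_json" | jq -r .ansible_inventory.value
```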

# Systems Configuration using Ansible

We use [Ansible](https://github.com/ansible/ansible) to configure our machines.

We assign three types of roles to them:
- Common: for all the machines
- Master: for the master node
- Worker: for the worker nodes

We also created a production environment.

To deploy our configuration to all the instances we use the following command:
```sh
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i ansible/environments/production/hosts ansible/playbook.yml
```

This will install containerd and Kubernetes and configure the cluster. It will then deploy the application's Kubernetes configuration.

# Building the doodle application using Nix

We started using Nix to build the application reproducibly. Thanks to Nix, we managed to build the application and Docker images that can be deployed on a Kubernetes cluster or a Docker instance. We made two Docker images: one for the backend and one for the frontend.
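A minimal sketch of how such an image can be produced with `pkgs.dockerTools` (the `doodle-api` package name and its binary path are assumptions about our flake, not the actual code):

```nix
# Hedged sketch: assuming a `doodle-api` package exposing bin/doodle-api,
# `nix build .#doodle-api-docker` would produce a loadable image tarball
{ pkgs, doodle-api }:
pkgs.dockerTools.buildImage {
  name = "doodle-api";
  tag = "latest";
  config = {
    Cmd = [ "${doodle-api}/bin/doodle-api" ];
    ExposedPorts = { "8080/tcp" = { }; };
  };
}
```

The resulting tarball can be imported into containerd with `ctr images import`, which is what the Ansible deploy role does.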

# Kubernetes

We managed to deploy our application on a Kubernetes cluster locally using minikube. You can find the relevant files and commands in the kube/ folder.

# Configuration Pitfalls

We experimented with creating a virtual machine for each service of the application, but getting internet access on those VMs without editing the host network configuration was not possible, although we did get the application into a "working state". We then thought we could easily install the VMs natively on the instances, but found out that we couldn't use a custom OS image on OVHcloud. We nevertheless successfully managed to "corrupt" the image of the instance to get NixOS installed on it (see terraform/nixos_deploy.fr).

Since we were short on time and already had a working local Kubernetes cluster, we decided it would be easier to install Kubernetes on the instances and deploy the application there, doing the configuration with Ansible since we knew how to use it.

After more than 8 hours of trying to get Kubernetes to work on the instances, it kept crashing: not our services, but the kube-system pods. We could not even use kubectl. We tried using containerd and docker-cri, but still nothing worked. We decided to give up.

Deployment should only be done using the commands in this file.

# Conclusion

Apart from Kubernetes not working on the instances, we managed to get the application working on the local cluster and to deploy instances with Terraform without issues, so we decided to consider the project as working, even though it is not fully: the configuration of the Ubuntu instances using Ansible is not working. We might seriously consider using nixos-anywhere to deploy the application; we have some starting-point configurations for the Kubernetes master and worker nodes in the nix/systems/ directory.

The Terraform configuration works really well, and OpenStack is a really nice instance-provisioning interface on top of OVHcloud and others. It is very reliable.

# Future Work

- Use Nix to configure the Kubernetes master and worker nodes; it might work better. Alternatively, just fix the Ansible configuration, although the errors and documentation are really not helpful.
- Manage secrets using a vault: sops-nix if using Nix, or ansible-vault if using Ansible.
- Handle SSL certificates using nginx in the Kubernetes cluster. This would require an IngressController and cert-manager, which are complex to set up without Helm; Helm seems to be the only easy way to set up cert-manager and nginx as an IngressController for the whole cluster.

# PS:

After some more work (7+ hours after the deadline), we managed to install and deploy Kubernetes correctly using NixOS. We managed to corrupt the OVH VMs to do so. But since it is late, we cannot really fix things now.

<!-- Setting up new machines is time consuming and can become complicated when it needs to be done entirely remotely.
We could use [Ansible](https://github.com/ansible/ansible) or [Chef](https://github.com/chef/chef), which would allow us to create "cookbooks" that specify the commands and steps to configure our systems. These tools are the most widely used in the industry for configuration management, but they still have some issues with reproducibility and do not prevent configuration drift, as they are not immutable.

It all boils down to "Declarative vs Imperative Configuration".

<img src="./assets/AnsibleNixosMeme.png" alt="drawing" width="200"/>

We chose to go the more declarative way and use Nix/NixOS for the configuration of the systems.

[Nix/NixOS](https://nixos.org/) is a powerful tool to create and build reproducible software systems. We can build our project and configure the systems our software runs on using the same configuration language, which is enjoyable and convenient. -->

# Annexes

## The Doodle App

The doodle application uses multiple services: an SQL server, Etherpad, and a mail server.

Here is a description of the doodle app architecture and its dependencies:
- Doodle Frontend (Angular 10/TypeScript)
  - Static files served by an httpd server
  - Port 3000
  - Called by the user (must be publicly available):
    - Doodle Backend at the `http://doodle-api:8080` endpoint
    - Etherpad at `http://etherpad:9001`
- Doodle Backend (Quarkus/Java JDK 11)
  - Port 8080
  - Calls directly (can be internal):
    - Database at the `jdbc:mysql://mysql:3306` endpoint
    - Mail server at `http://mail:2525`
- Database (MariaDB)
  - Port 3306
- Etherpad 1.8.6
  - Port 9001
- Mail Server
  - Port 2525

# Services Deployment using Kubernetes
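As a starting point, here is a hedged sketch of the backend's Deployment and Service (only the namespace, image tag, port, and DNS name are taken from our configuration; everything else is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: doodle-api
  namespace: doodle
spec:
  replicas: 1
  selector:
    matchLabels:
      app: doodle-api
  template:
    metadata:
      labels:
        app: doodle-api
    spec:
      containers:
        - name: doodle-api
          # image tag matches the one imported into containerd
          # by the master-deploy Ansible role
          image: doordle.ovh/prod/doodle-api:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  # the Service name provides the `http://doodle-api:8080` in-cluster DNS name
  name: doodle-api
  namespace: doodle
spec:
  selector:
    app: doodle-api
  ports:
    - port: 8080
      targetPort: 8080
```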
6 changes: 6 additions & 0 deletions ansible/environments/production/hosts
@@ -0,0 +1,6 @@
[master]
162.19.100.84 ansible_user=ubuntu ansible_ssh_private_key_file=/home/titouan/.ssh/Olenixen_id_ed25519
[workers]
51.75.185.185 ansible_user=ubuntu ansible_ssh_private_key_file=/home/titouan/.ssh/Olenixen_id_ed25519
51.75.185.127 ansible_user=ubuntu ansible_ssh_private_key_file=/home/titouan/.ssh/Olenixen_id_ed25519

23 changes: 23 additions & 0 deletions ansible/playbook.yml
@@ -0,0 +1,23 @@
- name: Kubernetes cluster setup
  hosts: all
  become: yes
  roles:
    - common

- name: Kubernetes master init
  hosts: master
  become: yes
  roles:
    - master

- name: Kubernetes worker join
  hosts: workers
  become: yes
  roles:
    - worker

- name: Kubernetes deploy yaml
  hosts: master
  become: yes
  roles:
    - master-deploy
101 changes: 101 additions & 0 deletions ansible/roles/common/tasks/main.yml
@@ -0,0 +1,101 @@
- name: Install Common packages
  apt:
    name:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg
      - software-properties-common
      - python3-pip
      - python3-setuptools
    state: present
    update_cache: yes

- name: Use the k8s apt key
  get_url:
    url: https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key
    dest: /etc/apt/keyrings/kubernetes-apt-keyring.asc
    mode: "0644"

- name: Install k8s apt sources
  apt_repository:
    repo: deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.asc] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /
    state: present

- name: Install Kubernetes packages
  apt:
    name:
      - kubelet
      - kubeadm
      - kubectl
    state: present
    update_cache: yes

- name: Use the docker apt key
  get_url:
    url: https://download.docker.com/linux/ubuntu/gpg
    dest: /etc/apt/keyrings/docker-apt-keyring.asc
    mode: "0644"

- name: Install docker apt sources
  apt_repository:
    repo: deb [signed-by=/etc/apt/keyrings/docker-apt-keyring.asc] https://download.docker.com/linux/ubuntu oracular stable
    state: present

- name: Update apt and install docker-ce
  apt:
    name: docker-ce
    state: latest
    update_cache: true

- name: Add kernel modules required by containerd
  modprobe:
    name: "{{ item }}"
    state: present
    persistent: present
  loop:
    - overlay
    - br_netfilter

- name: Install containerd
  apt:
    name: containerd
    state: present

- name: Create containerd directory
  file:
    path: /etc/containerd
    state: directory

- name: Create default containerd config
  shell: containerd config default > /etc/containerd/config.toml

- name: Copy containerd config to remote
  ansible.builtin.copy:
    src: "{{ playbook_dir }}/../kube/containerd_config.toml"
    dest: /etc/containerd/config.toml
    mode: '0644'

- name: Enable containerd
  service:
    name: containerd
    enabled: yes

- name: Configure kubernetes networking
  sysctl:
    sysctl_file: /etc/sysctl.d/99-kubernetes-cri.conf
    name: "{{ item.name }}"
    value: "{{ item.value }}"
  loop:
    - { name: 'net.ipv4.ip_forward', value: '1' }
    - { name: 'net.bridge.bridge-nf-call-iptables', value: '1' }
    - { name: 'net.bridge.bridge-nf-call-ip6tables', value: '1' }

- name: Enable kubelet
  service:
    name: kubelet
    enabled: yes

- name: Restart containerd
  service:
    name: containerd
    state: restarted
64 changes: 64 additions & 0 deletions ansible/roles/master-deploy/tasks/main.yml
@@ -0,0 +1,64 @@
- name: Ensure /home/ubuntu/images directory exists
  file:
    path: /home/ubuntu/images
    state: directory
    mode: '0744'
    owner: ubuntu

- name: Generate front docker image
  delegate_to: localhost
  register: gen_output
  become: false # ensure no sudo on the control host
  ansible.builtin.command: |
    nix build {{ playbook_dir }}/../nix/.#doodle-front-docker --print-out-paths

- name: Copy front docker image to remote
  ansible.builtin.copy:
    src: "{{ gen_output.stdout }}"
    dest: /home/ubuntu/images/doodle-front-docker
    mode: '0644'

- name: Load front docker image into containerd with ctr
  ansible.builtin.shell: |
    ctr -n k8s.io images import /home/ubuntu/images/doodle-front-docker
    ctr -n k8s.io images tag docker.io/library/doodle-front:latest doordle.ovh/prod/doodle-front:latest

- name: Generate api docker image
  delegate_to: localhost
  register: gen_output
  become: false # ensure no sudo on the control host
  ansible.builtin.command: |
    nix build {{ playbook_dir }}/../nix/.#doodle-api-docker --print-out-paths

- name: Copy api docker image to remote
  ansible.builtin.copy:
    src: "{{ gen_output.stdout }}"
    dest: /home/ubuntu/images/doodle-api-docker
    mode: '0644'

- name: Load api docker image into containerd with ctr
  ansible.builtin.shell: |
    ctr -n k8s.io images import /home/ubuntu/images/doodle-api-docker
    ctr -n k8s.io images tag docker.io/library/doodle-api:latest doordle.ovh/prod/doodle-api:latest

- name: Copy kubernetes yaml
  ansible.builtin.copy:
    src: ../../../../kube/
    dest: /home/ubuntu/kube
    owner: ubuntu
    group: ubuntu
    mode: '0644'

- name: Create namespace and apply manifests
  become_user: ubuntu
  shell: |
    kubectl create namespace doodle
    kubectl apply -f kube/doodle-api.yaml
    # kubectl create secret generic etherpad-apikey --from-file=APIKEY.txt=kube/APIKEY.txt -n doodle
    # kubectl apply -f kube/etherpad.yaml
    # kubectl apply -f kube/mysql.yaml
    # kubectl apply -f kube/nginx.yaml
    # kubectl apply -f kube/doodle-frontend.yaml

22 changes: 22 additions & 0 deletions ansible/roles/master/tasks/main.yml
@@ -0,0 +1,22 @@
- name: Initialize master
  shell: kubeadm init --pod-network-cidr=10.244.0.0/16
  register: kubeadm_output
  args:
    creates: /etc/kubernetes/admin.conf

- name: Save join command
  shell: |
    kubeadm token create --print-join-command > /tmp/kubeadm_join.sh
  args:
    executable: /bin/bash

- name: Set up kubeconfig
  shell: |
    mkdir -p /home/ubuntu/.kube
    cp -i /etc/kubernetes/admin.conf /home/ubuntu/.kube/config
    chown 1000:1000 /home/ubuntu/.kube/config

- name: Install Flannel CNI
  become_user: ubuntu
  shell: |
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml