Terraform Libvirt Stack for RKE2 — Workshop Guide

Table of Contents

  • Introduction
  • Requirements & Host Setup
  • Terraform: Create / Destroy Stack
  • Accessing the RKE2 Cluster
  • Install KubeVirt, CDI, and Client Tools
  • Storage with Longhorn
  • Optional: Cert-Manager & Rancher
  • Networking: Routing, MetalLB, Services
  • VM Examples
  • vCluster Installation and Integration
  • Create and Access a vCluster
  • Cleanup

Introduction

This repository provisions a local RKE2 (Kubernetes) cluster on libvirt/KVM and demonstrates running KubeVirt VMs (Linux, Windows, ARM64) with CDI for image import, Longhorn for storage, and optional Rancher.

At the end, we extend the environment by deploying a vCluster — a virtual Kubernetes cluster inside RKE2 — to demonstrate multi-cluster orchestration.

Return to Top


Requirements & Host Setup

AppArmor allowance for custom libvirt storage pools (Ubuntu)

Edit /etc/apparmor.d/libvirt/TEMPLATE.qemu and add your custom storage path (e.g. /home/user/libvirt_clusters):

# Allow access to custom storage pool
"/home/user/libvirt_clusters/" r,
"/home/user/libvirt_clusters/**" rwk,

Reload AppArmor after editing.
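
One way to do that (a sketch; the service names assume a stock Ubuntu host, where libvirt regenerates per-VM profiles from the template):

sudo systemctl reload apparmor
sudo systemctl restart libvirtd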

Download Ubuntu cloud image (RKE2 nodes)

Use Ubuntu 22.04 Server cloud image:

wget https://cloud-images.ubuntu.com/releases/jammy/release/ubuntu-22.04-server-cloudimg-amd64.img

Return to Top


Terraform: Create / Destroy Stack

Create the stack:

terraform init
terraform validate
terraform plan
terraform apply -auto-approve
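
Once the apply completes, the new domains should be visible to libvirt on the host, for example:

virsh list --all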

Destroy the stack:

terraform apply -destroy -auto-approve

Return to Top


Accessing the RKE2 Cluster

SSH to the first master, retrieve kubeconfig, and use it locally:

# 1) SSH into the master VM
ssh -i id_ed25519.priv ubuntu@192.168.122.10

# 2) On the VM
sudo su -
cp /etc/rancher/rke2/rke2.yaml /home/ubuntu
chown ubuntu:ubuntu /home/ubuntu/rke2.yaml
exit

# 3) Back on host
scp -i id_ed25519.priv ubuntu@192.168.122.10:/home/ubuntu/rke2.yaml .
export KUBECONFIG=$(pwd)/rke2.yaml
vi rke2.yaml   # replace the default 127.0.0.1 server address with the master IP, e.g. 192.168.122.10
kubectl get no

Return to Top


Install KubeVirt, CDI, and Client Tools

KubeVirt

export RELEASE=$(curl -s https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml
kubectl -n kubevirt wait kv kubevirt --for condition=Available
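
CDI

CDI (Containerized Data Importer) backs the virtctl image-upload step used later. A minimal install sketch from the upstream release manifests; the release-discovery line is only one way to find the latest tag and may need adjusting:

export CDI_VERSION=$(curl -s https://api.github.com/repos/kubevirt/containerized-data-importer/releases/latest | grep '"tag_name"' | cut -d '"' -f 4)
kubectl apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-operator.yaml
kubectl apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-cr.yaml
kubectl get pods -n cdi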

virtctl & kubectl virt

Install virtctl matching your KubeVirt version, or use:

kubectl krew install virt
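
The standalone binary can also be downloaded to match the deployed version (a sketch, assuming an amd64 Linux workstation and the ${RELEASE} variable exported above):

curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/virtctl-${RELEASE}-linux-amd64
chmod +x virtctl
sudo mv virtctl /usr/local/bin/
virtctl version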

Return to Top


Storage with Longhorn

On each RKE2 node:

sudo apt install open-iscsi -y
sudo systemctl enable --now iscsid

Install Longhorn:

helm repo add longhorn https://charts.longhorn.io
helm repo update
helm upgrade --install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace \
  --set defaultSettings.defaultDataPath="/var/lib/longhorn" \
  --set service.ui.type=NodePort
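
A quick sanity check once the chart settles; longhorn should also show up as a StorageClass:

kubectl -n longhorn-system get pods
kubectl get storageclass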

Return to Top


Optional: Cert-Manager & Rancher

Cert-Manager

helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml
helm upgrade -i cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.6.1

Rancher

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm upgrade -i rancher rancher-latest/rancher \
  --create-namespace --namespace cattle-system \
  --set hostname=rancher.local \
  --set bootstrapPassword=admin123 \
  --set replicas=1
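
To follow the rollout, and to make rancher.local resolvable from the workstation (the node IP below is only an example):

kubectl -n cattle-system rollout status deploy/rancher
echo "192.168.122.10 rancher.local" | sudo tee -a /etc/hosts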

Return to Top


Networking: Routing, MetalLB, Services

Add host routes to Service CIDR

sudo ip route add 10.43.0.0/16 via 192.168.122.10

Install MetalLB

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/main/config/manifests/metallb-native.yaml
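
The controller and speakers (including the admission webhook) should be ready before the pool below is applied, for example:

kubectl -n metallb-system wait --for=condition=Ready pod --all --timeout=180s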

Pool configuration:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: vm-ip-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.122.2-192.168.122.9
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: vm-advertisement
  namespace: metallb-system
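
Save the two manifests to a file (the name below is only an example) and apply them. The VM image upload later targets https://192.168.122.2, so one option is to let MetalLB hand out an address from this pool to the CDI upload proxy by switching its service to LoadBalancer (a sketch, assuming CDI runs in the default cdi namespace); use whatever EXTERNAL-IP gets assigned:

kubectl apply -f metallb-pool.yaml
kubectl -n cdi patch svc cdi-uploadproxy -p '{"spec":{"type":"LoadBalancer"}}'
kubectl -n cdi get svc cdi-uploadproxy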

Return to Top


VM Examples

Linux VM

kubectl create ns vm-lin-kvm

Upload image:

virtctl image-upload pvc ubuntu24-pvc \
  --size=16Gi \
  --image-path=/var/lib/libvirt/images/ubuntu24.04-2.qcow2 \
  --uploadproxy-url=https://192.168.122.2 \
  --storage-class=longhorn \
  --access-mode=ReadWriteOnce \
  --insecure --wait-secs=600 \
  --namespace vm-lin-kvm
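
The PVC should report Bound once the upload finishes:

kubectl -n vm-lin-kvm get pvc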

Create VM:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntu24-kvm
  namespace: vm-lin-kvm
spec:
  running: false
  template:
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 4Gi
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: ubuntu24-pvc
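
Save the manifest (the file name below is only an example) and apply it; since spec.running is false the VM must be started explicitly:

kubectl apply -f ubuntu24-kvm.yaml
virtctl start ubuntu24-kvm -n vm-lin-kvm
kubectl -n vm-lin-kvm get vmi
virtctl console ubuntu24-kvm -n vm-lin-kvm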

Return to Top


vCluster Installation and Integration

The final step of the workshop introduces vCluster, a virtual Kubernetes cluster running inside the RKE2 host cluster.
It allows multi-tenant or isolated environments while leveraging the same underlying infrastructure.

Install vCluster CLI

Install the latest vCluster CLI from Loft Labs:

Linux (x86_64); macOS users should grab the corresponding darwin asset instead:

curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
chmod +x vcluster
sudo mv vcluster /usr/local/bin/

Verify:

vcluster --version

Install vCluster via Helm

Add the Helm repo and update:

helm repo add loft-sh https://charts.loft.sh
helm repo update

Create a namespace for virtual clusters:

kubectl create ns vcluster

Install vCluster in the RKE2 host cluster:

helm install vcluster loft-sh/vcluster --namespace vcluster

Wait for components to be ready:

kubectl get pods -n vcluster

Return to Top


Create and Access a vCluster

You can create your first virtual cluster directly with the CLI:

vcluster create dev-vcluster -n vcluster --expose

This command:

  • Creates the vcluster namespace if it does not already exist
  • Deploys the control plane pods
  • Exposes the vCluster API endpoint (via LoadBalancer or NodePort)
  • Generates a kubeconfig automatically
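
Deployed virtual clusters can be listed from the host context, for example:

vcluster list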

Retrieve the kubeconfig

vcluster kubeconfig dev-vcluster -n vcluster > dev-vcluster.yaml
export KUBECONFIG=dev-vcluster.yaml
kubectl get ns

You are now inside your virtual cluster.

To return to the RKE2 host cluster:

export KUBECONFIG=rke2.yaml

Integrate vCluster with RKE2 and KubeVirt

You can now deploy workloads, pods, or even nested KubeVirt VMs inside the vCluster, leveraging the same storage (Longhorn) and networking setup (MetalLB).

Example deployment (with KUBECONFIG pointing at the vCluster):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
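
vCluster syncs Services down to the host cluster, so MetalLB can assign the external address; still inside the vCluster context:

kubectl get svc nginx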

Demonstrate Access and Usage

List both host and virtual clusters:

# On host
kubectl get ns

# Inside vCluster
export KUBECONFIG=dev-vcluster.yaml
kubectl get pods -A

Connect to the vCluster directly (optional):

vcluster connect dev-vcluster -n vcluster

This command establishes a local proxy to the vCluster API server and points your kubeconfig context at it.

Return to Top


Cleanup

To remove vCluster:

vcluster delete dev-vcluster -n vcluster
kubectl delete ns vcluster

Then clean up your RKE2 environment:

kubectl delete vm,svc,pvc -A --all
terraform apply -destroy -auto-approve

Return to Top

