- Introduction
- Requirements & Host Setup
- Terraform: Create / Destroy Stack
- Accessing the RKE2 Cluster
- Install KubeVirt, CDI, and Client Tools
- Storage with Longhorn
- Optional: Cert-Manager & Rancher
- Networking: Routing, MetalLB, Services
- VM Examples
- Optional: Node Exporter on Ubuntu VM
- vCluster Installation and Integration
- Cleanup
This repository provisions a local RKE2 (Kubernetes) cluster on libvirt/KVM and demonstrates running KubeVirt VMs (Linux, Windows, ARM64) with CDI for image import, Longhorn for storage, and optional Rancher.
At the end, we extend the environment by deploying a vCluster — a virtual Kubernetes cluster inside RKE2 — to demonstrate multi-cluster orchestration.
Edit /etc/apparmor.d/libvirt/TEMPLATE.qemu and add your custom storage path (e.g. /home/user/libvirt_clusters):
# Allow access to custom storage pool
"/home/user/libvirt_clusters/" r,
"/home/user/libvirt_clusters/**" rwk,Reload AppArmor after editing.
Use the Ubuntu 22.04 Server cloud image:
wget https://cloud-images.ubuntu.com/releases/jammy/release/ubuntu-22.04-server-cloudimg-amd64.img

Create the stack:
terraform init
terraform validate
terraform plan
terraform apply -auto-approve

Destroy the stack:
terraform apply -destroy -auto-approve

SSH to the first master, retrieve the kubeconfig, and use it locally:
# 1) SSH into the master VM
ssh -i id_ed25519.priv ubuntu@192.168.122.10
# 2) On the VM
sudo su -
cp /etc/rancher/rke2/rke2.yaml /home/ubuntu
chown ubuntu:ubuntu /home/ubuntu/rke2.yaml
exit
# 3) Back on host
scp -i id_ed25519.priv ubuntu@192.168.122.10:/home/ubuntu/rke2.yaml .
export KUBECONFIG=$(pwd)/rke2.yaml
vi rke2.yaml # (adjust server address if needed)
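# Non-interactive alternative (assumption: the kubeconfig still points at the
# RKE2 default https://127.0.0.1:6443 and the first master is 192.168.122.10)
sed -i 's#https://127.0.0.1:6443#https://192.168.122.10:6443#' rke2.yaml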
kubectl get no

Install KubeVirt:

export RELEASE=$(curl -s https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml
kubectl -n kubevirt wait kv kubevirt --for condition=Available
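The VM example later in this guide uploads a disk image through the CDI upload proxy, so CDI has to be installed alongside KubeVirt. A minimal sketch using the upstream containerized-data-importer release manifests (CDI_VERSION is just a local variable name; pin it to a specific release if you prefer):

# Look up the latest CDI release tag
export CDI_VERSION=$(curl -s https://api.github.com/repos/kubevirt/containerized-data-importer/releases/latest | grep -m1 '"tag_name"' | cut -d '"' -f 4)

# Install the CDI operator and its custom resource
kubectl apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-operator.yaml
kubectl apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-cr.yaml

# Wait until CDI reports Available
kubectl wait cdi cdi --for condition=Available --timeout=10m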
Install virtctl matching your KubeVirt version, or use:

kubectl krew install virt

On each RKE2 node:
sudo apt install open-iscsi -y
sudo systemctl enable --now iscsid

Install Longhorn:
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm upgrade --install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace \
  --set defaultSettings.defaultDataPath="/var/lib/longhorn" \
  --set service.ui.type=NodePort
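Optionally verify the rollout and look up the UI NodePort (longhorn-frontend is the stock UI Service):

kubectl -n longhorn-system get pods
# The UI is published as a NodePort; note the port and browse to any node IP
kubectl -n longhorn-system get svc longhorn-frontend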
Install cert-manager (required by Rancher):

helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml
helm upgrade -i cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace
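Before moving on to Rancher, it is worth waiting for cert-manager to become ready (and, since the CRDs above are pinned to v1.6.1, consider adding --version v1.6.1 to the chart install so chart and CRDs match):

kubectl -n cert-manager wait deploy --all --for condition=Available --timeout=300s
kubectl -n cert-manager get pods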
Install Rancher:

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm upgrade -i rancher rancher-latest/rancher \
  --create-namespace --namespace cattle-system \
  --set hostname=rancher.local \
  --set bootstrapPassword=admin123 \
  --set replicas=1

To reach cluster service IPs (such as the Rancher UI) from the host, add a route to the RKE2 service CIDR via the first master:

ip route add 10.43.0.0/16 via 192.168.122.10

Install MetalLB:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/main/config/manifests/metallb-native.yaml

Pool configuration:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: vm-ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.122.2-192.168.122.9
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: vm-advertisement
  namespace: metallb-system
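Apply the pool (the filename below is only an example). Because the upload step further down targets https://192.168.122.2, the CDI upload proxy must be reachable on an address from this pool; patching the stock cdi-uploadproxy Service to type LoadBalancer is one way to achieve that:

kubectl apply -f metallb-pool.yaml

# Let MetalLB assign the CDI upload proxy an IP from the pool (assumes CDI runs in the cdi namespace)
kubectl -n cdi patch svc cdi-uploadproxy -p '{"spec":{"type":"LoadBalancer"}}'
kubectl -n cdi get svc cdi-uploadproxy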
Create a namespace for the Linux VM example:

kubectl create ns vm-lin-kvm

Upload image:

virtctl image-upload pvc ubuntu24-pvc \
  --size=16Gi \
  --image-path=/var/lib/libvirt/images/ubuntu24.04-2.qcow2 \
  --uploadproxy-url=https://192.168.122.2 \
  --storage-class=longhorn \
  --access-mode=ReadWriteOnce \
  --insecure --wait-secs=600 \
  --namespace vm-lin-kvm

Create VM:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntu24-kvm
  namespace: vm-lin-kvm
spec:
  running: false
  template:
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 4Gi
      volumes:
      - name: rootdisk
        persistentVolumeClaim:
          claimName: ubuntu24-pvc
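The manifest sets running: false, so the VM must be started explicitly once created (the filename below is an example):

kubectl apply -f ubuntu24-vm.yaml
virtctl start ubuntu24-kvm -n vm-lin-kvm
# Attach to the serial console
virtctl console ubuntu24-kvm -n vm-lin-kvm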
The final step of the workshop introduces vCluster, a virtual Kubernetes cluster running inside the RKE2 host cluster. It allows multi-tenant or isolated environments while leveraging the same underlying infrastructure.
Install the latest vCluster CLI from Loft Labs:
Linux (x86_64):
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
chmod +x vcluster
sudo mv vcluster /usr/local/bin/

Verify:
vcluster --version

Add the Helm repo and update:
helm repo add loft-sh https://charts.loft.sh
helm repo update

Create a namespace for virtual clusters:
kubectl create ns vcluster

Install vCluster in the RKE2 host cluster:
helm install vcluster loft-sh/vcluster --namespace vcluster

Wait for components to be ready:
kubectl get pods -n vcluster

You can create your first virtual cluster directly with the CLI:
vcluster create dev-vcluster -n vcluster --expose

This command:
- Creates the namespace vcluster (if it does not already exist)
- Deploys the control plane pods
- Exposes the vCluster API endpoint (via LoadBalancer or NodePort)
- Generates a kubeconfig automatically
Retrieve the kubeconfig and switch to the virtual cluster:

vcluster kubeconfig dev-vcluster -n vcluster > dev-vcluster.yaml
export KUBECONFIG=dev-vcluster.yaml
kubectl get ns

You are now inside your virtual cluster.
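A quick sanity check from inside the virtual cluster (with default sync settings, vCluster shows its own namespaces and virtual node objects rather than the host's):

kubectl config current-context
kubectl get nodes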
To return to the RKE2 host cluster:
export KUBECONFIG=rke2.yaml

You can now deploy workloads, pods, or even nested KubeVirt VMs inside the vCluster, leveraging the same storage (Longhorn) and networking setup (MetalLB).
Example deployment:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
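If the deployment was created with KUBECONFIG pointing at dev-vcluster, the Service is synced down to the host cluster, where MetalLB assigns it an external IP from the pool above. A way to check this (sketch):

# Inside the vCluster: the external IP appears once MetalLB has assigned one
kubectl get svc nginx

# On the host: the synced Service lives in the vcluster namespace under a translated name
kubectl --kubeconfig rke2.yaml get svc -n vcluster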
List both host and virtual clusters:

# On host
kubectl get ns
# Inside vCluster
export KUBECONFIG=dev-vcluster.yaml
kubectl get pods -A

Connect to the vCluster with the CLI (optional):
vcluster connect dev-vcluster -n vcluster

This command establishes a local proxy to the virtual cluster's API server and switches your kubeconfig context to it.
To remove vCluster:
vcluster delete dev-vcluster -n vcluster
kubectl delete ns vcluster

Then clean up your RKE2 environment:
kubectl delete vm,svc,pvc -A --all
terraform apply -destroy -auto-approve