phycoforce/home-ops

πŸš€ My Home Operations Repository 🚧

... managed with Flux, Renovate, and GitHub Actions πŸ€–



Overview

This monorepository is for my home Kubernetes clusters. I try to adhere to Infrastructure as Code (IaC) and GitOps practices using tools like Kubernetes, Flux, Renovate, and GitHub Actions.

The purpose here is to learn Kubernetes while practicing GitOps.


β›΅ Kubernetes

My Kubernetes cluster is deployed with Talos. It is a low-power, hyper-converged cluster which runs all my workloads. I have a separate NAS with ZFS for NFS/SMB shares, bulk file storage, and backups.

There is a template over at onedr0p/cluster-template if you want to try and follow along with some of the practices I use here.

Core Components

  • actions-runner-controller: self-hosted GitHub Actions runners
  • cert-manager: creates SSL certificates for services in my cluster
  • cilium: eBPF-based networking for my workloads
  • cloudflared: enables secure Cloudflare access to my routes
  • external-dns: automatically syncs DNS records from my cluster ingresses to a DNS provider
  • external-secrets: manages Kubernetes secrets using 1Password
  • rook-ceph: cloud-native distributed block storage for Kubernetes
  • sops: manages secrets for Talos, which are committed to Git
  • spegel: stateless cluster-local OCI registry mirror
  • envoy-gateway: Gateway API management for my HTTPRoutes
  • volsync: backup and recovery of persistent volume claims
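
As a sketch of how external-secrets ties into 1Password, a hypothetical ExternalSecret might look like the following. The store and item names here are illustrative assumptions, not taken from this repository:

```yaml
# Illustrative ExternalSecret: external-secrets pulls an item from a
# 1Password-backed store and materializes it as a Kubernetes Secret.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secret
  namespace: default
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: onepassword   # hypothetical store name backed by 1Password
  target:
    name: app-secret    # the Kubernetes Secret created and kept in sync
  dataFrom:
    - extract:
        key: app-login  # hypothetical 1Password item to pull fields from
```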

GitOps

Flux watches the clusters in my kubernetes folder (see Directories below) and makes changes to my clusters based on the state of my Git repository.

The way Flux works for me here is that it recursively searches the kubernetes/${cluster}/apps folder until it finds the top-most kustomization.yaml in each directory, then applies all the resources listed in it. That kustomization.yaml will generally only have a namespace resource and one or many Flux kustomizations (ks.yaml). Under the control of those Flux kustomizations there will be a HelmRelease or other resources related to the application which will be applied.
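
This flow can be sketched with a hypothetical app; the names and paths below are illustrative, not taken from this repository:

```yaml
# kubernetes/apps/default/kustomization.yaml (hypothetical)
# Top-most kustomization: a namespace plus one or more Flux Kustomizations
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ./namespace.yaml
  - ./echo/ks.yaml
---
# kubernetes/apps/default/echo/ks.yaml (hypothetical)
# Flux Kustomization that applies the app's HelmRelease and related resources
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: echo
  namespace: flux-system
spec:
  interval: 30m
  path: ./kubernetes/apps/default/echo/app
  prune: true
  sourceRef:
    kind: GitRepository
    name: home-ops
```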

Renovate watches my entire repository looking for dependency updates; when they are found, a PR is automatically created. When those PRs are merged, Flux applies the changes to my cluster.

Directories

This Git repository contains the following directories under the kubernetes folder.

πŸ“ kubernetes
β”œβ”€β”€ πŸ“ apps       # applications
β”œβ”€β”€ πŸ“ components # re-useable kustomize components
└── πŸ“ flux       # flux system configuration

☁️ Cloud Dependencies

While most of my infrastructure and workloads are self-hosted, I do rely upon the cloud for certain key parts of my setup. This saves me from having to worry about two things: (1) dealing with chicken/egg scenarios, and (2) services I critically need whether my cluster is online or not.

The alternative solution to these two problems would be to host a Kubernetes cluster in the cloud and deploy applications like HCVault, Vaultwarden, ntfy, and Gatus. However, maintaining another cluster and monitoring another group of workloads is a lot more time and effort than I am willing to put in.

| Service         | Use                                                             | Cost    |
|-----------------|-----------------------------------------------------------------|---------|
| 1Password       | Secrets with External Secrets                                   | ~$80/yr |
| Cloudflare      | Domain, DNS, WAF and R2 bucket (S3-compatible endpoint)         | ~$30/yr |
| GitHub          | Hosting this repository and continuous integration/deployments  | Free    |
| Healthchecks.io | Monitoring internet connectivity and external facing applications | Free  |

Total: ~$9/mo

🌐 DNS

In my cluster there are two instances of ExternalDNS running: one syncs private DNS records to my RB5009 using the ExternalDNS webhook provider for Mikrotik, while the other syncs public DNS records to Cloudflare. This setup is managed by creating ingresses with two specific classes: internal for private DNS and external for public DNS. The external-dns instances then sync the DNS records to their respective platforms accordingly.
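
A minimal sketch of the two classes, assuming hypothetical hostnames and service names (none of these are taken from this repository):

```yaml
# Private route: the internal class is picked up by the ExternalDNS
# instance that syncs records to the RB5009 via the Mikrotik webhook.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-private
spec:
  ingressClassName: internal
  rules:
    - host: app.internal.example.com   # hypothetical private hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
---
# Public route: the external class is picked up by the ExternalDNS
# instance that syncs records to Cloudflare.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-public
spec:
  ingressClassName: external
  rules:
    - host: app.example.com            # hypothetical public hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```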


πŸ”§ Hardware

Main Kubernetes Cluster

| Name   | Device | CPU       | OS Disk    | Local Disk | Rook Disk  | RAM  | OS    | Purpose           |
|--------|--------|-----------|------------|------------|------------|------|-------|-------------------|
| Logos  | MS-01  | i9-13900H | 960GB NVMe | 960GB NVMe | 1.92TB U.2 | 96GB | Talos | k8s control-plane |
| Ontos  | MS-01  | i9-13900H | 960GB NVMe | 960GB NVMe | 1.92TB U.2 | 96GB | Talos | k8s control-plane |
| Pneuma | MS-01  | i9-13900H | 960GB NVMe | 960GB NVMe | 1.92TB U.2 | 96GB | Talos | k8s control-plane |

OS Disk: M.2 Samsung PM983 960GB
Local Disk: M.2 Micron 7450 Pro 960GB
Rook Disk: U.2 Samsung PM9A3 1.92TB

Total CPU: 42 Cores/60 Threads
Total RAM: 288GB

Supporting Hardware

| Name    | Device       | CPU      | OS Disk | Data Disk     | RAM  | OS            | Purpose            |
|---------|--------------|----------|---------|---------------|------|---------------|--------------------|
| Aionios | Custom Build | i5-10400 | 512GB   | 3x20TB RaidZ1 | 32GB | TrueNAS SCALE | NAS/NFS/Backup/ZFS |

Networking/UPS Hardware

| Device                   | Purpose              |
|--------------------------|----------------------|
| Mikrotik RB5009          | Network - Router     |
| Mikrotik CRS309-1G-8S+IN | Network - 10G Switch |
| Back-UPS RS 1600SI       | Server/Network UPS   |

🀝 Thanks

Big shout out to the cluster-template, and the Home Operations Discord community. Be sure to check out kubesearch.dev for ideas on how to deploy applications or inspiration on what you may deploy.
