This guide walks you through the entire process of setting up a local Kubernetes development platform from scratch on macOS ARM (Apple Silicon).
- Prerequisites
- OrbStack Installation
- Kubernetes Verification
- CLI Tools Installation
- Cloning the Project
- ArgoCD Bootstrap
- Installation Verification
- Accessing Services
- Common Issues and Solutions
| Requirement | Minimum | Notes |
|---|---|---|
| macOS | 13.0+ (Ventura) | Apple Silicon (M1/M2/M3/M4) |
| RAM | 8 GiB | 16 GiB recommended |
| CPU | 4 cores | Apple Silicon default |
| Disk | 10 GB free | For container images |
| Internet | Required | To pull Helm charts and images |
| Homebrew | Latest | Package manager |
```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
After installation, restart your terminal or run:
```bash
eval "$(/opt/homebrew/bin/brew shellenv)"
```
OrbStack is a lightweight Docker and Kubernetes runtime for macOS. It uses significantly fewer resources than Docker Desktop.
```bash
brew install --cask orbstack
```
- Open OrbStack from Spotlight or Launchpad
- Accept the license agreement
- OrbStack will start automatically (you'll see the OrbStack icon in the menu bar)
Kubernetes is enabled by default in OrbStack. If it's not active:
Method 1 — GUI:
- Click the OrbStack icon in the menu bar
- Go to Settings → Kubernetes
- Toggle Enable Kubernetes on
Method 2 — Terminal:
```bash
orb start k8s
```

```bash
# kubectl comes bundled with OrbStack
kubectl --context orbstack get nodes
```
Expected output:
```
NAME       STATUS   ROLES                  AGE   VERSION
orbstack   Ready    control-plane,master   1m    v1.31.x
```
Note: It may take 30-60 seconds for the node to reach `Ready` status. If you see `NotReady`, wait a moment.
To verify the cluster is healthy:
```bash
# Check that the active context is orbstack
kubectl config current-context
# → orbstack

# API server health check
kubectl get --raw /healthz
# → ok

# Check system pods
kubectl get pods -n kube-system
```
All system pods (coredns, kube-proxy, etc.) should be in the `Running` state.
OrbStack provides automatic DNS resolution for the `*.k8s.orb.local` domain:
```bash
nslookup test.k8s.orb.local
```
This command should return an IP address, which means you can use `*.k8s.orb.local` subdomains in your ingress definitions.
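For illustration, a hypothetical Ingress that relies on this wildcard domain might look like the following — the `my-app` names and the port are placeholders, not part of this project:

```yaml
# Hypothetical Ingress using an *.k8s.orb.local host (all names are placeholders)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-app
spec:
  ingressClassName: nginx
  rules:
    - host: my-app.k8s.orb.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```

Because OrbStack resolves the host to the cluster automatically, no `/etc/hosts` edits are needed.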
OrbStack comes with `kubectl` and `kustomize`. You'll additionally need the following tools:
```bash
# Helm — Kubernetes package manager
brew install helm

# kubeseal — Sealed Secrets encryption tool
brew install kubeseal

# kubeconform — Manifest validation tool (optional)
brew install kubeconform

# shellcheck — Shell script linter (optional)
brew install shellcheck
```
Verify the installations:
```bash
kubectl version --client --short 2>/dev/null || kubectl version --client
helm version --short
kustomize version
kubeseal --version
```
All commands should return version information.
```bash
git clone https://github.com/hbasria/specops-orbstack-argocd.git
cd specops-orbstack-argocd
```
Repository structure:
```
├── scripts/
│   ├── bootstrap.sh          # Main setup script
│   └── prerequisites.sh      # Tool check & installation
├── argocd/
│   ├── applications/         # ArgoCD Application definitions
│   ├── helm-values/          # Helm values for each component
│   └── projects/             # ArgoCD AppProject definition
├── kubernetes/
│   ├── cluster-config/       # Namespace and ClusterIssuer definitions
│   ├── namespace-template/   # Reusable project namespace template
│   └── apps/sample-app/      # Sample application
├── validation/
│   ├── pre-deploy/           # Pre-deployment validation
│   └── post-deploy/          # Post-deployment validation
└── docs/                     # Documentation
```
The bootstrap script runs in 6 phases and is idempotent — it's safe to re-run at any time.
```bash
./scripts/bootstrap.sh
```
The script performs the following operations in sequence:
| Phase | Operation | Description |
|---|---|---|
| 1/6 | Prerequisites | Checks for OrbStack, kubectl, helm, kustomize, kubeseal |
| 2/6 | Cluster Check | Node Ready, API Server /healthz, CoreDNS checks |
| 3/6 | ArgoCD Install | Installs ArgoCD 3.1.9 via Helm into the argocd namespace |
| 4/6 | AppProject | Creates the infrastructure AppProject |
| 5/6 | App-of-Apps | Applies the root Application and waits for sync |
| 6/6 | Validation | Runs ArgoCD, component, and ingress validations |
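The phase-1 check can be pictured as a simple loop over the required tools. This is a minimal sketch, not the project's actual `scripts/prerequisites.sh`:

```shell
# Illustrative sketch of a phase-1 style prerequisites check:
# report each required CLI found on PATH, flag any that are missing.
check_tools() {
  missing=0
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool"
      missing=1
    fi
  done
  return "$missing"
}

# The check only reads state, never mutates it — which is what makes re-runs safe.
check_tools kubectl helm kustomize kubeseal || echo "install the missing tools first"
```

The same convergence principle applies to the later phases: each one checks whether its target state already exists before acting, so interrupting and re-running the bootstrap never leaves the cluster worse off.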
```bash
# Standard installation
./scripts/bootstrap.sh

# Skip prerequisites check (faster re-runs)
./scripts/bootstrap.sh --skip-prereq

# Dry run (nothing is applied, validation only)
./scripts/bootstrap.sh --dry-run

# Verbose output (for debugging)
./scripts/bootstrap.sh --verbose
```
When the script completes successfully, you'll see:
```
============================================================
Bootstrap complete! Cluster is ready for application deployment.

ArgoCD UI: https://argocd.k8s.orb.local
  Username: admin
  Password: kubectl -n argocd get secret argocd-initial-admin-secret ...

Grafana: https://grafana.k8s.orb.local
  Username: admin
  Password: admin

Time elapsed: Xm Ys
============================================================
```
After bootstrap, ArgoCD automatically deploys the following components in sync-wave order:
| Sync Wave | Component | Namespace | Helm Chart | Purpose |
|---|---|---|---|---|
| 0 | cert-manager | cert-manager | v1.19.3 | TLS certificate management |
| 1 | ingress-nginx | ingress-nginx | v4.14.3 | HTTP/HTTPS routing |
| 1 | sealed-secrets | sealed-secrets | v2.18.0 | Git-safe secret encryption |
| 2 | kube-prometheus-stack | monitoring | v81.5.2 | Prometheus + Grafana monitoring |
| 3 | namespace-templates | sample-app | Kustomize | Namespace template |
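The ordering is driven by ArgoCD sync-wave annotations on the Application manifests: lower waves must be synced and healthy before higher waves start. A minimal illustration (metadata excerpt only, not the project's full manifest):

```yaml
# Sync waves: wave 0 is applied and healthy before waves 1-3 begin.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "0"
```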
Components may take 3-5 minutes to fully sync. Monitor progress with `kubectl -n argocd get applications`.
```bash
kubectl -n argocd get applications
```
Expected output — all applications should show `Synced` and `Healthy`:
```
NAME                    SYNC STATUS   HEALTH STATUS
root-app-of-apps        Synced        Healthy
ingress-nginx           Synced        Healthy
cert-manager            Synced        Healthy
sealed-secrets          Synced        Healthy
kube-prometheus-stack   Synced        Healthy
namespace-templates     Synced        Healthy
```
The project includes ready-made validation scripts for each stage:
```bash
# Cluster health check
./validation/pre-deploy/check-cluster.sh

# ArgoCD health check
./validation/post-deploy/check-argocd.sh

# All infrastructure components check
./validation/post-deploy/check-components.sh

# Ingress routing check
./validation/post-deploy/check-ingress.sh

# Manifest validation (offline, no cluster needed)
./validation/pre-deploy/validate-manifests.sh
```

```bash
# Check pods across all namespaces
for ns in argocd ingress-nginx cert-manager sealed-secrets monitoring; do
  echo "=== $ns ==="
  kubectl get pods -n "$ns"
  echo ""
done
```
All pods should be in the `Running` or `Completed` state.
```bash
# Open in browser
open https://argocd.k8s.orb.local
```
| Field | Value |
|---|---|
| URL | https://argocd.k8s.orb.local |
| Username | admin |
| Password | Retrieved with the command below |

```bash
# Get the admin password
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d && echo
```
Note: Since we use a self-signed certificate, your browser will show a security warning. Click "Advanced → Proceed" to continue.
If ingress is not ready yet (ingress-nginx hasn't synced), use port-forward:
```bash
kubectl -n argocd port-forward svc/argocd-server 8080:443
open https://localhost:8080
```

```bash
open https://grafana.k8s.orb.local
```
| Field | Value |
|---|---|
| URL | https://grafana.k8s.orb.local |
| Username | admin |
| Password | admin |
You'll be prompted to change the password on first login — for local development you can click "Skip".
Port-forward alternative:
```bash
kubectl -n monitoring port-forward svc/prometheus-grafana 3000:80
open http://localhost:3000
```
After bootstrap, once namespace-templates has synced:
```bash
open https://sample.k8s.orb.local
```
You should see the nginx welcome page.
Cause: OrbStack Kubernetes is not running.
```bash
# Check OrbStack status
orb status

# Start Kubernetes
orb start k8s

# If OrbStack is completely stopped
orb start
```
Cause: Kubernetes is still starting up. This typically takes 30-60 seconds.
```bash
# Wait and check again
kubectl --context orbstack wait --for=condition=Ready node --all --timeout=120s
```
If it stays `NotReady` for more than 2 minutes:
```bash
# Restart Kubernetes
orb restart k8s
```
Cause: The active context is not orbstack.
```bash
# Check current context
kubectl config current-context

# Switch to orbstack context
kubectl config use-context orbstack

# Verify
kubectl get nodes
```
If the problem persists, reset Kubernetes and re-run the bootstrap:
```bash
# Reset Kubernetes
orb delete k8s
orb start k8s

# Re-run bootstrap
./scripts/bootstrap.sh
```
Cause: Remnants from a previous installation.
```bash
# Uninstall and reinstall
helm uninstall argocd -n argocd
./scripts/bootstrap.sh
```
Cause 1: Git repo URL is wrong or unreachable.
```bash
# Check the repo URL in the Application
kubectl -n argocd get app <app-name> -o jsonpath='{.spec.source.repoURL}'

# If the repo URL is wrong, update the Application YAML and re-apply
```
Cause 2: ArgoCD hasn't completed its sync cycle yet.
```bash
# Trigger a manual sync
kubectl -n argocd patch app <app-name> --type merge -p '{"operation":{"sync":{}}}'

# Or sync all applications (requires ArgoCD CLI)
# brew install argocd
# argocd app sync --all
```
Cause: A deployment rollout can't complete (usually due to insufficient resources or image pull errors).
```bash
# Inspect detailed status
kubectl -n argocd get app <app-name> -o yaml | grep -A 30 'status:'

# Check pods in the target namespace
kubectl get pods -n <target-namespace>
kubectl describe pod <pod-name> -n <target-namespace>

# Force a hard refresh
kubectl -n argocd patch app <app-name> --type merge \
  -p '{"metadata":{"annotations":{"argocd.argoproj.io/refresh":"hard"}}}'
```
Cause: The secret was deleted or ArgoCD hasn't created it yet.
```bash
# Check if the secret exists
kubectl -n argocd get secret argocd-initial-admin-secret

# If it doesn't exist, verify ArgoCD pods are running
kubectl -n argocd get pods

# If pods are running but the secret is missing, reset the password
kubectl -n argocd patch secret argocd-secret -p \
  '{"stringData":{"admin.password":"","admin.passwordMtime":""}}'
kubectl -n argocd rollout restart deployment argocd-server
```
Cause: OrbStack DNS service is not running.
```bash
# Test DNS resolution
nslookup argocd.k8s.orb.local

# Restart OrbStack
orb restart

# Flush macOS DNS cache
sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder
```
Cause 1: ingress-nginx controller is not ready yet.
```bash
# Check controller pod status
kubectl -n ingress-nginx get pods

# Check LoadBalancer service
kubectl -n ingress-nginx get svc
```
Cause 2: `ingressClassName` is missing from the Ingress resource.
```bash
# List all ingress resources
kubectl get ingress -A

# View details of a specific ingress
kubectl describe ingress <ingress-name> -n <namespace>
```
Cause 3: IngressClass is not defined.
```bash
# Check IngressClass
kubectl get ingressclass
# "nginx" ingressclass should be listed
```
Cause: cert-manager or ClusterIssuer is not ready.
```bash
# Check cert-manager pods
kubectl -n cert-manager get pods

# Check ClusterIssuer status
kubectl get clusterissuer
# selfsigned-issuer and local-ca-issuer should show "True"

# Check Certificate status
kubectl -n cert-manager get certificate local-ca
# Ready: True expected

# If ClusterIssuer is not Ready, check cert-manager logs
kubectl -n cert-manager logs -l app.kubernetes.io/name=cert-manager --tail=30
```
Cause 1: CRD conflict (leftover from a previous installation).
```bash
# Check CRDs
kubectl get crd | grep monitoring.coreos.com

# Sync with ServerSideApply
kubectl -n argocd patch app kube-prometheus-stack --type merge \
  -p '{"spec":{"syncPolicy":{"syncOptions":["ServerSideApply=true"]}}}'
```
Cause 2: Insufficient resources.
```bash
# Check node resource usage
kubectl top nodes
kubectl top pods -n monitoring --sort-by=memory

# If the machine doesn't have enough resources, lower resource limits in values.yaml
```
Cause: PVC (Persistent Volume Claim) issue. Persistence should be disabled for local development.
```bash
# Check Grafana pod status
kubectl describe pod -n monitoring -l app.kubernetes.io/name=grafana

# Delete PVCs if they exist
kubectl -n monitoring delete pvc -l app.kubernetes.io/name=grafana
```
Ensure `persistence.enabled: false` is set in `argocd/helm-values/kube-prometheus-stack/values.yaml`.
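Assuming the chart's standard layout, the relevant values fragment likely looks like this — `grafana.persistence` is the kube-prometheus-stack key that controls the Grafana PVC; verify against the actual file in this repo:

```yaml
# argocd/helm-values/kube-prometheus-stack/values.yaml (excerpt, assumed layout)
grafana:
  persistence:
    enabled: false   # no PVC — storage is ephemeral, which is fine for local dev
```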
```bash
# Access Prometheus via port-forward
kubectl -n monitoring port-forward svc/prometheus-prometheus 9090:9090 &

# Check the number of targets
curl -s http://localhost:9090/api/v1/targets | python3 -m json.tool | grep -c '"health"'

# Stop port-forward
kill %1
```
Cause: Sealed Secrets controller is not running, or the wrong namespace/name is being used.
```bash
# Check controller pod status
kubectl -n sealed-secrets get pods

# Try with the correct controller name and namespace
kubeseal --controller-name sealed-secrets \
  --controller-namespace sealed-secrets \
  --fetch-cert
```
Cause: The SealedSecret was encrypted for a different cluster (each cluster has its own unique key).
```bash
# Check controller logs
kubectl -n sealed-secrets logs -l app.kubernetes.io/name=sealed-secrets --tail=20

# If the cluster has been reset, re-encrypt all SealedSecrets with the new key
kubeseal --controller-namespace sealed-secrets --fetch-cert > /tmp/cert.pem
# Then re-encrypt each secret using --cert /tmp/cert.pem
```
Possible causes:
- Slow internet connection (image pull time)
- Insufficient system resources
- Too many background applications running
```bash
# Check which pods are pulling images
kubectl get events --all-namespaces --sort-by='.lastTimestamp' | grep -i pull

# Check resource usage
kubectl top nodes
```
To reinstall from scratch:
```bash
# Method 1: Remove ArgoCD only
helm uninstall argocd -n argocd
kubectl delete namespace argocd ingress-nginx cert-manager sealed-secrets monitoring sample-app
./scripts/bootstrap.sh

# Method 2: Reset the entire cluster (cleanest approach)
orb delete k8s
orb start k8s
kubectl --context orbstack wait --for=condition=Ready node --all --timeout=120s
./scripts/bootstrap.sh
```
Cause: A previous Helm operation was interrupted.
```bash
# Check pending releases
helm list -n argocd --all

# Roll back the stuck release
helm rollback argocd -n argocd

# Or clean up corrupted secrets
kubectl -n argocd delete secret -l owner=helm,status=pending-install
kubectl -n argocd delete secret -l owner=helm,status=pending-upgrade
```
Quick reference commands:
```bash
# ArgoCD admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d && echo

# List all ArgoCD applications
kubectl -n argocd get applications

# List all pods by namespace
kubectl get pods -A | grep -v kube-system

# Full cluster reset
orb delete k8s && orb start k8s && ./scripts/bootstrap.sh
```