A web-based platform for deploying and managing large language models on Kubernetes with support for multiple inference providers.
- Web UI: Modern interface for all deployment and management tasks
- Model Catalog: Browse curated models or search the entire HuggingFace Hub
- Smart Filtering: Automatically filters models by architecture compatibility
- GPU Capacity Warnings: Visual indicators showing whether models fit your cluster's GPU memory
- Real-Time Cost Estimation: Live pricing from the Azure API for GPU node pools
- Autoscaler Integration: Detects cluster autoscaling and provides capacity guidance
- AI Configurator: NVIDIA AI Configurator integration for optimal inference settings
- One-Click Deploy: Configure and deploy models without writing YAML
- Live Dashboard: Monitor deployments with auto-refresh and status tracking
- Real-Time Logs: Stream container logs directly from the UI
- Deployment Metrics: View Prometheus metrics for running deployments (in-cluster)
- Multi-Provider Support: Extensible architecture supporting multiple inference runtimes
- Multiple Engines: vLLM, SGLang, and TensorRT-LLM (via NVIDIA Dynamo)
- Installation Wizard: Install providers via Helm directly from the UI
- Complete Uninstall: Clean uninstallation with optional CRD removal
- Dark Theme: Modern dark UI with provider-specific accents
| Provider | Status | Description |
|---|---|---|
| NVIDIA Dynamo | Available | GPU-accelerated inference with aggregated or disaggregated serving |
| KubeRay | Available | Ray-based distributed inference |
| KAITO | Available | Flexible inference with vLLM (GPU) and llama.cpp (CPU/GPU) support |
- Kubernetes cluster with `kubectl` configured
- `helm` CLI installed
- GPU nodes with NVIDIA drivers (for GPU-accelerated inference)
- HuggingFace account (for accessing gated models like Llama)
Note: KAITO provider supports CPU-only inference, so GPU nodes are optional when using KAITO with CPU compute type.
Download the latest release for your platform and run:
```bash
./kubefoundry
```

Open the web UI at http://localhost:3001
Requires: `kubectl` configured with cluster access and the `helm` CLI installed.
macOS users: If you see a quarantine warning, remove it with:
```bash
xattr -dr com.apple.quarantine kubefoundry
```
```bash
kubectl apply -f https://raw.githubusercontent.com/sozercan/kube-foundry/main/deploy/kubernetes/kubefoundry.yaml

# Access via port-forward
kubectl port-forward -n kubefoundry-system svc/kubefoundry 3001:80
```

Open the web UI at http://localhost:3001
See Kubernetes Deployment for configuration options.
Navigate to the Installation page and click Install next to your preferred provider. The UI will guide you through the Helm installation process with real-time status updates.
Go to Settings → HuggingFace and click "Sign in with Hugging Face" to connect your account via OAuth. Your token will be automatically distributed to all required namespaces.
Note: A HuggingFace token is required to access gated models like Llama.
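If you want to sanity-check a token outside the UI, the HuggingFace Hub's public `whoami-v2` endpoint reports which account a token belongs to. A minimal stdlib-only sketch (error handling omitted; this is not part of KubeFoundry itself):

```python
import json
import urllib.request

HF_WHOAMI_URL = "https://huggingface.co/api/whoami-v2"  # public HF Hub endpoint

def auth_headers(token: str) -> dict:
    """HuggingFace Hub expects a standard Bearer authorization header."""
    return {"Authorization": f"Bearer {token}"}

def whoami(token: str) -> str:
    """Return the account name the token belongs to (raises on a bad token)."""
    req = urllib.request.Request(HF_WHOAMI_URL, headers=auth_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["name"]
```

A token that fails this check will also fail when pulling gated model weights.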
- Navigate to the Models page
- Browse the curated catalog or Search HuggingFace for any compatible model
- Review GPU memory estimates and fit indicators (fits, tight, or exceeds GPU capacity)
- Click Deploy on your chosen model
- Select Runtime: Choose NVIDIA Dynamo, KubeRay, or KAITO, depending on which runtimes are installed
- Configure deployment options:
- Dynamo/KubeRay: Select engine (vLLM, SGLang, TRT-LLM), replicas, GPU configuration
- KAITO: Choose from three modes:
- Pre-made GGUF: Ready-to-deploy quantized models for CPU/GPU
- HuggingFace GGUF: Run any GGUF model from HuggingFace directly
- vLLM: GPU inference using the vLLM engine
- Click Create Deployment to launch
Note: Each deployment can use a different runtime. The deployment list shows which runtime each deployment is using.
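The GPU fit indicators shown during model review boil down to comparing an estimated memory footprint against available GPU memory. KubeFoundry's own estimator is more detailed; a back-of-envelope sketch of the idea (the 20% overhead factor and the 80% "comfortable fit" threshold are illustrative assumptions, not KubeFoundry's actual numbers):

```python
def estimate_vram_gb(params_billions: float, bytes_per_param: int = 2,
                     overhead: float = 1.2) -> float:
    """Rough VRAM need: weights (fp16 = 2 bytes/param) plus ~20% for
    KV cache and activations. Illustrative rule of thumb only."""
    return params_billions * bytes_per_param * overhead

def fit_indicator(required_gb: float, gpu_gb: float) -> str:
    """Classify a model against a single GPU's memory."""
    if required_gb <= 0.8 * gpu_gb:   # comfortable headroom
        return "fits"
    if required_gb <= gpu_gb:         # schedulable, but little headroom
        return "tight"
    return "exceeds"

# e.g. a 7B model in fp16 needs roughly 7 * 2 * 1.2 = 16.8 GB
print(fit_indicator(estimate_vram_gb(7), gpu_gb=24))   # prints "fits"
```

Quantized GGUF models (the KAITO CPU path) shrink the weights term substantially, which is why they can run without a GPU at all.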
Head to the Deployments page to:
- View real-time status of all deployments across all runtimes
- See pod readiness and health checks with node information
- Stream container logs directly from the UI
- View Prometheus metrics (when running in-cluster)
- Get intelligent guidance when pods are pending (GPU/resource constraints)
- Scale or delete deployments
Once status shows Running, your model exposes an OpenAI-compatible API. Use kubectl port-forward to access it locally:
```bash
# Port-forward to the service (check Deployments page for exact service name)
kubectl port-forward svc/<deployment-name> 8000:8000 -n <namespace>

# List available models
curl http://localhost:8000/v1/models

# Test with a chat completion
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "<model-name>", "messages": [{"role": "user", "content": "Hello!"}]}'
```

KubeFoundry supports any HuggingFace model with a compatible architecture. Browse the curated catalog for tested models, or search HuggingFace Hub for thousands more.
When searching HuggingFace, models are filtered by architecture compatibility:
| Engine | Supported Architectures |
|---|---|
| vLLM | LlamaForCausalLM, MistralForCausalLM, Qwen2ForCausalLM, GPT2LMHeadModel, and 40+ more |
| SGLang | LlamaForCausalLM, MistralForCausalLM, Qwen2ForCausalLM, and 20+ more |
| TensorRT-LLM | LlamaForCausalLM, GPTForCausalLM, MistralForCausalLM, and 15+ more |
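Architecture filtering is possible because every HuggingFace model publishes an `architectures` list in its `config.json`. A sketch of the compatibility check (the supported-architecture sets below are abbreviated examples drawn from the table above, not the full lists KubeFoundry uses):

```python
SUPPORTED = {
    "vllm": {"LlamaForCausalLM", "MistralForCausalLM",
             "Qwen2ForCausalLM", "GPT2LMHeadModel"},
    "sglang": {"LlamaForCausalLM", "MistralForCausalLM", "Qwen2ForCausalLM"},
}

def compatible_engines(model_config: dict) -> list:
    """Given a parsed config.json, return engines that can serve the model."""
    archs = set(model_config.get("architectures", []))
    return [eng for eng, supported in SUPPORTED.items() if archs & supported]

# A Llama-family model's config.json declares its architecture:
cfg = {"architectures": ["LlamaForCausalLM"], "model_type": "llama"}
print(compatible_engines(cfg))   # both example engines support LlamaForCausalLM
```

Models whose declared architecture matches no engine are hidden from search results rather than failing at deploy time.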
KubeFoundry supports optional authentication using your existing kubeconfig OIDC credentials.
To enable, start the server with:
```bash
AUTH_ENABLED=true ./kubefoundry
```

Then use the CLI to log in:
```bash
kubefoundry login                               # Uses current kubeconfig context
kubefoundry login --server https://example.com  # Specify server URL
kubefoundry login --context my-cluster          # Use a specific context
```

The login command extracts your OIDC token and opens the browser automatically.
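For reference, an OIDC id-token typically lives in the kubeconfig under the user's auth-provider config. A hedged sketch of that lookup over an already-parsed kubeconfig (the field layout follows the legacy `oidc` auth-provider convention; clusters using an exec credential plugin store tokens differently, and this is not KubeFoundry's actual implementation):

```python
def find_oidc_token(kubeconfig: dict, context_name: str):
    """Walk a parsed kubeconfig and return the id-token for a context's user."""
    ctx = next(c for c in kubeconfig["contexts"] if c["name"] == context_name)
    user = next(u for u in kubeconfig["users"] if u["name"] == ctx["context"]["user"])
    provider = user["user"].get("auth-provider", {})
    if provider.get("name") == "oidc":
        return provider["config"].get("id-token")
    return None  # exec-plugin credentials are fetched by running a command instead

sample = {
    "contexts": [{"name": "my-cluster", "context": {"user": "dev"}}],
    "users": [{"name": "dev",
               "user": {"auth-provider": {"name": "oidc",
                                          "config": {"id-token": "eyJ..."}}}}],
}
print(find_oidc_token(sample, "my-cluster"))   # prints the stored id-token
```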
- Architecture Overview
- API Reference
- Development Guide
- Azure Cluster Autoscaling Setup
- Kubernetes Deployment
We welcome contributions! Please see CONTRIBUTING.md for development setup and guidelines.
We embrace AI-assisted contributions! You can submit traditional PRs or prompt requests: share the AI prompt that generates your changes, and maintainers can review the intent before running the code. See the AI-Assisted Contributions section for details.
