Internal infrastructure for on-premises AI deployments.
Tunnel gets you in. Conduit connects everything inside. Automatic DNS, internal TLS, service routing, and hardware monitoring. Eight commands. Structured audit logging. Zero internet dependency.
You deploy AI services on-premises. Each service needs a hostname, a TLS certificate, and health monitoring. Without automation, you configure dnsmasq by hand, generate certificates manually, write Caddy routes one at a time, and SSH into servers to check GPU utilization. Scale to five services and the maintenance burden is already unsustainable. Scale to twenty and something will break silently.
QP Conduit eliminates this with one-command service registration: DNS, TLS, and routing in a single operation, with continuous health monitoring and a cryptographic audit trail.
```
   OUTSIDE            BOUNDARY                     INSIDE

                                      ┌─────────────────────────────┐
┌──────────┐      ┌──────────────┐    │ QP Conduit                  │
│ Remote   │      │              │    │                             │
│ Users    │──────│  QP Tunnel   │────│ DNS: grafana.internal       │
│          │      │  (WireGuard) │    │ TLS: auto-cert via CA       │
└──────────┘      │              │    │ Route: reverse proxy        │
                  └──────────────┘    │ Monitor: GPU/CPU/disk       │
                      Firewall        │ Health: container checks    │
                                      └─────────────────────────────┘
```
One command, full stack. Register a service and Conduit creates the DNS entry, generates a TLS certificate, configures the reverse proxy route, and starts health checks. One command. Done.
Internal TLS everywhere. Caddy's built-in CA generates certificates automatically for every registered service. No manual cert management. No expiry surprises. No external certificate authority.
Automatic service discovery. Services register with human-readable names. grafana.internal resolves to the right container. hub.local routes to the Hub. No IP addresses to remember.
Hardware monitoring. GPU utilization, CPU load, memory pressure, disk usage, container health. Monitor local and remote servers on the LAN via SSH. One dashboard for your entire deployment.
Cryptographic audit trail. Every registration, deregistration, certificate rotation, and health state change logged as structured JSON. Optional Capsule Protocol integration seals each entry with SHA3-256 + Ed25519 for tamper evidence.
Air-gap compatible. Internal CA, local DNS, no external dependencies. Works in classified environments, air-gapped clinics, and disconnected field deployments.
Pairs with QP Tunnel. Tunnel handles the boundary (VPN access from outside). Conduit handles the interior (DNS, TLS, routing, monitoring). Together they form a complete networking layer for on-premises AI.
```shell
# 1. Initialize Conduit on your network
./conduit-setup.sh

# 2. Register your first service
./conduit-register.sh --name grafana --host 10.0.1.50:3000

# 3. Verify it works
./conduit-status.sh
# grafana.internal → 10.0.1.50:3000 [healthy] TLS ✓ DNS ✓
```

After setup, `grafana.internal` resolves via DNS, serves over HTTPS with an auto-generated certificate, and reports health status continuously.
```shell
# Register more services
./conduit-register.sh --name hub --host 10.0.1.10:4200
./conduit-register.sh --name api --host 10.0.1.10:8000
./conduit-register.sh --name ollama --host 10.0.1.20:11434

# Check everything
./conduit-status.sh
# hub.local        → 10.0.1.10:4200   [healthy] TLS ✓ DNS ✓
# api.local        → 10.0.1.10:8000   [healthy] TLS ✓ DNS ✓
# ollama.internal  → 10.0.1.20:11434  [healthy] TLS ✓ DNS ✓
# grafana.internal → 10.0.1.50:3000   [healthy] TLS ✓ DNS ✓
```

| Command | Description |
|---|---|
| `conduit-setup.sh` | Initialize Conduit (install dnsmasq, configure Caddy, generate internal CA) |
| `conduit-register.sh --name <n> --host <ip:port>` | Register a service: DNS + TLS + routing in one step |
| `conduit-deregister.sh --name <n>` | Remove a service (DNS, route, and cert cleanup) |
| `conduit-status.sh` | Show all registered services with health, TLS, and DNS status |
| `conduit-monitor.sh` | Show server hardware stats (GPU, CPU, memory, disk) |
| `conduit-certs.sh` | List, rotate, or inspect TLS certificates |
| `conduit-dns.sh` | List or flush DNS entries |
| `conduit-logs.sh` | Aggregate and stream service logs |
```
┌──────────────────────────────────────────────────────────────┐
│                          QP Conduit                          │
│                                                              │
│ ┌──────────┐  ┌──────────────┐  ┌──────────────────────────┐ │
│ │ dnsmasq  │  │    Caddy     │  │      Monitor Daemon      │ │
│ │          │  │              │  │                          │ │
│ │ DNS      │  │ Internal CA  │  │ GPU (nvidia-smi)         │ │
│ │ resolver │  │ TLS certs    │  │ CPU / Memory / Disk      │ │
│ │          │  │ Reverse proxy│  │ Container health         │ │
│ │          │  │ Health checks│  │ Remote servers (SSH)     │ │
│ └────┬─────┘  └──────┬───────┘  └────────────┬─────────────┘ │
│      │               │                       │               │
│      └────────┬──────┴───────────────────────┘               │
│               │                                              │
│        ┌──────┴──────┐                                       │
│        │  Registry   │  services.json                        │
│        │  + Audit    │  audit.log                            │
│        └─────────────┘  capsules.db (optional)               │
└──────────────────────────────────────────────────────────────┘
         │                │                    │
    ┌────┴────┐      ┌────┴────┐         ┌─────┴─────┐
    │   Hub   │      │  Core   │         │  Ollama   │
    │  :4200  │      │  :8000  │         │  :11434   │
    └─────────┘      └─────────┘         └───────────┘
     hub.local        api.local         ollama.internal
```
dnsmasq resolves internal hostnames to service addresses. All DNS queries for registered services return the correct IP without any external lookup.
Caddy serves three roles: internal certificate authority, TLS termination, and reverse proxy. When a service registers, Caddy generates a certificate from its internal CA, configures a route, and starts health checking the upstream.
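For illustration, the kind of per-service route Caddy ends up with might look like the sketch below (hostname and upstream taken from this README's examples; the actual rendering of `templates/Caddyfile.service.tpl` may differ):

```
grafana.internal {
    tls internal
    reverse_proxy 10.0.1.50:3000 {
        health_uri /
        health_interval 30s
    }
}
```

`tls internal` tells Caddy to issue the certificate from its built-in CA rather than a public ACME endpoint, which is what keeps the whole TLS story offline.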
Monitor Daemon polls hardware metrics (GPU utilization via nvidia-smi, CPU/memory/disk via standard tools) and container health (via Docker socket). For remote servers on the LAN, it connects over SSH.
Registry is the single source of truth: a JSON file listing all registered services with their hostnames, upstreams, health status, and certificate metadata. The audit log records every mutation.
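As a sketch, a single registry entry might look like this (field names are illustrative, not the documented schema):

```json
{
  "name": "grafana",
  "hostname": "grafana.internal",
  "upstream": "10.0.1.50:3000",
  "health": "healthy",
  "tls": {"issued": "2026-04-04T10:15:01Z", "ca": "internal"}
}
```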
Registration is atomic. One command creates the DNS entry, generates a TLS certificate, and configures the reverse proxy route:

```shell
./conduit-register.sh --name grafana --host 10.0.1.50:3000
```

What happens:

- Adds `grafana.internal → 10.0.1.50` to the dnsmasq configuration
- Reloads dnsmasq to activate the DNS entry
- Adds a reverse proxy route in Caddy (`grafana.internal → 10.0.1.50:3000`)
- Caddy's internal CA auto-generates a TLS certificate for `grafana.internal`
- Registers a health check against the upstream
- Writes the service to `services.json`
- Creates a Capsule audit record
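The DNS half of this flow reduces to a single dnsmasq rule. A minimal sketch (variable names are illustrative, not Conduit's actual code):

```shell
# Build the dnsmasq rule that answers queries for <name>.<domain> locally.
# dnsmasq's address=/<host>/<ip> syntax returns <ip> for that host.
NAME=grafana
DOMAIN=internal
IP=10.0.1.50

RULE="address=/${NAME}.${DOMAIN}/${IP}"
echo "$RULE"
# → address=/grafana.internal/10.0.1.50
```

Appending this rule and reloading dnsmasq is all that is needed for the hostname to resolve LAN-wide.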
Deregistration reverses all steps cleanly:

```shell
./conduit-deregister.sh --name grafana
```

Every registered service gets HTTPS automatically. No manual certificate management.
```
┌─────────────────────────────────────────────────────────┐
│                    Caddy Internal CA                    │
│                                                         │
│ Root CA: Ed25519 (generated at conduit-setup)           │
│ Per-service: auto-generated, auto-renewed               │
│ Trust: distribute root cert to clients once             │
│                                                         │
│ ┌──────────────┐  ┌──────────────┐  ┌──────────────┐    │
│ │  hub.local   │  │  api.local   │  │   grafana    │    │
│ │  TLS cert    │  │  TLS cert    │  │  .internal   │    │
│ │   (auto)     │  │   (auto)     │  │  TLS cert    │    │
│ └──────────────┘  └──────────────┘  └──────────────┘    │
└─────────────────────────────────────────────────────────┘
```
Trust distribution: After setup, install the root CA certificate on client machines. Conduit outputs trust commands for macOS, Linux, and Windows. Install once, trust all services forever.
Certificate rotation: Caddy handles renewal automatically. For manual inspection or forced rotation:

```shell
./conduit-certs.sh                    # List all certificates with expiry dates
./conduit-certs.sh --rotate grafana   # Force certificate rotation
./conduit-certs.sh --inspect grafana  # Show full certificate details
```

Hardware monitoring covers local and remote servers in one report:

```shell
./conduit-monitor.sh
```

```
SERVER: 10.0.1.20 (gpu-server)
  GPU 0: NVIDIA H200 | Util: 87% | Mem: 72.1/141.1 GB | Temp: 62°C
  GPU 1: NVIDIA H200 | Util: 43% | Mem: 31.4/141.1 GB | Temp: 58°C
  CPU: 24/48 cores | Load: 12.3
  Memory: 189.2 / 256.0 GB (74%)
  Disk: 1.2 / 3.8 TB (32%)

SERVER: 10.0.1.10 (app-server)
  CPU: 8/16 cores | Load: 2.1
  Memory: 12.4 / 32.0 GB (39%)
  Disk: 45.2 / 500.0 GB (9%)
```
Conduit connects to the Docker socket for real-time container inspection:

```shell
./conduit-monitor.sh --containers
```

```
CONTAINER     STATUS    CPU     MEM       UPTIME
qp-hub        running   2.3%    384 MB    4d 12h
qp-core       running   8.7%    1.2 GB    4d 12h
qp-postgres   running   1.1%    256 MB    4d 12h
qp-redis      running   0.2%    48 MB     4d 12h
qp-ollama     running   45.2%   68.3 GB   4d 12h
qp-caddy      running   0.4%    32 MB     4d 12h
```
Monitor servers across your LAN via SSH. Configure targets in `.env.conduit`:

```shell
CONDUIT_REMOTE_SERVERS="10.0.1.20:gpu-server,10.0.1.30:inference-node"
```

Every operation writes a structured JSON entry to `audit.log`:
```json
{
  "timestamp": "2026-04-04T10:15:00Z",
  "action": "service_register",
  "status": "success",
  "message": "Registered grafana.internal → 10.0.1.50:3000",
  "user": "operator",
  "details": {"name": "grafana", "host": "10.0.1.50:3000", "tls": true, "dns": true}
}
```

Logged actions: `conduit_setup`, `service_register`, `service_deregister`, `cert_rotate`, `dns_flush`, `health_change`, `monitor_alert`, and all error traps.
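Because each entry is a single JSON object per line, the log is easy to query with standard tools. A quick sketch using grep over sample data (the entries below are illustrative and written to a temp file; Conduit itself uses jq for registry work):

```shell
# Two illustrative audit entries, one JSON object per line.
cat > /tmp/audit-sample.log <<'EOF'
{"timestamp":"2026-04-04T10:15:00Z","action":"service_register","status":"success"}
{"timestamp":"2026-04-04T10:20:00Z","action":"cert_rotate","status":"success"}
EOF

# Filter for certificate rotations.
grep '"action":"cert_rotate"' /tmp/audit-sample.log
```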
When qp-capsule is installed, audit events are sealed as tamper-evident Capsules using SHA3-256 + Ed25519 signatures. This provides cryptographic proof that records have not been modified after creation.
```shell
pip install qp-capsule                # Or: auto-installs on first use
qp-capsule verify --db capsules.db   # Verify chain integrity
```

The JSON audit log is the fast local index. Capsules are the cryptographic source of truth. Golden test vectors for the audit format are in `conformance/`.
Conduit includes a browser-based admin UI for managing your entire on-premises infrastructure visually.
```shell
make ui-install   # First time: install dependencies
make ui           # Start the dashboard (http://localhost:5173)
```

```
┌────────────────────────────────────────────────────────────────────┐
│                             QP Conduit                             │
├──────────┬─────────────────────────────────────────────────────────┤
│          │ Services    4 up · 0 degraded · 0 down                  │
│ Overview │                                                         │
│ ┌──────┐ │ ┌──────────────┐  ┌──────────────┐  ┌──────────────┐    │
│ │Dashbd│ │ │ Hub          │  │ Core API     │  │ Grafana      │    │
│ ├──────┤ │ │ ● hub.local  │  │ ● api.local  │  │ ● grafana    │    │
│ │Svc   │ │ │ :4200 TLS ✓  │  │ :8000 TLS ✓  │  │ .internal    │    │
│ │DNS   │ │ │ 12ms healthy │  │ 8ms healthy  │  │ 15ms healthy │    │
│ │TLS   │ │ └──────────────┘  └──────────────┘  └──────────────┘    │
│ ├──────┤ │                                                         │
│ │Server│ │ GPU Server (10.0.1.20)                                  │
│ │Route │ │ GPU 0: H200  87%  ███████░░  72/141 GB  62°C            │
│ └──────┘ │ GPU 1: H200  43%  ████░░░░░  31/141 GB  58°C            │
│          │ CPU: 24/48   Mem: 189/256 GB   Disk: 1.2/3.8 TB         │
└──────────┴─────────────────────────────────────────────────────────┘
```
Six views: Dashboard (health overview), Services (register/manage), DNS (entries + resolver), TLS (certificates + CA), Servers (GPU/CPU/memory), Routing (proxy routes).
Tech: React 19, TypeScript, Vite, TailwindCSS 4 with OKLCH perceptual color system, Zustand, TanStack Query. Dark theme with 6-level surface hierarchy.
Keyboard-first: 1-6 switches views, / focuses search, Esc dismisses panels.
| Layer | Mechanism |
|---|---|
| TLS | Internal CA (Ed25519) with auto-generated per-service certificates |
| DNS | Local dnsmasq, no external queries, no DNS-over-HTTPS dependency |
| Routing | Caddy reverse proxy with upstream health checks |
| File protection | umask 077 on all keys and CA material (owner-only, mode 600) |
| Input validation | Strict [a-zA-Z0-9_-] regex on service names (prevents injection) |
| No eval | Zero use of eval in the entire codebase |
| Audit trail | Every operation logged with timestamp, user, and result |
| Tamper evidence | Optional Capsule Protocol sealing (SHA3-256 + Ed25519) |
| Isolation | Services are independently routed; one failure does not cascade |
| Certificate rotation | Automatic renewal; manual rotation available on demand |
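The service-name validation in the table above can be expressed as a small POSIX-shell check. This sketch (function name assumed, not Conduit's actual helper) shows why a name like `bad;name` never reaches dnsmasq or Caddy:

```shell
# Reject any name that is empty or contains a character outside [a-zA-Z0-9_-].
validate_name() {
  case "$1" in
    ''|*[!a-zA-Z0-9_-]*) return 1 ;;   # empty, or has a disallowed character
    *) return 0 ;;
  esac
}

validate_name "grafana"  && echo "grafana: ok"
validate_name 'bad;name' || echo "bad;name: rejected"
```

Rejecting metacharacters up front means a service name can be safely spliced into dnsmasq config lines and Caddy routes without shell or config injection.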
Conduit's internal TLS, DNS isolation, and audit logging contribute to controls across five regulatory frameworks. Each mapping documents which controls Conduit satisfies and which require complementary application-level controls.
| Framework | Controls | Focus |
|---|---|---|
| HIPAA | 164.312(e)(1), 164.312(a)(1) | Transmission security, access control, audit |
| CMMC 2.0 | SC.L2-3.13.8, AU.L2-3.3.x | Network architecture, encrypted sessions, logging |
| FedRAMP | SC-8, SC-12, AU-2/3 | Transmission confidentiality, key management |
| SOC 2 | CC6.1, CC6.6, CC7.x | Logical access, network security, monitoring |
| ISO 27001 | A.8.20, A.8.21, A.8.24 | Network security, web filtering, cryptography |
Copy `.env.conduit.example` to `.env.conduit` and customize:

| Variable | Default | Description |
|---|---|---|
| `CONDUIT_APP_NAME` | `qp-conduit` | Config directory, log tags |
| `CONDUIT_DOMAIN` | `internal` | Default domain suffix for services |
| `CONDUIT_DNS_PORT` | `53` | dnsmasq listen port |
| `CONDUIT_DNS_UPSTREAM` | `127.0.0.1` | Upstream DNS for non-internal queries |
| `CONDUIT_CADDY_ADMIN` | `localhost:2019` | Caddy admin API address |
| `CONDUIT_CADDY_HTTPS_PORT` | `443` | HTTPS listen port |
| `CONDUIT_HEALTH_INTERVAL` | `30` | Health check interval in seconds |
| `CONDUIT_REMOTE_SERVERS` | (none) | Comma-separated ip:label pairs for remote monitoring |
| `CONDUIT_CONFIG_DIR` | `~/.config/qp-conduit` | State directory (registry, certs, audit) |
| `CONDUIT_DOCKER_SOCKET` | `/var/run/docker.sock` | Docker socket path for container monitoring |
All values are overridable via environment variables or `.env.conduit`.
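The override behavior is the usual shell pattern: source the env file if present, then fall back to the documented default with `${VAR:-default}`. A minimal sketch (not Conduit's actual loader), including how the `ip:label` pairs of `CONDUIT_REMOTE_SERVERS` split with plain parameter expansion:

```shell
# Load overrides when the file exists; otherwise keep environment/defaults.
[ -f .env.conduit ] && . ./.env.conduit

# Fall back to the documented defaults.
CONDUIT_DOMAIN="${CONDUIT_DOMAIN:-internal}"
CONDUIT_HEALTH_INTERVAL="${CONDUIT_HEALTH_INTERVAL:-30}"
echo "domain=${CONDUIT_DOMAIN} interval=${CONDUIT_HEALTH_INTERVAL}"

# Split the comma-separated ip:label pairs (sample value from this README).
servers="10.0.1.20:gpu-server,10.0.1.30:inference-node"
old_ifs=$IFS; IFS=','
for entry in $servers; do
  echo "monitor ${entry#*:} at ${entry%%:*}"
done
IFS=$old_ifs
```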
Required:

| Dependency | Purpose |
|---|---|
| `bash 4.0+` | Shell runtime |
| `jq` | JSON processing for service registry |
| `caddy 2.10+` | Internal CA, TLS termination, reverse proxy |
| `dnsmasq` | Local DNS resolution for internal hostnames |

Optional:

| Dependency | Purpose |
|---|---|
| `docker` | Container inspection and health monitoring |
| `nvidia-smi` | GPU utilization monitoring |
| `ssh` | Remote server monitoring across LAN |
| `qp-capsule` | Tamper-evident audit sealing (auto-installs via pip) |
| Document | Audience |
|---|---|
| Architecture | Developers, Auditors |
| Security Evaluation | CISOs, Security Teams |
| Why Conduit | Decision-Makers, Architects |
| Compliance Mappings | Regulators, GRC |
| Guide | Use Case |
|---|---|
| Home Lab with GPU | Multi-GPU server with Ollama and Grafana |
| Healthcare Clinic | Air-gapped clinic with EHR and AI diagnostics |
| Defense Installation | Classified environment, no internet, full audit |
```
.
├── conduit-*.sh             # 8 commands (setup, register, deregister, status, monitor, certs, dns, logs)
├── conduit-preflight.sh     # Pre-flight setup (sourced by all scripts)
├── lib/
│   ├── common.sh            # Logging, validation, config defaults
│   ├── registry.sh          # Service registry CRUD (JSON/jq)
│   ├── audit.sh             # Structured audit logging + Capsule sealing
│   ├── dns.sh               # dnsmasq configuration and management
│   ├── tls.sh               # Caddy CA and certificate operations
│   └── routing.sh           # Reverse proxy route management
├── ui/                      # Admin dashboard (React 19 + TypeScript)
│   └── src/
│       ├── components/views/    # 6 views (dashboard, services, dns, tls, servers, routing)
│       ├── components/layout/   # AppShell, Sidebar, StatusBar
│       ├── components/shared/   # HealthDot, StatCard, Chip, SlideOver, Toast
│       ├── api/                 # Typed API client modules
│       ├── stores/              # Zustand state management
│       └── lib/                 # Types, utilities, OKLCH theme
├── templates/
│   └── Caddyfile.service.tpl    # Per-service Caddy configuration template
├── conformance/             # Audit log golden test vectors
├── completions/             # Bash and Zsh tab-completion scripts
├── tests/                   # Unit, integration, and smoke tests (bats-core)
├── docs/                    # Architecture, security, compliance, guides
├── examples/                # Deployment walkthroughs
├── .env.conduit.example     # Configuration template
├── Makefile                 # All operations as Make targets
└── VERSION                  # 0.1.0
```
See CONTRIBUTING.md. Issues and pull requests welcome.
Apache License 2.0 with additional patent grant. You can use all patented innovations freely for any purpose, including commercial use.
Internal DNS. Automatic TLS. Service routing. Hardware monitoring. Full audit trail.
Documentation · Examples · Conformance · Security Policy · Patent Grant
Copyright 2026 Quantum Pipes Technologies, LLC