╦ ╦╦ ╦╔╦╗╔═╗
╠═╣║ ║║║║╠╦╝
╩ ╩╚═╝╩ ╩╩╚═
Run AI harnesses in production.
Isolated. Credentialed. Scheduled.
Kubernetes platform for running AI agent harnesses (Claude Code, Codex, Gemini CLI) in isolated environments with credential injection, network isolation, and scheduled execution.
```
git clone https://github.com/kagenti/humr && cd humr
```

Open your favorite AI coding agent in the repo and try:
Walk me through how Humr works step by step. I want to do a demo for myself.
Explain how things work on the way. Help me connect a model provider, create
an instance, add a connection to GitHub, and chat with an agent.
Once you're comfortable, go deeper:
Now show me the advanced stuff. Set up a Slack channel integration, create a
scheduled job, build a long-living agent with a heartbeat, and wire up an
MCP server.
Your agent has full context of the codebase, architecture decisions, and cluster commands.
See PITCH.md for the full story of what Humr is and why it exists.
For those who prefer pasting commands into a terminal:
```
mise install                # install deps, configure git hooks
mise run cluster:install    # create local k3s cluster + deploy (or upgrade) Humr
mise run cluster:status     # check pods
export KUBECONFIG="$(mise run cluster:kubeconfig)"   # activate cluster env
```

Open humr.localhost:4444 in your browser (login: dev / dev), create an instance from a template, and start chatting.
Agent harnesses and other connections require API tokens to communicate with their providers. These secrets are managed through the OneCLI dashboard at onecli.localhost:4444.
OneCLI acts as a proxy — agents never see the secrets directly. Instead, OneCLI intercepts outgoing requests from agent pods and injects the appropriate credentials before forwarding them to the provider.
- Add a secret — open the OneCLI UI and create a new secret. For Anthropic, you can use `claude setup-token` as the token value. For other connections, use Apps or Generic secret.
- Allow the secret for an agent — in the OneCLI UI, grant the secret to the specific agent that needs it. Only requests from allowed agents will have credentials injected.
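The proxy-plus-allow-list model above can be sketched in a few lines. This is an illustrative sketch only, not OneCLI's actual code: the `Secret` shape, `injectCredentials`, and the host/agent names are all hypothetical.

```typescript
// Hypothetical sketch of OneCLI-style credential injection.
// The proxy holds the secrets; agent pods send requests without
// credentials, and the proxy adds an Authorization header on the way
// out -- but only for agents on the secret's allow-list.

type Secret = { token: string; allowedAgents: Set<string> };

// Provider host -> secret configured for that provider.
const secrets = new Map<string, Secret>();

function injectCredentials(
  host: string,
  agentId: string,
  headers: Record<string, string>,
): Record<string, string> {
  const secret = secrets.get(host);
  // No secret for this provider, or agent not allowed:
  // forward the request unchanged (no credentials leak either way).
  if (!secret || !secret.allowedAgents.has(agentId)) return headers;
  // The agent never sees the token; it is injected here.
  return { ...headers, Authorization: `Bearer ${secret.token}` };
}

// Example: grant an Anthropic token to a single agent.
secrets.set("api.anthropic.com", {
  token: "sk-ant-example",
  allowedAgents: new Set(["agent-1"]),
});
```

The key property is that the token lives only in the proxy's state: an agent pod that is not on the allow-list gets its request forwarded without any `Authorization` header at all.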
Humr runs a single Slack app (Socket Mode) for the entire installation. Multiple instances can share a channel — the bot routes messages per thread.
1. Create a Slack app with Socket Mode enabled and bot/user token scopes: `app_mentions:read`, `channels:history`, `chat:write`, `reactions:write`, `commands`, `users:read`.
2. Add a slash command `/humr` pointing to your app.
3. Generate an app-level token (`xapp-...`) with the `connections:write` scope.
4. Deploy with both tokens:

   ```
   mise run cluster:install -- \
     --set=apiServer.slackBotToken=xoxb-... \
     --set=apiServer.slackAppToken=xapp-...
   ```

5. In the Humr UI, click the Slack icon on any instance to connect it to a channel. Optionally configure an allowed-users list in instance settings.
- Identity linking — users run `/humr login` in Slack to link their Slack account to Keycloak. Unlinked users are prompted automatically.
- Routing — single-instance channels auto-route. Multi-instance channels show a dropdown to pick the target instance; the choice persists for the thread.
- Access control — per-instance allowed-users list (empty = open to all channel members). Unauthorized users get an ephemeral rejection.
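The per-thread routing rule can be sketched as follows. This is a hedged sketch of the behavior described above, not Humr's actual implementation; `routeMessage`, the maps, and the instance names are all illustrative.

```typescript
// Sketch of per-thread Slack routing: a channel may host several
// instances, the first message in a thread picks one, and that choice
// sticks for the rest of the thread.

type ThreadKey = string; // `${channelId}:${threadTs}`

const channelInstances = new Map<string, string[]>(); // channel -> instance ids
const threadRoutes = new Map<ThreadKey, string>();    // thread -> chosen instance

// Returns the instance to route to, or null when the user must pick
// one (the bot would then show the instance dropdown).
function routeMessage(
  channelId: string,
  threadTs: string,
  picked?: string,
): string | null {
  const key: ThreadKey = `${channelId}:${threadTs}`;

  // The thread already has a target: the choice persists.
  const existing = threadRoutes.get(key);
  if (existing) return existing;

  const instances = channelInstances.get(channelId) ?? [];

  // Single-instance channels auto-route.
  if (instances.length === 1) {
    threadRoutes.set(key, instances[0]);
    return instances[0];
  }

  // Multi-instance channels need an explicit pick from the dropdown.
  if (picked && instances.includes(picked)) {
    threadRoutes.set(key, picked);
    return picked;
  }
  return null;
}
```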
```
mise run check    # lint + type-check
mise run test     # run tests
mise run ui:run   # start UI dev server
```

Humr detects that it is running in a sandbox via the `IS_SANDBOX` environment variable. When set, it skips provisioning the Lima VM and installs k3s directly, avoiding nested virtualization.
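The sandbox check amounts to branching on one environment variable. A minimal sketch, assuming `IS_SANDBOX` is treated as a plain truthy flag (the function and strategy names are illustrative, not Humr's actual provisioning code):

```typescript
// Pick a provisioning strategy based on the IS_SANDBOX env var.
function isSandbox(env: Record<string, string | undefined> = process.env): boolean {
  return Boolean(env.IS_SANDBOX);
}

// In a sandbox: install k3s directly to avoid nested virtualization.
// Otherwise: provision the Lima VM as usual.
function provisionStrategy(env: Record<string, string | undefined> = process.env): string {
  return isSandbox(env) ? "k3s-direct" : "lima-vm";
}
```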
- Controller (Go) — K8s reconciler + cron scheduler
- API Server (TypeScript) — REST API + ACP WebSocket relay + serves UI
- Agent Runtime (TypeScript) — ACP server inside each agent pod
- OneCLI — credential injection proxy, network policy enforcement
- Web UI (React) — instance management, chat, scheduling