Drop Bansho in front of any MCP server to get API-key auth, role-scoped tool access, rate limiting, and a full audit log with zero upstream changes.
Named after the historical Japanese Bansho (番所), the guardhouses and security checkpoints of the Edo period, this project serves as a modern security checkpoint for the Model Context Protocol.
Bansho sits in front of any MCP server and adds API-key authentication, role-based tool authorization, Redis rate limiting, and PostgreSQL audit logging, all without touching a line of upstream code.
Built for Azure: deploys with Azure Cache for Redis and Azure Database for PostgreSQL. See infra/README.md for deployment instructions.
MCP Client (Claude Code, OpenCode, Claude Desktop, Pi, Cursor, etc.)
│ JSON-RPC over stdio (bansho serve)
│ API key in metadata headers / X-API-Key
▼
┌──────────────────────────────────────────────────────────────┐
│ Bansho Gateway │
│ │
│ 1. Auth - resolve API key → role (Postgres api_keys) │
│ 2. AuthZ - check role against YAML tool allow-list │
│ 3. Rate limit - fixed-window counter (Redis) │
│ 4. Audit - persist event to Postgres audit_events │
│ 5. Forward - pass allowed request to upstream │
└──────────────────────────────────────────────────────────────┘
│ stdio or HTTP/SSE upstream
▼
Upstream MCP Server (any MCP server, unchanged)
What each store does:
| Store | Purpose |
|---|---|
| PostgreSQL | api_keys table (hashed key + role), audit_events table |
| Redis | Fixed-window rate-limit counters (per key + per tool, with TTL) |
- MCP passthrough proxy - stdio and HTTP upstream transports; protocol-transparent
- API key authentication - PBKDF2-hashed keys stored in Postgres; extracted from Authorization: Bearer, X-API-Key, or ?api_key=
- Role-based tool authorization - YAML policy maps roles to allowed tools; tools/list visibility is filtered per-caller
- Redis fixed-window rate limiting - separate per-key and per-tool quotas; per-tool overrides supported
- PostgreSQL audit log - every tool call persisted with timestamp, key ID, role, method, status, latency, and full decision payload
- Audit dashboard - dense cockpit-style HTML UI with sorting, row expansion, export, keyboard shortcuts, and a JSON API (GET /api/events)
- Fail-closed - policy load failure, missing key, unknown role, and exceeded limits all deny by default
- Zero upstream changes - wrap any existing MCP server without modifying it
Prerequisites: Go 1.21+, Docker
# 1. Clone and configure
git clone https://github.com/Microck/bansho.git
cd bansho
cp .env.example .env
# 2. Start Redis and Postgres
docker compose up -d redis postgres
# 3. Build
mkdir -p bin
go build -o ./bin/bansho ./cmd/bansho
go build -o ./bin/vulnerable-server ./cmd/vulnerable-server
# 4. Create an API key
./bin/bansho keys create --role admin
# → api_key_id: <uuid>
# → api_key: bsk_...
# 5. Start the gateway (stdio mode, upstream is the demo server)
export UPSTREAM_TRANSPORT=stdio
export UPSTREAM_CMD="./bin/vulnerable-server"
./bin/bansho serve

Bansho logs startup metadata to stderr:
bansho_proxy_start listen_addr=127.0.0.1:9000 upstream_transport=stdio upstream_target=./bin/vulnerable-server policy_path=config/policies.yaml
Deploy to Azure using Azure Cache for Redis and Azure Database for PostgreSQL:
# Deploy infrastructure
az group create --name bansho-rg --location eastus
az deployment group create \
--resource-group bansho-rg \
--template-file infra/main.bicep \
--parameters environmentName=prod
# Build and push container
docker build -t ghcr.io/microck/bansho:latest .
docker push ghcr.io/microck/bansho:latest

See infra/README.md for complete deployment instructions.
git clone https://github.com/Microck/bansho.git
cd bansho
go build -o ./bin/bansho ./cmd/bansho

No pre-built binaries are published yet. Build from source with Go 1.21+.
docker compose up -d redis postgres

The docker-compose.yml starts:

- redis:7-alpine on 127.0.0.1:6379
- postgres:16-alpine on 127.0.0.1:5433 (user/pass/db: bansho)
Set UPSTREAM_TRANSPORT=stdio and UPSTREAM_CMD to the command that launches your upstream MCP server:
export UPSTREAM_TRANSPORT=stdio
export UPSTREAM_CMD="python -m my_mcp_server --some-flag"
./bin/bansho serve

Bansho spawns the command as a subprocess and communicates over stdin/stdout.
Set UPSTREAM_TRANSPORT=http and UPSTREAM_URL to the upstream MCP endpoint:
export UPSTREAM_TRANSPORT=http
export UPSTREAM_URL=http://127.0.0.1:8080/mcp
./bin/bansho serve

MCP clients send the key via request metadata. Bansho looks for it in three locations (in priority order):
| Location | Example |
|---|---|
| Authorization header | Authorization: Bearer bsk_abc123 |
| X-API-Key header | X-API-Key: bsk_abc123 |
| Query string | ?api_key=bsk_abc123 |
Starts the MCP gateway. All configuration is via environment variables (see Configuration).
bansho serve
Starts the audit dashboard HTTP server on DASHBOARD_HOST:DASHBOARD_PORT (default 127.0.0.1:9100).
bansho dashboard
Requires an admin API key to access (X-API-Key or Authorization: Bearer).
Creates a new API key and prints the plaintext value (only shown once).
bansho keys create [--role <role>]
| Flag | Default | Description |
|---|---|---|
| --role | readonly | Role to assign (admin, user, readonly, or any role in your policy) |
Example:
./bin/bansho keys create --role admin
# api_key_id: 3f2b1c0a-...
# api_key: bsk_a1b2c3d4...

Lists all API keys (ID, role, revoked status). The plaintext key is never shown again after creation.
bansho keys list
Output:
api_key_id role revoked
3f2b1c0a-... admin no
7a9e4d1f-... readonly no
Revokes an API key by ID. Revoked keys are immediately rejected on all subsequent requests.
bansho keys revoke <api_key_id>
Example:
./bin/bansho keys revoke 3f2b1c0a-0000-0000-0000-000000000000
# Revoked API key: 3f2b1c0a-0000-0000-0000-000000000000

All settings are read from environment variables. Copy .env.example to .env for local development:

cp .env.example .env

| Variable | Default | Description |
|---|---|---|
| BANSHO_LISTEN_HOST | 127.0.0.1 | Bind host for the MCP gateway |
| BANSHO_LISTEN_PORT | 9000 | Bind port for the MCP gateway |
| DASHBOARD_HOST | 127.0.0.1 | Bind host for the dashboard server |
| DASHBOARD_PORT | 9100 | Bind port for the dashboard server |
| UPSTREAM_TRANSPORT | stdio | Upstream transport: stdio or http |
| UPSTREAM_CMD | (empty) | Command to spawn (required when UPSTREAM_TRANSPORT=stdio) |
| UPSTREAM_URL | (empty) | Upstream HTTP endpoint (required when UPSTREAM_TRANSPORT=http) |
| POSTGRES_DSN | postgresql://bansho:bansho@127.0.0.1:5433/bansho | PostgreSQL connection string |
| REDIS_URL | redis://127.0.0.1:6379/0 | Redis connection URL |
| BANSHO_POLICY_PATH | config/policies.yaml | Path to the YAML policy file |
config/policies.yaml controls which tools each role may call and the rate-limit quotas:
roles:
admin:
allow:
- "*" # wildcard: all tools allowed
user:
allow:
- public.echo # only this tool
readonly:
allow:
- public.echo
rate_limits:
per_api_key:
requests: 120 # max 120 requests per key per window
window_seconds: 60
per_tool:
default:
requests: 30 # default per-tool quota
window_seconds: 60
overrides:
public.echo: # tighter quota for this specific tool
requests: 10
        window_seconds: 60

Behavior:

- Unknown or missing roles are denied by default.
- allow: ["*"] grants wildcard access to all tools for that role.
- tools/list responses are filtered - callers only see tools their role allows.
- Per-tool overrides in rate_limits.per_tool.overrides take precedence over default.
- If the policy file fails to load, Bansho fails closed at startup.
- Override the policy path at runtime: BANSHO_POLICY_PATH=/path/to/custom.yaml
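The allow-list semantics described above (explicit tool names, a "*" wildcard, and fail-closed on unknown roles) can be sketched in a few lines of Go. This is an illustrative model of the documented behavior, not Bansho's policy package:

```go
package main

import "fmt"

// toolAllowed mirrors the documented policy semantics: a role's allow-list
// either names tools explicitly or grants everything via "*".
// Roles absent from the policy are denied (fail closed).
func toolAllowed(policy map[string][]string, role, tool string) bool {
	allow, ok := policy[role]
	if !ok {
		return false // unknown role: fail closed
	}
	for _, entry := range allow {
		if entry == "*" || entry == tool {
			return true
		}
	}
	return false
}

func main() {
	policy := map[string][]string{
		"admin":    {"*"},
		"readonly": {"public.echo"},
	}
	fmt.Println(toolAllowed(policy, "admin", "delete_customer"))    // true: wildcard
	fmt.Println(toolAllowed(policy, "readonly", "delete_customer")) // false: not listed
	fmt.Println(toolAllowed(policy, "ghost", "public.echo"))        // false: unknown role
}
```

The same predicate also drives tools/list filtering: a caller's listing only includes tools for which this check returns true.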
The repo includes an intentionally vulnerable MCP server to demonstrate the value of the gateway.
Before state: the vulnerable server (cmd/vulnerable-server) exposes list_customers and delete_customer with zero authentication. Any client can call any tool.
After state: Bansho intercepts every call. Clients without a valid API key receive 401. Clients calling a tool outside their role receive 403. Clients exceeding the rate limit receive 429. Valid callers get 200.
bash demo/run_before_after.sh

The script:

- Starts Redis + Postgres via Docker Compose and waits for health checks
- Builds all Go binaries (bansho, vulnerable-server, demo-attack, demo-after)
- Runs the before-state attack - confirms the vulnerable server allows unauthorized calls
- Creates readonly and admin API keys
- Runs the after-state checks through Bansho - asserts 401, 403, 429, and 200 outcomes
- Verifies audit rows increased in Postgres
- Starts the dashboard and confirms the events API returns audit evidence
Expected final lines:
==> Demo complete
Success: before/after demo ran with deterministic 401/403/429/200 + audit evidence.
Start the audit dashboard:
./bin/bansho dashboard
# Listening on http://127.0.0.1:9100

All endpoints require an admin API key (via X-API-Key, Authorization: Bearer, or ?api_key=).
A dense, cockpit-style HTML dashboard for real-time audit monitoring. Server-rendered Go templates with vanilla JS (no framework, no build step). Supports dark and light themes.
Dashboard features:
| Feature | Description |
|---|---|
| Dark / light theme | Toggle in the header; persisted in localStorage |
| Inline KPI counters | Events, OK, Denied, and Rate-limited counts in the header bar |
| Filter bar | Filter by API key ID, tool name, and result limit |
| Row-level status coloring | Rows tinted green/red/yellow by status code, with a 3px left accent border. Togglable |
| Column sorting | Click any column header to sort ascending/descending. Numeric sort for status and latency |
| Row expansion | Click a row to expand an inline detail pane showing pretty-printed Decision, Request, and Response JSON in a 3-column grid |
| Auto-refresh | Interval selector (off / 5s / 10s / 30s / 60s); persisted in localStorage |
| CSV and JSON export | Download the current filtered view as bansho-events.csv or bansho-events.json |
| Column visibility | Gear menu to show/hide individual columns; persisted |
| Density toggle | Switch between compact (3px rows) and comfortable (5px rows) padding |
| Resizable columns | Drag column borders to resize |
| Keyboard shortcuts | ? help overlay, / focus search, j/k row navigation, Enter expand row, Esc close, r refresh, d toggle theme |
GET /api/events returns the JSON audit feed.
Query parameters:
| Parameter | Default | Description |
|---|---|---|
| limit | 50 | Number of events to return (max 200) |
| api_key_id | (all) | Filter by key UUID |
| tool_name | (all) | Filter by tool name |
Example:
curl -H "X-API-Key: bsk_yourAdminKey" \
  "http://127.0.0.1:9100/api/events?limit=5"

Response:
{
"count": 2,
"filters": { "api_key_id": null, "tool_name": null, "limit": 5 },
"events": [
{
"ts": "2026-07-15T10:23:44Z",
"api_key_id": "3f2b1c0a-...",
"role": "readonly",
"method": "tools/call",
"tool_name": "delete_customer",
"status": 403,
"latency_ms": 2,
"decision": {
"auth": { "allowed": true, "api_key_id": "3f2b1c0a-...", "role": "readonly" },
"authz": { "allowed": false, "role": "readonly", "reason": "tool_not_allowed_for_role" },
"rate": { "allowed": false, "reason": "not_evaluated" }
},
"request_json": { "method": "tools/call", "params": { "name": "delete_customer" } },
"response_json": { "error": { "code": 403, "message": "Forbidden" } }
}
]
}

Each event's decision object records the outcome of every pipeline stage (auth, authz, rate). The request_json and response_json fields contain the original MCP request and the upstream response.
graph TD
Client["MCP Client<br>(Claude Code, OpenCode, Claude Desktop, Pi, Cursor, etc.)"]
Gateway["Bansho Gateway<br>(bansho serve)"]
Auth["Auth middleware<br>api_keys table lookup<br>PBKDF2 verify"]
AuthZ["AuthZ middleware<br>YAML role policy<br>tools/list filter"]
RL["Rate limiter<br>Redis fixed-window<br>per-key + per-tool"]
Audit["Audit logger<br>audit_events table<br>status + decision JSON"]
Upstream["Upstream MCP Server<br>(stdio or HTTP)"]
PG[("PostgreSQL<br>api_keys<br>audit_events")]
Redis[("Redis<br>rate-limit counters")]
Client -->|"JSON-RPC + API key"| Gateway
Gateway --> Auth
Auth -->|"resolved role"| AuthZ
AuthZ -->|"tool allowed"| RL
RL -->|"within quota"| Audit
Audit -->|"forwarded request"| Upstream
Auth --- PG
Audit --- PG
RL --- Redis
Every tools/call request passes through four stages in sequence:
- Auth - extract API key from request metadata; verify against Postgres api_keys; reject with 401 on failure
- AuthZ - check the resolved role against the YAML policy allow-list for the requested tool; reject with 403 on mismatch
- Rate limit - increment per-key and per-tool Redis counters; reject with 429 when a window is exceeded
- Forward - pass the request to the upstream MCP server and return its response
An audit event is emitted for every call regardless of outcome, capturing the full decision payload for each stage.
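The stage ordering matters: a request rejected at one stage never reaches the later ones, which is why the earlier 403 audit example shows the rate stage as "not_evaluated". A minimal Go sketch of that control flow (boolean inputs stand in for the real per-stage checks; this is not Bansho's actual pipeline code):

```go
package main

import "fmt"

// decide runs the four documented stages in order and returns the first
// failing status, or 200 when every stage passes and the request is forwarded.
func decide(authed, toolAllowed, withinQuota bool) int {
	switch {
	case !authed:
		return 401 // Auth: invalid or missing API key
	case !toolAllowed:
		return 403 // AuthZ: tool outside the role's allow-list
	case !withinQuota:
		return 429 // Rate limit: fixed window exceeded
	default:
		return 200 // Forward to the upstream MCP server
	}
}

func main() {
	fmt.Println(decide(false, true, true)) // 401
	fmt.Println(decide(true, false, true)) // 403
	fmt.Println(decide(true, true, false)) // 429
	fmt.Println(decide(true, true, true))  // 200
}
```

These four status codes are exactly the outcomes the before/after demo asserts.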
bansho/
├── cmd/
│ ├── bansho/ # Main binary (serve, dashboard, keys)
│ ├── vulnerable-server/ # Demo insecure MCP server (no auth)
│ ├── demo-attack/ # Simulates unauthorized access against vulnerable server
│ └── demo-after/ # Asserts 401/403/429/200 outcomes through Bansho
├── internal/
│ ├── auth/ # API key creation, resolution, hashing (PBKDF2), revocation
│ ├── proxy/ # MCP gateway: security pipeline + upstream client
│ ├── policy/ # YAML policy loader and role/tool allow-list evaluator
│ ├── ratelimit/ # Redis fixed-window rate limiter
│ ├── storage/ # Postgres pool, Redis client, schema migrations
│ ├── audit/ # Audit event model, logger, query
│ ├── ui/ # Dashboard HTTP server, HTML template, and embedded SVG logos
│ └── config/ # Environment-variable based settings loader
├── config/
│ └── policies.yaml # Default policy (admin: *, others: empty)
├── demo/
│ ├── policies_demo.yaml # Demo policy (readonly: list_customers, tight rate limits)
│ ├── run_before_after.sh
│ └── README.md
├── docs/
│ ├── architecture.md
│ ├── policies.md
│ └── brand/ # Logo SVGs
├── docker-compose.yml # Redis + Postgres for local dev
└── .env.example # Environment variable template
# Start dependencies
docker compose up -d redis postgres
# Build and run the gateway (reload by re-running after code changes)
go build -o ./bin/bansho ./cmd/bansho && \
UPSTREAM_TRANSPORT=stdio \
UPSTREAM_CMD="./bin/vulnerable-server" \
./bin/bansho serve

Fast rebuild:

go build -o ./bin/bansho ./cmd/bansho

Edit config/policies.yaml and restart bansho serve. Policy is loaded once at startup; changes require a restart.
To test a different policy without touching the default:
BANSHO_POLICY_PATH=demo/policies_demo.yaml ./bin/bansho serve

storage.EnsureSchema is idempotent. Re-running bansho serve or bansho keys create against a fresh Postgres instance creates the api_keys and audit_events tables automatically.
go test ./...

Integration tests require Redis and Postgres to be running:
docker compose up -d redis postgres
go test ./...

The end-to-end demo also serves as a functional smoke test:
bash demo/run_before_after.sh

Issues and pull requests are welcome.
- Fork the repository
- Create a feature branch (git switch -c feat/my-change)
- Make your changes and add tests
- Open a pull request against main
Please keep pull requests focused. Security-sensitive changes (auth, key hashing, policy evaluation) require extra care. Include a clear description of the threat model impact.
- API keys are hashed with PBKDF2-SHA256 before storage. The plaintext is shown once at creation and never stored.
- Bansho fails closed: missing policy file, unreachable Postgres/Redis, or unknown role all result in denial.
- The cmd/vulnerable-server and cmd/demo-attack binaries are intentionally insecure demos. Do not expose the vulnerable server outside a local, controlled environment.
- To report a security issue, open a GitHub issue with the security label or email the maintainer directly.
Apache-2.0. See LICENSE.
Built at the Microsoft AI Dev Days Hackathon 2026.
