```
 /\_/\   Paw
( o.o )  Too lazy to pick one AI. So I use them all.
 > ^ <
```

The multi-provider AI coding agent for the terminal. Use Anthropic, OpenAI Codex, and Ollama simultaneously, with automatic fallback, parallel sub-agents, cross-provider verification, and built-in safety. Not tied to one model, not tied to one provider. Switch with `/model`: no code changes, no lock-in.
| Feature | Description |
|---|---|
| Multi-provider, zero lock-in | Anthropic (Claude), Codex (ChatGPT subscription), and Ollama (local/free), all active at once. Rate-limited on Claude? Paw auto-switches to Codex. Need free local inference? Ollama is always there. No manual intervention. |
| Parallel sub-agents | Spawn independent agents that work in the background while you keep chatting. Each spawned agent inherits your current model and session context. Round-robin across providers or pin to a specific one. |
| Cross-provider verification | One AI writes code, then a different AI reviews it automatically. Catches N+1 queries, race conditions, injection vulnerabilities, and logic errors that single-model tools miss. |
| Agent safety | Every tool call is risk-classified in real time. Destructive commands (`rm -rf`, `mkfs`, `curl\|sh`) are blocked before they execute. High-risk operations auto-checkpoint via `git stash`. |
| Cross-session memory | A PAW.md hierarchy: global instructions, project instructions, personal notes, and auto-learned context. Memory is injected on session start, survives compaction, and persists across sessions. |
| Skills + Hooks | 7 built-in slash commands plus unlimited custom skills with `$ARGUMENTS`, `` !`command` `` injection, and SKILL.md directories. 10 lifecycle hook events with regex matchers, JSON stdin, and exit-code blocking. |
| AI-powered compaction | Conversation too long? Auto-compact summarizes old turns via AI, keeps recent messages intact, and re-injects PAW.md. Use `/compact [focus]` for targeted compression. |
| Smart Router | Just type naturally and Paw auto-detects the best mode from your message. Works in English, Korean, Japanese, and Chinese. Shell commands → `/pipe`, implementation tasks → `/auto`, code review → `/review` skill. |
Disclaimer: Paw is an independent, third-party project. Not affiliated with Anthropic, OpenAI, or any AI provider.
```shell
git clone https://github.com/jhcdev/paw.git
cd paw
npm install
npm link
```

Works on Linux, macOS, and WSL2. Requires Node.js 22+ and at least one provider (Anthropic API key, Codex CLI, or Ollama).
After installation:

```shell
paw                         # Interactive REPL: auto-detect providers and start
paw "explain this project"  # Direct prompt
paw --continue              # Resume last session
paw --session abc123        # Join a specific session
paw --provider codex        # Force a specific provider
paw --help                  # All flags and MCP commands
paw mcp list                # List connected MCP servers
paw --logout                # Remove saved credentials
```

| Provider | Auth | Models | Cost |
|---|---|---|---|
| Anthropic | `ANTHROPIC_API_KEY` | Haiku 4.5, Sonnet 4/4.6, Opus 4/4.6 | Per-token |
| Codex | `codex login` | GPT-5.4, GPT-5.3, o4 Mini, o3 | ChatGPT subscription |
| Ollama | (none) | Any pulled model | Free (local) |
```shell
# Anthropic: set in .env or configure via /settings
ANTHROPIC_API_KEY=sk-ant-api03-...

# Codex: install the CLI and log in
npm install -g @openai/codex && codex login

# Ollama: pull a model and go
ollama pull qwen3
```

Coming soon: Gemini, Groq, OpenRouter.
A single provider handles all messages. Switch models anytime with `/model`.
5 agents collaborate on every message:
| Role | Job | Execution |
|---|---|---|
| Planner | Architecture & plan | Sequential |
| Coder | Implementation | Sequential |
| Reviewer | Bugs, security, correctness | Parallel |
| Tester | Test case generation | Parallel |
| Optimizer | Performance improvements | Sequential |
Roles are auto-assigned by efficiency scores and adapt from real usage after 3+ runs. Review → rework loop: a MAJOR finding triggers recode → re-review, up to 3 times.
Self-driving agent: analyze → plan → execute → verify → fix, until done.

```
/auto add input validation to all API endpoints

→ Analyzing project...
→ Creating plan...
→ Executing step 1/10...
→ Verifying...
✗ Build error found
→ Fixing errors...
✓ All checks passed
✓ COMPLETED (32.4s)
```
Spawn independent agents that work in parallel, even while the main AI is thinking.

```
you › explain the architecture     (main AI starts working)
you › /spawn add tests for auth    (runs immediately in background)
you › /spawn update README         (another agent, same or different provider)
you › /tasks                       (check progress anytime)
```

- Uses your current `/model` selection (follows changes automatically)
- Receives session context (last 10 entries), so it understands what you're working on
- Completed results are auto-injected into your next turn
- Interactive panel (`/spawn`) or inline (`/spawn codex/gpt-5.4 fix lint`)
```
/pipe npm test           # AI analyzes test failures
/pipe fix npm run build  # AI fixes errors, re-runs until clean (max 5 attempts)
/pipe watch npm start    # AI monitors startup output
```
Just type naturally and Paw picks the best mode:

| You type | Routed to |
|---|---|
| `npm test` | `/pipe` |
| `implement JWT auth` | `/auto` |
| `review this code` | `/review` skill |
| `이 코드 리뷰해줘` (Korean: "review this code") | `/review` skill |
| `모든 에러 수정해줘` (Korean: "fix all the errors") | `/auto` |

Supported languages: English, Korean, Japanese, Chinese.
One AI generates the code, then a different AI reviews it. Choose the reviewer via the ↑↓ panel.

```
---
Verification (by codex/gpt-5.4):
Confidence: 85/100
warning: src/auth.ts – Potential SQL injection
info: src/routes.ts – Consider rate limiting
---
```
| Level | Examples | Action |
|---|---|---|
| Low | `read_file`, `search_text`, `glob` | Execute immediately |
| Medium | `write_file`, `edit_file`, `npm run build` | Execute immediately |
| High | `rm`, `git reset`, `terraform destroy` | Blocked + git checkpoint |
| Critical | `rm -rf /`, `mkfs`, `curl\|sh` | Permanently blocked |
25+ dangerous patterns blocked. Symlink traversal protection. SSRF blocked. Shell injection prevented. MCP env allowlist.
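The table above is essentially a pattern-matching problem where the most dangerous patterns must win. The sketch below is purely illustrative: the `classify` function and its patterns are invented for this example and are not Paw's actual engine, which matches 25+ patterns.

```shell
# Illustrative risk classifier: in a `case` statement the first matching
# arm wins, so the critical patterns are listed before the high ones.
classify() {
  case "$1" in
    *"rm -rf /"*|*mkfs*|*"curl "*"| sh"*)      echo critical ;;  # permanently blocked
    "rm "*|"git reset"*|*"terraform destroy"*) echo high ;;      # blocked + git checkpoint
    "npm run build"*)                          echo medium ;;    # executes immediately
    *)                                         echo low ;;       # executes immediately
  esac
}
```

Note the ordering matters: `rm -rf /tmp` matches both the critical and the high arm, and is classified `critical` only because that arm comes first.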
Cross-session memory via the PAW.md hierarchy:

| File | Scope | Shared |
|---|---|---|
| `~/.paw/PAW.md` | All projects | No |
| `./PAW.md` or `.paw/PAW.md` | This project | Yes (commit to repo) |
| `./PAW.local.md` | This project, personal | No (git-ignored) |
| `~/.paw/memory/` | Auto-learned context | No (auto-managed) |

Memory is injected into the first prompt of each session and survives `/compact`.
```
/memory           # View loaded sources
/remember <note>  # Save a note across sessions
/compact [focus]  # AI-powered conversation compression
/export           # Export full context as markdown
```
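To make the hierarchy concrete, here is a hedged example of what a project-level `PAW.md` might contain. The format is free-form markdown instructions; the specific contents below are invented for illustration:

```markdown
# Project instructions

- This is a TypeScript monorepo; run `npm test` before proposing a commit.
- Prefer small, focused diffs; never rewrite whole files.
- API handlers live in src/routes/, shared helpers in src/lib/.
```

A `PAW.local.md` alongside it would hold the same kind of notes, but stays personal and git-ignored.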
7 built-in + unlimited custom. `$ARGUMENTS`, `` !`command` `` injection, SKILL.md directories.

| Built-in | Description |
|---|---|
| `/review` | Bugs, security, best practices |
| `/refactor` | Refactoring improvements |
| `/test` | Generate test cases |
| `/explain` | Explain code in detail |
| `/optimize` | Performance optimization |
| `/document` | Generate documentation |
| `/commit` | Conventional commit from diff |
Custom skill, `.paw/skills/deploy.md`:

```markdown
---
name: deploy
description: Deploy the application
argument-hint: [environment]
---
Deploy $ARGUMENTS to production.
Current branch: !`git branch --show-current`
```

Directory-based: `.paw/skills/analyze/SKILL.md` with supporting files and scripts.
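To make the substitution rules concrete: invoking the skill above as `/deploy staging` from a checkout on branch `main` (an assumed branch name, for illustration) should expand the prompt to roughly:

```
Deploy staging to production.
Current branch: main
```

`$ARGUMENTS` receives everything typed after the skill name, and each `` !`command` `` is replaced by that shell command's output before the prompt is sent to the model.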
10 lifecycle events. Regex matchers. JSON stdin. Exit 2 = block.

| Event | When | Can block |
|---|---|---|
| `pre-turn` | Before sending to the model | No |
| `post-turn` | After the model responds | No |
| `pre-tool` | Before tool execution | Yes |
| `post-tool` | After a tool succeeds | No |
| `post-tool-failure` | After a tool fails | No |
| `on-error` | On any error | No |
| `session-start` | REPL starts | No |
| `session-end` | REPL ends | No |
| `stop` | AI finishes responding | Yes |
| `notification` | Notification sent | No |
Markdown hook, `.paw/hooks/lint.md`:

```markdown
---
event: post-tool
command: npm run lint --silent
name: auto-lint
---
```

JSON hook, `.paw/settings.json`:

```json
{
  "hooks": {
    "post-tool": [{
      "matcher": "edit_file|write_file",
      "hooks": [{ "type": "command", "command": "npx prettier --write $(jq -r '.tool_input.path')" }]
    }]
  }
}
```

Exit 0 = proceed (stdout → AI context). Exit 2 = block (stderr → AI feedback). Env: `PAW_EVENT`, `PAW_CWD`, `PAW_TOOL_NAME`.
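A minimal sketch of a blocking hook body, using only the contract documented above (JSON payload on stdin, exit 2 blocks, stderr goes back to the AI). The `check_tool_call` helper is invented for illustration and written as a function so it can be exercised inline; a real hook script would read the payload with `payload=$(cat)` and use `exit` instead of `return`:

```shell
# Hypothetical pre-tool hook logic: return 2 (block) for destructive
# shell commands, 0 (proceed) otherwise.
check_tool_call() {
  payload="$1"  # the JSON a real hook would receive on stdin
  case "$payload" in
    *'rm -rf'*)
      echo "Blocked: destructive command detected" >&2  # stderr -> AI feedback
      return 2
      ;;
  esac
  return 0  # proceed; stdout would be added to AI context
}
```

The crude substring match is only for demonstration; in practice the `matcher` regex in `settings.json` narrows which tools reach the hook at all.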
`list_files` · `read_file` · `write_file` · `edit_file` · `search_text` · `run_shell` · `glob` · `web_fetch`
```shell
paw mcp add --transport http github https://api.github.com/mcp
paw mcp add --transport stdio memory -- npx -y @modelcontextprotocol/server-memory
paw mcp list
paw mcp remove github
```

Interactive manager via `/mcp`. Supports stdio, HTTP, and SSE transports. Tools are auto-injected into all providers.
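The exact `.mcp.json` schema isn't shown here; as a sketch, assuming the `mcpServers` layout common across MCP clients, the two servers added above might persist as something like:

```json
{
  "mcpServers": {
    "github": { "transport": "http", "url": "https://api.github.com/mcp" },
    "memory": { "transport": "stdio", "command": "npx", "args": ["-y", "@modelcontextprotocol/server-memory"] }
  }
}
```

Treat the field names as assumptions; `paw mcp list` shows what was actually recorded.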
| Command | Description |
|---|---|
| `/help` | All commands |
| `/status` | Providers, usage, cost |
| `/settings` | Provider management (↑↓) |
| `/model` | Model catalog & switch (↑↓) |
| `/team` | Team dashboard (↑↓) |
| `/spawn` | Spawn a parallel sub-agent (↑↓) |
| `/tasks` | Spawned-agent status/results |
| `/auto <task>` | Autonomous agent mode |
| `/pipe <cmd>` | Shell output → AI |
| `/verify` | Cross-provider verification (↑↓) |
| `/safety` | Safety guards |
| `/memory` | View loaded memory |
| `/remember <note>` | Save a note across sessions |
| `/export` | Export full context as markdown |
| `/compact [focus]` | AI-powered conversation compression |
| `/skills` | List all skills |
| `/hooks` | List configured hooks |
| `/ask <provider> <prompt>` | Query a specific provider |
| `/tools` | Built-in + MCP tools |
| `/mcp` | MCP server manager (↑↓) |
| `/git` | Status + diff + log |
| `/sessions` | List past sessions |
| `/history` | Export chat to markdown |
| `/init` | Generate CONTEXT.md |
| `/doctor` | Diagnostics |
| `/clear` | Reset conversation |
| `/exit` | Quit |
Keyboard: ↑↓ navigate · Enter select · Tab autocomplete · Esc back · Ctrl+C interrupt · Ctrl+L clear · Ctrl+K compact
| File | Purpose |
|---|---|
| `~/.paw/credentials.json` | API keys (mode 0600) |
| `~/.paw/sessions/*.json` | Session history |
| `~/.paw/team-scores.json` | Team performance scores |
| `~/.paw/PAW.md` | Global instructions |
| `~/.paw/memory/` | Auto-learned memory |
| `~/.paw/skills/*.md` | User-wide custom skills |
| `~/.paw/hooks/*.md` | User-wide hooks |
| `PAW.md` | Project instructions |
| `PAW.local.md` | Personal project notes |
| `.paw/skills/*.md` | Project skills |
| `.paw/hooks/*.md` | Project hooks |
| `.paw/settings.json` | Project settings |
| `.mcp.json` | MCP server config |
```shell
git clone https://github.com/jhcdev/paw.git
cd paw
npm install
npm test       # 263 tests
npm run build  # TypeScript → dist/
npm link       # Install the 'paw' command globally
```

MIT licensed. See LICENSE.