```bash
# Clone the repository
git clone https://github.com/dollspace-gay/openclaudia.git
cd openclaudia

# Build release version (includes browser/web search support by default)
cargo build --release

# Build without browser feature (lighter binary, no headless Chrome)
cargo build --release --no-default-features

# The binary is at target/release/openclaudia
```
## Quick Start
```bash
# Set your API key (choose your provider)
export ANTHROPIC_API_KEY="your-key-here"
# or: export OPENAI_API_KEY="your-key-here"
# or: export GOOGLE_API_KEY="your-key-here"
# or: export DEEPSEEK_API_KEY="your-key-here"

# Initialize configuration in your project
openclaudia init

# Start chatting (uses default provider from config)
openclaudia

# Use a specific model (provider auto-detected from model name)
openclaudia -m gemini-2.5-flash
openclaudia -m gpt-4o
openclaudia -m claude-sonnet-4-20250514

# Start with a behavioral mode
openclaudia --mode create   # Autonomous architect — build from scratch
openclaudia --mode safe    # Collaborative minimal — surgical precision
openclaudia --mode debug   # Investigation-first debugging
```
## Configuration
### Environment Variables
| Variable | Provider | Required |
|----------|----------|----------|
| `ANTHROPIC_API_KEY` | Anthropic (Claude) | For Anthropic |
| `OPENAI_API_KEY` | OpenAI (GPT) | For OpenAI |
| `GOOGLE_API_KEY` | Google (Gemini) | For Google |
| `DEEPSEEK_API_KEY` | DeepSeek | For DeepSeek |
| `QWEN_API_KEY` | Qwen/Alibaba | For Qwen |
| `ZAI_API_KEY` | Z.AI (GLM) | For Z.AI |
| `TAVILY_API_KEY` | Web search | Optional |
| `BRAVE_API_KEY` | Web search (alternative) | Optional |
### Config File
Configuration is stored in `.openclaudia/config.yaml`:
```yaml
proxy:
  port: 8080
  host: "127.0.0.1"
  target: anthropic  # Provider: anthropic, openai, google, deepseek, qwen, zai, ollama, local

providers:
  anthropic:
    base_url: https://api.anthropic.com
  openai:
    base_url: https://api.openai.com
  google:
    base_url: https://generativelanguage.googleapis.com
  deepseek:
    base_url: https://api.deepseek.com
  # Ollama for local LLM inference
  ollama:
    base_url: http://localhost:11434
  # Any OpenAI-compatible local server (LM Studio, LocalAI, etc.)
  local:
    base_url: http://localhost:1234/v1

# Thinking/reasoning mode configuration
thinking:
  enabled: false
  budget_tokens: 10000        # Anthropic, Google Gemini 2.5
  reasoning_effort: "medium"  # OpenAI o1/o3: low, medium, high

session:
  timeout_minutes: 30
  persist_path: .openclaudia/session
  max_turns: 25  # 0 = unlimited agentic loop iterations

# Verification-Driven Development (VDD) - Adversarial code review
# vdd:
#   enabled: true
#   mode: advisory  # advisory (single pass) or blocking (loop until clean)
#   adversary:
#     provider: google  # Must differ from proxy.target
#     model: gemini-2.5-flash

# Granular tool permissions
# permissions:
#   denied_tools: ["bash"]
#   denied_commands: ["rm -rf /"]

# Customize keybindings
keybindings:
  ctrl-x n: new_session
  ctrl-x x: export
  tab: toggle_mode
  escape: cancel
```
## CLI Commands
```bash
openclaudia                    # Start interactive chat (default)
openclaudia -m <model>         # Use specific model (auto-detects provider)
openclaudia -v                 # Verbose logging
openclaudia --resume           # Resume last session
openclaudia --session-id <id>  # Resume specific session
openclaudia --coordinator      # Multi-agent coordinator mode
openclaudia --tui-mode         # Full-screen TUI (experimental)
openclaudia --mode <preset>    # Start with a behavioral mode preset

openclaudia init               # Initialize config in current directory
openclaudia init --force       # Overwrite existing config

openclaudia auth               # Authenticate with Claude Max (OAuth)
openclaudia auth --status      # Check auth status
openclaudia auth --logout      # Clear stored credentials

openclaudia start              # Start as proxy server
openclaudia start -p 9090      # Custom port
openclaudia start -t openai    # Target specific provider

openclaudia acp                # Start ACP server on stdin/stdout
openclaudia acp -m <model>     # ACP with specific model

openclaudia loop               # Start iteration mode with Stop hooks
openclaudia loop -m 10         # Max 10 iterations

openclaudia config             # Show current configuration
openclaudia doctor             # Check connectivity and API keys
```
- No file modifications, explain what you would do instead
- `context-pacing` — Pace work to context limits with clean pause points
### Usage
```bash
# CLI flag
openclaudia --mode create
openclaudia --mode safe

# In-session switching
/mode                        # Show current mode and list presets
/mode create                 # Switch to create preset
/mode create +bold           # Create preset with bold modifier
/mode debug +context-pacing  # Debug with pacing
/mode safe +bold +readonly   # Stack multiple modifiers
```
The mode system integrates with Anthropic's prompt caching: behavioral axes and modifiers are part of the stable prompt prefix (cached across turns), while hooks, memory, and environment info are in the dynamic suffix (reprocessed each turn). Mode switches naturally invalidate the prefix cache.
## Verification-Driven Development (VDD)
OpenClaudia includes a built-in adversarial code review system. When enabled, a separate AI model (the "adversary") reviews every response for bugs, security vulnerabilities, and logic errors.
```yaml
vdd:
  enabled: true
  mode: advisory  # Single-pass review, findings injected as context
  adversary:
    provider: google  # Use a different provider than your builder
    model: gemini-2.5-flash
  static_analysis:
    auto_detect: true  # Automatically runs cargo clippy, cargo test, etc.
```
Two modes:

- **Advisory** — Single adversary pass after each response. Findings are displayed and injected into context for the next turn.
- **Blocking** — Full adversarial loop. The builder must revise until the adversary's remaining findings converge to false positives (the confabulation threshold).
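Escalating from advisory to blocking review is a one-key change in the VDD section of the config. The following is a sketch that reuses the keys shown in this README; it is illustrative, not confirmed schema:

```yaml
vdd:
  enabled: true
  mode: blocking  # loop until remaining findings are judged false positives
  adversary:
    provider: google  # must differ from proxy.target
    model: gemini-2.5-flash
```

Blocking mode costs extra adversary calls per turn, so advisory is the lighter default.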
Findings include CWE classifications, severity levels (CRITICAL/HIGH/MEDIUM/LOW/INFO), and can automatically create Chainlink issues for tracking.
## Hooks
Configure hooks in `.openclaudia/config.yaml` to run scripts at key moments:
- `pre_tool_use` — Before executing a tool (with a matcher for specific tools)
- `post_tool_use` — After executing a tool
- `stop` — For iteration/loop mode control
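The three hook points above might map onto config like the following sketch. The `matcher` and `command` key names and the script paths are assumptions inferred from the descriptions, not confirmed schema:

```yaml
# Hypothetical hooks section — matcher/command keys and paths are assumed
hooks:
  pre_tool_use:
    - matcher: "bash"                # fire only before the bash tool
      command: "./scripts/guard.sh"
  post_tool_use:
    - command: "./scripts/format.sh"
  stop:
    - command: "./scripts/should_continue.sh"  # consulted by iteration/loop mode
```

Check `openclaudia config` output against your own config to confirm the exact key names your version accepts.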
## Auto-Learning Memory
OpenClaudia automatically learns from your coding sessions without any flags or model intervention. A SQLite database (`.openclaudia/memory.db`) captures knowledge from tool execution signals:
- **Coding Patterns** — Conventions, pitfalls, and architecture observed from lint output and edit failures
- **Error Resolutions** — Errors encountered and how they were fixed, matched automatically when subsequent commands succeed
- **File Relationships** — Files frequently edited together (co-edit tracking), surfaced when you touch related code
- **User Preferences** — Style and workflow preferences detected from corrections ("no, use tabs") and explicit statements ("always use snake_case")
- **Session Continuity** — Recent session summaries and activity logs for context across restarts
Knowledge is injected into the model's context automatically — file-specific patterns when you read or edit a file, and preferences in every system prompt. Use the `/memory` commands to inspect what has been learned.