# Adapters
OpenSwarm uses a pluggable adapter system to support multiple AI providers. Each adapter implements the same interface, allowing runtime switching between providers.
```yaml
adapter: claude  # "claude" | "codex" | "gpt" | "local"
```

## Claude Adapter

Backend: Claude Code CLI (`claude -p`)
Models: `claude-sonnet-4`, `claude-haiku-4.5`, `claude-opus-4`
Auth: Claude Code CLI authentication
```shell
# Install and authenticate
npm i -g @anthropic-ai/claude-code
claude auth
```

Usage:
```yaml
adapter: claude
autonomous:
  defaultRoles:
    worker:
      adapter: claude
      model: claude-sonnet-4-20250514
```

The Claude adapter spawns `claude -p` as a subprocess, passing the task prompt via stdin. It parses the CLI output stream for progress updates, tool calls, and final results.
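That subprocess pattern can be sketched as follows. This is an illustrative helper, not OpenSwarm's actual internals: the real adapter would invoke `["claude", "-p"]` and parse each streamed line for structured events rather than just collecting them.

```python
import subprocess

def run_cli_task(cmd, prompt):
    """Write the task prompt to a CLI backend's stdin and stream its
    stdout line by line. Each streamed line may carry a progress update,
    a tool call, or part of the final result."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    proc.stdin.write(prompt)
    proc.stdin.close()
    chunks = []
    for line in proc.stdout:   # streamed, not buffered to completion
        chunks.append(line)    # a real adapter would parse each line here
    proc.wait()
    return "".join(chunks)
```

A call would then look like `run_cli_task(["claude", "-p"], "Fix the failing test")`.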
## GPT Adapter

Backend: OpenAI API (HTTP) with agentic tool loop
Models: `gpt-4o`, `gpt-4.1`, `o3`, `o3-mini`
Auth: OAuth PKCE flow
```shell
# Authenticate
openswarm auth login --provider gpt
openswarm auth status
```

Usage:
```yaml
adapter: gpt
autonomous:
  defaultRoles:
    worker:
      adapter: gpt
      model: gpt-4o
```

The GPT adapter uses an agentic tool loop: it sends the task to the OpenAI API, receives tool call requests, executes them locally (`read_file`, `write_file`, `edit_file`, `search_files`, `bash`), and feeds results back until the task is complete.
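The loop can be sketched as follows; `call_model` stands in for the OpenAI chat-completions call and the message format is simplified, so treat this as the shape of the technique, not OpenSwarm's implementation.

```python
import json

def agentic_loop(task, call_model, tools, max_turns=10):
    """Run task -> tool call -> local execution -> result, repeating
    until the model returns a final answer instead of a tool call."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = call_model(messages)
        if "tool_call" not in reply:
            return reply["content"]          # model is done
        name = reply["tool_call"]["name"]
        args = reply["tool_call"]["args"]
        result = tools[name](**args)         # execute the tool locally
        messages.append({"role": "tool", "name": name,
                         "content": json.dumps(result)})
    raise RuntimeError("task did not converge within max_turns")
```

The `tools` argument would map names like `read_file` and `bash` to local callables.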
| Tool | Description | Safety |
|---|---|---|
| `read_file` | Read file contents | Path validation (cwd + /tmp only) |
| `write_file` | Create or overwrite a file | Path validation |
| `edit_file` | String replacement edit | Path validation |
| `search_files` | Grep/glob search | Path validation |
| `bash` | Execute shell commands | Blocked commands list, 30s timeout, 8KB output limit |
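The `bash` safeguards suggest a wrapper like the following sketch. Only the 30s timeout and 8KB output cap come from the table above; the blocklist contents, function name, and error strings are assumptions.

```python
import subprocess

# Illustrative blocklist; the real adapter's list is not documented here.
BLOCKED = {"rm", "shutdown", "reboot", "mkfs"}

def safe_bash(command: str, timeout=30, max_output=8192) -> str:
    """Run a shell command with a blocked-command check, a timeout,
    and a cap on captured output."""
    first = command.strip().split()[0] if command.strip() else ""
    if first in BLOCKED:
        return f"error: command '{first}' is blocked"
    try:
        done = subprocess.run(["bash", "-c", command], capture_output=True,
                              text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "error: command timed out"
    return (done.stdout + done.stderr)[:max_output]  # truncate to cap
```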
## Codex Adapter

Backend: OpenAI Codex CLI (`codex exec`)
Models: `o3`, `o4-mini`
Auth: Codex CLI authentication

```shell
npm i -g @openai/codex
```

Usage:
```yaml
adapter: codex
autonomous:
  defaultRoles:
    worker:
      adapter: codex
      model: o4-mini
```

## Local Adapter

Backend: Ollama, LMStudio, or llama.cpp server (OpenAI-compatible API)
Models: gemma4, llama3, mistral, qwen, codestral, and any model served locally
Auth: None (local server)
Start one of:
| Provider | Command | Default Port |
|---|---|---|
| Ollama | `ollama serve` | 11434 |
| LMStudio | Start from app | 1234 |
| llama.cpp | `./server -m model.gguf` | 8080 |
OpenSwarm auto-detects the running provider by probing standard ports.
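A minimal version of that probe, assuming "detection" means the TCP port accepts a connection (a real check would likely also hit the OpenAI-compatible HTTP endpoint to confirm which server is answering):

```python
import socket

# Default ports from the table above.
DEFAULT_PORTS = {"ollama": 11434, "lmstudio": 1234, "llama.cpp": 8080}

def detect_provider(ports, host="127.0.0.1", timeout=0.25):
    """Return the first provider whose port accepts a TCP connection,
    or None if nothing is listening."""
    for name, port in ports.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return name
        except OSError:
            continue
    return None
```

Calling `detect_provider(DEFAULT_PORTS)` with Ollama running would return `"ollama"`.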
Usage:

```yaml
adapter: local
autonomous:
  defaultRoles:
    worker:
      adapter: local
      model: gemma-4-e4b-it
    reviewer:
      adapter: local
      model: gemma-4-e4b-it
```

| Alias | Resolves To |
|---|---|
| `gemma4` | `gemma3:4b-it` |
| `llama3` | `llama3.3:latest` |
| `mistral` | `mistral:latest` |
| `codestral` | `codestral:latest` |
| `qwen` | `qwen2.5-coder:7b` |
The local adapter probes models for tool-calling capability. Known tool-capable model families: gemma, llama3.1+, mistral, qwen, codestral, command-r.
Models without tool support fall back to single-shot generation (no agentic loop).
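A sketch of that capability gate. The family list mirrors the one above, but matching by model-name prefix is an assumption, as is the dispatch helper.

```python
# Known tool-capable families per the docs ("llama3.1+" expanded).
TOOL_CAPABLE_FAMILIES = ("gemma", "llama3.1", "llama3.2", "llama3.3",
                         "mistral", "qwen", "codestral", "command-r")

def supports_tools(model: str) -> bool:
    """True if the model name starts with a known tool-capable family."""
    return model.lower().startswith(TOOL_CAPABLE_FAMILIES)

def run_task(model, task, agentic_run, single_shot_run):
    """Route to the agentic loop, or fall back to single-shot generation
    for models without tool support."""
    runner = agentic_run if supports_tools(model) else single_shot_run
    return runner(task)
```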
Mix adapters per role for cost optimization:

```yaml
autonomous:
  defaultRoles:
    worker:
      adapter: claude   # Best coding capability
      model: claude-sonnet-4-20250514
    reviewer:
      adapter: local    # Free, fast enough for review
      model: gemma-4-e4b-it
    documenter:
      adapter: local    # Free for docs
      model: gemma-4-e4b-it
```

Switch adapters at runtime via Discord:
```
!provider claude
!provider gpt
!provider local
```
Or per-command:

```shell
openswarm run "Fix bug" --model gpt-4o           # Infers GPT adapter
openswarm run "Fix bug" --model gemma-4-e4b-it   # Infers local adapter
```