
Adapters

unohee edited this page Apr 16, 2026 · 1 revision


OpenSwarm uses a pluggable adapter system to support multiple AI providers. Each adapter implements the same interface, allowing runtime switching between providers.

adapter: claude   # "claude" | "codex" | "gpt" | "local"
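The shared interface can be sketched as an abstract base class plus a registry keyed by the `adapter:` setting. This is a hypothetical Python sketch (the actual class and method names in OpenSwarm may differ); `EchoAdapter` is a stand-in used only to demonstrate the contract:

```python
from abc import ABC, abstractmethod

class Adapter(ABC):
    """Shared contract every provider adapter implements (names illustrative)."""

    @abstractmethod
    def run_task(self, prompt: str, model: str) -> str:
        """Run one task and return the final result text."""

class EchoAdapter(Adapter):
    """Stand-in implementation used here only to show the interface."""
    def run_task(self, prompt: str, model: str) -> str:
        return f"[{model}] {prompt}"

# One entry per provider; real adapters would each be a distinct class.
ADAPTERS = {name: EchoAdapter for name in ("claude", "codex", "gpt", "local")}

def make_adapter(name: str) -> Adapter:
    # Runtime switching is a dictionary lookup keyed by the `adapter:` setting.
    return ADAPTERS[name]()
```

Because every adapter satisfies the same contract, the rest of the system never needs to know which provider is behind a role.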

Claude (Default)

Backend: Claude Code CLI (claude -p)

Models: claude-sonnet-4, claude-haiku-4.5, claude-opus-4

Auth: Claude Code CLI authentication

# Install and authenticate
npm i -g @anthropic-ai/claude-code
claude auth

Usage:

adapter: claude
autonomous:
  defaultRoles:
    worker:
      adapter: claude
      model: claude-sonnet-4-20250514

The Claude adapter spawns claude -p as a subprocess, passing the task prompt via stdin. It parses the CLI output stream for progress updates, tool calls, and final results.
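The spawn-and-capture step can be sketched like this. The command list and flags are assumptions for illustration; the real adapter also parses a streaming event format for progress and tool calls rather than reading stdout in one piece:

```python
import subprocess

def run_cli_task(cmd: list[str], prompt: str, timeout: int = 600) -> str:
    """Spawn a provider CLI, write the task prompt to stdin, and capture stdout.
    For Claude the command would be roughly ["claude", "-p", "--model", model]."""
    proc = subprocess.run(
        cmd,
        input=prompt,          # task prompt goes in via stdin
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr.strip() or f"{cmd[0]} failed")
    return proc.stdout
```

Using a subprocess keeps the adapter decoupled from the CLI's internals: authentication, model access, and tool execution are all handled by the CLI itself.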


GPT

Backend: OpenAI API (HTTP) with agentic tool loop

Models: gpt-4o, gpt-4.1, o3, o3-mini

Auth: OAuth PKCE flow

# Authenticate
openswarm auth login --provider gpt
openswarm auth status

Usage:

adapter: gpt
autonomous:
  defaultRoles:
    worker:
      adapter: gpt
      model: gpt-4o

The GPT adapter uses an agentic tool loop: it sends the task to the OpenAI API, receives tool call requests, executes them locally (read_file, write_file, edit_file, search_files, bash), and feeds results back until the task is complete.
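The loop described above can be sketched generically. Here `send` stands in for an OpenAI chat-completions call and `tools` maps tool names to local Python callables; the message shape is simplified relative to the real API:

```python
def agentic_loop(send, tools, task, max_turns=10):
    """Send the task, execute requested tool calls locally, and feed
    results back until the model returns a final answer (sketch)."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = send(messages)
        if "tool_call" not in reply:
            return reply["content"]           # no tool requested: task complete
        name, args = reply["tool_call"]
        result = tools[name](**args)          # execute the tool locally
        messages.append({"role": "tool", "name": name, "content": result})
    raise RuntimeError("tool loop exceeded max_turns")
```

The `max_turns` cap matters in practice: without it, a model that keeps requesting tools would loop (and bill) indefinitely.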

Available Tools

| Tool | Description | Safety |
| --- | --- | --- |
| read_file | Read file contents | Path validation (cwd + /tmp only) |
| write_file | Create or overwrite a file | Path validation |
| edit_file | String-replacement edit | Path validation |
| search_files | Grep/glob search | Path validation |
| bash | Execute shell commands | Blocked-commands list, 30s timeout, 8KB output limit |
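The safety column above boils down to a few checks that can be sketched as follows. The blocklist entries here are illustrative, not OpenSwarm's actual list; the path rule (cwd + /tmp only) and the 8KB cap come from the table:

```python
from pathlib import Path

BLOCKED_COMMANDS = ("rm -rf /", "shutdown", "mkfs")  # illustrative, not the real list

def is_path_allowed(path: str, cwd: str = ".") -> bool:
    """Path check for read/write/edit/search: resolve symlinks and '..'
    first, then require the result to sit under cwd or /tmp."""
    resolved = Path(cwd, path).resolve()
    roots = (Path(cwd).resolve(), Path("/tmp").resolve())
    return any(resolved == root or root in resolved.parents for root in roots)

def is_command_blocked(cmd: str) -> bool:
    """Substring match against the blocklist before running bash."""
    return any(blocked in cmd for blocked in BLOCKED_COMMANDS)

def truncate_output(out: str, limit: int = 8 * 1024) -> str:
    """Enforce the bash tool's 8KB output cap."""
    return out if len(out) <= limit else out[:limit] + "\n[truncated]"
```

Resolving the path before checking it is the important detail: it closes the `../../etc/passwd` traversal that a naive prefix check would miss.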

Codex

Backend: OpenAI Codex CLI (codex exec)

Models: o3, o4-mini

Auth: Codex CLI authentication

npm i -g @openai/codex

Usage:

adapter: codex
autonomous:
  defaultRoles:
    worker:
      adapter: codex
      model: o4-mini

Local

Backend: Ollama, LMStudio, or llama.cpp server (OpenAI-compatible API)

Models: gemma4, llama3, mistral, qwen, codestral, and any model served locally

Auth: None (local server)

Setup

Start one of:

| Provider | Command | Default Port |
| --- | --- | --- |
| Ollama | `ollama serve` | 11434 |
| LMStudio | Start from app | 1234 |
| llama.cpp | `./server -m model.gguf` | 8080 |

OpenSwarm auto-detects the running provider by probing standard ports.

Usage:

adapter: local
autonomous:
  defaultRoles:
    worker:
      adapter: local
      model: gemma-4-e4b-it
    reviewer:
      adapter: local
      model: gemma-4-e4b-it

Model Aliases

| Alias | Resolves To |
| --- | --- |
| gemma4 | gemma3:4b-it |
| llama3 | llama3.3:latest |
| mistral | mistral:latest |
| codestral | codestral:latest |
| qwen | qwen2.5-coder:7b |
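Alias resolution is a straightforward lookup that passes unrecognized names through unchanged, so fully qualified model tags keep working. A sketch using the table above:

```python
MODEL_ALIASES = {
    "gemma4": "gemma3:4b-it",
    "llama3": "llama3.3:latest",
    "mistral": "mistral:latest",
    "codestral": "codestral:latest",
    "qwen": "qwen2.5-coder:7b",
}

def resolve_model(name: str) -> str:
    """Expand a short alias to its full local model tag; anything that is
    not an alias passes through unchanged."""
    return MODEL_ALIASES.get(name, name)
```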

Tool Support

The local adapter probes models for tool-calling capability. Known tool-capable model families: gemma, llama3.1+, mistral, qwen, codestral, command-r.

Models without tool support fall back to single-shot generation (no agentic loop).
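The capability check plus fallback can be sketched as a family-name heuristic gating which execution path runs. The family list below follows the one above; the dispatch function and its parameters are illustrative, and per the text the adapter also probes the model directly rather than trusting the name alone:

```python
TOOL_CAPABLE_FAMILIES = ("gemma", "llama3.1", "llama3.2", "llama3.3",
                         "mistral", "qwen", "codestral", "command-r")

def supports_tools(model: str) -> bool:
    """Heuristic: does the model name start with a known tool-capable family?"""
    return model.lower().startswith(TOOL_CAPABLE_FAMILIES)

def run_local_task(model: str, task: str, agentic, single_shot):
    """Use the agentic tool loop when the model supports tools; otherwise
    fall back to single-shot generation."""
    return agentic(task) if supports_tools(model) else single_shot(task)
```

Single-shot fallback means the model still gets one full prompt-to-answer pass; it simply cannot read or edit files along the way.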


Hybrid Configuration

Mix adapters per role for cost optimization:

autonomous:
  defaultRoles:
    worker:
      adapter: claude                     # Best coding capability
      model: claude-sonnet-4-20250514
    reviewer:
      adapter: local                      # Free, fast enough for review
      model: gemma-4-e4b-it
    documenter:
      adapter: local                      # Free for docs
      model: gemma-4-e4b-it

Runtime Switching

Switch adapters at runtime via Discord:

!provider claude
!provider gpt
!provider local

Or per-command:

openswarm run "Fix bug" --model gpt-4o          # Infers GPT adapter
openswarm run "Fix bug" --model gemma-4-e4b-it   # Infers local adapter
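Inferring the adapter from `--model` can be sketched as a prefix heuristic; the actual resolution rules in OpenSwarm may be more involved (e.g. distinguishing GPT from Codex for o-series models):

```python
def infer_adapter(model: str) -> str:
    """Guess the adapter from a model name when only --model is given."""
    m = model.lower()
    if m.startswith("claude"):
        return "claude"
    if m.startswith(("gpt-", "o3", "o4")):
        return "gpt"
    return "local"   # anything else is assumed to be a locally served model
```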
