Conclave is a distributed system of autonomous AI agents that communicate with each other using UDP multicast. Each agent operates independently with a pluggable LLM backend (OpenAI, Anthropic, Google, OpenRouter, or local models via Ollama) and a configurable personality system that supports both inline prompts and file-based personalities.
This project allows you to create a swarm of AI agents that can collaborate on tasks. The agents communicate in a decentralized manner, with each agent broadcasting messages to the group and responding to messages from others. This enables complex, emergent behaviors and decentralized problem-solving.
Agents can also be configured for voice responses using ElevenLabs integration and support structured debate scenarios.
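For a sense of the transport layer, joining a multicast group and broadcasting a datagram can be sketched with Rust's standard library. This is an illustrative sketch only: `open_group_socket` is a hypothetical helper, not Conclave's actual API, though the group address matches the project's default (239.255.255.250:8080):

```rust
use std::net::{Ipv4Addr, SocketAddrV4, UdpSocket};

// Hypothetical helper: bind to the group's port and join the multicast
// group on all interfaces so datagrams from peers are delivered.
fn open_group_socket(group: SocketAddrV4) -> std::io::Result<UdpSocket> {
    let socket = UdpSocket::bind(("0.0.0.0", group.port()))?;
    socket.join_multicast_v4(group.ip(), &Ipv4Addr::UNSPECIFIED)?;
    Ok(socket)
}

fn main() {
    let group = SocketAddrV4::new(Ipv4Addr::new(239, 255, 255, 250), 8080);
    assert!(group.ip().is_multicast());
    match open_group_socket(group) {
        Ok(socket) => {
            // Best-effort broadcast; every agent joined to the group receives it.
            let _ = socket.send_to(b"hello from agent-1", group);
            println!("joined {group} and sent a probe");
        }
        Err(e) => eprintln!("multicast unavailable in this environment: {e}"),
    }
}
```

Because there is no central server, any agent that joins the same group address immediately sees every broadcast.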
Conclave includes Docker support for easy deployment and containerized execution. You can run agents using Docker without needing to install Rust or other dependencies locally.
Build the Docker image from the project root:
```sh
docker build -t conclave .
```

Run a single agent using Docker:

```sh
docker run --rm \
  -e OPENAI_API_KEY=your_openai_key \
  conclave \
  --agent-id agent-1 \
  --llm-backend openai \
  --model gpt-4
```

Run multiple agents in separate containers:
```sh
# Terminal 1
docker run --rm \
  -e OPENAI_API_KEY=your_openai_key \
  conclave \
  --agent-id agent-1 \
  --llm-backend openai \
  --model gpt-4
```

```sh
# Terminal 2
docker run --rm \
  -e ANTHROPIC_API_KEY=your_anthropic_key \
  conclave \
  --agent-id agent-2 \
  --llm-backend anthropic \
  --model claude-3-sonnet-20240229
```

For voice-enabled agents with Docker:
```sh
docker run --rm \
  -e OPENAI_API_KEY=your_openai_key \
  -e ELEVENLABS_API_KEY=your_elevenlabs_key \
  conclave \
  --agent-id agent-1 \
  --llm-backend openai \
  --model gpt-4 \
  --voice
```

Create a docker-compose.yml file for easier multi-agent deployment:
```yaml
version: '3.8'
services:
  agent-1:
    image: conclave
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    command: ["--agent-id", "agent-1", "--llm-backend", "openai", "--model", "gpt-4"]
  agent-2:
    image: conclave
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    command: ["--agent-id", "agent-2", "--llm-backend", "anthropic", "--model", "claude-3-sonnet-20240229"]
```

Run with:
```sh
docker-compose up
```

Conclave's main features:

- Decentralized Communication: Agents communicate via UDP multicast, eliminating the need for a central server.
- Pluggable LLM Backends: Easily switch between different LLM providers, including OpenAI, Anthropic, Google, OpenRouter, and local models.
- Voice Integration: Optional ElevenLabs text-to-speech (TTS) for voice responses; ElevenLabs is currently the only supported TTS provider.
- Docker Support: Run agents in containers without local Rust installation.
- Configurable Agents: Customize each agent's ID, personality (inline or file-based), and LLM model.
- Debate System: Built-in support for structured Public Forum debates with predefined personality files for affirmative, negative, and judge roles.
- Resilient Networking: Retry logic makes the system resilient to network errors and agent failures.
- Concurrent Processing: Agents can process messages and generate responses concurrently, enabling real-time interaction.
- Memory Management: Sliding window strategy for conversation context management.
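The sliding-window strategy above can be sketched as follows; `ConversationWindow` is a hypothetical illustration built on the standard library, not Conclave's actual type:

```rust
use std::collections::VecDeque;

// Hypothetical sketch of sliding-window context management: keep only
// the most recent `capacity` messages, evicting the oldest as new ones arrive.
struct ConversationWindow {
    capacity: usize,
    messages: VecDeque<String>,
}

impl ConversationWindow {
    fn new(capacity: usize) -> Self {
        Self { capacity, messages: VecDeque::new() }
    }

    fn push(&mut self, msg: String) {
        if self.messages.len() == self.capacity {
            self.messages.pop_front(); // drop the oldest message
        }
        self.messages.push_back(msg);
    }

    fn context(&self) -> Vec<&str> {
        self.messages.iter().map(|s| s.as_str()).collect()
    }
}

fn main() {
    let mut window = ConversationWindow::new(3);
    for i in 1..=5 {
        window.push(format!("message {i}"));
    }
    // Only the last three messages remain in context.
    println!("{:?}", window.context());
}
```

Bounding the window keeps each LLM request's prompt size (and cost) roughly constant regardless of how long the conversation runs.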
To build and run Conclave locally, you need:

- Rust (latest stable version)
- An API key for your chosen LLM provider (e.g., OpenAI, Anthropic, Google, OpenRouter)
- Optional: ElevenLabs API key for voice responses
- Optional: ALSA development libraries for audio playback (Linux)
1. Clone the repository:

   ```sh
   git clone https://github.com/your-username/conclave.git
   cd conclave
   ```

2. Build the project:

   ```sh
   cargo build --release
   ```
To run an agent, you need to provide a unique agent ID and specify the LLM backend and model to use.
Run a single agent using the OpenAI backend:
```sh
cargo run --release -- \
  --agent-id agent-1 \
  --llm-backend openai \
  --model gpt-4 \
  --api-key YOUR_OPENAI_API_KEY
```

Run an agent with ElevenLabs voice responses:
```sh
ELEVENLABS_API_KEY=your_elevenlabs_key cargo run --release -- \
  --agent-id agent-1 \
  --llm-backend openai \
  --model gpt-4 \
  --api-key YOUR_OPENAI_API_KEY \
  --voice
```

Run an agent configured for Public Forum debate (affirmative side):
```sh
cargo run --release -- \
  --agent-id debater-1 \
  --llm-backend openai \
  --model gpt-4 \
  --api-key YOUR_OPENAI_API_KEY \
  --personality-file src/personalities/affirmative.md
```

To create a swarm, run multiple agents in separate terminal windows. Each agent must have a unique ID.
Terminal 1:
```sh
cargo run --release -- \
  --agent-id agent-1 \
  --llm-backend openai \
  --model gpt-4 \
  --api-key YOUR_OPENAI_API_KEY
```

Terminal 2:
```sh
cargo run --release -- \
  --agent-id agent-2 \
  --llm-backend anthropic \
  --model claude-3-sonnet-20240229 \
  --api-key YOUR_ANTHROPIC_API_KEY
```

You can configure the agents using the following command-line arguments:
| Argument | Short | Long | Description | Default |
|---|---|---|---|---|
| Agent ID | -i | --agent-id | Unique identifier for this agent | |
| Multicast Address | -a | --multicast-address | UDP multicast address for communication | 239.255.255.250:8080 |
| Network Interface | | --interface | Network interface to bind to | |
| LLM Backend | -b | --llm-backend | LLM backend to use | openai |
| Model | -m | --model | Specific LLM model to use | gpt-3.5-turbo |
| API Key | -k | --api-key | API key for the LLM backend | |
| Endpoint | | --endpoint | Custom API endpoint URL | |
| Timeout | | --timeout | Request timeout in seconds | 30 |
| Max Retries | | --max-retries | Maximum retry attempts for failed requests | 3 |
| Log Level | | --log-level | Set the log level | info |
| Personality | -p | --personality | Agent personality for the system prompt | You are a helpful AI agent... |
| Personality File | | --personality-file | Read personality from file (mutually exclusive with --personality) | |
| Processing Delay | | --processing-delay | Processing delay in milliseconds for simulation | 0 |
| Voice | | --voice | Enable ElevenLabs voice responses | false |
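The behavior behind --max-retries can be illustrated with a generic retry helper. This is a hypothetical sketch with exponential backoff, not the project's actual implementation; `with_retries` and its delays are illustrative assumptions:

```rust
use std::thread;
use std::time::Duration;

// Hypothetical sketch: retry a fallible request up to `max_retries` times
// after the initial attempt, doubling the delay between attempts.
fn with_retries<T, E>(
    max_retries: u32,
    mut attempt: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = Duration::from_millis(100);
    let mut tries = 0;
    loop {
        match attempt() {
            Ok(value) => return Ok(value),
            // Out of retries: surface the last error to the caller.
            Err(e) if tries >= max_retries => return Err(e),
            Err(_) => {
                tries += 1;
                thread::sleep(delay);
                delay *= 2; // exponential backoff
            }
        }
    }
}

fn main() {
    let mut calls = 0;
    // Simulated request that fails twice, then succeeds.
    let result: Result<&str, &str> = with_retries(3, || {
        calls += 1;
        if calls < 3 { Err("transient failure") } else { Ok("ok") }
    });
    assert_eq!(result, Ok("ok"));
    println!("succeeded after {calls} calls");
}
```

Backoff of this shape avoids hammering a rate-limited LLM endpoint while still recovering quickly from transient network errors.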
You can also provide API keys via environment variables:

- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- GEMINI_API_KEY
- OPENROUTER_API_KEY
- ELEVENLABS_API_KEY
The following values are supported for --llm-backend:

- OpenAI: openai
- Anthropic: anthropic
- Google: google
- OpenRouter: openrouter
- Local (Ollama): local
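The pluggable-backend design can be sketched as a trait plus a selection function keyed on the --llm-backend value. `LlmBackend`, `EchoBackend`, and `select_backend` here are hypothetical illustrations, not Conclave's actual API:

```rust
// Hypothetical sketch: a common trait each provider implements,
// chosen at startup from the --llm-backend argument.
trait LlmBackend {
    fn name(&self) -> &'static str;
    fn complete(&self, prompt: &str) -> String;
}

// Stand-in for a real provider client (OpenAI, Anthropic, ...).
struct EchoBackend;

impl LlmBackend for EchoBackend {
    fn name(&self) -> &'static str { "echo" }
    fn complete(&self, prompt: &str) -> String {
        format!("[echo] {prompt}")
    }
}

fn select_backend(id: &str) -> Option<Box<dyn LlmBackend>> {
    match id {
        "echo" => Some(Box::new(EchoBackend)),
        // "openai" | "anthropic" | "google" | ... would construct real clients here
        _ => None,
    }
}

fn main() {
    let backend = select_backend("echo").expect("unknown backend");
    println!("{}: {}", backend.name(), backend.complete("hello"));
}
```

Dispatching through a trait object keeps the agent loop identical regardless of which provider is configured.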
Conclave includes built-in support for structured Public Forum debates. Use the provided personality files:
- src/personalities/affirmative.md: for affirmative debaters
- src/personalities/negative.md: for negative debaters
- src/personalities/debate_judge_prompt.md: for debate judges
Example debate setup:
```sh
# Affirmative debater
cargo run --release -- \
  --agent-id affirmative \
  --llm-backend openai \
  --model gpt-4 \
  --api-key YOUR_OPENAI_API_KEY \
  --personality-file src/personalities/affirmative.md

# Negative debater
cargo run --release -- \
  --agent-id negative \
  --llm-backend openai \
  --model gpt-4 \
  --api-key YOUR_OPENAI_API_KEY \
  --personality-file src/personalities/negative.md

# Judge
cargo run --release -- \
  --agent-id judge \
  --llm-backend openai \
  --model gpt-4 \
  --api-key YOUR_OPENAI_API_KEY \
  --personality-file src/personalities/debate_judge_prompt.md
```

To run the test suite, use the following command:
```sh
cargo test
```

The project uses Protocol Buffers for message serialization. If you modify the .proto files, you'll need to rebuild the generated Rust code:
```sh
cargo build
```

This project is licensed under the MIT License. See the LICENSE file for details.