The Autonomous Multi-Agent Software Engineering Platform.
SWE AI Fleet is a self-hostable, production-grade platform for autonomous software development. It replaces the "single AI assistant" model with a Council of Agents (Developer, Architect, QA) that deliberate, critique, and refine code before you ever see it.
It is built on a Decision-Centric Architecture, prioritizing "Why" (context) over just "What" (code), enabling small open-source models (7B-13B) to perform at the level of massive proprietary models.
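The deliberation loop described above can be sketched in a few lines. This is a hypothetical illustration only: the `Agent` class, `deliberate` function, and the propose/critique strings are invented for clarity and are not the platform's actual API.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """One council member (e.g. Developer, Architect, QA). Illustrative only."""
    role: str

    def propose(self, task: str) -> str:
        # A real agent would call an LLM; we return a placeholder draft.
        return f"{self.role} draft for: {task}"

    def critique(self, draft: str) -> str:
        # A real agent would review the draft; we return placeholder notes.
        return f"{self.role} notes on: {draft}"

def deliberate(agents, task):
    # Every agent drafts a proposal for the task...
    proposals = {a.role: a.propose(task) for a in agents}
    # ...then every peer critiques every other agent's draft.
    reviews = {
        role: [peer.critique(draft) for peer in agents if peer.role != role]
        for role, draft in proposals.items()
    }
    # A real council would score and refine drafts from the critiques;
    # here we simply return the first draft with the feedback it received.
    first_role = agents[0].role
    return proposals[first_role], reviews[first_role]

council = [Agent("Developer"), Agent("Architect"), Agent("QA")]
draft, feedback = deliberate(council, "add retry logic")
```

The point of the loop is that critique happens *before* the user sees any code: each draft collects peer reviews from the other roles and only the refined result surfaces.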
We maintain a comprehensive Documentation Index that maps out all available guides.
Quick Links:
- System Overview: High-level architecture and core concepts.
- Microservices: Reference for the 7 deployable services.
- Core Contexts: Deep dive into Agents, Knowledge Graph, and Orchestration logic.
- Deployment Guide: How to run the fleet on Kubernetes.
The system follows Hexagonal Architecture and is composed of the following microservices:
| Service | Description | Tech Stack |
|---|---|---|
| Planning | Manages Projects, Epics, and User Stories. | Python, Neo4j |
| Workflow | Tracks task lifecycle (FSM) and enforces RBAC. | Python, Neo4j |
| Orchestrator | Runs the Multi-Agent Councils (Deliberation). | Python, NATS |
| Context | Assembles surgical context from the Knowledge Graph. | Python, Neo4j |
| Ray Executor | Gateway to the GPU cluster for agent execution. | Python, Ray |
| Task Derivation | Auto-breaks plans into executable tasks. | Python, NATS |
| Monitoring | Real-time dashboard and observability. | Python, React |
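In the Hexagonal Architecture each service's domain logic depends only on abstract ports, with infrastructure (Neo4j, NATS) plugged in as adapters. A minimal sketch of the pattern, using invented names (`StoryRepository`, `plan_story`) rather than the platform's real classes:

```python
from abc import ABC, abstractmethod

class StoryRepository(ABC):
    """Port: the interface the domain needs, with no database details."""

    @abstractmethod
    def save(self, story_id: str, title: str) -> None: ...

    @abstractmethod
    def get(self, story_id: str):
        """Return the stored title, or None if absent."""

class InMemoryStoryRepository(StoryRepository):
    """Adapter: one interchangeable implementation (a Neo4j adapter
    would implement the same port)."""

    def __init__(self):
        self._stories = {}

    def save(self, story_id, title):
        self._stories[story_id] = title

    def get(self, story_id):
        return self._stories.get(story_id)

def plan_story(repo: StoryRepository, story_id: str, title: str) -> None:
    # Domain logic talks only to the port, never to a concrete database.
    repo.save(story_id, title)

repo = InMemoryStoryRepository()
plan_story(repo, "US-1", "Add login flow")
```

Because the domain sees only the port, adapters can be swapped (in-memory for tests, Neo4j in production) without touching business logic.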
Prerequisites:
- Kubernetes cluster (v1.28+)
- NVIDIA GPUs (recommended for local inference)
- Podman (for local development)
See the Kubernetes Deployment Guide for full instructions.
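Before deploying, you can confirm the required CLIs are on your `PATH`. A small hypothetical pre-flight check (the tool list is an assumption based on the prerequisites above, not a script shipped with the project):

```python
import shutil

# Assumed prerequisites; nvidia-smi is optional (only for GPU inference nodes).
REQUIRED_TOOLS = ["kubectl", "podman"]

def missing_tools(tools):
    # shutil.which returns None when an executable is not on PATH.
    return [t for t in tools if shutil.which(t) is None]

if missing_tools(REQUIRED_TOOLS):
    print("Missing tools:", missing_tools(REQUIRED_TOOLS))
```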
```shell
# Quick deploy script (if configured)
./scripts/infra/fresh-redeploy.sh
```

We welcome contributions! Please read CONTRIBUTING.md for details on our development workflow, coding standards, and testing requirements.
Apache License 2.0 - See LICENSE for details.