A modular multi-agent architecture designed to handle complex user requests by delegating tasks to specialized agents through a central orchestrator.
The system follows clear separation of concerns and supports deterministic execution for dependent workflows such as checkout and payment.
This project demonstrates how a Root Orchestrator Agent coordinates multiple specialist agents to solve end-to-end tasks:
- Recipe discovery and ranking
- Shopping list and cost planning
- Wallet-based payment execution
- Sequential task enforcement using workflows
Instead of a monolithic LLM agent, responsibilities are distributed across focused agents to improve clarity, scalability, and maintainability.
Core Design Pattern
- Router-based multi-agent orchestration
- Specialist agents with single responsibility
- Shared session/memory store
- Sequential workflow for dependent tasks
High-Level Flow
- User sends a request
- Root Agent analyzes intent
- Request is routed to the appropriate agent or workflow
- Agents use tools and update shared session state
- Root Agent composes the final response
Root Orchestrator Agent
- Entry point of the system
- Routes requests to agents or workflows
- Does not perform domain logic
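A minimal sketch of this routing pattern, assuming keyword-based intent detection in place of the project's LLM-based analysis; all agent and function names here are illustrative, not the project's actual API:

```python
# Illustrative root orchestrator: routes by intent, performs no domain logic.

def recipe_agent(session, request):
    session["recipes"] = ["Pasta Primavera"]   # specialist writes to session
    return "Found 1 recipe."

def shopping_agent(session, request):
    session["shopping_plan"] = [("pasta", 2)]
    return "Shopping plan ready."

ROUTES = {"recipe": recipe_agent, "shop": shopping_agent}

def root_agent(request: str, session: dict) -> str:
    """Entry point: pick a specialist by intent and delegate to it."""
    for keyword, agent in ROUTES.items():
        if keyword in request.lower():
            return agent(session, request)
    return "Sorry, I can't handle that request."

session = {}
print(root_agent("Find me a recipe with broccoli", session))  # → Found 1 recipe.
```

Keeping domain logic out of the router means new specialists can be added by extending the route table alone.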
Recipe Agent
- Extracts ingredients and preferences
- Searches and ranks recipes
- Writes results to session store
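The discovery-and-ranking step can be sketched like this; the catalog, the overlap-based scoring, and the session key are illustrative assumptions:

```python
# Toy recipe ranking: score by how many required ingredients the user has.

RECIPES = [
    {"name": "Veg Stir Fry", "ingredients": {"broccoli", "rice", "soy sauce"}},
    {"name": "Pasta Primavera", "ingredients": {"pasta", "broccoli", "cream"}},
]

def rank_recipes(available: set, session: dict) -> list:
    """Rank recipes by ingredient overlap and record the result in the session."""
    scored = sorted(
        RECIPES,
        key=lambda r: len(r["ingredients"] & available),
        reverse=True,
    )
    session["recipes"] = scored        # downstream agents read from here
    return scored

session = {}
best = rank_recipes({"broccoli", "rice", "soy sauce"}, session)[0]
print(best["name"])  # → Veg Stir Fry
```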
Shopping Agent
- Converts recipes into a purchase plan
- Calculates quantities and estimated cost
- Authorizes and captures payments
- Invoked only after shopping is complete
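A wallet sketch using a two-step authorize/capture split, which mirrors common payment gateways but is an assumption about this project's internals:

```python
# Illustrative wallet: authorize reserves funds, capture finalizes the payment.

class InsufficientFunds(Exception):
    pass

class WalletPaymentAgent:
    def __init__(self, balance: float):
        self.balance = balance
        self.holds = {}

    def authorize(self, order_id: str, amount: float) -> None:
        """Check the balance and reserve funds without moving them yet."""
        if amount > self.balance:
            raise InsufficientFunds(f"need {amount}, have {self.balance}")
        self.holds[order_id] = amount

    def capture(self, order_id: str) -> float:
        """Finalize a previously authorized payment and return the new balance."""
        amount = self.holds.pop(order_id)
        self.balance -= amount
        return self.balance

wallet = WalletPaymentAgent(balance=20.0)
wallet.authorize("order-1", 8.60)
print(round(wallet.capture("order-1"), 2))  # → 11.4
```

Separating authorization from capture is what lets the workflow reject a payment before any funds move.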
Sequential Workflow
- Enforces strict execution order:
- Shopping → Payment
- Prevents invalid or partial transactions
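One way to enforce that ordering is a small guard that refuses to run a step before its predecessors finish; the step names and API are stand-ins for the project's workflow mechanism:

```python
# Minimal sequential-workflow guard: steps run strictly in declared order.

class SequentialWorkflow:
    def __init__(self, steps):
        self.steps = steps          # e.g. ["shopping", "payment"]
        self.done = set()

    def run(self, step: str, action):
        """Run a step's action only if every earlier step has completed."""
        idx = self.steps.index(step)
        if any(prev not in self.done for prev in self.steps[:idx]):
            raise RuntimeError(f"cannot run '{step}' before earlier steps finish")
        result = action()
        self.done.add(step)
        return result

wf = SequentialWorkflow(["shopping", "payment"])
try:
    wf.run("payment", lambda: "pay")   # out of order → rejected
except RuntimeError as err:
    print(err)
wf.run("shopping", lambda: "plan built")
print(wf.run("payment", lambda: "paid"))  # → paid
```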
Session Store
- Central memory layer
- Enables continuity across agent calls
- Stores intermediate and final results
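An in-process sketch of such a store; a production system would externalize it (for example to Redis), as the project's own limitations note:

```python
# Tiny shared session store, namespaced by the writing agent.

class SessionStore:
    """Central memory layer shared by all agents (in-process sketch)."""

    def __init__(self):
        self._data = {}

    def write(self, agent: str, key: str, value):
        self._data[(agent, key)] = value    # key is namespaced per agent

    def read(self, agent: str, key: str, default=None):
        return self._data.get((agent, key), default)

store = SessionStore()
store.write("recipe_agent", "top_recipe", "Pasta Primavera")
print(store.read("recipe_agent", "top_recipe"))  # → Pasta Primavera
```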
Execution Model
- Parallel agents are used where tasks are independent
- Sequential agents are used where task order matters
- All agents communicate indirectly via shared session state
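Independent agents can be fanned out with a thread pool while dependent steps stay sequential; the agent functions here are stand-ins for real specialists:

```python
# Sketch: run two independent agents concurrently with a thread pool.
from concurrent.futures import ThreadPoolExecutor

def nutrition_agent(recipe):   # independent of pricing
    return {"recipe": recipe, "calories": 520}

def pricing_agent(recipe):     # independent of nutrition
    return {"recipe": recipe, "cost": 8.60}

with ThreadPoolExecutor() as pool:
    # map preserves order, so results line up with the agent list
    nutrition, pricing = pool.map(
        lambda agent: agent("Pasta"), [nutrition_agent, pricing_agent]
    )

print(nutrition["calories"], pricing["cost"])  # → 520 8.6
```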
This mirrors real-world systems such as:
- Order processing pipelines
- Payment gateways
- Workflow engines
Key Benefits
- Clear separation of concerns
- Deterministic execution for dependent steps
- Easy to extend with new agents
- Presentation- and interview-ready
- Industry-aligned architecture
Limitations
- Intent routing depends on interpretation logic
- Session store must be externalized for production
- Error handling can be expanded for edge cases
These are expected trade-offs for a prototype-level system.
Tech Stack
- Python
- LLM-based agents
- Tool-based execution
- Sequential workflows
- Session-based memory
Use Cases
- Academic projects
- Multi-agent system demos
- Agent orchestration learning
- Foundations for A2A / distributed agents
Detailed architecture and explanation are available in the project PDF:
Multi_Agent_Project_Explanation.pdf
Standard MIT License