This microservice provides REST endpoints for the OpenAI Assistants API and LiteLLM, enabling agent management and LLM calls from your Next.js platform.
- Create and activate a virtual environment (optional but recommended):

  ```bash
  python3 -m venv venv
  source venv/bin/activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set environment variables:
  - For OpenAI: `OPENAI_API_KEY`
  - For LiteLLM: see the LiteLLM docs
- Run the server:

  ```bash
  uvicorn main:app --host 0.0.0.0 --port 8000 --reload
  ```

  The service will be available at http://localhost:8000.
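If you want to confirm the service is responding, a minimal liveness check using only the Python standard library is sketched below; it assumes the root healthcheck endpoint (`/`) described in the Docker notes that follow.

```python
# Minimal liveness check for a locally running instance (sketch, not part of the repo).
# Assumes the root healthcheck endpoint "/" returns HTTP 200 with a status body.
from urllib.request import urlopen

with urlopen("http://localhost:8000/") as resp:
    print(resp.status, resp.read().decode())
```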
To run with Docker:

```bash
docker build -t atlas-agent-backend .
docker run --env-file .env -p 8000:8000 atlas-agent-backend
```

- The backend will be available at http://localhost:8000
- Healthcheck endpoint: `/` (returns status OK)
- For production, ensure your `.env` file contains all required secrets (e.g., `OPENAI_API_KEY`)
- If you have a database or other services, consider adding a `docker-compose.yml` file.
OpenAI and LiteLLM endpoints:

- `POST /openai/assistant` — Create an OpenAI Assistant
- `GET /openai/assistant` — List OpenAI Assistants
- `POST /openai/thread` — Create a thread
- `GET /openai/thread` — List threads
- `POST /openai/message` — Add a message to a thread
- `POST /openai/run` — Run an assistant on a thread
- `POST /litellm/chat` — Call an LLM via LiteLLM
All endpoints are currently placeholders. Implementations should be added as needed.
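Since the endpoints above are placeholders, their exact schemas are not fixed yet. As one illustration, here is a minimal sketch of how the `POST /litellm/chat` route could be backed by LiteLLM's `completion()` call; the `ChatRequest` fields and default model name are assumptions, not the project's actual contract.

```python
# Illustrative sketch only; the real route belongs in main.py and its schema may differ.
from fastapi import APIRouter
from pydantic import BaseModel

import litellm

router = APIRouter()


class ChatRequest(BaseModel):
    # Hypothetical request shape: a model name plus OpenAI-style chat messages.
    model: str = "gpt-4o-mini"
    messages: list[dict]


@router.post("/litellm/chat")
def litellm_chat(req: ChatRequest):
    # LiteLLM exposes an OpenAI-compatible completion() interface across providers.
    response = litellm.completion(model=req.model, messages=req.messages)
    return {"content": response.choices[0].message.content}
```

In `main.py`, such a router would then be mounted on the app with `app.include_router(router)`.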
A2A protocol endpoints:

- `POST /api/v1/a2a/register` — Register agent (A2A)
- `POST /api/v1/a2a/handshake` — Handshake/authenticate (A2A)
- `POST /api/v1/a2a/send` — Relay message to agent (A2A)
- `POST /api/v1/a2a/receive` — Receive/process message (A2A)
- `GET /api/v1/a2a/status/{agent_id}` — Get agent status (A2A)
- `GET /api/v1/a2a/error` — Protocol-compliant error (A2A)

These endpoints implement the Google Agent2Agent (A2A) protocol for standardized, secure agent-to-agent communication. See `MODEL_CARD_A2A.md` for details.
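The exact A2A payloads are specified in `MODEL_CARD_A2A.md`; the sketch below only shows how a client might exercise the register and status endpoints, and the request fields (`agent_id`, `name`, `capabilities`) are placeholders chosen for illustration.

```python
# Hypothetical A2A client calls against a locally running backend; field names are illustrative.
import json
from urllib.request import Request, urlopen

BASE = "http://localhost:8000/api/v1/a2a"


def post_json(url: str, payload: dict) -> dict:
    """POST a JSON payload and decode the JSON response."""
    req = Request(url, data=json.dumps(payload).encode("utf-8"),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())


# Register an agent, then query its status.
print(post_json(f"{BASE}/register",
                {"agent_id": "agent-001", "name": "demo-agent", "capabilities": ["chat"]}))
with urlopen(f"{BASE}/status/agent-001") as resp:
    print(json.loads(resp.read()))
```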
- Automated CI pipeline: see `.github/workflows/ci.yml` (runs the Ruff linter, pytest, and uploads coverage)
- Linting: Ruff, enforced via `ruff.toml`
- Test suite: Pytest, with smoke and coverage checks
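As a rough idea of what a smoke test in this suite could look like (assuming `main.py` exposes the same `app` object targeted by the uvicorn command above, and that the root healthcheck returns HTTP 200):

```python
# tests/test_smoke.py -- illustrative only; the repository's actual tests may differ.
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)


def test_healthcheck_returns_ok():
    # The healthcheck endpoint at "/" should answer with HTTP 200.
    response = client.get("/")
    assert response.status_code == 200
```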
- Profile agent runner performance: `python tools/profile_agent.py`
- Edit `tools/profile_agent.py` to customize profiling scenarios
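The actual contents of `tools/profile_agent.py` are not shown here; a cProfile-based scenario of the kind it might contain could be structured as follows, with `run_agent_scenario` standing in for whatever agent runner workload is being measured.

```python
# Illustrative profiling scaffold; the real tools/profile_agent.py may be organized differently.
import cProfile
import pstats


def run_agent_scenario() -> None:
    # Placeholder workload standing in for an agent runner call.
    sum(i * i for i in range(1_000_000))


if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    run_agent_scenario()
    profiler.disable()
    # Report the 15 functions with the highest cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(15)
```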
- See `model_card.md` for agent limitations, intended use, and ethical guidance
- All agents/models must have a model card before deployment