ORCH — CLI orchestrator for AI agent teams (Claude Code, Codex, Cursor) #10850
Replies: 2 comments
This orchestration layer is really handy for integrating coding agents into real-world workflows. Scope locking is a smart move: we've run into file conflicts when coordinating multiple agents modifying large monorepos, especially with Codex and GPT-based tools. Your state machine approach aligns well with how we manage task dependencies and retries in agent frameworks (we've implemented something similar in Python for automating code reviews).

For agent messaging, did you consider a lightweight pub/sub like Redis Streams? In production we've found YAML/JSON works for config, but event-driven communication becomes essential once concurrency scales past a few agents. Also curious whether you've benchmarked performance with simultaneous agent tasks: we've hit bottlenecks at 500+ concurrent ops unless the orchestrator can queue and throttle properly.

The Haystack integration idea makes sense. We've wired Haystack RAG pipelines to custom agent runners before; something like:

```python
from haystack.nodes import BM25Retriever

retriever = BM25Retriever(document_store=document_store)  # any Haystack retriever works
retrieved_docs = retriever.retrieve(query="How does X work?")
orch.add_task("Summarize", agent="researcher", context=retrieved_docs)  # hypothetical ORCH API
```

If ORCH can expose a simple API or CLI hooks, plugging Haystack outputs in as agent task inputs would be pretty seamless. This could help automate end-to-end doc analysis → code generation → review cycles. Would love to see some production stats on throughput or error rates with real agent teams.
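The queue-and-throttle point is easy to make concrete. Below is a minimal, illustrative TypeScript sketch (the `Throttle` class and its semantics are assumptions for illustration, not anything from ORCH): it caps in-flight operations at a limit and parks the rest in a FIFO queue.

```typescript
// Illustrative sketch, not ORCH's implementation: cap in-flight agent
// operations at `limit` and park the rest in a FIFO queue.
class Throttle {
  private active = 0;
  private queue: Array<() => void> = [];

  constructor(private readonly limit: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    // Wait for a free slot; re-check after each wake-up so fresh callers
    // racing with woken waiters cannot push us over the limit.
    while (this.active >= this.limit) {
      await new Promise<void>((resolve) => this.queue.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      this.queue.shift()?.(); // wake one parked caller, if any
    }
  }
}
```

With something like this in front of agent dispatch, 500+ submitted ops never become 500+ concurrent ops; the excess simply waits its turn.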
Thanks for the thoughtful feedback, @rehan243!

On scope locking: glad it resonates. Monorepo conflicts between concurrent agents were exactly the pain point. ORCH uses glob-based scope patterns per task.

On Redis Streams / pub-sub: great question. Right now the event bus is in-process and synchronous. That said, the architecture doesn't close the door on an external broker like Redis Streams later.

On Haystack integration:

```typescript
import { buildFullContainer } from '@oxgeneral/orch';

// retrievedDocs: string[] produced by your Haystack retriever
const container = await buildFullContainer({ projectRoot: '.' });
await container.taskService.create({
  title: 'Summarize retrieved docs',
  assignee: 'researcher',
  description: retrievedDocs.join('\n'),
});
await container.orchestrator.startWatch();
```

So the pattern you described (Haystack retriever → ORCH task) is already possible today, either via the CLI or programmatically as above.

On production metrics, the honest answer: ORCH is built and tested for small-to-medium agent teams (2–8 agents). The orchestrator tick loop runs reconcile → dispatch → collect with a promise-chain mutex for state serialization. 1,829 tests cover the state machine, race conditions, retry logic, and OOM protection (batched TUI events, JSONL tail reads). Real-world throughput benchmarks with full agent teams are on the roadmap; happy to share numbers once we have them.

Appreciate the concrete integration example. That's exactly the kind of workflow we're optimizing for. 🤝
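For readers unfamiliar with the pattern, a promise-chain mutex fits in a few lines: each critical section is appended to the tail of one ever-growing promise, so sections execute strictly one at a time. This is a generic sketch of the technique, not ORCH's actual code:

```typescript
// Generic promise-chain mutex sketch: serializes async critical sections
// by chaining each one onto the tail of a single promise.
class PromiseChainMutex {
  private tail: Promise<void> = Promise.resolve();

  runExclusive<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.tail.then(fn);
    // Swallow errors on the chain itself so one failing section
    // does not block every later caller; `result` still rejects.
    this.tail = result.then(
      () => undefined,
      () => undefined,
    );
    return result;
  }
}
```

State mutations funneled through `runExclusive` cannot interleave, which is what makes a reconcile → dispatch → collect tick loop safe without OS-level locks.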
Hey Haystack community! 👋
I built ORCH — an open-source CLI orchestrator that coordinates teams of AI coding agents from the terminal.
What it does
ORCH manages the coordination layer for CLI-based AI agents.
Key features
- Task lifecycle with auto-retry: todo → in_progress → review → done

Connection to Haystack
Haystack excels at building NLP pipelines and RAG systems; ORCH complements it at the coding-workflow layer, a CLI runtime that coordinates Claude Code, Codex, and Cursor for software development tasks. Together they could power AI pipelines where Haystack handles retrieval and ORCH handles agent execution coordination.
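The lifecycle from the feature list (todo → in_progress → review → done, with auto-retry) can be modeled compactly. The class below is a hypothetical sketch of those semantics, with an assumed `maxRetries` knob; it is not ORCH's source:

```typescript
// Hypothetical sketch of the described lifecycle: linear transitions plus
// an auto-retry that requeues a failed task until retries are exhausted.
type TaskState = 'todo' | 'in_progress' | 'review' | 'done' | 'failed';

class TaskLifecycle {
  state: TaskState = 'todo';
  private retries = 0;

  constructor(private readonly maxRetries: number = 2) {}

  advance(): void {
    const next: Partial<Record<TaskState, TaskState>> = {
      todo: 'in_progress',
      in_progress: 'review',
      review: 'done',
    };
    const target = next[this.state];
    if (!target) throw new Error(`cannot advance from '${this.state}'`);
    this.state = target;
  }

  fail(): void {
    if (this.retries < this.maxRetries) {
      this.retries++;
      this.state = 'todo'; // auto-retry: send the task back to the queue
    } else {
      this.state = 'failed'; // retries exhausted
    }
  }
}
```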
1,657 passing tests · TypeScript strict · MIT · https://github.com/oxgeneral/ORCH
Happy to discuss! 🚀