A modular AI orchestration system with semantic tool discovery, intelligent error recovery, and pattern learning capabilities. Built for production use with PostgreSQL logging, tool lifecycle management, and 100% local execution.
What Actually Works (Production-Ready):
- Intent Classification with Pattern Caching: Learns from past decisions to speed up repeated queries
- 3-Stage Tool Discovery: Semantic search (ChromaDB) → Statistical ranking → LLM selection
- Intelligent Error Recovery: Automatic retry with exponential backoff, fallback strategies, and adaptation
- Tool Lifecycle Management: Detects deleted/modified tools and prevents stale executions
- PostgreSQL Analytics: Full execution history, tool statistics, and performance tracking
- Dynamic Tool Loading: Tools auto-discovered from filesystem with zero configuration
- Code Generation & Sandboxed Execution: Safe Python code execution with controlled namespaces
- Memory Operations: Persistent key-value storage for context and state
- 100% Local Execution: Runs entirely on your infrastructure with Ollama - no cloud APIs
Built but Not Yet Integrated (see docs/INTEGRATION_AUDIT.md):
- Neural Pathway Cache (System 1/2 fast path)
- Goal Decomposition Learning (pattern-based subgoal suggestions)
- Autonomous Loop (continuous self-improvement)
- Tool Forge (dynamic tool creation)
- Parallel Voting Systems (multi-voter consensus)
- Advanced Analytics & Monitoring
Why the separation? The core system is production-ready and stable, while advanced features are being integrated systematically to maintain quality (see docs/INTEGRATION_ACTION_PLAN.md).
- Docker Engine 20.10+
- Docker Compose 2.0+
- 8GB+ RAM (for LLM models)
- (Optional) NVIDIA GPU with drivers for GPU acceleration
1. Clone the repository:

   git clone https://github.com/gradrix/dendrite.git
   cd dendrite

2. Start services:

   # For CPU-only environments (CI, cloud servers)
   docker compose --profile cpu up -d

   # For GPU-enabled environments (local with NVIDIA GPU)
   docker compose --profile gpu up -d

3. Run setup:

   ./scripts/setup.sh

4. Execute a goal:

   ./scripts/run.sh ask "Say hello to the world, mate"
   ./scripts/run.sh ask "Remember that my favorite color is blue"
   ./scripts/run.sh ask "What is my favorite color?"
🧠 Neural Engine - Self-Improving AI System
==========================================
🐳 Ensuring services are running...
✅ Redis ready
✅ PostgreSQL ready
✅ Ollama ready
🗄️ Running database migrations...
✅ Database migrations complete
✅ All services ready

🎯 NEW GOAL
================================================================================
Goal: Say hello to the world, mate
Time: 14:23:45
================================================================================

💨 CACHE HIT (System 1 - Fast Path)
Intent cache hit: generative (confidence: 0.87)

✅ GOAL COMPLETED SUCCESSFULLY
================================================================================
Result: "G'day, world! How's it going, mate? Hope you're having a bonzer day!"
Duration: 2.31s
Steps: 1

📊 Execution Summary:
   Total steps: 1
   Duration: 2.31s
   Intent cache hit: Yes
   Decomposition pattern: No (not integrated)
   Errors: No
# Run all tests
./scripts/test.sh
# Run specific test suite
pytest neural_engine/tests/test_phase6_full_pipeline.py -v
# Run with coverage
pytest --cov=neural_engine --cov-report=html

Orchestrator
Central coordinator routing goals through generative or tool-use pipelines.
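As an illustration of this routing, a minimal sketch (class and method names such as Orchestrator.handle are assumptions, not the project's actual API):

# Illustrative routing sketch; all names here are hypothetical.
class Orchestrator:
    def __init__(self, classifier, generative, tool_pipeline):
        self.classifier = classifier
        self.generative = generative
        self.tool_pipeline = tool_pipeline

    def handle(self, goal: str):
        intent = self.classifier.classify(goal)   # "generative" or "tool_use"
        if intent == "generative":
            return self.generative.respond(goal)  # direct LLM response
        return self.tool_pipeline.run(goal)       # discovery -> selection -> codegen -> sandbox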
Intent Classifier (with Pattern Cache)
Determines intent (generative vs tool_use) and learns from past decisions for faster classification.
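A rough sketch of the pattern-cache idea, assuming embedding similarity against past decisions with the 0.75 threshold mentioned under Performance (all names here are hypothetical):

# Hypothetical sketch: reuse a past intent decision when a new goal is
# similar enough to a cached one (cosine similarity >= 0.75).
import numpy as np

class IntentPatternCache:
    def __init__(self, embed_fn, threshold=0.75):
        self.embed = embed_fn          # text -> np.ndarray
        self.threshold = threshold
        self.entries = []              # [(embedding, intent, confidence)]

    def lookup(self, goal):
        vec = self.embed(goal)
        for emb, intent, conf in self.entries:
            sim = float(np.dot(vec, emb) / (np.linalg.norm(vec) * np.linalg.norm(emb)))
            if sim >= self.threshold:
                return intent, conf    # System 1 fast path: skip the LLM call
        return None                    # fall back to LLM classification

    def store(self, goal, intent, confidence):
        self.entries.append((self.embed(goal), intent, confidence))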
Tool Selector (3-Stage Process)
- Semantic Search: ChromaDB vector search (1000+ tools → ~10 candidates)
- Statistical Ranking: Performance-based filtering (10 → 5 top tools)
- LLM Selection: Final intelligent choice from top candidates
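The three stages compose roughly like this (a sketch under assumed interfaces; discovery, stats, and llm_pick are hypothetical names):

# Hypothetical composition of the 3-stage selection funnel.
def select_tool(goal, discovery, stats, llm_pick):
    candidates = discovery.search(goal, n_results=10)      # stage 1: semantic search
    ranked = stats.rank_by_success_rate(candidates)[:5]    # stage 2: statistical ranking
    return llm_pick(goal, ranked)                          # stage 3: LLM makes the final choice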
Code Generator
Generates executable Python code using selected tools with proper parameters.
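As a rough illustration, generation can be as simple as prompting the local Ollama API with the tool definition (the /api/generate endpoint is Ollama's real API; the prompt wording and function name are invented):

# Sketch only: ask the local Ollama model to write code that calls the tool.
import json
import requests

def generate_code(goal, tool_definition, host="http://ollama:11434", model="mistral"):
    prompt = (
        f"Goal: {goal}\n"
        f"Tool definition: {json.dumps(tool_definition)}\n"
        "Write Python code that calls this tool with appropriate parameters."
    )
    resp = requests.post(f"{host}/api/generate",
                         json={"model": model, "prompt": prompt, "stream": False})
    resp.raise_for_status()
    return resp.json()["response"]  # generated Python source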
Sandbox
Isolated Python execution with namespace control, timeout protection, and result handling.
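In spirit, sandboxing restricts the namespace handed to exec and bounds runtime; a simplified sketch (real isolation needs considerably more than this):

# Simplified sketch: run generated code in a subprocess with a
# controlled namespace and a hard timeout.
import multiprocessing

def _run(code, tools, queue):
    namespace = {"__builtins__": {}, "tools": tools, "result": None}  # controlled namespace
    exec(code, namespace)
    queue.put(namespace.get("result"))

def sandboxed_exec(code, tools, timeout=30):
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_run, args=(code, tools, queue))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():            # timeout protection
        proc.terminate()
        raise TimeoutError(f"Execution exceeded {timeout}s")
    return queue.get() if not queue.empty() else None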
Tool Discovery
ChromaDB-backed semantic search indexing tool descriptions for relevant matches.
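With the chromadb client, indexing and querying tool descriptions looks roughly like this (the collection name is an assumption):

# Index tool descriptions, then retrieve the closest matches for a goal.
import chromadb

client = chromadb.Client()
collection = client.get_or_create_collection("tools")  # collection name assumed

def index_tool(name, description):
    collection.add(ids=[name], documents=[description])

def discover(goal, n_results=10):
    hits = collection.query(query_texts=[goal], n_results=n_results)
    return hits["ids"][0]  # candidate tool names, nearest first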
Tool Lifecycle Manager
Monitors filesystem for tool changes, detects deleted tools, prevents stale executions.
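One plausible mechanism (hypothetical; the project may track changes differently) is fingerprinting each tool file and diffing on every run:

# Hypothetical change detection: hash tool files, flag missing/modified ones.
import hashlib
from pathlib import Path

def fingerprint(tools_dir="neural_engine/tools"):
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in Path(tools_dir).glob("*.py")}

def detect_changes(previous, current):
    deleted = set(previous) - set(current)                   # tool files removed
    modified = {name for name in previous.keys() & current.keys()
                if previous[name] != current[name]}          # contents changed
    return deleted, modified  # deregister these before executing anything stale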
Error Recovery Neuron
Implements retry with exponential backoff, fallback strategies, and adaptive error handling.
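Retry with exponential backoff plus a fallback is a standard pattern; a minimal sketch (attempt counts and delays are illustrative, not the project's values):

# Generic retry-with-backoff sketch; parameters are illustrative.
import time

def with_recovery(action, fallback=None, max_attempts=3, base_delay=1.0):
    for attempt in range(max_attempts):
        try:
            return action()
        except Exception:
            if attempt == max_attempts - 1:
                if fallback is not None:
                    return fallback()              # last-resort strategy
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...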
Execution Store (PostgreSQL)
Logs all executions, tool statistics, performance metrics, and analytics data.
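Conceptually, each run appends a row to an executions table; a sketch with psycopg2 (the schema, column names, and credentials are assumptions):

# Hypothetical logging call; the real schema may differ.
import psycopg2

def log_execution(conn, goal, tool_name, duration_s, success):
    with conn, conn.cursor() as cur:
        cur.execute(
            """INSERT INTO executions (goal, tool_name, duration_s, success)
               VALUES (%s, %s, %s, %s)""",
            (goal, tool_name, duration_s, success),
        )

conn = psycopg2.connect(host="postgres", dbname="neural_engine",
                        user="postgres", password="postgres")  # assumed credentials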
Tool Registry
Dynamic tool loading from filesystem with automatic discovery and indexing.
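Dynamic discovery typically walks the tools package and registers every BaseTool subclass; a sketch (only BaseTool and the neural_engine/tools/ path come from this README, and no-argument constructors are assumed):

# Sketch of filesystem-based tool discovery via importlib.
import importlib
import inspect
import pkgutil

from neural_engine.tools.base_tool import BaseTool

def load_tools(package="neural_engine.tools"):
    registry = {}
    pkg = importlib.import_module(package)
    for info in pkgutil.iter_modules(pkg.__path__):
        module = importlib.import_module(f"{package}.{info.name}")
        for _, cls in inspect.getmembers(module, inspect.isclass):
            if issubclass(cls, BaseTool) and cls is not BaseTool:
                tool = cls()  # assumes no-arg constructor
                registry[tool.get_tool_definition()["name"]] = tool
    return registry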
Message Bus (Redis)
Event-driven communication between components with pub/sub messaging.
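With redis-py, pub/sub between components reduces to a few calls (the channel name is an assumption):

# Minimal redis-py pub/sub sketch; the channel name is hypothetical.
import json
import redis

r = redis.Redis(host="redis", db=0)

def publish_event(event_type, payload):
    r.publish("neural_engine.events", json.dumps({"type": event_type, **payload}))

def listen():
    pubsub = r.pubsub()
    pubsub.subscribe("neural_engine.events")
    for message in pubsub.listen():
        if message["type"] == "message":
            yield json.loads(message["data"])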
User Goal
   ↓
Intent Classifier (+ Pattern Cache) → [generative] → Generative Neuron → Response
   ↓
[tool_use]
   ↓
Tool Discovery (Semantic Search)
   ↓
Tool Selector (3-stage)
   ↓
Code Generator
   ↓
Sandbox Execution (+ Error Recovery)
   ↓
Execution Store (PostgreSQL Logging)
   ↓
Result
See docs/INTEGRATION_AUDIT.md for details on:
- Neural Pathway Cache (System 1 fast path)
- Goal Decomposition Learner (pattern learning)
- Autonomous Loop (self-improvement)
- Tool Forge (dynamic tool creation)
- Voting Systems (parallel consensus)
- Advanced Analytics
CPU Profile (default, for CI and GPU-less environments):
docker compose --profile cpu up -d

GPU Profile (for NVIDIA GPU acceleration):

docker compose --profile gpu up -d

Key environment variables in docker-compose.yml:

- OLLAMA_HOST: Ollama API endpoint (default: http://ollama:11434)
- OLLAMA_MODEL: LLM model to use (default: mistral)
- REDIS_HOST: Redis server for message bus (default: redis)
- REDIS_DB: Redis database number (0 for production, 1 for tests)
- POSTGRES_HOST: PostgreSQL for analytics storage
Tested with:
- mistral (default, good balance)
- llama3.1:8b (higher quality)
- llama3.2:3b (faster, lower memory)
Change model:
docker compose exec ollama-cpu ollama pull llama3.1:8b
# Update OLLAMA_MODEL in docker-compose.yml.
├── neural_engine/
│   ├── core/               # Core neurons and orchestration
│   ├── tools/              # Available tool implementations
│   ├── tests/              # Test suites
│   └── prompts/            # LLM prompt templates
├── scripts/                # Utility scripts
├── docs/                   # Documentation
├── .github/workflows/      # CI configuration
└── docker-compose.yml      # Service orchestration
1. Create a tool file in neural_engine/tools/:

   from neural_engine.tools.base_tool import BaseTool

   class MyCustomTool(BaseTool):
       def get_tool_definition(self):
           return {
               "name": "my_custom_tool",
               "description": "What this tool does",
               "parameters": [
                   {"name": "param1", "type": "string", "description": "...", "required": True}
               ]
           }

       def execute(self, **kwargs):
           param1 = kwargs.get('param1')
           # Your logic here
           return {"result": "success"}

2. The tool is discovered automatically by the registry on startup.

3. Test your tool:

   pytest neural_engine/tests/test_tool_registry.py
   python run_goal.py "Use my custom tool with param1 as test"
# Start with live code reloading
./scripts/dev.sh
# Access shell in container
./scripts/shell.sh
# Watch logs
./scripts/logs.sh

Test suite organization:

- test_phase0_*.py - Intent classification
- test_phase1_*.py - Generative pipeline
- test_phase2_*.py - Tool registry
- test_phase3_*.py - Tool selection
- test_phase4_*.py - Code generation
- test_phase5_*.py - Sandbox execution
- test_phase6_*.py - Full pipeline integration
- test_phase7_*.py - Tool forge
- test_phase9*.py - Analytics and autonomous systems
- test_tool_discovery.py - Semantic tool search
- test_autonomous_*.py - Self-improvement systems
Run specific test categories:
# Core pipeline tests
pytest neural_engine/tests/test_phase{0..6}*.py -v
# Autonomous systems
pytest neural_engine/tests/test_autonomous*.py -v
# Tool discovery
pytest neural_engine/tests/test_tool_discovery.py -v

Current Status:
- Core pipeline: ✅ Production-ready
- Test coverage: 98%+ on active components
- Pattern cache: 75% similarity threshold for good hit rate
- Model caching: Docker volumes prevent re-downloads
- Error recovery: Automatic retry with fallback
Optimizations:
- Intent pattern caching reduces repeated LLM calls
- Semantic search limits tool candidates (prevents token overflow)
- Statistical ranking prioritizes proven tools
- Database migrations run automatically
- Tool lifecycle prevents stale executions
Performance Metrics (on second run with cache):
- Simple goals: ~2-3s (cache hit)
- Tool-based goals: ~5-10s (with semantic search)
- First run: adds ~40-50s for model loading (models are cached in Docker volumes afterward)
- INTEGRATION_AUDIT.md - What's integrated vs what's built
- INTEGRATION_ACTION_PLAN.md - Integration roadmap and priorities
- ARCHITECTURE.md - System design details
- TESTING_STRATEGY.md - Test organization
- TOOL_LIFECYCLE_MANAGEMENT.md - Tool lifecycle system
- TOOL_LOADING_ARCHITECTURE.md - Dynamic tool loading
- DEBUGGING.md - Troubleshooting guide
- Fork the repository
- Create a feature branch
- Make your changes with tests
- Ensure all tests pass: ./scripts/test.sh
- Submit a pull request
MIT License - see LICENSE file for details
Production Ready: Core orchestration pipeline is stable with 98%+ test coverage on active components.
What's Working:
- ✅ Intent classification with pattern learning
- ✅ 3-stage semantic tool discovery
- ✅ Intelligent error recovery
- ✅ Tool lifecycle management
- ✅ PostgreSQL analytics and logging
- ✅ Dynamic tool loading
- ✅ Redis message bus
- ✅ 100% local execution (no cloud APIs)
What's Next (see docs/INTEGRATION_ACTION_PLAN.md):
- 🚧 Neural Pathway Cache (Phase 2.1)
- 🚧 Goal Decomposition Learner (Phase 2.2)
- 🚧 Voting fallback for ambiguous decisions (Phase 2.4)
- 🔮 Autonomous Loop (Phase 3.1) - the big vision!
- 🔮 Tool Forge for dynamic tool creation (Phase 3.2)
Architecture Philosophy: We maintain a clean separation between production-ready core features and experimental advanced capabilities. This ensures stability while enabling innovation.