Distributed document RAG system with intelligent load balancing across heterogeneous hardware. Auto-discovers Ollama nodes, routes workloads adaptively, and achieves 2x+ speedups through SOLLOL-powered distributed processing. Privacy-first with local/network/cloud interfaces.
What makes this different: Real distributed systems engineering, not just API wrappers. Developed on CPU to ensure universal compatibility, designed for GPU acceleration when available. Handles heterogeneous hardware, network failures, and privacy requirements that rule out cloud APIs.
Clone, start a minimal demo, open the UI:
git clone https://github.com/B-A-M-N/FlockParser && cd FlockParser
# option A: docker-compose demo (recommended)
docker-compose up --build -d
# open Web UI: http://localhost:8501
# open API: http://localhost:8000

If you prefer local Python (no Docker):
# Option B: Use the idempotent install script
./INSTALL_SOLLOL_IDEMPOTENT.sh --mode python
source .venv/bin/activate && python flock_webui.py
# Web UI opens at http://localhost:8501
# Or manually:
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python flock_webui.py   # or flockparsecli.py for CLI

For full setup instructions, see the detailed quickstart below.
Status: Beta (v1.0.0) - Early adopters welcome, but read this first!
What works well:
- ✅ Core distributed processing across heterogeneous nodes
- ✅ GPU detection and VRAM-aware routing
- ✅ Basic PDF extraction and OCR fallback
- ✅ Privacy-first local processing (CLI/Web UI modes)
Known limitations:
- ⚠️ Limited battle testing - Tested by ~2 developers, not yet proven at scale
- ⚠️ Security gaps - See SECURITY.md for current limitations
- ⚠️ Edge cases - Some PDF types may fail (encrypted, complex layouts)
- ⚠️ Test coverage - ~40% coverage, integration tests incomplete
Read before using: KNOWN_ISSUES.md documents all limitations, edge cases, and roadmap honestly.
Recommended for:
- Learning distributed systems
- Research and experimentation
- Personal projects with non-critical data
- Contributors who want to help mature the project
Not yet recommended for:
- ❌ Mission-critical production workloads
- ❌ Regulated industries (healthcare, finance) without additional hardening
- ❌ Large-scale deployments (>50 concurrent users)
Help us improve: Report issues, contribute fixes, share feedback!
FlockParser's distributed inference architecture originated from FlockParser-legacy, which pioneered:
- Auto-discovery of Ollama nodes across heterogeneous hardware
- Adaptive load balancing with GPU/CPU awareness
- VRAM-aware routing and automatic failover mechanisms
This core distributed logic from FlockParser-legacy was later extracted and generalized to become SOLLOL - a standalone distributed inference platform that now powers both FlockParser and SynapticLlamas.
Tested on 2-node CPU cluster:
| Version | Workload | Time | Speedup | Notes |
|---|---|---|---|---|
| Legacy | 20 PDFs (~400 pages) | 60.9 min | Baseline | Single-threaded routing |
| Current (SOLLOL) | 20 PDFs (~400 pages) | 30.0 min | 2.0× | Intelligent load balancing |
Hardware:
- 2Γ CPU nodes (consumer hardware)
- SOLLOL auto-discovery and adaptive routing
- Processing rate: 1.9 chunks/sec across cluster
GPU acceleration: Designed for GPU-aware routing (VRAM monitoring, adaptive allocation), not yet benchmarked.
See benchmarks: performance-comparison-sollol.png
| Interface | Privacy Level | External Calls | Best For |
|---|---|---|---|
| CLI (flockparsecli.py) | 🟢 100% Local | None | Personal use, air-gapped systems |
| Web UI (flock_webui.py) | 🟢 100% Local | None | GUI users, visual monitoring |
| REST API (flock_ai_api.py) | 🟡 Local Network | None | Multi-user, app integration |
| MCP Server (flock_mcp_server.py) | 🔴 Cloud | Claude Desktop (Anthropic API) | AI assistant integration |
- Key Features
- Who Uses This? - Target users & scenarios
- How It Works (5-Second Overview) - Visual for non-technical evaluators
- Architecture | Deep Dive: Architecture & Design Decisions
- Quickstart
- Performance & Benchmarks
- Showcase: Real-World Example - Try it yourself
- Usage Examples
- Security & Production
- Integration with SynapticLlamas & SOLLOL - Complete AI Ecosystem
- Troubleshooting
- Contributing
- Intelligent Load Balancing - Auto-discovers Ollama nodes, detects GPU vs CPU, monitors VRAM, and routes work adaptively (2x speedup on CPU clusters, designed for GPU acceleration)
- Multi-Protocol Support - CLI (100% local), REST API (network), MCP (Claude Desktop), Web UI (Streamlit) - choose your privacy level
- Adaptive Routing - Sequential vs parallel decisions based on cluster characteristics (prevents slow nodes from bottlenecking)
- Production Observability - Real-time health scores, performance tracking, VRAM monitoring, automatic failover
- Privacy-First Architecture - No external API calls required (CLI mode), all processing on-premise
- Complete Pipeline - PDF extraction → OCR fallback → Multi-format conversion → Vector embeddings → RAG with source citations
FlockParser is designed for engineers and researchers who need private, on-premise document intelligence with real distributed systems capabilities.
| User Type | Use Case | Why FlockParser? |
|---|---|---|
| ML/AI Engineers | Process research papers, build knowledge bases, experiment with RAG systems | GPU-aware routing, 21× faster embeddings, full pipeline control |
| Data Scientists | Extract insights from large document corpora (100s-1000s of PDFs) | Distributed processing, semantic search, production observability |
| Enterprise Engineers | On-premise document search for regulated industries (healthcare, legal, finance) | 100% local processing, no cloud APIs, privacy-first architecture |
| Researchers | Build custom RAG systems, experiment with distributed inference patterns | Full source access, extensible architecture, real benchmarks |
| DevOps/Platform Engineers | Set up document intelligence infrastructure for teams | Multi-node setup, health monitoring, automatic failover |
| Students/Learners | Understand distributed systems, GPU orchestration, RAG architectures | Real working example, comprehensive docs, honest limitations |
β "I have 500 research papers and a spare GPU machine" β Process your corpus 20Γ faster with distributed nodes β "I can't send medical records to OpenAI" β 100% local processing (CLI/Web UI modes) β "I want to experiment with RAG without cloud costs" β Full pipeline, runs on your hardware β "I need to search 10,000 internal documents" β ChromaDB vector search with sub-20ms latency β "I have mismatched hardware (old laptop + gaming PC)" β Adaptive routing handles heterogeneous clusters
- ❌ Production SaaS with 1000+ concurrent users → current SQLite backend limits concurrency (~50 users)
- ❌ Mission-critical systems requiring 99.9% uptime → still in Beta, see KNOWN_ISSUES.md
- ❌ Simple one-time PDF extraction → overkill; use pdfplumber directly
- ❌ Cloud-first deployments → designed for on-premise/hybrid; cloud works but misses GPU routing benefits
Bottom line: If you're building document intelligence infrastructure on your own hardware and need distributed processing with privacy guarantees, FlockParser is for you.
For recruiters and non-technical evaluators:
┌──────────────────────────────────────────────────────────────┐
│                            INPUT                              │
│     Your documents (PDFs, research papers, internal docs)     │
└──────────────────────────────┬───────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│                          FLOCKPARSER                          │
│                                                               │
│  1. Extracts text from PDFs (handles scans with OCR)          │
│  2. Splits into chunks, creates vector embeddings             │
│  3. Distributes work across GPU/CPU nodes (auto-discovery)    │
│  4. Stores in searchable vector database (ChromaDB)           │
│                                                               │
│  Distributed processing: SOLLOL routing → 2× speedup          │
│  Privacy: 100% local (no cloud APIs)                          │
└──────────────────────────────┬───────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│                            OUTPUT                             │
│  Semantic search:  "Find all mentions of transformers"        │
│  AI chat:          "Summarize the methodology section"        │
│  Source citations: exact page/document references             │
│  4 interfaces:     CLI, Web UI, REST API, Claude Desktop      │
└──────────────────────────────────────────────────────────────┘
Key Innovation: Auto-detects GPU nodes, measures performance, and routes work to fastest hardware. No manual configuration needed.
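To make the routing idea concrete, here is a minimal sketch of how a health-score-based sequential-vs-parallel decision can work. This is illustrative only, not FlockParser's actual internals: the Node class, plan_routing function, and score weights are assumptions, loosely mirroring the VRAM bonuses described later in this README.

from dataclasses import dataclass

@dataclass
class Node:
    url: str
    has_gpu: bool
    model_in_vram: bool
    latency_ms: float

def health_score(node: Node) -> float:
    # Faster nodes score a bit higher; GPU nodes with the model resident in
    # VRAM get the big bonus, CPU-only nodes are penalized.
    score = 100.0 - node.latency_ms / 10
    if node.model_in_vram:
        score += 200
    elif node.has_gpu:
        score += 50
    else:
        score -= 50
    return score

def plan_routing(nodes: list[Node]) -> tuple[str, list[Node]]:
    # If one node dominates the cluster (e.g. a single GPU box among laptops),
    # parallelizing across much slower peers just adds stragglers, so the whole
    # batch goes to the fast node instead.
    ranked = sorted(nodes, key=health_score, reverse=True)
    if len(ranked) > 1 and health_score(ranked[0]) > 2 * max(health_score(ranked[1]), 1.0):
        return "sequential", ranked[:1]
    return "parallel", ranked

nodes = [
    Node("http://10.9.66.154:11434", has_gpu=False, model_in_vram=False, latency_ms=40),
    Node("http://10.9.66.250:11434", has_gpu=False, model_in_vram=False, latency_ms=55),
]
print(plan_routing(nodes))  # balanced CPU cluster -> ("parallel", [both nodes])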
┌──────────────────────────────────────────────────────────────┐
│            Interfaces (Choose Your Privacy Level)             │
│   CLI (Local) | REST API (Network) | MCP (Claude) | Web UI    │
└──────────────────────────────┬───────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│                    FlockParse Core Engine                     │
│   ┌─────────────┐    ┌──────────────┐    ┌──────────────┐    │
│   │     PDF     │    │   Semantic   │    │     RAG      │    │
│   │  Processing │◄──►│    Search    │◄──►│    Engine    │    │
│   └──────┬──────┘    └──────┬───────┘    └──────┬───────┘    │
│          │                  │                   │            │
│          ▼                  ▼                   ▼            │
│   ┌────────────────────────────────────────────────────┐     │
│   │         ChromaDB Vector Store (Persistent)          │     │
│   └────────────────────────────────────────────────────┘     │
└──────────────────────────────┬───────────────────────────────┘
                               │  Intelligent Load Balancer
                               │  • Health scoring (GPU/VRAM detection)
                               │  • Adaptive routing (sequential vs parallel)
                               │  • Automatic failover & caching
                               ▼
        ┌──────────────────────────────────────────────┐
        │          Distributed Ollama Cluster          │
        │   ┌──────────┐  ┌──────────┐  ┌──────────┐   │
        │   │  Node 1  │  │  Node 2  │  │  Node 3  │   │
        │   │  GPU A   │  │  GPU B   │  │   CPU    │   │
        │   │16GB VRAM │  │ 8GB VRAM │  │ 16GB RAM │   │
        │   │Health:367│  │Health:210│  │Health:50 │   │
        │   └──────────┘  └──────────┘  └──────────┘   │
        └──────────────────────────────────────────────┘
              ▲ Auto-discovery | Performance tracking
Want to understand how this works? Read the Architecture Deep Dive for detailed explanations of:
- Why distributed AI inference solves real-world problems
- How adaptive routing decisions are made (sequential vs parallel)
- MCP integration details and privacy implications
- Technical trade-offs and design decisions
Requirements:
- Python 3.10 or later
- Ollama 0.1.20+ (install from ollama.com)
- 4GB+ RAM (8GB+ recommended for GPU nodes)
# 1. Install FlockParser
pip install flockparser
# 2. Start Ollama and pull models
ollama serve # In a separate terminal
ollama pull mxbai-embed-large # Required for embeddings
ollama pull llama3.1:latest # Required for chat
# 3. Run your preferred interface
flockparse-webui   # Web UI - easiest (recommended)
flockparse         # CLI - 100% local
flockparse-api     # REST API - multi-user
flockparse-mcp     # MCP - Claude Desktop integration

💡 Pro tip: Start with the Web UI to see distributed processing with real-time VRAM monitoring and node health dashboards.
Want to use FlockParser in your own Python code? Here's the minimal example:
# Programmatic example
from flockparser import FlockParser
fp = FlockParser() # uses default config/registry
fp.discover_nodes(timeout=3.0) # waits for any SOLLOL/agents to register
result = fp.process_pdf("example.pdf") # routes work via SOLLOL; returns result dict
print(result["summary"][:250])That's it! FlockParser handles:
- ✅ GPU detection and routing
- ✅ Load balancing across nodes
- ✅ Vector embeddings and storage
- ✅ Automatic failover
More examples: See showcase/process_arxiv_papers.py for batch processing and flockparsecli.py for the full CLI implementation.
If you want to contribute or modify the code:
git clone https://github.com/B-A-M-N/FlockParser.git
cd FlockParser
pip install -e .  # Editable install

# Start the CLI
python flockparsecli.py
# Process the sample PDF
> open_pdf testpdfs/sample.pdf
# Chat with it
> chat
You: Summarize this document

First time? Start with the Web UI (streamlit run flock_webui.py) - it's the easiest way to see distributed processing in action with a visual dashboard.
# Clone and deploy everything
git clone https://github.com/B-A-M-N/FlockParser.git
cd FlockParser
docker-compose up -d
# Access services
# Web UI: http://localhost:8501
# REST API: http://localhost:8000
# Ollama: http://localhost:11434

| Service | Port | Description |
|---|---|---|
| Web UI | 8501 | Streamlit interface with visual monitoring |
| REST API | 8000 | FastAPI with authentication |
| CLI | - | Interactive terminal (docker-compose run cli) |
| Ollama | 11434 | Local LLM inference engine |
- ✅ Multi-stage build - Optimized image size
- ✅ Non-root user - Security hardened
- ✅ Health checks - Auto-restart on failure
- ✅ Volume persistence - Data survives restarts
- ✅ GPU support - Uncomment the deploy section for NVIDIA GPUs
# Set API key
export FLOCKPARSE_API_KEY="your-secret-key"
# Set log level
export LOG_LEVEL="DEBUG"
# Deploy with custom config
docker-compose up -d

Uncomment the GPU section in docker-compose.yml:

deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]

Then run: docker-compose up -d
graph LR
A[π Git Push] --> B[π Lint & Format]
B --> C[π§ͺ Test Suite]
B --> D[π Security Scan]
C --> E[π³ Build Multi-Arch]
D --> E
E --> F[π¦ Push to GHCR]
F --> G[π Deploy]
style A fill:#4CAF50
style B fill:#2196F3
style C fill:#2196F3
style D fill:#FF9800
style E fill:#9C27B0
style F fill:#9C27B0
style G fill:#F44336
Automated on every push to main:
| Stage | Tools | Purpose |
|---|---|---|
| Code Quality | black, flake8, mypy | Enforce formatting & typing standards |
| Testing | pytest (Python 3.10/3.11/3.12) | 78% coverage across versions |
| Security | Trivy | Vulnerability scanning & SARIF reports |
| Build | Docker Buildx | Multi-architecture (amd64, arm64) |
| Registry | GitHub Container Registry | Versioned image storage |
| Deploy | On release events | Automated production deployment |
Pull the latest image:
docker pull ghcr.io/benevolentjoker-johnl/flockparser:latest

View pipeline runs: https://github.com/B-A-M-N/FlockParser/actions
Want distributed processing? Set up multiple Ollama nodes across your network for automatic load balancing.
On each additional machine:
# 1. Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# 2. Configure for network access
export OLLAMA_HOST=0.0.0.0:11434
ollama serve
# 3. Pull models
ollama pull mxbai-embed-large
ollama pull llama3.1:latest
# 4. Allow firewall (if needed)
sudo ufw allow 11434/tcp  # Linux

FlockParser will automatically discover these nodes!
Check with:
python flockparsecli.py
> lb_stats # Shows all discovered nodes and their capabilitiesπ Complete Guide: See DISTRIBUTED_SETUP.md for:
- Step-by-step multi-machine setup
- Network configuration and firewall rules
- Troubleshooting node discovery
- Example setups (budget home lab to professional clusters)
- GPU router configuration for automatic optimization
- Web UI (flock_webui.py): 🟢 100% local, runs in your browser
- CLI (flockparsecli.py): 🟢 100% local, zero external calls
- REST API (flock_ai_api.py): 🟡 Local network only
- MCP Server (flock_mcp_server.py): 🔴 Integrates with Claude Desktop (Anthropic cloud service)
Choose the interface that matches your privacy requirements!
| Feature | FlockParse | LangChain | LlamaIndex | Haystack |
|---|---|---|---|---|
| 100% Local/Offline | ✅ Yes (CLI/JSON) | | | |
| Zero External API Calls | ✅ Yes (CLI/JSON) | ❌ No | ❌ No | ❌ No |
| Built-in GPU Load Balancing | ✅ Yes (auto) | ❌ No | ❌ No | ❌ No |
| VRAM Monitoring | ✅ Yes (dynamic) | ❌ No | ❌ No | ❌ No |
| Multi-Node Auto-Discovery | ✅ Yes | ❌ No | ❌ No | ❌ No |
| CPU Fallback Detection | ✅ Yes | ❌ No | ❌ No | ❌ No |
| Document Format Export | ✅ 4 formats | Limited | Limited | |
| Setup Complexity | 🟢 Simple | 🔴 Complex | 🔴 Complex | 🟡 Medium |
| Dependencies | 🟢 Minimal | 🔴 Heavy | 🔴 Heavy | 🟡 Medium |
| Learning Curve | 🟢 Low | 🔴 Steep | 🔴 Steep | 🟡 Medium |
| Privacy Control | 🟢 High (CLI/JSON) | 🔴 Limited | 🔴 Limited | 🟡 Medium |
| Out-of-Box Functionality | ✅ Complete | | | |
| MCP Integration | ✅ Native | ❌ No | ❌ No | ❌ No |
| Embedding Cache | ✅ MD5-based | | | |
| Batch Processing | ✅ Parallel | | | |
| Performance | 2x faster with distributed CPU routing | | | |
| Cost | Free | Free + Paid | Free + Paid | Free + Paid |
- Privacy by Design: CLI and JSON interfaces are 100% local with zero external calls (MCP interface uses Claude Desktop for chat)
- Intelligent GPU Management: Automatically finds, tests, and prioritizes GPU nodes
- Production-Ready: Works immediately with sensible defaults
- Resource-Aware: Detects VRAM exhaustion and prevents performance degradation
- Complete Solution: CLI, REST API, MCP, and batch interfaces - choose your privacy level
| Processing Mode | Workload | Time | Speedup | What It Shows |
|---|---|---|---|---|
| Legacy (single-threaded) | 20 PDFs | 60.9 min | 1x baseline | Basic routing |
| Current (SOLLOL routing) | 20 PDFs | 30.0 min | 2.0x faster | Intelligent load balancing across 2 CPU nodes |
Why the Speedup?
- SOLLOL intelligently distributes workload across available nodes
- Adaptive parallelism prevents slow nodes from bottlenecking
- Per-node queues with cross-node stealing optimize throughput
- No network overhead (local cluster, no cloud APIs)
GPU acceleration: Designed for GPU-aware routing with VRAM monitoring, not yet benchmarked.
Key Insight: The system automatically detects performance differences and makes routing decisions - no manual GPU configuration needed.
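The "per-node queues with cross-node stealing" idea mentioned above can be sketched in a few lines. This is a simplified illustration under assumed names (embed stands in for an embedding call to one Ollama node); SOLLOL's real scheduler also weighs health scores and latency.

import queue
import threading

def worker(my_queue, other_queues, embed, results, lock):
    # Drain our own queue first; once it is empty, steal work from peers so a
    # fast node never sits idle while a slow node still has a backlog.
    while True:
        chunk = None
        try:
            chunk = my_queue.get_nowait()
        except queue.Empty:
            for q in other_queues:
                try:
                    chunk = q.get_nowait()
                    break
                except queue.Empty:
                    continue
        if chunk is None:
            return  # every queue is empty -> done
        vector = embed(chunk)
        with lock:
            results.append(vector)  # order is not preserved in this sketch

def run_cluster(chunks, node_urls, embed):
    # One queue per node, chunks dealt round-robin, one thread per node.
    queues = [queue.Queue() for _ in node_urls]
    for i, chunk in enumerate(chunks):
        queues[i % len(queues)].put(chunk)
    results, lock, threads = [], threading.Lock(), []
    for i, url in enumerate(node_urls):
        others = queues[:i] + queues[i + 1:]
        t = threading.Thread(target=worker,
                             args=(queues[i], others, lambda c, u=url: embed(u, c), results, lock))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    return results

# Stand-in embed function; in FlockParser this would call a node's embeddings endpoint.
print(len(run_cluster([f"chunk {i}" for i in range(10)],
                      ["http://node-a:11434", "http://node-b:11434"],
                      lambda url, text: [0.0, 1.0])))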
Hardware (Benchmark Cluster):
- Node 1 (10.9.66.154): Consumer CPU (Intel/AMD)
- Node 2 (10.9.66.250): Consumer CPU (Intel/AMD)
- Software: Python 3.10, Ollama, SOLLOL 0.9.60+
Reproducibility:
- Full source code available in this repo
- Test with your own hardware - results will vary based on cluster size and hardware
Compare FlockParser against LangChain and LlamaIndex on your hardware:
# Clone the repo if you haven't already
git clone https://github.com/B-A-M-N/FlockParser.git
cd FlockParser
# Install dependencies
pip install -r requirements.txt
# Run comparison benchmark
python benchmark_comparison.py

What it tests:
- ✅ Processing time for 3 research papers (~50 pages total)
- ✅ GPU utilization and load balancing
- ✅ Memory efficiency
- ✅ Caching effectiveness
Expected results:
- FlockParser: ~15-30s (with GPU cluster)
- LangChain: ~45-60s (single node, no load balancing)
- LlamaIndex: ~40-55s (single node, no GPU optimization)
Why FlockParser is faster:
- GPU-aware routing (automatic)
- Multi-node parallelization
- MD5-based embedding cache
- Model weight persistence
Results saved to benchmark_results.json for your records.
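One of the items above, the MD5-based embedding cache, is easy to picture: stored vectors are keyed by a hash of the chunk text so identical content is never embedded twice. The file layout and function names below are assumptions for illustration, not FlockParser's actual cache format.

import hashlib
import json
from pathlib import Path

CACHE_DIR = Path("embedding_cache")
CACHE_DIR.mkdir(exist_ok=True)

def cached_embedding(text: str, embed_fn) -> list[float]:
    """Return a cached embedding if this exact chunk has been seen before."""
    key = hashlib.md5(text.encode("utf-8")).hexdigest()
    cache_file = CACHE_DIR / f"{key}.json"
    if cache_file.exists():
        return json.loads(cache_file.read_text())   # cache hit: skip the model call
    vector = embed_fn(text)                          # cache miss: compute and store
    cache_file.write_text(json.dumps(vector))
    return vector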
To reproduce the benchmark numbers used in this README:
python benchmark_comparison.py --runs 10 --concurrency 2

The project offers four main interfaces:
- flock_webui.py - Streamlit web interface (NEW!)
- flockparsecli.py - Command-line interface for personal document processing
- flock_ai_api.py - REST API server for multi-user or application integration
- flock_mcp_server.py - Model Context Protocol server for AI assistants like Claude Desktop
Processing influential AI research papers from arXiv.org
Want to see FlockParser in action on real documents? Run the included showcase:
pip install flockparser
python showcase/process_arxiv_papers.py

Downloads and processes 5 seminal AI research papers:
- Attention Is All You Need (Transformers) - arXiv:1706.03762
- BERT - Pre-training Deep Bidirectional Transformers - arXiv:1810.04805
- RAG - Retrieval-Augmented Generation for NLP - arXiv:2005.11401
- GPT-3 - Language Models are Few-Shot Learners - arXiv:2005.14165
- Llama 2 - Open Foundation Language Models - arXiv:2307.09288
Total: ~350 pages, ~25 MB of PDFs
| Configuration | Processing Time | Notes |
|---|---|---|
| Single CPU node | Baseline | Sequential processing |
| Multi-node CPU cluster | ~2x faster | SOLLOL distributed routing |
Note: GPU acceleration designed but not yet benchmarked. Actual performance will vary based on your hardware.
After processing, the script demonstrates:
1. Semantic Search across all papers:

   # Example queries that work immediately:
   "What is the transformer architecture?"
   "How does retrieval-augmented generation work?"
   "What are the benefits of attention mechanisms?"

2. Performance Metrics (showcase/results.json):

   {
     "total_time": "Varies by hardware",
     "papers": [
       { "title": "Attention Is All You Need", "processing_time": 4.2, "status": "success" }
     ],
     "node_info": [...]
   }

3. Human-Readable Summary (showcase/RESULTS.md) with:
   - Per-paper processing times
   - Hardware configuration used
   - Fastest/slowest/average performance
   - Replication instructions
This isn't a toy demo - it's processing actual research papers that engineers read daily. It demonstrates:
- ✅ Real document processing - Complex PDFs with equations, figures, multi-column layouts
- ✅ Production-grade pipeline - PDF extraction → embeddings → vector storage → semantic search
- ✅ Actual performance gains - Measurable speedups on heterogeneous hardware
- ✅ Reproducible results - Run it yourself with pip install, compare your hardware
Perfect for portfolio demonstrations: Show this to hiring managers as proof of real distributed systems work.
git clone https://github.com/B-A-M-N/FlockParser.git
cd FlockParser

- Linux:
  sudo apt-get update
  sudo apt-get install poppler-utils
- macOS:
brew install poppler
- Windows: Download from Poppler for Windows
FlockParse automatically detects scanned PDFs and uses OCR!
- Linux (Ubuntu/Debian):
sudo apt-get update
sudo apt-get install tesseract-ocr tesseract-ocr-eng poppler-utils
- Linux (Fedora/RHEL):
sudo dnf install tesseract poppler-utils
- macOS:
brew install tesseract poppler
- Windows:
- Install Tesseract OCR - Download the installer
- Install Poppler for Windows
- Add both to your system PATH
Verify installation:
tesseract --version
pdftotext -v

pip install -r requirements.txt

Key Python dependencies (installed automatically):
- fastapi, uvicorn - Web server
- pdfplumber, PyPDF2, pypdf - PDF processing
- pytesseract - Python wrapper for Tesseract OCR (requires system Tesseract)
- pdf2image - PDF to image conversion (requires system Poppler)
- Pillow - Image processing for OCR
- chromadb - Vector database
- python-docx - DOCX generation
- ollama - AI model integration
- numpy - Numerical operations
- markdown - Markdown generation
How OCR fallback works (see the sketch below):
1. Tries PyPDF2 text extraction
2. Falls back to pdftotext if no text is found
3. Falls back to OCR if there is still almost no text (<100 chars) - requires Tesseract + Poppler
4. Automatically processes scanned documents without manual intervention
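A simplified sketch of that fallback chain, using the same libraries listed in the dependency section (PyPDF2, Poppler's pdftotext, pdf2image, pytesseract). The function name and thresholds are illustrative rather than FlockParser's exact implementation.

import subprocess
from PyPDF2 import PdfReader
from pdf2image import convert_from_path
import pytesseract

def extract_text(pdf_path: str) -> str:
    # 1. Native text extraction (fast, works for digitally produced PDFs)
    text = "".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    if len(text.strip()) >= 100:
        return text
    # 2. Fall back to Poppler's pdftotext ("-" writes the text to stdout)
    result = subprocess.run(["pdftotext", pdf_path, "-"], capture_output=True, text=True)
    if len(result.stdout.strip()) >= 100:
        return result.stdout
    # 3. Last resort: rasterize the pages and OCR them with Tesseract
    pages = convert_from_path(pdf_path)
    return "\n".join(pytesseract.image_to_string(img) for img in pages)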
- Install Ollama from ollama.com
- Start the Ollama service:
ollama serve
- Pull the required models:
ollama pull mxbai-embed-large
ollama pull llama3.1:latest
Launch the beautiful Streamlit web interface:
streamlit run flock_webui.py

The web UI will open in your browser at http://localhost:8501
Features:
- π€ Upload & Process: Drag-and-drop PDF files for processing
- π¬ Chat Interface: Interactive chat with your documents
- π Load Balancer Dashboard: Real-time monitoring of GPU nodes
- π Semantic Search: Search across all documents
- π Node Management: Add/remove Ollama nodes, auto-discovery
- π― Routing Control: Switch between routing strategies
Perfect for:
- Users who prefer graphical interfaces
- Quick document processing and exploration
- Monitoring distributed processing
- Managing multiple Ollama nodes visually
Run the script:
python flockparsecli.py

Available commands:
π open_pdf <file> β Process a single PDF file
π open_dir <dir> β Process all PDFs in a directory
π¬ chat β Chat with processed PDFs
π list_docs β List all processed documents
π check_deps β Check for required dependencies
π discover_nodes β Auto-discover Ollama nodes on local network
β add_node <url> β Manually add an Ollama node
β remove_node <url> β Remove an Ollama node from the pool
π list_nodes β List all configured Ollama nodes
βοΈ lb_stats β Show load balancer statistics
β exit β Quit the program
Start the API server:
# Set your API key (or use default for testing)
export FLOCKPARSE_API_KEY="your-secret-key-here"
# Start server
python flock_ai_api.py

The server will run on http://0.0.0.0:8000 by default.
All endpoints except / require an API key in the X-API-Key header:
# Default API key (change in production!)
X-API-Key: your-secret-api-key-change-this
# Or set via environment variable
export FLOCKPARSE_API_KEY="my-super-secret-key"| Endpoint | Method | Auth Required | Description |
|---|---|---|---|
/ |
GET | β No | API status and version info |
/upload/ |
POST | β Yes | Upload and process a PDF file |
/summarize/{file_name} |
GET | β Yes | Get an AI-generated summary |
/search/?query=... |
GET | β Yes | Search for relevant documents |
Check API status (no auth required):
curl http://localhost:8000/

Upload a document (with authentication):
curl -X POST \
-H "X-API-Key: your-secret-api-key-change-this" \
-F "file=@your_document.pdf" \
  http://localhost:8000/upload/

Get a document summary:
curl -H "X-API-Key: your-secret-api-key-change-this" \
  http://localhost:8000/summarize/your_document.pdf

Search across documents:
curl -H "X-API-Key: your-secret-api-key-change-this" \
"http://localhost:8000/search/?query=your%20search%20query"- Always change the default API key
- Use environment variables, never hardcode keys
- Use HTTPS in production (nginx/apache reverse proxy)
- Consider rate limiting for public deployments
The MCP server allows FlockParse to be used as a tool by AI assistants like Claude Desktop.
1. Start the MCP server:

   python flock_mcp_server.py

2. Configure Claude Desktop: add to your Claude Desktop config file (~/Library/Application Support/Claude/claude_desktop_config.json on macOS, or %APPDATA%\Claude\claude_desktop_config.json on Windows):

   {
     "mcpServers": {
       "flockparse": {
         "command": "python",
         "args": ["/absolute/path/to/FlockParser/flock_mcp_server.py"]
       }
     }
   }

3. Restart Claude Desktop and you'll see FlockParse tools available!
- process_pdf - Process and add PDFs to the knowledge base
- query_documents - Search documents using semantic search
- chat_with_documents - Ask questions about your documents
- list_documents - List all processed documents
- get_load_balancer_stats - View node performance metrics
- discover_ollama_nodes - Auto-discover Ollama nodes
- add_ollama_node - Add an Ollama node manually
- remove_ollama_node - Remove an Ollama node
In Claude Desktop, you can now ask:
- "Process the PDF at /path/to/document.pdf"
- "What documents do I have in my knowledge base?"
- "Search my documents for information about quantum computing"
- "What does my research say about black holes?"
- Create searchable archives of research papers, legal documents, and technical manuals
- Generate summaries of lengthy documents for quick review
- Chat with your document collection to find specific information without manual searching
- Process contract repositories for semantic search capabilities
- Extract key terms and clauses from legal documents
- Analyze regulatory documents for compliance requirements
- Process and convert academic papers for easier reference
- Create a personal research assistant that can reference your document library
- Generate summaries of complex research for presentations or reviews
- Convert business reports into searchable formats
- Extract insights from PDF-based market research
- Make proprietary documents more accessible throughout an organization
FlockParse includes a sophisticated load balancer that can distribute embedding generation across multiple Ollama instances on your local network.
# Start FlockParse
python flockparsecli.py
# Auto-discover Ollama nodes on your network
⚡ Enter command: discover_nodes

The system will automatically scan your local network (/24 subnet) and detect any running Ollama instances.
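Under the hood, discovery amounts to probing every address on the local /24 for a responding Ollama API. A rough sketch of the idea (the subnet is hard-coded here for illustration, and the real command scans concurrently with short timeouts):

import requests

def discover_ollama_nodes(subnet: str = "192.168.1", port: int = 11434) -> list[str]:
    """Probe every host on the /24 for an Ollama API and return reachable URLs."""
    found = []
    for host in range(1, 255):
        url = f"http://{subnet}.{host}:{port}"
        try:
            # /api/tags lists installed models and answers quickly if Ollama is up
            if requests.get(f"{url}/api/tags", timeout=0.25).ok:
                found.append(url)
        except requests.RequestException:
            continue
    return found

print(discover_ollama_nodes())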
# Add a specific node
⚡ Enter command: add_node http://192.168.1.100:11434

# List all configured nodes
⚡ Enter command: list_nodes

# Remove a node
⚡ Enter command: remove_node http://192.168.1.100:11434

# View load balancer statistics
⚡ Enter command: lb_stats

- Speed: Process documents 2-10x faster with multiple nodes
- GPU Awareness: Automatically detects and prioritizes GPU nodes over CPU nodes
- VRAM Monitoring: Detects when GPU nodes fall back to CPU due to insufficient VRAM
- Fault Tolerance: Automatic failover if a node becomes unavailable
- Load Distribution: Smart routing based on node performance, GPU availability, and VRAM capacity
- Easy Scaling: Just add more machines with Ollama installed
On each additional machine:
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull the embedding model
ollama pull mxbai-embed-large
# Start Ollama (accessible from network)
OLLAMA_HOST=0.0.0.0:11434 ollama serve

Then use discover_nodes or add_node to add them to FlockParse.
FlockParse automatically detects GPU availability and VRAM usage using Ollama's /api/ps endpoint:
- GPU nodes with models loaded in VRAM get a +200 health score bonus
- ⚠️ VRAM-limited nodes that fall back to CPU get only a +50 bonus
- CPU-only nodes get a -50 penalty
To ensure your GPU is being used:
1. Check GPU detection: run the lb_stats command to see node status
2. Preload the model into GPU: run a small inference to load the model into VRAM:
   ollama run mxbai-embed-large "test"
3. Verify VRAM usage: check that size_vram > 0 in /api/ps:
   curl http://localhost:11434/api/ps
4. Increase VRAM allocation: if the model won't load into VRAM, free up GPU memory or use a smaller model
Dynamic VRAM monitoring: FlockParse continuously monitors embedding performance and automatically detects when a GPU node falls back to CPU due to VRAM exhaustion during heavy load.
1. Check Dependencies:
   ⚡ Enter command: check_deps

2. Process a Directory of Research Papers:
   ⚡ Enter command: open_dir ~/research_papers

3. Chat with Your Research Collection:
   ⚡ Enter command: chat
   You: What are the key methods used in the Smith 2023 paper?
1. Start the API Server:
   python flock_ai_api.py

2. Upload Documents via API:
   curl -X POST -H "X-API-Key: your-secret-api-key-change-this" -F "file=@quarterly_report.pdf" http://localhost:8000/upload/

3. Generate a Summary:
   curl -H "X-API-Key: your-secret-api-key-change-this" http://localhost:8000/summarize/quarterly_report.pdf

4. Search Across Documents:
   curl -H "X-API-Key: your-secret-api-key-change-this" "http://localhost:8000/search/?query=revenue%20growth%20Q3"
Problem: Error messages about Ollama not being available or connection failures.
Solution:
- Verify Ollama is running:
  ps aux | grep ollama
- Restart the Ollama service:
  killall ollama
  ollama serve
- Check that you've pulled the required models:
  ollama list
- If models are missing:
  ollama pull mxbai-embed-large
  ollama pull llama3.1:latest
Problem: No text extracted from certain PDFs.
Solution:
1. Check if the PDF is scanned/image-based:
   - Install OCR tools:
     sudo apt-get install tesseract-ocr  # Linux
   - For better scanned PDF handling:
     pip install ocrmypdf
   - Process with OCR:
     ocrmypdf input.pdf output.pdf

2. If the PDF has unusual fonts or formatting:
   - Install poppler-utils for better extraction
   - Try using the -layout option with pdftotext manually:
     pdftotext -layout problem_document.pdf output.txt
Problem: Application crashes with large PDFs or many documents.
Solution:
- Process one document at a time for very large PDFs
- Reduce the chunk size in the code (default is 512 characters; see the chunking sketch below)
- Increase your system's available memory or use a swap file
- For server deployments, consider using a machine with more RAM
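If you do want to experiment with chunk size, a minimal chunker looks like the sketch below. The 512-character default comes from the note above; the overlap value, function name, and input file are illustrative assumptions, not FlockParser's exact splitter.

def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into fixed-size chunks with a small overlap so sentences that
    span a boundary still appear intact in at least one chunk."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Halving chunk_size roughly halves the memory needed per embedding call
# ("big_document.txt" is a placeholder input).
small_chunks = chunk_text(open("big_document.txt").read(), chunk_size=256)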
Problem: Error when trying to start the API server.
Solution:
- Check for port conflicts: lsof -i :8000
- If another process is using port 8000, kill it or change the port
- Verify FastAPI is installed: pip install fastapi uvicorn
- Check Python version compatibility (this project requires Python 3.10+)
# Set a strong API key via environment variable
export FLOCKPARSE_API_KEY="your-super-secret-key-change-this-now"
# Or generate a random one
export FLOCKPARSE_API_KEY=$(openssl rand -hex 32)
# Start the API server
python flock_ai_api.py

Production Checklist:
- ✅ Change default API key - never use your-secret-api-key-change-this
- ✅ Use environment variables - never hardcode secrets in code
- ✅ Enable HTTPS - use nginx or Apache as a reverse proxy with SSL/TLS
- ✅ Add rate limiting - use nginx limit_req or FastAPI middleware
- ✅ Network isolation - don't expose the API to the public internet unless necessary
- ✅ Monitor logs - watch for authentication failures and abuse
Example nginx config with TLS:
server {
listen 443 ssl;
server_name your-domain.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}

What data leaves your machine:
- 🔴 Document queries - Sent to Claude Desktop → Anthropic API
- 🔴 Document snippets - Retrieved context chunks sent as part of prompts
- 🔴 Chat messages - All RAG conversations processed by Claude
- 🟢 Document files - Never uploaded (processed locally, only embeddings stored)
To disable MCP and stay 100% local:
- Remove FlockParse from Claude Desktop config
- Use the CLI (flockparsecli.py) or Web UI (flock_webui.py) instead - both provide full RAG functionality without external API calls
MCP is safe for:
- ✅ Public documents (research papers, manuals, non-sensitive data)
- ✅ Testing and development
- ✅ Personal use where you trust Anthropic's privacy policy

MCP is NOT recommended for:
- ❌ Confidential business documents
- ❌ Personal identifiable information (PII)
- ❌ Regulated data (HIPAA, GDPR sensitive content)
- ❌ Air-gapped or classified environments
SQLite limitations (ChromaDB backend):
- ⚠️ No concurrent writes from multiple processes
- ⚠️ File permissions determine access (not true auth)
- ⚠️ No encryption at rest by default
For production with multiple users:
# Option 1: Separate databases per interface
CLI: chroma_db_cli/
API: chroma_db_api/
MCP: chroma_db_mcp/
# Option 2: Use PostgreSQL backend (ChromaDB supports it)
# See ChromaDB docs: https://docs.trychroma.com/

FlockParse detects GPU usage via Ollama's /api/ps endpoint:
# Check what Ollama reports
curl http://localhost:11434/api/ps
# Response shows VRAM usage:
{
"models": [{
"name": "mxbai-embed-large:latest",
"size": 705530880,
"size_vram": 705530880, # <-- If >0, model is in GPU
...
}]
}

Health score calculation:
- size_vram > 0 → +200 points (GPU in use)
- size_vram == 0 but GPU present → +50 points (GPU available, not used)
- CPU-only → -50 points
This is presence-based detection, not utilization monitoring. It detects if the model loaded into VRAM, not how efficiently it's being used.
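A minimal version of that check, using Ollama's real /api/ps endpoint with the scoring constants quoted above. It is slightly simplified: "models loaded but none in VRAM" is treated as the +50 case and "nothing reported" as CPU-only.

import requests

def gpu_bonus(node_url: str) -> int:
    # Presence-based: did any loaded model end up in VRAM on this node?
    models = requests.get(f"{node_url}/api/ps", timeout=2).json().get("models", [])
    if any(m.get("size_vram", 0) > 0 for m in models):
        return 200   # model resident in GPU memory
    if models:
        return 50    # models loaded, but none in VRAM (CPU fallback)
    return -50       # nothing loaded / CPU-only

print(gpu_bonus("http://localhost:11434"))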
| Feature | Description |
|---|---|
| Multi-method PDF Extraction | Uses both PyPDF2 and pdftotext for best results |
| Format Conversion | Converts PDFs to TXT, Markdown, DOCX, and JSON |
| Semantic Search | Uses vector embeddings to find relevant information |
| Interactive Chat | Discuss your documents with AI assistance |
| Privacy Options | Web UI/CLI: 100% offline; REST API: local network; MCP: Claude Desktop (cloud) |
| Distributed Processing | Load balancer with auto-discovery for multiple Ollama nodes |
| Accurate VRAM Monitoring | Real GPU memory tracking with nvidia-smi/rocm-smi + Ollama API (NEW!) |
| GPU & VRAM Awareness | Automatically detects GPU nodes and prevents CPU fallback |
| Intelligent Routing | 4 strategies (adaptive, round_robin, least_loaded, lowest_latency) with GPU priority |
| Flexible Model Matching | Supports model name variants (llama3.1, llama3.1:latest, llama3.1:8b, etc.) |
| ChromaDB Vector Store | Production-ready persistent vector database with cosine similarity |
| Embedding Cache | MD5-based caching prevents reprocessing same content |
| Model Weight Caching | Keep models in VRAM for faster repeated inference |
| Parallel Batch Processing | Process multiple embeddings simultaneously |
| Database Management | Clear cache and clear DB commands for easy maintenance (NEW!) |
| Filename Preservation | Maintains original document names in converted files |
| REST API | Web server for multi-user/application integration |
| Document Summarization | AI-generated summaries of uploaded documents |
| OCR Processing | Extract text from scanned documents using image recognition |
| Feature | flock_webui.py | flockparsecli.py | flock_ai_api.py | flock_mcp_server.py |
|---|---|---|---|---|
| Interface | π¨ Web Browser (Streamlit) | Command line | REST API over HTTP | Model Context Protocol |
| Ease of Use | βββββ Easiest | ββββ Easy | βββ Moderate | βββ Moderate |
| Use case | Interactive GUI usage | Personal CLI processing | Service integration | AI Assistant integration |
| Document formats | Creates TXT, MD, DOCX, JSON | Creates TXT, MD, DOCX, JSON | Stores extracted text only | Creates TXT, MD, DOCX, JSON |
| Interaction | Point-and-click + chat | Interactive chat mode | Query/response via API | Tool calls from AI assistants |
| Multi-user | Single user (local) | Single user | Multiple users/applications | Single user (via AI assistant) |
| Storage | Local file-based | Local file-based | ChromaDB vector database | Local file-based |
| Load Balancing | ✅ Yes (visual dashboard) | ✅ Yes | ❌ No | ✅ Yes |
| Node Discovery | ✅ Yes (one-click) | ✅ Yes | ❌ No | ✅ Yes |
| GPU Monitoring | ✅ Yes (real-time charts) | ✅ Yes | ❌ No | ✅ Yes |
| Batch Operations | ❌ No | ❌ No | ❌ No | |
| Privacy Level | 🟢 100% Local | 🟢 100% Local | 🟡 Local Network | 🔴 Cloud (Claude) |
| Best for | General users, GUI lovers | Direct CLI usage | Integration with apps | Claude Desktop, AI workflows |
- /converted_files - Stores the converted document formats (flockparsecli.py)
- /knowledge_base - Legacy JSON storage (backwards compatibility only)
- /chroma_db_cli - ChromaDB vector database for CLI (flockparsecli.py) - production storage
- /uploads - Temporary storage for uploaded documents (flock_ai_api.py)
- /chroma_db - ChromaDB vector database (flock_ai_api.py)
- ✅ GPU Auto-Optimization - Background process ensures models use GPU automatically (NEW!)
- ✅ Programmatic GPU Control - Force models to GPU/CPU across distributed nodes (NEW!)
- ✅ Accurate VRAM Monitoring - Real GPU memory tracking across distributed nodes
- ✅ ChromaDB Production Integration - Professional vector database for 100x faster search
- ✅ Clear Cache & Clear DB Commands - Manage embeddings and database efficiently
- ✅ Model Weight Caching - Keep models in VRAM for 5-10x faster inference
- ✅ Web UI - Beautiful Streamlit interface for easy document management
- ✅ Advanced OCR Support - Automatic fallback to OCR for scanned documents
- ✅ API Authentication - Secure API key authentication for REST API endpoints
- Document versioning - Track changes over time (Coming soon)
- Architecture Deep Dive - System design, routing algorithms, technical decisions
- Distributed Setup Guide - Set up your own multi-node cluster
- Performance Benchmarks - Real-world performance data and scaling tests
- ⚠️ Known Issues & Limitations - READ THIS - Honest assessment of current state
- Security Policy - Security best practices and vulnerability reporting
- Error Handling Guide - Troubleshooting common issues
- Contributing Guide - How to contribute to the project
- Code of Conduct - Community guidelines
- Changelog - Version history
- Performance Optimization - Tuning for maximum speed
- GPU Router Setup - Distributed cluster configuration
- GPU Auto-Optimization - Automatic GPU management
- VRAM Monitoring - GPU memory tracking
- Adaptive Parallelism - Smart workload distribution
- ChromaDB Production - Vector database scaling
- Model Caching - Performance through caching
- Node Management - Managing distributed nodes
- Quick Setup - Fast track to getting started
- FlockParser-legacy - Original distributed inference implementation
- Docker Setup - Containerized deployment
- Environment Config - Configuration template
- Tests - Test suite and CI/CD
FlockParser is designed to work seamlessly with SynapticLlamas (multi-agent orchestration) and SOLLOL (distributed inference platform) as a unified AI ecosystem.
┌──────────────────────────────────────────────────────────────┐
│                   SynapticLlamas (v0.1.0+)                    │
│               Multi-Agent System & Orchestration              │
│  • Research agents  • Editor agents  • Storyteller agents     │
└───────────┬───────────────────────────────────┬──────────────┘
            │                                   │
            │ RAG Queries                       │ Distributed
            │ (with pre-computed embeddings)    │ Inference
            │                                   │
    ┌───────▼──────────┐            ┌───────────▼───────────┐
    │   FlockParser    │            │        SOLLOL         │
    │   API (v1.0.4+)  │            │     Load Balancer     │
    │   Port: 8000     │            │      (v0.9.31+)       │
    └───────┬──────────┘            └───────────┬───────────┘
            │                                   │
            │ ChromaDB                          │ Intelligent
            │ Vector Store                      │ GPU/CPU Routing
            │                                   │
    ┌───────▼──────────┐            ┌───────────▼───────────┐
    │  Knowledge Base  │            │     Ollama Nodes      │
    │   41 Documents   │            │     (Distributed)     │
    │   6,141 Chunks   │            │       GPU + CPU       │
    └──────────────────┘            └───────────────────────┘
FlockParser provides document RAG capabilities, SynapticLlamas orchestrates multi-agent workflows, and SOLLOL handles distributed inference with intelligent load balancing.
| Component | Role | Key Feature |
|---|---|---|
| FlockParser | Document RAG & Knowledge Base | ChromaDB vector store with 6,141+ chunks |
| SynapticLlamas | Agent Orchestration | Multi-agent workflows with RAG integration |
| SOLLOL | Distributed Inference | Load balanced embedding & model inference |
# Install all three packages (auto-installs dependencies)
pip install synaptic-llamas # Pulls in flockparser>=1.0.4 and sollol>=0.9.31
# Start FlockParser API (auto-starts with CLI)
flockparse
# Configure SynapticLlamas for integration
synaptic-llamas --interactive --distributed

from flockparser_adapter import FlockParserAdapter
from sollol_load_balancer import SOLLOLLoadBalancer
# Initialize SOLLOL for distributed inference
sollol = SOLLOLLoadBalancer(
rpc_backends=["http://gpu-node-1:50052", "http://gpu-node-2:50052"]
)
# Initialize FlockParser adapter
flockparser = FlockParserAdapter("http://localhost:8000", remote_mode=True)
# Step 1: Generate embedding using SOLLOL (load balanced!)
embedding = sollol.generate_embedding(
model="mxbai-embed-large",
prompt="quantum entanglement"
)
# SOLLOL routes to fastest GPU automatically
# Step 2: Query FlockParser with pre-computed embedding
results = flockparser.query_remote(
query="quantum entanglement",
embedding=embedding, # Skip FlockParser's embedding generation
n_results=5
)
# FlockParser returns relevant chunks from 41 documents
# Performance gain: 2-5x faster when SOLLOL has faster nodes!

FlockParser v1.0.4 adds SynapticLlamas-compatible public endpoints:
- GET /health - Check API availability and document count
- GET /stats - Get knowledge base statistics (41 docs, 6,141 chunks)
- POST /query - Query with pre-computed embeddings (critical for load balanced RAG)
These endpoints allow SynapticLlamas to bypass FlockParser's embedding generation and use SOLLOL's load balancer instead!
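For a quick smoke test of those endpoints from Python; the /query payload shape shown here is an assumption for illustration (see the integration guide for the exact schema), and the embedding values are placeholders.

import requests

BASE = "http://localhost:8000"
print(requests.get(f"{BASE}/health").json())   # availability + document count
print(requests.get(f"{BASE}/stats").json())    # knowledge base statistics

resp = requests.post(f"{BASE}/query", json={
    "query": "quantum entanglement",
    "embedding": [0.01, -0.02, 0.03],          # normally produced by SOLLOL
    "n_results": 5,
})
print(resp.json())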
- Complete Integration Guide - Full architecture, examples, and setup
- SynapticLlamas Repository - Multi-agent orchestration
- SOLLOL Repository - Distributed inference platform
This project was developed iteratively using Claude and Claude Code as coding assistants. All design decisions, architecture choices, and integration strategy were directed and reviewed by me.
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.