LangGraph Dual-Tool Agent

An intelligent LangGraph agent that demonstrates advanced multi-tool workflows with LLM-based routing and comprehensive session management. The agent can perform mathematical calculations and text processing operations while providing detailed analytics and fallback capabilities.

🚀 Key Features

🤖 Intelligent LLM-Based Routing

  • Primary: Uses OpenAI GPT models for intelligent tool selection
  • Fallback: Rule-based pattern matching when the LLM is unavailable
  • Hybrid Approach: smart LLM routing backed by a reliable rule-based fallback (see the sketch after this list)
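
The hybrid behavior can be summarized in a few lines. The sketch below is illustrative only and uses hypothetical helper objects, not the repo's actual API:

# Illustrative only: try the LLM router first, fall back to rules on any failure
def route(user_input, llm_router, rule_router):
    if llm_router is not None:
        try:
            # e.g. {"tool": "calculator", "confidence": 0.9, "method": "llm"}
            decision = llm_router.route(user_input)
            if decision["confidence"] >= 0.5:
                return decision
        except Exception:
            pass  # missing API key, timeout, rate limit, etc.
    return rule_router.route(user_input)  # always available, no external calls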

🧮 Calculator Tool

  • Safe mathematical expression evaluation (one possible approach is sketched after this list)
  • Support for complex operations: +, -, *, /, **, %, parentheses
  • Error handling for division by zero and invalid expressions
  • Floating-point and integer arithmetic
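
A common way to evaluate such expressions safely is to walk a restricted ast tree instead of calling eval. The sketch below illustrates that technique; it is an assumption about the approach, not a copy of calculator.py:

import ast
import operator

# Only the operators advertised above are allowed
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul,
    ast.Div: operator.truediv, ast.Pow: operator.pow, ast.Mod: operator.mod,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval").body)

safe_eval("(10 + 5) * 2 - 7")  # 23; division by zero raises ZeroDivisionError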

๐Ÿ“ Text Processing Tool

  • Case conversion: uppercase, lowercase
  • Text manipulation: reverse text
  • Analytics: word count, character count
  • Flexible input: handles quoted and unquoted text
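
The operations listed above map directly onto plain string methods; a rough dispatcher sketch (illustrative, not the repo's text_processor.py):

# Illustrative mapping from operation name to string transformation
TEXT_OPS = {
    "uppercase": str.upper,
    "lowercase": str.lower,
    "reverse": lambda s: s[::-1],
    "word_count": lambda s: str(len(s.split())),
    "char_count": lambda s: str(len(s)),
}

def process_text(operation: str, text: str) -> str:
    text = text.strip("'\"")  # accept quoted or unquoted input
    return TEXT_OPS[operation](text)

process_text("reverse", "'LangGraph'")  # hparGgnaL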

📊 Advanced Session Management

  • Request tracking: Unique IDs, timestamps, execution times
  • Performance analytics: Success rates, tool usage statistics
  • Routing insights: Method used (LLM vs fallback), confidence scores
  • Session persistence: History across multiple requests (an illustrative record layout follows this list)
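
A per-request record along the following lines is enough to support those analytics. The field names are assumptions for illustration; the actual schema lives in langgraph_agent/models/:

import time
import uuid
from dataclasses import dataclass, field

# Illustrative per-request record (not the repo's actual model)
@dataclass
class RequestRecord:
    user_input: str
    tool_choice: str              # "calculator", "text_processor", or "ambiguous"
    routing_method: str           # "llm" or "fallback"
    confidence: float
    success: bool
    execution_time_seconds: float
    request_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: float = field(default_factory=time.time)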

🔧 Production-Ready Features

  • Comprehensive logging: Configurable levels with structured output (a minimal setup sketch follows this list)
  • Error resilience: Graceful degradation and detailed error reporting
  • Interactive mode: Real-time testing and debugging
  • Extensive testing: 140+ test cases with 100% coverage
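
The log_level argument shown in the usage examples presumably feeds Python's standard logging module; a minimal setup of that kind (the format string is an assumption, not the repo's exact output):

import logging

def configure_logging(level: str = "INFO") -> None:
    # Structured-ish output: timestamp, level, logger name, message
    logging.basicConfig(
        level=getattr(logging, level.upper(), logging.INFO),
        format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
    )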

๐Ÿ“ Project Structure

langgraph_agent/
├── __init__.py                 # Package initialization with Agent export
├── agent.py                    # Main Agent class with session management
├── models/                     # Data models and state management
│   ├── __init__.py
│   ├── state.py               # AgentState with routing info
│   └── tools.py               # Tool input/output models
├── tools/                     # Tool implementations
│   ├── __init__.py
│   ├── calculator.py          # Safe mathematical evaluation
│   └── text_processor.py      # Text manipulation operations
└── workflow/                  # LangGraph workflow components
    ├── __init__.py
    ├── graph.py               # Workflow graph definition
    ├── nodes.py               # Workflow node implementations
    ├── router.py              # Rule-based routing (fallback)
    └── llm_router.py          # LLM-based intelligent routing
tests/                         # Comprehensive test suite (140+ tests)
├── test_agent_integration.py  # End-to-end agent testing
├── test_llm_router.py         # LLM routing functionality
├── test_graph.py              # Workflow graph testing
├── test_models.py             # Data model validation
├── test_router.py             # Rule-based routing
├── test_calculator_tool.py    # Calculator tool testing
├── test_text_processor_tool.py # Text processing testing
└── test_workflow_nodes.py     # Individual node testing
main.py                        # Demo and interactive modes
requirements.txt               # Core and LLM dependencies
.env.example                   # Environment configuration template

🛠️ Setup

1. Environment Setup

# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

2. LLM Configuration (Optional but Recommended)

# Copy environment template
cp .env.example .env

# Edit .env and add your OpenAI API key
OPENAI_API_KEY=your_openai_api_key_here

# Optional: Customize model and settings
LLM_ROUTER_MODEL=gpt-3.5-turbo
LLM_ROUTER_TEMPERATURE=0.1

3. Run the Agent

# Demo mode - shows all capabilities
python main.py

# Interactive mode - real-time testing
python main.py --interactive

🎯 Usage Examples

Basic Usage

from langgraph_agent import Agent
from langgraph_agent.workflow.llm_router import LLMConfig

# Initialize with LLM routing
llm_config = LLMConfig(model="gpt-3.5-turbo", temperature=0.1)
agent = Agent(log_level="INFO", llm_config=llm_config)

# Mathematical calculations
response = agent.invoke("What is 15 * 7 + 3?")
print(response)  # 🧮 Calculator Result: 108

# Text processing
response = agent.invoke("Make 'hello world' uppercase")
print(response)  # 📝 Text Processing Result: HELLO WORLD

# Get session analytics
stats = agent.get_session_info()
print(f"Success rate: {stats['success_rate']:.1%}")
print(f"Routing methods: {stats['routing_methods']}")

Advanced Features

# Check router status
router_status = agent.get_router_status()
print(f"LLM Available: {router_status['llm_available']}")
print(f"Model: {router_status.get('model', 'N/A')}")

# Get detailed session history
history = agent.get_session_history(limit=5)
for entry in history:
    routing_info = entry.get('routing_info', {})
    print(f"Input: {entry['user_input']}")
    print(f"Tool: {entry['tool_choice']} via {routing_info.get('method', 'unknown')}")
    print(f"Confidence: {routing_info.get('confidence', 0.0):.2f}")

🧮 Calculator Examples

# Basic arithmetic
agent.invoke("What is 25 + 17?")                    # ➜ 42
agent.invoke("Calculate 144 / 12")                  # ➜ 12
agent.invoke("Solve 2 ** 8")                        # ➜ 256

# Complex expressions
agent.invoke("What is (10 + 5) * 2 - 7?")           # ➜ 23
agent.invoke("Calculate 15% of 200")                # ➜ 30
agent.invoke("Solve sqrt(16) + 3 * 4")              # ➜ 16

📝 Text Processing Examples

# Case conversion
agent.invoke("Make 'Hello World' uppercase")        # ➜ HELLO WORLD
agent.invoke("Convert 'PYTHON' to lowercase")       # ➜ python

# Text manipulation
agent.invoke("Reverse 'LangGraph'")                 # ➜ hparGgnaL

# Text analytics
agent.invoke("Count words in 'This is a test'")     # ➜ Word count: 4
agent.invoke("Count characters in 'Hello'")         # ➜ Character count: 5

🔀 Routing Intelligence

The agent uses a sophisticated two-tier routing system:

🤖 LLM-Based Routing (Primary)

  • Model: Configurable (default: gpt-3.5-turbo)
  • Prompt Engineering: Optimized system prompt for tool selection
  • High Accuracy: ~90% confidence for clear inputs
  • Context Aware: Understands nuanced requests

🔧 Rule-Based Routing (Fallback)

  • Pattern Matching: Regex patterns for mathematical expressions
  • Keyword Analysis: Tool-specific vocabulary detection
  • Scoring System: Confidence-based decision making
  • Reliable: Always available, no external dependencies (see the sketch after this list)
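
A fallback router of this shape fits in a few lines of regex plus keyword scoring. The patterns, keyword lists, and confidence formula below are assumptions for illustration; the real router.py may differ:

import re

MATH_PATTERN = re.compile(r"[-+*/%()^]|\d")
CALC_KEYWORDS = {"calculate", "solve", "sum", "multiply", "divide", "square", "root"}
TEXT_KEYWORDS = {"uppercase", "lowercase", "reverse", "count", "words", "characters"}

def fallback_route(user_input: str) -> dict:
    text = user_input.lower()
    calc_score = bool(MATH_PATTERN.search(text)) + sum(k in text for k in CALC_KEYWORDS)
    text_score = sum(k in text for k in TEXT_KEYWORDS)
    if calc_score == text_score:
        return {"tool": "ambiguous", "confidence": 0.3, "method": "fallback"}
    tool = "calculator" if calc_score > text_score else "text_processor"
    confidence = min(0.8, 0.4 + 0.2 * max(calc_score, text_score))
    return {"tool": tool, "confidence": confidence, "method": "fallback"}

fallback_route("Calculate 5 + 3")  # {'tool': 'calculator', 'confidence': 0.8, 'method': 'fallback'}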

🎯 Routing Examples

# Clear mathematical intent → LLM: calculator (0.9 confidence)
"What is the square root of 144?"

# Clear text processing → LLM: text_processor (0.9 confidence)
"Convert this sentence to uppercase"

# Ambiguous input → Both systems: ambiguous (0.3-0.5 confidence)
"Process this data"

# LLM unavailable → Fallback: calculator (0.8 confidence)
"Calculate 5 + 3" (when API key missing)

📊 Session Analytics

Track detailed performance and usage metrics:

session_info = agent.get_session_info()

# Performance metrics
print(f"Total requests: {session_info['total_requests']}")
print(f"Success rate: {session_info['success_rate']:.1%}")
print(f"Average execution time: {session_info['average_execution_time_seconds']:.3f}s")

# Tool usage distribution
print(f"Tool usage: {session_info['tool_usage']}")
# Example: {'calculator': 15, 'text_processor': 8, 'ambiguous': 2}

# Routing method distribution  
print(f"Routing methods: {session_info['routing_methods']}")
# Example: {'llm': 20, 'fallback': 5}

🧪 Testing

Run the comprehensive test suite:

# Run all tests
pytest

# Run with coverage
pytest --cov=langgraph_agent --cov-report=html

# Run specific test categories
pytest tests/test_llm_router.py -v          # LLM routing tests
pytest tests/test_agent_integration.py -v   # End-to-end tests
pytest tests/test_calculator_tool.py -v     # Calculator tests

Test Coverage: 140+ test cases covering:

  • ✅ LLM routing with mocked API calls
  • ✅ Fallback routing scenarios (an example test sketch follows this list)
  • ✅ Tool execution and error handling
  • ✅ Session management and analytics
  • ✅ Edge cases and error conditions
  • ✅ Integration workflows
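
As a flavor of the fallback-scenario tests, here is a rough sketch. The patching strategy and assertion are illustrative assumptions, not code copied from tests/:

import os
from unittest.mock import patch
from langgraph_agent import Agent

def test_falls_back_when_llm_unavailable():
    # Remove OPENAI_API_KEY so the LLM router cannot initialize (assumed behavior)
    with patch.dict(os.environ, {}, clear=True):
        agent = Agent(log_level="ERROR")
        response = agent.invoke("Calculate 5 + 3")
        assert "8" in response  # the fallback router should still handle clear math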

🔧 Configuration

Environment Variables

# Required for LLM routing
OPENAI_API_KEY=your_openai_api_key_here

# Optional customization
OPENAI_BASE_URL=https://your-custom-endpoint.com  # For Azure OpenAI
LLM_ROUTER_MODEL=gpt-4                            # Model selection
LLM_ROUTER_TEMPERATURE=0.1                        # Response randomness
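
These variables correspond to the fields exposed by LLMConfig (shown below). A hypothetical loader, using only the variable names above, might look like this:

import os
from langgraph_agent.workflow.llm_router import LLMConfig

def config_from_env() -> LLMConfig:
    # Hypothetical helper: build an LLMConfig from the environment variables above
    return LLMConfig(
        model=os.getenv("LLM_ROUTER_MODEL", "gpt-3.5-turbo"),
        temperature=float(os.getenv("LLM_ROUTER_TEMPERATURE", "0.1")),
        api_key=os.getenv("OPENAI_API_KEY"),
        base_url=os.getenv("OPENAI_BASE_URL"),
    )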

Programmatic Configuration

from langgraph_agent.workflow.llm_router import LLMConfig

# Custom LLM configuration
llm_config = LLMConfig(
    model="gpt-4",
    temperature=0.2,
    max_tokens=100,
    timeout=10.0,
    api_key="your-key",
    base_url="https://custom-endpoint.com"
)

# Initialize agent with custom config
agent = Agent(
    log_level="DEBUG",
    enable_session_logging=True,
    llm_config=llm_config
)

🚀 Interactive Mode

Test the agent in real-time:

python main.py --interactive

Available Commands (a minimal dispatch sketch follows the list):

  • help - Show usage examples
  • stats - Display session statistics
  • history - Show recent requests
  • router - Check router status
  • quit/exit/q - Exit the program
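
Internally, a loop like this usually amounts to a small command dispatch in front of agent.invoke. The sketch below is illustrative and only uses methods already shown in this README; it is not the actual main.py:

from langgraph_agent import Agent

def interactive(agent: Agent) -> None:
    # Illustrative REPL: handle special commands, send everything else to the agent
    while True:
        line = input("agent> ").strip()
        if line.lower() in {"quit", "exit", "q"}:
            break
        elif line == "stats":
            print(agent.get_session_info())
        elif line == "history":
            print(agent.get_session_history(limit=5))
        elif line == "router":
            print(agent.get_router_status())
        elif line == "help":
            print("Try: 'What is 15 * 7 + 3?' or \"Make 'hello' uppercase\"")
        else:
            print(agent.invoke(line))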

๐Ÿ—๏ธ Architecture

Workflow Graph

graph TD
    A[User Input] --> B[LLM Router]
    B --> C{Tool Selection}
    C -->|calculator| D[Calculator Tool]
    C -->|text_processor| E[Text Processor Tool]  
    C -->|ambiguous| F[Clarification]
    D --> G[Response Formatter]
    E --> G
    F --> G
    G --> H[Final Response]
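
The diagram above corresponds to a small LangGraph state machine. The sketch below shows how such a graph is typically wired with StateGraph; the node names mirror the diagram, while the routing key ("tool_choice") and the function signature are assumptions rather than the repo's graph.py:

from langgraph.graph import StateGraph, END

def build_graph(state_cls, router_node, calculator_node, text_node, clarify_node, formatter_node):
    # Illustrative wiring of the diagram above; node callables are passed in for brevity
    graph = StateGraph(state_cls)
    graph.add_node("router", router_node)
    graph.add_node("calculator", calculator_node)
    graph.add_node("text_processor", text_node)
    graph.add_node("clarification", clarify_node)
    graph.add_node("formatter", formatter_node)
    graph.set_entry_point("router")
    graph.add_conditional_edges(
        "router",
        lambda state: state["tool_choice"],  # assumed routing key on the state
        {"calculator": "calculator", "text_processor": "text_processor", "ambiguous": "clarification"},
    )
    for node in ("calculator", "text_processor", "clarification"):
        graph.add_edge(node, "formatter")
    graph.add_edge("formatter", END)
    return graph.compile()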

Component Interaction

  • Agent: Main orchestrator with session management
  • LLM Router: Intelligent tool selection with fallback
  • Workflow Graph: LangGraph state machine
  • Tools: Specialized processors for calculations and text
  • State Management: Typed state with routing metadata

๐Ÿค Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Add tests: Ensure new features have test coverage
  4. Run tests: pytest to verify everything works
  5. Commit changes: git commit -m 'Add amazing feature'
  6. Push to branch: git push origin feature/amazing-feature
  7. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

👨‍💻 Author

James Karanja Maina
Author of The Complete AI Blueprint series of books
📚 Available on Amazon

๐Ÿ™ Acknowledgments

  • LangGraph: For the powerful workflow orchestration framework
  • OpenAI: For the intelligent routing capabilities
  • LangChain: For the seamless LLM integration
  • Pydantic: For robust data validation and modeling
