An intelligent LangGraph agent that demonstrates advanced multi-tool workflows with LLM-based routing and comprehensive session management. The agent can perform mathematical calculations and text processing operations while providing detailed analytics and fallback capabilities.
- Primary: Uses OpenAI GPT models for intelligent tool selection
- Fallback: Rule-based pattern matching when LLM is unavailable
- Hybrid Approach: Best of both worlds - smart routing with reliable backup
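A minimal sketch of this two-tier control flow (the function names and return shape are illustrative, not the project's actual API):

```python
def route(user_input, llm_route, fallback_route):
    """Two-tier routing: prefer the LLM router, fall back to rules on any failure."""
    try:
        decision = llm_route(user_input)
        if decision is not None:
            return {**decision, "method": "llm"}
    except Exception:
        pass  # e.g. missing API key, timeout, or a malformed LLM reply
    return {**fallback_route(user_input), "method": "fallback"}


# Stand-in routers for illustration; the real ones live in the workflow/ package.
def llm_unavailable(text):
    raise RuntimeError("no API key configured")

def rule_based(text):
    return {"tool": "calculator", "confidence": 0.8}

print(route("Calculate 5 + 3", llm_unavailable, rule_based))
# {'tool': 'calculator', 'confidence': 0.8, 'method': 'fallback'}
```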
- Safe mathematical expression evaluation
- Support for complex operations: `+`, `-`, `*`, `/`, `**`, `%`, parentheses
- Error handling for division by zero and invalid expressions
- Floating-point and integer arithmetic
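One way to implement safe evaluation is to whitelist AST node types instead of calling `eval()` directly; the sketch below is illustrative, and the actual tool in `tools/calculator.py` may differ:

```python
import ast
import operator

# Whitelisted operators: +, -, *, /, **, % and unary negation.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.Mod: operator.mod,
    ast.USub: operator.neg,
}

def safe_eval(expression: str):
    """Evaluate an arithmetic expression without exposing eval()."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    # Parentheses are handled by the parser; ZeroDivisionError propagates
    # to the caller, where the tool's error handling can report it.
    return _eval(ast.parse(expression, mode="eval").body)

print(safe_eval("15 * 7 + 3"))   # 108
```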
- Case conversion: uppercase, lowercase
- Text manipulation: reverse text
- Analytics: word count, character count
- Flexible input: handles quoted and unquoted text
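These operations can be sketched as a simple dispatch table (the names here are illustrative, not necessarily the project's API):

```python
def process_text(operation: str, text: str) -> str:
    """Illustrative dispatch for the supported text operations."""
    operations = {
        "uppercase": str.upper,
        "lowercase": str.lower,
        "reverse": lambda t: t[::-1],
        "word_count": lambda t: f"Word count: {len(t.split())}",
        "char_count": lambda t: f"Character count: {len(t)}",
    }
    if operation not in operations:
        raise ValueError(f"Unknown operation: {operation!r}")
    # Accept both quoted and unquoted input by stripping surrounding quotes.
    return str(operations[operation](text.strip("'\"")))

print(process_text("uppercase", "'hello world'"))    # HELLO WORLD
print(process_text("reverse", "LangGraph"))          # hparGgnaL
```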
- Request tracking: Unique IDs, timestamps, execution times
- Performance analytics: Success rates, tool usage statistics
- Routing insights: Method used (LLM vs fallback), confidence scores
- Session persistence: History across multiple requests
- Comprehensive logging: Configurable levels with structured output
- Error resilience: Graceful degradation and detailed error reporting
- Interactive mode: Real-time testing and debugging
- Extensive testing: 140+ test cases with 100% coverage
langgraph_agent/
├── __init__.py                   # Package initialization with Agent export
├── agent.py                      # Main Agent class with session management
├── models/                       # Data models and state management
│   ├── __init__.py
│   ├── state.py                  # AgentState with routing info
│   └── tools.py                  # Tool input/output models
├── tools/                        # Tool implementations
│   ├── __init__.py
│   ├── calculator.py             # Safe mathematical evaluation
│   └── text_processor.py         # Text manipulation operations
└── workflow/                     # LangGraph workflow components
    ├── __init__.py
    ├── graph.py                  # Workflow graph definition
    ├── nodes.py                  # Workflow node implementations
    ├── router.py                 # Rule-based routing (fallback)
    └── llm_router.py             # LLM-based intelligent routing
tests/                            # Comprehensive test suite (140+ tests)
├── test_agent_integration.py     # End-to-end agent testing
├── test_llm_router.py            # LLM routing functionality
├── test_graph.py                 # Workflow graph testing
├── test_models.py                # Data model validation
├── test_router.py                # Rule-based routing
├── test_calculator_tool.py       # Calculator tool testing
├── test_text_processor_tool.py   # Text processing testing
└── test_workflow_nodes.py        # Individual node testing
main.py # Demo and interactive modes
requirements.txt # Core and LLM dependencies
.env.example # Environment configuration template
# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt

# Copy environment template
cp .env.example .env
# Edit .env and add your OpenAI API key
OPENAI_API_KEY=your_openai_api_key_here
# Optional: Customize model and settings
LLM_ROUTER_MODEL=gpt-3.5-turbo
LLM_ROUTER_TEMPERATURE=0.1

# Demo mode - shows all capabilities
python main.py
# Interactive mode - real-time testing
python main.py --interactive

from langgraph_agent import Agent
from langgraph_agent.workflow.llm_router import LLMConfig
# Initialize with LLM routing
llm_config = LLMConfig(model="gpt-3.5-turbo", temperature=0.1)
agent = Agent(log_level="INFO", llm_config=llm_config)
# Mathematical calculations
response = agent.invoke("What is 15 * 7 + 3?")
print(response)  # 🧮 Calculator Result: 108
# Text processing
response = agent.invoke("Make 'hello world' uppercase")
print(response)  # 📝 Text Processing Result: HELLO WORLD
# Get session analytics
stats = agent.get_session_info()
print(f"Success rate: {stats['success_rate']:.1%}")
print(f"Routing methods: {stats['routing_methods']}")

# Check router status
router_status = agent.get_router_status()
print(f"LLM Available: {router_status['llm_available']}")
print(f"Model: {router_status.get('model', 'N/A')}")
# Get detailed session history
history = agent.get_session_history(limit=5)
for entry in history:
    routing_info = entry.get('routing_info', {})
    print(f"Input: {entry['user_input']}")
    print(f"Tool: {entry['tool_choice']} via {routing_info.get('method', 'unknown')}")
    print(f"Confidence: {routing_info.get('confidence', 0.0):.2f}")

# Basic arithmetic
agent.invoke("What is 25 + 17?")            # → 42
agent.invoke("Calculate 144 / 12")          # → 12
agent.invoke("Solve 2 ** 8")                # → 256
# Complex expressions
agent.invoke("What is (10 + 5) * 2 - 7?")   # → 23
agent.invoke("Calculate 15% of 200")        # → 30
agent.invoke("Solve sqrt(16) + 3 * 4")      # → 16

# Case conversion
agent.invoke("Make 'Hello World' uppercase")   # → HELLO WORLD
agent.invoke("Convert 'PYTHON' to lowercase")  # → python
# Text manipulation
agent.invoke("Reverse 'LangGraph'")            # → hparGgnaL
# Text analytics
agent.invoke("Count words in 'This is a test'")   # → Word count: 4
agent.invoke("Count characters in 'Hello'")       # → Character count: 5

The agent uses a sophisticated two-tier routing system:
- Model: Configurable (default: gpt-3.5-turbo)
- Prompt Engineering: Optimized system prompt for tool selection
- High Accuracy: ~90% confidence for clear inputs
- Context Aware: Understands nuanced requests
- Pattern Matching: Regex patterns for mathematical expressions
- Keyword Analysis: Tool-specific vocabulary detection
- Scoring System: Confidence-based decision making
- Reliable: Always available, no external dependencies
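A sketch of how such confidence scoring might look; the patterns, keywords, and thresholds below are illustrative, not the exact rules in `workflow/router.py`:

```python
import re

def fallback_route(text):
    """Rule-based fallback: score each tool, pick the best above a threshold."""
    scores = {"calculator": 0.0, "text_processor": 0.0}
    # Regex for arithmetic expressions plus math vocabulary
    if re.search(r"\d+\s*[-+*/%]\s*\d+|\bcalculate\b|\bsolve\b", text, re.I):
        scores["calculator"] += 0.8
    # Tool-specific keyword detection
    for kw in ("uppercase", "lowercase", "reverse", "word", "character"):
        if kw in text.lower():
            scores["text_processor"] += 0.4
    tool, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < 0.4:
        return ("ambiguous", 0.3)
    return (tool, min(confidence, 1.0))

print(fallback_route("Calculate 5 + 3"))     # ('calculator', 0.8)
print(fallback_route("Process this data"))   # ('ambiguous', 0.3)
```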
# Clear mathematical intent → LLM: calculator (0.9 confidence)
"What is the square root of 144?"

# Clear text processing → LLM: text_processor (0.9 confidence)
"Convert this sentence to uppercase"

# Ambiguous input → Both systems: ambiguous (0.3-0.5 confidence)
"Process this data"

# LLM unavailable → Fallback: calculator (0.8 confidence)
"Calculate 5 + 3"  # (when API key missing)

Track detailed performance and usage metrics:
session_info = agent.get_session_info()
# Performance metrics
print(f"Total requests: {session_info['total_requests']}")
print(f"Success rate: {session_info['success_rate']:.1%}")
print(f"Average execution time: {session_info['average_execution_time_seconds']:.3f}s")
# Tool usage distribution
print(f"Tool usage: {session_info['tool_usage']}")
# Example: {'calculator': 15, 'text_processor': 8, 'ambiguous': 2}
# Routing method distribution
print(f"Routing methods: {session_info['routing_methods']}")
# Example: {'llm': 20, 'fallback': 5}

Run the comprehensive test suite:
# Run all tests
pytest
# Run with coverage
pytest --cov=langgraph_agent --cov-report=html
# Run specific test categories
pytest tests/test_llm_router.py -v # LLM routing tests
pytest tests/test_agent_integration.py -v # End-to-end tests
pytest tests/test_calculator_tool.py -v    # Calculator tests

Test Coverage: 140+ test cases covering:
- ✅ LLM routing with mocked API calls
- ✅ Fallback routing scenarios
- ✅ Tool execution and error handling
- ✅ Session management and analytics
- ✅ Edge cases and error conditions
- ✅ Integration workflows
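For reference, mocking the LLM call in a routing test can look roughly like this; the `classify` helper is a stand-in for illustration, not the project's actual interface:

```python
def classify(user_input, llm_call):
    """Stand-in for LLM routing: trust the reply only if it names a known tool."""
    reply = llm_call(user_input)
    return reply if reply in ("calculator", "text_processor") else "ambiguous"

def test_routing_with_mocked_llm():
    fake_llm = lambda text: "calculator"   # mocked API call, no network needed
    assert classify("What is 2 + 2?", fake_llm) == "calculator"

def test_unknown_label_falls_back_to_ambiguous():
    fake_llm = lambda text: "unexpected reply"
    assert classify("Process this data", fake_llm) == "ambiguous"
```

Injecting the LLM call as a parameter keeps the tests fast and deterministic, since no real API request is made.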
# Required for LLM routing
OPENAI_API_KEY=your_openai_api_key_here
# Optional customization
OPENAI_BASE_URL=https://your-custom-endpoint.com # For Azure OpenAI
LLM_ROUTER_MODEL=gpt-4 # Model selection
LLM_ROUTER_TEMPERATURE=0.1                       # Response randomness

from langgraph_agent.workflow.llm_router import LLMConfig
# Custom LLM configuration
llm_config = LLMConfig(
    model="gpt-4",
    temperature=0.2,
    max_tokens=100,
    timeout=10.0,
    api_key="your-key",
    base_url="https://custom-endpoint.com"
)
# Initialize agent with custom config
agent = Agent(
    log_level="DEBUG",
    enable_session_logging=True,
    llm_config=llm_config
)

Test the agent in real-time:
python main.py --interactive

Available Commands:
- `help` - Show usage examples
- `stats` - Display session statistics
- `history` - Show recent requests
- `router` - Check router status
- `quit` / `exit` / `q` - Exit the program
graph TD
A[User Input] --> B[LLM Router]
B --> C{Tool Selection}
C -->|calculator| D[Calculator Tool]
C -->|text_processor| E[Text Processor Tool]
C -->|ambiguous| F[Clarification]
D --> G[Response Formatter]
E --> G
F --> G
G --> H[Final Response]
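The same wiring, sketched without the framework as plain functions over a state dict; node names follow the diagram, while the real implementation lives in `workflow/graph.py` and uses LangGraph's state machine:

```python
def router(state):
    """Pick the next node; stands in for the LLM/fallback router."""
    text = state["user_input"].lower()
    if any(ch.isdigit() for ch in text):
        state["tool_choice"] = "calculator"
    elif any(kw in text for kw in ("uppercase", "lowercase", "reverse", "count")):
        state["tool_choice"] = "text_processor"
    else:
        state["tool_choice"] = "ambiguous"
    return state

def calculator(state):
    state["result"] = "calculator ran"
    return state

def text_processor(state):
    state["result"] = "text processor ran"
    return state

def clarification(state):
    state["result"] = "could you clarify your request?"
    return state

def formatter(state):
    state["response"] = f"[{state['tool_choice']}] {state['result']}"
    return state

# Conditional edge: the router's choice selects the next node.
TOOL_NODES = {"calculator": calculator,
              "text_processor": text_processor,
              "ambiguous": clarification}

def run(user_input):
    state = router({"user_input": user_input})
    state = TOOL_NODES[state["tool_choice"]](state)
    return formatter(state)["response"]

print(run("What is 2 + 2?"))   # [calculator] calculator ran
```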
- Agent: Main orchestrator with session management
- LLM Router: Intelligent tool selection with fallback
- Workflow Graph: LangGraph state machine
- Tools: Specialized processors for calculations and text
- State Management: Typed state with routing metadata
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Add tests: ensure new features have test coverage
- Run tests: `pytest` to verify everything works
- Commit changes: `git commit -m 'Add amazing feature'`
- Push to branch: `git push origin feature/amazing-feature`
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
James Karanja Mainaj
Author of The Complete AI Blueprint series of books
📚 Available on Amazon
- LangGraph: For the powerful workflow orchestration framework
- OpenAI: For the intelligent routing capabilities
- LangChain: For the seamless LLM integration
- Pydantic: For robust data validation and modeling