Democratizing AI-powered API security assessment for small and medium businesses
Features • Installation • Usage • Documentation • Contributing
The GenAI API Pentest Platform is an AI-powered API security testing tool designed for small to medium businesses and individual developers. It leverages multiple Large Language Models (LLMs) to perform intelligent, context-aware vulnerability assessments with a focus on accuracy and ease of use.
Current Status: 15% Complete - Proof of Concept
- ✅ Core AI-powered scanning engine
 - ✅ Multi-LLM consensus system
 - ✅ OWASP API Security Top 10 coverage
 - ✅ Advanced validation to reduce false positives
 - 🚧 Web interface and advanced features in development
 
- Multi-LLM Integration: OpenAI, Anthropic, Google, OpenRouter, and local LLMs (Ollama)
 - Consensus Validation: Multiple AI models validate findings to reduce false positives (see the sketch after this list)
 - Context-Aware Payloads: AI generates attack payloads specific to your API context
 - Smart Pattern Recognition: Advanced response analysis with AI-powered insights
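
To make the consensus idea concrete, here is a minimal sketch of confidence-weighted majority voting across providers. The `Verdict` type and the provider names are illustrative only, not the platform's actual interfaces:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Verdict:
    provider: str        # illustrative names, e.g. "openai", "anthropic"
    is_vulnerable: bool  # the model's yes/no judgement on the finding
    confidence: float    # self-reported confidence, 0.0-1.0

def consensus(verdicts: List[Verdict], min_agreement: float = 0.6) -> bool:
    """Keep a finding only if confidence-weighted agreement is high enough."""
    total = sum(v.confidence for v in verdicts)
    if total == 0:
        return False
    agreeing = sum(v.confidence for v in verdicts if v.is_vulnerable)
    return agreeing / total >= min_agreement

# Two of three models flag the finding with high confidence -> accepted
votes = [
    Verdict("openai", True, 0.9),
    Verdict("anthropic", True, 0.8),
    Verdict("local-llama", False, 0.4),
]
print(consensus(votes))  # True (about 0.81 weighted agreement)
```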
 
- OWASP API Security Top 10 (2023): Comprehensive coverage of modern API threats
 - BOLA/IDOR Detection: Broken Object Level Authorization with AI payload generation (illustrated in the sketch after this list)
 - SQL Injection: Database-specific payloads with timing and error-based detection
 - Authentication/Authorization: Business logic flaw detection
 - Response Analysis: Behavioral anomaly detection and pattern matching
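
The core BOLA/IDOR check boils down to requesting objects the caller should not own and comparing the responses against a legitimate baseline. A standalone sketch using `requests`, with placeholder URL, token, and IDs; the real scanner layers AI-generated payloads and response analysis on top of this idea:

```python
import requests

BASE_URL = "https://api.example.com"          # placeholder target
TOKEN = "token-for-user-a"                    # placeholder credential for user A
OWN_ID = "1001"                               # object user A legitimately owns
FOREIGN_IDS = ["1002", "1003", "admin"]       # objects user A should NOT see

def fetch(object_id: str) -> requests.Response:
    return requests.get(
        f"{BASE_URL}/users/{object_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )

baseline = fetch(OWN_ID)  # expected to succeed

for foreign_id in FOREIGN_IDS:
    resp = fetch(foreign_id)
    # A 200 with a distinct, non-empty body for someone else's object is a
    # strong BOLA/IDOR indicator and should be flagged for manual review.
    if resp.status_code == 200 and resp.content and resp.content != baseline.content:
        print(f"Possible BOLA: GET /users/{foreign_id} -> {resp.status_code}")
```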
 
- OpenAPI/Swagger: 2.0 & 3.x with automatic discovery (a simplified example follows this list)
 - Manual Configuration: Direct endpoint testing
 - Future Support: Postman Collections, GraphQL (planned)
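
Automatic discovery mostly means walking the spec's `paths` object. The simplified sketch below shows the idea against a tiny inline OpenAPI 3.x document; the platform's own OpenAPIParser (used in the usage example further down) handles full 2.0 and 3.x specifications rather than this toy case:

```python
import yaml  # pip install pyyaml

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def discover_endpoints(spec: dict) -> list:
    """Return (METHOD, path) pairs from a parsed OpenAPI/Swagger document."""
    found = []
    for path, operations in spec.get("paths", {}).items():
        for method in operations:
            if method.lower() in HTTP_METHODS:
                found.append((method.upper(), path))
    return found

doc = yaml.safe_load("""
openapi: "3.0.0"
paths:
  "/users/{id}":
    get: {summary: Get a user}
    delete: {summary: Delete a user}
  /orders:
    post: {summary: Create an order}
""")
print(discover_endpoints(doc))
# [('GET', '/users/{id}'), ('DELETE', '/users/{id}'), ('POST', '/orders')]
```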
 
- Easy Setup: Simple configuration with environment variables
 - Cost-Effective: Use local LLMs or affordable cloud APIs
 - Developer-Friendly: Clear documentation and simple integration
 - Focused Results: Prioritized findings with actionable remediation advice
 
- Python 3.8+ (Recommended: Python 3.11)
 - API Keys: At least one AI provider (OpenAI, Anthropic, Google, or local Ollama)
 - System Requirements: 4GB RAM, 1GB disk space
 
# Clone the repository
git clone https://github.com/gensecaihq/genai-api-pentest-platform.git
cd genai-api-pentest-platform
# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Configure your API keys
cp .env.example .env
# Edit .env with your API keys (at least one required)

# Required: At least one AI provider
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
# Optional: Local LLM (free alternative)
export OLLAMA_BASE_URL="http://localhost:11434"
export LOCAL_MODEL="llama2"
# Configuration
export LOG_LEVEL="INFO"
export HTTP_TIMEOUT="30"
export MAX_PAYLOADS_PER_ENDPOINT="25"

# Install Ollama for free local AI
curl -fsSL https://ollama.ai/install.sh | sh
# Download a model
ollama pull llama2
# The platform will automatically detect and use local models

# Test configuration
python -c "from src.core.config_validator import validate_config_dict; print('✅ Configuration valid')"
# Basic OpenAPI scan
python scripts/example_scan.py https://api.example.com/openapi.json
# Local file scan
python scripts/example_scan.py ./examples/vulnerable-api.yaml

import asyncio
from src.api.parser import OpenAPIParser
from src.attack.bola_scanner import BOLAScanner
from src.validation.vulnerability_validator import VulnerabilityValidator
async def scan_api():
    # Parse OpenAPI specification
    async with OpenAPIParser() as parser:
        api_spec = await parser.parse_from_url('https://api.example.com/openapi.json')
        
    # Configure scanner
    config = {
        'genai': {
            'providers': {
                'openai': {'api_key': 'your-key', 'enabled': True}
            }
        }
    }
    
    # Run BOLA scan on endpoints
    scanner = BOLAScanner(config)
    validator = VulnerabilityValidator(config)
    
    vulnerabilities = []
    for endpoint in api_spec.endpoints:
        async for vuln in scanner.scan(endpoint):
            # Validate to reduce false positives
            validation = await validator.validate_vulnerability(vuln)
            if validation.is_valid:
                vulnerabilities.append(vuln)
    
    return vulnerabilities
# Run the scan
results = asyncio.run(scan_api())

# configs/development.yaml
log_level: "DEBUG"
http_timeout: 10
verify_ssl: false
max_payloads_per_endpoint: 10
# AI provider (choose one or multiple)
openai_model: "gpt-3.5-turbo"  # Cost-effective for development
temperature: 0.7

{
  "vulnerability": {
    "id": "bola_users_123",
    "title": "BOLA: Successful access to unauthorized object",
    "severity": "HIGH",
    "confidence": 0.85,
    "attack_type": "authorization",
    "endpoint": {
      "path": "/users/{id}",
      "method": "GET"
    },
    "payload": "admin",
    "evidence": {
      "response_status": 200,
      "response_time": 150,
      "technique": "privilege_escalation"
    },
    "ai_analysis": "AI detected unauthorized access to user data using 'admin' payload. Response contains sensitive user information that should require proper authorization.",
    "business_impact": "High business impact: Unauthorized access to sensitive user data, potential data breaches, compliance violations",
    "remediation": [
      "Implement proper authorization checks for object access",
      "Use indirect object references (e.g., session-based identifiers)",
      "Validate user permissions for each object request"
    ],
    "validation_result": {
      "is_valid": true,
      "confidence_score": 0.82,
      "false_positive_probability": 0.15
    }
  }
}

See configs/development.yaml and configs/production.yaml for complete examples.
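
The validation_result fields above pair with the confidence_threshold and false_positive_threshold settings shown in the next snippet. A minimal sketch of post-filtering a report with those values, assuming a hypothetical scan_results.json file holding a JSON list of objects shaped like the sample output:

```python
import json

CONFIDENCE_THRESHOLD = 0.7        # matches confidence_threshold below
FALSE_POSITIVE_THRESHOLD = 0.3    # matches false_positive_threshold below

def keep(entry: dict) -> bool:
    validation = entry["vulnerability"].get("validation_result", {})
    return (
        validation.get("is_valid", False)
        and validation.get("confidence_score", 0.0) >= CONFIDENCE_THRESHOLD
        and validation.get("false_positive_probability", 1.0) <= FALSE_POSITIVE_THRESHOLD
    )

with open("scan_results.json") as fh:   # hypothetical report file name
    findings = json.load(fh)

for entry in filter(keep, findings):
    v = entry["vulnerability"]
    print(f"[{v['severity']}] {v['title']} ({v['endpoint']['method']} {v['endpoint']['path']})")
```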
# Logging
log_level: "INFO"                    # DEBUG, INFO, WARNING, ERROR
structured_logging: false           # JSON logging for production
# HTTP Client  
http_timeout: 30                     # Request timeout in seconds
max_retries: 3                       # Maximum retry attempts
verify_ssl: true                     # SSL certificate verification
rate_limit_delay: 0.5               # Delay between requests in seconds
# AI Configuration (use environment variables for API keys)
openai_model: "gpt-4-turbo-preview"  # or "gpt-3.5-turbo" for cost savings
temperature: 0.7                     # AI creativity level (0.0-2.0)
# Security Testing
max_payloads_per_endpoint: 25        # Payloads per endpoint (balance speed vs coverage)
confidence_threshold: 0.7            # Minimum confidence for valid findings
false_positive_threshold: 0.3        # Maximum false positive probability

All sensitive configuration should use environment variables:
# Required: At least one AI provider
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AI...
# Optional: Local LLM
OLLAMA_BASE_URL=http://localhost:11434
# Testing Configuration
HTTP_TIMEOUT=30
MAX_PAYLOADS_PER_ENDPOINT=25
LOG_LEVEL=INFO

- Getting Started Guide
 - Configuration Reference
 - Complete Documentation
 - Security Best Practices
 - Project Scope & Roadmap
 
This is a proof-of-concept implementation with the following components:
- Core AI Engine: Multi-LLM consensus system
 - BOLA Scanner: OWASP API1 detection with AI payloads
 - OpenAPI Parser: Automatic endpoint discovery
 - Validation System: Advanced false positive reduction
 - HTTP Client: Production-ready with error handling
 - Configuration: Secure validation and environment variables
 
- Web interface for easy scanning
 - Additional OWASP API Security Top 10 scanners
 - Reporting and export functionality
 - CLI interface improvements
 - Additional authentication methods
 
- GraphQL and Postman Collections support
 - Advanced exploit chain discovery
 - Integration with CI/CD pipelines
 - Team collaboration features
 
Contributions welcome! This project is in active development. Priority areas:
- OWASP API Security Top 10 scanner implementations
 - Web interface development
 - Documentation improvements
 - Test coverage expansion
 
For security issues, please email the maintainers instead of creating public issues.
This project is licensed under the MIT License - see the LICENSE file for details.
This tool is for authorized security testing only. Users must:
- Obtain explicit written permission before testing any API
 - Comply with all applicable laws and regulations
 - Use the tool responsibly and ethically
 - Report vulnerabilities through appropriate channels
 
Important: This is proof-of-concept software. Use in controlled environments only.
- ✅ Install Python 3.8+ and create a virtual environment
 - ✅ Get API Keys - At least one: OpenAI, Anthropic, Google, or set up Ollama
 - ✅ Clone & Install - Follow the installation instructions above
 - ✅ Configure - Copy .env.example to .env and add your API keys
 - ✅ Test - Run the configuration validation command
 - ✅ Scan - Try it with a sample OpenAPI specification
 
- OpenAI GPT-3.5-turbo: ~$0.002 per 1K tokens (cost-effective)
 - OpenAI GPT-4: ~$0.03 per 1K tokens (higher accuracy)
 - Anthropic Claude: ~$0.008 per 1K tokens (good balance)
 - Local LLMs (Ollama): Free (requires local compute)
 
Estimate: Testing a 10-endpoint API costs $0.10-$2.00 depending on model choice.
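
A rough back-of-the-envelope behind that estimate. The per-1K-token prices are the ones listed above and the payload count matches the default max_payloads_per_endpoint; the tokens-per-payload figure is an assumption you should adjust to your own prompts and responses:

```python
ENDPOINTS = 10
PAYLOADS_PER_ENDPOINT = 25      # default max_payloads_per_endpoint
TOKENS_PER_PAYLOAD = 250        # assumed prompt + response tokens per payload

PRICE_PER_1K_TOKENS = {         # prices from the list above
    "gpt-3.5-turbo": 0.002,
    "gpt-4": 0.03,
    "claude": 0.008,
    "local llm (ollama)": 0.0,
}

total_tokens = ENDPOINTS * PAYLOADS_PER_ENDPOINT * TOKENS_PER_PAYLOAD  # 62,500
for model, price in PRICE_PER_1K_TOKENS.items():
    print(f"{model}: ~${total_tokens / 1000 * price:.2f}")
# With these assumptions every model stays within the $0.10-$2.00 range
# quoted above (and a local model is free apart from compute).
```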
- OpenAI, Anthropic, and Google for LLM APIs
 - OWASP for API Security Top 10 guidance
 - The cybersecurity community for vulnerability research
 - Ollama for enabling local LLM deployment
 
GenAI API Pentest Platform
Democratizing AI-powered API security for SMB/SME
GitHub • Roadmap • Documentation