Automatically generate comprehensive API test cases from your OpenAPI specification using a local Ollama model. This tool simplifies API validation, reduces manual effort, and integrates seamlessly into your development workflow.
To learn more about the OpenAPI Specification, see https://spec.openapis.org/oas/v3.1.0.html#openapi-specification
- OpenAPI Integration - Parses your API specification to understand endpoints and schemas
- Local Ollama Model - Uses a locally hosted LLM for generating intelligent test cases
- Automated Test Generation - Quickly produces unit/integration tests for endpoints
- User Friendly Interface - Clean, intuitive UI for seamless interaction
- Fast & Secure - No external API calls; all processing happens locally
- Multiple Export Formats - Generate tests in various formats (Jest, Mocha, etc.)
- Schema Validation - Automatically validates request/response schemas
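To illustrate what the schema-validation step involves, here is a minimal sketch (not the tool's actual implementation) that checks a value against a small subset of JSON Schema — only `type`, `properties`, and `items`:

```javascript
// Minimal sketch of response-schema checking (illustrative only; a real
// validator handles the full OpenAPI schema vocabulary).
function matchesSchema(value, schema) {
  if (schema.type === 'object') {
    if (typeof value !== 'object' || value === null) return false;
    return Object.entries(schema.properties || {}).every(
      ([key, propSchema]) => matchesSchema(value[key], propSchema)
    );
  }
  if (schema.type === 'array') {
    return Array.isArray(value) && value.every((item) => matchesSchema(item, schema.items));
  }
  if (schema.type === 'integer') return Number.isInteger(value);
  return typeof value === schema.type; // 'string', 'number', 'boolean'
}

const userSchema = {
  type: 'object',
  properties: { id: { type: 'integer' }, name: { type: 'string' } },
};

console.log(matchesSchema({ id: 1, name: 'Ada' }, userSchema)); // true
console.log(matchesSchema({ id: 'one', name: 'Ada' }, userSchema)); // false
```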
- Node.js v22.12.0 or higher (Important)
- npm or yarn
- Ollama installed locally (https://ollama.com/download)
- Your OpenAPI specification file (openapi.yaml or .json)
git clone git@github.com:eddyseed/OpenAPI-Testing-Tool.git
cd OpenAPI-Testing-Tool
Install dependencies for both frontend and backend:
# Install backend dependencies
cd backend
npm install
# Install frontend dependencies
cd ../frontend
npm install
Check which models Ollama has installed:
ollama list
Ensure your desired model (e.g., phi3:mini) is available. If not, pull a model:
ollama pull phi3:mini
Recommended models:
- phi3:mini - Lightweight and fast
- llama2 - More comprehensive
- codellama - Optimized for code generation
Copy the sample environment file and configure it:
cd backend
cp .env.sample .env
Edit your .env file:
# Server Configuration
PORT=3000
NODE_ENV=development 
# URLs
LOCAL_HOST=http://localhost:3000
CORS_ORIGIN=http://localhost:5173
# Ollama Configuration
OLLAMA_HOST=http://localhost:11434
# Set your own local model name (from ollama list)
MODEL_NAME=phi3:mini
Note: Run ollama list to see available models and update MODEL_NAME accordingly.
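As a quick sanity check that the configured model is actually installed, you can compare MODEL_NAME against the list Ollama reports. The sketch below operates on a hardcoded payload shaped like a typical response from Ollama's GET /api/tags endpoint; in a live check you would fetch `${OLLAMA_HOST}/api/tags` instead:

```javascript
// Sketch: verify MODEL_NAME appears among locally installed Ollama models.
// The payload below stands in for a live response from GET /api/tags.
const tagsPayload = {
  models: [{ name: 'phi3:mini' }, { name: 'codellama:latest' }],
};

function hasModel(payload, modelName) {
  return payload.models.some((m) => m.name === modelName);
}

console.log(hasModel(tagsPayload, 'phi3:mini')); // true
console.log(hasModel(tagsPayload, 'llama2'));    // false
```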
cd backend
npm run dev
The backend will start on http://localhost:3000
In a new terminal:
cd frontend
npm run dev
The frontend will start on http://localhost:5173
Open your browser and navigate to: http://localhost:5173
- Click the upload button to upload your OpenAPI file (.yaml or .json)
- The tool will parse and display your API endpoints
- Browse through the parsed endpoints
- Click "Generate Tests" button
- The AI model will analyze your API specification
- Wait for the generation process to complete
- Review the generated test cases
Here's a minimal example to get started:
openapi: 3.0.0
info:
  title: Sample API
  version: 1.0.0
paths:
  /users:
    get:
      summary: Get all users
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    id:
                      type: integer
                    name:
                      type: string
                    email:
                      type: string
    post:
      summary: Create a new user
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                name:
                  type: string
                email:
                  type: string
      responses:
        '201':
          description: User created successfully
        '400':
          description: Bad request
Upload your OpenAPI specification to get started
Browse and select endpoints for test generation
AI-powered test case generation in progress
Review, edit, and export your test cases
- Basic - Essential happy path tests only
- Standard - Common scenarios and edge cases
- Comprehensive - Full coverage including error scenarios, validation, and boundary conditions
Ollama Connection Error
Error: connect ECONNREFUSED 127.0.0.1:11434
Solution: Ensure Ollama is running: ollama serve
Model Not Found
Error: model 'phi3:mini' not found
Solution: Pull the model: ollama pull phi3:mini
Port Already in Use
Error: listen EADDRINUSE: address already in use :::3000
Solution: Change the PORT in .env or kill the process using the port:
# Find process using port 3000
lsof -i :3000
# Kill the process
kill -9 <PID>
CORS Errors
Access to XMLHttpRequest has been blocked by CORS policy
Solution: Verify CORS_ORIGIN in .env matches your frontend URL exactly
OpenAPI Parse Error
Error: Invalid OpenAPI specification
Solution: Validate your OpenAPI file using online validators like https://editor.swagger.io/
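You can also run a quick structural pre-check yourself before uploading. This sketch only verifies the top-level fields every OpenAPI 3.x document must carry (openapi, info.title, info.version, paths); it is not a substitute for a full validator:

```javascript
// Minimal pre-flight check of a parsed OpenAPI document (illustrative only).
function quickSpecCheck(spec) {
  const problems = [];
  if (typeof spec.openapi !== 'string' || !spec.openapi.startsWith('3.')) {
    problems.push('missing or non-3.x "openapi" version field');
  }
  if (!spec.info || !spec.info.title || !spec.info.version) {
    problems.push('"info" must include "title" and "version"');
  }
  if (!spec.paths || typeof spec.paths !== 'object') {
    problems.push('missing "paths" object');
  }
  return problems;
}

const sampleSpec = {
  openapi: '3.0.0',
  info: { title: 'Sample API', version: '1.0.0' },
  paths: {},
};
console.log(quickSpecCheck(sampleSpec)); // []
console.log(quickSpecCheck({}).length);  // 3
```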
Node Version Mismatch
Error: The engine "node" is incompatible
Solution: Ensure you're using Node.js v22.12.0 or higher:
node --version
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Commit your changes: git commit -m 'Add amazing feature'
- Push to the branch: git push origin feature/amazing-feature
- Open a Pull Request
- Follow the existing code style and conventions
- Write meaningful commit messages following conventional commits
- Add tests for new features
- Update documentation as needed
- Ensure all tests pass before submitting PR
- Keep pull requests focused on a single feature or fix
- Use ESLint for JavaScript linting
- Follow Airbnb JavaScript style guide
- Use Prettier for code formatting
- Write descriptive variable and function names
This project currently has no license - see the LICENSE file for details.
- OpenAPI Initiative for the specification standard
- Ollama for local LLM hosting
- React for the frontend framework
- Express.js for the backend framework
- All contributors and users of this tool
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Wiki
- Email: support@example.com (update with actual contact)
- Support for GraphQL schemas
- Integration with CI/CD pipelines
- Docker containerization
- Multi-language test generation (Python, Java, Go)
- Performance testing capabilities
- Mock server generation
- API documentation generation
- Cloud deployment options
- Team collaboration features
- Test execution and reporting
- Integration with API gateways
- Use lightweight models for faster generation (e.g., phi3:mini)
- Generate tests in batches for large specifications
- Cache frequently used specifications to avoid re-parsing
- Allocate sufficient RAM to Ollama (8GB+ recommended)
- Use SSD storage for faster model loading
- Limit concurrent generations to avoid resource exhaustion
- All processing happens locally - no data leaves your machine
- No external API calls to third-party services
- Review generated tests before deployment
- Keep dependencies updated for security patches
- Use environment variables for sensitive configuration
- Always validate your OpenAPI spec before uploading
- Start with basic coverage and gradually increase
- Review generated tests for accuracy and completeness
- Customize templates to match your team's standards
- Keep your Ollama models updated for best results
- Use version control for your test files
- Integrate with CI/CD for automated testing
- OpenAPI Specification Guide
- Ollama Documentation
- API Testing Best Practices
- Test Automation Patterns
Made with ❤️ by eddyseed
Star ⭐ this repo if you find it helpful!
Version: 1.0.0 | Last Updated: October 2025