A state-of-the-art Model Context Protocol (MCP) server that provides seamless integration with Google's Gemini AI models. This server enables Claude Desktop and other MCP-compatible clients to leverage the full power of Gemini's advanced AI capabilities.
- Gemini 2.5 Pro - Most capable thinking model for complex reasoning
- Gemini 2.5 Flash - Fast thinking model with best price/performance
- Gemini 2.0 Series - Latest generation models with advanced features
- Gemini 1.5 Series - Proven, reliable models for production use
- 🧠 Thinking Models - Gemini 2.5 series with step-by-step reasoning
- 🔍 Google Search Grounding - Real-time web information integration
- 📋 JSON Mode - Structured output with schema validation
- 🎯 System Instructions - Behavior customization and control
- 👁️ Vision Support - Image analysis and multimodal capabilities
- 💬 Conversation Memory - Context preservation across interactions
- TypeScript - Full type safety and modern development
- Comprehensive Error Handling - Robust error management and recovery
- Rate Limiting - Built-in protection against API abuse
- Detailed Logging - Comprehensive monitoring and debugging
- Input Validation - Secure parameter validation with Zod
- Retry Logic - Automatic retry with exponential backoff
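The retry behavior listed above can be sketched as a small helper. This is a minimal illustration of exponential backoff, not the server's actual implementation; `maxRetries` and `baseDelayMs` are illustrative names, not real configuration keys.

```typescript
// Sketch of automatic retry with exponential backoff.
// Names and defaults are illustrative, not the server's actual API.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      // Delay doubles on each attempt: 500ms, 1000ms, 2000ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Transient API failures (timeouts, 429s) succeed on a later attempt; a persistent failure is rethrown after the final retry.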
- Node.js 16+ (Download)
- Google AI Studio API Key (Get one here)
Global installation:

```bash
npm install -g mcp-server-gemini
```

From source:

```bash
git clone https://github.com/gurr-i/mcp-server-gemini-pro.git
cd mcp-server-gemini-pro
npm install
npm run build
```

Option A: Environment Variable

```bash
export GEMINI_API_KEY="your_api_key_here"
```

Option B: .env file

```bash
echo "GEMINI_API_KEY=your_api_key_here" > .env
```

Add to your claude_desktop_config.json:
For Global Installation:

```json
{
  "mcpServers": {
    "gemini": {
      "command": "mcp-server-gemini",
      "env": {
        "GEMINI_API_KEY": "your_api_key_here"
      }
    }
  }
}
```

For Local Installation:

```json
{
  "mcpServers": {
    "gemini": {
      "command": "node",
      "args": ["/path/to/mcp-server-gemini-pro/dist/enhanced-stdio-server.js"],
      "env": {
        "GEMINI_API_KEY": "your_api_key_here"
      }
    }
  }
}
```

Close and restart Claude Desktop completely for changes to take effect.
Once configured, you can use Gemini through Claude Desktop with natural language:
"Use Gemini to explain quantum computing in simple terms"
"Generate a creative story about AI using Gemini 2.5 Pro"
"Use Gemini with JSON mode to extract key points from this text"
"Use Gemini with grounding to get the latest news about AI"
"Generate a Python function using Gemini's thinking capabilities"
"Analyze this image with Gemini" (attach image)
"What's in this screenshot using Gemini vision?"
"Use Gemini to review this code and suggest improvements"
"Generate comprehensive tests for this function using Gemini"
The server can be configured using environment variables or a .env file:
```bash
# Google AI Studio API Key (required)
GEMINI_API_KEY=your_api_key_here

# Logging level (default: info)
# Options: error, warn, info, debug
LOG_LEVEL=info

# Enable performance metrics (default: false)
ENABLE_METRICS=false

# Rate limiting configuration
RATE_LIMIT_ENABLED=true   # Enable/disable rate limiting (default: true)
RATE_LIMIT_REQUESTS=100   # Max requests per window (default: 100)
RATE_LIMIT_WINDOW=60000   # Time window in ms (default: 60000 = 1 minute)

# Request timeout in milliseconds (default: 30000 = 30 seconds)
REQUEST_TIMEOUT=30000

# Environment mode (default: production)
NODE_ENV=production
```

```bash
# .env for development
GEMINI_API_KEY=your_api_key_here
NODE_ENV=development
LOG_LEVEL=debug
RATE_LIMIT_ENABLED=false
REQUEST_TIMEOUT=60000
```

```bash
# .env for production
GEMINI_API_KEY=your_api_key_here
NODE_ENV=production
LOG_LEVEL=warn
RATE_LIMIT_ENABLED=true
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_WINDOW=60000
REQUEST_TIMEOUT=30000
ENABLE_METRICS=true
```

| OS | Path |
|---|---|
| macOS | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Windows | %APPDATA%\Claude\claude_desktop_config.json |
| Linux | ~/.config/Claude/claude_desktop_config.json |
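The `RATE_LIMIT_REQUESTS` / `RATE_LIMIT_WINDOW` pair above maps naturally onto a sliding-window limiter. The following is a minimal sketch of that idea, not the server's actual implementation:

```typescript
// Minimal sliding-window rate limiter sketch mirroring the
// RATE_LIMIT_REQUESTS / RATE_LIMIT_WINDOW settings.
// Illustrative only, not the server's actual code.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(
    private maxRequests = 100, // RATE_LIMIT_REQUESTS
    private windowMs = 60_000  // RATE_LIMIT_WINDOW
  ) {}

  allow(now = Date.now()): boolean {
    // Drop request timestamps that have fallen out of the window.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxRequests) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

With the defaults, the 101st request inside any 60-second window is rejected until older requests age out of the window.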
Basic configuration:

```json
{
  "mcpServers": {
    "gemini": {
      "command": "mcp-server-gemini",
      "env": {
        "GEMINI_API_KEY": "your_api_key_here"
      }
    }
  }
}
```

Configuration with advanced options:

```json
{
  "mcpServers": {
    "gemini": {
      "command": "mcp-server-gemini",
      "env": {
        "GEMINI_API_KEY": "your_api_key_here",
        "LOG_LEVEL": "info",
        "RATE_LIMIT_REQUESTS": "200",
        "REQUEST_TIMEOUT": "45000"
      }
    }
  }
}
```

Local development configuration:

```json
{
  "mcpServers": {
    "gemini": {
      "command": "node",
      "args": ["/path/to/mcp-server-gemini-pro/dist/enhanced-stdio-server.js"],
      "cwd": "/path/to/mcp-server-gemini-pro",
      "env": {
        "GEMINI_API_KEY": "your_api_key_here",
        "NODE_ENV": "development",
        "LOG_LEVEL": "debug"
      }
    }
  }
}
```

| Tool | Description | Key Features |
|---|---|---|
| generate_text | Generate text with advanced features | Thinking models, JSON mode, grounding |
| analyze_image | Analyze images using vision models | Multi-modal understanding, detailed analysis |
| count_tokens | Count tokens for cost estimation | Accurate token counting for all models |
| list_models | List all available Gemini models | Real-time model availability and features |
| embed_text | Generate text embeddings | High-quality vector representations |
| get_help | Get usage help and documentation | Self-documenting with examples |
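As an illustration, an MCP client invokes one of these tools with a JSON-RPC `tools/call` request. The method and `name`/`arguments` shape come from the MCP specification; the specific argument names shown are assumptions about this server's schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "generate_text",
    "arguments": {
      "prompt": "Explain quantum computing in simple terms",
      "model": "gemini-2.5-flash"
    }
  }
}
```

Claude Desktop constructs these requests automatically; you only describe what you want in natural language.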
| Model | Context Window | Features | Best For | Speed |
|---|---|---|---|---|
| gemini-2.5-pro | 2M tokens | Thinking, JSON, Grounding | Complex reasoning, coding | Slower |
| gemini-2.5-flash ⭐ | 1M tokens | Thinking, JSON, Grounding | General purpose | Fast |
| gemini-2.5-flash-lite | 1M tokens | Thinking, JSON | High-throughput tasks | Fastest |
| gemini-2.0-flash | 1M tokens | JSON, Grounding | Standard tasks | Fast |
| gemini-2.0-flash-lite | 1M tokens | JSON | Simple tasks | Fastest |
| gemini-2.0-pro-experimental | 2M tokens | JSON, Grounding | Experimental features | Medium |
| gemini-1.5-pro | 2M tokens | JSON | Legacy support | Medium |
| gemini-1.5-flash | 1M tokens | JSON | Legacy support | Fast |
- Node.js 16+ (Download)
- npm 7+ (comes with Node.js)
- Git for version control
- Google AI Studio API Key (Get one here)
```bash
# Clone the repository
git clone https://github.com/gurr-i/mcp-server-gemini-pro.git
cd mcp-server-gemini-pro

# Install dependencies
npm install

# Set up environment variables
cp .env.example .env
# Edit .env and add your GEMINI_API_KEY
```

```bash
npm run dev          # Start development server with hot reload
npm run dev:watch    # Start with file watching (nodemon)
npm run build        # Build for production
npm run build:watch  # Build with watch mode
npm run clean        # Clean build directory
```

```bash
npm test                  # Run all tests
npm run test:watch        # Run tests in watch mode
npm run test:coverage     # Run tests with coverage report
npm run test:integration  # Run integration tests (requires API key)
```

```bash
npm run lint          # Lint TypeScript code
npm run lint:fix      # Fix linting issues automatically
npm run format        # Format code with Prettier
npm run format:check  # Check code formatting
npm run type-check    # Run TypeScript type checking
npm run validate      # Run all quality checks (lint + test + type-check)
```

```bash
npm run prepack  # Prepare package for publishing
npm run release  # Build, validate, and publish to npm
```

```
mcp-server-gemini/
├── src/                          # Source code
│   ├── config/                   # Configuration management
│   │   └── index.ts              # Environment config with Zod validation
│   ├── utils/                    # Utility modules
│   │   ├── logger.ts             # Structured logging system
│   │   ├── errors.ts             # Custom error classes & handling
│   │   ├── validation.ts         # Input validation with Zod
│   │   └── rateLimiter.ts        # Rate limiting implementation
│   ├── enhanced-stdio-server.ts  # Main MCP server implementation
│   └── types.ts                  # TypeScript type definitions
├── tests/                        # Test suite
│   ├── unit/                     # Unit tests
│   │   ├── config.test.ts        # Configuration tests
│   │   ├── validation.test.ts    # Validation tests
│   │   └── errors.test.ts        # Error handling tests
│   ├── integration/              # Integration tests
│   │   └── gemini-api.test.ts    # Real API integration tests
│   └── setup.ts                  # Test setup and utilities
├── docs/                         # Documentation
│   ├── api.md                    # API reference
│   ├── configuration.md          # Configuration guide
│   └── troubleshooting.md        # Troubleshooting guide
├── scripts/                      # Build and utility scripts
│   ├── build.sh                  # Production build script
│   ├── dev.sh                    # Development script
│   └── test.sh                   # Test execution script
├── .github/workflows/            # GitHub Actions CI/CD
│   ├── ci.yml                    # Continuous integration
│   └── release.yml               # Automated releases
├── dist/                         # Built output (generated)
├── coverage/                     # Test coverage reports (generated)
└── node_modules/                 # Dependencies (generated)
```
The project includes comprehensive testing with unit tests, integration tests, and code coverage reporting.
```bash
npm test               # Run all tests (unit tests only by default)
npm run test:watch     # Run tests in watch mode for development
npm run test:coverage  # Run tests with coverage report
```

```bash
npm test -- --testPathPattern=unit      # Run only unit tests
npm test -- --testNamePattern="config"  # Run specific test suites
```

Integration tests require a valid GEMINI_API_KEY and make real API calls:

```bash
# Set API key and run integration tests
GEMINI_API_KEY=your_api_key_here npm run test:integration

# Or set in .env file and run
npm run test:integration
```

```bash
npm run test:coverage                 # Generate coverage report
open coverage/lcov-report/index.html  # View coverage report (macOS)
```

Unit tests cover:
- Configuration Tests: Environment variable validation, config loading
- Validation Tests: Input validation, schema validation, sanitization
- Error Handling Tests: Custom error classes, error recovery, retry logic
- Utility Tests: Logger, rate limiter, helper functions

Integration tests cover:
- Gemini API Tests: Real API calls to test connectivity and functionality
- Model Testing: Verify all supported models work correctly
- Feature Testing: JSON mode, grounding, embeddings, token counting
```typescript
// tests/unit/example.test.ts
import { describe, it, expect, beforeEach, afterEach } from '@jest/globals';
import { YourModule } from '../../src/your-module.js';

describe('YourModule', () => {
  beforeEach(() => {
    // Setup before each test
  });

  afterEach(() => {
    // Cleanup after each test
  });

  it('should do something', () => {
    // Test implementation
    expect(result).toBe(expected);
  });
});
```

The test suite includes custom Jest matchers:

```typescript
expect(response).toBeValidMCPResponse(); // Validates MCP response format
```

Tests are configured in jest.config.js with:
- TypeScript Support: Full ES modules and TypeScript compilation
- Coverage Thresholds: Minimum 70% coverage required
- Test Timeout: 30 seconds for integration tests
- Setup Files: Automatic test environment setup
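A jest.config.js matching the settings listed above might look like the following sketch. The ts-jest preset and exact values are assumptions; consult the repository's actual file for the authoritative configuration.

```js
// Hypothetical sketch of jest.config.js; the repository's file
// is authoritative and may differ.
export default {
  preset: 'ts-jest/presets/default-esm', // TypeScript + ES modules
  extensionsToTreatAsEsm: ['.ts'],
  testTimeout: 30000,                    // 30s, for integration tests
  coverageThreshold: {
    global: { branches: 70, functions: 70, lines: 70, statements: 70 }
  },
  setupFiles: ['<rootDir>/tests/setup.ts']
};
```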
```bash
# Build the Docker image
docker build -t mcp-server-gemini .

# Run the container
docker run -d \
  --name mcp-server-gemini \
  -e GEMINI_API_KEY=your_api_key_here \
  -e LOG_LEVEL=info \
  mcp-server-gemini
```

```bash
# Create .env file with your API key
echo "GEMINI_API_KEY=your_api_key_here" > .env

# Start the service
docker-compose up -d

# View logs
docker-compose logs -f

# Stop the service
docker-compose down
```

```bash
# Start development environment
docker-compose --profile dev up

# This mounts source code for live reloading
```

```bash
# Production build
docker build --target production -t mcp-server-gemini:prod .

# Run with production settings
docker run -d \
  --name mcp-server-gemini-prod \
  --restart unless-stopped \
  -e GEMINI_API_KEY=your_api_key_here \
  -e NODE_ENV=production \
  -e LOG_LEVEL=warn \
  -e RATE_LIMIT_ENABLED=true \
  -e ENABLE_METRICS=true \
  mcp-server-gemini:prod
```

```bash
# Check container health
docker ps
docker logs mcp-server-gemini

# Manual health check
docker exec mcp-server-gemini node -e "console.log('Health check passed')"
```

```bash
# Install globally
npm install -g mcp-server-gemini

# Run directly
GEMINI_API_KEY=your_key mcp-server-gemini
```

```bash
# Clone and build
git clone https://github.com/gurr-i/mcp-server-gemini-pro.git
cd mcp-server-gemini-pro
npm install
npm run build

# Run locally
GEMINI_API_KEY=your_key npm start
```

```bash
# Using Docker Hub (when published)
docker run -e GEMINI_API_KEY=your_key mcp-server-gemini-pro:latest

# Using local build
docker build -t mcp-server-gemini-pro .
docker run -e GEMINI_API_KEY=your_key mcp-server-gemini-pro
```

```bash
# Install PM2
npm install -g pm2

# Create ecosystem file
cat > ecosystem.config.js << EOF
module.exports = {
  apps: [{
    name: 'mcp-server-gemini',
    script: './dist/enhanced-stdio-server.js',
    env: {
      NODE_ENV: 'production',
      GEMINI_API_KEY: 'your_api_key_here',
      LOG_LEVEL: 'info'
    }
  }]
}
EOF

# Start with PM2
pm2 start ecosystem.config.js
pm2 save
pm2 startup
```
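The docker-compose commands above assume a compose file in the project root. A minimal hypothetical sketch is shown below; the repository's actual docker-compose.yml may differ:

```yaml
# Hypothetical minimal docker-compose.yml; the repository's file
# is authoritative and may differ.
services:
  mcp-server-gemini:
    build: .
    restart: unless-stopped
    environment:
      - GEMINI_API_KEY=${GEMINI_API_KEY}
      - LOG_LEVEL=info
```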
```bash
# Check if API key is set
echo $GEMINI_API_KEY

# Verify .env file exists and is readable
grep GEMINI_API_KEY .env

# Check file permissions
ls -la .env
chmod 600 .env
```

```bash
# Test the API key manually
curl -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"Hello"}]}]}' \
  -X POST "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=YOUR_API_KEY"
```

```bash
# Verify config file location (macOS)
ls -la ~/Library/Application\ Support/Claude/claude_desktop_config.json

# Validate JSON syntax
jq . claude_desktop_config.json

# Check server installation
which mcp-server-gemini
npm list -g mcp-server-gemini
```

```bash
# Temporarily disable rate limiting
export RATE_LIMIT_ENABLED=false

# Increase limits
export RATE_LIMIT_REQUESTS=1000
export RATE_LIMIT_WINDOW=60000
```

```bash
# Enable debug logging
export LOG_LEVEL=debug
npm run dev

# Or for production
export LOG_LEVEL=debug
npm start
```

- 🐛 Report Issues
- 💬 Discussions
- 📚 Documentation
- Never commit API keys to version control
- Use environment variables or secure secret management
- Rotate keys regularly for production use
- Use different keys for development and production
- Enable rate limiting in production (`RATE_LIMIT_ENABLED=true`)
- Configure appropriate limits based on your usage patterns
- Monitor API usage to prevent quota exhaustion
- All inputs are automatically validated and sanitized
- XSS and injection protection built-in
- Schema validation for all tool parameters
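The server performs this validation with Zod schemas. As a dependency-free illustration of the same idea, a hand-rolled guard for a hypothetical `generate_text` payload might look like this (field names and bounds are assumptions, not the server's actual schema):

```typescript
// Dependency-free sketch of tool-parameter validation; the real
// server uses Zod. Field names and bounds are illustrative.
interface GenerateTextParams {
  prompt: string;
  model?: string;
  temperature?: number;
}

function validateGenerateTextParams(input: unknown): GenerateTextParams {
  if (typeof input !== "object" || input === null) {
    throw new Error("params must be an object");
  }
  const obj = input as Record<string, unknown>;
  if (typeof obj.prompt !== "string" || obj.prompt.length === 0) {
    throw new Error("prompt must be a non-empty string");
  }
  if (obj.model !== undefined && typeof obj.model !== "string") {
    throw new Error("model must be a string");
  }
  if (
    obj.temperature !== undefined &&
    (typeof obj.temperature !== "number" ||
      obj.temperature < 0 ||
      obj.temperature > 2)
  ) {
    throw new Error("temperature must be a number between 0 and 2");
  }
  return {
    prompt: obj.prompt,
    model: obj.model as string | undefined,
    temperature: obj.temperature as number | undefined
  };
}
```

Rejecting malformed input at the boundary, before it reaches the Gemini API, is what makes the injection protection above possible.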
- Runs as non-root user in Docker
- Read-only filesystem with minimal privileges
- Security scanning in CI/CD pipeline
We welcome contributions! Please see our Contributing Guide for details.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Run `npm run validate`
- Submit a pull request
MIT License - see LICENSE file for details.
- Google AI for the Gemini API
- Anthropic for the Model Context Protocol
- The open-source community for inspiration and feedback
- 🐛 Report Issues
- 💬 Discussions
- 📧 Email Support