---
layout: default
title: "Chapter 8: Production Deployment"
parent: MeiliSearch Tutorial
nav_order: 8
---
Welcome to Chapter 8: Production Deployment. This final chapter of MeiliSearch Tutorial: Lightning Fast Search Engine covers deploying Meilisearch in production environments, including scaling, monitoring, security, and maintenance strategies. You will build an intuitive mental model first, then move into concrete implementation details and practical production tradeoffs.
```bash
# Recommended server specifications
MINIMUM_SPECS="
CPU: 2 cores
RAM: 4GB
Storage: 20GB SSD
Network: 100Mbps
"

RECOMMENDED_SPECS="
CPU: 4+ cores
RAM: 8GB+
Storage: 100GB+ SSD
Network: 1Gbps
"
```

```yaml
# docker-compose.prod.yml
version: '3.8'
services:
  meilisearch:
    image: getmeili/meilisearch:v1.8.0
    container_name: meilisearch_prod
    restart: unless-stopped
    ports:
      - "127.0.0.1:7700:7700"
    environment:
      - MEILI_MASTER_KEY=${MEILI_MASTER_KEY}
      - MEILI_DB_PATH=/meili_data  # must match the container-side volume path
      - MEILI_HTTP_ADDR=0.0.0.0:7700
      - MEILI_ENV=production
      - MEILI_LOG_LEVEL=INFO
      - MEILI_MAX_INDEXING_MEMORY=2GiB  # MEILI_MAX_INDEX_SIZE was removed in v1
    volumes:
      - ./meili_data:/meili_data
      - ./snapshots:/snapshots
    networks:
      - meilisearch_network

networks:
  meilisearch_network:
    driver: bridge
```

```bash
# Create systemd service file
sudo tee /etc/systemd/system/meilisearch.service > /dev/null <<EOF
[Unit]
Description=Meilisearch Search Engine
After=network.target

[Service]
Type=simple
User=meilisearch
Group=meilisearch
WorkingDirectory=/var/lib/meilisearch
ExecStart=/usr/local/bin/meilisearch --db-path /var/lib/meilisearch/data --master-key ${MEILI_MASTER_KEY}
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=meilisearch

[Install]
WantedBy=multi-user.target
EOF

# Enable and start the service
sudo systemctl enable meilisearch
sudo systemctl start meilisearch
```
```bash
# Generate a strong master key
MASTER_KEY=$(openssl rand -hex 32)
echo "MEILI_MASTER_KEY=$MASTER_KEY" > .env

# Use environment variables
export MEILI_MASTER_KEY=$MASTER_KEY
```

```bash
# Configure firewall: allow only local access to port 7700
sudo ufw allow from 127.0.0.1 to any port 7700
sudo ufw deny from 0.0.0.0/0 to any port 7700

# Or using iptables
sudo iptables -A INPUT -p tcp -s 127.0.0.1 --dport 7700 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 7700 -j DROP
```

```bash
# Using Nginx as a reverse proxy with SSL
sudo tee /etc/nginx/sites-available/meilisearch > /dev/null <<EOF
server {
    listen 443 ssl http2;
    server_name search.yourdomain.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://127.0.0.1:7700;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto \$scheme;
    }
}
EOF
```

```bash
# Create tenant-specific API keys
curl -X POST 'http://localhost:7700/keys' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer your_master_key' \
  --data '{
    "description": "Search-only key for web app",
    "actions": ["search"],
    "indexes": ["products", "articles"],
    "expiresAt": "2025-12-31T23:59:59Z"
  }'
```

```bash
# Health check endpoint (no authentication required)
curl http://localhost:7700/health

# Detailed stats (requires an API key in production mode)
curl 'http://localhost:7700/indexes/products/stats' \
  -H 'Authorization: Bearer your_master_key'
```

```bash
# Configure logging. Meilisearch writes logs to stdout/stderr; file size
# limits and rotation are handled externally (see the logrotate section below).
export MEILI_LOG_LEVEL=INFO
```
```javascript
// Simple monitoring dashboard
class MeiliMonitor {
  constructor(client) {
    this.client = client;
  }

  async getHealthStatus() {
    try {
      const health = await this.client.health();
      return { status: 'healthy', ...health };
    } catch (error) {
      return { status: 'unhealthy', error: error.message };
    }
  }

  async getIndexStats() {
    // getIndexes() returns a { results, offset, limit, total } envelope
    const { results: indexes } = await this.client.getIndexes();
    const stats = {};
    for (const index of indexes) {
      stats[index.uid] = await this.client.index(index.uid).getStats();
    }
    return stats;
  }

  async getPerformanceMetrics() {
    // Monitor response times, throughput, etc.
    return {
      uptime: process.uptime(),
      memoryUsage: process.memoryUsage(),
      // Add custom metrics
    };
  }
}
```
```yaml
# Multiple Meilisearch instances
# Note: Meilisearch has no built-in replication; each instance keeps its own
# data, so writes must be sent to every instance (or reads pinned to one).
version: '3.8'
services:
  meilisearch-1:
    image: getmeili/meilisearch:v1.8.0
    environment:
      - MEILI_MASTER_KEY=${MEILI_MASTER_KEY}
    volumes:
      - ./data-1:/meili_data
    networks:
      - meilisearch-cluster

  meilisearch-2:
    image: getmeili/meilisearch:v1.8.0
    environment:
      - MEILI_MASTER_KEY=${MEILI_MASTER_KEY}
    volumes:
      - ./data-2:/meili_data
    networks:
      - meilisearch-cluster

  load-balancer:
    image: nginx:alpine
    ports:
      - "7700:7700"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - meilisearch-1
      - meilisearch-2
    networks:
      - meilisearch-cluster

networks:
  meilisearch-cluster:
    driver: bridge
```
```nginx
# nginx.conf for load balancing
events {
    worker_connections 1024;
}

http {
    upstream meilisearch_backend {
        server meilisearch-1:7700;
        server meilisearch-2:7700;
    }

    server {
        listen 7700;

        location / {
            proxy_pass http://meilisearch_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```
```bash
# Create snapshot (asynchronous task; the snapshot file is written to the
# directory given by --snapshot-dir, ./snapshots/ by default)
curl -X POST 'http://localhost:7700/snapshots' \
  -H 'Authorization: Bearer your_master_key'

# Check snapshot task status (there is no endpoint for listing snapshot files)
curl 'http://localhost:7700/tasks?types=snapshotCreation' \
  -H 'Authorization: Bearer your_master_key'
```

```bash
#!/bin/bash
# backup_meilisearch.sh
BACKUP_DIR="/var/backups/meilisearch"
SNAPSHOT_DIR="/var/lib/meilisearch/snapshots"  # Meilisearch --snapshot-dir
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

# Trigger snapshot creation (asynchronous task)
curl -X POST 'http://localhost:7700/snapshots' \
  -H 'Authorization: Bearer your_master_key'

# Wait for the snapshot task to finish, then copy the newest snapshot file.
# (A robust script would poll /tasks instead of sleeping.)
sleep 30
LATEST=$(ls -t "$SNAPSHOT_DIR"/*.snapshot | head -n 1)
cp "$LATEST" "$BACKUP_DIR/meilisearch_$TIMESTAMP.snapshot"

# Clean old backups (keep last 7 days)
find "$BACKUP_DIR" -name "*.snapshot" -mtime +7 -delete
```

```bash
# Stop Meilisearch
sudo systemctl stop meilisearch

# Restore by importing the snapshot into a fresh data directory
rm -rf /var/lib/meilisearch/data
/usr/local/bin/meilisearch \
  --db-path /var/lib/meilisearch/data \
  --import-snapshot /path/to/backup.snapshot

# Start Meilisearch
sudo systemctl start meilisearch
```

```bash
# Meilisearch has no manual "optimize" endpoint; index compaction happens
# automatically. To reclaim disk space, create a dump and re-import it.
curl -X POST 'http://localhost:7700/dumps' \
  -H 'Authorization: Bearer your_master_key'
```

```bash
# Configure indexing resource usage (MEILI_MAX_INDEX_SIZE and
# MEILI_MAX_TASK_DB_SIZE were removed in Meilisearch v1)
export MEILI_MAX_INDEXING_MEMORY=2GiB
export MEILI_MAX_INDEXING_THREADS=4
```
```javascript
// Optimize search queries
const optimizedSearch = async (query, filters) => {
  // Request only the fields you need to reduce payload size
  const params = {
    filter: filters,
    attributesToRetrieve: ['id', 'title', 'price'], // Only needed fields
    limit: 50 // Reasonable limit
  };
  // Search runs against a specific index, not the client itself
  return await client.index('products').search(query, params);
};
```

```javascript
class SearchAnalytics {
  constructor() {
    this.metrics = {
      totalSearches: 0,
      averageResponseTime: 0,
      popularQueries: new Map(),
      failedSearches: 0
    };
  }

  trackSearch(query, results, responseTime) {
    const m = this.metrics;
    m.totalSearches++;
    // Incremental running mean: weights every search equally, unlike a
    // naive (old + new) / 2, which over-weights the most recent search
    m.averageResponseTime +=
      (responseTime - m.averageResponseTime) / m.totalSearches;

    // Track popular queries
    const count = m.popularQueries.get(query) || 0;
    m.popularQueries.set(query, count + 1);
  }

  getMetrics() {
    return {
      ...this.metrics,
      popularQueries: Array.from(this.metrics.popularQueries.entries())
        .sort((a, b) => b[1] - a[1])
        .slice(0, 10)
    };
  }
}
```

```javascript
// Monitor Meilisearch performance
const monitor = async () => {
  const stats = await client.index('products').getStats();
  console.log('Index Stats:', {
    numberOfDocuments: stats.numberOfDocuments,
    isIndexing: stats.isIndexing
  });

  // Check task queue (getTasks returns a { results } envelope)
  const { results: pendingTasks } = await client.getTasks({
    statuses: ['enqueued', 'processing']
  });
  if (pendingTasks.length > 10) {
    console.warn('High task queue:', pendingTasks.length);
  }
};
```
```bash
#!/bin/bash
# maintenance.sh

# Update Meilisearch (pin a specific version rather than :latest in production)
docker pull getmeili/meilisearch:v1.8.0

# Backup before maintenance
./backup_meilisearch.sh

# Health check
curl http://localhost:7700/health

# Clean old logs
find /var/log/meilisearch -name "*.log" -mtime +30 -delete
```

```bash
# Configure logrotate
sudo tee /etc/logrotate.d/meilisearch > /dev/null <<EOF
/var/log/meilisearch/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
EOF
```
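The CDN-cached search pattern that follows keys the cache on the raw query string. Normalizing the key first avoids storing duplicate entries for trivially different inputs (casing, whitespace, filter order). A minimal sketch — the `buildCacheKey` helper and its normalization rules are illustrative assumptions, not part of Meilisearch:

```javascript
// Hypothetical helper: normalize a query and filter object into a stable
// cache key. Trims/lowercases the query and sorts filter entries so
// logically equal requests map to the same key.
function buildCacheKey(query, filters = {}) {
  const normalizedQuery = query.trim().toLowerCase().replace(/\s+/g, ' ');
  const normalizedFilters = Object.keys(filters)
    .sort()
    .map((k) => `${k}=${filters[k]}`)
    .join('&');
  return `search:${normalizedQuery}|${normalizedFilters}`;
}
```

With a helper like this, `cachedSearch` would call `buildCacheKey(query, filters)` instead of interpolating the raw query.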
```javascript
// Cache search results in CDN
const cachedSearch = async (query, ttl = 300) => {
  const cacheKey = `search:${query}`;

  // Check CDN cache first
  const cached = await cdn.get(cacheKey);
  if (cached) return cached;

  // Perform search against a specific index
  const results = await client.index('products').search(query);

  // Cache in CDN
  await cdn.set(cacheKey, results, ttl);
  return results;
};
```

```bash
# Pre-deployment checklist
DEPLOYMENT_CHECKLIST="
☐ Server requirements met
☐ Master key generated and secured
☐ SSL/TLS configured
☐ Firewall rules applied
☐ Backup strategy implemented
☐ Monitoring setup
☐ Load balancer configured
☐ Health checks working
☐ Documentation updated
☐ Team notified of deployment
"
```
- **High Memory Usage**

  ```bash
  # Check memory usage
  ps aux | grep meilisearch

  # Configure memory limits (Meilisearch v1)
  export MEILI_MAX_INDEXING_MEMORY=2GiB
  ```

- **Slow Search Performance**

  ```bash
  # Check index stats for document count and indexing state
  curl 'http://localhost:7700/indexes/products/stats' \
    -H 'Authorization: Bearer your_master_key'
  ```

- **Task Queue Backlog**

  ```bash
  # Monitor tasks
  curl 'http://localhost:7700/tasks?statuses=processing' \
    -H 'Authorization: Bearer your_master_key'

  # Check server resources
  top -p $(pgrep meilisearch)
  ```
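The task-queue check above can be turned into an automated alert. A minimal sketch that classifies queue health from the `results` array returned by the `/tasks` endpoint — the thresholds and function name are illustrative assumptions:

```javascript
// Hypothetical helper: classify task-queue health from task objects that
// carry a `status` field ('enqueued', 'processing', 'succeeded', 'failed').
function classifyQueue(tasks, { warn = 10, critical = 50 } = {}) {
  const pending = tasks.filter(
    (t) => t.status === 'enqueued' || t.status === 'processing'
  ).length;
  if (pending >= critical) return { level: 'critical', pending };
  if (pending >= warn) return { level: 'warn', pending };
  return { level: 'ok', pending };
}
```

Wiring this into a cron job or the monitoring class earlier in the chapter gives you an early signal before indexing lag becomes user-visible.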
- ✅ Configured production-ready Meilisearch deployment
- ✅ Implemented security measures (SSL, API keys, firewall)
- ✅ Set up monitoring and logging
- ✅ Configured scaling and load balancing
- ✅ Implemented backup and recovery strategies
- ✅ Optimized performance and maintenance
Key Takeaways:
- Always use strong master keys and secure API endpoints
- Implement proper monitoring and health checks
- Set up automated backups and recovery procedures
- Configure load balancing for high availability
- Monitor performance and optimize regularly
- Implement proper logging and log rotation
- Use SSL/TLS in production environments
- Plan for scaling as your application grows
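Several of these takeaways (health checks, monitoring) involve probing an instance that may be restarting, where retrying with exponential backoff avoids hammering a recovering server. A sketch of the delay schedule — the function name, base, and cap are illustrative assumptions:

```javascript
// Hypothetical helper: exponential backoff with a cap, for spacing out
// repeated health-check attempts. Returns delays in milliseconds.
function backoffDelays(attempts, baseMs = 500, capMs = 8000) {
  return Array.from({ length: attempts }, (_, i) =>
    Math.min(baseMs * 2 ** i, capMs)
  );
}
```

A health-check loop would sleep for each delay in turn before re-issuing `GET /health`, giving up after the last attempt.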
Most teams struggle here because the hard part is not writing more code but drawing clear operational boundaries around Meilisearch, its HTTP API, and the curl-based tooling, so behavior stays predictable as complexity grows.
In practical terms, this chapter helps you avoid three common failures:
- coupling core logic too tightly to one implementation path
- missing the handoff boundaries between setup, execution, and validation
- shipping changes without a clear rollback or observability strategy
After working through this chapter, you should be able to reason about Chapter 8: Production Deployment as an operating subsystem inside MeiliSearch Tutorial: Lightning Fast Search Engine, with explicit contracts for inputs, state transitions, and outputs. Use the implementation notes around localhost bindings, sudo-managed services, and the stats endpoints as a checklist when adapting these patterns to your own repository.
Under the hood, Chapter 8: Production Deployment usually follows a repeatable control path:
- Context bootstrap: initialize runtime config and prerequisites for Meilisearch.
- Input normalization: shape incoming data so the HTTP API receives stable contracts.
- Core execution: run the main logic branch and propagate intermediate state through curl (or your client library).
- Policy and safety checks: enforce limits, auth scopes, and failure boundaries.
- Output composition: return canonical result payloads for downstream consumers.
- Operational telemetry: emit logs/metrics needed for debugging and performance tuning.
When debugging, walk this sequence in order and confirm each stage has explicit success/failure conditions.
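The control path above can be made concrete as a staged runner in which every stage reports an explicit success or failure, which is what makes "walk the sequence in order" a workable debugging strategy. A minimal sketch — the stage shapes and function name are illustrative assumptions:

```javascript
// Hypothetical staged runner: executes stages in order, stops at the first
// failure, and records an explicit outcome for every stage reached.
async function runPipeline(stages, context) {
  const trace = [];
  for (const { name, run } of stages) {
    try {
      context = await run(context);
      trace.push({ stage: name, ok: true });
    } catch (error) {
      trace.push({ stage: name, ok: false, error: error.message });
      return { ok: false, trace, context };
    }
  }
  return { ok: true, trace, context };
}
```

When a deployment misbehaves, the trace shows exactly which boundary failed (bootstrap, normalization, execution, and so on) instead of leaving you to bisect by hand.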
Use the following upstream sources to verify implementation details while reading this chapter:
- View Repo (github.com): the upstream repository, an authoritative reference for the code discussed here.
- AI Codebase Knowledge Builder (github.com): an authoritative reference for the tooling used to build this tutorial.
Suggested trace strategy:
- search upstream code for `meilisearch` and `http` to map concrete implementation paths
- compare docs claims against actual runtime/config code before reusing patterns in production