This guide covers the different deployment options for the LangChain Agent API.

## Prerequisites

- Node.js 18+ (for manual deployment)
- Docker & Docker Compose (for containerized deployment)
- Redis server
- OpenAI API key
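The environment files used throughout this guide (`.env.production` / `.env`) need at least the OpenAI key. A sketch of what one might contain — `OPENAI_API_KEY`, `NODE_ENV`, and `REDIS_URL` appear elsewhere in this guide, while `PORT` and the exact values are assumptions; check `.env.production.example` in the repository for the authoritative list:

```bash
# .env.production — illustrative values only
OPENAI_API_KEY=sk-...               # your OpenAI API key
REDIS_URL=redis://localhost:6379    # checked by the troubleshooting section
NODE_ENV=production
PORT=3000                           # assumed; this guide maps port 3000
```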
## Building

The project uses TypeScript and builds to the `dist/` folder:

```bash
# Clean previous build
npm run clean

# Build for production
npm run build

# Check build output
ls -la dist/
```

## Manual Deployment

```bash
# Clone repository
git clone <your-repo-url>
cd langchain-agent-api

# Install dependencies (devDependencies are needed for the TypeScript build)
npm ci

# Copy environment file
cp .env.production.example .env.production
# Edit .env.production with your values

# Build for production
npm run build

# Start Redis (if not using external Redis)
redis-server

# Start the application
npm run start:prod
```
## Docker Compose

```bash
# Copy environment file
cp .env.production.example .env
# Edit .env with your values
nano .env

# Start all services
docker-compose up -d

# Check logs
docker-compose logs -f app

# Stop services
docker-compose down
```
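If the repository does not already ship a `docker-compose.yml`, a minimal one for this setup might look like the sketch below. The service names, volumes, and Redis wiring are assumptions, not the project's actual file:

```yaml
# docker-compose.yml — illustrative sketch
services:
  app:
    build: .
    ports:
      - "3000:3000"
    env_file: .env
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
volumes:
  redis-data:
```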
## Docker (Manual)

```bash
# Build image
docker build -t langchain-agent-api .

# Run Redis
docker run -d --name redis -p 6379:6379 redis:7-alpine

# Run application
docker run -d \
  --name langchain-api \
  -p 3000:3000 \
  --link redis:redis \
  --env-file .env \
  langchain-agent-api
```

Note: `--link` is deprecated; the current approach is a user-defined network (`docker network create langchain-net`, then start both containers with `--network langchain-net`).
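If you need a `Dockerfile` to go with the build command above, a typical multi-stage layout for a TypeScript service looks like this sketch (the file contents are an assumption — use the repository's own Dockerfile if one exists):

```dockerfile
# Dockerfile — illustrative sketch
# Stage 1: build TypeScript to dist/
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: production image with only runtime dependencies
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
EXPOSE 3000
CMD ["npm", "run", "start:prod"]
```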
## Heroku

Install the Heroku CLI, then:

```bash
# Create Heroku app
heroku create your-app-name

# Add Redis addon
heroku addons:create heroku-redis:mini

# Set environment variables
heroku config:set OPENAI_API_KEY="your-key"
heroku config:set NODE_ENV=production

# Deploy
git push heroku main
```
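Heroku runs `npm start` by default for Node apps; to use the production start script instead, a one-line `Procfile` does the job (an assumption — the repository may already define one):

```
web: npm run start:prod
```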
## Railway

```bash
# Install Railway CLI
npm install -g @railway/cli

# Login and deploy
railway login
railway init
railway up
```

## Other Platforms

- Connect your GitHub repository
- Set build command: `npm run build`
- Set run command: `npm run start:prod`
- Add environment variables
- Add Redis database
## Health Checks

The application includes health check endpoints:

```bash
# Basic health check
curl http://localhost:3000/

# Tool status
curl http://localhost:3000/ai/chat-with-memory/tools
```
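When wiring these endpoints into deploy scripts or container health checks, a small retry loop is handy. Below is a sketch — the function name and defaults are our own, and the URL is whatever your deployment exposes:

```shell
# wait_for_http URL [ATTEMPTS]: poll URL once per second until it answers,
# or give up after ATTEMPTS tries (default 30).
wait_for_http() {
  url="$1"
  attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # -f: fail on HTTP errors; -sS: quiet, but still report real errors
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}
```

For example, `wait_for_http http://localhost:3000/ 60` after `docker-compose up -d` blocks until the app is actually serving.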
## Logs

```bash
# Docker Compose logs
docker-compose logs -f app

# Docker logs
docker logs -f langchain-api

# PM2 (if using PM2)
pm2 logs
```

## Monitoring

Monitor these key metrics:
- Response time
- Token usage
- Error rates
- Redis memory usage
- Tool success rates
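Response time, the first metric above, can be sampled with curl alone using its `--write-out` timing variables. A sketch (the function name is ours; point it at your own endpoint):

```shell
# Print the total request time, in seconds, for a given URL
sample_time() {
  curl -o /dev/null -s -w '%{time_total}\n' "$1"
}

# e.g.: sample_time http://localhost:3000/
```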
## Security

- Never commit `.env` files
- Use strong Redis passwords in production
- Rotate API keys regularly
- Use HTTPS in production
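For the Redis password, generating a long random value beats inventing one. A quick way, assuming `openssl` is available:

```shell
# Generate a random 32-byte password, base64-encoded
REDIS_PASSWORD="$(openssl rand -base64 32)"

# Add to redis.conf, or pass via: redis-server --requirepass "$REDIS_PASSWORD"
echo "requirepass $REDIS_PASSWORD"
```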
Use a reverse proxy in front of the app (nginx example):

```nginx
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
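The security checklist's "Use HTTPS in production" usually means terminating TLS at this proxy. A sketch of the TLS variant — the certificate paths are placeholders (e.g. as issued by certbot/Let's Encrypt):

```nginx
server {
    listen 443 ssl;
    server_name your-domain.com;

    # Placeholder paths — point these at your real certificate and key
    ssl_certificate     /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```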
## Troubleshooting

- **Build fails**: clean and rebuild:
  ```bash
  npm run clean
  npm run build
  ```
- **Redis connection error**:
  - Check Redis is running
  - Verify `REDIS_URL` in the environment
- **OpenAI API errors**:
  - Verify the API key is correct
  - Check API quota/billing
- **Port already in use**:
  ```bash
  # Find process using port
  lsof -i :3000
  # Kill process
  kill -9 <PID>
  ```
## Debug Mode

```bash
# Enable verbose logging
export AGENT_VERBOSE=true
export NODE_ENV=development

# Run with debug output
npm run dev
```
## Scaling

Run multiple app replicas behind a load balancer:

```yaml
# docker-compose.yml
services:
  app:
    deploy:
      replicas: 3
    # ... other config
  nginx:
    image: nginx:alpine
    # Load balancer config
```
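The load balancer config itself could be as small as the sketch below. It assumes the Compose setup, where the service name `app` resolves via Docker's DNS to all replicas; note that open-source nginx resolves upstream hostnames at startup, so restart it after scaling:

```nginx
# nginx.conf fragment — illustrative sketch
upstream app_upstream {
    # Resolves to every replica of the "app" service at nginx startup
    server app:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_upstream;
        proxy_set_header Host $host;
    }
}
```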
To raise container memory limits:

```bash
# Increase memory limits
docker run --memory=2g langchain-agent-api
```
## Updates

```bash
# Pull latest changes
git pull origin main

# Rebuild
npm run build

# Restart services
docker-compose restart app
```

For more help, check the main README.md or create an issue in the repository.