Modern, web-based management interface for HAProxy load balancers with multi-cluster support, agent-based pull architecture, and automated deployment capabilities.
- Project Overview
- Screenshots
- Key Capabilities
- Architecture
- Features & User Interface
- Getting Started - First-Time Usage
- Installation
- Configuration
- API Reference
- Troubleshooting
- Development
- Contributing
- License
- Author
- Support
HAProxy OpenManager is a comprehensive management platform designed to simplify HAProxy administration across multiple environments. It provides a unified interface for managing remote HAProxy instances through an agent-based pull architecture, supporting multi-cluster configurations with centralized control.
This project implements an agent pull architecture where management operations are executed through a lightweight agent service (haproxy-agent) installed on each target HAProxy server:
Pool (Logical Grouping)
└── Cluster (HAProxy Cluster Definition)
    ├── Agent 1 → HAProxy Server 1
    ├── Agent 2 → HAProxy Server 2
    └── Agent 3 → HAProxy Server 3
- Create Agent Pool: First, create a pool to logically group agents (e.g., "Production Pool")
- Create Cluster: Select the pool and create a cluster (e.g., "Production Cluster with 3 nodes")
- Deploy Agents: Generate installation scripts for each HAProxy server, selecting the same pool
- Result: Multiple agents in one pool, all managing one cluster
- Automatic Polling: Agents periodically poll the backend for configuration tasks (every 30 seconds)
- Local Execution: Agents apply changes to the local haproxy.cfg and reload the HAProxy service
- No Direct Push: The backend never pushes changes directly - agents pull on their own schedule
- Unified Management: All agents in a pool receive the same cluster configuration
This architecture provides better security (no inbound connections to HAProxy servers), resilience (agents can retry on failure), and scalability (backend doesn't track agent state).
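For a concrete picture, the pull cycle can be sketched as a small shell loop. This is only an illustration, not the actual agent script (which the backend generates); it assumes the task endpoints listed later in the API Reference, and the token path `/etc/haproxy-agent/token` is hypothetical.

```bash
#!/usr/bin/env bash
# Minimal sketch of the agent pull cycle (illustrative only).
BACKEND_URL="https://your-haproxy-openmanager.example.com"
AGENT_ID="haproxy-prod-01"
TOKEN="$(cat /etc/haproxy-agent/token)"   # hypothetical token location

while true; do
  # 1. Pull pending tasks from the backend (agents poll; no inbound connections)
  tasks=$(curl -fsS -H "Authorization: Bearer ${TOKEN}" \
    "${BACKEND_URL}/api/agents/${AGENT_ID}/tasks")

  # 2. Apply each task locally (generate config, validate, reload HAProxy) - omitted here
  echo "${tasks}" | jq -c '.[]?' | while read -r task; do
    task_id=$(echo "${task}" | jq -r '.id')

    # 3. Report the result back to the backend
    curl -fsS -X POST -H "Authorization: Bearer ${TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{"status": "success", "message": "Configuration applied successfully"}' \
      "${BACKEND_URL}/api/agents/tasks/${task_id}/complete"
  done

  sleep 30   # default polling interval
done
```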
✅ Agent-Based Pull Architecture - Secure, scalable management without inbound connections

✅ Multi-Cluster & Pool Management - Organize and manage multiple HAProxy clusters from one interface

✅ Frontend/Backend/Server CRUD - Complete entity management with visual UI

✅ Bulk Config Import - Import existing haproxy.cfg files with smart SSL auto-assignment

✅ Version Control & Rollback - Every change versioned with one-click restore capability

✅ Real-Time Monitoring - Live stats, health checks, and performance dashboards

✅ SSL Certificate Management - Centralized SSL with expiration tracking

✅ WAF Rules - Web Application Firewall management and deployment

✅ Agent Script Versioning - Update agents via UI (Monaco editor) with auto-upgrade

✅ Token-Based Agent Auth - Secure token management with revoke/renew

✅ Role-Based User Management - Admin and user roles with granular permissions and access control

✅ User Activity Audit Logs - Complete audit trail of all system events

✅ REST API - Full programmatic access for automation and CI/CD integration
Dashboard - Overview tab showing system metrics, active servers, SSL certificates, and cluster agent health

Dashboard - Performance Trends with request rate and response time heatmap (24h view)

Dashboard - Health Matrix showing backend servers with detailed health check status and response times

Backend Management - Backends & Servers tab with 165 servers across 56 backends, health status overview

Multi-Cluster Management - Cluster selector dropdown showing multiple clusters with agent pools and online agent counts - switch between different clusters

Frontend Management - List view with SSL/TLS bindings, sync status, and config status

Frontend Management - Edit modal for configuring bind address, protocol mode, default backend, SSL/TLS, ACLs, and advanced options

Configuration Management - Backend server and pool configuration page

Configuration Management - Real-time configuration viewer to pull haproxy.cfg from live agents

Bulk Config Import - Paste existing haproxy.cfg and parse configuration

Bulk Config Import - Parsed entities showing 56 frontends, 55 backends, 165 servers with SSL and configuration recommendations

Apply Management - Pending changes and deployment history with version tracking

Version History - Configuration diff viewer showing added/removed lines with summary

Version History - Complete version history with restore capabilities and deployment status

Agent Management - Agent setup wizard with platform selection (Linux/macOS) and architecture (x86_64/ARM64)

Agent Management - Generated installation script ready for deployment to target servers

Security Management - Agent API token management with active tokens, expiration tracking, and revoke capabilities

WAF Management - WAF rule configuration with request filtering, rate limiting, and advanced options

Note: Screenshots show the actual production interface. Replace GitHub asset URLs with your own after uploading images to your repository's releases or issues.
- Frontend Management: Create, edit, update, and delete frontend configurations with SSL bindings, ACL rules, and routing
- Backend Management: Full CRUD operations for backend pools with load balancing algorithms and health checks
- Server Management: Add, update, remove, and configure backend servers with weights, maintenance mode, and connection limits
- Bulk Configuration Import: Import existing haproxy.cfg files to instantly manage all frontends, backends, and servers without manual recreation
- Smart SSL Auto-Assignment: During bulk import, SSL certificates are automatically matched and assigned if they already exist in SSL Management with matching names and SYNCED status - no manual SSL configuration after import
- Configuration Synchronization: All entities (backends, frontends, servers, SSL certificates, WAF rules) auto-sync to agents
- Configuration Viewer: View live HAProxy configuration pulled from agents via Configuration Management page
- Agent-Based Management: Lightweight agent service for secure, pull-based HAProxy management
- Pool Management: Organize agents into logical pools for better cluster organization
- Multi-Cluster Support: Manage multiple HAProxy clusters from a single interface with cluster selector
- Agent Token Management: Secure token-based agent installation with token revoke and renew capabilities
- Agent Script Editor: Monaco code editor for updating agent scripts via UI (not binary) - create new versions, rollback, and auto-upgrade all agents
- Platform-Specific Scripts: Auto-generated installation scripts for Linux and macOS (x86_64/ARM64)
- Apply Management: Centralized change tracking and deployment status monitoring across all agents
- Version Control: Every configuration change creates a versioned snapshot with complete history
- Configuration Restore: Restore any previous configuration version with one-click rollback capability
- Diff Visualization: See exactly what changed between versions with line-by-line comparison
- Change Tracking: Track who made changes, when, and what entities were affected
- Real-time Monitoring: Live statistics, health checks, and performance metrics from HAProxy stats socket
- SSL Management: Centralized certificate management with expiration tracking and zero-downtime updates
- WAF Rules: Web Application Firewall rule management and deployment for security protection
- User Management: Role-based access control with admin and user roles
- User Activity Logs: Complete audit trail of all user actions, configuration changes, and system events
- REST API: Complete API for programmatic access and integration with CI/CD pipelines
- Agent Pull Architecture: Secure polling model - no inbound connections required to HAProxy servers
- Redis Cache: High-performance metrics caching for dashboard with configurable retention
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'16px'}}}%%
graph TB
subgraph UI_LAYER["User Interface Layer"]
USER["Admin/User"] --> WEB["React UI<br/>Nginx Proxy<br/>:8080"]
end
subgraph BACKEND_LAYER["Backend Layer"]
WEB <--> API["FastAPI<br/>Backend<br/>:8000"]
API <--> DB[("PostgreSQL<br/>Configs & History")]
API <--> CACHE[("Redis<br/>Metrics Cache")]
end
subgraph AGENT_LAYER["Agent Pool: Production Cluster"]
direction LR
AGENT1["Agent 1<br/>10.0.1.101<br/>(Poll Every 30s)"]
AGENT2["Agent 2<br/>10.0.1.102<br/>(Poll Every 30s)"]
AGENT3["Agent 3<br/>10.0.1.103<br/>(Poll Every 30s)"]
HAP1["HAProxy<br/>+ Stats Socket<br/>+ haproxy.cfg"]
HAP2["HAProxy<br/>+ Stats Socket<br/>+ haproxy.cfg"]
HAP3["HAProxy<br/>+ Stats Socket<br/>+ haproxy.cfg"]
AGENT1 --> HAP1
AGENT2 --> HAP2
AGENT3 --> HAP3
end
API <-.->|"Pull Config<br/>Push Stats<br/>SSL/WAF"| AGENT1
API <-.->|"Pull Config<br/>Push Stats<br/>SSL/WAF"| AGENT2
API <-.->|"Pull Config<br/>Push Stats<br/>SSL/WAF"| AGENT3
classDef uiStyle fill:#3a5a8a,stroke:#6a9eff,stroke-width:3px,color:#fff
classDef backendStyle fill:#5a3a8a,stroke:#b39ddb,stroke-width:2px,color:#fff
classDef dbStyle fill:#3a6a3a,stroke:#66bb6a,stroke-width:2px,color:#fff
classDef agentStyle fill:#8a6a30,stroke:#ffb347,stroke-width:2px,color:#fff
classDef haproxyStyle fill:#2a6a5a,stroke:#4db6ac,stroke-width:3px,color:#fff
class USER,WEB uiStyle
class API backendStyle
class DB,CACHE dbStyle
class AGENT1,AGENT2,AGENT3 agentStyle
class HAP1,HAP2,HAP3 haproxyStyle
| Component | Technology | Purpose | Port |
|---|---|---|---|
| Frontend | React.js 18 + Ant Design | Web interface, dashboards, configuration UI | 3000 |
| Backend | FastAPI + Python | REST API, task queue, agent coordination | 8000 |
| Database | PostgreSQL 15 | Configuration storage, user management, agent tasks | 5432 |
| Cache | Redis 7 | Session storage, performance metrics caching | 6379 |
| Reverse Proxy | Nginx | Frontend/backend routing, SSL termination | 8080 |
| HAProxy Agent | Bash Service | Polls backend, applies configs, manages HAProxy | N/A |
| HAProxy Instances | HAProxy 2.8+ | Load balancer instances being managed by agents | 8404+ |
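As a quick sanity check of the stack, each component can be probed on its published port. The commands below are a rough sketch based on the ports in the table and the container names used later in this README; adjust them to your deployment.

```bash
# All containers up?
docker-compose ps

# Backend API reachable through the Nginx proxy (port 8080)
curl -s http://localhost:8080/api/clusters

# PostgreSQL accepting connections
docker exec haproxy-openmanager-db pg_isready -U haproxy_user

# Redis responding
docker exec haproxy-openmanager-redis redis-cli ping
```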
This detailed sequence diagram shows the complete lifecycle of configuration management, including entity creation, agent polling, configuration application, SSL management, WAF rules, and agent upgrade process.
sequenceDiagram
autonumber
participant U as User Browser
participant B as Backend API
participant D as PostgreSQL DB
participant R as Redis Cache
participant A as Agent
participant CFG as haproxy.cfg
participant H as HAProxy Service
participant S as Stats Socket
Note over U,B: ─── PHASE 1: USER CREATES ENTITIES ───
U->>B: Create Backend "api-backend"
B->>D: INSERT backends + servers
B->>D: CREATE version v1.0.124
B-->>U: ✅ Backend Created
U->>B: Create Frontend "web-frontend"
B->>D: INSERT frontend + SSL binding
B->>D: UPDATE version v1.0.125
B-->>U: ✅ Frontend Created
U->>B: Upload SSL Certificate
B->>D: INSERT ssl_certificates (global)
B-->>U: ✅ SSL Ready for all clusters
U->>B: Create WAF Rule
B->>D: INSERT waf_rules (rate limit)
B-->>U: ✅ WAF Rule Created
U->>B: Click "Apply Changes"
B->>D: Mark tasks READY_FOR_DEPLOYMENT
B-->>U: ✅ Queued for deployment
Note over A,R: ─── PHASE 2: AGENT POLLING CYCLE (30s) ───
loop Every 30 seconds
A->>B: GET /api/agent/tasks
B->>D: Query pending tasks
B-->>A: Tasks + Version v1.0.125
A->>S: socat show stat
S-->>A: CSV Stats
A->>B: POST /api/stats/upload
B->>R: Store metrics (5min TTL)
A->>B: Check agent version
B->>D: SELECT latest_agent_version
B-->>A: Current: 1.0.10, Latest: 1.0.11
end
Note over A,CFG: ─── PHASE 3: AGENT APPLIES CONFIG ───
A->>A: Parse task bundle
A->>A: Generate backend config
A->>A: Generate frontend config
A->>B: Download SSL certificate
B-->>A: PEM file (encrypted)
A->>A: Write /etc/haproxy/certs/*.pem
A->>A: chmod 600 (secure)
A->>A: Generate WAF stick-table
A->>CFG: Backup haproxy.cfg
A->>CFG: Write new configuration
A->>H: haproxy -c -f haproxy.cfg
alt Config Valid
A->>H: systemctl reload haproxy
H-->>A: ✅ Reload successful
A->>CFG: Keep new config
else Config Invalid
A->>CFG: Restore backup
A->>A: Rollback complete
end
Note over A,B: ─── PHASE 4: AGENT REPORTS STATUS ───
A->>B: POST /api/agent/tasks/complete
B->>D: UPDATE task status = COMPLETED
B->>D: INSERT activity_logs (JSON)
B-->>A: ✅ Acknowledged
Note over U,B: ─── PHASE 5: USER MONITORS STATUS ───
U->>B: GET /api/apply-changes/status
B->>D: Query all agent statuses
B-->>U: 3/3 agents ✅ v1.0.125
Note over U,R: ─── PHASE 6: DASHBOARD METRICS ───
U->>B: GET /api/dashboard/stats
B->>R: GET haproxy:cluster:5:stats
R-->>B: Latest metrics (30s fresh)
B-->>U: Real-time dashboard data
Note over A,B: ─── PHASE 7: AGENT AUTO-UPGRADE ───
A->>B: Check version (in polling)
B-->>A: New version 1.0.11 available
A->>B: GET /api/agent/download/1.0.11
B->>D: SELECT agent_script WHERE version=1.0.11
B-->>A: Download script (bash/python)
A->>A: Write new script to /tmp/agent-upgrade.sh
A->>A: chmod +x agent-upgrade.sh
A->>A: Execute: /tmp/agent-upgrade.sh
Note over A: Agent Self-Upgrade Process
A->>A: Stop current agent service
A->>A: Backup old agent binary
A->>A: Install new agent version
A->>A: Update systemd service
A->>A: Start agent with new version
A->>B: POST /api/agent/upgrade/complete
B->>D: UPDATE agent version = 1.0.11
B->>D: LOG upgrade activity
B-->>A: ✅ Upgrade confirmed
A->>CFG: Read haproxy.cfg (verify integrity)
A->>H: Check HAProxy service status
H-->>A: Service running normally
Note over A,B: Agent now running v1.0.11<br/>Continues normal polling cycle
Key Flow Features:
- Continuous Polling: Agents poll every 30s for tasks, stats collection, and version checks
- Config Management: haproxy.cfg backup → validate → apply → rollback if needed
- SSL Handling: Secure download, file permissions (chmod 600), automatic frontend binding
- WAF Integration: Stick-tables generated and integrated into the config
- Stats Collection: Parallel stats upload using socat + stats socket
- Auto-Upgrade: Agents detect new versions and self-upgrade with zero-touch deployment
- Validation: Every config change is validated before HAProxy reload
- Activity Logging: All operations logged in JSON format for audit trail
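The backup → validate → reload → rollback step of Phase 3 boils down to a few shell commands. The sketch below mirrors the sequence diagram but is simplified; the real agent's backup naming and helper functions may differ, and `write_new_config` is only a placeholder.

```bash
CFG=/etc/haproxy/haproxy.cfg
BACKUP="${CFG}.backup.$(date +%s)"

cp "${CFG}" "${BACKUP}"            # keep the last known good config
write_new_config "${CFG}"          # placeholder: write the generated configuration

if haproxy -c -f "${CFG}"; then    # validate before touching the running service
  systemctl reload haproxy         # graceful reload, existing connections are kept
else
  cp "${BACKUP}" "${CFG}"          # validation failed: roll back immediately
  echo "Rollback complete - previous configuration kept"
fi
```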
- Pool Creation: Create logical pools to group HAProxy agents
- Pool Assignment: Associate clusters with specific pools
- Pool Overview: View all agents within a pool
- Multi-Pool Support: Manage multiple pools for different environments (prod, staging, etc.)
- Pool-based Filtering: Filter agents and clusters by pool
Agent scripts are not hardcoded in the project. They are stored in the database and can be updated via the UI:
- Script Editor: Edit agent installation scripts directly in the UI
- Platform Support: Separate scripts for Linux and macOS (x86_64/ARM64)
- Version Control: Create new agent versions with changelog
- Automatic Upgrade: Deploy new agent versions to all agents with one click
- Upgrade Process: Agents check the backend version → download the new script → self-upgrade → restart
Agent Version Update Flow:
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
graph LR
subgraph USER["User Actions"]
A["Edit Script<br/>+ Create v1.0.5"] --> B["Mark as<br/>Latest"]
end
subgraph AUTO["Automatic Process"]
C["Agents Poll"] --> D["Detect New<br/>Version"] --> E["Download"] --> F["Self-Upgrade"] --> G["Running<br/>v1.0.5"]
end
B -.->|Trigger| C
style USER fill:#2a4a6a,stroke:#4a9eff,stroke-width:2px,color:#fff
style AUTO fill:#3a5a3a,stroke:#66bb6a,stroke-width:2px,color:#fff
style A fill:#3a5a8a,stroke:#6a9eff,stroke-width:2px,color:#fff
style B fill:#3a5a8a,stroke:#6a9eff,stroke-width:2px,color:#fff
style C fill:#4a6a4a,stroke:#66bb6a,stroke-width:2px,color:#fff
style D fill:#4a6a4a,stroke:#66bb6a,stroke-width:2px,color:#fff
style E fill:#4a6a4a,stroke:#66bb6a,stroke-width:2px,color:#fff
style F fill:#4a6a4a,stroke:#66bb6a,stroke-width:2px,color:#fff
style G fill:#2a5a2a,stroke:#66bb6a,stroke-width:3px,color:#fff
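On the agent side, the automatic part of this flow amounts to roughly the following sketch. The download endpoint matches the one shown in the sequence diagram above; the version variables, token, and local paths are illustrative assumptions.

```bash
BACKEND_URL="https://your-haproxy-openmanager.example.com"
TOKEN="$(cat /etc/haproxy-agent/token)"   # hypothetical token location
CURRENT_VERSION="1.0.10"                  # illustrative: the agent's installed version
LATEST_VERSION="1.0.11"                   # illustrative: reported by the backend during polling

if [ "${LATEST_VERSION}" != "${CURRENT_VERSION}" ]; then
  # Download the new agent script version published via the UI
  curl -fsS -H "Authorization: Bearer ${TOKEN}" \
    -o /tmp/agent-upgrade.sh \
    "${BACKEND_URL}/api/agent/download/${LATEST_VERSION}"

  chmod +x /tmp/agent-upgrade.sh
  /tmp/agent-upgrade.sh   # stops the service, installs the new script, restarts the agent
fi
```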
- Agent Registration: Automatic agent registration after installation
- Installation Scripts: Generate platform-specific scripts (Linux/macOS ARM/x86)
- Script Download: One-click download of customized installation scripts
- Agent Status: Real-time agent connectivity and health monitoring
- Platform Detection: Automatic detection of OS and architecture
- Last Seen: Track when agents last connected to backend (updated every 30s)
- Batch Operations: Upgrade multiple agents simultaneously
- Log Location: /var/log/haproxy-agent/agent.log on each agent server
- Log Format: Structured JSON for easy parsing and log aggregation
- Log Fields: timestamp, level, message, agent_id, cluster_id, task_id
- Activity Tracking: All agent activities (config updates, SSL changes, task execution)
- Log Integration: JSON format allows easy integration with ELK, Splunk, Datadog
- Log Rotation: Automatic rotation with configurable retention
Example Log Entry (JSON):
{
"timestamp": "2025-01-15T10:30:45Z",
"level": "INFO",
"agent_id": "prod-haproxy-01",
"cluster_id": 5,
"task_id": 123,
"message": "SSL certificate updated successfully",
"details": {
"cert_name": "wildcard.company.com",
"action": "update",
"haproxy_reload": "success"
}
}

- Cluster Selector: Top navigation cluster selector for context switching
- Cluster Creation: Define clusters and associate with agent pools
- Cluster Status: Monitor cluster health and connectivity
- Default Cluster: Set preferred default cluster
- Cross-Cluster View: Compare metrics across multiple clusters
- Cluster-Scoped Operations: All configurations are cluster-specific
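The same cluster context switch can be driven from scripts through the REST API. Below is a minimal sketch using endpoints listed in the API Reference; the jq filtering assumes a fairly typical JSON response and may need adjusting.

```bash
TOKEN="<jwt-from-/api/auth/login>"

# List all clusters and show their ids and names
curl -s -H "Authorization: Bearer ${TOKEN}" \
  http://localhost:8080/api/clusters | jq '.[] | {id, name}'

# Subsequent calls are scoped by cluster_id, e.g. live stats for cluster 1
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "http://localhost:8080/api/haproxy/stats?cluster_id=1"
```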
Every configuration change creates a new version with complete tracking:
- Automatic Versioning: Each apply creates a new version number (e.g., v1.0.123)
- Version Snapshots: Complete configuration state saved for each version
- Diff Visualization: See exactly what changed between versions
- Green: Added lines
- Red: Removed lines
- Yellow: Modified lines
- Change Summary: High-level summary of changes (added frontends, modified backends, etc.)
- Version Metadata: Timestamp, user, cluster, affected entities
- Change Tracking: Track all pending configuration changes before apply
- Apply Status: Real-time deployment status per agent
- Bulk Apply: Deploy changes to all agents in a cluster simultaneously
- Success/Failure Tracking: Monitor which agents successfully applied changes
- Retry Mechanism: Retry failed deployments per agent
- Apply Logs: Detailed logs for each deployment operation with error messages
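Deployment status can also be polled from CI/CD pipelines. A minimal sketch using the status endpoint shown in the configuration flow above (the exact response shape is an assumption):

```bash
TOKEN="<jwt>"

# Check whether all agents have applied the pending configuration version
curl -s -H "Authorization: Bearer ${TOKEN}" \
  http://localhost:8080/api/apply-changes/status | jq .
```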
Version History Viewer:
- View all historical configuration versions
- Compare any two versions side-by-side
- See who made changes and when
- Filter by date, user, or cluster
Configuration Restore:
- Restore any previous configuration version
- One-click rollback to last known good configuration
- Restore process:
- Select previous version from history
- Review diff (current vs target version)
- Confirm restore
- Agents pull restored configuration
- HAProxy reloaded with previous config
Use Cases:
- Rollback after problematic deployment
- Audit configuration changes
- Compare production vs staging configurations
- Disaster recovery
The dashboard displays comprehensive real-time metrics collected by agents from HAProxy stats socket. Agents use socat to query the stats socket every 30 seconds and send CSV data to the backend, which parses and stores it in Redis for high-performance access.
Tab-Based Navigation:
- Overview Tab:
  - System-wide metrics: Total requests, active sessions, error rate
  - Frontend/Backend/Server counts
  - SSL certificate status and expiration warnings
  - Quick health status indicators
- Frontends Tab:
  - Request rate per frontend with sparkline charts
  - Session distribution across frontends
  - Status (UP/DOWN) monitoring
  - Filter by specific frontends
- Backends Tab:
  - Backend health matrix (all servers at a glance)
  - Request distribution percentages
  - Response time averages
  - Active/backup server status
- Performance Trends Tab:
  - Response time trends (1h, 6h, 24h, 7d, 30d views)
  - Request rate trends with interactive zoom
  - Error rate analysis over time
  - Session trends and capacity planning
- Capacity & Load Tab:
  - Current vs max connections
  - Queue depth monitoring
  - CPU/Memory utilization (if available)
  - Load distribution heatmaps
- Health Matrix Tab:
  - Detailed server health status
  - Last status check timestamps
  - Check duration and latency
  - Failure reasons and diagnostics
Data Collection Flow:
- Agents → Stats Socket (show stat) → CSV → Backend API → Redis → Dashboard UI
- Agent Log Path: /var/log/haproxy-agent/agent.log (JSON format for easy log aggregation)
- Update Interval: 30 seconds (configurable)
- Data Retention: 30 days in Redis (configurable)
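To see the same raw data the agents collect, you can query the HAProxy stats socket directly on a managed server. A small sketch - the socket path depends on the stats socket directive in your haproxy.cfg, so /var/run/haproxy/admin.sock is only an example:

```bash
# Dump frontend/backend/server statistics as CSV (the same source the agent parses)
echo "show stat" | socat stdio /var/run/haproxy/admin.sock

# Quick health summary: proxy name, service name, status (column 18 of the CSV)
echo "show stat" | socat stdio /var/run/haproxy/admin.sock | \
  awk -F',' 'NR>1 {print $1, $2, $18}'
```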
- Frontend CRUD: Create, edit, delete, and duplicate frontend configurations
- Protocol Support: HTTP, TCP, and health check mode configurations
- Binding Configuration: IP address, port, and SSL binding options
- Backend Assignment: Default backend selection and routing rules
- SSL Configuration: Certificate assignment and SSL protocol settings
- ACL Rules: Access control rules with pattern matching
- Redirect Rules: HTTP to HTTPS redirects and custom redirections
- Advanced Options: Connection limits, timeouts, and performance tuning
- Search & Filter: Real-time search and status-based filtering
- Server Management: Add, edit, remove, and configure backend servers
- Health Checks: HTTP/TCP health check configuration and monitoring
- Load Balancing: Round-robin, least-conn, source, and URI algorithms
- Server Weights: Dynamic weight adjustment for load distribution
- Maintenance Mode: Enable/disable servers without removing configuration
- Backup Servers: Failover server configuration for high availability
- Connection Limits: Per-server connection and rate limiting
- Monitoring Dashboard: Real-time server status and performance metrics
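Backend and server state can also be read programmatically, for example to feed external monitoring. A minimal sketch using the documented listing endpoint:

```bash
TOKEN="<jwt>"

# List all backends (including their servers) for cluster 1
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "http://localhost:8080/api/backends?cluster_id=1" | jq .
```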
HAProxy OpenManager provides powerful centralized SSL/TLS certificate management with both global and cluster-specific scopes:
1. Global SSL Certificates:
- Defined once, distributed to all clusters and all agents
- Perfect for wildcard certificates or shared certificates
- When updated, all agents across all clusters automatically receive the new certificate
- Use case: a *.company.com certificate used across all environments
2. Cluster-Specific SSL Certificates:
- Scoped to a specific cluster only
- Only agents in that cluster receive these certificates
- Use case: Environment-specific certificates (prod, staging, dev)
Upload & Distribution:
- User uploads certificate via UI (PEM format: cert + private key)
- Backend stores certificate in database
- Agents poll backend and detect new/updated certificates
- Agents download certificates to local disk (/etc/haproxy/certs/)
- Agents update the haproxy.cfg frontend bindings
- HAProxy is safely reloaded with zero downtime
Binding to Frontends:
- SSL certificates are automatically bound to frontends based on configuration
- Frontend definition specifies which certificate to use
- Format: bind *:443 ssl crt /etc/haproxy/certs/cert-name.pem
Centralized SSL Update Process:
User Updates SSL in UI → All Agents Poll Backend (30s)
→ Download New Certificate → Update Frontend Bindings
→ Validate Config → Reload HAProxy (zero downtime)
- Certificate Upload: PEM format certificate and private key upload
- Certificate Store: Centralized repository with encryption at rest
- Expiration Monitoring: Automatic expiration tracking with alerts (30, 15, 7 days)
- Global Update: Update one certificate, deploy to all clusters simultaneously
- Domain Binding: Associate certificates with specific domains/frontends
- Certificate Validation: Syntax and format validation before deployment
- Renewal Tracking: Certificate renewal status and history
- Security Profiles: SSL/TLS protocol and cipher suite configuration
- Zero Downtime: Safe HAProxy reload ensures no dropped connections
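On an agent server you can cross-check what the expiration tracker reports by inspecting the deployed PEM files directly. A small sketch, assuming the certificate appears first in each PEM bundle (the usual layout for HAProxy) and using the /etc/haproxy/certs/ path from the distribution flow above:

```bash
# Print the expiry date of every certificate the agent has deployed
for pem in /etc/haproxy/certs/*.pem; do
  printf '%s: ' "${pem}"
  openssl x509 -noout -enddate -in "${pem}"
done

# Exit non-zero if a certificate expires within 30 days (2592000 seconds)
openssl x509 -noout -checkend 2592000 -in /etc/haproxy/certs/cert-name.pem
```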
- Rate Limiting: Request rate limiting by IP, URL, or custom patterns
- IP Filtering: Whitelist/blacklist IP addresses and CIDR ranges
- Geographic Blocking: Country-based access restrictions
- DDoS Protection: Automated DDoS detection and mitigation rules
- Custom Rules: User-defined security rules with regex patterns
- Attack Monitoring: Real-time attack detection and logging
- Rule Templates: Pre-configured security rule templates
- Statistics Dashboard: WAF activity analytics and blocked request metrics
- Syntax Editor: Monaco editor with HAProxy syntax highlighting
- Real-time Validation: Configuration syntax checking and error detection
- Configuration Templates: Pre-built templates for common setups
- Version Control: Configuration versioning and rollback capabilities
- Backup & Restore: Automatic backups and manual restore functionality
- Direct Editing: Raw HAProxy configuration file editing
- Deployment: Apply configuration changes with validation
- Help System: Integrated HAProxy directive documentation
- Real-time Config Pull: Pull current haproxy.cfg from live HAProxy servers
- View actual running configuration
- Compare with managed configuration
- Import existing configurations
- Useful for migrating from manual to managed setup
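The same live pull is available over the API, which is handy for backups or drift checks from automation. A minimal sketch using the configuration endpoint listed in the API Reference:

```bash
TOKEN="<jwt>"

# Fetch the configuration currently managed for cluster 1 and save it locally
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "http://localhost:8080/api/haproxy/config?cluster_id=1" -o haproxy-cluster1.cfg

# Compare against the config on a server to spot drift
diff -u /etc/haproxy/haproxy.cfg haproxy-cluster1.cfg
```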
- User Accounts: Create, edit, and manage user accounts
- Role-based Access: Admin, user, and custom role definitions
- Permission System: Granular permissions for different system functions
- User Activity: Login history and user action audit logs
- Password Management: Secure password policies and reset functionality
- Session Management: Active session monitoring and force logout
- API Keys: User API key generation and management
- Role Assignment: Dynamic role assignment and permission updates
- Theme Settings: Light/dark mode toggle and UI customization
- Notification Settings: Alert preferences and notification channels
- System Preferences: Default timeouts, refresh intervals, and limits
- Backup Settings: Automated backup scheduling and retention policies
- Integration Settings: External system integrations and webhooks
- Language Settings: Multi-language support configuration
- Performance Tuning: System performance and optimization settings
This guide walks you through using HAProxy OpenManager for the first time, including importing your existing HAProxy configuration.
graph TD
Start([Start Setup]) --> Step1[1. Create Agent Pool<br/>HAProxy Agent Pool Management]
Step1 --> Step2[2. Create Cluster<br/>HAProxy Cluster Management]
Step2 --> Step3[3. Generate Agent Script<br/>Agent Management]
Step3 --> Step4[4. Run Script on HAProxy Server<br/>SSH to server and execute]
Step4 --> Step5[5. Copy Backup Config<br/>haproxy.cfg.initial-backup]
Step5 --> Step6[6. Import Configuration<br/>Bulk HAProxy Config Import]
Step6 --> Step7[7. Apply Changes<br/>Apply Management]
Step7 --> Complete([Setup Complete!])
style Start fill:#2d5016,stroke:#4caf50,stroke-width:3px,color:#fff
style Step1 fill:#7b4d1e,stroke:#ff9800,stroke-width:2px,color:#fff
style Step2 fill:#7b4d1e,stroke:#ff9800,stroke-width:2px,color:#fff
style Step3 fill:#1a4971,stroke:#2196f3,stroke-width:2px,color:#fff
style Step4 fill:#4a1a4a,stroke:#9c27b0,stroke-width:2px,color:#fff
style Step5 fill:#6b1e3f,stroke:#e91e63,stroke-width:2px,color:#fff
style Step6 fill:#1a4d43,stroke:#009688,stroke-width:2px,color:#fff
style Step7 fill:#6b5d1a,stroke:#fbc02d,stroke-width:2px,color:#fff
style Complete fill:#2d5016,stroke:#4caf50,stroke-width:3px,color:#fff
Navigate to HAProxy Agent Pool Management page.
- Click "Create Pool" button
- Fill in pool details:
- Pool Name:
Production Pool(or your preferred name) - Description:
Production environment HAProxy servers
- Pool Name:
- Click Save
Purpose: Pools logically group your HAProxy agents for better organization.
Navigate to HAProxy Cluster Management page.
- Click "Create Cluster" button
- Fill in cluster details:
- Cluster Name:
Production Cluster - Description:
Main production HAProxy cluster - Select Pool: Choose the pool created in Step 1 (
Production Pool)
- Cluster Name:
- Click Save
Purpose: A cluster represents a logical grouping of HAProxy configurations that will be deployed to agents.
Navigate to Agent Management page.
- Click "Create Agent" or "Generate Script" button
- Configure agent details:
- Agent Name:
haproxy-prod-01(unique identifier) - Select Pool: Choose the same pool from Step 1 (
Production Pool) - Platform: Select
LinuxormacOS - Architecture: Select
x86_64orARM64
- Agent Name:
- Click "Generate Install Script"
- Click "Download Script" to save the installation script
Purpose: This generates a customized installation script that will connect your HAProxy server to the management backend.
SSH into your HAProxy server and run the installation script.
# Copy the script to your HAProxy server
scp install-agent-haproxy-prod-01.sh root@your-haproxy-server:/tmp/
# SSH to the HAProxy server
ssh root@your-haproxy-server
# Make the script executable
chmod +x /tmp/install-agent-haproxy-prod-01.sh
# Run the installation script
sudo /tmp/install-agent-haproxy-prod-01.sh

What the script does:

- Installs required dependencies (Python, socat)
- Creates the haproxy-agent systemd service
- Starts the agent (polls the backend every 30 seconds)
- Creates an initial backup: /etc/haproxy/haproxy.cfg.initial-backup
- Registers the agent with the backend
Verify installation:
# Check agent service status
sudo systemctl status haproxy-agent
# Check agent logs
sudo tail -f /var/log/haproxy-agent/agent.log

Return to the UI and verify the agent appears in Agent Management with status "Connected".
You have three options to get your existing HAProxy configuration:
Option A: Using the Initial Backup (Recommended)
On your HAProxy server, the agent created a backup of your original configuration.
# Display the backup configuration
cat /etc/haproxy/haproxy.cfg.initial-backup

Copy the entire contents of this file to your clipboard.
Option B: View Configuration via UI (Easiest)
Navigate to Configuration Management page in the UI.
- Select your agent from the Agent Selector dropdown
- Click "View Configuration" button
- The system will fetch the live configuration from the agent
- Copy the entire configuration displayed in the viewer
- You now have the configuration ready for import!
Benefits: No need to SSH to the server, get configuration directly from the UI.
Option C: Current Configuration File
If the backup doesn't exist, SSH to your server and copy the current configuration:
# Display current HAProxy configuration
cat /etc/haproxy/haproxy.cfg

Copy the entire contents to your clipboard.
Navigate to Bulk HAProxy Config Import page.
- Select your cluster from the Cluster Selector dropdown (top of page)
- Paste the configuration copied from Step 5 into the text area
- Click "Parse Configuration" button
- Review the parsed entities:
- Frontends detected
- Backends detected
- Servers detected
- SSL certificates (if any)
- Click "Import Configuration"
What happens:
- The system parses your HAProxy config file
- Extracts frontends, backends, servers, and SSL bindings
- Creates database entities for each component
- Prepares configuration for deployment
Result: Your existing HAProxy configuration is now managed by HAProxy OpenManager!
Navigate to Apply Management page.
- You'll see pending changes listed:
- Frontends to be created
- Backends to be created
- Servers to be added
- Review the changes
- Click "Apply Changes" button
- Monitor deployment status:
Agent: haproxy-prod-01    Status: ✅ Configuration applied successfully    Version: v1.0.1
What happens:
- Backend marks configuration tasks as ready for deployment
- Agent polls and detects pending tasks (within 30 seconds)
- Agent downloads the new configuration
- Agent validates HAProxy configuration syntax
- Agent reloads HAProxy service (zero downtime)
- Agent reports success back to backend
Verify deployment:
- Check that agent status shows "Applied Successfully"
- Check HAProxy is running: systemctl status haproxy
- Check HAProxy stats: http://your-server:8404/stats
Your HAProxy server is now fully managed by HAProxy OpenManager. You can now:
- Manage Frontends: Add, edit, and delete frontends via the UI
- Manage Backends: Configure backend servers and health checks
- Upload SSL Certificates: Centralized SSL management
- Configure WAF Rules: Add security rules
- Monitor Performance: View real-time statistics on the dashboard
- Version Control: Track all configuration changes with rollback capability
Next Steps:
- Add more agents to the same pool for multi-node clusters
- Configure SSL certificates in SSL Management
- Add WAF rules in WAF Management
- Monitor your cluster in Dashboard
Choose your preferred installation method:
- Docker & Docker Compose (v20.10+)
- Git
- 2GB+ RAM for all services
1. Clone the Repository

   git clone https://github.com/yourusername/haproxy-openmanager.git
   cd haproxy-openmanager

2. Start All Services

   docker-compose up -d

3. Verify Installation

   # Check all services are running
   docker-compose ps

   # Check API health
   curl http://localhost:8080/api/clusters

4. Access the Application

   - Web Interface: http://localhost:8080
   - API Documentation: http://localhost:8080/docs
   - HAProxy Stats: http://localhost:8404/stats
   - Admin User: admin / admin123
   - Regular User: user / user123
⚠️ Security Note: Change default passwords immediately in production environments.
All configuration is managed through environment variables for maximum flexibility across different deployment environments.
Configuration Template: .env.template
# 1. Copy the template to create your real config
cp .env.template .env

# 2. Edit it (replace YOUR-DOMAIN with your actual domain)
nano .env

# 3. Start the stack
docker-compose up -d

- .env.template is only a template (committed to git)
- .env is the real configuration (NOT committed to git - it is listed in .gitignore)
- When making changes, edit the .env file, not .env.template!
# Database
DATABASE_URL="postgresql://haproxy_user:haproxy_pass@postgres:5432/haproxy_openmanager"
REDIS_URL="redis://redis:6379"
# Security
SECRET_KEY="your-secret-key-change-this-in-production"
DEBUG="False" # Set to False in production
# Public URL Configuration (IMPORTANT for agent connectivity)
# This URL is embedded in agent installation scripts
PUBLIC_URL="https://your-haproxy-openmanager.example.com"
MANAGEMENT_BASE_URL="${PUBLIC_URL}" # Optional, defaults to PUBLIC_URL
# Examples:
# Development: PUBLIC_URL="http://localhost:8000"
# Production: PUBLIC_URL="https://haproxy-openmanager.company.com"
# Production: PUBLIC_URL="https://haproxy-openmanager.yourdomain.com"
# Logging
LOG_LEVEL="INFO" # DEBUG, INFO, WARNING, ERROR, CRITICAL
# Agent Settings
AGENT_HEARTBEAT_TIMEOUT_SECONDS=15
AGENT_CONFIG_SYNC_INTERVAL_SECONDS=30

# API Endpoint (auto-detected if empty)
# Leave empty in production to use same-origin (window.location)
REACT_APP_API_URL="" # For development: "http://localhost:8000"
# Environment
NODE_ENV="production"
GENERATE_SOURCEMAP="false"

- Development: Use a .env file in the project root
- Docker Compose: Environment variables in docker-compose.yml
- Kubernetes/OpenShift: ConfigMaps (k8s/manifests/07-configmaps.yaml)
All URLs are configurable via environment variables for enterprise deployment flexibility:
- ✅ No hardcoded URLs in source code
- ✅ Deploy anywhere: Development, staging, production
- ✅ Multi-tenant support: Each instance can have its own URL
- ✅ Easy migration: Change deployment location without code changes
Add new HAProxy clusters through the web interface or directly via API:
{
"name": "Production Cluster",
"description": "Production HAProxy cluster",
"host": "10.0.1.100",
"port": 8404,
"connection_type": "ssh",
"ssh_username": "haproxy",
"installation_type": "existing",
"deployment_type": "cluster",
"cluster_nodes": ["10.0.1.101", "10.0.1.102"],
"keepalive_ip": "10.0.1.10"
}

# Login and get JWT token
POST /api/auth/login
Content-Type: application/json
{
"username": "admin",
"password": "admin123"
}
Response:
{
"access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
"token_type": "bearer"
}
# Get current user
GET /api/auth/me
Authorization: Bearer <access_token>

# List all pools
GET /api/pools
Authorization: Bearer <token>
# Create new pool
POST /api/pools
{
"name": "Production Pool",
"description": "Production environment pool"
}
# Get pool details
GET /api/pools/{pool_id}

# List all agents
GET /api/agents
Authorization: Bearer <token>
# Create agent and get installation script
POST /api/agents
{
"name": "HAProxy-Prod-01",
"pool_id": 1,
"platform": "linux", # linux or macos
"architecture": "x86_64" # x86_64 or arm64
}
# Agent polls for tasks (called by agent service)
GET /api/agents/{agent_id}/tasks
Authorization: Bearer <agent_token>
# Agent reports task completion
POST /api/agents/tasks/{task_id}/complete
{
"status": "success",
"message": "Configuration applied successfully",
"logs": "HAProxy reloaded"
}
# Get agent status
GET /api/agents/{agent_id}/status

# List all clusters
GET /api/clusters
Authorization: Bearer <token>
# Create new cluster
POST /api/clusters
{
"name": "Production Cluster",
"description": "Production HAProxy cluster",
"pool_id": 1
}
# Get cluster details
GET /api/clusters/{cluster_id}

# Get dashboard overview
GET /api/dashboard/overview
# Get HAProxy statistics
GET /api/haproxy/stats?cluster_id=1
# Get backend server status
GET /api/backends?cluster_id=1
# Get frontend configurations
GET /api/frontends?cluster_id=1

# Get HAProxy configuration
GET /api/haproxy/config?cluster_id=1
# Update configuration
PUT /api/haproxy/config?cluster_id=1
Content-Type: text/plain
# Validate configuration
POST /api/haproxy/config/validate

The application consists of 8 Docker containers:
# Core Application Services
haproxy-openmanager-frontend # React.js web interface
haproxy-openmanager-backend # FastAPI backend
haproxy-openmanager-db # PostgreSQL database
haproxy-openmanager-redis # Redis cache
haproxy-openmanager-nginx # Nginx reverse proxy
# HAProxy Test Instances
haproxy-instance # Local test instance
haproxy-remote1 # Remote test instance 1
haproxy-remote2                  # Remote test instance 2

# Start all services
docker-compose up -d
# View logs
docker-compose logs -f [service_name]
# Stop all services
docker-compose down
# Rebuild and restart
docker-compose down && docker-compose up --build -d
# Database reset (⚠️ Destroys all data)
docker-compose down
docker volume rm haproxy-openmanager_postgres_data
docker-compose up -d

# List volumes
docker volume ls | grep haproxy
# Backup database
docker exec haproxy-openmanager-db pg_dump -U haproxy_user haproxy_openmanager > backup.sql
# Restore database
docker exec -i haproxy-openmanager-db psql -U haproxy_user haproxy_openmanager < backup.sql

For Kubernetes environments, use the provided manifests:
# Deploy to Kubernetes
kubectl apply -f k8s/manifests/
# Check deployment status
kubectl get pods -n haproxy-openmanager
# Access via port-forward
kubectl port-forward svc/nginx-service 8080:80 -n haproxy-openmanager

See k8s/manifests/README.md for detailed Kubernetes setup instructions.
1. Backend Development

   cd backend
   python -m venv venv
   source venv/bin/activate  # Linux/Mac
   # or venv\Scripts\activate  # Windows
   pip install -r requirements.txt
   uvicorn main:app --reload --host 0.0.0.0 --port 8000

2. Frontend Development

   cd frontend
   npm install
   npm start

3. Database Setup

   # Start only database and redis
   docker-compose up -d postgres redis

   # Run migrations
   python backend/migration.py
haproxy-openmanager/
βββ backend/ # FastAPI backend application
β βββ main.py # Main application entry point
β βββ config.py # Application configuration
β βββ auth_middleware.py # Authentication middleware
β βββ agent_notifications.py # Agent notification system
β βββ database/ # Database layer
β β βββ connection.py # Database connection management
β β βββ migrations.py # Database migration scripts
β βββ models/ # SQLAlchemy ORM models
β β βββ user.py # User model
β β βββ cluster.py # Cluster model
β β βββ agent.py # Agent model
β β βββ frontend.py # Frontend configuration model
β β βββ backend.py # Backend configuration model
β β βββ ssl.py # SSL certificate model
β β βββ waf.py # WAF rules model
β βββ routers/ # API route handlers
β β βββ auth.py # Authentication endpoints
β β βββ user.py # User management
β β βββ cluster.py # Cluster management
β β βββ agent.py # Agent management
β β βββ frontend.py # Frontend configuration
β β βββ backend.py # Backend configuration
β β βββ ssl.py # SSL certificate management
β β βββ waf.py # WAF rules management
β β βββ dashboard.py # Dashboard statistics
β β βββ configuration.py # Configuration viewer
β β βββ security.py # Security & token management
β βββ services/ # Business logic services
β β βββ haproxy_config.py # HAProxy config generation
β β βββ dashboard_stats_service.py # Stats aggregation
β βββ middleware/ # Custom middleware
β β βββ activity_logger.py # Activity logging
β β βββ error_handler.py # Error handling
β β βββ rate_limiter.py # API rate limiting
β βββ utils/ # Utility functions
β β βββ auth.py # Authentication utilities
β β βββ haproxy_config_parser.py # Config parser
β β βββ haproxy_stats_parser.py # Stats parser
β β βββ haproxy_validator.py # Config validator
β β βββ ssl_parser.py # SSL certificate parser
β β βββ activity_log.py # Activity log utilities
β β βββ logging_config.py # Logging configuration
β β βββ agent_scripts/ # Agent installation scripts
β β βββ linux_install.sh # Linux agent installer
β β βββ macos_install.sh # macOS agent installer
β βββ tests/ # Backend tests
β β βββ conftest.py # Pytest configuration
β β βββ test_*.py # Test files
β βββ requirements.txt # Python dependencies
β βββ Dockerfile # Backend Docker image
β
βββ frontend/ # React.js frontend application
β βββ src/
β β βββ components/ # React components
β β β βββ Login.js # Login page
β β β βββ DashboardV2.js # Main dashboard
β β β βββ AgentManagement.js # Agent management
β β β βββ ClusterManagement.js # Cluster management
β β β βββ PoolManagement.js # Pool management
β β β βββ FrontendManagement.js # Frontend config
β β β βββ BackendServers.js # Backend config
β β β βββ SSLManagement.js # SSL certificates
β β β βββ WAFManagement.js # WAF rules
β β β βββ ApplyManagement.js # Apply changes
β β β βββ Configuration.js # Config viewer
β β β βββ BulkConfigImport.js # Bulk import
β β β βββ VersionHistory.js # Version history
β β β βββ UserManagement.js # User management
β β β βββ Security.js # Security & tokens
β β β βββ Settings.js # Settings page
β β β βββ EntitySyncStatus.js # Sync status
β β β βββ dashboard/ # Dashboard components
β β β βββ *.js # Dashboard tabs & charts
β β βββ contexts/ # React contexts
β β β βββ AuthContext.js # Authentication context
β β β βββ ClusterContext.js # Cluster selection context
β β β βββ ThemeContext.js # Theme (dark/light mode)
β β β βββ ProgressContext.js # Global progress indicator
β β βββ utils/ # Utility functions
β β β βββ api.js # API client
β β β βββ agentSync.js # Agent sync utilities
β β β βββ colors.js # Color utilities
β β β βββ dashboardCache.js # Dashboard caching
β β βββ App.js # Main app component
β β βββ App.css # Global styles
β β βββ index.js # React entry point
β βββ public/
β β βββ index.html # HTML template
β βββ package.json # Node dependencies
β βββ Dockerfile # Frontend Docker image
β
βββ docs/ # Documentation
β βββ screenshots/ # UI screenshots
β βββ dashboard-overview.png
β βββ frontend-management.png
β βββ backend-configuration.png
β βββ bulk-import.png
β βββ *.png # Other screenshots
β
βββ k8s/ # Kubernetes/OpenShift manifests
β βββ manifests/
β βββ 00-namespace.yaml # Namespace
β βββ 01-service-accounts.yaml # Service accounts
β βββ 02-rbac.yaml # RBAC roles
β βββ 03-secrets.yaml # Secrets
β βββ 04-storage.yaml # Persistent volumes
β βββ 05-postgres.yaml # PostgreSQL deployment
β βββ 06-redis.yaml # Redis deployment
β βββ 07-configmaps.yaml # ConfigMaps
β βββ 08-backend.yaml # Backend deployment
β βββ 09-frontend.yaml # Frontend deployment
β βββ 10-nginx.yaml # Nginx deployment
β βββ 11-routes.yaml # OpenShift routes
β βββ 12-ingress.yaml # Kubernetes ingress
β βββ 13-hpa.yaml # Horizontal Pod Autoscaler
β βββ deploy.sh # Deployment script
β βββ cleanup.sh # Cleanup script
β
βββ nginx/ # Nginx reverse proxy
β βββ nginx.conf # Nginx configuration
β
βββ haproxy/ # Sample HAProxy configs
β βββ haproxy.cfg # Sample configuration
β βββ haproxy-simple.cfg # Simple example
β βββ haproxy-remote1.cfg # Remote example 1
β βββ haproxy-remote2.cfg # Remote example 2
β
βββ scripts/ # Utility scripts
β βββ check-agent-logs.sh # Check agent logs
β βββ check-agent-stats.sh # Check agent stats
β βββ check-haproxy-stats-socket.sh # Check stats socket
β βββ cleanup-cluster-entities.sh # Cleanup entities
β βββ cleanup-soft-deleted.sh # Cleanup soft deletes
β βββ update-agent-version.sh # Update agent version
β βββ test-stats-parser.py # Test stats parser
β
βββ utils/ # Uninstall utilities
β βββ uninstall-agent-linux.sh # Linux uninstaller
β βββ uninstall-agent-macos.sh # macOS uninstaller
β
βββ docker-compose.yml # Docker Compose configuration
βββ docker-compose.test.yml # Test environment
βββ build-images.sh # Build Docker images
βββ pytest.ini # Pytest configuration
βββ README.md # This file
βββ CONFIG.md # Configuration documentation
βββ TESTING.md # Testing documentation
βββ UPGRADE_GUIDE.md # Upgrade guide
βββ LICENSE # MIT License
- Backend API Endpoints: Add to backend/main.py
- Frontend Components: Add to frontend/src/components/
- Database Changes: Update backend/init.sql and create a migration
- HAProxy Features: Extend backend/haproxy_client.py
# Check if HAProxy is running
curl http://localhost:8404/stats
# Check HAProxy configuration
docker exec haproxy-instance haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg
# View HAProxy logs
docker logs haproxy-instance

# Check database status
docker exec haproxy-openmanager-db pg_isready -U haproxy_user
# Check database logs
docker logs haproxy-openmanager-db
# Reset database
docker-compose down
docker volume rm haproxy-openmanager_postgres_data
docker-compose up -d

# Test SSH connectivity
ssh -i ~/.ssh/id_rsa user@remote-server
# Check SSH key permissions
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub
# Verify SSH service
systemctl status ssh   # on remote server

# Clear node modules and reinstall
cd frontend
rm -rf node_modules package-lock.json
npm install
# Check for port conflicts
lsof -i :3000

Symptom: Apply Management shows pending changes that cannot be applied or rejected, or shows entities from the wrong cluster.
Root Cause: Orphan config versions - versions that reference entities from different clusters or deleted entities (entity ID reuse bug).
Solution (Automatic):
- Modern versions (v1.1.0+): Orphan versions are automatically detected and cleaned when you:
- Open Apply Management page (orphans filtered out from display)
- Click "Apply Changes" (orphans deleted before apply)
- Click "Reject All" (orphans deleted before reject)
Manual Cleanup (if needed):
-- Connect to database
kubectl exec -it <postgres-pod> -- psql -U haproxy_user -d haproxy_openmanager
-- Find orphan versions (example for cluster_id=2)
SELECT cv.id, cv.version_name, cv.cluster_id,
CASE
WHEN cv.version_name ~ 'backend-[0-9]+-'
THEN (SELECT cluster_id FROM backends WHERE id = SUBSTRING(cv.version_name FROM 'backend-([0-9]+)-')::int)
ELSE NULL
END as entity_cluster
FROM config_versions cv
WHERE cv.cluster_id = 2 AND cv.status = 'PENDING';
-- Delete orphan versions (where entity_cluster != 2 or NULL)
DELETE FROM config_versions
WHERE id IN (SELECT id FROM ...);

Prevention:
- Always delete backends/frontends in the correct cluster
- Use Apply Changes workflow (don't skip)
- Keep backend/frontend names unique across clusters
Symptom: Pending changes in Apply Management remain stuck and are not applied by agents, or agents report configuration failures.
Root Cause: The generated HAProxy configuration has syntax errors or validation issues that prevent HAProxy from accepting the new config.
Diagnosis - Check HAProxy Validation on Agent Server:
# SSH to the agent server where changes are stuck
ssh user@agent-server
# Check agent logs for validation errors
sudo tail -f /var/log/haproxy-agent/agent.log
# The agent writes the new config to /tmp/haproxy-new-config.cfg before applying
# Manually validate the configuration file
sudo /usr/sbin/haproxy -c -f /tmp/haproxy-new-config.cfg
# If validation fails, you'll see detailed error messages like:
# [ALERT] parsing [/tmp/haproxy-new-config.cfg:45] : 'bind' : missing address, port or path
# [ALERT] parsing [/tmp/haproxy-new-config.cfg:78] : unknown keyword 'ssl-certificate'

Common Validation Issues:
1. Invalid SSL Certificate Reference:

   [ALERT] 'bind' : unable to load SSL certificate '/etc/haproxy/certs/missing-cert.pem'

   - Solution: Ensure the SSL certificate was uploaded and synced to the agent
   - Check: ls -la /etc/haproxy/certs/

2. Backend Server Not Defined:

   [ALERT] 'use_backend' : unable to find required use_backend: 'backend-name'

   - Solution: Ensure the backend exists and is defined before the frontend that uses it

3. Invalid ACL Syntax:

   [ALERT] parsing [config:42] : error detected while parsing ACL 'acl-name'

   - Solution: Fix the ACL syntax in Frontend Management

4. Port Already in Use:

   [ALERT] Starting frontend GLOBAL: cannot bind socket

   - Solution: Check if another service is using the same port
Recovery Steps:
1. View Generated Config:

   # On the agent server - view the new config that failed validation
   sudo cat /tmp/haproxy-new-config.cfg

   # Compare with the current working config
   sudo cat /etc/haproxy/haproxy.cfg

2. Check Agent Logs:

   # See full error context and rollback messages
   sudo tail -100 /var/log/haproxy-agent/agent.log | grep -A 10 "validation failed"

3. Fix Issues in UI:

   - Go to Apply Management → view pending changes
   - Identify the problematic entity (frontend, backend, SSL certificate)
   - Edit or delete the problematic entity
   - Reject the current pending changes
   - Re-apply after fixing

4. Emergency Rollback:

   # On the agent server - the agent automatically keeps a backup
   sudo ls -la /etc/haproxy/haproxy.cfg.backup*

   # Manually restore if needed (the agent usually does this automatically on validation failure)
   sudo cp /etc/haproxy/haproxy.cfg.backup.TIMESTAMP /etc/haproxy/haproxy.cfg
   sudo systemctl reload haproxy
Prevention:
- Always test configurations incrementally (don't make too many changes at once)
- Use Configuration Viewer to preview generated HAProxy config before applying
- Ensure SSL certificates are uploaded before creating frontends that use them
- Keep backend names unique and descriptive
- Test ACL syntax before saving
Enable debug logging:
# Backend debug mode
export DEBUG=True
export LOG_LEVEL=DEBUG
# Frontend development mode
export NODE_ENV=development
export REACT_APP_DEBUG=true

# Application logs
docker logs haproxy-openmanager-backend
docker logs haproxy-openmanager-frontend
# HAProxy logs
docker logs haproxy-instance
# Database logs
docker logs haproxy-openmanager-db
# System logs (on remote servers)
journalctl -u haproxy
journalctl -u keepalived

- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
- Follow PEP 8 for Python code
- Use ESLint/Prettier for JavaScript code
- Add tests for new features
- Update documentation for API changes
- Ensure Docker builds work correctly
- Enterprise HAProxy Management: Manage multiple HAProxy clusters across different environments
- DevOps Automation: Automated configuration deployment with version control
- Multi-Tenant Setups: Separate pools and clusters for different teams/projects
- Monitoring & Analytics: Real-time metrics and performance tracking
- Security Management: Centralized SSL certificate and WAF rule management
This project is licensed under the MIT License - see the LICENSE file for details.
Taylan Bakırcıoğlu
Burgan Bank - DevOps / Product Group Manager
LinkedIn: linkedin.com/in/taylanbakircioglu
Developed with ❤️ for the HAProxy community
If you find this project useful, please consider giving it a star on GitHub!
- Documentation: This README and inline code documentation
- Issues: GitHub Issues
- Pull Requests: Contributions are welcome!
We welcome contributions! Here's how you can help:
- Report Bugs: Open an issue with detailed reproduction steps
- Suggest Features: Share your ideas for new features or improvements
- Submit PRs: Fix bugs, add features, or improve documentation
- Improve Docs: Help make the documentation clearer and more comprehensive
- Share: Star the project and share it with others
- Follow existing code style and conventions
- Write clear commit messages
- Add tests for new features
- Update documentation as needed
- Keep PRs focused on a single change
- HAProxy - The load balancer being managed
- FastAPI - Backend framework
- React.js - Frontend framework
- Ant Design - UI component library
Made with ❤️ for the HAProxy community