A Prometheus exporter for iPerf3 network performance metrics.
The Docker image has moved to GitHub Container Registry (ghcr.io) and the name has changed from iperf3-exporter to iperf3_exporter following GitHub's naming standards. If you were using the old image name, please update your references.
The iPerf3 exporter allows iPerf3 probing of endpoints for Prometheus monitoring, enabling you to measure network performance metrics like bandwidth, jitter, and packet loss.
- Measure network bandwidth between hosts
- Monitor network performance over time
- Support for both TCP and UDP tests
- Configurable test parameters (duration, bitrate, etc.)
- TLS support for secure communication
- Basic authentication for access control
- Health and readiness endpoints for monitoring
- Prometheus metrics for the exporter itself
Download the most suitable binary for your platform from the releases tab.
```bash
# Download (replace VERSION and PLATFORM with appropriate values)
curl -L -o iperf3_exporter https://github.com/edgard/iperf3_exporter/releases/download/VERSION/iperf3_exporter-VERSION.PLATFORM

# Make executable
chmod +x iperf3_exporter

# Run
./iperf3_exporter <flags>
```

Note: the iperf3 binary must also be installed and available on your PATH.
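If iperf3 is not already present, it can usually be installed from your platform's package manager. A minimal sketch (package names may differ on your distribution):

```bash
# Debian/Ubuntu
sudo apt-get install -y iperf3

# macOS (Homebrew)
brew install iperf3
```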
```bash
docker run --rm -d -p 9579:9579 --name iperf3_exporter ghcr.io/edgard/iperf3_exporter:latest
```

The Docker images are available for multiple architectures (amd64, arm64) and are published to GitHub Container Registry.
```bash
# Clone repository
git clone https://github.com/edgard/iperf3_exporter.git
cd iperf3_exporter

# Build
go build -o iperf3_exporter ./cmd/iperf3_exporter

# Run
./iperf3_exporter
```

```bash
./iperf3_exporter [flags]
```

The iPerf3 exporter is configured via command-line flags:
| Flag | Description | Default |
|---|---|---|
| `--web.listen-address` | Addresses on which to expose metrics and web interface (repeatable) | `:9579` |
| `--web.telemetry-path` | Path under which to expose metrics | `/metrics` |
| `--web.probe-path` | Path under which to expose the probe endpoint | `/probe` |
| `--iperf3.timeout` | iperf3 run timeout | `30s` |
| `--web.config.file` | Path to configuration file that can enable TLS or authentication | |
| `--web.systemd-socket` | Use systemd socket activation listeners instead of port listeners (Linux only) | `false` |
| `--log.level` | Only log messages with the given severity or above | `info` |
| `--log.format` | Output format of log messages | `logfmt` |
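For example, a typical invocation that lowers the per-probe timeout and raises log verbosity might look like the following sketch (flag values are illustrative):

```bash
./iperf3_exporter \
  --web.listen-address=:9579 \
  --iperf3.timeout=20s \
  --log.level=debug
```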
The exporter supports a configuration file for TLS and authentication settings. This file is specified with the --web.config.file flag.
Example configuration file:
```yaml
tls_server_config:
  cert_file: server.crt
  key_file: server.key

basic_auth_users:
  username: password
```

For more details on the web configuration file format, see the exporter-toolkit documentation.
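As a sketch, assuming the example above is saved as web-config.yml (the filename is arbitrary), the exporter can be started with it and then scraped over HTTPS with basic authentication:

```bash
# Start the exporter with TLS and basic auth enabled
./iperf3_exporter --web.config.file=web-config.yml

# Query the metrics endpoint; -k is only for self-signed test certificates
curl -k -u username:password https://localhost:9579/metrics
```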
To view all available command-line flags, run:
```bash
./iperf3_exporter -h
```

The timeout for each iperf3 probe is determined by the following logic:
- Prometheus scrape timeout: When Prometheus or compatible systems (e.g., VictoriaMetrics) scrape the exporter, they send the `X-Prometheus-Scrape-Timeout-Seconds` header containing the configured `scrape_timeout` value.
- Configured timeout limit: The `--iperf3.timeout` flag acts as an upper limit. If set, it restricts the effective timeout to be no larger than this value, regardless of the Prometheus scrape timeout.
- Default timeout: If neither the Prometheus header nor `--iperf3.timeout` is provided, the timeout defaults to 30 seconds.
- Timeout offset: A small offset (0.5 seconds) is subtracted from the Prometheus timeout to ensure the exporter finishes before Prometheus gives up, allowing for network delays and cleaner error handling.
Examples:
- Prometheus `scrape_timeout: 60s`, `--iperf3.timeout=10s` → Effective timeout: 10s (configured limit applied)
- Prometheus `scrape_timeout: 30s`, `--iperf3.timeout` not set → Effective timeout: 29.5s (header value minus offset)
- No Prometheus header, `--iperf3.timeout=15s` → Effective timeout: 15s (configured value used)
- No Prometheus header, `--iperf3.timeout` not set → Effective timeout: 30s (default)
This behavior ensures that the --iperf3.timeout flag can be used to enforce maximum test durations even when Prometheus is configured with longer scrape timeouts.
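To exercise this logic outside of Prometheus, you can call the probe endpoint manually and supply the header yourself; the target host below is a placeholder:

```bash
# Simulate a Prometheus scrape with scrape_timeout: 60s
curl -H 'X-Prometheus-Scrape-Timeout-Seconds: 60' \
  'http://localhost:9579/probe?target=iperf3.example.com&port=5201'
```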
When making requests to the /probe endpoint, the following parameters can be used:
| Parameter | Description | Default |
|---|---|---|
| `target` | Target host to probe (required) | - |
| `port` | Port that the target iperf3 server is listening on | `5201` |
| `reverse_mode` | Run iperf3 in reverse mode (server sends, client receives) | `false` |
| `udp_mode` | Run iperf3 in UDP mode instead of TCP | `false` |
| `bitrate` | Target bitrate in bits/sec (format: #[KMG][/#]). For UDP mode, iperf3 defaults to 1 Mbit/sec if not specified. | - |
| `period` | Duration of the iperf3 test | `5s` |
| `bind` | Bind to a specific local IP address or interface | - |
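For example, a manual probe that runs a 10-second UDP test with a 100 Mbit/sec bitrate cap could look like this (the target host is a placeholder):

```bash
curl 'http://localhost:9579/probe?target=iperf3.example.com&port=5201&udp_mode=true&bitrate=100M&period=10s'
```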
Visit http://localhost:9579 to see the exporter's web interface.
The iPerf3 exporter needs to be passed the target as a parameter; this can be done with relabelling.
Example config:
```yaml
scrape_configs:
  - job_name: 'iperf3'
    metrics_path: /probe
    static_configs:
      - targets:
          - foo.server
          - bar.server
    params:
      port: ['5201']
      # Optional: enable reverse mode
      # reverse_mode: ['true']
      # Optional: enable UDP mode
      # udp_mode: ['true']
      # Optional: set bitrate limit
      # bitrate: ['100M']
      # Optional: set test period
      # period: ['10s']
      # Optional: bind to specific interface/IP
      # bind: ['192.168.1.10']
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9579  # The iPerf3 exporter's real hostname:port.
```

The exporter provides the following metrics:
| Metric | Description | Labels |
|---|---|---|
| `iperf3_up` | Was the last iperf3 probe successful (1 for success, 0 for failure) | target, port |
| `iperf3_sent_seconds` | Total seconds spent sending packets | target, port |
| `iperf3_sent_bytes` | Total sent bytes for the last test run | target, port |
| `iperf3_received_seconds` | Total seconds spent receiving packets | target, port |
| `iperf3_received_bytes` | Total received bytes for the last test run | target, port |
| `iperf3_retransmits` | Total retransmits for the last test run (TCP mode only, omitted in UDP) | target, port |
| `iperf3_sent_packets` | Total sent packets for the last UDP test run (UDP mode only) | target, port |
| `iperf3_sent_jitter_ms` | Jitter in milliseconds for sent packets (UDP mode only) | target, port |
| `iperf3_lost_packets` | Total lost packets for the last UDP test run (UDP mode only) | target, port |
| `iperf3_lost_percent` | Percentage of packets lost for the last UDP test run (UDP mode only) | target, port |
Additionally, the exporter provides metrics about itself:
| Metric | Description |
|---|---|
| `iperf3_exporter_duration_seconds` | Duration of collections by the iperf3 exporter |
| `iperf3_exporter_errors_total` | Errors raised by the iperf3 exporter |
The byte and second metrics are gauges that represent the results from each individual iperf3 test run. Each time Prometheus scrapes the /probe endpoint, a new iperf3 test runs and the metrics reflect that test's results.
You can use the following Prometheus queries to calculate bandwidth in Mbits/sec:
```promql
iperf3_received_bytes / iperf3_received_seconds * 8 / 1000000
iperf3_sent_bytes / iperf3_sent_seconds * 8 / 1000000
```
These queries calculate the average bandwidth for each test by dividing the total bytes by the time duration to get bytes per second, then converting from bytes to bits (multiply by 8) and from bits to megabits (divide by 1,000,000).
For average bandwidth over multiple test runs:
```promql
avg_over_time((iperf3_received_bytes / iperf3_received_seconds * 8 / 1000000)[30m:])
avg_over_time((iperf3_sent_bytes / iperf3_sent_seconds * 8 / 1000000)[30m:])
```
Note: Since these are gauge metrics (not counters), the rate() function should not be used. Each scrape represents a discrete test, not a cumulative counter.
Contributions to the iperf3_exporter are welcome!
This project follows the Conventional Commits specification. When contributing, please format your commit messages according to this standard:
```
<type>(<scope>): <description>

[optional body]

[optional footer(s)]
```
Examples:
```
feat: add support for UDP tests
fix: correct metric label in collector
docs: update installation instructions
refactor(collector): simplify error handling
```
- Go 1.24 or higher
- iperf3 installed on your system
```
.
├── cmd/
│   └── iperf3_exporter/    # Main application entry point
├── internal/
│   ├── collector/          # Prometheus collector implementation
│   ├── config/             # Configuration handling
│   ├── iperf/              # iperf3 command execution and result parsing
│   └── server/             # HTTP server implementation
├── tests/
│   └── e2e/                # End-to-end tests
├── .github/
│   └── workflows/          # GitHub Actions workflows
├── .goreleaser.yaml        # GoReleaser configuration
├── Dockerfile              # Multi-arch Docker build configuration
├── go.mod                  # Go module definition
└── README.md               # This file
```
The project uses a Makefile to streamline development tasks:
```bash
# Build the binary
make build
# Run all tests
make test
# Complete development workflow (run mod, generate, lint, vet, tests and build)
make all
# Tidy and download dependencies
make mod
# Run linting
make lint
# Run go vet
make vet
# Generate code (if any generators are configured)
make generate
# Build Docker image for local development
make docker
# See all available commands
make help
```

You can also use standard Go commands directly:
```bash
# Build manually
go build -o iperf3_exporter ./cmd/iperf3_exporter

# Run tests
go test ./...
```

This project is released under the Apache License 2.0; see LICENSE.