A fast, concurrent, and feature-rich file downloader library and CLI tool written in Go.
# Go users (recommended - works on all platforms)
go install github.com/forest6511/gdl/cmd/gdl@latest
# macOS/Linux via Homebrew
brew install forest6511/tap/gdl
# Or download binaries from GitHub Releases
# https://github.com/forest6511/gdl/releases
# Simple download - gdl automatically optimizes for file size
gdl https://example.com/file.zip
# With resume support (recommended for large files)
gdl --resume https://releases.ubuntu.com/22.04/ubuntu-22.04-desktop-amd64.iso

gdl automatically detects optimal concurrency, resumes interrupted downloads, and handles errors gracefully.
Next Steps: Scenario Guides | CLI Reference | API Reference
- Optimized Performance: Smart defaults with adaptive concurrency based on file size
- Bandwidth Control: Rate limiting with human-readable formats (1MB/s, 500k, etc.; see the parsing sketch after this list)
- Progress Tracking: Real-time progress with multiple display formats
- Resume Support: Automatic resume of interrupted downloads
- Protocol Support: HTTP/HTTPS with custom headers and proxy support
- Error Handling: Comprehensive error handling with smart retry logic
- Cross-Platform: Works on Linux, macOS, Windows, and ARM (including Apple Silicon, Raspberry Pi, and ARM servers)
- Dual Interface: Both library API and command-line tool
- User-Friendly: Interactive prompts and helpful error messages
- Plugin System: Extensible plugin architecture for custom functionality
- Event-Driven: Hook into download lifecycle events
- Performance Monitoring: Metrics collection and aggregation for production use
- Security: Built-in security constraints and validation
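The library's `MaxRate` option (shown later in the API examples) takes a plain bytes-per-second value, while the CLI accepts human-readable strings such as `1MB/s` or `500k`. If you need to bridge the two in your own code, a small converter along the lines of the sketch below works. This is illustrative only, not gdl's own parser; check the CLI reference for the exact formats it accepts.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseRate converts strings like "1MB/s", "500k", or "2.5m" into bytes per
// second. Illustrative helper only; it is not part of gdl's API.
func parseRate(s string) (int64, error) {
	s = strings.ToLower(strings.TrimSpace(s))
	s = strings.TrimSuffix(s, "/s")
	units := []struct {
		suffix string
		mult   float64
	}{
		{"gb", 1 << 30}, {"g", 1 << 30},
		{"mb", 1 << 20}, {"m", 1 << 20},
		{"kb", 1 << 10}, {"k", 1 << 10},
		{"b", 1},
	}
	mult := float64(1)
	for _, u := range units {
		if strings.HasSuffix(s, u.suffix) {
			mult = u.mult
			s = strings.TrimSuffix(s, u.suffix)
			break
		}
	}
	n, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid rate %q: %w", s, err)
	}
	return int64(n * mult), nil
}

func main() {
	for _, s := range []string{"1MB/s", "500k", "2.5m"} {
		n, _ := parseRate(s)
		fmt.Printf("%s = %d bytes/s\n", s, n)
	}
}
```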
gdl uses intelligent optimization to provide the best download speeds (an illustrative sketch of the tiering follows this list):
- Small files (<1MB): Lightweight mode - minimal overhead, 60-90% of curl speed
- Small-medium files (1-10MB): 2 concurrent connections
- Medium files (10-100MB): 4 concurrent connections
- Large files (100MB-1GB): 8 concurrent connections
- Very large files (>1GB): 16 concurrent connections
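The tiering above can be pictured as a simple size-to-concurrency mapping. The helper below is purely illustrative: the thresholds mirror the list, but the function is not part of gdl's API, and gdl applies these defaults for you automatically.

```go
package main

import "fmt"

// concurrencyFor mirrors the size tiers listed above. Illustrative only;
// gdl selects these defaults internally based on the detected file size.
func concurrencyFor(size int64) int {
	const MB = 1 << 20
	switch {
	case size < 1*MB:
		return 1 // lightweight mode: single connection, minimal overhead
	case size < 10*MB:
		return 2
	case size < 100*MB:
		return 4
	case size < 1024*MB:
		return 8
	default:
		return 16
	}
}

func main() {
	for _, size := range []int64{512 << 10, 5 << 20, 50 << 20, 500 << 20, 2 << 30} {
		fmt.Printf("%11d bytes -> %2d connections\n", size, concurrencyFor(size))
	}
}
```

Both interfaces let you override the default: `--concurrent` on the CLI and `MaxConcurrency` in the library's `Options`.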
Based on benchmarks against curl and wget:
| File Size | gdl (optimized) | gdl (baseline) | curl | wget |
|---|---|---|---|---|
| 100KB | 90% | 30% | 100% | 90% |
| 500KB | 80% | 20% | 100% | 80% |
| 1MB | 80% | 80% | 100% | 130% |
| 10MB | 85% | 70% | 100% | 90% |
| 50MB | 90% | 60% | 100% | 85% |
Performance as % of curl speed (baseline = 100%). Higher is better.
- Zero-Copy I/O: Linux sendfile for files >10MB reduces CPU by 20-30%
- Buffer Pooling: 4-tier memory pool (8KB/64KB/1MB/4MB) reduces allocations by 50-90% (see the sketch after this list)
- Advanced Connection Pool: DNS caching, TLS session resumption, CDN optimization
- Lightweight Mode: Minimal overhead HTTP client for files <1MB (3-6x faster)
- Optimized HTTP Client: Enhanced connection pooling and HTTP/2 support
- Adaptive Chunk Sizing: Dynamic buffer sizes (8KB-1MB) based on file size
- Memory Efficiency: 50-90% less memory usage with advanced pooling
- CI Performance Testing: Automated regression detection with 10% threshold
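To make the buffer-pooling idea concrete, the sketch below shows one way a 4-tier pool can be built from `sync.Pool` instances. It only mirrors the 8KB/64KB/1MB/4MB tiers named above; gdl's real pool lives in its internal packages and may differ in detail.

```go
package main

import (
	"fmt"
	"sync"
)

// Tier sizes mirror the 8KB/64KB/1MB/4MB pools named above. Illustrative
// sketch of the technique, not gdl's internal implementation.
var tiers = []int{8 << 10, 64 << 10, 1 << 20, 4 << 20}

var pools = func() []*sync.Pool {
	ps := make([]*sync.Pool, len(tiers))
	for i, size := range tiers {
		size := size // capture per-iteration value for the closure
		ps[i] = &sync.Pool{New: func() any {
			b := make([]byte, size)
			return &b
		}}
	}
	return ps
}()

// getBuf returns a pooled buffer with room for at least n bytes, falling
// back to a fresh allocation for oversized requests.
func getBuf(n int) *[]byte {
	for i, size := range tiers {
		if n <= size {
			return pools[i].Get().(*[]byte)
		}
	}
	b := make([]byte, n)
	return &b
}

// putBuf returns a buffer to the pool matching its capacity tier.
func putBuf(b *[]byte) {
	for i, size := range tiers {
		if cap(*b) == size {
			pools[i].Put(b)
			return
		}
	}
}

func main() {
	buf := getBuf(32 << 10) // lands in the 64KB tier
	fmt.Printf("got a %d-byte buffer from the pool\n", len(*buf))
	putBuf(buf)
}
```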
go install github.com/forest6511/gdl/cmd/gdl@latest
brew tap forest6511/tap
brew install forest6511/tap/gdl
Note: Use the full tap name forest6511/tap/gdl to avoid conflicts with the GNOME gdl package.
Download pre-built binaries from GitHub Releases
go get github.com/forest6511/gdl
# Simple download (uses smart defaults)
gdl https://example.com/file.zip
# Override smart defaults with custom settings
gdl --concurrent 8 --chunk-size 2MB -o myfile.zip https://example.com/file.zip
# With bandwidth limiting
gdl --max-rate 1MB/s https://example.com/large-file.zip
gdl --max-rate 500k https://example.com/file.zip # Smart concurrency still applies
# With custom headers and resume
gdl -H "Authorization: Bearer token" --resume https://example.com/file.zip
# Using plugins
gdl plugin install oauth2-auth
gdl plugin list
gdl --plugin oauth2-auth https://secure-api.example.com/file.zip

package main
import (
"bytes"
"context"
"fmt"
"github.com/forest6511/gdl"
"github.com/forest6511/gdl/pkg/events" // event constants used by the listener example below
)
func main() {
// Simple download using Download function
stats, err := gdl.Download(context.Background(),
"https://example.com/file.zip", "file.zip")
if err != nil {
panic(err)
}
fmt.Printf("Downloaded %d bytes in %v\n", stats.BytesDownloaded, stats.Duration)
// Download with progress callback and bandwidth limiting using DownloadWithOptions
options := &gdl.Options{
MaxConcurrency: 4,
MaxRate: 1024 * 1024, // 1MB/s rate limit
ProgressCallback: func(p gdl.Progress) {
fmt.Printf("Progress: %.1f%% Speed: %.2f MB/s\n",
p.Percentage, float64(p.Speed)/1024/1024)
},
}
stats, err = gdl.DownloadWithOptions(context.Background(),
"https://example.com/file.zip", "file.zip", options)
if err == nil {
fmt.Printf("Download completed successfully! Average speed: %.2f MB/s\n",
float64(stats.AverageSpeed)/1024/1024)
}
// Download to memory using DownloadToMemory
data, stats, err := gdl.DownloadToMemory(context.Background(),
"https://example.com/small-file.txt")
if err == nil {
fmt.Printf("Downloaded %d bytes to memory in %v\n", len(data), stats.Duration)
}
// Download to any io.Writer using DownloadToWriter
var buffer bytes.Buffer
stats, err = gdl.DownloadToWriter(context.Background(),
"https://example.com/data.json", &buffer)
if err == nil {
fmt.Printf("Downloaded to buffer: %d bytes\n", stats.BytesDownloaded)
}
// Resume a partial download using DownloadWithResume
stats, err = gdl.DownloadWithResume(context.Background(),
"https://example.com/large-file.zip", "large-file.zip")
if err == nil && stats.Resumed {
fmt.Printf("Successfully resumed download: %d bytes\n", stats.BytesDownloaded)
}
// Get file information without downloading using GetFileInfo
fileInfo, err := gdl.GetFileInfo(context.Background(),
"https://example.com/file.zip")
if err == nil {
fmt.Printf("File size: %d bytes\n", fileInfo.Size)
}
// Using the extensible Downloader with plugins
downloader := gdl.NewDownloader()
// Register custom protocol handler
// (customProtocolHandler is a placeholder for your own protocol implementation)
err = downloader.RegisterProtocol(customProtocolHandler)
// Use middleware
// (rateLimitingMiddleware is likewise a placeholder you provide)
downloader.UseMiddleware(rateLimitingMiddleware)
// Register event listeners
downloader.On(events.EventDownloadStarted, func(event events.Event) {
fmt.Printf("Download started: %s\n", event.Data["url"])
})
// Download with plugins and middleware
stats, err = downloader.Download(context.Background(),
"https://example.com/file.zip", "file.zip", options)
if err == nil {
fmt.Printf("Plugin-enhanced download completed: %d bytes\n", stats.BytesDownloaded)
}
}

- API Reference - Complete library API documentation
- Directory Structure - Complete project organization
- Maintenance Guide - Development and maintenance procedures
- Error Handling - Error types and handling strategies
- Plugin Development - Plugin development guide
- Plugin Gallery - Example plugins and extensions
- Extending Guide - Extension points and customization
- Compatibility Policy - Versioning and API stability
- Go Package Docs - Generated Go documentation
- CLI Reference - Comprehensive CLI usage guide
Complete working examples are available in the examples/ directory:
- Basic Download - Simple download operations
- Concurrent Downloads - Parallel download optimization
- Progress Tracking - Real-time progress monitoring
- Resume Support - Interrupt and resume downloads
- Error Handling - Robust error recovery
- Production Usage - Production-ready patterns with monitoring
- CLI Examples - Command-line usage patterns
- Integration Tests - Feature verification
- Plugin Examples - Custom plugin development
- Extension Examples - System extension patterns
# Core functionality examples
cd examples/01_basic_download && go run main.go
cd examples/02_concurrent_download && go run main.go
cd examples/03_progress_tracking && go run main.go
cd examples/04_resume_functionality && go run main.go
cd examples/05_error_handling && go run main.go
cd examples/06_production_usage && go run main.go
# Interface examples
cd examples/cli
chmod +x *.sh
./basic_cli_examples.sh
./advanced_cli_examples.sh
# Integration demo
cd examples/integration
go run feature_demo.go
# Plugin examples
cd examples/plugins/auth/oauth2
go build -buildmode=plugin -o oauth2.so
cd examples/plugins/storage/s3
go build -buildmode=plugin -o s3.so

| Feature | CLI | Library | Description |
|---|---|---|---|
| Basic download | ✅ | ✅ | Simple URL to file download |
| Custom destination | ✅ | ✅ | Specify output filename/path |
| Overwrite existing | ✅ | ✅ | Force overwrite of existing files |
| Create directories | ✅ | ✅ | Auto-create parent directories |
| Concurrent downloads | ✅ | ✅ | Multiple simultaneous connections |
| Custom chunk size | ✅ | ✅ | Configurable download chunks |
| Bandwidth throttling | ✅ | ✅ | Rate limiting with human-readable formats |
| Single-threaded mode | ✅ | ✅ | Disable concurrent downloads |
| Resume downloads | ✅ | ✅ | Continue interrupted downloads |
| Retry on failure | ✅ | ✅ | Automatic retry with backoff |
| Custom retry settings | ✅ | ✅ | Configure retry attempts/delays |
| Custom headers | ✅ | ✅ | Add custom HTTP headers |
| Custom User-Agent | ✅ | ✅ | Set custom User-Agent string |
| Proxy support | ✅ | ✅ | HTTP proxy configuration |
| SSL verification control | ✅ | ✅ | Skip SSL certificate verification |
| Redirect handling | ✅ | ✅ | Follow HTTP redirects |
| Timeout configuration | ✅ | ✅ | Set request/download timeouts |
| Progress display | ✅ | ❌ | Visual progress bars |
| Progress callbacks | ❌ | ✅ | Programmatic progress updates |
| Multiple progress formats | ✅ | ❌ | Simple/detailed/JSON progress |
| Quiet mode | ✅ | ❌ | Suppress output |
| Verbose mode | ✅ | ❌ | Detailed logging |
| Download to memory | ❌ | ✅ | Download directly to memory |
| Download to writer | ❌ | ✅ | Download to any io.Writer |
| File info retrieval | ✅ | ✅ | Get file metadata without download |
| Error handling | ✅ | ✅ | Robust error handling and recovery |
| Comprehensive errors | ✅ | ✅ | Detailed error information |
| Error suggestions | ✅ | ✅ | User-friendly error suggestions |
| Multilingual messages | ✅ | ✅ | Localized error messages |
| Interactive prompts | ✅ | ❌ | User confirmation prompts |
| Disk space checking | ✅ | ✅ | Pre-download space verification |
| Network diagnostics | ✅ | ✅ | Network connectivity testing |
| Signal handling | ✅ | ❌ | Graceful shutdown on signals |
| Plugin system | ✅ | ✅ | Extensible plugin architecture |
| Custom protocols | ❌ | ✅ | Plugin-based protocol handlers |
| Middleware support | ❌ | ✅ | Request/response processing |
| Event system | ❌ | ✅ | Download lifecycle events |
| Custom storage | ❌ | ✅ | Pluggable storage backends |
| Performance monitoring | ✅ | ✅ | Metrics collection and aggregation |
- ✅ Fully supported - Feature is available and fully functional
- ❌ Not applicable - Feature doesn't make sense in this context
The terms "Resume" and "Retry" sound similar but handle different situations. Understanding the difference is key to using gdl effectively.
|  | Retry | Resume |
|---|---|---|
| Purpose | Automatic recovery from temporary errors | Manual continuation after an intentional stop |
| Trigger | Network errors, server errors | Existence of an incomplete file |
| Control | Number of attempts (`RetryAttempts`) | Enabled/Disabled (`EnableResume`) |
| Result | `stats.Retries` (count) | `stats.Resumed` (true/false) |
- Day 1: You start downloading a 10GB file. It gets to 5GB, and you stop the program (e.g., with Ctrl+C). (-> Interruption)
- Day 2: You run the same command again with resume enabled. `gdl` detects the incomplete 5GB file and starts downloading from that point. (-> This is a Resumed download)
- During the download of the remaining 5GB, your network connection briefly drops, causing a timeout error. `gdl` automatically waits a moment and re-attempts the failed request. (-> This is a Retry)
- The download then completes successfully.

In this case, the final `DownloadStats` would be `Resumed: true` and `Retries: 1`.
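In library code, the two knobs and the two result fields come together roughly as in the sketch below. The `EnableResume` and `RetryAttempts` field names are taken from the table above; verify them against the API reference before relying on this exact shape.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/forest6511/gdl"
)

func main() {
	// Field names follow the Retry vs Resume table above; confirm them in
	// the API reference before depending on this sketch.
	opts := &gdl.Options{
		EnableResume:  true, // resume: continue from an existing partial file
		RetryAttempts: 3,    // retry: re-attempt transient network/server errors
	}

	stats, err := gdl.DownloadWithOptions(context.Background(),
		"https://example.com/large-file.zip", "large-file.zip", opts)
	if err != nil {
		log.Fatal(err)
	}

	if stats.Resumed {
		fmt.Println("continued from a previous partial download")
	}
	fmt.Printf("transient errors retried: %d\n", stats.Retries)
}
```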
For the complete project structure, see Directory Structure.
- Core Engine (`internal/core`): Main download orchestration
- Concurrency Manager (`internal/concurrent`): Parallel download coordination
- Rate Limiter (`pkg/ratelimit`): Bandwidth throttling and rate control
- Resume Engine (`internal/resume`): Download resumption and partial file handling
- Progress System (`pkg/progress`): Real-time progress tracking
- Error Framework (`pkg/errors`): Comprehensive error handling
- Network Layer (`internal/network`): HTTP client and diagnostics
- Storage Layer (`internal/storage`): File system operations
- Plugin System (`pkg/plugin`): Extensible plugin architecture
- Event System (`pkg/events`): Download lifecycle events
- Middleware Layer (`pkg/middleware`): Request/response processing
- Protocol Registry (`pkg/protocols`): Custom protocol handlers
- Monitoring System (`pkg/monitoring`): Performance metrics and analytics
- CLI Interface (`cmd/gdl`): Command-line tool implementation
┌──────────────────────────────────────────────────────────────┐
│                           gdl Core                           │
├──────────────────────────────────────────────────────────────┤
│                        Plugin Manager                        │
├────────────────┬───────────────────┬─────────────────────────┤
│  Auth Plugins  │ Protocol Plugins  │     Storage Plugins     │
├────────────────┼───────────────────┼─────────────────────────┤
│   Transform    │   Hook Plugins    │     Custom Plugins      │
│   Plugins      │                   │                         │
└────────────────┴───────────────────┴─────────────────────────┘
Plugin Types (a packaging sketch follows this list):
- Authentication Plugins: OAuth2, API keys, custom auth schemes
- Protocol Plugins: FTP, S3, custom protocols
- Storage Plugins: Cloud storage, databases, custom backends
- Transform Plugins: Compression, encryption, format conversion
- Hook Plugins: Pre/post processing, logging, analytics
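For orientation, a plugin's source can be as small as the hypothetical auth plugin below, compiled with the same `go build -buildmode=plugin` invocation used in the plugin examples. The exported symbol name and method set are assumptions for illustration; the Plugin Development guide defines the actual contract gdl expects.

```go
// Hypothetical plugin source: the exported symbol ("Plugin") and the method
// set are assumptions; see the Plugin Development guide for the real interface.
package main

import "net/http"

type oauth2Auth struct {
	token string
}

// Name identifies the plugin to the host application.
func (p *oauth2Auth) Name() string { return "oauth2-auth" }

// Apply decorates an outgoing request with a bearer token.
func (p *oauth2Auth) Apply(req *http.Request) error {
	req.Header.Set("Authorization", "Bearer "+p.token)
	return nil
}

// Plugin is the symbol a host would resolve via the standard plugin package.
var Plugin = &oauth2Auth{token: "replace-me"}

// main is required because plugins are built from package main; it is never
// executed when the plugin is loaded.
func main() {}
```

Built with `go build -buildmode=plugin -o oauth2-auth.so .`, this produces a shared object of the same kind as the example build commands above.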
The project includes comprehensive testing:
# Run all tests (basic development)
go test ./...
# Run tests with race detection (recommended)
go test -race ./...
# Run with coverage
go test -coverprofile=coverage.out ./...
# Run benchmarks
go test -bench=. ./...
# Run performance benchmarks with optimization comparisons
go test -bench=BenchmarkDownloadWith -benchmem ./internal/core/
# Check for performance regressions
./scripts/performance_check.sh
# Complete local CI validation (recommended before pushing)
./scripts/local-ci-check.sh
# OR use Makefile targets
make pre-push # Format + all CI checks
make ci-check # All CI checks without formatting
# Test all platforms locally (requires: brew install act)
make test-ci-all # Ubuntu + Windows + macOS
make test-ci-ubuntu # Ubuntu only
make test-ci-windows # Windows only
make test-ci-macos # macOS only
# Quick cross-compilation check
make test-cross-compile

| Purpose | Command | Use Case |
|---|---|---|
| Development | `go test ./...` | Quick feedback during coding |
| Safety Check | `go test -race ./...` | Detect race conditions |
| Before Push | `./scripts/local-ci-check.sh` | Full CI validation |
| Cross-Platform | `make test-ci-all` | Test Windows/macOS compatibility |
| Coverage | `go test -coverprofile=...` | Coverage analysis |
- Unit tests: All packages have >90% coverage
- Integration tests: Real HTTP download scenarios (see the sketch after this list)
- CLI tests: Command-line interface functionality
- Benchmark tests: Performance regression detection
- Race detection: Concurrent safety verification
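If you want similar coverage in a project that embeds gdl as a library, an integration test against a local `httptest` server is a lightweight pattern. The sketch below uses only `gdl.DownloadToMemory` from the API examples above plus the standard library.

```go
package gdltest

import (
	"context"
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/forest6511/gdl"
)

// TestDownloadToMemory spins up a throwaway HTTP server and checks that the
// downloaded bytes round-trip correctly. Illustrative test, not part of
// gdl's own suite.
func TestDownloadToMemory(t *testing.T) {
	payload := []byte("hello, gdl")
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write(payload)
	}))
	defer srv.Close()

	data, stats, err := gdl.DownloadToMemory(context.Background(), srv.URL)
	if err != nil {
		t.Fatalf("download failed: %v", err)
	}
	if string(data) != string(payload) {
		t.Fatalf("got %q, want %q", data, payload)
	}
	t.Logf("downloaded %d bytes in %v", stats.BytesDownloaded, stats.Duration)
}
```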
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
# Clone the repository
git clone https://github.com/forest6511/gdl.git
cd gdl
# Install dependencies
go mod download
# Install golangci-lint (if not already installed)
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
# Install act for local CI testing (optional but recommended)
brew install act
# Verify everything works (CI equivalent check)
make ci-check
# Run tests
go test ./...
# Build CLI
go build -o gdl ./cmd/gdl/

Before committing and pushing changes, always run these checks locally to avoid CI failures:
# Run lint checks (essential before commit/push)
golangci-lint run
# Run tests with race detection
go test -race ./...
# Run tests with coverage
go test -coverprofile=coverage.out ./...

To prevent "works locally but fails in CI" issues, use these CI-equivalent commands:
# RECOMMENDED: Run complete pre-push validation
make pre-push # Formats code AND runs all CI checks
# Alternative: Run CI checks without formatting
make ci-check # All CI checks locally
# Cross-platform testing with act (requires: brew install act)
make test-ci-all # Test Ubuntu, Windows, macOS locally
make test-ci-ubuntu # Test Ubuntu CI locally
make test-ci-windows # Test Windows CI locally
make test-ci-macos # Test macOS CI locally
# Fix formatting issues automatically
make fix-and-commit # Auto-fix formatting and create commit if needed
# Quick cross-compilation check
make test-cross-compile # Fast Windows/macOS build verification
# Individual commands
make ci-format # Format code to CI standards
make ci-vet # go vet (excluding examples)
make ci-test-core # core library tests with race detection
git add .
git commit

A pre-commit hook is automatically installed that runs CI-equivalent checks on every commit, preventing most CI failures.
Important: Always run make ci-check locally before pushing to ensure all checks pass. This prevents CI pipeline failures and maintains code quality standards.
# Setup commit message validation and pre-commit checks
./scripts/setup-git-hooks.sh
# Prepare a new release
./scripts/prepare-release.sh v0.10.0
# Edit CHANGELOG.md with release notes
./scripts/prepare-release.sh --release v0.10.0
# Test GitHub Actions locally with act
act push -j quick-checks # Fast validation
act push -W .github/workflows/main.yml --dryrun # Full CI dry run

- Contributing Guide - Development guidelines and workflow
- Release Setup - Release management and distribution
- Local Testing - GitHub Actions testing with act
- API Reference - Library API documentation
- CLI Reference - Command-line usage
- Plugin Development - Plugin system guide
- Maintenance - Development and maintenance procedures
This project is licensed under the MIT License - see the LICENSE file for details.
- Go community for excellent libraries and tools
- Contributors who helped improve this project
- Users who provided feedback and bug reports
Made with ❤️ in Go