# elasticdump

A powerful Go CLI tool for migrating data between Elasticsearch clusters. Elasticdump can also back up and restore data, with support for multiple Elasticsearch versions and efficient handling of large datasets.

## Features
- **Fast Data Migration**: Migrate data between Elasticsearch clusters efficiently
- **Backup & Restore**: Comprehensive backup and restore functionality
- **Multi-Version Support**: Compatible with various Elasticsearch versions
- **High Performance**: Multi-threaded operations for faster processing
- **Progress Tracking**: Real-time progress bars for long-running operations
- **Multiple Formats**: Support for JSON and NDJSON output formats
- **Flexible Operations**: Transfer data, mappings, or settings independently
## Installation

Install with `go install`:

```bash
go install github.com/lilmonk/elasticdump@latest
```

Or build from source:

```bash
git clone https://github.com/lilmonk/elasticdump.git
cd elasticdump
go build -o elasticdump
```

Alternatively, download the latest binary from the releases page.
## Quick Start

Transfer data between two Elasticsearch clusters:

```bash
elasticdump transfer --input=http://localhost:9200/source_index --output=http://localhost:9200/dest_index
```

Backup data to a file:

```bash
elasticdump backup --input=http://localhost:9200/myindex --output=backup.ndjson
```

Restore data from a backup file:

```bash
elasticdump restore --input=backup.ndjson --output=http://localhost:9200/myindex
```

Transfer only index mappings:

```bash
elasticdump transfer --input=http://localhost:9200/myindex --output=http://localhost:9200/myindex --type=mapping
```

Transfer only index settings:

```bash
elasticdump transfer --input=http://localhost:9200/myindex --output=http://localhost:9200/myindex --type=settings
```

## Commands

### transfer

Transfer data, mappings, or settings between Elasticsearch clusters or to/from files.
```bash
elasticdump transfer [flags]
```

Flags:

- `--input, -i`: Source Elasticsearch cluster or index (required)
- `--output, -o`: Destination Elasticsearch cluster or index (required)
- `--type, -t`: Type of data to transfer (data, mapping, settings) (default: "data")
- `--limit, -l`: Limit the number of records to transfer (0 = no limit) (default: 0)
- `--concurrency, -c`: Number of concurrent operations (default: 4)
- `--format, -f`: Output format (json, ndjson) (default: "json")
- `--scrollSize, -s`: Size of the scroll for large datasets (default: 1000)
- `--username, -u`: Username for Elasticsearch authentication
- `--password, -p`: Password for Elasticsearch authentication
### backup

Backup Elasticsearch data to a file. This is a convenience wrapper around the transfer command.

```bash
elasticdump backup [flags]
```

Flags:

- `--input, -i`: Source Elasticsearch cluster or index (required)
- `--output, -o`: Output file path (required)
- `--type, -t`: Type of data to backup (data, mapping, settings) (default: "data")
- `--limit, -l`: Limit the number of records to backup (0 = no limit) (default: 0)
- `--concurrency, -c`: Number of concurrent operations (default: 4)
- `--format, -f`: Output format (json, ndjson) (default: "ndjson")
- `--scrollSize, -s`: Size of the scroll for large datasets (default: 1000)
- `--username, -u`: Username for Elasticsearch authentication
- `--password, -p`: Password for Elasticsearch authentication
### restore

Restore Elasticsearch data from a backup file.

```bash
elasticdump restore [flags]
```

Flags:

- `--input, -i`: Input file path (required)
- `--output, -o`: Destination Elasticsearch cluster or index (required)
- `--type, -t`: Type of data to restore (data, mapping, settings) (default: "data")
- `--concurrency, -c`: Number of concurrent operations (default: 4)
- `--username, -u`: Username for Elasticsearch authentication
- `--password, -p`: Password for Elasticsearch authentication

### Global flags

- `--verbose, -v`: Verbose output
- `--help, -h`: Help for any command
- `--version`: Show version information
## Examples

### Full index migration

Migrate an entire index including data, mappings, and settings:

```bash
# First, transfer settings and mappings
elasticdump transfer --input=http://source:9200/myindex --output=http://dest:9200/myindex --type=settings
elasticdump transfer --input=http://source:9200/myindex --output=http://dest:9200/myindex --type=mapping

# Then transfer the data
elasticdump transfer --input=http://source:9200/myindex --output=http://dest:9200/myindex --type=data --concurrency=8
```

### Large datasets

For large datasets, increase concurrency and scroll size:

```bash
elasticdump transfer \
  --input=http://localhost:9200/large_index \
  --output=http://newcluster:9200/large_index \
  --concurrency=10 \
  --scrollSize=5000 \
  --verbose
```

### Partial backup

Backup only a subset of documents:
```bash
elasticdump backup \
  --input=http://localhost:9200/myindex \
  --output=partial_backup.ndjson \
  --limit=10000 \
  --format=ndjson
```

## Authentication

Elasticdump supports basic authentication for clusters that require it:

```bash
elasticdump transfer \
  --input=http://source.elasticsearch.com:9200/index \
  --output=http://dest.elasticsearch.com:9200/index \
  --username=elastic \
  --password=your_password
```

## Performance Tips

- **Increase Concurrency**: Use the `--concurrency` flag to increase parallel operations
- **Optimize Scroll Size**: Adjust `--scrollSize` based on document size and available memory
- **Use NDJSON Format**: For large datasets, the NDJSON format is more memory efficient
- **Network Proximity**: Run elasticdump close to your Elasticsearch clusters to reduce network latency
## Error Handling

Elasticdump includes robust error handling:
- Automatic retries for transient network errors
- Detailed error messages for debugging
- Graceful handling of malformed documents
## Contributing

We welcome contributions! Please see CONTRIBUTING.md for details.

## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Support

- Documentation
- Issues
- Discussions