# esdump-rs

Dump Elasticsearch or OpenSearch indexes to blob storage, really-really fast :rocket:

Features:

- Super-duper fast
- Supports compressing output with zstd or gzip
- Natively supports blob storage on AWS, Google Cloud and Azure
- Supports filtering and selecting specific fields
- Detailed progress output and logging
- Comes as a single, small static binary or a Docker image
- Runs on Windows, Linux or macOS
- Written in Rust :crab:

![](./images/readme.gif)

## Installation

**Releases:** Grab a pre-built executable [from the releases page](https://github.com/GitGuardian/esdump-rs/releases)

**Docker:** `docker run ghcr.io/gitguardian/esdump-rs:v0.1.0`

## Usage

Pass the Elasticsearch or OpenSearch HTTP(S) URL and a blob storage URL, set the credentials in the environment (see [example.env](./example.env)), and run!

```shell
$ esdump-rs http://localhost:9200 s3://es-dump/test/ \
    --index=test-index \
    --batches-per-file=5 \
    --batch-size=5000 \
    --concurrency=10
```

Settings such as the batch size and concurrency can be set as flags:

```shell
Usage: esdump-rs [OPTIONS] --index <INDEX> --concurrency <CONCURRENCY> --batch-size <BATCH_SIZE> --batches-per-file <BATCHES_PER_FILE> <CLUSTER> <OUTPUT_LOCATION>

Arguments:
  <CLUSTER>          Elasticsearch cluster to dump
  <OUTPUT_LOCATION>  Location to write results. Can be a file://, s3:// or gs:// URL

Options:
  -i, --index <INDEX>                           Index to dump
  -c, --concurrency <CONCURRENCY>               Number of concurrent requests to use
  -l, --limit <LIMIT>                           Limit the total number of records returned
  -b, --batch-size <BATCH_SIZE>                 Number of records in each batch
      --batches-per-file <BATCHES_PER_FILE>     Number of batches to write per file
  -q, --query <QUERY>                           A file path containing a query to execute while dumping
  -f, --field <FIELD>                           Specific fields to fetch
      --compression <COMPRESSION>               Compress the output files [default: zstd] [possible values: gzip, zstd]
      --concurrent-uploads <CONCURRENT_UPLOADS> Max chunks to concurrently upload *per task*
      --upload-size <UPLOAD_SIZE>               Size of each uploaded chunk [default: 15MB]
  -d, --distribution <DISTRIBUTION>             Distribution of the cluster [possible values: elasticsearch, opensearch]
      --env-file <ENV_FILE>                     Env file to load credentials from [default: .env]
  -h, --help                                    Print help
  -V, --version                                 Print version
```
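
The `--query` flag reads a query from a file and applies it while dumping. A minimal sketch, assuming the file holds a standard Elasticsearch query DSL body — the file name, index name, and field names below are placeholders, and repeating `--field` once per field is an assumption based on its "Specific fields to fetch" description:

```shell
# query.json: a standard Elasticsearch query DSL body (field and date are placeholders)
$ cat query.json
{"query": {"range": {"created_at": {"gte": "2024-01-01"}}}}

# Dump only matching documents, fetching just two fields
$ esdump-rs http://localhost:9200 s3://es-dump/filtered/ \
    --index=test-index \
    --query=query.json \
    --field=id \
    --field=created_at \
    --batch-size=5000 \
    --batches-per-file=5 \
    --concurrency=10
```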
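
When using the Docker image, credentials can be supplied with Docker's own `--env-file` flag rather than a local `.env` file. A sketch, assuming the image's entrypoint is the `esdump-rs` binary and the cluster runs on the Docker host (`host.docker.internal` resolves to the host on Docker Desktop; on Linux, use the host's address instead):

```shell
# Pass credentials from .env into the container and dump to S3
$ docker run --rm --env-file .env ghcr.io/gitguardian/esdump-rs:v0.1.0 \
    http://host.docker.internal:9200 s3://es-dump/test/ \
    --index=test-index \
    --batches-per-file=5 \
    --batch-size=5000 \
    --concurrency=10
```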