| Crates.io | s3fcp |
| lib.rs | s3fcp |
| version | 0.2.1 |
| created_at | 2025-12-18 14:51:45.749801+00 |
| updated_at | 2025-12-18 14:57:13.818724+00 |
| description | Fast file downloader with multi-part concurrent downloads from S3 and HTTP/HTTPS |
| homepage | |
| repository | https://github.com/Dzejkop/s3fcp |
| max_upload_size | |
| id | 1992556 |
| size | 150,052 |
A high-performance Rust CLI tool for downloading files from S3 and HTTP/HTTPS with multi-part concurrent downloads and ordered streaming to stdout.
cargo install --path .
Or build from source:
cargo build --release
./target/release/s3fcp --help
# Download and stream to stdout
s3fcp s3 s3://bucket/key
# Redirect to file
s3fcp s3 s3://bucket/key > output.bin
# Download specific version
s3fcp s3 s3://bucket/key --version-id v123
# Increase concurrency
s3fcp s3 s3://bucket/key -c 16
# Use larger chunks (human-readable sizes)
s3fcp s3 s3://bucket/key --chunk-size 16MB
# Download from HTTP URL
s3fcp http https://example.com/file.bin > file.bin
# With custom concurrency and chunk size
s3fcp http https://example.com/large.iso -c 16 --chunk-size 16MB
# Quiet mode
s3fcp http https://example.com/data.json -q | jq '.field'
Usage: s3fcp <COMMAND>
Commands:
s3 Download from S3
http Download from HTTP/HTTPS URL
help Print this message or the help of the given subcommand(s)
Usage: s3fcp s3 [OPTIONS] <URI>
Arguments:
<URI> S3 URI in the format s3://bucket/key
Options:
--version-id <VERSION_ID> S3 object version ID for versioned objects
-c, --concurrency <CONCURRENCY> Number of concurrent download workers [default: 10]
--chunk-size <CHUNK_SIZE> Chunk size [default: 8MB]
-q, --quiet Quiet mode - suppress progress output
-h, --help Print help
Usage: s3fcp http [OPTIONS] <URL>
Arguments:
<URL> HTTP/HTTPS URL to download
Options:
-c, --concurrency <CONCURRENCY> Number of concurrent download workers [default: 10]
--chunk-size <CHUNK_SIZE> Chunk size [default: 8MB]
-q, --quiet Quiet mode - suppress progress output
-h, --help Print help
Supported chunk size formats:
8388608 (bytes)
8MB, 1GB, 1TB (powers of 1000)
8MiB, 1GiB, 1TiB (powers of 1024)
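The same formats are accepted by the --chunk-size option of both subcommands. As a minimal sketch, a parser for these formats could look like the following; parse_size is a hypothetical helper, not s3fcp's actual parsing code:
use std::process::exit;

fn parse_size(s: &str) -> Option<u64> {
    let s = s.trim();
    // Split the leading digits from the unit suffix, if any.
    let idx = s.find(|c: char| !c.is_ascii_digit()).unwrap_or(s.len());
    let (num, unit) = s.split_at(idx);
    let num: u64 = num.parse().ok()?;
    let mult: u64 = match unit {
        "" => 1,                   // plain byte count
        "KB" => 1000,              // powers of 1000
        "MB" => 1000u64.pow(2),
        "GB" => 1000u64.pow(3),
        "TB" => 1000u64.pow(4),
        "KiB" => 1 << 10,          // powers of 1024
        "MiB" => 1 << 20,
        "GiB" => 1 << 30,
        "TiB" => 1u64 << 40,
        _ => return None,
    };
    num.checked_mul(mult)
}

fn main() {
    // The three README examples all resolve to roughly 8 MB.
    assert_eq!(parse_size("8388608"), Some(8_388_608));
    assert_eq!(parse_size("8MB"), Some(8_000_000));
    assert_eq!(parse_size("8MiB"), Some(8 * 1024 * 1024));
    if parse_size("8XB").is_some() {
        exit(1); // unknown units are rejected
    }
}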
s3fcp uses a 3-stage pipeline architecture.
For HTTP downloads, s3fcp checks whether the server supports Range requests via the Accept-Ranges header. If the server does, s3fcp uses chunked parallel downloads; otherwise, it falls back to a single-stream download.
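A minimal sketch of that capability check, assuming the reqwest crate with its blocking API (s3fcp's real client code may differ):
use reqwest::blocking::Client;
use reqwest::header::ACCEPT_RANGES;

/// Returns true if the server advertises byte-range support.
fn supports_ranges(client: &Client, url: &str) -> reqwest::Result<bool> {
    // A HEAD request is enough to inspect the advertised capabilities.
    let resp = client.head(url).send()?;
    Ok(resp
        .headers()
        .get(ACCEPT_RANGES)
        .and_then(|v| v.to_str().ok())
        .map(|v| v.eq_ignore_ascii_case("bytes"))
        .unwrap_or(false))
}

fn main() -> reqwest::Result<()> {
    let client = Client::new();
    let url = "https://example.com/file.bin"; // placeholder URL
    if supports_ranges(&client, url)? {
        println!("Accept-Ranges: bytes -> chunked parallel download");
    } else {
        println!("no byte-range support -> single-stream fallback");
    }
    Ok(())
}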
Memory usage is bounded by:
Max Memory ≈ 2 × concurrency × chunk_size
With defaults (concurrency=10, chunk_size=8MB):
Max Memory ≈ 160MB
This holds regardless of the file size.
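For example, running with -c 16 --chunk-size 16MB raises the bound to roughly 2 × 16 × 16MB = 512MB.
The factor of 2 plausibly covers up to concurrency chunks in flight plus a similar number parked in a reorder buffer awaiting in-order output; that reading is an inference from the formula, not something this README states. A minimal sketch of the reorder-buffer pattern behind ordered streaming to stdout, using plain threads (illustrative only, not s3fcp's internals):
use std::collections::BTreeMap;
use std::io::Write;
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for fetching one chunk over the network.
fn fetch_chunk(index: usize) -> Vec<u8> {
    format!("chunk {index}\n").into_bytes()
}

fn main() {
    let num_chunks = 8;
    let (tx, rx) = mpsc::channel::<(usize, Vec<u8>)>();

    // Workers fetch chunks concurrently and may finish out of order.
    // (One thread per chunk for brevity; a real tool would cap this at
    // the configured concurrency.)
    let handles: Vec<_> = (0..num_chunks)
        .map(|index| {
            let tx = tx.clone();
            thread::spawn(move || tx.send((index, fetch_chunk(index))).unwrap())
        })
        .collect();
    drop(tx); // each worker holds its own sender; close the original

    // Ordered writer: buffer out-of-order chunks in a BTreeMap and flush
    // the contiguous prefix to stdout as soon as it is complete.
    let mut pending = BTreeMap::new();
    let mut next = 0;
    let stdout = std::io::stdout();
    let mut out = stdout.lock();
    for (index, bytes) in rx {
        pending.insert(index, bytes);
        while let Some(bytes) = pending.remove(&next) {
            out.write_all(&bytes).unwrap();
            next += 1;
        }
    }

    for handle in handles {
        handle.join().unwrap();
    }
}
A real downloader would cap in-flight workers at the configured concurrency and use a bounded channel (e.g. mpsc::sync_channel), which is what keeps memory proportional to concurrency × chunk_size rather than to file size.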
For S3 downloads, s3fcp uses the standard AWS credential chain:
Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
Shared credentials file (~/.aws/credentials)
(see the sketch after the examples below for how this chain is loaded via the AWS SDK)
Download a 1GB file from S3 with 16 concurrent workers:
s3fcp s3 s3://my-bucket/large-file.bin -c 16 > large-file.bin
Download from HTTP and pipe to another command:
s3fcp http https://example.com/data.gz -q | gunzip | grep "pattern"
Quiet download for scripts:
s3fcp s3 s3://my-bucket/data.json -q | jq '.field'
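As referenced in the credentials section above, here is a minimal sketch of loading that standard chain with the AWS SDK for Rust, assuming the aws-config, aws-sdk-s3, and tokio crates; that s3fcp itself uses exactly these crates is an assumption:
use aws_config::BehaviorVersion;

#[tokio::main]
async fn main() {
    // load_defaults walks the standard chain: environment variables,
    // the shared credentials file, and the other usual sources.
    let config = aws_config::load_defaults(BehaviorVersion::latest()).await;
    let s3 = aws_sdk_s3::Client::new(&config);

    // Placeholder bucket/key; a HEAD call is a cheap way to confirm
    // the resolved credentials work.
    let head = s3
        .head_object()
        .bucket("my-bucket")
        .key("large-file.bin")
        .send()
        .await;
    println!("head_object succeeded: {}", head.is_ok());
}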
Run tests:
cargo test
Build for release:
cargo build --release
Check code:
cargo check
cargo clippy
MIT