| Crates.io | aws-multipart-upload |
| lib.rs | aws-multipart-upload |
| version | 0.1.0-rc5 |
| created_at | 2025-03-17 08:55:39.287728+00 |
| updated_at | 2025-12-12 02:30:34.855665+00 |
| description | SDK plugin for S3 multipart uploads |
| homepage | |
| repository | https://github.com/quasi-coherent/aws-multipart-upload |
| max_upload_size | |
| id | 1595215 |
| size | 187,962 |
A high-level API for building and working with AWS S3 multipart uploads using the official SDK for Rust.
Making an AWS S3 multipart upload is a fairly involved multi-stage process:

1. Initiate the upload with a `CreateMultipartUpload` request and record the returned upload ID.
2. Upload the object in numbered parts with `UploadPart` requests, recording the ETag from each response; every part except the last must be at least 5 MiB.
3. Finish with a `CompleteMultipartUpload` request carrying the full list of part numbers and ETags, or cancel with `AbortMultipartUpload`.
The official AWS Rust SDK is generated code: it exposes request builders, including the ones for the operations above, that can be initialized and sent from a client, but it offers little beyond that.
The aws-multipart-upload crate aims to simplify this process with abstractions that integrate
cleanly with the parts of the Rust ecosystem one is likely (or would like) to be using when
performing multipart uploads.
Add the crate to your Cargo.toml:
aws-multipart-upload = "0.1.0-rc5"
The "csv" feature flag enables a "part encoder" (the component responsible for writing items to a
part) built on a CSV writer. Part encoders for writing JSON Lines and for writing arbitrary lines
of text are available as well.
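To illustrate the concept, a line-oriented part encoder boils down to serializing each item into the byte buffer of the part currently being assembled. This is a minimal standalone sketch; the names are hypothetical, not the crate's API:

```rust
// Hypothetical sketch of a "part encoder": append each item to the
// byte buffer of the part under construction.
struct LinesEncoder {
    buf: Vec<u8>,
}

impl LinesEncoder {
    fn new() -> Self {
        Self { buf: Vec::new() }
    }

    // Append one item as a line of text to the current part.
    fn encode(&mut self, item: &str) {
        self.buf.extend_from_slice(item.as_bytes());
        self.buf.push(b'\n');
    }

    // Hand the accumulated bytes off for upload as one part.
    fn take_part(&mut self) -> Vec<u8> {
        std::mem::take(&mut self.buf)
    }
}

fn main() {
    let mut enc = LinesEncoder::new();
    enc.encode("alpha");
    enc.encode("beta");
    let part = enc.take_part();
    println!("part is {} bytes", part.len()); // prints "part is 11 bytes"
}
```

The CSV and JSON Lines encoders differ only in the serialization step (and, for CSV, an optional header row written once per part).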
This example shows a stream of serde_json::Values being written as comma-separated values to a
multipart upload. The upload is a future: awaiting it runs the stream to completion, writing and
uploading parts behind the scenes and completing the upload when the stream is exhausted.
See more examples in the repository.
use aws_multipart_upload::{ByteSize, SdkClient, UploadBuilder};
use aws_multipart_upload::codec::CsvEncoder;
use aws_multipart_upload::write::UploadStreamExt as _;
use futures::stream::{self, StreamExt as _};
use serde_json::{Value, json};
// Default aws-sdk-s3 client:
let client = SdkClient::defaults().await;
// Use `UploadBuilder` to build a multipart uploader:
let upl = UploadBuilder::new(client)
.with_encoder(CsvEncoder::default().with_header())
.with_part_size(ByteSize::mib(10))
.with_uri(("example-bucket-us-east-1", "destination/key.csv"))
.build();
// Consume a stream of `Value`s by forwarding it to `upl`,
// and poll for completion:
let values = stream::iter(0..).map(|n| json!({"n": n, "n_sq": n * n}));
let completed = values
    .take(100_000)
.collect_upload(upl)
.await
.unwrap();
println!("object uploaded: {}", completed.uri);
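The `collect_upload` call hides the buffering the uploader performs. As a rough illustration of that mechanism (in plain Rust, with hypothetical names and a tiny part size for demonstration): accumulate encoded bytes and flush a part whenever the configured threshold is reached, then flush the remainder at the end.

```rust
// Tiny threshold for demonstration; real parts are several MiB.
const PART_SIZE: usize = 16;

// Buffer encoded items, cutting a new part each time the buffer
// reaches PART_SIZE; the final part may be smaller.
fn split_into_parts(items: impl Iterator<Item = String>) -> Vec<Vec<u8>> {
    let mut parts = Vec::new();
    let mut buf: Vec<u8> = Vec::new();
    for item in items {
        buf.extend_from_slice(item.as_bytes());
        buf.push(b'\n');
        if buf.len() >= PART_SIZE {
            parts.push(std::mem::take(&mut buf));
        }
    }
    if !buf.is_empty() {
        parts.push(buf); // final, possibly undersized part
    }
    parts
}

fn main() {
    // Each item serializes to 8 bytes including the newline.
    let items = (0..5).map(|n| format!("{{\"n\":{n}}}"));
    let parts = split_into_parts(items);
    println!("{} parts", parts.len()); // prints "3 parts"
}
```

The real uploader does the same accounting asynchronously, sending each flushed buffer as an `UploadPart` request and collecting the part numbers and ETags needed to complete the upload.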