| Crates.io | s-zip |
| lib.rs | s-zip |
| version | 0.8.0 |
| created_at | 2025-12-15 09:14:16.215092+00 |
| updated_at | 2026-01-16 09:00:22.324819+00 |
| description | High-performance streaming ZIP library with AES-256 encryption and async/await support - Read/write ZIP files with minimal memory footprint. Supports password protection, cloud storage, and Tokio runtime. |
| homepage | |
| repository | https://github.com/KSD-CO/s-zip |
| max_upload_size | |
| id | 1985772 |
| size | 425,777 |
███████╗ ███████╗██╗██████╗
██╔════╝ ╚══███╔╝██║██╔══██╗
███████╗█████╗ ███╔╝ ██║██████╔╝
╚════██║╚════╝ ███╔╝ ██║██╔═══╝
███████║ ███████╗██║██║
╚══════╝ ╚══════╝╚═╝╚═╝
s-zip is a streaming ZIP reader and writer designed for backend systems that need
to process large archives with minimal memory usage.
The focus is not on end-user tooling, but on providing a reliable ZIP building block for servers, batch jobs, and data pipelines.
Most ZIP libraries assume small files or in-memory buffers.
s-zip is built around streaming from day one.
Based on comprehensive benchmarks (see BENCHMARK_RESULTS.md):
| Metric | DEFLATE level 6 | Zstd level 3 | Improvement |
|---|---|---|---|
| Speed (1MB) | 610 MiB/s | 2.0 GiB/s | 3.3x faster ⚡ |
| File Size (1MB compressible) | 3.16 KB | 281 bytes | 11x smaller 🗜️ |
| File Size (10MB compressible) | 29.97 KB | 1.12 KB | 27x smaller 🗜️ |
| Memory Usage | 2-5 MB constant | 2-5 MB constant | Same ✓ |
| CPU Usage | Moderate | Low-Moderate | Better ✓ |
Key Benefits:
Add this to your Cargo.toml:
[dependencies]
s-zip = "0.7"
# With AES-256 encryption support
s-zip = { version = "0.7", features = ["encryption"] }
# With async support (Tokio runtime)
s-zip = { version = "0.7", features = ["async"] }
# With AWS S3 cloud storage support
s-zip = { version = "0.7", features = ["cloud-s3"] }
# With Google Cloud Storage support
s-zip = { version = "0.7", features = ["cloud-gcs"] }
# With all cloud storage providers
s-zip = { version = "0.7", features = ["cloud-all"] }
# With async + Zstd compression + encryption
s-zip = { version = "0.7", features = ["async", "async-zstd", "encryption"] }
| Feature | Description | Dependencies |
|---|---|---|
| encryption | AES-256 encryption support (NEW!) | aes, ctr, hmac, sha1, pbkdf2 |
| async | Enables async/await support with Tokio runtime | tokio, async-compression |
| async-zstd | Async + Zstd compression support | async, zstd-support |
| zstd-support | Zstd compression for sync API | zstd |
| cloud-s3 | AWS S3 + MinIO + S3-compatible services | async, aws-sdk-s3 |
| cloud-gcs | Google Cloud Storage adapter | async, google-cloud-storage |
| cloud-all | All cloud storage providers | cloud-s3, cloud-gcs |
Note: async-zstd includes both async and zstd-support features. Cloud features require async.
use s_zip::StreamingZipReader;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut reader = StreamingZipReader::open("archive.zip")?;
// List all entries
for entry in reader.entries() {
println!("{}: {} bytes", entry.name, entry.uncompressed_size);
}
// Read a specific file
let data = reader.read_entry_by_name("file.txt")?;
println!("Content: {}", String::from_utf8_lossy(&data));
// Or use streaming for large files
let mut stream = reader.read_entry_streaming_by_name("large_file.bin")?;
std::io::copy(&mut stream, &mut std::io::stdout())?;
Ok(())
}
use s_zip::StreamingZipWriter;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut writer = StreamingZipWriter::new("output.zip")?;
// Add first file
writer.start_entry("file1.txt")?;
writer.write_data(b"Hello, World!")?;
// Add second file
writer.start_entry("folder/file2.txt")?;
writer.write_data(b"Another file in a folder")?;
// Finish and write central directory
writer.finish()?;
Ok(())
}
use s_zip::StreamingZipWriter;
let mut writer = StreamingZipWriter::with_compression("output.zip", 9)?; // Max compression
// ... add files ...
writer.finish()?;
Zstd compression (requires the zstd-support feature):
use s_zip::{StreamingZipReader, StreamingZipWriter, CompressionMethod};
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create writer with Zstd compression (level 3, range 1-21)
let mut writer = StreamingZipWriter::with_zstd("output.zip", 3)?;
// Or use the generic method API
let mut writer = StreamingZipWriter::with_method(
"output.zip",
CompressionMethod::Zstd,
3 // compression level
)?;
writer.start_entry("compressed.bin")?;
writer.write_data(b"Data compressed with Zstd")?;
writer.finish()?;
// Reader automatically detects and decompresses Zstd entries
let mut reader = StreamingZipReader::open("output.zip")?;
let data = reader.read_entry_by_name("compressed.bin")?;
Ok(())
}
Note: Zstd compression provides better compression ratios than DEFLATE but may have slower decompression on some systems. The reader will automatically detect and decompress Zstd-compressed entries when the zstd-support feature is enabled.
s-zip supports WinZip-compatible AES-256 encryption to password-protect sensitive files in your ZIP archives. This feature is perfect for securing confidential data, credentials, or any sensitive information.
use s_zip::StreamingZipWriter;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut writer = StreamingZipWriter::new("encrypted.zip")?;
// Set password for encryption (requires 'encryption' feature)
writer.set_password("my_secure_password_123");
// All subsequent files will be encrypted
writer.start_entry("confidential.txt")?;
writer.write_data(b"Top secret information")?;
writer.start_entry("passwords.txt")?;
writer.write_data(b"Database credentials")?;
// Clear password to add unencrypted files
writer.clear_password();
writer.start_entry("readme.txt")?;
writer.write_data(b"Public information")?;
writer.finish()?;
Ok(())
}
You can use different passwords for different files in the same ZIP:
let mut writer = StreamingZipWriter::new("mixed.zip")?;
// Financial files with one password
writer.set_password("finance_2024");
writer.start_entry("salary_report.txt")?;
writer.write_data(b"Employee salaries...")?;
// Legal files with different password
writer.set_password("legal_secure");
writer.start_entry("contracts/agreement.pdf")?;
writer.write_data(b"Contract data...")?;
// Public files without password
writer.clear_password();
writer.start_entry("public_info.txt")?;
writer.write_data(b"Public data...")?;
writer.finish()?;
Encryption adds overhead but maintains constant memory usage:
| File Size | Overhead | Throughput | Notes |
|---|---|---|---|
| 1 KB | ~80x slower | 8-10 MiB/s | Dominated by key derivation (~950µs) |
| 100 KB | ~23x slower | 20-23 MiB/s | Stable encryption overhead |
| 1 MB+ | ~24-31x slower | 17-23 MiB/s | Network/disk I/O becomes bottleneck |
Memory usage: ✅ No impact - maintains constant 2-5 MB streaming architecture
Best for: Backend services, large files, cloud storage (where network is the bottleneck)
Considerations: may not suit real-time applications with <100ms latency requirements
📊 See ENCRYPTION_PERFORMANCE.md for detailed benchmarks
Currently, decryption is not yet implemented in the reader. This is planned for future releases. For now, you can extract encrypted ZIPs using:
7z x encrypted.zip
s-zip supports async/await with Tokio runtime, enabling non-blocking I/O for web servers and cloud applications.
✅ Use Async for:
✅ Use Sync for:
use s_zip::AsyncStreamingZipWriter;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut writer = AsyncStreamingZipWriter::new("output.zip").await?;
writer.start_entry("hello.txt").await?;
writer.write_data(b"Hello, async world!").await?;
writer.start_entry("data.txt").await?;
writer.write_data(b"Streaming with async/await").await?;
writer.finish().await?;
Ok(())
}
Perfect for HTTP responses or cloud storage:
use s_zip::AsyncStreamingZipWriter;
use std::io::Cursor;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create ZIP in memory
let buffer = Vec::new();
let cursor = Cursor::new(buffer);
let mut writer = AsyncStreamingZipWriter::from_writer(cursor);
writer.start_entry("data.json").await?;
writer.write_data(br#"{"status": "ok"}"#).await?;
// Get ZIP bytes for upload
let cursor = writer.finish().await?;
let zip_bytes = cursor.into_inner();
// Upload to S3, send as HTTP response, etc.
println!("Created {} bytes", zip_bytes.len());
Ok(())
}
Stream files directly without blocking:
use s_zip::AsyncStreamingZipWriter;
use tokio::fs::File;
use tokio::io::AsyncReadExt;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut writer = AsyncStreamingZipWriter::new("archive.zip").await?;
// Stream large file without loading into memory
writer.start_entry("large_file.bin").await?;
let mut file = File::open("source.bin").await?;
let mut buffer = vec![0u8; 8192];
loop {
let n = file.read(&mut buffer).await?;
if n == 0 { break; }
writer.write_data(&buffer[..n]).await?;
}
writer.finish().await?;
Ok(())
}
Read ZIP files asynchronously with minimal memory usage. Supports reading from local files, S3, HTTP, or any AsyncRead + AsyncSeek source.
use s_zip::AsyncStreamingZipReader;
use tokio::io::AsyncReadExt;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Open ZIP from local file
let mut reader = AsyncStreamingZipReader::open("archive.zip").await?;
// List all entries
for entry in reader.entries() {
println!("{}: {} bytes", entry.name, entry.uncompressed_size);
}
// Read a specific file into memory
let data = reader.read_entry_by_name("file.txt").await?;
println!("Content: {}", String::from_utf8_lossy(&data));
// Stream large files without loading into memory
let mut stream = reader.read_entry_streaming_by_name("large_file.bin").await?;
let mut buffer = vec![0u8; 8192];
loop {
let n = stream.read(&mut buffer).await?;
if n == 0 { break; }
// Process chunk...
}
Ok(())
}
Read ZIP files directly from S3 without downloading to disk:
use s_zip::{GenericAsyncZipReader, cloud::S3ZipReader};
use aws_sdk_s3::Client;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Configure AWS SDK
let config = aws_config::load_from_env().await;
let s3_client = Client::new(&config);
// Create S3 reader - streams directly from S3 using byte-range requests
let s3_reader = S3ZipReader::new(
s3_client,
"my-bucket",
"archives/data.zip"
).await?;
// Wrap with GenericAsyncZipReader
let mut reader = GenericAsyncZipReader::new(s3_reader).await?;
// List entries
for entry in reader.entries() {
println!("📄 {}: {} bytes", entry.name, entry.uncompressed_size);
}
// Read specific file from S3 ZIP
let data = reader.read_entry_by_name("report.csv").await?;
println!("Downloaded {} bytes from S3 ZIP", data.len());
Ok(())
}
Key Benefits:
Works with any AsyncRead + AsyncSeek source (HTTP, in-memory, custom).
Performance Note: For small files (<50MB), downloading the entire ZIP first is faster due to network latency. For large archives, or when reading only a few files, streaming from S3 provides significant memory savings.
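For the download-first pattern mentioned in the note above, here is a minimal sketch (assuming the cloud-s3/async features and the aws-sdk-s3 get_object API; bucket and key names are placeholders):
use s_zip::GenericAsyncZipReader;
use aws_sdk_s3::Client;
use std::io::Cursor;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);
    // Small archive: fetch the whole object in one request...
    let object = client
        .get_object()
        .bucket("my-bucket")
        .key("archives/small.zip")
        .send()
        .await?;
    let zip_bytes = object.body.collect().await?.into_bytes();
    // ...then read entries entirely from memory, avoiding per-entry range requests
    let mut reader = GenericAsyncZipReader::new(Cursor::new(zip_bytes.to_vec())).await?;
    let data = reader.read_entry_by_name("report.csv").await?;
    println!("Read {} bytes", data.len());
    Ok(())
}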
The generic async reader works with any AsyncRead + AsyncSeek source:
use s_zip::GenericAsyncZipReader;
use std::io::Cursor;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Example: In-memory ZIP (could be from HTTP response)
let zip_bytes = download_zip_from_http().await?;
let cursor = Cursor::new(zip_bytes);
// Read ZIP from in-memory source
let mut reader = GenericAsyncZipReader::new(cursor).await?;
for entry in reader.entries() {
println!("📦 {}", entry.name);
}
Ok(())
}
Create multiple ZIPs simultaneously (5x faster than sequential):
use s_zip::AsyncStreamingZipWriter;
use tokio::task::JoinSet;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut tasks = JoinSet::new();
// Create 10 ZIPs concurrently
for i in 0..10 {
tasks.spawn(async move {
let path = format!("output_{}.zip", i);
let mut writer = AsyncStreamingZipWriter::new(&path).await?;
writer.start_entry("data.txt").await?;
writer.write_data(b"Concurrent creation!").await?;
writer.finish().await?;
Ok::<_, s_zip::SZipError>(())
});
}
// Wait for all to complete
while let Some(result) = tasks.join_next().await {
result.unwrap()?;
}
println!("Created 10 ZIPs concurrently!");
Ok(())
}
| Scenario | Sync | Async | Advantage |
|---|---|---|---|
| Local disk (5MB) | 6.7ms | 7.1ms | ≈ Same (~6% overhead) |
| In-memory (100KB) | 146µs | 136µs | Async 7% faster |
| Network upload (5×50KB) | 1053ms | 211ms | Async 5x faster 🚀 |
| 10 concurrent operations | 70ms | 10-15ms | Async 4-7x faster 🚀 |
See PERFORMANCE.md for detailed benchmarks.
Stream ZIP files directly to/from AWS S3 or Google Cloud Storage without writing to local disk. Perfect for serverless, containers, and cloud-native applications.
use s_zip::{AsyncStreamingZipWriter, cloud::S3ZipWriter};
use aws_sdk_s3::Client;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Configure AWS SDK
let config = aws_config::load_from_env().await;
let s3_client = Client::new(&config);
// Create S3 writer - streams directly with multipart upload
let writer = S3ZipWriter::new(
s3_client,
"my-bucket",
"exports/archive.zip"
).await?;
let mut zip = AsyncStreamingZipWriter::from_writer(writer);
// Add files - data streams directly to S3
zip.start_entry("report.csv").await?;
zip.write_data(b"id,name,value\n1,Alice,100\n").await?;
zip.start_entry("data.json").await?;
zip.write_data(br#"{"status": "success"}"#).await?;
// Finish - completes S3 multipart upload
zip.finish().await?;
println!("✅ ZIP streamed to s3://my-bucket/exports/archive.zip");
Ok(())
}
Key Benefits:
Read ZIP files directly from S3 without downloading:
use s_zip::{GenericAsyncZipReader, cloud::S3ZipReader};
use aws_sdk_s3::Client;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let config = aws_config::load_from_env().await;
let s3_client = Client::new(&config);
// Read directly from S3 using byte-range requests
let s3_reader = S3ZipReader::new(s3_client, "bucket", "archive.zip").await?;
let mut reader = GenericAsyncZipReader::new(s3_reader).await?;
// Extract specific files without downloading entire ZIP
let data = reader.read_entry_by_name("report.csv").await?;
println!("Read {} bytes from S3", data.len());
Ok(())
}
Key Benefits:
use s_zip::{AsyncStreamingZipWriter, cloud::GCSZipWriter};
use google_cloud_storage::client::Client;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Configure GCS client
let gcs_client = Client::default().await?;
// Create GCS writer - streams with resumable upload
let writer = GCSZipWriter::new(
gcs_client,
"my-bucket",
"exports/archive.zip"
).await?;
let mut zip = AsyncStreamingZipWriter::from_writer(writer);
zip.start_entry("log.txt").await?;
zip.write_data(b"Application logs...").await?;
zip.finish().await?;
println!("✅ ZIP streamed to gs://my-bucket/exports/archive.zip");
Ok(())
}
Key Benefits:
Real-world comparison on AWS S3 (20MB data):
| Method | Time | Memory | Description |
|---|---|---|---|
| Sync (in-memory + upload) | 368ms | ~20MB | Create ZIP in RAM, then upload |
| Async (direct streaming) | 340ms | ~10MB | Stream directly to S3 |
| Speedup | 1.08x faster | 50% less memory | ✅ Better for large files |
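For reference, a sketch of the "Sync (in-memory + upload)" baseline from the table above, assuming the aws-sdk-s3 put_object API (bucket and key names are placeholders):
use s_zip::StreamingZipWriter;
use aws_sdk_s3::{primitives::ByteStream, Client};
use std::io::Cursor;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build the entire ZIP in RAM with the sync writer (this is what costs ~20MB)
    let mut writer = StreamingZipWriter::from_writer(Cursor::new(Vec::new()))?;
    writer.start_entry("report.csv")?;
    writer.write_data(b"id,name,value\n1,Alice,100\n")?;
    let zip_bytes = writer.finish()?.into_inner();
    // Then upload the finished buffer in a single put_object call
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);
    client
        .put_object()
        .bucket("my-bucket")
        .key("exports/archive.zip")
        .body(ByteStream::from(zip_bytes))
        .send()
        .await?;
    Ok(())
}
Compared with the streaming S3ZipWriter above, this baseline holds the full archive in memory before uploading, which is why the table shows roughly double the memory for the same 20MB of data.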
For 100MB+ files:
When to use cloud streaming:
Stream ZIPs directly to MinIO, Cloudflare R2, DigitalOcean Spaces, Backblaze B2, and other S3-compatible services:
use s_zip::{AsyncStreamingZipWriter, cloud::S3ZipWriter};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Write to MinIO
let writer = S3ZipWriter::builder()
.endpoint_url("http://localhost:9000")
.region("us-east-1")
.bucket("my-bucket")
.key("archive.zip")
.force_path_style(true) // Required for MinIO
.build()
.await?;
let mut zip = AsyncStreamingZipWriter::from_writer(writer);
zip.start_entry("data.txt").await?;
zip.write_data(b"Hello MinIO!").await?;
zip.finish().await?;
println!("✅ ZIP streamed to MinIO");
Ok(())
}
Read from MinIO:
use s_zip::{GenericAsyncZipReader, cloud::S3ZipReader};
let reader = S3ZipReader::builder()
.endpoint_url("http://localhost:9000")
.bucket("my-bucket")
.key("archive.zip")
.build()
.await?;
let mut zip = GenericAsyncZipReader::new(reader).await?;
let data = zip.read_entry_by_name("data.txt").await?;
Supported S3-Compatible Services:
| Service | Endpoint Example |
|---|---|
| MinIO | http://localhost:9000 |
| Cloudflare R2 | https://<account_id>.r2.cloudflarestorage.com |
| DigitalOcean Spaces | https://<region>.digitaloceanspaces.com |
| Backblaze B2 | https://s3.<region>.backblazeb2.com |
| Linode Object Storage | https://<region>.linodeobjects.com |
use s_zip::cloud::S3ZipWriter;
// Custom part size for large files
let writer = S3ZipWriter::builder()
.client(s3_client)
.bucket("my-bucket")
.key("large-archive.zip")
.part_size(100 * 1024 * 1024) // 100MB parts for huge files
.build()
.await?;
// Or with custom endpoint for S3-compatible services
let writer = S3ZipWriter::builder()
.endpoint_url("https://s3.us-west-001.backblazeb2.com")
.region("us-west-001")
.bucket("my-bucket")
.key("archive.zip")
.build()
.await?;
See examples:
s-zip supports writing to any type that implements Write + Seek, not just files. This enables in-memory ZIP creation, streaming over network connections, and other custom sinks.
use s_zip::StreamingZipWriter;
use std::io::Cursor;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Write ZIP to in-memory buffer
let buffer = Vec::new();
let cursor = Cursor::new(buffer);
let mut writer = StreamingZipWriter::from_writer(cursor)?;
writer.start_entry("data.txt")?;
writer.write_data(b"In-memory ZIP content")?;
// finish() returns the writer, allowing you to extract the data
let cursor = writer.finish()?;
let zip_bytes = cursor.into_inner();
// Now you can save to file, send over network, etc.
std::fs::write("output.zip", &zip_bytes)?;
println!("Created ZIP with {} bytes", zip_bytes.len());
Ok(())
}
⚠️ IMPORTANT - Memory Usage by Writer Type:
| Writer Type | Memory Usage | Best For |
|---|---|---|
| File (StreamingZipWriter::new(path)) | ✅ ~2-5 MB constant | Large files, production use |
| Network streams (TCP, pipes) | ✅ ~2-5 MB constant | Streaming over network |
| In-memory Vec<u8>/Cursor (from_writer()) | ⚠️ ENTIRE ZIP IN RAM | Small archives only (<100MB) |
⚠️ Critical Warning for Vec<u8> writers: When you use Vec<u8> or Cursor<Vec<u8>> as the writer, the entire compressed ZIP file will be stored in memory. While the compressor still uses only ~2-5MB for its internal buffer, the final output accumulates in the Vec. Only use this for small archives or when you have sufficient RAM.
Recommended approach for large files:
Use StreamingZipWriter::new(path) to write to disk (constant ~2-5 MB memory); use Vec<u8>/Cursor only for small temporary ZIPs (<100MB).
The implementation uses a 1MB buffer threshold to periodically flush compressed data to the writer, keeping compression memory low (~2-5MB) for all writer types. However, in-memory writers like Vec<u8> will still accumulate the full output.
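Following that recommendation, a minimal sketch of streaming a large source file into an on-disk ZIP in fixed-size chunks with the sync API (file names are placeholders):
use s_zip::StreamingZipWriter;
use std::fs::File;
use std::io::Read;
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Writing to a path keeps memory roughly constant regardless of input size
    let mut writer = StreamingZipWriter::new("backup.zip")?;
    writer.start_entry("large_file.bin")?;
    // Copy the source in 8 KB chunks so neither side is fully buffered in RAM
    let mut source = File::open("source.bin")?;
    let mut buffer = vec![0u8; 8192];
    loop {
        let n = source.read(&mut buffer)?;
        if n == 0 { break; }
        writer.write_data(&buffer[..n])?;
    }
    writer.finish()?;
    Ok(())
}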
See examples/arbitrary_writer.rs for more examples.
| Method | Description | Default | Feature Flag | Best For |
|---|---|---|---|---|
| DEFLATE (8) | Standard ZIP compression | ✓ | Always available | Text, source code, JSON, XML, CSV, XLSX |
| Stored (0) | No compression | - | Always available | Already compressed files (JPG, PNG, MP4, PDF) |
| Zstd (93) | Modern compression algorithm | - | zstd-support | All text/data files, logs, databases |
Use DEFLATE (default) when you need maximum compatibility with older ZIP tools.
Use Zstd when you control the consumers and want better speed and compression for text, logs, and other data files.
Use Stored (no compression) when files are already compressed (JPG, PNG, MP4, PDF).
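A sketch of choosing a writer per content type, assuming the zstd-support feature and assuming the CompressionMethod enum exposes a Stored variant alongside the Zstd variant shown earlier (the Stored variant name is an assumption; check the crate docs):
use s_zip::{StreamingZipWriter, CompressionMethod};
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Text-heavy data: Zstd gives the best ratio and speed
    let mut logs = StreamingZipWriter::with_method("logs.zip", CompressionMethod::Zstd, 3)?;
    logs.start_entry("app.log")?;
    logs.write_data(b"2026-01-16T09:00:00Z INFO service started\n")?;
    logs.finish()?;
    // Already-compressed media: store without recompressing
    // NOTE: CompressionMethod::Stored is assumed from the methods table above
    let mut media = StreamingZipWriter::with_method("photos.zip", CompressionMethod::Stored, 0)?;
    media.start_entry("holiday.jpg")?;
    media.write_data(&[0xFF, 0xD8, 0xFF, 0xE0])?; // placeholder JPEG header bytes
    media.finish()?;
    Ok(())
}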
s-zip includes comprehensive benchmarks to compare compression methods:
# Run all benchmarks with Zstd support
./run_benchmarks.sh
# Or run individual benchmark suites
cargo bench --features zstd-support --bench compression_bench
cargo bench --features zstd-support --bench read_bench
Benchmarks measure:
Results are saved to target/criterion/ with HTML reports showing detailed statistics, comparisons, and performance graphs.
| Method | Compressed Size | Ratio | Speed |
|---|---|---|---|
| DEFLATE level 6 | 3.16 KB | 0.31% | ~610 MiB/s |
| DEFLATE level 9 | 3.16 KB | 0.31% | ~494 MiB/s |
| Zstd level 3 | 281 bytes | 0.03% | ~2.0 GiB/s ⚡ |
| Zstd level 10 | 358 bytes | 0.03% | ~370 MiB/s |
Key Insights:
💡 Recommendation: Use Zstd level 3 for best performance and compression. Only use DEFLATE when compatibility with older tools is required.
📊 Full Analysis: See BENCHMARK_RESULTS.md for detailed performance data.
Zero Breaking Changes! The v0.7.0 release is fully backward compatible.
What's New:
AES-256 encryption support (encryption feature)
Migration:
[dependencies]
# Just update the version - existing code works as-is!
s-zip = "0.7"
# Or add encryption support
s-zip = { version = "0.7", features = ["encryption"] }
New APIs (Optional):
// Enable encryption for files
let mut writer = StreamingZipWriter::new("secure.zip")?;
writer.set_password("my_password");
writer.start_entry("confidential.txt")?;
writer.write_data(b"Secret data")?;
// Mix encrypted and unencrypted files
writer.clear_password();
writer.start_entry("public.txt")?;
writer.write_data(b"Public data")?;
writer.finish()?;
Zero Breaking Changes! The v0.6.0 release is fully backward compatible.
What's New:
Generic async reader (GenericAsyncZipReader<R>) that works with any AsyncRead + AsyncSeek source (S3, HTTP, in-memory, files)
Migration:
[dependencies]
# Just update the version - existing code works as-is!
s-zip = "0.7"
# Or with features
s-zip = { version = "0.7", features = ["async", "cloud-s3"] }
New APIs (Optional):
// v0.5.x - Still works!
let mut reader = AsyncStreamingZipReader::open("file.zip").await?;
// v0.6.0+ - Read from S3
let s3_reader = S3ZipReader::new(client, "bucket", "key").await?;
let mut reader = GenericAsyncZipReader::new(s3_reader).await?;
// v0.6.0+ - Read from any source
let mut reader = GenericAsyncZipReader::new(custom_reader).await?;
Zero Breaking Changes! The v0.5.0 release is fully backward compatible.
What's New:
AWS S3 streaming (cloud-s3 feature) and Google Cloud Storage streaming (cloud-gcs feature)
Migration Options:
Option 1: Keep Using Existing Code (No Changes)
[dependencies]
s-zip = "0.5" # Existing code works as-is
Your existing code continues to work exactly as before!
Option 2: Add Cloud Storage Support
[dependencies]
# AWS S3 only
s-zip = { version = "0.5", features = ["cloud-s3"] }
# Google Cloud Storage only
s-zip = { version = "0.5", features = ["cloud-gcs"] }
# Both S3 and GCS
s-zip = { version = "0.5", features = ["cloud-all"] }
API Comparison:
// Local file (v0.4.x and later)
let mut writer = AsyncStreamingZipWriter::new("output.zip").await?;
writer.start_entry("file.txt").await?;
writer.write_data(b"data").await?;
writer.finish().await?;
// AWS S3 (v0.5.0+)
let s3_writer = S3ZipWriter::new(s3_client, "bucket", "key.zip").await?;
let mut writer = AsyncStreamingZipWriter::from_writer(s3_writer);
writer.start_entry("file.txt").await?;
writer.write_data(b"data").await?;
writer.finish().await?;
All v0.3.x code is compatible with v0.7.0. Just update the version number and optionally add new features.
Check out the examples/ directory for complete working examples:
Sync Examples:
Encryption Examples:
Async Examples:
Cloud Storage Examples:
Run examples:
# Sync examples
cargo run --example basic
cargo run --example zstd_compression --features zstd-support
# Encryption examples
cargo run --example encryption_basic --features encryption
cargo run --example encryption_advanced --features encryption
# Async examples
cargo run --example async_basic --features async
cargo run --example concurrent_demo --features async
cargo run --example network_simulation --features async
# Cloud storage examples (requires AWS credentials)
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
cargo run --example cloud_s3 --features cloud-s3
cargo run --example async_vs_sync_s3 --features cloud-s3
MIT License - see LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.
Ton That Vu - @KSD-CO