| Field | Value |
|---|---|
| Crates.io | avl-storage |
| lib.rs | avl-storage |
| version | 0.1.0 |
| created_at | 2025-11-23 08:33:08.530745+00 |
| updated_at | 2025-11-23 08:33:08.530745+00 |
| description | AVL Storage - S3-compatible object storage optimized for Brazil and LATAM |
| homepage | https://avila.cloud |
| repository | https://github.com/avilaops/arxis |
| max_upload_size | |
| id | 1946301 |
| size | 131,328 |
AVL (fortress) + STORAGE (engine) = AVL Storage
Where objects find permanent refuge and engines deliver at speed
🇧🇷 Latency 3-8ms in Brazil | 💰 50% cheaper than S3 | 🌐 S3-compatible API
AVL Storage is the S3-compatible object storage for the AVL Cloud Platform - built as a fortress for your files and an engine for your data.
Like Arxis provides the mathematical citadel, AVL Storage provides the object citadel:
AVL Storage follows the Arxis philosophy - solid as a fortress, fast as an engine:
```text
┌───────────────────────────────────────────┐
│  AVL Storage - Object Citadel             │
├───────────────────────────────────────────┤
│  🏛️ Object Layer                          │
│    - S3-compatible API                    │
│    - Multipart uploads                    │
│    - Versioning & metadata                │
├───────────────────────────────────────────┤
│  ⚙️ Compression Engine                    │
│    - avila-compress (LZ4/Zstd)            │
│    - Content-type detection               │
│    - Smart tier selection                 │
├───────────────────────────────────────────┤
│  🛡️ Storage Backend                       │
│    - Local filesystem (dev)               │
│    - Distributed storage (prod)           │
│    - Replication (3 copies)               │
├───────────────────────────────────────────┤
│  🚀 Transfer Engine                       │
│    - Parallel uploads/downloads           │
│    - Resumable transfers                  │
│    - CDN integration                      │
└───────────────────────────────────────────┘
```
Add to your Cargo.toml:
```toml
[dependencies]
avl-storage = "0.1"
tokio = { version = "1", features = ["full"] }
```
```sh
# Install AVL CLI
curl -sSL https://avila.cloud/install.sh | sh

# Configure credentials
avl storage configure

# Upload file
avl storage put my-bucket/file.txt local-file.txt

# Download file
avl storage get my-bucket/file.txt downloaded.txt

# List objects
avl storage ls my-bucket/
```

```sh
# Use with s3cmd
s3cmd --host=storage.avila.cloud --host-bucket='%(bucket)s.storage.avila.cloud' \
  put file.txt s3://my-bucket/

# Use with AWS CLI
aws s3 cp file.txt s3://my-bucket/ --endpoint-url=https://storage.avila.cloud
```
```rust
use avl_storage::{StorageClient, PutObjectRequest};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to AVL Storage
    let client = StorageClient::connect("https://storage.avila.cloud").await?;

    // Create bucket
    client.create_bucket("my-bucket").await?;

    // Upload object
    let data = b"Hello from AVL Storage!";
    client.put_object(PutObjectRequest {
        bucket: "my-bucket".to_string(),
        key: "hello.txt".to_string(),
        body: data.to_vec(),
        content_type: Some("text/plain".to_string()),
        metadata: Default::default(),
    }).await?;

    // Download object
    let obj = client.get_object("my-bucket", "hello.txt").await?;
    println!("Content: {}", String::from_utf8(obj.body)?);

    // List objects
    let objects = client.list_objects("my-bucket", None).await?;
    for obj in objects {
        println!("- {} ({} bytes)", obj.key, obj.size);
    }

    // Delete object
    client.delete_object("my-bucket", "hello.txt").await?;

    Ok(())
}
```
```rust
use avl_storage::{StorageClient, MultipartUpload};
use tokio::fs::File;
use tokio::io::AsyncReadExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = StorageClient::connect("https://storage.avila.cloud").await?;

    // Initiate multipart upload
    let upload = client.create_multipart_upload(
        "my-bucket",
        "large-file.bin"
    ).await?;

    // Upload parts (5 MB chunks)
    let mut file = File::open("large-local-file.bin").await?;
    let chunk_size = 5 * 1024 * 1024; // 5 MB
    let mut part_number = 1;
    let mut parts = Vec::new();

    loop {
        let mut buffer = vec![0u8; chunk_size];
        let n = file.read(&mut buffer).await?;
        if n == 0 {
            break;
        }
        buffer.truncate(n);

        let etag = client.upload_part(
            "my-bucket",
            "large-file.bin",
            &upload.upload_id,
            part_number,
            buffer,
        ).await?;

        parts.push((part_number, etag));
        part_number += 1;
    }

    // Complete upload
    client.complete_multipart_upload(
        "my-bucket",
        "large-file.bin",
        &upload.upload_id,
        parts,
    ).await?;

    println!("Upload complete!");
    Ok(())
}
```
```rust
use avl_storage::{StorageClient, PutObjectRequest, StorageClass};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = StorageClient::connect("https://storage.avila.cloud").await?;

    // Payloads to upload (read from disk for this example)
    let large_json_data = tokio::fs::read("data.json").await?;
    let archive_data = tokio::fs::read("archive.tar").await?;

    // Hot data (frequent access) - LZ4 compression
    client.put_object(PutObjectRequest {
        bucket: "my-bucket".to_string(),
        key: "hot-data.json".to_string(),
        body: large_json_data,
        storage_class: Some(StorageClass::Standard), // LZ4
        ..Default::default()
    }).await?;

    // Cold data (archival) - Zstandard compression
    client.put_object(PutObjectRequest {
        bucket: "my-bucket".to_string(),
        key: "archive.tar".to_string(),
        body: archive_data,
        storage_class: Some(StorageClass::Archive), // Zstd
        ..Default::default()
    }).await?;

    // Transparent decompression on GET
    let obj = client.get_object("my-bucket", "hot-data.json").await?;
    // obj.body is automatically decompressed!
    println!("{} bytes", obj.body.len());

    Ok(())
}
```
| Feature | AVL Storage | AWS S3 | Azure Blob |
|---|---|---|---|
| Brazil latency | 3-8ms ✅ | 50-80ms | 40-60ms |
| Storage (GB/month) | R$ 0,15 ✅ | USD 0.023 (~R$0,12) | USD 0.018 (~R$0,09) |
| Transfer (within BR) | R$ 0,05 ✅ | USD 0.09 (~R$0,45) | USD 0.08 (~R$0,40) |
| Compression | Automatic ✅ | Manual | Manual |
| Egress within services | FREE ✅ | Paid | Paid |
| S3 API compatibility | 100% ✅ | Native | Via adapter |
| Multipart uploads | ✅ | ✅ | ✅ |
| Versioning | ✅ | ✅ | ✅ |
AVL Storage is 50% cheaper for Brazilian workloads! 🇧🇷
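As a sanity check on that claim, here is a small sketch that plugs the list prices from the table above into a combined storage-plus-transfer bill. The workload size (1 TB stored, 500 GB served within Brazil) and the BRL conversions are illustrative assumptions, not official pricing guidance.

```rust
/// Illustrative monthly cost in BRL: stored GB times the per-GB price,
/// plus intra-Brazil transfer GB times the per-GB transfer price.
fn monthly_cost_brl(storage_gb: f64, transfer_gb: f64,
                    gb_price: f64, transfer_price: f64) -> f64 {
    storage_gb * gb_price + transfer_gb * transfer_price
}

fn main() {
    // Example workload: 1 TB stored, 500 GB served within Brazil.
    // Prices are the table's list values (S3 converted at ~R$0,12/GB storage,
    // ~R$0,45/GB transfer).
    let avl = monthly_cost_brl(1000.0, 500.0, 0.15, 0.05);
    let s3 = monthly_cost_brl(1000.0, 500.0, 0.12, 0.45);
    println!("AVL: R$ {avl:.2}, S3: R$ {s3:.2}");
    println!("savings: {:.0}%", (1.0 - avl / s3) * 100.0);
}
```

With these assumptions the transfer price dominates: R$ 175 vs. R$ 345 per month, roughly the 50% figure quoted above.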
```rust
// ✅ GOOD: Descriptive, DNS-compatible
"my-app-uploads"
"prod-ml-models"
"user-avatars-2024"

// ❌ BAD: Ambiguous, special characters
"bucket1"
"my_bucket"   // underscores not recommended
"UPPERCASE"   // use lowercase
```
```rust
// ✅ GOOD: Hierarchical, organized
"users/user123/profile.jpg"
"models/v2/checkpoint-1000.pt"
"logs/2024/11/23/app.log"

// ❌ BAD: Flat, no structure
"file1.jpg"
"data.bin"
```
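One way to keep keys hierarchical in practice is to build them with a small helper rather than by hand. The sketch below is a hypothetical helper (not part of the avl-storage API) that produces date-partitioned log keys in the style shown above.

```rust
/// Hypothetical helper that builds date-partitioned log keys like
/// "logs/2024/11/23/app.log". Zero-pads month and day so keys sort
/// lexicographically in chronological order.
fn log_key(year: u16, month: u8, day: u8, file: &str) -> String {
    format!("logs/{year:04}/{month:02}/{day:02}/{file}")
}

fn main() {
    println!("{}", log_key(2024, 11, 23, "app.log"));
    // → logs/2024/11/23/app.log
}
```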
```rust
// ✅ GOOD: Let AVL Storage compress
client.put_object(PutObjectRequest {
    body: uncompressed_data,
    // AVL Storage compresses automatically
    ..Default::default()
}).await?;

// ❌ BAD: Pre-compress yourself
// let compressed = manual_compress(data); // Redundant!
```
```rust
// ✅ GOOD: Use multipart for files > 100 MB
if file_size > 100 * 1024 * 1024 {
    upload_multipart(&client, bucket, key, file_path).await?;
} else {
    upload_single(&client, bucket, key, file_path).await?;
}
```
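When you do go multipart, the part size matters. The S3 protocol imposes a 5 MiB minimum part size and a 10,000-part limit per upload; assuming AVL Storage mirrors those S3 limits (check the service docs before relying on them), a part-size chooser can be sketched as:

```rust
const MIN_PART: u64 = 5 * 1024 * 1024; // S3 minimum part size (5 MiB)
const MAX_PARTS: u64 = 10_000;         // S3 maximum parts per upload

/// Pick a part size: stay at the 5 MiB minimum, growing only when the
/// file would otherwise exceed the 10,000-part limit.
fn part_size(file_size: u64) -> u64 {
    // Smallest part size that keeps the part count within the limit.
    let needed = file_size.div_ceil(MAX_PARTS);
    needed.max(MIN_PART)
}

fn main() {
    // A 100 MB file fits comfortably in 5 MiB parts...
    println!("{}", part_size(100 * 1024 * 1024));
    // ...while a 100 GiB file needs larger parts to stay under 10,000.
    println!("{}", part_size(100 * 1024 * 1024 * 1024));
}
```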
```sh
# Configure
avl storage configure
#   Access Key: your-key
#   Secret Key: your-secret
#   Region: sa-east-1 (São Paulo)

# Create bucket
avl storage mb s3://my-bucket

# Upload
avl storage put my-bucket/file.txt local-file.txt

# Upload directory (recursive)
avl storage sync ./local-dir/ s3://my-bucket/remote-dir/

# Download
avl storage get my-bucket/file.txt downloaded.txt

# List
avl storage ls s3://my-bucket/

# Delete
avl storage rm s3://my-bucket/file.txt

# Get object metadata
avl storage info s3://my-bucket/file.txt
```
```sh
# Run locally (no cloud costs!)
docker run -p 9000:9000 avilacloud/avl-storage-emulator:latest

# Update endpoint
export AVL_STORAGE_ENDPOINT=http://localhost:9000
```
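In application code you can honor that same environment variable so a build switches between the emulator and the hosted service without changes. This is a sketch, not part of the avl-storage API; the fallback URL is the production endpoint used in the examples above.

```rust
use std::env;

/// Resolve the endpoint from an optional AVL_STORAGE_ENDPOINT value,
/// falling back to the hosted service when it is unset.
fn endpoint_from(var: Option<String>) -> String {
    var.unwrap_or_else(|| "https://storage.avila.cloud".to_string())
}

fn main() {
    // Read the variable the emulator workflow exports.
    let endpoint = endpoint_from(env::var("AVL_STORAGE_ENDPOINT").ok());
    println!("connecting to {endpoint}");
}
```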
AVL Storage embodies the Arxis philosophy: solid as a fortress, fast as an engine.
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/awesome-feature`)
3. Commit your changes (`git commit -m 'Add awesome feature'`)
4. Push to the branch (`git push origin feature/awesome-feature`)
5. Open a Pull Request

Email: nicolas@avila.inc
WhatsApp: +55 17 99781-1471
GitHub: https://github.com/avilaops/arxis
Docs: https://docs.avila.cloud/storage
Dual-licensed under MIT OR Apache-2.0 - See LICENSE-MIT and LICENSE-APACHE for details.
AVL Storage - The Storage Fortress Part of the AVL Cloud Platform
🏛️ Durable as a fortress ⚙️ Fast as an engine 🇧🇷 Built for Brazil

Built with ❤️ in Rust for the Brazilian and LATAM tech community.