| Crates.io | chunked-uploader |
| lib.rs | chunked-uploader |
| version | 0.1.2 |
| created_at | 2026-01-20 04:20:38.53287+00 |
| updated_at | 2026-01-20 04:20:38.53287+00 |
| description | A resumable chunked upload server supporting large files (>10GB) with Cloudflare compatibility |
| homepage | |
| repository | |
| max_upload_size | |
| id | 2055766 |
| size | 290,717 |
A production-ready Rust HTTP server supporting resumable chunked uploads for large files (>10GB), designed for Cloudflare compatibility with 50MB chunk sizes.
Include a path in the filename (e.g., videos/2024/movie.mp4) to organize files.

┌─────────────────────────────────────────────────────────────────┐
│ Client │
└─────────────────────────────────────────────────────────────────┘
│
1. POST /upload/init (API Key)
│ filename: "videos/2024/movie.mp4"
│ Returns: file_id + JWT tokens for each part
▼
┌─────────────────────────────────────────────────────────────────┐
│ Upload Server (Rust/Axum) │
├─────────────────────────────────────────────────────────────────┤
│ 2. PUT /upload/{id}/part/{n} (JWT Token per part) │
│ - Validates token │
│ - Stores chunk │
│ - Updates SQLite │
│ │
│ 3. GET /upload/{id}/status (API Key) │
│ - Returns progress: [{part: 0, status: "uploaded"}, ...] │
│ │
│ 4. POST /upload/{id}/complete (API Key) │
│ - Assembles all parts │
│ - Returns final file path │
└─────────────────────────────────────────────────────────────────┘
│
┌───────────────┼───────────────┐
▼ ▼ ▼
┌─────────────────────┐ ┌─────────────────┐ ┌─────────────────────┐
│ Local Storage │ │ SMB/NAS │ │ S3 Storage │
│ ./uploads/ │ │ \\server\share│ │ s3://bucket/ │
└─────────────────────┘ └─────────────────┘ └─────────────────────┘
# Initialize environment (generates .env with secure random keys)
./init.sh
# Edit .env if needed (e.g., change storage path, port)
nano .env
# Build
cargo build --release
# Deploy and start service (creates LaunchAgent and loads it)
./deploy-mac.sh
# Service management
launchctl list | grep chunked-uploader # Check status
launchctl unload ~/Library/LaunchAgents/com.grace.chunked-uploader.plist # Stop
launchctl load ~/Library/LaunchAgents/com.grace.chunked-uploader.plist # Start
# View logs
tail -f chunked-uploader.stdout.log
# Deploy and start service (requires sudo)
sudo ./deploy-linux.sh
# Service management
sudo systemctl status chunked-uploader # Check status
sudo systemctl restart chunked-uploader # Restart
sudo systemctl stop chunked-uploader # Stop
sudo systemctl enable chunked-uploader # Enable on boot
# View logs
sudo journalctl -u chunked-uploader -f
# or
tail -f chunked-uploader.stdout.log
# Or run the binary directly in the foreground
./target/release/chunked-uploader
curl -X POST http://localhost:3000/upload/init \
-H "Content-Type: application/json" \
-H "X-API-Key: your-api-key" \
-d '{
"filename": "large-video.mp4",
"total_size": 10737418240,
"webhook_url": "https://your-server.com/webhook/upload-complete"
}'
With custom path (path is extracted from filename):
curl -X POST http://localhost:3000/upload/init \
-H "Content-Type: application/json" \
-H "X-API-Key: your-api-key" \
-d '{
"filename": "videos/2024/december/large-video.mp4",
"total_size": 10737418240
}'
This will store the file at videos/2024/december/{uuid}_large-video.mp4
Response:
{
"file_id": "550e8400-e29b-41d4-a716-446655440000",
"total_parts": 205,
"chunk_size": 52428800,
"parts": [
{"part": 0, "token": "eyJhbGc...", "status": "pending"},
{"part": 1, "token": "eyJhbGc...", "status": "pending"},
...
],
"expires_at": "2025-12-16T12:00:00Z"
}
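When scripting with curl, the FILE_ID and per-part tokens used in the examples below can be pulled from this response with jq (a sketch; jq is an external tool, not part of this project):

INIT=$(curl -s -X POST http://localhost:3000/upload/init \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{"filename": "large-video.mp4", "total_size": 10737418240}')

FILE_ID=$(echo "$INIT" | jq -r '.file_id')
PART_0_TOKEN=$(echo "$INIT" | jq -r '.parts[0].token')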
Upload each 50MB chunk with its corresponding JWT token (total_parts = ceil(total_size / chunk_size), e.g. ceil(10737418240 / 52428800) = 205):
# Upload part 0
curl -X PUT "http://localhost:3000/upload/${FILE_ID}/part/0" \
-H "Authorization: Bearer ${PART_0_TOKEN}" \
-H "Content-Type: application/octet-stream" \
--data-binary @chunk_0.bin
Response:
{
"upload_id": "550e8400-e29b-41d4-a716-446655440000",
"part_number": 0,
"status": "uploaded",
"checksum_sha256": "abc123...",
"uploaded_parts": 1,
"total_parts": 205
}
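If you are preparing chunks by hand for curl, the standard split utility is one way to produce them, and shasum lets you compare a local digest against the checksum_sha256 returned above (a sketch; split's default suffixes differ from the chunk_0.bin name used in the example):

# Split into 50MB (52428800-byte) pieces named chunk_aa, chunk_ab, ...
split -b 52428800 large-video.mp4 chunk_

# Compare against the checksum_sha256 field (use sha256sum on Linux)
shasum -a 256 chunk_aa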
curl -X GET "http://localhost:3000/upload/${FILE_ID}/status" \
-H "X-API-Key: your-api-key"
Response:
{
"file_id": "550e8400-e29b-41d4-a716-446655440000",
"filename": "large-video.mp4",
"total_size": 10737418240,
"total_parts": 205,
"uploaded_parts": 100,
"progress_percent": 48.78,
"parts": [
{"part": 0, "status": "uploaded", "checksum_sha256": "..."},
{"part": 1, "status": "pending", "checksum_sha256": null},
...
]
}
After all parts are uploaded:
curl -X POST "http://localhost:3000/upload/${FILE_ID}/complete" \
-H "X-API-Key: your-api-key"
Response:
{
"file_id": "550e8400-e29b-41d4-a716-446655440000",
"filename": "large-video.mp4",
"total_size": 10737418240,
"status": "complete",
"final_path": "./uploads/files/550e8400..._large-video.mp4",
"storage_backend": "local"
}
With S3 backend (and path in filename):
{
"file_id": "550e8400-e29b-41d4-a716-446655440000",
"filename": "large-video.mp4",
"total_size": 10737418240,
"status": "complete",
"final_path": "s3://my-bucket/videos/2024/december/550e8400..._large-video.mp4",
"storage_backend": "s3"
}
curl -X DELETE "http://localhost:3000/upload/${FILE_ID}" \
-H "X-API-Key: your-api-key"
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| /upload/init | POST | API Key | Initialize upload, get part tokens |
| /upload/{id}/part/{n} | PUT | JWT (per part) | Upload a single chunk |
| /upload/{id}/status | GET | API Key | Get upload progress |
| /upload/{id}/complete | POST | API Key | Assemble all parts |
| /upload/{id} | DELETE | API Key | Cancel and cleanup |
| /health | GET | None | Health check |
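The unauthenticated health endpoint is suitable for load-balancer or readiness probes (the response body is not documented here):

curl http://localhost:3000/health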
| Variable | Default | Description |
|---|---|---|
| API_KEY | required | API key for authentication |
| JWT_SECRET | required | Secret for JWT token signing |
| STORAGE_BACKEND | local | local, smb, or s3 |
| LOCAL_STORAGE_PATH | ./uploads | Path for local storage |
| TEMP_STORAGE_PATH | system temp | Local path for temporary chunk storage (fast SSD recommended). Used by S3 and SMB backends. |
| SMB_HOST | localhost | SMB server hostname or IP |
| SMB_PORT | 445 | SMB server port |
| SMB_USER | | SMB username |
| SMB_PASS | | SMB password |
| SMB_SHARE | share | SMB share name |
| SMB_PATH | | Subdirectory within the share (optional) |
| S3_ENDPOINT | AWS default | S3 endpoint URL |
| S3_BUCKET | uploads | S3 bucket name |
| S3_REGION | us-east-1 | S3 region |
| CHUNK_SIZE_MB | 50 | Chunk size in MB |
| UPLOAD_TTL_HOURS | 24 | Hours before incomplete uploads expire |
| DATABASE_PATH | ./uploads.db | SQLite database path |
| SERVER_PORT | 3000 | Server port |
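For reference, a minimal .env for a local-storage deployment could look like this (illustrative values only; generate API_KEY and JWT_SECRET with init.sh rather than committing real secrets):

# .env
API_KEY=replace-with-generated-key
JWT_SECRET=replace-with-generated-secret
STORAGE_BACKEND=local
LOCAL_STORAGE_PATH=./uploads
CHUNK_SIZE_MB=50
UPLOAD_TTL_HOURS=24
DATABASE_PATH=./uploads.db
SERVER_PORT=3000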
Resuming an interrupted upload:
1. Initialize the upload with POST /upload/init and keep the returned part tokens
2. After an interruption, call GET /upload/{id}/status to see which parts are pending vs uploaded
3. Re-upload only the pending parts using original tokens
4. Call POST /upload/{id}/complete once every part is uploaded

Python client example:

import requests
import os
API_KEY = "your-api-key"
BASE_URL = "http://localhost:3000"
FILE_PATH = "large-file.zip"
CHUNK_SIZE = 50 * 1024 * 1024 # 50MB
def upload_file(file_path, target_path=None):
"""
Upload a file to the chunked upload server.
Args:
file_path: Local path to the file
target_path: Optional remote path (e.g., "videos/2024")
"""
file_size = os.path.getsize(file_path)
filename = os.path.basename(file_path)
# Include target path in filename if specified
remote_filename = f"{target_path}/{filename}" if target_path else filename
# 1. Initialize upload
resp = requests.post(
f"{BASE_URL}/upload/init",
headers={"X-API-Key": API_KEY},
json={"filename": remote_filename, "total_size": file_size}
)
data = resp.json()
file_id = data["file_id"]
parts = data["parts"]
print(f"Upload initialized: {file_id}, {len(parts)} parts")
# 2. Upload each part
with open(file_path, "rb") as f:
for part_info in parts:
part_num = part_info["part"]
token = part_info["token"]
# Read chunk
chunk = f.read(CHUNK_SIZE)
if not chunk:
break
# Upload
resp = requests.put(
f"{BASE_URL}/upload/{file_id}/part/{part_num}",
headers={"Authorization": f"Bearer {token}"},
data=chunk
)
result = resp.json()
print(f"Part {part_num}: {result['uploaded_parts']}/{result['total_parts']}")
# 3. Complete upload
resp = requests.post(
f"{BASE_URL}/upload/{file_id}/complete",
headers={"X-API-Key": API_KEY}
)
print(f"Upload complete: {resp.json()['final_path']}")
if __name__ == "__main__":
# Simple upload (file goes to default location)
upload_file(FILE_PATH)
# Upload to specific path
upload_file(FILE_PATH, target_path="videos/2024/december")
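Building on the example above, the following sketch re-uploads only the parts the server still reports as pending, using the same endpoints. It assumes you kept the tokens from the init response as part_tokens, a dict mapping part number to token; the helper name is illustrative.

def resume_upload(file_id, file_path, part_tokens):
    """Re-upload pending parts, then complete the upload."""
    status = requests.get(
        f"{BASE_URL}/upload/{file_id}/status",
        headers={"X-API-Key": API_KEY},
    ).json()

    pending = [p["part"] for p in status["parts"] if p["status"] == "pending"]
    print(f"{len(pending)} of {status['total_parts']} parts still pending")

    with open(file_path, "rb") as f:
        for part_num in pending:
            # Each part occupies a fixed window: [part * CHUNK_SIZE, (part + 1) * CHUNK_SIZE)
            f.seek(part_num * CHUNK_SIZE)
            chunk = f.read(CHUNK_SIZE)
            requests.put(
                f"{BASE_URL}/upload/{file_id}/part/{part_num}",
                headers={"Authorization": f"Bearer {part_tokens[part_num]}"},
                data=chunk,
            )

    resp = requests.post(
        f"{BASE_URL}/upload/{file_id}/complete",
        headers={"X-API-Key": API_KEY},
    )
    print(f"Upload complete: {resp.json()['final_path']}")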
Official SDK for browser and Node.js: chunked-uploader-sdk
npm install chunked-uploader-sdk
import { ChunkedUploader } from 'chunked-uploader-sdk';
const uploader = new ChunkedUploader({
baseUrl: 'https://upload.example.com',
apiKey: 'your-api-key',
});
// Upload a file with progress tracking
const result = await uploader.uploadFile(file, {
onProgress: (event) => {
console.log(`Progress: ${event.overallProgress.toFixed(1)}%`);
},
});
if (result.success) {
console.log('Upload complete:', result.finalPath);
} else {
console.error('Upload failed:', result.error);
}
const uploader = new ChunkedUploader({
baseUrl: 'http://localhost:3000',
apiKey: 'your-api-key',
});
// File input handler
const input = document.querySelector('input[type="file"]') as HTMLInputElement;
input.addEventListener('change', async () => {
const file = input.files?.[0];
if (!file) return;
const result = await uploader.uploadFile(file, {
concurrency: 5, // Upload 5 parts simultaneously
onProgress: (event) => {
progressBar.style.width = `${event.overallProgress}%`;
statusText.textContent = `Uploading part ${event.uploadedParts}/${event.totalParts}`;
},
onPartComplete: (result) => {
if (!result.success) {
console.error(`Part ${result.partNumber} failed:`, result.error);
}
},
});
console.log(result);
});
// Store part tokens from initial upload
const tokenMap = new Map<number, string>();
initResponse.parts.forEach(p => tokenMap.set(p.part, p.token));
// Later, resume the upload
const result = await uploader.resumeUpload(uploadId, file, {
partTokens: tokenMap,
onProgress: (event) => console.log(`${event.overallProgress}%`),
});
const abortController = new AbortController();
// Cancel button
cancelButton.addEventListener('click', () => {
abortController.abort();
});
const result = await uploader.uploadFile(file, {
signal: abortController.signal,
});
if (!result.success && result.error?.message === 'Upload aborted') {
console.log('Upload was cancelled');
}
import { ChunkedUploader } from 'chunked-uploader-sdk';
import { readFile } from 'fs/promises';
const uploader = new ChunkedUploader({
baseUrl: 'http://localhost:3000',
apiKey: 'your-api-key',
concurrency: 5,
});
async function uploadFromDisk(filePath: string) {
const buffer = await readFile(filePath);
const result = await uploader.uploadFile(buffer, {
onProgress: (event) => {
process.stdout.write(`\rProgress: ${event.overallProgress.toFixed(1)}%`);
},
});
console.log('\nUpload complete:', result);
}
interface ChunkedUploaderConfig {
/** Base URL of the chunked upload server */
baseUrl: string;
/** API key for management endpoints */
apiKey: string;
/** Request timeout in milliseconds (default: 30000) */
timeout?: number;
/** Number of concurrent chunk uploads (default: 3) */
concurrency?: number;
/** Retry attempts for failed chunks (default: 3) */
retryAttempts?: number;
/** Delay between retries in milliseconds (default: 1000) */
retryDelay?: number;
/** Custom fetch implementation */
fetch?: typeof fetch;
}
| Method | Description |
|---|---|
| uploadFile(file, options?) | Upload a file with automatic chunking and parallel uploads |
| resumeUpload(uploadId, file, options?) | Resume an interrupted upload |
| initUpload(filename, totalSize, webhookUrl?) | Initialize an upload session manually |
| uploadPart(uploadId, partNumber, token, data, signal?) | Upload a single chunk |
| getStatus(uploadId) | Get upload progress and status |
| completeUpload(uploadId) | Complete an upload (assemble all parts) |
| cancelUpload(uploadId) | Cancel an upload and cleanup |
| healthCheck() | Check server health |
| Script | Description |
|---|---|
| init.sh | Generates .env file with secure random API_KEY and JWT_SECRET |
| deploy-mac.sh | Creates macOS LaunchAgent and starts service (auto-restarts on reboot) |
| deploy-linux.sh | Creates systemd service and starts it (requires sudo, auto-restarts on reboot) |
Initializes the environment configuration:
- Generates secure random API_KEY and JWT_SECRET
- Creates a .env file with default settings
- Creates the uploads directory

./init.sh
Deploys on macOS using launchd:
- Creates ~/Library/LaunchAgents/com.grace.chunked-uploader.plist
- Waits for network volumes (/Volumes/...) to mount before starting

./deploy-mac.sh                    # Deploy and start
./deploy-mac.sh --run # Run mode (used by launchd internally)
Deploys on Linux using systemd:
- Creates /etc/systemd/system/chunked-uploader.service
- Waits for network mounts (/mnt/..., /media/...) before starting

sudo ./deploy-linux.sh             # Deploy and start
./deploy-linux.sh --run # Run mode (used by systemd internally)
# Default build (local storage only)
cargo build --release
# With SMB/NAS support (pure Rust, no external dependencies)
cargo build --release --features smb
# With S3 support (requires native crypto libs)
cargo build --release --features s3
# With both SMB and S3 support
cargo build --release --features "smb s3"
# The binary will be at:
./target/release/chunked-uploader
# Run with custom config
API_KEY=xxx JWT_SECRET=yyy ./target/release/chunked-uploader
When initializing an upload, you can provide a webhook_url. When the upload completes, the server will POST a notification to that URL:
{
"event": "upload.complete",
"file_id": "550e8400-e29b-41d4-a716-446655440000",
"filename": "large-video.mp4",
"total_size": 10737418240,
"final_path": "./uploads/files/550e8400..._large-video.mp4",
"storage_backend": "local",
"completed_at": "2025-12-15T10:30:00Z"
}
The webhook is called asynchronously and does not block the complete response.
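To illustrate the consumer side, here is a minimal webhook receiver sketch using only the Python standard library. The listening port and path are assumptions and must match whatever webhook_url you registered; the field names mirror the payload shown above.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class UploadWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body the upload server POSTs on completion
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))

        if payload.get("event") == "upload.complete":
            print(f"{payload['filename']} -> {payload['final_path']} "
                  f"({payload['total_size']} bytes, {payload['storage_backend']})")

        # Return 200 so the notification is considered delivered
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Port is illustrative; expose this behind the URL you pass as webhook_url
    HTTPServer(("0.0.0.0", 8080), UploadWebhookHandler).serve_forever()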
For S3-compatible storage (AWS S3, MinIO, etc.):
# .env
STORAGE_BACKEND=s3
S3_ENDPOINT=https://play.min.io # or AWS endpoint
S3_BUCKET=my-uploads
S3_REGION=us-east-1
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
# Optional: Fast local storage for temporary chunks (recommended)
TEMP_STORAGE_PATH=/tmp/chunked-uploads
The S3 backend uses a hybrid approach for performance: incoming chunks are written to local temporary storage (TEMP_STORAGE_PATH, fast SSD recommended), and the assembled file is uploaded to S3 when the upload is completed. This keeps per-chunk writes fast and local while the final object ends up in the bucket.
# Build with S3 feature
cargo build --release --features s3
# Ensure .env has S3 credentials configured
# Start server with S3 backend
cargo run --features s3
# Run integration tests (in another terminal)
cargo test --features s3 --test s3_upload_test -- --nocapture --test-threads=1
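To exercise the S3 backend without an AWS account, one option (not part of this crate's tooling) is a local MinIO container, pointing S3_ENDPOINT at it:

# Start a throwaway MinIO server on localhost:9000
docker run --rm -p 9000:9000 \
  -e MINIO_ROOT_USER=your-access-key \
  -e MINIO_ROOT_PASSWORD=your-secret-key \
  minio/minio server /data

# Then set S3_ENDPOINT=http://localhost:9000 in .env with the same credentials
# (create the bucket named in S3_BUCKET before running the tests)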
For SMB/CIFS network storage (NAS devices, Windows shares, Samba):
# .env
STORAGE_BACKEND=smb
SMB_HOST=192.168.1.100 # NAS IP or hostname
SMB_PORT=445 # Default SMB port
SMB_USER=admin # SMB username
SMB_PASS=your-password # SMB password
SMB_SHARE=uploads # Share name on the server
SMB_PATH=videos # Optional: subdirectory within share
# Optional: Fast local storage for temporary chunks (recommended)
TEMP_STORAGE_PATH=/tmp/chunked-uploads
The SMB backend uses the same hybrid approach: chunks are written to local temporary storage (TEMP_STORAGE_PATH) during upload, and the assembled file is copied to the SMB share when the upload is completed.
# Build with SMB feature (pure Rust, no external dependencies)
cargo build --release --features smb
On macOS Sequoia (15.x) and later, apps need permission to access local network resources. When deploying with deploy-mac.sh, the service may need Local Network permission. Run the binary once in the foreground so macOS shows the permission prompt, grant access, then redeploy:

source .env && ./target/release/chunked-uploader
./deploy-mac.sh

If SMB connection fails:
# Test network connectivity
ping 192.168.1.100
# Test SMB port
nc -zv 192.168.1.100 445
# Test SMB connection (on macOS/Linux)
smbclient //192.168.1.100/share -U username
# Check server logs
tail -f chunked-uploader.stderr.log
Common issues:
- Wrong credentials (SMB_USER / SMB_PASS) or share name (SMB_SHARE)
- Port 445 blocked by a firewall or an unreachable host
- Missing Local Network permission on macOS (see above)
MIT