| Crates.io | borderless-storage |
| lib.rs | borderless-storage |
| version | 0.1.0 |
| created_at | 2025-09-22 13:03:06.563609+00 |
| updated_at | 2025-09-22 13:03:06.563609+00 |
| description | A minimal S3-style object store with pre-signed URLs, chunked uploads, and a filesystem backend (based on Axum/Tokio). |
| homepage | https://github.com/borderless-tech/borderless-storage |
| repository | https://github.com/borderless-tech/borderless-storage |
| max_upload_size | |
| id | 1849984 |
| size | 175,065 |
A tiny, production-ready object storage server written in Rust. It exposes an S3-like model (objects identified by UUIDs, pre-signed uploads/downloads), persists data on the local filesystem, and includes an automatic janitor for cleaning up failed uploads.
Platform: Uses Unix signal handling via
tokio::signal::unix, so it currently targets Linux/macOS and other Unix-like systems.
The janitor automatically removes stale .tmp files and stale chunk directories
The server shuts down gracefully on SIGINT/SIGTERM

You can use the run_dev.sh script to run the service locally in development mode. See the section on building, deploying and configuration for more advanced options.
The actions are similar to S3: before you can upload or download anything, you have to generate a pre-signed URL via the /presign endpoint.
Assuming you have the server running locally, the request looks something like this:
curl 127.0.0.1:3000/presign \
-H "authorization: Bearer secret-api-key" \
-H "content-type: application/json" \
-d '{ "action": "upload" }'
This will produce a response like this:
{
"success": true,
"action": "upload",
"blob_id": "01996168-e738-7552-9662-2041482b96c3",
"url": "http://localhost:3000/upload/01996168-e738-7552-9662-2041482b96c3?expires=1758276788&sig=BDtmjKQ2iImF5emvbyqPdivEojq60UI6gYuKDRQBSO4=",
"method": "POST",
"expires_in": 900
}
You can now use the pre-signed URL to upload your data (which will be stored under the given blob_id):
curl -X POST "http://localhost:3000/upload/01996168-e738-7552-9662-2041482b96c3?expires=1758276788&sig=BDtmjKQ2iImF5emvbyqPdivEojq60UI6gYuKDRQBSO4=" \
--data-binary @My_Fancy_File.pdf
Note: The data is not JSON-encoded or wrapped in any way; the raw bytes are streamed to the storage server in the request body. Internally, the HTTP stream is written directly to disk, without copying the entire content of the file into RAM. This keeps uploads fast and memory-efficient, even for large files.
The upload response returns the number of bytes written and the blob-id:
{
"success": true,
"message": "uploaded blob",
"blob_id": "01996168-e738-7552-9662-2041482b96c3",
"bytes_written": 714300
}
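The two steps can also be scripted end to end: request a pre-signed URL, pull the URL and blob-id out of the JSON, and upload the file. This is only a sketch; it assumes jq is installed and reuses the local address and API key from the examples above:
#!/usr/bin/env bash
set -euo pipefail

# 1) Request a pre-signed upload URL
PRESIGN=$(curl -s 127.0.0.1:3000/presign \
  -H "authorization: Bearer secret-api-key" \
  -H "content-type: application/json" \
  -d '{ "action": "upload" }')

# 2) Extract the URL and blob-id from the JSON response
URL=$(echo "$PRESIGN" | jq -r .url)
BLOB_ID=$(echo "$PRESIGN" | jq -r .blob_id)

# 3) Upload the raw file bytes to the pre-signed URL
curl -X POST "$URL" --data-binary @My_Fancy_File.pdf
echo "uploaded blob: $BLOB_ID"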
To retrieve the data, you have to pre-sign a download URL, and then you can download the file:
curl 127.0.0.1:3000/presign \
-H "authorization: Bearer secret-api-key" \
-H "content-type: application/json" \
-d '{ "action": "download", "blob_id": "01996168-e738-7552-9662-2041482b96c3" }'
# The response looks identical to the upload response - the most important part is the pre-signed URL, which you need for the download:
curl "http://localhost:3000/files/01996168-e738-7552-9662-2041482b96c3?expires=1758277521&sig=QjRyPQUAQ9QwtKGiKg_4oUwK3QiuL3_X13UXiKs86W8=" -o My_Fancy_File.pdf
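The download can be scripted the same way. Again a sketch, assuming jq, the local dev server, and the API key from above:
BLOB_ID="01996168-e738-7552-9662-2041482b96c3"

# Pre-sign a download URL for the blob
DL=$(curl -s 127.0.0.1:3000/presign \
  -H "authorization: Bearer secret-api-key" \
  -H "content-type: application/json" \
  -d "{ \"action\": \"download\", \"blob_id\": \"$BLOB_ID\" }")

# Fetch the file via the pre-signed URL
curl "$(echo "$DL" | jq -r .url)" -o My_Fancy_File.pdf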
If you want to upload very large files, or upload over a very unstable connection (like a mobile device), you can use the chunked upload. This lets you upload your file piece by piece, while the server merges all chunks into a single file when you are done.
This is done via the same upload endpoint, but with special request headers that indicate the upload type and chunk index:
curl -X POST "http://localhost:3000/upload/01996168-e738-7552-9662-2041482b96c3?expires=1758276788&sig=BDtmjKQ2iImF5emvbyqPdivEojq60UI6gYuKDRQBSO4=" \
-H "x-upload-type: chunked" \
-H "x-chunk-index: 1" \
-H "x-chunk-total: 3" \
--data-binary @File-Chunk_1_3
After all chunks are uploaded, a final request tells the server to perform the merge:
curl -X POST "http://localhost:3000/upload/01996168-e738-7552-9662-2041482b96c3?expires=1758276788&sig=BDtmjKQ2iImF5emvbyqPdivEojq60UI6gYuKDRQBSO4=" \
-H "x-upload-type: chunked" \
-H "x-chunk-merge: true" \
-H "x-chunk-total: 3"
You can then download the file like normal.
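To see the whole chunked flow end to end, it can be scripted as well. This is only a sketch: it assumes GNU split is available, that $URL holds the pre-signed upload URL from above, and it reuses the chunk headers from the previous examples:
# Split the file into three chunks (chunk_aa, chunk_ab, chunk_ac)
split -n 3 My_Fancy_File.pdf chunk_

# Upload each chunk with its index (1-based, as in the example above)
i=1
for chunk in chunk_aa chunk_ab chunk_ac; do
  curl -X POST "$URL" \
    -H "x-upload-type: chunked" \
    -H "x-chunk-index: $i" \
    -H "x-chunk-total: 3" \
    --data-binary @"$chunk"
  i=$((i + 1))
done

# Tell the server to merge the chunks into a single blob
curl -X POST "$URL" \
  -H "x-upload-type: chunked" \
  -H "x-chunk-merge: true" \
  -H "x-chunk-total: 3"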
You have several options for building and deploying this project. We use nix as our build system.
To get a development shell with all required dependencies:
nix-shell
# or (if you have flakes enabled)
nix develop
To build the application natively with nix (requires flakes):
nix build .#borderless-storage
You can also build a minimal docker image based on nix (requires flakes to be enabled):
nix build .#docker
# This creates a ./result symlink, which you can use to load the image into docker
docker load < result
The service is exposed on port 8080 inside the docker container. You can run it via docker like this:
docker run --rm -p 8080:8080 \
-e DOMAIN="http://localhost:8080" \
-e PRESIGN_API_KEY="secret-api-key" \
-e PRESIGN_HMAC_SECRET="your-very-long-and-secret-hmac-secret" \
-v "$PWD/data:/data" \
borderless/borderless-storage:0.1.0
Note: You don't have to specify IP_ADDR and DATA_DIR, as they are fixed inside the container.
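If you want to tune more settings, the remaining options from the configuration table below can be passed as additional environment variables. A sketch, picking two of the optional values (the origin list and TTL are arbitrary example values):
docker run --rm -p 8080:8080 \
  -e DOMAIN="http://localhost:8080" \
  -e PRESIGN_API_KEY="secret-api-key" \
  -e PRESIGN_HMAC_SECRET="your-very-long-and-secret-hmac-secret" \
  -e CORS_ORIGINS="https://app.example.com" \
  -e TTL_ORPHAN_SECS="3600" \
  -v "$PWD/data:/data" \
  borderless/borderless-storage:0.1.0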
You can build this project manually like any Rust project. You also need a writable data directory (e.g. /var/lib/storage).
# 1) Build
cargo build --release
# 2) Prepare data dir
sudo mkdir -p /var/lib/storage
sudo chown "$USER" /var/lib/storage
# 3) Run (choose one of the config methods)
./target/release/borderless-storage --ip-addr 0.0.0.0:8080 --data-dir /var/lib/storage --domain https://storage.example.com
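If you prefer a config file over flags, the same settings can be read from a TOML file passed via --config. This is only a sketch with placeholder values; the key names follow the configuration table below:
# Write a minimal config (placeholder values, adjust for your setup)
cat > config.toml <<'EOF'
ip_addr = "0.0.0.0:8080"
data_dir = "/var/lib/storage"
domain = "https://storage.example.com"
presign_api_key = "secret-api-key"
EOF

./target/release/borderless-storage --config config.toml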
You can configure borderless-storage via (1) a config file, (2) CLI flags, or (3) environment variables. The config file is passed via --config <file> (TOML); CLI flags and environment variables cover the same settings (e.g. an ip_addr of 0.0.0.0:8080 or a domain like https://storage.example.com). If a value is set in more than one place, the sources are applied in a fixed precedence order; see the configuration examples for details.
| Key | Env var | Default | Notes |
|---|---|---|---|
| ip_addr | IP_ADDR | — (required) | Must parse as socket address |
| data_dir | DATA_DIR | — (required) | Directory must exist & be writable |
| domain | DOMAIN | — (required) | Parsed as http::Uri |
| presign_api_key | PRESIGN_API_KEY | — (required) | Use a secure API key in production |
| presign_hmac_secret | PRESIGN_HMAC_SECRET | generated random secret | Use a secure secret in production |
| cors_origins | CORS_ORIGINS | all origins ('*') | Comma-separated list of origins |
| ttl_orphan_secs | TTL_ORPHAN_SECS | 43200 (12h) | Orphan TTL for temp files/chunks |
| max_data_rq_size | MAX_DATA_RQ_SIZE | 4 * 1024^3 (4 GiB) | Hard cap for data API requests |
| max_presign_rq_size | MAX_PRESIGN_RQ_SIZE | 100 * 1024 (100 KiB) | Hard cap for pre-sign endpoints |
| rq_timeout_secs | RQ_TIMEOUT_SECS | 30 seconds | Per-request timeout |
The server validates the data directory is writable by creating and removing a small probe file.
See configuration examples for more information.
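For example, everything can also be supplied through environment variables, which is handy for containers. A sketch with placeholder values; PRESIGN_HMAC_SECRET is omitted here, so a random secret is generated:
# Required settings (placeholder values)
export IP_ADDR="0.0.0.0:8080"
export DATA_DIR="/var/lib/storage"
export DOMAIN="https://storage.example.com"
export PRESIGN_API_KEY="secret-api-key"

# Optional settings, shown with their defaults
export TTL_ORPHAN_SECS="43200"
export RQ_TIMEOUT_SECS="30"

./target/release/borderless-storage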
Default log level: INFO
Use --verbose for DEBUG level with extra details during cleanup and chunk checks
Signature comparison uses subtle::ConstantTimeEq to mitigate timing attacks
Uploads are first written to *.tmp, then renamed to the final path
check_chunks ensures all chunk_{i}_{total} exist before the merge
rq_timeout_secs and the request size caps (max_*) protect server resources
Set domain to an HTTPS origin in production

Issues and PRs are welcome! Please open an Issue if you encounter a bug, or if you have an idea how we could make borderless-storage even better.
The project is published under the MIT or Apache license.
Thanks to the Rust and Tokio communities for fantastic tooling and libraries.
The project is mainly built on top of axum and tower, which are fantastic projects for building high-performance web applications.
If you build something cool with borderless-storage, let us know via an issue. ✨