| Crates.io | ic-file-uploader |
| lib.rs | ic-file-uploader |
| version | 0.1.4 |
| created_at | 2024-07-12 02:56:27.29704+00 |
| updated_at | 2025-07-07 15:20:29.882071+00 |
| description | A utility for uploading files larger than 2MB to Internet Computer canisters. |
| homepage | https://github.com/modclub-app/ic-file-uploader |
| repository | https://github.com/modclub-app/ic-file-uploader |
| max_upload_size | |
| id | 1300204 |
| size | 69,416 |
ic-file-uploader is a Rust crate for efficiently uploading files larger than 2MB to the Internet Computer. It breaks large files into chunks that fit within the IC's message size limit and passes each chunk to an update call on the target canister, which writes the chunks back into a file.
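The chunking step can be sketched in a few lines of Rust. The 2MB constant and the `chunk_bytes` helper below are illustrative, not the crate's actual internals:

```rust
// Chunks must fit under the IC's ~2MB message size limit.
// (Illustrative constant; the crate's real chunk size may differ.)
const CHUNK_SIZE: usize = 2 * 1024 * 1024;

/// Split a file's bytes into slices no larger than CHUNK_SIZE.
fn chunk_bytes(bytes: &[u8]) -> Vec<&[u8]> {
    bytes.chunks(CHUNK_SIZE).collect()
}

fn main() {
    let data = vec![0u8; 5 * 1024 * 1024]; // stand-in for a 5MB file
    let chunks = chunk_bytes(&data);
    assert_eq!(chunks.len(), 3); // 2MB + 2MB + 1MB
}
```

Each slice is then sent in its own update call, which is what keeps every message under the limit.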
Install from crates.io:

```shell
cargo install ic-file-uploader
```

Or build from source:

```shell
git clone <repository-url>
cd ic-file-uploader
cargo install --path .
```
```shell
# Basic (sequential) upload
ic-file-uploader <canister_name> <method_name> <file_path>

# Parallel upload with up to 4 concurrent chunk uploads
ic-file-uploader <canister_name> <method_name> <file_path> --parallel --max-concurrent 4

# Resume from chunk 10 with automatic retries
ic-file-uploader <canister_name> <method_name> <file_path> --chunk-offset 10 --autoresume

# Upload to IC mainnet instead of the local network
ic-file-uploader <canister_name> <method_name> <file_path> --network ic

# Retry only the chunk IDs listed in failed_chunks.txt
ic-file-uploader <canister_name> <method_name> <file_path> --parallel --retry-chunks-file failed_chunks.txt
```
- `--parallel`: Enable parallel upload mode for better performance
- `--max-concurrent <N>`: Maximum number of concurrent uploads (default: 4)
- `--target-rate <RATE>`: Target upload rate in MiB/s (default: 4.0)
- `--chunk-offset <N>`: Start uploading from chunk N (for resume)
- `--autoresume`: Enable automatic resume with retry attempts
- `--max-retries <N>`: Maximum retry attempts per chunk (default: 3)
- `--network <NETWORK>`: Specify dfx network (local, ic, etc.)
- `--retry-chunks-file <FILE>`: Retry only specific chunk IDs from file

Your canister needs to implement methods that accept chunked data. For parallel uploads, the method should accept:
```candid
// For parallel uploads
append_parallel_chunk : (nat32, blob) -> ();

// For sequential uploads
append_chunk : (blob) -> ();
```
Example Rust canister implementation:
```rust
use std::cell::RefCell;
use std::collections::HashMap;

thread_local! {
    // Parallel chunks are keyed by id so they can arrive out of order.
    static CHUNKS: RefCell<HashMap<u32, Vec<u8>>> = RefCell::new(HashMap::new());
    // Buffer for sequential uploads, appended in arrival order.
    static BUFFER: RefCell<Vec<u8>> = RefCell::new(Vec::new());
}

#[ic_cdk::update]
fn append_parallel_chunk(chunk_id: u32, data: Vec<u8>) {
    CHUNKS.with(|chunks| {
        chunks.borrow_mut().insert(chunk_id, data);
    });
}

#[ic_cdk::update]
fn append_chunk(data: Vec<u8>) {
    BUFFER.with(|buf| buf.borrow_mut().extend(data));
}
```
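Because parallel chunks can arrive out of order, the canister typically reassembles them by chunk id once the upload finishes. A minimal sketch of that step; the `assemble` helper is hypothetical and not part of the interface the uploader requires:

```rust
use std::collections::HashMap;

/// Concatenate uploaded chunks in ascending chunk-id order.
/// Core of a hypothetical `commit_upload`-style method.
fn assemble(chunks: &HashMap<u32, Vec<u8>>) -> Vec<u8> {
    let mut ids: Vec<&u32> = chunks.keys().collect();
    ids.sort();
    let mut out = Vec::new();
    for id in ids {
        out.extend_from_slice(&chunks[id]);
    }
    out
}

fn main() {
    let mut chunks = HashMap::new();
    chunks.insert(1u32, b"world".to_vec());
    chunks.insert(0u32, b"hello ".to_vec());
    assert_eq!(assemble(&chunks), b"hello world".to_vec());
}
```

Sorting by id rather than relying on arrival order is what makes the parallel mode safe.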
```shell
# Upload a 50MB machine learning model with parallel chunks
ic-file-uploader my_canister store_model ./large_model.safetensors --parallel --max-concurrent 6

# If upload fails at chunk 15, resume from there
ic-file-uploader my_canister store_data ./big_file.bin --chunk-offset 15 --autoresume

# Upload to IC mainnet with conservative rate limiting
ic-file-uploader my_canister store_file ./data.zip --parallel --target-rate 2.0 --network ic
```
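Assuming a fixed 2MB chunk size (illustrative; the crate's actual chunk size may differ), the point in the file where a resumed upload continues maps directly to the `--chunk-offset` value:

```rust
// Illustrative chunk size; not necessarily what ic-file-uploader uses.
const CHUNK_SIZE: u64 = 2 * 1024 * 1024;

/// Byte position in the source file where an upload resumed with
/// `--chunk-offset <chunk_offset>` begins.
fn resume_byte_offset(chunk_offset: u64) -> u64 {
    chunk_offset * CHUNK_SIZE
}

fn main() {
    // Resuming at chunk 15 skips the first 30 MiB of the file.
    assert_eq!(resume_byte_offset(15), 30 * 1024 * 1024);
}
```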
Tips:
- Use `--parallel` for files larger than 10MB
- Tune `--max-concurrent` based on your network and canister capacity
- Adjust `--target-rate` to avoid overwhelming the canister
- Enable `--autoresume` for unreliable network connections

If uploads fail repeatedly:
- Reduce `--max-concurrent` to 1 or 2
- Lower `--target-rate`
- Pass `--chunk-offset` with the exact chunk number where the upload failed
- Enable `--autoresume` for automatic retry logic

Requires the dfx command-line tool to be installed and configured.

All original work is licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)

at your option.