primitive-archiver

Description: Primitive archiver
Homepage: https://github.com/yhdgms1/primitive-archiver
Repository: https://github.com/yhdgms1/primitive-archiver
Published: 2025-02-08
Author: Artemiy Schukin (yhdgms1)

README

Primitive Archiver

The format stores data as a repeating sequence of fields for each archived file:

  1. Filename length (2 bytes)
  2. Filename bytes — Maximum of 65,535 bytes.
  3. Content length (after compression, 4 bytes)
  4. Content bytes (after compression) — Maximum of 4,294,967,295 bytes.

This format allows multiple files to be stored sequentially, even with identical filenames.
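The four fields above can be sketched as a small encoder. This is a minimal illustration of the layout only, not the crate's actual implementation: compression is omitted, and little-endian byte order for the length fields is an assumption (the crate does not document its endianness here).

```rust
// Sketch of one record in the layout described above.
// Assumptions: no compression, little-endian length fields.
fn encode_record(name: &str, content: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    // 1. Filename length (2 bytes) — caps the name at 65,535 bytes.
    out.extend_from_slice(&(name.len() as u16).to_le_bytes());
    // 2. Filename bytes.
    out.extend_from_slice(name.as_bytes());
    // 3. Content length (4 bytes) — caps the content at 4,294,967,295 bytes.
    out.extend_from_slice(&(content.len() as u32).to_le_bytes());
    // 4. Content bytes.
    out.extend_from_slice(content);
    out
}

fn main() {
    let record = encode_record("file.txt", b"hello");
    // 2 (name len) + 8 (name) + 4 (content len) + 5 (content) = 19 bytes.
    assert_eq!(record.len(), 19);
}
```

Because every record carries its own lengths, records can simply be concatenated, which is why duplicate filenames pose no problem.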

Example

use primitive_archiver::{Archiver, Unarchiver};

#[tokio::main]
async fn main() {
    let mut archiver = Archiver::new();

    archiver.put("file.txt", Vec::from("Nothing makes sense anymore."));
    archiver.put("some bytes", vec![1, 2, 3, 4, 5]);

    archiver.end().await;

    dbg!(archiver.bytes.clone());

    let mut unarchiver = Unarchiver::new();

    unarchiver.read(&mut archiver.bytes).await;

    dbg!(unarchiver.files);
}
  • The put method (sync) adds file data to an internal buffer.
  • The end method (async) finalizes the archive by compressing and appending data to the internal BytesMut buffer.
  • The Unarchiver reads and extracts stored files asynchronously.
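To make the reading side concrete, here is a sketch of walking the sequential records back out of a buffer. It is an illustration of the format, not the crate's `Unarchiver`: decompression is omitted, little-endian length fields are assumed, and the hypothetical `decode_records` helper is not part of the crate's API.

```rust
// Sketch of decoding the repeating record layout.
// Assumptions: no compression, little-endian length fields.
fn decode_records(mut buf: &[u8]) -> Vec<(String, Vec<u8>)> {
    let mut files = Vec::new();
    // Each record: u16 name length, name bytes, u32 content length, content bytes.
    while buf.len() >= 2 {
        let name_len = u16::from_le_bytes([buf[0], buf[1]]) as usize;
        buf = &buf[2..];
        let name = String::from_utf8_lossy(&buf[..name_len]).into_owned();
        buf = &buf[name_len..];
        let content_len = u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
        buf = &buf[4..];
        files.push((name, buf[..content_len].to_vec()));
        buf = &buf[content_len..];
    }
    files
}

fn main() {
    // Build one record by hand: name "ab", content [1, 2, 3].
    let mut buf = Vec::new();
    buf.extend_from_slice(&2u16.to_le_bytes());
    buf.extend_from_slice(b"ab");
    buf.extend_from_slice(&3u32.to_le_bytes());
    buf.extend_from_slice(&[1, 2, 3]);

    let files = decode_records(&buf);
    assert_eq!(files, vec![("ab".to_string(), vec![1, 2, 3])]);
}
```

Note that duplicate filenames decode into separate entries, matching the format's guarantee above.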

Future Improvements

  • Support for additional compression algorithms
  • Returning Result instead of silently discarding files
