packbits-rle

Crates.io: packbits-rle
lib.rs: packbits-rle
Version: 0.1.1
Created: 2026-01-25
Updated: 2026-01-25
Description: Implementation of the PackBits algorithm commonly used on the classic Apple Macintosh platform
Repository: https://codeberg.org/cyco/packbits-rs
Owner: cyco
Size: 32,639

README

Packbits-rle

Packbits-rle is a Rust implementation of the PackBits algorithm commonly used on the classic Apple Macintosh platform.

The PackBits algorithm

PackBits is a lossless compression algorithm using run-length encoding. It was commonly used on the Macintosh computer, especially in graphics-related applications, but can also be found in StuffIt archive files.

The algorithm is described nicely on the PackBits Wikipedia page and in Apple's Technical Note TN1023 (kindly preserved by the Internet Archive).
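
In short, every run starts with a signed control byte n: a value of 0 to 127 means the next n + 1 bytes are copied literally, -1 to -127 means the following byte is repeated (-n) + 1 times, and -128 is a no-op. The sketch below is not this crate's code, just an illustration of those rules, and it assumes well-formed input that never ends mid-run (the crate's functions report errors instead of panicking):

// Minimal decoder sketch based on the control-byte rules above;
// NOT packbits-rle's own implementation.
fn unpack_packbits_sketch(input: &[u8]) -> Vec<u8> {
  let mut output = Vec::new();
  let mut i = 0;
  while i < input.len() {
    let n = input[i] as i8;
    i += 1;
    match n {
      // 0 to 127: copy the next n + 1 bytes literally
      0..=127 => {
        let count = n as usize + 1;
        output.extend_from_slice(&input[i..i + count]);
        i += count;
      }
      // -128: no operation
      -128 => {}
      // -1 to -127: repeat the next byte (-n) + 1 times
      _ => {
        let count = -(n as isize) as usize + 1;
        output.extend(std::iter::repeat(input[i]).take(count));
        i += 1;
      }
    }
  }
  output
}

// The canonical TN1023 sample expands to 24 bytes
assert_eq!(unpack_packbits_sketch(b"\xFE\xAA\x02\x80\x00\x2A\xFD\xAA\x03\x80\x00\x2A\x22\xF7\xAA").len(), 24);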

Overview

The crate provides high-level functions to expand PackBits data: [unpack_buf] expands from a buffer, and [unpack] expands all data coming from an [io::Read] stream.

// Using the canonical sample input from Apple Technical Note TN1023
let packbits_data = b"\xFE\xAA\x02\x80\x00\x2A\xFD\xAA\x03\x80\x00\x2A\x22\xF7\xAA";

let data = packbits_rle::unpack_buf(packbits_data)
                        .expect("Could not unpack buffer");

assert_eq!(&data, b"\xAA\xAA\xAA\x80\x00\x2A\xAA\xAA\xAA\xAA\x80\x00\x2A\x22\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xAA");
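
For data arriving from a stream, [unpack] plays the same role. The snippet below is only a sketch: the exact signature (an [io::Read] in, the unpacked bytes out) is an assumption here, not taken from the crate's documentation.

use std::io;

let packbits_data = b"\xFE\xAA\x02\x80\x00\x2A\xFD\xAA\x03\x80\x00\x2A\x22\xF7\xAA";
let mut stream = io::Cursor::new(packbits_data);

// Hypothetical call: assumes unpack accepts any io::Read and returns the
// unpacked bytes; check the crate documentation for the actual signature.
let data = packbits_rle::unpack(&mut stream)
                        .expect("Could not unpack stream");

assert_eq!(data.len(), 24);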

These functions allocate data in a [Vec] as needed and might not be suitable for inputs of large or unknown size.

To gain finer control over how much memory is used during decoding, you can employ a packbits_rle::Reader to wrap an existing [io::Read] stream and unpack data in chunks:

use std::io::{self, Read as _};

const CHUNK_SIZE: usize = 1024;

let packbits_data = b"\xFE\xAA\x02\x80\x00\x2A\xFD\xAA\x03\x80\x00\x2A\x22\xF7\xAA";
let reader = io::Cursor::new(packbits_data);

let mut reader = packbits_rle::Reader::new(reader);
let mut buffer = vec![0u8; CHUNK_SIZE];
loop {
  let len = reader.read(&mut buffer)
                  .expect("Could not read from packbits stream");

  println!("Unpacked {} bytes", len);

  if len < buffer.len() {
    break;
  }
}

Once you're done expanding the data, you can get the original [io::Read] back via the into_inner method.
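
Continuing the chunked example above, that might look as follows (a small sketch; only the method name comes from the description here):

// Recover the wrapped reader once unpacking is finished
let cursor = reader.into_inner();
// `cursor` is the io::Cursor that was passed to Reader::new above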

Data can also be expanded from [io::Read]ers without relinquishing ownership by importing the [PackBitsReaderExt] trait and calling its [PackBitsReaderExt::read_packbits] function on the reader you already have.

use std::io::{self, Read as _};

use packbits_rle::PackBitsReaderExt;

// Build an input stream specifying the size of unpacked data (0x18) in a single byte,
// followed by PackBits data and then some more unpacked bytes (0xBA)
let mixed_data = b"\x18\xFE\xAA\x02\x80\x00\x2A\xFD\xAA\x03\x80\x00\x2A\x22\xF7\xAA\xBA";
let mut reader = io::Cursor::new(mixed_data);
let mut size = [0u8];
let mut more_data = [0u8];

// Read unpacked size from reader
reader.read_exact(&mut size).unwrap();

// Unpack PackBits at current reader position until the buffer is full or the stream ends
let mut unpacked_packbit_bits = vec![0u8; size[0] as usize];
reader.read_packbits(&mut unpacked_packbit_bits).unwrap();

// Continue reading "regular" data from the reader
reader.read_exact(&mut more_data).unwrap();

assert_eq!(size[0], 24);
assert_eq!(&unpacked_packbit_bits, b"\xAA\xAA\xAA\x80\x00\x2A\xAA\xAA\xAA\xAA\x80\x00\x2A\x22\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xAA");
assert_eq!(more_data[0], 0xba);

While this approach keeps allocations in check, it does not lend itself well to precise error handling.

You can exert even finer control over the unpacking process by using the [Operation] struct. This is especially useful if PackBits-compressed data has been split across chunks and might end abruptly.

use std::io::{self, Read as _};
use packbits_rle::{Operation, Command, OperationError};

let input_chunk_1 = b"\xFE\xAA\x02\x80\x00\x2A".as_slice();
let input_chunk_2 = b"\xFD\xAA\x03\x80\x00\x2A\x22\xF7\xAA".as_slice();

let mut stream = io::Cursor::new(input_chunk_1);

let mut operation = Operation::default();
loop {
  // Produce a byte of data and advance to the next state,
  // this can read additional data from stream if required
  operation = match operation.advance(&mut stream) {
    Ok((byte, next_operation)) => {
      println!("Produced byte 0x{:02x}", byte);
      next_operation
    },
    // No value could be produced and there's an unfinished PackBits command that
    // needs more data to continue
    Err(OperationError::InsufficientInput(command)) => {
      // Provide more input data from the second chunk
      stream = io::Cursor::new(input_chunk_2);
      // Continue where we left off
      let (byte, next_operation) = command.execute(&mut stream).unwrap();
      println!("Produced byte 0x{:02x}", byte);
      next_operation
    }
    Err(OperationError::UnexpectedEof) => {
      println!("Reached end of unpacked data without any unfinished operations.");
      break;
    }
    Err(e) => panic!("An unexpected error occurred: {:?}", e)
  }
}