| Field | Value |
|---|---|
| Crates.io | q_compress |
| lib.rs | q_compress |
| version | 0.11.7 |
| source | src |
| created_at | 2021-06-04 21:57:06.705438 |
| updated_at | 2023-07-07 23:14:14.594929 |
| description | Good compression for numerical sequences and time series |
| homepage | |
| repository | https://github.com/mwlon/pcodec |
| max_upload_size | |
| id | 406266 |
| size | 224,892 |
# q_compress
```rust
use q_compress::{auto_compress, auto_decompress, DEFAULT_COMPRESSION_LEVEL};

fn main() {
  // your data
  let mut my_ints = Vec::new();
  for i in 0..100000 {
    my_ints.push(i as i64);
  }

  // Here we let the library choose a configuration with default compression
  // level. If you know about the data you're compressing, you can compress
  // faster by creating a `CompressorConfig`.
  let bytes: Vec<u8> = auto_compress(&my_ints, DEFAULT_COMPRESSION_LEVEL);
  println!("compressed down to {} bytes", bytes.len());

  // decompress
  let recovered = auto_decompress::<i64>(&bytes).expect("failed to decompress");
  println!("got back {} ints from {} to {}", recovered.len(), recovered[0], recovered.last().unwrap());
}
```
To run something right away, try the benchmarks.
For a lower-level standalone API that allows writing/reading one chunk at a time and extracting all metadata, see the docs.rs documentation.
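As a sketch of what that chunk-at-a-time flow looks like: the `Compressor::from_config`, `header`/`chunk`/`footer`, and `drain_bytes` calls below reflect the 0.11 API as best understood; treat the exact signatures as assumptions and defer to docs.rs.

```rust
use q_compress::{Compressor, CompressorConfig};

fn main() {
  let nums: Vec<i64> = (0..100_000).collect();

  // Assumed 0.11-style standalone flow: write a header, then one or more
  // chunks, then a footer, and drain the accumulated bytes.
  let config = CompressorConfig::default().with_compression_level(6);
  let mut compressor = Compressor::<i64>::from_config(config);
  compressor.header().expect("header");
  compressor.chunk(&nums).expect("chunk"); // each call emits one chunk
  compressor.footer().expect("footer");
  let bytes = compressor.drain_bytes();
  println!("compressed {} nums into {} bytes", nums.len(), bytes.len());
}
```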
To embed or interleave q_compress in another data format, it is better to use the wrapped API and format than the standalone one. See the wrapped time series example. This allows the enclosing format to control how compressed chunks are interleaved with its own data.
See changelog.md for release history.
Small data types can be compressed efficiently by expanding them to a larger supported type: for example, compressing `u8` data as a sequence of `u16` values. The only cost of using the larger data type is a small increase in chunk metadata size.
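For instance, if your data arrives as `u8`, you can widen it to `u16` before compressing and narrow it back after decompressing. This sketch uses only the `auto_compress`/`auto_decompress` calls from the example above:

```rust
use q_compress::{auto_compress, auto_decompress, DEFAULT_COMPRESSION_LEVEL};

fn main() {
  let raw: Vec<u8> = (0u32..10_000).map(|i| (i % 256) as u8).collect();

  // Widen each u8 to u16 before compressing.
  let widened: Vec<u16> = raw.iter().map(|&b| b as u16).collect();
  let bytes = auto_compress(&widened, DEFAULT_COMPRESSION_LEVEL);

  // Narrow back to u8 after decompressing.
  let recovered: Vec<u8> = auto_decompress::<u16>(&bytes)
    .expect("failed to decompress")
    .into_iter()
    .map(|v| v as u8)
    .collect();
  assert_eq!(raw, recovered);
  println!("round-tripped {} bytes via u16", recovered.len());
}
```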
When necessary, you can implement your own data type via `q_compress::types::NumberLike` and (if the existing signed/unsigned implementations are insufficient) `q_compress::types::SignedLike` and `q_compress::types::UnsignedLike`.
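Concretely, `NumberLike` is the bound the top-level functions accept, so a custom implementation flows through the same calls as the built-in types. A minimal sketch of that (the trait's required items are omitted here; take them from docs.rs):

```rust
use q_compress::{auto_compress, DEFAULT_COMPRESSION_LEVEL};
use q_compress::types::NumberLike;

// Any T: NumberLike, built-in or user-defined, goes through the same path.
fn compress_any<T: NumberLike>(nums: &[T]) -> Vec<u8> {
  auto_compress(nums, DEFAULT_COMPRESSION_LEVEL)
}

fn main() {
  let bytes = compress_any(&[1i64, 2, 3]);
  println!("{} bytes", bytes.len());
}
```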
Recall that each chunk has a metadata section containing the count of numbers in the chunk, the size in bytes of its compressed body, and the ranges used to encode it.
Using the compressed body size, it is easy to seek through the whole file and collect a list of all the chunk metadatas. One can aggregate them to obtain the total count of numbers in the whole file and even an approximate histogram. This is typically about 100x faster than decompressing all the numbers.
See the fast seeking example.
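For a rough idea of that pass, here is a sketch assuming the 0.11 streaming `Decompressor` exposes `header`, `chunk_metadata`, and `skip_chunk_body` roughly as shown, with `n` as the chunk's number count; these names are assumptions, and the fast seeking example in the repository is authoritative.

```rust
use std::io::Write;

use q_compress::{auto_compress, Decompressor, DEFAULT_COMPRESSION_LEVEL};

fn main() {
  // Build a small standalone file to scan, using the API shown earlier.
  let nums: Vec<i64> = (0..100_000).collect();
  let bytes = auto_compress(&nums, DEFAULT_COMPRESSION_LEVEL);

  // Assumed streaming decompressor: feed it bytes, read the header, then
  // walk metadata sections while skipping every chunk body.
  let mut decompressor = Decompressor::<i64>::default();
  decompressor.write_all(&bytes).expect("buffering input");
  decompressor.header().expect("header");

  let mut total = 0;
  while let Some(meta) = decompressor.chunk_metadata().expect("metadata") {
    total += meta.n; // assumed field: count of numbers in this chunk
    // Skipping uses the compressed body size, so no numbers are decoded.
    decompressor.skip_chunk_body().expect("skip body");
  }
  println!("file contains {} numbers", total);
}
```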