| Crates.io | compressed_vec |
|---|---|
| version | 0.1.0 |
| repository | https://github.com/velvia/compressed-vec |
Floating point and integer compressed vector library, SIMD-enabled for fast processing/iteration over compressed representations.
Add this to your `Cargo.toml`:

```toml
[dependencies]
compressed_vec = "0.1"
```
This is a compressed vec library, rather than a compression library. What does that mean? A compression library takes some uncompressed data and essentially provides `compress()` and `decompress()` functions. Typically you have to decompress data before you can do anything with it, resulting in extra latency and allocations.

This compressed vec library, on the other hand, allows you to iterate over and process the compressed representations directly. It is designed to balance fast iteration and SIMD processing/filtering, while compressing vectors to within 2x of the best columnar compression technologies, such as Apache Parquet, using techniques such as delta and XOR encoding. Typical applications are scans, filters, and aggregations over large numeric datasets.
Numbers are from my laptop (2.9 GHz Core i9, 6/12 cores, 12 MB L3, AVX2), running `cargo bench vector`, which benchmarks a filter-and-count-matches operation directly on encoded/compressed vectors:
| Vector type(s) | Elements/sec | Raw GB/sec |
|---|---|---|
| u32 dense (no sparsity) | 1.7 Gelems/s | 6.8 GB/s |
| u32 sparse (99% zeros) | 13.9 Gelems/s | 55.6 GB/s |
| Two u32 vectors (sparse + dense)\* | 1.3-5.2 Gelems/s | 5-20 GB/s |
| u64 dense | 955 Melems/s - 1.1 Gelems/s | 7.6-9.1 GB/s |
| f32, XOR encoded, 60% density | 985 Melems/s | 3.9 GB/s |
\* The speed of filtering two vectors (using a `MultiVectorFilter`) depends on the order of the vectors; it is faster to filter the sparse vector first.
To create an f32 compressed vector:

```rust
use compressed_vec::VectorF32XorAppender;

let mut appender = VectorF32XorAppender::try_new(2048).unwrap();
let encoded_bytes = appender.encode_all(vec![1.0, 1.5, 2.0, 2.5]).unwrap();
```
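The `encoded_bytes` value returned by `encode_all` is the compressed byte buffer itself; the reader examples below operate directly on those bytes.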
The simplest way to iterate over this compressed vector (note that this does not allocate at all):

```rust
use compressed_vec::VectorReader;

let reader = VectorReader::<f32>::try_new(&encoded_bytes[..]).unwrap();
let sum = reader.iterate().sum::<f32>(); // Yay, no allocations!
```
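For instance, a filter-and-count-matches pass like the one benchmarked above can be expressed directly over the compressed bytes. This is only a sketch built on the `iterate()` API (the benchmarks themselves use the faster sink-based path described below), and the threshold is arbitrary:

```rust
use compressed_vec::VectorReader;

// Count how many elements exceed a threshold, straight off the encoded bytes.
let reader = VectorReader::<f32>::try_new(&encoded_bytes[..]).unwrap();
let matches = reader.iterate().filter(|&x| x > 1.4).count();
assert_eq!(matches, 3); // 1.5, 2.0, 2.5 from the vector encoded above
```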
`iterate()` is the easiest API for going through the individual elements of the compressed vector, but it is not the fastest. Fast data processing, such as the filter-and-count benchmark above, is done using `Sink`s, which are defined in the `sink` module. Sinks operate on one SIMD word at a time, and the sink API is designed for inlining.
For example, let's say that we want to add 2.5 to the f32 vector above and then write out the results to a `Vec<f32>`. Internally, XOR encoding and decoding is performed using a sink, and sinks can be stacked during decoding, for an almost entirely SIMD pipeline:

- `XorSink` (used automatically for f32 decoding)
- `AddConstSink` (to add 2.5, again done using SIMD)
- `VecSink` (writes the output to a normal `Vec`)
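Stacking these three sinks looks like this: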
```rust
use compressed_vec::{VectorReader, AddConstSink, VecSink};

let reader = VectorReader::<f32>::try_new(&encoded_bytes[..]).unwrap();
let mut vecsink = VecSink::<f32>::new();                   // final sink: collects into a Vec<f32>
let mut addsink = AddConstSink::new(2.5f32, &mut vecsink); // adds 2.5 to each SIMD word first
reader.decode_to_sink(&mut addsink).unwrap();              // XOR-decode -> add const -> collect
println!("And the transformed vector is: {:?}", vecsink.vec);
```
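Given the values encoded above, `vecsink.vec` should come out as `[3.5, 4.0, 4.5, 5.0]`.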
Details of the vector format can be found in the repository.
The vector format follows columnar compression techniques used throughout the big data and database world, and roughly follows the Google Procella paper with its custom Artus format:

- `VectorAppender` collects min/max and other stats on the raw data and uses them to decide on the best encoding strategy (delta, etc.); a rough sketch of that idea follows below.
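As a hypothetical illustration of stats-driven encoding selection (this is not the crate's actual internals; the function name, thresholds, and strategy labels are all invented for the sketch):

```rust
/// Hypothetical sketch: pick an encoding strategy from stats collected
/// over the raw data. Thresholds and labels are illustrative only.
fn choose_encoding(min: u64, max: u64, num_zeros: usize, len: usize) -> &'static str {
    if len > 0 && num_zeros * 100 >= len * 90 {
        "sparse" // mostly zeros: encode only the non-zero sections
    } else if max - min < 256 {
        "delta"  // small value range: delta-from-min fits in a single byte
    } else {
        "plain"  // fall back to fixed-width words
    }
}
```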
Please reach out to me to collaborate!