| Field | Value |
|---|---|
| Crates.io | ppar |
| lib.rs | ppar |
| version | 0.3.1 |
| source | src |
| created_at | 2020-10-30 10:21:38.748716 |
| updated_at | 2021-08-31 12:01:16.463362 |
| description | Persistent immutable array |
| homepage | |
| repository | https://github.com/bnclabs/ppar |
| max_upload_size | |
| id | 306940 |
| size | 188,592 |
This package implements a persistent, immutable array using a variant of the rope data structure.

Arrays in the Rust standard library are implemented as std::vec::Vec. For most cases that should be fine. But when we start working with arrays that are super-large, and/or step into requirements like non-destructive writes and concurrent access, we find std::vec insufficient.
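To see what "non-destructive writes" cost with std::vec, here is a minimal, std-only sketch (the `persistent_insert` helper is mine, purely for illustration): every logically-immutable insert must clone the entire vector, an O(n) copy per write, whereas a persistent structure shares unmodified nodes between versions.

```rust
// A minimal sketch (std-only): a non-destructive insert on std::vec::Vec
// must copy the entire array so the original version stays untouched.
fn persistent_insert(v: &Vec<u64>, off: usize, value: u64) -> Vec<u64> {
    let mut out = v.clone(); // O(n) copy: the whole price of immutability here
    out.insert(off, value);  // plus O(n) element shifting
    out
}

fn main() {
    let v1: Vec<u64> = (0..5).collect();
    let v2 = persistent_insert(&v1, 2, 99);
    assert_eq!(v1, vec![0, 1, 2, 3, 4]);        // original version is preserved
    assert_eq!(v2, vec![0, 1, 2, 99, 3, 4]);    // new version has the insert
    println!("{:?} -> {:?}", v1, v2);
}
```

A persistent array avoids the full copy by sharing subtrees between the old and new versions, which is the gap ppar aims to fill.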
im is a popular alternative, but for large arrays it has insert() and delete() penalties similar to std::vec. While most implementations prefer to use an RRB-Tree, ppar is based on the rope data structure. Situations like those above, super-large arrays, non-destructive writes, and concurrent access, are where ppar might be required. Check out the performance metrics below.
Fundamentally, ppar can be viewed as a binary tree of array-blocks, where each leaf node is a contiguous block of type T items, while intermediate nodes only hold references to their child nodes, left and right. To be more precise, intermediate nodes in the tree are organised similar to the rope structure, as a tuple of (weight, left, right), where weight is the sum of all items present in the leaf nodes under the left branch.
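The (weight, left, right) layout above can be sketched as a small Rust enum. This is a simplified, hypothetical mock-up, not ppar's actual internals (the names Node, Leaf, Inner, and get are mine): lookup walks down the tree, comparing the offset against weight to pick a branch.

```rust
// Simplified sketch of a rope-style tree (not ppar's actual internals).
// Leaf nodes hold contiguous blocks of items; intermediate nodes hold
// (weight, left, right), where `weight` counts the items under `left`.
use std::rc::Rc;

enum Node<T> {
    Leaf(Vec<T>),
    Inner {
        weight: usize,     // number of items in the left subtree
        left: Rc<Node<T>>,
        right: Rc<Node<T>>,
    },
}

impl<T> Node<T> {
    fn get(&self, off: usize) -> Option<&T> {
        match self {
            Node::Leaf(block) => block.get(off),
            Node::Inner { weight, left, right } => {
                if off < *weight {
                    left.get(off)            // item lives under the left branch
                } else {
                    right.get(off - *weight) // re-base the offset for the right branch
                }
            }
        }
    }
}

fn main() {
    // Tree holding [10, 20, 30, 40]: left leaf [10, 20], right leaf [30, 40].
    let tree = Node::Inner {
        weight: 2,
        left: Rc::new(Node::Leaf(vec![10, 20])),
        right: Rc::new(Node::Leaf(vec![30, 40])),
    };
    assert_eq!(tree.get(0), Some(&10));
    assert_eq!(tree.get(2), Some(&30));
    assert_eq!(tree.get(4), None);
}
```

Because intermediate nodes hold only references, a persistent insert can rebuild just the path from root to the affected leaf and share every other subtree with the previous version.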
A list of alternatives can be found here. If you find a good alternative, please add it to the list and raise a PR. If you are planning to use ppar for your project, do let us know.
The basic algorithm is fairly tight. Contributions are welcome to make the ppar::Vector type as rich as std::vec::Vec and im::Vector.
Measurements were taken on a 2008 core2-duo machine with 8GB RAM, using the following command; all numbers are per-op latency.
```sh
cargo run --release --bin perf --features=perf -- --load 100000 --ops 10000
```
Lower numbers are better.
We load the array with an initial data set of 100_000 u64 numbers. Then we apply each operation in a tight loop, measure the elapsed time for the entire loop, and finally compute the per-op latency. insert, insert_mut, remove, remove_mut, update, update_mut, and get use a random offset into the array. For iteration, we compute the elapsed time for iterating over each item.
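The measurement loop described above can be sketched as follows. This is a simplified, std-only stand-in, not ppar's actual perf binary: the tiny xorshift offset generator is an assumption to avoid pulling in an external RNG crate, and `update_mut` is emulated with a plain slice write.

```rust
use std::time::Instant;

// Tiny xorshift PRNG so the sketch stays std-only (stand-in for a real RNG).
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn main() {
    let load = 100_000usize; // initial data set size
    let ops = 10_000u32;     // operations per measurement loop
    let mut arr: Vec<u64> = (0..load as u64).collect();

    let mut seed = 0x9E37_79B9_7F4A_7C15u64;
    let start = Instant::now();
    for _ in 0..ops {
        // Random offset into the array, as described for the mutating ops.
        let off = (xorshift(&mut seed) as usize) % arr.len();
        arr[off] = arr[off].wrapping_add(1); // stand-in for update_mut
    }
    let elapsed = start.elapsed();
    // Per-op latency = total elapsed time for the loop / number of operations.
    println!("update_mut: {:?} per op", elapsed / ops);
}
```

Timing the whole loop and dividing by the op count amortises the clock-read overhead, which would otherwise dominate for nanosecond-scale operations.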
Between the arc and rc variants of ppar, the performance difference is not much, though it might make a difference for cache-optimized algorithms. ppar gains an order-of-magnitude improvement over vec and im for insert, remove, load, split, append, and clone operations. It is also worth noting that the maximum size of the operated data set is less than 2MB, which means the entire array footprint can be held inside the CPU cache. As far as my understanding goes, when we increase the size of the array the performance difference will only become more pronounced; that is, fast ops might look faster and slow ops might look slower.
Contribution checklist:

- Run `make build` to confirm all versions of the build pass with 0 warnings and 0 errors.
- Run `check.sh` with 0 warnings, 0 errors, and all testcases passing.
- Run `perf.sh` with 0 warnings, 0 errors, and all testcases passing.
- Run `cargo spellcheck` to remove common spelling mistakes.