Crates.io | node-replication |
lib.rs | node-replication |
version | 0.1.1 |
source | src |
created_at | 2020-10-21 17:14:23.370809 |
updated_at | 2021-09-02 19:12:43.438983 |
description | An operation-log based approach that transforms single-threaded data structures into concurrent, replicated structures. |
size | 156,964 |
Node Replication library based on the paper "Black-box Concurrent Data Structures for NUMA Architectures" (ASPLOS '17).
This library can be used to implement a concurrent version of any single-threaded data structure: it takes in a single-threaded implementation of said data structure and scales it out to multiple cores and NUMA nodes by combining three techniques: readers-writer locks, operation logging, and flat combining.
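To make the operation-log idea concrete before looking at the library's API, here is a minimal, purely illustrative toy sketch (everything in it, including the `ToyOp` and `ToyReplica` names, is invented for this explanation and is not the library's actual implementation, which also involves flat combining and per-replica readers-writer locks): writes are appended once to a shared log and replayed against each replica's single-threaded structure, while reads are served from a local, up-to-date replica.

```rust
// Toy sketch of the operation-log pattern, not node-replication's internals.
enum ToyOp {
    Push(u64),
}

struct ToyReplica {
    data: Vec<u64>, // the single-threaded data structure
    applied: usize, // how far into the log this replica has replayed
}

impl ToyReplica {
    fn sync(&mut self, log: &[ToyOp]) {
        // Replay any log entries this replica has not yet seen.
        for op in &log[self.applied..] {
            match op {
                ToyOp::Push(v) => self.data.push(*v),
            }
        }
        self.applied = log.len();
    }
}

fn main() {
    let mut log: Vec<ToyOp> = Vec::new();
    let mut r1 = ToyReplica { data: Vec::new(), applied: 0 };
    let mut r2 = ToyReplica { data: Vec::new(), applied: 0 };

    // A write is appended to the shared log once...
    log.push(ToyOp::Push(42));

    // ...and each replica applies it locally before serving reads.
    r1.sync(&log);
    r2.sync(&log);
    assert_eq!(r1.data, r2.data);
    assert_eq!(r1.data, vec![42u64]);
}
```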
To replicate a single-threaded data structure, one needs to implement the `Dispatch` trait (from node-replication). As an example, we implement `Dispatch` for the single-threaded `HashMap` from `std`.
```rust
use std::collections::HashMap;
use node_replication::Dispatch;

/// The node-replicated hashmap uses a std hashmap internally.
#[derive(Default)]
struct NrHashMap {
    storage: HashMap<u64, u64>,
}

/// We support a mutable put operation on the hashmap.
#[derive(Clone, Debug, PartialEq)]
enum Modify {
    Put(u64, u64),
}

/// We support an immutable read operation to look up a key from the hashmap.
#[derive(Clone, Debug, PartialEq)]
enum Access {
    Get(u64),
}

/// The Dispatch trait executes `ReadOperation` (our `Access` enum)
/// and `WriteOperation` (our `Modify` enum) against the replicated
/// data-structure.
impl Dispatch for NrHashMap {
    type ReadOperation = Access;
    type WriteOperation = Modify;
    type Response = Option<u64>;

    /// The `dispatch` function applies the immutable operations.
    fn dispatch(&self, op: Self::ReadOperation) -> Self::Response {
        match op {
            Access::Get(key) => self.storage.get(&key).map(|v| *v),
        }
    }

    /// The `dispatch_mut` function applies the mutable operations.
    fn dispatch_mut(&mut self, op: Self::WriteOperation) -> Self::Response {
        match op {
            Modify::Put(key, value) => self.storage.insert(key, value),
        }
    }
}
```
The full example (using `HashMap` as the underlying data structure) can be found here. To run it, execute:
cargo run --example hashmap
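That example ties the `Dispatch` implementation into the crate's log and replica machinery. The sketch below outlines the pattern, but note it is a hedged approximation: the `Log`/`Replica` constructors and the `register`/`execute`/`execute_mut` calls are assumptions about this version's API and may not match it exactly, so consult the bundled hashmap example for the authoritative usage. It builds on the `NrHashMap`, `Modify`, and `Access` definitions above.

```rust
// Sketch only: constructor and method signatures below are assumed for this
// crate version and may differ; see the hashmap example for the exact API.
use std::sync::Arc;
use node_replication::{Log, Replica};

fn main() {
    // Shared operation log (sized in bytes here) that stores WriteOperations.
    let log = Arc::new(Log::<<NrHashMap as Dispatch>::WriteOperation>::new(
        2 * 1024 * 1024,
    ));

    // Typically one replica per NUMA node; a single replica suffices for a demo.
    let replica = Replica::<NrHashMap>::new(&log);

    // Every thread registers with the replica it intends to use.
    let idx = replica.register().expect("failed to register with replica");

    // Writes go through execute_mut, reads through execute; both eventually
    // land in dispatch_mut/dispatch on the underlying NrHashMap.
    replica.execute_mut(Modify::Put(1, 42), idx);
    assert_eq!(replica.execute(Access::Get(1), idx), Some(42));
}
```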
The library often makes your single-threaded implementation perform better than, or competitive with, fine-grained locking or lock-free implementations of the same data structure. It works especially well if the workload is read-heavy and the structure is shared by many threads across NUMA nodes, as the benchmark below illustrates.
As an example, the following benchmark uses Rust's standard hash table with the `Dispatch` implementation from above (nr) and compares it against concurrent hash-table implementations from crates.io (chashmap, dashmap, flurry), a `HashMap` protected by an `RwLock` (std), and urcu.
The figures show a benchmark using hash tables pre-filled with 67M entries (8-byte keys and values) and a uniform key distribution for operations. The graphs on the left show different write ratios (0%, 10% and 80%); the graph on the right varies the write ratio (x-axis) at 192 threads. The system has 4 NUMA nodes, so 4 replicas are used (a replica is added every 24 cores, so all four are in place at x=96). After x=96, the remaining hyper-threads are used.
The library works with `no_std` and a stable Rust compiler:
cargo build
If you are using a nightly Rust compiler, you can compile the library to make use of some more recent unstable features (`new_uninit`, `get_mut_unchecked`, and `negative_impls`):
cargo build --features unstable
As a dependency in your `Cargo.toml`:
node-replication = "*"
The code should currently be treated as an early release and is still a work in progress. In its current form, the library is only known to work on x86 platforms (other platforms will require some changes and are untested).
There is a series of unit tests as part of the implementation and a few integration tests that check various aspects of the implementation using a stack.
You can run the tests by executing: cargo test
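If you implement `Dispatch` for your own structure, a plain unit test that drives `dispatch` and `dispatch_mut` directly (no log or replica involved) is a cheap sanity check. The test below is a hypothetical sketch against the `NrHashMap` example from above, not one of the crate's own tests.

```rust
#[cfg(test)]
mod tests {
    use super::*;

    // Hypothetical sanity test for the NrHashMap Dispatch impl above:
    // apply a write via dispatch_mut, then read it back via dispatch.
    #[test]
    fn put_then_get() {
        let mut map = NrHashMap::default();
        assert_eq!(map.dispatch_mut(Modify::Put(1, 2)), None);
        assert_eq!(map.dispatch(Access::Get(1)), Some(2));
        assert_eq!(map.dispatch(Access::Get(3)), None);
    }
}
```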
The benchmarks (and how to execute them) are explained in more detail in the benches folder.