Crates.io | tokio-lk |
lib.rs | tokio-lk |
version | 0.2.2 |
source | src |
created_at | 2020-03-10 01:11:06.008077 |
updated_at | 2020-03-26 02:15:54.556501 |
description | Futures-aware lock-by-id primitives |
homepage | https://github.com/zenixls2/tokio-lk |
repository | https://github.com/zenixls2/tokio-lk |
max_upload_size | |
id | 217046 |
size | 32,201 |
**A lock-by-id future for tokio**
The `Lock` future returns a `Guard` once it acquires the mutex.
To hold the lock across subsequent futures, move the `Guard` into the future's output.
To release the mutex, simply drop the `Guard` from your future chain.
Each `Lock` object is assigned a unique id.
Uniqueness is guaranteed until `usize::MAX` ids have been generated.
Make sure old `Lock`s are dropped before you generate new `Lock`s beyond that count.
`KeyPool` abstracts over the backing hashmap that stores the locks, such as an `RwLock<HashMap>`.
Example:

```rust
use std::time::{Duration, Instant};
use tokio_lk::*;
use futures::prelude::*;
use tokio::runtime::Runtime;
use tokio::time::delay_for;

let mut rt = Runtime::new().unwrap();
let map = KeyPool::<MapType>::new();
let now = Instant::now();
// this task will compete with task2 for the lock at id 1
let task1 = async {
    let _guard = Lock::fnew(1, map.clone()).await.await;
    delay_for(Duration::from_millis(100)).await;
};
// this task will compete with task1 for the lock at id 1
let task2 = async {
    let _guard = Lock::fnew(1, map.clone()).await.await;
    delay_for(Duration::from_millis(100)).await;
};
// no other task competes for the lock at id 2
let task3 = async {
    let _guard = Lock::fnew(2, map.clone()).await.await;
    delay_for(Duration::from_millis(100)).await;
};
rt.block_on(async { tokio::join!(task1, task2, task3) });
println!("elapsed: {:?}", now.elapsed()); // `now` measures the total run time
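```

Since task1 and task2 contend for the lock at id 1, their bodies run one after the other, while task3 proceeds independently, so the whole join should take roughly 200ms rather than 300ms.

As described above, the mutex is released as soon as the `Guard` is dropped. Below is a minimal sketch of releasing early with an explicit `drop`, using the same `Lock::fnew` API as the example above; the delays just stand in for real work:

```rust
use std::time::Duration;
use tokio::runtime::Runtime;
use tokio::time::delay_for;
use tokio_lk::*;

let mut rt = Runtime::new().unwrap();
let map = KeyPool::<MapType>::new();
rt.block_on(async {
    let guard = Lock::fnew(1, map.clone()).await.await;
    // critical section: other tasks locking id 1 must wait here
    delay_for(Duration::from_millis(10)).await;
    drop(guard); // releases the lock for id 1 right away
    // anything after this point no longer blocks other tasks on id 1
    delay_for(Duration::from_millis(10)).await;
});
```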
Two type aliases are provided for `KeyPool` initialization:

- `MapType`: a type alias of `hashbrown::HashMap`
- `DashMapType`: a type alias of `dashmap::DashMap`

These are backed by the `hashbrown` and `dashmap` crates respectively.
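A quick sketch of choosing the backing map, assuming `DashMapType` is a drop-in type parameter for `KeyPool` as the aliases above suggest:

```rust
use tokio_lk::*;

// the default shown in the example above: hashbrown::HashMap storage
let map = KeyPool::<MapType>::new();
// dashmap::DashMap storage, a concurrent map, as the alternative backend
let dashmap_pool = KeyPool::<DashMapType>::new();
```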
To run the benchmark, execute the following command at the prompt:

```bash
cargo bench -- --nocapture
```
The `lock1000_parallel` benchmark runs 1000 futures, all guarded by a single lock id, each updating a shared counter.
The `lock1000_serial` benchmark runs similar operations on a single thread.
Currently our implementation is about 4-5 times slower than the single-threaded version.
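A rough sketch of the kind of workload `lock1000_parallel` measures, assuming the same `Lock::fnew` API as the example above; the counter and task setup here are illustrative, not the benchmark's actual code:

```rust
use futures::future::join_all;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use tokio::runtime::Runtime;
use tokio_lk::*;

let mut rt = Runtime::new().unwrap();
let map = KeyPool::<MapType>::new();
let counter = Arc::new(AtomicUsize::new(0));
rt.block_on(async {
    // 1000 futures all contend for the same lock id, so each
    // critical section runs one at a time
    let tasks = (0..1000).map(|_| {
        let map = map.clone();
        let counter = counter.clone();
        async move {
            let _guard = Lock::fnew(1, map).await.await;
            counter.fetch_add(1, Ordering::SeqCst);
        }
    });
    join_all(tasks).await;
});
assert_eq!(counter.load(Ordering::SeqCst), 1000);
```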
Licensed under
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you shall be licensed as above, without any additional terms or conditions.