concurrent_lru 0.2.0

description: A concurrent LRU cache
homepage: https://github.com/ngkv/concurrent_lru
repository: https://github.com/ngkv/concurrent_lru.git
created: 2021-02-14
updated: 2021-02-14
An implementation of a concurrent LRU cache, designed to hold heavyweight resources such as file descriptors or disk pages. The implementation is heavily influenced by the LRU cache in LevelDB.
Currently there are two implementations: `unsharded` and `sharded`. `unsharded` is a linked hashmap protected by a single big lock. `sharded` partitions an `unsharded` cache by key, providing better performance under contention.

```rust
use concurrent_lru::sharded::LruCache;
use std::{fs, io};

fn read(_f: &fs::File) -> io::Result<()> {
    // Maybe some positioned read...
    Ok(())
}

fn main() -> io::Result<()> {
    let cache = LruCache::<String, fs::File>::new(10);

    let foo_handle = cache.get_or_try_init("foo".to_string(), 1, |name| {
        fs::OpenOptions::new().read(true).open(name)
    })?;
    read(foo_handle.value())?;
    drop(foo_handle); // Unpin the foo file.

    // Foo is still in the cache.
    assert!(cache.get("foo".to_string()).is_some());

    // Evict foo manually.
    cache.prune();
    assert!(cache.get("foo".to_string()).is_none());

    Ok(())
}
```
Contributions are welcome! Please fork the library, push changes to your fork, and send a pull request. All contributions are shared under an MIT license unless explicitly stated otherwise in the pull request.