concurrent_lru

Crates.io: concurrent_lru
lib.rs: concurrent_lru
version: 0.2.0
source: src
created_at: 2021-02-14 05:15:03.794412
updated_at: 2021-02-14 14:34:05.744601
description: A concurrent LRU cache
homepage: https://github.com/ngkv/concurrent_lru
repository: https://github.com/ngkv/concurrent_lru.git
max_upload_size:
id: 354946
size: 35,192
owner: zhongjn (zhongjn)


README

Concurrent LRU

An implementation of a concurrent LRU cache, designed to hold heavyweight resources such as file descriptors or disk pages. The implementation is heavily influenced by the LRU cache in LevelDB.

Two implementations are currently provided: unsharded and sharded.

  • unsharded is a linked hashmap protected by a single big lock.
  • sharded partitions an unsharded cache by key hash, giving better performance under contention.
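The sharding idea above can be sketched with the standard library alone. This is an illustrative sketch, not the crate's actual API: `ShardedMap`, `shard_index`, and the fixed `String` key type are hypothetical choices made for brevity, and real shards would each hold an LRU list rather than a plain map.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::Mutex;

// Hypothetical sketch: each shard is an independently locked map,
// and a key's hash picks its shard, so threads touching different
// shards never contend on the same lock.
pub struct ShardedMap<V> {
    shards: Vec<Mutex<HashMap<String, V>>>,
}

impl<V> ShardedMap<V> {
    pub fn new(num_shards: usize) -> Self {
        Self {
            shards: (0..num_shards)
                .map(|_| Mutex::new(HashMap::new()))
                .collect(),
        }
    }

    // Hash the key and reduce it to a shard index.
    fn shard_index(&self, key: &str) -> usize {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        (h.finish() as usize) % self.shards.len()
    }

    pub fn insert(&self, key: String, value: V) {
        let idx = self.shard_index(&key);
        self.shards[idx].lock().unwrap().insert(key, value);
    }

    pub fn get_cloned(&self, key: &str) -> Option<V>
    where
        V: Clone,
    {
        let idx = self.shard_index(key);
        self.shards[idx].lock().unwrap().get(key).cloned()
    }
}

fn main() {
    let m = ShardedMap::new(8);
    m.insert("foo".to_string(), 1u32);
    assert_eq!(m.get_cloned("foo"), Some(1));
    assert!(m.get_cloned("bar").is_none());
}
```

Only the lock of the chosen shard is taken per operation, which is why sharding helps when many threads hit the cache at once.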

Example

```rust
use concurrent_lru::sharded::LruCache;
use std::{fs, io};

fn read(_f: &fs::File) -> io::Result<()> {
    // Maybe some positioned read...
    Ok(())
}

fn main() -> io::Result<()> {
    let cache = LruCache::<String, fs::File>::new(10);

    let foo_handle = cache.get_or_try_init("foo".to_string(), 1, |name| {
        fs::OpenOptions::new().read(true).open(name)
    })?;
    read(foo_handle.value())?;
    drop(foo_handle); // Unpin foo file.

    // Foo is in the cache.
    assert!(cache.get("foo".to_string()).is_some());

    // Evict foo manually.
    cache.prune();
    assert!(cache.get("foo".to_string()).is_none());

    Ok(())
}
```
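The pinning behavior the example relies on (a live handle keeps an entry from being evicted) follows LevelDB's refcounted handle design. Below is a minimal standard-library sketch of that idea; `PinningCache` is a hypothetical stand-in, not the crate's type, and it omits LRU ordering and capacity accounting to focus on pin/unpin semantics.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical sketch: a lookup returns an Arc "handle" that pins the
// entry; prune() only evicts entries whose sole remaining reference is
// the cache itself.
struct PinningCache<V> {
    map: Mutex<HashMap<String, Arc<V>>>,
}

impl<V> PinningCache<V> {
    fn new() -> Self {
        Self {
            map: Mutex::new(HashMap::new()),
        }
    }

    fn insert(&self, key: String, value: V) {
        self.map.lock().unwrap().insert(key, Arc::new(value));
    }

    // Cloning the Arc pins the entry for as long as the handle lives.
    fn get(&self, key: &str) -> Option<Arc<V>> {
        self.map.lock().unwrap().get(key).cloned()
    }

    // Evict every entry that no outstanding handle currently pins
    // (strong count 1 means only the cache's own Arc remains).
    fn prune(&self) {
        self.map
            .lock()
            .unwrap()
            .retain(|_, v| Arc::strong_count(v) > 1);
    }
}

fn main() {
    let cache = PinningCache::new();
    cache.insert("foo".to_string(), 42u32);

    let handle = cache.get("foo").unwrap(); // pins "foo"
    cache.prune(); // "foo" survives: the handle still references it
    assert!(cache.get("foo").is_some());

    drop(handle); // unpin
    cache.prune(); // nothing pins "foo" anymore, so it is evicted
    assert!(cache.get("foo").is_none());
}
```

This is why `drop(foo_handle)` matters in the example above: until the handle is dropped, `prune()` cannot reclaim the pinned file.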

Contribution

Contributions are welcome! Please fork the library, push changes to your fork, and send a pull request. All contributions are shared under an MIT license unless explicitly stated otherwise in the pull request.

Performance

TODO
