Crates.io | clru |
lib.rs | clru |
version | 0.6.2 |
source | src |
created_at | 2020-10-27 23:59:45.126991 |
updated_at | 2024-05-02 21:27:57.657477 |
description | An LRU cache implementation with constant time operations and weighted semantic |
homepage | |
repository | https://github.com/marmeladema/clru-rs |
max_upload_size | |
id | 306165 |
size | 100,404 |
Another LRU cache implementation in Rust. It has two main characteristics that differentiate it from other implementations:

1. It is backed by a `HashMap`: it offers O(1) time complexity (amortized average) for common operations (see the short sketch after this list) like:
   * `get` / `get_mut`
   * `put` / `pop`
   * `peek` / `peek_mut`

2. It is a weighted cache: each key-value pair has a weight, and the capacity serves both:
   * as a limit on the number of elements
   * and as a limit on the total weight of its elements

   using the following formula:

   `CLruCache::len + CLruCache::weight <= CLruCache::capacity`
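To make the constant-time operations from point 1 concrete, here is a minimal sketch using the default (unweighted) cache. It assumes, as is conventional for LRU caches, that `peek` reads a value without refreshing its recency while `get` does refresh it:

```rust
use std::num::NonZeroUsize;
use clru::CLruCache;

// A cache holding at most 2 elements (default ZeroWeightScale).
let mut cache = CLruCache::new(NonZeroUsize::new(2).unwrap());
cache.put("a".to_string(), 1);
cache.put("b".to_string(), 2);

// `peek` reads "a" but (by assumption here) does not mark it as recently used.
assert_eq!(cache.peek("a"), Some(&1));

// Inserting a third element evicts the least recently used entry, which is still "a".
cache.put("c".to_string(), 3);
assert!(cache.get("a").is_none());
assert_eq!(cache.get("b"), Some(&2));

// `pop` removes an entry and returns its value.
assert_eq!(cache.pop("c"), Some(3));
assert_eq!(cache.len(), 1);
```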
Even though most operations don't depend on the number of elements in the cache, `CLruCache::put_with_weight` has a special behavior: because it needs to make room for the new element, it will remove as many least recently used elements as necessary. In the worst case, that means emptying the cache entirely. Additionally, if the weight of the new element is too big, the insertion can fail.
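As a rough illustration of that behavior, the sketch below uses a hypothetical `StrLenScale` (an entry's weight is its value's length, in the spirit of the custom scale example further down) and a capacity of 10. How many entries get evicted depends on the weights involved:

```rust
use std::num::NonZeroUsize;
use clru::{CLruCache, CLruCacheConfig, WeightScale};

// Hypothetical scale for this sketch: an entry weighs as much as its value is long.
struct StrLenScale;

impl WeightScale<String, String> for StrLenScale {
    fn weight(&self, _key: &String, value: &String) -> usize {
        value.len()
    }
}

let mut cache = CLruCache::with_config(
    CLruCacheConfig::new(NonZeroUsize::new(10).unwrap()).with_scale(StrLenScale),
);

assert!(cache.put_with_weight("a".to_string(), "xxxxx".to_string()).is_ok()); // weight 5
assert!(cache.put_with_weight("b".to_string(), "xx".to_string()).is_ok()); // weight 2

// Making room for this entry forces the eviction of the least recently used entry ("a").
assert!(cache.put_with_weight("c".to_string(), "xxxxx".to_string()).is_ok()); // weight 5
assert!(cache.get("a").is_none());
assert!(cache.get("b").is_some());

// An element too heavy to ever fit is rejected instead of being inserted.
assert!(cache.put_with_weight("d".to_string(), "x".repeat(16)).is_err());
```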
For the common case of an LRU cache whose elements don't have a weight, a default `ZeroWeightScale` is provided and unlocks some useful APIs like:

* `CLruCache::put`: an infallible insertion that will remove a maximum of 1 element.
* `CLruCache::put_or_modify`: a conditional insertion or modification flow similar to the entry API of `HashMap`.
* `CLruCache::try_put_or_modify`: a fallible version of `CLruCache::put_or_modify`.
* APIs that hand out a mutable reference to a value, such as `CLruCache::get_mut` (mutating a value could otherwise change its weight).
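The entry-like flow of `CLruCache::put_or_modify` might look roughly like the word-counting sketch below. The signature is assumed here to be `put_or_modify(key, put_op, mod_op, data)` returning a mutable reference to the value, with `put_op` building the value for a missing key and `mod_op` updating an existing one; check the crate documentation for the exact shape:

```rust
use std::num::NonZeroUsize;
use clru::CLruCache;

let mut cache = CLruCache::new(NonZeroUsize::new(10).unwrap());

// Assumed shape: put_or_modify(key, put_op, mod_op, data) -> &mut V.
// Count word occurrences: insert 1 on first sight, otherwise bump the counter.
for word in ["apple", "banana", "apple"] {
    cache.put_or_modify(
        word.to_string(),
        |_key, ()| 1,
        |_key, count, ()| *count += 1,
        (),
    );
}

assert_eq!(cache.get("apple"), Some(&2));
assert_eq!(cache.get("banana"), Some(&1));
```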
Most of the API, documentation, examples, and tests have been heavily inspired by the lru crate. I want to thank jeromefroe for his work, without which this crate would probably never have been released.
The main differences are:

* An API closer to the standard `HashMap` collection, which allows lookups with a `Borrow`-ed version of the key.

Below are simple examples of how to instantiate and use this LRU cache.
Using the default `ZeroWeightScale`:
```rust
use std::num::NonZeroUsize;
use clru::CLruCache;

let mut cache = CLruCache::new(NonZeroUsize::new(2).unwrap());
cache.put("apple".to_string(), 3);
cache.put("banana".to_string(), 2);

assert_eq!(cache.get("apple"), Some(&3));
assert_eq!(cache.get("banana"), Some(&2));
assert!(cache.get("pear").is_none());

assert_eq!(cache.put("banana".to_string(), 4), Some(2));
assert_eq!(cache.put("pear".to_string(), 5), None);

assert_eq!(cache.get("pear"), Some(&5));
assert_eq!(cache.get("banana"), Some(&4));
assert!(cache.get("apple").is_none());

{
    let v = cache.get_mut("banana").unwrap();
    *v = 6;
}

assert_eq!(cache.get("banana"), Some(&6));
```
Using a custom `WeightScale` implementation:
```rust
use std::num::NonZeroUsize;
use clru::{CLruCache, CLruCacheConfig, WeightScale};

struct CustomScale;

impl WeightScale<String, &str> for CustomScale {
    fn weight(&self, _key: &String, value: &&str) -> usize {
        value.len()
    }
}

let mut cache = CLruCache::with_config(
    CLruCacheConfig::new(NonZeroUsize::new(6).unwrap()).with_scale(CustomScale),
);

assert_eq!(cache.put_with_weight("apple".to_string(), "red").unwrap(), None);
assert_eq!(
    cache.put_with_weight("apple".to_string(), "green").unwrap(),
    Some("red")
);

assert_eq!(cache.len(), 1);
assert_eq!(cache.get("apple"), Some(&"green"));
```
Each contribution is tested with the regular compiler, Miri, and 4 flavors of sanitizer (address, memory, thread, and leak). This should help catch bugs sooner rather than later.