Crates.io | xfetch |
lib.rs | xfetch |
version | 1.0.1 |
source | src |
created_at | 2019-06-22 14:04:52.675603 |
updated_at | 2020-07-15 06:29:36.157813 |
description | Optimal probabilistic cache refresh algorithm |
homepage | https://github.com/kanru/xfetch-rs |
repository | https://github.com/kanru/xfetch-rs |
max_upload_size | |
id | 142815 |
size | 77,227 |
Rust crate implementing Optimal Probabilistic Cache Stampede Prevention, also known as the XFetch algorithm.
It can be used together with cache containers such as an LRU cache to implement cache expiration and re-computation in parallel environments such as multi-threaded or multi-process computing.
It is very efficient because the algorithm requires no coordination (no locks) between processes.
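The idea behind XFetch is that each reader of a cache entry independently decides, shortly before the TTL elapses, whether to treat the entry as expired and recompute it, with a probability that rises as the real expiry approaches. A minimal sketch of that decision rule follows; it assumes the rand crate, and the function should_refresh_early and its parameters delta (the observed recomputation time) and beta (an eagerness factor, typically 1.0) are illustrative names, not the crate's API.

use std::time::{Duration, Instant};

/// Illustrative helper: should the caller recompute the value ahead of `expiry`?
/// `delta` is how long recomputation takes, `beta` tunes how eagerly we refresh.
fn should_refresh_early(expiry: Instant, delta: Duration, beta: f64) -> bool {
    // Uniform sample in (0, 1]; -ln(r) is then an exponentially distributed factor.
    let r: f64 = rand::random::<f64>().max(f64::MIN_POSITIVE);
    // XFetch rule: refresh once now + delta * beta * (-ln r) reaches the expiry time.
    Instant::now() + delta.mul_f64(beta * -r.ln()) >= expiry
}

Because every reader draws its own sample, under contention typically only one of them refreshes early while the others keep serving the cached value, which is why no locking is needed.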
Create a single cache entry and test its expiration:
# struct SomeValue { value: u64, ttl: u64 }
# fn expensive_computation() -> SomeValue { SomeValue { value: 42, ttl: 10000 } }
use xfetch::CacheEntry;
use std::time::Duration;

let entry = CacheEntry::builder(|| {
    expensive_computation()
})
.with_ttl(|value| {
    Duration::from_millis(value.ttl)
})
.build();

assert!(!entry.is_expired());
The CacheEntry can be used with any cache library, for example the lru crate:
use lru::LruCache;
use xfetch::CacheEntry;
use std::time::Duration;

struct SomeValue {
    value: u64,
    ttl: u64,
}

fn recompute_value(n: u64) -> SomeValue {
    SomeValue { value: n, ttl: 10000 }
}

fn main() {
    // An LRU cache that holds at most two entries.
    let mut cache = LruCache::new(2);

    cache.put("apple", CacheEntry::builder(|| recompute_value(3))
        .with_ttl(|v| Duration::from_millis(v.ttl))
        .build());
    cache.put("banana", CacheEntry::builder(|| recompute_value(2))
        .with_ttl(|v| Duration::from_millis(v.ttl))
        .build());

    if let Some(entry) = cache.get(&"apple") {
        if !entry.is_expired() {
            assert_eq!(entry.get().value, 3);
        } else {
            // The entry may report expiry slightly early; recompute and re-insert it.
            cache.put("apple", CacheEntry::builder(|| recompute_value(3))
                .with_ttl(|v| Duration::from_millis(v.ttl))
                .build());
        }
    }
}
Plot showing the simulated probability of early expiration for different systems.
Licensed under either of the Apache License, Version 2.0 or the MIT License, at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.