Crates.io: hashlru
lib.rs: hashlru
version: 0.11.1
source: src
created_at: 2020-01-05 16:12:43
updated_at: 2024-03-17 13:12:19
description: HashLRU is an LRU cache.
homepage: https://gitlab.com/liberecofr/hashlru
repository: https://gitlab.com/liberecofr/hashlru/tree/main
id: 195439
size: 150,702
HashLRU is an LRU cache implemented in Rust.
It tries to follow the API exposed by a standard Rust HashMap, while enforcing a limited memory footprint by capping the number of keys according to the LRU (least recently used) strategy, a common cache replacement policy.
It has been heavily inspired by lru by Jerome Froelich. The underlying logic is similar: a doubly-linked list implemented with pointers, combined with a hash map. HashLRU is slightly less optimized, but its API is a bit richer in some areas.
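The hash map plus recency-ordering idea can be sketched with std types alone. A minimal sketch, not hashlru's actual implementation: it uses a VecDeque for recency order (front = least recently used), which makes recency updates O(n), whereas a pointer-based doubly-linked list as described above gives O(1). TinyLru and its methods are illustrative names, not hashlru API:

```rust
use std::collections::{HashMap, VecDeque};

// Minimal LRU sketch: HashMap for storage, VecDeque for recency order
// (front = least recently used). Illustrative only.
struct TinyLru {
    cap: usize,
    map: HashMap<String, i32>,
    order: VecDeque<String>,
}

impl TinyLru {
    fn new(cap: usize) -> Self {
        TinyLru { cap, map: HashMap::new(), order: VecDeque::new() }
    }

    // Move `key` to the most-recently-used position.
    fn touch(&mut self, key: &str) {
        if let Some(pos) = self.order.iter().position(|k| k == key) {
            self.order.remove(pos);
        }
        self.order.push_back(key.to_string());
    }

    fn insert(&mut self, key: &str, val: i32) {
        if !self.map.contains_key(key) && self.map.len() == self.cap {
            // Evict the least recently used entry to stay within capacity.
            if let Some(lru) = self.order.pop_front() {
                self.map.remove(&lru);
            }
        }
        self.map.insert(key.to_string(), val);
        self.touch(key);
    }

    // Note: a read updates recency, hence &mut self.
    fn get(&mut self, key: &str) -> Option<i32> {
        let v = self.map.get(key).copied();
        if v.is_some() {
            self.touch(key);
        }
        v
    }
}

fn main() {
    let mut c = TinyLru::new(2);
    c.insert("a", 1);
    c.insert("b", 2);
    c.get("a");       // "a" becomes most recently used
    c.insert("c", 3); // evicts "b", the least recently used entry
    assert_eq!(c.get("b"), None);
    assert_eq!(c.get("a"), Some(1));
    assert_eq!(c.get("c"), Some(3));
    println!("ok");
}
```

This also shows why a read on an LRU cache is a mutation: get has to update the recency order.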
On top of what common LRU implementations provide, HashLRU offers a few extras.
HashLRU sits somewhere between a toy project and something you could use. It comes with a rather complete test harness and tries to have reasonable documentation, so that's a plus. OTOH it is quite young, and to my knowledge not used in production anywhere.
There are also many other libraries you could use instead.
use hashlru::Cache;
let mut cache = Cache::new(4);
cache.insert("key1", 10);
cache.insert("key2", 20);
cache.insert("key3", 30);
cache.insert("key4", 40);
cache.insert("key5", 50);
// key1 has been dropped, size is limited to 4
assert_eq!(Some("key2"), cache.lru());
assert_eq!(Some(&20), cache.get(&"key2"));
// getting key2 has made key3 the least recently used item
assert_eq!(Some("key3"), cache.lru());
assert_eq!(Some(&40), cache.get(&"key4"));
// getting key4 makes it the most recently used item
assert_eq!("[key3: 30, key5: 50, key2: 20, key4: 40]", format!("{}", cache));
Taken from a random CI job:
running 10 tests
test tests::bench_read_usize_builtin_hashmap ... bench: 34 ns/iter (+/- 1)
test tests::bench_read_usize_extern_caches ... bench: 58 ns/iter (+/- 30)
test tests::bench_read_usize_extern_lru ... bench: 24 ns/iter (+/- 5)
test tests::bench_read_usize_hashlru_cache ... bench: 25 ns/iter (+/- 3)
test tests::bench_read_usize_hashlru_sync_cache ... bench: 61 ns/iter (+/- 18)
test tests::bench_write_usize_builtin_hashmap ... bench: 104 ns/iter (+/- 19)
test tests::bench_write_usize_extern_caches ... bench: 116 ns/iter (+/- 46)
test tests::bench_write_usize_extern_lru ... bench: 62 ns/iter (+/- 32)
test tests::bench_write_usize_hashlru_cache ... bench: 64 ns/iter (+/- 34)
test tests::bench_write_usize_hashlru_sync_cache ... bench: 100 ns/iter (+/- 42)
test result: ok. 0 passed; 0 failed; 0 ignored; 10 measured; 0 filtered out; finished in 43.16s
These benchmarks use a 100k-item cache.
These are not the results of thorough, intensive benchmarking, but a few facts stand out:

hashlru and lru perform better than a vanilla HashMap on these benches. hashlru is not the fastest: lru always performs best, as it has interesting corner-case optimizations. Use lru if you are after raw speed.

To run the benchmarks:
cd bench
rustup default nightly
cargo bench
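The benches above require the nightly toolchain. As a rough std-only alternative, a timing sketch with std::time::Instant (this is an illustration, not part of the hashlru bench suite; it times a plain HashMap sized like the 100k-item cache used above):

```rust
use std::collections::HashMap;
use std::time::Instant;

// Rough std-only timing sketch: average read time on a 100k-entry
// HashMap. Not a substitute for the nightly bencher, which handles
// warm-up and statistical variance.
fn main() {
    let n: usize = 100_000;
    let map: HashMap<usize, usize> = (0..n).map(|i| (i, i * 2)).collect();

    let start = Instant::now();
    let mut sum = 0usize;
    for i in 0..n {
        sum += map.get(&i).copied().unwrap_or(0); // keep reads observable
    }
    let elapsed = start.elapsed();

    assert_eq!(sum, n * (n - 1)); // sum of 2*i for i in 0..n
    println!("{} reads in {:?} ({} ns/read avg)",
             n, elapsed, elapsed.as_nanos() / n as u128);
}
```

The running sum prevents the compiler from optimizing the reads away.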
RefCell can be used when the cache must be shared while still being mutated on reads.

HashLRU is licensed under the MIT license.
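As a footnote to the RefCell remark: reading an LRU cache updates its recency bookkeeping, so a cache held behind a shared &self reference needs interior mutability. A minimal sketch of that pattern with std types only (a plain HashMap stands in for the cache; Counter and touch are illustrative names, not hashlru API):

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// Interior mutability: a method taking &self can still mutate the
// wrapped state through RefCell, with borrow rules checked at runtime.
struct Counter {
    hits: RefCell<HashMap<String, u32>>,
}

impl Counter {
    fn new() -> Self {
        Counter { hits: RefCell::new(HashMap::new()) }
    }

    // Note: &self, not &mut self -- mutation happens through RefCell,
    // just as an LRU read mutates recency state behind the scenes.
    fn touch(&self, key: &str) -> u32 {
        let mut map = self.hits.borrow_mut();
        let n = map.entry(key.to_string()).or_insert(0);
        *n += 1;
        *n
    }
}

fn main() {
    let c = Counter::new();
    c.touch("a");
    assert_eq!(c.touch("a"), 2);
    assert_eq!(c.touch("b"), 1);
    println!("ok");
}
```

RefCell is single-threaded; a cache shared across threads would need a Mutex or RwLock instead.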