| Crates.io | const-lru |
| lib.rs | const-lru |
| version | 1.0.0 |
| created_at | 2023-08-22 16:58:56.60471+00 |
| updated_at | 2023-10-06 15:24:18.359266+00 |
| description | A simple no_std, non-hashing, constant-capacity, constant-memory-usage LRU cache. |
| homepage | https://github.com/billythedummy/const-lru |
| repository | https://github.com/billythedummy/const-lru.git |
| max_upload_size | |
| id | 951185 |
| size | 106,163 |
A simple no_std, non-hashing, constant-capacity, constant-memory-usage LRU cache.
The data structure is backed by several const-generic arrays, resulting in all required memory being allocated up-front.
The LRU cache struct is laid out in a struct-of-arrays format: all keys are in one array, and all values are in another array.
A sorted index over the keys is also stored in the struct to allow for O(log N) lookup times using binary search.
LRU-ordering is implemented using a doubly-linked list, but with array indices instead of pointers. Following the struct-of-arrays format, all the next-link array indices are in one array while all the prev-link array indices are in another array.
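As a rough illustration of that linked-list-by-indices idea, here is a standalone sketch of the technique (the type, field, and method names below are made up for the example and are not the crate's actual internals):

```rust
/// Sketch of LRU ordering kept as a doubly-linked list of array indices.
/// Illustrative only; by convention, prev[head] and next[tail] are never read.
struct LruOrder<const CAP: usize> {
    next: [usize; CAP], // index of the next (less recently used) slot
    prev: [usize; CAP], // index of the previous (more recently used) slot
    head: usize,        // most-recently-used slot
    tail: usize,        // least-recently-used slot
}

impl<const CAP: usize> LruOrder<CAP> {
    /// Unlink slot `i` and relink it at the head, marking it most recently
    /// used. O(1): a handful of index writes, no pointers, no allocation.
    fn move_to_head(&mut self, i: usize) {
        if i == self.head {
            return;
        }
        let p = self.prev[i];
        if i == self.tail {
            // `i` was least recently used; its predecessor becomes the tail.
            self.tail = p;
        } else {
            // Splice `i` out of the middle of the list.
            let n = self.next[i];
            self.next[p] = n;
            self.prev[n] = p;
        }
        // Relink `i` in front of the current head.
        self.prev[self.head] = i;
        self.next[i] = self.head;
        self.head = i;
    }
}
```

Because the links are indices into fixed-size arrays rather than heap pointers, reordering on every access needs no allocator, which is what makes the no_std, constant-memory design work.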
To maximize space efficiency, the last optional generic parameter `I` specifies the index type. It can be set to an unsigned primitive integer type with a smaller bitwidth than `usize`, as long as it is wide enough to represent the cache's capacity.
```rust
use const_lru::ConstLru;
use core::mem;

assert_eq!(mem::align_of::<ConstLru<u8, u8, 255>>(), 8);
assert_eq!(mem::size_of::<ConstLru<u8, u8, 255>>(), 6656);
assert_eq!(mem::align_of::<ConstLru<u8, u8, 255, u8>>(), 1);
assert_eq!(mem::size_of::<ConstLru<u8, u8, 255, u8>>(), 1278);
```
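The roughly fivefold size difference is consistent with the layout described above: with `I = u8`, each of the five parallel 255-element arrays (keys, values, next links, prev links, sorted index) takes 255 bytes, and 5 × 255 = 1275 bytes plus a few bytes of bookkeeping gives 1278, whereas with the default `I = usize` the three link/index arrays grow to 8 bytes per element. This accounting is inferred from the description above rather than a documented layout guarantee.

A minimal usage sketch might look like the following, assuming a HashMap-like `new()` / `insert()` / `get()` surface; the exact method signatures and return types are not reproduced here, so check the crate's documentation before relying on them:

```rust
use const_lru::ConstLru;

fn main() {
    // Capacity 255, keyed by u8, with u8 links to keep the struct small
    // (same parameters as the size example above).
    let mut cache: ConstLru<u8, u32, 255, u8> = ConstLru::new();

    // Assumed HashMap-like insert; once all 255 slots are occupied, the
    // least-recently-used entry is evicted to make room.
    cache.insert(1, 111);
    cache.insert(2, 222);

    // Lookup binary-searches the sorted key index and marks the entry
    // as most recently used.
    if let Some(v) = cache.get(&1) {
        assert_eq!(*v, 111);
    }

    // The two iterators expose the ends of the recency order and of the
    // key order in O(1) (method names taken from the complexity notes below).
    assert!(cache.iter().next().is_some());
    assert!(cache.iter_key_order().next().is_some());
}
```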
Time complexities of the operations, where N is the number of elements:

| Operation | Time complexity |
| --------- | --------------- |
| get | O(log N) lookup using the sorted index |
| insert | O(log N) lookup using the sorted index + O(N) to modify the sorted index (bitwise copy of index types, similar to Vec) |
| remove | O(log N) lookup using the sorted index + O(N) to modify the sorted index (bitwise copy of index types, similar to Vec) |
| len | O(1), since it's stored in the struct |
| get most-recently-used entry | O(1), using .iter().next() |
| get least-recently-used entry | O(1), using .iter().next_back() |
| get entry with smallest key | O(1), using .iter_key_order().next() |
| get entry with largest key | O(1), using .iter_key_order().next_back() |

Most, if not all, general LRU cache implementations (including but not limited to associative-cache, caches, clru, hashlink, lru) rely on one or more hashmaps to give O(1) operation times. While fast, this makes them less well-suited to memory-constrained environments like embedded systems, since hashmaps may rehash and reallocate more memory.
ConstLru, on the other hand, is designed to have a fixed size known at compile time, but gives up the O(1) hashing-based lookup for an O(log N) binary-search-based lookup and O(N) inserts and removes.
uluru is another fixed-capacity LRU-cache implementation that uses less memory but has O(N) lookup times.