| Crates.io | cache-rs |
| lib.rs | cache-rs |
| version | 0.2.0 |
| created_at | 2025-08-04 05:45:53.14068+00 |
| updated_at | 2026-01-16 06:02:08.617863+00 |
| description | A high-performance, memory-efficient cache implementation supporting multiple eviction policies including LRU, LFU, LFUDA, SLRU and GDSF |
| homepage | https://github.com/sigsegved/cache-rs |
| repository | https://github.com/sigsegved/cache-rs |
| max_upload_size | |
| id | 1780345 |
| size | 437,062 |
A high-performance, memory-efficient cache library for Rust supporting multiple eviction algorithms with O(1) operations.
- no_std compatible: Works in embedded and resource-constrained environments
- Mutex/RwLock wrapping for concurrent access

Add to your Cargo.toml:
```toml
[dependencies]
cache-rs = "0.2.0"
```
Basic usage:
```rust
use cache_rs::LruCache;
use std::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(100).unwrap());
cache.put("key", "value");
assert_eq!(cache.get(&"key"), Some(&"value"));
```
Choose the right cache algorithm for your use case:
**LRU**: best for general-purpose caching with temporal locality.
```rust
use cache_rs::LruCache;
use std::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(100).unwrap());
cache.put("recent", "data");
```
**SLRU**: best for workloads that require scan resistance.
```rust
use cache_rs::SlruCache;
use std::num::NonZeroUsize;

// Total capacity: 100, protected segment: 20
let mut cache = SlruCache::new(
    NonZeroUsize::new(100).unwrap(),
    NonZeroUsize::new(20).unwrap(),
);
```
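To see what the protected segment buys you, here is a sketch of the scan pattern SLRU defends against. The promote-on-second-hit behavior described in the comments is classic SLRU and is assumed here rather than taken from cache-rs internals:

```rust
use cache_rs::SlruCache;
use std::num::NonZeroUsize;

let mut cache = SlruCache::new(
    NonZeroUsize::new(100).unwrap(),
    NonZeroUsize::new(20).unwrap(),
);

// A hot entry touched twice is (in classic SLRU) promoted to the
// protected segment.
cache.put(0u64, "hot");
cache.get(&0);

// A burst of one-shot keys only churns the probationary segment,
// so the hot entry should survive the scan.
for i in 1..=1_000u64 {
    cache.put(i, "scan");
}
```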
**LFU**: best for workloads with strong frequency patterns.
```rust
use cache_rs::LfuCache;
use std::num::NonZeroUsize;

let mut cache = LfuCache::new(NonZeroUsize::new(100).unwrap());
cache.put("frequent", "data");
```
**LFUDA**: best for long-running applications where access patterns change over time.
```rust
use cache_rs::LfudaCache;
use std::num::NonZeroUsize;

let mut cache = LfudaCache::new(NonZeroUsize::new(100).unwrap());
```
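The aging mechanism is what separates LFUDA from plain LFU. In the standard formulation (assumed here, not lifted from cache-rs internals), an entry's priority is its hit count plus a global cache age, and the age jumps to the priority of the last evicted entry, so stale-but-once-popular entries eventually lose to fresh ones:

```rust
// Standard LFUDA priority rule (illustrative, not cache-rs internals):
//   priority(entry) = frequency(entry) + cache_age
// where cache_age is updated to the priority of the last evicted entry.
fn priority(frequency: u64, cache_age: u64) -> u64 {
    frequency + cache_age
}

// An old entry with 50 historical hits at age 0 ...
assert_eq!(priority(50, 0), 50);
// ... is outranked by a new entry with 3 hits once the age reaches 48.
assert_eq!(priority(3, 48), 51);
```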
**GDSF**: best for variable-sized objects (images, files, documents).
```rust
use cache_rs::GdsfCache;
use std::num::NonZeroUsize;

let mut cache = GdsfCache::new(NonZeroUsize::new(1000).unwrap());
let image_data = vec![0u8; 250]; // placeholder payload
cache.put("image.jpg", image_data, 250); // key, value, size
```
| Algorithm | Avg. Get Latency | Use Case | Memory Overhead |
|---|---|---|---|
| LRU | ~887ns | General purpose | Low |
| SLRU | ~983ns | Scan resistance | Medium |
| GDSF | ~7.5µs | Size-aware | Medium |
| LFUDA | ~20.5µs | Aging workloads | Medium |
| LFU | ~22.7µs | Frequency-based | Medium |
Benchmarks were run on mixed workloads with a Zipf key distribution.
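For context, a Zipf-distributed workload concentrates most accesses on a few hot keys. A minimal sketch of such a driver, assuming the external rand (0.8) and rand_distr (0.4) crates; this is not the crate's actual benchmark harness, and the key count and skew exponent are arbitrary:

```rust
use cache_rs::LruCache;
use rand::prelude::*;
use rand_distr::Zipf;
use std::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(1_000).unwrap());
let mut rng = rand::thread_rng();
// 10_000 distinct keys, skew exponent 1.0 (assumed parameters)
let zipf = Zipf::new(10_000, 1.0).unwrap();

let mut hits = 0u64;
for _ in 0..100_000 {
    let key = zipf.sample(&mut rng) as u64;
    if cache.get(&key).is_some() {
        hits += 1;
    } else {
        cache.put(key, ());
    }
}
```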
Works out of the box in no_std environments:
```rust
#![no_std]
extern crate alloc;

use alloc::string::String;
use cache_rs::LruCache;
use core::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(10).unwrap());
cache.put(String::from("key"), "value");
```
The crate exposes the following feature flags:

- `hashbrown` (default): Use the hashbrown HashMap for better performance
- `nightly`: Enable nightly-only optimizations
- `std`: Enable standard library features (disabled by default)
- `concurrent`: Enable thread-safe concurrent cache types (uses parking_lot)

```toml
# Default: no_std + hashbrown (recommended for most use cases)
cache-rs = "0.2.0"
# Concurrent caching (recommended for multi-threaded apps)
cache-rs = { version = "0.2.0", features = ["concurrent"] }
# std + hashbrown (recommended for std environments)
cache-rs = { version = "0.2.0", features = ["std"] }
# std + concurrent + nightly optimizations
cache-rs = { version = "0.2.0", features = ["std", "concurrent", "nightly"] }
# no_std + nightly optimizations only
cache-rs = { version = "0.2.0", features = ["nightly"] }
# Only std::HashMap (not recommended - slower than hashbrown)
cache-rs = { version = "0.2.0", default-features = false, features = ["std"] }
For high-performance multi-threaded scenarios, cache-rs provides dedicated concurrent cache types behind the `concurrent` feature:
```toml
[dependencies]
cache-rs = { version = "0.2.0", features = ["concurrent"] }
```
| Type | Description |
|---|---|
| `ConcurrentLruCache` | Thread-safe LRU with segmented storage |
| `ConcurrentSlruCache` | Thread-safe Segmented LRU |
| `ConcurrentLfuCache` | Thread-safe LFU |
| `ConcurrentLfudaCache` | Thread-safe LFUDA |
| `ConcurrentGdsfCache` | Thread-safe GDSF |
```rust
use cache_rs::ConcurrentLruCache;
use std::sync::Arc;
use std::thread;

// Create a concurrent cache (default 16 segments)
let cache = Arc::new(ConcurrentLruCache::new(
    std::num::NonZeroUsize::new(10000).unwrap(),
));

// Access it from multiple threads
let handles: Vec<_> = (0..8).map(|i| {
    let cache = Arc::clone(&cache);
    thread::spawn(move || {
        for j in 0..1000 {
            let key = format!("thread{}-key{}", i, j);
            cache.put(key.clone(), i * 1000 + j);
            cache.get(&key);
        }
    })
}).collect();

for handle in handles {
    handle.join().unwrap();
}
```
Use `get_with` to avoid cloning large values by processing them in place:
```rust
use cache_rs::ConcurrentLruCache;
use std::num::NonZeroUsize;

let cache = ConcurrentLruCache::new(NonZeroUsize::new(100).unwrap());
cache.put("large_data".to_string(), vec![1u8; 1024]);

// Process the value without cloning it out of the cache
// (sum as u32 so 1024 ones cannot overflow a u8)
let sum: Option<u32> = cache.get_with(&"large_data".to_string(), |data| {
    data.iter().map(|&b| b as u32).sum()
});
```
Configure segment count based on your workload:
```rust
use cache_rs::ConcurrentLruCache;
use std::num::NonZeroUsize;

// More segments = better concurrency, higher memory overhead
let cache = ConcurrentLruCache::with_segments(
    NonZeroUsize::new(10000).unwrap(),
    32, // power of 2 recommended
);
```
| Segments | 8-Thread Mixed Workload |
|---|---|
| 1 | ~464µs |
| 8 | ~441µs |
| 16 | ~379µs |
| 32 | ~334µs (optimal) |
| 64 | ~372µs |
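As a rule of thumb (an assumption on our part, not guidance from the crate), start with a power-of-two segment count a small multiple of your expected thread count, then benchmark around it:

```rust
// Heuristic sketch: a power-of-two segment count around 4x the
// expected thread count; tune with your own benchmarks.
fn suggested_segments(expected_threads: usize) -> usize {
    (expected_threads * 4).max(1).next_power_of_two()
}

assert_eq!(suggested_segments(8), 32); // matches the optimum in the table above
```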
For simpler use cases, you can also wrap single-threaded caches manually:
```rust
use cache_rs::LruCache;
use std::num::NonZeroUsize;
use std::sync::{Arc, Mutex};

let cache = Arc::new(Mutex::new(
    LruCache::new(NonZeroUsize::new(100).unwrap()),
));

// Clone the Arc for use in other threads
let cache_clone = Arc::clone(&cache);
cache_clone.lock().unwrap().put("key", "value");
```
Caches can also be constructed with an explicit hasher via `with_hasher`:

```rust
use cache_rs::LruCache;
use std::collections::hash_map::RandomState;
use std::num::NonZeroUsize;

let cache = LruCache::with_hasher(
    NonZeroUsize::new(100).unwrap(),
    RandomState::new(),
);
```
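`RandomState` above is the standard library's default; the usual reason to reach for `with_hasher` is to swap in a faster hasher. A sketch using the external ahash crate (an assumed extra dependency, not bundled with cache-rs):

```rust
// Assumes `ahash` is added to Cargo.toml; not part of cache-rs itself.
use ahash::RandomState;
use cache_rs::LruCache;
use std::num::NonZeroUsize;

let mut cache = LruCache::with_hasher(
    NonZeroUsize::new(100).unwrap(),
    RandomState::new(),
);
cache.put("key", 1);
```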
GDSF weighs an entry's size against its access frequency, which suits mixed collections of small and large objects:

```rust
use cache_rs::GdsfCache;
use std::num::NonZeroUsize;

let mut cache = GdsfCache::new(NonZeroUsize::new(1000).unwrap());

// Cache different sized objects (placeholder payloads)
let image_bytes = vec![0u8; 500];
let video_bytes = vec![0u8; 2000];
cache.put("small.txt", b"content".to_vec(), 10);
cache.put("medium.jpg", image_bytes, 500);
cache.put("large.mp4", video_bytes, 2000);
// GDSF automatically considers size, frequency, and recency
```
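The classic GDSF priority (assumed here; cache-rs may differ in its internals) is H = L + F * C / S, where L is a global inflation clock, F the hit count, C the miss cost (often 1), and S the object size, so small, frequently hit objects are retained over large, rarely hit ones:

```rust
// Classic GDSF priority (illustrative, not cache-rs internals):
//   H = L + (F * C) / S
fn gdsf_priority(clock: f64, frequency: f64, cost: f64, size: f64) -> f64 {
    clock + frequency * cost / size
}

// With equal hit counts, the small object outranks the large one:
assert!(gdsf_priority(0.0, 5.0, 1.0, 10.0) > gdsf_priority(0.0, 5.0, 1.0, 2000.0));
```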
Run the included benchmarks to compare performance on your own hardware:

```bash
cargo bench
```

Representative results appear in the tables above.
Contributions welcome! Please see CONTRIBUTING.md for guidelines.
```bash
# Run all tests
cargo test --all

# Check formatting
cargo fmt --all -- --check

# Run clippy
cargo clippy --all-targets -- -D warnings

# Test no_std compatibility
cargo build --target thumbv6m-none-eabi --no-default-features --features hashbrown

# Run Miri for unsafe code validation (detects undefined behavior)
MIRIFLAGS="-Zmiri-ignore-leaks" cargo +nightly miri test --lib
```
See MIRI_ANALYSIS.md for a detailed Miri usage guide and analysis of findings.
Releases are tag-based. The CI workflow triggers a release only when a version tag is pushed.
```bash
# 1. Update version in Cargo.toml
# 2. Update CHANGELOG.md with release notes
# 3. Commit and push to main
git commit -am "Bump version to X.Y.Z"
git push origin main

# 4. Create an annotated tag (triggers release)
git tag -a vX.Y.Z -m "Release vX.Y.Z - Brief description"
git push origin vX.Y.Z
```
Tag conventions:

- Version tags follow `vMAJOR.MINOR.PATCH` (e.g., `v0.2.0`, `v1.0.0`)
- Use annotated tags (`git tag -a`), not lightweight tags

Note: Publishing requires the `CARGO_REGISTRY_TOKEN` secret to be configured in repository settings.
Licensed under the MIT License.
For security concerns, see SECURITY.md.