# tiered-cache

Repository: https://github.com/aeromilai/tiered-cache
A high-performance multi-tiered cache implementation in Rust with automatic sizing and async support.
Add this to your `Cargo.toml`:

```toml
[dependencies]
tiered-cache = "0.1.6"
```
```rust
use tiered_cache::{TieredCache, CacheConfig, TierConfig};

const MB: usize = 1024 * 1024;

#[tokio::main]
async fn main() {
    // Configure a cache with two tiers
    let config = CacheConfig {
        tiers: vec![
            TierConfig {
                total_capacity: 100 * MB,   // 100 MB total
                size_range: (0, 64 * 1024), // items from 0 to 64 KB
            },
            TierConfig {
                total_capacity: 900 * MB,    // 900 MB total
                size_range: (64 * 1024, MB), // items from 64 KB to 1 MB
            },
        ],
        update_channel_size: 1024,
    };

    // Create the cache
    let cache = TieredCache::<Vec<u8>, Vec<u8>>::new(config);

    // Basic operations
    cache.put(b"key1".to_vec(), vec![0; 1024]);
    if let Some(value) = cache.get(&b"key1".to_vec()) {
        println!("Retrieved value of size: {}", value.len());
    }

    // Async get_or_update: the future supplies the value on a cache miss
    let value = cache.get_or_update(b"key2".to_vec(), async {
        None // return Some(vec![0; 2048]) here to insert a value on miss
    }).await;
}
```
The cache is configured using tiers, where each tier has a `total_capacity` (the number of bytes the tier may hold) and a `size_range` (the range of item sizes, in bytes, that it accepts).
Items are automatically placed in the appropriate tier based on their size. The cache uses the `HeapSize` trait to accurately measure memory usage.
The implementation uses:

- `DashMap` for concurrent key-to-tier mapping
- `parking_lot` locks for low contention
- `SmallVec` for efficient tier storage

See the examples directory for more usage examples.
The `update_channel_size: 1024` field configures the capacity of a broadcast channel that notifies subscribers about cache updates. This is implemented using Tokio's broadcast channel.
In the code, specifically in `lib.rs`, we can see:

```rust
let (tx, _) = broadcast::channel(config.update_channel_size);
```
This channel is used to notify interested parties when values in the cache are updated. Users of the cache can subscribe to these updates using the `subscribe_updates()` method:

```rust
/// Subscribes to cache updates
#[inline]
pub fn subscribe_updates(&self) -> broadcast::Receiver<K> {
    self.update_tx.subscribe()
}
```
When values are updated in the cache (specifically in the `update_value` method), notifications are sent through this channel:

```rust
#[inline]
fn notify_update(&self, key: K) {
    let _ = self.update_tx.send(key);
}
```
A capacity of 1024 means the channel can buffer up to 1024 update notifications before older messages start getting dropped; a subscriber that falls behind receives a `Lagged` error reporting how many notifications it missed. This is useful when you want to monitor or react to cache updates. If you expect a very high rate of cache updates, you might want to increase this value; conversely, if you don't need update notifications, a smaller value saves memory.
Licensed under either of:
at your option.
Contributions are welcome! Please feel free to submit a Pull Request.