| Crates.io | cachelito-async |
| lib.rs | cachelito-async |
| version | 0.16.0 |
| created_at | 2025-11-10 15:35:03.081171+00 |
| updated_at | 2026-01-08 20:08:39.999583+00 |
| description | Async caching library with LRU/FIFO/LFU/ARC/Random/TLRU/W-TinyLFU eviction policies for async/await functions |
| homepage | https://github.com/josepdcs/cachelito |
| repository | https://github.com/josepdcs/cachelito |
| max_upload_size | |
| id | 1925721 |
| size | 172,117 |
A flexible and efficient async caching library for Rust async/await functions.
Key features:

- Caches only `Ok` values from `Result` types
- No extra `.await` needed for cache operations
- Conditional caching via `cache_if` predicates (v0.14.0)

Add this to your `Cargo.toml`:
```toml
[dependencies]
cachelito-async = "0.16"
tokio = { version = "1", features = ["full"] }
```
```rust
use cachelito_async::cache_async;
use std::time::Duration;

#[cache_async]
async fn expensive_operation(x: u32) -> u32 {
    tokio::time::sleep(Duration::from_secs(1)).await;
    x * 2
}

#[tokio::main]
async fn main() {
    // First call: sleeps for 1 second
    let _result = expensive_operation(5).await;
    // Second call: returns immediately from cache
    let _result = expensive_operation(5).await;
}
```
```rust
use cachelito_async::cache_async;

#[cache_async]
async fn fetch_user(id: u64) -> User {
    database::get_user(id).await
}
```
```rust
use cachelito_async::cache_async;

#[cache_async(limit = 100, policy = "lru")]
async fn fetch_data(key: String) -> Data {
    // Only 100 entries cached
    // Least recently used entries evicted first
    api::fetch(&key).await
}
```
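To build intuition for what `policy = "lru"` does, here is a minimal single-threaded sketch of LRU eviction: each access moves a key to the back of a queue, and when the cache is full the key at the front (least recently used) is evicted. This is an illustration only, not the crate's actual (async, concurrent) implementation.

```rust
use std::collections::{HashMap, VecDeque};

/// Minimal LRU cache sketch: `order` tracks recency, front = least recent.
struct LruCache {
    limit: usize,
    map: HashMap<String, String>,
    order: VecDeque<String>,
}

impl LruCache {
    fn new(limit: usize) -> Self {
        Self { limit, map: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&mut self, key: &str) -> Option<String> {
        if self.map.contains_key(key) {
            // Refresh recency: move the key to the back of the queue.
            self.order.retain(|k| k != key);
            self.order.push_back(key.to_string());
        }
        self.map.get(key).cloned()
    }

    fn put(&mut self, key: String, value: String) {
        if self.map.len() >= self.limit && !self.map.contains_key(&key) {
            // At capacity: evict the least recently used entry.
            if let Some(oldest) = self.order.pop_front() {
                self.map.remove(&oldest);
            }
        }
        self.order.retain(|k| k != &key);
        self.order.push_back(key.clone());
        self.map.insert(key, value);
    }
}

fn main() {
    let mut cache = LruCache::new(2);
    cache.put("a".into(), "1".into());
    cache.put("b".into(), "2".into());
    cache.get("a");                    // "a" is now most recently used
    cache.put("c".into(), "3".into()); // evicts "b", the least recently used
    assert!(cache.get("b").is_none());
    assert_eq!(cache.get("a").as_deref(), Some("1"));
    println!("ok");
}
```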
```rust
use cachelito_async::cache_async;

#[cache_async(ttl = 60)]
async fn get_weather(city: String) -> Weather {
    // Cache expires after 60 seconds
    weather_api::fetch(&city).await
}
```
```rust
use cachelito_async::cache_async;

#[cache_async(limit = 50)]
async fn api_call(endpoint: String) -> Result<Response, Error> {
    // Only successful responses are cached
    // Errors are not cached and always re-executed
    make_request(&endpoint).await
}
```
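The cache-only-`Ok` behavior can be sketched with a plain `HashMap`: a hit returns immediately, a miss runs the operation, and only successful results are stored so that failures are retried next time. `lookup` here is a hypothetical fallible operation, not part of the crate.

```rust
use std::collections::HashMap;

// Hypothetical fallible operation: fails for an empty key.
fn lookup(key: &str) -> Result<String, String> {
    if key.is_empty() {
        Err("empty key".to_string())
    } else {
        Ok(format!("value-for-{key}"))
    }
}

/// Calls `lookup` through a cache, storing only `Ok` results so that
/// failed calls are always re-executed on the next invocation.
fn cached_lookup(cache: &mut HashMap<String, String>, key: &str) -> Result<String, String> {
    if let Some(hit) = cache.get(key) {
        return Ok(hit.clone());
    }
    match lookup(key) {
        Ok(value) => {
            cache.insert(key.to_string(), value.clone());
            Ok(value)
        }
        // Errors are returned but never inserted into the cache.
        Err(e) => Err(e),
    }
}

fn main() {
    let mut cache = HashMap::new();
    assert!(cached_lookup(&mut cache, "user:1").is_ok());
    assert!(cached_lookup(&mut cache, "").is_err());
    assert_eq!(cache.len(), 1); // only the Ok result was stored
    println!("ok");
}
```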
Track cache performance with built-in statistics:
```rust
use cachelito_async::{cache_async, stats_registry};

#[cache_async]
async fn compute(x: u32) -> u32 {
    x * x
}

#[cache_async(name = "my_cache")]
async fn custom(x: u32) -> u32 {
    x + 10
}

#[tokio::main]
async fn main() {
    // Make some calls
    compute(1).await;
    compute(1).await; // cache hit
    compute(2).await;

    // Get statistics
    if let Some(stats) = stats_registry::get("compute") {
        println!("Hits: {}", stats.hits());
        println!("Misses: {}", stats.misses());
        println!("Hit rate: {:.2}%", stats.hit_rate() * 100.0);
    }

    // List all caches
    for name in stats_registry::list() {
        println!("Cache: {}", name);
    }
}
```
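Hit/miss counters like these can be kept lock-free with `AtomicU64`. A minimal sketch of the idea, with illustrative names that are not the crate's API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Lock-free hit/miss counters: increments never take a mutex.
#[derive(Default)]
struct CacheStats {
    hits: AtomicU64,
    misses: AtomicU64,
}

impl CacheStats {
    fn record_hit(&self) {
        self.hits.fetch_add(1, Ordering::Relaxed);
    }
    fn record_miss(&self) {
        self.misses.fetch_add(1, Ordering::Relaxed);
    }
    fn hit_rate(&self) -> f64 {
        let hits = self.hits.load(Ordering::Relaxed) as f64;
        let total = hits + self.misses.load(Ordering::Relaxed) as f64;
        if total == 0.0 { 0.0 } else { hits / total }
    }
}

fn main() {
    let stats = CacheStats::default();
    stats.record_miss();
    stats.record_hit();
    stats.record_hit();
    assert_eq!(stats.hit_rate(), 2.0 / 3.0);
    println!("hit rate: {:.2}%", stats.hit_rate() * 100.0);
}
```

Note that `record_hit`/`record_miss` take `&self`, so counters can be updated through a shared reference from many tasks at once.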
Statistics Features:
- Custom cache names via the `name` attribute
- Lock-free counters using `AtomicU64`

Fine-tune the balance between recency and frequency in TLRU eviction decisions:
```rust
use cachelito_async::cache_async;

// Low frequency_weight (0.3) - emphasizes recency
// Good for time-sensitive data where freshness matters more
#[cache_async(policy = "tlru", limit = 100, ttl = 300, frequency_weight = 0.3)]
async fn fetch_realtime_prices(symbol: String) -> f64 {
    // Recent entries are prioritized over frequently accessed ones
    stock_api::get_price(&symbol).await
}

// High frequency_weight (1.5) - emphasizes frequency
// Good for popular content that should stay cached
#[cache_async(policy = "tlru", limit = 100, ttl = 300, frequency_weight = 1.5)]
async fn fetch_trending_posts(topic: String) -> Vec<Post> {
    // Popular entries remain cached longer despite age
    database::get_trending(&topic).await
}

// Default (omit frequency_weight) - balanced approach
#[cache_async(policy = "tlru", limit = 100, ttl = 300)]
async fn fetch_user_data(user_id: u64) -> UserData {
    // Balanced between recency and frequency
    api::get_user(user_id).await
}
```
Frequency Weight Guide:
- `< 1.0` → Emphasize recency (time-sensitive data: stock prices, news)
- `= 1.0` (default) → Balanced (general-purpose caching)
- `> 1.0` → Emphasize frequency (popular content: trending posts, hot products)

All options can be combined:

```rust
use cachelito_async::cache_async;

#[cache_async(limit = 100, policy = "lru", ttl = 300)]
async fn complex_operation(x: i32, y: i32) -> Result<i32, Error> {
    // - Max 100 entries
    // - LRU eviction policy
    // - 5 minute TTL
    // - Only Ok values cached
    expensive_computation(x, y).await
}
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `limit` | `usize` | unlimited | Maximum number of entries in cache |
| `policy` | `"fifo"` \| `"lru"` \| `"lfu"` \| `"arc"` \| `"random"` \| `"tlru"` | `"fifo"` | Eviction policy when limit is reached |
| `ttl` | `u64` | none | Time-to-live in seconds |
| `frequency_weight` | `f64` | `1.0` | Weight factor for frequency in TLRU (v0.15.0) |
| `name` | `String` | function name | Custom cache identifier |
| `max_memory` | `String` | none | Maximum memory usage (e.g., "100MB") |
| `tags` | `[String]` | none | Tags for group invalidation |
| `events` | `[String]` | none | Events that trigger invalidation |
| `dependencies` | `[String]` | none | Cache dependencies |
| `invalidate_on` | function | none | Function to check if entry should be invalidated |
| `cache_if` | function | none | Function to determine if result should be cached |
The `frequency_weight` parameter controls TLRU scoring: `score = frequency^weight × position × age_factor`

- `frequency_weight < 1.0`: Emphasize recency (good for time-sensitive data)
- `frequency_weight > 1.0`: Emphasize frequency (good for popular content)

Policy Comparison:
| Policy | Eviction | Cache Hit | Use Case |
|---|---|---|---|
| LRU | O(1) | O(n) | Recent access matters |
| FIFO | O(1) | O(1) | Simple predictable caching |
| LFU | O(n) | O(1) | Frequency patterns matter |
| ARC | O(n) | O(n) | Mixed workloads, adaptive |
| Random | O(1) | O(1) | Baseline benchmarks |
| TLRU | O(n) | O(n) | Time-sensitive with TTL |
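As a rough illustration of the TLRU score `frequency^weight × position × age_factor`, the sketch below shows how the weight tips an eviction decision between a popular-but-old entry and a fresh one-hit entry. The `position` and `age_factor` values are made up for illustration; the crate's normalization may differ.

```rust
/// TLRU-style score as described above: higher scores survive eviction.
fn tlru_score(frequency: f64, weight: f64, position: f64, age_factor: f64) -> f64 {
    frequency.powf(weight) * position * age_factor
}

fn main() {
    // An old-but-popular entry (frequency 10, aged) vs. a fresh one-hit entry.
    let popular_old = |w: f64| tlru_score(10.0, w, 0.5, 0.4);
    let fresh_new = |w: f64| tlru_score(1.0, w, 1.0, 1.0);

    // weight < 1.0 dampens frequency: the fresh entry scores higher.
    assert!(fresh_new(0.3) > popular_old(0.3));
    // weight > 1.0 amplifies frequency: the popular entry scores higher.
    assert!(popular_old(1.5) > fresh_new(1.5));
    println!("ok");
}
```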
No extra `.await` is needed for cache lookups. All caches are thread-safe and can be safely shared across multiple tasks and threads. The underlying DashMap provides excellent concurrent performance without traditional locks.
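The shared-across-threads behavior can be sketched with std only. Here a `Mutex<HashMap>` stands in for the crate's lock-free `DashMap` purely to show the visible property: every thread writes into one global cache, and all entries are visible afterwards.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

/// Spawns `threads` workers that each insert `per_thread` distinct keys
/// into one shared map, then returns the final entry count.
fn fill_cache(threads: u32, per_thread: u32) -> usize {
    let cache: Arc<Mutex<HashMap<u32, u32>>> = Arc::new(Mutex::new(HashMap::new()));

    let handles: Vec<_> = (0..threads)
        .map(|t| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                for i in 0..per_thread {
                    // Every thread writes into the same shared cache.
                    cache.lock().unwrap().insert(t * per_thread + i, i);
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    let len = cache.lock().unwrap().len();
    len
}

fn main() {
    // 4 threads x 100 distinct keys all land in the single shared cache.
    assert_eq!(fill_cache(4, 100), 400);
    println!("ok");
}
```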
| Feature | cachelito | cachelito-async |
|---|---|---|
| Functions | Sync | Async |
| Storage | Thread-local or Global (RwLock) | Global (DashMap) |
| Concurrency | Mutex/RwLock | Lock-free |
| Scope | Thread or Global | Always Global |
| Best for | CPU-bound, sync code | I/O-bound, async code |
- `async_basic.rs` - Basic async caching
- `async_lru.rs` - LRU eviction policy
- `async_tlru.rs` - TLRU (Time-aware LRU) eviction policy with concurrent operations
- `async_concurrent.rs` - Concurrent task access
- `async_stats.rs` - Cache statistics tracking

Run examples with:
```bash
cargo run --example async_basic
cargo run --example async_lru
cargo run --example async_tlru
cargo run --example async_concurrent
cargo run --example async_stats
```
Type requirements:

- `Debug` for key generation
- `Clone` for cache storage

Licensed under the Apache License, Version 2.0. See LICENSE for details.
- `cachelito` - Sync version for regular functions
- `cachelito-core` - Core caching primitives
- `cachelito-macros` - Sync procedural macros