| Crates.io | cachelito-async-macros |
| lib.rs | cachelito-async-macros |
| version | 0.16.0 |
| created_at | 2025-11-10 15:34:44.640886+00 |
| updated_at | 2026-01-08 20:08:36.023858+00 |
| description | Async procedural macros for cachelito - automatic async caching attributes |
| homepage | https://github.com/josepdcs/cachelito |
| repository | https://github.com/josepdcs/cachelito |
| max_upload_size | |
| id | 1925719 |
| size | 36,148 |
A lightweight, thread-safe caching library for Rust that provides automatic memoization through procedural macros.
- Add the `#[cache]` attribute to any function or method
- Global caches by default, thread-local caches with `scope = "thread"`
- `parking_lot::RwLock` for global caches, enabling concurrent reads
- Use `scope = "thread"` for maximum performance (no synchronization)
- `Result` support: only `Result::Ok` values are cached
- Entry limits via `limit`, plus a `max_memory = "100MB"` attribute for memory-aware eviction
- Conditional caching with `cache_if` predicate functions
- Statistics via the `stats` feature & `stats_registry`
- Async support through the `cachelito-async` crate (lock-free DashMap)
- Built on `parking_lot` for optimal performance

Add this to your Cargo.toml:
[dependencies]
cachelito = "0.16.0"
# Or with statistics:
# cachelito = { version = "0.16.0", features = ["stats"] }
Note: `cachelito-async` follows the same versioning as `cachelito` core (0.16.x).
[dependencies]
cachelito-async = "0.16.0"
tokio = { version = "1", features = ["full"] }
| Use Case | Crate | Macro | Best For |
|---|---|---|---|
| Sync functions | cachelito | #[cache] | CPU-bound computations |
| Async functions | cachelito-async | #[cache_async] | I/O-bound / network operations |
| Thread-local cache | cachelito | #[cache(scope = "thread")] | Per-thread isolated cache |
| Global shared cache | cachelito / cachelito-async | #[cache] / #[cache_async] | Cross-thread/task sharing |
| High concurrency | cachelito-async | #[cache_async] | Many concurrent async tasks |
| Statistics tracking | cachelito (v0.6.0+) | #[cache] + feature stats | Performance monitoring |
| Memory limits | cachelito (v0.10.0+) | #[cache(max_memory = "64MB")] | Large objects / controlled memory usage |
| Smart invalidation | cachelito (v0.12.0+) | #[cache(tags = ["user"])] | Tag/event-based cache clearing |
| Conditional caching | cachelito (v0.14.0+) | #[cache(cache_if = predicate)] | Cache only valid results |
| Time-sensitive data | cachelito (v0.15.0+) | #[cache(policy = "tlru")] | Data with expiration (weather, prices, etc.) |
| Maximum hit rates | cachelito (v0.16.0+) | #[cache(policy = "w_tinylfu")] | Mixed workloads, hot/cold data patterns |
Eviction Policies (all versions):
policy = "fifo" - First In, First Out (simple, O(1))policy = "lru" - Least Recently Used (default, good for most cases)policy = "lfu" - Least Frequently Used (v0.8.0+, popular items priority)policy = "arc" - Adaptive Replacement Cache (v0.9.0+, self-tuning)policy = "random" - Random Replacement (v0.11.0+, minimal overhead)policy = "tlru" - Time-aware LRU (v0.15.0+, combines time, frequency & recency, customizable with frequency_weight)policy = "w_tinylfu" - Windowed TinyLFU (v0.16.0+, excellent hit rates, configurable with window_ratio)Quick Decision:
- Caching sync functions: use `cachelito`
- Caching async functions: use `cachelito-async`
- Time-sensitive data: `policy = "tlru"`
- Maximum hit rates: `policy = "w_tinylfu"`

use cachelito::cache;
#[cache]
fn fibonacci(n: u32) -> u64 {
if n <= 1 {
return n as u64;
}
fibonacci(n - 1) + fibonacci(n - 2)
}
fn main() {
// First call computes the result
let result1 = fibonacci(10);
// Second call returns cached result instantly
let result2 = fibonacci(10);
assert_eq!(result1, result2);
}
The #[cache] attribute also works with methods:
use cachelito::cache;
use cachelito::DefaultCacheableKey;
#[derive(Debug, Clone)]
struct Calculator {
precision: u32,
}
impl DefaultCacheableKey for Calculator {}
impl Calculator {
#[cache]
fn compute(&self, x: f64, y: f64) -> f64 {
// Expensive computation
x.powf(y) * self.precision as f64
}
}
For complex types, you can implement custom cache key generation:
use cachelito::DefaultCacheableKey;
#[derive(Debug, Clone)]
struct Product {
id: u32,
name: String,
}
// Enable default cache key generation based on Debug
impl DefaultCacheableKey for Product {}
use cachelito::CacheableKey;
#[derive(Debug, Clone)]
struct User {
id: u64,
name: String,
}
// More efficient custom key implementation
impl CacheableKey for User {
fn to_cache_key(&self) -> String {
format!("user:{}", self.id)
}
}
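With the custom key in place, any `#[cache]`-annotated function taking a `User` builds its lookup key from `to_cache_key()`. A minimal sketch reusing the `User` type above (`expensive_score` is a hypothetical function, and the comment describes the idea rather than the exact key string the macro stores):

```rust
use cachelito::cache;

// Builds on the `User` type and `CacheableKey` impl defined above.
#[cache(limit = 100)]
fn expensive_score(user: User) -> u64 {
    // The lookup key for this call comes from User::to_cache_key() ("user:7"),
    // so changing only the name does not create a new cache entry.
    user.id * 31 + user.name.len() as u64
}

fn main() {
    let alice = User { id: 7, name: "Alice".to_string() };
    let first = expensive_score(alice.clone());
    let second = expensive_score(alice); // cache hit via the same "user:7" key
    assert_eq!(first, second);
}
```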
Functions returning Result<T, E> only cache successful results:
use cachelito::cache;
#[cache]
fn divide(a: i32, b: i32) -> Result<i32, String> {
if b == 0 {
Err("Division by zero".to_string())
} else {
Ok(a / b)
}
}
fn main() {
// Ok results are cached
let _ = divide(10, 2); // Computes and caches Ok(5)
let _ = divide(10, 2); // Returns cached Ok(5)
// Err results are NOT cached (will retry each time)
let _ = divide(10, 0); // Returns Err, not cached
let _ = divide(10, 0); // Computes again, returns Err
}
Control memory usage by setting cache limits and choosing an eviction policy:
use cachelito::cache;
// Cache with a limit of 100 entries using LRU eviction
#[cache(limit = 100, policy = "lru")]
fn expensive_computation(x: i32) -> i32 {
// When cache is full, least recently accessed entry is evicted
// Accessing a cached value moves it to the end of the queue
x * x
}
// LRU is the default policy, so this is equivalent:
#[cache(limit = 100)]
fn another_computation(x: i32) -> i32 {
x * x
}
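Memory-aware eviction (v0.10.0+) is also available through the `max_memory` attribute listed in the use-case table. A minimal sketch, assuming the attribute accepts human-readable size strings as shown in that table and in the v0.11.0 changelog example:

```rust
use cachelito::cache;

// Evict entries once the cache's estimated footprint exceeds 64 MB,
// independent of how many entries it holds.
#[cache(max_memory = "64MB", policy = "lru")]
fn load_report(report_id: u64) -> Vec<u8> {
    // Hypothetical expensive load that returns a large buffer
    vec![(report_id % 256) as u8; 4 * 1024 * 1024]
}
```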
use cachelito::cache;
// Cache with a limit of 100 entries using FIFO eviction
#[cache(limit = 100, policy = "fifo")]
fn expensive_computation(x: i32) -> i32 {
// When cache is full, oldest entry is evicted
x * x
}
use cachelito::cache;
// Cache with a limit of 100 entries using LFU eviction
#[cache(limit = 100, policy = "lfu")]
fn expensive_computation(x: i32) -> i32 {
// When cache is full, least frequently accessed entry is evicted
// Each access increments the frequency counter
x * x
}
use cachelito::cache;
// Cache with a limit of 100 entries using ARC eviction
#[cache(limit = 100, policy = "arc")]
fn expensive_computation(x: i32) -> i32 {
// Self-tuning cache that adapts between recency and frequency
// Combines the benefits of LRU and LFU automatically
// Best for mixed workloads with varying access patterns
x * x
}
use cachelito::cache;
// Cache with a limit of 100 entries using Random eviction
#[cache(limit = 100, policy = "random")]
fn expensive_computation(x: i32) -> i32 {
// When cache is full, a random entry is evicted
// Minimal overhead, useful for benchmarks and random access patterns
x * x
}
use cachelito::cache;
// Cache with TLRU policy and TTL
#[cache(limit = 100, policy = "tlru", ttl = 300)]
fn fetch_weather_data(city: String) -> WeatherData {
// Combines recency, frequency, and age factors
// Entries approaching TTL expiration are prioritized for eviction
// Score: frequency × position_weight × age_factor
fetch_from_api(city)
}
// TLRU without TTL behaves like ARC
#[cache(limit = 100, policy = "tlru")]
fn compute_expensive(n: u64) -> u64 {
// Without TTL, age_factor = 1.0 (behaves like ARC)
// Considers both frequency and recency
n * n
}
The frequency_weight parameter allows fine-tuning the balance between recency and frequency in TLRU eviction decisions:
use cachelito::cache;
// Low frequency_weight (< 1.0): Emphasize recency and age
// Good for time-sensitive data where freshness matters more than popularity
#[cache(
policy = "tlru",
limit = 100,
ttl = 300,
frequency_weight = 0.3
)]
fn fetch_realtime_stock_prices(symbol: String) -> StockPrice {
// Recent entries are prioritized over frequently accessed ones
// Fresh data is more important than popular data
// Use case: Real-time data, news feeds, live scores
api_client.get_current_price(symbol)
}
// High frequency_weight (> 1.0): Emphasize frequency
// Good for popular content that should stay cached despite age
#[cache(
policy = "tlru",
limit = 100,
ttl = 300,
frequency_weight = 1.5
)]
fn fetch_popular_articles(article_id: u64) -> Article {
// Frequently accessed entries remain cached longer
// Popular content is protected from eviction
// Use case: Popular posts, trending items, hot products
database.fetch_article(article_id)
}
// Default behavior (balanced): Omit frequency_weight
#[cache(policy = "tlru", limit = 100, ttl = 300)]
fn fetch_user_profile(user_id: u64) -> Profile {
// Balanced approach between recency and frequency
// Neither recency nor frequency dominates
// Use case: General-purpose caching
database.get_profile(user_id)
}
Frequency Weight Guidelines:
| Weight Value | Behavior | Best For | Example Use Cases |
|---|---|---|---|
| < 1.0 (e.g., 0.3) | Emphasize recency & age | Time-sensitive data | Real-time prices, news feeds, weather |
| 1.0 (default/omit) | Balanced approach | General-purpose | User profiles, generic queries |
| > 1.0 (e.g., 1.5) | Emphasize frequency | Popular content | Trending items, hot products, viral posts |
Formula: eviction_score = frequency^weight × position × age_factor
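As an illustration of how `frequency_weight` shifts that score, here is plain arithmetic applying the formula above to two hypothetical entries (not the library's internal code; lower scores are evicted first):

```rust
// Illustrative only: eviction_score = frequency^weight * position * age_factor
fn eviction_score(frequency: f64, weight: f64, position: f64, age_factor: f64) -> f64 {
    frequency.powf(weight) * position * age_factor
}

fn main() {
    // Entry A: accessed often (freq 10) but old (age_factor 0.2)
    // Entry B: accessed rarely (freq 2) but fresh (age_factor 0.9)
    let (pos_a, pos_b) = (0.5, 0.8);

    // weight = 0.3 favors freshness: A ~ 0.20, B ~ 0.89 -> A is evicted first
    println!("w=0.3  A={:.2} B={:.2}",
        eviction_score(10.0, 0.3, pos_a, 0.2),
        eviction_score(2.0, 0.3, pos_b, 0.9));

    // weight = 1.5 favors popularity: A ~ 3.16, B ~ 2.04 -> B is evicted first
    println!("w=1.5  A={:.2} B={:.2}",
        eviction_score(10.0, 1.5, pos_a, 0.2),
        eviction_score(2.0, 1.5, pos_b, 0.9));
}
```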
When to Use:
- Weight < 1.0 for time-sensitive data where freshness matters more than popularity
- Weight 1.0 (or omitted) for general-purpose caching
- Weight > 1.0 for popular content that should stay cached despite age
New in v0.16.0! W-TinyLFU is an advanced eviction policy that provides excellent hit rates through a two-segment architecture:
use cachelito::cache;
// Basic W-TinyLFU cache
#[cache(limit = 1000, policy = "w_tinylfu")]
fn fetch_user_data(user_id: u64) -> UserData {
// Window segment (20%): Recent items (FIFO)
// Protected segment (80%): Frequently accessed items (LFU)
// Admission control prevents cache pollution
database.fetch_user(user_id)
}
// Custom window ratio for more recency emphasis
#[cache(
limit = 1000,
policy = "w_tinylfu",
window_ratio = 0.3 // 30% window, 70% protected
)]
fn fetch_news_articles(article_id: u64) -> Article {
// Larger window = more emphasis on recent items
// Good for: news, social media, trending content
fetch_from_api(article_id)
}
// Small window for frequency-focused caching
#[cache(
limit = 1000,
policy = "w_tinylfu",
window_ratio = 0.1 // 10% window, 90% protected
)]
fn fetch_analytics_data(query_id: u64) -> QueryResult {
// Smaller window = more emphasis on frequency
// Good for: analytics queries, reference data, stable workloads
run_expensive_query(query_id)
}
How W-TinyLFU Works:
W-TinyLFU divides the cache into two segments:
Window Segment (20% by default):
- Holds newly admitted entries
- Evicted FIFO, capturing recency

Protected Segment (80% by default):
- Holds frequently accessed entries
- Evicted LFU, keeping popular items resident

Advantages:
- Higher hit rates than plain LRU on mixed workloads (typically 5-15% better)
- Protects against cache pollution from one-hit wonders
- Balances recency and frequency without manual tuning
Configuration:
window_ratio: Float between 0.01 and 0.99 (default: 0.20)
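The ratio simply determines how the configured `limit` is split between the two segments; an illustrative calculation (not the crate's internal code):

```rust
// Illustrative only: how a limit and window_ratio split into the two segments.
fn segment_sizes(limit: usize, window_ratio: f64) -> (usize, usize) {
    let window = ((limit as f64) * window_ratio).round() as usize;
    (window, limit - window)
}

fn main() {
    assert_eq!(segment_sizes(1000, 0.20), (200, 800)); // default split
    assert_eq!(segment_sizes(1000, 0.30), (300, 700)); // recency-heavy
    assert_eq!(segment_sizes(1000, 0.10), (100, 900)); // frequency-heavy
}
```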
Current Limitations (v0.16.0):
This is the initial implementation of W-TinyLFU. The following features will be added in future versions:
🔄 Count-Min Sketch admission policy (planned for v0.17.0)
🔄 Automatic periodic decay (planned for v0.17.0)
📊 Segment-specific metrics (planned for v0.17.0)
When to Use W-TinyLFU:
- Mixed workloads with both hot (frequently accessed) and cold (one-off) data
- When maximizing the hit rate matters more than minimal per-operation overhead
Policy Comparison:
| Policy | Evicts | Best For | Performance |
|---|---|---|---|
| LRU | Least recently accessed | Temporal locality (recent items matter) | O(n) on hit |
| FIFO | Oldest inserted | Simple, predictable behavior | O(1) |
| LFU | Least frequently accessed | Frequency patterns (popular items matter) | O(n) on evict |
| ARC | Adaptive (recency + frequency) | Mixed workloads, self-tuning | O(n) on evict/hit |
| Random | Randomly selected | Baseline benchmarks, random access | O(1) |
| TLRU | Low score (freq^weight × recency × age) | Time-sensitive data, customizable with frequency_weight | O(n) on evict/hit |
| W-TinyLFU | Window (FIFO) or Protected (LFU) | Highest hit rates, mixed workloads with hot/cold data | O(n) on evict |
Choosing the Right Policy:
- TLRU: use `frequency_weight` to fine-tune the recency vs frequency balance. Without TTL, TLRU behaves like ARC.
- W-TinyLFU: use `window_ratio` to tune the recency vs frequency emphasis.
- For the remaining policies, see the comparison table above.

Set automatic expiration times for cached entries:
use cachelito::cache;
// Cache entries expire after 60 seconds
#[cache(ttl = 60)]
fn fetch_user_data(user_id: u32) -> UserData {
// Entries older than 60 seconds are automatically removed
// when accessed
fetch_from_database(user_id)
}
// Combine TTL with limits and policies
#[cache(limit = 100, policy = "lru", ttl = 300)]
fn api_call(endpoint: &str) -> Result<Response, Error> {
// Max 100 entries, LRU eviction, 5 minute TTL
make_http_request(endpoint)
}
Benefits:
- Stale entries are removed automatically on access, with no manual invalidation
- TTL composes with `limit` and any eviction policy
By default, the cache is shared across all threads (global scope). Use scope = "thread" for thread-local caches where
each thread has its own independent cache:
use cachelito::cache;
// Global cache (default) - shared across all threads
#[cache(limit = 100)]
fn global_computation(x: i32) -> i32 {
// Cache IS shared across all threads
// Uses RwLock for thread-safe access
x * x
}
// Thread-local cache - each thread has its own cache
#[cache(limit = 100, scope = "thread")]
fn thread_local_computation(x: i32) -> i32 {
// Cache is NOT shared across threads
// No synchronization overhead
x * x
}
When to use global scope (default):
- You want cached results shared across all threads
- You want statistics tracking via `stats_registry`

When to use thread-local (`scope = "thread"`):
- Each thread should keep its own independent cache
- You want to avoid synchronization overhead entirely

Performance considerations:
- Global scope uses `RwLock` for synchronization and allows concurrent reads
- Thread-local scope has no locking at all

use cachelito::cache;
use std::thread;
#[cache(limit = 50)] // Global by default
fn expensive_api_call(endpoint: &str) -> String {
// This expensive call is cached globally
// All threads benefit from the same cache
format!("Response from {}", endpoint)
}
fn main() {
let handles: Vec<_> = (0..10)
.map(|i| {
thread::spawn(move || {
// All threads share the same cache
// First thread computes, others get cached result
expensive_api_call("/api/users")
})
})
.collect();
for handle in handles {
handle.join().unwrap();
}
}
The cache clones values on every get operation. For large values (big structs, vectors, strings), this can be
expensive. Wrap your return values in Arc<T> to share ownership without copying data:
use cachelito::cache;
#[derive(Clone, Debug)]
struct LargeData {
payload: Vec<u8>, // Could be megabytes of data
metadata: String,
}
#[cache(limit = 100)]
fn process_data(id: u32) -> LargeData {
LargeData {
payload: vec![0u8; 1_000_000], // 1MB of data
metadata: format!("Data for {}", id),
}
}
fn main() {
// First call: computes and caches (1MB allocation)
let data1 = process_data(42);
// Second call: clones the ENTIRE 1MB! (expensive)
let data2 = process_data(42);
}
use cachelito::cache;
use std::sync::Arc;
#[derive(Debug)]
struct LargeData {
payload: Vec<u8>,
metadata: String,
}
// Return Arc instead of the value directly
#[cache(limit = 100)]
fn process_data(id: u32) -> Arc<LargeData> {
Arc::new(LargeData {
payload: vec![0u8; 1_000_000], // 1MB of data
metadata: format!("Data for {}", id),
})
}
fn main() {
// First call: computes and caches Arc (1MB allocation)
let data1 = process_data(42);
// Second call: clones only the Arc pointer (cheap!)
// The 1MB payload is NOT cloned
let data2 = process_data(42);
// Both Arc point to the same underlying data
assert!(Arc::ptr_eq(&data1, &data2));
}
use cachelito::cache;
use std::sync::Arc;
#[derive(Debug)]
struct ParsedDocument {
title: String,
content: String,
tokens: Vec<String>,
word_count: usize,
}
// Cache expensive parsing operations
#[cache(limit = 50, policy = "lru", ttl = 3600)]
fn parse_document(file_path: &str) -> Arc<ParsedDocument> {
// Expensive parsing operation
let content = std::fs::read_to_string(file_path).unwrap();
let tokens: Vec<String> = content
.split_whitespace()
.map(|s| s.to_string())
.collect();
Arc::new(ParsedDocument {
title: extract_title(&content),
content,
word_count: tokens.len(),
tokens,
})
}
fn analyze_document(path: &str) {
// First access: parses file (expensive)
let doc = parse_document(path);
println!("Title: {}", doc.title);
// Subsequent accesses: returns Arc clone (cheap)
let doc2 = parse_document(path);
println!("Words: {}", doc2.word_count);
// The underlying ParsedDocument is shared, not cloned
}
Use Arc when:
- Cached values are large (big structs, vectors, megabytes of string or byte data)
- Cloning the value on every cache hit would be expensive

You don't need Arc when:
- Values are small and cheap to clone (primitives and other types implementing the Copy trait)

For maximum efficiency with multi-threaded applications:
use cachelito::cache;
use std::sync::Arc;
use std::thread;
#[cache(scope = "global", limit = 100, policy = "lru")]
fn fetch_user_profile(user_id: u64) -> Arc<UserProfile> {
// Expensive database or API call
Arc::new(UserProfile::fetch_from_db(user_id))
}
fn main() {
let handles: Vec<_> = (0..10)
.map(|i| {
thread::spawn(move || {
// All threads share the global cache
// Cloning Arc is cheap across threads
let profile = fetch_user_profile(42);
println!("User: {}", profile.name);
})
})
.collect();
for handle in handles {
handle.join().unwrap();
}
}
Benefits:
- All threads share a single cache, so each value is computed only once
- Cache hits clone only the Arc pointer, not the underlying data
Starting from version 0.5.0, Cachelito uses parking_lot for
synchronization in global scope caches. The implementation uses RwLock for the cache map and Mutex for the
eviction queue, providing optimal performance for read-heavy workloads.
RwLock Benefits (for the cache map):
- Multiple threads can read cached values concurrently
- A writer takes exclusive access only on inserts and evictions

parking_lot Advantages over std::sync:
- Faster, more compact locks than the standard library equivalents
- No lock poisoning, so lock calls return guards directly with no Result wrapping

GlobalCache Structure:
┌─────────────────────────────────────┐
│ map: RwLock<HashMap<...>> │ ← Multiple readers OR one writer
│ order: Mutex<VecDeque<...>> │ ← Always exclusive (needs modification)
└─────────────────────────────────────┘
Read Operation (cache hit):
Thread 1 ──┐
Thread 2 ──┼──> RwLock.read() ──> ✅ Concurrent, no blocking
Thread 3 ──┘
Write Operation (cache miss):
Thread 1 ──> RwLock.write() ──> ⏳ Exclusive access
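Putting the two diagrams together, a hand-written sketch of such a structure might look like the following (a simplified illustration using parking_lot, not the crate's actual internal type; a real LRU cache would also update `order` on hits):

```rust
use parking_lot::{Mutex, RwLock};
use std::collections::{HashMap, VecDeque};

// Simplified illustration of the layout described above.
struct GlobalCache<V> {
    // Many readers in parallel, or one writer at a time.
    map: RwLock<HashMap<String, V>>,
    // Eviction order is always locked exclusively when modified.
    order: Mutex<VecDeque<String>>,
}

impl<V: Clone> GlobalCache<V> {
    fn get(&self, key: &str) -> Option<V> {
        // Read lock: concurrent cache hits do not block each other.
        self.map.read().get(key).cloned()
    }

    fn insert(&self, key: String, value: V) {
        // Write lock: exclusive while inserting a new entry.
        self.map.write().insert(key.clone(), value);
        self.order.lock().push_back(key);
    }
}
```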
Performance comparison on concurrent cache access:
Mixed workload (8 threads, 100 operations, 90% reads / 10% writes):
Thread-Local Cache: 1.26ms (no synchronization baseline)
Global + RwLock: 1.84ms (concurrent reads)
Global + Mutex only: ~3.20ms (all operations serialized)
std::sync::RwLock: ~2.80ms (less optimized)
Improvement: RwLock is ~74% faster than Mutex for read-heavy workloads
Pure concurrent reads (20 threads, 100 reads each):
With RwLock: ~2ms (all threads read simultaneously)
With Mutex: ~40ms (threads wait in queue)
20x improvement for concurrent reads!
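The effect is easy to reproduce with plain threads and a `parking_lot::RwLock`; a minimal standalone sketch (timings vary by machine, and this measures the lock directly rather than going through cachelito):

```rust
use parking_lot::RwLock;
use std::collections::HashMap;
use std::sync::Arc;
use std::thread;
use std::time::Instant;

fn main() {
    let cache = Arc::new(RwLock::new(HashMap::from([("answer".to_string(), 42u64)])));

    let start = Instant::now();
    let handles: Vec<_> = (0..20)
        .map(|_| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                for _ in 0..100 {
                    // Read locks are shared: all 20 threads can hold one at once.
                    let _v = cache.read().get("answer").copied();
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    println!("2000 concurrent reads in {:?}", start.elapsed());
}
```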
You can run the included benchmarks to see the performance on your hardware:
# Run cache benchmarks (includes RwLock concurrent reads)
cd cachelito-core
cargo bench --bench cache_benchmark
# Run RwLock concurrent reads demo
cargo run --example rwlock_concurrent_reads
# Run parking_lot demo
cargo run --example parking_lot_performance
# Compare thread-local vs global
cargo run --example cache_comparison
The #[cache] macro generates code that:
- Stores thread-local caches in `thread_local!` statics holding a `RefCell<HashMap>` (global caches use a lock-protected `HashMap`)
- Tracks eviction order in a `VecDeque`
- Wraps values in a `CacheEntry` that records the insertion timestamp (used for TTL)
- Builds cache keys from the arguments via `CacheableKey::to_cache_key()`
- For `Result<T, E>` return types, only caches `Ok` values
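Conceptually, the expansion for a simple thread-local cache is close to the following hand-written equivalent (a rough sketch for intuition, not the literal code the macro emits):

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// Roughly what `#[cache(scope = "thread")] fn square(x: u32) -> u64` expands to.
fn square(x: u32) -> u64 {
    thread_local! {
        static CACHE: RefCell<HashMap<String, u64>> = RefCell::new(HashMap::new());
    }
    // Key built from the arguments (the real macro uses CacheableKey::to_cache_key()).
    let key = format!("{:?}", x);
    if let Some(hit) = CACHE.with(|c| c.borrow().get(&key).cloned()) {
        return hit; // cache hit: skip the function body
    }
    let value = (x as u64) * (x as u64); // original function body
    CACHE.with(|c| c.borrow_mut().insert(key, value));
    value
}
```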
Starting with version 0.7.0, Cachelito provides dedicated support for async/await functions through the cachelito-async crate.
[dependencies]
cachelito-async = "0.2.0"
tokio = { version = "1", features = ["full"] }
# or use async-std, smol, etc.
use cachelito_async::cache_async;
use std::time::Duration;
#[cache_async(limit = 100, policy = "lru", ttl = 60)]
async fn fetch_user(id: u64) -> Result<User, Error> {
// Expensive async operation (database, API call, etc.)
let user = database::get_user(id).await?;
Ok(user)
}
#[tokio::main]
async fn main() {
// First call: fetches from database (~100ms)
let user1 = fetch_user(42).await.unwrap();
// Second call: returns cached result (instant)
let user2 = fetch_user(42).await.unwrap();
assert_eq!(user1.id, user2.id);
}
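Because the shared cache is a concurrent DashMap, many tasks can call a cached function at the same time without blocking each other. A minimal sketch (`fetch_config` is a hypothetical function; tokio is assumed, as in the example above):

```rust
use cachelito_async::cache_async;
use std::time::Duration;

// Hypothetical slow async operation; the attribute usage mirrors the example above.
#[cache_async(limit = 100, ttl = 60)]
async fn fetch_config(region: String) -> String {
    tokio::time::sleep(Duration::from_millis(100)).await;
    format!("config for {}", region)
}

#[tokio::main]
async fn main() {
    // Many concurrent tasks requesting the same key.
    let tasks: Vec<_> = (0..50)
        .map(|_| tokio::spawn(fetch_config("eu-west".to_string())))
        .collect();
    for task in tasks {
        // Once a result is cached, subsequent calls return it without re-running the body.
        let _cfg = task.await.unwrap();
    }
}
```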
| Feature | Sync (#[cache]) | Async (#[cache_async]) |
|---|---|---|
| Scope | Global or Thread-local | Always Global |
| Storage | RwLock<HashMap> or RefCell<HashMap> | DashMap (lock-free) |
| Concurrency | parking_lot::RwLock | Lock-free concurrent |
| Best for | CPU-bound operations | I/O-bound async operations |
| Blocking | May block on lock | No blocking |
| Policies | FIFO, LRU | FIFO, LRU |
| TTL | ✅ Supported | ✅ Supported |
The async version uses DashMap instead of traditional locks because:
- Holding a regular lock across an `.await` point can block the async executor
- DashMap provides lock-free concurrent access, so many tasks can hit the cache simultaneously

See the cachelito-async README for more details and async-specific examples.
The library includes several comprehensive examples demonstrating different features:
# Basic caching with custom types (default cache key)
cargo run --example custom_type_default_key
# Custom cache key implementation
cargo run --example custom_type_custom_key
# Result type caching (only Ok values cached)
cargo run --example result_caching
# Cache limits with LRU policy
cargo run --example cache_limit
# LRU eviction policy
cargo run --example lru
# FIFO eviction policy
cargo run --example fifo
# Default policy (FIFO)
cargo run --example fifo_default
# TTL (Time To Live) expiration
cargo run --example ttl
# Global scope cache (shared across threads)
cargo run --example global_scope
# TLRU (Time-aware LRU) eviction policy with frequency_weight examples
cargo run --example tlru
# Async examples (requires cachelito-async)
cargo run --example async_basic --manifest-path cachelito-async/Cargo.toml
cargo run --example async_lru --manifest-path cachelito-async/Cargo.toml
cargo run --example async_concurrent --manifest-path cachelito-async/Cargo.toml
cargo run --example async_tlru --manifest-path cachelito-async/Cargo.toml # Includes frequency_weight demos
=== Testing LRU Cache Policy ===
Calling compute_square(1)...
Executing compute_square(1)
Result: 1
Calling compute_square(2)...
Executing compute_square(2)
Result: 4
Calling compute_square(3)...
Executing compute_square(3)
Result: 9
Calling compute_square(2)...
Result: 4 (should be cached)
Calling compute_square(4)...
Executing compute_square(4)
Result: 16
...
Total executions: 6
✅ LRU Policy Test PASSED
scope = "global", the cache is shared across all threads using a Mutex. This adds
synchronization overhead but allows cache sharing.limit parameter to control memory usage.CacheableKey::to_cache_key() method. The default implementation uses Debug
formatting, which may be slow for complex types. Consider implementing CacheableKey directly for better performance.Arc<T> to avoid
expensive clones. See the Performance with Large Values section for details.Available since v0.6.0 with the stats feature flag.
Track cache performance metrics including hit/miss rates and access counts. Statistics are automatically collected for global-scoped caches and can be queried programmatically.
Add the stats feature to your Cargo.toml:
[dependencies]
cachelito = { version = "0.6.0", features = ["stats"] }
Statistics are automatically tracked for global caches (default):
use cachelito::cache;
#[cache(limit = 100, policy = "lru")] // Global by default
fn expensive_operation(x: i32) -> i32 {
// Simulate expensive work
std::thread::sleep(std::time::Duration::from_millis(100));
x * x
}
fn main() {
// Make some calls
expensive_operation(5); // Miss - computes
expensive_operation(5); // Hit - cached
expensive_operation(10); // Miss - computes
expensive_operation(5); // Hit - cached
// Access statistics using the registry
#[cfg(feature = "stats")]
if let Some(stats) = cachelito::stats_registry::get("expensive_operation") {
println!("Total accesses: {}", stats.total_accesses());
println!("Cache hits: {}", stats.hits());
println!("Cache misses: {}", stats.misses());
println!("Hit rate: {:.2}%", stats.hit_rate() * 100.0);
println!("Miss rate: {:.2}%", stats.miss_rate() * 100.0);
}
}
Output:
Total accesses: 4
Cache hits: 2
Cache misses: 2
Hit rate: 50.00%
Miss rate: 50.00%
The stats_registry module provides centralized access to all cache statistics:
use cachelito::stats_registry;
fn main() {
// Get a snapshot of statistics for a function
if let Some(stats) = stats_registry::get("my_function") {
println!("Hits: {}", stats.hits());
println!("Misses: {}", stats.misses());
}
// Get direct reference (no cloning)
if let Some(stats) = stats_registry::get_ref("my_function") {
println!("Hit rate: {:.2}%", stats.hit_rate() * 100.0);
}
}
use cachelito::stats_registry;
fn main() {
// Get names of all registered cache functions
let functions = stats_registry::list();
for name in functions {
if let Some(stats) = stats_registry::get(&name) {
println!("{}: {} hits, {} misses", name, stats.hits(), stats.misses());
}
}
}
use cachelito::stats_registry;
fn main() {
// Reset stats for a specific function
if stats_registry::reset("my_function") {
println!("Statistics reset successfully");
}
// Clear all registrations (useful for testing)
stats_registry::clear();
}
The CacheStats struct provides the following metrics:
- hits() - Number of successful cache lookups
- misses() - Number of cache misses (computation required)
- total_accesses() - Total number of get operations
- hit_rate() - Ratio of hits to total accesses (0.0 to 1.0)
- miss_rate() - Ratio of misses to total accesses (0.0 to 1.0)
- reset() - Reset all counters to zero

Statistics are thread-safe and work correctly with concurrent access:
use cachelito::cache;
use std::thread;
#[cache(limit = 100)] // Global by default
fn compute(n: u32) -> u32 {
n * n
}
fn main() {
// Spawn multiple threads
let handles: Vec<_> = (0..5)
.map(|_| {
thread::spawn(|| {
for i in 0..20 {
compute(i);
}
})
})
.collect();
// Wait for completion
for handle in handles {
handle.join().unwrap();
}
// Check statistics
#[cfg(feature = "stats")]
if let Some(stats) = cachelito::stats_registry::get("compute") {
println!("Total accesses: {}", stats.total_accesses());
println!("Hit rate: {:.2}%", stats.hit_rate() * 100.0);
// Expected: ~80% hit rate since first thread computes,
// others find values in cache
}
}
Use statistics to monitor and optimize cache performance:
use cachelito::{cache, stats_registry};
#[cache(limit = 50, policy = "lru")] // Global by default
fn api_call(endpoint: &str) -> String {
// Expensive API call
format!("Data from {}", endpoint)
}
fn monitor_cache_health() {
#[cfg(feature = "stats")]
if let Some(stats) = stats_registry::get("api_call") {
let hit_rate = stats.hit_rate();
if hit_rate < 0.5 {
eprintln!("⚠️ Low cache hit rate: {:.2}%", hit_rate * 100.0);
eprintln!("Consider increasing cache limit or adjusting TTL");
} else if hit_rate > 0.9 {
println!("✅ Excellent cache performance: {:.2}%", hit_rate * 100.0);
}
println!("Cache stats: {} hits / {} total",
stats.hits(), stats.total_accesses());
}
}
Use the name attribute to give your caches custom identifiers in the statistics registry:
use cachelito::cache;
// API V1 - using custom name (global by default)
#[cache(limit = 50, name = "api_v1")]
fn fetch_data(id: u32) -> String {
format!("V1 Data for ID {}", id)
}
// API V2 - using custom name (global by default)
#[cache(limit = 50, name = "api_v2")]
fn fetch_data_v2(id: u32) -> String {
format!("V2 Data for ID {}", id)
}
fn main() {
// Make some calls
fetch_data(1);
fetch_data(1);
fetch_data_v2(2);
fetch_data_v2(2);
fetch_data_v2(3);
// Access statistics using custom names
#[cfg(feature = "stats")]
{
if let Some(stats) = cachelito::stats_registry::get("api_v1") {
println!("V1 hit rate: {:.2}%", stats.hit_rate() * 100.0);
}
if let Some(stats) = cachelito::stats_registry::get("api_v2") {
println!("V2 hit rate: {:.2}%", stats.hit_rate() * 100.0);
}
}
}
Benefits:
- Track several cached functions under stable, meaningful identifiers (e.g. "api_v1" vs "api_v2")
- Query and compare statistics per cache without relying on function names
Default behavior: If name is not provided, the function name is used as the identifier.
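For example, a cache declared without `name` shows up in the registry under its function name; a small sketch, assuming the `stats` feature is enabled as described above (`lookup_price` is a hypothetical function):

```rust
use cachelito::cache;

#[cache(limit = 10)] // no `name` attribute
fn lookup_price(sku: u32) -> f64 {
    sku as f64 * 1.21
}

fn main() {
    lookup_price(1);
    lookup_price(1);
    // Registered under the function name "lookup_price".
    #[cfg(feature = "stats")]
    if let Some(stats) = cachelito::stats_registry::get("lookup_price") {
        println!("hit rate: {:.2}%", stats.hit_rate() * 100.0);
    }
}
```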
Starting from version 0.12.0, Cachelito supports smart invalidation mechanisms beyond simple TTL expiration, providing fine-grained control over when and how cached entries are invalidated.
Cachelito supports three complementary invalidation strategies: tag-based, event-driven, and dependency-based invalidation.
Use tags to group related cache entries and invalidate them together:
use cachelito::{cache, invalidate_by_tag};
#[cache(
scope = "global",
tags = ["user_data", "profile"],
name = "get_user_profile"
)]
fn get_user_profile(user_id: u64) -> UserProfile {
// Expensive database query
fetch_user_from_db(user_id)
}
#[cache(
scope = "global",
tags = ["user_data", "settings"],
name = "get_user_settings"
)]
fn get_user_settings(user_id: u64) -> UserSettings {
fetch_settings_from_db(user_id)
}
// Later, when user data is updated:
invalidate_by_tag("user_data"); // Invalidates both functions
Trigger cache invalidation based on application events:
use cachelito::{cache, invalidate_by_event};
#[cache(
scope = "global",
events = ["user_updated", "permissions_changed"],
name = "get_user_permissions"
)]
fn get_user_permissions(user_id: u64) -> Vec<String> {
fetch_permissions_from_db(user_id)
}
// When a permission changes:
invalidate_by_event("permissions_changed");
// When user profile is updated:
invalidate_by_event("user_updated");
Create cascading invalidation when dependent caches change:
use cachelito::{cache, invalidate_by_dependency};
#[cache(scope = "global", name = "get_user")]
fn get_user(user_id: u64) -> User {
fetch_user_from_db(user_id)
}
#[cache(
scope = "global",
dependencies = ["get_user"],
name = "get_user_dashboard"
)]
fn get_user_dashboard(user_id: u64) -> Dashboard {
// This cache depends on get_user
build_dashboard(user_id)
}
// When the user cache changes:
invalidate_by_dependency("get_user"); // Invalidates get_user_dashboard
You can combine tags, events, and dependencies for maximum flexibility:
use cachelito::cache;
#[cache(
scope = "global",
tags = ["user_data", "dashboard"],
events = ["user_updated"],
dependencies = ["get_user_profile", "get_user_permissions"],
name = "get_user_dashboard"
)]
fn get_user_dashboard(user_id: u64) -> Dashboard {
// This cache can be invalidated by:
// - Tag: invalidate_by_tag("user_data")
// - Event: invalidate_by_event("user_updated")
// - Dependency: invalidate_by_dependency("get_user_profile")
build_dashboard(user_id)
}
Invalidate specific caches by their name:
use cachelito::invalidate_cache;
// Invalidate a specific cache function
if invalidate_cache("get_user_profile") {
println!("Cache invalidated successfully");
}
The invalidation API is simple and intuitive:
- invalidate_by_tag(tag: &str) -> usize - Returns the number of caches invalidated
- invalidate_by_event(event: &str) -> usize - Returns the number of caches invalidated
- invalidate_by_dependency(dependency: &str) -> usize - Returns the number of caches invalidated
- invalidate_cache(cache_name: &str) -> bool - Returns true if the cache was found and invalidated

For even more control, you can use custom check functions (predicates) to selectively invalidate cache entries based on runtime conditions:
Invalidate specific entries in a cache based on custom logic:
use cachelito::{cache, invalidate_with};
#[cache(scope = "global", name = "get_user", limit = 1000)]
fn get_user(user_id: u64) -> User {
fetch_user_from_db(user_id)
}
// Invalidate only users with ID > 1000
invalidate_with("get_user", |key| {
key.parse::<u64>().unwrap_or(0) > 1000
});
// Invalidate users based on a pattern
invalidate_with("get_user", |key| {
key.starts_with("admin_")
});
Apply a check function across all registered caches:
use cachelito::invalidate_all_with;
#[cache(scope = "global", name = "get_user")]
fn get_user(user_id: u64) -> User {
fetch_user_from_db(user_id)
}
#[cache(scope = "global", name = "get_product")]
fn get_product(product_id: u64) -> Product {
fetch_product_from_db(product_id)
}
// Invalidate all entries with numeric IDs >= 1000 across ALL caches
let count = invalidate_all_with(|_cache_name, key| {
key.parse::<u64>().unwrap_or(0) >= 1000
});
println!("Applied check function to {} caches", count);
Use any Rust logic in your check functions:
use cachelito::invalidate_with;
// Invalidate entries where ID is divisible by 30
invalidate_with("get_user", |key| {
key.parse::<u64>()
.map(|id| id % 30 == 0)
.unwrap_or(false)
});
// Invalidate entries matching a range
invalidate_with("get_product", |key| {
if let Ok(id) = key.parse::<u64>() {
id >= 100 && id < 1000
} else {
false
}
});
invalidate_with(cache_name: &str, check_fn: F) -> bool
- Invalidates the entries of the named cache for which check_fn(key) returns true
- Returns true if the cache was found and the check function was applied

invalidate_all_with(check_fn: F) -> usize
- Invalidates entries across all registered caches for which check_fn(cache_name, key) returns true
- Returns the number of caches the check function was applied to

For automatic validation on every cache access, you can specify an invalidation check function directly in the macro:
use cachelito::cache;
use std::time::{Duration, Instant};
#[derive(Clone)]
struct User {
id: u64,
name: String,
updated_at: Instant,
}
// Define invalidation check function
fn is_stale(_key: &String, value: &User) -> bool {
// Return true if entry should be invalidated (is stale)
value.updated_at.elapsed() > Duration::from_secs(3600)
}
// Use invalidation check as macro attribute
#[cache(
scope = "global",
name = "get_user",
invalidate_on = is_stale
)]
fn get_user(user_id: u64) -> User {
fetch_user_from_db(user_id)
}
// Check function is evaluated on EVERY cache access
let user = get_user(42); // Returns cached value only if !is_stale()
- The check function is evaluated every time get() is called
- Its signature is fn check_fn(key: &String, value: &T) -> bool
- Return true to invalidate: if the function returns true, the cached entry is considered stale and is recomputed
- Works with both global and thread scope

// Time-based staleness
fn is_older_than_5min(_key: &String, val: &CachedData) -> bool {
val.timestamp.elapsed() > Duration::from_secs(300)
}
// Key-based invalidation
fn is_admin_key(key: &String, _val: &Data) -> bool {
key.contains("admin") // Note: keys are stored with Debug format
}
// Value-based validation
fn has_invalid_data(_key: &String, val: &String) -> bool {
val.contains("ERROR") || val.is_empty()
}
// Complex conditions
fn needs_refresh(key: &String, val: &(u64, Instant)) -> bool {
let (count, timestamp) = val;
// Refresh if count > 1000 OR older than 1 hour
*count > 1000 || timestamp.elapsed() > Duration::from_secs(3600)
}
Cache keys are stored using Rust's Debug format ({:?}), which means string keys will have quotes. Use contains() instead of exact matching:
// ✅ Correct
fn check_admin(key: &String, _val: &T) -> bool {
key.contains("admin")
}
// ❌ Won't work (key is "\"admin_123\"" not "admin_123")
fn check_admin(key: &String, _val: &T) -> bool {
key.starts_with("admin")
}
See examples/smart_invalidation.rs and examples/named_invalidation.rs for complete working examples demonstrating all invalidation strategies.
cache_if (v0.14.0)

By default, all function results are cached. The cache_if attribute allows you to control when results should be cached based on custom predicates. This is useful for:
- Skipping empty results or None values
- Caching only successful API responses
- Caching based on value size or other custom criteria
use cachelito::cache;
// Only cache non-empty vectors
fn should_cache_non_empty(_key: &String, result: &Vec<String>) -> bool {
!result.is_empty()
}
#[cache(scope = "global", limit = 100, cache_if = should_cache_non_empty)]
fn fetch_items(category: String) -> Vec<String> {
// Simulate database query
match category.as_str() {
"electronics" => vec!["laptop".to_string(), "phone".to_string()],
"empty_category" => vec![], // This won't be cached!
_ => vec![],
}
}
fn main() {
// First call with "electronics" - computes and caches
let items1 = fetch_items("electronics".to_string());
// Second call - returns cached result
let items2 = fetch_items("electronics".to_string());
// First call with "empty_category" - computes but doesn't cache
let items3 = fetch_items("empty_category".to_string());
// Second call - computes again (not cached)
let items4 = fetch_items("empty_category".to_string());
}
Don't cache None values:
fn cache_some(_key: &String, result: &Option<User>) -> bool {
result.is_some()
}
#[cache(scope = "thread", cache_if = cache_some)]
fn find_user(id: u32) -> Option<User> {
database.find_user(id)
}
Only cache successful HTTP responses:
#[derive(Clone)]
struct ApiResponse {
status: u16,
body: String,
}
fn cache_success(_key: &String, response: &ApiResponse) -> bool {
response.status >= 200 && response.status < 300
}
#[cache(scope = "global", limit = 50, cache_if = cache_success)]
fn api_call(url: String) -> ApiResponse {
// Only 2xx responses will be cached
make_http_request(url)
}
Cache based on value size:
fn cache_if_large(_key: &String, data: &Vec<u8>) -> bool {
data.len() > 1024 // Only cache results larger than 1KB
}
#[cache(scope = "global", cache_if = cache_if_large)]
fn process_data(input: String) -> Vec<u8> {
expensive_processing(input)
}
Cache based on value criteria:
fn cache_if_positive(_key: &String, value: &i32) -> bool {
*value > 0
}
#[cache(scope = "thread", cache_if = cache_if_positive)]
fn compute(x: i32, y: i32) -> i32 {
x + y // Only positive results will be cached
}
The cache_if attribute also works with async functions:
use cachelito_async::cache_async;
fn should_cache_non_empty(_key: &String, result: &Vec<String>) -> bool {
!result.is_empty()
}
#[cache_async(limit = 100, cache_if = should_cache_non_empty)]
async fn fetch_items_async(category: String) -> Vec<String> {
// Async database query
fetch_from_db_async(category).await
}
When caching functions that return Result<T, E>, remember that:
- Without cache_if: only Ok values are cached (default behavior, Err is never cached)
- With cache_if: the predicate receives the full Result and can inspect both Ok and Err variants to decide whether to cache

fn cache_valid_ok(_key: &String, result: &Result<String, String>) -> bool {
matches!(result, Ok(data) if !data.is_empty())
}
#[cache(limit = 50, cache_if = cache_valid_ok)]
fn fetch_data(id: u32) -> Result<String, String> {
match id {
1..=10 => Ok(format!("Data {}", id)), // ✅ Cached
11..=20 => Ok(String::new()), // ❌ Not cached (empty)
_ => Err("Invalid ID".to_string()), // ❌ Not cached (Err)
}
}
- If cache_if is not specified, there is no performance impact
- Thread-local isolation remains available via scope = "thread", and statistics via stats_registry

See also:
- examples/conditional_caching.rs - Complete sync examples
- cachelito-async/examples/conditional_caching_async.rs - Async examples
- tests/conditional_caching_tests.rs - Test suite

For detailed API documentation, run:
cargo doc --no-deps --open
See CHANGELOG.md for a detailed history of changes.
🪟 W-TinyLFU (Windowed Tiny LFU) Policy!
Version 0.16.0 introduces the W-TinyLFU eviction policy, a state-of-the-art cache replacement algorithm that delivers excellent hit rates:
Key Features:
🪟 W-TinyLFU Policy - Two-segment architecture (window + protected) for optimal caching, with a configurable window_ratio for workload tuning
🎯 Superior Hit Rates - 5-15% better than traditional LRU on mixed workloads
🛡️ Cache Pollution Protection - Prevents one-hit wonders from evicting valuable data
⚙️ Configurable - Tune window_ratio to emphasize recency vs frequency
Basic Example:
use cachelito::cache;
// Basic W-TinyLFU cache
#[cache(limit = 1000, policy = "w_tinylfu")]
fn fetch_user_data(user_id: u64) -> UserData {
database.fetch_user(user_id)
}
// Custom window ratio for recency emphasis
#[cache(
limit = 1000,
policy = "w_tinylfu",
window_ratio = 0.3 // 30% window, 70% protected
)]
fn fetch_trending_content(id: u64) -> Content {
api_client.fetch(id)
}
How It Works:
W-TinyLFU splits the cache into two segments:
- Window segment (20% by default): recently admitted items, evicted FIFO
- Protected segment (80% by default): frequently accessed items, evicted LFU
This dual-segment approach provides excellent performance across various workload patterns.
Configuration Options:
window_ratio (0.01-0.99, default: 0.20) - Balance between recency and frequency
Current Status (v0.16.0):
This is the initial, fully functional implementation of W-TinyLFU. Future versions will add:
- Count-Min Sketch admission policy (planned for v0.17.0)
- Automatic periodic decay (planned for v0.17.0)
- Segment-specific metrics (planned for v0.17.0)
Examples:
- examples/w_tinylfu.rs - Complete demonstration with multiple scenarios
- tests/w_tinylfu_policy_tests.rs - Test suite

⏰ TLRU (Time-aware Least Recently Used) Policy!
Version 0.15.0 introduced the TLRU eviction policy, combining recency, frequency, and time-based factors for intelligent cache management:
New Features:
- TLRU eviction policy with score = frequency × position_weight × age_factor
- frequency_weight parameter to fine-tune the recency vs frequency balance

Quick Start:
use cachelito::cache;
// Time-aware caching with TLRU
#[cache(policy = "tlru", limit = 100, ttl = 300)]
fn fetch_weather(city: String) -> WeatherData {
// Entries approaching 5-minute TTL are prioritized for eviction
fetch_from_api(city)
}
// TLRU without TTL behaves like ARC
#[cache(policy = "tlru", limit = 50)]
fn compute_expensive(n: u64) -> u64 {
// Considers both frequency and recency
expensive_calculation(n)
}
// NEW: Fine-tune with frequency_weight
#[cache(policy = "tlru", limit = 100, ttl = 300, frequency_weight = 1.5)]
fn fetch_popular_content(id: u64) -> Content {
// frequency_weight > 1.0 emphasizes frequency over recency
// Popular entries stay cached longer
database.fetch(id)
}
How TLRU Works:
- Eviction score: frequency^weight × position_weight × age_factor
- frequency_weight < 1.0: emphasize recency (good for time-sensitive data)
- frequency_weight > 1.0: emphasize frequency (good for popular content)

🎯 Conditional Caching with cache_if!
Version 0.14.0 introduces conditional caching, giving you fine-grained control over when results should be cached based on custom predicates:
New Features:
- Conditional caching via cache_if predicates
- Predicates receive the key and the full result (including complete Result types)
- Result<T, E> types only cache Ok values by default
- A result is cached only when the predicate returns true

Quick Start:
use cachelito::cache;
// Only cache non-empty results
fn should_cache(_key: &String, result: &Vec<String>) -> bool {
!result.is_empty()
}
#[cache(scope = "global", limit = 100, cache_if = should_cache)]
fn fetch_items(category: String) -> Vec<String> {
// Empty results won't be cached
database.query(category)
}
// Default behavior: Result types only cache Ok values
#[cache(scope = "global", limit = 50)]
fn validate_email(email: String) -> Result<String, String> {
if email.contains('@') {
Ok(format!("Valid: {}", email)) // ✅ Cached
} else {
Err(format!("Invalid: {}", email)) // ❌ NOT cached
}
}
// Custom predicate for Result types
fn cache_only_ok(_key: &String, result: &Result<User, Error>) -> bool {
result.is_ok()
}
#[cache(scope = "global", cache_if = cache_only_ok)]
fn fetch_user(id: u32) -> Result<User, Error> {
// Only successful results are cached
api_client.get_user(id)
}
Common Use Cases:
- Skipping empty collections or None values
- Caching only successful API responses or otherwise valid results

See also: examples/conditional_caching.rs
🎯 Conditional Invalidation with Custom Check Functions!
Version 0.13.0 introduces powerful conditional invalidation, allowing you to selectively invalidate cache entries based on runtime conditions:
New Features:
- Manual conditional invalidation with invalidate_with and invalidate_all_with
- Automatic per-access invalidation via the invalidate_on = function_name attribute

Quick Start:
use cachelito::{cache, invalidate_with, invalidate_all_with};
// Named invalidation check function (evaluated on every access)
fn is_stale(_key: &String, value: &User) -> bool {
value.updated_at.elapsed() > Duration::from_secs(3600)
}
#[cache(scope = "global", name = "get_user", invalidate_on = is_stale)]
fn get_user(user_id: u64) -> User {
fetch_user_from_db(user_id)
}
// Manual conditional invalidation
invalidate_with("get_user", |key| {
key.parse::<u64>().unwrap_or(0) > 1000
});
// Global invalidation across all caches
invalidate_all_with(|_cache_name, key| {
key.parse::<u64>().unwrap_or(0) >= 1000
});
See also:
examples/conditional_invalidation.rs - Manual conditional invalidationexamples/named_invalidation.rs - Named invalidation check functions🔥 Smart Cache Invalidation!
Version 0.12.0 introduces intelligent cache invalidation mechanisms beyond simple TTL expiration:
New Features:
- Tag-based invalidation (invalidate_by_tag)
- Event-driven invalidation (invalidate_by_event)
- Dependency-based invalidation (invalidate_by_dependency)
- Name-based invalidation (invalidate_cache)

Quick Start:
use cachelito::{cache, invalidate_by_tag, invalidate_by_event};
// Tag-based grouping
#[cache(tags = ["user_data", "profile"], name = "get_user_profile")]
fn get_user_profile(user_id: u64) -> UserProfile {
fetch_from_db(user_id)
}
// Event-driven invalidation
#[cache(events = ["user_updated"], name = "get_user_settings")]
fn get_user_settings(user_id: u64) -> Settings {
fetch_settings(user_id)
}
// Invalidate all user_data caches
invalidate_by_tag("user_data");
// Invalidate on event
invalidate_by_event("user_updated");
See also: examples/smart_invalidation.rs
🎲 Random Replacement Policy!
Version 0.11.0 introduces the Random eviction policy for baseline benchmarking and simple use cases:
New Features:
- Random Replacement eviction policy with O(1) overhead
- Uses fastrand for fast, lock-free random selection
- Works with the limit, ttl, and max_memory attributes

Quick Start:
// Simple random eviction - O(1) performance
#[cache(policy = "random", limit = 1000)]
fn baseline_cache(x: u64) -> u64 { x * x }
// Random with memory limit
#[cache(policy = "random", max_memory = "100MB")]
fn random_with_memory(key: String) -> Vec<u8> {
vec![0u8; 1024]
}
When to Use Random:
- Baseline benchmarking against other policies
- Random or unpredictable access patterns where recency/frequency tracking adds no value
See the Cache Limits and Eviction Policies section for complete details.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.
format!("{:?}")stats_registry and how
they work