| Crates.io | oxcache |
| lib.rs | oxcache |
| version | 0.1.4 |
| created_at | 2025-12-29 16:51:34.777259+00 |
| updated_at | 2026-01-23 16:42:35.543725+00 |
| description | A high-performance multi-level cache library for Rust with L1 (memory) and L2 (Redis) caching. |
| homepage | |
| repository | https://github.com/kirky-x/oxcache |
| max_upload_size | |
| id | 2010915 |
| size | 2,130,660 |
Oxcache is a high-performance, production-grade two-level caching library for Rust, providing an L1 (Moka in-memory cache) + L2 (Redis distributed cache) architecture.
| Highlight | Description |
|---|---|
| Extreme Performance | L1 access in nanoseconds |
| Zero-Code Changes | One-line cache enablement |
| Auto Recovery | Automatic degradation on Redis faults |
| Multi-Instance Sync | Based on Redis Pub/Sub |
| Batch Optimization | Smart batch writes |
Add oxcache to your Cargo.toml:
[dependencies]
oxcache = "0.1.3"
Note: `tokio` and `serde` are already included by default. If you need minimal dependencies, you can use `oxcache = { version = "0.1.4", default-features = false }` and add them manually.

Features: To use the `#[cached]` macro, enable the `macros` feature: `oxcache = { version = "0.1.4", features = ["macros"] }`
# Full features (recommended)
oxcache = { version = "0.1.4", features = ["full"] }
# Core functionality only
oxcache = { version = "0.1.4", features = ["core"] }
# Minimal - L1 cache only
oxcache = { version = "0.1.4", features = ["minimal"] }
# Custom selection
oxcache = { version = "0.1.4", features = ["core", "macros", "metrics"] }
# Development with specific features
oxcache = { version = "0.1.4", features = [
    "l1-moka",     # L1 cache (Moka)
    "l2-redis",    # L2 cache (Redis)
    "macros",      # #[cached] macro
    "batch-write", # Optimized batch writing
    "metrics",     # Basic metrics
] }
| Tier | Features | Description |
|---|---|---|
| minimal | l1-moka, serialization, metrics | L1 cache only |
| core | minimal + l2-redis | L1 + L2 cache |
| full | core + all advanced features | Complete functionality |
Advanced Features (included in full):
- `macros` - `#[cached]` attribute macro
- `batch-write` - Optimized batch writing
- `wal-recovery` - Write-ahead log for durability
- `bloom-filter` - Cache penetration protection
- `rate-limiting` - DoS protection
- `database` - Database integration
- `cli` - Command-line interface
- `full-metrics` - OpenTelemetry integration

Create a config.toml file:
Important: To initialize from a config file, you need to enable both the `config-toml` and `confers` features: `oxcache = { version = "0.1.4", features = ["config-toml", "confers"] }`
[global]
default_ttl = 3600
health_check_interval = 30
serialization = "json"
enable_metrics = true
# Two-level cache (L1 + L2)
[services.user_cache]
cache_type = "two-level" # "l1" | "l2" | "two-level"
ttl = 600
[services.user_cache.l1]
max_capacity = 10000
ttl = 300 # L1 TTL must be <= L2 TTL
tti = 180
initial_capacity = 1000
[services.user_cache.l2]
mode = "standalone" # "standalone" | "sentinel" | "cluster"
connection_string = "redis://127.0.0.1:6379"
[services.user_cache.two_level]
write_through = true
promote_on_hit = true
enable_batch_write = true
batch_size = 100
batch_interval_ms = 50
# L1-only cache (memory only)
[services.session_cache]
cache_type = "l1"
ttl = 300
[services.session_cache.l1]
max_capacity = 5000
ttl = 300
tti = 120
# L2-only cache (Redis only)
[services.shared_cache]
cache_type = "l2"
ttl = 7200
[services.shared_cache.l2]
mode = "standalone"
connection_string = "redis://127.0.0.1:6379"
use oxcache::macros::cached;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, Debug)]
struct User {
    id: u64,
    name: String,
}

// One-line cache enable
#[cached(service = "user_cache", ttl = 600)]
async fn get_user(id: u64) -> Result<User, String> {
    // Simulate slow database query
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;
    Ok(User {
        id,
        name: format!("User {}", id),
    })
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize cache (from config file)
    oxcache::init_from_file("config.toml").await?;

    // First call: executes the function logic and caches the result (~100ms)
    let user = get_user(1).await?;
    println!("First call: {:?}", user);

    // Second call: returns directly from cache (~0.1ms)
    let cached_user = get_user(1).await?;
    println!("Cached call: {:?}", cached_user);
    Ok(())
}
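Under the hood, the macro wraps the function in a cache lookup. Here is a conceptual sketch of what it automates, written against the manual client API shown in the next example; the key scheme, the `fetch_user_from_db` helper, and the error plumbing are illustrative assumptions, not the macro's literal expansion:

use oxcache::{get_client, CacheOps};

// Hypothetical stand-in for the real data source.
async fn fetch_user_from_db(id: u64) -> Result<User, Box<dyn std::error::Error>> {
    Ok(User { id, name: format!("User {}", id) })
}

// Roughly what #[cached(service = "user_cache", ttl = 600)] automates.
async fn get_user_manually(id: u64) -> Result<User, Box<dyn std::error::Error>> {
    let client = get_client("user_cache")?;
    let key = format!("get_user:{}", id); // assumed key scheme

    // Cache hit: return without running the expensive body.
    let cached: Option<User> = client.get(&key).await?;
    if let Some(user) = cached {
        return Ok(user);
    }

    // Cache miss: run the real logic, then store the result with the TTL.
    let user = fetch_user_from_db(id).await?;
    client.set(&key, &user, Some(600)).await?;
    Ok(user)
}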
use oxcache::{get_client, CacheOps};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, Debug)]
struct MyData {
    value: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    oxcache::init_from_file("config.toml").await?;
    let client = get_client("user_cache")?;
    let my_data = MyData { value: "hello".into() };

    // Standard operation: write to both L1 and L2
    client.set("key", &my_data, Some(300)).await?;
    let data: MyData = client.get("key").await?.unwrap();
    println!("read back: {:?}", data);

    // Write to L1 only (temporary data)
    client.set_l1_only("temp_key", &my_data, Some(60)).await?;

    // Write to L2 only (shared data)
    client.set_l2_only("shared_key", &my_data, Some(3600)).await?;

    // Delete
    client.delete("key").await?;
    Ok(())
}
#[cached(service = "user_cache", ttl = 600)]
async fn get_user_profile(user_id: u64) -> Result<UserProfile, Error> {
    database::query_user(user_id).await
}

#[cached(
    service = "api_cache",
    ttl = 300,
    key = "api_{endpoint}_{version}"
)]
async fn fetch_api_data(endpoint: String, version: u32) -> Result<ApiResponse, Error> {
    http_client::get(&format!("/api/{}/{}", endpoint, version)).await
}
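For illustration, the `key` template interpolates the function's parameters into the cache key. Assuming the `{param}` placeholders behave like `format!` arguments, `fetch_api_data("users".into(), 2)` would be cached under a key that expands as follows:

// Assumed expansion of key = "api_{endpoint}_{version}".
let (endpoint, version) = ("users", 2);
let key = format!("api_{}_{}", endpoint, version);
assert_eq!(key, "api_users_2");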
#[cached(service = "session_cache", cache_type = "l1", ttl = 60)]
async fn get_user_session(session_id: String) -> Result<Session, Error> {
    session_store::load(session_id).await
}
graph TD
A["Application Code<br/>#[cached] Macro"] --> B["Cache Manager<br/>Service Registry + Health Monitor"]
B --> C[TwoLevelClient]
B --> D[L1OnlyClient]
B --> E[L2OnlyClient]
C --> F[L1 Cache<br/>Moka]
C --> G[L2 Cache<br/>Redis]
D --> F
E --> G
style A fill:#e1f5fe
style B fill:#f3e5f5
style C fill:#e8f5e8
style D fill:#fff3e0
style E fill:#fce4ec
style F fill:#f1f8e9
style G fill:#fdf2e9
- **L1**: In-process, high-speed cache using Moka's LRU/TinyLFU eviction strategy
- **L2**: Distributed, shared cache supporting Redis Sentinel/Cluster modes
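Conceptually, a two-level read checks L1 first and falls back to L2 on a miss, promoting the value into L1 so subsequent reads stay in-process (this is what the `promote_on_hit` option controls). A minimal sketch of that read path, using plain hash maps as stand-ins rather than oxcache's actual internals:

use std::collections::HashMap;

// Stand-ins for L1 (Moka) and L2 (Redis); the real caches add TTLs,
// eviction, serialization, and network I/O, omitted here for clarity.
fn get_two_level(
    key: &str,
    l1: &mut HashMap<String, String>,
    l2: &HashMap<String, String>,
) -> Option<String> {
    if let Some(v) = l1.get(key) {
        return Some(v.clone()); // L1 hit: nanosecond-scale, in-process
    }
    let v = l2.get(key)?.clone(); // L2 hit: one network round-trip
    l1.insert(key.to_string(), v.clone()); // promote_on_hit behavior
    Some(v)
}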
Test environment: M1 Pro, 16GB RAM, macOS, Redis 7.0
Note: Performance varies based on hardware, network conditions, and data size.
xychart-beta
title "Single-thread Latency Test (P99)"
x-axis ["L1 Cache", "L2 Cache", "Database"]
y-axis "Latency (ms)" 0 --> 60
bar [0.05, 3, 30]
line [0.05, 3, 30]
xychart-beta
title "Throughput Test (batch_size=100)"
x-axis ["L1 Operations", "L2 Single Write", "L2 Batch Write"]
y-axis "Throughput (K ops/sec)" 0 --> 8000
bar [7500, 75, 350]
Performance Summary: L1 hits (~0.05 ms P99) are roughly 60x faster than L2 hits (~3 ms) and about 600x faster than direct database queries (~30 ms), while batch writing improves L2 write throughput by roughly 4-5x over single writes.
Oxcache implements multiple security measures to protect against common attacks:
All user inputs are validated before being passed to Redis:
- Control characters (`\r`, `\n`, `\0`) that could enable Redis protocol injection attacks are rejected
- Dangerous commands (`FLUSHALL`, `FLUSHDB`, `KEYS`, `SHUTDOWN`, `DEBUG`, `CONFIG`, `SAVE`, `BGSAVE`, `MONITOR`) are blocked
- Wildcard (`*`) characters are rejected in keys

Long-running operations have timeout protection.
Distributed locks use cryptographically secure UUID v4 values automatically generated by the library, eliminating the risk of lock value prediction attacks.
Passwords in connection strings are redacted in logs by default to prevent credential leakage. Use `normalize_connection_string_with_redaction()` for secure logging.
Keys can be validated explicitly with the `validate_redis_key()` function.

For more details, see the Security Documentation.
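A usage sketch for the two helpers mentioned above; the exact module paths, signatures, and return types are assumptions here, so consult the Security Documentation for the real API:

// Assumed signatures: validate_redis_key(&str) -> Result<(), _> and
// normalize_connection_string_with_redaction(&str) -> String.
fn main() {
    // Reject control characters and dangerous patterns before a key
    // ever reaches Redis.
    if oxcache::validate_redis_key("user:42").is_ok() {
        println!("key is safe to use");
    }

    // Mask credentials before logging a connection string.
    let redacted = oxcache::normalize_connection_string_with_redaction(
        "redis://:supersecret@127.0.0.1:6379",
    );
    println!("connecting to {}", redacted); // password is redacted
}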
Pull Requests and Issues are welcome!
See CHANGELOG.md
This project is licensed under the MIT License. See the LICENSE file.
If this project helps you, please give a ⭐ Star to show support!
Made with ❤️ by Kirky.X