| Crates.io | http-cache-tower-server |
| lib.rs | http-cache-tower-server |
| version | 0.2.0 |
| created_at | 2025-11-23 01:05:14.401598+00 |
| updated_at | 2026-01-18 19:44:13.486345+00 |
| description | Server-side HTTP response caching middleware for Tower/Axum |
| homepage | https://http-cache.rs |
| repository | https://github.com/06chaynes/http-cache |
| max_upload_size | |
| id | 1945942 |
| size | 154,541 |
Server-side HTTP response caching middleware for Tower-based frameworks (Axum, Hyper, Tonic).
This crate provides Tower middleware for caching your server's HTTP responses to improve performance and reduce load. Unlike client-side caching, this middleware caches responses after your handlers execute, making it ideal for expensive operations like database queries or complex computations.
Use `http-cache-tower-server` when you want to cache your own application's responses; use `http-cache-tower` when you want to cache responses from external services your application calls:
| Crate | Purpose | Use Case |
|---|---|---|
| `http-cache-tower` | Client-side caching | Cache responses from external APIs you call |
| `http-cache-tower-server` | Server-side caching | Cache your own application's responses |
Important: If you're experiencing issues with path parameter extraction or routing when using http-cache-tower in a server application, you should use this crate instead. See Issue #121 for details.
cargo add http-cache-tower-server
By default, manager-cacache is enabled.
- `manager-cacache` (default): Enable the cacache disk-based cache backend
- `manager-moka`: Enable the moka in-memory cache backend

Basic usage with Axum:

use axum::{Router, routing::get, response::IntoResponse};
use http_cache_tower_server::ServerCacheLayer;
use http_cache::CACacheManager;
async fn expensive_handler() -> impl IntoResponse {
    // Simulate expensive operation
    tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;

    // Set cache control to cache for 60 seconds
    (
        [("cache-control", "max-age=60")],
        "This response is cached for 60 seconds"
    )
}
#[tokio::main]
async fn main() {
    // Create cache manager
    let manager = CACacheManager::new("./cache", false);

    // Create router with cache layer
    let app = Router::new()
        .route("/expensive", get(expensive_handler))
        .layer(ServerCacheLayer::new(manager));

    // Run server
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000")
        .await
        .unwrap();
    axum::serve(listener, app).await.unwrap();
}
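To see the cache in action, run the server and hit the endpoint twice from a separate client, watching the `x-cache` response header described below. A minimal client sketch, assuming the `reqwest` and `tokio` crates are added as dependencies (this is a hypothetical companion program, not part of the crate's API):

```rust
// The first request should report `x-cache: MISS`; the second should report
// `x-cache: HIT` and return almost immediately, because the handler (and its
// two-second sleep) is not executed for cached responses.
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    for _ in 0..2 {
        let resp = reqwest::get("http://localhost:3000/expensive").await?;
        println!("x-cache: {:?}", resp.headers().get("x-cache"));
    }
    Ok(())
}
```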
Responses include an x-cache header indicating cache status:
- `x-cache: HIT` → Response served from cache
- `x-cache: MISS` → Response generated by handler (may be cached)

The default keyer (`DefaultKeyer`) caches based on HTTP method and path:
use http_cache_tower_server::{ServerCacheLayer, DefaultKeyer};
let layer = ServerCacheLayer::new(manager);
// GET /users/123 → "GET /users/123"
// GET /users/456 → "GET /users/456"
`QueryKeyer` includes query parameters in the cache key:
use http_cache_tower_server::{ServerCacheLayer, QueryKeyer};
let layer = ServerCacheLayer::with_keyer(manager, QueryKeyer);
// GET /search?q=rust → "GET /search?q=rust"
// GET /search?q=http → "GET /search?q=http"
For advanced scenarios (authentication, content negotiation, etc.):
use http_cache_tower_server::{ServerCacheLayer, CustomKeyer};
use http::Request;
// Include user ID from headers in cache key
let keyer = CustomKeyer::new(|req: &Request<()>| {
    let user_id = req.headers()
        .get("x-user-id")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("anonymous");
    format!("{} {} user:{}", req.method(), req.uri().path(), user_id)
});
let layer = ServerCacheLayer::with_keyer(manager, keyer);
// GET /dashboard with x-user-id: 123 → "GET /dashboard user:123"
// GET /dashboard with x-user-id: 456 → "GET /dashboard user:456"
use http_cache_tower_server::{ServerCacheLayer, ServerCacheOptions};
use std::time::Duration;
let options = ServerCacheOptions {
    // Default TTL when no Cache-Control header is present
    default_ttl: Some(Duration::from_secs(60)),
    // Maximum TTL (even if the response specifies longer)
    max_ttl: Some(Duration::from_secs(3600)),
    // Minimum TTL (even if the response specifies shorter)
    min_ttl: Some(Duration::from_secs(10)),
    // Add X-Cache headers (HIT/MISS)
    cache_status_headers: true,
    // Maximum response body size to cache (128 MB)
    max_body_size: 128 * 1024 * 1024,
    // Cache responses without explicit Cache-Control
    cache_by_default: false,
    // Respect Vary header (currently extracted but not enforced)
    respect_vary: true,
};
let layer = ServerCacheLayer::new(manager)
    .with_options(options);
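As an example of how these options interact, here is a sketch that flips `cache_by_default` so responses carrying no `Cache-Control` header are still cached for the 60-second `default_ttl`; per the Cache-Control handling below, handlers can still opt out with `no-store` or `private`. The field values other than `cache_by_default` simply repeat the block above:

```rust
use http_cache::CACacheManager;
use http_cache_tower_server::{ServerCacheLayer, ServerCacheOptions};
use std::time::Duration;

let options = ServerCacheOptions {
    // Fall back to this TTL when a response has no Cache-Control header.
    default_ttl: Some(Duration::from_secs(60)),
    max_ttl: Some(Duration::from_secs(3600)),
    min_ttl: Some(Duration::from_secs(10)),
    cache_status_headers: true,
    max_body_size: 128 * 1024 * 1024,
    // The one change from the block above: cache responses even when the
    // handler sets no explicit Cache-Control header at all.
    cache_by_default: true,
    respect_vary: true,
};

let manager = CACacheManager::new("./cache", false);
let layer = ServerCacheLayer::new(manager).with_options(options);
```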
This middleware implements a shared cache per RFC 9111 (HTTP Caching).
Responses are cached when they have:
- `max-age=X` → Cached for X seconds
- `s-maxage=X` → Cached for X seconds (shared cache specific)
- `public` → Cached with the default TTL

Responses are never cached if they have:
- `no-store` → Prevents all caching
- `no-cache` → Requires revalidation (not supported)
- `private` → Only for private caches

When multiple directives are present:
1. `s-maxage` (shared cache specific) takes precedence
2. `max-age` (general directive)
3. `public` (uses the default TTL)

// Cached for 60 seconds
("cache-control", "max-age=60")
// Cached for 120 seconds (s-maxage overrides max-age for shared caches)
("cache-control", "max-age=60, s-maxage=120")
// Cached with default TTL
("cache-control", "public")
// Never cached
("cache-control", "no-store")
("cache-control", "private")
("cache-control", "no-cache")
Critical: Cached responses are served to ALL users. Never cache user-specific data without appropriate measures.
async fn public_page() -> impl IntoResponse {
    (
        [("cache-control", "max-age=300")],
        "Public content safe to cache"
    )
}
// Include user ID in cache key
let keyer = CustomKeyer::new(|req: &Request<()>| {
    let user_id = extract_user_id(req); // extract_user_id is your application's own helper
    format!("{} {} user:{}", req.method(), req.uri().path(), user_id)
});
// ❌ DANGEROUS: Will serve user123's data to user456!
async fn user_profile() -> impl IntoResponse {
    let user_data = get_current_user_data().await;
    (
        [("cache-control", "max-age=60")], // ❌ Don't do this!
        user_data
    )
}
// ✅ Safe: Won't be cached
async fn user_profile() -> impl IntoResponse {
    let user_data = get_current_user_data().await;
    (
        [("cache-control", "private")], // Won't be cached
        user_data
    )
}
Use `Cache-Control: private` for user-specific responses.

For responses that vary by `Accept-Language`:
let keyer = CustomKeyer::new(|req: &Request<()>| {
    let lang = req.headers()
        .get("accept-language")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("en");
    format!("{} {} lang:{}", req.method(), req.uri().path(), lang)
});
let layer = ServerCacheLayer::with_keyer(manager, keyer);
To cache only certain routes, you can gate the cache with middleware (or apply the layer to a sub-router, as sketched after this example):
use axum::{
    body::Body,
    http::{Method, Request},
    middleware::Next,
    response::Response,
};

async fn cache_middleware(
    req: Request<Body>,
    next: Next,
) -> Response {
    // Only cache GET requests to /api/*
    if req.method() == Method::GET && req.uri().path().starts_with("/api/") {
        // Route the request through a cached service here; in practice it is
        // usually simpler to apply ServerCacheLayer to a sub-router instead
        // (see the sketch below).
    }
    next.run(req).await
}
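A simpler way to scope caching, sketched below under the assumption that `ServerCacheLayer` composes like any other Tower layer (as the basic example above suggests), is to apply it only to a sub-router. The routes and handlers here are placeholders:

```rust
use axum::{response::IntoResponse, routing::get, Router};
use http_cache::CACacheManager;
use http_cache_tower_server::ServerCacheLayer;

// Placeholder handlers for illustration.
async fn list_users() -> impl IntoResponse {
    ([("cache-control", "max-age=60")], "users")
}

async fn health() -> impl IntoResponse {
    "ok"
}

fn app(manager: CACacheManager) -> Router {
    // Only the routes nested under /api pass through the cache layer.
    let cached_api = Router::new()
        .route("/users", get(list_users))
        .layer(ServerCacheLayer::new(manager));

    Router::new()
        .route("/health", get(health)) // never cached
        .nest("/api", cached_api)      // cached per Cache-Control
}
```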
async fn long_cache_handler() -> impl IntoResponse {
    (
        [("cache-control", "max-age=3600")], // 1 hour
        "Rarely changing content"
    )
}
async fn short_cache_handler() -> impl IntoResponse {
    (
        [("cache-control", "max-age=60")], // 1 minute
        "Frequently updated content"
    )
}
The middleware extracts Vary headers but does not currently enforce them during cache lookup. For content negotiation:
- Use a `CustomKeyer` that includes the relevant headers in the cache key (see the sketch at the end of this section), OR
- Use `Cache-Control: private` to prevent caching

The middleware does not check for Authorization headers in requests. Authenticated endpoints should either:
- Use `Cache-Control: private` (won't be cached), OR
- Use a `CustomKeyer` that includes a user/session ID

The `Expires` header is recognized but not currently parsed. Modern applications should use `Cache-Control` directives instead.
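As a concrete instance of the `CustomKeyer` workaround for the `Vary` limitation above, here is a keyer sketch that keeps responses for different `Accept-Encoding` values in separate cache entries. It mirrors the earlier keyer examples; the header choice and the `"identity"` fallback are illustrative:

```rust
use http::Request;
use http_cache::CACacheManager;
use http_cache_tower_server::{CustomKeyer, ServerCacheLayer};

// Include Accept-Encoding in the key so that, for example, gzip-encoded and
// identity responses never share a cache entry.
let keyer = CustomKeyer::new(|req: &Request<()>| {
    let encoding = req.headers()
        .get("accept-encoding")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("identity");
    format!("{} {} enc:{}", req.method(), req.uri().path(), encoding)
});

let manager = CACacheManager::new("./cache", false);
let layer = ServerCacheLayer::with_keyer(manager, keyer);
```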
See the examples directory:
- `axum_basic.rs` - Basic usage with Axum

Run with:
cargo run --example axum_basic --features manager-cacache
The minimum supported Rust version (MSRV) is 1.82.0.
Contributions are welcome! Please see the main repository for contribution guidelines.
Licensed under either of

- Apache License, Version 2.0
- MIT license

at your option.