| Crates.io | tracing-throttle |
| lib.rs | tracing-throttle |
| version | 0.4.1 |
| created_at | 2025-11-25 05:26:35.582635+00 |
| updated_at | 2026-01-14 14:43:54.116868+00 |
| description | High-performance log deduplication and rate limiting for the tracing ecosystem |
| homepage | |
| repository | https://github.com/nootr/tracing-throttle |
| max_upload_size | |
| id | 1949219 |
| size | 580,408 |
High-performance log deduplication and rate limiting for the Rust `tracing` ecosystem.
High-volume Rust applications often suffer from repetitive or bursty log events that overwhelm logging infrastructure. A single error condition can generate thousands of identical log messages per second.
tracing-throttle solves this at the source by providing signature-based rate limiting as a drop-in tracing::Layer. Events with identical signatures (level, message, target, and all field values) are deduplicated and throttled together, while unique events pass through unaffected.
The layer computes a signature for each log event based on its level, message template, target, and all structured field values (by default). Each unique signature gets its own rate limiter that applies your chosen policy (token bucket, time-window, count-based, etc.). This means duplicate events are throttled together, unique events pass through unaffected, and high-cardinality fields (such as request_id) can be excluded from signatures to reduce memory usage.

Add this to your Cargo.toml:
```toml
[dependencies]
tracing-throttle = "0.4"
tracing = "0.1.41"
tracing-subscriber = "0.3.20"
```
```rust
use tracing_throttle::TracingRateLimitLayer;
use tracing_subscriber::prelude::*;

// Create a rate limit filter with safe defaults
// Defaults: 50 burst capacity, 1 token/sec (60/min), 10k max signatures with LRU eviction.
let rate_limit = TracingRateLimitLayer::new();

// Add it as a filter to your fmt layer
tracing_subscriber::registry()
    .with(tracing_subscriber::fmt::layer().with_filter(rate_limit))
    .init();

// Now your logs are rate limited!

// Each different user_id creates a unique signature - NOT throttled together
for user_id in 0..1000 {
    tracing::error!(user_id = user_id, "Failed to fetch user");
}
// All 1000 logged - they have different user_id values, so different signatures

// But duplicate errors ARE throttled
for _ in 0..1000 {
    tracing::error!(user_id = 123, "Failed to fetch user");
}
// Only first 50 logged immediately, then 1/sec (same user_id = same signature)
```
For detailed guidance on using tracing-throttle effectively, see BEST_PRACTICES.md for a comprehensive guide with examples.
By default, all field values are included in event signatures. This means events with different field values are throttled independently:
```rust
// Each user_id creates a unique signature
info!(user_id = 123, "Login"); // Different signature
info!(user_id = 456, "Login"); // Different signature
```
For high-cardinality fields (request IDs, trace IDs, timestamps), exclude them to prevent signature explosion:
```rust
let rate_limit = TracingRateLimitLayer::builder()
    .with_excluded_fields(vec![
        "request_id".to_string(),
        "trace_id".to_string(),
    ])
    .build()
    .unwrap();

// Now these share the same signature (request_id excluded)
info!(user_id = 123, request_id = "req-1", "Login"); // Same signature
info!(user_id = 123, request_id = "req-2", "Login"); // Same signature
```
See BEST_PRACTICES.md for detailed guidance on signature cardinality and memory management.
- Token Bucket (Default): Burst tolerance with natural recovery (`Policy::token_bucket(50.0, 1.0).unwrap()`)
- Time-Window: Allow K events per time period (`Policy::time_window(10, Duration::from_secs(60)).unwrap()`)
- Count-Based: Allow N events total, no recovery (`Policy::count_based(50).unwrap()`)
- Exponential Backoff: Emit at exponentially increasing intervals (`Policy::exponential_backoff()`)
- Custom: Implement the `RateLimitPolicy` trait for custom behavior
See the API documentation for details on each policy.
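For a side-by-side view of the constructors, here is a minimal sketch. The `Policy` calls are copied from the list above; the crate-root import path and the builder method for attaching a policy are assumptions, so check the API documentation for the exact names.

```rust
use std::time::Duration;
use tracing_throttle::Policy; // assumption: Policy is re-exported at the crate root

fn main() {
    // Construct the built-in policies listed above.
    let _bucket = Policy::token_bucket(50.0, 1.0).unwrap(); // 50-event burst, refills 1 token/sec
    let _window = Policy::time_window(10, Duration::from_secs(60)).unwrap(); // 10 events per minute
    let _counted = Policy::count_based(50).unwrap(); // 50 events total, no recovery
    let _backoff = Policy::exponential_backoff(); // exponentially increasing intervals

    // Attaching a chosen policy to the layer is assumed to go through the
    // builder; the method name below is hypothetical, see the API docs:
    // let rate_limit = TracingRateLimitLayer::builder().with_policy(_window).build().unwrap();
}
```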
Eviction strategies control which signatures are kept when storage limits are reached. See the API documentation and examples/eviction.rs for details.
Track rate limiting behavior with built-in metrics:
```rust
let metrics = rate_limit.metrics();
println!("Allowed: {}", metrics.events_allowed());
println!("Suppressed: {}", metrics.events_suppressed());
println!("Suppression rate: {:.1}%", metrics.snapshot().suppression_rate() * 100.0);
```
Optionally emit periodic summaries of suppressed events as log events (requires async feature):
```rust
let rate_limit = TracingRateLimitLayer::builder()
    .with_active_emission(true)
    .with_summary_interval(Duration::from_secs(60))
    .build()
    .unwrap();
```
See the API documentation for available metrics and customization options.
Uses a circuit breaker that fails open to preserve observability during errors. If rate limiting operations fail, all events are allowed through rather than being lost.
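To picture the fail-open behavior, here is a generic sketch; it is illustrative only, not the crate's internal code, and the error type is hypothetical.

```rust
// Hypothetical error type, for illustration only.
struct RateLimitError;

// Fail open: if the rate limiting check itself errors, allow the event
// through instead of silently dropping it.
fn should_emit(check: Result<bool, RateLimitError>) -> bool {
    match check {
        Ok(allowed) => allowed,
        Err(_) => true, // preserve observability when throttling breaks
    }
}
```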
Tracks up to 10,000 unique event signatures by default (~2-4 MB, including event metadata for human-readable summaries). Configure via .with_max_signatures() for high-cardinality applications.
Memory per signature: ~200-400 bytes (varies with message length and field count)
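As an example, here is a minimal sketch of raising the signature cap. The .with_max_signatures() builder method is named above, but the argument type and the 50,000 figure are assumptions chosen for illustration.

```rust
use tracing_throttle::TracingRateLimitLayer;
use tracing_subscriber::prelude::*;

fn main() {
    // Allow more unique signatures for a high-cardinality service.
    // At roughly 200-400 bytes per signature, 50,000 signatures is on the
    // order of 10-20 MB of tracking state.
    let rate_limit = TracingRateLimitLayer::builder()
        .with_max_signatures(50_000)
        .build()
        .unwrap();

    tracing_subscriber::registry()
        .with(tracing_subscriber::fmt::layer().with_filter(rate_limit))
        .init();
}
```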
See the API documentation for detailed memory breakdown, cardinality analysis, and configuration guidelines.
See BENCHMARKS.md for detailed measurements and methodology.
Run benchmarks yourself:
```sh
cargo bench --bench rate_limiting
```
By default, the library captures event metadata for human-readable suppression summaries. This adds ~20-25% overhead in single-threaded scenarios. For maximum performance, disable the human-readable feature:
```toml
[dependencies]
tracing-throttle = { version = "0.4", default-features = false, features = ["async"] }
```
This improves performance, but summaries will show signature hashes instead of event details.
Run the included examples:
```sh
# Basic count-based rate limiting
cargo run --example basic

# Demonstrate different policies
cargo run --example policies

# Show suppression summaries (default and custom formatters)
cargo run --example summaries --features async
```
Before v1.0, the focus is on gathering real-world usage feedback to identify missing features and API improvements. Once v1.0 is released, the crate will enter maintenance mode with minimal feature additions (only when truly necessary) and focus on bug fixes to maintain stability.
If you're using tracing-throttle in production, please share feedback via GitHub issues. Your input will shape the v1.0 API.
This project includes pre-commit hooks that run formatting, linting, tests, and example builds. To enable them:
```sh
# One-time setup - configure Git to use the .githooks directory
git config core.hooksPath .githooks
```
The pre-commit hook will automatically run:
- `cargo fmt --check` - Verify code formatting
- `cargo clippy --all-features --all-targets` - Run lints
- `cargo test --all-features` - Run all tests
- `cargo build --examples` - Build examples

Contributions are welcome! Please open issues or pull requests on GitHub.
Licensed under the MIT License.