| Crates.io | rater |
| lib.rs | rater |
| version | 0.1.1 |
| created_at | 2025-08-07 06:37:33.070333+00 |
| updated_at | 2025-08-08 10:17:21.684292+00 |
| description | High-performance, lock-free, thread-safe rate limiter using token bucket algorithm with per-IP rate limiting support |
| homepage | https://github.com/khaledsmq/rater |
| repository | https://github.com/khaledsmq/rater |
| max_upload_size | |
| id | 1784756 |
| size | 190,696 |
A blazingly fast, lock-free, thread-safe rate limiting library for Rust implementing the token bucket algorithm with optional per-IP rate limiting support.
Add this to your Cargo.toml:
[dependencies]
rater = "0.1.0"
For all features including serialization support:
[dependencies]
rater = { version = "0.1.1", features = ["full"] }
use rater::RateLimiter;

fn main() {
    // Create a rate limiter with 100 tokens, refilling 10 tokens/second
    let limiter = RateLimiter::new(100, 10);

    // Try to acquire a single token
    if limiter.try_acquire() {
        println!("Request allowed!");
    } else {
        println!("Rate limited!");
    }

    // Try to acquire multiple tokens at once
    if limiter.try_acquire_n(5) {
        println!("Batch request allowed!");
    }

    // Check available tokens
    println!("Available tokens: {}", limiter.available_tokens());
}
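The try_acquire call never blocks, so a caller that must eventually proceed can pair it with a short sleep-and-retry loop. The following is a minimal sketch using only the API shown above; the 50 ms backoff and the initial drain are arbitrary choices for illustration.

use rater::RateLimiter;
use std::thread;
use std::time::Duration;

fn main() {
    let limiter = RateLimiter::new(100, 10);

    // Drain the bucket so the retry path is actually exercised in this sketch.
    limiter.try_acquire_n(100);

    // Spin until a token becomes available, sleeping briefly between attempts.
    let mut retries = 0;
    while !limiter.try_acquire() {
        retries += 1;
        thread::sleep(Duration::from_millis(50));
    }
    println!("Acquired a token after {} retries", retries);
}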
use rater::IpRateLimiterManager;
use std::net::IpAddr;
use std::sync::Arc;

fn main() {
    // Create a manager for per-IP rate limiting
    let config = rater::RateLimiterConfig::per_second(10); // 10 requests per second
    let manager = Arc::new(IpRateLimiterManager::new(config));

    // Start the automatic cleanup thread
    let manager_clone = manager.clone();
    manager_clone.start_cleanup_thread();

    // Handle incoming requests
    let client_ip: IpAddr = "192.168.1.100".parse().unwrap();
    if manager.try_acquire(client_ip) {
        println!("Request from {} allowed", client_ip);
    } else {
        println!("Request from {} rate limited", client_ip);
    }

    // Get statistics
    let stats = manager.stats();
    println!("{}", stats.summary());
}
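Because the manager is thread-safe and already wrapped in an Arc, it can be shared across worker threads without extra locking. The following is a minimal sketch using only the try_acquire call shown above; the thread count and client IPs are arbitrary.

use rater::{IpRateLimiterManager, RateLimiterConfig};
use std::net::IpAddr;
use std::sync::Arc;
use std::thread;

fn main() {
    let manager = Arc::new(IpRateLimiterManager::new(RateLimiterConfig::per_second(10)));

    // Simulate a few worker threads handling requests from different clients.
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let manager = Arc::clone(&manager);
            thread::spawn(move || {
                let ip: IpAddr = format!("10.0.0.{}", i + 1).parse().unwrap();
                let allowed = manager.try_acquire(ip);
                println!("worker {}: request from {} allowed = {}", i, ip, allowed);
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}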
use rater::{RateLimiterBuilder, MemoryOrdering};

fn main() {
    let limiter = RateLimiterBuilder::new()
        .max_tokens(1000)
        .refill_rate(100)
        .refill_interval_ms(1000)
        .memory_ordering(MemoryOrdering::AcquireRelease)
        .build();

    // Use the configured limiter
    if limiter.try_acquire() {
        println!("Request processed!");
    }
}
use rater::{MemoryOrdering, RateLimiterConfig};

// Per-second rate limiting
let config = RateLimiterConfig::per_second(100);

// Per-minute rate limiting
let config = RateLimiterConfig::per_minute(1000);

// Custom configuration: 500 max tokens, refilling 50 tokens every 1000 ms
let config = RateLimiterConfig::new(500, 50, 1000)
    .with_burst_multiplier(3)                   // Allow bursts up to 3x the normal rate
    .with_ordering(MemoryOrdering::Sequential); // Strongest memory ordering
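A config by itself does not limit anything; it is handed to a limiter, for example the per-IP manager shown earlier. Below is a minimal sketch combining only the calls documented above; the address and limits are arbitrary.

use rater::{IpRateLimiterManager, MemoryOrdering, RateLimiterConfig};
use std::net::IpAddr;
use std::sync::Arc;

fn main() {
    // Build a bursty per-minute config and hand it to the per-IP manager.
    let config = RateLimiterConfig::per_minute(1000)
        .with_burst_multiplier(3)
        .with_ordering(MemoryOrdering::AcquireRelease);

    let manager = Arc::new(IpRateLimiterManager::new(config));

    let client_ip: IpAddr = "203.0.113.7".parse().unwrap();
    println!("allowed = {}", manager.try_acquire(client_ip));
}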
use rater::RateLimiter;

let limiter = RateLimiter::new(100, 10);

// Perform some operations
for _ in 0..50 {
    limiter.try_acquire();
}

// Get comprehensive metrics
let metrics = limiter.metrics();
println!("Success rate: {:.2}%", metrics.success_rate() * 100.0);
println!("Current tokens: {}/{}", metrics.current_tokens, metrics.max_tokens);
println!("Health status: {:?}", metrics.health_status());

// Get a detailed summary
println!("{}", metrics.summary());
use rater::{IpRateLimiterManager, RateLimiterConfig};
use std::sync::Arc;

let config = RateLimiterConfig::per_second(10);
let manager = Arc::new(IpRateLimiterManager::with_cleanup_settings(
    config,
    60_000,  // Cleanup every 60 seconds
    300_000, // Remove limiters inactive for 5 minutes
));

// Start a stoppable cleanup thread
let (handle, stop_tx) = manager.clone().start_stoppable_cleanup_thread();

// ... use the manager ...

// Stop the cleanup thread when done
stop_tx.send(()).unwrap();
handle.join().unwrap();
Rater is designed for extreme performance in high-concurrency scenarios:
TODO
Benchmarks run on AMD Ryzen 9 5900X, 32GB RAM
RateLimiter: Lock-free token bucket implementation
IpRateLimiterManager: Per-IP rate limiting
Metrics & Monitoring: Real-time performance tracking
Choose the appropriate memory ordering for your use case:
use rater::MemoryOrdering;
// Best performance, minimal guarantees
MemoryOrdering::Relaxed
// Balanced performance and correctness (default)
MemoryOrdering::AcquireRelease
// Strongest guarantees, lowest performance
MemoryOrdering::Sequential
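The ordering is normally fixed once at construction time, for example through the builder shown earlier. Below is a minimal sketch using the builder API from above; Relaxed is picked here purely to illustrate a throughput-oriented setup.

use rater::{MemoryOrdering, RateLimiterBuilder};

fn main() {
    // Relaxed trades the strongest consistency guarantees for raw speed;
    // AcquireRelease (the default) is usually the safer starting point.
    let limiter = RateLimiterBuilder::new()
        .max_tokens(10_000)
        .refill_rate(1_000)
        .refill_interval_ms(1000)
        .memory_ordering(MemoryOrdering::Relaxed)
        .build();

    if limiter.try_acquire() {
        println!("fast-path request allowed");
    }
}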
Check out the examples directory in the repository for more detailed usage.
Run the test suite:
# Run all tests
cargo test
# Run with all features
cargo test --all-features
# Run benchmarks
cargo bench
The library includes comprehensive benchmarks:
# Run all benchmarks
cargo bench
# Run specific benchmark
cargo bench --bench rate_limiter
Contributions are welcome! Please feel free to submit a Pull Request.
Create your feature branch (git checkout -b feature/amazing-feature)
Commit your changes (git commit -m 'Add some amazing feature')
Push to the branch (git push origin feature/amazing-feature)
Licensed under the MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
Made with ❤️ by Khaled