| Crates.io | ntex-ratelimiter |
| lib.rs | ntex-ratelimiter |
| version | 0.2.0 |
| created_at | 2025-07-12 15:27:19.761645+00 |
| updated_at | 2025-07-18 11:29:59.68323+00 |
| description | A rate limiter middleware for ntex web framework. |
| homepage | |
| repository | https://github.com/lollipopkit/ntex-ratelimiter |
| max_upload_size | |
| id | 1749441 |
| size | 63,011 |
A rate limiting middleware for the ntex web framework.
The crate exposes the following feature flags:

- `tokio` (default): Enable Tokio runtime support
- `async-std`: Enable async-std runtime support
- `json` (default): Enable JSON serialization for error responses

Add the dependency to your `Cargo.toml`:

[dependencies]
# Default features (tokio + json)
ntex-ratelimiter = "^0"
# With async-std instead of tokio
ntex-ratelimiter = { version = "^0", default-features = false, features = ["async-std", "json"] }
# Minimal build without JSON support
ntex-ratelimiter = { version = "^0", default-features = false, features = ["tokio"] }
The primary components are `RateLimiter` and `RateLimit`:

- `RateLimiter`: Manages the rate limiting logic and state. You create an instance of this, often shared across your application.
- `RateLimit`: The ntex middleware that wraps your services and applies the rate limiting rules defined by a `RateLimiter` instance.

use ntex::web;
use ntex_ratelimiter::{RateLimit, RateLimiter};
#[ntex::main]
async fn main() -> std::io::Result<()> {
    // Create a rate limiter: 100 requests per 60 seconds
    let limiter = RateLimiter::new(100, 60);

    web::HttpServer::new(move || {
        web::App::new()
            // Apply rate limiting middleware
            .wrap(RateLimit::new(limiter.clone()))
            .service(web::resource("/").to(|| async { "Hello world!" }))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}
For more control over the rate limiter's behavior, construct it from a `RateLimiterConfig`:
use ntex_ratelimiter::{RateLimiter, RateLimiterConfig};
use std::time::Duration;
let config = RateLimiterConfig {
    capacity: 1000,                             // 1000 requests
    window: 3600,                               // per hour (3600 seconds)
    cleanup_interval: Duration::from_secs(300), // cleanup every 5 minutes
    stale_threshold: 7200,                      // remove entries idle for 2+ hours
};
let limiter = RateLimiter::with_config(config);
// Get statistics
let stats = limiter.stats();
println!("Active rate limit entries: {}", stats.active_entries);
This middleware uses the token bucket algorithm for rate limiting: each client gets a bucket of `capacity` tokens that refills over the configured `window`, each request consumes one token, and requests are rejected once the bucket is empty.
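As a rough sketch of the technique only (the crate's actual `TokenBucket` implementation may differ in detail), a continuously refilling token bucket looks like this:

```rust
use std::time::Instant;

/// Illustrative token bucket (not the crate's internal type).
struct Bucket {
    capacity: f64,       // maximum tokens, i.e. requests allowed per window
    tokens: f64,         // tokens currently available
    refill_per_sec: f64, // capacity / window
    last_refill: Instant,
}

impl Bucket {
    fn new(capacity: u64, window_secs: u64) -> Self {
        Self {
            capacity: capacity as f64,
            tokens: capacity as f64,
            refill_per_sec: capacity as f64 / window_secs as f64,
            last_refill: Instant::now(),
        }
    }

    /// Try to take one token; returns false when the client is over its limit.
    fn try_acquire(&mut self) -> bool {
        // Refill proportionally to elapsed time, capped at capacity.
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = now;

        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Roughly what RateLimiter::new(100, 60) expresses: 100 requests per 60 seconds.
    let mut bucket = Bucket::new(100, 60);
    assert!(bucket.try_acquire());
}
```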
The middleware extracts the client IP from:

- the `X-Forwarded-For` header (first IP in the comma-separated list)
- the `X-Real-IP` header

This ensures accurate rate limiting even behind proxies and load balancers.
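For illustration, picking the first entry of an `X-Forwarded-For` value is plain string handling; the `client_ip` helper below is hypothetical and not part of this crate's public API:

```rust
/// Hypothetical helper: resolve the client IP from proxy headers.
/// `X-Forwarded-For` may hold "client, proxy1, proxy2"; the first entry is the client.
fn client_ip(x_forwarded_for: Option<&str>, x_real_ip: Option<&str>) -> Option<String> {
    if let Some(xff) = x_forwarded_for {
        if let Some(first) = xff.split(',').next() {
            let first = first.trim();
            if !first.is_empty() {
                return Some(first.to_string());
            }
        }
    }
    x_real_ip.map(|ip| ip.trim().to_string())
}

fn main() {
    assert_eq!(
        client_ip(Some("203.0.113.7, 10.0.0.1"), None).as_deref(),
        Some("203.0.113.7")
    );
    assert_eq!(
        client_ip(None, Some("198.51.100.2")).as_deref(),
        Some("198.51.100.2")
    );
}
```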
The middleware adds these headers to all responses:
| Header | Description |
|---|---|
| `x-ratelimit-remaining` | Number of requests remaining in current window |
| `x-ratelimit-limit` | Total request limit for the window |
| `x-ratelimit-reset` | Unix timestamp when the rate limit resets |
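As a small reference for the `x-ratelimit-reset` semantics (how the middleware actually computes it is internal to the crate), a Unix reset timestamp `window` seconds from now can be produced like this:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Unix timestamp `window_secs` seconds from now, i.e. the shape of an x-ratelimit-reset value.
fn reset_timestamp(window_secs: u64) -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the Unix epoch")
        .as_secs()
        + window_secs
}

fn main() {
    println!("x-ratelimit-reset: {}", reset_timestamp(60));
}
```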
When rate limits are exceeded, a 429 Too Many Requests response is returned:
{
  "code": 429,
  "message": "Rate limit exceeded",
  "data": {
    "remaining": 0,
    "reset": 1700000000,
    "limit": 100
  }
}
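On the client side this body can be modeled with serde; the struct names below are illustrative (they are not types exported by this crate) and assume `serde` with the `derive` feature plus `serde_json` are available:

```rust
use serde::Deserialize;

/// Illustrative client-side model of the 429 error body.
#[derive(Debug, Deserialize)]
struct RateLimitError {
    code: u16,
    message: String,
    data: RateLimitData,
}

#[derive(Debug, Deserialize)]
struct RateLimitData {
    remaining: u64,
    reset: u64,
    limit: u64,
}

fn main() -> Result<(), serde_json::Error> {
    let body = r#"{"code":429,"message":"Rate limit exceeded","data":{"remaining":0,"reset":1700000000,"limit":100}}"#;
    let err: RateLimitError = serde_json::from_str(body)?;
    println!("{}: retry after reset timestamp {}", err.message, err.data.reset);
    Ok(())
}
```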
- `limiter`: Contains the core `RateLimiter` logic, the `TokenBucket` implementation, `RateLimiterConfig`, and the `RateLimit` ntex middleware.

Contributions are welcome! Please feel free to open an issue or submit a pull request.
MIT © All contributors.