| Crates.io | request_coalescer |
| lib.rs | request_coalescer |
| version | 0.1.0 |
| created_at | 2025-05-22 06:28:08.569084+00 |
| updated_at | 2025-05-22 06:28:08.569084+00 |
| description | An asynchronous request coalescing library for Rust |
| homepage | |
| repository | https://github.com/thou-sif/request_coalescer |
| max_upload_size | |
| id | 1684819 |
| size | 66,456 |
A Rust library for coalescing identical asynchronous operations to prevent redundant work in concurrent systems.

`request_coalescer` provides a `CoalescingService` that ensures an expensive underlying operation executes only once when multiple requests for the same "key" arrive simultaneously; every caller then receives the shared result.
This is particularly useful for:

- deduplicating identical database queries or remote API calls
- preventing cache stampedes when a popular entry expires
- avoiding repeated recomputation of the same expensive result
Add the following to your `Cargo.toml`:

```toml
[dependencies]
request_coalescer = "0.1.0"
tokio = { version = "1", features = ["full"] } # Or specific features
anyhow = "1"

# Optional, for logging/tracing:
# tracing = "0.1"
# tracing-subscriber = "0.3" # For development/examples only
```
Here's a simple example of how to use the `CoalescingService`:

```rust
use request_coalescer::CoalescingService;
use anyhow::Result;

async fn fetch_data(id: &str) -> Result<String> {
    // Simulate an expensive operation
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;
    Ok(format!("Data for {}", id))
}

#[tokio::main]
async fn main() -> Result<()> {
    // Create a new coalescing service
    let service: CoalescingService<String, String> = CoalescingService::new();

    // Execute multiple requests with the same key
    let key = "user:123".to_string();

    // These will be coalesced into a single operation
    let handle1 = {
        let svc = service.clone();
        let k = key.clone();
        tokio::spawn(async move { svc.execute(k, || fetch_data("user:123")).await })
    };
    let handle2 = {
        let svc = service.clone();
        let k = key.clone();
        tokio::spawn(async move { svc.execute(k, || fetch_data("user:123")).await })
    };

    // Both get the same result, but the operation runs only once
    let result1 = handle1.await??;
    let result2 = handle2.await??;
    assert_eq!(result1, result2);
    Ok(())
}
```
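Under the hood, coalescing amounts to sharing one in-flight computation per key. The following is not the crate's implementation, just a synchronous, std-only sketch of the idea: the first caller for a key becomes the "leader" and runs the operation; concurrent callers for the same key wait and share its result.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// One slot per key: holds the result once ready, plus a condvar to wake waiters.
type Slot = Arc<(Mutex<Option<String>>, Condvar)>;

#[derive(Clone, Default)]
struct Coalescer {
    in_flight: Arc<Mutex<HashMap<String, Slot>>>,
}

impl Coalescer {
    // Run `op` for `key`, unless an identical call is already in flight;
    // in that case, block until the leader's result is ready and share it.
    fn execute(&self, key: &str, op: impl FnOnce() -> String) -> String {
        let (slot, leader) = {
            let mut map = self.in_flight.lock().unwrap();
            match map.get(key) {
                Some(slot) => (slot.clone(), false), // follower: wait for leader
                None => {
                    let slot: Slot = Arc::new((Mutex::new(None), Condvar::new()));
                    map.insert(key.to_string(), slot.clone());
                    (slot, true)
                }
            }
        };
        if leader {
            let value = op();
            *slot.0.lock().unwrap() = Some(value.clone());
            slot.1.notify_all();
            // Remove the entry so later calls can compute a fresh value.
            self.in_flight.lock().unwrap().remove(key);
            value
        } else {
            let mut guard = slot.0.lock().unwrap();
            while guard.is_none() {
                guard = slot.1.wait(guard).unwrap();
            }
            guard.clone().unwrap()
        }
    }
}

fn main() {
    let svc = Coalescer::default();
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let svc = svc.clone();
            thread::spawn(move || {
                svc.execute("user:123", || {
                    // Slow operation: concurrent callers coalesce onto it.
                    thread::sleep(Duration::from_millis(50));
                    "data".to_string()
                })
            })
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), "data");
    }
    println!("all callers received the shared result");
}
```

The async version in the crate follows the same shape, with futures and wakers in place of threads and condvars.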
You can customize the `CoalescingService` with various configuration options:

```rust
use request_coalescer::{CoalescingService, service::CoalescingConfig};
use anyhow::Result;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<()> {
    let config = CoalescingConfig {
        timeout: Some(Duration::from_millis(200)),
        max_concurrent_ops: Some(2),
        auto_cleanup: false,
    };
    let _service: CoalescingService<String, String> = CoalescingService::with_config(config);
    Ok(())
}
```
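If `max_concurrent_ops` behaves like a counting semaphore over in-flight operations (an assumption; consult the crate docs for the exact semantics), the throttling idea can be sketched in std-only Rust like this:

```rust
use std::sync::{Arc, Condvar, Mutex};

// A tiny counting semaphore: `run` blocks while no permits remain.
#[derive(Clone)]
struct Gate {
    state: Arc<(Mutex<usize>, Condvar)>,
}

impl Gate {
    fn new(permits: usize) -> Self {
        Gate { state: Arc::new((Mutex::new(permits), Condvar::new())) }
    }

    // Acquire a permit, run `op`, then release the permit.
    fn run<T>(&self, op: impl FnOnce() -> T) -> T {
        let (lock, cvar) = &*self.state;
        let mut permits = lock.lock().unwrap();
        while *permits == 0 {
            permits = cvar.wait(permits).unwrap();
        }
        *permits -= 1;
        drop(permits); // don't hold the lock while the operation runs
        let out = op();
        *lock.lock().unwrap() += 1;
        cvar.notify_one();
        out
    }
}

fn main() {
    let gate = Gate::new(2); // at most 2 operations in flight at once
    let vals: Vec<_> = (0..4).map(|i| gate.run(|| i * 10)).collect();
    println!("{:?}", vals); // prints [0, 10, 20, 30]
}
```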
The library provides built-in timeout handling:

```rust
use request_coalescer::CoalescingService;
use anyhow::Result;
use std::time::Duration;

// Placeholder for an expensive call; substitute your own operation.
async fn slow_api_call(resource: &str) -> Result<String> {
    tokio::time::sleep(Duration::from_millis(100)).await;
    Ok(format!("Response for {}", resource))
}

#[tokio::main]
async fn main() -> Result<()> {
    // Create a service with a global timeout of 1 second
    let service: CoalescingService<String, String> =
        CoalescingService::with_timeout(Duration::from_secs(1));

    // Execute with a specific timeout
    let result = service
        .execute_with_timeout(
            "resource".to_string(),
            Duration::from_millis(500),
            || slow_api_call("resource"),
        )
        .await;
    println!("{:?}", result);
    Ok(())
}
```
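The crate's timeouts are driven by async timers, but the underlying contract is simple: return an error instead of waiting forever. That contract can be sketched with std threads and `mpsc::recv_timeout`. The `run_with_timeout` helper below is a name invented for this sketch, not part of the crate's API:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run `op` on a worker thread; give up if it takes longer than `limit`.
fn run_with_timeout<T: Send + 'static>(
    limit: Duration,
    op: impl FnOnce() -> T + Send + 'static,
) -> Result<T, mpsc::RecvTimeoutError> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(op()); // ignore the send error if the caller gave up
    });
    rx.recv_timeout(limit)
}

fn main() {
    // A fast operation completes within the deadline...
    assert_eq!(run_with_timeout(Duration::from_secs(1), || 42), Ok(42));

    // ...while a slow one times out and the caller moves on.
    let slow = run_with_timeout(Duration::from_millis(10), || {
        thread::sleep(Duration::from_millis(500));
        0
    });
    assert!(slow.is_err());
    println!("timeout demo ok");
}
```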
For detailed API documentation, run `cargo doc --open` or visit docs.rs (once published).
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License or Apache License 2.0, at your option.