| Crates.io | nexus-pool |
| lib.rs | nexus-pool |
| version | 0.1.0 |
| created_at | 2026-01-18 20:25:32.758963+00 |
| updated_at | 2026-01-18 20:25:32.758963+00 |
| description | High-performance object pools for low latency systems |
| homepage | |
| repository | https://github.com/Abso1ut3Zer0/nexus |
| max_upload_size | |
| id | 2053027 |
| size | 74,421 |
High-performance object pools for latency-sensitive applications.
```rust
use nexus_pool::local::BoundedPool;

// Create a pool of 100 pre-allocated buffers
let pool = BoundedPool::new(
    100,
    || Vec::<u8>::with_capacity(1024), // Factory
    |v| v.clear(),                     // Reset on return
);

// Acquire and use
let mut buf = pool.try_acquire().expect("pool not empty");
buf.extend_from_slice(b"hello world");

// Automatically returns to pool when `buf` drops
```
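The drop-to-return behavior can be sketched with a toy RAII guard. This is an illustration using only the standard library, not nexus-pool's actual implementation: the guard derefs to the pooled object and, when it goes out of scope, runs the reset step and pushes the object back onto a free list.

```rust
use std::cell::RefCell;
use std::ops::{Deref, DerefMut};

// Toy single-threaded pool (illustration only, not nexus-pool's internals).
struct ToyPool {
    free: RefCell<Vec<Vec<u8>>>,
}

// RAII guard: derefs to the pooled object, returns it on drop.
struct ToyGuard<'a> {
    pool: &'a ToyPool,
    item: Option<Vec<u8>>,
}

impl ToyPool {
    fn new() -> Self {
        ToyPool { free: RefCell::new(Vec::new()) }
    }

    fn acquire(&self) -> ToyGuard<'_> {
        // Reuse a pooled object if one exists, otherwise allocate a new one.
        let item = self.free.borrow_mut().pop().unwrap_or_default();
        ToyGuard { pool: self, item: Some(item) }
    }
}

impl Deref for ToyGuard<'_> {
    type Target = Vec<u8>;
    fn deref(&self) -> &Vec<u8> {
        self.item.as_ref().unwrap()
    }
}

impl DerefMut for ToyGuard<'_> {
    fn deref_mut(&mut self) -> &mut Vec<u8> {
        self.item.as_mut().unwrap()
    }
}

impl Drop for ToyGuard<'_> {
    fn drop(&mut self) {
        if let Some(mut v) = self.item.take() {
            v.clear();                           // "reset on return"
            self.pool.free.borrow_mut().push(v); // back to the free list
        }
    }
}
```

Because the return happens in `Drop`, the caller never has to remember a `release` call; scope exit is the release.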
### `local::BoundedPool` / `local::Pool`

Single-threaded pools with zero synchronization overhead.
```rust
use nexus_pool::local::Pool;

// Growable pool - creates objects on demand
let pool = Pool::new(
    || Vec::<u8>::with_capacity(1024),
    |v| v.clear(),
);

// Always succeeds (creates a new object if the pool is empty)
let buf = pool.acquire();
```
### `sync::Pool`

Thread-safe pool: one thread acquires, any thread can return.
```rust
use nexus_pool::sync::Pool;

let pool = Pool::new(1000, || Vec::new(), |v| v.clear());
let buf = pool.try_acquire().unwrap();

// Send to another thread - returns to pool when dropped
std::thread::spawn(move || {
    println!("{:?}", &*buf);
});
```
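One way the "any thread can return" property can be built is a return channel: the guard carries a `Sender` back to the owning thread, so dropping it anywhere ships the object home. This is an assumed sketch of the pattern, not `sync::Pool`'s actual internals; `SendGuard` is a hypothetical name.

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Hypothetical sketch (not sync::Pool's real implementation): the guard
// owns the pooled object plus a Sender; dropping the guard on any thread
// sends the object back over the channel.
struct SendGuard {
    item: Option<Vec<u8>>,
    ret: Sender<Vec<u8>>,
}

impl Drop for SendGuard {
    fn drop(&mut self) {
        if let Some(mut v) = self.item.take() {
            v.clear();                // reset on return
            let _ = self.ret.send(v); // owner drains this before its next acquire
        }
    }
}
```

In this scheme the owning thread drains the receiver lazily on its next acquire, so returns from other threads never block the hot path.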
Predictability over generality.
This crate intentionally does not provide MPMC (multi-producer, multi-consumer) pools. Here's why:

- **MPMC requires solving the ABA problem:** generation counters, hazard pointers, or epoch-based reclamation all add overhead and complexity.
- **MPMC is a design smell:** if multiple threads contend for the same pool, you've created a bottleneck. The pool that was supposed to reduce latency now adds it.
- **Better alternatives exist:** partition ownership instead (one `local::Pool` per thread).

If you truly need MPMC, use `ArrayQueue` from the `crossbeam` crate.
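The per-thread alternative can be sketched with a `thread_local!` free list. This is a simplified std-only stand-in for owning one `local::Pool` on each thread; the names `acquire_buf` / `release_buf` are illustrative, not part of the crate.

```rust
use std::cell::RefCell;

// One free list per thread: no locks, no ABA problem, no cross-thread
// contention. (Illustrative stand-in for one local::Pool per thread.)
thread_local! {
    static FREE_BUFFERS: RefCell<Vec<Vec<u8>>> = RefCell::new(Vec::new());
}

fn acquire_buf() -> Vec<u8> {
    // Reuse this thread's pooled buffer if available, else allocate.
    FREE_BUFFERS
        .with(|free| free.borrow_mut().pop())
        .unwrap_or_else(|| Vec::with_capacity(1024))
}

fn release_buf(mut buf: Vec<u8>) {
    buf.clear(); // reset before pooling; capacity is retained
    FREE_BUFFERS.with(|free| free.borrow_mut().push(buf));
}
```

Because each thread only ever touches its own free list, acquire and release are plain `Vec` pushes and pops with no atomic operations at all.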
Measured on Intel Core i9 @ 3.1 GHz:
| Pool | Acquire p50 | Release p50 | Release p99 |
|---|---|---|---|
| `local::BoundedPool` | 26 cycles | 26 cycles | 58 cycles |
| `local::Pool` (reuse) | 26 cycles | 26 cycles | 58 cycles |
| `local::Pool` (factory) | 32 cycles | 26 cycles | 58 cycles |
| `sync::Pool` (same thread) | 42 cycles | 68 cycles | 74 cycles |
| `sync::Pool` (cross-thread) | 42 cycles | 68 cycles | 86 cycles |
Run the benchmarks yourself:
```sh
cargo run --example perf_local_pool --release
cargo run --example perf_sync_pool --release
```
```rust
use nexus_pool::sync::Pool;

// Order entry thread owns the pool
let pool = Pool::new(
    10_000,
    || Order::default(),
    |o| o.reset(),
);

// Hot path: acquire order, fill, send to matching engine
let mut order = pool.try_acquire().expect("order pool exhausted");
order.symbol = symbol;
order.price = price;
order.quantity = qty;

// Send to matching engine thread
matching_engine_tx.send(order).unwrap();

// Order returns to pool when matching engine drops it
```
```rust
use nexus_pool::local::BoundedPool;

// Per-connection buffer pool
let buffers = BoundedPool::new(
    16,
    || Box::new([0u8; 65536]),
    |_b| { /* optional: zero sensitive data */ },
);

loop {
    let mut buf = buffers.try_acquire()?;
    let n = socket.read(&mut buf[..])?;
    process(&buf[..n]);
    // buf returns to pool at the end of each iteration
}
```
Rust 1.85 or later.
Licensed under either the Apache License, Version 2.0 or the MIT license, at your option.