nexus-pool 0.1.0

  • repository: https://github.com/Abso1ut3Zer0/nexus
  • author: Michael Hart (Abso1ut3Zer0)
  • published: 2026-01-18

README
nexus-pool

High-performance object pools for latency-sensitive applications.


Features

  • Sub-100 cycle operations: ~26 cycles for local pools, ~42-68 cycles for sync pools
  • Zero allocation on hot path: Pre-allocate objects at startup
  • RAII guards: Objects automatically return to pool on drop
  • Graceful shutdown: Guards safely drop their values if the pool itself has already been dropped

Quick Start

use nexus_pool::local::BoundedPool;

// Create a pool of 100 pre-allocated buffers
let pool = BoundedPool::new(
    100,
    || Vec::<u8>::with_capacity(1024),  // Factory
    |v| v.clear(),                       // Reset on return
);

// Acquire and use
let mut buf = pool.try_acquire().expect("pool not empty");
buf.extend_from_slice(b"hello world");

// Automatically returns to pool when `buf` drops

Pool Types

local::BoundedPool / local::Pool

Single-threaded pools with zero synchronization overhead.

use nexus_pool::local::Pool;

// Growable pool - creates objects on demand
let pool = Pool::new(
    || Vec::<u8>::with_capacity(1024),
    |v| v.clear(),
);

// Always succeeds (creates new if empty)
let buf = pool.acquire();

sync::Pool

Thread-safe pool: one thread acquires, any thread can return.

use nexus_pool::sync::Pool;

let pool = Pool::new(1000, || Vec::<u8>::new(), |v| v.clear());

let buf = pool.try_acquire().unwrap();

// Send to another thread - returns to pool when dropped
std::thread::spawn(move || {
    println!("{:?}", &*buf);
});

Design Philosophy

Predictability over generality.

This crate intentionally does not provide MPMC (multi-producer multi-consumer) pools. Here's why:

  1. MPMC requires solving ABA: Generation counters, hazard pointers, or epoch-based reclamation add overhead and complexity.

  2. MPMC is a design smell: If multiple threads contend for the same pool, you've created a bottleneck. The pool that was supposed to reduce latency now adds it.

  3. Better alternatives exist:

    • Per-thread pools (local::Pool per thread)
    • Sharded pools (hash thread ID to pool index)
    • Message passing (send buffers through channels)

If you truly need MPMC, use crossbeam's ArrayQueue (crossbeam::queue::ArrayQueue).
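The sharded-pool alternative can be sketched with std types alone: hash the current ThreadId to pick a shard, so unrelated threads rarely contend on the same lock. `ShardedPool` and its shard count are illustrative, not part of this crate:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::Mutex;
use std::thread;

const SHARDS: usize = 8;

/// Sharded pool: each thread hashes its ThreadId to a shard index,
/// spreading contention across SHARDS independent locks.
struct ShardedPool {
    shards: [Mutex<Vec<Vec<u8>>>; SHARDS],
}

impl ShardedPool {
    fn new(per_shard: usize) -> Self {
        Self {
            shards: std::array::from_fn(|_| {
                Mutex::new((0..per_shard).map(|_| Vec::with_capacity(1024)).collect())
            }),
        }
    }

    /// Pick a shard for the calling thread (stable for that thread's lifetime).
    fn shard_index() -> usize {
        let mut h = DefaultHasher::new();
        thread::current().id().hash(&mut h);
        (h.finish() as usize) % SHARDS
    }

    fn try_acquire(&self) -> Option<Vec<u8>> {
        self.shards[Self::shard_index()].lock().unwrap().pop()
    }

    fn release(&self, mut buf: Vec<u8>) {
        buf.clear(); // reset on return
        self.shards[Self::shard_index()].lock().unwrap().push(buf);
    }
}

fn main() {
    let pool = ShardedPool::new(4);
    let mut buf = pool.try_acquire().expect("shard not empty");
    buf.extend_from_slice(b"tick");
    pool.release(buf);
}
```

A real sharded design would also handle the case where a shard runs dry while a neighbor has spares (e.g. by stealing), which this sketch omits.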

Performance

Measured on Intel Core i9 @ 3.1 GHz:

Pool                        Acquire p50   Release p50   Release p99
local::BoundedPool          26 cycles     26 cycles     58 cycles
local::Pool (reuse)         26 cycles     26 cycles     58 cycles
local::Pool (factory)       32 cycles     26 cycles     58 cycles
sync::Pool (same thread)    42 cycles     68 cycles     74 cycles
sync::Pool (cross-thread)   42 cycles     68 cycles     86 cycles

Run the benchmarks yourself:

cargo run --example perf_local_pool --release
cargo run --example perf_sync_pool --release

Use Cases

Trading Systems

use nexus_pool::sync::Pool;

// Order entry thread owns the pool
let pool = Pool::new(
    10_000,
    || Order::default(),
    |o| o.reset(),
);

// Hot path: acquire order, fill, send to matching engine
let mut order = pool.try_acquire().expect("order pool exhausted");
order.symbol = symbol;
order.price = price;
order.quantity = qty;

// Send to matching engine thread
matching_engine_tx.send(order).unwrap();
// Order returns to pool when matching engine drops it
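One way to picture the acquire-here, return-anywhere flow above is a channel-backed free list: the owner pops from the Receiver, and any thread holding a Sender clone can push a value back. This is a std-only illustration of the idea, not the crate's implementation; `demo` is a hypothetical helper:

```rust
use std::sync::mpsc::channel;
use std::thread;

/// Acquires a buffer on one thread, hands it to a worker, and counts
/// how many buffers are back on the free list after the worker returns it.
fn demo() -> usize {
    let (return_tx, free_list) = channel::<Vec<u8>>();

    // Pre-populate the free list, as a pool would at startup.
    for _ in 0..2 {
        return_tx.send(Vec::with_capacity(1024)).unwrap();
    }

    // Owner thread acquires a buffer...
    let mut buf = free_list.try_recv().expect("free list not empty");
    buf.extend_from_slice(b"order");

    // ...and ships it to a worker, along with a way to return it.
    let tx = return_tx.clone();
    let worker = thread::spawn(move || {
        // The "matching engine" consumes the buffer, then returns it.
        assert_eq!(&buf[..], b"order");
        buf.clear();
        tx.send(buf).unwrap();
    });
    worker.join().unwrap();

    // One buffer was never taken out, one came back from the worker.
    free_list.try_iter().count()
}

fn main() {
    assert_eq!(demo(), 2);
}
```

In a guard-based pool the `tx.send(buf)` step happens implicitly in `Drop`, which is what lets the matching engine return orders just by dropping them.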

Network Buffers

use nexus_pool::local::BoundedPool;

// Per-connection buffer pool
let buffers = BoundedPool::new(
    16,
    || Box::new([0u8; 65536]),
    |_b| { /* optional: zero sensitive data */ },
);

loop {
    let mut buf = buffers.try_acquire().expect("buffer pool exhausted");
    let n = socket.read(&mut buf[..])?;
    process(&buf[..n]);
    // buf returns to pool
}

Minimum Supported Rust Version

Rust 1.85 or later.

License

Licensed under either of Apache License, Version 2.0 or MIT license at your option.
