| Crates.io | nexus-queue |
| lib.rs | nexus-queue |
| version | 1.0.0 |
| created_at | 2026-01-01 05:24:22.025962+00 |
| updated_at | 2026-01-25 23:25:38.754653+00 |
| description | High-performance lock-free SPSC and MPSC queues for low-latency systems |
| homepage | |
| repository | https://github.com/Abso1ut3Zer0/nexus |
| max_upload_size | |
| id | 2015785 |
| size | 102,626 |
A high-performance SPSC (Single-Producer Single-Consumer) ring buffer for Rust, optimized for ultra-low-latency messaging.
Benchmarked on AMD Ryzen (single-socket), 2.69 GHz base clock, pinned to physical cores:
| Metric | nexus-queue | rtrb | crossbeam (MPMC) |
|---|---|---|---|
| p50 latency | 68 cycles (25 ns) | 67 cycles (25 ns) | 83 cycles (31 ns) |
| p99 latency | 130 cycles | 123 cycles | 160 cycles |
| Throughput | 640 M msgs/sec | 485 M msgs/sec | 92 M msgs/sec |
See BENCHMARKS.md for detailed methodology and results.
```rust
use nexus_queue::spsc;

let (mut tx, mut rx) = spsc::ring_buffer::<u64>(1024);

// Producer thread
tx.push(42).unwrap();

// Consumer thread
assert_eq!(rx.pop(), Some(42));
```
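The two halves are meant to live on separate threads. Here is a minimal end-to-end sketch using only the `push`/`pop` API shown above, assuming the producer and consumer handles are `Send` (as the split-handle design implies):

```rust
use nexus_queue::spsc;
use std::thread;

fn main() {
    let (mut tx, mut rx) = spsc::ring_buffer::<u64>(1024);

    // Producer thread: send a fixed number of messages, spinning when full.
    let producer = thread::spawn(move || {
        for i in 0..100_000u64 {
            while tx.push(i).is_err() {
                std::hint::spin_loop();
            }
        }
    });

    // Consumer (current thread): pop until everything has arrived.
    let mut next = 0u64;
    while next < 100_000 {
        match rx.pop() {
            Some(v) => {
                assert_eq!(v, next); // messages arrive in order
                next += 1;
            }
            None => std::hint::spin_loop(),
        }
    }

    producer.join().unwrap();
}
```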
```rust
use nexus_queue::Full;

// Spin until space is available
while tx.push(msg).is_err() {
    std::hint::spin_loop();
}

// Or handle the full case
match tx.push(msg) {
    Ok(()) => { /* sent */ }
    Err(Full(returned_msg)) => { /* queue full, msg returned */ }
}
```
```rust
// Check if the other end has been dropped
if rx.is_disconnected() {
    // Producer was dropped, drain remaining messages
}

if tx.is_disconnected() {
    // Consumer was dropped, stop producing
}
```
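Putting backpressure and disconnection together, here is a sketch of a shutdown-aware send loop and a drain loop, using only the calls shown above:

```rust
use nexus_queue::spsc;

fn main() {
    let (mut tx, mut rx) = spsc::ring_buffer::<u64>(8);

    // Producer side: give up if the consumer has gone away.
    for msg in 0..4u64 {
        loop {
            if tx.is_disconnected() {
                return; // consumer dropped; stop producing
            }
            if tx.push(msg).is_ok() {
                break;
            }
            std::hint::spin_loop(); // queue full; wait for space
        }
    }
    drop(tx); // signal the consumer that no more messages are coming

    // Consumer side: drain what's left, then exit once the producer is gone.
    loop {
        match rx.pop() {
            Some(msg) => println!("got {msg}"),
            None if rx.is_disconnected() => break, // empty and producer dropped
            None => std::hint::spin_loop(),
        }
    }
}
```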
The queue splits its state into a small shared core and per-endpoint caches:

```
┌───────────────────────────────────────────────────────┐
│ Shared (Arc):                                          │
│   tail:   CachePadded<AtomicUsize>  ← Producer writes  │
│   head:   CachePadded<AtomicUsize>  ← Consumer writes  │
│   buffer: *mut T                                       │
└───────────────────────────────────────────────────────┘

┌───────────────────┐        ┌───────────────────┐
│ Producer:         │        │ Consumer:         │
│   local_tail      │        │   local_head      │
│   cached_head     │        │   cached_tail     │
│   buffer (cached) │        │   buffer (cached) │
└───────────────────┘        └───────────────────┘
```
Producer and consumer write to separate cache lines (128-byte padding). Each endpoint keeps a local copy of the buffer pointer, the index mask, and the other endpoint's index, refreshing from the shared atomics only when the cached value suggests the queue is full (producer side) or empty (consumer side).
This design performs well on multi-socket NUMA systems where cache line ownership is important for latency.
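To make the index caching concrete, here is an illustrative producer-side push: the shared `head` atomic is read only when the cached copy suggests the ring is full. The field names and layout are assumptions for the sketch, not the crate's internals, and plain Acquire/Release orderings are used for brevity (the crate itself uses explicit fences, described below):

```rust
use std::cell::UnsafeCell;
use std::mem::MaybeUninit;
use std::sync::atomic::{AtomicUsize, Ordering};

// Simplified producer state: indices grow monotonically, `mask` is capacity - 1.
struct ProducerSketch<'a, T> {
    slots: &'a [UnsafeCell<MaybeUninit<T>>],
    mask: usize,
    local_tail: usize,
    cached_head: usize,           // last observed value of the consumer's head
    shared_head: &'a AtomicUsize, // written by the consumer
    shared_tail: &'a AtomicUsize, // written by the producer
}

impl<T> ProducerSketch<'_, T> {
    fn push(&mut self, value: T) -> Result<(), T> {
        let capacity = self.mask + 1;
        // Fast path: trust the cached head, no shared-memory traffic at all.
        if self.local_tail.wrapping_sub(self.cached_head) == capacity {
            // Looks full: refresh the cache from the shared atomic and retest.
            self.cached_head = self.shared_head.load(Ordering::Acquire);
            if self.local_tail.wrapping_sub(self.cached_head) == capacity {
                return Err(value); // genuinely full
            }
        }
        // Write the slot, then publish the new tail to the consumer.
        let slot = &self.slots[self.local_tail & self.mask];
        unsafe { (*slot.get()).write(value) };
        self.local_tail = self.local_tail.wrapping_add(1);
        self.shared_tail.store(self.local_tail, Ordering::Release);
        Ok(())
    }
}
```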
For accurate results, disable turbo boost and pin to physical cores:
```bash
# Build
cargo build -p nexus-queue --examples --release

# Run pinned to two cores
taskset -c 0,1 ./target/release/examples/bench_spsc

# For more stable results, disable turbo boost
# (intel_pstate only; on AMD the boost toggle is typically /sys/devices/system/cpu/cpufreq/boost):
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo

# Re-enable after:
echo 0 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
```
Verify your core topology with `lscpu -e`; pick two cores with different CORE values so you don't land on hyperthreading siblings.
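If you'd rather pin from inside the process than via `taskset`, the third-party `core_affinity` crate (not a nexus-queue dependency; mentioned here only as one option) can do it:

```rust
fn main() {
    // Query the OS for available cores, then pin this thread to one of them.
    // Cross-check the chosen ID against `lscpu -e` to avoid SMT siblings.
    let cores = core_affinity::get_core_ids().expect("could not enumerate cores");
    assert!(core_affinity::set_for_current(cores[0]), "failed to pin thread");

    // ... run the producer or consumer loop on this pinned thread ...
}
```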
Uses manual fencing for clarity and portability:
- `fence(Release)` before publishing `tail`
- `fence(Acquire)` after reading `tail`, `fence(Release)` before advancing `head`

On x86 these compile to no instructions (strong memory model), but they're required for correctness on ARM and other weakly-ordered architectures.
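As a sketch of where those fences sit on the consumer side (names and layout are illustrative, not the crate's internals):

```rust
use std::cell::UnsafeCell;
use std::mem::MaybeUninit;
use std::sync::atomic::{fence, AtomicUsize, Ordering};

// Illustrative consumer-side pop showing the fence placement described above.
fn pop_sketch<T>(
    slots: &[UnsafeCell<MaybeUninit<T>>],
    mask: usize,
    local_head: &mut usize,
    cached_tail: &mut usize,
    shared_head: &AtomicUsize,
    shared_tail: &AtomicUsize,
) -> Option<T> {
    if *local_head == *cached_tail {
        // Looks empty: re-read the tail the producer last published.
        *cached_tail = shared_tail.load(Ordering::Relaxed);
        fence(Ordering::Acquire); // pairs with the producer's Release fence
        if *local_head == *cached_tail {
            return None;
        }
    }
    // Read the slot, then hand it back to the producer by advancing head.
    let slot = &slots[*local_head & mask];
    let value = unsafe { (*slot.get()).as_ptr().read() };
    *local_head = local_head.wrapping_add(1);
    fence(Ordering::Release); // complete the slot read before releasing it
    shared_head.store(*local_head, Ordering::Relaxed);
    Some(value)
}
```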
Use nexus-queue when:

- You need the lowest possible latency between one producer and one consumer, each pinned to its own core

Consider alternatives when:

- You need an async channel: `tokio::sync::mpsc`
- You need multiple producers and consumers: crossbeam's channels

License: MIT OR Apache-2.0