| Crates.io | nexus-slab |
| lib.rs | nexus-slab |
| version | 0.7.0 |
| created_at | 2026-01-02 02:29:12.912163+00 |
| updated_at | 2026-01-18 05:30:02.817327+00 |
| description | A high-performance slab allocator optimized for predictable tail latency |
| homepage | |
| repository | https://github.com/Abso1ut3Zer0/nexus |
| max_upload_size | |
| id | 2017837 |
| size | 195,271 |
A high-performance slab allocator optimized for predictable tail latency.
Traditional slab allocators using Vec exhibit bimodal p999 latency—most operations are fast, but occasional reallocations cause multi-thousand cycle spikes. For latency-critical systems like trading engines, a single slow operation can mean a missed fill.
nexus-slab provides two allocators:
- **BoundedSlab**: Fixed capacity, pre-allocated. The production choice for deterministic latency.
- **Slab**: Grows by adding independent chunks. No copying on growth; only the new chunk is allocated.

Benchmarked on an Intel Core Ultra 7 155H, pinned to a physical core.
| Operation | BoundedSlab | slab crate | Notes |
|---|---|---|---|
| INSERT p50 | ~20 cycles | ~22 cycles | 2 cycles faster |
| GET p50 | ~24 cycles | ~26 cycles | 2 cycles faster |
| REMOVE p50 | ~24 cycles | ~30 cycles | 6 cycles faster |
Steady-state p50 matches slab crate (~22-32 cycles). The win is tail latency during growth:
| Metric | Slab | slab crate | Notes |
|---|---|---|---|
| Growth p999 | ~40 cycles | ~2000+ cycles | 50x better |
| Growth max | ~70K cycles | ~1.5M cycles | 20x better |
Slab adds chunks independently—no copying. The slab crate uses Vec, which copies all existing data on reallocation.
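The difference can be illustrated with a minimal standalone sketch (not the nexus-slab implementation): storage that grows by appending fixed-size chunks never moves existing elements, whereas a `Vec` must copy every element when it reallocates. The `ChunkedStorage` type below is hypothetical, for illustration only.

```rust
// Illustrative sketch: chunked growth leaves existing elements in place.
struct ChunkedStorage {
    chunks: Vec<Box<[u64]>>, // each chunk is an independent heap allocation
    chunk_cap: usize,
    len: usize,
}

impl ChunkedStorage {
    fn new(chunk_cap: usize) -> Self {
        Self { chunks: Vec::new(), chunk_cap, len: 0 }
    }

    fn push(&mut self, value: u64) -> usize {
        let (chunk, local) = (self.len / self.chunk_cap, self.len % self.chunk_cap);
        if chunk == self.chunks.len() {
            // Growth: allocate one new chunk; no existing data is touched.
            self.chunks.push(vec![0u64; self.chunk_cap].into_boxed_slice());
        }
        self.chunks[chunk][local] = value;
        self.len += 1;
        self.len - 1
    }

    fn get(&self, idx: usize) -> u64 {
        self.chunks[idx / self.chunk_cap][idx % self.chunk_cap]
    }
}

fn main() {
    let mut s = ChunkedStorage::new(4);
    let first = s.push(10);
    let addr_before = &s.chunks[0][0] as *const u64 as usize;
    for i in 0..20 {
        s.push(i); // forces several new chunks to be allocated
    }
    let addr_after = &s.chunks[0][0] as *const u64 as usize;
    assert_eq!(addr_before, addr_after); // element 0 never moved
    assert_eq!(s.get(first), 10);
}
```

A `Vec`-backed slab, by contrast, invalidates and copies all existing slots whenever `push` outgrows the current allocation, which is the source of the growth-time spikes in the table above.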
```rust
use nexus_slab::{BoundedSlab, Full};

// All memory allocated upfront, pre-touched
let mut slab: BoundedSlab<u64> = BoundedSlab::with_capacity(100_000);

// O(1) operations
let key = slab.try_insert(42).unwrap();
assert_eq!(slab[key], 42);

let value = slab.remove(key);
assert_eq!(value, 42);

// Returns Err(Full) when exhausted; no surprise allocations
match slab.try_insert(123) {
    Ok(key) => { /* ... */ }
    Err(Full(value)) => { /* handle backpressure */ }
}
```
```rust
use nexus_slab::Slab;

// Lazy allocation: no memory until first insert
let mut slab: Slab<u64> = Slab::new();

// Or pre-allocate for expected capacity
let mut slab: Slab<u64> = Slab::with_capacity(10_000);

// Grows automatically when needed
let key = slab.insert(42);
assert_eq!(slab[key], 42);

let value = slab.remove(key);
assert_eq!(value, 42);
```
```rust
use nexus_slab::Slab;

let slab: Slab<u64> = Slab::builder()
    .chunk_capacity(8192) // Slots per chunk (default: 4096)
    .reserve(100_000)     // Pre-allocate space for N items
    .build();
```
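Since chunks are fixed-size, reserving space for N items means allocating enough whole chunks to cover N. The helper below is hypothetical and assumes `reserve` rounds up to whole chunks; it only illustrates the arithmetic.

```rust
// Hypothetical capacity math: how many whole chunks cover a reservation,
// assuming reserve(n) rounds up to full chunks of chunk_capacity slots.
fn chunks_needed(reserve: usize, chunk_capacity: usize) -> usize {
    (reserve + chunk_capacity - 1) / chunk_capacity // ceiling division
}

fn main() {
    // 100_000 items at 8192 slots per chunk -> 13 chunks (106_496 slots)
    assert_eq!(chunks_needed(100_000, 8192), 13);
}
```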
For self-referential structures where the value needs to know its own key:
```rust
use nexus_slab::{BoundedSlab, Key};

struct Node {
    self_key: Key,
    data: u64,
}

let mut slab = BoundedSlab::with_capacity(1024);
let entry = slab.try_vacant_entry().unwrap();
let key = entry.key();
entry.insert(Node { self_key: key, data: 42 });

assert_eq!(slab[key].self_key, key);
```
BoundedSlab (single contiguous allocation):
```text
┌─────────────────────────────────────────────┐
│ Slot 0: [tag: u32][value: T]                │
│ Slot 1: [tag: u32][value: T]                │
│ ...                                         │
│ Slot N: [tag: u32][value: T]                │
└─────────────────────────────────────────────┘
```
Slab (multiple independent chunks):
```text
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│   Chunk 0    │  │   Chunk 1    │  │   Chunk 2    │
│ (BoundedSlab)│  │ (BoundedSlab)│  │ (BoundedSlab)│
└──────────────┘  └──────────────┘  └──────────────┘
       ▲                                   ▲
       └─── head_with_space ───────────────┘
            (freelist of non-full chunks)
```
Each slot carries a `tag: u32` alongside its value. A single comparison (`tag == 0`) checks occupancy.
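A minimal sketch of that slot layout (illustrative only; the real crate stores the value inline rather than in an `Option`, and the exact tag convention is assumed here):

```rust
// Sketch of a tagged slot: tag == 0 means vacant (assumed convention),
// so checking occupancy is one integer comparison, no pointer chasing.
struct Slot<T> {
    tag: u32,         // 0 = vacant; nonzero = occupied
    value: Option<T>, // simplified stand-in for inline storage
}

impl<T> Slot<T> {
    fn is_occupied(&self) -> bool {
        self.tag != 0
    }
}

fn main() {
    let vacant: Slot<u64> = Slot { tag: 0, value: None };
    let occupied = Slot { tag: 1, value: Some(42u64) };
    assert!(!vacant.is_occupied());
    assert!(occupied.is_occupied());
}
```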
BoundedSlab keys are direct slot indices.
Slab keys encode chunk and local index via power-of-2 arithmetic:
┌─────────────────────┬──────────────────────┐
│ chunk_idx (high) │ local_idx (low) │
└─────────────────────┴──────────────────────┘
Decoding is two instructions: shift and mask.
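A standalone sketch of that encoding, assuming the documented default chunk capacity of 4096 (the constants and function names here are illustrative, not the crate's API):

```rust
// Hypothetical key layout for a power-of-2 chunk capacity.
const CHUNK_CAP: u64 = 4096;                    // must be a power of two
const SHIFT: u32 = CHUNK_CAP.trailing_zeros();  // 12 for 4096
const MASK: u64 = CHUNK_CAP - 1;                // low-bit mask, 0xFFF

fn encode(chunk_idx: u64, local_idx: u64) -> u64 {
    (chunk_idx << SHIFT) | local_idx
}

fn decode(key: u64) -> (u64, u64) {
    (key >> SHIFT, key & MASK) // two instructions: shift, mask
}

fn main() {
    let key = encode(3, 57);
    assert_eq!(decode(key), (3, 57));
}
```

Because the chunk capacity is a power of two, decoding needs no division, which keeps key lookups on the fast path.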
Keys are simple indices with occupancy checks. After removal, get() returns None:
```rust
use nexus_slab::Slab;

let mut slab: Slab<u64> = Slab::new();

let key = slab.insert(42);
assert!(slab.contains(key));

slab.remove(key);
assert!(!slab.contains(key)); // Slot is vacant
assert!(slab.get(key).is_none());
```
No generational indices. If a new value occupies the same slot, an old key will access the new value. For systems requiring stale-key protection, validate against authoritative external identifiers (exchange order IDs, database keys, etc.).
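The hazard is easy to reproduce with a toy free-list slab (a standalone sketch, not nexus-slab): because keys are plain indices and a removed slot is reused by the next insert, a stale key silently observes the new occupant.

```rust
// Toy free-list slab demonstrating index reuse.
struct ToySlab<T> {
    slots: Vec<Option<T>>,
    free: Vec<usize>, // stack of vacant slot indices
}

impl<T> ToySlab<T> {
    fn new() -> Self {
        Self { slots: Vec::new(), free: Vec::new() }
    }

    fn insert(&mut self, value: T) -> usize {
        match self.free.pop() {
            Some(i) => { self.slots[i] = Some(value); i }
            None => { self.slots.push(Some(value)); self.slots.len() - 1 }
        }
    }

    fn remove(&mut self, key: usize) -> Option<T> {
        let v = self.slots[key].take();
        if v.is_some() {
            self.free.push(key);
        }
        v
    }

    fn get(&self, key: usize) -> Option<&T> {
        self.slots.get(key).and_then(|s| s.as_ref())
    }
}

fn main() {
    let mut slab = ToySlab::new();
    let old_key = slab.insert("order-1");
    slab.remove(old_key);
    let new_key = slab.insert("order-2"); // reuses the freed slot
    assert_eq!(old_key, new_key);
    // The stale key now reads the new value; hence the advice to
    // validate against an authoritative external identifier.
    assert_eq!(slab.get(old_key), Some(&"order-2"));
}
```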
See the Key documentation for design rationale.
| Method | BoundedSlab | Slab | Description |
|---|---|---|---|
| `try_insert(value)` | `Result<Key, Full>` | — | Insert, returns `Err` if full |
| `insert(value)` | — | `Key` | Insert, grows if needed |
| `get(key)` | `Option<&T>` | `Option<&T>` | Get reference |
| `get_mut(key)` | `Option<&mut T>` | `Option<&mut T>` | Get mutable reference |
| `remove(key)` | `T` | `T` | Remove and return (panics if invalid) |
| `slab[key]` | `&T` / `&mut T` | `&T` / `&mut T` | Index access (panics if invalid) |
| `contains(key)` | `bool` | `bool` | Check if slot is occupied |
| `len()` | `usize` | `usize` | Number of occupied slots |
| `capacity()` | `usize` | `usize` | Total slots available |
| `is_full()` | `bool` | — | Check if at capacity |
| `clear()` | `()` | `()` | Remove all elements |
Use BoundedSlab when capacity is known upfront and latency must be deterministic.

Use Slab when the slab must grow at runtime without the reallocation spikes of Vec-based slabs.

Use the slab crate when steady-state performance is all that matters and occasional growth-time latency spikes are acceptable.
MIT OR Apache-2.0