| Crates.io | sliding-ring |
| lib.rs | sliding-ring |
| version | 0.1.1 |
| created_at | 2025-10-26 17:43:12.035886+00 |
| updated_at | 2025-10-27 15:49:59.47743+00 |
| description | Cache-friendly sliding ring buffer keyed to an anchor coordinate for ultra-low-latency workloads. |
| homepage | |
| repository | https://github.com/ChetanBhasin/sliding-ring |
| max_upload_size | |
| id | 1901659 |
| size | 68,368 |
sliding-ring provides a cache-friendly, allocation-free ring buffer that keeps its slots indexed relative to a moving anchor index. It was extracted from our high-performance orderbook engine and generalized so any Copy slot type can participate in consistent sliding-window semantics. A const-generic DEPTH defines how many offsets past the anchor stay live at any time; newly entered slots are proactively zeroed on each shift so the hot window never carries stale data. Store any domain-specific coordinate (price, timestamp, etc.) outside the ring and combine it with the offsets it exposes.
cargo add sliding-ring
The crate's documentation is driven by this README.md, so cargo doc --open -p sliding-ring gives you rendered API docs locally.
Traditional ring buffers optimize for FIFO semantics, but exchange and telemetry workloads often care about the best coordinate (highest bid, current time bucket, etc.) and need random access around that best. SlidingRing keeps a 256-slot ring that re-anchors in O(1) time, clearing the newly entered slots while preserving the rest. This enables predictable performance, avoids per-slot allocations, and keeps the CPU cache warm even under heavy churn.
DEPTH describes the contiguous window of offsets [0, DEPTH) that stays live; everything else is reset to the caller-supplied “zero” value.

- shift_by(k) rewires indices modulo 256, then clears the |k| offsets that newly enter the window (or resets the whole ring when |k| >= DEPTH).
- shift_by_keep_anchor skips clearing the newly entered anchor slot on backward moves—useful when promoting the next best level without losing its quantity.
- SlidingRing<T, DEPTH> stores [T; 256], so every slot type is monomorphized at compile time and the compiler inlines the small helper methods. T must be Copy, and DEPTH controls the number of contiguous offsets that stay hot beyond the anchor.
- slices_from_anchor* return the two contiguous spans covering exactly DEPTH slots (tail/head), so you can run vectorized scans without modulo math. This is the primary primitive for ultra-low-latency users. Pair it with the wide crate (see the simd_scan example below) to stay on stable Rust while still using vector instructions.
- iter() / iter_mut() walk storage order, while iter_from_anchor* emit (offset, slot) relative to the current anchor. Both rely on pointer arithmetic, so they stay zero-cost abstractions.
- The hot helpers are marked #[inline(always)] so LLVM can flatten them.
- For window scans, use slices_from_anchor*() to obtain the tail/head spans, or iter_from_anchor*()—they share the same traversal order and hand you offsets you can add to your externally tracked anchor.

The ring stores [T; 256], so T must be Copy. Construction takes the value that represents a cleared slot:
use sliding_ring::SlidingRing;
const DEPTH: u8 = 48;
let mut ring = SlidingRing::<u64, DEPTH>::new(0);
That “zero” gets copied into every slot initially and whenever a slot needs to be cleared because the anchor moved. It can be anything cheap to copy: numeric zeroes, None, or lightweight structs. Only the DEPTH slots starting at the anchor remain significant; everything else is cleared automatically.
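For example, a value parked at the anchor is gone once the anchor moves past it and later returns via shift_by, while shift_by_keep_anchor preserves it. A minimal sketch of that contrast, assuming shift_by_keep_anchor takes the same signed i128 step as shift_by:

use sliding_ring::SlidingRing;
const DEPTH: u8 = 16;
let mut ring = SlidingRing::<u64, DEPTH>::new(0);
*ring.slot_mut(0) = 7;           // park a value at the current anchor
ring.shift_by(1);                // anchor advances; offset 0 slides out of the window
ring.shift_by(-1);               // moving back clears the re-entered anchor slot
assert_eq!(*ring.slot_mut(0), 0);
*ring.slot_mut(0) = 7;
ring.shift_by(1);
ring.shift_by_keep_anchor(-1);   // assumed signature: keeps the re-entered anchor slot intact
assert_eq!(*ring.slot_mut(0), 7);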
use sliding_ring::SlidingRing;
#[derive(Default, Clone, Copy)]
struct Bucket {
    hits: u64,
}
const DEPTH: u8 = 64;
let mut ring = SlidingRing::<Bucket, DEPTH>::new(Bucket::default());
let mut anchor = 10_000i128;        // domain coordinate tracked outside the ring
let offset = 5u8;
ring.slot_mut(offset).hits += 1;    // touch the slot five offsets past the anchor
ring.shift_by(offset as i128);      // re-anchor onto that slot in O(1)
anchor += offset as i128;           // keep the external coordinate in sync
assert_eq!(ring.borrow_anchor().hits, 1);  // the promoted slot kept its count
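The anchor value above lives entirely outside the ring, so addressing a slot by a domain coordinate (a price, a time bucket) means deriving the offset yourself. A sketch of that mapping; offset_for is a hypothetical helper, not part of the crate's API:

use sliding_ring::SlidingRing;
const DEPTH: u8 = 64;
// Hypothetical helper: translate an absolute coordinate into a ring offset
// relative to the externally tracked anchor coordinate.
fn offset_for(anchor: i128, coordinate: i128) -> Option<u8> {
    let delta = coordinate - anchor;
    if (0..DEPTH as i128).contains(&delta) {
        Some(delta as u8)
    } else {
        None // outside the live window of DEPTH offsets
    }
}
let mut ring = SlidingRing::<u64, DEPTH>::new(0);
let anchor_price = 10_000i128; // best price, tracked by the caller
if let Some(offset) = offset_for(anchor_price, 10_005) {
    *ring.slot_mut(offset) += 1; // bump the level five ticks past the best
}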
use sliding_ring::{Direction, SlidingRing};
#[derive(Default, Clone, Copy)]
struct Bucket { hits: u64 }
const DEPTH: u8 = 32;
let mut ring = SlidingRing::<Bucket, DEPTH>::new(Bucket::default());
ring.slot_mut(5).hits += 1;
ring.slot_mut(6).hits += 3;
let sum: u64 = ring
    .iter_from_anchor(Direction::Forward)
    .map(|(_, bucket)| bucket.hits)
    .sum();
assert_eq!(sum, 4);
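When offsets don't matter, iter() and iter_mut() walk the full backing array in storage order, which is a convenient way to reset every slot between scans. A short sketch, assuming both iterators yield plain slot references (only the *_from_anchor variants carry offsets):

use sliding_ring::SlidingRing;
#[derive(Default, Clone, Copy)]
struct Bucket { hits: u64 }
const DEPTH: u8 = 32;
let mut ring = SlidingRing::<Bucket, DEPTH>::new(Bucket::default());
ring.slot_mut(3).hits += 2;
// Zero every counter across all 256 backing slots, anchor position included.
for bucket in ring.iter_mut() {
    bucket.hits = 0;
}
assert!(ring.iter().all(|bucket| bucket.hits == 0));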
use sliding_ring::SlidingRing;
use wide::u64x4;
const DEPTH: u8 = 64;
let mut ring = SlidingRing::<u64, DEPTH>::new(0);
for i in 0..DEPTH {
    *ring.slot_mut(i) = i as u64;
}
let (tail, head) = ring.slices_from_anchor();
let needle = u64x4::splat(42);
for chunk in tail.chunks_exact(4).chain(head.chunks_exact(4)) {
    let vec = u64x4::new([chunk[0], chunk[1], chunk[2], chunk[3]]);
    // cmp_eq sets every bit of a matching lane, so an all-ones lane means a hit.
    if vec
        .cmp_eq(needle)
        .to_array()
        .iter()
        .any(|&lane| lane == u64::MAX)
    {
        println!("found 42");
        break;
    }
}
Run cargo run --example telemetry or cargo run --example simd_scan for the full samples. The SIMD example relies on the wide crate, so it stays on stable Rust while still demonstrating contiguous-slice scanning.
cargo doc --open -p sliding-ring for local browsing.

This crate is extracted from production systems but is still evolving. Expect additions such as configurable ring widths, iterators, and more slot utilities as we work toward a polished open-source release.
- cargo run --example telemetry – basic per-slot counter with anchor-ordered reduction.
- cargo run --example simd_scan – demonstrates splitting the ring into contiguous spans and scanning with SIMD on stable via the wide crate.
- cargo bench -p sliding-ring runs the Criterion suite measuring anchor shifts and slot updates.
- bazel run //lib/collections/sliding-ring:sliding_ring_bench (tagged benchmark) to launch the same binary.

Licensed under either of
at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in sliding-ring by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.