| Crates.io | velocityx |
| lib.rs | velocityx |
| version | 0.4.1 |
| created_at | 2025-11-28 21:55:44.106767+00 |
| updated_at | 2025-12-19 04:50:52.173534+00 |
| description | A production-ready Rust crate for lock-free concurrent data structures with performance monitoring |
| homepage | https://velocityx.rs |
| repository | https://github.com/M1tsumi/VelocityX |
| max_upload_size | |
| id | 1956085 |
| size | 301,903 |
A comprehensive lock-free data structures library designed for high-performance concurrent programming in Rust.
Key features in v0.4.1:
- `MetricsCollector` trait across all data structures
- `push_batch()` and `pop_batch()` for reduced lock contention (1.15x faster)
- `push_with_timeout()` and `pop_with_timeout()` with adaptive backoff algorithms
- Error variants: `Timeout`, `CapacityExceeded`, `Poisoned`, `InvalidArgument`

Performance highlights:
- Throughput: 4,127,701 ops/sec (v0.4.1 MPMC Queue)
- Latency: 242 ns/op average
- Batch operations: 1.15x faster than individual ops
- Timeout resolution: <1ms precision with exponential backoff
- Memory utilization: real-time monitoring available
| Data Structure | VelocityX v0.4.1 | std::sync | crossbeam | Improvement |
|---|---|---|---|---|
| Bounded MPMC Queue | 52M ops/s | 15M ops/s | 28M ops/s | 3.5x |
| Unbounded MPMC Queue | 44M ops/s | 12M ops/s | 25M ops/s | 3.7x |
| Concurrent HashMap | 58M ops/s | 18M ops/s | 35M ops/s | 3.2x |
| Work-Stealing Deque | 47M ops/s | N/A | 22M ops/s | 2.1x |
| Lock-Free Stack | 61M ops/s | 8M ops/s | 19M ops/s | 7.6x |
By default, `MpmcQueue` is configured for safety and correctness under contention; a lock-free variant is available behind `features = ["lockfree"]`.

```rust
use velocityx::queue::MpmcQueue;

let queue: MpmcQueue<i32> = MpmcQueue::new(1000);

// Batch push - 1.15x faster than individual pushes
let values: Vec<i32> = (0..1000).collect();
let pushed = queue.push_batch(values);
println!("Pushed {} items in batch", pushed);

// Batch pop - reduces lock contention
let items = queue.pop_batch(500);
println!("Popped {} items in batch", items.len());
```
```rust
use std::time::Duration;

// Timeout push with exponential backoff
let result = queue.push_with_timeout(Duration::from_millis(100), || 42);
match result {
    Ok(()) => println!("Push succeeded"),
    Err(velocityx::Error::Timeout) => println!("Timeout occurred"),
    Err(e) => println!("Other error: {:?}", e),
}

// Timeout pop for non-blocking consumers
let value = queue.pop_with_timeout(Duration::from_millis(50));
```
```rust
use velocityx::stack::LockFreeStack;

let stack = LockFreeStack::new();

// Wait-free push operations
stack.push(1);
stack.push(2);
stack.push(3);

// Lock-free pop operations
assert_eq!(stack.pop(), Some(3));
assert_eq!(stack.pop(), Some(2));
assert_eq!(stack.pop(), Some(1));

// Batch operations
stack.push_batch(vec![10, 20, 30, 40, 50]);
let items = stack.pop_batch(3);
println!("Popped {} items", items.len());

// Performance metrics
let metrics = stack.metrics();
println!("Success rate: {:.2}%", metrics.success_rate());
```
```rust
use velocityx::{MpmcQueue, MetricsCollector};

let queue: MpmcQueue<i32> = MpmcQueue::new(1000);

// Perform operations
for i in 0..100 {
    queue.push(i).unwrap();
}

// Get comprehensive performance metrics
let metrics = queue.metrics();
println!("Total operations: {}", metrics.total_operations);
println!("Success rate: {:.2}%", metrics.success_rate());
println!("Avg operation time: {:?}", metrics.avg_operation_time());
println!("Max operation time: {:?}", metrics.max_operation_time());
println!("Contention rate: {:.2}%", metrics.contention_rate());

// Control metrics collection
queue.set_metrics_enabled(false); // Disable for production
let enabled = queue.is_metrics_enabled();
queue.reset_metrics(); // Reset all statistics
```
```rust
match queue.push(42) {
    Ok(()) => println!("Success"),
    Err(velocityx::Error::CapacityExceeded) => println!("Queue full"),
    Err(velocityx::Error::Timeout) => println!("Operation timed out"),
    Err(velocityx::Error::Poisoned) => println!("Queue corrupted"),
    Err(e) => println!("Other error: {:?}", e),
}
```
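A common way to handle a full bounded queue is to retry with exponential backoff until a deadline. The sketch below is std-only and generic over any fallible push; `QueueFull` is a hypothetical stand-in for an error like `Error::CapacityExceeded`, not part of the crate's API.

```rust
use std::thread;
use std::time::{Duration, Instant};

/// Stand-in for a capacity error such as `velocityx::Error::CapacityExceeded`.
#[derive(Debug, PartialEq)]
struct QueueFull;

/// Retry a push-style operation with exponential backoff until it succeeds
/// or the deadline expires.
fn push_with_backoff<F>(mut try_push: F, timeout: Duration) -> Result<(), QueueFull>
where
    F: FnMut() -> Result<(), QueueFull>,
{
    let deadline = Instant::now() + timeout;
    let mut backoff = Duration::from_micros(10);
    loop {
        match try_push() {
            Ok(()) => return Ok(()),
            Err(QueueFull) if Instant::now() >= deadline => return Err(QueueFull),
            Err(QueueFull) => {
                thread::sleep(backoff);
                // Double the wait each attempt, capped at 1 ms.
                backoff = (backoff * 2).min(Duration::from_millis(1));
            }
        }
    }
}

fn main() {
    // Simulate a queue that is full for the first two attempts.
    let mut attempts = 0;
    let result = push_with_backoff(
        || {
            attempts += 1;
            if attempts < 3 { Err(QueueFull) } else { Ok(()) }
        },
        Duration::from_millis(100),
    );
    assert_eq!(result, Ok(()));
    println!("succeeded after {} attempts", attempts);
}
```

With a real queue, `try_push` would wrap the crate's non-blocking push; the cap on the backoff keeps worst-case latency bounded.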
Multi-producer, multi-consumer queues in both bounded and unbounded variants:
```rust
use velocityx::queue::MpmcQueue;
use std::thread;

// Bounded queue for predictable memory usage
let queue: MpmcQueue<i32> = MpmcQueue::new(1000);
queue.push(42)?;
let value = queue.pop();
assert_eq!(value, Some(42));

// Producer thread
let producer = thread::spawn({
    let queue = queue.clone();
    move || {
        for i in 0..1000 {
            queue.push(i).unwrap();
        }
    }
});

// Consumer thread
let consumer = thread::spawn({
    let queue = queue.clone();
    move || {
        let mut sum = 0;
        while sum < 499500 { // Sum of 0..1000
            if let Some(value) = queue.pop() {
                sum += value;
            }
        }
        sum
    }
});

producer.join().unwrap();
let result = consumer.join().unwrap();
assert_eq!(result, 499500);
```
```rust
use velocityx::map::ConcurrentHashMap;
use std::thread;

let map = ConcurrentHashMap::new();

// Writer thread
let writer = thread::spawn({
    let map = map.clone();
    move || {
        for i in 0..1000 {
            map.insert(i, i * 2);
        }
    }
});
// Wait for the writer so every key is present before the reader sums them
writer.join().unwrap();

// Reader thread
let reader = thread::spawn({
    let map = map.clone();
    move || {
        let mut sum = 0;
        for i in 0..1000 {
            if let Some(value) = map.get(&i) {
                sum += *value;
            }
        }
        sum
    }
});
let result = reader.join().unwrap();
assert_eq!(result, 999000); // Sum of 0, 2, 4, ..., 1998
```
```rust
use velocityx::deque::WorkStealingDeque;
use std::thread;

let deque = WorkStealingDeque::new(100);

// Owner thread (worker)
let owner = thread::spawn({
    let deque = deque.clone();
    move || {
        // Push work items (only the owner pushes)
        for i in 0..100 {
            deque.push(i);
        }
        // Process own work from the LIFO end
        while let Some(task) = deque.pop() {
            println!("Processing task: {}", task);
        }
    }
});

// Thief thread (stealer): steals from the opposite end. The attempt count
// is bounded because the owner may drain the deque before many steals succeed.
let thief = thread::spawn({
    let deque = deque.clone();
    move || {
        let mut stolen = 0;
        for _ in 0..10_000 {
            if let Some(task) = deque.steal() {
                println!("Stolen task: {}", task);
                stolen += 1;
            }
        }
        stolen
    }
});

owner.join().unwrap();
let stolen_count = thief.join().unwrap();
println!("Stolen {} tasks", stolen_count);
```
Comprehensive API documentation is available on docs.rs.
| Data Structure | Push | Pop | Get | Insert | Remove | Memory Ordering |
|---|---|---|---|---|---|---|
| MPMC Queue | O(1) | O(1) | - | - | - | Release/Acquire |
| Concurrent HashMap | - | - | O(1) | O(1) | O(1) | Acquire/Release |
| Work-Stealing Deque | O(1) | O(1) | - | - | - | Release/Acquire |
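The Release/Acquire pairing listed in the table can be illustrated with std atomics alone. This is a minimal sketch of the publish/subscribe pattern the queue indices rely on, not VelocityX's actual internals:

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(AtomicU64::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    // Producer: write the payload, then publish with Release so the
    // payload write happens-before the flag becomes visible.
    let producer = {
        let (data, ready) = (Arc::clone(&data), Arc::clone(&ready));
        thread::spawn(move || {
            data.store(42, Ordering::Relaxed);
            ready.store(true, Ordering::Release);
        })
    };

    // Consumer: an Acquire load of the flag synchronizes-with the Release
    // store, guaranteeing the payload is visible afterwards.
    let consumer = {
        let (data, ready) = (Arc::clone(&data), Arc::clone(&ready));
        thread::spawn(move || {
            while !ready.load(Ordering::Acquire) {}
            data.load(Ordering::Relaxed)
        })
    };

    producer.join().unwrap();
    assert_eq!(consumer.join().unwrap(), 42);
    println!("consumer observed the published value");
}
```

The same reasoning scales up: a queue's push publishes a slot with a Release store of the tail index, and pop observes it with an Acquire load.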
```text
VelocityX v0.4.1
├── Queue Module
│   ├── MpmcQueue (safe default)
│   ├── LockFreeMpmcQueue (feature: lockfree)
│   │   ├── Cache-padded atomic indices
│   │   ├── Optimized memory ordering
│   │   └── Wrapping arithmetic for efficiency
│   └── Enhanced error handling
├── Map Module
│   ├── ConcurrentHashMap (striped locking)
│   │   ├── Robin Hood hashing for cache efficiency
│   │   ├── Incremental resizing
│   │   └── Lock-free reads with striped writes
│   └── Power-of-two capacity sizing
├── Deque Module
│   ├── WorkStealingDeque (Chase-Lev)
│   │   ├── Owner/thief operations
│   │   ├── Circular buffer with wraparound
│   │   └── Scheduler-ready design
│   └── Work-stealing algorithms
└── Core Utilities
    ├── CachePadded<T> for alignment
    ├── Unified error types
    └── Memory ordering helpers
```
Memory ordering choices:
- Release ordering to ensure data visibility before index updates
- Acquire ordering to ensure data visibility after index reads
- Acquire ordering for consistent reads
- Relaxed ordering for performance-critical counters

VelocityX includes comprehensive testing:
Run tests with:

```sh
cargo test
```

For comprehensive testing including stress tests:

```sh
cargo test --features "test-stress"
```

Performance benchmarks comparing VelocityX against standard library alternatives:

```sh
cargo bench
```
| Operation | VelocityX | Std Library | Improvement |
|---|---|---|---|
| MPMC Queue Push | 45 ns/op | 120 ns/op | 2.7x faster |
| MPMC Queue Pop | 38 ns/op | 95 ns/op | 2.5x faster |
| HashMap Get | 25 ns/op | 65 ns/op | 2.6x faster |
| HashMap Insert | 85 ns/op | 180 ns/op | 2.1x faster |
| Deque Push | 32 ns/op | 78 ns/op | 2.4x faster |
| Deque Pop | 28 ns/op | 72 ns/op | 2.6x faster |
Results are approximate and may vary by hardware and workload.
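For a quick sanity check of numbers like these without pulling in a benchmark framework, a minimal ns/op harness can be built from `std::time::Instant`. The sketch below times a `Mutex<VecDeque>` push (the kind of structure the Std Library column refers to); it is a rough stand-in for the Criterion benchmarks, and its numbers are sensitive to CPU frequency and load:

```rust
use std::collections::VecDeque;
use std::sync::Mutex;
use std::time::Instant;

/// Time a closure over `iters` iterations and report nanoseconds per op.
fn ns_per_op<F: FnMut()>(iters: u64, mut op: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        op();
    }
    start.elapsed().as_nanos() as f64 / iters as f64
}

fn main() {
    // Baseline: a lock-guarded VecDeque push.
    let queue = Mutex::new(VecDeque::new());
    let ns = ns_per_op(1_000_000, || {
        queue.lock().unwrap().push_back(1u64);
    });
    println!("Mutex<VecDeque> push: {:.1} ns/op", ns);
    assert!(ns > 0.0);
}
```

Timing loops like this one omit warm-up, statistical analysis, and outlier rejection, which is why the published tables use `cargo bench`.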
Feature flags:
- `default`: Standard library support
- `serde`: Serialization support for data structures
- `unstable`: Unstable features (requires nightly Rust)
- `test-stress`: Additional stress tests (for testing only)

| Use Case | Recommended Structure | Why |
|---|---|---|
| Message Passing Systems | MPMC Queue | High throughput, bounded memory, producer/consumer decoupling |
| Task Scheduling | Work-Stealing Deque | Optimal for fork/join patterns, load balancing |
| In-Memory Caching | Concurrent HashMap | Fast lookups, concurrent updates, key-value storage |
| Event Sourcing | MPMC Queue | Ordered processing, multiple consumers |
| Parallel Data Processing | Work-Stealing Deque | Work distribution, dynamic load balancing |
| Real-Time Analytics | Concurrent HashMap | Fast aggregations, concurrent updates |
| Actor Systems | MPMC Queue | Message delivery, mailbox semantics |
| Thread Pool Management | Work-Stealing Deque | Work stealing, idle thread utilization |
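The "Actor Systems" row above can be sketched with `std::sync::mpsc` as a stand-in for an MPMC queue: several producers send into one mailbox, and a single actor drains it in arrival order. This illustrates the pattern only; it does not use VelocityX's API:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<u32>();

    // Three producer threads, each sending 100 messages to the mailbox.
    let producers: Vec<_> = (0..3)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                for i in 0..100 {
                    tx.send(id * 100 + i).unwrap();
                }
            })
        })
        .collect();
    // Drop the original sender so the channel closes once producers finish.
    drop(tx);

    // The actor: drain the mailbox until every sender has hung up.
    let actor = thread::spawn(move || rx.iter().count());

    for p in producers {
        p.join().unwrap();
    }
    assert_eq!(actor.join().unwrap(), 300);
    println!("actor processed 300 messages");
}
```

A bounded MPMC queue adds backpressure to this pattern: producers block or fail when the mailbox is full instead of growing it without limit.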
VelocityX is well suited to high-throughput, latency-sensitive concurrent workloads such as the use cases above.
We welcome contributions! Please see our Contributing Guide for details.
```sh
git clone https://github.com/M1tsumi/VelocityX.git
cd VelocityX
cargo build
cargo test
```
All code must pass:

```sh
cargo clippy -- -D warnings
cargo fmt --check
```
This project is dual-licensed; you may use it under either of the two licenses listed in the repository, at your option.
VelocityX builds upon the foundational work of researchers and practitioners in concurrent programming:
VelocityX - High-performance concurrent data structures for Rust
quefep.uk/velocityx | crates.io/velocityx | docs.rs/velocityx