Crates.io | derive-stack-queue |
lib.rs | derive-stack-queue |
version | 0.14.0 |
source | src |
created_at | 2022-10-22 21:22:55.168958 |
updated_at | 2024-03-15 08:04:13.173191 |
description | Derives for stack-queue |
homepage | |
repository | https://github.com/Bajix/stack-queue/ |
max_upload_size | |
id | 694894 |
size | 9,015 |
A heapless auto-batching queue featuring deferrable batching by way of negotiating exclusive access over task ranges on thread-owned circular buffers. Because tasks continue to be enqueued until a batch is bounded, bounding can be deferred until after a database connection has been acquired, allowing for opportunistic batching. This delivers optimal batching at all workload levels without batch-collection overhead, superfluous timeouts, or unnecessary allocations.
Impl one of the following while using the local_queue macro (see the sketch below):

- TaskQueue, for batching with per-task receivers
- BackgroundQueue, for background processing of task batches without receivers
- BatchReducer, for collecting or reducing batched data

For best performance, exclusively use the Tokio runtime as configured via the tokio::main or tokio::test macro with the crate attribute set to async_local while the barrier-protected-runtime feature is enabled on async-local. Doing so configures the Tokio runtime with a barrier that rendezvouses runtime worker threads during shutdown, ensuring tasks never outlive the thread-local data owned by runtime worker threads and obviating the need for Box::leak as a means of lifetime extension.
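To make the pieces above concrete, here is a minimal sketch of a TaskQueue implementation driven by the local_queue macro, combined with the runtime configuration just described. The trait signatures, the `buffer_size` macro argument, and the `auto_batch`, `into_assignment`, and `map` calls are recalled from stack-queue's documentation and are assumptions that may not match every version of the crate exactly; treat this as an illustrative sketch, not a definitive implementation.

```rust
// Assumed Cargo.toml for this sketch:
//   stack-queue = "0.14"
//   async-local = { version = "*", features = ["barrier-protected-runtime"] }
//   tokio = { version = "1", features = ["full"] }

use stack_queue::{local_queue, task::PendingAssignment, TaskQueue};

struct EchoQueue;

// `local_queue` generates the thread-owned circular buffer backing this queue;
// the buffer size bounds how large a batch may grow (argument name assumed).
#[local_queue(buffer_size = 64)]
impl TaskQueue for EchoQueue {
  type Task = u64;
  type Value = u64;

  // The batch arrives as a pending assignment over a range of the buffer.
  // Slow setup work (e.g. acquiring a database connection) can happen here,
  // before `into_assignment()` bounds the batch, so tasks keep accumulating
  // in the meantime (the opportunistic batching described above).
  async fn batch_process<const N: usize>(
    batch: PendingAssignment<'_, Self, N>,
  ) -> Self::BatchResult {
    batch.into_assignment().map(|value| value)
  }
}

// Setting `crate = "async_local"` routes the runtime macro through async-local,
// whose barrier-protected-runtime feature adds the shutdown barrier described above.
#[tokio::main(crate = "async_local")]
async fn main() {
  // `auto_batch` enqueues one task and awaits its individual result.
  let echoed = EchoQueue::auto_batch(42).await;
  assert_eq!(echoed, 42);
}
```

A BackgroundQueue impl is similar but processes batches without per-task receivers, and BatchReducer folds batched values into a single result; consult the crate docs for their exact signatures.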
| crossbeam | flume | TaskQueue | BackgroundQueue | tokio::mpsc |
|---|---|---|---|---|
| 1.67 us (✅ 1.00x) | 1.95 us (❌ 1.17x slower) | 942.72 ns (✅ 1.77x faster) | 638.45 ns (🚀 2.62x faster) | 1.91 us (❌ 1.14x slower) |