Crates.io | hopscotch |
lib.rs | hopscotch |
version | 0.1.1 |
source | src |
created_at | 2020-04-09 22:43:28.360401 |
updated_at | 2020-06-04 19:56:06.94651 |
description | A FIFO queue for efficiently hopping and skipping between tagged items |
homepage | http://www.github.com/kwf/hopscotch |
repository | http://www.github.com/kwf/hopscotch |
max_upload_size | |
id | 228174 |
size | 198,080 |
A `hopscotch::Queue<K, V>` is a first-in-first-out queue where each `Item<K, V>` in the queue has a value of type `V`, a tag of type `K`, and an `index` of type `u64`, assigned in sequential order of insertion starting at 0 when the queue is created.

In addition to supporting the ordinary `push`, `pop`, and `get` methods of a FIFO queue, a hopscotch queue also supports the special, optimized methods `after`, `after_mut`, `before`, and `before_mut`.
These methods find the next (or previous) `Item` (or `ItemMut`) in the queue whose tag is equal to any of a given set of tags. For instance, the signature of `after` is:

```rust
pub fn after<'a, Tags>(&self, index: u64, tags: Tags) -> Option<Item<K, V>>
where
    Tags: IntoIterator<Item = &'a K>,
    K: 'a,
```

If we use `&[K]` for `Tags`, this signature simplifies to:

```rust
pub fn after(&self, index: u64, tags: &[K]) -> Option<Item<K, V>>
```
These methods are the real benefit of using a hopscotch queue over another data structure: they let you skip and hop between sets of tags in almost-constant time, regardless of the size of the queue.
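As a rough illustration of why such queries can be fast, here is a minimal, self-contained sketch (not the crate's actual code) of the idea the README mentions later: map each tag to a sorted vector of insertion indices, then binary-search for the first index at or after a given one. The function `after_index`, the `u32` tag type, and the example data are all hypothetical.

```rust
use std::collections::HashMap;

/// Sketch of an `after`-style query: for each requested tag, binary-search
/// that tag's sorted index vector for the first index >= `index`, then take
/// the earliest hit across all requested tags.
fn after_index(
    tag_indices: &HashMap<u32, Vec<u64>>, // tag -> sorted insertion indices
    index: u64,
    tags: &[u32],
) -> Option<u64> {
    tags.iter()
        .filter_map(|tag| {
            let xs = tag_indices.get(tag)?;
            // partition_point returns the first position whose index >= `index`
            let pos = xs.partition_point(|&i| i < index);
            xs.get(pos).copied()
        })
        .min() // earliest matching item among the requested tags
}

fn main() {
    let mut tag_indices: HashMap<u32, Vec<u64>> = HashMap::new();
    // Items pushed in order: tag 0 at indices 0 and 3; tag 1 at 1 and 4;
    // tag 2 at index 2.
    tag_indices.insert(0, vec![0, 3]);
    tag_indices.insert(1, vec![1, 4]);
    tag_indices.insert(2, vec![2]);

    // First item at index >= 2 tagged 0 or 1 is the one at index 3.
    assert_eq!(after_index(&tag_indices, 2, &[0, 1]), Some(3));
    // No item tagged 2 remains at index >= 3.
    assert_eq!(after_index(&tag_indices, 3, &[2]), None);
    println!("ok");
}
```

Each lookup costs one binary search per requested tag, independent of the total number of items, which matches the almost-constant query behavior described above.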
Hopscotch queues also provide flexible iterators `Iter` and `IterMut` to efficiently traverse portions of their contents, sliced by index using `Iter(Mut)::until` and `Iter(Mut)::after`, and filtered by a set of desired tags using `Iter(Mut)::matching_only`.
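To make this kind of sliced, tag-filtered traversal concrete, here is a minimal sketch using plain standard-library iterator adaptors over `(index, tag, value)` tuples; the tuple layout and example data are assumptions for illustration only, and the crate's `Iter(Mut)` achieves the equivalent without scanning every item.

```rust
fn main() {
    // Items as (index, tag, value) tuples, in insertion order.
    let items = [(0u64, 'a', 10), (1, 'b', 20), (2, 'a', 30), (3, 'c', 40)];
    let wanted = ['a', 'c'];

    let selected: Vec<i32> = items
        .iter()
        // Slice by index, like after(0) combined with until(3):
        .filter(|(i, _, _)| (1..=3).contains(i))
        // Keep only the desired tags, like matching_only:
        .filter(|(_, tag, _)| wanted.contains(tag))
        .map(|&(_, _, v)| v)
        .collect();

    assert_eq!(selected, vec![30, 40]);
    println!("ok");
}
```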
This queue performs well in query-heavy workloads. One use-case for such a structure is implementing a bounded pub/sub event buffer where clients interested in a particular subset of events repeatedly query for the next event matching their desired subset. This scenario plays to the strengths of a hopscotch queue, because each query can be performed very quickly, regardless of the contents or size of the buffer.
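A minimal, hypothetical sketch of such a bounded event buffer, using only the standard library; the `EventBuffer` type and its methods are invented for illustration, and `next_matching` does a linear scan here where a real implementation would use a hopscotch queue's `after` to answer the same question quickly.

```rust
use std::collections::VecDeque;

/// A bounded pub/sub event buffer: old events are evicted once the buffer
/// is full, and each client tracks its own cursor (the index of the next
/// event it wants to see).
struct EventBuffer {
    capacity: usize,
    next_index: u64, // index assigned to the next pushed event
    events: VecDeque<(u64, &'static str, String)>, // (index, tag, payload)
}

impl EventBuffer {
    fn new(capacity: usize) -> Self {
        EventBuffer { capacity, next_index: 0, events: VecDeque::new() }
    }

    /// Push an event, evicting the oldest one when the buffer is full.
    fn push(&mut self, tag: &'static str, payload: impl Into<String>) {
        if self.events.len() == self.capacity {
            self.events.pop_front();
        }
        self.events.push_back((self.next_index, tag, payload.into()));
        self.next_index += 1;
    }

    /// A client polls for the next event at or after `cursor` whose tag is
    /// in `tags`. (Linear scan here; `hopscotch::Queue::after` makes this
    /// fast regardless of buffer size.)
    fn next_matching(&self, cursor: u64, tags: &[&str]) -> Option<(u64, &str)> {
        self.events
            .iter()
            .find(|(i, tag, _)| *i >= cursor && tags.contains(tag))
            .map(|(i, _, payload)| (*i, payload.as_str()))
    }
}

fn main() {
    let mut buf = EventBuffer::new(4);
    buf.push("login", "alice");
    buf.push("click", "button-1");
    buf.push("login", "bob");

    // A client interested only in "login" events, starting from index 1:
    assert_eq!(buf.next_matching(1, &["login"]), Some((2, "bob")));
    // No further logins after index 3.
    assert!(buf.next_matching(3, &["login"]).is_none());
    println!("ok");
}
```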
The `push` and `pop` operations are slower by a considerable constant factor than their equivalents for `VecDeque<(K, T)>`, but for this price, you gain the ability to skip and hop between sets of tags in almost-constant time.
In scenarios with a few hundred tags, `push` and `pop` operations on my machine (a recent-ish MacBook Pro) take on the order of low single-digit microseconds (hundreds of thousands of operations per second), and `after`/`before` queries take on the order of tens of nanoseconds (tens of millions of operations per second), depending on the number of tags present in the queue. More detailed and scientific benchmarks are to come; contributions are welcome!
If this is similar to your needs, this data structure might be for you! Compared to a `VecDeque<(K, T)>`: a hopscotch queue is built from a `VecDeque<T>` of values and a mapping from tags to sorted vectors of indices, which is what makes its tag-based queries fast. If you don't need those queries, a plain `VecDeque<(K, T)>` may also suffice.