| | |
|---|---|
| Crates.io | emap |
| lib.rs | emap |
| version | 0.0.13 |
| source | src |
| created_at | 2023-04-23 16:59:56.831549 |
| updated_at | 2023-04-30 17:29:28.810189 |
| description | A map with a fixed capacity and integers as keys |
| homepage | https://github.com/yegor256/emap |
| repository | https://github.com/yegor256/emap |
| max_upload_size | |
| id | 846689 |
| size | 62,394 |
This is an alternative on-heap implementation of a map with keys of type `usize` and a fixed capacity. It works much faster than a standard `HashMap` because it allocates memory for all keys at once, which makes the cost of `get()` O(1). The trade-off is that `iter()` becomes more expensive, since the iterator has to skip over empty keys. To help with this, there is a supplementary function `next_key()`, which returns the next available key in the map. Use it to keep the keys sequential, which guarantees O(1) cost of `next()` in iterators. If `usize` keys are placed sequentially, our only true competitor is `std::vec::Vec`. We beat it too; see the benchmark results below.
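To make the design concrete, here is a minimal sketch of the idea (not the crate's actual code) built on `Vec<Option<V>>`: `get()` is a plain index, while iteration has to skip empty slots, which is why keeping keys sequential matters.

```rust
/// A simplified illustration of a fixed-capacity map with `usize` keys,
/// backed by `Vec<Option<V>>`. This is NOT the emap implementation,
/// just a sketch of the design described above.
struct FixedMap<V> {
    slots: Vec<Option<V>>,
}

impl<V> FixedMap<V> {
    fn with_capacity(cap: usize) -> Self {
        // Allocate all slots up front; the map never grows.
        Self { slots: (0..cap).map(|_| None).collect() }
    }

    fn insert(&mut self, k: usize, v: V) {
        // Panics if `k` is outside the fixed capacity.
        self.slots[k] = Some(v);
    }

    fn get(&self, k: usize) -> Option<&V> {
        // O(1): a bounds check and an index, no hashing.
        self.slots.get(k).and_then(|s| s.as_ref())
    }

    fn next_key(&self) -> usize {
        // First empty slot (a real implementation would track this more cleverly).
        self.slots.iter().position(|s| s.is_none()).expect("map is full")
    }

    fn iter(&self) -> impl Iterator<Item = (usize, &V)> + '_ {
        // Iteration must skip empty slots, which is the cost of this design.
        self.slots
            .iter()
            .enumerate()
            .filter_map(|(k, s)| s.as_ref().map(|v| (k, v)))
    }
}
```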
First, add this to `Cargo.toml`:

```toml
[dependencies]
emap = "0.0.11"
```
Then, use it like a standard hash map... well, almost:

```rust
use emap::Map;
let mut m: Map<&str> = Map::with_capacity_init(100); // allocation on heap
m.insert(m.next_key(), "foo");
m.insert(m.next_key(), "bar");
assert_eq!(2, m.len());
```
If you add more than 100 keys to the map, it will panic. The map doesn't grow automatically, as `Vec` does (this is one of the reasons why we are faster).
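As a small sketch of that limit, using only the calls shown in the example above, filling the map up to its capacity works, and the commented-out extra insert is the one that would panic:

```rust
use emap::Map;

// The capacity is fixed at construction time.
let mut m: Map<u32> = Map::with_capacity_init(100);
for _ in 0..100 {
    m.insert(m.next_key(), 42); // fills keys 0..100 sequentially
}
assert_eq!(100, m.len());
// One more insert would exceed the fixed capacity and panic:
// m.insert(m.next_key(), 42);
```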
Read the API documentation. The struct `emap::Map` is designed to be as similar to `std::collections::HashMap` as possible.
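For instance, the two can be used almost interchangeably; the visible differences are that `emap::Map` takes `usize` keys (typically obtained from `next_key()`) and that its capacity is fixed up front. A sketch, assuming only the calls shown in this README:

```rust
use std::collections::HashMap;
use emap::Map;

// Standard HashMap: arbitrary hashable keys, grows on demand.
let mut h: HashMap<usize, &str> = HashMap::new();
h.insert(0, "foo");

// emap::Map: usize keys, capacity fixed at construction.
let mut m: Map<&str> = Map::with_capacity_init(16);
m.insert(m.next_key(), "foo");

assert_eq!(h.len(), m.len());
```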
Below is a summary of a simple benchmark, in which we compared `emap::Map` with `Vec`, varying their total capacity `CAP` (the horizontal axis). We applied the same operations (`benchmark.rs`) to both of them and measured how fast they performed. In the following table, numbers above 1.0 indicate a performance gain of `Map` over `Vec`, while numbers below 1.0 indicate a performance loss.
| | 4 | 16 | 256 | 4096 |
|---|---|---|---|---|
| `i ∈ 0..CAP {M.insert(i, &"Hello, world!")}` | 1.17 | 2.89 | 1.26 | 1.27 |
| `i ∈ 0..CAP {M.insert(i, &"大家好"); s ∈ M.values() {sum += s.len()}}` | 1.05 | 0.73 | 0.65 | 0.51 |
| `i ∈ 0..CAP {M.insert(i, &42); s ∈ M.into_values() {sum += s}}` | 1.05 | 0.91 | 0.61 | 0.68 |
| `i ∈ 0..CAP {M.insert(i, &42); s ∈ M.keys() {sum += s}}` | 0.98 | 0.69 | 0.34 | 0.34 |
| `i ∈ 0..CAP {M.insert(i, &42); s ∈ M.values() {sum += s}}` | 1.07 | 0.81 | 0.51 | 0.51 |
| `i ∈ 0..CAP {M.insert(i, &42)}; M.clear(); M.len();` | 1.20 | 2.72 | 5.17 | 9.30 |
| `i ∈ 0..CAP {M.insert(i, &42)}; i ∈ CAP-1..0 {M.remove(&i)}` | 1.23 | 2.19 | 1.34 | 1.03 |
The experiment was performed on 30-04-2023. There were 10000 repetition cycles. The entire benchmark took 486s.
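As a rough illustration of how one such ratio could be measured by hand (the actual numbers above come from `benchmark.rs`; the `CAP` and `CYCLES` values here are illustrative), one can time the same workload against both containers and divide the results. This sketch mirrors the first row of the table:

```rust
use std::time::Instant;
use emap::Map;

const CAP: usize = 4096;
const CYCLES: usize = 10_000;

fn main() {
    // Fill a Vec with CAP string references, repeated CYCLES times.
    let vec_time = {
        let start = Instant::now();
        for _ in 0..CYCLES {
            let mut v: Vec<&str> = Vec::with_capacity(CAP);
            for _ in 0..CAP {
                v.push("Hello, world!");
            }
            std::hint::black_box(v);
        }
        start.elapsed()
    };
    // The same workload against emap::Map.
    let map_time = {
        let start = Instant::now();
        for _ in 0..CYCLES {
            let mut m: Map<&str> = Map::with_capacity_init(CAP);
            for i in 0..CAP {
                m.insert(i, "Hello, world!");
            }
            std::hint::black_box(m);
        }
        start.elapsed()
    };
    // A ratio above 1.0 means Map was faster than Vec for this workload.
    println!("gain: {:.2}", vec_time.as_secs_f64() / map_time.as_secs_f64());
}
```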
First, install Rust and then:

```bash
$ cargo test -vv
```
If everything goes well, fork the repository, make your changes, and send us a pull request. We will review your changes and apply them to the `master` branch shortly, provided they don't violate our quality standards. To avoid frustration, before sending us your pull request, please run `cargo test` again. Also run `cargo fmt` and `cargo clippy`.
Also, before you start making changes, run the benchmarks:

```bash
$ rustup run nightly cargo bench
```

Then, after you make your changes, run them again and compare the results. If your changes degrade performance, think twice before submitting a pull request.