| Crates.io | small_hash_map |
| lib.rs | small_hash_map |
| version | 1.0.1 |
| created_at | 2026-01-21 06:03:22.988512+00 |
| updated_at | 2026-01-21 15:46:19.464666+00 |
| description | A hash map optimized for small collections with automatic stack-to-heap transition |
| homepage | |
| repository | https://github.com/shergin/small-hash-map |
| max_upload_size | |
| id | 2058434 |
| size | 71,893 |
A hash map optimized for small collections with automatic stack-to-heap transition.
In many programs, hash maps are created with only a few entries. The standard `HashMap` always allocates on the heap, which incurs an allocator call, pointer indirection, and poor cache locality.
For maps that typically hold 4-16 entries, linear search through a contiguous array is often faster than hash-based lookup due to cache efficiency.
SmallHashMap solves this by:

- storing entries in an `InlineMap` for small collections
- switching to a `HeapMap` when the inline capacity is exceeded
- accepting a configurable hasher via the `S` parameter (defaults to `RandomState`, same as `std::HashMap`)

This provides good performance across the full range of collection sizes.
```rust
use small_hash_map::SmallHashMap;

// Create a map with inline capacity of 8
let mut map: SmallHashMap<&str, i32, 8> = SmallHashMap::new();

// Standard map operations
map.insert("one", 1);
map.insert("two", 2);
map.insert("three", 3);

assert_eq!(map.get(&"two"), Some(&2));
assert_eq!(map.len(), 3);
assert!(map.contains_key(&"one"));

// Iteration
for (key, value) in map.iter() {
    println!("{}: {}", key, value);
}

// Remove entries
map.remove(&"two");
assert_eq!(map.get(&"two"), None);

// Automatic transition to HeapMap when exceeding capacity.
// (Box::leak produces the &'static str keys this example needs.)
for i in 0..20 {
    map.insert(Box::leak(format!("key{}", i).into_boxed_str()), i);
}
// Now using HeapMap internally, but the API remains the same
```
```rust
use small_hash_map::SmallHashMap;

// If you know you'll exceed inline capacity, start with HeapMap
let map: SmallHashMap<String, i32, 8> = SmallHashMap::with_capacity(100);
// Starts directly with HeapMap, avoiding transition overhead
```
Like std::collections::HashMap, SmallHashMap supports custom hashers:
```rust
use small_hash_map::SmallHashMap;
use std::collections::hash_map::RandomState;

// Default hasher (RandomState) - same as std::HashMap
let map1: SmallHashMap<String, i32, 8> = SmallHashMap::new();

// Explicit hasher
let map2: SmallHashMap<String, i32, 8, RandomState> =
    SmallHashMap::with_hasher(RandomState::new());

// With capacity and custom hasher
let map3: SmallHashMap<String, i32, 8, RandomState> =
    SmallHashMap::with_capacity_and_hasher(100, RandomState::new());

// Access the hasher
let _hasher: &RandomState = map2.hasher();
```
For performance-critical applications, you can use faster hashers like fxhash:
```rust
use fxhash::FxBuildHasher;
use small_hash_map::SmallHashMap;

let mut map: SmallHashMap<String, i32, 8, FxBuildHasher> =
    SmallHashMap::with_hasher(FxBuildHasher::default());
```
Once a SmallHashMap transitions from InlineMap to HeapMap, it never transitions back. Even if you remove elements, it remains heap-allocated.
```rust
let mut map: SmallHashMap<i32, i32, 4> = SmallHashMap::new();

// Fill and exceed capacity
for i in 0..10 {
    map.insert(i, i);
}
// Now using HeapMap

map.clear();
// Still HeapMap, even though empty
```
Mitigation: If you need to reclaim stack allocation, create a new SmallHashMap.
InlineMap uses O(n) linear search, not hash-based lookup. This is intentional and typically faster for small n due to cache locality, but becomes slower as n approaches the capacity limit.
Keys, values, and hashers require trait bounds depending on the operation:
- `new`, `default`: `K: Hash + Eq`, `S: BuildHasher + Default`
- `with_hasher`: `K: Hash + Eq`, `S: BuildHasher`
- `with_capacity`, `with_capacity_and_hasher`: `K: Hash + Eq`, `S: BuildHasher + Default + Clone`
- `insert`, `extend`: `K: Hash + Eq`, `S: BuildHasher + Clone`
- `get`, `remove`, etc.: `K: Hash + Eq`, `S: BuildHasher`
- `clone`: `K: Clone`, `V: Clone`, `S: Clone`
- `Debug`: `K: Debug`, `V: Debug`

Unlike `std::collections::HashMap`, there is no entry API for in-place mutation. The Entry API is problematic here because it holds a mutable borrow of the map's internals, but inserting via a `VacantEntry` could trigger a transition from `InlineMap` to `HeapMap`, invalidating that reference. Use `get`/`insert`/`remove` instead.
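The usual "insert or update" idiom (`*map.entry(k).or_insert(0) += 1`) therefore has to be written in two steps. A minimal sketch of the pattern, shown on `std::collections::HashMap` so it is runnable here; `SmallHashMap` exposes the same `get_mut`/`insert` methods:

```rust
use std::collections::HashMap;

// Counter update without an entry API: probe with get_mut first,
// fall back to insert when the key is absent.
fn bump(map: &mut HashMap<String, u32>, key: &str) {
    if let Some(count) = map.get_mut(key) {
        *count += 1; // key present: update in place
    } else {
        map.insert(key.to_string(), 1); // key absent: insert fresh
    }
}
```

This does hash the key twice on the miss path, which is the usual cost of not having an entry API.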
Stack-allocated storage using fixed-size arrays with MaybeUninit:
```rust
pub struct InlineMap<K, V, const N: usize> {
    keys: [MaybeUninit<K>; N],
    values: [MaybeUninit<V>; N],
    len: usize,
}
```
Uses `MaybeUninit` to avoid requiring `Default` for uninitialized slots.

`HeapMap` is a thin wrapper around `HashMap` with a configurable hasher:
```rust
pub struct HeapMap<K, V, S = RandomState> {
    map: HashMap<K, V, S>,
}
```
The hasher type `S` must implement `BuildHasher` and defaults to `RandomState` (same as `std::HashMap`).

`SmallHashMap` itself uses enum-based dispatch between the two implementations:
```rust
pub struct SmallHashMap<K, V, const N: usize, S = RandomState> {
    inner: MapKind<K, V, N, S>,
    transition_threshold: usize,
    hash_builder: S,
}

pub enum MapKind<K, V, const N: usize, S = RandomState> {
    InlineMap(InlineMap<K, V, N>),
    HeapMap(HeapMap<K, V, S>),
}
```
The hasher S is stored and used when transitioning to HeapMap. Transition occurs when inserting a new key would exceed capacity N.
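The transition logic can be sketched as follows. This is a simplified model, not the crate's actual implementation: a `Vec` stands in for the fixed-size inline array, and the default hasher stands in for the stored `S`:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Simplified model of the enum dispatch: Vec replaces the
// [MaybeUninit<K>; N] inline storage for clarity.
enum MapKind<K, V, const N: usize> {
    Inline(Vec<(K, V)>),
    Heap(HashMap<K, V>),
}

impl<K: Hash + Eq, V, const N: usize> MapKind<K, V, N> {
    fn insert(&mut self, key: K, value: V) -> Option<V> {
        match self {
            MapKind::Inline(pairs) => {
                // Existing key: update in place, no transition needed.
                if let Some((_, v)) = pairs.iter_mut().find(|(k, _)| *k == key) {
                    return Some(std::mem::replace(v, value));
                }
                if pairs.len() < N {
                    pairs.push((key, value));
                    None
                } else {
                    // Inserting a NEW key would exceed N: spill to the heap.
                    let mut heap: HashMap<K, V> = pairs.drain(..).collect();
                    let old = heap.insert(key, value);
                    *self = MapKind::Heap(heap);
                    old
                }
            }
            MapKind::Heap(map) => map.insert(key, value),
        }
    }

    fn is_inline(&self) -> bool {
        matches!(self, MapKind::Inline(_))
    }
}
```

Note that the spill happens only when a new key arrives at full capacity; updating an existing key never triggers it, matching the "transition occurs when inserting a new key would exceed capacity N" rule above.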
InlineMap uses `unsafe` for reading and writing the `MaybeUninit` slots (safe because we track `len`). All unsafe code maintains the invariant that only indices `0..len` contain initialized values.
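A minimal illustration of that invariant (a sketch under the assumptions above, not the crate's actual code; `Drop` handling is omitted, so this toy version would leak owned keys and values):

```rust
use std::mem::MaybeUninit;

// Invariant: slots 0..len are initialized, slots len..N are not.
struct InlineMap<K, V, const N: usize> {
    keys: [MaybeUninit<K>; N],
    values: [MaybeUninit<V>; N],
    len: usize,
}

impl<K: Eq, V, const N: usize> InlineMap<K, V, N> {
    fn new() -> Self {
        Self {
            // MaybeUninit needs no initialization, so K and V
            // do not need a Default bound.
            keys: std::array::from_fn(|_| MaybeUninit::uninit()),
            values: std::array::from_fn(|_| MaybeUninit::uninit()),
            len: 0,
        }
    }

    fn push(&mut self, key: K, value: V) {
        assert!(self.len < N, "inline capacity exceeded");
        self.keys[self.len].write(key);
        self.values[self.len].write(value);
        self.len += 1; // slot becomes part of the initialized prefix
    }

    fn get(&self, key: &K) -> Option<&V> {
        // Linear scan over the initialized prefix; cache-friendly for small N.
        for i in 0..self.len {
            // SAFETY: the struct invariant guarantees 0..len is initialized.
            let k = unsafe { self.keys[i].assume_init_ref() };
            if k == key {
                return Some(unsafe { self.values[i].assume_init_ref() });
            }
        }
        None
    }
}
```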
| Operation | InlineMap | HeapMap |
|---|---|---|
| `get()` | O(n) | O(1) average |
| `insert()` | O(n) | O(1) average |
| `remove()` | O(n) | O(1) average |
| `contains_key()` | O(n) | O(1) average |
| `iter()` | O(n) | O(n) |
| `clone()` | O(n) | O(n) |
| Memory | Stack, N×(K+V) | Heap, hash table overhead |
Crossover point: for typical key/value sizes, InlineMap outperforms HeapMap up to roughly 16-32 entries, thanks to contiguous, cache-friendly storage and zero hashing overhead. Beyond this, hash-based O(1) lookup wins.
| Crate | Storage | Heap Spill | SIMD | Key Constraint | Notes |
|---|---|---|---|---|---|
| SmallHashMap | Stack array | Yes | No | `Hash + Eq` | Automatic transition to `HashMap` |
| `small-map` | Stack array | Yes | Yes | `Hash + Eq` | SIMD-accelerated (SSE2/NEON) |
| `stackmap` | Stack array | No | No | `Hash + Eq` | Fixed capacity, panics if exceeded |
| `smallmap` | Page array | No | No | Byte-indexable | Max 256 entries, specialized |
| `vecmap-rs` | Heap Vec | N/A | No | `Eq` only | No `Hash` required, no_std |
SIMD-accelerated lookup (as in small-map) uses hash fingerprinting to compare 16 keys in parallel. However, this requires hashing the key on every lookup and storing a fingerprint byte per entry. For the typical use case of SmallHashMap (4-16 entries), a plain linear scan is faster because it skips hashing entirely and the handful of keys fits in one or two cache lines. SIMD becomes beneficial only when N ≥ 16 and the map is frequently near capacity.
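The fingerprint idea can be illustrated without SIMD: derive one byte from each key's hash and compare fingerprints before doing the full key comparison. A scalar sketch (small-map performs the fingerprint comparisons 16 at a time with SIMD registers; this toy version also skips duplicate-key handling):

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash};

// Scalar illustration of hash fingerprinting.
struct FingerprintMap<K, V> {
    fingerprints: Vec<u8>, // the extra byte per entry
    pairs: Vec<(K, V)>,
    hasher: RandomState,
}

impl<K: Hash + Eq, V> FingerprintMap<K, V> {
    fn new() -> Self {
        Self {
            fingerprints: Vec::new(),
            pairs: Vec::new(),
            hasher: RandomState::new(),
        }
    }

    fn fingerprint(&self, key: &K) -> u8 {
        // Top byte of the 64-bit hash serves as the fingerprint.
        (self.hasher.hash_one(key) >> 56) as u8
    }

    fn insert(&mut self, key: K, value: V) {
        self.fingerprints.push(self.fingerprint(&key));
        self.pairs.push((key, value));
    }

    fn get(&self, key: &K) -> Option<&V> {
        let fp = self.fingerprint(key); // hashing cost paid on EVERY lookup
        for (i, &f) in self.fingerprints.iter().enumerate() {
            // Full key comparison only when the cheap byte check matches.
            if f == fp && self.pairs[i].0 == *key {
                return Some(&self.pairs[i].1);
            }
        }
        None
    }
}
```

The up-front `hash_one` call on every `get` is exactly the overhead a plain linear scan avoids, which is why the scan wins at small sizes.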
small-map is an excellent crate with SIMD acceleration. If you're unsure which to use, try small-map first — it's well-designed and performs great across a wide range of sizes.
Choose SmallHashMap when your maps usually stay at or below ~8 entries and insert performance matters.

Choose small-map when your maps regularly grow past ~16 entries and lookups dominate.
| Aspect | SmallHashMap | small-map |
|---|---|---|
| Lookup (N ≤ 8) | Faster | Slower (hash overhead) |
| Lookup (N = 8-16) | ~Equal | ~Equal |
| Lookup (N > 16) | Slower | Faster (SIMD) |
| Insert | Faster | Slower (stores h2) |
| Memory/entry | K + V | K + V + 1 byte |
SmallHashMap applies the small-vector optimization pattern to hash maps:
| Aspect | SmallHashMap | std HashMap |
|---|---|---|
| Small collection | Stack allocated | Heap allocated |
| Memory locality | Excellent for small n | Hash table scattered |
| Allocation | Zero for small n | Always allocates |
| Lookup (small n) | O(n) but cache-friendly | O(1) but cache-unfriendly |
| Lookup (large n) | O(1) after transition | O(1) |
| Method | Description |
|---|---|
| `SmallHashMap::new()` | Creates empty map with inline storage |
| `SmallHashMap::with_capacity(n)` | Pre-sizes; uses HeapMap if n > N |
| `SmallHashMap::with_hasher(s)` | Creates with custom hasher |
| `SmallHashMap::with_capacity_and_hasher(n, s)` | Pre-sizes with custom hasher |
| `SmallHashMap::default()` | Same as `new()` |
| `iter.collect()` | Creates from iterator |
| `map.hasher()` | Returns reference to the hasher |
| Method | Returns | Description |
|---|---|---|
| `insert(k, v)` | `Option<V>` | Insert or update; returns old value |
| `get(&k)` | `Option<&V>` | Get reference to value |
| `get_mut(&k)` | `Option<&mut V>` | Get mutable reference |
| `get_key_value(&k)` | `Option<(&K, &V)>` | Get key-value pair |
| `remove(&k)` | `Option<V>` | Remove and return value |
| `contains_key(&k)` | `bool` | Check if key exists |
| Method | Returns | Description |
|---|---|---|
| `len()` | `usize` | Number of entries |
| `is_empty()` | `bool` | True if no entries |
| `capacity()` | `usize` | Current capacity |
| `is_inline()` | `bool` | True if using stack storage |
| `clear()` | `()` | Remove all entries |
| Method | Yields | Description |
|---|---|---|
| `iter()` | `(&K, &V)` | Immutable iteration |
| `iter_mut()` | `(&K, &mut V)` | Mutable value iteration |
| `keys()` | `&K` | Iterate over keys |
| `values()` | `&V` | Iterate over values |
| `values_mut()` | `&mut V` | Mutable value iteration |
| `into_iter()` | `(K, V)` | Consuming iteration |
| `retain(f)` | `()` | Filter in place |
Good fit:

- Maps that typically hold a handful of entries (roughly 4-16)
- Hot paths where avoiding heap allocation matters
- Workloads that are usually small but occasionally spill past the inline capacity

Poor fit:

- Consistently large collections (use `HashMap` directly)
- Code relying on `HashMap`-specific features such as the entry API
- `no_std` environments (depends on `std::collections::HashMap`)