| Crates.io | intern-mint |
| lib.rs | intern-mint |
| version | 0.2.0 |
| created_at | 2025-07-14 10:34:11.55916+00 |
| updated_at | 2025-07-28 12:00:32.36036+00 |
| description | byte slice interning |
| homepage | |
| repository | https://github.com/sweet-security/intern-mint |
| max_upload_size | |
| id | 1751498 |
| size | 1,756,821 |
intern-mint is an implementation of byte slice interning.
Slice interning is a memory management technique that stores identical slices once in a slice pool.
This can potentially save memory and avoid allocations in environments where data is repetitive.
Slices are kept as `Arc<[u8]>`s, using the triomphe crate for a smaller footprint.
The `Arc`s are then stored in a global static pool implemented as a dumbed-down version of DashMap.
The pool consists of N shards (depending on `available_parallelism`) of hashbrown hash tables, sharded by the slices' hashes, to avoid locking the entire table for each lookup.
When a slice is dropped, the total reference count is checked, and the slice is removed from the pool if needed.
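The sharded-pool design described above can be sketched with standard-library types (a minimal illustration, assuming `std::collections::HashMap` and `std::sync::Arc` in place of hashbrown and `triomphe::Arc`; the `ShardedPool` name is hypothetical, and removal on drop is omitted for brevity):

```rust
use std::collections::hash_map::RandomState;
use std::collections::HashMap;
use std::hash::{BuildHasher, Hasher};
use std::sync::{Arc, Mutex};

// The real crate derives the shard count from available_parallelism.
const SHARDS: usize = 8;

struct ShardedPool {
    hasher: RandomState,
    shards: Vec<Mutex<HashMap<Arc<[u8]>, ()>>>,
}

impl ShardedPool {
    fn new() -> Self {
        Self {
            hasher: RandomState::new(),
            shards: (0..SHARDS).map(|_| Mutex::new(HashMap::new())).collect(),
        }
    }

    fn intern(&self, data: &[u8]) -> Arc<[u8]> {
        // Pick a shard by the slice's hash, so only that shard is locked.
        let mut h = self.hasher.build_hasher();
        h.write(data);
        let shard = &self.shards[(h.finish() as usize) % SHARDS];

        let mut map = shard.lock().unwrap();
        if let Some((existing, _)) = map.get_key_value(data) {
            // Already interned: hand out another reference to the same allocation.
            return existing.clone();
        }
        let arc: Arc<[u8]> = Arc::from(data);
        map.insert(arc.clone(), ());
        arc
    }
}

fn main() {
    let pool = ShardedPool::new();
    let a = pool.intern(b"hello");
    let b = pool.intern(b"hello");
    // Identical slices resolve to the same allocation.
    assert!(Arc::ptr_eq(&a, &b));
}
```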
`Interned` is the main type offered by this crate, responsible for interning slices.
There is also `&BorrowedInterned` to pass around instead of cloning `Interned` instances when a clone is not needed,
and to avoid passing `&Interned`, which would require a double dereference to access the data.
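The double-dereference point mirrors `String` versus `&str` in the standard library; a small analogy using only std types (not the crate's code):

```rust
// Interned is to &BorrowedInterned roughly as String is to &str:
// taking &str reaches the data in one pointer hop, while &String
// needs two (&String -> String -> data).
fn len_via_slice(s: &str) -> usize {
    s.len() // one hop to the data
}

fn len_via_owned(s: &String) -> usize {
    s.len() // two hops: &String -> String -> data
}

fn main() {
    let owned = String::from("hello");
    // Deref coercion turns &String into &str at the call site.
    assert_eq!(len_via_slice(&owned), 5);
    assert_eq!(len_via_owned(&owned), 5);
}
```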
Same data will be held at the same address:

```rust
use intern_mint::Interned;

let a = Interned::new(b"hello");
let b = Interned::new(b"hello");
assert_eq!(a.as_ptr(), b.as_ptr());
```
`&BorrowedInterned` can be used with hash-maps.
Note that the pointer is used for hashing and comparing (see the `Hash` and `PartialEq` trait implementations),
as opposed to hashing and comparing the actual data - because the pointers are unique for the same data as long as it "lives" in memory.

```rust
use intern_mint::{BorrowedInterned, Interned};

let map = std::collections::HashMap::<Interned, u64>::from_iter([(Interned::new(b"key"), 1)]);

let key = Interned::new(b"key");
assert_eq!(map.get(&key), Some(&1));

let borrowed_key: &BorrowedInterned = &key;
assert_eq!(map.get(borrowed_key), Some(&1));
```
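The pointer-based `Hash`/`PartialEq` trick can be sketched with a newtype over `Arc<[u8]>` (an illustration of the idea, not the crate's actual implementation; `PtrInterned` is a hypothetical name):

```rust
use std::hash::{Hash, Hasher};
use std::sync::Arc;

struct PtrInterned(Arc<[u8]>);

impl PartialEq for PtrInterned {
    fn eq(&self, other: &Self) -> bool {
        // While interned, each distinct content has exactly one allocation,
        // so comparing addresses is equivalent to comparing the data.
        Arc::ptr_eq(&self.0, &other.0)
    }
}
impl Eq for PtrInterned {}

impl Hash for PtrInterned {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Hash the address instead of the bytes: O(1) instead of O(len).
        (Arc::as_ptr(&self.0) as *const u8 as usize).hash(state);
    }
}

fn main() {
    let shared: Arc<[u8]> = Arc::from(&b"hello"[..]);
    let a = PtrInterned(shared.clone());
    let b = PtrInterned(shared);
    assert!(a == b); // same allocation, so equal
}
```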
`&BorrowedInterned` can be used with btree-maps as well:

```rust
use intern_mint::{BorrowedInterned, Interned};

let map = std::collections::BTreeMap::<Interned, u64>::from_iter([(Interned::new(b"key"), 1)]);

let key = Interned::new(b"key");
assert_eq!(map.get(&key), Some(&1));

let borrowed_key: &BorrowedInterned = &key;
assert_eq!(map.get(borrowed_key), Some(&1));
```
The following features are available:
- `bstr` to add some type conversions, and the `Debug` and `Display` traits, by using the bstr crate - disabled by default
- `serde` to add the `Serialize` and `Deserialize` traits provided by the serde crate - disabled by default
- `databuf` to add the `Encode` and `Decode` traits provided by the databuf crate - disabled by default

In the comparison benchmark, intern-mint is compared to the crates internment and intern-arc.
The benchmark runs multi-threaded (one thread per available core) and uses per-thread standard-library hash-maps to insert, modify, and get values using interned keys.
On my personal machine (base model M4 MacBook Air), intern-mint performed 1.22x faster than internment, and 4.19x faster than intern-arc.
On my work machine (12th Gen Intel i7-1260P), intern-mint performed 1.77x faster than internment, and 52.36x faster than intern-arc.
I suspect the difference between my personal and work machines is so dramatic because the work machine has twice as many cores, and intern-arc uses a single mutex for the entire intern pool.
`cargo bench` can be used to run the benchmark locally.