| Crates.io | monotree |
| lib.rs | monotree |
| version | 0.4.0 |
| created_at | 2020-04-06 03:17:58.455182+00 |
| updated_at | 2025-09-02 10:54:19.585875+00 |
| description | Rust implementation of an optimized Sparse Merkle Tree |
| homepage | https://github.com/thyeem/monotree |
| repository | https://github.com/thyeem/monotree |
| max_upload_size | |
| id | 226763 |
| size | 182,493 |

Rust implementation of an optimized Sparse Merkle Tree.
monotree is a kind of binary radix tree based on bitwise branching; for now, the branching unit is a single bit, not a 4-bit or byte-wide nibble.
See monotree.py for a pure Python implementation of monotree.
This library relies mostly on the Rust standard library, except for the database APIs and hashers.
Currently, monotree supports the following databases and hash functions, and it is designed to be easy to customize and extend:
Databases include: MemoryDB (an in-memory HashMap), RocksDB, and Sled.
Hashers include: Blake3, Blake2s, Blake2b, Sha2, and Sha3.
Add the dependency to Cargo.toml:

```toml
[dependencies]
monotree = "0.4.0"

# Or, to use specific database backends such as 'rocksdb' or 'sled':
# monotree = { version = "0.4.0", features = ["db_rocksdb", "db_sled"] }
```
Or use cargo add:

```console
$ cargo add monotree
$ cargo add monotree -F "db_rocksdb, db_sled"
```
Refer to examples/basic.rs for a complete working example.
For inclusion and non-inclusion proofs, see the Merkle proof section in More Examples below.
```rust
use monotree::utils::random_hash;
use monotree::Monotree;

// Note: the `?` operator below assumes this snippet runs inside a function
// that returns a compatible Result.

// If no database or hash function is specified,
// the tree defaults to using a HashMap and the Blake3 hash function.
let mut tree = Monotree::default();

// Generate a random key and leaf pair.
// random_hash() creates a fixed-length random array of 32 bytes.
let key = random_hash();
let leaf = random_hash();

// The root of an empty tree is `None`.
let root = None;

// Insert the entry (key, leaf) into the tree, yielding a new root.
let root = tree.insert(root.as_ref(), &key, &leaf)?;
assert_ne!(root, None);

// Retrieve the leaf inserted just before. Note that the latest root is used.
let found = tree.get(root.as_ref(), &key)?;
assert_eq!(found, Some(leaf));

// Remove the entry.
let root = tree.remove(root.as_ref(), &key)?;

// The tree is empty again and the root is back to `None`.
assert_eq!(tree.get(root.as_ref(), &key)?, None);
assert_eq!(root, None);
```
Instead of executing each operation one by one, you can write them in a batch and then commit them all at once.
The same flow as above can be run as a batch operation, for a performance gain and atomicity. In short:
prepare → many of {insert, get, remove} → commit
```rust
// Initialize an empty batch: prepare the transaction.
tree.prepare();
```

Then call insert, get, and remove as usual within the batch:

```rust
// Insert the entry (key, leaf) within the batch.
let root = tree.insert(root.as_ref(), &key, &leaf)?;
assert_ne!(root, None);

// Retrieve the inserted leaf using the batch root.
let found = tree.get(root.as_ref(), &key)?;
assert_eq!(found, Some(leaf));

// Remove the entry within the same batch.
let root = tree.remove(root.as_ref(), &key)?;

// Ensure the entry was removed within the batch.
assert_eq!(tree.get(root.as_ref(), &key)?, None);

// Commit the batch operations: commit the transaction.
tree.commit();

// Verify that the entry is still absent after the commit.
assert_eq!(tree.get(root.as_ref(), &key)?, None);
```
All benchmarks were performed cumulatively: the root produced by the preceding operation was reused for the next one. They were carried out with randomly generated byte entries, starting from the root of an empty tree (deletion starts from the last root).
Tested on an Intel Core i5-3570 CPU @ 3.4 GHz with 16 GB RAM, on Ubuntu 18.04 with rustc stable 1.42.0. Blake3 was used as the hasher in this benchmark.
| Op. | DB used | #Entries | Time Measured | Std Error | per Op. |
|---|---|---|---|---|---|
| Insert | HashMap | 10 | 8.50 us | 0.01 us | 0.85 us |
| Insert | HashMap | 100 | 165.71 us | 0.14 us | 1.66 us |
| Insert | HashMap | 1000 | 2.32 ms | 0.01 ms | 2.32 us |
| Insert | HashMap | 10000 | 37.39 ms | 0.02 ms | 3.74 us |
| Get | HashMap | 10 | 7.82 us | 0.01 us | 0.78 us |
| Get | HashMap | 100 | 114.51 us | 0.14 us | 1.15 us |
| Get | HashMap | 1000 | 1.52 ms | 0.01 ms | 1.52 us |
| Get | HashMap | 10000 | 20.40 ms | 0.01 ms | 2.04 us |
| Remove | HashMap | 10 | 15.65 us | 0.01 us | 1.57 us |
| Remove | HashMap | 100 | 282.68 us | 0.09 us | 2.83 us |
| Remove | HashMap | 1000 | 4.23 ms | 0.01 ms | 4.23 us |
| Remove | HashMap | 10000 | 69.15 ms | 0.07 ms | 6.92 us |
| Insert | RocksDB | 10 | 34.80 us | 0.27 us | 3.48 us |
| Insert | RocksDB | 100 | 602.55 us | 2.72 us | 6.03 us |
| Insert | RocksDB | 1000 | 10.35 ms | 0.06 ms | 10.35 us |
| Insert | RocksDB | 10000 | 145.83 ms | 0.20 ms | 14.58 us |
| Get | RocksDB | 10 | 10.20 us | 0.06 us | 1.02 us |
| Get | RocksDB | 100 | 142.20 us | 0.79 us | 1.42 us |
| Get | RocksDB | 1000 | 1.74 ms | 0.01 ms | 1.74 us |
| Get | RocksDB | 10000 | 22.94 ms | 0.03 ms | 2.29 us |
| Remove | RocksDB | 10 | 58.33 us | 0.50 us | 5.83 us |
| Remove | RocksDB | 100 | 1.02 ms | 6.40 us | 10.20 us |
| Remove | RocksDB | 1000 | 20.22 ms | 0.10 ms | 20.22 us |
| Remove | RocksDB | 10000 | 394.95 ms | 1.46 ms | 39.50 us |
The test suite performs integration tests with all combinations of operations and tree types, covering all of the supported databases and hashers:

```console
# Some tests are time consuming.
# --release (-r) is optional, but without it the tests take much longer to complete.
$ cargo test -r --all-features
```

The benchmark suite runs micro-benchmarks based on Criterion, again over all combinations of operations and tree types:

```console
$ cargo bench --all-features
```

Macroscopic wall-time benchmarks (with rather wider error bars) covering all tree types are also provided in examples/perf.rs:

```console
$ cargo run -r --example perf --all-features
```

There are also examples showing how to use monotree:

```console
$ cargo run --example basic
$ cargo run -r --example advanced --all-features
```
Refer to examples/advanced.rs for a complete working example.
```rust
// Init a monotree instance with a database and hash function:
//
//     Monotree::<DATABASE, HASHER>::new(DB_PATH)
//
// where DATABASE = {MemoryDB, RocksDB, Sled}
//       HASHER   = {Blake3, Blake2s, Blake2b, Sha2, Sha3}
let mut tree = Monotree::<RocksDB, Blake2b>::new("/tmp/monotree");
```
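The same pattern applies to the other backends and hashers listed above. The sketch below assumes the database and hasher types are importable from `monotree::database` and `monotree::hasher` (the exact paths may differ between versions), and that the relevant feature flags such as `db_sled` are enabled:

```rust
// Assumed import paths; adjust to the crate version in use.
use monotree::database::{MemoryDB, Sled};
use monotree::hasher::{Blake3, Sha3};
use monotree::Monotree;

// In-memory tree with SHA3: handy for tests, nothing is persisted
// (the path argument is ignored by the in-memory backend).
let mut mem_tree = Monotree::<MemoryDB, Sha3>::new("unused");

// Sled-backed tree with Blake3: state is persisted under the given path
// (requires the `db_sled` feature).
let mut sled_tree = Monotree::<Sled, Blake3>::new("/tmp/monotree-sled");
```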
As shown in Quick Start above, operations executed between tree.prepare() and tree.commit() can be viewed as a single batch operation: by default they are cached and written together as one batch unit.
However, if you need to repeat the same operation, such as insert or remove, you can easily gain performance by using inserts() and removes().
inserts() is significantly faster than calling insert() repeatedly, since the entries are processed and written as a single batch:
```rust
// Prepare 100 random pairs of key and leaf.
// random_hashes(SIZE) creates a vector of SIZE random 32-byte arrays.
let keys = random_hashes(100);
let leaves = random_hashes(100);

// The root of an empty tree is `None`.
let root = None;

// Insert a vector of (key, leaf) entries into the tree.
let root = tree.inserts(root.as_ref(), &keys, &leaves)?;
assert_ne!(root, None);
```
Similarly, gets() and removes() are also designed for batch usage:

```rust
let result = tree.gets(root.as_ref(), &keys)?;
assert_eq!(result.len(), keys.len());

let root = tree.removes(root.as_ref(), &keys)?;
// The tree is empty again and the root is back to `None`.
assert_eq!(root, None);
```
monotree uses a compressed representation, but it fully retains the core properties of the Sparse Merkle Trie.
A non-inclusion proof is quite straightforward: just walk down the tree with the given key (or path).
If the walk fails to reach a leaf, the key is provably not part of the tree.
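In code, the non-inclusion check is simply a failed lookup. A minimal sketch, where `absent_key` is an illustrative value assumed never to have been inserted:

```rust
// A key that was never inserted cannot be found by walking down the tree,
// which is exactly the non-inclusion argument above.
let absent_key = random_hash(); // illustrative: assumed not to collide with any inserted key
assert_eq!(tree.get(root.as_ref(), &absent_key)?, None);
```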
The process of an inclusion proof is outlined below:
Prepare a random tree for testing:

```rust
// Random insertions for testing Merkle proof generation.
let root = tree.inserts(root.as_ref(), &keys, &leaves)?;

// Pick a key from among the keys inserted just before.
let key = keys[99];
```

Generate a Merkle proof for the given root and key:

```rust
let proof = tree.get_merkle_proof(root.as_ref(), &key)?;
```

Prepare the target leaf to verify against the root above:

```rust
// This is where the Merkle proof verification starts off.
let leaf = leaves[99];
```

To verify the proof correctly, you need to provide a matching hasher:

```rust
// Previously, the tree was initialized with `Blake2b`.
let hasher = Blake2b::new();
```

Just call verify_proof(&HASHER, &ROOT, &LEAF, &PROOF). That's all!

```rust
let verified = verify_proof(&hasher, root.as_ref(), &leaf, proof.as_ref());
assert!(verified);
```
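As a quick sanity check, a sketch of the negative case: verifying the same proof against a different leaf should fail, since the proof binds a specific (root, leaf) pair. The `wrong_leaf` value below is illustrative:

```rust
// Recomputing the root from a different leaf and the same proof
// should not reproduce the original root.
let wrong_leaf = random_hash(); // illustrative leaf that was never inserted
let verified = verify_proof(&hasher, root.as_ref(), &wrong_leaf, proof.as_ref());
assert!(!verified);
```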
A Sparse Merkle Trie is a space where multiple states (roots) coexist simultaneously.
There is no reason to treat one state as more privileged than another; the latest root is simply the most recently updated state.
monotree has a minimal design: it does not automatically update or track the latest root.
However, monotree provides tools to set and fetch the latest root.
Use set_headroot(&LATEST_ROOT) to store the latest root in the database:

```rust
tree.set_headroot(root.as_ref());
```

Use get_headroot() to get the latest root from the database:

```rust
let headroot = tree.get_headroot()?;
assert_eq!(headroot, root);
```
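For example, a later session can recover the stored root and keep updating the tree from there. This is only a sketch: the path and the key/leaf values are illustrative, and it assumes the tree was created with a persistent backend such as RocksDB:

```rust
// Re-open the same database in a later session (path is illustrative).
let mut tree = Monotree::<RocksDB, Blake2b>::new("/tmp/monotree");

// Recover the most recently stored root.
let root = tree.get_headroot()?;

// Continue updating the tree from that root, then store the new head root.
let key = random_hash();
let leaf = random_hash();
let root = tree.insert(root.as_ref(), &key, &leaf)?;
tree.set_headroot(root.as_ref());
```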
monotree is a special case of the generalized binary radix tree; I propose to call this family the Power Of Two radix tree, or PoT.
If monotree were generalized to pow(2, n) branching, with an n-bit nibble as the branching unit, there would be room for further performance improvement.
One can easily make this improvement by building such tries on top of monotree.