| Crates.io | mobench-macros |
| lib.rs | mobench-macros |
| version | 0.1.13 |
| created_at | 2026-01-13 09:22:11.790012+00 |
| updated_at | 2026-01-21 16:57:48.456365+00 |
| description | Proc macros for mobench-sdk - #[benchmark] attribute |
| homepage | |
| repository | https://github.com/worldcoin/mobile-bench-rs |
| max_upload_size | |
| id | 2039785 |
| size | 23,647 |
Procedural macros for the mobench mobile benchmarking framework.
This crate provides the #[benchmark] attribute macro that automatically registers functions for mobile benchmarking. It uses compile-time registration via the inventory crate to build a registry of benchmark functions.
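For background, here is a minimal, self-contained sketch of that registration pattern using the inventory crate directly. The BenchFunction struct below is illustrative only, not the type mobench actually defines:

```rust
// Illustrative only: a pared-down version of compile-time registration
// with the inventory crate. mobench's real BenchFunction differs.
struct BenchFunction {
    name: &'static str,
    func: fn(),
}

// Declare a registry for this type.
inventory::collect!(BenchFunction);

fn example_bench() {
    std::hint::black_box(2u64 + 2);
}

// Register an entry at compile time; #[benchmark] generates code like this.
inventory::submit! {
    BenchFunction { name: "example_bench", func: example_bench }
}

fn main() {
    // Every submitted entry is visible here, across all linked crates.
    for bench in inventory::iter::<BenchFunction> {
        println!("running {}", bench.name);
        (bench.func)();
    }
}
```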
#[benchmark] attribute: Mark functions as benchmarks.

Add this to your Cargo.toml:
```toml
[dependencies]
mobench-macros = "0.1"
mobench-sdk = "0.1" # For the runtime
```
```rust
use mobench_macros::benchmark;

#[benchmark]
fn fibonacci_benchmark() {
    let result = fibonacci(30);
    std::hint::black_box(result);
}

#[benchmark]
fn sorting_benchmark() {
    let mut data = vec![5, 2, 8, 1, 9];
    data.sort();
    std::hint::black_box(data);
}

fn fibonacci(n: u32) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}
```
The macros work seamlessly with mobench-sdk:
```rust
use mobench_macros::benchmark;
use mobench_sdk::{run_benchmark, BenchSpec};

#[benchmark]
fn my_expensive_operation() {
    // Your benchmark code
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run the benchmark
    let spec = BenchSpec::new("my_expensive_operation", 100, 10)?;
    let report = run_benchmark(spec)?;
    println!("Mean: {} ns", report.mean_ns());
    Ok(())
}
```
The #[benchmark] macro leaves your function unchanged and generates an inventory::submit! call that registers it at compile time. When you write:
```rust
#[benchmark]
fn my_benchmark() {
    expensive_operation();
}
```
The macro expands to something like:
```rust
fn my_benchmark() {
    expensive_operation();
}

inventory::submit! {
    BenchFunction {
        name: "my_benchmark",
        runner: |spec| {
            run_closure(spec, || {
                my_benchmark();
                Ok(())
            })
        }
    }
}
```
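Because every expansion submits an entry, a runtime can enumerate or look up benchmarks through the registry. A hedged sketch of a by-name lookup, assuming BenchFunction is collected with inventory::collect! as in the primer above:

```rust
// Hypothetical helper, not part of the mobench API: find a registered
// benchmark by scanning the inventory registry.
fn find_benchmark(name: &str) -> Option<&'static BenchFunction> {
    inventory::iter::<BenchFunction>
        .into_iter()
        .find(|b| b.name == name)
}
```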
For benchmarks that need expensive setup that shouldn't be measured:
```rust
use mobench_macros::benchmark;

fn setup_data() -> Vec<u8> {
    vec![0u8; 1_000_000] // Not measured
}

#[benchmark(setup = setup_data)]
fn hash_benchmark(data: &Vec<u8>) {
    std::hint::black_box(compute_hash(data)); // Only this is measured
}
```
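By analogy with the expansion shown earlier, the generated registration presumably hoists the setup call out of the measured closure. A sketch, not the macro's literal output:

```rust
// Hypothetical expansion sketch (not the macro's literal output):
inventory::submit! {
    BenchFunction {
        name: "hash_benchmark",
        runner: |spec| {
            let data = setup_data();      // runs outside the timing
            run_closure(spec, || {
                hash_benchmark(&data);    // only this is measured
                Ok(())
            })
        }
    }
}
```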
For benchmarks that mutate their input (e.g., sorting):
```rust
fn generate_unsorted_vec() -> Vec<i32> {
    (0..1000).rev().collect() // reversed so the sort does real work
}

#[benchmark(setup = generate_unsorted_vec, per_iteration)]
fn sort_benchmark(data: Vec<i32>) {
    let mut data = data;
    data.sort();
    std::hint::black_box(data);
}
```
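With per_iteration, the setup function presumably runs before each timed iteration rather than once, so every sort starts from freshly generated, unsorted data and earlier mutations cannot skew later measurements.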
Teardown functions run after the benchmark to release resources:

```rust
fn setup_db() -> Database { Database::connect("test.db") }
fn cleanup_db(db: Database) { db.close(); }

#[benchmark(setup = setup_db, teardown = cleanup_db)]
fn db_query(db: &Database) {
    db.query("SELECT * FROM users");
}
```
Use std::hint::black_box() to prevent optimization of results. Always use black_box for benchmark results:
```rust
use mobench_macros::benchmark;

#[benchmark]
fn good_benchmark() {
    let result = expensive_computation();
    std::hint::black_box(result); // ✓ Prevents optimization
}

#[benchmark]
fn bad_benchmark() {
    let result = expensive_computation(); // ✗ May be optimized away
}
```
Use descriptive names that indicate what's being measured:
```rust
#[benchmark]
fn hash_1kb_data() { /* ... */ }

#[benchmark]
fn parse_json_small() { /* ... */ }

#[benchmark]
fn encrypt_aes_256() { /* ... */ }
```
Keep benchmarks focused on one operation:
```rust
// Good: Measures one thing
#[benchmark]
fn sha256_hash() {
    let hash = sha256(&DATA);
    std::hint::black_box(hash);
}

// Bad: Measures multiple things
#[benchmark]
fn hash_and_encode() {
    let hash = sha256(&DATA);
    let encoded = base64_encode(hash);
    std::hint::black_box(encoded);
}
```
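If both operations matter, split them into two focused benchmarks. A sketch reusing the placeholder sha256, base64_encode, and DATA names from the example above, with the setup attribute keeping the hash out of the encoding measurement:

```rust
#[benchmark]
fn sha256_hash() {
    std::hint::black_box(sha256(&DATA));
}

// Hash in setup so only the encoding is timed.
fn hashed_data() -> Vec<u8> {
    sha256(&DATA)
}

#[benchmark(setup = hashed_data)]
fn base64_encode_hash(hash: &Vec<u8>) {
    std::hint::black_box(base64_encode(hash));
}
```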
This crate is part of the mobench ecosystem for mobile benchmarking; the mobench-sdk crate provides the runtime that executes the benchmarks registered here.
Licensed under the MIT License. See LICENSE.md for details.
Copyright (c) 2026 World Foundation