mobench-macros

Crate: mobench-macros (crates.io / lib.rs)
Version: 0.1.13
Created: 2026-01-13
Updated: 2026-01-21
Description: Proc macros for mobench-sdk - #[benchmark] attribute
Repository: https://github.com/worldcoin/mobile-bench-rs
Owner: dcbuilder.eth (dcbuild3r)
Documentation: https://docs.rs/mobench-macros
Size: 23,647 bytes

README

mobench-macros

Procedural macros for the mobench mobile benchmarking framework.

This crate provides the #[benchmark] attribute macro that automatically registers functions for mobile benchmarking. It uses compile-time registration via the inventory crate to build a registry of benchmark functions.

Features

  • #[benchmark] attribute: Mark functions as benchmarks
  • Automatic registration: No manual registry maintenance required
  • Type safety: Compile-time validation of benchmark functions
  • Zero runtime overhead: Registration happens at compile time

Usage

Add this to your Cargo.toml:

[dependencies]
mobench-macros = "0.1"
mobench-sdk = "0.1"  # For the runtime

Basic Example

use mobench_macros::benchmark;

#[benchmark]
fn fibonacci_benchmark() {
    let result = fibonacci(30);
    std::hint::black_box(result);
}

#[benchmark]
fn sorting_benchmark() {
    let mut data = vec![5, 2, 8, 1, 9];
    data.sort();
    std::hint::black_box(data);
}

fn fibonacci(n: u32) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

With mobench-sdk

The macros work seamlessly with mobench-sdk:

use mobench_macros::benchmark;
use mobench_sdk::{run_benchmark, BenchSpec};

#[benchmark]
fn my_expensive_operation() {
    // Your benchmark code
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run the benchmark
    let spec = BenchSpec::new("my_expensive_operation", 100, 10)?;
    let report = run_benchmark(spec)?;

    println!("Mean: {} ns", report.mean_ns());
    Ok(())
}

How It Works

The #[benchmark] macro:

  1. Preserves your function: The original function remains unchanged
  2. Generates registration code: Creates an inventory::submit! call
  3. Wraps in closure: Converts your function into a callable closure
  4. Registers at compile time: Adds to the global benchmark registry

Macro Expansion

When you write:

#[benchmark]
fn my_benchmark() {
    expensive_operation();
}

The macro expands to something like:

fn my_benchmark() {
    expensive_operation();
}

inventory::submit! {
    BenchFunction {
        name: "my_benchmark",
        runner: |spec| {
            run_closure(spec, || {
                my_benchmark();
                Ok(())
            })
        }
    }
}

Setup

For benchmarks that need expensive setup that shouldn't be measured:

use mobench_macros::benchmark;

fn setup_data() -> Vec<u8> {
    vec![0u8; 1_000_000]  // Not measured
}

#[benchmark(setup = setup_data)]
fn hash_benchmark(data: &Vec<u8>) {
    std::hint::black_box(compute_hash(data));  // Only this is measured
}

Per-Iteration Setup

For benchmarks that mutate their input (e.g., sorting):

fn generate_random_vec() -> Vec<i32> {
    (0..1000).collect()
}

#[benchmark(setup = generate_random_vec, per_iteration)]
fn sort_benchmark(data: Vec<i32>) {
    let mut data = data;
    data.sort();
    std::hint::black_box(data);
}

Setup and Teardown

fn setup_db() -> Database { Database::connect("test.db") }
fn cleanup_db(db: Database) { db.close(); }

#[benchmark(setup = setup_db, teardown = cleanup_db)]
fn db_query(db: &Database) {
    db.query("SELECT * FROM users");
}

Requirements

  • Functions must be regular functions (not async)
  • Without setup: no parameters allowed
  • With setup: exactly one parameter: a reference to the setup result (or an owned value when per_iteration is used)
  • Functions should use std::hint::black_box() to prevent optimization of results

Best Practices

Prevent Compiler Optimization

Always use black_box for benchmark results:

use mobench_macros::benchmark;

#[benchmark]
fn good_benchmark() {
    let result = expensive_computation();
    std::hint::black_box(result); // ✓ Prevents optimization
}

#[benchmark]
fn bad_benchmark() {
    let result = expensive_computation(); // ✗ May be optimized away
}

Benchmark Naming

Use descriptive names that indicate what's being measured:

#[benchmark]
fn hash_1kb_data() { /* ... */ }

#[benchmark]
fn parse_json_small() { /* ... */ }

#[benchmark]
fn encrypt_aes_256() { /* ... */ }

Isolate Benchmarks

Keep benchmarks focused on one operation:

// Good: Measures one thing
#[benchmark]
fn sha256_hash() {
    let hash = sha256(&DATA);
    std::hint::black_box(hash);
}

// Bad: Measures multiple things
#[benchmark]
fn hash_and_encode() {
    let hash = sha256(&DATA);
    let encoded = base64_encode(hash);
    std::hint::black_box(encoded);
}

Part of mobench

This crate is part of the mobench ecosystem for mobile benchmarking.

License

Licensed under the MIT License. See LICENSE.md for details.

Copyright (c) 2026 World Foundation
