| Crates.io | cargo-simplebench |
| lib.rs | cargo-simplebench |
| version | 2.2.0 |
| created_at | 2025-12-13 20:11:58.11549+00 |
| updated_at | 2025-12-20 15:40:31.385999+00 |
| description | minimalist microbenchmarking framework with clear regression detection |
| homepage | |
| repository | https://github.com/CmPons/simplebench |
| max_upload_size | |
| id | 1983363 |
| size | 114,566 |
A minimalist microbenchmarking framework for Rust with clear regression detection.
SimpleBench provides a simple #[bench] attribute and automatic workspace-wide benchmark discovery, without the complexity of larger frameworks.
Add #[bench] to any function. The --ci flag returns a non-zero exit code on regressions.

Install:
cargo install cargo-simplebench
In your crate's Cargo.toml:
[dev-dependencies]
simplebench-runtime = "2.1"
simplebench-macros = "2.1"
#[cfg(test)]
mod benchmarks {
use simplebench_macros::bench;
// Simple benchmark - entire function body is measured
#[bench]
fn my_function() {
let sum: u64 = (0..1000).sum();
std::hint::black_box(sum);
}
}
For benchmarks where setup is expensive, use the setup attribute to run setup code once before measurement begins:
#[cfg(test)]
mod benchmarks {
use simplebench_macros::bench;
// Setup runs once, benchmark receives reference to data
#[bench(setup = || generate_test_data(1000))]
fn benchmark_with_setup(data: &Vec<u64>) {
let sum: u64 = data.iter().sum();
std::hint::black_box(sum);
}
// Named setup function also works
fn create_large_dataset() -> Vec<String> {
(0..10000).map(|i| format!("item_{}", i)).collect()
}
#[bench(setup = create_large_dataset)]
fn benchmark_filtering(data: &Vec<String>) {
let filtered: Vec<_> = data.iter()
.filter(|s| s.contains("5"))
.collect();
std::hint::black_box(filtered);
}
}
For benchmarks that need fresh data for each sample (e.g., sorting, consuming iterators), use setup_each. The setup runs before every sample:
#[cfg(test)]
mod benchmarks {
use simplebench_macros::bench;
// Owning: benchmark takes ownership (for mutations/consumption)
#[bench(setup_each = || vec![5, 3, 8, 1, 9, 4, 7, 2, 6])]
fn bench_sort(mut data: Vec<i32>) {
data.sort(); // Mutates the data
}
// Owning: consuming an iterator
#[bench(setup_each = || (0..1000).collect::<Vec<_>>())]
fn bench_consume_iter(data: Vec<i32>) {
let _sum: i32 = data.into_iter().sum(); // Consumes data
}
// Borrowing: fresh data each sample, but only reading
#[bench(setup_each = || random_vectors(1000))]
fn bench_normalize(vectors: &Vec<Vec3>) {
for v in vectors {
let _ = v.normalize();
}
}
}
When to use which:
setup - Setup is expensive and the data is read-only (setup runs once)
setup_each with T - Benchmark mutates or consumes the data
setup_each with &T - Need fresh random/unique data each sample

cargo simplebench
cargo simplebench [OPTIONS]
Options:
--samples <N> Number of samples per benchmark (default: 1000)
--warmup-duration <S> Warmup duration in seconds (default: 3)
--threshold <P> Regression threshold percentage (default: 5.0)
--ci CI mode - exit with error on regression
--bench <PATTERN> Run only benchmarks matching pattern
--parallel Run benchmarks in parallel (faster, may increase variance)
-j, --jobs <N> Number of parallel jobs (implies --parallel)
-q, --quiet Suppress progress bars
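For example, to gate CI on a tighter regression threshold while running only benchmarks whose names match a pattern (the flag values here are illustrative, not defaults):
cargo simplebench --ci --threshold 2.5 --bench sort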
All options can also be set via environment variables:
SIMPLEBENCH_SAMPLES
SIMPLEBENCH_WARMUP_DURATION
SIMPLEBENCH_THRESHOLD
SIMPLEBENCH_BENCH_FILTER
SIMPLEBENCH_QUIET
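The variables mirror the flags above; for example (values illustrative):
SIMPLEBENCH_SAMPLES=500 SIMPLEBENCH_THRESHOLD=2.5 cargo simplebench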
Create simplebench.toml in your project root:
[measurement]
samples = 1000
warmup_duration_secs = 3

[comparison]
threshold = 5.0
# .github/workflows/bench.yml
name: Benchmarks
on: [push, pull_request]
jobs:
bench:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- run: cargo install cargo-simplebench
- run: cargo simplebench --ci
# Run benchmarks
cargo simplebench
# Clean baseline data
cargo simplebench clean
# Analyze historical trends
cargo simplebench analyze <benchmark_name> --last 10
SimpleBench uses the inventory crate for compile-time benchmark registration. The #[bench] macro expands to register each benchmark function, and cargo simplebench builds a unified runner that links all workspace crates and executes discovered benchmarks.
Each sample measures exactly one function call, giving you per-call timing data with full variance information.
Benchmarks are compiled with #[cfg(test)], so they're excluded from production builds.
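As a rough sketch of this registration pattern (the entry type and runner below are hypothetical stand-ins, not the actual simplebench-runtime API):
use std::time::Instant;

// Hypothetical registry entry; the real simplebench-runtime types differ.
pub struct BenchEntry {
    pub name: &'static str,
    pub func: fn(),
}

inventory::collect!(BenchEntry);

// Roughly what a #[bench] expansion might submit for `my_function`:
inventory::submit! {
    BenchEntry { name: "my_function", func: my_function }
}

fn my_function() {
    let sum: u64 = (0..1000).sum();
    std::hint::black_box(sum);
}

// The unified runner iterates all registered entries and times one call per sample.
fn run_all(samples: usize) {
    for entry in inventory::iter::<BenchEntry> {
        let mut times = Vec::with_capacity(samples);
        for _ in 0..samples {
            let start = Instant::now();
            (entry.func)();
            times.push(start.elapsed());
        }
        println!("{}: collected {} samples", entry.name, times.len());
    }
}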
cargo-simplebench - CLI tool
simplebench-runtime - Core runtime library
simplebench-macros - #[bench] proc-macro

Licensed under either of:
Apache License, Version 2.0
MIT license

at your option.
This project was co-authored with Claude, an AI assistant by Anthropic.