| Field | Value |
|---|---|
| Crates.io | criterion-table |
| lib.rs | criterion-table |
| version | 0.4.2 |
| source | src |
| created_at | 2022-03-11 13:40:34.077714 |
| updated_at | 2022-03-15 02:03:55.101465 |
| description | Generate markdown comparison tables from cargo-criterion benchmark output |
| homepage | |
| repository | https://github.com/nu11ptr/criterion-table |
| max_upload_size | |
| id | 548224 |
| size | 45,144 |
Generate markdown comparison tables from cargo-criterion benchmark JSON output.
Currently, the tool is limited to GitHub Flavored Markdown (GFM), but adding new output types is relatively simple.
```bash
# If you don't have it already
cargo install cargo-criterion

# This project
cargo install criterion-table
```
Benchmark names are parsed into up to three sections using the format `<table_name>/<column_name>/[row_name]`:

- If you benchmark via `bench_function` with a plain name, you would only get a column name by default, which isn't sufficient, so encode all three sections into the name yourself.
- If you benchmark via a `BenchmarkId` in a benchmark group, you will get all three sections automatically.

Using `bench_function` with all sections encoded manually:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

#[inline]
fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 1,
        1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

pub fn criterion_benchmark(c: &mut Criterion) {
    // Encodes <table_name>/<column_name>/[row_name] explicitly
    let id = "Fibonacci/Recursive Fib/20";
    c.bench_function(id, |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
```
Using a `BenchmarkId` with a benchmark group, all three sections are derived automatically:

```rust
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};

#[inline]
fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 1,
        1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

pub fn criterion_benchmark(c: &mut Criterion) {
    // Group name = table, function name = column, input parameter = row
    let mut group = c.benchmark_group("Fibonacci");
    for row in vec![10, 20] {
        let id = BenchmarkId::new("Recursive Fib", row);
        group.bench_with_input(id, &row, |b, row| {
            b.iter(|| fibonacci(black_box(*row)))
        });
    }
    group.finish();
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
```
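The naming scheme used by both examples can be sketched as a simple three-way split. This is a hypothetical helper for illustration only, not criterion-table's actual parser:

```rust
/// Hypothetical helper (not part of criterion-table's API) showing how a
/// benchmark id maps onto the <table_name>/<column_name>/[row_name] sections.
fn split_id(id: &str) -> (&str, &str, Option<&str>) {
    let mut parts = id.splitn(3, '/');
    let table = parts.next().unwrap_or("");
    let column = parts.next().unwrap_or("");
    (table, column, parts.next())
}

fn main() {
    // The first example's id carries all three sections explicitly.
    assert_eq!(
        split_id("Fibonacci/Recursive Fib/20"),
        ("Fibonacci", "Recursive Fib", Some("20"))
    );
    // The row section is optional.
    assert_eq!(
        split_id("Fibonacci/Recursive Fib"),
        ("Fibonacci", "Recursive Fib", None)
    );
}
```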
An optional `tables.toml` configuration file allows you to add commentary that is integrated with the tables in the markdown. Table names are lowercased, with spaces replaced by dashes. The file must be in the local directory. Here is an example:
```toml
[top_comments]
Overview = """
This is a benchmark comparison report.
"""

[table_comments]
fibonacci = """
Since `fibonacci` is not tail recursive or iterative, all these function calls
are not inlined which makes this version very slow.
"""
```
Generating the markdown tables from your benchmarks can be done in a couple of different ways:
This method ensures all benchmarks are included in one step:

```bash
# Run all benchmarks and convert into markdown all in one step
cargo criterion --message-format=json | criterion-table > BENCHMARKS.md
```
This method allows better control over the order and which benchmarks are included:

```bash
# Execute only the desired benchmarks
cargo criterion --bench recursive_fib --message-format=json > recursive_fib.json
cargo criterion --bench iterative_fib --message-format=json > iterative_fib.json

# Reorder before converting into markdown
cat iterative_fib.json recursive_fib.json | criterion-table > BENCHMARKS.md
```
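For orientation, the generated `BENCHMARKS.md` contains one GFM table per `<table_name>`, with a column per `<column_name>` and a row per `[row_name]`. The sketch below only illustrates that shape; the exact layout differs and the timing cells are placeholders, so run the tool to see real output:

```
| `Fibonacci` | `Recursive Fib` |
|:------------|----------------:|
| `10`        | (timing)        |
| `20`        | (timing)        |
```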
Currently, the tool is hardcoded to GFM, but it is easy to add a new output type via the `Formatter` trait by creating your own binary project:
```toml
[dependencies]
criterion-table = "0.4"
flexstr = "0.8"
indexmap = "1"
```
1. Create a new type and implement the `Formatter` trait.
2. Create a `main` function and call `build_tables` (NOTE: replace `GFMFormatter` with your new formatter below):
```rust
use std::io;

use criterion_table::build_tables;
// Replace with your formatter
use criterion_table::formatter::GFMFormatter;

const TABLES_CONFIG: &str = "tables.toml";

fn main() {
    // Replace `GFMFormatter` with your formatter
    match build_tables(io::stdin(), GFMFormatter, TABLES_CONFIG) {
        Ok(data) => {
            println!("{data}");
        }
        Err(err) => {
            eprintln!("An error occurred processing Criterion data: {err}");
        }
    }
}
```
3. Save the resulting `String` to a file of your formatter's output type, or write it to stdout.

This project is licensed optionally under either: