| Field | Value |
|---|---|
| Crates.io | rust_metrics |
| lib.rs | rust_metrics |
| version | 0.1.7 |
| created_at | 2025-11-26 19:05:13.181073+00 |
| updated_at | 2025-12-13 17:09:54.275472+00 |
| description | Incremental evaluation metrics for various machine learning pipelines. |
| homepage | https://github.com/shaankhosla/rust_metrics |
| repository | https://github.com/shaankhosla/rust_metrics |
| max_upload_size | |
| id | 1952054 |
| size | 172,818 |
rust_metrics is an ML evaluation toolkit that brings TorchMetrics-style metrics to Rust. Each metric implements the same incremental
`Metric` trait, so you can feed batched predictions over time and ask for the final score when you're ready.
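For orientation, the examples below imply roughly the following trait shape. Treat this as an illustrative sketch rather than the crate's actual definition: the associated types and the error type are invented placeholders.

```rust
// Illustrative sketch only; the real `Metric` trait in rust_metrics may differ.
// What the examples below do confirm: `update` consumes a batch and returns a
// Result, and `compute` yields the accumulated score as an Option.
trait Metric {
    /// One batch of inputs, e.g. `(&[f64], &[usize])` for binary classifiers (assumption).
    type Batch;
    /// Placeholder; the crate's actual error type is not shown in this README.
    type Error;

    /// Fold a new batch into the running state.
    fn update(&mut self, batch: Self::Batch) -> Result<(), Self::Error>;
    /// Final score over everything seen so far.
    fn compute(&self) -> Option<f64>;
}
```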
Every metric ships with TorchMetrics-derived test cases and examples, so the crate docs mirror the upstream behavior wherever the functionality matches.
Some benefits of rust_metrics:

- A standardized interface that improves reproducibility
- Less boilerplate
- Rigorously tested
- Automatic accumulation over batches (see the sketch just below)
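Accumulation means the same metric instance can be fed several batches before `compute` is called, and the score reflects everything seen so far. A minimal sketch, reusing only the calls from the classification example below and assuming the 0.5 decision threshold that example implies:

```rust
use rust_metrics::{BinaryAccuracy, Metric};

let mut acc = BinaryAccuracy::default();

// Batch 1: both predictions correct after thresholding (0.1 -> 0, 0.9 -> 1).
acc.update((&[0.1, 0.9][..], &[0_usize, 1][..])).unwrap();
// Batch 2: one correct (0.2 -> 0) and one wrong (0.7 -> 1 vs. target 0).
acc.update((&[0.2, 0.7][..], &[0_usize, 0][..])).unwrap();

// 3 of the 4 accumulated predictions match their targets.
assert!((acc.compute().unwrap() - 0.75).abs() < f64::EPSILON);
```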
Add the crate to your project:
```sh
cargo add rust_metrics
# or enable the BERT-based similarity metric
cargo add rust_metrics --features text-bert
```
All snippets below (and method examples) reuse the exact inputs from the public TorchMetrics docs so you can cross-check the expected values.
```rust
use rust_metrics::{BinaryAccuracy, BinaryAuroc, Metric};

// Binary accuracy over thresholded probability predictions.
let target = [0_usize, 1, 0, 1, 0, 1];
let preds = [0.11, 0.22, 0.84, 0.73, 0.33, 0.92];
let mut acc = BinaryAccuracy::default();
acc.update((&preds[..], &target[..])).unwrap();
assert!((acc.compute().unwrap() - 2.0 / 3.0).abs() < f64::EPSILON);

// Area under the ROC curve from raw scores.
let mut auroc = BinaryAuroc::new(0);
let auroc_scores = [0.0, 0.5, 0.7, 0.8];
let auroc_target = [0_usize, 1, 1, 0];
auroc.update((&auroc_scores, &auroc_target)).unwrap();
assert!((auroc.compute().unwrap() - 0.5).abs() < f64::EPSILON);
```
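As a sanity check: with a 0.5 threshold the predictions round to [0, 0, 1, 1, 0, 1], so four of the six match their targets (accuracy 4/6 = 2/3); and the positive-labeled scores (0.5, 0.7) outrank the negative-labeled scores (0.0, 0.8) in exactly half of the positive/negative pairs, giving an AUROC of 0.5.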
```rust
use rust_metrics::{MeanAbsoluteError, MeanSquaredError, Metric};

// Mean squared error: average of squared (pred - target) differences.
let mut mse = MeanSquaredError::default();
mse.update((&[3.0, 5.0, 2.5, 7.0], &[2.5, 5.0, 4.0, 8.0])).unwrap();
assert!((mse.compute().unwrap() - 0.875).abs() < f64::EPSILON);

// Mean absolute error: average of |pred - target|.
let mut mae = MeanAbsoluteError::default();
mae.update((&[2.5, 0.0, 2.0, 8.0], &[3.0, -0.5, 2.0, 7.0])).unwrap();
assert!((mae.compute().unwrap() - 0.5).abs() < f64::EPSILON);
```
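Cross-checking by hand: the squared errors are 0.25, 0, 2.25, and 1, so MSE = 3.5 / 4 = 0.875; the absolute errors are 0.5, 0.5, 0, and 1, so MAE = 2 / 4 = 0.5.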
```rust
use rust_metrics::{Metric, MutualInfoScore};

// Mutual information between two label assignments.
let preds = [2, 1, 0, 1, 0];
let target = [0, 2, 1, 1, 0];
let mut metric = MutualInfoScore::default();
metric.update((&preds, &target)).unwrap();
assert!((metric.compute().unwrap() - 0.500402423538188).abs() < f64::EPSILON);
```
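Cross-checking: each of the five (pred, target) pairs lands in its own contingency cell with joint probability 1/5; against the marginals this gives MI = (2·ln 2.5 + 3·ln 1.25) / 5 ≈ 0.5004 nats.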
```rust
use rust_metrics::{Bleu, EditDistance, Metric};

// BLEU score of a candidate translation against a reference.
let preds = ["the cat is on the mat"];
let targets = ["a cat is on the mat"];
let mut bleu = Bleu::default();
bleu.update((&preds, &targets)).unwrap();
assert!(bleu.compute().unwrap() > 0.5);

// Levenshtein edit distance between prediction and reference strings.
let mut edit = EditDistance::default();
edit.update((&["rain"], &["shine"])).unwrap();
assert_eq!(edit.compute(), Some(3.0));
```
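The distance of 3 corresponds to the three single-character edits that turn "rain" into "shine": substitute r→s, substitute a→h, and append e.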
For `SentenceEmbeddingSimilarity`, enable the `text-bert` feature; it mirrors the BERTScore example sentences and reports a cosine similarity for each pair instead of precision/recall triples.
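A hedged sketch of what usage might look like, assuming the metric follows the same update/compute pattern as the rest of the crate; the `default()` constructor, the update tuple shape, and what `compute` returns are all assumptions to verify against the crate docs:

```rust
// Requires: cargo add rust_metrics --features text-bert
// Assumption: this metric follows the same Metric trait pattern as the other
// examples; the constructor and compute's return shape may differ in practice.
use rust_metrics::{Metric, SentenceEmbeddingSimilarity};

// The BERTScore example sentences referenced above.
let preds = ["hello there", "general kenobi"];
let targets = ["hello there", "master kenobi"];

let mut sim = SentenceEmbeddingSimilarity::default();
sim.update((&preds, &targets)).unwrap();

// Assumed to report a cosine similarity per sentence pair.
println!("similarity: {:?}", sim.compute());
```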
Supported metrics:

- BinaryAccuracy, MulticlassAccuracy
- BinaryPrecision, BinaryRecall, MulticlassPrecision
- BinaryF1Score, MulticlassF1Score
- BinaryHingeLoss, MulticlassHingeLoss
- BinaryJaccardIndex, MulticlassJaccardIndex
- BinaryConfusionMatrix
- BinaryAuroc
- MeanSquaredError
- NormalizedRootMeanSquaredError
- MeanAbsoluteError
- MeanAbsolutePercentageError
- R2Score
- MutualInfoScore
- Bleu with optional smoothing and arbitrary n-gram depth
- EditDistance with sum or mean reduction
- RougeScore
- SentenceEmbeddingSimilarity (requires the `text-bert` feature), backed by fastembed. This metric embeds each sentence pair with lightweight BERT embeddings and reports cosine similarity scores.

| Feature | Default | Description |
|---|---|---|
| text-bert | no | Enables BERT sentence embedding similarity via fastembed. |