| Field | Value |
|---|---|
| Crates.io | sklears-metrics |
| lib.rs | sklears-metrics |
| version | 0.1.0-beta.1 |
| created_at | 2025-10-13 12:09:37.129329+00 |
| updated_at | 2026-01-01 21:29:45.359058+00 |
| description | Evaluation metrics for sklears: accuracy, precision, recall, F1, ROC-AUC, etc. |
| homepage | https://github.com/cool-japan/sklears |
| repository | https://github.com/cool-japan/sklears |
| max_upload_size | |
| id | 1880448 |
| size | 1,449,579 |
Comprehensive, high-performance evaluation metrics for machine learning in Rust, offering 10-50x speedup over scikit-learn with GPU acceleration support.
Latest release: 0.1.0-beta.1 (January 1, 2026). See the workspace release notes for highlights and upgrade guidance.
sklears-metrics provides a complete suite of evaluation metrics spanning classification, regression, clustering, ranking, and more. Basic usage:
```rust
use sklears_metrics::{accuracy_score, precision_recall_fscore, roc_auc_score};
use ndarray::array;

// Basic classification metrics
let y_true = array![0, 1, 1, 0, 1, 0];
let y_pred = array![0, 1, 0, 0, 1, 1];

let acc = accuracy_score(&y_true, &y_pred)?;
let (precision, recall, f1) = precision_recall_fscore(&y_true, &y_pred)?;
let auc = roc_auc_score(&y_true, &y_pred)?;

println!("Accuracy: {:.2}", acc);
println!("Precision: {:.2}, Recall: {:.2}, F1: {:.2}", precision, recall, f1);
println!("ROC-AUC: {:.2}", auc);
```
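For reference, the binary-classification quantities above all reduce to the four confusion-matrix counts. A dependency-free sketch (not the crate's vectorized implementation) using the same labels:

```rust
// Confusion-matrix counts for binary labels: (TP, FP, FN, TN).
fn confusion_counts(y_true: &[u8], y_pred: &[u8]) -> (f64, f64, f64, f64) {
    let (mut tp, mut fp, mut fn_, mut tn) = (0.0, 0.0, 0.0, 0.0);
    for (&t, &p) in y_true.iter().zip(y_pred) {
        match (t, p) {
            (1, 1) => tp += 1.0,
            (0, 1) => fp += 1.0,
            (1, 0) => fn_ += 1.0,
            _ => tn += 1.0,
        }
    }
    (tp, fp, fn_, tn)
}

fn main() {
    let y_true = [0u8, 1, 1, 0, 1, 0];
    let y_pred = [0u8, 1, 0, 0, 1, 1];
    let (tp, fp, fn_, tn) = confusion_counts(&y_true, &y_pred);
    let accuracy = (tp + tn) / (tp + tn + fp + fn_);
    let precision = tp / (tp + fp);
    let recall = tp / (tp + fn_);
    let f1 = 2.0 * precision * recall / (precision + recall);
    println!("acc={accuracy:.2} p={precision:.2} r={recall:.2} f1={f1:.2}");
}
```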
GPU-accelerated metrics:

```rust
use sklears_metrics::gpu::{GpuMetricsContext, gpu_accuracy};

let ctx = GpuMetricsContext::new()?;
// y_true_gpu / y_pred_gpu: device-resident arrays prepared beforehand
let accuracy = gpu_accuracy(&ctx, &y_true_gpu, &y_pred_gpu)?;
```
Uncertainty quantification:

```rust
use sklears_metrics::uncertainty::{bootstrap_confidence_interval, conformal_prediction};

let (lower, upper) = bootstrap_confidence_interval(&y_true, &y_pred, 0.95)?;
// calibration_scores and alpha come from a held-out calibration set
let prediction_sets = conformal_prediction(&calibration_scores, alpha)?;
```
Streaming (online) evaluation:

```rust
use sklears_metrics::streaming::StreamingMetrics;

let mut metrics = StreamingMetrics::new();
for batch in data_stream {
    metrics.update(&batch.y_true, &batch.y_pred)?;
}
let final_scores = metrics.compute()?;
```
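The idea behind streaming evaluation is that metrics decomposable into running counts never need to store predictions. A minimal sketch of an online accuracy accumulator (a hypothetical stand-in, not the crate's `StreamingMetrics` API):

```rust
// Online accuracy: keep running counts, update batch-by-batch.
struct StreamingAccuracy {
    correct: u64,
    total: u64,
}

impl StreamingAccuracy {
    fn new() -> Self {
        Self { correct: 0, total: 0 }
    }

    fn update(&mut self, y_true: &[u8], y_pred: &[u8]) {
        for (&t, &p) in y_true.iter().zip(y_pred) {
            if t == p {
                self.correct += 1;
            }
            self.total += 1;
        }
    }

    fn compute(&self) -> f64 {
        self.correct as f64 / self.total as f64
    }
}

fn main() {
    let mut m = StreamingAccuracy::new();
    for (yt, yp) in [([0u8, 1, 1], [0u8, 1, 0]), ([1, 0, 1], [1, 0, 1])] {
        m.update(&yt, &yp);
    }
    println!("running accuracy = {:.3}", m.compute()); // 5 of 6 correct
}
```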
Benchmarks show significant improvements:
| Metric | scikit-learn | sklears-metrics | Speedup |
|---|---|---|---|
| Accuracy | 1.2ms | 0.05ms | 24x |
| ROC-AUC | 8.5ms | 0.3ms | 28x |
| Clustering | 15ms | 0.8ms | 19x |
| GPU Accuracy | N/A | 0.01ms | >100x |
Computer-vision metrics:

```rust
use sklears_metrics::vision::{iou_score, ssim, psnr};

let iou = iou_score(&pred_masks, &true_masks)?;
let similarity = ssim(&pred_image, &true_image)?;
let peak_snr = psnr(&pred_image, &true_image)?;
```
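IoU (the Jaccard index) is simply intersection over union of the two masks. A sketch on flat boolean masks, for intuition only (the crate's `iou_score` operates on array masks):

```rust
// IoU for flat boolean masks: |A ∩ B| / |A ∪ B|.
fn iou(pred: &[bool], truth: &[bool]) -> f64 {
    let mut inter = 0u32;
    let mut uni = 0u32;
    for (&p, &t) in pred.iter().zip(truth) {
        if p && t { inter += 1; }
        if p || t { uni += 1; }
    }
    // Convention: two empty masks are a perfect match.
    if uni == 0 { 1.0 } else { inter as f64 / uni as f64 }
}

fn main() {
    let pred  = [true, true, false, true];
    let truth = [true, false, false, true];
    println!("IoU = {:.2}", iou(&pred, &truth)); // 2 hits / 3 active pixels
}
```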
NLP metrics:

```rust
use sklears_metrics::nlp::{bleu_score, rouge_scores, perplexity};

let bleu = bleu_score(&hypothesis, &reference)?;
let rouge = rouge_scores(&summary, &reference)?;
let ppl = perplexity(&model_logits, &true_tokens)?;
```
Time-series metrics:

```rust
use sklears_metrics::timeseries::{mase, smape, directional_accuracy};

let mase_score = mase(&y_true, &y_pred, &y_train)?;
let smape_score = smape(&y_true, &y_pred)?;
let da = directional_accuracy(&y_true, &y_pred)?;
```
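As a reference point, sMAPE is the mean of 2·|y − ŷ| / (|y| + |ŷ|) expressed in percent. A sketch of that standard formula (the crate's `smape` may differ in edge-case handling, e.g. when both values are zero):

```rust
// Symmetric mean absolute percentage error, in percent.
fn smape(y_true: &[f64], y_pred: &[f64]) -> f64 {
    let n = y_true.len() as f64;
    y_true
        .iter()
        .zip(y_pred)
        .map(|(&y, &yh)| 2.0 * (y - yh).abs() / (y.abs() + yh.abs()))
        .sum::<f64>()
        / n
        * 100.0
}

fn main() {
    let y_true = [100.0, 200.0, 300.0];
    let y_pred = [110.0, 190.0, 330.0];
    println!("sMAPE = {:.2}%", smape(&y_true, &y_pred));
}
```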
Multi-objective evaluation:

```rust
use sklears_metrics::multiobjective::{pareto_frontier, topsis_ranking};

let frontier = pareto_frontier(&objectives)?;
let rankings = topsis_ranking(&alternatives, &weights)?;
```
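For intuition: under minimization, a point is on the Pareto frontier when no other point is at least as good in every objective and strictly better in at least one. An O(n²) dominance-check sketch (not the crate's algorithm):

```rust
// Indices of non-dominated points, assuming all objectives are minimized.
fn pareto_frontier(points: &[Vec<f64>]) -> Vec<usize> {
    (0..points.len())
        .filter(|&i| {
            !points.iter().enumerate().any(|(j, q)| {
                j != i
                    && q.iter().zip(&points[i]).all(|(a, b)| a <= b)
                    && q.iter().zip(&points[i]).any(|(a, b)| a < b)
            })
        })
        .collect()
}

fn main() {
    // Hypothetical (cost, latency) pairs; lower is better for both.
    let objectives = vec![
        vec![1.0, 4.0],
        vec![2.0, 2.0],
        vec![3.0, 3.0], // dominated by (2.0, 2.0)
        vec![4.0, 1.0],
    ];
    println!("non-dominated indices: {:?}", pareto_frontier(&objectives));
}
```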
Federated and privacy-preserving evaluation:

```rust
use sklears_metrics::federated::{secure_aggregation, privacy_preserving_metrics};

let global_metrics = secure_aggregation(&client_metrics, epsilon)?;
let private_accuracy = privacy_preserving_metrics(&local_data, delta)?;
```
Probability calibration:

```rust
use sklears_metrics::calibration::{calibration_curve, expected_calibration_error};

let (fraction_positive, mean_predicted) = calibration_curve(&y_true, &y_prob)?;
let ece = expected_calibration_error(&y_true, &y_prob)?;
```
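Expected calibration error bins predictions by confidence and averages the gap between accuracy and confidence in each bin: ECE = Σ (n_b / N) · |acc(b) − conf(b)|. A sketch under the common equal-width-bin definition (the crate's binning strategy may differ):

```rust
// ECE with n_bins equal-width probability bins over (0, 1].
fn ece(y_true: &[u8], y_prob: &[f64], n_bins: usize) -> f64 {
    let n = y_true.len() as f64;
    let mut total = 0.0;
    for b in 0..n_bins {
        let lo = b as f64 / n_bins as f64;
        let hi = (b + 1) as f64 / n_bins as f64;
        let idx: Vec<usize> = (0..y_true.len())
            .filter(|&i| (y_prob[i] > lo && y_prob[i] <= hi) || (b == 0 && y_prob[i] == 0.0))
            .collect();
        if idx.is_empty() {
            continue;
        }
        let conf = idx.iter().map(|&i| y_prob[i]).sum::<f64>() / idx.len() as f64;
        let acc = idx.iter().map(|&i| y_true[i] as f64).sum::<f64>() / idx.len() as f64;
        total += (idx.len() as f64 / n) * (acc - conf).abs();
    }
    total
}

fn main() {
    let y_true = [1u8, 0, 1, 1, 0, 1];
    let y_prob = [0.95, 0.15, 0.85, 0.60, 0.40, 0.70];
    println!("ECE = {:.3}", ece(&y_true, &y_prob, 10));
}
```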
The crate is organized into modules:
```text
sklears-metrics/
├── classification/  # Binary and multiclass metrics
├── regression/      # Continuous target metrics
├── clustering/      # Unsupervised evaluation
├── ranking/         # Information retrieval metrics
├── uncertainty/     # Confidence and uncertainty
├── streaming/       # Online and incremental metrics
├── gpu/             # CUDA-accelerated computations
├── visualization/   # Plotting and reporting
└── specialized/     # Domain-specific metrics
```
Several metrics can be computed in one pass with the builder API:

```rust
use sklears_metrics::MetricsBuilder;

let results = MetricsBuilder::new()
    .accuracy()
    .precision()
    .recall()
    .f1_score()
    .roc_auc()
    .with_confidence_intervals(0.95)
    .with_gpu_acceleration()
    .compute(&y_true, &y_pred)?;
```
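The builder pattern here is worth a closer look: each chained call flips a flag, and `compute` evaluates every selected metric over the data in one pass. A minimal, hypothetical sketch of that pattern (not the crate's actual implementation):

```rust
// Toy builder: flags select which metrics to compute on binary labels.
#[derive(Default)]
struct MetricsBuilder {
    accuracy: bool,
    precision: bool,
}

impl MetricsBuilder {
    fn new() -> Self {
        Self::default()
    }

    fn accuracy(mut self) -> Self {
        self.accuracy = true;
        self
    }

    fn precision(mut self) -> Self {
        self.precision = true;
        self
    }

    fn compute(self, y_true: &[u8], y_pred: &[u8]) -> Vec<(&'static str, f64)> {
        let mut out = Vec::new();
        if self.accuracy {
            let correct = y_true.iter().zip(y_pred).filter(|(t, p)| t == p).count();
            out.push(("accuracy", correct as f64 / y_true.len() as f64));
        }
        if self.precision {
            let tp = y_true.iter().zip(y_pred).filter(|&(&t, &p)| t == 1 && p == 1).count();
            let pp = y_pred.iter().filter(|&&p| p == 1).count();
            out.push(("precision", tp as f64 / pp as f64));
        }
        out
    }
}

fn main() {
    let results = MetricsBuilder::new()
        .accuracy()
        .precision()
        .compute(&[0u8, 1, 1, 0, 1, 0], &[0u8, 1, 0, 0, 1, 1]);
    println!("{results:?}");
}
```

Consuming `self` in each method keeps the chain allocation-free and prevents reuse of a half-configured builder.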
We welcome contributions! See CONTRIBUTING.md for guidelines.
Licensed under either of:
```bibtex
@software{sklears_metrics,
  title  = {sklears-metrics: High-Performance ML Metrics for Rust},
  author = {COOLJAPAN OU (Team KitaSan)},
  year   = {2026},
  url    = {https://github.com/cool-japan/sklears}
}
```