| Crates.io | metrics-lib |
| lib.rs | metrics-lib |
| version | 0.9.0 |
| created_at | 2025-08-30 07:51:37.881753+00 |
| updated_at | 2025-09-07 02:45:00.01822+00 |
| description | The fastest metrics library for Rust. Lock-free 0.6ns gauges, 18ns counters, timers, rate meters, async timing, adaptive sampling, and system health. Cross-platform with minimal dependencies. |
| homepage | https://github.com/jamesgober/metrics-lib |
| repository | https://github.com/jamesgober/metrics-lib |
| max_upload_size | |
| id | 1817342 |
| size | 495,491 |
Metrics-lib - A lightweight, ultra-high-performance metrics library for Rust. Purpose-built with minimal dependencies to maintain ultra-low overhead while delivering high throughput, even under heavy load. Built with native asynchronous support and cross-platform compatibility, Metrics-lib leverages lock-free atomic operations to ensure thread-safe data collection without performance bottlenecks across Windows, macOS, and Linux environments.
This library provides a comprehensive metrics system that includes counters, gauges, timers, sliding-window rate meters, adaptive sampling, and system health monitoring, all designed for production hot paths. The core architecture is lock-free on the hot path, allocation-free during steady state, and cache-aligned for minimal contention.
Built with resilience in mind, Metrics-lib includes features such as circuit breakers, adaptive sampling, backpressure control, and system health monitoring to ensure maximum endurance and stability.
Optional async helpers, adaptive controls, and system health snapshots are available without imposing overhead when unused.
MSRV is 1.70+.
CI enforces formatting, lints, coverage (85% threshold), rustdoc warnings, and publish dry-runs for reliability.
World-class performance with industry-leading benchmarks.
For a complete reference with examples, see docs/API.md.
- Counter: ultra-fast atomic counters with batch and conditional ops
- Gauge: atomic f64 gauges with math ops, EMA, and min/max helpers
- Timer: nanosecond timers, RAII guards, and closure/async timing
- RateMeter: sliding-window rate tracking and bursts
- SystemHealth: CPU, memory, load, threads, FDs, health score
- AsyncTimerExt, AsyncMetricBatch

All core metrics expose non-panicking try_ methods that validate inputs and return Result<_, MetricsError> instead of panicking:
- Counter: try_inc, try_add, try_set, try_fetch_add, try_inc_and_get
- Gauge: try_set, try_add, try_sub, try_set_max, try_set_min
- Timer: try_record_ns, try_record, try_record_batch
- RateMeter: try_tick, try_tick_n, try_tick_if_under_limit

Error semantics:
- MetricsError::Overflow: arithmetic would overflow/underflow an internal counter.
- MetricsError::InvalidValue { reason }: non-finite or otherwise invalid input (e.g., NaN for Gauge).
- MetricsError::OverLimit: operation would exceed a configured limit (e.g., rate limiting helpers).

Example:
use metrics_lib::{init, metrics, MetricsError};

fn main() -> Result<(), MetricsError> {
    init();
    let c = metrics().counter("jobs");
    c.try_add(10)?; // Result<(), MetricsError>
    let r = metrics().rate("qps");
    let allowed = r.try_tick_if_under_limit(1000.0)?; // Result<bool, MetricsError>
    let _ = allowed;
    Ok(())
}
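The Gauge type above stores an f64 atomically. A common lock-free technique for that (an illustrative sketch only; not necessarily how metrics-lib implements it) is bit-casting the float through an AtomicU64:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Illustrative lock-free f64 gauge: the value is stored as raw bits
/// in an AtomicU64 and converted with to_bits/from_bits on each access.
struct BitGauge {
    bits: AtomicU64,
}

impl BitGauge {
    fn new(v: f64) -> Self {
        Self { bits: AtomicU64::new(v.to_bits()) }
    }
    fn set(&self, v: f64) {
        self.bits.store(v.to_bits(), Ordering::Relaxed);
    }
    fn get(&self) -> f64 {
        f64::from_bits(self.bits.load(Ordering::Relaxed))
    }
    /// Add via a CAS loop so concurrent writers don't lose updates.
    fn add(&self, delta: f64) {
        let mut cur = self.bits.load(Ordering::Relaxed);
        loop {
            let next = (f64::from_bits(cur) + delta).to_bits();
            match self.bits.compare_exchange_weak(cur, next, Ordering::Relaxed, Ordering::Relaxed) {
                Ok(_) => break,
                Err(actual) => cur = actual,
            }
        }
    }
}

fn main() {
    let g = BitGauge::new(0.0);
    g.set(87.3);
    g.add(1.5);
    println!("{}", g.get()); // 88.8
}
```

The CAS loop in `add` is what keeps read-modify-write operations correct without a lock; plain `set`/`get` need only a single atomic store/load.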
Panic guarantees: the plain methods (inc, add, set, tick, etc.) prioritize speed and may saturate or assume valid inputs. Prefer try_ variants when you need explicit error handling.
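The saturate-or-validate distinction above can be illustrated with a stdlib-only sketch (hypothetical; not the crate's actual Counter) of a checked, lock-free `try_add` built on a CAS loop and `checked_add`:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

#[derive(Debug, PartialEq)]
enum CounterError {
    Overflow,
}

/// Illustrative checked counter: a CAS loop with checked_add turns a
/// would-be wraparound into an error instead of silently overflowing.
struct CheckedCounter {
    value: AtomicU64,
}

impl CheckedCounter {
    fn new(v: u64) -> Self {
        Self { value: AtomicU64::new(v) }
    }

    fn try_add(&self, n: u64) -> Result<(), CounterError> {
        let mut cur = self.value.load(Ordering::Relaxed);
        loop {
            // Validate before committing: this is the extra work the
            // plain (unchecked) fast-path methods skip for speed.
            let next = cur.checked_add(n).ok_or(CounterError::Overflow)?;
            match self.value.compare_exchange_weak(cur, next, Ordering::Relaxed, Ordering::Relaxed) {
                Ok(_) => return Ok(()),
                Err(actual) => cur = actual,
            }
        }
    }

    fn get(&self) -> u64 {
        self.value.load(Ordering::Relaxed)
    }
}

fn main() {
    let c = CheckedCounter::new(0);
    assert!(c.try_add(10).is_ok());
    assert_eq!(c.get(), 10);
    // Near u64::MAX the checked path reports Overflow instead of wrapping.
    let near_max = CheckedCounter::new(u64::MAX - 1);
    assert_eq!(near_max.try_add(5), Err(CounterError::Overflow));
}
```

A plain `fetch_add` is a single atomic instruction, while this checked variant pays for a load, a validation, and a compare-exchange, which is why the try_ variants cost slightly more.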
Add to your Cargo.toml:
[dependencies]
metrics-lib = "0.9.0"

# Or, with optional features enabled:
# metrics-lib = { version = "0.9.0", features = ["async"] }
use metrics_lib::{init, metrics};
// Initialize once at startup
init();
// Counters - fastest operations (18ns)
metrics().counter("requests").inc();
metrics().counter("errors").add(5);
// Gauges - sub-nanosecond operations (0.6ns)
metrics().gauge("cpu_usage").set(87.3);
metrics().gauge("memory_gb").add(1.5);
// Timers - automatic RAII timing
{
let _timer = metrics().timer("api_call").start();
// Your code here - automatically timed on drop
}
// Or time a closure
let result = metrics().time("db_query", || {
// Database operation
"user_data"
});
// System health monitoring
let cpu = metrics().system().cpu_used();
let memory_gb = metrics().system().mem_used_gb();
// Rate metering
metrics().rate("api_calls").tick();
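The RAII timing shown above can be sketched with stdlib types alone: a guard captures `Instant::now()` on creation and records the elapsed time when it is dropped (an illustrative pattern, not the crate's Timer implementation, which records into its registry):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Instant;

/// Illustrative RAII timing guard: records elapsed nanoseconds into an
/// atomic sink when it goes out of scope.
struct TimerGuard<'a> {
    start: Instant,
    sink_ns: &'a AtomicU64,
}

impl Drop for TimerGuard<'_> {
    fn drop(&mut self) {
        let ns = self.start.elapsed().as_nanos() as u64;
        self.sink_ns.fetch_add(ns, Ordering::Relaxed);
    }
}

fn main() {
    let total_ns = AtomicU64::new(0);
    {
        let _t = TimerGuard { start: Instant::now(), sink_ns: &total_ns };
        std::thread::sleep(std::time::Duration::from_millis(5));
    } // guard drops here and records the elapsed time
    assert!(total_ns.load(Ordering::Relaxed) >= 5_000_000);
    println!("recorded {} ns", total_ns.load(Ordering::Relaxed));
}
```

Because recording happens in `Drop`, the timing is captured on every exit path, including early returns and panics that unwind.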
Integration assets:

- docs/API.md#integration-examples
- docs/observability/grafana-dashboard.json
- docs/observability/recording-rules.yaml
- docs/k8s/service.yaml
- docs/k8s/servicemonitor.yaml
- docs/k8s/servicemonitor-secured.yaml

Commands:
# Import Grafana dashboard via API
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <GRAFANA_API_TOKEN>" \
http://<grafana-host>/api/dashboards/db \
-d @docs/observability/grafana-dashboard.json
# Validate Prometheus recording rules
promtool check rules docs/observability/recording-rules.yaml
# Apply Kubernetes manifests
kubectl apply -f docs/k8s/service.yaml
kubectl apply -f docs/k8s/servicemonitor.yaml
# For secured endpoints
kubectl apply -f docs/k8s/servicemonitor-secured.yaml
use std::time::Duration;
use metrics_lib::{metrics, AsyncMetricBatch, AsyncTimerExt};
// Async timing with zero overhead and typed result
let result: &str = metrics()
.timer("async_work")
.time_async(|| async {
tokio::time::sleep(Duration::from_millis(10)).await;
"completed"
})
.await;
// Batched async updates (flush takes &MetricsCore)
let mut batch = AsyncMetricBatch::new();
batch.counter_inc("requests", 1);
batch.gauge_set("cpu", 85.2);
batch.flush(metrics());
Run these self-contained examples to see the library in action:
Quick Start
examples/quick_start.rs
cargo run --example quick_start --release
Streaming Rate Window
examples/streaming_rate_window.rs
cargo run --example streaming_rate_window --release
Axum Registry Integration (minimal web service)
examples/axum_registry_integration.rs
cargo run --example axum_registry_integration --release
Endpoints:
- GET /health: liveness probe
- GET /metrics-demo: updates metrics (counter/gauge/timer/rate)
- GET /export: returns a JSON snapshot of selected metrics

Quick Tour
examples/quick_tour.rs
cargo run --example quick_tour --release
Async Batch + Timing
examples/async_batch_timing.rs
cargo run --example async_batch_timing --release
Token Bucket Rate Limiter
examples/token_bucket_limiter.rs
cargo run --example token_bucket_limiter --release
Custom Exporter (OpenMetrics-like)
examples/custom_exporter_openmetrics.rs
cargo run --example custom_exporter_openmetrics --release
Axum Middleware Metrics (minimal)
examples/axum_middleware_metrics.rs
cargo run --example axum_middleware_metrics --release
Contention & Admission Demo
examples/contention_admission.rs
cargo run --example contention_admission --release
CPU Stats Overview
examples/cpu_stats.rs
cargo run --example cpu_stats --release
Memory Stats Overview
examples/memory_stats.rs
cargo run --example memory_stats --release
Health Dashboard
examples/health_dashboard.rs
cargo run --example health_dashboard --release
Cache Hit/Miss
examples/cache_hit_miss.rs
cargo run --example cache_hit_miss --release
Broker Throughput
examples/broker_throughput.rs
cargo run --example broker_throughput --release
See also in docs/API.md:
- Building a Custom Exporter
- Memory Stats
- Memory % for an operation
- CPU Stats
- CPU % for an operation

For convenience, a helper script runs a curated set of non-blocking examples sequentially in release mode (skipping server examples like the Axum middleware):
bash tools/run_examples.sh
You can also pass a custom comma-separated list via EXAMPLES:
EXAMPLES="quick_start,quick_tour,cpu_stats" bash tools/run_examples.sh
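The token bucket limiter example listed above (examples/token_bucket_limiter.rs is the authoritative version) can be approximated with a small stdlib-only sketch; the struct and parameters here are hypothetical illustrations of the technique, not the crate's API:

```rust
use std::time::{Duration, Instant};

/// Illustrative single-threaded token bucket: a fixed-capacity bucket
/// refills continuously at `refill_per_sec`, and each admitted request
/// spends one token.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
    }

    /// Refill based on elapsed time, then try to spend one token.
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut bucket = TokenBucket::new(2.0, 10.0); // burst of 2, 10 tokens/sec
    assert!(bucket.try_acquire());
    assert!(bucket.try_acquire());
    assert!(!bucket.try_acquire()); // burst exhausted
    std::thread::sleep(Duration::from_millis(200)); // ~2 tokens refill
    assert!(bucket.try_acquire());
}
```

Capacity bounds the burst size while the refill rate bounds the sustained throughput, which is why the two parameters are tuned independently.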
use metrics_lib::{AdaptiveSampler, SamplingStrategy, MetricCircuitBreaker};
// Adaptive sampling under load
let sampler = AdaptiveSampler::new(SamplingStrategy::Dynamic {
min_rate: 1,
max_rate: 100,
target_throughput: 10000,
});
if sampler.should_sample() {
    let duration = std::time::Duration::from_millis(3); // e.g., a measured elapsed time
    metrics().timer("expensive_op").record(duration);
}
// Circuit breaker protection
let breaker = MetricCircuitBreaker::new(Default::default());
if breaker.is_allowed() {
// Perform operation
breaker.record_success();
} else {
// Circuit is open, skip operation
}
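The is_allowed/record_success protocol above can be sketched with a minimal count-based breaker (an illustrative simplification, not the crate's MetricCircuitBreaker): it opens after a threshold of consecutive failures and closes again on a recorded success. Real breakers typically add a half-open state with a timeout, which is omitted here for brevity.

```rust
/// Illustrative count-based circuit breaker: opens after `threshold`
/// consecutive failures and closes again on a recorded success.
struct Breaker {
    consecutive_failures: u32,
    threshold: u32,
}

impl Breaker {
    fn new(threshold: u32) -> Self {
        Self { consecutive_failures: 0, threshold }
    }
    /// The circuit is closed (requests allowed) while failures stay
    /// under the threshold.
    fn is_allowed(&self) -> bool {
        self.consecutive_failures < self.threshold
    }
    fn record_success(&mut self) {
        self.consecutive_failures = 0;
    }
    fn record_failure(&mut self) {
        self.consecutive_failures += 1;
    }
}

fn main() {
    let mut b = Breaker::new(3);
    assert!(b.is_allowed());
    for _ in 0..3 {
        b.record_failure();
    }
    assert!(!b.is_allowed()); // circuit is open: skip the operation
    b.record_success();
    assert!(b.is_allowed()); // closed again
}
```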
let health = metrics().system();
println!("CPU: {:.1}%", health.cpu_used());
println!("Memory: {:.1} GB", health.mem_used_gb());
println!("Load: {:.2}", health.load_avg());
println!("Threads: {}", health.thread_count());
Run the included benchmarks to see performance on your system:
# Basic performance comparison
cargo run --example benchmark_comparison --release
# Comprehensive benchmarks (Criterion)
cargo bench
# Cross-platform system tests
cargo test --all-features
- Criterion output lands in target/criterion/ with per-benchmark statistics and comparisons.
- Reports show time: [low … mean … high] and outlier percentages.
- CI uploads criterion-reports artifacts containing target/criterion.
- The Benchmarks workflow runs full-duration benches on Linux/macOS/Windows and uploads artifacts as benchmark-results-<os>.

View the latest nightly results and artifacts here:
Latest CI Benchmarks (Benchmarks workflow)
Benchmark history (GitHub Pages):
Sample Results (M1 MacBook Pro):
Counter Increment: 4.93 ns/op (202.84 M ops/sec)
Gauge Set: 0.53 ns/op (1886.79 M ops/sec)
Timer Record: 10.87 ns/op (91.99 M ops/sec)
Mixed Operations: 106.39 ns/op (9.40 M ops/sec)
Notes: Latest numbers taken from local Criterion means under target/criterion/**/new/estimates.json. Actual throughput varies by CPU and environment; use the GitHub Pages benchmark history for trends.
For more stable numbers, run cargo bench -- -w 3.0 -m 5.0 -n 100 (increase on dedicated runners).

See also: docs/zero-overhead-proof.md for assembly inspection and binary size analysis, and docs/performance-tuning.md for environment hardening.
Relaxed ordering is used for maximum performance.

#[repr(align(64))]
pub struct Counter {
value: AtomicU64, // 8 bytes
// 56 bytes padding to cache line boundary
}
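The alignment claim is directly checkable: `#[repr(align(64))]` pads the struct to a full 64-byte cache line so adjacent counters never share one, avoiding false sharing between threads. A self-contained check (the assertions are illustrative, not part of the crate):

```rust
use std::sync::atomic::AtomicU64;

/// Cache-line-aligned counter, as described above: the attribute forces
/// 64-byte alignment, and the compiler pads the struct out to 64 bytes.
#[repr(align(64))]
pub struct Counter {
    value: AtomicU64, // 8 bytes; the remaining 56 bytes are padding
}

fn main() {
    // Both alignment and total size round up to one cache line.
    assert_eq!(std::mem::align_of::<Counter>(), 64);
    assert_eq!(std::mem::size_of::<Counter>(), 64);
    let _ = Counter { value: AtomicU64::new(0) };
    println!("Counter occupies exactly one 64-byte cache line");
}
```

Without the attribute, two 8-byte counters allocated next to each other could land on the same cache line and ping-pong between cores under write contention.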
Comprehensive test suite with 87 unit tests and 2 documentation tests:
# Run all tests
cargo test
# Test with all features
cargo test --all-features
# Run only bench-gated tests (feature-flagged and ignored by default)
cargo test --features bench-tests -- --ignored
# Run benchmarks (Criterion)
cargo bench
# Check for memory leaks (e.g., run the test binary under valgrind on Linux)
cargo test --target x86_64-unknown-linux-gnu
Tier 1 Support:
System Integration:
- Linux: /proc filesystem, sysinfo APIs
- macOS: mach system calls, sysctl APIs

Graceful Fallbacks:
| Library | Counter ns/op | Gauge ns/op | Timer ns/op | Memory/Metric | Features |
|---|---|---|---|---|---|
| metrics-lib | 4.93 | 0.53 | 10.87 | 64B | ✅ Async, Circuit breakers, System monitoring |
| metrics-rs | 85.2 | 23.1 | 167.8 | 256B | ⚠️ No circuit breakers |
| prometheus | 156.7 | 89.4 | 298.3 | 1024B+ | ⚠️ HTTP overhead |
| statsd | 234.1 | 178.9 | 445.2 | 512B+ | ⚠️ Network overhead |
[dependencies]
metrics-lib = { version = "0.9.0", features = [
    "async",     # Async/await support (requires tokio)
    "histogram", # Advanced histogram support
    # "all"      # Or enable every feature at once
]}
use metrics_lib::{init_with_config, Config};
let config = Config {
max_metrics: 10000,
update_interval_ms: 1000,
enable_system_metrics: true,
};
init_with_config(config);
We welcome contributions! Please see our Contributing Guide.
# Clone repository
git clone https://github.com/jamesgober/metrics-lib.git
cd metrics-lib
# Run tests
cargo test --all-features
# Run benchmarks
cargo bench
# Check formatting and lints
cargo fmt --all -- --check
cargo clippy --all-features -- -D warnings
Additional guides:
- docs/migrating-from-metrics-rs.md
- docs/performance-tuning.md
- docs/zero-overhead-proof.md
- docs/api-stability.md

Licensed under the Apache License, version 2.0 (the "License"); you may not use this software, including, but not limited to, the source code, media files, ideas, techniques, or any other associated property or concept belonging to, associated with, or otherwise packaged with this software except in compliance with the License.
You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the LICENSE file included with this project for the specific language governing permissions and limitations under the License.