| Field | Value |
|---|---|
| Crates.io | granular-metrics |
| lib.rs | granular-metrics |
| version | 0.1.0 |
| created_at | 2025-04-25 09:44:35.875771+00 |
| updated_at | 2025-04-25 09:44:35.875771+00 |
| description | Zero-middleware metrics counter that emits per-key and aggregate RPS/RPM snapshots every second |
| homepage | https://github.com/Gistyr/granular-metrics |
| repository | https://github.com/Gistyr/granular-metrics |
| id | 1648767 |
| size | 87,768 |
Add to your `Cargo.toml`:

```toml
[dependencies]
granular-metrics = "0.1.0"
```

Or, with the optional actix-web HTTP endpoint:

```toml
[dependencies]
granular-metrics = { version = "0.1.0", features = ["http"] }
```
Deriving `Debug, Clone, PartialEq, Eq, Hash` on your key enum is mandatory; `serde::Serialize` is only needed for the `http` feature:

```rust
#[derive(Debug, Clone, PartialEq, Eq, Hash, serde::Serialize)]
enum Keys {
    Login,
    Download,
    Ping,
    Verify,
}
```
You may use any name for the enum; each variant corresponds to a key and may be named however you like, and there is no limit on how many keys (variants) you define.

Initialize the collector in `main`:

```rust
#[tokio::main]
async fn main() {
    granular_metrics::init::<Keys>();
}
```
Or, with the actix-web HTTP feature:

```rust
#[tokio::main]
async fn main() {
    granular_metrics::init::<Keys>("127.0.0.1", "8080", "metrics", "all");
}
```
With the `http` feature, `init` has the signature:

```rust
pub fn init<K>(address: &str, port: &str, path: &str, workers: &str)
```

- `address` and `port` determine where the endpoint is served, e.g. `http://127.0.0.1:8080/metrics`.
- `path` is the URL path that is exposed (without a leading slash).
- `workers` is the number of worker threads to spawn: `"all"` or `""` will spawn the default amount (matches the number of CPU cores), or pass a number such as `"2"`.
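The `workers` rule above could be interpreted along these lines; a minimal sketch where `worker_count` is a hypothetical helper, not part of the crate's API:

```rust
use std::thread;

// Hypothetical helper: map the `workers` string to a thread count.
// "all" or "" falls back to the number of CPU cores; anything else is parsed as a number.
fn worker_count(workers: &str) -> usize {
    match workers {
        "all" | "" => thread::available_parallelism().map(|n| n.get()).unwrap_or(1),
        n => n.parse().expect("workers must be \"all\", \"\", or a number"),
    }
}

fn main() {
    assert_eq!(worker_count("2"), 2);
    assert!(worker_count("all") >= 1);
}
```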
Call the `increment(Keys::Variant)` function to increase that key's counter by one:

```rust
use actix_web::{HttpResponse, Responder};
use granular_metrics::increment;
use crate::Keys::*;

async fn login_handler() -> impl Responder {
    increment(Login);
    HttpResponse::Ok()
}

async fn download_handler() -> impl Responder {
    increment(Download);
    HttpResponse::Ok()
}
```
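A per-key counter like `increment` can be sketched with std primitives alone; an illustrative sketch of the general technique, not the crate's actual internals:

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Mutex, OnceLock};

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum Keys { Login, Download }

// Illustrative global counter table, keyed by enum variant.
static COUNTERS: OnceLock<Mutex<HashMap<Keys, AtomicU64>>> = OnceLock::new();

fn increment(key: Keys) {
    let map = COUNTERS.get_or_init(|| Mutex::new(HashMap::new()));
    map.lock().unwrap()
        .entry(key)
        .or_insert_with(|| AtomicU64::new(0))
        .fetch_add(1, Ordering::Relaxed);
}

fn count(key: &Keys) -> u64 {
    COUNTERS.get_or_init(|| Mutex::new(HashMap::new()))
        .lock().unwrap()
        .get(key)
        .map(|c| c.load(Ordering::Relaxed))
        .unwrap_or(0)
}

fn main() {
    increment(Keys::Login);
    increment(Keys::Login);
    increment(Keys::Download);
    assert_eq!(count(&Keys::Login), 2);
    assert_eq!(count(&Keys::Download), 1);
}
```

A snapshot task could then read and reset these counters once per second to produce RPS figures.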
Fetch a snapshot of the current metrics:

```rust
fn main() {
    let server_metrics = granular_metrics::fetch::<Keys>();
}
```

Or, with the actix-web HTTP feature:

```shell
curl http://127.0.0.1:8080/metrics
```
`fetch` returns a `MetricsSnapshot`; make a custom implementation to retrieve and store your data:

```rust
pub struct MetricsSnapshot<K> {
    pub per_key: HashMap<K, (u64, u64)>,
    pub total: (u64, u64),
}
```

Example output:

```text
MetricsSnapshot { per_key: {Login: (8, 500), Download: (0, 10), Ping: (1, 100), Verify: (16, 1000)}, total: (26, 1610) }
```
Or the JSON version when using the HTTP feature:

```json
{"per_key":{"Login":[1,95],"Download":[0,3],"Ping":[2,179],"Verify":[0,20]},"total":[4,297]}
```
This translates to:

```text
{ per_key: {Login: (requests per second, requests per minute), Download: (RPS, RPM), Ping: (RPS, RPM), Verify: (RPS, RPM)}, total: (aggregate RPS: total requests last minute ÷ 60, aggregate RPM: total requests last minute) }
```

Because snapshots are emitted every second, "real time" metrics are possible.