| Crates.io | micro_metrics |
| lib.rs | micro_metrics |
| version | 0.2.0 |
| created_at | 2025-08-01 04:34:05.325036+00 |
| updated_at | 2025-08-01 14:37:43.092869+00 |
| description | Performance monitoring and metrics collection for micro-neural networks |
| homepage | |
| repository | https://github.com/ruvnet/ruv-FANN |
| max_upload_size | |
| id | 1776091 |
| size | 115,669 |
Basic performance monitoring and metrics collection framework
This crate provides a foundation for collecting and exporting performance metrics from the Semantic Cartan Matrix system. It offers basic timing, data collection, and JSON export capabilities.
Add this to your Cargo.toml:
[dependencies]
micro_metrics = { path = "../micro_metrics" }
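The path dependency above assumes a local checkout of the ruv-FANN workspace. To depend on the published release from crates.io instead (0.2 matches the version listed above), the standard form works too:
[dependencies]
micro_metrics = "0.2"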
Basic metrics collection and aggregation:
use micro_metrics::{MetricsCollector, SystemMetrics, AgentMetrics};
// Create collector (basic implementation)
let collector = MetricsCollector::new();
// Record basic metrics (implementation varies)
// Note: Actual API may differ from this example
Cross-platform timing functionality:
use micro_metrics::{Timer, TimingInfo};
// Create and use timer
let timer = Timer::new();
let timing_info = timer.measure(|| {
    // Code to time
    expensive_operation();
});
println!("Operation took: {:?}", timing_info.duration);
Export metrics in JSON format:
use micro_metrics::{JsonExporter, MetricsReport};
let exporter = JsonExporter::new();
// Export basic metrics to JSON
let json_report = exporter.export_metrics(&collector)?;
println!("Metrics JSON: {}", json_report);
Basic data structures for visualization:
use micro_metrics::{DashboardData, HeatmapData};
// Create dashboard-compatible data
let dashboard_data = DashboardData {
    timestamp: std::time::SystemTime::now(),
    metrics: collector.get_current_metrics(),
    // ... other basic fields
};
// Create heatmap data for attention visualization
let heatmap = HeatmapData {
    width: 32,
    height: 32,
    data: attention_matrix.flatten(),
    // ... other visualization data
};
[features]
default = ["std"]
std = ["serde/std", "serde_json/std"]
system-metrics = [] # System monitoring (not implemented)
prometheus = [] # Prometheus export (not implemented)
dashboard = [] # Web dashboard (not implemented)
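Only the default std feature is functional today; the other flags are declared but empty. Enabling one follows the usual Cargo syntax and currently adds nothing until the corresponding functionality lands:
[dependencies]
micro_metrics = { version = "0.2", features = ["system-metrics"] }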
use micro_metrics::{MetricsCollector, Timer};
// Initialize collector
let mut collector = MetricsCollector::new();
// Time operations
let timer = Timer::start("operation_name".to_string());
perform_neural_network_inference();
let duration = timer.stop();
// Store timing result
collector.record_timing(duration);
// Export for analysis
let json_metrics = collector.export_json()?;
The following describes intended functionality, not current implementation:
// PLANNED API (not fully implemented)
use micro_metrics::{
PerformanceMetrics, DriftTracker, RegressionDetector
};
let mut metrics = PerformanceMetrics::new();
metrics.record_latency("inference", 1.2);
metrics.record_throughput("tokens_per_second", 15420.0);
let drift_tracker = DriftTracker::new();
let regression_detector = RegressionDetector::new();
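As a rough illustration of what drift tracking is meant to catch (this is a self-contained sketch, not the planned API), the core idea is comparing recent latency against a recorded baseline:
// Minimal drift check: flag when recent mean latency exceeds the baseline by a tolerance
fn latency_drifted(baseline_ms: f64, recent_ms: &[f64], tolerance: f64) -> bool {
    if recent_ms.is_empty() {
        return false;
    }
    let mean = recent_ms.iter().sum::<f64>() / recent_ms.len() as f64;
    mean > baseline_ms * (1.0 + tolerance)
}
// latency_drifted(1.2, &[1.3, 1.5, 1.4], 0.10) flags a regression above +10%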
// PLANNED API (not implemented)
use micro_metrics::{DashboardServer, MetricsStreamer};
let dashboard = DashboardServer::new("0.0.0.0:8080");
dashboard.start().await?;
let streamer = MetricsStreamer::new();
streamer.stream_to_dashboard(&metrics).await?;
// PLANNED API (not implemented)
use micro_metrics::PrometheusExporter;
let exporter = PrometheusExporter::new();
exporter.register_counter("inferences_total");
exporter.export_to_gateway("http://prometheus:9091").await?;
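For context, a Prometheus exporter of this kind ultimately emits the standard plain-text exposition format; for the counter registered above the output would look roughly like this (the value is illustrative):
# HELP inferences_total Total number of inferences performed
# TYPE inferences_total counter
inferences_total 1024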
# Run basic tests
cargo test
# Test JSON export functionality
cargo test --features std
# Test system metrics (when implemented)
cargo test --features system-metrics
Current examples would demonstrate:
Planned examples:
Priority areas for contribution:
Licensed under either of:
- Apache License, Version 2.0
- MIT license
at your option.
- micro_core: Core types being monitored
- micro_cartan_attn: Attention mechanisms with metrics
- micro_routing: Routing performance monitoring
- micro_swarm: High-level orchestration metrics

Part of the rUv-FANN Semantic Cartan Matrix system - Basic metrics collection for neural network monitoring.