| Field | Value |
|---|---|
| Crates.io | iron_runtime_analytics |
| lib.rs | iron_runtime_analytics |
| version | 0.4.0 |
| created_at | 2025-12-16 18:04:18.389701+00 |
| updated_at | 2025-12-18 09:31:17.656446+00 |
| description | Usage analytics and reporting for Iron Cage agent runtime |
| homepage | |
| repository | https://github.com/.../iron_runtime |
| max_upload_size | |
| id | 1988375 |
| size | 188,391 |
Lock-free event-based analytics for Python LlmRouter.
```toml
[dependencies]
iron_runtime_analytics = { path = "../iron_runtime_analytics" }
iron_cost = { path = "../iron_cost" } # For pricing
```
- **Lock-free storage**: crossbeam `ArrayQueue` for a bounded event buffer
- **Atomic counters**: O(1) stats access without locks
- **Per-model/provider stats**: `DashMap` for concurrent aggregation
- **High-level recording API**: automatic provider inference and cost calculation
- **Protocol 012 compatible**: field compatibility with the analytics API
- **Background sync**: server sync with auto-flush on shutdown (feature: `sync`)
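Provider inference typically maps a model-name prefix to a provider. The sketch below illustrates the idea with a standalone function; the names, prefixes, and fallback behavior are assumptions for illustration, not the crate's actual `infer_provider` implementation:

```rust
// Hypothetical sketch: infer a provider from a model-name prefix.
// Prefixes and return values here are illustrative assumptions.
fn infer_provider(model: &str) -> &'static str {
    if model.starts_with("gpt-") {
        "openai"
    } else if model.starts_with("claude-") {
        "anthropic"
    } else if model.starts_with("gemini-") {
        "google"
    } else {
        "unknown"
    }
}

fn main() {
    assert_eq!(infer_provider("gpt-4"), "openai");
    assert_eq!(infer_provider("claude-3-opus-20240229"), "anthropic");
    println!("provider for gpt-4: {}", infer_provider("gpt-4"));
}
```

This is why `record_llm_completed` can take only a model name: the provider (and thus the pricing entry) is derived from it unless an explicit `provider_id` is supplied.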
The high-level API handles provider inference and cost calculation automatically:
```rust
use iron_runtime_analytics::EventStore;
use iron_cost::pricing::PricingManager;

let store = EventStore::new();
let pricing = PricingManager::new().unwrap();

// Record successful LLM request - provider inferred from model name
store.record_llm_completed(&pricing, "gpt-4", 150, 50, None, None);

// Record with agent attribution
store.record_llm_completed(
    &pricing,
    "claude-3-opus-20240229",
    200,
    100,
    Some("agent_123"),        // agent_id
    Some("ip_anthropic-001"), // provider_id
);

// Record failed request
store.record_llm_failed("gpt-4", None, None, Some("rate_limit"), None);

// Lifecycle events
store.record_router_started(8080);
store.record_router_stopped(); // Captures final stats automatically

let stats = store.stats();

// Totals (O(1) access)
println!("Requests: {}", stats.total_requests);
println!("Cost: ${:.4}", stats.total_cost_usd());
println!("Success rate: {:.1}%", stats.success_rate() * 100.0);

// Per-model breakdown
for (model, model_stats) in &stats.by_model {
    println!("{}: {} requests, ${:.4}", model, model_stats.request_count, model_stats.cost_usd());
}

// Per-provider breakdown
for (provider, provider_stats) in &stats.by_provider {
    println!("{}: {} tokens", provider, provider_stats.input_tokens + provider_stats.output_tokens);
}
```
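The O(1) totals come from plain atomic counters that writers bump on each event and readers load without taking a lock. A minimal standalone sketch of that pattern (the struct and field names are made up for illustration, not the crate's internals):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Illustrative totals kept as atomic counters: recording is a
// fetch_add, reading is a load - no locks on either path.
struct Totals {
    requests: AtomicU64,
    successes: AtomicU64,
}

impl Totals {
    fn record(&self, success: bool) {
        self.requests.fetch_add(1, Ordering::Relaxed);
        if success {
            self.successes.fetch_add(1, Ordering::Relaxed);
        }
    }

    fn success_rate(&self) -> f64 {
        let total = self.requests.load(Ordering::Relaxed);
        if total == 0 {
            return 0.0;
        }
        self.successes.load(Ordering::Relaxed) as f64 / total as f64
    }
}

fn main() {
    let t = Totals { requests: AtomicU64::new(0), successes: AtomicU64::new(0) };
    for i in 0..10 {
        t.record(i % 5 != 0); // 8 of 10 succeed
    }
    assert!((t.success_rate() - 0.8).abs() < 1e-9);
    println!("success rate: {:.1}%", t.success_rate() * 100.0);
}
```

`Ordering::Relaxed` suffices here because each counter is independent; stats reads are monotonic snapshots, not transactionally consistent across counters.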
Simple, predictable behavior:

- **Fixed memory**: bounded buffer (default 10,000 slots, ~2-5 MB)
- **Non-blocking**: drops new events when the buffer is full; never waits for locks or I/O
- **O(1) stats access**: atomic counters, no lock contention
- **Observability**: `dropped_count()` tracks lost events
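The drop-not-block semantics can be sketched with std's bounded `sync_channel` standing in for crossbeam's `ArrayQueue` (the `BoundedEvents` type and its fields are illustrative assumptions, not the real `EventStore`):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

// Illustrative bounded event sink: try_send never blocks; when the
// buffer is full, the event is dropped and a counter is incremented.
struct BoundedEvents {
    tx: SyncSender<u32>,
    dropped: AtomicU64,
}

impl BoundedEvents {
    fn record(&self, event: u32) {
        if let Err(TrySendError::Full(_)) = self.tx.try_send(event) {
            self.dropped.fetch_add(1, Ordering::Relaxed);
        }
    }

    fn dropped_count(&self) -> u64 {
        self.dropped.load(Ordering::Relaxed)
    }
}

fn main() {
    let (tx, _rx) = sync_channel(2); // tiny capacity for demonstration
    let store = BoundedEvents { tx, dropped: AtomicU64::new(0) };
    for e in 0..5 {
        store.record(e);
    }
    // Buffer held 2 events; the remaining 3 were dropped, never blocked on.
    assert_eq!(store.dropped_count(), 3);
    println!("dropped: {}", store.dropped_count());
}
```

The trade-off is deliberate: under burst load a recording hot path loses telemetry rather than adding latency, and the loss is observable via the dropped counter.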
When the `sync` feature is enabled, events can be synced to the Control API automatically in the background:
```rust
use iron_runtime_analytics::{EventStore, SyncClient, SyncConfig};
use std::sync::Arc;
use std::time::Duration;

let store = Arc::new(EventStore::new());

let config = SyncConfig::new("http://localhost:3001", "ic_token_here")
    .with_interval(Duration::from_secs(30)) // Sync every 30s
    .with_batch_threshold(10);              // Or when 10 events pending

// Construct the sync client (constructor shown for illustration)
let client = SyncClient::new(Arc::clone(&store), config);

// Start background sync (requires a tokio runtime handle)
let handle = client.start(&runtime_handle);

// ... use store normally ...

// Stop and flush remaining events
handle.stop();
```
| Option | Default | Description |
|---|---|---|
| `sync_interval` | 30s | How often to sync events |
| `batch_threshold` | 10 | Sync immediately when this many events are pending |
| `timeout` | 30s | HTTP request timeout |
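The interplay of `sync_interval` and `batch_threshold` can be sketched as a simple flush decision. The function below is an illustrative assumption about the logic, not the crate's actual sync loop:

```rust
use std::time::Duration;

// Illustrative flush decision: sync when the pending batch hits the
// threshold, or when the interval has elapsed and anything is pending.
fn should_sync(pending: usize, batch_threshold: usize,
               since_last: Duration, interval: Duration) -> bool {
    pending >= batch_threshold || (pending > 0 && since_last >= interval)
}

fn main() {
    // 10 pending events hit the threshold: sync immediately.
    assert!(should_sync(10, 10, Duration::from_secs(1), Duration::from_secs(30)));
    // 3 pending, interval elapsed: sync.
    assert!(should_sync(3, 10, Duration::from_secs(31), Duration::from_secs(30)));
    // 3 pending, interval not elapsed: keep batching.
    assert!(!should_sync(3, 10, Duration::from_secs(5), Duration::from_secs(30)));
    println!("decision logic ok");
}
```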
Events are synced to `/api/v1/analytics/events`. Only `llm_request_completed` and `llm_request_failed` events are synced.

For full control over event construction:
```rust
use iron_runtime_analytics::{EventStore, AnalyticsEvent, EventPayload};
use iron_runtime_analytics::event::{LlmUsageData, LlmModelMeta};

let store = EventStore::new();

store.record(AnalyticsEvent::new(EventPayload::LlmRequestCompleted(LlmUsageData {
    meta: LlmModelMeta {
        provider_id: Some("ip_openai-001".into()),
        provider: "openai".into(),
        model: "gpt-4".into(),
    },
    input_tokens: 150,
    output_tokens: 50,
    cost_micros: 6000, // $0.006
})));

// Check for dropped events (buffer overflow)
if store.dropped_count() > 0 {
    eprintln!("Warning: {} events dropped (buffer full)", store.dropped_count());
}

// Check unsynced events (pending server sync)
println!("Unsynced events: {}", store.unsynced_count());
```
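`cost_micros` stores cost in micro-dollars (1 USD = 1,000,000 micros), which keeps the field an integer. The conversion from token counts is simple arithmetic; the per-million-token rates below are made-up example values chosen to reproduce the `6000` in the snippet above, not real pricing:

```rust
// Illustrative conversion of token counts to cost in micro-dollars.
// Rates are expressed as USD per million tokens; values are examples.
fn cost_micros(input_tokens: u64, output_tokens: u64,
               input_usd_per_million: f64, output_usd_per_million: f64) -> u64 {
    let usd = input_tokens as f64 * input_usd_per_million / 1_000_000.0
        + output_tokens as f64 * output_usd_per_million / 1_000_000.0;
    (usd * 1_000_000.0).round() as u64
}

fn main() {
    // 150 input + 50 output tokens at hypothetical $20/M in, $60/M out:
    // 150*20/1e6 + 50*60/1e6 = 0.003 + 0.003 = $0.006 -> 6000 micros
    assert_eq!(cost_micros(150, 50, 20.0, 60.0), 6000);
    println!("cost: {} micros", cost_micros(150, 50, 20.0, 60.0));
}
```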
```rust
use iron_runtime_analytics::EventStore;
use std::thread;

// Create store with streaming channel
let (store, receiver) = EventStore::with_streaming(10_000, 100);

// Spawn consumer thread
thread::spawn(move || {
    while let Ok(event) = receiver.recv() {
        // Process event (e.g., send to server)
        println!("Event: {:?}", event.event_id());
    }
});

// Events are automatically sent to the channel when recorded
// (`pricing` is a PricingManager, as in the examples above)
store.record_llm_completed(&pricing, "gpt-4", 100, 50, None, None);
```
```text
src/
├── lib.rs           # Re-exports
├── event.rs         # AnalyticsEvent, EventPayload, LlmUsageData
├── event_storage.rs # EventStore (lock-free buffer + atomic counters)
├── stats.rs         # AtomicModelStats, ModelStats, ComputedStats
├── recording.rs     # High-level record_* methods
└── helpers.rs       # Provider enum, infer_provider, current_time_ms
```
| File | Responsibility |
|---|---|
| lib.rs | Lock-free event-based analytics for Iron Runtime LLM proxy. |
| event.rs | Analytics event types and payloads. |
| event_storage.rs | Lock-free event storage with atomic counters. |
| provider_utils.rs | Utility functions and types for analytics. |
| recording.rs | High-level recording API for EventStore. |
| stats.rs | Statistics types for analytics aggregation. |
| sync.rs | Analytics sync - background sync of events to Control API. |
Apache-2.0