| Crates.io | frame-trace |
| lib.rs | frame-trace |
| version | 0.1.0 |
| created_at | 2025-12-22 22:43:05.751344+00 |
| updated_at | 2025-12-22 22:43:05.751344+00 |
| description | Execution tracing and monitoring subsystem for Frame microservices |
| homepage | |
| repository | https://github.com/Blackfall-Labs/frame-trace |
| max_upload_size | |
| id | 2000394 |
| size | 26,959 bytes |
Execution tracing utilities for the Frame ecosystem: call-graph tracking for debugging, transparency, and performance analysis.
Add to your `Cargo.toml`:

```toml
[dependencies]
frame-trace = "0.1.0"
```
`frame-trace` is standalone, with no dependencies on other Frame crates:

```text
frame-trace
└── (no Frame dependencies)
```
Used by: all Frame subsystems, for execution monitoring.

Position in the Frame ecosystem:

```text
frame-trace (standalone monitoring)
        ↓
[All Frame subsystems use this for tracing]
```
Quick start:

```rust
use frame_trace::{ExecutionTrace, StepType};

// Placeholder helpers so the example is self-contained.
fn search_documents(query: &str) -> Vec<String> { vec![query.to_string()] }
fn generate_response(results: &[String]) -> String { format!("{} docs", results.len()) }

fn main() {
    let mut trace = ExecutionTrace::new();

    // Start a step
    trace.start_step(StepType::Retrieval, "search_documents");

    // Do work...
    let results = search_documents("Rust async");

    // End the step
    trace.end_step();

    // Add another step
    trace.start_step(StepType::LlmGeneration, "generate_response");
    let response = generate_response(&results);
    trace.end_step();

    // Analyze performance
    println!("Total execution time: {}ms", trace.total_duration_ms());
    println!("Step count: {}", trace.steps().len());

    // Export as JSON
    let json = serde_json::to_string_pretty(&trace).unwrap();
    println!("{}\n{}", json, response);
}
```
Attaching input and output data to steps:

```rust
use frame_trace::{ExecutionTrace, StepType};
use serde_json::json;

fn process_query(query: &str) -> String {
    let mut trace = ExecutionTrace::new();

    // Step 1: retrieval, recording the input query
    trace.start_step_with_data(
        StepType::Retrieval,
        "search",
        Some(json!({"query": query})),
    );
    let docs = vec!["doc1", "doc2"];
    trace.end_step_with_data(Some(json!({"count": docs.len()})));

    // Step 2: LLM generation
    trace.start_step(StepType::LlmGeneration, "generate");
    let response = "Generated response...";
    trace.end_step();

    // Export the trace to stderr
    let trace_json = serde_json::to_string(&trace).unwrap();
    eprintln!("Trace: {}", trace_json);

    response.to_string()
}
```
Analyzing a completed trace:

```rust
use frame_trace::ExecutionTrace;

fn analyze_trace(trace: &ExecutionTrace) {
    println!("Performance Analysis");
    println!("====================");
    println!("Total duration: {}ms", trace.total_duration_ms());
    println!("Steps: {}", trace.steps().len());

    // Find the slowest step
    if let Some(slowest) = trace.steps().iter().max_by_key(|s| s.duration_ms) {
        println!(
            "Slowest step: {} ({}ms)",
            slowest.name, slowest.duration_ms
        );
    }

    // Aggregate duration by step type
    let mut by_type = std::collections::HashMap::new();
    for step in trace.steps() {
        *by_type.entry(step.step_type).or_insert(0) += step.duration_ms;
    }

    println!("\nTime by step type:");
    for (step_type, duration) in by_type {
        println!("  {:?}: {}ms", step_type, duration);
    }
}
```
The `StepType` enum defines common pipeline stages:

- `AudioCapture` - Audio input capture
- `VoiceActivity` - Voice activity detection
- `SpeechToText` - Speech-to-text transcription
- `Retrieval` - Knowledge/context retrieval
- `LlmGeneration` - LLM response generation
- `ToolExecution` - Tool/skill execution
- `TextToSpeech` - Text-to-speech synthesis
- `AudioPlayback` - Audio output playback
- `Error` - Error condition

Track execution flow through multi-stage AI pipelines:
```rust
// Voice assistant pipeline (capture_audio, transcribe, retrieve_context,
// and llm are your own components)
let mut trace = ExecutionTrace::new();

trace.start_step(StepType::AudioCapture, "capture");
let audio = capture_audio();
trace.end_step();

trace.start_step(StepType::SpeechToText, "transcribe");
let text = transcribe(audio);
trace.end_step();

trace.start_step(StepType::Retrieval, "retrieve_context");
let context = retrieve_context(&text);
trace.end_step();

trace.start_step(StepType::LlmGeneration, "generate");
let response = llm.generate(&context);
trace.end_step();
```
Identify bottlenecks in your application:

```rust
for step in trace.steps() {
    if step.duration_ms > 1000 {
        eprintln!("SLOW: {} took {}ms", step.name, step.duration_ms);
    }
}
```
Export execution traces for review:

```json
{
  "steps": [
    {
      "step_type": "Retrieval",
      "name": "search_documents",
      "start_time_ms": 1703001234567,
      "duration_ms": 42,
      "input": { "query": "How do I use async Rust?" },
      "output": { "count": 3 }
    },
    {
      "step_type": "LlmGeneration",
      "name": "generate_response",
      "start_time_ms": 1703001234609,
      "duration_ms": 1523
    }
  ]
}
```
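Since the trace serializes with `serde_json` (as shown above), persisting it to disk is straightforward. A minimal sketch; `save_trace` is a hypothetical helper, not part of the crate:

```rust
use frame_trace::ExecutionTrace;
use std::{fs, io};

// Hypothetical helper: persist a trace as pretty-printed JSON for offline review.
fn save_trace(trace: &ExecutionTrace, path: &str) -> io::Result<()> {
    let json = serde_json::to_string_pretty(trace)
        .map_err(|e| io::Error::new(io::ErrorKind::Other, e))?;
    fs::write(path, json)
}
```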
Pass traces between services for end-to-end visibility:

```rust
// Service A: serialize the trace and send it along with the request
let trace = execute_service_a();
let trace_json = serde_json::to_string(&trace)?;
send_to_service_b(trace_json);

// Service B: deserialize and continue appending steps
let mut trace: ExecutionTrace = serde_json::from_str(&trace_json)?;
trace.start_step(StepType::ToolExecution, "service_b_work");
// ... continue trace ...
```
`ExecutionTrace` - Main trace container.

Methods:

- `new()` - Create a new trace
- `start_step(step_type, name)` - Start a new step
- `start_step_with_data(step_type, name, input)` - Start a step with input data
- `end_step()` - End the current step
- `end_step_with_data(output)` - End the current step with output data
- `steps()` - Get all steps
- `total_duration_ms()` - Total execution time in milliseconds
- `current_step_mut()` - Get a mutable reference to the current step (see the sketch below)
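A minimal sketch of using `current_step_mut()` to attach data to a step that is already running; `tag_current_step` and the `Option` return type are assumptions for illustration, not confirmed API details:

```rust
use frame_trace::{ExecutionTrace, StepType};
use serde_json::json;

fn tag_current_step(trace: &mut ExecutionTrace) {
    trace.start_step(StepType::ToolExecution, "call_weather_api");

    // Hypothetical usage: attach input data after the step has started.
    // Assumes current_step_mut() yields Option<&mut TraceStep>; drop the
    // `if let` if the crate returns &mut TraceStep directly.
    if let Some(step) = trace.current_step_mut() {
        step.input = Some(json!({"city": "Oslo"}));
    }

    trace.end_step();
}
```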
`TraceStep` - Individual step in an execution trace.

Fields:
- `step_type: StepType` - Type of step
- `name: String` - Step description
- `start_time_ms: u64` - Unix timestamp (ms)
- `duration_ms: u64` - Duration in milliseconds
- `input: Option<Value>` - Input data (JSON)
- `output: Option<Value>` - Output data (JSON)
- `error: Option<String>` - Error message if the step failed (see the sketch below)

`StepType` - Enum of common pipeline step types.
See the Step Types section above.
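Since each `TraceStep` carries an optional `error` field, a finished trace doubles as a failure log. A minimal sketch; `report_failures` is a hypothetical helper:

```rust
use frame_trace::ExecutionTrace;

// Scan a finished trace and print every step that recorded an error.
fn report_failures(trace: &ExecutionTrace) {
    for step in trace.steps() {
        if let Some(err) = &step.error {
            eprintln!("step '{}' failed: {}", step.name, err);
        }
    }
}
```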
Overhead is minimal: each step records only a timestamp, a duration, and optional JSON payloads (see `TraceStep` above), making tracing suitable for production use.
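If even serialization cost matters in your hot path, one common pattern (not a crate feature) is to gate trace export behind an environment variable:

```rust
// Pattern sketch: only serialize and print the trace when a debug
// flag is present in the environment.
if std::env::var_os("FRAME_TRACE_DEBUG").is_some() {
    match serde_json::to_string_pretty(&trace) {
        Ok(json) => eprintln!("{json}"),
        Err(e) => eprintln!("failed to serialize trace: {e}"),
    }
}
```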
Extracted from the Frame project, where it provides execution tracing for the AI assistant pipeline.
MIT - See LICENSE for details.
Magnus Trent (magnus@blackfall.dev)