Crates.io | ferric-ai |
lib.rs | ferric-ai |
version | 0.2.0 |
created_at | 2025-09-25 19:24:14.808832+00 |
updated_at | 2025-09-25 19:24:14.808832+00 |
description | Comprehensive flamegraph analysis CLI with intelligent hotspot detection, source code attribution, allocation analysis, and LLM-friendly structured output for performance optimization |
homepage | https://github.com/dolly-parseton/ferric |
repository | https://github.com/dolly-parseton/ferric |
max_upload_size | |
id | 1855026 |
size | 215,964 |
Developer note: I made this project over the course of 48 hours as a way of learning how to optimally pair program with an LLM (in my case, Claude Code). I'm really happy with these results, but of course there's more testing to do; as I run this on more complex code doing more performance-sensitive work, I'll keep refining the tool. There were some very interesting use cases for this kind of tool: having something that presents results in an LLM-efficient way has some interesting consequences.
Features include noise filtering with pattern matching (`all`, `__libc*`, and base Rust runtime frames), source attribution down to `file:line` locations, and direct analysis of `flamegraph.svg` files.

# From source
git clone https://github.com/dolly-parseton/ferric.git
cd ferric
cargo build --release
# Or soon...
cargo install ferric-ai
Below is code documentation if you want to use ferric directly. However, if you're using an LLM-backed coding assistant like Claude Code or GitHub Copilot, I suggest you ask it to run ferric and see how it performs. Honestly, it's really weird watching the agent start to figure out what ferric is and infer next steps (Claude Code in particular was very interesting to watch).
# 1. Understand flamegraph characteristics
ferric summary app.svg --format json
# 2. Set context-aware thresholds
ferric config auto app.svg
# 3. Find performance issues with code context
ferric find hotspots app.svg --source ./src
# 4. Investigate performance patterns
ferric analyze allocation-patterns app.svg
# Comprehensive flamegraph statistics
ferric summary app.svg --format json
# Auto-configure based on characteristics
ferric config auto app.svg
# Manual configuration
ferric config set cpu-threshold 15%
ferric config get
# Smart hotspot detection (uses configured thresholds)
ferric find hotspots app.svg
# Functions by performance metrics
ferric find functions app.svg --cpu-above 15%
ferric find functions app.svg --name-contains alloc
# Call sites for specific functions
ferric find call-sites expensive_func app.svg --source ./src
# Recursion analysis with configurable thresholds
ferric analyze recursion app.svg --min-depth 10
# Memory allocation patterns
ferric analyze allocation-patterns app.svg
# Call frequency vs cost analysis
ferric analyze call-frequency app.svg --function expensive_func
# Execution path tracing
ferric analyze call-paths app.svg --from main --to malloc
# Convert SVG to structured tree
ferric tree app.svg --json
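The `call-paths` query above can be pictured as a depth-first search over the parsed call tree: enumerate every chain of frames that leads from one function down to another. This is an illustrative sketch, not ferric's actual implementation, and the toy call graph is invented for the example.

```python
def call_paths(tree, start, end):
    """Yield every path of function names from `start` down to `end`.

    `tree` maps a function name to the list of functions it calls.
    Assumes an acyclic call tree (a flamegraph), not a general graph.
    """
    def dfs(node, path):
        path = path + [node]
        if node == end:
            yield path
            return
        for callee in tree.get(node, []):
            yield from dfs(callee, path)
    yield from dfs(start, [])

# Toy call graph: main calls parse and compute, both of which hit malloc
graph = {
    "main": ["parse", "compute"],
    "parse": ["malloc"],
    "compute": ["malloc"],
}
for p in call_paths(graph, "main", "malloc"):
    print(" -> ".join(p))
# main -> parse -> malloc
# main -> compute -> malloc
```

Real flamegraph data distinguishes call sites by position, so the same function name can appear in many paths; the DFS shape stays the same.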
{
"analysis_type": "enhanced_with_source",
"source_integration": {
"source_directory": "./src",
"total_functions_analyzed": 23,
"functions_with_source_attribution": 21,
"source_files_scanned": 8,
"attribution_success_rate": 91.3
},
"enhanced_functions": [
{
"function_name": "expensive_computation",
"cpu_percent": 36.01,
"samples": 13023107,
"depth": 15,
"source_locations": [
{
"file_path": "./src/compute.rs",
"line_number": 42,
"function_signature": "pub fn expensive_computation(data: &[u8]) -> Vec<u8>",
"context_lines": ["...", "pub fn expensive_computation(data: &[u8]) -> Vec<u8> {", " // Simulate expensive computation", "..."]
}
]
}
],
"insights": [
"Source attribution: 21 of 23 functions mapped to source code (91.3% success rate)",
"High attribution rate - excellent source-to-flamegraph mapping"
]
}
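As a sanity check on the JSON above, the `attribution_success_rate` is just attributed functions over total functions analyzed, rounded to one decimal place:

```python
# 21 of 23 functions mapped to source code, as reported in the JSON above
attributed, total = 21, 23
rate = round(attributed / total * 100, 1)
print(rate)  # 91.3
```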
{
"flamegraph_metadata": {
"total_functions": 35,
"max_depth": 26,
"total_samples": 36162394
},
"cpu_distribution": {
"percentile_50": 0.12,
"percentile_90": 7.58,
"percentile_95": 21.75,
"percentile_99": 36.01
},
"context_aware_thresholds": {
"hotspot_cpu_threshold": 12.0,
"recursion_depth_threshold": 24,
"allocation_threshold_percent": 5.0
},
"insights": [
"High CPU concentration: 95th percentile at 21.8%",
"Deep recursion detected: max depth 26 levels",
"Context-aware thresholds configured for optimal analysis"
]
}
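The idea behind context-aware thresholds is to derive the hotspot cutoff from the CPU-percentage distribution itself rather than a fixed number. Ferric's exact formula isn't documented here; this hypothetical sketch just shows one way to anchor a threshold between high percentiles using the standard library.

```python
import statistics

def hotspot_threshold(cpu_percents):
    """Pick a hotspot cutoff between the 90th and 95th percentiles.

    This is an assumed heuristic for illustration, not ferric's formula.
    """
    # quantiles(n=20) gives the 5%, 10%, ..., 95% cut points
    qs = statistics.quantiles(cpu_percents, n=20)
    p90, p95 = qs[17], qs[18]
    # land between p90 and p95 so only the heaviest few functions flag
    return (p90 + p95) / 2

# Invented per-function CPU percentages: many cheap frames, a few hot ones
data = [0.1] * 10 + [1.0] * 5 + [5.0, 8.0, 12.0, 22.0, 36.0]
print(hotspot_threshold(data))
```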
The best advice I can give is to just ask it to run ferric. There are usage guides printed for each command, sub-command, and argument, which I've found really helps guide the LLM in its analysis; paired with the context of a codebase it's actively working on, there are some interesting behaviors I've seen. I've tried to include features that mitigate context bloat, but obviously be aware that some commands can have large printouts (tree and call-paths most notably).
Developer Note: Perhaps there's some kind of token counter and pagination functionality I could add for larger flamegraphs that might also help optimise context bloat. I'm still learning about LLMs and optimal workflows, so bear with me.
# Phase 1: Assessment
ferric summary app.svg --format json
# Phase 2: Configuration
ferric config auto app.svg
# Phase 3: Investigation
ferric find hotspots app.svg --source ./src
ferric analyze allocation-patterns app.svg
# Phase 4: Deep Analysis
ferric analyze call-paths app.svg --from main --to hotspot_function
Ferric analyzes your flamegraph characteristics to derive optimal thresholds:
# Analyzes CPU distribution, recursion depth, allocation patterns
ferric config auto app.svg
# Generated config saved to ~/.config/ferric/config.json
{
"cpu_threshold_hotspots": 12.0,
"depth_threshold_recursion": 24,
"allocation_threshold_percent": 5.0,
"exclude_runtime_by_default": true,
"call_frequency_threshold_high": 500000
}
# Set individual thresholds
ferric config set cpu-threshold 15%
ferric config set recursion-depth-threshold 20
# View current configuration
ferric config get
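Threshold values like `cpu-threshold` accept a percentage. As a hypothetical illustration of what that kind of argument handling involves (ferric's real parsing lives in its clap definitions, not shown here), a helper might accept both `"15%"` and `"15"`:

```python
def parse_percent(raw: str) -> float:
    """Parse '15%' or '15' into 15.0; hypothetical helper for illustration."""
    value = float(raw.rstrip("%"))
    if not 0.0 <= value <= 100.0:
        raise ValueError(f"percentage out of range: {raw}")
    return value

print(parse_percent("15%"))  # 15.0
```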
| Component | Purpose | Features |
|---|---|---|
| Parser | SVG flamegraph parsing | DTD handling, spatial containment |
| Categorization | Function classification | Infrastructure noise filtering |
| Source Attribution | Code location mapping | Multi-language support (90.9% success) |
| Configuration | Context-aware thresholds | Persistent, auto-derived settings |
| Analysis | Performance pattern detection | Recursion, allocation, call paths |
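The "spatial containment" the Parser row refers to is the geometric trick behind flamegraph SVGs: a frame's rect is a child of another frame when it sits one row deeper and its x-span lies inside the parent's x-span. This is an assumed reconstruction of the idea, not ferric's code, and the sample frames are invented.

```python
def build_tree(frames):
    """frames: list of (name, x, width, depth) tuples from SVG <rect>s."""
    frames = sorted(frames, key=lambda f: (f[3], f[1]))  # by depth, then x
    children = {f[0]: [] for f in frames}
    for name, x, w, depth in frames:
        # find the frame one level up whose x-span contains this one
        for pname, px, pw, pdepth in frames:
            if pdepth == depth - 1 and px <= x and x + w <= px + pw:
                children[pname].append(name)
                break
    return children

frames = [
    ("all", 0, 100, 0),
    ("main", 0, 100, 1),
    ("parse", 0, 40, 2),
    ("compute", 40, 60, 2),
]
print(build_tree(frames))
# {'all': ['main'], 'main': ['parse', 'compute'], 'parse': [], 'compute': []}
```

A production parser would also key on unique rect coordinates rather than names, since the same function can appear in many stacks.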
Real-World Testing:
Production Ready:
# Rust 2021 edition
cargo build --release
cargo test
cargo clippy --all-targets
src/
├── cli.rs                 # Clap-based CLI with hierarchical commands
├── parser.rs              # SVG flamegraph parser (roxmltree-based)
├── categorization.rs      # Smart function classification
├── source_attributions.rs # Multi-language code location mapping
├── config.rs              # Context-aware configuration system
├── tree.rs                # Call tree data structures
├── query.rs               # Legacy query engine (being phased out)
├── error.rs               # Custom error types
└── main.rs                # Command dispatch and analysis logic
Licensed under the MIT License. See LICENSE for details.
Dependencies:
Inspiration: