| Crates.io | async-inspect |
| lib.rs | async-inspect |
| version | 0.2.0 |
| created_at | 2025-11-18 22:27:38.601879+00 |
| updated_at | 2025-12-05 16:19:03.881576+00 |
| description | X-ray vision for async Rust - inspect and debug async state machines |
| homepage | https://github.com/ibrahimcesar/async-inspect |
| repository | https://github.com/ibrahimcesar/async-inspect |
| max_upload_size | |
| id | 1939087 |
| size | 934,048 |
X-ray vision for async Rust
async-inspect is a debugging tool that visualizes and inspects async state machines in Rust. See exactly what your futures are doing, where they're stuck, and why.
Debugging async Rust is frustrating:
```rust
#[tokio::test]
async fn test_user_flow() {
    let user = fetch_user(123).await;           // Where is this stuck?
    let posts = fetch_posts(user.id).await;     // Or here?
    let friends = fetch_friends(user.id).await; // Or here?
    // Test hangs... but WHERE? WHY? 😱
}
```
What you see in a regular debugger:
```text
Thread blocked in:
  tokio::runtime::park
  std::sys::unix::thread::Thread::sleep
  ???
```
❌ Useless! You can't tell which task is stuck or which `.await` is blocked.

Common async debugging nightmares:
Current "solutions":
```rust
// Solution 1: Add prints everywhere 😭
async fn fetch_user(id: u64) -> User {
    println!("Starting fetch_user");
    let result = http_get(url).await;
    println!("Finished fetch_user");
    result
}

// Solution 2: Use tokio-console (limited visibility)
// Solution 3: Give up and add timeouts everywhere 🤷
```
async-inspect gives you complete visibility into async execution:
```text
$ async-inspect run ./my-app

┌─────────────────────────────────────────────────────────────┐
│ async-inspect - Task Inspector                              │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ Task #42: fetch_user_data(user_id=12345)                    │
│ Status: BLOCKED (2.3s)                                      │
│ State: WaitingForPosts                                      │
│                                                             │
│ Progress: ▓▓▓▓▓░░░ 2/4 steps                                │
│                                                             │
│ ✅ fetch_user()    - Completed (145ms)                      │
│ ⏳ fetch_posts()   - IN PROGRESS (2.3s) ◄─── STUCK HERE     │
│    └─> http::get("api.example.com/posts/12345")             │
│        └─> TCP: ESTABLISHED, waiting for response           │
│        └─> Timeout in: 27.7s                                │
│ ⏸️ fetch_friends() - Not started                            │
│ ⏸️ build_response() - Not started                           │
│                                                             │
│ State Machine Polls: 156 (avg: 14.7ms between polls)        │
│                                                             │
│ Press 'd' for details | 't' for timeline | 'g' for graph    │
└─────────────────────────────────────────────────────────────┘
```
Now you know EXACTLY where execution is stuck (`fetch_posts`), and why.

Async Rust is powerful but opaque. When you write:
```rust
async fn complex_operation() {
    let a = step_a().await;
    let b = step_b(a).await;
    let c = step_c(b).await;
}
```
The compiler transforms this into a state machine:
```rust
// Simplified - the real thing is more complex
enum ComplexOperationState {
    WaitingForStepA { /* ... */ },
    WaitingForStepB { a: ResultA, /* ... */ },
    WaitingForStepC { a: ResultA, b: ResultB, /* ... */ },
    Done,
}
```
The problem: This state machine is invisible to debuggers!
Traditional debuggers show you OS threads and native stack frames; the state machine itself never appears.

async-inspect understands async state machines and shows you the exact `.await` you're blocked on.

tokio-console is excellent, but limited:
```sh
$ tokio-console
```
What tokio-console shows:
```text
Task   Duration  Polls  State
#42    2.3s      156    Running
#43    0.1s      5      Idle
#44    5.2s      892    Running
```
What it DOESN'T show: which `.await` a task is blocked on.

| Feature | async-inspect | tokio-console | gdb/lldb | println! |
|---|---|---|---|---|
| See current `.await` | ✅ | ❌ | ❌ | ⚠️ Manual |
| State machine state | ✅ | ❌ | ❌ | ❌ |
| Variable inspection | ✅ | ❌ | ⚠️ Limited | ❌ |
| Waiting reason | ✅ | ❌ | ❌ | ❌ |
| Timeline view | ✅ | ⚠️ Basic | ❌ | ❌ |
| Deadlock detection | ✅ | ❌ | ❌ | ❌ |
| Dependency graph | ✅ | ⚠️ Basic | ❌ | ❌ |
| Runtime agnostic | ✅ | ❌ Tokio only | ✅ | ✅ |
| Zero code changes | ✅ | ⚠️ Requires tracing | ✅ | ❌ |
async-inspect is complementary to tokio-console:
Use both together for complete visibility!
async-inspect works with multiple async runtimes:

- Tokio — `tokio` feature
- async-std — `async-std-runtime` feature
- smol — `smol-runtime` feature

Example usage with different runtimes:
```rust
// Tokio
use async_inspect::runtime::tokio::{spawn_tracked, InspectExt};

#[tokio::main]
async fn main() {
    spawn_tracked("my_task", async {
        // Your code here
    }).await;

    let result = fetch_data()
        .inspect("fetch_data")
        .await;
}
```

```rust
// async-std
use async_inspect::runtime::async_std::{spawn_tracked, InspectExt};

fn main() {
    async_std::task::block_on(async {
        spawn_tracked("my_task", async {
            // Your code here
        }).await;
    });
}
```

```rust
// smol
use async_inspect::runtime::smol::{spawn_tracked, InspectExt};

fn main() {
    smol::block_on(async {
        spawn_tracked("my_task", async {
            // Your code here
        }).await;
    });
}
```
See the examples/ directory for complete working examples.
Work in Progress - Early development
Current version: 0.1.0-alpha
```sh
# Not yet published
cargo install async-inspect

# Or build from source
git clone https://github.com/ibrahimcesar/async-inspect
cd async-inspect
cargo install --path .
```
```sh
# Run your app with inspection enabled
async-inspect run ./my-app

# Attach to running process
async-inspect attach --pid 12345

# Run tests with inspection
async-inspect test

# Start web dashboard
async-inspect serve --port 8080
```
```toml
# Add to Cargo.toml
[dependencies]
async-inspect = "0.1"
```

```rust
// Instrument specific functions
#[async_inspect::trace]
async fn fetch_user(id: u64) -> User {
    // Automatically instrumented
    let profile = fetch_profile(id).await;
    let posts = fetch_posts(id).await;
    User { profile, posts }
}
```

```rust
// Or use manual inspection points
use async_inspect::prelude::*;

async fn complex_operation() {
    inspect_point!("starting");
    let data = fetch_data().await;
    inspect_point!("data_fetched", data.len());
    process(data).await
}
```
```rust
#[tokio::test]
async fn test_timeout() {
    // This test hangs... but where?
    let result = timeout(
        Duration::from_secs(30),
        long_operation()
    ).await;
}
```
With async-inspect:
```text
$ async-inspect test

Found test stuck at:
  test_timeout
  └─> long_operation()
      └─> fetch_data().await ◄─── BLOCKED (5m 23s)
          └─> Waiting for: HTTP response
          └─> URL: https://slow-api.example.com/data
          └─> Timeout: None (will wait forever!)

Suggestion: Add timeout to HTTP client
```
```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Mutex;

async fn deadlock_example() {
    let mutex_a = Arc::new(Mutex::new(0));
    let mutex_b = Arc::new(Mutex::new(0));

    // Task 1: locks A then B
    let (a1, b1) = (Arc::clone(&mutex_a), Arc::clone(&mutex_b));
    tokio::spawn(async move {
        let _a = a1.lock().await;
        tokio::time::sleep(Duration::from_millis(10)).await;
        let _b = b1.lock().await; // DEADLOCK!
    });

    // Task 2: locks B then A
    tokio::spawn(async move {
        let _b = mutex_b.lock().await;
        tokio::time::sleep(Duration::from_millis(10)).await;
        let _a = mutex_a.lock().await; // DEADLOCK!
    });
}
```
With async-inspect:
```text
💀 DEADLOCK DETECTED!

Task #42: waiting for Mutex<i32> @ 0x7f8a9c0
  └─> Held by: Task #89

Task #89: waiting for Mutex<i32> @ 0x7f8a9d0
  └─> Held by: Task #42

Circular dependency:
  Task #42 → Mutex A → Task #89 → Mutex B → Task #42

Suggestion:
  • Acquire locks in consistent order (A before B)
  • Use try_lock() with timeout
  • Consider lock-free alternatives
```
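Detection of this kind reduces to finding a cycle in the wait-for graph between tasks. A minimal sketch of that check over illustrative task IDs (std only; not the tool's actual algorithm):

```rust
use std::collections::HashMap;

// Wait-for graph: task id -> id of the task holding the resource it wants.
// Following edges from any start task either terminates or revisits a
// node on the current chain, which is a deadlock cycle.
fn find_cycle(waits_on: &HashMap<u32, u32>) -> Option<Vec<u32>> {
    for &start in waits_on.keys() {
        let mut chain: Vec<u32> = Vec::new();
        let mut cur = start;
        while let Some(&next) = waits_on.get(&cur) {
            if let Some(pos) = chain.iter().position(|&t| t == cur) {
                // cur already appears on the chain: chain[pos..] is the cycle.
                return Some(chain[pos..].to_vec());
            }
            chain.push(cur);
            cur = next;
        }
    }
    None
}

fn main() {
    let mut waits_on = HashMap::new();
    waits_on.insert(42, 89); // Task #42 waits on a lock held by Task #89
    waits_on.insert(89, 42); // Task #89 waits on a lock held by Task #42
    let cycle = find_cycle(&waits_on).expect("deadlock expected");
    assert!(cycle.contains(&42) && cycle.contains(&89));
    println!("deadlock between tasks: {:?}", cycle);
}
```

Async mutexes make this tractable because a blocked task is parked at a known `.await`, so the tool can observe both "who waits" and "who holds".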
```text
$ async-inspect profile ./my-app

Performance Report:

Slowest Operations:
1. fetch_posts() - avg 2.3s (called 450x)
   └─> 98% time in: HTTP request
   └─> Suggestion: Add caching or batch requests

2. acquire_lock() - avg 340ms (called 1200x)
   └─> Lock contention: 50 tasks waiting
   └─> Suggestion: Reduce lock scope or use RwLock

Hot Paths:
1. process_request → fetch_user → fetch_posts (89% of requests)
2. handle_webhook → validate → store (11% of requests)
```
```yaml
# .github/workflows/test.yml
- name: Run tests with async inspection
  run: async-inspect test --timeout 30s --fail-on-hang

- name: Upload trace on failure
  if: failure()
  uses: actions/upload-artifact@v3
  with:
    name: async-trace
    path: async-inspect-trace.json
```
```rust
// Your code
async fn fetch_user(id: u64) -> User {
    let profile = fetch_profile(id).await;
    let posts = fetch_posts(id).await;
    User { profile, posts }
}

// With instrumentation (conceptual)
async fn fetch_user(id: u64) -> User {
    __async_inspect_enter("fetch_user", id);

    __async_inspect_await_start("fetch_profile");
    let profile = fetch_profile(id).await;
    __async_inspect_await_end("fetch_profile");

    __async_inspect_await_start("fetch_posts");
    let posts = fetch_posts(id).await;
    __async_inspect_await_end("fetch_posts");

    let result = User { profile, posts };
    __async_inspect_exit("fetch_user", &result);
    result
}
```
```toml
# Production build - no overhead
[profile.release]
debug = false

# Debug build - full instrumentation
[profile.dev]
debug = true
```
async-inspect works seamlessly with your existing Rust async ecosystem tools:
Export metrics for monitoring dashboards:
```rust
use async_inspect::integrations::prometheus::PrometheusExporter;

let exporter = PrometheusExporter::new()?;
exporter.update();

// In your /metrics endpoint:
let metrics = exporter.gather();
```
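Under the hood, a `/metrics` endpoint ultimately serves Prometheus's plain-text exposition format. A minimal sketch of rendering one gauge by hand (plain string formatting, no client library; the value is illustrative):

```rust
// Render a single gauge in Prometheus text exposition format:
// a HELP line, a TYPE line, then one sample line.
fn render_gauge(name: &str, help: &str, value: f64) -> String {
    format!("# HELP {name} {help}\n# TYPE {name} gauge\n{name} {value}\n")
}

fn main() {
    let body = render_gauge("async_inspect_active_tasks", "Currently active tasks", 23.0);
    assert!(body.contains("# TYPE async_inspect_active_tasks gauge"));
    assert!(body.ends_with("async_inspect_active_tasks 23\n"));
    print!("{body}");
}
```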
Available metrics:

- `async_inspect_tasks_total` - Total tasks created
- `async_inspect_active_tasks` - Currently active tasks
- `async_inspect_blocked_tasks` - Tasks waiting on I/O
- `async_inspect_task_duration_seconds` - Task execution times
- `async_inspect_tasks_failed_total` - Failed task count

Send traces to Jaeger, Zipkin, or any OTLP backend:
```rust
use async_inspect::integrations::opentelemetry::OtelExporter;

let exporter = OtelExporter::new("my-service");
exporter.export_tasks();
```
Automatic capture via tracing-subscriber:
```rust
use tracing_subscriber::prelude::*;
use async_inspect::integrations::tracing_layer::AsyncInspectLayer;

tracing_subscriber::registry()
    .with(AsyncInspectLayer::new())
    .init();
```
Use alongside tokio-console for complementary insights:
```sh
# Terminal 1: Run with tokio-console
RUSTFLAGS="--cfg tokio_unstable" cargo run

# Terminal 2: Monitor with tokio-console
tokio-console

# async-inspect exports provide historical analysis
cargo run --example ecosystem_integration
```
Import async-inspect metrics into Grafana:
Feature Flags:
```toml
[dependencies]
async-inspect = { version = "0.1", features = [
  "prometheus-export",    # Prometheus metrics
  "opentelemetry-export", # OTLP traces
  "tracing-sub",          # Tracing integration
] }
```
async-inspect supports multiple industry-standard export formats for visualization and analysis:
Export complete task and event data as structured JSON:
```rust
use async_inspect::export::JsonExporter;

// Export to file
JsonExporter::export_to_file(&inspector, "data.json")?;

// Or get as string
let json = JsonExporter::export_to_string(&inspector)?;
```
Use with: jq, Python pandas, JavaScript tools, data pipelines
Export tasks and events in spreadsheet-compatible format:
```rust
use async_inspect::export::CsvExporter;

// Export tasks (id, name, duration, poll_count, etc.)
CsvExporter::export_tasks_to_file(&inspector, "tasks.csv")?;

// Export events (event_id, task_id, timestamp, kind, details)
CsvExporter::export_events_to_file(&inspector, "events.csv")?;
```
Use with: Excel, Google Sheets, pandas, data analysis
Export for visualization in chrome://tracing or Perfetto UI:
```rust
use async_inspect::export::ChromeTraceExporter;

ChromeTraceExporter::export_to_file(&inspector, "trace.json")?;
```
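The Chrome Trace Event Format itself is plain JSON: each task span can be written as one complete (`"ph": "X"`) event with microsecond timestamps. A sketch of emitting such events by hand (illustrative values):

```rust
// Emit one "complete" event (ph = "X") in Chrome Trace Event Format.
// ts and dur are in microseconds; pid/tid group events into tracks.
fn complete_event(name: &str, ts_us: u64, dur_us: u64, tid: u64) -> String {
    format!(
        r#"{{"name":"{name}","ph":"X","ts":{ts_us},"dur":{dur_us},"pid":1,"tid":{tid}}}"#
    )
}

fn main() {
    // Two spans of one task, matching the durations from the demo above.
    let events = vec![
        complete_event("fetch_user", 0, 145_000, 42),
        complete_event("fetch_posts", 145_000, 2_300_000, 42),
    ];
    let trace = format!("[{}]", events.join(","));
    assert!(trace.contains(r#""name":"fetch_posts""#));
    println!("{trace}");
}
```

A JSON array of such objects is already a valid trace file that the viewers below can open.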
How to visualize:

Chrome DevTools (built-in):
1. Open chrome://tracing
2. Load trace.json

Perfetto UI (recommended):
1. Open ui.perfetto.dev
2. Load trace.json

What you see: each task as a span on a zoomable timeline.
Generate flamegraphs for performance analysis:
```rust
use async_inspect::export::{FlamegraphExporter, FlamegraphBuilder};

// Basic export (folded stack format)
FlamegraphExporter::export_to_file(&inspector, "flamegraph.txt")?;

// Customized export
FlamegraphBuilder::new()
    .include_polls(false)  // Exclude poll events
    .include_awaits(true)  // Include await points
    .min_duration_ms(10)   // Filter out operations shorter than 10ms
    .export_to_file(&inspector, "flamegraph_filtered.txt")?;

// Generate SVG directly (requires 'flamegraph' feature)
#[cfg(feature = "flamegraph")]
FlamegraphExporter::generate_svg(&inspector, "flamegraph.svg")?;
```
How to visualize:

Speedscope (easiest, online):
1. Open speedscope.app
2. Drag flamegraph.txt onto the page

inferno (local SVG generation):
```sh
cargo install inferno
cat flamegraph.txt | inferno-flamegraph > output.svg
open output.svg
```
flamegraph.pl (classic):
```sh
git clone https://github.com/brendangregg/FlameGraph
./FlameGraph/flamegraph.pl flamegraph.txt > output.svg
```
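All three tools consume the same folded-stack format: one line per unique call path, frames joined by semicolons, then a space and a count or duration. A sketch of aggregating task spans into that shape (illustrative data):

```rust
use std::collections::BTreeMap;

// Aggregate (call path, duration_us) samples into folded-stack lines:
// "frame1;frame2 123" — the format flamegraph.pl and inferno consume.
fn fold(samples: &[(Vec<&str>, u64)]) -> Vec<String> {
    let mut totals: BTreeMap<String, u64> = BTreeMap::new();
    for (path, dur) in samples {
        *totals.entry(path.join(";")).or_insert(0) += dur;
    }
    totals.into_iter().map(|(k, v)| format!("{k} {v}")).collect()
}

fn main() {
    let samples = vec![
        (vec!["process_request", "fetch_user", "fetch_posts"], 2_300_000u64),
        (vec!["process_request", "fetch_user", "fetch_posts"], 1_200_000),
        (vec!["process_request", "fetch_user"], 145_000),
    ];
    for line in fold(&samples) {
        println!("{line}");
    }
}
```

For async code the "frames" are await chains rather than native stacks, but the file format is identical, which is why the standard flamegraph toolchain works unchanged.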
What you see: frame width proportional to time spent in each async call path.
See examples/export_formats.rs for a complete example:
```sh
cargo run --example export_formats
```
This demonstrates:
Output files:
```text
async_inspect_exports/
├── data.json                # Complete JSON export
├── tasks.csv                # Task metrics
├── events.csv               # Event timeline
├── trace.json               # Chrome Trace Event Format
├── flamegraph.txt           # Basic flamegraph
└── flamegraph_filtered.txt  # Filtered flamegraph
```
```text
┌─ async-inspect ─────────────────────────────────────────┐
│ [Tasks] [Timeline] [Graph] [Profile]      [?] Help      │
├──────────────────────────────────────────────────────────┤
│                                                          │
│ Active Tasks: 23        CPU: ████░░ 45%                  │
│ Blocked: 8              Mem: ██░░░░ 20%                  │
│ Running: 15                                              │
│                                                          │
│ Task  State          Duration  Details                   │
│ ─────────────────────────────────────────────────────── │
│ #42   ⏳ WaitingPosts  2.3s     http::get()              │
│ #43   ✅ Done          0.1s     Completed                │
│ #44   💀 Deadlock      5.2s     Mutex wait               │
│ #45   🏃 Running       0.03s    Computing                │
│                                                          │
│ [←→] Navigate  [Enter] Details  [g] Graph  [q] Quit      │
└──────────────────────────────────────────────────────────┘
```
http://localhost:8080
```text
┌────────────────────────────────────────────────┐
│ async-inspect                      [Settings]  │
├────────────────────────────────────────────────┤
│                                                │
│ 📊 Overview          🕒 Last updated: 2s ago   │
│                                                │
│ ● 23 Tasks Active    ▁▃▅▇█▇▅▃▁ Activity        │
│ ⏸️ 8 Blocked                                   │
│ 💀 1 Deadlock        [View Details →]          │
│                                                │
│ 📈 Performance                                 │
│ ├─ Avg Response: 145ms                         │
│ ├─ 99th percentile: 2.3s                       │
│ └─ Slowest: fetch_posts() - 5.2s               │
│                                                │
│ [View Timeline] [Export Trace] [Filter...]     │
└────────────────────────────────────────────────┘
```
Contributions welcome! This is a challenging project that needs expertise in:
Priority areas:
See CONTRIBUTING.md for details.
This project uses telemetry-kit to collect anonymous usage analytics. This helps us understand how async-inspect is used in the real world, enabling data-driven decisions instead of relying solely on GitHub issues.
What we collect: which commands are run (e.g. `monitor`, `export`, `stats`).

What we DON'T collect: anything that could identify you or your project.
Open source projects often make decisions based on a vocal minority. Telemetry gives us visibility into:
We will publish a public dashboard showing aggregated, anonymous usage data at: Coming soon
You can disable telemetry in several ways:
1. Environment variable (recommended):
```sh
export ASYNC_INSPECT_NO_TELEMETRY=1
```
2. Standard DO_NOT_TRACK:
```sh
export DO_NOT_TRACK=1
```
3. Compile-time (excludes telemetry code entirely):
```toml
[dependencies]
async-inspect = { version = "0.1", default-features = false, features = ["cli", "tokio"] }
```
Even when telemetry is disabled, we send a single anonymous opt-out signal. This helps us understand the opt-out rate without collecting any identifying information.
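An opt-out check along these lines is simple to honor. A sketch of how a tool might consult both variables (illustrative logic, not the crate's actual code; the environment lookup is injected so the behavior can be demonstrated without mutating the process environment):

```rust
// Telemetry counts as disabled when either opt-out variable is set to a
// non-empty value other than "0". `get` abstracts std::env::var lookups.
fn telemetry_disabled(get: impl Fn(&str) -> Option<String>) -> bool {
    ["ASYNC_INSPECT_NO_TELEMETRY", "DO_NOT_TRACK"]
        .iter()
        .any(|var| matches!(get(var), Some(v) if !v.is_empty() && v != "0"))
}

fn main() {
    // Inject a fake environment for the demo; real code would pass
    // |k| std::env::var(k).ok().
    let fake_env = |k: &str| (k == "DO_NOT_TRACK").then(|| "1".to_string());
    assert!(telemetry_disabled(fake_env));
    assert!(!telemetry_disabled(|_| None));
    println!("telemetry opt-out honored");
}
```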
Learn more:
async-inspect is designed to be used in development and CI/CD environments for analyzing async code. We take security seriously:

- Dependencies are audited with cargo-audit and cargo-deny

You can verify the provenance of any release binary:
```sh
# Verify a release artifact with the GitHub CLI
gh attestation verify async-inspect-linux-x86_64.tar.gz \
  --owner ibrahimcesar
```
If you discover a security vulnerability, please email security@ibrahimcesar.com instead of using the issue tracker.
MIT OR Apache-2.0
Inspired by:
async-inspect - Because async shouldn't be a black box 🔍
Status: 🚧 Pre-alpha - Architecture design phase
Star ⭐ this repo to follow development!
Have ideas or feedback? Open an issue or discussion!
Key questions we're exploring: