| Crates.io | memscope-rs |
| lib.rs | memscope-rs |
| version | 0.1.10 |
| created_at | 2025-07-09 10:07:16.868146+00 |
| updated_at | 2025-10-15 06:39:59.478599+00 |
| description | Advanced Rust memory analysis and visualization toolkit with custom allocator, variable tracking, and beautiful SVG reports. |
| homepage | https://github.com/TimWood0x10/memscope-rs |
| repository | https://github.com/TimWood0x10/memscope-rs |
| max_upload_size | |
| id | 1744666 |
| size | 10,673,015 |
A comprehensive memory analysis toolkit with specialized tracking strategies for single-threaded, multi-threaded, and async Rust applications.
memscope-rs provides four intelligent tracking strategies automatically selected based on your application patterns:
| Strategy | Use Case | Performance | Best For |
|---|---|---|---|
| Core Tracker | Development & debugging | Zero overhead | Precise analysis with track_var! macros |
| Lock-free Multi-threaded | High concurrency (100+ threads) | Thread-local sampling | Production monitoring, zero contention |
| Async Task-aware | async/await applications | < 5ns per allocation | Context-aware async task tracking |
| Unified Backend | Complex hybrid applications | Adaptive routing | Automatic strategy selection and switching |
// Strategy 1: Core tracker (single-threaded development and debugging)
use memscope_rs::{track_var, track_var_smart, track_var_owned};
fn main() {
// Zero-overhead reference tracking (recommended)
let data = vec![1, 2, 3, 4, 5];
track_var!(data);
// Smart tracking (automatic strategy selection)
let number = 42i32; // Copy type - copied
let text = String::new(); // Non-copy - tracked by reference
track_var_smart!(number);
track_var_smart!(text);
// Ownership tracking (precise lifecycle analysis)
let tracked = track_var_owned!(vec![1, 2, 3]);
// Export with multiple formats
memscope_rs::export_user_variables_json("analysis.json").unwrap();
memscope_rs::export_user_variables_binary("analysis.memscope").unwrap();
}
// Strategy 2: Lock-free tracker (high-concurrency, multi-threaded workloads)
use memscope_rs::lockfree;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize lock-free tracking
lockfree::initialize_lockfree_tracking()?;
// Spawn many threads (scales to 100+ threads)
let handles: Vec<_> = (0..100).map(|i| {
std::thread::spawn(move || {
// Thread-local tracking with intelligent sampling
for j in 0..1000 {
let data = vec![i; j % 100 + 1];
lockfree::track_allocation(&data, &format!("data_{}_{}", i, j));
}
})
}).collect();
for handle in handles {
handle.join().unwrap();
}
// Aggregate and analyze all threads
let analysis = lockfree::aggregate_all_threads()?;
lockfree::export_analysis(&analysis, "lockfree_analysis")?;
Ok(())
}
// Strategy 3: Async task-aware tracker (async/await applications)
use memscope_rs::async_memory;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize async-aware tracking
async_memory::initialize().await?;
// Track memory across async tasks
let tasks: Vec<_> = (0..50).map(|i| {
tokio::spawn(async move {
let data = vec![i; 1000];
async_memory::track_in_task(&data, &format!("async_data_{}", i)).await;
// Simulate async work
tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;
})
}).collect();
futures::future::join_all(tasks).await;
// Export task-aware analysis
let analysis = async_memory::generate_analysis().await?;
async_memory::export_visualization(&analysis, "async_analysis").await?;
Ok(())
}
// Strategy 4: Unified backend (automatic strategy selection for hybrid applications)
use memscope_rs::unified::{UnifiedBackend, BackendConfig};
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize unified backend with automatic detection
let mut backend = UnifiedBackend::initialize(BackendConfig::default())?;
// Backend automatically detects environment and selects optimal strategy:
// - Single-threaded: Core tracker
// - Multi-threaded: Lock-free tracker
// - Async runtime: Async-aware tracker
// - Mixed: Hybrid strategy
let session = backend.start_tracking()?;
// Your application logic here - tracking happens transparently
let data = vec![1, 2, 3, 4, 5];
// Backend handles tracking automatically
// Collect comprehensive analysis
let analysis = session.collect_data()?;
let final_data = session.end_session()?;
// Export unified analysis
backend.export_analysis(&final_data, "unified_analysis")?;
Ok(())
}
| Strategy | Overhead | Best Use Case |
|---|---|---|
| Reference Tracking | ~0% (zero-cost) | Development debugging |
| Ownership Tracking | ~5-10% | Precise lifecycle analysis |
| Lock-free Multi-threaded | ~2-8% (adaptive sampling) | High concurrency production |
| Async Task-aware | < 5ns per allocation | Async applications |
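As a rule of thumb from the table above, reference tracking is cheap enough for hot paths, while ownership tracking is worth its ~5-10% cost only for variables whose exact lifecycle you care about. A minimal sketch combining the two macros from the quick-start example; the function and variable names are illustrative, not part of the library:

```rust
use memscope_rs::{track_var, track_var_owned};

// Illustrative function: mixes cheap reference tracking with selective ownership tracking.
fn process_batch(batch_id: usize) {
    // Hot path: reference tracking, effectively free per the table above.
    let scratch = vec![0u8; 4096];
    track_var!(scratch);

    // Only the long-lived result pays the ~5-10% ownership-tracking cost,
    // in exchange for precise lifecycle (drop-point) analysis.
    let summary = track_var_owned!(format!("batch-{}", batch_id));
    drop(summary);
}
```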
| Format | Speed vs JSON | Size vs JSON | Use Case |
|---|---|---|---|
| Binary Export | 5-10x faster | 60-80% smaller | Production, large datasets |
| JSON Export | Baseline | Baseline | Development, debugging |
| Streaming Export | Memory-efficient | Variable | Large datasets, limited memory |
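A small sketch of picking a format at runtime with the export_user_variables_json / export_user_variables_binary functions from the quick-start example above; the debugging flag and file names are illustrative assumptions:

```rust
fn export_results(debugging: bool) {
    if debugging {
        // JSON: human-readable and easy to diff during development.
        memscope_rs::export_user_variables_json("analysis.json")
            .expect("JSON export failed");
    } else {
        // Binary: reported as 5-10x faster and 60-80% smaller, better for large datasets.
        memscope_rs::export_user_variables_binary("analysis.memscope")
            .expect("binary export failed");
    }
}
```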
| Metric | Single-threaded | Multi-threaded | Async |
|---|---|---|---|
| Concurrency | 1 thread | 100+ threads | 50+ tasks |
| Variables | 1M+ variables | 100K+ per thread | 10K+ per task |
| Memory Usage | ~50KB + 100B/var | Thread-local pools | Task-local buffers |
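As a rough, hedged back-of-envelope from that last row: tracking 10,000 variables with the single-threaded tracker costs on the order of 50KB + 10,000 × 100B ≈ 1MB of bookkeeping.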
| Module | Export Time | File Size | Use Case |
|---|---|---|---|
| Single-threaded | 1.3s | 1.2MB | Development analysis |
| Multi-threaded | 211ms | 480KB | Production monitoring |
| Async | 800ms | 800KB | Task performance analysis |
| Hybrid | 2.1s | 2.5MB | Comprehensive analysis |
Based on actual test results from example applications
All modules generate rich, interactive HTML dashboards:
# Clone the repository
git clone https://github.com/TimWood0x10/memscope-rs
cd memscope-rs
# Try each module:
cargo run --example basic_usage # Single-threaded
cargo run --example complex_multithread_showcase # Multi-threaded
cargo run --example comprehensive_async_showcase # Async
cargo run --example enhanced_30_thread_demo # Hybrid
# Generate HTML reports:
make html DIR=MemoryAnalysis BASE=basic_usage
- track_var! macros with examples
- track_var! macro to track variables without breaking your existing code (we promise!)
- Rc<T>, Arc<T>, Box<T> - because Rust loves its smart pointers
# Basic usage demonstration
cargo run --example basic_usage
# Comprehensive memory analysis showcase
cargo run --example comprehensive_memory_analysis
# Complex lifecycle showcase
cargo run --example comprehensive_binary_to_html_demo
# Memory stress test (warning: may stress your computer too)
cargo run --example heavy_workload_test
# Multi-threaded stress test
cargo run --example multithreaded_stress_test
# Performance test
cargo run --example performance_benchmark_demo
# Realistic usage with extensions
cargo run --example realistic_usage_with_extensions
# Large-scale binary comparison
cargo run --example large_scale_binary_comparison
# Unsafe/FFI safety demo (for the brave souls)
cargo run --example unsafe_ffi_demo
# Async basic test
cargo run --example async_basic_test
# Simple binary test
cargo run --example simple_binary_test
# JSON export test
cargo run --example test_binary_to_json
use memscope_rs::{init, track_var, get_global_tracker};
fn main() {
// Initialize memory tracking (don't forget this, or nothing will work!)
init();
// Create and track variables
let my_vec = vec![1, 2, 3, 4, 5];
track_var!(my_vec);
let my_string = String::from("Hello, memscope!");
track_var!(my_string);
let my_box = Box::new(42); // The answer to everything
track_var!(my_box);
// Variables work normally (tracking is invisible, like a good spy)
println!("Vector: {:?}", my_vec);
println!("String: {}", my_string);
println!("Box: {}", *my_box);
// Export analysis results
let tracker = get_global_tracker();
if let Err(e) = tracker.export_to_json("my_analysis") {
eprintln!("Export failed: {} (this shouldn't happen, but computers...)", e);
}
}
use std::rc::Rc;
use std::sync::Arc;
// Track reference counted pointers
let rc_data = Rc::new(vec![1, 2, 3]);
track_var!(rc_data);
// Track atomic reference counted pointers (for when you need thread safety)
let arc_data = Arc::new(String::from("shared data"));
track_var!(arc_data);
// Cloning operations are also tracked (watch the ref count go up!)
let rc_clone = Rc::clone(&rc_data);
track_var!(rc_clone);
use memscope_rs::ExportOptions;
let options = ExportOptions::new()
.include_system_allocations(false) // Fast mode (recommended)
.verbose_logging(true) // For when you want ALL the details
.buffer_size(128 * 1024); // 128KB buffer (because bigger is better, right?)
if let Err(e) = tracker.export_to_json_with_options("detailed_analysis", options) {
eprintln!("Export failed: {}", e);
}
# Clone and setup
git clone https://github.com/TimWood0x10/memscope-rs
cd memscope-rs
# Build and test basic functionality
make build
make run-basic
# Generate HTML report
make html DIR=MemoryAnalysis/basic_usage BASE=user OUTPUT=memory_report.html VERBOSE=1
open ./MemoryAnalysis/basic_usage/memory_report.html
# Fast benchmarks (recommended)
make benchmark-main # ~2 minutes
# Comprehensive benchmarks
make run-benchmark # Full performance analysis
make run-core-performance # Core system evaluation
make run-simple-benchmark # Quick validation
# Stress testing
cargo run --example heavy_workload_test
cargo run --example multithreaded_stress_test
# Clone the repository
git clone https://github.com/TimWood0x10/memscope-rs.git
cd memscope-rs
# Build the project (grab a coffee, this might take a moment)
make build
# Run tests
cargo test
# Try an example
make run-basic
├── complex_lifecycle_snapshot_complex_types.json
├── complex_lifecycle_snapshot_lifetime.json
├── complex_lifecycle_snapshot_memory_analysis.json
├── complex_lifecycle_snapshot_performance.json
├── complex_lifecycle_snapshot_security_violations.json
└── complex_lifecycle_snapshot_unsafe_ffi.json
# Export to different formats
make html DIR=MemoryAnalysis/basic_usage OUTPUT=memory_report.html # JSON → HTML
cargo run --example comprehensive_binary_to_html_demo # Binary → HTML
cargo run --example large_scale_binary_comparison # Binary format comparison demo
# View generated dashboards
open memory_report.html # From JSON conversion
open comprehensive_report.html # From binary conversion
# You can view the HTML interface examples in ./images/*.html
# Add to your project
cargo add memscope-rs
# Or manually add to Cargo.toml
[dependencies]
memscope-rs = "0.1.5"
# Or with the long-form dependency syntax
[dependencies]
memscope-rs = { version = "0.1.5" }
Available features:
- backtrace - Enable stack trace collection (adds overhead, but gives you the full story)
- derive - Enable derive macro support (experimental, use at your own risk)
- tracking-allocator - Custom allocator support (enabled by default)
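A hedged Cargo.toml sketch for enabling an optional feature (the version string mirrors the example above; adjust it to the release you actually use):

```toml
[dependencies]
# backtrace adds stack-trace collection at extra runtime cost;
# tracking-allocator is already enabled by default per the list above.
memscope-rs = { version = "0.1.5", features = ["backtrace"] }
```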
After running programs, you'll find analysis results in the MemoryAnalysis/ directory:
├── basic_usage_memory_analysis.json // comprehensive memory data
├── basic_usage_lifetime.json // variable lifetime info
├── basic_usage_performance.json // performance metrics
├── basic_usage_security_violations.json // security analysis
├── basic_usage_unsafe_ffi.json // unsafe & FFI info
├── basic_usage_complex_types.json // complex types data
└── memory_report.html // interactive dashboard
The generated dashboard.html provides a rich, interactive experience.
To view the dashboard:
# output html
make html DIR=YOUR_JSON_DIR BASE=complex_lifecycle OUTPUT=improved_tracking_final.html
# After running your tracked program
open MemoryAnalysis/your_analysis_name/dashboard.html
# Or simply double-click the HTML file in your file manager
| Feature | memscope-rs | Valgrind | Heaptrack | jemalloc |
|---|---|---|---|---|
| Rust Native | ✅ | ❌ | ❌ | ⚠️ |
| Variable Names | ✅ | ❌ | ❌ | ❌ |
| Smart Pointer Analysis | ✅ | ⚠️ | ⚠️ | ❌ |
| Visual Reports | ✅ | ⚠️ | ✅ | ❌ |
| Production Ready | ⚠️ | ✅ | ✅ | ✅ |
| Interactive Timeline | ✅ | ❌ | ⚠️ | ❌ |
| Real-time Tracking | ⚠️ | ❌ | ❌ | ✅ |
| Low Overhead | ⚠️ | ⚠️ | ✅ | ✅ |
| Mature Ecosystem | ❌ | ✅ | ✅ | ✅ |
memscope-rs (this project)
Valgrind
Heaptrack
jemalloc
Good scenarios:
Use with caution:
Based on actual testing (not marketing numbers):
- Very large datasets: performance may degrade with >1M allocations
- High-frequency systems: monitor the performance impact in your specific use case
- Production environments: test in staging before deployment
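Given those caveats, one conservative pattern is to compile tracking in only for debug or staging builds; a minimal sketch using init() and track_var! from the examples above (the cfg(debug_assertions) gating is a suggestion, not a documented memscope-rs mode):

```rust
fn main() {
    // Pay the tracking cost only in debug builds.
    #[cfg(debug_assertions)]
    memscope_rs::init();

    let data = vec![1u32; 1024];
    #[cfg(debug_assertions)]
    memscope_rs::track_var!(data);

    // ... application logic continues unchanged ...
    println!("processed {} items", data.len());
}
```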
The project uses a modular design:
Performance optimization:
# Use fast mode for reduced overhead
export MEMSCOPE_FAST_MODE=1
# Or disable expensive operations for large datasets
export MEMSCOPE_DISABLE_ANALYSIS=1
Export fails with large datasets:
// Use smaller buffer or exclude system allocations
let options = ExportOptions::new()
.include_system_allocations(false)
.buffer_size(32 * 1024);
High memory usage:
# Disable backtrace collection
cargo run --no-default-features --features tracking-allocator
Permission errors on output:
# Ensure write permissions
mkdir -p MemoryAnalysis
chmod 755 MemoryAnalysis
Platform-specific configuration:
# For optimal performance on different platforms
export MEMSCOPE_PLATFORM_OPTIMIZED=1
This is experimental software, but we welcome contributions! Please:
# Development workflow
git clone https://github.com/TimWood0x10/memscope-rs
cd memscope-rs
make build
make run-basic
Licensed under either of:
at your option.
Add to your Cargo.toml:
[dependencies]
memscope-rs = "0.1.6"
# Optional features
[features]
default = ["parking-lot"]
derive = ["memscope-rs/derive"] # Derive macros
enhanced-tracking = ["memscope-rs/enhanced-tracking"] # Advanced analysis
system-metrics = ["memscope-rs/system-metrics"] # System monitoring
memscope-rs includes powerful command-line tools:
# Analyze existing memory data
cargo run --bin memscope-analyze -- analysis.json
# Generate comprehensive reports
cargo run --bin memscope-report -- --input analysis.memscope --format html
# Run performance benchmarks
cargo run --bin memscope-benchmark -- --threads 50 --allocations 10000
I need your feedback! While memscope-rs has comprehensive functionality, I believe it can be even better with your help.
I've put tremendous effort into testing, but complex software inevitably has edge cases I haven't encountered. Your real-world usage scenarios are invaluable:
Every issue report helps make memscope-rs more robust for the entire Rust community. I'm committed to:
Together, we can build the best memory analysis tool for Rust! 🦀
We welcome contributions! Please see our Contributing Guide for details.
make test # Run all tests
make check # Check code quality
make benchmark # Run performance benchmarks
This project is licensed under either of:
*Made with ❤️ and 🦀 by developers who care about memory (maybe too much)*