| Crates.io | system-analysis |
| lib.rs | system-analysis |
| version | 0.2.0 |
| created_at | 2025-07-13 17:58:03.300462+00 |
| updated_at | 2025-07-28 22:54:17.984319+00 |
| description | A comprehensive Rust library for analyzing system capabilities, workload requirements, and optimal resource allocation |
| homepage | https://github.com/ciresnave/system-analysis |
| repository | https://github.com/ciresnave/system-analysis |
| max_upload_size | |
| id | 1750665 |
| size | 551,568 |
A comprehensive Rust library for analyzing system capabilities, workload requirements, and optimal resource allocation. This crate provides tools for determining if a system can run specific workloads, scoring hardware capabilities, and recommending optimal configurations with a focus on AI/ML workloads.
Add this to your Cargo.toml:
[dependencies]
system-analysis = "0.2.0"
tokio = { version = "1.0", features = ["full"] }
use system_analysis::SystemAnalyzer;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut analyzer = SystemAnalyzer::new();
let system_profile = analyzer.analyze_system().await?;
println!("System Overall Score: {}/10", system_profile.overall_score());
println!("CPU Score: {}/10", system_profile.cpu_score());
println!("GPU Score: {}/10", system_profile.gpu_score());
println!("Memory Score: {}/10", system_profile.memory_score());
Ok(())
}
use system_analysis::{SystemAnalyzer, WorkloadRequirements};
use system_analysis::workloads::{AIInferenceWorkload, ModelParameters};
use system_analysis::resources::{ResourceRequirement, ResourceType, CapabilityLevel};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut analyzer = SystemAnalyzer::new();
let system_profile = analyzer.analyze_system().await?;
// Define an AI model workload
let model_params = ModelParameters::new()
.parameters(7_000_000_000) // 7B parameter model
.memory_required(16.0) // 16GB memory requirement
.compute_required(6.0) // High compute requirement
.prefer_gpu(true); // Prefer GPU acceleration
let ai_workload = AIInferenceWorkload::new(model_params);
let mut workload_requirements = WorkloadRequirements::new("llama2-7b-inference");
workload_requirements.add_resource_requirement(
ResourceRequirement::new(ResourceType::Memory)
.minimum_gb(16.0)
.recommended_gb(24.0)
);
workload_requirements.set_workload(Box::new(ai_workload));
// Check compatibility
let compatibility = analyzer.check_compatibility(&system_profile, &workload_requirements)?;
if compatibility.is_compatible() {
println!("✓ System can run the AI model!");
println!("Performance estimate: {}", compatibility.performance_estimate());
} else {
println!("✗ System cannot run the AI model");
for missing in compatibility.missing_requirements() {
println!("Missing: {}", missing.resource_type());
}
}
Ok(())
}
The SystemProfile represents a comprehensive analysis of system capabilities:
let system_profile = analyzer.analyze_system().await?;
// Access individual scores
println!("CPU Score: {}", system_profile.cpu_score());
println!("GPU Score: {}", system_profile.gpu_score());
println!("Memory Score: {}", system_profile.memory_score());
// Get AI capabilities assessment
let ai_capabilities = system_profile.ai_capabilities();
println!("AI Inference Level: {}", ai_capabilities.inference_level());
Define what your workload needs to run effectively:
let mut requirements = WorkloadRequirements::new("my-workload");
// Add resource requirements
requirements.add_resource_requirement(
ResourceRequirement::new(ResourceType::Memory)
.minimum_gb(8.0)
.recommended_gb(16.0)
.critical()
);
requirements.add_resource_requirement(
ResourceRequirement::new(ResourceType::GPU)
.minimum_level(CapabilityLevel::High)
.preferred_vendor(Some("NVIDIA"))
);
Check if a system can run your workload:
let compatibility = analyzer.check_compatibility(&system_profile, &requirements)?;
println!("Compatible: {}", compatibility.is_compatible());
println!("Score: {}/10", compatibility.score());
println!("Performance: {}", compatibility.performance_estimate());
// Get missing requirements
for missing in compatibility.missing_requirements() {
println!("Need {} {}, have {}",
missing.resource_type(),
missing.required(),
missing.available()
);
}
Estimate how your workload will use system resources:
let utilization = analyzer.predict_utilization(&system_profile, &requirements)?;
println!("Expected CPU usage: {}%", utilization.cpu_percent());
println!("Expected GPU usage: {}%", utilization.gpu_percent());
println!("Expected memory usage: {}%", utilization.memory_percent());
Get specific recommendations for improving system performance:
let upgrades = analyzer.recommend_upgrades(&system_profile, &requirements)?;
for upgrade in upgrades {
println!("Upgrade {}: {}",
upgrade.resource_type(),
upgrade.recommendation()
);
println!("Expected improvement: {}", upgrade.estimated_improvement());
}
Implement the Workload trait for custom workload types:
use std::collections::HashMap;
use system_analysis::workloads::{Workload, WorkloadType, PerformanceCharacteristics};
use system_analysis::resources::{ResourceRequirement, ResourceType};
struct CustomWorkload {
// Your workload parameters
}
impl Workload for CustomWorkload {
fn workload_type(&self) -> WorkloadType {
WorkloadType::Custom("my-workload".to_string())
}
fn resource_requirements(&self) -> Vec<ResourceRequirement> {
// Define your resource requirements
vec![]
}
fn estimated_utilization(&self) -> HashMap<ResourceType, f64> {
// Define expected resource utilization
HashMap::new()
}
// ... implement other required methods
}
Enable GPU detection for NVIDIA GPUs:
[dependencies]
system-analysis = { version = "0.2.0", features = ["gpu-detection"] }
Run performance benchmarks to validate system capabilities:
use system_analysis::utils::BenchmarkRunner;
use std::time::Duration;
let benchmark = BenchmarkRunner::new(Duration::from_secs(5), 100);
let cpu_result = benchmark.run_cpu_benchmark()?;
let memory_result = benchmark.run_memory_benchmark()?;
println!("CPU Benchmark Score: {}", cpu_result.score);
println!("Memory Benchmark Score: {}", memory_result.score);
The examples/ directory contains comprehensive examples:
basic_analysis.rs: Basic system analysis and workload compatibility
ai_workload_analysis.rs: AI/ML workload analysis with multiple models
Run examples with:
cargo run --example basic_analysis
cargo run --example ai_workload_analysis
Customize the analyzer behavior:
use system_analysis::analyzer::AnalyzerConfig;
let config = AnalyzerConfig {
enable_gpu_detection: true,
enable_detailed_cpu_analysis: true,
enable_network_testing: false,
cache_duration_seconds: 300,
enable_benchmarking: false,
benchmark_timeout_seconds: 30,
};
let analyzer = SystemAnalyzer::with_config(config);
SystemAnalyzer: Main analyzer for system capabilities
SystemProfile: Comprehensive system analysis results
WorkloadRequirements: Specification of workload needs
CompatibilityResult: Results of compatibility analysis
ResourceUtilization: Resource usage predictions
UpgradeRecommendation: Hardware upgrade suggestions
ResourceType: CPU, GPU, Memory, Storage, Network
CapabilityLevel: VeryLow, Low, Medium, High, VeryHigh, Exceptional
ResourceAmount: Different ways to specify resource amounts
AIInferenceWorkload: AI model inference workloads
ModelParameters: AI model parameter specifications
Workload trait
The crate uses the SystemAnalysisError enum for comprehensive error handling:
use system_analysis::error::SystemAnalysisError;
match result {
Ok(profile) => println!("Analysis successful: {}", profile.overall_score()),
Err(SystemAnalysisError::AnalysisError { message }) => {
eprintln!("Analysis failed: {}", message);
}
Err(SystemAnalysisError::SystemInfoError { source }) => {
eprintln!("System information error: {}", source);
}
Err(e) => eprintln!("Other error: {}", e),
}
use system_analysis::{
workloads::{Workload, WorkloadType},
resources::{ResourceRequirement, ResourceType, CapabilityLevel},
};
struct CustomWorkload {
name: String,
cpu_intensive: bool,
memory_gb: f64,
}
impl Workload for CustomWorkload {
fn workload_type(&self) -> WorkloadType {
WorkloadType::Custom(self.name.clone())
}
fn resource_requirements(&self) -> Vec<ResourceRequirement> {
let mut requirements = Vec::new();
requirements.push(
ResourceRequirement::new(ResourceType::Memory)
.minimum_gb(self.memory_gb)
);
if self.cpu_intensive {
requirements.push(
ResourceRequirement::new(ResourceType::CPU)
.minimum_level(CapabilityLevel::High)
.cores(8)
);
}
requirements
}
fn validate(&self) -> system_analysis::Result<()> {
if self.memory_gb <= 0.0 {
return Err(system_analysis::SystemAnalysisError::invalid_workload(
"Memory requirement must be positive"
));
}
Ok(())
}
fn clone_workload(&self) -> Box<dyn Workload> {
Box::new(CustomWorkload {
name: self.name.clone(),
cpu_intensive: self.cpu_intensive,
memory_gb: self.memory_gb,
})
}
}
use system_analysis::{SystemAnalyzer, capabilities::CapabilityProfile};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut analyzer = SystemAnalyzer::new();
let system_profile = analyzer.analyze_system().await?;
let capabilities = analyzer.get_capability_profile().await?;
// CPU Analysis
println!("CPU: {} ({} cores)",
capabilities.cpu.brand,
capabilities.cpu.physical_cores
);
println!("CPU AI Score: {}/10", capabilities.cpu.ai_score);
// GPU Analysis
for (i, gpu) in capabilities.gpu.iter().enumerate() {
println!("GPU {}: {} ({}GB VRAM)",
i + 1,
gpu.name,
gpu.memory_gb
);
println!("GPU AI Score: {}/10", gpu.ai_score);
}
// Memory Analysis
println!("Memory: {:.1}GB {} @ {}MHz",
capabilities.memory.total_gb,
capabilities.memory.memory_type,
capabilities.memory.frequency_mhz.unwrap_or(0)
);
Ok(())
}
use system_analysis::{SystemAnalyzer, utils::BenchmarkRunner};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut analyzer = SystemAnalyzer::new();
let mut benchmark_runner = BenchmarkRunner::new();
// Run CPU benchmark
let cpu_score = benchmark_runner.benchmark_cpu().await?;
println!("CPU Benchmark Score: {}/10", cpu_score);
// Run memory benchmark
let memory_score = benchmark_runner.benchmark_memory().await?;
println!("Memory Benchmark Score: {}/10", memory_score);
// Run comprehensive system benchmark
let overall_score = benchmark_runner.benchmark_system().await?;
println!("Overall System Score: {}/10", overall_score);
Ok(())
}
The crate is organized into several key modules:
analyzer: Main system analysis logic and orchestration
capabilities: Hardware capability assessment and scoring
workloads: Workload definitions and AI/ML specializations
resources: Resource management and requirement modeling
types: Core data types and structures
utils: Utility functions and helper tools
error: Comprehensive error handling
Run the comprehensive test suite:
# Run all tests
cargo test
# Run tests with output
cargo test -- --nocapture
# Run specific test modules
cargo test integration_tests
cargo test edge_case_tests
# Run benchmarks
cargo bench
The crate includes comprehensive benchmarks:
# Run all benchmarks
cargo bench
# Run specific benchmarks
cargo bench -- system_analysis
cargo bench -- workload_creation
cargo bench -- compatibility_checking
# Generate HTML benchmark reports
cargo bench -- --output-format html
Enable optional features based on your needs:
[dependencies]
system-analysis = { version = "0.2.0", features = ["gpu-detection"] }
Available features:
gpu-detection: Enable NVIDIA GPU detection and CUDA capabilities
advanced-benchmarks: Include additional benchmarking tools
ml-models: Extended AI/ML model support and optimizations
We welcome contributions! Please see our Contributing Guidelines for details.
git clone https://github.com/ciresnave/system-analysis.git
cd system-analysis
cargo build
cargo test
This project is licensed under either of the Apache License, Version 2.0, or the MIT License, at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
For more detailed documentation, please visit docs.rs/system-analysis.