| Crates.io | torsh-utils |
| lib.rs | torsh-utils |
| version | 0.1.0-alpha.2 |
| created_at | 2025-09-30 03:05:37.420042+00 |
| updated_at | 2025-12-22 05:10:53.260568+00 |
| description | Utility functions and helpers for ToRSh |
| homepage | https://github.com/cool-japan/torsh/ |
| repository | https://github.com/cool-japan/torsh/ |
| max_upload_size | |
| id | 1860506 |
| size | 572,699 |
Comprehensive utilities and tools for the ToRSh deep learning framework.
This crate provides essential utilities for model development, debugging, optimization, and deployment in the ToRSh ecosystem. It includes benchmarking tools, profiling utilities, TensorBoard integration, mobile optimization, and development environment management.
The crate is organized into the following modules:

- `benchmark`: Model benchmarking and performance analysis
- `bottleneck`: Performance bottleneck detection and profiling
- `tensorboard`: TensorBoard logging and visualization
- `mobile_optimizer`: Mobile deployment optimization
- `collect_env`: Environment and system information collection
- `cpp_extension`: C++ extension building utilities
- `model_zoo`: Model repository management

```rust
use torsh_utils::prelude::*;
use torsh_nn::Module;

// Benchmark a model
let config = BenchmarkConfig {
    batch_size: 32,
    warmup_iterations: 10,
    benchmark_iterations: 100,
    measure_memory: true,
    device: DeviceType::Cpu,
};

let result = benchmark_model(&model, &[1, 3, 224, 224], config)?;
println!(
    "Average forward time: {:.2}ms",
    result.avg_forward_time.as_secs_f64() * 1000.0
);
```
```rust
use torsh_utils::prelude::*;

// Profile model bottlenecks
let report = profile_bottlenecks(
    &model,
    &[32, 3, 224, 224], // input shape
    100,                // iterations
    DeviceType::Cpu,
)?;

println!("Bottleneck report:");
for (op, time) in report.operation_times {
    println!("  {}: {:.2}ms", op, time.as_secs_f64() * 1000.0);
}
```
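Under the hood, a per-operation report like this amounts to summing sampled durations by operation name. A self-contained sketch of that aggregation (the `accumulate` helper is illustrative, not the crate's API):

```rust
use std::collections::HashMap;
use std::time::Duration;

// Illustrative aggregation behind a bottleneck report: sum the sampled
// duration of each operation across iterations.
fn accumulate(samples: &[(&str, Duration)]) -> HashMap<String, Duration> {
    let mut totals: HashMap<String, Duration> = HashMap::new();
    for (op, t) in samples {
        // `or_default()` starts each operation at a zero duration.
        *totals.entry((*op).to_string()).or_default() += *t;
    }
    totals
}

fn main() {
    let totals = accumulate(&[
        ("conv2d", Duration::from_millis(3)),
        ("conv2d", Duration::from_millis(2)),
        ("relu", Duration::from_millis(1)),
    ]);
    for (op, time) in &totals {
        println!("{}: {:.2}ms", op, time.as_secs_f64() * 1000.0);
    }
}
```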
```rust
use torsh_utils::prelude::*;

// Create TensorBoard writer
let mut writer = SummaryWriter::new("./logs")?;

// Log scalars
writer.add_scalar("train/loss", 0.5, 100)?;
writer.add_scalar("train/accuracy", 0.95, 100)?;

// Log histograms
let weights: Tensor<f32> = model.get_parameter("linear.weight")?;
writer.add_histogram("weights/linear", &weights, 100)?;

writer.close()?;
```
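Conceptually, each `add_scalar` call appends a (tag, value, step) record to the log. TensorBoard's real on-disk format is a protobuf event file, but the shape of the data can be sketched with a simple in-memory log (`ScalarLog` is a hypothetical stand-in, not the crate's `SummaryWriter`):

```rust
use std::fmt::Write;

// Hypothetical stand-in for a summary writer: a scalar log is logically
// just a sequence of (tag, value, step) records.
struct ScalarLog {
    rows: String,
}

impl ScalarLog {
    fn new() -> Self {
        Self { rows: String::from("tag,value,step\n") }
    }

    fn add_scalar(&mut self, tag: &str, value: f64, step: u64) {
        // Writing into a String cannot fail, so unwrap is safe here.
        writeln!(self.rows, "{},{},{}", tag, value, step).unwrap();
    }
}

fn main() {
    let mut log = ScalarLog::new();
    log.add_scalar("train/loss", 0.5, 100);
    log.add_scalar("train/accuracy", 0.95, 100);
    print!("{}", log.rows);
}
```

The step index is what lets TensorBoard align different tags on a shared x-axis.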
```rust
use torsh_utils::prelude::*;

// Optimize model for mobile
let config = MobileOptimizerConfig {
    backend: MobileBackend::CoreML,
    export_format: ExportFormat::TorchScript,
    quantize: true,
    quantization_bits: 8,
    optimize_for_inference: true,
    remove_dropout: true,
    fold_batch_norm: true,
};

let optimized_model = optimize_for_mobile(&model, config)?;
```
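A setting like `quantization_bits: 8` typically implies affine quantization: f32 values in a known range are mapped onto u8 via a scale and zero-point, so that `q = round(x / scale) + zero_point`. A self-contained sketch of that math (these helpers are illustrative, not the crate's internals):

```rust
// Illustrative affine 8-bit quantization: map f32 values in [min, max]
// onto u8 with a scale and zero-point.
fn quantize(x: &[f32], min: f32, max: f32) -> (Vec<u8>, f32, i32) {
    // 255 representable steps between min and max.
    let scale = (max - min) / 255.0;
    // zero_point is the u8 code that represents the real value 0.0.
    let zero_point = (-min / scale).round() as i32;
    let q = x
        .iter()
        .map(|&v| ((v / scale).round() as i32 + zero_point).clamp(0, 255) as u8)
        .collect();
    (q, scale, zero_point)
}

fn dequantize(q: u8, scale: f32, zero_point: i32) -> f32 {
    (q as i32 - zero_point) as f32 * scale
}

fn main() {
    let (q, scale, zp) = quantize(&[-0.5, 0.0, 0.5], -1.0, 1.0);
    for (orig, quant) in [-0.5f32, 0.0, 0.5].iter().zip(&q) {
        println!("{} -> {} -> {}", orig, quant, dequantize(*quant, scale, zp));
    }
}
```

The round trip loses at most half a step (scale / 2), which is the accuracy-vs-size trade quantized mobile deployment makes.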
```rust
use torsh_utils::prelude::*;

// Collect environment information
let env_info = collect_env()?;
println!("ToRSh version: {}", env_info.torsh_version);
println!("Rust version: {}", env_info.rust_version);
println!("Available devices: {:?}", env_info.available_devices);
```
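Part of this kind of report can be gathered from the standard library alone. A minimal sketch (the `EnvInfo` struct and `collect_env_sketch` helper are hypothetical; fields like the framework version would come from the crate itself):

```rust
// Hypothetical sketch of environment collection using only std.
struct EnvInfo {
    os: String,
    arch: String,
    num_cpus: usize,
}

fn collect_env_sketch() -> EnvInfo {
    EnvInfo {
        os: std::env::consts::OS.to_string(),
        arch: std::env::consts::ARCH.to_string(),
        // available_parallelism can fail on exotic platforms; fall back to 1.
        num_cpus: std::thread::available_parallelism()
            .map(|n| n.get())
            .unwrap_or(1),
    }
}

fn main() {
    let info = collect_env_sketch();
    println!("OS: {}", info.os);
    println!("Arch: {}", info.arch);
    println!("CPUs: {}", info.num_cpus);
}
```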
Feature flags:

- `std`: Standard library support
- `tensorboard`: TensorBoard integration
- `profiling`: Advanced profiling capabilities
- `mobile`: Mobile optimization tools
- `cpp-extensions`: C++ extension building

Dependencies:

- `torsh-core`: Core types and device abstraction
- `torsh-tensor`: Tensor operations
- `torsh-nn`: Neural network modules
- `torsh-profiler`: Performance profiling
- `reqwest`: HTTP client for model downloads
- `prometheus`: Metrics collection
- `sysinfo`: System information gathering

torsh-utils is optimized for:
Designed to integrate seamlessly with:
See the examples/ directory for: