| Crates.io | torsh |
| lib.rs | torsh |
| version | 0.1.0-alpha.2 |
| created_at | 2025-06-30 12:06:33.776276+00 |
| updated_at | 2025-12-22 05:48:22.508201+00 |
| description | A blazingly fast, production-ready deep learning framework written in pure Rust |
| homepage | https://github.com/cool-japan/torsh/ |
| repository | https://github.com/cool-japan/torsh/ |
| max_upload_size | |
| id | 1731768 |
| size | 424,121 |
The main crate for ToRSh - A blazingly fast, production-ready deep learning framework written in pure Rust.
This is the primary entry point for the ToRSh framework, providing convenient access to all functionality through a unified API.
Add to your Cargo.toml:

```toml
[dependencies]
torsh = "0.1.0-alpha.2"
```
```rust
use torsh::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create tensors
    let x = tensor![[1.0, 2.0], [3.0, 4.0]];
    let y = tensor![[5.0, 6.0], [7.0, 8.0]];

    // Matrix multiplication
    let z = x.matmul(&y)?;
    println!("Result: {:?}", z);

    // Automatic differentiation
    let a = tensor![2.0].requires_grad_(true);
    let b = a.pow(2.0)? + a * 3.0;
    b.backward()?;
    println!("Gradient: {:?}", a.grad()?);

    Ok(())
}
```
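The autograd result above can be sanity-checked by hand: for `b = a² + 3a`, the derivative is `db/da = 2a + 3`, which is 7 at `a = 2`. A minimal plain-Rust sketch (independent of the ToRSh API) confirms this with a central finite difference:

```rust
// b = a^2 + 3a, the same function as in the autograd example
fn f(a: f64) -> f64 {
    a.powi(2) + 3.0 * a
}

fn main() {
    let a = 2.0;
    let h = 1e-6;
    // Central finite difference approximates db/da
    let grad = (f(a + h) - f(a - h)) / (2.0 * h);
    println!("numerical gradient at a = 2: {grad:.4}"); // analytic value: 2a + 3 = 7
}
```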
Feature flags:

- `default`: Includes `std`, `nn`, `optim`, and `data`
- `std`: Standard library support (enabled by default)
- `nn`: Neural network modules
- `optim`: Optimization algorithms
- `data`: Data loading utilities
- `cuda`: CUDA backend support
- `wgpu`: WebGPU backend support
- `metal`: Metal backend support (Apple Silicon)
- `serialize`: Serialization support
- `full`: All features

The crate re-exports functionality from specialized sub-crates:
- `torsh::core`: Basic types and traits
- `torsh::tensor`: Tensor operations
- `torsh::autograd`: Automatic differentiation
- `torsh::nn`: Neural network layers
- `torsh::optim`: Optimizers
- `torsh::data`: Data loading

Similar to PyTorch's `torch.nn.functional`, ToRSh provides functional operations in the `F` namespace:
```rust
use torsh::F;

let output = F::relu(&input);
let output = F::softmax(&logits, -1)?;
```
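`F::relu` and `F::softmax` mirror their PyTorch counterparts. As a sketch of the math they compute (plain Rust over a slice, independent of the ToRSh tensor API), numerically stable softmax and ReLU look like:

```rust
// ReLU clamps negative values to zero
fn relu(x: f64) -> f64 {
    x.max(0.0)
}

// Softmax over a 1-D slice of logits, returning probabilities that sum to 1
fn softmax(logits: &[f64]) -> Vec<f64> {
    // Subtract the max before exponentiating for numerical stability
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = logits.iter().map(|&x| (x - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    println!("{:?}", softmax(&[1.0, 2.0, 3.0])); // probabilities summing to 1
    println!("{}", relu(-0.5)); // negative input clamps to 0
}
```

The max-subtraction trick does not change the result (it cancels in the ratio) but keeps `exp` from overflowing on large logits; the `-1` axis argument in `F::softmax` selects the last dimension, as in PyTorch.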
Licensed under either of
at your option.