| Crates.io | tensor_frame |
| lib.rs | tensor_frame |
| version | 0.0.3-alpha |
| created_at | 2025-07-02 01:41:34.706713+00 |
| updated_at | 2025-08-20 21:13:12.986184+00 |
| description | A PyTorch-like tensor library for Rust with CPU, WGPU, and CUDA backends |
| homepage | |
| repository | https://github.com/TrainPioneers/Tensor-Frame |
| max_upload_size | |
| id | 1734342 |
| size | 451,313 |
A high-performance, PyTorch-like tensor library for Rust with support for multiple computational backends.
The most up-to-date documentation can be found here: docs
Add to your Cargo.toml:

```toml
[dependencies]
tensor_frame = "0.0.3-alpha"

# Or, to enable a GPU backend instead:
tensor_frame = { version = "0.0.3-alpha", features = ["wgpu"] }
```
Basic usage:

```rust
use tensor_frame::Tensor;

// Create tensors (the best available backend is selected automatically)
let a = Tensor::from_vec(vec![1.0, 2.0, 3.0, 4.0], vec![2, 2])?;
let b = Tensor::from_vec(vec![10.0, 20.0], vec![2, 1])?;

// All arithmetic operations (+, -, *, /) support broadcasting
let c = (a + b)?; // Broadcasting: [2, 2] + [2, 1] -> [2, 2]
let d = (c * b)?; // Element-wise multiplication with broadcasting
let sum = d.sum(None)?;

println!("Result: {:?}", sum.to_vec()?);
```
features = ["wgpu"]features = ["cuda"]See the examples directory for more detailed usage:
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
Licensed under either of
at your option.