| Crates.io | tenflowers |
| lib.rs | tenflowers |
| version | 0.1.0-alpha.2 |
| created_at | 2025-06-30 11:58:10.270422+00 |
| updated_at | 2025-12-23 07:12:57.889031+00 |
| description | Pure Rust implementation of TensorFlow - A comprehensive deep learning framework |
| homepage | https://github.com/cool-japan/tenflowers |
| repository | https://github.com/cool-japan/tenflowers |
| max_upload_size | |
| id | 1731759 |
| size | 203,953 |
A pure Rust implementation of TensorFlow, providing a comprehensive deep learning framework with Rust's safety and performance guarantees.
TenfloweRS is the main convenience crate that re-exports all TenfloweRS subcrates, providing a unified API for deep learning in Rust built on the robust SciRS2 ecosystem.
Add TenfloweRS to your Cargo.toml:
```toml
[dependencies]
tenflowers = "0.1.0-alpha.2"
```
Basic tensor creation and arithmetic:

```rust
use tenflowers::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create tensors
    let a = Tensor::<f32>::zeros(&[2, 3]);
    let b = Tensor::<f32>::ones(&[2, 3]);

    // Arithmetic operations
    let c = ops::add(&a, &b)?;

    // Matrix multiplication
    let x = Tensor::<f32>::ones(&[2, 3]);
    let y = Tensor::<f32>::ones(&[3, 4]);
    let z = ops::matmul(&x, &y)?;

    Ok(())
}
```
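For reference, matmul requires the inner dimensions to agree: the [2, 3] and [3, 4] tensors above share the inner dimension 3 and produce a [2, 4] result. A short sketch restating that rule, using only the calls already shown above; the commented note about error behavior is an assumption, not something documented in this README:

```rust
use tenflowers::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Inner dimensions must match: [2, 3] x [3, 4] -> [2, 4].
    let x = Tensor::<f32>::ones(&[2, 3]);
    let y = Tensor::<f32>::ones(&[3, 4]);
    let _z = ops::matmul(&x, &y)?; // result has shape [2, 4]

    // A [2, 3] x [2, 3] product would be shape-incompatible; whether that is
    // reported as an error through `?` is an assumption, not documented here.
    Ok(())
}
```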
Building and running a simple feedforward network:

```rust
use tenflowers::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a simple feedforward network
    let mut model = Sequential::new();
    model.add(Dense::new(784, 128)?);
    model.add_activation(ActivationFunction::ReLU);
    model.add(Dense::new(128, 10)?);
    model.add_activation(ActivationFunction::Softmax);

    // Forward pass
    let input = Tensor::zeros(&[32, 784]);
    let output = model.forward(&input)?;

    Ok(())
}
```
Training a model with the quick_train helper:

```rust
use tenflowers::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut model = Sequential::new();
    model.add(Dense::new(10, 64)?);
    model.add(Dense::new(64, 3)?);

    let x_train = Tensor::zeros(&[100, 10]);
    let y_train = Tensor::zeros(&[100, 3]);

    // Quick training
    let results = quick_train(
        model,
        &x_train,
        &y_train,
        Box::new(SGD::new(0.01)),
        categorical_cross_entropy,
        10, // epochs
        32, // batch_size
    )?;

    Ok(())
}
```
TenfloweRS provides several optional features:
- `std`: Standard library support
- `parallel`: Parallel execution via Rayon
- `gpu`: GPU acceleration via WGPU (Metal, Vulkan, DirectX, WebGPU)
- `cuda`: CUDA support (Linux/Windows only)
- `cudnn`: cuDNN support (requires CUDA)
- `opencl`: OpenCL support
- `metal`: Metal support (macOS only)
- `rocm`: ROCm support (AMD GPUs)
- `nccl`: NCCL for distributed GPU training
- `blas`: Generic BLAS support
- `blas-openblas`: OpenBLAS acceleration
- `blas-mkl`: Intel MKL acceleration
- `blas-accelerate`: Apple Accelerate framework (macOS only)
- `simd`: SIMD vectorization optimizations
- `serialize`: Serialization support (JSON, MessagePack)
- `compression`: Compression support for checkpoints
- `onnx`: ONNX model import/export
- `wasm`: WebAssembly support
- `autograd`: Automatic differentiation support
- `benchmark`: Benchmarking utilities
- `python`: Python bindings via PyO3
- `full`: Enable most features (gpu, blas-openblas, simd, serialize, compression, onnx, autograd, python)

For example, to enable GPU acceleration:

```toml
[dependencies]
tenflowers = { version = "0.1.0-alpha.2", features = ["gpu"] }
```
Or to enable most features at once:

```toml
[dependencies]
tenflowers = { version = "0.1.0-alpha.2", features = ["full"] }
```
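Feature flags can also be combined in a single dependency line. The snippet below is only a sketch: the chosen combination is arbitrary, the feature names come from the list above, and whether tenflowers supports disabling default features this way is an assumption rather than something stated in this README.

```toml
[dependencies]
# Sketch only: arbitrary feature combination taken from the list above;
# support for `default-features = false` here is an assumption.
tenflowers = { version = "0.1.0-alpha.2", default-features = false, features = ["std", "blas-openblas", "serialize"] }
```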
TenfloweRS is organized into focused subcrates; this meta crate re-exports all of their public APIs for convenience.
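In practice that means a downstream project only needs a dependency on tenflowers itself; the prelude used in the examples above pulls in tensors, ops, and layers together. A minimal sketch reusing only names that appear in those examples:

```rust
use tenflowers::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Tensor creation and ops come through the same prelude...
    let sum = ops::add(&Tensor::<f32>::ones(&[4, 8]), &Tensor::<f32>::ones(&[4, 8]))?;

    // ...as do the neural-network building blocks.
    let mut model = Sequential::new();
    model.add(Dense::new(8, 2)?);
    let _out = model.forward(&sum)?;
    Ok(())
}
```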
TenfloweRS is built on the SciRS2 scientific computing ecosystem:
```text
TenfloweRS (Deep Learning Framework - TensorFlow-compatible API)
    ↓ builds upon
OptiRS (ML Optimization Specialization)
    ↓ builds upon
SciRS2 (Scientific Computing Foundation)
    ↓ builds upon
ndarray, num-traits, etc. (Core Rust Scientific Stack)
```
This architecture provides direct access to the underlying ecosystem crates: scirs2-core, scirs2-autograd, scirs2-neural, and optirs.

See the examples directory for more comprehensive examples.
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
Licensed under either of:
at your option.
TenfloweRS is currently in alpha (v0.1.0-alpha.2). APIs may change as development continues.