| Crates.io | cetana |
| lib.rs | cetana |
| version | 0.0.1 |
| created_at | 2024-11-14 00:38:35.848206+00 |
| updated_at | 2025-09-18 06:12:32.086263+00 |
| description | Yet Another Neural Network Library |
| homepage | |
| repository | |
| max_upload_size | |
| id | 1447293 |
| size | 1,395,235 |
A machine learning library written in Rust, designed to help developers build intelligent applications with ease.
Cetana (चेतन) is a Sanskrit word meaning "consciousness" or "intelligence," reflecting the library's goal of bringing machine intelligence to your applications.
Cetana is a Rust-based machine learning library that provides efficient, flexible machine learning operations across multiple compute platforms, pairing a clean, type-safe API with high performance and memory safety.
| Core Features | Neural Networks | Compute Backends |
|---|---|---|
| Type-safe Tensor Operations | Linear & Convolutional Layers | CPU (Current) |
| Automatic Differentiation | Activation Functions | CUDA (Planned) |
| Model Serialization | Pooling Layers | MPS (Planned) |
| Loss Functions | Backpropagation | Vulkan (Planned) |
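The table lists automatic differentiation among the core features. As a library-independent illustration of what autodiff computes, the plain-Rust sketch below checks the analytic derivative of f(x) = x² against a central finite difference; it does not use cetana's API:

```rust
// f(x) = x^2 and its analytic derivative f'(x) = 2x.
fn f(x: f64) -> f64 {
    x * x
}

fn analytic_grad(x: f64) -> f64 {
    2.0 * x
}

// Central finite difference: (f(x + h) - f(x - h)) / 2h.
// An autodiff engine produces the same value without step-size error.
fn numeric_grad(x: f64) -> f64 {
    let h = 1e-5;
    (f(x + h) - f(x - h)) / (2.0 * h)
}

fn main() {
    let x = 3.0;
    println!("analytic: {}, numeric: {}", analytic_grad(x), numeric_grad(x));
}
```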
```rust
use cetana::tensor::{Tensor, Device};

// Create two 2x2 tensors on the CPU
let a = Tensor::new(&[1.0, 2.0, 3.0, 4.0], &[2, 2], Device::CPU)?;
let b = Tensor::new(&[5.0, 6.0, 7.0, 8.0], &[2, 2], Device::CPU)?;

// Element-wise addition and matrix multiplication
let c = a.add(&b)?;
let d = a.matmul(&b)?;
println!("Addition: {:?}", c);
println!("Matrix multiplication: {:?}", d);
```
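For reference, the values the snippet above should produce can be worked out by hand. This standalone plain-Rust check (no cetana dependency) spells out the 2x2 arithmetic:

```rust
// Row-major 2x2 matrices matching the tensors above:
// a = [[1, 2], [3, 4]], b = [[5, 6], [7, 8]].
fn add2(a: [[f64; 2]; 2], b: [[f64; 2]; 2]) -> [[f64; 2]; 2] {
    let mut c = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            c[i][j] = a[i][j] + b[i][j];
        }
    }
    c
}

fn matmul2(a: [[f64; 2]; 2], b: [[f64; 2]; 2]) -> [[f64; 2]; 2] {
    let mut c = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}

fn main() {
    let a = [[1.0, 2.0], [3.0, 4.0]];
    let b = [[5.0, 6.0], [7.0, 8.0]];
    println!("a + b = {:?}", add2(a, b)); // [[6.0, 8.0], [10.0, 12.0]]
    println!("a . b = {:?}", matmul2(a, b)); // [[19.0, 22.0], [43.0, 50.0]]
}
```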
```rust
use cetana::nn::{Sequential, Linear, ReLU, MSELoss};
use cetana::optimizer::SGD;

// Create a simple neural network: 10 -> 64 -> 1
let model = Sequential::new()
    .add(Linear::new(10, 64)?)
    .add(ReLU::new())
    .add(Linear::new(64, 1)?);

// Define the loss function and optimizer
let loss_fn = MSELoss::new();
let optimizer = SGD::new(0.01); // learning rate

// Training loop (`input` and `target` are your training tensors)
for epoch in 0..100 {
    let output = model.forward(&input)?;
    let loss = loss_fn.compute(&output, &target)?;
    model.backward(&loss)?;
    optimizer.step(&model)?;
}
```
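The `optimizer.step` call above applies the standard SGD update rule, w ← w − lr · ∇w. A minimal plain-Rust sketch of that rule, independent of cetana's actual implementation:

```rust
// One SGD update over a flat parameter slice: w <- w - lr * grad.
fn sgd_step(weights: &mut [f64], grads: &[f64], lr: f64) {
    for (w, g) in weights.iter_mut().zip(grads.iter()) {
        *w -= lr * g;
    }
}

fn main() {
    let mut w = vec![0.5, -0.25];
    let g = vec![2.0, -1.0];
    sgd_step(&mut w, &g, 0.01);
    println!("{:?}", w); // approximately [0.48, -0.24]
}
```

Real optimizers repeat this update per parameter tensor each step; SGD variants add momentum or weight decay on top of the same rule.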
```rust
use cetana::model::{save_model, load_model};

// Save the trained model
save_model(&model, "my_model.cetana")?;

// Load it back later
let loaded_model = load_model("my_model.cetana")?;
```
| Backend | Status | Platform | Features |
|---|---|---|---|
| CPU | Active | All | Full feature set |
| CUDA | Planned | NVIDIA GPUs | GPU acceleration |
| MPS | Planned | Apple Silicon | Metal Performance Shaders |
| Vulkan | Planned | Cross-platform | Vulkan compute |
Add Cetana to your `Cargo.toml`:

```toml
[dependencies]
cetana = "0.0.1"
```
```rust
use cetana::tensor::{Tensor, Device};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create your first tensor
    let tensor = Tensor::new(&[1.0, 2.0, 3.0], &[3], Device::CPU)?;
    println!("Hello from Cetana: {:?}", tensor);
    Ok(())
}
```