| Crates.io | micro_grad |
| lib.rs | micro_grad |
| version | 0.1.1 |
| created_at | 2025-10-29 12:28:30.786501+00 |
| updated_at | 2025-10-29 12:41:45.876295+00 |
| description | A minimal autograd + MLP engine in Rust |
| homepage | |
| repository | https://github.com/BriceLucifer/micro_grad |
| max_upload_size | |
| id | 1906543 |
| size | 29,867 |
A minimal scalar autograd engine and small MLP framework inspired by micrograd (by Karpathy), implemented entirely in Rust.
It supports basic operations (+, -, *, /, ReLU), backpropagation, and multi-layer perceptron training.
Rc<RefCell<>> based, with no unsafe code.
Add this to your Cargo.toml:
[dependencies]
micro_grad = "0.1"
Then in your Rust code:
use micro_grad::value::Var;
use micro_grad::nn::{MLP, Module};
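Before moving on to the XOR example below, you can sanity-check the scalar engine on a single expression. The snippet is a sketch: add, sub, mul, Var::backward, data_of, and grad_of all appear elsewhere in this README, while the div and relu method names are assumptions and may differ in the published API.

use micro_grad::value::Var;

fn main() {
    // y = relu(a*b + a/b - b) with a = 2, b = 4
    let a = Var::new(2.0);
    let b = Var::new(4.0);
    let y = a.mul(&b).add(&a.div(&b)).sub(&b).relu(); // `div`/`relu` names assumed
    Var::backward(&y);
    println!("y = {}", y.data_of());      // 8 + 0.5 - 4 = 4.5
    println!("dy/da = {}", a.grad_of());  // b + 1/b = 4.25
    println!("dy/db = {}", b.grad_of());  // a - a/b^2 - 1 = 0.875
}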
Train an XOR neural network from scratch:
use micro_grad::nn::{MLP, Module};
use micro_grad::value::Var;

fn mse_loss(pred: &Var, target: &Var) -> Var {
    let diff = pred.sub(target);
    diff.mul(&diff)
}

fn to_vars(xs: &[f64]) -> Vec<Var> {
    xs.iter().map(|&v| Var::new(v)).collect()
}

fn main() {
    // XOR dataset
    let dataset = vec![
        (vec![0.0, 0.0], 0.0),
        (vec![0.0, 1.0], 1.0),
        (vec![1.0, 0.0], 1.0),
        (vec![1.0, 1.0], 0.0),
    ];

    // MLP: 2 -> 4 -> 1
    let mlp = MLP::new(2, &[4, 1]);
    let lr = 0.1;

    for epoch in 1..=2000 {
        let mut total_loss = Var::new(0.0);
        for (x, y) in &dataset {
            let y_true = Var::new(*y);
            let y_pred = mlp.forward_scalar(&to_vars(x));
            total_loss = total_loss.add(&mse_loss(&y_pred, &y_true));
        }

        Var::backward(&total_loss);
        for p in mlp.parameters() {
            p.set_data(p.data_of() - lr * p.grad_of());
        }

        if epoch % 100 == 0 {
            println!("epoch {epoch}, loss = {:.6}", total_loss.data_of());
        }
    }

    println!("\n== After training ==");
    for (x, y) in &dataset {
        let y_pred = mlp.forward_scalar(&to_vars(x)).data_of();
        println!("x={:?} -> pred={:.3} (target={})", x, y_pred, y);
    }
}
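The update inside the training loop is plain gradient descent: Var::backward() populates the gradients, and each parameter p is then moved by p <- p - lr * d(total_loss)/dp through data_of, grad_of, and set_data.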
✅ Expected output (after 2000 epochs):
epoch 2000, loss = 0.000000
== After training ==
x=[0.0, 0.0] -> pred≈0.00 (target=0)
x=[0.0, 1.0] -> pred≈1.00 (target=1)
x=[1.0, 0.0] -> pred≈1.00 (target=1)
x=[1.0, 1.0] -> pred≈0.00 (target=0)
Each Var stores:
- data: the scalar value
- grad: the accumulated gradient
- Op: a reference to how it was created (Add, Mul, etc.)

Var::backward() performs a reverse topological traversal of the graph, applying the chain rule.
A minimal graph example:
let a = Var::new(2.0);
let b = Var::new(3.0);
let c = a.mul(&b); // c = a * b
let d = c.add(&a); // d = a*b + a
Var::backward(&d);
println!("∂d/∂a = {}, ∂d/∂b = {}", a.grad_of(), b.grad_of());
// ∂d/∂a = b + 1 = 4, ∂d/∂b = a = 2
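To make the traversal concrete, here is a small, self-contained sketch of the same idea. It is an illustration of the algorithm only, not the crate's actual internals, which link nodes through Rc<RefCell<>> rather than indices: each node records its parents together with the local derivative of itself with respect to each parent, and a reverse pass over a topological order accumulates gradients with the chain rule.

struct Node {
    data: f64,
    grad: f64,
    // (parent index, local derivative d(self)/d(parent))
    parents: Vec<(usize, f64)>,
}

fn backward(nodes: &mut [Node], output: usize) {
    // Nodes are pushed parents-first, so the index order is already a
    // topological order; walking it in reverse visits every node after
    // all of the nodes that consume it.
    nodes[output].grad = 1.0;
    for i in (0..nodes.len()).rev() {
        let grad = nodes[i].grad;
        let parents = nodes[i].parents.clone();
        for (p, local) in parents {
            // Chain rule: accumulate upstream gradient * local derivative.
            nodes[p].grad += grad * local;
        }
    }
}

fn main() {
    // Same graph as above: d = a*b + a, with a = 2 and b = 3.
    let mut nodes = vec![
        Node { data: 2.0, grad: 0.0, parents: vec![] },                   // 0: a
        Node { data: 3.0, grad: 0.0, parents: vec![] },                   // 1: b
        Node { data: 6.0, grad: 0.0, parents: vec![(0, 3.0), (1, 2.0)] }, // 2: c = a*b
        Node { data: 8.0, grad: 0.0, parents: vec![(2, 1.0), (0, 1.0)] }, // 3: d = c + a
    ];
    backward(&mut nodes, 3);
    println!("d = {}", nodes[3].data);                                // 8
    println!("∂d/∂a = {}, ∂d/∂b = {}", nodes[0].grad, nodes[1].grad); // 4, 2
}

Running this sketch reproduces the gradients from the example above: ∂d/∂a = 4 and ∂d/∂b = 2.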
Licensed under either of
Developed by BriceLucifer. Inspired by Andrej Karpathy’s micrograd.