Crates.io | snail_nn |
lib.rs | snail_nn |
version | 0.1.0 |
source | src |
created_at | 2023-08-01 12:33:31.345347 |
updated_at | 2023-08-01 12:33:31.345347 |
description | small neural network library, running on the CPU with parallelized stochastic gradient descent |
homepage | |
repository | https://github.com/Lommix/snail_nn |
max_upload_size | |
id | 931838 |
size | 455,454 |
Fully functional neural network library with backpropagation and a parallelized stochastic gradient descent implementation.
Store an image inside the neural network, upscale it, and interpolate between stored images (a rough sketch of the idea follows the example code below).
cargo run --example imagepol --release
The mandatory xor example
cargo run --example xor --release
Example Code:
use snail_nn::prelude::*;

fn main() {
    let mut nn = Model::new(&[2, 3, 1]);
    nn.set_activation(Activation::Sigmoid);

    let mut batch = TrainingBatch::empty(2, 1);
    let rate = 1.0;

    // AND - training data
    batch.add(&[0.0, 0.0], &[0.0]);
    batch.add(&[1.0, 0.0], &[0.0]);
    batch.add(&[0.0, 1.0], &[0.0]);
    batch.add(&[1.0, 1.0], &[1.0]);

    // train on random mini-batches of size 2
    for _ in 0..10000 {
        let (w_gradient, b_gradient) = nn.gradient(&batch.random_chunk(2));
        nn.learn(w_gradient, b_gradient, rate);
    }

    println!("output {:?} expected: 0.0", nn.forward(&[0.0, 0.0]));
    println!("output {:?} expected: 0.0", nn.forward(&[1.0, 0.0]));
    println!("output {:?} expected: 0.0", nn.forward(&[0.0, 1.0]));
    println!("output {:?} expected: 1.0", nn.forward(&[1.0, 1.0]));
}
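The image example mentioned above works by teaching the network a mapping from normalized (x, y) pixel coordinates to brightness; upscaling then just means sampling the trained network on a denser grid, and interpolating between two stored images is typically done by adding an extra input that blends between them (omitted here). The actual imagepol example code is not reproduced in this README, so the following is only a minimal sketch of that idea, assuming a flat row-major grayscale buffer with values in 0.0..=1.0; the helper names train_image and upscale, the hidden layer sizes, the mini-batch size, and the indexable return type of forward are all assumptions, while the API calls themselves are the ones shown in the example above.

use snail_nn::prelude::*;

// Hypothetical helper: train a network to map (x, y) coordinates to brightness.
fn train_image(pixels: &[f64], width: usize, height: usize) -> Model {
    // 2 inputs (normalized x, y), 1 output (brightness); hidden sizes are a guess.
    let mut nn = Model::new(&[2, 16, 16, 1]);
    nn.set_activation(Activation::Sigmoid);

    let mut batch = TrainingBatch::empty(2, 1);
    for y in 0..height {
        for x in 0..width {
            let input = [x as f64 / width as f64, y as f64 / height as f64];
            batch.add(&input, &[pixels[y * width + x]]);
        }
    }

    let rate = 1.0;
    for _ in 0..10000 {
        // mini-batch stochastic gradient descent, as in the example above
        let (w_gradient, b_gradient) = nn.gradient(&batch.random_chunk(32));
        nn.learn(w_gradient, b_gradient, rate);
    }
    nn
}

// Hypothetical helper: upscale by sampling the trained network on a denser grid.
fn upscale(nn: &mut Model, new_width: usize, new_height: usize) -> Vec<f64> {
    let mut out = Vec::with_capacity(new_width * new_height);
    for y in 0..new_height {
        for x in 0..new_width {
            let brightness = nn.forward(&[
                x as f64 / new_width as f64,
                y as f64 / new_height as f64,
            ]);
            // assumes forward returns an indexable collection of outputs
            out.push(brightness[0]);
        }
    }
    out
}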
Sigmoid, Tanh & Relu activation functions
Parallelized stochastic gradient descent
It works on my machine ¯\_(ツ)_/¯
Will gobble up most of your CPU
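The activation function is chosen per model via set_activation, as in the example above. Only Activation::Sigmoid appears in the README code, so the Tanh and Relu variant names below are assumed from the feature list; a minimal sketch of switching activations might look like this:

use snail_nn::prelude::*;

fn main() {
    let mut nn = Model::new(&[2, 3, 1]);
    // "Tanh" (or "Relu") variant names are assumed from the feature list above;
    // only Activation::Sigmoid is confirmed by the example code.
    nn.set_activation(Activation::Tanh);
    println!("{:?}", nn.forward(&[0.5, 0.5]));
}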