| Field | Value |
|-----------------|-----------------------------------------------------|
| Crates.io | gradients |
| lib.rs | gradients |
| version | 0.3.4 |
| source | src |
| created_at | 2022-07-17 13:18:56.838111 |
| updated_at | 2022-09-10 18:39:00.201717 |
| description | An OpenCL, CUDA and CPU based Deep Learning Library |
| homepage | |
| repository | https://github.com/elftausend/gradients |
| max_upload_size | |
| id | 627234 |
| size | 95,218 |
A deep learning library built on custos and custos-math.

External (C) dependencies: OpenCL, CUDA, nvrtc, cuBLAS, and a BLAS library (OpenBLAS, Intel MKL, ...).

Two features, `cuda` and `opencl`, are enabled by default. If you deactivate them (add `default-features = false` and provide no additional features), only the CPU device can be used. For all feature configurations, a BLAS library needs to be installed on the system.
```toml
[dependencies]
gradients = "0.3.4"

# to disable the default features (cuda, opencl) and use your own set of features:
# gradients = { version = "0.3.4", default-features = false, features = ["opencl"] }
```
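For a CPU-only build, the feature notes above translate to a configuration like this (a BLAS library is still required on the system):

```toml
# CPU only: disable the default cuda/opencl features and add none back
gradients = { version = "0.3.4", default-features = false }
```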
(If this example does not compile, consider looking here.)
Use a struct that implements the `NeuralNetwork` trait (the implementation is generated by the `#[network]` attribute) to define which layers you want to use:
```rust
use gradients::purpur::{CSVLoader, CSVReturn, Converter};
use gradients::OneHotMat;
use gradients::{
    correct_classes,
    nn::{cce, cce_grad},
    range, Adam, CLDevice, Linear, Matrix, network, ReLU, Softmax,
};

#[network]
pub struct Network {
    lin1: Linear<784, 128>,
    relu1: ReLU,
    lin2: Linear<128, 10>,
    relu2: ReLU,
    lin3: Linear<10, 10>,
    softmax: Softmax,
}
```
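The const generics on `Linear` are the input and output sizes of each layer, so data flows 784 → 128 → 10 → 10: 784 matches the 28×28 pixels of one MNIST image, and the final 10 matches the ten digit classes.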
Load the data and create an instance of `Network`. You can download the MNIST dataset here. (The snippets below use the `?` operator, so they have to run inside a function that returns a compatible `Result`.)
```rust
// use the CPU (no features enabled): let device = gradients::CPU::new().select();
// use a CUDA device (cuda feature enabled): let device = gradients::CudaDevice::new(0).unwrap().select();
// use an OpenCL device (opencl feature enabled):
let device = CLDevice::new(0)?;

let mut net = Network::with_device(&device);

let loader = CSVLoader::new(true);
let loaded_data: CSVReturn<f32> = loader.load("PATH/TO/DATASET/mnist_train.csv")?;

// input matrix: one row per sample, one column per pixel value
let i = Matrix::from((
    &device,
    (loaded_data.sample_count, loaded_data.features),
    &loaded_data.x,
));
// scale the pixel values from 0..=255 down to 0.0..=1.0
let i = i / 255.;

// labels as a column vector, then one-hot encoded
let y = Matrix::from((&device, (loaded_data.sample_count, 1), &loaded_data.y));
let y = y.onehot();
```
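For intuition: `onehot()` expands each class label into a ten-element indicator row, e.g. the label 3 becomes `[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]`, which is the target format the cross-entropy functions in the training loop expect.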
Training loop:
```rust
let mut opt = Adam::new(0.01);

for epoch in range(200) {
    // forward pass over the whole training set
    let preds = net.forward(&i);
    let correct_training = correct_classes(&loaded_data.y.as_usize(), &preds) as f32;

    // categorical cross-entropy loss for logging
    let loss = cce(&device, &preds, &y);
    println!(
        "epoch: {epoch}, loss: {loss}, training_acc: {acc}",
        acc = correct_training / loaded_data.sample_count() as f32
    );

    // backpropagate the loss gradient and update all parameters
    let grad = cce_grad(&device, &preds, &y);
    net.backward(&grad);
    opt.step(&device, net.params());
}
```
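Not shown in the original example, but a natural next step: evaluating the trained network on held-out data. The sketch below reuses only the calls from the example above and assumes a mnist_test.csv with the same layout as the training file (the path is a placeholder):

```rust
// Hedged sketch: measure accuracy on the test split after training.
let test_data: CSVReturn<f32> = loader.load("PATH/TO/DATASET/mnist_test.csv")?;

let x_test = Matrix::from((
    &device,
    (test_data.sample_count, test_data.features),
    &test_data.x,
));
let x_test = x_test / 255.;

let preds = net.forward(&x_test);
let correct = correct_classes(&test_data.y.as_usize(), &preds) as f32;
println!("test_acc: {}", correct / test_data.sample_count() as f32);
```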