| | |
|---|---|
| Crates.io | triton_hydra |
| lib.rs | triton_hydra |
| version | 0.0.1 |
| source | src |
| created_at | 2023-11-16 12:56:15.220867 |
| updated_at | 2023-11-16 12:56:15.220867 |
| description | A branch of the triton project, built with a CUDA backend for matrix math |
| homepage | |
| repository | |
| max_upload_size | |
| id | 1037582 |
| size | 62,708 |
Use the package manager Cargo to add `triton_hydra` to your Rust project:

```sh
cargo add triton_hydra
```

Or add the dependency directly in your `Cargo.toml` file:

```toml
[dependencies]
triton_hydra = "{version}"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
```
Triton acts as a typical neural network implementation, but allows for a more dynamic way of solving problems you may not know how to solve. Acting as a 'brute force' approach to deep learning, every n epochs of the training process Triton evaluates the specific error of each neuron and column, deciding whether to add a neuron to a column, add an entirely new column, remove a neuron, or remove a column. Triton trains and grows the network until the target accuracy is reached, returning the finished model.
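To make that grow-or-prune step concrete, here is a minimal, hypothetical sketch in plain Rust. None of these names (`GrowthAction`, `decide`, the error thresholds) come from the triton_hydra API; they only model the decision logic described above.

```rust
// Hypothetical sketch of the per-column growth decision described above;
// none of these names are part of the triton_hydra API.
#[derive(Debug)]
enum GrowthAction {
    AddNeuron(usize),    // add a neuron to the column at this index
    AddColumn(usize),    // insert an entirely new column at this index
    RemoveNeuron(usize), // remove a neuron from the column at this index
    RemoveColumn(usize), // remove the column at this index
    Keep,                // leave the architecture unchanged
}

// Given the average error of each hidden column (measured every n epochs),
// pick one structural change for the next training phase.
fn decide(column_errors: &[f64], grow_above: f64, prune_below: f64) -> GrowthAction {
    let (worst, &worst_err) = column_errors
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .unwrap();
    let (best, &best_err) = column_errors
        .iter()
        .enumerate()
        .min_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .unwrap();

    if worst_err > grow_above {
        // The worst column is underperforming: give it more capacity.
        GrowthAction::AddNeuron(worst)
    } else if best_err < prune_below {
        // The best column has error to spare: shrink it.
        GrowthAction::RemoveNeuron(best)
    } else {
        GrowthAction::Keep
    }
}

fn main() {
    let errors = vec![0.4, 0.05, 0.2];
    // Column 0 exceeds the growth threshold, so it gains a neuron.
    println!("{:?}", decide(&errors, 0.3, 0.01)); // AddNeuron(0)
}
```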
```rust
use triton_grow::network::{activations, modes::Mode, network::Network};

fn main() {
    // XOR training data: inputs and expected outputs.
    let inputs = vec![vec![0.0, 0.0], vec![1.0, 0.0], vec![0.0, 1.0], vec![1.0, 1.0]];
    let outputs = vec![vec![0.0], vec![1.0], vec![1.0], vec![0.0]];

    // Start from a small 2-3-1 network with sigmoid activations and a 0.1 learning rate.
    let mut new_net: Network = Network::new(vec![2, 3, 1], activations::SIGMOID, 0.1);

    // Train (and grow) the network until the average loss drops below 0.001.
    new_net = new_net.train_to_loss(inputs, outputs, 0.001, 100000, Mode::Avg, 0.001, 3, 10);

    println!("1 and 0: {:?}", new_net.feed_forward(&vec![1.0, 0.0])[0].round());
    println!("0 and 1: {:?}", new_net.feed_forward(&vec![0.0, 1.0])[0].round());
    println!("1 and 1: {:?}", new_net.feed_forward(&vec![1.0, 1.0])[0].round());
    println!("0 and 0: {:?}", new_net.feed_forward(&vec![0.0, 0.0])[0].round());

    // Inspect the architecture the network grew into.
    println!("New network made: {:?}", new_net.layers);
}
```
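The `Cargo.toml` snippet above pulls in serde and serde_json, which suggests trained models can be serialized. The following is a minimal, unverified sketch that assumes `Network` implements serde's `Serialize` trait; this README does not confirm that, so treat it as illustrative only.

```rust
use triton_grow::network::network::Network;

// Assumption: Network implements serde::Serialize (hinted at by the serde /
// serde_json dependencies, but not confirmed by this README).
fn save_model(net: &Network, path: &str) -> std::io::Result<()> {
    let json = serde_json::to_string(net)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?;
    std::fs::write(path, json)
}
```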
To test Triton's self-growing method against traditional preconfigured network models, three neural networks were each tasked with learning a simple XOR predictor with the following inputs and expected outputs:
| Input | Expected Output |
|---|---|
| [ 1.0 , 0.0 ] | [ 1.0 ] |
| [ 0.0 , 1.0 ] | [ 1.0 ] |
| [ 0.0 , 0.0 ] | [ 0.0 ] |
| [ 1.0 , 1.0 ] | [ 0.0 ] |
| Model Name | Layers {input - [hidden] - output} | Epochs Needed to Reach 0.001 Avg Loss |
|---|---|---|
| Minimum | 2 - { 3 } - 1 | 7,880,000 |
| Well Fit | 2 - { 3 - 4 - 3 } - 1 | 2,790,000 |
| Triton | 2 - { self growing } - 1 | 150,000 |
Triton required 98.09% fewer training epochs than the minimum-fit model, and 94.62% fewer than even the well-fit model.
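These figures follow directly from the table: (7,880,000 - 150,000) / 7,880,000 ≈ 98.09%, and (2,790,000 - 150,000) / 2,790,000 ≈ 94.62%.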
Currently, triton is in a very early beta stage; the following features are still in development:

- adding n neurons into any point of an existing network