| Crates.io | rfann |
| lib.rs | rfann |
| version | 0.1.0 |
| created_at | 2025-07-12 05:48:25.6567+00 |
| updated_at | 2025-07-12 05:48:25.6567+00 |
| description | A pure Rust implementation of the Fast Artificial Neural Network (FANN) library |
| homepage | |
| repository | |
| max_upload_size | |
| id | 1749004 |
| size | 815,877 |
Rust Fast Artificial Neural Network - A modern, high-performance neural network library built in pure Rust
RFANN is a feature-rich, GPU-accelerated neural network library inspired by the original FANN (Fast Artificial Neural Network) but rebuilt from the ground up in Rust. It combines classical neural network algorithms with modern GPU acceleration, WebAssembly support, and professional-grade memory management.
Add to your Cargo.toml:
```toml
[dependencies]
rfann = "0.1"

# Or, with GPU acceleration:
rfann = { version = "0.1", features = ["gpu"] }

# Or, with WebAssembly support:
rfann = { version = "0.1", features = ["wasm"] }

# Or, with browser GPU acceleration (WASM + WebGPU):
rfann = { version = "0.1", features = ["wasm-gpu"] }
```
```rust
use rfann::{NetworkBuilder, ActivationFunction};
use rfann::training::{TrainingData, IncrementalBackprop};

// Create a 3-layer network: 2 inputs, 3 hidden, 1 output
let mut network = NetworkBuilder::new()
    .input_layer(2)
    .hidden_layer(3, ActivationFunction::Sigmoid)
    .output_layer(1, ActivationFunction::Linear)
    .build()?;

// Prepare training data
let training_data = TrainingData::new(
    vec![vec![0.0, 0.0], vec![0.0, 1.0], vec![1.0, 0.0], vec![1.0, 1.0]],
    vec![vec![0.0], vec![1.0], vec![1.0], vec![0.0]], // XOR function
)?;

// Train the network
let mut trainer = IncrementalBackprop::new(0.7, 0.2)?;
trainer.train(&mut network, &training_data, 1000, 0.001)?;

// Make predictions
let output = network.run(&[1.0, 0.0])?;
println!("XOR(1,0) = {:.4}", output[0]);
```
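Under the hood, incremental backprop updates the weights after every training sample rather than once per batch. Independent of rfann's internals, a single such update for one sigmoid neuron can be sketched in plain Rust as follows; the function and variable names here are illustrative, not rfann API, and the two hyperparameters mirror the `(0.7, 0.2)` constructor arguments above on the assumption that they are a learning rate and a momentum term:

```rust
// Illustrative incremental-backprop step for a single sigmoid neuron.
// Not rfann API; a minimal sketch of the algorithm only.
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

fn backprop_step(
    weights: &mut [f64],
    prev_deltas: &mut [f64],
    inputs: &[f64],
    target: f64,
    lr: f64,
    momentum: f64,
) -> f64 {
    // Forward pass: weighted sum, then sigmoid activation.
    let net: f64 = weights.iter().zip(inputs).map(|(w, x)| w * x).sum();
    let out = sigmoid(net);
    // Gradient of squared error w.r.t. net: (out - target) * sigmoid'(net).
    let grad = (out - target) * out * (1.0 - out);
    // Per-weight update with momentum carried over from the previous step.
    for ((w, d), x) in weights.iter_mut().zip(prev_deltas.iter_mut()).zip(inputs) {
        *d = -lr * grad * x + momentum * *d;
        *w += *d;
    }
    out
}

fn main() {
    let mut w = vec![0.5, -0.5];
    let mut d = vec![0.0, 0.0];
    let before = backprop_step(&mut w, &mut d, &[1.0, 0.0], 1.0, 0.7, 0.2);
    let after = backprop_step(&mut w, &mut d, &[1.0, 0.0], 1.0, 0.7, 0.2);
    // Each step moves the output toward the target of 1.0.
    assert!(after > before);
    println!("before = {before:.4}, after = {after:.4}");
}
```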
```rust
use rfann::{NetworkBuilder, ActivationFunction};
use rfann::training::{TrainingData, gpu_training::GpuBatchTraining};

// Create network with GPU support
let mut network = NetworkBuilder::new()
    .input_layer(784) // MNIST-sized input
    .hidden_layer(128, ActivationFunction::ReLU)
    .hidden_layer(64, ActivationFunction::ReLU)
    .output_layer(10, ActivationFunction::Sigmoid)
    .enable_gpu(true)
    .build()?;

// GPU-accelerated batch training (note: `await` requires an async context)
let mut trainer = GpuBatchTraining::new(0.001, 32)?; // batch size 32
trainer.train(&mut network, &training_data, 100, 0.01).await?;
```
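What makes batch training GPU-friendly is that gradients are accumulated across a whole mini-batch (here 32 samples) and applied once per batch, so the per-sample work parallelizes cleanly. A library-agnostic sketch of that batching loop, using a single linear weight for brevity (nothing here is rfann API):

```rust
// Mini-batch gradient averaging, sketched for a one-weight linear "network".
// Illustrative only; rfann's GPU trainer operates on full weight matrices.
fn apply_batches(
    samples: &[(f64, f64)], // (input, target) pairs
    weight: &mut f64,
    lr: f64,
    batch_size: usize,
) {
    for batch in samples.chunks(batch_size) {
        // Sum dL/dw for squared error over the batch.
        let grad_sum: f64 = batch
            .iter()
            .map(|(x, t)| {
                let out = *weight * x;
                (out - t) * x
            })
            .sum();
        // One update per batch, scaled by the mean gradient.
        *weight -= lr * grad_sum / batch.len() as f64;
    }
}

fn main() {
    // Learn w ≈ 2.0 from noiseless samples of y = 2x.
    let samples: Vec<(f64, f64)> = (1..=64)
        .map(|i| (i as f64 / 64.0, 2.0 * i as f64 / 64.0))
        .collect();
    let mut w = 0.0;
    for _ in 0..200 {
        apply_batches(&samples, &mut w, 0.5, 32);
    }
    assert!((w - 2.0).abs() < 1e-3);
    println!("learned w = {w:.4}");
}
```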
```rust
use rfann::{NetworkBuilder, ActivationFunction};
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub struct WebNetwork {
    network: rfann::Network<f32>,
}

#[wasm_bindgen]
impl WebNetwork {
    #[wasm_bindgen(constructor)]
    pub fn new() -> Result<WebNetwork, JsValue> {
        let network = NetworkBuilder::new()
            .input_layer(10)
            .hidden_layer(20, ActivationFunction::ReLU)
            .output_layer(5, ActivationFunction::Sigmoid)
            .build()
            .map_err(|e| JsValue::from_str(&e.to_string()))?;
        Ok(WebNetwork { network })
    }

    #[wasm_bindgen]
    pub fn predict(&mut self, inputs: &[f32]) -> Result<Vec<f32>, JsValue> {
        self.network
            .run(inputs)
            .map_err(|e| JsValue::from_str(&e.to_string()))
    }
}
```
```rust
use rfann::{NetworkBuilder, ActivationFunction};
use rfann::cascade::CascadeCorrelation;

// Start with a minimal network
let mut network = NetworkBuilder::new()
    .input_layer(2)
    .output_layer(1, ActivationFunction::Sigmoid)
    .build()?;

// Cascade correlation automatically adds hidden neurons
let mut cascade = CascadeCorrelation::new(0.01, 100)?;
cascade.train(&mut network, &training_data, 50)?; // Max 50 hidden neurons

println!("Final network has {} layers", network.get_num_layers());
```
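Cascade-Correlation (Fahlman and Lebiere's algorithm, which the original FANN also implements) grows the network by training a pool of candidate hidden units to maximize the covariance between each candidate's output and the network's residual error, then freezing the best candidate in place. The scoring step can be sketched independently of rfann's implementation; the names below are illustrative only:

```rust
// Illustrative Cascade-Correlation candidate score: magnitude of the
// covariance between a candidate unit's outputs and the residual errors
// over the training set. Not rfann API.
fn candidate_score(candidate_out: &[f64], residual_err: &[f64]) -> f64 {
    let n = candidate_out.len() as f64;
    let mean_v = candidate_out.iter().sum::<f64>() / n;
    let mean_e = residual_err.iter().sum::<f64>() / n;
    candidate_out
        .iter()
        .zip(residual_err)
        .map(|(v, e)| (v - mean_v) * (e - mean_e))
        .sum::<f64>()
        .abs()
}

fn main() {
    let errs = [0.9, -0.8, 0.7, -0.6];
    // A candidate that tracks the error pattern scores higher than a
    // flat candidate, so it would be the one frozen into the network.
    let tracking = [1.0, 0.0, 1.0, 0.0];
    let flat = [0.5, 0.5, 0.5, 0.5];
    assert!(candidate_score(&tracking, &errs) > candidate_score(&flat, &errs));
    println!(
        "tracking = {:.3}, flat = {:.3}",
        candidate_score(&tracking, &errs),
        candidate_score(&flat, &errs)
    );
}
```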
RFANN delivers exceptional performance across different deployment scenarios:
🧪 GPU Training with Validation (50→100→50→10 network, 800 samples)

- CPU Training: 1.67s (0.025s/epoch)
- GPU Training: 0.72s (0.012s/epoch)
- Speedup: 2.32x with identical convergence
Customize RFANN for your specific needs:
```toml
[dependencies.rfann]
version = "0.1"
default-features = false
features = [
    "std",         # Standard library support
    "serde",       # Serialization support
    "parallel",    # Multi-threading
    "gpu",         # GPU acceleration
    "wasm",        # WebAssembly support
    "wasm-gpu",    # WASM + WebGPU
    "simd",        # SIMD optimizations
    "compression", # Gzip compression
]
```
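With `default-features = false`, application code that depends on an optional capability is typically gated with standard Cargo `cfg` attributes. This is ordinary Cargo behavior, not rfann-specific; the function below is a hypothetical example:

```rust
// Standard Cargo feature gating: compile the GPU path only when the
// "gpu" feature is enabled, falling back to the CPU path otherwise.
#[cfg(feature = "gpu")]
fn train_backend() -> &'static str {
    "gpu"
}

#[cfg(not(feature = "gpu"))]
fn train_backend() -> &'static str {
    "cpu"
}

fn main() {
    // Exactly one of the two definitions above is compiled in.
    println!("training on the {} backend", train_backend());
}
```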
We welcome contributions! Please see our Contributing Guide for details.
```shell
git clone https://github.com/agenticsorg/rfann.git
cd rfann

# Run tests
cargo test

# Run with GPU features
cargo test --features gpu

# Run benchmarks
cargo bench

# Test WASM build
cargo build --target wasm32-unknown-unknown --features wasm
```
This project is licensed under the MIT License - see the LICENSE file for details.
Ready to supercharge your neural networks? Get started with RFANN today! 🚀