rfann

Crates.io: rfann
lib.rs: rfann
version: 0.1.0
created_at: 2025-07-12 05:48:25.6567+00
updated_at: 2025-07-12 05:48:25.6567+00
description: A pure Rust implementation of the Fast Artificial Neural Network (FANN) library
id: 1749004
size: 815,877
owner: OceanLi (ohdearquant)

README

RFANN 🧠⚡

Rust Fast Artificial Neural Network - A modern, high-performance neural network library built in pure Rust

Crates.io | Documentation | License: MIT

RFANN is a feature-rich, GPU-accelerated neural network library inspired by the original FANN (Fast Artificial Neural Network) but rebuilt from the ground up in Rust. It combines classical neural network algorithms with modern GPU acceleration, WebAssembly support, and professional-grade memory management.

🚀 Key Features

⚡ High Performance

  • WebGPU Acceleration: Native GPU compute support for training and inference
  • WASM + WebGPU: Full neural networks running in browsers with GPU acceleration
  • SIMD Optimizations: Vectorized CPU operations for enhanced performance
  • Multi-threaded Training: Parallel processing with automatic CPU utilization

🧠 Advanced Neural Networks

  • 18+ Activation Functions: From standard (ReLU, Sigmoid) to specialized (Elliott, Gaussian)
  • Cascade Correlation: Unique dynamic topology optimization that grows networks during training
  • Multiple Training Algorithms: Backprop, RProp, Quickprop, Adam, AdamW
  • Generic Float Support: Works with f32, f64, and custom numeric types
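
The generic float support means the same builder pattern should work over f64 as well as the default f32. A minimal sketch, assuming the element type can be pinned through the Network<T> type that the WebAssembly example below uses explicitly (the builder being generic in this way is an assumption, not confirmed API):

use rfann::{NetworkBuilder, ActivationFunction};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumption: the builder is generic over the float type, mirroring Network<T>.
    let network: rfann::Network<f64> = NetworkBuilder::new()
        .input_layer(2)
        .hidden_layer(3, ActivationFunction::Sigmoid)
        .output_layer(1, ActivationFunction::Linear)
        .build()?;

    println!("layers: {}", network.get_num_layers());
    Ok(())
}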

🌐 Cross-Platform Deployment

  • Native: Linux, macOS, Windows with full GPU support
  • WebAssembly: Complete library functionality in browsers
  • no_std Support: Embedded and resource-constrained environments
  • Automatic Fallback: Seamless CPU/GPU backend switching

🔧 Professional Features

  • Advanced Memory Management: 5-tier buffer pooling with pressure monitoring
  • Circuit Breaker Protection: Predictive analytics preventing memory exhaustion
  • Multiple I/O Formats: FANN, JSON, Binary, with compression support (see the sketch after this list)
  • Real-time Monitoring: Performance metrics and auto-tuning
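
As a sketch of the I/O side, persisting a network might look like the following; the save and load method names and the FANN-style file extension are hypothetical, chosen only to illustrate the format options listed above:

use rfann::{NetworkBuilder, ActivationFunction};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let network = NetworkBuilder::new()
        .input_layer(2)
        .hidden_layer(3, ActivationFunction::Sigmoid)
        .output_layer(1, ActivationFunction::Linear)
        .build()?;

    // Hypothetical persistence calls; the method names are assumptions.
    network.save("xor.net")?;                              // FANN-style text format
    let restored = rfann::Network::<f32>::load("xor.net")?;
    println!("restored {} layers", restored.get_num_layers());
    Ok(())
}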

📦 Installation

Add to your Cargo.toml, picking the variant that matches your target:

[dependencies]
rfann = "0.1"

# For GPU acceleration
rfann = { version = "0.1", features = ["gpu"] }

# For WebAssembly
rfann = { version = "0.1", features = ["wasm"] }

# For browser GPU acceleration
rfann = { version = "0.1", features = ["wasm-gpu"] }

🔥 Quick Start

Basic Neural Network

use rfann::{NetworkBuilder, ActivationFunction};
use rfann::training::{TrainingData, IncrementalBackprop};

// Create a 3-layer network: 2 inputs, 3 hidden, 1 output
let mut network = NetworkBuilder::new()
    .input_layer(2)
    .hidden_layer(3, ActivationFunction::Sigmoid)
    .output_layer(1, ActivationFunction::Linear)
    .build()?;

// Prepare training data
let training_data = TrainingData::new(
    vec![vec![0.0, 0.0], vec![0.0, 1.0], vec![1.0, 0.0], vec![1.0, 1.0]],
    vec![vec![0.0], vec![1.0], vec![1.0], vec![0.0]]  // XOR function
)?;

// Train the network
let mut trainer = IncrementalBackprop::new(0.7, 0.2)?;
trainer.train(&mut network, &training_data, 1000, 0.001)?;

// Make predictions
let output = network.run(&[1.0, 0.0])?;
println!("XOR(1,0) = {:.4}", output[0]);

GPU-Accelerated Training

use rfann::{NetworkBuilder, ActivationFunction};
use rfann::training::{TrainingData, gpu_training::GpuBatchTraining};

// Create network with GPU support
let mut network = NetworkBuilder::new()
    .input_layer(784)  // MNIST-sized input
    .hidden_layer(128, ActivationFunction::ReLU)
    .hidden_layer(64, ActivationFunction::ReLU)
    .output_layer(10, ActivationFunction::Sigmoid)
    .enable_gpu(true)
    .build()?;

// GPU-accelerated batch training (training_data prepared as in the basic example above)
let mut trainer = GpuBatchTraining::new(0.001, 32)?;  // batch size 32
trainer.train(&mut network, &training_data, 100, 0.01).await?;
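
Because GpuBatchTraining::train is awaited, it has to run inside an async executor. A minimal sketch, assuming the tokio runtime (not something rfann itself mandates) and reusing the XOR data from the basic example:

use rfann::{NetworkBuilder, ActivationFunction};
use rfann::training::{TrainingData, gpu_training::GpuBatchTraining};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut network = NetworkBuilder::new()
        .input_layer(2)
        .hidden_layer(3, ActivationFunction::ReLU)
        .output_layer(1, ActivationFunction::Sigmoid)
        .enable_gpu(true)
        .build()?;

    let training_data = TrainingData::new(
        vec![vec![0.0, 0.0], vec![0.0, 1.0], vec![1.0, 0.0], vec![1.0, 1.0]],
        vec![vec![0.0], vec![1.0], vec![1.0], vec![0.0]],
    )?;

    // Learning rate 0.001, batch size 4 (argument order taken from the snippet above).
    let mut trainer = GpuBatchTraining::new(0.001, 4)?;
    trainer.train(&mut network, &training_data, 100, 0.01).await?;
    Ok(())
}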

WebAssembly Deployment

use rfann::{NetworkBuilder, ActivationFunction};
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub struct WebNetwork {
    network: rfann::Network<f32>,
}

#[wasm_bindgen]
impl WebNetwork {
    #[wasm_bindgen(constructor)]
    pub fn new() -> Result<WebNetwork, JsValue> {
        let network = NetworkBuilder::new()
            .input_layer(10)
            .hidden_layer(20, ActivationFunction::ReLU)
            .output_layer(5, ActivationFunction::Sigmoid)
            .build()
            .map_err(|e| JsValue::from_str(&e.to_string()))?;
        
        Ok(WebNetwork { network })
    }
    
    #[wasm_bindgen]
    pub fn predict(&mut self, inputs: &[f32]) -> Result<Vec<f32>, JsValue> {
        self.network.run(inputs)
            .map_err(|e| JsValue::from_str(&e.to_string()))
    }
}
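
The exported WebNetwork can then be packaged for the browser with the usual wasm-bindgen tooling, for example wasm-pack build --target web with the wasm (or wasm-gpu) feature enabled, and the generated JavaScript bindings call straight into predict.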

Cascade Correlation (Dynamic Networks)

use rfann::{NetworkBuilder, ActivationFunction};
use rfann::cascade::CascadeCorrelation;

// Start with minimal network
let mut network = NetworkBuilder::new()
    .input_layer(2)
    .output_layer(1, ActivationFunction::Sigmoid)
    .build()?;

// Cascade correlation automatically adds hidden neurons
let mut cascade = CascadeCorrelation::new(0.01, 100)?;
cascade.train(&mut network, &training_data, 50)?;  // Max 50 hidden neurons

println!("Final network has {} layers", network.get_num_layers());

🎯 Training Algorithms

Classical Algorithms

  • Backpropagation: Incremental and batch variants
  • RProp: Resilient backpropagation with adaptive learning rates
  • Quickprop: Quasi-Newton method for faster convergence

Modern Optimizers

  • Adam: Adaptive moment estimation (see the usage sketch at the end of this section)
  • AdamW: Adam with decoupled weight decay

Unique Features

  • Cascade Correlation: Dynamic topology optimization
  • GPU Acceleration: All algorithms support GPU training
  • Automatic Tuning: Learning rate adaptation and momentum optimization
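
If the trainers above follow the same pattern as IncrementalBackprop in the Quick Start, switching algorithms is essentially a one-line change. A minimal sketch using Adam, where the import path and the single learning-rate constructor argument are assumptions rather than confirmed API:

use rfann::{NetworkBuilder, ActivationFunction};
use rfann::training::{TrainingData, Adam}; // the Adam path here is assumed

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut network = NetworkBuilder::new()
        .input_layer(2)
        .hidden_layer(3, ActivationFunction::Sigmoid)
        .output_layer(1, ActivationFunction::Linear)
        .build()?;

    let training_data = TrainingData::new(
        vec![vec![0.0, 0.0], vec![0.0, 1.0], vec![1.0, 0.0], vec![1.0, 1.0]],
        vec![vec![0.0], vec![1.0], vec![1.0], vec![0.0]],
    )?;

    // Hypothetical constructor taking a learning rate.
    let mut trainer = Adam::new(0.001)?;
    // Same signature as IncrementalBackprop above: max epochs, desired error.
    trainer.train(&mut network, &training_data, 1000, 0.001)?;
    Ok(())
}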

🌟 Use Cases

🎮 Real-Time Applications

  • Game AI with GPU-accelerated inference
  • Interactive web applications with WASM deployment
  • Real-time signal processing and control systems

🔬 Research & Development

  • Algorithm comparison and benchmarking
  • Custom activation function development
  • Neural architecture search with cascade correlation

๐Ÿญ Production Systems

  • High-throughput batch inference
  • Edge deployment with no_std support
  • Microservice architectures with minimal dependencies

📚 Education & Prototyping

  • Learning neural network fundamentals
  • Rapid prototyping with fluent API
  • Network visualization and analysis

📊 Performance

RFANN delivers exceptional performance across different deployment scenarios:

Real Benchmark Results (Apple M2 Max)

🧪 GPU Training with Validation (50→100→50→10 network, 800 samples)
• CPU Training: 1.67s (0.025s/epoch)
• GPU Training: 0.72s (0.012s/epoch)
• Speedup: 2.32x with identical convergence

Performance Characteristics

  • GPU Acceleration: 2-10x speedup on Apple Silicon, up to 100x on discrete GPUs
  • WASM Performance: Near-native speeds in browsers with WebGPU support
  • Memory Efficiency: Advanced 5-tier pooling reduces allocation overhead by 80%
  • Batch Processing: Optimized matrix operations for high-throughput inference
  • Early Stopping: Intelligent validation monitoring prevents overfitting

🎛️ Feature Flags

Customize RFANN for your specific needs:

[dependencies.rfann]
version = "0.1"
default-features = false
features = [
    "std",        # Standard library support
    "serde",      # Serialization support
    "parallel",   # Multi-threading
    "gpu",        # GPU acceleration
    "wasm",       # WebAssembly support
    "wasm-gpu",   # WASM + WebGPU
    "simd",       # SIMD optimizations
    "compression" # Gzip compression
]

📚 Documentation

🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Setup

git clone https://github.com/agenticsorg/rfann.git
cd rfann

# Run tests
cargo test

# Run with GPU features
cargo test --features gpu

# Run benchmarks
cargo bench

# Test WASM build
cargo build --target wasm32-unknown-unknown --features wasm

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Inspired by the original FANN library
  • Built with wgpu for cross-platform GPU compute
  • Powered by the Rust ecosystem's excellent crates

Ready to supercharge your neural networks? Get started with RFANN today! 🚀
