| Crates.io | feagi-hal |
| lib.rs | feagi-hal |
| version | 0.0.1-beta.4 |
| created_at | 2025-12-23 22:55:27.668651+00 |
| updated_at | 2026-01-25 21:54:12.936762+00 |
| description | Hardware Abstraction Layer (HAL) for FEAGI embedded systems - platform abstraction and implementations |
| homepage | https://feagi.org |
| repository | https://github.com/feagi/feagi-core |
| max_upload_size | |
| id | 2002504 |
| size | 347,698 |
Hardware Abstraction Layer (HAL) for FEAGI embedded neural networks
Platform abstraction and implementations for embedded systems.
Part of FEAGI 2.0 - Framework for Evolutionary AGI
feagi-hal provides a Hardware Abstraction Layer (HAL) with platform-agnostic traits and concrete implementations for running FEAGI neural networks on embedded systems. It sits between the platform-agnostic neural processing core and platform-specific hardware.
```
┌─────────────────────────────────────────────────────────┐
│ Application (feagi-nano SDK or custom application)      │
└─────────────────────────────────────────────────────────┘
                        ↓ uses
┌─────────────────────────────────────────────────────────┐
│ feagi-hal (THIS CRATE)                                  │
│ ├── hal/        Platform Abstraction Layer (traits)     │
│ └── platforms/  Platform Implementations                │
└─────────────────────────────────────────────────────────┘
                        ↓ uses
┌─────────────────────────────────────────────────────────┐
│ feagi-core (neural processing)                          │
│ ├── feagi-types                                         │
│ ├── feagi-neural                                        │
│ ├── feagi-synapse                                       │
│ └── feagi-runtime-embedded                              │
└─────────────────────────────────────────────────────────┘
```
| Platform | Status | Feature Flag | Target | Max Neurons (INT8) |
|---|---|---|---|---|
| ESP32 | ✅ Production | `esp32` | `xtensa-esp32-none-elf` | 2,000 |
| ESP32-S3 | ✅ Production | `esp32-s3` | `xtensa-esp32s3-none-elf` | 40,000 |
| ESP32-C3 | ✅ Production | `esp32-c3` | `riscv32imc-esp-espidf` | 1,500 |
| Arduino Due | ✅ Foundation | `arduino-due` | `thumbv7m-none-eabi` | 1,000 |
| STM32F4 | ✅ Foundation | `stm32f4` | `thumbv7em-none-eabihf` | 2,500 |
| Raspberry Pi Pico | ✅ Foundation | `rpi-pico` | `thumbv6m-none-eabi` | 3,500 |
| Hailo-8 | ✅ Foundation | `hailo` | `aarch64-unknown-linux-gnu` | 1,000,000+ 🚀 |
Note: Hailo-8 support requires the HailoRT C/C++ library and FFI bindings for hardware deployment; the software architecture is complete.
Add to your `Cargo.toml`:

```toml
[dependencies]
feagi-hal = { version = "2.0", features = ["esp32"] }
```
Available feature flags:

```toml
# Microcontrollers
esp32 = ["esp-idf-svc", "esp-idf-hal"]   # ESP32 family
arduino-due = ["arduino-hal"]            # Arduino Due (future)
stm32f4 = ["stm32f4xx-hal"]              # STM32F4 series (future)

# Neural accelerators
hailo = ["hailo-sdk"]                    # Hailo-8 (future)

# Convenience bundles
all-esp32 = ["esp32", "esp32-s3", "esp32-c3"]
```
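Feature flags like these typically select a platform backend at compile time. As a minimal host-runnable sketch (not code from this crate), `cfg!` can branch on whichever feature is enabled; the feature names below are taken from the list above, and the fallback branch is hypothetical:

```rust
// Sketch only: demonstrates compile-time feature detection with cfg!.
// With no features enabled (e.g. a plain host build), the fallback runs.
fn backend_name() -> &'static str {
    if cfg!(feature = "esp32") {
        "esp32"
    } else if cfg!(feature = "hailo") {
        "hailo"
    } else {
        "host-stub" // no platform feature selected
    }
}

fn main() {
    println!("active backend: {}", backend_name());
}
```

In practice the same idea usually appears as module gating, e.g. `#[cfg(feature = "esp32")] pub mod esp32;`.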
```rust
use feagi_hal::prelude::*;

fn main() -> ! {
    // Initialize platform
    let platform = Esp32Platform::init().expect("Failed to initialize ESP32");
    platform.info("FEAGI Embedded starting...");
    platform.info(&format!("Platform: {}", platform.name()));
    platform.info(&format!("CPU: {} MHz", platform.cpu_frequency_hz() / 1_000_000));

    // Your neural network code here
    let mut neurons = NeuronArray::<INT8Value, 1000>::new();
    let mut synapses = SynapseArray::<5000>::new();

    loop {
        let start = platform.get_time_us();

        // Process neural burst
        neurons.process_burst(&synapses);

        // Timing control
        let elapsed = platform.get_time_us() - start;
        if elapsed < 10_000 { // 10ms = 100 Hz
            platform.delay_us((10_000 - elapsed) as u32);
        }
    }
}
```
```bash
# Install ESP32 toolchain
cargo install espup
espup install
source ~/export-esp.sh

# Build
cargo build --release --features esp32 --target xtensa-esp32-none-elf

# Flash
cargo run --release --features esp32
```
```rust
pub trait TimeProvider {
    fn get_time_us(&self) -> u64;
    fn delay_us(&self, us: u32);
    fn delay_ms(&self, ms: u32);
}
```
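To show how this trait is meant to be used, here is a hypothetical host-side implementation backed by `std::time` (not part of the published crate, which targets `no_std`); it is handy for exercising neural code on a desktop before flashing:

```rust
use std::time::Instant;

// The trait as shown in the README.
pub trait TimeProvider {
    fn get_time_us(&self) -> u64;
    fn delay_us(&self, us: u32);
    fn delay_ms(&self, ms: u32);
}

// Hypothetical host implementation: microseconds since construction.
pub struct HostTime {
    start: Instant,
}

impl HostTime {
    pub fn new() -> Self {
        Self { start: Instant::now() }
    }
}

impl TimeProvider for HostTime {
    fn get_time_us(&self) -> u64 {
        self.start.elapsed().as_micros() as u64
    }
    fn delay_us(&self, us: u32) {
        std::thread::sleep(std::time::Duration::from_micros(us as u64));
    }
    fn delay_ms(&self, ms: u32) {
        self.delay_us(ms.saturating_mul(1_000));
    }
}

fn main() {
    let t = HostTime::new();
    let a = t.get_time_us();
    t.delay_ms(5);
    // sleep guarantees at least the requested duration
    assert!(t.get_time_us() - a >= 5_000);
}
```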
```rust
pub trait SerialIO {
    type Error;
    fn write(&mut self, data: &[u8]) -> Result<usize, Self::Error>;
    fn read(&mut self, buffer: &mut [u8]) -> Result<usize, Self::Error>;
    fn flush(&mut self) -> Result<(), Self::Error>;
}
```
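A hypothetical in-memory loopback port (not from the crate) illustrates the contract: `write` queues bytes, `read` drains them, and both report how many bytes they handled. This kind of mock is useful for host-side tests of protocol code:

```rust
use std::collections::VecDeque;

// The trait as shown in the README.
pub trait SerialIO {
    type Error;
    fn write(&mut self, data: &[u8]) -> Result<usize, Self::Error>;
    fn read(&mut self, buffer: &mut [u8]) -> Result<usize, Self::Error>;
    fn flush(&mut self) -> Result<(), Self::Error>;
}

// Hypothetical loopback: whatever is written can be read back.
pub struct LoopbackSerial {
    buf: VecDeque<u8>,
}

impl LoopbackSerial {
    pub fn new() -> Self {
        Self { buf: VecDeque::new() }
    }
}

impl SerialIO for LoopbackSerial {
    type Error = core::convert::Infallible;

    fn write(&mut self, data: &[u8]) -> Result<usize, Self::Error> {
        self.buf.extend(data.iter().copied());
        Ok(data.len())
    }

    fn read(&mut self, buffer: &mut [u8]) -> Result<usize, Self::Error> {
        let n = buffer.len().min(self.buf.len());
        for slot in buffer[..n].iter_mut() {
            *slot = self.buf.pop_front().unwrap();
        }
        Ok(n)
    }

    fn flush(&mut self) -> Result<(), Self::Error> {
        Ok(())
    }
}

fn main() {
    let mut port = LoopbackSerial::new();
    port.write(b"burst").unwrap();
    let mut out = [0u8; 8];
    let n = port.read(&mut out).unwrap();
    assert_eq!(&out[..n], b"burst");
}
```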
```rust
pub trait Logger {
    fn log(&self, level: LogLevel, message: &str);
    fn error(&self, message: &str);
    fn warn(&self, message: &str);
    fn info(&self, message: &str);
}
```
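A sketch of a stdout-backed implementation, assuming a plausible `LogLevel` enum (the enum is referenced but not shown in this README, so its definition here is a guess); the convenience methods delegate to `log`:

```rust
// Assumed definition: LogLevel is referenced by the trait but not shown here.
#[derive(Clone, Copy, Debug)]
pub enum LogLevel {
    Error,
    Warn,
    Info,
}

// The trait as shown in the README.
pub trait Logger {
    fn log(&self, level: LogLevel, message: &str);
    fn error(&self, message: &str);
    fn warn(&self, message: &str);
    fn info(&self, message: &str);
}

// Hypothetical host logger that prints to stdout.
pub struct StdoutLogger;

impl StdoutLogger {
    // Formatting split out so it can be tested without capturing stdout.
    fn format_line(level: LogLevel, message: &str) -> String {
        format!("[{:?}] {}", level, message)
    }
}

impl Logger for StdoutLogger {
    fn log(&self, level: LogLevel, message: &str) {
        println!("{}", Self::format_line(level, message));
    }
    fn error(&self, message: &str) { self.log(LogLevel::Error, message); }
    fn warn(&self, message: &str) { self.log(LogLevel::Warn, message); }
    fn info(&self, message: &str) { self.log(LogLevel::Info, message); }
}

fn main() {
    let log = StdoutLogger;
    log.info("FEAGI Embedded starting...");
}
```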
```rust
pub trait NeuralAccelerator {
    type Error;
    fn is_available(&self) -> bool;
    fn upload_neurons(&mut self, neurons: &[u8]) -> Result<(), Self::Error>;
    fn process_burst(&mut self) -> Result<u32, Self::Error>;
    fn download_neurons(&mut self, buffer: &mut [u8]) -> Result<usize, Self::Error>;
}
```
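The upload → burst → download lifecycle can be exercised without hardware via a mock. The sketch below is hypothetical (a real backend such as Hailo-8 would drive the accelerator through FFI): state round-trips through a `Vec`, and `process_burst` reports a fake fire count:

```rust
// The trait as shown in the README.
pub trait NeuralAccelerator {
    type Error;
    fn is_available(&self) -> bool;
    fn upload_neurons(&mut self, neurons: &[u8]) -> Result<(), Self::Error>;
    fn process_burst(&mut self) -> Result<u32, Self::Error>;
    fn download_neurons(&mut self, buffer: &mut [u8]) -> Result<usize, Self::Error>;
}

// Hypothetical mock for host-side testing; no real acceleration happens.
#[derive(Default)]
pub struct MockAccelerator {
    state: Vec<u8>,
}

impl NeuralAccelerator for MockAccelerator {
    type Error = ();

    fn is_available(&self) -> bool {
        true
    }

    fn upload_neurons(&mut self, neurons: &[u8]) -> Result<(), ()> {
        self.state = neurons.to_vec();
        Ok(())
    }

    fn process_burst(&mut self) -> Result<u32, ()> {
        // Pretend every nonzero byte is a neuron that fired this burst.
        Ok(self.state.iter().filter(|&&b| b != 0).count() as u32)
    }

    fn download_neurons(&mut self, buffer: &mut [u8]) -> Result<usize, ()> {
        let n = buffer.len().min(self.state.len());
        buffer[..n].copy_from_slice(&self.state[..n]);
        Ok(n)
    }
}

fn main() {
    let mut acc = MockAccelerator::default();
    acc.upload_neurons(&[1, 0, 3, 0, 7]).unwrap();
    let fired = acc.process_burst().unwrap();
    assert_eq!(fired, 3); // three nonzero bytes
}
```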
See PORTING_GUIDE.md for step-by-step instructions.
Quick overview:

1. Create `src/platforms/myplatform.rs`
2. Implement the HAL traits (`TimeProvider`, `SerialIO`, `Logger`, `Platform`)
3. Add a feature flag to `Cargo.toml`

Estimated time: 2-3 days per platform.
This separation allows applications to use `feagi-hal` directly.

Licensed under Apache License 2.0
Copyright © 2025 Neuraville Inc.
See CONTRIBUTING.md
Platform implementations are welcome! We're looking for contributors to add support for additional platforms.