| Crates.io | ruvector-nervous-system |
| lib.rs | ruvector-nervous-system |
| version | 0.1.30 |
| created_at | 2025-12-29 19:13:17.291303+00 |
| updated_at | 2026-01-04 19:42:00.964414+00 |
| description | Bio-inspired neural system with spiking networks, BTSP learning, and EWC plasticity |
| homepage | |
| repository | https://github.com/ruvnet/ruvector |
| max_upload_size | |
| id | 2011128 |
| size | 915,254 |
A five-layer bio-inspired nervous system for AI applications. Think less "smart algorithm" and more "living organism."
Most AI systems are like assembly lines: data goes in, predictions come out, repeat forever. This crate takes a different approach. It gives your software a nervous system - the same kind of layered architecture that lets living creatures sense danger, react instantly, learn from experience, and rest when they need to.
The result: systems that stay mostly quiet, react instantly when something changes, and learn from single examples.
"From 'How do we make machines smarter?' to 'What kind of organism are we building?'"
Every living nervous system has specialized layers. So does this one:
```mermaid
graph TD
    subgraph "COHERENCE LAYER"
        A1[Global Workspace]
        A2[Oscillatory Routing]
        A3[Predictive Coding]
    end
    subgraph "LEARNING LAYER"
        B1[BTSP One-Shot]
        B2[E-prop Online]
        B3[EWC Consolidation]
    end
    subgraph "MEMORY LAYER"
        C1[Hopfield Networks]
        C2[HDC Vectors]
        C3[Pattern Separation]
    end
    subgraph "REFLEX LAYER"
        D1[K-WTA Competition]
        D2[Dendritic Detection]
        D3[Safety Gates]
    end
    subgraph "SENSING LAYER"
        E1[Event Bus]
        E2[Sparse Spikes]
        E3[Backpressure]
    end
    A1 --> B1
    A2 --> B2
    A3 --> B3
    B1 --> C1
    B2 --> C2
    B3 --> C3
    C1 --> D1
    C2 --> D2
    C3 --> D3
    D1 --> E1
    D2 --> E2
    D3 --> E3
```
| Layer | What It Does | Why It Matters |
|---|---|---|
| Sensing | Converts continuous data into sparse events | Only process what changed. 10,000+ events/ms throughput. |
| Reflex | Instant decisions via winner-take-all competition | <1μs response time. No thinking required. |
| Memory | Stores patterns in hyperdimensional space | 10^40 capacity. Retrieve similar patterns in <100ns. |
| Learning | One-shot and online adaptation | Learn immediately. No batch retraining. |
| Coherence | Coordinates what gets attention | 90-99% bandwidth savings. Global workspace for focus. |
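The Sensing row above is the key to everything downstream: only changes become events. The idea can be sketched as simple threshold encoding; the `SparseEncoder` type and its API here are hypothetical illustrations, not the crate's actual interface.

```rust
/// Hypothetical sketch of threshold-based event encoding: emit an event
/// only when a channel moves more than `threshold` from its last emitted
/// value, so downstream layers only see what changed.
struct SparseEncoder {
    last: Vec<f64>,
    threshold: f64,
}

impl SparseEncoder {
    fn new(channels: usize, threshold: f64) -> Self {
        Self { last: vec![0.0; channels], threshold }
    }

    /// Returns (channel, new_value) events for channels that changed enough.
    fn encode(&mut self, sample: &[f64]) -> Vec<(usize, f64)> {
        let mut events = Vec::new();
        for (i, &v) in sample.iter().enumerate() {
            if (v - self.last[i]).abs() > self.threshold {
                self.last[i] = v;
                events.push((i, v));
            }
        }
        events
    }
}

fn main() {
    let mut enc = SparseEncoder::new(3, 0.1);
    // Channels 0 and 2 jump past the threshold; channel 1 does not.
    let first = enc.encode(&[1.0, 0.05, -0.5]);
    // The next sample barely moves, so nothing at all is emitted.
    let second = enc.encode(&[1.05, 0.08, -0.52]);
    println!("{first:?} {second:?}");
}
```

A quiet signal produces no events and therefore no downstream work, which is what makes the "only process what changed" claim concrete.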
```mermaid
graph LR
    subgraph Traditional["Traditional AI"]
        T1[Batch Data] --> T2[Train Model]
        T2 --> T3[Deploy]
        T3 --> T4[Inference Loop]
        T4 --> T1
    end
    subgraph NervousSystem["Nervous System"]
        N1[Events] --> N2[Reflex]
        N2 --> N3{Familiar?}
        N3 -->|Yes| N4[Instant Response]
        N3 -->|No| N5[Learn + Remember]
        N5 --> N4
        N4 --> N1
    end
```
| Traditional AI | Nervous System |
|---|---|
| Always processing | Mostly quiet, reacts when needed |
| Learns from batches | Learns from single examples |
| Fails silently | Knows when it's struggling |
| Scales with more compute | Scales with better organization |
| Static after deployment | Adapts through use |
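The familiar-versus-novel branch in the diagram above is the heart of the loop. A minimal sketch, with a hypothetical `TinyNervousSystem` type standing in for the real reflex and memory layers:

```rust
use std::collections::HashMap;

/// Hypothetical sketch of the reflex loop: check memory first, respond
/// instantly on a match, otherwise learn the new pattern in one shot.
struct TinyNervousSystem {
    memory: HashMap<String, String>, // pattern -> cached response
}

impl TinyNervousSystem {
    fn new() -> Self {
        Self { memory: HashMap::new() }
    }

    /// Familiar patterns get an instant cached response; novel ones are
    /// learned once and answered the same way from then on.
    fn react(&mut self, pattern: &str) -> String {
        if let Some(resp) = self.memory.get(pattern) {
            resp.clone() // familiar: instant response, no learning
        } else {
            let resp = format!("learned:{pattern}");
            self.memory.insert(pattern.to_string(), resp.clone());
            resp // novel: learn + remember, then respond
        }
    }
}

fn main() {
    let mut ns = TinyNervousSystem::new();
    println!("{}", ns.react("spike")); // novel path
    println!("{}", ns.react("spike")); // familiar path
}
```

The second call never touches the learning path, which is why a mature system spends most of its time quiet.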
- **Event Bus** - Lock-free ring buffers with region-based sharding
- **K-Winner-Take-All (K-WTA)** - Instant decisions
- **Dendritic Coincidence Detection** - Temporal pattern matching
- **Hyperdimensional Computing (HDC)** - Ultra-fast similarity
- **Modern Hopfield Networks** - Exponential pattern storage
- **Pattern Separation** - Collision-free encoding
- **BTSP (Behavioral Timescale Synaptic Plasticity)** - One-shot learning
- **E-prop (Eligibility Propagation)** - Online learning
- **EWC (Elastic Weight Consolidation)** - Remember old tasks
- **Oscillatory Routing** - Phase-coupled communication
- **Global Workspace** - Focus of attention
- **Predictive Coding** - Only transmit surprises
- **SCN-Inspired Duty Cycling** - Rest when idle
Five metrics that define system health:
| Metric | What It Measures | Target |
|---|---|---|
| Silence Ratio | How often the system stays calm | >70% |
| TTD P50/P95 | Time-to-decision latency (median / 95th percentile) | <1ms / <10ms |
| Energy per Spike | Efficiency per meaningful change | Minimize |
| Write Amplification | Memory writes per event | <3× |
| Calmness Index | Post-learning stability | >0.8 |
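The silence ratio is the simplest of these to instrument. A minimal sketch, with a hypothetical `SilenceMeter` type (the metric definition, not the crate's API):

```rust
/// Hypothetical silence-ratio tracker: the fraction of ticks in which the
/// system emitted no events. A target of >70% means the system is calm
/// most of the time.
struct SilenceMeter {
    ticks: u64,
    silent: u64,
}

impl SilenceMeter {
    fn new() -> Self {
        Self { ticks: 0, silent: 0 }
    }

    fn record(&mut self, events_this_tick: usize) {
        self.ticks += 1;
        if events_this_tick == 0 {
            self.silent += 1;
        }
    }

    fn silence_ratio(&self) -> f64 {
        if self.ticks == 0 {
            0.0
        } else {
            self.silent as f64 / self.ticks as f64
        }
    }
}

fn main() {
    let mut meter = SilenceMeter::new();
    for events in [0, 0, 0, 5] {
        meter.record(events);
    }
    println!("silence ratio: {}", meter.silence_ratio());
}
```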
All examples are in the unified examples/tiers/ folder:
```bash
cargo run --example t1_anomaly_detection   # Infrastructure/Finance
cargo run --example t1_edge_autonomy       # Drones/Robotics
cargo run --example t1_medical_wearable    # Health Monitoring
cargo run --example t2_self_optimizing     # Software Monitoring
cargo run --example t2_swarm_intelligence  # IoT Fleets
cargo run --example t2_adaptive_simulation # Digital Twins
cargo run --example t3_self_awareness      # Machine Introspection
cargo run --example t3_synthetic_nervous   # Building Nervous Systems
cargo run --example t3_bio_machine         # Brain-Machine Interfaces
cargo run --example t4_neuromorphic_rag    # Coherence-gated LLM memory
cargo run --example t4_agentic_self_model  # Agent that models own cognition
cargo run --example t4_collective_dreaming # Swarm memory consolidation
cargo run --example t4_compositional_hdc   # Zero-shot HDC reasoning
```
Add to your Cargo.toml:
```toml
[dependencies]
ruvector-nervous-system = "0.1"
```
```rust
use ruvector_nervous_system::plasticity::btsp::BTSPLayer;

// Create layer with 2-second learning window
let mut layer = BTSPLayer::new(100, 2000.0);

// Learn from single example
let pattern = vec![0.1; 100];
layer.one_shot_associate(&pattern, 1.0);

// Immediate recall - no training loop!
let output = layer.forward(&pattern);
```
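Under the hood, one-shot association amounts to a single weight write rather than a gradient loop. A toy sketch of that idea (the `OneShot` type is a hypothetical simplification, not the crate's BTSP internals):

```rust
/// Hypothetical one-shot associator: a single write sets the weights so
/// that presenting the same pattern immediately recalls the stored target,
/// with no training loop.
struct OneShot {
    weights: Vec<f64>,
}

impl OneShot {
    fn new(size: usize) -> Self {
        Self { weights: vec![0.0; size] }
    }

    /// One Hebbian-style write: w_i = target * x_i / ||x||^2.
    fn associate(&mut self, pattern: &[f64], target: f64) {
        let norm: f64 = pattern.iter().map(|x| x * x).sum();
        if norm > 0.0 {
            for (w, &x) in self.weights.iter_mut().zip(pattern) {
                *w = target * x / norm;
            }
        }
    }

    /// Dot-product recall: the stored pattern reproduces the target.
    fn forward(&self, pattern: &[f64]) -> f64 {
        self.weights.iter().zip(pattern).map(|(w, x)| w * x).sum()
    }
}

fn main() {
    let mut layer = OneShot::new(100);
    let pattern = vec![0.1; 100];
    layer.associate(&pattern, 1.0);
    println!("recall: {}", layer.forward(&pattern));
}
```

The normalization makes recall of the stored pattern return the target exactly; the real BTSP mechanism adds an eligibility window so the write can be gated by behavioral timescales.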
```rust
use ruvector_nervous_system::hdc::{Hypervector, HdcMemory};

// 10,000-bit hypervectors
let apple = Hypervector::random();
let orange = Hypervector::random();

// Bind concepts (<50ns)
let fruit = apple.bind(&orange);

// Similarity check (<100ns)
let sim = apple.similarity(&orange);

// Store and retrieve
let mut memory = HdcMemory::new();
memory.store("apple", apple.clone());
let results = memory.retrieve(&apple, 0.9);
```
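Why this works: in high dimensions, random bit vectors are nearly orthogonal, binding via XOR is self-inverse, and similarity is just bit agreement. A Kanerva-style toy sketch (the `Hv` type and its seeded generator are hypothetical, not the crate's representation):

```rust
/// Hypothetical binary hypervector sketch: binding is elementwise XOR,
/// similarity is the fraction of matching bits. Random 10,000-bit vectors
/// are nearly orthogonal (similarity close to 0.5).
const DIM: usize = 10_000;

#[derive(Clone)]
struct Hv(Vec<bool>);

impl Hv {
    /// Deterministic pseudo-random vector from a seed (LCG, for the sketch).
    fn random(seed: u64) -> Self {
        let mut state = seed.wrapping_mul(6364136223846793005).wrapping_add(1);
        Hv((0..DIM)
            .map(|_| {
                state = state
                    .wrapping_mul(6364136223846793005)
                    .wrapping_add(1442695040888963407);
                ((state >> 63) & 1) == 1
            })
            .collect())
    }

    /// XOR binding: self-inverse, so bind(bind(a, b), b) recovers a.
    fn bind(&self, other: &Hv) -> Hv {
        Hv(self.0.iter().zip(&other.0).map(|(a, b)| a ^ b).collect())
    }

    /// Fraction of matching bits: 1.0 for identical, ~0.5 for unrelated.
    fn similarity(&self, other: &Hv) -> f64 {
        let same = self.0.iter().zip(&other.0).filter(|(a, b)| a == b).count();
        same as f64 / DIM as f64
    }
}

fn main() {
    let apple = Hv::random(1);
    let orange = Hv::random(2);
    println!("self: {}", apple.similarity(&apple));
    println!("cross: {}", apple.similarity(&orange));
    println!("unbind: {}", apple.bind(&orange).bind(&orange).similarity(&apple));
}
```

Because binding and similarity are bitwise loops with no data dependencies, they vectorize well, which is what makes sub-100ns operations plausible on word-packed representations.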
```rust
use ruvector_nervous_system::compete::WTALayer;

// 1000 competing neurons
let mut wta = WTALayer::new(1000, 0.5, 0.8);

// Winner in <1μs
if let Some(winner) = wta.compete(&activations) {
    handle_winner(winner);
}
```
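The competition rule itself is simple to state: keep the k most active units and silence the rest. A minimal sketch (a hypothetical free function, not the crate's `WTALayer`; real implementations use partial selection rather than a full sort to hit microsecond latencies):

```rust
/// Hypothetical k-winner-take-all sketch: rank indices by activation and
/// keep the top k; everything else is silenced.
fn k_wta(activations: &[f64], k: usize) -> Vec<usize> {
    let mut idx: Vec<usize> = (0..activations.len()).collect();
    // Sort descending by activation (NaN-free input assumed).
    idx.sort_by(|&a, &b| activations[b].partial_cmp(&activations[a]).unwrap());
    idx.truncate(k);
    idx
}

fn main() {
    let activations = [0.1, 0.9, 0.4, 0.7];
    // The two strongest units win the competition.
    println!("{:?}", k_wta(&activations, 2));
}
```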
```rust
use ruvector_nervous_system::routing::{OscillatoryRouter, GlobalWorkspace};

// 40Hz gamma oscillators
let mut router = OscillatoryRouter::new(10, 40.0);
router.step(0.001);

// Communication gain from phase alignment
let gain = router.communication_gain(sender, receiver);

// Global workspace (4-7 items max)
let mut workspace = GlobalWorkspace::new(7);
workspace.broadcast(representation);
```
```rust
use ruvector_nervous_system::routing::{
    CircadianController, HysteresisTracker, BudgetGuardrail,
};

// 24-hour cycle controller
let mut clock = CircadianController::new(24.0);
clock.set_coherence(0.8);

// Phase-aware compute decisions
if clock.should_compute() {
    run_inference();
}
if clock.should_learn() {
    update_weights();
}
if clock.should_consolidate() {
    background_cleanup();
}

// Hysteresis: require 5 ticks above threshold
let mut tracker = HysteresisTracker::new(0.7, 5);
if tracker.update(coherence) {
    clock.accelerate(1.5);
}

// Budget: auto-decelerate when overspending
let mut budget = BudgetGuardrail::new(1000.0, 0.5);
budget.record_spend(energy, dt);
let duty = clock.duty_factor() * budget.duty_multiplier();
```
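The hysteresis idea is worth spelling out, because it is what keeps a noisy coherence signal from whipsawing the clock. A toy sketch (the `Hysteresis` type is hypothetical, simpler than the crate's `HysteresisTracker`):

```rust
/// Hypothetical hysteresis sketch: fire only after the signal has stayed
/// above the threshold for `hold` consecutive updates, so a single noisy
/// spike cannot trigger acceleration.
struct Hysteresis {
    threshold: f64,
    hold: u32,
    streak: u32,
}

impl Hysteresis {
    fn new(threshold: f64, hold: u32) -> Self {
        Self { threshold, hold, streak: 0 }
    }

    /// True once the value has been above threshold `hold` times in a row.
    fn update(&mut self, value: f64) -> bool {
        if value > self.threshold {
            self.streak += 1;
        } else {
            self.streak = 0; // any dip resets the streak
        }
        self.streak >= self.hold
    }
}

fn main() {
    let mut h = Hysteresis::new(0.7, 3);
    for v in [0.8, 0.8, 0.8, 0.5, 0.8] {
        println!("{v} -> {}", h.update(v));
    }
}
```

Only the third consecutive above-threshold reading fires, and a single dip forces the streak to start over.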
```mermaid
sequenceDiagram
    participant Sensors
    participant EventBus
    participant Reflex
    participant Memory
    participant Learning
    participant Coherence
    Sensors->>EventBus: Sparse events
    EventBus->>Reflex: K-WTA competition
    alt Familiar Pattern
        Reflex->>Memory: Query HDC/Hopfield
        Memory-->>Reflex: Instant match
        Reflex->>Sensors: Immediate response
    else Novel Pattern
        Reflex->>Learning: BTSP/E-prop update
        Learning->>Memory: Store new pattern
        Learning->>Coherence: Request attention
        Coherence->>Sensors: Coordinated response
    end
    Note over Coherence: Circadian controller gates all layers
```
| Component | Target | Achieved |
|---|---|---|
| HDC Binding | <50ns | 64ns |
| HDC Similarity | <100ns | ~80ns |
| WTA Single Winner | <1μs | <1μs |
| K-WTA (k=50) | <10μs | 2.7μs |
| Hopfield Retrieval | <1ms | <1ms |
| Pattern Separation | <500μs | <500μs |
| E-prop Synapse Memory | 8-12 bytes | 12 bytes |
| Event Bus | 10K events/ms | 10K+ events/ms |
| Circadian Savings | 5-50× | Phase-dependent |
| Component | Research Basis |
|---|---|
| HDC | Kanerva 1988, Plate 2003 |
| Modern Hopfield | Ramsauer et al. 2020 |
| Pattern Separation | Rolls 2013, Dentate Gyrus |
| Dendritic Processing | Stuart & Spruston 2015 |
| BTSP | Bittner et al. 2017 |
| E-prop | Bellec et al. 2020 |
| EWC | Kirkpatrick et al. 2017 |
| Oscillatory Routing | Fries 2015 |
| Global Workspace | Baars 1988, Dehaene 2014 |
| Circadian Rhythms | Moore 2007, SCN research |
This isn't about making AI faster or smarter in the traditional sense. It's about building systems that stay quiet by default, know when they're struggling, and adapt through use.
You're not shipping faster inference. You're shipping a system that stays quiet, waits, and then reacts with intent.
MIT License - See LICENSE
Contributions welcome! See the repository for guidelines on what each module should include.