| Crates.io | omega-attention |
| lib.rs | omega-attention |
| version | 1.1.0 |
| created_at | 2025-12-10 16:57:20.534858+00 |
| updated_at | 2025-12-12 07:08:53.404937+00 |
| description | Attention mechanisms for ExoGenesis Omega - 39 attention types for brain-like selective processing |
| homepage | |
| repository | https://github.com/prancer-io/ExoGenesis-Omega |
| max_upload_size | |
| id | 1978424 |
| size | 105,981 |
Brain-like selective attention system with 40 attention mechanisms, working memory gating, and top-down/bottom-up processing.
Part of the ExoGenesis-Omega cognitive architecture.
omega-attention implements a biologically-inspired attention system, drawing on neuroscience research into selective attention as well as on transformer architectures. It provides 40 attention mechanisms, ranging from standard scaled dot-product attention to advanced hyperbolic, graph, and memory-augmented variants.
The system combines:
- Top-down, goal-driven attention (task relevance, expected value, memory match)
- Bottom-up, salience-driven attention (novelty, contrast, motion)
- 40 selectable attention mechanisms
- Gated working memory for the attended results
Add this to your Cargo.toml:
[dependencies]
omega-attention = "1.0.0"
use omega_attention::{AttentionSystem, AttentionConfig, AttentionType};
fn main() {
// Create attention system with default configuration
let config = AttentionConfig::default();
let mut system = AttentionSystem::new(config);
// Input to attend to
let input = vec![0.5; 64];
let goals = vec![0.8; 64]; // Current goals/task
let context = vec![0.3; 64];
// Process through attention system
let output = system.attend(&input, &goals, &context).unwrap();
println!("Attention strength: {:.3}", output.max_attention);
println!("Attended values: {:?}", &output.attended_values[..5]);
// Check working memory state
let state = system.state();
println!("Working memory: {}/{} items", state.wm_items, state.wm_capacity);
}
┌─────────────────────────────────────────────────────────────┐
│ ATTENTION SYSTEM │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌────────────────────┐ ┌────────────────────┐ │
│ │ TOP-DOWN │ │ BOTTOM-UP │ │
│ │ (Goal-driven) │ │ (Salience) │ │
│ │ │ │ │ │
│ │ • Task relevance │ │ • Novelty │ │
│ │ • Expected value │ │ • Contrast │ │
│ │ • Memory match │ │ • Motion │ │
│ └────────┬───────────┘ └────────┬───────────┘ │
│ │ │ │
│ └───────────┬─────────────┘ │
│ ▼ │
│ ┌───────────────────────┐ │
│ │ ATTENTION CONTROL │ │
│ │ (Priority Map) │ │
│ └───────────┬───────────┘ │
│ ▼ │
│ ┌───────────────────────┐ │
│ │ ATTENTION MECHANISMS│ │
│ │ (40 types) │ │
│ └───────────┬───────────┘ │
│ ▼ │
│ ┌───────────────────────┐ │
│ │ WORKING MEMORY │ │
│ │ (Gated Access) │ │
│ └───────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
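The priority map in the diagram blends the two pathways using the `top_down_weight` and `bottom_up_weight` settings from `AttentionConfig`. A minimal sketch of that blend, assuming a simple convex combination (the crate's actual combination rule may differ):

```rust
// Hypothetical illustration of the priority map: a weighted blend of
// goal-driven (top-down) and salience-driven (bottom-up) scores.
fn priority_map(top_down: &[f32], bottom_up: &[f32], w_td: f32, w_bu: f32) -> Vec<f32> {
    top_down
        .iter()
        .zip(bottom_up.iter())
        .map(|(&td, &bu)| w_td * td + w_bu * bu)
        .collect()
}

fn main() {
    // Weights used in the configuration example below: 0.6 top-down, 0.4 bottom-up.
    let td = vec![0.8, 0.1, 0.5]; // example goal-driven scores
    let bu = vec![0.2, 0.9, 0.4]; // example salience scores
    let priority = priority_map(&td, &bu, 0.6, 0.4);
    println!("{:?}", priority); // approximately [0.56, 0.42, 0.46]
}
```

Some of the individual mechanisms: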
| Type | Description | Use Case |
|---|---|---|
| ScaledDotProduct | Standard transformer attention | General purpose |
| FlashAttention | Memory-efficient O(N) attention | Long sequences |
| LinearAttention | Kernel-based linear complexity | Very long sequences |
| MultiHeadAttention | Parallel attention heads | Rich representations |
| Type | Description | Use Case |
|---|---|---|
| SparseAttention | Top-k sparsity patterns | Efficiency |
| HyperbolicAttention | Hyperbolic space embeddings | Hierarchical data |
| GraphAttention | Graph neural network attention | Relational data |
| MemoryAugmented | External memory access | Long-term context |
| CrossAttention | Query/key from different sources | Multi-modal fusion |
| Type | Description | Use Case |
|---|---|---|
| SalienceAttention | Bottom-up salience maps | Novelty detection |
| InhibitionOfReturn | Temporal attention suppression | Visual search |
| FeatureIntegration | Binding features to locations | Object recognition |
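A mechanism is chosen through the `mechanism` field of `AttentionConfig` (shown in the configuration section below). A minimal sketch, assuming the `AttentionType` variants listed in the tables above:

```rust
use omega_attention::{AttentionConfig, AttentionSystem, AttentionType};

// Long sequences: memory-efficient attention.
let flash_config = AttentionConfig {
    mechanism: AttentionType::FlashAttention,
    ..Default::default()
};
let mut long_sequence_system = AttentionSystem::new(flash_config);

// Relational data: graph-based attention.
let graph_config = AttentionConfig {
    mechanism: AttentionType::GraphAttention,
    ..Default::default()
};
let mut graph_system = AttentionSystem::new(graph_config);
```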
The working memory system implements Miller's "magical number 7±2" with biological gating:
use omega_attention::{WorkingMemory, WorkingMemoryItem, WMGate};
// Create working memory with capacity 7
let mut wm = WorkingMemory::new(7);
// Configure input gate (controls what enters WM)
wm.input_gate.threshold = 0.5; // Minimum importance to enter
wm.input_gate.openness = 0.8; // Gate openness
// Store high-importance item
let item = WorkingMemoryItem::new(vec![1.0, 2.0, 3.0], 0.9);
assert!(wm.try_store(item)); // Passes gate
// Items decay over time
wm.decay(0.1); // Reduce all activations
// Rehearse to maintain items
wm.rehearse("item_id"); // Boost activation
// Find similar items
let query = vec![1.1, 2.1, 3.1];
let similar = wm.find_similar(&query, 3);
Bottom-up attention is driven by stimulus salience:
use omega_attention::{SalienceComputer, SalienceFeature};
let mut computer = SalienceComputer::new();
// Process input to extract salience features
let input = vec![0.5; 64];
let salience_map = computer.compute(&input);
// Individual feature contributions
let features = computer.extract_features(&input);
for feature in features {
match feature {
SalienceFeature::Novelty(n) => println!("Novelty: {:.3}", n),
SalienceFeature::Contrast(c) => println!("Contrast: {:.3}", c),
SalienceFeature::Motion(m) => println!("Motion: {:.3}", m),
SalienceFeature::Change(ch) => println!("Change: {:.3}", ch),
}
}
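Attention behavior is configured through `AttentionConfig`: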
use omega_attention::{AttentionConfig, AttentionType};
let config = AttentionConfig {
dim: 64, // Attention dimension
num_heads: 8, // Number of attention heads
head_dim: 8, // Dimension per head
dropout: 0.1, // Dropout rate
top_down_weight: 0.6, // Weight for goal-driven attention
bottom_up_weight: 0.4, // Weight for salience-driven attention
wm_capacity: 7, // Working memory capacity
mechanism: AttentionType::MultiHeadAttention,
..Default::default()
};
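Typical usage patterns: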
// Focus attention on task-relevant features
let goals = encode_task("summarize the document");
let document = encode_document(text);
let attended = system.attend(&document, &goals, &context)?;
// attended.attended_values contains task-relevant information
// Automatically detect novel/unexpected inputs
let salience = salience_computer.compute(&new_input);
if salience.max() > 0.8 {
println!("Novel stimulus detected!");
system.focus(&new_input); // Shift attention
}
// Important items enter working memory
let output = system.attend(&input, &goals, &context)?;
if output.max_attention > 0.7 {
// High attention = important = store in WM
let wm = system.working_memory();
println!("Working memory now has {} items", wm.len());
}
omega-attention is a core component of the Omega cognitive architecture:
omega-brain (Unified Cognitive System)
├── omega-attention (Selective Processing)
├── omega-consciousness (Awareness)
├── omega-hippocampus (Memory)
└── omega-snn (Neural Substrate)
Licensed under the MIT License. See LICENSE for details.