| | |
|---|---|
| Crates.io | nt-neural |
| lib.rs | nt-neural |
| version | 1.0.0 |
| created_at | 2025-11-13 19:05:15.682824+00 |
| updated_at | 2025-11-13 19:05:15.682824+00 |
| description | Neural network integration for Neural Trader - LSTM, transformers, and deep learning models with Candle |
| homepage | |
| repository | https://github.com/ruvnet/neural-trader |
| max_upload_size | |
| id | 1931739 |
| size | 761,561 |
High-performance neural network models for financial time series forecasting with optional GPU acceleration.
```toml
[dependencies]
nt-neural = "0.1.0"

# With GPU support (requires candle)
nt-neural = { version = "0.1.0", features = ["candle", "cuda"] }
```
```rust
use nt_neural::{
    utils::preprocessing::normalize,
    utils::features::create_lags,
    utils::metrics::EvaluationMetrics,
};

// Preprocess data
let (normalized, params) = normalize(&prices)?;
let features = create_lags(&normalized, &[1, 3, 7, 14])?;
```
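To make the lag-feature step concrete, here is a minimal standalone sketch of what lag creation does conceptually. This is illustrative plain Rust, not the crate's actual implementation: for each lag `k`, the feature at time `t` is the series value at `t - k`.

```rust
// Illustrative sketch of lag-feature creation (not the nt-neural API):
// for each lag k, feature[t] = series[t - k].
fn create_lag_features(series: &[f64], lags: &[usize]) -> Vec<Vec<f64>> {
    let max_lag = *lags.iter().max().unwrap_or(&0);
    // Rows start at the first index where every requested lag is available.
    (max_lag..series.len())
        .map(|t| lags.iter().map(|&k| series[t - k]).collect())
        .collect()
}

fn main() {
    let series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    let rows = create_lag_features(&series, &[1, 3]);
    // First usable index is t = 3: lag-1 is series[2], lag-3 is series[0]
    println!("{:?}", rows[0]); // [3.0, 1.0]
}
```

Earlier timestamps than the largest lag are dropped, which is why the usable sample count shrinks as longer lags (e.g. 14) are added.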
With the `candle` feature enabled:

```rust
#[cfg(feature = "candle")]
{
    use nt_neural::{NHITSModel, ModelConfig, Trainer};

    let config = ModelConfig {
        input_size: 168, // 1 week hourly
        horizon: 24,     // 24h forecast
        hidden_size: 512,
        ..Default::default()
    };
    let model = NHITSModel::new(config)?;
    let trainer = Trainer::default(); // trainer construction shown with defaults; exact configuration omitted
    let trained = trainer.train(&model, &data).await?;
}
```
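Once a trained model produces forecasts, accuracy is typically summarized with metrics like those in `utils::metrics`. The following is a self-contained sketch of two common ones (MAE and RMSE); the function names here are illustrative, not the crate's `EvaluationMetrics` API.

```rust
// Illustrative forecast-accuracy metrics (not the nt-neural API).

// Mean absolute error: average of |actual - predicted|.
fn mae(actual: &[f64], predicted: &[f64]) -> f64 {
    actual.iter().zip(predicted)
        .map(|(a, p)| (a - p).abs())
        .sum::<f64>() / actual.len() as f64
}

// Root mean squared error: penalizes large misses more than MAE.
fn rmse(actual: &[f64], predicted: &[f64]) -> f64 {
    let mse = actual.iter().zip(predicted)
        .map(|(a, p)| (a - p).powi(2))
        .sum::<f64>() / actual.len() as f64;
    mse.sqrt()
}

fn main() {
    let actual = [100.0, 102.0, 101.0];
    let predicted = [99.0, 103.0, 101.0];
    println!("MAE  = {:.3}", mae(&actual, &predicted));  // MAE  = 0.667
    println!("RMSE = {:.3}", rmse(&actual, &predicted)); // RMSE = 0.816
}
```

RMSE ≥ MAE always holds; a large gap between the two indicates a few large forecast errors rather than uniformly small ones.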
| Model | Type | Best For | GPU Required |
|---|---|---|---|
| NHITS | Hierarchical MLP | Multi-horizon forecasting | Yes |
| LSTM-Attention | RNN + Attention | Sequential patterns | Yes |
| Transformer | Attention-based | Long-range dependencies | Yes |
| GRU | RNN | Simpler sequences | No |
| TCN | Convolutional | Local patterns | No |
| DeepAR | Probabilistic | Uncertainty quantification | Yes |
| N-BEATS | Pure MLP | Interpretable decomposition | No |
| Prophet | Decomposition | Trend + seasonality | No |
The default build has fast compilation and minimal dependencies; all preprocessing and metrics utilities work without GPU features:

```shell
cargo build --package nt-neural
```
Full neural model training and inference:

```shell
# CUDA (NVIDIA GPUs)
cargo build --package nt-neural --features "candle,cuda"

# Metal (Apple Silicon)
cargo build --package nt-neural --features "candle,metal"

# Accelerate (Apple CPU optimization)
cargo build --package nt-neural --features "candle,accelerate"
```
Store and retrieve models with vector similarity search:

```rust
use nt_neural::storage::{AgentDbStorage, ModelMetadata};

// Initialize storage
let storage = AgentDbStorage::new("./data/models/agentdb.db").await?;

// Save model
let model_id = storage.save_model(
    &model_bytes,
    ModelMetadata {
        name: "btc-predictor".to_string(),
        model_type: "NHITS".to_string(),
        version: "1.0.0".to_string(),
        tags: vec!["crypto".to_string(), "bitcoin".to_string()],
        ..Default::default()
    }
).await?;

// Load model
let model_bytes = storage.load_model(&model_id).await?;

// Search similar models
let similar = storage.search_similar_models(&embedding, 5).await?;
```
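For intuition, a vector similarity search like `search_similar_models` typically ranks stored embeddings by cosine similarity against the query. The sketch below shows that ranking idea in plain Rust; it is illustrative only and not AgentDB's internal implementation, and the model names are made up.

```rust
// Illustrative cosine-similarity ranking over model embeddings
// (not AgentDB internals).
fn cosine(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (na * nb)
}

// Return the names of the k stored models most similar to the query embedding.
fn top_k<'a>(query: &[f64], models: &'a [(&'a str, Vec<f64>)], k: usize) -> Vec<&'a str> {
    let mut scored: Vec<_> = models.iter()
        .map(|(name, emb)| (*name, cosine(query, emb)))
        .collect();
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(k).map(|(name, _)| name).collect()
}

fn main() {
    let models = vec![
        ("btc-predictor", vec![1.0, 0.0]),
        ("eth-predictor", vec![0.9, 0.1]),
        ("bond-model", vec![0.0, 1.0]),
    ];
    // A query close to the "crypto" direction ranks the crypto models first.
    println!("{:?}", top_k(&[1.0, 0.05], &models, 2));
}
```

Real vector stores avoid this brute-force scan with approximate nearest-neighbor indexes, but the ranking criterion is the same.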
Run the examples to see AgentDB integration in action:

```shell
# Basic storage operations
cargo run --example agentdb_basic

# Vector similarity search
cargo run --example agentdb_similarity_search

# Checkpoint management
cargo run --example agentdb_checkpoints

# Unit tests
cargo test --package nt-neural

# Integration tests (requires npx agentdb)
cargo test --package nt-neural --test storage_integration_test -- --ignored
```
```toml
[dependencies]
nt-neural = { version = "0.1.0", features = ["candle", "cuda"] }
```
Available features:

- `candle`: Neural network framework (default)
- `cuda`: NVIDIA GPU acceleration
- `metal`: Apple Metal GPU acceleration
- `accelerate`: Apple Accelerate CPU optimization

The module integrates with AgentDB for model storage, retrieval, and similarity search.
See AGENTDB_INTEGRATION.md for detailed documentation.
```text
nt-neural/
├── src/
│   ├── models/        # Neural architectures
│   ├── training/      # Training infrastructure
│   ├── inference/     # Prediction engine
│   ├── storage/       # AgentDB integration
│   │   ├── mod.rs
│   │   ├── types.rs   # Storage types
│   │   └── agentdb.rs # AgentDB backend
│   └── utils/         # Utilities
├── examples/          # Usage examples
└── tests/             # Integration tests
```
Key dependencies for AgentDB:

- `tokio`: Async runtime
- `serde`: Serialization
- `uuid`: Model IDs
- `chrono`: Timestamps
- `tempfile`: Temporary storage
- `fasthash`: Fast hashing

Optimized for production CPU-only deployment:
Key Optimizations:
CPU vs Python Baseline:
Guides:
When GPU features are available:
MIT License - See LICENSE