| Crates.io | ruv-swarm-ml-training |
| lib.rs | ruv-swarm-ml-training |
| version | 1.0.5 |
| created_at | 2025-06-30 16:48:49.834269+00 |
| updated_at | 2025-07-02 20:45:12.601166+00 |
| description | Advanced ML training pipeline for neuro-divergent models in RUV Swarm |
| homepage | |
| repository | https://github.com/ruvnet/ruv-FANN |
| max_upload_size | |
| id | 1732098 |
| size | 184,605 |
Advanced machine learning training pipeline for neuro-divergent models in the RUV Swarm ecosystem. This crate provides comprehensive tools for training LSTM, TCN, and N-BEATS models to predict agent performance and optimize prompts.
Add this to your Cargo.toml:
[dependencies]
ruv-swarm-ml-training = "1.0.5"
use ruv_swarm_ml_training::{TrainingConfig, TrainingPipeline};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure training
    let config = TrainingConfig::default();

    // Create pipeline
    let mut pipeline = TrainingPipeline::new(config);

    // Run training with your event stream (a source of StreamEvent
    // values; see the data format section below)
    let result = pipeline.run(event_stream).await?;
    println!("Best model: {}", result.best_model);

    Ok(())
}
Loads and processes streaming event data:
let loader = StreamDataLoader::new(buffer_size, sequence_length);
let dataset = loader.load_from_stream(events).await?;
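To make the `buffer_size` and `sequence_length` parameters concrete, here is a minimal standalone sketch of the windowing idea behind the loader: a buffered series of per-agent metrics is cut into fixed-length input sequences, each paired with the next value as the prediction target. The `make_sequences` function is illustrative only, not the crate's internal API.

```rust
// Illustrative sketch: turn a metric series into (input sequence, target)
// training samples, the way a sequence loader typically windows data.
fn make_sequences(series: &[f64], sequence_length: usize) -> Vec<(Vec<f64>, f64)> {
    series
        .windows(sequence_length + 1)
        .map(|w| (w[..sequence_length].to_vec(), w[sequence_length]))
        .collect()
}

fn main() {
    let latencies = vec![10.0, 12.0, 11.0, 13.0, 14.0];
    let samples = make_sequences(&latencies, 3);
    // Two samples: ([10, 12, 11] -> 13) and ([12, 11, 13] -> 14)
    assert_eq!(samples.len(), 2);
    println!("{samples:?}");
}
```

Each window of `sequence_length` observations becomes one model input; the observation immediately after it becomes the label.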
LSTM Model:
let lstm = LSTMModel::new(128, 2); // hidden_size, num_layers
TCN Model:
let tcn = TCNModel::new(vec![64, 64, 64], 3); // num_channels, kernel_size
N-BEATS Model:
let nbeats = NBEATSModel::new(
    vec![StackType::Trend, StackType::Seasonality], // stack_types
    4,                                              // num_blocks
);
let search_space = SearchSpace {
    parameters: HashMap::from([
        ("learning_rate", ParameterRange::Continuous { min: 0.0001, max: 0.01 }),
        ("hidden_size", ParameterRange::Discrete { values: vec![64.0, 128.0, 256.0] }),
    ]),
};

let optimizer = HyperparameterOptimizer::new(
    search_space,
    OptimizationMethod::BayesianOptimization,
    20, // num_trials
);
let result = optimizer.optimize(model_factory, dataset, config).await?;
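To illustrate what a search over this space does, the standalone sketch below runs a deterministic sweep over the same two parameters with a toy loss function. The `ParameterRange` enum mirrors the shape used above; the `sample` helper, the sweep loop, and the loss are simplified stand-ins (not the crate's Bayesian optimizer) so the mechanics can be seen end to end.

```rust
use std::collections::HashMap;

// Mirrors the search-space shape used above; local to this sketch.
#[derive(Clone)]
enum ParameterRange {
    Continuous { min: f64, max: f64 },
    Discrete { values: Vec<f64> },
}

// Map a position t in [0, 1] to a concrete parameter value.
fn sample(range: &ParameterRange, t: f64) -> f64 {
    match range {
        ParameterRange::Continuous { min, max } => min + t * (max - min),
        ParameterRange::Discrete { values } => {
            let idx = ((t * values.len() as f64) as usize).min(values.len() - 1);
            values[idx]
        }
    }
}

fn main() {
    let search_space: HashMap<&str, ParameterRange> = HashMap::from([
        ("learning_rate", ParameterRange::Continuous { min: 0.0001, max: 0.01 }),
        ("hidden_size", ParameterRange::Discrete { values: vec![64.0, 128.0, 256.0] }),
    ]);

    // Deterministic sweep in place of a real sampler; a toy loss stands
    // in for actually training a model per trial.
    let mut best = (f64::INFINITY, HashMap::new());
    for trial in 0..20 {
        let t = trial as f64 / 19.0;
        let params: HashMap<&str, f64> = search_space
            .iter()
            .map(|(k, r)| (*k, sample(r, t)))
            .collect();
        let loss = (params["learning_rate"] - 0.003).abs() + params["hidden_size"] / 10_000.0;
        if loss < best.0 {
            best = (loss, params);
        }
    }
    println!("best loss {:.5} with {:?}", best.0, best.1);
}
```

A real Bayesian optimizer replaces the deterministic sweep with a model of the loss surface that proposes promising trials, but the contract is the same: sample parameters from the space, evaluate, keep the best.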
The pipeline expects events in this format:
struct StreamEvent {
    timestamp: u64,
    agent_id: String,
    event_type: EventType,
    performance_metrics: PerformanceMetrics,
    prompt_data: Option<PromptData>,
}

struct PerformanceMetrics {
    latency_ms: f64,
    tokens_per_second: f64,
    memory_usage_mb: f64,
    cpu_usage_percent: f64,
    success_rate: f64,
}

struct PromptData {
    prompt_text: String,
    prompt_tokens: usize,
    response_tokens: usize,
    quality_score: f64,
}
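Constructing one event looks like this. The types are redeclared locally so the snippet compiles standalone (in real use they come from `ruv_swarm_ml_training`), and the `EventType::TaskCompleted` variant is a hypothetical placeholder for whatever variants the crate defines.

```rust
// Local stand-ins for the crate's types, so this example is self-contained.
enum EventType { TaskCompleted }

struct PerformanceMetrics {
    latency_ms: f64,
    tokens_per_second: f64,
    memory_usage_mb: f64,
    cpu_usage_percent: f64,
    success_rate: f64,
}

struct StreamEvent {
    timestamp: u64,
    agent_id: String,
    event_type: EventType,
    performance_metrics: PerformanceMetrics,
}

fn main() {
    // One observation of an agent's performance, ready to feed the pipeline.
    let event = StreamEvent {
        timestamp: 1_720_000_000,
        agent_id: "agent-7".to_string(),
        event_type: EventType::TaskCompleted,
        performance_metrics: PerformanceMetrics {
            latency_ms: 42.5,
            tokens_per_second: 310.0,
            memory_usage_mb: 512.0,
            cpu_usage_percent: 35.0,
            success_rate: 0.98,
        },
    };
    println!(
        "event from {} with latency {} ms",
        event.agent_id, event.performance_metrics.latency_ms
    );
}
```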
The pipeline automatically extracts features from these events.
See the examples/ directory for:
basic_training.rs - Simple training pipeline usage
hyperparameter_search.rs - Advanced hyperparameter optimization
model_comparison.rs - Comparing different model architectures

Licensed under either of the Apache License, Version 2.0 or the MIT license, at your option.
Contributions are welcome! Please see the contributing guidelines for details.