| Field | Value |
|---|---|
| Crates.io | hill_descent_lib |
| lib.rs | hill_descent_lib |
| version | 0.2.0 |
| created_at | 2025-10-28 05:57:50 UTC |
| updated_at | 2025-11-21 06:24:56 UTC |
| description | Genetic algorithm library for n-dimensional optimization problems |
| repository | https://github.com/cainem/hill_descent |
| size | 728,460 |
A Rust genetic algorithm library for n-dimensional optimization problems.
```rust
use hill_descent_lib::{GlobalConstants, SingleValuedFunction, setup_world, TrainingData};
use std::ops::RangeInclusive;

// Define your fitness function (lower scores are better)
#[derive(Debug)]
struct Quadratic;

impl SingleValuedFunction for Quadratic {
    fn single_run(&self, params: &[f64]) -> f64 {
        // Minimize: x² + y²
        // Optimal solution: (0, 0) with score 0
        params[0].powi(2) + params[1].powi(2)
    }
}

fn main() {
    // Define parameter bounds
    let bounds: Vec<RangeInclusive<f64>> = vec![-10.0..=10.0, -10.0..=10.0];

    // Configure the algorithm (population size, number of regions)
    let constants = GlobalConstants::new(100, 10);

    // Create the optimization world
    let mut world = setup_world(&bounds, constants, Box::new(Quadratic));

    // Run optimization for 100 generations
    for _ in 0..100 {
        world.training_run(TrainingData::None { floor_value: 0.0 });
    }

    println!("Best score: {}", world.get_best_score());
}
```
Add this to your Cargo.toml:
```toml
[dependencies]
hill_descent_lib = "0.2.0"
```
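Or from the command line:

```sh
cargo add hill_descent_lib
```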
Optimizing Himmelblau's function, a classic multimodal benchmark:

```rust
use hill_descent_lib::{GlobalConstants, SingleValuedFunction, setup_world, TrainingData};
use std::ops::RangeInclusive;

#[derive(Debug)]
struct Himmelblau;

impl SingleValuedFunction for Himmelblau {
    fn single_run(&self, params: &[f64]) -> f64 {
        let x = params[0];
        let y = params[1];
        // Himmelblau's function has four identical global minima, all with value 0
        (x.powi(2) + y - 11.0).powi(2) + (x + y.powi(2) - 7.0).powi(2)
    }
}

fn main() {
    let bounds: Vec<RangeInclusive<f64>> = vec![-5.0..=5.0, -5.0..=5.0];
    let constants = GlobalConstants::new(500, 10);
    let mut world = setup_world(&bounds, constants, Box::new(Himmelblau));

    for generation in 0..1000 {
        world.training_run(TrainingData::None { floor_value: 0.0 });
        if generation % 100 == 0 {
            println!("Generation {}: Best score = {}", generation, world.get_best_score());
        }
    }
}
```
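The four minima sit near (3.0, 2.0), (-2.805, 3.131), (-3.779, -3.283), and (3.584, -1.848); the first is exact, which makes a quick sanity check of the fitness function easy:

```rust
// Himmelblau's function is exactly 0 at the known minimum (3, 2).
assert!(Himmelblau.single_run(&[3.0, 2.0]).abs() < 1e-9);
```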
The same pattern scales to higher dimensions; here, the sphere function in 20 dimensions:

```rust
use hill_descent_lib::{GlobalConstants, SingleValuedFunction, setup_world, TrainingData};
use std::ops::RangeInclusive;

#[derive(Debug)]
struct NDimSphere;

impl SingleValuedFunction for NDimSphere {
    fn single_run(&self, params: &[f64]) -> f64 {
        // Sphere function: sum of squares
        params.iter().map(|&x| x * x).sum()
    }
}

fn main() {
    // Optimize in 20 dimensions
    let bounds: Vec<RangeInclusive<f64>> = vec![-10.0..=10.0; 20];
    let constants = GlobalConstants::new(1000, 50);
    let mut world = setup_world(&bounds, constants, Box::new(NDimSphere));

    for _ in 0..500 {
        world.training_run(TrainingData::None { floor_value: 0.0 });
    }

    println!("Best score: {}", world.get_best_score());
}
```
By default, the algorithm assumes the optimal fitness is 0. For functions whose minimum lies elsewhere, override `function_floor`:
```rust
use hill_descent_lib::{GlobalConstants, SingleValuedFunction, setup_world, TrainingData};

#[derive(Debug)]
struct CustomFloor;

impl SingleValuedFunction for CustomFloor {
    fn single_run(&self, params: &[f64]) -> f64 {
        // A function whose minimum value is -100
        -100.0 + params[0].powi(2)
    }

    fn function_floor(&self) -> f64 {
        -100.0 // Tell the algorithm the expected minimum
    }
}
```
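A minimal driver for this function; note the assumption here (mine, not confirmed by the docs) that the `floor_value` passed to `training_run` should mirror `function_floor`:

```rust
fn main() {
    let bounds = vec![-10.0..=10.0];
    let constants = GlobalConstants::new(100, 10);
    let mut world = setup_world(&bounds, constants, Box::new(CustomFloor));

    for _ in 0..100 {
        // Assumption: use the same floor that function_floor() reports.
        world.training_run(TrainingData::None { floor_value: -100.0 });
    }

    println!("Best score: {}", world.get_best_score()); // approaches -100
}
```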
For large-scale parameter optimization (e.g., neural network weights), hill_descent_lib excels at finding good initializations or tuning thousands of parameters:
```rust
use hill_descent_lib::{GlobalConstants, SingleValuedFunction, setup_world, format_score, TrainingData};
use std::ops::RangeInclusive;
// Simulate a neural network with training data managed internally
#[derive(Debug)]
struct NeuralNetOptimizer {
// Your training data stored here
train_inputs: Vec<Vec<f64>>,
train_labels: Vec<f64>,
// Network structure: e.g., [input_size, hidden1, hidden2, output_size]
layer_sizes: Vec<usize>,
}
impl NeuralNetOptimizer {
fn new(train_inputs: Vec<Vec<f64>>, train_labels: Vec<f64>) -> Self {
Self {
train_inputs,
train_labels,
layer_sizes: vec![10, 50, 50, 1], // 10 inputs, 2x50 hidden, 1 output
}
}
fn param_count(&self) -> usize {
// Calculate total weights + biases
// weights: (10*50) + (50*50) + (50*1) = 500 + 2500 + 50 = 3050
// biases: 50 + 50 + 1 = 101
// Total: 3151 parameters
self.layer_sizes.windows(2)
.map(|w| w[0] * w[1] + w[1]) // weights + biases per layer
.sum()
}
fn forward(&self, inputs: &[f64], weights: &[f64]) -> f64 {
// Simplified forward pass (implement your actual network logic)
// This is just an example - use your real neural network implementation
let mut activation = inputs.to_vec();
let mut weight_idx = 0;
for layer_idx in 0..self.layer_sizes.len() - 1 {
let in_size = self.layer_sizes[layer_idx];
let out_size = self.layer_sizes[layer_idx + 1];
let mut next_activation = vec![0.0; out_size];
// Apply weights
for o in 0..out_size {
let mut sum = 0.0;
for i in 0..in_size {
sum += activation[i] * weights[weight_idx];
weight_idx += 1;
}
// Add bias
sum += weights[weight_idx];
weight_idx += 1;
// ReLU activation (except last layer)
next_activation[o] = if layer_idx < self.layer_sizes.len() - 2 {
sum.max(0.0)
} else {
sum // Linear output for regression
};
}
activation = next_activation;
}
activation[0] // Return single output
}
fn mse_loss(&self, weights: &[f64]) -> f64 {
// Calculate mean squared error over all training examples
let mut total_error = 0.0;
for (inputs, &label) in self.train_inputs.iter().zip(&self.train_labels) {
let prediction = self.forward(inputs, weights);
let error = prediction - label;
total_error += error * error;
}
total_error / self.train_inputs.len() as f64
}
}
impl SingleValuedFunction for NeuralNetOptimizer {
fn single_run(&self, params: &[f64]) -> f64 {
// Params are the neural network weights
self.mse_loss(params)
}
fn function_floor(&self) -> f64 {
0.0 // Minimum possible MSE
}
}
fn main() {
// Generate synthetic training data (replace with your real data)
let train_inputs: Vec<Vec<f64>> = (0..100)
.map(|i| vec![i as f64 / 100.0; 10]) // 100 samples, 10 features each
.collect();
let train_labels: Vec<f64> = (0..100)
.map(|i| (i as f64 / 100.0).sin()) // Target: sin(x)
.collect();
let optimizer = NeuralNetOptimizer::new(train_inputs, train_labels);
let param_count = optimizer.param_count();
println!("Optimizing neural network with {} parameters", param_count);
// Define parameter bounds (typical weight initialization range)
let bounds: Vec<RangeInclusive<f64>> = vec![-1.0..=1.0; param_count];
// Configuration for large parameter spaces:
// - Population size: 10-20x number of params for good coverage
// - Regions: sqrt(population) is often a good heuristic
let population_size = (param_count * 15).min(10000); // Scale up but cap at 10k
let num_regions = (population_size as f64).sqrt() as usize;
let constants = GlobalConstants::new(population_size, num_regions);
let mut world = setup_world(&bounds, constants, Box::new(optimizer));
println!("Population size: {}, Regions: {}", population_size, num_regions);
println!("Initial score: {}", format_score(world.get_best_score()));
// Run optimization
let epochs = 200;
for epoch in 1..=epochs {
world.training_run(TrainingData::None { floor_value: 0.0 });
if epoch % 20 == 0 {
println!("Epoch {}: MSE = {}", epoch, format_score(world.get_best_score()));
}
}
println!("\nFinal MSE: {}", format_score(world.get_best_score()));
// Extract best weights
let best = world.get_best_organism(TrainingData::None { floor_value: 0.0 });
let best_weights = best.phenotype().expression_problem_values();
println!("Optimized {} weights ready for use", best_weights.len());
}
```
Key considerations for large-scale optimization:

- **Parameter count:** This library has been tested with 50,000+ parameters. For neural networks, this is suitable for smaller architectures or as an initialization strategy before gradient descent.
- **Population scaling:** Use 10-20x the number of parameters for population size, but cap at a reasonable limit (e.g., 10,000) for memory and performance.
- **Region count:** A good heuristic is sqrt(population_size) for balanced exploration/exploitation.
- **Hybrid approach:** Use hill_descent_lib to find a good initialization, then switch to gradient-based methods (SGD, Adam, etc.) for fine-tuning; genetic algorithms excel at escaping local minima. See the sketch after this list.
- **Data management:** Keep training data inside your fitness function struct to avoid passing large datasets through the API.
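A sketch of that hybrid hand-off, reusing the sphere function from earlier; `gradient_fine_tune` stands in for your own SGD/Adam loop and is not part of hill_descent_lib:

```rust
use hill_descent_lib::{GlobalConstants, SingleValuedFunction, setup_world, TrainingData};

#[derive(Debug)]
struct Sphere;

impl SingleValuedFunction for Sphere {
    fn single_run(&self, params: &[f64]) -> f64 {
        params.iter().map(|&x| x * x).sum()
    }
}

// Placeholder for your own gradient-based trainer (SGD, Adam, ...).
fn gradient_fine_tune(initial_weights: &[f64]) {
    println!("fine-tuning {} weights...", initial_weights.len());
}

fn main() {
    // 50 parameters: ~15x for population, sqrt(population) for regions.
    let bounds = vec![-1.0..=1.0; 50];
    let constants = GlobalConstants::new(750, 27);
    let mut world = setup_world(&bounds, constants, Box::new(Sphere));

    // Phase 1: coarse global search with the genetic algorithm.
    for _ in 0..100 {
        world.training_run(TrainingData::None { floor_value: 0.0 });
    }

    // Phase 2: hand the best genome to the gradient-based trainer.
    let best = world.get_best_organism(TrainingData::None { floor_value: 0.0 });
    let init_weights = best.phenotype().expression_problem_values();
    // .as_ref() tolerates either a Vec<f64> or a &[f64] return type.
    gradient_fine_tune(init_weights.as_ref());
}
```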
Understanding how hill_descent_lib scales helps you configure it effectively for your problem size:
| Parameters | Recommended Pop Size | Recommended Regions | Memory (approx) | Time per Epoch |
|---|---|---|---|---|
| 2-10 | 100-200 | 10-15 | < 1 MB | < 1ms |
| 10-100 | 500-1,000 | 20-30 | < 10 MB | 10-50ms |
| 100-1,000 | 1,000-5,000 | 30-70 | 10-100 MB | 50-500ms |
| 1,000-10,000 | 5,000-10,000 | 70-100 | 100 MB - 1 GB | 0.5-5s |
| 10,000+ | 10,000 (capped) | 100 | 1-10 GB | 5-30s |
Times were measured on a modern multi-core CPU (Ryzen/Intel i7 class); actual performance varies with fitness function complexity.
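The sizing heuristics below, condensed into an illustrative helper (my own sketch, not a library API):

```rust
// Rough sizing from the table above: ~15x the parameter count for the
// population (clamped to 100..=10,000), sqrt(population) for regions.
fn suggest_config(param_count: usize) -> (usize, usize) {
    let population = (param_count * 15).clamp(100, 10_000);
    let regions = (population as f64).sqrt() as usize;
    (population, regions)
}
```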
**Population Size:**

- 10-20x parameter count works well
- 5-10x parameter count as a minimum

**Region Count:**

- sqrt(population_size) provides a good balance

**Epochs (Generations):**

- Monitor `get_best_score()` and stop if there is no improvement for 100+ epochs

Memory consumption is primarily driven by:
```text
Memory ≈ population_size × parameter_count × 24 bytes
       + region_count × 1 KB overhead
       + function-specific data
```
Example calculations:
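Using the formula above (function-specific data excluded):

- 20 parameters with a population of 1,000: 1,000 × 20 × 24 bytes ≈ 0.5 MB, plus 50 regions × 1 KB ≈ 50 KB
- 3,151 parameters (the neural-network example) with a population of 10,000: 10,000 × 3,151 × 24 bytes ≈ 756 MB, plus 100 regions × 1 KB ≈ 100 KB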
Tips for large problems:

- Build with `cargo build --release` - debug builds use significantly more memory
- Call `get_state()` sparingly (serialization copies data)

✅ Good use cases:

- Multimodal objectives with many local minima (e.g., Himmelblau's function)
- Black-box fitness functions where no gradient is available
- Finding a good initialization for a large parameter set before gradient-based fine-tuning
❌ Consider alternatives when:

- The objective is smooth and differentiable - gradient-based methods (SGD, Adam) will usually converge faster
- A single fitness evaluation is expensive, since every generation evaluates the entire population
hill_descent_lib uses Rayon for parallel region processing:

- Effective parallelism is min(num_regions, num_cores), so the region count caps how many cores are used (see the sketch below)
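A small sketch of sizing the region count from the machine's core count; `pick_num_regions` is my own helper, not a library API:

```rust
use std::thread;

// Rayon parallelises across regions, so using at least as many regions as
// cores keeps every core busy; otherwise fall back to the sqrt heuristic.
fn pick_num_regions(population_size: usize) -> usize {
    let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    let heuristic = (population_size as f64).sqrt() as usize;
    heuristic.max(cores)
}
```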
Optimization tips:

- Use release builds - parallelism overhead matters more in debug

For debugging and analysis, enable the optional tracing feature:
```toml
[dependencies]
hill_descent_lib = { version = "0.2.0", features = ["enable-tracing"] }
```
```rust
use hill_descent_lib::{GlobalConstants, SingleValuedFunction, setup_world};
#[cfg(feature = "enable-tracing")]
use hill_descent_lib::init_tracing;

fn main() {
    // Initialize tracing (only compiled in when the feature is enabled)
    #[cfg(feature = "enable-tracing")]
    init_tracing();

    // Your optimization code...
}
```
The library implements a genetic algorithm with spatial partitioning: the parameter space is divided into regions, organisms compete and reproduce within those regions, and the regions are processed in parallel.

The algorithm automatically manages selection, crossover, and mutation from generation to generation.
- `setup_world()` - Initialize the optimization environment
- `GlobalConstants` - Configuration (population size, region count, seed)
- `World` - Main optimization container with methods:
  - `training_run()` - Run one generation
  - `get_best_score()` - Get current best fitness
  - `get_best_organism()` - Get the best organism
  - `get_state()` - Get JSON representation of world state
- `SingleValuedFunction` - Define your fitness function:
  - `single_run(&self, params: &[f64]) -> f64` - Required
  - `function_floor(&self) -> f64` - Optional (default: 0.0)

Unlike gradient-based optimizers, hill_descent_lib:

- Needs only a fitness score, never a gradient
- Maintains a population of candidate solutions rather than a single point
- Can escape local minima that trap purely local methods
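As a quick illustration of state inspection (assuming `get_state()` returns printable JSON, per its description above):

```rust
use hill_descent_lib::{GlobalConstants, SingleValuedFunction, setup_world, TrainingData};

#[derive(Debug)]
struct Quadratic;

impl SingleValuedFunction for Quadratic {
    fn single_run(&self, params: &[f64]) -> f64 {
        params[0].powi(2)
    }
}

fn main() {
    let bounds = vec![-10.0..=10.0];
    let mut world = setup_world(&bounds, GlobalConstants::new(100, 10), Box::new(Quadratic));
    world.training_run(TrainingData::None { floor_value: 0.0 });

    // Assumption: get_state() yields a printable JSON representation.
    println!("{}", world.get_state());
}
```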
Full API documentation is available on docs.rs.
See the examples/ directory for more usage patterns:

- `simple_optimization.rs` - Basic 2D example
- `custom_function.rs` - Implementing custom fitness functions
- `multi_dimensional.rs` - High-dimensional optimization

Run examples with:
```sh
cargo run --example simple_optimization
```
Requires the Rust 2024 edition (Rust 1.85.0+).
Contributions are welcome! Please feel free to submit issues or pull requests.
Licensed under either of:

- Apache License, Version 2.0
- MIT License

at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
Developed on Windows with cross-platform compatibility in mind.
Source code: https://github.com/cainem/hill_descent