| Crates.io | cmaes-lbfgsb |
| lib.rs | cmaes-lbfgsb |
| version | 0.1.0 |
| created_at | 2025-11-19 12:00:16.274761+00 |
| updated_at | 2025-11-19 12:00:16.274761+00 |
| description | High-performance CMA-ES and L-BFGS-B optimization algorithms for constrained and unconstrained problems |
| homepage | https://github.com/gnsqd/cmaes-lbfgsb |
| repository | https://github.com/gnsqd/cmaes-lbfgsb |
| max_upload_size | |
| id | 1939995 |
| size | 100,488 |
A high-performance Rust optimization library featuring two complementary state-of-the-art algorithms: CMA-ES (Covariance Matrix Adaptation Evolution Strategy) and L-BFGS-B (Limited-memory Broyden-Fletcher-Goldfarb-Shanno with Box constraints).
Originally developed for options surface calibration in quantitative finance, this library provides robust, production-ready implementations suitable for any optimization problem requiring either gradient-free evolutionary optimization or efficient quasi-Newton methods with bounds.
- **CMA-ES** (Covariance Matrix Adaptation Evolution Strategy): advanced evolutionary algorithm with an adaptive covariance matrix
- **L-BFGS-B**: limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm with box constraints
Add this to your `Cargo.toml`:

```toml
[dependencies]
cmaes-lbfgsb = "0.1.0"
```
```rust
use cmaes_lbfgsb::cmaes::{canonical_cmaes_optimize, CmaesCanonicalConfig};

// Define the objective function (minimized): sphere, f(x) = sum(x_i^2)
let objective = |x: &[f64]| x.iter().map(|&xi| xi * xi).sum::<f64>();

// Bounds for each parameter: 3-D problem, each parameter in [-5, 5]
let bounds = vec![(-5.0, 5.0); 3];

// Configure CMA-ES
let config = CmaesCanonicalConfig {
    population_size: 12,
    max_generations: 100,
    seed: 42,
    verbosity: 1,
    ..Default::default()
};

// Optimize
let result = canonical_cmaes_optimize(objective, &bounds, config, None);
println!("Best solution: {:?}", result.best_solution);
println!("Generations used: {}", result.generations_used);
```
```rust
use cmaes_lbfgsb::lbfgsb_optimize::{lbfgsb_optimize, LbfgsbConfig};

// Define the objective function: Rosenbrock
let objective = |x: &[f64]| {
    let mut sum = 0.0;
    for i in 0..x.len() - 1 {
        let a = 1.0 - x[i];
        let b = x[i + 1] - x[i] * x[i];
        sum += a * a + 100.0 * b * b;
    }
    sum
};

// Initial guess
let mut x = vec![-1.0, 1.0];

// Bounds
let bounds = vec![(-2.0, 2.0), (-2.0, 2.0)];

// Configure L-BFGS-B
let config = Some(LbfgsbConfig {
    memory_size: 10,
    obj_tol: 1e-6,
    ..Default::default()
});

// Optimize
let result = lbfgsb_optimize(
    &mut x,
    &bounds,
    &objective,
    1000, // max iterations
    1e-5, // gradient tolerance
    None, // no callback
    config,
);

match result {
    Ok((best_obj, best_params)) => {
        println!("Best objective: {}", best_obj);
        println!("Best parameters: {:?}", best_params);
    }
    Err(e) => println!("Optimization failed: {}", e),
}
```
CMA-ES is a state-of-the-art evolutionary algorithm that excels in gradient-free optimization. It's particularly powerful for complex, real-world optimization problems.
What makes CMA-ES special:
Ideal for:
Technical highlights:
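The evolutionary core is easiest to see in a stripped-down form. The sketch below (plain Rust with a tiny inline RNG, no external crates) implements only the sample → rank → recombine loop with a crude geometric step-size decay. Real CMA-ES additionally adapts a full covariance matrix and controls the step size via evolution paths, so treat `toy_es` (a name invented here) as an illustration, not the crate's algorithm.

```rust
// Deliberately simplified (mu/mu, lambda) evolution strategy: the
// sample / rank / recombine core of CMA-ES, WITHOUT covariance
// adaptation or path-based step-size control.

// Tiny deterministic xorshift RNG (std has no RNG).
struct Rng(u64);
impl Rng {
    fn next_u64(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13; x ^= x >> 7; x ^= x << 17;
        self.0 = x;
        x
    }
    fn uniform(&mut self) -> f64 {
        (self.next_u64() >> 11) as f64 / (1u64 << 53) as f64
    }
    fn gaussian(&mut self) -> f64 {
        // Box-Muller transform
        let u1 = self.uniform().max(1e-12);
        let u2 = self.uniform();
        (-2.0 * u1.ln()).sqrt() * (std::f64::consts::TAU * u2).cos()
    }
}

fn toy_es<F: Fn(&[f64]) -> f64>(
    f: F, mut mean: Vec<f64>, mut sigma: f64, generations: usize,
) -> Vec<f64> {
    let lambda = 12;     // population size
    let mu = lambda / 2; // number of selected parents
    let mut rng = Rng(42);
    for _ in 0..generations {
        // Sample lambda offspring around the current mean.
        let mut pop: Vec<(f64, Vec<f64>)> = (0..lambda)
            .map(|_| {
                let x: Vec<f64> =
                    mean.iter().map(|&m| m + sigma * rng.gaussian()).collect();
                (f(&x), x)
            })
            .collect();
        // Rank by fitness; recombine the best mu into the new mean.
        pop.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
        for d in 0..mean.len() {
            mean[d] = pop[..mu].iter().map(|(_, x)| x[d]).sum::<f64>() / mu as f64;
        }
        // Crude geometric decay; CMA-ES adapts this via path length control.
        sigma *= 0.95;
    }
    mean
}
```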
L-BFGS-B is a quasi-Newton method that converges quickly on smooth optimization problems. It is the de facto standard for large-scale bound-constrained optimization.
What makes L-BFGS-B special:
Ideal for:
Technical highlights:
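When no analytic gradient is supplied, L-BFGS-B implementations typically fall back to finite differences (cf. the `fd_epsilon` field in `LbfgsbConfig`). A minimal forward-difference sketch, where `fd_gradient` is a helper name invented here:

```rust
// Forward-difference gradient approximation: g_i ≈ (f(x + eps*e_i) - f(x)) / eps.
// Illustrates how a gradient can be obtained when only f is available.
fn fd_gradient<F: Fn(&[f64]) -> f64>(f: F, x: &[f64], eps: f64) -> Vec<f64> {
    let f0 = f(x);
    let mut xp = x.to_vec();
    (0..x.len())
        .map(|i| {
            let orig = xp[i];
            xp[i] = orig + eps;           // perturb coordinate i
            let g = (f(&xp) - f0) / eps;  // one-sided difference quotient
            xp[i] = orig;                 // restore
            g
        })
        .collect()
}
```

Forward differences cost one extra function evaluation per dimension; central differences are more accurate but twice as expensive.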
Both algorithms provide extensive configuration options for fine-tuning performance. Here are the key parameters:
CMA-ES configuration:

```rust
let config = CmaesCanonicalConfig {
    population_size: 50,        // Population size (0 = auto)
    max_generations: 1000,      // Maximum generations
    seed: 12345,                // Random seed
    parallel_eval: true,        // Enable parallel evaluation
    verbosity: 1,               // Output level (0-2)
    ipop_restarts: 3,           // IPOP restart count
    bipop_restarts: 0,          // BIPOP restart count (overrides IPOP)
    total_evals_budget: 50000,  // Total function-evaluation budget
    use_subrun_budgeting: true, // Advanced budget allocation
    // ... 15+ additional parameters available
    ..Default::default()
};
```
L-BFGS-B configuration:

```rust
let config = LbfgsbConfig {
    memory_size: 10,           // L-BFGS memory size
    obj_tol: 1e-8,             // Objective tolerance
    step_size_tol: 1e-9,       // Step-size tolerance
    c1: 1e-4,                  // Armijo (sufficient-decrease) parameter
    c2: 0.9,                   // Curvature parameter
    fd_epsilon: 1e-8,          // Finite-difference step
    max_line_search_iters: 20, // Max line-search iterations
    // ... additional parameters available
    ..Default::default()
};
```
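The `c1` and `c2` fields are the classic Wolfe line-search constants. As a rough illustration of what `c1` controls, here is a self-contained backtracking search that enforces only the Armijo (sufficient-decrease) condition; a full strong-Wolfe search, as used inside L-BFGS-B, would also test the curvature condition governed by `c2`. `armijo_step` is a name invented for this sketch.

```rust
// Backtracking line search enforcing the Armijo condition
//   f(x + t*d) <= f(x) + c1 * t * (g . d),
// where g is the gradient at x and d a descent direction (g . d < 0).
// A full Wolfe search would additionally require the curvature condition
//   g(x + t*d) . d >= c2 * (g . d); omitted here for brevity.
fn armijo_step<F: Fn(&[f64]) -> f64>(
    f: F, x: &[f64], g: &[f64], d: &[f64], c1: f64,
) -> f64 {
    let f0 = f(x);
    let slope: f64 = g.iter().zip(d).map(|(gi, di)| gi * di).sum();
    let mut t = 1.0;
    for _ in 0..50 {
        let trial: Vec<f64> =
            x.iter().zip(d).map(|(xi, di)| xi + t * di).collect();
        if f(&trial) <= f0 + c1 * t * slope {
            return t; // sufficient decrease achieved
        }
        t *= 0.5; // backtrack
    }
    t
}
```

With the typical `c1 = 1e-4`, almost any step that decreases the objective is accepted; larger values demand a decrease closer to what the linear model predicts.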
For comprehensive documentation of all parameters, see the doc comments in the source code:
- `CmaesCanonicalConfig` - 25+ parameters with detailed explanations
- `LbfgsbConfig` - 10+ parameters with comprehensive guidance

Or generate the documentation locally:

```shell
cargo doc --open
```
| Problem Type | Recommended Algorithm | Finance Example |
|---|---|---|
| Smooth, differentiable | L-BFGS-B | Black-Scholes parameter fitting |
| Non-smooth, noisy | CMA-ES | Monte Carlo-based model calibration |
| Many local minima | CMA-ES | Heston model full calibration |
| High-dimensional (>1000) | L-BFGS-B | Large options portfolio hedging |
| Expensive function evaluations | L-BFGS-B | Complex exotic pricing models |
| Derivative-free required | CMA-ES | Jump-diffusion models |
| Initial rough calibration | CMA-ES | Volatility surface bootstrapping |
| Fine-tuning/polishing | L-BFGS-B | Refining CMA-ES results |
This library was originally developed to solve a challenging problem in quantitative finance: calibrating complex options pricing models to market data.
The Challenge:
Why Two Algorithms:
Beyond Finance: While designed for options pricing, these implementations excel at any optimization problem with similar characteristics:
CMA-ES:
- Parallel objective evaluation (`parallel_eval: true`)

L-BFGS-B:
Combined Strategy:
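The two-phase pattern from the table above (coarse global search, then local polish) can be sketched end to end. To keep the example self-contained and runnable, toy stand-ins are used: plain random search in place of CMA-ES for the global phase, and projected finite-difference gradient descent in place of L-BFGS-B for the polish. With this crate, the two phases would instead be `canonical_cmaes_optimize` followed by `lbfgsb_optimize`.

```rust
// Global-then-local optimization, with toy stand-ins for both phases.
fn global_then_local<F: Fn(&[f64]) -> f64>(f: F, bounds: &[(f64, f64)]) -> Vec<f64> {
    // Phase 1: crude random search over the box (deterministic xorshift RNG).
    let mut state = 88172645463325252u64;
    let mut rand01 = move || {
        state ^= state << 13; state ^= state >> 7; state ^= state << 17;
        (state >> 11) as f64 / (1u64 << 53) as f64
    };
    let mut best: Option<(f64, Vec<f64>)> = None;
    for _ in 0..2000 {
        let x: Vec<f64> = bounds
            .iter()
            .map(|&(lo, hi)| lo + (hi - lo) * rand01())
            .collect();
        let fx = f(&x);
        if best.as_ref().map_or(true, |(b, _)| fx < *b) {
            best = Some((fx, x));
        }
    }
    let mut x = best.unwrap().1;

    // Phase 2: polish with finite-difference gradient descent,
    // projecting each step back into the bounds.
    let (eps, step) = (1e-7, 0.05);
    for _ in 0..500 {
        let f0 = f(&x);
        for i in 0..x.len() {
            let mut xp = x.clone();
            xp[i] += eps;
            let gi = (f(&xp) - f0) / eps; // forward-difference gradient
            x[i] = (x[i] - step * gi).clamp(bounds[i].0, bounds[i].1);
        }
    }
    x
}
```

The design point is the division of labor: the global phase only needs to land in the right basin; the local phase then converges quickly because it starts from a good initial guess.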
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.
If you use this library in academic work, please consider citing the original algorithms: