ganesh

Crate: ganesh (crates.io / lib.rs)
Description: Function minimization in Rust, simplified
Homepage / Repository: https://github.com/denehoffman/ganesh
Documentation: https://docs.rs/ganesh
Author: Nathaniel D. Hoffman (denehoffman)

README

Ganesh

Function Minimization in Rust, Simplified


ganesh (/ɡəˈneɪʃ/), named after the Hindu god of wisdom, provides several common minimization algorithms as well as a straightforward, trait-based interface for creating your own extensions. This crate is intended to be as simple as possible. The user needs to implement the Function trait on some struct that takes a vector of parameters and returns a single-valued Result ($f: \mathbb{R}^n \to \mathbb{R}$). Users can optionally provide gradient and Hessian functions to speed up some algorithms, but a default finite-difference implementation is provided so that all algorithms work out of the box.
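
As a rough illustration of that fallback, a central finite-difference gradient fits in a few lines. The sketch below shows the general technique only, not the crate's actual internals; the function name, the fixed step size h, and the closure-based signature are all illustrative assumptions.

```rust
// Illustrative sketch: df/dx_i ≈ (f(x + h*e_i) - f(x - h*e_i)) / (2h).
// ganesh's real default may differ in naming, step-size choice, and error handling.
fn central_difference_gradient<F>(f: F, x: &[f64], h: f64) -> Vec<f64>
where
    F: Fn(&[f64]) -> f64,
{
    let mut xh = x.to_vec();
    (0..x.len())
        .map(|i| {
            xh[i] = x[i] + h; // forward sample
            let fp = f(&xh);
            xh[i] = x[i] - h; // backward sample
            let fm = f(&xh);
            xh[i] = x[i]; // restore the i-th coordinate
            (fp - fm) / (2.0 * h)
        })
        .collect()
}
```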

> [!CAUTION]
> This crate is still in an early development phase, and the API is not stable. It can (and likely will) be subject to breaking changes before the 1.0.0 version release (and hopefully not many after that).


Key Features

  • Simple but powerful trait-oriented library which tries to follow the Unix philosophy of "do one thing and do it well".
  • Generics to allow for different numeric types to be used in the provided algorithms.
  • Algorithms that are simple to use with sensible defaults.
  • Traits which make developing future algorithms simple and consistent.

Quick Start

This crate provides some common test functions in the test_functions module. Consider the following implementation of the Rosenbrock function:
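
In the form used here, $f(\mathbf{x}) = \sum_{i=1}^{n-1} \left[100\,(x_{i+1} - x_i^2)^2 + (1 - x_i)^2\right]$, which has a global minimum of $0$ at $x_i = 1$ for all $i$; that is where the optimizer should land below.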

```rust
use ganesh::prelude::*;
use std::convert::Infallible;

pub struct Rosenbrock {
    /// Number of dimensions (must be at least 2)
    pub n: usize,
}

impl Function<f64, Infallible> for Rosenbrock {
    fn evaluate(&self, x: &[f64]) -> Result<f64, Infallible> {
        // Sum the coupled quadratic terms over each pair of adjacent coordinates.
        Ok((0..(self.n - 1))
            .map(|i| 100.0 * (x[i + 1] - x[i].powi(2)).powi(2) + (1.0 - x[i]).powi(2))
            .sum())
    }
}
```
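
Since the minimum of this function is exactly zero at $(1, 1)$, a quick sanity check can call the trait method directly:

```rust
let f = Rosenbrock { n: 2 };
// Both terms vanish at the global minimum, so the result is exact even in floats.
assert_eq!(f.evaluate(&[1.0, 1.0]).unwrap(), 0.0);
```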

To minimize this function, we could consider using the Nelder-Mead algorithm:

```rust
use ganesh::prelude::*;
use ganesh::algorithms::NelderMead;

let func = Rosenbrock { n: 2 };
// Start the search away from the known minimum; None requests default options.
let mut m = NelderMead::new(func, &[-2.3, 3.4], None);
let status = minimize!(m).unwrap();
let (x_best, fx_best) = m.best();
println!("x: {:?}\nf(x): {}", x_best, fx_best);
```

This should output:

```text
x: [0.9999459113507765, 0.9998977381285472]
f(x): 0.000000006421349269800761
```

I have ignored the status variable here, but in practice, the Minimizer::minimize method returns the last message sent by the algorithm. This message can indicate the status of a fit without explicitly raising an error, which simplifies debugging: otherwise the minimization would have to juggle two separate error types, one for the function and one for the algorithm (functions can always be fallible in this crate). We could also swap the f64s for f32s (or any other type which implements the Field trait) in the Rosenbrock implementation. Additionally, if we wanted to modify any of the hyperparameters of the fitting algorithm, we could use NelderMeadOptions::builder() and pass the result as the third argument of the NelderMead::new constructor (see the sketch at the end of this section). Finally, all algorithm implementations pass a unique message type to a callback function. For NelderMead, we could do the following:

```rust
let status = minimize!(m, |message| {
    println!("step: {}\nx: {:?}\nf(x): {}", message.step, message.x, message.fx)
})
.unwrap();
```

This will print out the current step, the best position found by the optimizer at that step, and the function's evaluation at that position for each step in the algorithm. You can use the step number to limit printing (print only steps divisible by 100, for example).
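
A minimal sketch of that kind of throttled callback, assuming message.step is an integer counter as the snippet above suggests:

```rust
let status = minimize!(m, |message| {
    // Only report progress on every hundredth step.
    if message.step % 100 == 0 {
        println!("step: {}, f(x): {}", message.step, message.fx);
    }
})
.unwrap();
```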

The minimize! macro exists to simplify the Minimizer::minimize<Callback: Fn(M)>(&mut self, callback: Callback) call, which looks a bit ugly if you don't actually want a callback.
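
Concretely, the two calls below should be equivalent under the signature quoted above; the macro simply fills in a no-op callback:

```rust
let status = minimize!(m).unwrap();
// ...is shorthand for something like:
let status = m.minimize(|_message| {}).unwrap();
```

And here is the options sketch promised earlier. Only the NelderMeadOptions::builder() entry point and the third constructor argument come from the text above; the bare .build() call and the Some(...) wrapper are assumptions, and setter methods are omitted entirely because their names are not documented here:

```rust
let opts = NelderMeadOptions::builder().build(); // hypothetical chain; real setters omitted
let mut m = NelderMead::new(Rosenbrock { n: 2 }, &[-2.3, 3.4], Some(opts));
```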

Future Plans

  • Eventually, I would like to implement BFGS and variants, MCMC algorithms, and some more modern gradient-free optimization techniques.
  • There are probably many optimizations and algorithm extensions that I'm missing right now because I just wanted to get it working first.
  • Reduce the number of Vec::clone calls in the algorithms.
  • Add a test suite.