| Crates.io | eqsolver |
| lib.rs | eqsolver |
| version | 0.3.0 |
| created_at | 2022-06-07 14:44:21.011367+00 |
| updated_at | 2025-07-19 12:45:19.478081+00 |
| description | A library that solves equations using numerical methods |
| homepage | |
| repository | https://github.com/AzeezDa/eqsolver |
| max_upload_size | |
| id | 601523 |
| size | 226,692 |
# eqsolver - An Equation Solving, Optimisation, and Integration Library for Rust

This Rust library is aimed at numerically solving equations, optimising objective functions, and integrating functions.
The library is passively maintained, meaning no new features will be added. However, issues on GitHub will be answered and resolved.
Contributions and feedback to this library are more than welcome!
The following methods are available in the library. Their descriptions use the largest possible domain and codomain for the functions, namely ℝⁿ; however, any (well-behaved) subset of ℝⁿ also works. Additionally, the methods that take multivariate input or output heavily utilise the linear algebra library for Rust, nalgebra.
**Multivariate Newton-Raphson**: There are two versions of this method: one requires the Jacobian matrix to be given, and the other approximates it using finite differences. The latter version therefore has a slightly longer wall time. Both versions require an initial guess.

For certain ill-posed problems, this method will fail. For a slower but more robust method, see the Levenberg-Marquardt method below.
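The cost difference between the two versions can be seen even in one dimension. Below is a hand-rolled sketch of Newton's method with a forward-difference derivative (plain Rust, not the library's API); each iteration spends one extra function evaluation to approximate the derivative instead of using a supplied one:

```rust
// Conceptual sketch (not eqsolver's API): Newton's method where the
// derivative is approximated by a forward finite difference.
fn fd_newton(f: impl Fn(f64) -> f64, mut x: f64, tol: f64, max_iter: usize) -> Option<f64> {
    let h = f64::EPSILON.sqrt(); // step size for the finite difference
    for _ in 0..max_iter {
        let fx = f(x);
        if fx.abs() < tol {
            return Some(x);
        }
        let dfx = (f(x + h) - fx) / h; // extra evaluation: approximates f'(x)
        if dfx == 0.0 {
            return None; // derivative vanished; Newton step undefined
        }
        x -= fx / dfx;
    }
    None
}

fn main() {
    // Solve x^2 - 2 = 0 starting from x = 1; converges towards sqrt(2)
    let root = fd_newton(|x| x * x - 2.0, 1.0, 1e-10, 100).unwrap();
    assert!((root - 2f64.sqrt()).abs() < 1e-8);
}
```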
**Gauss-Newton**: There are two versions of this method: one requires the Jacobian matrix to be given, and the other approximates it using finite differences. The latter version therefore has a slightly longer wall time. Both versions require an initial guess.

For certain ill-posed problems, this method will fail. For a slower but more robust method, see the Levenberg-Marquardt method below.
**Levenberg-Marquardt**: There are two versions of this method: one requires the Jacobian matrix to be given, and the other approximates it using finite differences. The latter version therefore has a slightly longer wall time. Both versions require an initial guess.
**Particle Swarm Optimisation**: Use this method if you know the bounds of the parameters.
**Cross-Entropy Method**: Use this method if you do *not* know the bounds of the parameters but *do* know how uncertain each parameter is.
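The cross-entropy idea can be sketched in one dimension (std-only plain Rust, not the library's API): sample candidates from a Gaussian, keep the best ("elite") samples, refit the Gaussian to the elites, and repeat. The initial standard deviation encodes how uncertain the parameter is:

```rust
// Tiny deterministic LCG standing in for a real RNG crate
struct Lcg(u64);

impl Lcg {
    fn uniform(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }

    fn gaussian(&mut self) -> f64 {
        // Box-Muller transform
        let (u1, u2) = (self.uniform().max(1e-12), self.uniform());
        (-2.0 * u1.ln()).sqrt() * (std::f64::consts::TAU * u2).cos()
    }
}

fn cross_entropy(f: impl Fn(f64) -> f64, mut mean: f64, mut sd: f64) -> f64 {
    let mut rng = Lcg(7);
    for _ in 0..100 {
        // Sample 50 candidates and keep the 10 with the lowest objective value
        let mut samples: Vec<f64> = (0..50).map(|_| mean + sd * rng.gaussian()).collect();
        samples.sort_by(|a, b| f(*a).partial_cmp(&f(*b)).unwrap());
        let elite = &samples[..10];
        // Refit the Gaussian to the elite samples
        mean = elite.iter().sum::<f64>() / 10.0;
        sd = (elite.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / 10.0).sqrt();
    }
    mean
}

fn main() {
    // Minimise (x - 3)^2 starting far away with a large initial uncertainty
    let best = cross_entropy(|x| (x - 3.0).powi(2), -10.0, 10.0);
    assert!((best - 3.0).abs() < 1e-3);
}
```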
There is a single struct for ordinary differential equations (ODEs) that can be configured (using the builder pattern) to use one of several step methods.
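As a concrete illustration of what a fixed-step method does (hand-rolled, not using eqsolver), here is Euler's forward method, the simplest such step method, applied to the same problem as the ODE example in this README, y' = ty with y(0) = 0.2:

```rust
// Conceptual sketch of an explicit fixed-step ODE solver (Euler forward).
// Exact solution of y' = t*y, y(0) = 0.2 is y(t) = 0.2 * exp(t^2 / 2).
fn euler(f: impl Fn(f64, f64) -> f64, mut t: f64, mut y: f64, h: f64, t_end: f64) -> f64 {
    while t < t_end {
        y += h * f(t, y); // one explicit Euler step
        t += h;
    }
    y
}

fn main() {
    let f = |t: f64, y: f64| t * y;
    let y = euler(f, 0.0, 0.2, 1e-4, 2.0);
    let exact = 0.2 * (2.0f64).exp();
    assert!((y - exact).abs() < 1e-2);
}
```

Higher-order step methods reduce the error per step at the cost of more function evaluations per step.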
In eqsolver, there are structs that represent methods for integrating functions f: ℝⁿ → ℝ.
Note! This method cannot guarantee a given error tolerance. Use the adaptive Newton-Cotes integrator for a guarantee on the error.
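To see where such a guarantee comes from, here is a self-contained sketch of adaptive Simpson's rule, one member of the Newton-Cotes family (an illustration of the general technique, not eqsolver's implementation). Each interval is refined until a Richardson-style error estimate falls within its share of the tolerance:

```rust
// Simpson's rule on a single interval [a, b]
fn simpson(f: &impl Fn(f64) -> f64, a: f64, b: f64) -> f64 {
    (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))
}

// Adaptive refinement: compare the coarse estimate `whole` with the sum of
// the two half-interval estimates; recurse until the difference is small.
fn adaptive(f: &impl Fn(f64) -> f64, a: f64, b: f64, tol: f64, whole: f64) -> f64 {
    let m = 0.5 * (a + b);
    let left = simpson(f, a, m);
    let right = simpson(f, m, b);
    if (left + right - whole).abs() <= 15.0 * tol {
        // Error estimate accepted; add the Richardson correction term
        left + right + (left + right - whole) / 15.0
    } else {
        // Split the tolerance budget between the two halves and recurse
        adaptive(f, a, m, tol / 2.0, left) + adaptive(f, m, b, tol / 2.0, right)
    }
}

fn main() {
    // Integrate e^(x^2) over [0, 1]; the true value is about 1.4626517459
    let f = |x: f64| (x * x).exp();
    let result = adaptive(&f, 0.0, 1.0, 1e-6, simpson(&f, 0.0, 1.0));
    assert!((result - 1.4626517459).abs() < 1e-5);
}
```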
The integrator can be supplied with a random number generator (RNG), either seeded (ChaCha8, for instance) or unseeded (rand's default RNG).

Furthermore, the output of the integrator is the mean and variance of the algorithm's output, where the mean is the estimate of the integral's value.
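The mean/variance output can be illustrated with a hand-rolled plain Monte Carlo estimator over [0, 1] (std-only Rust with a tiny deterministic LCG in place of a real RNG crate; not the library's API). The variance reported is the variance of the mean estimator:

```rust
// Tiny deterministic LCG standing in for a real RNG crate
struct Lcg(u64);

impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64 // map top 53 bits to [0, 1)
    }
}

// Plain Monte Carlo over [0, 1]: returns (mean, variance of the mean)
fn monte_carlo(f: impl Fn(f64) -> f64, n: usize, rng: &mut Lcg) -> (f64, f64) {
    let (mut sum, mut sum_sq) = (0.0, 0.0);
    for _ in 0..n {
        let y = f(rng.next_f64());
        sum += y;
        sum_sq += y * y;
    }
    let mean = sum / n as f64;
    // Sample variance of f divided by n gives the variance of the mean
    let variance = (sum_sq / n as f64 - mean * mean) / n as f64;
    (mean, variance)
}

fn main() {
    let mut rng = Lcg(42);
    // ∫₀¹ x² dx = 1/3
    let (mean, variance) = monte_carlo(|x| x * x, 100_000, &mut rng);
    assert!((mean - 1.0 / 3.0).abs() < 0.01);
    assert!(variance > 0.0);
}
```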
The struct uses parameters inspired by the implementation in GNU's Scientific Library (GSL). These include a dither value to break symmetries of functions, an alpha value (introduced by Press and Farrar) to control the variance-based distribution of points, and parameters bounding the recursion depth and sample count.

Like the plain Monte Carlo integrator, MISER can be supplied with an RNG, and its output is a mean together with a variance.
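The recursive stratified-sampling idea behind MISER can be sketched in one dimension (again a std-only illustration, not eqsolver's implementation): spend a small pilot budget estimating the spread of f in each half of the domain, then allocate the remaining samples to the half where f varies more:

```rust
// Tiny deterministic LCG standing in for a real RNG crate
struct Lcg(u64);

impl Lcg {
    fn uniform(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

// Plain Monte Carlo estimate of the integral of f over [a, b] with n samples
fn plain_mc(f: &impl Fn(f64) -> f64, a: f64, b: f64, n: usize, rng: &mut Lcg) -> f64 {
    let sum: f64 = (0..n).map(|_| f(a + (b - a) * rng.uniform())).sum();
    (b - a) * sum / n as f64
}

// Recursive stratified sampling: split the domain, estimate the spread of f
// in each half with a pilot phase, then allocate the budget proportionally.
fn miser_1d(f: &impl Fn(f64) -> f64, a: f64, b: f64, n: usize, rng: &mut Lcg) -> f64 {
    if n < 64 {
        return plain_mc(f, a, b, n, rng);
    }
    let m = 0.5 * (a + b);
    let pilot = 16;
    let spread = |lo: f64, hi: f64, rng: &mut Lcg| {
        let ys: Vec<f64> = (0..pilot).map(|_| f(lo + (hi - lo) * rng.uniform())).collect();
        let mean = ys.iter().sum::<f64>() / pilot as f64;
        (ys.iter().map(|y| (y - mean).powi(2)).sum::<f64>() / pilot as f64).sqrt()
    };
    let (sl, sr) = (spread(a, m, rng), spread(m, b, rng));
    let rest = n - 2 * pilot;
    let frac = if sl + sr > 0.0 { sl / (sl + sr) } else { 0.5 };
    let n_left = ((frac * rest as f64) as usize).clamp(1, rest - 1);
    miser_1d(f, a, m, n_left, rng) + miser_1d(f, m, b, rest - n_left, rng)
}

fn main() {
    let mut rng = Lcg(1729);
    let f = |x: f64| (x * x).exp(); // integrate e^(x^2) over [0, 1]
    let estimate = miser_1d(&f, 0.0, 1.0, 200_000, &mut rng);
    assert!((estimate - 1.4626517).abs() < 0.01);
}
```

The variance-based allocation is the same idea that MISER's alpha parameter tunes: more samples go where the integrand fluctuates most.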
```rust
use eqsolver::single_variable::FDNewton;

let f = |x: f64| x.exp() - 1. / x; // e^x = 1/x
let solution = FDNewton::new(f).solve(0.5); // Starting guess is 0.5
```
```rust
use eqsolver::multivariable::MultiVarNewtonFD;
use nalgebra::{vector, Vector2};

// Want to solve x^2 - y = 1 and xy = 2
let f = |v: Vector2<f64>| vector![v[0].powi(2) - v[1] - 1., v[0] * v[1] - 2.];
let solution = MultiVarNewtonFD::new(f).solve(vector![1., 1.]); // Starting guess is (1, 1)
```
```rust
use eqsolver::ODESolver;

let f = |t: f64, y: f64| t * y; // y' = f(t, y) = ty
let (x0, y0) = (0., 0.2);
let x_end = 2.;
let step_size = 1e-3;
let solution = ODESolver::new(f, x0, y0, step_size).solve(x_end);
```
```rust
use eqsolver::multivariable::LevenbergMarquardtFD;
use nalgebra::{vector, Vector2};

let c0 = [3., 5., 3.];
let c1 = [1., 0., 4.];
let c2 = [6., 2., 2.];

// Function from R2 to R3
let f = |v: Vector2<f64>| {
    vector!(
        (v[0] - c0[0]).powi(2) + (v[1] - c0[1]).powi(2) - c0[2] * c0[2],
        (v[0] - c1[0]).powi(2) + (v[1] - c1[1]).powi(2) - c1[2] * c1[2],
        (v[0] - c2[0]).powi(2) + (v[1] - c2[1]).powi(2) - c2[2] * c2[2],
    )
};

let solution_lm = LevenbergMarquardtFD::new(f)
    .solve(vector![4.5, 2.5]) // Guess
    .unwrap();
```
```rust
use eqsolver::global_optimisers::{CrossEntropy, ParticleSwarm};
use nalgebra::SVector;
use std::f64::consts::PI;

const SIZE: usize = 10;

let rastrigin = |v: SVector<f64, SIZE>| {
    v.fold(10. * SIZE as f64, |acc, x| {
        acc + x * x - 10. * f64::cos(2. * PI * x)
    })
};

let bounds = SVector::repeat(10.);
let standard_deviations = SVector::repeat(10.);
let guess = SVector::repeat(5.);

let opt_pso = ParticleSwarm::new(rastrigin, -bounds, bounds).solve(guess);
let opt_ce = CrossEntropy::new(rastrigin)
    .with_std_dev(standard_deviations)
    .solve(guess);
```
```rust
use eqsolver::integrators::AdaptiveNewtonCotes;

fn main() {
    // We will integrate the function e^(x^2) from 0 to 1
    let f = |x: f64| (x * x).exp();

    // The adaptive Newton-Cotes allows for setting the tolerance
    let adaptive_newton_cotes_result = AdaptiveNewtonCotes::new(f)
        .with_tolerance(0.001)
        .integrate(0., 1.)
        .unwrap();
}
```
```rust
use eqsolver::integrators::{MISER, OrthotopeRandomIntegrator};
use nalgebra::{vector, Vector2};
use rand::SeedableRng;
use rand_chacha::ChaCha8Rng;

fn main() {
    let f = |v: Vector2<f64>| (v[0] * v[1]).exp();
    let integrator = MISER::new(f);
    let from = vector![0., 0.];
    let to = vector![2., 2.];

    // Uses rand's rng() and returns only the mean
    let result_rng = integrator
        .integrate(from, to)
        .unwrap();

    // Uses rand's rng() and returns the mean and the variance
    let result_rng_full_output = integrator
        .integrate_to_mean_variance(from, to)
        .unwrap();

    // Uses ChaCha8 with 1729 as seed and returns the mean and the variance
    let mut chacha8 = ChaCha8Rng::seed_from_u64(1729);
    let result_chacha8 = MISER::new(f)
        .integrate_with_rng(from, to, &mut chacha8)
        .unwrap();
}
```
For more examples, please see the examples directory.