| Crates.io | stepwise |
| lib.rs | stepwise |
| version | 0.0.3 |
| created_at | 2025-05-08 20:09:08.008917+00 |
| updated_at | 2025-05-23 20:17:26.323321+00 |
| description | A library for stepwise algorithms |
| homepage | https://github.com/akanalytics/stepwise |
| repository | https://github.com/akanalytics/stepwise |
| max_upload_size | |
| id | 1665791 |
| size | 299,099 |
The stepwise crate is a zero-dependency helper library for writing iterative algorithms.
It supports both numeric techniques—such as gradient descent, fixed-point iteration, Newton–Raphson—and non-numeric stepwise processes, like line-by-line file parsing or tree search.
With stepwise, you can:

- use the `Driver` executor in a functional, iterator-like style to configure and run your algorithm
- use metrics that compute moving averages, absolute/relative differences, invocation counts, or detect stalled solvers
- use linear algebra functions for computing norms and distances
- use samplers for ML training, convergence analysis, or plotting
- define your own problems, and compare them with the built-in algorithms

The crate's main components are: driver, algorithms, metrics, linear algebra functions, samplers, and problems.
Example 1:

```rust
use stepwise::{Driver as _, fixed_iters, assert_approx_eq};
use stepwise::algos::GradientDescent;

let sphere_gradient = |x: &[f64]| vec![2.0 * x[0], 2.0 * x[1]];
let learning_rate = 0.1;
let initial_guess = vec![5.5, 6.5];

let algo = GradientDescent::new(learning_rate, initial_guess, sphere_gradient);
let (solved, step) = fixed_iters(algo, 1000).solve().expect("solving failed!");

assert_approx_eq!(solved.x(), &[0.0, 0.0]);
```
Example 2:

```rust
use std::time::Duration;
use stepwise::prelude::*;

let gd = algos::GradientDescent::new(0.01, vec![1.0, 2.0], problems::sphere_grad);
let mut avg_of_norm = metrics::Ema::with_window(10);

let (solved, step) = fail_after_iters(gd, 10_000)
    .converge_when(|algo, _step| avg_of_norm.observe(algo.gradient.norm_l2()) < 1e-10)
    .show_progress_bar_after(Duration::from_millis(20))
    .on_step(|algo, _step| algo.learning_rate *= 0.9999)
    .solve()
    .expect("solving failed!");

assert_approx_eq!(solved.x(), &[0.0, 0.0]);
assert_eq!(step.iteration(), 1301);
```
In Example 2 (above):

- Early stopping is used to terminate the algorithm.
- An exponential moving average of the gradient's L2 norm is used to test that it is near zero.
- A progress bar is printed to the terminal if the algorithm runs for more than 20 ms.
- The stepwise prelude is used to import both the metrics and the linear algebra trait.
- A decaying learning rate is applied after each step.
Callers will typically interact with your algorithm through the Driver trait.
However, to integrate with stepwise, you must implement the core Algo trait for your algorithm.
This involves defining a single method, `step()`, which returns a tuple of:

- `ControlFlow`, indicating whether iteration should continue or stop
- `Result`, capturing any error that may occur during a step

This design separates control flow from failure, supporting fallible solvers with clean early-stopping logic.

Note: accessor methods like `x()` (for the current state) are not part of the trait, but are expected in usable solvers so that the solution can be retrieved during or after iteration.
```rust
use std::{error::Error, ops::ControlFlow};

pub trait Algo {
    type Error: Send + Sync + Error + 'static;

    fn step(&mut self) -> (ControlFlow<()>, Result<(), Self::Error>);

    // <default methods omitted>
}
```
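To make the trait concrete, here is a minimal sketch of a hypothetical implementation: a fixed-point iteration x ← cos(x). The `CosFixedPoint` struct and its tolerance are illustrative assumptions, not part of the crate; the trait is restated locally (with default methods omitted) so the snippet compiles on its own, and the loop drives the algorithm by hand where the stepwise `Driver` would normally do so.

```rust
use std::{convert::Infallible, error::Error, ops::ControlFlow};

// The core Algo trait as shown above (default methods omitted).
pub trait Algo {
    type Error: Send + Sync + Error + 'static;
    fn step(&mut self) -> (ControlFlow<()>, Result<(), Self::Error>);
}

// Hypothetical example: fixed-point iteration x <- cos(x),
// which converges to the Dottie number (~0.739085).
struct CosFixedPoint {
    x: f64,
}

impl Algo for CosFixedPoint {
    // This iteration cannot fail, so Infallible is used as the error type.
    type Error = Infallible;

    fn step(&mut self) -> (ControlFlow<()>, Result<(), Self::Error>) {
        let next = self.x.cos();
        // Stop once successive iterates agree to within an illustrative tolerance.
        let done = (next - self.x).abs() < 1e-12;
        self.x = next;
        let flow = if done {
            ControlFlow::Break(())
        } else {
            ControlFlow::Continue(())
        };
        (flow, Ok(()))
    }
}

impl CosFixedPoint {
    // Accessor for the current state, named x() by convention.
    fn x(&self) -> f64 {
        self.x
    }
}

fn main() {
    let mut algo = CosFixedPoint { x: 1.0 };
    // Drive the algorithm manually; in real code a driver such as
    // fixed_iters(algo, n) would own this loop.
    while let (ControlFlow::Continue(()), Ok(())) = algo.step() {}
    assert!((algo.x() - 0.739_085_133_215_160_7).abs() < 1e-9);
    println!("fixed point: {}", algo.x());
}
```

Returning the stop signal and the error separately is what lets a driver distinguish "converged, stop iterating" from "a step failed" without conflating the two in a single `Result`.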
Related crate: stochy