jaime

Crates.io: jaime
lib.rs: jaime
source: src
created_at: 2024-10-28 22:00:54.872971
updated_at: 2024-11-06 16:46:12.420435
description: j.a.i.m.e. is an ergonomic all purpose gradient descent engine
repository: https://github.com/jaimegonzalezfabregas/Jaime
id: 1426111
Cargo.toml error: TOML parse error at line 18, column 1: unknown field `autolib` (not one of the standard `[package]` keys)
size: 0
owner: Jaime González Fábregas (jaimegonzalezfabregas)
README

Jaime's Artificial Intelligence and Machine learning Engine


J.a.i.m.e., pronounced as /hɑːɪmɛ/, is an all-purpose, ergonomic gradient descent engine. It can configure ANY* and ALL** models to find the best fit for your dataset. It will magically take care of the gradient computations with little effect on your coding style.

* not only neural networks

** derivability conditions apply

Concepts and explanation

  • Input: for our purposes, the input of our Model will be a vector of floating point numbers.
  • Output: for our purposes, the output of our Model will be a vector of floating point numbers.
  • Dataset: a set of input-output pairs. Jaime will reconfigure the model to approximate the behaviour described in the dataset.
  • Model: a function that maps from input to output using a set of configuration parameters that define its behaviour. For our purposes, small changes in the parameters should translate to small changes in the behaviour of the function. Examples of suitable models:
    • Polynomial functions: defined as y = P_0 * x^0 + P_1 * x^1 + ... + P_n * x^n. The vector [x] will be our input, the vector [y] will be our output, and the vector [P_0, P_1, ..., P_n] will be our parameter vector. An example of this crate for this precise case can be found here.
    • Neural networks: in their most basic form they are defined as consecutive matrix multiplications with delinearization steps in between. The classical meaning of parameters, input and output for a NN matches the concepts used in this crate. An example of this crate for this precise case can be found here.

If you are able to define a model, this crate will happily apply gradient descent to find some local minimum that approximates the behaviour defined in the dataset.
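To make the idea concrete, here is a minimal sketch of what a polynomial model could look like when written as a function that is generic over a "float-oid". This is not the crate's actual API; the `Floatoid` trait name and its bounds are made up for illustration only.

```rust
use std::ops::{Add, Mul};

// Hypothetical "float-oid" bound, made up for this sketch: any copyable type
// that supports addition, multiplication and conversion from f32. The crate
// defines its own trait(s) for this; `Floatoid` is not its real name.
trait Floatoid: Copy + Add<Output = Self> + Mul<Output = Self> + From<f32> {}
impl<T: Copy + Add<Output = T> + Mul<Output = T> + From<f32>> Floatoid for T {}

// A polynomial model: output = [P_0 * x^0 + P_1 * x^1 + ... + P_n * x^n],
// where `params` holds the coefficients and `input` holds [x].
fn polynomial_model<F: Floatoid>(params: &[F], input: &[F]) -> Vec<F> {
    let x = input[0];
    let mut power = F::from(1.0); // x^0
    let mut acc = F::from(0.0);
    for &p in params {
        acc = acc + p * power;
        power = power * x;
    }
    vec![acc]
}

fn main() {
    // With plain f32 this is just an ordinary function call...
    let params = [1.0_f32, -2.0, 0.5]; // 1 - 2x + 0.5x^2
    println!("{:?}", polynomial_model(&params, &[3.0_f32])); // [-0.5]
    // ...but because it is generic over F, the same code can be instantiated
    // with a dual-number type so derivatives can be tracked automatically.
}
```

Keeping the model generic is what lets the engine swap the number type underneath it without changing how the model is written.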

Examples

To make sure this crate was as usable and performant as possible, I've also implemented a few exercises that use its functions.

If you believe your understanding of what my crate does is clear, I encourage you to study the voronoi image approximator.

Geeky internal wizardry

Gradient calculation

If you are a little math-savvy and know how gradient descent works, you may be wondering how I am able to compute the partial derivatives with respect to the parameters without knowing beforehand what operations the model will perform. The solution relies on Forward Mode Automatic Differentiation using dual numbers. Jaime will require you to define a generic function that manipulates a vector of float-oids and returns a vector of float-oids. That function will later be instantiated with a custom dual number type, which allows me to hijack the mathematical operations and keep track of the necessary extra data.

Rust, specifically Rust's generics and trait system, is perfect for this task. I can unambiguously define to Rust what a float-oid is: a set of traits that overload operators and other functionality.
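As a rough illustration of the dual-number trick (a sketch of the general technique, not the crate's internal type), overloading + and * on a value-plus-derivative pair is enough to propagate exact derivatives through ordinary-looking arithmetic:

```rust
use std::ops::{Add, Mul};

// A minimal dual number: `re` is the ordinary value, `du` carries the
// derivative with respect to one chosen parameter. Illustrative only.
#[derive(Clone, Copy, Debug)]
struct Dual {
    re: f32,
    du: f32,
}

impl Add for Dual {
    type Output = Dual;
    fn add(self, rhs: Dual) -> Dual {
        Dual { re: self.re + rhs.re, du: self.du + rhs.du }
    }
}

impl Mul for Dual {
    type Output = Dual;
    // Product rule: (a + a'e)(b + b'e) = ab + (a'b + ab')e
    fn mul(self, rhs: Dual) -> Dual {
        Dual {
            re: self.re * rhs.re,
            du: self.du * rhs.re + self.re * rhs.du,
        }
    }
}

fn main() {
    // f(p) = p * p + p, evaluated at p = 3 while tracking df/dp.
    // Seeding `du = 1.0` marks p as the variable we differentiate against.
    let p = Dual { re: 3.0, du: 1.0 };
    let f = p * p + p;
    println!("value = {}, derivative = {}", f.re, f.du); // value = 12, derivative = 7
}
```

Tracking the gradient with respect to many parameters works the same way, either by repeating the pass once per parameter or by carrying a vector of derivative components instead of a single one.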

After that the only thing remaining is to follow the calculated gradient towards victory, success and greatness.

Gradient following

The field of gradient descent has been thoroughly studied to make it kind of good. The naive approach is prone to local minima and wasted time; many gradient descent optimizers exist to tackle these problems. j.a.i.m.e. implements a few of them, and more implementations are very, very welcome!
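Whatever the optimizer, they all refine the same basic update rule. A minimal sketch of the naive version follows; the closed-form loss and gradient stand in for the values the engine would obtain through dual numbers, and the learning rate and names are purely illustrative, not part of the crate's API.

```rust
// Naive gradient following on a single parameter.
fn main() {
    // Loss: L(p) = (p - 4)^2, minimised at p = 4.
    let loss = |p: f32| (p - 4.0) * (p - 4.0);
    let grad = |p: f32| 2.0 * (p - 4.0);

    let learning_rate = 0.1;
    let mut p = 0.0_f32;

    for step in 0..50 {
        // The core of every descent scheme: move against the gradient.
        p -= learning_rate * grad(p);
        if step % 10 == 0 {
            println!("step {:2}: p = {:.4}, loss = {:.6}", step, p, loss(p));
        }
    }
}
```

Optimizers such as momentum-based schemes or Adam are refinements of exactly this update step, aimed at escaping plateaus and converging faster.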

Usage Documentation

Coming soon; for now, try having a look at the examples.

Contributing

Yes please. Make a PR to this repo and I will happily merge it.

A note on optimization

I heavily used Samply for profiling during this project.
