eevee: Generalized NeuroEvolution toolkit, based on NEAT
Nothing really works very well. There's a lot of useful code, and topology search / genome evolution can be done, but it's slow, inefficient, and often fails completely. Expect frequent changes.
Eevee doesn't work on Windows. This is because our default RNG seeding assumes that /dev/urandom exists.
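The seeding assumption above can be pictured roughly like this. This is an illustrative sketch, not Eevee's actual code, and `seed_from_urandom` is a hypothetical name:

```rust
use std::fs::File;
use std::io::Read;

// Hypothetical sketch: draw a 64-bit RNG seed from /dev/urandom.
// This is the kind of default that fails on Windows, where that path
// does not exist.
fn seed_from_urandom() -> std::io::Result<u64> {
    let mut buf = [0u8; 8];
    File::open("/dev/urandom")?.read_exact(&mut buf)?;
    Ok(u64::from_le_bytes(buf))
}
```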
Eevee is a library for leveraging the NEAT algorithm to train genomes that encode neural network behavior. Specifically, it aims to generalize the algorithm so that it can be applied to different domains, and maybe, in the future, to applications that don't involve neural networks at all.
I like to name lots of my projects after Pokemon. I called this one Eevee because, like generic NeuroEvolution, Eevee can evolve in a number of different ways - all of which fill their own niche, are good at some things, and not so good at others. Also because docs.rs/eevee wasn't occupied.
The core iteration loop is that, given a scenario which implements some mechanism by which a genome may be scored with a fitness, Eevee will mutate, reproduce, and cull genomes so that fitness increases. There are some experiments around this in the examples folder.
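That loop can be sketched like so. The `Scenario` trait and `step` function here are invented for illustration and do not reflect Eevee's actual API:

```rust
// Hypothetical sketch of the score / cull / mutate loop; not Eevee's real API.
trait Scenario {
    type Genome: Clone;
    /// Score a genome; higher is fitter.
    fn fitness(&self, genome: &Self::Genome) -> f64;
}

/// One generation: rank by fitness, cull the bottom half,
/// and refill the population with mutated copies of the survivors.
fn step<S: Scenario>(
    scenario: &S,
    pop: &mut Vec<S::Genome>,
    mutate: impl Fn(&S::Genome) -> S::Genome,
) {
    // Sort descending by fitness.
    pop.sort_by(|a, b| {
        scenario
            .fitness(b)
            .partial_cmp(&scenario.fitness(a))
            .unwrap()
    });
    let survivors = pop.len() / 2;
    pop.truncate(survivors);
    for i in 0..survivors {
        let child = mutate(&pop[i]);
        pop.push(child);
    }
}

// Toy scenario: the genome is a number and fitness is its value.
struct Maximize;
impl Scenario for Maximize {
    type Genome = f64;
    fn fitness(&self, g: &f64) -> f64 {
        *g
    }
}
```

Repeating `step` with a fitness-improving mutation operator drives the population's best score upward, which is the optimization pressure the paragraph above describes.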
It's written in Rust and uses nightly toolchains, mostly for incomplete features. Later, when I add CUDA support, it will rely on nightly even more.
I use criterion for benchmarking; if you run benches, it's recommended that you have gnuplot on your system. You can use ./cmp-bench <bench> [branch:-] to compare a benchmark across two branches, which produces a nice report.
I use flamegraph for profiling; if you run benches with profiling, you must have perf on your system. You can use ./profile <bench> to run a pared-down version of any benchmark, and ./cmp-profile <bench> [branch:-] to compare profiles across two branches.
For both of those, I use toml-cli + jq to get the list of benchmarks:
$ toml get Cargo.toml . | jq '.bench | map(.name)[]' -r
I use tarpaulin to measure test coverage.
Thanks to smol-rs/fastrand; I stole the core of their WyHash RNG implementation.
Thanks to TLmaK0/rustneat; that project turned me on to learning about CTRNNs, and I also stole its CTRNN matmul code.
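For reference, a CTRNN update is typically an Euler step of the standard dynamics tau_i * dy_i/dt = -y_i + sum_j w[i][j] * sigma(y_j + theta_j) + input_i. The sketch below illustrates that textbook form only; it is not the matmul code borrowed from rustneat:

```rust
// Illustrative Euler step of a standard CTRNN; not the borrowed matmul code.
//   tau_i * dy_i/dt = -y_i + sum_j w[i][j] * sigma(y_j + theta_j) + input_i
fn ctrnn_step(
    y: &mut [f64],   // neuron states
    w: &[Vec<f64>],  // weight matrix, w[i][j] weights neuron j into neuron i
    theta: &[f64],   // biases
    tau: &[f64],     // per-neuron time constants
    input: &[f64],   // external input
    dt: f64,         // integration step size
) {
    let n = y.len();
    // Sigmoid activation of each neuron's biased state.
    let act: Vec<f64> = (0..n)
        .map(|j| 1.0 / (1.0 + (-(y[j] + theta[j])).exp()))
        .collect();
    for i in 0..n {
        // Weighted sum of incoming activations (the "matmul" row).
        let net: f64 = (0..n).map(|j| w[i][j] * act[j]).sum();
        y[i] += dt * (-y[i] + net + input[i]) / tau[i];
    }
}
```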