evo-rl

Crates.io: evo-rl
lib.rs: evo-rl
version: 0.1.0-alpha.5
source: src
created_at: 2023-11-10 17:00:20.500851
updated_at: 2024-11-04 15:42:33.552835
description: A neuroevolution-based ML library for reinforcement learning inspired by NEAT
homepage: https://www.evosentientai.com/
repository: https://github.com/dawnis/evo_rl
max_upload_size:
id: 1031341
size: 132,963
Author: Dawnis M. Chow (dawnis)

README

Evo RL

Evo RL is a machine learning library built in Rust to explore evolution strategies for creating artificial neural networks. Neural networks are implemented as graphs specified by a direct encoding scheme, which allows crossover during selection.
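To illustrate what a direct encoding buys you, here is a toy NEAT-style sketch in Python (not the library's actual Rust types): each genome is a map from innovation number to a connection gene, so two parents' graphs can be aligned gene-by-gene during crossover.

```python
import random

def crossover(parent_a, parent_b):
    """NEAT-style crossover over directly encoded graphs.

    Genomes are dicts keyed by innovation number. Matching genes are
    inherited at random from either parent; disjoint/excess genes are
    taken from parent_a (assumed here to be the fitter parent).
    """
    child = {}
    for innovation, gene_a in parent_a.items():
        gene_b = parent_b.get(innovation)
        if gene_b is not None:
            child[innovation] = random.choice([gene_a, gene_b])
        else:
            child[innovation] = gene_a
    return child

# Each gene encodes one edge of the network graph: (source, target, weight).
a = {1: ("in0", "h0", 0.5), 2: ("h0", "out0", -0.3), 4: ("in1", "out0", 0.9)}
b = {1: ("in0", "h0", 0.1), 2: ("h0", "out0", 0.7), 3: ("in1", "h0", 0.2)}

child = crossover(a, b)
```

Because genes are aligned by innovation number rather than by position, crossover never scrambles the graph topology: the child inherits a well-defined edge set from its parents.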

Neuroevolution

Neuroevolution is a field in artificial intelligence which leverages evolutionary algorithms to create structured artificial neural networks.

The main evolutionary algorithm in this library is inspired by NEAT (K. O. Stanley and R. Miikkulainen) and implements stochastic universal sampling with truncation as the selection mechanism.
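As a sketch of that selection mechanism (a generic Python illustration, not the library's Rust implementation): truncation first discards the bottom fraction of the population by fitness, then stochastic universal sampling draws all parents at once with evenly spaced pointers over the cumulative fitness wheel, which keeps sampling variance low compared with repeated roulette spins.

```python
import random

def sus_with_truncation(population, fitnesses, n_select, survival_rate=0.5):
    """Truncate to the top fraction by fitness, then select n_select
    parents via stochastic universal sampling (SUS) over the survivors.
    Fitness values are assumed positive."""
    ranked = sorted(zip(population, fitnesses), key=lambda pf: pf[1], reverse=True)
    survivors = ranked[: max(1, int(len(ranked) * survival_rate))]

    total = sum(f for _, f in survivors)
    step = total / n_select
    start = random.uniform(0, step)          # one random phase for all pointers
    pointers = [start + i * step for i in range(n_select)]

    selected, cumulative, idx = [], 0.0, 0
    for p in pointers:
        # Advance to the survivor whose fitness segment contains this pointer.
        while cumulative + survivors[idx][1] < p:
            cumulative += survivors[idx][1]
            idx += 1
        selected.append(survivors[idx][0])
    return selected

# Example: select 4 parents from a population of 10 with descending fitness;
# only the top half survives truncation, so all parents come from it.
parents = sus_with_truncation(list(range(10)), [10 - i for i in range(10)], n_select=4)
```

The `survival_rate` parameter here mirrors the role of the `survival_rate` key in the configuration shown below, though the names and exact semantics in the library's Rust source may differ.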

A survey/discussion of recent advances and other packages in this area as of 2024 can be found in this paper.

Alternatively, EvoJAX presents a more complete and scalable toolkit which implements many neuroevolution algorithms.

Website

This library is part of my startup project, Sentient AI. Please refer there for the roadmap and vision for this library.

Python

A Python package (evo_rl) can be built by running maturin develop in the source directory. Examples are included in the examples directory.

A code snippet is reproduced here:

# A Python script which trains an agent to solve the mountain car task in OpenAI's Gymnasium

import evo_rl
import logging

from utils import MountainCarEnvironment, visualize_gen

import gymnasium as gym
import numpy as np

FORMAT = '%(levelname)s %(name)s %(asctime)-15s %(filename)s:%(lineno)d %(message)s'
logging.basicConfig(format=FORMAT)
logging.getLogger().setLevel(logging.INFO)

population_size = 200

configuration = {
        "population_size": population_size,
        "survival_rate": 0.2,
        "mutation_rate": 0.4, 
        "input_size": 2,
        "output_size": 2,
        "topology_mutation_rate": 0.4,
        "project_name": "mountaincar",
        "project_directory": "mc_agents"
        }

env = gym.make('MountainCarContinuous-v0')
mc = MountainCarEnvironment(env, configuration)

p = evo_rl.PopulationApi(configuration)

while p.generation < 1000:

    for agent in range(population_size):
        mc.evaluate_agent(p, agent)

    if p.fitness > 100:
        break
        
    p.update_population_fitness()
    p.report()
    p.evolve_step()

Running Tests

Verbose

RUST_LOG=[debug/info] cargo test -- --nocapture


cargo fmt