border

Crates.io: border
lib.rs: border
version: 0.0.6
source: src
created_at: 2021-03-13 08:50:01.951689
updated_at: 2023-09-19 12:07:27.200023
description: Reinforcement learning library
repository: https://github.com/taku-y/border
id: 368229
size: 395,587
author: Taku Yoshioka (taku-y)

README

Border

A reinforcement learning library in Rust.


Border consists of the following crates:

  • border-core provides basic traits and functions generic to environments and reinforcement learning (RL) agents.
  • border-py-gym-env is a wrapper around the Gym environments written in Python, with support for pybullet-gym and Atari.
  • border-atari-env is a wrapper around atari-env, which is part of gym-rs.
  • border-tch-agent is a collection of RL agents based on tch. Deep Q network (DQN), implicit quantile network (IQN), and soft actor critic (SAC) are included.
  • border-async-trainer defines traits and functions for asynchronous training of RL agents with multiple actors, each of which runs the sampling loop of an agent and an environment in parallel.

You can use a subset of these crates for your purposes, though border-core is always required. This crate itself is just a collection of examples. See Documentation for more details. The sketch below illustrates the environment/agent split that these crates are organized around.
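
The following is a minimal, self-contained sketch of that environment/agent split in plain Rust. The trait and type names used here (Env, Agent, Step) and the toy Corridor environment are illustrative stand-ins only, not border-core's actual API; consult the documentation for the real traits.

    // Simplified illustration of the environment/agent split described above.
    // NOTE: Env, Agent, and Step are hypothetical stand-ins for this sketch,
    // not border-core's actual types; check the crate documentation for the
    // real API.

    /// One interaction step: the next observation, the reward, and a done flag.
    struct Step<O> {
        obs: O,
        reward: f32,
        done: bool,
    }

    /// An environment produces observations and consumes actions.
    trait Env {
        type Obs;
        type Act;
        fn reset(&mut self) -> Self::Obs;
        fn step(&mut self, act: &Self::Act) -> Step<Self::Obs>;
    }

    /// An agent maps observations to actions (training logic omitted).
    trait Agent<E: Env> {
        fn act(&mut self, obs: &E::Obs) -> E::Act;
    }

    /// A toy 1-D corridor: the agent starts at 0 and must reach position 5.
    struct Corridor {
        pos: i32,
    }

    impl Env for Corridor {
        type Obs = i32;
        type Act = i32; // -1 = move left, +1 = move right

        fn reset(&mut self) -> i32 {
            self.pos = 0;
            self.pos
        }

        fn step(&mut self, act: &i32) -> Step<i32> {
            self.pos += *act;
            let done = self.pos >= 5;
            Step {
                obs: self.pos,
                reward: if done { 1.0 } else { 0.0 },
                done,
            }
        }
    }

    /// A trivial policy that always moves right.
    struct GoRight;

    impl Agent<Corridor> for GoRight {
        fn act(&mut self, _obs: &i32) -> i32 {
            1
        }
    }

    /// A generic sampling loop over any (Env, Agent) pair. Conceptually, this
    /// is what a trainer drives, and what each actor in border-async-trainer
    /// runs in parallel to collect experience.
    fn run_episode<E: Env, A: Agent<E>>(env: &mut E, agent: &mut A) -> f32 {
        let mut obs = env.reset();
        let mut ret = 0.0;
        loop {
            let act = agent.act(&obs);
            let step = env.step(&act);
            ret += step.reward;
            if step.done {
                break;
            }
            obs = step.obs;
        }
        ret
    }

    fn main() {
        let mut env = Corridor { pos: 0 };
        let mut agent = GoRight;
        println!("episode return: {}", run_episode(&mut env, &mut agent));
    }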

Status

Border is experimental and currently under development. API is unstable.

Examples

The examples directory shows how to run some examples. Python >= 3.7 and gym must be installed to run examples that use border-py-gym-env, and some examples additionally require PyBullet Gym. Since the agents used in the examples are based on tch-rs, libtorch must also be installed.

License

Crate                | License
---------------------|-------------------
border-core          | MIT OR Apache-2.0
border-py-gym-env    | MIT OR Apache-2.0
border-atari-env     | GPL-2.0-or-later
border-tch-agent     | MIT OR Apache-2.0
border-async-trainer | MIT OR Apache-2.0
border               | GPL-2.0-or-later
