| Crates.io | rlgym-learn-backend |
| lib.rs | rlgym-learn-backend |
| version | 0.1.1 |
| created_at | 2025-01-27 18:44:58.177368+00 |
| updated_at | 2025-01-27 18:55:45.670594+00 |
| description | Backend for the more expensive parts of the rlgym-learn python module |
| homepage | |
| repository | |
| max_upload_size | |
| id | 1532729 |
| size | 224,887 |
A flexible framework for efficiently using RLGym v2 to train models.
To install the RLGym API: `pip install rlgym`. If you're here for Rocket League, you can use `pip install rlgym[rl-sim]` instead to get the RLGym API as well as the Rocket League / Sim submodules. To install this framework: `pip install git+https://github.com/JPK314/rlgym-learn` (coming to PyPI soon).

See the RLGym website for complete documentation and a demonstration of functionality. For now, you can take a look at quick_start_guide.py and speed_test.py to get a sense of what's going on, or at the sketch below.
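To give a flavor of what a setup looks like before opening the quick start guide, here is a minimal sketch of an environment factory built on the RLGym v2 API with the `rlgym[rl-sim]` install above. The module paths, class names, and constructor parameters shown (e.g. `RocketSimEngine`, `KickoffMutator`, `TimeoutCondition`) are assumptions about the current rlgym release, and the idea that rlgym-learn consumes a factory callable like this is also an assumption; quick_start_guide.py is the authoritative example.

```python
# Illustrative sketch only: module paths, class names, and constructor
# parameters below are assumptions about the RLGym v2 API and may not match
# the installed rlgym release -- see quick_start_guide.py for the real thing.
from rlgym.api import RLGym                                                # assumed path
from rlgym.rocket_league.sim import RocketSimEngine                       # assumed path
from rlgym.rocket_league.state_mutators import KickoffMutator             # assumed path
from rlgym.rocket_league.obs_builders import DefaultObs                   # assumed path
from rlgym.rocket_league.action_parsers import LookupTableAction          # assumed path
from rlgym.rocket_league.reward_functions import GoalReward               # assumed path
from rlgym.rocket_league.done_conditions import GoalCondition, TimeoutCondition  # assumed path


def build_rlgym_v2_env():
    # RLGym v2 composes an environment out of small, swappable pieces.
    return RLGym(
        state_mutator=KickoffMutator(),          # reset each episode to a kickoff state
        obs_builder=DefaultObs(),                # turn game state into per-agent observations
        action_parser=LookupTableAction(),       # map discrete model outputs to car controls
        reward_fn=GoalReward(),                  # reward goals (exact behavior assumed)
        termination_cond=GoalCondition(),        # end the episode when a goal is scored
        truncation_cond=TimeoutCondition(300.),  # or after a time limit (units assumed)
        transition_engine=RocketSimEngine(),     # RocketSim backs the physics step
    )


# Assumption: rlgym-learn accepts a callable like the one above so it can build
# a fresh environment in each of its parallel worker processes; the learner
# configuration itself is shown in quick_start_guide.py.
```

The factory pattern matters here because the framework runs many environment instances in parallel, so each worker needs to construct its own environment rather than share one object.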
This project was built using Matthew Allen's wonderful RLGym-PPO as a starting point. Although this project has grown to share almost no code with its predecessor, I couldn't have done this without his support in talking through the design of abstractions and without RLGym-PPO to reference. A couple of files in this project remain quite similar, or even identical, to their counterparts in RLGym-PPO.
This framework is designed to be usable in every situation you might use the RLGym API in. However, there are a couple of assumptions about the usage of RLGym that are baked into the functionality of this framework. These are pretty niche, but are listed below just in case: