| Crates.io | arboriter-mcts |
| lib.rs | arboriter-mcts |
| version | 0.3.0 |
| created_at | 2025-04-29 21:24:27.10215+00 |
| updated_at | 2025-12-14 05:41:50.437599+00 |
| description | A Monte Carlo Tree Search implementation built on the arboriter tree traversal primitive |
| homepage | |
| repository | https://github.com/mrorigo/arboriter-mcts |
| max_upload_size | |
| id | 1654062 |
| size | 184,411 |
A flexible, well-documented Monte Carlo Tree Search (MCTS) implementation for Rust, built for game AI and decision-making processes.
Add this to your Cargo.toml:
[dependencies]
arboriter-mcts = "0.3.0"
Here's a simple example of how to use the library:
use arboriter_mcts::{MCTS, MCTSConfig, GameState};
// Implement GameState for your game
impl GameState for MyGame {
    type Action = MyAction;
    type Player = MyPlayer;

    // Return legal actions from the current state
    fn get_legal_actions(&self) -> Vec<Self::Action> { /* ... */ }

    // Apply an action and return the new state
    fn apply_action(&self, action: &Self::Action) -> Self { /* ... */ }

    // Check if the game is over
    fn is_terminal(&self) -> bool { /* ... */ }

    // Get the result (0.0 = loss, 0.5 = draw, 1.0 = win)
    fn get_result(&self, for_player: &Self::Player) -> f64 { /* ... */ }

    // Get the current player
    fn get_current_player(&self) -> Self::Player { /* ... */ }
}
// Create a configuration for the search
let config = MCTSConfig::default()
    .with_exploration_constant(1.414)
    .with_max_iterations(10_000);
// Create the MCTS searcher with initial state
let mut mcts = MCTS::new(initial_state, config);
// Find the best action
let best_action = mcts.search()?;
// Get search statistics
println!("{}", mcts.get_statistics().summary());
The repository includes complete examples for common games, Tic-Tac-Toe and Connect Four, that demonstrate the MCTS algorithm in action.
To run the examples:
# Play Tic-Tac-Toe against the AI
cargo run --example tic_tac_toe
# Play Connect Four against the AI
cargo run --example connect_four
Monte Carlo Tree Search combines tree search with random sampling to find optimal decisions:
Selection: Starting from the root, select successive child nodes down to a leaf node using a selection policy that balances exploration and exploitation.
Expansion: If the leaf node is not terminal and has untried actions, create one or more child nodes by applying those actions.
Simulation: From the new node, simulate a game to completion using a default policy (often random play).
Backpropagation: Update the statistics (visit counts and rewards) for every node on the path from the newly expanded node back up to the root.
This process is repeated many times, gradually building a tree of game states and improving the value estimates for each action.
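During selection, each child is typically scored with the UCB1 formula, which trades off the average reward observed so far against how rarely the child has been visited. The following sketch (not the crate's internal code) shows the score that the exploration constant mentioned above feeds into:
// Illustrative UCB1 score for the selection phase. The exploration constant c is
// the value passed to with_exploration_constant (1.414 ~ sqrt(2) is a common default).
fn ucb1(mean_reward: f64, parent_visits: u32, child_visits: u32, c: f64) -> f64 {
    if child_visits == 0 {
        return f64::INFINITY; // unvisited children are tried first
    }
    mean_reward + c * ((parent_visits as f64).ln() / child_visits as f64).sqrt()
}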
Our implementation includes memory-efficient node pooling, which recycles nodes across sequential searches instead of constantly allocating and deallocating memory. This significantly improves performance, especially for multi-step decision processes.
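The idea behind a node pool, shown here in a generic form that illustrates the technique rather than the crate's actual pool, is to keep freed slots on a free list and hand them back out instead of allocating fresh memory:
// Minimal free-list pool: alloc reuses a recycled slot when one is available.
struct Pool<T> {
    slots: Vec<Option<T>>,
    free: Vec<usize>,
}

impl<T> Pool<T> {
    fn new() -> Self {
        Pool { slots: Vec::new(), free: Vec::new() }
    }

    fn alloc(&mut self, value: T) -> usize {
        if let Some(i) = self.free.pop() {
            self.slots[i] = Some(value); // reuse a previously freed slot
            i
        } else {
            self.slots.push(Some(value)); // grow only when nothing can be recycled
            self.slots.len() - 1
        }
    }

    fn release(&mut self, i: usize) {
        self.slots[i] = None; // drop the node but remember the slot for reuse
        self.free.push(i);
    }
}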
You can customize all aspects of the MCTS algorithm to match your specific needs:
// Build a searcher with custom policies for each phase
let mut mcts = MCTS::new(initial_state, config)
    // Use UCB1 for selection with a custom exploration constant
    .with_selection_policy(UCB1Policy::new(1.414))
    // Choose how to expand nodes and set priors (e.g. for PUCT)
    .with_expansion_policy(RandomExpansionPolicy::new())
    // Use domain-specific heuristics for simulation
    .with_simulation_policy(HeuristicPolicy::new(my_heuristic_fn))
    // Use RAVE backpropagation (a weighted backpropagation policy is another option)
    .with_backpropagation_policy(RavePolicy::new(0.5));
// Configure time-based search limits
let action = mcts.search_for_time(Duration::from_secs(5))?;
// Enable node pooling for better performance
let config_with_pooling = MCTSConfig::default()
    .with_best_child_criteria(BestChildCriteria::MostVisits)
    .with_node_pool_config(1000); // Enable node pooling
// Create MCTS with node pooling
let mut mcts_with_pool = MCTS::with_node_pool(
    initial_state,
    config_with_pooling,
);
// For sequential searches (e.g., playing a full game), you can reset the root
// to reuse nodes instead of creating a new MCTS instance each time
mcts_with_pool.reset_root(new_state);
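Putting these pieces together, a sequential search over a whole game might look roughly like the following sketch. It assumes the state type is Clone and that errors from search are propagated with ? as in the snippets above:
let mut state = initial_state.clone();
while !state.is_terminal() {
    // Reuse the pooled nodes for the new root instead of rebuilding the searcher
    mcts_with_pool.reset_root(state.clone());
    let action = mcts_with_pool.search()?; // or search_for_time(Duration::from_secs(1))?
    state = state.apply_action(&action);
}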
The simulate method in our policies now returns (f64, Vec<Action>). This action trace is critical for RAVE to update statistics for all moves played during the simulation (All Moves As First). Ensure your GameState implementation also returns this trace if you override simulate_random_playout.
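As an illustration of what such a trace looks like, here is a sketch built only on the GameState methods shown earlier (not the crate's SimulationPolicy trait) of a playout that records its moves; a real default policy would pick actions at random rather than taking the first legal one:
fn playout_with_trace<S: GameState>(start: S, for_player: &S::Player) -> (f64, Vec<S::Action>) {
    let mut state = start;
    let mut trace = Vec::new();
    while !state.is_terminal() {
        // Taking the first legal action keeps this sketch dependency-free;
        // substitute random or heuristic selection in practice.
        let action = state
            .get_legal_actions()
            .into_iter()
            .next()
            .expect("non-terminal states must have at least one legal action");
        state = state.apply_action(&action);
        trace.push(action);
    }
    (state.get_result(for_player), trace)
}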
Node pooling provides significant performance benefits by reducing memory allocation overhead, especially for sequential searches such as playing out a full game.
The statistics output will show node pool usage details when enabled.
For detailed documentation and API reference, visit docs.rs/arboriter-mcts.
Contributions are welcome! See CONTRIBUTING.md for guidelines.
This crate is licensed under the MIT license. See LICENSE for details.