arboriter-mcts

crates.io: arboriter-mcts
lib.rs: arboriter-mcts
version: 0.3.0
created_at: 2025-04-29 21:24:27.10215+00
updated_at: 2025-12-14 05:41:50.437599+00
description: A Monte Carlo Tree Search implementation built on the arboriter tree traversal primitive
repository: https://github.com/mrorigo/arboriter-mcts
id: 1654062
size: 184,411
owner: Mattias (mrorigo)
documentation: https://docs.rs/arboriter-mcts

README

arboriter-mcts


A flexible, well-documented Monte Carlo Tree Search (MCTS) implementation for Rust, built for game AI and decision-making processes.

Features

  • 🧰 Generic implementation that works with any game or decision process
  • 🔄 Multiple selection policies
    • UCB1: Standard Upper Confidence Bound for Trees
    • UCB1-Tuned: Variance-aware variant that uses the actual reward variance
    • PUCT: The selection policy used in AlphaZero, with support for state-dependent priors via ExpansionPolicy
  • ⚡ RAVE (Rapid Action Value Estimation): True AMAF implementation with simulation traces
  • 🎲 Customizable simulation strategies to match your domain knowledge
  • 🌳 Customizable expansion strategies via ExpansionPolicy for setting priors
  • 🚀 Memory-efficient node pooling for improved performance in sequential searches
  • 📊 Detailed search statistics and visualization for debugging and analysis
  • 🧪 Comprehensive test suite ensuring correctness and reliability
  • 📝 Thorough documentation with examples for easy integration

Installation

Add this to your Cargo.toml:

[dependencies]
arboriter-mcts = "0.3.0"

Basic Usage

Here's a simple example of how to use the library:

use arboriter_mcts::{MCTS, MCTSConfig, GameState};

// Implement GameState for your game
impl GameState for MyGame {
    type Action = MyAction;
    type Player = MyPlayer;

    // Return legal actions from the current state
    fn get_legal_actions(&self) -> Vec<Self::Action> { /* ... */ }

    // Apply an action and return the new state
    fn apply_action(&self, action: &Self::Action) -> Self { /* ... */ }

    // Check if the game is over
    fn is_terminal(&self) -> bool { /* ... */ }

    // Get the result (0.0 = loss, 0.5 = draw, 1.0 = win)
    fn get_result(&self, for_player: &Self::Player) -> f64 { /* ... */ }

    // Get the current player
    fn get_current_player(&self) -> Self::Player { /* ... */ }
}

// Create a configuration for the search
let config = MCTSConfig::default()
    .with_exploration_constant(1.414)
    .with_max_iterations(10_000);

// Create the MCTS searcher with initial state
let mut mcts = MCTS::new(initial_state, config);

// Find the best action
let best_action = mcts.search()?;

// Get search statistics
println!("{}", mcts.get_statistics().summary());
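
To make this concrete, here is a hypothetical end-to-end example for one-pile Nim (take 1-3 stones; whoever takes the last stone wins). It uses only the trait methods shown above; exact trait bounds may differ in the real crate, so treat it as a sketch rather than a drop-in program:

use arboriter_mcts::{GameState, MCTS, MCTSConfig};

// One-pile Nim: players alternately take 1-3 stones; taking the last stone wins.
#[derive(Clone)]
struct Nim {
    stones: u32,
    current: u8, // player 0 or 1
}

impl GameState for Nim {
    type Action = u32; // number of stones to take
    type Player = u8;

    fn get_legal_actions(&self) -> Vec<u32> {
        (1..=3).filter(|&n| n <= self.stones).collect()
    }

    fn apply_action(&self, action: &u32) -> Self {
        Nim { stones: self.stones - action, current: 1 - self.current }
    }

    fn is_terminal(&self) -> bool {
        self.stones == 0
    }

    fn get_result(&self, for_player: &u8) -> f64 {
        // When the pile is empty, the player who just moved (not `current`) won.
        if *for_player == self.current { 0.0 } else { 1.0 }
    }

    fn get_current_player(&self) -> u8 {
        self.current
    }
}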

Running the Examples

The repository includes complete examples for common games that demonstrate the MCTS algorithm in action:

  • Tic-Tac-Toe: A simple 3x3 game where you can play against the AI
  • Connect Four: A more complex 7x6 game with stronger tactical elements

To run the examples:

# Play Tic-Tac-Toe against the AI
cargo run --example tic_tac_toe

# Play Connect Four against the AI
cargo run --example connect_four

How MCTS Works

Monte Carlo Tree Search combines tree search with random sampling to find optimal decisions:

  1. Selection: Starting from the root, select successive child nodes down to a leaf node using a selection policy that balances exploration and exploitation.

  2. Expansion: If the leaf node is not terminal and has untried actions, create one or more child nodes by applying those actions.

  3. Simulation: From the new node, simulate a game to completion using a default policy (often random play).

  4. Backpropagation: Update the statistics (visit counts and rewards) for all nodes on the path from the expanded node back to the root.

This process is repeated many times, gradually building a tree of game states and improving the value estimates for each action.
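
For intuition about the selection step, the standard UCB1 score that ranks children can be computed as in this standalone sketch (not the crate's internal code):

// UCB1: mean reward plus an exploration bonus that shrinks as a child
// is visited more often relative to its parent.
fn ucb1(total_reward: f64, visits: u64, parent_visits: u64, c: f64) -> f64 {
    if visits == 0 {
        return f64::INFINITY; // always try unvisited children first
    }
    let mean = total_reward / visits as f64;
    let bonus = c * ((parent_visits as f64).ln() / visits as f64).sqrt();
    mean + bonus
}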

Our implementation includes memory-efficient node pooling, which recycles nodes across sequential searches instead of repeatedly allocating and deallocating memory. This significantly improves performance, especially for multi-step decision processes.

Advanced Customization

You can customize all aspects of the MCTS algorithm to match your specific needs:

// Build a searcher with custom policies for each phase of the algorithm
let mut mcts = MCTS::new(initial_state, config)
    // Use UCB1 selection with a custom exploration constant
    .with_selection_policy(UCB1Policy::new(1.414))

    // Choose how to expand nodes and set priors (e.g., for PUCT)
    .with_expansion_policy(RandomExpansionPolicy::new())

    // Use domain-specific heuristics for simulation
    .with_simulation_policy(HeuristicPolicy::new(my_heuristic_fn))

    // Use RAVE backpropagation with a weight of 0.5
    .with_backpropagation_policy(RavePolicy::new(0.5));

// Configure time-based search limits
let action = mcts.search_for_time(Duration::from_secs(5))?;

// Enable node pooling for better performance
let config_with_pooling = MCTSConfig::default()
    .with_best_child_criteria(BestChildCriteria::MostVisits)
    .with_node_pool_config(1000);  // Enable node pooling

// Create MCTS with node pooling
let mut mcts_with_pool = MCTS::with_node_pool(
    initial_state,
    config_with_pooling,
);

// For sequential searches (e.g., playing a full game), you can reset the root
// to reuse nodes instead of creating a new MCTS instance each time
mcts_with_pool.reset_root(new_state);
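
Putting reset_root to work, a full-game loop might look like this sketch (MyGame::new() is a placeholder for your own constructor):

// Sequential searches that reuse pooled nodes between moves.
let mut state = MyGame::new();
let mut mcts = MCTS::with_node_pool(state.clone(), config_with_pooling);

while !state.is_terminal() {
    let action = mcts.search()?;          // search from the current root
    state = state.apply_action(&action);  // advance the game
    mcts.reset_root(state.clone());       // recycle nodes for the next search
}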

Note on RAVE and Simulation

The simulate method in the bundled simulation policies returns (f64, Vec<Action>). This action trace is what enables RAVE to update statistics for every move played during the simulation (All Moves As First, AMAF). If you override simulate_random_playout, make sure your GameState implementation returns this trace as well.
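
As a rough illustration, a trace-returning random playout could look like the sketch below. The helper name and generic bounds are assumptions for illustration (and it uses the rand crate); only the (f64, Vec<Action>) return shape comes from the crate's contract:

use arboriter_mcts::GameState;

// Hypothetical helper: play random moves to the end of the game and collect
// every action taken, so RAVE/AMAF can credit all simulated moves.
fn random_playout_with_trace<S>(start: &S, for_player: &S::Player) -> (f64, Vec<S::Action>)
where
    S: GameState + Clone,
    S::Action: Clone,
{
    let mut state = start.clone();
    let mut trace = Vec::new();
    while !state.is_terminal() {
        let actions = state.get_legal_actions();
        let action = actions[rand::random::<usize>() % actions.len()].clone();
        trace.push(action.clone());
        state = state.apply_action(&action);
    }
    (state.get_result(for_player), trace)
}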

Node pooling provides significant performance benefits by reducing memory allocation overhead, especially for:

  • Games with large branching factors
  • Sequential searches during gameplay
  • Deep search trees
  • Time-critical applications

The statistics output will show node pool usage details when enabled.

Documentation

For detailed documentation and API reference, visit docs.rs/arboriter-mcts.

Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines.

License

This crate is licensed under the MIT license. See LICENSE for details.

Acknowledgments

  • Built on the arboriter tree traversal primitive
  • Inspired by Tyler Glaiel's blog post on tree traversal primitives
  • Influenced by successful MCTS implementations in AlphaGo and other game AI systems