| Crates.io | kodegen_tools_reasoner |
| lib.rs | kodegen_tools_reasoner |
| version | 0.10.9 |
| created_at | 2025-11-10 13:30:50.523302+00 |
| updated_at | 2026-01-02 15:16:44.995518+00 |
| description | KODEGEN.ᴀɪ: Memory-efficient, Blazing-Fast, MCP tools for code generation agents. |
| homepage | https://kodegen.ai |
| repository | https://github.com/cyrup-ai/kodegen-tools-reasoner |
| max_upload_size | |
| id | 1925495 |
| size | 4,442,244 |
Memory-efficient, Blazing-Fast MCP tools for code generation agents with advanced reasoning capabilities.
kodegen-tools-reasoner is a high-performance MCP (Model Context Protocol) server that provides sophisticated reasoning strategies for AI agents. It implements multiple search algorithms including Beam Search, Monte Carlo Tree Search (MCTS), and experimental variants designed for complex problem-solving with branching and revision support.
```bash
# Clone the repository
git clone https://github.com/cyrup-ai/kodegen-tools-reasoner.git
cd kodegen-tools-reasoner

# Build the project
cargo build --release

# Run the server
cargo run --release
```
The server will start on the default port (30453) and expose the MCP tool interface.
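MCP clients invoke tools over JSON-RPC 2.0 with a `tools/call` request. As a hedged illustration (transport and framing depend on the client, and the argument values here are made up), a call to this server might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "reasoner",
    "arguments": {
      "thought": "Break the task into subproblems",
      "thoughtNumber": 1,
      "totalThoughts": 4,
      "nextThoughtNeeded": true,
      "strategyType": "beam_search"
    }
  }
}
```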
```bash
# Run in development mode
cargo run

# Run in release mode
cargo run --release

# Run with logging
RUST_LOG=info cargo run

# Run the comprehensive reasoner demo
cargo run --example reasoner_demo
```
Breadth-first exploration that maintains the top N paths simultaneously.

Best for: Balanced exploration, general problem-solving

Parameters:

- `beamWidth`: Number of paths to maintain (default: 3)

```json
{
  "thought": "Analyzing algorithm complexity",
  "thoughtNumber": 1,
  "totalThoughts": 5,
  "nextThoughtNeeded": true,
  "strategyType": "beam_search",
  "beamWidth": 3
}
```
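The core beam-search loop can be sketched in a few lines. This is a minimal illustration of the idea, not the crate's implementation: the additive scoring and the candidate list are stand-ins (the real server scores thoughts with embeddings).

```rust
// Sketch of one beam-search step: expand every surviving path with each
// candidate thought, then keep only the top `beam_width` paths by score.
fn beam_step(
    paths: Vec<(Vec<&'static str>, f64)>,
    candidates: &[(&'static str, f64)],
    beam_width: usize,
) -> Vec<(Vec<&'static str>, f64)> {
    let mut expanded: Vec<(Vec<&'static str>, f64)> = Vec::new();
    for (path, score) in &paths {
        for (thought, s) in candidates {
            let mut new_path = path.clone();
            new_path.push(thought);
            expanded.push((new_path, score + s));
        }
    }
    // Sort best-first and truncate to the beam width.
    expanded.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    expanded.truncate(beam_width);
    expanded
}

fn main() {
    let start = vec![(vec!["root"], 0.0)];
    let candidates = [("a", 0.9), ("b", 0.5), ("c", 0.1)];
    let beam = beam_step(start, &candidates, 2);
    assert_eq!(beam.len(), 2);
    assert_eq!(*beam[0].0.last().unwrap(), "a");
    println!("{beam:?}");
}
```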
Standard MCTS with UCB1/PUCT selection for exploration-exploitation balance.

Best for: Decision trees, game-like problems, optimization

Parameters:

- `numSimulations`: Number of rollouts per thought (default: 50)

```json
{
  "thought": "Design distributed caching strategy",
  "thoughtNumber": 1,
  "totalThoughts": 3,
  "nextThoughtNeeded": true,
  "strategyType": "mcts",
  "numSimulations": 100
}
```
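The UCB1 rule mentioned above balances exploitation (a child's average value) against exploration (how rarely it has been visited). The sketch below uses the textbook exploration constant `sqrt(2)`; the constant this crate actually uses is not shown here and may differ.

```rust
// UCB1 score for one child node during MCTS selection:
// average value plus an exploration bonus that grows when the child
// has few visits relative to its parent.
fn ucb1(total_value: f64, visits: u32, parent_visits: u32, c: f64) -> f64 {
    if visits == 0 {
        return f64::INFINITY; // unvisited children are always tried first
    }
    let exploit = total_value / visits as f64;
    let explore = c * ((parent_visits as f64).ln() / visits as f64).sqrt();
    exploit + explore
}

fn main() {
    let c = 2.0_f64.sqrt();
    // Two children with the same average value (0.5): the less-visited
    // one gets a larger exploration bonus, so it is selected next.
    let often = ucb1(5.0, 10, 100, c);
    let rarely = ucb1(1.0, 2, 100, c);
    assert!(rarely > often);
    assert!(ucb1(0.0, 0, 100, c).is_infinite());
    println!("often = {often:.3}, rarely = {rarely:.3}");
}
```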
MCTS with 10% higher exploration bonus for creative problem-solving.
Best for: Creative solutions, exploring novel approaches
```json
{
  "thought": "Optimize database query performance",
  "thoughtNumber": 1,
  "totalThoughts": 2,
  "nextThoughtNeeded": true,
  "strategyType": "mcts_002_alpha",
  "numSimulations": 50
}
```
MCTS variant that rewards longer, more detailed reasoning paths.
Best for: Detailed analysis, thorough explanations
```json
{
  "thought": "Analyze microservices vs monolithic architecture",
  "thoughtNumber": 1,
  "totalThoughts": 2,
  "nextThoughtNeeded": true,
  "strategyType": "mcts_002alt_alpha",
  "numSimulations": 50
}
```
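"Rewards longer paths" can be modeled as a depth-dependent bonus added to a node's raw score. This is only a sketch of the idea; the `0.05` weight and the `[0, 1]` clamp are illustrative assumptions, not values taken from the crate.

```rust
// Illustrative depth reward: boost a node's score in proportion to its
// reasoning depth, so longer, more detailed paths outrank equally
// scored shallow ones. Weight and clamp are made-up example values.
fn depth_adjusted_score(raw_score: f64, depth: u32) -> f64 {
    let bonus = 0.05 * depth as f64;
    (raw_score + bonus).min(1.0) // keep scores in [0, 1]
}

fn main() {
    // Same raw score, different depths: the deeper path wins.
    assert!(depth_adjusted_score(0.5, 4) > depth_adjusted_score(0.5, 1));
    // The bonus never pushes a score past 1.0.
    assert!(depth_adjusted_score(0.99, 10) <= 1.0);
    println!("{}", depth_adjusted_score(0.5, 4));
}
```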
`reasoner` — Process thoughts step-by-step with advanced reasoning strategies.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `thought` | string | Yes | Current reasoning step text |
| `thoughtNumber` | integer | Yes | Current thought index (1-based) |
| `totalThoughts` | integer | Yes | Estimated total thoughts needed |
| `nextThoughtNeeded` | boolean | Yes | Whether more reasoning is required |
| `strategyType` | string | No | Reasoning strategy (default: `"beam_search"`) |
| `beamWidth` | integer | No | Paths to maintain for beam search (default: 3) |
| `numSimulations` | integer | No | MCTS rollouts per thought (default: 50) |
| `parentId` | string | No | Parent node ID for branching thoughts |
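Branching is driven by `parentId`: passing the `nodeId` from an earlier response attaches the new thought as a child of that node rather than continuing the current tip. A hypothetical branching call (the UUID value is purely illustrative):

```json
{
  "thought": "Alternative approach: use consistent hashing instead",
  "thoughtNumber": 2,
  "totalThoughts": 3,
  "nextThoughtNeeded": true,
  "strategyType": "beam_search",
  "parentId": "3f2a9c1e-0000-0000-0000-000000000000"
}
```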
```json
{
  "nodeId": "uuid-v4",
  "thought": "echoed input thought",
  "score": 0.85,
  "depth": 2,
  "isComplete": false,
  "nextThoughtNeeded": true,
  "possiblePaths": 3,
  "bestScore": 0.92,
  "strategyUsed": "beam_search",
  "thoughtNumber": 2,
  "totalThoughts": 5,
  "stats": {
    "totalNodes": 15,
    "averageScore": 0.78,
    "maxDepth": 3,
    "branchingFactor": 2.1,
    "strategyMetrics": {}
  }
}
```
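The aggregate `stats` fields can be understood as simple reductions over the thought tree. The sketch below shows one plausible derivation; the `Node` type and traversal are illustrative assumptions, not the server's internal representation.

```rust
// Derive totalNodes, averageScore, maxDepth, and branchingFactor from a
// flat list of tree nodes (children stored as indices).
struct Node {
    score: f64,
    depth: u32,
    children: Vec<usize>,
}

fn tree_stats(nodes: &[Node]) -> (usize, f64, u32, f64) {
    let total_nodes = nodes.len();
    let average_score = nodes.iter().map(|n| n.score).sum::<f64>() / total_nodes as f64;
    let max_depth = nodes.iter().map(|n| n.depth).max().unwrap_or(0);
    // branchingFactor: mean child count over internal (non-leaf) nodes.
    let internal: Vec<_> = nodes.iter().filter(|n| !n.children.is_empty()).collect();
    let branching_factor = if internal.is_empty() {
        0.0
    } else {
        internal.iter().map(|n| n.children.len()).sum::<usize>() as f64 / internal.len() as f64
    };
    (total_nodes, average_score, max_depth, branching_factor)
}

fn main() {
    let nodes = vec![
        Node { score: 0.9, depth: 0, children: vec![1, 2] },
        Node { score: 0.7, depth: 1, children: vec![] },
        Node { score: 0.8, depth: 1, children: vec![] },
    ];
    let (n, avg, depth, bf) = tree_stats(&nodes);
    assert_eq!(n, 3);
    assert!((avg - 0.8).abs() < 1e-9);
    assert_eq!(depth, 1);
    assert!((bf - 2.0).abs() < 1e-9);
    println!("totalNodes={n} averageScore={avg:.2} maxDepth={depth} branchingFactor={bf:.1}");
}
```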
```
┌──────────────────────────────────────────┐
│            MCP Tool Interface            │
│               (reasoner)                 │
└─────────────────────┬────────────────────┘
                      │
┌─────────────────────▼────────────────────┐
│                 Reasoner                 │
│     (Route requests to strategies)       │
└─────────────────────┬────────────────────┘
                      │
        ┌─────────────┼─────────────┐
        │             │             │
┌───────▼──────┐ ┌────▼─────┐ ┌─────▼────────┐
│  BeamSearch  │ │   MCTS   │ │ MCTS Variants│
└───────┬──────┘ └────┬─────┘ └─────┬────────┘
        │             │             │
        └─────────────┼─────────────┘
                      │
             ┌────────▼────────┐
             │  BaseStrategy   │
             │  (Stella 400M   │
             │   Embeddings)   │
             └────────┬────────┘
                      │
             ┌────────▼────────┐
             │  StateManager   │
             │  (LRU Cache +   │
             │   HashMap)      │
             └─────────────────┘
```
Key configuration constants in `src/types.rs`:

```rust
pub const CONFIG: Config = Config {
    beam_width: 3,        // Top paths to maintain
    max_depth: 5,         // Maximum reasoning depth
    min_score: 0.5,       // Viability threshold
    temperature: 0.7,     // Thought diversity
    cache_size: 1000,     // LRU cache entries
    default_strategy: "beam_search",
    num_simulations: 50,  // MCTS rollouts
    // ... additional MCTS weights
};
```
```bash
# Run all tests
cargo test

# Run tests with output
cargo test -- --nocapture

# Run specific test
cargo test test_name
```

```bash
# Format code
cargo fmt

# Check formatting
cargo fmt -- --check

# Run linter
cargo clippy

# Run linter with all warnings
cargo clippy -- -W clippy::all
```
```bash
# Build for native target
cargo build --release

# Build for WASM
cargo build --target wasm32-unknown-unknown
```
The reasoner is optimized for high-performance operation.

Monitor embedding cache performance:

```rust
let stats = reasoner.get_cache_stats();
println!("Hit rate: {:.2}%", stats.hits as f64 / (stats.hits + stats.misses) as f64 * 100.0);
println!("Memory usage: {} bytes", stats.size_bytes);
println!("Evictions: {}", stats.evictions);
```
Contributions are welcome! Please run `cargo fmt` and `cargo clippy` before submitting changes.

Licensed under either of the licenses listed in the repository, at your option.
Built by KODEGEN.ᴀɪ for high-performance AI agent reasoning.