| Crates.io | tenrso-planner |
| lib.rs | tenrso-planner |
| version | 0.1.0-alpha.2 |
| created_at | 2025-11-08 08:50:23.539168+00 |
| updated_at | 2025-12-17 06:35:14.628095+00 |
| description | Contraction order planning and optimization for TenRSo |
| homepage | https://github.com/cool-japan/tenrso |
| repository | https://github.com/cool-japan/tenrso |
| max_upload_size | |
| id | 1922666 |
| size | 566,669 |
Production-grade contraction order planning and optimization for tensor networks.
Part of the TenRSo tensor computing stack.
tenrso-planner implements intelligent planning for tensor network contractions, particularly those expressed in Einstein summation notation. It provides state-of-the-art algorithms for finding efficient execution plans that minimize computational cost and memory usage.
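The contraction order matters enormously: the same einsum can cost orders of magnitude more or fewer operations depending on which pairwise contraction is performed first. A back-of-the-envelope illustration (plain arithmetic, not using the crate API) for the chain `ij,jk,kl->il`:

```rust
// Two ways to evaluate A(i x j) * B(j x k) * C(k x l), counted in multiply-adds.
// Contracting an (m x n) matrix with an (n x p) matrix costs roughly m*n*p.
fn matmul_cost(m: usize, n: usize, p: usize) -> usize {
    m * n * p
}

fn main() {
    let (i, j, k, l) = (10, 1000, 10, 1000);

    // Order 1: (A*B) first, then multiply the (i x k) result by C.
    let ab_then_c = matmul_cost(i, j, k) + matmul_cost(i, k, l); // 200_000

    // Order 2: (B*C) first, then multiply A by the (j x l) result.
    let bc_then_a = matmul_cost(j, k, l) + matmul_cost(i, j, l); // 20_000_000

    println!("(AB)C costs {ab_then_c} multiply-adds");
    println!("A(BC) costs {bc_then_a} multiply-adds");
}
```

Here the two orders differ by a factor of 100; a planner's job is to find the cheap one automatically.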
✅ 6 Planning Algorithms
✅ Cost Modeling
✅ Representation Selection
✅ Cache-Aware Tiling
✅ Production Ready
[dependencies]
tenrso-planner = "0.1.0-alpha.1"
[dependencies]
tenrso-planner = { version = "0.1.0-alpha.2", features = ["serde"] }
serde: Enable serialization/deserialization of plans
use tenrso_planner::{greedy_planner, EinsumSpec, PlanHints};
// Parse Einstein summation notation
let spec = EinsumSpec::parse("ij,jk->ik")?;
// Define tensor shapes
let shapes = vec![vec![100, 200], vec![200, 300]];
// Create a plan with default hints
let hints = PlanHints::default();
let plan = greedy_planner(&spec, &shapes, &hints)?;
// Inspect plan details
println!("Estimated FLOPs: {:.2e}", plan.estimated_flops);
println!("Peak memory: {} bytes", plan.estimated_memory);
println!("Contraction steps: {}", plan.nodes.len());
Automatically selects the best algorithm based on problem characteristics:
use tenrso_planner::{AdaptivePlanner, Planner, PlanHints};
let planner = AdaptivePlanner::new();
let hints = PlanHints::default();
let plan = planner.make_plan(
"ij,jk,kl->il",
&[vec![10, 20], vec![20, 30], vec![30, 10]],
&hints
)?;
Algorithm Selection:
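The precise selection rules are internal to AdaptivePlanner; as a rough mental model only, with hypothetical thresholds that are not taken from the crate, the dispatch might look like this:

```rust
// Hypothetical dispatch -- illustrative thresholds only, not the actual
// rules used by AdaptivePlanner.
enum Strategy {
    ExhaustiveDp,     // small networks: an optimal plan is affordable
    Greedy,           // medium networks: keep planning cost negligible
    StochasticSearch, // large networks: trade planning time for plan quality
}

fn pick_strategy(num_tensors: usize) -> Strategy {
    match num_tensors {
        0..=10 => Strategy::ExhaustiveDp,
        11..=20 => Strategy::Greedy,
        _ => Strategy::StochasticSearch,
    }
}
```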
Fast heuristic that greedily selects the lowest-cost contraction at each step:
use tenrso_planner::{GreedyPlanner, Planner};
let planner = GreedyPlanner::new();
let plan = planner.make_plan(spec, shapes, hints)?;
- When to use: Fast planning required, many tensors (> 10)
- Complexity: O(n³) time, O(n) space
- Quality: Good (within 2-5x of optimal)
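A minimal, self-contained sketch of the idea behind such a greedy planner, using a simplified cost model and written independently of the crate's internals:

```rust
use std::collections::BTreeMap;

// A tensor is described only by its index labels and their extents.
type Tensor = BTreeMap<char, usize>;

// Estimated cost of contracting two tensors: the product of the extents of
// every index involved (a common estimate for a pairwise einsum contraction).
fn pair_cost(a: &Tensor, b: &Tensor) -> usize {
    let mut dims = a.clone();
    dims.extend(b.iter().map(|(&k, &v)| (k, v)));
    dims.values().product()
}

// Contract two tensors, summing over the indices they share.
// Simplification: shared indices are always summed away; a real planner also
// keeps indices that still appear in other tensors or in the output.
fn contract(a: &Tensor, b: &Tensor) -> Tensor {
    let mut out = Tensor::new();
    for (k, v) in a.iter().chain(b.iter()) {
        if a.contains_key(k) && b.contains_key(k) {
            continue; // summed index
        }
        out.insert(*k, *v);
    }
    out
}

// Greedy loop: at every step, contract the cheapest remaining pair.
// Returned indices refer to positions in the working list at that step.
fn greedy_order(mut tensors: Vec<Tensor>) -> Vec<(usize, usize)> {
    let mut order = Vec::new();
    while tensors.len() > 1 {
        let mut best = (0, 1, usize::MAX);
        for i in 0..tensors.len() {
            for j in (i + 1)..tensors.len() {
                let c = pair_cost(&tensors[i], &tensors[j]);
                if c < best.2 {
                    best = (i, j, c);
                }
            }
        }
        let (i, j, _) = best;
        let merged = contract(&tensors[i], &tensors[j]);
        order.push((i, j));
        tensors.remove(j); // j > i, so remove j first to keep i valid
        tensors.remove(i);
        tensors.push(merged);
    }
    order
}

fn main() {
    // The chain ij,jk,kl with extents i=10, j=20, k=30, l=10.
    let a = Tensor::from([('i', 10), ('j', 20)]);
    let b = Tensor::from([('j', 20), ('k', 30)]);
    let c = Tensor::from([('k', 30), ('l', 10)]);
    println!("{:?}", greedy_order(vec![a, b, c]));
}
```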
Explores multiple contraction paths simultaneously:
use tenrso_planner::{BeamSearchPlanner, Planner};
let planner = BeamSearchPlanner::with_beam_width(5);
let plan = planner.make_plan(spec, shapes, hints)?;
- When to use: Medium networks (8-20 tensors), better quality than greedy
- Complexity: O(n³ × k) where k is beam width
- Quality: Better (often within 1.5-2x of optimal)
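Conceptually, beam search generalizes the greedy loop by carrying the k cheapest partial plans forward instead of a single one. An abstract skeleton of that step (illustrative; `Partial` and `expand` are hypothetical names, not part of the crate):

```rust
// Abstract beam-search step: expand every partial plan in the current
// frontier, then keep only the `beam_width` cheapest candidates.
#[derive(Clone)]
struct Partial {
    cost: f64,                  // estimated FLOPs of the contractions chosen so far
    steps: Vec<(usize, usize)>, // the pairwise contractions chosen so far
}

fn beam_step<F>(frontier: &[Partial], expand: F, beam_width: usize) -> Vec<Partial>
where
    F: Fn(&Partial) -> Vec<Partial>, // enumerate all one-step extensions of a plan
{
    let mut candidates: Vec<Partial> = frontier.iter().flat_map(|p| expand(p)).collect();
    candidates.sort_by(|a, b| a.cost.total_cmp(&b.cost));
    candidates.truncate(beam_width);
    candidates
}
```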
Finds globally optimal contraction order:
use tenrso_planner::{DPPlanner, Planner};
let planner = DPPlanner::new();
let plan = planner.make_plan(spec, shapes, hints)?;
- When to use: Small networks (≤ 20 tensors), need provably optimal plan
- Complexity: O(3ⁿ) time, O(2ⁿ) space
- Quality: Optimal (guarantees minimum FLOPs)
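The underlying recurrence is the classic dynamic program over subsets of tensors: the best cost of contracting a subset S is the minimum over all splits of S into two nonempty parts of the parts' best costs plus the cost of joining them, and enumerating every (subset, split) pair is what gives the O(3ⁿ) bound. A bitmask sketch of that recurrence, assuming an abstract `join_cost` function supplied by the caller (not the crate's internal code):

```rust
// best[s] = cheapest cost to contract the subset `s` of tensors down to one:
//   best[s] = min over nonempty proper subsets t of s of
//             best[t] + best[s \ t] + join_cost(t, s \ t)
fn optimal_cost(n: usize, join_cost: impl Fn(u32, u32) -> u64) -> u64 {
    let full = (1u32 << n) - 1;
    let mut best = vec![u64::MAX; (full as usize) + 1];
    // A single tensor needs no contraction.
    for i in 0..n {
        best[1usize << i] = 0;
    }
    for s in 1..=full {
        if s.count_ones() < 2 {
            continue;
        }
        // Enumerate the nonempty proper subsets t of s.
        let mut t = (s - 1) & s;
        while t > 0 {
            let rest = s & !t;
            if best[t as usize] != u64::MAX && best[rest as usize] != u64::MAX {
                let c = best[t as usize] + best[rest as usize] + join_cost(t, rest);
                if c < best[s as usize] {
                    best[s as usize] = c;
                }
            }
            t = (t - 1) & s;
        }
    }
    best[full as usize]
}
```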
Stochastic search with temperature-based acceptance:
use tenrso_planner::{SimulatedAnnealingPlanner, Planner};
let planner = SimulatedAnnealingPlanner::with_params(
1000.0, // initial temperature
0.95, // cooling rate
1000, // max iterations
);
let plan = planner.make_plan(spec, shapes, hints)?;
- When to use: Large networks (> 20 tensors), quality more important than speed
- Complexity: O(iterations × n) time
- Quality: Variable (can escape local minima)
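The acceptance rule behind this kind of annealing is the standard Metropolis criterion: improvements are always kept, and a worse plan is kept with probability exp(-Δcost / T), where the temperature T decays by the cooling rate each iteration. A minimal, dependency-free sketch (illustrative, not the crate's internal code):

```rust
// Metropolis acceptance: improvements are always kept; a regression is kept
// with probability exp(-delta_cost / temperature), so the search can escape
// local minima while the temperature is high. `uniform` is a random draw in
// [0, 1) supplied by the caller (e.g. from the `rand` crate).
fn accept(delta_cost: f64, temperature: f64, uniform: f64) -> bool {
    delta_cost <= 0.0 || uniform < (-delta_cost / temperature).exp()
}

fn main() {
    let (mut temperature, cooling_rate) = (1000.0_f64, 0.95);
    for step in 0..5 {
        // A regression of +500 cost units becomes less and less likely to be
        // accepted as the temperature drops.
        let p = (-500.0 / temperature).exp();
        println!("step {step}: temperature {temperature:.1}, accept prob {p:.3}");
        temperature *= cooling_rate;
    }
    assert!(accept(-1.0, temperature, 0.5)); // improvements are always accepted
}
```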
Evolutionary search with population-based exploration:
use tenrso_planner::{GeneticAlgorithmPlanner, Planner};
// Use default configuration
let planner = GeneticAlgorithmPlanner::new();
// Or use presets
let planner = GeneticAlgorithmPlanner::fast(); // Quick experimentation
let planner = GeneticAlgorithmPlanner::high_quality(); // Production quality
let plan = planner.make_plan(spec, shapes, hints)?;
- When to use: Very large networks (> 20 tensors), complex topologies
- Complexity: O(generations × population × n³)
- Quality: High (can find near-optimal solutions for complex problems)
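Genetic planners typically encode a candidate as a sequence of pairwise contractions and evolve a population of such candidates. A skeleton of one generation under that assumption (truncation selection only; crossover and mutation are elided, and none of these names come from the crate):

```rust
// One generation of a genetic search over contraction orders (skeleton).
// A chromosome is a sequence of pairwise contractions; `fitness` returns an
// estimated cost for it (lower is better).
fn evolve<F>(
    mut population: Vec<Vec<(usize, usize)>>,
    fitness: F,
    generations: usize,
) -> Vec<(usize, usize)>
where
    F: Fn(&[(usize, usize)]) -> f64,
{
    assert!(population.len() >= 2, "need a non-trivial population");
    for _ in 0..generations {
        // Rank candidates by estimated cost, cheapest first.
        population.sort_by(|a, b| fitness(a.as_slice()).total_cmp(&fitness(b.as_slice())));
        // Keep the better half and refill to the original size.
        let half = population.len() / 2;
        population.truncate(half);
        let survivors = population.clone();
        population.extend(survivors); // placeholder for crossover + mutation
    }
    // Return the cheapest candidate found.
    population.sort_by(|a, b| fitness(a.as_slice()).total_cmp(&fitness(b.as_slice())));
    population.remove(0)
}
```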
| Algorithm | Time | Space | Quality | Best For |
|---|---|---|---|---|
| Greedy | O(n³) | O(n) | Good | General purpose, fast |
| Beam Search | O(n³k) | O(kn) | Better | Medium networks |
| DP | O(3ⁿ) | O(2ⁿ) | Optimal | n ≤ 20 |
| Simulated Annealing | O(iter·n) | O(n) | Variable | Large, quality focus |
| Genetic Algorithm | O(gen·pop·n³) | O(pop·n) | High | Very large, complex |
| Adaptive | Varies | Varies | Auto | All cases ⭐ |
See the examples/ directory for detailed usage:
basic_matmul.rs - Simple matrix multiplication
matrix_chain.rs - Greedy vs DP comparison
planning_hints.rs - Advanced hint usage
genetic_algorithm.rs - GA planner showcase
Run examples:
cargo run --example basic_matmul
cargo run --example genetic_algorithm
Comprehensive benchmark suite covering all planners:
# Run all benchmarks
cargo bench --package tenrso-planner
# Run specific benchmark
cargo bench --package tenrso-planner -- greedy_matrix_chain
cargo bench --package tenrso-planner -- planner_comparison_star
Benchmark Results (typical):
use tenrso_planner::{greedy_planner, dp_planner};
let greedy = greedy_planner(&spec, &shapes, &hints)?;
let optimal = dp_planner(&spec, &shapes, &hints)?;
// Compare plans
let comparison = greedy.compare(&optimal);
println!("FLOPs improvement: {:.1}%", comparison.flops_improvement);
// Analyze plan
let analysis = greedy.analyze();
println!("Average node cost: {:.2e}", analysis.avg_node_cost);
use tenrso_planner::refine_plan;
let plan = greedy_planner(&spec, &shapes, &hints)?;
let refined = refine_plan(&plan, &spec, &shapes, &hints, 100)?;
Plan serialization (requires the serde feature):
use tenrso_planner::Plan;
let plan = greedy_planner(&spec, &shapes, &hints)?;
// Serialize to JSON
let json = serde_json::to_string(&plan)?;
// Deserialize
let loaded: Plan = serde_json::from_str(&json)?;
Planning overhead is typically negligible compared to execution:
| Network Size | Greedy | Beam (k=5) | DP | SA | GA |
|---|---|---|---|---|---|
| 3 tensors | 100μs | 500μs | 1ms | 10ms | 50ms |
| 5 tensors | 200μs | 1ms | 10ms | 20ms | 100ms |
| 10 tensors | 1ms | 5ms | 1s | 50ms | 200ms |
| 20 tensors | 10ms | 50ms | N/A | 100ms | 500ms |
# Run all tests
cargo test --package tenrso-planner
# Run with output
cargo test --package tenrso-planner -- --nocapture
# Run specific test
cargo test --package tenrso-planner test_ga_planner_struct
Test Coverage: 144 tests (136 unit + 8 doc), 100% passing
Build and view documentation:
cargo doc --package tenrso-planner --open
Contributions welcome! See CONTRIBUTING.md for guidelines.
opt_einsum library (Python)
Licensed under the Apache License, Version 2.0. See LICENSE for details.
tenrso-core - Core tensor data structures
tenrso-kernels - High-performance tensor kernels
tenrso-exec - Plan execution engine
tenrso - Main TenRSo library
Status: ✅ Production-ready • Version: 0.1.0-alpha.2 • Tests: 144/144 passing