| Field | Value |
|---|---|
| Crates.io | tensorlogic-compiler |
| lib.rs | tensorlogic-compiler |
| version | 0.1.0-alpha.2 |
| created_at | 2025-11-07 22:41:01.952915+00 |
| updated_at | 2026-01-03 21:04:37.023059+00 |
| description | Compiler for transforming logic expressions into tensor computation graphs |
| homepage | https://github.com/cool-japan/tensorlogic |
| repository | https://github.com/cool-japan/tensorlogic |
| max_upload_size | |
| id | 1922297 |
| size | 1,432,739 |
Engine-agnostic compilation of TensorLogic expressions to tensor computation graphs
The compiler translates logical rules with quantifiers into optimized tensor operations using Einstein summation notation. It operates as a planning layer only—no execution happens here.
- **Input:** `TLExpr` (logical expressions with predicates, quantifiers, implications)
- **Output:** `EinsumGraph` (directed graph of tensor operations)
The compiler features a 7-pass optimization pipeline (negation optimization, constant folding, algebraic simplification, strength reduction, distributivity, quantifier optimization, and dead-code elimination) that can reduce expression complexity by up to 80%. Additional graph-level optimizations (dead code elimination, common subexpression elimination, and identity elimination) run after compilation.
```rust
use tensorlogic_compiler::compile_to_einsum;
use tensorlogic_ir::{TLExpr, Term};

// Define a logic rule: ∃y. knows(x, y)
// "Find all persons x who know someone"
let rule = TLExpr::exists(
    "y",
    "Person",
    TLExpr::pred("knows", vec![Term::var("x"), Term::var("y")]),
);

// Compile to tensor operations
let graph = compile_to_einsum(&rule)?;

// Graph contains:
// - Tensors: ["knows[ab]", "temp_0"]
// - Operations: [Reduce{op: "sum", axes: [1]}]
// - Outputs: [1]
```
| Logic Operation | Tensor Equivalent | Notes |
|---|---|---|
| P(x, y) | Tensor with axes `ab` | Predicate as multi-dimensional array |
| P ∧ Q | Hadamard product or einsum | Element-wise if same axes, contraction if shared vars |
| P ∨ Q | max(P, Q) | Or soft variant (configurable) |
| ¬P | 1 - P | Or temperature-controlled |
| ∃x. P(x) | sum(P, axis=x) | Or max for hard quantification |
| ∀x. P(x) | NOT(∃x. NOT(P(x))) | Dual of EXISTS |
| P → Q | ReLU(Q - P) | Soft implication |
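To make these mappings concrete, here is a minimal, self-contained sketch of the soft semantics from the table, written against plain `f32` arrays rather than the crate's APIs; the shapes and truth degrees are made up for illustration.

```rust
// Illustrative soft-logic semantics from the table above, on a 2x2
// predicate with truth degrees in [0, 1]. Hypothetical data, no crate APIs.
fn main() {
    let p = [[0.9_f32, 0.2], [0.4, 0.7]]; // P(x, y)
    let q = [[0.5_f32, 0.8], [0.1, 0.6]]; // Q(x, y)

    for x in 0..2 {
        for y in 0..2 {
            let and = p[x][y] * q[x][y];              // P ∧ Q: Hadamard product
            let or = p[x][y].max(q[x][y]);            // P ∨ Q: element-wise max
            let not = 1.0 - p[x][y];                  // ¬P: complement
            let imply = (q[x][y] - p[x][y]).max(0.0); // P → Q: ReLU(Q - P)
            println!("({x},{y}) and={and:.2} or={or:.2} not={not:.2} imply={imply:.2}");
        }
    }

    // ∃y. P(x, y): sum-reduce over the y axis.
    let exists: Vec<f32> = p.iter().map(|row| row.iter().sum()).collect();
    // ∀y. P(x, y) as the dual NOT(∃y. NOT(P)), here with max-EXISTS.
    let forall: Vec<f32> = p
        .iter()
        .map(|row| 1.0 - row.iter().map(|v| 1.0 - v).fold(0.0_f32, f32::max))
        .collect();
    println!("exists={exists:?} forall={forall:?}");
}
```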
| Logic Operation | Tensor Equivalent | Notes |
|---|---|---|
| □P (Box) | min(P, axis=world) or prod(P, axis=world) | Necessity over possible worlds |
| ◇P (Diamond) | max(P, axis=world) or sum(P, axis=world) | Possibility over possible worlds |
| F(P) (Eventually) | max(P, axis=time) or sum(P, axis=time) | True in some future state |
| G(P) (Always) | min(P, axis=time) or prod(P, axis=time) | True in all future states |
Modal Logic Example:
```rust
use tensorlogic_ir::{TLExpr, Term};

// □(∃y. knows(x, y)) - "In all possible worlds, x knows someone"
let expr = TLExpr::Box(Box::new(
    TLExpr::exists("y", "Person",
        TLExpr::pred("knows", vec![Term::var("x"), Term::var("y")])
    )
));
```
Temporal Logic Example:
```rust
// F(completed(t)) - "Task t will eventually be completed"
let expr = TLExpr::Eventually(Box::new(
    TLExpr::pred("completed", vec![Term::var("t")])
));

// G(safe(s)) - "System s is always safe"
let expr = TLExpr::Always(Box::new(
    TLExpr::pred("safe", vec![Term::var("s")])
));
```
Combined Modal & Temporal:
```rust
// □(F(goal(a))) - "In all possible worlds, agent a eventually achieves goal"
let expr = TLExpr::Box(Box::new(
    TLExpr::Eventually(Box::new(
        TLExpr::pred("goal", vec![Term::var("a")])
    ))
));
```
See `examples/10_modal_temporal_logic.rs` for comprehensive demonstrations.
The compiler defines 6 preset configurations and 26+ configurable strategies:
```rust
use tensorlogic_compiler::{
    AndStrategy, CompilationConfig, CompilationConfigBuilder,
    ExistsStrategy, NotStrategy, OrStrategy,
};

// Choose a preset configuration
let config = CompilationConfig::soft_differentiable(); // Default (neural training)
let config = CompilationConfig::hard_boolean();        // Discrete reasoning
let config = CompilationConfig::fuzzy_godel();         // Gödel fuzzy logic
let config = CompilationConfig::probabilistic();       // Probabilistic semantics

// Or build a custom configuration
let config = CompilationConfigBuilder::new()
    .and_strategy(AndStrategy::Product)          // Product t-norm
    .or_strategy(OrStrategy::ProbabilisticSum)   // Probabilistic s-norm
    .not_strategy(NotStrategy::Complement)       // Standard complement
    .exists_strategy(ExistsStrategy::Max)        // Max aggregation
    .build();

// Note: full integration into the compilation pipeline is in progress;
// compilation currently uses the default soft_differentiable strategy.
```
Available Strategies:
| Operation | Strategies | Use Cases |
|---|---|---|
| AND | Product, Min, ProbabilisticSum, Gödel, ProductTNorm, Łukasiewicz | T-norms for conjunctions |
| OR | Max, ProbabilisticSum, Gödel, ProbabilisticSNorm, Łukasiewicz | S-norms for disjunctions |
| NOT | Complement (1-x), Sigmoid | Negation with or without temperature |
| EXISTS | Sum, Max, LogSumExp, Mean | Different quantifier semantics |
| FORALL | DualOfExists, Product, Min, MeanThreshold | Universal quantification strategies |
| IMPLY | ReLU, Material, Gödel, Łukasiewicz, Reichenbach | Various implication operators |
| MODAL | AllWorldsMin, AllWorldsProduct, Threshold | Necessity/possibility operators |
| TEMPORAL | Max, Sum, LogSumExp | Eventually/always operators |
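Several of these strategy names refer to standard fuzzy-logic operators. As a reference, here is a sketch of the usual textbook definitions (plain functions, not the crate's internals):

```rust
// Textbook t-norm / s-norm definitions behind several strategy names
// above (a sketch of the standard operators, not crate code).
fn product_tnorm(a: f32, b: f32) -> f32 { a * b }                       // AND: Product
fn godel_tnorm(a: f32, b: f32) -> f32 { a.min(b) }                      // AND: Gödel (min)
fn lukasiewicz_tnorm(a: f32, b: f32) -> f32 { (a + b - 1.0).max(0.0) }  // AND: Łukasiewicz
fn probabilistic_snorm(a: f32, b: f32) -> f32 { a + b - a * b }         // OR: probabilistic sum
fn lukasiewicz_snorm(a: f32, b: f32) -> f32 { (a + b).min(1.0) }        // OR: Łukasiewicz

fn main() {
    let (a, b) = (0.7_f32, 0.6);
    println!("product={} godel={} lukasiewicz={}",
        product_tnorm(a, b), godel_tnorm(a, b), lukasiewicz_tnorm(a, b));
    println!("prob_sum={} luk_or={}",
        probabilistic_snorm(a, b), lukasiewicz_snorm(a, b));
}
```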
The compiler handles complex rules like transitivity with shared variables:
```rust
use tensorlogic_compiler::compile_to_einsum;
use tensorlogic_ir::{TLExpr, Term};

// knows(x,y) ∧ knows(y,z) → knows(x,z)
let knows_xy = TLExpr::pred("knows", vec![Term::var("x"), Term::var("y")]);
let knows_yz = TLExpr::pred("knows", vec![Term::var("y"), Term::var("z")]);
let knows_xz = TLExpr::pred("knows", vec![Term::var("x"), Term::var("z")]);

let premise = TLExpr::and(knows_xy, knows_yz);
let rule = TLExpr::imply(premise, knows_xz);
let graph = compile_to_einsum(&rule)?;

// Generates:
// 1. knows[ab] ∧ knows[bc] → einsum("ab,bc->abc") [product joined on shared 'b']
// 2. Marginalize over 'b' to align with conclusion axes 'ac'
// 3. Apply ReLU(knows[ac] - marginalized_premise[ac])
```
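To see what those three generated steps actually compute, here is a hand-unrolled numeric sketch over a 2-element domain; the `knows` truth degrees are made up, and no crate APIs are involved:

```rust
// Hand-unrolled version of the three generated steps, on a 2-element
// domain with made-up truth degrees.
fn main() {
    let n = 2;
    let knows = [[0.9_f32, 0.1], [0.3, 0.8]]; // knows[x][y]

    // Step 1: einsum("ab,bc->abc") — product joined on the shared 'b'.
    let mut premise_abc = vec![vec![vec![0.0_f32; n]; n]; n];
    for a in 0..n {
        for b in 0..n {
            for c in 0..n {
                premise_abc[a][b][c] = knows[a][b] * knows[b][c];
            }
        }
    }

    // Step 2: marginalize over 'b' to align with the conclusion axes 'ac'.
    let mut premise_ac = vec![vec![0.0_f32; n]; n];
    for a in 0..n {
        for c in 0..n {
            premise_ac[a][c] = (0..n).map(|b| premise_abc[a][b][c]).sum();
        }
    }

    // Step 3: soft implication ReLU(knows[ac] - premise[ac]).
    for a in 0..n {
        for c in 0..n {
            let rule = (knows[a][c] - premise_ac[a][c]).max(0.0);
            println!("rule({a},{c}) = {rule:.3}");
        }
    }
}
```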
The recommended approach is to use the unified optimization pipeline that applies all 7 passes iteratively:
```rust
use tensorlogic_compiler::optimize::OptimizationPipeline;
use tensorlogic_ir::{TLExpr, Term};

// Create a complex expression
let x = TLExpr::pred("x", vec![Term::var("i")]);
let expr = TLExpr::negate(TLExpr::negate(TLExpr::add(
    TLExpr::pow(x, TLExpr::Constant(2.0)),
    TLExpr::Constant(0.0),
)));

// Apply the default optimization pipeline
let pipeline = OptimizationPipeline::new();
let (optimized, stats) = pipeline.optimize(&expr);

// Check results
println!("Total optimizations: {}", stats.total_optimizations());
println!("  Negation: {}", stats.negation.double_negations_eliminated);
println!("  Constant folding: {}", stats.constant_folding.binary_ops_folded);
println!("  Algebraic: {}", stats.algebraic.identities_eliminated);
println!("  Strength reduction: {}", stats.strength_reduction.power_reductions);
println!("  Iterations: {}", stats.total_iterations);
println!("  Reached fixed point: {}", stats.reached_fixed_point);
```
Customize which passes run and how many iterations:
```rust
use tensorlogic_compiler::optimize::{OptimizationPipeline, PipelineConfig};

// Aggressive optimization (more iterations)
let config = PipelineConfig::aggressive();
let pipeline = OptimizationPipeline::with_config(config);

// Custom configuration
let config = PipelineConfig::default()
    .with_negation_opt(true)
    .with_constant_folding(true)
    .with_algebraic_simplification(true)
    .with_strength_reduction(true)
    .with_distributivity(true)
    .with_quantifier_opt(true)
    .with_dead_code_elimination(true)
    .with_max_iterations(15);
let pipeline = OptimizationPipeline::with_config(config);
let (optimized, stats) = pipeline.optimize(&expr);
```
For fine-grained control, use individual optimization passes:
```rust
use tensorlogic_compiler::optimize::{
    optimize_negations, fold_constants, simplify_algebraic,
    reduce_strength, optimize_distributivity, optimize_quantifiers,
    eliminate_dead_code,
};

// Apply specific optimizations in sequence
let (opt1, stats1) = optimize_negations(&expr);
let (opt2, stats2) = fold_constants(&opt1);
let (opt3, stats3) = simplify_algebraic(&opt2);
let (opt4, stats4) = reduce_strength(&opt3);
```
Analyze expression complexity to guide optimization decisions:
```rust
use tensorlogic_compiler::optimize::{analyze_complexity, CostWeights};

let complexity = analyze_complexity(&expr);
println!("Max depth: {}", complexity.max_depth);
println!("Total operations: {}", complexity.total_operations());
println!("Total cost: {}", complexity.total_cost());

// Use GPU-optimized cost weights
let gpu_weights = CostWeights::gpu_optimized();
let gpu_cost = complexity.total_cost_with_weights(&gpu_weights);
println!("GPU-optimized cost: {}", gpu_cost);

// Check optimization potential
println!("CSE potential: {}", complexity.cse_potential());
println!("Complexity level: {}", complexity.complexity_level());
```
After compilation, optimize the resulting graph:
```rust
use tensorlogic_ir::graph::optimization::{optimize_graph, OptimizationLevel};

// Compile expression to graph
let graph = compile_to_einsum(&expr)?;

// Apply graph optimizations (DCE, CSE, identity elimination)
let (optimized_graph, stats) = optimize_graph(&graph, OptimizationLevel::Aggressive);
println!("Removed {} nodes", stats.nodes_removed);
```
The compiler now includes sophisticated analysis and optimization capabilities:
Track compilation performance, memory usage, and cache statistics:
```rust
use tensorlogic_compiler::profiling::CompilationProfiler;

let mut profiler = CompilationProfiler::new();
profiler.start();

// Profile compilation phases
profiler.start_phase("compilation");
let graph = compile_to_einsum(&expr)?;
profiler.end_phase("compilation");

// Record pass executions (`duration` and `optimizations_applied`
// come from your own pass measurements)
profiler.record_pass("negation_opt", duration, optimizations_applied);

// Generate reports
let report = profiler.generate_report();
println!("{}", report);

// Get JSON output for tooling
let json = profiler.generate_json_report();
```
Profiling capabilities:
Analyze how data flows through expressions for optimization opportunities:
```rust
use tensorlogic_compiler::passes::{analyze_dataflow, analyze_graph_dataflow};

// Analyze expression dataflow
let analysis = analyze_dataflow(&expr);

// Check live variables at each point
println!("Live variables: {:?}", analysis.live_variables);

// Track reaching definitions (which assignments reach each use)
println!("Reaching definitions: {:?}", analysis.reaching_defs);

// Identify available expressions for CSE
println!("Available expressions: {:?}", analysis.available_exprs);

// Use-def chains for dependency tracking
println!("Use-def chains: {:?}", analysis.use_def_chains);

// Analyze compiled graph dataflow
let graph_analysis = analyze_graph_dataflow(&graph);
println!("Tensor dependencies: {:?}", graph_analysis.dependencies);
println!("Live tensors per node: {:?}", graph_analysis.live_tensors);
```
Dataflow analysis provides:
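As a toy illustration of what "live variables" means for an expression, the sketch below computes free variables for a miniature expression type; this is not the crate's `TLExpr`, just the underlying idea:

```rust
// Toy illustration of liveness: the free variables of a miniature
// expression type. NOT the crate's TLExpr — just the concept.
use std::collections::BTreeSet;

enum Toy {
    Pred(&'static str, Vec<&'static str>), // predicate over variables
    And(Box<Toy>, Box<Toy>),
    Exists(&'static str, Box<Toy>),        // binds a variable
}

fn free_vars(e: &Toy) -> BTreeSet<&'static str> {
    match e {
        Toy::Pred(_, args) => args.iter().copied().collect(),
        Toy::And(l, r) => free_vars(l).union(&free_vars(r)).copied().collect(),
        Toy::Exists(v, body) => {
            let mut s = free_vars(body);
            s.remove(v); // the bound variable is not live outside
            s
        }
    }
}

fn main() {
    // ∃y. knows(x, y) ∧ likes(y, z)  →  free/live variables {x, z}
    let e = Toy::Exists("y", Box::new(Toy::And(
        Box::new(Toy::Pred("knows", vec!["x", "y"])),
        Box::new(Toy::Pred("likes", vec!["y", "z"])),
    )));
    println!("{:?}", free_vars(&e)); // {"x", "z"}
}
```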
Optimize einsum contraction order using dynamic programming:
```rust
use tensorlogic_compiler::passes::{optimize_contractions, optimize_contractions_with_config};
use tensorlogic_compiler::passes::ContractionOptConfig;

// Optimize with the default greedy algorithm
let (optimized_graph, stats) = optimize_contractions(&graph);
println!("Contractions reordered: {}", stats.contractions_reordered);
println!("FLOPs reduction: {:.1}%", stats.flops_reduction_percent);
println!("Memory reduction: {:.1}%", stats.memory_reduction_percent);

// Custom configuration
let config = ContractionOptConfig {
    max_intermediate_size: 1_000_000, // Limit intermediate tensor sizes
    prefer_memory_over_flops: false,  // Optimize for FLOPs first
};
let (optimized, stats) = optimize_contractions_with_config(&graph, &config);
```
Contraction optimization features:
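To see why contraction order matters, here is a back-of-envelope FLOP comparison for a chained product with illustrative shapes, assuming the standard cost model (an m×k by k×n product costs roughly m·k·n multiply-adds):

```rust
// Why contraction order matters: FLOPs for chained matrix products.
fn matmul_flops(m: u64, k: u64, n: u64) -> u64 { m * k * n }

fn main() {
    // A: 1000x10, B: 10x1000, C: 1000x10 (illustrative shapes)
    let (a, b, c) = ((1000, 10), (10, 1000), (1000, 10));

    // (A·B)·C: creates a big 1000x1000 intermediate.
    let left = matmul_flops(a.0, a.1, b.1) + matmul_flops(a.0, b.1, c.1);
    // A·(B·C): creates a tiny 10x10 intermediate.
    let right = matmul_flops(b.0, b.1, c.1) + matmul_flops(a.0, a.1, c.1);

    println!("(A·B)·C = {left} FLOPs");  // 20,000,000
    println!("A·(B·C) = {right} FLOPs"); // 200,000 — a 100x difference
}
```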
Fuse multiple loops over the same axes for better cache locality:
```rust
use tensorlogic_compiler::passes::{fuse_loops, fuse_loops_with_config};
use tensorlogic_compiler::passes::LoopFusionConfig;

// Fuse loops with default settings
let (fused_graph, stats) = fuse_loops(&graph);
println!("Loops fused: {}", stats.loops_fused);
println!("Reductions merged: {}", stats.reductions_merged);
println!("Intermediates eliminated: {}", stats.intermediates_eliminated);

// Custom configuration
let config = LoopFusionConfig {
    max_fusion_depth: 3,          // Limit fusion depth
    require_same_reduction: true, // Only fuse identical reductions
};
let (fused, stats) = fuse_loops_with_config(&graph, &config);
```
Loop fusion benefits:
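The intuition behind loop fusion can be shown with ordinary iterators: two traversals over the same data collapse into one, eliminating the intermediate buffer. A minimal sketch, independent of the crate:

```rust
// The idea behind loop fusion: two passes over the same data become
// one, so each element is loaded once. Illustrative, not crate code.
fn main() {
    let x = vec![0.5_f32; 1_000];

    // Unfused: two traversals, one intermediate allocation.
    let relu: Vec<f32> = x.iter().map(|v| v.max(0.0)).collect();
    let scaled: Vec<f32> = relu.iter().map(|v| v * 2.0).collect();

    // Fused: one traversal, no intermediate.
    let fused: Vec<f32> = x.iter().map(|v| v.max(0.0) * 2.0).collect();

    assert_eq!(scaled, fused);
}
```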
Compute graph structure properties for optimization and validation:
```rust
use tensorlogic_compiler::passes::{analyze_reachability, analyze_dominance};

// Compute reachability information
let reachability = analyze_reachability(&graph);

// Check if node B is reachable from node A
// (`node_a` and `node_b` are node indices you already hold)
if reachability.reachable.contains(&(node_a, node_b)) {
    println!("Node {} can reach node {}", node_a, node_b);
}

// Get strongly connected components
println!("SCCs: {:?}", reachability.strongly_connected_components);

// Topological ordering (for DAGs)
if let Some(topo) = &reachability.topological_order {
    println!("Topological order: {:?}", topo);
}

// Compute dominance relationships
let dominance = analyze_dominance(&graph);
println!("Immediate dominators: {:?}", dominance.immediate_dominators);
println!("Dominance frontiers: {:?}", dominance.dominance_frontiers);
```
Reachability analysis provides:
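For intuition, the sketch below computes a topological order of a small DAG with a depth-first search; it is toy code over adjacency lists, not the crate's graph representation:

```rust
// Sketch of topological ordering on a small DAG adjacency list —
// the kind of structure reachability analysis computes.
fn topo_sort(adj: &[Vec<usize>]) -> Vec<usize> {
    fn visit(n: usize, adj: &[Vec<usize>], seen: &mut [bool], out: &mut Vec<usize>) {
        if seen[n] { return; }
        seen[n] = true;
        for &m in &adj[n] { visit(m, adj, seen, out); }
        out.push(n); // post-order; reversing yields a topological order
    }
    let mut seen = vec![false; adj.len()];
    let mut out = Vec::new();
    for n in 0..adj.len() { visit(n, adj, &mut seen, &mut out); }
    out.reverse();
    out
}

fn main() {
    // 0 → 1 → 3, 0 → 2 → 3
    let adj = vec![vec![1, 2], vec![3], vec![3], vec![]];
    println!("{:?}", topo_sort(&adj)); // [0, 2, 1, 3]
}
```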
Run all analysis and optimization passes in a single pipeline:
```rust
use tensorlogic_compiler::passes::{post_compilation_passes, PostCompilationOptions};

let options = PostCompilationOptions {
    validate_graph_structure: true, // Check for cycles, orphans
    validate_axes: true,            // Validate axis compatibility
    validate_shapes: true,          // Check tensor shape consistency
    apply_optimizations: true,      // Run optimization passes
    enable_contraction_opt: true,   // Optimize contraction order
    enable_loop_fusion: true,       // Fuse compatible loops
    strict_mode: false,             // Fail on warnings if true
};

// `ctx` is the CompilerContext used during compilation (see below)
let mut graph = compile_to_einsum(&expr)?;
let result = post_compilation_passes(&mut graph, &ctx, options)?;

if result.is_valid {
    println!("✓ Graph validated successfully");
    println!("  Checks performed: {}", result.validation_report.checks_performed);
    println!("  Optimizations: {}", result.optimizations_applied);
    for msg in &result.messages {
        println!("  {}", msg);
    }
}
```
Post-compilation pipeline:
See `examples/21_profiling_and_optimization.rs` for comprehensive demonstrations of all these features.
```text
TLExpr
  ↓
[Pre-Compilation Passes]
  - Scope analysis (detect unbound variables)
  - Type checking (validate arity, types)
  - Negation optimization
  - Common subexpression elimination
  ↓
[Compiler Context]
  - Assign axes to variables
  - Track domains
  - Manage temporary tensors
  - Apply compilation config
  ↓
[compile_expr recursion]
  - compile_predicate → tensor with axes
  - compile_and → einsum contraction (configurable)
  - compile_or → element-wise max (configurable)
  - compile_not → 1 - x (configurable)
  - compile_exists → reduction (configurable)
  - compile_forall → dual or product (configurable)
  - compile_imply → marginalize + operator (configurable)
  - compile_arithmetic → element-wise ops
  - compile_comparison → boolean tensors
  ↓
[Post-Compilation Passes]
  - Dead code elimination
  - Einsum operation merging
  - Identity elimination
  - Contraction order optimization
  ↓
EinsumGraph
  - Tensors: Vec<String>
  - Nodes: Vec<EinsumNode>
  - Outputs: Vec<usize>
```
The compiler provides production-ready validation passes:
```rust
use tensorlogic_compiler::passes::scope_analysis::analyze_scopes;
use tensorlogic_ir::{TLExpr, Term};

let expr = TLExpr::exists("x", "Person",
    TLExpr::and(
        TLExpr::pred("knows", vec![Term::var("x"), Term::var("y")]),
        TLExpr::pred("likes", vec![Term::var("x"), Term::var("z")]),
    )
);

let analysis = analyze_scopes(&expr);
if !analysis.unbound_vars.is_empty() {
    println!("Unbound variables: {:?}", analysis.unbound_vars);
    println!("Suggestions: {}", analysis.suggest_quantifiers());
    // Output: "Consider adding: ∃y:Domain. ∃z:Domain. ..."
}
```
```rust
use tensorlogic_compiler::passes::type_checking::TypeChecker;
use tensorlogic_ir::PredicateSignature;

let mut checker = TypeChecker::new();

// Register predicate signatures
checker.register_predicate(PredicateSignature {
    name: "knows".to_string(),
    arity: 2,
    arg_types: vec![Some("Person".to_string()), Some("Person".to_string())],
});

// Type check expression
let result = checker.check_types(&expr);
if let Some(error) = result.type_errors.first() {
    println!("Type error: {}", error);
}
```
```rust
use tensorlogic_compiler::passes::diagnostics::{diagnose_expression, DiagnosticLevel};

let diagnostics = diagnose_expression(&expr);
for diag in diagnostics {
    match diag.level {
        DiagnosticLevel::Error => eprintln!("ERROR: {}", diag.message),
        DiagnosticLevel::Warning => eprintln!("WARNING: {}", diag.message),
        DiagnosticLevel::Hint => println!("HINT: {}", diag.message),
        _ => {}
    }
    if let Some(help) = diag.help {
        println!("  Help: {}", help);
    }
}
```
The `CompilerContext` manages compilation state:

```rust
use tensorlogic_compiler::CompilerContext;

let mut ctx = CompilerContext::new();

// Register domains
ctx.add_domain("Person", 100); // 100 possible persons
ctx.add_domain("City", 50);    // 50 cities

// Bind variables to domains
ctx.bind_var("x", "Person")?;
ctx.bind_var("y", "City")?;

// Axes are automatically assigned: x→'a', y→'b', ...
```
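The automatic axis-letter assignment can be pictured as mapping a variable's registration index to a letter; a hypothetical one-liner (not the crate's actual code):

```rust
// How sequential axis letters like x→'a', y→'b' can be derived from
// a variable's registration order (illustrative only).
fn axis_letter(index: usize) -> char {
    (b'a' + index as u8) as char // valid for the first 26 variables
}

fn main() {
    for (i, var) in ["x", "y", "z"].iter().enumerate() {
        println!("{var} → '{}'", axis_letter(i));
    }
}
```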
The compiler generates 4 types of operations:
```rust
// Einsum contraction, spec "ab,bc->ac" (matrix multiplication)
EinsumNode::einsum("ab,bc->ac", vec![tensor0, tensor1])

// Element-wise unary operations: not, relu, sigmoid, etc.
EinsumNode::elem_unary("relu", tensor_idx)

// Element-wise binary operations: add, subtract, multiply, etc.
EinsumNode::elem_binary("subtract", left_idx, right_idx)

// Reduce over axis 1 (sum/max/min)
EinsumNode::reduce("sum", vec![1], tensor_idx)
```
The compiler performs extensive validation:
```rust
// Arity validation
let p1 = TLExpr::pred("P", vec![Term::var("x"), Term::var("y")]);
let p2 = TLExpr::pred("P", vec![Term::var("a")]); // ❌ Different arity!
validate_arity(&TLExpr::and(p1, p2))?; // Error: Predicate 'P' has inconsistent arity

// Domain validation
ctx.bind_var("x", "NonExistent")?; // Error: Domain 'NonExistent' not found

// Axis compatibility is now handled automatically via contraction/marginalization
```
Use `SymbolTable` to provide domain and predicate metadata:

```rust
use tensorlogic_adapters::SymbolTable;

let table = SymbolTable::new();
// Add domains and predicates...
// Future: pass to the compiler for enhanced type checking
```
Execute the compiled graph:
```rust
use tensorlogic_scirs_backend::Scirs2Exec;
use tensorlogic_infer::TlExecutor;

let executor = Scirs2Exec::new();
let outputs = executor.execute(&graph, &inputs)?;
```
Temporary tensors are named `temp_0`, `temp_1`, ... for debugging.

The compiler has comprehensive test coverage:
```bash
# Run all tests with nextest (recommended)
cargo nextest run -p tensorlogic-compiler

# Run with standard cargo test
cargo test -p tensorlogic-compiler

# Run with coverage
cargo llvm-cov --package tensorlogic-compiler
```
Current Test Status: 158/158 tests passing (100%).
See `TODO.md` for the complete roadmap.

See the test suite for more examples:

```bash
cargo test -p tensorlogic-compiler
```

Key test cases:
- `test_transitivity_rule_shared_variables`: Transitivity with contraction
- `test_and_with_different_axes`: Partial variable overlap
- `test_and_with_disjoint_variables`: Outer product (no shared vars)
- `test_implication`: Soft implication with ReLU
- `test_exists_quantifier`: Reduction over quantified variables

When adding new features:
- Extend `compile_expr` to handle new `TLExpr` variants
- Add tests in the `tests` module
- Run `cargo nextest run -p tensorlogic-compiler`

License: Apache-2.0
**Status:** 🎉 Production Ready (v0.1.0-alpha.2)
**Last Updated:** 2025-12-16
**Tests:** 158/158 passing (100%)
**Part of:** TensorLogic Ecosystem