| Crates.io | tensorlogic-ir |
| lib.rs | tensorlogic-ir |
| version | 0.1.0-alpha.2 |
| created_at | 2025-11-07 22:18:25.588794+00 |
| updated_at | 2026-01-03 21:02:21.408961+00 |
| description | Intermediate representation (IR) and AST types for TensorLogic |
| homepage | https://github.com/cool-japan/tensorlogic |
| repository | https://github.com/cool-japan/tensorlogic |
| max_upload_size | |
| id | 1922250 |
| size | 1,623,329 |
Engine-agnostic AST & Intermediate Representation for TensorLogic
tensorlogic-ir is the core intermediate representation layer for the TensorLogic framework. It provides a type-safe, engine-agnostic representation of logic-as-tensor computations, enabling the compilation of logical rules into tensor computation graphs.
This crate serves as the lingua franca between all TensorLogic components, providing the foundational data structures for logical expressions, type systems, domain constraints, and tensor computation graphs.
Key capabilities include:

- Parametric types (List<T>, Option<T>, Map<K,V>) with unification and generalization
- Dependent types (Vec<n, T> where n is a runtime value) and dimension constraints
- Refinement types ({x: Int | x > 0}) with liquid type inference
- Predicate typing via TypeAnnotation and PredicateSignature
- Domain constraints (DomainInfo, DomainRegistry)

Add to your Cargo.toml:

[dependencies]
tensorlogic-ir = "0.1.0-alpha.2"
use tensorlogic_ir::{TLExpr, Term};
// Build a logical expression: ∀x. Person(x) → Mortal(x)
let person = TLExpr::pred("Person", vec![Term::var("x")]);
let mortal = TLExpr::pred("Mortal", vec![Term::var("x")]);
let rule = TLExpr::forall("x", "Entity", TLExpr::imply(person, mortal));
// Analyze expression
let free_vars = rule.free_vars(); // empty - all variables are bound
assert!(free_vars.is_empty());
// Validate arity
rule.validate_arity()?;
// Arithmetic: score(x) * 2 + bias
let x = TLExpr::pred("score", vec![Term::var("x")]);
let doubled = TLExpr::mul(x, TLExpr::constant(2.0));
let result = TLExpr::add(doubled, TLExpr::constant(0.5));
// Comparison: temperature > 100
let temp = TLExpr::pred("temperature", vec![Term::var("t")]);
let threshold = TLExpr::constant(100.0);
let is_hot = TLExpr::gt(temp, threshold);
// Conditional: if score > 0.5 then high else low
let score = TLExpr::pred("score", vec![Term::var("x")]);
let high_action = TLExpr::constant(1.0); // placeholder branch values
let low_action = TLExpr::constant(0.0);
let condition = TLExpr::gt(score, TLExpr::constant(0.5));
let result = TLExpr::if_then_else(condition, high_action, low_action);
use tensorlogic_ir::{DomainInfo, DomainRegistry, DomainType};
// Use built-in domains
let registry = DomainRegistry::with_builtins();
// Available: Bool, Int, Real, Nat, Probability
// Create custom domain
let mut custom_registry = DomainRegistry::new();
custom_registry.register(
DomainInfo::finite("Color", 3)
.with_metadata("values", "red,green,blue")
)?;
// Validate domains in expressions
let expr = TLExpr::exists("x", "Int", TLExpr::pred("P", vec![Term::var("x")]));
expr.validate_domains(&registry)?;
// Check domain compatibility
assert!(registry.are_compatible("Int", "Int")?);
assert!(registry.can_cast("Bool", "Int")?);
use tensorlogic_ir::{PredicateSignature, SignatureRegistry, TypeAnnotation};
let mut sig_registry = SignatureRegistry::new();
// Register: Parent(Person, Person) -> Bool
sig_registry.register(
PredicateSignature::new("Parent", 2)
.with_arg_type(TypeAnnotation::simple("Person"))
.with_arg_type(TypeAnnotation::simple("Person"))
.with_return_type(TypeAnnotation::simple("Bool"))
)?;
// Validate expressions
let expr = TLExpr::pred("Parent", vec![Term::var("x"), Term::var("y")]);
expr.validate_with_registry(&sig_registry)?;
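The registry-backed check should also reject calls that do not match the registered signature. A minimal sketch of the failing path, assuming `validate_with_registry` reports arity mismatches as errors:

```rust
// Hypothetical failing case: Parent was registered above with arity 2.
let wrong = TLExpr::pred("Parent", vec![Term::var("x")]);
assert!(wrong.validate_with_registry(&sig_registry).is_err());
```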
use tensorlogic_ir::{EinsumGraph, EinsumNode, OpType};
let mut graph = EinsumGraph::new();
// Add tensors
let input_a = graph.add_tensor("input_a");
let input_b = graph.add_tensor("input_b");
let output = graph.add_tensor("output");
// Matrix multiplication: einsum("ik,kj->ij")
graph.add_node(EinsumNode {
inputs: vec![input_a, input_b],
op: OpType::Einsum { spec: "ik,kj->ij".to_string() },
})?;
// Apply ReLU activation
graph.add_node(EinsumNode {
inputs: vec![output],
op: OpType::ElemUnary { op: "relu".to_string() },
})?;
// Mark output
graph.add_output(output)?;
// Validate
graph.validate()?;
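Because `EinsumGraph` exposes `tensors`, `nodes`, and `outputs` as public fields (see the API reference further below), the graph built above can be inspected directly. A minimal sketch, assuming `OpType` implements `Debug`:

```rust
// Walk the graph built above; node inputs are indices into graph.tensors.
for (i, node) in graph.nodes.iter().enumerate() {
    println!("node {}: {:?} inputs={:?}", i, node.op, node.inputs);
}
for &t in &graph.outputs {
    println!("output tensor: {}", graph.tensors[t]);
}
```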
use tensorlogic_ir::graph::optimization::OptimizationPipeline;
let mut graph = /* ... */;
// Run full optimization pipeline
let stats = graph.optimize()?;
println!("Dead nodes removed: {}", stats.dead_nodes_removed);
println!("Common subexpressions: {}", stats.cse_count);
println!("Simplifications: {}", stats.simplifications);
// Individual optimization passes
graph.eliminate_dead_code()?;
graph.common_subexpression_elimination()?;
graph.simplify()?;
use tensorlogic_ir::{Metadata, Provenance, SourceLocation};
// Track source location
let location = SourceLocation::new("rules.tl", 42, 10);
// Add provenance information
let provenance = Provenance::new("rule_123")
.with_source_file("rules.tl")
.with_attribute("author", "alice");
// Attach to graph nodes
let metadata = Metadata::new()
.with_name("matrix_multiply")
.with_attribute("optimization_level", "3");
pub enum TLExpr {
// Logical operations
Pred { name: String, args: Vec<Term> },
And(Box<TLExpr>, Box<TLExpr>),
Or(Box<TLExpr>, Box<TLExpr>),
Not(Box<TLExpr>),
// Quantifiers
Exists { var: String, domain: String, body: Box<TLExpr> },
ForAll { var: String, domain: String, body: Box<TLExpr> },
// Implications
Imply(Box<TLExpr>, Box<TLExpr>),
Score(Box<TLExpr>),
// Arithmetic
Add(Box<TLExpr>, Box<TLExpr>),
Sub(Box<TLExpr>, Box<TLExpr>),
Mul(Box<TLExpr>, Box<TLExpr>),
Div(Box<TLExpr>, Box<TLExpr>),
// Comparison
Eq(Box<TLExpr>, Box<TLExpr>),
Lt(Box<TLExpr>, Box<TLExpr>),
Gt(Box<TLExpr>, Box<TLExpr>),
Lte(Box<TLExpr>, Box<TLExpr>),
Gte(Box<TLExpr>, Box<TLExpr>),
// Control flow
IfThenElse {
condition: Box<TLExpr>,
then_branch: Box<TLExpr>,
else_branch: Box<TLExpr>,
},
// Literals
Constant(f64),
}
pub enum Term {
Var(String),
Const(String),
Typed {
value: Box<Term>,
type_annotation: TypeAnnotation,
},
}
pub struct EinsumGraph {
pub tensors: Vec<String>,
pub nodes: Vec<EinsumNode>,
pub outputs: Vec<usize>,
}
pub struct EinsumNode {
pub op: OpType,
pub inputs: Vec<usize>,
}
pub enum OpType {
Einsum { spec: String },
ElemUnary { op: String },
ElemBinary { op: String },
Reduce { op: String, axes: Vec<usize> },
}
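The builder methods used throughout this README (`TLExpr::pred`, `TLExpr::and`, and so on) are convenience constructors over these enums. As a sketch, and assuming the builders produce exactly the variants listed above, the same expression can be written either way:

```rust
use tensorlogic_ir::{TLExpr, Term};

// Person(x) AND Mortal(x) via the builders...
let via_builders = TLExpr::and(
    TLExpr::pred("Person", vec![Term::var("x")]),
    TLExpr::pred("Mortal", vec![Term::var("x")]),
);

// ...and the same expression spelled out against the enum variants.
let direct = TLExpr::And(
    Box::new(TLExpr::Pred {
        name: "Person".to_string(),
        args: vec![Term::Var("x".to_string())],
    }),
    Box::new(TLExpr::Pred {
        name: "Mortal".to_string(),
        args: vec![Term::Var("x".to_string())],
    }),
);
```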
let expr = TLExpr::pred("knows", vec![Term::var("x"), Term::var("y")]);
let free = expr.free_vars(); // {"x", "y"}
let quantified = TLExpr::exists("x", "Person", expr);
let still_free = quantified.free_vars(); // {"y"} - x is bound
// Consistent arity ✓
let p1 = TLExpr::pred("P", vec![Term::var("x"), Term::var("y")]);
let p2 = TLExpr::pred("P", vec![Term::var("a"), Term::var("b")]);
let expr = TLExpr::and(p1, p2);
assert!(expr.validate_arity().is_ok());
// Inconsistent arity ✗
let p3 = TLExpr::pred("P", vec![Term::var("z")]);
let bad_expr = TLExpr::and(p1, p3);
assert!(bad_expr.validate_arity().is_err());
let registry = DomainRegistry::with_builtins();
// Valid: nested quantifiers bind x to the same domain
let expr = TLExpr::exists(
"x", "Int",
TLExpr::forall("x", "Int", TLExpr::pred("P", vec![Term::var("x")]))
);
assert!(expr.validate_domains(&registry).is_ok());
// Invalid: incompatible domains
let bad = TLExpr::exists(
"x", "Int",
TLExpr::forall("x", "Bool", TLExpr::pred("P", vec![Term::var("x")]))
);
assert!(bad.validate_domains(&registry).is_err());
| Logic Operation | Tensor Equivalent | Notes |
|---|---|---|
| `AND(a, b)` | `a * b` | Element-wise multiplication |
| `OR(a, b)` | `max(a, b)` | Or soft variant |
| `NOT(a)` | `1 - a` | Complement |
| `∃x. P(x)` | `sum(P, axis=x)` | Or max for hard quantification |
| `∀x. P(x)` | `NOT(∃x. NOT(P(x)))` | Or product reduction |
| `a → b` | `ReLU(b - a)` | Soft implication |
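To make the mapping concrete, the sketch below lowers one row of the table, AND(a, b) as element-wise multiplication, onto the `EinsumGraph` API shown earlier. The "mul" operator name is an assumption; in practice this lowering is produced by the TensorLogic compiler crates rather than written by hand.

```rust
use tensorlogic_ir::{EinsumGraph, EinsumNode, OpType};

// Sketch: AND(a, b) as an element-wise multiply node.
let mut graph = EinsumGraph::new();
let a = graph.add_tensor("a");
let b = graph.add_tensor("b");
graph.add_node(EinsumNode {
    inputs: vec![a, b],
    op: OpType::ElemBinary { op: "mul".to_string() }, // assumed operator name
})?;
```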
Full serde support for JSON and binary formats:
use serde_json;
let expr = TLExpr::pred("Person", vec![Term::var("x")]);
// JSON
let json = serde_json::to_string(&expr)?;
let restored: TLExpr = serde_json::from_str(&json)?;
// Pretty JSON
let pretty = serde_json::to_string_pretty(&expr)?;
// Graphs too
let graph = EinsumGraph::new();
let graph_json = serde_json::to_string(&graph)?;
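The snippet above covers JSON. For a binary format, any serde-compatible codec works; a minimal round-trip sketch assuming the bincode 1.x API (bincode is a choice made for this example, not prescribed by tensorlogic-ir):

```rust
use tensorlogic_ir::{TLExpr, Term};

// Binary round-trip via an assumed bincode dependency.
let expr = TLExpr::pred("Person", vec![Term::var("x")]);
let bytes: Vec<u8> = bincode::serialize(&expr)?;
let restored: TLExpr = bincode::deserialize(&bytes)?;
```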
Comprehensive examples demonstrating all IR features (17 total):
# Basic expressions and logical operations
cargo run --example 00_basic_expressions
# Quantifiers (exists, forall)
cargo run --example 01_quantifiers
# Arithmetic and comparison operations
cargo run --example 02_arithmetic
# Graph construction patterns
cargo run --example 03_graph_construction
# IR optimization passes
cargo run --example 04_optimization
# Serialization (JSON and binary)
cargo run --example 05_serialization
# Visualization and DOT export
cargo run --example 06_visualization
# Parametric types and type unification
cargo run --example 07_parametric_types
# Effect system with tracking
cargo run --example 08_effect_system
# Dependent types with value dependencies
cargo run --example 09_dependent_types
# Linear types for resource management
cargo run --example 10_linear_types
# Refinement types with predicates
cargo run --example 11_refinement_types
# Profile-guided optimization
cargo run --example 12_profile_guided_optimization
# Sequent calculus and proof search
cargo run --example 13_sequent_calculus
# Constraint logic programming
cargo run --example 14_constraint_logic_programming
# Advanced graph algorithms
cargo run --example 15_advanced_graph_algorithms
# Resolution-based theorem proving
cargo run --example 16_resolution_theorem_proving
Comprehensive test suite with property-based tests and benchmarks:
# Run all tests (unit + integration + property tests)
cargo test -p tensorlogic-ir
# Run property tests only
cargo test -p tensorlogic-ir --test proptests
# Run benchmarks
cargo bench -p tensorlogic-ir
# With coverage
cargo tarpaulin --out Html
Test Status: ✅ 676/676 passing (100%)
tensorlogic-ir/
├── domain.rs # Domain constraints & validation
├── error.rs # Error types
├── expr/ # Logical expressions
│ ├── mod.rs # TLExpr enum & builders
│ ├── analysis.rs # Free variables, predicates
│ ├── validation.rs # Arity checking
│ └── domain_validation.rs # Domain validation
├── graph/ # Tensor computation graphs
│ ├── mod.rs # EinsumGraph
│ ├── node.rs # EinsumNode
│ ├── optype.rs # Operation types
│ ├── optimization.rs # Optimization passes
│ └── transform.rs # Graph transformations (disabled)
├── metadata.rs # Provenance & source tracking
├── signature.rs # Type signatures & registry
├── term.rs # Variables & constants
└── tests.rs # Integration tests
# Debug build
cargo build
# Release build
cargo build --release
# Format code
cargo fmt --all
# Run clippy
cargo clippy --all-targets -- -D warnings
# Check for warnings
cargo build 2>&1 | grep warning
Code style: snake_case variables, PascalCase types (splitrs is used for refactoring).
Performance benchmarks cover all core operations:
# Run all benchmarks
cargo bench -p tensorlogic-ir
# Run specific benchmark group
cargo bench -p tensorlogic-ir --bench ir_benchmarks -- expr_construction
cargo bench -p tensorlogic-ir --bench ir_benchmarks -- serialization
Benchmark coverage includes expression construction and serialization (see the benchmark groups above).
See TODO.md for detailed roadmap.
Current Status: ~90% complete (46/51 tasks)
Apache-2.0
**Status**: 🎉 Production Ready (v0.1.0-alpha.2)
**Last Updated**: 2025-12-16
**Tests**: 161/161 passing (100%)
**Examples**: 7 comprehensive demonstrations
**Benchmarks**: 40+ performance tests
**Documentation**: Zero rustdoc warnings with comprehensive module docs
**Maintained By**: COOLJAPAN Ecosystem