| Crates.io | tensorlogic |
| lib.rs | tensorlogic |
| version | 0.1.0-alpha.2 |
| created_at | 2025-11-03 01:14:44.328516+00 |
| updated_at | 2026-01-03 21:11:57.783952+00 |
| description | Logic-as-Tensor planning layer - meta crate re-exporting all TensorLogic components |
| homepage | https://github.com/cool-japan/tensorlogic |
| repository | https://github.com/cool-japan/tensorlogic |
| max_upload_size | |
| id | 1913761 |
| size | 83,202 |
Unified access to all TensorLogic components
This is the top-level umbrella crate that re-exports all TensorLogic components for convenient access. Instead of importing individual crates, you can use this meta crate to access the entire TensorLogic ecosystem.
TensorLogic compiles logical rules (predicates, quantifiers, implications) into tensor equations (einsum graphs) with a minimal DSL + IR, enabling neural/symbolic/probabilistic models within a unified tensor computation framework.
Add to your Cargo.toml:
```toml
[dependencies]
tensorlogic = "0.1.0-alpha.2"
```
```rust
use tensorlogic::prelude::*;

// Define logical expressions
let x = Term::var("x");
let y = Term::var("y");
let knows = TLExpr::pred("knows", vec![x.clone(), y.clone()]);

// Compile to tensor graph
let graph = compile_to_einsum(&knows)?;

// Execute with backend
let mut executor = Scirs2Exec::new();
let result = executor.forward(&graph)?;
```
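The `?` operators above need a `Result`-returning context. A complete, minimal program might look like the following sketch; boxing the error is an illustrative assumption (it requires the crate's error types to implement `std::error::Error`), not the crate's prescribed pattern:

```rust
use tensorlogic::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build the predicate knows(x, y) over two logic variables.
    let x = Term::var("x");
    let y = Term::var("y");
    let knows = TLExpr::pred("knows", vec![x, y]);

    // Compile the rule into an einsum graph, then run it on the backend.
    let graph = compile_to_einsum(&knows)?;
    let mut executor = Scirs2Exec::new();
    let _result = executor.forward(&graph)?;
    Ok(())
}
```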
The meta crate provides organized access to three layers:
```rust
use tensorlogic::ir::*;        // AST and IR types
use tensorlogic::compiler::*;  // Logic → tensor compilation
use tensorlogic::infer::*;     // Execution traits
use tensorlogic::adapters::*;  // Symbol tables, domains
```
Components:
- `tensorlogic::ir` - Core IR types (`Term`, `TLExpr`, `EinsumGraph`)
- `tensorlogic::compiler` - Logic-to-tensor mapping with static analysis
- `tensorlogic::infer` - Execution/autodiff traits (`TlExecutor`, `TlAutodiff`)
- `tensorlogic::adapters` - Symbol tables, axis metadata, domain masks
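A minimal sketch of the core layer end to end, compiling an existentially quantified rule. Note that `TLExpr::exists` is an assumed constructor name used here for illustration; the bundled example 01_exists_reduce shows the actual API:

```rust
use tensorlogic::prelude::*;

// ∃y. knows(x, y): the quantifier lowers to a reduction over the y axis
// (a sum by default; see the mapping table below).
let x = Term::var("x");
let y = Term::var("y");
let knows = TLExpr::pred("knows", vec![x, y]);
let rule = TLExpr::exists("y", knows); // hypothetical signature
let graph = compile_to_einsum(&rule)?;
```

The runtime layer then executes and trains these graphs: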
```rust
use tensorlogic::scirs_backend::*; // SciRS2 runtime executor
use tensorlogic::train::*;         // Training infrastructure
```
Components:
- `tensorlogic::scirs_backend` - Runtime executor with CPU/SIMD/GPU features
- `tensorlogic::train` - Training loops, loss wiring, schedules, callbacks

```rust
use tensorlogic::oxirs_bridge::*;     // RDF*/SHACL integration
use tensorlogic::sklears_kernels::*;  // ML kernels
use tensorlogic::quantrs_hooks::*;    // PGM integration
use tensorlogic::trustformers::*;     // Transformer components
```
Components:
- `tensorlogic::oxirs_bridge` - RDF*/GraphQL/SHACL → TL rules; provenance binding
- `tensorlogic::sklears_kernels` - Logic-derived similarity kernels for SkleaRS
- `tensorlogic::quantrs_hooks` - PGM/message-passing interop for QuantrS2
- `tensorlogic::trustformers` - Transformer-as-rules (attention/FFN as einsum)

For convenience, commonly used types are available through the prelude:

```rust
use tensorlogic::prelude::*;
```
This imports:
- `Term`, `TLExpr`, `EinsumGraph`, `EinsumNode`
- `compile_to_einsum`, `CompilerContext`, `CompilationConfig`
- `TlExecutor`, `TlAutodiff`, `Scirs2Exec`
- `IrError`, `CompilerError`

This crate includes 5 comprehensive examples demonstrating all features:
```bash
# Basic predicate and compilation
cargo run --example 00_minimal_rule

# Existential quantifier with reduction
cargo run --example 01_exists_reduce

# Full execution with SciRS2 backend
cargo run --example 02_scirs2_execution

# OxiRS bridge with RDF* data
cargo run --example 03_rdf_integration

# Comparing 6 compilation strategy presets
cargo run --example 04_compilation_strategies
```
TensorLogic supports 6 preset compilation strategies:
```rust
use tensorlogic::compiler::CompilationConfig;

let config = CompilationConfig::soft_differentiable();
let graph = compile_with_config(&expr, config)?;
```
Default mappings (configurable per use case):
| Logic Operation | Tensor Equivalent | Notes |
|---|---|---|
| `AND(a, b)` | `a * b` (Hadamard) | Element-wise multiplication |
| `OR(a, b)` | `max(a, b)` | Or soft variant |
| `NOT(a)` | `1 - a` | Or temperature-controlled |
| `∃x. P(x)` | `sum(P, axis=x)` | Or max for hard |
| `∀x. P(x)` | Dual of ∃ | Or product reduction |
| `a → b` | `max(1 - a, b)` | Or ReLU variant |
Control which components are included:
```toml
[dependencies]
tensorlogic = { version = "0.1.0-alpha.2", features = ["simd"] }
```
Available features:
- `simd` - Enable SIMD acceleration in the SciRS2 backend (2-4x speedup)
- `gpu` - Enable GPU support (future)

Each component has comprehensive documentation.
```bash
# Build the meta crate
cargo build -p tensorlogic

# Build with SIMD support
cargo build -p tensorlogic --features simd

# Run tests
cargo nextest run -p tensorlogic

# Run examples
cargo run -p tensorlogic --example 00_minimal_rule
```
The meta crate includes all component tests:
```bash
# Run all tests
cargo test -p tensorlogic --all-features

# Run with nextest (faster)
cargo nextest run -p tensorlogic --all-features
```
This meta crate version 0.1.0-alpha.2 includes:
| Component | Version | Status |
|---|---|---|
| tensorlogic-ir | 0.1.0-alpha.2 | ✅ Production Ready |
| tensorlogic-compiler | 0.1.0-alpha.2 | ✅ Production Ready |
| tensorlogic-infer | 0.1.0-alpha.2 | ✅ Production Ready |
| tensorlogic-scirs-backend | 0.1.0-alpha.2 | ✅ Production Ready |
| tensorlogic-train | 0.1.0-alpha.2 | ✅ Complete |
| tensorlogic-adapters | 0.1.0-alpha.2 | ✅ Complete |
| tensorlogic-oxirs-bridge | 0.1.0-alpha.2 | ✅ Complete |
| tensorlogic-sklears-kernels | 0.1.0-alpha.2 | ✅ Core Features |
| tensorlogic-quantrs-hooks | 0.1.0-alpha.2 | ✅ Core Features |
| tensorlogic-trustformers | 0.1.0-alpha.2 | ✅ Complete |
All components are synchronized to version 0.1.0-alpha.2.
If you were using individual crates:
Before:
```toml
[dependencies]
tensorlogic-ir = "0.1.0-alpha.2"
tensorlogic-compiler = "0.1.0-alpha.2"
tensorlogic-scirs-backend = "0.1.0-alpha.2"
```
After:
```toml
[dependencies]
tensorlogic = "0.1.0-alpha.2"
```
Your code stays the same; just update the imports:
Before:
```rust
use tensorlogic_ir::{Term, TLExpr};
use tensorlogic_compiler::compile_to_einsum;
```
After:
```rust
use tensorlogic::ir::{Term, TLExpr};
use tensorlogic::compiler::compile_to_einsum;

// Or use the prelude for common types
use tensorlogic::prelude::*;
```
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
Licensed under the Apache License 2.0. See LICENSE for details.
Part of the COOLJAPAN Ecosystem
For questions and support, please open an issue on GitHub.