| Crates.io | deep_causality_tensor |
| lib.rs | deep_causality_tensor |
| version | 0.4.0 |
| created_at | 2025-09-19 03:20:28.42556+00 |
| updated_at | 2026-01-22 07:36:54.582611+00 |
| description | Tensor data structure for the deep_causality crate. |
| homepage | |
| repository | https://github.com/deepcausality/deep_causality.rs |
| max_upload_size | |
| id | 1845732 |
| size | 310,364 |
The CausalTensor provides a flexible, multi-dimensional array (tensor) backed by a single, contiguous Vec<T>. It is designed for efficient numerical computations, featuring a stride-based memory layout that supports broadcasting for
element-wise binary operations. It offers a comprehensive API for shape manipulation, element access, and common reduction operations like sum and mean, making it a versatile tool for causal modeling and other data-intensive
tasks.
To run the examples, use `cargo run --example <example_name>`:

```bash
cargo run --example applicative_causal_tensor
cargo run --example causal_tensor
cargo run --example effect_system_causal_tensor
cargo run --example ein_sum_causal_tensor
cargo run --example functor_causal_tensor
```
CausalTensor is straightforward to use. You create it from a flat vector of data and a vector defining its shape.
```rust
use deep_causality_tensor::CausalTensor;

fn main() {
    // 1. Create a 2x3 tensor.
    let data = vec![1, 2, 3, 4, 5, 6];
    let shape = vec![2, 3];
    let tensor = CausalTensor::new(data, shape).unwrap();
    println!("Original Tensor: {}", tensor);

    // 2. Get an element.
    let element = tensor.get(&[1, 2]).unwrap();
    assert_eq!(*element, 6);
    println!("Element at [1, 2]: {}", element);

    // 3. Reshape the tensor.
    let reshaped = tensor.reshape(&[3, 2]).unwrap();
    assert_eq!(reshaped.shape(), &[3, 2]);
    println!("Reshaped to 3x2: {}", reshaped);

    // 4. Perform tensor-scalar addition.
    let added = &tensor + 10;
    assert_eq!(added.as_slice(), &[11, 12, 13, 14, 15, 16]);
    println!("Tensor + 10: {}", added);

    // 5. Perform tensor-tensor addition with broadcasting.
    let t1 = CausalTensor::new(vec![1, 2, 3, 4, 5, 6], vec![2, 3]).unwrap();
    // A [1, 3] tensor...
    let t2 = CausalTensor::new(vec![10, 20, 30], vec![1, 3]).unwrap();
    // ...is broadcast across the rows of the [2, 3] tensor.
    let result = (&t1 + &t2).unwrap();
    assert_eq!(result.as_slice(), &[11, 22, 33, 14, 25, 36]);
    println!("Tensor-Tensor Add with Broadcast: {}", result);

    // 6. Sum all elements in the tensor (full reduction).
    let sum = tensor.sum_axes(&[]).unwrap();
    assert_eq!(sum.as_slice(), &[21]);
    println!("Sum of all elements: {}", sum);
}
```
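Beyond full reduction, `sum_axes` also accepts specific axes. A minimal sketch, assuming that reducing over axis 0 of a `[2, 3]` tensor sums column-wise (the exact output shape depends on the crate's reduction semantics):

```rust
use deep_causality_tensor::CausalTensor;

fn main() {
    // A 2x3 tensor: [[1, 2, 3], [4, 5, 6]].
    let tensor = CausalTensor::new(vec![1, 2, 3, 4, 5, 6], vec![2, 3]).unwrap();

    // Assumed behavior: reducing over axis 0 sums column-wise,
    // i.e. [1+4, 2+5, 3+6] = [5, 7, 9].
    let col_sums = tensor.sum_axes(&[0]).unwrap();
    println!("Sum along axis 0: {}", col_sums);
}
```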
The ein_sum function provides a powerful and flexible way to perform various tensor operations, including matrix multiplication, dot products, and more, by constructing an Abstract Syntax Tree (AST) of operations.
```rust
use deep_causality_tensor::CausalTensor;
use deep_causality_tensor::types::causal_tensor::op_tensor_ein_sum::EinSumOp;

fn main() {
    // Example: Matrix multiplication using ein_sum.
    let lhs_data = vec![1.0, 2.0, 3.0, 4.0];
    let lhs_tensor = CausalTensor::new(lhs_data, vec![2, 2]).unwrap();
    let rhs_data = vec![5.0, 6.0, 7.0, 8.0];
    let rhs_tensor = CausalTensor::new(rhs_data, vec![2, 2]).unwrap();

    // Construct the AST for matrix multiplication.
    let mat_mul_ast = EinSumOp::mat_mul(lhs_tensor, rhs_tensor);

    // Execute the Einstein summation.
    let result = CausalTensor::ein_sum(&mat_mul_ast).unwrap();
    println!("Result of Matrix Multiplication:\n{:?}", result);
    // Expected: CausalTensor { data: [19.0, 22.0, 43.0, 50.0], shape: [2, 2], strides: [2, 1] }

    // Example: Dot product.
    let vec1_tensor = CausalTensor::new(vec![1.0, 2.0, 3.0], vec![3]).unwrap();
    let vec2_tensor = CausalTensor::new(vec![4.0, 5.0, 6.0], vec![3]).unwrap();

    // Execute the Einstein summation for the dot product.
    let result_dot_prod =
        CausalTensor::ein_sum(&EinSumOp::dot_prod(vec1_tensor, vec2_tensor)).unwrap();
    println!("Result of Dot Product:\n{:?}", result_dot_prod);
}
```
CausalTensor implements a Higher-Kinded Type (HKT) via the deep_causality_haft crate as a witness type. When imported, the CausalTensorWitness type enables monadic composition and abstract type-level programming. For example, one can write generic functions that uniformly process tensors and other types:
```rust
use deep_causality_haft::{Functor, HKT, OptionWitness, ResultWitness};
use deep_causality_tensor::{CausalTensor, CausalTensorWitness};

fn triple_value<F>(m_a: F::Type<i32>) -> F::Type<i32>
where
    F: Functor<F> + HKT,
{
    F::fmap(m_a, |x| x * 3)
}

fn main() {
    println!("--- Functor Example: Tripling values in different containers ---");

    // Using triple_value with Option.
    let opt = Some(5);
    println!("Original Option: {:?}", opt);
    let proc_opt = triple_value::<OptionWitness>(opt);
    println!("Tripled Option: {:?}", proc_opt);
    assert_eq!(proc_opt, Some(15));

    // Using triple_value with Result.
    let res = Ok(5);
    println!("Original Result: {:?}", res);
    let proc_res = triple_value::<ResultWitness<i32>>(res);
    println!("Tripled Result: {:?}", proc_res);
    assert_eq!(proc_res, Ok(15));

    // Using triple_value with CausalTensor.
    let tensor = CausalTensor::new(vec![1, 2, 3], vec![3]).unwrap();
    println!("Original CausalTensor: {:?}", tensor);
    let proc_tensor = triple_value::<CausalTensorWitness>(tensor);
    println!("Tripled CausalTensor: {:?}", proc_tensor);
    assert_eq!(proc_tensor.data(), &[3, 6, 9]);
}
```
Functional composition of HKT tensors works best via an effect system that captures side effects and provides detailed errors and logs for each processing step. In the example below, tensors are composed, and the container MyMonadEffect3 captures the final tensor value, an optional error, and detailed logs from each processing step.
```rust
// ... Truncated

// 4. Chain operations using Monad::bind.
println!("Processing steps...");
let final_effect = MyMonadEffect3::bind(initial_effect, step1);
let final_effect = MyMonadEffect3::bind(final_effect, step2);
let final_effect = MyMonadEffect3::bind(final_effect, step3);

println!();
println!("--- Final Result ---");
println!("Final CausalTensor: {:?}", final_effect.value);
println!("Error: {:?}", final_effect.error);
println!("Logs: {:?}", final_effect.logs);
```
For complex data processing pipelines, this information is invaluable for debugging and optimization. If more detailed information is required, e.g., processing time for each step, an Effect Monad of arity 4 or 5 can be used to capture additional fields at each step.
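For orientation, here is a minimal, hypothetical sketch of what such an arity-3 effect container could look like. `Effect3` and its `bind` are illustrative stand-ins, not the actual MyMonadEffect3 or deep_causality_haft API (a production implementation would also short-circuit on error):

```rust
/// Hypothetical arity-3 effect container: a value, an optional error, and logs.
/// Illustrative only; not the actual deep_causality_haft API.
struct Effect3<T> {
    value: T,
    error: Option<String>,
    logs: Vec<String>,
}

impl<T> Effect3<T> {
    /// Lift a plain value into the effect container.
    fn pure(value: T) -> Self {
        Effect3 { value, error: None, logs: Vec::new() }
    }

    /// Chain a step: run it, keep its value, and accumulate logs and errors.
    fn bind<U>(self, step: impl FnOnce(T) -> Effect3<U>) -> Effect3<U> {
        let mut next = step(self.value);
        let mut logs = self.logs;
        logs.append(&mut next.logs);
        Effect3 { value: next.value, error: self.error.or(next.error), logs }
    }
}

fn main() {
    let result = Effect3::pure(vec![1, 2, 3])
        .bind(|v| Effect3 {
            value: v.iter().map(|x| x * 2).collect::<Vec<i32>>(),
            error: None,
            logs: vec!["step1: doubled every element".to_string()],
        })
        .bind(|v| Effect3 {
            value: v.iter().sum::<i32>(),
            error: None,
            logs: vec!["step2: summed the elements".to_string()],
        });

    println!("Final value: {:?}", result.value);
    println!("Error: {:?}", result.error);
    println!("Logs: {:?}", result.logs);
}
```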
The following benchmarks were run on a CausalTensor of size 100x100 (10,000 f64 elements).
| Operation | Time | Notes |
|---|---|---|
| `tensor_get` | ~2.31 ns | Accessing a single element. |
| `tensor_reshape` | ~2.46 µs | Metadata only, but clones data in the test. |
| `tensor_scalar_add` | ~4.95 µs | Element-wise addition with a scalar. |
| `tensor_tensor_add_broadcast` | ~46.67 µs | Element-wise addition with broadcasting. |
| `tensor_sum_full_reduction` | ~10.56 µs | Summing all 10,000 elements of the tensor. |
- `get`: Access is extremely fast, demonstrating the efficiency of the stride-based index calculation.
- `reshape`: This operation is very fast as it only adjusts metadata (shape and strides) and clones the underlying data vector.
- `binary_op`: The `binary_op` function provides efficient broadcasting for tensor-tensor operations, avoiding allocations in hot loops.

The core of CausalTensor is its stride-based memory layout. For a given shape (e.g., `[d1, d2, d3]`), the strides represent the number of elements to skip in the flat data vector to move one step along a particular dimension. For a row-major layout, the strides would be `[d2*d3, d3, 1]`. This allows the tensor to calculate the flat index for any multi-dimensional index `[i, j, k]` with a simple dot product: `i*strides[0] + j*strides[1] + k*strides[2]`.
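To make the layout concrete, here is a small standalone sketch of the stride arithmetic described above; `row_major_strides` and `flat_index` are illustrative helpers, not part of the crate's API:

```rust
/// Compute row-major strides for a given shape.
fn row_major_strides(shape: &[usize]) -> Vec<usize> {
    let mut strides = vec![1; shape.len()];
    for i in (0..shape.len().saturating_sub(1)).rev() {
        strides[i] = strides[i + 1] * shape[i + 1];
    }
    strides
}

/// Flat index = dot product of the multi-dimensional index with the strides.
fn flat_index(index: &[usize], strides: &[usize]) -> usize {
    index.iter().zip(strides).map(|(i, s)| i * s).sum()
}

fn main() {
    let shape = [2, 3, 4];
    let strides = row_major_strides(&shape);
    assert_eq!(strides, vec![12, 4, 1]); // [d2*d3, d3, 1]

    // Element [1, 2, 3] lives at 1*12 + 2*4 + 3*1 = 23 in the flat Vec.
    assert_eq!(flat_index(&[1, 2, 3], &strides), 23);
}
```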
Binary operations support broadcasting, which follows rules similar to those in libraries like NumPy. When operating on two tensors, CausalTensor compares their shapes dimension by dimension (from right to left). Two dimensions are compatible if:

- they are equal, or
- one of them is 1.
The smaller tensor's data is conceptually "stretched" or repeated along the dimensions where its size is 1 to match the larger tensor's shape, without actually copying the data. The optimized binary_op implementation achieves this by manipulating how it calculates the flat index for each tensor inside the computation loop.
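The following standalone sketch illustrates this right-to-left compatibility check; `broadcast_shape` is a hypothetical helper for illustration, not part of the CausalTensor API:

```rust
/// Compute the broadcast shape of two shapes, NumPy-style:
/// aligned from the right, dimensions must be equal or one of them 1.
fn broadcast_shape(a: &[usize], b: &[usize]) -> Option<Vec<usize>> {
    let ndim = a.len().max(b.len());
    let mut out = Vec::with_capacity(ndim);
    for i in 0..ndim {
        // Missing leading dimensions are treated as 1.
        let da = if i < ndim - a.len() { 1 } else { a[i - (ndim - a.len())] };
        let db = if i < ndim - b.len() { 1 } else { b[i - (ndim - b.len())] };
        match (da, db) {
            (x, y) if x == y => out.push(x),
            (1, y) => out.push(y),
            (x, 1) => out.push(x),
            _ => return None, // incompatible shapes
        }
    }
    Some(out)
}

fn main() {
    assert_eq!(broadcast_shape(&[2, 3], &[1, 3]), Some(vec![2, 3]));
    assert_eq!(broadcast_shape(&[2, 3], &[3]), Some(vec![2, 3]));
    assert_eq!(broadcast_shape(&[2, 3], &[2, 2]), None);
}
```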
The CausalTensor API is designed to be comprehensive and intuitive:

- Construction: `CausalTensor::new(data: Vec<T>, shape: Vec<usize>)`
- Inspection: `shape()`, `num_dim()`, `len()`, `is_empty()`, `as_slice()`
- Element access: `get()`, `get_mut()`
- Shape manipulation: `reshape()`, `ravel()`
- Reductions and sorting: `sum_axes()`, `mean_axes()`, `arg_sort()`
- Operators: `+`, `-`, `*`, `/` for both tensor-scalar and tensor-tensor operations

Contributions are welcome, especially related to documentation, example code, and fixes. If unsure where to start, just open an issue and ask.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in deep_causality by you shall be licensed under the MIT license, without any additional terms or conditions.
This project is licensed under the MIT license.
For details about security, please read the security policy.