| Field | Value |
|---|---|
| Crates.io | deep_causality_algorithms |
| lib.rs | deep_causality_algorithms |
| version | 0.1.4 |
| created_at | 2025-09-15 06:54:32.946196+00 |
| updated_at | 2025-09-25 09:08:19.77783+00 |
| description | Computational causality algorithms and utils used in the DeepCausality project. |
| homepage | https://deepcausality.com/ |
| repository | https://github.com/deepcausality/deep_causality.rs |
| max_upload_size | |
| id | 1839506 |
| size | 93,354 |
A collection of computational causality algorithms used in the DeepCausality project. This crate provides tools for analyzing and decomposing causal relationships in complex systems.
The cornerstone of this crate is `surd_states`, a high-performance Rust implementation of the SURD-states algorithm. Based on the paper "Observational causality by states and interaction type for scientific discovery" (Martínez-Sánchez et al., 2025), the algorithm decomposes the mutual information between a set of source variables and a target variable into its fundamental components: Synergistic, Unique, and Redundant (SURD).

This decomposition allows for a deep, nuanced understanding of causal structures, moving beyond simple correlations to reveal the nature of multi-variable interactions.
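To ground the idea, the quantity SURD decomposes is ordinary mutual information. The following standalone sketch (plain Rust, not the crate's API) computes I(S; T) for a 2x2 joint distribution P(T, S):

```rust
// Sketch: total mutual information I(S; T) for a 2x2 joint distribution.
// SURD splits this total into synergistic, unique, and redundant parts.
fn mutual_information(joint: &[[f64; 2]; 2]) -> f64 {
    // Marginals P(T) and P(S).
    let p_t: Vec<f64> = joint.iter().map(|row| row.iter().sum()).collect();
    let p_s: Vec<f64> = (0..2).map(|s| joint[0][s] + joint[1][s]).collect();
    let mut mi = 0.0;
    for t in 0..2 {
        for s in 0..2 {
            let p = joint[t][s];
            if p > 0.0 {
                mi += p * (p / (p_t[t] * p_s[s])).log2();
            }
        }
    }
    mi
}

fn main() {
    // Perfectly correlated source and target: I(S; T) = 1 bit.
    let correlated = [[0.5, 0.0], [0.0, 0.5]];
    println!("I = {:.3} bits", mutual_information(&correlated)); // 1.000

    // Independent source and target: I(S; T) = 0 bits.
    let independent = [[0.25, 0.25], [0.25, 0.25]];
    println!("I = {:.3} bits", mutual_information(&independent)); // 0.000
}
```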
Key features:

- A `MaxOrder` enum to limit the analysis to a tractable number of interactions (e.g., pairwise), reducing complexity from exponential O(2^N) to polynomial O(N^k).
- With the `parallel` feature flag, the main decomposition loop runs in parallel across all available CPU cores using `rayon`.

To add the crate to your project:

```shell
cargo add deep_causality_algorithms
```
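To make the complexity reduction concrete, here is a small standalone sketch (plain Rust, not part of the crate's API) counting how many non-empty source-variable subsets each regime must examine for N = 10 sources:

```rust
// Binomial coefficient C(n, k), computed incrementally (exact for small n).
fn binomial(n: u64, k: u64) -> u64 {
    if k > n {
        return 0;
    }
    (0..k).fold(1u64, |acc, i| acc * (n - i) / (i + 1))
}

// Number of non-empty subsets of size <= k: the polynomial O(N^k) regime.
fn combinations_up_to(n: u64, k: u64) -> u64 {
    (1..=k).map(|i| binomial(n, i)).sum()
}

fn main() {
    let n: u64 = 10;
    // Full decomposition considers all 2^N - 1 non-empty subsets.
    println!("full: {}", 2u64.pow(n as u32) - 1); // 1023
    // Capping at pairwise interactions (k = 2) leaves N + C(N, 2) subsets.
    println!("pairwise: {}", combinations_up_to(n, 2)); // 55
}
```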
The primary function is `surd_states`, which takes a `CausalTensor` representing a joint probability distribution and returns a `SurdResult`.
```rust
use deep_causality_algorithms::{surd_states, MaxOrder};
use deep_causality_data_structures::CausalTensor;

fn main() {
    // Create a joint probability distribution for a target and 2 source variables.
    // Shape: [target_states, source1_states, source2_states] = [2, 2, 2]
    let data = vec![
        0.1, 0.2, // P(T=0, S1=0, S2=0), P(T=0, S1=0, S2=1)
        0.0, 0.2, // P(T=0, S1=1, S2=0), P(T=0, S1=1, S2=1)
        0.3, 0.0, // P(T=1, S1=0, S2=0), P(T=1, S1=0, S2=1)
        0.1, 0.1, // P(T=1, S1=1, S2=0), P(T=1, S1=1, S2=1)
    ];
    let p_raw = CausalTensor::new(data, vec![2, 2, 2]).unwrap();

    // Perform a full decomposition (k = N = 2).
    let full_result = surd_states(&p_raw, MaxOrder::Max).unwrap();

    // Print the detailed decomposition.
    println!("{}", &full_result);

    // Access specific results.
    println!("Information Leak: {:.3}", full_result.info_leak());

    // Synergistic information for the pair of variables {1, 2}.
    if let Some(synergy) = full_result.synergistic_info().get(&vec![1, 2]) {
        println!("Synergistic Info for {{1, 2}}: {:.3}", synergy);
    }
}
```
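Since the input must be a joint probability distribution, it can help to validate the raw data before building the tensor. A standalone sketch (plain Rust; the check itself is an assumption, not a crate API):

```rust
// Sanity check for a raw joint distribution: all entries non-negative
// and summing to 1.0 within a floating-point tolerance.
fn is_valid_joint_distribution(data: &[f64], tol: f64) -> bool {
    data.iter().all(|&p| p >= 0.0) && (data.iter().sum::<f64>() - 1.0).abs() <= tol
}

fn main() {
    // Same data as the surd_states example above.
    let data = vec![0.1, 0.2, 0.0, 0.2, 0.3, 0.0, 0.1, 0.1];
    assert!(is_valid_joint_distribution(&data, 1e-9));
    println!("distribution is valid");
}
```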
The `surd_states` algorithm serves as a bridge from observational data to executable causal models built with the DeepCausality framework.
CausaloidGraph Structure

The aggregate SURD results inform the structure of the `CausaloidGraph`:

- Strong unique information from `S1` to `T` suggests a direct edge: `Causaloid(S1) -> Causaloid(T)`.
- Strong synergistic information from `S1` and `S2` onto `T` suggests a many-to-one connection where `Causaloid(S1)` and `Causaloid(S2)` both point to `Causaloid(T)`.
- A high information leak suggests that the `Causaloid` for `T` should model a high degree of internal randomness or dependency on an unobserved `Context`.
Causaloid Logic

The state-dependent maps provide the exact conditional logic for a `Causaloid`'s `causal_fn`. For example, if SURD shows that `S1`'s influence on `T` is strong only when `S1 > 0`, this condition can be programmed directly into the `Causaloid`.
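As a hypothetical illustration (plain Rust, not the crate's actual `Causaloid` or `causal_fn` types), such a state-dependent rule might look like:

```rust
// Hypothetical state-dependent causal rule for target T, derived from a
// SURD map showing that S1 drives T only in the regime S1 > 0.
fn t_is_active(s1: i64, s2: i64) -> bool {
    if s1 > 0 {
        // In the regime identified by SURD, S1 directly activates T.
        true
    } else {
        // Outside that regime, fall back to S2 (an assumption for illustration).
        s2 > 0
    }
}

fn main() {
    println!("T active for (S1=1, S2=0): {}", t_is_active(1, 0)); // true
    println!("T active for (S1=0, S2=0): {}", t_is_active(0, 0)); // false
    println!("T active for (S1=0, S2=1): {}", t_is_active(0, 1)); // true
}
```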
CausaloidCollection

SURD's ability to detect multi-causal relationships is perfectly complemented by the `CausaloidCollection`, which models the interplay of multiple factors. The SURD results guide the choice of the collection's `AggregateLogic`:

- Synergistic relationships, where all causes must act together, map to `AggregateLogic::All` (Conjunction).
- Redundant relationships, where any single cause suffices, map to `AggregateLogic::Any` (Disjunction).
- Threshold-style relationships, where at least k causes must be active, map to `AggregateLogic::Some(k)` (Threshold).

In summary, `surd_states` provides the data-driven evidence to identify multi-causal structures, and the DeepCausality primitives provide the formal mechanisms to build an executable model of that precise structure.
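A minimal sketch of the three aggregation modes, using assumed semantics in plain Rust rather than the crate's actual types:

```rust
// Assumed semantics of the three aggregation modes over the boolean
// outcomes of member causaloids (illustration, not the crate's API).
enum AggregateLogic {
    All,         // conjunction: every cause must fire
    Any,         // disjunction: a single cause suffices
    Some(usize), // threshold: at least k causes must fire
}

fn aggregate(logic: &AggregateLogic, outcomes: &[bool]) -> bool {
    let fired = outcomes.iter().filter(|&&b| b).count();
    match logic {
        AggregateLogic::All => fired == outcomes.len(),
        AggregateLogic::Any => fired >= 1,
        AggregateLogic::Some(k) => fired >= *k,
    }
}

fn main() {
    let outcomes = [true, false, true];
    println!("All:     {}", aggregate(&AggregateLogic::All, &outcomes));     // false
    println!("Any:     {}", aggregate(&AggregateLogic::Any, &outcomes));     // true
    println!("Some(2): {}", aggregate(&AggregateLogic::Some(2), &outcomes)); // true
}
```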
The crate includes a detailed example (`example_surd`) that demonstrates how to use the `surd_states` algorithm and, more importantly, how to interpret its rich output. It runs through several test cases with different underlying causal structures (e.g., synergistic, noisy, random) and explains what each part of the output means.
To run the example:
```shell
cargo run --example example_surd
```
For a detailed walkthrough of the output, see the example's README.
Contributions are welcome, especially those related to documentation, example code, and fixes. If you are unsure where to start, just open an issue and ask.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in deep_causality by you shall be licensed under the MIT license, without any additional terms or conditions.
This project is licensed under the MIT license.
For details about security, please read the security policy.