graph-sp

version: 0.1.0
published: 2026-01-17 (updated 2026-01-19)
owner: briday1
description: A pure Rust graph executor supporting implicit node connections, branching, and config sweeps
repository: https://github.com/briday1/graph-sp
documentation: https://docs.rs/graph-sp

README

graph-sp

graph-sp is a pure Rust node-graph executor and optimizer. The project focuses on representing directed dataflow graphs, computing port mappings by graph inspection, and executing nodes efficiently in-process with parallel CPU execution.

Core Features

  • Implicit Node Connections: Nodes automatically connect based on execution order
  • Parallel Branching: Create fan-out execution paths with .branch()
  • Configuration Variants: Use .variant() to create parameter sweeps
  • DAG Analysis: Automatic inspection and optimization of execution paths
  • Mermaid Visualization: Generate diagrams with .to_mermaid()
  • In-process Execution: Parallel execution using rayon

Installation

Rust

Add to your Cargo.toml:

[dependencies]
graph-sp = "0.1.0"

Python

The library can also be used from Python via PyO3 bindings:

pip install graph_sp

Or build from source:

pip install maturin
maturin build --release --features python
pip install target/wheels/graph_sp-*.whl

Quick Start

Rust

Basic Sequential Pipeline

use graph_sp::Graph;
use std::collections::HashMap;

fn data_source(_: &HashMap<String, String>, _: &HashMap<String, String>) -> HashMap<String, String> {
    let mut result = HashMap::new();
    result.insert("value".to_string(), "42".to_string());
    result
}

fn multiply(inputs: &HashMap<String, String>, _: &HashMap<String, String>) -> HashMap<String, String> {
    let mut result = HashMap::new();
    if let Some(val) = inputs.get("x").and_then(|s| s.parse::<i32>().ok()) {
        result.insert("doubled".to_string(), (val * 2).to_string());
    }
    result
}

fn main() {
    let mut graph = Graph::new();
    
    // Add source node
    graph.add(data_source, Some("DataSource"), None, Some(vec![("value", "data")]));
    
    // Add processing node
    graph.add(multiply, Some("Multiply"), Some(vec![("data", "x")]), Some(vec![("doubled", "result")]));
    
    let dag = graph.build();
    let context = dag.execute();
    
    println!("Result: {}", context.get("result").unwrap());
}

Python

Basic Sequential Pipeline

import graph_sp

def data_source(inputs, variant_params):
    return {"value": "42"}

def multiply(inputs, variant_params):
    val = int(inputs.get("x", "0"))
    return {"doubled": str(val * 2)}

# Create graph
graph = graph_sp.PyGraph()

# Add source node
graph.add(
    function=data_source,
    label="DataSource",
    inputs=None,
    outputs=[("value", "data")]
)

# Add processing node
graph.add(
    function=multiply,
    label="Multiply",
    inputs=[("data", "x")],
    outputs=[("doubled", "result")]
)

# Build and execute
dag = graph.build()
context = dag.execute()

print(f"Result: {context['result']}")

Mermaid visualization output:

graph TD
    0["DataSource"]
    1["Multiply"]
    0 -->|data → x| 1

Parallel Branching (Fan-Out)

let mut graph = Graph::new();

// Source node
graph.add(source_fn, Some("Source"), None, Some(vec![("data", "data")]));

// Create parallel branches
graph.branch();
graph.add(stats_fn, Some("Statistics"), Some(vec![("data", "input")]), Some(vec![("mean", "stats")]));

graph.branch();
graph.add(model_fn, Some("MLModel"), Some(vec![("data", "input")]), Some(vec![("prediction", "model")]));

graph.branch();
graph.add(viz_fn, Some("Visualization"), Some(vec![("data", "input")]), Some(vec![("plot", "viz")]));

let dag = graph.build();

Mermaid visualization output:

graph TD
    0["Source"]
    1["Statistics"]
    2["MLModel"]
    3["Visualization"]
    0 -->|data → input| 1
    0 -->|data → input| 2
    0 -->|data → input| 3
    style 1 fill:#e1f5ff
    style 2 fill:#e1f5ff
    style 3 fill:#e1f5ff

DAG Statistics:

  • Nodes: 4
  • Depth: 2 levels
  • Max Parallelism: 3 nodes (all branches execute in parallel)
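The depth and parallelism figures above follow directly from level assignment on the DAG. As a minimal, self-contained sketch (not the crate's internals), each node's level is one more than the deepest of its predecessors, and the widest level bounds how many nodes can run at once:

```python
from collections import defaultdict

def dag_levels(num_nodes, edges):
    """Assign each node the earliest level at which it can run:
    level 0 for nodes with no inputs, otherwise 1 + max predecessor level.
    Assumes node indices 0..num_nodes-1 are in topological order."""
    preds = defaultdict(list)
    for src, dst in edges:
        preds[dst].append(src)
    level = {}
    for node in range(num_nodes):
        level[node] = 1 + max((level[p] for p in preds[node]), default=-1)
    levels = defaultdict(list)
    for node, lv in level.items():
        levels[lv].append(node)
    return dict(levels)

# Fan-out example: Source (0) feeds Statistics (1), MLModel (2), Visualization (3)
levels = dag_levels(4, [(0, 1), (0, 2), (0, 3)])
depth = len(levels)                                      # 2 levels
max_parallelism = max(len(v) for v in levels.values())   # 3 nodes
```

All three branches land on the same level, so they can be scheduled concurrently.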

Parameter Sweep with Variants

use graph_sp::{Graph, Linspace};

let mut graph = Graph::new();

// Source node
graph.add(source_fn, Some("DataSource"), None, Some(vec![("value", "data")]));

// Create variants for different learning rates
let learning_rates = vec![0.001, 0.01, 0.1, 1.0];
graph.variant("learning_rate", learning_rates);
graph.add(scale_fn, Some("ScaleLR"), Some(vec![("data", "input")]), Some(vec![("scaled", "output")]));

let dag = graph.build();

Mermaid visualization output:

graph TD
    0["DataSource"]
    1["ScaleLR (v0)"]
    2["ScaleLR (v1)"]
    3["ScaleLR (v2)"]
    4["ScaleLR (v3)"]
    0 -->|data → input| 1
    0 -->|data → input| 2
    0 -->|data → input| 3
    0 -->|data → input| 4
    style 1 fill:#e1f5ff
    style 2 fill:#e1f5ff
    style 3 fill:#e1f5ff
    style 4 fill:#e1f5ff
    style 1 fill:#ffe1e1
    style 2 fill:#e1ffe1
    style 3 fill:#ffe1ff
    style 4 fill:#ffffe1

DAG Statistics:

  • Nodes: 5
  • Depth: 2 levels
  • Max Parallelism: 4 nodes
  • Variants: 4 (all execute in parallel)
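Conceptually, .variant() clones the following node once per parameter value, giving each clone its own variant parameters. A hedged Python sketch of that expansion (illustrative only; the crate's actual representation may differ):

```python
def expand_variants(label, param_name, values):
    """Clone a node once per parameter value; each clone carries its own
    variant_params dict, mirroring the '(v0)', '(v1)', ... labels above."""
    return [
        {"label": f"{label} (v{i})", "variant_params": {param_name: v}}
        for i, v in enumerate(values)
    ]

clones = expand_variants("ScaleLR", "learning_rate", [0.001, 0.01, 0.1, 1.0])
```

Each clone receives the same upstream inputs, which is why all four variants sit at the same DAG level and execute in parallel.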

API Overview

Rust API

Graph Construction

  • Graph::new() - Create a new graph
  • graph.add(fn, name, inputs, outputs) - Add a node
    • fn: Node function with signature fn(&HashMap<String, String>, &HashMap<String, String>) -> HashMap<String, String>
    • name: Optional node name
    • inputs: Optional vector of (broadcast_var, impl_var) tuples for input mappings
    • outputs: Optional vector of (impl_var, broadcast_var) tuples for output mappings
  • graph.branch() - Create a new parallel branch
  • graph.variant(param_name, values) - Create parameter sweep variants
  • graph.build() - Build the DAG
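The (broadcast_var, impl_var) mappings can be pictured as renaming keys between a shared context and each node's local namespace. A minimal Python sketch of that scheme, reusing the Quick Start functions (this models the idea, not the crate's internals):

```python
def run_node(func, context, inputs, outputs, variant_params=None):
    """Map broadcast vars from the shared context into the node's local
    input names, call the node, then map its local outputs back out."""
    local = {impl: context[bcast] for bcast, impl in (inputs or [])}
    result = func(local, variant_params or {})
    for impl, bcast in (outputs or []):
        if impl in result:
            context[bcast] = result[impl]
    return context

def data_source(inputs, variant_params):
    return {"value": "42"}

def multiply(inputs, variant_params):
    return {"doubled": str(int(inputs["x"]) * 2)}

ctx = {}
run_node(data_source, ctx, None, [("value", "data")])
run_node(multiply, ctx, [("data", "x")], [("doubled", "result")])
# ctx["result"] is now "84"
```

This is why nodes connect implicitly: a node whose input mapping names "data" picks it up from whichever earlier node broadcast it.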

DAG Operations

  • dag.execute() - Execute the graph and return execution context
  • dag.stats() - Get DAG statistics (nodes, depth, parallelism, branches, variants)
  • dag.to_mermaid() - Generate Mermaid diagram representation
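The Mermaid output shown earlier is a direct rendering of the node list and edge mappings. A language-agnostic sketch in Python of how such a diagram string can be assembled (an assumption about the format, matching the examples above, not the crate's implementation):

```python
def to_mermaid(labels, edges):
    """Render node labels and (src, dst, mapping) edges as a Mermaid
    'graph TD' diagram, matching the example output in this README."""
    lines = ["graph TD"]
    lines += [f'    {i}["{label}"]' for i, label in enumerate(labels)]
    lines += [f'    {s} -->|{m}| {d}' for s, d, m in edges]
    return "\n".join(lines)

diagram = to_mermaid(["DataSource", "Multiply"], [(0, 1, "data → x")])
```

Pasting the resulting string into any Mermaid renderer reproduces the first diagram in the Quick Start.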

Python API

The Python bindings provide a similar API with proper GIL handling:

Graph Construction

  • PyGraph() - Create a new graph
  • graph.add(function, label, inputs, outputs) - Add a node
    • function: Python callable with signature fn(inputs: dict, variant_params: dict) -> dict
    • label: Optional node name (str)
    • inputs: Optional list of (broadcast_var, impl_var) tuples or dict
    • outputs: Optional list of (impl_var, broadcast_var) tuples or dict
  • graph.branch(subgraph) - Create a new parallel branch with a subgraph
  • graph.build() - Build the DAG and return a PyDag

DAG Operations

  • dag.execute() - Execute the graph and return execution context (dict)
  • dag.execute_parallel() - Execute with parallel execution where possible (dict)
  • dag.to_mermaid() - Generate Mermaid diagram representation (str)

GIL Handling

The Python bindings are designed with proper GIL handling:

  • GIL Release: The Rust executor runs without holding the GIL, allowing true parallelism
  • GIL Acquisition: Python callables used as node functions acquire the GIL only during their execution
  • Thread Safety: The bindings use pyo3::prepare_freethreaded_python() (via auto-initialize) for multi-threaded safety

This means that while Python functions execute sequentially (due to the GIL), the Rust graph traversal and coordination happens in parallel without GIL contention.

Development

Rust Development

Prerequisites:

  • Rust toolchain installed

Build and run tests:

cargo build --release
cargo test

Run examples:

cargo run --example comprehensive_demo
cargo run --example parallel_execution_demo
cargo run --example variant_demo_full

Python Development

Prerequisites:

  • Python 3.8+ installed
  • Rust toolchain installed

Build Python bindings:

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install maturin
pip install maturin==1.2.0

# Build and install in development mode
maturin develop --release --features python

# Run Python example
python examples/python_demo.py

Build wheel for distribution:

maturin build --release --features python
# Wheel will be in target/wheels/

Publishing

This repository is configured with GitHub Actions workflows to automatically publish to crates.io and PyPI when a release tag is pushed.

Required Repository Secrets

To enable automatic publishing, the repository owner must configure the following secrets in GitHub Settings → Secrets and variables → Actions:

  • CRATES_IO_TOKEN - API token for publishing to crates.io
  • PYPI_API_TOKEN - API token for publishing to PyPI

Publishing Process

The publish workflow (.github/workflows/publish.yml) will automatically run when:

  1. A tag matching v* is pushed (e.g., v0.1.0, v1.0.0)
  2. The workflow is manually triggered via workflow_dispatch

Creating a release:

# Ensure version numbers in Cargo.toml and pyproject.toml are correct
git tag -a v0.1.0 -m "Release v0.1.0"
git push origin v0.1.0

The workflow will:

  1. Build Python wheels for Python 3.8-3.11 on Linux, macOS, and Windows
  2. Upload wheel artifacts to the GitHub Actions run (always, even without secrets)
  3. Publish to PyPI (only if PYPI_API_TOKEN is set) - prebuilt wheels mean end users do not need Rust
  4. Publish to crates.io (only if CRATES_IO_TOKEN is set)

Important notes:

  • Installing from PyPI with pip install graph_sp will not require Rust on the target machine because prebuilt platform-specific wheels are published
  • Both crates.io and PyPI will reject duplicate version numbers - update versions before tagging
  • The workflow will continue even if tokens are not set, allowing you to download artifacts for manual publishing
  • For local testing, you can build wheels with maturin build --release --features python

Manual Publishing

If you prefer to publish manually or need to publish from a local machine:

To crates.io:

cargo publish --token YOUR_CRATES_IO_TOKEN

To PyPI:

# Install maturin
pip install maturin==1.2.0

# Build and publish wheels
maturin publish --username __token__ --password YOUR_PYPI_API_TOKEN --features python