| Crates.io | rustnn |
| lib.rs | rustnn |
| version | 0.5.11 |
| created_at | 2025-12-14 21:13:35.780716+00 |
| updated_at | 2025-12-29 20:08:02.877167+00 |
| description | W3C WebNN implementation with ONNX, CoreML, and TensorRT backends [DO NOT USE IN PRODUCTION - Development Release] |
| homepage | |
| repository | https://github.com/tarekziade/rust-webnn-graph |
| max_upload_size | |
| id | 1985118 |
| size | 1,393,296 |
A Rust implementation of WebNN graph handling, with Python bindings that follow the W3C WebNN API specification.
This is an early-stage experimental implementation for research and exploration. Many features are incomplete, untested, or may change significantly.
rustnn provides:
- WebNN graph building, validation, and visualization
- Python bindings implementing the W3C WebNN API
- Execution backends: ONNX Runtime (bundled), CoreML, and TensorRT
- Conversion of graphs to ONNX models
PyPI Package (v0.4.0+):
# Install with bundled ONNX Runtime - no additional dependencies needed
pip install pywebnn
# Works immediately with real execution (earlier releases returned zero-filled outputs; see note below)
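To verify the wheel installed and imports cleanly, a one-line smoke check (this assumes only the webnn module and ML class used in the quick start below):
# Optional: quick smoke check after install
python -c "import webnn; webnn.ML(); print('webnn OK')"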
Build from Source (For Development):
git clone https://github.com/tarekziade/rustnn.git
cd rustnn
make python-dev # Sets up venv and builds with ONNX Runtime + CoreML
source .venv-webnn/bin/activate
Requirements: Python 3.11+, NumPy 1.20+
Note: Version 0.4.0+ includes bundled ONNX Runtime. Earlier versions (0.3.0 and below) had no backends and returned zeros.
To use the crate directly from Rust, add it to your Cargo.toml:
[dependencies]
rustnn = "0.5"
import webnn
import numpy as np
# Create ML context with device hints
ml = webnn.ML()
context = ml.create_context(accelerated=False) # CPU execution
builder = context.create_graph_builder()
# Build a simple graph: output = relu(x + y)
x = builder.input("x", [2, 3], "float32")
y = builder.input("y", [2, 3], "float32")
z = builder.add(x, y)
output = builder.relu(z)
# Compile the graph
graph = builder.build({"output": output})
# Execute with real data
x_data = np.array([[1, -2, 3], [4, -5, 6]], dtype=np.float32)
y_data = np.array([[-1, 2, -3], [-4, 5, -6]], dtype=np.float32)
results = context.compute(graph, {"x": x_data, "y": y_data})
print(results["output"])  # [[0. 0. 0.] [0. 0. 0.]] (x and y cancel element-wise, so relu sees all zeros)
# Optional: Export to ONNX
context.convert_to_onnx(graph, "model.onnx")
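One way to sanity-check the export is to run the ONNX file with the onnxruntime package and compare against context.compute. A minimal sketch, assuming onnxruntime is installed and that the exported model keeps the graph's input names ("x", "y"):

import onnxruntime as ort

# Load the exported model and run it on the same inputs as above
sess = ort.InferenceSession("model.onnx")
onnx_out = sess.run(None, {"x": x_data, "y": y_data})[0]

# The ONNX Runtime result should match the WebNN result element-wise
np.testing.assert_allclose(onnx_out, results["output"], rtol=1e-5)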
Following the W3C WebNN Device Selection spec, backends are selected via hints:
# CPU-only execution
context = ml.create_context(accelerated=False)
# Request GPU/NPU (platform selects best available)
context = ml.create_context(accelerated=True)
# Request high-performance (prefers GPU)
context = ml.create_context(accelerated=True, power_preference="high-performance")
# Request low-power (prefers NPU/Neural Engine)
context = ml.create_context(accelerated=True, power_preference="low-power")
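Because these are hints, the simplest way to see what they buy you on a given machine is to time the same graph under each setting. A minimal sketch using only the calls shown in the quick start (the timing loop and iteration count are illustrative, not part of the API):

import time

def bench(context, graph, feeds, iters=100):
    # Warm up once, then average the time of repeated compute calls
    context.compute(graph, feeds)
    start = time.perf_counter()
    for _ in range(iters):
        context.compute(graph, feeds)
    return (time.perf_counter() - start) / iters * 1000  # ms per run

for accelerated in (False, True):
    ctx = ml.create_context(accelerated=accelerated)
    b = ctx.create_graph_builder()
    x = b.input("x", [2, 3], "float32")
    y = b.input("y", [2, 3], "float32")
    g = b.build({"output": b.relu(b.add(x, y))})
    feeds = {"x": x_data, "y": y_data}
    print(f"accelerated={accelerated}: {bench(ctx, g, feeds):.2f} ms")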
Platform-Specific Backends:
- ONNX Runtime (bundled): CPU and GPU execution on all platforms
- CoreML: Apple GPU / Neural Engine on macOS
- TensorRT: NVIDIA GPU acceleration
MobileNetV2 image-classification demo:
# Download pretrained weights (first time only)
bash scripts/download_mobilenet_weights.sh
# Run on different backends
python examples/mobilenetv2_complete.py examples/images/test.jpg --backend cpu
python examples/mobilenetv2_complete.py examples/images/test.jpg --backend gpu
python examples/mobilenetv2_complete.py examples/images/test.jpg --backend coreml
Output:
Top 5 Predictions (Real ImageNet Labels):
1. lesser panda 99.60%
2. polecat 0.20%
3. weasel 0.09%
Performance: 74.41ms (CPU) / 77.14ms (GPU) / 51.93ms (CoreML)
Text-generation demo:
# Run generation with attention
make text-gen-demo
# Train on custom text
make text-gen-train
# Generate with trained weights
make text-gen-trained
See examples/ for more samples.
See docs/development/implementation-status.md for complete details.
# Validate a graph
cargo run -- examples/sample_graph.json
# Visualize a graph (requires graphviz)
cargo run -- examples/sample_graph.json --export-dot graph.dot
dot -Tpng graph.dot -o graph.png
# Convert to ONNX
cargo run -- examples/sample_graph.json --convert onnx --convert-output model.onnx
# Execute with ONNX Runtime
cargo run --features onnx-runtime -- examples/sample_graph.json --convert onnx --run-onnx
See make help for all available targets.
Contributions welcome! Quick contribution guide:
git checkout -b feature/my-feature
./scripts/install-git-hooks.sh
make test && make python-test
make fmt
Licensed under the Apache License, Version 2.0. See LICENSE for details.
Made with Rust by Tarek Ziade