| Crates.io | deep-delta-learn |
| lib.rs | deep-delta-learn |
| version | 0.1.0 |
| created_at | 2026-01-07 05:05:29.502132+00 |
| updated_at | 2026-01-07 05:05:29.502132+00 |
| description | An implementation of Deep Delta Learning as in 2601.00417 |
| homepage | https://github.com/tuned-org-uk/deep-delta-learn |
| repository | https://github.com/tuned-org-uk/deep-delta-learn |
| max_upload_size | |
| id | 2027555 |
| size | 138,834 |
Rust + Burn implementation of Deep Delta Learning (DDL) from the paper "Deep Delta Learning" (arXiv:2601.00417v1).
This repository provides:
- Core Delta operators (`delta_update`) for matrix-valued states.
- A `DeltaResidual` block (Delta-Res) that wraps the generator branches and the Delta update.

The Delta-Res update is a rank-1 residual transformation:
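The paper's exact equation is not reproduced here; as a sketch, assuming the standard delta-rule form used by DeltaNet-style updates, `S' = S + β · k (v − Sᵀk)ᵀ` with `S: [D, V]`, `k: [D]`, `v: [V]`. The std-only reference below illustrates that rank-1 update for a single sample; the function name and shapes are illustrative, not the crate's actual API.

```rust
// Illustrative, std-only sketch of a rank-1 delta-rule update
//     S' = S + beta * k (v - S^T k)^T
// for one sample. S: [D, V] as Vec<Vec<f64>>, k: [D], v: [V].
fn delta_update(s: &mut Vec<Vec<f64>>, k: &[f64], beta: f64, v: &[f64]) {
    let d = s.len();
    let vdim = v.len();
    // Readout r = S^T k, shape [V].
    let mut r = vec![0.0; vdim];
    for i in 0..d {
        for j in 0..vdim {
            r[j] += s[i][j] * k[i];
        }
    }
    // Rank-1 residual: S' = S + beta * k (v - r)^T.
    for i in 0..d {
        for j in 0..vdim {
            s[i][j] += beta * k[i] * (v[j] - r[j]);
        }
    }
}

fn main() {
    // With beta = 1 and a unit-norm key, the update writes v exactly:
    // the new readout S'^T k equals v.
    let mut s = vec![vec![0.5, -0.25], vec![0.1, 0.2], vec![0.0, 1.0]];
    let k = [1.0, 0.0, 0.0];
    let v = [2.0, 3.0];
    delta_update(&mut s, &k, 1.0, &v);
    let readout: Vec<f64> = (0..2)
        .map(|j| (0..3).map(|i| s[i][j] * k[i]).sum())
        .collect();
    println!("{:?}", readout); // [2.0, 3.0]
}
```

With `β < 1` the state moves only part of the way toward `v`, which is what makes the update a residual correction rather than a hard overwrite.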
Burn tracks tensor rank at the type level (Tensor<B, const D: usize, ...>), so reductions behave differently than in PyTorch.
Critical conventions in this crate:
- The state is a `Tensor<B, 3>` with shape `[B, D, V]`.
- `k` is `[B, D]`, `beta` is `[B, 1]`, and `v` is `[B, V]`.

Rank-preserving reductions:
Operations like `mean_dim` and `sum_dim` are rank-preserving in Burn (e.g., `sum_dim(1)` on `[B, D]` yields `[B, 1]`, not `[B]`).
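To make the shape semantics concrete, here is a std-only sketch that models the shapes (not the values, and not the actual Burn API): a rank-preserving reduction keeps the reduced axis as size 1, and an explicit squeeze is what drops it.

```rust
// Shape bookkeeping only, with illustrative helper names:
// a rank-preserving reduction keeps the reduced axis as size 1.
fn sum_dim(shape: &[usize], dim: usize) -> Vec<usize> {
    let mut out = shape.to_vec();
    out[dim] = 1; // reduced dimension becomes 1, rank unchanged
    out
}

// Squeezing removes a singleton dimension, lowering the rank by one.
fn squeeze(shape: &[usize], dim: usize) -> Vec<usize> {
    assert_eq!(shape[dim], 1, "can only squeeze a size-1 dimension");
    let mut out = shape.to_vec();
    out.remove(dim);
    out
}

fn main() {
    let bd = [8, 16];                // [B, D]
    let kept = sum_dim(&bd, 1);      // [8, 1], not [8]
    let dropped = squeeze(&kept, 1); // [8]
    println!("{:?} {:?}", kept, dropped);
}
```

The `[B, 1]` form is exactly what broadcasts cleanly against `[B, D]` or `[B, V]`, which is why `delta.rs` keeps it and only the pooling code squeezes.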
- In `branches.rs` (pooling), we use `squeeze::<2>()` to explicitly drop singleton dimensions and return rank-2 tensors.
- In `delta.rs`, we leverage the preserved rank (e.g., `[B, 1]`) for correct broadcasting without needing extra `unsqueeze` calls.

CPU (default):
```sh
cargo run --release
```
WGPU (cross-platform GPU):
```sh
cargo run --release --features wgpu
```
CUDA (NVIDIA GPU):
```sh
cargo run --release --features cuda
```
Run tests:
```sh
cargo test
```
- `src/delta.rs`: core Delta operators (Eq. 2.5).
- `src/branches.rs`: generator branches for $k, \beta, v$.
- `src/nn.rs`: `DeltaResidual` and helper blocks.
- `src/backend.rs`: backend selection helper for Burn 0.18.
- `src/main.rs`: simple smoke-test binary.