| Crates.io | medrs |
| lib.rs | medrs |
| version | 0.1.2 |
| created_at | 2025-12-01 21:13:18.071494+00 |
| updated_at | 2025-12-29 23:20:44.321637+00 |
| description | Ultra-high-performance medical imaging I/O for deep learning |
| homepage | https://github.com/liamchalcroft/med-rs |
| repository | https://github.com/liamchalcroft/med-rs |
| max_upload_size | |
| id | 1960659 |
| size | 1,067,389 |
High-performance medical imaging I/O and processing library for Rust and Python.
medrs is designed for throughput-critical medical imaging workflows, particularly deep learning pipelines that process large 3D volumes. It provides fast full-volume and cropped loading, on-the-fly resampling, direct-to-PyTorch conversion, and intensity normalization. Benchmark highlights at 128³:

| Operation | medrs | MONAI | TorchIO | vs MONAI |
|---|---|---|---|---|
| Load | 0.13ms | 4.55ms | 4.71ms | 35x |
| Load Cropped (64³) | 0.41ms | 4.68ms | 9.86ms | 11x |
| Load Resampled | 0.40ms | 6.88ms | 27.65ms | 17x |
| To PyTorch | 0.49ms | 5.14ms | 10.22ms | 10x |
| Load + Normalize | 0.60ms | 5.36ms | 12.26ms | 9x |
At larger volumes (512³), speedups increase dramatically: up to 38,000x vs MONAI and 6,600x vs TorchIO.
Size of the same volume at different data types:

| Format | Size | vs float32 |
|---|---|---|
| float32 | 8.3 MB | 100% |
| bfloat16 | 3.4 MB | 41% |
| float16 | 4.1 MB | 50% |
| int16 | 1.2 MB | 15% |
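Reduced-precision loading can target these formats via the dtype argument of load_to_torch (demonstrated in the quick start further down). A minimal sketch, assuming torch.bfloat16 is accepted alongside the torch.float16 shown later:

```python
import medrs
import torch

# Assumption: load_to_torch accepts torch.bfloat16 in addition to torch.float16
# (bfloat16 appears in the format table above).
tensor_f16 = medrs.load_to_torch("brain.nii.gz", dtype=torch.float16)
tensor_bf16 = medrs.load_to_torch("brain.nii.gz", dtype=torch.bfloat16)
print(tensor_f16.dtype, tensor_bf16.dtype)
```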
Benchmark results comparing medrs, MONAI, and TorchIO across multiple volume sizes and operations.

Load (full volume):

| Size | medrs | MONAI | TorchIO | vs MONAI | vs TorchIO |
|---|---|---|---|---|---|
| 64³ | 0.13ms | 1.34ms | 2.35ms | 10x | 18x |
| 128³ | 0.13ms | 4.55ms | 4.71ms | 35x | 36x |
| 256³ | 0.14ms | 159.11ms | 95.18ms | 1,136x | 680x |
| 512³ | 0.13ms | 5,006.76ms | 866.54ms | 38,513x | 6,665x |

Load cropped (64³ patch):

| Source size | medrs | MONAI | TorchIO | vs MONAI | vs TorchIO |
|---|---|---|---|---|---|
| 64³ | 0.27ms | 1.75ms | 6.00ms | 6x | 22x |
| 128³ | 0.41ms | 4.68ms | 9.86ms | 11x | 24x |
| 256³ | 0.55ms | 154.86ms | 104.48ms | 282x | 190x |
| 512³ | 0.76ms | 5,041.42ms | 1,076.89ms | 6,633x | 1,417x |

Load resampled (to half the source resolution):

| Source → target | medrs | MONAI | TorchIO | vs MONAI | vs TorchIO |
|---|---|---|---|---|---|
| 64³ → 32³ | 0.18ms | 1.93ms | 5.45ms | 11x | 30x |
| 128³ → 64³ | 0.40ms | 6.88ms | 27.65ms | 17x | 69x |
| 256³ → 128³ | 2.02ms | 178.87ms | 363.85ms | 89x | 180x |
| 512³ → 256³ | 6.67ms | 5,960.93ms | 4,039.05ms | 894x | 605x |

Load to PyTorch tensor:

| Size | medrs | MONAI | TorchIO | vs MONAI | vs TorchIO |
|---|---|---|---|---|---|
| 64³ | 0.34ms | 1.58ms | 5.37ms | 5x | 16x |
| 128³ | 0.49ms | 5.14ms | 10.22ms | 10x | 21x |
| 256³ | 0.60ms | 162.78ms | 53.70ms | 271x | 90x |
| 512³ | 0.84ms | 5,864.85ms | 1,223.24ms | 6,982x | 1,456x |

Load + normalize:

| Size | medrs | MONAI | TorchIO | vs MONAI | vs TorchIO |
|---|---|---|---|---|---|
| 64³ | 0.49ms | 2.15ms | 7.04ms | 4x | 14x |
| 128³ | 0.60ms | 5.36ms | 12.26ms | 9x | 20x |
| 256³ | 0.73ms | 163.38ms | 53.59ms | 224x | 73x |
| 512³ | 1.01ms | 3,735.31ms | 1,092.25ms | 3,698x | 1,081x |
Benchmarks were run on an Apple M1 Pro (20 iterations, 3 warmup iterations per measurement). Run them yourself:

```bash
python benchmarks/bench_medrs.py
```
Install from PyPI:

```bash
pip install medrs
```
Or add medrs to a Rust project's Cargo.toml:

```toml
[dependencies]
medrs = "0.1"
```
To build from source for development:

```bash
git clone https://github.com/liamchalcroft/med-rs.git
cd med-rs
pip install -e ".[dev]"
maturin develop --features python
```
Python:
```python
import medrs
import torch

# Load a NIfTI image
img = medrs.load("brain.nii.gz")
print(f"Shape: {img.shape}, Spacing: {img.spacing}")

# Method chaining for transforms
processed = img.resample([1.0, 1.0, 1.0]).z_normalize().clamp(-1, 1)
processed.save("output.nii.gz")

# Load directly to PyTorch tensor (most efficient)
tensor = medrs.load_to_torch("brain.nii.gz", dtype=torch.float16, device="cuda")
```
Rust:
```rust
use medrs::nifti;
use medrs::transforms::{resample_to_spacing, Interpolation};

fn main() -> medrs::Result<()> {
    let img = nifti::load("brain.nii.gz")?;
    println!("Shape: {:?}, Spacing: {:?}", img.shape(), img.spacing());

    let resampled = resample_to_spacing(&img, [1.0, 1.0, 1.0], Interpolation::Trilinear)?;
    nifti::save(&resampled, "output.nii.gz")?;
    Ok(())
}
```
Build composable transform pipelines with lazy evaluation and automatic optimization:
Python:
```python
import medrs

# Create a reusable pipeline
pipeline = medrs.TransformPipeline()
pipeline.z_normalize()
pipeline.clamp(-1.0, 1.0)
pipeline.resample_to_shape([64, 64, 64])

# Apply to multiple images
for path in image_paths:
    img = medrs.load(path)
    processed = pipeline.apply(img)
```
Rust:
```rust
use medrs::pipeline::compose::TransformPipeline;

let pipeline = TransformPipeline::new()
    .z_normalize()
    .clamp(-1.0, 1.0)
    .resample_to_shape([64, 64, 64]);

let processed = pipeline.apply(&img);
```
Reproducible augmentations for ML training with optional seeding:
Python:
```python
import medrs

img = medrs.load("brain.nii.gz")

# Individual augmentations
flipped = medrs.random_flip(img, axes=[0, 1, 2], prob=0.5, seed=42)
noisy = medrs.random_gaussian_noise(img, std=0.1, seed=42)
scaled = medrs.random_intensity_scale(img, scale_range=0.1, seed=42)
shifted = medrs.random_intensity_shift(img, shift_range=0.1, seed=42)
rotated = medrs.random_rotate_90(img, axes=(0, 1), seed=42)
gamma = medrs.random_gamma(img, gamma_range=(0.7, 1.5), seed=42)

# Combined augmentation (flip + noise + scale + shift)
augmented = medrs.random_augment(img, seed=42)
```
Rust:
```rust
use medrs::transforms::{random_flip, random_gaussian_noise, random_augment};

// Individual augmentations
let flipped = random_flip(&img, &[0, 1, 2], Some(0.5), Some(42))?;
let noisy = random_gaussian_noise(&img, Some(0.1), Some(42))?;

// Combined augmentation
let augmented = random_augment(&img, Some(42))?;
```
Load only the data you need - essential for training pipelines:
```python
import medrs
import torch

# Load a 64³ patch starting at position (32, 32, 32)
patch = medrs.load_cropped("volume.nii", [32, 32, 32], [64, 64, 64])

# Load with resampling and reorientation in one step
patch = medrs.load_resampled(
    "volume.nii",
    output_shape=[64, 64, 64],
    target_spacing=[1.0, 1.0, 1.0],
    target_orientation="RAS"
)

# Load directly to GPU tensor
tensor = medrs.load_cropped_to_torch(
    "volume.nii",
    output_shape=[64, 64, 64],
    target_spacing=[1.0, 1.0, 1.0],
    dtype=torch.float16,
    device="cuda"
)
```
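Random patch sampling for training can be built directly on load_cropped. A minimal sketch, where the hard-coded volume_shape is a stand-in for however you obtain the volume's dimensions:

```python
import random
import medrs

volume_shape = [240, 240, 155]   # assumed known (e.g. from dataset metadata)
patch_size = [64, 64, 64]

def sample_patch(path, seed=None):
    rng = random.Random(seed)
    # Choose a random origin so the 64³ patch stays inside the volume.
    origin = [rng.randint(0, volume_shape[i] - patch_size[i]) for i in range(3)]
    return medrs.load_cropped(path, origin, patch_size)

patch = sample_patch("volume.nii", seed=42)
```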
High-performance patch extraction for training:
```python
import medrs

loader = medrs.TrainingDataLoader(
    volumes=["vol1.nii", "vol2.nii", "vol3.nii"],
    patch_size=[64, 64, 64],
    patches_per_volume=4,
    patch_overlap=[0, 0, 0],
    randomize=True,
    cache_size=1000
)

for patch in loader:
    # Training loop
    tensor = patch.to_torch()
```
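To show where the loader fits in practice, here is a hedged sketch of a PyTorch training step. The toy model, optimizer, and loss are placeholders; the only medrs calls are TrainingDataLoader and to_torch() from the snippet above, and the shape handling assumes to_torch() returns a single 3D volume:

```python
import medrs
import torch
import torch.nn as nn

# Toy 3D network and optimizer, purely for illustration.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

loader = medrs.TrainingDataLoader(
    volumes=["vol1.nii", "vol2.nii", "vol3.nii"],
    patch_size=[64, 64, 64],
    patches_per_volume=4,
    patch_overlap=[0, 0, 0],
    randomize=True,
    cache_size=1000
)

for patch in loader:
    # Assumes to_torch() yields a (64, 64, 64) tensor; add batch and channel dims.
    x = patch.to_torch().float().unsqueeze(0).unsqueeze(0)
    loss = model(x).mean()   # dummy objective for illustration
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```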
Available transforms (a chaining sketch follows below):

- z_normalize() / z_normalization() - Zero mean, unit variance
- rescale() / rescale_intensity() - Scale to [min, max] range
- clamp() - Clamp values to range
- resample() / resample_to_spacing() - Resample to target spacing
- resample_to_shape() - Resample to target shape
- reorient() - Reorient to standard orientation (RAS, LPS, etc.)
- crop_or_pad() - Crop or pad to target shape
- flip() - Flip along specified axes
- random_flip() - Random axis flipping
- random_gaussian_noise() - Additive Gaussian noise
- random_intensity_scale() - Random intensity scaling
- random_intensity_shift() - Random intensity offset
- random_rotate_90() - Random 90-degree rotations
- random_gamma() - Random gamma correction
- random_augment() - Combined augmentation pipeline

Internally, medrs uses several optimization strategies to keep these operations fast.
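To tie the list to the method-chaining style from the quick start, here is a hedged sketch combining a few of the non-random transforms; the exact argument forms of rescale(), reorient(), and crop_or_pad() are inferred from the descriptions above, not confirmed signatures:

```python
import medrs

img = medrs.load("brain.nii.gz")

# Inferred signatures (assumptions):
#   reorient("RAS")         - reorient to a standard orientation code
#   rescale(min, max)       - scale intensities into [min, max]
#   crop_or_pad([d, h, w])  - crop or pad to a target shape
processed = (
    img.reorient("RAS")
       .rescale(0.0, 1.0)
       .crop_or_pad([128, 128, 128])
)
processed.save("processed.nii.gz")
```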
See the examples/ directory for:
- basic/ - Loading, transforms, and saving
- integrations/ - PyTorch, MONAI, JAX integration
- advanced/ - Async pipelines, custom transforms

Run the tests and benchmarks:

```bash
# Rust tests
cargo test

# Python tests
pytest tests/

# Benchmarks (requires torch, monai, torchio)
python benchmarks/bench_medrs.py --quick
python benchmarks/bench_monai.py --quick
python benchmarks/bench_torchio.py --quick

# Generate benchmark plots
python benchmarks/plot_results.py
```
medrs is dual-licensed under MIT and Apache-2.0. See LICENSE for details.
See CONTRIBUTING.md for guidelines.
Liam Chalcroft (liam.chalcroft.20@ucl.ac.uk)