| Crates.io | biometal |
|---|---|
| lib.rs | biometal |
| version | 1.10.0 |
| created_at | 2025-11-05 19:46:37.441729+00 |
| updated_at | 2025-11-15 01:58:51.365466+00 |
| description | ARM-native bioinformatics library with streaming architecture and evidence-based optimization |
| homepage | https://github.com/shandley/biometal |
| repository | https://github.com/shandley/biometal |
| max_upload_size | |
| id | 1918501 |
| size | 2,309,603 |
ARM-native bioinformatics library with streaming architecture and evidence-based optimization
Stream data directly from networks and analyze terabyte-scale datasets on consumer hardware without downloading.
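As an illustration of the streaming model (not biometal's implementation), a minimal pure-Python FASTQ reader that holds only one record in memory at a time might look like this:

```python
import io

def stream_fastq(handle):
    """Yield (name, sequence, quality) one record at a time.

    Memory stays constant regardless of input size, because only
    the current 4-line record is ever held in memory.
    """
    while True:
        name = handle.readline().rstrip()
        if not name:
            return  # end of input
        seq = handle.readline().rstrip()
        handle.readline()  # '+' separator line
        qual = handle.readline().rstrip()
        yield (name[1:], seq, qual)  # strip leading '@'

# Two records, parsed lazily
data = io.StringIO("@r1\nACGT\n+\nIIII\n@r2\nGGCC\n+\nJJJJ\n")
records = list(stream_fastq(data))
```

biometal applies the same one-record-at-a-time principle, but with compiled parsers, BGZF decompression, and network-backed readers.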
biometal now supports 12+ bioinformatics file formats with production-ready streaming parsers:
Sequences & Reads:
Annotations & Features:
Variants & Alignments:
Graphs & Assembly:
Indices:
All formats support:

- Gzip-compressed input (`.gz` files)

Rust:
```toml
[dependencies]
biometal = "1.10"
```
Python:
```bash
pip install biometal-rs                                      # Install
python -c "import biometal; print(biometal.__version__)"     # Test
```
Note: The package is `biometal-rs` on PyPI, but it imports as `biometal` in Python.
Rust:
```rust
use biometal::FastqStream;

// Stream FASTQ with constant memory (~5 MB)
let stream = FastqStream::from_path("dataset.fq.gz")?;
for record in stream {
    let record = record?;
    // Process one record at a time
}
```
Python:
```python
import biometal

# Stream FASTQ with constant memory (~5 MB)
stream = biometal.FastqStream.from_path("dataset.fq.gz")
for record in stream:
    # ARM NEON accelerated (16-25× speedup)
    gc = biometal.gc_content(record.sequence)
    counts = biometal.count_bases(record.sequence)
    mean_q = biometal.mean_quality(record.quality)
```
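For reference, here is a pure-Python sketch of what `gc_content` computes (the G+C fraction of a sequence) without the NEON acceleration; the byte-string interface is an assumption for illustration:

```python
def gc_content(sequence: bytes) -> float:
    """Fraction of G/C bases in a sequence (pure-Python reference)."""
    if not sequence:
        return 0.0
    # Iterating bytes yields integer code points; membership in b"GC"
    # matches the byte values for 'G' and 'C'
    gc = sum(1 for base in sequence.upper() if base in b"GC")
    return gc / len(sequence)
```

The accelerated version produces the same result but processes 16 bases per NEON instruction instead of one per loop iteration.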
Learn biometal through hands-on Jupyter notebooks (6 notebooks, ~3.5 hours total):
| Notebook | Duration | Topics |
|---|---|---|
| 01. Getting Started | 15-20 min | Streaming, GC content, quality analysis |
| 02. Quality Control | 30-40 min | Trimming, filtering, masking (v1.2.0) |
| 03. K-mer Analysis | 30-40 min | ML preprocessing, DNABert (v1.1.0) |
| 04. Network Streaming | 30-40 min | HTTP streaming, public data (v1.0.0) |
| 05. BAM Alignment Analysis | 30-40 min | BAM parsing, 4× speedup, filtering (v1.2.0+) |
| 06. BAM Production Workflows | 45-60 min | Tag parsing, QC statistics, production pipelines (v1.4.0) |
New in v1.4.0: tag convenience methods (`edit_distance()`, `alignment_score()`, `read_group()`, etc.) and statistics functions (`insert_size_distribution()`, `edit_distance_stats()`, `strand_bias()`, `alignment_length_distribution()`).

| Operation | Scalar | Optimized | Speedup |
|---|---|---|---|
| Base counting | 315 Kseq/s | 5,254 Kseq/s | 16.7× (NEON) |
| GC content | 294 Kseq/s | 5,954 Kseq/s | 20.3× (NEON) |
| Quality filter | 245 Kseq/s | 6,143 Kseq/s | 25.1× (NEON) |
| BAM parsing | ~11 MiB/s | 92.0 MiB/s | 8.4× (BGZF + NEON + cloudflare_zlib v1.7.0) |
| Dataset Size | Traditional | biometal | Reduction |
|---|---|---|---|
| 100K sequences | 134 MB | 5 MB | 96.3% |
| 1M sequences | 1,344 MB | 5 MB | 99.5% |
| 5TB dataset | 5,000 GB | 5 MB | 99.9999% |
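The reduction column follows directly from the constant ~5 MB streaming footprint; the arithmetic can be checked in a couple of lines:

```python
def reduction_pct(traditional_mb: float, streaming_mb: float = 5.0) -> float:
    """Percent memory saved by streaming vs. loading the full dataset."""
    return 100.0 * (1.0 - streaming_mb / traditional_mb)

# 100K sequences: 134 MB traditional vs ~5 MB streaming
print(f"{reduction_pct(134):.1f}%")  # → 96.3%
```

For the 5 TB case (5,000,000 MB), the same formula gives a reduction above 99.999%.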
📊 Comprehensive Benchmark Comparison vs samtools/pysam →
| Platform | Performance | Tests | Status |
|---|---|---|---|
| Mac ARM (M1-M4) | 16-25× speedup | ✅ 551/551 | Optimized |
| AWS Graviton | 6-10× speedup | ✅ 551/551 | Portable |
| Linux x86_64 | 1× (scalar) | ✅ 551/551 | Portable |
Test count: 551 library tests (including 65 new tests for GTF, PAF, narrowPeak) + 23 property-based tests
biometal's design is grounded in comprehensive experimental validation:
- v1.0.0 (Released Nov 5, 2025) ✅ - Core library + network streaming
- v1.1.0 (Released Nov 6, 2025) ✅ - K-mer operations
- v1.2.0 (Released Nov 6, 2025) ✅ - Python bindings for Phase 4 QC
- BAM/SAM (Integrated Nov 8, 2025) ✅ - Native streaming alignment parser with parallel BGZF (4× speedup)
- v1.3.0 (Released Nov 9, 2025) ✅ - Python BAM bindings with CIGAR operations and SAM writing
- v1.4.0 (Released Nov 9, 2025) ✅ - BAM tag convenience methods and statistics functions
- v1.5.0 (Released Nov 9, 2025) ✅ - ARM NEON sequence decoding (+27.5% BAM parsing speedup)
- v1.6.0 (Released Nov 10, 2025) ✅ - BAI index support (indexed region queries, 1.68-500× speedup)
- v1.7.0 (Released Nov 13, 2025) ✅ - cloudflare_zlib backend (1.67× decompression, 2.29× compression speedups)
- v1.8.0 (Released Nov 13, 2025) ✅ - Format library (BED, GFA, VCF, GFF3) with property-based testing
Next (Planned):
Future (Community Driven):
See CHANGELOG.md for detailed release notes.
biometal addresses barriers that lock researchers out of genomics:
```python
import biometal

stream = biometal.FastqStream.from_path("raw_reads.fq.gz")
for record in stream:
    # Trim low-quality ends
    trimmed = biometal.trim_quality_window(record, min_quality=20, window_size=4)

    # Length filter
    if biometal.meets_length_requirement(trimmed, min_len=50, max_len=150):
        # Mask remaining low-quality bases
        masked = biometal.mask_low_quality(trimmed, min_quality=20)

        # Check masking rate
        mask_rate = biometal.count_masked_bases(masked) / len(masked.sequence)
        if mask_rate < 0.1:
            # Pass QC - process further
            pass
```
```python
import biometal

# Extract k-mers for DNABert preprocessing
stream = biometal.FastqStream.from_path("dataset.fq.gz")
for record in stream:
    # Extract overlapping k-mers (k=6 typical for DNABert)
    kmers = biometal.extract_kmers(record.sequence, k=6)

    # Format for transformer models
    kmer_string = " ".join(kmer.decode() for kmer in kmers)

    # Feed to DNABert - constant memory! (model defined elsewhere)
    model.process(kmer_string)
```
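To make "overlapping k-mers" concrete, a pure-Python equivalent of the extraction step (a sliding window of width k, advancing one base at a time) would be:

```python
def extract_kmers(sequence: bytes, k: int) -> list[bytes]:
    """All overlapping k-mers of a sequence (pure-Python reference)."""
    # A sequence of length n yields n - k + 1 overlapping windows
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

kmers = extract_kmers(b"ACGTAC", k=3)
# Windows: ACG, CGT, GTA, TAC
kmer_string = " ".join(kmer.decode() for kmer in kmers)
```

This space-joined form matches the token format DNABert-style models expect.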
```python
import biometal

# Stream from HTTP without downloading
# Works with ENA, S3, GCS, Azure public data
url = "https://example.com/dataset.fq.gz"
stream = biometal.FastqStream.from_path(url)

for record in stream:
    # Analyze directly - no download needed!
    # Memory: constant ~5 MB
    gc = biometal.gc_content(record.sequence)
```
```python
import biometal

# Stream BAM file with constant memory (~5 MB)
reader = biometal.BamReader.from_path("alignments.bam")
for record in reader:
    # Access alignment details
    print(f"{record.name}: MAPQ={record.mapq}, pos={record.position}")

    # NEW v1.4.0: Tag convenience methods
    edit_dist = record.edit_distance()      # NM tag
    align_score = record.alignment_score()  # AS tag
    read_group = record.read_group()        # RG tag
    print(f"  Edit distance: {edit_dist}, Score: {align_score}, RG: {read_group}")

    # CIGAR operations (v1.3.0)
    for op in record.cigar:
        if op.is_insertion() and op.length >= 5:
            print(f"  Found {op.length}bp insertion")

# NEW v1.4.0: Built-in statistics functions

# Insert size distribution (paired-end QC)
dist = biometal.insert_size_distribution("alignments.bam")
print(f"Mean insert size: {sum(s*c for s, c in dist.items()) / sum(dist.values()):.1f}bp")

# Edit distance statistics (alignment quality)
stats = biometal.edit_distance_stats("alignments.bam")
print(f"Mean edit distance: {stats['mean']:.2f} mismatches/read")

# Strand bias (variant calling QC)
bias = biometal.strand_bias("alignments.bam", reference_id=0, position=1000)
print(f"Strand bias at chr1:1000: {bias['ratio']:.2f}:1")

# Alignment length distribution (RNA-seq QC)
lengths = biometal.alignment_length_distribution("alignments.bam")
print(f"Intron-spanning reads: {sum(c for l, c in lengths.items() if l > 1000)}")
```
```python
import biometal
from collections import defaultdict

# Load BAI index for fast random access
index = biometal.BaiIndex.from_path("alignments.bam.bai")

# Query specific genomic region (1.68× faster than full scan for small files)
# Speedup increases dramatically with file size (10-500× for 1-10 GB files)
for record in biometal.BamReader.query_region(
    "alignments.bam",
    index,
    "chr1",
    1000000,  # start position
    2000000,  # end position
):
    # Only reads overlapping the region are returned
    if record.is_mapped and record.mapq >= 30:
        print(f"{record.name}: {record.position}-{record.reference_end()}")

# Reuse index for multiple queries (index loading: <1ms overhead)
regions = [
    ("chr1", 1000000, 2000000),
    ("chr1", 5000000, 6000000),
    ("chr2", 100000, 200000),
]
for chrom, start, end in regions:
    count = sum(1 for _ in biometal.BamReader.query_region(
        "alignments.bam", index, chrom, start, end
    ))
    print(f"{chrom}:{start}-{end}: {count} reads")

# Full workflow: coverage calculation for a specific region
coverage = defaultdict(int)
for record in biometal.BamReader.query_region(
    "alignments.bam", index, "chr1", 1000, 2000
):
    if record.is_mapped and record.position is not None:
        # Calculate coverage from CIGAR
        pos = record.position
        for op in record.cigar:
            if op.consumes_reference():
                for i in range(op.length):
                    coverage[pos] += 1
                    pos += 1

print(f"Mean coverage: {sum(coverage.values())/len(coverage):.1f}×")
```
Performance Characteristics:
```python
import biometal

# BED: Parse genomic intervals (ChIP-seq peaks, gene annotations)
stream = biometal.Bed6Stream.from_path("peaks.bed.gz")
for record in stream:
    print(f"{record.chrom}:{record.start}-{record.end} score={record.score}")
    length = record.length()
    if length > 1000:
        print(f"  Long peak: {length}bp")

# GFA: Parse assembly graphs (genome assembly, pangenomes)
stream = biometal.GfaStream.from_path("assembly.gfa")
segments = []
for record in stream:
    if isinstance(record, biometal.GfaSegment):
        segments.append(record)
        print(f"Segment {record.name}: {len(record.sequence)}bp")

# VCF: Parse genetic variants (SNPs, indels)
stream = biometal.VcfStream.from_path("variants.vcf.gz")
header = stream.header()  # Note: header() not parse_header()
print(f"VCF version: {header.fileformat}, Samples: {len(header.samples)}")
for variant in stream:
    if variant.quality and variant.quality > 30:
        print(f"{variant.chrom}:{variant.pos} {variant.reference}→{variant.alternate[0]}")
        if variant.is_snp():
            print(f"  SNP with quality {variant.quality}")

# GFF3: Parse hierarchical gene annotations (genes, mRNAs, exons, CDS)
stream = biometal.Gff3Stream.from_path("annotations.gff3.gz")
for feature in stream:
    if feature.feature_type == "gene":
        gene_id = feature.get_id()
        length = feature.length()  # 1-based inclusive coordinates
        print(f"Gene {gene_id}: {length}bp on {feature.strand}")
    elif feature.feature_type == "exon":
        parent = feature.get_parent()
        # Note: interval() method not available in Python bindings
        # Use feature.start and feature.end directly (1-based inclusive)
        print(f"  Exon of {parent}: {feature.start}-{feature.end}")
```
Format Library Features:
Q: Why `biometal-rs` on PyPI but `biometal` everywhere else?
A: The `biometal` name was taken on PyPI, so we use `biometal-rs` for installation. You still import it as `import biometal`.
Q: What platforms are supported?
A: Mac ARM (optimized), Linux ARM/x86_64 (portable). Pre-built wheels are provided for common platforms. See docs/CROSS_PLATFORM_TESTING.md.
Q: Why ARM-native?
A: To democratize bioinformatics by enabling world-class performance on consumer hardware ($1,400 MacBooks vs. $50,000 servers).
More questions? See FAQ.md
We welcome contributions! See CLAUDE.md for development guidelines.
biometal is built on evidence-based optimization - new features should:
Licensed under either of:
at your option.
If you use biometal in your research:
```bibtex
@software{biometal2025,
  author = {Handley, Scott},
  title = {biometal: ARM-native bioinformatics with streaming architecture},
  year = {2025},
  url = {https://github.com/shandley/biometal}
}
```
For the experimental methodology:
```bibtex
@misc{asbb2025,
  author = {Handley, Scott},
  title = {Apple Silicon Bio Bench: Systematic Hardware Characterization},
  year = {2025},
  url = {https://github.com/shandley/apple-silicon-bio-bench}
}
```
Status: v1.10.0 released 🚀
Latest: GTF + PAF + narrowPeak parsers with optimized Python bindings (Nov 14, 2025)
Tests: 551 library tests passing (including 65 new format tests) + 23 property-based
Performance: 5.82M records/sec, 92.0 MiB/s throughput, 50-60% Python memory reduction
Python Functions: 70+ (FASTQ/FASTA, BAM/BAI, BED/narrowPeak, GFA, VCF, GFF3, GTF, PAF)
Evidence Base: 1,357 experiments, 40,710 measurements