| Field | Value |
|---|---|
| Crates.io | scirs2-linalg |
| lib.rs | scirs2-linalg |
| version | 0.1.2 |
| created_at | 2025-04-12 13:21:20.192136+00 |
| updated_at | 2026-01-16 08:29:30.451077+00 |
| description | Linear algebra module for SciRS2 (scirs2-linalg) |
| homepage | |
| repository | https://github.com/cool-japan/scirs |
| max_upload_size | |
| id | 1630914 |
| size | 5,152,405 |
v0.1.0 - SciRS2 POLICY & Enhanced Performance
scirs2-linalg delivers comprehensive linear algebra functionality comparable to NumPy/SciPy's linalg module, providing a robust mathematical foundation for scientific computing, machine learning, and data analysis in Rust. Following the new SciRS2 POLICY, this release ensures ecosystem consistency through scirs2-core abstractions. With 549 passing tests and comprehensive feature coverage, this library is production-ready for demanding applications.
SciRS2 is designed for high performance with multiple optimization strategies:
📊 Performance Guides: See docs/PERFORMANCE_GUIDE.md for detailed benchmarks and docs/OPTIMIZATION_GUIDELINES.md for practical optimization strategies.
| Operation | Small Matrices | Medium Matrices | Large Matrices |
|---|---|---|---|
| Basic Ops | 0.1-1 μs | 10-100 μs | 1-10 ms |
| Decompositions | 1-10 μs | 100 μs-1 ms | 10-100 ms |
| Eigenvalues | 5-50 μs | 500 μs-5 ms | 50-500 ms |
For detailed performance analysis, benchmarking guides, and optimization tips, see the performance guides listed above.
Add scirs2-linalg to your Cargo.toml:
```toml
[dependencies]
scirs2-linalg = "0.1.2"
ndarray = "0.16.1"
```
For accelerated performance with OxiBLAS (Pure Rust BLAS/LAPACK):
```toml
[dependencies]
# OxiBLAS is automatically enabled with the linalg feature (Pure Rust, no system dependencies)
scirs2-linalg = { version = "0.1.2", features = ["linalg"] }

# Or, with SIMD acceleration as well:
# scirs2-linalg = { version = "0.1.2", features = ["linalg", "simd"] }
```
Note: v0.1.0+ uses OxiBLAS (Pure Rust) instead of system BLAS libraries (OpenBLAS/MKL/Accelerate). No C/Fortran compiler or system libraries required.
```rust
use ndarray::array;
use scirs2_linalg::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a symmetric matrix
    let a = array![[3.0, 1.0], [1.0, 2.0]];

    // Compute determinant
    let det = det(&a.view())?;
    println!("Determinant: {}", det);

    // Solve linear system Ax = b
    let b = array![9.0, 8.0];
    let x = solve(&a.view(), &b.view())?;
    println!("Solution: {:?}", x);

    // Compute eigenvalues and eigenvectors (symmetric solver)
    let (eigenvalues, eigenvectors) = eigh(&a.view())?;
    println!("Eigenvalues: {:?}", eigenvalues);
    println!("Eigenvectors: {:?}", eigenvectors);

    Ok(())
}
```
For users migrating from ndarray-linalg or familiar with Rust linear algebra:
```rust
use scirs2_linalg::compat::{self, UPLO};

// Simplified, Rust-idiomatic API (v0.2.0+)
let det = compat::det(&a.view())?;
let inv = compat::inv(&a.view())?;
let (u, s, vt) = compat::svd(&a.view(), true)?;
let (eigenvals, eigenvecs) = compat::eigh(&a.view(), UPLO::Lower)?;

// Or use the trait-based API
use scirs2_linalg::compat::ArrayLinalgExt;
let det = a.det()?;
let inv = a.inv()?;
```
Note: v0.1.0 used SciPy-style parameters (e.g., `det(&a, false, true)`), but these were simplified in v0.2.0 as most parameters were ignored in the Rust implementation.
```rust
use ndarray::array;
use scirs2_linalg::{lu, qr, svd, cholesky};

// Helper: element-wise approximate equality for float matrices
// (exact assert_eq! would fail on floating-point rounding).
fn approx_eq(x: &ndarray::Array2<f64>, y: &ndarray::Array2<f64>) -> bool {
    x.iter().zip(y.iter()).all(|(a, b)| (a - b).abs() < 1e-10)
}

// Create a symmetric positive definite test matrix
let a = array![[4.0, 2.0], [2.0, 5.0]];

// LU decomposition: P·A = L·U
let (p, l, u) = lu(&a.view())?;
assert!(approx_eq(&p.dot(&a), &l.dot(&u)));

// QR decomposition: A = Q·R
let (q, r) = qr(&a.view())?;
assert!(approx_eq(&a, &q.dot(&r)));

// Singular Value Decomposition
let (u, s, vt) = svd(&a.view(), false)?;
// Note: reconstruction requires proper diagonal matrix construction

// Cholesky decomposition (for positive definite matrices): A = L·L^T
let l = cholesky(&a.view())?;
assert!(approx_eq(&a, &l.dot(&l.t())));
```
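The SVD reconstruction mentioned above amounts to A ≈ U·diag(S)·Vᵀ. A minimal, dependency-free sketch of that product (plain `Vec`s are used here purely for illustration; with `ndarray` you would build `Array2::from_diag(&s)` and chain `.dot()` calls):

```rust
// Reconstruct A ≈ U · diag(S) · Vᵀ from SVD factors.
// u is m×k, s has k singular values, vt is k×n.
fn svd_reconstruct(u: &[Vec<f64>], s: &[f64], vt: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let (m, k, n) = (u.len(), s.len(), vt[0].len());
    let mut a = vec![vec![0.0; n]; m];
    for i in 0..m {
        for j in 0..n {
            // (U · diag(S) · Vᵀ)[i][j] = Σ_r u[i][r] · s[r] · vt[r][j]
            a[i][j] = (0..k).map(|r| u[i][r] * s[r] * vt[r][j]).sum();
        }
    }
    a
}

fn main() {
    // Trivial SVD of diag(3, 2): U = Vᵀ = I, S = [3, 2].
    let u = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let s = vec![3.0, 2.0];
    let vt = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let a = svd_reconstruct(&u, &s, &vt);
    assert!((a[0][0] - 3.0).abs() < 1e-12 && (a[1][1] - 2.0).abs() < 1e-12);
    println!("reconstructed: {:?}", a);
}
```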
```rust
use scirs2_linalg::{conjugate_gradient, gmres};

// Solve using conjugate gradient (for symmetric positive definite matrices)
let x = conjugate_gradient(&a.view(), &b.view(), 100, 1e-10)?;

// Solve using GMRES (for general matrices)
let x = gmres(&a.view(), &b.view(), 10, 100, 1e-10)?;
```
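The `max_iter` and tolerance arguments map directly onto the textbook conjugate gradient loop. A self-contained sketch of that algorithm in plain Rust (illustrative only, not the crate's implementation, which works on `ndarray` views):

```rust
// Textbook conjugate gradient for an SPD system A·x = b.
fn cg(a: &[Vec<f64>], b: &[f64], max_iter: usize, tol: f64) -> Vec<f64> {
    let n = b.len();
    let matvec = |x: &[f64]| -> Vec<f64> {
        a.iter()
            .map(|row| row.iter().zip(x).map(|(aij, xj)| aij * xj).sum())
            .collect()
    };
    let dot = |u: &[f64], v: &[f64]| u.iter().zip(v).map(|(ui, vi)| ui * vi).sum::<f64>();

    let mut x = vec![0.0; n];
    let mut r = b.to_vec(); // residual r = b - A·x, with x = 0
    let mut p = r.clone();  // search direction
    let mut rs_old = dot(&r, &r);
    for _ in 0..max_iter {
        if rs_old.sqrt() < tol {
            break; // residual norm below tolerance: converged
        }
        let ap = matvec(&p);
        let alpha = rs_old / dot(&p, &ap);
        for i in 0..n {
            x[i] += alpha * p[i];
            r[i] -= alpha * ap[i];
        }
        let rs_new = dot(&r, &r);
        let beta = rs_new / rs_old;
        for i in 0..n {
            p[i] = r[i] + beta * p[i];
        }
        rs_old = rs_new;
    }
    x
}

fn main() {
    // Same SPD system as the quick-start example: A = [[3,1],[1,2]], b = [9,8].
    let a = vec![vec![3.0, 1.0], vec![1.0, 2.0]];
    let x = cg(&a, &[9.0, 8.0], 100, 1e-10);
    // Exact solution is x = [2, 3]; CG converges in at most 2 steps here.
    assert!((x[0] - 2.0).abs() < 1e-8 && (x[1] - 3.0).abs() < 1e-8);
}
```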
```rust
use scirs2_linalg::matrix_functions::{expm, logm, sqrtm};
use scirs2_linalg::specialized::{TridiagonalMatrix, BandedMatrix};
use scirs2_linalg::structured::{ToeplitzMatrix, CirculantMatrix};

// Matrix functions
let exp_a = expm(&a.view())?;   // Matrix exponential
let log_a = logm(&a.view())?;   // Matrix logarithm
let sqrt_a = sqrtm(&a.view())?; // Matrix square root

// Specialized matrices for efficiency
let tridiag = TridiagonalMatrix::from_diagonals(&main_diag, &upper, &lower);
let x = tridiag.solve(&b)?;

// Structured matrices
let toeplitz = ToeplitzMatrix::from_column_and_row(&first_col, &first_row);
let result = toeplitz.matvec(&x)?;
```
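A Toeplitz matrix is fully determined by its first column and first row, which is why `from_column_and_row` needs only 2n−1 values for an n×n matrix. A plain-Rust sketch of the matvec this structure defines (illustrative; a structured implementation can do this faster than the dense O(n²) loop below):

```rust
// y = T·x where T is Toeplitz: T[i][j] = first_col[i - j] for i >= j,
// and first_row[j - i] otherwise. first_col[0] and first_row[0] coincide.
fn toeplitz_matvec(first_col: &[f64], first_row: &[f64], x: &[f64]) -> Vec<f64> {
    let n = first_col.len();
    (0..n)
        .map(|i| {
            (0..n)
                .map(|j| {
                    let t = if i >= j { first_col[i - j] } else { first_row[j - i] };
                    t * x[j]
                })
                .sum()
        })
        .collect()
}

fn main() {
    // T = [[1, 4, 5],
    //      [2, 1, 4],
    //      [3, 2, 1]]  from first_col = [1, 2, 3], first_row = [1, 4, 5]
    let y = toeplitz_matvec(&[1.0, 2.0, 3.0], &[1.0, 4.0, 5.0], &[1.0, 1.0, 1.0]);
    assert_eq!(y, vec![10.0, 7.0, 6.0]); // row sums of T
}
```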
```rust
use ndarray::Array3;
use scirs2_linalg::attention::{scaled_dot_product_attention, multi_head_attention};
use scirs2_linalg::batch::attention::batch_multi_head_attention;

// Attention mechanism for transformers
let batch_size = 2;
let seq_len = 4;
let d_model = 64;

// Create query, key, value tensors
let query = Array3::from_shape_fn((batch_size, seq_len, d_model), |_| rand::random());
let key = Array3::from_shape_fn((batch_size, seq_len, d_model), |_| rand::random());
let value = Array3::from_shape_fn((batch_size, seq_len, d_model), |_| rand::random());

// Scaled dot-product attention
let scale = 1.0 / (d_model as f32).sqrt();
let output = scaled_dot_product_attention(
    &query.view(),
    &key.view(),
    &value.view(),
    None, // Optional mask
    scale,
)?;

// Multi-head attention
let num_heads = 8;
let output = multi_head_attention(
    &query.view(),
    &key.view(),
    &value.view(),
    num_heads,
    None, // Optional mask
)?;
```
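For reference, the computation above is the standard scaled dot-product attention, with the `scale` argument playing the role of the usual 1/√d factor:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left( s \, Q K^{\top} \right) V,
\qquad s = \frac{1}{\sqrt{d_{\mathrm{model}}}}
```

Multi-head attention applies this same formula independently on `num_heads` slices of the model dimension and concatenates the results.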
```rust
use ndarray::Array2;
use scirs2_linalg::quantization::{
    quantize_matrix, dequantize_matrix, quantized_matmul,
    fusion::fused_quantized_matmul_chain,
    quantized_matrixfree::QuantizedMatrixFreeOp,
    stability::analyze_quantization_stability,
    calibration::{calibrate_matrix, CalibrationConfig, CalibrationMethod},
};

// Create a neural network weight matrix
let weights = Array2::from_shape_fn((1024, 1024), |_| rand::random::<f32>() * 0.1);
let activations = Array2::from_shape_fn((32, 1024), |_| rand::random::<f32>());

// Calibrate quantization parameters using advanced methods
let weight_config = CalibrationConfig {
    method: CalibrationMethod::EntropyCalibration,
    symmetric: true,
    num_bins: 2048,
    ..Default::default()
};

// Dynamic calibration for activations that change over time
let activation_config = CalibrationConfig {
    method: CalibrationMethod::ExponentialMovingAverage,
    symmetric: false, // Asymmetric for ReLU activations
    ema_factor: 0.1,
    ..Default::default()
};

// Calibrate and quantize weights (to 8-bit)
let weight_params = calibrate_matrix(&weights.view(), 8, &weight_config)?;
let (quantized_weights, _) = quantize_matrix(&weights.view(), 8, weight_params.method);

// Calibrate and quantize activations
let activation_params = calibrate_matrix(&activations.view(), 8, &activation_config)?;
let (quantized_activations, _) = quantize_matrix(&activations.view(), 8, activation_params.method);

// Perform matrix multiplication with quantized matrices
let result = quantized_matmul(
    &quantized_weights,
    &weight_params,
    &quantized_activations,
    &activation_params,
)?;

// Calculate quantization error
let reference = activations.dot(&weights.t());
let rel_error = (&reference - &result).mapv(|x| x.abs()).sum()
    / reference.mapv(|x| x.abs()).sum();
println!("Relative Error: {:.6}%", rel_error * 100.0);
println!("Memory Reduction: {:.1}%", (1.0 - 8.0 / 32.0) * 100.0);
```
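To make the asymmetric calibration above concrete, here is a self-contained sketch of the general affine (asymmetric) 8-bit quantization scheme: a scale and zero point map a float range onto integer codes, and dequantization recovers each value to within scale/2. This illustrates the idea only; the crate's `quantize_matrix` uses its own parameter structs and calibration methods.

```rust
// Affine 8-bit quantization: q = round(x / scale) + zero_point in [0, 255],
// dequantized as x ≈ (q - zero_point) * scale.
fn quantize(x: &[f32], bits: u32) -> (Vec<u8>, f32, i32) {
    let (min, max) = x
        .iter()
        .fold((f32::MAX, f32::MIN), |(lo, hi), &v| (lo.min(v), hi.max(v)));
    let levels = ((1u32 << bits) - 1) as f32; // 255 for 8-bit
    let scale = (max - min) / levels;
    let zero_point = (-min / scale).round() as i32; // integer code for x = 0.0
    let q = x
        .iter()
        .map(|&v| ((v / scale).round() as i32 + zero_point).clamp(0, 255) as u8)
        .collect();
    (q, scale, zero_point)
}

fn dequantize(q: &[u8], scale: f32, zero_point: i32) -> Vec<f32> {
    q.iter().map(|&v| (v as i32 - zero_point) as f32 * scale).collect()
}

fn main() {
    let x = [-1.0f32, -0.5, 0.0, 0.5, 1.0];
    let (q, scale, zp) = quantize(&x, 8);
    let x_hat = dequantize(&q, scale, zp);
    // Round-trip error is bounded by scale / 2 per element.
    for (a, b) in x.iter().zip(&x_hat) {
        assert!((a - b).abs() <= scale / 2.0 + 1e-6);
    }
    println!("q = {:?}, scale = {}, zero_point = {}", q, scale, zp);
}
```

The 4× memory reduction printed by the example above falls out of this scheme directly: each `f32` (32 bits) is replaced by one `u8` code (8 bits), plus a single shared scale and zero point per tensor.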
v0.1.0+ uses OxiBLAS (Pure Rust BLAS/LAPACK):
```toml
# OxiBLAS - Pure Rust implementation (default, all platforms)
scirs2-linalg = { version = "0.1.2", features = ["linalg"] }

# With SIMD acceleration
scirs2-linalg = { version = "0.1.2", features = ["linalg", "simd"] }
```
Legacy backend features (removed in v0.1.0): `openblas`, `mkl`, `netlib`, `accelerate`. Use `features = ["simd"]` instead.
Production-validated performance (1000×1000 matrices, optimized builds, v0.1.0+):
| Operation | OxiBLAS (Pure Rust) | OxiBLAS + SIMD | Status |
|---|---|---|---|
| Matrix Multiply | 245ms | 89ms | ✅ Production |
| LU Decomposition | 185ms | 165ms | ✅ Production |
| SVD | 892ms | 820ms | ✅ Production |
| Eigenvalues | 1.2s | 1.1s | ✅ Production |
Note: OpenBLAS/MKL backends were removed in v0.1.0. OxiBLAS provides competitive performance with zero system dependencies.
Performance is competitive with industry-standard libraries and ready for production deployment.
The library uses a comprehensive error system:
```rust
use scirs2_linalg::{inv, LinalgError, LinalgResult};

match inv(&singular_matrix.view()) {
    Ok(_inverse) => println!("Inverse computed"),
    Err(LinalgError::SingularMatrixError(msg)) => println!("Matrix is singular: {}", msg),
    Err(e) => println!("Other error: {}", e),
}
```
✅ Comprehensive Implementation: All major linear algebra operations implemented and tested
✅ Performance Optimized: OxiBLAS (Pure Rust BLAS/LAPACK) integration with SIMD acceleration
✅ API Stable: Backward compatible with comprehensive error handling
✅ Test Coverage: 549 tests with 100% pass rate ensuring reliability
✅ Documentation: Complete API documentation with examples and guides
🚀 Deployment Ready: This library is suitable for production use in scientific computing, machine learning frameworks, and high-performance numerical applications.
For detailed feature status, see TODO.md.
Contributions are welcome! Please see our contributing guidelines.
Current priorities for v0.1.0 stable:
Future enhancements (post-v0.1.0):
This project is dual-licensed under:
You can choose to use either license. See the LICENSE file for details.
This library is inspired by NumPy and SciPy's excellent linear algebra implementations. We aim to bring similar functionality to the Rust ecosystem while leveraging Rust's performance and safety guarantees.