nalgebra-sparse-linalg


High-performance sparse linear algebra algorithms for Rust - Iterative solvers, matrix decompositions, and numerical methods for large-scale sparse matrices using nalgebra_sparse.

Overview

nalgebra-sparse-linalg provides efficient numerical algorithms for sparse linear algebra computations in Rust. Built on top of nalgebra_sparse, this library offers:

  • Fast iterative solvers for large sparse linear systems
  • Matrix decompositions including truncated SVD for dimensionality reduction
  • Memory-efficient algorithms optimized for sparse matrices
  • Generic implementations supporting multiple precision types (f32, f64)
  • Production-ready with comprehensive test coverage

Perfect for scientific computing, machine learning, data analysis, and numerical simulation applications.

Features

Iterative Linear System Solvers

  • Jacobi - Simple and parallelizable iterative solver
  • Gauss-Seidel - Typically converges faster than Jacobi
  • Successive Over-Relaxation (SOR) - Accelerated convergence with relaxation parameter
  • Conjugate Gradient (CG) - Optimal for symmetric positive-definite matrices
  • BiConjugate Gradient (BiCG) - General solver for non-symmetric systems
  • Algebraic Multigrid (AMG) - Advanced multigrid solver for large-scale problems

Matrix Decompositions

  • Truncated SVD - Randomized algorithm for efficient singular value decomposition
  • Dimensionality reduction for large-scale data analysis
  • Low-rank approximations for matrix compression

⚡ Performance Features

  • CSR and CSC matrix support - Industry-standard sparse formats (see the construction sketch after this list)
  • Memory-efficient algorithms - Minimal memory footprint
  • Parallel computation - Multi-threaded where applicable
  • Generic scalar types - Support for f32, f64, and complex numbers
  • Zero-copy operations - Efficient memory usage
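
As referenced above, a minimal sketch of working with the two formats using only nalgebra_sparse APIs: entries are collected in coordinate (triplet) form and converted once to CSR or CSC.

use nalgebra_sparse::{CooMatrix, CscMatrix, CsrMatrix};

// Collect entries in coordinate (triplet) form.
let mut coo = CooMatrix::new(3, 3);
coo.push(0, 0, 2.0);
coo.push(1, 1, 3.0);
coo.push(2, 0, 1.0);
coo.push(2, 2, 4.0);

// Row-major CSR: efficient row access and matrix-vector products.
let csr = CsrMatrix::from(&coo);
// Column-major CSC: efficient column access.
let csc = CscMatrix::from(&coo);
assert_eq!(csr.nnz(), 4);
assert_eq!(csc.nnz(), 4);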

Quick Start

Add this to your Cargo.toml:

[dependencies]
nalgebra-sparse-linalg = "0.1"
nalgebra-sparse = "0.9"

For advanced multigrid methods, enable the AMG feature:

[dependencies]
nalgebra-sparse-linalg = { version = "0.1", features = ["amg"] }
nalgebra-sparse = "0.9"

Solving Linear Systems

use nalgebra_sparse::{na::DVector, CsrMatrix};
use nalgebra_sparse_linalg::iteratives::conjugate_gradient::solve;

// Create a sparse matrix and right-hand side
let a = CsrMatrix::identity(1000);
let b = DVector::from_vec(vec![1.0; 1000]);

// Solve Ax = b
let solution = solve(&a, &b, 1000, 1e-10).unwrap();

Truncated SVD for Dimensionality Reduction

use nalgebra_sparse::CsrMatrix;
use nalgebra_sparse_linalg::svd::TruncatedSVD;

// Create or load your data matrix
let matrix = CsrMatrix::from(/* your sparse matrix */);

// Compute top 50 singular vectors and values
let svd = TruncatedSVD::new(&matrix, 50);

// Access results
println!("Singular values: {:?}", svd.singular_values);
println!("Left singular vectors shape: {:?}", svd.u.shape());

Examples

Linear System Solvers

Jacobi Method

use nalgebra_sparse::{na::DVector, CsrMatrix};
use nalgebra_sparse_linalg::iteratives::jacobi::solve;

let a = CsrMatrix::identity(3);
let b = DVector::from_vec(vec![1.0, 2.0, 3.0]);
let result = solve(&a, &b, 100, 1e-10);
assert!(result.is_some());
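
The identity system above is only a smoke test. A slightly more realistic sketch, assuming the same solve signature, assembles a strictly diagonally dominant tridiagonal system (a regime where Jacobi is guaranteed to converge) from triplets:

use nalgebra_sparse::{na::DVector, CooMatrix, CsrMatrix};
use nalgebra_sparse_linalg::iteratives::jacobi::solve;

// Strictly diagonally dominant tridiagonal matrix:
// 4 on the diagonal, -1 on both off-diagonals.
let n = 100;
let mut coo = CooMatrix::new(n, n);
for i in 0..n {
    coo.push(i, i, 4.0);
    if i > 0 {
        coo.push(i, i - 1, -1.0);
    }
    if i + 1 < n {
        coo.push(i, i + 1, -1.0);
    }
}
let a = CsrMatrix::from(&coo);
let b = DVector::from_element(n, 1.0);

// Same call shape as above: (matrix, rhs, max_iterations, tolerance).
let result = solve(&a, &b, 1000, 1e-10);
assert!(result.is_some());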

Gauss-Seidel Method

use nalgebra_sparse::{na::DVector, CsrMatrix};
use nalgebra_sparse_linalg::iteratives::gauss_seidel::solve;

let a = CsrMatrix::identity(3);
let b = DVector::from_vec(vec![1.0, 2.0, 3.0]);
let result = solve(&a, &b, 100, 1e-10);
assert!(result.is_some());

Successive Over-Relaxation (SOR)

use nalgebra_sparse::{na::DVector, CsrMatrix};
use nalgebra_sparse_linalg::iteratives::relaxation::solve;

let a = CsrMatrix::identity(3);
let b = DVector::from_vec(vec![1.0, 2.0, 3.0]);
let omega = 0.8; // Relaxation parameter: convergence requires 0 < omega < 2 (omega = 1 reduces SOR to Gauss-Seidel)
let result = solve(&a, &b, 100, omega, 1e-10);
assert!(result.is_some());

Conjugate Gradient

use nalgebra_sparse::{na::DVector, CsrMatrix};
use nalgebra_sparse_linalg::iteratives::conjugate_gradient::solve;

// Works with both CSR and CSC matrices
let a = CsrMatrix::identity(3);
let b = DVector::from_vec(vec![2.0, 4.0, 6.0]);
let result = solve(&a, &b, 100, 1e-10);
assert!(result.is_some());
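
Since the comment above states the solver works with both formats, the CSC call looks the same apart from the matrix type; a minimal sketch:

use nalgebra_sparse::{na::DVector, CscMatrix};
use nalgebra_sparse_linalg::iteratives::conjugate_gradient::solve;

let a = CscMatrix::identity(3);
let b = DVector::from_vec(vec![2.0, 4.0, 6.0]);
let result = solve(&a, &b, 100, 1e-10);
assert!(result.is_some());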

BiConjugate Gradient

use nalgebra_sparse::{na::DVector, CsrMatrix};
use nalgebra_sparse_linalg::iteratives::biconjugate_gradient::solve;

// Suitable for non-symmetric matrices
let a = CsrMatrix::identity(3);
let b = DVector::from_vec(vec![2.0, 4.0, 6.0]);
let result = solve(&a, &b, 100, 1e-10);
assert!(result.is_some());

Algebraic Multigrid (AMG)

Note: Requires the amg feature flag

use nalgebra_sparse::{na::DVector, CsrMatrix};
use nalgebra_sparse_linalg::iteratives::amg::solve_amg;

// AMG is particularly effective for problems arising from PDEs
// Create a sparse matrix (e.g., from finite difference discretization)
let a = /* your sparse matrix from PDE discretization */;
let b = DVector::from_vec(vec![1.0; a.nrows()]);

// AMG parameters: max_iterations, tolerance, strength_threshold
let result = solve_amg(&a, &b, 100, 1e-10, 0.25);
assert!(result.is_some());
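
As a concrete stand-in for the placeholder matrix above, the sketch below assembles the standard 1D Poisson finite-difference stencil [-1, 2, -1] and passes it to solve_amg with the same parameters (assumes the amg feature is enabled):

use nalgebra_sparse::{na::DVector, CooMatrix, CsrMatrix};
use nalgebra_sparse_linalg::iteratives::amg::solve_amg;

// 1D Poisson problem -u'' = f discretized on a uniform grid.
let n = 1_000;
let mut coo = CooMatrix::new(n, n);
for i in 0..n {
    coo.push(i, i, 2.0);
    if i > 0 {
        coo.push(i, i - 1, -1.0);
    }
    if i + 1 < n {
        coo.push(i, i + 1, -1.0);
    }
}
let a = CsrMatrix::from(&coo);
let b = DVector::from_element(n, 1.0);

// max_iterations, tolerance, strength_threshold (as above).
let result = solve_amg(&a, &b, 100, 1e-10, 0.25);
assert!(result.is_some());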

Advanced AMG Usage

Note: Requires the amg feature flag

use nalgebra_sparse::{na::DVector, CsrMatrix};
use nalgebra_sparse_linalg::iteratives::amg::Amg;
use nalgebra_sparse_linalg::iteratives::IterativeSolver;

// Create AMG solver with custom parameters
let mut amg_solver = Amg::new(
    1e-10,  // tolerance
    0.25,   // strength threshold
    100     // max iterations
);

// Initialize with matrix and RHS
amg_solver.init(&a, &b, None);

// Solve iteratively
let converged = amg_solver.solve_iterations(&a, &b, 100);
if converged {
    let solution = amg_solver.solution().clone();
    println!("Converged in {} iterations", amg_solver.iterations());
}

Matrix Decompositions

Truncated SVD for Large Matrices

use nalgebra_sparse::{CsrMatrix, na::DMatrix};
use nalgebra_sparse_linalg::svd::TruncatedSVD;

// Create a large data matrix (e.g., document-term matrix)
let dense_matrix = DMatrix::from_row_slice(1000, 500, &[/* your data */]);
let sparse_matrix = CsrMatrix::from(&dense_matrix);

// Compute top 100 components for dimensionality reduction
let svd = TruncatedSVD::new(&sparse_matrix, 100);

// The result contains:
// - svd.u: Left singular vectors (1000 x 100)
// - svd.singular_values: Singular values in descending order (100)

Principal Component Analysis (PCA) Example

use nalgebra_sparse::{CsrMatrix, na::DMatrix};
use nalgebra_sparse_linalg::svd::TruncatedSVD;

// Center your data matrix (subtract mean)
let centered_data = /* your centered data matrix */;
let sparse_data = CsrMatrix::from(&centered_data);

// Compute principal components
let n_components = 10;
let svd = TruncatedSVD::new(&sparse_data, n_components);

// svd.u contains the principal component loadings
// svd.singular_values: squaring each value and dividing by (n_samples - 1)
// gives the variance explained by the corresponding component
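
To turn singular values into per-component explained-variance ratios, a hypothetical helper (not part of this crate) could look like the sketch below. It assumes the singular values are available as a slice of f64 in descending order and that the centered data had n_samples rows; the ratios are relative to the retained components only, since a truncated SVD never sees the discarded ones.

// Hypothetical helper, not provided by nalgebra-sparse-linalg.
fn explained_variance_ratio(singular_values: &[f64], n_samples: usize) -> Vec<f64> {
    // Variance explained by each component: sigma_i^2 / (n_samples - 1).
    let variances: Vec<f64> = singular_values
        .iter()
        .map(|s| s * s / (n_samples as f64 - 1.0))
        .collect();
    let total: f64 = variances.iter().sum();
    // Normalize to ratios over the retained components.
    variances.iter().map(|v| v / total).collect()
}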

Use Cases

🔬 Scientific Computing

  • Finite element analysis
  • Computational fluid dynamics
  • Structural analysis
  • Physics simulations
  • Partial differential equations (PDEs) - AMG excels at discretized PDEs

🤖 Machine Learning & Data Science

  • Large-scale linear regression
  • Principal component analysis (PCA)
  • Recommendation systems
  • Natural language processing (LSA/LSI)
  • Image processing and computer vision

📈 Numerical Analysis

  • Solving partial differential equations
  • Optimization problems
  • Signal processing
  • Graph algorithms
  • Multigrid methods - AMG for hierarchical problem solving

Performance

This library is optimized for:

  • Large sparse matrices (>10,000 dimensions)
  • Memory efficiency - Minimal overhead beyond input data
  • Numerical stability - Robust algorithms with proven convergence
  • Parallel computation - Multi-threaded operations where beneficial

AMG Performance Characteristics

  • Optimal complexity - O(n) for many PDE problems
  • Scalable - Maintains efficiency as problem size increases
  • Memory efficient - Hierarchical structure reduces memory requirements
  • Grid-independent convergence - Performance doesn't degrade with mesh refinement

Algorithm Selection Guide

Each entry lists the problem type, the recommended solver, and a short note:

  • Symmetric positive-definite - Conjugate Gradient (fastest convergence)
  • General square matrices - BiConjugate Gradient (good for non-symmetric systems)
  • Diagonally dominant - Jacobi or Gauss-Seidel (simple and robust)
  • Large-scale PCA/SVD - Truncated SVD (memory efficient)
  • Ill-conditioned systems - Relaxation methods (adjustable convergence)
  • PDE discretizations - Algebraic Multigrid / AMG (optimal for grid-based problems)

Documentation

See the API documentation on docs.rs for the complete API reference.

Contributing

We welcome contributions! Please see our contributing guidelines for details on:

  • Bug reports and feature requests
  • Code contributions and pull requests
  • Documentation improvements
  • Performance optimizations

Roadmap

  • Preconditioned iterative methods
  • Additional matrix decompositions (QR, LU)
  • GPU acceleration support
  • Distributed computing capabilities

License

Licensed under either of:

  • Apache License, Version 2.0
  • MIT License

at your option.

Related Crates

  • nalgebra-sparse - Sparse matrix formats and operations that this crate builds on
  • nalgebra - General-purpose linear algebra for Rust
