| Crates.io | srsadmm-core |
| lib.rs | srsadmm-core |
| version | 0.1.6 |
| created_at | 2025-06-09 06:54:46.710465+00 |
| updated_at | 2025-06-10 03:54:46.721619+00 |
| description | Core library for the srsadmm project, used to solve consensus ADMM problems with serverless compute. |
| homepage | |
| repository | https://github.com/buk0vec/srsadmm |
| max_upload_size | |
| id | 1705603 |
| size | 296,626 |
This is the core library containing the distributed serverless ADMM algorithm, as well as binaries for solving a LASSO regression problem. It is meant to be used with the tokio runtime and the srsadmm-lambda-mm AWS Lambda function.
Use the `generate_problem` binary to generate a problem instance, then the `lasso` binary to solve it, or `lasso_prox` to solve it with the proximal gradient method. The binaries pull your AWS credentials from the default environment variables. Please deploy the srsadmm-lambda-mm Lambda function to your AWS account first and ensure that it has access to the S3 bucket you are using. To enable a specific backend, run a binary as `cargo run --release --bin <binary> --no-default-features --features <accelerate/netlib/openblas> -- <args>`.

Available features:

- `accelerate` - Use the Accelerate backend for matrix operations
- `netlib` - Use the Netlib backend for matrix operations
- `openblas` - Use the OpenBLAS backend for matrix operations
- `linfa` - Adds a utility function to compute the optimal objective value for LASSO regression using the linfa and linfa-elasticnet libraries. Useful for testing and validation.
- `rayon` - Adds support for parallelization using the rayon library. Not necessary for the ADMM algorithm itself, but it speeds up problem instance generation.

Alternatively, you can use the srsadmm-core library in your own project. While you can technically install it with `cargo add srsadmm-core`, it may be better to copy this directory directly into your own project and use it as a path dependency. Take a look at `lasso.rs` and `lasso_prox.rs` for examples of how to use the library. You can also install with the `accelerate`, `netlib`, or `openblas` features to enable different backends.
Recommended install: cargo add srsadmm-core --features openblas,rayon
ADMMProblem<G, S> - The main trait defining the ADMM algorithm interface. Implementations must provide methods for:
- `precompute()` - One-time setup and matrix factorizations
- `update_x()` - Primal variable update step
- `update_z()` - Auxiliary variable update (often with proximal operators)
- `update_y()` - Dual variable update step
- `update_residuals()` - Compute convergence metrics
- `check_stopping_criteria()` - Determine if the algorithm should terminate

ADMMSolver<G, S, P> - Orchestrates the iterative ADMM optimization process with timing tracking, iteration control, and result export capabilities.
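The update cycle these trait methods describe can be illustrated with a minimal, self-contained sketch that does not use srsadmm-core's API. Here it solves a 1-D lasso problem, min over x of 0.5*(a*x - b)^2 + lam*|x|, in consensus form (x = z); all names and signatures below are illustrative, not the crate's.

```rust
/// Proximal operator of k*|.| (the classic soft-threshold).
fn soft_threshold(v: f64, k: f64) -> f64 {
    v.signum() * (v.abs() - k).max(0.0)
}

/// One-variable consensus ADMM for 0.5*(a*x - b)^2 + lam*|x|.
fn admm_lasso_1d(a: f64, b: f64, lam: f64, rho: f64, iters: usize) -> f64 {
    // precompute(): the x-update denominator is fixed, so factor it once.
    let denom = a * a + rho;
    let (mut x, mut z, mut y) = (0.0_f64, 0.0_f64, 0.0_f64);
    for _ in 0..iters {
        // update_x(): minimize the smooth quadratic part.
        x = (a * b + rho * z - y) / denom;
        // update_z(): proximal (soft-threshold) step on the L1 term.
        let z_old = z;
        z = soft_threshold(x + y / rho, lam / rho);
        // update_y(): dual ascent on the consensus constraint x = z.
        y += rho * (x - z);
        // update_residuals() + check_stopping_criteria():
        // primal residual r = x - z, dual residual s = rho*(z - z_old).
        let (r, s) = ((x - z).abs(), (rho * (z - z_old)).abs());
        if r < 1e-10 && s < 1e-10 {
            break;
        }
    }
    z
}

fn main() {
    // Scalar lasso with a = 2, b = 4, lam = 2 has the closed-form
    // solution (a*b - lam) / a^2 = 1.5; ADMM converges to it.
    let x = admm_lasso_1d(2.0, 4.0, 2.0, 1.0, 200);
    println!("{x:.4}");
}
```

In the crate, each of these steps operates on distributed `MatrixVariable`s rather than scalars, but the control flow the solver drives is the same.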
ADMMContext<G, S> - Execution context containing shared global state (G) and local subproblem state (S) with thread-safe synchronization primitives.
MatrixVariable - A distributed matrix that can be stored and synchronized across multiple backends (local disk, S3, memory). Provides high-level matrix operations for ADMM algorithms while handling the complexity of distributed storage.
DataMatrixVariable - Specialized for large read-only matrices (like training data) with memory-mapped file access and efficient chunking for subproblem processing.
ScalarVariable - Similar distributed storage for scalar values with the same multi-backend synchronization.
ResourceLocation - Enum defining storage backends:
- Local - Compressed local filesystem storage
- S3 - AWS S3 cloud storage for distributed access
- Memory - In-memory storage for fast access

StorageConfig - Configuration for all storage backends, with settings for local paths, S3 buckets, and memory management.
ProblemResourceImpl<T> - Internal resource manager handling storage, synchronization, and caching across multiple backends with automatic consistency management.
ops module provides distributed matrix operations:
- `mm()` - Matrix multiplication (local or cloud-based via AWS Lambda)
- `lasso_factor()` - Computes (A^T A + ρI)^-1 for LASSO problems
- `soft_threshold()` - L1 regularization proximal operator
- `scale()` - In-place matrix scaling

TimingTracker - Performance tracking for ADMM iterations, outputting a CSV file with the timing data.
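As a rough sketch of what the element-wise ops compute, here are stand-ins for `soft_threshold()` and `scale()` over a plain `f64` slice; the crate's versions operate on its distributed matrix types instead, so these signatures are assumptions for illustration only.

```rust
/// Element-wise L1 proximal operator: each entry v becomes
/// sign(v) * max(|v| - k, 0).
fn soft_threshold(data: &mut [f64], k: f64) {
    for v in data.iter_mut() {
        *v = v.signum() * (v.abs() - k).max(0.0);
    }
}

/// In-place scaling of every entry by a constant c.
fn scale(data: &mut [f64], c: f64) {
    for v in data.iter_mut() {
        *v *= c;
    }
}

fn main() {
    let mut x = vec![3.0, -0.5, 1.0, -2.0];
    soft_threshold(&mut x, 1.0);
    // Entries inside [-1, 1] collapse to zero; the rest shrink toward zero.
    assert_eq!(x, vec![2.0, 0.0, 0.0, -1.0]);
    scale(&mut x, 0.5);
    assert_eq!(x, vec![1.0, 0.0, 0.0, -0.5]);
    println!("{x:?}");
}
```

In the z-update of ADMM for LASSO, the threshold `k` is typically λ/ρ, which is why the proximal operator and the scaling op appear together.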
subproblem module provides utilities for decomposing problems:
- `split_matrix_into_subproblems()` - Partitions matrices into row-wise chunks for parallel processing
- `combine_subproblems()` - Reassembles subproblem results into the final solution
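A hypothetical sketch of this row-wise split/recombine round trip, modeling a matrix as `Vec<Vec<f64>>` rows rather than the crate's actual matrix type (the function names and signatures here are illustrative):

```rust
/// Partition a matrix's rows into n_parts contiguous chunks, as evenly as
/// possible: the first (rows.len() % n_parts) chunks get one extra row.
fn split_into_subproblems(rows: Vec<Vec<f64>>, n_parts: usize) -> Vec<Vec<Vec<f64>>> {
    let len = rows.len();
    let base = len / n_parts;
    let extra = len % n_parts;
    let mut iter = rows.into_iter();
    (0..n_parts)
        .map(|i| {
            let take = base + if i < extra { 1 } else { 0 };
            iter.by_ref().take(take).collect()
        })
        .collect()
}

/// Reassemble chunks in order; a row-wise split concatenates back losslessly.
fn combine_subproblems(parts: Vec<Vec<Vec<f64>>>) -> Vec<Vec<f64>> {
    parts.into_iter().flatten().collect()
}

fn main() {
    // A 5x3 matrix split across 2 subproblems: chunk sizes 3 and 2.
    let m: Vec<Vec<f64>> = (0..5).map(|i| vec![i as f64; 3]).collect();
    let parts = split_into_subproblems(m.clone(), 2);
    assert_eq!(parts[0].len(), 3);
    assert_eq!(parts[1].len(), 2);
    assert_eq!(combine_subproblems(parts), m);
    println!("roundtrip ok");
}
```

Row-wise chunking is what lets each subproblem's x-update run independently (e.g. in a Lambda invocation) before the results are concatenated back.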