| Field | Value |
|---|---|
| Crate | optirs-core |
| Version | 0.1.0 |
| Created | 2025-09-18 09:38:20.083313+00 |
| Updated | 2025-12-30 08:22:36.288269+00 |
| Description | OptiRS core optimization algorithms and utilities |
| Repository | https://github.com/cool-japan/optirs |
| Size | 3,802,621 |
Core optimization algorithms and utilities for the OptiRS machine learning optimization library.
OptiRS-Core provides the foundational optimization algorithms and mathematical utilities that power the entire OptiRS ecosystem. This crate integrates deeply with the SciRS2 scientific computing foundation and implements state-of-the-art optimization algorithms with high performance and numerical stability.
- `scirs2-core` 0.1.1: Foundation scientific primitives (REQUIRED)
- `scirs2-optimize` 0.1.1: Base optimization interfaces (REQUIRED)
- `scirs2-neural`: Neural network optimization support
- `scirs2-metrics`: Performance monitoring and benchmarks
- `scirs2-stats`: Statistical analysis
- `scirs2-series`: Time series support
- `scirs2-datasets`: Dataset utilities (optional)
- `scirs2-linalg`: Linear algebra operations
- `scirs2-signal`: Signal processing
- `serde`, `serde_json`: Serialization
- `thiserror`, `anyhow`: Error handling
- `approx`, `criterion`: Testing and benchmarking

Note: OptiRS does NOT use `scirs2-autograd`. OptiRS receives pre-computed gradients and does not perform automatic differentiation.
Add this to your Cargo.toml:
```toml
[dependencies]
optirs-core = "0.1.0"
scirs2-core = "0.1.1"  # Required foundation
```
Basic usage:

```rust
use optirs_core::optimizers::{Adam, Optimizer};
use scirs2_core::ndarray::Array1; // ✅ CORRECT - use scirs2_core, not ndarray directly

// Create an Adam optimizer
let mut optimizer = Adam::new(0.001)
    .beta1(0.9)
    .beta2(0.999)
    .epsilon(1e-8)
    .build();

// Your parameters and pre-computed gradients
let mut params = Array1::from(vec![1.0, 2.0, 3.0]);
let grads = Array1::from(vec![0.1, 0.2, 0.3]);

// Update parameters in place
optimizer.step(&mut params, &grads);
```
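Because OptiRS consumes pre-computed gradients rather than performing automatic differentiation, a real training loop computes the gradient itself (by hand or with an external AD tool) and hands it to `step`. Below is a minimal sketch that minimizes f(x) = x·x, whose gradient 2x is computed manually; the learning rate and iteration count are illustrative:

```rust
use optirs_core::optimizers::{Adam, Optimizer};
use scirs2_core::ndarray::Array1;

let mut optimizer = Adam::new(0.01);
let mut params = Array1::from(vec![1.0, -2.0, 3.0]);

for _ in 0..100 {
    // Gradient of f(x) = x·x is 2x, computed by hand:
    // OptiRS does not differentiate for you.
    let grads = params.mapv(|x| 2.0 * x);
    optimizer.step(&mut params, &grads);
}
// params should now be close to the minimum at the origin.
```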
Using a learning rate scheduler:

```rust
use optirs_core::optimizers::{Adam, Optimizer};
use optirs_core::schedulers::{ExponentialDecay, LRScheduler};
use scirs2_core::ndarray::Array1;

// Create an optimizer and a learning rate scheduler
let mut optimizer = Adam::new(0.001);
let mut scheduler = ExponentialDecay::new(0.001, 0.95);

let mut params = Array1::from(vec![1.0, 2.0, 3.0]);
let grads = Array1::from(vec![0.1, 0.2, 0.3]);

// Advance the schedule, then update with the new learning rate
let current_lr = scheduler.step();
optimizer.set_learning_rate(current_lr);
optimizer.step(&mut params, &grads);
```
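Over a longer run, the scheduler is typically stepped once per epoch, with the decayed rate pushed into the optimizer before each update. A sketch under the same assumptions as above (taking `ExponentialDecay::new(0.001, 0.95)` to multiply the rate by 0.95 per step, and computing the gradient of f(x) = x·x by hand):

```rust
use optirs_core::optimizers::{Adam, Optimizer};
use optirs_core::schedulers::{ExponentialDecay, LRScheduler};
use scirs2_core::ndarray::Array1;

let mut optimizer = Adam::new(0.001);
let mut scheduler = ExponentialDecay::new(0.001, 0.95);
let mut params = Array1::from(vec![1.0, 2.0, 3.0]);

for _epoch in 0..10 {
    // Pre-computed gradient; OptiRS expects the caller to supply it.
    let grads = params.mapv(|x| 2.0 * x);

    // Decay the learning rate, then apply the update at the new rate.
    let lr = scheduler.step();
    optimizer.set_learning_rate(lr);
    optimizer.step(&mut params, &grads);
}
```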
- `std`: Standard library support (enabled by default)
- `cross-platform-testing`: Enable cross-platform compatibility testing (requires `scirs2-datasets`)

Enable features in your Cargo.toml:
```toml
[dependencies]
optirs-core = { version = "0.1.0", features = ["cross-platform-testing"] }
```
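Inside a crate, such a feature usually gates code with the standard `cfg` attribute. A generic sketch of how the gate works; the module and test names here are hypothetical, not taken from optirs-core:

```rust
// Compiled only when built with `--features cross-platform-testing`.
#[cfg(feature = "cross-platform-testing")]
mod cross_platform_tests {
    // Hypothetical test, for illustration only.
    #[test]
    fn runs_only_when_feature_is_enabled() {
        assert!(cfg!(feature = "cross-platform-testing"));
    }
}
```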
Note: SIMD and parallel processing are built-in via scirs2-core and automatically enabled when beneficial.
OptiRS-Core is designed with modularity and performance in mind:
```text
optirs-core/
├── src/
│   ├── lib.rs           # Public API and re-exports
│   ├── optimizers/      # Optimizer implementations
│   │   ├── mod.rs
│   │   ├── sgd.rs
│   │   ├── adam.rs
│   │   ├── adamw.rs
│   │   └── rmsprop.rs
│   ├── schedulers/      # Learning rate scheduling
│   ├── utils/           # Mathematical utilities
│   └── integration/     # SciRS2 integration layer
```
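The per-optimizer modules above suggest that SGD, AdamW, and RMSprop sit behind the same `Optimizer` trait as Adam. A hedged sketch, assuming each follows the `new(learning_rate)` convention that Adam uses in the examples (this README does not show their constructors):

```rust
use optirs_core::optimizers::{Optimizer, SGD};
use scirs2_core::ndarray::Array1;

// Assumption: SGD::new takes a learning rate, mirroring Adam::new above.
let mut optimizer = SGD::new(0.01);

let mut params = Array1::from(vec![1.0, 2.0, 3.0]);
let grads = Array1::from(vec![0.1, 0.2, 0.3]);

// The shared step method updates parameters in place.
optimizer.step(&mut params, &grads);
```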
OptiRS-Core is optimized for high-performance machine learning workloads, building on the SIMD and parallel primitives provided by scirs2-core.
To ensure consistency across the OptiRS-Core codebase, all contributors must follow these guidelines:
- Use snake_case for variable names (e.g., `gradient_norm`, `parameter_count`, `learning_rate`)
- Avoid camelCase (`gradientNorm` ❌, `parameterCount` ❌)

```rust
// ✅ Correct: snake_case
let gradient_norm = gradients.norm();
let parameter_count = model.parameter_count();
let learning_rate = optimizer.learning_rate();

// ❌ Incorrect: camelCase or missing separators
let gradientNorm = gradients.norm();
let parameterCount = model.parameter_count();
let learningrate = optimizer.learning_rate();
```
- Use snake_case for function and method names
- Use PascalCase for struct, enum, and trait names (illustrated in the sketch below)
- Use SCREAMING_SNAKE_CASE for constants
- Use `rustfmt` and `clippy` to maintain code formatting and catch common issues
- Run `cargo fmt` to format your code
- Run `cargo clippy` to check for lint issues
- Run `cargo test` to run the test suite
- Run `cargo check` to verify that the code compiles

OptiRS follows the Cool Japan organization's development standards. See the main OptiRS repository for contribution guidelines.
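The type, trait, and constant rules above follow the same spirit as the variable-name example; a short illustrative sketch (all names below are hypothetical):

```rust
// ✅ SCREAMING_SNAKE_CASE for constants
const MAX_ITERATIONS: usize = 1_000;

// ✅ PascalCase for structs, enums, and traits
struct GradientClipper {
    max_norm: f64,
}

trait LearningRateSchedule {
    // ✅ snake_case for method names
    fn next_learning_rate(&mut self) -> f64;
}

// ✅ snake_case for free functions
fn clip_gradient_norm(norm: f64, max_norm: f64) -> f64 {
    norm.min(max_norm)
}
```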
This project is licensed under either of:
at your option.