| Crates.io | atelier_dcml |
| lib.rs | atelier_dcml |
| version | 0.0.11 |
| created_at | 2025-06-14 21:37:08.276997+00 |
| updated_at | 2025-08-18 04:28:58.432613+00 |
| description | Distributed Convex Machine Learning for the atelier-rs engine |
| homepage | https://iteralabs.ai/atelier-rs |
| repository | https://github.com/iteralabs/atelier-rs |
| max_upload_size | |
| id | 1712701 |
| size | 107,281 |
Distributed Convex Machine Learning for the atelier-rs engine.
Includes the definitions and tooling needed to conduct variations of a predictive-modeling process for high-frequency data with convex linear models, in both single and distributed learning formulations.
atelier-base/data: Hosts the Dataset struct.
atelier-dcml/models.rs: Hosts the LogisticClassifier, a convex linear model to perform binary classification.
Attributes: id, weights
Methods: compute_gradient, forward
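Below is a minimal sketch of how the LogisticClassifier might look; only the names id, weights, forward, and compute_gradient come from the description above, while the field types and method signatures are assumptions for illustration.

```rust
/// Hypothetical sketch of the LogisticClassifier; field types and method
/// signatures are assumptions, only the names come from the crate description.
pub struct LogisticClassifier {
    pub id: String,
    pub weights: Vec<f64>,
}

impl LogisticClassifier {
    /// Forward pass: sigmoid of the linear combination w . x.
    pub fn forward(&self, x: &[f64]) -> f64 {
        let z: f64 = self.weights.iter().zip(x).map(|(w, xi)| w * xi).sum();
        1.0 / (1.0 + (-z).exp())
    }

    /// Gradient of the cross-entropy loss for one sample: (y_hat - y) * x.
    pub fn compute_gradient(&self, x: &[f64], y: f64) -> Vec<f64> {
        let y_hat = self.forward(x);
        x.iter().map(|xi| (y_hat - y) * xi).collect()
    }
}
```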
atelier-dcml/functions.rs: Hosts the CrossEntropy, the loss function used to track learning for the binary LogisticClassifier. The same file defines the Regularized trait, which requires a regularization operation parameterized by RegType, an enum with L1, L2, and ElasticNet variants, all compatible with the loss functions and parameters used in convex linear methods.
Attributes: weights, y, y_hat, epsilon
Methods: regularize
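As a rough sketch under the same assumptions (f64 slices, signatures invented for illustration), the loss side could look like this:

```rust
/// Illustrative sketch; the RegType variants come from the description
/// above, while the shapes of the trait and functions are assumptions.
pub enum RegType {
    L1 { lambda: f64 },
    L2 { lambda: f64 },
    ElasticNet { lambda: f64, alpha: f64 },
}

pub trait Regularized {
    /// Penalty term added to the base loss, computed from the weights.
    fn regularize(&self, weights: &[f64]) -> f64;
}

impl Regularized for RegType {
    fn regularize(&self, weights: &[f64]) -> f64 {
        let l1: f64 = weights.iter().map(|w| w.abs()).sum();
        let l2: f64 = weights.iter().map(|w| w * w).sum();
        match self {
            RegType::L1 { lambda } => lambda * l1,
            RegType::L2 { lambda } => lambda * l2,
            RegType::ElasticNet { lambda, alpha } => {
                lambda * (alpha * l1 + (1.0 - alpha) * l2)
            }
        }
    }
}

/// Binary cross-entropy; epsilon guards against log(0).
pub fn cross_entropy(y: &[f64], y_hat: &[f64], epsilon: f64) -> f64 {
    y.iter()
        .zip(y_hat)
        .map(|(yi, pi)| {
            let p = pi.clamp(epsilon, 1.0 - epsilon);
            -(yi * p.ln() + (1.0 - yi) * (1.0 - p).ln())
        })
        .sum::<f64>()
        / y.len() as f64
}
```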
optimizers.rs: Hosts the GradientDescent, the fundamental learning algorithm, which implements the Optimizer trait also defined in this file.
Attributes: id, learning_rate
Methods: step, reset
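A sketch of the Optimizer trait and GradientDescent consistent with the listed attributes and methods; the exact signatures are assumptions:

```rust
/// Hypothetical Optimizer trait; names come from the description above,
/// signatures are assumptions.
pub trait Optimizer {
    /// Apply one update to the weights given the current gradient.
    fn step(&mut self, weights: &mut [f64], gradient: &[f64]);
    /// Clear any internal state accumulated across steps.
    fn reset(&mut self);
}

pub struct GradientDescent {
    pub id: String,
    pub learning_rate: f64,
}

impl Optimizer for GradientDescent {
    fn step(&mut self, weights: &mut [f64], gradient: &[f64]) {
        // w <- w - learning_rate * gradient
        for (w, g) in weights.iter_mut().zip(gradient) {
            *w -= self.learning_rate * g;
        }
    }

    fn reset(&mut self) {
        // Plain gradient descent keeps no state between steps.
    }
}
```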
Hierarchical Concepts:
Optimizer: The interface that defines how model parameters are updated during a training process. Any optimization algorithm implements this trait.
Gradient Descent: An optimization algorithm that iteratively updates a model's weights by subtracting the calculated gradient scaled by the learning rate, i.e., moving the weights in the direction opposite to the gradient (see the update rules after this list).
Adam: An extension of Gradient Descent that incorporates adaptive learning rates and momentum. It maintains exponentially decaying averages of past gradients (first moment) and past squared gradients (second moment) to adapt the learning rate for each parameter individually.
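For reference, the standard update rules (notation: w_t for the weights, g_t for the gradient at step t, eta for the learning rate):

```latex
% Gradient Descent
w_{t+1} = w_t - \eta \, g_t

% Adam, with decay rates \beta_1, \beta_2 and small constant \epsilon
m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t
v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2
\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1 - \beta_2^t}
w_{t+1} = w_t - \eta \, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
```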
Separation of concerns between Data, Model, Loss, and Optimizer: all are independent components, orchestrated by a trainer with the following logic (a sketch follows the list):
Dataset: Contains both features and targets, sharing the same index.
Model: Linear model architectures only; holds its own weights.
Loss: Tracks learning, e.g. CrossEntropy, optionally with a RegType regularization term.
Optimizer: Updates the model's weights each step, e.g. GradientDescent.
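Tying the pieces together, a hypothetical trainer loop over the sketched types above could read as follows; the types and method names echo the sketches and are assumptions, not the crate's actual API:

```rust
/// Hypothetical trainer orchestrating the four independent components.
fn train(
    features: &[Vec<f64>],           // Dataset: features
    targets: &[f64],                 // Dataset: targets, same index
    model: &mut LogisticClassifier,  // Model: holds its weights
    reg: &RegType,                   // Loss: regularization term
    opt: &mut dyn Optimizer,         // Optimizer: updates the weights
    epochs: usize,
) {
    for epoch in 0..epochs {
        // Per-sample gradient step over the shared index.
        for (x, &y) in features.iter().zip(targets) {
            let grad = model.compute_gradient(x, y);
            opt.step(&mut model.weights, &grad);
        }
        // Track the regularized loss after each epoch.
        let preds: Vec<f64> = features.iter().map(|x| model.forward(x)).collect();
        let loss = cross_entropy(targets, &preds, 1e-12)
            + reg.regularize(&model.weights);
        println!("epoch {epoch}: loss = {loss:.6}");
    }
}
```

Because the trainer only touches the components through their interfaces, any one of them can be swapped (a different loss, optimizer, or model) without changing the others.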
atelier-dcml is a member of the atelier-rs workspace, whose other published crates are hosted on GitHub.