atelier_dcml

Crates.io: atelier_dcml
lib.rs: atelier_dcml
version: 0.0.11
created_at: 2025-06-14 21:37:08.276997+00
updated_at: 2025-08-18 04:28:58.432613+00
description: Distributed Convex Machine Learning for the atelier-rs engine
homepage: https://iteralabs.ai/atelier-rs
repository: https://github.com/iteralabs/atelier-rs
max_upload_size:
id: 1712701
size: 107,281
owner: FranciscoME (IFFranciscoME)

documentation

https://docs.rs/atelier_rs/

README

atelier-dcml

Overview

Distributed Convex Machine Learning for the atelier-rs engine.

Includes the definitions and tooling needed to conduct variations of a predictive modeling process with convex linear models on high-frequency data, in both single and distributed learning formulations.

Dataset

  • atelier-base/data: Hosts the Dataset struct
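
A minimal sketch of the shape the Dataset struct implies, assuming dense f64 features; the actual definition lives in atelier-base and may differ:

```rust
/// Illustrative shape only; the real Dataset is defined in atelier-base.
pub struct Dataset {
    pub features: Vec<Vec<f64>>, // one row of features per observation
    pub targets: Vec<f64>,       // targets share the same index as features
}
```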

Models

  • atelier-dcml/models.rs: hosts the LogisticClassifier, a convex linear model to perform binary classification (a minimal sketch follows this list).

  • Attributes: id, weights

  • Methods: compute_gradient, forward
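
A minimal sketch of the model built from the attributes and methods listed above (id, weights, forward, compute_gradient); the concrete signatures are assumptions:

```rust
/// Illustrative sketch; the crate's actual signatures may differ.
pub struct LogisticClassifier {
    pub id: String,
    pub weights: Vec<f64>,
}

impl LogisticClassifier {
    /// Forward pass: sigmoid of the weights-features dot product.
    pub fn forward(&self, x: &[f64]) -> f64 {
        let z: f64 = self.weights.iter().zip(x).map(|(w, xi)| w * xi).sum();
        1.0 / (1.0 + (-z).exp())
    }

    /// Cross-entropy gradient for one sample: dL/dw_j = (y_hat - y) * x_j.
    pub fn compute_gradient(&self, x: &[f64], y: f64) -> Vec<f64> {
        let y_hat = self.forward(x);
        x.iter().map(|xi| (y_hat - y) * xi).collect()
    }
}
```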

Loss Functions

  • functions: hosts the CrossEntropy loss function, used to track learning for the binary LogisticClassifier. The same module defines the Regularized trait, which requires a regularization operation, and the RegType enum with L1, L2, and ElasticNet variants, all compatible with the loss functions and parameters used in convex linear methods (sketched after this list).

  • Attributes: weights, y, y_hat, epsilon

  • Methods: regularize
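
A minimal sketch of the loss side, built from the names above (CrossEntropy, epsilon, regularize, RegType). For a self-contained example, y, y_hat, and weights are passed as arguments rather than stored as attributes, and the ElasticNet mix is an assumed equal weighting:

```rust
/// Regularization variants named in the README.
pub enum RegType {
    L1,
    L2,
    ElasticNet,
}

/// Illustrative trait shape; the crate's actual signature may differ.
pub trait Regularized {
    /// Penalty term computed from the model weights.
    fn regularize(&self, weights: &[f64], reg: &RegType, lambda: f64) -> f64;
}

pub struct CrossEntropy {
    pub epsilon: f64, // numerical guard against ln(0)
}

impl CrossEntropy {
    /// Mean binary cross-entropy between targets y and predictions y_hat.
    pub fn compute(&self, y: &[f64], y_hat: &[f64]) -> f64 {
        let n = y.len() as f64;
        y.iter()
            .zip(y_hat)
            .map(|(yi, pi)| {
                let p = pi.clamp(self.epsilon, 1.0 - self.epsilon);
                -(yi * p.ln() + (1.0 - yi) * (1.0 - p).ln())
            })
            .sum::<f64>()
            / n
    }
}

impl Regularized for CrossEntropy {
    fn regularize(&self, weights: &[f64], reg: &RegType, lambda: f64) -> f64 {
        match reg {
            RegType::L1 => lambda * weights.iter().map(|w| w.abs()).sum::<f64>(),
            RegType::L2 => lambda * weights.iter().map(|w| w * w).sum::<f64>(),
            // Equal-weight L1/L2 mix, purely for illustration.
            RegType::ElasticNet => {
                let l1: f64 = weights.iter().map(|w| w.abs()).sum();
                let l2: f64 = weights.iter().map(|w| w * w).sum();
                lambda * 0.5 * (l1 + l2)
            }
        }
    }
}
```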

Optimizers

  • optimizers.rs: Includes the GradientDescent, the fundamental learning algorithm, which implements the Optimizer trait also defined in this module (see the sketch after this list).

  • Attributes: id, learning_rate

  • Methods: step, reset
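
A minimal sketch of the Optimizer trait and GradientDescent with the listed attributes and methods; the exact signatures are assumptions:

```rust
/// Illustrative trait shape; the crate's actual Optimizer may differ.
pub trait Optimizer {
    fn step(&mut self, weights: &mut [f64], gradient: &[f64]);
    fn reset(&mut self);
}

pub struct GradientDescent {
    pub id: String,
    pub learning_rate: f64,
}

impl Optimizer for GradientDescent {
    /// w <- w - learning_rate * grad, element-wise.
    fn step(&mut self, weights: &mut [f64], gradient: &[f64]) {
        for (w, g) in weights.iter_mut().zip(gradient) {
            *w -= self.learning_rate * g;
        }
    }

    /// Plain gradient descent carries no state, so reset is a no-op.
    fn reset(&mut self) {}
}
```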

Hierarchical Concepts:

  • Optimizer: The interface that defines how model parameters are updated during a training process. Any optimization algorithm implements this trait.

  • Gradient Descent: An optimization algorithm. Iteratively updates a model's weights by moving them in the direction opposite to the computed gradient, scaled by the learning rate.

  • Adam: Extended version of Gradient Descent. Incorporates adaptive learning rates and momentum: it maintains exponentially decaying averages of past gradients (first moment) and past squared gradients (second moment) to adapt the learning rate for each parameter individually.
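
For reference, the standard Adam update for parameters $\theta$ with gradient $g_t$, learning rate $\alpha$, decay rates $\beta_1, \beta_2$, and a small $\epsilon$ is:

$$
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2 \\
\hat{m}_t &= m_t / (1-\beta_1^t), \qquad \hat{v}_t = v_t / (1-\beta_2^t) \\
\theta_t &= \theta_{t-1} - \alpha\, \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)
\end{aligned}
$$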

Metrics (WIP)

  • Classification: Accuracy, F1, Recall, Precision, Confusion Matrix, AUC, ROC.
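
A minimal sketch of how the confusion-matrix-derived metrics relate, with hypothetical names; AUC and ROC need the full score distribution and are omitted:

```rust
/// Illustrative only: metrics derived from confusion-matrix counts.
fn classification_metrics(
    true_pos: f64,
    false_pos: f64,
    true_neg: f64,
    false_neg: f64,
) -> (f64, f64, f64, f64) {
    let accuracy = (true_pos + true_neg) / (true_pos + true_neg + false_pos + false_neg);
    let precision = true_pos / (true_pos + false_pos);
    let recall = true_pos / (true_pos + false_neg);
    // F1 is the harmonic mean of precision and recall.
    let f1 = 2.0 * precision * recall / (precision + recall);
    (accuracy, precision, recall, f1)
}
```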

Experiments (WIP)

  • A Data-Modeling-Metrics triad bundling the complete cluster of modeling elements needed to conduct a proper scientific experiment (which itself follows the hypothesis-experiment-results-conclusion cycle of the scientific method).

Separation of concerns between Data, Model, Loss, and Optimizer. All are independent components that are orchestrated by a trainer with the following logic (a minimal loop sketch follows the two points below):

Dataset: contains both features and targets, sharing the same index.

Model: linear model architectures only; each model owns its weights.
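
A minimal sketch of that orchestration, assuming the logistic model, cross-entropy loss, and gradient descent from the sections above; all names are illustrative, not the crate's actual API:

```rust
/// Illustrative shapes only.
struct Dataset {
    features: Vec<Vec<f64>>,
    targets: Vec<f64>,
}

/// One possible trainer loop: for each sample, the model produces y_hat,
/// the loss gradient is formed, and the optimizer updates the weights.
fn train(data: &Dataset, weights: &mut [f64], learning_rate: f64, epochs: usize) {
    for _ in 0..epochs {
        for (x, y) in data.features.iter().zip(&data.targets) {
            // Model: forward pass of a logistic classifier.
            let z: f64 = weights.iter().zip(x).map(|(w, xi)| w * xi).sum();
            let y_hat = 1.0 / (1.0 + (-z).exp());
            // Loss: the cross-entropy gradient for this sample is (y_hat - y) * x.
            // Optimizer: gradient-descent step on the model's weights.
            for (w, xi) in weights.iter_mut().zip(x) {
                *w -= learning_rate * (y_hat - y) * xi;
            }
        }
    }
}
```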

Processes

  • Train: Iterative execution of optimizers, models, and loss functions
  • Infer: Input data parsing, model output generation
  • Explain: Studies of the model's architecture and weights

workspace

These are the other published crates that are members of the workspace:

GitHub hosted:



atelier-dcml is a member of the atelier-rs workspace
