| Crates.io | adaptive-pipeline-domain |
| lib.rs | adaptive-pipeline-domain |
| version | 2.0.0 |
| created_at | 2025-10-12 10:48:42.013148+00 |
| updated_at | 2025-10-12 10:48:42.013148+00 |
| description | Domain layer for optimized, adaptive pipeline - pure reusable business logic, entities, value objects, and domain services following DDD principles |
| repository | https://github.com/abitofhelp/adaptive_pipeline.git |
| id | 1879172 |
| size | 1,011,765 |
Pure business logic and domain model for the Adaptive Pipeline - a reusable, framework-agnostic library following Domain-Driven Design principles.
This crate contains the pure domain layer of the Adaptive Pipeline system - all business logic, entities, value objects, and domain services with zero infrastructure dependencies. It's completely synchronous, has no I/O, and can be reused in any Rust application.
Add this to your Cargo.toml:

```toml
[dependencies]
adaptive-pipeline-domain = "2.0"
```
This crate implements the Domain Layer from Clean Architecture:
```text
┌──────────────────────────────────────────────┐
│          Domain Layer (This Crate)           │
│  ┌────────────────────────────────────────┐  │
│  │ Entities                               │  │
│  │ - Pipeline                             │  │
│  │ - PipelineStage                        │  │
│  │ - ProcessingContext                    │  │
│  └────────────────────────────────────────┘  │
│  ┌────────────────────────────────────────┐  │
│  │ Value Objects                          │  │
│  │ - PipelineId (ULID)                    │  │
│  │ - FileChunk                            │  │
│  │ - ChunkSize                            │  │
│  │ - Algorithm                            │  │
│  │ - FilePath (validated)                 │  │
│  └────────────────────────────────────────┘  │
│  ┌────────────────────────────────────────┐  │
│  │ Domain Services (Sync)                 │  │
│  │ - CompressionService                   │  │
│  │ - EncryptionService                    │  │
│  │ - ChecksumService                      │  │
│  └────────────────────────────────────────┘  │
│  ┌────────────────────────────────────────┐  │
│  │ Infrastructure Ports (Traits)          │  │
│  │ - FileIOService                        │  │
│  │ - PipelineRepository                   │  │
│  └────────────────────────────────────────┘  │
└──────────────────────────────────────────────┘
```
Create a pipeline with ordered stages:

```rust
use adaptive_pipeline_domain::entities::{Pipeline, PipelineStage, StageType};
use adaptive_pipeline_domain::value_objects::PipelineId;

// Create pipeline with stages
let mut pipeline = Pipeline::new("compress-encrypt".to_string());

// Add compression stage
let compress_stage = PipelineStage::new(
    "compression".to_string(),
    StageType::Compression,
    1, // order
);
pipeline.add_stage(compress_stage);

// Add encryption stage
let encrypt_stage = PipelineStage::new(
    "encryption".to_string(),
    StageType::Encryption,
    2, // order
);
pipeline.add_stage(encrypt_stage);
```
Value objects validate their input at construction:

```rust
use adaptive_pipeline_domain::value_objects::{ChunkSize, FilePath, FileChunk};

// Type-safe chunk size with validation
let chunk_size = ChunkSize::from_mb(8)?; // 8 MB chunks

// Validated file path
let input_path = FilePath::new("/data/input.txt")?;

// Immutable file chunk (`chunk_data` is a byte buffer read elsewhere)
let chunk = FileChunk::new(
    chunk_data,
    0, // sequence number
    Some("sha256:abc123...".to_string()),
);
```
Domain services transform chunks synchronously:

```rust
use adaptive_pipeline_domain::services::{
    CompressionService, EncryptionService, ChecksumService,
};
use adaptive_pipeline_domain::FileChunk;

// Compression service (sync)
let compression = CompressionService::new("brotli", 6)?;
let compressed_chunk = compression.compress(&chunk)?;

// Encryption service (sync) -- `key` is supplied by the caller
let encryption = EncryptionService::new("aes256gcm")?;
let encrypted_chunk = encryption.encrypt(&compressed_chunk, &key)?;

// Checksum service (sync)
let checksum = ChecksumService::new("sha256")?;
let hash = checksum.calculate(&encrypted_chunk)?;
```
The processing context carries state across stages:

```rust
use adaptive_pipeline_domain::entities::ProcessingContext;

let mut context = ProcessingContext::new();

// Track processing state
context.add_metadata("compression_ratio", "0.65");
context.add_metadata("encryption_algorithm", "aes256gcm");

// Access during processing
let ratio = context.get_metadata("compression_ratio");
```
The domain defines repository traits that infrastructure implements:
```rust
#[async_trait]
pub trait PipelineRepository: Send + Sync {
    async fn save(&self, pipeline: &Pipeline) -> Result<(), PipelineError>;
    async fn find_by_id(&self, id: &PipelineId) -> Result<Option<Pipeline>, PipelineError>;
    async fn find_by_name(&self, name: &str) -> Result<Option<Pipeline>, PipelineError>;
    async fn delete(&self, id: &PipelineId) -> Result<(), PipelineError>;
}
```
Domain services encapsulate CPU-bound business logic:
```rust
pub struct CompressionService {
    algorithm: Algorithm,
    level: u8,
}

impl CompressionService {
    pub fn compress(&self, chunk: &FileChunk) -> Result<FileChunk, PipelineError> {
        // Pure CPU-bound compression logic
        // No I/O, no async
        // ...
    }
}
```
All identifiers use ULIDs for lexicographic sortability, embedded creation timestamps, and collision-free generation without coordination:

```rust
use adaptive_pipeline_domain::value_objects::PipelineId;

let id = PipelineId::new();
println!("Pipeline ID: {}", id); // e.g. 01H2X3Y4Z5W6V7U8T9S0R1Q2P3
```
All value objects enforce domain invariants:

```rust
use adaptive_pipeline_domain::value_objects::ChunkSize;

// Valid chunk sizes: 1 MB to 64 MB
let chunk = ChunkSize::from_mb(8)?; // OK

// Invalid sizes are rejected at construction
let invalid = ChunkSize::from_mb(128); // Err: exceeds maximum
assert!(invalid.is_err());
```
Domain events capture important business occurrences:

```rust
use adaptive_pipeline_domain::events::{DomainEvent, EventType};

let event = DomainEvent::new(
    EventType::PipelineCreated,
    "Pipeline 'secure-backup' created".to_string(),
);

// Events are immutable and serializable
let json = serde_json::to_string(&event)?;
```
Domain logic is easy to test because it's pure:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_compression_reduces_size() {
        let service = CompressionService::new("brotli", 6).unwrap();
        let input = create_test_chunk(1024 * 1024); // 1 MB
        let compressed = service.compress(&input).unwrap();
        assert!(compressed.data().len() < input.data().len());
    }
}
```
This crate keeps its dependency set minimal to stay pure: no async runtime, no I/O libraries, no database dependencies.
BSD 3-Clause License - see LICENSE file for details.
This is a pure domain layer - contributions should keep it that way: no infrastructure dependencies, no async code, no I/O.
Pure Domain Logic | Framework-Agnostic | Highly Reusable