| Field | Value |
| --- | --- |
| Crates.io | aidale-layer |
| lib.rs | aidale-layer |
| version | 0.1.0 |
| created_at | 2025-11-01 04:15:10.020081+00 |
| updated_at | 2025-11-01 04:15:10.020081+00 |
| description | Built-in layers for Aidale (logging, retry, caching, etc.) |
| homepage | https://github.com/hanxuanliang/aidale |
| repository | https://github.com/hanxuanliang/aidale |
| max_upload_size | |
| id | 1911679 |
| size | 60,187 |
Built-in middleware layers for Aidale (logging, retry, caching, etc.).
`aidale-layer` provides composable middleware layers that follow the AOP (Aspect-Oriented Programming) pattern, wrapping each request with cross-cutting behavior such as logging and retries.
`LoggingLayer` logs all requests and responses with timing information:
```rust
use aidale_layer::LoggingLayer;

let executor = RuntimeExecutor::builder(provider)
    .layer(LoggingLayer::new())
    .finish();
```
Output example:

```text
[AI Request] model=gpt-3.5-turbo messages=2
[AI Response] duration=1.2s tokens=150
```
`RetryLayer` adds automatic retry with exponential backoff:
```rust
use aidale_layer::RetryLayer;
use std::time::Duration;

let executor = RuntimeExecutor::builder(provider)
    .layer(RetryLayer::new()
        .with_max_retries(3)
        .with_initial_delay(Duration::from_millis(100))
        .with_max_delay(Duration::from_secs(10)))
    .finish();
```
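Assuming the delay doubles after each failed attempt (the usual exponential backoff scheme), the configuration above waits roughly 100 ms, 200 ms, and 400 ms before its three retries, never exceeding the 10 s cap.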
Features:
- Configurable maximum retry count (`with_max_retries`)
- Exponential backoff starting from an initial delay (`with_initial_delay`)
- Cap on the delay between attempts (`with_max_delay`)
Layers are composed in order from outermost to innermost:
```rust
let executor = RuntimeExecutor::builder(provider)
    .layer(LoggingLayer::new())   // Executes first (outer)
    .layer(RetryLayer::new()      // Executes second
        .with_max_retries(3))
    .finish();
```
Execution flow:

```text
Request  → LoggingLayer → RetryLayer → Provider
Response ← LoggingLayer ← RetryLayer ← Provider
```
Layers use compile-time composition with static dispatch: the stack is built from generic types rather than boxed trait objects (`dyn Trait`). This means zero runtime overhead compared to a manual implementation.
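As a rough illustration of the technique (a minimal sketch, not aidale's actual internals — the `Service`, `Logging`, `Retry`, and `EchoProvider` names are hypothetical), each layer wraps the inner one in a new generic type, so the composed stack is a single concrete type with no `Box<dyn ...>` anywhere:

```rust
// Hypothetical stand-in for the real Layer/Provider traits.
trait Service {
    fn call(&self, request: &str) -> Result<String, String>;
}

// A "provider" at the bottom of the stack.
struct EchoProvider;

impl Service for EchoProvider {
    fn call(&self, request: &str) -> Result<String, String> {
        Ok(format!("echo: {request}"))
    }
}

// Each layer wraps the inner service generically, so the composed stack is
// one concrete type (Logging<Retry<EchoProvider>>) and every call is
// statically dispatched.
struct Logging<S>(S);

impl<S: Service> Service for Logging<S> {
    fn call(&self, request: &str) -> Result<String, String> {
        println!("[request] {request}");
        let response = self.0.call(request);
        println!("[response] {response:?}");
        response
    }
}

struct Retry<S> {
    inner: S,
    max_retries: usize,
}

impl<S: Service> Service for Retry<S> {
    fn call(&self, request: &str) -> Result<String, String> {
        let mut last_err = String::new();
        // One initial attempt plus up to `max_retries` retries.
        for _ in 0..=self.max_retries {
            match self.inner.call(request) {
                Ok(response) => return Ok(response),
                Err(err) => last_err = err,
            }
        }
        Err(last_err)
    }
}

fn main() {
    // Outer layer first, provider innermost -- mirroring the builder order above.
    let stack = Logging(Retry { inner: EchoProvider, max_retries: 3 });
    let _ = stack.call("hello");
}
```

Because the whole chain is monomorphized into one concrete type, the compiler is free to inline straight through the layers.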
To install, add the dependency via the main `aidale` crate:

```toml
[dependencies]
aidale = { version = "0.1", features = ["layers"] }
```

Or depend on this crate directly:

```toml
[dependencies]
aidale-layer = "0.1"
aidale-core = "0.1"
```
To write a custom layer, implement the `Layer` trait from `aidale-core`:
```rust
use aidale_core::{Layer, Provider, ChatCompletionParams, ChatCompletionResponse};
use async_trait::async_trait;

pub struct MyCustomLayer {
    // Your fields
}

#[async_trait]
impl<P: Provider> Layer<P> for MyCustomLayer {
    async fn call(
        &self,
        provider: &P,
        model: &str,
        params: ChatCompletionParams,
    ) -> Result<ChatCompletionResponse> {
        // Pre-processing
        println!("Before request");

        // Call next layer or provider
        let response = provider.chat_completion(model, params).await?;

        // Post-processing
        println!("After request");

        Ok(response)
    }
}
```
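The custom layer can then be registered through the same builder chain as the built-in layers. A usage sketch, assuming `MyCustomLayer` is constructed with a plain struct literal:

```rust
let executor = RuntimeExecutor::builder(provider)
    .layer(LoggingLayer::new())
    .layer(MyCustomLayer { /* your fields */ })
    .finish();
```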
Related crates:
- `aidale-core` - Core traits and runtime
- `aidale-provider` - Provider implementations
- `aidale-plugin` - Plugin system

License: MIT OR Apache-2.0