| Crates.io | ferrous-llm-core |
| lib.rs | ferrous-llm-core |
| version | 0.6.1 |
| created_at | 2025-07-12 06:13:02.400139+00 |
| updated_at | 2025-08-31 10:10:06.088574+00 |
| description | Core LLM library |
| homepage | https://www.eurora-labs.com |
| repository | https://github.com/eurora-labs/ferrous-llm.git |
| max_upload_size | |
| id | 1749019 |
| size | 122,683 |
Core traits and types for the ferrous-llm ecosystem. This crate provides the foundational abstractions that all LLM providers implement, including traits for chat, completion, streaming, and tool calling, as well as standardized request/response types and error handling.
ferrous-llm-core is the foundation crate that defines the common interface for all LLM providers in the ferrous-llm ecosystem. It follows the Interface Segregation Principle, allowing providers to implement only the capabilities they support.
Add this to your Cargo.toml:
[dependencies]
ferrous-llm-core = "0.6.1"
The primary trait for chat-based LLM interactions:
use async_trait::async_trait;
use ferrous_llm_core::{ChatRequest, ChatResponse, ProviderConfig, ProviderError};

#[async_trait]
pub trait ChatProvider: Send + Sync {
    type Config: ProviderConfig;
    type Response: ChatResponse;
    type Error: ProviderError;

    async fn chat(&self, request: ChatRequest) -> Result<Self::Response, Self::Error>;
}
Extends ChatProvider with streaming capabilities:
use async_trait::async_trait;
use ferrous_llm_core::{ChatProvider, ChatRequest};
use futures::Stream;

#[async_trait]
pub trait StreamingProvider: ChatProvider {
    type StreamItem: Send + 'static;
    type Stream: Stream<Item = Result<Self::StreamItem, Self::Error>> + Send + 'static;

    async fn chat_stream(&self, request: ChatRequest) -> Result<Self::Stream, Self::Error>;
}
The remaining capability traits follow the same pattern:

- CompletionProvider - Text completion (non-chat) capabilities
- ToolProvider - Function/tool calling support
- EmbeddingProvider - Text embedding generation
- ImageProvider - Image generation capabilities
- SpeechToTextProvider - Speech transcription
- TextToSpeechProvider - Speech synthesis

Standard request structure for chat interactions:
use ferrous_llm_core::{ChatRequest, Message, MessageContent, Metadata, Parameters, Role};

let request = ChatRequest {
    messages: vec![Message {
        role: Role::User,
        content: MessageContent::Text("Hello!".to_string()),
        name: None,
        tool_calls: None,
        tool_call_id: None,
        created_at: chrono::Utc::now(),
    }],
    parameters: Parameters {
        temperature: Some(0.7),
        max_tokens: Some(100),
        ..Default::default()
    },
    metadata: Metadata::default(),
};
The key types that make up a request:

- Role - System, User, Assistant, Tool
- MessageContent - Text, Image, or Mixed content
- Parameters - Generation parameters (temperature, max_tokens, etc.)
- Metadata - Request metadata and extensions

All providers implement standardized response traits:
use ferrous_llm_core::{FinishReason, Metadata, Usage};

// Common response interface
pub trait ChatResponse {
    fn content(&self) -> &str;
    fn usage(&self) -> Option<&Usage>;
    fn finish_reason(&self) -> Option<&FinishReason>;
    fn metadata(&self) -> &Metadata;
}
Unified error handling across all providers:
pub trait ProviderError: std::error::Error + Send + Sync {
    fn kind(&self) -> ErrorKind;
    fn is_retryable(&self) -> bool;
    fn status_code(&self) -> Option<u16>;
}

#[derive(Debug, Clone, PartialEq)]
pub enum ErrorKind {
    Authentication,
    RateLimited,
    InvalidRequest,
    ServerError,
    NetworkError,
    Timeout,
    Unknown,
}
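For example, is_retryable() supports a simple retry loop. The sketch below assumes ChatRequest implements Clone and uses tokio for the backoff sleep; neither is promised by this crate:

use ferrous_llm_core::{ChatProvider, ChatRequest, ProviderError};
use std::time::Duration;

// Hedged sketch of a retry loop driven by is_retryable().
// Assumptions: ChatRequest: Clone, and a tokio runtime is available.
async fn chat_with_retry<P: ChatProvider>(
    provider: &P,
    request: ChatRequest,
    max_attempts: u32,
) -> Result<P::Response, P::Error> {
    let mut attempt = 0;
    loop {
        match provider.chat(request.clone()).await {
            Ok(response) => return Ok(response),
            Err(err) if err.is_retryable() && attempt + 1 < max_attempts => {
                attempt += 1;
                // Linear backoff; real code would likely use jittered exponential backoff.
                tokio::time::sleep(Duration::from_millis(200 * u64::from(attempt))).await;
            }
            Err(err) => return Err(err),
        }
    }
}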
Base configuration trait for all providers:
pub trait ProviderConfig: Clone + Send + Sync {
    fn validate(&self) -> Result<(), Box<dyn std::error::Error>>;
    fn timeout(&self) -> std::time::Duration;
    fn base_url(&self) -> &str;
}
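A minimal implementation sketch (MyConfig and its fields are hypothetical names, not part of this crate):

use ferrous_llm_core::ProviderConfig;
use std::time::Duration;

// Hypothetical config type; field names are illustrative only.
#[derive(Clone)]
pub struct MyConfig {
    pub api_key: String,
    pub base_url: String,
    pub timeout: Duration,
}

impl ProviderConfig for MyConfig {
    fn validate(&self) -> Result<(), Box<dyn std::error::Error>> {
        if self.api_key.is_empty() {
            return Err("api_key must not be empty".into());
        }
        Ok(())
    }

    fn timeout(&self) -> Duration {
        self.timeout
    }

    fn base_url(&self) -> &str {
        &self.base_url
    }
}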
A complete, provider-agnostic usage example:

use ferrous_llm_core::{
    ChatProvider, ChatRequest, ChatResponse, Message, MessageContent,
    Metadata, Parameters, Role,
};

// This example shows how to use the core types.
// Actual provider implementations live in separate crates.
async fn example_usage<P>(provider: P) -> Result<(), P::Error>
where
    P: ChatProvider,
{
    let request = ChatRequest {
        messages: vec![Message {
            role: Role::User,
            content: MessageContent::Text("Explain Rust".to_string()),
            name: None,
            tool_calls: None,
            tool_call_id: None,
            created_at: chrono::Utc::now(),
        }],
        parameters: Parameters {
            temperature: Some(0.7),
            max_tokens: Some(150),
            ..Default::default()
        },
        metadata: Metadata::default(),
    };

    let response = provider.chat(request).await?;
    println!("Response: {}", response.content());
    Ok(())
}
To implement a new provider, create a struct that implements the relevant traits:
use async_trait::async_trait;
use ferrous_llm_core::{ChatProvider, ChatRequest};

// MyConfig, MyResponse, and MyError are your own types implementing
// ProviderConfig, ChatResponse, and ProviderError respectively.
pub struct MyProvider {
    config: MyConfig,
}

#[async_trait]
impl ChatProvider for MyProvider {
    type Config = MyConfig;
    type Response = MyResponse;
    type Error = MyError;

    async fn chat(&self, request: ChatRequest) -> Result<Self::Response, Self::Error> {
        // Implementation here
        todo!()
    }
}
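To round out the sketch, the hypothetical MyError could implement ProviderError along these lines (which error kinds count as retryable is a judgment call, not something this crate prescribes):

use ferrous_llm_core::{ErrorKind, ProviderError};

// Hypothetical error type for the provider sketch above.
#[derive(Debug)]
pub struct MyError {
    kind: ErrorKind,
    message: String,
}

impl std::fmt::Display for MyError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.message)
    }
}

impl std::error::Error for MyError {}

impl ProviderError for MyError {
    fn kind(&self) -> ErrorKind {
        self.kind.clone()
    }

    fn is_retryable(&self) -> bool {
        // Treat transient failures as retryable; adjust to taste.
        matches!(
            self.kind,
            ErrorKind::RateLimited
                | ErrorKind::NetworkError
                | ErrorKind::Timeout
                | ErrorKind::ServerError
        )
    }

    fn status_code(&self) -> Option<u16> {
        None
    }
}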
The core crate includes utilities for testing provider implementations:
cargo test
This crate is part of the ferrous-llm workspace. See the main repository for contribution guidelines.
Licensed under the Apache License 2.0. See LICENSE for details.