| Crates.io | euro-llm |
| lib.rs | euro-llm |
| version | 0.5.0 |
| created_at | 2025-12-09 12:57:39.606877+00 |
| updated_at | 2025-12-23 16:52:54.255244+00 |
| description | Eurora LLM library |
| homepage | https://www.eurora-labs.com |
| repository | https://github.com/eurora-labs/eurora |
| max_upload_size | |
| id | 1975442 |
| size | 322,713 |
A unified cross-platform Rust library for interacting with multiple Large Language Model providers. euro-LLM provides a modular, type-safe, and performant abstraction layer that allows developers to easily switch between different LLM providers while maintaining consistent APIs.
Add euro-llm to your Cargo.toml:
[dependencies]
euro-llm = "*"
By default, no providers are enabled. Enable the providers you need:
[dependencies]
euro-llm = { version = "*", features = ["openai", "anthropic", "ollama", "specta"] }
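If you prefer pinning to the published version (0.5.0, per the metadata above) rather than using "*", a minimal sketch enabling a single provider:
[dependencies]
euro-llm = { version = "0.5.0", features = ["openai"] }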
Available features:
- default - All providers (equivalent to enabling all individual features)
- openai - OpenAI provider support
- anthropic - Anthropic Claude provider support
- ollama - Ollama local model provider support
- specta - Specta types generator support
- dynamic-image - Support for the image crate in order to send image data via bytes
A basic chat example with the OpenAI provider:
use euro_llm::{
    ChatProvider, ChatRequest, Message, MessageContent,
    Parameters, Role, Metadata,
    openai::{OpenAIConfig, OpenAIProvider},
};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load configuration from environment
    let config = OpenAIConfig::from_env()?;
    let provider = OpenAIProvider::new(config)?;
    // Create a chat request
    let request = ChatRequest {
        messages: vec![
            Message {
                role: Role::User,
                content: MessageContent::Text("Hello! Explain Rust in one sentence.".to_string()),
                name: None,
                tool_calls: None,
                tool_call_id: None,
                created_at: chrono::Utc::now(),
            }
        ],
        parameters: Parameters {
            temperature: Some(0.7),
            max_tokens: Some(100),
            ..Default::default()
        },
        metadata: Metadata::default(),
    };
    // Send the request
    let response = provider.chat(request).await?;
    println!("Response: {}", response.content());
    Ok(())
}
Streaming responses with the Anthropic provider:
use euro_llm::{
    StreamingProvider, ChatRequest, Message, MessageContent, Role,
    anthropic::{AnthropicConfig, AnthropicProvider},
};
use futures::StreamExt;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = AnthropicConfig::from_env()?;
    let provider = AnthropicProvider::new(config)?;
    let request = ChatRequest {
        messages: vec![
            Message {
                role: Role::User,
                content: MessageContent::Text("Tell me a story".to_string()),
                name: None,
                tool_calls: None,
                tool_call_id: None,
                created_at: chrono::Utc::now(),
            }
        ],
        parameters: Default::default(),
        metadata: Default::default(),
    };
    let mut stream = provider.chat_stream(request).await?;
    while let Some(chunk) = stream.next().await {
        match chunk {
            Ok(data) => print!("{}", data.content()),
            Err(e) => eprintln!("Stream error: {}", e),
        }
    }
    Ok(())
}
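If you need the full text after the stream ends, the same loop can accumulate chunks into a String. A minimal variation, using only the API shown above (content() is assumed to yield displayable text, as in the example):
let mut full_text = String::new();
while let Some(chunk) = stream.next().await {
    match chunk {
        Ok(data) => {
            // Print incrementally and keep a copy of the whole response.
            let piece = format!("{}", data.content());
            print!("{}", piece);
            full_text.push_str(&piece);
        }
        Err(e) => eprintln!("Stream error: {}", e),
    }
}
println!();
println!("Received {} characters in total.", full_text.len());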
Configuring the OpenAI provider:
use euro_llm::openai::{OpenAIConfig, OpenAIProvider};
// From environment variables
let config = OpenAIConfig::from_env()?;
// Or configure manually
let config = OpenAIConfig {
    api_key: "your-api-key".to_string(),
    model: "gpt-4".to_string(),
    base_url: "https://api.openai.com/v1".to_string(),
    timeout: std::time::Duration::from_secs(30),
};
let provider = OpenAIProvider::new(config)?;
Environment Variables:
- OPENAI_API_KEY - Your OpenAI API key (required)
- OPENAI_MODEL - Model to use (default: "gpt-3.5-turbo")
- OPENAI_BASE_URL - API base URL (default: "https://api.openai.com/v1")
Configuring the Anthropic provider:
use euro_llm::anthropic::{AnthropicConfig, AnthropicProvider};
let config = AnthropicConfig::from_env()?;
let provider = AnthropicProvider::new(config)?;
Environment Variables:
- ANTHROPIC_API_KEY - Your Anthropic API key (required)
- ANTHROPIC_MODEL - Model to use (default: "claude-3-sonnet-20240229")
- ANTHROPIC_BASE_URL - API base URL (default: "https://api.anthropic.com")
Configuring the Ollama provider:
use euro_llm::ollama::{OllamaConfig, OllamaProvider};
let config = OllamaConfig::from_env()?;
let provider = OllamaProvider::new(config)?;
Environment Variables:
- OLLAMA_MODEL - Model to use (default: "llama2")
- OLLAMA_BASE_URL - Ollama server URL (default: "http://localhost:11434")
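Putting the variables above together, a hypothetical shell setup (the values are placeholders; only set what your chosen providers need):
export OPENAI_API_KEY="sk-..."
export OPENAI_MODEL="gpt-4"
export ANTHROPIC_API_KEY="sk-ant-..."
export ANTHROPIC_MODEL="claude-3-sonnet-20240229"
export OLLAMA_MODEL="llama2"
export OLLAMA_BASE_URL="http://localhost:11434"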
euro-LLM follows the Interface Segregation Principle with focused traits:
ChatProvider - Core chat functionality that most LLM providers support.
#[async_trait]
pub trait ChatProvider: Send + Sync {
    type Config: ProviderConfig;
    type Response: ChatResponse;
    type Error: ProviderError;
    async fn chat(&self, request: ChatRequest) -> Result<Self::Response, Self::Error>;
}
StreamingProvider - Extends ChatProvider with streaming capabilities.
#[async_trait]
pub trait StreamingProvider: ChatProvider {
    type StreamItem: Send + 'static;
    type Stream: Stream<Item = Result<Self::StreamItem, Self::Error>> + Send + 'static;
    async fn chat_stream(&self, request: ChatRequest) -> Result<Self::Stream, Self::Error>;
}
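Because every provider implements the same trait, application code can stay provider-agnostic. A minimal sketch using only the ChatProvider signature shown above (the helper function is illustrative, not part of the crate):
use euro_llm::{ChatProvider, ChatRequest};
// Works with OpenAIProvider, AnthropicProvider, OllamaProvider, or any other
// type implementing ChatProvider; the caller chooses the concrete provider.
async fn ask<P: ChatProvider>(
    provider: &P,
    request: ChatRequest,
) -> Result<P::Response, P::Error> {
    provider.chat(request).await
}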
Additional traits:
- CompletionProvider - Text completion (non-chat)
- ToolProvider - Function/tool calling
- EmbeddingProvider - Text embeddings
- ImageProvider - Image generation
- SpeechToTextProvider - Speech transcription
- TextToSpeechProvider - Speech synthesis
The examples/ directory contains comprehensive examples:
- openai_chat.rs - Basic OpenAI chat
- openai_chat_streaming.rs - OpenAI streaming chat
- anthropic_chat.rs - Basic Anthropic chat
- anthropic_chat_streaming.rs - Anthropic streaming chat
- ollama_chat.rs - Basic Ollama chat
- ollama_chat_streaming.rs - Ollama streaming chat
Run examples with:
# Set up environment variables
export OPENAI_API_KEY="your-key"
export ANTHROPIC_API_KEY="your-key"
# Run specific examples
cargo run --example openai_chat --features openai
cargo run --example anthropic_chat_streaming --features anthropic
cargo run --example ollama_chat --features ollama
Run tests for all crates:
# Run all tests
cargo test --workspace
# Run tests for specific provider
cargo test -p euro-llm-openai
cargo test -p euro-llm-anthropic
cargo test -p euro-llm-ollama
# Run integration tests
cargo test --test integration_tests
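As a sketch of what such an integration test might look like (hypothetical file and test names, written against the quick-start API above, and skipped when no credentials are present):
// tests/integration_tests.rs (hypothetical)
use euro_llm::{
    ChatProvider, ChatRequest, Message, MessageContent, Metadata, Parameters, Role,
    openai::{OpenAIConfig, OpenAIProvider},
};
#[tokio::test]
async fn openai_chat_round_trip() -> Result<(), Box<dyn std::error::Error>> {
    // Skip quietly when the environment provides no API key (e.g. CI forks).
    if std::env::var("OPENAI_API_KEY").is_err() {
        return Ok(());
    }
    let provider = OpenAIProvider::new(OpenAIConfig::from_env()?)?;
    let request = ChatRequest {
        messages: vec![Message {
            role: Role::User,
            content: MessageContent::Text("Reply with a single word.".to_string()),
            name: None,
            tool_calls: None,
            tool_call_id: None,
            created_at: chrono::Utc::now(),
        }],
        parameters: Parameters::default(),
        metadata: Metadata::default(),
    };
    let response = provider.chat(request).await?;
    // The response should contain some text.
    assert!(!format!("{}", response.content()).is_empty());
    Ok(())
}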
We welcome contributions! Please see our Contributing Guidelines for details.
Clone the repository:
git clone https://github.com/your-username/euro-llm.git
cd euro-llm
Install Rust (if not already installed):
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Set up environment variables:
cp .env.example .env
# Edit .env with your API keys
Run tests:
cargo test --workspace
Provider crates live under crates/euro-llm-{provider}/, alongside the core crate euro-llm-core.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Note: This library is in active development. APIs may change before 1.0 release.