| Crates.io | llm-edge-providers |
| lib.rs | llm-edge-providers |
| version | 0.1.0 |
| created_at | 2025-11-09 01:50:51.503535+00 |
| updated_at | 2025-11-09 01:50:51.503535+00 |
| description | LLM provider adapters for OpenAI, Anthropic, Google, AWS, Azure |
| homepage | |
| repository | https://github.com/globalbusinessadvisors/llm-edge-agent |
| max_upload_size | |
| id | 1923500 |
| size | 72,864 |
A unified Rust library for interfacing with multiple Large Language Model (LLM) providers through a consistent API. This crate provides production-ready adapters with built-in retry logic, error handling, and observability.
Add this to your `Cargo.toml`:
```toml
[dependencies]
llm-edge-providers = "0.1.0"
tokio = { version = "1.0", features = ["full"] }
```
```rust
use llm_edge_providers::{
    LLMProvider, UnifiedRequest, Message,
    openai::OpenAIAdapter,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create provider adapter
    let provider = OpenAIAdapter::new(
        std::env::var("OPENAI_API_KEY")?
    );

    // Create request
    let request = UnifiedRequest {
        model: "gpt-4".to_string(),
        messages: vec![
            Message {
                role: "user".to_string(),
                content: "What is Rust?".to_string(),
            }
        ],
        temperature: Some(0.7),
        max_tokens: Some(1000),
        stream: false,
        metadata: Default::default(),
    };

    // Send request
    let response = provider.send(request).await?;

    println!("Response: {}", response.choices[0].message.content);
    println!("Tokens used: {}", response.usage.total_tokens);

    Ok(())
}
```
Query a model's pricing and check provider health:

```rust
use llm_edge_providers::openai::OpenAIAdapter;

let provider = OpenAIAdapter::new(api_key);

// Get pricing information
if let Some(pricing) = provider.get_pricing("gpt-4") {
    println!("Input: ${}/1K tokens", pricing.input_cost_per_1k);
    println!("Output: ${}/1K tokens", pricing.output_cost_per_1k);
}

// Check provider health
let health = provider.health().await;
```
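The per-1K prices from `get_pricing` can be combined with a response's token usage to estimate what a call cost. The sketch below is plain arithmetic over those numbers and is not an API provided by the crate; the `prompt_tokens` and `completion_tokens` names in the usage note are assumptions about the usage struct, so check the crate's types for the exact fields.

```rust
// Estimate the dollar cost of a request from token counts and per-1K pricing.
// Plain application-side arithmetic; not an API provided by the crate.
fn estimate_cost(
    prompt_tokens: u32,
    completion_tokens: u32,
    input_cost_per_1k: f64,
    output_cost_per_1k: f64,
) -> f64 {
    (prompt_tokens as f64 / 1000.0) * input_cost_per_1k
        + (completion_tokens as f64 / 1000.0) * output_cost_per_1k
}
```

For example, with the `gpt-4` pricing above and a response that used 500 prompt tokens and 200 completion tokens, `estimate_cost(500, 200, pricing.input_cost_per_1k, pricing.output_cost_per_1k)` gives the estimated spend for that single call.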
The Anthropic adapter works the same way:

```rust
use llm_edge_providers::anthropic::AnthropicAdapter;

let provider = AnthropicAdapter::new(api_key);

let request = UnifiedRequest {
    model: "claude-3-5-sonnet-20240229".to_string(),
    messages: vec![
        Message {
            role: "user".to_string(),
            content: "Explain async/await in Rust".to_string(),
        }
    ],
    temperature: Some(0.7),
    max_tokens: Some(2048),
    stream: false,
    metadata: Default::default(),
};

let response = provider.send(request).await?;
```
Because every adapter implements the `LLMProvider` trait, code can be written once and used with any provider:

```rust
use llm_edge_providers::{LLMProvider, UnifiedRequest};

async fn send_to_provider(
    provider: &dyn LLMProvider,
    request: UnifiedRequest,
) -> Result<String, Box<dyn std::error::Error>> {
    let response = provider.send(request).await?;
    Ok(response.choices[0].message.content.clone())
}

// Use with any provider
let openai = OpenAIAdapter::new(openai_key);
let anthropic = AnthropicAdapter::new(anthropic_key);

let result1 = send_to_provider(&openai, request.clone()).await?;
let result2 = send_to_provider(&anthropic, request.clone()).await?;
```
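The same trait-object approach also supports a simple failover policy: keep the adapters in a list and try them in order until one succeeds. This is a minimal sketch built only on the `send` method shown above; the fallback strategy itself is application code, not something the crate provides.

```rust
// Try each provider in order and return the first successful response.
// Minimal failover sketch; ordering and retry policy are up to the caller.
async fn send_with_fallback(
    providers: &[Box<dyn LLMProvider>],
    request: UnifiedRequest,
) -> Result<String, Box<dyn std::error::Error>> {
    let mut last_err: Option<Box<dyn std::error::Error>> = None;
    for provider in providers {
        match provider.send(request.clone()).await {
            Ok(response) => return Ok(response.choices[0].message.content.clone()),
            Err(e) => last_err = Some(Box::new(e)),
        }
    }
    Err(last_err.unwrap_or_else(|| "no providers configured".into()))
}
```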
Failures are reported as `ProviderError` values that can be matched on:

```rust
use llm_edge_providers::ProviderError;

match provider.send(request).await {
    Ok(response) => {
        println!("Success: {}", response.choices[0].message.content);
    }
    Err(ProviderError::RateLimitExceeded) => {
        eprintln!("Rate limit hit, please retry later");
    }
    Err(ProviderError::ApiError { status, message }) => {
        eprintln!("API error {}: {}", status, message);
    }
    Err(e) => {
        eprintln!("Error: {}", e);
    }
}
```
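Rate limits are often transient, so a caller may want to retry with backoff before giving up. The adapters already include built-in retry logic, so the sketch below is only an extra application-level policy; it assumes `send` returns `Result<UnifiedResponse, ProviderError>` and that `UnifiedResponse` is exported from the crate root, and the retry count and delays are arbitrary choices.

```rust
use std::time::Duration;
use llm_edge_providers::{LLMProvider, ProviderError, UnifiedRequest, UnifiedResponse};

// Retry only on rate limiting, with exponential backoff between attempts.
// The delays and retry count are illustrative, not crate defaults.
async fn send_with_retry(
    provider: &dyn LLMProvider,
    request: UnifiedRequest,
    max_retries: u32,
) -> Result<UnifiedResponse, ProviderError> {
    let mut delay = Duration::from_millis(500);
    for _ in 0..max_retries {
        match provider.send(request.clone()).await {
            Err(ProviderError::RateLimitExceeded) => {
                tokio::time::sleep(delay).await;
                delay *= 2; // back off before the next attempt
            }
            other => return other,
        }
    }
    // Out of retries: make one final attempt and return whatever it yields.
    provider.send(request).await
}
```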
For OpenAI, set your API key:

```bash
export OPENAI_API_KEY="sk-..."
```

Supported models: `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`, `o1-preview`, `o1-mini`
For Anthropic, set your API key:

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```

Supported models: `claude-3-5-sonnet-20240229`, `claude-3-opus-20240229`, `claude-3-haiku-20240307`
For Google, set your API key:

```bash
export GOOGLE_API_KEY="..."
```

Supported models: `gemini-pro`, `gemini-ultra`
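This README does not include a Gemini code sample. Assuming the Google adapter follows the same pattern as the OpenAI and Anthropic adapters (the `google::GoogleAdapter` path and constructor below are assumptions, so check the crate documentation for the actual names), a request would look like:

```rust
// Hypothetical: module path and constructor mirror the other adapters,
// but are not confirmed by this README.
use llm_edge_providers::google::GoogleAdapter;

let provider = GoogleAdapter::new(std::env::var("GOOGLE_API_KEY")?);

let request = UnifiedRequest {
    model: "gemini-pro".to_string(),
    messages: vec![Message {
        role: "user".to_string(),
        content: "Summarize Rust's ownership model".to_string(),
    }],
    temperature: Some(0.7),
    max_tokens: Some(1024),
    stream: false,
    metadata: Default::default(),
};

let response = provider.send(request).await?;
```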
For AWS, configure your credentials:

```bash
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
```
For Azure, configure your credentials:

```bash
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
```
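Azure OpenAI is the one provider that needs both an endpoint and a key. The adapter constructor is not shown in this README, so the `azure::AzureAdapter::new(endpoint, key)` form below is an assumption; consult the crate documentation for the real signature.

```rust
// Hypothetical: module path and (endpoint, key) argument order are assumptions,
// not confirmed by this README.
use llm_edge_providers::azure::AzureAdapter;

let provider = AzureAdapter::new(
    std::env::var("AZURE_OPENAI_ENDPOINT")?,
    std::env::var("AZURE_OPENAI_API_KEY")?,
);

// From here on, requests use the same UnifiedRequest as every other provider.
let response = provider.send(request).await?;
```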
`UnifiedRequest`: Standard request format across all providers

- `model`: Model identifier
- `messages`: Conversation messages
- `temperature`: Sampling temperature (0.0-2.0)
- `max_tokens`: Maximum tokens to generate
- `stream`: Enable streaming responses
- `metadata`: Custom metadata
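In most calls only `model` and `messages` change, so a small helper that fills in the remaining fields keeps call sites short. This is plain application code built from the fields listed above, not an API provided by the crate:

```rust
use llm_edge_providers::{Message, UnifiedRequest};

// Build a single-turn user request with typical defaults.
// A convenience wrapper over the public fields; not part of the crate.
fn user_request(model: &str, prompt: &str) -> UnifiedRequest {
    UnifiedRequest {
        model: model.to_string(),
        messages: vec![Message {
            role: "user".to_string(),
            content: prompt.to_string(),
        }],
        temperature: Some(0.7),
        max_tokens: Some(1000),
        stream: false,
        metadata: Default::default(),
    }
}
```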
`UnifiedResponse`: Standard response format

- `id`: Unique response identifier
- `model`: Model used
- `choices`: Response choices
- `usage`: Token usage statistics
- `metadata`: Response metadata (provider, latency, cost)
`LLMProvider`: Core trait implemented by all adapters

- `name()`: Provider name
- `send()`: Send request to provider
- `get_pricing()`: Get model pricing
- `health()`: Check provider health
All errors are represented by `ProviderError`:

- `Http`: HTTP client errors
- `Serialization`: JSON serialization errors
- `ApiError`: Provider API errors with status code
- `Timeout`: Request timeout
- `RateLimitExceeded`: Rate limit hit
- `Configuration`: Invalid configuration
- `Internal`: Internal errors

Licensed under the Apache License, Version 2.0. See LICENSE for details.
Contributions are welcome! Please see the repository for contribution guidelines.
- `llm-edge-core` - Core abstractions and types
- `llm-edge-router` - Request routing and load balancing
- `llm-edge-cache` - Response caching
- `llm-edge-backend` - Complete backend service