| Crates.io | ferrous-llm-anthropic |
| lib.rs | ferrous-llm-anthropic |
| version | 0.6.1 |
| created_at | 2025-07-12 06:15:23.280613+00 |
| updated_at | 2025-08-31 10:11:35.348793+00 |
| description | Anthropic provider for the LLM library |
| homepage | https://www.eurora-labs.com |
| repository | https://github.com/eurora-labs/ferrous-llm.git |
| max_upload_size | |
| id | 1749022 |
| size | 135,730 |
Anthropic provider implementation for the ferrous-llm ecosystem. This crate provides a complete implementation of Anthropic's Claude API, including chat completions, streaming responses, and tool calling capabilities.
Add this to your Cargo.toml:
[dependencies]
ferrous-llm-anthropic = "0.6.1"
Or use the main ferrous-llm crate with the Anthropic feature:
[dependencies]
ferrous-llm = { version = "0.6.1", features = ["anthropic"] }
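Basic chat example: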
use ferrous_llm_anthropic::{AnthropicConfig, AnthropicProvider};
use ferrous_llm_core::{ChatProvider, ChatRequest, Message, MessageContent, Role};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Load configuration from environment
let config = AnthropicConfig::from_env()?;
let provider = AnthropicProvider::new(config)?;
// Create a chat request
let request = ChatRequest {
messages: vec![
Message {
role: Role::User,
content: MessageContent::Text("Explain the theory of relativity".to_string()),
name: None,
tool_calls: None,
tool_call_id: None,
created_at: chrono::Utc::now(),
}
],
parameters: Default::default(),
metadata: Default::default(),
};
// Send the request
let response = provider.chat(request).await?;
println!("Claude: {}", response.content());
Ok(())
}
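Streaming responses arrive as chunks via the StreamingProvider trait: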
use ferrous_llm_anthropic::{AnthropicConfig, AnthropicProvider};
use ferrous_llm_core::{ChatRequest, Message, MessageContent, Role, StreamingProvider};
use futures::StreamExt;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let config = AnthropicConfig::from_env()?;
let provider = AnthropicProvider::new(config)?;
let request = ChatRequest {
    messages: vec![Message {
        role: Role::User,
        content: MessageContent::Text("Write a short poem about Rust".to_string()),
        name: None,
        tool_calls: None,
        tool_call_id: None,
        created_at: chrono::Utc::now(),
    }],
    ..Default::default()
};
let mut stream = provider.chat_stream(request).await?;
print!("Claude: ");
while let Some(chunk) = stream.next().await {
match chunk {
Ok(data) => print!("{}", data.content()),
Err(e) => eprintln!("Stream error: {}", e),
}
}
println!();
Ok(())
}
Set these environment variables for automatic configuration:
export ANTHROPIC_API_KEY="sk-ant-your-api-key-here"
export ANTHROPIC_MODEL="claude-3-5-sonnet-20241022" # Optional, defaults to claude-3-5-sonnet-20241022
export ANTHROPIC_BASE_URL="https://api.anthropic.com" # Optional
export ANTHROPIC_VERSION="2023-06-01" # Optional, API version
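Or configure programmatically: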
use ferrous_llm_anthropic::AnthropicConfig;
use std::time::Duration;
// Simple configuration
let config = AnthropicConfig::new("sk-ant-your-api-key", "claude-3-5-sonnet-20241022");
// Using the builder pattern
let config = AnthropicConfig::builder()
.api_key("sk-ant-your-api-key")
.model("claude-3-haiku-20240307")
.version("2023-06-01")
.timeout(Duration::from_secs(60))
.max_retries(3)
.header("Custom-Header", "value")
.build();
// From environment with validation
let config = AnthropicConfig::from_env()?;
For custom Anthropic-compatible endpoints:
let config = AnthropicConfig::builder()
.api_key("your-api-key")
.model("claude-3-5-sonnet-20241022")
.base_url("https://your-custom-endpoint.com")?
.build();
- claude-3-5-sonnet-20241022 - Latest and most capable model (default)
- claude-3-5-haiku-20241022 - Fast and efficient for most tasks
- claude-3-opus-20240229 - Most capable model for complex tasks
- claude-3-sonnet-20240229 - Balanced performance and speed
- claude-3-haiku-20240307 - Fastest model for simple tasks

| Model | Context Length | Strengths |
|---|---|---|
| Claude 3.5 Sonnet | 200K tokens | Best overall performance, coding, analysis |
| Claude 3.5 Haiku | 200K tokens | Fast responses, cost-effective |
| Claude 3 Opus | 200K tokens | Most capable, complex reasoning |
| Claude 3 Sonnet | 200K tokens | Balanced performance |
| Claude 3 Haiku | 200K tokens | Speed and efficiency |
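Claude supports tool (function) calling through the ToolProvider trait: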
use ferrous_llm_anthropic::{AnthropicConfig, AnthropicProvider};
use ferrous_llm_core::{ToolProvider, ChatRequest, Tool, ToolFunction};
let provider = AnthropicProvider::new(config)?;
let tools = vec![
Tool {
r#type: "function".to_string(),
function: ToolFunction {
name: "calculate".to_string(),
description: Some("Perform mathematical calculations".to_string()),
parameters: serde_json::json!({
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Mathematical expression to evaluate"
}
},
"required": ["expression"]
}),
},
}
];
// Build `request` as a ChatRequest (see the chat example above)
let response = provider.chat_with_tools(request, &tools).await?;
// Handle tool calls in the response
if let Some(tool_calls) = response.tool_calls() {
for tool_call in tool_calls {
println!("Tool called: {}", tool_call.function.name);
println!("Arguments: {}", tool_call.function.arguments);
}
}
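After executing a tool locally, you would typically send its result back so Claude can continue the conversation. A minimal sketch, assuming a `Role::Tool` variant exists and that `ToolCall` exposes an `id` field (both are assumptions about the ferrous-llm-core API, not confirmed here):
// Sketch: return a tool result to the model (hypothetical field usage).
let tool_result = Message {
    role: Role::Tool, // assumption: a Tool role variant exists
    content: MessageContent::Text("4".to_string()), // result of evaluating "2+2"
    name: Some("calculate".to_string()),
    tool_calls: None,
    tool_call_id: Some(tool_call.id.clone()), // assumption: ToolCall has an `id`
    created_at: chrono::Utc::now(),
};
// Append `tool_result` to the conversation and send a follow-up chat request.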
Claude can also analyze images included alongside text in a message:
use ferrous_llm_anthropic::{AnthropicConfig, AnthropicProvider};
use ferrous_llm_core::{ChatProvider, ChatRequest, ContentPart, ImageSource, Message, MessageContent, Role}; // ContentPart and ImageSource assumed to live in core
let provider = AnthropicProvider::new(config)?;
let request = ChatRequest {
messages: vec![
Message {
role: Role::User,
content: MessageContent::Mixed(vec![
ContentPart::Text("What do you see in this image?".to_string()),
ContentPart::Image {
source: ImageSource::Base64 {
media_type: "image/jpeg".to_string(),
data: base64_image_data, // base64-encoded image bytes, loaded elsewhere
},
},
]),
name: None,
tool_calls: None,
tool_call_id: None,
created_at: chrono::Utc::now(),
}
],
parameters: Default::default(),
metadata: Default::default(),
};
let response = provider.chat(request).await?;
println!("Claude's analysis: {}", response.content());
use ferrous_llm_core::{ChatRequest, Message, MessageContent, Role, Parameters};
let request = ChatRequest {
messages: vec![
Message {
role: Role::System,
content: MessageContent::Text(
"You are a helpful assistant that explains complex topics simply.".to_string()
),
name: None,
tool_calls: None,
tool_call_id: None,
created_at: chrono::Utc::now(),
},
Message {
role: Role::User,
content: MessageContent::Text("Explain quantum computing".to_string()),
name: None,
tool_calls: None,
tool_call_id: None,
created_at: chrono::Utc::now(),
}
],
parameters: Parameters {
temperature: Some(0.7),
max_tokens: Some(1000),
..Default::default()
},
metadata: Default::default(),
};
The crate provides comprehensive error handling:
use ferrous_llm_anthropic::{AnthropicError, AnthropicProvider};
use ferrous_llm_core::ErrorKind;
match provider.chat(request).await {
Ok(response) => println!("Success: {}", response.content()),
Err(e) => match e.kind() {
ErrorKind::Authentication => eprintln!("Invalid API key"),
ErrorKind::RateLimited => eprintln!("Rate limit exceeded"),
ErrorKind::InvalidRequest => eprintln!("Invalid request: {}", e),
ErrorKind::ServerError => eprintln!("Anthropic server error: {}", e),
ErrorKind::NetworkError => eprintln!("Network error: {}", e),
ErrorKind::Timeout => eprintln!("Request timeout"),
_ => eprintln!("Unknown error: {}", e),
}
}
Run the test suite:
# Unit tests
cargo test
# Integration tests (requires API key)
ANTHROPIC_API_KEY=sk-ant-your-key cargo test --test integration_tests
See the examples directory for complete working examples:
- anthropic_chat.rs - Basic chat example
- anthropic_chat_streaming.rs - Streaming chat example

Run examples:
export ANTHROPIC_API_KEY="sk-ant-your-key"
cargo run --example anthropic_chat --features anthropic
The provider includes automatic retry logic with exponential backoff for rate-limited requests. Configure retry behavior:
let config = AnthropicConfig::builder()
.api_key("sk-ant-your-key")
.max_retries(5) // Maximum retry attempts
.timeout(Duration::from_secs(30)) // Request timeout
.build();
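If you need behavior beyond the built-in retries, a manual backoff loop is straightforward. A sketch, assuming `ChatRequest` implements `Clone` (not confirmed here):
use std::time::Duration;
use ferrous_llm_core::ErrorKind;

// Sketch: retry rate-limited requests with exponential backoff.
// In practice, also cap the number of attempts.
let mut delay = Duration::from_millis(500);
let response = loop {
    match provider.chat(request.clone()).await {
        Ok(resp) => break resp,
        Err(e) if matches!(e.kind(), ErrorKind::RateLimited) => {
            tokio::time::sleep(delay).await;
            delay *= 2; // double the wait after each rate-limit error
        }
        Err(e) => return Err(e.into()), // give up on other errors
    }
};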
This crate is part of the ferrous-llm workspace. See the main repository for contribution guidelines.
Licensed under the Apache License 2.0. See LICENSE for details.