| Crates.io | llm-kit-anthropic |
| lib.rs | llm-kit-anthropic |
| version | 0.1.0 |
| created_at | 2026-01-18 19:26:10.209239+00 |
| updated_at | 2026-01-18 19:26:10.209239+00 |
| description | Anthropic provider for LLM Kit - Complete Claude integration with streaming, tools, thinking, and citations |
| homepage | |
| repository | https://github.com/saribmah/llm-kit |
| max_upload_size | |
| id | 2052917 |
| size | 1,177,601 |
Anthropic provider for LLM Kit - Complete Claude integration with streaming, tools, extended thinking, and citations.
Note: This provider uses the standardized builder pattern. See the Quick Start section for the recommended usage.
Add this to your Cargo.toml:
[dependencies]
llm-kit-anthropic = "0.1"
llm-kit-core = "0.1"
llm-kit-provider = "0.1"
tokio = { version = "1", features = ["full"] }
Quick Start — create a provider with the client builder and generate text:
use llm_kit_anthropic::AnthropicClient;
use llm_kit_core::{GenerateText, Prompt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create provider using the client builder
    let provider = AnthropicClient::new()
        .api_key("your-api-key") // Or use ANTHROPIC_API_KEY env var
        .build();

    // Create a language model
    let model = provider.language_model("claude-3-5-sonnet-20241022".to_string());

    // Generate text
    let result = GenerateText::new(std::sync::Arc::new(model), Prompt::text("Hello, Claude!"))
        .temperature(0.7)
        .max_output_tokens(100)
        .execute()
        .await?;

    println!("{}", result.text);
    Ok(())
}
Alternatively, construct the provider directly from settings:
use llm_kit_anthropic::{AnthropicProvider, AnthropicProviderSettings};
use llm_kit_core::{GenerateText, Prompt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create provider with settings
    let provider = AnthropicProvider::new(AnthropicProviderSettings::default());
    let model = provider.language_model("claude-3-5-sonnet-20241022".to_string());

    let result = GenerateText::new(std::sync::Arc::new(model), Prompt::text("Hello, Claude!"))
        .execute()
        .await?;

    println!("{}", result.text);
    Ok(())
}
Set your Anthropic API key as an environment variable:
export ANTHROPIC_API_KEY=your-api-key
export ANTHROPIC_BASE_URL=https://api.anthropic.com/v1 # Optional
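If the key is set in the environment, the builder needs no explicit api_key call — a minimal sketch, relying on the ANTHROPIC_API_KEY fallback noted in the Quick Start:

use llm_kit_anthropic::AnthropicClient;

// No .api_key(...) here: the provider picks up ANTHROPIC_API_KEY from the environment
let provider = AnthropicClient::new().build();
let model = provider.language_model("claude-3-5-sonnet-20241022".to_string());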
For programmatic configuration, the client builder supports a custom base URL, headers, and provider name:
use llm_kit_anthropic::AnthropicClient;

let provider = AnthropicClient::new()
    .api_key("your-api-key")
    .base_url("https://api.anthropic.com/v1")
    .header("Custom-Header", "value")
    .name("my-anthropic-provider")
    .build();
The same options are available through AnthropicProviderSettings:
use llm_kit_anthropic::{AnthropicProvider, AnthropicProviderSettings};

let settings = AnthropicProviderSettings::new()
    .with_api_key("your-api-key")
    .with_base_url("https://api.anthropic.com/v1")
    .add_header("Custom-Header", "value")
    .with_name("my-anthropic-provider");
let provider = AnthropicProvider::new(settings);
The AnthropicClient builder supports:

- .api_key(key) - Set the API key
- .base_url(url) - Set custom base URL
- .name(name) - Set provider name
- .header(key, value) - Add a single custom header
- .headers(map) - Add multiple custom headers (see the sketch below)
- .build() - Build the provider
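To set several headers at once, .headers takes a map — a minimal sketch, assuming it accepts a HashMap<String, String> (the exact map type is not confirmed here):

use llm_kit_anthropic::AnthropicClient;
use std::collections::HashMap;

let mut headers = HashMap::new();
// Hypothetical header names, purely for illustration
headers.insert("X-Team".to_string(), "research".to_string());
headers.insert("X-Request-Source".to_string(), "batch-job".to_string());

let provider = AnthropicClient::new()
    .api_key("your-api-key")
    .headers(headers)
    .build();

Anthropic provides several powerful provider-defined tools: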
use llm_kit_anthropic::anthropic_tools;
use llm_kit_core::ToolSet;

let tools = ToolSet::from_vec(vec![
    // Execute bash commands
    anthropic_tools::bash_20250124(None),
    // Search the web with citations
    anthropic_tools::web_search_20250305()
        .max_uses(5)
        .build(),
    // Fetch web content
    anthropic_tools::web_fetch_20250910()
        .citations(true)
        .build(),
    // Execute Python code
    anthropic_tools::code_execution_20250825(None),
    // Computer use (screenshots + mouse/keyboard)
    anthropic_tools::computer_20250124(1920, 1080, None),
    // Text editor
    anthropic_tools::text_editor_20250728()
        .max_characters(10000)
        .build(),
    // Persistent memory
    anthropic_tools::memory_20250818(None),
]);
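A hypothetical sketch of passing the tool set into a generation call — the .tools(...) builder method is assumed here, not confirmed; see chat_tool_calling.rs and provider_specific_defined_tools.rs under examples/ for the actual wiring:

use llm_kit_core::{GenerateText, Prompt};

// Hypothetical: assumes GenerateText exposes a .tools(...) builder method
let result = GenerateText::new(std::sync::Arc::new(model), Prompt::text("Search for recent Rust news"))
    .tools(tools)
    .execute()
    .await?;
println!("{}", result.text);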
Enable Claude's extended reasoning process:
use llm_kit_anthropic::AnthropicClient;
use llm_kit_core::{GenerateText, Prompt};
let provider = AnthropicClient::new().build();
let model = provider.language_model("claude-3-7-sonnet-20250219".to_string());
let result = GenerateText::new(
    std::sync::Arc::new(model),
    Prompt::text("Solve this complex problem"),
)
.thinking_enabled(true)
.thinking_budget(10000) // Optional token budget
.execute()
.await?;

// Access reasoning
for output in result.experimental_output.iter() {
    if let llm_kit_provider::language_model::Output::Reasoning(reasoning) = output {
        println!("Reasoning: {}", reasoning.text);
    }
}
Stream responses for real-time output:
use llm_kit_anthropic::AnthropicClient;
use llm_kit_core::{StreamText, Prompt};
use futures_util::StreamExt;
let provider = AnthropicClient::new().build();
let model = provider.language_model("claude-3-5-sonnet-20241022".to_string());
let result = StreamText::new(
    std::sync::Arc::new(model),
    Prompt::text("Write a story"),
)
.temperature(0.8)
.execute()
.await?;

let mut text_stream = result.text_stream();
while let Some(text_delta) = text_stream.next().await {
    print!("{}", text_delta);
}
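Note that print! does not flush stdout, so deltas can appear in bursts; flushing after each chunk keeps the output truly real-time — a minimal variant of the loop above:

use std::io::Write;

while let Some(text_delta) = text_stream.next().await {
    print!("{}", text_delta);
    // Flush so each delta shows up immediately instead of sitting in the stdout buffer
    std::io::stdout().flush()?;
}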
All Claude models are supported, including:

- claude-3-5-sonnet-20241022 - Most intelligent model
- claude-3-7-sonnet-20250219 - Latest model with enhanced extended thinking capabilities
- claude-3-opus-20240229 - Powerful model for complex tasks
- claude-3-sonnet-20240229 - Balanced performance and speed
- claude-3-haiku-20240307 - Fastest model for simple tasks

For a complete list of available models, see the Anthropic documentation.
Anthropic-specific options can be passed through provider_options:
use llm_kit_anthropic::language_model::{ProviderChatLanguageModelOptions, provider_chat_options::*};
use llm_kit_core::{GenerateText, Prompt};
let options = ProviderChatLanguageModelOptions {
    thinking: Some(Thinking {
        type_: ThinkingType::Enabled,
        budget_tokens: Some(10000),
    }),
    citations: Some(Citations {
        type_: CitationsType::Enabled,
    }),
    ..Default::default()
};

let result = GenerateText::new(model, prompt)
    .provider_options(options)
    .execute()
    .await?;
thinking - Control extended thinking behavior:

- ThinkingType::Enabled - Enable extended thinking
- ThinkingType::Disabled - Disable extended thinking
- budget_tokens - Optional token limit for thinking

citations - Control citation generation:

- CitationsType::Enabled - Enable citations
- CitationsType::Disabled - Disable citations

See the examples/ directory for complete examples:
- chat.rs - Basic chat completion with Claude
- stream.rs - Streaming responses
- chat_tool_calling.rs - Tool calling with custom tools
- stream_tool_calling.rs - Streaming with tool calls
- provider_specific_bash_tool.rs - Using Anthropic's bash tool
- provider_specific_defined_tools.rs - Using all provider-defined tools

Run examples with:
cargo run --example chat
cargo run --example stream
cargo run --example chat_tool_calling
cargo run --example provider_specific_defined_tools
Licensed under:
Contributions are welcome! Please see the Contributing Guide for more details.