| Crates.io | llm-kit-deepseek |
| lib.rs | llm-kit-deepseek |
| version | 0.1.0 |
| created_at | 2026-01-18 20:30:35.868945+00 |
| updated_at | 2026-01-18 20:30:35.868945+00 |
| description | DeepSeek provider implementation for the LLM Kit - supports chat and reasoning models |
| homepage | |
| repository | https://github.com/saribmah/llm-kit |
| max_upload_size | |
| id | 2053036 |
| size | 112,257 |
DeepSeek provider for LLM Kit - Complete integration with DeepSeek's chat and reasoning models.
Note: This provider uses the standardized builder pattern. See the Quick Start section for the recommended usage.
Add this to your Cargo.toml:
```toml
[dependencies]
llm-kit-deepseek = "0.1"
llm-kit-core = "0.1"
llm-kit-provider = "0.1"
tokio = { version = "1", features = ["full"] }
```
```rust
use llm_kit_deepseek::DeepSeekClient;
use llm_kit_core::{GenerateText, Prompt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create provider using the client builder
    let provider = DeepSeekClient::new()
        .api_key("your-api-key") // Or use DEEPSEEK_API_KEY env var
        .build();

    // Create a language model
    let model = provider.chat_model("deepseek-chat");

    // Generate text
    let result = GenerateText::new(std::sync::Arc::new(model), Prompt::text("Hello, DeepSeek!"))
        .temperature(0.7)
        .max_output_tokens(100)
        .execute()
        .await?;

    println!("{}", result.text);
    Ok(())
}
```
Alternatively, construct the provider directly from settings:

```rust
use llm_kit_deepseek::{DeepSeekProvider, DeepSeekProviderSettings};
use llm_kit_core::{GenerateText, Prompt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create provider with settings
    let provider = DeepSeekProvider::new(DeepSeekProviderSettings::default());
    let model = provider.chat_model("deepseek-chat");

    let result = GenerateText::new(std::sync::Arc::new(model), Prompt::text("Hello, DeepSeek!"))
        .execute()
        .await?;

    println!("{}", result.text);
    Ok(())
}
```
Set your DeepSeek API key as an environment variable:
```bash
export DEEPSEEK_API_KEY=your-api-key
export DEEPSEEK_BASE_URL=https://api.deepseek.com/v1  # Optional
```
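If you would rather not rely on the environment for the base URL, you can read it yourself and pass it to the builder. The snippet below is only a sketch built from the builder methods shown in the next example; the provider may already pick up `DEEPSEEK_BASE_URL` on its own.

```rust
use llm_kit_deepseek::DeepSeekClient;

// Sketch: forward DEEPSEEK_BASE_URL to the builder explicitly.
// The provider may already read this variable itself; adjust as needed.
let builder = DeepSeekClient::new().load_api_key_from_env();
let builder = match std::env::var("DEEPSEEK_BASE_URL") {
    Ok(url) => builder.base_url(url.as_str()),
    Err(_) => builder,
};
let provider = builder.build();
```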
You can also configure everything directly on the client builder:

```rust
use llm_kit_deepseek::DeepSeekClient;

let provider = DeepSeekClient::new()
    .api_key("your-api-key")
    .base_url("https://api.deepseek.com/v1")
    .header("Custom-Header", "value")
    .build();
```
Or build the equivalent settings and pass them to `DeepSeekProvider` directly:

```rust
use llm_kit_deepseek::{DeepSeekProvider, DeepSeekProviderSettings};

let settings = DeepSeekProviderSettings::new()
    .with_api_key("your-api-key")
    .with_base_url("https://api.deepseek.com/v1")
    .add_header("Custom-Header", "value");

let provider = DeepSeekProvider::new(settings);
```
To pick up the API key from the environment instead of hard-coding it:

```rust
use llm_kit_deepseek::DeepSeekClient;

// Reads from the DEEPSEEK_API_KEY environment variable
let provider = DeepSeekClient::new()
    .load_api_key_from_env()
    .build();
```
The DeepSeekClient builder supports:
- `.api_key(key)` - Set the API key
- `.base_url(url)` - Set a custom base URL
- `.header(key, value)` - Add a single custom header
- `.headers(map)` - Add multiple custom headers (see the sketch below)
- `.load_api_key_from_env()` - Load the API key from the `DEEPSEEK_API_KEY` environment variable
- `.build()` - Build the provider
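As one illustration of combining these, here is a rough sketch that attaches several headers at once. The exact argument type accepted by `.headers(map)` is an assumption (a `HashMap<String, String>` is used here), and the header names are made up for the example:

```rust
use std::collections::HashMap;
use llm_kit_deepseek::DeepSeekClient;

// Assumption: .headers() accepts a map of header-name -> value pairs.
let mut extra_headers = HashMap::new();
extra_headers.insert("X-Request-Source".to_string(), "my-service".to_string());
extra_headers.insert("X-Environment".to_string(), "staging".to_string());

let provider = DeepSeekClient::new()
    .load_api_key_from_env()
    .headers(extra_headers)
    .build();
```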
DeepSeek's reasoner model (R1) provides advanced reasoning capabilities for complex problem-solving:

```rust
use llm_kit_deepseek::DeepSeekClient;
use llm_kit_core::{GenerateText, Prompt};

let provider = DeepSeekClient::new()
    .load_api_key_from_env()
    .build();

let model = provider.chat_model("deepseek-reasoner");

let result = GenerateText::new(std::sync::Arc::new(model),
    Prompt::text("Solve this complex logic puzzle: ..."))
    .execute()
    .await?;

// Access reasoning and answer separately
for output in result.experimental_output.iter() {
    if let llm_kit_provider::language_model::Output::Reasoning(reasoning) = output {
        println!("Reasoning: {}", reasoning.text);
    }
}

println!("Answer: {}", result.text);
```
Stream responses for real-time output:
```rust
use llm_kit_deepseek::DeepSeekClient;
use llm_kit_core::{StreamText, Prompt};
use futures_util::StreamExt;

let provider = DeepSeekClient::new()
    .load_api_key_from_env()
    .build();

let model = provider.chat_model("deepseek-chat");

let result = StreamText::new(std::sync::Arc::new(model),
    Prompt::text("Write a story"))
    .temperature(0.8)
    .execute()
    .await?;

let mut text_stream = result.text_stream();
while let Some(text_delta) = text_stream.next().await {
    print!("{}", text_delta);
}
```
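Note that `print!` does not flush stdout on its own, so deltas may appear in bursts rather than in real time. A small variation of the loop above (using only the `text_stream` API already shown) flushes after each chunk and keeps the full response around:

```rust
use std::io::Write;

// Same stream as above, but flush each chunk and keep the complete text.
let mut text_stream = result.text_stream();
let mut full_text = String::new();
while let Some(text_delta) = text_stream.next().await {
    print!("{}", text_delta);                     // show the chunk immediately
    std::io::stdout().flush()?;                   // make sure it is visible right away
    full_text.push_str(&text_delta.to_string());  // keep a copy of the whole response
}
println!("\nStreamed {} characters in total", full_text.len());
```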
All DeepSeek models are supported, including `deepseek-chat` and `deepseek-reasoner`. For a complete list of available models, see the DeepSeek documentation.
DeepSeek provides detailed information about prompt cache hits and misses to help optimize performance:
```rust
use llm_kit_deepseek::DeepSeekClient;
use llm_kit_core::{GenerateText, Prompt};

let provider = DeepSeekClient::new().build();
let model = provider.chat_model("deepseek-chat");

let result = GenerateText::new(std::sync::Arc::new(model), Prompt::text("Hello!"))
    .execute()
    .await?;

// Access cache metadata
if let Some(metadata) = result.provider_metadata {
    if let Some(deepseek) = metadata.get("deepseek") {
        println!("Cache hit tokens: {:?}",
            deepseek.get("promptCacheHitTokens"));
        println!("Cache miss tokens: {:?}",
            deepseek.get("promptCacheMissTokens"));
    }
}
```
This helps you understand cache efficiency and optimize your prompts for better performance and cost savings.
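For a quick sense of cache efficiency you can turn the two counters into a hit rate. The extraction below is a sketch that assumes the metadata values behave like `serde_json::Value` numbers; adjust it to whatever type `provider_metadata` actually exposes:

```rust
// Sketch: derive a prompt cache hit rate from the two counters.
// Assumes JSON-number-like values (e.g. serde_json::Value) in the metadata.
if let Some(metadata) = result.provider_metadata {
    if let Some(deepseek) = metadata.get("deepseek") {
        let hits = deepseek
            .get("promptCacheHitTokens")
            .and_then(|v| v.as_u64())
            .unwrap_or(0);
        let misses = deepseek
            .get("promptCacheMissTokens")
            .and_then(|v| v.as_u64())
            .unwrap_or(0);
        if hits + misses > 0 {
            let rate = hits as f64 / (hits + misses) as f64 * 100.0;
            println!("Prompt cache hit rate: {rate:.1}%");
        }
    }
}
```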
See the examples/ directory for complete examples:
- `chat.rs` - Basic chat completion with DeepSeek
- `stream.rs` - Streaming responses
- `chat_tool_calling.rs` - Tool calling with custom tools
- `stream_tool_calling.rs` - Streaming with tool calls
- `reasoning.rs` - Using the deepseek-reasoner model for complex reasoning

Run examples with:
```bash
cargo run --example chat
cargo run --example stream
cargo run --example reasoning
cargo run --example chat_tool_calling
```
Make sure to set your DEEPSEEK_API_KEY environment variable first.
Licensed under:
Contributions are welcome! Please see the Contributing Guide for more details.