| Crates.io | llm-kit-openai |
| lib.rs | llm-kit-openai |
| version | 0.1.0 |
| created_at | 2026-01-18 18:59:17.0304+00 |
| updated_at | 2026-01-18 18:59:17.0304+00 |
| description | OpenAI provider implementation for the LLM Kit |
| homepage | |
| repository | https://github.com/saribmah/llm-kit |
| max_upload_size | |
| id | 2052875 |
| size | 188,373 |
OpenAI provider for LLM Kit - complete integration with OpenAI's chat completion API.
Note: This provider uses the standardized builder pattern. See the Quick Start section for the recommended usage.
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
llm-kit-openai = "0.1"
llm-kit-core = "0.1"
llm-kit-provider = "0.1"
tokio = { version = "1", features = ["full"] }
```
## Quick Start

```rust
use llm_kit_openai::OpenAIClient;
use llm_kit_provider::language_model::LanguageModel;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create provider using the client builder
    let provider = OpenAIClient::new()
        .api_key("your-api-key") // Or use OPENAI_API_KEY env var
        .build();

    // Create a language model
    let model = provider.chat("gpt-4o");

    println!("Model: {}", model.model_id());
    println!("Provider: {}", model.provider());

    Ok(())
}
```
Alternatively, construct the provider directly from settings:

```rust
use llm_kit_openai::{OpenAIProvider, OpenAIProviderSettings};
use llm_kit_provider::language_model::LanguageModel;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create provider with settings
    let provider = OpenAIProvider::new(OpenAIProviderSettings::default());

    let model = provider.chat("gpt-4o");
    println!("Model: {}", model.model_id());

    Ok(())
}
```
## Configuration

Set your OpenAI API key as an environment variable:

```bash
export OPENAI_API_KEY=your-api-key
export OPENAI_BASE_URL=https://api.openai.com/v1 # Optional
```
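With these set, the provider can be built without an explicit key. A minimal sketch, assuming the builder falls back to `OPENAI_API_KEY` (and the optional `OPENAI_BASE_URL`) when no explicit values are given, as the comment in the Quick Start indicates:

```rust
use llm_kit_openai::OpenAIClient;

// Sketch: no explicit .api_key() call; the builder is assumed to read
// OPENAI_API_KEY (and OPENAI_BASE_URL, if set) from the environment.
let provider = OpenAIClient::new().build();
let model = provider.chat("gpt-4o");
```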
Configure the provider with the `OpenAIClient` builder:

```rust
use llm_kit_openai::OpenAIClient;

let provider = OpenAIClient::new()
    .api_key("your-api-key")
    .base_url("https://api.openai.com/v1")
    .organization("org-123")
    .project("proj-456")
    .header("Custom-Header", "value")
    .name("my-openai-provider")
    .build();
```
Or configure it via `OpenAIProviderSettings`:

```rust
use llm_kit_openai::{OpenAIProvider, OpenAIProviderSettings};

let settings = OpenAIProviderSettings::new()
    .with_api_key("your-api-key")
    .with_base_url("https://api.openai.com/v1")
    .with_organization("org-123")
    .with_project("proj-456")
    .add_header("Custom-Header", "value")
    .with_name("my-openai-provider");

let provider = OpenAIProvider::new(settings);
```
The `OpenAIClient` builder supports:

- `.api_key(key)` - Set the API key
- `.base_url(url)` - Set a custom base URL
- `.organization(org)` - Set the OpenAI organization ID
- `.project(project)` - Set the OpenAI project ID
- `.name(name)` - Set the provider name
- `.header(key, value)` - Add a single custom header
- `.headers(map)` - Add multiple custom headers (see the sketch below)
- `.build()` - Build the provider
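A short sketch of `.headers(map)`; the concrete map type here (`HashMap<String, String>`) is an assumption, so check the crate docs for the exact signature:

```rust
use std::collections::HashMap;
use llm_kit_openai::OpenAIClient;

// Sketch: batch-adding custom headers. The HashMap<String, String>
// parameter type is assumed, not confirmed by the crate docs.
let mut extra_headers = HashMap::new();
extra_headers.insert("X-Request-Source".to_string(), "docs".to_string());

let provider = OpenAIClient::new()
    .api_key("your-api-key")
    .headers(extra_headers)
    .build();
```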
## Supported Models

All OpenAI chat models are supported, including:

- `gpt-4` - Most capable GPT-4 model
- `gpt-4-turbo` - Faster GPT-4 variant
- `gpt-4o` - Optimized GPT-4 model
- `gpt-4o-mini` - Smaller, faster GPT-4o variant
- `gpt-3.5-turbo` - Fast and efficient model
- `o1` - Latest reasoning model
- `o1-preview` - Preview version of o1
- `o1-mini` - Smaller o1 variant
- `o3-mini` - Next-generation reasoning model

For a complete list of available models, see the OpenAI Models documentation.
## Reasoning Models

OpenAI reasoning models (o1, o1-preview, o1-mini, o3-mini) receive special handling; in particular, requests use `max_completion_tokens` instead of `max_tokens`. These adjustments happen automatically when you use a reasoning model, so you don't need to make any code changes. The call site is identical either way, as the sketch below shows.
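A minimal sketch, using the `provider.chat` calls shown earlier:

```rust
use llm_kit_openai::OpenAIClient;

let provider = OpenAIClient::new().build();

// Identical call sites: the max_tokens -> max_completion_tokens mapping
// described above is applied inside the provider for reasoning models.
let reasoning_model = provider.chat("o1-mini");
let standard_model = provider.chat("gpt-4o");
```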
## Provider Options

OpenAI supports additional options beyond the standard LLM Kit parameters.

Control the computational effort for reasoning models:

```rust
use llm_kit_openai::chat::{OpenAIChatLanguageModelOptions, openai_chat_options::ReasoningEffort};

let options = OpenAIChatLanguageModelOptions {
    reasoning_effort: Some(ReasoningEffort::High),
    ..Default::default()
};
```
Available values:

- `ReasoningEffort::Low` - Faster, less thorough reasoning
- `ReasoningEffort::Medium` - Balanced reasoning
- `ReasoningEffort::High` - More thorough, slower reasoning
Request log probabilities for generated tokens:

```rust
use llm_kit_openai::chat::{OpenAIChatLanguageModelOptions, openai_chat_options::LogprobsOption};

let options = OpenAIChatLanguageModelOptions {
    logprobs: Some(LogprobsOption::Number(5)), // Top 5 token probabilities
    ..Default::default()
};
```
Select the service tier for processing:

```rust
use llm_kit_openai::chat::{OpenAIChatLanguageModelOptions, openai_chat_options::ServiceTier};

let options = OpenAIChatLanguageModelOptions {
    service_tier: Some(ServiceTier::Auto),
    ..Default::default()
};
```
Available values:

- `ServiceTier::Auto` - Automatic tier selection
- `ServiceTier::Default` - Standard processing tier
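All three option snippets above fill the same `OpenAIChatLanguageModelOptions` struct, so they compose; a sketch combining them:

```rust
use llm_kit_openai::chat::{
    OpenAIChatLanguageModelOptions,
    openai_chat_options::{LogprobsOption, ReasoningEffort, ServiceTier},
};

// All provider options live on one struct, so they can be set together.
let options = OpenAIChatLanguageModelOptions {
    reasoning_effort: Some(ReasoningEffort::High),
    logprobs: Some(LogprobsOption::Number(5)),
    service_tier: Some(ServiceTier::Auto),
    ..Default::default()
};
```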
Configure organization and project IDs:

```rust
let provider = OpenAIClient::new()
    .api_key("your-api-key")
    .organization("org-123")
    .project("proj-456")
    .build();
```
## Examples

For a basic chat completion, see `examples/chat.rs`. For streaming responses, see `examples/stream.rs`.

OpenAI supports function calling for tool integration; see `examples/chat_tool_calling.rs` for a complete example.
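For orientation only, a sketch of the raw function-tool JSON that the OpenAI API ultimately receives; this uses plain `serde_json` and is not the crate's tool API (see `examples/chat_tool_calling.rs` for that):

```rust
use serde_json::json;

// Illustrative: the OpenAI function-tool definition that a tool
// integration ultimately sends on the wire. See
// examples/chat_tool_calling.rs for how llm-kit-openai expresses this.
let weather_tool = json!({
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": { "type": "string", "description": "The city name" }
            },
            "required": ["city"]
        }
    }
});
```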
See the `examples/` directory for complete examples:

- `chat.rs` - Basic chat completion
- `stream.rs` - Streaming responses
- `chat_tool_calling.rs` - Tool calling with chat models
- `stream_tool_calling.rs` - Streaming with tool calling

Run the examples with:
```bash
cargo run --example chat
cargo run --example stream
cargo run --example chat_tool_calling
cargo run --example stream_tool_calling
```
## License

Licensed under:

## Contributing

Contributions are welcome! Please see the Contributing Guide for more details.