| Crates.io | llm-kit-azure |
| lib.rs | llm-kit-azure |
| version | 0.1.0 |
| created_at | 2026-01-18 20:01:04.930337+00 |
| updated_at | 2026-01-18 20:01:04.930337+00 |
| description | Azure OpenAI provider for LLM Kit |
| homepage | https://github.com/saribmah/llm-kit |
| repository | https://github.com/saribmah/llm-kit |
| max_upload_size | |
| id | 2052983 |
| size | 160,191 |
Azure OpenAI provider for LLM Kit - Complete integration with Azure OpenAI Service for chat, completions, embeddings, and image generation.
Note: This provider uses the standardized builder pattern. See the Quick Start section for the recommended usage.

Authentication uses the `api-key` header expected by Azure OpenAI.

## Installation

Add this to your Cargo.toml:
```toml
[dependencies]
llm-kit-azure = "0.1"
llm-kit-core = "0.1"
llm-kit-provider = "0.1"
tokio = { version = "1", features = ["full"] }
```
## Quick Start

```rust
use llm_kit_azure::AzureClient;
use llm_kit_provider::LanguageModel;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create the provider using the client builder
    let provider = AzureClient::new()
        .resource_name("my-azure-resource")
        .api_key("your-api-key") // Or use the AZURE_API_KEY env var
        .build();

    // Get a chat model using your deployment name
    let model = provider.chat_model("gpt-4-deployment");
    println!("Model: {}", model.model_id());
    println!("Provider: {}", model.provider());

    Ok(())
}
```
### Using Provider Settings

```rust
use llm_kit_azure::{AzureOpenAIProvider, AzureOpenAIProviderSettings};
use llm_kit_provider::LanguageModel;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create the provider with explicit settings
    let provider = AzureOpenAIProvider::new(
        AzureOpenAIProviderSettings::new()
            .with_resource_name("my-azure-resource")
            .with_api_key("your-api-key"),
    );

    let model = provider.chat_model("gpt-4-deployment");
    println!("Model: {}", model.model_id());

    Ok(())
}
```
## Environment Variables

Set your Azure OpenAI credentials as environment variables:

```sh
export AZURE_API_KEY=your-api-key
export AZURE_RESOURCE_NAME=my-azure-resource  # Or use AZURE_BASE_URL
```
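The usual pattern is that an explicitly configured key takes precedence over the environment variable. A minimal sketch of that fallback logic (an illustrative helper using only the standard library, not part of the crate's API):

```rust
use std::env;

/// Resolve the API key: prefer an explicit value, fall back to AZURE_API_KEY.
/// (Illustrative helper, not part of llm-kit-azure itself.)
fn resolve_api_key(explicit: Option<&str>) -> Option<String> {
    explicit
        .map(str::to_owned)
        .or_else(|| env::var("AZURE_API_KEY").ok())
}

fn main() {
    // An explicitly supplied key always wins over the environment.
    assert_eq!(
        resolve_api_key(Some("explicit-key")).as_deref(),
        Some("explicit-key")
    );
    // With no explicit key, AZURE_API_KEY (if set) is used.
    println!("from env: {:?}", resolve_api_key(None));
}
```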
## Provider Configuration

```rust
use llm_kit_azure::AzureClient;

// With a resource name (most common)
let provider = AzureClient::new()
    .resource_name("my-azure-resource")
    .api_key("your-api-key")
    .api_version("2024-02-15-preview")
    .build();

// With a custom base URL
let provider = AzureClient::new()
    .base_url("https://my-resource.openai.azure.com/openai")
    .api_key("your-api-key")
    .build();

// With custom headers
let provider = AzureClient::new()
    .resource_name("my-resource")
    .api_key("your-api-key")
    .header("X-Custom-Header", "value")
    .build();

// With deployment-based URLs (legacy format)
let provider = AzureClient::new()
    .resource_name("my-resource")
    .api_key("your-api-key")
    .use_deployment_based_urls(true)
    .build();
```
```rust
use llm_kit_azure::{AzureOpenAIProvider, AzureOpenAIProviderSettings};

let settings = AzureOpenAIProviderSettings::new()
    .with_resource_name("my-azure-resource")
    .with_api_key("your-api-key")
    .with_api_version("2024-02-15-preview")
    .with_header("X-Custom-Header", "value");

let provider = AzureOpenAIProvider::new(settings);
```
The AzureClient builder supports:

- `.resource_name(name)` - Set the Azure OpenAI resource name
- `.base_url(url)` - Set a custom base URL
- `.api_key(key)` - Set the API key
- `.api_version(version)` - Set the API version (default: `"v1"`)
- `.header(key, value)` - Add a single custom header
- `.headers(map)` - Add multiple custom headers
- `.use_deployment_based_urls(bool)` - Use the legacy deployment-based URL format
- `.build()` - Build the provider

## URL Formats

Azure OpenAI supports two URL formats.

Resource-based (recommended):

```text
https://{resource}.openai.azure.com/openai/v1{path}?api-version={version}
```

Deployment-based (legacy):

```text
https://{resource}.openai.azure.com/openai/deployments/{deployment}{path}?api-version={version}
```

Enable the legacy format with `.use_deployment_based_urls(true)`.
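To make the difference concrete, here is a sketch of how the two URL shapes are assembled from their parts. These helpers are purely illustrative (they are not part of the crate's API); the path and version values are placeholders:

```rust
/// Resource-based URL (recommended format). Illustrative only.
fn resource_url(resource: &str, path: &str, version: &str) -> String {
    format!("https://{resource}.openai.azure.com/openai/v1{path}?api-version={version}")
}

/// Deployment-based URL (legacy format). Illustrative only.
fn deployment_url(resource: &str, deployment: &str, path: &str, version: &str) -> String {
    format!(
        "https://{resource}.openai.azure.com/openai/deployments/{deployment}{path}?api-version={version}"
    )
}

fn main() {
    assert_eq!(
        resource_url("my-resource", "/chat/completions", "v1"),
        "https://my-resource.openai.azure.com/openai/v1/chat/completions?api-version=v1"
    );
    assert_eq!(
        deployment_url("my-resource", "gpt-4-deployment", "/chat/completions", "2024-02-15-preview"),
        "https://my-resource.openai.azure.com/openai/deployments/gpt-4-deployment/chat/completions?api-version=2024-02-15-preview"
    );
}
```

Note that in the legacy format the deployment name becomes part of the path, which is why each model call is tied to a specific deployment.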
## API Versions

Azure OpenAI supports multiple API versions. Common versions include:

- `"v1"` - Default, recommended version
- `"2023-05-15"` - Stable release
- `"2024-02-15-preview"` - Preview features
- `"2024-08-01-preview"` - Latest preview

Set the API version using the builder:

```rust
let provider = AzureClient::new()
    .resource_name("my-resource")
    .api_key("your-api-key")
    .api_version("2024-02-15-preview")
    .build();
```
## Supported Models

To use this provider, you need an Azure OpenAI resource and an API key. Azure OpenAI exposes models through deployments: you must first deploy models in your Azure OpenAI resource before using them.
GPT-4 and GPT-3.5 models for conversational AI:

- `gpt-4` - Most capable model
- `gpt-4-turbo` - Faster GPT-4 variant
- `gpt-35-turbo` - Fast and efficient

Text embedding models:

- `text-embedding-ada-002` - Standard embedding model
- `text-embedding-3-small` - Smaller embedding model
- `text-embedding-3-large` - Larger embedding model

Image generation models:

- `dall-e-3` - Latest DALL-E model
- `dall-e-2` - Previous generation

Text completion models:

- `gpt-35-turbo-instruct` - Instruction-tuned completion model

Note: Model availability depends on your Azure OpenAI resource region and deployments. Use your deployment name (not the base model name) when creating models.
## Chat Completion

Use `.chat_model()` or `.model()` for conversational AI:

```rust
use llm_kit_azure::AzureClient;
use llm_kit_core::{GenerateText, Prompt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let provider = AzureClient::new()
        .resource_name("my-resource")
        .api_key("your-api-key")
        .build();

    // Use your deployment name, not the base model name
    let model = provider.chat_model("gpt-4-deployment");

    let result = GenerateText::new(model, Prompt::text("Hello!"))
        .execute()
        .await?;

    Ok(())
}
```
## Text Completion

Use `.completion_model()` for text completion:

```rust
let model = provider.completion_model("gpt-35-turbo-instruct");
```

## Embeddings

Use `.text_embedding_model()` for embeddings:

```rust
let model = provider.text_embedding_model("text-embedding-ada-002");
```

## Image Generation

Use `.image_model()` for image generation:

```rust
let model = provider.image_model("dall-e-3");
```
## Streaming

Stream responses for real-time output. See examples/stream.rs for a complete example.

## Tool Calling

Azure OpenAI supports function calling for tool integration. See examples/chat_tool_calling.rs for a complete example.
## Examples

See the examples/ directory for complete examples:

- chat.rs - Basic chat completion
- stream.rs - Streaming responses
- chat_tool_calling.rs - Tool calling with chat models
- stream_tool_calling.rs - Streaming with tool calling
- text_embedding.rs - Text embeddings
- image_generation.rs - Image generation with DALL-E

Run examples with:

```sh
cargo run --example chat
cargo run --example stream
cargo run --example chat_tool_calling
cargo run --example text_embedding
cargo run --example image_generation
```
## License

Licensed under the Apache License, Version 2.0 (LICENSE).

## Contributing

Contributions are welcome! Please see the Contributing Guide for more details.