| Field | Value |
|---|---|
| Crates.io | llm-kit-xai |
| lib.rs | llm-kit-xai |
| version | 0.1.0 |
| created_at | 2026-01-18 20:43:18.567046+00 |
| updated_at | 2026-01-18 20:43:18.567046+00 |
| description | xAI (Grok) provider implementation for the LLM Kit - supports chat, image generation, and agentic tools |
| homepage | |
| repository | https://github.com/saribmah/llm-kit |
| max_upload_size | |
| id | 2053058 |
| size | 195,369 |
# llm-kit-xai

xAI (Grok) provider for LLM Kit - complete integration with xAI's Grok models, featuring reasoning capabilities, integrated search, and image generation.

Note: This provider uses the standardized builder pattern. See the Quick Start section below for the recommended usage.
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
llm-kit-xai = "0.1"
llm-kit-core = "0.1"
llm-kit-provider = "0.1"
tokio = { version = "1", features = ["full"] }
```
## Quick Start

```rust
use llm_kit_xai::XaiClient;
use llm_kit_provider::{Provider, LanguageModel};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a provider using the client builder
    let provider = XaiClient::new()
        .api_key("your-api-key") // Or use the XAI_API_KEY env var
        .build();

    // Create a language model
    let model = provider.chat_model("grok-4");

    println!("Model: {}", model.model_id());
    println!("Provider: {}", model.provider());

    Ok(())
}
```
Alternatively, construct the provider directly from settings:

```rust
use llm_kit_xai::{XaiProvider, XaiProviderSettings};
use llm_kit_provider::{Provider, LanguageModel};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a provider with default settings
    let provider = XaiProvider::new(XaiProviderSettings::default());

    let model = provider.chat_model("grok-4");
    println!("Model: {}", model.model_id());

    Ok(())
}
```
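Once you have a model, text generation goes through `GenerateText` from `llm-kit-core`, as used throughout the Provider Options section below. A minimal sketch; passing the prompt as a plain string and printing content parts via `Debug` are assumptions here, not confirmed API details:

```rust
use llm_kit_core::GenerateText;
use llm_kit_xai::XaiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let provider = XaiClient::new().api_key("your-api-key").build();
    let model = provider.chat_model("grok-4");

    // Assumption: GenerateText::new accepts a plain string prompt.
    let result = GenerateText::new(model, "Summarize Rust's ownership model in two sentences.")
        .execute()
        .await?;

    // Assumption: content parts implement Debug; see the Provider Options
    // section below for matching on specific Output variants instead.
    for content in result.content {
        println!("{:?}", content);
    }

    Ok(())
}
```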
## Configuration

Set your xAI API key as an environment variable:

```bash
export XAI_API_KEY=your-api-key
```
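With the variable set, the builder can be used without an explicit key (per the `// Or use the XAI_API_KEY env var` note in the Quick Start):

```rust
use llm_kit_xai::XaiClient;

// Picks up the API key from the XAI_API_KEY environment variable.
let provider = XaiClient::new().build();
```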
The builder also accepts a custom base URL, custom headers, and a provider name:

```rust
use llm_kit_xai::XaiClient;

let provider = XaiClient::new()
    .api_key("your-api-key")
    .base_url("https://api.x.ai/v1")
    .header("X-Custom-Header", "value")
    .name("my-xai-provider")
    .build();
```
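To set several headers at once, use `.headers(map)` (listed below). A minimal sketch, assuming the method accepts a `HashMap<String, String>`:

```rust
use std::collections::HashMap;
use llm_kit_xai::XaiClient;

// Assumption: .headers(...) accepts a HashMap<String, String>.
let mut headers = HashMap::new();
headers.insert("X-Trace-Id".to_string(), "abc123".to_string());
headers.insert("X-Team".to_string(), "search".to_string());

let provider = XaiClient::new()
    .api_key("your-api-key")
    .headers(headers)
    .build();
```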
The `XaiClient` builder supports:

- `.api_key(key)` - Set the API key
- `.base_url(url)` - Set a custom base URL
- `.name(name)` - Set the provider name
- `.header(key, value)` - Add a single custom header
- `.headers(map)` - Add multiple custom headers
- `.build()` - Build the provider

## Supported Models

### Language Models

- `grok-4` - Latest Grok-4 model with advanced capabilities
- `grok-4-fast-reasoning` - Fast model with reasoning capabilities
- `grok-4-fast-non-reasoning` - Fast model without reasoning
- `grok-code-fast-1` - Optimized for code generation
- `grok-3` - Grok-3 base model
- `grok-3-fast` - Faster Grok-3 variant
- `grok-3-mini` - Smaller, efficient model
- `grok-2-vision-1212` - Vision-capable model
- `grok-2-1212` - Grok-2 model with December 2024 updates
- `grok-beta` - Beta model with latest features

```rust
// Create a chat model
let model = provider.chat_model("grok-4");
```
### Image Models

- `grok-2-image` - Image generation model

```rust
// Create an image model
let model = provider.image_model("grok-2-image");
```
## Provider Options

xAI supports advanced features through provider options passed via the `llm-kit-core` API.

### Reasoning Effort

Control the model's reasoning effort level:
```rust
use llm_kit_core::GenerateText;
use serde_json::json;

let result = GenerateText::new(model, prompt)
    .provider_options(json!({
        "reasoningEffort": "high" // "low", "medium", or "high"
    }))
    .execute()
    .await?;
```
Access reasoning content in the response:
```rust
// Reasoning content is automatically extracted into result.content
for content in result.content {
    if let llm_kit_core::output::Output::Reasoning(reasoning) = content {
        println!("Model reasoning: {}", reasoning.text);
    }
}
```
### Integrated Search

Enable web, X (Twitter), news, or RSS search:
```rust
use llm_kit_core::GenerateText;
use serde_json::json;

let result = GenerateText::new(model, prompt)
    .provider_options(json!({
        "searchParameters": {
            "recencyFilter": "day", // "hour", "day", "week", "month", "year"
            "sources": [
                {"type": "web"},
                {"type": "x"}, // X (Twitter) search
                {"type": "news"},
                {"type": "rss", "url": "https://example.com/feed.xml"}
            ]
        }
    }))
    .execute()
    .await?;
```
Citations are automatically extracted from search results:
```rust
let result = GenerateText::new(model, prompt).execute().await?;

// Citations are available in result.content
for content in result.content {
    if let llm_kit_core::output::Output::Source(source) = content {
        println!("Source: {} - {}", source.title, source.url);
    }
}
```
### Structured Outputs

Force structured JSON output:
```rust
use llm_kit_core::GenerateText;
use llm_kit_provider::language_model::call_options::LanguageModelResponseFormat;
use serde_json::json;

// Simple JSON mode
let result = GenerateText::new(model, prompt)
    .with_response_format(LanguageModelResponseFormat::Json {
        schema: None,
        name: None,
        description: None,
    })
    .execute()
    .await?;

// Structured output with a JSON schema
let schema = json!({
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"}
    },
    "required": ["name", "age"]
});

let result = GenerateText::new(model, prompt)
    .with_response_format(LanguageModelResponseFormat::Json {
        schema: Some(schema),
        name: Some("UserProfile".to_string()),
        description: Some("A user profile".to_string()),
    })
    .execute()
    .await?;
```
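With a schema supplied, the response should parse straight into a matching struct (requires `serde` with the `derive` feature). A sketch; the `Output::Text` variant and its `text` field are assumptions mirroring the `Output::Reasoning` variant shown earlier:

```rust
use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct UserProfile {
    name: String,
    age: u64,
}

// Assumption: the structured JSON arrives as a text content part;
// Output::Text and its .text field mirror Output::Reasoning above.
for content in result.content {
    if let llm_kit_core::output::Output::Text(text) = content {
        let profile: UserProfile = serde_json::from_str(&text.text)?;
        println!("Parsed profile: {:?}", profile);
    }
}
```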
### Parallel Function Calling

Control parallel tool execution:
```rust
use llm_kit_core::GenerateText;
use serde_json::json;

// `tools` stands in for tool definitions built elsewhere with llm-kit
let result = GenerateText::new(model, prompt)
    .tools(tools)
    .provider_options(json!({
        "parallelFunctionCalling": true
    }))
    .execute()
    .await?;
```
| Option | Type | Description |
|---|---|---|
| `reasoningEffort` | string | Reasoning effort level: `"low"`, `"medium"`, `"high"` |
| `searchParameters.recencyFilter` | string | Time filter: `"hour"`, `"day"`, `"week"`, `"month"`, `"year"` |
| `searchParameters.sources` | array | Search sources: `web`, `x`, `news`, `rss` |
| `parallelFunctionCalling` | bool | Enable parallel tool execution |
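These options can plausibly be combined in a single `provider_options` call; a sketch, assuming the provider accepts them together in one JSON object:

```rust
use serde_json::json;

// Assumption: multiple provider options may be combined in one object.
let options = json!({
    "reasoningEffort": "medium",
    "searchParameters": {
        "recencyFilter": "week",
        "sources": [{"type": "web"}, {"type": "news"}]
    },
    "parallelFunctionCalling": true
});

let result = GenerateText::new(model, prompt)
    .provider_options(options)
    .execute()
    .await?;
```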
## Examples

See the `examples/` directory for complete examples:

- `chat.rs` - Basic chat completion using `do_generate()` directly
- `stream.rs` - Streaming responses using `do_stream()` directly
- `chat_tool_calling.rs` - Tool calling using `do_generate()` directly
- `stream_tool_calling.rs` - Streaming with tools using `do_stream()` directly
- `image_generation.rs` - Image generation using `do_generate()` directly

Run the examples with:
```bash
export XAI_API_KEY="your-api-key"
cargo run --example chat
cargo run --example stream
cargo run --example image_generation
```
## License

MIT

## Contributing

Contributions are welcome! Please see the Contributing Guide for more details.