| Field | Value |
|---|---|
| Crates.io | xai-sdk |
| lib.rs | xai-sdk |
| version | 0.7.1 |
| created_at | 2025-10-28 13:26:04.738536+00 |
| updated_at | 2025-12-03 15:30:13.929461+00 |
| description | xAI SDK |
| homepage | |
| repository | https://github.com/0xC0DE666/xai-sdk |
| max_upload_size | |
| id | 1904763 |
| size | 279,674 |
A Rust SDK for xAI's API, providing type-safe gRPC clients for all xAI services, including Grok language models, embeddings, image generation, and more.
Add to your `Cargo.toml`:

```toml
[dependencies]
xai-sdk = "0.7.1"
tokio = { version = "1.0", features = ["full"] }
anyhow = "1.0"
```
Set your API key as an environment variable:
```sh
export XAI_API_KEY="your-api-key-here"
```
Run the authentication info example:
```sh
cargo run --example auth_info
```
Run the raw text sampling example:
```sh
cargo run --example raw_text_sample
```
Run the chat completion example (supports multiple modes):
```sh
# Blocking completion
cargo run --example chat -- --complete

# Streaming completion
cargo run --example chat -- --stream

# Streaming with assembly
cargo run --example chat -- --assemble
```
Run the multi-client example (demonstrates using multiple services with shared channel):
```sh
cargo run --example multi_client
```
Run the interceptor composition example:
```sh
cargo run --example interceptor_compose
```
The SDK provides clients for all xAI services:
- `GetCompletion` - Get a chat completion
- `GetCompletionChunk` - Stream a chat completion in chunks
- `StartDeferredCompletion` - Start a deferred chat completion
- `GetDeferredCompletion` - Retrieve a deferred completion
- `GetStoredCompletion` - Get a stored chat completion
- `DeleteStoredCompletion` - Delete a stored chat completion
- `SampleText` - Raw text generation
- `SampleTextStreaming` - Streaming text generation
- `ListLanguageModels` - List available language models
- `ListEmbeddingModels` - List embedding models
- `ListImageGenerationModels` - List image generation models
- `Embed` - Generate embeddings from text or images
- `GenerateImage` - Create images from text prompts
- `get_api_key_info` - Get API key information

The SDK is organized into focused modules, each providing easy client creation:
- `auth` - Authentication services
- `chat` - Chat completions and streaming
- `documents` - Document processing
- `embed` - Text and image embeddings
- `image` - Image generation
- `models` - Model listing and information
- `sample` - Text sampling and generation
- `tokenize` - Text tokenization

Here's a complete example showing multiple services using the modular architecture:
```rust
use anyhow::Context;
use std::env;
use xai_sdk::Request;
use xai_sdk::xai_api::{
    Content, GetCompletionsRequest, Message, MessageRole, SampleTextRequest, content,
};
use xai_sdk::{chat, models, sample};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Load the API key for authentication
    let api_key =
        env::var("XAI_API_KEY").context("XAI_API_KEY environment variable must be set")?;

    // Create authenticated clients for different services.
    // `new` takes the key by value, so clone it for all but the last client.
    let mut models_client = models::client::new(api_key.clone()).await?;
    let mut sample_client = sample::client::new(api_key.clone()).await?;
    let mut chat_client = chat::client::new(api_key).await?;

    // List available models
    let models_request = Request::new(());
    let models_response = models_client.list_language_models(models_request).await?;
    println!("Available models: {:?}", models_response.into_inner().models);

    // Generate text
    let sample_request = Request::new(SampleTextRequest {
        prompt: vec!["Hello, world!".to_string()],
        model: "grok-2-latest".to_string(),
        ..Default::default()
    });
    let sample_response = sample_client.sample_text(sample_request).await?;
    println!("Generated: {}", sample_response.into_inner().choices[0].text);

    // Chat completion
    let message = Message {
        role: MessageRole::RoleUser.into(),
        content: vec![Content {
            content: Some(content::Content::Text("Explain Rust ownership".to_string())),
        }],
        ..Default::default()
    };
    let chat_request = Request::new(GetCompletionsRequest {
        model: "grok-3-latest".to_string(),
        messages: vec![message],
        ..Default::default()
    });
    let chat_response = chat_client.get_completion(chat_request).await?.into_inner();
    // `message` is an Option, and the choice can't be moved out of the Vec,
    // so borrow it instead of calling `.unwrap()` on an indexed element.
    if let Some(message) = &chat_response.choices[0].message {
        println!("Chat response: {:?}", message.content);
    }

    Ok(())
}
```
The SDK provides powerful utilities for working with streaming responses:
A flexible callback system for processing streaming data:
- `on_content_token(TokenContext, token: &str)` - Called for each piece of response content
- `on_content_complete(CompletionContext)` - Called once when the content phase completes for a choice
- `on_reason_token(TokenContext, token: &str)` - Called for each piece of reasoning content
- `on_reasoning_complete(CompletionContext)` - Called once when the reasoning phase completes for a choice
- `on_chunk(chunk)` - Called for each complete chunk received

The `TokenContext` provides:
- `total_choices` - Total number of choices in the stream
- `choice_index` - Index of the choice this token belongs to
- `reasoning_status` - Current status of the reasoning phase (`Init`, `Pending`, or `Complete`)
- `content_status` - Current status of the content phase (`Init`, `Pending`, or `Complete`)

The `CompletionContext` provides:
- `total_choices` - Total number of choices in the stream
- `choice_index` - Index of the choice that completed

Helpers for working with streams:

- `chat::stream::process` - Process streaming responses with custom callbacks
- `chat::stream::assemble` - Convert collected chunks into complete responses
- `chat::stream::Consumer::with_stdout()` - Pre-configured consumer for single-choice real-time output
- `chat::stream::Consumer::with_buffered_stdout()` - Pre-configured consumer for multi-choice buffered output

The SDK provides a flexible interceptor system for customizing request handling:
The `auth()` function creates an interceptor that adds Bearer token authentication:
```rust
use xai_sdk::common::interceptor::auth;

let interceptor = auth("your-api-key");
let client = chat::client::with_interceptor(interceptor).await?;
```
Combine multiple interceptors using `compose()`:
```rust
use xai_sdk::common::interceptor::{auth, compose};

let interceptors: Vec<Box<dyn xai_sdk::export::service::Interceptor + Send + Sync>> = vec![
    Box::new(auth("your-api-key")),
    Box::new(|mut req| {
        req.metadata_mut()
            .insert("x-custom-header", "value".parse().unwrap());
        Ok(req)
    }),
];

let composed = compose(interceptors);
let client = chat::client::with_interceptor(composed).await?;
```
**Note:** All interceptors must be `Send + Sync` to ensure thread safety when used in async contexts.
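The composition pattern above can be sketched in plain Rust. The `Request` and `Interceptor` types below are simplified stand-ins for illustration, not the SDK's real definitions: `compose` folds a list of interceptors into a single one that threads the request through each in order.

```rust
// Illustrative sketch of interceptor composition. `Request` and
// `Interceptor` are stand-ins, not the SDK's actual types.
type Request = Vec<(String, String)>; // stand-in: a list of metadata entries
type Interceptor = Box<dyn Fn(Request) -> Result<Request, String> + Send + Sync>;

// Chain interceptors so each one sees the request the previous one produced.
fn compose(interceptors: Vec<Interceptor>) -> Interceptor {
    Box::new(move |mut req| {
        for interceptor in &interceptors {
            req = interceptor(req)?;
        }
        Ok(req)
    })
}

fn main() {
    let auth: Interceptor = Box::new(|mut req| {
        req.push(("authorization".to_string(), "Bearer my-key".to_string()));
        Ok(req)
    });
    let custom: Interceptor = Box::new(|mut req| {
        req.push(("x-custom-header".to_string(), "value".to_string()));
        Ok(req)
    });

    let composed = compose(vec![auth, custom]);
    let req = composed(Vec::new()).unwrap();
    println!("{req:?}"); // both entries present, in composition order
}
```

Because each boxed closure is `Send + Sync`, the composed interceptor is too, which is what makes the pattern safe to use from async tasks.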
All client functions return `ClientInterceptor`, a concrete type that can be:

- sent across threads (`Send + Sync`) - safe to use in `tokio::spawn` and other async contexts

The `ClientInterceptor` can be created from any `impl Interceptor + Send + Sync + 'static`:
```rust
use xai_sdk::common::interceptor::ClientInterceptor;

let interceptor = ClientInterceptor::new(|mut req| {
    // Custom logic
    Ok(req)
});

// Can be used in spawned tasks; the async block needs a Result type for `?`
tokio::spawn(async move {
    let client = chat::client::with_interceptor(interceptor).await?;
    // Use client...
    Ok::<_, anyhow::Error>(())
});
```
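The `Send + Sync` bounds are what let an interceptor value cross task boundaries. A minimal illustration with std threads (the `Arc`'d closure here is a stand-in, not the SDK's actual `ClientInterceptor`):

```rust
// Illustrative: why `Send + Sync` matters. An Arc'd closure that is
// `Send + Sync` can be cloned into any number of threads (or tokio tasks).
use std::sync::Arc;
use std::thread;

fn make_interceptor() -> Arc<dyn Fn(&str) -> String + Send + Sync> {
    Arc::new(|req| format!("authorized {req}"))
}

fn main() {
    let interceptor = make_interceptor();

    let handles: Vec<_> = (0..2)
        .map(|i| {
            let interceptor = Arc::clone(&interceptor);
            // `thread::spawn` (like `tokio::spawn`) requires the moved value
            // to be `Send`; sharing it between threads also requires `Sync`.
            thread::spawn(move || interceptor(&format!("request-{i}")))
        })
        .collect();

    for handle in handles {
        println!("{}", handle.join().unwrap());
    }
}
```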
The SDK supports comprehensive configuration options.
Comprehensive error handling is provided throughout the SDK.
This SDK is built from xAI's official Protocol Buffer (`.proto`) definitions; the client code is generated from them, ensuring compatibility and type safety.
See CHANGELOG.md for a detailed list of changes and new features.
This project is licensed under the MIT License.