| Crates.io | xai-grpc-client |
| lib.rs | xai-grpc-client |
| version | 0.4.3 |
| created_at | 2025-11-16 21:38:50.621269+00 |
| updated_at | 2026-01-05 21:02:08.106162+00 |
| description | Feature-complete gRPC client for xAI's Grok API with streaming, tools, multimodal support |
| homepage | https://github.com/fpinsight/xai-grpc-client |
| repository | https://github.com/fpinsight/xai-grpc-client |
| max_upload_size | |
| id | 1935936 |
| size | 549,007 |
Unofficial Rust client for xAI's Grok API with full gRPC support.
Uses the secrecy crate to protect API keys in memory.
Add this to your Cargo.toml:
[dependencies]
xai-grpc-client = "0.4"
tokio = { version = "1", features = ["full"] }
tokio-stream = "0.1"
The crate provides flexible TLS configuration through feature flags for root certificate selection:
Default (webpki-roots - recommended for containers):
[dependencies]
xai-grpc-client = "0.4"
Using native system roots (recommended for development):
[dependencies]
xai-grpc-client = { version = "0.4", features = ["tls-native-roots"], default-features = false }
Using both root stores (if unsure):
[dependencies]
xai-grpc-client = { version = "0.4", features = ["tls-roots"], default-features = false }
Available features:
tls-webpki-roots (default) - Uses Mozilla's root certificates (works in containers/distroless)
tls-native-roots - Uses the system's native certificate store
tls-roots - Enables both root stores simultaneously
Advanced: Custom TLS Configuration
For advanced use cases (custom CA certificates, proxies, custom timeouts), use the with_channel() constructor:
use xai_grpc_client::{GrokClient, Channel, ClientTlsConfig, Certificate};
use secrecy::SecretString;
use std::time::Duration;
// Load custom CA certificate
let ca_cert = std::fs::read("path/to/ca.pem")?;
let ca = Certificate::from_pem(ca_cert);
// Configure TLS with custom CA
let tls_config = ClientTlsConfig::new()
.ca_certificate(ca)
.domain_name("api.x.ai");
// Build custom channel
let channel = Channel::from_static("https://api.x.ai")
.timeout(Duration::from_secs(120))
.tls_config(tls_config)?
.connect()
.await?;
// Create client with custom channel
let api_key = SecretString::from("your-key".to_string());
let client = GrokClient::with_channel(channel, api_key);
See the custom_tls example for more details.
Quick start:
use xai_grpc_client::{GrokClient, ChatRequest};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize client from GROK_API_KEY environment variable
let mut client = GrokClient::from_env().await?;
// Create a simple chat request
let request = ChatRequest::new()
.user_message("What is the meaning of life?")
.with_model("grok-2-1212")
.with_max_tokens(100);
// Get response
let response = client.complete_chat(request).await?;
println!("{}", response.content);
Ok(())
}
Set your API key:
export GROK_API_KEY="your-api-key-here"
Stream responses in real-time:
use xai_grpc_client::{GrokClient, ChatRequest};
use tokio_stream::StreamExt;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = GrokClient::from_env().await?;
let request = ChatRequest::new()
.user_message("Write a short poem about Rust");
let mut stream = client.stream_chat(request).await?;
while let Some(chunk) = stream.next().await {
let chunk = chunk?;
print!("{}", chunk.delta);
}
Ok(())
}
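If you need the full text once the stream ends, you can accumulate the deltas as they arrive. A minimal sketch built on the chunk.delta field shown above (assuming delta is a String):
let mut full_text = String::new();
while let Some(chunk) = stream.next().await {
    let chunk = chunk?;
    print!("{}", chunk.delta);           // live output
    full_text.push_str(&chunk.delta);    // keep a copy for later use
}
println!("\nReceived {} characters", full_text.len());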
Enable the model to call functions:
use xai_grpc_client::{GrokClient, ChatRequest, FunctionTool, Tool, ToolChoice};
use serde_json::json;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = GrokClient::from_env().await?;
// Define a function tool
let get_weather = FunctionTool::new(
"get_weather",
"Get the current weather in a location"
)
.with_parameters(json!({
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}));
let request = ChatRequest::new()
.user_message("What's the weather in Tokyo?")
.add_tool(Tool::Function(get_weather))
.with_tool_choice(ToolChoice::Auto);
let response = client.complete_chat(request).await?;
// Check if model called the tool
if !response.tool_calls.is_empty() {
for tool_call in &response.tool_calls {
println!("Function: {}", tool_call.function.name);
println!("Arguments: {}", tool_call.function.arguments);
}
}
Ok(())
}
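The arguments field arrives as a JSON string, so the usual next step is to parse it and dispatch to your own function. A sketch using serde_json (already imported above); the field names match the schema defined in the example:
for tool_call in &response.tool_calls {
    if tool_call.function.name == "get_weather" {
        // Parse the JSON arguments the model produced
        let args: serde_json::Value = serde_json::from_str(&tool_call.function.arguments)?;
        let location = args["location"].as_str().unwrap_or("unknown");
        let unit = args["unit"].as_str().unwrap_or("celsius");
        println!("Would look up weather for {} in {}", location, unit);
        // Call your real weather API here, then send the result back
        // to the model in a follow-up request.
    }
}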
Send images with your prompts:
use xai_grpc_client::{GrokClient, ChatRequest};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = GrokClient::from_env().await?;
let request = ChatRequest::new()
.user_with_image(
"What's in this image?",
"https://example.com/image.jpg"
)
.with_model("grok-2-vision-1212");
let response = client.complete_chat(request).await?;
println!("{}", response.content);
Ok(())
}
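The example above passes an HTTPS URL. For local files, a common pattern is a base64 data URL; note that this README does not confirm the API accepts data URLs, so treat this as an assumption to verify. A sketch using the base64 crate (add base64 = "0.22" to Cargo.toml; the input path is hypothetical):
use base64::Engine;
// Encode a local file as a data URL
let bytes = std::fs::read("photo.jpg")?;
let encoded = base64::engine::general_purpose::STANDARD.encode(&bytes);
let data_url = format!("data:image/jpeg;base64,{}", encoded);
let request = ChatRequest::new()
    .user_with_image("What's in this image?", data_url.as_str())
    .with_model("grok-2-vision-1212");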
Enable web search for up-to-date information:
use xai_grpc_client::{GrokClient, ChatRequest, Tool, WebSearchTool};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = GrokClient::from_env().await?;
let request = ChatRequest::new()
.user_message("What are the latest developments in AI?")
.add_tool(Tool::WebSearch(WebSearchTool::new()));
let response = client.complete_chat(request).await?;
println!("Response: {}", response.content);
if !response.citations.is_empty() {
println!("\nSources:");
for citation in &response.citations {
println!(" - {}", citation);
}
}
Ok(())
}
List available models and get pricing information:
use xai_grpc_client::GrokClient;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = GrokClient::from_env().await?;
// List all available models
let models = client.list_models().await?;
for model in models {
println!("{}: {} (max {} tokens)",
model.name, model.version, model.max_prompt_length);
// Calculate cost for a request
let cost = model.calculate_cost(10000, 1000, 0);
println!(" Example cost: ${:.4}", cost);
}
// Get specific model details
let model = client.get_model("grok-2-1212").await?;
println!("Model: {} v{}", model.name, model.version);
Ok(())
}
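Because every model exposes calculate_cost, you can also compare prices before picking one. A sketch that finds the cheapest model for a fixed budget of 10,000 prompt tokens and 1,000 completion tokens (assuming calculate_cost returns an f64, as the ${:.4} formatting above suggests):
let models = client.list_models().await?;
// Pick the model with the lowest cost for this workload
let cheapest = models.iter().min_by(|a, b| {
    a.calculate_cost(10000, 1000, 0)
        .partial_cmp(&b.calculate_cost(10000, 1000, 0))
        .unwrap()
});
if let Some(model) = cheapest {
    println!("Cheapest: {} at ${:.4}", model.name, model.calculate_cost(10000, 1000, 0));
}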
Generate vector embeddings from text or images:
use xai_grpc_client::{GrokClient, EmbedRequest};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = GrokClient::from_env().await?;
let request = EmbedRequest::new("embed-large-v1")
.add_text("Hello, world!")
.add_text("How are you?");
let response = client.embed(request).await?;
for embedding in response.embeddings {
println!("Embedding {} has {} dimensions",
embedding.index, embedding.vector.len());
}
Ok(())
}
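Embedding vectors are usually compared with cosine similarity. A self-contained helper, assuming embedding.vector is a Vec<f32> (the element type is not shown in this README):
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for non-zero vectors
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 { 0.0 } else { dot / (norm_a * norm_b) }
}
// Compare the two texts embedded above
let sim = cosine_similarity(&response.embeddings[0].vector, &response.embeddings[1].vector);
println!("Similarity: {:.4}", sim);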
Count tokens before making requests for cost estimation:
use xai_grpc_client::{GrokClient, TokenizeRequest};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = GrokClient::from_env().await?;
let request = TokenizeRequest::new("grok-2-1212")
.with_text("Hello, world! How are you today?");
let response = client.tokenize(request).await?;
println!("Token count: {}", response.token_count());
println!("Tokens:");
for token in &response.tokens {
println!(" '{}' (ID: {})", token.string_token, token.token_id);
}
// Calculate cost
let model = client.get_model("grok-2-1212").await?;
let cost = model.calculate_cost(response.token_count() as u32, 1000, 0);
println!("Estimated cost: ${:.4}", cost);
Ok(())
}
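Paired with the max_prompt_length field from the model listing above, tokenization also lets you validate a prompt before sending it. A sketch using only fields shown elsewhere in this README (assuming max_prompt_length is a u32 token count):
let model = client.get_model("grok-2-1212").await?;
let tokens = response.token_count() as u32;
// Reject prompts that will not fit in the model's context window
if tokens > model.max_prompt_length {
    eprintln!("Prompt is {} tokens; {} accepts at most {}",
        tokens, model.name, model.max_prompt_length);
} else {
    println!("Prompt fits: {}/{} tokens", tokens, model.max_prompt_length);
}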
Check your API key status and permissions:
use xai_grpc_client::GrokClient;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = GrokClient::from_env().await?;
let info = client.get_api_key_info().await?;
println!("API Key: {}", info.redacted_api_key);
println!("Team ID: {}", info.team_id);
println!("Status: {}", info.status_string());
if !info.is_active() {
println!("โ ๏ธ Warning: API key is not active!");
}
println!("\nPermissions:");
for acl in &info.acls {
println!(" - {}", acl);
}
Ok(())
}
For long-running tasks, start a deferred completion and poll for results:
use xai_grpc_client::{GrokClient, ChatRequest, ReasoningEffort};
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = GrokClient::from_env().await?;
let request = ChatRequest::new()
.user_message("Write a detailed analysis of quantum computing")
.with_reasoning_effort(ReasoningEffort::High);
// Start deferred completion
let request_id = client.start_deferred(request).await?;
println!("Started deferred completion: {}", request_id);
// Wait for completion with polling
let response = client.wait_for_deferred(
request_id,
Duration::from_secs(2), // poll interval
Duration::from_secs(300) // timeout
).await?;
println!("{}", response.content);
Ok(())
}
Use CompletionOptions to create reusable configurations:
use xai_grpc_client::{GrokClient, ChatRequest, CompletionOptions, Message, MessageContent};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = GrokClient::from_env().await?;
// Define reusable options
let options = CompletionOptions::new()
.with_model("grok-2-1212")
.with_temperature(0.7)
.with_max_tokens(500);
// Use with different message sets
let messages = vec![
Message::System("You are a helpful coding assistant".to_string()),
Message::User(MessageContent::Text("Explain Rust ownership".into()))
];
let request = ChatRequest::from_messages_with_options(messages, options);
let response = client.complete_chat(request).await?;
println!("{}", response.content);
Ok(())
}
This library implements 100% (19/19) of the xAI Grok API services!
Chat Service (6/6 RPCs)
Models Service (6/6 RPCs)
Embeddings Service (1/1 RPCs)
Tokenize Service (1/1 RPCs)
Auth Service (1/1 RPCs)
Sample Service (2/2 RPCs)
Image Service (1/1 RPCs)
Documents Service (1/1 RPCs)
The library provides comprehensive error handling, with typed errors and an is_retryable() helper for building retry logic:
use xai_grpc_client::{GrokClient, ChatRequest, GrokError};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = GrokClient::from_env().await?;
let request = ChatRequest::new()
.user_message("Hello!");
match client.complete_chat(request).await {
Ok(response) => println!("{}", response.content),
Err(GrokError::RateLimit { retry_after_secs }) => {
println!("Rate limited. Retry after {} seconds", retry_after_secs);
}
Err(GrokError::Auth(msg)) => {
println!("Authentication error: {}", msg);
}
Err(e) if e.is_retryable() => {
println!("Retryable error: {}", e);
}
Err(e) => {
println!("Error: {}", e);
}
}
Ok(())
}
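The RateLimit variant and is_retryable() shown above are enough to build a simple retry wrapper. A sketch with a fixed attempt limit (the attempt count and fallback delay are arbitrary choices; it assumes retry_after_secs is a u64 and that ChatRequest implements Clone, otherwise rebuild the request each attempt):
use std::time::Duration;
let mut attempts = 0;
let response = loop {
    attempts += 1;
    match client.complete_chat(request.clone()).await {
        Ok(response) => break response,
        Err(GrokError::RateLimit { retry_after_secs }) if attempts < 3 => {
            // Honor the server-provided delay before retrying
            tokio::time::sleep(Duration::from_secs(retry_after_secs)).await;
        }
        Err(e) if e.is_retryable() && attempts < 3 => {
            // Transient failure: back off briefly and try again
            tokio::time::sleep(Duration::from_secs(1)).await;
        }
        Err(e) => return Err(e.into()),
    }
};
println!("{}", response.content);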
Initialize from the environment:
let client = GrokClient::from_env().await?;
Requires GROK_API_KEY environment variable.
Or build a custom configuration:
use xai_grpc_client::{GrokClient, GrokConfig};
use secrecy::SecretString;
use std::time::Duration;
let config = GrokConfig {
endpoint: "https://api.x.ai".to_string(),
api_key: SecretString::from("your-api-key".to_string()),
default_model: "grok-2-1212".to_string(),
timeout: Duration::from_secs(120),
};
let client = GrokClient::new(config).await?;
Available models:
grok-2-1212 - Latest Grok 2 (December 2024)
grok-2-vision-1212 - Grok 2 with vision capabilities
grok-beta - Beta model with experimental features
Check xAI's documentation for the latest model list.
Run the test suite:
# Unit tests (no API key required)
cargo test --lib
# Integration tests (requires GROK_API_KEY)
cargo test --test integration_test
# Run specific test
cargo test test_chat_request_builder
The library includes 77 comprehensive unit tests.
See the examples/ directory for more complete examples:
# Simple chat
cargo run --example simple_chat
# Streaming
cargo run --example streaming_chat
# Tool calling
cargo run --example tool_calling
# Multimodal
cargo run --example multimodal
# Model listing
cargo run --example list_models
# Embeddings
cargo run --example embeddings
# Tokenization
cargo run --example tokenize
# Custom TLS configuration
cargo run --example custom_tls
Contributions are welcome! Please feel free to submit a Pull Request.
# Clone the repository with submodules
git clone --recursive https://github.com/fpinsight/xai-grpc-client
cd xai-grpc-client
# Or if you already cloned without --recursive:
git submodule update --init --recursive
cargo build
cargo test
Note: This project uses a Git submodule for proto definitions. The xai-proto submodule must be initialized before building.
Licensed under either of:
at your option.
This is an unofficial client library and is not affiliated with or endorsed by xAI. Use at your own risk.
See CHANGELOG.md for release history.