| Crates.io | api_xai |
| lib.rs | api_xai |
| version | 0.3.0 |
| created_at | 2025-11-06 19:04:06.88562+00 |
| updated_at | 2025-11-30 07:30:56.630212+00 |
| description | X.AI Grok API client for accessing large language models (LLMs). |
| homepage | https://github.com/Wandalen/api_llm/tree/master/api/xai |
| repository | https://github.com/Wandalen/api_llm/tree/master/api/xai |
| max_upload_size | |
| id | 1920129 |
| size | 498,744 |
Comprehensive Rust client for X.AI's Grok API with enterprise reliability features.
This API crate is designed as a stateless HTTP client with zero persistence requirements, which keeps deployments lightweight and container-friendly and eliminates operational complexity.
Its guiding principle is to expose all server-side functionality transparently while maintaining zero client-side intelligence or automatic behaviors.
Features fall into three groups: core capabilities, enterprise reliability, and client-side enhancements. See the feature flags below for the full list.
Add to your Cargo.toml:
[dependencies]
api_xai = { version = "0.3.0", features = ["full"] }
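The full feature set is the default. For leaner builds you can opt into individual flags instead (listed under feature flags below); a minimal sketch, assuming the flags compose independently:
[dependencies]
api_xai = { version = "0.3.0", default-features = false, features = [ "enabled", "streaming", "retry" ] }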
Basic chat completion:
use api_xai::{ Client, Secret, XaiEnvironmentImpl, ChatCompletionRequest, Message, ClientApiAccessors };
#[ tokio::main ]
async fn main() -> Result< (), Box< dyn std::error::Error > >
{
  // Load the API key from the environment or workspace secrets.
  let secret = Secret::load_with_fallbacks( "XAI_API_KEY" )?;
  let env = XaiEnvironmentImpl::new( secret )?;
  let client = Client::build( env )?;

  // Assemble the request with the `former` builder.
  let request = ChatCompletionRequest::former()
  .model( "grok-2-1212".to_string() )
  .messages( vec![ Message::user( "Hello, Grok!" ) ] )
  .form();

  let response = client.chat().create( request ).await?;
  println!( "Grok: {:?}", response.choices[ 0 ].message.content );
  Ok( () )
}
Streaming the response token by token:
use api_xai::{ Client, Secret, XaiEnvironmentImpl, ChatCompletionRequest, Message, ClientApiAccessors };
use futures_util::StreamExt;
use std::io::Write;
#[ tokio::main ]
async fn main() -> Result< (), Box< dyn std::error::Error > >
{
  let secret = Secret::load_with_fallbacks( "XAI_API_KEY" )?;
  let env = XaiEnvironmentImpl::new( secret )?;
  let client = Client::build( env )?;

  // Same request as above, with streaming enabled.
  let request = ChatCompletionRequest::former()
  .model( "grok-2-1212".to_string() )
  .messages( vec![ Message::user( "Tell me a story" ) ] )
  .stream( true )
  .form();

  let mut stream = client.chat().create_stream( request ).await?;
  while let Some( chunk ) = stream.next().await
  {
    let chunk = chunk?;
    // Guard against frames with an empty `choices` array instead of indexing into it.
    if let Some( content ) = chunk.choices.first().and_then( | c | c.delta.content.as_ref() )
    {
      print!( "{}", content );
      std::io::stdout().flush()?; // Render each token as it arrives.
    }
  }
  Ok( () )
}
Create secret/-secrets.sh in your project root:
#!/bin/bash
export XAI_API_KEY="xai-your-key-here"
The crate uses workspace_tools for secret management with an automatic fallback chain:
1. Environment variable ( XAI_API_KEY )
2. Workspace secrets script ( ./secret/-secrets.sh )
3. Local fallbacks ( secrets.sh, .env )
Feature flags:
- enabled - Master switch for core functionality
- streaming - SSE streaming support
- tool_calling - Function calling and tools
- retry - Exponential backoff retry logic
- circuit_breaker - Circuit breaker pattern
- rate_limiting - Token bucket rate limiting
- failover - Multi-endpoint failover
- health_checks - Health monitoring
- structured_logging - Tracing integration
- count_tokens - Local token counting (requires: tiktoken-rs)
- caching - Response caching (requires: lru)
- input_validation - Request validation
- curl_diagnostics - Debug utilities
- batch_operations - Parallel processing
- performance_metrics - Metrics collection (requires: prometheus)
- sync_api - Sync wrappers
- full - All features enabled (default)
All dependencies are workspace-managed for consistency.
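The retry flag wires exponential backoff into the client. To illustrate the pattern itself rather than the crate's API, here is a minimal hand-rolled sketch; retry_with_backoff and everything in it are hypothetical and not part of api_xai:
use std::time::Duration;

// Hypothetical helper showing the exponential-backoff pattern:
// retry a fallible async operation, doubling the delay after each failure.
async fn retry_with_backoff< T, E, F, Fut >( mut op : F, max_attempts : u32 ) -> Result< T, E >
where
  F : FnMut() -> Fut,
  Fut : std::future::Future< Output = Result< T, E > >,
{
  let mut delay = Duration::from_millis( 100 );
  let mut attempt = 1;
  loop
  {
    match op().await
    {
      Ok( value ) => return Ok( value ),
      Err( e ) if attempt >= max_attempts => return Err( e ), // Out of attempts.
      Err( _ ) =>
      {
        tokio::time::sleep( delay ).await; // Back off before retrying.
        delay *= 2; // 100ms, 200ms, 400ms, ...
        attempt += 1;
      }
    }
  }
}
Doubling the delay keeps pressure off a struggling endpoint while still recovering quickly from transient failures.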
The X.AI Grok API is OpenAI-compatible, using the same REST endpoint patterns and request/response formats. Token counting uses GPT-4 encoding (cl100k_base) via tiktoken for accurate counts.
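Because counting is based on cl100k_base, you can cross-check counts with tiktoken-rs directly; a standalone sketch, independent of this crate's count_tokens feature:
use tiktoken_rs::cl100k_base;

fn main() -> Result< (), Box< dyn std::error::Error > >
{
  // cl100k_base is the GPT-4 encoding referenced above.
  let bpe = cl100k_base()?;
  let tokens = bpe.encode_with_special_tokens( "Hello, Grok!" );
  println!( "{} tokens", tokens.len() );
  Ok( () )
}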
To lint:
cargo clippy -- -D warnings
License: MIT