| Crates.io | praxis-llm |
| lib.rs | praxis-llm |
| version | 0.2.0 |
| created_at | 2025-11-09 00:20:16.942152+00 |
| updated_at | 2025-11-11 00:39:46.797371+00 |
| description | Provider-agnostic LLM client with OpenAI/Azure support and streaming |
| homepage | https://github.com/matheussilva/praxis |
| repository | https://github.com/matheussilva/praxis |
| max_upload_size | |
| id | 1923421 |
| size | 127,325 |
Provider-agnostic LLM client library supporting both the Chat Completions API and the Responses API (for reasoning models).
```toml
[dependencies]
praxis-llm = "0.2"
```
Basic chat completion:

```rust
use praxis_llm::{LLMClient, OpenAIClient, ChatRequest, Message};

// Inside an async context (e.g. a `#[tokio::main]` function):
let client = OpenAIClient::new(api_key)?;
let request = ChatRequest::new("gpt-4o", vec![
    Message::human("What is the capital of France?"),
]);
let response = client.chat_completion(request).await?;
println!("{}", response.content.unwrap_or_default());
```
Streaming chat:

```rust
use praxis_llm::{LLMClient, OpenAIClient, ChatRequest, Message, StreamEvent};
use futures::StreamExt;

// Inside an async context:
let client = OpenAIClient::new(api_key)?;
let request = ChatRequest::new("gpt-4o", vec![
    Message::human("Write a poem about coding."),
]);
let mut stream = client.chat_completion_stream(request).await?;
while let Some(event) = stream.next().await {
    match event? {
        StreamEvent::Message { content } => print!("{}", content),
        _ => {}
    }
}
```
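The streaming example uses `async`/`await` and `StreamExt` from the `futures` crate, so the consuming binary also needs an async runtime. A minimal dependency section might look like the following sketch (tokio is an assumption — any async executor works; version numbers are illustrative):

```toml
[dependencies]
praxis-llm = "0.2"
futures = "0.3"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```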
Responses API with reasoning:

```rust
use praxis_llm::{LLMClient, OpenAIClient, ResponseRequest, Message, ReasoningConfig};

// Inside an async context:
let client = OpenAIClient::new(api_key)?;
let request = ResponseRequest::new("gpt-5", vec![
    Message::human("Explain quantum entanglement."),
]).with_reasoning(ReasoningConfig::medium());
let response = client.response(request).await?;
if let Some(reasoning) = response.reasoning {
    println!("Reasoning: {}", reasoning);
}
if let Some(message) = response.message {
    println!("Response: {}", message);
}
```
See the examples/ directory for complete working examples:
- 01_chat.rs - Basic chat completion
- 02_chat_streaming.rs - Streaming chat
- 03_reasoning.rs - Responses API with reasoning
- 04_reasoning_streaming.rs - Streaming with reasoning

Run examples:

```shell
cargo run --example 01_chat
```
License: MIT