| Crates.io | openai-ergonomic |
| lib.rs | openai-ergonomic |
| version | 0.5.0 |
| created_at | 2025-09-21 19:48:50.331723+00 |
| updated_at | 2025-10-14 12:57:24.381247+00 |
| description | Ergonomic Rust wrapper for OpenAI API |
| homepage | https://github.com/genai-rs/openai-ergonomic |
| repository | https://github.com/genai-rs/openai-ergonomic |
| max_upload_size | |
| id | 1849074 |
| size | 1,669,650 |
Ergonomic Rust wrapper for the OpenAI API, providing type-safe builder patterns and async/await support.
- Built on tokio and reqwest for modern async Rust
- Type-safe builders for OpenAI API endpoints
- Azure OpenAI - seamless support for Azure OpenAI deployments

Status: under construction. The crate is still in active development and not yet ready for production use.
Add openai-ergonomic to your Cargo.toml:
[dependencies]
openai-ergonomic = "0.5"
tokio = { version = "1.0", features = ["full"] }
futures = "0.3" # needed for the streaming example below
Then make your first request:
use openai_ergonomic::Client;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Build client from environment variables
let client = Client::from_env()?.build();
let response = client
.chat_completions()
.model("gpt-4")
.message("user", "Hello, world!")
.send()
.await?;
println!("{}", response.choices[0].message.content);
Ok(())
}
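Multi-turn conversations presumably just chain additional .message(role, content) calls on the same builder; a minimal sketch under that assumption (the chaining of several message calls is not itself shown in the snippet above):
use openai_ergonomic::Client;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::from_env()?.build();
    // Assumed: `.message()` can be chained to build up a conversation history.
    let response = client
        .chat_completions()
        .model("gpt-4")
        .message("system", "You are a terse assistant.")
        .message("user", "What is the capital of France?")
        .send()
        .await?;
    println!("{}", response.choices[0].message.content);
    Ok(())
}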
Streaming responses are consumed as a futures Stream:
use openai_ergonomic::Client;
use futures::StreamExt;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Build client from environment variables
let client = Client::from_env()?.build();
let builder = client
.chat()
.user("Tell me a story");
let mut stream = client.send_chat_stream(builder).await?;
while let Some(chunk) = stream.next().await {
let chunk = chunk?;
if let Some(content) = chunk.content() {
print!("{}", content);
}
}
Ok(())
}
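If you also want the complete text once the stream ends, accumulate the chunks as they arrive; this sketch uses only the stream API shown above:
use openai_ergonomic::Client;
use futures::StreamExt;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::from_env()?.build();
    let builder = client.chat().user("Tell me a story");
    let mut stream = client.send_chat_stream(builder).await?;
    // Print each delta as it arrives while also collecting the full response.
    let mut full_text = String::new();
    while let Some(chunk) = stream.next().await {
        let chunk = chunk?;
        if let Some(content) = chunk.content() {
            print!("{}", content);
            full_text.push_str(&content);
        }
    }
    println!("\n--- {} characters total ---", full_text.len());
    Ok(())
}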
You can provide your own reqwest::Client with custom retry, timeout, and middleware configuration.
Note: When using a custom HTTP client, you must configure the timeout on the reqwest::Client itself:
use openai_ergonomic::{Client, Config};
use reqwest_middleware::ClientBuilder;
use reqwest_retry::{RetryTransientMiddleware, policies::ExponentialBackoff};
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a retry policy with exponential backoff
let retry_policy = ExponentialBackoff::builder()
.build_with_max_retries(3);
// Build a reqwest client with custom timeout
let reqwest_client = reqwest::Client::builder()
.timeout(Duration::from_secs(60)) // Configure timeout here
.build()?;
// Add retry middleware
let http_client = ClientBuilder::new(reqwest_client)
.with(RetryTransientMiddleware::new_with_policy(retry_policy))
.build();
// Create OpenAI client with custom HTTP client
let config = Config::builder()
.api_key("your-api-key")
.http_client(http_client)
.build();
let client = Client::new(config)?.build();
// Use the client normally - retries and timeout are handled automatically
let response = client.chat_simple("Hello!").await?;
println!("{}", response);
Ok(())
}
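The retry behaviour here is plain reqwest-retry and can be tuned independently of this crate. For example, to clamp how long the backoff waits between attempts, swap the retry_policy binding in the example above for the following (retry_bounds is a reqwest-retry builder method, not part of openai-ergonomic):
// Retry up to 5 times, with backoff intervals clamped to [500 ms, 30 s].
let retry_policy = ExponentialBackoff::builder()
    .retry_bounds(Duration::from_millis(500), Duration::from_secs(30))
    .build_with_max_retries(5);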
Azure OpenAI Support
The crate seamlessly supports Azure OpenAI deployments. Azure-specific configuration can be provided through environment variables or programmatically.
export AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com"
export AZURE_OPENAI_API_KEY="your-azure-api-key"
export AZURE_OPENAI_DEPLOYMENT="gpt-4"
export AZURE_OPENAI_API_VERSION="2024-02-01" # Optional, defaults to 2024-02-01
use openai_ergonomic::Client;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Build client from Azure environment variables
let client = Client::from_env()?.build();
// Use exactly the same API as standard OpenAI
let response = client.chat_simple("Hello from Azure!").await?;
println!("{}", response);
Ok(())
}
Alternatively, configure Azure programmatically:
use openai_ergonomic::{Client, Config};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let config = Config::builder()
.api_key("your-azure-api-key")
.api_base("https://my-resource.openai.azure.com")
.azure_deployment("gpt-4")
.azure_api_version("2024-02-01")
.build();
let client = Client::new(config)?.build();
let response = client.chat_simple("Hello!").await?;
println!("{}", response);
Ok(())
}
Note: The library automatically handles the differences between Azure OpenAI and standard OpenAI (authentication, URL paths, API versioning). You use the same API regardless of the provider.
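For reference, this is the request shape Azure OpenAI expects and which the library builds for you. The helper below is purely illustrative and not part of the crate:
// Illustrative only -- not part of openai-ergonomic.
fn azure_chat_url(endpoint: &str, deployment: &str, api_version: &str) -> String {
    format!(
        "{}/openai/deployments/{}/chat/completions?api-version={}",
        endpoint, deployment, api_version
    )
}
fn main() {
    // Azure OpenAI scopes requests to a deployment and authenticates with an
    // `api-key` header, unlike standard OpenAI's `Authorization: Bearer` header.
    println!(
        "{}",
        azure_chat_url("https://my-resource.openai.azure.com", "gpt-4", "2024-02-01")
    );
}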
The examples/ directory contains comprehensive examples for all major OpenAI API features.
Run any example with:
# Set your OpenAI API key
export OPENAI_API_KEY="your-api-key-here"
# Run an example
cargo run --example quickstart
cargo run --example responses_streaming
cargo run --example vision_chat
We welcome contributions! Please see our Contributing Guide for details.
Licensed under either of
- Apache License, Version 2.0
- MIT License

at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.