| Crates.io | gpt5 |
| lib.rs | gpt5 |
| version | 0.2.3 |
| created_at | 2025-09-17 22:08:17.361414+00 |
| updated_at | 2025-10-02 22:14:44.474519+00 |
| description | A Rust client library for OpenAI's GPT-5 API with support for function calling, reasoning, and streaming |
| homepage | https://github.com/mojindri/gpt5 |
| repository | https://github.com/mojindri/gpt5 |
| max_upload_size | |
| id | 1843972 |
| size | 163,913 |
⚠️ IN ACTIVE IMPROVEMENT ⚠️
This library is actively being improved and may have breaking changes.
Perfect for experimentation, learning, and development projects!
Latest Release: v0.2.2 - Web search compatibility fixes and streamlined examples!
A comprehensive Rust client library for OpenAI's GPT-5 API with full support for function calling, reasoning capabilities, and type-safe enums.
Add this to your Cargo.toml:
[dependencies]
gpt5 = "0.2.2"
tokio = { version = "1.0", features = ["rt-multi-thread", "macros"] }
serde_json = "1.0" # For function calling examples
The fastest way to get started is with our examples:
# Clone and run examples
git clone https://github.com/mojindri/gpt5.git
cd gpt5
cargo run --example quick_start
cargo run --example basic_usage
cargo run --example simple_chat
cargo run --example web_search
See the examples/ directory for more detailed examples including function calling, error handling, and interactive chat.
The simplest usage is a one-shot prompt:
use gpt5::{Gpt5Client, Gpt5Model};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = Gpt5Client::new("your-api-key".to_string());
let response = client
.simple(Gpt5Model::Gpt5Nano, "Hello, world!")
.await?;
println!("Response: {}", response);
Ok(())
}
For function calling, declare your tools and build the request with Gpt5RequestBuilder:
use gpt5::{Gpt5Client, Gpt5Model, Gpt5RequestBuilder, Tool, VerbosityLevel, ReasoningEffort};
use serde_json::json;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = Gpt5Client::new("your-api-key".to_string());
// Define a weather tool
let weather_tool = Tool {
tool_type: "function".to_string(),
name: Some("get_current_weather".to_string()),
description: Some("Get the current weather in a given location".to_string()),
parameters: Some(json!({
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location", "unit"]
})),
};
// Build a request with tools
let request = Gpt5RequestBuilder::new(Gpt5Model::Gpt5)
.input("What's the weather like in Boston today?")
.instructions("Use the weather tool to get current conditions")
.tools(vec![weather_tool])
.tool_choice("auto")
.verbosity(VerbosityLevel::Medium)
.reasoning_effort(ReasoningEffort::Medium)
.max_output_tokens(500)
.build();
// Send the request
let response = client.request(request).await?;
// Check for function calls
let function_calls = response.function_calls();
if !function_calls.is_empty() {
println!("Function calls made: {}", function_calls.len());
for call in function_calls {
println!("Function: {}", call.name.as_deref().unwrap_or("unknown"));
println!("Arguments: {}", call.arguments.as_deref().unwrap_or("{}"));
}
}
// Get text response
if let Some(text) = response.text() {
println!("Response: {}", text);
}
Ok(())
}
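When the model requests a function call, you parse the JSON arguments string, run the matching local function, and return its output to the model. A minimal dispatch sketch, where the get_current_weather implementation is a hypothetical stand-in for your own code:
use serde_json::Value;
// Hypothetical local implementation of the tool declared above.
fn get_current_weather(location: &str, unit: &str) -> String {
    format!(r#"{{"location":"{}","unit":"{}","temp":21}}"#, location, unit)
}
// Route a detected function call to the matching local function.
fn dispatch(name: &str, arguments: &str) -> Option<String> {
    // Parse the JSON arguments string produced by the model
    let args: Value = serde_json::from_str(arguments).ok()?;
    match name {
        "get_current_weather" => Some(get_current_weather(
            args["location"].as_str()?,
            args["unit"].as_str()?,
        )),
        _ => None, // Unknown tool: nothing to run
    }
}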
With web search enabled, the model can ask you to run a search on its behalf:
use gpt5::{Gpt5Client, Gpt5Model, Gpt5RequestBuilder, Status};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = Gpt5Client::new("your-api-key".to_string());
let request = Gpt5RequestBuilder::new(Gpt5Model::Gpt5)
.input("Summarise the newest Rust release notes")
.web_search_enabled(true)
.web_search_query("latest Rust release notes")
.web_search_max_results(3)
.build();
let search_config = request.web_search_config.clone();
let response = client.request(request).await?;
match response.status {
Some(Status::RequiresAction) => {
if let Some(config) = search_config {
println!(
"Suggested query: {} (max results: {:?})",
config.query.unwrap_or_default(),
config.max_results
);
}
println!(
"Model requested the web_search tool. Run the search and send tool_outputs with responses.create."
);
}
Some(Status::Completed) => {
if let Some(text) = response.text() {
println!("{}", text);
}
}
_ => println!("Status: {:?}", response.status),
}
Ok(())
}
The library supports all GPT-5 models:
use gpt5::Gpt5Model;
let model = Gpt5Model::Gpt5; // Main model - most capable
let mini = Gpt5Model::Gpt5Mini; // Balanced performance and cost
let nano = Gpt5Model::Gpt5Nano; // Fastest and most cost-effective
let custom = Gpt5Model::Custom("gpt-5-custom".to_string());
Control how much computational effort GPT-5 puts into reasoning:
use gpt5::ReasoningEffort;
let low = ReasoningEffort::Low; // Fast, basic reasoning
let medium = ReasoningEffort::Medium; // Balanced performance
let high = ReasoningEffort::High; // Thorough analysis
Control the detail level of responses:
use gpt5::VerbosityLevel;
let low = VerbosityLevel::Low; // Concise responses
let medium = VerbosityLevel::Medium; // Balanced detail
let high = VerbosityLevel::High; // Detailed responses
Check response completion and status:
let response = client.request(request).await?;
if response.is_completed() {
println!("Response completed successfully");
if let Some(text) = response.text() {
println!("Text: {}", text);
}
} else {
println!("Response incomplete: {:?}", response.status);
}
// Get usage statistics
println!("Total tokens: {}", response.total_tokens());
if let Some(reasoning_tokens) = response.reasoning_tokens() {
println!("Reasoning tokens: {}", reasoning_tokens);
}
The library provides comprehensive error handling:
use gpt5::{Gpt5Client, Gpt5Model};
#[tokio::main]
async fn main() {
    let client = Gpt5Client::new("your-api-key".to_string());
    match client.simple(Gpt5Model::Gpt5Nano, "Hello").await {
        Ok(response) => println!("Success: {}", response),
        Err(e) => {
            // Distinguish network failures from other error kinds
            match e.downcast_ref::<reqwest::Error>() {
                Some(req_err) => println!("Network error: {}", req_err),
                None => println!("Other error: {}", e),
            }
        }
    }
}
The client detects HTTP status failures from the OpenAI API and surfaces detailed error messages, making it easier to debug authentication or quota issues. If you need full control over networking (custom proxies, retry middleware, etc.), pass your own configured reqwest::Client via with_http_client and keep using the same high-level interface.
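For example, a sketch assuming with_http_client takes the API key plus a preconfigured client (check the crate docs for the exact signature):
use std::time::Duration;
use gpt5::Gpt5Client;
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // A reqwest client with a 30-second timeout; add proxies or
    // retry middleware here as needed.
    let http = reqwest::Client::builder()
        .timeout(Duration::from_secs(30))
        .build()?;
    // Assumed shape: API key plus the custom client.
    let client = Gpt5Client::with_http_client("your-api-key".to_string(), http);
    let _ = client;
    Ok(())
}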
The library includes built-in validation for requests:
let request = Gpt5RequestBuilder::new(Gpt5Model::Gpt5Nano)
.input("") // Empty input will trigger a warning
.max_output_tokens(5) // Very low token count will trigger a warning
.build(); // Validation runs automatically
This project is licensed under the MIT License - see the LICENSE file for details.
We provide comprehensive examples to help you get started quickly:
| Example | Description | Run Command |
|---|---|---|
| quick_start.rs | Minimal 3-line example | cargo run --example quick_start |
| basic_usage.rs | Different models demo | cargo run --example basic_usage |
| simple_chat.rs | Interactive chat loop | cargo run --example simple_chat |
| function_calling.rs | Advanced function calling | cargo run --example function_calling |
| error_handling.rs | Production error handling | cargo run --example error_handling |
| web_search.rs | Enable web search assistance with suggested queries | cargo run --example web_search |
Set your OpenAI API key:
export OPENAI_API_KEY="your-api-key-here"
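In code, read the key from the environment instead of hardcoding it; a minimal sketch:
use gpt5::Gpt5Client;
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Fails fast if OPENAI_API_KEY is unset.
    let api_key = std::env::var("OPENAI_API_KEY")?;
    let client = Gpt5Client::new(api_key);
    let _ = client;
    Ok(())
}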
🚀 We're actively looking for contributors! This is a fresh library with lots of room for improvement.
Areas where we'd love help:
How to contribute:
Questions or ideas? Open an issue and let's discuss! We're very responsive and would love to hear from you.
Contributions are welcome! Please feel free to submit a Pull Request.
See CHANGELOG.md for detailed release notes and version history.