| Field | Value |
|---|---|
| Crates.io | rusty-openai |
| lib.rs | rusty-openai |
| version | 0.1.10 |
| source | src |
| created_at | 2024-08-07 08:50:11.268732 |
| updated_at | 2024-12-08 06:41:06.13422 |
| description | An unofficial OpenAI wrapper that supports image inputs |
| homepage | https://github.com/pleaseful/rusty-openai |
| repository | |
| max_upload_size | |
| id | 1328205 |
| size | 80,084 |
Welcome to the OpenAI Rust SDK, your all-in-one solution for integrating OpenAI's powerful capabilities into your Rust projects. This SDK provides a convenient abstraction over OpenAI's API, enabling you to easily perform tasks such as generating completions, creating and editing images, moderating text, fine-tuning models, and more.
To use this SDK, add the following dependencies to your `Cargo.toml` file:

```toml
[dependencies]
rusty-openai = "0.1.10"
serde_json = "1.0"
tokio = { version = "1", features = ["full"] }
reqwest = { version = "0.12.5", features = ["json", "multipart"] }
```
To get started with the OpenAI Rust SDK, follow these steps:

First, create an instance of the `OpenAI` struct with your API key:

```rust
use rusty_openai::openai::OpenAI;

#[tokio::main]
async fn main() {
    let openai = OpenAI::new("YOUR_API_KEY", "https://api.openai.com/v1");
}
```
To generate chat completions, create a `ChatCompletionRequest` object and call the `create` method from the completions API. The SDK now supports structured outputs with JSON Schema validation:
```rust
use rusty_openai::openai::OpenAI;
use rusty_openai::openai_api::completion::ChatCompletionRequest;
use serde_json::json;
use std::env;

#[tokio::main]
async fn main() {
    let api_key = env::var("OPENAI_API_KEY").expect("API key not set");
    let openai = OpenAI::new(&api_key, "https://api.openai.com/v1");

    // Example with structured outputs using JSON Schema
    let schema = json!({
        "type": "object",
        "properties": {
            "steps": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "explanation": {"type": "string"},
                        "output": {"type": "string"}
                    },
                    "required": ["explanation", "output"]
                }
            },
            "final_answer": {"type": "string"}
        },
        "required": ["steps", "final_answer"]
    });

    let messages = vec![
        json!({
            "role": "user",
            "content": "Solve this equation: 2x + 5 = 13"
        })
    ];

    let request = ChatCompletionRequest::new_json_schema(
        "gpt-4o-2024-08-06".to_string(),
        messages,
        "math_reasoning".to_string(),
        schema,
    )
    .temperature(0.7);

    let chat_response = openai.completions().create(request).await;

    match chat_response {
        Ok(chat) => println!("{}", json!(chat)),
        Err(err) => eprintln!("Error: {}", err),
    }
}
```
This simple example demonstrates how to generate chat completions using the SDK. For more detailed usage and additional endpoints, refer to the documentation.
For detailed information on all the available endpoints and their respective methods, please refer to the full SDK Documentation.
This SDK is licensed under the MIT License. For more details, see the LICENSE file.
Please note that this library still lacks detailed documentation and is missing some endpoints that the official API provides. You may wonder why I am creating this library when a Rust OpenAI library already exists: the existing one does not support image inputs, offers fewer functions, and is not actively maintained.
Happy coding with OpenAI and Rust! If you encounter any issues or have questions, feel free to open an issue on the GitHub repository. Contributions and improvements are always welcome.