| crate | oaapi |
|---|---|
| version | 0.2.0 |
| created_at | 2024-03-07 08:42:00.119615 |
| updated_at | 2024-03-21 12:29:23.437426 |
| description | An unofficial Rust client for the OpenAI API. |
| repository | https://github.com/mochi-neko/oaapi |
| id | 1165748 |
| size | 296,456 |
An unofficial Rust client for the OpenAI API.
Run the following Cargo command in your project directory:

```shell
cargo add oaapi
```

or add the following line to your `Cargo.toml`:

```toml
[dependencies]
oaapi = "0.2.0"
```
> [!NOTE]
> You need to enable the feature flags to use the corresponding APIs.

Beta version APIs:

- `chat`
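Feature flags can also be enabled when adding the dependency from the command line. A minimal sketch, assuming the `chat` feature listed above:

```shell
# Add oaapi with the `chat` feature enabled in one step.
cargo add oaapi --features chat
```

This is equivalent to editing the `features` array in `Cargo.toml` by hand.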
Create a `Client` with the API key and the other optional settings, then call an API method such as `Client::chat_complete`. An example of calling the chat completions API with the `chat` feature:
```toml
[dependencies]
oaapi = { version = "0.2.0", features = ["chat"] }
```
and with the API key set in the environment variable `OPENAI_API_KEY`:

```shell
OPENAI_API_KEY={your-openai-api-key}
```

is as follows:
```rust
use oaapi::Client;
use oaapi::chat::CompletionsRequestBody;
use oaapi::chat::SystemMessage;
use oaapi::chat::UserMessage;
use oaapi::chat::ChatModel;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // 1. Create a client with the API key from the environment variable: "OPENAI_API_KEY"
    let client = Client::from_env()?;
    // or specify the API key directly.
    // let client = Client::new(oaapi::ApiKey::new("your-api-key"), None, None);

    // 2. Create the request body parameters.
    let request_body = CompletionsRequestBody {
        messages: vec![
            SystemMessage::new("Prompt.", None).into(),
            UserMessage::new("Chat message from user.".into(), None).into(),
        ],
        model: ChatModel::Gpt35Turbo,
        ..Default::default()
    };

    // 3. Call the API.
    let response = client
        .chat_complete(request_body)
        .await?;

    // 4. Use the response.
    println!("Result:\n{}", response);

    Ok(())
}
```
See also the examples in the documentation of each feature module for more details, and the ./examples directory. See the CHANGELOG for release notes.
Licensed under either of the Apache License, Version 2.0 or the MIT license at your option.