| Crates.io | openai-rust2 |
| lib.rs | openai-rust2 |
| version | 1.6.0 |
| created_at | 2025-01-23 19:01:44.186217+00 |
| updated_at | 2025-05-28 04:34:34.707358+00 |
| description | An unofficial library for the OpenAI API |
| homepage | |
| repository | https://github.com/cloudllm-ai/openai-rust |
| max_upload_size | |
| id | 1528108 |
| size | 88,499 |
This is an unofficial library for interacting with the OpenAI API. The goal of this crate is to support the entire API while matching the official documentation as closely as possible.
```rust
// Use the chat completions endpoint against OpenAI's default base URL.
use openai_rust2 as openai_rust; // this crate is a fork of openai_rust

let client = openai_rust::Client::new(&std::env::var("OPENAI_API_KEY").unwrap());
let args = openai_rust::chat::ChatArguments::new("gpt-3.5-turbo", vec![
    openai_rust::chat::Message {
        role: "user".to_owned(),
        content: "Hello GPT!".to_owned(),
    },
]);
let res = client.create_chat(args).await.unwrap();
println!("{}", res);
```
Here is another example connecting to a local LLM server (using Ollama's default base URL):

```rust
use openai_rust2 as openai_rust; // this crate is a fork of openai_rust

let client = openai_rust::Client::new_with_base_url(
    "", // no API key is needed for a default local Ollama instance
    "http://localhost:11434",
);
```
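Once the client points at a local Ollama server, chat requests go through the same API as in the OpenAI example above. A minimal sketch, assuming a model such as `llama3` has already been pulled locally (the model name is illustrative, not part of this crate):

```rust
use openai_rust2 as openai_rust; // this crate is a fork of openai_rust

#[tokio::main]
async fn main() {
    let client = openai_rust::Client::new_with_base_url(
        "", // no API key needed for a default local Ollama instance
        "http://localhost:11434",
    );
    // "llama3" is an example model name; use whatever `ollama pull` fetched.
    let args = openai_rust::chat::ChatArguments::new("llama3", vec![
        openai_rust::chat::Message {
            role: "user".to_owned(),
            content: "Hello from a local model!".to_owned(),
        },
    ]);
    let res = client.create_chat(args).await.unwrap();
    println!("{}", res);
}
```

This requires a running Ollama instance on the default port; the request itself is identical to the hosted-API case, only the base URL and model name change.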
You can run this code as an example with `OPENAI_API_KEY=(your key) cargo run --example chat`.
Check out the examples directory for more usage examples. You can find documentation on docs.rs.