| Crates.io | openai-interface |
| lib.rs | openai-interface |
| version | 0.5.0 |
| created_at | 2025-09-15 15:46:15.199124+00 |
| updated_at | 2025-11-11 03:16:01.69388+00 |
| description | A low-level Rust interface for the OpenAI API |
| homepage | https://gitcode.com/astral-sphere/openai-interface |
| repository | https://gitcode.com/astral-sphere/openai-interface |
| max_upload_size | |
| id | 1840212 |
| size | 237,352 |
A low-level Rust interface for interacting with OpenAI's API. Both streaming and non-streaming APIs are supported.
Currently, only chat completion and file management are supported. Image generation and other features are still in development.
Repository:
GitCode: GitCode Repo
GitHub: GitHub Repo
You are welcome to contribute to this project through any of the links above.
Add this to your Cargo.toml:
[dependencies]
openai-interface = "0.5.0"
This crate provides methods for both streaming and non-streaming chat completions. The following examples demonstrate how to use these features.
use std::sync::LazyLock;
use openai_interface::chat::create::request::{Message, RequestBody};
use openai_interface::chat::create::response::no_streaming::ChatCompletion;
use std::str::FromStr;

// You need to provide your own DeepSeek API key at /keys/deepseek_domestic_key
static DEEPSEEK_API_KEY: LazyLock<&str> =
    LazyLock::new(|| include_str!("../keys/deepseek_domestic_key").trim());
const DEEPSEEK_CHAT_URL: &'static str = "https://api.deepseek.com/chat/completions";
const DEEPSEEK_MODEL: &'static str = "deepseek-chat";

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let request = RequestBody {
        messages: vec![
            Message::System {
                content: "You are a helpful assistant.".to_string(),
                name: None,
            },
            Message::User {
                content: "Hello, how are you?".to_string(),
                name: None,
            },
        ],
        model: DEEPSEEK_MODEL.to_string(),
        stream: false,
        ..Default::default()
    };

    // Send the request and collect the complete (non-streaming) response body.
    let response: String = request
        .get_response(DEEPSEEK_CHAT_URL, &*DEEPSEEK_API_KEY)
        .await?;

    // Parse the raw JSON into a typed ChatCompletion and read the first choice.
    let chat_completion = ChatCompletion::from_str(&response).unwrap();
    let text = chat_completion.choices[0]
        .message
        .content
        .as_deref()
        .unwrap();
    println!("{:?}", text);

    Ok(())
}
This example demonstrates how to handle streaming responses from the API.
use openai_interface::chat::create::request::{Message, RequestBody};
use openai_interface::chat::create::response::streaming::{CompletionContent, ChatCompletionChunk};
use futures_util::StreamExt;
use std::str::FromStr;
use std::sync::LazyLock;

// You need to provide your own DeepSeek API key at /keys/deepseek_domestic_key
static DEEPSEEK_API_KEY: LazyLock<&str> =
    LazyLock::new(|| include_str!("../keys/deepseek_domestic_key").trim());
const DEEPSEEK_CHAT_URL: &'static str = "https://api.deepseek.com/chat/completions";
const DEEPSEEK_MODEL: &'static str = "deepseek-chat";

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let request = RequestBody {
        messages: vec![
            Message::System {
                content: "You are a helpful assistant.".to_string(),
                name: None,
            },
            Message::User {
                content: "Who are you?".to_string(),
                name: None,
            },
        ],
        model: DEEPSEEK_MODEL.to_string(),
        stream: true,
        ..Default::default()
    };

    // Send the request and obtain a stream of server-sent events.
    let mut response_stream = request
        .get_stream_response(DEEPSEEK_CHAT_URL, *DEEPSEEK_API_KEY)
        .await?;

    let mut message = String::new();
    while let Some(chunk_result) = response_stream.next().await {
        let chunk_string = chunk_result?;
        if &chunk_string == "[DONE]" {
            // SSE stream ends.
            break;
        }

        // Each chunk is a JSON payload carrying a delta for the first choice.
        let chunk = ChatCompletionChunk::from_str(&chunk_string).unwrap();
        let content: &String = match chunk.choices[0].delta.content.as_ref().unwrap() {
            CompletionContent::Content(s) => s,
            // `ReasoningContent` is a field from DeepSeek.
            CompletionContent::ReasoningContent(s) => s,
        };
        println!("{}", content);
        message.push_str(content);
    }

    println!("{}", message);
    Ok(())
}
You can customize request parameters as needed. If you require provider-specific fields, you can add them to extra_body or extra_body_map, as sketched below.
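A minimal sketch, assuming extra_body_map is an optional map from field names to serde_json::Value (check the crate documentation for the actual field type); the enable_thinking parameter is hypothetical, and serde_json is assumed to be a dependency of your project:

use std::collections::HashMap;
use openai_interface::chat::create::request::{Message, RequestBody};

fn request_with_provider_extras() -> RequestBody {
    // Hypothetical provider-specific parameter; replace it with whatever your
    // provider actually accepts. The type of `extra_body_map` is an assumption
    // here, not taken from the crate's documentation.
    let mut extras: HashMap<String, serde_json::Value> = HashMap::new();
    extras.insert("enable_thinking".to_string(), serde_json::json!(false));

    RequestBody {
        messages: vec![Message::User {
            content: "Hello, how are you?".to_string(),
            name: None,
        }],
        model: "deepseek-chat".to_string(),
        extra_body_map: Some(extras),
        ..Default::default()
    }
}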
The crate is organized into the following modules:
[chat]: Contains all chat completion related structs, enums, and methods.
[completion]: Contains all completion related structs, enums, and methods. Note that this API is being deprecated in favour of chat and is only available for outdated LLM models.
[files]: Provides the ability to upload and manage files.
[rest]: Provides all REST-related traits and methods.
[errors]: Defines error types used throughout the crate. All errors are converted into crate::error::OapiError.
This crate is designed to work with musl libc, making it suitable for lightweight deployments in containerized environments. Longer compile times may be required as OpenSSL needs to be built from source.
To build for musl:
rustup target add x86_64-unknown-linux-musl
cargo build --target x86_64-unknown-linux-musl
This crate aims to support standard OpenAI-compatible API endpoints. Unfortunately, OpenAI aggressively restricts access from the People's Republic of China, so the implementation has been tested primarily with DeepSeek and Qwen. Please open an issue if you find any mistakes or inaccuracies in the implementation.
Contributions are welcome! Please feel free to submit pull requests or open issues for bugs and feature requests.
This project is licensed under the AGPL-3.0 License - see the LICENSE file for details.