Crates.io | my-chatgpt |
lib.rs | my-chatgpt |
version | |
source | src |
created_at | 2025-05-07 23:15:38.513628+00 |
updated_at | 2025-05-08 22:47:25.040865+00 |
description | A simple API wrapper for the ChatGPT API |
homepage | https://github.com/bongkow/chatgpt-api |
repository | https://github.com/bongkow/chatgpt-api |
max_upload_size | |
id | 1664656 |
Cargo.toml error | TOML parse error at line 19, column 1, at `autolib = false`: unknown field `autolib`, expected one of `name`, `version`, `edition`, `authors`, `description`, `readme`, `license`, `repository`, `homepage`, `documentation`, `build`, `resolver`, `links`, `default-run`, `default_dash_run`, `rust-version`, `rust_dash_version`, `rust_version`, `license-file`, `license_dash_file`, `license_file`, `licenseFile`, `license_capital_file`, `forced-target`, `forced_dash_target`, `autobins`, `autotests`, `autoexamples`, `autobenches`, `publish`, `metadata`, `keywords`, `categories`, `exclude`, `include` |
size | 0 |
A Rust library for interacting with OpenAI's ChatGPT API with streaming support. This library provides a simple and efficient way to communicate with OpenAI's API while handling streaming responses and token usage tracking.
Current version: 0.1.3
Dependencies: 0.12.15 (with json, stream, rustls-tls features), 1.0 (with derive feature), 1.0, 1.44.2 (with full features), 0.3, 0.15
Add this to your Cargo.toml:
[dependencies]
my-chatgpt = { git = "https://github.com/bongkow/chatgpt-api", version = "0.1.3" }
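The usage example below uses an async main and a handler that receives serde_json::Value chunks, so your project also needs an async runtime and serde_json alongside the line above; a minimal sketch assuming Tokio (the versions are illustrative):
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
serde_json = "1"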
use my_chatgpt::chat::{send_chat, ChatError, UsageInfo, ChatMessage};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "your-api-key";
    let model = "gpt-4"; // or any other supported model
    let instructions = "You are a helpful assistant.";

    // Define a handler function for the API responses
    let handler = |usage: Option<&UsageInfo>, error: Option<&ChatError>, raw_chunk: Option<&serde_json::Value>| {
        if let Some(e) = error {
            eprintln!("Error: {:?}", e);
        }
        if let Some(u) = usage {
            println!("Input tokens: {}", u.input_tokens.unwrap_or(0));
            println!("Output tokens: {}", u.output_tokens.unwrap_or(0));
            println!("Total tokens: {}", u.total_tokens.unwrap_or(0));
        }
        // Process raw chunks if needed
        if let Some(_chunk) = raw_chunk {
            // Do something with the raw chunk
        }
    };

    // Initialize an empty chat history
    let mut chat_history: Vec<ChatMessage> = Vec::new();

    // First message
    let input1 = "Tell me about Rust programming language.";
    let response1 = send_chat(instructions, input1, api_key, model, true, handler, &mut chat_history).await?;
    println!("First response: {}", response1);

    // Follow-up question using chat history
    let input2 = "What are its main advantages over C++?";
    let response2 = send_chat(instructions, input2, api_key, model, true, handler, &mut chat_history).await?;
    println!("Second response: {}", response2);

    Ok(())
}
The library provides a ChatError enum for the different error cases:
pub enum ChatError {
    RequestError(String), // Errors related to API requests
    ParseError(String),   // Errors in parsing responses
    NetworkError(String), // Network-related errors
    Unknown(String),      // Other unexpected errors
}
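In the handler, you can branch on the variant to decide how to react; for example, a network error might be worth retrying while a parse error is not. A sketch, assuming the four variants above are the complete set (the retry policy itself is not part of the library):

use my_chatgpt::chat::ChatError;

// Decide whether a failed call is worth retrying (illustrative policy only).
fn should_retry(error: &ChatError) -> bool {
    match error {
        ChatError::NetworkError(msg) => {
            eprintln!("Network problem, retrying: {msg}");
            true
        }
        ChatError::RequestError(msg) | ChatError::ParseError(msg) | ChatError::Unknown(msg) => {
            eprintln!("Not retrying: {msg}");
            false
        }
    }
}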
Token usage information is provided via the UsageInfo struct:
pub struct UsageInfo {
    pub input_tokens: Option<u32>,  // Number of tokens in the input
    pub output_tokens: Option<u32>, // Number of tokens in the output
    pub total_tokens: Option<u32>,  // Total tokens used
}
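Because every field is an Option, it is convenient to fold each report from the handler into running totals. A small sketch of such a helper (not part of the library):

use my_chatgpt::chat::UsageInfo;

// Running token totals accumulated across several send_chat calls (sketch).
#[derive(Default)]
struct TokenTotals {
    input: u32,
    output: u32,
}

impl TokenTotals {
    fn add(&mut self, usage: &UsageInfo) {
        self.input += usage.input_tokens.unwrap_or(0);
        self.output += usage.output_tokens.unwrap_or(0);
    }
}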
The library maintains conversation context through the ChatMessage struct:
pub struct ChatMessage {
    pub role: String,    // The role of the message sender (e.g., "user", "assistant", "system")
    pub content: String, // The content of the message
}
When you pass a chat history to send_chat, the function automatically includes the previous messages in the request and appends the new user message and the assistant's reply to the history, so follow-up questions keep their context, as shown below.
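If you already have earlier turns (for example, restored from storage), you can seed the history yourself before the first call. A sketch, assuming ChatMessage can be constructed directly with the fields shown above:

use my_chatgpt::chat::ChatMessage;

fn main() {
    // Pre-populate the history with an earlier exchange; it would then be
    // passed as `&mut chat_history` to send_chat (sketch).
    let chat_history: Vec<ChatMessage> = vec![
        ChatMessage {
            role: "user".to_string(),
            content: "What is ownership in Rust?".to_string(),
        },
        ChatMessage {
            role: "assistant".to_string(),
            content: "Ownership is Rust's compile-time memory management model.".to_string(),
        },
    ];
    println!("History starts with {} messages", chat_history.len());
}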
Contributions are welcome! Please feel free to submit a Pull Request.
MIT