| Crates.io | my-chatgpt |
| lib.rs | my-chatgpt |
| version | 0.3.1 |
| created_at | 2025-05-07 23:15:38.513628+00 |
| updated_at | 2025-05-19 02:40:59.485169+00 |
| description | A simple API wrapper for the ChatGPT API |
| homepage | https://github.com/bongkow/chatgpt-api |
| repository | https://github.com/bongkow/chatgpt-api |
| max_upload_size | |
| id | 1664656 |
| size | 72,362 |
A Rust library for interacting with OpenAI's ChatGPT API, with streaming support. It provides a simple and efficient way to communicate with OpenAI's API while handling streaming responses, token usage tracking, and text embeddings.
Current version: 0.3.1
The crate depends on:

| Dependency | Version |
| reqwest | 0.11 (with json, stream features) |
| serde | 1.0 (with derive feature) |
| serde_json | 1.0 |
| tokio | 1.0 (with full features) |
| futures | 0.3 |

Add this to your Cargo.toml:
```toml
[dependencies]
my-chatgpt = { git = "https://github.com/bongkow/chatgpt-api", version = "0.3.1" }
```
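The crate is also published on crates.io, so a plain registry dependency should work as well:

```toml
[dependencies]
my-chatgpt = "0.3.1"
```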
Basic chat usage, with a streaming handler and a follow-up turn chained to the previous response ID:

```rust
use my_chatgpt::response::{send_chat, ResponseError, UsageInfo, SendChatResult};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "your-api-key";
    let model = "gpt-4o"; // or any other supported model
    let instructions = "You are a helpful assistant.";

    // Handler invoked for streamed chunks, usage reports, and errors
    fn handler(_usage: Option<&UsageInfo>, error: Option<&ResponseError>, chunk: Option<&serde_json::Value>) {
        if let Some(err) = error {
            println!("Error: {:?}", err);
        }
        if let Some(chunk_data) = chunk {
            println!("Received chunk: {:?}", chunk_data);
        }
    }

    // First message
    let input1 = "Tell me about Rust programming language.";
    let response1 = match send_chat(instructions, input1, api_key, model, &handler, None).await {
        SendChatResult::Ok(response) => {
            println!("First response: {}", response.message);
            println!("Model used: {}", response.model);
            println!("Response ID: {}", response.id);
            response
        },
        SendChatResult::Err(e) => panic!("Error: {:?}", e),
    };

    // Follow-up question using the previous response ID
    let input2 = "What are its main advantages over C++?";
    match send_chat(instructions, input2, api_key, model, &handler, Some(&response1.id)).await {
        SendChatResult::Ok(response) => {
            println!("Second response: {}", response.message);
            println!("Model used: {}", response.model);
            println!("Response ID: {}", response.id);
        },
        SendChatResult::Err(e) => panic!("Error: {:?}", e),
    };

    Ok(())
}
```
To generate text embeddings:

```rust
use my_chatgpt::embedding::{get_embedding, EmbeddingModel};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "your-api-key";

    // Choose an embedding model
    let model = EmbeddingModel::TextEmbedding3Small; // or EmbeddingModel::TextEmbedding3Large

    // Get the embedding for a text
    let text = "This is a sample text for embedding.";
    let embedding_result = get_embedding(text, model, &api_key).await?;

    println!("Model used: {}", embedding_result.model);
    println!("Embedding dimensions: {}", embedding_result.embedding.len());
    println!("First 5 values: {:?}", &embedding_result.embedding[0..5]);
    Ok(())
}
```
The library provides a ResponseError enum for different error cases:
```rust
pub enum ResponseError {
    RequestError(String), // Errors related to API requests
    ParseError(String),   // Errors in parsing responses
    NetworkError(String), // Network-related errors
    Unknown(String),      // Other unexpected errors
}
```
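A minimal sketch of branching on these cases, e.g. to decide whether a retry is worthwhile (the `describe` helper is hypothetical, not part of the crate):

```rust
use my_chatgpt::response::ResponseError;

// Hypothetical helper: map each error case to a short, human-readable hint.
fn describe(err: &ResponseError) -> &'static str {
    match err {
        ResponseError::RequestError(_) => "the API rejected the request; check the key and model",
        ResponseError::ParseError(_) => "the response body could not be parsed",
        ResponseError::NetworkError(_) => "a connection problem; retrying may help",
        ResponseError::Unknown(_) => "an unexpected failure",
    }
}
```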
Detailed token usage information is provided via the UsageInfo struct:

```rust
pub struct UsageInfo {
    pub input_tokens: Option<u32>,
    pub input_tokens_details: Option<InputTokensDetails>,
    pub output_tokens: Option<u32>,
    pub output_tokens_details: Option<OutputTokensDetails>,
    pub total_tokens: Option<u32>,
}

pub struct InputTokensDetails {
    pub cached_tokens: Option<u32>,
}

pub struct OutputTokensDetails {
    pub reasoning_tokens: Option<u32>,
}
```
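Every field is optional, so each level has to be unwrapped before reading a counter. A small hypothetical helper (not part of the crate) that a handler could call whenever usage data arrives:

```rust
use my_chatgpt::response::UsageInfo;

// Hypothetical helper: print whichever usage counters the API actually returned.
fn log_usage(usage: &UsageInfo) {
    if let Some(total) = usage.total_tokens {
        println!("total tokens: {}", total);
    }
    if let Some(details) = &usage.input_tokens_details {
        if let Some(cached) = details.cached_tokens {
            println!("cached input tokens: {}", cached);
        }
    }
    if let Some(details) = &usage.output_tokens_details {
        if let Some(reasoning) = details.reasoning_tokens {
            println!("reasoning output tokens: {}", reasoning);
        }
    }
}
```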
The SendChatOK struct provides detailed response information:

```rust
pub struct SendChatOK {
    pub message: String,  // The response message
    pub id: String,       // Unique response identifier
    pub model: String,    // Model used for the response
    pub usage: UsageInfo, // Detailed token usage information
    pub tools: Vec<Tool>, // List of tools used in the response
}
```
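After a successful call you might condense these fields into a single log line, for example (a sketch; assumes the field layout shown above):

```rust
use my_chatgpt::response::SendChatOK;

// Hypothetical helper: one-line summary of a completed chat turn.
fn summarize(response: &SendChatOK) -> String {
    format!(
        "model={} id={} tools_used={}",
        response.model,
        response.id,
        response.tools.len()
    )
}
```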
The Embedding structure provides results from embedding requests:
```rust
pub struct Embedding {
    pub embedding: Vec<f32>, // Vector of embedding values
    pub model: String,       // Name of the model used
    pub input: String,       // Original input text
}
```
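Embedding vectors are typically compared with cosine similarity; a minimal standalone helper (not part of this crate) might look like:

```rust
// Hypothetical helper: cosine similarity between two embedding vectors,
// e.g. a query embedding against a stored document embedding.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len(), "compare embeddings from the same model only");
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}
```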
The library supports various GPT-4 models, such as gpt-4o. Each model is automatically configured with appropriate tools (e.g., web search preview) based on its capabilities. For embeddings, it supports OpenAI's text embedding models, text-embedding-3-small and text-embedding-3-large, selected via the EmbeddingModel enum shown above.
Contributions are welcome! Please feel free to submit a Pull Request.
License: MIT