My ChatGPT API Rust Client

A Rust library for interacting with OpenAI's ChatGPT API with streaming support. This library provides a simple and efficient way to communicate with OpenAI's API while handling streaming responses and token usage tracking.

Features

  • Streaming or non-streaming mode for API responses
  • Conversation memory to maintain chat history
  • Comprehensive error handling
  • Token usage tracking
  • Flexible output handling via callback functions
  • Type-safe API interactions
  • Async/await support

Version

Current version: 0.1.3

Requirements

  • Rust edition 2024
  • Dependencies:
    • reqwest 0.12.15 (with json, stream, rustls-tls features)
    • serde 1.0 (with derive feature)
    • serde_json 1.0
    • tokio 1.44.2 (with full features)
    • futures-util 0.3
    • dotenv 0.15

Installation

Add this to your Cargo.toml:

[dependencies]
my-chatgpt = { git = "https://github.com/bongkow/chatgpt-api", version = "0.1.3" }
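The usage example below also relies on tokio for the async runtime and serde_json for the raw response chunks, so a typical consumer manifest might look like this (versions taken from the requirements above):

[dependencies]
my-chatgpt = { git = "https://github.com/bongkow/chatgpt-api", version = "0.1.3" }
tokio = { version = "1.44.2", features = ["full"] }
serde_json = "1.0"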

Usage

use my_chatgpt::chat::{send_chat, ChatError, UsageInfo, ChatMessage};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "your-api-key";
    let model = "gpt-4";  // or any other supported model
    let instructions = "You are a helpful assistant.";
    
    // Define a handler function for the API responses
    let handler = |usage: Option<&UsageInfo>, error: Option<&ChatError>, raw_chunk: Option<&serde_json::Value>| {
        if let Some(e) = error {
            eprintln!("Error: {:?}", e);
        }
        
        if let Some(u) = usage {
            println!("Input tokens: {}", u.input_tokens.unwrap_or(0));
            println!("Output tokens: {}", u.output_tokens.unwrap_or(0));
            println!("Total tokens: {}", u.total_tokens.unwrap_or(0));
        }
        
        // Process raw chunks if needed
        if let Some(_chunk) = raw_chunk {
            // Do something with the raw chunk here
        }
    };
    
    // Initialize an empty chat history
    let mut chat_history: Vec<ChatMessage> = Vec::new();
    
    // First message
    let input1 = "Tell me about Rust programming language.";
    let response1 = send_chat(instructions, input1, api_key, model, true, handler, &mut chat_history).await?;
    println!("First response: {}", response1);
    
    // Follow-up question using chat history
    let input2 = "What are its main advantages over C++?";
    let response2 = send_chat(instructions, input2, api_key, model, true, handler, &mut chat_history).await?;
    println!("Second response: {}", response2);
    
    Ok(())
}
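
The api_key above is hard-coded for brevity. Since dotenv 0.15 is already among the dependencies, you can load the key from a .env file instead. A minimal sketch, assuming a .env entry named OPENAI_API_KEY (the variable name is illustrative):

use dotenv::dotenv;
use std::env;

fn load_api_key() -> String {
    // Read key-value pairs from a local .env file into the process environment.
    dotenv().ok();
    // OPENAI_API_KEY is an assumed variable name; use whatever your .env defines.
    env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY not set")
}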

Error Handling

The library provides a ChatError enum for different error cases:

pub enum ChatError {
    RequestError(String),    // Errors related to API requests
    ParseError(String),      // Errors in parsing responses
    NetworkError(String),    // Network-related errors
    Unknown(String),         // Other unexpected errors
}
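
A handler (or the calling code) can match on these variants to separate transient failures from permanent ones. A minimal sketch; the retry policy below is illustrative, not part of the library:

use my_chatgpt::chat::ChatError;

// Decide whether a failed request is worth retrying, by error variant.
fn should_retry(error: &ChatError) -> bool {
    match error {
        // Transient transport problems are usually safe to retry.
        ChatError::NetworkError(msg) => {
            eprintln!("network error, retrying: {}", msg);
            true
        }
        // A rejected request or an unparseable response will not improve on retry.
        ChatError::RequestError(msg) | ChatError::ParseError(msg) => {
            eprintln!("permanent error: {}", msg);
            false
        }
        ChatError::Unknown(msg) => {
            eprintln!("unknown error, not retrying: {}", msg);
            false
        }
    }
}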

Token Usage

Token usage information is provided via the UsageInfo struct:

pub struct UsageInfo {
    pub input_tokens: Option<u32>,    // Number of tokens in the input
    pub output_tokens: Option<u32>,   // Number of tokens in the output
    pub total_tokens: Option<u32>,    // Total tokens used
}
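
Since every field is an Option, aggregating usage across calls is easiest with a small accumulator that treats missing values as zero. A sketch; UsageTotals is an illustrative helper, not part of the library:

use my_chatgpt::chat::UsageInfo;

// Running token totals across several send_chat calls.
#[derive(Default)]
struct UsageTotals {
    input: u32,
    output: u32,
    total: u32,
}

impl UsageTotals {
    // Fold one UsageInfo into the totals, treating None as 0.
    fn add(&mut self, usage: &UsageInfo) {
        self.input += usage.input_tokens.unwrap_or(0);
        self.output += usage.output_tokens.unwrap_or(0);
        self.total += usage.total_tokens.unwrap_or(0);
    }
}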

Chat History

The library maintains conversation context through the ChatMessage struct:

pub struct ChatMessage {
    pub role: String,      // The role of the message sender (e.g., "user", "assistant", "system")
    pub content: String,   // The content of the message
}

When you pass a chat history to send_chat, the function automatically:

  1. Includes previous messages in the API request
  2. Updates the history with new messages
  3. Maintains context for more coherent multi-turn conversations
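
Because both fields are public, you can also seed the history yourself before the first call, for example to resume a saved conversation. A minimal sketch; seed_history is an illustrative helper, not part of the library:

use my_chatgpt::chat::ChatMessage;

// Rebuild context from a previously saved exchange before calling send_chat.
fn seed_history() -> Vec<ChatMessage> {
    vec![
        ChatMessage {
            role: "user".to_string(),
            content: "Tell me about Rust programming language.".to_string(),
        },
        ChatMessage {
            role: "assistant".to_string(),
            content: "Rust is a systems language focused on safety and performance.".to_string(),
        },
    ]
}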

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT
