| Field | Value |
|---|---|
| Crates.io | llm_stream |
| lib.rs | llm_stream |
| version | 0.3.1 |
| source | src |
| created_at | 2024-09-04 14:45:54.036001 |
| updated_at | 2024-09-15 00:19:39.182434 |
| description | A simple Rust library that streamlines streaming API interaction with LLMs, free from complex async operations and redundant dependencies. |
| homepage | |
| repository | https://github.com/cloudbridgeuy/llm-stream |
| max_upload_size | |
| id | 1363401 |
| size | 81,037 |
This library provides a streamlined approach to interacting with Large Language Model (LLM) streaming APIs from different providers.
Add the following dependency to your `Cargo.toml` file:

```toml
[dependencies]
llm-stream = "0.3.1"
```
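The quick-start example further down also uses `anyhow`, `futures`, `tokio`, and `env_logger`. A dependency set that compiles it might look like the following (the version numbers are illustrative, not prescribed by the library):

```toml
[dependencies]
llm-stream = "0.3.1"
anyhow = "1"
env_logger = "0.11"
futures = "0.3"
# "macros" and a runtime flavor are needed for #[tokio::main]
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```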
Here's a basic example demonstrating how to use the library to generate text with OpenAI's GPT-4o model:
```rust
use anyhow::Result;
use futures::stream::TryStreamExt;
use llm_stream::openai::{Auth, Client, Message, MessageBody, Role};
use std::io::Write;

#[tokio::main]
async fn main() -> Result<()> {
    env_logger::init();

    let key = std::env::var("OPENAI_API_KEY")?;
    let auth = Auth::new(key);
    let client = Client::new(auth, "https://api.openai.com/v1");

    let messages = vec![Message {
        role: Role::User,
        content: "What is the capital of the United States?".to_string(),
    }];
    let body = MessageBody::new("gpt-4o", messages);

    // `delta` returns a stream of incremental text chunks.
    let mut stream = client.delta(&body)?;

    // Propagate stream errors with `?` instead of silently stopping on them.
    while let Some(text) = stream.try_next().await? {
        print!("{text}");
        std::io::stdout().flush()?;
    }

    Ok(())
}
```
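The example prints each chunk and flushes stdout so text appears as it streams. If you instead need the complete response as a single `String`, the same loop can accumulate chunks as it prints. The sketch below isolates that pattern with a stand-in slice of chunks in place of the pieces yielded by `client.delta(&body)?`:

```rust
use std::io::Write;

// Print streamed chunks as they arrive while collecting them into one String.
// The `chunks` slice is a stand-in for the text pieces a real stream yields.
fn collect_and_print(chunks: &[&str]) -> std::io::Result<String> {
    let mut full = String::new();
    for text in chunks {
        print!("{text}");
        std::io::stdout().flush()?; // show each chunk immediately
        full.push_str(text);
    }
    println!();
    Ok(full)
}

fn main() -> std::io::Result<()> {
    let full = collect_and_print(&["Washington", ", ", "D.C."])?;
    assert_eq!(full, "Washington, D.C.");
    Ok(())
}
```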
For more in-depth examples and usage instructions, refer to the examples directory in the repository: `./lib/llm_stream/examples`.
Each provider requires an API key, typically set as an environment variable:

- `OPENAI_API_KEY`
- `GOOGLE_API_KEY`
- `ANTHROPIC_API_KEY`
- `MISTRAL_API_KEY`
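In the quick-start example, a missing variable surfaces as a terse `VarError` from `std::env::var`. A small hypothetical helper (not part of llm-stream) can turn that into a readable message naming the variable:

```rust
use std::env;

// Hypothetical helper: look up a provider API key and return a descriptive
// error when the environment variable is not set.
fn require_key(name: &str) -> Result<String, String> {
    env::var(name).map_err(|_| format!("environment variable `{name}` is not set"))
}

fn main() {
    match require_key("OPENAI_API_KEY") {
        Ok(_) => println!("OPENAI_API_KEY is set"),
        Err(e) => eprintln!("{e}"),
    }
}
```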