| Field | Value |
|---|---|
| Crates.io | async-llm |
| lib.rs | async-llm |
| version | 0.1.4 |
| created_at | 2025-01-22 11:16:10.813028+00 |
| updated_at | 2025-01-23 18:57:43.395785+00 |
| description | A Rust library for OpenAI-compatible APIs |
| homepage | https://github.com/quanhua92/async-llm |
| repository | https://github.com/quanhua92/async-llm |
| max_upload_size | |
| id | 1526538 |
| size | 200,301 |
async-llm is a Rust library for working with OpenAI-compatible providers, including OpenAI, Gemini, OpenRouter, and Ollama.
Note: This repository is currently a work-in-progress and is under active development. As such, breaking changes may occur frequently. Please proceed with caution if using this code in production or for critical projects. We recommend checking the commit history and pull requests for the latest updates. Contributions, feedback, and issue reports are welcome! 🚧
Why async-llm?

Relying solely on OpenAI isn't ideal for every application. You can find numerous forum discussions about issues with OpenAI billing and availability. Using OpenAI-compatible providers gives you more options and flexibility. However, working with these APIs can be tricky due to differences in their specifications.
While some crates focus only on OpenAI or attempt a one-size-fits-all approach, async-llm takes a balanced path: you can work with multiple OpenAI-compatible APIs while maintaining a clean and consistent codebase.
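For instance, the same request code can target a local Ollama server instead of OpenAI. The sketch below assumes the default client is configured through environment variables; the variable names (`OPENAI_BASE_URL`, `OPENAI_API_KEY`) and the Ollama endpoint shown are assumptions for illustration, not confirmed crate behavior — check the crate's configuration types for the names it actually reads.

```rust
use async_llm::{ChatMessage, ChatRequest, Error};

// Hypothetical provider switch: the same request code, pointed at a local
// Ollama server. Assumes the default client reads its endpoint and API key
// from environment variables (names below are assumptions, not confirmed):
//
//   export OPENAI_BASE_URL=http://localhost:11434/v1
//   export OPENAI_API_KEY=ollama   # Ollama ignores the key
async fn example_ollama() -> Result<(), Error> {
    let request = ChatRequest::new(
        "llama3.2", // any model already pulled into the local Ollama instance
        vec![ChatMessage::user("Say hello from a local model")],
    );
    let response = request.send().await?;
    println!("{}", response.to_string_pretty()?);
    Ok(())
}
```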
The repository includes example programs for several providers:

| Name | Description |
|---|---|
| `openai` | OpenAI example |
| `openrouter` | OpenRouter example |
| `ollama` | Ollama example |
| `gemini` | Gemini example |
Basic usage:

```rust
use async_llm::{ChatMessage, ChatRequest, Error}; // assuming these are re-exported at the crate root

async fn example_basic() -> Result<(), Error> {
    // Build a chat completion request for an OpenAI-compatible model.
    let request = ChatRequest::new(
        "gpt-4o-mini",
        vec![
            ChatMessage::system("You are a helpful assistant"),
            ChatMessage::user("Who are you?"),
        ],
    );
    tracing::info!("request: \n{}", request.to_string_pretty()?);

    // Send the request and log the full (non-streaming) response.
    let response = request.send().await?;
    tracing::info!("response: \n{}", response.to_string_pretty()?);
    Ok(())
}
```
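To run `example_basic` and see its `tracing::info!` output, you need an async runtime and a tracing subscriber. A minimal sketch, assuming `tokio` and `tracing-subscriber` as dependencies (neither is dictated by async-llm itself):

```rust
// Hypothetical entry point; `tokio` and `tracing-subscriber` are assumed
// dependencies chosen for this sketch, not requirements of async-llm.
#[tokio::main]
async fn main() -> Result<(), Error> {
    // Without a subscriber, the `tracing::info!` calls above print nothing.
    tracing_subscriber::fmt::init();
    example_basic().await
}
```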
Streaming usage:

```rust
use futures::StreamExt; // assumed: provides `.next()` for consuming the response stream

async fn example_basic_stream() -> Result<(), Error> {
    // Same request as above, but with streaming enabled.
    let request = ChatRequest::new(
        "gpt-4o-mini",
        vec![
            ChatMessage::system("You are a helpful assistant"),
            ChatMessage::user("Who are you?"),
        ],
    )
    .with_stream();
    tracing::info!("request: \n{}", request.to_string_pretty()?);

    // Consume the stream chunk by chunk as the provider produces tokens.
    let mut response = request.send_stream().await?;
    while let Some(result) = response.next().await {
        match result {
            Ok(response) => {
                tracing::info!("response: \n{}", response.to_string_pretty()?);
            }
            Err(e) => {
                tracing::error!("error = \n {e}");
            }
        }
    }
    Ok(())
}
```
We are actively working to address known issues and rough edges. If you encounter any problems or have suggestions, please feel free to open an issue or contribute a fix! 🛠️
Development

- Install `just`: `cargo install just`
- Install additional tools: `just install`
- Start development: `just dev`
- Start development with an example name: `just dev ollama`
- Start development with an example name and `RUST_LOG=trace`: `just trace ollama`
- Run tests: `just test`
- Run selected tests with debug tracing: `just test-one chat`
- Generate test data: `just generate`