| Field | Value |
|---|---|
| Crates.io | rsllm |
| lib.rs | rsllm |
| version | 0.1.0 |
| created_at | 2025-08-11 10:09:18.578037+00 |
| updated_at | 2025-08-11 10:09:18.578037+00 |
| description | Rust-native LLM client library with multi-provider support and streaming capabilities |
| homepage | https://github.com/leval-ai/rrag/tree/main/crates/rsllm |
| repository | https://github.com/leval-ai/rrag |
| max_upload_size | |
| id | 1789879 |
| size | 171,730 |
RSLLM is a Rust-native client library for Large Language Models with multi-provider support, streaming capabilities, and type-safe interfaces.
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Application   │────▶│      RSLLM      │────▶│  LLM Provider   │
│   (RRAG, etc)   │     │      Client     │     │  (OpenAI/etc)   │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                 │
                                 ▼
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│    Streaming    │◀────│    Provider     │◀────│    HTTP/API     │
│    Response     │     │   Abstraction   │     │    Transport    │
└─────────────────┘     └─────────────────┘     └─────────────────┘
```
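Because every backend sits behind the same `Client`, application code can stay provider-agnostic. A minimal sketch of that idea, built only from the `chat_completion` / `ChatMessage` API used in the examples below (it assumes `chat_completion` borrows the client, as the quick-start usage suggests):

```rust
use rsllm::{ChatMessage, Client, MessageRole};

// Works against whatever provider the client was configured with;
// the caller never needs to know which backend answers.
async fn ask(client: &Client, prompt: &str) -> Result<String, Box<dyn std::error::Error>> {
    let messages = vec![ChatMessage::new(MessageRole::User, prompt)];
    let response = client.chat_completion(messages).await?;
    Ok(response.content)
}
```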
Add RSLLM to your Cargo.toml:
```toml
[dependencies]
rsllm = "0.1"
tokio = { version = "1.0", features = ["full"] }
futures = "0.3" # needed for StreamExt in the streaming example below
```
A basic chat completion:

```rust
use rsllm::{Client, Provider, ChatMessage, MessageRole};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create client with OpenAI provider
    let client = Client::builder()
        .provider(Provider::OpenAI)
        .api_key("your-api-key")
        .model("gpt-4")
        .build()?;

    // Simple chat completion
    let messages = vec![
        ChatMessage::new(MessageRole::User, "What is Rust?")
    ];

    let response = client.chat_completion(messages).await?;
    println!("Response: {}", response.content);

    Ok(())
}
```
For streaming, use `chat_completion_stream` and consume the chunks as they arrive:

```rust
use rsllm::{Client, Provider, ChatMessage, MessageRole};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .provider(Provider::OpenAI)
        .api_key("your-api-key")
        .model("gpt-4")
        .build()?;

    let messages = vec![
        ChatMessage::new(MessageRole::User, "Tell me a story")
    ];

    let mut stream = client.chat_completion_stream(messages).await?;

    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?.content);
    }

    Ok(())
}
```
Switching providers only changes how the client is built:

```rust
use rsllm::{Client, Provider};

// OpenAI
let openai_client = Client::builder()
    .provider(Provider::OpenAI)
    .api_key("openai-api-key")
    .model("gpt-4")
    .build()?;

// Anthropic Claude
let claude_client = Client::builder()
    .provider(Provider::Claude)
    .api_key("claude-api-key")
    .model("claude-3-sonnet")
    .build()?;

// Local Ollama
let ollama_client = Client::builder()
    .provider(Provider::Ollama)
    .base_url("http://localhost:11434")
    .model("llama3.1")
    .build()?;
```
RSLLM supports extensive configuration options:
```rust
use rsllm::{Client, Provider};
use std::time::Duration;

let client = Client::builder()
    .provider(Provider::OpenAI)
    .api_key("your-api-key")
    .model("gpt-4")
    .base_url("https://api.openai.com/v1")
    .timeout(Duration::from_secs(60))
    .max_tokens(4096)
    .temperature(0.7)
    .build()?;
```
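Outside of quick experiments you will usually want to read the API key from the environment rather than hardcode it. A small sketch using the same builder as above (the `OPENAI_API_KEY` variable name and the `build_client` helper are just examples, and it assumes the builder yields a `Client`):

```rust
use rsllm::{Client, Provider};

fn build_client() -> Result<Client, Box<dyn std::error::Error>> {
    // Read the key at startup so a missing variable fails fast
    // and the key never lands in source control.
    let api_key = std::env::var("OPENAI_API_KEY")?;

    let client = Client::builder()
        .provider(Provider::OpenAI)
        .api_key(api_key.as_str())
        .model("gpt-4")
        .build()?;

    Ok(client)
}
```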
| Provider | Status | Models | Streaming |
|---|---|---|---|
| OpenAI | ✅ | GPT-4, GPT-3.5 | ✅ |
| Anthropic Claude | ✅ | Claude-3 (Sonnet, Opus, Haiku) | ✅ |
| Ollama | ✅ | Llama, Mistral, CodeLlama | ✅ |
| Azure OpenAI | 🚧 | GPT-4, GPT-3.5 | 🚧 |
| Cohere | 📋 | Command | 📋 |
| Google Gemini | 📋 | Gemini Pro | 📋 |

Legend: ✅ Supported | 🚧 In Progress | 📋 Planned
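Because streaming is exposed through the same `chat_completion_stream` call for every supported backend, moving a streaming workload from OpenAI to Claude is just a builder change. A sketch combining the pieces shown above:

```rust
use futures::StreamExt;
use rsllm::{ChatMessage, Client, MessageRole, Provider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Same builder pattern as before, pointed at Claude instead of OpenAI.
    let client = Client::builder()
        .provider(Provider::Claude)
        .api_key("claude-api-key")
        .model("claude-3-sonnet")
        .build()?;

    let messages = vec![ChatMessage::new(MessageRole::User, "Tell me a story")];

    // The streaming loop is identical to the OpenAI example above.
    let mut stream = client.chat_completion_stream(messages).await?;
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?.content);
    }

    Ok(())
}
```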
Optional capabilities are enabled through Cargo features:

```toml
[dependencies.rsllm]
version = "0.1"
features = [
    "openai",      # OpenAI provider support
    "claude",      # Anthropic Claude support
    "ollama",      # Ollama local model support
    "streaming",   # Streaming response support
    "json-schema", # JSON schema support for structured outputs
]
```
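If you only target a single backend, the feature list can be trimmed accordingly. For example, a local-only setup might enable just the Ollama and streaming features (whether any features are enabled by default is not stated here, so treat this as a sketch):

```toml
[dependencies.rsllm]
version = "0.1"
features = ["ollama", "streaming"]
```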
RSLLM is designed to work seamlessly with the RRAG framework:
```rust
use rrag::prelude::*;
use rsllm::{Client, Provider};

let llm_client = Client::builder()
    .provider(Provider::OpenAI)
    .api_key("your-api-key")
    .build()?;

let rag_system = RragSystemBuilder::new()
    .with_llm_client(llm_client)
    .build()
    .await?;
```
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Please see our Contributing Guidelines for details.
Part of the RRAG ecosystem - Build powerful RAG applications with Rust.