| Crates.io | rllm |
|---|---|
| lib.rs | rllm |
| version | 1.1.0 |
| source | src |
| created_at | 2025-01-05 06:11:43.074743 |
| updated_at | 2025-01-08 21:05:53.031354 |
| description | A Rust library unifying multiple LLM backends |
| homepage | https://github.com/graniet/rllm |
| repository | https://github.com/graniet/rllm |
| max_upload_size | |
| id | 1504527 |
| size | 148,297 bytes |
RLLM is a Rust library that lets you use multiple LLM backends in a single project: OpenAI, Anthropic (Claude), Ollama, DeepSeek, xAI, and Phind. With a unified API and builder style, similar to the Stripe experience, you can easily create chat or text-completion requests without juggling a different struct and crate for each provider.

RLLM exposes two main traits (`ChatProvider` and `CompletionProvider`) to cover most use cases. Simply add RLLM to your `Cargo.toml`:
```toml
[dependencies]
rllm = { version = "1.1.0", features = ["openai", "anthropic", "ollama"] }
```
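The `CompletionProvider` side of the API follows the same builder pattern as the chat example further below. Here is a minimal text-completion sketch; `CompletionRequest::new`, the `completion` module path, and the `text` field are inferred from the trait names above and should be treated as assumptions rather than the exact API (check the crate docs):

```rust
use rllm::{
    builder::{LLMBackend, LLMBuilder},
    completion::{CompletionProvider, CompletionRequest}, // assumed module path
};

fn main() {
    let llm = LLMBuilder::new()
        .backend(LLMBackend::OpenAI)
        .api_key(std::env::var("OPENAI_API_KEY").unwrap_or("sk-TESTKEY".into()))
        .model("gpt-4o")
        .build()
        .expect("Failed to build LLM");

    // CompletionRequest::new and the `text` field are assumptions based on
    // the CompletionProvider trait name; verify against the crate docs.
    let req = CompletionRequest::new("The Rust borrow checker exists because");
    match llm.complete(&req) {
        Ok(resp) => println!("Completion:\n{}", resp.text),
        Err(e) => eprintln!("Completion error: {}", e),
    }
}
```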
The repository ships several runnable examples:

| Name | Description |
|---|---|
| `anthropic_example` | Demonstrates integration with Anthropic's Claude model for chat completion |
| `chain_example` | Shows how to create multi-step prompt chains for exploring programming language features |
| `deepseek_example` | Basic DeepSeek chat completion example with deepseek-chat models |
| `embedding_example` | Basic embedding example with OpenAI's API |
| `multi_backend_example` | Illustrates chaining multiple LLM backends (OpenAI, Anthropic, DeepSeek) together in a single workflow |
| `ollama_example` | Example of using local LLMs through Ollama integration |
| `openai_example` | Basic OpenAI chat completion example with GPT models |
| `phind_example` | Basic Phind chat completion example with Phind-70B model |
| `validator_example` | Basic validator example with Anthropic's Claude model |
| `xai_example` | Basic xAI chat completion example with Grok models |
| `evaluation_example` | Basic evaluation example with Anthropic, Phind and DeepSeek |
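Assuming you have cloned the repository and exported the relevant API key, an example can typically be run with `cargo run --example openai_example`; depending on how the examples are gated, you may also need to enable the matching backend feature, e.g. `--features openai`.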
Here's a basic example using OpenAI for chat completion. See the examples directory for other backends (Anthropic, Ollama, DeepSeek, xAI), embedding capabilities, and more advanced use cases.
```rust
use rllm::{
    builder::{LLMBackend, LLMBuilder},
    chat::{ChatMessage, ChatRole},
};

fn main() {
    // Configure the backend, model, and generation parameters in one place.
    let llm = LLMBuilder::new()
        .backend(LLMBackend::OpenAI) // or LLMBackend::Anthropic, LLMBackend::Ollama, LLMBackend::DeepSeek, LLMBackend::XAI, LLMBackend::Phind ...
        .api_key(std::env::var("OPENAI_API_KEY").unwrap_or("sk-TESTKEY".into()))
        .model("gpt-4o") // or model("claude-3-5-sonnet-20240620") or model("grok-2-latest") or model("deepseek-chat") or model("llama3.1") or model("Phind-70B") ...
        .max_tokens(1000)
        .temperature(0.7)
        .system("You are a helpful assistant.")
        .stream(false)
        .build()
        .expect("Failed to build LLM");

    // A short multi-turn conversation; earlier turns give the model context.
    let messages = vec![
        ChatMessage {
            role: ChatRole::User,
            content: "Tell me that you love cats".into(),
        },
        ChatMessage {
            role: ChatRole::Assistant,
            content: "I am an assistant, I cannot love cats but I can love dogs".into(),
        },
        ChatMessage {
            role: ChatRole::User,
            content: "Tell me that you love dogs in 2000 chars".into(),
        },
    ];

    match llm.chat(&messages) {
        Ok(text) => println!("Chat response:\n{}", text),
        Err(e) => eprintln!("Chat error: {}", e),
    }
}
```