| Crates.io | turbine-llm |
| lib.rs | turbine-llm |
| version | 0.2.2 |
| created_at | 2025-10-13 11:33:28.210087+00 |
| updated_at | 2025-12-23 07:59:34.442971+00 |
| description | Unified Rust interface for multiple LLM providers with growing model support |
| homepage | https://github.com/Renaiss-AI/turbine-llm |
| repository | https://github.com/Renaiss-AI/turbine-llm |
| max_upload_size | |
| id | 1880402 |
| size | 131,963 |
One interface, all LLMs - A unified Rust library for calling multiple LLM providers with growing model support.
🚀 Switch between OpenAI, Anthropic, Gemini, and Groq with minimal code changes. Perfect for building AI applications that need provider flexibility.
Renaiss AI - Enterprise AI Research Lab
Turbine LLM is developed and maintained with support from Renaiss AI, bridging the gap between AI potential and business reality.
Currently integrated: OpenAI, Anthropic, Gemini, and Groq.
Coming soon:
New providers and models added regularly. Check CHANGELOG.md for updates.
Add this to your Cargo.toml:
[dependencies]
turbine-llm = "0.2"
tokio = { version = "1", features = ["full"] }
The easiest way to get started - just pass a model string:
use turbine_llm::TurbineClient;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create client with model string - provider is automatically detected
    let client = TurbineClient::from_model("openai/gpt-4o-mini")?;

    // Simple one-liner to send a message
    let response = client.send("What is Rust?").await?;
    println!("{}", response.content);

    Ok(())
}
Supported model string formats:
"openai/gpt-4o-mini", "google/gemini-flash", "anthropic/claude-3-5-sonnet""gpt-4o", "claude-3-5-sonnet", "gemini-flash", "llama-3.3-70b"If the API key isn't in your environment, you'll be prompted to enter it interactively.
With system prompt:
let response = client.send_with_system(
    "You are a Rust expert",
    "Explain ownership in one sentence"
).await?;
For advanced use cases, use the traditional builder pattern:
use turbine_llm::{TurbineClient, LLMRequest, Message, Provider};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Set API key in environment first
    // export OPENAI_API_KEY="your-key"
    let client = TurbineClient::new(Provider::OpenAI)?;

    let request = LLMRequest::new("gpt-4o-mini")
        .with_system_prompt("You are a helpful assistant.")
        .with_message(Message::user("What is Rust?"))
        .with_max_tokens(100);

    let response = client.send_request(&request).await?;
    println!("{}", response.content);

    Ok(())
}
Request structured JSON from any provider:
use turbine_llm::{TurbineClient, LLMRequest, Message, OutputFormat, Provider};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = TurbineClient::new(Provider::Anthropic)?;

    let request = LLMRequest::new("claude-3-5-sonnet-20241022")
        .with_system_prompt("Return data as JSON.")
        .with_message(Message::user("Info about Paris with keys: name, country, population"))
        .with_output_format(OutputFormat::Json);

    let response = client.send_request(&request).await?;
    println!("{}", response.content);

    Ok(())
}
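The structured result still arrives as plain text in response.content, so you parse it yourself. A minimal sketch continuing the example above, assuming serde_json = "1" is added to Cargo.toml and that the model returned bare JSON (some models wrap it in Markdown fences):
use serde_json::Value;

// Parse the JSON text returned by the model into a generic value.
let data: Value = serde_json::from_str(&response.content)?;
println!("Name: {}", data["name"]);
println!("Country: {}", data["country"]);
println!("Population: {}", data["population"]);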
let request = LLMRequest::new("gemini-2.0-flash-exp")
.with_messages(vec![
Message::user("Hello! My name is Alice."),
Message::assistant("Hello Alice! Nice to meet you."),
Message::user("What's my name?"),
]);
The simplified API at a glance:
// Automatic provider detection from model string
let client = TurbineClient::from_model("openai/gpt-4o-mini")?;
// Simple message sending
let response = client.send("Your message").await?;
// With system prompt
let response = client.send_with_system("System prompt", "User message").await?;
Pass the API key directly instead of using environment variables:
// With provider enum
let client = TurbineClient::new_with_key(Provider::OpenAI, "sk-xxx");
// With model string
let client = TurbineClient::from_model_with_key("openai/gpt-4o", "sk-xxx")?;
let response = client.send("Hello").await?;
The traditional API at a glance:
let client = TurbineClient::new(Provider::OpenAI)?;
let response = client.send_request(&request).await?;
Select which LLM provider to use (for traditional API):
pub enum Provider {
    OpenAI,    // Requires OPENAI_API_KEY
    Anthropic, // Requires ANTHROPIC_API_KEY
    Gemini,    // Requires GEMINI_API_KEY
    Groq,      // Requires GROQ_API_KEY
}
Note: When using from_model(), the provider is automatically detected from the model string.
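A minimal sketch of picking a provider at runtime, e.g. from your own configuration (the mapping helper below is hypothetical, not part of the library):
use turbine_llm::{Provider, TurbineClient};

// Hypothetical helper: map a configuration string onto the Provider enum.
fn provider_from_name(name: &str) -> Option<Provider> {
    match name {
        "openai" => Some(Provider::OpenAI),
        "anthropic" => Some(Provider::Anthropic),
        "gemini" => Some(Provider::Gemini),
        "groq" => Some(Provider::Groq),
        _ => None,
    }
}

let provider = provider_from_name("anthropic").expect("unknown provider");
let client = TurbineClient::new(provider)?;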
Construct requests with optional parameters:
LLMRequest::new("model-name")
.with_system_prompt("System prompt") // Optional
.with_message(Message::user("Query")) // Add single message
.with_messages(vec![...]) // Add multiple messages
.with_max_tokens(1000) // Optional, default: 1024
.with_temperature(0.7) // Optional, 0.0-2.0
.with_top_p(0.9) // Optional
.with_output_format(OutputFormat::Json) // Text (default) or Json
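Putting several of these together (a minimal sketch; the parameter values are only illustrative):
use turbine_llm::{LLMRequest, Message, OutputFormat};

let request = LLMRequest::new("gpt-4o-mini")
    .with_system_prompt("You are a concise assistant.")
    .with_message(Message::user("Summarize Rust's borrow checker in two sentences."))
    .with_max_tokens(200)
    .with_temperature(0.3)
    .with_output_format(OutputFormat::Text);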
Message::user("User message")
Message::assistant("Assistant message")
Message::system("System message")
Model examples by provider:
OpenAI:
gpt-4o - Latest GPT-4 Omni
gpt-4o-mini - Faster, cost-effective
gpt-3.5-turbo - Fast and efficient
Anthropic:
claude-3-5-sonnet-20241022 - Most capable
claude-3-5-haiku-20241022 - Fast and affordable
Gemini:
gemini-2.0-flash-exp - Latest experimental
gemini-1.5-pro - Production ready
Groq:
llama-3.3-70b-versatile - Powerful Llama model
mixtral-8x7b-32768 - Mixtral with large context
Handle errors by matching on the returned Result:
match client.send_request(&request).await {
    Ok(response) => println!("{}", response.content),
    Err(e) => eprintln!("Error: {}", e),
}
Run the included examples:
# Simplified API (recommended for beginners)
cargo run --example simple_usage
# Basic text generation
cargo run --example text_generation
# JSON output
cargo run --example json_output
# Multi-turn conversation
cargo run --example conversation
Error: API key not found for provider: OpenAI
Solution: Make sure the environment variable is set:
export OPENAI_API_KEY="your-key-here"
Different providers use different model names. Check the Model Examples section for correct model identifiers.
If you hit rate limits, implement exponential backoff or switch providers temporarily.
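A minimal backoff sketch using tokio's sleep; it retries on any error (the library's error variants aren't distinguished here) and assumes the error type converts into Box<dyn std::error::Error>, as in the examples above:
use std::time::Duration;
use turbine_llm::TurbineClient;

// Retry with exponential backoff: wait 1s, 2s, 4s between attempts, then give up.
async fn send_with_retry(
    client: &TurbineClient,
    prompt: &str,
) -> Result<String, Box<dyn std::error::Error>> {
    let mut delay = Duration::from_secs(1);
    for attempt in 1..=4 {
        match client.send(prompt).await {
            Ok(response) => return Ok(response.content),
            Err(e) if attempt < 4 => {
                eprintln!("attempt {attempt} failed: {e}; retrying in {delay:?}");
                tokio::time::sleep(delay).await;
                delay *= 2;
            }
            Err(e) => return Err(e.into()),
        }
    }
    unreachable!("the final attempt always returns")
}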
We welcome contributions! See CONTRIBUTING.md for guidelines.
Licensed under either of:
at your option.
Developed with ❤️ by the Rust community and sponsored by Renaiss AI.