turbine-llm

Crates.io: turbine-llm
lib.rs: turbine-llm
Version: 0.2.2
Created: 2025-10-13
Updated: 2025-12-23
Description: Unified Rust interface for multiple LLM providers with growing model support
Homepage: https://github.com/Renaiss-AI/turbine-llm
Repository: https://github.com/Renaiss-AI/turbine-llm
Size: 131,963
Owner: Javi Martin (renaisser)
Documentation: https://docs.rs/turbine-llm

README

Turbine LLM

One interface, all LLMs - A unified Rust library for calling multiple LLM providers with growing model support.

🚀 Switch between OpenAI, Anthropic, Gemini, and Groq with minimal code changes. Perfect for building AI applications that need provider flexibility.


Sponsored by Renaiss AI

Renaiss AI - Enterprise AI Research Lab

Turbine LLM is developed and maintained with support from Renaiss AI, bridging the gap between AI potential and business reality.


Features

  • Unified API: Single interface for multiple LLM providers
  • Simple & Clean: Minimal, straightforward code - no complexity
  • Text & JSON Output: Support for both text and structured JSON responses
  • Async/Await: Built with Tokio for high-performance async operations
  • Type-Safe: Full Rust type safety with proper error handling
  • Growing Support: New providers and models added regularly

Why Turbine?

  • Provider Independence: Easily switch providers or use multiple simultaneously (see the sketch after this list)
  • Consistent Interface: Same code works across all providers
  • Production Ready: Proper error handling, async support, comprehensive docs
  • Actively Maintained: Regular updates with new models and providers
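
Because every provider sits behind the same client API, switching providers usually means changing only the model string. A minimal sketch, assuming OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment:

use turbine_llm::TurbineClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Same call pattern for two different providers: only the model string changes.
    let openai = TurbineClient::from_model("openai/gpt-4o-mini")?;
    let anthropic = TurbineClient::from_model("anthropic/claude-3-5-sonnet")?;

    let a = openai.send("Summarize Rust in one sentence.").await?;
    let b = anthropic.send("Summarize Rust in one sentence.").await?;
    println!("OpenAI: {}\nAnthropic: {}", a.content, b.content);

    Ok(())
}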

Supported Providers

Currently integrated:

  • OpenAI (GPT-4, GPT-3.5, etc.)
  • Anthropic (Claude 3.5 Sonnet, Haiku, etc.)
  • Google Gemini (Gemini 2.0, 1.5, etc.)
  • Groq (Llama, Mixtral, etc.)

Coming soon:

  • 🔜 Cohere
  • 🔜 Mistral AI
  • 🔜 Perplexity

New providers and models are added regularly. Check CHANGELOG.md for updates.

Installation

Add this to your Cargo.toml:

[dependencies]
turbine-llm = "0.2"
tokio = { version = "1", features = ["full"] }

Quick Start

1. Simplified API (Recommended) 🚀

The easiest way to get started - just pass a model string:

use turbine_llm::TurbineClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create client with model string - provider is automatically detected
    let client = TurbineClient::from_model("openai/gpt-4o-mini")?;

    // Simple one-liner to send a message
    let response = client.send("What is Rust?").await?;
    println!("{}", response.content);

    Ok(())
}

Supported model string formats:

  • Explicit provider: "openai/gpt-4o-mini", "google/gemini-flash", "anthropic/claude-3-5-sonnet"
  • Inferred from name: "gpt-4o", "claude-3-5-sonnet", "gemini-flash", "llama-3.3-70b"

If the API key isn't in your environment, you'll be prompted to enter it interactively.
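
For example, both formats below construct a client for the same Anthropic model (a sketch based on the formats listed above):

let explicit = TurbineClient::from_model("anthropic/claude-3-5-sonnet")?;
let inferred = TurbineClient::from_model("claude-3-5-sonnet")?; // provider inferred from the name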

With system prompt:

let response = client.send_with_system(
    "You are a Rust expert",
    "Explain ownership in one sentence"
).await?;

2. Traditional API (Full Control)

For advanced use cases, use the traditional builder pattern:

use turbine_llm::{TurbineClient, LLMRequest, Message, Provider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Set API key in environment first
    // export OPENAI_API_KEY="your-key"

    let client = TurbineClient::new(Provider::OpenAI)?;

    let request = LLMRequest::new("gpt-4o-mini")
        .with_system_prompt("You are a helpful assistant.")
        .with_message(Message::user("What is Rust?"))
        .with_max_tokens(100);

    let response = client.send_request(&request).await?;
    println!("{}", response.content);

    Ok(())
}

3. JSON Output

Request structured JSON from any provider:

use turbine_llm::{TurbineClient, LLMRequest, Message, OutputFormat, Provider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = TurbineClient::new(Provider::Anthropic)?;

    let request = LLMRequest::new("claude-3-5-sonnet-20241022")
        .with_system_prompt("Return data as JSON.")
        .with_message(Message::user("Info about Paris with keys: name, country, population"))
        .with_output_format(OutputFormat::Json);

    let response = client.send_request(&request).await?;
    println!("{}", response.content);

    Ok(())
}
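
Since response.content is a string, you can parse the returned JSON yourself. A minimal sketch continuing inside the main function above, assuming serde_json is added as a dependency (it is not part of turbine-llm):

use serde_json::Value;

// Parse the raw JSON string into a dynamic value.
let parsed: Value = serde_json::from_str(&response.content)?;
println!("Population: {}", parsed["population"]);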

4. Multi-turn Conversations

let client = TurbineClient::new(Provider::Gemini)?;

// Pass the full conversation history with each request.
let request = LLMRequest::new("gemini-2.0-flash-exp")
    .with_messages(vec![
        Message::user("Hello! My name is Alice."),
        Message::assistant("Hello Alice! Nice to meet you."),
        Message::user("What's my name?"),
    ]);

let response = client.send_request(&request).await?;
println!("{}", response.content);

API Reference

TurbineClient

Simplified Constructor (Recommended)

// Automatic provider detection from model string
let client = TurbineClient::from_model("openai/gpt-4o-mini")?;

// Simple message sending
let response = client.send("Your message").await?;

// With system prompt
let response = client.send_with_system("System prompt", "User message").await?;

With Explicit API Key

Pass the API key directly instead of using environment variables:

// With provider enum
let client = TurbineClient::new_with_key(Provider::OpenAI, "sk-xxx");

// With model string
let client = TurbineClient::from_model_with_key("openai/gpt-4o", "sk-xxx")?;
let response = client.send("Hello").await?;

Traditional Constructor

let client = TurbineClient::new(Provider::OpenAI)?;
let response = client.send_request(&request).await?;

Provider

Select which LLM provider to use (for traditional API):

pub enum Provider {
    OpenAI,      // Requires OPENAI_API_KEY
    Anthropic,   // Requires ANTHROPIC_API_KEY
    Gemini,      // Requires GEMINI_API_KEY
    Groq,        // Requires GROQ_API_KEY
}

Note: When using from_model(), the provider is automatically detected from the model string.

LLMRequest Builder

Construct requests with optional parameters:

LLMRequest::new("model-name")
    .with_system_prompt("System prompt")        // Optional
    .with_message(Message::user("Query"))       // Add single message
    .with_messages(vec![...])                   // Add multiple messages
    .with_max_tokens(1000)                      // Optional, default: 1024
    .with_temperature(0.7)                      // Optional, 0.0-2.0
    .with_top_p(0.9)                            // Optional
    .with_output_format(OutputFormat::Json)     // Text (default) or Json

Message Helpers

Message::user("User message")
Message::assistant("Assistant message")
Message::system("System message")

Model Examples

OpenAI

  • gpt-4o - Latest GPT-4 Omni
  • gpt-4o-mini - Faster, cost-effective
  • gpt-3.5-turbo - Fast and efficient

Anthropic

  • claude-3-5-sonnet-20241022 - Most capable
  • claude-3-5-haiku-20241022 - Fast and affordable

Gemini

  • gemini-2.0-flash-exp - Latest experimental
  • gemini-1.5-pro - Production ready

Groq

  • llama-3.3-70b-versatile - Powerful Llama model
  • mixtral-8x7b-32768 - Mixtral with large context

Error Handling

match client.send_request(&request).await {
    Ok(response) => println!("{}", response.content),
    Err(e) => eprintln!("Error: {}", e),
}

Examples

Run the included examples:

# Simplified API (recommended for beginners)
cargo run --example simple_usage

# Basic text generation
cargo run --example text_generation

# JSON output
cargo run --example json_output

# Multi-turn conversation
cargo run --example conversation

Documentation

Full API documentation is available at https://docs.rs/turbine-llm.

Troubleshooting

API Key Not Found

Error: API key not found for provider: OpenAI

Solution: Make sure the environment variable is set:

export OPENAI_API_KEY="your-key-here"

Model Not Found

Different providers use different model names. Check the Model Examples section for correct model identifiers.

Rate Limiting

If you hit rate limits, implement exponential backoff or switch providers temporarily.
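
A minimal retry sketch with exponential backoff, using tokio's sleep. The retry loop and its parameters are illustrative, not part of turbine-llm, and must run inside an async context:

use std::time::Duration;
use tokio::time::sleep;

// Retry up to 5 times total, doubling the delay after each failure.
let mut delay = Duration::from_millis(500);
let mut result = client.send("What is Rust?").await;
for _ in 0..4 {
    if result.is_ok() {
        break;
    }
    sleep(delay).await;
    delay *= 2;
    result = client.send("What is Rust?").await;
}
let response = result?;
println!("{}", response.content);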

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

License

Licensed under either of:

  • Apache License, Version 2.0
  • MIT License

at your option.

Acknowledgments

Developed with ❤️ by the Rust community and sponsored by Renaiss AI.
