orchestra-rs

version: 0.1.0
created_at: 2025-08-25 20:05:10 UTC
updated_at: 2025-08-25 20:05:10 UTC
description: An orchestration framework for building AI agents using multiple LLM providers.
repository: https://github.com/ayoubbuoya/orchestra-rs
size: 147,458
owner: Ayoub Amer (ayoubbuoya)

README

Orchestra-rs


A Rust crate for building AI agent workflows. Orchestra-rs provides a type-safe framework for orchestrating production-ready applications powered by Large Language Models (LLMs).

Vision

The goal of Orchestra-rs is to be the LangChain of the Rust ecosystem. We aim to provide a composable, safe, and efficient set of tools to chain together calls to LLMs, APIs, and other data sources. By leveraging Rust's powerful type system and performance, Orchestra-rs empowers developers to build reliable and scalable AI applications and intelligent agents with confidence.

Features

  • 🚀 Type-safe LLM interactions - Leverage Rust's type system for reliable AI applications
  • 🔌 Multiple provider support - Currently supports Google Gemini, with more providers coming
  • 🛠️ Flexible configuration - Builder patterns and validation for model configurations
  • 📝 Rich message types - Support for text, mixed content, and future tool calling
  • 🧪 Comprehensive testing - Built-in mock providers and extensive test coverage
  • ⚡ Async/await support - Built for modern async Rust applications
  • 🔒 Error handling - Comprehensive error types with context

Quick Start

Add Orchestra-rs to your Cargo.toml:


[dependencies]
orchestra-rs = "0.1.0"
tokio = { version = "1.0", features = ["full"] }

Basic Usage

use orchestra_rs::llm::LLM;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Requires the GEMINI_API_KEY environment variable to be set,
    // e.g. `export GEMINI_API_KEY=your-api-key`.

    // Create an LLM instance with Gemini
    let llm = LLM::gemini("gemini-2.5-flash");

    // Simple prompt
    let response = llm.prompt("Hello, how are you today?").await?;
    println!("Response: {}", response.text);

    Ok(())
}

Advanced Configuration

use orchestra_rs::{
    llm::LLM,
    providers::types::ProviderSource,
    model::ModelConfig,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a custom model configuration
    let config = ModelConfig::new("gemini-2.5-flash")
        .with_system_instruction("You are a helpful coding assistant")
        .with_temperature(0.7)?
        .with_top_p(0.9)?
        .with_max_tokens(1000)
        .with_stop_sequence("```");

    // Create LLM with custom configuration
    let llm = LLM::new(ProviderSource::Gemini, "gemini-2.5-flash".to_string())
        .with_custom_config(config);

    let response = llm.prompt("Write a simple Rust function").await?;
    println!("Response: {}", response.text);

    Ok(())
}

Chat with History

use orchestra_rs::{
    llm::LLM,
    messages::Message,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let llm = LLM::gemini("gemini-2.5-flash");

    // Build conversation history
    let history = vec![
        Message::human("Hi, I'm working on a Rust project"),
        Message::assistant("Great! I'd be happy to help with your Rust project. What are you working on?"),
    ];

    // Continue the conversation
    let response = llm.chat(
        Message::human("I need help with error handling"),
        history
    ).await?;

    println!("Response: {}", response.text);
    Ok(())
}
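
For longer multi-turn conversations you can carry the transcript forward yourself. The sketch below is built only from the calls already shown (LLM::gemini, llm.chat, Message::human, Message::assistant); it additionally assumes that Message implements Clone, which the examples above do not confirm.

use orchestra_rs::{llm::LLM, messages::Message};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let llm = LLM::gemini("gemini-2.5-flash");
    let mut history: Vec<Message> = Vec::new();

    for question in ["What is ownership?", "How does borrowing relate to it?"] {
        let user_msg = Message::human(question);
        // `chat` takes the history by value in the example above, so clone it
        // to keep a running transcript. (Assumes Message: Clone.)
        let response = llm.chat(user_msg.clone(), history.clone()).await?;
        println!("Assistant: {}", response.text);

        // Record both sides of the turn so later calls see the full context.
        history.push(user_msg);
        history.push(Message::assistant(response.text.as_str()));
    }

    Ok(())
}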

Using Presets

use orchestra_rs::{
    llm::LLM,
    providers::types::ProviderSource,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Conservative settings (lower temperature, more focused)
    let conservative_llm = LLM::conservative(
        ProviderSource::Gemini,
        "gemini-2.5-flash".to_string()
    );

    // Creative settings (higher temperature, more diverse)
    let creative_llm = LLM::creative(
        ProviderSource::Gemini,
        "gemini-2.5-flash".to_string()
    );

    // Balanced settings (moderate temperature)
    let balanced_llm = LLM::balanced(
        ProviderSource::Gemini,
        "gemini-2.5-flash".to_string()
    );

    let response = conservative_llm.prompt("Explain Rust ownership").await?;
    println!("Conservative response: {}", response.text);

    Ok(())
}

Supported Providers

Google Gemini

Currently supported models:

  • gemini-2.5-flash-lite
  • gemini-2.5-pro
  • gemini-2.5-flash
  • gemini-2.0-flash-lite
  • gemini-2.0-flash
  • gemini-1.5-pro

Setup:

  1. Get an API key from Google AI Studio
  2. Set the environment variable: GEMINI_API_KEY=your-api-key
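
To fail fast when the key is missing, you can check the environment before constructing the client. This is a minimal sketch using only the standard library plus the LLM::gemini constructor shown above; the helper name is ours, not part of the crate.

use orchestra_rs::llm::LLM;

// Hypothetical helper: surfaces a clear error message up front instead of a
// provider error at request time.
fn gemini_from_env() -> Result<LLM, String> {
    std::env::var("GEMINI_API_KEY")
        .map_err(|_| "GEMINI_API_KEY is not set; get a key from Google AI Studio".to_string())?;
    Ok(LLM::gemini("gemini-2.5-flash"))
}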

Coming Soon

  • OpenAI GPT models
  • Anthropic Claude
  • Local models via Ollama
  • Azure OpenAI

Architecture

Orchestra-rs is built with a modular architecture:

  • Core Types: Message types, model configurations, and error handling
  • Providers: Pluggable LLM provider implementations (see the sketch after this list)
  • LLM Interface: High-level interface for interacting with any provider
  • Configuration: Flexible configuration with validation and presets
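
As a rough illustration of how a pluggable provider layer can be shaped, here is a hypothetical trait-based sketch. The names and signatures are our assumptions for illustration (using the async-trait crate), not Orchestra-rs's actual definitions.

use async_trait::async_trait;

// Illustrative response type; the real crate exposes a response with a
// `text` field, as the examples above show.
pub struct Response {
    pub text: String,
}

// Hypothetical provider abstraction: each backend implements one trait,
// and a high-level interface dispatches to it.
#[async_trait]
pub trait Provider {
    async fn complete(&self, prompt: &str)
        -> Result<Response, Box<dyn std::error::Error + Send + Sync>>;
}

// A trivial implementation, e.g. for tests:
pub struct EchoProvider;

#[async_trait]
impl Provider for EchoProvider {
    async fn complete(&self, prompt: &str)
        -> Result<Response, Box<dyn std::error::Error + Send + Sync>> {
        Ok(Response { text: format!("echo: {prompt}") })
    }
}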

Error Handling

Orchestra-rs provides comprehensive error handling with context:


use orchestra_rs::{error::OrchestraError, llm::LLM};

#[tokio::main]
async fn main() {
    let llm = LLM::gemini("gemini-2.5-flash");

    match llm.prompt("Hello").await {
        Ok(response) => println!("Success: {}", response.text),
        Err(OrchestraError::ApiKey { message }) => {
            eprintln!("API key error: {}", message);
        },
        Err(OrchestraError::Provider { provider, message }) => {
            eprintln!("Provider {} error: {}", provider, message);
        },
        Err(e) => eprintln!("Other error: {}", e),
    }
}

Testing

Orchestra-rs includes comprehensive testing utilities:


use orchestra_rs::providers::mock::{MockProvider, MockConfig};

#[tokio::test]
async fn test_my_ai_function() {
    let mock_config = MockConfig::new()
        .with_responses(vec!["Mocked response 1", "Mocked response 2"]);

    let provider = MockProvider::new(mock_config);
    // Inject the mock provider wherever your code accepts a provider and
    // assert against the scripted responses configured above.
}

Architecture Documentation

For a detailed overview of the library's architecture, please refer to the architecture documentation.

Project Status

🌱 Early Development Stage

Orchestra-rs is in active development. The core APIs are stabilizing but may still change. This is a great time to get involved and help shape the future of the framework.

Roadmap

Completed:

  • Core message and configuration types
  • Google Gemini provider
  • Comprehensive error handling
  • Testing utilities and mock providers

Planned:

  • Tool calling support
  • Streaming responses
  • Additional providers (OpenAI, Anthropic, etc.)
  • Agent workflows and chains
  • Memory and context management
  • Plugin system

License

This project is licensed under the MIT License - see the LICENSE file for details.
