| Crates.io | orchestra-rs |
| lib.rs | orchestra-rs |
| version | 0.1.0 |
| created_at | 2025-08-25 20:05:10.172225+00 |
| updated_at | 2025-08-25 20:05:10.172225+00 |
| description | An orchestration framework for building AI agents using multiple LLM providers. |
| homepage | |
| repository | https://github.com/ayoubbuoya/orchestra-rs |
| max_upload_size | |
| id | 1810090 |
| size | 147,458 |
A Rust crate for building AI agent workflows and applications. Orchestra-rs provides a powerful, type-safe framework for orchestrating production-ready applications powered by Large Language Models (LLMs).
The goal of Orchestra-rs is to be the LangChain of the Rust ecosystem. We aim to provide a composable, safe, and efficient set of tools to chain together calls to LLMs, APIs, and other data sources. By leveraging Rust's powerful type system and performance, Orchestra-rs empowers developers to build reliable and scalable AI applications and intelligent agents with confidence.
Add Orchestra-rs to your Cargo.toml:
[dependencies]
orchestra-rs = "0.1"
tokio = { version = "1.0", features = ["full"] }
use orchestra_rs::llm::LLM;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// For demos only: set the API key in-process.
// In real code, export GEMINI_API_KEY in your shell instead of hardcoding it.
std::env::set_var("GEMINI_API_KEY", "your-api-key-here");
// Create an LLM instance with Gemini
let llm = LLM::gemini("gemini-2.5-flash");
// Simple prompt
let response = llm.prompt("Hello, how are you today?").await?;
println!("Response: {}", response.text);
Ok(())
}
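Rather than calling `set_var` as in the demo above, you would normally export the key in your shell before running. A minimal setup, assuming a standard Cargo binary project:

```shell
# Export the Gemini API key for the current shell session
export GEMINI_API_KEY="your-api-key"

# Then run your example:
# cargo run
```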
use orchestra_rs::{
llm::LLM,
providers::types::ProviderSource,
model::ModelConfig,
};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a custom model configuration
let config = ModelConfig::new("gemini-2.5-flash")
.with_system_instruction("You are a helpful coding assistant")
.with_temperature(0.7)?
.with_top_p(0.9)?
.with_max_tokens(1000)
.with_stop_sequence("```");
// Create LLM with custom configuration
let llm = LLM::new(ProviderSource::Gemini, "gemini-2.5-flash".to_string())
.with_custom_config(config);
let response = llm.prompt("Write a simple Rust function").await?;
println!("Response: {}", response.text);
Ok(())
}
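Note that `with_temperature` and `with_top_p` return `Result` (hence the `?` in the chain), so out-of-range sampling values are rejected when the config is built rather than at request time. A self-contained sketch of that validating-builder pattern (illustrative only, not the crate's actual `ModelConfig` implementation):

```rust
// Illustrative validating builder: setters that can fail return Result,
// so invalid values are caught at configuration time, not at request time.
#[derive(Debug)]
struct SamplingConfig {
    temperature: f32,
    top_p: f32,
}

impl SamplingConfig {
    fn new() -> Self {
        Self { temperature: 1.0, top_p: 1.0 }
    }

    // Temperature must lie in [0.0, 2.0]; anything else is an error.
    fn with_temperature(mut self, t: f32) -> Result<Self, String> {
        if (0.0..=2.0).contains(&t) {
            self.temperature = t;
            Ok(self)
        } else {
            Err(format!("temperature {t} out of range [0.0, 2.0]"))
        }
    }

    // top_p must lie in (0.0, 1.0].
    fn with_top_p(mut self, p: f32) -> Result<Self, String> {
        if p > 0.0 && p <= 1.0 {
            self.top_p = p;
            Ok(self)
        } else {
            Err(format!("top_p {p} out of range (0.0, 1.0]"))
        }
    }
}

fn main() -> Result<(), String> {
    // Valid values flow through the chain with `?`.
    let config = SamplingConfig::new()
        .with_temperature(0.7)?
        .with_top_p(0.9)?;
    println!("{config:?}");

    // An invalid value surfaces immediately as an Err.
    assert!(SamplingConfig::new().with_temperature(5.0).is_err());
    Ok(())
}
```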
use orchestra_rs::{
llm::LLM,
messages::Message,
};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let llm = LLM::gemini("gemini-2.5-flash");
// Build conversation history
let history = vec![
Message::human("Hi, I'm working on a Rust project"),
Message::assistant("Great! I'd be happy to help with your Rust project. What are you working on?"),
];
// Continue the conversation
let response = llm.chat(
Message::human("I need help with error handling"),
history
).await?;
println!("Response: {}", response.text);
Ok(())
}
use orchestra_rs::{
llm::LLM,
providers::types::ProviderSource,
};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Conservative settings (lower temperature, more focused)
let conservative_llm = LLM::conservative(
ProviderSource::Gemini,
"gemini-2.5-flash".to_string()
);
// Creative settings (higher temperature, more diverse)
let creative_llm = LLM::creative(
ProviderSource::Gemini,
"gemini-2.5-flash".to_string()
);
// Balanced settings (moderate temperature)
let balanced_llm = LLM::balanced(
ProviderSource::Gemini,
"gemini-2.5-flash".to_string()
);
let response = conservative_llm.prompt("Explain Rust ownership").await?;
println!("Conservative response: {}", response.text);
Ok(())
}
Currently supported models:
- gemini-2.5-flash-lite
- gemini-2.5-pro
- gemini-2.5-flash
- gemini-2.0-flash-lite
- gemini-2.0-flash
- gemini-1.5-pro

Setup:
GEMINI_API_KEY=your-api-key

Orchestra-rs is built with a modular architecture.
Orchestra-rs provides comprehensive error handling with context:
use orchestra_rs::{error::OrchestraError, llm::LLM};
#[tokio::main]
async fn main() {
let llm = LLM::gemini("gemini-2.5-flash");
match llm.prompt("Hello").await {
Ok(response) => println!("Success: {}", response.text),
Err(OrchestraError::ApiKey { message }) => {
eprintln!("API key error: {}", message);
},
Err(OrchestraError::Provider { provider, message }) => {
eprintln!("Provider {} error: {}", provider, message);
},
Err(e) => eprintln!("Other error: {}", e),
}
}
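Transient provider errors are often worth retrying. A generic retry-with-backoff helper you could wrap around a call like `llm.prompt` is sketched below; this helper is not part of Orchestra-rs, and the std-only, synchronous form is used here for brevity (in async code you would `.await` the operation and use `tokio::time::sleep` instead):

```rust
use std::thread::sleep;
use std::time::Duration;

// Retry a fallible operation up to `max_attempts` times,
// doubling the delay between attempts.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = Duration::from_millis(100);
    let mut attempt = 1;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            // Out of attempts: propagate the last error.
            Err(e) if attempt >= max_attempts => return Err(e),
            Err(_) => {
                sleep(delay);
                delay *= 2; // exponential backoff: 100ms, 200ms, 400ms, ...
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Simulate an operation that fails twice before succeeding.
    let mut calls = 0;
    let result = retry_with_backoff(5, || {
        calls += 1;
        if calls < 3 { Err("transient") } else { Ok("done") }
    });
    assert_eq!(result, Ok("done"));
    assert_eq!(calls, 3);
    println!("succeeded after {calls} calls");
}
```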
Orchestra-rs includes comprehensive testing utilities:
use orchestra_rs::providers::mock::{MockProvider, MockConfig};
#[tokio::test]
async fn test_my_ai_function() {
let mock_config = MockConfig::new()
.with_responses(vec!["Mocked response 1", "Mocked response 2"]);
let provider = MockProvider::new(mock_config);
// Use the mock provider in your tests
}
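If you prefer rolling your own test double, the same idea can be expressed with an ordinary trait. A self-contained sketch of the pattern (the trait and type names here are illustrative, not the crate's actual provider trait):

```rust
// A minimal provider abstraction plus a scripted mock, mirroring the idea
// behind MockProvider: hand back canned responses in order.
trait TextProvider {
    fn complete(&mut self, prompt: &str) -> String;
}

struct ScriptedMock {
    responses: Vec<String>,
    next: usize,
}

impl ScriptedMock {
    fn new(responses: Vec<&str>) -> Self {
        Self {
            responses: responses.into_iter().map(String::from).collect(),
            next: 0,
        }
    }
}

impl TextProvider for ScriptedMock {
    fn complete(&mut self, _prompt: &str) -> String {
        // Cycle through the scripted responses.
        let r = self.responses[self.next % self.responses.len()].clone();
        self.next += 1;
        r
    }
}

fn main() {
    let mut mock = ScriptedMock::new(vec!["Mocked response 1", "Mocked response 2"]);
    assert_eq!(mock.complete("hi"), "Mocked response 1");
    assert_eq!(mock.complete("hi"), "Mocked response 2");
}
```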
For a detailed overview of the library's architecture, please refer to the architecture documentation.
🌱 Early Development Stage
Orchestra-rs is in active development. The core APIs are stabilizing but may still change. This is a great time to get involved and help shape the future of the framework.
This project is licensed under the MIT License - see the LICENSE file for details.