| Crates.io | babel |
| lib.rs | babel |
| version | 0.0.11 |
| created_at | 2025-03-21 16:02:21.585495+00 |
| updated_at | 2025-08-17 04:21:47.904407+00 |
| description | Provides Rust enums for Groq, SambaNova, and OpenRouter LLM model names. |
| homepage | |
| repository | |
| max_upload_size | |
| id | 1600706 |
| size | 102,585 |
Babel is a Rust library designed to simplify interactions with various LLM (Large Language Model) providers. It provides a unified interface for making API calls to different LLM services while handling the variations in model naming conventions across providers.
Add Babel to your Cargo.toml:
[dependencies]
babel = "0.0.3"
Currently, Babel supports the following providers: Groq, SambaNova, and OpenRouter.
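Because the provider is a type parameter on the builder, switching providers should only mean changing that parameter and the model enum. A minimal sketch, assuming SambaNova/SambaNovaModel naming that mirrors the Groq types in the example below (the exact identifiers are not shown in this README, so treat them as hypothetical):

use babel::model::{LLMBuilder, SambaNova, SambaNovaModel};

// Hypothetical variant name; substitute a real variant from the enum.
let llm = LLMBuilder::<SambaNova>::new()
    .model(SambaNovaModel::SomeModel)
    .build()?;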
Here's a simple example of using Babel with Groq:
use babel::model::{ChatMessage, Groq, GroqModel, LLMBuilder};

#[tokio::main]
async fn main() -> Result<(), String> {
    // Create a Groq LLM instance
    let groq_llm = LLMBuilder::<Groq>::new()
        .model(GroqModel::QwenQwq32bPreview)
        .temperature(0.7)
        .max_tokens(2048)
        .system_prompt("You are a helpful assistant.".to_string())
        .build()?;

    println!("Using Groq model: {}", groq_llm.get_model_id());

    // Create the chat messages
    let messages = vec![
        ChatMessage {
            role: "user".to_string(),
            content: "What is machine learning?".to_string(),
        }
    ];

    // Get the complete response
    let response = groq_llm.chat(messages).await?;
    println!("{}", response);

    Ok(())
}
Babel supports two methods for API authentication; the environment-variable method reads a key named {PROVIDER_NAME}_API_KEY (e.g., GROQ_API_KEY).
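As a quick check, you can confirm the variable is visible to your process before building a client. A minimal sketch using only the standard library (how Babel performs the lookup internally is not shown in this README):

use std::env;

fn main() {
    // GROQ_API_KEY follows the {PROVIDER_NAME}_API_KEY pattern described above.
    match env::var("GROQ_API_KEY") {
        Ok(_) => println!("GROQ_API_KEY is set"),
        Err(_) => eprintln!("GROQ_API_KEY is missing; export it before running"),
    }
}

Babel makes it easy to maintain conversation history: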
// Start a conversation
let mut conversation = vec![
    ChatMessage {
        role: "user".to_string(),
        content: "What are the key features of Rust?".to_string(),
    }
];

// Get the first response
let response = llm.chat(conversation.clone()).await?;

// Add the response to the conversation history
conversation.push(ChatMessage {
    role: "assistant".to_string(),
    content: response,
});

// Add the next user message
conversation.push(ChatMessage {
    role: "user".to_string(),
    content: "What advantages does Rust have over C++?".to_string(),
});

// Continue the conversation
let next_response = llm.chat(conversation).await?;
For applications that need to process responses as they arrive:
// `next()` needs a StreamExt trait in scope, e.g. from the futures crate.
use futures::StreamExt;

let mut stream = llm.stream_chat(messages).await;

while let Some(result) = stream.next().await {
    match result {
        Ok(response) => {
            if let Some(content) = response.get_content() {
                // Process each chunk of the response as it arrives
                print!("{}", content);
            }
        }
        Err(e) => eprintln!("Error: {}", e),
    }
}
Babel is designed to be extensible. To add a new provider:
1. Implement the Provider trait for your provider type
2. Use the define_provider_models! macro to implement the Model trait

Example:
use super::base::{Model, Provider, define_provider_models};

// Define your provider
pub struct MyProvider;

impl Provider for MyProvider {
    type ModelType = MyProviderModel;

    fn provider_name() -> &'static str {
        "myprovider"
    }
}

// Define models for your provider
define_provider_models!(MyProvider, MyProviderModel, {
    (ModelA, "model-a-identifier"),
    (ModelB, "model-b-identifier"),
    (ModelC, "model-c-identifier")
});
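Once defined, the new provider should plug into the same builder API used earlier; a sketch, reusing the method names from the Groq example above:

// Build an LLM for the new provider, mirroring the Groq example.
let my_llm = LLMBuilder::<MyProvider>::new()
    .model(MyProviderModel::ModelA)
    .build()?;

println!("Using model: {}", my_llm.get_model_id());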
License: MIT
Contributions are welcome! Please feel free to submit a Pull Request.