| Crates.io | ai_rs |
| lib.rs | ai_rs |
| version | 0.0.2 |
| created_at | 2025-01-10 05:38:50.497753+00 |
| updated_at | 2025-07-14 00:39:17.72946+00 |
| description | One sdk for all AI platforms |
| homepage | https://github.com/Himasnhu-AT/ai_rs/blob/master/Readme.md |
| repository | https://github.com/Himasnhu-AT/ai_rs.git |
| max_upload_size | |
| id | 1510963 |
| size | 99,819 |
ai_rs is a Rust library that provides a unified interface for interacting with multiple AI platforms. It is designed to be modular, so new model backends can be added and used with minimal effort.
Add ai_rs to your Cargo.toml:
[dependencies]
ai_rs = "0.0.1"
Here's an example of how to use the ai_rs library with the GeminiClient:
use ai_rs::{init_logging, GeminiClient};

#[tokio::main]
async fn main() {
    init_logging();
    let api_key = std::env::var("GEMINI_API_KEY").expect("GEMINI_API_KEY must be set");
    let gemini_ai = GeminiClient::new(&api_key, "gemini-1.5-pro");
    match gemini_ai.generate_content("Hello, Gemini!").await {
        Ok(response) => {
            if let Some(text) = response.get_text() {
                println!("{}", text);
            } else {
                println!("No response generated");
            }
        }
        Err(e) => println!("Error: {}", e),
    }
}
The library provides comprehensive support for Google's Gemini API. The snippets below walk through basic generation, generation with a custom GenerationConfig, streaming responses, and fully custom requests:
use ai_rs::{GeminiClient, GenerationConfig};

let client = GeminiClient::new("your_api_key", "gemini-1.5-pro");

// Simple text generation
let response = client.generate_content("Tell me a joke").await?;
if let Some(text) = response.get_text() {
    println!("{}", text);
}
// Generation with a custom configuration
let config = GenerationConfig {
    temperature: Some(0.7),
    max_output_tokens: Some(100),
    top_p: Some(0.8),
    top_k: Some(40),
    candidate_count: None,
    stop_sequences: None,
};
let response = client.generate_content_with_config("Write a haiku", config).await?;
use futures_util::StreamExt;
use std::io::Write;

// Streaming generation: print each chunk as it arrives
let mut stream = client.stream_content("Count from 1 to 5").await?;
while let Some(result) = stream.next().await {
    match result {
        Ok(chunk) => {
            if let Some(text) = chunk.get_text() {
                print!("{}", text);
                std::io::stdout().flush().ok(); // print! does not flush; flush so chunks appear immediately
            }
        }
        Err(e) => println!("Stream error: {}", e),
    }
}
use ai_rs::{GenerateContentRequest, Content, Part};

// Build a fully custom request
let request = GenerateContentRequest {
    contents: vec![Content {
        role: "user".to_string(),
        parts: vec![Part {
            text: Some("Explain quantum computing".to_string()),
            inline_data: None,
        }],
    }],
    generation_config: Some(config), // e.g. the GenerationConfig shown above
    safety_settings: None,
    tools: None,
};
let response = client.generate_content_with_request(request).await?;
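Because GenerateContentRequest takes a list of Content values, prior turns can be replayed to give the model conversational context. A minimal sketch, assuming the Gemini-style convention of alternating "user" and "model" roles (the turn text is illustrative):
// Replay earlier turns, then ask a follow-up question
let history = GenerateContentRequest {
    contents: vec![
        Content {
            role: "user".to_string(),
            parts: vec![Part { text: Some("What is Rust?".to_string()), inline_data: None }],
        },
        Content {
            role: "model".to_string(),
            parts: vec![Part { text: Some("Rust is a systems programming language.".to_string()), inline_data: None }],
        },
        Content {
            role: "user".to_string(),
            parts: vec![Part { text: Some("What makes it memory safe?".to_string()), inline_data: None }],
        },
    ],
    generation_config: None,
    safety_settings: None,
    tools: None,
};
let response = client.generate_content_with_request(history).await?;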
Set your Gemini API key as an environment variable:
export GEMINI_API_KEY="your_api_key_here"
Or create a .env file:
GEMINI_API_KEY=your_api_key_here
RUST_LOG=info
The library uses the log crate for logging and env_logger to set the log level from the RUST_LOG environment variable. Because init_logging first loads .env via dotenv, you can also set the level in a .env file at the root of your project:
RUST_LOG=info
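For instance, with RUST_LOG=info in your .env, a minimal sketch of application code (the message strings are illustrative) behaves like this:
use ai_rs::init_logging;
use log::{debug, info};

fn main() {
    init_logging(); // loads .env via dotenv, then initializes env_logger from RUST_LOG
    info!("printed when RUST_LOG=info or more verbose");
    debug!("suppressed at info level; printed when RUST_LOG=debug");
}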
To add a new AI model, create a new module in the src directory and implement the necessary methods. Update src/lib.rs to export the new module.
For example, to add a new model called xyz:
Create a directory for the new module: mkdir src/xyz
Add the module file at src/xyz/mod.rs (or skip the folder and use src/xyz.rs) so that pub mod xyz; in src/lib.rs resolves, and implement the API calls and test cases there. Sample code:
use log::{debug, info};
pub struct XyzClient {
    api_key: String, // kept for making real API calls
    model: String,
}

impl XyzClient {
    pub fn setup(api_key: &str) -> Self {
        info!("Setting up XyzClient with API key");
        XyzClient {
            api_key: api_key.to_string(),
            model: String::new(),
        }
    }

    pub fn model(mut self, model: &str) -> Self {
        info!("Setting model to {}", model);
        self.model = model.to_string();
        self
    }

    pub fn generate_content(&self, prompt: &str) -> String {
        info!("Generating content for prompt: '{}'", prompt);
        // Mock implementation of content generation
        let response = format!(
            "Generated content for prompt: '{}', using model: '{}'",
            prompt, self.model
        );
        debug!("Generated response: {}", response);
        response
    }
}
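Since the step above asks for test cases, a minimal sketch of a unit test against the mock implementation could look like this:
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn generates_mock_content() {
        let client = XyzClient::setup("test-key").model("xyz-small");
        let output = client.generate_content("ping");
        // The mock echoes both the prompt and the configured model name
        assert!(output.contains("ping"));
        assert!(output.contains("xyz-small"));
    }
}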
Update src/lib.rs to export the xyz module:
pub mod gemini;
pub mod xyz;

pub use gemini::GeminiClient;
pub use xyz::XyzClient;

use dotenv::dotenv;

pub fn init_logging() {
    dotenv().ok();
    env_logger::init();
}
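Once exported, the new backend is used the same way as GeminiClient; a usage sketch (the model name xyz-large is made up for illustration):
use ai_rs::{init_logging, XyzClient};

fn main() {
    init_logging();
    let client = XyzClient::setup("your_api_key").model("xyz-large");
    println!("{}", client.generate_content("Hello, xyz!"));
}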
Common development commands:
cargo run --example file_name   # e.g. main, ollama, gemini
cargo build --release
cargo test
cargo doc --open --release
Contributions are welcome! Please open an issue or submit a pull request.