| Crates.io | gemini_crate |
|---|---|
| lib.rs | gemini_crate |
| version | 0.1.0 |
| created_at | 2025-12-22 05:44:45.214395+00 |
| updated_at | 2025-12-22 05:44:45.214395+00 |
| description | A robust Rust client library for Google's Gemini AI API with built-in error handling, retry logic, and comprehensive model support |
| homepage | https://github.com/micro-tech/gemini_crate |
| repository | https://github.com/micro-tech/gemini_crate |
| max_upload_size | |
| id | 1999135 |
| size | 130,699 |
A robust Rust client library for Google's Gemini AI API with built-in error handling, retry logic, and comprehensive model support.
Add these dependencies to your `Cargo.toml`:

```toml
[dependencies]
gemini_crate = "0.1.0"
tokio = { version = "1.0", features = ["full"] }
dotenvy = "0.15"
```
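The concurrent-requests example further down also uses the `futures` crate; add `futures = "0.3"` if you want to run it.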
Create a `.env` file in your project root:

```env
GEMINI_API_KEY=your_gemini_api_key_here
```
Get your API key from Google AI Studio.
```rust
use gemini_crate::client::GeminiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load environment variables
    dotenvy::dotenv().ok();

    // Create client
    let client = GeminiClient::new()?;

    // Generate text
    let response = client
        .generate_text("gemini-2.5-flash", "What is the capital of France?")
        .await?;
    println!("Response: {}", response);

    Ok(())
}
```
List the available models and their supported generation methods:

```rust
use gemini_crate::client::GeminiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    let client = GeminiClient::new()?;

    let models = client.list_models().await?;
    for model in models.models {
        println!("- {} ({})", model.name, model.display_name);
        println!("  Methods: {:?}", model.supported_generation_methods);
    }

    Ok(())
}
```
Handle failures by matching on the crate's `Error` variants:

```rust
use gemini_crate::{client::GeminiClient, errors::Error};

#[tokio::main]
async fn main() {
    dotenvy::dotenv().ok();

    let client = match GeminiClient::new() {
        Ok(c) => c,
        Err(Error::Config(msg)) => {
            eprintln!("Configuration error: {}", msg);
            eprintln!("Make sure GEMINI_API_KEY is set in your .env file");
            return;
        }
        Err(e) => {
            eprintln!("Failed to create client: {}", e);
            return;
        }
    };

    match client.generate_text("gemini-2.5-flash", "Hello!").await {
        Ok(response) => println!("Success: {}", response),
        Err(Error::Network(e)) => eprintln!("Network error: {}", e),
        Err(Error::Api(msg)) => eprintln!("API error: {}", msg),
        Err(e) => eprintln!("Other error: {}", e),
    }
}
```
Issue several requests concurrently:

```rust
use futures::future::try_join_all;
use gemini_crate::client::GeminiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    let client = GeminiClient::new()?;

    let questions = vec![
        "What is the capital of Japan?",
        "Explain photosynthesis briefly",
        "What's the largest planet?",
    ];

    // Build one future per question and await them all together.
    let tasks = questions
        .into_iter()
        .map(|question| client.generate_text("gemini-2.5-flash", question));
    let responses = try_join_all(tasks).await?;

    for (i, response) in responses.iter().enumerate() {
        println!("Response {}: {}", i + 1, response);
    }

    Ok(())
}
```
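Note that `try_join_all` starts every request at once. For larger batches you may prefer bounded concurrency; here is a minimal sketch using `futures::stream` (the limit of two in-flight requests is illustrative, and the sketch assumes the same `generate_text` signature as above):

```rust
use futures::{stream, StreamExt};
use gemini_crate::client::GeminiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    let client = GeminiClient::new()?;

    let questions = vec![
        "What is the capital of Japan?",
        "Explain photosynthesis briefly",
        "What's the largest planet?",
    ];

    // Keep at most two requests in flight at any time.
    let results: Vec<_> = stream::iter(questions)
        .map(|q| client.generate_text("gemini-2.5-flash", q))
        .buffer_unordered(2)
        .collect()
        .await;

    for result in results {
        match result {
            Ok(text) => println!("{}", text),
            Err(e) => eprintln!("Request failed: {}", e),
        }
    }
    Ok(())
}
```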
The library supports all current Gemini models:
| Model | Best For | Speed | Context |
|---|---|---|---|
| `gemini-2.5-flash` | General tasks | Fast | 1M tokens |
| `gemini-2.5-pro` | Complex reasoning | Medium | 2M tokens |
| `gemini-flash-latest` | Latest features | Fast | Variable |
| `gemini-pro-latest` | Latest pro features | Medium | Variable |
Use `client.list_models()` to see all available models and their capabilities.
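You can also select models programmatically by filtering the `supported_generation_methods` field from the listing example. A small sketch (it assumes the field is a list of strings and that the Gemini API's `generateContent` method name identifies text-generation models):

```rust
use gemini_crate::client::GeminiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    let client = GeminiClient::new()?;

    // Keep only models that advertise the "generateContent" method.
    let models = client.list_models().await?;
    for model in models.models {
        if model
            .supported_generation_methods
            .iter()
            .any(|m| m.as_str() == "generateContent")
        {
            println!("{}", model.name);
        }
    }
    Ok(())
}
```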
Run the included examples:
```bash
# Interactive chat
cargo run --example simple_chat

# List all models
cargo run --example list_models

# Batch processing demo
cargo run --example batch_processing
```
The library provides comprehensive error handling:
- `Error::Network` - Network connectivity issues
- `Error::Api` - Gemini API errors (rate limits, invalid requests)
- `Error::Json` - Response parsing errors
- `Error::Config` - Configuration issues (missing API key)

Configuration lives in environment variables:

```env
# .env file
GEMINI_API_KEY=your_api_key_here
RUST_LOG=info  # Optional: for debugging
```
To stay under rate limits, pace sequential requests:

```rust
use std::time::Duration;
use tokio::time::sleep;

// `client` and `prompts` are assumed to be in scope here.
for prompt in prompts {
    let response = client.generate_text("gemini-2.5-flash", prompt).await?;
    println!("{}", response);
    sleep(Duration::from_millis(500)).await; // Be nice to the API
}
```
Pick a model to match the task:

```rust
// For quick responses
let model = "gemini-2.5-flash";

// For complex reasoning
let model = "gemini-2.5-pro";

// For latest features
let model = "gemini-flash-latest";
```
The library is designed for unreliable connections (like Starlink) and ships with built-in retry logic for transient failures.
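If you want additional, application-level backoff on top of that, here is a minimal sketch; the helper, its delays, and the attempt count are illustrative and not part of the crate's API:

```rust
use std::time::Duration;

use gemini_crate::client::GeminiClient;
use tokio::time::sleep;

// Hypothetical helper: retry a prompt with exponential backoff.
async fn generate_with_backoff(
    client: &GeminiClient,
    model: &str,
    prompt: &str,
    max_attempts: u32,
) -> Result<String, Box<dyn std::error::Error>> {
    let mut delay = Duration::from_millis(500);
    let mut last_err: Option<Box<dyn std::error::Error>> = None;

    for _ in 0..max_attempts {
        match client.generate_text(model, prompt).await {
            Ok(response) => return Ok(response),
            Err(e) => {
                last_err = Some(e.into());
                sleep(delay).await;
                delay *= 2; // Double the wait after each failure
            }
        }
    }
    Err(last_err.expect("max_attempts must be at least 1"))
}
```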
- `GEMINI_API_KEY` (required) - Your Gemini API key

You can also configure the client programmatically:

```rust
use gemini_crate::{client::GeminiClient, config::Config};

let config = Config::from_api_key("your_api_key".to_string());
let client = GeminiClient::with_config(config);
```
Run the test suite and lints:

```bash
cargo test
cargo clippy
```

Licensed under either of:
at your option.
"GEMINI_API_KEY must be set"
.env file is in the project rootdotenvy::dotenv().ok() before creating the client"Model not found"
client.list_models() to see available modelsgemini-pro)Network timeouts
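A quick way to confirm the key actually loaded (plain `std`, no crate APIs involved):

```rust
// Run after dotenvy::dotenv().ok() to verify the key was picked up.
match std::env::var("GEMINI_API_KEY") {
    Ok(_) => println!("GEMINI_API_KEY is set"),
    Err(_) => eprintln!("GEMINI_API_KEY is missing; check your .env location"),
}
```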
For more help, see the full troubleshooting guide.