| | |
|---|---|
| Crates.io | api_gemini |
| lib.rs | api_gemini |
| version | 0.5.0 |
| created_at | 2025-11-06 19:02:35.709162+00 |
| updated_at | 2025-11-29 19:16:58.242271+00 |
| description | Gemini's API for accessing large language models (LLMs). |
| homepage | https://github.com/Wandalen/api_llm/tree/master/api/gemini |
| repository | https://github.com/Wandalen/api_llm/tree/master/api/gemini |
| max_upload_size | |
| id | 1920124 |
| size | 1,848,419 |
Comprehensive Rust client for the Google Gemini API with complete type-safe access to all endpoints.
This API crate is designed as a stateless HTTP client with zero persistence requirements, which keeps deployments lightweight and container-friendly and eliminates operational complexity. Its key principle: expose all server-side functionality transparently, with zero client-side intelligence or automatic behaviors. Coverage spans core capabilities (content generation, embeddings, model listing), advanced features (optional batch operations and compression), and enterprise reliability.
Add to your `Cargo.toml`:

```toml
[dependencies]
api_gemini = "0.5.0"
tokio = { version = "1.0", features = [ "macros", "rt-multi-thread" ] }
```

Optional features:

```toml
# Default features
api_gemini = "0.5.0"

# With batch operations (infrastructure ready)
api_gemini = { version = "0.5.0", features = [ "batch_operations" ] }

# With compression support
api_gemini = { version = "0.5.0", features = [ "compression" ] }

# All features
api_gemini = { version = "0.5.0", features = [ "full" ] }
```
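If your own crate needs to toggle these at build time, Cargo can forward your features to the dependency; a minimal sketch (the left-hand feature names are hypothetical, yours to choose):

```toml
[features]
batch_operations = [ "api_gemini/batch_operations" ]
compression = [ "api_gemini/compression" ]
```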
A minimal text-generation example:

```rust
use api_gemini::{ client::Client, models::*, error::Error };

#[tokio::main]
async fn main() -> Result< (), Error >
{
  // Create client from GEMINI_API_KEY environment variable
  let client = Client::new().map_err( |_| Error::ConfigurationError( "Failed to create client".to_string() ) )?;

  // Simple text generation
  let request = GenerateContentRequest
  {
    contents: vec!
    [
      Content
      {
        parts: vec![ Part { text: Some( "Write a haiku about programming".to_string() ), ..Default::default() } ],
        role: "user".to_string(),
      }
    ],
    ..Default::default()
  };

  let response = client.models().by_name( "gemini-1.5-pro-latest" ).generate_content( &request ).await?;

  if let Some( text ) = response.candidates.first()
    .and_then( |c| c.content.parts.first() )
    .and_then( |p| p.text.as_ref() )
  {
    println!( "{}", text );
  }

  Ok( () )
}
```
Create `secret/-secret.sh` in your project root:

```sh
GEMINI_API_KEY="your-api-key-here"
```

```rust
use api_gemini::client::Client;

fn main() -> Result< (), Box< dyn std::error::Error > >
{
  let client = Client::new()?; // Automatically reads from secret/-secret.sh
  Ok( () )
}
```
Or export the key as an environment variable:

```sh
export GEMINI_API_KEY="your-api-key-here"
```

Or pass the key explicitly via the builder:

```rust
use api_gemini::client::Client;

fn main() -> Result< (), Box< dyn std::error::Error > >
{
  let client = Client::builder()
    .api_key( "your-api-key".to_string() )
    .build()?;
  Ok( () )
}
```
Get your API key from [Google AI Studio](https://aistudio.google.com/).
Errors are surfaced as typed variants, so callers can match on the failure mode:

```rust
use api_gemini::{ client::Client, error::Error };

async fn example()
{
  let client = Client::new().unwrap();

  match client.models().list().await
  {
    Ok( models ) => println!( "Found {} models", models.models.len() ),
    Err( Error::AuthenticationError( msg ) ) => eprintln!( "Auth failed: {}", msg ),
    Err( Error::RateLimitError( msg ) ) => eprintln!( "Rate limited: {}", msg ),
    Err( Error::ApiError( msg ) ) => eprintln!( "API error: {}", msg ),
    Err( e ) => eprintln!( "Error: {:?}", e ),
  }
}
```
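Rate-limit errors are usually transient and worth retrying. A minimal backoff sketch, assuming tokio's `time` feature is enabled; the attempt count, delays, and helper name are illustrative, not part of the crate:

```rust
use api_gemini::{ client::Client, error::Error };
use std::time::Duration;

// Hypothetical helper: retry a listing call with exponential backoff on rate limits.
async fn list_with_retry( client: &Client ) -> Result< (), Error >
{
  for attempt in 0..3
  {
    match client.models().list().await
    {
      Ok( models ) =>
      {
        println!( "Found {} models", models.models.len() );
        return Ok( () );
      }
      // Back off only on rate limiting; every other error bubbles up immediately.
      Err( Error::RateLimitError( _ ) ) =>
      {
        tokio::time::sleep( Duration::from_secs( 1 << attempt ) ).await;
      }
      Err( e ) => return Err( e ),
    }
  }
  Err( Error::ApiError( "still rate limited after retries".to_string() ) )
}
```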
| Model | Context Window | Vision | Notes |
|---|---|---|---|
| gemini-2.5-flash | 1M tokens | Yes | Latest stable |
| gemini-1.5-pro | 1M tokens | Yes | Full capabilities |
| gemini-1.5-flash | 1M tokens | Yes | Fast, cost-effective |
| text-embedding-004 | - | No | Embeddings only |
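Any model name from the table can be passed to `by_name`. A minimal sketch reusing the quick-start request type, swapping in the fast model (only calls already shown above are used):

```rust
use api_gemini::{ client::Client, models::*, error::Error };

// Hypothetical helper: run the same request against the fast, cost-effective model.
async fn fast_generation( client: &Client, request: &GenerateContentRequest ) -> Result< (), Error >
{
  let response = client.models().by_name( "gemini-2.5-flash" ).generate_content( request ).await?;

  if let Some( text ) = response.candidates.first()
    .and_then( |c| c.content.parts.first() )
    .and_then( |p| p.text.as_ref() )
  {
    println!( "{}", text );
  }
  Ok( () )
}
```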
All dependencies are workspace-managed for consistency. Lint with:

```sh
cargo clippy -- -D warnings
```

License: MIT