| Crates.io | miyabi-llm-google |
| lib.rs | miyabi-llm-google |
| version | 0.1.2 |
| created_at | 2025-11-22 06:41:29.145582+00 |
| updated_at | 2025-11-22 06:41:29.145582+00 |
| description | Google Gemini API client for Miyabi LLM - Provider-specific implementation |
| homepage | |
| repository | https://github.com/ShunsukeHayashi/Miyabi |
| max_upload_size | |
| id | 1944939 |
| size | 69,640 |
Google Gemini API client for Miyabi LLM - Provider-specific implementation.
This crate implements the miyabi-llm-core traits for the Google Gemini provider. Add this to your Cargo.toml:
[dependencies]
miyabi-llm-google = "0.1"
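The usage examples below also rely on tokio for the async runtime, futures for stream handling, and serde_json for tool schemas. A dependency block for following along might look like this (the extra crate names and versions are illustrative, not prescribed by this crate):
[dependencies]
miyabi-llm-google = "0.1"
miyabi-llm-core = "0.1"   # Message, Role, LlmClient, and the other shared types
tokio = { version = "1", features = ["full"] }   # async runtime for #[tokio::main]
futures = "0.3"           # StreamExt for the streaming example
serde_json = "1"          # json! macro for tool parameter schemas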
Or use the unified miyabi-llm crate for multi-provider support:
[dependencies]
miyabi-llm = "0.1"
Quick start:
use miyabi_llm_google::GoogleClient;
use miyabi_llm_core::{LlmClient, Message, Role};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create client from environment variable (GOOGLE_API_KEY or GEMINI_API_KEY)
let client = GoogleClient::from_env()?
.with_flash() // Use Gemini 1.5 Flash (faster, cheaper)
.with_temperature(0.7);
// Simple chat
let messages = vec![
Message::new(Role::User, "What is Rust?".to_string()),
];
let response = client.chat(messages).await?;
println!("Response: {}", response);
Ok(())
}
Using an explicit API key with Gemini 1.5 Pro and a token limit:
use miyabi_llm_google::GoogleClient;
use miyabi_llm_core::{LlmClient, Message, Role};
let client = GoogleClient::new("your-api-key".to_string())
.with_pro() // Use Gemini 1.5 Pro (default)
.with_max_tokens(2048);
let messages = vec![
Message::new(Role::User, "Explain quantum computing".to_string()),
];
let response = client.chat(messages).await?;
println!("{}", response);
Streaming responses chunk by chunk:
use miyabi_llm_google::GoogleClient;
use miyabi_llm_core::{LlmStreamingClient, Message, Role};
use futures::StreamExt;
let client = GoogleClient::from_env()?.with_flash();
let messages = vec![
Message::new(Role::User, "Write a short story about AI".to_string()),
];
let mut stream = client.chat_stream(messages).await?;
while let Some(chunk) = stream.next().await {
match chunk {
Ok(text) => print!("{}", text),
Err(e) => eprintln!("Stream error: {}", e),
}
}
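If you also want the complete text once streaming finishes, you can accumulate the chunks as they arrive. A minimal sketch, assuming each chunk is a Result wrapping a String, as in the loop above:
let mut stream = client.chat_stream(messages).await?;
let mut full_text = String::new();
while let Some(chunk) = stream.next().await {
    let text = chunk?;            // propagate stream errors instead of only logging them
    print!("{}", text);           // incremental output as before
    full_text.push_str(&text);    // keep the whole response for later use
}
println!();
println!("Received {} characters in total", full_text.len());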
Tool calling (function calling):
use miyabi_llm_google::GoogleClient;
use miyabi_llm_core::{LlmClient, Message, Role, ToolDefinition, ToolCallResponse};
use serde_json::json;
let client = GoogleClient::from_env()?;
let messages = vec![
Message::new(Role::User, "What's the weather in Tokyo?".to_string()),
];
let tools = vec![
ToolDefinition {
name: "get_weather".to_string(),
description: "Get current weather for a location".to_string(),
parameters: json!({
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name"
}
},
"required": ["location"]
}),
},
];
let response = client.chat_with_tools(messages, tools).await?;
match response {
ToolCallResponse::ToolCalls(calls) => {
for call in calls {
println!("Function: {} with args: {:?}", call.name, call.arguments);
}
}
ToolCallResponse::Conclusion { text } => {
println!("Final answer: {}", text);
}
}
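The crate only reports which tools the model wants to call; executing them is your code's job. A minimal dispatch sketch for the ToolCalls arm above, assuming call.arguments is a serde_json::Value shaped like the declared schema (lookup_weather is a hypothetical stand-in for a real weather lookup):
// Hypothetical local handler for the get_weather tool.
fn lookup_weather(location: &str) -> String {
    format!("Sunny, 22°C in {}", location)
}

// Inside the ToolCallResponse::ToolCalls(calls) arm:
for call in calls {
    if call.name == "get_weather" {
        // Read the "location" argument declared in the JSON schema above.
        let location = call.arguments["location"].as_str().unwrap_or("unknown");
        println!("Tool result: {}", lookup_weather(location));
    }
}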
Multi-turn conversations pass the full message history on every call:
use miyabi_llm_google::GoogleClient;
use miyabi_llm_core::{LlmClient, Message, Role};
let client = GoogleClient::from_env()?;
let messages = vec![
Message::new(Role::User, "Hello! My name is Alice.".to_string()),
Message::new(Role::Assistant, "Hello Alice! How can I help you today?".to_string()),
Message::new(Role::User, "What was my name?".to_string()),
];
let response = client.chat(messages).await?;
println!("{}", response); // Should mention "Alice"
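For an interactive session you would normally grow the history turn by turn, pushing the model's reply back in before asking the next question. A minimal sketch using the same types, cloning the history because chat takes the messages by value (this assumes Message implements Clone):
let mut history: Vec<Message> = Vec::new();

// Turn 1
history.push(Message::new(Role::User, "Hello! My name is Alice.".to_string()));
let reply = client.chat(history.clone()).await?;
history.push(Message::new(Role::Assistant, reply));

// Turn 2 - the model sees the full history, including its own earlier reply
history.push(Message::new(Role::User, "What was my name?".to_string()));
let reply = client.chat(history.clone()).await?;
println!("{}", reply); // should still mention "Alice"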
Model selection:
// Gemini 1.5 Pro (default) - Higher quality, slower
let client = GoogleClient::from_env()?.with_pro();
// Gemini 1.5 Flash - Faster, cheaper, good quality
let client = GoogleClient::from_env()?.with_flash();
// Custom model
let client = GoogleClient::from_env()?.with_model("gemini-1.5-pro-latest");
Generation settings:
let client = GoogleClient::from_env()?
.with_max_tokens(4096) // Maximum output tokens
.with_temperature(0.7); // Randomness (0.0-1.0)
Set one of these environment variables:
export GOOGLE_API_KEY="your-gemini-api-key"
# or
export GEMINI_API_KEY="your-gemini-api-key"
Get your API key from: Google AI Studio
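If you prefer a readable failure before the first request, you can check for the variables yourself and only then construct the client; a small sketch:
use std::env;

// Fail early with a clear hint if neither variable is set.
if env::var("GOOGLE_API_KEY").is_err() && env::var("GEMINI_API_KEY").is_err() {
    eprintln!("Set GOOGLE_API_KEY or GEMINI_API_KEY first (keys come from Google AI Studio).");
    std::process::exit(1);
}
let client = GoogleClient::from_env()?;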
GoogleClient:
- new(api_key: String) -> Self - Create client with API key
- from_env() -> Result<Self> - Create from environment variable
- with_pro(self) -> Self - Use Gemini 1.5 Pro model
- with_flash(self) -> Self - Use Gemini 1.5 Flash model
- with_model(self, model: impl Into<String>) -> Self - Set custom model
- with_max_tokens(self, max_tokens: i32) -> Self - Set max output tokens
- with_temperature(self, temperature: f32) -> Self - Set temperature (0.0-1.0)

LlmClient Trait:
- async fn chat(&self, messages: Vec<Message>) -> Result<String> - Basic chat completion
- async fn chat_with_tools(&self, messages: Vec<Message>, tools: Vec<ToolDefinition>) -> Result<ToolCallResponse> - Chat with function calling
- fn provider_name(&self) -> &str - Returns "google"
- fn model_name(&self) -> &str - Returns current model name

LlmStreamingClient Trait:
- async fn chat_stream(&self, messages: Vec<Message>) -> Result<StreamResponse> - Streaming chat completion

Errors are reported via miyabi_llm_core::LlmError:
use miyabi_llm_core::LlmError;
match client.chat(messages).await {
Ok(response) => println!("{}", response),
Err(LlmError::MissingApiKey(var)) => {
eprintln!("Missing API key: {}", var);
}
Err(LlmError::NetworkError(msg)) => {
eprintln!("Network error: {}", msg);
}
Err(LlmError::ApiError(msg)) => {
eprintln!("API error: {}", msg);
}
Err(LlmError::ParseError(msg)) => {
eprintln!("Parse error: {}", msg);
}
Err(e) => eprintln!("Other error: {}", e),
}
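Network errors are often transient, so a bounded retry around chat can be worthwhile. This is a sketch only; the attempt count and delay are arbitrary, and it assumes Message implements Clone so the request can be resent:
use tokio::time::{sleep, Duration};

let mut attempts = 0;
let response = loop {
    match client.chat(messages.clone()).await {
        Ok(resp) => break resp,
        // Retry only the error variant that is likely to be transient.
        Err(LlmError::NetworkError(msg)) if attempts < 2 => {
            attempts += 1;
            eprintln!("Network error ({}), retry {}/2...", msg, attempts);
            sleep(Duration::from_secs(2)).await;
        }
        Err(e) => return Err(e.into()),
    }
};
println!("{}", response);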
Gemini 1.5 pricing (as of 2024):
| Model | Input | Output |
|---|---|---|
| Gemini 1.5 Pro | $0.00125 / 1K tokens | $0.005 / 1K tokens |
| Gemini 1.5 Flash | $0.000075 / 1K tokens | $0.0003 / 1K tokens |
Flash is roughly 17x cheaper than Pro for both input and output tokens.
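As a back-of-the-envelope check, a request with 10K input tokens and 1K output tokens comes to about $0.0175 on Pro versus about $0.00105 on Flash. A small helper for this kind of estimate, using the per-1K-token rates from the table above:
// Estimate request cost in USD from token counts and per-1K-token rates.
fn estimate_cost(input_tokens: u32, output_tokens: u32, input_rate: f64, output_rate: f64) -> f64 {
    (input_tokens as f64 / 1000.0) * input_rate + (output_tokens as f64 / 1000.0) * output_rate
}

let pro = estimate_cost(10_000, 1_000, 0.00125, 0.005);       // ≈ $0.0175
let flash = estimate_cost(10_000, 1_000, 0.000075, 0.0003);   // ≈ $0.00105
println!("Pro: ${:.5}  Flash: ${:.5}", pro, flash);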
| Feature | Gemini 1.5 Pro | Gemini 1.5 Flash | Claude 3.5 | GPT-4o |
|---|---|---|---|---|
| Max tokens | 8192 | 8192 | 4096 | 16384 |
| Context window | 2M tokens | 1M tokens | 200K | 128K |
| Speed | Medium | Fast | Medium | Medium |
| Cost | Low | Very Low | High | Medium |
| Tool calling | ✅ | ✅ | ✅ | ✅ |
| Streaming | ✅ | ✅ | ✅ | ✅ |
See the examples/ directory for more:
- basic_chat.rs - Simple chat example
- streaming.rs - Streaming responses
- tool_calling.rs - Function calling example
- conversation.rs - Multi-turn dialogue

Run examples:
cargo run --example basic_chat
Run tests:
cargo test --package miyabi-llm-google
Note: Some tests require GOOGLE_API_KEY or GEMINI_API_KEY environment variable.
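A common pattern for such tests is to skip quietly when no key is configured rather than fail. The test below is an illustrative sketch, not a test shipped with this crate:
use miyabi_llm_core::{LlmClient, Message, Role};
use miyabi_llm_google::GoogleClient;

#[tokio::test]
async fn chat_smoke_test() {
    // Skip when no API key is available so CI without secrets stays green.
    if std::env::var("GOOGLE_API_KEY").is_err() && std::env::var("GEMINI_API_KEY").is_err() {
        eprintln!("skipping: no Gemini API key set");
        return;
    }
    let client = GoogleClient::from_env().expect("client");
    let messages = vec![Message::new(Role::User, "ping".to_string())];
    let response = client.chat(messages).await.expect("chat");
    assert!(!response.is_empty());
}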
This crate is designed to be used via the unified miyabi-llm interface:
use miyabi_llm::{GoogleClient, LlmClient};
let client = GoogleClient::from_env()?;
let response = client.chat(messages).await?;
Licensed under the Apache License, Version 2.0. See LICENSE for details.
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.