| Field | Value |
|---|---|
| Crates.io | genai |
| lib.rs | genai |
| version | 0.4.0-alpha.15 |
| created_at | 2024-06-01 19:18:59.107716+00 |
| updated_at | 2025-09-07 19:07:20.80839+00 |
| description | Multi-AI Providers Library for Rust. (OpenAI, Gemini, Anthropic, xAI, Ollama, Groq, DeepSeek, Grok) |
| homepage | https://github.com/jeremychone/rust-genai |
| repository | https://github.com/jeremychone/rust-genai |
| max_upload_size | |
| id | 1258833 |
| size | 572,008 |
Currently natively supports: OpenAI, Anthropic, Gemini, xAI/Grok, Ollama, Groq, DeepSeek (deepseek.com & Groq), Cohere (more to come).

Also allows a custom URL with `ServiceTargetResolver` (see `examples/c06-target-resolver.rs`).
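A rough sketch of such a resolver, loosely following `examples/c06-target-resolver.rs`. The endpoint URL and env-var name below are placeholders, and the exact module paths and signatures may differ from the current crate; the example file is the authoritative version.

```rust
use genai::adapter::AdapterKind;
use genai::resolver::{AuthData, Endpoint, ServiceTargetResolver};
use genai::{Client, ModelIden, ServiceTarget};

// Sketch: redirect every request to a custom OpenAI-compatible endpoint.
// "my-gateway.example.com" and "MY_GATEWAY_API_KEY" are placeholders.
fn build_custom_client() -> Client {
	let target_resolver = ServiceTargetResolver::from_resolver_fn(
		|service_target: ServiceTarget| -> Result<ServiceTarget, genai::resolver::Error> {
			let ServiceTarget { model, .. } = service_target;
			let endpoint = Endpoint::from_static("https://my-gateway.example.com/v1/");
			let auth = AuthData::from_env("MY_GATEWAY_API_KEY");
			let model = ModelIden::new(AdapterKind::OpenAI, model.model_name);
			Ok(ServiceTarget { endpoint, auth, model })
		},
	);

	Client::builder()
		.with_service_target_resolver(target_resolver)
		.build()
}
```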
Provides a single, ergonomic API to many generative AI providers, such as Anthropic, OpenAI, Gemini, xAI, Ollama, Groq, and more.
NOTE: Try to use the latest version (`0.4.0-alpha.4`). It is as robust as `0.3.x`, but with updated APIs (see below) and additional functionality thanks to many great PRs.
What's new: (`-` fix, `+` addition, `!` change) (see CHANGELOG.md for more)
- `!` API CHANGE `ChatResponse::content` is now `MessageContent` (as `MessageContent` is now multipart). Minor impact, as the `ChatResponse` public API works as before (`into_text`...).
  - `let joined_text: Option<String> = chat_response.content.into_joined_texts()`
- `!` API CHANGE `MessageContent::text(&self)` is replaced by the following (because `MessageContent` now flattens multi-part formats); a short sketch follows below:
  - `MessageContent::into_joined_texts(self) -> Option<String>`
  - `MessageContent::joined_texts(&self) -> Option<String>`
  - `MessageContent::texts(&self) -> Vec<&str>`
  - `MessageContent::into_texts(self) -> Vec<String>`
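A minimal sketch of this accessor family, given some `content: MessageContent` (method names taken from the list above):

```rust
use genai::chat::MessageContent;

// Minimal sketch of the accessors listed above.
fn show_text_accessors(content: MessageContent) {
	// Borrowing accessors
	let parts: Vec<&str> = content.texts(); // every text part
	let joined: Option<String> = content.joined_texts(); // parts joined, None if no text part
	println!("{} text part(s), joined: {joined:?}", parts.len());

	// Consuming accessor (moves `content`)
	let owned: Vec<String> = content.into_texts();
	println!("owned parts: {owned:?}");
}
```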
- `+` Custom HTTP headers in `ChatOptions` (#78)
- `+` Model namespacing to specify the adapter, e.g., `openai::codex-unknown-model` will use the OpenAI adapter and send `codex-unknown-model` as the model name. `AdapterKind` and model name can still be overridden by `ServiceTargetResolver` (sketch below).
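For instance, a short sketch of a namespaced call (the model name is intentionally one the default mapper would not recognize, per the note above):

```rust
use genai::chat::{ChatMessage, ChatRequest};
use genai::Client;

// "openai::" forces the OpenAI adapter; "codex-unknown-model" is sent
// verbatim as the model name.
async fn namespaced_call() -> Result<(), Box<dyn std::error::Error>> {
	let client = Client::default();
	let chat_req = ChatRequest::new(vec![ChatMessage::user("Hello")]);
	let _res = client.exec_chat("openai::codex-unknown-model", chat_req, None).await?;
	Ok(())
}
```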
- `+` New Adapters: Zhipu (ChatGLM) (#76), Nebius
- `!` API CHANGE `ChatResponse.content` is now a `Vec<MessageContent>` to support responses that include tool calls and text messages:
  - `let text: &str = chat_response.first_text()` (was `ChatResponse::content_text_as_str()`)
  - or `let texts: Vec<&str> = chat_response.texts();`
  - `let texts: Vec<String> = chat_response.into_texts();`
  - `let text: String = chat_response.into_first_text()` (was `ChatResponse::content_text_into_string()`)
  - `let text: String = content.into_iter().filter_map(|c| c.text_into_string()).collect::<Vec<_>>().join("\n\n")`
- `!` API CHANGE `ChatResponse::into_tool_calls()` and `tool_calls()` now return `Vec<ToolCall>` rather than `Option<Vec<ToolCall>>` (a small sketch follows below).
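In practice this trades `Option` handling for an empty-check; a small sketch:

```rust
use genai::chat::ChatResponse;

// Sketch: tool_calls() now returns a possibly-empty Vec rather than an Option.
fn handle_tool_calls(chat_res: &ChatResponse) {
	let tool_calls = chat_res.tool_calls();
	if tool_calls.is_empty() {
		println!("no tool calls in this response");
	} else {
		println!("{} tool call(s) to dispatch", tool_calls.len());
	}
}
```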
- `!` API CHANGE `MessageContent`: now use `message_content.text()` and `message_content.into_text()` (rather than `text_as_str`, `text_into_string`)
- `-` Gemini ToolResponse fix: the Gemini adapter wrongfully tried to parse `ToolResponse.content` (see #59)
- `!` Tool Use streaming support, thanks to ClanceyLu, PR #58

What's new:
- `+` `ReasoningEffort::Budget(num)`
- `+` `-zero`, `-low`, `-medium`, and `-high` model-name suffixes that set the corresponding budget (`0`, `1k`, `8k`, `24k`)
- `+` `ReasoningEffort::Low, ...` will map to their corresponding budgets `1k`, `8k`, `24k` (see the sketch below)
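A hedged sketch of the `Budget` variant; this assumes `ChatOptions` exposes a `with_reasoning_effort` builder, which may differ in the actual API. Per the suffix rule above, appending `-medium` to a supported model name would select the same 8k budget.

```rust
use genai::chat::{ChatOptions, ReasoningEffort};

// Hedged sketch; assumes a `with_reasoning_effort` builder on ChatOptions.
fn reasoning_options() -> ChatOptions {
	// Explicit token budget; ReasoningEffort::Medium would map to 8k on budget-based providers.
	ChatOptions::default().with_reasoning_effort(ReasoningEffort::Budget(8_000))
}
```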
API CHANGES (minor)
- `ReasoningEffort` now has an additional `Budget(num)` variant
- `ModelIden::with_name_or_clone` has been deprecated in favor of `ModelIden::from_option_name(Option<String>)`
Check CHANGELOG for more info
- `+` `ReasoningEffort::Budget` support
- `+` `stop_sequences` Anthropic support, PR #34
- See pro@coder for a simple example of how I use AIPACK/genai for production coding. Note: Feel free to send me a short description and a link to your application or library using genai.
- `+` `reasoning_content` (and stream support), plus DeepSeek Groq and Ollama support (and `reasoning_content` normalization); a short sketch follows below.

Examples | Thanks | Library Focus | Changelog | Provider Mapping: ChatOptions | Usage
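A hedged sketch of reading the normalized reasoning content; this assumes `ChatResponse` carries a public `reasoning_content: Option<String>`, which may differ in the actual API.

```rust
use genai::chat::ChatResponse;

// Hedged sketch; assumes a public `reasoning_content: Option<String>` on ChatResponse.
fn print_reasoning(chat_res: &ChatResponse) {
	if let Some(reasoning) = chat_res.reasoning_content.as_deref() {
		println!("-- reasoning:\n{reasoning}");
	}
}
```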
```rust
//! Base examples demonstrating the core capabilities of genai
use genai::chat::printer::{print_chat_stream, PrintChatStreamOptions};
use genai::chat::{ChatMessage, ChatRequest};
use genai::Client;
const MODEL_OPENAI: &str = "gpt-4o-mini"; // o1-mini, gpt-4o-mini
const MODEL_ANTHROPIC: &str = "claude-3-haiku-20240307";
const MODEL_COHERE: &str = "command-light";
const MODEL_GEMINI: &str = "gemini-2.0-flash";
const MODEL_GROQ: &str = "llama-3.1-8b-instant";
const MODEL_OLLAMA: &str = "gemma:2b"; // sh: `ollama pull gemma:2b`
const MODEL_XAI: &str = "grok-beta";
const MODEL_DEEPSEEK: &str = "deepseek-chat";
// NOTE: These are the default environment keys for each AI Adapter Type.
// They can be customized; see `examples/c02-auth.rs`
const MODEL_AND_KEY_ENV_NAME_LIST: &[(&str, &str)] = &[
// -- De/activate models/providers
(MODEL_OPENAI, "OPENAI_API_KEY"),
(MODEL_ANTHROPIC, "ANTHROPIC_API_KEY"),
(MODEL_COHERE, "COHERE_API_KEY"),
(MODEL_GEMINI, "GEMINI_API_KEY"),
(MODEL_GROQ, "GROQ_API_KEY"),
(MODEL_XAI, "XAI_API_KEY"),
(MODEL_DEEPSEEK, "DEEPSEEK_API_KEY"),
(MODEL_OLLAMA, ""),
];
// NOTE: Model to AdapterKind (AI Provider) type mapping rule
// - starts_with "gpt" -> OpenAI
// - starts_with "claude" -> Anthropic
// - starts_with "command" -> Cohere
// - starts_with "gemini" -> Gemini
// - model in Groq models -> Groq
// - For anything else -> Ollama
//
// This can be customized; see `examples/c03-mapper.rs`
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let question = "Why is the sky red?";
let chat_req = ChatRequest::new(vec![
// -- Messages (de/activate to see the differences)
ChatMessage::system("Answer in one sentence"),
ChatMessage::user(question),
]);
let client = Client::default();
let print_options = PrintChatStreamOptions::from_print_events(false);
for (model, env_name) in MODEL_AND_KEY_ENV_NAME_LIST {
// Skip if the environment name is not set
if !env_name.is_empty() && std::env::var(env_name).is_err() {
println!("===== Skipping model: {model} (env var not set: {env_name})");
continue;
}
let adapter_kind = client.resolve_service_target(model)?.model.adapter_kind;
println!("\n===== MODEL: {model} ({adapter_kind}) =====");
println!("\n--- Question:\n{question}");
println!("\n--- Answer:");
let chat_res = client.exec_chat(model, chat_req.clone(), None).await?;
println!("{}", chat_res.content_text_as_str().unwrap_or("NO ANSWER"));
println!("\n--- Answer: (streaming)");
let chat_res = client.exec_chat_stream(model, chat_req.clone(), None).await?;
print_chat_stream(chat_res, Some(&print_options)).await?;
println!();
}
Ok(())
}
```
- `AuthResolver` to provide auth data (i.e., for api_key) per adapter kind.
- `AdapterKindResolver` to customize the "model name" to "adapter kind" mapping.
- `temperature` and `max_tokens` settable at the client level (for all requests) and at the per-request level; a hedged sketch follows below.

genai live coding, code design, & best practices
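A hedged sketch of both levels; the `Client::builder()`/`with_chat_options` and `ChatOptions` builder method names are assumptions based on the crate's docs.

```rust
use genai::chat::{ChatMessage, ChatOptions, ChatRequest};
use genai::Client;

async fn options_at_both_levels() -> Result<(), Box<dyn std::error::Error>> {
	// Client-level defaults, applied to all requests (builder names assumed).
	let client = Client::builder()
		.with_chat_options(ChatOptions::default().with_temperature(0.7).with_max_tokens(512))
		.build();

	// Per-request override via the third argument of exec_chat.
	let chat_req = ChatRequest::new(vec![ChatMessage::user("Say hello")]);
	let per_req = ChatOptions::default().with_temperature(0.0);
	let _res = client.exec_chat("gpt-4o-mini", chat_req, Some(&per_req)).await?;
	Ok(())
}
```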
Focuses on standardizing chat completion APIs across major AI services.
Native implementation, meaning no per-service SDKs.
Prioritizes ergonomics and commonality, with depth being secondary. (If you require a complete client API, consider using async-openai and ollama-rs; they are both excellent and easy to use.)
Initially, this library will mostly focus on text chat APIs; images and function calling will come later.
| Property | OpenAI Compatibles (1) | Anthropic | Gemini `generationConfig.` | Cohere |
|---|---|---|---|---|
| `temperature` | `temperature` | `temperature` | `temperature` | `temperature` |
| `max_tokens` | `max_tokens` | `max_tokens` (default 1024) | `maxOutputTokens` | `max_tokens` |
| `top_p` | `top_p` | `top_p` | `topP` | `p` |
| Property | OpenAI Compatibles (1) | Anthropic `usage.` | Gemini `usageMetadata.` | Cohere `meta.tokens.` |
|---|---|---|---|---|
| `prompt_tokens` | `prompt_tokens` | `input_tokens` (added) | `promptTokenCount` (2) | `input_tokens` |
| `completion_tokens` | `completion_tokens` | `output_tokens` (added) | `candidatesTokenCount` (2) | `output_tokens` |
| `total_tokens` | `total_tokens` | (computed) | `totalTokenCount` (2) | (computed) |
| `prompt_tokens_details` | `prompt_tokens_details` | cached/cache_creation | N/A for now | N/A for now |
| `completion_tokens_details` | `completion_tokens_details` | N/A for now | N/A for now | N/A for now |
(1): OpenAI compatibles notes: for Groq, usage is nested under `x_groq.usage.`; `prompt_tokens_details` and `completion_tokens_details` will have the value sent by the compatible provider (or `None`).

(2): Gemini tokens:
Right now, with the Gemini Stream API, it's not entirely clear whether the usage reported on each event is cumulative or needs to be summed. It currently appears to be cumulative (i.e., the last message carries the total input, output, and total token counts), so that is the assumption. See possible tweet answer for more info.
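A hedged sketch of reading the normalized usage off a response; field names follow the first column of the table above, and the exact shape of the usage struct (here assumed to hold `Option` values) is an assumption.

```rust
use genai::chat::ChatResponse;

// Hedged sketch; assumes `ChatResponse.usage` exposes the normalized fields
// from the table above as Option values.
fn print_usage(chat_res: &ChatResponse) {
	let usage = &chat_res.usage;
	println!(
		"prompt: {:?}, completion: {:?}, total: {:?}",
		usage.prompt_tokens, usage.completion_tokens, usage.total_tokens
	);
}
```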
- Will add more data on `ChatResponse` and `ChatStream`, especially metadata about usage.
- Add vision/image support to chat messages and responses.
- Add function calling support to chat messages and responses.
- Add `embed` and `embed_batch`.
- Add the AWS Bedrock variants (e.g., Mistral and Anthropic). Most of the work will be on the "interesting" token signature scheme (without having to drag in big SDKs; this might sit behind a cargo feature).
- Add the Google VertexAI variants.
- (Might) add the Azure OpenAI variant (not sure yet).