Crates.io | ai |
---|---|
lib.rs | ai |
version | |
source | src |
created_at | 2017-07-21 00:46:46.637499 |
updated_at | 2025-02-01 22:07:21.873282 |
description | AI |
homepage | |
repository | https://github.com/prabirshrestha/ai.rs |
max_upload_size | |
id | 24334 |
size | 0 |
A simple-to-use AI library for Rust, primarily targeting OpenAI-compatible providers, with more to come.
This library is a work in progress, and the API is subject to change.
Add `ai` as a dependency along with `tokio`. For streaming, also add the `futures` crate; for `CancellationToken` support, add `tokio-util`. This library uses `reqwest` directly as the HTTP client when making requests to the servers.

```sh
cargo add ai
```
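The optional dependencies can be added the same way (a sketch; `tokio`'s `full` feature set is one convenient choice):

```sh
cargo add tokio --features full
cargo add futures      # only needed for streaming
cargo add tokio-util   # only needed for CancellationToken support
```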
Feature | Description | Default |
---|---|---|
`openai_client` | Enable OpenAI client | ✅ |
`azure_openai_client` | Enable Azure OpenAI client | ✅ |
`ollama_client` | Enable Ollama client | |
`native_tls` | Enable native TLS for the reqwest HTTP client | ✅ |
`rustls_tls` | Enable rustls TLS for the reqwest HTTP client | |
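For example, to swap native TLS for rustls, disable the default features and re-enable the clients you need (a sketch; pick the feature combination that matches your providers):

```sh
cargo add ai --no-default-features --features openai_client,azure_openai_client,rustls_tls
```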
Example Name | Description |
---|---|
azure_openai_chat_completions | Basic chat completions using Azure OpenAI API |
chat_completions_streaming | Chat completions streaming example |
chat_completions_streaming_with_cancellation_token | Chat completions streaming with cancellation token |
chat_completions_tool_calling | Tool/Function calling example |
chat_console | Console chat example |
clients_dynamic_runtime | Dynamic runtime client selection |
openai_chat_completions | Basic chat completions using OpenAI API |
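Assuming these are wired up as Cargo examples in the repository's `examples/` directory, they can be run with:

```sh
cargo run --example openai_chat_completions
```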
```rust
use ai::{
    chat_completions::{ChatCompletion, ChatCompletionMessage, ChatCompletionRequestBuilder},
    Result,
};

#[tokio::main]
async fn main() -> Result<()> {
    // Any OpenAI-compatible endpoint works; here the OpenAI client points at a local Ollama server.
    let openai = ai::clients::openai::Client::from_url("ollama", "http://localhost:11434/v1")?;
    // let openai = ai::clients::openai::Client::from_env()?;
    // let openai = ai::clients::openai::Client::new("api_key")?;

    let request = ChatCompletionRequestBuilder::default()
        .model("llama3.2")
        .messages(vec![
            ChatCompletionMessage::System("You are a helpful assistant".into()),
            ChatCompletionMessage::User("Tell me a joke.".into()),
        ])
        .build()?;

    let response = openai.chat_completions(&request).await?;
    println!("{}", &response.choices[0].message.content.as_ref().unwrap());

    Ok(())
}
```
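For streaming, the repository ships a `chat_completions_streaming` example. The sketch below continues the example above; the method name `chat_completions_stream` and the `delta`-style chunk shape are assumptions modeled on the OpenAI streaming API, not confirmed from this crate, so check that example for the actual API. It uses `StreamExt` from the `futures` crate.

```rust
use futures::StreamExt;

// Hypothetical sketch: method name and chunk shape are assumptions.
// See examples/chat_completions_streaming for the actual API.
let mut stream = openai.chat_completions_stream(&request).await?;
while let Some(chunk) = stream.next().await {
    let chunk = chunk?;
    if let Some(content) = &chunk.choices[0].delta.content {
        print!("{content}");
    }
}
```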
Messages can also be constructed from tuples; an unrecognized role will cause a panic.
```rust
let request = &ChatCompletionRequestBuilder::default()
    .model("gpt-4o-mini".to_string())
    .messages(vec![
        ("system", "You are a helpful assistant.").into(),
        ("user", "Tell me a joke").into(),
    ])
    .build()?;
```
```sh
cargo add ai --features=openai_client
```
```rust
let openai = ai::clients::openai::Client::new("openai_api_key")?;
let openai = ai::clients::openai::Client::from_url("openai_api_key", "https://api.openai.com/v1")?;
let openai = ai::clients::openai::Client::from_env()?;
```
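`from_env` reads the API key from the environment, presumably via the conventional `OPENAI_API_KEY` variable (an assumption; check the crate documentation):

```sh
export OPENAI_API_KEY=your_key_here   # assumed variable name
```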
Set `http1_title_case_headers` on the `reqwest` client when targeting the Gemini API.
```rust
let gemini = ai::clients::openai::ClientBuilder::default()
    .http_client(
        reqwest::Client::builder()
            .http1_title_case_headers()
            .build()?,
    )
    .api_key("gemini_api_key".into())
    .base_url("https://generativelanguage.googleapis.com/v1beta/openai".into())
    .build()?;
```
```sh
cargo add ai --features=azure_openai_client
```
```rust
let azure_openai = ai::clients::azure_openai::ClientBuilder::default()
    .auth(ai::clients::azure_openai::Auth::BearerToken("token".into()))
    // .auth(ai::clients::azure_openai::Auth::ApiKey(
    //     std::env::var(ai::clients::azure_openai::AZURE_OPENAI_API_KEY_ENV_VAR)
    //         .map_err(|e| Error::EnvVarError(ai::clients::azure_openai::AZURE_OPENAI_API_KEY_ENV_VAR.to_string(), e))?
    //         .into(),
    // ))
    .api_version("2024-02-15-preview")
    .base_url("https://resourcename.openai.azure.com")
    .build()?;
```
Pass the `deployment_id` as the `model` of the `ChatCompletionRequest`.
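For example (a sketch; `my-gpt4o-deployment` is a hypothetical deployment name, and it assumes the Azure client exposes the same `chat_completions` method as the OpenAI client):

```rust
// The Azure deployment name goes where the model name normally would.
let request = ChatCompletionRequestBuilder::default()
    .model("my-gpt4o-deployment") // hypothetical deployment_id
    .messages(vec![("user", "Tell me a joke").into()])
    .build()?;
let response = azure_openai.chat_completions(&request).await?;
```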
Use the following command to get a bearer token:

```sh
az account get-access-token --resource https://cognitiveservices.azure.com
```
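To wire the token through without copy-pasting, the Azure CLI's standard `--query`/`-o tsv` flags can extract it (a sketch; `AZURE_BEARER_TOKEN` is an arbitrary variable name):

```sh
export AZURE_BEARER_TOKEN=$(az account get-access-token \
  --resource https://cognitiveservices.azure.com \
  --query accessToken -o tsv)
```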
We suggest using the OpenAI client instead of the Ollama client for maximum compatibility.
```sh
cargo add ai --features=ollama_client
```
```rust
let ollama = ai::clients::ollama::Client::new()?;
let ollama = ai::clients::ollama::Client::from_url("http://localhost:11434")?;
```
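The Ollama client is then used like the other clients (assuming it implements the same `ChatCompletion` trait shown above; see the repository examples to confirm):

```rust
let request = ChatCompletionRequestBuilder::default()
    .model("llama3.2")
    .messages(vec![("user", "Tell me a joke").into()])
    .build()?;
// Assumes the Ollama client exposes the same chat_completions interface.
let response = ollama.chat_completions(&request).await?;
```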
MIT