| Crates.io | llm-sdk-rs |
| lib.rs | llm-sdk-rs |
| version | 0.2.0 |
| created_at | 2025-09-27 17:17:03.813937+00 |
| updated_at | 2025-10-01 13:46:29.079389+00 |
| description | A Rust library that enables the development of applications that can interact with different language models through a unified interface. |
| homepage | https://llm-sdk.hoangvvo.com/ |
| repository | https://github.com/hoangvvo/llm-sdk |
| max_upload_size | |
| id | 1857486 |
| size | 527,012 |
A Rust library that enables the development of applications that can interact with different language models through a unified interface.
Install the crate with:

```sh
cargo add llm-sdk-rs
```
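Alternatively, declare the dependency in `Cargo.toml` yourself. A minimal sketch, assuming the current 0.2.x release line (the package is published as `llm-sdk-rs`, while the code snippets below import it as `llm_sdk`):

```toml
[dependencies]
llm-sdk-rs = "0.2"
```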
All models implement the `LanguageModel` trait:
```rust
use llm_sdk::{
    anthropic::{AnthropicModel, AnthropicModelOptions},
    google::{GoogleModel, GoogleModelOptions},
    openai::{OpenAIChatModel, OpenAIChatModelOptions, OpenAIModel, OpenAIModelOptions},
    LanguageModel,
};

pub fn get_model(provider: &str, model_id: &str) -> Box<dyn LanguageModel> {
    match provider {
        "openai" => Box::new(OpenAIModel::new(
            model_id.to_string(),
            OpenAIModelOptions {
                api_key: std::env::var("OPENAI_API_KEY")
                    .expect("OPENAI_API_KEY environment variable must be set"),
                ..Default::default()
            },
        )),
        "openai-chat-completion" => Box::new(OpenAIChatModel::new(
            model_id.to_string(),
            OpenAIChatModelOptions {
                api_key: std::env::var("OPENAI_API_KEY")
                    .expect("OPENAI_API_KEY environment variable must be set"),
                ..Default::default()
            },
        )),
        "anthropic" => Box::new(AnthropicModel::new(
            model_id.to_string(),
            AnthropicModelOptions {
                api_key: std::env::var("ANTHROPIC_API_KEY")
                    .expect("ANTHROPIC_API_KEY environment variable must be set"),
                ..Default::default()
            },
        )),
        "google" => Box::new(GoogleModel::new(
            model_id.to_string(),
            GoogleModelOptions {
                api_key: std::env::var("GOOGLE_API_KEY")
                    .expect("GOOGLE_API_KEY environment variable must be set"),
                ..Default::default()
            },
        )),
        _ => panic!("Unsupported provider: {provider}"),
    }
}
```
Below is an example that generates text:
```rust
use dotenvy::dotenv;
use llm_sdk::{LanguageModelInput, Message, Part};

mod common;

#[tokio::main]
async fn main() {
    dotenv().ok();
    let model = common::get_model("openai", "gpt-4o");

    let response = model
        .generate(LanguageModelInput {
            messages: vec![
                Message::user(vec![Part::text("Tell me a story.")]),
                Message::assistant(vec![Part::text(
                    "Sure! What kind of story would you like to hear?",
                )]),
                Message::user(vec![Part::text("a fairy tale")]),
            ],
            ..Default::default()
        })
        .await
        .unwrap();

    println!("{response:#?}");
}
```
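Responses can also be streamed (see the stream-text example). A minimal sketch, assuming the `LanguageModel` trait exposes a `stream` method whose result implements `futures::Stream` and yields partial responses; consult the stream-text example in the repository for the exact item types:

```rust
use dotenvy::dotenv;
use futures::StreamExt; // assumed: the returned stream implements futures::Stream
use llm_sdk::{LanguageModelInput, Message, Part};

mod common;

#[tokio::main]
async fn main() {
    dotenv().ok();
    let model = common::get_model("openai", "gpt-4o");

    // Hypothetical streaming call; the stream-text example shows the real API.
    let mut stream = model
        .stream(LanguageModelInput {
            messages: vec![Message::user(vec![Part::text("Tell me a story.")])],
            ..Default::default()
        })
        .await
        .unwrap();

    // Print each partial response as it arrives instead of waiting for the full reply.
    while let Some(partial) = stream.next().await {
        println!("{:#?}", partial.unwrap());
    }
}
```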
Find examples in the `examples` folder to learn how to:

- `generate-text`: Generate text
- `stream-text`: Stream text
- `generate-audio`: Generate audio
- `stream-audio`: Stream audio
- `generate-image`: Generate image
- `describe-image`: Describe image
- `summarize-audio`: Summarize audio
- `tool-use`: Function calling
- `structured-output`: Structured output
- `generate-reasoning`: Reasoning
- `stream-reasoning`: Stream reasoning
- `generate-citations`: Generate citations
- `stream-citations`: Stream citations

Run an example with:

```sh
cargo run --example generate-text
```
`image_data` and `audio_data` have been renamed to just `data` in `ImagePart` and `AudioPart`.
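For code migrating from an earlier release, the change looks roughly like this (shown schematically; other fields of the two structs are elided):

```rust
// Before the rename:
// ImagePart { image_data: /* ... */, .. }
// AudioPart { audio_data: /* ... */, .. }

// After the rename (0.2.0):
// ImagePart { data: /* ... */, .. }
// AudioPart { data: /* ... */, .. }
```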