| Crates.io | allms |
| lib.rs | allms |
| version | 0.25.0 |
| created_at | 2024-01-03 20:03:07.465357+00 |
| updated_at | 2025-09-22 13:25:28.425713+00 |
| description | One Library to rule them aLLMs |
| homepage | |
| repository | https://github.com/neferdata/allms.git |
| max_upload_size | |
| id | 1087730 |
| size | 999,110 |
This Rust library specializes in type-safe interactions with the APIs of the following LLM providers: Anthropic, AWS Bedrock, Azure, DeepSeek, Google Gemini, Mistral, OpenAI, Perplexity, and xAI, with more providers to be added in the future. It is designed to simplify experimenting with different models and de-risks migrating between providers, reducing vendor lock-in. It also standardizes the serialization of requests to LLM APIs and the interpretation of their responses, ensuring that JSON data is handled in a type-safe manner. With allms you can focus on creating effective prompts and providing the LLM with the right context, instead of worrying about differences in API implementations.
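The type parameter in get_answer::<T> is a plain struct you define. A minimal sketch, assuming the serde and schemars derives used in the crate's examples (the struct name and fields here are illustrative):

use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

// get_answer::<T> parses the model's JSON reply into T, so T derives
// serde's traits plus schemars' JsonSchema (used to describe the expected
// output to the model). Name and fields are illustrative.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
struct TranslationResponse {
    spanish: String,
    french: String,
}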
Supported providers:
- Anthropic
- AWS Bedrock
- Azure OpenAI (via the AzureVersion variant or a Custom variant of OpenAIModels)
- DeepSeek
- Google Gemini
- Mistral
- OpenAI (including a Custom variant)
- Perplexity
- xAI
Authentication notes:
- AWS Bedrock: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION set as per your AWS settings.
- Azure OpenAI: OPENAI_API_URL set to your Azure OpenAI resource endpoint; the endpoint key is passed in the constructor.

Explore the examples directory to see more use cases and how to use different LLM providers and endpoint types.
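For the snippets below, the API key can be read from the environment. A minimal sketch; the helper and the OPENAI_API_KEY variable name are assumptions for illustration, not names mandated by the crate:

use std::env;

// Hypothetical helper (not part of allms): read an API key from the
// environment, panicking with a clear message if it is missing.
fn api_key_from_env(var: &str) -> String {
    env::var(var).unwrap_or_else(|_| panic!("{var} must be set"))
}

// Usage: let api_key = api_key_from_env("OPENAI_API_KEY");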
Using the Completions API with different foundational models:

// Anthropic
let anthropic_answer = Completions::new(AnthropicModels::Claude4Sonnet, &API_KEY, None, None)
    .get_answer::<T>(instructions)
    .await?;

// AWS Bedrock (no API key argument; authenticates via the AWS environment variables)
let aws_bedrock_answer = Completions::new(AwsBedrockModels::NovaLite, "", None, None)
    .get_answer::<T>(instructions)
    .await?;

// DeepSeek
let deepseek_answer = Completions::new(DeepSeekModels::DeepSeekReasoner, &API_KEY, None, None)
    .get_answer::<T>(instructions)
    .await?;

// Google Gemini
let google_answer = Completions::new(GoogleModels::Gemini2_5Flash, &API_KEY, None, None)
    .get_answer::<T>(instructions)
    .await?;

// Mistral
let mistral_answer = Completions::new(MistralModels::MistralMedium3, &API_KEY, None, None)
    .get_answer::<T>(instructions)
    .await?;

// OpenAI (Chat Completions)
let openai_answer = Completions::new(OpenAIModels::Gpt4_1Mini, &API_KEY, None, None)
    .get_answer::<T>(instructions)
    .await?;

// OpenAI (Responses API)
let openai_responses_answer = Completions::new(OpenAIModels::Gpt4_1Mini, &API_KEY, None, None)
    .version("openai_responses")
    .get_answer::<T>(instructions)
    .await?;

// Perplexity
let perplexity_answer = Completions::new(PerplexityModels::SonarPro, &API_KEY, None, None)
    .get_answer::<T>(instructions)
    .await?;

// xAI
let xai_answer = Completions::new(XAIModels::Grok3Mini, &API_KEY, None, None)
    .get_answer::<T>(instructions)
    .await?;
Example:
RUST_LOG=info RUST_BACKTRACE=1 cargo run --example use_completions
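Wired together, one of the calls above might run as the following sketch. Assumptions: a Tokio runtime, anyhow for error handling, the TranslationResponse type sketched earlier, and import paths matching the crate's examples (these may differ between versions):

use allms::{llm_models::OpenAIModels, Completions};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // API key read from the environment, as discussed above.
    let api_key = std::env::var("OPENAI_API_KEY")?;
    let instructions = "Translate 'Hello, world!' into Spanish and French.";

    // Same call shape as the OpenAI example above.
    let answer = Completions::new(OpenAIModels::Gpt4_1Mini, &api_key, None, None)
        .get_answer::<TranslationResponse>(instructions)
        .await?;

    println!("{answer:?}");
    Ok(())
}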
Using the Assistants API to analyze your files with the File and VectorStore capabilities:
// Create a File
let openai_file = OpenAIFile::new(None, &API_KEY)
.upload(&file_name, bytes)
.await?;
// Create a Vector Store
let openai_vector_store = OpenAIVectorStore::new(None, "Name", &API_KEY)
.upload(&[openai_file.id.clone().unwrap_or_default()])
.await?;
// Extract data using Assistant
let openai_answer = OpenAIAssistant::new(OpenAIModels::Gpt4o, &API_KEY)
.version(OpenAIAssistantVersion::V2)
.vector_store(openai_vector_store.clone())
.await?
.get_answer::<T>(instructions, &[])
.await?;
Example:
RUST_LOG=info RUST_BACKTRACE=1 cargo run --example use_openai_assistant
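As in the Completions examples, the T in get_answer::<T> describes what the assistant should extract from the uploaded files. An illustrative sketch; the struct and its fields are assumptions chosen for the example, not part of the crate's API:

use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

// Illustrative extraction target for the uploaded file(s); the fields are
// assumptions for this example, not required by allms.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
struct InvoiceSummary {
    vendor: String,
    total_amount: f64,
    currency: String,
}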
This project is licensed under a dual MIT/Apache-2.0 license. See the LICENSE-MIT and LICENSE-APACHE files for details.