| Field | Value |
|---|---|
| Crates.io | mirror |
| lib.rs | mirror |
| version | 0.4.1 |
| created_at | 2025-07-22 21:26:17.658091+00 |
| updated_at | 2025-07-23 16:17:16.000587+00 |
| description | A Rust library unifying multiple LLM backends. |
| homepage | https://github.com/actualwitch/mirror |
| repository | https://github.com/actualwitch/mirror |
| max_upload_size | |
| id | 1764042 |
| size | 729,240 |
Mirror is a Rust library that lets you use multiple LLM backends in a single project: OpenAI, Anthropic (Claude), Ollama, DeepSeek, Phind, Groq, Google, Cohere, Mistral, and ElevenLabs. With a unified API and builder style (similar to the Stripe experience), you can easily create chat, text completion, and speech-to-text requests without multiplying structures and crates.
The crate exposes a small set of traits (such as ChatProvider and CompletionProvider) to cover most use cases. Simply add mirror to your Cargo.toml:
[dependencies]
mirror = { version = "0.4.0", features = ["openai", "anthropic", "ollama", "deepseek", "phind", "google", "groq", "mistral", "elevenlabs"] }
Mirror includes a command-line tool for easily interacting with different LLM models. You can install it with cargo install mirror and use it as follows:
- `mirror` to start an interactive chat session
- `mirror openai:gpt-4o` to start an interactive chat session with provider:model
- `mirror set OPENAI_API_KEY your_key` to configure your API key
- `mirror default openai:gpt-4` to set a default provider
- `echo "Hello World" | mirror` to pipe input from stdin
- `mirror --provider openai --model gpt-4 --temperature 0.7` for advanced options

To use the API (OpenAI standard format), also enable the api feature in your Cargo.toml:

[dependencies]
mirror = { version = "0.4.0", features = ["openai", "anthropic", "ollama", "deepseek", "phind", "google", "groq", "api", "elevenlabs"] }
More details in the api_example
| Name | Description |
|---|---|
| anthropic_example | Demonstrates integration with Anthropic's Claude model for chat completion |
| anthropic_streaming_example | Anthropic streaming chat example demonstrating real-time token generation |
| chain_example | Shows how to create multi-step prompt chains for exploring programming language features |
| deepseek_example | Basic DeepSeek chat completion example with deepseek-chat models |
| embedding_example | Basic embedding example with OpenAI's API |
| multi_backend_example | Illustrates chaining multiple LLM backends (OpenAI, Anthropic, DeepSeek) together in a single workflow |
| ollama_example | Example of using local LLMs through Ollama integration |
| openai_example | Basic OpenAI chat completion example with GPT models |
| openai_streaming_example | OpenAI streaming chat example demonstrating real-time token generation |
| phind_example | Basic Phind chat completion example with Phind-70B model |
| validator_example | Basic validator example with Anthropic's Claude model |
| evaluation_example | Basic evaluation example with Anthropic, Phind and DeepSeek |
| evaluator_parallel_example | Evaluate multiple LLM providers in parallel |
| google_example | Basic Google Gemini chat completion example with Gemini models |
| google_streaming_example | Google streaming chat example demonstrating real-time token generation |
| google_pdf | Google Gemini chat with PDF attachment |
| google_image | Google Gemini chat with image attachment |
| google_embedding_example | Basic Google Gemini embedding example with Gemini models |
| tool_calling_example | Basic tool calling example with OpenAI |
| google_tool_calling_example | Google Gemini function calling example with complex JSON schema for meeting scheduling |
| json_schema_nested_example | Advanced example demonstrating deeply nested JSON schemas with arrays of objects and complex data structures |
| tool_json_schema_cycle_example | Complete tool calling cycle with JSON schema validation and structured responses |
| unified_tool_calling_example | Unified tool calling with selectable provider - demonstrates multi-turn tool use and tool choice |
| deepclaude_pipeline_example | Basic DeepClaude pipeline example with DeepSeek and Claude |
| api_example | Basic API (OpenAI standard format) example with OpenAI, Anthropic, DeepSeek and Groq |
| api_deepclaude_example | Basic API (OpenAI standard format) example with DeepSeek and Claude |
| anthropic_vision_example | Basic Anthropic vision example with Anthropic |
| openai_vision_example | Basic OpenAI vision example with OpenAI |
| openai_reasoning_example | Basic OpenAI reasoning example with OpenAI |
| anthropic_thinking_example | Anthropic reasoning example |
| elevenlabs_stt_example | Speech-to-text transcription example using ElevenLabs |
| elevenlabs_tts_example | Text-to-speech example using ElevenLabs |
| openai_stt_example | Speech-to-text transcription example using OpenAI |
| openai_tts_example | Text-to-speech example using OpenAI |
| tts_rodio_example | Text-to-speech with rodio example using OpenAI |
| chain_audio_text_example | Example demonstrating a multi-step chain combining speech-to-text and text processing |
| memory_example | Automatic memory integration - LLM remembers conversation context across calls |
| memory_share_example | Example demonstrating shared memory between multiple LLM providers |
| trim_strategy_example | Example demonstrating memory trimming strategies with automatic summarization |
| agent_builder_example | Example of reactive agents cooperating via shared memory, demonstrating creation of LLM agents with roles and conditions |
| openai_web_search_example | Example demonstrating OpenAI web search functionality with location-based search context |
| model_listing_example | Example demonstrating how to list available models from an LLM backend |
| cohere_example | Basic Cohere chat completion example with Command models |
Here's a basic example using OpenAI for chat completion. See the examples directory for other backends (Anthropic, Ollama, DeepSeek, Google, Phind, ElevenLabs), embedding capabilities, and more advanced use cases.
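The snippet below is only a minimal sketch of what such a builder-style chat call could look like. The type and method names (LLMBuilder, LLMBackend, ChatMessage) and the use of tokio are assumptions inferred from the crate's description, not confirmed API; refer to openai_example in the repository for the authoritative version.

```rust
// Hypothetical sketch only: LLMBuilder, LLMBackend, ChatMessage, and the async
// runtime are assumed names/choices, not confirmed parts of mirror's API.
use mirror::builder::{LLMBackend, LLMBuilder};
use mirror::chat::ChatMessage;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure an OpenAI-backed client through the unified builder.
    let llm = LLMBuilder::new()
        .backend(LLMBackend::OpenAI)
        .api_key(std::env::var("OPENAI_API_KEY")?)
        .model("gpt-4o")
        .temperature(0.7)
        .build()?;

    // Send a single-turn chat request via the unified chat interface.
    let messages = vec![ChatMessage::user().content("Hello, world!").build()];
    let response = llm.chat(&messages).await?;
    println!("{}", response);

    Ok(())
}
```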