| Crates.io | mono-ai-macros |
| lib.rs | mono-ai-macros |
| version | 2.0.0 |
| created_at | 2025-08-06 10:17:44.497512+00 |
| updated_at | 2025-08-06 10:17:44.497512+00 |
| description | procedural macros for tools in mono-ai |
| homepage | |
| repository | https://github.com/unfaded/mono-ai |
| max_upload_size | |
| id | 1783638 |
| size | 12,665 |
A provider-agnostic Rust library for interacting with AI services. Switch between Ollama, Anthropic, OpenAI, and OpenRouter with identical code.
All four providers support chat, streaming, vision, tool calling, and model management through the same interface.
Add library:
cargo add mono-ai mono-ai-macros
Add dependencies:
cargo add tokio --features full
cargo add futures-util serde_json
See the examples/ directory for complete working examples that demonstrate the library's capabilities.
Set API keys via environment variables for the providers you want to use:
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export OPENROUTER_API_KEY="your-openrouter-key"
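In code, each key can be read with std::env::var and handed to the matching constructor shown below; a minimal sketch:
// Fail fast if the key for your chosen provider is missing.
let api_key = std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY not set");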
The examples/ directory contains three comprehensive examples that together demonstrate every library feature. Apart from the constructor, the code stays the same no matter which provider or model you use.
chat: Interactive chat application with a provider selection menu (Ollama, Anthropic, OpenAI, OpenRouter) and automatic model discovery. Implements streaming chat responses, tool calling with custom functions (weather lookup, password generation), conversation history management, and error handling.
chat-vision: Multimodal chat application for image analysis. Takes an image file path as a command-line argument, performs an initial analysis, then enables interactive conversation about the image. Handles base64 encoding, message formatting, conversation context preservation, streaming responses, and tool calls across all vision-capable models and providers. A rough API sketch follows the run commands below.
ollama-management: Model management utility for Ollama instances, covering model pulls from the registry with progress tracking, model inspection (templates, parameters), and lifecycle management. This flow is also sketched below.
Run examples:
cd examples/chat && cargo run
cd examples/chat-vision && cargo run path/to/image.jpg
cd examples/ollama-management && cargo run
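At the API level, the chat-vision flow comes down to a couple of calls. A rough sketch, assuming a constructed client and an existing message list (the response handling is an assumption; see examples/chat-vision for the real code):
// Send the conversation plus one or more image files; the library
// handles base64 encoding internally (encode_image_file is available
// if you need the encoded data yourself).
let image_paths = vec!["path/to/image.jpg".to_string()];
let response = client.send_chat_request_with_images(&messages, image_paths).await?;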
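Likewise, the model pull with progress from ollama-management can be sketched as follows (the shape of the progress updates is an assumption; the example shows the real handling):
use futures_util::StreamExt;
// Stream progress updates while the model downloads (Ollama only);
// each update is Debug-printed here since its exact type is assumed.
let mut progress = client.pull_model_stream("qwen3:8b").await?;
while let Some(update) = progress.next().await {
    println!("{:?}", update);
}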
Besides constructing the client, the rest of the code is provider-agnostic.
// Local Ollama instance
let client = MonoAI::ollama("http://localhost:11434".to_string(), "qwen3:8b".to_string());
// Cloud providers
let client = MonoAI::openai(api_key, "gpt-4".to_string());
let client = MonoAI::anthropic(api_key, "claude-3-sonnet-20240229".to_string());
let client = MonoAI::openrouter(api_key, "anthropic/claude-sonnet-4".to_string());
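With any of these clients, a completion can be streamed the same way. A minimal sketch using generate_stream (the chunk type and error handling are assumptions; see examples/chat for real stream handling):
use futures_util::StreamExt;
// Print tokens as they arrive; the chunk is assumed to be displayable text.
let mut stream = client.generate_stream("Why is the sky blue?").await?;
while let Some(chunk) = stream.next().await {
    print!("{}", chunk?);
}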
The client exposes the following methods:
send_chat_request(&messages) - Streaming chat
send_chat_request_no_stream(&messages) - Complete response
generate(prompt) - Simple completion
generate_stream(prompt) - Streaming completion
send_chat_request_with_images(&messages, image_paths) - Chat with images from files
send_chat_request_with_image_data(&messages, image_data) - Chat with image bytes
encode_image_file(path) - Encode image file to base64
encode_image_data(bytes) - Encode image bytes to base64
add_tool(tool) - Add function tool
handle_tool_calls(tool_calls) - Execute tools and format responses
supports_tool_calls() - Check native tool support
is_fallback_mode() - Check if using XML fallback
process_fallback_response(content) - Parse fallback tool calls
get_available_models() - List available models (works with all providers)
show_model_info(model) - Get model details (Ollama only)
pull_model(model) - Download model (Ollama only)
pull_model_stream(model) - Download with progress (Ollama only)
Use the #[tool] macro to define tool functions:
use mono_ai_macros::tool;
/// The AI will see this doc comment
/// Describe what your tool does and its purpose here
/// The macro automatically provides parameter names, types, and marks all as required
/// You should explain what the function returns and provide usage guidance
#[tool]
fn my_function(param1: String, param2: i32) -> String {
format!("Got {} and {}", param1, param2)
}
// Add to client
client.add_tool(my_function_tool()).await?;
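The my_function_tool() constructor used above is generated by the macro. In a chat loop, tool calls returned by the model can then be executed through handle_tool_calls; a rough sketch, where the response fields and message types are assumptions (see examples/chat for the real flow):
// Hypothetical loop: execute requested tools and feed results back.
let response = client.send_chat_request_no_stream(&messages).await?;
if let Some(tool_calls) = response.tool_calls {
    // Runs the registered tool functions and formats their output
    // as messages for the next request.
    let tool_messages = client.handle_tool_calls(tool_calls).await?;
    messages.extend(tool_messages);
}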
Models without native tool support automatically fall back to XML-based tool calling. To check whether the fallback is active, use the is_fallback_mode function:
if client.is_fallback_mode().await {
println!("Using XML fallback for tools");
}
// Enable debug mode to see raw XML
client.set_debug_mode(true);
MIT License
Contributions welcome! Feel free to submit issues and pull requests.