| Field | Value |
|---|---|
| Crates.io | reson-agentic |
| lib.rs | reson-agentic |
| version | 0.2.0 |
| created_at | 2025-12-06 19:44:03.426906+00 |
| updated_at | 2026-01-25 10:30:56.474967+00 |
| description | Agents are just functions - production-grade LLM agent framework |
| homepage | https://github.com/iantbutler01/reson |
| repository | https://github.com/iantbutler01/reson |
| max_upload_size | |
| id | 1970645 |
| size | 906,614 |
Agents are just functions - production-grade LLM agent framework for Rust.
Highlights include the `#[derive(Tool)]` and `#[agentic]` macros, plus support for `openai:resp:*` and `openrouter:resp:*` model strings.

Add the crate to your `Cargo.toml`:

```toml
[dependencies]
reson-agentic = "0.1"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
```
```rust
use reson_agentic::providers::{GoogleGenAIClient, GenerationConfig, InferenceClient};
use reson_agentic::types::ChatMessage;
use reson_agentic::utils::ConversationMessage;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = GoogleGenAIClient::new("your-api-key", "gemini-2.0-flash");

    let messages = vec![
        ConversationMessage::Chat(ChatMessage::user("Hello!"))
    ];

    let config = GenerationConfig::new("gemini-2.0-flash");
    let response = client.get_generation(&messages, &config).await?;
    println!("{}", response.content);

    Ok(())
}
```
## `#[derive(Tool)]`

Define type-safe tools that automatically generate JSON schemas for LLM function calling:
```rust
use reson_agentic::Tool;
use serde::{Deserialize, Serialize};

/// Search the web for information
#[derive(Tool, Serialize, Deserialize, Debug)]
struct WebSearch {
    /// The search query to execute
    query: String,
    /// Maximum number of results to return
    max_results: Option<u32>,
}

/// Get weather for a location
#[derive(Tool, Serialize, Deserialize, Debug)]
struct GetWeather {
    /// City name or coordinates
    location: String,
    /// Temperature unit: "celsius" or "fahrenheit"
    unit: Option<String>,
}

// Access generated schema
let schema = WebSearch::schema();    // JSON Schema object
let name = WebSearch::tool_name();   // "web_search"
let desc = WebSearch::description(); // "Search the web for information"
```
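For reference, the generated schema for `WebSearch` plausibly looks like the following. This is a sketch inferred from the struct's fields and doc comments; the exact key layout is determined by the derive macro and may differ:

```json
{
  "type": "object",
  "properties": {
    "query": {
      "type": "string",
      "description": "The search query to execute"
    },
    "max_results": {
      "type": "integer",
      "description": "Maximum number of results to return"
    }
  },
  "required": ["query"]
}
```

Note that `Option<T>` fields such as `max_results` would be omitted from `required`.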
The `#[derive(Tool)]` macro supports `String`, `bool`, `i32`/`i64`/`u32`/`u64`, `f32`/`f64`, `Vec<T>`, and `Option<T>` field types.

## `#[agentic]`

The `#[agentic]` macro transforms an async function into an agent. It creates a `Runtime` automatically and injects it into the function; the agent executes when `runtime.run()` or `runtime.run_stream()` is called.

```rust
use reson_agentic::agentic;
use reson_agentic::runtime::{Runtime, ToolFunction};
use reson_agentic::error::Result;

/// Analyze text and answer questions
#[agentic(model = "gemini:gemini-2.0-flash")]
async fn analyze_text(
    text: String,
    question: String,
    runtime: Runtime, // Injected by macro - callers don't pass this
) -> Result<serde_json::Value> {
    // Register tools with the runtime
    runtime.register_tool_with_schema(
        WebSearch::tool_name(),
        WebSearch::description(),
        WebSearch::schema(),
        ToolFunction::Sync(Box::new(|args| {
            let query = args["query"].as_str().unwrap_or("");
            Ok(format!("Search results for: {}", query))
        })),
    ).await?;

    // Run the agent - runtime is mutable internally
    runtime.run(
        Some(&format!("Text: {}\n\nQuestion: {}", text, question)),
        Some("You are a helpful assistant. Use tools when needed."),
        None, // history
        None, // output_type
        None, // temperature
        None, // top_p
        None, // max_tokens
        None, // model override
        None, // api_key override
    ).await
}

// Call the agent - runtime parameter is NOT passed by caller
let result = analyze_text(
    "The quick brown fox...".to_string(),
    "What animal is mentioned?".to_string(),
).await?;
```
Upload and analyze videos using Google's File API:
```rust
use reson_agentic::providers::{GoogleGenAIClient, FileState};
use reson_agentic::types::{ChatRole, MediaPart, MediaSource, MultimodalMessage};

let client = GoogleGenAIClient::new(api_key, "gemini-2.0-flash");

// Upload video
let video_bytes = std::fs::read("video.mp4")?;
let uploaded = client.upload_file(&video_bytes, "video/mp4", Some("my-video")).await?;

// Wait for processing (required for videos)
if uploaded.state == FileState::Processing {
    client.wait_for_file_processing(&uploaded.name, Some(120)).await?;
}

// Create multimodal message
let message = MultimodalMessage {
    role: ChatRole::User,
    parts: vec![
        MediaPart::Video {
            source: MediaSource::FileUri {
                uri: uploaded.uri.clone(),
                mime_type: Some("video/mp4".to_string()),
            },
            metadata: None,
        },
        MediaPart::Text { text: "Describe this video".to_string() },
    ],
    cache_marker: None,
};

// Clean up when done
client.delete_file(&uploaded.name).await?;
```
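`wait_for_file_processing` blocks until the file leaves the `Processing` state (here with a 120-second timeout). If you poll file state yourself instead, a capped exponential backoff keeps request volume down. A minimal std-only sketch; the helper name and the 30-second cap are illustrative, not part of the crate:

```rust
use std::time::Duration;

// Hypothetical polling helper (not part of reson-agentic): capped exponential
// backoff delays of 1s, 2s, 4s, ... up to 30s between status checks.
fn backoff_delay(attempt: u32) -> Duration {
    Duration::from_secs(2u64.saturating_pow(attempt).min(30))
}

fn main() {
    for attempt in 0..4 {
        // In a real loop you would sleep this long, then re-check uploaded.state.
        println!("attempt {} -> wait {:?}", attempt, backoff_delay(attempt));
    }
}
```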
| Type | Formats | Max Size |
|---|---|---|
| Video | MP4, MOV, AVI, WebM, MKV, FLV, 3GP | 2GB |
| Image | JPEG, PNG, GIF, WebP, HEIC | 20MB inline |
| Audio | MP3, WAV, FLAC, AAC, OGG, M4A | 2GB |
| Document | PDF, TXT, HTML, CSS, JS, etc. | Varies |
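The 20MB inline limit for images suggests a simple guard when choosing between inline upload and the File API path. A hypothetical std-only helper (the constant and function are illustrative, not part of the crate):

```rust
// Hypothetical guard (not part of reson-agentic): images above the 20MB inline
// limit from the table above must go through the File API upload path instead.
const INLINE_IMAGE_LIMIT_BYTES: u64 = 20 * 1024 * 1024;

fn must_use_file_api(image_size_bytes: u64) -> bool {
    image_size_bytes > INLINE_IMAGE_LIMIT_BYTES
}

fn main() {
    // A 25MB image exceeds the inline limit; a 1MB image does not.
    println!("{}", must_use_file_api(25 * 1024 * 1024)); // true
    println!("{}", must_use_file_api(1024 * 1024));      // false
}
```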
| Provider | Client | Model Format |
|---|---|---|
| Google Gemini | `GoogleGenAIClient` | `gemini-2.0-flash` |
| Anthropic | `AnthropicClient` | `claude-sonnet-4-20250514` |
| OpenAI | `OAIClient` | `gpt-4o` |
| OpenRouter | `OpenRouterClient` | `anthropic/claude-sonnet-4` |
| AWS Bedrock | `BedrockClient` | `anthropic.claude-sonnet-4-20250514-v1:0` |
| Vertex AI (Claude)* | `GoogleAnthropicClient` | `claude-sonnet-4@20250514` |
\*Requires the `google-adc` feature: `reson-agentic = { version = "0.1", features = ["google-adc"] }`
All clients implement `Clone` for easy use in async contexts.
See the examples directory:
- `video_upload.rs` - Video analysis with Google Gemini and the `#[agentic]` macro
- `simple_tools.rs` - Basic tool registration and execution
- `tool_call_chain.rs` - Multi-turn tool calling
- `dynamic_tool_parsing.rs` - Type-safe tool parsing with Deserializable
- `templating_example.rs` - Prompt templates with minijinja
- `store_usage.rs` - Context storage patterns

Run examples with:

```shell
GOOGLE_GEMINI_API_KEY=your_key cargo run --example video_upload -- video.mp4
```
```toml
[dependencies]
reson-agentic = { version = "0.1", features = ["full"] }
```
| Feature | Description |
|---|---|
| `full` | All features enabled |
| `storage` | Redis + SQLx storage backends |
| `bedrock` | AWS Bedrock support |
| `templating` | Minijinja prompt templates |
| `telemetry` | OpenTelemetry tracing |
| `google-adc` | Google Application Default Credentials (Vertex AI) |
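If you do not need everything in `full`, you can enable only the features you use. For example, an illustrative selection of storage plus templating:

```toml
[dependencies]
# Storage backends plus prompt templating, without the rest of "full"
reson-agentic = { version = "0.1", features = ["storage", "templating"] }
```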
Apache-2.0