| Crates.io | reagent-rs |
| lib.rs | reagent-rs |
| version | 0.2.5 |
| created_at | 2025-08-18 17:30:37.60329+00 |
| updated_at | 2025-09-24 12:57:57.450173+00 |
| description | A Rust library for building AI agents with MCP & custom tools |
| homepage | https://github.com/VakeDomen/reagent |
| repository | https://github.com/VakeDomen/reagent |
| max_upload_size | |
| id | 1800816 |
| size | 247,566 |
Reagent is a Rust library for building and running AI agents that interact with LLMs. It abstracts away provider-specific details (currently Ollama and OpenRouter), provides a consistent API for prompting, structured output, and tool use, and lets you define fully custom invocation flows.
You can add the library to your project from crates.io:
cargo add reagent-rs
or directly from GitHub:
[dependencies]
reagent = { git = "https://github.com/VakeDomen/Reagent" }
Reagent is experimental and provider support may change; not all provider features are unified across providers.
- Multiple providers: Ollama (default) and OpenRouter (experimental)
- Structured output via JSON Schema (manual or via schemars)
- Tooling: custom tools via ToolBuilder, plus MCP servers (SSE, stdio, streamable HTTP)
- Flows: prebuilt invocation flows or fully custom ones
- Prompt templates with runtime or dynamic data sources
- Notifications: subscribe to agent events like token streaming, tool calls, errors, etc.
use std::error::Error;
use reagent_rs::AgentBuilder;
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let mut agent = AgentBuilder::default()
.set_model("qwen3:0.6b")
.set_system_prompt("You are a helpful assistant.")
.build()
.await?;
let resp = agent.invoke_flow("Hello!").await?;
println!("Agent response: {}", resp.content.unwrap_or_default());
Ok(())
}
The AgentBuilder uses a builder pattern. Only the model is required; everything else has defaults.
let agent = AgentBuilder::default()
.set_model("qwen3:0.6b")
.set_system_prompt("You are a helpful assistant.")
.set_temperature(0.7)
.set_num_ctx(2048)
.build()
.await?;
By default, Reagent assumes an Ollama instance running locally.
let agent = AgentBuilder::default()
.set_model("qwen3:0.6b")
.set_provider(Provider::Ollama)
.set_base_url("http://localhost:11434")
.build()
.await?;
To use OpenRouter:
let agent = AgentBuilder::default()
.set_model("qwen3:0.6b")
.set_provider(Provider::OpenRouter)
.set_api_key("YOUR_KEY")
.build()
.await?;
Note: some providers require provider-specific response format settings.
You can ask the model to return JSON that matches a schema.
Manual schema:
let agent = AgentBuilder::default()
.set_model("qwen3:0.6b")
.set_response_format(r#"{
"type":"object",
"properties":{
"windy":{"type":"boolean"},
"temperature":{"type":"integer"},
"description":{"type":"string"}
},
"required":["windy","temperature","description"]
}"#)
.build()
.await?;
From a struct via schemars:
use schemars::{schema_for, JsonSchema};

#[derive(JsonSchema)]
struct Weather {
windy: bool,
temperature: i32,
description: String
}
let agent = AgentBuilder::default()
.set_model("qwen3:0.6b")
.set_response_format(serde_json::to_string_pretty(&schema_for!(Weather))?)
.build()
.await?;
To get parsed output directly:
let resp: Weather = agent.invoke_flow_structured_output("What's the weather?").await?;
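Assuming Weather also derives serde::Deserialize (typed parsing usually requires it), the result is a plain struct you can use directly:
// resp is a typed Weather value, not raw JSON
if resp.windy {
    println!("Windy, {} °C: {}", resp.temperature, resp.description);
}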
Tools let the model call custom functions. Define an executor function (or closure), wrap it in a ToolBuilder, and register it with the agent.
async fn get_weather(args: Value) -> Result<String, ToolExecutionError> {
// do your thing
Ok(r#"{"windy":false,"temperature":18}"#.into())
}
let tool = ToolBuilder::new()
.function_name("get_weather")
.add_required_property("location", "string", "City name")
.executor_fn(get_weather)
.build()?;
let agent = AgentBuilder::default()
.set_model("qwen3:0.6b")
.add_tool(tool)
.add_mcp_server(McpServerType::sse("http://localhost:8000/sse"))
.add_mcp_server(McpServerType::stdio("npx -y @<something/memory>"))
.add_mcp_server(McpServerType::streamable_http("http://localhost:8001/mcp"))
.build()
.await?;
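With the tool registered, a plain invocation lets the model decide when to call it; a small sketch reusing invoke_flow from the quickstart (the prompt is illustrative):
let resp = agent.invoke_flow("Is it windy in Ljubljana right now?").await?;
println!("{}", resp.content.unwrap_or_default());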
Flows control how the agent is invoked.
Prebuilt flows include reply, reply_without_tools, call_tools, and plan_and_execute. You can also define a fully custom flow:
async fn my_custom_flow(agent: &mut Agent, prompt: String) -> Result<Message, AgentError> {
    // custom logic
    Ok(Message::assistant("Hello"))
}
let agent = AgentBuilder::default()
.set_model("qwen3:0.6b")
.set_flow(flow!(my_custom_flow))
.build()
.await?;
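Invoking the agent then runs your custom flow instead of the default one:
let resp = agent.invoke_flow("Hello!").await?;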
Define prompts with placeholders:
let template = Template::simple("Hello {{name}}!");
let agent = AgentBuilder::default()
.set_model("qwen3:0.6b")
.set_template(template)
.build()
.await?;
let prompt_data = HashMap::from([
("name", "Peter"),
]);
let resp = agent.invoke_flow_with_template(prompt_data).await?;
Pass a HashMap of values to invoke_flow_with_template.
You can also provide a TemplateDataSource that injects dynamic values at invocation time.
You can receive events from the agent using build_with_notification:
let (agent, mut rx) = AgentBuilder::default()
.set_model("qwen3:0.6b")
.set_stream(true)
.build_with_notification()
.await?;
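You can then drive the agent in one task and consume events from the other end; a minimal sketch, assuming rx is a Tokio mpsc receiver (as the mut binding suggests) and that notification values implement Debug:
let mut agent = agent; // invoke_flow takes &mut self
let handle = tokio::spawn(async move {
    agent.invoke_flow("Hello!").await
});

// The channel closes once the agent (and its sender) is dropped.
while let Some(notification) = rx.recv().await {
    println!("event: {:?}", notification);
}

let resp = handle.await??; // JoinError first, then the agent's own error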
For quick experiments, StatelessPrebuild and StatefullPrebuild offer presets for some common flow patterns. Stateful versions keep conversation history; stateless ones reset on each call.
Examples:
let agent = StatelessPrebuild::reply()
.set_model("qwen3:0.6b")
.build()
.await?;
let agent = StatefullPrebuild::call_tools()
.set_model("qwen3:0.6b")
.build()
.await?;
License: MIT