| Crates.io | open-agent-sdk |
| lib.rs | open-agent-sdk |
| version | 0.6.0 |
| created_at | 2025-11-05 05:25:20.885542+00 |
| updated_at | 2025-11-14 15:54:15.000272+00 |
| description | Production-ready Rust SDK for building AI agents with local OpenAI-compatible servers (LMStudio, Ollama, llama.cpp, vLLM). Features streaming, tools, hooks, retry logic, and comprehensive examples. |
| homepage | https://github.com/slb350/open-agent-sdk-rust |
| repository | https://github.com/slb350/open-agent-sdk-rust |
| max_upload_size | |
| id | 1917447 |
| size | 732,359 |
Build production-ready AI agents in Rust using your own hardware
What you can build:
Why local?
How fast? From zero to working agent in under 5 minutes. Rust-native performance (zero-cost abstractions, no GC), fearless concurrency, and production-ready quality with 85+ tests.
Open Agent SDK (Rust) provides a clean, streaming API for working with OpenAI-compatible local model servers. 100% feature parity with the Python SDK—complete with streaming, tool call aggregation, hooks, and automatic tool execution—built on Tokio for high-performance async I/O.
Default endpoints: LM Studio serves its OpenAI-compatible API at http://localhost:1234/v1, and Ollama at http://localhost:11434/v1.
Note on LM Studio: LM Studio is particularly well-tested with this SDK and provides reliable OpenAI-compatible API support. If you're looking for a user-friendly local model server with excellent compatibility, LM Studio is highly recommended.
[dependencies]
open-agent-sdk = "0.6.0"
tokio = { version = "1", features = ["full"] }
futures = "0.3"
serde_json = "1.0"
For development:
git clone https://github.com/slb350/open-agent-sdk-rust.git
cd open-agent-sdk-rust
cargo build
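To smoke-test the build against a running local server (for example LM Studio on its default port), run the bundled minimal example; the example's hard-coded model name and endpoint must match your setup:
# Requires a local OpenAI-compatible server to be running
cargo run --example simple_query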
use open_agent::{query, AgentOptions, ContentBlock};
use futures::StreamExt;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let options = AgentOptions::builder()
.system_prompt("You are a professional copy editor")
.model("qwen2.5-32b-instruct")
.base_url("http://localhost:1234/v1")
.temperature(0.1)
.build()?;
let mut stream = query("Analyze this text...", &options).await?;
while let Some(block) = stream.next().await {
match block? {
ContentBlock::Text(text) => print!("{}", text.text),
_ => {}
}
}
Ok(())
}
use open_agent::{Client, AgentOptions, ContentBlock};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let options = AgentOptions::builder()
.system_prompt("You are a helpful assistant")
.model("qwen3:8b")
.base_url("http://localhost:11434/v1")
.build()?;
let mut client = Client::new(options)?;
client.send("What's the capital of France?").await?;
while let Some(block) = client.receive().await {
match block? {
ContentBlock::Text(text) => {
println!("Assistant: {}", text.text);
}
ContentBlock::ToolUse(tool_use) => {
println!("Tool used: {}", tool_use.name);
// Execute tool and add result
// client.add_tool_result(&tool_use.id, result, Some(&tool_use.name));
}
_ => {}
}
}
Ok(())
}
Define tools using the builder pattern for clean, type-safe function calling:
use open_agent::{tool, Client, AgentOptions, ContentBlock};
use serde_json::json;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Define tools
let add_tool = tool("add", "Add two numbers")
.param("a", "number")
.param("b", "number")
.build(|args| async move {
let a = args["a"].as_f64().unwrap_or(0.0);
let b = args["b"].as_f64().unwrap_or(0.0);
Ok(json!({"result": a + b}))
});
// Enable automatic tool execution (recommended)
let options = AgentOptions::builder()
.system_prompt("You are a helpful assistant with access to tools.")
.model("qwen2.5-32b-instruct")
.base_url("http://localhost:1234/v1")
.tool(add_tool)
.auto_execute_tools(true) // Tools execute automatically
.max_tool_iterations(10) // Safety limit for tool loops
.build()?;
let mut client = Client::new(options)?;
client.send("What's 25 + 17?").await?;
// Simply iterate - tools execute automatically!
while let Some(block) = client.receive().await {
match block? {
ContentBlock::Text(text) => {
println!("Response: {}", text.text);
}
_ => {}
}
}
Ok(())
}
For custom execution logic or result interception:
// Disable auto-execution
let options = AgentOptions::builder()
.system_prompt("You are a helpful assistant with access to tools.")
.model("qwen2.5-32b-instruct")
.base_url("http://localhost:1234/v1")
.tool(add_tool.clone())
.auto_execute_tools(false) // Manual mode
.build()?;
let mut client = Client::new(options)?;
client.send("What's 25 + 17?").await?;
while let Some(block) = client.receive().await {
match block? {
ContentBlock::ToolUse(tool_use) => {
// You execute the tool manually
let result = add_tool.execute(tool_use.input).await?;
// Return result to agent
client.add_tool_result(&tool_use.id, result, Some(&tool_use.name));
// Continue conversation
client.send("").await?;
}
ContentBlock::Text(text) => {
println!("{}", text.text);
}
_ => {}
}
}
Key Features:
See examples/calculator_tools.rs and examples/auto_execution_demo.rs for complete examples.
Send images alongside text to vision-capable models like llava, qwen-vl, or minicpm-v. The SDK handles OpenAI Vision API formatting automatically.
use open_agent::{Client, ContentBlock, ImageBlock, ImageDetail, Message, MessageRole, TextBlock};
// From URL
let msg = Message::user_with_image(
"What's in this image?",
"https://example.com/photo.jpg"
)?;
client.send_message(msg).await?;
// From local file path (NEW!)
let msg = Message::new(
MessageRole::User,
vec![
ContentBlock::Text(TextBlock::new("Describe this photo")),
ContentBlock::Image(ImageBlock::from_file_path("/path/to/photo.jpg")?),
],
);
client.send_message(msg).await?;
// From base64 data
let msg = Message::user_with_base64_image(
"Describe this diagram",
base64_data,
"image/png"
)?;
client.send_message(msg).await?;
// Control detail level for token costs
let msg = Message::user_with_image_detail(
"Analyze the fine details",
"https://example.com/diagram.png",
ImageDetail::High // Low: ~85 tokens, High: variable, Auto: default
)?;
client.send_message(msg).await?;
Supported Image Sources:
ImageBlock::from_url(url) - HTTPS/HTTP URLs
ImageBlock::from_file_path(path) - Local filesystem (automatically encodes as base64); supports .jpg, .jpeg, .png, .gif, .webp, .bmp, .svg
ImageBlock::from_base64(data, mime) - Manual base64 with explicit MIME type
Control image processing costs using ImageDetail levels:
ImageDetail::Low - Lower resolution (typically more cost-effective)
ImageDetail::High - Higher resolution (typically more detailed analysis)
ImageDetail::Auto - Model decides (balanced default)
⚠️ Token Costs Vary by Model:
OpenAI's Vision API uses ~85 tokens (Low) and variable tokens based on dimensions (High), but local models may have completely different token costs—or no token costs for images at all. The ImageDetail setting may even be ignored by some models.
Always benchmark your specific model instead of relying on OpenAI's published values for capacity planning.
use open_agent::{Message, MessageRole, ContentBlock, TextBlock, ImageBlock, ImageDetail};
let msg = Message::new(
MessageRole::User,
vec![
ContentBlock::Text(TextBlock::new("Compare these images:")),
ContentBlock::Image(
ImageBlock::from_url("https://example.com/before.jpg")
.with_detail(ImageDetail::Low)
),
ContentBlock::Image(
ImageBlock::from_url("https://example.com/after.jpg")
.with_detail(ImageDetail::Low)
),
],
);
Key Features:
send_message() API - Send pre-built messages with images via client.send_message(msg).await?
send("text") - Plain-text sends keep working unchanged
See examples/vision_example.rs for comprehensive working examples including local file paths.
Local models have fixed context windows (typically 8k-32k tokens). The SDK provides utilities for manual history management—no silent mutations, you stay in control.
use open_agent::{Client, AgentOptions, estimate_tokens, truncate_messages};
let mut client = Client::new(options)?;
// Long conversation...
for i in 0..50 {
client.send(&format!("Question {}", i)).await?;
while let Some(block) = client.receive().await {
// Process blocks
}
}
// Check token usage
let tokens = estimate_tokens(client.history());
println!("Context size: ~{} tokens", tokens);
// Manually truncate when needed
if tokens > 28000 {
let truncated = truncate_messages(client.history(), 10, true);
*client.history_mut() = truncated;
}
1. Stateless Agents (Best for single-task agents):
// Process each task independently - no history accumulation
for task in tasks {
let mut client = Client::new(options.clone())?;
client.send(&task).await?;
// Client dropped, fresh context for next task
}
2. Manual Truncation (At natural breakpoints):
use open_agent::truncate_messages;
let mut client = Client::new(options)?;
for task in tasks {
client.send(&task).await?;
// Truncate after each major task
let truncated = truncate_messages(client.history(), 5, false);
*client.history_mut() = truncated;
}
3. External Memory (RAG-lite for research agents):
// Store important facts in database, keep conversation context small
let mut database = HashMap::new();
let mut client = Client::new(options)?;
client.send("Research topic X").await?;
// Collect the streamed response into `response`, then save the
// extracted facts (extract_facts is your own summarization helper)
database.insert("topic_x", extract_facts(&response));
// Clear history, query database when needed
let truncated = truncate_messages(client.history(), 0, false);
*client.history_mut() = truncated;
The SDK intentionally does not auto-compact history: truncation is explicit and opt-in, so your conversation state is never mutated silently.
See examples/context_management.rs for complete patterns and usage.
Monitor and control agent behavior at key execution points with zero-cost Rust hooks.
use open_agent::{
AgentOptions, Client, Hooks,
PreToolUseEvent, PostToolUseEvent,
HookDecision,
};
// Security gate - block dangerous operations
let hooks = Hooks::new()
.add_pre_tool_use(|event| async move {
if event.tool_name == "delete_file" {
return Some(HookDecision::block("Delete operations require approval"));
}
Some(HookDecision::continue_())
})
.add_post_tool_use(|event| async move {
// Audit logger - track all tool executions
println!("Tool executed: {} -> {:?}", event.tool_name, event.tool_result);
None
});
// Register hooks in AgentOptions
let options = AgentOptions::builder()
.system_prompt("You are a helpful assistant")
.model("qwen2.5-32b-instruct")
.base_url("http://localhost:1234/v1")
.hooks(hooks)
.build()?;
let mut client = Client::new(options)?;
PreToolUse - Fires before tool execution
Return Some(HookDecision::block(reason)), Some(HookDecision::modify_input(json!({}), reason)), or Some(HookDecision::continue_())
PostToolUse - Fires after tool result added to history
Return None or Some(HookDecision::...)
UserPromptSubmit - Fires before sending prompt to API
Return Some(HookDecision::block(reason)), Some(HookDecision::modify_prompt(text, reason)), or Some(HookDecision::continue_())
hooks.add_pre_tool_use(|event| async move {
if event.tool_name == "file_writer" {
let path = event.tool_input.get("path")
.and_then(|v| v.as_str())
.unwrap_or("");
if !path.starts_with("/tmp/") {
let safe_path = format!("/tmp/sandbox/{}", path.trim_start_matches('/'));
let mut modified = event.tool_input.clone();
modified["path"] = json!(safe_path);
return Some(HookDecision::modify_input(modified, "Redirected to sandbox"));
}
}
Some(HookDecision::continue_())
})
let audit_log = Arc::new(Mutex::new(Vec::new()));
let log_clone = audit_log.clone();
hooks.add_post_tool_use(move |event| {
let log = log_clone.clone();
async move {
log.lock().unwrap().push(format!(
"[{}] {} -> {:?}",
chrono::Utc::now(),
event.tool_name,
event.tool_result
));
None
}
})
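The third event, UserPromptSubmit, has no example in this README. A minimal sketch, assuming the registration method and event field follow the same naming pattern as the other hooks (add_user_prompt_submit and event.prompt are assumptions here; check src/hooks.rs for the exact names):
// ASSUMPTION: method and field names inferred from the add_pre_tool_use
// pattern above; verify against src/hooks.rs before relying on them.
hooks.add_user_prompt_submit(|event| async move {
    if event.prompt.trim().is_empty() {
        // Stop empty prompts before they reach the API
        return Some(HookDecision::block("Empty prompt"));
    }
    // Rewrite the prompt in flight using the documented decision type
    let tagged = format!("[user] {}", event.prompt);
    Some(HookDecision::modify_prompt(tagged, "Tagged prompt"))
})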
See examples/hooks_example.rs and examples/multi_tool_agent.rs for comprehensive patterns.
Cancel long-running operations cleanly without corrupting client state. Perfect for timeouts, user cancellations, or conditional interruptions.
use open_agent::{Client, AgentOptions};
use tokio::time::{timeout, Duration};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let options = AgentOptions::builder()
.system_prompt("You are a helpful assistant.")
.model("qwen2.5-32b-instruct")
.base_url("http://localhost:1234/v1")
.build()?;
let mut client = Client::new(options)?;
client.send("Write a detailed 1000-word essay...").await?;
// Timeout after 5 seconds
match timeout(Duration::from_secs(5), async {
while let Some(block) = client.receive().await {
// Process blocks
}
}).await {
Ok(_) => println!("Completed"),
Err(_) => {
client.interrupt(); // Clean cancellation
println!("Operation timed out!");
}
}
// Client is still usable after interrupt
client.send("Short question?").await?;
// Continue using client...
Ok(())
}
Conditional interruption - stop the stream as soon as the output matches a condition:
let mut full_text = String::new();
while let Some(block) = client.receive().await {
if let ContentBlock::Text(text) = block? {
full_text.push_str(&text.text);
if full_text.contains("error") {
client.interrupt();
break;
}
}
}
Concurrent cancellation with tokio::select! - note that two async blocks cannot both hold a mutable borrow of the client, so decide the race first and interrupt afterward:
use tokio::time::{sleep, Duration};
let cancelled = tokio::select! {
    _ = async {
        while let Some(block) = client.receive().await {
            // Process blocks
        }
    } => false,
    _ = sleep(Duration::from_secs(2)) => true,
};
if cancelled {
    client.interrupt(); // Safe now: the streaming future has been dropped
    println!("Cancelled");
} else {
    println!("Completed");
}
Calling client.interrupt() cancels the in-flight stream cleanly and leaves the client in a consistent state, ready for the next request.
See examples/interrupt_demo.rs for comprehensive patterns.
We've included production-ready agents that demonstrate real-world usage:
Analyzes your staged git changes and writes professional commit messages following conventional commit format.
# Stage your changes
git add .
# Run the agent
cargo run --example git_commit_agent
# Output:
# Found staged changes in 3 file(s)
# Analyzing changes and generating commit message...
#
# Suggested commit message:
# feat(auth): Add OAuth2 integration with refresh tokens
#
# - Implement token refresh mechanism
# - Add secure cookie storage for tokens
# - Update login flow to support OAuth2 providers
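The core of such an agent is small. A minimal sketch of the idea (this is not the shipped example's code; the prompt and plumbing are illustrative), combining std::process::Command with the SDK's query() API:
use open_agent::{query, AgentOptions, ContentBlock};
use futures::StreamExt;
use std::process::Command;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Grab the staged diff (illustrative; the shipped example may differ)
    let diff = String::from_utf8(
        Command::new("git").args(["diff", "--cached"]).output()?.stdout,
    )?;

    let options = AgentOptions::builder()
        .system_prompt("You write conventional commit messages from diffs.")
        .model("qwen2.5-32b-instruct")
        .base_url("http://localhost:1234/v1")
        .build()?;

    // Stream the suggested commit message to stdout
    let mut stream = query(&format!("Write a commit message:\n{}", diff), &options).await?;
    while let Some(block) = stream.next().await {
        if let ContentBlock::Text(text) = block? {
            print!("{}", text.text);
        }
    }
    Ok(())
}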
Features:
examples/log_analyzer_agent.rs
Intelligently analyzes application logs to identify patterns, errors, and provide actionable insights.
# Analyze a log file
cargo run --example log_analyzer_agent -- /var/log/app.log
Features:
These agents demonstrate:
Without open-agent-sdk (raw reqwest):
use reqwest::Client;
let client = Client::new();
let response = client
.post("http://localhost:1234/v1/chat/completions")
.json(&json!({
"model": "qwen2.5-32b-instruct",
"messages": [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_prompt}
],
"stream": true
}))
.send()
.await?;
// Complex parsing of SSE chunks
// Extract delta content
// Handle tool calls manually
// Track conversation state yourself
With open-agent-sdk:
use open_agent::{query, AgentOptions};
let options = AgentOptions::builder()
.system_prompt(system_prompt)
.model("qwen2.5-32b-instruct")
.base_url("http://localhost:1234/v1")
.build()?;
let mut stream = query(user_prompt, &options).await?;
// Clean message types (TextBlock, ToolUseBlock)
// Automatic streaming and tool call handling
Value: Familiar patterns + Less boilerplate + Rust performance
Performance: Zero-cost abstractions mean no runtime overhead. Streaming responses with Tokio delivers throughput comparable to C/C++ while maintaining memory safety.
Safety: Compile-time guarantees prevent data races, null pointer dereferences, and buffer overflows. Your agents won't crash from memory issues.
Concurrency: Fearless concurrency with async/await lets you run multiple agents or handle hundreds of concurrent requests without data races (see the sketch after this list).
Production Ready: Strong type system catches bugs at compile time. Comprehensive error handling with Result types. No surprises in production.
Small Binaries: Standalone executables under 10MB. Deploy anywhere without runtime dependencies.
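As an illustration of the concurrency claim above, a minimal sketch (model name and endpoint are placeholders) that fans out three independent queries on one task with tokio::join!:
use open_agent::{query, AgentOptions, ContentBlock};
use futures::StreamExt;

// Run one query to completion and collect its streamed text
async fn run_one(prompt: &str, options: &AgentOptions) -> Result<String, Box<dyn std::error::Error>> {
    let mut stream = query(prompt, options).await?;
    let mut out = String::new();
    while let Some(block) = stream.next().await {
        if let ContentBlock::Text(text) = block? {
            out.push_str(&text.text);
        }
    }
    Ok(out)
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let options = AgentOptions::builder()
        .model("qwen2.5-32b-instruct")
        .base_url("http://localhost:1234/v1")
        .build()?;

    // Three independent queries run concurrently on one task; no locks needed
    let (a, b, c) = tokio::join!(
        run_one("Summarize topic A", &options),
        run_one("Summarize topic B", &options),
        run_one("Summarize topic C", &options),
    );
    println!("{}\n{}\n{}", a?, b?, c?);
    Ok(())
}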
AgentOptions::builder()
.system_prompt(str) // System prompt
.model(str) // Model name (required)
.base_url(str) // OpenAI-compatible endpoint (required)
.tool(Tool) // Add tools for function calling
.hooks(Hooks) // Lifecycle hooks for monitoring/control
.auto_execute_tools(bool) // Enable automatic tool execution
.max_tool_iterations(usize) // Max tool calls per query in auto mode
.max_tokens(Option<u32>) // Tokens to generate (None = provider default)
.temperature(f32) // Sampling temperature
.timeout(u64) // Request timeout in seconds
.api_key(str) // API key (default: "not-needed")
.build()?
Simple single-turn query function.
pub async fn query(prompt: &str, options: &AgentOptions)
-> Result<ContentStream>
Returns a stream yielding ContentBlock items.
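Because query() returns a Result, wrapping it in a retry loop is straightforward. The SDK ships its own retry logic (src/retry.rs, demonstrated in examples/advanced_patterns.rs); the sketch below is a generic hand-rolled backoff, not that API, and assumes ContentStream is a public export as the signature above suggests:
use open_agent::{query, AgentOptions, ContentStream};
use tokio::time::{sleep, Duration};

// Generic exponential backoff around query(). ASSUMPTION: ContentStream is
// a public export, per the signature above. Requires max_attempts >= 1.
async fn query_with_backoff(
    prompt: &str,
    options: &AgentOptions,
    max_attempts: u32,
) -> Result<ContentStream, Box<dyn std::error::Error>> {
    let mut delay = Duration::from_millis(250);
    for attempt in 1..=max_attempts {
        match query(prompt, options).await {
            Ok(stream) => return Ok(stream),
            Err(e) if attempt == max_attempts => return Err(e.into()),
            Err(e) => {
                eprintln!("attempt {attempt} failed: {e}; retrying in {delay:?}");
                sleep(delay).await;
                delay *= 2; // double the wait between attempts
            }
        }
    }
    unreachable!("max_attempts >= 1 guarantees a return above")
}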
Multi-turn conversation client with tool monitoring.
let mut client = Client::new(options)?;
client.send(prompt).await?;
while let Some(block) = client.receive().await {
// Process ContentBlock items
}
ContentBlock::Text(TextBlock) - Text content from model
ContentBlock::ToolUse(ToolUseBlock) - Tool calls from model
ContentBlock::ToolResult(ToolResultBlock) - Tool execution results
use open_agent::tool;
let my_tool = tool("name", "description")
.param("param_name", "type")
.build(|args| async move {
// Tool implementation
Ok(json!({"result": value}))
});
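Tools can also be driven directly, with no model in the loop, via the execute() method shown earlier in the manual-execution section (the argument object here is a placeholder):
use serde_json::json;

// Invoke the tool by hand, e.g. from a unit test
let output = my_tool.execute(json!({"param_name": "value"})).await?;
println!("{output}");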
Local models (LM Studio, Ollama, llama.cpp):
Cloud-proxied via local gateway:
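Whichever server you target, only base_url and the model name change. A sketch using the default ports noted earlier (model names are placeholders; the llama.cpp port is whatever you pass to llama-server):
// LM Studio (default port 1234)
let lm_studio = AgentOptions::builder()
    .model("qwen2.5-32b-instruct")
    .base_url("http://localhost:1234/v1")
    .build()?;

// Ollama (default port 11434)
let ollama = AgentOptions::builder()
    .model("qwen3:8b")
    .base_url("http://localhost:11434/v1")
    .build()?;

// llama.cpp llama-server (8080 shown; use the port you started it with)
let llama_cpp = AgentOptions::builder()
    .model("local-model")
    .base_url("http://localhost:8080/v1")
    .build()?;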
open-agent-sdk-rust/
├── src/
│ ├── client.rs # query() and Client implementation
│ ├── config.rs # Configuration builder
│ ├── context.rs # Token estimation and truncation
│ ├── error.rs # Error types
│ ├── hooks.rs # Lifecycle hooks
│ ├── lib.rs # Public exports
│ ├── retry.rs # Retry logic with exponential backoff
│ ├── tools.rs # Tool system
│ ├── types.rs # Core types (AgentOptions, ContentBlock, etc.)
│ └── utils.rs # SSE parsing and tool call aggregation
├── examples/
│ ├── simple_query.rs # Basic streaming query
│ ├── calculator_tools.rs # Function calling (manual mode)
│ ├── auto_execution_demo.rs # Automatic tool execution
│ ├── multi_tool_agent.rs # Production agent with 5 tools and hooks
│ ├── hooks_example.rs # Lifecycle hooks patterns
│ ├── context_management.rs # Context management patterns
│ ├── interrupt_demo.rs # Interrupt capability patterns
│ ├── git_commit_agent.rs # Production: Git commit generator
│ ├── log_analyzer_agent.rs # Production: Log analyzer
│ └── advanced_patterns.rs # Retry logic and concurrent requests
├── tests/
│ ├── integration_tests.rs
│ ├── hooks_integration_test.rs # Hooks integration tests
│ ├── auto_execution_test.rs # Auto-execution tests
│ └── advanced_integration_test.rs # Advanced integration tests
├── Cargo.toml
└── README.md
git_commit_agent.rs – Analyzes git diffs and writes professional commit messages
log_analyzer_agent.rs – Parses logs, finds patterns, suggests fixes
multi_tool_agent.rs – Complete production setup with 5 tools, hooks, and auto-execution
simple_query.rs – Minimal streaming query (simplest quickstart)
calculator_tools.rs – Manual tool execution pattern
auto_execution_demo.rs – Automatic tool execution pattern
hooks_example.rs – Lifecycle hooks patterns (security gates, audit logging)
context_management.rs – Manual history management patterns
interrupt_demo.rs – Interrupt capability patterns (timeout, conditional, concurrent)
advanced_patterns.rs – Retry logic and concurrent request handling
# Run all tests
cargo test
# Run with output
cargo test -- --nocapture
# Run specific test
cargo test test_agent_options_builder
Test Coverage:
MIT License - see LICENSE for details.
Status: v0.6.0 Published - 100% feature parity with Python SDK, production-ready
Star this repo if you're building AI agents with local models in Rust!