| Crates.io | agents-core |
| lib.rs | agents-core |
| version | 0.0.30 |
| created_at | 2025-09-29 11:56:32.421942+00 |
| updated_at | 2026-01-09 21:30:43.613273+00 |
| description | Core traits, data models, and prompt primitives for building deep agents. |
| homepage | https://github.com/yafatek/rust-deep-agents-sdk |
| repository | https://github.com/yafatek/rust-deep-agents-sdk |
| max_upload_size | |
| id | 1859446 |
| size | 165,181 |
A Rust implementation of the Deep Agents architecture — production-ready AI agents with planning, sub-agents, and persistent memory.
Quick Start • Features • Examples • Why This SDK? • Contributing • Documentation
Deep Agents is an architecture pioneered by LangChain for building agents that can tackle complex, multi-step tasks. Inspired by applications like Claude Code, Deep Research, and Manus, deep agents go beyond simple ReAct loops with:
| Capability | Description |
|---|---|
| Planning & Task Decomposition | Built-in write_todos tool to break down complex tasks into discrete steps |
| Context Management | File system tools (ls, read_file, write_file, edit_file) to manage large context |
| Sub-Agent Spawning | Delegate work to specialized sub-agents for context isolation |
| Long-Term Memory | Persist memory across conversations and threads |
This SDK brings the Deep Agents architecture to Rust, with type safety, native performance, and memory safety.
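As a taste of how those capabilities surface in code, here is a minimal sketch (assuming the default toolkit, which wires in `write_todos` and the file-system tools; the builder and `handle_message` APIs are shown in full in the Quick Start below):

```rust
use agents_sdk::{ConfigurableAgentBuilder, OpenAiConfig, OpenAiChatModel};
use agents_sdk::state::AgentStateSnapshot;
use std::sync::Arc;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let model = Arc::new(OpenAiChatModel::new(OpenAiConfig::new(
        std::env::var("OPENAI_API_KEY")?,
        "gpt-4o-mini",
    ))?);

    // With the default toolkit, the agent can call write_todos to plan
    // multi-step work and the file tools to stage intermediate context.
    let agent = ConfigurableAgentBuilder::new(
        "You are a research assistant. Plan multi-step work with write_todos \
         and keep intermediate notes in files.",
    )
    .with_model(model)
    .build()?;

    let response = agent
        .handle_message(
            "Compare three approaches to rate limiting and write up a recommendation.",
            Arc::new(AgentStateSnapshot::default()),
        )
        .await?;
    println!("{}", response.content.as_text().unwrap_or("No response"));
    Ok(())
}
```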
Building AI agents shouldn't mean sacrificing performance or type safety. While Python frameworks dominate the AI space, the Rust Deep Agents SDK brings the reliability and speed of Rust to agent development.
| Feature | Rust Deep Agents | LangChain | CrewAI | AutoGen |
|---|---|---|---|---|
| Language | Rust | Python | Python | Python |
| Type Safety | Compile-time | Runtime | Runtime | Runtime |
| Performance | Native speed | Interpreted | Interpreted | Interpreted |
| Memory Safety | Guaranteed | GC-dependent | GC-dependent | GC-dependent |
| Async/Concurrent | Tokio-native | asyncio | asyncio | asyncio |
| Tool Macro | `#[tool]` | Decorators | Decorators | Manual |
| Token Tracking | Built-in | Callbacks | Manual | Manual |
| HITL Workflows | Native | Plugin | Limited | Plugin |
| PII Protection | Automatic | Manual | Manual | Manual |
The SDK is model-agnostic: pass any model string supported by the provider.

- **OpenAI** (`gpt-5.2`, `gpt-4o`, `o1-pro`, `gpt-4o-mini`)
- **Anthropic** (`claude-opus-4.5`, `claude-sonnet-4.5`, `claude-haiku-4.5`)
- **Google Gemini** (`gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.0-flash`)

Other highlights include the `#[tool]` macro for zero-boilerplate tool definitions and the token-efficient TOON format behind the `toon` feature flag.

Add to your `Cargo.toml`:
```toml
[dependencies]
agents-sdk = "0.0.29"
tokio = { version = "1.0", features = ["full"] }
anyhow = "1.0"
```
```rust
use agents_sdk::{ConfigurableAgentBuilder, OpenAiConfig, OpenAiChatModel};
use agents_sdk::state::AgentStateSnapshot;
use agents_macros::tool;
use std::sync::Arc;

// Define a tool with a simple macro
#[tool("Adds two numbers together")]
fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Configure the LLM
    let config = OpenAiConfig::new(
        std::env::var("OPENAI_API_KEY")?,
        "gpt-4o-mini"
    );
    let model = Arc::new(OpenAiChatModel::new(config)?);

    // Build your agent
    let agent = ConfigurableAgentBuilder::new("You are a helpful math assistant.")
        .with_model(model)
        .with_tool(AddTool::as_tool())
        .build()?;

    // Use it
    let response = agent.handle_message(
        "What is 5 + 3?",
        Arc::new(AgentStateSnapshot::default())
    ).await?;
    println!("{}", response.content.as_text().unwrap_or("No response"));
    Ok(())
}
```
Explore comprehensive examples demonstrating SDK capabilities:
| Example | Description | Complexity |
|---|---|---|
| `simple-agent` | Basic agent with OpenAI | Beginner |
| `tool-test` | Custom tools with the `#[tool]` macro | Beginner |
| `anthropic-tools-test` | Using Claude models | Beginner |
| `gemini-tools-test` | Using Gemini models | Beginner |
| `token-tracking-demo` | Monitor usage and costs | Intermediate |
| `event-system-demo` | Real-time event broadcasting | Intermediate |
| `checkpointer-demo` | State persistence | Intermediate |
| `hitl-demo` | Human-in-the-loop basics | Intermediate |
| `toon-format-demo` | Token-efficient TOON format | Intermediate |
| `hitl-financial-advisor` | Production HITL workflow | Advanced |
| `subagent-demo` | Multi-agent delegation | Advanced |
| `streaming-events-demo` | SSE/WebSocket streaming | Advanced |
| `automotive-web-service` | Full-stack web application | Advanced |
```bash
git clone https://github.com/yafatek/rust-deep-agents-sdk.git
cd rust-deep-agents-sdk
export OPENAI_API_KEY="your-key-here"

# Run any example by package name
cargo run -p tool-test
cargo run -p token-tracking-demo
cargo run -p hitl-financial-advisor
```
```text
rust-deep-agents-sdk/
├── crates/
│   ├── agents-core/        # Core traits, messages, state models
│   ├── agents-runtime/     # Execution engine, builders, middleware
│   ├── agents-toolkit/     # Built-in tools and utilities
│   ├── agents-macros/      # #[tool] procedural macro
│   ├── agents-sdk/         # Unified SDK with feature flags
│   ├── agents-aws/         # AWS integrations (DynamoDB, Secrets)
│   └── agents-persistence/ # Redis, PostgreSQL backends
├── examples/               # Working examples and demos
├── docs/                   # Documentation and guides
└── deploy/                 # Terraform modules for AWS
```
The SDK is model-agnostic — you can use any model string supported by the provider's API.
| Provider | Example Models | Status |
|---|---|---|
| OpenAI | `gpt-5.2`, `gpt-4o`, `o1-pro`, `o1-mini`, `gpt-4o-mini` | Stable |
| Anthropic | `claude-opus-4.5`, `claude-sonnet-4.5`, `claude-haiku-4.5` | Stable |
| Google Gemini | `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.0-flash` | Stable |
Note: Model availability depends on your API access. The SDK passes your model string directly to the provider — any model they support will work.
Every provider sits behind the same builder API, so switching models is a one-line change:
```rust
use agents_sdk::{
    ConfigurableAgentBuilder,
    OpenAiConfig, OpenAiChatModel,
    AnthropicConfig, AnthropicMessagesModel,
    GeminiConfig, GeminiChatModel
};
use std::sync::Arc;

// OpenAI
let openai = Arc::new(OpenAiChatModel::new(
    OpenAiConfig::new(api_key, "gpt-4o-mini")?
)?);

// Anthropic Claude
let claude = Arc::new(AnthropicMessagesModel::new(
    AnthropicConfig::new(api_key, "claude-sonnet-4.5", 4096)?
)?);

// Google Gemini
let gemini = Arc::new(GeminiChatModel::new(
    GeminiConfig::new(api_key, "gemini-2.5-pro")?
)?);

// Use any provider with the same builder API
let agent = ConfigurableAgentBuilder::new("You are a helpful assistant")
    .with_model(claude)
    .build()?;
```
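Token usage can be tracked per agent, with optional events and cost logging: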
```rust
use agents_sdk::{ConfigurableAgentBuilder, TokenTrackingConfig, TokenCosts};

let token_config = TokenTrackingConfig {
    enabled: true,
    emit_events: true,
    log_usage: true,
    custom_costs: Some(TokenCosts::openai_gpt4o_mini()),
};

let agent = ConfigurableAgentBuilder::new("You are a helpful assistant")
    .with_model(model)
    .with_token_tracking_config(token_config)
    .build()?;
```
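For human-in-the-loop (HITL) workflows, attach a policy to any tool that should pause for approval: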
```rust
use agents_sdk::{ConfigurableAgentBuilder, HitlAction, HitlPolicy};
use std::collections::HashMap;

let mut policies = HashMap::new();
policies.insert(
    "delete_file".to_string(),
    HitlPolicy {
        allow_auto: false,
        note: Some("File deletion requires security review".to_string()),
    }
);

// Use with_tool_interrupt() for each tool requiring approval
let agent = ConfigurableAgentBuilder::new("You are a helpful assistant")
    .with_model(model)
    .with_tool_interrupt("delete_file", policies.get("delete_file").unwrap().clone())
    .with_checkpointer(checkpointer)
    .build()?;

// Handle interrupts
if let Some(interrupt) = agent.current_interrupt().await? {
    println!("Approval needed for: {}", interrupt.tool_name);
    agent.resume_with_approval(HitlAction::Accept).await?;
}
```
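Conversation state persists through pluggable checkpointers: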
```rust
use agents_sdk::{ConfigurableAgentBuilder, InMemoryCheckpointer};
use agents_persistence::RedisCheckpointer;
use std::sync::Arc;

// Development: in-memory
let checkpointer = Arc::new(InMemoryCheckpointer::new());

// Production: Redis
let checkpointer = Arc::new(
    RedisCheckpointer::new("redis://127.0.0.1:6379").await?
);

let agent = ConfigurableAgentBuilder::new("You are a helpful assistant")
    .with_model(model)
    .with_checkpointer(checkpointer)
    .build()?;

// Save and restore conversation state
let thread_id = "user-123";
agent.save_state(&thread_id).await?;
agent.load_state(&thread_id).await?;
```
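Agent lifecycle events can be streamed anywhere by implementing the `EventBroadcaster` trait: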
```rust
use agents_sdk::{ConfigurableAgentBuilder, EventBroadcaster};
use agents_core::events::AgentEvent;
use async_trait::async_trait;
use std::sync::Arc;

struct WebhookBroadcaster {
    endpoint: String,
}

#[async_trait]
impl EventBroadcaster for WebhookBroadcaster {
    fn id(&self) -> &str { "webhook" }
    fn supports_streaming(&self) -> bool { true }

    async fn broadcast(&self, event: &AgentEvent) -> anyhow::Result<()> {
        match event {
            AgentEvent::AgentStarted(e) => { /* POST to webhook */ }
            AgentEvent::StreamingToken(e) => { /* SSE push */ }
            AgentEvent::ToolCompleted(e) => { /* Log to analytics */ }
            AgentEvent::TokenUsage(e) => { /* Track costs */ }
            _ => {}
        }
        Ok(())
    }
}

let agent = ConfigurableAgentBuilder::new("You are a helpful assistant")
    .with_model(model)
    .with_event_broadcaster(Arc::new(WebhookBroadcaster {
        endpoint: "https://api.example.com/events".into()
    }))
    .build()?;
```
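Specialized sub-agents are declared with `SubAgentConfig` and attached to a coordinator: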
```rust
use agents_sdk::{ConfigurableAgentBuilder, SubAgentConfig};

let researcher = SubAgentConfig::new(
    "researcher",
    "Searches and analyzes information",
    "You are a research specialist.",
);

let writer = SubAgentConfig::new(
    "writer",
    "Creates well-written content",
    "You are a content writer.",
);

// Use with_subagent_config() to add sub-agents
let agent = ConfigurableAgentBuilder::new("You are a project coordinator.")
    .with_model(model)
    .with_subagent_config([researcher, writer])
    .with_auto_general_purpose(true)
    .build()?;
```
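Delegation then happens inside the agent loop; you still talk only to the coordinator. A minimal usage sketch, continuing from the snippet above (with `Arc` and `AgentStateSnapshot` imported as in the Quick Start):

```rust
// The coordinator decides when to hand work to `researcher` or `writer`.
let response = agent
    .handle_message(
        "Research current Rust async runtimes, then write a one-page summary.",
        Arc::new(AgentStateSnapshot::default()),
    )
    .await?;
println!("{}", response.content.as_text().unwrap_or("No response"));
```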
TOON (Token-Oriented Object Notation) is a compact data format that reduces token usage by 30-60% compared to JSON. Enable it for cost savings and faster responses:
```rust
use agents_sdk::{ConfigurableAgentBuilder, PromptFormat};

let agent = ConfigurableAgentBuilder::new("You are a helpful assistant")
    .with_model(model)
    .with_prompt_format(PromptFormat::Toon) // Enable TOON format
    .build()?;
```
You can also encode data directly:
```rust
use agents_core::toon::ToonEncoder;
use serde_json::json;

let encoder = ToonEncoder::new();
let data = json!({
    "users": [
        {"id": 1, "name": "Alice", "role": "admin"},
        {"id": 2, "name": "Bob", "role": "user"}
    ]
});

// TOON output is ~40% smaller than JSON
let toon_string = encoder.encode(&data)?;
// users[2]{id,name,role}:
//   1,Alice,admin
//   2,Bob,user
```
See the TOON Format Guide and toon-format-demo for more details.
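Feature flags keep the dependency tree lean; enable only what you need: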
```toml
[dependencies]
# Default: includes toolkit
agents-sdk = "0.0.29"

# Minimal: core only
agents-sdk = { version = "0.0.29", default-features = false }

# With persistence
agents-sdk = { version = "0.0.29", features = ["redis"] }
agents-sdk = { version = "0.0.29", features = ["postgres"] }

# With AWS
agents-sdk = { version = "0.0.29", features = ["aws", "dynamodb"] }

# With TOON format (token optimization)
agents-sdk = { version = "0.0.29", features = ["toon"] }

# Everything
agents-sdk = { version = "0.0.29", features = ["full"] }
```
| Feature | Description |
|---|---|
| `toolkit` | Built-in tools (default) |
| `toon` | TOON format for token-efficient prompts |
| `redis` | Redis persistence backend |
| `postgres` | PostgreSQL persistence backend |
| `dynamodb` | DynamoDB persistence backend |
| `aws` | AWS integrations (Secrets Manager, etc.) |
| `full` | All features enabled |
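To build and test from source: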
```bash
git clone https://github.com/yafatek/rust-deep-agents-sdk.git
cd rust-deep-agents-sdk

cargo fmt
cargo clippy --all-targets --all-features -- -D warnings
cargo test --all
cargo build --release
```
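Set provider API keys before running the examples: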
```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
export TAVILY_API_KEY="..."   # Optional: for web search
```
We welcome contributions of all kinds. Please read our Contributing Guide to get started.
New contributors can look for issues labeled `good first issue`.
The roadmap covers additional providers and features; see the full roadmap for details.
This project is licensed under the Apache License 2.0. See the LICENSE file for details.
Built with Rust for production AI systems