| Crates.io | autoagents-onnx |
|---|---|
| lib.rs | autoagents-onnx |
| version | 0.3.0 |
| created_at | 2025-09-29 16:33:12.773754+00 |
| updated_at | 2025-11-28 09:28:39.373536+00 |
| description | Minimal edge inference runtime for LLMs |
| homepage | |
| repository | https://github.com/liquidos-ai/AutoAgents |
| max_upload_size | |
| id | 1859853 |
| size | 130,633 |
AutoAgents is a cutting-edge multi-agent framework built in Rust that enables the creation of intelligent, autonomous agents powered by Large Language Models (LLMs) and Ractor. Designed for performance, safety, and scalability, AutoAgents provides a robust foundation for building complex AI systems that can reason, act, and collaborate. With AutoAgents you can create cloud-native agents, edge-native agents, and hybrid models. The framework has a modular architecture with swappable components: the memory layer and executors can be exchanged without much rework. With native WASM compilation support, you can deploy agent orchestration directly to the web browser.
AutoAgents supports a wide range of LLM providers, allowing you to choose the best fit for your use case:
| Provider | Status |
|---|---|
| OpenAI | ✅ |
| OpenRouter | ✅ |
| Anthropic | ✅ |
| DeepSeek | ✅ |
| xAI | ✅ |
| Phind | ✅ |
| Groq | ✅ |
| Google | ✅ |
| Azure OpenAI | ✅ |
Local and edge backends:

| Provider | Status |
|---|---|
| Mistral-rs | ⚠️ Under Development |
| Burn | ⚠️ Experimental |
| ONNX | ⚠️ Experimental |
| Ollama | ✅ |
Provider support is actively expanding based on community needs.
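Because the backend is a type parameter on the LLM builder (see the quick-start further below), switching providers is mostly a matter of changing that type and the credentials. A minimal sketch, reusing only the builder calls shown in this README; any other backend type (e.g. an Anthropic equivalent) is assumed by analogy and its actual path may differ:

```rust
use autoagents::llm::backends::openai::OpenAI;
use autoagents::llm::builder::LLMBuilder;
use std::sync::Arc;

// Build an OpenAI-backed client. To target another provider, swap the
// type parameter for that provider's backend type (path assumed by
// analogy with `backends::openai::OpenAI`) and adjust key and model.
fn build_llm(api_key: String) -> Arc<OpenAI> {
    LLMBuilder::<OpenAI>::new()
        .api_key(api_key)
        .model("gpt-4o")
        .max_tokens(512)
        .temperature(0.2)
        .build()
        .expect("Failed to build LLM")
}
```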
For contributing to AutoAgents or building from source, first install lefthook.

macOS (using Homebrew):

```bash
brew install lefthook
```

Linux/Windows:

```bash
# Using npm
npm install -g lefthook
```
```bash
# Clone the repository
git clone https://github.com/liquidos-ai/AutoAgents.git
cd AutoAgents

# Install Git hooks using lefthook
lefthook install

# Build the project
cargo build --release

# Run tests to verify setup
cargo test --all-features
```
The lefthook configuration will automatically:
- Run `cargo fmt`
- Run `cargo clippy`

The quick-start example below defines a tool with typed input, a structured output type, and a ReAct agent wired to an OpenAI backend:

```rust
use async_trait::async_trait; // needed for the #[async_trait] attribute below
use autoagents::core::agent::memory::SlidingWindowMemory;
use autoagents::core::agent::prebuilt::executor::{ReActAgent, ReActAgentOutput};
use autoagents::core::agent::task::Task;
use autoagents::core::agent::{AgentBuilder, AgentDeriveT, AgentOutputT, DirectAgent};
use autoagents::core::error::Error;
use autoagents::core::tool::{ToolCallError, ToolInputT, ToolRuntime, ToolT};
use autoagents::llm::LLMProvider;
use autoagents::llm::backends::openai::OpenAI;
use autoagents::llm::builder::LLMBuilder;
use autoagents_derive::{agent, tool, AgentHooks, AgentOutput, ToolInput};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::sync::Arc;
#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct AdditionArgs {
#[input(description = "Left Operand for addition")]
left: i64,
#[input(description = "Right Operand for addition")]
right: i64,
}
#[tool(
name = "Addition",
description = "Use this tool to Add two numbers",
input = AdditionArgs,
)]
struct Addition {}
#[async_trait]
impl ToolRuntime for Addition {
async fn execute(&self, args: Value) -> Result<Value, ToolCallError> {
println!("execute tool: {:?}", args);
let typed_args: AdditionArgs = serde_json::from_value(args)?;
let result = typed_args.left + typed_args.right;
Ok(result.into())
}
}
/// Math agent output with Value and Explanation
#[derive(Debug, Serialize, Deserialize, AgentOutput)]
pub struct MathAgentOutput {
#[output(description = "The addition result")]
value: i64,
#[output(description = "Explanation of the logic")]
explanation: String,
#[output(description = "If user asks other than math questions, use this to answer them.")]
generic: Option<String>,
}
#[agent(
name = "math_agent",
description = "You are a Math agent",
tools = [Addition],
output = MathAgentOutput,
)]
#[derive(Default, Clone, AgentHooks)]
pub struct MathAgent {}
impl From<ReActAgentOutput> for MathAgentOutput {
fn from(output: ReActAgentOutput) -> Self {
let resp = output.response;
if output.done && !resp.trim().is_empty() {
// Try to parse as structured JSON first
if let Ok(value) = serde_json::from_str::<MathAgentOutput>(&resp) {
return value;
}
}
// For streaming chunks or unparseable content, create a default response
MathAgentOutput {
value: 0,
explanation: resp,
generic: None,
}
}
}
pub async fn simple_agent(llm: Arc<dyn LLMProvider>) -> Result<(), Error> {
let sliding_window_memory = Box::new(SlidingWindowMemory::new(10));
let agent_handle = AgentBuilder::<_, DirectAgent>::new(ReActAgent::new(MathAgent {}))
.llm(llm)
.memory(sliding_window_memory)
.build()
.await?;
println!("Running simple_agent with direct run method");
let result = agent_handle.agent.run(Task::new("What is 1 + 1?")).await?;
println!("Result: {:?}", result);
Ok(())
}
#[tokio::main]
async fn main() -> Result<(), Error> {
// Check if API key is set
let api_key = std::env::var("OPENAI_API_KEY").unwrap_or("".into());
// Initialize and configure the LLM client
let llm: Arc<OpenAI> = LLMBuilder::<OpenAI>::new()
.api_key(api_key) // Set the API key
.model("gpt-4o") // Use GPT-4o-mini model
.max_tokens(512) // Limit response length
.temperature(0.2) // Control response randomness (0.0-1.0)
.build()
.expect("Failed to build LLM");
simple_agent(llm).await?;
Ok(())
}
```
Command-line interface for running and serving AutoAgents workflows from YAML.

```bash
cargo build --package autoagents-cli --release
```

The binary will be available at `target/release/autoagents`.

Execute a workflow from a YAML file:

```yaml
kind: Direct
name: ResearchAgent
stream: false
description: "A research agent designed to search, retrieve, and summarize information from the web."
workflow:
  agent:
    name: ResearchAgent
    description: "A deep research agent capable of gathering accurate information, summarizing sources, and providing references."
    instructions: |
      You are a research expert. Your task is to find accurate and up-to-date information related to the user's query.
      1. Search for relevant sources on the web.
      2. Extract key insights and summarize them concisely.
      3. Provide references and links to original sources.
      4. Make sure to cross-verify facts and avoid unverified information.
      5. Present the final answer in a structured and clear manner.
    executor: ReAct
    memory:
      kind: sliding_window
      parameters:
        window_size: 100
    model:
      kind: llm
      backend:
        kind: Cloud
        provider: OpenAI
        model_name: gpt-4o-mini
        parameters:
          temperature: 0.2
          max_tokens: 1500
    tools:
      - name: brave_search
    output:
      type: text
  output:
    type: text
```

Run it with:

```bash
autoagents run --workflow workflow.yaml --input "What is Rust?"
```
Start an HTTP server to serve workflows via REST API:

```bash
autoagents serve --workflow workflow.yaml --port 8080
```

Optional arguments:
- `--name <NAME>` - Custom name for the workflow (defaults to filename)
- `--host <HOST>` - Host to bind to (default: 127.0.0.1)
- `--port <PORT>` - Port to bind to (default: 8080)

```bash
# Run a direct workflow
autoagents run -w workflow.yaml -i "Tell me about AI"

# Serve a workflow on a custom port
autoagents serve -w workflow.yaml -p 9000 --name research

# Serve from a directory
autoagents serve --directory ./workflows

# Serve with a custom name
autoagents serve -w workflow.yaml --name my_agent --host 0.0.0.0 --port 3000
```
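The serve command exposes the workflow over HTTP. The exact REST routes are defined by `autoagents-serve` and are not documented here, so the client sketch below is purely illustrative: the `/run` path and request body shape are hypothetical placeholders, not the crate's actual API.

```rust
use serde_json::json;

// Hypothetical client for a served workflow. Both the route ("/run")
// and the body shape ({"input": ...}) are placeholders; check the
// autoagents-serve crate for the real endpoint contract.
// Requires `reqwest` with the "json" feature, plus `tokio`.
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let resp = reqwest::Client::new()
        .post("http://127.0.0.1:8080/run") // hypothetical route
        .json(&json!({ "input": "What is Rust?" })) // hypothetical body
        .send()
        .await?;
    println!("{}", resp.text().await?);
    Ok(())
}
```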
Explore our comprehensive examples to get started quickly:
- Demonstrates various examples like Simple Agent with Tools, Very Basic Agent, Edge Agent, Chaining, Actor Based Model, Streaming, and Adding Agent Hooks.
- Demonstrates how to integrate AutoAgents with the Model Context Protocol (MCP).
- Demonstrates how to integrate AutoAgents with Mistral-rs for local models.
- Demonstrates various design patterns like Chaining, Planning, Routing, Parallel, and Reflection.
- Contains examples demonstrating how to use different LLM providers with AutoAgents.
- A simple agent which can run tools in a WASM runtime.
- A sophisticated ReAct-based coding agent with file manipulation capabilities.
- Compile the agent runtime into a WASM module and load it in a browser web app.
AutoAgents is built with a modular architecture:

```
AutoAgents/
├── crates/
│   ├── autoagents/            # Main library entry point
│   ├── autoagents-core/       # Core agent framework
│   ├── autoagents-llm/        # LLM provider implementations
│   ├── autoagents-toolkit/    # Collection of ready-to-use tools
│   ├── autoagents-burn/       # LLM provider implementations using Burn
│   ├── autoagents-mistral-rs/ # LLM provider implementations using Mistral-rs
│   ├── autoagents-onnx/       # Edge runtime implementation using ONNX
│   ├── autoagents-derive/     # Procedural macros
│   ├── autoagents-cli/        # AutoAgents CLI
│   └── autoagents-serve/      # Runs and serves YAML-based workflows
└── examples/                  # Example implementations
```
For development setup instructions, see the Installation section above.
```bash
# Run all tests
cargo test --all-features

# Run tests with coverage (requires cargo-tarpaulin)
cargo install cargo-tarpaulin
cargo tarpaulin --all-features --out html
```
This project uses LeftHook for Git hooks management. The hooks will automatically run:
- `cargo fmt --check`
- `cargo clippy -- -D warnings`
- `cargo test --all-features --workspace --exclude autoagents-burn`

We welcome contributions! Please see our Contributing Guidelines and Code of Conduct for details.
AutoAgents is designed for high performance.
AutoAgents is dual-licensed; you may choose either license for your use case.
Built with ❤️ by the Liquidos AI team, with special thanks to our amazing community contributors.
⭐ Star us on GitHub | 🐛 Report Issues | 💬 Join Discussions