| Field | Value |
|---|---|
| Crates.io | rs-agent |
| lib.rs | rs-agent |
| version | 1.0.1 |
| created_at | 2025-11-27 18:40:13.895767+00 |
| updated_at | 2025-11-28 08:43:08.626176+00 |
| description | Lattice AI Agent Framework for Rust - Build production AI agents with clean abstractions |
| repository | https://github.com/Protocol-Lattice/rs-agent |
| id | 1954232 |
| size | 344,906 |
Rust implementation of the Lattice AI Agent Framework. rs-agent gives you a production-ready agent orchestrator with pluggable LLMs, tool calling (including UTCP), retrieval-capable memory, CodeMode execution, and multi-agent coordination.
- `Agent` orchestrates LLM calls, memory, tool invocations, file attachments, and TOON encoding.
- Plug in any model by implementing the `LLM` trait.
- Implement the `Tool` trait once, register it in the `ToolCatalog`, or bridge external tools via UTCP.
- `SessionMemory` offers recent-context windowing, MMR reranking, and optional Postgres/Qdrant/Mongo stores.
- Expose `codemode.run_code` as a tool, or let the CodeMode orchestrator route natural language into tool chains.

Install straight from the repository:

```sh
cargo add rs-agent --git https://github.com/Protocol-Lattice/rs-agent
```

Or disable the defaults and pick only what you need:

```sh
cargo add rs-agent --git https://github.com/Protocol-Lattice/rs-agent \
    --no-default-features --features "ollama"
```

The default features are `gemini` and `memory`; enable other providers/backends via the feature flags listed below. Set `GOOGLE_API_KEY` (or `GEMINI_API_KEY`) for Gemini, or swap in any LLM implementation you control.
```rust
use rs_agent::{Agent, AgentOptions, GeminiLLM};
use rs_agent::memory::{InMemoryStore, SessionMemory};
use std::sync::Arc;

#[tokio::main]
async fn main() -> rs_agent::Result<()> {
    tracing_subscriber::fmt::init();

    let model = Arc::new(GeminiLLM::new("gemini-2.0-flash")?);
    let memory = Arc::new(SessionMemory::new(Box::new(InMemoryStore::new()), 8));

    let agent = Agent::new(model, memory, AgentOptions::default())
        .with_system_prompt("You are a concise Rust assistant.");

    let reply = agent.generate("demo-session", "Why use Rust for agents?").await?;
    println!("{reply}");
    Ok(())
}
```
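To bring your own model, implement the `LLM` trait and pass it to `Agent::new` in place of `GeminiLLM`. The sketch below is illustrative only: the real trait's method names and types live in the crate docs, and a single async text-in/text-out method is assumed here.

```rust
use async_trait::async_trait;

// Illustrative only: `rs_agent::LLM` may expose different methods,
// message types, or streaming hooks; adapt this to the actual trait.
struct StubLLM;

#[async_trait]
impl rs_agent::LLM for StubLLM {
    // Hypothetical signature: one async call mapping a prompt to a reply.
    async fn generate(&self, prompt: &str) -> rs_agent::Result<String> {
        // A real implementation would call your inference backend here.
        Ok(format!("stub reply to: {prompt}"))
    }
}
```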
Register custom tools and they become part of the agent's context and invocation flow.
```rust
use rs_agent::{Tool, ToolRequest, ToolResponse, ToolSpec, AgentError};
use serde_json::json;
use std::collections::HashMap;
use async_trait::async_trait;

struct EchoTool;

#[async_trait]
impl Tool for EchoTool {
    fn spec(&self) -> ToolSpec {
        ToolSpec {
            name: "echo".into(),
            description: "Echoes the provided input".into(),
            input_schema: json!({
                "type": "object",
                "properties": { "input": { "type": "string" } },
                "required": ["input"]
            }),
            examples: None,
        }
    }

    async fn invoke(&self, req: ToolRequest) -> rs_agent::Result<ToolResponse> {
        let input = req
            .arguments
            .get("input")
            .and_then(|v| v.as_str())
            .ok_or_else(|| AgentError::ToolError("missing input".into()))?;
        Ok(ToolResponse {
            content: input.to_string(),
            metadata: None,
        })
    }
}

// After constructing an `agent` (see Quickstart), register and call the
// tool from the same async context:
let catalog = agent.tools();
catalog.register(Box::new(EchoTool))?;

let mut args = HashMap::new();
args.insert("input".to_string(), json!("hi"));
let response = agent.invoke_tool("session", "echo", args).await?;
```
Bridged UTCP tools sit alongside native ones in the `ToolCatalog`. Your agent can also self-register as a UTCP provider for agent-as-a-tool scenarios (see `examples/utcp_integration.rs`).

CodeMode exposes `codemode.run_code` and an optional CodeMode orchestrator that turns natural language into tool chains or executable snippets. Integration patterns live in `src/agent/codemode.rs` and the agent tests.

`SessionMemory` keeps per-session short-term context with token-aware trimming, and MMR reranking (`mmr_rerank`) improves retrieval diversity when using embeddings. Attach files (`generate_with_files`) and encode results compactly with `generate_toon`.
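The file and TOON calls might look like the sketch below. Only the method names (`generate_with_files`, `generate_toon`) come from this README; the argument shapes are assumptions, so check the crate docs for the real signatures.

```rust
// Assumed call shapes, for illustration only: the method names are real,
// the parameters are guesses. Runs inside the async `main` from Quickstart.
let summary = agent
    .generate_with_files("demo-session", "Summarize these notes", vec!["notes.md".into()])
    .await?;
println!("{summary}");

// Hypothetical TOON-encoded reply for compact downstream consumption.
let toon = agent.generate_toon("demo-session", "List this crate's features").await?;
println!("{toon}");
```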
Run the included examples to see common patterns:

```sh
cargo run --example quickstart
cargo run --example tool_catalog
cargo run --example memory_checkpoint
cargo run --example multi_agent
cargo run --example utcp_integration
```

| Feature | Description | Default |
|---|---|---|
| `gemini` | Google Gemini LLM via `google-generative-ai-rs` | Yes (default) |
| `ollama` | Local Ollama models via `ollama-rs` | No |
| `anthropic` | Anthropic Claude via `anthropic-sdk` | No |
| `openai` | OpenAI-compatible models via `async-openai` | No |
| `memory` | Embeddings via `fastembed`; enables memory utilities | Yes (default) |
| `postgres` | Postgres store with pgvector | No |
| `qdrant` | Qdrant vector store | No |
| `mongodb` | MongoDB-backed memory store | No |
| `all-providers` | Enable all LLM providers | No |
| `all-memory` | Enable all memory backends | No |
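The same feature selection can live in `Cargo.toml` directly; for example, an OpenAI setup with a Qdrant-backed store (feature names taken from the table above):

```toml
[dependencies]
rs-agent = { git = "https://github.com/Protocol-Lattice/rs-agent", default-features = false, features = [
    "openai",
    "memory",
    "qdrant",
] }
```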
| Variable | Purpose |
|---|---|
| `GOOGLE_API_KEY` or `GEMINI_API_KEY` | Required for `GeminiLLM` |
| `ANTHROPIC_API_KEY` | Required for `AnthropicLLM` |
| `OPENAI_API_KEY` | Required for `OpenAILLM` |
| `OLLAMA_HOST` (optional) | Override the Ollama host if not localhost |
| Database connection strings | Supply to `PostgresStore::new`, `QdrantStore::new`, or `MongoStore::new` when those features are enabled |
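A minimal shell setup for the default Gemini provider, using a placeholder key:

```sh
export GOOGLE_API_KEY="your-key-here"
cargo run --example quickstart
```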
Issues and PRs are welcome! Please format (`cargo fmt`), lint (`cargo clippy`), and add tests where it makes sense.
Apache 2.0. See LICENSE.