| Crates.io | cargo-ai |
| lib.rs | cargo-ai |
| version | 0.0.6 |
| created_at | 2025-04-02 18:30:03.805313+00 |
| updated_at | 2025-09-25 12:49:39.933537+00 |
| description | Ship AI Data the right way with Rust. |
| homepage | |
| repository | |
| max_upload_size | |
| id | 1616895 |
| size | 130,354 |
cargo-ai is a lightweight, Rust-based framework for building no-code AI agents using clean, declarative JSON configs. Agents compile into fast, secure binaries, perfect for local machines, servers, and embedded Linux devices, with broader embedded support planned.
Lightweight AI agents. Built in Rust. Declared in JSON.
cargo ai commands
Install Rust & Cargo
Follow the official guide: Install Rust & Cargo
Install cargo-ai
Once Cargo is available, install cargo-ai from source:
cargo install cargo-ai
Verify installation:
cargo ai --help
Hatch a Sample Agent (AdderAgent, a sample "Hello World"-style agent that adds 2 + 2):
Generic form:
cargo ai hatch <YourAgentName> -c <config_file>
Example:
cargo ai hatch AdderAgent -c AdderAgent.json
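The hatch step reads the agent's JSON config. The exact contents of AdderAgent.json aren't reproduced in this README; a rough sketch, assuming the same config fields documented for the WeatherAgent below (prompt, agent_schema, actions) and with invented property names, might look like:
{
  "prompt": "What is 2 + 2? Reply with the numeric result.",
  "agent_schema": {
    "type": "object",
    "properties": {
      "sum": {
        "type": "integer",
        "description": "The result of adding 2 and 2."
      }
    }
  },
  "actions": [
    {
      "name": "print_sum_exec",
      "logic": { "==": [ { "var": "sum" }, 4 ] },
      "run": [
        { "kind": "exec", "program": "echo", "args": ["2 + 2 = 4"] }
      ]
    }
  ]
}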
Run the compiled agent with OpenAI GPT:
Generic form:
./<YourAgentName> -s <server> -m <model> --token <your_api_token>
Example (AdderAgent with GPT-4o):
./AdderAgent -s openai -m gpt-4o --token sk-ABCD1234...
The base cargo ai command provides subcommands for managing agents:
Usage: cargo ai [COMMAND]
Commands:
hatch Hatch a new AI agent project
help Print this message or the help of the given subcommand(s)
Options:
-h, --help Print help
The hatch command creates a new AI agent from a JSON config:
Usage: cargo ai hatch [OPTIONS] <name>
Arguments:
<name> Name of the new agent project
Options:
-c, --config <FILE> Path to the agent configuration file (JSON format)
-h, --help Print help
Once hatched, your agent is compiled as a standalone binary.
Example with AdderAgent (binary name: AdderAgent):
Usage: AdderAgent [OPTIONS]
Options:
-s, --server <server> Client Type - Ollama or OpenAI
-m, --model <model> LLM model to use
--token <token> API token
--timeout_in_sec <timeout> Client timeout request [default: 60]
-h, --help Print help
We'll walk through a WeatherAgent.json example step by step: prompt, expected response schema, optional resource URLs, and actions.
To define a custom agent, you'll use a JSON file that specifies each of these sections.
The steps below show how to create the WeatherAgent, but once defined, running it is as simple as:
# 1. Hatch your WeatherAgent from a JSON config
cargo ai hatch WeatherAgent --config WeatherAgent.json
# 2. Run your WeatherAgent with a server, model, and token
./WeatherAgent -s openai -m gpt-4o --token sk-ABCD1234...
# Expected output if raining tomorrow:
# bring an umbrella
The prompt is the natural language instruction or question you send to the AI/transformer server.
It frames what the agent is supposed to do. You can phrase it as a question, a request, or a directive.
Example from WeatherAgent.json:
"prompt": "Will it rain tomorrow between 9am and 5pm? (Consider true if over 40% for any given hour period.)"
You can edit the text to suit your agent's purpose, for example summarizing an article, checking stock prices, or answering domain-specific questions.
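For instance, a hypothetical stock-checking agent (not part of this repository) might use a prompt such as:
"prompt": "Did the closing price of ACME stock rise today compared to yesterday? Answer true or false."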
The agent_schema describes the shape of the response you expect from the AI/transformer server.
Behind the scenes, this schema is also used to generate the corresponding Rust structures.
You can define fields as:
- boolean – true/false values
- string – text values
- number – floating-point numbers (f64)
- integer – whole numbers (i64)
Example from WeatherAgent.json:
"agent_schema": {
"type": "object",
"properties": {
"raining": {
"type": "boolean",
"description": "Indicates whether it is raining."
}
}
}
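The WeatherAgent only needs a single boolean, but a schema can mix several of the field types listed above. A hypothetical sketch (property names invented for illustration):
"agent_schema": {
  "type": "object",
  "properties": {
    "raining": {
      "type": "boolean",
      "description": "Whether rain is expected."
    },
    "summary": {
      "type": "string",
      "description": "A one-sentence forecast summary."
    },
    "max_probability": {
      "type": "number",
      "description": "Highest hourly precipitation probability, as a percentage."
    },
    "hours_checked": {
      "type": "integer",
      "description": "Number of hourly periods considered."
    }
  }
}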
The resource_urls section lists optional external data sources your agent can use.
Each entry includes:
- url: the API endpoint or resource location
- description: a short explanation of what the resource provides
These URLs are passed to the AI/transformer server alongside the prompt, giving the agent more context to work with.
Example from WeatherAgent.json:
"resource_urls": [
{
"url": "https://worldtimeapi.org/api/timezone/etc/utc",
"description": "Current UTC date and time."
},
{
"url": "https://api.open-meteo.com/v1/forecast?latitude=39.10&longitude=-84.51&hourly=precipitation_probability",
"description": "Hourly precipitation probability for Cincinnati, which is my area."
}
]
Note: The weather forecast URL in the example is configured for Cincinnati (latitude/longitude values). Update these values and the description to match your own location.
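For example, to target New York City instead (roughly latitude 40.71, longitude -74.01), the forecast entry might become:
{
  "url": "https://api.open-meteo.com/v1/forecast?latitude=40.71&longitude=-74.01&hourly=precipitation_probability",
  "description": "Hourly precipitation probability for New York City."
}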
The actions section specifies what the agent should do based on the response.
It follows the JSON Logic format for conditions.
Currently, actions can run a command-line executable (exec).
Future versions will support additional action types.
Example from WeatherAgent.json:
"actions": [
{
"name": "umbrella_hint_exec",
"logic": {
"==": [ { "var": "raining" }, true ]
},
"run": [
{
"kind": "exec",
"program": "echo",
"args": ["bring an umbrella"]
}
]
},
{
"name": "sunglasses_hint_exec",
"logic": {
"==": [ { "var": "raining" }, false ]
},
"run": [
{
"kind": "exec",
"program": "echo",
"args": ["bring sunglasses"]
}
]
}
]
In this example:
- If raining is true, the agent prints "bring an umbrella."
- If raining is false, the agent prints "bring sunglasses."
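Putting the pieces together, the fragments above assemble into one config file along these lines (a sketch of the full WeatherAgent.json; the actual file may order or extend these fields differently):
{
  "prompt": "Will it rain tomorrow between 9am and 5pm? (Consider true if over 40% for any given hour period.)",
  "agent_schema": {
    "type": "object",
    "properties": {
      "raining": {
        "type": "boolean",
        "description": "Indicates whether it is raining."
      }
    }
  },
  "resource_urls": [
    {
      "url": "https://worldtimeapi.org/api/timezone/etc/utc",
      "description": "Current UTC date and time."
    },
    {
      "url": "https://api.open-meteo.com/v1/forecast?latitude=39.10&longitude=-84.51&hourly=precipitation_probability",
      "description": "Hourly precipitation probability for Cincinnati, which is my area."
    }
  ],
  "actions": [
    {
      "name": "umbrella_hint_exec",
      "logic": {
        "==": [ { "var": "raining" }, true ]
      },
      "run": [
        {
          "kind": "exec",
          "program": "echo",
          "args": ["bring an umbrella"]
        }
      ]
    },
    {
      "name": "sunglasses_hint_exec",
      "logic": {
        "==": [ { "var": "raining" }, false ]
      },
      "run": [
        {
          "kind": "exec",
          "program": "echo",
          "args": ["bring sunglasses"]
        }
      ]
    }
  ]
}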