| Field | Value |
|---|---|
| Crates.io | sai-cli |
| lib.rs | sai-cli |
| version | 1.0.0 |
| created_at | 2025-12-07 19:06:50.772238+00 |
| updated_at | 2025-12-08 12:25:08.574201+00 |
| description | Sai-cli ('sai') — Tell the shell what you want, not how to do it. Natural-language to safe shell command generator. |
| homepage | |
| repository | https://github.com/soyrochus/sai |
| max_upload_size | |
| id | 1972097 |
| size | 174,850 |
Sai-cli ('sai') is a small, fast, Rust-based command-line tool that transforms natural language into safe, real shell commands, using an LLM — while enforcing strict guardrails to keep execution safe and predictable.
It is designed for Unix-like environments such as Linux and macOS, but it builds cleanly on Windows as well.

Sai-cli takes two things:

- A prompt configuration (YAML) that whitelists the tools it may use (jq, grep, sed, cat, …)
- A natural-language description of what you want

And it produces:

- Exactly one shell command, checked against safety rules (shell operators are blocked unless you pass --unsafe)

Examples (runnable from the repo root with prompts/standard-tools.yml):
```shell
sai prompts/standard-tools.yml "Show where the trait CommandGenerator is defined in src"
>> rg 'trait CommandGenerator' src

sai prompts/standard-tools.yml "List every Rust source file under src"
>> find src -type f -name '*.rs'

sai prompts/standard-tools.yml "Count lines in src/app.rs"
>> wc -l src/app.rs
```
You tell the shell what you want, and sai-cli figures out how using the tools you have whitelisted.
Go to: https://github.com/soyrochus/sai/releases
Download the binary for your platform:
| OS | File |
|---|---|
| Linux | sai-x86_64-unknown-linux-gnu |
| macOS | sai-aarch64-apple-darwin or sai-x86_64-apple-darwin |
| Windows | sai.exe |
Make it executable and put it in your PATH:
Example (Linux):

```shell
chmod +x sai
sudo mv sai /usr/local/bin/
```
That’s it.
If you already have Rust tooling set up, install directly from crates.io:
```shell
cargo install sai-cli
```
This builds the crate sai-cli and drops the sai binary into ~/.cargo/bin (make sure that directory is on your PATH). Afterwards, verify the install with:

```shell
sai --help
```
Sai-cli loads its global config from the OS-standard location:
| OS | Path |
|---|---|
| Linux | ~/.config/sai/config.yaml |
| macOS | ~/Library/Application Support/sai/config.yaml |
| Windows | %APPDATA%\sai\config.yaml |
This file contains your AI provider settings (provider, API key, model) and, optionally, a default prompt with its whitelisted tools.
You can bootstrap sensible defaults by running:
```shell
sai --init
```
This writes a starter config with placeholder API credentials and a curated set of standard Unix
tools (grep, find, awk, sed, sort, wc, etc.) pre-configured. You can immediately start using sai-cli
after updating your API key, or add more tools later with sai --add-prompt ... or your own YAML edits.
An example config.yaml:

```yaml
ai:
  provider: openai
  openai_api_key: "replace_with_your_key"
  openai_model: "gpt-5.1-mini"

default_prompt:
  meta_prompt: |
    You generate safe shell commands from natural language.
    Output exactly ONE line with the command to execute.
    Do not include markdown, explanations, or extra text.
  tools:
    - name: jq
      config: |
        Tool: jq
        Role: JSON processor.
        Rules:
        - Commands must start with "jq".
        - Do not use pipes, redirections or shell features.
        - Use jq filters to transform the JSON.
```
Environment variables always override AI configuration.
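The override behaves like ordinary shell default-expansion; a minimal sketch of the precedence (SAI_MODEL is an illustrative variable name, not necessarily the one sai-cli actually reads):

```shell
# Env-over-config precedence sketch; SAI_MODEL is an illustrative name.
config_model="gpt-5.1-mini"            # value read from config.yaml
model="${SAI_MODEL:-$config_model}"    # an exported env var wins when set
echo "$model"
```

With SAI_MODEL unset this prints the config value; exporting SAI_MODEL before the call makes the environment win.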
Sai-cli includes a comprehensive hierarchical help system accessible directly from the command line. You can discover all features and concepts without needing to reference external documentation.
```shell
# Show top-level overview and common usage
sai help

# List all available help topics
sai help topics

# Get detailed help on a specific topic
sai help config
sai help scope
sai help explain
```
The help system covers configuration, context scoping, explain mode, and the other topics listed by sai help topics.
Each topic provides detailed explanations, examples, and usage patterns. The help system is designed to be self-contained and progressively discoverable: start with sai help and explore from there.
Uses the default prompt in the global config:

```shell
sai "Show all active users from users.json"
```

Explicit config file:

```shell
sai mytools.yaml "Find lines containing ERROR"
```

Peek at sample data with -p so the model sees what it is working on:

```shell
sai -p users.json "List active users"
```
This lets the LLM infer the structure of the data (truncated to 16 KB per file).
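The 16 KB cap behaves like a byte-truncating head over each peeked file; roughly (an illustrative sketch, not sai-cli's actual code):

```shell
# Peek-style sampling sketch: take at most 16 KB from the front of a file.
tmp=$(mktemp -d)
printf 'demo json content\n' > "$tmp/sample.json"   # stand-in data file
head -c $((16 * 1024)) "$tmp/sample.json"           # what the LLM would see
rm -r "$tmp"
```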
Provide a path or glob so the LLM focuses on the right files:
```shell
sai -s "logs/**/*.json" "Summarize fatal errors"
```
You can use any descriptive text (e.g., "only PDF reports"), and the hint is passed as a separate message alongside the natural language prompt.
Special case: -s . injects a non-recursive listing of the current working directory into the LLM context (bounded by an internal size limit). This helps the model understand what files exist without you typing the names.
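The non-recursive listing that -s . injects is roughly equivalent to the following (illustrative sketch using a throwaway demo directory):

```shell
# Roughly what "-s ." adds to the LLM context: names in the cwd, no recursion.
tmp=$(mktemp -d) && touch "$tmp/a.log" "$tmp/b.json"   # demo directory
( cd "$tmp" && find . -maxdepth 1 -mindepth 1 | sort ) # prints ./a.log and ./b.json
rm -r "$tmp"
```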
Unsafe mode (-u / --unsafe) allows pipes, redirects, and other shell features, and always forces confirmation:

```shell
sai -u "Combine these two results and then sort"
```

Confirmation mode (-c) asks before executing anything:

```shell
sai -c "Show me all user ids"
```

Confirmation shows the generated command and waits for a [y/N] answer before running it.
Get a detailed explanation of what the generated command will do before executing:
```shell
sai -e "Find all Python files modified today"
```
This mode generates the command, asks the LLM to explain what it will do, and always prompts for confirmation (as with -c). It composes with --scope, --peek, and --unsafe.

Example output:
```text
Generated command:
  find . -name '*.py' -mtime 0

Explanation:
This command searches for Python files (*.py) in the current directory
and subdirectories that were modified within the last 24 hours.

- find . : Start search from current directory
- -name '*.py' : Match files ending in .py
- -mtime 0 : Modified less than 24 hours ago

Execute this command? [y/N]
```
Analyze the most recent sai invocation to understand what happened:
```shell
sai --analyze
```
This mode reads the most recent entry from the history log and asks the LLM to explain what happened.

It is useful for understanding failures, reviewing what a command actually did, and getting suggestions for a better next invocation.
Example:

```text
$ sai "count lines in all rust files"
# ... command fails ...

$ sai --analyze
Analyzing last sai-cli invocation...

The command attempted to run 'wc -l *.rs' but failed because the shell
glob pattern wasn't expanded. The generated command needed either:
1. An explicit scope like -s . to help the LLM understand available files
2. Or a more specific prompt mentioning the directory structure

Suggested next steps:
- Try: sai -s . "count lines in all rust files in src/"
- Or: sai "count lines in src/*.rs"
```
Generate a per-command prompt config with placeholders:

```shell
sai --create-prompt jq
```

The file defaults to jq.yaml in the current directory. You can specify a custom path:

```shell
sai --create-prompt jq prompts/jq-safe.yaml
```
Add tools from a prompt file to your global default config:

```shell
sai --add-prompt prompts/jq-safe.yaml
```

If any tool names already exist, sai-cli shows both definitions and lets you choose per conflict.
In non-interactive contexts (no TTY), duplicates cause a clear error so you can resolve interactively later.
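The TTY distinction that decides between prompting and failing fast is the standard one; a sketch with the POSIX test builtin (the branch messages are illustrative):

```shell
# [ -t 0 ] is true only when stdin is attached to a terminal; tools like
# sai-cli use this distinction to decide whether interactive prompts are possible.
if [ -t 0 ]; then
  echo "interactive: prompt per conflict"
else
  echo "non-interactive: fail fast"
fi
```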
See which tools sai-cli will allow before running anything:

```shell
sai --list-tools
```

If you supply a prompt file, both sources are reported, and each entry notes whether the tool is currently on your PATH ([x] present, [ ] missing):

```shell
sai --list-tools prompts/standard-tools.yml
```
The repo ships with ready-to-adapt prompt configs under prompts/:
- prompts/standard-tools.yml – Common Unix tools for file inspection and text processing
- prompts/data-focussed-tool.yml – Data transformation tools (jq, yq, mlr, csvkit, sed, awk)
- prompts/safe-destructive-tools.yml – Tools that can modify files (use with caution)
- prompts/git-safe.yml – Read-only git operations (status, log, diff, show, blame, grep, etc.)
- prompts/git-full.yml – Full git workflow including commits, pushes, merges, rebases (always use with --confirm)

Sai-cli automatically maintains a history log of all invocations in NDJSON format (newline-delimited JSON). Each command execution is recorded with metadata and stored at the OS-standard location:
| OS | Path |
|---|---|
| Linux | ~/.config/sai/history.log |
| macOS | ~/Library/Application Support/sai/history.log |
| Windows | %APPDATA%\sai\history.log |
The log automatically rotates when it exceeds 1 MB, keeping one backup generation.
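The NDJSON append-and-rotate behaviour can be mimicked in a few lines of shell (the field names and exact rotation scheme here are illustrative; sai-cli does this internally in Rust):

```shell
# Append one JSON object per invocation (NDJSON), rotating past 1 MB.
log=$(mktemp)
echo '{"prompt":"count lines","command":"wc -l src/app.rs"}' >> "$log"
if [ "$(wc -c < "$log")" -gt $((1024 * 1024)) ]; then
  mv "$log" "$log.1"   # keep one backup generation
  : > "$log"           # start a fresh log
fi
tail -n 1 "$log"       # the most recent invocation record
rm -f "$log" "$log.1"
```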
Use --analyze to review and understand your most recent sai-cli invocation:

```shell
sai --analyze
```
This is particularly useful after errors or unexpected results, as the LLM can explain what likely went wrong and suggest corrections.
- src/main.rs: minimal bootstrap that calls into the real application logic.
- src/app.rs: orchestrates CLI parsing, configuration loading, LLM invocation, confirmation, and command execution. Exposes run_with_dependencies for dependency injection during tests.
- Supporting modules: cli (clap parser), config (YAML + env resolution), prompt (system prompt builder), peek (sample ingestion), llm (CommandGenerator trait + HTTP backend), safety (operator checks), executor (CommandExecutor trait + shell bridge), history (NDJSON logging and analysis), scope (directory context), and ops (init/create/add/list helpers).
- The traits (CommandGenerator, CommandExecutor) allow swapping in mocks or alternative implementations (e.g., offline generators or dry-run executors) without touching the application core.
- Format code with cargo fmt.
- Run the suite with cargo test; it exercises filesystem helpers via tempfile and stays offline.
- See TECHSPEC.md for module-level rationale and expected behaviours.

Sai-cli has three principles:
1. The shell remains in control. Sai-cli generates commands — it does not become a shell itself.
2. Safety first. Default mode blocks pipes, redirections, substitutions, and shell chaining.
3. Context matters. Tools behave better when they see sample data (--peek).
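The safe-mode blocking of shell operators can be sketched with a simple pattern check (illustrative only; sai-cli's actual safety module is written in Rust):

```shell
# Reject commands containing shell operators, in the spirit of safe mode.
check() {
  case "$1" in
    *'|'*|*'>'*|*'<'*|*';'*|*'&'*|*'$('*|*'`'*) echo blocked ;;
    *) echo ok ;;
  esac
}
check "wc -l src/app.rs"        # → ok
check "cat users.json | jq ."   # → blocked
```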
Everyone is invited and welcome to contribute: open issues, propose pull requests, share ideas, or help improve documentation. Participation is open to all, regardless of background or viewpoint.
This project follows the FOSS Pluralism Manifesto, which affirms respect for people, freedom to critique ideas, and space for diverse perspectives.
Copyright (c) 2025, Iwan van der Kleijn
This project is licensed under the MIT License. See the LICENSE file for details.