| Crates.io | run |
| lib.rs | run |
| version | 0.3.5 |
| created_at | 2017-07-21 07:53:54.642352+00 |
| updated_at | 2026-01-19 14:48:58.908262+00 |
| description | a.k.a. runtool: the bridge between human and AI tooling |
| homepage | |
| repository | https://github.com/nihilok/run |
| max_upload_size | |
| id | 24373 |
| size | 211,781 |
a.k.a. runtool: the bridge between human and AI tooling
Define functions in a Runfile. Your AI agent discovers and executes them via the built-in MCP server. You run them from the terminal too with instant startup and tab completion. Shell, Python, Node—whatever fits the task.
# Runfile
# @desc Search the codebase for a pattern
# @shell python
# @arg pattern The regex pattern to search for
search(pattern: str) {
import sys, os, re
for root, _, files in os.walk('.'):
    for f in files:
        path = os.path.join(root, f)
        try:
            for i, line in enumerate(open(path), 1):
                if re.search(sys.argv[1], line):
                    print(f"{path}:{i}: {line.rstrip()}")
        except: pass
}
# @desc Deploy to an environment
deploy(env: str, version = "latest") {
echo "Deploying $version to $env..."
./scripts/deploy.sh $env $version
}
# @desc Analyze a JSON file
function analyze(file: str) {
#!/usr/bin/env python3
import sys, json
with open(sys.argv[1]) as f:
    data = json.load(f)
print(f"Found {len(data)} records")
}
The syntax is designed to be similar to bash/sh while remaining permissive and flexible, with added features for AI integration.
Humans can run these functions directly from the terminal:
$ run search "TODO"
$ run deploy staging
$ run analyze data.json
Point your AI agent at the Runfile, and it can discover and execute these tools automatically.
run has built-in support for the Model Context Protocol (MCP), allowing AI agents like Claude to discover and execute your Runfile functions as tools.
Start run as an MCP server:
run --serve-mcp
Configure in your AI client (e.g., Claude Desktop claude_desktop_config.json):
{
"mcpServers": {
"my-project": {
"command": "run",
"args": ["--serve-mcp", "--runfile", "/path/to/your/project/"]
}
}
}
Now your AI agent can discover and call your tools automatically.
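Under the hood, tool calls arrive as standard MCP tools/call requests. For the search function from the example Runfile, a client's request looks roughly like this (illustrative JSON-RPC payload; your MCP client handles the framing for you):
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "pattern": "TODO" }
  }
}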
Use @desc to describe what a function does, and declare parameters in the function signature:
# @desc Search the codebase for a regex pattern
search(pattern: str) {
#!/usr/bin/env python3
import sys, os, re
for root, _, files in os.walk('.'):
    for f in files:
        path = os.path.join(root, f)
        try:
            for i, line in enumerate(open(path), 1):
                if re.search(sys.argv[1], line):
                    print(f"{path}:{i}: {line.rstrip()}")
        except: pass
}
# @desc Deploy the application to a specific environment
deploy(environment: str, version = "latest") {
./scripts/deploy.sh $environment $version
}
Functions with @desc are automatically exposed as MCP tools with typed parameters.
View the generated JSON schema for all MCP-enabled functions:
run --inspect
This outputs the tool definitions that AI agents will see—useful for debugging and validation.
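For the search function, the generated definition looks something like the following (illustrative; the field names follow the MCP tool schema, so check your own --inspect output for the exact shape):
{
  "name": "search",
  "description": "Search the codebase for a regex pattern",
  "inputSchema": {
    "type": "object",
    "properties": {
      "pattern": {
        "type": "string",
        "description": "The regex pattern to search for"
      }
    },
    "required": ["pattern"]
  }
}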
macOS/Linux (Homebrew)
brew tap nihilok/tap
brew install runtool
Windows (Scoop)
scoop bucket add nihilok https://github.com/nihilok/scoop-bucket
scoop install runtool
Works on all platforms:
cargo install run # or: cargo install runtool
Auto-detect your shell and install completions:
run --install-completion
Supports bash, zsh, fish, and powershell.
run parses your Runfile to find function definitions. The syntax is designed to be familiar to anyone who has used bash or sh.
For simple, one-line commands, you don't need braces.
# Usage: run dev
dev() cargo run
# Usage: run fmt
fmt() cargo fmt
Use {} for multi-statement functions. This avoids the need for trailing backslashes.
ci() {
echo "Running CI..."
cargo fmt -- --check
cargo clippy
cargo test
echo "Done!"
}
Declare parameters directly in the function signature for cleaner, self-documenting code:
# @desc Deploy to an environment
deploy(env: str, version = "latest") {
echo "Deploying $version to $env..."
./scripts/deploy.sh $env $version
}
# @desc Resize an image
resize(width: int, height: int, file: str) {
convert $file -resize ${width}x${height} output.png
}
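On the command line these map to positional arguments in signature order, for example (the file name and sizes are illustrative):
$ run resize 800 600 photo.png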
Type annotations (str, int, bool) determine the parameter types in the generated MCP tool schema, so agents see correctly typed arguments.
Default values make parameters optional:
# version defaults to "latest" if not provided
deploy(env: str, version = "latest") { ... }
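So at the command line the trailing argument can simply be dropped (the version tag shown is only an example):
$ run deploy staging           # version defaults to "latest"
$ run deploy staging v2.1.0    # explicit version overrides the default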
Legacy positional syntax still works for simple cases:
# Access arguments as $1, $2, $@
deploy() {
env=$1
version=${2:-latest}
./scripts/deploy.sh $env $version
}
Combining with @arg for descriptions:
# @desc Deploy the application
# @arg env Target environment (staging|prod)
# @arg version Version tag to deploy
deploy(env: str, version = "latest") {
./scripts/deploy.sh $env $version
}
When both signature params and @arg exist, the signature defines names/types/defaults, and @arg provides descriptions for MCP.
You can use comment attributes (# @key value) or shebang lines to modify function behaviour and select interpreters.
@os: Restrict functions to specific operating systems. This allows you to define platform-specific implementations of the same task.
# @os windows
clean() del /Q dist
# @os unix
clean() rm -rf dist
When you run run clean, only the variant matching your current OS will execute.
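The same pattern works for any platform-dependent command; a sketch (the function name and URL are only examples):
# @os windows
browse() start http://localhost:3000
# @os unix
browse() open http://localhost:3000 || xdg-open http://localhost:3000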
There are two ways to specify a custom interpreter:
1. Shebang detection
The first line of your function body can be a shebang, just like standalone scripts:
analyze() {
#!/usr/bin/env python
import sys, json
with open(sys.argv[1]) as f:
    data = json.load(f)
print(f"Found {len(data)} records")
}
server() {
#!/usr/bin/env node
const port = process.argv[1] || 3000;
require('http').createServer((req, res) => {
    res.end('Hello!');
}).listen(port);
}
2. Attribute syntax (@shell):
Use comment attributes for explicit control or when you need to override a shebang:
# @shell python3
calc() {
import sys, math
radius = float(sys.argv[1])
print(f"Area: {math.pi * radius**2:.2f}")
}
Precedence: If both are present, @shell takes precedence over the shebang.
Supported interpreters: python, python3, node, ruby, pwsh, bash, sh
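The other interpreters follow the same pattern. For instance, a Ruby function might look like this (a sketch mirroring the Python examples, assuming arguments arrive in ARGV):
# @desc Greet someone by name
# @shell ruby
greet(name: str) {
puts "Hello, #{ARGV[0]}!"
}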
Organise related commands using colons. run parses name:subname as a single identifier.
docker:build() docker build -t app .
docker:up() docker compose up -d
docker:logs() docker compose logs -f
Execute them with spaces:
$ run docker build
$ run docker logs
Functions can call other functions defined in the same Runfile, enabling task composition and code reuse without duplication.
# Base tasks
build() cargo build --release
test() cargo test
lint() cargo clippy
# Composed task that calls other functions
ci() {
echo "Running CI pipeline..."
lint
test
build
}
# Deploy depends on successful build
deploy() {
build || exit 1
echo "Deploying..."
scp target/release/app server:/bin/
}
When you run run ci, all compatible functions are automatically injected into the execution scope, so you can call them directly without spawning new processes.
Key features:
Composed functions run in the same scope, so calling them doesn't spawn extra processes.
Exit codes propagate as normal (use || exit 1 to stop on failure).
By default, run uses:
On Windows: PowerShell (pwsh if available, else powershell)
Everywhere else: sh
You can override this default by setting the RUN_SHELL environment variable.
# Force Zsh for this command
RUN_SHELL=zsh run build
# Make it permanent for your session
export RUN_SHELL=bash
Note: The commands in your Runfile must be compatible with the configured shell, unless an explicit interpreter (e.g., # @shell python) is defined for that function.
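If a function relies on shell-specific features, pinning it with @shell keeps it working regardless of RUN_SHELL; a sketch using bash-only constructs (the function name is only an example):
# @shell bash
countfiles() {
shopt -s nullglob
files=(*)
echo "${#files[@]} entries here"
}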
Create a ~/.runfile in your home directory to define global commands available anywhere.
# ~/.runfile
# Usage: run update
update() {
brew update
brew upgrade
rustup update
}
# Usage: run clone <repo>
clone() {
git clone "https://github.com/$1"
cd "$(basename "$1" .git)"
}
If a local ./Runfile exists, run looks there first. If the command isn't found locally, it falls back to ~/.runfile.
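So from any directory without its own Runfile (illustrative):
$ cd /tmp
$ run update   # resolved from ~/.runfile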
License: MIT