| Crates.io | snapshell |
| lib.rs | snapshell |
| version | 0.2.3 |
| created_at | 2025-09-05 01:26:36.190197+00 |
| updated_at | 2025-09-06 05:55:36.470038+00 |
| description | snapshell - a snappy CLI that generates shell commands via OpenRouter LLMs |
| homepage | https://github.com/eufat/snapshell |
| repository | https://github.com/eufat/snapshell |
| max_upload_size | |
| id | 1824982 |
| size | 77,644 |
Minimal and snappy shell command generator with LLM/AI.
An alternative to GitHub Copilot's ghcs, snapshell quickly generates shell commands using your preferred LLM via OpenRouter.
Install via crates.io (recommended):
# Install the Rust toolchain first if you don't have it:
curl https://sh.rustup.rs -sSf | sh
cargo install snapshell
Set up PATH and symlink (optional):
# The binary is installed to ~/.cargo/bin by default; make sure it's on your PATH:
export PATH="$HOME/.cargo/bin:$PATH"
# Optionally create a global symlink so the command is `ss` (may require sudo):
ln -s "$HOME/.cargo/bin/snapshell" /usr/local/bin/ss
Build from source and symlink to ss:
cargo build --release
# Use sudo if /usr/local/bin requires elevated permissions
sudo ln -s "$(pwd)/target/release/snapshell" /usr/local/bin/ss
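To verify the symlink resolves before using it, a quick check (assuming /usr/local/bin is on your PATH):
command -v ss            # should print /usr/local/bin/ss
ls -l /usr/local/bin/ss  # the symlink should point at the snapshell binary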
Before using snapshell with LLM features, configure OpenRouter:
export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"
export SNAPSHELL_OPENROUTER_MODEL="openai/gpt-oss-120b" # optional override, e.g. meta-llama/llama-3.3-8b-instruct:free
Alternatively, create a .env file based on .env.example and load it with your shell or a tool like direnv:
cp .env.example .env
# edit .env and add your key and optional model:
# SNAPSHELL_OPENROUTER_API_KEY=your_openrouter_api_key
# SNAPSHELL_OPENROUTER_MODEL=openai/gpt-oss-120b
export $(cat .env | xargs)
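Note that export $(cat .env | xargs) breaks on values containing spaces and on comment lines; a more robust sketch uses the shell's allexport option:
# Auto-export every variable assigned while sourcing .env (bash/zsh)
set -a
source .env
set +a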
To make the key (and optional model) permanent, add the exports to your shell startup file.
For bash (~/.bashrc or ~/.profile):
echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.bashrc
echo 'export SNAPSHELL_OPENROUTER_MODEL="openai/gpt-oss-120b"' >> ~/.bashrc
# or
echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.profile
echo 'export SNAPSHELL_OPENROUTER_MODEL="openai/gpt-oss-120b"' >> ~/.profile
For zsh (~/.zshrc or ~/.zprofile):
echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.zshrc
echo 'export SNAPSHELL_OPENROUTER_MODEL="openai/gpt-oss-120b"' >> ~/.zshrc
# or
echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.zprofile
echo 'export SNAPSHELL_OPENROUTER_MODEL="openai/gpt-oss-120b"' >> ~/.zprofile
After editing, reload your shell or source the file:
source ~/.bashrc # or source ~/.zshrc
ss 'describe what shell command you want'
ss -a 'chat with the model'  # type /exit or an empty line to quit
ss -r 2 'use reasoning level 2'
ss -m 'provider/model' 'ask'  # e.g. groq/... or cerebras/...
ss -L 'ask'  # multiline mode
ss -H  # show saved history
ss "install openvino and show the command to quantize a tensorflow model"
ss -L "generate a bash script to backup ~/projects to /tmp/backup"
ss -a "how to list modified rust files since yesterday?"
# After response, type follow-up questions at the `>` prompt
ss -m "meta-llama/llama-3.3-8b-instruct:free" "list files modified today"
ss -s "You are an expert devops assistant. Output only shell commands." "describe what you want"
ss --system-single "Single-line-only instruction" "do X"
ss --system-multiline "Multiline-allowed instruction" -L "do Y"
ss -H
snapshell supports an optional lightweight "reasoning" hint (OpenAI-style effort) you can request from the model.
-r, --reasoning <low|medium|high> — set the reasoning effort. Default: low.
-S, --show-reasoning — when set, the model may append a trailing JSON object containing its short reasoning, printed on the line after the command as:
{"reasoning": "short one-sentence reason here"}
Notes:
Use -S when you want an explanation:
ss -r high -S "why can't I install TensorRT on macOS?"
# output:
# (NOT ABLE TO ANSWER): TensorRT requires NVIDIA GPUs and is not available on macOS.
# {"reasoning": "TensorRT depends on NVIDIA GPU drivers not present on macOS"}
SNAPSHELL_OPENROUTER_API_KEY — API key for OpenRouter (required to call the remote LLM).
SNAPSHELL_SYSTEM — generic system instruction override.
SNAPSHELL_SYSTEM_SINGLE — override for single-line mode.
SNAPSHELL_SYSTEM_MULTILINE — override for multiline mode.
See .env.example for a sample env file.
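As an illustration of the system overrides (the instruction strings here are made-up examples, and this assumes the default mode is single-line while -L selects multiline, as the flag examples above suggest):
export SNAPSHELL_SYSTEM_SINGLE="Output exactly one shell command and nothing else."
export SNAPSHELL_SYSTEM_MULTILINE="You may output a multi-line shell script with comments."
ss "rotate nginx logs"     # single-line mode picks up SNAPSHELL_SYSTEM_SINGLE
ss -L "rotate nginx logs"  # multiline mode picks up SNAPSHELL_SYSTEM_MULTILINE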
This tool is integrated with OpenRouter. Provide your OpenRouter API key via the environment variable SNAPSHELL_OPENROUTER_API_KEY.
You can control the model used in two ways, in priority order (see the sketch below):
Pass -m 'provider/model' to ss.
Set SNAPSHELL_OPENROUTER_MODEL (for example openai/gpt-oss-120b or groq/fast-model).
If neither is set, snapshell falls back to the built-in default openai/gpt-oss-120b.
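A minimal sketch of that precedence:
export SNAPSHELL_OPENROUTER_MODEL="openai/gpt-oss-120b"  # second priority
ss "list listening ports"                                # uses the env model
ss -m "groq/fast-model" "list listening ports"           # -m overrides the env var
unset SNAPSHELL_OPENROUTER_MODEL
ss "list listening ports"                                # falls back to the built-in default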
For instant results with the lowest-latency replies, the recommended providers are Groq and Cerebras (when available); both run specialized inference hardware that can reach roughly 1K tokens/second.
You can enforce these providers in OpenRouter: go to Settings > Account > Allowed Providers, select a provider (you can select both Groq and Cerebras), and tick the 'Always enforce' checkbox.
History is saved as history.jsonl in your OS data dir and contains timestamp, prompt, and generated command. Use ss -H to view.
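Since history.jsonl holds one JSON object per line, you can also inspect it directly; a sketch assuming a Linux data dir of ~/.local/share/snapshell and field names matching the description above (both are assumptions and may differ on your system):
jq -r '"\(.timestamp)  \(.prompt)  =>  \(.command)"' ~/.local/share/snapshell/history.jsonl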
Use -s, --system-single, or --system-multiline to tighten the system instructions.