| Crates.io | singleshot |
| lib.rs | singleshot |
| version | 0.1.0 |
| created_at | 2026-01-07 17:37:37.060198+00 |
| updated_at | 2026-01-07 17:37:37.060198+00 |
| description | A CLI tool for testing AI models with a single prompt |
| homepage | https://github.com/vincentzhangz/singleshot |
| repository | https://github.com/vincentzhangz/singleshot |
| max_upload_size | |
| id | 2028742 |
| size | 113,919 |
A CLI tool for testing AI models with a single prompt. Supports multiple providers including OpenAI, Anthropic, Ollama, and OpenRouter.
Install from crates.io:
cargo install singleshot
Or build from source:
git clone https://github.com/vincentzhangz/singleshot
cd singleshot
cargo install --path .
# Simple prompt with Ollama (default)
singleshot chat -p "What is Rust?"
# Use OpenAI
singleshot chat -p "Explain async/await" -P openai
# Use a specific model
singleshot chat -p "Hello" -P anthropic -m claude-sonnet-4-20250514
# Send an image
singleshot chat -p "What's in this image?" -P openai -i photo.jpg
| Command | Description |
|---|---|
| chat | Send a prompt to an AI model |
| models | List available models for a provider |
| ping | Test connection to a provider |
| generate | Generate an example config file |
Options for the chat command:
-p, --prompt <PROMPT> The prompt to send
-f, --file <FILE> File containing the prompt
-l, --load <LOAD> Load config from markdown file
-i, --image <IMAGE> Image file to include
-v, --video <VIDEO> Video file to include
-a, --audio <AUDIO> Audio file to include
-P, --provider <PROVIDER> AI provider [default: ollama]
-m, --model <MODEL> Model name
-b, --base-url <BASE_URL> Custom API base URL
-s, --system <SYSTEM> System prompt
-t, --temperature <TEMP> Temperature (0.0-2.0) [default: 0.7]
--max-tokens <TOKENS> Maximum tokens in response
-d, --detail Show detailed information
-r, --report [<FILE>] Generate report file
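These flags can be combined freely; for example, the sketch below pins a model and constrains the response with a system prompt, a lower temperature, and a token cap (all flags as documented above):
singleshot chat -p "Write a haiku about Rust" -P openai -m gpt-4o \
  -s "You are a concise poet." -t 0.3 --max-tokens 100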
| Provider | Default Model | Environment Variable |
|---|---|---|
| ollama | llama3.2 | - |
| openai | gpt-4o | OPENAI_API_KEY |
| anthropic | claude-sonnet-4-20250514 | ANTHROPIC_API_KEY |
| openrouter | openai/gpt-4o | OPENROUTER_API_KEY |
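For example, to use OpenRouter with its default model (an API key is required, as listed above):
export OPENROUTER_API_KEY="sk-or-..."
singleshot chat -p "Hello" -P openrouter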
Read the prompt from a file, or pipe it in via stdin:
singleshot chat -f prompt.txt -P openai
echo "Summarize this" | singleshot chat -P openai
Create a config file with singleshot generate -o config.md:
---provider---
openai
---model---
gpt-4o
---base_url---
https://api.openai.com/v1
---temperature---
0.7
---max_tokens---
1024
---system---
You are a helpful coding assistant.
---prompt---
Explain the difference between async and threads.
---file---
path/to/context.txt
---image---
path/to/image.jpg
---video---
path/to/video.mp4
---audio---
path/to/audio.mp3
All sections are optional. Only include what you need.
Then run:
singleshot chat -l config.md
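Since every section is optional, a hand-written config can be much smaller; a minimal sketch might contain only a provider and a prompt:
---provider---
ollama
---prompt---
Explain ownership in Rust in two sentences.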
singleshot chat -p "What is Rust?" -P openai -r report.md
singleshot chat -p "Describe this image" -i photo.jpg -P openai -d -r
List available models for a provider:
singleshot models -P openai
singleshot models -P ollama
Test the connection to a provider:
singleshot ping -P openai
singleshot ping -P ollama
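One possible use in scripts, assuming ping exits with a non-zero status when the provider is unreachable (an assumption, not verified here):
# Fall back to the local Ollama instance if OpenAI is unreachable
if singleshot ping -P openai; then
  singleshot chat -p "Hello" -P openai
else
  singleshot chat -p "Hello" -P ollama
fi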
Set API keys for cloud providers:
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENROUTER_API_KEY="sk-or-..."
For custom endpoints:
export OPENAI_BASE_URL="https://your-proxy.com/v1"
export OLLAMA_API_BASE_URL="http://remote-server:11434"
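The base URL can also be overridden per invocation with the -b/--base-url flag instead of an environment variable, for example:
singleshot chat -p "Hello" -P openai -b https://your-proxy.com/v1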
Code released under the MIT License.