cai - The fastest CLI tool for prompting LLMs

Features

  • Built with Rust 🦀 for supreme performance and speed! 🏎️
  • Support for models by Groq, OpenAI, Anthropic, and local LLMs. 📚
  • Prompt several models at once with the all command (see demo below). 🤼
  • Syntax highlighting for better readability of code snippets. 🌈

Demo

(Demo GIF: cai in action)

Installation

cargo install cai
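
After the install completes, you can confirm the binary works by printing its help text (shown in full in the Usage section below):

cai help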

Usage

Before using Cai, an API key must be set up. Simply execute cai in your terminal and follow the instructions.

Cai supports the following APIs: Groq, OpenAI, Anthropic, and local LLMs via Llamafile and Ollama.

Afterwards, you can use cai to run prompts directly from the terminal:

cai List 10 fast CLI tools

Or target a specific model, like Anthropic's Claude Opus (shortcut cl in the help output below):

cai cl List 10 fast CLI tools
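
The other model shortcuts from the help output work the same way, as does the all command for prompting every provider's default model at once. For example, gp (GPT-4o) and so (Claude Sonnet):

cai gp List 10 fast CLI tools
cai so List 10 fast CLI tools
cai all List 10 fast CLI tools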

Full help output:

$ cai help
Cai 0.7.0

The fastest CLI tool for prompting LLMs

Usage: cai [OPTIONS] [PROMPT]... [COMMAND]

Commands:
  groq       Groq [aliases: gr]
  ll         - Llama 3 shortcut (🏆 Default)
  mi         - Mixtral shortcut
  openai     OpenAI [aliases: op]
  gp         - GPT-4o shortcut
  gm         - GPT-4o mini shortcut
  anthropic  Anthropic [aliases: an]
  cl         - Claude Opus
  so         - Claude Sonnet
  ha         - Claude Haiku
  llamafile  Llamafile server hosted at http://localhost:8080 [aliases: lf]
  ollama     Ollama server hosted at http://localhost:11434 [aliases: ol]
  all        Simultaneously send prompt to each provider's default model:
             - Groq Llama 3.1
             - Anthropic Claude Sonnet 3.5
             - OpenAI GPT-4o mini
             - Ollama Llama 3
             - Llamafile
  changelog  Generate a changelog starting from a given commit using OpenAI's GPT-4o
  rename     Analyze and rename a file with timestamp and description
  ocr        Extract text from an image
  bash       Use Bash development as the prompt context
  c          Use C development as the prompt context
  cpp        Use C++ development as the prompt context
  cs         Use C# development as the prompt context
  elm        Use Elm development as the prompt context
  fish       Use Fish development as the prompt context
  fs         Use F# development as the prompt context
  gd         Use Godot and GDScript development as the prompt context
  gl         Use Gleam development as the prompt context
  go         Use Go development as the prompt context
  hs         Use Haskell development as the prompt context
  java       Use Java development as the prompt context
  js         Use JavaScript development as the prompt context
  kt         Use Kotlin development as the prompt context
  lua        Use Lua development as the prompt context
  oc         Use OCaml development as the prompt context
  php        Use PHP development as the prompt context
  po         Use Postgres development as the prompt context
  ps         Use PureScript development as the prompt context
  py         Use Python development as the prompt context
  rb         Use Ruby development as the prompt context
  rs         Use Rust development as the prompt context
  sql        Use SQLite development as the prompt context
  sw         Use Swift development as the prompt context
  ts         Use TypeScript development as the prompt context
  wl         Use Wolfram Language and Mathematica development as the prompt context
  zig        Use Zig development as the prompt context
  help       Print this message or the help of the given subcommand(s)

Arguments:
  [PROMPT]...  The prompt to send to the AI model

Options:
  -r, --raw   Print raw response without any metadata
  -j, --json  Prompt LLM in JSON output mode
  -h, --help  Print help


Examples:
  # Send a prompt to the default model
  cai Which year did the Titanic sink

  # Send a prompt to each provider's default model
  cai all Which year did the Titanic sink

  # Send a prompt to Anthropic's Claude Opus
  cai anthropic claude-opus Which year did the Titanic sink
  cai an claude-opus Which year did the Titanic sink
  cai cl Which year did the Titanic sink
  cai anthropic claude-3-opus-latest Which year did the Titanic sink

  # Send a prompt to locally running Ollama server
  cai ollama llama3 Which year did the Titanic sink
  cai ol ll Which year did the Titanic sink

  # Add data via stdin
  cat main.rs | cai Explain this code
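
The options and language contexts shown above compose with plain prompts. A short sketch, using only the flags and subcommands documented in the help output (the responses themselves will vary by model):

  # Print only the response, without any metadata
  cai --raw Which year did the Titanic sink

  # Prompt the default model in JSON output mode
  cai --json List the 3 largest countries

  # Answer with Rust development as the prompt context
  cai rs How do I read a file into a string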

Related

  • AI CLI - Get answers for CLI commands from ChatGPT. (TypeScript)
  • AIChat - All-in-one chat and copilot CLI for 10+ AI platforms. (Rust)
  • Ell - CLI tool for LLMs. (Bash)
  • ja - CLI / TUI app to work with AI tools. (Rust)
  • llm - Access large language models from the command-line. (Python)
  • smartcat - Integrate LLMs in the Unix command ecosystem. (Rust)
  • tgpt - AI chatbots for the terminal without needing API keys. (Go)