# Command-Line Help for `ai-commit-cli`

This document contains the help content for the `ai-commit-cli` command-line program.

**Command Overview:**

- [`ai-commit-cli`↴](#ai-commit-cli)
- [`ai-commit-cli commit`↴](#ai-commit-cli-commit)
- [`ai-commit-cli complete`↴](#ai-commit-cli-complete)

## `ai-commit-cli`

**Usage:** `ai-commit-cli [OPTIONS] <COMMAND>`

###### **Subcommands:**

- `commit`
- `complete` — Generate tab-completion scripts for your shell

###### **Options:**

- `-v`, `--verbose` — More output per occurrence
- `-q`, `--quiet` — Less output per occurrence

## `ai-commit-cli commit`

**Usage:** `ai-commit-cli commit [OPTIONS]`

###### **Options:**

- `-a`, `--api-key <API_KEY>` — If not provided, will use `bw get notes OPENAI_API_KEY`
- `-e`, `--exclude <EXCLUDE>`

  Default values: `*-lock.*`, `*.lock`
- `-i`, `--include <INCLUDE>`
- `--no-pre-commit`

  Default value: `false`
- `-p`, `--prompt <PROMPT>`
- `--prompt-file <PROMPT_FILE>`
- `--model <MODEL>` — ID of the model to use

  Default value: `gpt-3.5-turbo-16k`
- `--max-tokens <MAX_TOKENS>` — The maximum number of tokens to generate in the chat completion

  Default value: `500`
- `-n <N>` — How many chat completion choices to generate for each input message

  Default value: `1`
- `--temperature <TEMPERATURE>` — What sampling temperature to use, between 0 and 2

  Default value: `0`
- `--top-p <TOP_P>` — An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass

  Default value: `0.1`

## `ai-commit-cli complete`

Generate tab-completion scripts for your shell

```fish
$ ai-commit-cli complete fish >$HOME/.local/share/fish/vendor_completions.d/ai-commit-cli.fish
$ ai-commit-cli complete fish >/usr/local/share/fish/vendor_completions.d/ai-commit-cli.fish
```

**Usage:** `ai-commit-cli complete <SHELL>`

###### **Arguments:**

- `<SHELL>`

  Possible values: `markdown`, `bash`, `elvish`, `fish`, `powershell`, `zsh`
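As a concrete illustration of the `commit` options documented above, a hypothetical invocation might look like the following sketch. The environment variable `OPENAI_API_KEY` and the specific flag values shown are assumptions for illustration, not defaults of the tool:

```shell
# Hypothetical usage sketch: generate a commit message while overriding
# a few of the documented defaults. $OPENAI_API_KEY is assumed to be set;
# if --api-key is omitted entirely, the tool falls back to
# `bw get notes OPENAI_API_KEY` as noted above.
ai-commit-cli commit \
  --api-key "$OPENAI_API_KEY" \
  --exclude '*.min.js' \
  --max-tokens 300 \
  --temperature 0.2
```

Flags not passed on the command line keep the default values listed in the options table.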
This document was generated automatically by clap-markdown.