prai

prai 0.3.1 (crates.io / lib.rs)
Created: 2025-06-15
Updated: 2025-08-11
Homepage/Repository: https://github.com/theelderbeever/prai-cli
Documentation: https://docs.rs/prai-cli
Author: Taylor Beever (theelderbeever)
README

prai - AI-Powered PR Description Generator

A command-line tool that generates concise pull request descriptions from git diffs using configurable AI providers.

Overview

prai analyzes the differences between two git commits and automatically generates a professional PR description highlighting:

  • What changes were made
  • Why these changes matter
  • Any breaking changes or important notes

Perfect for streamlining your PR workflow and ensuring consistent, informative descriptions.

Installation

Via Cargo

cargo install prai

From Source

cargo install --git https://github.com/theelderbeever/prai-cli.git

Configuration

prai supports multiple AI providers through a configuration file. The tool looks for a config file at:

  1. ~/.config/prai/config.toml (default location)
  2. Path specified by PRAI_HOME environment variable + /config.toml
  3. ./config.toml (current directory)
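The lookup order above can be sketched in shell. This is a hypothetical illustration of the precedence, not prai's actual resolution code, and the exact precedence between a local ./config.toml and PRAI_HOME may differ:

```shell
# Sketch of config resolution (illustrative only):
# PRAI_HOME, when set, overrides the default location.
config="$HOME/.config/prai/config.toml"
if [ -n "$PRAI_HOME" ]; then
  config="$PRAI_HOME/config.toml"
fi
echo "using config: $config"
```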

Supported Providers

  • Anthropic Claude - Requires API key from Anthropic's Console
  • OpenAI GPT - Requires API key from OpenAI
  • Google Gemini - Requires API key from Google AI Studio
  • Ollama - For local models (requires Ollama running locally)
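If you plan to use the Ollama provider, you can verify the local server is reachable before configuring prai. Ollama's HTTP API serves a /api/tags endpoint on its default port:

```shell
# Quick reachability check for a local Ollama server (default port 11434)
if curl -sf http://localhost:11434/api/tags >/dev/null; then
  echo "ollama reachable"
else
  echo "ollama not reachable"
fi
```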

Sample Configuration

Create ~/.config/prai/config.toml with your preferred providers:

default = "claude"

[[profile]]
name = "claude"
provider = "anthropic"
version = "2023-06-01"
model = "claude-3-sonnet-20240229"
api_key = "your-anthropic-api-key"
max_tokens = 500
temperature = 0.3

[[profile]]
name = "gpt4"
provider = "openai"
model = "gpt-4"
api_key = "your-openai-api-key"
base_url = "https://api.openai.com/v1"
max_tokens = 500
temperature = 0.3

[[profile]]
name = "ollama"
provider = "ollama"
url = "http://localhost:11434"
model = "codegemma:7b"

[[profile]]
name = "gemini"
provider = "google"
model = "gemini-pro"
api_key = "your-google-api-key"
base_url = "https://generativelanguage.googleapis.com/v1beta"
max_tokens = 500
temperature = 0.3

Usage

prai pairs well with git-fzf.sh for picking commits interactively, so consider using that. Under the hood prai simply calls git diff, so any commit arguments that git diff accepts will work here as well.

prai <base-commit> <head-commit> [OPTIONS]

Options

  • --exclude, -e: Git pathspecs to exclude from the diff (default: :!*.lock)
  • --profile, -p: Provider profile to use (defaults to config default)
  • --config, -f: Path to config file (defaults to ~/.config/prai/config.toml)

Examples

Generate a PR description comparing two commits:

prai main feature-branch

Compare specific commit hashes:

prai abc123 def456

Exclude additional files:

prai main HEAD --exclude ":!*.lock :!dist/"
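Because prai shells out to git diff, the --exclude patterns are ordinary git pathspecs. The following self-contained sketch demonstrates the exclusion mechanism with a throwaway repository (it does not invoke prai itself):

```shell
# Demonstrate git pathspec exclusion, the mechanism behind --exclude
# (toy repo for illustration; prai's default exclude is ':!*.lock')
repo=$(mktemp -d)
cd "$repo"
git init -q
echo v1 > app.rs
echo v1 > Cargo.lock
git add .
git -c user.email=ci@example.com -c user.name=ci commit -qm init
echo v2 > app.rs
echo v2 > Cargo.lock
git diff -- ':!*.lock'   # shows the app.rs change; Cargo.lock is excluded
```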

Use a specific profile:

prai main HEAD --profile gpt4

Use a custom config file:

prai main HEAD --config /path/to/custom/config.toml

Sample Output

## Changes Made
• Implemented user authentication with JWT tokens
• Added password validation and hashing utilities
• Created login/logout API endpoints

## Impact
• Enables secure user sessions across the application
• Provides foundation for role-based access control

## Notes
• Breaking change: `/api/login` now requires email instead of username
• New dependency: `jsonwebtoken` crate added

Requirements

  • Git (for generating diffs)
  • Rust 1.85+ (for installation)
  • API key for your chosen provider (Anthropic, OpenAI, Google) or Ollama running locally

TODO

  • Configurable prompt templates
  • Additional output formats (JSON, Markdown templates)

License

MIT
