| Crates.io | aicommit |
| lib.rs | aicommit |
| version | 0.1.138 |
| created_at | 2024-12-19 10:48:39.562458+00 |
| updated_at | 2025-07-16 16:33:48.003079+00 |
| description | A CLI tool that generates concise and descriptive git commit messages using LLMs |
| homepage | |
| repository | https://github.com/suenot/aicommit |
| max_upload_size | |
| id | 1489004 |
| size | 8,762,186 |

A CLI tool that generates concise and descriptive git commit messages using LLMs (Large Language Models).
Features:
- Stage all changes automatically with the --add option
- Push after committing with --push
- Preview the generated message without committing with --dry-run
- Monitor files and commit on change with --watch
- Verbose output with --verbose
- Automatic version management (--version-iterate), including Cargo.toml (--version-cargo), package.json (--version-npm), and GitHub releases (--version-github)

The Simple Free mode allows you to use OpenRouter's free models without having to manually select a model. You only need to provide an OpenRouter API key, and the system will automatically pick the best available free model and fail over to alternatives when a model stops working.
To set up Simple Free mode:
# Interactive setup
aicommit --add-provider
# Select "Simple Free OpenRouter" from the menu
# Or non-interactive setup
aicommit --add-simple-free --openrouter-api-key=<YOUR_API_KEY>
The Simple Free mode uses a sophisticated failover mechanism to ensure optimal model selection:
Each model is tracked in one of three states: Active, Jailed (temporary restriction), or Blacklisted (long-term ban). Models that repeatedly fail are moved to Jailed status, and restricted models can be released manually with the --unjail and --unjail-all commands (a sketch of this state machine follows the commands below).

Model management commands:
# Show status of all model jails/blacklists
aicommit --jail-status
# Release specific model from restrictions
aicommit --unjail <model-id>
# Release all models from restrictions
aicommit --unjail-all
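The exact bookkeeping is internal to aicommit, but the jail mechanism can be pictured as a small state machine. The following Rust sketch is illustrative only: the state names come from the description above, while the failure thresholds and jail duration are invented:

```rust
use std::time::{Duration, Instant};

/// Illustrative model states, mirroring the Active/Jailed/Blacklisted
/// statuses described above. Thresholds and durations are hypothetical.
enum ModelStatus {
    Active,
    Jailed { until: Instant },
    Blacklisted,
}

struct ModelRecord {
    id: String,
    status: ModelStatus,
    consecutive_failures: u32,
}

impl ModelRecord {
    /// Record a failure that was the model's own fault (not a network error).
    fn record_failure(&mut self) {
        self.consecutive_failures += 1;
        self.status = match self.consecutive_failures {
            // A few failures: temporary jail.
            1..=3 => ModelStatus::Jailed {
                until: Instant::now() + Duration::from_secs(60 * 60),
            },
            // Persistent failures: long-term blacklist.
            _ => ModelStatus::Blacklisted,
        };
    }

    /// Equivalent of `--unjail <model-id>`: reset the model to Active.
    fn unjail(&mut self) {
        self.consecutive_failures = 0;
        self.status = ModelStatus::Active;
    }
}

fn main() {
    let mut m = ModelRecord {
        id: "example/model:free".into(),
        status: ModelStatus::Active,
        consecutive_failures: 0,
    };
    m.record_failure(); // -> Jailed
    m.unjail();         // -> Active again, as with --unjail
    println!("{} reset to Active", m.id);
}
```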
The system keeps a ranked list of preferred free models. Even if this list becomes outdated over time, the system will still identify the best available models based on their parameter size by analyzing model names (e.g., models with "70b" or "32b" in their names).
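One plausible way to implement such a name-based ranking (a sketch with hypothetical model IDs; aicommit's actual parsing may differ):

```rust
/// Extract a parameter count in billions from a model name, e.g.
/// "meta-llama/llama-3.3-70b-instruct:free" -> Some(70.0).
fn param_size_billions(model_id: &str) -> Option<f64> {
    model_id
        .to_lowercase()
        .split(|c: char| !c.is_ascii_alphanumeric() && c != '.')
        .filter_map(|tok| tok.strip_suffix('b'))
        .filter_map(|num| num.parse::<f64>().ok())
        .fold(None, |max, n| match max {
            Some(m) if m >= n => Some(m),
            _ => Some(n),
        })
}

fn main() {
    // Hypothetical free-model IDs, for illustration only.
    let mut models = vec![
        "example/small-7b-instruct:free",
        "example/large-70b-instruct:free",
        "example/medium-32b:free",
    ];
    // Rank larger models first.
    models.sort_by(|a, b| {
        param_size_billions(b)
            .partial_cmp(&param_size_billions(a))
            .unwrap()
    });
    println!("{models:?}");
}
```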
For developers who want to see all available free models, a utility script is included:
python bin/get_free_models.py
This script queries OpenRouter for the currently available models and prints the ones that are free to use.
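The bundled script is Python, but the idea is portable. For illustration, here is a Rust sketch of the same lookup, assuming OpenRouter's public model-list endpoint and the reqwest (blocking, with the json feature) and serde_json crates; free models are conventionally suffixed with ":free":

```rust
// Sketch: list OpenRouter models whose IDs end in ":free".
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body: serde_json::Value = reqwest::blocking::Client::new()
        .get("https://openrouter.ai/api/v1/models")
        .send()?
        .json()?;
    for model in body["data"].as_array().into_iter().flatten() {
        if let Some(id) = model["id"].as_str() {
            if id.ends_with(":free") {
                println!("{id}");
            }
        }
    }
    Ok(())
}
```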
To install aicommit, use the following npm command:
npm install -g @suenot/aicommit
For Rust users, you can install using cargo:
cargo install aicommit
Configure a provider:
aicommit --add-provider
Then stage your changes and generate a commit:
git add .
aicommit
Or let aicommit stage everything for you:
aicommit --add
Add a provider in interactive mode:
aicommit --add-provider
Add providers in non-interactive mode:
# Add OpenRouter provider
aicommit --add-provider --add-openrouter --openrouter-api-key "your-api-key" --openrouter-model "mistralai/mistral-tiny"
# Add Ollama provider
aicommit --add-provider --add-ollama --ollama-url "http://localhost:11434" --ollama-model "llama2"
# Add OpenAI compatible provider
aicommit --add-provider --add-openai-compatible \
--openai-compatible-api-key "your-api-key" \
--openai-compatible-api-url "https://api.deep-foundation.tech/v1/chat/completions" \
--openai-compatible-model "gpt-4o-mini"
Optional parameters for non-interactive mode:
- --max-tokens - Maximum number of tokens (default: 50)
- --temperature - Controls randomness (default: 0.3)

List all configured providers:
aicommit --list
Set active provider:
aicommit --set <provider-id>
aicommit supports automatic version management with the following features:
# Increment the version in a version file
aicommit --version-file version --version-iterate
# Also update the version in Cargo.toml
aicommit --version-file version --version-iterate --version-cargo
# Also update the version in package.json
aicommit --version-file version --version-iterate --version-npm
# Also create a GitHub tag
aicommit --version-file version --version-iterate --version-github
You can combine these flags to update multiple files at once:
aicommit --version-file version --version-iterate --version-cargo --version-npm --version-github
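As an illustration of what a patch-level bump over a plain version file involves, here is a minimal sketch (not aicommit's actual implementation; it assumes the file holds a bare MAJOR.MINOR.PATCH string):

```rust
use std::fs;

/// Increment the patch component of a "MAJOR.MINOR.PATCH" string,
/// e.g. "0.1.138" -> "0.1.139". Illustrative only.
fn bump_patch(version: &str) -> Option<String> {
    let mut parts: Vec<u64> = version
        .trim()
        .split('.')
        .map(|p| p.parse().ok())
        .collect::<Option<Vec<_>>>()?;
    *parts.last_mut()? += 1;
    Some(
        parts
            .iter()
            .map(u64::to_string)
            .collect::<Vec<_>>()
            .join("."),
    )
}

fn main() -> std::io::Result<()> {
    let current = fs::read_to_string("version")?;
    let next = bump_patch(&current).expect("version file must be X.Y.Z");
    fs::write("version", &next)?;
    println!("{} -> {}", current.trim(), next);
    Ok(())
}
```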
aicommit now includes a VS Code extension for seamless integration with the editor:
cd vscode-extension
code --install-extension aicommit-vscode-0.1.0.vsix
Or build the extension package manually:
# Install vsce if not already installed
npm install -g @vscode/vsce
# Package the extension
vsce package
Once installed, you can generate commit messages directly from the Source Control view in VS Code by clicking the "AICommit: Generate Commit Message" button.
See the VS Code Extension README for more details.
The configuration file is stored at ~/.aicommit.json. You can edit it directly with:
aicommit --config
The configuration file supports the following global settings:
{
"providers": [...],
"active_provider": "provider-id",
"retry_attempts": 3 // Number of attempts to generate commit message if provider fails
}
retry_attempts: Number of retry attempts if provider fails (default: 3)
Each provider can be configured with the following settings:
- max_tokens: Maximum number of tokens in the response (default: 200)
- temperature: Controls randomness in the response (0.0-1.0, default: 0.3)

Example configuration with all options:
{
"providers": [{
"id": "550e8400-e29b-41d4-a716-446655440000",
"provider": "openrouter",
"api_key": "sk-or-v1-...",
"model": "mistralai/mistral-tiny",
"max_tokens": 200,
"temperature": 0.3
}],
"active_provider": "550e8400-e29b-41d4-a716-446655440000",
"retry_attempts": 3
}
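If you process this file programmatically, its shape maps onto straightforward serde types. A sketch based on the example above (struct names and defaults are illustrative, assuming the serde and serde_json crates):

```rust
use serde::Deserialize;

// Sketch of ~/.aicommit.json's shape, inferred from the example above.
// These are not aicommit's real internal types.
#[derive(Debug, Deserialize)]
struct Config {
    providers: Vec<Provider>,
    active_provider: String,
    #[serde(default = "default_retries")]
    retry_attempts: u32,
}

#[derive(Debug, Deserialize)]
struct Provider {
    id: String,
    provider: String,
    #[serde(default)]
    api_key: Option<String>,
    #[serde(default)]
    model: Option<String>,
    #[serde(default)]
    max_tokens: Option<u32>,
    #[serde(default)]
    temperature: Option<f32>,
}

fn default_retries() -> u32 {
    3
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let home = std::env::var("HOME")?;
    let raw = std::fs::read_to_string(format!("{home}/.aicommit.json"))?;
    let config: Config = serde_json::from_str(&raw)?;
    println!("{config:#?}");
    Ok(())
}
```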
For OpenRouter, token costs are automatically fetched from their API. For Ollama, you can specify your own costs if you want to track usage.

Example Simple Free OpenRouter provider configuration:
{
"providers": [{
"id": "550e8400-e29b-41d4-a716-446655440000",
"provider": "simple_free_openrouter",
"api_key": "sk-or-v1-...",
"max_tokens": 50,
"temperature": 0.3,
"failed_models": [],
"model_stats": {},
"last_used_model": null,
"last_config_update": "2023-10-15T12:00:00Z"
}],
"active_provider": "550e8400-e29b-41d4-a716-446655440000"
}
The Simple Free mode offers a hassle-free way to use OpenRouter's free models: you provide a single API key, and model selection, failover, and recovery are handled for you.
This approach ensures that your aicommit installation will continue to work effectively even years later, as it can adapt to the changing landscape of available free models on OpenRouter.
This is the recommended option for most users who want to use aicommit without worrying about model selection or costs.

Example OpenRouter provider configuration:
{
"providers": [{
"id": "550e8400-e29b-41d4-a716-446655440000",
"provider": "openrouter",
"api_key": "sk-or-v1-...",
"model": "mistralai/mistral-tiny",
"max_tokens": 50,
"temperature": 0.3,
"input_cost_per_1k_tokens": 0.25,
"output_cost_per_1k_tokens": 0.25
}],
"active_provider": "550e8400-e29b-41d4-a716-446655440000"
}
"deepseek/deepseek-chat"
{
"providers": [{
"id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"provider": "ollama",
"url": "http://localhost:11434",
"model": "llama2",
"max_tokens": 50,
"temperature": 0.3,
"input_cost_per_1k_tokens": 0.0,
"output_cost_per_1k_tokens": 0.0
}],
"active_provider": "67e55044-10b1-426f-9247-bb680e5fe0c8"
}
You can use any service that provides an OpenAI-compatible API endpoint.
For example, you can use DeepGPTBot's OpenAI-compatible API for generating commit messages. Here's how to set it up:
Get your API key from Telegram:
Use the /api command in the bot's chat to obtain your API key.

Configure aicommit (choose one method):
Interactive mode:
aicommit --add-provider
Select "OpenAI Compatible" and enter:
Non-interactive mode:
aicommit --add-provider --add-openai-compatible \
--openai-compatible-api-key "your-api-key" \
--openai-compatible-api-url "https://api.deep-foundation.tech/v1/chat/completions" \
--openai-compatible-model "gpt-4o-mini"
Start using it:
aicommit
LM Studio runs a local server that is OpenAI-compatible. Here's how to configure aicommit to use it:
Start LM Studio: Launch the LM Studio application.
Load a Model: Select and load the model you want to use (e.g., Llama 3, Mistral).
Start the Server: Navigate to the "Local Server" tab (usually represented by <->) and click "Start Server".

Note the URL: LM Studio will display the server URL, typically http://localhost:1234/v1/chat/completions.
Configure aicommit (choose one method):
Interactive mode:
aicommit --add-provider
Select "OpenAI Compatible" and enter:
- API Key: lm-studio (or any non-empty string, as it's often ignored by the local server)
- API URL: http://localhost:1234/v1/chat/completions (or the URL shown in LM Studio)
- Model: lm-studio-model (or any descriptive name; the actual model used is determined by what's loaded in LM Studio)

Important: The Model field here is just a label for aicommit. The actual LLM used (e.g., llama-3.2-1b-instruct) is determined by the model you have loaded and selected within the LM Studio application's server tab.
Non-interactive mode:
aicommit --add-provider --add-openai-compatible \
--openai-compatible-api-key "lm-studio" \
--openai-compatible-api-url "http://localhost:1234/v1/chat/completions" \
--openai-compatible-model "mlx-community/Llama-3.2-1B-Instruct-4bit"
Select the Provider: If this isn't your only provider, make sure it's active using aicommit --set <provider-id>. You can find the ID using aicommit --list.
Start using it:
aicommit
Keep the LM Studio server running while using aicommit.
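Under the hood, every OpenAI-compatible provider boils down to a POST against a chat-completions URL. For illustration, here is a minimal request against the LM Studio endpoint above, written as a Rust sketch (assuming the reqwest blocking client with its json feature and serde_json; the prompt is a placeholder, not aicommit's actual prompt):

```rust
use serde_json::json;

// Sketch: minimal OpenAI-compatible chat completion request,
// pointed at a local LM Studio server.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let response: serde_json::Value = reqwest::blocking::Client::new()
        .post("http://localhost:1234/v1/chat/completions")
        .bearer_auth("lm-studio") // often ignored by local servers
        .json(&json!({
            "model": "lm-studio-model", // label only; LM Studio uses the loaded model
            "messages": [{
                "role": "user",
                "content": "Write a one-line git commit message for this diff: ..."
            }],
            "max_tokens": 50,
            "temperature": 0.3
        }))
        .send()?
        .json()?;
    // Standard OpenAI-compatible response shape.
    let message = response["choices"][0]["message"]["content"]
        .as_str()
        .unwrap_or_default();
    println!("{message}");
    Ok(())
}
```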
When generating a commit message, the tool will display the generated message, the number of input and output tokens, and the API cost.
Example output:
Generated commit message: Add support for multiple LLM providers
Tokens: 8↑ 32↓
API Cost: $0.0100
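The cost line is simple arithmetic over the per-1k-token rates in the provider configuration. Taking the earlier example rates of $0.25 per 1k tokens for both input and output, the numbers above work out exactly:

```rust
// 8 input tokens and 32 output tokens at $0.25 per 1k tokens each way:
fn main() {
    let (input_tokens, output_tokens) = (8.0_f64, 32.0_f64);
    let (in_rate, out_rate) = (0.25, 0.25); // $ per 1k tokens
    let cost = input_tokens / 1000.0 * in_rate + output_tokens / 1000.0 * out_rate;
    println!("API Cost: ${cost:.4}"); // prints "API Cost: $0.0100"
}
```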
You can have multiple providers configured and switch between them by changing the active_provider field to match the desired provider's id.
By default, aicommit will only commit changes that have been staged using git add. To automatically stage all changes before committing, use the --add flag:
# Only commit previously staged changes
aicommit
# Automatically stage and commit all changes
aicommit --add
# Stage all changes, commit, and push (automatically sets up upstream if needed)
aicommit --add --push
# Stage all changes, pull before commit, and push after (automatically sets up upstream if needed)
aicommit --add --pull --push
When using --pull or --push flags, aicommit automatically handles upstream branch configuration:
If the current branch has no upstream set:
# Automatically runs git push --set-upstream origin <branch> when needed
aicommit --push
# Automatically sets up tracking and pulls changes
aicommit --pull
For new branches:
- --push: Creates the remote branch and sets up tracking
- --pull: Skips the pull if the remote branch doesn't exist yet
- No manual git push --set-upstream origin <branch> is needed

This makes working with new branches much easier, as you don't need to manually configure upstream tracking; a sketch of the underlying check follows.
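Conceptually, the upstream handling amounts to asking git whether the current branch resolves an upstream (@{u}) and adding --set-upstream on the first push when it doesn't. A sketch of that check using std::process::Command (illustrative, not aicommit's actual code):

```rust
use std::process::Command;

// Sketch: push, setting upstream automatically when the branch has none.
fn push_with_upstream() -> std::io::Result<()> {
    // Does the current branch have an upstream? `@{u}` fails to resolve if not.
    let has_upstream = Command::new("git")
        .args(["rev-parse", "--abbrev-ref", "--symbolic-full-name", "@{u}"])
        .output()?
        .status
        .success();

    if has_upstream {
        let _ = Command::new("git").args(["push"]).status()?;
    } else {
        // Resolve the current branch name, then create the remote branch.
        let out = Command::new("git")
            .args(["rev-parse", "--abbrev-ref", "HEAD"])
            .output()?;
        let branch = String::from_utf8_lossy(&out.stdout).trim().to_string();
        let _ = Command::new("git")
            .args(["push", "--set-upstream", "origin", &branch])
            .status()?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    push_with_upstream()
}
```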
The watch mode automatically commits changes when files are modified, which is handy for keeping a continuous history of your work without manual commits.
aicommit --watch # Monitor files continuously and commit on changes
You can add a delay after the last edit before committing. This helps avoid creating commits while you're still actively editing files:
aicommit --watch --wait-for-edit 30s # Monitor files continuously, but wait 30s after last edit before committing
Supported units for the delay: s (seconds), m (minutes), h (hours).

You can combine watch mode with other flags:
# Watch with auto-push
aicommit --watch --push
# Watch with version increment
aicommit --watch --add --version-file version --version-iterate
# Watch with dry-run (preview messages without committing)
aicommit --watch --dry-run
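Internally, --wait-for-edit behaves like a per-file debounce: every change resets that file's timer, and a file is only staged once it has been quiet for the full delay. A simplified sketch of that bookkeeping (illustrative; the real watcher in aicommit differs):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Sketch of the per-file debounce behind `--wait-for-edit`.
struct EditDebouncer {
    wait: Duration,
    last_touch: HashMap<String, Instant>,
}

impl EditDebouncer {
    // Called on every detected file change: (re)start that file's timer.
    fn on_change(&mut self, path: &str) {
        self.last_touch.insert(path.to_string(), Instant::now());
    }

    // Polled periodically: drain files that have been quiet long enough.
    fn ready_files(&mut self) -> Vec<String> {
        let now = Instant::now();
        let ready: Vec<String> = self
            .last_touch
            .iter()
            .filter(|(_, t)| now.duration_since(**t) >= self.wait)
            .map(|(p, _)| p.clone())
            .collect();
        for p in &ready {
            self.last_touch.remove(p); // these will be staged and committed
        }
        ready
    }
}

fn main() {
    let mut d = EditDebouncer {
        wait: Duration::from_secs(30),
        last_touch: HashMap::new(),
    };
    d.on_change("src/main.rs");
    // ... later, in the watch loop:
    for f in d.ready_files() {
        println!("commit {f}");
    }
}
```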
Tips:
- Use --wait-for-edit when you want to avoid partial commits
- Longer delays work well for slower editing sessions (e.g., --wait-for-edit 1m)
- Without --wait-for-edit, every detected change is committed immediately
- Press Ctrl+C to stop watching

Below is a flowchart diagram of the aicommit program workflow:
```mermaid
flowchart TD
A[Start aicommit] --> B{Check parameters}
%% Main flags processing
B -->|--help| C[Show help]
B -->|--version| D[Show version]
B -->|--add-provider| E[Add new provider]
B -->|--list| F[List providers]
B -->|--set| G[Set active provider]
B -->|--config| H[Edit configuration]
B -->|--dry-run| I[Message generation mode without commit]
B -->|standard mode| J[Standard commit mode]
B -->|--watch| K[File change monitoring mode]
B -->|--simulate-offline| Offline[Simulate offline mode]
B -->|--jail-status| JailStatus[Display model jail status]
B -->|--unjail| Unjail[Release specific model]
B -->|--unjail-all| UnjailAll[Release all models]
%% Provider addition
E -->|interactive| E1[Interactive setup]
E -->|--add-openrouter| E2[Add OpenRouter]
E -->|--add-ollama| E3[Add Ollama]
E -->|--add-openai-compatible| E4[Add OpenAI compatible API]
E -->|--add-simple-free| E_Free[Add Simple Free OpenRouter]
E1 --> E5[Save configuration]
E2 --> E5
E3 --> E5
E4 --> E5
E_Free --> E5
%% Main commit process
J --> L[Load configuration]
L --> M{Versioning}
M -->|--version-iterate| M1[Update version]
M -->|--version-cargo| M2[Update in Cargo.toml]
M -->|--version-npm| M3[Update in package.json]
M -->|--version-github| M4[Create GitHub tag]
M1 --> N
M2 --> N
M3 --> N
M4 --> N
M -->|no versioning options| N[Get git diff]
%% Git operations
N -->|--add| N1[git add .]
N1 --> N_Truncate["Smart diff processing (truncate large files only)"]
N -->|only staged changes| N_Truncate["Smart diff processing (truncate large files only)"]
N_Truncate --> O["Generate commit message (using refined prompt)"]
%% Simple Free OpenRouter branch
O -->|Simple Free OpenRouter| SF1["Query OpenRouter API for available free models"]
SF1 --> SF_Network{Network available?}
SF_Network -->|Yes| SF2["Filter for free models"]
SF_Network -->|No| SF3["Use fallback predefined free models list"]
SF2 --> SF4["Advanced Model Selection"]
SF3 --> SF4
%% Advanced Model Selection subgraph
SF4 --> SF_Last{Last successful model available?}
SF_Last -->|Yes| SF_LastJailed{Is model jailed or blacklisted?}
SF_Last -->|No| SF_Sort["Sort by model capabilities"]
SF_LastJailed -->|Yes| SF_Sort
SF_LastJailed -->|No| SF_UseLastModel["Use last successful model"]
SF_Sort --> SF_Active{Any active models available?}
SF_Active -->|Yes| SF_SelectBest["Select best active model"]
SF_Active -->|No| SF_Jailed{"Any jailed models (not blacklisted)?"}
SF_Jailed -->|Yes| SF_SelectJailed["Select least recently jailed model"]
SF_Jailed -->|No| SF_Desperate["Use any model as last resort"]
SF_UseLastModel --> SF_Use["Use selected model"]
SF_SelectBest --> SF_Use
SF_SelectJailed --> SF_Use
SF_Desperate --> SF_Use
SF_Use --> SF6["Generate commit using selected model"]
SF6 --> SF_Success{Model worked?}
SF_Success -->|Yes| SF_RecordSuccess["Record success & update model stats"]
SF_Success -->|No| SF_RecordFailure["Record failure & potentially jail model"]
SF_RecordSuccess --> SF7["Display which model was used"]
SF_RecordFailure --> SF_Retry{Retry attempt limit reached?}
SF_Retry -->|No| SF4
SF_Retry -->|Yes| SF_Fail["Display error and exit"]
%% Normal provider branch
O -->|Other providers| P{Success?}
P -->|Yes| Q[Create commit]
P -->|No| P1{Retry limit reached?}
P1 -->|Yes| P2[Generation error]
P1 -->|No| P3[Retry after 5 sec]
P3 --> O
Q --> R{Additional operations}
R -->|--pull| R1[Sync with remote repository]
R -->|--push| R2[Push changes to remote]
R1 --> S[Done]
R2 --> S
R -->|no additional options| S
%% Improved watch mode with timer reset logic
K --> K1[Initialize file monitoring system]
K1 --> K2[Start monitoring for changes]
K2 --> K3{File change detected?}
K3 -->|Yes| K4[Log change to terminal]
K3 -->|No| K2
K4 --> K5{--wait-for-edit specified?}
K5 -->|No| K7[git add changed file]
K5 -->|Yes| K6[Check if file is already in waiting list]
K6 --> K6A{File in waiting list?}
K6A -->|Yes| K6B[Reset timer for this file]
K6A -->|No| K6C[Add file to waiting list with current timestamp]
K6B --> K2
K6C --> K2
%% Parallel process for waiting list with timer reset logic
K1 --> K8[Check waiting list every second]
K8 --> K9{Any files in waiting list?}
K9 -->|No| K8
K9 -->|Yes| K10[For each file in waiting list]
K10 --> K11{Time since last modification >= wait-for-edit time?}
K11 -->|No| K8
K11 -->|Yes| K12[git add stable files]
K12 --> K13["Start commit process (includes smart diff processing & message generation)"]
K13 --> K14[Remove committed files from waiting list]
K14 --> K8
K7 --> K13
%% Dry run
I --> I1[Load configuration]
I1 --> I2[Get git diff]
I2 --> I3_Truncate["Smart diff processing (truncate large files only)"]
I3_Truncate --> I3["Generate commit message (using refined prompt)"]
I3 --> I4[Display result without creating commit]
%% Offline mode simulation
Offline --> Offline1[Skip network API calls]
Offline1 --> Offline2[Use predefined model list]
Offline2 --> J
%% Jail management commands
JailStatus --> JailStatus1[Display all model statuses]
Unjail --> Unjail1[Release specific model from jail/blacklist]
UnjailAll --> UnjailAll1[Reset all models to active status]
```
This project is licensed under the MIT License - see the LICENSE file for details.
To help you manage and optimize the model selection process, aicommit provides several commands for working with model jails and blacklists:
# Show current status of all models in the system
aicommit --jail-status
# Release a specific model from jail or blacklist
aicommit --unjail="meta-llama/llama-4-maverick:free"
# Release all models from jail and blacklist
aicommit --unjail-all
These commands are especially useful when a model was jailed because of a transient problem, or when you want to give previously failing models another chance after some time has passed.
The jail system distinguishes between network errors and model errors, and only penalizes models for their own failures, not for connectivity issues. This ensures that good models don't end up blacklisted due to temporary network problems.
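That separation between connectivity problems and genuine model failures could be classified roughly like this (a sketch with illustrative error variants, not aicommit's actual error types):

```rust
// Sketch: only model-attributable errors should count against a model's
// jail/blacklist record; connectivity problems should not.
enum ApiError {
    Timeout,            // network: don't penalize the model
    ConnectionRefused,  // network: don't penalize the model
    RateLimited,        // provider throttling: temporary jail candidate
    InvalidResponse,    // model: counts toward jailing
    ModelNotAvailable,  // model: counts toward jailing
}

/// Returns true if the failure should count against the model itself.
fn is_model_fault(err: &ApiError) -> bool {
    match err {
        ApiError::Timeout | ApiError::ConnectionRefused => false,
        ApiError::RateLimited
        | ApiError::InvalidResponse
        | ApiError::ModelNotAvailable => true,
    }
}

fn main() {
    assert!(!is_model_fault(&ApiError::Timeout));
    assert!(is_model_fault(&ApiError::InvalidResponse));
    println!("network errors never jail a model");
}
```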