| Crates.io | mermaid-cli |
| lib.rs | mermaid-cli |
| version | 0.2.2 |
| created_at | 2025-11-17 01:31:51.036676+00 |
| updated_at | 2025-12-19 20:31:29.284291+00 |
| description | Open-source AI pair programmer with agentic capabilities. Local-first with Ollama, native tool calling, and beautiful TUI. |
| homepage | https://github.com/noahsabaj/mermaid-cli |
| repository | https://github.com/noahsabaj/mermaid-cli |
| max_upload_size | |
| id | 1936118 |
| size | 814,867 |
An open-source AI pair programmer CLI that provides an interactive chat interface with full agentic coding capabilities. Uses local Ollama models for fast, private, and efficient coding assistance.
Rust toolchain (required for building from source)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
Ollama (required for running local AI models)
curl -fsSL https://ollama.ai/install.sh | sh
Podman (optional, for web search via Searxng)
# Ubuntu/Debian/Linux Mint
sudo apt-get update && sudo apt-get install -y podman podman-compose
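A quick way to confirm the prerequisites are in place (these are the standard version flags for each tool):
# Verify the toolchain; Podman is only needed if you want web search
rustc --version && cargo --version
ollama --version
podman --version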
If you already have Rust:
cargo install mermaid-cli
If starting from scratch (installs everything):
curl -fsSL https://raw.githubusercontent.com/noahsabaj/mermaid-cli/main/scripts/install.sh | bash
This one-liner installs Rust, Ollama, and Mermaid itself.
After installation, just run:
mermaid
Step-by-step installation with full control:
# 1. Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
# 2. Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# 3. Install Mermaid from crates.io
cargo install mermaid-cli
# 4. Download a compatible model
ollama pull llama3.1:8b
# 5. Run Mermaid
mermaid
To update to the latest version:
# Update from crates.io
cargo install mermaid-cli --force
# Or use the one-liner installer (also updates Ollama if needed)
curl -fsSL https://raw.githubusercontent.com/noahsabaj/mermaid-cli/main/scripts/install.sh | bash
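To check which version you currently have installed (plain cargo, nothing Mermaid-specific):
# List cargo-installed binaries and filter for mermaid-cli
cargo install --list | grep mermaid-cli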
# Start Mermaid with default model
mermaid
# Use a specific model
mermaid --model llama3.1:8b
# List available models
mermaid list
Once in the chat interface:
- i - Enter insert mode (type your message)
- Enter - Send message (in insert mode)
- Esc - Return to normal mode
- : - Enter command mode
- Tab - Toggle file sidebar
- Ctrl+C - Quit
- :help - Show all commands
- :model <name> - Switch to a different model
- :clear - Clear chat history
- :sidebar - Toggle file tree
- :quit - Exit Mermaid

Environment variables (.env file)
Set your default model configuration:
MERMAID_DEFAULT_MODEL=ollama/tinyllama
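For example, the variable can be set for a single shell session or persisted in a project-local .env file (the model name here is just an illustration):
# Current shell session only
export MERMAID_DEFAULT_MODEL=ollama/llama3.1:8b
# Or persist it in the project's .env file
echo 'MERMAID_DEFAULT_MODEL=ollama/llama3.1:8b' >> .env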
The global configuration file is located at ~/.config/mermaid/config.toml:
[default_model]
name = "ollama/deepseek-coder:33b" # provider/model format
temperature = 0.7
max_tokens = 4096
[ui]
theme = "dark"
show_sidebar = true
[context]
max_files = 100
max_context_tokens = 75000
Create .mermaid/config.toml in your project root to override global settings.
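As a sketch, a project override might pin a smaller, faster model for a lightweight repo; the keys are assumed to mirror the global config shown above:
# Create a project-local config that overrides the global one
mkdir -p .mermaid
cat > .mermaid/config.toml <<'EOF'
[default_model]
name = "ollama/qwen2.5-coder:7b"
temperature = 0.3
EOF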
Mermaid uses Ollama for local model support with native tool calling (v0.2.0+).
Models with native Ollama tool calling support that can execute file operations, commands, and git actions:
Recommended for Coding:
- llama3.1:8b - Fast, excellent tool calling (4.7GB)
- llama3.1:70b - Best quality, slower (40GB)
- qwen2.5-coder:7b - Optimized for code (4.7GB)
- qwen2.5-coder:14b - Excellent coding (9.0GB)
- qwen2.5-coder:32b - Elite coding (19GB)
- mistral-nemo:12b - Balanced performance (7.1GB)

Other Compatible Models:
- llama3.2:1b - Ultra-fast, limited capabilities (1.3GB)
- llama3.2:3b - Fast, decent quality (2.0GB)
- firefunction-v2:70b - Specialized for function calling (40GB)

These models can chat but cannot execute actions (coming in v0.2.1 with text fallback):
- deepseek-coder:33b - Excellent for code, no tool support
- codellama - Good for code, no tool support
- tinyllama - Ultra-fast, no tool support

# Install a compatible model
ollama pull llama3.1:8b
# List installed models
ollama list
# Use with Mermaid
mermaid --model llama3.1:8b
Access massive models on datacenter hardware:
- qwen3-coder:480b-cloud - 480B params, elite coding
- kimi-k2-thinking:cloud - 1T params, advanced reasoning
- deepseek-v3.1:671b-cloud - 671B params, largest

Note: Cloud models require an API key from ollama.com/cloud
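Assuming your cloud API key is already configured with Ollama (see ollama.com/cloud), selecting a cloud model uses the same --model flag as local models:
# Cloud models are selected like any other model
mermaid --model qwen3-coder:480b-cloud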
You: Create a REST API endpoint for user authentication
Mermaid: I'll create a REST API endpoint for user authentication. Let me set up a basic auth endpoint with JWT tokens.
[Creates files, shows code, explains implementation]
You: Review my changes in src/main.rs
Mermaid: I'll review the changes in src/main.rs. Let me check the diff first.
[Analyzes code, suggests improvements, identifies issues]
You: The tests are failing, can you help?
Mermaid: I'll help you debug the failing tests. Let me first run them to see the errors.
[Runs tests, analyzes errors, fixes issues]
You: Refactor this function to use async/await
Mermaid: I'll refactor this function to use async/await pattern.
[Shows original code, explains changes, implements refactoring]
Mermaid uses Ollama's native tool calling API for structured, reliable actions:
Available Tools:
- read_file - Read any file (text, PDF, images with vision models)
- write_file - Create or update files in the project
- delete_file - Delete a file from the project directory
- create_directory - Create a new directory
- execute_command - Execute shell commands and see output
- git_status - Check git working tree status
- git_diff - View changes in files
- git_commit - Create commits with proper messages
- web_search - Search the web via local Searxng

How It Works:
Mermaid automatically respects .gitignore patterns when working with project files.
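As a rough illustration of what native tool calling means at the wire level, here is a sketch of an Ollama /api/chat request that registers one tool. The JSON parameter schema is an assumption for illustration; only the tool name (read_file) comes from the list above.
# Sketch: registering a tool with Ollama's /api/chat endpoint (default port 11434)
# The parameter schema is illustrative, not Mermaid's actual definition
curl -s http://localhost:11434/api/chat -d '{
  "model": "llama3.1:8b",
  "stream": false,
  "messages": [{"role": "user", "content": "Show me Cargo.toml"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "read_file",
      "description": "Read a file from the project",
      "parameters": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"]
      }
    }
  }]
}'
A tool-capable model answers with a structured message.tool_calls entry instead of free text, which the client executes and feeds back to the model.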
# Clone the repository
git clone https://github.com/noahsabaj/mermaid-cli.git
cd mermaid-cli
# Build debug version
cargo build
# Run tests
cargo test
# Build optimized release
cargo build --release
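The result can then be run straight from the target directory (assuming the binary is named mermaid, matching the installed command):
# Run the locally built release binary
./target/release/mermaid --model llama3.1:8b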
┌─────────────┐      ┌──────────────┐
│   Mermaid   │─────▶│    Ollama    │
│     CLI     │      │ Local Server │
└─────────────┘      └──────────────┘
       │                    │
       └─────────┬──────────┘
                 ▼
            ┌─────────┐
            │  Local  │
            │ Context │
            └─────────┘
Key Components:
- models/ollama_direct.rs - Direct Ollama connection
- agents/ - File system, command execution, git operations
- context/ - Project analysis and context loading
- tui/ - Terminal user interface with Ratatui
- app/ - Configuration and application state

Privacy First:
With local Ollama models, your code and prompts never leave your machine.
| Feature | Mermaid | Aider | Claude Code | GitHub Copilot |
|---|---|---|---|---|
| Open Source | Yes | Yes | No | No |
| Local Models Only | Yes | Yes | No | No |
| Model Support | Ollama | Multiple | Claude only | OpenAI only |
| Privacy | Full | Full | No | No |
| File Operations | Yes | Yes | Yes | Limited |
| Command Execution | Yes | Yes | Yes | No |
| Git Integration | Yes | Yes | Yes | Yes |
| Streaming UI | Yes | Yes | Yes | N/A |
| Rootless Containers | Yes (Podman) | No | No | No |
| Cost | Completely Free | Completely Free | $20/mo | $10/mo |
Is my code private? Yes! With local models (Ollama), your code never leaves your machine.
Does Mermaid work offline? Yes, with Ollama and local models.
Mermaid uses Ollama for model support. To use additional models:
ollama pull model-name
mermaid --model ollama/model-name

Licensed under either of:
- Apache License, Version 2.0
- MIT License
at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
Note: This project is under active development. Expect breaking changes until v1.0.
Made with love by the open source community