| Crates.io | microralph |
| lib.rs | microralph |
| version | 0.2.0 |
| created_at | 2026-01-25 06:37:09.84531+00 |
| updated_at | 2026-01-25 11:04:29.095026+00 |
| description | A tiny CLI for creating and executing PRDs with coding agents |
| homepage | |
| repository | https://github.com/twitchax/microralph |
| max_upload_size | |
| id | 2068258 |
| size | 985,112 |
A small ralph so you can ralph your ralphs.
microralph is a tiny CLI that wraps your favorite AI coding agent (including GitHub Copilot CLI and Claude Code CLI) and turns it into a PRD-driven task loop. You write PRDs (Product Requirements Documents), and microralph repeatedly invokes the agent, one task at a time, until everything is done.
Oh, and yes: microralph was entirely ralph'd into existence by microralph itself. Dogfooding at its finest.
A project that is mostly ralph'd into existence by AI agents is itself called a ralph (by me). I'm hoping one day it becomes a verb so people can say things like, "I ralphed it." microralph is a ralph: it was built almost entirely by running mr run in a loop, with a human steering via PRDs.
The name comes from Ralph Wiggum: loveable, earnest, occasionally brilliant, but needs guidance. AI agents are the same way.
Here's the thing: you don't want to ralph everything. Some code deserves your full attention: the elegant algorithm, the nuanced architecture, the domain-specific logic that only you understand. That's artisanal code.
But most projects need a lot of other code: CLI scaffolding, config parsing, test harnesses, CI pipelines, documentation. Important, but not where you want to spend your creative energy.
microralph lets you ralph the boring parts so you can save your time for the good stuff.
Use it to:
The goal isn't to replace you; it's to give you time back.
AI coding agents are powerful, but they have a fatal flaw: context windows. The more context an agent accumulates, the slower and more expensive it gets, and eventually it forgets what it was doing.
microralph solves this by keeping every agent invocation small: each mr run starts a fresh session, executes exactly one task, and records progress in git-tracked PRDs.
No more 200k-token conversations that go off the rails. Just focused, atomic task execution.
1. mr init / mr bootstrap → Set up .mr/ structure
2. mr new my-feature → Create PRD via guided Q/A
3. mr run → Execute one task
4. Agent implements, runs UAT, updates PRD, commits
5. Repeat step 3 until all tasks are done
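If you want to automate step 5, a small shell loop does the job. This is only a sketch: the "0 tasks remaining" string it greps for is an assumption about mr status output, so adjust the pattern to whatever your mr status actually prints.
# Keep executing one task at a time until mr status reports nothing left
until mr status | grep -q "0 tasks remaining"; do
  mr run || break   # stop on failure so you can inspect the PRD History
done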
Each mr run invocation reads the PRD and task details, implements exactly one task, runs UAT, updates the PRD's status and History, and commits with a standardized message.
- mr new runs an interactive Q/A to generate PRDs
- mr bootstrap scans your repo and generates starter PRDs
- A constitution (.mr/constitution.md) guides PRD workflows
- mr run --stream shows agent output in real time

Download pre-built binaries from GitHub Releases. Available for:
- Linux (mr-linux)
- macOS (mr-macos)
- Windows (mr-windows.exe)
- WebAssembly/WASI (mr.wasm)

# Download the latest release binary
curl -L https://github.com/twitchax/microralph/releases/latest/download/mr-linux -o mr
# or for macOS ARM:
# curl -L https://github.com/twitchax/microralph/releases/latest/download/mr-macos -o mr
# Make it executable
chmod +x mr
# Move to a directory in your PATH
sudo mv mr /usr/local/bin/
# Verify installation
mr --version
1. Download mr-windows.exe from the releases page
2. Rename it to mr.exe
3. Move it to a directory on your PATH (e.g., C:\Program Files\microralph\)
4. Verify with mr --version

Run directly from GitHub Container Registry using any WASI-compatible runtime. This works well for sandboxed or cross-platform use cases.
With Wasmtime:
$ wasmtime run ghcr.io/twitchax/microralph:latest -- --version
With wkg (WebAssembly Package Manager):
$ wkg get ghcr.io/twitchax/microralph:latest
Or download and run manually:
# Download the WASM binary
curl -L https://github.com/twitchax/microralph/releases/latest/download/mr.wasm -o mr.wasm
# Run with wasmtime
wasmtime mr.wasm -- --version
# Optional: Create an alias
echo 'alias mr="wasmtime /path/to/mr.wasm --"' >> ~/.bashrc
cargo install microralph
git clone https://github.com/twitchax/microralph.git
cd microralph
cargo install --path .
# Initialize a new repo with .mr/ structure
mr init
# Bootstrap an existing repo into PRDs
mr bootstrap
# Get AI-generated PRD suggestions
mr suggest
# Create a new PRD via guided Q/A
mr new my-feature
# List all PRDs
mr list
# Run the next task from the active PRD
mr run
# Show status of PRDs and tasks
mr status
| Command | Description |
|---|---|
| mr init | Initialize a new repo with .mr/ structure, templates, prompts, and starter AGENTS.md |
| mr init --language <lang> | Initialize for a specific language (rust, python, node, go, java) |
| mr bootstrap | Ingest an existing repo into PRDs: generate .mr/PRDS.md and starter PRDs |
| mr restore | Restore .mr/prompts/ and .mr/templates/ to built-in defaults (destructive) |
| mr suggest | Generate 5 AI-powered PRD suggestions based on codebase analysis and research |
| mr new <slug> | Create a new PRD via guided Q/A |
| mr new <slug> --context | Create a new PRD with upfront context to guide initial questions |
| mr edit <id> "<request>" | Edit an existing PRD via runner assistance |
| mr constitution edit "<request>" | Edit the constitution via LLM assistance |
| mr list | List all PRDs (regenerates .mr/PRDS.md) |
| mr finalize <id> | Finalize a PRD (mark as done and close out) |
| mr run | Run the next task from the highest-priority active PRD |
| mr run <id> | Run the next task from a specific PRD |
| mr run --stream | Run with real-time streaming output |
| mr reindex | Regenerate index and verify/fix PRD interlinks |
| mr status | Show status of PRDs and tasks |
| Flag | Description |
|---|---|
| -v, --verbose | Enable verbose output |
| -q, --quiet | Suppress non-essential output |
| --runner <runner> | Specify runner: copilot, claude, mock (default: copilot) |
| --model <model> | Specify model (passed through to runner) |
| --stream | Stream runner output in real-time (for mr run) |
Settings can be persisted in .mr/config.toml:
runner = "copilot"
model = "claude-sonnet-4-20250514"
permission_mode = "yolo"
timeout_minutes = 30
CLI flags override config file settings.
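For example, if .mr/config.toml sets runner = "copilot" as above, a one-off flag wins for that invocation only:
# Config says copilot; this single run uses the Claude runner with streaming output
mr run --runner claude --stream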
microralph supports dev containers for consistent, sandboxed development environments. Dev containers isolate your development environment from your host machine, ensuring all tools and dependencies are versioned and reproducible.
microralph dev containers work with VS Code ("Reopen in Container") and the Dev Containers CLI:
# Install the CLI
npm install -g @devcontainers/cli
# Open a shell in the dev container
devcontainer up --workspace-folder .
devcontainer exec --workspace-folder . bash
microralph can automatically generate .devcontainer/devcontainer.json by analyzing your repository:
mr devcontainer generate
This command:
- Creates .devcontainer/devcontainer.json with an appropriate base image, extensions, and tool installations

The generated config is tailored to what the analysis finds in your repository.
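As a rough illustration, the generated file for a Rust repo might look something like this; the image, extensions, and commands below are hypothetical stand-ins, and the actual output depends entirely on your repository.
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/rust:1",
  "customizations": {
    "vscode": { "extensions": ["rust-lang.rust-analyzer"] }
  },
  "postCreateCommand": "cargo install cargo-make"
}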
When running commands that invoke AI models (mr run, mr new, mr devcontainer generate), microralph will show a brief warning if you're not inside a dev container. This is informational only; commands will still execute normally.
To suppress the warning, either:
As your project evolves, regenerate the dev container config to keep it in sync:
# Analyze current state and update .devcontainer/devcontainer.json
mr devcontainer generate
This is especially useful after:
The mr restore command overwrites .mr/prompts/ and .mr/templates/ with built-in defaults. This is useful when you want to:
mr restore
The command:
- Overwrites the .mr/prompts/ and .mr/templates/ directories
- Restores the built-in defaults (the same files created by mr init)

After running mr restore, use Git to see what changed:
# See all changes
git diff
# Review specific file
git diff .mr/prompts/run_task.md
# Decide whether to keep or discard
git add .mr/ # Keep the restored defaults
git restore .mr/ # Discard and keep your customizations
- Your previous customizations remain in Git history; recover them with git log and git checkout
- Only .mr/prompts/ and .mr/templates/ are affected (not .mr/constitution.md or .mr/config.toml)

Scenario 1: You customized prompts but want to start fresh
mr restore
git diff # Review what changed
git add .mr/ # Commit to accept defaults
Scenario 2: You upgraded microralph and want new prompt features
mr restore
git diff # See new features in built-in prompts
git add .mr/prompts/ # Keep new prompts
git restore .mr/templates/ # Keep your template customizations
Scenario 3: You're curious what's different between your customizations and defaults
mr restore
git diff # Review differences
git restore .mr/ # Discard restore and keep customizations
microralph supports project-specific governance rules via a Constitution file (.mr/constitution.md). The constitution defines constraints, best practices, and architectural rules that influence PRD creation and execution.
The constitution provides a single source of truth for project governance:
- When you run mr init or mr bootstrap, microralph creates .mr/constitution.md with commented-out example rules.
- mr new and mr finalize read the constitution and pass it to the LLM, which respects the rules when creating or finalizing PRDs.
- Use mr constitution edit "<request>" to update the constitution via natural language (e.g., "Add a rule that all tests must use nextest").

# Constitution
## Purpose
This file defines project-specific governance rules that guide PRD creation and execution.
## Rules
1. All acceptance tests must be codified in Makefile.toml (no one-off commands).
2. Use `anyhow::Result` for all fallible functions.
3. Prefer functional programming techniques where appropriate.
4. All dev commands must route through `cargo make`.
You can edit .mr/constitution.md directly, or use the LLM-assisted command:
# Add a new rule via natural language
mr constitution edit "Add a rule requiring tracing instead of println for diagnostics"
# The LLM will ask clarifying questions and update the constitution
Constitution violations are informational, not blocking: the agent notes them in the PRD's History section and keeps going.
Let's walk through every way you can use microralph, from "I just heard about this" to "I'm shipping features like a boss."
You've got a brilliant idea and zero code. Here's how microralph helps you ralph it into existence:
# 1. Initialize your project
mkdir my-awesome-project
cd my-awesome-project
git init
mr init # Creates .mr/ structure, templates, and AGENTS.md
# 2. (Optional) Specify a language if not Rust
mr init --language python # or node, go, java
# 3. Create your first PRD
mr suggest # Get 5 AI-generated suggestions (pick one or ignore)
mr new add-cli-parser # Guided Q/A creates .mr/prds/PRD-0001-add-cli-parser.md
# 4. Run the loop until it's done
mr run # Agent implements T-001, runs tests, commits
mr run # Agent implements T-002, runs tests, commits
mr run # ... repeat until all tasks done
# 5. Check progress anytime
mr status # Show what's done, what's left
mr list # Regenerate .mr/PRDS.md index
# 6. Finalize when complete
mr finalize PRD-0001 # Mark PRD as done, append summary
Pro tip: Let mr run loop until the PRD is finished. Each run does one task and exits: no context bloat, no forgotten instructions.
You've got a mature codebase but want to ralph new features onto it:
# 1. Bootstrap existing repo
cd my-existing-project
mr bootstrap # Scans repo, creates .mr/, generates starter PRDs
# 2. Review what was generated
mr list # See auto-generated PRDs
cat .mr/PRDS.md # Human-readable index
# 3. Edit or create new PRDs
mr edit PRD-0001 "Split T-003 into two tasks" # LLM helps you edit
mr new add-auth # Create new PRD for auth feature
# 4. Run tasks
mr run # Picks highest-priority task from active PRDs
mr run PRD-0002 # Force run from specific PRD
# 5. Stream for long tasks (optional)
mr run --stream # Watch the agent work in real-time
Pro tip: Use mr suggest after bootstrapping to get fresh ideas based on your codebase; it's like pair programming with an overenthusiastic intern who actually reads your TODO comments.
You're in the flow. PRDs are planned, tasks are queued. Here's your daily routine:
# Morning: Check what's up
mr status # "3 active PRDs, 12 tasks remaining"
# Pick your battle
mr run PRD-0003 # Work on specific PRD
mr run # Let microralph pick highest priority
# Agent does the work:
# - Reads PRD and task details
# - Implements changes
# - Runs `cargo make uat` (or equivalent)
# - Updates PRD status and History
# - Commits with standardized message
# Rinse and repeat
mr run && mr run && mr run # Chain 'em if you're feeling spicy
# End of day: Survey the damage
mr status
git log --oneline -10 # See what the agent committed
Pro tip: Use mr run --stream when you're actively watching; you'll see the agent's thought process unfold in real-time. Use plain mr run when you're grabbing coffee.
You want to enforce project rules (e.g., "All tests must use nextest," "No XML config blobs"). Enter the Constitution:
# 1. Edit your constitution
vim .mr/constitution.md # Or use the LLM assistant:
mr constitution edit "Add rule: all functions must have doc comments"
# 2. PRD creation respects the constitution
mr new add-logging # Agent asks about tracing vs println because constitution says so
# 3. Task execution logs violations
mr run # If agent violates constitution, it notes it in History
# (but doesn't block; violations are informational)
# 4. Check compliance
grep -r "Constitution" .mr/prds/ # See where violations were noted
Pro tip: The constitution is git-tracked, so it evolves with your project. Update it as you learn what patterns work.
Don't like the defaults? Tweak them:
# Global CLI flags (override everything)
mr run --runner copilot --model claude-sonnet-4.5 --stream
# Or use Claude Code CLI instead
mr run --runner claude --model claude-sonnet-4.5 --stream
# Persistent config (set once, forget)
cat > .mr/config.toml <<EOF
runner = "claude" # Use Claude Code CLI
model = "claude-sonnet-4.5"
permission_mode = "yolo" # Auto-approve all permissions (YOLO mode)
timeout_minutes = 60
EOF
# Now `mr run` uses your config
mr run # Uses claude-sonnet-4.5, auto-approves, times out after 60min
Available runners:
- copilot (default): Uses gh copilot CLI (requires gh and Copilot subscription)
- claude: Uses Claude Code CLI (requires claude CLI and Anthropic API key)

Permission modes:
- manual (default): Agent asks before dangerous operations
- yolo: Auto-approve everything (great for trusted PRDs, terrible for unknown code)

PRDs aren't set in stone. Life happens. Scope creeps. Here's how to adapt:
# Light edits: Just open the file
vim .mr/prds/PRD-0005-add-tests.md # Change task priorities, add notes
# Heavy edits: Let the LLM help
mr edit PRD-0005 "Split T-008 into three tasks: unit tests, integration tests, E2E tests"
# Agent asks clarifying questions, updates the PRD
# Context-heavy edits: Provide upfront context
mr edit PRD-0005 --context "The auth system changed, update all tasks to use JWT"
# Agent uses context to guide questions
Pro tip: Don't delete tasks; mark them status: parked instead. The History section is your audit trail.
Agents aren't perfect. Here's how to recover:
# Task failed? Check the History
cat .mr/prds/PRD-0003-add-auth.md # Scroll to History section
# Look for "❌ Failed" entries with failure details
# Retry the same task
mr run PRD-0003 # Agent reads History, tries a different approach
# Skip a task manually
vim .mr/prds/PRD-0003-add-auth.md # Change task status from `todo` to `parked`
mr run # Moves to next task
# Check what the agent actually did
git log -1 --stat # See last commit
git diff HEAD~1 # Review changes
# Revert if needed
git reset --hard HEAD~1 # Undo last commit (if not pushed)
mr run # Try again
Pro tip: The History section is gold. If a task keeps failing, read past attempts; the agent learns from its mistakes (kinda).
All tasks done? Time to wrap it up:
# 1. Verify all tasks complete
mr status # Should show "0 tasks remaining"
# 2. Run UAT verification
mr run PRD-0003 # If UATs are unverified, agent enters verification loop
# For each UAT: verify, create test, or opt-out
# 3. Finalize the PRD
mr finalize PRD-0003 # Marks as `done`, appends summary to History
# 4. Celebrate
git log --oneline --graph # Admire your git history
mr list # All green checkmarks
Pro tip: UATs (User Acceptance Tests) are defined in PRD frontmatter. They must be verified before finalization. If you skip verification, microralph won't let you finalize.
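As a sketch, UAT entries in the frontmatter could look something like the following; the field names here are assumptions for illustration, so check a PRD generated by mr new for the real shape.
uats:
  - id: UAT-001
    description: "mr --version prints the crate version"
    status: unverified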
Got multiple PRDs? microralph handles it:
# Create multiple PRDs
mr new add-auth
mr new refactor-db
mr new fix-ui-bugs
# microralph picks highest-priority task across ALL active PRDs
mr run # Might pick T-001 from PRD-0005 (priority 1)
mr run # Might pick T-002 from PRD-0003 (priority 2)
# Force work on specific PRD
mr run PRD-0004 # Only run tasks from this PRD
# Check progress across all PRDs
mr status # Shows status of all active PRDs
Pro tip: Use priority numbers to control execution order. Priority 1 = highest. If two tasks have the same priority, microralph picks the older PRD first.
Worried about AI-generated code trashing your machine? Use dev containers:
# 1. Generate dev container config
mr devcontainer generate # Analyzes repo, creates .devcontainer/devcontainer.json
# 2. Open in container (VSCode)
# VSCode will prompt: "Reopen in Container" → click it
# 3. Or use CLI
devcontainer up --workspace-folder .
devcontainer exec --workspace-folder . bash
# 4. Ralph safely inside the sandbox
mr run # All changes isolated to container
Pro tip: Dev containers also ensure consistent tooling across your team. No more "works on my machine" excuses.
Combine flags for maximum control:
# Verbose mode (see all the internals)
mr run --verbose
# Quiet mode (just the facts)
mr run --quiet
# Custom model for expensive tasks
mr run --model claude-opus-4-20250514
# Stream + specific PRD + custom model
mr run PRD-0003 --stream --model gpt-5.2-codex --verbose
# List with custom output
mr list --verbose # More details in PRDS.md
Pro tip: --verbose shows token usage, model calls, and timing info. Great for debugging or cost tracking.
| Scenario | Command | What It Does |
|---|---|---|
| Start new project | mr init | Creates .mr/ structure |
| Bootstrap existing | mr bootstrap | Scans repo, generates PRDs |
| Get AI suggestions | mr suggest | Analyzes codebase, suggests 5 PRDs |
| Create PRD | mr new <slug> | Guided Q/A creates PRD |
| Create PRD with context | mr new <slug> --context "..." | Skips some questions |
| Edit PRD | mr edit <id> "<request>" | LLM helps edit PRD |
| Edit constitution | mr constitution edit "<request>" | LLM updates project rules |
| Run next task | mr run | Picks highest-priority task |
| Run from specific PRD | mr run <id> | Only this PRD's tasks |
| Stream output | mr run --stream | Watch agent work live |
| Check progress | mr status | Summary of all PRDs |
| List PRDs | mr list | Regenerates index |
| Finalize PRD | mr finalize <id> | Mark done, append summary |
| Reindex | mr reindex | Fix PRD cross-links |
| Generate dev container | mr devcontainer generate | Create .devcontainer/devcontainer.json |
mr new add-user-profiles
mr run && mr run && mr run # Until done
mr finalize PRD-0007
mr list # See all PRDs
vim .mr/prds/PRD-0003-*.md # Change status to `parked`
mr run # Only runs active PRDs
cat .mr/prds/PRD-0005-*.md # Read History section
mr run PRD-0005 --verbose # See detailed failure logs
# Manually fix the issue, then:
vim .mr/prds/PRD-0005-*.md # Update task notes with hints
mr run PRD-0005 # Try again with new context
mr run --stream --verbose # Full transparency
mr constitution edit "All functions must have type hints"
mr run # Agent respects constitution
# Use faster model for simple tasks
mr run --model gpt-5-mini
# Use yolo mode (skip permission prompts)
echo 'permission_mode = "yolo"' >> .mr/config.toml
mr run
- Let mr run do its thing. Each invocation is cheap (context-wise). Run it 100 times if needed.
- --stream lets you see failures as they happen.
- mr run commits on success. Use git branches for risky PRDs.
- Mark tasks parked instead of deleting. You might need them later.

Most dev workflows run via cargo make.
# Install cargo-make
cargo install cargo-make
# Run tests
cargo make test
# Run full CI pipeline (fmt, clippy, test)
cargo make ci
# Format code
cargo make fmt
# Run clippy
cargo make clippy
# Build release
cargo make build-release
# UAT (User Acceptance Tests): the one true gate
cargo make uat
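These targets are cargo-make tasks defined in Makefile.toml. A minimal sketch of what the uat gate could look like is below; the task names and dependency list are assumptions, not the project's actual file.
# Hypothetical Makefile.toml entry: `cargo make uat` runs the whole gate
[tasks.uat]
dependencies = ["fmt", "clippy", "test"]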
- mr run: each invocation attempts at most one task
- All dev commands route through cargo make

microralph uses static prompt files in .mr/prompts/ that support placeholder expansion. If you want to customize prompts, here are the available placeholder variables for each prompt type.
- {{variable}}: simple string substitution
- {{#if variable}}...{{/if}}: conditional block (renders if variable is truthy/non-empty)
- {{#each list}}...{{/each}}: list iteration (use {{@index}} for the 0-based index)

Used when executing a task via mr run.
| Placeholder | Type | Description |
|---|---|---|
| {{prd_path}} | string | Absolute path to the PRD file |
| {{prd_id}} | string | PRD identifier (e.g., PRD-0001) |
| {{prd_title}} | string | PRD title |
| {{next_task_id}} | string | Task identifier (e.g., T-001) |
| {{task_title}} | string | Task title |
| {{task_priority}} | string | Task priority number |
| {{task_notes}} | string | Optional task notes (may be empty) |
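For instance, a customized .mr/prompts/run_task.md might reference these placeholders like this (an illustrative fragment, not the shipped default prompt):
You are working on {{prd_title}} ({{prd_id}}) at {{prd_path}}.
Implement task {{next_task_id}}: {{task_title}} (priority {{task_priority}}).
{{#if task_notes}}
Notes: {{task_notes}}
{{/if}}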
Used for the final wrap-up task of a PRD.
| Placeholder | Type | Description |
|---|---|---|
| {{prd_id}} | string | PRD identifier |
| {{prd_summary}} | string | Summary of the PRD |
Used for the first round of questions when creating a new PRD.
| Placeholder | Type | Description |
|---|---|---|
| {{slug}} | string | The slug for the new PRD |
| {{user_description}} | string | Optional initial description from user |
| {{user_context}} | string | Optional upfront context provided by user |
| {{#each existing_prds}} | list | Existing PRDs for context |
| ↳ {{id}} | string | PRD identifier |
| ↳ {{title}} | string | PRD title |
| ↳ {{status}} | string | PRD status (draft/active/done/parked) |
Used for follow-up rounds of questions during PRD creation.
| Placeholder | Type | Description |
|---|---|---|
| {{slug}} | string | The slug for the new PRD |
| {{user_context}} | string | Optional upfront context provided by user |
| {{#each qa_history}} | list | Previous Q/A pairs |
| ↳ {{question}} | string | The question that was asked |
| ↳ {{answer}} | string | The user's answer |
| ↳ {{@index}} | number | 0-based index of the Q/A pair |
Used to synthesize the final PRD from collected Q/A.
| Placeholder | Type | Description |
|---|---|---|
| {{slug}} | string | The slug for the new PRD |
| {{user_context}} | string | Optional upfront context provided by user |
| {{#each qa_history}} | list | All Q/A pairs from the session |
| ↳ {{question}} | string | The question |
| ↳ {{answer}} | string | The answer |
| {{#each existing_prds}} | list | Existing PRDs for context |
| ↳ {{id}} | string | PRD identifier |
| ↳ {{title}} | string | PRD title |
Used when editing an existing PRD via mr edit.
| Placeholder | Type | Description |
|---|---|---|
| {{prd_path}} | string | Path to the PRD file |
| {{user_request}} | string | The user's edit request |
| {{prd_content}} | string | Current PRD file content |
| {{#each qa_history}} | list | Follow-up Q/A pairs (if any) |
| ↳ {{question}} | string | The question |
| ↳ {{answer}} | string | The answer |
Used during mr bootstrap to analyze the repository.
| Placeholder | Type | Description |
|---|---|---|
| {{prd_budget}} | string | Maximum number of PRDs to generate |
| {{#each heuristics}} | list | Analysis heuristics |
| ↳ {{description}} | string | Heuristic description |
Used to generate PRDs from the bootstrap plan.
| Placeholder | Type | Description |
|---|---|---|
| {{plan}} | string | The generated bootstrap plan |
| {{prd_budget}} | string | Maximum number of PRDs to generate |
Used to update the auto-managed section of AGENTS.md.
| Placeholder | Type | Description |
|---|---|---|
| {{agents_content}} | string | Current AGENTS.md content |
| {{#each recent_changes}} | list | Recent file changes |
| ↳ {{file}} | string | File path that was changed |
| ↳ {{description}} | string | Description of the change |
Used when initializing for a non-Rust language.
| Placeholder | Type | Description |
|---|---|---|
| {{language}} | string | Target language (e.g., python, node) |
| {{#each build_commands}} | list | Typical build/test commands |
| ↳ {{command}} | string | A build/test command |
Used during mr init. This prompt has no placeholders.
PRDs are Markdown files with YAML frontmatter:
---
id: PRD-0001
title: My Feature
status: active
owner: Your Name
created: 2026-01-23
updated: 2026-01-23
tasks:
- id: T-001
title: "Implement the thing"
priority: 1
status: todo
---
# Summary
What this PRD is about...
---
# History
(Entries appended by `mr run` will go below this line.)
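After a successful mr run, the same PRD might look roughly like this, assuming completed tasks are marked done; the History wording is up to the agent, so treat this as illustrative only.
tasks:
  - id: T-001
    title: "Implement the thing"
    priority: 1
    status: done
---
# History
- T-001: implemented, `cargo make uat` passed, changes committed.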
Ralph is a pattern where you repeatedly invoke an AI coding agent in a loop until a task is complete. The original concept emerged in the AI coding community as a way to overcome context window limitations by running fresh agent sessions iteratively.
A project that is predominantly built this wayβa ralphβbecomes a testament to the pattern's power: AI does the heavy lifting while you steer with PRDs and review results.
Popular Ralph implementations and resources include:
Traditional Ralph implementations are simple loop scripts: run the agent → check if done → repeat. They work well for small tasks but have limitations: no task breakdown, no persistent record of past attempts, and no prioritization across work items.
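A bare-bones version of such a loop, purely for flavor (illustrative only; it assumes a claude CLI that accepts a -p prompt flag and is not taken from any particular project):
# Naive Ralph loop: same prompt every time, no task breakdown, no history
while true; do
  claude -p "Do the next item in TODO.md. Say ALL-DONE if nothing is left." | tee last.txt
  grep -q "ALL-DONE" last.txt && break
done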
microralph takes the Ralph pattern and adds:
- mr run completes exactly one task (no bloat)
- mr new and mr bootstrap help structure work

Think of microralph as "Ralph with a project management system built in."
A Product Requirements Document (PRD) defines what you want to build. In microralph, PRDs are Markdown files with YAML frontmatter, a prioritized task list, and an append-only History section that records each run.
See Writing Good PRDs for general guidance.
Modern AI agents suffer from the context window problem: as conversations grow, agents slow down, get expensive, and eventually "forget" earlier context.
microralph implements an agentic loop pattern: pick one task, run the agent with fresh context, verify with UAT, commit, and repeat.
This pattern is inspired by work on:
| Feature | microralph | Claude Code | Cursor | Aider | Cline |
|---|---|---|---|---|---|
| PRD-driven task breakdown | ✅ | ❌ | ❌ | ❌ | ❌ |
| One-task-per-run (no bloat) | ✅ | ❌ | ❌ | ❌ | ❌ |
| Git-native state | ✅ | ❌ | ❌ | ❌ | ❌ |
| History/retry logging | ✅ | ❌ | ❌ | ⚠️ (partial) | ❌ |
| Multi-runner abstraction | ✅ | ❌ (Claude only) | ❌ (Cursor only) | ⚠️ (multi-model) | ❌ (VSCode only) |
| Works in terminal | ✅ | ✅ | ❌ (IDE only) | ✅ | ❌ (IDE only) |
| No API keys required | ✅ (uses CLI auth) | ❌ | ❌ | ❌ | ❌ |
| Customizable prompts | ✅ | ❌ | ❌ | ⚠️ | ❌ |
Most AI coding tools are session-based: you start a conversation, describe what you want, and the agent tries to do everything in one go. This works for small tasks but breaks down for larger projects:
microralph is task-based: you define discrete tasks upfront, and each mr run tackles exactly one task with fresh context. Progress is tracked in git, so you can close your terminal, reboot your machine, or come back weeks later; microralph picks up where it left off.
Think of it as the difference between "do everything in one meeting" vs. "complete one ticket per sprint": the latter scales.
MIT