| Field | Value |
|---|---|
| Crates.io | red-green-refactor |
| lib.rs | red-green-refactor |
| version | 0.1.3 |
| created_at | 2025-08-18 17:48:05.153241+00 |
| updated_at | 2025-08-29 22:29:08.454318+00 |
| description | A simple Rust project to demonstrate the red-green-refactor cycle in TDD. |
| homepage | |
| repository | https://github.com/xpepper/red-green-refactor |
| max_upload_size | |
| id | 1800827 |
| size | 96,073 |
Orchestrate a Red–Green–Refactor loop with three LLM roles (tester, implementor, refactorer). Each step applies a JSON patch, runs tests, and commits to git. Works with Gemini and OpenAI-compatible APIs (e.g., DeepSeek, GitHub Models); mock mode for offline runs.
```sh
cargo install red-green-refactor
rgr init-config --out red-green-refactor.yaml
```

If you prefer to build from source, run:

```sh
cargo build --release

# Create a sample config
./target/release/rgr init-config --out red-green-refactor.yaml
```
Edit your YAML (e.g., `red-green-refactor.yaml`) to pick providers and your test command.
Supported provider kinds: `gemini`, `open_ai`, `mock`. For `kind: open_ai`, set `base_url` and `api_key_env`, plus:

- `api_key_header`: custom header name (default: `Authorization`)
- `api_key_prefix`: prefix for the header value (default: `"Bearer "`; set to `""` for raw keys)

Important: put your kata's rules in your kata repo at `docs/kata-rules.md` and tell each role to read it in their `system_prompt`. The tool automatically includes Markdown files in the model context.
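For example, a minimal `docs/kata-rules.md` for the bowling kata might look like the sketch below; the wording and level of detail are entirely up to you, only the file path matters to the tool:

```markdown
# Bowling Kata Rules

- A game has 10 frames; each frame allows up to two rolls (three in the 10th).
- A spare scores 10 plus the next roll; a strike scores 10 plus the next two rolls.
- Tester: add exactly one new failing test per cycle, following the rules above in order.
- Implementor: write the simplest code that makes all tests pass.
- Refactorer: improve names and structure without changing behavior.
```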
Example (Gemini tester/refactorer + DeepSeek implementor):
```yaml
tester:
  provider:
    kind: gemini
    model: gemini-1.5-pro
    api_key_env: GEMINI_API_KEY
  system_prompt: "Read docs/kata-rules.md. Add exactly one failing test per the rules. Output ONLY JSON LlmPatch."
implementor:
  provider:
    kind: open_ai
    model: deepseek-chat
    base_url: https://api.deepseek.com
    api_key_env: DEEPSEEK_API_KEY
  system_prompt: "Read docs/kata-rules.md. Make tests pass with the smallest change. Output ONLY JSON LlmPatch."
refactorer:
  provider:
    kind: gemini
    model: gemini-1.5-pro
    api_key_env: GEMINI_API_KEY
  system_prompt: "Read docs/kata-rules.md. Refactor without changing behavior. Output ONLY JSON LlmPatch."
test_cmd: "cargo test --color never"
max_context_bytes: 200000
```
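For offline experimentation you can point every role at the `mock` provider, which needs no API keys (see the notes on `red-green-refactor-mock.log` further down). A minimal sketch, assuming the same layout as the example above and that `mock` requires no `model` field:

```yaml
# Sketch only: the field layout mirrors the example above; whether `mock`
# accepts or ignores `model` and other provider fields is an assumption.
tester:
  provider:
    kind: mock
  system_prompt: "Add exactly one failing test. Output ONLY JSON LlmPatch."
implementor:
  provider:
    kind: mock
  system_prompt: "Make tests pass with the smallest change. Output ONLY JSON LlmPatch."
refactorer:
  provider:
    kind: mock
  system_prompt: "Refactor without changing behavior. Output ONLY JSON LlmPatch."
test_cmd: "cargo test --color never"
```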
Export keys (adjust to your config):

```sh
export GEMINI_API_KEY=your_gemini_key
export DEEPSEEK_API_KEY=your_deepseek_key
```
GitHub Models with the default `Authorization` header:

```yaml
provider:
  kind: open_ai
  model: gpt-4o-mini
  base_url: https://models.github.ai/inference
  api_key_env: GITHUB_TOKEN # or GITHUB_MODELS_TOKEN
  # uses defaults: Authorization + "Bearer "
```
Or with an `api-key` header without Bearer:

```yaml
provider:
  kind: open_ai
  model: gpt-4o-mini
  base_url: https://models.github.ai/inference
  api_key_env: GITHUB_MODELS_API_KEY
  api_key_header: api-key
  api_key_prefix: ""
```
Note: some GitHub-hosted endpoints may require additional headers (e.g., `X-GitHub-Api-Version`). If you need that, open an issue; support can be added easily.
```sh
# New kata project
cargo new bowling_kata
cd bowling_kata

# Make sure to add the kata rules in the kata project, for example by adding a docs/kata-rules.md file

# From the red-green-refactor repo (adjust path if needed)
../red-green-refactor/target/release/red-green-refactor --project . --config ../red-green-refactor/red-green-refactor.yaml

# Or continuous mode
../red-green-refactor/target/release/red-green-refactor --project . --config ../red-green-refactor/red-green-refactor.yaml run
```
What happens each cycle: the tester adds one failing test (red), the implementor makes the tests pass with the smallest change (green), and the refactorer cleans up without changing behavior; each step applies the returned JSON `LlmPatch`, runs `test_cmd`, and commits to git.

Inspect the resulting history:

```sh
git --no-pager log --oneline
```
```sh
# Simple Python project with pytest
mkdir mars_rover && cd mars_rover
git init
python3 -m venv .venv
. .venv/bin/activate
pip install pytest

# Edit your YAML to use pytest
# test_cmd: "pytest -q"

# Run red-green-refactor
../red-green-refactor/target/release/red-green-refactor --project . --config ../red-green-refactor/red-green-refactor.yaml
```
Provider reference:

- `kind: gemini`: set `api_key_env` (e.g., `GEMINI_API_KEY`). Models like `gemini-1.5-pro`.
- `kind: open_ai`: set `base_url` and `api_key_env`. Optional:
  - `api_key_header` (e.g., `api-key`)
  - `api_key_prefix` (e.g., `""` for raw keys)
- `kind: mock`: for offline dry runs (appends to `red-green-refactor-mock.log`).

Known OpenAI-compatible endpoints:

- `https://api.deepseek.com` (available models: `deepseek-chat`, `deepseek-reasoner`)
- `https://api.perplexity.ai` (some available models: `sonar`, `sonar-pro`, `sonar-reasoning`; full list here)
- `https://models.github.ai/inference` (available models here)

The context sent to the models includes `src/**`, `tests/**`, `Cargo.toml`, README and Markdown files, truncated at `max_context_bytes`.

`LlmPatch` (the JSON each role must output):

- `files`: list of edits `{ path, mode: "rewrite" | "append", content }`
- `commit_message` (optional)
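Concretely, a patch from the tester role might look like the following sketch of the shape described above; the path, content, and commit message are purely illustrative:

```json
{
  "files": [
    {
      "path": "tests/bowling_test.rs",
      "mode": "rewrite",
      "content": "// illustrative test content written by the tester role\n"
    }
  ],
  "commit_message": "red: add failing test for gutter game"
}
```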
`implementor_max_attempts` (default 3). On exhaustion, the tool branches `attempts/implementor-...` and resets to the tester commit.

Troubleshooting:

- Ensure `api_key_env` matches your exported variable.
- Point `test_cmd` to your runner (e.g., `pytest -q`, `npm test`, `mvn -q test`).
- Adjust `max_context_bytes` if your project's context is being truncated.
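When the implementor exhausts its attempts and the tool has branched off (see `implementor_max_attempts` above), plain git can list what was left behind:

```sh
# List attempt branches; the `attempts/` prefix is taken from the description above
git --no-pager branch --list 'attempts/*'
```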
```sh
# One cycle (default)
./target/release/red-green-refactor --project <path> --config red-green-refactor.yaml

# Continuous
./target/release/red-green-refactor --project <path> --config red-green-refactor.yaml run

# Generate sample config
./target/release/red-green-refactor init-config --out red-green-refactor.yaml
```