| Crates.io | trixter |
| lib.rs | trixter |
| version | 0.1.1 |
| created_at | 2025-10-05 21:10:28.762742+00 |
| updated_at | 2025-10-10 19:55:10.867353+00 |
| description | Trixter – Chaos Monkey TCP Proxy |
| homepage | |
| repository | https://github.com/brk0v/trixter |
| max_upload_size | |
| id | 1869457 |
| size | 64,535 |
A high‑performance, runtime‑tunable TCP chaos proxy — a minimal, blazing‑fast alternative to Toxiproxy written in Rust with Tokio. It lets you inject latency, throttle bandwidth, slice writes (to simulate small MTUs/Nagle‑like behavior), corrupt bytes in flight by injecting random bytes, randomly terminate connections, and hard‑timeout sessions – all controllable per connection via a simple REST API.
Built on tokio::io::copy_bidirectional running on a multi-thread Tokio runtime.

Use any TCP server as the upstream. Examples:
nc -lk 127.0.0.1 8181
Run the trixter chaos proxy with Docker:
docker run --network host -it --rm ghcr.io/brk0v/trixter \
--listen 0.0.0.0:8080 \
--upstream 127.0.0.1:8181 \
--api 127.0.0.1:8888 \
--delay-ms 0 \
--throttle-rate-bytes 0 \
--slice-size-bytes 0 \
--corrupt-probability-rate 0.0 \
--terminate-probability-rate 0.0 \
--connection-duration-ms 0 \
--random-seed 42
or build from scratch:
git clone https://github.com/brk0v/trixter
cd trixter/trixter
cargo build --release
or install with cargo:
cargo install trixter
and run:
RUST_LOG=info \
./target/release/trixter \
--listen 0.0.0.0:8080 \
--upstream 127.0.0.1:8181 \
--api 127.0.0.1:8888 \
--delay-ms 0 \
--throttle-rate-bytes 0 \
--slice-size-bytes 0 \
--corrupt-probability-rate 0.0 \
--terminate-probability-rate 0.0 \
--connection-duration-ms 0 \
--random-seed 42
Now connect your app/CLI to localhost:8080. The proxy forwards to 127.0.0.1:8181.
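For a quick smoke test, assuming the nc listener from above is still running:
# Type a few lines; each one should appear on the listener's terminal
nc 127.0.0.1 8080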
Base URL is the --api address, e.g. http://127.0.0.1:8888. GET /connections returns the list of active connections; each entry looks like:
{
"conn_info": {
"id": "pN7e3y...",
"downstream": "127.0.0.1:59024",
"upstream": "127.0.0.1:8181"
},
"delay": { "secs": 2, "nanos": 500000000 },
"throttle_rate": 10240,
"slice_size": 512,
"terminate_probability_rate": 0.05,
"corrupt_probability_rate": 0.02
}
Notes:
- delay serializes as a std::time::Duration object with secs/nanos fields (zeroed when the delay is disabled).
- id is unique per connection; use it to target a single connection.
- corrupt_probability_rate reports the current per-operation flip probability (0.0 when corruption is off).

Health check:
curl -s http://127.0.0.1:8888/health
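# List active connections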
curl -s http://127.0.0.1:8888/connections | jq
ID=$(curl -s http://127.0.0.1:8888/connections | jq -r '.[0].conn_info.id')
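# Shut down a single connection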
curl -i -X POST \
http://127.0.0.1:8888/connections/$ID/shutdown \
-H 'Content-Type: application/json' \
-d '{"reason":"test teardown"}'
curl -i -X POST \
http://127.0.0.1:8888/connections/_all/shutdown \
-H 'Content-Type: application/json' \
-d '{"reason":"test teardown"}'
curl -i -X PATCH \
http://127.0.0.1:8888/connections/$ID/delay \
-H 'Content-Type: application/json' \
-d '{"delay_ms":250}'
# Remove latency
curl -i -X PATCH \
http://127.0.0.1:8888/connections/$ID/delay \
-H 'Content-Type: application/json' \
-d '{"delay_ms":0}'
curl -i -X PATCH \
http://127.0.0.1:8888/connections/$ID/throttle \
-H 'Content-Type: application/json' \
-d '{"rate_bytes":10240}' # 10 KiB/s
curl -i -X PATCH \
http://127.0.0.1:8888/connections/$ID/slice \
-H 'Content-Type: application/json' \
-d '{"size_bytes":512}'
# Set 5% probability per read/write operation
curl -i -X PATCH \
http://127.0.0.1:8888/connections/$ID/termination \
-H 'Content-Type: application/json' \
-d '{"probability_rate":0.05}'
# Corrupt ~1% of operations
curl -i -X PATCH \
http://127.0.0.1:8888/connections/$ID/corruption \
-H 'Content-Type: application/json' \
-d '{"probability_rate":0.01}'
# Remove corruption
curl -i -X PATCH \
http://127.0.0.1:8888/connections/$ID/corruption \
-H 'Content-Type: application/json' \
-d '{"probability_rate":0.0}'
Errors:
- 404 Not Found — bad connection ID
- 400 Bad Request — invalid probability (outside 0.0..=1.0) for termination/corruption
- 500 Internal Server Error — internal channel/handler error

CLI flags:
--listen <ip:port> # e.g. 0.0.0.0:8080
--upstream <ip:port> # e.g. 127.0.0.1:8181
--api <ip:port> # e.g. 127.0.0.1:8888
--delay-ms <ms> # 0 = off (default)
--throttle-rate-bytes <bytes/s> # 0 = unlimited (default)
--slice-size-bytes <bytes> # 0 = off (default)
--terminate-probability-rate <0..1> # 0.0 = off (default)
--corrupt-probability-rate <0..1> # 0.0 = off (default)
--connection-duration-ms <ms> # 0 = unlimited (default)
--random-seed <u64> # seed RNG for deterministic chaos (optional)
All of the above can be changed per connection at runtime via the REST API, except --connection-duration-ms, which is a process-wide default applied to new connections. Omit --random-seed to draw entropy for every run; set it when you want bit-for-bit reproducibility.
Each accepted downstream connection spawns a task that:
1. Connects to the upstream target.
2. Wraps both sides with tunable adapters from tokio-netem:
   - DelayedWriter → optional latency
   - ThrottledWriter → bandwidth cap
   - SlicedWriter → fixed-size write chunks
   - Terminator → probabilistic aborts
   - Corrupter → probabilistic random byte injection
   - Shutdowner (downstream only) → out-of-band shutdown via a oneshot channel
3. Runs tokio::io::copy_bidirectional until EOF/error/timeout.
4. Tracks the live connection in a DashMap so the API can query/mutate it.
# Apply chaos to the first active connection: ~250 ms latency, 64 KiB/s cap,
# 256-byte write slices, 5% termination, 1% corruption
ID=$(curl -s localhost:8888/connections | jq -r '.[0].conn_info.id')
curl -s -X PATCH localhost:8888/connections/$ID/delay \
-H 'Content-Type: application/json' -d '{"delay_ms":250}'
curl -s -X PATCH localhost:8888/connections/$ID/throttle \
-H 'Content-Type: application/json' -d '{"rate_bytes":65536}'
curl -s -X PATCH localhost:8888/connections/$ID/slice \
-H 'Content-Type: application/json' -d '{"size_bytes":256}'
curl -s -X PATCH localhost:8888/connections/$ID/termination \
-H 'Content-Type: application/json' -d '{"probability_rate":0.05}'
curl -s -X PATCH localhost:8888/connections/$ID/corruption \
-H 'Content-Type: application/json' -d '{"probability_rate":0.01}'
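# Enforce a process-wide 5-second cap on every new connection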
./trixter \
--listen 0.0.0.0:8080 \
--upstream 127.0.0.1:8181 \
--api 127.0.0.1:8888 \
--connection-duration-ms 5000
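With this setting, the proxy closes every session after roughly five seconds. A quick way to observe it, assuming your nc exits when the remote side closes:
# nc should exit after ~5 s
time nc 127.0.0.1 8080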
# Shut a connection down early via the API
curl -s -X POST localhost:8888/connections/$ID/shutdown \
-H 'Content-Type: application/json' -d '{"reason":"too slow"}'
Tips:
- List active connections (each downstream/upstream pair) via GET /connections.
- Adjust chaos settings at runtime with PATCH calls.
- Use POST /connections/{id}/shutdown to free ports quickly.
- Omit --random-seed in CI so each run draws fresh entropy. When a failure hits, check the proxy logs for the "random seed: <value>" line and replay the scenario locally with that seed:
trixter \
--listen 0.0.0.0:8080 \
--upstream 127.0.0.1:8181 \
--api 127.0.0.1:8888 \
--random-seed 123456789
Performance notes:
- Tokio multi-thread runtime; avoid heavy CPU work on the I/O threads.
- Every knob accepts 0 to disable it.
- RUST_LOG=info (or debug) for visibility; turn logging off for max throughput.

Security:
- Keep the API bound to a loopback address (127.0.0.1).

Errors:
- Invalid probability → 400 with { "error": "invalid probability; must be between 0.0 and 1.0" }.
- Unknown connection ID → 404.
- Internal channel/handler error → 500.

License: MIT