| Crates.io | distributed-cpu-stress-reporter |
|---|---|
| lib.rs | distributed-cpu-stress-reporter |
| version | 1.3.0 |
| created_at | 2025-11-03 08:40:27.965338+00 |
| updated_at | 2025-11-30 00:56:31.995911+00 |
| description | HTTP server that stress-tests CPU cores and reports performance metrics for measuring CPU performance in virtualized environments |
| homepage | |
| repository | https://github.com/tensorturtle/distributed-cpu-stress-reporter |
| max_upload_size | |
| id | 1914249 |
| size | 70,929 |
Lightweight HTTP server that stress-tests CPU cores and reports performance metrics. Built for measuring CPU performance in virtualized environments.
git clone https://github.com/tensorturtle/distributed-cpu-stress-reporter.git
cd distributed-cpu-stress-reporter
cargo build --release
./target/release/distributed-cpu-stress-reporter
Start CPU stress test and query performance:
# Start the CPU stress test (requires mode specification)
curl -X POST http://localhost:8080/start-cpu \
-H 'Content-Type: application/json' \
-d '{"mode":"fresh-process"}'
# Query performance
curl http://localhost:8080/cpu-perf
# Returns: 254060 (operations/second)
# Stop the CPU stress test
curl -X POST http://localhost:8080/end-cpu
Test CPU overprovisioning in VMs. Run this in multiple VMs on the same hypervisor to see how CPU contention affects actual performance.
Example: Proxmox host with 8 cores running 4 VMs with 4 vCPUs each (2x overprovisioned):
# Start CPU stress test on all VMs
for vm in vm1 vm2 vm3 vm4; do
curl -X POST http://$vm:8080/start-cpu \
-H 'Content-Type: application/json' \
-d '{"mode":"fresh-process"}'
done
# Query each VM
curl http://vm1:8080/cpu-perf # 240000 ops/sec
curl http://vm2:8080/cpu-perf # 238000 ops/sec
curl http://vm3:8080/cpu-perf # 120000 ops/sec ← throttled!
curl http://vm4:8080/cpu-perf # 241000 ops/sec
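The overprovisioning arithmetic in the example above can be sketched in a few lines (the core and vCPU counts are the example's own numbers, not a recommendation):

```python
def overprovision_ratio(host_cores: int, vcpus_per_vm: int, num_vms: int) -> float:
    # Total vCPUs handed out divided by physical cores available.
    return (vcpus_per_vm * num_vms) / host_cores

# Example from above: 8-core Proxmox host, 4 VMs with 4 vCPUs each.
ratio = overprovision_ratio(host_cores=8, vcpus_per_vm=4, num_vms=4)
print(ratio)  # 2.0 -> 2x overprovisioned
```

Any ratio above 1.0 means the VMs cannot all run their vCPUs flat-out at once, which is exactly the contention this tool makes visible.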
Deploy to multiple VMs:
# On each VM, run:
curl -L https://files.tensorturtle.com/yundera-cpu-stress/cpu-stress-linux-amd64 -o cpu-stress && chmod +x cpu-stress && ./cpu-stress
Monitor multiple VMs:
# Start CPU stress on all VMs
for vm in 192.168.1.{101..104}; do
curl -s -X POST http://$vm:8080/start-cpu \
-H 'Content-Type: application/json' \
-d '{"mode":"fresh-process"}'
done
# Monitor performance
while true; do
for vm in 192.168.1.{101..104}; do
echo "$vm: $(curl -s http://$vm:8080/cpu-perf) ops/sec"
done
sleep 2
done
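One way to act on those polled numbers is to flag VMs running well below the best observed rate. A minimal sketch, where the 80% threshold and the sample readings are illustrative assumptions, not part of the tool:

```python
def flag_throttled(results: dict[str, int], threshold: float = 0.8) -> list[str]:
    # Flag any VM whose ops/sec falls below `threshold` of the best observed rate.
    best = max(results.values())
    return [vm for vm, ops in sorted(results.items()) if ops < threshold * best]

# Sample readings like the ones shown earlier:
readings = {"vm1": 240000, "vm2": 238000, "vm3": 120000, "vm4": 241000}
print(flag_throttled(readings))  # ['vm3']
```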
HTTP API:
- /start-cpu - Start the CPU stress test (requires a JSON body with mode and optional utilization)
- /end-cpu - Stop the CPU stress test
- /cpu-perf - Get current operations per second (threaded/fresh-process modes)
- /burst-perf - Get burst-only operations per second (bursty mode)
Why prime numbers? Pure CPU computation with no I/O - ideal for measuring CPU performance.
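To illustrate the kind of measurement the endpoints report, here is a sketch of a prime-checking loop counted as operations per second. This is an assumption about the workload's shape based on the description above; the crate's actual Rust kernel may differ:

```python
import time

def is_prime(n: int) -> bool:
    # Trial division: pure CPU work, no I/O.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def ops_per_second(duration: float = 0.2) -> int:
    # Count how many primality checks complete within `duration` seconds.
    ops, n = 0, 2
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        is_prime(n)
        ops += 1
        n += 1
    return int(ops / duration)
```

Because the loop never touches disk or network, its throughput tracks how much real CPU time the scheduler grants the process.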
When running multiple instances of this program competing for limited CPU cores, you may observe that newly launched instances receive more CPU allocation than older running instances. This is a Linux scheduler behavior called "catch-up bias."
What happens: newly launched processes start with little accumulated runtime in the scheduler's accounting, so the Linux CFS scheduler briefly favors them over long-running peers until their runtime catches up.
Why this matters for testing: When measuring CPU overprovisioning effects, catch-up bias can skew results. If you launch instances at different times, newer instances will appear to perform better, making it difficult to measure true steady-state CPU contention.
Solutions: start all instances at the same time (as in the loops above), and discard the first measurements so every instance is compared at steady state.
The application supports three execution modes, controlled via the HTTP API:
curl -X POST http://localhost:8080/start-cpu \
-H 'Content-Type: application/json' \
-d '{"mode":"fresh-process"}'
In this mode:
When to use:
curl -X POST http://localhost:8080/start-cpu \
-H 'Content-Type: application/json' \
-d '{"mode":"threaded"}'
In this mode:
When to use:
curl -X POST http://localhost:8080/start-cpu \
-H 'Content-Type: application/json' \
-d '{"mode":"bursty","utilization":60}'
In this mode:
Query burst performance:
curl http://localhost:8080/burst-perf
# Returns: 233672 (ops/sec during bursts only)
When to use:
Example: Different utilization levels
# Light bursty load (25% utilization)
curl -X POST http://localhost:8080/start-cpu \
-H 'Content-Type: application/json' \
-d '{"mode":"bursty","utilization":25}'
# Heavy bursty load (75% utilization)
curl -X POST http://localhost:8080/start-cpu \
-H 'Content-Type: application/json' \
-d '{"mode":"bursty","utilization":75}'
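A sketch of how a utilization percentage can map onto a burst duty cycle. The 1000 ms period is an assumption chosen for illustration, not necessarily what the crate uses internally:

```python
def burst_schedule(utilization: int, period_ms: int = 1000) -> tuple[int, int]:
    # Split each period into an active burst and an idle gap.
    # Returns (active_ms, idle_ms) for the given utilization percentage.
    if not 0 < utilization <= 100:
        raise ValueError("utilization must be in 1..100")
    active = period_ms * utilization // 100
    return active, period_ms - active

print(burst_schedule(60))  # (600, 400): work 600 ms, sleep 400 ms each second
print(burst_schedule(25))  # (250, 750)
```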
You can switch modes at any time via the API. If the CPU stress test is running, it will automatically restart with the new mode:
# Switch from fresh-process to threaded
curl -X POST http://localhost:8080/start-cpu \
-H 'Content-Type: application/json' \
-d '{"mode":"threaded"}'
# Switch to bursty mode
curl -X POST http://localhost:8080/start-cpu \
-H 'Content-Type: application/json' \
-d '{"mode":"bursty","utilization":50}'
Download and run (Linux AMD64):
curl -L https://files.tensorturtle.com/yundera-cpu-stress/cpu-stress-linux-amd64 -o cpu-stress && chmod +x cpu-stress && ./cpu-stress
Install from crates.io:
cargo install distributed-cpu-stress-reporter
Build from source:
git clone https://github.com/tensorturtle/distributed-cpu-stress-reporter.git
cd distributed-cpu-stress-reporter
cargo build --release
./target/release/distributed-cpu-stress-reporter
Q: Will this harm my CPU? A: No. It runs a standard CPU stress workload (prime-number computation), similar in kind to tools like Prime95.
Q: How do I stop it?
A: curl -X POST http://localhost:8080/end-cpu, or Ctrl+C to exit the application.
Q: Which mode should I use? A:
Q: Can I change the port?
A: Edit src/main.rs:110 and rebuild.
Q: Works on Windows/macOS/Linux? A: Yes, all platforms Rust supports.
Port already in use:
lsof -i :8080 # Find what's using the port
Can't access from another machine:
sudo ufw allow 8080/tcp # Open firewall
Low performance:
# Check CPU governor (Linux)
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# Set to performance mode if needed
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
Tip: Run on bare metal first to establish baseline.
Licensed under either of:
at your option.