| Crates.io | llmsim |
| lib.rs | llmsim |
| version | 0.2.0 |
| created_at | 2026-01-10 04:20:11.58742+00 |
| updated_at | 2026-01-18 00:30:08.913565+00 |
| description | LLM Traffic Simulator - A lightweight, high-performance LLM API simulator |
| homepage | |
| repository | https://github.com/llmsim/llmsim |
| max_upload_size | |
| id | 2033446 |
| size | 1,679,753 |
LLM Traffic Simulator - A lightweight, high-performance LLM API simulator for load testing, CI/CD, and local development.
LLMSim replicates realistic LLM API behavior without running actual models, which addresses the common challenges of testing LLM-integrated applications in load tests, CI/CD pipelines, and local development.
```bash
cargo install llmsim
```

```bash
# Start with defaults (port 8080, lorem generator)
llmsim serve

# Start with real-time stats dashboard (TUI)
llmsim serve --tui

# All options
llmsim serve \
  --port 8080 \
  --host 0.0.0.0 \
  --generator lorem \
  --target-tokens 150 \
  --tui

# Using a config file
llmsim serve --config config.yaml
```
The `--tui` flag launches an interactive terminal dashboard showing real-time metrics.

Controls: `q` to quit, `r` to force refresh.
```rust
use llmsim::{
    openai::{ChatCompletionRequest, Message},
    generator::LoremGenerator,
    latency::LatencyProfile,
};

// Create a latency profile
let latency = LatencyProfile::gpt5();

// Count tokens
let tokens = llmsim::count_tokens("Hello, world!", "gpt-5").unwrap();

// Generate responses (`request` is a ChatCompletionRequest built elsewhere)
let generator = LoremGenerator::new(100);
let response = generator.generate(&request);
```
OpenAI-compatible API (`/openai/v1/...`):

| Endpoint | Method | Description |
|---|---|---|
| `/openai/v1/chat/completions` | POST | Chat completions (streaming & non-streaming) |
| `/openai/v1/models` | GET | List available models |
| `/openai/v1/models/{model_id}` | GET | Get specific model details |
| `/openai/v1/responses` | POST | Responses API (streaming & non-streaming) |
When using OpenAI SDKs, set the base URL to `http://localhost:8080/openai/v1`.
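For example, a minimal sketch with the official OpenAI Python SDK, pointed at a locally running simulator (the API key is a placeholder; llmsim is assumed not to validate it):

```python
from openai import OpenAI

# Point the SDK at the simulator instead of api.openai.com.
# The key is a placeholder; llmsim is assumed not to validate it.
client = OpenAI(base_url="http://localhost:8080/openai/v1", api_key="sim-key")

resp = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```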
OpenResponses is an open-source specification for building multi-provider, interoperable LLM interfaces. LLMSim exposes it under `/openresponses/v1/...`:
| Endpoint | Method | Description |
|---|---|---|
| `/openresponses/v1/responses` | POST | Create response (streaming & non-streaming) |
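A minimal smoke test against this endpoint, assuming the request body mirrors the OpenAI Responses API shape (`model` and `input` fields; verify against the OpenResponses spec):

```python
import requests

# Field names are assumed to mirror the OpenAI Responses API;
# check the OpenResponses spec for the authoritative schema.
resp = requests.post(
    "http://localhost:8080/openresponses/v1/responses",
    json={"model": "gpt-5", "input": "Say hello"},
)
resp.raise_for_status()
print(resp.json())
```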
Utility endpoints:

| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/llmsim/stats` | GET | Real-time server statistics (JSON) |
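These are handy in CI. A sketch that waits for the simulator to come up and then dumps its stats (the stats JSON schema is not documented here, so it is printed raw):

```python
import time
import requests

BASE = "http://localhost:8080"

# Wait until the simulator reports healthy before running tests.
for _ in range(30):
    try:
        if requests.get(f"{BASE}/health", timeout=1).ok:
            break
    except requests.ConnectionError:
        time.sleep(0.5)

# Dump live statistics; schema not documented here, so print raw JSON.
print(requests.get(f"{BASE}/llmsim/stats").json())
```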
```yaml
server:
  port: 8080
  host: "0.0.0.0"

latency:
  profile: "gpt5"
  # Custom values (optional):
  # ttft_mean_ms: 600
  # ttft_stddev_ms: 150
  # tbt_mean_ms: 40
  # tbt_stddev_ms: 12

response:
  generator: "lorem"
  target_tokens: 100

errors:
  rate_limit_rate: 0.01
  server_error_rate: 0.001
  timeout_rate: 0.0
  timeout_after_ms: 30000

models:
  available:
    - "gpt-5"
    - "gpt-5-mini"
    - "gpt-4o"
    - "claude-opus"
```
Simulated model IDs by family:

| Family | Models |
|---|---|
| GPT-5 | `gpt-5`, `gpt-5-mini`, `gpt-5.1`, `gpt-5.2`, `gpt-5-codex` |
| O-Series | `o3`, `o3-mini`, `o4`, `o4-mini` |
| GPT-4 | `gpt-4`, `gpt-4-turbo`, `gpt-4o`, `gpt-4o-mini`, `gpt-4.1` |
| Claude | `claude-opus`, `claude-sonnet`, `claude-haiku` (with versions) |
| Gemini | `gemini-pro` |
Built-in latency profiles (TTFT = time to first token; TBT = time between tokens):

| Profile | TTFT mean | TBT mean |
|---|---|---|
| gpt-5 | 600ms | 40ms |
| gpt-5-mini | 300ms | 20ms |
| gpt-4 | 800ms | 50ms |
| gpt-4o | 400ms | 25ms |
| o-series | 2000ms | 30ms |
| claude-opus | 1000ms | 60ms |
| claude-sonnet | 500ms | 30ms |
| claude-haiku | 200ms | 15ms |
| instant | 0ms | 0ms |
| fast | 10ms | 1ms |
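A quick way to sanity-check a profile is to time the first streamed chunk. A sketch assuming the OpenAI Python SDK's streaming interface:

```python
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/openai/v1", api_key="sim-key")

start = time.perf_counter()
stream = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "hello"}],
    stream=True,
)
for _chunk in stream:
    # Arrival of the first streamed chunk approximates TTFT.
    print(f"TTFT: {(time.perf_counter() - start) * 1000:.0f} ms")
    break
```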
MIT License - see LICENSE for details.
See CONTRIBUTING.md for contribution guidelines.