| Crates.io | valve |
| lib.rs | valve |
| version | 0.1.0 |
| created_at | 2025-09-26 17:22:29.098831+00 |
| updated_at | 2025-09-26 17:22:29.098831+00 |
| description | token stream router |
| homepage | https://github.com/mosure/valve |
| repository | https://github.com/mosure/valve |
| max_upload_size | |
| id | 1856269 |
| size | 295,752 |
OpenAI-compatible proxy with adaptive token budgeting, per-session dashboards, and optional metrics exporters.
Install from a local checkout:

```bash
cargo install --path .
```
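The crate is also published on crates.io, so installing the released binary directly should work as well, assuming the `valve` bin target ships with the published package:

```bash
cargo install valve
```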
Create config.toml in ~/.config/valve/ (Linux/macOS) or %APPDATA%\valve\ (Windows):
```toml
[server]
host = "127.0.0.1"
port = 7532

[proxy]
default_provider = "openai"

[proxy.providers.openai]
kind = "open_ai"
base_url = "https://api.openai.com/v1"
api_key = "sk-your-key"

[adapters]
observers = ["telemetry"]
```
Run the proxy:

```bash
valve
```
The terminal UI starts automatically. Use Tab, the arrow keys, or h/l to switch tabs; 1-4 to pick a history window; +/- to cycle through windows; Up/Down or j/k to navigate sessions; r to reset history; and q to quit. Append --headless for log-only mode.
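With the proxy up, any OpenAI-style client can be pointed at the configured host and port. The request below is a minimal sketch: it assumes valve mirrors the standard OpenAI chat completions route, that the proxy attaches the upstream API key from config.toml, and that a model such as gpt-4o-mini is available on the provider; none of these details are spelled out in this README.

```bash
# Hypothetical request through the local proxy; route, model, and auth handling are assumptions.
curl http://127.0.0.1:7532/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello through valve"}]
  }'
```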
Optionally, add a Prometheus scrape endpoint or an OTLP exporter in config.toml:
```toml
[telemetry.prometheus]
enabled = true
listen = "127.0.0.1:9898"

[telemetry.otlp]
enabled = true
endpoint = "http://localhost:4317"
protocol = "grpc"
interval_secs = 60
```
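To sanity-check the Prometheus exporter, scrape the listen address yourself; the example assumes metrics are served at the conventional /metrics path, which this README does not confirm.

```bash
# Hypothetical check; adjust the path if valve serves metrics elsewhere.
curl http://127.0.0.1:9898/metrics
```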
Run the bundled agent_dialogue example (with or without the TUI) and the throughput benchmark:

```bash
cargo run --example agent_dialogue
cargo run --example agent_dialogue -- --headless
cargo bench --bench throughput
```
Dual-licensed under MIT or Apache-2.0. See LICENSE-MIT and LICENSE-APACHE for details.