| Crates.io | xllm-proxy |
| lib.rs | xllm-proxy |
| version | 0.1.5 |
| created_at | 2025-08-04 00:17:44.779085+00 |
| updated_at | 2025-09-05 20:06:33.816979+00 |
| description | TCP proxy server with AES-256-GCM encryption for xllm |
| homepage | |
| repository | https://github.com/hydrafusion/xllm |
| max_upload_size | |
| id | 1780189 |
| size | 61,274 |
A TCP reverse proxy server that accepts encrypted requests and forwards them as HTTP requests to APIs.
The xllm-proxy receives TCP requests with HTTP details encrypted using AES-256-GCM, performs the actual HTTP request to the target API, and returns the encrypted response back. This provides:
┌─────────┐ TCP + AES-256-GCM ┌─────────────┐ HTTP/JSON ┌─────────────┐
│ xllm │ ─────────────────────► │ xllm-proxy │ ──────────────► │ LLM API │
│(client) │ │ (TCP server)│ │ (Claude/etc)│
└─────────┘ └─────────────┘ └─────────────┘
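An AES-256-GCM payload is conventionally transmitted as a nonce, then the ciphertext, then a 16-byte authentication tag. The exact framing xllm uses is not documented here, so the helper below is only a hedged sketch of that common convention, not the crate's actual format:

```rust
/// Split an AES-256-GCM payload into its conventional parts.
/// Assumed layout (NOT confirmed by the xllm source):
/// [12-byte nonce][ciphertext][16-byte auth tag]
fn split_gcm_payload(blob: &[u8]) -> Option<(&[u8], &[u8], &[u8])> {
    const NONCE_LEN: usize = 12; // 96-bit nonce, the GCM default
    const TAG_LEN: usize = 16;   // 128-bit authentication tag
    if blob.len() < NONCE_LEN + TAG_LEN {
        return None; // too short to contain a nonce and a tag
    }
    let (nonce, rest) = blob.split_at(NONCE_LEN);
    let (ciphertext, tag) = rest.split_at(rest.len() - TAG_LEN);
    Some((nonce, ciphertext, tag))
}

fn main() {
    // 12 nonce bytes + 5 ciphertext bytes + 16 tag bytes = 33 bytes
    let blob: Vec<u8> = (0u8..33).collect();
    let (nonce, ct, tag) = split_gcm_payload(&blob).unwrap();
    assert_eq!((nonce.len(), ct.len(), tag.len()), (12, 5, 16));
    println!("nonce={} ct={} tag={}", nonce.len(), ct.len(), tag.len());
}
```

A server receiving such a blob would feed the nonce and ciphertext+tag to its AES-256-GCM decryptor; GCM rejects the message if the tag does not verify.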
- Hidden target API hosts (api.anthropic.com, api.openai.com)
- Hidden API keys (x-api-key, Authorization headers)
- Hidden provider response metadata (anthropic-*, openai-*, rate limits, etc.)
- Only the proxy endpoint visible on the wire (your-proxy-server:50051)

# From workspace root
cargo run -p xllm-proxy
# Or from xllm-proxy directory
cd xllm-proxy
cargo run
The server will start listening on 0.0.0.0:50051.
# Configure your client to use the proxy
# In config.toml:
[global]
proxy = true
proxy_url = "http://your-proxy-server:50051"
# Then run xllm normally
cargo run -p xllm -- -m haiku3 "Hello world"
The proxy uses the following encrypted message structure:
// Only this struct is visible in network traffic
struct ProxyRequest {
    proxy_url: String,       // The proxy endpoint URL (visible)
    request_object: Vec<u8>, // AES-256-GCM encrypted HTTP request
}

// Internal structure (encrypted):
struct HttpRequest {
    method: String,                   // HTTP method (POST, GET, etc.)
    url: String,                      // Target API URL (hidden)
    headers: HashMap<String, String>, // API keys, auth headers (hidden)
    body: Vec<u8>,                    // Request payload (hidden)
}
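The source does not specify how ProxyRequest is serialized on the wire, so the following is only an illustrative sketch: a length-prefixed encoding (a big-endian u32 length before each field) with a decoder that reverses it. The real crate may use a different serialization entirely (e.g. bincode or protobuf).

```rust
use std::convert::TryInto;

/// Hypothetical wire encoding for ProxyRequest: each field is
/// prefixed with its length as a big-endian u32.
fn encode(proxy_url: &str, request_object: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(&(proxy_url.len() as u32).to_be_bytes());
    out.extend_from_slice(proxy_url.as_bytes());
    out.extend_from_slice(&(request_object.len() as u32).to_be_bytes());
    out.extend_from_slice(request_object);
    out
}

/// Decode one length-prefixed field, returning it and the remainder.
fn take_field(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    let len = u32::from_be_bytes(buf.get(..4)?.try_into().ok()?) as usize;
    let rest = buf.get(4..)?;
    if rest.len() < len {
        return None;
    }
    Some((&rest[..len], &rest[len..]))
}

fn decode(buf: &[u8]) -> Option<(String, Vec<u8>)> {
    let (url, rest) = take_field(buf)?;
    let (obj, rest) = take_field(rest)?;
    if !rest.is_empty() {
        return None; // trailing garbage
    }
    Some((String::from_utf8(url.to_vec()).ok()?, obj.to_vec()))
}

fn main() {
    let wire = encode("http://your-proxy-server:50051", b"\x01\x02\x03");
    let (url, obj) = decode(&wire).unwrap();
    assert_eq!(url, "http://your-proxy-server:50051");
    assert_eq!(obj, vec![1, 2, 3]);
    println!("roundtrip ok");
}
```

Only `proxy_url` travels in cleartext under this scheme; `request_object` is an opaque encrypted blob, matching the "visible"/"hidden" split in the structs above.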
When you enable proxy mode in your xllm config:
[global]
proxy = true
proxy_url = "http://your-proxy-server:50051"
All HTTP requests to LLM APIs will be routed through the encrypted proxy:
1. xllm encrypts the HTTP request details using AES-256-GCM
2. The encrypted request is sent over TCP to xllm-proxy
3. xllm-proxy decrypts it and makes the actual HTTP request to the API
4. The response is encrypted and sent back; xllm decrypts and displays it

The proxy forwards provider response headers (anthropic-*, openai-*, and others) back to the client.

The proxy binds to all interfaces (0.0.0.0:50051) for Docker deployment.
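The accept loop of such a server can be sketched with the standard library alone. This is a hypothetical skeleton, not the crate's actual code: the handler here just echoes the blob back, where the real xllm-proxy would decrypt it, perform the HTTP request, and write back an encrypted response.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

/// Hypothetical connection handler: reads one request blob and echoes
/// it back (placeholder for decrypt -> HTTP request -> encrypt).
fn handle_client(mut stream: TcpStream) -> std::io::Result<()> {
    let mut buf = [0u8; 4096];
    let n = stream.read(&mut buf)?;
    stream.write_all(&buf[..n])?;
    Ok(())
}

/// Accept loop: one thread per connection. Binding the listener to
/// 0.0.0.0 exposes the port on all interfaces, which is what makes
/// the Docker deployment reachable from outside the container.
fn serve(listener: TcpListener) -> std::io::Result<()> {
    for stream in listener.incoming() {
        let stream = stream?;
        thread::spawn(move || {
            let _ = handle_client(stream);
        });
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Port 0 so the example runs anywhere; the crate itself uses 50051.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    thread::spawn(move || {
        let _ = serve(listener);
    });

    let mut client = TcpStream::connect(addr)?;
    client.write_all(b"encrypted-bytes")?;
    let mut reply = [0u8; 32];
    let n = client.read(&mut reply)?;
    assert_eq!(&reply[..n], b"encrypted-bytes");
    println!("echoed {} bytes", n);
    Ok(())
}
```

A production version would more likely use an async runtime such as tokio rather than a thread per connection, but the bind-and-accept shape is the same.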