| Crates.io | warpdrive-proxy |
| lib.rs | warpdrive-proxy |
| version | 0.1.0 |
| created_at | 2025-10-06 07:31:19.868961+00 |
| updated_at | 2025-10-06 07:31:19.868961+00 |
| description | A high-performance HTTP proxy with PostgreSQL coordination and distributed caching |
| homepage | https://github.com/seuros/warpdrive |
| repository | https://github.com/seuros/warpdrive |
| max_upload_size | |
| id | 1869863 |
| size | 468,113 |
WarpDrive [shah-wahr-muh] is a high-performance reverse proxy built on Pingora (Cloudflare's Rust proxy framework). We built it because Cloudflare already gave us the Engine — no need to reinvent the wheel in space. It routes traffic to multiple upstream services with protocol awareness, load balancing, and path transformation.
Routing & Load Balancing:
Middleware Chain:
Caching & Coordination:
Observability:
- /metrics on configurable port (default 9090)

Operations:
⚠️ Version 0.1.0 Limitations: ACME auto-renewal and process crash recovery are experimental. See LIMITATIONS.md for production deployment guidance.
Simple Mode (single upstream):
cargo build --release
WARPDRIVE_TARGET_PORT=3001 WARPDRIVE_HTTP_PORT=8080 ./target/release/warpdrive
Advanced Mode (multi-upstream routing):
# Create config.toml
cat > config.toml << 'EOF'
[upstreams.rails]
protocol = "http"
host = "127.0.0.1"
port = 3000
[upstreams.cable]
protocol = "ws"
socket = "/tmp/cable.sock"
[[routes]]
path_prefix = "/cable"
upstream = "cable"
[[routes]]
path_prefix = "/"
upstream = "rails"
EOF
# Run with TOML config
WARPDRIVE_CONFIG=config.toml ./target/release/warpdrive
📝 See .env.example for a complete, documented list of all configuration options.
Simple Mode (env vars only):
Basic Proxy:
- WARPDRIVE_TARGET_HOST=127.0.0.1 — upstream host
- WARPDRIVE_TARGET_PORT=3000 — upstream port
- WARPDRIVE_HTTP_PORT=8080 — HTTP listener port (default: 8080, unprivileged)
- WARPDRIVE_HTTPS_PORT=8443 — HTTPS listener port (default: 8443, unprivileged)

Static File Serving:
- WARPDRIVE_STATIC_ENABLED=true — enable direct static file serving (default: true)
- WARPDRIVE_STATIC_ROOT=./public — static files directory (default: ./public)
- WARPDRIVE_STATIC_PATHS=/assets,/packs — URL paths to serve statically
- WARPDRIVE_STATIC_CACHE_CONTROL="..." — cache header for static files

Caching (Optional):
- WARPDRIVE_CACHE_SIZE=67108864 — memory cache size in bytes (default 64MB)
- WARPDRIVE_MAX_CACHE_ITEM_SIZE=1048576 — max item size in bytes (default 1MB)
- WARPDRIVE_REDIS_URL=redis://localhost:6379 — Redis L2 cache (optional)
- WARPDRIVE_DATABASE_URL=postgresql://localhost/warpdrive — PostgreSQL for invalidation (optional)

Observability:
- WARPDRIVE_METRICS_ENABLED=true — enable Prometheus metrics endpoint
- WARPDRIVE_METRICS_PORT=9090 — metrics HTTP server port
- WARPDRIVE_LOG_LEVEL=info — log level (error, warn, info, debug)

Resilience:
- WARPDRIVE_RATE_LIMIT_ENABLED=true — enable per-IP rate limiting
- WARPDRIVE_RATE_LIMIT_RPS=100 — requests per second per IP
- WARPDRIVE_RATE_LIMIT_BURST=200 — burst size (tokens)
- WARPDRIVE_CIRCUIT_BREAKER_ENABLED=true — enable circuit breaker
- WARPDRIVE_CIRCUIT_BREAKER_FAILURE_THRESHOLD=5 — failures before opening
- WARPDRIVE_CIRCUIT_BREAKER_TIMEOUT_SECS=60 — seconds before trying half-open
- WARPDRIVE_MAX_CONCURRENT_REQUESTS=0 — max concurrent requests (0 = unlimited)

Process Supervision:
- WARPDRIVE_UPSTREAM_COMMAND=bundle exec puma — command to spawn upstream
- WARPDRIVE_UPSTREAM_ARGS=-p 3000 — arguments for upstream command

Advanced Mode (TOML config):
- WARPDRIVE_CONFIG=/path/to/config.toml — routing configuration

See config.example.toml for complete, documented TOML examples.
Deployment Modes:
# Mode 1: Memory-only (dev)
WARPDRIVE_TARGET_PORT=3000 ./warpdrive
# Mode 2: + Redis cache (staging)
WARPDRIVE_REDIS_URL=redis://localhost:6379 \
WARPDRIVE_TARGET_PORT=3000 ./warpdrive
# Mode 3: Full distributed (production)
WARPDRIVE_REDIS_URL=redis://localhost:6379 \
WARPDRIVE_DATABASE_URL=postgresql://localhost/warpdrive \
WARPDRIVE_METRICS_ENABLED=true \
WARPDRIVE_RATE_LIMIT_ENABLED=true \
WARPDRIVE_CIRCUIT_BREAKER_ENABLED=true \
WARPDRIVE_TARGET_PORT=3000 ./warpdrive
WarpDrive can serve static files directly from disk, bypassing your application backend entirely. This is significantly faster than X-Sendfile.
Key Differences:
- Direct serving: requests matching configured prefixes (e.g. /assets/*) are served from disk without touching the backend
- X-Sendfile: the backend must first respond with an X-Sendfile header, and only then does WarpDrive serve the file

Basic Setup:
# Serve files from ./public directory
WARPDRIVE_STATIC_ENABLED=true \
WARPDRIVE_STATIC_ROOT=./public \
WARPDRIVE_STATIC_PATHS=/assets,/packs,/images,/favicon.ico \
./warpdrive
Directory Structure:
./public/
├── assets/
│ ├── application.css
│ └── application.js
├── images/
│ ├── logo.png
│ └── hero.jpg
└── favicon.ico
URL Mapping:
- GET /assets/application.js → ./public/assets/application.js
- GET /images/logo.png → ./public/images/logo.png
- GET /favicon.ico → ./public/favicon.ico

Features:
- ETags in "{size}-{mtime_nanos}" format for cache validation
- 304 Not Modified responses via If-None-Match handling
- Serves precompressed .gz files when Accept-Encoding: gzip is present
- Serves index.html for directory requests

Environment Variables:
WARPDRIVE_STATIC_ENABLED=true # Enable/disable (default: true)
WARPDRIVE_STATIC_ROOT=./public # Root directory (default: ./public)
WARPDRIVE_STATIC_PATHS=/assets,/packs # URL prefixes (default: /assets,/packs,/images,/favicon.ico)
WARPDRIVE_STATIC_CACHE_CONTROL="public, max-age=31536000, immutable" # Cache header
WARPDRIVE_STATIC_GZIP=true # Serve .gz files (default: true)
WARPDRIVE_STATIC_INDEX_FILES=index.html # Directory indexes (default: index.html)
WARPDRIVE_STATIC_FALLTHROUGH=true # Pass to backend if not found (default: true)
Example Responses:
# JavaScript with ETag and caching
$ curl -I http://localhost/assets/app.js
HTTP/1.1 200 OK
Content-Type: application/javascript
Content-Length: 1024
Cache-Control: public, max-age=31536000, immutable
ETag: "1024-1759606090065974032"
# 304 Not Modified on subsequent request
$ curl -I -H 'If-None-Match: "1024-1759606090065974032"' http://localhost/assets/app.js
HTTP/1.1 304 Not Modified
ETag: "1024-1759606090065974032"
Cache-Control: public, max-age=31536000, immutable
Performance:
Production Tips:
- Precompress assets at deploy time: gzip -k public/assets/*.{js,css}
- Watch the static_files_served_total metric (future)

WarpDrive supports TLS/HTTPS in three ways:
1. Manual Certificates (self-signed or custom):
# Self-signed certificate (development)
openssl req -x509 -newkey rsa:4096 -nodes \
-keyout server.key -out server.crt -days 365 \
-subj "/CN=localhost"
WARPDRIVE_TLS_CERT_PATH=server.crt \
WARPDRIVE_TLS_KEY_PATH=server.key \
WARPDRIVE_HTTPS_PORT=443 \
./warpdrive
2. ACME/Let's Encrypt (automatic certificates):
# Production with automatic Let's Encrypt certificates
WARPDRIVE_TLS_DOMAINS=example.com,www.example.com \
WARPDRIVE_STORAGE_PATH=/var/lib/warpdrive \
WARPDRIVE_HTTP_PORT=80 \
WARPDRIVE_HTTPS_PORT=443 \
./warpdrive
Environment Variables:
- WARPDRIVE_TLS_DOMAINS=domain1.com,domain2.com — domains for ACME certificates
- WARPDRIVE_STORAGE_PATH=/var/lib/warpdrive — certificate storage directory
- WARPDRIVE_ACME_DIRECTORY=https://acme-v02.api.letsencrypt.org/directory — ACME server URL
- WARPDRIVE_EAB_KID=... — External Account Binding key ID (optional, for some CAs)
- WARPDRIVE_EAB_HMAC_KEY=... — EAB HMAC key (optional)

ACME Workflow:
1. On startup, WarpDrive requests certificates for each domain in TLS_DOMAINS
2. HTTP-01 challenges are answered at /.well-known/acme-challenge/*
3. Issued certificates are stored at {STORAGE_PATH}/certs/{domain}.pem

Certificate Storage Layout:
/var/lib/warpdrive/
├── account.json # ACME account credentials
└── certs/
├── example.com.pem # Certificate chain
├── example.com.key.pem # Private key
├── www.example.com.pem
└── www.example.com.key.pem
3. Docker with TLS (self-signed generation):
# Docker automatically generates self-signed cert at build time
docker run -p 80:80 -p 443:443 \
-e WARPDRIVE_TARGET_PORT=3000 \
warpdrive
Let's Encrypt Staging (testing):
# Use staging server for testing (avoids rate limits)
WARPDRIVE_TLS_DOMAINS=test.example.com \
WARPDRIVE_ACME_DIRECTORY=https://acme-staging-v02.api.letsencrypt.org/directory \
WARPDRIVE_STORAGE_PATH=/tmp/warpdrive \
./warpdrive
Protocol Support:
Run WarpDrive with Puma and Falcon backends:
docker-compose up warpdrive
Test routing:
curl http://localhost:8080/ # → Puma
curl http://localhost:8080/puma/test # → Puma (/test)
curl http://localhost:8080/falcon/test # → Falcon (/test)
See DOCKER.md for details.
Quick Start (Docker Compose with PostgreSQL and Redis):
# Run all tests in isolated environment
docker-compose up --build test
Local Development:
# Start services
docker-compose up -d postgres redis
# Run tests
export WARPDRIVE_DATABASE_URL=postgresql://warpdrive:warpdrive_test@localhost:5432/warpdrive_test
export WARPDRIVE_REDIS_URL=redis://localhost:6379
cargo test --workspace --all-features
Test Categories:
# Unit tests only
cargo test --lib
# Integration tests
cargo test --test '*'
# Specific test suites
cargo test --lib cache
cargo test --test redis_test
cargo test --test postgres_test
See TESTING.md for the comprehensive testing guide.
Complete list of all configuration options:
# Core Proxy
WARPDRIVE_TARGET_HOST=127.0.0.1 # Upstream host (simple mode)
WARPDRIVE_TARGET_PORT=3000 # Upstream port (simple mode)
WARPDRIVE_HTTP_PORT=80 # HTTP listener port
WARPDRIVE_HTTPS_PORT=443 # HTTPS listener port
# TLS & ACME
WARPDRIVE_TLS_DOMAINS=example.com,www.example.com # ACME domains (comma-separated)
WARPDRIVE_TLS_CERT_PATH=/path/to/cert.pem # Manual certificate path
WARPDRIVE_TLS_KEY_PATH=/path/to/key.pem # Manual key path
WARPDRIVE_STORAGE_PATH=/var/lib/warpdrive # Certificate storage directory
WARPDRIVE_ACME_DIRECTORY=https://... # ACME server URL
WARPDRIVE_EAB_KID=... # External Account Binding key ID
WARPDRIVE_EAB_HMAC_KEY=... # External Account Binding HMAC key
# Caching
WARPDRIVE_CACHE_SIZE=67108864 # Memory cache size in bytes (64MB)
WARPDRIVE_MAX_CACHE_ITEM_SIZE=1048576 # Max item size in bytes (1MB)
WARPDRIVE_REDIS_URL=redis://localhost:6379 # Redis L2 cache (optional)
WARPDRIVE_DATABASE_URL=postgresql://... # PostgreSQL for invalidation (optional)
# Observability
WARPDRIVE_METRICS_ENABLED=true # Enable Prometheus metrics
WARPDRIVE_METRICS_PORT=9090 # Metrics server port
WARPDRIVE_LOG_LEVEL=info # Log level (error/warn/info/debug/trace)
WARPDRIVE_LOG_REQUESTS=true # Log all HTTP requests
# Resilience
WARPDRIVE_RATE_LIMIT_ENABLED=true # Enable per-IP rate limiting
WARPDRIVE_RATE_LIMIT_RPS=100 # Requests per second per IP
WARPDRIVE_RATE_LIMIT_BURST=200 # Burst size (tokens)
WARPDRIVE_CIRCUIT_BREAKER_ENABLED=true # Enable circuit breaker
WARPDRIVE_CIRCUIT_BREAKER_FAILURE_THRESHOLD=5 # Failures before opening
WARPDRIVE_CIRCUIT_BREAKER_TIMEOUT_SECS=60 # Seconds before retry
WARPDRIVE_MAX_CONCURRENT_REQUESTS=0 # Max concurrent requests (0 = unlimited)
WARPDRIVE_UPSTREAM_TIMEOUT=30 # Upstream request timeout in seconds
# Headers & Middleware
WARPDRIVE_FORWARD_HEADERS=true # Add X-Forwarded-* headers
WARPDRIVE_X_SENDFILE_ENABLED=true # Enable X-Sendfile support
WARPDRIVE_GZIP_COMPRESSION_ENABLED=true # Enable gzip compression
# Static File Serving
WARPDRIVE_STATIC_ENABLED=true # Enable direct static file serving
WARPDRIVE_STATIC_ROOT=./public # Static files directory
WARPDRIVE_STATIC_PATHS=/assets,/packs,/images,/favicon.ico # URL paths to serve
WARPDRIVE_STATIC_CACHE_CONTROL="public, max-age=31536000, immutable" # Cache header
WARPDRIVE_STATIC_GZIP=true # Serve .gz files when available
WARPDRIVE_STATIC_INDEX_FILES=index.html # Directory index files
WARPDRIVE_STATIC_FALLTHROUGH=true # Continue to backend if file not found
# Advanced (TOML Mode)
WARPDRIVE_CONFIG=/path/to/config.toml # TOML routing config
# Process Supervision
WARPDRIVE_UPSTREAM_COMMAND=bundle exec puma # Command to spawn
WARPDRIVE_UPSTREAM_ARGS=-p 3000 # Command arguments
WarpDrive exposes Prometheus metrics at /metrics on the configured port (default 9090).
HTTP Metrics:
- http_requests_total{method, status} — Total HTTP requests (counter)
- http_request_duration_seconds{method, status} — Request duration histogram (0.001s to 60s buckets)
- http_requests_active — Currently active requests (gauge)

Cache Metrics:
- cache_hits_total{backend} — Cache hits by backend (memory/redis)
- cache_misses_total{backend} — Cache misses by backend
- cache_invalidations_total — PostgreSQL NOTIFY invalidations received
- cache_errors_total{backend, operation} — Cache operation errors

Circuit Breaker Metrics:
- circuit_breaker_state{state} — Current state (closed/open/half_open) (gauge)
- circuit_breaker_failures_total — Total failures detected
- circuit_breaker_state_changes_total{from, to} — State transitions

Rate Limiting Metrics:
- rate_limit_requests_allowed_total — Requests allowed through
- rate_limit_requests_denied_total — Requests rate-limited (429 responses)

Example Prometheus Queries:
# Request rate by status code
rate(http_requests_total[5m])
# 95th percentile response time
histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))
# Cache hit ratio
sum(rate(cache_hits_total[5m])) / (sum(rate(cache_hits_total[5m])) + sum(rate(cache_misses_total[5m])))
# Circuit breaker state (1=open, 0=closed)
circuit_breaker_state{state="open"}
# Rate limit rejection rate
rate(rate_limit_requests_denied_total[5m])
Grafana Dashboard:
{
"dashboard": {
"title": "WarpDrive Proxy",
"panels": [
{
"title": "Request Rate",
"targets": [{"expr": "rate(http_requests_total[5m])"}]
},
{
"title": "Cache Hit Ratio",
"targets": [{"expr": "sum(rate(cache_hits_total[5m])) / (sum(rate(cache_hits_total[5m])) + sum(rate(cache_misses_total[5m])))"}]
},
{
"title": "Circuit Breaker State",
"targets": [{"expr": "circuit_breaker_state"}]
}
]
}
}
- Proxy Handler (src/proxy/handler.rs): Pingora ProxyHttp implementation
- Router (src/router/): Multi-upstream routing with LoadBalancer
- Middleware (src/middleware/): Request/response filtering chain
- Cache (src/cache/): L1 (Memory) + L2 (Redis) coordinator with PG invalidation
- Metrics (src/metrics/): Prometheus instrumentation
- Config (src/config/): Env vars and TOML parsing
- Process (src/process/): Upstream supervisor

Documentation:
- docs/ARCHITECTURE.md — System architecture, request lifecycle, deployment modes
- ROUTING.md — Multi-upstream routing details
- MASTER_PLAN.md — Development roadmap and current status

Licensed under the MIT License.