| Crates.io | whois-service |
| lib.rs | whois-service |
| version | 0.1.1 |
| created_at | 2025-06-08 23:36:22.488966+00 |
| updated_at | 2025-06-08 23:41:02.608702+00 |
| description | High-performance whois lookup service and library with dynamic TLD discovery |
| homepage | |
| repository | https://github.com/alesiancyber/rust-whois |
| max_upload_size | |
| id | 1705327 |
| size | 204,629 |
This Rust whois library is designed to be faster than command-line tools and to scale for automation.
It uses RDAP as its primary protocol: the IANA server mapping is pulled down at build time, with auto-discovery as an RDAP fallback; if RDAP fails, it switches to hardcoded whois servers and finally to whois auto-discovery, giving fast, dynamic server discovery.
It is built for high throughput and adds calculated fields (created_ago, updated_ago, expires_in, all in days) for ease of use.
Up to 10,000 domains are cached to avoid repeated queries to IANA and to stay under rate limits.
It is available as a library or as an OpenAPI service built on Axum with a Swagger UI.
A high-performance, production-ready whois lookup service with modern RDAP support and a three-tier fallback system, built in Rust for cybersecurity applications.
RDAP First → WHOIS Fallback → Command-line Reserve
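The lookup path is cache-first and then walks the three tiers in order. A minimal conceptual sketch of that flow, assuming hypothetical stand-in functions (rdap_lookup, whois_lookup, cli_lookup) rather than the crate's internals:
use std::collections::HashMap;
// Hypothetical stand-ins for the three lookup tiers.
fn rdap_lookup(_domain: &str) -> Option<String> { None }
fn whois_lookup(_domain: &str) -> Option<String> { None }
fn cli_lookup(domain: &str) -> Option<String> { Some(format!("raw whois for {domain}")) }
/// Cache-first lookup that walks the tiers in order: RDAP -> WHOIS -> command-line reserve.
fn lookup(cache: &mut HashMap<String, String>, domain: &str) -> Option<String> {
    if let Some(hit) = cache.get(domain) {
        return Some(hit.clone()); // the ~5ms path: no network round trip
    }
    let result = rdap_lookup(domain)
        .or_else(|| whois_lookup(domain))
        .or_else(|| cli_lookup(domain))?;
    cache.insert(domain.to_string(), result.clone());
    Some(result)
}
fn main() {
    let mut cache = HashMap::new();
    println!("{:?}", lookup(&mut cache, "example.com")); // falls through to the CLI stub
    println!("{:?}", lookup(&mut cache, "example.com")); // served from the cache
}
A real implementation would also bound the cache size and expire entries by TTL, as the CACHE_MAX_ENTRIES and CACHE_TTL_SECONDS settings below suggest.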
| Lookup Type | Average Response | Coverage | Use Case |
|---|---|---|---|
| RDAP | 450-800ms | 1,188 TLDs | Modern registries, faster responses |
| WHOIS | 1,300ms | Universal | Legacy domains, comprehensive fallback |
| Cached | ~5ms | All domains | Repeated lookups, alert enrichment |
Throughput: 870+ enriched domains/minute
Cache Hit Rate: 80-90% for typical alert workloads
Cybersecurity Focus: Handles any TLD attackers might use
Your Use Case: Stream alerts → Enrich with domain intelligence → Enhanced threat detection
# Real-time alert enrichment
curl "http://localhost:3000/whois/suspicious-domain.tk"
# Response includes threat indicators:
{
"domain": "suspicious-domain.tk",
"whois_server": "RDAP: https://rdap.afilias.net/rdap/tk/v1/",
"raw_data": "...",
"parsed_data": {
"created_ago": 2, // โ ๏ธ Fresh domain (2 days old)
"expires_in": 358, // Valid for nearly a year
"name_servers": [...], // Infrastructure analysis
"registrar": "...", // Registrar reputation data
"status": [...], // Domain status codes
"registrant_email": "...", // Contact information
"admin_email": "...", // Administrative contact
"tech_email": "..." // Technical contact
},
"cached": false,
"query_time_ms": 447,
"parsing_analysis": null // Available in debug mode
}
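If you consume this response from Rust, a serde model along these lines can deserialize it; the field types are assumptions inferred from the sample above and the calculated fields described earlier, not an official schema:
use serde::Deserialize;
/// Assumed shape of the /whois response, inferred from the sample payload above.
#[derive(Debug, Deserialize)]
struct WhoisResponse {
    domain: String,
    whois_server: String,
    raw_data: String,
    parsed_data: ParsedData,
    cached: bool,
    query_time_ms: u64,
    parsing_analysis: Option<serde_json::Value>, // populated only in debug mode
}
/// Parsed and calculated fields; everything is optional since coverage varies by registry.
#[derive(Debug, Deserialize)]
struct ParsedData {
    created_ago: Option<i64>,  // days since registration
    updated_ago: Option<i64>,  // days since last update
    expires_in: Option<i64>,   // days until expiry
    name_servers: Option<Vec<String>>,
    registrar: Option<String>,
    status: Option<Vec<String>>,
    registrant_email: Option<String>,
    admin_email: Option<String>,
    tech_email: Option<String>,
}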
We solve the "hardcoding problem" with a two-tier approach:
No manual tuning required - the service automatically adapts based on:
git clone https://github.com/alesiancyber/rust-whois.git
cd rust-whois
cargo run --release
# Modern RDAP lookup (fast)
curl "http://localhost:3000/whois/google.com"
# WHOIS fallback example
curl "http://localhost:3000/whois/example.xyz"
# Debug mode with lookup analysis
curl "http://localhost:3000/whois/debug/google.com"
# Health check & metrics
curl "http://localhost:3000/health"
curl "http://localhost:3000/metrics"
# API Documentation (when OpenAPI feature is enabled)
curl "http://localhost:3000/docs"
GET /whois?domain=example.com - Standard whois lookup
POST /whois - JSON body with domain parameter
GET /whois/:domain - Path-based lookup
GET /whois/debug?domain=example.com - Debug mode with parsing analysis
GET /whois/debug/:domain - Path-based debug lookup
GET /health - Service health check
GET /metrics - Prometheus metrics
GET /docs - OpenAPI documentation (when enabled)
The created_ago field spots newly registered threats.
# Development build
cargo build
# Release build (optimized)
cargo build --release
# Library only (no server)
cargo build --no-default-features
# Run full test suite
./scripts/stress_runner.sh
Licensed under either of:
at your option.
Want to use this as a Rust library?
See LIBRARY_USAGE.md for comprehensive examples and integration patterns
Quick preview:
[dependencies]
whois-service = "0.1.0"
The library automatically uses the same three-tier RDAP → WHOIS system with intelligent caching.
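As a rough sketch of what library integration might look like (WhoisClient::new and lookup below are hypothetical placeholder names, not confirmed exports; consult LIBRARY_USAGE.md for the actual API):
// Hypothetical sketch only; the type and method names are placeholders, not
// confirmed exports of whois-service. See LIBRARY_USAGE.md for the real API.
use whois_service::WhoisClient; // assumed export
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = WhoisClient::new().await?;            // would set up RDAP mappings and the cache
    let result = client.lookup("example.com").await?;  // RDAP -> WHOIS -> command-line fallback
    println!("{result:?}");                            // assumes the result type implements Debug
    Ok(())
}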
# Server configuration
export PORT=3000 # HTTP port (default: 3000)
export WHOIS_TIMEOUT_SECONDS=30 # WHOIS query timeout
export MAX_RESPONSE_SIZE=10485760 # Maximum response size (10MB)
export MAX_REFERRALS=10 # Maximum WHOIS referrals to follow
# RDAP + Cache optimization
export CACHE_TTL_SECONDS=3600 # Cache TTL (1 hour)
export CACHE_MAX_ENTRIES=60000 # Maximum cache entries
export DISCOVERY_TIMEOUT_SECONDS=20 # RDAP discovery timeout
# Performance tuning
export CONCURRENT_WHOIS_QUERIES=8 # Concurrent WHOIS queries
export BUFFER_POOL_SIZE=100 # Network buffer pool size
export BUFFER_SIZE=16384 # Network buffer size (16KB)
# Optional features
export RUST_LOG=whois_service=info # Logging level
export ENABLE_OPENAPI=true # Enable OpenAPI documentation
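Inside a Rust service, settings like these are typically read once at startup with sensible defaults. A minimal illustration using std::env (not the crate's actual configuration loader):
use std::env;
/// Illustrative config struct mirroring the environment variables above.
#[derive(Debug)]
struct Config {
    port: u16,
    cache_ttl_seconds: u64,
    cache_max_entries: usize,
    concurrent_whois_queries: usize,
}
/// Read an environment variable, falling back to a default if it is unset or unparsable.
fn env_or<T: std::str::FromStr>(key: &str, default: T) -> T {
    env::var(key).ok().and_then(|v| v.parse().ok()).unwrap_or(default)
}
fn main() {
    let config = Config {
        port: env_or("PORT", 3000),
        cache_ttl_seconds: env_or("CACHE_TTL_SECONDS", 3600),
        cache_max_entries: env_or("CACHE_MAX_ENTRIES", 60_000),
        concurrent_whois_queries: env_or("CONCURRENT_WHOIS_QUERIES", 8),
    };
    println!("{config:?}");
}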
# Build optimized container
docker build -t whois-service .
# Run with production settings
docker run -p 3000:3000 \
-e CACHE_MAX_ENTRIES=60000 \
-e CACHE_TTL_SECONDS=3600 \
-e CONCURRENT_WHOIS_QUERIES=8 \
-e BUFFER_POOL_SIZE=100 \
-e BUFFER_SIZE=16384 \
whois-service
resources:
requests:
memory: "256Mi" # Base memory requirement
cpu: "250m" # Base CPU requirement
limits:
memory: "512Mi" # Handles 48K cached domains
cpu: "1000m" # Single pod: ~100 enrichments/min
env:
- name: CACHE_MAX_ENTRIES
value: "60000"
- name: CACHE_TTL_SECONDS
value: "3600"
- name: CONCURRENT_WHOIS_QUERIES
value: "8"
- name: BUFFER_POOL_SIZE
value: "100"
- name: BUFFER_SIZE
value: "16384"
The service automatically adapts to system resources:
Test the three-tier system:
# Test RDAP (should be fast)
time curl "http://localhost:3000/whois/google.com"
# Test WHOIS fallback
time curl "http://localhost:3000/whois/example.xyz"
# Test caching (should be ~5ms the second time)
curl "http://localhost:3000/whois/github.com"
curl "http://localhost:3000/whois/github.com" # Cached
Expected Results:
| Use Case | Throughput | Latency | Memory |
|---|---|---|---|
| Alert Enrichment | 800+ domains/min | 450ms avg | 300MB |
| Cached Workload | 2000+ domains/min | 5ms avg | 300MB |
| Kubernetes Cluster | 2400+ domains/min | 450ms avg | 3 × 300MB |
Perfect for: Real-time threat intelligence, domain reputation checks, alert enrichment pipelines