| Crates.io | hedged-rpc-client |
| lib.rs | hedged-rpc-client |
| version | 0.1.0 |
| created_at | 2025-11-15 21:51:28.334267+00 |
| updated_at | 2025-11-15 21:51:28.334267+00 |
| description | High-performance Solana RPC client with request hedging for tail latency elimination |
| homepage | |
| repository | https://github.com/BretasArthur1/hedged-rpc-client |
| max_upload_size | |
| id | 1934843 |
| size | 225,009 |
A high-performance Solana RPC client that implements request hedging to reduce tail latency in distributed systems.
In distributed systems, even reliable RPC providers occasionally have slow responses (tail latency). Waiting for a single slow provider can degrade your application's performance.
Instead of waiting for one provider, race multiple providers and use the fastest response:
Time ────────────────────────────────────────────▶
Provider A: ████████████████████░░░░ (400ms - slow)
Provider B: ████████░░░░ (200ms - WINNER! ✓)
Provider C: ████████████████░░░░ (300ms)
Result: 200ms response time (instead of 400ms)
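The same race in miniature, using the futures crate's select_ok rather than this crate's API (a sketch; the simulated providers and latencies are made up):

use futures::future::select_ok;
use std::time::Duration;
use tokio::time::sleep;

// Simulated provider: responds with its own name after `ms` milliseconds.
async fn provider(name: &'static str, ms: u64) -> Result<&'static str, ()> {
    sleep(Duration::from_millis(ms)).await;
    Ok(name)
}

#[tokio::main]
async fn main() {
    let racers = vec![
        Box::pin(provider("A", 400)),
        Box::pin(provider("B", 200)),
        Box::pin(provider("C", 300)),
    ];
    // select_ok resolves with the first Ok and drops the losing futures.
    let (winner, _losers) = select_ok(racers).await.unwrap();
    println!("winner: {winner}"); // B, after ~200ms
}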
┌──────────────────────────────────────────────────────────────────┐
│ Your Application │
│ (hedged-rpc-client) │
└─────────────┬────────────────────────────────────────────────────┘
│
│ Single Logical Request
│
▼
┌─────────────────────┐
│ Hedging Strategy │
│ • Initial: 1-3 │ ◄── Configurable
│ • Delay: 20-100ms │
│ • Timeout: 1-3s │
└─────────┬───────────┘
│
│ Fan-out to multiple providers
│
┌────────┼────────────────┐
│ │ │
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌──────────┐
│ Helius │ │ Triton │ │QuickNode │
│ RPC │ │ RPC │ │ RPC │
└────┬────┘ └────┬────┘ └─────┬────┘
│ │ │
│ 400ms │ 200ms ✓ │ 300ms
│ │ │
└───────────┴─────────────┘
│
│ First successful response wins
▼
┌──────────────────┐
│ Return Result │
│ Provider: Triton│
│ Latency: 200ms │
└──────────────────┘
use hedged_rpc_client::{HedgedRpcClient, HedgeConfig, ProviderConfig, ProviderId};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let providers = vec![
        ProviderConfig {
            id: ProviderId("helius"),
            url: "https://mainnet.helius-rpc.com".to_string(),
        },
        ProviderConfig {
            id: ProviderId("triton"),
            url: "https://api.mainnet-beta.solana.com".to_string(),
        },
    ];

    let config = HedgeConfig::low_latency(providers.len());
    let client = HedgedRpcClient::new(providers, config);

    // Races both providers, returns the fastest response
    let (provider, blockhash) = client.get_latest_blockhash().await?;
    println!("Winner: {} with blockhash: {}", provider.0, blockhash);

    Ok(())
}
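To run the snippet, add the crate (version from this page's header) and tokio to Cargo.toml:

[dependencies]
hedged-rpc-client = "0.1.0"
tokio = { version = "1", features = ["full"] }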
# Set your RPC endpoints
export HELIUS_RPC_URL="https://mainnet.helius-rpc.com/?api-key=YOUR_KEY"
export TRITON_RPC_URL="https://api.mainnet-beta.solana.com"
export QUICKNODE_RPC_URL="https://your-endpoint.quiknode.pro/YOUR_KEY"
# Launch the TUI
cargo run --release
Controls:
↑/↓ - Select provider
Space - Quick test selected provider
Tab - Toggle between Hedged/Single mode
r - Run single call
b - Toggle batch mode (auto-run multiple calls)
+/- - Adjust provider count (Hedged mode)
,/. - Adjust batch count
s - Reset statistics
q - Quit

For full control, build a HedgeConfig by hand:

use std::time::Duration;

let config = HedgeConfig {
    initial_providers: 2,                    // query two providers immediately
    hedge_after: Duration::from_millis(50),  // add another if no reply within 50ms
    max_providers: 3,                        // never race more than three at once
    min_slot: None,                          // optional minimum slot requirement
    overall_timeout: Duration::from_secs(2), // hard cap on the whole call
};

With these settings, two providers are queried at once; if neither answers within 50ms, a third joins the race, and the entire call is abandoned after 2 seconds.
Good for:
- Latency-critical reads (blockhashes, accounts, slots) where tail latency hurts
- Setups that already provision multiple RPC providers

Not ideal for:
- Rate-limited or pay-per-request endpoints, since hedging multiplies request volume
- Workloads where duplicated requests are unacceptable
See examples/ directory:
basic_get_account.rs - Simple hedged request example
rpc_race.rs - Many-call stress test with statistics
dual_race.rs - Compare two concurrent runners

Traditional approach (sequential):
Request → Provider A (slow: 800ms) → Timeout → Retry Provider B → Success
Total time: 800ms+ (or timeout)
Hedged approach (parallel):
Request → Provider A (slow: 800ms) ──┐
→ Provider B (fast: 150ms) ──┼→ Success!
→ Provider C (medium: 300ms)─┘
Total time: 150ms (fastest wins)
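The win compounds probabilistically: if each provider independently lands in its slow tail some fraction of the time, all of the raced providers must be slow at once for the hedged call to be slow. A back-of-envelope check (the 5% figure is illustrative, not measured):

/// Probability that every racer is simultaneously slow, assuming
/// independent providers that each miss the latency target with
/// probability `p_slow`.
fn all_slow(p_slow: f64, providers: u32) -> f64 {
    p_slow.powi(providers as i32)
}

fn main() {
    // One provider: slow 5% of the time.
    // Three raced providers: slow 0.05^3 = 0.0125% of the time.
    println!("{:.6}", all_slow(0.05, 3)); // 0.000125
}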
For Rust/Solana developers interested in the internals:
- FuturesUnordered + tokio::select!: races provider futures efficiently without spawning a task per provider
- The first Ok(T) response completes the call; remaining in-flight futures are dropped automatically
- Arc<Mutex<HashMap<ProviderId, ProviderStats>>> tracks wins, latency, and errors
- tokio::time::timeout() enforces the overall_timeout as a hard SLA
- If every provider fails, the call returns HedgedError::AllFailed

// Phase 1: Query initial_providers immediately
for provider in initial_providers {
    futures.push(call_provider(provider));
}

// Phase 2: If no response after hedge_after, fan out
tokio::select! {
    Some((id, Ok(val))) = futures.next() => return Ok((id, val)),
    _ = sleep(hedge_after) => {
        // Add remaining providers to the race
        for provider in remaining_providers {
            futures.push(call_provider(provider));
        }
    }
}
This design ensures minimal overhead while maximizing responsiveness.
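A self-contained sketch of that two-phase loop, runnable as-is with tokio and futures as dependencies (provider names and latencies are simulated; the real crate layers stats tracking, min_slot handling, and the overall timeout on top):

use futures::stream::{FuturesUnordered, StreamExt};
use std::time::Duration;
use tokio::time::sleep;

// Stand-in for a real RPC call: yields (provider, simulated latency).
async fn call_provider(name: &'static str, ms: u64) -> (&'static str, u64) {
    sleep(Duration::from_millis(ms)).await;
    (name, ms)
}

#[tokio::main]
async fn main() {
    let mut in_flight = FuturesUnordered::new();

    // Phase 1: fire the initial provider immediately.
    in_flight.push(call_provider("helius", 400));

    // Phase 2: if nothing finishes within hedge_after, widen the race.
    let hedge_after = sleep(Duration::from_millis(50));
    tokio::pin!(hedge_after);
    let mut hedged = false;

    let (winner, ms) = loop {
        tokio::select! {
            // First completed future wins; the losers drop with `in_flight`.
            Some(result) = in_flight.next() => break result,
            // The `!hedged` guard keeps the timer branch from firing twice.
            _ = &mut hedge_after, if !hedged => {
                hedged = true;
                in_flight.push(call_provider("triton", 200));
                in_flight.push(call_provider("quicknode", 300));
            }
        }
    };
    println!("winner: {winner} ({ms}ms)"); // triton (200ms)
}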
MIT