| Crates.io | octoroute |
| lib.rs | octoroute |
| version | 1.0.0 |
| created_at | 2025-11-25 01:45:21.658075+00 |
| updated_at | 2025-11-27 22:20:57.779711+00 |
| description | Intelligent multi-model router for self-hosted LLMs |
| homepage | |
| repository | https://github.com/slb350/octoroute |
| max_upload_size | |
| id | 1949032 |
| size | 1,379,473 |
Intelligent multi-model router for self-hosted LLMs
Octoroute is a smart HTTP API that sits between your applications and your homelab's fleet of local LLMs. It automatically routes requests to the optimal model (8B, 30B, or 120B) based on task complexity, reducing compute costs while maintaining quality.
Think of it as a load balancer, but instead of distributing requests evenly, it sends simple queries to small models and complex reasoning tasks to larger ones.
Running multiple LLM sizes on your homelab is powerful, but routing requests manually is tedious:
Octoroute solves this with:
✅ Intelligent routing - Rule-based + LLM-powered decision making
✅ Zero-cost rules - Fast pattern matching for obvious cases (<1ms)
✅ Homelab-first - Built for local Ollama, LM Studio, llama.cpp deployments
✅ Rust native - Type-safe, async, low overhead
✅ Observable - Track every routing decision with structured logs
Option 1: Pre-built binaries (fastest)
Download from GitHub Releases:
# Linux x86_64
curl -LO https://github.com/slb350/octoroute/releases/latest/download/octoroute-linux-x86_64.tar.gz
tar -xzf octoroute-linux-x86_64.tar.gz
# Linux ARM64 (Raspberry Pi, etc.)
curl -LO https://github.com/slb350/octoroute/releases/latest/download/octoroute-linux-aarch64.tar.gz
tar -xzf octoroute-linux-aarch64.tar.gz
# macOS Apple Silicon
curl -LO https://github.com/slb350/octoroute/releases/latest/download/octoroute-macos-aarch64.tar.gz
tar -xzf octoroute-macos-aarch64.tar.gz
# macOS Intel
curl -LO https://github.com/slb350/octoroute/releases/latest/download/octoroute-macos-x86_64.tar.gz
tar -xzf octoroute-macos-x86_64.tar.gz
# Run
./octoroute
Option 2: Cargo install (requires Rust)
cargo install octoroute
Option 3: Build from source
git clone https://github.com/slb350/octoroute.git
cd octoroute
cargo build --release
./target/release/octoroute
Generate a starter config file:
# Print template to stdout
octoroute config
# Write template to file
octoroute config -o config.toml
Or create a config.toml manually:
[server]
host = "0.0.0.0"
port = 3000
[[models.fast]]
name = "qwen3-8b-instruct"
base_url = "http://localhost:11434/v1" # Ollama
max_tokens = 4096
temperature = 0.7
weight = 1.0
priority = 1
[[models.balanced]]
name = "qwen3-30b-instruct"
base_url = "http://localhost:1234/v1" # LM Studio
max_tokens = 8192
temperature = 0.7
weight = 1.0
priority = 1
[[models.deep]]
name = "gpt-oss-120b"
base_url = "http://localhost:8080/v1" # llama.cpp
max_tokens = 16384
temperature = 0.7
weight = 1.0
priority = 1
[routing]
strategy = "hybrid" # rule, llm, hybrid
router_tier = "balanced" # fast, balanced, deep (default: balanced)
Send a chat request:
curl -X POST http://localhost:3000/chat \
-H "Content-Type: application/json" \
-d '{
"message": "Explain quantum computing in simple terms",
"importance": "normal",
"task_type": "question_answer"
}'
Response:
{
  "content": "Quantum computing is...",
  "model_tier": "balanced",
  "model_name": "qwen3-30b-instruct",
  "routing_strategy": "rule"
}
Drop-in replacement for OpenAI clients. Use Octoroute with any OpenAI-compatible SDK, framework, or tool - no code changes required.
curl http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "auto",
"messages": [{"role": "user", "content": "Hello!"}]
}'
Python (OpenAI SDK):
from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:3000/v1",
    api_key="not-needed"  # Octoroute doesn't require auth
)
response = client.chat.completions.create(
    model="auto",  # Let Octoroute pick the best model
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)
print(response.choices[0].message.content)
LangChain:
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
    base_url="http://localhost:3000/v1",
    api_key="not-needed",
    model="auto"
)
response = llm.invoke("What is the meaning of life?")
TypeScript/JavaScript:
import OpenAI from 'openai';
const client = new OpenAI({
  baseURL: 'http://localhost:3000/v1',
  apiKey: 'not-needed',
});
const response = await client.chat.completions.create({
  model: 'auto',
  messages: [{ role: 'user', content: 'Hello!' }],
});
The model field controls routing:
| Value | Behavior |
|---|---|
| auto | Intelligent routing - Octoroute analyzes the request and picks the best tier |
| fast | Route directly to Fast tier (8B models) |
| balanced | Route directly to Balanced tier (30B models) |
| deep | Route directly to Deep tier (120B models) |
| qwen3-8b | Bypass routing - use a specific endpoint by name |
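For example, a prompt you already know is heavy can name a tier directly and skip the routing decision entirely. A minimal sketch with the OpenAI Python SDK, assuming the default host and port from the config above:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="not-needed")

# "deep" routes straight to the Deep tier (120B models); no routing analysis runs.
response = client.chat.completions.create(
    model="deep",
    messages=[{"role": "user", "content": "Compare the consensus trade-offs in Raft vs Paxos"}],
)
print(response.choices[0].message.content)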
Full SSE streaming support - works with any streaming-capable client:
stream = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
| Endpoint | Method | Description |
|---|---|---|
| /v1/chat/completions | POST | Chat completions (streaming & non-streaming) |
| /v1/models | GET | List available models and tiers |
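Assuming /v1/models returns the standard OpenAI list shape (consistent with the drop-in compatibility described above), the stock SDK call can enumerate what the router exposes:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="not-needed")

# Print the model/tier identifiers the router advertises.
for model in client.models.list():
    print(model.id)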
See API Reference for complete documentation.
Octoroute supports three routing strategies:
Pattern matching on request metadata:
Latency: <1ms (no LLM overhead)
Uses a 30B "router brain" to analyze the request and choose the optimal model.
Latency: ~100-500ms (router invocation)
Try rules first (fast path), fall back to LLM for ambiguous cases.
Latency: <1ms for rule matches, ~100-500ms for LLM fallback
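To see which path a given request actually took, the native /chat response includes a routing_strategy field (documented in the API reference below). A quick sketch using requests:

import requests

# The response reports whether the rule path or the LLM fallback handled routing,
# which helps when tuning the hybrid strategy.
resp = requests.post(
    "http://localhost:3000/chat",
    json={"message": "Summarize this paragraph", "task_type": "document_summary"},
    timeout=60,
)
body = resp.json()
print(body["routing_strategy"], "->", body["model_tier"], body["model_name"])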
Octoroute provides three levels of observability to help you understand routing decisions and system performance:
Built-in structured logging via tracing:
# Set log level via environment variable
RUST_LOG=info cargo run
# Available levels: trace, debug, info, warn, error
RUST_LOG=octoroute=debug cargo run
What you get:
Metrics are always enabled and available at the /metrics endpoint:
# Build and run
cargo build --release
./target/release/octoroute
# Metrics endpoint available at http://localhost:3000/metrics
Available metrics:
octoroute_requests_total{tier, strategy} - Request counts by tier and routing strategy
octoroute_routing_duration_ms{strategy} - Routing decision latency histogram
octoroute_model_invocations_total{tier} - Model invocations by tier

Prometheus scraping config:
# prometheus.yml
scrape_configs:
  - job_name: 'octoroute'
    static_configs:
      - targets: ['localhost:3000']
    metrics_path: '/metrics'
    scrape_interval: 15s
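To spot-check the exporter before pointing Prometheus at it, here is a short sketch that fetches /metrics and prints the octoroute_* series listed above:

import requests

# Fetch the plaintext exposition output and keep only Octoroute's own series.
metrics = requests.get("http://localhost:3000/metrics", timeout=5).text
for line in metrics.splitlines():
    if line.startswith("octoroute_"):
        print(line)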
Why Direct Prometheus? We use the prometheus crate directly for simplicity and homelab-friendliness:
┌──────────────────────────────────────────────────┐
│ Client Applications │
│ (OpenAI SDK, LangChain, CLI, curl, etc.) │
└─────────────────────┬────────────────────────────┘
│
┌──────────────┴──────────────┐
│ │
▼ ▼
/v1/chat/completions /chat (legacy)
(OpenAI-compatible) (Native API)
│ │
└──────────────┬──────────────┘
▼
┌──────────────────────────────────────────────────┐
│ Octoroute API (Axum + Tokio) │
│ ┌────────────────────────────────────────────┐ │
│ │ Router (Rule/LLM/Hybrid) │ │
│ └────────────────────┬───────────────────────┘ │
│ │ │
│ ▼ Model Selection │
│ ┌────────────────────────────────────────────┐ │
│ │ open-agent-sdk Client │ │
│ │ (streaming or buffered responses) │ │
│ └────────────────────┬───────────────────────┘ │
└───────────────────────┼──────────────────────────┘
│
▼
┌──────────────────────────────────────────────────┐
│ Local Model Servers │
│ 8B (Ollama) | 30B (LM Studio) | 120B (llama) │
└──────────────────────────────────────────────────┘
Built on:
Comprehensive documentation is available in the /docs directory:
POST /chat
Submit a chat request for intelligent routing.
Request:
{
  "message": "Your question or task",
  "importance": "low" | "normal" | "high",
  "task_type": "casual_chat" | "code" | "creative_writing" | "deep_analysis" | "document_summary" | "question_answer"
}
Response:
{
  "content": "Generated text",
  "model_tier": "fast" | "balanced" | "deep",
  "model_name": "qwen3-30b-instruct",
  "routing_strategy": "rule" | "llm"
}
GET /health
Health check endpoint with system status.
Response: 200 OK with JSON body:
{
  "status": "OK",
  "health_tracking_status": "operational",
  "metrics_recording_status": "operational",
  "background_task_status": "operational",
  "background_task_failures": 0
}
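A minimal probe against this endpoint, e.g. for a cron job or uptime monitor (field names taken from the response above):

import requests

resp = requests.get("http://localhost:3000/health", timeout=5)
body = resp.json()
# "status" is "OK" when health tracking, metrics recording, and the
# background task are all operational.
if resp.status_code == 200 and body.get("status") == "OK":
    print("octoroute healthy")
else:
    print("octoroute degraded:", body)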
GET /models
List available models and their status.
Response:
{
  "models": [
    {
      "name": "qwen3-8b-instruct",
      "tier": "fast",
      "endpoint": "http://localhost:11434/v1",
      "healthy": true,
      "last_check_seconds_ago": 2,
      "consecutive_failures": 0
    }
  ]
}
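Using the healthy and consecutive_failures fields shown above, a small sketch that flags endpoints currently failing health checks:

import requests

models = requests.get("http://localhost:3000/models", timeout=5).json()["models"]
for m in models:
    if not m["healthy"]:
        # e.g. "fast/qwen3-8b-instruct failing (3 consecutive failures)"
        print(f'{m["tier"]}/{m["name"]} failing ({m["consecutive_failures"]} consecutive failures)')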
See Configuration Guide for full configuration options:
Understanding the difference between router tier and target tier is crucial for LLM and Hybrid strategies:
Router Tier (router_tier): Which model tier (fast/balanced/deep) makes the routing decision
Default: balanced (good balance of speed and accuracy)
Target Tier: Which model tier actually processes the user's request
Example Flow:
User Request → Router Tier (balanced/30B) analyzes request
→ Decides: "This is simple, use Fast tier"
→ Target Tier (fast/8B) processes request
→ Response to user
Why separate them?
# Install Rust 1.90+ (required for Edition 2024)
rustup toolchain install 1.90
rustup default 1.90
rustup component add rustfmt clippy
# Install development tools
cargo install just cargo-nextest
# Development build
cargo build
# Release build (optimized, includes Prometheus metrics)
cargo build --release
# Run all tests
cargo test
# Run with nextest (faster)
cargo nextest run
# Run integration tests
cargo test --test '*'
# Format code
cargo fmt
# Lint with clippy
cargo clippy --all-targets --all-features -- -D warnings
Quick Command Reference (using justfile):
| Command | Description |
|---|---|
| just check | Run clippy and format checks |
| just test | Run all tests |
| just bench | Run benchmarks |
| just watch | Auto-rebuild on file changes |
| just ci | Complete CI check (clippy + format + tests) |
See just --list for all 20+ available commands.
# With cargo (uses config.toml by default)
cargo run
# Or use release binary
./target/release/octoroute
# With custom config file
octoroute --config /path/to/custom-config.toml
# With environment variables
RUST_LOG=debug cargo run
# Start server (default: looks for config.toml)
octoroute
# Start server with custom config
octoroute --config custom.toml
# Generate config template to stdout
octoroute config
# Write config template to file
octoroute config -o config.toml
# Show version
octoroute --version
# Show help
octoroute --help
Features implemented:
OpenAI-compatible endpoints (/v1/chat/completions, /v1/models) with SSE streaming
Native /chat, /health, /models, /metrics endpoints
Config file generation (octoroute config and --config flag)

Route simple commands to 8B, complex reasoning to 120B:
import requests

def ask_llm(message, importance="normal"):
    response = requests.post("http://localhost:3000/chat", json={
        "message": message,
        "importance": importance
    })
    return response.json()["content"]
# Uses 8B model (fast)
ask_llm("What's the weather like?")
# Uses 120B model (intelligent routing)
ask_llm("Design a distributed consensus algorithm", importance="high")
Share your LLM fleet with family/friends, automatically balancing load:
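For instance, every user points an OpenAI-compatible client at the same Octoroute instance, and tier selection happens server-side per request. A minimal sketch (the octoroute.local hostname is a placeholder for wherever you host the router):

from openai import OpenAI

# Hypothetical shared hostname; substitute your own server address.
shared = OpenAI(base_url="http://octoroute.local:3000/v1", api_key="not-needed")

reply = shared.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "Plan a weekend hiking trip"}],
)
print(reply.choices[0].message.content)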
Integrate with IDE/scripts to route tasks intelligently:
# Quick code explanation (8B)
curl -X POST http://localhost:3000/chat -H "Content-Type: application/json" -d '{"message":"Explain this function"}'
# Deep code review (120B)
curl -X POST http://localhost:3000/chat -H "Content-Type: application/json" -d '{"message":"Review for security issues", "importance":"high"}'
Routing latency (tested on M2 Mac):
| Strategy | Latency | Notes |
|---|---|---|
| Rule-based | <1ms | Pure CPU, no LLM |
| LLM-based | ~250ms | With 30B router model |
| Hybrid | <1ms (rule hit) | Best of both worlds |
Throughput: Limited by model inference, not routing overhead.
Contributions welcome! Please see Development Guide for guidelines.
Areas for contribution:
A: LangChain is Python-first and carries significant framework overhead. Octoroute is Rust-native, type-safe, and designed specifically for local/self-hosted LLMs with minimal latency.
A: Technically yes (they're OpenAI-compatible), but Octoroute is optimized for local deployments. Cloud APIs already handle routing internally.
A: Any OpenAI-compatible endpoint (Ollama, LM Studio, llama.cpp, vLLM, etc.). Tested with Qwen, Llama, Mistral families.
A: Yes! The OpenAI-compatible endpoint (/v1/chat/completions) supports full SSE streaming. Set stream: true in your request and receive tokens as they're generated. The legacy /chat endpoint returns buffered responses only.
A: A 30B model analyzes your prompt + metadata and outputs one of: FAST, BALANCED, DEEP. This decision is then used to route the actual request.
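A conceptual client-side illustration of that flow (not Octoroute's internal code - the real router runs inside the service): ask the Balanced tier for a one-word verdict, then send the actual request to whichever tier it names:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="not-needed")
task = "Write a haiku about autumn"

# Ask the 30B "router brain" tier for a one-word capacity verdict.
verdict = client.chat.completions.create(
    model="balanced",
    messages=[{"role": "user", "content":
        f"Answer with exactly one word - FAST, BALANCED, or DEEP - "
        f"for how much model capacity this task needs: {task}"}],
).choices[0].message.content.strip().upper()

# Fall back to balanced if the verdict isn't one of the expected labels.
tier = verdict.lower() if verdict in {"FAST", "BALANCED", "DEEP"} else "balanced"
answer = client.chat.completions.create(
    model=tier,
    messages=[{"role": "user", "content": task}],
)
print(f"routed to {tier}:", answer.choices[0].message.content)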
A: Octoroute provides two observability levels:
Structured logs: set RUST_LOG=info to see routing decisions and health status
Prometheus metrics: scrape the /metrics endpoint

For homelab deployments, we recommend Prometheus + Grafana for metrics visualization.
Is the /metrics endpoint secure?
A: The /metrics endpoint is unauthenticated by design for simplicity in homelab deployments. It exposes operational metrics like request counts and routing latency.
Security recommendations:
location /metrics {
    auth_basic "Metrics";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://octoroute:3000/metrics;
}
Restrict access to /metrics to your Prometheus server IP only.

The metrics endpoint does NOT expose:
For internet-exposed deployments, always use authentication or IP restrictions.
A: We chose the direct prometheus crate (v0.14) for simplicity and homelab-friendliness:
The /metrics endpoint works with your existing Prometheus scraper without any additional infrastructure.
MIT License - see LICENSE for details.
Made with 🦑 for homelab enthusiasts
Route smarter, compute less.