octoroute

Crate: octoroute (on Crates.io and lib.rs)
Version: 1.0.0
Created: 2025-11-25 01:45:21 UTC
Updated: 2025-11-27 22:20:57 UTC
Description: Intelligent multi-model router for self-hosted LLMs
Repository: https://github.com/slb350/octoroute
Crate ID: 1949032
Size: 1,379,473
Owner: Stephen Brandon (slb350)

README

Octoroute 🦑

Intelligent multi-model router for self-hosted LLMs

Rust License: MIT

Octoroute is a smart HTTP API that sits between your applications and your homelab's fleet of local LLMs. It automatically routes requests to the optimal model (8B, 30B, or 120B) based on task complexity, reducing compute costs while maintaining quality.

Think of it as a load balancer, but instead of distributing requests evenly, it sends simple queries to small models and complex reasoning tasks to larger ones.


Why Octoroute?

Running multiple LLM sizes on your homelab is powerful, but routing requests manually is tedious:

  • Manual routing is error-prone: you end up reaching for the 120B model "just in case," wasting compute.
  • Simple heuristics aren't enough: "Short prompts → small model" misses nuance.
  • LangChain is Python-only: You want native Rust performance and type safety.

Octoroute solves this with:

  • ✅ Intelligent routing - Rule-based + LLM-powered decision making
  • ✅ Zero-cost rules - Fast pattern matching for obvious cases (<1ms)
  • ✅ Homelab-first - Built for local Ollama, LM Studio, llama.cpp deployments
  • ✅ Rust native - Type-safe, async, low overhead
  • ✅ Observable - Track every routing decision with structured logs


Quick Start

Prerequisites

  • At least one local LLM endpoint (Ollama, LM Studio, llama.cpp, etc.)
  • Optional: Multiple model sizes (8B, 30B, 120B) for intelligent routing
  • Optional: Rust 1.90+ (only needed if building from source)

Installation

Option 1: Pre-built binaries (fastest)

Download from GitHub Releases:

# Linux x86_64
curl -LO https://github.com/slb350/octoroute/releases/latest/download/octoroute-linux-x86_64.tar.gz
tar -xzf octoroute-linux-x86_64.tar.gz

# Linux ARM64 (Raspberry Pi, etc.)
curl -LO https://github.com/slb350/octoroute/releases/latest/download/octoroute-linux-aarch64.tar.gz
tar -xzf octoroute-linux-aarch64.tar.gz

# macOS Apple Silicon
curl -LO https://github.com/slb350/octoroute/releases/latest/download/octoroute-macos-aarch64.tar.gz
tar -xzf octoroute-macos-aarch64.tar.gz

# macOS Intel
curl -LO https://github.com/slb350/octoroute/releases/latest/download/octoroute-macos-x86_64.tar.gz
tar -xzf octoroute-macos-x86_64.tar.gz

# Run
./octoroute

Option 2: Cargo install (requires Rust)

cargo install octoroute

Option 3: Build from source

git clone https://github.com/slb350/octoroute.git
cd octoroute
cargo build --release
./target/release/octoroute

Configuration

Generate a starter config file:

# Print template to stdout
octoroute config

# Write template to file
octoroute config -o config.toml

Or create a config.toml manually:

[server]
host = "0.0.0.0"
port = 3000

[[models.fast]]
name = "qwen3-8b-instruct"
base_url = "http://localhost:11434/v1"  # Ollama
max_tokens = 4096
temperature = 0.7
weight = 1.0
priority = 1

[[models.balanced]]
name = "qwen3-30b-instruct"
base_url = "http://localhost:1234/v1"   # LM Studio
max_tokens = 8192
temperature = 0.7
weight = 1.0
priority = 1

[[models.deep]]
name = "gpt-oss-120b"
base_url = "http://localhost:8080/v1"   # llama.cpp
max_tokens = 16384
temperature = 0.7
weight = 1.0
priority = 1

[routing]
strategy = "hybrid"     # rule, llm, hybrid
router_tier = "balanced"  # fast, balanced, deep (default: balanced)

Usage

Send a chat request:

curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Explain quantum computing in simple terms",
    "importance": "normal",
    "task_type": "question_answer"
  }'

Response:

{
  "content": "Quantum computing is...",
  "model_tier": "balanced",
  "model_name": "qwen3-30b-instruct",
  "routing_strategy": "rule"
}

OpenAI-Compatible API

Drop-in replacement for OpenAI clients. Use Octoroute with any OpenAI-compatible SDK, framework, or tool - no code changes required.

Quick Example

curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Works with Any OpenAI Client

Python (OpenAI SDK):

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3000/v1",
    api_key="not-needed"  # Octoroute doesn't require auth
)

response = client.chat.completions.create(
    model="auto",  # Let Octoroute pick the best model
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)
print(response.choices[0].message.content)

LangChain:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://localhost:3000/v1",
    api_key="not-needed",
    model="auto"
)

response = llm.invoke("What is the meaning of life?")

TypeScript/JavaScript:

import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:3000/v1',
  apiKey: 'not-needed',
});

const response = await client.chat.completions.create({
  model: 'auto',
  messages: [{ role: 'user', content: 'Hello!' }],
});

Model Selection

The model field controls routing:

  • auto - Intelligent routing: Octoroute analyzes the request and picks the best tier
  • fast - Route directly to the Fast tier (8B models)
  • balanced - Route directly to the Balanced tier (30B models)
  • deep - Route directly to the Deep tier (120B models)
  • qwen3-8b - Bypass routing and use a specific endpoint by name
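
For example, you can pin a tier explicitly instead of letting the router decide. A minimal sketch with the OpenAI Python SDK, assuming the default host/port from the Quick Start:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="not-needed")

# Force the Deep tier (120B) for a hard task, bypassing intelligent routing
response = client.chat.completions.create(
    model="deep",
    messages=[{"role": "user", "content": "Review this design for race conditions"}],
)
print(response.choices[0].message.content)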

Streaming Support

Full SSE streaming support - works with any streaming-capable client:

stream = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Available Endpoints

  • POST /v1/chat/completions - Chat completions (streaming & non-streaming)
  • GET /v1/models - List available models and tiers

See API Reference for complete documentation.


How It Works

Routing Strategies

Octoroute supports three routing strategies:

1. Rule-Based (Fast)

Pattern matching on request metadata:

  • Casual chat + <256 tokens → 8B model
  • Deep analysis or high importance → 120B model
  • Everything else → 30B model

Latency: <1ms (no LLM overhead)
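
To make the rule table concrete, here is a minimal Python sketch of the same logic. It is an illustration only; Octoroute implements these rules in Rust, and the token count here is whatever estimate the caller supplies:

# Illustrative re-implementation of the rule table above; not the crate's actual code.
def route_by_rules(task_type: str, importance: str, approx_tokens: int) -> str:
    # Casual chat with a short prompt goes to the Fast tier (8B)
    if task_type == "casual_chat" and approx_tokens < 256:
        return "fast"
    # Deep analysis or high importance goes to the Deep tier (120B)
    if task_type == "deep_analysis" or importance == "high":
        return "deep"
    # Everything else goes to the Balanced tier (30B)
    return "balanced"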

2. LLM-Based (Intelligent)

Uses a 30B "router brain" to analyze the request and choose the optimal model.

Latency: ~100-500ms (router invocation)

3. Hybrid (Recommended)

Try rules first (fast path), fall back to LLM for ambiguous cases.

Latency: <1ms for rule matches, ~100-500ms for LLM fallback
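
Conceptually, the hybrid strategy is the rule logic above with an LLM fallback. The sketch below illustrates that flow and is not Octoroute's implementation; llm_router is a hypothetical stand-in for the router-model call from the previous section:

from typing import Callable

# Conceptual sketch of hybrid routing; `llm_router` is a hypothetical callable
# that asks a router model for "fast" | "balanced" | "deep".
def route_hybrid(task_type: str, importance: str, approx_tokens: int,
                 llm_router: Callable[[str, str, int], str]) -> str:
    # Fast path: unambiguous cases are decided by rules in <1ms
    if task_type == "casual_chat" and approx_tokens < 256:
        return "fast"
    if task_type == "deep_analysis" or importance == "high":
        return "deep"
    # Ambiguous case: fall back to the LLM router (~100-500ms)
    return llm_router(task_type, importance, approx_tokens)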


Observability

Octoroute provides two levels of observability to help you understand routing decisions and system performance:

Level 1: Structured Logs (Always Available)

Built-in structured logging via tracing:

# Set log level via environment variable
RUST_LOG=info cargo run

# Available levels: trace, debug, info, warn, error
RUST_LOG=octoroute=debug cargo run

What you get:

  • Request metadata (prompt length, importance, task type)
  • Routing decisions (which strategy was used, which model was selected)
  • Health check status updates
  • Error traces with full context

Level 2: Metrics (Prometheus Export)

Metrics are always enabled and available at the /metrics endpoint:

# Build and run
cargo build --release
./target/release/octoroute

# Metrics endpoint available at http://localhost:3000/metrics

Available metrics:

  • octoroute_requests_total{tier, strategy} - Request counts by tier and routing strategy
  • octoroute_routing_duration_ms{strategy} - Routing decision latency histogram
  • octoroute_model_invocations_total{tier} - Model invocations by tier
  • Plus 3 health/observability metrics (see Observability Guide)
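
A quick way to sanity-check the exporter is to fetch /metrics and filter for the octoroute_ series. A minimal sketch using the Python requests library, assuming the default host/port:

import requests

# Fetch the Prometheus text-format export and print only Octoroute's own series
text = requests.get("http://localhost:3000/metrics", timeout=5).text
for line in text.splitlines():
    if line.startswith("octoroute_"):
        print(line)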

Prometheus scraping config:

# prometheus.yml
scrape_configs:
  - job_name: 'octoroute'
    static_configs:
      - targets: ['localhost:3000']
    metrics_path: '/metrics'
    scrape_interval: 15s

Why Direct Prometheus? We use the prometheus crate directly for simplicity and homelab-friendliness:

  • Works with existing Prometheus/Grafana setups out of the box
  • No intermediate abstraction layers - just Prometheus
  • Mature, stable crate with broad ecosystem support

Architecture

┌──────────────────────────────────────────────────┐
│              Client Applications                  │
│   (OpenAI SDK, LangChain, CLI, curl, etc.)       │
└─────────────────────┬────────────────────────────┘
                      │
       ┌──────────────┴──────────────┐
       │                             │
       ▼                             ▼
  /v1/chat/completions         /chat (legacy)
  (OpenAI-compatible)          (Native API)
       │                             │
       └──────────────┬──────────────┘
                      ▼
┌──────────────────────────────────────────────────┐
│        Octoroute API (Axum + Tokio)              │
│  ┌────────────────────────────────────────────┐  │
│  │    Router (Rule/LLM/Hybrid)                │  │
│  └────────────────────┬───────────────────────┘  │
│                       │                          │
│                       ▼ Model Selection          │
│  ┌────────────────────────────────────────────┐  │
│  │    open-agent-sdk Client                   │  │
│  │    (streaming or buffered responses)       │  │
│  └────────────────────┬───────────────────────┘  │
└───────────────────────┼──────────────────────────┘
                        │
                        ▼
┌──────────────────────────────────────────────────┐
│              Local Model Servers                  │
│   8B (Ollama) | 30B (LM Studio) | 120B (llama)   │
└──────────────────────────────────────────────────┘

Built on: Axum and Tokio for the HTTP server, and open-agent-sdk-rust for the model clients.


Documentation

Comprehensive documentation is available in the /docs directory:

  • Architecture Guide - System design, routing strategies, data flow, and technical decisions
  • API Reference - Complete HTTP API documentation with request/response schemas and examples
  • Configuration Guide - Detailed configuration reference with examples for different deployment scenarios
  • Observability Guide - Logging, Prometheus metrics, Grafana dashboards, and monitoring setup
  • Development Guide - Testing, benchmarking, code quality, and contributing guidelines
  • Deployment Guide - Homelab deployment with systemd, Docker, reverse proxy, and security hardening

API Reference

POST /chat

Submit a chat request for intelligent routing.

Request:

{
  "message": "Your question or task",
  "importance": "low" | "normal" | "high",
  "task_type": "casual_chat" | "code" | "creative_writing" | "deep_analysis" | "document_summary" | "question_answer"
}

Response:

{
  "content": "Generated text",
  "model_tier": "fast" | "balanced" | "deep",
  "model_name": "qwen3-30b-instruct",
  "routing_strategy": "rule" | "llm"
}

GET /health

Health check endpoint with system status.

Response: 200 OK with JSON body:

{
  "status": "OK",
  "health_tracking_status": "operational",
  "metrics_recording_status": "operational",
  "background_task_status": "operational",
  "background_task_failures": 0
}
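
For scripted liveness checks, an HTTP 200 with "status": "OK" is sufficient. A minimal sketch, assuming the default host/port:

import requests

resp = requests.get("http://localhost:3000/health", timeout=5)
body = resp.json()
# Treat anything other than 200 + "OK" as unhealthy
if resp.status_code == 200 and body.get("status") == "OK":
    print("healthy")
else:
    print(f"unhealthy: {body}")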

GET /models

List available models and their status.

Response:

{
  "models": [
    {
      "name": "qwen3-8b-instruct",
      "tier": "fast",
      "endpoint": "http://localhost:11434/v1",
      "healthy": true,
      "last_check_seconds_ago": 2,
      "consecutive_failures": 0
    }
  ]
}
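
The same endpoint is handy for spotting unhealthy backends. A small sketch that lists each configured model and flags consecutive failures, assuming the default host/port:

import requests

models = requests.get("http://localhost:3000/models", timeout=5).json()["models"]
for m in models:
    status = "ok" if m["healthy"] else f"DOWN ({m['consecutive_failures']} consecutive failures)"
    print(f"{m['tier']:>8}  {m['name']:<24}  {status}")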

Configuration Reference

See Configuration Guide for full configuration options:

  • Server settings: Host, port, timeouts
  • Model endpoints: Names, URLs, token limits
  • Routing strategy: Rule, LLM, or hybrid
  • Router tier: Which model makes routing decisions
  • Observability: Log level, metrics

Router Tier vs Target Tier

Understanding the difference between router tier and target tier is crucial for LLM and Hybrid strategies:

  • Router Tier (router_tier): Which model tier (fast/balanced/deep) makes the routing decision

    • Used by LLM and Hybrid strategies only
    • Analyzes the request and decides which target tier should handle it
    • Default: balanced (good balance of speed and accuracy)
    • Example: A Balanced tier model decides whether to route to Fast, Balanced, or Deep
  • Target Tier: Which model tier actually processes the user's request

    • Determined by the routing decision
    • Can be Fast (8B), Balanced (30B), or Deep (120B)
    • The model that generates the final response to the user

Example Flow:

User Request → Router Tier (balanced/30B) analyzes request
           → Decides: "This is simple, use Fast tier"
           → Target Tier (fast/8B) processes request
           → Response to user

Why separate them?

  • Faster routing: set the Fast tier (8B) as the router to minimize decision overhead
  • More accurate routing: set the Balanced tier (30B) as the router for better decisions
  • Resource efficiency: keep the Deep tier (120B) for processing requests, not for making routing decisions

Development

Prerequisites

# Install Rust 1.90+ (required for Edition 2024)
rustup toolchain install 1.90
rustup default 1.90
rustup component add rustfmt clippy

# Install development tools
cargo install just cargo-nextest

Build

# Development build
cargo build

# Release build (optimized, includes Prometheus metrics)
cargo build --release

Test

# Run all tests
cargo test

# Run with nextest (faster)
cargo nextest run

# Run integration tests
cargo test --test '*'

Format & Lint

# Format code
cargo fmt

# Lint with clippy
cargo clippy --all-targets --all-features -- -D warnings

Quick Command Reference (using justfile):

  • just check - Run clippy and format checks
  • just test - Run all tests
  • just bench - Run benchmarks
  • just watch - Auto-rebuild on file changes
  • just ci - Complete CI check (clippy + format + tests)

See just --list for all 20+ available commands.

Run locally

# With cargo (uses config.toml by default)
cargo run

# Or use release binary
./target/release/octoroute

# With custom config file
octoroute --config /path/to/custom-config.toml

# With environment variables
RUST_LOG=debug cargo run

CLI Commands

# Start server (default: looks for config.toml)
octoroute

# Start server with custom config
octoroute --config custom.toml

# Generate config template to stdout
octoroute config

# Write config template to file
octoroute config -o config.toml

# Show version
octoroute --version

# Show help
octoroute --help

Project Status

Features implemented:

  • ✅ OpenAI-compatible API (/v1/chat/completions, /v1/models) with SSE streaming
  • ✅ Legacy API with /chat, /health, /models, /metrics endpoints
  • ✅ Multi-tier model selection (fast/balanced/deep)
  • ✅ Rule-based + LLM-based hybrid routing
  • ✅ Priority-based routing with weighted distribution
  • ✅ Health checking with automatic endpoint recovery
  • ✅ Retry logic with request-scoped exclusion
  • ✅ Timeout enforcement (global + per-tier overrides)
  • ✅ Prometheus metrics
  • ✅ Performance benchmarks (Criterion)
  • ✅ CI/CD pipeline (GitHub Actions)
  • ✅ Comprehensive config validation
  • ✅ Development tooling (justfile with 20+ recipes)
  • ✅ CLI with config generation (octoroute config and --config flag)
  • ✅ Comprehensive test coverage (348+ tests across 51 integration test files)
  • ✅ Zero clippy warnings
  • ✅ Zero tech debt

Use Cases

1. CLI Assistant with Cost Optimization

Route simple commands to 8B, complex reasoning to 120B:

import requests

def ask_llm(message, importance="normal"):
    response = requests.post("http://localhost:3000/chat", json={
        "message": message,
        "importance": importance
    })
    return response.json()["content"]

# Uses 8B model (fast)
ask_llm("What's the weather like?")

# Uses 120B model (intelligent routing)
ask_llm("Design a distributed consensus algorithm", importance="high")

2. Multi-User Homelab Server

Share your LLM fleet with family/friends, automatically balancing load:

  • Bob's casual question → 8B
  • Alice's code review → 30B
  • Charlie's essay writing → 120B

3. Development Workflow Automation

Integrate with IDE/scripts to route tasks intelligently:

# Quick code explanation (8B)
curl -X POST http://localhost:3000/chat -d '{"message":"Explain this function"}'

# Deep code review (120B)
curl -X POST http://localhost:3000/chat -d '{"message":"Review for security issues", "importance":"high"}'

Performance

Routing latency (tested on M2 Mac):

  • Rule-based: <1ms (pure CPU, no LLM)
  • LLM-based: ~250ms (with a 30B router model)
  • Hybrid: <1ms on rule hits, LLM latency on fallback (best of both worlds)

Throughput: Limited by model inference, not routing overhead.


Contributing

Contributions welcome! Please see Development Guide for guidelines.

Areas for contribution:

  • Additional routing strategies (e.g., RL-based, tool-based)
  • Caching layer for repeated prompts
  • Web UI for routing visualization
  • More comprehensive benchmarks
  • Function calling / tool use support

FAQ

Q: Why not just use LangChain?

A: LangChain is Python-only and has significant overhead. Octoroute is Rust-native, type-safe, and designed specifically for local/self-hosted LLMs with minimal latency.

Q: Can I use this with cloud APIs (OpenAI, Anthropic)?

A: Technically yes (they're OpenAI-compatible), but Octoroute is optimized for local deployments. Cloud APIs already handle routing internally.

Q: What models are supported?

A: Any OpenAI-compatible endpoint (Ollama, LM Studio, llama.cpp, vLLM, etc.). Tested with Qwen, Llama, Mistral families.

Q: Does this support streaming responses?

A: Yes! The OpenAI-compatible endpoint (/v1/chat/completions) supports full SSE streaming. Set stream: true in your request and receive tokens as they're generated. The legacy /chat endpoint returns buffered responses only.

Q: How does LLM-based routing work?

A: A 30B model analyzes your prompt + metadata and outputs one of: FAST, BALANCED, DEEP. This decision is then used to route the actual request.
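
For intuition, the router call amounts to a one-word classification. The sketch below illustrates that idea and is not Octoroute's actual prompt or internal client; it assumes a 30B endpoint at the LM Studio address from the example config:

# Illustration of LLM-based routing as a one-word classification; the prompt
# Octoroute uses internally may differ.
from openai import OpenAI

router = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")  # 30B endpoint

prompt = (
    "You are a request router. Reply with exactly one word: FAST, BALANCED, or DEEP.\n"
    "Task type: question_answer\n"
    "Importance: normal\n"
    "Prompt: Explain quantum computing in simple terms"
)
decision = router.chat.completions.create(
    model="qwen3-30b-instruct",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content.strip().upper()
print(decision)  # e.g. "BALANCED"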

Q: How do I monitor Octoroute in production?

A: Octoroute provides two observability levels:

  1. Structured logs (always enabled): Use RUST_LOG=info to see routing decisions and health status
  2. Metrics (always enabled): Prometheus metrics exposed at /metrics endpoint

For homelab deployments, we recommend Prometheus + Grafana for metrics visualization.

Q: Is the /metrics endpoint secure?

A: The /metrics endpoint is unauthenticated by design for simplicity in homelab deployments. It exposes operational metrics like request counts and routing latency.

Security recommendations:

  • Homelab: Ensure Octoroute is only accessible on trusted networks (not exposed to the internet)
  • Production: Use a reverse proxy (nginx, Caddy) to add authentication:
    location /metrics {
        auth_basic "Metrics";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://octoroute:3000/metrics;
    }
    
  • Alternative: Use firewall rules to restrict /metrics to Prometheus server IP only

The metrics endpoint does NOT expose:

  • User messages or content
  • API keys or credentials
  • Individual request details (only aggregates)

For internet-exposed deployments, always use authentication or IP restrictions.

Q: Why direct Prometheus instead of OpenTelemetry?

A: We chose the direct prometheus crate (v0.14) for simplicity and homelab-friendliness:

  • Simplicity: No intermediate abstraction layers - just Prometheus
  • Homelab-friendly: Works with existing Prometheus/Grafana setups out of the box, no OTEL collector required
  • Stability: Mature, actively maintained library

The /metrics endpoint works with your existing Prometheus scraper without any additional infrastructure.


License

MIT License - see LICENSE for details.


Acknowledgments

  • Built on top of open-agent-sdk-rust
  • Inspired by LangChain's router chains
  • Thanks to the Rust, Tokio, and Axum communities

Made with 🦑 for homelab enthusiasts

Route smarter, compute less.
