| Crates.io | mothership |
| lib.rs | mothership |
| version | 0.0.55 |
| created_at | 2025-12-20 23:47:58.215678+00 |
| updated_at | 2026-01-12 17:13:07.883273+00 |
| description | Process supervisor with HTTP exposure - wrap, monitor, and expose your fleet |
| homepage | https://github.com/seuros/mothership |
| repository | https://github.com/seuros/mothership |
| max_upload_size | |
| id | 1997103 |
| size | 617,117 |
Process supervisor with HTTP exposure. Launches your fleet, routes traffic, runs WASM plugins.
Key features: a TUI dashboard (`--tui`), a Prometheus `/metrics` endpoint, and selective launching with `--only web` or `--except workers`.
Install from crates.io:
cargo install mothership
Or from source:
git clone https://github.com/seuros/mothership.git
cd mothership
cargo build --release
Mothership uses optional features to minimize binary size:
| Feature | Size Impact | Description |
|---|---|---|
| `tui` | +0.3 MB | TUI dashboard (`--tui` flag) |
| `wasm` | +8.4 MB | WASM payload processing |
| `tokio-postgres` | +0.5 MB | PostgreSQL flagship election |
Default build (no features): ~10.4 MB
# Minimal build (no TUI, no WASM)
cargo build --release
# With TUI dashboard
cargo build --release --features tui
# With WASM payloads
cargo build --release --features wasm
# Full build (all features)
cargo build --release --features "tui,wasm,tokio-postgres"
# Initialize a new manifest
mothership init
# Validate configuration
mothership clearance
# View routing chart
mothership chart
# Run the fleet
mothership
# Run with TUI dashboard
mothership --tui
Create ship-manifest.toml:
# Global configuration
[mothership]
metrics_port = 9090
compression = true
# Static files (served before routing to ships, with fallthrough)
# Longest prefix wins; if file not found, falls through to routes
[[mothership.static_dirs]]
path = "./public"
prefix = "/static"
# Named binds - external listeners
[mothership.bind]
http = "0.0.0.0:80"
https = "0.0.0.0:443"
ws = "0.0.0.0:8080"
# Web ships
[[fleet.web]]
name = "app"
command = "ruby"
args = ["app.rb"]
bind = "tcp://127.0.0.1:3000"
healthcheck = "/health"
routes = [
{ bind = "http", pattern = "/.*" },
{ bind = "https", pattern = "/.*" },
]
# Background workers
[[fleet.workers]]
name = "sidekiq"
command = "bundle"
args = ["exec", "sidekiq"]
critical = false
# One-shot jobs
[[fleet.jobs]]
name = "migrate"
command = "rails"
args = ["db:migrate"]
oneshot = true
# Docked bays (WebSocket multiplexing)
[[bays.websocket]]
name = "orbitcast"
command = "./orbitcast"
routes = [{ bind = "ws", pattern = "/ws" }]
config = { redis_url = "redis://localhost:6379" }
# WASM payloads (optional)
[[modules]]
name = "rate_limiter"
wasm = "./modules/rate_limiter.wasm"
routes = ["/api/.*"]
phase = "request"
Ships are traditional processes that bind to a port (TCP or Unix socket). Mothership proxies HTTP/WebSocket traffic to them.
Bays use the docking protocol for WebSocket multiplexing: multiple client connections are multiplexed over a single Unix socket.
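For illustration, a minimal sketch of the structural difference (the names and commands here are placeholders, not taken from the manifest above):

# Ship: binds its own port; mothership proxies HTTP/WebSocket traffic to it
[[fleet.web]]
name = "api"
command = "./api-server"
bind = "tcp://127.0.0.1:4000"
routes = ["http:/api/.*"]

# Bay: no port bind; WebSocket clients are multiplexed over the docking socket
[[bays.websocket]]
name = "presence"
command = "./presence"
routes = ["ws:/presence"]
config = { channel = "presence" }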
Ship fields:
| Field | Type | Default | Description |
|---|---|---|---|
| `name` | string | required | Unique identifier (ASCII only, no spaces) |
| `command` | string | required | Command to execute |
| `args` | string[] | `[]` | Command arguments |
| `bind` | string | - | Internal bind address (`tcp://host:port` or `unix:///path`) |
| `healthcheck` | string | - | Health check endpoint path |
| `routes` | array | `[]` | HTTP routes (see below) |
| `depends_on` | string[] | `[]` | Ships/bays to wait for before starting |
| `env` | table | `{}` | Environment variables |
| `critical` | bool | `true` | Crash kills entire fleet |
| `oneshot` | bool | `false` | Run once and exit |
| `tags` | string[] | `[]` | Tags for filtering (`--only`, `--except`) |
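A hedged sketch exercising several of these fields (the ship itself is illustrative and not part of the manifest above):

[[fleet.web]]
name = "admin"
command = "ruby"
args = ["admin.rb"]
bind = "tcp://127.0.0.1:3001"
healthcheck = "/health"
routes = [{ bind = "http", pattern = "/admin/.*", strip_prefix = "/admin" }]
depends_on = ["app"]              # wait for the "app" ship before starting
env = { RACK_ENV = "production" }
critical = false                  # a crash does not take down the whole fleet
tags = ["web", "admin"]           # selectable with --only admin or --except admin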
Bay fields:
| Field | Type | Default | Description |
|---|---|---|---|
| `name` | string | required | Unique identifier (ASCII only, no spaces) |
| `command` | string | required | Command to execute |
| `args` | string[] | `[]` | Command arguments |
| `routes` | array | `[]` | WebSocket routes |
| `depends_on` | string[] | `[]` | Ships/bays to wait for |
| `env` | table | `{}` | Environment variables |
| `config` | table | `{}` | Config passed via docking protocol |
| `critical` | bool | `true` | Crash kills entire fleet |
| `tags` | string[] | `[]` | Tags for filtering |
Routes map mothership binds to ships/bays:
# Object format
routes = [
{ bind = "http", pattern = "/api/.*" },
{ bind = "ws", pattern = "/cable" },
]
# Shorthand format
routes = ["http:/api/.*", "ws:/cable"]
# With path stripping
routes = [
{ bind = "http", pattern = "/api/.*", strip_prefix = "/api" },
]
# With User-Agent filtering (route LLM traffic to markdown-only backend)
routes = [
{ bind = "http", pattern = "/.*", ua_filter = "llm" },
]
Route traffic to different backends based on User-Agent:
# Browser-only route (Chromium, Firefox, Safari)
[[fleet.web]]
name = "app"
command = "rails"
routes = [{ bind = "http", pattern = "/.*", ua_filter = "browser" }]
# LLM-only route (Claude, GPT, Perplexity, etc.) - serve markdown
[[fleet.web]]
name = "markdown-api"
command = "rails"
routes = [{ bind = "http", pattern = "/.*", ua_filter = "llm" }]
# Bot/crawler route
[[fleet.web]]
name = "static-cache"
command = "nginx"
routes = [{ bind = "http", pattern = "/.*", ua_filter = "bot" }]
Available filters:
- `browser` - Chromium, Firefox, Safari browsers
- `chromium`, `firefox`, `safari` - Specific browser kinds
- `llm` - Claude/Anthropic agents
- `bot` - Bots, crawlers, curl, wget, etc.
- `~pattern` - Custom regex pattern (e.g., `~MyAgent.*`)

Routes are matched in declaration order. Put specific UA filters before catch-all routes.
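A hedged sketch of the custom regex form, with the specific filter declared before the catch-all (backend names are illustrative):

# Route a specific agent by custom regex, then everything else
[[fleet.web]]
name = "partner-feed"
command = "ruby"
args = ["feed.rb"]
routes = [{ bind = "http", pattern = "/.*", ua_filter = "~MyAgent.*" }]

[[fleet.web]]
name = "app"
command = "ruby"
args = ["app.rb"]
routes = [{ bind = "http", pattern = "/.*" }]   # catch-all, declared last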
Mothership computes Ja4H fingerprints for incoming requests. Ja4H fingerprints HTTP header order and values to detect bots/headless browsers even when they spoof User-Agent.
Fingerprints are logged with each request:
DEBUG method=GET path=/ ua_kind=Chromium shields=Some("ge11nn06enus_...")
Future: Shield-based routing to block or redirect suspicious fingerprints.
# TCP with explicit prefix
bind = "tcp://127.0.0.1:3000"
# TCP without prefix
bind = "0.0.0.0:8080"
# Port only (defaults to 127.0.0.1)
bind = "3000"
# Unix socket
bind = "unix:///tmp/app.sock"
When running behind a load balancer (AWS ELB/ALB, Cloudflare, HAProxy, nginx), enable PROXY protocol to preserve client IPs:
[mothership.bind]
http = "0.0.0.0:80" # Direct access
https = { addr = "0.0.0.0:443", proxy_protocol = true } # Behind LB
The proxy_protocol option enables HAProxy PROXY protocol v1/v2 with auto-detection (works with or without the protocol header).
Configure your load balancer to send PROXY protocol:
- HAProxy: add `send-proxy` or `send-proxy-v2` to the server line
- nginx: add `proxy_protocol on` to the upstream

Bays communicate with mothership via Unix sockets using a binary protocol:
| Message | Direction | Purpose |
|---|---|---|
| `Dock` | Bay → Mothership | Bay ready, sends version |
| `Moored` | Mothership → Bay | Docking confirmed, sends config |
| `Boarding` | Mothership → Bay | New WebSocket client connected |
| `Disembark` | Mothership → Bay | Client disconnected |
| `Cargo` | Bidirectional | WebSocket data payload |
Environment variables provided to bays:
- `MS_PID` - Mothership process ID
- `MS_SHIP` - Bay name
- `MS_SOCKET_DIR` - Socket directory
- `MS_SOCKET_PATH` - Full socket path
- `MS_BAY_TYPE` - Bay type (e.g., "websocket")

# Run fleet (default)
mothership
mothership run
mothership run -c /path/to/manifest.toml
# Run with TUI
mothership --tui
# Filter by tags
mothership --only web
mothership --only web,api
mothership --except workers
# Pre-flight check (validate + verify uplinks)
mothership preflight
# View routing chart
mothership chart
# Validate manifest (no network access)
mothership clearance
# Initialize new manifest
mothership init
Reduce duplication with base templates:
[base.ship]
env = { RAILS_ENV = "production" }
critical = true
tags = ["ruby"]
[base.bay]
critical = true
tags = ["realtime"]
[base.module]
phase = "request"
tags = ["security"]
# Ships inherit from base.ship
[[fleet.web]]
name = "app"
command = "ruby"
tags = ["web"] # Combined: ["ruby", "web"]
# Bays inherit from base.bay
[[bays.websocket]]
name = "orbitcast"
command = "./orbitcast"
Requires `--features tui`.
The TUI shows real-time fleet status.
Controls:
- `Tab` - Switch tabs
- `↑`/`↓` - Navigate ships
- `PgUp`/`PgDn` - Scroll logs
- `q` - Quit

Requires `--features wasm`.
Payloads process requests/responses at the proxy layer:
[[modules]]
name = "auth"
wasm = "./modules/auth.wasm"
routes = ["/admin/.*"]
phase = "request"
[[modules]]
name = "cache"
wasm = "./modules/cache.wasm"
routes = ["/api/.*"]
phase = "response"
config = { ttl = "3600" }
Payloads can inspect the proxied request, let it continue, or block it with a custom status, body, and headers; they can also log through the host.
WASM modules have access to these host functions:
| Function | Description |
|---|---|
| `get_request()` | Get request JSON length |
| `read_request(ptr, len)` | Read request JSON into buffer |
| `set_action(action)` | Set action: 0=Continue, 1=Block |
| `set_block(status, body_ptr, body_len)` | Block with status and body |
| `set_block_with_headers(status, body_ptr, body_len, headers_ptr, headers_len)` | Block with status, body, and headers (JSON) |
| `log(level, ptr, len)` | Log message (0=debug, 1=warn, 2=info) |
Multiple static directories can be configured. Longest prefix wins, with implicit fallthrough to routes if file not found.
# Specific assets directory
[[mothership.static_dirs]]
path = "./public/assets"
prefix = "/assets"
# Catch-all for other static files (with fallthrough to routes)
[[mothership.static_dirs]]
path = "./public"
prefix = "/"
bind = "http" # Optional: limit to specific bind
[mothership]
compression = true
Supports gzip, deflate, and brotli based on Accept-Encoding.
Cache CORS preflight (OPTIONS) responses from backends to reduce load:
[mothership]
cors_cache = true # Enable with defaults
# Or with custom settings
[mothership.cors_cache]
enabled = true
default_ttl = 3600 # Fallback TTL in seconds (default: 3600)
max_entries = 10000 # Maximum cache entries (default: 10000)
Cache key includes: origin, path, Access-Control-Request-Method, and Access-Control-Request-Headers. TTL is extracted from the backend's Access-Control-Max-Age header when present.
Browser A → OPTIONS /api/login → Backend → cache + return
Browser B → OPTIONS /api/login → cached ✅ (no backend hit)
Browser C (different origin) → OPTIONS /api/login → Backend → cache + return
Ships and bays inherit environment variables from mothership:
MASTER_KEY=secret DATABASE_URL=postgres://... mothership run
Process-specific env vars override inherited values:
[[fleet.web]]
name = "app"
command = "ruby"
env = { RAILS_ENV = "production" }
Run setup tasks (migrations, cache warmup, etc.) before any ship or bay starts. All prelaunch jobs must complete successfully or the launch is aborted.
[[mothership.prelaunch]]
name = "ar-migrate"
command = "rails"
args = ["db:migrate"]
[[mothership.prelaunch]]
name = "memgraph-migrate"
command = "./migrate-memgraph"
depends_on = ["ar-migrate"] # runs after ar-migrate completes
| Field | Type | Default | Description |
|---|---|---|---|
| `name` | string | required | Unique identifier |
| `command` | string | required | Command to execute |
| `args` | string[] | `[]` | Command arguments |
| `env` | table | `{}` | Environment variables |
| `depends_on` | string[] | `[]` | Other prelaunch jobs to wait for |
If any prelaunch job fails (non-zero exit), the entire launch is aborted.
When deploying multiple Motherships across different servers, the Flagship feature ensures only one instance runs prelaunch jobs (migrations, etc.) while others wait.
┌─────────────────────────────────────────────────────────────┐
│ FLEET │
│ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │ Server A │ │ Server B │ │ Server C │ │
│ │ Mothership│ │ Mothership│ │ Mothership│ │
│ │ ⭐FLAGSHIP│ │ (escort) │ │ (escort) │ │
│ │ │ │ │ │ │ │
│ │ 1. uplinks│ │ 1. uplinks│ │ 1. uplinks│ │
│ │ 2. migrate│ │ 2. wait...│ │ 2. wait...│ │
│ │ 3. signal │───►│ 3. ready! │ │ 3. ready! │ │
│ │ 4. launch │ │ 4. launch │ │ 4. launch │ │
│ └───────────┘ └───────────┘ └───────────┘ │
└─────────────────────────────────────────────────────────────┘
Explicitly designate the flagship via environment variable or command:
[mothership.flagship]
enabled = true
election = "static"
static_flagship = "$MS_FLAGSHIP" # truthy: "true", "1", "yes"
Or via command output:
[mothership.flagship]
enabled = true
election = "static"
static_flagship = { command = "hostname", equals = "server-a" }
Examples:
- `MS_FLAGSHIP=true`
- `{ command = "printenv DYNO", equals = "web.1" }`

Automatic leader election using advisory locks:
[mothership.flagship]
enabled = true
election = "postgres"
election_url = "$DATABASE_URL"
The first instance to acquire the lock becomes Flagship. Escorts wait for the ready signal via LISTEN/NOTIFY. Requires the tokio-postgres feature:
cargo build --features tokio-postgres
| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | bool | `false` | Enable flagship coordination |
| `election` | string | `"static"` | Election backend: `"static"` or `"postgres"` |
| `election_url` | string | - | PostgreSQL URL (supports `$ENV` expansion) |
| `static_flagship` | string/table | - | Env var (`"$VAR"`) or command (`{ command, equals }`) |
| `prelaunch_timeout` | u64 | `300` | Seconds escorts wait for flagship |
| `election_timeout` | u64 | `30` | Seconds to acquire lock |
Verify external dependencies are reachable before launching the fleet. If any uplink fails, mothership aborts startup.
[[mothership.uplinks]]
url = "postgres://localhost:5432/mydb"
name = "postgres"
[[mothership.uplinks]]
url = "$DATABASE_URL" # env var expansion
name = "primary-db"
timeout = "10s" # default: 5s
[[mothership.uplinks]]
url = "redis://cache.internal:6379"
name = "redis"
mothership preflight
Validates the manifest and verifies all uplinks are reachable.
Note: preflight requires network access to the configured uplinks. Running it on your local machine may fail if uplinks are only accessible from the deployment environment (e.g., internal databases, VPC-only services). Use clearance for offline manifest validation.
| Scheme | Default Port | Check |
|---|---|---|
| `postgres://`, `postgresql://` | 5432 | TCP |
| `mysql://` | 3306 | TCP |
| `redis://` | 6379 | TCP |
| `memgraph://`, `neo4j://` | 7687 | TCP |
| `http://`, `https://` | 80/443 | HTTP GET |
| `tcp://` | required | TCP |
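A hedged sketch of uplinks for the schemes not shown above (hosts are placeholders):

[[mothership.uplinks]]
url = "tcp://queue.internal:4222"   # tcp:// has no default port, so it must be explicit
name = "queue"

[[mothership.uplinks]]
url = "https://payments.example.com/healthz"   # checked with an HTTP GET
name = "payments-api"
timeout = "10s"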
Uplinks with the same host:port are checked only once:
# These two share the same postgres server - only one TCP check
[[mothership.uplinks]]
url = "postgres://localhost:5432/app_production"
name = "primary"
[[mothership.uplinks]]
url = "postgres://localhost:5432/app_replica"
name = "replica"
Enable Prometheus metrics:
[mothership]
metrics_port = 9090
Scrape http://127.0.0.1:9090/metrics:
mothership_ship_status{ship="app",group="web"} 1
mothership_ship_healthy{ship="app",group="web"} 1
mothership_ship_restarts_total{ship="app",group="web"} 0
mothership_ship_memory_rss_bytes{ship="app",group="web"} 52428800
mothership_ship_memory_virtual_bytes{ship="app",group="web"} 1073741824
mothership_requests_total{route="/api"} 1234
mothership_fleet_ships_total 3
Also serves /health for liveness probes.
Optional HTTP rate limiting to protect against request floods:
[mothership.rate_limiting]
global_rps = 10000.0 # Global requests per second (None = unlimited)
per_ip_rpm = 100.0 # Per-IP requests per minute (None = unlimited)
When rate limited, clients receive `429 Too Many Requests` with a `Retry-After` header.
Default: Unlimited (no rate limiting). Enable only if needed.
Mothership tracks its lifecycle through these states:
Initializing → Preflight → Electing → Prelaunch → Docking → Launching → Running → Draining → Landed
Startup states can transition to Failed; Running can transition to Crashed.
| State | Description |
|---|---|
| `Initializing` | Loading manifest |
| `Preflight` | Verifying uplinks |
| `Electing` | Flagship coordination |
| `Prelaunch` | Running prelaunch jobs |
| `Docking` | Bays connecting |
| `Launching` | Ships starting |
| `Running` | Normal operation |
| `Mayday` | Distress mode (attempting recovery) |
| `Draining` | Graceful shutdown in progress |
| `Landed` | Clean exit |
| `Failed` | Startup failure |
| `Crashed` | Runtime failure |
The current status is logged on shutdown: {"message":"Mothership landed","status":"landed"}
MIT