| Crates.io | containerflare |
| lib.rs | containerflare |
| version | 0.2.0 |
| created_at | 2025-11-17 18:15:34.556526+00 |
| updated_at | 2025-12-01 18:33:16.139145+00 |
| description | A runtime for writing Rust-based Cloudflare Workers running in the Cloudflare Workers container runtime |
| homepage | |
| repository | https://github.com/sam0x17/containerflare |
| max_upload_size | |
| id | 1937286 |
| size | 81,662 |
containerflare lets you run Axum inside Cloudflare Containers without re-implementing the platform glue, and now auto-detects when it is executing on Google Cloud Run. It exposes a tiny runtime that:
- handles PORT binding and the rest of the platform glue for you
- injects the request context into your handlers via ContainerContext
- detects Google Cloud Run: it binds to the injected PORT, populates the service / revision / project / trace fields, and disables the host command channel when unavailable
- ships the low-level command channel as the standalone containerflare-command crate for direct use

examples/basic targets both Cloudflare Containers and Google Cloud Run with the same codebase and Dockerfile. The result feels like developing any other Axum app, only now it runs next to your Worker.

Install it with:

cargo add containerflare
The crate targets Rust 1.90+ (edition 2024).
use axum::{routing::get, Json, Router};
use containerflare::{run, ContainerContext, RequestMetadata};
#[tokio::main]
async fn main() -> containerflare::Result<()> {
let router = Router::new().route("/", get(metadata));
run(router).await
}
async fn metadata(ctx: ContainerContext) -> Json<RequestMetadata> {
Json(ctx.metadata().clone())
}
ContainerContext is injected via Axum’s extractor system and surfaces ContainerContext::platform() so you can differentiate between Cloudflare and Cloud Run.

RequestMetadata contains everything Cloudflare knows about the request (worker name, colo, region, cf-ray, client IP, method/path/url, etc.) plus Cloud Run service/revision/configuration/project information and the parsed x-cloud-trace-context header when present.

ContainerContext::command_client() provides the low-level JSON command channel; call invoke whenever Cloudflare documents a capability. On Cloud Run the channel is disabled and the client reports CommandError::Unavailable so you can log or fall back gracefully.

Run the binary inside your container image. Cloudflare will proxy HTTP traffic from the Worker/Durable Object to the listener bound by containerflare (it binds to PORT when set, otherwise CF_CONTAINER_PORT, falling back to 0.0.0.0:8787 for the Cloudflare sidecar). Override CF_CONTAINER_ADDR for a custom interface. Use CF_CMD_ENDPOINT when pointing the command client at a TCP or Unix socket shim.
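If you want to see (or log) what address the runtime will end up binding to, the precedence is easy to mirror in plain Rust. Below is a minimal sketch of one plausible reading; treating CF_CONTAINER_ADDR as only the interface is an assumption, and this is not containerflare’s actual resolution code.

```rust
use std::env;

// Illustrative only: PORT, then CF_CONTAINER_PORT, then 8787, with
// CF_CONTAINER_ADDR (assumed to be just the interface) overriding 0.0.0.0.
fn listen_addr() -> String {
    let port = env::var("PORT")
        .or_else(|_| env::var("CF_CONTAINER_PORT"))
        .unwrap_or_else(|_| "8787".to_string());
    let iface = env::var("CF_CONTAINER_ADDR").unwrap_or_else(|_| "0.0.0.0".to_string());
    format!("{iface}:{port}")
}

fn main() {
    println!("would bind to {}", listen_addr());
}
```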
If you only need access to the host-managed command bus (KV, R2, Queues, etc.), depend on
containerflare-command directly. Cloud Run
does not expose this bus, so commands immediately return CommandError::Unavailable, but the same
API works on Cloudflare Containers:
cargo add containerflare-command
It exposes CommandClient, CommandRequest, CommandResponse, and the CommandEndpoint
parsers without pulling in the runtime/router pieces.
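The exact string formats the CommandEndpoint parsers accept aren’t documented here, so the sketch below uses a local stand-in enum and hypothetical tcp:// / unix:// schemes purely to illustrate resolving CF_CMD_ENDPOINT into a transport; consult the crate docs for the real parser.

```rust
use std::env;
use std::path::PathBuf;

// Hypothetical stand-in for an endpoint type; the tcp:// and unix:// schemes
// are assumptions, not containerflare-command's documented formats.
#[derive(Debug)]
enum Endpoint {
    Stdio,
    Tcp(String),
    Unix(PathBuf),
}

fn resolve_endpoint() -> Endpoint {
    let Ok(value) = env::var("CF_CMD_ENDPOINT") else {
        // No override set: fall back to the default JSON-over-STDIO channel.
        return Endpoint::Stdio;
    };
    if let Some(addr) = value.strip_prefix("tcp://") {
        Endpoint::Tcp(addr.to_string())
    } else if let Some(path) = value.strip_prefix("unix://") {
        Endpoint::Unix(PathBuf::from(path))
    } else {
        Endpoint::Stdio
    }
}

fn main() {
    println!("command endpoint: {:?}", resolve_endpoint());
}
```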
# build and run the example container (amd64)
docker build --platform=linux/amd64 -f examples/basic/Dockerfile -t containerflare-basic .
docker run --rm --platform=linux/amd64 -p 8787:8787 containerflare-basic
# curl echoes the RequestMetadata JSON – easy proof the bridge works
curl http://127.0.0.1:8787/
# customize the listener
docker run --rm --platform=linux/amd64 -p 8080:8080 -e PORT=8080 containerflare-basic
curl http://127.0.0.1:8080/
From examples/basic, run:
./deploy_cloudflare.sh # runs wrangler deploy from examples/basic
The example’s wrangler.toml sets image_build_context = "../..", so the Docker build sees
the entire workspace (the example crate depends on this repo via path = "../.."). After
deploying, Wrangler prints a workers.dev URL that proxies into your container:
npx wrangler tail containerflare-basic --format=pretty
npx wrangler containers list
npx wrangler containers logs --name containerflare-basic-containerflarebasic
curl https://containerflare-basic.<your-account>.workers.dev/
The same example crate can target Cloud Run. From examples/basic:
./deploy_cloudrun.sh # builds with Dockerfile and runs gcloud run deploy
It uses your gcloud defaults for project and region unless overridden, and honors the
PROJECT_ID, REGION, SERVICE_NAME, TAG, and RUST_LOG variables. By default the script deploys without allowing
unauthenticated traffic; pass --allow-unauthenticated (or ALLOW_UNAUTH=true) to opt in. When
containerflare detects Cloud Run it binds to the injected PORT, captures
K_SERVICE/K_REVISION/K_CONFIGURATION/GOOGLE_CLOUD_PROJECT,
parses x-cloud-trace-context, and disables the host command channel. Handlers can inspect that
state via ContainerContext::platform() and the new Cloud Run fields on RequestMetadata.
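If you want to reproduce that detection in your own code (for logging, say), checking the Cloud Run environment variables is one plausible heuristic; the snippet below is an illustration, not necessarily the exact check containerflare performs.

```rust
use std::env;

// Cloud Run injects K_SERVICE / K_REVISION / K_CONFIGURATION into every
// container, so their presence is a reasonable (assumed) detection signal.
fn looks_like_cloud_run() -> bool {
    env::var_os("K_SERVICE").is_some() && env::var_os("K_REVISION").is_some()
}

fn main() {
    if looks_like_cloud_run() {
        println!(
            "Cloud Run: service={:?}, project={:?}",
            env::var("K_SERVICE").ok(),
            env::var("GOOGLE_CLOUD_PROJECT").ok()
        );
    } else {
        println!("not running on Cloud Run (or the env vars are missing)");
    }
}
```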
The Worker shim (see examples/basic/worker/index.js) adds an x-containerflare-metadata
header before proxying every request into the container. That JSON payload includes:
- Cloudflare request details (including the cf-ray)
- the Worker name (taken from the CONTAINERFLARE_WORKER Wrangler variable)

On the Rust side you can read all of those fields via ContainerContext::metadata() (see
RequestMetadata in src/context.rs). If you customize the Worker, keep writing this header
so your Axum handlers continue to receive Cloudflare context.
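In handlers you normally read this data through ContainerContext::metadata(), but when debugging a customized Worker shim it can help to inspect the raw header. A small sketch, assuming the shim writes the value as plain (unencoded) JSON:

```rust
use axum::http::HeaderMap;
use serde_json::Value;

// Debugging aid only: pull the raw x-containerflare-metadata header and parse
// it as JSON. In handlers, prefer ContainerContext::metadata().
fn raw_worker_metadata(headers: &HeaderMap) -> Option<Value> {
    let raw = headers.get("x-containerflare-metadata")?.to_str().ok()?;
    serde_json::from_str(raw).ok()
}

// Example handler: HeaderMap is itself an Axum extractor.
async fn debug_metadata(headers: HeaderMap) -> String {
    format!("{:?}", raw_worker_metadata(&headers))
}
```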
On Cloud Run the runtime infers metadata directly from HTTP headers and environment variables. It
records the service, revision, configuration, project ID, and region, plus the trace/span IDs and
sampling flag parsed from the x-cloud-trace-context header. These new fields appear on
RequestMetadata alongside the existing Cloudflare values. Geo fields like country/colo are
only populated on Cloudflare because Cloud Run does not provide them.
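Google documents that header as TRACE_ID/SPAN_ID;o=OPTIONS. containerflare parses it for you, but a standalone sketch of the parsing looks roughly like this (the field handling is illustrative, not the crate’s implementation):

```rust
// Parse "TRACE_ID/SPAN_ID;o=OPTIONS" into (trace_id, span_id, sampled).
fn parse_trace_context(value: &str) -> Option<(String, Option<String>, bool)> {
    let (trace_id, rest) = value.split_once('/')?;
    let (span_id, options) = match rest.split_once(';') {
        Some((span, opts)) => (span, Some(opts)),
        None => (rest, None),
    };
    let sampled = options.map_or(false, |opts| opts.contains("o=1"));
    let span_id = (!span_id.is_empty()).then(|| span_id.to_string());
    Some((trace_id.to_string(), span_id, sampled))
}

fn main() {
    let parsed = parse_trace_context("105445aa7843bc8bf206b12000100000/1;o=1");
    println!("{parsed:?}"); // -> Some(("105445aa7843bc8bf206b12000100000", Some("1"), true))
}
```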
examples/basic is a real Cargo crate that depends on containerflare via path = "../..".
It ships with a Dockerfile targeting x86_64-unknown-linux-musl, the Worker shim in
worker/index.js, and the deploy_cloudflare.sh / deploy_cloudrun.sh scripts described above.
Use it as a template for your own containerized Workers.

Cloudflare Containers run images built for the linux/amd64 architecture, so we target
x86_64-unknown-linux-musl by default. You could just as easily use a debian/ubuntu based
image; however, alpine/musl is great for small container sizes.

The runtime binds to PORT when provided (Cloud Run injects it), otherwise falls back to
CF_CONTAINER_PORT or 0.0.0.0:8787 so the Cloudflare sidecar (which connects from 10.0.0.1)
can reach your Axum listener. Override CF_CONTAINER_ADDR for custom setups.

CommandClient speaks JSON-over-STDIO for now. When Cloudflare documents additional
transports we can add typed helpers on top of it. Cloud Run disables the channel, so the client
immediately returns CommandError::Unavailable.

Contributions are welcome; file issues or PRs with ideas!