| Crates.io | opentelemetry-lambda-extension |
| lib.rs | opentelemetry-lambda-extension |
| version | 0.1.6 |
| created_at | 2025-12-07 12:21:42.076292+00 |
| updated_at | 2026-01-06 13:47:51.769369+00 |
| description | AWS Lambda extension for collecting and exporting OpenTelemetry signals |
| homepage | |
| repository | https://github.com/djvcom/lambda-observability |
| max_upload_size | |
| id | 1971504 |
| size | 441,857 |
AWS Lambda extension for collecting and exporting OpenTelemetry traces, metrics, and logs from Lambda functions.
This extension receives telemetry data (traces, metrics, logs) from instrumented Lambda functions via a local OTLP receiver and exports them to your observability backend. It integrates with Lambda's Extensions API for proper lifecycle management and handles the unique constraints of Lambda's execution model.
Install cargo-lambda:
```bash
# Using pip
pip3 install cargo-lambda

# Or using Homebrew (macOS)
brew tap cargo-lambda/cargo-lambda
brew install cargo-lambda
```
Build and deploy the extension using cargo-lambda:
```bash
# Build optimised for Lambda (handles cross-compilation automatically)
cargo lambda build --release --extension

# The binary is ready at:
# target/lambda/extensions/opentelemetry-lambda-extension
```
Create and deploy the layer:
```bash
# Create layer structure
mkdir -p layer/extensions
cp target/lambda/extensions/opentelemetry-lambda-extension layer/extensions/

# Package the layer
cd layer && zip -r ../extension-layer.zip .

# Deploy to AWS
aws lambda publish-layer-version \
  --layer-name opentelemetry-lambda-extension \
  --zip-file fileb://extension-layer.zip \
  --compatible-runtimes provided.al2023 nodejs24.x python3.14 \
  --compatible-architectures x86_64
```
For ARM64 (Graviton2):
```bash
cargo lambda build --release --extension --arm64

# Then package and deploy with:
# --compatible-architectures arm64
```
This extension is designed to be lightweight. With the workspace's release profile, the binary is approximately 4.4 MB (compared to ~30 MB for the OpenTelemetry Collector Lambda distribution).
The workspace Cargo.toml includes optimised release profiles:
```toml
[profile.release]
lto = true         # Link-time optimisation for cross-crate inlining
codegen-units = 1  # Better optimisation at cost of compile time
strip = true       # Remove debug symbols
panic = "abort"    # Remove unwinding code

[profile.release-small]
inherits = "release"
opt-level = "z"    # Optimise for size over speed
```
| Profile | Size | Use Case |
|---|---|---|
| `--release` | ~4.4 MB | Recommended default |
| `--profile release-small` | ~2.7 MB | When size is critical |
For even smaller binaries, you can apply UPX compression to the Linux binary:
```bash
# After building with cargo-lambda
upx --best target/lambda/extensions/opentelemetry-lambda-extension
```
This typically achieves 50-70% additional compression.
To identify what's contributing to binary size:
```bash
# Install cargo-bloat
cargo install cargo-bloat

# Show size by crate
cargo bloat --release -p opentelemetry-lambda-extension --crates

# Show largest functions
cargo bloat --release -p opentelemetry-lambda-extension -n 20
```
Configure the extension via environment variables or a TOML config file.
```bash
# OTLP endpoint
OTEL_EXPORTER_OTLP_ENDPOINT=https://your-collector.example.com

# Protocol (http or grpc)
OTEL_EXPORTER_OTLP_PROTOCOL=http

# Compression
OTEL_EXPORTER_OTLP_COMPRESSION=gzip

# Headers (for authentication)
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer token"

# Extension-specific settings
OTEL_LAMBDA_FLUSH_TIMEOUT=5s
OTEL_LAMBDA_RECEIVER_PORT=9999
```
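`OTEL_EXPORTER_OTLP_HEADERS` uses the usual OTLP convention of comma-separated `key=value` pairs. A minimal sketch of parsing that format (the helper name is illustrative, and it ignores the spec's URL-encoding detail):

```rust
/// Parse an OTLP-style header list such as
/// "Authorization=Bearer token,X-Tenant=acme" into key/value pairs.
/// Entries without an '=' are skipped. Illustrative helper only, not
/// the crate's actual parser.
fn parse_otlp_headers(raw: &str) -> Vec<(String, String)> {
    raw.split(',')
        .filter_map(|entry| {
            let (key, value) = entry.split_once('=')?;
            Some((key.trim().to_string(), value.trim().to_string()))
        })
        .collect()
}
```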
Place a `config.toml` in the Lambda function's deployment package:

```toml
[exporter]
endpoint = "https://your-collector.example.com"
protocol = "http"
compression = "gzip"
timeout = "30s"

[exporter.headers]
Authorization = "Bearer your-token"

[receiver]
port = 9999
host = "127.0.0.1"

[flush]
strategy = "invocation"
timeout = "5s"
```
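The `timeout` values above are compact duration strings ("5s", "30s"). A minimal sketch of how such strings can be parsed (the helper name and supported suffixes are illustrative; the crate's real parser may accept more formats):

```rust
use std::time::Duration;

/// Parse a compact duration string such as "5s", "30s", or "250ms".
/// Checks the "ms" suffix first, since "ms" also ends in 's'.
fn parse_duration(s: &str) -> Option<Duration> {
    if let Some(ms) = s.strip_suffix("ms") {
        return ms.parse::<u64>().ok().map(Duration::from_millis);
    }
    if let Some(secs) = s.strip_suffix('s') {
        return secs.parse::<u64>().ok().map(Duration::from_secs);
    }
    None
}
```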
```
┌────────────────────────────────────────────────────────────────────┐
│                    Lambda Execution Environment                    │
│                                                                    │
│  ┌──────────────┐       OTLP/HTTP or gRPC       ┌────────────┐     │
│  │   Lambda     │ ────────────────────────────▶ │ Extension  │     │
│  │  Function    │     traces, metrics, logs     │  Receiver  │     │
│  │(instrumented)│                               │   :9999    │     │
│  └──────────────┘                               └─────┬──────┘     │
│                                                       │            │
│                                                       ▼            │
│                                                 ┌────────────┐     │
│                                                 │ Aggregator │     │
│                                                 │ & Batcher  │     │
│                                                 └─────┬──────┘     │
│                                                       │            │
│  ┌─────────────┐                                      │            │
│  │  Platform   │                                      ▼            │
│  │  Telemetry  │ ────────────────────────────▶  ┌────────────┐     │
│  │ (Lambda API)│       platform metrics         │  Exporter  │     │
│  └─────────────┘                                │   (OTLP)   │     │
│                                                 └─────┬──────┘     │
└───────────────────────────────────────────────────────┼────────────┘
                                                        │
                                                        ▼
                                              ┌────────────────────┐
                                              │   Your Collector   │
                                              │ (Jaeger, Grafana,  │
                                              │  Datadog, etc.)    │
                                              └────────────────────┘
```
The extension integrates with Lambda's Extensions API for lifecycle management. Because Lambda may freeze the execution environment between invocations, the extension flushes buffered telemetry before the environment can be suspended (the `"invocation"` flush strategy shown above), so signals are not stranded in a frozen sandbox.
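In outline, an extension registers with the Extensions API, then blocks on the `next` event endpoint, which delivers `INVOKE` and `SHUTDOWN` events. A simplified sketch of the resulting flush decision (the enum and function names are illustrative, not the crate's internals; the event-type strings are the Extensions API's documented values):

```rust
/// Lifecycle events delivered by the Lambda Extensions API `next` endpoint.
#[derive(Debug, PartialEq)]
enum LifecycleEvent {
    Invoke,
    Shutdown,
}

/// Map the `eventType` field of an Extensions API response to an event.
fn parse_event_type(event_type: &str) -> Option<LifecycleEvent> {
    match event_type {
        "INVOKE" => Some(LifecycleEvent::Invoke),
        "SHUTDOWN" => Some(LifecycleEvent::Shutdown),
        _ => None,
    }
}

/// With the "invocation" flush strategy, buffered telemetry is flushed
/// after every invocation and always before shutdown, so nothing is lost
/// if Lambda freezes or terminates the environment.
fn should_flush(event: &LifecycleEvent) -> bool {
    matches!(event, LifecycleEvent::Invoke | LifecycleEvent::Shutdown)
}
```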
Configure your Lambda function to send telemetry to the extension:
```bash
# Point OTLP exporters at the extension
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:9999
OTEL_EXPORTER_OTLP_PROTOCOL=http
```
Alternatively, with the companion `opentelemetry-lambda-tower` crate, the handler can be instrumented directly:

```rust
use lambda_runtime::service_fn;
use opentelemetry_lambda_tower::{ApiGatewayV2Extractor, OtelTracingLayer};
use tower::ServiceBuilder;

let service = ServiceBuilder::new()
    .layer(OtelTracingLayer::new(ApiGatewayV2Extractor::new()))
    .service(service_fn(handler));
```
The extension automatically captures Lambda platform metrics from the Telemetry API:
| Metric | Description |
|---|---|
| `faas.duration` | Function execution duration |
| `faas.billed_duration` | Billed duration (rounded up) |
| `faas.max_memory` | Maximum memory used |
| `faas.init_duration` | Cold start initialisation time |
| `faas.coldstart` | Boolean indicating cold start |
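For example, a Telemetry API `platform.report` record carries the duration and memory figures that become the metrics above. A rough sketch of that mapping (the struct, field, and function names are illustrative, not the crate's actual types):

```rust
/// Selected fields from a Telemetry API `platform.report` record.
/// Field names here are illustrative.
struct PlatformReport {
    duration_ms: f64,
    billed_duration_ms: f64,
    max_memory_used_mb: u64,
    init_duration_ms: Option<f64>, // present only on cold starts
}

/// Convert a report into (metric name, value) pairs using the
/// `faas.*` names listed above. Illustrative only.
fn to_metrics(report: &PlatformReport) -> Vec<(&'static str, f64)> {
    let mut metrics = vec![
        ("faas.duration", report.duration_ms),
        ("faas.billed_duration", report.billed_duration_ms),
        ("faas.max_memory", report.max_memory_used_mb as f64),
        // Cold start flag: 1.0 when an init duration was reported.
        ("faas.coldstart", report.init_duration_ms.is_some() as u8 as f64),
    ];
    if let Some(init) = report.init_duration_ms {
        metrics.push(("faas.init_duration", init));
    }
    metrics
}
```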
The extension detects and adds Lambda resource attributes:
| Attribute | Source |
|---|---|
| `faas.name` | `AWS_LAMBDA_FUNCTION_NAME` |
| `faas.version` | `AWS_LAMBDA_FUNCTION_VERSION` |
| `faas.instance` | `AWS_LAMBDA_LOG_STREAM_NAME` |
| `faas.max_memory` | `AWS_LAMBDA_FUNCTION_MEMORY_SIZE` |
| `cloud.provider` | `aws` |
| `cloud.region` | `AWS_REGION` |
| `cloud.account.id` | Extracted from function ARN |
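The account-id extraction reduces to simple ARN parsing: Lambda function ARNs have the form `arn:aws:lambda:<region>:<account-id>:function:<name>`, so the account id is the fifth colon-separated field. A sketch of that plus the environment-variable lookups above (helper names are illustrative, not the crate's actual code):

```rust
use std::env;

/// Extract the AWS account id (fifth colon-separated field) from a
/// Lambda function ARN such as
/// `arn:aws:lambda:eu-west-2:123456789012:function:my-fn`.
fn account_id_from_arn(arn: &str) -> Option<&str> {
    // arn : aws : lambda : region : account-id : ...
    arn.split(':').nth(4)
}

/// Gather the Lambda resource attributes listed above from the
/// environment. Variables that are not set are simply skipped.
fn lambda_resource_attributes() -> Vec<(&'static str, String)> {
    let mut attrs = vec![("cloud.provider", "aws".to_string())];
    for (attr, var) in [
        ("faas.name", "AWS_LAMBDA_FUNCTION_NAME"),
        ("faas.version", "AWS_LAMBDA_FUNCTION_VERSION"),
        ("faas.instance", "AWS_LAMBDA_LOG_STREAM_NAME"),
        ("faas.max_memory", "AWS_LAMBDA_FUNCTION_MEMORY_SIZE"),
        ("cloud.region", "AWS_REGION"),
    ] {
        if let Ok(value) = env::var(var) {
            attrs.push((attr, value));
        }
    }
    attrs
}
```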
Troubleshooting: verify the function points its OTLP exporter at `http://localhost:9999`, that the extension's `OTEL_EXPORTER_OTLP_ENDPOINT` is correct, and that `OTEL_EXPORTER_OTLP_COMPRESSION=gzip` is accepted by your collector.

Licence: MIT