| Crates.io | init-tracing-opentelemetry |
| lib.rs | init-tracing-opentelemetry |
| version | 0.36.0 |
| created_at | 2023-06-14 21:38:36.69076+00 |
| updated_at | 2026-01-19 14:19:23.572624+00 |
| description | A set of helpers to initialize (and more) tracing + opentelemetry (compose your own or use opinionated preset) |
| homepage | https://github.com/davidB/tracing-opentelemetry-instrumentation-sdk/tree/main/init-tracing-opentelemetry |
| repository | https://github.com/davidB/tracing-opentelemetry-instrumentation-sdk |
| max_upload_size | |
| id | 890532 |
| size | 147,737 |
A set of helpers to initialize (and more) tracing + opentelemetry (compose your own or use opinionated preset)
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Simple preset
let _guard = init_tracing_opentelemetry::TracingConfig::production().init_subscriber()?;
//...
Ok(())
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// custom configuration
let _guard = init_tracing_opentelemetry::TracingConfig::default()
.with_json_format()
.with_stderr()
.with_log_directives("debug")
.init_subscriber()?;
//...
Ok(())
}
The init_subscriber() function returns an OtelGuard instance. Following the guard pattern, this struct exposes no methods, but when it is dropped it ensures that any pending traces/metrics are flushed before the application exits. The syntax let _guard is suggested so that Rust does not drop the struct until the application exits.
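For example, a minimal sketch of the difference (in Rust, a value bound to _ is dropped at the end of the statement, while _guard keeps it alive until the end of the enclosing scope):
// kept alive until the end of main; pending data is flushed when it is dropped
let _guard = init_tracing_opentelemetry::TracingConfig::production().init_subscriber()?;
// dropped immediately: the tracing pipeline would be shut down right after this line
// let _ = init_tracing_opentelemetry::TracingConfig::production().init_subscriber()?;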
TracingConfig::development() - Pretty format, stderr, with debug info
TracingConfig::production() - JSON format, stdout, minimal metadata
TracingConfig::debug() - Full verbosity with all span events
TracingConfig::minimal() - Compact format, no OpenTelemetry
TracingConfig::testing() - Minimal output for tests

use init_tracing_opentelemetry::TracingConfig;
TracingConfig::default()
.with_pretty_format() // or .with_json_format(), .with_compact_format()
.with_stderr() // or .with_stdout(), .with_file(path)
.with_log_directives("debug") // Custom log levels
.with_line_numbers(true) // Include line numbers
.with_thread_names(true) // Include thread names
.with_otel(true) // Enable OpenTelemetry
.init_subscriber()
.expect("valid tracing configuration");
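A sketch of picking one of the presets listed above at runtime; the APP_ENV variable name is only an illustration, not something read by the crate:
use init_tracing_opentelemetry::TracingConfig;

// hypothetical helper: choose a preset from an APP_ENV environment variable
fn tracing_config() -> TracingConfig {
    match std::env::var("APP_ENV").as_deref() {
        Ok("production") => TracingConfig::production(),
        Ok("test") => TracingConfig::testing(),
        _ => TracingConfig::development(),
    }
}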
Use init_subscriber_ext(|subscriber| {...}) to transform the subscriber (registry) before the configuration is applied.
use init_tracing_opentelemetry::TracingConfig;
use tokio_blocked::TokioBlockedLayer;
use tracing::info;
use tracing_subscriber::layer::SubscriberExt;
#[tokio::main]
async fn main() {
let blocked = TokioBlockedLayer::new()
.with_warn_busy_single_poll(Some(std::time::Duration::from_micros(150)));
let _guard = TracingConfig::default()
.with_log_directives("info,tokio::task=trace,tokio::task::waker=warn")
.with_span_events(tracing_subscriber::fmt::format::FmtSpan::NONE)
.init_subscriber_ext(|subscriber| subscriber.with(blocked))
.unwrap();
tokio::task::spawn(async {
// BAD!
// This produces a warning log message.
info!("blocking!");
std::thread::sleep(std::time::Duration::from_secs(1));
})
.await
.unwrap();
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
}
For backward compatibility, the old API is still available:
pub fn build_loglevel_filter_layer() -> tracing_subscriber::filter::EnvFilter {
// filter what is output on log (fmt)
// std::env::set_var("RUST_LOG", "warn,axum_tracing_opentelemetry=info,otel=debug");
std::env::set_var(
"RUST_LOG",
format!(
// `otel::tracing` should be at level trace to emit opentelemetry trace & span
// `otel::setup` set to debug to log detected resources, configuration read and inferred
"{},otel::tracing=trace,otel=debug",
std::env::var("RUST_LOG")
.or_else(|_| std::env::var("OTEL_LOG_LEVEL"))
.unwrap_or_else(|_| "info".to_string())
),
);
EnvFilter::from_default_env()
}
pub fn build_otel_layer<S>() -> Result<OpenTelemetryLayer<S, Tracer>, BoxError>
where
S: Subscriber + for<'a> LookupSpan<'a>,
{
use crate::{
init_propagator, //stdio,
otlp,
resource::DetectResource,
};
let otel_rsrc = DetectResource::default()
//.with_fallback_service_name(env!("CARGO_PKG_NAME"))
//.with_fallback_service_version(env!("CARGO_PKG_VERSION"))
.build();
let otel_tracer = otlp::init_tracer(otel_rsrc, otlp::identity)?;
// to not send trace somewhere, but continue to create and propagate,...
// then send them to `axum_tracing_opentelemetry::stdio::WriteNoWhere::default()`
// or to `std::io::stdout()` to print
//
// let otel_tracer =
// stdio::init_tracer(otel_rsrc, stdio::identity, stdio::WriteNoWhere::default())?;
init_propagator()?;
Ok(tracing_opentelemetry::layer().with_tracer(otel_tracer))
}
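A minimal sketch of composing these building blocks into a subscriber with tracing_subscriber (it assumes the functions above are in scope and that a plain fmt layer is wanted for console output):
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;

fn init_tracing() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    tracing_subscriber::registry()
        // filter what is logged, based on RUST_LOG / OTEL_LOG_LEVEL
        .with(build_loglevel_filter_layer())
        // bridge tracing spans to the OpenTelemetry exporter
        .with(build_otel_layer()?)
        // plain console output
        .with(tracing_subscriber::fmt::layer())
        .init();
    Ok(())
}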
To retrieve the current trace_id (e.g. to add it into an error message, as a header or attribute):
# use tracing_opentelemetry_instrumentation_sdk;
let trace_id = tracing_opentelemetry_instrumentation_sdk::find_current_trace_id();
//json!({ "error" : "xxxxxx", "trace_id": trace_id})
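For example, a small sketch that embeds it into a JSON error payload (serde_json is assumed as a dependency here):
let trace_id = tracing_opentelemetry_instrumentation_sdk::find_current_trace_id();
// trace_id is None (serialized as null) when there is no active trace
let body = serde_json::json!({ "error": "xxxxxx", "trace_id": trace_id });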
To ease setup and compliance with the OpenTelemetry SDK configuration, the configuration can be done with the following environment variables (see the setup samples above):
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT falls back to OTEL_EXPORTER_OTLP_ENDPOINT for the URL of the exporter / collector
OTEL_EXPORTER_OTLP_TRACES_PROTOCOL falls back to OTEL_EXPORTER_OTLP_PROTOCOL, then to auto-detection based on the ENDPOINT port
OTEL_SERVICE_NAME for the name of the service
OTEL_PROPAGATORS for the configuration of the propagators
OTEL_TRACES_SAMPLER & OTEL_TRACES_SAMPLER_ARG for the configuration of the sampler
A few other environment variables can also be used to configure the OTLP exporter (e.g. headers, authentication, etc.):
# For GRPC:
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://localhost:4317"
export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="grpc"
export OTEL_TRACES_SAMPLER="always_on"
# For HTTP:
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://127.0.0.1:4318/v1/traces"
export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="http/protobuf"
export OTEL_TRACES_SAMPLER="always_on"
In the context of Kubernetes, some of the above environment variables can be injected by the OpenTelemetry operator (via inject-sdk):
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # to inject environment variables only by opentelemetry-operator
        instrumentation.opentelemetry.io/inject-sdk: "opentelemetry-operator/instrumentation"
        instrumentation.opentelemetry.io/container-names: "app"
    spec:
      containers:
        - name: app
Or, if you don't set up inject-sdk, you can set the environment variables manually, e.g.:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: OTEL_SERVICE_NAME
              value: "app"
            - name: OTEL_EXPORTER_OTLP_PROTOCOL
              value: "grpc"
            # for otel collector in `deployment` mode, use the name of the service
            # - name: OTEL_EXPORTER_OTLP_ENDPOINT
            #   value: "http://opentelemetry-collector.opentelemetry-collector:4317"
            # for otel collector in sidecar mode (implies deploying a sidecar CR per namespace)
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://localhost:4317"
            # for `daemonset` mode: need to use the local daemonset (value interpolated by k8s: `$(...)`)
            # - name: OTEL_EXPORTER_OTLP_ENDPOINT
            #   value: "http://$(HOST_IP):4317"
            # - name: HOST_IP
            #   valueFrom:
            #     fieldRef:
            #       fieldPath: status.hostIP
check that you only have a single version of opentelemetry (this could be part of your CI/build); use cargo-deny or cargo tree
# Check only one version of opentelemetry should be used
# else issue with setup of global (static variable)
# check_single_version_opentelemetry:
cargo tree -i opentelemetry
check the code of your exporter and its integration with tracing (as a subscriber layer)
check the OpenTelemetry environment variables OTEL_EXPORTER... and OTEL_TRACES_SAMPLER (their values are logged on the target otel::setup)
check that the log target otel::tracing is enabled at level trace (or info if you use the tracing_level_info feature) so that spans are generated and sent to the opentelemetry collector.
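For example, with the builder API those targets can be enabled explicitly through the log directives (a sketch reusing the directive values shown earlier):
let _guard = init_tracing_opentelemetry::TracingConfig::default()
    // enable span emission (otel::tracing) and setup logs (otel)
    .with_log_directives("info,otel::tracing=trace,otel=debug")
    .init_subscriber()?;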
To configure opentelemetry metrics, enable the metrics feature. This will initialize an SdkMeterProvider, set it globally, and add a MetricsLayer so that tracing events can be used to produce metrics.
The opentelemetry_sdk crate can still be used to produce metrics as well: since the SdkMeterProvider is configured globally, any Axum/Tonic middleware that does not use tracing but uses opentelemetry::metrics directly will also work.
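For example, a sketch of emitting metrics through tracing events (the field prefixes below are the ones recognized by the tracing-opentelemetry MetricsLayer):
// fields prefixed with monotonic_counter., counter. or histogram. are turned into metric instruments
tracing::info!(monotonic_counter.http_requests_total = 1u64, "request handled");
tracing::info!(histogram.http_request_duration_ms = 42.0_f64, "request timing");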
Configure the following environment variables for the metrics exporter (in addition to those configured above):
OTEL_EXPORTER_OTLP_METRICS_ENDPOINT overrides OTEL_EXPORTER_OTLP_ENDPOINT for the URL of the exporter / collector
OTEL_EXPORTER_OTLP_METRICS_PROTOCOL overrides OTEL_EXPORTER_OTLP_PROTOCOL, falls back to auto-detection based on the ENDPOINT port
OTEL_EXPORTER_OTLP_METRICS_TIMEOUT to set the timeout for the connection to the exporter
OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE to set the temporality preference for the exporter
OTEL_METRIC_EXPORT_INTERVAL to set the frequency of metrics export in milliseconds, defaults to 60s