| Crates.io | otlp-arrow-library |
| lib.rs | otlp-arrow-library |
| version | 0.6.4 |
| created_at | 2025-12-11 10:38:51.11291+00 |
| updated_at | 2025-12-11 10:38:51.11291+00 |
| description | Cross-platform Rust library for receiving OTLP messages via gRPC and writing to Arrow IPC files |
| homepage | |
| repository | https://github.com/your-org/otlp-rust-service |
| max_upload_size | |
| id | 1979483 |
| size | 2,420,683 |
A cross-platform Rust library for receiving OpenTelemetry Protocol (OTLP) messages via gRPC and writing them to local files in Arrow IPC Streaming format. The library supports both standalone service mode and embedded library usage with public API methods.
Highlights:

- PushMetricExporter and SpanExporter implementations for seamless integration with the OpenTelemetry SDK
- export_metrics_ref() method that accepts references instead of requiring ownership
- Architecture overview in docs/ARCHITECTURE.md
- Credentials handled via SecretString (zeroed in memory)

The demo application is the easiest way to get started and verify the service is working:
# Run the demo application
cargo run --example demo-app
Prerequisites: For dashboard visualization, build the dashboard first:
cd dashboard && npm install && npm run build && cd ..
If the dashboard is not built: The demo will still run and generate data, but without the web dashboard. Data will be written to ./output_dir/otlp/ and you can view it later after building the dashboard.
The demo will:
To view the dashboard:
- Select the ./output_dir/otlp directory (the parent directory containing both the traces and metrics subdirectories)

Important: Select the otlp directory (not otlp/traces or otlp/metrics), as the dashboard needs access to both subdirectories.
To stop the demo: Press Ctrl+C (the demo will flush data and shut down gracefully)
The demo application serves as both:
See examples/demo-app.rs for the complete source code with extensive comments.
# Run with default configuration
cargo run --bin otlp-arrow-service
# Run with custom config
OTLP_OUTPUT_DIR=./my_output cargo run --bin otlp-arrow-service
use otlp_arrow_library::{OtlpLibrary, Config};
use opentelemetry_sdk::trace::SpanData;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create configuration
    let config = Config::default();

    // Initialize library
    let library = OtlpLibrary::new(config).await?;

    // Export a trace span
    // library.export_trace(span).await?;

    // Export multiple traces
    // library.export_traces(spans).await?;

    // Export metrics (automatically converted to protobuf for storage)
    // library.export_metrics(metrics).await?;

    // Export metrics by reference (more efficient)
    // library.export_metrics_ref(&metrics).await?;

    // Force flush
    library.flush().await?;

    // Shutdown gracefully
    library.shutdown().await?;

    Ok(())
}
use otlp_arrow_library::OtlpLibrary;
use opentelemetry_sdk::metrics::{MeterProvider, PeriodicReader};
use opentelemetry_sdk::trace::TracerProvider;
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create library instance
    let config = otlp_arrow_library::Config::default();
    let library = OtlpLibrary::new(config).await?;

    // Create exporters for OpenTelemetry SDK
    let metric_exporter = library.metric_exporter_adapter();
    let span_exporter = library.span_exporter_adapter();

    // Use with OpenTelemetry SDK
    let metric_reader = PeriodicReader::builder(metric_exporter)
        .with_interval(Duration::from_secs(10))
        .build();

    let meter_provider = MeterProvider::builder()
        .with_reader(metric_reader)
        .build();

    let tracer_provider = TracerProvider::builder()
        .with_batch_exporter(span_exporter, opentelemetry_sdk::runtime::Tokio)
        .build();

    // Use providers to create meters and tracers
    // Metrics and traces are automatically exported via exporters
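    // For example, create a tracer from the provider and record one span.
    // Minimal sketch: the instrumentation name "demo" and the span name are
    // illustrative, and the trait imports come from the `opentelemetry` API
    // crate, assumed to be a dependency alongside opentelemetry_sdk.
    use opentelemetry::trace::{Tracer, TracerProvider as _};
    let tracer = tracer_provider.tracer("demo");
    tracer.in_span("handle_request", |_cx| {
        // Work performed here is recorded on the span and exported via span_exporter.
    });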
    // Shutdown when done
    library.shutdown().await?;

    Ok(())
}
import otlp_arrow_library
# Initialize library
library = otlp_arrow_library.PyOtlpLibrary(
    output_dir="./output",
    write_interval_secs=5
)
# Export a trace
trace_id = bytes([1] * 16)
span_id = bytes([1] * 8)
span = {
"trace_id": trace_id,
"span_id": span_id,
"name": "my-span",
"kind": "server",
"attributes": {"service.name": "my-service"}
}
library.export_trace(span)
# Export metrics by reference (more efficient)
metrics = {} # Your metrics dictionary
library.export_metrics_ref(metrics)
# Flush and shutdown
library.flush()
library.shutdown()
import otlp_arrow_library
# Initialize library
library = otlp_arrow_library.PyOtlpLibrary(
    output_dir="./output",
    write_interval_secs=5
)
# Create exporters for OpenTelemetry SDK integration
metric_exporter = library.metric_exporter_adapter()
span_exporter = library.span_exporter_adapter()
# Note: Direct Python OpenTelemetry SDK integration requires
# adapter classes (see Issue #6). For now, use library methods directly.
Configuration can be provided via:
1. A configuration file (config.yaml)
2. Environment variables with the OTLP_* prefix (e.g., OTLP_OUTPUT_DIR, OTLP_WRITE_INTERVAL_SECS)
3. Programmatically via ConfigBuilder

Available options:

- output_dir: Output directory for Arrow IPC files (default: ./output_dir)
- write_interval_secs: How frequently to write batches to disk, in seconds (default: 5)
- trace_cleanup_interval_secs: Trace file retention interval in seconds (default: 600)
- metric_cleanup_interval_secs: Metric file retention interval in seconds (default: 3600)
- max_trace_buffer_size: Maximum number of trace spans to buffer in memory (default: 10000, max: 1000000)
- max_metric_buffer_size: Maximum number of metric requests to buffer in memory (default: 10000, max: 1000000)
- protocols: Protocol configuration
  - protobuf_enabled: Enable gRPC Protobuf server (default: true)
  - protobuf_port: Port for the Protobuf server (default: 4317)
  - arrow_flight_enabled: Enable gRPC Arrow Flight server (default: true)
  - arrow_flight_port: Port for the Arrow Flight server (default: 4318)
- forwarding: Optional remote forwarding configuration
  - enabled: Enable forwarding (default: false)
  - endpoint_url: Remote endpoint URL (required if enabled; must be a valid http/https URL)
  - protocol: Forwarding protocol (Protobuf or ArrowFlight, default: Protobuf)
  - authentication: Optional authentication configuration
    - auth_type: Authentication type (api_key, bearer_token, or basic)
    - credentials: Authentication credentials (stored securely using SecretString)
      - api_key requires a key credential
      - bearer_token requires a token credential
      - basic requires username and password credentials
- dashboard: Dashboard HTTP server configuration
  - enabled: Enable the dashboard server (default: false)
  - port: Dashboard port (default: 8080)
  - bind_address: Bind address (default: 127.0.0.1 for localhost-only access)
  - x_frame_options: X-Frame-Options header value (DENY or SAMEORIGIN, default: DENY)
  - static_dir: Directory containing dashboard static files (default: ./dashboard/dist)

Example config.yaml:

output_dir: ./custom_output
write_interval_secs: 10
trace_cleanup_interval_secs: 1200
metric_cleanup_interval_secs: 7200
protocols:
  protobuf_enabled: true
  protobuf_port: 4317
  arrow_flight_enabled: true
  arrow_flight_port: 4318
forwarding:
  enabled: true
  endpoint_url: "https://collector.example.com:4317"
  protocol: protobuf
  authentication:
    auth_type: bearer_token
    credentials:
      token: "my-bearer-token"
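The example above leaves the buffer limits and the dashboard section at their defaults. A minimal sketch of those sections, using only the option names and defaults documented above (the specific values are illustrative, not recommendations):

max_trace_buffer_size: 50000
max_metric_buffer_size: 50000
dashboard:
  enabled: true
  port: 8080
  bind_address: "127.0.0.1"
  x_frame_options: DENY
  static_dir: ./dashboard/dist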
export OTLP_OUTPUT_DIR=./my_output
export OTLP_WRITE_INTERVAL_SECS=10
export OTLP_MAX_TRACE_BUFFER_SIZE=50000
export OTLP_MAX_METRIC_BUFFER_SIZE=50000
export OTLP_PROTOBUF_ENABLED=true
export OTLP_ARROW_FLIGHT_ENABLED=true
use otlp_arrow_library::{ConfigBuilder, ForwardingConfig, ForwardingProtocol};
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = ConfigBuilder::new()
        .output_dir("./custom_output")
        .write_interval_secs(10)
        .protobuf_enabled(true)
        .arrow_flight_enabled(true)
        .enable_forwarding(ForwardingConfig {
            enabled: true,
            endpoint_url: Some("https://collector.example.com:4317".to_string()),
            protocol: ForwardingProtocol::Protobuf,
            authentication: None,
        })
        .build()?;

    // Pass `config` to OtlpLibrary::new(config) as in the embedded examples above.
    Ok(())
}
See quickstart guide for detailed examples.
src/
├── lib.rs # Library root
├── config/ # Configuration management
├── otlp/ # OTLP processing (server, exporter, forwarder)
├── api/ # Public API
├── mock/ # Mock service for testing
└── bin/ # Standalone service binary
tests/
├── unit/ # Unit tests
├── integration/ # Integration tests
└── contract/ # Contract tests
The project includes GitHub Actions workflows for:
See .github/workflows/ for workflow definitions.
# Build library
cargo build --release
# Run tests
cargo test
# Run benchmarks
cargo bench
# Install maturin
pip install maturin
# Build Python wheel
maturin build --release
# Or install in development mode
maturin develop
See API documentation for complete Rust API reference.
See Python API contract for Python API reference.
The library provides adapter classes for seamless integration with Python OpenTelemetry SDK:
import otlp_arrow_library
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.trace.export import BatchSpanProcessor
# Create library instance
library = otlp_arrow_library.PyOtlpLibrary(output_dir="./otlp_output")
# Create adapters for Python OpenTelemetry SDK
metric_exporter = library.metric_exporter_adapter()
span_exporter = library.span_exporter_adapter()
# Use directly with Python OpenTelemetry SDK
metric_reader = PeriodicExportingMetricReader(metric_exporter)
span_processor = BatchSpanProcessor(span_exporter)
See Python OpenTelemetry SDK Adapter Quickstart for complete examples.
The demo application (examples/demo-app.rs) is the recommended starting point for new users:
Run the demo:
cargo run --example demo-app
What it does:
Use cases:
- examples/standalone.rs - Run as a standalone gRPC service
- examples/embedded.rs - Use as an embedded component
- examples/python_example.py - Python integration example

The standalone service includes a health check endpoint on port 8080:
curl http://localhost:8080
# Returns: OK
The library collects operation metrics that can be accessed via OtlpFileExporter::get_metrics():
The library implements multiple security features to protect credentials, prevent attacks, and ensure safe operation:
- Credentials are stored using SecretString, which zeroes them in memory and keeps them out of Debug or Display implementations
- Buffer limits are enforced: exports return a BufferFull error when limits are reached instead of consuming unlimited memory (a handling sketch follows below)
- Forwarding endpoint URLs are validated with the url crate

For detailed security information, see SECURITY.md.
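For example, an application embedding the library can treat a rejected export as backpressure rather than a fatal error. The sketch below is illustrative only: the helper name, the &OtlpLibrary receiver, and the decision to simply drop the span are assumptions, and distinguishing a full buffer from other failures depends on the crate's actual error type, which is not reproduced in this README.

use otlp_arrow_library::OtlpLibrary;
use opentelemetry_sdk::trace::SpanData;

/// Export a span, treating a failed export (for example the documented
/// BufferFull condition) as backpressure instead of a fatal error.
async fn export_with_backpressure(library: &OtlpLibrary, span: SpanData) {
    if let Err(err) = library.export_trace(span).await {
        // A full buffer means the configured max_trace_buffer_size was hit;
        // dropping (or retrying later) keeps memory bounded.
        eprintln!("span not buffered ({err}); dropping");
    }
}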
The library uses protobuf format (ExportMetricsServiceRequest) for internal metric storage to work around the fact that ResourceMetrics does not implement Clone. When you call export_metrics() with a ResourceMetrics instance:

- The metrics are converted to ExportMetricsServiceRequest, which does implement Clone, enabling proper buffering and forwarding
- The stored request is converted back to ResourceMetrics for file export

MIT OR Apache-2.0