| Field | Value |
|---|---|
| Crates.io | llm-observatory-storage |
| lib.rs | llm-observatory-storage |
| version | 0.1.1 |
| created_at | 2025-11-06 01:17:13.419432+00 |
| updated_at | 2025-11-06 01:17:13.419432+00 |
| description | Storage layer for LLM Observatory - handles persistence of traces, metrics, and logs |
| homepage | https://llm-observatory.io |
| repository | https://github.com/globalbusinessadvisors/llm-observatory |
| max_upload_size | |
| id | 1918934 |
| size | 999,105 |
The storage crate provides the persistence layer for LLM Observatory, handling all database operations for traces, metrics, and logs.
```
storage/
├── src/
│   ├── lib.rs           # Main entry point and exports
│   ├── config.rs        # Database configuration
│   ├── pool.rs          # Connection pool management
│   ├── error.rs         # Storage-specific errors
│   ├── models/          # Data models
│   │   ├── trace.rs     # Trace, span, and event models
│   │   ├── metric.rs    # Metric and data point models
│   │   └── log.rs       # Log record models
│   ├── repositories/    # Query interfaces (read operations)
│   │   ├── trace.rs     # Trace queries
│   │   ├── metric.rs    # Metric queries
│   │   └── log.rs       # Log queries
│   └── writers/         # Batch writers (write operations)
│       ├── trace.rs     # Trace batch insertion
│       ├── metric.rs    # Metric batch insertion
│       └── log.rs       # Log batch insertion
├── migrations/          # SQLx database migrations
└── Cargo.toml
```
```rust
use llm_observatory_storage::{StorageConfig, StoragePool};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load configuration
    let config = StorageConfig::from_env()?;

    // Create connection pool
    let pool = StoragePool::new(config).await?;

    // Run migrations
    pool.run_migrations().await?;

    Ok(())
}
```
For LLM spans with proper trace UUID resolution:
```rust
use llm_observatory_storage::writers::TraceWriter;
use llm_observatory_core::span::LlmSpan;

// Create writer
let writer = TraceWriter::new(pool.clone());

// Write LLM span with automatic trace UUID resolution
let trace_span = writer.write_span_from_llm(llm_span).await?;

// Flush buffered data
writer.flush().await?;
```
See UUID Resolution Guide for details.
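To build intuition for what "trace UUID resolution" can mean: an OpenTelemetry trace ID is 128 bits, the same width as a UUID, so one plausible strategy is a direct byte-for-byte mapping. The sketch below is illustrative only and uses no project code; it is not necessarily the scheme `TraceWriter` implements (see the UUID Resolution Guide for the actual behavior).

```rust
/// Format a 16-byte trace ID as a UUID-shaped string (8-4-4-4-12 hex groups).
/// Illustrative sketch only; the crate's actual resolution logic may differ.
fn trace_id_to_uuid(trace_id: [u8; 16]) -> String {
    let hex: String = trace_id.iter().map(|b| format!("{:02x}", b)).collect();
    format!(
        "{}-{}-{}-{}-{}",
        &hex[0..8], &hex[8..12], &hex[12..16], &hex[16..20], &hex[20..32]
    )
}

fn main() {
    // Example trace ID from the W3C Trace Context examples.
    let id = [
        0x4b, 0xf9, 0x2f, 0x35, 0x77, 0xb3, 0x4d, 0xa6,
        0xa3, 0xce, 0x92, 0x9d, 0x0e, 0x0e, 0x47, 0x36,
    ];
    println!("{}", trace_id_to_uuid(id)); // 4bf92f35-77b3-4da6-a3ce-929d0e0e4736
}
```

A deterministic mapping like this keeps span writes idempotent: the same incoming trace ID always resolves to the same database UUID.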
To write pre-built traces directly:

```rust
use llm_observatory_storage::writers::TraceWriter;

// Create writer
let writer = TraceWriter::new(pool.clone());

// Write traces
writer.write_trace(trace).await?;

// Flush buffered data
writer.flush().await?;
```
For maximum throughput (10-100x faster than INSERT):
```rust
use llm_observatory_storage::writers::CopyWriter;

// Get a tokio-postgres client for COPY operations
let (client, _handle) = pool.get_tokio_postgres_client().await?;

// Write large batches using the COPY protocol
let traces = generate_traces(10_000);
let rows = CopyWriter::write_traces(&client, traces).await?;

// Throughput: ~50,000-100,000 rows/sec vs ~5,000-10,000 with INSERT
```
See COPY Protocol Guide for detailed information.
To query stored traces:

```rust
use llm_observatory_storage::repositories::TraceRepository;

// Create repository
let repo = TraceRepository::new(pool.clone());

// Query traces
let traces = repo.list(filters).await?;
```
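The README does not show what `filters` looks like, so the sketch below is a hypothetical shape, not the crate's real type: query filters for traces are commonly a struct of optional predicates plus a result cap. Every name and field here (`TraceFilters`, `service_name`, `min_duration_ms`, `limit`, `matches`) is an illustrative assumption.

```rust
// Hypothetical filter shape; the crate's actual `filters` type may differ.
#[derive(Default, Debug)]
struct TraceFilters {
    service_name: Option<String>, // only traces from this service
    min_duration_ms: Option<u64>, // drop traces faster than this
    limit: u32,                   // cap the result set size
}

// A predicate of the kind a `list(filters)` call might apply:
// a `None` field means "no constraint".
fn matches(f: &TraceFilters, service: &str, duration_ms: u64) -> bool {
    f.service_name.as_deref().map_or(true, |s| s == service)
        && f.min_duration_ms.map_or(true, |min| duration_ms >= min)
}

fn main() {
    let filters = TraceFilters {
        service_name: Some("chat-api".into()),
        min_duration_ms: Some(500),
        limit: 100,
    };
    // Slow trace from the matching service passes; a fast one does not.
    println!("{}", matches(&filters, "chat-api", 750));
    println!("{}", matches(&filters, "chat-api", 100));
}
```

Defaulting every field to "unconstrained" keeps `TraceFilters::default()` equivalent to "list everything", which is a common convention for query-filter structs.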
The storage crate can be configured via environment variables or configuration files:
Environment variables:

- `DATABASE_URL` - PostgreSQL connection string
- `REDIS_URL` - Redis connection string (optional)
- `DB_MAX_CONNECTIONS` - Maximum pool size (default: 50)
- `DB_MIN_CONNECTIONS` - Minimum pool size (default: 5)

Or via a YAML configuration file:

```yaml
postgres:
  host: localhost
  port: 5432
  database: llm_observatory
  username: postgres
  password: secret
  ssl_mode: prefer
redis:
  url: redis://localhost:6379/0
pool:
  max_connections: 50
  min_connections: 5
  connect_timeout_secs: 10
```
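As a rough illustration of how an environment-based loader can apply the documented defaults, here is a self-contained sketch. It is not the crate's `StorageConfig` implementation; `PoolSettings` and `parse_or` are names invented for this example.

```rust
use std::env;

// Illustrative stand-in for the pool portion of the configuration;
// not the crate's actual `StorageConfig` type.
#[derive(Debug)]
struct PoolSettings {
    max_connections: u32,
    min_connections: u32,
}

/// Parse an optional string value, falling back to the documented default
/// when the variable is unset or not a valid number.
fn parse_or(value: Option<String>, default: u32) -> u32 {
    value.and_then(|v| v.parse().ok()).unwrap_or(default)
}

fn pool_settings_from_env() -> PoolSettings {
    PoolSettings {
        max_connections: parse_or(env::var("DB_MAX_CONNECTIONS").ok(), 50),
        min_connections: parse_or(env::var("DB_MIN_CONNECTIONS").ok(), 5),
    }
}

fn main() {
    // With neither variable set, the documented defaults (50/5) apply.
    println!("{:?}", pool_settings_from_env());
}
```

Falling back silently on a malformed value is a simplification; a real loader would more likely surface a configuration error instead.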
The schema includes the following tables:

- `traces` - Main trace records
- `trace_spans` - Individual spans within traces
- `trace_events` - Events attached to spans
- `metrics` - Metric definitions
- `metric_data_points` - Time series data points
- `logs` - Log records with full-text search support

Run migrations with:

```shell
sqlx migrate run --source crates/storage/migrations
```
```shell
# Unit tests
cargo test -p llm-observatory-storage

# Integration tests (requires PostgreSQL)
cargo test -p llm-observatory-storage --features test-integration
```
Configuration is loaded via the `from_env()` method. See `.env.example` for a complete list of all configuration options.
Run the test connection binary to verify your setup:
```shell
# Set minimal required config
export DB_PASSWORD=postgres

# Run the test
cargo run --bin test_connection

# With debug logging
RUST_LOG=debug cargo run --bin test_connection
```
The test binary will connect using your configuration and report whether the setup works.
Run benchmarks to compare INSERT vs COPY performance:
```shell
export DATABASE_URL="postgres://postgres:password@localhost/llm_observatory"
cargo bench --bench copy_vs_insert
```
Expected results: COPY should outperform INSERT by roughly 10-100x, on the order of ~50,000-100,000 rows/sec versus ~5,000-10,000.