llm-sentinel-api

Crates.io: llm-sentinel-api
lib.rs: llm-sentinel-api
version: 0.1.0
created_at: 2025-11-06 08:22:26.491236+00
updated_at: 2025-11-06 08:22:26.491236+00
description: REST API server with health checks, Prometheus metrics, and query endpoints for LLM-Sentinel
homepage: https://github.com/globalbusinessadvisors/llm-sentinel
repository: https://github.com/globalbusinessadvisors/llm-sentinel
id: 1919284
size: 140,010
owner: GBA (globalbusinessadvisors)

README

llm-sentinel-api

REST API server with health checks, metrics, and query endpoints for LLM-Sentinel.

Overview

Production-ready REST API built with Axum:

  • Health Endpoints: Liveness and readiness probes
  • Metrics: Prometheus metrics exporter
  • Query API: Historical telemetry and anomaly queries
  • CORS: Configurable cross-origin support

Features

  • Sub-10ms request latency
  • Prometheus metrics integration
  • Health check endpoints for Kubernetes
  • Query API for historical data
  • Request validation and error handling
  • Rate limiting support
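The liveness and readiness endpoints are designed to back Kubernetes health checks. A minimal pod-spec sketch wiring them up, assuming the server listens on port 8080 as in the usage example (probe timings are illustrative, not prescribed by the crate):

```yaml
# Hypothetical Kubernetes container probe configuration.
# Paths match the crate's health endpoints; port 8080 matches
# the bind_addr used in the README's example.
livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```

With this in place, Kubernetes restarts the container when the liveness probe fails and withholds traffic until the readiness probe succeeds.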

Usage

[dependencies]
llm-sentinel-api = "0.1.0"

Example

use llm_sentinel_api::{ApiServer, ApiConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = ApiConfig {
        bind_addr: "0.0.0.0:8080".parse()?,
        enable_cors: true,
        ..Default::default()
    };

    let server = ApiServer::new(config);
    server.start().await?;

    Ok(())
}

Endpoints

  • GET /health/live - Liveness probe
  • GET /health/ready - Readiness probe
  • GET /metrics - Prometheus metrics
  • GET /api/v1/telemetry - Query telemetry
  • GET /api/v1/anomalies - Query anomalies
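Since GET /metrics exposes a Prometheus exporter, the server can be scraped with a standard job entry. A minimal prometheus.yml sketch, assuming the server is reachable at localhost:8080 (the bind address from the usage example):

```yaml
# Hypothetical Prometheus scrape configuration for llm-sentinel-api.
scrape_configs:
  - job_name: "llm-sentinel-api"
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:8080"]
```

The job_name and target are assumptions for illustration; only the /metrics path comes from the endpoint list above.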

License

Apache-2.0
