opensearch-api

Crates.io: opensearch-api
lib.rs: opensearch-api
version: 0.1.0
created_at: 2025-08-22 08:18:17.183741+00
updated_at: 2025-08-22 08:18:17.183741+00
description: High-performance REST API gateway for OpenSearch with security, observability and multi-tenant support
homepage: https://gitlab.com/aerunti/opensearch-api
repository: https://gitlab.com/aerunti/opensearch-api
id: 1806060
size: 409,758
owner: aerunti

documentation

https://docs.rs/opensearch-api

README

opensearch-api


High-performance REST API gateway for OpenSearch with enterprise-grade security, observability, and multi-tenant support.

Features

  • 🚀 High Performance - Built with Rust and Axum for maximum throughput
  • 🔐 Secure by Default - API key authentication, rate limiting, and audit logging
  • 📊 Full Observability - Detailed metrics, structured logging, and performance tracking
  • 🏢 Multi-tenant Ready - Complete tenant isolation with configurable access controls
  • 🔄 RESTful API - Clean REST interface for all OpenSearch operations
  • 📝 Audit Logging - Complete audit trail of all API operations
  • 🎯 License Management - Built-in license validation and management
  • 📦 Easy Deployment - Single binary with minimal dependencies

Quick Start

Installation

# Install from crates.io (coming soon)
cargo install opensearch-api

# Build from source
git clone https://gitlab.com/aerunti/opensearch-api.git
cd opensearch-api
cargo build --release

Configuration

Create a config.toml file:

[server]
host = "127.0.0.1"
port = 8080

[opensearch]
url = "http://localhost:9200"
username = "admin"
password = "admin"

[security]
api_key_header = "X-API-Key"
rate_limit = 100  # requests per minute

[logging]
level = "info"
format = "json"

Running

# Using cargo
cargo run --release

# Using the binary
./target/release/opensearch-api

# With custom config
OPENSEARCH_API_CONFIG=./my-config.toml opensearch-api

API Usage

Authentication

All requests require an API key:

curl -H "X-API-Key: your-api-key" http://localhost:8080/health

Index Document

curl -X POST http://localhost:8080/api/v1/index/my-index/_doc \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Example Document",
    "content": "This is an example",
    "timestamp": "2024-01-01T00:00:00Z"
  }'

Search

curl -X POST http://localhost:8080/api/v1/search/my-index \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "query": {
      "match": {
        "content": "example"
      }
    }
  }'
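
The request body is standard OpenSearch query DSL, so query-time options such as pagination and sorting are assumed to pass through to OpenSearch unchanged. A sketch of a paginated query (the sort field matches the timestamp used in the indexing example above):

curl -X POST http://localhost:8080/api/v1/search/my-index \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "query": {
      "match": {
        "content": "example"
      }
    },
    "from": 0,
    "size": 20,
    "sort": [{ "timestamp": "desc" }]
  }'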

Bulk Operations

curl -X POST http://localhost:8080/api/v1/bulk \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/x-ndjson" \
  --data-binary @bulk_data.ndjson
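
A minimal bulk_data.ndjson sketch, assuming the gateway forwards the standard OpenSearch bulk format (one action line followed by one document line, newline-delimited):

{ "index": { "_index": "my-index", "_id": "1" } }
{ "title": "First document", "content": "Bulk example one" }
{ "index": { "_index": "my-index", "_id": "2" } }
{ "title": "Second document", "content": "Bulk example two" }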

Advanced Features

Multi-tenancy

Configure tenant isolation in config.toml:

[tenants]
enabled = true
header = "X-Tenant-ID"
index_prefix = true  # Automatically prefix indices with tenant ID
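
With tenant isolation enabled, requests carry the tenant header alongside the API key; with index_prefix = true, a request against my-index would be routed to an index prefixed with the tenant ID. A hypothetical request (the tenant ID acme is illustrative):

curl -X POST http://localhost:8080/api/v1/search/my-index \
  -H "X-API-Key: your-api-key" \
  -H "X-Tenant-ID: acme" \
  -H "Content-Type: application/json" \
  -d '{ "query": { "match_all": {} } }'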

Rate Limiting

Configure per-API key or global rate limits:

[rate_limiting]
enabled = true
default_limit = 100  # requests per minute
burst_size = 10

[rate_limiting.custom]
"premium-key" = 1000
"basic-key" = 50

Audit Logging

Enable comprehensive audit logging:

[audit]
enabled = true
log_requests = true
log_responses = false  # Set to true for debugging
index = "audit-logs"  # OpenSearch index for audit logs

Metrics

Prometheus-compatible metrics endpoint:

curl http://localhost:8080/metrics

Available metrics:

  • opensearch_api_requests_total - Total requests by endpoint
  • opensearch_api_request_duration_seconds - Request latency
  • opensearch_api_errors_total - Error count by type
  • opensearch_api_active_connections - Current active connections
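
To check a single counter without a full Prometheus setup, the endpoint output can be filtered on the command line using the metric names listed above:

# Show only the request counter series
curl -s http://localhost:8080/metrics | grep "^opensearch_api_requests_total"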

Performance

Benchmarks on a standard development machine:

  • Throughput: 50,000+ requests/second
  • Latency: < 1ms overhead (p99)
  • Memory: ~50MB base memory usage
  • Connections: 10,000+ concurrent connections

Development

Building

# Development build
cargo build

# Release build with optimizations
cargo build --release

# Run tests
cargo test

# Run benchmarks
cargo bench

Contributing

We welcome contributions! Please see CONTRIBUTING.md for details.

  1. Fork the project
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Merge Request

License

This project is dual-licensed under MIT OR Apache-2.0. See LICENSE-MIT and LICENSE-APACHE for details.

Support

Acknowledgments

Built with ❤️ by Aerun using:

  • Rust - Systems programming language
  • Axum - Web framework
  • Tokio - Async runtime
  • OpenSearch - Search and analytics engine

Roadmap

  • GraphQL API support
  • WebSocket streaming
  • Built-in caching layer
  • Kubernetes operator
  • Web UI dashboard
  • SDK for multiple languages
