| Crates.io | lumen-rag |
|---|---|
| lib.rs | lumen-rag |
| version | 0.1.0 |
| created_at | 2025-12-22 16:51:49.77608+00 |
| updated_at | 2025-12-22 16:51:49.77608+00 |
| description | A modular, database-agnostic RAG framework for Rust supporting MongoDB and Qdrant. |
| homepage | https://github.com/Maki-Grz/lumen-rag |
| repository | https://github.com/Maki-Grz/lumen-rag |
| max_upload_size | |
| id | 2000004 |
| size | 185,325 |
Lumen is a high-performance, modular, and database-agnostic RAG (Retrieval-Augmented Generation) framework written in Rust.
It abstracts the complexity of vector storage and retrieval, allowing you to switch seamlessly between MongoDB, CosmosDB, and Qdrant, while providing built-in support for state-of-the-art embeddings (BERT) via candle.
Under the hood, Lumen uses Tokio, Actix-web, and Rayon for async and parallel processing, and candle for embeddings (no external API needed).

Add lumen-rag to your Cargo.toml and select the database backend you need:
[dependencies]
# For MongoDB or CosmosDB support
lumen-rag = { version = "0.1.0", features = ["mongodb"] }
# OR for Qdrant support
lumen-rag = { version = "0.1.0", features = ["qdrant"] }
Lumen uses environment variables for configuration. Create a .env file in your project root:
# --- LLM Settings ---
# URL to your LLM provider (Ollama, OpenAI, Groq, etc.)
LLM_URI=https://api.openai.com/v1/chat/completions
# Model name
MODEL=gpt-3.5-turbo
# (Optional) API Key if using a cloud provider
LLM_API_KEY=sk-your-api-key-here
# --- Database Settings ---
# MongoDB / CosmosDB
COSMOS_URI=mongodb://admin:password@localhost:27017
DATABASE=lumen_db
COLLECTION=knowledge_base
# Qdrant
QDRANT_URI=http://localhost:6334
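At startup, these variables can be read with the standard library. The sketch below is illustrative only (the `env_or` helper and the default values are assumptions, not Lumen's actual configuration code):

```rust
use std::env;

/// Read an environment variable, falling back to a default value.
/// (Hypothetical helper; Lumen's real config loading may differ.)
fn env_or(key: &str, default: &str) -> String {
    env::var(key).unwrap_or_else(|_| default.to_string())
}

fn main() {
    // Mirror the .env keys shown above.
    let llm_uri = env_or("LLM_URI", "https://api.openai.com/v1/chat/completions");
    let model = env_or("MODEL", "gpt-3.5-turbo");
    let qdrant_uri = env_or("QDRANT_URI", "http://localhost:6334");
    println!("LLM: {llm_uri} ({model}), Qdrant: {qdrant_uri}");
}
```

In practice you would load the `.env` file first (for example with a dotenv-style crate) so that these lookups pick up your local settings.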
You can easily start the required databases using Docker:
# Start MongoDB
docker run -d -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=password mongo:latest
# Start Qdrant
docker run -d -p 6333:6333 -p 6334:6334 qdrant/qdrant:latest
We provide full server examples in the examples/ folder.
Using MongoDB:
cargo run --example server_mongo --features mongodb
Using Qdrant:
cargo run --example server_qdrant --features qdrant
curl -X POST http://127.0.0.1:8080/ingest \
-H "Content-Type: application/json" \
-d '{
"text": "Rust is a systems programming language focused on safety and performance.",
"metadata": {"source": "wikipedia"}
}'
curl -X POST http://127.0.0.1:8080/ask \
-H "Content-Type: application/json" \
-d '{
"question": "What is Rust focused on?"
}'
Lumen is built around the VectorStore trait, enabling easy integration of new vector databases.
#[async_trait]
pub trait VectorStore: Send + Sync {
    async fn add_passages(&self, passages: Vec<Passage>) -> Result<Vec<String>>;
    async fn search(&self, query_embedding: &[f32], limit: usize) -> Result<Vec<Passage>>;
}
| Database | Feature Flag | Search Type |
|---|---|---|
| MongoDB | mongodb | Hybrid (Fetch + In-memory Cosine Similarity) |
| CosmosDB | mongodb | Hybrid (Mongo API Compatible) |
| Qdrant | qdrant | Native HNSW Vector Search |
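For intuition, the "in-memory cosine similarity" half of the hybrid MongoDB strategy amounts to scoring fetched embeddings against the query and keeping the best matches. Here is a minimal, synchronous sketch of that ranking step; the function names are illustrative and this is not Lumen's actual implementation:

```rust
/// Cosine similarity between two equal-length embedding vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

/// Rank stored embeddings against a query embedding and return the
/// indices of the top `limit` matches, best first.
fn top_k(query: &[f32], stored: &[Vec<f32>], limit: usize) -> Vec<usize> {
    let mut scored: Vec<(usize, f32)> = stored
        .iter()
        .enumerate()
        .map(|(i, emb)| (i, cosine_similarity(query, emb)))
        .collect();
    // Sort by similarity, highest first.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(limit).map(|(i, _)| i).collect()
}
```

Qdrant, by contrast, performs this ranking natively with an HNSW index, so no fetch-then-score pass is needed on the client side.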
Contributions are welcome! Please feel free to submit a Pull Request.
1. Create your feature branch (git checkout -b feature/AmazingFeature)
2. Commit your changes (git commit -m 'Add some AmazingFeature')
3. Push to the branch (git push origin feature/AmazingFeature)

Distributed under the MIT License. See LICENSE for more information.