| Crates.io | aurora-db |
| lib.rs | aurora-db |
| version | 0.4.1 |
| created_at | 2025-03-17 06:47:31.591901+00 |
| updated_at | 2025-11-11 00:25:58.440332+00 |
| description | A lightweight, real-time embedded database with built-in PubSub, reactive queries, background workers, and intelligent caching. |
| homepage | https://github.com/bethel-nz/aurora |
| repository | https://github.com/bethel-nz/aurora |
| max_upload_size | |
| id | 1595132 |
| size | 675,206 |
A lightweight, real-time embedded database designed for modern applications
Most embedded databases force you to choose: either simple key-value storage with manual indexing and caching, or heavy SQL databases with complex setup. I wanted something different.
Aurora was born from a simple need: a database that just works for building real-time applications. No external services, no complex configuration, no choosing between twenty different storage engines. Just install it, open it, and start building.
Real-time by default. Data changes should propagate instantly. Your UI shouldn't poll—it should react. Background jobs should process reliably without external queues. This isn't optional; it's how modern applications work.
Lightweight without compromise. Embedded doesn't mean primitive. Aurora handles indexing, caching, pub/sub, reactive queries, and background workers—all built-in. You get production-grade features without the operational overhead.
Performance through intelligence. The hybrid hot/cold architecture wasn't an accident. Frequently accessed data lives in memory (200K ops/sec), everything else persists to disk (10K ops/sec). Schema-aware selective indexing means Aurora only indexes the fields you actually query. Smart defaults, intelligent caching, zero configuration to get started.
Every feature in Aurora solves a real problem: PubSub for propagating changes, reactive queries for keeping UI state live, background workers for reliable async processing without an external queue, computed fields for derived values, and built-in caching for speed.
This wasn't scope creep—it was intentional. Building real-time apps requires all these pieces, and they should work together seamlessly.
The current bottleneck is in-memory index management. I'm looking into how DiceDB, Redis, and RocksDB handle high-performance in-memory structures and exploring similar approaches for Aurora's indices.
[dependencies]
aurora-db = "0.4.1"
use aurora_db::{Aurora, FieldType, Value};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Open database
let db = Aurora::open("myapp.db")?;
// Create a collection
db.new_collection("users", vec![
("name", FieldType::String, false),
("email", FieldType::String, true), // unique
("age", FieldType::Int, false),
])?;
// Insert a document
let user_id = db.insert_into("users", vec![
("name", Value::String("Alice".to_string())),
("email", Value::String("alice@example.com".to_string())),
("age", Value::Int(30)),
]).await?;
// Query documents
let users = db.query("users")
.filter(|f| f.gt("age", 25))
.collect()
.await?;
println!("Found {} users over 25", users.len());
Ok(())
}
Define structured collections with type-safe fields and unique constraints.
db.new_collection("products", vec![
("sku", FieldType::String, true), // unique
("name", FieldType::String, false),
("price", FieldType::Float, false),
("in_stock", FieldType::Bool, false),
])?;
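To make the unique constraint on `sku` concrete, here is a hedged sketch. It assumes `Value::Float` and `Value::Bool` variants corresponding to `FieldType::Float` and `FieldType::Bool`, and that inserting a duplicate value into a unique field comes back as an `Err` rather than silently overwriting the existing document.

// First insert with a fresh SKU goes through.
db.insert_into("products", vec![
    ("sku", Value::String("SKU-001".to_string())),
    ("name", Value::String("Widget".to_string())),
    ("price", Value::Float(9.99)),   // Value::Float assumed to exist
    ("in_stock", Value::Bool(true)), // Value::Bool assumed to exist
]).await?;

// Reusing the unique "sku" is expected to fail.
let duplicate = db.insert_into("products", vec![
    ("sku", Value::String("SKU-001".to_string())),
    ("name", Value::String("Widget, again".to_string())),
    ("price", Value::Float(12.50)),
    ("in_stock", Value::Bool(false)),
]).await;
assert!(duplicate.is_err(), "duplicate value for a unique field should be rejected");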
Filter, sort, paginate, and search your data with an intuitive API.
let products = db.query("products")
.filter(|f| f.eq("in_stock", true) && f.lt("price", 100.0))
.order_by("price", true)
.limit(10)
.collect()
.await?;
Subscribe to data changes with the PubSub system.
let mut subscription = db.subscribe("orders", None).await?;
while let Ok(event) = subscription.recv().await {
println!("Order changed: {:?}", event);
}
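To see where those events come from, here is a minimal end-to-end sketch that pairs the subscription with an ordinary write. The "orders" schema and field names are illustrative, and it assumes that inserts into a subscribed collection are what get published:

// Subscribe before writing so the event isn't missed.
let mut subscription = db.subscribe("orders", None).await?;

// Listen on a separate task so the main flow can keep writing.
let listener = tokio::spawn(async move {
    if let Ok(event) = subscription.recv().await {
        println!("first order event: {:?}", event);
    }
});

// An ordinary insert into the subscribed collection triggers the event above.
db.insert_into("orders", vec![
    ("status", Value::String("pending".to_string())),
    ("total", Value::Int(4999)),
]).await?;

listener.await?;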
Automatically update query results when data changes.
let active_users = db.reactive_query("users")
.filter(|f| f.eq("online", true))
.build()
.await?;
// Results automatically update when users go online/offline
let current_users = active_users.get_results().await?;
Process tasks asynchronously with automatic retries.
// Define a job handler
struct EmailHandler;
#[async_trait]
impl JobHandler for EmailHandler {
async fn handle(&self, job: &Job) -> JobResult {
send_email(&job.payload).await?;
Ok(())
}
}
// Start worker
let mut executor = WorkerExecutor::new(db.clone(), 4);
executor.register_handler("send_email", Box::new(EmailHandler));
executor.start().await?;
// Enqueue job
db.enqueue_job("send_email", payload, None, Priority::High).await?;
Derive values automatically from document data.
let mut registry = ComputedFieldsRegistry::new();
// Full name from first + last
registry.register(
"users",
"full_name",
Expression::Concat {
fields: vec!["first_name".to_string(), "last_name".to_string()],
separator: " ".to_string(),
},
);
db.set_computed_fields(registry);
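To see the derived value end to end, a hedged sketch: it assumes the "users" collection here has first_name and last_name fields, that computed fields are materialized into documents returned by queries, and that the returned document type implements Debug.

// Insert a document; the registered expression derives full_name from it.
db.insert_into("users", vec![
    ("first_name", Value::String("Ada".to_string())),
    ("last_name", Value::String("Lovelace".to_string())),
]).await?;

// If computed fields are materialized into query results, full_name shows up here.
for doc in db.query("users").collect().await? {
    println!("{:?}", doc);
}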
Optimized architecture with hot/cold storage and intelligent caching.
| Operation | Throughput |
|---|---|
| Hot Cache Read | ~200K ops/sec |
| Indexed Insert | ~17K ops/sec |
| Indexed Query | ~50K ops/sec |
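For a rough feel on your own hardware, here is a hedged micro-benchmark sketch that uses only APIs shown elsewhere in this README; it is single-threaded and unbatched, so it will understate what batch_insert and a warm hot cache can reach.

use std::time::Instant;
use aurora_db::{Aurora, FieldType, Value};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let db = Aurora::open("bench.db")?;
    db.new_collection("users", vec![
        ("name", FieldType::String, false),
        ("email", FieldType::String, true), // unique
        ("age", FieldType::Int, false),
    ])?;
    // Index the field the queries below filter on.
    db.create_index("users", "age").await?;

    // Unbatched inserts; batch_insert should be faster.
    let n = 10_000;
    let start = Instant::now();
    for i in 0..n {
        db.insert_into("users", vec![
            ("name", Value::String(format!("user-{i}"))),
            ("email", Value::String(format!("user-{i}@example.com"))),
            ("age", Value::Int(30)),
        ]).await?;
    }
    println!("insert: {:.0} ops/sec", n as f64 / start.elapsed().as_secs_f64());

    // Repeated indexed reads over the populated collection.
    let runs = 1_000;
    let start = Instant::now();
    for _ in 0..runs {
        let _ = db.query("users")
            .filter(|f| f.gt("age", 25))
            .limit(10)
            .collect()
            .await?;
    }
    println!("query: {:.0} ops/sec", runs as f64 / start.elapsed().as_secs_f64());
    Ok(())
}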
Aurora uses a hybrid storage architecture:
┌────────────────────────────────────────────┐
│             Application Layer              │
├────────────────────────────────────────────┤
│   PubSub │ Reactive │ Workers │ Computed   │
├────────────────────────────────────────────┤
│                Query Engine                │
├────────────────────────────────────────────┤
│   Hot Cache (In-Memory)    │    Indices    │
├────────────────────────────┴───────────────┤
│       Cold Storage (Sled - On Disk)        │
└────────────────────────────────────────────┘
Customize Aurora's behavior with AuroraConfig:
use aurora_db::{Aurora, AuroraConfig, EvictionPolicy};
let config = AuroraConfig {
// Cache settings
hot_cache_size_mb: 256,
cold_cache_capacity_mb: 512,
eviction_policy: EvictionPolicy::LRU,
// Write buffering
enable_write_buffering: true,
write_buffer_size: 1000,
write_buffer_flush_interval_ms: 100,
// Index settings
max_index_entries_per_field: 100000,
// Cleanup intervals
hot_cache_cleanup_interval_secs: 300,
cold_flush_interval_ms: 1000,
..Default::default()
};
let db = Aurora::open_with_config("mydb.db", config)?;
// DO: Index frequently queried fields
db.create_index("users", "email").await?;
db.create_index("orders", "customer_id").await?;
// DON'T: Index every field
// Only index what you query
// DO: Use projections and limits
let users = db.query("users")
.select(vec!["id", "name"]) // Only needed fields
.limit(100) // Bounded result set
.collect()
.await?;
// DON'T: Unbounded queries
let users = db.query("users").collect().await?;
// DO: Use batch operations
db.batch_insert("logs", log_entries).await?;
// DON'T: Individual inserts in loops
for entry in log_entries {
db.insert_into("logs", entry).await?;
}
// DO: Use reactive queries for UI state
let online_users = ReactiveState::new(
db.clone(),
"users",
|f| f.eq("online", true)
).await?;
// DON'T: Poll the database
loop {
let users = db.query("users")
.filter(|f| f.eq("online", true))
.collect()
.await?;
tokio::time::sleep(Duration::from_secs(1)).await;
}
Check out the examples/ directory for complete examples:
reactive_demo.rs - Reactive queries in action
contains_query_demo.rs - Advanced querying
indexing_performance.rs - Performance testing
index_demo.rs - Index usage patterns

Run an example:
cargo run --example reactive_demo
For detailed API documentation:
cargo doc --open
MIT License - see LICENSE file for details
Ready to build? Start with the Schema Management guide!