| Crates.io | ugnos |
| lib.rs | ugnos |
| version | 0.1.1 |
| created_at | 2025-05-04 13:44:27.494592+00 |
| updated_at | 2025-05-11 00:36:10.626926+00 |
| description | A high-performance, concurrent time-series database core written in Rust, designed for efficient IoT data ingestion, real-time analytics, and monitoring. |
| homepage | https://github.com/bogwi/ugnos |
| repository | https://github.com/bogwi/ugnos |
| max_upload_size | |
| id | 1659646 |
| size | 197,999 |
A high-performance time-series database core implementation in Rust, designed for efficient storage and retrieval of time-series data.
A project like ugnos would be used in scenarios where you need to efficiently store, write, and query large volumes of time-stamped data, especially when high concurrency and performance are required. Here are some concrete use cases and domains where such a project would be valuable:
The database supports two persistence mechanisms:
The WAL logs all insert operations before they are applied to the in-memory database. This ensures that in case of a crash, no data is lost. Key features of the WAL:
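The write-ahead idea described above (record first, apply second) can be sketched with the standard library alone. The line-based log format below is purely illustrative; ugnos's actual WAL encoding is internal to the crate.

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Log an insert to the WAL *before* applying it in memory, so a crash
// between the two steps can be recovered by replaying the log.
// (Hypothetical "timestamp,value" line format, not ugnos's real encoding.)
fn logged_insert(
    wal: &mut impl Write,
    store: &mut Vec<(u64, f64)>,
    ts: u64,
    value: f64,
) -> std::io::Result<()> {
    writeln!(wal, "{ts},{value}")?; // 1. durable record first
    wal.flush()?;
    store.push((ts, value)); // 2. then the in-memory apply
    Ok(())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("ugnos_wal_demo.log");
    let mut wal = OpenOptions::new().create(true).append(true).open(&path)?;
    let mut store = Vec::new();
    logged_insert(&mut wal, &mut store, 1_700_000_000_000, 0.75)?;
    println!("{} point(s) in memory, WAL at {}", store.len(), path.display());
    Ok(())
}
```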
Snapshots provide point-in-time backups of the entire database state. Benefits:
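ugnos's snapshot format is internal, but the crash-safe write pattern that point-in-time snapshots typically rely on can be sketched with std only: write the serialized state to a temporary file, then atomically rename it into place so a crash mid-write never leaves a torn snapshot.

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Atomic snapshot write: temp file + rename. On POSIX filesystems a
// rename within the same directory replaces the target atomically.
fn write_snapshot(dir: &Path, name: &str, bytes: &[u8]) -> std::io::Result<()> {
    fs::create_dir_all(dir)?;
    let tmp = dir.join(format!("{name}.tmp"));
    let mut f = fs::File::create(&tmp)?;
    f.write_all(bytes)?;
    f.sync_all()?; // flush to disk before the rename makes it visible
    fs::rename(&tmp, dir.join(name))?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("ugnos_snapshot_demo");
    write_snapshot(&dir, "snapshot-0001.bin", b"serialized db state")?;
    println!("snapshot written to {}", dir.display());
    Ok(())
}
```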
On startup, the database:
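The recovery step on startup, replaying logged inserts back into the in-memory store, can be sketched as follows. The comma-separated log format is a stand-in for illustration, not ugnos's on-disk format.

```rust
use std::collections::BTreeMap;

// Replay a WAL (here: one "timestamp,value" insert per line) into an
// in-memory, timestamp-ordered store. A BTreeMap keeps points sorted,
// mirroring the sorted columnar storage a time-series core relies on.
fn replay_wal(log: &str) -> BTreeMap<u64, f64> {
    let mut series = BTreeMap::new();
    for line in log.lines().filter(|l| !l.trim().is_empty()) {
        let mut parts = line.splitn(2, ',');
        let ts: u64 = parts.next().unwrap().trim().parse().expect("bad timestamp");
        let val: f64 = parts
            .next()
            .expect("missing value")
            .trim()
            .parse()
            .expect("bad value");
        // Re-apply the logged insert to the in-memory store.
        series.insert(ts, val);
    }
    series
}

fn main() {
    // Simulate crash recovery by replaying a small in-memory "log".
    let log = "1700000000000,0.75\n1700000001000,0.80\n";
    let recovered = replay_wal(log);
    println!("recovered {} points", recovered.len());
}
```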
The database can be configured with the following options:
```rust
DbConfig {
    // Interval between automatic buffer flushes
    flush_interval: Duration::from_secs(1),
    // Directory for persistence files
    data_dir: PathBuf::from("./data"),
    // Maximum number of entries to buffer in WAL before writing to disk
    wal_buffer_size: 1000,
    // Whether to enable WAL
    enable_wal: true,
    // Whether to enable snapshots
    enable_snapshots: true,
    // Interval between automatic snapshots (if enabled)
    snapshot_interval: Duration::from_secs(60 * 15), // 15 minutes
}
```
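As a sketch of how such a configuration might be wired into a database instance; note that the import path and the `DbCore::with_config` constructor name are assumptions here, so check the crate docs for the actual API:

```rust
use std::path::PathBuf;
use std::time::Duration;
use ugnos::{DbConfig, DbCore}; // import path assumed

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = DbConfig {
        flush_interval: Duration::from_secs(1),
        data_dir: PathBuf::from("./data"),
        wal_buffer_size: 1000,
        enable_wal: true,
        enable_snapshots: true,
        snapshot_interval: Duration::from_secs(60 * 15),
    };
    // Assumed constructor; the crate may expose a different name.
    let db = DbCore::with_config(config)?;
    drop(db); // the background flush thread shuts down on drop
    Ok(())
}
```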
src/: Contains the core library code.
- lib.rs: Main library entry point.
- core.rs: DbCore struct, main API, background flush thread.
- storage.rs: InMemoryStorage implementation (columnar, sorted).
- buffer.rs: WriteBuffer implementation (sharded).
- query.rs: Parallel query execution logic.
- types.rs: Core data types (Timestamp, Value, TagSet, DataPoint, TimeSeriesChunk).
- error.rs: Custom DbError enum.
- index.rs: (Placeholder) Intended for indexing logic.
- utils.rs: (Placeholder) Utility functions.

tests/: Integration tests.
benches/: Criterion performance benchmarks.
Cargo.toml: Project manifest and dependencies.
README.md: This file.

Prerequisites:
- rustup (https://rustup.rs/) to install the Rust toolchain.
- Build essentials (e.g. `sudo apt-get update && sudo apt-get install build-essential` on Debian/Ubuntu).

Build:
cargo build --release
Run Tests:
cargo test --release
Run Benchmarks (WAL enabled, default):
cargo bench
Run Benchmarks (WAL disabled, try it first!):
NOWAL=1 cargo bench
Benchmark results:
```
insert_single                           time: [411.73 ns 413.21 ns 415.00 ns]
insert_single_no_wal                    time: [333.63 ns 340.32 ns 346.92 ns]
query_operations/query_range_no_tags    time: [349.10 µs 350.12 µs 351.24 µs]
query_operations/query_range_with_tag   time: [383.56 µs 385.10 µs 387.02 µs]
```
Benchmark results will be saved in target/criterion/. The benchmark with WAL enabled (cargo bench) is hard on system resources: it creates a ./data directory containing snapshots (not used for benchmarks) and about 10 GB of WAL files, and cleans them up after the run completes. With ugnos you can specify the directory where persistent snapshots and WAL files are stored; see DbConfig, examples/persistence_demo.rs, and tests/integration_tests.rs for more details.
Run examples (persistence_demo.rs):
cargo run --example persistence_demo
This example demonstrates how to create a database with persistence enabled, insert data, and query it. It also shows how to configure the database with different options; see examples/persistence_demo.rs for more details. It creates a ./demo_data directory with snapshot and WAL files (a very small footprint) to illustrate the persistence mechanism.
Doc-tests: Doc-tests will be added once the full advanced API is implemented and the project reaches version 1.0.0. Check the whitepaper for more details on how it will look in the future.
```rust
use rust_tsdb_core::{DbCore, TagSet};
use std::collections::HashMap;
use std::time::Duration;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a new DB core with a 500ms flush interval
    let db = DbCore::new(Duration::from_millis(500));

    // Prepare tags
    let mut tags = TagSet::new();
    tags.insert("host".to_string(), "server1".to_string());
    tags.insert("region".to_string(), "us-east".to_string());

    // Insert data points
    db.insert("cpu_usage", 1700000000000, 0.75, tags.clone())?;
    db.insert("cpu_usage", 1700000001000, 0.80, tags.clone())?;
    db.insert("cpu_usage", 1700000002000, 0.78, tags.clone())?;

    // Trigger a manual flush (optional, background flush also runs)
    db.flush()?;

    // Wait for the flush to likely complete
    std::thread::sleep(Duration::from_millis(100));

    // Query data
    let query_tags = tags.clone(); // Or a subset
    let results = db.query(
        "cpu_usage",
        1700000000000..1700000003000, // Time range (start inclusive, end exclusive)
        Some(&query_tags),
    )?;

    println!("Query Results:");
    for (timestamp, value) in results {
        println!("  Timestamp: {}, Value: {}", timestamp, value);
    }

    Ok(())
    // DbCore automatically handles shutdown of the flush thread when it goes out of scope
}
```
This project is licensed under either of
at your option.