| Crates.io | seglog |
| lib.rs | seglog |
| version | 0.1.0 |
| created_at | 2025-11-26 09:40:03.104532+00 |
| updated_at | 2025-11-26 09:40:03.104532+00 |
| description | High-performance segment log with CRC32C validation - optimized for event sourcing and append-only storage |
| homepage | https://github.com/tqwewe/sierradb |
| repository | https://github.com/tqwewe/sierradb |
| max_upload_size | |
| id | 1951193 |
| size | 107,927 |
A simple, high-performance segment log implementation for Rust.
seglog provides low-level read and write operations for fixed-size segment files with built-in CRC32C validation. It's designed for event sourcing systems, write-ahead logs, and other append-only storage use cases.
```rust
use seglog::write::Writer;
use seglog::read::{Reader, ReadHint};

// Create a 1MB segment
let mut writer = Writer::create("segment.log", 1024 * 1024, 0)?;

// Append records
let (offset, _) = writer.append(b"event data")?;
writer.sync()?; // Flush to disk

// Read concurrently
let flushed = writer.flushed_offset();
let mut reader = Reader::open("segment.log", Some(flushed))?;
let data = reader.read_record(offset, ReadHint::Random)?;
assert_eq!(&*data, b"event data");
```
Each record consists of an 8-byte header followed by variable-length data:
```text
┌─────────────┬─────────────┬────────────────┐
│ Length (4B) │ CRC32C (4B) │ Data (N bytes) │
└─────────────┴─────────────┴────────────────┘
```
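The framing above can be sketched in plain Rust. This is a minimal illustration, not seglog's actual code: the little-endian byte order, the assumption that the CRC covers only the data, and the `crc32c`/`encode_record`/`decode_record` helpers are all assumptions made here for the sketch.

```rust
// Bitwise CRC-32C (Castagnoli polynomial, reflected): slow but dependency-free.
fn crc32c(data: &[u8]) -> u32 {
    let mut crc = !0u32;
    for &b in data {
        crc ^= b as u32;
        for _ in 0..8 {
            crc = if crc & 1 != 0 { (crc >> 1) ^ 0x82F6_3B78 } else { crc >> 1 };
        }
    }
    !crc
}

// Frame one record: [len (4B LE)][crc32c (4B LE)][data]. Layout assumed.
fn encode_record(data: &[u8]) -> Vec<u8> {
    let mut rec = Vec::with_capacity(8 + data.len());
    rec.extend_from_slice(&(data.len() as u32).to_le_bytes());
    rec.extend_from_slice(&crc32c(data).to_le_bytes());
    rec.extend_from_slice(data);
    rec
}

// Parse and validate a record, returning the data only on a CRC match.
fn decode_record(buf: &[u8]) -> Option<&[u8]> {
    let len = u32::from_le_bytes(buf.get(0..4)?.try_into().ok()?) as usize;
    let crc = u32::from_le_bytes(buf.get(4..8)?.try_into().ok()?);
    let data = buf.get(8..8 + len)?;
    (crc32c(data) == crc).then_some(data)
}

fn main() {
    let rec = encode_record(b"event data");
    assert_eq!(rec.len(), 8 + 10); // 8-byte header + payload
    assert_eq!(decode_record(&rec), Some(&b"event data"[..]));
    println!("record framing round-trips");
}
```

A corrupted byte flips the CRC comparison, so `decode_record` returns `None` instead of handing back damaged data.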
Two read hints are available:

- `ReadHint::Sequential` - uses a 64KB read-ahead buffer for streaming access
- `ReadHint::Random` - optimistic reads (header + 2KB) to reduce syscalls

For random access, the reader performs a single syscall to read the header plus 2KB of data. Since most events in event sourcing are small (< 2KB), this eliminates one syscall per read, improving performance by ~40%.
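The random-access fast path can be illustrated with a standalone sketch using only the standard library. The `read_record_optimistic` function, the zeroed CRC field, and the little-endian header layout are assumptions for illustration here, not seglog's internals:

```rust
use std::fs::File;
use std::os::unix::fs::FileExt;

const HEADER: usize = 8; // assumed: 4-byte LE length + 4-byte CRC32C
const PROBE: usize = 2048; // optimistic 2KB data probe

// Hypothetical sketch: fetch the header plus 2KB in one positioned read;
// only records larger than the probe need a second syscall.
fn read_record_optimistic(file: &File, offset: u64) -> std::io::Result<Vec<u8>> {
    let mut buf = vec![0u8; HEADER + PROBE];
    let n = file.read_at(&mut buf, offset)?; // single pread(2)
    let len = u32::from_le_bytes(buf[0..4].try_into().unwrap()) as usize;
    if HEADER + len <= n {
        // Fast path: the whole record fit inside the optimistic probe.
        return Ok(buf[HEADER..HEADER + len].to_vec());
    }
    // Slow path: the record exceeds the probe; read the full payload.
    let mut data = vec![0u8; len];
    file.read_exact_at(&mut data, offset + HEADER as u64)?;
    Ok(data)
}

fn main() -> std::io::Result<()> {
    // Write one framed record to a temp file (CRC field zeroed in this sketch).
    let path = std::env::temp_dir().join("seglog_readhint_demo.log");
    let payload = b"event data";
    let mut rec = (payload.len() as u32).to_le_bytes().to_vec();
    rec.extend_from_slice(&[0u8; 4]);
    rec.extend_from_slice(payload);
    std::fs::write(&path, &rec)?;

    let file = File::open(&path)?;
    assert_eq!(read_record_optimistic(&file, 0)?, payload);
    println!("fast-path read ok");
    Ok(())
}
```

For a sub-2KB record the fast path issues exactly one `pread`, which is where the claimed syscall saving comes from.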
Reserve space at the beginning of segments for application-specific headers:
```rust
const HEADER_SIZE: u64 = 64;
let mut writer = Writer::create("segment.log", 1024 * 1024, HEADER_SIZE)?;

// Write header data
writer.file().write_all_at(b"MAGIC", 0)?;

// Records automatically start after the header
writer.append(b"data")?;
```
The `FlushedOffset` provides atomic coordination between writers and readers:

- updated atomically when the writer calls `sync()`
- shared via `Arc` for efficient cloning across threads

This ensures readers never see partial writes or corrupted data.
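The coordination pattern can be sketched with `Arc<AtomicU64>` from the standard library. This `FlushedOffset` type and its `publish`/`load` methods are illustrative assumptions, not seglog's exported API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Hypothetical sketch: a shared flushed-offset cell. The writer stores the
// durable offset with Release ordering after fsync; readers load it with
// Acquire ordering and never read past it.
#[derive(Clone)]
struct FlushedOffset(Arc<AtomicU64>);

impl FlushedOffset {
    fn new() -> Self {
        FlushedOffset(Arc::new(AtomicU64::new(0)))
    }
    // Called by the writer after sync() has made bytes up to `offset` durable.
    fn publish(&self, offset: u64) {
        self.0.store(offset, Ordering::Release);
    }
    // Called by readers to learn how far they may safely read.
    fn load(&self) -> u64 {
        self.0.load(Ordering::Acquire)
    }
}

fn main() {
    let flushed = FlushedOffset::new();
    let reader_view = flushed.clone(); // cheap Arc clone for another thread
    let handle = thread::spawn(move || {
        // A reader only trusts bytes below the published offset.
        reader_view.load()
    });
    flushed.publish(4096);
    let seen = handle.join().unwrap();
    assert!(seen == 0 || seen == 4096); // racy, but never a torn value
    println!("flushed offset published");
}
```

Because the offset is a single atomic word, a reader either sees the old durable boundary or the new one, never a torn in-between value, which is what keeps partially written records invisible.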
Licensed under either of:
at your option.