Crates.io | vsdbsled |
lib.rs | vsdbsled |
version | 0.34.7-p1 |
source | src |
created_at | 2021-12-14 06:39:48.164703 |
updated_at | 2022-04-03 10:15:02.768352 |
description | Lightweight high-performance pure-rust transactional embedded database. |
homepage | https://github.com/spacejam/sled |
repository | https://github.com/spacejam/sled |
max_upload_size | |
id | 497585 |
size | 1,153,973 |
A lightweight pure-rust high-performance transactional embedded database.
let tree = sled::open("/tmp/welcome-to-sled").expect("open");
// insert and get, similar to std's BTreeMap
tree.insert("KEY1", "VAL1");
assert_eq!(tree.get(&"KEY1"), Ok(Some(sled::IVec::from("VAL1"))));
// range queries
for kv in tree.range("KEY1".."KEY9") {}
// deletion
tree.remove(&"KEY1");
// atomic compare and swap
tree.compare_and_swap("KEY1", Some("VAL1"), Some("VAL2"));
// block until all operations are stable on disk
// (flush_async also available to get a Future)
tree.flush();
If you would like to work with structured data without paying expensive deserialization costs, check out the structured example!
what's the trade-off? sled uses too much disk space sometimes. this will improve significantly before 1.0.
API similar to a threadsafe BTreeMap<[u8], [u8]>, with optional compression (enabled via the compression build feature).

If you want to store numerical keys in a way that will play nicely with sled's iterators and ordered operations, please remember to store your numerical items in big-endian form. Little endian (the default of many things) will often appear to be doing the right thing until you start working with more than 256 items (more than 1 byte), causing lexicographic ordering of the serialized bytes to diverge from the lexicographic ordering of their deserialized numerical form. Rust's integer types provide to_be_bytes and from_be_bytes methods for exactly this.
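A minimal sketch of the big-endian key pattern, assuming a throwaway database path; the key range and values here are arbitrary:

use std::convert::TryInto;

fn main() -> sled::Result<()> {
    let tree = sled::open("/tmp/big-endian-keys")?;

    // to_be_bytes turns a u64 into a fixed-width, big-endian [u8; 8],
    // so byte-wise (lexicographic) order matches numerical order
    for n in 0u64..1_000 {
        tree.insert(n.to_be_bytes(), "some value")?;
    }

    // range queries over the encoded keys now visit items in numerical order
    for item in tree.range(0u64.to_be_bytes()..10u64.to_be_bytes()) {
        let (key, _value) = item?;
        let n = u64::from_be_bytes(key.as_ref().try_into().expect("8-byte key"));
        println!("{}", n);
    }

    Ok(())
}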
If your dataset resides entirely in cache (achievable at startup by setting the cache to a large enough value and performing a full iteration), then all reads and writes are non-blocking and async-friendly, without needing to use Futures or an async runtime.
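As a sketch of how a large cache (and the optional compression feature) might be configured; the path and cache size below are illustrative placeholders, not recommendations:

let db = sled::Config::new()
    .path("/tmp/mostly-cached-db")
    // size the cache (in bytes) generously enough to hold the whole dataset
    .cache_capacity(1024 * 1024 * 1024)
    // only meaningful when built with the compression feature
    .use_compression(true)
    .open()
    .expect("open");

// a full iteration at startup pulls the dataset into the cache
for item in db.iter() {
    item.expect("iteration should succeed");
}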
To asynchronously suspend your async task on the durability of writes, we support the flush_async method, which returns a Future that your async tasks can await the completion of if they require high durability guarantees and you are willing to pay the latency costs of fsync. Note that sled automatically tries to sync all data to disk several times per second in the background without blocking user threads.
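A minimal sketch of awaiting durability from an async task (the helper name and keys are illustrative only):

async fn write_durably(tree: &sled::Tree) -> sled::Result<()> {
    tree.insert("key", "value")?;
    // resolves once the write has been fsynced to disk
    tree.flush_async().await?;
    Ok(())
}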
We support async subscription to events that happen on key prefixes, because the Subscriber struct implements Future<Output = Option<Event>>:
let sled = sled::open("my_db").unwrap();

// subscribe before performing the writes we want to observe
let mut sub = sled.watch_prefix("");

sled.insert(b"a", b"a").unwrap();
sled.insert(b"a", b"a").unwrap();

// dropping the Db ends the subscription, so the loop below
// eventually receives None and exits
drop(sled);

extreme::run(async move {
    while let Some(event) = (&mut sub).await {
        println!("got event {:?}", event);
    }
});
We support Rust 1.39.0 and up.
lock-free tree on a lock-free pagecache on a lock-free log. the pagecache scatters partial page fragments across the log, rather than rewriting entire pages at a time as B+ trees for spinning disks historically have. on page reads, we concurrently scatter-gather reads across the log to materialize the page from its fragments. check out the architectural outlook for a more detailed overview of where we're at and where we see things going!
the on-disk format is going to change in ways that require manual migrations before the 1.0.0 release!

Like what we're doing? Help us out via GitHub Sponsors!
Special thanks to Meili for providing engineering effort and other support to the sled project. They are building an event store backed by sled, and they offer a full-text search system which has been a valuable case study helping to focus the sled roadmap for the future.
Additional thanks to Arm, Works on Arm and Packet, who have generously donated a 96 core monster machine to assist with intensive concurrency testing of sled. Each second that sled does not crash while running your critical stateful workloads, you are encouraged to thank these wonderful organizations. Each time sled does crash and lose your data, blame Intel.
want to help advance the state of the art in open source embedded databases? check out CONTRIBUTING.md!