| field | value |
|---|---|
| Crates.io | db-rs |
| lib.rs | db-rs |
| version | 0.3.2 |
| source | src |
| created_at | 2023-01-13 02:39:01.151375 |
| updated_at | 2024-08-06 02:39:04.8865 |
| description | fast, embedded, transactional, key value store |
| homepage | |
| repository | |
| max_upload_size | |
| id | 757575 |
| size | 63,848 |
An ergonomic, embedded, single-threaded database for Rustaceans.
Your types just need to implement Serialize and Deserialize; you don't have to fuss around with converting your data to database-specific types.

Type `db.` and your tooling will suggest a list of your tables. When you select a table, you'll be greeted with that table-type's contract populated with your types. No need to wrap your db in a handwritten type-safe contract.

Use `begin_transaction()`s to express atomic updates to multiple tables.

Add the following to your Cargo.toml:
db-rs = "0.3.1"
db-rs-derive = "0.3.1"
Define your schema:
```rust
use db_rs_derive::Schema;
use db_rs::{Single, List, LookupTable};

// Username and Account are your own types; they only need to implement
// serde's Serialize and Deserialize.
#[derive(Schema)]
struct SchemaV1 {
    owner: Single<Username>,
    admins: List<Username>,
    users: LookupTable<Username, Account>,
}
```
Initialize your DB:
```rust
use db_rs::Db;
use db_rs::Config;

let mut db = SchemaV1::init(Config::in_folder("/tmp/test/"))?;
db.owner.insert("Parth".to_string())?;
println!("{}", db.owner.data().unwrap());
```
Each table has an in-memory representation and a corresponding log entry format. For instance, [List]'s in-memory format is a [Vec], and you can look at its corresponding [list::LogEntry] to see how writes will be written to disk.
Tables that start with `Lookup` have a HashMap as part of their in-memory format. [LookupTable] is the most general form, while [LookupList] and [LookupSet] are specializations for people who want `HashMap<K, Vec<V>>` or `HashMap<K, HashSet<V>>`. Their reason for existence is better log performance in the case of small modifications to the `Vec` or `HashSet` in question (see [lookup_list::LogEntry] or [lookup_set::LogEntry]).
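As a sketch of how the three flavors differ at the type level (the struct and field names below are hypothetical; the key and value types only need to be serde-serializable):

```rust
use db_rs_derive::Schema;
use db_rs::{LookupList, LookupSet, LookupTable};

#[derive(Schema)]
struct LookupsV1 {
    // General form: one value per key, a HashMap<K, V> in memory.
    sessions: LookupTable<String, String>,
    // HashMap<K, Vec<V>> in memory; small pushes stay cheap in the log.
    group_members: LookupList<String, String>,
    // HashMap<K, HashSet<V>> in memory; small inserts/removals likewise.
    group_tags: LookupSet<String, String>,
}
```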
At any point you can call [Db::compact_log] on your database. This will atomically write a compact representation of all your current tables. For example, if there's a key in a LookupTable that was written to many times, the compact representation will only contain the last value. Each table type describes its own compact representation.
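A minimal sketch of when compaction pays off, reusing the `db` from the quick start and assuming `Username` is a plain `String` as that snippet implies:

```rust
// Each insert appends another log entry for the same logical value...
for i in 0..1_000 {
    db.owner.insert(format!("owner-{i}"))?;
}
// ...so compacting rewrites the log to hold only the latest state.
db.compact_log()?;
```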
If your database is in an `Arc<Mutex<>>`, you can additionally use the [BackgroundCompacter], which will perform compactions periodically in a separate thread.
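For that shared setup, here is a sketch of the wrapping itself; the folder path is arbitrary, and the `BackgroundCompacter` constructor is deliberately not shown since its exact signature lives in the crate docs:

```rust
use std::sync::{Arc, Mutex};
use db_rs::{Config, Db};

let db = SchemaV1::init(Config::in_folder("/tmp/test-shared/"))?;
let shared = Arc::new(Mutex::new(db));

// Any thread holding a clone of `shared` can lock the db for a write.
shared.lock().unwrap().owner.insert("Parth".to_string())?;
```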
You can call [Db::begin_transaction], which allows you to express batch operations that are discarded as a set if your program is interrupted. Presently there is no way to abort a transaction. Transactions are also a mechanism for batch writing: log entries are kept in memory until the transaction completes and are then written to disk once.
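A sketch of the pattern, with two loud assumptions: that the handle returned by [Db::begin_transaction] ends the transaction when it goes out of scope, and that [List] exposes a `push` method; check the crate docs for the exact signatures.

```rust
{
    // Writes made while the handle is alive are buffered in memory.
    let _tx = db.begin_transaction()?;
    db.owner.insert("Parth".to_string())?;
    db.admins.push("Parth".to_string())?; // `push` is an assumption here
} // handle dropped: the buffered log entries hit the disk as one batch
```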
To use the database from more than one place, there are a couple of options:

- `Arc<Mutex<>>` — wrap the database in a mutex and lock it for each operation.
- `.clone` — derive clone on all table types. Consistency between cloned databases is not provided. Useful in testing situations (see the sketch below).
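A small, hypothetical sketch of the testing use case. The extra `Clone` derive and the folder path are assumptions, not something the quick start above shows:

```rust
use db_rs::{Config, Db, Single};
use db_rs_derive::Schema;

// Assumption: since the table types implement Clone, a schema struct can
// derive Clone as well.
#[derive(Schema, Clone)]
struct TestSchemaV1 {
    owner: Single<String>,
}

let db = TestSchemaV1::init(Config::in_folder("/tmp/test-clone/"))?;
let snapshot = db.clone(); // detached copy; the two do not stay consistent
```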
License: BSD-3-Clause