Crates.io | microdb |
lib.rs | microdb |
version | 0.3.5 |
source | src |
created_at | 2023-06-19 04:47:46.030507 |
updated_at | 2023-07-25 09:58:46.38707 |
description | A very small in-program database with cache, disk storage, etc. |
homepage | |
repository | https://github.com/tudbut/microdb |
max_upload_size | |
id | 893807 |
size | 62,056 |
A micro-sized database for programs with more data than fits in RAM.
MicroDB runs where your application does: saving, cache synchronization, and so on all happen in another thread of your application.
To get started, create a DB:
let db = MicroDB::create(
    "example_db.data.mdb",
    "example_db.meta.mdb",
    MicroDB::sensible_cache_period(
        /* requests of unique objects per second */ 10.0,
        /* max ram usage */ 0.1,
        /* average object size in mb */ 0.01,
        /* safety (how important staying within ram spec is) */ 1.0,
    ),
    MicroDB::sensible_block_size(
        /* object amount */ 500.0,
        /* average object size in bytes */ 100_000.0,
        /* object size fluctuation in bytes */ 0.0,
        /* storage tightness */ 1.0,
    ),
);
Or load an existing database with MicroDB::new and leave out the block size argument.
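For example, reopening the database created above might look like this (a sketch based on the description; the exact signature of MicroDB::new may differ, so check the crate docs):

// Sketch: reopen an existing DB; the block size is already stored in its files,
// so only the cache parameters are passed again.
let db = MicroDB::new(
    "example_db.data.mdb",
    "example_db.meta.mdb",
    MicroDB::sensible_cache_period(10.0, 0.1, 0.01, 1.0),
);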
And now you're good to go!
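To give a feel for day-to-day use, here is a minimal sketch assuming set/get-style accessors. These method names are assumptions for illustration, not the crate's confirmed API; consult the crate documentation for the real accessors:

// Hypothetical accessors, named for illustration only.
db.set("test", true);              // store a value under a key
let value: bool = db.get("test");  // read it back through the cache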
Here's a test showing the speed when many requests hit a single value:
Setting test --raw--> true
Reading test 10000 times.
Done! Took 1ms: 0.0001ms per read.
Here's a test showing the speed with one request per value, across 10000 values:
Setting horizontal_test/{0..10000} --raw--> true
Reading back all values...
Done! Write took 5570ms: 0.557ms per write; Read took 143ms: 0.0143ms per read.
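For context, a timing loop of roughly this shape would produce measurements like the second test (a sketch; set and get are the same assumed accessor names as above, not confirmed API):

use std::time::Instant;

let n = 10_000u32;
// Hypothetical write loop over n distinct keys.
let start = Instant::now();
for i in 0..n {
    db.set(&format!("horizontal_test/{i}"), true);
}
let write = start.elapsed().as_millis() as f64;
// Hypothetical read-back loop over the same keys.
let start = Instant::now();
for i in 0..n {
    let _: bool = db.get(&format!("horizontal_test/{i}"));
}
let read = start.elapsed().as_millis() as f64;
println!(
    "Write took {write}ms: {}ms per write; Read took {read}ms: {}ms per read.",
    write / n as f64,
    read / n as f64
);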
As you can see, the per-request overhead is negligible, and at these dataset sizes it actually happens to be a lot faster than SQL databases like Postgres. This DB is not made for giant datasets, but it works exceptionally well for smaller ones.
Currently, the DB scales at approximately O(log n) for reading; writing is slower, though by how much has not been measured.