Crates.io | sqjson |
lib.rs | sqjson |
version | 0.1.8 |
created_at | 2025-08-01 16:09:24.47635+00 |
updated_at | 2025-09-25 12:56:55.785193+00 |
description | A simple JSON-based embedded database |
homepage | https://github.com/haffizaliraza/sqjson |
repository | https://github.com/haffizaliraza/sqjson |
max_upload_size | |
id | 1777045 |
size | 32,237 |
sqjson is a simple, embedded, file-based key-value database using JSON values and memory-mapped files (like SQLite, but for JSON). It is written in pure Rust with minimal dependencies (serde, memmap2, thiserror).
Features:

- Single-file storage (`.db`) backed by memory-mapped I/O (`memmap2`)
- JSON values serialized with `serde_json`
- Core operations: `put`, `get`, `delete`, `flush`
- Field access: `get_field(key, field)`
- Queries: `query(field, value)`, `range_query(field, min, max)`, `filter(|val| ...)`
- Pagination: `query_page(field, value, limit, offset)`
- Export: `export_query(field, value, path)` and `export_to_file(path)`
- `show_all` for debugging
- Minimal dependencies (`serde`, `memmap2`, `thiserror`)

Add to your Rust project with:
cargo add sqjson
Or add manually to your Cargo.toml:
[dependencies]
sqjson = "0.1"
use sqjson::{YourDb, DbError};
use serde_json::json;
fn main() -> Result<(), DbError> {
    // Open or create the database
    let mut db = YourDb::open("jsondb.db")?;

    // Insert JSON records
    db.put("user:1", &json!({ "name": "Alice", "age": 30, "city": "NY" }))?;
    db.put("user:2", &json!({ "name": "Bob", "age": 25, "city": "LA" }))?;
    db.put("user:3", &json!({ "name": "Charlie", "age": 30, "city": "NY" }))?;
    db.put("user:4", &json!({ "name": "Diana", "age": 22, "city": "LA" }))?;

    // Persist changes
    db.flush()?;

    // Show all records
    println!("\n-- All Records --");
    db.show_all()?;

    // Get full record by key
    if let Some(user) = db.get("user:2")? {
        println!("\nFound user:2: {}", user);
    }

    // Get a specific JSON field
    if let Some(age) = db.get_field("user:1", "age")? {
        println!("user:1 age is: {}", age);
    }

    // Query by field value
    let users_age_30 = db.query("age", 30)?;
    println!("\nUsers with age 30: {:?}", users_age_30);

    // Filter records with a custom predicate
    let older_than_24 = db.filter(|val| val["age"].as_u64().unwrap_or(0) > 24)?;
    println!("\nUsers older than 24:");
    for (key, user) in older_than_24 {
        println!("{} => {}", key, user);
    }

    // Range query (numeric)
    let users_age_range = db.range_query("age", json!(25), json!(30))?;
    println!("Users with age 25–30: {:?}", users_age_range);

    // Paginated query (first user in NY: limit 1, offset 0)
    let first_ny_user = db.query_page("city", "NY", 1, 0)?;
    println!("\nFirst user in NY: {:?}", first_ny_user);

    // Export query results to JSON
    db.export_query("city", "LA", "la_users.json")?;
    println!("Exported LA users to la_users.json");

    // Delete a record
    db.delete("user:3")?;
    println!("\nDeleted user:3");

    // Persist the delete operation
    db.flush()?;

    // Export the full database
    db.export_to_file("backup.json")?;
    println!("\nExported full database to backup.json");

    Ok(())
}
Method | Description |
---|---|
`YourDb::open(path)` | Open or create a database file |
`put(key, value)` | Insert or update a JSON value under a string key |
`get(key)` | Retrieve a JSON value by key |
`get_field(key, field)` | Retrieve a specific JSON field of a record |
`query(field, value)` | Return the list of keys where a JSON field equals the given value |
`range_query(field, min, max)` | Find keys where a JSON field falls in a numeric range |
`search_contains(field, substring)` | Find keys where a text field contains a given substring |
`update_field(key, field, new_value)` | Update a single JSON field without replacing the whole record |
`filter(predicate)` | Return key-value pairs matching a custom predicate |
`query_page(field, value, limit, offset)` | Paginated query results by field value |
`export_query(field, value, path)` | Export query results to a JSON file |
`delete(key)` | Remove a record by key |
`flush()` | Persist index and data pages to disk |
`show_all()` | Print all stored key-value pairs (for debugging) |
`export_to_file(path)` | Export the entire DB contents to a pretty-printed JSON file |
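The two methods from the table that are not exercised in the example above, `update_field` and `search_contains`, follow the same pattern. A minimal sketch, assuming the argument order shown in the table and that the new field value is passed as a `serde_json::Value` (as with `range_query`):

```rust
use sqjson::{YourDb, DbError};
use serde_json::json;

fn main() -> Result<(), DbError> {
    let mut db = YourDb::open("jsondb.db")?;

    // Update a single field in place instead of rewriting the whole record.
    // (Assumption: the new value is a serde_json::Value, like range_query's bounds.)
    db.update_field("user:1", "age", json!(31))?;

    // Substring search over a text field; per the table, this returns the matching keys.
    let la_keys = db.search_contains("city", "LA")?;
    println!("Keys whose city contains \"LA\": {:?}", la_keys);

    // Persist the update.
    db.flush()?;
    Ok(())
}
```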
Storage layout:

- Page 0 stores the index (mapping keys to page IDs)
- Page 1 onwards stores the actual JSON-encoded data
- Fixed page size (default 4096 bytes)
- Data is stored as length-prefixed JSON blobs

Note: while the page size is fixed, the underlying database file grows automatically in chunks as needed, so you are not limited by the initial file size.
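For intuition about the data pages, a length-prefixed JSON blob is just the serialized JSON bytes preceded by their byte length. The sketch below is illustrative only: it assumes a little-endian `u32` prefix, which may not match sqjson's actual on-disk encoding.

```rust
use serde_json::{json, Value};

fn main() {
    // Serialize a record to JSON bytes.
    let record = json!({ "name": "Alice", "age": 30 });
    let payload = serde_json::to_vec(&record).unwrap();

    // Prefix the payload with its length (illustrative choice: little-endian u32;
    // sqjson's real prefix width and endianness are not documented here).
    let mut blob = (payload.len() as u32).to_le_bytes().to_vec();
    blob.extend_from_slice(&payload);

    // Decoding reverses the steps: read the prefix, then parse that many bytes.
    let (len_bytes, rest) = blob.split_at(4);
    let len = u32::from_le_bytes(len_bytes.try_into().unwrap()) as usize;
    let decoded: Value = serde_json::from_slice(&rest[..len]).unwrap();
    assert_eq!(decoded, record);
}
```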
License: MIT OR Apache-2.0
Author: Hafiz Ali Raza