Crates.io | turingdb |
lib.rs | turingdb |
version | 2.0.0 |
source | src |
created_at | 2020-06-30 09:18:11.947732 |
updated_at | 2020-12-16 13:24:47.31628 |
description | Document Database backed by sled |
homepage | https://github.com/charleschege/TuringDB |
repository | https://github.com/charleschege/TuringDB |
max_upload_size | |
id | 259762 |
size | 29,567 |
TuringDB is a document database written in Rust that aims to be distributed and horizontally scalable. It is intended as a replacement for cases where you do not need a relational database or a schema.
The database is backed by the Sled key-value store.
The motive behind this database is to have a key/value database with ACID properties, speed, type safety, changefeeds without polling, multi-cluster queries and replication. Rust is well suited for speed, type safety and compile-time checks. Furthermore, sled.rs is used as the embedded key/value store because it is lock-free, offers fully atomic operations, zero-copy reads and SSD-optimized log storage, and is written in Rust, so it inherits all the sweet properties of the language.
Install from crates.io
$ cargo install turingdb-server
Start the server
$ turingdb-server
Create a new cargo project
$ cargo new my-app
Edit the Cargo.toml file and add the dependencies
[dependencies]
turingdb-helpers = # add the latest version here
bincode = # add the latest version here
async-std = # add the latest version here, with the "attributes" feature enabled for #[async_std::main]
anyhow = # add the latest version here
custom_codes = # add the latest version here
serde = # add the latest version here, with the "derive" feature enabled
Alternatively, if cargo-edit is already installed, you can add the dependencies from the command line instead of editing the file manually
$ cargo add turingdb-helpers bincode async-std anyhow custom_codes serde
Open the src/main.rs file in an editor
use async_std::io::prelude::*;
use async_std::net::TcpStream;
use custom_codes::DbOps;
use serde::{Deserialize, Serialize};

// Size of a single read from the TCP stream
const BUFFER_CAPACITY: usize = 64 * 1024; // 64 KiB
// The database cannot hold a payload larger than 16 MiB
const BUFFER_DATA_CAPACITY: usize = 1024 * 1024 * 16;

// Query structures mirroring the queries shown later; not used in this minimal example
#[derive(Debug, Serialize, Deserialize)]
struct DocumentQuery {
    db: String,
    document: Option<String>,
}

#[derive(Debug, Serialize, Deserialize)]
pub(crate) struct FieldQuery {
    db: String,
    document: String,
    field: String,
    payload: Option<Vec<u8>>,
}

#[async_std::main]
async fn main() -> anyhow::Result<()> {
    // Build the request packet: an operation byte followed by the database name
    let db_create = "db0".as_bytes();
    let mut packet = vec![0x02];
    packet.extend_from_slice(db_create);

    let mut buffer = [0; BUFFER_CAPACITY];
    let mut container_buffer: Vec<u8> = Vec::new();
    let mut bytes_read: usize;
    let mut current_buffer_size = 0_usize;

    let mut stream = TcpStream::connect("127.0.0.1:4343").await?;
    stream.write_all(&packet).await?;

    loop {
        bytes_read = stream.read(&mut buffer).await?;
        // Track the total number of bytes received so far
        current_buffer_size += bytes_read;
        // If this read is smaller than the buffer capacity, all data has been received
        if bytes_read < BUFFER_CAPACITY {
            // Append the final chunk before deserializing with bincode
            container_buffer.extend_from_slice(&buffer[..bytes_read]);
            dbg!(&container_buffer);
            dbg!(bincode::deserialize::<DbOps>(&container_buffer).unwrap());
            break;
        }
        // Append this chunk and keep reading
        container_buffer.extend_from_slice(&buffer[..bytes_read]);
    }

    Ok(())
}
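Run the example while turingdb-server is running:
$ cargo run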
Repository Queries
turingdb_helpers::RepoQuery::create()
creates a new repository in the current directory
turingdb_helpers::RepoQuery::drop()
drops a repository in the current directory
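There is no snippet for repository queries above; the following is a minimal sketch that assumes RepoQuery follows the same async builder pattern as the other query helpers (the new() constructor is inferred, not confirmed here):
use turingdb_helpers::RepoQuery;
// Assumption: RepoQuery is built and awaited like the other query types below
let mut repo = RepoQuery::new().await;
repo.create().await;
// To drop the repository instead:
// repo.drop().await;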
Database Queries
DatabaseQuery::create()
creates a new database in the repository
use turingdb_helpers::DatabaseQuery;
let mut foo = DatabaseQuery::new().await;
foo
    .db("db_name").await
    .create().await;
DatabaseQuery::drop()
drops a database in the repository
use turingdb_helpers::DatabaseQuery;
let mut foo = DatabaseQuery::new().await;
foo
    .db("db_name").await
    .drop().await;
DatabaseQuery::list()
lists all databases in the repository
use turingdb_helpers::DatabaseQuery;
let mut foo = DatabaseQuery::new().await;
foo.list().await;
Document Queries
DocumentQuery::create()
creates a document in the database
use turingdb_helpers::DocumentQuery;
let mut foo = DocumentQuery::new().await;
foo
    .db("db_name").await
    .document("document_name").await
    .create().await;
DocumentQuery::drop()
drops a document in the database
use turingdb_helpers::DocumentQuery;
let mut foo = DocumentQuery::new().await;
foo
    .db("db_name").await
    .document("document_name").await
    .drop().await;
DocumentQuery::list()
lists all documents in the database
use turingdb_helpers::DocumentQuery;
let mut foo = DocumentQuery::new().await;
foo
    .db("db_name").await
    .list().await;
Field Queries
FieldQuery::set()
creates a field in a document based on a key
use turingdb_helpers::FieldQuery;
let mut foo = FieldQuery::new().await;
let data = "my_data_converted_into_bytes".as_bytes();
foo
    .db("db_name").await
    .document("document_name").await
    .field("field_name").await
    .payload(data).await
    .set().await;
FieldQuery::get()
gets a field in a document based on a key
use turingdb_helpers::FieldQuery;
let mut foo = FieldQuery::new().await;
foo
    .db("db_name").await
    .document("document_name").await
    .field("field_name").await
    .get().await;
FieldQuery::modify()
updates a field in a document based on a key
use turingdb_helpers::FieldQuery;
let mut foo = FieldQuery::new().await;
let data = "my_new_data_converted_into_bytes".as_bytes();
foo
    .db("db_name").await
    .document("document_name").await
    .field("field_name").await
    .payload(data).await
    .modify().await;
FieldQuery::remove()
removes a field in a document based on a key
use turingdb_helpers::FieldQuery;
let mut foo = FieldQuery::new().await;
foo
    .db("db_name").await
    .document("document_name").await
    .field("field_name").await
    .remove().await;
FieldQuery::list()
lists all field keys in a document
use turingdb_helpers::FieldQuery;
let mut foo = FieldQuery::new().await;
foo
    .db("db_name").await
    .document("document_name").await
    .list().await;
Warning
A document cannot hold more than 16MiB of data. If this threshold is exceeded, an error from the custom_codes crate is returned: DbOps::EncounteredErrors([TuringDB::<GLOBAL>::(ERROR)-BUFFER_CAPACITY_EXCEEDED_16MB])
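Since this ceiling is enforced by the server, a client can also guard against it before sending. A minimal sketch that reuses the BUFFER_DATA_CAPACITY constant and the example payload from the snippets above:
// Payload about to be written into a field (example data from the snippets above)
let data = "my_data_converted_into_bytes".as_bytes();
// Refuse to send anything larger than the 16 MiB document limit
if data.len() > BUFFER_DATA_CAPACITY {
    anyhow::bail!("payload exceeds the 16 MiB limit enforced by the server");
}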
We follow the Rust Code of Conduct for contributions.
All code contributions to this project must be licensed under the Apache license.
All libraries used in this project are subject to their own licenses.