Crates.io | rpa |
lib.rs | rpa |
version | 0.5.1 |
source | src |
created_at | 2019-07-27 08:50:06.467281 |
updated_at | 2021-06-10 21:28:14.024269 |
description | A JPA-like library (as in Java) to access a database. |
homepage | |
repository | https://gitlab.com/artsoftwar3/public-libraries/rust/rpa |
max_upload_size | |
id | 152017 |
size | 96,004 |
This library contains macros and derives that help developers access a database using Rocket and Diesel. It is intended only for projects built with Rocket and Diesel (REST APIs). This version works with all Diesel-supported databases (MySQL, PostgreSQL and SQLite).
Because Rust currently has no solution like JPA (Java Persistence API), we decided to build one using macros and derives. With a single macro you can generate all the methods needed to access the database through the Diesel framework, without implementing them for each type.
The library is split into two modules, one for derives and one for macros, because for now Cargo does not let us publish them together. If Cargo allows it in the future, we may merge everything into a single repository.
In your project you need to add these dependencies:
# Rocket Webserver
[dependencies.rocket]
version = "0.4.10"
[dependencies.rocket_contrib]
version = "0.4.10"
default-features = false
features = ["json", "diesel_mysql_pool"]
#features = ["json", "diesel_postgres_pool"] if you use PostgreSQL
#features = ["json", "diesel_sqlite_pool"] if you use SQLite
# Json Serialize and Deserialize
[dependencies.serde]
version = "1.0.126"
features = ["derive"]
[dependencies.serde_json]
version = "1.0.64"
[dependencies.diesel]
version = "1.4.6"
features = ["chrono", "numeric"]
# bigdecimal has to be compatible with diesel; for now diesel requires bigdecimal <= 0.2.0
[dependencies.bigdecimal]
version = "<= 0.2.0"
features = ["serde"]
# You can import this dependency like this, or download the repo and import it manually
[dependencies.rpa]
version = "0.5.1"
You need to define a model structure. Let's take this example using a Hero model:
#[derive(
    AsChangeset,
    Serialize,
    Deserialize,
    Queryable,
    QueryableByName,
    Insertable,
    TypeInfo,
    Debug,
    Clone,
    Rpa
)]
#[table_name = "heroes"]
pub struct Hero {
    pub id: String,
    pub first_name: String,
    pub last_name: String,
}
As you can see, we have a structure called Hero with several derives on top. These derives do the magic for us. AsChangeset, Queryable, QueryableByName and Insertable come from Diesel and are needed to access the database; Serialize and Deserialize come from Serde and are needed for JSON serialization and deserialization. TypeInfo, Debug and Clone are used by our library. Finally there is the Rpa derive, a custom macro that generates the basics for database access. You can see the available methods in the Rpa trait under rpa_macros/src/database/rpa.rs in the rpa_macros module. The association methods cannot be defined in the trait because they are dynamic, but there are comments explaining how the library generates them.
After we define the entity structure, we can call the generated methods like this:
use rpa::{RpaError, Rpa};
use diesel::MysqlConnection;
use rocket_contrib::json::Json;
...
let result: Result<Hero, RpaError> = Hero::find(&hero_id, &*_connection); // to find
let json: Json<Hero> = result.unwrap().into_json(); // to convert into serde json
let hero: Hero = Hero::from_json(json); // to convert from serde json
...
NOTE: &*_connection in all the examples is the database connection instance. In Rocket it is injected by the framework via fairings, and you need to initialize it (see the Rocket documentation for more information on how to do this).
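As a minimal sketch of that setup (the pool name "mysql_db" and the DbConnection guard are illustrative, not part of rpa), the Rocket 0.4 wiring could look like this:
#[macro_use] extern crate rocket_contrib;
use rocket_contrib::databases::diesel;

// "mysql_db" must match a pool configured under [global.databases] in Rocket.toml.
#[database("mysql_db")]
pub struct DbConnection(diesel::MysqlConnection);

fn main() {
    // Attaching the fairing initializes the connection pool;
    // routes then receive DbConnection as a request guard,
    // and &*_connection derefs it to the underlying MysqlConnection.
    rocket::ignite()
        .attach(DbConnection::fairing())
        .launch();
}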
For now, the schema is assumed to always live at src/schema.rs in the parent project, and all structure ids are assumed to be Strings. We will improve this in the future to support more types.
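For reference, a src/schema.rs matching the Hero model above could look like this (the column types are an assumption for illustration):
// Diesel's table! macro declares the table the Hero model maps to.
table! {
    heroes (id) {
        id -> Varchar,
        first_name -> Varchar,
        last_name -> Varchar,
    }
}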
We can use Diesel associations to map relationships. Diesel is not like other ORMs: instead of returning nested structures, you query the parent structures first and then, with the parent instances, query their children. This is better because it avoids many of the problems nested structures can cause, and it is also more efficient (we don't query data we don't need). This library provides a way to use these associations; here is how:
use rpa::Rpa;

#[derive(
    AsChangeset,
    Serialize,
    Deserialize,
    Queryable,
    QueryableByName,
    Insertable,
    Identifiable,
    TypeInfo,
    Rpa,
    Debug,
    Clone
)]
#[table_name = "hero"]
pub struct Hero {
    pub id: String,
    pub first_name: String,
    pub last_name: String
}
As you can see, we added the Identifiable derive, which Diesel needs to map our structures. Now we need another structure to relate to it, like this:
use rpa::Rpa;
use crate::core::models::hero::Hero;

#[derive(
    AsChangeset,
    Serialize,
    Deserialize,
    Queryable,
    QueryableByName,
    Insertable,
    Identifiable,
    Associations,
    TypeInfo,
    Rpa,
    Debug,
    Clone
)]
#[table_name = "post"]
#[belongs_to(Hero, foreign_key = "hero_id")]
pub struct Post {
    pub id: String,
    pub hero_id: String,
    pub text: String
}
We have a Post structure with a reference to the Hero structure and the Identifiable derive, but we also need to add the Associations derive. This is a Diesel requirement, as described in the associations documentation.
After you follow the steps above, the generated methods are available and can be used like this:
let hero: Hero = Hero::find(&hero_id, &*_connection).unwrap();
let heroes: Vec<Hero> = vec![hero.clone()];
let posts: Vec<Post> = Post::find_for_hero(&heroes, &*_connection).unwrap();
let grouped_posts: Vec<(Hero, Vec<Post>)> = Post::find_grouped_by_hero(&heroes, &*_connection).unwrap();
When you use Rpa, some association methods are generated: find_for_{parent_name_lowercase} searches for all the children owned by the given parent or parents, and find_grouped_by_{parent_name_lowercase} works like the former but groups the results by parent.
By default Rpa uses MySQL, but you can use any other Diesel-supported database, currently PostgreSQL and SQLite. To change the default database, use the connection_type attribute, which accepts exactly one of three values ("MYSQL", "POSTGRESQL", "SQLITE") per structure. Here is how to use it:
extern crate chrono;

use chrono::NaiveDateTime;
use rpa::Rpa;
use crate::core::models::custom_formats::custom_date_format;

#[derive(
    AsChangeset,
    Serialize,
    Deserialize,
    Queryable,
    QueryableByName,
    Insertable,
    Identifiable,
    TypeInfo,
    Rpa,
    Debug,
    Clone
)]
#[table_name = "hero"]
#[connection_type = "POSTGRESQL"]
pub struct Hero {
    pub id: String,
    pub first_name: String,
    pub last_name: String,
    #[serde(with = "custom_date_format")]
    pub birth_date: NaiveDateTime
}
In this case you also need to change your Cargo.toml to use diesel_postgres_pool in the rocket_contrib dependency for it to work. You can use multiple databases with this library, but be careful with the mix: you CANNOT mix structures that use different databases.
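For example, for PostgreSQL the rocket_contrib entry from the dependency list above becomes:
# rocket_contrib with the PostgreSQL pool feature instead of the MySQL one
[dependencies.rocket_contrib]
version = "0.4.10"
default-features = false
features = ["json", "diesel_postgres_pool"]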
We have a new feature for making search requests over entities. The idea is to let the user query any entity by any field using a request. There is a new structure called SearchRequest that is used to build a query for an entity:
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct SearchRequest {
    #[serde(rename = "filterFields")]
    pub filter_fields: Vec<FilterField>,
    #[serde(rename = "sortFields")]
    pub sort_fields: Vec<SortField>,
    pub pagination: Option<Pagination>
}
That request is now part of the Rpa trait through a new function called search; the trait now looks like this:
pub trait Rpa<T, C> where C: diesel::Connection {
    fn into_json(self) -> Json<T>;
    fn from_json(json: Json<T>) -> T;
    fn save(entity: &T, connection: &C) -> Result<T, RpaError>;
    fn save_self(self: Self, connection: &C) -> Result<T, RpaError>;
    fn save_batch(entities: Vec<T>, connection: &C) -> Result<usize, RpaError>;
    fn find(entity_id: &String, connection: &C) -> Result<T, RpaError>;
    fn find_all(connection: &C) -> Result<Vec<T>, RpaError>;
    fn exists(entity_id: &String, connection: &C) -> Result<bool, RpaError>;
    fn update(entity_id: &String, entity: &T, connection: &C) -> Result<usize, RpaError>;
    fn update_self(self: Self, connection: &C) -> Result<usize, RpaError>;
    fn delete(entity_id: &String, connection: &C) -> Result<usize, RpaError>;
    fn delete_self(self: Self, connection: &C) -> Result<usize, RpaError>;
    fn search(search_request: SearchRequest, connection: &C) -> Result<SearchResponse<T>, RpaError>;
}
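For illustration, here is a short usage sketch of the other trait methods, reusing the Hero entity and the &*_connection instance from the earlier examples (the field values are made up):
let hero = Hero {
    id: String::from("1"),
    first_name: String::from("Clark"),
    last_name: String::from("Kent"),
};
let saved: Result<Hero, RpaError> = Hero::save(&hero, &*_connection); // insert
let exists: Result<bool, RpaError> = Hero::exists(&hero.id, &*_connection); // check by id
let all: Result<Vec<Hero>, RpaError> = Hero::find_all(&*_connection); // fetch all rows
let updated: Result<usize, RpaError> = Hero::update(&hero.id, &hero, &*_connection); // update by id
let deleted: Result<usize, RpaError> = Hero::delete(&hero.id, &*_connection); // delete by id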
The search method uses the request described above to run a search over an entity and returns a result containing a SearchResponse. This struct looks like this:
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct SearchResponse<T> {
    pub results: Vec<T>,
    #[serde(skip_serializing_if = "Option::is_none", rename = "totalPages")]
    pub total_pages: Option<i64>,
    #[serde(skip_serializing_if = "Option::is_none", rename = "pageSize")]
    pub page_size: Option<i64>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub page: Option<i64>
}
That struct holds an array with the results of the search, plus some parameters with the pagination info. These parameters are optional because you specify on the request whether you want pagination. If pagination is not requested, all the results are returned in the response. As an example, we can send a request like this:
{
  "filterFields": [
    {
      "name": "someFieldName",
      "value": "someValue",
      "operator": "LIKE",
      "joiner": "OR"
    }
  ],
  "sortFields": [
    {
      "name": "otherFieldName",
      "order": "DESC"
    }
  ],
  "pagination": {
    "page": 1,
    "pageSize": 3
  }
}
And get a response like this:
{
  "results": [
    {
      "someFieldName": "someValue like this",
      ...other fields
      "otherFieldName": "10"
    },
    {
      "someFieldName": "or someValue like this",
      ...other fields
      "otherFieldName": "9"
    },
    {
      "someFieldName": "or maybe someValue like this",
      ...other fields
      "otherFieldName": "8"
    }
  ],
  "totalPages": 2,
  "pageSize": 3,
  "page": 1
}
This is intended to be used by Rocket from an API; we added this support so an API can query entities dynamically from outside the code.
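For example, a hypothetical Rocket 0.4 route exposing the search could look like this (the path, the DbConnection guard from the note above, and the error mapping are assumptions for illustration; it also assumes SearchRequest and SearchResponse are exported by the rpa crate):
use rocket::http::Status;
use rocket_contrib::json::Json;
use rpa::{Rpa, SearchRequest, SearchResponse};

#[post("/heroes/search", format = "json", data = "<search_request>")]
fn search_heroes(
    search_request: Json<SearchRequest>,
    _connection: DbConnection,
) -> Result<Json<SearchResponse<Hero>>, Status> {
    // Run the generated search and wrap the response as JSON,
    // mapping any RpaError to a 500 status.
    match Hero::search(search_request.into_inner(), &*_connection) {
        Ok(response) => Ok(Json(response)),
        Err(_) => Err(Status::InternalServerError),
    }
}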
In this section we describe the search request and response fields so you know what they mean.
For the search request we have this:
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct SearchRequest {
    #[serde(rename = "filterFields")]
    pub filter_fields: Vec<FilterField>,
    #[serde(rename = "sortFields")]
    pub sort_fields: Vec<SortField>,
    pub pagination: Option<Pagination>
}
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct FilterField {
    pub name: String,
    pub value: String,
    pub operator: Operator,
    pub joiner: WhereJoiner
}
This specifies the field or fields to filter on. Here we have:
- name: the name of the entity field to filter on.
- value: the value to compare against.
- operator: the comparison operator to apply (for example "LIKE", as in the request above).
- joiner: how this condition is joined with the next one (for example "OR").
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct SortField {
    pub name: String,
    pub order: Order,
}
This specifies the field or fields to sort on. Here we have:
- name: the name of the field to sort by.
- order: the sort direction (for example "DESC", as in the request above).

Note: the sort strategy is always inclusive; results are always sorted by all the specified fields, in the same order they appear in the request.
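For example (with illustrative field names), this request fragment sorts by otherFieldName first and breaks ties by anotherFieldName:
"sortFields": [
    { "name": "otherFieldName", "order": "DESC" },
    { "name": "anotherFieldName", "order": "DESC" }
]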
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct Pagination {
    pub page: i64,
    #[serde(rename = "pageSize")]
    pub page_size: i64
}
This specifies the pagination of the request. Here we have:
- page: the page number to fetch.
- pageSize: the number of results per page.

Note: the first query should always use 1 as the page number, since the total number of pages is not known yet; it depends directly on the pageSize.
We can use the search like this:
let search_request = SearchRequest {
....
}
let response: Result<SearchResponse<Hero>, RpaError> = Hero::search(search_request, &*_connection);
if response.is_ok() {
let response: SearchResponse<Hero> = response.unwrap();
let heroes: Vec<Hero> = response.results;
}
We also support batch saves to persist multiple objects at once. We can use save_batch like this:
let mut entities: Vec<Hero> = Vec::new();
... // fill the vector with the heroes to insert
let result: Result<usize, RpaError> = Hero::save_batch(entities, &*_connection);