| | |
| --- | --- |
| Crates.io | scyllax-macros-core |
| lib.rs | scyllax-macros-core |
| version | 0.2.1 |
| source | src |
| created_at | 2023-10-06 04:57:20.120534 |
| updated_at | 2024-05-25 22:19:49.771038 |
| description | Core macro impl for scyllax |
| homepage | https://github.com/trufflehq/scyllax#readme |
| repository | https://github.com/trufflehq/scyllax |
| max_upload_size | |
| id | 994820 |
| size | 47,044 |
A SQLx and Discord inspired query system for Scylla.
Before you can write any queries, you have to define a model.
```rust
#[entity]
pub struct PersonEntity {
    #[entity(primary_key)]
    pub id: uuid::Uuid,
    pub email: String,
    pub created_at: i64,
}
```
With the `read_query` attribute, it's easy to define select queries.
```rust
#[read_query(
    query = "select * from person where id = :id limit 1",
    return_type = "PersonEntity"
)]
pub struct GetPersonById {
    pub id: Uuid,
}
```
With the `upsert_query` attribute, it's easy to define upsert queries.
```rust
#[entity]
#[upsert_query(table = "person", name = UpsertPerson)]
pub struct PersonEntity {
    #[entity(primary_key)]
    pub id: uuid::Uuid,
    pub email: String,
    pub created_at: i64,
}
```
Scylla requires queries to be prepared before they can be executed. To prepare (and check) all queries at startup, create a query collection and pass it into an Executor.
```rust
create_query_collection!(
    PersonQueries,
    [
        GetPersonById,
        GetPersonByEmail
    ],
    [
        DeletePersonById,
        UpsertPerson
    ]
);
```
```rust
let executor = Executor::<PersonQueries>::new(Arc::new(session)).await;

let user = executor.execute_read(GetPersonByEmail {
    email: "user@truffle.vip".to_string(),
}).await?;

println!("{user:#?}");
```
`anyhow`, more refined errors. See the example for more details.
```rust
#[read_request(
    query = "select * from foo where id = ? limit 1",
    entity_type = "Foo"
)]
struct GetFooById {
    #[shard_key]
    id: i64,
}
```
```rust
handle.execute_read(GetFooById { ... }).await
```
Messages from Jake
The answer, though, is that unlike the scylla Rust wrapper, we don't need the fields to be in the right order for our stuff to work. We do two clever things:
`SELECT *` is actually a lie. Never use `SELECT *` in a prepared statement, ever. CQL suffers from a protocol-level bug that can lead to data corruption on schema change when doing a `SELECT *`, due to a schema de-sync condition that is possible between the client & server. So instead, what we do is look at the entity type struct and transform `SELECT *` into `SELECT col_a, col_b, col_c`. That means if a column is present in the schema, but not in the struct we're going to de-serialize to, we don't actually query it. The gist of the bug is that, when a new column is added to a table, the database may start returning data for that column without the client being aware of that. In the pathological case, this can cause a mis-aligned deserialization of the data. https://docs.datastax.com/en/developer/java-driver/3.0/manual/statements/prepared/#avoid-preparing-select-queries - although this does look like it's finally fixed in native protocol v5, I'm unsure if scylla is using that yet.
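The column-list rewrite described above can be sketched as a simple string transformation over the query and the entity's field names. This is an illustrative sketch, not scyllax's actual internals; scyllax performs the equivalent rewrite at compile time inside its proc macros, and `expand_select_star` is a hypothetical name.

```rust
// Hypothetical sketch: rewrite `select *` into the entity's explicit column
// list, so schema columns absent from the struct are never queried.
fn expand_select_star(query: &str, entity_fields: &[&str]) -> String {
    let columns = entity_fields.join(", ");
    // Replace only the first occurrence of the wildcard projection.
    query.replacen("select *", &format!("select {columns}"), 1)
}

fn main() {
    let q = expand_select_star(
        "select * from person where id = :id limit 1",
        &["id", "email", "created_at"],
    );
    println!("{q}");
    // select id, email, created_at from person where id = :id limit 1
}
```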
- For binding of the query parameters as well, we essentially parse the CQL statement and figure out all of the bind positions, and then generate code that will bind the fields in the proper order (since, at the wire level, they need to be specified in the order they're defined in the query). We do this at compile time in a proc macro to generate the code that does the serialization of the query, so we incur no runtime overhead of re-ordering things.
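The bind-position parsing described above can be sketched as scanning the query for named markers (`:id`, `:email`, ...) in the order they appear. This is a runtime sketch under assumed names (`bind_order` is not scyllax's API); the real crate does this once at compile time in the macro.

```rust
// Extract named bind markers from a CQL query, in wire order.
fn bind_order(query: &str) -> Vec<String> {
    let bytes = query.as_bytes();
    let mut names = Vec::new();
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i] == b':' {
            // A marker name is the identifier run following the colon.
            let start = i + 1;
            let mut end = start;
            while end < bytes.len()
                && (bytes[end].is_ascii_alphanumeric() || bytes[end] == b'_')
            {
                end += 1;
            }
            if end > start {
                names.push(query[start..end].to_string());
            }
            i = end;
        } else {
            i += 1;
        }
    }
    names
}

fn main() {
    let order = bind_order("update person set email = :email where id = :id");
    println!("{order:?}"); // ["email", "id"]
}
```

Generated code can then serialize the struct's fields in exactly this order, regardless of how they are declared on the struct.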
At startup we prepare everything and also type-check the structs in code against what's in the db. Registering everything manually is fine. You can make it fail at compile time if you try to use an unregistered query.
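One way the "fail at compile time for unregistered queries" idea can work is a marker trait that the collection macro implements only for registered query types. This is a hedged sketch of the technique; `InPersonQueries` and this API shape are assumptions, not scyllax's real internals.

```rust
// Query types; only GetPersonById is "registered" in the collection.
struct GetPersonById;
#[allow(dead_code)]
struct UnregisteredQuery;

// Marker trait the collection macro would implement per registered query.
trait InPersonQueries {}
impl InPersonQueries for GetPersonById {}

// The bound `Q: InPersonQueries` turns unregistered queries into type errors.
fn execute_read<Q: InPersonQueries>(_query: Q) -> &'static str {
    "prepared statement would be looked up here"
}

fn main() {
    println!("{}", execute_read(GetPersonById));
    // execute_read(UnregisteredQuery);
    // ^ error[E0277]: the trait bound
    //   `UnregisteredQuery: InPersonQueries` is not satisfied
}
```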