| Field | Value |
|-------|-------|
| Crates.io | light-snowflake-connector |
| lib.rs | light-snowflake-connector |
| version | 0.1.1 |
| source | src |
| created_at | 2024-01-01 18:36:27.814755 |
| updated_at | 2024-04-02 15:32:12.51442 |
| description | Lightweight wrapper around Snowflake's REST API |
| homepage | |
| repository | https://github.com/smarterdx/light-snowflake-connector |
| max_upload_size | |
| id | 1085479 |
| size | 49,550 |
Minimal wrapper around Snowflake's public REST API.
Add the following to your Cargo.toml:

```toml
# Cargo.toml
[dependencies]
light-snowflake-connector = "0.1.1"

# Needed to run the async example below (#[tokio::main]).
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```
```rust
use light_snowflake_connector::{Cell, SnowflakeClient, SnowflakeError};
use light_snowflake_connector::jwt_simple::algorithms::RS256KeyPair;

#[tokio::main]
async fn main() -> Result<(), SnowflakeError> {
    let key_pair = RS256KeyPair::generate(2048)?;
    let config = SnowflakeClient {
        key_pair,
        account: "ACCOUNT".into(),
        user: "USER".into(),
        database: "DB".into(),
        warehouse: "WH".into(),
        role: Some("ROLE".into()),
    };

    let result = config
        .prepare("SELECT * FROM TEST_TABLE WHERE id = ? AND name = ?")
        .add_binding(10)
        .add_binding("Henry")
        .query()
        .await?;

    // Get the first partition of the result, and assert that there is only one partition
    let partition = result.only_partition()?;

    // Get the results as a Vec<Vec<Cell>>; Cell is a tagged enum similar to serde_json::Value
    let cells = partition.cells();
    match &cells[0][0] {
        Cell::Int(x) => println!("Got an integer: {}", x),
        Cell::Varchar(x) => println!("Got a string: {}", x),
        _ => panic!("Got something else"),
    }

    // Get the results as a Vec<Vec<serde_json::Value>>, which is a list of lists of JSON values
    let json_table = partition.json_table();

    // Get the results as a Vec<serde_json::Value>, which is a list of JSON objects
    let json_objects = partition.json_objects();

    Ok(())
}
```
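If you want typed rows rather than raw JSON, the `Vec<serde_json::Value>` returned by `json_objects()` can be fed through serde in the usual way. A minimal sketch (serde with the `derive` feature assumed in Cargo.toml; the `Row` struct and the `"ID"`/`"NAME"` column names are hypothetical and must match your actual result set):

```rust
use serde::Deserialize;
use serde_json::Value;

// Hypothetical row type: the struct and the "ID"/"NAME" column names are
// illustrative only and must match the columns in your query's result.
#[derive(Deserialize)]
struct Row {
    #[serde(rename = "ID")]
    id: i64,
    #[serde(rename = "NAME")]
    name: String,
}

// Convert the Vec<serde_json::Value> from json_objects() into typed rows.
fn to_rows(json_objects: Vec<Value>) -> Result<Vec<Row>, serde_json::Error> {
    json_objects.into_iter().map(serde_json::from_value).collect()
}
```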
Authentication:

- Key-pair (JWT), via the re-exported jwt_simple crate (see the key-loading sketch after this list)

Querying:

- qmark "?" bindings
- async support (but synchronous from Snowflake's point of view)
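The quick-start example above generates a throwaway key pair; against a real account you would instead load the private key already registered with your Snowflake user. A minimal sketch, assuming that key is stored as a PEM file (the path is illustrative; `RS256KeyPair::from_pem` is jwt_simple's PEM parser):

```rust
use std::fs;
use light_snowflake_connector::jwt_simple::algorithms::RS256KeyPair;

fn load_registered_key() -> RS256KeyPair {
    // Illustrative path: the private key whose public half was registered
    // with `ALTER USER ... SET RSA_PUBLIC_KEY = '...'` in Snowflake.
    let pem = fs::read_to_string("rsa_key.p8").expect("key file missing");
    RS256KeyPair::from_pem(&pem).expect("invalid PEM key")
}
```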
Types:

Snowflake's NUMBER type is 128-bit (38 decimal digits) but also supports a scale. There's no native Rust type that can represent both, so we opted for the more convenient (and probably common) use cases.
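As a plain-Rust illustration of the mismatch (this is not the crate's API, just the two native types in question):

```rust
fn main() {
    // i128 covers all 38 significant digits of NUMBER, but only at scale 0:
    // 10^38 - 1 fits, since i128::MAX is about 1.7 * 10^38.
    let exact: i128 = 99_999_999_999_999_999_999_999_999_999_999_999_999;

    // f64 can carry a fractional part, but only ~15-17 significant decimal
    // digits, so round-tripping a full-precision NUMBER through it is lossy.
    let lossy = exact as f64;

    println!("i128: {exact}");
    println!("f64:  {lossy}"); // mantissa is rounded
}
```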
This particular workaround could be improved by using a Decimal type, but there are some issues with the available libraries.
This library supports multiple batches, which is useful for streaming large result sets. The results are transferred as JSON, however, so if high throughput is a concern, consider one of the Arrow-based libraries instead, such as snowflake-api.
This is a fork of the snowflake-connector library, and differs in a few ways:
It differs from snowflake-api in that:
It differs from most other languages' Snowflake connectors in that: