Field | Value |
---|---|
Crates.io | simple_pg_pool |
lib.rs | simple_pg_pool |
version | |
source | src |
created_at | 2024-01-08 18:32:36.56956 |
updated_at | 2024-11-15 18:13:06.814036 |
description | Dead simple async pool for tokio-postgres |
homepage | |
repository | |
max_upload_size | |
id | 1092854 |
Cargo.toml error | TOML parse error at line 22, column 1: unknown field `autolib` |
size | 0 |
Fork of deadpool-postgres for deeper integration into the simple_pg project.

Primary Goals:

- `impl Deref<Conn>`
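
As a rough illustration of what that goal means (the types below are made up for this sketch and are not the crate's actual API), a pooled object that implements `Deref` to the underlying connection lets callers use the connection's methods directly:

```rust
use std::ops::Deref;

// Illustrative stand-ins only, not this crate's real types.
struct Conn;

impl Conn {
    fn ping(&self) {
        // Placeholder for a real connection method.
    }
}

struct PooledConn {
    conn: Conn,
}

impl Deref for PooledConn {
    type Target = Conn;

    fn deref(&self) -> &Conn {
        &self.conn
    }
}

fn use_pooled(pooled: &PooledConn) {
    // Resolves through `Deref` to `Conn::ping` without an explicit accessor.
    pooled.ping();
}
```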
Feature | Description | Extra dependencies | Default |
---|---|---|---|
`rt_tokio_1` | Enable support for tokio crate | `deadpool/rt_tokio_1` | yes |
`rt_async-std_1` | Enable support for async-std crate | `deadpool/rt_async-std_1` | no |
`serde` | Enable support for serde crate | `deadpool/serde`, `serde/derive` | no |
Important: `async-std` support is currently limited to the `async-std` specific timeout function. You still need to enable the `tokio1` feature of `async-std` in order to use this crate with `async-std`.
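
A sketch of what that might look like in a downstream Cargo.toml (the version numbers below are placeholders, not pinned recommendations from this crate):

```toml
[dependencies]
# Placeholder versions; use the releases that match your project.
simple_pg_pool = { version = "*", default-features = false, features = ["rt_async-std_1"] }
async-std = { version = "1", features = ["attributes", "tokio1"] }
```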
The following example assumes a PostgreSQL server reachable via a Unix domain socket, with peer authentication enabled for the local user in `pg_hba.conf`. If you are running Windows, you probably want to specify the `host`, `user` and `password` in the connection config or use an alternative authentication method.
```rust
use simple_pg_pool::{Config, ManagerConfig, RecyclingMethod, Runtime};
use simple_pg_client::NoTls;

#[tokio::main]
async fn main() {
    // Only the database name is configured; see the note above about
    // Unix sockets and peer authentication.
    let mut cfg = Config::new();
    cfg.dbname = Some("deadpool".to_string());
    cfg.manager = Some(ManagerConfig { recycling_method: RecyclingMethod::Fast });
    let pool = cfg.create_pool(Some(Runtime::Tokio1), NoTls).unwrap();

    for i in 1..10 {
        // Check a client out of the pool and run a simple query.
        let mut client = pool.get().await.unwrap();
        let stmt = client.prepare_cached("SELECT 1 + $1").await.unwrap();
        let rows = client.query(&stmt, &[&i]).await.unwrap();
        let value: i32 = rows[0].get(0);
        assert_eq!(value, i + 1);
    }
}
```
Example with the `config` and `dotenv` crates:

```
# .env
PG__DBNAME=deadpool
```
```rust
use simple_pg_pool::Runtime;
use dotenv::dotenv;
use simple_pg_client::NoTls;

#[derive(Debug, serde::Deserialize)]
struct Config {
    pg: simple_pg_pool::Config,
}

impl Config {
    pub fn from_env() -> Result<Self, config::ConfigError> {
        // Read `PG__*` environment variables into the nested `pg` config.
        config::Config::builder()
            .add_source(config::Environment::default().separator("__"))
            .build()?
            .try_deserialize()
    }
}

#[tokio::main]
async fn main() {
    // Load variables from the `.env` file shown above.
    dotenv().ok();
    let cfg = Config::from_env().unwrap();
    let pool = cfg.pg.create_pool(Some(Runtime::Tokio1), NoTls).unwrap();
    for i in 1..10 {
        let mut client = pool.get().await.unwrap();
        let stmt = client.prepare_cached("SELECT 1 + $1").await.unwrap();
        let rows = client.query(&stmt, &[&i]).await.unwrap();
        let value: i32 = rows[0].get(0);
        assert_eq!(value, i + 1);
    }
}
```
Note: This crate's own version of the code above refers to the config crate as `config_crate`, because the `config` feature and the `config` dependency share the same namespace. In your own code you will probably want to use `::config::ConfigError` and `::config::Config`, as shown above.
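
For the example above to compile you need this crate's `serde` feature plus the `config` and `dotenv` crates; a possible dependency sketch (versions are placeholders, check crates.io for current releases):

```toml
[dependencies]
# Placeholder versions; check crates.io for current releases.
simple_pg_pool = { version = "*", features = ["serde"] }
config = "0.13"
dotenv = "0.15"
serde = { version = "1", features = ["derive"] }
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```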
Example using an existing `simple_pg_client::Config` object:

```rust
use std::env;
use simple_pg_pool::{Manager, ManagerConfig, Pool, RecyclingMethod};
use simple_pg_client::NoTls;

#[tokio::main]
async fn main() {
    // Configure the client directly: try the common Unix socket
    // directories and connect as the current user.
    let mut pg_config = simple_pg_client::Config::new();
    pg_config.host_path("/run/postgresql");
    pg_config.host_path("/tmp");
    pg_config.user(env::var("USER").unwrap().as_str());
    pg_config.dbname("deadpool");

    let mgr_config = ManagerConfig {
        recycling_method: RecyclingMethod::Fast,
    };
    let mgr = Manager::from_config(pg_config, NoTls, mgr_config);
    let pool = Pool::builder(mgr).max_size(16).build().unwrap();

    for i in 1..10 {
        let mut client = pool.get().await.unwrap();
        let stmt = client.prepare_cached("SELECT 1 + $1").await.unwrap();
        let rows = client.query(&stmt, &[&i]).await.unwrap();
        let value: i32 = rows[0].get(0);
        assert_eq!(value, i + 1);
    }
}
```
The database is unreachable. Why does the pool creation not fail?

Deadpool creates connections on demand, so it behaves identically at startup and at runtime, and creating the pool itself never fails. If you want your application to crash on startup when no database connection can be established, just call `pool.get().await` right after creating the pool.
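
For example, assuming a pool created as in the examples above:

```rust
let pool = cfg.create_pool(Some(Runtime::Tokio1), NoTls).unwrap();

// Fail fast: abort startup if no connection can be established.
pool.get().await.expect("database is unreachable");
```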
Why are connections retrieved from the pool sometimes unusable?

In deadpool-postgres 0.5.5 a new recycling method was implemented, which has been the default since 0.8. With that recycling method the manager no longer performs a test query before returning the connection but relies solely on `simple_pg_client::Client::is_closed` instead. Under some rare circumstances (e.g. unreliable networks) this can lead to `simple_pg_client` not noticing a disconnect and reporting the connection as usable.

The old and slightly slower recycling method can be enabled by setting `ManagerConfig::recycling_method` to `RecyclingMethod::Verified` or, when using the `config` crate, by setting `PG__MANAGER__RECYCLING_METHOD=Verified`.
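
For example, using the `ManagerConfig` approach from the examples above:

```rust
use simple_pg_pool::{ManagerConfig, RecyclingMethod};

// Run a test query before handing out a connection (slower, but it
// detects broken connections that `is_closed` misses).
let mgr_config = ManagerConfig {
    recycling_method: RecyclingMethod::Verified,
};
```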
How can I enable features of the tokio-postgres crate?

Make sure that you depend on the same version of `tokio-postgres` as `deadpool-postgres` does and enable the needed features in your own `Cargo.toml` file:

```toml
[dependencies]
deadpool-postgres = { version = "0.9" }
tokio-postgres = { version = "0.7", features = ["with-uuid-0_8"] }
```
Important: The version numbers of `deadpool-postgres` and `tokio-postgres` do not necessarily match. If they do, it is just a coincidence that both crates have the same MAJOR and MINOR version number.
deadpool-postgres | tokio-postgres |
---|---|
0.7 – 0.12 | 0.7 |
0.6 | 0.6 |
0.4 – 0.5 | 0.5 |
0.2 – 0.3 | 0.5.0-alpha |
How can I clear the statement cache?

You can call `pool.manager().statement_cache.clear()` to clear all statement caches or `pool.manager().statement_cache.remove()` to remove a single statement from all caches.

Important: The `ClientWrapper` also provides a `statement_cache` field which has `clear()` and `remove()` methods that only affect a single client.
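
A sketch of both operations, assuming this fork keeps the deadpool-postgres statement cache API in which `remove()` takes the SQL text and parameter types:

```rust
use simple_pg_client::types::Type;

// Clear the statement caches of every client managed by this pool.
pool.manager().statement_cache.clear();

// Evict one prepared statement from all per-client caches.
pool.manager().statement_cache.remove("SELECT 1 + $1", &[Type::INT4]);
```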
Licensed under either of
at your option.