Crates.io | graphile_worker |
lib.rs | graphile_worker |
version | 0.8.0 |
source | src |
created_at | 2024-01-31 17:31:36.752196 |
updated_at | 2024-05-29 17:47:35.964687 |
description | High performance Rust/PostgreSQL job queue (also suitable for getting jobs generated by PostgreSQL triggers/functions out into a different work queue) |
homepage | https://docs.rs/graphile_worker |
repository | https://github.com/leo91000/graphile_worker |
max_upload_size | |
id | 1121996 |
size | 325,498 |
Rewrite of Graphile Worker in Rust. If you like this library, go sponsor Benjie's project; all the research was done by him, and this library is only a rewrite in Rust 🦀.

The port should be mostly compatible with graphile-worker (meaning you can run it side by side with the Node.js version).

The following differs from Graphile Worker: in Graphile Worker, each process has its own worker_id; in the Rust version there is only one worker_id, and jobs are processed on your async runtime's threads.

Job queue for PostgreSQL running on Rust - allows you to run jobs (e.g. sending emails, performing calculations, generating PDFs, etc) "in the background" so that your HTTP response/application code is not held up. Can be used with any PostgreSQL-backed application.
```sh
cargo add graphile_worker
```

The definition of a task consists simply of an async function and a task identifier:
```rust
use serde::{Deserialize, Serialize};
use graphile_worker::{WorkerContext, TaskHandler};

#[derive(Deserialize, Serialize)]
struct SayHello {
    name: String,
}

impl TaskHandler for SayHello {
    const IDENTIFIER: &'static str = "say_hello";

    async fn run(self, _ctx: WorkerContext) -> Result<(), ()> {
        println!("Hello {}!", self.name);
        Ok(())
    }
}
```
```rust
#[tokio::main]
async fn main() -> Result<(), ()> {
    // `pg_pool` is an `sqlx::PgPool` connected to your database
    graphile_worker::WorkerOptions::default()
        .concurrency(2)
        .schema("example_simple_worker")
        .define_job::<SayHello>()
        .pg_pool(pg_pool)
        .init()
        .await?
        .run()
        .await?;

    Ok(())
}
```
Connect to your database and run the following SQL:

```sql
SELECT graphile_worker.add_job('say_hello', json_build_object('name', 'Bobby Tables'));
```
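In Node's graphile-worker, the SQL `add_job` function also accepts optional named arguments (queue name, run time, max attempts, job key, …). Assuming the Rust port keeps the same SQL signature, a delayed, de-duplicated job might look like this sketch:

```sql
-- Hypothetical usage, assuming the same optional arguments as graphile-worker:
SELECT graphile_worker.add_job(
  'say_hello',
  json_build_object('name', 'Bobby Tables'),
  run_at  => now() + interval '2 hours',  -- delay execution
  job_key => 'say_hello:bobby'            -- de-duplicate by key
);
```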
You can also add jobs from Rust through the worker utils:

```rust
#[tokio::main]
async fn main() -> Result<(), ()> {
    // ...
    let utils = worker.create_utils();

    // Using add_job
    utils.add_job(
        SayHello { name: "Bobby Tables".to_string() },
        Default::default(),
    ).await.unwrap();

    // You can also use `add_raw_job` if you don't have access to the task type,
    // or don't care about end-to-end type safety
    utils.add_raw_job("say_hello", serde_json::json!({ "name": "Bobby Tables" }), Default::default()).await.unwrap();

    Ok(())
}
```
You can provide app state through extensions:
```rust
use serde::{Deserialize, Serialize};
use graphile_worker::{WorkerContext, TaskHandler};
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering::SeqCst;
use std::sync::Arc;

#[derive(Clone, Debug)]
struct AppState {
    run_count: Arc<AtomicUsize>,
}

#[derive(Deserialize, Serialize)]
pub struct CounterTask;

impl TaskHandler for CounterTask {
    const IDENTIFIER: &'static str = "counter_task";

    async fn run(self, ctx: WorkerContext) {
        let app_state = ctx.extensions().get::<AppState>().unwrap();
        let run_count = app_state.run_count.fetch_add(1, SeqCst);
        println!("Run count: {run_count}");
    }
}

#[tokio::main]
async fn main() -> Result<(), ()> {
    // `pg_pool` is an `sqlx::PgPool` connected to your database
    graphile_worker::WorkerOptions::default()
        .concurrency(2)
        .schema("example_simple_worker")
        .add_extension(AppState {
            run_count: Arc::new(AtomicUsize::new(0)),
        })
        .define_job::<CounterTask>()
        .pg_pool(pg_pool)
        .init()
        .await?
        .run()
        .await?;

    Ok(())
}
```
Features:

- Easy to test (thanks to the `run_once` util)
- Low latency (uses `LISTEN`/`NOTIFY` to be informed of jobs as they're inserted)
- High performance (uses `SKIP LOCKED` to find jobs to execute, resulting in faster fetches)
- Job de-duplication via unique `job_key`
Production ready, but the API may be rough around the edges and might change.

Requires PostgreSQL 12+. It might work with older versions, but it has not been tested. Note: PostgreSQL 12 is required for the `generated always as (expression)` feature.
```sh
cargo add graphile_worker
```
`graphile_worker` manages its own database schema (`graphile_worker`). Just point the worker at your database and it handles its own migrations.