| Crates.io | stackduck |
| lib.rs | stackduck |
| version | 0.1.3 |
| created_at | 2025-08-09 20:27:14.718898+00 |
| updated_at | 2025-08-11 11:13:35.968294+00 |
| description | High-performance distributed queue system |
| homepage | |
| repository | https://github.com/6amson/stackduck |
| max_upload_size | |
| id | 1788223 |
| size | 146,675 |
A high-performance, distributed job queue system built in Rust with gRPC support for language-agnostic job processing.
Stackduck provides a robust foundation for background job processing in distributed systems. It combines the speed of Redis for job queuing with PostgreSQL's reliability for metadata persistence, offering automatic failover and graceful degradation.
Producers (Framework Agnostic)
NestJS | Flask | Ruby | CLI | etc.
              |
              | gRPC Job Submission
              v
+---------------------------+
| StackDuck gRPC API (Rust) |
| - async-stream support    |
| - Job validation          |
+---------------------------+
              |
              | Enqueue + Persist
              v
+---------------------------+     +---------------------------+
| Redis Queue               |     | PostgreSQL                |
| - Job queue               |     | - Job metadata            |
| - Priority queues         |     | - Execution state         |
| - Rate limiting           |     | - Worker assignments      |
+---------------------------+     +---------------------------+
              |
              | Notification Emit
              v
+---------------------------+
| StackDuck gRPC API (Rust) |
| - Job ready notifications |
| - Worker selection        |
+---------------------------+
              |
              | async-stream job distribution
              v
+-----------------------------------------------------------+
| Workers/Consumers (Multi-Framework)                        |
| Job Execution                                              |
| Job Ack and Job Nack calls                                 |
| +------------+ +------------+ +------------+ +----------+ |
| | Ruby       | | NestJS     | | Flask      | | Rust     | |
| | Worker     | | Worker     | | Worker     | | Worker   | |
| | (gRPC)     | | (gRPC)     | | (gRPC)     | | (Native) | |
| +------------+ +------------+ +------------+ +----------+ |
+-----------------------------------------------------------+
              |
              | Job Ack or Job Nack
              v
+-----------------------------------------+
| StackDuck gRPC API (Rust)               |
| - On Nack (Handles retries)             |
| - On Ack (Marks job as complete)        |
| - On Error (Marks job as failed)        |
| - On failure (Moves job to dead-letter) |
+-----------------------------------------+
              |
              | Stream Results
              v
+--------------------------------------+
| Consumers/Workers                    |
| - Poll dead-letter queue results     |
| - Handle dead-letter jobs            |
| - Log failed jobs                    |
+--------------------------------------+
cargo install stackduck
export DATABASE_URL="postgres://user:pass@localhost/mydb"
export REDIS_URL="redis://127.0.0.1:6379"
export SERVER_ADDR="127.0.0.1:50051"  # optional
stackduck
git clone https://github.com/6amson/Stackduck
cd Stackduck
cargo build --release
Create a .env file:
DATABASE_URL=postgresql://user:password@localhost/stackduck
REDIS_URL=redis://localhost:6379
# Optional
SERVER_ADDR=127.0.0.1:50051
cargo run --release
// Rust client example (inside an async context; the client and request types
// come from the generated Stackduck gRPC bindings)
let mut client = StackDuckServiceClient::connect("http://127.0.0.1:50051").await?;

let request = EnqueueJobRequest {
    job_type: "email_send".to_string(),
    payload: r#"{"to": "user@example.com", "subject": "Welcome!"}"#.to_string(),
    priority: 1,        // High priority (1-3)
    delay: 0,           // Retries run after delay * 2^retry_count seconds, capped at 1 hour
    max_retries: 3,     // Retry up to 3 times
    scheduled_at: None, // ISO timestamp for scheduled execution, None for immediate
};

let response = client.enqueue_job(request).await?;
println!("Job enqueued with ID: {}", response.into_inner().job_id);
// Worker consuming jobs over a streaming connection
let request = ConsumeJobsRequest {
    worker_id: "worker-001".to_string(),
    job_types: vec!["email_send".to_string(), "image_process".to_string()],
};

let mut stream = client.consume_jobs(request).await?.into_inner();

while let Some(job_message) = stream.message().await? {
    if let Some(job) = job_message.job {
        println!("Processing job: {}", job.id);

        // Process the job, then ack (complete) or nack (fail) it
        match process_job(&job).await {
            Ok(_) => {
                // Mark the job as completed
                client.complete_job(CompleteJobRequest {
                    job_id: job.id.clone(),
                }).await?;
            }
            Err(e) => {
                // Mark the job as failed; Stackduck retries it if attempts remain
                client.fail_job(FailJobRequest {
                    job_id: job.id.clone(),
                    error_message: e.to_string(),
                }).await?;
            }
        }
    }
}
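The process_job function above is not provided by Stackduck; it is whatever work the worker actually performs. A minimal sketch, assuming the payload is the JSON string defined in the Job message below and that the worker dispatches on job_type:

use serde_json::Value;

// Hypothetical application-side handler; Stackduck delivers the Job,
// the worker decides how to execute it.
async fn process_job(job: &Job) -> Result<(), Box<dyn std::error::Error>> {
    // The payload is a JSON string (see the Job message below)
    let payload: Value = serde_json::from_str(&job.payload)?;

    match job.job_type.as_str() {
        "email_send" => {
            let to = payload["to"].as_str().unwrap_or_default();
            println!("Sending email to {to}");
            // ... call your mail provider here ...
        }
        "image_process" => {
            println!("Processing image job {}", job.id);
            // ... run the image pipeline here ...
        }
        other => return Err(format!("unknown job type: {other}").into()),
    }

    Ok(())
}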
message Job {
  string id = 1;
  string job_type = 2;
  string payload = 3;       // JSON string
  string status = 4;        // "queued", "running", "completed"
  int32 priority = 5;       // 1 (high) to 3 (low)
  int32 retry_count = 6;
  int32 max_retries = 7;
  string error_message = 8;
  int32 delay = 9;          // Delay in seconds
  int64 scheduled_at = 10;  // Unix timestamp
  int64 started_at = 11;
  int64 completed_at = 12;
  int64 created_at = 13;
  int64 updated_at = 14;
}
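tonic/prost generate a plain Rust struct from this message, so worker code can read the fields directly. The helpers below are illustrative only (they are not part of the Stackduck API) and just report the remaining retry budget and a one-line summary:

// Illustrative helpers over the generated Job struct; the field names
// follow the proto definition above.
fn retries_remaining(job: &Job) -> i32 {
    (job.max_retries - job.retry_count).max(0)
}

fn summarize(job: &Job) -> String {
    format!(
        "job {} type={} status={} priority={} retries_left={} scheduled_at={}",
        job.id,
        job.job_type,
        job.status,
        job.priority,
        retries_remaining(job),
        job.scheduled_at, // Unix timestamp; proto3 leaves this 0 when unset
    )
}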
EnqueueJob
Adds a new job to the queue.
Request: EnqueueJobRequest
- job_type: String identifier for the job type
- payload: JSON string with job data
- priority: Job priority (1-3, defaults to 2)
- delay: Time basis for calculating exponential backoff, in seconds (defaults to 30)
- max_retries: Maximum retry attempts if the job fails (defaults to 2)
- scheduled_at: ISO 8601 timestamp to schedule the job for future execution

DequeueJob
Pulls a single job from a specific queue. [This is an internally called endpoint.]
Request: DequeueJobRequest
- queue_name: Name of the queue to pull from

ConsumeJobs
Opens a streaming connection for continuous job processing.
Request: ConsumeJobsRequest
- worker_id: Unique identifier for the worker
- job_types: Array of job types this worker can handle

CompleteJob
Marks a job as successfully completed.
Request: CompleteJobRequest
- job_id: ID of the completed job

FailJob
Marks a job as failed with an error message.
Request: FailJobRequest
- job_id: ID of the failed job
- error_message: Description of the failure

RetryJob
Retries a failed job if within max retry limits. [This is an internally called endpoint.]
Request: RetryJobRequest
- job_id: ID of the job to retry

Job lifecycle:
- complete_job() → status "completed"
- fail_job() → retry if attempts remain, else "failed"
- retry_job() → back to "queued" with an incremented retry count and exponential backoff (see the sketch below)

Configuration:
- DATABASE_URL: PostgreSQL connection string
- REDIS_URL: Redis connection string (optional)
- SERVER_ADDR: gRPC server address (default port: 50051)
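The retry delay follows the rule noted in the enqueue example: the next attempt runs after delay * 2^retry_count seconds, capped at one hour. The sketch below reproduces that calculation for reference only; the server applies it internally when a job is nacked.

use std::time::Duration;

// Illustrative reconstruction of the documented backoff rule:
// next attempt after delay * 2^retry_count seconds, capped at 1 hour.
fn retry_backoff(delay_secs: u64, retry_count: u32) -> Duration {
    const MAX_BACKOFF_SECS: u64 = 60 * 60; // 1 hour cap
    let secs = delay_secs
        .saturating_mul(1u64 << retry_count.min(32))
        .min(MAX_BACKOFF_SECS);
    Duration::from_secs(secs)
}

// With the default delay of 30s: 30s, 60s, 120s, 240s, ... capped at 3600s.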
Stackduck uses gRPC for cross-language compatibility. Generate clients for:
- Python: pip install grpcio-tools
- Node.js: npm install @grpc/grpc-js
- Go: go install google.golang.org/protobuf/cmd/protoc-gen-go
- .NET: dotnet add package Grpc.Tools
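For Rust, the client used in the examples above is generated with tonic rather than installed from a package manager. A minimal build.rs sketch, assuming tonic-build is listed under [build-dependencies] and that the service definition lives at proto/stackduck.proto (the actual path in the repository may differ):

// build.rs - generate the Rust client with tonic-build. The proto path is an
// assumption; point it at wherever the Stackduck .proto file lives in your tree.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("proto/stackduck.proto")?;
    Ok(())
}

The generated module is then pulled in with tonic::include_proto!, using whatever package name the proto declares; that module is where types such as StackDuckServiceClient and EnqueueJobRequest come from.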
Stackduck ships with built-in metrics and logging.

Contributing:
- Create a feature branch: git checkout -b feature-name
- Run the tests: cargo test

MIT License - see LICENSE file for details.