| Crates.io | tunny |
| lib.rs | tunny |
| version | 0.1.0 |
| created_at | 2025-07-25 07:40:33.173231+00 |
| updated_at | 2025-07-25 07:40:33.173231+00 |
| description | Tunny is a flexible, efficient thread pool library for Rust built to manage and scale concurrent workloads. It enables you to process jobs in parallel across a configurable number of worker threads, supporting synchronous, asynchronous, and timeout-based job execution. |
| homepage | |
| repository | https://github.com/busyster996/RustTunny |
| max_upload_size | |
| id | 1767289 |
| size | 36,224 |
Tunny is a flexible, efficient thread pool library for Rust built to manage and scale concurrent workloads. It enables you to process jobs in parallel across a configurable number of worker threads, supporting synchronous, asynchronous, and timeout-based job execution.
Custom job handling can be implemented via the `Worker` trait. Add the dependency to your `Cargo.toml`:

```toml
[dependencies]
tunny = "0.1.0"
```
Basic synchronous usage:

```rust
use tunny::{Pool, Handler};

fn main() {
    // Create a thread pool with 4 callback workers
    let pool = Pool::new_callback(4);

    // Submit a synchronous job
    let result = pool.process(Box::new(|| {
        println!("Job processed");
        Ok(())
    }));
    assert!(result.is_ok());

    // Shut down the pool
    pool.close();
}
```
Asynchronous job submission with `submit`:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use tunny::{Pool, Handler};

fn main() {
    let pool = Pool::new_callback(2);
    let counter = Arc::new(AtomicUsize::new(0));

    for i in 0..10 {
        let counter_clone = Arc::clone(&counter);
        pool.submit(Box::new(move || {
            counter_clone.fetch_add(1, Ordering::SeqCst);
            println!("Async job {}", i);
            Ok(())
        })).unwrap();
    }

    // Wait for the jobs to finish
    std::thread::sleep(std::time::Duration::from_millis(100));
    assert_eq!(counter.load(Ordering::SeqCst), 10);

    pool.close();
}
```
Timeout-based execution with `process_timed`:

```rust
use std::time::Duration;
use tunny::{Pool, Handler, TunnyError};

fn main() {
    let pool = Pool::new_callback(2);

    // This job finishes well within its 500 ms deadline.
    let result = pool.process_timed(
        Box::new(|| {
            std::thread::sleep(Duration::from_millis(100));
            Ok(())
        }),
        Duration::from_millis(500),
    );
    assert!(result.is_ok());

    // This job exceeds its 100 ms deadline and times out.
    let timeout_result = pool.process_timed(
        Box::new(|| {
            std::thread::sleep(Duration::from_millis(1000));
            Ok(())
        }),
        Duration::from_millis(100),
    );
    assert!(matches!(timeout_result, Err(TunnyError::JobTimedOut)));

    pool.close();
}
```
Implementing a custom `Worker`:

```rust
use tunny::{Worker, Handler};
use std::sync::Arc;

struct MyWorker;

impl Worker for MyWorker {
    fn process(&mut self, handler: Handler) -> std::result::Result<(), Box<dyn std::error::Error + Send>> {
        println!("MyWorker is processing a job!");
        // Run the submitted job.
        handler()
    }
}

fn main() {
    // The second argument is a constructor closure that builds each worker instance.
    let pool = tunny::Pool::new(2, || Box::new(MyWorker));

    pool.process(Box::new(|| {
        println!("Running inside MyWorker");
        Ok(())
    })).unwrap();

    pool.close();
}
```
The `Pool` API:

- `Pool::new(n, constructor)`: Create a pool with `n` workers.
- `Pool::new_callback(n)`: Create a pool of callback workers.
- `Pool::new_func(n, f)`: Create a pool with a custom job-processing function.
- `process(handler)`: Submit a job synchronously.
- `process_timed(handler, timeout)`: Submit a job with a timeout.
- `submit(handler)`: Submit a job asynchronously.
- `set_size(n)`: Resize the pool dynamically (see the sketch after this list).
- `get_size()`: Get the current pool size.
- `queue_length()`: Get the current job queue length.
- `close()`: Shut down the pool.
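A minimal sketch of dynamic resizing and queue monitoring, assuming `set_size` takes the new worker count and that `get_size()` and `queue_length()` return `usize` (return types are not shown in the summary above):

```rust
use tunny::Pool;

fn main() {
    // Start with two callback workers, as in the earlier examples.
    let pool = Pool::new_callback(2);

    // Grow the pool at runtime; any return value is ignored in this sketch.
    pool.set_size(4);

    // Both getters are assumed to return usize here.
    println!("workers: {}", pool.get_size());
    println!("queued jobs: {}", pool.queue_length());

    pool.close();
}
```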
Implement `Worker` for custom job-handling logic:

- `process(&mut self, handler: Handler)`: Process a job.
- `block_until_ready(&mut self)`: Block until the worker is ready for the next job.
- `interrupt(&mut self)`: Interrupt a job in progress.
- `terminate(&mut self)`: Clean up when the worker exits.
- `bind_pool(&mut self, pool: Arc<Pool>)`: Bind the worker to a pool.
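The custom-worker example above implements only `process`, which suggests the remaining methods have default implementations. A hedged sketch of overriding the lifecycle hooks, assuming the signatures listed above with `()` return types (not shown in the summary); `LoggingWorker` is a hypothetical name:

```rust
use tunny::{Handler, Pool, Worker};

struct LoggingWorker;

impl Worker for LoggingWorker {
    fn process(&mut self, handler: Handler) -> std::result::Result<(), Box<dyn std::error::Error + Send>> {
        println!("LoggingWorker picked up a job");
        handler()
    }

    // Return types below are assumed to be (); the method summary above does not show them.
    fn block_until_ready(&mut self) {
        // e.g. wait until an external resource is available
    }

    fn interrupt(&mut self) {
        println!("current job interrupted");
    }

    fn terminate(&mut self) {
        println!("worker cleaning up before exit");
    }
}

fn main() {
    let pool = Pool::new(2, || Box::new(LoggingWorker));
    pool.process(Box::new(|| Ok(()))).unwrap();
    pool.close();
}
```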
Tunny uses the `TunnyError` enum for pool and job errors:

- `PoolNotRunning`: The pool is not running.
- `WorkerClosed`: The worker was closed.
- `JobTimedOut`: The job timed out.
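As an illustration, a minimal sketch of matching on these variants for a timed job, assuming `process_timed` returns `Result<_, TunnyError>` as in the timeout example above:

```rust
use std::time::Duration;
use tunny::{Pool, TunnyError};

fn main() {
    let pool = Pool::new_callback(1);

    let result = pool.process_timed(
        Box::new(|| {
            std::thread::sleep(Duration::from_millis(200));
            Ok(())
        }),
        Duration::from_millis(50),
    );

    match result {
        Ok(_) => println!("job finished in time"),
        Err(TunnyError::JobTimedOut) => println!("job exceeded its deadline"),
        Err(TunnyError::PoolNotRunning) => println!("pool is no longer accepting jobs"),
        // WorkerClosed and any other variants fall through here.
        Err(_) => println!("worker or pool error"),
    }

    pool.close();
}
```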
A comprehensive suite of tests in `lib.rs` demonstrates all core features, including pool creation, resizing, job submission, timeouts, and queue monitoring.

License: MIT
Tunny uses the crossbeam-channel crate for channel communication.