future-pool

Crates.io: future-pool
Version: 0.1.0
Created: 2025-10-17
Description: A simple yet efficient worker pool implementation for async task processing
Author: Scofield Liu (Scofield626)

README

future-pool

A simple async worker pool built on Rust's std::thread and the futures crate. It avoids per-task tokio::spawn overhead and exposes a clear task-plus-context API.

Features

  • futures + std::thread: Drives async process futures on worker threads via block_on.
  • Clear API: Implement WorkerTask with a per-task struct and a shared Context.
  • Thread-per-core model: One OS thread per worker.
  • Simple: Round-robin dispatch, bounded queues, minimal surface.
  • Configurable: Worker count, per-worker buffer size, and opt-in CPU core affinity.

Installation

Add this to your Cargo.toml:

[dependencies]
future-pool = "0.1.0"

Quick start (API usage)

  • Define a task with per-task data and implement WorkerTask:

    struct MyTask { /* per-task fields */ }

    impl WorkerTask for MyTask {
        type Context = /* shared state the task needs */;

        async fn process(self, ctx: &Self::Context) { /* use ctx */ }
    }

  • Create a pool with shared Context and configuration:

    let context = /* build context */;
    let config = WorkerPoolConfig { /* worker count, buffer size, affinity */ };
    let mut pool = WorkerPool::new(context, config);

  • Spawn work using spawn (note it is async and returns a Result):

    pool.spawn(MyTask { /* data */ }).await?;

See examples/ for a complete program.
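For orientation before opening examples/, here is one possible end-to-end sketch assembled from the API above. The `Arc<AtomicUsize>` context, `WorkerPoolConfig::default()`, and driving the async section with `futures::executor::block_on` are assumptions for illustration, not the crate's documented usage:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

use future_pool::{WorkerPool, WorkerPoolConfig, WorkerTask};

// Per-task data: the amount to add to a shared counter.
struct CountTask {
    amount: usize,
}

impl WorkerTask for CountTask {
    // Shared state handed to every worker.
    type Context = Arc<AtomicUsize>;

    async fn process(self, ctx: &Self::Context) {
        ctx.fetch_add(self.amount, Ordering::SeqCst);
    }
}

fn main() {
    let context = Arc::new(AtomicUsize::new(0));
    let config = WorkerPoolConfig::default(); // assumed to exist
    let mut pool = WorkerPool::new(Arc::clone(&context), config);

    // spawn is async (backpressure on full worker queues), so drive it
    // with a lightweight executor here.
    futures::executor::block_on(async {
        for amount in 1..=10 {
            pool.spawn(CountTask { amount }).await.unwrap();
        }
    });

    pool.shutdown(); // consumes the pool
    println!("total = {}", context.load(Ordering::SeqCst));
}
```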

API

  • trait WorkerTask

    • type Context: Clone + Send + Sync + 'static
    • fn process(self, ctx: &Self::Context) -> impl Future<Output = ()> + Send
  • struct WorkerPool<T: WorkerTask>

    • fn new(context: T::Context, config: WorkerPoolConfig) -> Self
    • async fn spawn(&mut self, task: T) -> Result<(), SendError<T>>
    • fn worker_count(&self) -> usize
    • fn shutdown(self)

Configuration

WorkerPoolConfig options:

  • num_workers: Number of worker threads (defaults to available CPUs)
  • buffer_size: Channel buffer size per worker
  • enable_core_affinity: Opt-in CPU core pinning
  • core_offset: Starting core offset for pinning (auto-calculated if not set)

Defaults: num_workers=None, buffer_size=DEFAULT_CHANNEL_SIZE, enable_core_affinity=false, core_offset=None.
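As a sketch, overriding those defaults might look like the following. The field names come from the option list above, but struct-literal construction (rather than a builder) is an assumption about the crate's API:

```rust
// Hypothetical configuration: 4 pinned workers with larger queues.
let config = WorkerPoolConfig {
    num_workers: Some(4),        // default: None (available CPUs)
    buffer_size: 256,            // default: DEFAULT_CHANNEL_SIZE
    enable_core_affinity: true,  // default: false (opt-in)
    core_offset: None,           // default: None (auto-calculated)
};
```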

When to use

  • Large numbers of small (homogeneous) async tasks where per-task tokio::spawn overhead matters
  • You want a simple pool model: round-robin dispatch to a fixed set of worker threads

Architecture (brief)

  • Thread-per-core worker model: N OS threads, each with its own bounded channel.
  • Optional core affinity: workers can be pinned to CPU cores; disabled by default.
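The dispatch model above can be sketched with std primitives alone. This is a hypothetical illustration, not the crate's code: synchronous closures stand in for futures, `sync_channel` provides the bounded per-worker queue, and `pool_demo`/`Task` are made-up names.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::mpsc::{sync_channel, SyncSender};
use std::sync::Arc;
use std::thread;

type Task = Box<dyn FnOnce() + Send>;

fn pool_demo() -> usize {
    const NUM_WORKERS: usize = 2;
    const BUFFER_SIZE: usize = 8;

    let processed = Arc::new(AtomicUsize::new(0));
    let mut senders: Vec<SyncSender<Task>> = Vec::new();
    let mut handles = Vec::new();

    // N OS threads, each owning its own bounded channel.
    for _ in 0..NUM_WORKERS {
        let (tx, rx) = sync_channel::<Task>(BUFFER_SIZE);
        senders.push(tx);
        handles.push(thread::spawn(move || {
            // Each worker drains only its own queue.
            for task in rx {
                task();
            }
        }));
    }

    // Round-robin dispatch: task i goes to worker i % NUM_WORKERS.
    for i in 0..4 {
        let counter = Arc::clone(&processed);
        senders[i % NUM_WORKERS]
            .send(Box::new(move || {
                counter.fetch_add(1, Ordering::SeqCst);
            }))
            .unwrap();
    }

    drop(senders); // closing every channel lets the workers exit their loops
    for h in handles {
        h.join().unwrap();
    }

    processed.load(Ordering::SeqCst)
}

fn main() {
    println!("processed {} tasks", pool_demo());
}
```

Dropping the senders doubles as a shutdown signal: each worker's receive loop ends when its channel closes, which mirrors the consume-on-shutdown shape of the pool's API.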