| Crates.io | micropool |
| lib.rs | micropool |
| version | 0.2.2 |
| created_at | 2025-10-24 13:28:08.827678+00 |
| updated_at | 2026-01-17 01:47:09.745859+00 |
| description | Low-latency thread pool with parallel iterators |
| homepage | |
| repository | https://github.com/DouglasDwyer/micropool |
| max_upload_size | |
| id | 1898423 |
| size | 92,426 |
micropool is a rayon-style thread pool designed for games and other low-latency scenarios. It can spread work across multiple CPU threads in both blocking and non-blocking ways. It also has full support for paralight's parallel iterators, which cleanly facilitate multithreading in a synchronous codebase. micropool uses a work-stealing scheduling system, but is unique in several respects:
- Whenever an external thread blocks on work submitted to micropool (from calling `join` or using a parallel iterator), it will actively help complete the work. This eliminates the overhead of a context switch.
- Every task submitted to micropool is guaranteed to have at least one thread processing it, from the moment of creation.
- Pool threads prioritize synchronous foreground work over background tasks created via `spawn`.

## `join`

A single operation can be split between two threads using the `join` primitive:
```rust
micropool::join(|| {
    println!("A {:?}", std::thread::current().id());
}, || {
    println!("B {:?}", std::thread::current().id());
});
// Possible output:
// B ThreadId(2)
// A ThreadId(1)
```
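To make the `join` contract concrete, here is a minimal std-only sketch of the same idea: the calling thread runs one closure itself while a helper thread runs the other, so neither side sits idle. The `naive_join` name and implementation are illustrative assumptions, not micropool's actual code (micropool dispatches onto pool threads with work stealing rather than spawning a fresh thread per call).

```rust
use std::thread;

// Illustrative sketch (NOT micropool's implementation): run two closures
// "in parallel", with the calling thread executing one of them itself
// rather than blocking while both run elsewhere.
fn naive_join<A, B, RA, RB>(a: A, b: B) -> (RA, RB)
where
    A: FnOnce() -> RA + Send,
    B: FnOnce() -> RB + Send,
    RA: Send,
    RB: Send,
{
    thread::scope(|s| {
        // Offload `b` to a scoped helper thread...
        let handle = s.spawn(b);
        // ...while the calling thread runs `a` instead of sleeping.
        let ra = a();
        let rb = handle.join().unwrap();
        (ra, rb)
    })
}

fn main() {
    let (x, y) = naive_join(|| 1 + 1, || 2 + 2);
    println!("{} {}", x, y); // prints "2 4"
}
```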
Parallel iterators allow for splitting common list operations across multiple threads. micropool re-exports the `paralight` library:
```rust
use micropool::iter::*;

let len = 10_000;
let input = (0..len as u64).collect::<Vec<u64>>();
let input_slice = input.as_slice();
let result = input_slice
    .par_iter()
    .with_thread_pool(micropool::split_by_threads())
    .sum::<u64>();
assert_eq!(result, 49995000);
```
The `.with_thread_pool` line specifies that the current micropool instance should be used, and `split_by_threads` indicates that each pool thread should process an equal-sized chunk of the data. Other available data-splitting strategies are `split_by`, `split_per_item`, and `split_per`.
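The arithmetic behind equal-chunk splitting is straightforward to sketch. The following `chunk_bounds` helper is a hypothetical illustration of the general technique (dividing `len` items among `threads` workers so chunk sizes differ by at most one), not paralight's or micropool's actual implementation:

```rust
// Illustrative helper (hypothetical, assumes threads > 0): compute the
// half-open ranges [start, end) each worker would process if `len` items
// were split as evenly as possible among `threads` workers.
fn chunk_bounds(len: usize, threads: usize) -> Vec<(usize, usize)> {
    let base = len / threads;
    let extra = len % threads; // the first `extra` chunks get one more item
    let mut bounds = Vec::with_capacity(threads);
    let mut start = 0;
    for i in 0..threads {
        let size = base + usize::from(i < extra);
        bounds.push((start, start + size));
        start += size;
    }
    bounds
}

fn main() {
    // 10 items across 4 threads -> chunks of sizes 3, 3, 2, 2.
    println!("{:?}", chunk_bounds(10, 4)); // prints "[(0, 3), (3, 6), (6, 8), (8, 10)]"
}
```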
## `spawn`

Tasks can be spawned asynchronously, then joined later:
```rust
let task = micropool::spawn_owned(|| 2 + 2);
println!("Is my task complete yet? {}", task.complete());
println!("The result: {}", task.join());
// Possible output:
// Is my task complete yet? false
// The result: 4
```
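For comparison, the standard library offers the same poll-or-block handle shape for plain OS threads. The sketch below uses only `std`; the key difference is that micropool's `spawn_owned` schedules onto existing pool threads instead of creating a fresh OS thread per task:

```rust
use std::thread;

fn main() {
    // `thread::spawn` returns a JoinHandle that can be polled or joined.
    let task = thread::spawn(|| 2 + 2);
    // `is_finished` reports completion without blocking; depending on
    // timing this may print either false or true.
    println!("Is my task complete yet? {}", task.is_finished());
    // `join` blocks until the result is available.
    println!("The result: {}", task.join().unwrap());
}
```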
The following example illustrates the properties of the micropool scheduling system:
```rust
println!("A {:?}", std::thread::current().id());
let background_task = micropool::spawn_owned(|| println!("B {:?}", std::thread::current().id()));
micropool::join(|| {
    std::thread::sleep(std::time::Duration::from_millis(20));
    println!("C {:?}", std::thread::current().id())
}, || {
    println!("D {:?}", std::thread::current().id());
    micropool::join(|| {
        std::thread::sleep(std::time::Duration::from_millis(200));
        println!("E {:?}", std::thread::current().id());
    }, || {
        println!("F {:?}", std::thread::current().id());
    });
});
```
One possible output of this code is:
```text
A ThreadId(1) // The main thread is #1
D ThreadId(2) // Thread #2 begins helping the outer micropool::join call
C ThreadId(1) // Thread #1 helps to finish the outer micropool::join call
F ThreadId(1) // Thread #1 steals work from thread #2, to help complete the inner micropool::join call
E ThreadId(2) // Thread #2 finishes the inner micropool::join call
B ThreadId(2) // Thread #2 grabs and completes the background task; thread #1 will *never* execute this
```
There are several key differences between micropool's behavior and rayon's:

- No blocking occurs when `join` is called from an external thread. With rayon, this call would simply block, and the main thread would wait for pool threads to finish both halves of `join`. With micropool, the external thread helps.
- If the rayon thread pool is saturated with tasks, the call to `join` might take a long and unpredictable amount of time - the rayon workers would need to finish their current tasks first, even if those tasks are unrelated.
- While a thread waits on `join`, there is other work available: the `background_task`. Completion of `background_task` is not required for `join` to return, so the external thread will never run it. In contrast, a blocked rayon thread may run unrelated work in the meantime, so it may take a long and unpredictable amount of time before control flow returns from the `join`.
- micropool threads prioritize synchronous work (created via `join`) before processing asynchronous tasks created via `spawn`. This natural separation of foreground and background work ensures that the most important foreground tasks - like per-frame rendering or physics in a game engine - happen first.
- Waking a sleeping thread is slow; micropool compensates for this by spinning threads before they sleep.
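The spin-before-sleep trade-off mentioned above can be sketched with std primitives alone. In this hedged illustration (the `spin_then_sleep_demo` name, spin count, and flag-based "work queue" are all assumptions for demonstration, not micropool's code), a worker polls for work in a bounded busy-loop so that a task arriving during the spin phase is picked up with minimal latency, and only parks the thread once the spin budget is exhausted:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// Illustrative sketch of spin-before-sleep (NOT micropool's code): the
// worker spins checking for work before parking, trading a little CPU
// for much lower wake-up latency.
fn spin_then_sleep_demo() -> &'static str {
    let has_work = Arc::new(AtomicBool::new(false));
    let flag = has_work.clone();
    let worker = thread::spawn(move || loop {
        // Spin phase: poll for work without sleeping.
        for _ in 0..10_000 {
            if flag.swap(false, Ordering::Acquire) {
                return "did work";
            }
            std::hint::spin_loop();
        }
        // Sleep phase: park until a producer unparks us.
        thread::park();
    });
    // Publish work, then wake the worker in case it already parked
    // (unpark sets a token, so a subsequent park returns immediately).
    has_work.store(true, Ordering::Release);
    worker.thread().unpark();
    worker.join().unwrap()
}

fn main() {
    println!("{}", spin_then_sleep_demo()); // prints "did work"
}
```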