chute

Crates.io: chute
lib.rs: chute
source: src
created_at: 2024-11-02 20:08:40.186658
updated_at: 2024-11-20 09:10:30.890058
description: Lockfree mpmc/spmc broadcast queue.
repository: https://github.com/tower120/chute
id: 1433145
size: 0
owner: tower120

README

chute

(Queue illustration)

An mpmc[1]/spmc[2] lock-free broadcast[3] queue.

  • Lock-free consumers without overhead[4].
  • Mpmc: lock-free producers that write simultaneously.
  • Spmc is ordered. Mpmc is ordered within each writer's messages[5].
  • Unbounded, dynamically sized.
  • Shared queue. All readers and writers use the same queue, without duplication.
  • No clones! Messages are not cloned on return, so Clone is not required (see the sketch after this list).
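
For instance, here is a minimal sketch of the broadcast behaviour, using the same queue/writer/reader API as the example below (the values and counts are arbitrary): both readers observe every pushed message, and nothing needs Clone.

let queue = chute::mpmc::Queue::new();
let mut writer = queue.writer();

// Readers subscribe before anything is pushed, so they see everything.
let mut reader_a = queue.reader();
let mut reader_b = queue.reader();

for i in 0..10usize {
    writer.push(i);
}

// Each reader independently consumes the full sequence 0..10.
for reader in [&mut reader_a, &mut reader_b] {
    let mut sum = 0;
    while let Some(msg) = reader.next() {
        sum += msg;
    }
    assert_eq!(sum, 45); // 0 + 1 + ... + 9
}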

Blazingly fast reads. The consumer basically reads a plain slice of data, then does a single atomic read that determines the next slice.
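
As a rough, self-contained illustration of that read path (not chute's actual internals; the types below are invented for the sketch, and writer-side publication is omitted): the hot loop indexes a plain slice, and an atomic load happens only when the already-known range is exhausted.

use std::sync::atomic::{AtomicUsize, Ordering};

// Illustrative reader over a shared buffer (NOT chute's real data structure).
struct SliceReader<'a> {
    data: &'a [u64],            // the block's storage
    published: &'a AtomicUsize, // how many items have been made visible
    pos: usize,                 // how far this reader has consumed
    visible: usize,             // cached snapshot of `published`
}

impl<'a> SliceReader<'a> {
    fn next(&mut self) -> Option<&'a u64> {
        if self.pos == self.visible {
            // Slow path: one atomic read refreshes the visible range.
            self.visible = self.published.load(Ordering::Acquire);
            if self.pos == self.visible {
                return None; // nothing new published yet
            }
        }
        // Fast path: plain slice access, no atomics.
        let item = &self.data[self.pos];
        self.pos += 1;
        Some(item)
    }
}

// Usage: with 3 items published, the reader yields them without further atomics.
let data = [10u64, 20, 30, 0, 0];
let published = AtomicUsize::new(3);
let mut reader = SliceReader { data: &data, published: &published, pos: 0, visible: 0 };
assert_eq!(reader.next().copied(), Some(10));
assert_eq!(reader.next().copied(), Some(20));
assert_eq!(reader.next().copied(), Some(30));
assert_eq!(reader.next(), None);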

Example

Write from multiple threads, read from multiple threads:

const WRITERS         : usize = 4;
const WRITER_MESSAGES : usize = 100;
const MESSAGES        : usize = WRITERS*WRITER_MESSAGES;
const READERS         : usize = 4;
let queue = chute::mpmc::Queue::new();

std::thread::scope(|s| {
    // READ threads
    for _ in 0..READERS {
        let mut reader = queue.reader();
        s.spawn(move || {
            let mut sum = 0;
            for _ in 0..MESSAGES {
                // Since this is a queue, not a channel - 
                // we just spin around next().
                let msg = loop {
                    if let Some(msg) = reader.next() {
                        break msg;
                    }
                };
                sum += msg;
            }
            
            assert_eq!(sum, (0..MESSAGES).sum());
        });
    }        
    
    // WRITE threads
    for t in 0..WRITERS {
        let mut writer = queue.writer();
        s.spawn(move || {
            for i in 0..WRITER_MESSAGES {
                writer.push(t*WRITER_MESSAGES + i);
            }             
        });
    }
});

See examples.

Benchmarks

Intel i7-4771 (3.5 GHz, 4C/8T), DDR3 1600 MHz, Windows 10. See the benchmarks sub-project.

(Benchmark charts: seq, spsc, mpsc, broadcast mpmc, broadcast spmc)

Benchmarks compare chute with channels, since chute can be used more or less as a channel by spinning on the reader side.
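
For instance, a blocking-receive style read can be sketched by spinning on next(), reusing the API from the example above (the spin loop itself is plain std, not a chute helper):

let queue = chute::mpmc::Queue::new();
let mut writer = queue.writer();
let mut reader = queue.reader();

writer.push(7usize);

// Emulate a blocking receive: spin until a message shows up.
let mut received = 0;
loop {
    if let Some(msg) = reader.next() {
        received += msg;
        break;
    }
    std::hint::spin_loop(); // hint to the CPU that we are busy-waiting
}
assert_eq!(received, 7);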

P.S. Suggestions for benchmark candidates are welcome!

How it works

Chute is the next iteration of rc_event_queue. The key difference is truly lock-free mpmc writers.

See how it works.
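
As a rough, self-contained illustration of the general idea behind lock-free multi-writer progress (plain std atomics, not chute's actual code): each writer claims a distinct slot index with a single fetch_add, so writers never block one another.

use std::sync::atomic::{AtomicUsize, Ordering};

// Shared "next free slot" counter that all writers bump atomically.
let next_slot = AtomicUsize::new(0);
let mut claimed: Vec<Vec<usize>> = Vec::new();

std::thread::scope(|s| {
    let mut handles = Vec::new();
    for _ in 0..4 {
        handles.push(s.spawn(|| {
            let mut mine = Vec::new();
            for _ in 0..100 {
                // Reserve a unique index: no lock is taken, and no two
                // writers ever receive the same slot.
                mine.push(next_slot.fetch_add(1, Ordering::Relaxed));
            }
            mine
        }));
    }
    for handle in handles {
        claimed.push(handle.join().unwrap());
    }
});

// All 400 reserved indices are distinct.
let mut all: Vec<usize> = claimed.concat();
all.sort();
all.dedup();
assert_eq!(all.len(), 400);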

Test coverage

The library is covered with fuzz tests and Miri tests.

Known limitations

  • Currently, there is no way to "disconnect" a slow reader from the writer side. The queue can grow indefinitely if at least one reader consumes messages more slowly than the writers produce them.

  • All blocks currently have the same size. This is likely to change in the future; it will probably work the same way as in rc_event_queue.

Footnotes

  1. Multi-producer, multi-consumer.

  2. Single-producer, multi-consumer.

  3. Also known as a multicast queue. Each consumer gets every message sent to the queue from the moment of subscription.

  4. Compared to traditional locking techniques using a Mutex.

  5. Messages written by the same writer keep their order relative to each other, but messages from other writers may appear between them. If write calls are externally synchronized, all messages will be ordered by that "synchronization order".
