cancel-this

version: 0.4.0
created_at: 2025-09-29 09:23:32.045799+00
updated_at: 2025-12-24 16:59:08.490716+00
description: A user-friendly cooperative cancellation and liveness monitoring library.
homepage: https://github.com/daemontus/cancel-this
repository: https://github.com/daemontus/cancel-this
author: Samuel Pastva (daemontus)

README


cancel_this (Rust co-op cancellation)

This crate provides a user-friendly way to implement cooperative cancellation in Rust based on a wide range of criteria, including triggers, timers, OS signals (Ctrl+C), memory limits, or the Python interpreter linked via PyO3. It also provides liveness monitoring of "cancellation-aware" code.

Why not use async instead of cooperative cancellation? In principle, async was designed to solve a different problem: executing IO-bound tasks in a non-blocking fashion. It is not really designed for CPU-bound tasks. Consequently, using async adds a lot of unnecessary overhead to your project that cancel_this does not have (see also the Performance section below).

Why not use stop-token, CancellationToken or other cooperative cancellation crates? So far, all crates I have seen require you to pass the cancellation token around and generally do not make it easy to combine the effects of multiple tokens. In cancel_this, the goal was to make cancellation dead simple: You register however many cancellation triggers you want, each trigger is valid within a specific scope (and thread), and can be checked by a macro anywhere in your code.
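To make the contrast concrete, here is a std-only sketch of the token-passing style those crates require. All names here are illustrative and not part of the cancel_this API; note how every layer of the call chain must accept and forward the token, even layers that never check it themselves.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Token-passing style: every function in the call chain must
// accept and forward the cancellation token explicitly.
fn inner_work(token: &AtomicBool) -> Result<u32, &'static str> {
    let mut sum = 0;
    for i in 0..1_000 {
        if token.load(Ordering::Relaxed) {
            return Err("cancelled");
        }
        sum += i;
    }
    Ok(sum)
}

fn outer_work(token: &AtomicBool) -> Result<u32, &'static str> {
    // This layer never checks the token, yet it still has to
    // thread it through to the inner function.
    inner_work(token)
}

fn main() {
    let token = AtomicBool::new(false);
    assert!(outer_work(&token).is_ok());
    token.store(true, Ordering::Relaxed);
    assert!(outer_work(&token).is_err());
}
```

With scoped, thread-local triggers, the intermediate layers need no extra parameter at all; the check macro finds the active triggers on its own.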

A linguistic sidenote: While both are correct, American English prefers double-L (cancellation) in the noun form and single-L (canceled) for the verb form. Furthermore, for words like cancellable, the convention is not really clear. This crate uses double-L everywhere in code (for consistency). In documentation and comments, we try to follow the official linguistic conventions.

Current features

  • Scoped cancellation using thread-local "cancellation triggers."
  • Out-of-the-box support for triggers based on atomics and timers.
  • With feature ctrlc enabled, support for cancellation using SIGINT signals.
  • With feature pyo3 enabled, support for cancellation using Python::check_signals.
  • With feature memory enabled, support for cancellation based on memory consumption returned by memory-stats.
  • With feature liveness enabled, you can register a per-thread handler invoked once the thread becomes unresponsive (i.e., cancellation is not checked periodically within the desired interval).
  • Practically no overhead in cancellable code when cancellation is not actively used.
  • Minimal overhead for "atomic-based" cancellation triggers and PyO3 cancellation.
  • All triggers and guards generate log messages (trace for normal operation, warn for issues where panic can be avoided).
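The liveness-monitoring idea from the list above can be sketched with std only: a watchdog thread invokes a handler when the monitored thread has not "checked in" within the allowed interval. This is an illustrative toy (the function and variable names are made up, not the crate's API), not how cancel_this implements the feature internally.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::{Duration, Instant};

// Returns true if the watchdog saw no check-in within `limit`
// while the "worker" stayed busy for `busy`.
fn watchdog_fired(busy: Duration, limit: Duration) -> bool {
    let fired = Arc::new(AtomicBool::new(false));
    let done = Arc::new(AtomicBool::new(false));
    let start = Instant::now();

    let (f, d) = (fired.clone(), done.clone());
    let monitor = thread::spawn(move || {
        while !d.load(Ordering::Relaxed) {
            thread::sleep(Duration::from_millis(5));
            if start.elapsed() > limit && !d.load(Ordering::Relaxed) {
                // Stand-in for the per-thread unresponsiveness handler.
                f.store(true, Ordering::Relaxed);
                return;
            }
        }
    });

    thread::sleep(busy); // worker is "blocked" and never checks in
    done.store(true, Ordering::Relaxed);
    monitor.join().unwrap();
    fired.load(Ordering::Relaxed)
}

fn main() {
    // Worker blocked for 100 ms with a 30 ms responsiveness limit.
    assert!(watchdog_fired(Duration::from_millis(100), Duration::from_millis(30)));
    // Worker finished well within the limit: handler never fires.
    assert!(!watchdog_fired(Duration::from_millis(5), Duration::from_millis(200)));
}
```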

Simple example

A simple counter that is eventually canceled by a one-second timeout. More complex examples (including liveness monitoring and multithreaded usage) are provided in the documentation.

use std::time::Duration;
use cancel_this::{Cancellable, is_cancelled};

fn cancellable_counter(count: usize) -> Cancellable<()> {
   for _ in 0..count {
      is_cancelled!()?;
      std::thread::sleep(Duration::from_millis(10));
   }
   Ok(())
}

fn main() {
   let one_s = Duration::from_secs(1);
   let result: Cancellable<()> = cancel_this::on_timeout(one_s, || {
      cancellable_counter(5)?;
      cancellable_counter(10)?;
      cancellable_counter(100)?;
      Ok(())
   });
    
   assert!(result.is_err());   
}

Performance

The overall overhead of adding cancellation checks depends heavily on how often they are performed. Ideally, you don't want to run them too often, but delaying cancellation too much can make your code seem unresponsive. In ./benches, we provide a benchmark that illustrates the impact of cancellation on simple code. There, we intentionally check for cancellation far more often than necessary so that the overhead becomes clearly measurable. In your own code, it is typically enough to check for cancellation every few milliseconds.
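A common way to balance responsiveness against overhead is to check only once every N iterations of a hot loop, so the cost of the check is amortized. The following std-only sketch (illustrative names, not the cancel_this API) uses an AtomicBool as the trigger:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Check the flag only once per CHECK_EVERY iterations, so the
// atomic load is amortized across the hot loop.
const CHECK_EVERY: usize = 1024;

fn hashing_loop(stop: &AtomicBool, iterations: usize) -> Result<u64, &'static str> {
    let mut acc: u64 = 0;
    for i in 0..iterations {
        if i % CHECK_EVERY == 0 && stop.load(Ordering::Relaxed) {
            return Err("cancelled");
        }
        // Stand-in for the real per-iteration work.
        acc = acc.wrapping_mul(31).wrapping_add(i as u64);
    }
    Ok(acc)
}

fn main() {
    let stop = AtomicBool::new(false);
    assert!(hashing_loop(&stop, 100_000).is_ok());
    stop.store(true, Ordering::Relaxed);
    assert!(hashing_loop(&stop, 100_000).is_err());
}
```

Choosing CHECK_EVERY is the same trade-off described above: larger values reduce overhead, smaller values improve the worst-case latency of reacting to cancellation.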

Caching cancellation triggers

If you need to check cancellation repeatedly in a performance-sensitive piece of code, you might want to sacrifice some ergonomics of cancel_this for reduced overhead. In such cases, you can use cancel_this::active_triggers to store a "local copy" of all active triggers. You can then pass such triggers directly to is_cancelled! to avoid (relatively) costly thread-local variable access.
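The caching idea can be illustrated with std only. In this sketch, TRIGGERS is a hypothetical stand-in for the crate's thread-local trigger registry (not the real cancel_this internals): the hot loop copies the trigger list into a local variable once, instead of paying for a thread-local lookup on every check.

```rust
use std::cell::RefCell;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

thread_local! {
    // Stand-in for the crate's thread-local trigger registry.
    static TRIGGERS: RefCell<Vec<Arc<AtomicBool>>> = RefCell::new(Vec::new());
}

fn any_triggered(triggers: &[Arc<AtomicBool>]) -> bool {
    triggers.iter().any(|t| t.load(Ordering::Relaxed))
}

fn hot_loop(iterations: usize) -> Result<(), &'static str> {
    // Cache a local copy of the active triggers once, instead of
    // touching the thread-local variable on every single check.
    let cached: Vec<Arc<AtomicBool>> = TRIGGERS.with(|t| t.borrow().clone());
    for _ in 0..iterations {
        if any_triggered(&cached) {
            return Err("cancelled");
        }
        // ... performance-sensitive work ...
    }
    Ok(())
}

fn main() {
    assert!(hot_loop(10).is_ok());
    let trigger = Arc::new(AtomicBool::new(true));
    TRIGGERS.with(|t| t.borrow_mut().push(trigger));
    assert!(hot_loop(10).is_err());
}
```

The trade-off is that triggers registered after the copy is taken are not seen by the cached loop, which is why this is an opt-in optimization rather than the default.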

Sample results

Benchmarks with liveness=true run with liveness monitoring enabled (this adds additional overhead). The synchronous benchmark is a baseline without any cancellation support. The async::tokio benchmark implements cancellation using async functions. The cancellable::none benchmark implements cancellation using cancel_this, but with no trigger registered. Benchmarks marked as cached use a local variable to cache the active triggers. The remaining benchmarks test the different "cancellation triggers" implemented in cancel_this.

These results were obtained on an M2 Max MacBook Pro using cargo bench (the exact output is simplified for brevity). The latest results from a more stable desktop environment are also available on bencher.dev or in the relevant CI run.

hash::synchronous;                                   4.0006 µs

hash::async::tokio;                                  17.076 µs

hash::cancellable::none; (liveness=false)            4.0369 µs
hash::cancellable::none; (liveness=true)             7.6464 µs
hash::cancellable::none::cached; (liveness=false)    4.0020 µs
hash::cancellable::none::cached; (liveness=true)     4.0214 µs

hash::cancellable::atomic; (liveness=false)          4.9599 µs
hash::cancellable::atomic; (liveness=true)           7.6691 µs
hash::cancellable::atomic::cached; (liveness=false)  4.0318 µs
hash::cancellable::atomic::cached; (liveness=true)   4.0614 µs

hash::cancellable::timeout; (liveness=false)         4.9626 µs
hash::cancellable::timeout; (liveness=true)          7.7143 µs

hash::cancellable::sigint; (liveness=false)          4.9717 µs
hash::cancellable::sigint; (liveness=true)           7.7038 µs

hash::cancellable::memory; (liveness=false)          533.16 µs
hash::cancellable::memory; (liveness=true)           535.34 µs

# Tested in a simulated environment; results using an actual Python
# interpreter will be slightly worse, depending on the interpreter.

hash::cancellable::python; (liveness=false)          7.3912 µs
hash::cancellable::python; (liveness=true)           7.8942 µs

To run the benchmarks locally, simply use cargo bench --all-features (with liveness turned on) or cargo bench --features=ctrlc --features=pyo3 --features=memory (liveness turned off).
