| Crates.io | pbt-macro |
| lib.rs | pbt-macro |
| version | 0.3.1 |
| created_at | 2025-08-08 07:38:04.682673+00 |
| updated_at | 2025-08-14 20:00:04.51148+00 |
| description | Property-based testing with `#![no_std]` compatibility, automatic edge cases, and exhaustive breadth-first search over arbitrary types. |
| homepage | https://github.com/wrsturgeon/pbt.git |
| repository | https://github.com/wrsturgeon/pbt.git |
| max_upload_size | |
| id | 1786352 |
| size | 204,519 |
pbt

In order of priority:
Automatic implementation via a #[derive(..)] macro.
We're all human and can forget a case or two, but when that happens, it defeats the point of property-based testing (a concrete illustration follows below). It should be trivial to generate arbitrary values of any type.
Maximal throughput. Most of the time, software works. When that happens, this crate should act like a fuzzing library, since the faster each case is generated and processed, the more real tests are executed, and the more sure you can be that your software works in general. Only when something fails should this crate slow down to print a useful, minimal, and reproducible error; then, once that's fixed, it's back to as many tests as possible.
Exact shrinking. Find the actual smallest interesting example, not a small-enough one near a pseudorandom hit.
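As an illustration of the "forgetting a case" problem above, here is a small, dependency-free sketch (not using this crate): a hand-rolled generator that samples only "typical" values can run a million cases without ever touching the one edge case that matters.

```rust
// Standalone illustration (no external crates): a hand-rolled generator that only
// produces "typical" small values never exercises the one input that breaks the property.
fn naive_generator(seed: &mut u64) -> i32 {
    // A quick-and-dirty linear congruential generator over 0..1000.
    *seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1);
    ((*seed >> 33) % 1000) as i32
}

fn property_holds(x: i32) -> bool {
    // Looks obviously true, but is false for i32::MIN, whose absolute value
    // does not fit in an i32.
    x.checked_abs().is_some()
}

fn main() {
    let mut seed = 42;
    // A million pseudorandom cases, none of which comes anywhere near the bug.
    assert!((0..1_000_000).all(|_| property_holds(naive_generator(&mut seed))));

    // Checking the type's edge cases first finds it immediately.
    for edge in [0, 1, -1, i32::MAX, i32::MIN] {
        if !property_holds(edge) {
            println!("counterexample: {edge}"); // prints i32::MIN
        }
    }
}
```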
Typical property-based testing libraries generate pseudorandom inputs as fast as possible; since those inputs have unpredictable size and complexity, the libraries use heuristic shrinking algorithms to approximate the smallest example similar to a failing one, under some definition of "small" and "similar."
However, there's a tradeoff between shrinking accuracy and generation throughput:
the more information you give to the shrinker, the more information you have to generate each time.
Rust's two most popular property-based testing libraries (as of 2025) occupy two extremes:
quickcheck focuses on high throughput and doesn't even shrink user-defined types
unless you write a custom implementation (in addition to a generator implementation); and
proptest generates a rich structure called a strategy that provides
useful information to the shrinker (and the double-edged sword of generating only specific
subsets of a type, like a strategy that only generates ASCII strings)
at the cost of slower generation and thus lower throughput.
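To make that upkeep concrete, here is roughly what supporting a user-defined type looks like with quickcheck (as of 1.x): you hand-write both the generator and the shrinker, and any gap in either one goes unnoticed. (This is a sketch of quickcheck's API, not of this crate's.)

```rust
use quickcheck::{Arbitrary, Gen};

#[derive(Clone, Debug)]
struct Point {
    x: i32,
    y: i32,
}

impl Arbitrary for Point {
    // The generator half: without this, quickcheck can't produce `Point`s at all.
    fn arbitrary(g: &mut Gen) -> Self {
        Point {
            x: i32::arbitrary(g),
            y: i32::arbitrary(g),
        }
    }

    // The shrinking half: without this, failing `Point`s are reported as-is,
    // however large they happen to be. Note that this sketch forgets to shrink
    // `y` -- exactly the kind of silent gap that's easy to miss.
    fn shrink(&self) -> Box<dyn Iterator<Item = Self>> {
        let y = self.y;
        Box::new(self.x.shrink().map(move |x| Point { x, y }))
    }
}
```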
I have used both of these libraries for about two years,
regularly switching from one to the other but using at least one continually.
Both are great, but they both require a tremendous amount of upkeep,
not to mention that your hand-written implementations might be wrong, which means
the very tests that are supposed to catch interesting cases are silently missing them!
On the bright side, both libraries have experimental #[derive(..)] macros (I wrote one for quickcheck),
but neither is anywhere near ready to use (as of 2025).
Instead of simply racing to do something similar, this crate follows a different strategy entirely:
when designing/deriving a generator for an arbitrary type,
first, iterate over the edge cases/corner cases of that type (since that's where 90% of bugs live),
then run exhaustive breadth-first search on the AST of the type.
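To sketch the breadth-first part of that idea (an illustrative toy, not this crate's internals): enumerate values level by level, by constructor count, so that every value of size n is visited before any value of size n + 1.

```rust
// Toy breadth-first enumeration over a small recursive type, in increasing
// constructor-count order. Illustrative only; not this crate's implementation.
#[derive(Clone, Debug)]
enum Expr {
    Zero,
    Succ(Box<Expr>),
    Pair(Box<Expr>, Box<Expr>),
}

fn enumerate(max_size: usize) -> Vec<Expr> {
    // by_size[n] holds every Expr built from exactly n constructors.
    let mut by_size: Vec<Vec<Expr>> = vec![Vec::new(); max_size + 1];
    let mut all = Vec::new();
    for n in 1..=max_size {
        let mut level = Vec::new();
        if n == 1 {
            level.push(Expr::Zero);
        }
        // Succ wraps any value of size n - 1.
        for e in &by_size[n - 1] {
            level.push(Expr::Succ(Box::new(e.clone())));
        }
        // Pair combines values of sizes i and n - 1 - i.
        for i in 1..n.saturating_sub(1) {
            for a in &by_size[i] {
                for b in &by_size[n - 1 - i] {
                    level.push(Expr::Pair(Box::new(a.clone()), Box::new(b.clone())));
                }
            }
        }
        all.extend(level.iter().cloned());
        by_size[n] = level;
    }
    all
}

fn main() {
    // A (deliberately false) property: no value has two nested Succ constructors.
    let holds = |e: &Expr| !matches!(e, Expr::Succ(inner) if matches!(**inner, Expr::Succ(_)));
    // Because enumeration is in increasing size order, the first failure found
    // is already a minimal counterexample: Succ(Succ(Zero)), three constructors.
    let minimal = enumerate(6).into_iter().find(|e| !holds(e));
    println!("{minimal:?}");
}
```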
Why exhaustive breadth-first search?
First, shrinking is free: the first case is the minimal case,
since we're searching in increasing order. (This is also, as far as I can tell,
the fastest that any true shrinking algorithm can be, for the same reason that
checking if an element is in an arbitrary n-element list requires O(n) time:
it could be anywhere, and you just have to check.)
Second, deterministic search can be very fast, notably avoiding pseudorandom key splitting.
Third, we tackle small inputs first, increasing throughput.
And fourth, progress means something: if you've run 1,000,000 tests, you know that no input earlier in the enumeration than case 1,000,001 can ever fail
(and the edge cases give you pretty good confidence outside that range as well!).