How do Hedgehog and Hypothesis differ in their shrinking strategies?
The article uses the words "integrated" vs. "internal" shrinking.
> the raison d’être of internal shrinking: it doesn’t matter that we cannot shrink the two generators independently, because we are not shrinking generators! Instead, we just shrink the samples that feed into those generators.
Besides that, it seems like falsify has many of the same features, such as choice of ranges and distributions.
This is the key sentence:
> The key insight of the Hypothesis library is that instead of shrinking generated values, we instead shrink the samples produced by the PRNG.
Hedgehog loses shrink information when you do a monadic bind (Gen a -> (a -> Gen b) -> Gen b). Hypothesis parses values out of the stream of data generated by the PRNG, so when it "binds", you are still just consuming off that stream of random numbers, and you can shrink the stream to shrink the generated values.
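As a toy sketch of that idea (illustrative only; these are not the actual types of Hypothesis or falsify): a generator is a function that consumes a prefix of the raw sample stream, and bind simply hands the unconsumed remainder to the next generator, so values carry no shrink information and binding cannot lose any.

    import Data.Word (Word64)

    -- A generator consumes a prefix of the sample stream and
    -- returns a value together with the unconsumed remainder.
    newtype Gen a = Gen { runGen :: [Word64] -> (a, [Word64]) }

    instance Functor Gen where
      fmap f (Gen g) = Gen $ \s -> let (x, s') = g s in (f x, s')

    instance Applicative Gen where
      pure x = Gen $ \s -> (x, s)
      Gen gf <*> Gen gx = Gen $ \s ->
        let (f, s')  = gf s
            (x, s'') = gx s'
        in (f x, s'')

    instance Monad Gen where
      -- Bind keeps consuming the same stream.
      Gen g >>= k = Gen $ \s -> let (x, s') = g s in runGen (k x) s'

    -- Consume one raw sample; an exhausted stream reads as zeros,
    -- which by convention is the "simplest" sample.
    prim :: Gen Word64
    prim = Gen $ \s -> case s of
      []     -> (0, [])
      (x:xs) -> (x, xs)

To shrink a counter-example, you rerun the very same generator on a "smaller" stream, e.g. one with some samples replaced by smaller numbers.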
Here is a talk that applies the Hypothesis idea to test C++: https://www.youtube.com/watch?v=C6joICx1XMY . Discussion of PBT implementation approaches begins at 6:30.
I've found in practice that shrinking to get the "smallest amount of detail" is often unhelpful.
Suppose I have a function which takes four string parameters, and I have a bug which means it crashes if the third is empty.
I'd rather see this in the failure report:
("ldiuhuh!skdfh", "nd#lkgjdflkgdfg", "", "dc9ofugdl ifugidlugfoidufog")
than this:
("", "", "", "")
> I'd rather see this in the failure report:
> ("ldiuhuh!skdfh", "nd#lkgjdflkgdfg", "", "dc9ofugdl ifugidlugfoidufog")
I would prefer LazySmallcheck's result, which would be the following:
    (_, _, "", _)
Where `_` indicates that part of the input wasn't evaluated.
(Author of falsify here.) You are absolutely correct that the empty string isn't always the best counter-example. The goal of shrinking is to shrink to the _simplest_ possible value (this is true for all approaches to shrinking). What constitutes "simple" is very much domain specific. It would certainly be possible to write a generator that would shrink to, say, "foo", as the canonical "simplest" example of a simple string. Indeed, since we are working in a lazy language, you could (with a bit of effort) shrink to `undefined` if the other arguments are not used at all.
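To make that concrete with the toy stream-consuming `Gen` sketched in an earlier comment (hypothetical helper names, not falsify's API): shrinking drives samples toward zero, so whatever a generator yields for sample 0 becomes its canonical "simplest" value, and putting "foo" there makes the shrinker report "foo".

    -- Pick an element by consuming one sample. Shrinking pushes
    -- samples toward 0, so the head of the list is the value a
    -- counter-example ultimately shrinks to.
    elemG :: [a] -> Gen a
    elemG xs = do
      w <- prim
      pure (xs !! (fromIntegral w `mod` length xs))

    -- Shrinks to "foo" rather than to "":
    simpleString :: Gen String
    simpleString = elemG ["foo", "quux", "wibble"]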
I agree it can be domain-specific, but I think it's more common than not that empty containers, and the number zero, are corner cases rather than typical values.
So I think it would be a decent quality-of-life improvement to make generators of the sort you suggest easily available, and have the tutorial docs use them from the start.
A minimal reproducing example cannot guarantee that you'll correctly diagnose a bug just by looking at the example (because multiple potential bugs could cause the same example to fail) but it can guarantee that when you step through the code to understand what's happening, you won't have to deal with huge amounts of irrelevant data.
Maybe an alternative shrinking procedure could directly minimize the number of instructions that need to be executed to hit a failure...
Really? Your examples seem to show the opposite. I am left immediately thinking, "hm, is it failing on a '!', some sort of shell issue? Or is it truncating the string on '#', maybe? Or wait, there's a space in the fourth one, which looks pretty dangerous and is noticeably longer, so there could be a length issue..." As opposed to the shrunk version, where I immediately think, "uh oh: one of them is not handling an empty input correctly." Also, way easier to read, copy-paste, and type.
> As opposed to the shrunk version where I immediately think, "uh oh: one of them is not handling an empty input correctly."
I agree that non-empty strings are worse, but unfortunately `("", "", "", "")` wouldn't only make me think of empty strings; e.g. I'd wonder whether duplicate/equal values are the problem.
Their point is that in the unshrunk example the “special” value stands out.
I guess if we were even more clever we could get to something more like (…, …, "", …).
The Hypothesis explain phase [1][2] does this!
    fails_on_empty_third_arg(
        a = "", # or any other generated value
        b = "", # or any other generated value
        c = "",
        d = "", # or any other generated value
    )
[1] https://hypothesis.readthedocs.io/en/latest/reference/api.ht...
[2] https://github.com/HypothesisWorks/hypothesis/pull/3555
The special value doesn't stand out, though. All three guesses I gave were what I thought while skimming his comment, before my brain caught up to his caveat about an empty third argument. The empty string looked like it was by far the most harmless part... Whereas if they are all empty strings, then by definition the empty string stands out as the most suspicious possible part.
This is fascinating!
If I understand correctly, they approximate the language of inputs of a function to discover minimal (in some sense, like "shortest description length") inputs that violate relations between the function's inputs and outputs.
I care about the edge between "this value fails, one value over succeeds". I wish shrinking were fast enough to tell me if there are multiple edges between those values.
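A brute-force toy illustration of that wish (not a feature of falsify or Hypothesis): for a cheap property over a small integer domain, you can simply scan for every place the verdict flips.

    -- All pass/fail boundaries in [lo, hi]: pairs (n, n+1) where
    -- the property's verdict flips.
    edges :: (Int -> Bool) -> Int -> Int -> [(Int, Int)]
    edges passes lo hi =
      [ (n, n + 1) | n <- [lo .. hi - 1], passes n /= passes (n + 1) ]

    -- edges (\n -> n `mod` 7 /= 0) 0 20 finds a flip on each side
    -- of every multiple of 7: multiple edges, not just one.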
I’m honestly completely failing to understand the basic idea here. What does this look like for generating and shrinking random strings?
One straightforward approach would be (sketched in code just after this list):
- Generate a random number N for the size (maybe restricted to some Range)
- Generate N `Char` values, by using a random number for each code point.
- Combine those Chars into a string
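In terms of the toy stream-consuming `Gen` sketched further up the thread (again illustrative, not falsify's real API), those three steps are one monadic pipeline:

    import Control.Monad (replicateM)

    -- One sample per code point (restricted to ASCII for brevity).
    char :: Gen Char
    char = fmap (\w -> toEnum (fromIntegral (w `mod` 128))) prim

    -- Step 1: a size n; step 2: n Chars; step 3: combine (String
    -- is [Char], so replicateM already does the combining).
    string :: Int -> Gen String
    string maxLen = do
      n <- fmap (\w -> fromIntegral (w `mod` fromIntegral (maxLen + 1))) prim
      replicateM n char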
falsify runs a generator by applying it to an infinite binary tree, with random numbers in the nodes. A generator can either consume a single number (taken from the root node of a tree), or it can run two other generators (one gets run on the left child, the other gets run on the right). Hence the above generator would use the value in the left child as N, then run the "generate N Chars" generator on the right child. The latter generator would run a Char generator on its left child, and an 'N-1 Chars' generator on its right child; and so on.
To shrink, we just run the generator on a tree with smaller numbers. In this case, a smaller number in the left child will cause fewer Chars to be generated, and smaller numbers in the right subtree will cause lower code points to be generated. falsify's tree representation also has a special case for the smallest tree (which returns 0 for its root, and itself for each child).
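A minimal model of that tree-shaped scheme (toy types and names, not falsify's actual implementation):

    import Data.Word (Word64)

    -- An infinite binary tree with a random sample in every node.
    data Tree = Node Word64 Tree Tree

    newtype GenT a = GenT { runGenT :: Tree -> a }

    instance Functor GenT where
      fmap f (GenT g) = GenT (f . g)

    -- Consume the single sample at the root.
    primT :: GenT Word64
    primT = GenT $ \(Node x _ _) -> x

    -- Run one generator on the left child and another on the
    -- right, so the two can be shrunk independently.
    split :: GenT a -> GenT b -> GenT (a, b)
    split (GenT l) (GenT r) = GenT $ \(Node _ lt rt) -> (l lt, r rt)

    charT :: GenT Char
    charT = fmap (\w -> toEnum (fromIntegral (w `mod` 128))) primT

    -- One Char from the left child, the remaining n-1 from the right.
    charsT :: Int -> GenT String
    charsT 0 = GenT $ \_ -> ""
    charsT n = fmap (uncurry (:)) (split charT (charsT (n - 1)))

    -- n from the left child, the n Chars from the right child.
    stringT :: Int -> GenT String
    stringT maxLen = GenT $ \(Node _ lt rt) ->
      let n = fromIntegral (runGenT primT lt `mod` fromIntegral (maxLen + 1))
      in runGenT (charsT n) rt

    -- The smallest tree: 0 at the root, itself for each child.
    -- Shrinking reruns the generator on a tree where subtrees have
    -- been replaced by smaller ones, ultimately by minimalTree.
    minimalTree :: Tree
    minimalTree = let t = Node 0 t t in t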