Shrink all tuple sizes #744
Open
This shrinks tuples of all arities from 2 to 22, inclusive. It implements issue #738.
Today I learned you do automatic code generation for various bits that are symmetric across many arities. That seems like a natural and good thing to use for tuple shrinking.
I added a few test cases. With the current tuple shrinking, `eqvTupleShrinks(xs, ys)` could just return `xs == ys`. However, I think tuple shrinking might be better if we interleave rather than append the various single-component streams, and I'm pretty sure the `.sorted` equivalence holds if we make that change. That is, I'm deliberately testing something weaker than what I know to be true, in order to future-proof.

Would you like me to make the interleaving change? The advantage of doing e.g. a snake draft across all non-empty single-component shrink streams is that when you shrink component k of an n-tuple, in a situation where no shrunk values of the first k-1 components can provoke a test failure, the number of elements you need to look at before trying to shrink component k is k-1 rather than $\sum_{i=1}^{k-1} s_i$, where $s_i$ is the number of shrinks of component $i$. Stated another way: relatively speaking, it prioritizes earlier shrinks of later components over later shrinks of earlier components.
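To make that concrete, here is a minimal sketch of one simple variant of the idea (plain round-robin order rather than a direction-reversing snake order); `roundRobin` is a hypothetical helper name, not part of ScalaCheck's API:

```scala
object InterleaveSketch {
  // Round-robin interleaving: take one element from each non-empty
  // stream per pass, dropping streams as they run out. With this
  // scheme the first shrink of component k appears after at most
  // k - 1 elements, no matter how many shrinks earlier components have.
  def roundRobin[A](streams: List[LazyList[A]]): LazyList[A] = {
    val nonEmpty = streams.filter(_.nonEmpty)
    if (nonEmpty.isEmpty) LazyList.empty
    else LazyList.from(nonEmpty.map(_.head)) #::: roundRobin(nonEmpty.map(_.tail))
  }
}
```

For example, `roundRobin(List(LazyList(1, 2, 3), LazyList(10), LazyList(100, 200)))` yields `1, 10, 100, 2, 200, 3`.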
If you think there are advantages to prioritizing focusing on the first component, I'll be happy to hear about them. (One could also do a hybrid where you foldLeft a pairwise interleaving, such that the complete interleaving gives 2^-k of the attention to the k'th component.)
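A sketch of that hybrid, assuming the pairwise interleave simply alternates elements; I fold from the side that gives component k roughly a 2^-k share of the attention, and `pairwise`/`hybrid` are illustrative names of my own:

```scala
object HybridSketch {
  // Alternating pairwise interleave: x1, y1, x2, y2, ...
  // Leftover elements of the longer stream trail at the end.
  def pairwise[A](xs: LazyList[A], ys: LazyList[A]): LazyList[A] =
    xs match {
      case x #:: xt => x #:: pairwise(ys, xt)
      case _        => ys
    }

  // Fold the pairwise interleave across all component streams so that
  // component 1 gets ~1/2 of the attention, component 2 ~1/4, and so on.
  def hybrid[A](streams: List[LazyList[A]]): LazyList[A] =
    streams.foldRight(LazyList.empty[A])((s, acc) => pairwise(s, acc))
}
```

For instance, `hybrid(List(LazyList(1, 2, 3, 4), LazyList(10, 20), LazyList(100)))` yields `1, 10, 2, 100, 3, 20, 4`: half the early positions go to the first component's shrinks.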