Switch performance tests from BenchmarkTools to Chairmarks to reduce runtime from 13 minutes to 30 seconds. #23
Both BenchmarkTools and Chairmarks are adequately reproducible and precise; switching from BenchmarkTools to Chairmarks does not change the reported runtime ratios by much.
BenchmarkTools (master) reproducibility
I ran the existing benchmark suite twice and, for each measured ratio, plotted a point whose x position is the ratio measured on the first trial and whose y position is the ratio from the second trial. If the benchmarks were perfectly reproducible, all points would lie on the line y = x, which is plotted as well.
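The plot described above can be sketched roughly as follows. This is an illustrative reconstruction, not the PR's actual plotting code: the ratio values are made up, and Plots.jl is an assumed choice since the PR does not say which plotting package was used.

```julia
using Plots  # assumed plotting package; the PR does not name one

# hypothetical runtime ratios (Inflate time / CodecZlib time)
# measured on two independent runs of the benchmark suite
trial1 = [1.8, 2.3, 0.9, 3.1, 1.2]
trial2 = [1.7, 2.4, 1.0, 3.0, 1.3]

scatter(trial1, trial2;
    xlabel = "ratio, trial 1",
    ylabel = "ratio, trial 2",
    legend = false)
plot!(x -> x, 0, 4)  # the y = x line: perfect reproducibility
```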
We can see pretty good reproducibility between trials. That's good.
Chairmarks (pr) reproducibility
Same as above, but using this PR's benchmarks, which use Chairmarks instead of BenchmarkTools.
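For reference, the switch is mostly a matter of swapping macros. A minimal sketch of equivalent measurements in the two packages, using `sort(rand(1000))` as a stand-in workload rather than Inflate's actual benchmark suite:

```julia
using BenchmarkTools, Chairmarks

# BenchmarkTools: tunes parameters and collects many samples,
# typically taking seconds per benchmark
bt = @benchmark sort(x) setup = (x = rand(1000))

# Chairmarks: a comparable measurement in roughly 0.1 s by default;
# `rand(1000)` is the setup, `sort` is the function under test
cm = @b rand(1000) sort

minimum(bt.times)  # BenchmarkTools reports times in nanoseconds
cm.time            # Chairmarks reports time in seconds
```

The per-benchmark runtime difference is what turns the 13-minute suite into a 30-second one.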
There is still pretty good reproducibility between trials. That's good.
Absolute difference
When we plot the ratios reported by Chairmarks on the x axis and those reported by BenchmarkTools on the y axis, we see a slightly lower cross-methodology correlation:
For cases where BenchmarkTools currently reports that Inflate is substantially slower than CodecZlib, Chairmarks reports that it is even slower than BenchmarkTools indicates. This is due to methodological differences, and it is unclear to me which benchmarking package gives a more "correct" answer. Personally, all else equal, I think it makes sense to use the more conservative methodology that reports Inflate as worse: that way, once we get ratios below one (i.e. Inflate is faster than CodecZlib), we can be more confident that the package is indeed faster.
The main reason for this PR is to make it faster to iterate on candidate performance improvements and see whether they actually speed things up.