Optimize compression by avoiding unpredictable branches #4144
Conversation
Avoid an unpredictable branch: use a conditional move to generate an address that is guaranteed to be safe, and compare unconditionally. Instead of

```
if (idx < limit && x[idx] == val)   // mispredicted idx < limit branch
```

do

```
addr = cmov(safe, x + idx);
if (*addr == val && idx < limit)    // almost always false, so well predicted
```

Using microbenchmarks from https://github.com/google/fleetbench, I get about a 10% speed-up:

```
name                                                       old cpu/op   new cpu/op   delta
BM_ZSTD_COMPRESS_Fleet/compression_level:-7/window_log:15  1.46ns ± 3%  1.31ns ± 7%  -9.88%  (p=0.000 n=35+38)
BM_ZSTD_COMPRESS_Fleet/compression_level:-7/window_log:16  1.41ns ± 3%  1.28ns ± 3%  -9.56%  (p=0.000 n=36+39)
BM_ZSTD_COMPRESS_Fleet/compression_level:-5/window_log:15  1.61ns ± 1%  1.43ns ± 3%  -10.70% (p=0.000 n=30+39)
BM_ZSTD_COMPRESS_Fleet/compression_level:-5/window_log:16  1.54ns ± 2%  1.39ns ± 3%  -9.21%  (p=0.000 n=37+39)
BM_ZSTD_COMPRESS_Fleet/compression_level:-3/window_log:15  1.82ns ± 2%  1.61ns ± 3%  -11.31% (p=0.000 n=37+40)
BM_ZSTD_COMPRESS_Fleet/compression_level:-3/window_log:16  1.73ns ± 3%  1.56ns ± 3%  -9.50%  (p=0.000 n=38+39)
BM_ZSTD_COMPRESS_Fleet/compression_level:-1/window_log:15  2.12ns ± 2%  1.79ns ± 3%  -15.55% (p=0.000 n=34+39)
BM_ZSTD_COMPRESS_Fleet/compression_level:-1/window_log:16  1.99ns ± 3%  1.72ns ± 3%  -13.70% (p=0.000 n=38+38)
BM_ZSTD_COMPRESS_Fleet/compression_level:0/window_log:15   3.22ns ± 3%  2.94ns ± 3%  -8.67%  (p=0.000 n=38+40)
BM_ZSTD_COMPRESS_Fleet/compression_level:0/window_log:16   3.19ns ± 4%  2.86ns ± 4%  -10.55% (p=0.000 n=40+38)
BM_ZSTD_COMPRESS_Fleet/compression_level:1/window_log:15   2.60ns ± 3%  2.22ns ± 3%  -14.53% (p=0.000 n=40+39)
BM_ZSTD_COMPRESS_Fleet/compression_level:1/window_log:16   2.46ns ± 3%  2.13ns ± 2%  -13.67% (p=0.000 n=39+36)
BM_ZSTD_COMPRESS_Fleet/compression_level:2/window_log:15   2.69ns ± 3%  2.46ns ± 3%  -8.63%  (p=0.000 n=37+39)
BM_ZSTD_COMPRESS_Fleet/compression_level:2/window_log:16   2.63ns ± 3%  2.36ns ± 3%  -10.47% (p=0.000 n=40+40)
BM_ZSTD_COMPRESS_Fleet/compression_level:3/window_log:15   3.20ns ± 2%  2.95ns ± 3%  -7.94%  (p=0.000 n=35+40)
BM_ZSTD_COMPRESS_Fleet/compression_level:3/window_log:16   3.20ns ± 4%  2.87ns ± 4%  -10.33% (p=0.000 n=40+40)
```

I've also measured the impact on internal workloads and saw a similar ~10% improvement in performance, measured by CPU usage per byte of data.
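To make the transformation concrete, here is a minimal standalone C sketch of the idea; the names (`safe_sentinel`, `read32`, `match4_found`) are illustrative, not the actual zstd identifiers:

```c
#include <string.h>

/* A few constant bytes with low probability of matching real data.
 * When the index is out of range, we load from here instead,
 * so the dereference is always safe. */
static const unsigned char safe_sentinel[4] = { 0x12, 0x34, 0x56, 0x78 };

/* Safe unaligned 32-bit read. */
static unsigned read32(const unsigned char* p)
{
    unsigned v;
    memcpy(&v, p, sizeof v);
    return v;
}

/* Branchless-friendly match test: always dereference a valid address,
 * then check the range afterwards. */
static int match4_found(const unsigned char* current,
                        const unsigned char* base,
                        unsigned idx, unsigned limit)
{
    /* The compiler can lower this ternary to a conditional move. */
    const unsigned char* addr = (idx < limit) ? base + idx : safe_sentinel;
    /* The comparison is almost always false, so the `idx < limit`
     * branch behind && is rarely reached and predicts well. */
    return (read32(current) == read32(addr)) && (idx < limit);
}
```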
Note that we observed the most improvement on window logs 15 and 16. Window logs from 18 upward see little to no improvement, which is reasonable given the hash table sizes. And the 10% improvement figure comes from running in Google production for several months.
I had a look into this, and unfortunately observed a performance regression with this patch on the test system: Core i7-9700k (noturbo), Ubuntu 24.04, gcc 13.2.
It might be one of those cases where the performance difference is more impacted by random variations in instruction alignment, making it difficult to determine whether an observed difference is actually related to the patch or due to a random compiled-binary layout. To mitigate the impact of any specific compiler version, I'll try to collect the performance differences across a larger set of compilers and versions.
Interesting. Where do I find silesia.tar to reproduce? I'm somewhat skeptical about instruction alignment here, since testing each location in isolation still yields a (smaller) performance improvement, while replacing the cmov with a branch causes a performance regression, so the probability of getting better alignment in all cases seems small. But since we are talking about predictability, it obviously depends a lot on the CPU and the dataset. As an aside, I've found that zstd already does something similar (an unconditional read instead of a branch) in other cases: https://github.com/facebook/zstd/blob/dev/lib/compress/zstd_fast.c#L779
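For reference, the "unconditional read" pattern alluded to looks roughly like this (a paraphrase under hypothetical naming, not the actual code at that line):

```c
#include <stddef.h>
#include <string.h>

/* Clamp the index so the load is always legal, read unconditionally,
 * and report validity separately instead of branching before the load. */
static unsigned load32_clamped(const unsigned char* base, size_t size,
                               size_t idx, int* valid)
{
    unsigned v;
    size_t safeIdx = (idx + sizeof v <= size) ? idx : 0;  /* often lowered to a cmov */
    memcpy(&v, base + safeIdx, sizeof v);                 /* unconditional read */
    *valid = (idx + sizeof v <= size);                    /* checked after the fact */
    return v;
}
```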
Also note gcc vs. clang: at Google we exclusively use clang.
Right; also, it's a rolling-release model, so it doesn't directly correspond to clang/llvm releases.
I have more results, using a larger set of compilers, on the same test system: Core i7-9700k (noturbo), Ubuntu 24.04, silesia.tar, level 1.
Unfortunately, this looks like a pretty consistent loss across all compilers, including clang. Not sure what could explain such a difference in observations.
On clang-17 I have anecdotal improvement after running some CLI and `perf stat` commands. TL;DR: some speed improvement and 1.5% fewer branch mispredictions.
I'm getting similar results on a Skylake (server/workstation variant) with clang 16.
OK, I presume that reducing the window size makes the algorithm less likely to find matches within range, thus making the targeted branch less predictable during the search. Indeed, if I enforce a smaller window size, I can now measure a benefit from this patch. It adds a layer of complexity, as it seems this patch is beneficial in "some" scenarios (small window sizes, notably on clang), but not in others.
Note: it seems to indicate that the compiler is unable to generate a cmov here.
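In portable C, the only way to ask for a conditional move is to hope the compiler lowers a ternary that way. A sketch of the address select in question (hypothetical name, mirroring the discussion above):

```c
/* Select between the real match address and a safe backup.
 * Compilers *can* lower this to cmp + cmov, but after inlining
 * into a hot loop they frequently turn it back into a conditional
 * branch, which is exactly what we are trying to avoid; nothing
 * in standard C lets us force the cmov. */
static const unsigned char* select_addr(unsigned idx, unsigned limit,
                                        const unsigned char* candidate,
                                        const unsigned char* backup)
{
    return (idx < limit) ? candidate : backup;
}
```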
Small follow-up: I tried moving the address selection into a separate, non-inlined function. And it worked: the non-inlined function reliably compiles into a cmov. There is still a little cost due to jumping in and out of the function, so the hand-crafted assembly is still faster, since it avoids that cost.
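The non-inlined workaround might look like this (a sketch; actual codegen depends on the compiler and flags):

```c
#if defined(__GNUC__) || defined(__clang__)
#  define NOINLINE __attribute__((noinline))
#else
#  define NOINLINE
#endif

/* Kept out of line, the select is compiled in isolation, so the
 * ternary tends to stay a cmov instead of being merged into the
 * caller's branches. The call/return overhead remains, which is
 * why hand-written assembly can still beat it. */
NOINLINE static const unsigned char*
select_addr_noinline(unsigned idx, unsigned limit,
                     const unsigned char* candidate,
                     const unsigned char* backup)
{
    return (idx < limit) ? candidate : backup;
}
```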
Adding ARM assembly shouldn't be too hard, if we are OK with having assembly.
In general, we try to avoid assembly whenever it's not strictly necessary. That being said, sometimes it's indeed necessary.
After looking a bit more into it, I couldn't find a portable way to reliably make compilers generate a cmov without paying the function-call cost. So I guess we'll go the inline assembly route after all.
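A sketch of what the inline-assembly route could look like on x86-64 (GNU extended asm, with a portable fallback); this illustrates the approach, not necessarily the exact code that landed:

```c
static const unsigned char* select_addr_asm(unsigned idx, unsigned limit,
                                            const unsigned char* candidate,
                                            const unsigned char* backup)
{
#if defined(__x86_64__) && (defined(__GNUC__) || defined(__clang__))
    /* CF is set by the cmp iff idx < limit (unsigned), so cmovae
     * replaces candidate with backup only when idx is out of range. */
    __asm__ ("cmp %2, %1\n\t"
             "cmovae %3, %0"
             : "+r" (candidate)
             : "r" (idx), "r" (limit), "r" (backup)
             : "cc");
    return candidate;
#else
    return (idx < limit) ? candidate : backup;
#endif
}
```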
An interesting observation: if I replace the test `(*addr == val && idx < limit)` with `(*addr == val) & (idx < limit)` (note how the `&&` is replaced by a single `&`), that in itself is not a big deal: we could keep `&&`. But when I extract the test into its own function, returning a 0/1 boolean signal, then performance regresses. So far, I've been unable to find out why. I'll show in another PR why it's important; this is currently a blocker.
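A sketch of the extracted predicate (hypothetical names, with `read32` as in the earlier sketch): the bitwise `&` evaluates both comparisons, so no short-circuit branch is emitted and the function returns a clean 0/1 signal:

```c
#include <string.h>

static unsigned read32(const unsigned char* p)
{
    unsigned v;
    memcpy(&v, p, sizeof v);
    return v;
}

/* Both sub-tests are evaluated unconditionally and merged with `&`,
 * so the whole predicate stays branchless; the caller branches once
 * on the returned 0/1 value. */
static int match_found_branchless(const unsigned char* current,
                                  const unsigned char* addr,
                                  unsigned idx, unsigned limit)
{
    return (read32(current) == read32(addr)) & (idx < limit);
}
```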
We've tried it internally, and it works for us, with the same speedup for smaller blocks. Thanks a lot for picking this up and making it work for all cases the right way!
Thanks for the awesome patch!