
Investigate whether LFU can achieve better performance than LRU caches #4073

Open
LifeIsStrange opened this issue Aug 27, 2020 · 2 comments

LifeIsStrange commented Aug 27, 2020

WebRender has an LRU cache implementation here: https://github.com/servo/webrender/blob/1175acad2d4f49fa712e105c84149ac7f394261d/webrender/src/lru_cache.rs
It is used for the texture cache and for the GPU cache, both of which should be quite performance critical.

In theory there is a better kind of cache: the LFU.
While it is better in theory, in practice implementations have historically had worse algorithmic complexity. But there is a paper from 2010 that shows how to implement an LFU cache in O(1), effectively achieving the same complexity as LRU while doing smarter, less harmful evictions.
This blog explains it better than I could:
https://arpitbhayani.me/blogs/lfu
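
To make the idea concrete, here is a minimal sketch of the O(1) frequency-bucket scheme in Rust (my own naming and simplifications, not WebRender code; the paper uses an intrusive doubly linked list of frequency nodes, which also gives LRU tie-breaking inside a bucket):

```rust
use std::collections::{HashMap, HashSet};
use std::hash::Hash;

/// Minimal LFU cache sketch: every key has a use count, and keys are grouped
/// into buckets by that count so the least-frequently-used bucket is found
/// without scanning all entries.
pub struct LfuCache<K: Eq + Hash + Clone, V> {
    capacity: usize,
    /// key -> (value, use count)
    entries: HashMap<K, (V, u64)>,
    /// use count -> keys currently at that count
    buckets: HashMap<u64, HashSet<K>>,
    /// smallest use count that currently has at least one key
    min_freq: u64,
}

impl<K: Eq + Hash + Clone, V> LfuCache<K, V> {
    pub fn new(capacity: usize) -> Self {
        LfuCache { capacity, entries: HashMap::new(), buckets: HashMap::new(), min_freq: 0 }
    }

    /// Move a key from its current frequency bucket to the next one.
    fn bump(&mut self, key: &K) {
        let freq = self.entries[key].1;
        let bucket = self.buckets.get_mut(&freq).unwrap();
        bucket.remove(key);
        if bucket.is_empty() {
            self.buckets.remove(&freq);
            if self.min_freq == freq {
                self.min_freq = freq + 1;
            }
        }
        self.buckets.entry(freq + 1).or_default().insert(key.clone());
        self.entries.get_mut(key).unwrap().1 = freq + 1;
    }

    pub fn get(&mut self, key: &K) -> Option<&V> {
        if self.entries.contains_key(key) {
            self.bump(key);
            self.entries.get(key).map(|(v, _)| v)
        } else {
            None
        }
    }

    pub fn insert(&mut self, key: K, value: V) {
        if self.capacity == 0 {
            return;
        }
        if self.entries.contains_key(&key) {
            self.entries.get_mut(&key).unwrap().0 = value;
            self.bump(&key);
            return;
        }
        if self.entries.len() >= self.capacity {
            // Evict an arbitrary key from the least-frequent bucket
            // (a real implementation would evict the least recent of the ties).
            let victim = self
                .buckets
                .get(&self.min_freq)
                .and_then(|b| b.iter().next().cloned());
            if let Some(victim) = victim {
                let bucket = self.buckets.get_mut(&self.min_freq).unwrap();
                bucket.remove(&victim);
                if bucket.is_empty() {
                    self.buckets.remove(&self.min_freq);
                }
                self.entries.remove(&victim);
            }
        }
        // New entries start at frequency 1, which is by definition the minimum.
        self.entries.insert(key.clone(), (value, 1));
        self.buckets.entry(1).or_default().insert(key);
        self.min_freq = 1;
    }
}
```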

So I believe this might be interesting low-hanging fruit that could increase performance on average. This optimization idea is also general and could benefit other components inside Gecko (beyond WebRender).

Prior art in Rust: https://github.com/bodagovsky/LFUcache

Edit:
I thought about it a little more, and it is actually quite obvious:
Recency and frequency are both useful, mostly orthogonal pieces of information.
I believe (but you are the experts) that both can be useful for a GPU cache.
It might be worth investigating whether an LFU would give more performance than an LRU.
But ultimately, we could have the best of both worlds:
"LFU is sometimes combined with a LRU and called LRFU."
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=970573&isnumber=20937

See also the LRFU section on Wikipedia: https://en.m.wikipedia.org/wiki/Cache_replacement_policies
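
As I understand the LRFU paper, each cached entry only needs one float and one timestamp, because the weighting function F(x) = (1/2)^(λx) satisfies F(x + y) = F(x)·F(y) and can therefore be applied incrementally. A rough sketch of that bookkeeping (my own naming, just to show the shape of it):

```rust
/// Per-entry state for LRFU's "combined recency and frequency" (CRF) value.
/// The entry with the smallest current CRF is the eviction victim.
/// lambda = 0 degenerates to LFU (every past reference counts equally),
/// lambda = 1 degenerates to LRU (only the latest reference matters).
struct CrfEntry {
    crf: f64,
    last_ref: f64, // timestamp of the last reference (e.g. frame number)
}

/// F(x) = (1/2)^(lambda * x): how much a reference that is `age` old still counts.
fn weigh(lambda: f64, age: f64) -> f64 {
    0.5f64.powf(lambda * age)
}

/// On a cache hit: decay the old value to "now" and add 1 for this reference.
fn on_reference(entry: &mut CrfEntry, now: f64, lambda: f64) {
    entry.crf = 1.0 + weigh(lambda, now - entry.last_ref) * entry.crf;
    entry.last_ref = now;
}

/// CRF as seen at eviction time, decayed to "now" so entries are comparable.
fn current_crf(entry: &CrfEntry, now: f64, lambda: f64) -> f64 {
    weigh(lambda, now - entry.last_ref) * entry.crf
}
```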

LifeIsStrange (Author) commented Dec 7, 2021

@kvark friendly ping.
I'm pretty sure that a few years ago I saw major performance improvements from optimizations by @gw3583 that reduced pressure on the various caches.

BTW, the de facto standard Java caching library (Caffeine) uses a hybrid called Window TinyLFU, which claims near-optimal hit rates and is battle tested:
https://github.com/ben-manes/caffeine/wiki/Efficiency#window-tinylfu
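
The admission idea behind TinyLFU, as I understand it, is to track approximate per-key frequencies in a compact sketch and, when the cache is full, only admit a new entry if it is estimated to be hotter than the entry it would displace. A very rough sketch of that idea (my own simplification, not Caffeine's actual implementation):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Approximate per-key use counts, count-min style: each key maps to several
/// counters and its estimate is the minimum of them (so collisions only
/// over-count, never under-count).
struct FrequencySketch {
    counters: Vec<u8>,
}

impl FrequencySketch {
    fn new(size: usize) -> Self {
        FrequencySketch { counters: vec![0; size] }
    }

    fn index<K: Hash>(&self, key: &K, seed: u64) -> usize {
        let mut h = DefaultHasher::new();
        seed.hash(&mut h);
        key.hash(&mut h);
        (h.finish() as usize) % self.counters.len()
    }

    /// Record one use of `key`.
    fn increment<K: Hash>(&mut self, key: &K) {
        for seed in 0u64..4 {
            let i = self.index(key, seed);
            self.counters[i] = self.counters[i].saturating_add(1);
        }
    }

    /// Estimated use count of `key`.
    fn estimate<K: Hash>(&self, key: &K) -> u8 {
        (0u64..4).map(|seed| self.counters[self.index(key, seed)]).min().unwrap()
    }
}

/// Admission policy: keep the current victim unless the candidate is hotter.
fn admit<K: Hash>(sketch: &FrequencySketch, candidate: &K, victim: &K) -> bool {
    sketch.estimate(candidate) > sketch.estimate(victim)
}
```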

nical (Contributor) commented Dec 7, 2021

Hi, thanks for the links.
The main reason there has been no movement on this yet is that our texture cache (while not perfect) is in rather good shape and other areas need more attention. It would be an interesting experiment nonetheless.
