WTinyLFU: Evict main_cache_victim when victim from window cache wins #297
manitofigh wants to merge 1 commit into 1a1a11a:develop from
Conversation
Summary of Changes
Hello @manitofigh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request addresses a subtle oversight in the WTinyLFU eviction policy by ensuring that the evicted flag is set when the window-cache victim wins the frequency comparison and the main-cache victim is evicted.
|
Code Review
This pull request correctly fixes a bug in the WTinyLFU_evict function. When a victim from the window cache has a higher frequency than a victim from the main cache, an eviction from the main cache occurs and the window victim is promoted. Previously, the evicted flag was not set in this scenario, which could lead to an infinite loop or multiple evictions. By adding evicted = true;, this change ensures the eviction loop terminates correctly. The fix is sound and improves the correctness of the W-TinyLFU implementation.

While reviewing this function, I noticed another potential issue. In the case where the main cache has enough space to admit an object from the window cache (lines 278-290), the object is moved, but no space is freed in the overall cache system, and the evicted flag is not set. Since WTinyLFU_evict is called in a loop by cache_get_base to free up space, this could lead to an infinite loop. I recommend investigating this as a potential critical bug in a follow-up.
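To make the control flow concrete, here is a minimal, self-contained sketch, not the actual libCacheSim source: all names (wtinylfu_evict_once, window_freq, main_freq, main_cache_free, obj_size) are hypothetical placeholders. It only illustrates how the evicted flag drives the caller's eviction loop, and why both eviction branches must report that they freed space.

```c
/*
 * Hedged sketch only -- NOT the libCacheSim implementation.  Helper names and
 * bookkeeping are invented for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

static int  window_freq     = 5;   /* TinyLFU estimate for the window-cache victim */
static int  main_freq       = 2;   /* TinyLFU estimate for the main-cache victim   */
static long occupied        = 100; /* bytes currently used by the whole cache      */
static long main_cache_free = 0;   /* spare bytes inside the main cache            */
static const long obj_size  = 10;  /* size of the object being moved or evicted    */

/* One eviction attempt; returns true iff an object actually left the cache. */
static bool wtinylfu_evict_once(void) {
    bool evicted = false;
    if (main_cache_free >= obj_size) {
        /* Main cache can admit the window victim without evicting anything:
         * the object only moves between segments, total occupancy does not
         * shrink, and evicted stays false -- the follow-up concern raised in
         * the review above. */
        main_cache_free -= obj_size;
    } else if (window_freq > main_freq) {
        /* Window victim wins the TinyLFU comparison: the main-cache victim
         * is evicted and the window victim takes its place. */
        occupied -= obj_size;  /* main-cache victim leaves the cache */
        evicted = true;        /* <-- the one-line fix this PR adds  */
    } else {
        /* Main-cache victim wins: the window victim itself is evicted. */
        occupied -= obj_size;
        evicted = true;
    }
    return evicted;
}

int main(void) {
    const long need = 80; /* occupancy target before the new object can be admitted */
    /* cache_get_base-style loop: keep evicting until enough space is free.
     * If evicted were left false on the window-victim-wins branch, the
     * caller could not tell progress from no progress. */
    while (occupied > need) {
        if (!wtinylfu_evict_once()) {
            fprintf(stderr, "eviction made no progress\n");
            return 1;
        }
    }
    printf("occupied = %ld bytes\n", occupied);
    return 0;
}
```

With the flag set on both eviction branches, the caller can distinguish "freed space" from "only moved an object", which is what the added evicted = true; restores in the window-victim-wins case.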
|
Can you evaluate on more traces (https://ftp.pdl.cmu.edu/pub/datasets/twemcacheWorkload/.priv/)? We can provide a testbed if needed. |
|
Sure, can you please share what is necessary (creds/testbed)? Please also let me know if you have any specific config (cache size, obj size, req format, etc.) in mind. |
|
Can you join https://cloudlab.us, project cs2460? |
|
For some reason, upon requesting to join, it gives the error "No such project. Did you spell it properly?" |
|
Sorry, it should be cs2640. |
|
Thanks for the approval; for some reason, however, the website is still showing "Thank you! Stay tuned for email notifying you that your account has been activated."
when I attempt to log in. Still waiting on them to activate my account. |
|
It should not happen. Try again?
Juncheng
|
|
I also tried clearing the cache; still the same. I saw the approval on your end, though this seems to be on CloudLab's end. |
From #162.
I tested some pre/post miss ratio comparisons (txt/vscsi/oracle):
The miss ratios are identical for txt (0.4373 vs 0.4373) and for vscsi.
For oracle, develop gives 0.8246 and this branch gives 0.8233, a small improvement of 0.0013.