Evict blocks based on size #1766
Conversation
This resolves this issue: ava-labs/hypersdk#303
snow/engine/snowman/transitive.go (outdated)

```diff
 	"github.com/ava-labs/avalanchego/utils/wrappers"
 )
 
-const nonVerifiedCacheSize = 128
+const (
+	nonVerifiedCacheSize = 128 * units.MiB
```
@StephenButtolph just to verify I understand the issue here:
These hypersdk blocks are beefy (> 1 MB!?) and are quickly verified.
With an item-based cache they are kept in memory until 128 other blocks arrive, which is (much) longer than the hypersdk needs to verify them.
A size-based cache would saturate faster and keep them in memory for a shorter time (still longer than needed to verify them).
Is this correct?
Correct. This cache specifically probably didn't have a problem... but some of the other caches (with thousands of blocks) were causing issues.
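To illustrate the difference discussed here, a minimal size-based LRU sketch (toy code, not avalanchego's actual `cache` package; all names are made up): a count-based LRU holds the last N entries regardless of their size, while a size-based one evicts as soon as the summed byte size exceeds a budget, so a burst of >1 MB blocks is dropped much sooner.

```go
package sketch

import "container/list"

// entry tracks a cached key and the byte size charged for it.
type entry struct {
	key  string
	size int
}

// sizedLRU is a toy size-based LRU: eviction triggers on total bytes,
// not on entry count.
type sizedLRU struct {
	maxBytes, curBytes int
	order              *list.List               // front = most recently used
	elements           map[string]*list.Element // key -> element in order
}

func newSizedLRU(maxBytes int) *sizedLRU {
	return &sizedLRU{
		maxBytes: maxBytes,
		order:    list.New(),
		elements: make(map[string]*list.Element),
	}
}

// Put inserts or refreshes key, then evicts from the LRU end until the
// summed size fits the byte budget.
func (c *sizedLRU) Put(key string, size int) {
	if elt, ok := c.elements[key]; ok {
		e := elt.Value.(*entry)
		c.curBytes += size - e.size
		e.size = size
		c.order.MoveToFront(elt)
	} else {
		c.elements[key] = c.order.PushFront(&entry{key: key, size: size})
		c.curBytes += size
	}
	for c.curBytes > c.maxBytes {
		oldest := c.order.Back()
		e := oldest.Value.(*entry)
		c.order.Remove(oldest)
		delete(c.elements, e.key)
		c.curBytes -= e.size
	}
}
```

With 128-entry count-based eviction, 128 blocks of ~2 MiB pin ~256 MiB until newer blocks push them out; with a 128 MiB byte budget the same stream starts evicting after ~64 blocks.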
vms/components/chain/state.go (outdated)

```go
			return len(bw.Bytes()) + 2*pointerOverhead
		},
	),
	bytesToIDCache: &cache.LRU[string, ids.ID]{Size: config.BytesToIDCacheSize},
```
While we keep this cache item-based, should we reduce its size? I don't expect it to make a big difference if we don't, but IIUC it'll hold entries much longer than expected now, given the other caches are size-based?
Great 👁️. I had forgotten that we had block bytes as a key in a cache.
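Since the key here is the raw block bytes, a sized variant of this cache would charge mostly for the key. A plausible size function following the pattern of `cachedBlockSize` below (the name `cachedBlockBytesSize` is hypothetical; `ids.ID` and `ids.IDLen` are from avalanchego's `ids` package):

```go
// Hypothetical size function for the bytes-keyed cache: the key (the raw
// block bytes) dominates the footprint, the value is a fixed-width ID.
func cachedBlockBytesSize(blockBytes string, _ ids.ID) int {
	return len(blockBytes) + ids.IDLen
}
```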
just a few questions to make sure I understand the change
Just one clarifying question
```go
)

func cachedBlockSize(_ ids.ID, bw *BlockWrapper) int {
	return ids.IDLen + len(bw.Bytes()) + 2*constants.PointerOverhead
```
Why is this 2x? 1x for the inner block pointer and one for the pointer to the wrapper object?
> 1x for the inner block pointer and one for the pointer to the wrapper object?

Correct - went through and made sure I was consistent (there was one case where a tx wrapper wasn't being accounted for correctly)
May make sense to comment this but that's def a nit
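For concreteness, the accounting behind `cachedBlockSize`, with assumed constants (on a 64-bit platform a pointer is one 8-byte word, mirroring `constants.PointerOverhead`, and `ids.IDLen` is the 32-byte ID length):

```go
// Back-of-the-envelope entry size for a cached 2 MiB block.
// pointerOverhead and idLen are assumptions standing in for
// constants.PointerOverhead and ids.IDLen.
const (
	pointerOverhead = 8       // one machine word per pointer (64-bit)
	idLen           = 32      // IDs are 32-byte hashes
	blockLen        = 2 << 20 // example 2 MiB block
)

// Key + payload + pointer to the wrapper + the wrapper's inner block pointer.
const approxEntrySize = idLen + blockLen + 2*pointerOverhead // 2,097,200 bytes
```

The two pointer words are noise next to the block bytes themselves; the accounting matters mostly for keeping the summed budget honest across many small entries.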
Why this should be merged
With sustained large blocks (as observed with the hypersdk), the existing block caches evict too slowly. This changes the eviction policy to be size-based.
How this works
Switch the usage of `cache.LRU` to `cache.SizedLRU`.
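A sketch of the shape of the change, inferred from the diff hunks above (the exact `SizedLRU` constructor name and signature are assumptions here, and `blockCacheSize` is an illustrative variable):

```go
// Before: evicts after a fixed number of entries.
verifiedBlocks := &cache.LRU[ids.ID, *BlockWrapper]{Size: blockCacheSize}

// After (sketch): evicts once cached entries exceed a byte budget,
// using a per-entry size function like cachedBlockSize above.
verifiedBlocks := cache.NewSizedLRU[ids.ID, *BlockWrapper](
	blockCacheSize, // now a byte budget, e.g. 128 * units.MiB
	cachedBlockSize,
)
```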
How this was tested