OAK-11932: Segment prefetching for CachingSegmentReader #2513
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: trunk
Are you sure you want to change the base?
Conversation
    return out;
}

private void schedulePrefetch(long msb, long lsb, Buffer buffer) {
@nfsantos made a good remark that this method should, from the start, be executed in a separate thread context, so that it does not add to the execution time of the thread invoking CachingSegmentArchiveReader#readSegment.
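A minimal sketch of what such off-thread dispatching could look like, assuming a dedicated executor; the class, field, and method names (PrefetchDispatcher, prefetchExecutor, prefetchSegment) are hypothetical and not taken from this PR:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

// Hypothetical sketch: names are illustrative, not the PR's actual API.
public class PrefetchDispatcher {

    // Small, bounded pool so prefetching cannot starve regular readers.
    private final ExecutorService prefetchExecutor = Executors.newFixedThreadPool(2);

    void schedulePrefetchAsync(long msb, long lsb) {
        try {
            // Hand the work off immediately; the caller of readSegment never blocks on it.
            prefetchExecutor.submit(() -> prefetchSegment(msb, lsb));
        } catch (RejectedExecutionException e) {
            // Prefetching is best effort: if the executor rejects the task
            // (e.g. after shutdown), simply skip it.
        }
    }

    private void prefetchSegment(long msb, long lsb) {
        // Placeholder for the actual download-and-cache logic.
    }
}
```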
Indeed, my tests with #2519, which implements a similar mechanism at a different level of abstraction, showed that dispatching from the caller thread hurts performance.
I even opted to trigger preloading only from within the load callback, so preloading is completely off the critical path for cache hits.
Of course that's a trade-off, as it requires the caller thread to load one segment before any preloading happens.
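For context, here is a rough sketch of the load-callback approach using a Guava LoadingCache; SegmentStorage and triggerPrefetch are assumed names for illustration, not the actual #2519 code:

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.UUID;

// Illustrative only: SegmentStorage and triggerPrefetch are assumed helpers, not Oak API.
public class CallbackPreloadingCache {

    private final SegmentStorage storage;

    private final LoadingCache<UUID, byte[]> segments = CacheBuilder.newBuilder()
            .maximumSize(1_000)
            .build(new CacheLoader<UUID, byte[]>() {
                @Override
                public byte[] load(UUID id) {
                    byte[] data = storage.readSegment(id);
                    // Preloading is only triggered on a cache miss, i.e. inside the loader,
                    // so cache hits never touch the prefetching code path at all.
                    triggerPrefetch(data);
                    return data;
                }
            });

    public CallbackPreloadingCache(SegmentStorage storage) {
        this.storage = storage;
    }

    private void triggerPrefetch(byte[] segmentData) {
        // Extract referenced segment ids and schedule their downloads asynchronously.
    }

    interface SegmentStorage {
        byte[] readSegment(UUID id);
    }
}
```

The trade-off mentioned above is visible here: the first readSegment for a segment pays the full load, and only then are its references preloaded.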
Looks good! I think some benchmarking is needed to ensure the added overhead is worthwhile.
}

// Drop prefetch if already in progress for this segment
boolean registered = inFlightPrefetch.add(ref);
Nice idea! I missed this trick so far in #2519 🙂
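For readers following along, the trick boils down to a concurrent set whose add() acts as an atomic test-and-set; a minimal sketch with assumed names:

```java
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the deduplication idea: names are illustrative, not taken from the PR.
public class InFlightTracker {

    // Set.add returns false if another thread already registered the same segment id.
    private final Set<UUID> inFlightPrefetch = ConcurrentHashMap.newKeySet();

    boolean tryStartPrefetch(UUID ref) {
        return inFlightPrefetch.add(ref);
    }

    void finishPrefetch(UUID ref) {
        // Call this in a finally block around the download, so a failed prefetch
        // does not permanently block future prefetches of the same segment.
        inFlightPrefetch.remove(ref);
    }
}
```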
List<UUID> refs = extractReferences(buffer);
int limit = Math.min(refs.size(), prefetchMaxRefs);
for (int i = 0; i < limit; i++) {
You are getting a list with all the references but then potentially iterating over only a subset of them. You could save some work by extracting only the references that will actually be prefetched; this would also avoid allocating a list with all the refs. Using streams with limit() may be an easy way of implementing this optimization, as streams are evaluated lazily.
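A sketch of that suggestion, assuming the reference extraction can be exposed as a stream; streamReferences, prefetchMaxRefs and prefetchOne are hypothetical names, not existing members:

```java
import java.util.UUID;
import java.util.stream.Stream;

// Hypothetical sketch: streamReferences, prefetchMaxRefs and prefetchOne are assumed members.
private void schedulePrefetchLazily(Buffer buffer) {
    Stream<UUID> refs = streamReferences(buffer); // lazily decodes references from the buffer
    refs.limit(prefetchMaxRefs)                   // stop decoding once the prefetch budget is reached
        .forEach(this::prefetchOne);              // no intermediate list of all references is allocated
}
```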
if (persistentCache.containsSegment(rMsb, rLsb)) {
    continue;
}
Maybe it's better to do this check in the worker thread, just before the segment is downloaded? The task that is scheduled to download the segment might not execute for a while, if the worker pool is busy, so from this point until the actual download, the segment might have been added to the cache. Or we can leave the check here and do another one before trying to download.
It would also be better to have a mechanism similar to the Guava loading cache, where a thread requesting a segment that is not in the cache but is being downloaded blocks waiting for the first download to complete, instead of starting a new download. This would avoid duplicate downloads.
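For illustration, both suggestions (re-checking the cache in the worker and blocking on an in-flight download) can be combined using a map of futures keyed by segment id. This is a sketch with assumed types and names, not Oak API:

```java
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;

// Sketch only: PersistentCache, SegmentSource and all method names are assumptions.
public class DedupingDownloader {

    private final ConcurrentMap<UUID, CompletableFuture<byte[]>> inFlight = new ConcurrentHashMap<>();
    private final ExecutorService workers;
    private final PersistentCache cache;
    private final SegmentSource remote;

    DedupingDownloader(ExecutorService workers, PersistentCache cache, SegmentSource remote) {
        this.workers = workers;
        this.cache = cache;
        this.remote = remote;
    }

    CompletableFuture<byte[]> download(UUID id) {
        // computeIfAbsent ensures a single download per segment; concurrent callers share
        // the same future and wait on it, similar to what a Guava LoadingCache does.
        CompletableFuture<byte[]> future = inFlight.computeIfAbsent(id,
                key -> CompletableFuture.supplyAsync(() -> fetch(key), workers));
        // Drop the entry once the download finishes (successfully or not), so a failed
        // download can be retried later instead of keeping the failure around forever.
        future.whenComplete((data, error) -> inFlight.remove(id, future));
        return future;
    }

    private byte[] fetch(UUID id) {
        // Re-check the cache in the worker: the segment may have been cached while the
        // task was queued behind other work.
        byte[] cached = cache.readSegment(id);
        if (cached != null) {
            return cached;
        }
        byte[] data = remote.readSegment(id);
        cache.writeSegment(id, data);
        return data;
    }

    interface PersistentCache {
        byte[] readSegment(UUID id);
        void writeSegment(UUID id, byte[] data);
    }

    interface SegmentSource {
        byte[] readSegment(UUID id);
    }
}
```

Prefetching callers could ignore the returned future, while a reader that actually needs the segment could join() it, which gives the LoadingCache-like blocking behaviour without a second download.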
As indicated in #2569, this approach is limited in that a
https://issues.apache.org/jira/browse/OAK-11932