HBASE-22463 Some paths in HFileScannerImpl did not consider block#release which will exhaust the ByteBuffAllocator #257
Conversation
💔 -1 overall — This message was automatically generated.
return;
}
// We don't have to keep ref to EXCLUSIVE type of block
if (this.curBlock != null && this.curBlock.usesSharedMemory()) {
There is a concern here. Even if the block is on an exclusive heap memory area, we will keep a ref to it in this list. In a Phoenix aggregation kind of use case, where many blocks might get fetched and not immediately shipped, we are keeping the ref unwantedly here for a longer time. This prevents the GC from reclaiming the heap memory area for the blocks, which might be a hidden bomb IMO. It's not good to remove the MemType. Let's create the block with memory type EXCLUSIVE when the block data is on heap; the block might be coming from the LRU cache or from fetching the block data from HDFS into a heap memory area. When the block comes from the off-heap BC, or is backed by a BB from the pool (while reading from HDFS, read into a pooled BB), let's create the block with mem type SHARED. Every block can have the retain and release methods, but let the EXCLUSIVE type do a no-op there.
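A minimal sketch of that proposal, under the assumption that retain/release only matter for pooled buffers; the names (MemoryType, SketchBlock, the deallocator callback) are hypothetical stand-ins, not the real HFileBlock/ByteBuffAllocator API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch only; names do not match the real HFileBlock API.
enum MemoryType { EXCLUSIVE, SHARED }

class SketchBlock {
  private final MemoryType memType;
  private final AtomicInteger refCnt = new AtomicInteger(1);
  private final Runnable deallocator; // e.g. hand pooled ByteBuffs back to the allocator

  SketchBlock(MemoryType memType, Runnable deallocator) {
    this.memType = memType;
    this.deallocator = deallocator;
  }

  void retain() {
    // EXCLUSIVE (heap) blocks are reclaimed by GC, so ref counting is a no-op for them.
    if (memType == MemoryType.SHARED) {
      refCnt.incrementAndGet();
    }
  }

  void release() {
    // Only SHARED (pooled / off-heap) blocks must return their buffers to the allocator.
    if (memType == MemoryType.SHARED && refCnt.decrementAndGet() == 0) {
      deallocator.run();
    }
  }
}
```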
Filed a separate issue for this: https://issues.apache.org/jira/browse/HBASE-21879. Let's finish that in a new one, because I believe it will involve many changes; that also helps move this PR forward.
So already as part of this JIRA you removed the MEMTYPE, so even the bucket cache engines like FileIOEngine will drop the memtype, and all the blocks will go through the release() path by being added to prevBlocks and then getting released. The new PR #268 will remove the need for the addition to prevBlocks where it is a no-op. So now both the bucket cache and reads from HDFS will create on-heap/off-heap blocks, and the type of the block will determine what it is.
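For reference, a hedged sketch of the prevBlocks bookkeeping being discussed, reusing the hypothetical SketchBlock type from the sketch above; this is not the real HFileScannerImpl:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of the prevBlocks bookkeeping; not the real HFileScannerImpl.
class SketchScanner {
  private final List<SketchBlock> prevBlocks = new ArrayList<>();
  private SketchBlock curBlock;

  void updateCurrentBlock(SketchBlock newBlock) {
    // Remember the block we are leaving so its buffers can be released later.
    if (curBlock != null) {
      prevBlocks.add(curBlock);
    }
    curBlock = newBlock;
  }

  // Called on shipped()/close(): release everything accumulated so far. For
  // EXCLUSIVE blocks release() is a no-op, so only pooled buffers go back.
  void returnBlocks(boolean returnCurrent) {
    prevBlocks.forEach(SketchBlock::release);
    prevBlocks.clear();
    if (returnCurrent && curBlock != null) {
      curBlock.release();
      curBlock = null;
    }
  }
}
```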
One high-level comment added. I did not check the remaining part of the patch.
return false;
}

// The first key in the current block 'seekToBlock' is greater than the given
// seekBefore key. We will go ahead by reading the next block that satisfies the
// given key. Return the current block before reading the next one.
seekToBlock.release();
This fix is part of some other issue's fix, right?
Yeah, similar to this issue: https://issues.apache.org/jira/browse/HBASE-22480. I discussed it with binlijin before; his patch applies to branch-2.x and does not need to be applied to the master branch. Once HBASE-21879 gets merged into master, this bug will be fixed by this patch.
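A hedged, self-contained illustration of the release-before-reload pattern the diff above adds, again using the hypothetical SketchBlock type; loadBlock(...) and the offsets are illustrative stand-ins, not HBase APIs:

```java
// Hedged illustration of the release-before-reload pattern from the diff above;
// loadBlock(...) and the offsets are stand-ins, not HBase APIs.
class SeekBeforeSketch {
  SketchBlock seekBefore(long wrongOffset, long wantedOffset) {
    SketchBlock seekToBlock = loadBlock(wrongOffset);
    // This block turned out not to be the one we need, so hand its pooled buffer
    // back to the allocator before loading the next block.
    seekToBlock.release();
    return loadBlock(wantedOffset);
  }

  private SketchBlock loadBlock(long offset) {
    // Pretend this reads a block backed by a pooled (SHARED) ByteBuff.
    return new SketchBlock(MemoryType.SHARED, () -> { /* return buffers to the pool */ });
  }
}
```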
// Promote this to L1.
if (result != null) {
  if (caching) {
    if (result instanceof HFileBlock && ((HFileBlock) result).usesSharedMemory()) {
As part of the new issue, we have to handle this here also, because when the block is shared memory we cannot keep it as-is in the LRU cache.
Here, in the following cacheBlock in LruBlockCache, we have an asReferencedHeapBlock method which will deep-clone the shared-memory block into a heap one and cache the heap copy. So there is no need to do an extra check & deep-clone here now; it's safe to remove this.
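A rough sketch of that deep-clone-before-caching idea; asHeapCopy(...) only illustrates the concept behind asReferencedHeapBlock and is not the real API:

```java
import java.nio.ByteBuffer;

// Rough sketch of copying a shared buffer into an exclusive heap array before caching;
// illustrates the idea behind asReferencedHeapBlock, not the actual implementation.
class CachePromotionSketch {
  static byte[] asHeapCopy(ByteBuffer shared) {
    byte[] heap = new byte[shared.remaining()];
    shared.duplicate().get(heap); // copy without disturbing the original position
    return heap;
  }
}
```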
+1. Let's address the comment as part of the already-raised issue. I just asked another question: it seems the fix from another issue is also included here. Anyway, you will merge it. Fine.