decommit unusable page(s) in medium heap blocks (bug 5822302) #840
Conversation
force-pushed from 6225048 to 5d6803d
force-pushed from 5d6803d to 2f52f5d
The behavior on free is a little concerning in the case where the commit failed. Is it possible to just call PartialDecommit on the decommitted page to update the PageAllocator state and then call Release on the entire set of pages? Release doesn't always just decommit; it can also call VirtualFree in some cases, so only decommitting when the commit fails might not be enough.
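A rough sketch of what that suggestion might look like; `PartialDecommit` and `ReleasePages` here are hypothetical names and signatures inferred from this thread, not verified ChakraCore APIs:

```cpp
#include <cstddef>

// Minimal stand-in for the allocator interface discussed above; both methods
// are assumptions from this thread, not ChakraCore's real PageAllocator.
struct PageAllocator
{
    void PartialDecommit(char* address, size_t pageCount); // update bookkeeping only
    void ReleasePages(char* address, size_t pageCount);    // may decommit or VirtualFree
};

// Sketch of the suggested free path: record the already-decommitted tail
// pages in the allocator's state, then release the entire range through the
// normal path.
void FreeBlockPages(PageAllocator* pageAllocator, char* address,
                    size_t pageCount, size_t unusablePageCount, size_t pageSize)
{
    if (unusablePageCount > 0)
    {
        char* unusableStart = address + (pageCount - unusablePageCount) * pageSize;
        pageAllocator->PartialDecommit(unusableStart, unusablePageCount);
    }
    // Release may just decommit, but it can also VirtualFree, so decommitting
    // alone on commit failure would not cover every case.
    pageAllocator->ReleasePages(address, pageCount);
}
```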
Is this tied to #742?

yes
force-pushed from 655fe7d to 0a9e42f
lib/Common/Memory/HeapInfo.cpp
Outdated
```cpp
    return ((MediumAllocationBlockAttributes::PageCount*AutoSystemInfo::PageSize) % sizeCat) / AutoSystemInfo::PageSize;
}

/* static */
void MediumAllocationBlockAttributes::ProtectUnusablePages(HeapBlock* heapBlock)
```
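To make the arithmetic concrete, here is a standalone illustration of that computation, assuming an 8-page medium block and 4KB pages (both values are assumptions for illustration, not taken from this PR):

```cpp
#include <cstdio>
#include <cstddef>

// Assumed values, for illustration only.
static const size_t PageCount = 8;
static const size_t PageSize = 4096;

// Same formula as the diff above: bytes left over after packing objects of
// size sizeCat into the block, expressed in whole pages.
static size_t GetUnusablePageCount(size_t sizeCat)
{
    return ((PageCount * PageSize) % sizeCat) / PageSize;
}

int main()
{
    // A 12KB size category leaves 8KB at the end of the 32KB block:
    // 32768 % 12288 == 8192, so the last 2 pages can never hold an object.
    printf("%zu\n", GetUnusablePageCount(12288)); // prints 2
    // A 9KB size category leaves 5KB: only 1 whole page is unusable.
    printf("%zu\n", GetUnusablePageCount(9216));  // prints 1
    return 0;
}
```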
I'm not sure why this is here and not in the HeapBlock. GetUnusablePageCount can be argued as getting an attribute of the block, but this is definitely not an attribute. Also, nitpicky, but please fix the spacing of your usage of `*`; not sure why it's the only operator you use with no spaces.
done
In medium heap blocks, if the object size is bigger than one page, the last 1~3 page(s) of the block can never be allocated. When such a case is hit, decommit those pages to save memory and to catch corruption. When returning the pages to the page allocator, we need to commit those pages again for reuse; on OOM there, just decommit all pages in the heap block and let the page allocator manage the decommitted pages. In the rescan code, assert that we never scan those unallocatable pages.
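A hedged sketch of that scheme using raw Win32 calls (the helper names and the 4KB page size are assumptions for illustration; the PR's actual code goes through the page allocator and heap block attributes):

```cpp
#include <windows.h>
#include <cstddef>

static const size_t PageSizeBytes = 4096; // assumed 4KB pages

// Decommit the tail pages that can never hold an object of this size
// category: saves memory, and any stray write to them will fault.
void DecommitUnusablePages(char* blockAddress, size_t pageCount, size_t unusablePageCount)
{
    char* unusableStart = blockAddress + (pageCount - unusablePageCount) * PageSizeBytes;
    VirtualFree(unusableStart, unusablePageCount * PageSizeBytes, MEM_DECOMMIT);
}

// Before handing the block back to the page allocator, recommit the tail so
// the whole block is uniformly committed again. On OOM, fall back to
// decommitting everything and let the allocator manage decommitted pages.
bool RecommitForRelease(char* blockAddress, size_t pageCount, size_t unusablePageCount)
{
    char* unusableStart = blockAddress + (pageCount - unusablePageCount) * PageSizeBytes;
    if (VirtualAlloc(unusableStart, unusablePageCount * PageSizeBytes,
                     MEM_COMMIT, PAGE_READWRITE) == nullptr)
    {
        VirtualFree(blockAddress, pageCount * PageSizeBytes, MEM_DECOMMIT);
        return false; // caller returns the pages as decommitted
    }
    return true; // fully committed; release through the normal path
}
```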
It's hard to do the partial decommit without refactoring the page allocator; especially with background page releasing, there would be race issues. Changing to protect with NOACCESS for now. In the future, after we implement the capability to partially commit pages in the PageAllocator, we can switch to that, because it would save some memory.
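A minimal sketch of the NOACCESS variant (shown with a raw VirtualProtect call for illustration; in the PR this logic sits behind MediumAllocationBlockAttributes::ProtectUnusablePages, and the 4KB page size is an assumption):

```cpp
#include <windows.h>
#include <cstddef>

static const size_t PageSizeBytes = 4096; // assumed 4KB pages

// Keep the unusable tail pages committed, but make any touch fault
// immediately. Because the commit state never changes, there is no race with
// background page release, and no recommit (hence no OOM path) on free.
void ProtectUnusablePages(char* unusableStart, size_t unusablePageCount)
{
    DWORD oldProtect;
    VirtualProtect(unusableStart, unusablePageCount * PageSizeBytes,
                   PAGE_NOACCESS, &oldProtect);
}
```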
…, do specialization on heapblock itself
need to merge to master, see #985
Fixes #742