
Changed CompressedPtr to 32 bits in size and reduced the max slab index value to 15 bits #87


Closed
wants to merge 54 commits

Conversation

guptask
Collaborator

@guptask guptask commented Jun 23, 2022

This change is Reviewable

igchor and others added 30 commits February 10, 2022 14:55
Its implementation is mostly based on PosixShmSegment.

Also, extend ShmManager and ShmSegmentOpts to support this new
segment type.
After introducing file segment type, nameToKey_ does not provide
enough information to recover/remove segments on restart.

This commit fixes that by replacing nameToKey_ with nameToOpts_.

Previously, the Key from nameToKey_ map was only used in a single
DCHECK().
* New class MemoryTierCacheConfig allows configuring a memory tier.
  Setting the tier size and the location of a file for file-backed memory
  is supported in this initial implementation;
* A new member, a vector of memory tiers, is added to class CacheAllocatorConfig.
* A new test suite, cachelib/allocator/tests/MemoryTiersTest.cpp,
  demonstrates the usage of and tests the extended config API.
to allow using the new configureMemoryTiers() API with legacy behavior.

Move validation code for memory tiers to the validate() method and convert
ratios to sizes lazily (on get).
It wrongly assumed that the only possible segment type is the
PosixSysV segment.
… handling when NVM cache state is not enabled
Now its size is 8 bytes instead of 4.

The original CompressedPtr stored only an offset within a memory allocator.
For the multi-tier implementation, this is not enough. We must also store
the tierId and, when uncompressing, select the proper allocator.

An alternative would be to drop CompressedPtr entirely, but it is leveraged
to allow the cache to be mapped to different addresses in shared memory.

Changing CompressedPtr impacted CacheItem size - it increased from 32 to 44 bytes.
Without this fix, removeCb is called even when an Item is moved between
tiers.
It fails because CentOS is EOL. We might want to consider
using CentOS Stream, but for now, just remove it.

Right now, we rely on build-cachelib-centos workflow anyway.
Disabled test suite allocator-test-AllocatorTypeTest to skip sporadically failing tests.
…m#43)

Compensation results in ratios being different from those originally specified.
@guptask guptask force-pushed the compressed_ptr_size_reduction branch 2 times, most recently from 9492096 to 0c3df03 Compare August 2, 2022 09:50
@guptask guptask marked this pull request as ready for review August 2, 2022 09:52
@guptask guptask force-pushed the compressed_ptr_size_reduction branch 4 times, most recently from 4936efd to a875c4b Compare August 3, 2022 07:23
@guptask guptask changed the title Changed CompressedPtr to 32 bits in size and reduced the max slab index value to 15 bits [Upstream] Changed CompressedPtr to 32 bits in size and reduced the max slab index value to 15 bits Aug 3, 2022
@guptask guptask changed the title [Upstream] Changed CompressedPtr to 32 bits in size and reduced the max slab index value to 15 bits Changed CompressedPtr to 32 bits in size and reduced the max slab index value to 15 bits Aug 3, 2022
@guptask guptask force-pushed the compressed_ptr_size_reduction branch 7 times, most recently from c8421a3 to df1a2a9 Compare August 18, 2022 00:20
@guptask guptask force-pushed the compressed_ptr_size_reduction branch from df1a2a9 to 6fb2e74 Compare August 18, 2022 02:10
@igchor

igchor commented Aug 18, 2022

@guptask please resolve conflicts


@igchor igchor left a comment


Reviewable status: 0 of 11 files reviewed, 4 unresolved discussions (waiting on @guptask)


cachelib/allocator/memory/CompressedPtr.h line 40 at r4 (raw file):

// specialize a type for all of the STL containers.
namespace IsContainerImpl {

Why do you add all those traits and so many new includes? I don't think listing all possible containers is a good idea... What if some new ones are added in the future?

You can just set isContainer based on whether allocators_ has compress/uncompress methods taking a compressed ptr or not: https://stackoverflow.com/questions/87372/check-if-a-class-has-a-member-function-of-a-given-signature

Or even simpler, you can probably just make a partial specialization for AllocatorT == CacheAllocator<..> and for the generic case, treat AllocatorT as a container.

@guptask guptask closed this Dec 16, 2022
byrnedj pushed a commit to byrnedj/CacheLib that referenced this pull request Jun 15, 2023
instead of always inserting to topmost tier
byrnedj pushed a commit to byrnedj/CacheLib that referenced this pull request Jul 24, 2023
instead of always inserting to topmost tier
byrnedj pushed a commit to byrnedj/CacheLib that referenced this pull request Mar 12, 2024
instead of always inserting to topmost tier
byrnedj pushed a commit to byrnedj/CacheLib that referenced this pull request May 8, 2024
instead of always inserting to topmost tier
byrnedj pushed a commit to byrnedj/CacheLib that referenced this pull request May 20, 2024
instead of always inserting to topmost tier
byrnedj pushed a commit to byrnedj/CacheLib that referenced this pull request Jun 25, 2024
instead of always inserting to topmost tier
byrnedj pushed a commit to byrnedj/CacheLib that referenced this pull request Jun 27, 2024
instead of always inserting to topmost tier
byrnedj added a commit to byrnedj/CacheLib that referenced this pull request Nov 26, 2024
Part 2.
----------------------------
This patch introduces tryEvictToNextMemoryTier (the actual data
movement), multi-tier stats and some additional multi-tier tests.

Additional features:
- Add option to insert items to first free tier (pmem#87)