
[draft] initial sync for chained items based on head handle #75


Closed · wants to merge 6 commits
704 changes: 335 additions & 369 deletions cachelib/allocator/CacheAllocator-inl.h

Large diffs are not rendered by default.

78 changes: 48 additions & 30 deletions cachelib/allocator/CacheAllocator.h
@@ -1386,7 +1386,7 @@ class CacheAllocator : public CacheBase {

private:
// wrapper around Item's refcount and active handle tracking
FOLLY_ALWAYS_INLINE RefcountWithFlags::incResult incRef(Item& it, bool failIfMoving);
FOLLY_ALWAYS_INLINE RefcountWithFlags::incResult incRef(Item& it);
FOLLY_ALWAYS_INLINE RefcountWithFlags::Value decRef(Item& it);

// drops the refcount and if needed, frees the allocation back to the memory
@@ -1544,6 +1544,25 @@ class CacheAllocator : public CacheBase {
WriteHandle allocateChainedItemInternal(const ReadHandle& parent,
uint32_t size);

// Allocate a chained item to a specific tier
//
// The resulting chained item does not have a parent item and
// will be freed once the handle is dropped
//
// The parent item parameter here is mainly used to find the
// correct pool to allocate memory for this chained item
//
// @param parent parent item
// @param size the size for the chained allocation
// @param tid the tier to allocate on
//
// @return handle to the chained allocation
// @throw std::invalid_argument if the size requested is invalid or
// if the item is invalid
WriteHandle allocateChainedItemInternalTier(const Item& parent,
uint32_t size,
TierId tid);

// Given an item and its parentKey, validate that the parentKey
// corresponds to an item that's the parent of the supplied item.
//
@@ -1620,18 +1639,16 @@
// @return true If the move was completed, and the containers were updated
// successfully.
bool moveRegularItemWithSync(Item& oldItem, WriteHandle& newItemHdl);

// Moves a regular item to a different slab. This should only be used during
// slab release after the item's exclusive bit has been set. The user supplied
// callback is responsible for copying the contents and fixing the semantics
// of chained item.

// Moves a chained item to a different memory tier.
//
// @param oldItem item being moved
// @param oldItem Reference to the item being moved
// @param newItemHdl Reference to the handle of the new item being moved into
// @param parentHandle Reference to the handle of the parent item
//
// @return true If the move was completed, and the containers were updated
// successfully.
bool moveRegularItem(Item& oldItem, WriteHandle& newItemHdl);
bool moveChainedItemWithSync(ChainedItem& oldItem, WriteHandle& newItemHdl, WriteHandle& parentHandle);

// template class for viewAsChainedAllocs that takes either ReadHandle or
// WriteHandle
@@ -1644,29 +1661,12 @@
template <typename Handle>
folly::IOBuf convertToIOBufT(Handle& handle);

// Moves a chained item to a different slab. This should only be used during
// slab release after the item's exclusive bit has been set. The user supplied
// callback is responsible for copying the contents and fixing the semantics
// of chained item.
//
// Note: If we have successfully moved the old item into the new, the
// newItemHdl is reset and no longer usable by the caller.
//
// @param oldItem Reference to the item being moved
// @param newItemHdl Reference to the handle of the new item being
// moved into
//
// @return true If the move was completed, and the containers were updated
// successfully.
bool moveChainedItem(ChainedItem& oldItem, WriteHandle& newItemHdl);

// Transfers the chain ownership from parent to newParent. Parent
// will be unmarked as having chained allocations. Parent will not be null
// after calling this API.
//
// Parent and NewParent must be valid handles to items with same key and
// parent must have chained items and parent handle must be the only
// outstanding handle for parent. New parent must be without any chained item
// NewParent must be a valid handle to an item with the same key as parent,
// and parent must have chained items. NewParent must be without any chained item
// handles.
//
// Chained item lock for the parent's key needs to be held in exclusive mode.
@@ -1675,7 +1675,7 @@
// @param newParent the new parent for the chain
//
// @throw if any of the conditions for parent or newParent are not met.
void transferChainLocked(WriteHandle& parent, WriteHandle& newParent);
void transferChainLocked(Item& parent, WriteHandle& newParent);

// replace a chained item in the existing chain. This needs to be called
// with the chained item lock held exclusive
@@ -1689,6 +1689,24 @@
WriteHandle newItemHdl,
const Item& parent);

void replaceChainedItemLockedForMoving(Item& oldItem,
WriteHandle& newItemHdl,
const Item& parent);

// Performs the actual in-place replace; called from
// replaceChainedItemLockedForMoving and replaceChainedItemLocked.
// The chained item lock must be held in exclusive mode.
//
// @param oldItem the item being replaced in the chain
// @param newItemHdl handle to the item replacing it
// @param parent the parent for the chain
// @param fromMoving true when called from the moving path
void replaceInChainLocked(Item& oldItem,
WriteHandle& newItemHdl,
const Item& parent,
bool fromMoving);
// Insert an item into MM container. The caller must hold a valid handle for
// the item.
//
@@ -1979,7 +1997,7 @@ auto& mmContainer = getMMContainer(tid, pid, cid);
throw std::runtime_error("Not supported for chained items");
}

if (candidate->markMoving(true)) {
if (candidate->markMoving()) {
mmContainer.remove(itr);
candidates.push_back(candidate);
} else {
@@ -2052,7 +2070,7 @@ auto& mmContainer = getMMContainer(tid, pid, cid);

// TODO: only allow it for read-only items?
// or implement mvcc
if (candidate->markMoving(true)) {
if (candidate->markMoving()) {
// promotions should rarely fail since we already marked moving
mmContainer.remove(itr);
candidates.push_back(candidate);
4 changes: 2 additions & 2 deletions cachelib/allocator/CacheItem-inl.h
@@ -238,8 +238,8 @@ bool CacheItem<CacheTrait>::markForEvictionWhenMoving() {
}

template <typename CacheTrait>
bool CacheItem<CacheTrait>::markMoving(bool failIfRefNotZero) {
return ref_.markMoving(failIfRefNotZero);
bool CacheItem<CacheTrait>::markMoving() {
return ref_.markMoving();
}

template <typename CacheTrait>
6 changes: 3 additions & 3 deletions cachelib/allocator/CacheItem.h
@@ -312,9 +312,9 @@ class CACHELIB_PACKED_ATTR CacheItem {
//
// @return true on success, failure if item is marked as exclusive
// @throw exception::RefcountOverflow on ref count overflow
FOLLY_ALWAYS_INLINE RefcountWithFlags::incResult incRef(bool failIfMoving) {
FOLLY_ALWAYS_INLINE RefcountWithFlags::incResult incRef() {
try {
return ref_.incRef(failIfMoving);
return ref_.incRef();
} catch (exception::RefcountOverflow& e) {
throw exception::RefcountOverflow(
folly::sformat("{} item: {}", e.what(), toString()));
@@ -381,7 +381,7 @@ class CACHELIB_PACKED_ATTR CacheItem {
* Unmarking moving will also return the refcount at the moment of
* unmarking.
*/
bool markMoving(bool failIfRefNotZero);
bool markMoving();
RefcountWithFlags::Value unmarkMoving() noexcept;
bool isMoving() const noexcept;
bool isOnlyMoving() const noexcept;
17 changes: 11 additions & 6 deletions cachelib/allocator/Refcount.h
@@ -140,9 +140,9 @@ class FOLLY_PACK_ATTR RefcountWithFlags {
// @return true if refcount is bumped. false otherwise (if item is exclusive)
// @throw exception::RefcountOverflow if new count would be greater than
// maxCount
FOLLY_ALWAYS_INLINE incResult incRef(bool failIfMoving) {
FOLLY_ALWAYS_INLINE incResult incRef() {
incResult res = incOk;
auto predicate = [failIfMoving, &res](const Value curValue) {
auto predicate = [&res](const Value curValue) {
Value bitMask = getAdminRef<kExclusive>();

const bool exlusiveBitIsSet = curValue & bitMask;
@@ -151,7 +151,7 @@
} else if (exlusiveBitIsSet && (curValue & kAccessRefMask) == 0) {
res = incFailedEviction;
return false;
} else if (exlusiveBitIsSet && failIfMoving) {
} else if (exlusiveBitIsSet) {
res = incFailedMoving;
return false;
}
@@ -320,12 +320,17 @@
*
* Unmarking moving does not depend on `isInMMContainer`
*/
bool markMoving(bool failIfRefNotZero) {
auto predicate = [failIfRefNotZero](const Value curValue) {
bool markMoving() {
auto predicate = [](const Value curValue) {
Value conditionBitMask = getAdminRef<kLinked>();
const bool flagSet = curValue & conditionBitMask;
const bool alreadyExclusive = curValue & getAdminRef<kExclusive>();
if (failIfRefNotZero && (curValue & kAccessRefMask) != 0) {
const bool isChained = curValue & getFlag<kIsChainedItem>();

// A chained item can have a ref count of 1; this just means it is linked in its chain
if (isChained && (curValue & kAccessRefMask) > 1) {
return false;
} else if (!isChained && (curValue & kAccessRefMask) != 0) {
return false;
}
Comment on lines +331 to 335

I think it could be simplified to something like this:

    if ((curValue & kAccessRefMask) > (isChained ? 1u : 0u)) {
        return false;
    }
if (!flagSet || alreadyExclusive) {
2 changes: 1 addition & 1 deletion cachelib/allocator/tests/ItemTest.cpp
@@ -85,7 +85,7 @@ TEST(ItemTest, ExpiryTime) {
// So that exclusive bit will be set
item->markAccessible();
// Test that writes fail while the item is moving
result = item->markMoving(true);
result = item->markMoving();
EXPECT_TRUE(result);
result = item->updateExpiryTime(0);
EXPECT_FALSE(result);
34 changes: 9 additions & 25 deletions cachelib/allocator/tests/RefCountTest.cpp
@@ -52,7 +52,7 @@ void RefCountTest::testMultiThreaded() {
nLocalRef--;
ref.markAccessible();
} else {
ref.incRef(true);
ref.incRef();
nLocalRef++;
ref.unmarkAccessible();
}
@@ -101,12 +101,12 @@ void RefCountTest::testBasic() {
ASSERT_FALSE(ref.template isFlagSet<RefcountWithFlags::Flags::kMMFlag1>());

for (uint32_t i = 0; i < RefcountWithFlags::kAccessRefMask; i++) {
ASSERT_EQ(ref.incRef(true),RefcountWithFlags::incOk);
ASSERT_EQ(ref.incRef(),RefcountWithFlags::incOk);
}

// Incrementing past the max will fail
auto rawRef = ref.getRaw();
ASSERT_THROW(ref.incRef(true), std::overflow_error);
ASSERT_THROW(ref.incRef(), std::overflow_error);
ASSERT_EQ(rawRef, ref.getRaw());

// Bumping up access ref shouldn't affect admin ref and flags
@@ -152,11 +152,11 @@ void RefCountTest::testBasic() {
ASSERT_FALSE(ref.template isFlagSet<RefcountWithFlags::Flags::kMMFlag1>());

// conditionally set flags
ASSERT_FALSE(ref.markMoving(true));
ASSERT_FALSE(ref.markMoving());
ref.markInMMContainer();
// only first one succeeds
ASSERT_TRUE(ref.markMoving(true));
ASSERT_FALSE(ref.markMoving(true));
ASSERT_TRUE(ref.markMoving());
ASSERT_FALSE(ref.markMoving());
ref.unmarkInMMContainer();

ref.template setFlag<RefcountWithFlags::Flags::kMMFlag0>();
@@ -202,7 +202,7 @@ void RefCountTest::testMarkForEvictionAndMoving() {
ref.markInMMContainer();
ref.markAccessible();

ASSERT_TRUE(ref.markMoving(true));
ASSERT_TRUE(ref.markMoving());
ASSERT_FALSE(ref.markForEviction());

ref.unmarkInMMContainer();
@@ -218,37 +218,21 @@
ref.markAccessible();

ASSERT_TRUE(ref.markForEviction());
ASSERT_FALSE(ref.markMoving(true));
ASSERT_FALSE(ref.markMoving());

ref.unmarkInMMContainer();
ref.unmarkAccessible();
auto ret = ref.unmarkForEviction();
ASSERT_EQ(ret, 0);
}

{
// can mark moving when ref count > 0
RefcountWithFlags ref;
ref.markInMMContainer();
ref.markAccessible();

ref.incRef(true);

ASSERT_TRUE(ref.markMoving(false));

ref.unmarkInMMContainer();
ref.unmarkAccessible();
auto ret = ref.unmarkMoving();
ASSERT_EQ(ret, 1);
}

{
// cannot mark for eviction when ref count > 0
RefcountWithFlags ref;
ref.markInMMContainer();
ref.markAccessible();

ref.incRef(true);
ref.incRef();
ASSERT_FALSE(ref.markForEviction());
}
}
40 changes: 40 additions & 0 deletions cachelib/cachebench/test_configs/small_moving_bg.json
@@ -0,0 +1,40 @@
// @nolint like default.json, but moves items during slab release instead of evicting them.
{
"cache_config" : {
"cacheSizeMB" : 2248,
"cacheDir": "/tmp/mem-tier5",
"memoryTiers" : [
{
"ratio": 1,
"memBindNodes": 0
}, {
"ratio": 1,
"memBindNodes": 0
}
],
"poolRebalanceIntervalSec" : 1,
"moveOnSlabRelease" : true,
"rebalanceMinSlabs" : 2,
"evictorThreads": 2,
"promoterThreads": 2
},
"test_config" :
{
"preallocateCache" : true,
"numOps" : 40000000,
"numThreads" : 32,
"numKeys" : 250000,
"generator": "online",

"keySizeRange" : [1, 8, 32, 64, 128, 256, 512],
"keySizeRangeProbability" : [0.1, 0.1, 0.2, 0.2, 0.3, 0.1],

"valSizeRange" : [1, 128, 512, 1024, 4096, 10240, 20480, 40960, 60000],
"valSizeRangeProbability" : [0.1, 0.1, 0.1, 0.2, 0.2, 0.1, 0.1, 0.1],

"getRatio" : 0.70,
"setRatio" : 0.30
}

}

6 changes: 6 additions & 0 deletions cachelib/common/Mutex.h
@@ -341,6 +341,7 @@ class RWBucketLocks : public BaseBucketLocks<LockType, LockAlignmentType> {
using Lock = LockType;
using ReadLockHolder = ReadLockHolderType;
using WriteLockHolder = WriteLockHolderType;
using LockHolder = std::unique_lock<Lock>;

RWBucketLocks(uint32_t locksPower, std::shared_ptr<Hash> hasher)
: Base::BaseBucketLocks(locksPower, std::move(hasher)) {}
@@ -357,6 +358,11 @@ class RWBucketLocks : public BaseBucketLocks<LockType, LockAlignmentType> {
return WriteLockHolder{Base::getLock(args...)};
}

template <typename... Args>
LockHolder tryLockExclusive(Args... args) noexcept {
return LockHolder(Base::getLock(args...), std::try_to_lock);
}

// try to grab the reader lock for a limit _timeout_ duration
template <typename... Args>
ReadLockHolder lockShared(const std::chrono::microseconds& timeout,
1 change: 1 addition & 0 deletions run_tests.sh
@@ -13,3 +13,4 @@ fi

../bin/cachebench --json_test_config ../test_configs/consistency/navy.json
../bin/cachebench --json_test_config ../test_configs/consistency/navy-multi-tier.json
../opt/bin/cachebench --json_test_config /opt/test_configs/small_moving_bg.json