Prefer static_cast in place of most reinterpret_cast (facebook#12308)
Summary:
The following are risks associated with pointer-to-pointer reinterpret_cast:
* It can produce the "wrong result" (a crash or memory corruption). In theory this can happen for any up-cast or down-cast of a non-standard-layout type, though in practice it would only happen in multiple inheritance cases (where the base class pointer might point "inside" the derived object). We don't use multiple inheritance a lot, but we do use it.
* It can mask useful compiler errors upon code change, including conversions between unrelated pointer types that you expect to be related, and unintended conversions between pointer and scalar types.

I can only think of a few obscure cases where static_cast could be troublesome when it compiles as a replacement:
* Going through `void*` could plausibly cause unnecessary or broken pointer arithmetic. Suppose we have `struct Derived : public Base1, public Base2`. A chain of `Derived*` -> `void*` -> `Base2*` -> `Derived*` through reinterpret_casts could plausibly work (though it is technically UB), as long as the `Base2*` is never dereferenced. Changing to static_cast would introduce pointer adjustments that break this round trip.
* Unnecessary (but safe) pointer arithmetic could arise in a case like `Derived*` -> `Base2*` -> `Derived*`, where previously the `Base2*` might never have been dereferenced. This could potentially affect performance.

With some light scripting, I tried replacing pointer-to-pointer reinterpret_casts with static_cast and kept the cases that still compile. Most occurrences of reinterpret_cast have successfully been changed (except for java/ and third-party/). 294 changed, 257 remain.

A couple of related interventions are included here:
* Previously, Cache::Handle was not actually derived from in the implementations; it was just used as a `void*` stand-in with reinterpret_cast. Now there is an inheritance relationship to allow static_cast. In theory this could introduce pointer arithmetic (as described above), but that is unlikely without both multiple inheritance and a non-empty Cache::Handle.
* Remove some unnecessary casts to `void*`, as that conversion is allowed to be implicit (for better or worse).

Most of the remaining reinterpret_casts are for converting to/from raw bytes of objects. We could consider better idioms for these patterns in follow-up work.

I wish there were a way to implement a template variant of static_cast that would only compile if no pointer arithmetic is generated, but as best I can tell, this is not possible. AFAIK the best you could do is a dynamic check that the `void*` conversion after the static_cast is unchanged.

Pull Request resolved: facebook#12308

Test Plan: existing tests, CI

Reviewed By: ltamasi

Differential Revision: D53204947

Pulled By: pdillinger

fbshipit-source-id: 9de23e618263b0d5b9820f4e15966876888a16e2
pdillinger authored and facebook-github-bot committed Feb 7, 2024
1 parent e3e8fbb commit 54cb9c7
Showing 108 changed files with 319 additions and 369 deletions.
3 changes: 2 additions & 1 deletion Makefile
@@ -539,7 +539,8 @@ endif
 
 ifdef USE_CLANG
 # Used by some teams in Facebook
-WARNING_FLAGS += -Wshift-sign-overflow -Wambiguous-reversed-operator -Wimplicit-fallthrough
+WARNING_FLAGS += -Wshift-sign-overflow -Wambiguous-reversed-operator \
+  -Wimplicit-fallthrough -Wreinterpret-base-class -Wundefined-reinterpret-cast
 endif
 
 ifeq ($(PLATFORM), OS_OPENBSD)
8 changes: 4 additions & 4 deletions cache/clock_cache.cc
@@ -560,7 +560,7 @@ void BaseClockTable::TrackAndReleaseEvictedEntry(ClockHandle* h) {
     took_value_ownership =
         eviction_callback_(ClockCacheShard<FixedHyperClockTable>::ReverseHash(
                                h->GetHash(), &unhashed, hash_seed_),
-                           reinterpret_cast<Cache::Handle*>(h),
+                           static_cast<Cache::Handle*>(h),
                            h->meta.LoadRelaxed() & ClockHandle::kHitBitMask);
   }
   if (!took_value_ownership) {
@@ -1428,19 +1428,19 @@ BaseHyperClockCache<Table>::BaseHyperClockCache(
 
 template <class Table>
 Cache::ObjectPtr BaseHyperClockCache<Table>::Value(Handle* handle) {
-  return reinterpret_cast<const typename Table::HandleImpl*>(handle)->value;
+  return static_cast<const typename Table::HandleImpl*>(handle)->value;
 }
 
 template <class Table>
 size_t BaseHyperClockCache<Table>::GetCharge(Handle* handle) const {
-  return reinterpret_cast<const typename Table::HandleImpl*>(handle)
+  return static_cast<const typename Table::HandleImpl*>(handle)
       ->GetTotalCharge();
 }
 
 template <class Table>
 const Cache::CacheItemHelper* BaseHyperClockCache<Table>::GetCacheItemHelper(
     Handle* handle) const {
-  auto h = reinterpret_cast<const typename Table::HandleImpl*>(handle);
+  auto h = static_cast<const typename Table::HandleImpl*>(handle);
   return h->helper;
 }
2 changes: 1 addition & 1 deletion cache/clock_cache.h
@@ -298,7 +298,7 @@ class ClockCacheTest;
 
 // ----------------------------------------------------------------------- //
 
-struct ClockHandleBasicData {
+struct ClockHandleBasicData : public Cache::Handle {
   Cache::ObjectPtr value = nullptr;
   const Cache::CacheItemHelper* helper = nullptr;
   // A lossless, reversible hash of the fixed-size (16 byte) cache key. This
10 changes: 5 additions & 5 deletions cache/compressed_secondary_cache_test.cc
@@ -1062,7 +1062,7 @@ bool CacheUsageWithinBounds(size_t val1, size_t val2, size_t error) {
 
 TEST_P(CompressedSecCacheTestWithTiered, CacheReservationManager) {
   CompressedSecondaryCache* sec_cache =
-      reinterpret_cast<CompressedSecondaryCache*>(GetSecondaryCache());
+      static_cast<CompressedSecondaryCache*>(GetSecondaryCache());
 
   // Use EXPECT_PRED3 instead of EXPECT_NEAR to void too many size_t to
   // double explicit casts
@@ -1085,7 +1085,7 @@ TEST_P(CompressedSecCacheTestWithTiered, CacheReservationManager) {
 TEST_P(CompressedSecCacheTestWithTiered,
        CacheReservationManagerMultipleUpdate) {
   CompressedSecondaryCache* sec_cache =
-      reinterpret_cast<CompressedSecondaryCache*>(GetSecondaryCache());
+      static_cast<CompressedSecondaryCache*>(GetSecondaryCache());
 
   EXPECT_PRED3(CacheUsageWithinBounds, GetCache()->GetUsage(), (30 << 20),
                GetPercent(30 << 20, 1));
@@ -1171,7 +1171,7 @@ TEST_P(CompressedSecCacheTestWithTiered, AdmissionPolicy) {
 
 TEST_P(CompressedSecCacheTestWithTiered, DynamicUpdate) {
   CompressedSecondaryCache* sec_cache =
-      reinterpret_cast<CompressedSecondaryCache*>(GetSecondaryCache());
+      static_cast<CompressedSecondaryCache*>(GetSecondaryCache());
   std::shared_ptr<Cache> tiered_cache = GetTieredCache();
 
   // Use EXPECT_PRED3 instead of EXPECT_NEAR to void too many size_t to
@@ -1235,7 +1235,7 @@ TEST_P(CompressedSecCacheTestWithTiered, DynamicUpdate) {
 
 TEST_P(CompressedSecCacheTestWithTiered, DynamicUpdateWithReservation) {
   CompressedSecondaryCache* sec_cache =
-      reinterpret_cast<CompressedSecondaryCache*>(GetSecondaryCache());
+      static_cast<CompressedSecondaryCache*>(GetSecondaryCache());
   std::shared_ptr<Cache> tiered_cache = GetTieredCache();
 
   ASSERT_OK(cache_res_mgr()->UpdateCacheReservation(10 << 20));
@@ -1329,7 +1329,7 @@ TEST_P(CompressedSecCacheTestWithTiered, DynamicUpdateWithReservation) {
 
 TEST_P(CompressedSecCacheTestWithTiered, ReservationOverCapacity) {
   CompressedSecondaryCache* sec_cache =
-      reinterpret_cast<CompressedSecondaryCache*>(GetSecondaryCache());
+      static_cast<CompressedSecondaryCache*>(GetSecondaryCache());
   std::shared_ptr<Cache> tiered_cache = GetTieredCache();
 
   ASSERT_OK(cache_res_mgr()->UpdateCacheReservation(110 << 20));
11 changes: 5 additions & 6 deletions cache/lru_cache.cc
@@ -339,8 +339,7 @@ void LRUCacheShard::NotifyEvicted(
   MemoryAllocator* alloc = table_.GetAllocator();
   for (LRUHandle* entry : evicted_handles) {
     if (eviction_callback_ &&
-        eviction_callback_(entry->key(),
-                           reinterpret_cast<Cache::Handle*>(entry),
+        eviction_callback_(entry->key(), static_cast<Cache::Handle*>(entry),
                            entry->HasHit())) {
       // Callback took ownership of obj; just free handle
       free(entry);
@@ -506,7 +505,7 @@ bool LRUCacheShard::Release(LRUHandle* e, bool /*useful*/,
   // Only call eviction callback if we're sure no one requested erasure
   // FIXME: disabled because of test churn
   if (false && was_in_cache && !erase_if_last_ref && eviction_callback_ &&
-      eviction_callback_(e->key(), reinterpret_cast<Cache::Handle*>(e),
+      eviction_callback_(e->key(), static_cast<Cache::Handle*>(e),
                          e->HasHit())) {
     // Callback took ownership of obj; just free handle
     free(e);
@@ -661,18 +660,18 @@ LRUCache::LRUCache(const LRUCacheOptions& opts) : ShardedCache(opts) {
 }
 
 Cache::ObjectPtr LRUCache::Value(Handle* handle) {
-  auto h = reinterpret_cast<const LRUHandle*>(handle);
+  auto h = static_cast<const LRUHandle*>(handle);
   return h->value;
 }
 
 size_t LRUCache::GetCharge(Handle* handle) const {
-  return reinterpret_cast<const LRUHandle*>(handle)->GetCharge(
+  return static_cast<const LRUHandle*>(handle)->GetCharge(
       GetShard(0).metadata_charge_policy_);
 }
 
 const Cache::CacheItemHelper* LRUCache::GetCacheItemHelper(
     Handle* handle) const {
-  auto h = reinterpret_cast<const LRUHandle*>(handle);
+  auto h = static_cast<const LRUHandle*>(handle);
   return h->helper;
 }
2 changes: 1 addition & 1 deletion cache/lru_cache.h
@@ -47,7 +47,7 @@ namespace lru_cache {
 // LRUCacheShard::Lookup.
 // While refs > 0, public properties like value and deleter must not change.
 
-struct LRUHandle {
+struct LRUHandle : public Cache::Handle {
   Cache::ObjectPtr value;
   const Cache::CacheItemHelper* helper;
   LRUHandle* next_hash;
5 changes: 2 additions & 3 deletions cache/lru_cache_test.cc
@@ -47,7 +47,7 @@ class LRUCacheTest : public testing::Test {
                 double low_pri_pool_ratio = 1.0,
                 bool use_adaptive_mutex = kDefaultToAdaptiveMutex) {
     DeleteCache();
-    cache_ = reinterpret_cast<LRUCacheShard*>(
+    cache_ = static_cast<LRUCacheShard*>(
         port::cacheline_aligned_alloc(sizeof(LRUCacheShard)));
     new (cache_) LRUCacheShard(capacity, /*strict_capacity_limit=*/false,
                                high_pri_pool_ratio, low_pri_pool_ratio,
@@ -392,8 +392,7 @@ class ClockCacheTest : public testing::Test {
   void NewShard(size_t capacity, bool strict_capacity_limit = true,
                 int eviction_effort_cap = 30) {
     DeleteShard();
-    shard_ =
-        reinterpret_cast<Shard*>(port::cacheline_aligned_alloc(sizeof(Shard)));
+    shard_ = static_cast<Shard*>(port::cacheline_aligned_alloc(sizeof(Shard)));
 
     TableOpts opts{1 /*value_size*/, eviction_effort_cap};
     new (shard_)
10 changes: 5 additions & 5 deletions cache/sharded_cache.h
@@ -139,7 +139,7 @@ class ShardedCache : public ShardedCacheBase {
 
   explicit ShardedCache(const ShardedCacheOptions& opts)
       : ShardedCacheBase(opts),
-        shards_(reinterpret_cast<CacheShard*>(port::cacheline_aligned_alloc(
+        shards_(static_cast<CacheShard*>(port::cacheline_aligned_alloc(
             sizeof(CacheShard) * GetNumShards()))),
         destroy_shards_in_dtor_(false) {}
 
@@ -192,7 +192,7 @@ class ShardedCache : public ShardedCacheBase {
     HashVal hash = CacheShard::ComputeHash(key, hash_seed_);
     HandleImpl* result = GetShard(hash).CreateStandalone(
         key, hash, obj, helper, charge, allow_uncharged);
-    return reinterpret_cast<Handle*>(result);
+    return static_cast<Handle*>(result);
   }
 
   Handle* Lookup(const Slice& key, const CacheItemHelper* helper = nullptr,
@@ -202,7 +202,7 @@ class ShardedCache : public ShardedCacheBase {
     HashVal hash = CacheShard::ComputeHash(key, hash_seed_);
     HandleImpl* result = GetShard(hash).Lookup(key, hash, helper,
                                                create_context, priority, stats);
-    return reinterpret_cast<Handle*>(result);
+    return static_cast<Handle*>(result);
   }
 
   void Erase(const Slice& key) override {
@@ -212,11 +212,11 @@ class ShardedCache : public ShardedCacheBase {
 
   bool Release(Handle* handle, bool useful,
                bool erase_if_last_ref = false) override {
-    auto h = reinterpret_cast<HandleImpl*>(handle);
+    auto h = static_cast<HandleImpl*>(handle);
     return GetShard(h->GetHash()).Release(h, useful, erase_if_last_ref);
   }
   bool Ref(Handle* handle) override {
-    auto h = reinterpret_cast<HandleImpl*>(handle);
+    auto h = static_cast<HandleImpl*>(handle);
     return GetShard(h->GetHash()).Ref(h);
   }
   bool Release(Handle* handle, bool erase_if_last_ref = false) override {
7 changes: 3 additions & 4 deletions cache/typed_cache.h
@@ -155,7 +155,7 @@ class BasicTypedCacheInterface : public BaseCacheInterface<CachePtr>,
   using BaseCacheInterface<CachePtr>::BaseCacheInterface;
   struct TypedAsyncLookupHandle : public Cache::AsyncLookupHandle {
     TypedHandle* Result() {
-      return reinterpret_cast<TypedHandle*>(Cache::AsyncLookupHandle::Result());
+      return static_cast<TypedHandle*>(Cache::AsyncLookupHandle::Result());
     }
   };
 
@@ -169,8 +169,7 @@ class BasicTypedCacheInterface : public BaseCacheInterface<CachePtr>,
   }
 
   inline TypedHandle* Lookup(const Slice& key, Statistics* stats = nullptr) {
-    return reinterpret_cast<TypedHandle*>(
-        this->cache_->BasicLookup(key, stats));
+    return static_cast<TypedHandle*>(this->cache_->BasicLookup(key, stats));
   }
 
   inline void StartAsyncLookup(TypedAsyncLookupHandle& async_handle) {
@@ -347,7 +346,7 @@ class FullTypedCacheInterface
       Priority priority = Priority::LOW, Statistics* stats = nullptr,
       CacheTier lowest_used_cache_tier = CacheTier::kNonVolatileBlockTier) {
     if (lowest_used_cache_tier > CacheTier::kVolatileTier) {
-      return reinterpret_cast<TypedHandle*>(this->cache_->Lookup(
+      return static_cast<TypedHandle*>(this->cache_->Lookup(
           key, GetFullHelper(), create_context, priority, stats));
     } else {
       return BasicTypedCacheInterface<TValue, kRole, CachePtr>::Lookup(key,
3 changes: 1 addition & 2 deletions db/blob/db_blob_index_test.cc
@@ -325,8 +325,7 @@ TEST_F(DBBlobIndexTest, Iterate) {
 
   auto check_is_blob = [&](bool is_blob) {
     return [is_blob](Iterator* iterator) {
-      ASSERT_EQ(is_blob,
-                reinterpret_cast<ArenaWrappedDBIter*>(iterator)->IsBlob());
+      ASSERT_EQ(is_blob, static_cast<ArenaWrappedDBIter*>(iterator)->IsBlob());
     };
   };
 
17 changes: 7 additions & 10 deletions db/compaction/compaction_job.cc
@@ -1279,10 +1279,9 @@ void CompactionJob::ProcessKeyValueCompaction(SubcompactionState* sub_compact) {
           : nullptr);
 
   TEST_SYNC_POINT("CompactionJob::Run():Inprogress");
-  TEST_SYNC_POINT_CALLBACK(
-      "CompactionJob::Run():PausingManualCompaction:1",
-      reinterpret_cast<void*>(
-          const_cast<std::atomic<bool>*>(&manual_compaction_canceled_)));
+  TEST_SYNC_POINT_CALLBACK("CompactionJob::Run():PausingManualCompaction:1",
+                           static_cast<void*>(const_cast<std::atomic<bool>*>(
+                               &manual_compaction_canceled_)));
 
   const std::string* const full_history_ts_low =
       full_history_ts_low_.empty() ? nullptr : &full_history_ts_low_;
@@ -1330,8 +1329,7 @@ void CompactionJob::ProcessKeyValueCompaction(SubcompactionState* sub_compact) {
   Status status;
   TEST_SYNC_POINT_CALLBACK(
       "CompactionJob::ProcessKeyValueCompaction()::Processing",
-      reinterpret_cast<void*>(
-          const_cast<Compaction*>(sub_compact->compaction)));
+      static_cast<void*>(const_cast<Compaction*>(sub_compact->compaction)));
   uint64_t last_cpu_micros = prev_cpu_micros;
   while (status.ok() && !cfd->IsDropped() && c_iter->Valid()) {
     // Invariant: c_iter.status() is guaranteed to be OK if c_iter->Valid()
@@ -1362,10 +1360,9 @@ void CompactionJob::ProcessKeyValueCompaction(SubcompactionState* sub_compact) {
       break;
     }
 
-    TEST_SYNC_POINT_CALLBACK(
-        "CompactionJob::Run():PausingManualCompaction:2",
-        reinterpret_cast<void*>(
-            const_cast<std::atomic<bool>*>(&manual_compaction_canceled_)));
+    TEST_SYNC_POINT_CALLBACK("CompactionJob::Run():PausingManualCompaction:2",
+                             static_cast<void*>(const_cast<std::atomic<bool>*>(
+                                 &manual_compaction_canceled_)));
     c_iter->Next();
     if (c_iter->status().IsManualCompactionPaused()) {
       break;
2 changes: 1 addition & 1 deletion db/compaction/tiered_compaction_test.cc
@@ -1338,7 +1338,7 @@ class PrecludeLastLevelTest : public DBTestBase {
     SyncPoint::GetInstance()->SetCallBack(
         "DBImpl::StartPeriodicTaskScheduler:Init", [&](void* arg) {
           auto periodic_task_scheduler_ptr =
-              reinterpret_cast<PeriodicTaskScheduler*>(arg);
+              static_cast<PeriodicTaskScheduler*>(arg);
           periodic_task_scheduler_ptr->TEST_OverrideTimer(mock_clock_.get());
         });
     mock_clock_->SetCurrentTime(kMockStartTime);
8 changes: 4 additions & 4 deletions db/corruption_test.cc
@@ -1102,7 +1102,7 @@ TEST_F(CorruptionTest, VerifyWholeTableChecksum) {
   int count{0};
   SyncPoint::GetInstance()->SetCallBack(
       "DBImpl::VerifyFullFileChecksum:mismatch", [&](void* arg) {
-        auto* s = reinterpret_cast<Status*>(arg);
+        auto* s = static_cast<Status*>(arg);
         ASSERT_NE(s, nullptr);
         ++count;
         ASSERT_NOK(*s);
@@ -1247,7 +1247,7 @@ TEST_P(CrashDuringRecoveryWithCorruptionTest, CrashDuringRecovery) {
   SyncPoint::GetInstance()->ClearAllCallBacks();
   SyncPoint::GetInstance()->SetCallBack(
       "DBImpl::GetLogSizeAndMaybeTruncate:0", [&](void* arg) {
-        auto* tmp_s = reinterpret_cast<Status*>(arg);
+        auto* tmp_s = static_cast<Status*>(arg);
         assert(tmp_s);
         *tmp_s = Status::IOError("Injected");
       });
@@ -1429,7 +1429,7 @@ TEST_P(CrashDuringRecoveryWithCorruptionTest, TxnDbCrashDuringRecovery) {
   SyncPoint::GetInstance()->ClearAllCallBacks();
   SyncPoint::GetInstance()->SetCallBack(
       "DBImpl::Open::BeforeSyncWAL", [&](void* arg) {
-        auto* tmp_s = reinterpret_cast<Status*>(arg);
+        auto* tmp_s = static_cast<Status*>(arg);
         assert(tmp_s);
         *tmp_s = Status::IOError("Injected");
       });
@@ -1597,7 +1597,7 @@ TEST_P(CrashDuringRecoveryWithCorruptionTest, CrashDuringRecoveryWithFlush) {
   SyncPoint::GetInstance()->ClearAllCallBacks();
   SyncPoint::GetInstance()->SetCallBack(
       "DBImpl::GetLogSizeAndMaybeTruncate:0", [&](void* arg) {
-        auto* tmp_s = reinterpret_cast<Status*>(arg);
+        auto* tmp_s = static_cast<Status*>(arg);
         assert(tmp_s);
         *tmp_s = Status::IOError("Injected");
       });
8 changes: 4 additions & 4 deletions db/cuckoo_table_db_test.cc
@@ -131,7 +131,7 @@ TEST_F(CuckooTableDBTest, Flush) {
   ASSERT_OK(dbfull()->TEST_FlushMemTable());
 
   TablePropertiesCollection ptc;
-  ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
+  ASSERT_OK(static_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
   VerifySstUniqueIds(ptc);
   ASSERT_EQ(1U, ptc.size());
   ASSERT_EQ(3U, ptc.begin()->second->num_entries);
@@ -148,7 +148,7 @@ TEST_F(CuckooTableDBTest, Flush) {
   ASSERT_OK(Put("key6", "v6"));
   ASSERT_OK(dbfull()->TEST_FlushMemTable());
 
-  ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
+  ASSERT_OK(static_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
   VerifySstUniqueIds(ptc);
   ASSERT_EQ(2U, ptc.size());
   auto row = ptc.begin();
@@ -166,7 +166,7 @@ TEST_F(CuckooTableDBTest, Flush) {
   ASSERT_OK(Delete("key5"));
   ASSERT_OK(Delete("key4"));
   ASSERT_OK(dbfull()->TEST_FlushMemTable());
-  ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
+  ASSERT_OK(static_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
   VerifySstUniqueIds(ptc);
   ASSERT_EQ(3U, ptc.size());
   row = ptc.begin();
@@ -191,7 +191,7 @@ TEST_F(CuckooTableDBTest, FlushWithDuplicateKeys) {
   ASSERT_OK(dbfull()->TEST_FlushMemTable());
 
   TablePropertiesCollection ptc;
-  ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
+  ASSERT_OK(static_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
   VerifySstUniqueIds(ptc);
   ASSERT_EQ(1U, ptc.size());
   ASSERT_EQ(2U, ptc.begin()->second->num_entries);
7 changes: 3 additions & 4 deletions db/db_basic_test.cc
@@ -3342,8 +3342,7 @@ TEST_F(DBBasicTest, BestEffortsRecoveryWithVersionBuildingFailure) {
   SyncPoint::GetInstance()->SetCallBack(
       "VersionBuilder::CheckConsistencyBeforeReturn", [&](void* arg) {
         ASSERT_NE(nullptr, arg);
-        *(reinterpret_cast<Status*>(arg)) =
-            Status::Corruption("Inject corruption");
+        *(static_cast<Status*>(arg)) = Status::Corruption("Inject corruption");
       });
   SyncPoint::GetInstance()->EnableProcessing();
 
@@ -4644,7 +4643,7 @@ TEST_F(DBBasicTest, ManifestWriteFailure) {
   SyncPoint::GetInstance()->SetCallBack(
       "VersionSet::ProcessManifestWrites:AfterSyncManifest", [&](void* arg) {
         ASSERT_NE(nullptr, arg);
-        auto* s = reinterpret_cast<Status*>(arg);
+        auto* s = static_cast<Status*>(arg);
         ASSERT_OK(*s);
         // Manually overwrite return status
         *s = Status::IOError();
@@ -4699,7 +4698,7 @@ TEST_F(DBBasicTest, FailOpenIfLoggerCreationFail) {
   SyncPoint::GetInstance()->ClearAllCallBacks();
   SyncPoint::GetInstance()->SetCallBack(
       "rocksdb::CreateLoggerFromOptions:AfterGetPath", [&](void* arg) {
-        auto* s = reinterpret_cast<Status*>(arg);
+        auto* s = static_cast<Status*>(arg);
         assert(s);
         *s = Status::IOError("Injected");
       });
2 changes: 1 addition & 1 deletion db/db_block_cache_test.cc
@@ -1424,7 +1424,7 @@ TEST_P(DBBlockCacheKeyTest, StableCacheKeys) {
   ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
       "BlockBasedTableBuilder::BlockBasedTableBuilder:PreSetupBaseCacheKey",
       [&](void* arg) {
-        TableProperties* props = reinterpret_cast<TableProperties*>(arg);
+        TableProperties* props = static_cast<TableProperties*>(arg);
         props->orig_file_number = 0;
       });
   ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();