New Cache API for gathering statistics #8225

Closed
wants to merge 14 commits

Conversation

@pdillinger (Contributor) commented Apr 23, 2021

Summary: Adds a new Cache::ApplyToAllEntries API that we expect to use
(in follow-up PRs) for efficiently gathering block cache statistics.
Notable features vs. old ApplyToAllCacheEntries (a usage sketch follows the list):

  • Includes key and deleter (in addition to value and charge). We could
    have passed in a Handle but then more virtual function calls would be
    needed to get the "fields" of each entry. We expect to use the 'deleter'
    to identify the origin of entries, perhaps even more.
  • Heavily tuned to minimize latency impact on operating cache. It
    does this by iterating over small sections of each cache shard while
    cycling through the shards.
  • Supports tuning roughly how many entries to operate on for each
    lock acquire and release, to control the impact on the latency of other
    operations without excessive lock acquire & release. The right balance
    can depend on the cost of the callback. Good default seems to be
    around 256.
  • There should be no need to disable thread safety. (I would expect
    uncontended locks to be sufficiently fast.)
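
For illustration, here is a minimal sketch of how a caller might use the new API to bucket entries by deleter, based on the signature quoted later in this review (the callback body is illustrative, not the follow-up PR's code):

#include <cstddef>
#include <map>

#include "rocksdb/cache.h"

// Tally entry count and total charge per deleter; the deleter identifies
// which code path inserted each entry (see the deleter discussion below).
void GatherEntryStatsSketch(rocksdb::Cache* cache) {
  std::map<rocksdb::Cache::DeleterFn, size_t> charge_by_deleter;
  size_t entry_count = 0;
  cache->ApplyToAllEntries(
      [&](const rocksdb::Slice& /*key*/, void* /*value*/, size_t charge,
          rocksdb::Cache::DeleterFn deleter) {
        ++entry_count;
        charge_by_deleter[deleter] += charge;
      },
      /*average_entries_per_lock=*/256);
}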

I have enhanced cache_bench to validate this approach:

  • Reports a histogram of ns per operation, so we can look at the
    distribution of times, not just throughput (average).
  • Can add a thread for simulated "gather stats" which calls
    ApplyToAllEntries at a specified interval. We also generate a histogram
    of time to run ApplyToAllEntries.

To make the iteration over some entries of each shard work as cleanly as
possible, even with resize between next set of entries, I have
re-arranged which hash bits are used for sharding and which for indexing
within a shard.
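
Roughly, the new arrangement looks like this; the table-index expression matches the diff discussed below, while the shard expression is my assumed low-bit counterpart:

#include <cstdint>

// Low bits of the hash pick the shard; high bits index within the shard's
// table, so growing a table by one bit just consumes one more high hash bit.
uint32_t ShardOfSketch(uint32_t hash, int num_shard_bits) {
  return hash & ((uint32_t{1} << num_shard_bits) - 1);  // low bits
}
uint32_t TableIndexOfSketch(uint32_t hash, int length_bits) {
  return hash >> (32 - length_bits);  // high bits (assumes length_bits >= 1)
}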

Test Plan: A couple of unit tests are added, but the primary validation is manual, as
the main risk is to performance.

The primary validation is using cache_bench to ensure that neither
the minor hashing changes nor the simulated stats gathering
significantly impacts QPS or latency distribution. Note that adding the op
latency histogram itself significantly reduces the benchmark QPS, so for a
fair baseline we need the cache_bench changes (minus the simulated stats
gathering, removed so it compiles against the baseline code). In short, we don't see any
reproducible difference in ops/sec or op latency unless we are gathering
stats nearly continuously. Test uses 10GB block cache with
8KB values to be somewhat realistic in the number of items to iterate
over.

Baseline typical output:

Complete in 92.017 s; Rough parallel ops/sec = 869401
Thread ops/sec = 54662

Operation latency (ns):
Count: 80000000 Average: 11223.9494  StdDev: 29.61
Min: 0  Median: 7759.3973  Max: 9620500
Percentiles: P50: 7759.40 P75: 14190.73 P99: 46922.75 P99.9: 77509.84 P99.99: 217030.58
------------------------------------------------------
[       0,       1 ]       68   0.000%   0.000% 
(    2900,    4400 ]       89   0.000%   0.000% 
(    4400,    6600 ] 33630240  42.038%  42.038% ########
(    6600,    9900 ] 18129842  22.662%  64.700% #####
(    9900,   14000 ]  7877533   9.847%  74.547% ##
(   14000,   22000 ] 15193238  18.992%  93.539% ####
(   22000,   33000 ]  3037061   3.796%  97.335% #
(   33000,   50000 ]  1626316   2.033%  99.368% 
(   50000,   75000 ]   421532   0.527%  99.895% 
(   75000,  110000 ]    56910   0.071%  99.966% 
(  110000,  170000 ]    16134   0.020%  99.986% 
(  170000,  250000 ]     5166   0.006%  99.993% 
(  250000,  380000 ]     3017   0.004%  99.996% 
(  380000,  570000 ]     1337   0.002%  99.998% 
(  570000,  860000 ]      805   0.001%  99.999% 
(  860000, 1200000 ]      319   0.000% 100.000% 
( 1200000, 1900000 ]      231   0.000% 100.000% 
( 1900000, 2900000 ]      100   0.000% 100.000% 
( 2900000, 4300000 ]       39   0.000% 100.000% 
( 4300000, 6500000 ]       16   0.000% 100.000% 
( 6500000, 9800000 ]        7   0.000% 100.000% 

New, gather_stats=false. Median thread ops/sec of 5 runs:

Complete in 92.030 s; Rough parallel ops/sec = 869285
Thread ops/sec = 54458

Operation latency (ns):
Count: 80000000 Average: 11298.1027  StdDev: 42.18
Min: 0  Median: 7722.0822  Max: 6398720
Percentiles: P50: 7722.08 P75: 14294.68 P99: 47522.95 P99.9: 85292.16 P99.99: 228077.78
------------------------------------------------------
[       0,       1 ]      109   0.000%   0.000% 
(    2900,    4400 ]      793   0.001%   0.001% 
(    4400,    6600 ] 34054563  42.568%  42.569% #########
(    6600,    9900 ] 17482646  21.853%  64.423% ####
(    9900,   14000 ]  7908180   9.885%  74.308% ##
(   14000,   22000 ] 15032072  18.790%  93.098% ####
(   22000,   33000 ]  3237834   4.047%  97.145% #
(   33000,   50000 ]  1736882   2.171%  99.316% 
(   50000,   75000 ]   446851   0.559%  99.875% 
(   75000,  110000 ]    68251   0.085%  99.960% 
(  110000,  170000 ]    18592   0.023%  99.983% 
(  170000,  250000 ]     7200   0.009%  99.992% 
(  250000,  380000 ]     3334   0.004%  99.997% 
(  380000,  570000 ]     1393   0.002%  99.998% 
(  570000,  860000 ]      700   0.001%  99.999% 
(  860000, 1200000 ]      293   0.000% 100.000% 
( 1200000, 1900000 ]      196   0.000% 100.000% 
( 1900000, 2900000 ]       69   0.000% 100.000% 
( 2900000, 4300000 ]       32   0.000% 100.000% 
( 4300000, 6500000 ]       10   0.000% 100.000% 

New, gather_stats=true, 1 second delay between scans. Scans take about
1 second here, so it's spending about 50% of its time scanning. Still, the effect on
ops/sec and latency seems to be in the noise. Median thread ops/sec of 5 runs:

Complete in 91.890 s; Rough parallel ops/sec = 870608
Thread ops/sec = 54551

Operation latency (ns):
Count: 80000000 Average: 11311.2629  StdDev: 45.28
Min: 0  Median: 7686.5458  Max: 10018340
Percentiles: P50: 7686.55 P75: 14481.95 P99: 47232.60 P99.9: 79230.18 P99.99: 232998.86
------------------------------------------------------
[       0,       1 ]       71   0.000%   0.000% 
(    2900,    4400 ]      291   0.000%   0.000% 
(    4400,    6600 ] 34492060  43.115%  43.116% #########
(    6600,    9900 ] 16727328  20.909%  64.025% ####
(    9900,   14000 ]  7845828   9.807%  73.832% ##
(   14000,   22000 ] 15510654  19.388%  93.220% ####
(   22000,   33000 ]  3216533   4.021%  97.241% #
(   33000,   50000 ]  1680859   2.101%  99.342% 
(   50000,   75000 ]   439059   0.549%  99.891% 
(   75000,  110000 ]    60540   0.076%  99.967% 
(  110000,  170000 ]    14649   0.018%  99.985% 
(  170000,  250000 ]     5242   0.007%  99.991% 
(  250000,  380000 ]     3260   0.004%  99.995% 
(  380000,  570000 ]     1599   0.002%  99.997% 
(  570000,  860000 ]     1043   0.001%  99.999% 
(  860000, 1200000 ]      471   0.001%  99.999% 
( 1200000, 1900000 ]      275   0.000% 100.000% 
( 1900000, 2900000 ]      143   0.000% 100.000% 
( 2900000, 4300000 ]       60   0.000% 100.000% 
( 4300000, 6500000 ]       27   0.000% 100.000% 
( 6500000, 9800000 ]        7   0.000% 100.000% 
( 9800000, 14000000 ]        1   0.000% 100.000% 

Gather stats latency (us):
Count: 46 Average: 980387.5870  StdDev: 60911.18
Min: 879155  Median: 1033777.7778  Max: 1261431
Percentiles: P50: 1033777.78 P75: 1120666.67 P99: 1261431.00 P99.9: 1261431.00 P99.99: 1261431.00
------------------------------------------------------
(  860000, 1200000 ]       45  97.826%  97.826% ####################
( 1200000, 1900000 ]        1   2.174% 100.000% 

Most recent cache entry stats:
Number of entries: 1295133
Total charge: 9.88 GB
Average key size: 23.4982
Average charge: 8.00 KB
Unique deleters: 3

Summary: Adds a new Cache::ApplyToAllEntries API that we expect to use
(in follow-up PRs) for efficiently gathering block cache statistics.
Notable features vs. old ApplyToAllCacheEntries:

* Includes key and deleter (in addition to value and charge). We could
have passed in a Handle but then more virtual function calls would be
needed to get the "fields" of each entry. We expect to use the 'deleter'
to identify the origin of entries, perhaps even more.
* Supports (at least for LRUCache) tuning roughly how many entries to
operate on for each lock acquire and release, to control the impact on
the latency of other operations without excessive lock acquire &
release. The right balance can depend on the cost of the callback.
* There should be no need to disable thread safety. (I would expect
uncontended locks to be sufficiently fast.)

I have enhanced cache_bench to validate this approach:

* Reports a histogram of ns per operation, so we can look at the
distribution of times, not just throughput (average).
* Can include a thread for simulated "gather stats" which calls
ApplyToAllEntries at a specified interval. We also generate a histogram
of time to run ApplyToAllEntries.

To make the iteration over some entries of each shard work as cleanly as
possible, even with resize between next set of entries, I have
re-arranged which hash bits are used for sharding and which for indexing
within a shard.

TODO: See if ClockCache still behaves. In cache_bench it segfaults for me.

Test Plan:

The primary validation is using cache_bench to ensure the simulated
stats gathering does not significantly impact QPS or latency
distribution.

TODO: include results here

TODO: some unit tests
Comment on lines +176 to +182
void deleter1(const Slice& /*key*/, void* value) {
delete[] static_cast<char*>(value);
}
void deleter2(const Slice& /*key*/, void* value) {
delete[] static_cast<char*>(value);
}
void deleter3(const Slice& /*key*/, void* value) {
Contributor:

Why the need for the different deleters?

Contributor Author:

This is my planned backdoor way of identifying the origin and kind of cache entries. Unlike adding a tag to keys or values, there's no space/time impact on the critical performance path. It also gives us the option to turn on "type checking" of cache entries if we want extra verification against memory corruption, which would have a non-trivial impact on block cache lookups. But that's optional, as I said.
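
A sketch of the idea (deleter names and the helper are illustrative, not this PR's actual code): identical deleter bodies but distinct function pointers let a stats pass attribute each entry to the code path that inserted it.

#include "rocksdb/slice.h"

using rocksdb::Slice;
using DeleterFn = void (*)(const Slice& key, void* value);

void DeleteDataBlockSketch(const Slice& /*key*/, void* value) {
  delete[] static_cast<char*>(value);
}
void DeleteFilterBlockSketch(const Slice& /*key*/, void* value) {
  delete[] static_cast<char*>(value);
}

// Zero per-entry cost: the "tag" is the deleter pointer the entry already has.
const char* KindOfEntrySketch(DeleterFn deleter) {
  if (deleter == &DeleteDataBlockSketch) return "data block";
  if (deleter == &DeleteFilterBlockSketch) return "filter block";
  return "unknown";  // entry inserted by unrelated code
}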


PrintEnv();
SharedState shared(this);
std::vector<std::unique_ptr<ThreadState> > threads(FLAGS_threads);
for (uint32_t i = 0; i < FLAGS_threads; i++) {
threads[i].reset(new ThreadState(i, &shared));
- env->StartThread(ThreadBody, threads[i].get());
+ std::thread(ThreadBody, threads[i].get()).detach();
Contributor:

Why not the env threads?

Contributor Author:

Env threads don't take a template parameter pack, so they're ugly to use here.
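
For context on that answer: Env::StartThread takes roughly a C-style void(*)(void*) plus a single void* argument, so each call site needs a trampoline and manual argument packing, while std::thread forwards typed arguments directly. An illustrative sketch:

#include <thread>

struct ThreadStateSketch {
  int id = 0;
};

void WorkSketch(ThreadStateSketch* state, int ops_per_thread) {
  // ... run the benchmark loop ...
  (void)state;
  (void)ops_per_thread;
}

void LaunchSketch(ThreadStateSketch* state) {
  // Typed arguments are forwarded as-is; no void* packing or trampoline.
  std::thread(WorkSketch, state, /*ops_per_thread=*/5000000).detach();
}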

Contributor:

As an aside, can you please file an issue as to what the Env threads should do to meet the needs here? It would be good to fix them before this sort of thing propagates too far if it is easy to do.

@@ -195,7 +214,7 @@ class CacheBench {
cache_ = NewLRUCache(FLAGS_cache_size, FLAGS_num_shard_bits);
}
if (FLAGS_ops_per_thread == 0) {
FLAGS_ops_per_thread = 5 * max_key_;
FLAGS_ops_per_thread = 5000000U;
Contributor:

Why not change this where the GFLAG is defined (and the comment as well)?

for (uint64_t i = 0; i < FLAGS_ops_per_thread; i++) {
Slice key = gen.GetRand(thread->rnd, max_key_);
uint64_t random_op = thread->rnd.Next();
timer.Start();
Contributor:

I am not sure I understand why you want a timer.Start here rather than resetting the timer when you get the elapsed time.

Contributor Author:

I'm trying to measure the latency of Cache operations, so the less "other stuff" that is included, the more accurate that is. I guess I would need to exclude timing the "do something with the data" sections to be more accurate.
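
In other words, the intended pattern is roughly the following (illustrative, not the benchmark's exact code): start the timer immediately before the cache call and record immediately after it, so setup and result handling stay out of the measured latency.

#include <chrono>
#include <cstdint>

// Time only the cache operation passed in; key generation before the call and
// "do something with the data" after it are excluded from the histogram.
template <typename CacheOp>
uint64_t TimeOneOpNanos(CacheOp&& op) {
  auto start = std::chrono::steady_clock::now();  // timer.Start()
  op();                                           // e.g. cache->Lookup(key)
  auto end = std::chrono::steady_clock::now();
  return static_cast<uint64_t>(
      std::chrono::duration_cast<std::chrono::nanoseconds>(end - start)
          .count());
}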

LRUHandle* h = list_[i];
while (h != nullptr) {
LRUHandle* next = h->next_hash;
uint32_t hash = h->hash;
- LRUHandle** ptr = &new_list[hash & (new_length - 1)];
+ LRUHandle** ptr = &new_list[hash >> (32 - new_length_bits)];
Contributor:

Can you comment on this? Is the >> meant to be more efficient than the old way or something else?

It seems like you will be using different bits from the hash at least for the mapping. Will this break anything with persistent hashes or the like?

Contributor Author:

From the description:

To make the iteration over some entries of each shard work as cleanly as
possible, even with resize between next set of entries, I have
re-arranged which hash bits are used for sharding and which for indexing
within a shard.
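
Concretely (my elaboration, not text from the PR): with high-bit indexing, doubling the table splits old bucket i into new buckets 2i and 2i+1, so an iteration cursor expressed as a range of high hash bits still covers exactly the same entries after a resize. A small self-contained check:

#include <cassert>
#include <cstdint>

int main() {
  const int old_bits = 4, new_bits = 5;  // table doubled once
  for (uint64_t h = 0; h <= 0xffffffffULL; h += 99991) {
    uint32_t hash = static_cast<uint32_t>(h);
    uint32_t old_index = hash >> (32 - old_bits);
    uint32_t new_index = hash >> (32 - new_bits);
    // Every entry from old bucket i lands in new bucket 2*i or 2*i + 1.
    assert(new_index >> 1 == old_index);
  }
  return 0;
}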

@@ -181,10 +182,15 @@ class LRUHandleTable {

void Resize();

// Number of hash bits (upper because lower bits used for sharding)
// used for table index. Length == 1 << length_bits_
int length_bits_;
Contributor:

uint32_t?

Contributor Author:

For a shift amount there's no signed/unsigned pollution (the result of 1U << some_int is unsigned), and the values are small, so under Google C++ style the default choice is int.
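
That is (illustrative): the result's signedness comes from the promoted left operand, not from the shift count.

#include <cstddef>

const int length_bits_example = 4;                        // Google-style int
size_t table_length = size_t{1} << length_bits_example;   // result is size_t, == 16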

uint32_t length_bits = table_.GetLengthBits();
uint32_t length = uint32_t{1} << length_bits;

assert(average_entries_per_lock > 0);
Contributor:

Why must average_entries_per_lock > 0? Would a value of 0 allow "unlocked" access like there was before?

Contributor Author:

Because it's more efficient to sanitize the option only once, in the caller. This part of the implementation would make no progress with average_entries_per_lock=0.
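
That is, the sharded layer is assumed to clamp the option once before fanning out to the shards, so the per-shard loop can rely on a positive value (a sketch, not the actual ShardedCache code):

#include <algorithm>
#include <cassert>
#include <cstddef>

// Done once at the Cache level; each shard then just asserts instead of
// re-sanitizing inside its iteration loop.
size_t SanitizeAverageEntriesPerLockSketch(size_t requested) {
  return std::max<size_t>(size_t{1}, requested);
}

void ShardApplySketch(size_t average_entries_per_lock) {
  assert(average_entries_per_lock > 0);
  // ... iterate over roughly this many entries per lock acquire/release ...
}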

Comment on lines 293 to 296
virtual void ApplyToAllEntries(
const std::function<void(const Slice& key, void* value, size_t charge,
DeleterFn deleter)>& callback,
size_t average_entries_per_lock = 256) = 0;
Contributor:

Is this an incompatible change and is there any way around it? As it stands now, any existing Cache implementation will not work/compile.

Are we okay with forcing code updates?

Contributor Author:

I think custom Cache implementations are rare enough, and the likely benefit high enough, to impose this. I tried other approaches to gathering the statistics but they hit critical areas of performance. Benchmarking of this approach suggests it will be almost free even if we gathered the stats at intervals of seconds rather than minutes or hours.

Rules (e.g. "no breaking API changes") are negotiable, especially at Facebook. ;)

Contributor:

Nit: The google coding standard does not allow default arguments on virtual functions. Can we split this into two methods: one (not virtual) with no "average_entries_per_lock", and a virtual one without the default?
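
A sketch of that split (illustrative, not the code that eventually landed):

#include <cstddef>
#include <functional>

class Slice {};  // stand-in for rocksdb::Slice in this sketch

class CacheSketch {
 public:
  using DeleterFn = void (*)(const Slice& key, void* value);
  using EntryCallback =
      std::function<void(const Slice& key, void* value, size_t charge,
                         DeleterFn deleter)>;

  // Non-virtual convenience overload carries the default value...
  void ApplyToAllEntries(const EntryCallback& callback) {
    ApplyToAllEntries(callback, /*average_entries_per_lock=*/256);
  }

  // ...and the virtual hook that implementations override has no default.
  virtual void ApplyToAllEntries(const EntryCallback& callback,
                                 size_t average_entries_per_lock) = 0;

  virtual ~CacheSketch() = default;
};

One wrinkle with same-name overloads: a derived class that overrides the virtual hides the non-virtual convenience overload unless it adds a using-declaration, which is worth noting in the header.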

pdillinger added a commit to pdillinger/rocksdb that referenced this pull request May 4, 2021
Summary: In testing for facebook#8225 I found cache_bench would crash with
-use_clock_cache, as well as db_bench -use_clock_cache, but not
single-threaded. Smaller cache size hits failure much faster. ASAN
reported the failure as calling malloc_usable_size on the `key` pointer
of a ClockCache handle after it was reportedly freed. On detailed
inspection I found this bad sequence of operations for a cache entry:

state=InCache=1,refs=1
[thread 1] Start ClockCacheShard::Unref (from Release, no mutex)
[thread 1] Decrement ref count
state=InCache=1,refs=0
[thread 1] Suspend before CalcTotalCharge (no mutex)

[thread 2] Start UnsetInCache (from Insert, mutex held)
[thread 2] clear InCache bit
state=InCache=0,refs=0
[thread 2] Calls RecycleHandle (based on pre-updated state)
[thread 2] Returns to Insert which calls Cleanup which deletes `key`

[thread 1] Resume ClockCacheShard::Unref
[thread 1] Read `key` in CalcTotalCharge

To fix this, I've added a field to the handle to store the metadata
charge so that we can efficiently remember everything we need from
the handle in Unref. We must not read from the handle again if we
decrement the count to zero with InCache=1, which means we don't own
the entry and someone else could eject/overwrite it immediately.

Note before this change, on amd64 sizeof(Handle) == 56 even though there
are only 48 bytes of data. Grouping together the uint32_t fields would
cut it down to 48, but I've added another uint32_t, which takes it
back up to 56. Not a big deal.

Test Plan: Manual + adding use_clock_cache to db_crashtest.py

Base performance
./cache_bench -use_clock_cache
Complete in 17.060 s; QPS = 2458513
New performance
./cache_bench -use_clock_cache
Complete in 17.052 s; QPS = 2459695

Any difference is easily buried in small noise.
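
A self-contained sketch of the pattern described in that fix (names and bit layout are illustrative, not ClockCache's actual code): capture everything needed from the handle before the decrement, because once the reference count hits zero with InCache still set, another thread may recycle the handle.

#include <atomic>
#include <cstddef>
#include <cstdint>

// Refs and the InCache bit share one atomic word, so the fetch_sub result
// tells us both facts without touching the handle again afterwards.
struct HandleSketch {
  static constexpr uint32_t kInCacheBit = uint32_t{1} << 31;
  std::atomic<uint32_t> flags{kInCacheBit | 1};  // in cache, one reference
  size_t total_charge = 0;  // stored up front so Unref need not recompute it
};

// Returns the charge the caller should subtract from shard usage.
size_t UnrefSketch(HandleSketch* h) {
  size_t charge = h->total_charge;             // capture BEFORE the decrement
  uint32_t after = h->flags.fetch_sub(1) - 1;  // state just after our decrement
  if ((after & ~HandleSketch::kInCacheBit) == 0 &&
      (after & HandleSketch::kInCacheBit) != 0) {
    // Ref count reached zero while the entry is still in the cache: another
    // thread may recycle the handle now, so do not read *h again here.
  }
  return charge;
}
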
facebook-github-bot pushed a commit that referenced this pull request May 5, 2021
Summary:
In testing for #8225 I found cache_bench would crash with
-use_clock_cache, as well as db_bench -use_clock_cache, but not
single-threaded. Smaller cache size hits failure much faster. ASAN
reported the failure as calling malloc_usable_size on the `key` pointer
of a ClockCache handle after it was reportedly freed. On detailed
inspection I found this bad sequence of operations for a cache entry:

state=InCache=1,refs=1
[thread 1] Start ClockCacheShard::Unref (from Release, no mutex)
[thread 1] Decrement ref count
state=InCache=1,refs=0
[thread 1] Suspend before CalcTotalCharge (no mutex)

[thread 2] Start UnsetInCache (from Insert, mutex held)
[thread 2] clear InCache bit
state=InCache=0,refs=0
[thread 2] Calls RecycleHandle (based on pre-updated state)
[thread 2] Returns to Insert which calls Cleanup which deletes `key`

[thread 1] Resume ClockCacheShard::Unref
[thread 1] Read `key` in CalcTotalCharge

To fix this, I've added a field to the handle to store the metadata
charge so that we can efficiently remember everything we need from
the handle in Unref. We must not read from the handle again if we
decrement the count to zero with InCache=1, which means we don't own
the entry and someone else could eject/overwrite it immediately.

Note before this change, on amd64 sizeof(Handle) == 56 even though there
are only 48 bytes of data. Grouping together the uint32_t fields would
cut it down to 48, but I've added another uint32_t, which takes it
back up to 56. Not a big deal.

Also fixed DisownData to cooperate with ASAN as in LRUCache.

Pull Request resolved: #8261

Test Plan:
Manual + adding use_clock_cache to db_crashtest.py

Base performance
./cache_bench -use_clock_cache
Complete in 17.060 s; QPS = 2458513
New performance
./cache_bench -use_clock_cache
Complete in 17.052 s; QPS = 2459695

Any difference is easily buried in small noise.

Crash test shows still more bug(s) in ClockCache, so I'm expecting to
disable ClockCache from production code in a follow-up PR (if we
can't find and fix the bug(s))

Reviewed By: mrambacher

Differential Revision: D28207358

Pulled By: pdillinger

fbshipit-source-id: aa7a9322afc6f18f30e462c75dbbe4a1206eb294
pdillinger added a commit to pdillinger/rocksdb that referenced this pull request May 6, 2021
Summary: Along with facebook#8225, this is working toward gathering statistics
about the kinds of items in block cache by iterating over the cache
during periodic stats dump. We need some way to flag what kind of
blocks each entry belongs to, preferably without changing the Cache API.
One of the complications is that Cache is a general interface that could
have other users that don't adhere to whichever convention we decide
on for keys and values. Or we would pay for an extra field in the Handle
that would only be used for this purpose.

This change uses a back-door approach, the deleter, to mark the kind
of Cache entries. This has the added benefit of ensuring that each entry
we check the kind of has the proper code origin; if the entry came from
some other part of the code, it will use an unrecognized deleter, which
we can simply attribute to an "unknown" bucket.

In detail, this change pulls the awkward BlocklikeTraits into
CachableEntry, along with an API for translating between a deleter and
`<CacheEntryType, BlockType>`. This should soon be used for collecting
stats.

A side benefit/feature in this change is that the information in the
deleter can be used for "type checking" block cache accesses, through
a new option extra_block_cache_checks. Any logical error that could lead
to a block cache key collision can break type safety, such as with a
bad cast from Block to BlockContents. This option detects and reports
such errors before the bad cast. This is now always enabled for
db_stress, and for one CircleCI unit test configuration, to ensure we
stay clean, but otherwise disabled by default because of modest
performance penalty.

Test Plan: TODO: small unit tests and performance tests
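
A sketch of the kind of check that commit describes (names are illustrative; the real change routes this through BlocklikeTraits/CachableEntry):

#include <cassert>

class Slice;  // stand-in declaration for rocksdb::Slice
using DeleterFn = void (*)(const Slice& key, void* value);

// Before casting a cache value to a concrete block type, verify the entry's
// deleter is the one registered for that type; a mismatch means a key
// collision or logic error, caught before the bad cast instead of after it.
template <typename TBlocklike>
TBlocklike* CheckedCastSketch(void* value, DeleterFn actual, DeleterFn expected,
                              bool extra_block_cache_checks) {
  if (extra_block_cache_checks && actual != expected) {
    assert(false);  // report the type mismatch
    return nullptr;
  }
  return static_cast<TBlocklike*>(value);
}
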
@mrambacher (Contributor) left a comment:

Please update HISTORY.md with the changes to the public API.

cache/cache_bench.cc (resolved review comment)
Comment on lines +332 to +343
if (shared->AllDone()) {
std::ostringstream ostr;
ostr << "Most recent cache entry stats:\n"
<< "Number of entries: " << total_entry_count << "\n"
<< "Total charge: " << BytesToHumanString(total_charge) << "\n"
<< "Average key size: "
<< (1.0 * total_key_size / total_entry_count) << "\n"
<< "Average charge: "
<< BytesToHumanString(1.0 * total_charge / total_entry_count)
<< "\n"
<< "Unique deleters: " << deleters.size() << "\n";
*stats_report = ostr.str();
Contributor:

I am a little confused by this code, but it might just be me and could potentially be clarified by comments.

This block prints out the statistics gathered in the "outside the mutex block" down below. But it seems as if the statistics printed could potentially never be written (if this AllDone is triggered on the first pass) or would always be slightly out of date (as they would be writing the stats from the previous gathering).

If my understanding is wrong, can you please add some comments to the code to clarify why this is the proper way to gather the stats?

cache/clock_cache.cc (outdated, resolved review comment)
Resize();
}
LRUHandleTable::LRUHandleTable()
: length_bits_(4),
Contributor:

Why 4? Maybe make it a constant with a comment?

}
LRUHandleTable::LRUHandleTable()
: length_bits_(4),
list_(new LRUHandle* [size_t{1} << length_bits_] {}),
Contributor:

Does this guarantee that the list_ elements are initialized to nullptr?
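
For reference (not from the thread): yes, the trailing {} value-initializes the array, so every element starts out as nullptr. A minimal check with an illustrative stand-in type:

#include <cassert>
#include <cstddef>

struct LRUHandleSketch {};

int main() {
  const int length_bits = 4;
  LRUHandleSketch** list = new LRUHandleSketch* [size_t{1} << length_bits] {};
  for (size_t i = 0; i < (size_t{1} << length_bits); ++i) {
    assert(list[i] == nullptr);  // guaranteed by the {} initializer
  }
  delete[] list;
  return 0;
}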


uint32_t count = 0;
- for (uint32_t i = 0; i < length_; i++) {
+ for (uint32_t i = 0; (i >> old_length_bits) == 0; i++) {
Contributor:

Is this more efficient than having a temporary variable "old_length = 1 << old_length_bits;" and use old_length as your loop termination condition?

Contributor Author:

I was thinking about the old_length_bits==32 case, but apparently not well enough. (Shifting by full bit width is undefined.)
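
To spell out the pitfall (my illustration, not from the thread): both a precomputed 1 << old_length_bits bound and the (i >> old_length_bits) == 0 test are undefined when old_length_bits equals 32, since the shift count must be less than the width of the (promoted) left operand; widening to 64 bits first sidesteps it.

#include <cstdint>

// Well-defined for 0 <= bits <= 32, which covers the full-width case that
// neither 32-bit form above can handle.
uint64_t TableLengthSketch(int bits) { return uint64_t{1} << bits; }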

cache/lru_cache.h (outdated, resolved review comment)
@facebook-github-bot

@pdillinger has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot

@pdillinger has updated the pull request. You must reimport the pull request before landing.

@facebook-github-bot

@pdillinger has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot

@pdillinger has updated the pull request. You must reimport the pull request before landing.

@facebook-github-bot

@pdillinger has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@mrambacher (Contributor) left a comment:

Be nice to fix the virtual functions with default args, but it seems like an existing problem with this class already...


@facebook-github-bot

@pdillinger has updated the pull request. You must reimport the pull request before landing.

@facebook-github-bot

@pdillinger has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot

@pdillinger merged this pull request in 78a309b.
