Add more tests to ASSERT_STATUS_CHECKED (3), API change (facebook#7715)
Summary:
Third batch of tests added to the ASSERT_STATUS_CHECKED build:

* db_compaction_filter_test
* db_compaction_test
* db_dynamic_level_test
* db_inplace_update_test
* db_sst_test
* db_tailing_iter_test
* db_io_failure_test

Also update GetApproximateSizes APIs to all return Status.

Pull Request resolved: facebook#7715

Reviewed By: jay-zhuang

Differential Revision: D25806896

Pulled By: pdillinger

fbshipit-source-id: 6cb9d62ba5a756c645812754c596ad3995d7c262
adamretter authored and codingrhythm committed Mar 5, 2021
1 parent 5db84ac commit 342c0b4
Showing 25 changed files with 733 additions and 579 deletions.
4 changes: 4 additions & 0 deletions HISTORY.md
@@ -7,6 +7,10 @@
 ### Behavior Changes
 * Attempting to write a merge operand without explicitly configuring `merge_operator` now fails immediately, causing the DB to enter read-only mode. Previously, failure was deferred until the `merge_operator` was needed by a user read or a background operation.
 
+### API Changes
+* `rocksdb_approximate_sizes` and `rocksdb_approximate_sizes_cf` in the C API now require an error pointer (`char** errptr`) for receiving any error.
+* All overloads of `DB::GetApproximateSizes` now return `Status`, so that any failure to obtain the sizes is indicated to the caller.
+
 ### Bug Fixes
 * Truncated WALs ending in incomplete records can no longer produce gaps in the recovered data when `WALRecoveryMode::kPointInTimeRecovery` is used. Gaps are still possible when WALs are truncated exactly on record boundaries; for complete protection, users should enable `track_and_verify_wals_in_manifest`.
 * Fix a bug where compressed blocks read by MultiGet are not inserted into the compressed block cache when use_direct_reads = true.
6 changes: 6 additions & 0 deletions Makefile
@@ -612,7 +612,12 @@ ifdef ASSERT_STATUS_CHECKED
   db_blob_basic_test \
   db_blob_index_test \
   db_block_cache_test \
+  db_compaction_test \
+  db_compaction_filter_test \
+  db_dynamic_level_test \
   db_flush_test \
+  db_inplace_update_test \
+  db_io_failure_test \
   db_iterator_test \
   db_logical_block_size_cache_test \
   db_memtable_test \
@@ -629,6 +634,7 @@ ifdef ASSERT_STATUS_CHECKED
   deletefile_test \
   external_sst_file_test \
   options_file_test \
+  db_sst_test \
   db_statistics_test \
   db_table_properties_test \
   db_tailing_iter_test \
33 changes: 19 additions & 14 deletions db/c.cc
@@ -1388,34 +1388,39 @@ char* rocksdb_property_value_cf(
   }
 }
 
-void rocksdb_approximate_sizes(
-    rocksdb_t* db,
-    int num_ranges,
-    const char* const* range_start_key, const size_t* range_start_key_len,
-    const char* const* range_limit_key, const size_t* range_limit_key_len,
-    uint64_t* sizes) {
+void rocksdb_approximate_sizes(rocksdb_t* db, int num_ranges,
+                               const char* const* range_start_key,
+                               const size_t* range_start_key_len,
+                               const char* const* range_limit_key,
+                               const size_t* range_limit_key_len,
+                               uint64_t* sizes, char** errptr) {
   Range* ranges = new Range[num_ranges];
   for (int i = 0; i < num_ranges; i++) {
     ranges[i].start = Slice(range_start_key[i], range_start_key_len[i]);
     ranges[i].limit = Slice(range_limit_key[i], range_limit_key_len[i]);
   }
-  db->rep->GetApproximateSizes(ranges, num_ranges, sizes);
+  Status s = db->rep->GetApproximateSizes(ranges, num_ranges, sizes);
+  if (!s.ok()) {
+    SaveError(errptr, s);
+  }
   delete[] ranges;
 }
 
 void rocksdb_approximate_sizes_cf(
-    rocksdb_t* db,
-    rocksdb_column_family_handle_t* column_family,
-    int num_ranges,
-    const char* const* range_start_key, const size_t* range_start_key_len,
-    const char* const* range_limit_key, const size_t* range_limit_key_len,
-    uint64_t* sizes) {
+    rocksdb_t* db, rocksdb_column_family_handle_t* column_family,
+    int num_ranges, const char* const* range_start_key,
+    const size_t* range_start_key_len, const char* const* range_limit_key,
+    const size_t* range_limit_key_len, uint64_t* sizes, char** errptr) {
   Range* ranges = new Range[num_ranges];
   for (int i = 0; i < num_ranges; i++) {
     ranges[i].start = Slice(range_start_key[i], range_start_key_len[i]);
     ranges[i].limit = Slice(range_limit_key[i], range_limit_key_len[i]);
   }
-  db->rep->GetApproximateSizes(column_family->rep, ranges, num_ranges, sizes);
+  Status s = db->rep->GetApproximateSizes(column_family->rep, ranges,
+                                          num_ranges, sizes);
+  if (!s.ok()) {
+    SaveError(errptr, s);
+  }
   delete[] ranges;
 }

4 changes: 3 additions & 1 deletion db/c_test.c
@@ -988,7 +988,9 @@ int main(int argc, char** argv) {
                           &err);
     CheckNoError(err);
   }
-  rocksdb_approximate_sizes(db, 2, start, start_len, limit, limit_len, sizes);
+  rocksdb_approximate_sizes(db, 2, start, start_len, limit, limit_len, sizes,
+                            &err);
+  CheckNoError(err);
   CheckCondition(sizes[0] > 0);
   CheckCondition(sizes[1] > 0);
 }
9 changes: 4 additions & 5 deletions db/compaction/compaction_job_stats_test.cc
@@ -297,15 +297,14 @@ class CompactionJobStatsTest : public testing::Test,
     return result;
   }
 
-  uint64_t Size(const Slice& start, const Slice& limit, int cf = 0) {
+  Status Size(uint64_t* size, const Slice& start, const Slice& limit,
+              int cf = 0) {
     Range r(start, limit);
-    uint64_t size;
     if (cf == 0) {
-      db_->GetApproximateSizes(&r, 1, &size);
+      return db_->GetApproximateSizes(&r, 1, size);
     } else {
-      db_->GetApproximateSizes(handles_[1], &r, 1, &size);
+      return db_->GetApproximateSizes(handles_[1], &r, 1, size);
     }
-    return size;
   }
 
   void Compact(int cf, const Slice& start, const Slice& limit,
