Experimental support for SST unique IDs (facebook#8990)
Summary:
* New public header unique_id.h and function GetUniqueIdFromTableProperties
which computes a universally unique identifier based on table properties
of table files from recent RocksDB versions.
* Generation of DB session IDs is refactored so that they are
guaranteed unique in the lifetime of a process running RocksDB.
(SemiStructuredUniqueIdGen, new test included.) Along with file numbers,
this enables SST unique IDs to be guaranteed unique among SSTs generated
in a single process, and "better than random" between processes.
See https://github.com/pdillinger/unique_id
* In addition to the public API producing 'external' unique IDs, there is a function
for producing 'internal' unique IDs, along with functions for converting between the
two. In short, the external ID is "safe" for things people might do with it, and
the internal ID enables more "power user" features for the future. Specifically,
the external ID goes through a hashing layer so that any subset of bits in the
external ID can be used as a hash of the full ID, while also preserving
uniqueness guarantees in the first 128 bits (bijective both on first 128 bits
and on full 192 bits).

Intended follow-up:
* Use the internal unique IDs in cache keys. (Avoid conflicts with facebook#8912) (The file offset can be XORed into
the third 64-bit value of the unique ID.)
* Publish the external unique IDs in FileStorageInfo (facebook#8968)
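
The offset-XOR idea in the first follow-up bullet can be sketched as follows. This is an illustrative, hypothetical helper, not RocksDB code; `InternalId` and `BlockCacheKey` are made-up names standing in for the 192-bit internal unique ID and the eventual cache-key derivation:

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Hypothetical sketch: treat the 192-bit internal unique ID as three
// 64-bit words and XOR a block's file offset into the third word to
// form a per-block cache key. For a fixed file ID, XOR with the offset
// is bijective, so distinct offsets within one file yield distinct
// keys, and distinctness across files is inherited from the ID itself.
using InternalId = std::array<uint64_t, 3>;

InternalId BlockCacheKey(const InternalId& file_id, uint64_t block_offset) {
  InternalId key = file_id;
  key[2] ^= block_offset;
  return key;
}
```

Because XOR is its own inverse, applying the same offset again recovers the original third word, so no information from the unique ID is lost.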

Pull Request resolved: facebook#8990

Test Plan:
Unit tests added, and checking of unique ids in stress test.
NOTE: in the stress test we do not generate nearly enough files to thoroughly
stress uniqueness, but the test trims off pieces of the ID to check for
uniqueness so that we can infer (with some assumptions) stronger
properties in the aggregate.
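
The trimming idea can be sketched as follows. This is a stand-alone illustration of the technique, not the actual db_stress code; the names are hypothetical:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <unordered_set>

// Sketch of the stress-test idea above: rather than checking full
// 192-bit (24-byte) IDs for uniqueness, which would essentially never
// fail even for low-quality IDs, track a single 64-bit window at one
// byte offset (chosen randomly per run, in [0, 16]). A collision in
// the trimmed view is vastly more probable, so never observing one
// supports stronger aggregate conclusions about ID quality.
struct TrimmedIdChecker {
  explicit TrimmedIdChecker(size_t offset) : offset_(offset) {}

  // Returns false if this ID's 64-bit window was seen before.
  bool InsertUnique(const std::string& id24) {
    assert(id24.size() == 24);
    assert(offset_ + sizeof(uint64_t) <= id24.size());
    uint64_t window;
    std::memcpy(&window, id24.data() + offset_, sizeof(window));
    return seen_.insert(window).second;
  }

  size_t offset_;
  std::unordered_set<uint64_t> seen_;
};
```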

Reviewed By: zhichao-cao, mrambacher

Differential Revision: D31582865

Pulled By: pdillinger

fbshipit-source-id: 1f620c4c86af9abe2a8d177b9ccf2ad2b9f48243
pdillinger authored and facebook-github-bot committed Oct 19, 2021
1 parent aa21896 commit ad5325a
Showing 30 changed files with 1,085 additions and 81 deletions.
3 changes: 2 additions & 1 deletion CMakeLists.txt
@@ -713,7 +713,7 @@ set(SOURCES
   env/file_system_tracer.cc
   env/fs_remap.cc
   env/mock_env.cc
-  env/unique_id.cc
+  env/unique_id_gen.cc
   file/delete_scheduler.cc
   file/file_prefetch_buffer.cc
   file/file_util.cc
@@ -807,6 +807,7 @@ set(SOURCES
   table/table_factory.cc
   table/table_properties.cc
   table/two_level_iterator.cc
+  table/unique_id.cc
   test_util/sync_point.cc
   test_util/sync_point_impl.cc
   test_util/testutil.cc
1 change: 1 addition & 0 deletions HISTORY.md
@@ -17,6 +17,7 @@
 * Add remote compaction read/write bytes statistics: `REMOTE_COMPACT_READ_BYTES`, `REMOTE_COMPACT_WRITE_BYTES`.
 * Introduce an experimental feature to dump out the blocks from block cache and insert them to the secondary cache to reduce the cache warmup time (e.g., used while migrating DB instance). More information are in `class CacheDumper` and `CacheDumpedLoader` at `rocksdb/utilities/cache_dump_load.h` Note that, this feature is subject to the potential change in the future, it is still experimental.
 * Introduced a new BlobDB configuration option `blob_garbage_collection_force_threshold`, which can be used to trigger compactions targeting the SST files which reference the oldest blob files when the ratio of garbage in those blob files meets or exceeds the specified threshold. This can reduce space amplification with skewed workloads where the affected SST files might not otherwise get picked up for compaction.
+* Added EXPERIMENTAL support for table file (SST) unique identifiers that are stable and universally unique, available with new function `GetUniqueIdFromTableProperties`. Only SST files from RocksDB >= 6.24 support unique IDs.
 * [JAVA] `keyMayExist()` supports ByteBuffer.

 ### Public API change
7 changes: 5 additions & 2 deletions TARGETS
@@ -225,7 +225,7 @@ cpp_library(
 "env/fs_remap.cc",
 "env/io_posix.cc",
 "env/mock_env.cc",
-"env/unique_id.cc",
+"env/unique_id_gen.cc",
 "file/delete_scheduler.cc",
 "file/file_prefetch_buffer.cc",
 "file/file_util.cc",
@@ -327,6 +327,7 @@ cpp_library(
 "table/table_factory.cc",
 "table/table_properties.cc",
 "table/two_level_iterator.cc",
+"table/unique_id.cc",
 "test_util/sync_point.cc",
 "test_util/sync_point_impl.cc",
 "test_util/transaction_test_util.cc",
@@ -550,7 +551,7 @@ cpp_library(
 "env/fs_remap.cc",
 "env/io_posix.cc",
 "env/mock_env.cc",
-"env/unique_id.cc",
+"env/unique_id_gen.cc",
 "file/delete_scheduler.cc",
 "file/file_prefetch_buffer.cc",
 "file/file_util.cc",
@@ -652,6 +653,7 @@ cpp_library(
 "table/table_factory.cc",
 "table/table_properties.cc",
 "table/two_level_iterator.cc",
+"table/unique_id.cc",
 "test_util/sync_point.cc",
 "test_util/sync_point_impl.cc",
 "test_util/transaction_test_util.cc",
@@ -848,6 +850,7 @@ cpp_library(
 "db_stress_tool/db_stress_common.cc",
 "db_stress_tool/db_stress_driver.cc",
 "db_stress_tool/db_stress_gflags.cc",
+"db_stress_tool/db_stress_listener.cc",
 "db_stress_tool/db_stress_shared_state.cc",
 "db_stress_tool/db_stress_test_base.cc",
 "db_stress_tool/db_stress_tool.cc",
5 changes: 5 additions & 0 deletions db/cuckoo_table_db_test.cc
@@ -6,6 +6,7 @@
 #ifndef ROCKSDB_LITE

 #include "db/db_impl/db_impl.h"
+#include "db/db_test_util.h"
 #include "rocksdb/db.h"
 #include "rocksdb/env.h"
 #include "table/cuckoo/cuckoo_table_factory.h"
@@ -133,6 +134,7 @@ TEST_F(CuckooTableDBTest, Flush) {

   TablePropertiesCollection ptc;
   ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
+  VerifySstUniqueIds(ptc);
   ASSERT_EQ(1U, ptc.size());
   ASSERT_EQ(3U, ptc.begin()->second->num_entries);
   ASSERT_EQ("1", FilesPerLevel());
@@ -149,6 +151,7 @@ TEST_F(CuckooTableDBTest, Flush) {
   ASSERT_OK(dbfull()->TEST_FlushMemTable());

   ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
+  VerifySstUniqueIds(ptc);
   ASSERT_EQ(2U, ptc.size());
   auto row = ptc.begin();
   ASSERT_EQ(3U, row->second->num_entries);
@@ -166,6 +169,7 @@ TEST_F(CuckooTableDBTest, Flush) {
   ASSERT_OK(Delete("key4"));
   ASSERT_OK(dbfull()->TEST_FlushMemTable());
   ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
+  VerifySstUniqueIds(ptc);
   ASSERT_EQ(3U, ptc.size());
   row = ptc.begin();
   ASSERT_EQ(3U, row->second->num_entries);
@@ -190,6 +194,7 @@ TEST_F(CuckooTableDBTest, FlushWithDuplicateKeys) {

   TablePropertiesCollection ptc;
   ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
+  VerifySstUniqueIds(ptc);
   ASSERT_EQ(1U, ptc.size());
   ASSERT_EQ(2U, ptc.begin()->second->num_entries);
   ASSERT_EQ("1", FilesPerLevel());
32 changes: 14 additions & 18 deletions db/db_impl/db_impl.cc
@@ -53,7 +53,7 @@
 #include "db/version_set.h"
 #include "db/write_batch_internal.h"
 #include "db/write_callback.h"
-#include "env/unique_id.h"
+#include "env/unique_id_gen.h"
 #include "file/file_util.h"
 #include "file/filename.h"
 #include "file/random_access_file_reader.h"
@@ -92,6 +92,7 @@
 #include "table/sst_file_dumper.h"
 #include "table/table_builder.h"
 #include "table/two_level_iterator.h"
+#include "table/unique_id_impl.h"
 #include "test_util/sync_point.h"
 #include "trace_replay/trace_replay.h"
 #include "util/autovector.h"
@@ -3947,23 +3948,18 @@ Status DBImpl::GetDbSessionId(std::string& session_id) const {
 }

 std::string DBImpl::GenerateDbSessionId(Env*) {
-  // GenerateRawUniqueId() generates an identifier that has a negligible
-  // probability of being duplicated. It should have full 128 bits of entropy.
-  uint64_t a, b;
-  GenerateRawUniqueId(&a, &b);
-
-  // Hash and reformat that down to a more compact format, 20 characters
-  // in base-36 ([0-9A-Z]), which is ~103 bits of entropy, which is enough
-  // to expect no collisions across a billion servers each opening DBs
-  // a million times (~2^50). Benefits vs. raw unique id:
-  // * Save ~ dozen bytes per SST file
-  // * Shorter shared backup file names (some platforms have low limits)
-  // * Visually distinct from DB id format
-  std::string db_session_id(20U, '\0');
-  char* buf = &db_session_id[0];
-  PutBaseChars<36>(&buf, 10, a, /*uppercase*/ true);
-  PutBaseChars<36>(&buf, 10, b, /*uppercase*/ true);
-  return db_session_id;
+  // See SemiStructuredUniqueIdGen for its desirable properties.
+  static SemiStructuredUniqueIdGen gen;
+
+  uint64_t lo, hi;
+  gen.GenerateNext(&hi, &lo);
+  if (lo == 0) {
+    // Avoid emitting session ID with lo==0, so that SST unique
+    // IDs can be more easily ensured non-zero
+    gen.GenerateNext(&hi, &lo);
+    assert(lo != 0);
+  }
+  return EncodeSessionId(hi, lo);
 }

 void DBImpl::SetDbSessionId() {
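
For reference, the 20-character base-36 encoding described in the removed comment can be sketched as follows. This is an illustrative re-implementation under stated assumptions, not the RocksDB `PutBaseChars` code, and `EncodeSessionIdSketch` is a hypothetical name:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Illustrative sketch: render two 64-bit words as 20 uppercase
// base-36 characters [0-9A-Z], 10 digits per word. Since 36^10 < 2^64,
// each word keeps only its value mod 36^10 (~51.7 bits), which is how
// 128 bits of raw entropy hash down to the ~103 bits mentioned in the
// removed comment.
std::string EncodeSessionIdSketch(uint64_t hi, uint64_t lo) {
  static const char kChars[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
  std::string out(20, '0');
  uint64_t words[2] = {hi, lo};
  for (int w = 0; w < 2; ++w) {
    uint64_t v = words[w];
    for (int i = 9; i >= 0; --i) {  // least-significant digit last
      out[w * 10 + i] = kChars[v % 36];
      v /= 36;
    }
  }
  return out;
}
```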
2 changes: 2 additions & 0 deletions db/db_table_properties_test.cc
@@ -45,6 +45,8 @@ void VerifyTableProperties(DB* db, uint64_t expected_entries_size) {

   ASSERT_EQ(props.size(), unique_entries.size());
   ASSERT_EQ(expected_entries_size, sum);
+
+  VerifySstUniqueIds(props);
 }
 }  // namespace

7 changes: 7 additions & 0 deletions db/db_test.cc
@@ -2077,6 +2077,13 @@ TEST_F(DBTest, OverlapInLevel0) {
   Flush(1);
   ASSERT_EQ("2,1,1", FilesPerLevel(1));

+  // BEGIN addition to existing test
+  // Take this opportunity to verify SST unique ids (including Plain table)
+  TablePropertiesCollection tbc;
+  ASSERT_OK(db_->GetPropertiesOfAllTables(handles_[1], &tbc));
+  VerifySstUniqueIds(tbc);
+  // END addition to existing test
+
   // Compact away the placeholder files we created initially
   dbfull()->TEST_CompactRange(1, nullptr, nullptr, handles_[1]);
   dbfull()->TEST_CompactRange(2, nullptr, nullptr, handles_[1]);
11 changes: 11 additions & 0 deletions db/db_test_util.cc
@@ -13,6 +13,7 @@
 #include "env/mock_env.h"
 #include "rocksdb/convenience.h"
 #include "rocksdb/env_encryption.h"
+#include "rocksdb/unique_id.h"
 #include "rocksdb/utilities/object_registry.h"
 #include "util/random.h"

@@ -1654,4 +1655,14 @@ uint64_t DBTestBase::GetNumberOfSstFilesForColumnFamily(
 }
 #endif  // ROCKSDB_LITE

+void VerifySstUniqueIds(const TablePropertiesCollection& props) {
+  ASSERT_FALSE(props.empty());  // suspicious test if empty
+  std::unordered_set<std::string> seen;
+  for (auto& pair : props) {
+    std::string id;
+    ASSERT_OK(GetUniqueIdFromTableProperties(*pair.second, &id));
+    ASSERT_TRUE(seen.insert(id).second);
+  }
+}
+
 }  // namespace ROCKSDB_NAMESPACE
4 changes: 4 additions & 0 deletions db/db_test_util.h
@@ -1195,4 +1195,8 @@ class DBTestBase : public testing::Test {
   bool time_elapse_only_sleep_on_reopen_ = false;
 };

+// For verifying that all files generated by current version have SST
+// unique ids.
+void VerifySstUniqueIds(const TablePropertiesCollection& props);
+
 }  // namespace ROCKSDB_NAMESPACE
8 changes: 4 additions & 4 deletions db_stress_tool/CMakeLists.txt
@@ -1,13 +1,13 @@
add_executable(db_stress${ARTIFACT_SUFFIX}
db_stress.cc
db_stress_tool.cc
batched_ops_stress.cc
cf_consistency_stress.cc
db_stress.cc
db_stress_common.cc
db_stress_driver.cc
db_stress_test_base.cc
db_stress_shared_state.cc
db_stress_gflags.cc
db_stress_listener.cc
db_stress_shared_state.cc
db_stress_test_base.cc
db_stress_tool.cc
expected_state.cc
no_batched_ops_stress.cc)
136 changes: 136 additions & 0 deletions db_stress_tool/db_stress_listener.cc
@@ -0,0 +1,136 @@
// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).

#include "db_stress_tool/db_stress_listener.h"

#include <cstdint>

#include "rocksdb/file_system.h"
#include "util/coding_lean.h"

namespace ROCKSDB_NAMESPACE {

#ifdef GFLAGS
#ifndef ROCKSDB_LITE

// TODO: consider using expected_values_dir instead, but this is more
// convenient for now.
UniqueIdVerifier::UniqueIdVerifier(const std::string& db_name)
    : path_(db_name + "/.unique_ids") {
  // We expect such a small number of files generated during this test
  // (thousands?), checking full 192-bit IDs for uniqueness is a very
  // weak check. For a stronger check, we pick a specific 64-bit
  // subsequence from the ID to check for uniqueness. All bits of the
  // ID should be high quality, and 64 bits should be unique with
  // very good probability for the quantities in this test.
  offset_ = Random::GetTLSInstance()->Uniform(17);  // 0 to 16

  // Use default FileSystem to avoid fault injection, etc.
  FileSystem& fs = *FileSystem::Default();
  IOOptions opts;

  {
    std::unique_ptr<FSSequentialFile> reader;
    Status s =
        fs.NewSequentialFile(path_, FileOptions(), &reader, /*dbg*/ nullptr);
    if (s.ok()) {
      // Load from file
      std::string id(24U, '\0');
      Slice result;
      for (;;) {
        s = reader->Read(id.size(), opts, &result, &id[0], /*dbg*/ nullptr);
        if (!s.ok()) {
          fprintf(stderr, "Error reading unique id file: %s\n",
                  s.ToString().c_str());
          assert(false);
        }
        if (result.size() < id.size()) {
          // EOF
          if (result.size() != 0) {
            // Corrupt file. Not a DB bug but could happen if OS doesn't
            // provide good guarantees on process crash.
            fprintf(stdout, "Warning: clearing corrupt unique id file\n");
            id_set_.clear();
            reader.reset();
            s = fs.DeleteFile(path_, opts, /*dbg*/ nullptr);
            assert(s.ok());
          }
          break;
        }
        VerifyNoWrite(id);
      }
    } else {
      // Newly created is ok.
      // But FileSystem doesn't tell us whether non-existence was the cause
      // of the failure. (Issue #9021)
      Status s2 = fs.FileExists(path_, opts, /*dbg*/ nullptr);
      if (!s2.IsNotFound()) {
        fprintf(stderr, "Error opening unique id file: %s\n",
                s.ToString().c_str());
        assert(false);
      }
    }
  }
  fprintf(stdout, "(Re-)verified %zu unique IDs\n", id_set_.size());
  Status s = fs.ReopenWritableFile(path_, FileOptions(), &data_file_writer_,
                                   /*dbg*/ nullptr);
  if (!s.ok()) {
    fprintf(stderr, "Error opening unique id file for append: %s\n",
            s.ToString().c_str());
    assert(false);
  }
}

UniqueIdVerifier::~UniqueIdVerifier() {
  data_file_writer_->Close(IOOptions(), /*dbg*/ nullptr);
}

void UniqueIdVerifier::VerifyNoWrite(const std::string& id) {
  assert(id.size() == 24);
  bool is_new = id_set_.insert(DecodeFixed64(&id[offset_])).second;
  if (!is_new) {
    fprintf(stderr,
            "Duplicate partial unique ID found (offset=%zu, count=%zu)\n",
            offset_, id_set_.size());
    assert(false);
  }
}

void UniqueIdVerifier::Verify(const std::string& id) {
  assert(id.size() == 24);
  std::lock_guard<std::mutex> lock(mutex_);
  // If we accumulate more than ~4 million IDs, there would be > 1 in 1M
  // natural chance of collision. Thus, simply stop checking at that point.
  if (id_set_.size() >= 4294967) {
    return;
  }
  IOStatus s =
      data_file_writer_->Append(Slice(id), IOOptions(), /*dbg*/ nullptr);
  if (!s.ok()) {
    fprintf(stderr, "Error writing to unique id file: %s\n",
            s.ToString().c_str());
    assert(false);
  }
  s = data_file_writer_->Flush(IOOptions(), /*dbg*/ nullptr);
  if (!s.ok()) {
    fprintf(stderr, "Error flushing unique id file: %s\n",
            s.ToString().c_str());
    assert(false);
  }
  VerifyNoWrite(id);
}

void DbStressListener::VerifyTableFileUniqueId(
    const TableProperties& new_file_properties) {
  // Verify unique ID
  std::string id;
  GetUniqueIdFromTableProperties(new_file_properties, &id);
  unique_ids_.Verify(id);
}

#endif  // !ROCKSDB_LITE
#endif  // GFLAGS

}  // namespace ROCKSDB_NAMESPACE
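
The cutoff of 4294967 IDs in `UniqueIdVerifier::Verify` above follows from the birthday bound on 64-bit values: among n random 64-bit samples, the expected number of colliding pairs is roughly n(n-1)/2 divided by 2^64. A quick, illustrative check of that arithmetic (not RocksDB code):

```cpp
#include <cassert>
#include <cmath>

// Expected number of colliding pairs among n uniformly random 64-bit
// values: n(n-1)/2 pairs, each colliding with probability 2^-64. At
// the cutoff used above (4294967, about 2^32 / 1000) this is roughly
// 5e-7, on the order of one in a few million, beyond which a "no
// collision" observation stops being meaningful evidence.
double ExpectedCollisions64(double n) {
  return n * (n - 1.0) / 2.0 / std::pow(2.0, 64.0);
}
```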