Fix some typos in comments and docs.
Summary: Closes facebook#3568

Differential Revision: D7170953

Pulled By: siying

fbshipit-source-id: 9cfb8dd88b7266da920c0e0c1e10fb2c5af0641c
waywardmonkeys authored and facebook-github-bot committed Mar 8, 2018
1 parent a277b0f · commit a3a3f54
Showing 38 changed files with 124 additions and 124 deletions.
CMakeLists.txt (2 changes: 1 addition & 1 deletion)
@@ -56,7 +56,7 @@ if(MSVC)
include(${CMAKE_CURRENT_SOURCE_DIR}/thirdparty.inc)
else()
if(CMAKE_SYSTEM_NAME MATCHES "FreeBSD")
-# FreeBSD has jemaloc as default malloc
+# FreeBSD has jemalloc as default malloc
# but it does not have all the jemalloc files in include/...
set(WITH_JEMALLOC ON)
else()
HISTORY.md (2 changes: 1 addition & 1 deletion)
@@ -72,7 +72,7 @@
* `BackupableDBOptions::max_valid_backups_to_open == 0` now means no backups will be opened during BackupEngine initialization. Previously this condition disabled limiting backups opened.
* `DBOptions::preserve_deletes` is a new option that allows one to specify that DB should not drop tombstones for regular deletes if they have sequence number larger than what was set by the new API call `DB::SetPreserveDeletesSequenceNumber(SequenceNumber seqnum)`. Disabled by default.
* API call `DB::SetPreserveDeletesSequenceNumber(SequenceNumber seqnum)` was added, users who wish to preserve deletes are expected to periodically call this function to advance the cutoff seqnum (all deletes made before this seqnum can be dropped by DB). It's user responsibility to figure out how to advance the seqnum in the way so the tombstones are kept for the desired period of time, yet are eventually processed in time and don't eat up too much space.
-* `ReadOptions::iter_start_seqnum` was added; if set to something > 0 user will see 2 changes in iterators behavior 1) only keys written with sequence larger than this parameter would be returned and 2) the `Slice` returned by iter->key() now points to the the memory that keep User-oriented representation of the internal key, rather than user key. New struct `FullKey` was added to represent internal keys, along with a new helper function `ParseFullKey(const Slice& internal_key, FullKey* result);`.
+* `ReadOptions::iter_start_seqnum` was added; if set to something > 0 user will see 2 changes in iterators behavior 1) only keys written with sequence larger than this parameter would be returned and 2) the `Slice` returned by iter->key() now points to the memory that keep User-oriented representation of the internal key, rather than user key. New struct `FullKey` was added to represent internal keys, along with a new helper function `ParseFullKey(const Slice& internal_key, FullKey* result);`.
* Deprecate trash_dir param in NewSstFileManager, right now we will rename deleted files to <name>.trash instead of moving them to trash directory
* Allow setting a custom trash/DB size ratio limit in the SstFileManager, after which files that are to be scheduled for deletion are deleted immediately, regardless of any delete ratelimit.
* Return an error on write if write_options.sync = true and write_options.disableWAL = true to warn user of inconsistent options. Previously we will not write to WAL and not respecting the sync options in this case.
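Aside: the `preserve_deletes` and `iter_start_seqnum` changelog entries above describe a complete tombstone-preservation workflow. Below is a minimal sketch of how the pieces might fit together, assuming only the identifiers named in the changelog (`DBOptions::preserve_deletes`, `DB::SetPreserveDeletesSequenceNumber`, `ReadOptions::iter_start_seqnum`, `FullKey`, `ParseFullKey`); the header location and field layout of `FullKey` are assumptions, and this is illustrative code, not code from this commit.

#include <cassert>
#include <memory>

#include "rocksdb/db.h"
#include "rocksdb/types.h"  // assumed location of FullKey / ParseFullKey

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.preserve_deletes = true;  // keep tombstones newer than the cutoff

  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/preserve_deletes_demo", &db);
  assert(s.ok());

  // Periodically advance the cutoff seqnum; deletes made before it become
  // eligible for dropping, per the changelog entry above.
  db->SetPreserveDeletesSequenceNumber(db->GetLatestSequenceNumber());

  // With iter_start_seqnum > 0, only entries with a larger sequence number
  // are returned, and iter->key() holds an internal-key representation.
  rocksdb::ReadOptions read_options;
  read_options.iter_start_seqnum = 1;
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(read_options));
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    rocksdb::FullKey full_key;
    if (rocksdb::ParseFullKey(it->key(), &full_key)) {
      // Inspect the parsed internal key here (field names are not spelled
      // out in the changelog, so none are referenced in this sketch).
    }
  }

  delete db;
  return 0;
}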
ROCKSDB_LITE.md (2 changes: 1 addition & 1 deletion)
@@ -5,7 +5,7 @@ RocksDBLite is a project focused on mobile use cases, which don't need a lot of
Some examples of the features disabled by ROCKSDB_LITE:
* compiled-in support for LDB tool
* No backupable DB
-* No support for replication (which we provide in form of TrasactionalIterator)
+* No support for replication (which we provide in form of TransactionalIterator)
* No advanced monitoring tools
* No special-purpose memtables that are highly optimized for specific use cases
* No Transactions
db/db_impl.h (2 changes: 1 addition & 1 deletion)
@@ -469,7 +469,7 @@ class DBImpl : public DB {
bool no_full_scan = false);

// Diffs the files listed in filenames and those that do not
-// belong to live files are posibly removed. Also, removes all the
+// belong to live files are possibly removed. Also, removes all the
// files in sst_delete_files and log_delete_files.
// It is not necessary to hold the mutex when invoking this method.
// If FindObsoleteFiles() was run, we need to also run
db/db_impl_files.cc (14 changes: 7 additions & 7 deletions)
@@ -65,7 +65,7 @@ void DBImpl::MarkLogAsContainingPrepSection(uint64_t log) {

auto rit = logs_with_prep_.rbegin();
bool updated = false;
-// Most probabely the last log is the one that is being marked for
+// Most probably the last log is the one that is being marked for
// having a prepare section; so search from the end.
for (; rit != logs_with_prep_.rend() && rit->log >= log; ++rit) {
if (rit->log == log) {
@@ -97,7 +97,7 @@ uint64_t DBImpl::FindMinLogContainingOutstandingPrep() {
completed_it->second == it->cnt);
prepared_section_completed_.erase(completed_it);
}
-// erase from beigning in vector is not efficient but this function is not
+// erase from beginning in vector is not efficient but this function is not
// on the fast path.
it = logs_with_prep_.erase(it);
}
@@ -113,11 +113,11 @@ uint64_t DBImpl::MinLogNumberToKeep() {
// sections of outstanding transactions.
//
// We must check min logs with outstanding prep before we check
-// logs referneces by memtables because a log referenced by the
+// logs references by memtables because a log referenced by the
// first data structure could transition to the second under us.
//
// TODO(horuff): iterating over all column families under db mutex.
-// should find more optimial solution
+// should find more optimal solution
auto min_log_in_prep_heap = FindMinLogContainingOutstandingPrep();

if (min_log_in_prep_heap != 0 && min_log_in_prep_heap < log_number) {
@@ -153,7 +153,7 @@ void DBImpl::FindObsoleteFiles(JobContext* job_context, bool force,

bool doing_the_full_scan = false;

-// logic for figurint out if we're doing the full scan
+// logic for figuring out if we're doing the full scan
if (no_full_scan) {
doing_the_full_scan = false;
} else if (force ||
@@ -173,7 +173,7 @@ void DBImpl::FindObsoleteFiles(JobContext* job_context, bool force,
// threads
// Since job_context->min_pending_output is set, until file scan finishes,
// mutex_ cannot be released. Otherwise, we might see no min_pending_output
-// here but later find newer generated unfinalized files while scannint.
+// here but later find newer generated unfinalized files while scanning.
if (!pending_outputs_.empty()) {
job_context->min_pending_output = *pending_outputs_.begin();
} else {
@@ -344,7 +344,7 @@ void DBImpl::DeleteObsoleteFileImpl(int job_id, const std::string& fname,
}

// Diffs the files listed in filenames and those that do not
-// belong to live files are posibly removed. Also, removes all the
+// belong to live files are possibly removed. Also, removes all the
// files in sst_delete_files and log_delete_files.
// It is not necessary to hold the mutex when invoking this method.
void DBImpl::PurgeObsoleteFiles(const JobContext& state, bool schedule_only) {
db/version_builder.cc (2 changes: 1 addition & 1 deletion)
@@ -420,7 +420,7 @@ class VersionBuilder::Rep {

void MaybeAddFile(VersionStorageInfo* vstorage, int level, FileMetaData* f) {
if (levels_[level].deleted_files.count(f->fd.GetNumber()) > 0) {
-// f is to-be-delected table file
+// f is to-be-deleted table file
vstorage->RemoveCurrentStats(f);
} else {
vstorage->AddFile(level, f, info_log_);
db/version_set.h (18 changes: 9 additions & 9 deletions)
@@ -115,7 +115,7 @@ class VersionStorageInfo {
// Update the accumulated stats from a file-meta.
void UpdateAccumulatedStats(FileMetaData* file_meta);

-// Decrease the current stat form a to-be-delected file-meta
+// Decrease the current stat from a to-be-deleted file-meta
void RemoveCurrentStats(FileMetaData* file_meta);

void ComputeCompensatedSizes();
@@ -491,7 +491,7 @@ class VersionStorageInfo {
uint64_t accumulated_num_deletions_;
// current number of non_deletion entries
uint64_t current_num_non_deletions_;
-// current number of delection entries
+// current number of deletion entries
uint64_t current_num_deletions_;
// current number of file samples
uint64_t current_num_samples_;
@@ -565,13 +565,13 @@ class Version {
// Return a human readable string that describes this version's contents.
std::string DebugString(bool hex = false, bool print_stats = false) const;

-// Returns the version nuber of this version
+// Returns the version number of this version
uint64_t GetVersionNumber() const { return version_number_; }

// REQUIRES: lock is held
// On success, "tp" will contains the table properties of the file
// specified in "file_meta". If the file name of "file_meta" is
-// known ahread, passing it by a non-null "fname" can save a
+// known ahead, passing it by a non-null "fname" can save a
// file-name conversion.
Status GetTableProperties(std::shared_ptr<const TableProperties>* tp,
const FileMetaData* file_meta,
@@ -580,14 +580,14 @@
// REQUIRES: lock is held
// On success, *props will be populated with all SSTables' table properties.
// The keys of `props` are the sst file name, the values of `props` are the
-// tables' propertis, represented as shared_ptr.
+// tables' properties, represented as shared_ptr.
Status GetPropertiesOfAllTables(TablePropertiesCollection* props);
Status GetPropertiesOfAllTables(TablePropertiesCollection* props, int level);
Status GetPropertiesOfTablesInRange(const Range* range, std::size_t n,
TablePropertiesCollection* props) const;

// REQUIRES: lock is held
// On success, "tp" will contains the aggregated table property amoug
// On success, "tp" will contains the aggregated table property among
// the table properties of all sst files in this version.
Status GetAggregatedTableProperties(
std::shared_ptr<const TableProperties>* tp, int level = -1);
@@ -637,7 +637,7 @@ class Version {
bool IsFilterSkipped(int level, bool is_file_last_in_level = false);

// The helper function of UpdateAccumulatedStats, which may fill the missing
-// fields of file_mata from its associated TableProperties.
+// fields of file_meta from its associated TableProperties.
// Returns true if it does initialize FileMetaData.
bool MaybeInitializeFileMetaData(FileMetaData* file_meta);

@@ -775,7 +775,7 @@ class VersionSet {
// Set the last sequence number to s.
void SetLastSequence(uint64_t s) {
assert(s >= last_sequence_);
-// Last visible seqeunce must always be less than last written seq
+// Last visible sequence must always be less than last written seq
assert(!db_options_->two_write_queues || s <= last_allocated_sequence_);
last_sequence_.store(s, std::memory_order_release);
}
@@ -913,7 +913,7 @@ class VersionSet {
// The last allocated sequence that is also published to the readers. This is
// applicable only when last_seq_same_as_publish_seq_ is not set. Otherwise
// last_sequence_ also indicates the last published seq.
-// We have last_sequence <= last_published_seqeunce_ <=
+// We have last_sequence <= last_published_sequence_ <=
// last_allocated_sequence_
std::atomic<uint64_t> last_published_sequence_;
uint64_t prev_log_number_; // 0 or backing store for memtable being compacted
db/write_batch.cc (38 changes: 19 additions & 19 deletions)
@@ -397,7 +397,7 @@ Status WriteBatch::Iterate(Handler* handler) const {
input.remove_prefix(WriteBatchInternal::kHeader);
Slice key, value, blob, xid;
// Sometimes a sub-batch starts with a Noop. We want to exclude such Noops as
-// the batch boundry sybmols otherwise we would mis-count the number of
+// the batch boundary symbols otherwise we would mis-count the number of
// batches. We do that by checking whether the accumulated batch is empty
// before seeing the next Noop.
bool empty_batch = true;
@@ -1070,11 +1070,11 @@ class MemTableInserter : public WriteBatch::Handler {
// is set when a batch, which is tagged with seq, is read from the WAL.
// Within a sequenced batch, which could be a merge of multiple batches, we
// have two policies to advance the seq: i) seq_per_key (default) and ii)
-// seq_per_batch. To implement the latter we need to mark the boundry between
+// seq_per_batch. To implement the latter we need to mark the boundary between
// the individual batches. The approach is this: 1) Use the terminating
-// markers to indicate the boundry (kTypeEndPrepareXID, kTypeCommitXID,
-// kTypeRollbackXID) 2) Terminate a batch with kTypeNoop in the absense of a
-// natural boundy marker.
+// markers to indicate the boundary (kTypeEndPrepareXID, kTypeCommitXID,
+// kTypeRollbackXID) 2) Terminate a batch with kTypeNoop in the absence of a
+// natural boundary marker.
void MaybeAdvanceSeq(bool batch_boundry = false) {
if (batch_boundry == seq_per_batch_) {
sequence_++;
@@ -1150,7 +1150,7 @@ class MemTableInserter : public WriteBatch::Handler {
bool batch_boundry = false;
if (rebuilding_trx_ != nullptr) {
assert(!write_after_commit_);
-// The CF is probabely flushed and hence no need for insert but we still
+// The CF is probably flushed and hence no need for insert but we still
// need to keep track of the keys for upcoming rollback/commit.
WriteBatchInternal::Put(rebuilding_trx_, column_family_id, key, value);
batch_boundry = duplicate_detector_.IsDuplicateKeySeq(column_family_id,
@@ -1230,10 +1230,10 @@ class MemTableInserter : public WriteBatch::Handler {
if (UNLIKELY(!ret_status.IsTryAgain() && rebuilding_trx_ != nullptr)) {
assert(!write_after_commit_);
// If the ret_status is TryAgain then let the next try to add the ky to
-// the the rebuilding transaction object.
+// the rebuilding transaction object.
WriteBatchInternal::Put(rebuilding_trx_, column_family_id, key, value);
}
-// Since all Puts are logged in trasaction logs (if enabled), always bump
+// Since all Puts are logged in transaction logs (if enabled), always bump
// sequence number. Even if the update eventually fails and does not result
// in memtable add/update.
MaybeAdvanceSeq();
@@ -1278,7 +1278,7 @@ class MemTableInserter : public WriteBatch::Handler {
bool batch_boundry = false;
if (rebuilding_trx_ != nullptr) {
assert(!write_after_commit_);
-// The CF is probabely flushed and hence no need for insert but we still
+// The CF is probably flushed and hence no need for insert but we still
// need to keep track of the keys for upcoming rollback/commit.
WriteBatchInternal::Delete(rebuilding_trx_, column_family_id, key);
batch_boundry = duplicate_detector_.IsDuplicateKeySeq(column_family_id,
@@ -1293,7 +1293,7 @@ class MemTableInserter : public WriteBatch::Handler {
if (UNLIKELY(!ret_status.IsTryAgain() && rebuilding_trx_ != nullptr)) {
assert(!write_after_commit_);
// If the ret_status is TryAgain then let the next try to add the ky to
-// the the rebuilding transaction object.
+// the rebuilding transaction object.
WriteBatchInternal::Delete(rebuilding_trx_, column_family_id, key);
}
return ret_status;
@@ -1313,7 +1313,7 @@ class MemTableInserter : public WriteBatch::Handler {
bool batch_boundry = false;
if (rebuilding_trx_ != nullptr) {
assert(!write_after_commit_);
-// The CF is probabely flushed and hence no need for insert but we still
+// The CF is probably flushed and hence no need for insert but we still
// need to keep track of the keys for upcoming rollback/commit.
WriteBatchInternal::SingleDelete(rebuilding_trx_, column_family_id,
key);
@@ -1330,7 +1330,7 @@ class MemTableInserter : public WriteBatch::Handler {
if (UNLIKELY(!ret_status.IsTryAgain() && rebuilding_trx_ != nullptr)) {
assert(!write_after_commit_);
// If the ret_status is TryAgain then let the next try to add the ky to
-// the the rebuilding transaction object.
+// the rebuilding transaction object.
WriteBatchInternal::SingleDelete(rebuilding_trx_, column_family_id, key);
}
return ret_status;
@@ -1352,11 +1352,11 @@ class MemTableInserter : public WriteBatch::Handler {
bool batch_boundry = false;
if (rebuilding_trx_ != nullptr) {
assert(!write_after_commit_);
-// The CF is probabely flushed and hence no need for insert but we still
+// The CF is probably flushed and hence no need for insert but we still
// need to keep track of the keys for upcoming rollback/commit.
WriteBatchInternal::DeleteRange(rebuilding_trx_, column_family_id,
begin_key, end_key);
-// TODO(myabandeh): when transctional DeleteRange support is added,
+// TODO(myabandeh): when transactional DeleteRange support is added,
// check if end_key must also be added.
batch_boundry = duplicate_detector_.IsDuplicateKeySeq(
column_family_id, begin_key, sequence_);
@@ -1384,7 +1384,7 @@ class MemTableInserter : public WriteBatch::Handler {
if (UNLIKELY(!ret_status.IsTryAgain() && rebuilding_trx_ != nullptr)) {
assert(!write_after_commit_);
// If the ret_status is TryAgain then let the next try to add the ky to
-// the the rebuilding transaction object.
+// the rebuilding transaction object.
WriteBatchInternal::DeleteRange(rebuilding_trx_, column_family_id,
begin_key, end_key);
}
@@ -1406,7 +1406,7 @@ class MemTableInserter : public WriteBatch::Handler {
bool batch_boundry = false;
if (rebuilding_trx_ != nullptr) {
assert(!write_after_commit_);
-// The CF is probabely flushed and hence no need for insert but we still
+// The CF is probably flushed and hence no need for insert but we still
// need to keep track of the keys for upcoming rollback/commit.
WriteBatchInternal::Merge(rebuilding_trx_, column_family_id, key,
value);
Expand Down Expand Up @@ -1498,7 +1498,7 @@ class MemTableInserter : public WriteBatch::Handler {
if (UNLIKELY(!ret_status.IsTryAgain() && rebuilding_trx_ != nullptr)) {
assert(!write_after_commit_);
// If the ret_status is TryAgain then let the next try to add the ky to
-// the the rebuilding transaction object.
+// the rebuilding transaction object.
WriteBatchInternal::Merge(rebuilding_trx_, column_family_id, key, value);
}
MaybeAdvanceSeq();
Expand Down Expand Up @@ -1596,15 +1596,15 @@ class MemTableInserter : public WriteBatch::Handler {
// and commit.
auto trx = db_->GetRecoveredTransaction(name.ToString());

-// the log contaiting the prepared section may have
+// the log containing the prepared section may have
// been released in the last incarnation because the
// data was flushed to L0
if (trx != nullptr) {
// at this point individual CF lognumbers will prevent
// duplicate re-insertion of values.
assert(log_number_ref_ == 0);
if (write_after_commit_) {
-// all insertes must reference this trx log number
+// all inserts must reference this trx log number
log_number_ref_ = trx->log_number_;
s = trx->batch_->Iterate(this);
log_number_ref_ = 0;
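Aside: the seq_per_key / seq_per_batch comment corrected above describes a small but subtle policy. Here is a self-contained sketch (a toy model, not RocksDB code) that mirrors the MaybeAdvanceSeq logic quoted in the diff, to show how the two policies count sequence numbers:

#include <cstdint>
#include <iostream>

// Toy model of MemTableInserter's sequence bookkeeping: seq_per_key (the
// default) bumps the sequence once per key; seq_per_batch bumps it only at
// the batch-boundary markers described in the comment above.
struct SeqAdvancer {
  uint64_t sequence = 0;
  bool seq_per_batch = false;

  // Same condition as the quoted MaybeAdvanceSeq: advance when the boundary
  // flag matches the policy.
  void MaybeAdvanceSeq(bool batch_boundary = false) {
    if (batch_boundary == seq_per_batch) {
      sequence++;
    }
  }
};

int main() {
  SeqAdvancer per_key;             // seq_per_key policy
  per_key.MaybeAdvanceSeq();       // key 1 -> seq 1
  per_key.MaybeAdvanceSeq();       // key 2 -> seq 2
  per_key.MaybeAdvanceSeq(true);   // boundary marker: no bump

  SeqAdvancer per_batch;
  per_batch.seq_per_batch = true;  // seq_per_batch policy
  per_batch.MaybeAdvanceSeq();     // key 1: no bump
  per_batch.MaybeAdvanceSeq();     // key 2: no bump
  per_batch.MaybeAdvanceSeq(true); // batch boundary -> seq 1

  std::cout << per_key.sequence << " " << per_batch.sequence << "\n";  // 2 1
  return 0;
}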
db/write_batch_internal.h (6 changes: 3 additions & 3 deletions)
@@ -117,10 +117,10 @@ class WriteBatchInternal {
// Set the count for the number of entries in the batch.
static void SetCount(WriteBatch* batch, int n);

-// Return the seqeunce number for the start of this batch.
+// Return the sequence number for the start of this batch.
static SequenceNumber Sequence(const WriteBatch* batch);

-// Store the specified number as the seqeunce number for the start of
+// Store the specified number as the sequence number for the start of
// this batch.
static void SetSequence(WriteBatch* batch, SequenceNumber seq);

@@ -168,7 +168,7 @@ class WriteBatchInternal {
bool seq_per_batch = false);

// Convenience form of InsertInto when you have only one batch
-// next_seq returns the seq after last sequnce number used in MemTable insert
+// next_seq returns the seq after last sequence number used in MemTable insert
static Status InsertInto(const WriteBatch* batch,
ColumnFamilyMemTables* memtables,
FlushScheduler* flush_scheduler,