Rocksdb plugin to support OptimisticTransactionDb and TransactionDb #5328

Merged

Changes from 8 commits

Commits (26 total)
bf3b9f8
Refactor to make RocksDBColumnarKeyValueStorage abstract and extend i…
gfukushima Apr 11, 2023
a6062c3
Refactor to use RocksDB class where possible
gfukushima Apr 11, 2023
9b7d68e
Replace RocksDBColumnarKeyValueStorage for equivalent use case
gfukushima Apr 11, 2023
de371a9
Add config to identify storage mode
gfukushima Apr 11, 2023
93322ea
javadoc
gfukushima Apr 12, 2023
46be43f
Removing unnecessary addition to plugin api
gfukushima Apr 12, 2023
11c8a61
Refactor to remove duplicated code
gfukushima Apr 12, 2023
f8e848e
Introduce concept of DataStorageFormat.FOREST to the rocksDB plugin
gfukushima Apr 12, 2023
42e7924
Removing code added during spike
gfukushima Apr 13, 2023
c230472
Merge branch 'main' into rocksdb-to-support-opTxDb-pessimisticTxDB
gfukushima Apr 13, 2023
c223ab5
Moving to the segmented folder since this is segmented DB tests
gfukushima Apr 17, 2023
c1c6bce
Remove takeSnapshot method from abstract and pessimistic class
gfukushima Apr 17, 2023
aaba2bc
Rename class and remove takeSnapshot method since it's not supported
gfukushima Apr 17, 2023
4e24937
Make takeSnapshot a public method of the class
gfukushima Apr 17, 2023
ebda4cf
Use nonSnappableAdapter for forest segmented storage
gfukushima Apr 17, 2023
3238f97
Add NonSnappableSegmentedKeyValueStorageAdapter for forest
gfukushima Apr 17, 2023
32adbf6
spdx
gfukushima Apr 17, 2023
a0cdd20
Extend RocksDBColumnarKeyValueStorageTest to use Optimistic and Trans…
gfukushima Apr 17, 2023
298ab61
spdx
gfukushima Apr 17, 2023
894dfb8
Merge branch 'main' into rocksdb-to-support-opTxDb-pessimisticTxDB
gfukushima Apr 17, 2023
2076716
javadoc rename fix
gfukushima Apr 17, 2023
bc4e3fc
Clean up code
gfukushima Apr 18, 2023
6ed4df6
Merge branch 'main' into rocksdb-to-support-opTxDb-pessimisticTxDB
gfukushima Apr 18, 2023
80b9470
changelog
gfukushima Apr 18, 2023
7cee988
Use SegmentedKeyValueStorageAdapter as base class for Snappable adapter
gfukushima Apr 18, 2023
df8a0aa
Merge branch 'main' into rocksdb-to-support-opTxDb-pessimisticTxDB
gfukushima Apr 19, 2023
2 changes: 2 additions & 0 deletions plugins/rocksdb/build.gradle
@@ -47,6 +47,8 @@ dependencies {
implementation 'io.prometheus:simpleclient'
implementation 'org.apache.tuweni:tuweni-bytes'
implementation 'org.rocksdb:rocksdbjni'
implementation project(path: ':ethereum:core')

testImplementation project(':testutil')

@@ -16,6 +16,7 @@

import static com.google.common.base.Preconditions.checkNotNull;

import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.plugin.services.BesuConfiguration;
import org.hyperledger.besu.plugin.services.MetricsSystem;
import org.hyperledger.besu.plugin.services.exception.StorageException;
@@ -26,6 +27,8 @@
import org.hyperledger.besu.plugin.services.storage.rocksdb.configuration.RocksDBConfiguration;
import org.hyperledger.besu.plugin.services.storage.rocksdb.configuration.RocksDBConfigurationBuilder;
import org.hyperledger.besu.plugin.services.storage.rocksdb.configuration.RocksDBFactoryConfiguration;
import org.hyperledger.besu.plugin.services.storage.rocksdb.segmented.OptimisticRocksDBColumnarKeyValueStorage;
import org.hyperledger.besu.plugin.services.storage.rocksdb.segmented.PessimisticRocksDBColumnarKeyValueStorage;
import org.hyperledger.besu.plugin.services.storage.rocksdb.segmented.RocksDBColumnarKeyValueStorage;
import org.hyperledger.besu.plugin.services.storage.rocksdb.unsegmented.RocksDBKeyValueStorage;
import org.hyperledger.besu.services.kvstore.SegmentedKeyValueStorageAdapter;
@@ -149,7 +152,8 @@ public KeyValueStorage create(
final BesuConfiguration commonConfiguration,
final MetricsSystem metricsSystem)
throws StorageException {

final boolean isForestStorageFormat =
DataStorageFormat.FOREST.getDatabaseVersion() == commonConfiguration.getDatabaseVersion();
if (requiresInit()) {
init(commonConfiguration);
}
@@ -177,19 +181,35 @@ public KeyValueStorage create(
segments.stream()
.filter(segmentId -> segmentId.includeInDatabaseVersion(databaseVersion))
.collect(Collectors.toList());

segmentedStorage =
new RocksDBColumnarKeyValueStorage(
rocksDBConfiguration,
segmentsForVersion,
ignorableSegments,
metricsSystem,
rocksDBMetricsFactory);
if (isForestStorageFormat) {
LOG.info("FOREST mode detected, using pessimistic DB.");
segmentedStorage =
new PessimisticRocksDBColumnarKeyValueStorage(
rocksDBConfiguration,
segmentsForVersion,
ignorableSegments,
metricsSystem,
rocksDBMetricsFactory);
Contributor:
This could be done just for databaseVersion 1 in the switch statement, as that indicates the db format is forest. Then the isForestStorageFormat flag wouldn't be needed either.

Contributor (author):
I created that flag on purpose, because it took me some time to understand that the database version actually indicates forest or bonsai. Using numbers to identify the version is fine, but giving the code more context means the next developers to touch it don't spend time working out what the version represents. In my first draft I wasted time piping the StorageFormat through the plugin API because I didn't know the database version carried the same information.
} else {
LOG.info("Using Optimistic DB.");
segmentedStorage =
new OptimisticRocksDBColumnarKeyValueStorage(
rocksDBConfiguration,
segmentsForVersion,
ignorableSegments,
metricsSystem,
rocksDBMetricsFactory);
}
}
final RocksDbSegmentIdentifier rocksSegment =
segmentedStorage.getSegmentIdentifierByName(segment);
return new SegmentedKeyValueStorageAdapter<>(
segment, segmentedStorage, () -> segmentedStorage.takeSnapshot(rocksSegment));

if (isForestStorageFormat) {
return new SegmentedKeyValueStorageAdapter<>(segment, segmentedStorage);
} else {
return new SegmentedKeyValueStorageAdapter<>(
segment, segmentedStorage, () -> segmentedStorage.takeSnapshot(rocksSegment));
}
}
default:
{
@@ -23,7 +23,7 @@
import org.hyperledger.besu.plugin.services.metrics.OperationTimer;
import org.hyperledger.besu.plugin.services.storage.rocksdb.configuration.RocksDBConfiguration;

import org.rocksdb.OptimisticTransactionDB;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.Statistics;
import org.slf4j.Logger;
@@ -72,7 +72,7 @@ public RocksDBMetricsFactory(
public RocksDBMetrics create(
final MetricsSystem metricsSystem,
final RocksDBConfiguration rocksDbConfiguration,
final OptimisticTransactionDB db,
final RocksDB db,
final Statistics stats) {
final OperationTimer readLatency =
metricsSystem
@@ -21,13 +21,13 @@

import org.rocksdb.ColumnFamilyDescriptor;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.OptimisticTransactionDB;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

/** The RocksDb segment identifier. */
public class RocksDbSegmentIdentifier {

private final OptimisticTransactionDB db;
private final RocksDB db;
private final AtomicReference<ColumnFamilyHandle> reference;

/**
@@ -36,8 +36,7 @@ public class RocksDbSegmentIdentifier {
* @param db the db
* @param columnFamilyHandle the column family handle
*/
public RocksDbSegmentIdentifier(
final OptimisticTransactionDB db, final ColumnFamilyHandle columnFamilyHandle) {
public RocksDbSegmentIdentifier(final RocksDB db, final ColumnFamilyHandle columnFamilyHandle) {
this.db = db;
this.reference = new AtomicReference<>(columnFamilyHandle);
}
@@ -0,0 +1,103 @@
/*
* Copyright Hyperledger Besu Contributors.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*
* SPDX-License-Identifier: Apache-2.0
*/
package org.hyperledger.besu.plugin.services.storage.rocksdb.segmented;

import org.hyperledger.besu.plugin.services.MetricsSystem;
import org.hyperledger.besu.plugin.services.exception.StorageException;
import org.hyperledger.besu.plugin.services.storage.SegmentIdentifier;
import org.hyperledger.besu.plugin.services.storage.rocksdb.RocksDBMetricsFactory;
import org.hyperledger.besu.plugin.services.storage.rocksdb.RocksDbSegmentIdentifier;
import org.hyperledger.besu.plugin.services.storage.rocksdb.configuration.RocksDBConfiguration;
import org.hyperledger.besu.services.kvstore.SegmentedKeyValueStorage;
import org.hyperledger.besu.services.kvstore.SegmentedKeyValueStorageTransactionTransitionValidatorDecorator;

import java.util.List;

import org.rocksdb.OptimisticTransactionDB;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteOptions;

/** Optimistic RocksDB Columnar key value storage */
public class OptimisticRocksDBColumnarKeyValueStorage extends RocksDBColumnarKeyValueStorage {
private final OptimisticTransactionDB db;

/**
* Instantiates a new Rocks db columnar key value optimistic storage.
*
* @param configuration the configuration
* @param segments the segments
* @param ignorableSegments the ignorable segments
* @param metricsSystem the metrics system
* @param rocksDBMetricsFactory the rocks db metrics factory
* @throws StorageException the storage exception
*/
public OptimisticRocksDBColumnarKeyValueStorage(
final RocksDBConfiguration configuration,
final List<SegmentIdentifier> segments,
final List<SegmentIdentifier> ignorableSegments,
final MetricsSystem metricsSystem,
final RocksDBMetricsFactory rocksDBMetricsFactory)
throws StorageException {
super(configuration, segments, ignorableSegments, metricsSystem, rocksDBMetricsFactory);
try {

db =
OptimisticTransactionDB.open(
options, configuration.getDatabaseDir().toString(), columnDescriptors, columnHandles);
initMetrics();
initColumnHandler();

} catch (final RocksDBException e) {
throw new StorageException(e);
}
}

@Override
RocksDB getDB() {
return db;
}

/**
* Start a transaction
*
* @return the new transaction started
* @throws StorageException the storage exception
*/
@Override
public SegmentedKeyValueStorage.Transaction<RocksDbSegmentIdentifier> startTransaction()
throws StorageException {
throwIfClosed();
final WriteOptions writeOptions = new WriteOptions();
writeOptions.setIgnoreMissingColumnFamilies(true);
return new SegmentedKeyValueStorageTransactionTransitionValidatorDecorator<>(
new RocksDBColumnarKeyValueStorage.RocksDbTransaction(
db.beginTransaction(writeOptions), writeOptions));
}

/**
* Take snapshot RocksDb columnar key value snapshot.
*
* @param segment the segment
* @return the RocksDb columnar key value snapshot
* @throws StorageException the storage exception
*/
@Override
public RocksDBColumnarKeyValueSnapshot takeSnapshot(final RocksDbSegmentIdentifier segment)
throws StorageException {
throwIfClosed();
return new RocksDBColumnarKeyValueSnapshot(db, segment, metrics);
}
}
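RocksDB's OptimisticTransactionDB, which this class wraps, buffers writes and validates them only at commit time, failing the transaction if a conflicting write slipped in first. A minimal in-memory sketch of that commit-time validation model (pure Java, no RocksDB dependency; all names here are illustrative, not Besu or RocksDB APIs):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of optimistic conflict detection: each key carries a version,
// a transaction records the version it saw, and commit fails if any version moved on.
public class OptimisticSketch {
  static final class Store {
    final Map<String, String> values = new HashMap<>();
    final Map<String, Long> versions = new HashMap<>();

    Txn begin() {
      return new Txn(this);
    }
  }

  static final class Txn {
    private final Store store;
    private final Map<String, Long> readSet = new HashMap<>();
    private final Map<String, String> writeSet = new HashMap<>();

    Txn(final Store store) {
      this.store = store;
    }

    void put(final String key, final String value) {
      // Remember the version this transaction based its write on.
      readSet.putIfAbsent(key, store.versions.getOrDefault(key, 0L));
      writeSet.put(key, value);
    }

    /** Commit-time validation: succeed only if every observed version is unchanged. */
    boolean commit() {
      for (final Map.Entry<String, Long> read : readSet.entrySet()) {
        if (!read.getValue().equals(store.versions.getOrDefault(read.getKey(), 0L))) {
          return false; // conflict: another transaction committed first
        }
      }
      writeSet.forEach(
          (k, v) -> {
            store.values.put(k, v);
            store.versions.merge(k, 1L, Long::sum);
          });
      return true;
    }
  }

  public static void main(final String[] args) {
    final Store store = new Store();
    final Txn first = store.begin();
    final Txn second = store.begin();
    first.put("balance", "10");
    second.put("balance", "20");
    System.out.println(first.commit());  // true: first to commit wins
    System.out.println(second.commit()); // false: commit-time conflict
  }
}
```

This model makes conflicts cheap when contention is low, which is why the factory keeps it as the default for non-FOREST storage.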
@@ -0,0 +1,103 @@
/*
* Copyright Hyperledger Besu Contributors.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*
* SPDX-License-Identifier: Apache-2.0
*/
package org.hyperledger.besu.plugin.services.storage.rocksdb.segmented;

import org.hyperledger.besu.plugin.services.MetricsSystem;
import org.hyperledger.besu.plugin.services.exception.StorageException;
import org.hyperledger.besu.plugin.services.storage.SegmentIdentifier;
import org.hyperledger.besu.plugin.services.storage.rocksdb.RocksDBMetricsFactory;
import org.hyperledger.besu.plugin.services.storage.rocksdb.RocksDbSegmentIdentifier;
import org.hyperledger.besu.plugin.services.storage.rocksdb.configuration.RocksDBConfiguration;
import org.hyperledger.besu.services.kvstore.SegmentedKeyValueStorageTransactionTransitionValidatorDecorator;

import java.util.List;

import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.TransactionDB;
import org.rocksdb.WriteOptions;

/** Pessimistic RocksDB Columnar key value storage */
public class PessimisticRocksDBColumnarKeyValueStorage extends RocksDBColumnarKeyValueStorage {

private final TransactionDB db;

/**
* The constructor of PessimisticRocksDBColumnarKeyValueStorage
*
* @param configuration the RocksDB configuration
* @param segments the segments
* @param ignorableSegments the ignorable segments
* @param metricsSystem the metrics system
* @param rocksDBMetricsFactory the rocksdb metrics factory
* @throws StorageException the storage exception
*/
public PessimisticRocksDBColumnarKeyValueStorage(
final RocksDBConfiguration configuration,
final List<SegmentIdentifier> segments,
final List<SegmentIdentifier> ignorableSegments,
final MetricsSystem metricsSystem,
final RocksDBMetricsFactory rocksDBMetricsFactory)
throws StorageException {
super(configuration, segments, ignorableSegments, metricsSystem, rocksDBMetricsFactory);
try {

db =
TransactionDB.open(
options,
txOptions,
configuration.getDatabaseDir().toString(),
columnDescriptors,
columnHandles);
initMetrics();
initColumnHandler();

} catch (final RocksDBException e) {
throw new StorageException(e);
}
}

@Override
RocksDB getDB() {
return db;
}

/**
* Start a transaction
*
* @return the new transaction started
* @throws StorageException the storage exception
*/
@Override
public Transaction<RocksDbSegmentIdentifier> startTransaction() throws StorageException {
throwIfClosed();
final WriteOptions writeOptions = new WriteOptions();
writeOptions.setIgnoreMissingColumnFamilies(true);
return new SegmentedKeyValueStorageTransactionTransitionValidatorDecorator<>(
new RocksDbTransaction(db.beginTransaction(writeOptions), writeOptions));
}

/**
* Not supported for Pessimistic.
*
* @param segment the segment
* @return the RocksDb columnar key value snapshot
* @throws StorageException the storage exception
*/
@Override
public RocksDBColumnarKeyValueSnapshot takeSnapshot(final RocksDbSegmentIdentifier segment) {
throw new UnsupportedOperationException("Not supported");
}
}
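TransactionDB, which this pessimistic class wraps, takes the opposite approach: it locks keys at write time, so a conflicting writer is rejected (or blocked) immediately rather than discovering the conflict at commit. A rough sketch of write-time locking (pure Java, illustrative names only; the real TransactionDB blocks with configurable lock timeouts rather than failing fast as this toy does):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Minimal sketch of pessimistic transactions: a write acquires a per-key lock
// up front, so a concurrent writer is refused at write time, not at commit.
public class PessimisticSketch {
  static final class Store {
    final Map<String, String> values = new ConcurrentHashMap<>();
    final Map<String, Semaphore> locks = new ConcurrentHashMap<>();

    Txn begin() {
      return new Txn(this);
    }
  }

  static final class Txn {
    private final Store store;
    private final Map<String, String> writeSet = new HashMap<>();

    Txn(final Store store) {
      this.store = store;
    }

    /** Acquire the key's lock immediately; fail fast if another txn holds it. */
    boolean put(final String key, final String value) {
      final Semaphore lock = store.locks.computeIfAbsent(key, k -> new Semaphore(1));
      if (!lock.tryAcquire()) {
        return false; // conflict detected at write time
      }
      writeSet.put(key, value);
      return true;
    }

    /** Apply buffered writes and release the locks taken by put(). */
    void commit() {
      writeSet.forEach(store.values::put);
      writeSet.keySet().forEach(k -> store.locks.get(k).release());
    }
  }

  public static void main(final String[] args) {
    final Store store = new Store();
    final Txn first = store.begin();
    final Txn second = store.begin();
    System.out.println(first.put("balance", "10"));  // true: lock acquired
    System.out.println(second.put("balance", "20")); // false: key already locked
    first.commit();
    System.out.println(store.values.get("balance")); // 10
  }
}
```

Because a pessimistic transaction never holds a speculative write buffer that lost a race, commits here cannot fail, which fits FOREST mode's simpler needs in this PR.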