Commit

Merge branch 'master' into matt-user/refactor-block-query

matt-user authored Nov 15, 2024
2 parents 8b8169a + d013a99 commit a29718d
Showing 37 changed files with 1,514 additions and 177 deletions.
9 changes: 8 additions & 1 deletion CHANGELOG.md
@@ -20,14 +20,22 @@ and this project adheres to [Semantic Versioning](http://semver.org/).
- [2362](https://github.com/FuelLabs/fuel-core/pull/2362): Added a new request_response protocol version `/fuel/req_res/0.0.2`. Unlike `/fuel/req_res/0.0.1`, which returns an empty response when a request cannot be fulfilled, this version returns more meaningful error codes. Nodes still support version `0.0.1` of the protocol to guarantee backward compatibility with older fuel-core nodes. Empty responses received from nodes using the old protocol `/fuel/req_res/0.0.1` are automatically converted into the error `ProtocolV1EmptyResponse` with error code 0, which is currently the only error code implemented. More specific error codes will be added in the future.
- [2386](https://github.com/FuelLabs/fuel-core/pull/2386): Add a flag to define the maximum number of file descriptors that RocksDB can use. By default it's half of the OS limit.
- [2376](https://github.com/FuelLabs/fuel-core/pull/2376): Add a way to fetch transactions in P2P without specifying a peer.
- [2327](https://github.com/FuelLabs/fuel-core/pull/2327): Add more service tests and more checks of the pool. Also add high-level documentation for users of the pool and for contributors.
- [2416](https://github.com/FuelLabs/fuel-core/issues/2416): Define the `GasPriceServiceV1` task.


### Fixed
- [2366](https://github.com/FuelLabs/fuel-core/pull/2366): The `importer_gas_price_for_block` metric is properly collected.
- [2369](https://github.com/FuelLabs/fuel-core/pull/2369): The `transaction_insertion_time_in_thread_pool_milliseconds` metric is properly collected.
- [2413](https://github.com/FuelLabs/fuel-core/issues/2413): Block production now errors immediately if it is unable to lock the mutex.
- [2389](https://github.com/FuelLabs/fuel-core/pull/2389): Fix construction of reverse iterator in RocksDB.

### Changed
- [2378](https://github.com/FuelLabs/fuel-core/pull/2378): Use the cached hash of the topic instead of recalculating it for each published gossip message.
- [2377](https://github.com/FuelLabs/fuel-core/pull/2377): Add more errors that can be returned as responses when using protocol `/fuel/req_res/0.0.2`. The supported errors are `ProtocolV1EmptyResponse` (status code `0`) for converting empty responses sent via protocol `/fuel/req_res/0.0.1`, `RequestedRangeTooLarge` (status code `1`) if the client requests too large a range of objects such as sealed block headers or transactions, `Timeout` (status code `2`) if the remote peer takes too long to fulfill a request, and `SyncProcessorOutOfCapacity` if the remote peer is already fulfilling too many requests concurrently.

#### Breaking
- [2258](https://github.com/FuelLabs/fuel-core/pull/2258): Updated the `messageProof` GraphQL schema to return a non-nullable `MessageProof`.

## [Version 0.40.0]

@@ -56,7 +64,6 @@ and this project adheres to [Semantic Versioning](http://semver.org/).
- [2324](https://github.com/FuelLabs/fuel-core/pull/2324): Added metrics for sync, async processor and for all GraphQL queries.
- [2320](https://github.com/FuelLabs/fuel-core/pull/2320): Added new CLI flag `graphql-max-resolver-recursive-depth` to limit recursion within resolvers. The default value is "1".


### Fixed
- [2320](https://github.com/FuelLabs/fuel-core/issues/2320): Prevent `/health` and `/v1/health` from being throttled by the concurrency limiter.
- [2322](https://github.com/FuelLabs/fuel-core/issues/2322): Set the salt of genesis contracts to zero on execution.
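The entries for protocol `/fuel/req_res/0.0.2` above describe the response error codes by name. As a minimal illustration only, the codes listed in those entries could be mirrored as below; the enum name is hypothetical and not the actual fuel-core type, and the status code for `SyncProcessorOutOfCapacity` is not stated in the changelog, so it is omitted here.

```rust
/// Hypothetical mirror of the error codes described in the changelog entries
/// for protocol `/fuel/req_res/0.0.2`; not the actual fuel-core type.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
#[repr(u8)]
pub enum ResponseErrorCode {
    /// Converted from an empty response sent via the old version `0.0.1` of the protocol.
    ProtocolV1EmptyResponse = 0,
    /// The client requested too large a range of objects,
    /// e.g. sealed block headers or transactions.
    RequestedRangeTooLarge = 1,
    /// The remote peer took too long to fulfill the request.
    Timeout = 2,
    // `SyncProcessorOutOfCapacity` is also listed in the changelog entry,
    // but its status code is not stated there, so it is left out of this sketch.
}
```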
2 changes: 1 addition & 1 deletion crates/client/assets/debugAdapterProtocol.json
@@ -1440,7 +1440,7 @@
{ "$ref": "#/definitions/Request" },
{
"type": "object",
"description": "Replaces all existing instruction breakpoints. Typically, instruction breakpoints would be set from a diassembly window. \nTo clear all instruction breakpoints, specify an empty array.\nWhen an instruction breakpoint is hit, a 'stopped' event (with reason 'instruction breakpoint') is generated.\nClients should only call this request if the capability 'supportsInstructionBreakpoints' is true.",
"description": "Replaces all existing instruction breakpoints. Typically, instruction breakpoints would be set from a disassembly window. \nTo clear all instruction breakpoints, specify an empty array.\nWhen an instruction breakpoint is hit, a 'stopped' event (with reason 'instruction breakpoint') is generated.\nClients should only call this request if the capability 'supportsInstructionBreakpoints' is true.",
"properties": {
"command": {
"type": "string",
2 changes: 1 addition & 1 deletion crates/client/assets/schema.sdl
@@ -980,7 +980,7 @@ type Query {
"""
owner: Address, first: Int, after: String, last: Int, before: String
): MessageConnection!
messageProof(transactionId: TransactionId!, nonce: Nonce!, commitBlockId: BlockId, commitBlockHeight: U32): MessageProof
messageProof(transactionId: TransactionId!, nonce: Nonce!, commitBlockId: BlockId, commitBlockHeight: U32): MessageProof!
messageStatus(nonce: Nonce!): MessageStatus!
relayedTransactionStatus(
"""
11 changes: 2 additions & 9 deletions crates/client/src/client.rs
@@ -1141,7 +1141,7 @@ impl FuelClient {
nonce: &Nonce,
commit_block_id: Option<&BlockId>,
commit_block_height: Option<BlockHeight>,
) -> io::Result<Option<types::MessageProof>> {
) -> io::Result<types::MessageProof> {
let transaction_id: TransactionId = (*transaction_id).into();
let nonce: schema::Nonce = (*nonce).into();
let commit_block_id: Option<schema::BlockId> =
@@ -1153,14 +1153,7 @@
commit_block_id,
commit_block_height,
});

let proof = self
.query(query)
.await?
.message_proof
.map(TryInto::try_into)
.transpose()?;

let proof = self.query(query).await?.message_proof.try_into()?;
Ok(proof)
}

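For callers, the change above means `FuelClient::message_proof` now yields the proof directly rather than an `Option`. A minimal usage sketch, assuming the import paths below (illustrative, not verified against the crate layout), a placeholder endpoint URL, and `?`-style error handling:

```rust
// Illustrative import paths; exact re-exports may differ between crate versions.
use fuel_core_client::client::{types::MessageProof, FuelClient};
use fuel_core_types::fuel_tx::TxId;
use fuel_core_types::fuel_types::Nonce;

/// Sketch only: fetch a message proof with the updated, non-optional API.
async fn fetch_proof(tx_id: &TxId, nonce: &Nonce) -> std::io::Result<MessageProof> {
    // Placeholder endpoint; real deployments differ.
    let client = FuelClient::new("http://localhost:4000").expect("valid client endpoint");

    // Before this change the call returned `io::Result<Option<MessageProof>>`;
    // a missing proof now surfaces as `Err(..)` instead of `Ok(None)`.
    let proof = client.message_proof(tx_id, nonce, None, None).await?;
    Ok(proof)
}
```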
2 changes: 1 addition & 1 deletion crates/client/src/client/schema/message.rs
@@ -114,7 +114,7 @@ pub struct MessageProofQuery {
commitBlockId: $commit_block_id,
commitBlockHeight: $commit_block_height
)]
pub message_proof: Option<MessageProof>,
pub message_proof: MessageProof,
}

#[derive(cynic::QueryFragment, Clone, Debug)]
4 changes: 2 additions & 2 deletions crates/fuel-core/src/schema/message.rs
@@ -136,7 +136,7 @@ impl MessageQuery {
nonce: Nonce,
commit_block_id: Option<BlockId>,
commit_block_height: Option<U32>,
) -> async_graphql::Result<Option<MessageProof>> {
) -> async_graphql::Result<MessageProof> {
let query = ctx.read_view()?;
let height = match (commit_block_id, commit_block_height) {
(Some(commit_block_id), None) => {
@@ -157,7 +157,7 @@
height,
)?;

Ok(Some(MessageProof(proof)))
Ok(MessageProof(proof))
}

#[graphql(complexity = "query_costs().storage_read + child_complexity")]
Original file line number Diff line number Diff line change
@@ -51,7 +51,13 @@ where
fn get(&self, key: &[u8], column: Self::Column) -> StorageResult<Option<Value>> {
let read_history = &self.read_db;
let height_key = height_key(key, &self.height);
let options = ReadOptions::default();
let mut options = ReadOptions::default();
// We need this option because the iterator starts in the `height_key` prefix section,
// but if there is no data in that section, we expect it to fall through to another prefix section.
// Without this option it is not guaranteed that we reach the correct next prefix section.
// Source: https://github.com/facebook/rocksdb/wiki/Prefix-Seek#how-to-ignore-prefix-bloom-filters-in-read
// and https://github.com/facebook/rocksdb/wiki/Prefix-Seek#general-prefix-seek-api
options.set_total_order_seek(true);
let nearest_modification = read_history
.iterator::<KeyAndValue>(
Column::HistoricalDuplicateColumn(column),
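The fix above enables total-order seek so a lookup that starts in the `height_key` prefix section can fall through to the next section. A small standalone sketch of the same option with the `rocksdb` crate (recent versions, where iterators yield `Result` items); the path, keys, and 2-byte prefix extractor are placeholders rather than the fuel-core column setup:

```rust
use rocksdb::{Direction, IteratorMode, Options, ReadOptions, SliceTransform, DB};

fn main() -> Result<(), rocksdb::Error> {
    // Stand-in for a column configured with a prefix extractor;
    // the 2-byte length is an illustrative choice.
    let mut opts = Options::default();
    opts.create_if_missing(true);
    opts.set_prefix_extractor(SliceTransform::create_fixed_prefix(2));
    let db = DB::open(&opts, "/tmp/total-order-seek-demo")?;

    db.put(b"aa1", b"v1")?;
    db.put(b"bb1", b"v2")?;

    // Without total-order seek, an iterator started in the empty `ab` prefix section
    // is not guaranteed to continue into the `bb` section, because prefix bloom
    // filters may be consulted. Enabling it restores plain ordered iteration.
    let mut read_opts = ReadOptions::default();
    read_opts.set_total_order_seek(true);

    let iter = db.iterator_opt(
        IteratorMode::From(b"ab0".as_slice(), Direction::Forward),
        read_opts,
    );
    for item in iter {
        let (key, value) = item?;
        println!("{:?} => {:?}", key, value); // expect only "bb1" here
    }
    Ok(())
}
```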
169 changes: 140 additions & 29 deletions crates/fuel-core/src/state/rocks_db.rs
@@ -443,12 +443,9 @@ where
}

/// RocksDB prefix iteration doesn't support reverse order,
/// but seeking the start key and iterating in reverse order works.
/// So we can create a workaround. We need to find the next available
/// element and use it as an anchor for reverse iteration,
/// but skip the first element to jump on the previous prefix.
/// If we can't find the next element, we are at the end of the list,
/// so we can use `IteratorMode::End` to start reverse iteration.
/// so we need to force the RocksDB iterator into total order
/// across prefixes; that way we can take the next prefix
/// as the start of the iterator and iterate in reverse.
fn reverse_prefix_iter<T>(
&self,
prefix: &[u8],
@@ -457,28 +454,24 @@
where
T: ExtractItem,
{
let maybe_next_item = next_prefix(prefix.to_vec())
.and_then(|next_prefix| {
self.iter_store(
column,
Some(next_prefix.as_slice()),
None,
IterDirection::Forward,
)
.next()
})
.and_then(|res| res.ok());

if let Some((next_start_key, _)) = maybe_next_item {
let iter_mode = IteratorMode::From(
next_start_key.as_slice(),
rocksdb::Direction::Reverse,
);
let reverse_iterator = next_prefix(prefix.to_vec()).map(|next_prefix| {
let mut opts = self.read_options();
// We need this option because the iterator starts in the `next_prefix` prefix section
// and continues into the `prefix` section. Without this option, correct
// iteration across prefix sections isn't guaranteed.
// Source: https://github.com/facebook/rocksdb/wiki/Prefix-Seek#how-to-ignore-prefix-bloom-filters-in-read
// and https://github.com/facebook/rocksdb/wiki/Prefix-Seek#general-prefix-seek-api
opts.set_total_order_seek(true);
self.iterator::<T>(
column,
opts,
IteratorMode::From(next_prefix.as_slice(), rocksdb::Direction::Reverse),
)
});

if let Some(iterator) = reverse_iterator {
let prefix = prefix.to_vec();
self
.iterator::<T>(column, self.read_options(), iter_mode)
// Skip the element under the `next_start_key` key.
.skip(1)
iterator
.take_while(move |item| {
if let Ok(item) = item {
T::starts_with(item, prefix.as_slice())
@@ -612,8 +605,14 @@ where
// start iterating in a certain direction from the start key
let iter_mode =
IteratorMode::From(start, convert_to_rocksdb_direction(direction));
self.iterator::<T>(column, self.read_options(), iter_mode)
.into_boxed()
let mut opts = self.read_options();
// We need this option because the iterator starts in the `start` prefix section
// and continues into the following sections. Without this option, correct
// iteration across prefix sections isn't guaranteed.
// Source: https://github.com/facebook/rocksdb/wiki/Prefix-Seek#how-to-ignore-prefix-bloom-filters-in-read
// and https://github.com/facebook/rocksdb/wiki/Prefix-Seek#general-prefix-seek-api
opts.set_total_order_seek(true);
self.iterator::<T>(column, opts, iter_mode).into_boxed()
}
(Some(prefix), Some(start)) => {
// TODO: Maybe we want to allow the `start` to be without a `prefix` in the future.
@@ -1218,4 +1217,116 @@ mod tests {
let _ = open_with_part_of_columns
.expect("Should open the database with shorter number of columns");
}

#[test]
fn iter_store__reverse_iterator__no_target_prefix() {
// Given
let (mut db, _tmp) = create_db();
let value = Arc::new(Vec::new());
let key_1 = [1, 1];
let key_2 = [2, 2];
let key_3 = [9, 3];
let key_4 = [10, 0];
db.put(&key_1, Column::Metadata, value.clone()).unwrap();
db.put(&key_2, Column::Metadata, value.clone()).unwrap();
db.put(&key_3, Column::Metadata, value.clone()).unwrap();
db.put(&key_4, Column::Metadata, value.clone()).unwrap();

// When
let db_iter = db
.iter_store(
Column::Metadata,
Some(vec![5].as_slice()),
None,
IterDirection::Reverse,
)
.map(|item| item.map(|(key, _)| key))
.collect::<Vec<_>>();

// Then
assert_eq!(db_iter, vec![]);
}

#[test]
fn iter_store__reverse_iterator__target_prefix_at_the_middle() {
// Given
let (mut db, _tmp) = create_db();
let value = Arc::new(Vec::new());
let key_1 = [1, 1];
let key_2 = [2, 2];
let key_3 = [2, 3];
let key_4 = [10, 0];
db.put(&key_1, Column::Metadata, value.clone()).unwrap();
db.put(&key_2, Column::Metadata, value.clone()).unwrap();
db.put(&key_3, Column::Metadata, value.clone()).unwrap();
db.put(&key_4, Column::Metadata, value.clone()).unwrap();

// When
let db_iter = db
.iter_store(
Column::Metadata,
Some(vec![2].as_slice()),
None,
IterDirection::Reverse,
)
.map(|item| item.map(|(key, _)| key))
.collect::<Vec<_>>();

// Then
assert_eq!(db_iter, vec![Ok(key_3.to_vec()), Ok(key_2.to_vec())]);
}

#[test]
fn iter_store__reverse_iterator__target_prefix_at_the_end() {
// Given
let (mut db, _tmp) = create_db();
let value = Arc::new(Vec::new());
let key_1 = [1, 1];
let key_2 = [2, 2];
let key_3 = [2, 3];
db.put(&key_1, Column::Metadata, value.clone()).unwrap();
db.put(&key_2, Column::Metadata, value.clone()).unwrap();
db.put(&key_3, Column::Metadata, value.clone()).unwrap();

// When
let db_iter = db
.iter_store(
Column::Metadata,
Some(vec![2].as_slice()),
None,
IterDirection::Reverse,
)
.map(|item| item.map(|(key, _)| key))
.collect::<Vec<_>>();

// Then
assert_eq!(db_iter, vec![Ok(key_3.to_vec()), Ok(key_2.to_vec())]);
}

#[test]
fn iter_store__reverse_iterator__target_prefix_at_the_end__overflow() {
// Given
let (mut db, _tmp) = create_db();
let value = Arc::new(Vec::new());
let key_1 = [1, 1];
let key_2 = [255, 254];
let key_3 = [255, 255];
db.put(&key_1, Column::Metadata, value.clone()).unwrap();
db.put(&key_2, Column::Metadata, value.clone()).unwrap();
db.put(&key_3, Column::Metadata, value.clone()).unwrap();

// When
let db_iter = db
.iter_store(
Column::Metadata,
Some(vec![255].as_slice()),
None,
IterDirection::Reverse,
)
.map(|item| item.map(|(key, _)| key))
.collect::<Vec<_>>();

// Then
assert_eq!(db_iter, vec![Ok(key_3.to_vec()), Ok(key_2.to_vec())]);
}
}
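Putting the pieces above together, here is a standalone sketch of the reverse prefix iteration that the new code and tests exercise: compute the next prefix (increment the last non-`0xFF` byte, `None` on overflow), start a reverse iterator there with total-order seek (or from the end of the column on overflow), and stop at the first key that no longer starts with the target prefix. The helper names, single-column setup, and path are illustrative rather than the fuel-core implementation, and the sketch assumes stored keys are strictly longer than the prefix.

```rust
use rocksdb::{Direction, IteratorMode, Options, ReadOptions, DB};

/// Smallest key strictly greater than every key starting with `prefix`,
/// or `None` if the prefix is all `0xFF` bytes (overflow).
fn next_prefix(mut prefix: Vec<u8>) -> Option<Vec<u8>> {
    while let Some(last) = prefix.last_mut() {
        if *last == u8::MAX {
            prefix.pop();
        } else {
            *last += 1;
            return Some(prefix);
        }
    }
    None
}

/// Collect all keys starting with `prefix`, in reverse (descending) order.
fn reverse_prefix_keys(db: &DB, prefix: &[u8]) -> Result<Vec<Vec<u8>>, rocksdb::Error> {
    // Total-order seek lets the reverse iterator cross from the `next_prefix`
    // section back into the `prefix` section despite any prefix bloom filters.
    let mut opts = ReadOptions::default();
    opts.set_total_order_seek(true);

    let anchor = next_prefix(prefix.to_vec());
    let iter = match anchor.as_deref() {
        // Start just past the prefix range and walk backwards.
        Some(start) => db.iterator_opt(IteratorMode::From(start, Direction::Reverse), opts),
        // Prefix like [0xFF]: nothing sorts after it, so start from the very end.
        None => db.iterator_opt(IteratorMode::End, opts),
    };

    let mut keys = Vec::new();
    for item in iter {
        let (key, _value) = item?;
        if !key.starts_with(prefix) {
            // Assumes stored keys are strictly longer than the prefix, so the first
            // non-matching key means the iterator has left the prefix range.
            break;
        }
        keys.push(key.to_vec());
    }
    Ok(keys)
}

fn main() -> Result<(), rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    let db = DB::open(&opts, "/tmp/reverse-prefix-demo")?;
    for key in [[1u8, 1], [2, 2], [2, 3], [10, 0]] {
        db.put(key, b"")?;
    }
    // Mirrors the "target prefix at the middle" test above: expect [2, 3] then [2, 2].
    println!("{:?}", reverse_prefix_keys(&db, &[2])?);
    Ok(())
}
```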
1 change: 1 addition & 0 deletions crates/fuel-gas-price-algorithm/src/v1.rs
@@ -313,6 +313,7 @@ impl AlgorithmUpdaterV1 {
if !height_range.is_empty() {
self.da_block_update(height_range, range_cost)?;
self.recalculate_projected_cost();
self.update_da_gas_price();
}
Ok(())
}
