Stricter match of BlockError in lookup sync #6321
base: unstable
Conversation
for peer in peer_group.of_index(index) {
    cx.report_peer(
        *peer,
        PeerAction::MidToleranceError,
I think we should up this to LowToleranceError
);
let recoverable = match other.rpc_scoring() {
    ErrorCategory::Internal { recoverable } => {
        if matches!(other, BlockError::ExecutionPayloadError(_)) {
I kept this extra condition from the original match, as I imagine that when the EL goes offline the node would spam non-actionable errors.
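To make that special case concrete, here is a minimal, self-contained sketch; the stub types and the trivial `rpc_scoring` below stand in for the real `BlockError`/`ErrorCategory` in the PR and are only illustrative:

```rust
// Stub types standing in for Lighthouse's real `BlockError` / `ErrorCategory`;
// names and shapes here are illustrative assumptions, not the PR's code.
enum BlockError {
    ExecutionPayloadError(String),
    BeaconChainError(String),
}

enum ErrorCategory {
    Internal { recoverable: bool },
}

impl BlockError {
    // Hypothetical scoring: assume internal errors default to non-recoverable.
    fn rpc_scoring(&self) -> ErrorCategory {
        ErrorCategory::Internal { recoverable: false }
    }
}

fn is_recoverable(other: &BlockError) -> bool {
    match other.rpc_scoring() {
        ErrorCategory::Internal { recoverable } => {
            // Keep the original special case: an offline or unsynced EL produces
            // `ExecutionPayloadError`s that are not the peer's fault, so treat
            // them as recoverable and keep retrying instead of giving up.
            if matches!(other, BlockError::ExecutionPayloadError(_)) {
                true
            } else {
                recoverable
            }
        }
    }
}

fn main() {
    assert!(is_recoverable(&BlockError::ExecutionPayloadError("EL offline".into())));
    assert!(!is_recoverable(&BlockError::BeaconChainError("db error".into())));
}
```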
Force-pushed from a2ffbd2 to 5a69a7b
Resolved (outdated) review comments on:
beacon_node/network/src/network_beacon_processor/sync_methods.rs
beacon_node/beacon_chain/src/data_availability_checker/error.rs
Force-pushed from 5a69a7b to 33d10e5
@pawanjay176 I have moved the big match into the network beacon processor. This is analogous to how we handle the result of processing chain segments and gossip objects. I have applied all your comments to the current version. I think the PR makes much more sense now; ready for a re-review.
"Error processing block component"; | ||
"block_root" => %block_root, | ||
"err" => ?err, | ||
); |
Do we want to log internal errors at a higher level than debug? If yes, I can move the log statement below and match on the type of ErrorCategory.
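For illustration, a small standalone sketch of what matching on the category to pick a log level could look like; plain `println!`/`eprintln!` stand in for the project's logger, and the `ErrorCategory` shape is assumed from the discussion in this PR:

```rust
// Assumed shape of the error category; mirrors the variants discussed in this
// PR but is not the exact definition from the codebase.
enum ErrorCategory {
    Internal { recoverable: bool },
    Malicious { retry: bool },
}

// Internal errors are our own node's problem, so surface them loudly; errors
// attributable to a peer stay at debug because they are expected noise.
fn log_block_component_error(block_root: &str, category: &ErrorCategory) {
    match category {
        ErrorCategory::Internal { recoverable } => eprintln!(
            "ERROR Error processing block component, block_root: {block_root}, recoverable: {recoverable}"
        ),
        ErrorCategory::Malicious { .. } => println!(
            "DEBUG Error processing block component, block_root: {block_root}"
        ),
    }
}

fn main() {
    log_block_component_error("0xabc…", &ErrorCategory::Internal { recoverable: true });
    log_block_component_error("0xdef…", &ErrorCategory::Malicious { retry: false });
}
```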
# Conflicts:
#   beacon_node/network/src/sync/tests/lookups.rs
I still haven't finished reviewing this but I'll try to get back to this later today or Monday.
@@ -983,7 +985,7 @@ impl<T: BeaconChainTypes> NetworkBeaconProcessor<T> {
                     "result" => "imported block and custody columns",
                     "block_hash" => %hash,
                 );
-                self.chain.recompute_head_at_current_slot().await;
+                // Head will be recomputed in `handle_lookup_sync_processing_result`
This doesn't cover the gossip case right?
Feels like it might be easier to move the recompute_head logic back here, otherwise we'll have to either do it in two places or pass a flag here and conditionally recompute the head.
/// The error also indicates which block component index is malicious, if applicable.
Malicious { retry: bool, index: usize },
I found `index` slightly confusing - what do you think about calling this `sidecar_index` or `data_sidecar_index`, and potentially making this an `Option`, given we're using this for blocks as well?
Does it make sense to call this `Faulty` instead? We don't know if the peer is malicious - I think it might be more consistent with the terminology used in sync, although it doesn't really matter; a faulty peer and a malicious peer both send invalid data.
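Combining both suggestions, a hypothetical shape for the variant might look like the sketch below; variant and field names are illustrative, not the PR's final code:

```rust
/// Illustrative version of the category enum with the renamed variant and an
/// optional sidecar index, so a plain block (no sidecar) maps to `None`.
#[derive(Debug)]
enum ErrorCategory {
    /// Our node hit an internal problem; the peer is not at fault.
    Internal { recoverable: bool },
    /// The peer sent invalid data (whether malicious or merely faulty).
    /// `data_sidecar_index` is `Some(i)` for blob/data-column sidecars and
    /// `None` when the offending component is the block itself.
    Faulty {
        retry: bool,
        data_sidecar_index: Option<u64>,
    },
}

fn main() {
    // Invalid block: no sidecar index applies.
    let block_err = ErrorCategory::Faulty { retry: false, data_sidecar_index: None };
    // Invalid blob at index 0: distinguishable from "no index" thanks to Option.
    let blob_err = ErrorCategory::Faulty { retry: false, data_sidecar_index: Some(0) };
    println!("{block_err:?} / {blob_err:?}");
}
```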
@@ -214,7 +269,9 @@ impl<T: BeaconChainTypes> NetworkBeaconProcessor<T> {
         // Sync handles these results
         self.send_sync_message(SyncMessage::BlockComponentProcessed {
             process_type,
-            result: result.into(),
+            result: self
+                .handle_lookup_sync_processing_result(block_root, result)
Looks like we're potentially recomputing head twice here - once above at `self.chain.recompute_head_at_current_slot().await;` and again in `self.handle_lookup_sync_processing_result`.
pub fn malicious_no_retry() -> Self {
    Self::Malicious {
        retry: false,
        index: 0,
I think it makes sense to make this an `Option`, otherwise we can't distinguish between index `0` and `None`.
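A short sketch of how the constructors could look with an `Option`, so the no-index case is explicit rather than a sentinel `0`; the names are again illustrative, not the actual code:

```rust
#[derive(Debug)]
enum ErrorCategory {
    Malicious { retry: bool, index: Option<usize> },
}

impl ErrorCategory {
    /// No specific component to blame: use `None` instead of a fake index 0.
    pub fn malicious_no_retry() -> Self {
        Self::Malicious { retry: false, index: None }
    }

    /// A specific component index is known to be invalid.
    pub fn malicious_no_retry_at(index: usize) -> Self {
        Self::Malicious { retry: false, index: Some(index) }
    }
}

fn main() {
    println!("{:?}", ErrorCategory::malicious_no_retry());
    println!("{:?}", ErrorCategory::malicious_no_retry_at(0));
}
```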
// unreachable, this error is only part of gossip
BlockError::BlobNotRequired(_) => ErrorCategory::malicious_retry().into(),
// Unreachable: This variants never happen in lookup sync, only in range sync.
// Does not matter what we set here, just setting `internal_recoverable` to
Suggested change:
-// Does not matter what we set here, just setting `internal_recoverable` to
+// Does not matter what we set here, just setting `internal_retry` to
// Unreachable: This variants never happen in lookup sync, only in range sync.
// Does not matter what we set here, just setting `internal_recoverable` to
// put something.
BlockError::NonLinearParentRoots | BlockError::NonLinearSlots => {
For a code path that's expected to be unreachable, is it worth logging a warning here?
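One standalone way to flag the supposedly-unreachable arm, using a plain log line and `debug_assert!` as stand-ins for the project's logging; the types are stubs and purely a sketch:

```rust
#[derive(Debug)]
enum BlockError {
    NonLinearParentRoots,
    NonLinearSlots,
    Other,
}

#[derive(Debug)]
enum ErrorCategory {
    InternalNoRetry,
    MaliciousRetry,
}

fn categorize_for_lookup_sync(err: &BlockError) -> ErrorCategory {
    match err {
        // Expected to be unreachable in lookup sync (only produced by range
        // sync): log loudly so we notice if that assumption is ever violated.
        BlockError::NonLinearParentRoots | BlockError::NonLinearSlots => {
            eprintln!("WARN unexpected BlockError in lookup sync: {err:?}");
            debug_assert!(false, "unreachable BlockError variant in lookup sync");
            ErrorCategory::InternalNoRetry
        }
        BlockError::Other => ErrorCategory::MaliciousRetry,
    }
}

fn main() {
    let _ = categorize_for_lookup_sync(&BlockError::Other);
}
```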
BlockError::Slashable => ErrorCategory::internal_no_retry().into(),
// TODO: BeaconChainError should be retried?
BlockError::BeaconChainError(_) | BlockError::InternalError(_) => {
    ErrorCategory::internal_no_retry().into()
Both of these variants cover quite a few scenarios; I'm not sure if it's safe to say "no retry" for them?
Issue Addressed
Adds a strict match without fallbacks to handle all processing errors in lookup sync explicitly. With this we can address issues like the two example problems below.
Example problem 1
Today, if a block has an incorrect state root we will download and process it 5 times. This is wasteful; we should discard the block immediately.
Example problem 2
Assume we introduce a new `BlockError` variant in a future fork or network change. Lookup sync is very sensitive code, and not handling this error variant properly may result in sync getting stuck.
Proposed Changes
In the network processor, convert `BlockError` to `LookupSyncProcessingResult`. Then the lookup sync logic is a simple match on `LookupSyncProcessingResult`, without fallbacks.
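To illustrate the shape of the proposed change, here is a compact, self-contained sketch of the two-step flow: the network processor maps every `BlockError` into a small result enum, and lookup sync then does an exhaustive match on that enum with no catch-all arm. The types here are simplified stand-ins for the PR's `BlockError` and `LookupSyncProcessingResult`, and the conversion function only plays the role of the PR's `handle_lookup_sync_processing_result`; its name and signature are assumptions for illustration.

```rust
// Illustrative stand-ins for the real Lighthouse types.
#[derive(Debug)]
enum BlockError {
    InvalidStateRoot,
    ExecutionPayloadError,
    SlotTooEarly,
}

/// What lookup sync actually needs to know about a processing failure.
#[derive(Debug)]
enum LookupSyncProcessingResult {
    Imported,
    /// The peer sent bad data; optionally retry from another peer.
    FaultyPeer { retry: bool },
    /// Our own node failed; retrying the same data may succeed.
    InternalError { recoverable: bool },
}

/// Step 1 (network beacon processor): exhaustive conversion with no `_ =>`
/// fallback, so adding a new `BlockError` variant forces a compile error here.
fn convert_block_error(err: BlockError) -> LookupSyncProcessingResult {
    match err {
        // Example problem 1: a bad state root can never become valid, so don't retry.
        BlockError::InvalidStateRoot => LookupSyncProcessingResult::FaultyPeer { retry: false },
        BlockError::ExecutionPayloadError => {
            LookupSyncProcessingResult::InternalError { recoverable: true }
        }
        BlockError::SlotTooEarly => LookupSyncProcessingResult::FaultyPeer { retry: true },
    }
}

/// Step 2 (lookup sync): a simple, fallback-free match on the small enum.
fn on_processing_result(result: LookupSyncProcessingResult) {
    match result {
        LookupSyncProcessingResult::Imported => println!("lookup complete"),
        LookupSyncProcessingResult::FaultyPeer { retry } => {
            println!("penalize peer, retry from another peer: {retry}")
        }
        LookupSyncProcessingResult::InternalError { recoverable } => {
            println!("do not penalize peer, retry same data: {recoverable}")
        }
    }
}

fn main() {
    on_processing_result(convert_block_error(BlockError::InvalidStateRoot));
}
```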