
Stricter match of BlockError in lookup sync #6321

Open · wants to merge 2 commits into unstable from peerdas-peergroup-scoring
Conversation

@dapplion dapplion commented Aug 28, 2024

Issue Addressed

Adds a strict match without fallbacks to handle all processing errors in lookup sync explicitly. With this we can:

  • Be explicit about which errors must result in penalties
  • Avoid retrying errors that are deterministic on the block root

Example problem 1

Today, if a block has an incorrect state root we will download and process it 5 times. This is wasteful; we should discard the block immediately.

Example problem 2

Assume we introduce a new BlockError variant in a future fork or network change. Lookup sync is very sensitive code, and not handling this error variant properly may result in sync getting stuck.

Proposed Changes

In the network processor, convert BlockError to LookupSyncProcessingResult.

Then the lookup sync logic is a simple match on LookupSyncProcessingResult, without fallbacks.
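The conversion described above could be sketched as follows. This is illustrative only: all variant names and the `on_processing_result` function are hypothetical, not Lighthouse's actual `BlockError` / `LookupSyncProcessingResult` definitions.

```rust
// Illustrative sketch only: variant names are hypothetical.
#[derive(Debug)]
enum BlockError {
    StateRootMismatch,     // deterministic on the block root
    ExecutionPayloadError, // e.g. EL offline
    BeaconChainError,      // internal failure
}

#[derive(Debug, PartialEq)]
enum LookupSyncProcessingResult {
    Imported,
    /// Peer sent invalid data: penalize. `retry: false` for errors that are
    /// deterministic on the block root, since re-downloading cannot help.
    Faulty { retry: bool },
    /// Internal error: no peer penalty.
    Internal { retry: bool },
}

// Strict match with no fallback arm: adding a new `BlockError` variant
// becomes a compile error until it is handled explicitly.
fn on_processing_result(result: Result<(), BlockError>) -> LookupSyncProcessingResult {
    match result {
        Ok(()) => LookupSyncProcessingResult::Imported,
        Err(BlockError::StateRootMismatch) => LookupSyncProcessingResult::Faulty { retry: false },
        Err(BlockError::ExecutionPayloadError) => LookupSyncProcessingResult::Internal { retry: true },
        Err(BlockError::BeaconChainError) => LookupSyncProcessingResult::Internal { retry: false },
    }
}

fn main() {
    // An invalid state root is discarded immediately instead of retried.
    assert_eq!(
        on_processing_result(Err(BlockError::StateRootMismatch)),
        LookupSyncProcessingResult::Faulty { retry: false }
    );
    println!("ok");
}
```

The key property is the absence of a `_ => ...` catch-all arm, which forces every future variant to be handled explicitly.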

```rust
for peer in peer_group.of_index(index) {
    cx.report_peer(
        *peer,
        PeerAction::MidToleranceError,
    );
```

Collaborator Author (dapplion):

I think we should up this to LowToleranceError.
```rust
let recoverable = match other.rpc_scoring() {
    ErrorCategory::Internal { recoverable } => {
        if matches!(other, BlockError::ExecutionPayloadError(_)) {
```

Collaborator Author (dapplion):

I kept this extra condition from the original match, as I imagine that when the EL goes offline the node would spam non-actionable errors.

@dapplion dapplion force-pushed the peerdas-peergroup-scoring branch from a2ffbd2 to 5a69a7b Compare August 28, 2024 13:46
@pawanjay176 pawanjay176 added the under-review A reviewer has only partially completed a review. label Aug 29, 2024
Outdated review threads (resolved):
  • beacon_node/network/src/sync/network_context.rs
  • beacon_node/beacon_chain/src/block_verification.rs (4 threads)
  • beacon_node/beacon_chain/src/data_availability_checker.rs
@dapplion dapplion changed the base branch from unstable to peerdas-peergroup-scoring-only September 9, 2024 11:55
@dapplion dapplion changed the title Attribute invalid column proof error to correct peer Stricter match of BlockError in sync Sep 9, 2024
@mergify mergify bot deleted the branch sigp:unstable September 23, 2024 18:49
@mergify mergify bot closed this Sep 23, 2024
@realbigsean realbigsean reopened this Sep 23, 2024
@realbigsean realbigsean changed the base branch from peerdas-peergroup-scoring-only to unstable September 23, 2024 23:07
@dapplion dapplion force-pushed the peerdas-peergroup-scoring branch from 5a69a7b to 33d10e5 Compare October 20, 2024 22:53
@dapplion dapplion changed the title Stricter match of BlockError in sync Stricter match of BlockError in lookup sync Oct 20, 2024
@dapplion (Collaborator Author) commented:

@pawanjay176 I have moved the big match into the network beacon processor. This is analogous to how we handle the result of processing chain segments and gossip objects.

I have applied all your comments to the current version.

I think the PR makes much more sense now; ready for a re-review.

```rust
    "Error processing block component";
    "block_root" => %block_root,
    "err" => ?err,
);
```

Collaborator Author (dapplion):

Do we want to log internal errors at a higher level than debug? If yes, I can move the log statement below and match on the type of ErrorCategory.

@dapplion dapplion added ready-for-review The code is ready for review syncing and removed under-review A reviewer has only partially completed a review. labels Oct 20, 2024
```
# Conflicts:
#	beacon_node/network/src/sync/tests/lookups.rs
```
@jimmygchen (Member) left a comment:

I still haven't finished reviewing this, but I'll try to get back to it later today or on Monday.

```diff
@@ -983,7 +985,7 @@ impl<T: BeaconChainTypes> NetworkBeaconProcessor<T> {
         "result" => "imported block and custody columns",
         "block_hash" => %hash,
     );
-    self.chain.recompute_head_at_current_slot().await;
+    // Head will be recomputed in `handle_lookup_sync_processing_result`
```

Member:

This doesn't cover the gossip case, right?

Member:

Feels like it might be easier to move the recompute_head logic back here; otherwise we'll have to either do it in two places or pass a flag here and conditionally recompute the head.

Comment on lines +70 to +71:

```rust
/// The error also indicates which block component index is malicious if applicable.
Malicious { retry: bool, index: usize },
```

Member:

I found `index` slightly confusing. What do you think about calling this `sidecar_index` or `data_sidecar_index`, and potentially making it an `Option`, given we're using this for blocks as well?

Member:

Does it make sense to call this `Faulty` instead? We don't know if the peer is malicious. I think it would be more consistent with the terminology used in sync, although it doesn't matter much: a faulty peer and a malicious peer both send invalid data.

```diff
@@ -214,7 +269,9 @@ impl<T: BeaconChainTypes> NetworkBeaconProcessor<T> {
     // Sync handles these results
     self.send_sync_message(SyncMessage::BlockComponentProcessed {
         process_type,
-        result: result.into(),
+        result: self
+            .handle_lookup_sync_processing_result(block_root, result)
```

Member:

Looks like we're potentially recomputing the head twice here: once above at `self.chain.recompute_head_at_current_slot().await;` and once in `self.handle_lookup_sync_processing_result`.

@jimmygchen jimmygchen added the under-review A reviewer has only partially completed a review. label Dec 16, 2024
```rust
pub fn malicious_no_retry() -> Self {
    Self::Malicious {
        retry: false,
        index: 0,
```

Member:

I think it makes sense to make this an Option; otherwise we can't distinguish between index 0 and None.
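A minimal sketch of the reviewer's `Option` suggestion, combined with the `Faulty` rename proposed earlier. All names here are hypothetical, loosely modeled on the PR's `ErrorCategory`:

```rust
// Hypothetical sketch: an `Option` distinguishes "no specific sidecar at
// fault" (`None`) from "sidecar 0 at fault" (`Some(0)`).
#[derive(Debug, PartialEq)]
enum ErrorCategory {
    Faulty { retry: bool, sidecar_index: Option<usize> },
}

impl ErrorCategory {
    fn faulty_no_retry() -> Self {
        // Previously `index: 0` was overloaded to also mean "not applicable".
        Self::Faulty { retry: false, sidecar_index: None }
    }
}

fn main() {
    assert_eq!(
        ErrorCategory::faulty_no_retry(),
        ErrorCategory::Faulty { retry: false, sidecar_index: None }
    );
    // A data-column error can now point at the offending sidecar explicitly,
    // and is distinguishable from the "no index" case.
    let e = ErrorCategory::Faulty { retry: false, sidecar_index: Some(0) };
    assert_ne!(e, ErrorCategory::faulty_no_retry());
    println!("ok");
}
```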

```rust
// unreachable, this error is only part of gossip
BlockError::BlobNotRequired(_) => ErrorCategory::malicious_retry().into(),
// Unreachable: This variants never happen in lookup sync, only in range sync.
// Does not matter what we set here, just setting `internal_recoverable` to
```

Member:

Suggested change:

```diff
-// Does not matter what we set here, just setting `internal_recoverable` to
+// Does not matter what we set here, just setting `internal_retry` to
```

```rust
// Unreachable: This variants never happen in lookup sync, only in range sync.
// Does not matter what we set here, just setting `internal_recoverable` to
// put something.
BlockError::NonLinearParentRoots | BlockError::NonLinearSlots => {
```

Member:

For a code path that's expected to be unreachable, is it worth logging a warning here?
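One way to surface an "unreachable" arm if it ever fires, sketched with hypothetical names (Lighthouse would use its own logger rather than `eprintln!`, and this `rpc_scoring` is an illustrative stand-in, not the PR's actual function):

```rust
// Hypothetical sketch: the arm believed unreachable in lookup sync still
// returns a safe category, but logs loudly so a future fork or network change
// that does reach it is visible instead of silently mis-categorized.
#[derive(Debug)]
enum BlockError {
    NonLinearParentRoots,
    NonLinearSlots,
    Slashable,
}

fn rpc_scoring(err: &BlockError) -> &'static str {
    match err {
        BlockError::NonLinearParentRoots | BlockError::NonLinearSlots => {
            // Expected only in range sync; warn rather than failing silently.
            eprintln!("WARN: unexpected BlockError in lookup sync: {err:?}");
            "internal_retry"
        }
        BlockError::Slashable => "internal_no_retry",
    }
}

fn main() {
    assert_eq!(rpc_scoring(&BlockError::NonLinearParentRoots), "internal_retry");
    assert_eq!(rpc_scoring(&BlockError::Slashable), "internal_no_retry");
    println!("ok");
}
```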

```rust
BlockError::Slashable => ErrorCategory::internal_no_retry().into(),
// TODO: BeaconChainError should be retried?
BlockError::BeaconChainError(_) | BlockError::InternalError(_) => {
    ErrorCategory::internal_no_retry().into()
```

Member:

Both of these variants cover quite a few scenarios; I'm not sure it's safe to say "no retry" for them.
