Uncouple BeaconBlockAndBlobsSidecarByRoot #3139

Closed

Conversation

dapplion
Collaborator

Clients must still not prune un-finalized blobs, or else they can't import a block older than MIN_EPOCHS_FOR_BLOBS_SIDECARS_REQUESTS. So it's not unreasonable to couple these two methods, but the coupling still has side effects.

In practice, clients can couple the two in their implementations, making them quasi-identical once EIP-4844 is finalized.
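As a minimal sketch of the pruning constraint described above (the helper name and its arguments are hypothetical, not spec; only the constant comes from the EIP-4844 p2p draft):

```python
# Illustrative sketch only: a pruning rule consistent with the constraint above.
MIN_EPOCHS_FOR_BLOBS_SIDECARS_REQUESTS = 2**12  # 4096 epochs, per the EIP-4844 draft

def blob_prune_boundary_epoch(current_epoch: int, finalized_epoch: int) -> int:
    """Return the highest epoch whose blob sidecars may be pruned.

    A sidecar is only prunable if it is BOTH finalized and older than the
    retention window; otherwise un-finalized blocks could become unimportable.
    """
    retention_boundary = max(0, current_epoch - MIN_EPOCHS_FOR_BLOBS_SIDECARS_REQUESTS)
    return min(finalized_epoch, retention_boundary)
```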

@terencechain
Contributor

I don't think we should go back to uncoupling and dealing with the race condition to resolve this edge case; the con seems to outweigh the pro. As for the edge case, we could use Union[SignedBeaconBlockAndBlobsSidecar, SignedBeaconBlock], which seems like a viable option.
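For illustration only, a rough sketch of how a Union-shaped by-root response could behave on the serving side; the dataclasses are simplified stand-ins for the spec containers, the lookup dictionaries are hypothetical, and the SSZ Union encoding itself is not shown:

```python
from dataclasses import dataclass
from typing import Dict, Optional, Union

# Simplified stand-ins for the spec containers referenced in this thread.
@dataclass
class SignedBeaconBlock:
    root: bytes

@dataclass
class BlobsSidecar:
    beacon_block_root: bytes

@dataclass
class SignedBeaconBlockAndBlobsSidecar:
    beacon_block: SignedBeaconBlock
    blobs_sidecar: BlobsSidecar

BlockByRootResponse = Union[SignedBeaconBlockAndBlobsSidecar, SignedBeaconBlock]

def respond_by_root(
    block_root: bytes,
    blocks: Dict[bytes, SignedBeaconBlock],
    sidecars: Dict[bytes, BlobsSidecar],
) -> Optional[BlockByRootResponse]:
    """Serve the coupled object while the sidecar is still retained;
    fall back to the bare block once the blobs have been pruned."""
    block = blocks.get(block_root)
    if block is None:
        return None  # unknown root: nothing to serve
    sidecar = sidecars.get(block_root)
    if sidecar is not None:
        return SignedBeaconBlockAndBlobsSidecar(block, sidecar)
    return block  # blobs pruned: the Union permits a block-only response
```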

@dapplion
Collaborator Author

> I don't think we should go back to uncoupling and dealing with the race condition to resolve this edge case; the con seems to outweigh the pro. As for the edge case, we could use Union[SignedBeaconBlockAndBlobsSidecar, SignedBeaconBlock], which seems like a viable option.

Can you link to the relevant discussion on why this was coupled, given that it was originally de-coupled? If you block until both requests are completed, there should not be race conditions.
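To make "block until both requests are completed" concrete, a small sketch assuming hypothetical peer methods request_blocks_by_root and request_blobs_by_root (not real client APIs):

```python
import asyncio

async def fetch_block_and_blobs(peer, block_root: bytes):
    """Issue the two decoupled by-root requests in parallel and only hand the
    pair to block processing once both responses have arrived, so the importer
    never sees a block without its blobs (avoiding the race condition)."""
    block, blobs = await asyncio.gather(
        peer.request_blocks_by_root([block_root]),  # hypothetical helper
        peer.request_blobs_by_root([block_root]),   # hypothetical helper
    )
    return block, blobs
```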

@terencechain
Contributor

terencechain commented Nov 30, 2022

> Can you link to the relevant discussion on why this was coupled, given that it was originally de-coupled?

https://notes.ethereum.org/RLOGb1hYQ0aWt3hcVgzhgQ?

> If you block until both requests are completed, there should not be race conditions.

That's a decent divergence from current behavior (i.e. one request vs. two requests), with additional complexities such as peer rotations. Curious what other teams think. @realbigsean @tbenr

@realbigsean
Contributor

Discussion for by-range is here: #3087

I think the discussion for by-root was in Discord and, to be honest, it was much more brief. I think we assumed this API was more straightforward.

> In practice, clients can couple the two in their implementations, making them quasi-identical once EIP-4844 is finalized.

This is how we would handle it in Lighthouse if the requests were decoupled. I think my view here is about the same as with by-range requests: it'd be nice for the APIs to reflect this internal coupling if all clients are planning to do something similar. Are you planning to do it differently in Lodestar?

Internally, we're actually using Union[SignedBeaconBlockAndBlobsSidecar, SignedBeaconBlock] for block processing in more and more places, so using this to resolve these cases seems like it would be pretty nice.

@tbenr
Contributor

tbenr commented Dec 2, 2022

On the networking side, I'm experimenting with the "Union" blockByRoot. Even if technically possible, the implementation turned out to be more complex than expected; still experimenting.

Internally, I approached the fully coupled version. I ended up implementing a sort of "Union", but there is a lot of interference and too many code changes that might have to be reverted if we decouple again in the future. So I'll also investigate fully decoupled processing.

@realbigsean
Contributor

It sounded on the call like people preferred an error code to resolve the edge cases? This seems reasonable to me.

I think we should keep this coupled for devnet v3, but it's possible we'll have to decouple it if we observe big blocks on mainnet leading to significant block latencies, because that would mean we may want to decouple gossip (as much as I wouldn't want to) in order to optimize there.

@AgeManning has been looking at how message size and latency relate on mainnet gossip today, and is seeing that something like a 50kb compressed block has around 200-400ms of delay on gossip, whereas a 500b aggregate-and-proof has <50ms (correct me if I'm wrong, Age).

@dapplion
Collaborator Author

dapplion commented Dec 9, 2022

> @AgeManning has been looking at how message size and latency relate on mainnet gossip today, and is seeing that something like a 50kb compressed block has around 200-400ms of delay on gossip, whereas a 500b aggregate-and-proof has <50ms (correct me if I'm wrong, Age).

Are these delays per hop?

@AgeManning
Contributor

AgeManning commented Dec 12, 2022

The delays are measured over roughly a slot.

It's a measure of how late the duplicates we see on gossipsub arrive for an individual message.

For the block topic, this doesn't measure the delay in the block arriving (I think most clients have that metric), but rather, once we receive a block, how long it takes our other peers to deliver the same block to us over the mesh. I guess it's closer to a measure of the variance of message propagation on gossipsub.

The other interesting metric, which is highly correlated with this, is the % of messages on a topic that we receive an IHAVE for before we see them on the mesh. This percentage is much higher on the block topic (because of the message size), i.e. if it takes longer than the heartbeat time to send a message to all your mesh peers, you might just send the IHAVE msg-id beforehand.
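For context only, a toy sketch of how a duplicate-delivery-delay metric like the one described here could be collected (the hook name on_gossip_message is hypothetical, not a gossipsub API):

```python
import time
from collections import defaultdict
from typing import DefaultDict, Dict, List

first_seen: Dict[str, float] = {}
duplicate_delays: DefaultDict[str, List[float]] = defaultdict(list)

def on_gossip_message(topic: str, msg_id: str) -> None:
    """Record when a message id is first seen, and for every subsequent
    duplicate delivered by other mesh peers, record how late it arrived."""
    now = time.monotonic()
    if msg_id not in first_seen:
        first_seen[msg_id] = now
    else:
        duplicate_delays[topic].append(now - first_seen[msg_id])
```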

@djrtwo
Contributor

djrtwo commented Dec 12, 2022

> It sounded on the call like people preferred an error code to resolve the edge cases? This seems reasonable to me.

This seems reasonable -- anyone want to take a stab at a PR? @realbigsean @dapplion ?
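As a hypothetical sketch of the error-code approach (the response-code value and helper names are assumptions for illustration, not the spec): the responder returns a dedicated "resource unavailable" code, rather than a partial payload, when the blobs for a requested root have already been pruned.

```python
from enum import IntEnum
from typing import Dict

class RespCode(IntEnum):
    SUCCESS = 0
    RESOURCE_UNAVAILABLE = 3  # assumed code for "blobs already pruned"

def serve_coupled_by_root(block_root: bytes, blocks: Dict, sidecars: Dict):
    """Return (code, payload). Unknown roots are simply skipped (None);
    a known block whose blobs were pruned yields RESOURCE_UNAVAILABLE,
    so the requester can distinguish "pruned" from "unknown"."""
    block = blocks.get(block_root)
    if block is None:
        return None  # skip unknown roots rather than erroring
    sidecar = sidecars.get(block_root)
    if sidecar is None:
        return RespCode.RESOURCE_UNAVAILABLE, None
    return RespCode.SUCCESS, (block, sidecar)
```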

@arnetheduck
Contributor

In general, I think coupling goes against the grain of what the sidecar proposal is about. While coupling might work for the 80% case, we will keep running into issues at the margins where coupling is not in sympathy with the underlying data structure. This will keep repeating in various contexts: we're serializing something that is inherently parallelizable.

It might be the case that the first "rough cut" version of the feature will be implemented mostly-coupled internally, but when we have to process more and more data, we will want to remove inefficiencies, and coupling introduces an artificial inefficiency that fundamentally does not need to be there.

+1 for any uncoupling, at all levels.

@mcdee
Contributor

mcdee commented Dec 16, 2022

Chiming in to agree with @arnetheduck: having a single bundled block and sidecar results in a heavy object to pass around the network, and the assumptions that will inevitably be made by having this object will make it much harder to separate block and sidecar in the future (e.g. propagating over something other than the existing p2p), which seems totally at odds with the idea of having a sidecar in the first place.

@hwwhww
Contributor

hwwhww commented Feb 20, 2023

closing in favor of #3244

@hwwhww closed this on Feb 20, 2023
Labels: Deneb (was called: eip-4844)