Fetch blobs from EL prior to block verification #6600
mergify[bot] merged 4 commits into sigp:unstable from
Conversation
Any security considerations in triggering this logic before validating the block? The most damage a proposer can do is waste bandwidth on a bad proposal, which does not seem like a big issue and could be done regardless of fetch_blobs anyway. Otherwise the experimental results look great.
We're doing this after gossip validation of the block, so we know that the proposer's signature is valid and that they are a legitimate proposer for the slot. Unless the proposer slashes themselves, the blob versioned hashes in the block header are the "true" (valid) versioned hashes for this slot. Alternatively, the block could be completely invalid (but not slashable), in which case we will reject it upon completion of block processing.

The fetched blobs are also verified against the block's commitments (see lighthouse/beacon_node/beacon_chain/src/fetch_blobs.rs, lines 131 to 148 at 6e1945f), so if they are malformed (e.g. a bad KZG proof), they will be rejected at that point.

TL;DR: on the whole I think it's security-equivalent to processing blobs on gossip.
beacon_node/network/src/network_beacon_processor/gossip_methods.rs
```rust
self.executor.spawn(
    async move {
        self_clone
            .fetch_engine_blobs_and_publish(block_clone, block_root, publish_blobs)
```
Since this is running as a task, it's no longer bound by the beacon processor queue. Could someone spam gossip blocks and cause a lot of fetch blobs work?
They would need to be beacon blocks with valid signatures, and this is a linear factor, so it can't really blow up much beyond the number of threads allocated to the beacon processor. E.g. if we have 16 threads in the BP, we might end up with 32 running tasks max, which are mostly I/O bound and should be handled just fine by Tokio.
We do this in a few other places, like when we check the payload with the EL:
lighthouse/beacon_node/beacon_chain/src/block_verification.rs, lines 1407 to 1416 at 6329042
Squashed commit of the following:

commit 5f563ef Author: Michael Sproul <michael@sigmaprime.io> Date: Fri Nov 22 12:33:10 2024 +1100
    Run fetch blobs in parallel with block import

commit 3cfe9df Author: Michael Sproul <michael@sigmaprime.io> Date: Thu Nov 21 10:46:34 2024 +1100
    Fetch blobs from EL prior to block verification
@mergify queue
🛑 The pull request has been removed from the queue
@mergify requeue
✅ This pull request will be re-embarked automatically
🛑 The pull request has been removed from the queue
@mergify requeue
❌ This pull request head commit has not been previously disembarked from queue.
@mergify queue
✅ The pull request has been merged automatically at 1c8161f
THANK FUCK. |
Proposed Changes

Optimise fetch_blobs significantly, by fetching blobs from the EL prior to consensus and execution verification of the block.

We had noticed that we weren't getting many hits with fetch blobs, because blobs were almost always arriving on gossip before we requested them. The fetch_blobs logic would only fire a few times an hour.

With this change I'm seeing much more frequent hits, without a substantial increase in publication bandwidth. In the last 30 minutes running on mainnet there have been 116 hits, and 156 individual blobs published (out of 395 fetched).

Data here: https://docs.google.com/spreadsheets/d/1ZJIYbOPwNGa_veqUC0ywsOdzFYvh4aqJLMYqFoisA_E/edit?usp=sharing

This does imply that we're publishing around 35% of all blobs! But this will likely come down as more nodes chip in to publishing.