feat: engine_newPayloadV3: validate, execute & store block
#222
Merged
Conversation
github-merge-queue bot
pushed a commit
that referenced
this pull request
Aug 6, 2024
…idation (#220)
**Motivation**
Fetch the cancun time from the DB when validating the payload v3 timestamp.
**Description**
* Store cancun_time in the DB
* Use the stored cancun_time when validating the payload timestamp in `eth_newPayloadV3`
* Replace the update methods for chain data in `Store` with `set_chain_config`
Bonus:
* Move `NewPayloadV3Request` to its corresponding module and update its parsing to match the rest of the codebase
Closes None, but is part of #51
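To make the stored-cancun-time check concrete, here is a minimal sketch of the timestamp validation described above; `Store`, `ExecutionPayloadV3` and the error type are simplified stand-ins, not the actual ethrex API:

```rust
// Illustrative stand-ins for the actual ethrex types (not the real API).
struct Store {
    cancun_time: Option<u64>,
}
struct ExecutionPayloadV3 {
    timestamp: u64,
}

#[derive(Debug)]
enum PayloadError {
    UnsupportedFork,
}

impl Store {
    /// Return the Cancun activation time persisted with the chain config, if any.
    fn get_cancun_time(&self) -> Option<u64> {
        self.cancun_time
    }
}

/// Reject payloads whose timestamp falls before the stored Cancun activation time.
fn validate_v3_timestamp(store: &Store, payload: &ExecutionPayloadV3) -> Result<(), PayloadError> {
    let cancun_time = store.get_cancun_time().ok_or(PayloadError::UnsupportedFork)?;
    if payload.timestamp < cancun_time {
        return Err(PayloadError::UnsupportedFork);
    }
    Ok(())
}
```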
ElFantasma
pushed a commit
that referenced
this pull request
Aug 6, 2024
**Motivation**
Having a way to obtain the latest/earliest/pending/etc. block numbers.
**Description**
* Add get and update methods for the earliest, latest, finalized, safe & pending block numbers to `Store` & `StoreEngine`
* Resolve block numbers from tags in RPC methods
Closes None, but fixes many and enables others
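A minimal sketch of the tag-to-number resolution this adds, with a simplified stand-in `Store` (field names and the `BlockTag` enum are illustrative, not the real ethrex types):

```rust
// Illustrative stand-ins for the actual ethrex store API.
#[derive(Clone, Copy)]
enum BlockTag {
    Earliest,
    Latest,
    Finalized,
    Safe,
    Pending,
}

struct Store {
    earliest: u64,
    latest: u64,
    finalized: u64,
    safe: u64,
    pending: u64,
}

impl Store {
    /// Map an RPC block tag to a concrete block number.
    fn resolve_block_number(&self, tag: BlockTag) -> u64 {
        match tag {
            BlockTag::Earliest => self.earliest,
            BlockTag::Latest => self.latest,
            BlockTag::Finalized => self.finalized,
            BlockTag::Safe => self.safe,
            BlockTag::Pending => self.pending,
        }
    }
}
```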
github-merge-queue bot
pushed a commit
that referenced
this pull request
Aug 7, 2024
…pts & withdrawals (#225)
**Motivation**
These roots are currently being calculated using `from_sorted_iter` without being sorted beforehand. This PR replaces that behavior with inserting directly into the trie to ensure ordering, then computing the root (the same fix that was previously applied to the storage root).
**Description**
Fixes `compute_transactions_root`, `compute_receipts_root` & `compute_withdrawals_root`
**Notes**
After this change, the payloads created by the kurtosis local net now pass the block hash validations in `engine_NewPayloadV3`.
Closes None, but is needed for #51
Co-authored-by: Federica Moletta <federicamoletta@MacBook-Pro-de-Federica.local>
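A sketch of the ordering fix: insert each transaction into a fresh trie keyed by its index and let the trie keep entries ordered before computing the root. The `Trie` below is a toy stand-in (a BTreeMap plus a placeholder hash), not the ethrex Merkle trie:

```rust
use std::collections::BTreeMap;

/// Toy stand-in for the ethrex trie: the real trie orders entries by path
/// internally, which a BTreeMap mimics here; the hash is a placeholder.
#[derive(Default)]
struct Trie {
    nodes: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl Trie {
    fn insert(&mut self, path: Vec<u8>, value: Vec<u8>) {
        self.nodes.insert(path, value);
    }

    fn hash(&self) -> u64 {
        // Placeholder for the real Merkle root computation.
        use std::hash::{Hash, Hasher};
        let mut hasher = std::collections::hash_map::DefaultHasher::new();
        self.nodes.hash(&mut hasher);
        hasher.finish()
    }
}

/// Insert each transaction into a fresh trie keyed by its index, letting the
/// trie keep entries ordered, then compute the root. This replaces calling
/// `from_sorted_iter` on an iterator that was never actually sorted.
fn compute_transactions_root(encoded_txs: &[Vec<u8>]) -> u64 {
    let mut trie = Trie::default();
    for (index, tx) in encoded_txs.iter().enumerate() {
        // The real code RLP-encodes the index; big-endian bytes stand in here.
        let path = (index as u64).to_be_bytes().to_vec();
        trie.insert(path, tx.clone());
    }
    trie.hash()
}
```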
github-merge-queue bot
pushed a commit
that referenced
this pull request
Nov 11, 2025
Closes #issue_number
Co-authored-by: Javier Rodríguez Chatruc <49622509+jrchatruc@users.noreply.github.com>
Co-authored-by: Javier Chatruc <jrchatruc@gmail.com>
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
)
**Motivation**
In hive tests we often receive an FCU with either the genesis or the last imported block as head, to notify the node that it is already synced. As this is not usual in a real-case scenario, we currently just trigger a snap sync to the advertised head. This PR solves this by first checking whether the FCU head is already part of our canonical chain before triggering a sync.
**Description**
* When handling a forkchoice update in snap sync mode, check that the head is not already canonical before triggering a sync; if it is already canonical, apply the forkchoice update instead.
Closes #4846
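A minimal sketch of the canonical-head check, with stand-in types (the real ethrex fork choice handling is more involved):

```rust
// Illustrative stand-ins for the actual ethrex types and store API.
type BlockHash = [u8; 32];

struct Store;
impl Store {
    /// Whether `hash` is already part of the canonical chain (stand-in).
    fn is_canonical(&self, _hash: &BlockHash) -> bool {
        true
    }
}

enum ForkchoiceOutcome {
    Applied,
    SyncTriggered,
}

/// Apply the forkchoice directly when the advertised head is already canonical;
/// only fall back to snap sync for genuinely unknown heads.
fn handle_forkchoice_snap_sync(store: &Store, head: BlockHash) -> ForkchoiceOutcome {
    if store.is_canonical(&head) {
        // Head is known: no sync needed, just update the forkchoice pointers.
        ForkchoiceOutcome::Applied
    } else {
        // Head is unknown: start a snap sync towards it.
        ForkchoiceOutcome::SyncTriggered
    }
}
```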
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
chain_config is a matter of initialization with mutable stores, while latest_block_header is replaced by an ArcSwap to allow lock-free swapping with new values.
Closes #issue_number
Co-authored-by: Lucas Fiegl <iovoid@users.noreply.github.com>
Co-authored-by: Javier Rodríguez Chatruc <49622509+jrchatruc@users.noreply.github.com>
Co-authored-by: Javier Chatruc <jrchatruc@gmail.com>
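A minimal sketch of the `ArcSwap` pattern described above, assuming the `arc_swap` crate; `BlockHeader` and the `Store` wrapper are simplified stand-ins:

```rust
use std::sync::Arc;

use arc_swap::ArcSwap;

#[derive(Clone, Debug)]
struct BlockHeader {
    number: u64,
}

struct Store {
    // Readers call `load_full()` without taking a lock; writers atomically swap the Arc.
    latest_block_header: ArcSwap<BlockHeader>,
}

impl Store {
    fn new(genesis: BlockHeader) -> Self {
        Self { latest_block_header: ArcSwap::from_pointee(genesis) }
    }

    fn latest_block_header(&self) -> Arc<BlockHeader> {
        self.latest_block_header.load_full()
    }

    fn update_latest_block_header(&self, header: BlockHeader) {
        self.latest_block_header.store(Arc::new(header));
    }
}

fn main() {
    let store = Store::new(BlockHeader { number: 0 });
    store.update_latest_block_header(BlockHeader { number: 1 });
    println!("latest = {:?}", store.latest_block_header());
}
```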
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
**Motivation**
The L2 CI is not running on some PRs where it should.
**Description**
The problem is that the required checks `Integration Test` and `Lint` have the same name for both L1 and L2, so if one of them passes the other is not needed to merge. This is a problem when the L1 CI finishes before the L2 one starts: in that case the L2 CI will not run. The solution is the following:
- Add `Integration Test L2` and `Lint L2` to the branch protection rules
- Rename the corresponding jobs in the L2 workflow.
With only those changes the L2 workflows would always be required to run in order to merge; since we want to be able to run only the L1 CI on PRs that don't touch the L2, we need a way to skip it.
Note: the current way of not running a workflow doesn't work, since skipping a complete workflow with `paths:` makes the jobs absent, and what we need is for the jobs to be marked as skipped. To do this, the [dorny/paths-filter@v3](https://github.com/dorny/paths-filter) action is used, which performs the same filtering as the `paths:` option but as a job itself. The result of that job is then used in `if:` conditions on the rest of the jobs to decide whether they should run.
Examples with a fork of ethrex with the branch protection already active (for Integration Test L2):
- [PR with changes only in L1](gianbelinche#4): ran L1 checks and skipped L2 ones.
- [PR with changes only in L2](gianbelinche#3): ran L2 checks and skipped L1 ones.
- [PR with both L1 and L2 changes](gianbelinche#5): ran both L1 and L2 checks.
You can also check that when the PRs were merged to main, all checks ran even though the PRs had skipped them: https://github.com/gianbelinche/ethrex/commits/main/
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
**Motivation**
We want to parallelize VM execution and merkelization.
**Description**
Closes #issue_number
Co-authored-by: Mario Rugiero <mrugiero@gmail.com>
Co-authored-by: Lucas Fiegl <iovoid@users.noreply.github.com>
Co-authored-by: Javier Rodríguez Chatruc <49622509+jrchatruc@users.noreply.github.com>
Co-authored-by: Javier Chatruc <jrchatruc@gmail.com>
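One possible way to overlap the two stages with scoped threads is sketched below; the work functions and the pipelining scheme are placeholders, not the actual ethrex pipeline:

```rust
use std::thread;

fn execute_block(number: u64) -> Vec<(u64, u64)> {
    // Placeholder: pretend the VM produces a list of (account, new_balance) updates.
    vec![(number, number * 2)]
}

fn merkelize(updates: &[(u64, u64)]) -> u64 {
    // Placeholder for the state-trie root computation.
    updates.iter().map(|(a, b)| a ^ b).sum()
}

fn main() {
    let mut pending: Option<Vec<(u64, u64)>> = None;
    for block_number in 1..=3u64 {
        let root = thread::scope(|s| {
            // Merkelize the previous block's updates while the next block executes.
            let merkle_handle = pending
                .take()
                .map(|updates| s.spawn(move || merkelize(&updates)));
            let updates = execute_block(block_number);
            let root = merkle_handle.map(|h| h.join().expect("merkelization panicked"));
            pending = Some(updates);
            root
        });
        println!("block {block_number}: previous root = {root:?}");
    }
}
```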
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
**Motivation**
We are saving the latest `Store` as the "current" one. This means the next time a batch is constructed, it might use a later checkpoint than the one needed. This can occur if the batch is sealed with fewer blocks than are available (e.g., if there's no more available space).
**Description**
Save a checkpoint from the batch execution instead.
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
**Motivation**
Checkpoints are never deleted.
**Description**
Delete checkpoints once we verify the corresponding batch.
Co-authored-by: Manuel Iñaki Bilbao <manuel.bilbao@lambdaclass.com>
Co-authored-by: Ivan Litteri <67517699+ilitteri@users.noreply.github.com>
xqft
added a commit
that referenced
this pull request
Nov 11, 2025
**Motivation**
Reapply #4814
Co-authored-by: Tomás Paradelo <tomas.paradelo@lambdaclass.com>
Co-authored-by: Tomás Paradelo <112426153+tomip01@users.noreply.github.com>
Co-authored-by: Gianbelinche <39842759+gianbelinche@users.noreply.github.com>
Co-authored-by: Estéfano Bargas <estefano.bargas@fing.edu.uy>
Co-authored-by: ilitteri <ilitteri@fi.uba.ar>
Co-authored-by: Ivan Litteri <67517699+ilitteri@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
**Motivation**
OZ's upgrade tools require the variables to be set in initializers and not in the constructor/variable declaration. This should not be much of a problem, since the variable will always be initialized to 0, but with this change we avoid the tool error.
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
**Motivation**
The bloom filter was configured with too small a capacity.
**Description**
Closes #issue_number
Co-authored-by: Javier Rodríguez Chatruc <49622509+jrchatruc@users.noreply.github.com>
Co-authored-by: Javier Chatruc <jrchatruc@gmail.com>
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
**Motivation**
When encoding non-legacy transactions we need to encode the payload (txType || rlp(Transaction)) as a bytes object. To do so we currently copy the payload into a `Bytes` object and then RLP-encode it; this is not needed, as we can encode the payload as bytes by invoking the implementation for `[u8]` directly.
**Description**
* Avoid using `Bytes::copy_from_slice` when encoding transactions
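A sketch of encoding the typed-transaction envelope directly from a byte slice, without first wrapping it in a `Bytes` object; `rlp_encode_bytes` is a simplified stand-in for the ethrex RLP helper:

```rust
/// Encode a byte string with the RLP string rules (single byte, short, or long form).
fn rlp_encode_bytes(payload: &[u8], out: &mut Vec<u8>) {
    match payload.len() {
        1 if payload[0] < 0x80 => out.push(payload[0]),
        len if len < 56 => {
            out.push(0x80 + len as u8);
            out.extend_from_slice(payload);
        }
        len => {
            let len_bytes = len.to_be_bytes();
            let first = len_bytes.iter().position(|&b| b != 0).unwrap_or(len_bytes.len() - 1);
            out.push(0xb7 + (len_bytes.len() - first) as u8);
            out.extend_from_slice(&len_bytes[first..]);
            out.extend_from_slice(payload);
        }
    }
}

/// Non-legacy envelope: tx_type || rlp(tx), RLP-encoded as a byte string.
/// The payload slice is encoded directly; no intermediate `Bytes` copy is made.
fn encode_typed_tx(tx_type: u8, rlp_tx: &[u8]) -> Vec<u8> {
    let mut payload = Vec::with_capacity(1 + rlp_tx.len());
    payload.push(tx_type);
    payload.extend_from_slice(rlp_tx);
    let mut out = Vec::new();
    rlp_encode_bytes(&payload, &mut out);
    out
}
```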
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
…sm to crates (#5189)
**Motivation**
We want to take advantage of AVX2 processor features, and we use the `asm` features of dependencies where possible.
**Description**
This PR sets the compilation flag for AVX2 with the configuration:
```
target-cpu=x86-64-v3
```
At the same time, it changes the Cargo.toml files in the following way:
* `c-kzg`: no default features, and add `std` and `ethereum_kzg_settings`.
* `ark-ff`: with the `asm` feature.
* `sha3`: unified in the workspace Cargo.toml and the `asm` feature added.
Closes #issue_number
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
**Motivation**
Transactions are encoded in the following format:
* Legacy: rlp(Transaction)
* Non-legacy: rlp(Payload), where Payload is a bytes object containing (TxType || rlp(Transaction))
When decoding, in order to differentiate between a legacy and a non-legacy transaction, we check whether the encoded data is a bytes object (`is_encoded_as_bytes`) by checking whether the prefix is between 0xb8 and 0xbf. The problem is that when encoding the payload, the 0xb8..0xbf prefix is only applied when the payload is 56 bytes or longer. This is usually the case for most real-scenario transactions, but if the transaction payload were shorter than 56 bytes we would not detect it as a bytes object and would attempt, and fail, to decode the transaction as a legacy one. This PR fixes the problem by:
* Changing the criterion for legacy vs non-legacy transactions to instead check whether the incoming encoded data is encoded as a list (if so, it is legacy). Aka `is_encoded_as_bytes` -> `!is_encoded_as_list`
* Considering the case where the encoded payload doesn't have the long-bytes prefix. Aka `get_rlp_bytes_item_payload` -> `decode_rlp_item`, where the latter also handles the RLP_NULL+size prefix used when encoding fewer than 56 bytes.
**Description**
* Fix decoding of `Transaction` & `P2PTransaction` for cases where the tx payload encoding is shorter than 56 bytes
Closes #issue_number
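A sketch of the list-vs-bytes criterion and of handling the short-string prefix; the helpers mirror the names mentioned above but are simplified stand-ins for the real ethrex functions:

```rust
/// RLP list payloads start with 0xc0..=0xff; anything else is a string/bytes item.
fn is_encoded_as_list(encoded: &[u8]) -> bool {
    matches!(encoded.first(), Some(prefix) if *prefix >= 0xc0)
}

/// Strip the RLP string prefix (short 0x80..=0xb7 or long 0xb8..=0xbf form)
/// and return the raw payload, covering items shorter than 56 bytes too.
fn decode_rlp_item(encoded: &[u8]) -> Option<&[u8]> {
    let (&prefix, rest) = encoded.split_first()?;
    match prefix {
        0x00..=0x7f => Some(&encoded[..1]), // single byte is its own payload
        0x80..=0xb7 => rest.get(..(prefix - 0x80) as usize), // short string (< 56 bytes)
        0xb8..=0xbf => {
            let len_of_len = (prefix - 0xb7) as usize;
            let len_bytes = rest.get(..len_of_len)?;
            let len = len_bytes.iter().fold(0usize, |acc, &b| (acc << 8) | b as usize);
            rest.get(len_of_len..len_of_len + len)
        }
        _ => None, // 0xc0.. is a list, not a bytes item
    }
}

/// Legacy transactions are RLP lists; typed transactions are RLP byte strings
/// whose payload starts with the transaction type.
fn decode_transaction(encoded: &[u8]) -> Option<(&'static str, &[u8])> {
    if is_encoded_as_list(encoded) {
        Some(("legacy", encoded))
    } else {
        let payload = decode_rlp_item(encoded)?;
        let (&tx_type, tx_rlp) = payload.split_first()?;
        Some((if tx_type == 0x03 { "eip4844" } else { "typed" }, tx_rlp))
    }
}
```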
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
**Motivation**
We only need to store the full state for the latest 128 blocks. This also applies to importing a chain via the `import` subcommand, so we can use `add_blocks_in_batch` for blocks before the latest 128 to speed up chain import. This fixes the flaky devp2p tests reported in #5172, where the test timed out during node startup because block import took far too long.
**Description**
* Use `add_blocks_in_batch` for all but the latest 128 blocks in `import_blocks`
* Restore the `devp2p` hive test suite in CI
Closes #5172
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
**Motivation**
Enable the hive p2p tests `TestBlobTxWithMismatchedSidecar` & `TestBlobTxWithoutSidecar`. In order to pass these tests we have to:
* Be able to decode pooled transactions without blobs (i.e. plain EIP-4844 transactions instead of a wrapped EIP-4844 with its blobs bundle)
* Disconnect from peers that send transactions without blobs or blob bundles that don't match the versioned hashes
**Description**
* Handle the case of a plain `Eip4844` transaction when RLP-decoding `WrappedEip4844` transactions
* Disconnect from peers that send empty/mismatched blobs
Closes #3745, part of #4941
Co-authored-by: Martin Paulucci <martin.c.paulucci@gmail.com>
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
Closes #issue_number
Co-authored-by: Ivan Litteri <67517699+ilitteri@users.noreply.github.com>
Co-authored-by: avilagaston9 <gaston.avila@lambdaclass.com>
Co-authored-by: Gianbelinche <39842759+gianbelinche@users.noreply.github.com>
Co-authored-by: ilitteri <ilitteri@fi.uba.ar>
xqft
pushed a commit
that referenced
this pull request
Nov 11, 2025
Closes #issue_number
Co-authored-by: Javier Rodríguez Chatruc <49622509+jrchatruc@users.noreply.github.com>
Co-authored-by: Javier Chatruc <jrchatruc@gmail.com>
github-merge-queue bot
pushed a commit
that referenced
this pull request
Nov 13, 2025
)
**Motivation**
We use docker to generate reproducible guest program ELFs. This slows down compilation considerably (about 15 minutes on a Mac).
**Description**
Don't use docker by default, only when releasing binaries. The docker build is triggered by setting the `PROVER_REPRODUCIBLE_BUILD` env var.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: ilitteri <ilitteri@fi.uba.ar>
github-merge-queue bot
pushed a commit
that referenced
this pull request
Nov 13, 2025
…5326)
**Motivation**
[This](#5135 (comment)) change made the integration tests take longer, requiring the number of retries to be bumped up.
**Description**
Reverts that change and adds an [issue](#5325) for solving it.
github-merge-queue bot
pushed a commit
that referenced
this pull request
Nov 14, 2025
**Motivation**
Add RPC and engine instrumentation and panels to our dashboards.
**Description**
This PR adds a couple of changes to profiling.rs and our dashboard:
- Add a simple way to time async functions without relying on `#instrument` (which would be more complex to add for every RPC)
- Add the namespace as a field on the instrumentation spans
- Set a namespace for the old ones (`block_execution`)
- Make the breakdown panels work with the new changes
- Add new RPC and Engine panels
<img width="2551" height="780" alt="image" src="https://github.com/user-attachments/assets/3b9feead-fc21-4115-b91b-118157477f18" />
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
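A minimal sketch of timing an async handler without `#[instrument]`, tagging the measurement with a namespace; the metric sink is a `println!` stand-in and `pollster` is only assumed here to drive the example:

```rust
use std::time::Instant;

/// Await `fut`, record how long it took, and tag the measurement with a
/// namespace (e.g. "rpc" or "engine") plus the method name.
async fn timed<F, T>(namespace: &str, name: &str, fut: F) -> T
where
    F: std::future::Future<Output = T>,
{
    let start = Instant::now();
    let result = fut.await;
    // Stand-in for the real metric/span recording.
    println!("{namespace}::{name} took {:?}", start.elapsed());
    result
}

// Toy async "handler" used only for the example.
async fn eth_block_number() -> u64 {
    42
}

fn main() {
    // Any async runtime works; `pollster` is an assumed dev-dependency here.
    let n = pollster::block_on(timed("rpc", "eth_blockNumber", eth_block_number()));
    println!("block number = {n}");
}
```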
github-merge-queue bot
pushed a commit
that referenced
this pull request
Nov 14, 2025
Closes #issue_number
Co-authored-by: ilitteri <ilitteri@fi.uba.ar>
Co-authored-by: Javier Chatruc <jrchatruc@gmail.com>
lakshya-sky
pushed a commit
to lakshya-sky/ethrex
that referenced
this pull request
Nov 17, 2025
…ambdaclass#5326)
**Motivation**
[This](lambdaclass#5135 (comment)) change made the integration tests take longer, requiring the number of retries to be bumped up.
**Description**
Reverts that change and adds an [issue](lambdaclass#5325) for solving it.
lakshya-sky
pushed a commit
to lakshya-sky/ethrex
that referenced
this pull request
Nov 17, 2025
**Motivation**
Add RPC and engine instrumentation and panels to our dashboards.
**Description**
This PR adds a couple of changes to profiling.rs and our dashboard:
- Add a simple way to time async functions without relying on `#instrument` (which would be more complex to add for every RPC)
- Add the namespace as a field on the instrumentation spans
- Set a namespace for the old ones (`block_execution`)
- Make the breakdown panels work with the new changes
- Add new RPC and Engine panels
<img width="2551" height="780" alt="image" src="https://github.com/user-attachments/assets/3b9feead-fc21-4115-b91b-118157477f18" />
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
lakshya-sky
pushed a commit
to lakshya-sky/ethrex
that referenced
this pull request
Nov 17, 2025
Closes #issue_number
Co-authored-by: ilitteri <ilitteri@fi.uba.ar>
Co-authored-by: Javier Chatruc <jrchatruc@gmail.com>
github-merge-queue bot
pushed a commit
that referenced
this pull request
Nov 18, 2025
**Motivation**
The jsonwebtoken crate doesn't validate `iat` claims even if we explicitly ask it to, hence we needed tests.
**Description**
Added unit tests that correctly reject invalid `iat` claims.
Closes #5074
Signed-off-by: lakshya-sky <lakshya-sky@users.noreply.github.com>
Co-authored-by: lakshya-sky <lakshya-sky@users.noreply.github.com>
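A sketch of the kind of manual `iat` check such tests exercise, since the crate does not enforce it; the ±60-second tolerance and helper names are assumptions for illustration:

```rust
// Assumed tolerance window, not necessarily the actual ethrex value.
const MAX_IAT_DRIFT_SECS: u64 = 60;

/// Reject tokens whose issued-at timestamp is too far in the past or the future.
fn iat_is_valid(iat: u64, now: u64) -> bool {
    let lower = now.saturating_sub(MAX_IAT_DRIFT_SECS);
    let upper = now.saturating_add(MAX_IAT_DRIFT_SECS);
    (lower..=upper).contains(&iat)
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::time::{SystemTime, UNIX_EPOCH};

    fn now() -> u64 {
        SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
    }

    #[test]
    fn rejects_stale_iat() {
        let now = now();
        assert!(!iat_is_valid(now - 3600, now));
    }

    #[test]
    fn rejects_future_iat() {
        let now = now();
        assert!(!iat_is_valid(now + 3600, now));
    }

    #[test]
    fn accepts_fresh_iat() {
        let now = now();
        assert!(iat_is_valid(now, now));
    }
}
```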
github-merge-queue bot
pushed a commit
that referenced
this pull request
Nov 18, 2025
Closes #issue_number
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
github-merge-queue bot
pushed a commit
that referenced
this pull request
Nov 19, 2025
**Motivation**
The flag `--disable-deposit-contract-sync` no longer exists in the latest lighthouse version (v8.0.0).
**Description**
* Remove the `--disable-deposit-contract-sync` flag from the `start-lighthouse` target in `tooling/sync/Makefile`
Closes #issue_number
github-merge-queue bot
pushed a commit
that referenced
this pull request
Nov 20, 2025
**Motivation**
There was a race condition where dev mode's block producer starts before the RPC server. In this scenario, the block producer tries to make some requests to the Engine API, failing three times in a row and thus exiting.
**Description**
Add a small delay between retries so the Engine RPC has time to come up.
Co-authored-by: Ivan Litteri <67517699+ilitteri@users.noreply.github.com>
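A minimal sketch of retrying with a delay between attempts; the request function, retry count and delay are illustrative values, not the actual ethrex ones:

```rust
use std::thread::sleep;
use std::time::Duration;

fn send_engine_request(attempt: u32) -> Result<String, String> {
    // Placeholder: pretend the Engine API only becomes reachable on the third try.
    if attempt < 3 {
        Err("connection refused".to_string())
    } else {
        Ok("forkchoice updated".to_string())
    }
}

fn request_with_retries(max_attempts: u32, delay: Duration) -> Result<String, String> {
    let mut last_err = String::new();
    for attempt in 1..=max_attempts {
        match send_engine_request(attempt) {
            Ok(response) => return Ok(response),
            Err(err) => {
                last_err = err;
                // Give the RPC server time to start before the next attempt.
                sleep(delay);
            }
        }
    }
    Err(last_err)
}

fn main() {
    let result = request_with_retries(3, Duration::from_millis(100));
    println!("{result:?}");
}
```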
Motivation
Being able to fully validate, execute and store blocks received by `engine_newPayloadV3`
Description
Implement the `engine_newPayloadV3` endpoint
Fixes:
- `Genesis.get_block`: use `INITIAL_BASE_FEE` as `base_fee_per_gas` (with these fixes the genesis' block hash now matches the parentBlockHash of the next block when running with kurtosis)
- `beacon_root_contract_call` now sets the block's gas_limit to avoid tx validation errors
Misc:
- `compute_transactions_root` is now a standalone function matching the other compute functions
- `engine_newPayloadV3` endpoint: `ExecutionPayloadV3` & `PayloadStatus`
Other
We can now execute payloads when running with kurtosis 🚀
Disclaimer: We are still getting some execution errors in later blocks that we need to look into (they are all currently passing the block validations).
Closes #51
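A rough sketch of the validate → execute → store flow the endpoint implements; all types and helpers below are stand-ins, not the actual ethrex implementation:

```rust
// Stand-in types; only the validate -> execute -> store -> status flow is real.
struct ExecutionPayloadV3 {
    timestamp: u64,
}
struct Block;
struct Store;

enum PayloadStatus {
    Valid,
    Invalid(String),
    Syncing,
}

impl Store {
    fn add_block(&self, _block: Block) {}
}

fn validate_payload(_payload: &ExecutionPayloadV3) -> Result<Block, String> {
    // Block hash, timestamp/fork and header checks go here.
    Ok(Block)
}

fn execute_block(_block: &Block) -> Result<(), String> {
    // Run the block through the VM and validate the resulting state.
    Ok(())
}

fn new_payload_v3(store: &Store, payload: ExecutionPayloadV3) -> PayloadStatus {
    let block = match validate_payload(&payload) {
        Ok(block) => block,
        Err(err) => return PayloadStatus::Invalid(err),
    };
    if let Err(err) = execute_block(&block) {
        return PayloadStatus::Invalid(err);
    }
    store.add_block(block);
    PayloadStatus::Valid
}
```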