From 3bbabed2d0bb7e0ef4c5438506a3445f86ef92c7 Mon Sep 17 00:00:00 2001 From: Boyu Yang Date: Sun, 23 Apr 2023 17:12:17 +0800 Subject: [PATCH] Merge tag 'v4.1.0' into develop (#1) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * blob production * cargo fix * Add more gossip verification conditions * Added Capella Data Structures to consensus/types (#3637) * Ran Cargo fmt * Added Capella Data Structures to consensus/types * Fixed some stuff in state processing (#3640) * Fixed a ton of state_processing stuff (#3642) FIXME's: * consensus/fork_choice/src/fork_choice.rs * consensus/state_processing/src/per_epoch_processing/capella.rs * consensus/types/src/execution_payload_header.rs TODO's: * consensus/state_processing/src/per_epoch_processing/capella/partial_withdrawals.rs * consensus/state_processing/src/per_epoch_processing/capella/full_withdrawals.rs * Capella eip 4844 cleanup (#3652) * add capella gossip boiler plate * get everything compiling Co-authored-by: realbigsean * small cleanup * small cleanup * cargo fix + some test cleanup * improve block production * add fixme for potential panic Co-authored-by: Mark Mackey * Added Capella Epoch Processing Logic (#3666) * clean up types * 48 byte array serde * Couple blocks and blobs in gossip (#3670) * Revert "Add more gossip verification conditions" This reverts commit 1430b561c37adb44d5705005de6bf633deb8c16d. * Revert "Add todos" This reverts commit 91efb9d4c780b55025c3793a67bd9dacc1b2c924. * Revert "Reprocess blob sidecar messages" This reverts commit 21bf3d37cdce46632cfa4e3f5abb194f172c6851. 
* Add the coupled topic * Decode SignedBeaconBlockAndBlobsSidecar correctly * Process Block and Blobs in beacon processor * Remove extra blob publishing logic from vc * Remove blob signing in vc * Ugly hack to compile * Block processing eip4844 (#3673) * add eip4844 block processing * fix blob processing code * consensus logic fixes and cleanup * use safe arith * merge with unstable fixes * Cleanup payload types (#3675) * Add transparent support * Add `Config` struct * Deprecate `enum_behaviour` * Partially remove enum_behaviour from project * Revert "Partially remove enum_behaviour from project" This reverts commit 46ffb7fe77622cf420f7ba2fccf432c0050535d6. * Revert "Deprecate `enum_behaviour`" This reverts commit 89b64a6f53d0f68685be88d5b60d39799d9933b5. * Add `struct_behaviour` * Tidy * Move tests into `ssz_derive` * Bump ssz derive * Fix comment * newtype transaparent ssz * use ssz transparent and create macros for per fork implementations * use superstruct map macros Co-authored-by: Paul Hauner * Feature gate withdrawals (#3684) * start feature gating * feature gate withdrawals * Fix compilation error (#3692) * fix topic name * Updated for queueless withdrawals spec * Fixed compiling with withdrawals enabled * Added stuff that NEEDS IMPLEMENTING * BeaconState field renamed * Forgot one feature guard * Added process_withdrawals * Fixes to make EF Capella tests pass (#3719) * Fixes to make EF Capella tests pass * Clippy for state_processing * Fix BlocksByRoot response types (#3743) * Massive Update to Engine API (#3740) * Massive Update to Engine API * Update beacon_node/execution_layer/src/engine_api/json_structures.rs Co-authored-by: Michael Sproul * Update beacon_node/execution_layer/src/engine_api/json_structures.rs Co-authored-by: Michael Sproul * Update beacon_node/beacon_chain/src/execution_payload.rs Co-authored-by: realbigsean * Update beacon_node/execution_layer/src/engine_api.rs Co-authored-by: realbigsean Co-authored-by: Michael Sproul Co-authored-by: 
realbigsean * - fix pre-merge block production (#3746) - return `None` on pre-4844 blob requests * Stuuupid camelCase (#3748) * Two Capella bugfixes (#3749) * Two Capella bugfixes * fix payload default check in fork choice * Revert "fix payload default check in fork choice" This reverts commit e56fefbd05811526af4499711045275db366aa09. Co-authored-by: realbigsean * Rename excess blobs and update 4844 json RPC serialization/deserialization (#3745) * rename excess blobs and fix json serialization/deserialization * remove coments * Op pool and gossip for BLS to execution changes (#3726) * Fixed Payload Deserialization in DB (#3758) * Remove withdrawals guard for PayloadAttributesV2 * Fixed some BeaconChain Tests * Refactored Execution Layer & Fixed Some Tests * Fixed Compiler Warnings & Failing Tests (#3771) * Merge 'upstream/unstable' into capella (#3773) * Add API endpoint to count statuses of all validators (#3756) * Delete DB schema migrations for v11 and earlier (#3761) Co-authored-by: Mac L Co-authored-by: Michael Sproul * Fixed moar tests (#3774) * Fix some capella nits (#3782) * Fix `Withdrawal` serialisation and check address change fork (#3789) * Disallow address changes before Capella * Quote u64s in Withdrawal serialisation * Fixed Clippy Complaints & Some Failing Tests (#3791) * Fixed Clippy Complaints & Some Failing Tests * Update Dockerfile to Rust-1.65 * EF test file renamed * Touch up comments based on feedback * Fixed Payload Reconstruction Bug (#3796) * Use JsonPayload for payload reconstruction (#3797) * Batch API for address changes (#3798) * Fix Clippy * Publish capella images on push (#3803) * Enable withdrawals features in Capella docker images (#3805) * Bounded withdrawals and spec v1.3.0-alpha.2 (#3802) * Make engine_getPayloadV2 accept local block value * Removed `withdrawals` feature flag * Update consensus/state_processing/src/upgrade/eip4844.rs Co-authored-by: realbigsean * Update consensus/state_processing/src/upgrade/eip4844.rs 
Co-authored-by: realbigsean * Fixed spec serialization bug * Feature Guard V2 Engine API Methods * Fixed Some Tests * Fix clippy complaints * cleanup * Update beacon_node/execution_layer/src/engine_api/json_structures.rs * Update Execution Layer Tests for Capella * Fixed Operation Pool Tests * Fix EF Tests * Fixing Moar Failing Tests * Isolate withdrawals-processing Feature (#3854) * Added bls_to_execution_changes to PersistedOpPool (#3857) * Added bls_to_execution_changes to PersistedOpPool * Bump MSRV to 1.65 (#3860) * add historical summaries (#3865) * add historical summaries * fix tree hash caching, disable the sanity slots test with fake crypto * add ssz static HistoricalSummary * only store historical summaries after capella * Teach `UpdatePattern` about Capella * Tidy EF tests * Clippy Co-authored-by: Michael Sproul * Remove `withdrawals-processing` feature (#3864) * Use spec to Determine Supported Engine APIs * Remove `withdrawals-processing` feature * Fixed Tests * Missed Some Spots * Fixed Another Test * Stupid Clippy * Fix Arbitrary implementations (#3867) * Fix Arbitrary implementations * Remove remaining vestiges of arbitrary-fuzz * Remove FIXME * Clippy * Fix some beacon_chain tests * Verify blockHash with withdrawals * Sign BlsToExecutionChange w/ GENESIS_FORK_VERSION * Don't Penalize Early `bls_to_execution_change` * Update gossip_methods.rs * bump ef-tests * intentionally skip `LightClientHeader` ssz static tests * CL-EL withdrawals harmonization using Gwei units (#3884) * Update checkpoint-sync.md (#3831) Remove infura checkpoint sync instructions. 
Co-authored-by: Adam Patacchiola * Return HTTP 404 rather than 405 (#3836) ## Issue Addressed Issue #3112 ## Proposed Changes Add `Filter::recover` to the GET chain to handle rejections specifically as 404 NOT FOUND ## Additional Info Making a request to `http://localhost:5052/not_real` now returns the following: ``` { "code": 404, "message": "NOT_FOUND", "stacktraces": [] } ``` Co-authored-by: Paul Hauner * Add CLI flag to specify the format of logs written to the logfile (#3839) ## Proposed Changes Decouple the stdout and logfile formats by adding the `--logfile-format` CLI flag. This behaves identically to the existing `--log-format` flag, but instead will only affect the logs written to the logfile. The `--log-format` flag will no longer have any effect on the contents of the logfile. ## Additional Info This avoids being a breaking change by causing `logfile-format` to default to the value of `--log-format` if it is not provided. This means that users who were previously relying on being able to use a JSON formatted logfile will be able to continue to use `--log-format JSON`. Users who want to use JSON on stdout and default logs in the logfile, will need to pass the following flags: `--log-format JSON --logfile-format DEFAULT` * add better err reporting UnableToOpenVotingKeystore (#3781) ## Issue Addressed #3780 ## Proposed Changes Add error reporting that notifies the node operator that the `voting_keystore_path` in their `validator_definitions.yml` file may be incorrect. ## Additional Info There is more info in issue #3780 Co-authored-by: Paul Hauner * add logging for starting request and receiving block (#3858) ## Issue Addressed #3853 ## Proposed Changes Added `INFO` level logs for requesting and receiving the unsigned block. ## Additional Info Logging for successfully publishing the signed block is already there. 
And seemingly there is a log for when "We realize we are going to produce a block" in the `start_update_service`: `info!(log, "Block production service started"); `. Is there anywhere else you'd like to see logging around this event? Co-authored-by: GeemoCandama <104614073+GeemoCandama@users.noreply.github.com> * Fix some dead links in markdown files (#3885) ## Issue Addressed No issue has been raised for these broken links. ## Proposed Changes Update links with the new URLs for the same document. ## Additional Info ~The link for the [Lighthouse Development Updates](https://eepurl.com/dh9Lvb/) mailing list is also broken, but I can't find the correct link.~ Co-authored-by: Paul Hauner * Update engine_api to Latest spec (#3893) * Update engine_api to Latest spec * Small Test Fix * Fix Test Deserialization Issue * update antithesis dockerfile (#3883) Resolves https://github.com/sigp/lighthouse/issues/3879 Co-authored-by: realbigsean * Improve block delay metrics (#3894) We recently ran a large-block experiment on the testnet and plan to do a further experiment on mainnet. Although the metrics recovered from lighthouse nodes were quite useful, I think we could do with greater resolution in the block delay metrics and get some specific values for each block (currently these can be lost to large exponential histogram buckets). This PR increases the resolution of the block delay histogram buckets, but also introduces a new metric which records the last block delay. Depending on the polling resolution of the metric server, we can lose some block delay information, however it will always give us a specific value and we will not lose exact data based on poor resolution histogram buckets. * Switch allocator to jemalloc (#3697) ## Proposed Changes Another `tree-states` motivated PR, this adds `jemalloc` as the default allocator, with an option to use the system allocator by compiling with `FEATURES="" make`. 
- [x] Metrics - [x] Test on Windows - [x] Test on macOS - [x] Test with `musl` - [x] Metrics dashboard on `lighthouse-metrics` (https://github.com/sigp/lighthouse-metrics/pull/37) Co-authored-by: Michael Sproul * fix multiarch docker builds (#3904) ## Issue Addressed #3902 Tested and confirmed working [here](https://github.com/antondlr/lighthouse/actions/runs/3970418322) ## Additional Info buildx v0.10.0 added provenance attestations to images but they are packed in a way that's incompatible with `docker manifest` https://github.com/docker/buildx/releases * Import BLS to execution changes before Capella (#3892) * Import BLS to execution changes before Capella * Test for BLS to execution change HTTP API * Pack BLS to execution changes in LIFO order * Remove unused var * Clippy * Implement sync_committee_rewards API (per-validator reward) (#3903) ## Issue Addressed [#3661](https://github.com/sigp/lighthouse/issues/3661) ## Proposed Changes `/eth/v1/beacon/rewards/sync_committee/{block_id}` ``` { "execution_optimistic": false, "finalized": false, "data": [ { "validator_index": "0", "reward": "2000" } ] } ``` The issue contains the implementation of three per-validator reward APIs: * `sync_committee_rewards` * [`attestation_rewards`](https://github.com/sigp/lighthouse/pull/3822) * `block_rewards` This PR only implements the `sync_committee_rewards`. The endpoints can be viewed in the Ethereum Beacon nodes API browser: [https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards) ## Additional Info The implementation of [consensus client reward APIs](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/project-ideas.md#consensus-client-reward-apis) is part of the [EPF](https://github.com/eth-protocol-fellows/cohort-three). 
Co-authored-by: navie Co-authored-by: kevinbogner * Use eth1_withdrawal_credentials in Test States (#3898) * Use eth1_withdrawal_credential in Some Test States * Update beacon_node/genesis/src/interop.rs Co-authored-by: Michael Sproul * Update beacon_node/genesis/src/interop.rs Co-authored-by: Michael Sproul * Increase validator sizes * Pick next sync committee message Co-authored-by: Michael Sproul Co-authored-by: Paul Hauner * light client optimistic update reprocessing (#3799) ## Issue Addressed Currently there is a race between receiving blocks and receiving light client optimistic updates (in unstable), which results in processing errors. This is a continuation of PR #3693 and seeks to progress on issue #3651 ## Proposed Changes Add the parent_root to ReprocessQueueMessage::BlockImported so we can remove blocks from queue when a block arrives that has the same parent root. We use the parent root as opposed to the block_root because the LightClientOptimisticUpdate does not contain the block_root. If light_client_optimistic_update.attested_header.canonical_root() != head_block.message().parent_root() then we queue the update. Otherwise we process immediately. ## Additional Info michaelsproul came up with this idea. The code was heavily based off of the attestation reprocessing. I have not properly tested this to see if it works as intended. * Fix docs for `oldest_block_slot` (#3911) ## Proposed Changes Update the docs to correct the description of `oldest_block_slot`. Credit to `laern` on Discord for noticing this. * Update sync rewards API for abstract exec payload * Fix the new BLS to execution change test * Update another test broken by the shuffling change * Clippy 1.67 (#3916) ## Proposed Changes Clippy 1.67.0 put us on blast for the size of some of our errors, most of them written by me ( :eyes: ). This PR shrinks the size of `BeaconChainError` by dropping some extraneous info and boxing an inner error which should only occur infrequently anyway. 
For the `AttestationSlashInfo` and `BlockSlashInfo` I opted to ignore the lint as they are always used in a `Result` where `A` is a similar size. This means they don't bloat the size of the `Result`, so it's a bit annoying for Clippy to report this as an issue. I also chose to ignore `clippy::uninlined-format-args` because I think the benefit-to-churn ratio is too low. E.g. sometimes we have long identifiers in `format!` args and IMO the non-inlined form is easier to read: ```rust // I prefer this... format!( "{} did {} to {}", REALLY_LONG_CONSTANT_NAME, ANOTHER_REALLY_LONG_CONSTANT_NAME, regular_long_identifier_name ); // To this format!("{REALLY_LONG_CONSTANT_NAME} did {ANOTHER_REALLY_LONG_CONSTANT_NAME} to {regular_long_identifier_name}"); ``` I tried generating an automatic diff with `cargo clippy --fix` but it came out at: ``` 250 files changed, 1209 insertions(+), 1469 deletions(-) ``` Which seems like a bad idea when we'd have to back-merge it to `capella` and `eip4844` :scream: * exchangeCapabilities & Capella Readiness Logging (#3918) * Undo Passing Spec to Engine API * Utilize engine_exchangeCapabilities * Add Logging to Indicate Capella Readiness * Add exchangeCapabilities to mock_execution_layer * Send Nested Array for engine_exchangeCapabilities * Use Mutex Instead of RwLock for EngineCapabilities * Improve Locking to Avoid Deadlock * Prettier logic for get_engine_capabilities * Improve Comments * Update beacon_node/beacon_chain/src/capella_readiness.rs Co-authored-by: Michael Sproul * Update beacon_node/beacon_chain/src/capella_readiness.rs Co-authored-by: Michael Sproul * Update beacon_node/beacon_chain/src/capella_readiness.rs Co-authored-by: Michael Sproul * Update beacon_node/beacon_chain/src/capella_readiness.rs Co-authored-by: Michael Sproul * Update beacon_node/beacon_chain/src/capella_readiness.rs Co-authored-by: Michael Sproul * Update beacon_node/client/src/notifier.rs Co-authored-by: Michael Sproul * Update 
beacon_node/execution_layer/src/engine_api/http.rs Co-authored-by: Michael Sproul * Addressed Michael's Comments --------- Co-authored-by: Michael Sproul * Use Local Payload if More Profitable than Builder (#3934) * Use Local Payload if More Profitable than Builder * Rename clone -> clone_from_ref * Minimize Clones of GetPayloadResponse * Cleanup & Fix Tests * Added Tests for Payload Choice by Profit * Fix Outdated Comments * Don't Reject all Builder Bids After Capella (#3940) * Fix bug in Builder API Post-Capella * Fix Clippy Complaints * Unpin fixed-hash (#3917) ## Proposed Changes Remove the `[patch]` for `fixed-hash`. We pinned it years ago in #2710 to fix `arbitrary` support. Nowadays the 0.7 version of `fixed-hash` is only used by the `web3` crate and doesn't need `arbitrary`. ~~Blocked on #3916 but could be merged in the same Bors batch.~~ * Implement `attestation_rewards` API (per-validator reward) (#3822) ## Issue Addressed #3661 ## Proposed Changes `/eth/v1/beacon/rewards/attestations/{epoch}` ```json { "execution_optimistic": false, "finalized": false, "data": [ { "ideal_rewards": [ { "effective_balance": "1000000000", "head": "2500", "target": "5000", "source": "5000" } ], "total_rewards": [ { "validator_index": "0", "head": "2000", "target": "2000", "source": "4000", "inclusion_delay": "2000" } ] } ] } ``` The issue contains the implementation of three per-validator reward APIs: - [`sync_committee_rewards`](https://github.com/sigp/lighthouse/pull/3790) - `attestation_rewards` - `block_rewards`. This PR *only* implements the `attestation_rewards`. The endpoints can be viewed in the Ethereum Beacon nodes API browser: https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards ## Additional Info The implementation of [consensus client reward APIs](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/project-ideas.md#consensus-client-reward-apis) is part of the [EPF](https://github.com/eth-protocol-fellows/cohort-three). 
--- - [x] `get_state` - [x] Calculate *ideal rewards* with some logic from `get_flag_index_deltas` - [x] Calculate *actual rewards* with some logic from `get_flag_index_deltas` - [x] Code cleanup - [x] Testing * Remove unused `u256_hex_be_opt` (#3942) * Broadcast address changes at Capella (#3919) * Add first efforts at broadcast * Tidy * Move broadcast code to client * Progress with broadcast impl * Rename to address change * Fix compile errors * Use `while` loop * Tidy * Flip broadcast condition * Switch to forgetting individual indices * Always broadcast when the node starts * Refactor into two functions * Add testing * Add another test * Tidy, add more testing * Tidy * Add test, rename enum * Rename enum again * Tidy * Break loop early * Add V15 schema migration * Bump schema version * Progress with migration * Update beacon_node/client/src/address_change_broadcast.rs Co-authored-by: Michael Sproul * Fix typo in function name --------- Co-authored-by: Michael Sproul * Implement block_rewards API (per-validator reward) (#3907) ## Issue Addressed [#3661](https://github.com/sigp/lighthouse/issues/3661) ## Proposed Changes `/eth/v1/beacon/rewards/blocks/{block_id}` ``` { "execution_optimistic": false, "finalized": false, "data": { "proposer_index": "123", "total": "123", "attestations": "123", "sync_aggregate": "123", "proposer_slashings": "123", "attester_slashings": "123" } } ``` The issue contains the implementation of three per-validator reward APIs: * `sync_committee_rewards` * [`attestation_rewards`](https://github.com/sigp/lighthouse/pull/3822) * `block_rewards` This PR only implements the `block_rewards`. 
The endpoints can be viewed in the Ethereum Beacon nodes API browser: [https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards) ## Additional Info The implementation of [consensus client reward APIs](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/project-ideas.md#consensus-client-reward-apis) is part of the [EPF](https://github.com/eth-protocol-fellows/cohort-three). Co-authored-by: kevinbogner Co-authored-by: navie * Update the docker build to include features based images (#3875) ## Proposed Changes There are some features that are enabled/disabled with the `FEATURES` env variable. This PR would introduce a pattern to introduce docker images based on those features. This can be useful later on to have specific images for some experimental features in the future. ## Additional Info We at Lodesart need to have `minimal` spec support for some cross-client network testing. To make it efficient on the CI, we tend to use minimal preset. * Self rate limiting dev flag (#3928) ## Issue Addressed Adds self rate limiting options, mainly with the idea to comply with peer's rate limits in small testnets ## Proposed Changes Add a hidden flag `self-limiter` this can take no value, or customs values to configure quotas per protocol ## Additional Info ### How to use `--self-limiter` will turn on the self rate limiter applying the same params we apply to inbound requests (requests from other peers) `--self-limiter "beacon_blocks_by_range:64/1"` will turn on the self rate limiter for ALL protocols, but change the quota for bbrange to 64 requested blocks per 1 second. `--self-limiter "beacon_blocks_by_range:64/1;ping:1/10"` same as previous one, changing the quota for ping as well. ### Caveats - The rate limiter is either on or off for all protocols. 
I added the custom values to be able to change the quotas per protocol so that some protocols can be given extremely loose or tight quotas. I think this should satisfy every need even if we can't technically turn off rate limits per protocol. - This reuses the rate limiter struct for the inbound requests so there is this ugly part of the code in which we need to deal with the inbound only protocols (light client stuff) if this becomes too ugly as we add lc protocols, we might want to split the rate limiters. I've checked this and looks doable with const generics to avoid so much code duplication ### Knowing if this is on ``` Feb 06 21:12:05.493 DEBG Using self rate limiting params config: OutboundRateLimiterConfig { ping: 2/10s, metadata: 1/15s, status: 5/15s, goodbye: 1/10s, blocks_by_range: 1024/10s, blocks_by_root: 128/10s }, service: libp2p_rpc, service: libp2p ``` * Update dependencies (#3946) ## Issue Addressed Resolves the cargo-audit failure caused by https://rustsec.org/advisories/RUSTSEC-2023-0010. I also removed the ignore for `RUSTSEC-2020-0159` as we are no longer using a vulnerable version of `chrono`. We still need the other ignore for `time 0.1` because we depend on it via `sloggers -> chrono -> time 0.1`. * Fix the whitespace in docker workflow (#3952) ## Issue Addressed Fix a whitespace issue that was causing failure in the docker build. ## Additional Info https://github.com/sigp/lighthouse/pull/3948 * Remove participation rate from API docs (#3955) ## Issue Addressed NA ## Proposed Changes Removes the "Participation Rate" since it references an undefined variable: `previous_epoch_attesting_gwei`. I didn't replace it with anything since I think "Justification/Finalization Rate" already expresses what it was trying to express. 
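As an aside on the `--self-limiter` flag (#3928) above: its quota syntax (`protocol:tokens/seconds`, multiple entries separated by `;`) could be parsed along the following lines. This is a hypothetical sketch, not Lighthouse's actual implementation; the `Quota` struct and `parse_quotas` helper are invented names for illustration.

```rust
use std::collections::HashMap;
use std::time::Duration;

/// A rate-limit quota: at most `max_tokens` requests per `replenish_all_every`.
/// Hypothetical struct, loosely mirroring the flag format `protocol:tokens/seconds`.
#[derive(Debug, PartialEq)]
struct Quota {
    max_tokens: u64,
    replenish_all_every: Duration,
}

/// Parse a `--self-limiter` style value such as
/// `"beacon_blocks_by_range:64/1;ping:1/10"` into per-protocol quotas.
fn parse_quotas(s: &str) -> Result<HashMap<String, Quota>, String> {
    let mut quotas = HashMap::new();
    for entry in s.split(';').filter(|e| !e.is_empty()) {
        // Each entry is `protocol:tokens/seconds`.
        let (protocol, rest) = entry
            .split_once(':')
            .ok_or_else(|| format!("missing ':' in {entry:?}"))?;
        let (tokens, secs) = rest
            .split_once('/')
            .ok_or_else(|| format!("missing '/' in {entry:?}"))?;
        let max_tokens = tokens.parse::<u64>().map_err(|e| e.to_string())?;
        let secs = secs.parse::<u64>().map_err(|e| e.to_string())?;
        quotas.insert(
            protocol.to_string(),
            Quota { max_tokens, replenish_all_every: Duration::from_secs(secs) },
        );
    }
    Ok(quotas)
}

fn main() {
    let quotas = parse_quotas("beacon_blocks_by_range:64/1;ping:1/10").unwrap();
    println!("{quotas:?}");
}
```

An unparseable entry would surface as an error at flag-parsing time rather than silently falling back to defaults.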
## Additional Info NA * Add attestation duty slot metric (#2704) ## Issue Addressed Resolves #2521 ## Proposed Changes Add a metric that indicates the next attestation duty slot for all managed validators in the validator client. * Fix edge-case when finding the finalized descendant (#3924) ## Issue Addressed NA ## Description We were missing an edge case when checking to see if a block is a descendant of the finalized checkpoint. This edge case is described for one of the tests in this PR: https://github.com/sigp/lighthouse/blob/a119edc739e9dcefe1cb800a2ce9eb4baab55f20/consensus/proto_array/src/proto_array_fork_choice.rs#L1018-L1047 This bug presented itself in the following mainnet log: ``` Jan 26 15:12:42.841 ERRO Unable to validate attestation error: MissingBeaconState(0x7c30cb80ec3d4ec624133abfa70e4c6cfecfca456bfbbbff3393e14e5b20bf25), peer_id: 16Uiu2HAm8RPRciXJYtYc5c3qtCRdrZwkHn2BXN3XP1nSi1gxHYit, type: "unaggregated", slot: Slot(5660161), beacon_block_root: 0x4a45e59da7cb9487f4836c83bdd1b741b4f31c67010c7ae343fa6771b3330489 ``` Here the BN is rejecting an attestation because of a "missing beacon state". Whilst it was correct to reject the attestation, it should have rejected it because it attests to a block that conflicts with finality rather than claiming that the database is inconsistent. The block that this attestation points to (`0x4a45`) is block `C` in the above diagram. It is a non-canonical block in the first slot of an epoch that conflicts with the finalized checkpoint. Due to our lazy pruning of proto array, `0x4a45` was still present in proto-array. Our missed edge-case in [`ForkChoice::is_descendant_of_finalized`](https://github.com/sigp/lighthouse/blob/38514c07f222ff7783834c48cf5c0a6ee7f346d0/consensus/fork_choice/src/fork_choice.rs#L1375-L1379) would have indicated to us that the block is a descendant of the finalized block. Therefore, we would have accepted the attestation thinking that it attests to a descendant of the finalized *checkpoint*. 
Since we didn't have the shuffling for this erroneously processed block, we attempted to read its state from the database. This failed because we prune states from the database by keeping track of the tips of the chain and iterating back until we find a finalized block. This would have deleted `C` from the database, hence the `MissingBeaconState` error. * Tweaks to reward APIs (#3957) ## Proposed Changes * Return the effective balance in gwei to align with the spec ([ideal attestation rewards](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards/getAttestationsRewards)). * Use quoted `i64`s for attestation and sync committee rewards. * Properly Deserialize ForkVersionedResponses (#3944) * Move ForkVersionedResponse to consensus/types * Properly Deserialize ForkVersionedResponses * Elide Types in from_value Calls * Added Tests for ForkVersionedResponse Deserialize * Address Sean's Comments & Make Less Restrictive * Utilize `map_fork_name!` * Update Mock Builder for Post-Capella Tests (#3958) * Update Mock Builder for Post-Capella Tests * Add _mut Suffix to BidStuff Functions * Fix Setting Gas Limit * Use release profile for Windows binaries (#3965) ## Proposed Changes Disable `maxperf` profile on Windows due to #3964. This is required for the v3.5.0 release CI to succeed without crashing. * Reduce some EE and builder related ERRO logs to WARN (#3966) ## Issue Addressed NA ## Proposed Changes Our `ERRO` stream has been rather noisy since the merge due to some unexpected behaviours of builders and EEs. Now that we've been running post-merge for a while, I think we can drop some of these `ERRO` to `WARN` so we're not "crying wolf". The modified logs are: #### `ERRO Execution engine call failed` I'm seeing this quite frequently on Geth nodes. They seem to timeout when they're busy and it rarely indicates a serious issue. 
We also have logging across block import, fork choice updating and payload production that raise `ERRO` or `CRIT` when the EE times out, so I think we're not at risk of silencing actual issues. #### `ERRO "Builder failed to reveal payload"` In #3775 we reduced this log from `CRIT` to `ERRO` since it's common for builders to fail to reveal the block to the producer directly whilst still broadcasting it to the network. I think it's worth dropping this to `WARN` since it's rarely interesting. I elected to stay with `WARN` since I really do wish builders would fulfill their API promises by returning the block to us. Perhaps I'm just being pedantic here, I could be convinced otherwise. #### `ERRO "Relay error when registering validator(s)"` It seems like builders and/or mev-boost struggle to handle heavy loads of validator registrations. I haven't observed issues with validators not actually being registered, but I see timeouts on these endpoints many times a day. It doesn't seem like this `ERRO` is worth it. #### `ERRO Error fetching block for peer ExecutionLayerErrorPayloadReconstruction` This means we failed to respond to a peer on the P2P network with a block they requested because of an error in the `execution_layer`. It's very common to see timeouts or incomplete responses on this endpoint whilst the EE is busy and I don't think it's important enough for an `ERRO`. As long as the peer count stays high, I don't think the user needs to be actively concerned about how we're responding to peers. ## Additional Info NA * Fix regression in DB write atomicity (#3931) ## Issue Addressed Fix a bug introduced by #3696. The bug is not expected to occur frequently, so releasing this PR is non-urgent. ## Proposed Changes * Add a variant to `StoreOp` that allows a raw KV operation to be passed around. * Return to using `self.store.do_atomically` rather than `self.store.hot_db.do_atomically`. This streamlines the write back into a single call and makes our auto-revert work again. 
* Prevent `import_block_update_shuffling_cache` from failing block import. This is an outstanding bug from before v3.4.0 which may have contributed to some random unexplained database corruption. ## Additional Info In #3696 I split the database write into two calls, one to convert the `StoreOp`s to `KeyValueStoreOp`s and one to write them. This had the unfortunate side-effect of damaging our atomicity guarantees in case of a write error. If the first call failed, we would be left with the block in fork choice but not on-disk (or the snapshot cache), which would prevent us from processing any descendant blocks. On `unstable` the first call is very unlikely to fail unless the disk is full, but on `tree-states` the conversion is more involved and a user reported database corruption after it failed in a way that should have been recoverable. Additionally, as @emhane observed, #3696 also inadvertently removed the import of the new block into the block cache. Although this seems like it could have negatively impacted performance, there are several mitigating factors: - For regular block processing we should almost always load the parent block (and state) from the snapshot cache. - We often load blinded blocks, which bypass the block cache anyway. - Metrics show no noticeable increase in the block cache miss rate with v3.4.0. However, I expect the block cache _will_ be useful again in `tree-states`, so it is restored to use by this PR. * Invalid cross build feature flag (#3959) ## Issue Addressed The documentation for building from source doesn't match what the GitHub workflow uses. https://github.com/sigp/lighthouse/blob/aa5b7ef7839e15d55c3a252230ecb11c4abc0a52/book/src/installation-source.md?plain=1#L118-L120 ## Proposed Changes Because the GitHub workflow uses `cross` to build from source, and that build uses a different env variable, `CROSS_FEATURES`, it needs to be passed at compile time. 
## Additional Info Verified that existing `-dev` builds do not contain the `minimal` spec enabled. ```bash > docker run --rm --name node-5-cl-lighthouse sigp/lighthouse:latest-amd64-unstable-dev lighthouse --version Lighthouse v3.4.0-aa5b7ef BLS library: blst-portable SHA256 hardware acceleration: true Allocator: jemalloc Specs: mainnet (true), minimal (false), gnosis (true) ``` * Placeholder for BlobsByRange outbound rate limit * Update block rewards API for Capella * Enforce a timeout on peer disconnect (#3757) On heavily crowded networks, we are seeing many attempted connections to our node every second. Often these connections come from peers that have just been disconnected. This can be for a number of reasons including: - We have deemed them to be not as useful as other peers - They have performed poorly - They have dropped the connection with us - The connection was spontaneously lost - They were randomly removed because we have too many peers In all of these cases, if we have reached or exceeded our target peer limit, there is no desire to accept new connections immediately after the disconnect from these peers. In fact, it often costs us resources to handle the established connections and defeats some of the logic of dropping them in the first place. This PR adds a timeout that prevents recently disconnected peers from reconnecting to us. Technically we implement a ban at the swarm layer to prevent immediate reconnections for at least 10 minutes. I decided to keep this light, and use a time-based LRUCache which only gets updated during the peer manager heartbeat to prevent added stress of polling a delay map for what could be a large number of peers. This cache is bounded in time. An extra space bound could be added should people consider this a risk. 
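The disconnect-timeout mechanism above — a time-bounded cache consulted on the peer manager heartbeat instead of a per-peer delay map — can be sketched roughly as follows. This is a simplified illustration under assumed names (`RecentlyDisconnected`, peer ids stubbed as strings); Lighthouse's real version lives in its peer manager and uses an LRU cache with a 10-minute ban.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Tracks recently disconnected peers and rejects reconnections for
/// `ban_duration`. Entries are pruned lazily on each heartbeat, so the
/// cache is bounded in time rather than in space.
struct RecentlyDisconnected {
    ban_duration: Duration,
    peers: HashMap<String, Instant>,
}

impl RecentlyDisconnected {
    fn new(ban_duration: Duration) -> Self {
        Self { ban_duration, peers: HashMap::new() }
    }

    /// Record a disconnect; the ban starts now.
    fn on_disconnect(&mut self, peer: &str) {
        self.peers.insert(peer.to_string(), Instant::now());
    }

    /// Should an incoming connection from this peer be rejected?
    fn is_banned(&self, peer: &str) -> bool {
        self.peers
            .get(peer)
            .map_or(false, |since| since.elapsed() < self.ban_duration)
    }

    /// Called on the peer manager heartbeat: drop expired entries.
    fn heartbeat(&mut self) {
        let ban = self.ban_duration;
        self.peers.retain(|_, since| since.elapsed() < ban);
    }
}

fn main() {
    let mut cache = RecentlyDisconnected::new(Duration::from_millis(50));
    cache.on_disconnect("peer-a");
    assert!(cache.is_banned("peer-a"));
    std::thread::sleep(Duration::from_millis(60));
    cache.heartbeat();
    assert!(!cache.is_banned("peer-a"));
}
```

Note that `is_banned` checks elapsed time itself, so a stale entry never bans a peer past the deadline even if a heartbeat hasn't pruned it yet.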
Co-authored-by: Diva M * Quote Capella BeaconState fields (#3967) * Simplify payload traits and reduce cloning (#3976) * Simplify payload traits and reduce cloning * Fix self limiter * Fix docker and deps (#3978) ## Proposed Changes - Fix this cargo-audit failure for `sqlite3-sys`: https://github.com/sigp/lighthouse/actions/runs/4179008889/jobs/7238473962 - Prevent the Docker builds from running out of RAM on CI by removing `gnosis` and LMDB support from the `-dev` images (see: https://github.com/sigp/lighthouse/pull/3959#issuecomment-1430531155, successful run on my fork: https://github.com/michaelsproul/lighthouse/actions/runs/4179162480/jobs/7239537947). * Execution engine suggestions from code review Co-authored-by: Paul Hauner * blacklist tests in windows (#3961) ## Issue Addressed Windows tests for subscriptions and unsubscriptions fail in CI sporadically. We usually ignore these failures, so this PR aims to help reduce the failure noise. Associated issue is https://github.com/sigp/lighthouse/issues/3960 * Improve testing slot clock to allow manipulation of time in tests (#3974) ## Issue Addressed I discovered this issue while implementing [this test](https://github.com/jimmygchen/lighthouse/blob/test-example/beacon_node/network/src/beacon_processor/tests.rs#L895), where I tried to manipulate the slot clock with: `rig.chain.slot_clock.set_current_time(duration);` however the change doesn't get reflected in the `slot_clock` in `ReprocessQueue`. I realised `slot_clock` was cloned a few times in the code, and therefore changing the time in `rig.chain.slot_clock` doesn't have any effect in `ReprocessQueue`. I've incorporated the suggestions from @paulhauner and @michaelsproul - wrapping the `ManualSlotClock.current_time` (an `RwLock`) in an `Arc` - and the above test now passes. Let's see if this breaks any existing tests :) * Fix exec integration tests for Geth v1.11.0 (#3982) ## Proposed Changes * Bump Go from 1.17 to 1.20.
The latest Geth release v1.11.0 requires 1.18 minimum. * Prevent a cache miss during payload building by using the right fee recipient. This prevents Geth v1.11.0 from building a block with 0 transactions. The payload building mechanism is overhauled in the new Geth to improve the payload every 2s, and the tests were failing because we were falling back on a `getPayload` call with no lookahead due to a `get_payload_id` cache miss caused by the mismatched fee recipient. Alternatively we could hack the tests to send `proposer_preparation_data`, but I think the static fee recipient is simpler for now. * Add support for optionally enabling Lighthouse logs in the integration tests. Enable using `cargo run --release --features logging/test_logger`. This was very useful for debugging. * Suggestions for Capella `execution_layer` (#3983) * Restrict Engine::request to FnOnce * Use `Into::into` * Impl IntoIterator for VariableList * Use Instant rather than SystemTime * Add capella fork epoch (#3997) * Fix Capella schema downgrades (#4004) * Remove "eip4844" network (#4008) * Suggestions for Capella `beacon_chain` (#3999) * Remove CapellaReadiness::NotSynced Some EEs have a habit of flipping between synced/not-synced, which caused some spurious "Not ready for the merge" messages back before the merge. For the merge, if the EE wasn't synced the CE simply wouldn't go through the transition (due to optimistic sync stuff). However, we don't have that hard requirement for Capella; the CE will go through the fork and just wait for the EE to catch up. I think that removing `NotSynced` here will avoid false positives on the "Not ready" logs. We'll be creating other WARN/ERRO logs if the EE isn't synced, anyway. * Change some Capella readiness logging There are two changes here: 1. Shorten the log messages, for readability. 2. Change the hints.
Connecting a Capella-ready LH to a non-Capella-ready EE gives this log:

```
WARN Not ready for Capella info: The execution endpoint does not appear to support the required engine api methods for Capella: Required Methods Unsupported: engine_getPayloadV2 engine_forkchoiceUpdatedV2 engine_newPayloadV2, service: slot_notifier
```

This variant of error doesn't get a "try updating"-style hint, even though it's the one that needs it. This is because we detect the method-not-found response from the EE and return default capabilities, rather than indicating that the request failed. I think it's fair to say that an EE upgrade is required whenever it doesn't provide the required methods. I changed the `ExchangeCapabilitiesFailed` message since that can only happen when the EE fails to respond with anything other than success or not-found. * Capella consensus review (#4012) * Add extra encoding/decoding tests * Remove TODO The method LGTM * Remove `FreeAttestation` This is an ancient relic, I'm surprised it still existed! * Add paranoid check for eip4844 code This is not technically necessary, but I think it's nice to be explicit about EIP4844 consensus code for the time being. * Reduce big-O complexity of address change pruning I'm not sure this is *actually* useful, but it might come in handy if we see a ton of address changes at the fork boundary. I know the devops team have been testing with ~100k changes, so maybe this will help in that case. * Revert "Reduce big-O complexity of address change pruning" This reverts commit e7d93e6cc7cf1b92dd5a9e1966ce47d4078121eb. * Revert Sepolia genesis change (#4013) * Allow for withdrawals in max block size (#4011) * Allow for withdrawals in max block size * Ensure payload size is counted * Fix post-Bellatrix checkpoint sync (#4014) * Recognise execution in post-merge blocks * Remove `.body()` * Fix typo * Use `is_default_with_empty_roots`.
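The readiness check quoted in that log amounts to comparing the methods an EE reports (or the defaults we fall back to when `engine_exchangeCapabilities` is not found) against the set Capella requires. A hedged sketch of that comparison, with the method names taken from the log above but the function itself invented for illustration (it is not Lighthouse's real API):

```rust
use std::collections::HashSet;

// The engine API methods Capella requires, as listed in the WARN log above.
const REQUIRED_CAPELLA_METHODS: &[&str] = &[
    "engine_getPayloadV2",
    "engine_forkchoiceUpdatedV2",
    "engine_newPayloadV2",
];

/// Hypothetical helper: given the method names an EE claims to support,
/// return the required Capella methods that are missing (in the order
/// they are required), suitable for inclusion in a readiness log.
fn unsupported_capella_methods(supported: &HashSet<String>) -> Vec<&'static str> {
    REQUIRED_CAPELLA_METHODS
        .iter()
        .copied()
        .filter(|m| !supported.contains(*m))
        .collect()
}
```

An empty result would mean the EE looks Capella-ready; a non-empty one would feed the "Required Methods Unsupported" portion of the log.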
* Modify some Capella comments (#4015) * Modify comment to only include 4844 Capella only modifies per-epoch processing by adding `process_historical_summaries_update`, which does not change the realization of justification or finality. Whilst 4844 does not currently modify realization, the spec is not yet final enough to say that it never will. * Clarify address change verification comment The verification of the address change doesn't really have anything to do with the current epoch. I think this was just a copy-paste from a function like `verify_exit`. * Cache validator balances and allow them to be served over the HTTP API (#3863) ## Issue Addressed #3804 ## Proposed Changes - Add `total_balance` to the validator monitor and adjust the number of historical epochs which are cached. - Allow certain values in the cache to be served out via the HTTP API without requiring a state read. ## Usage

```
curl -X POST "http://localhost:5052/lighthouse/ui/validator_info" -d '{"indices": [0]}' -H "Content-Type: application/json" | jq
```

```
{
  "data": {
    "validators": {
      "0": {
        "info": [
          { "epoch": 172981, "total_balance": 36566388519 },
          ...
          { "epoch": 172990, "total_balance": 36566496513 }
        ]
      },
      "1": {
        "info": [
          { "epoch": 172981, "total_balance": 36355797968 },
          ...
          { "epoch": 172990, "total_balance": 36355905962 }
        ]
      }
    }
  }
}
```

## Additional Info This requires no historical states to operate, which means it will still function on a freshly checkpoint-synced node; however, because of this, the values will populate each epoch (up to a maximum of 10 entries). Another benefit of this method is that we can easily cache any other values which would normally require a state read and serve them via the same endpoint. However, we would need to be cautious about not overly increasing block processing time by caching values from complex computations. This also caches some of the validator metrics directly, rather than pulling them from the Prometheus metrics when the API is called.
This means when the validator count exceeds the individual monitor threshold, the cached values will still be available. Co-authored-by: Paul Hauner * Disable debug info on CI (#4018) ## Issue Addressed Closes #4005 Alternative to #4017 ## Proposed Changes Disable debug info on CI to save RAM and disk space. * Remove BeaconBlockAndBlobsSidecar from core topics (#4016) * Fix metric (#4020) * Fix doppelganger script (#3988) ## Issue Addressed N/A ## Proposed Changes The doppelganger tests were failing silently since the `PROPOSER_BOOST` config was not set. Sets the config, and the script now returns an error if any subprocess fails. * Register disconnected peers when temporarily banned (#4001) This is a correction to #3757. The correction registers a peer that is being disconnected in the local peer manager db to ensure we are tracking the correct state. * v3.5.0 (#3996) ## Issue Addressed NA ## Proposed Changes - Bump versions ## Sepolia Capella Upgrade This release will enable the Capella fork on Sepolia. We are planning to publish this release on the 23rd of Feb 2023. Users who can build from source and wish to do pre-release testing can use this branch. ## Additional Info - [ ] Requires further testing * Execution Integration Tests Correction (#4034) The execution integration tests are currently failing. This is a quick modification to pin the execution client version to correct the tests. * Allow compilation with no slasher backend (#3888) ## Proposed Changes Allow compiling without MDBX by running:

```bash
CARGO_INSTALL_EXTRA_FLAGS="--no-default-features" make
```

The reasons to do this are several: - Save compilation time if the slasher won't be used - Work around compilation errors in slasher backend dependencies (our pinned version of MDBX is currently not compiling on FreeBSD with certain compiler versions).
## Additional Info When I opened this PR we were using resolver v1 which [doesn't disable default features in dependencies](https://doc.rust-lang.org/cargo/reference/features.html#resolver-version-2-command-line-flags), and `mdbx` is default for the `slasher` crate. Even after the resolver got changed to v2 in #3697, compiling with `--no-default-features` _still_ wasn't turning off the slasher crate's default features, so I added `default-features = false` in all the places we depend on it. Co-authored-by: Michael Sproul * Add content-type header to metrics server response (#3970) This fixes issues with certain metrics scrapers, which might error if the content-type is not correctly set. ## Issue Addressed Fixes https://github.com/sigp/lighthouse/issues/3437 ## Proposed Changes Simply set the header `Content-Type: text/plain` on the metrics server response. It seems like the errored branch does this correctly already. ## Additional Info This is also needed to enable InfluxDB metric scraping, which works very nicely with Geth. * Use consensus-spec-tests `v1.3.0-rc.3` (#4021) ## Issue Addressed NA ## Proposed Changes Updates our `ef_tests` to use: https://github.com/ethereum/consensus-specs/releases/tag/v1.3.0-rc.3 This required: - Skipping a `merkle_proof_validity` test (see #4022) - Accounting for the `eip4844` tests changing name to `deneb` - My IDE did some Python linting during this change. It seemed simple and nice so I left it there. ## Additional Info NA * Docs for Siren (#4023) This adds some documentation for the Siren app into the Lighthouse book. Co-authored-by: Mavrik * Add more logs in the BN HTTP API during block production (#4025) ## Issue Addressed NA ## Proposed Changes Adds two new `DEBG` logs to the HTTP API: 1. As soon as we are requested to produce a block. 2. As soon as a signed block is received. In #3858 we added some very helpful logs to the VC so we could see when things are happening with block proposals in the VC.
After doing some more debugging, I found that I can tell when the VC is sending a block but I *can't* tell the time that the BN receives it (I can only get the time after the BN has started doing some work with the block). Knowing when the VC publishes and when the BN receives the block is useful for determining the delays introduced by network latency (and some other things like JSON decoding, etc). ## Additional Info NA * Clean capella (#4019) ## Issue Addressed Cleans up all the remnants of 4844 in capella. This makes sure that when 4844 is reviewed there is nothing we are missing because it got included here. ## Proposed Changes drop a bomb on every 4844 thing ## Additional Info The merge process I did (locally) is as follows: - squash merge to produce one commit - in a new branch off unstable with the squashed commit, create a `git revert HEAD` commit - merge that new branch onto 4844 with `--strategy ours` - compare local 4844 to remote 4844 and make sure the diff is empty - enjoy Co-authored-by: Paul Hauner * Delete Kiln and Ropsten configs (#4038) ## Proposed Changes Remove built-in support for Ropsten and Kiln via the `--network` flag. Both testnets are long dead and deprecated. This shaves about 30MiB off the binary size, from 135MiB to 103MiB (maxperf), or 165MiB to 135MiB (release). * Cleaner logic for gossip subscriptions for new forks (#4030) ## Issue Addressed Cleaner resolution for #4006 ## Proposed Changes We are currently subscribing to core topics of new forks way before the actual fork since we had just a single `CORE_TOPICS` array. This PR separates the core topics for every fork and subscribes to only the required topics based on the current fork. Also adds logic for subscribing to the core topics of a new fork only 2 slots before the fork happens. 2 slots is to give enough time for the gossip meshes to form. Currently doesn't add logic to remove topics from older forks in new forks. For example,
in the coupled 4844 world, we had to remove the `BeaconBlock` topic in favour of `BeaconBlocksAndBlobsSidecar` at the 4844 fork. It should be easy enough to add though. Not adding it because I'm assuming that #4019 will get merged before this PR and we won't require any deletion logic. Happy to add it regardless though. * Log a debug message when a request fails for a beacon node candidate (#4036) ## Issue Addressed #3985 ## Proposed Changes Log a debug message when a BN candidate returns an error. `Mar 01 16:40:24.011 DEBG Request to beacon node failed error: ServerMessage(ErrorMessage { code: 503, message: "SERVICE_UNAVAILABLE: beacon node is syncing: head slot is 8416, current slot is 5098402", stacktraces: [] }), node: http://localhost:5052/` * Permit a `null` LVH from an `INVALID` response to `newPayload` (#4037) ## Issue Addressed NA ## Proposed Changes As discovered in #4034, Lighthouse is not accepting `latest_valid_hash == None` in an `INVALID` response to `newPayload`. The `null`/`None` response *was* illegal at one point, however it was added in https://github.com/ethereum/execution-apis/pull/254. This PR brings Lighthouse in line with the standard and should fix the root cause of what #4034 patched around. ## Additional Info NA * Add latency measurement service to VC (#4024) ## Issue Addressed NA ## Proposed Changes Adds a service which periodically polls (11s into each mainnet slot) the `node/version` endpoint on each BN and roughly measures the round-trip latency. The latency is exposed as a `DEBG` log and a Prometheus metric. The `--latency-measurement-service` has been added to the VC, with the following options: - `--latency-measurement-service true`: enable the service (default). - `--latency-measurement-service`: (without a value) has the same effect. - `--latency-measurement-service false`: disable the service. ## Additional Info Whilst looking at our staking setup, I think the BN+VC latency is contributing to late blocks. 
Now that we have to wait for the builders to respond, it's nice to try and do everything we can to reduce that latency. Having visibility is the first step. * Optimise payload attributes calculation and add SSE (#4027) ## Issue Addressed Closes #3896 Closes #3998 Closes #3700 ## Proposed Changes - Optimise the calculation of withdrawals for payload attributes by avoiding state clones, avoiding unnecessary state advances and reading from the snapshot cache if possible. - Use the execution layer's payload attributes cache to avoid re-calculating payload attributes. I actually implemented a new LRU cache just for withdrawals but it had the exact same key and most of the same data as the existing payload attributes cache, so I deleted it. - Add a new SSE event that fires when payload attributes are calculated. This is useful for block builders, a la https://github.com/ethereum/beacon-APIs/issues/244. - Add a new CLI flag `--always-prepare-payload` which forces payload attributes to be sent with every fcU regardless of connected proposers. This is intended for use by builders/relays. For maximum effect, the flags I've been using to run Lighthouse in "payload builder mode" are:

```
--always-prepare-payload \
--prepare-payload-lookahead 12000 \
--suggested-fee-recipient 0x0000000000000000000000000000000000000000
```

The fee recipient is required so Lighthouse has something to pack in the payload attributes (it can be ignored by the builder). The lookahead causes fcU to be sent at the start of every slot rather than at 8s. As usual, fcU will also be sent after each change of head block. I think this combination is sufficient for builders to build on all viable heads. Often there will be two fcU (and two payload attributes) sent for the same slot: one sent at the start of the slot with the head from `n - 1` as the parent, and one sent after the block arrives with `n` as the parent.
Example usage of the new event stream:

```bash
curl -N "http://localhost:5052/eth/v1/events?topics=payload_attributes"
```

## Additional Info - [x] Tests added by updating the proposer re-org tests. This has the benefit of testing the proposer re-org code paths with withdrawals too, confirming that the new changes don't interact poorly. - [ ] Benchmarking with `blockdreamer` on devnet-7 showed promising results but I'm yet to do a comparison to `unstable`. Co-authored-by: Michael Sproul * Optimise attestation selection proof signing (#4033) ## Issue Addressed Closes #3963 (hopefully) ## Proposed Changes Compute attestation selection proofs gradually each slot rather than in a single `join_all` at the start of each epoch. On a machine with 5k validators this replaces 5k tasks signing 5k proofs with 1 task that signs 5k/32 ~= 160 proofs each slot. Based on testing with Goerli validators this seems to reduce the average time to produce a signature by preventing Tokio and the OS from falling over each other trying to run hundreds of threads. My testing so far has been with local keystores, which run on a dynamic pool of up to 512 OS threads because they use [`spawn_blocking`](https://docs.rs/tokio/1.11.0/tokio/task/fn.spawn_blocking.html) (and we haven't changed the default). An earlier version of this PR hyper-optimised the time-per-signature metric to the detriment of the entire system's performance (see the reverted commits). The current PR is conservative in that it avoids touching the attestation service at all. I think there's more optimising to do here, but we can come back for that in a future PR rather than expanding the scope of this one. The new algorithm for attestation selection proofs is: - We sign a small batch of selection proofs each slot, for slots up to 8 slots in the future. On average we'll sign one slot's worth of proofs per slot, with an 8 slot lookahead.
- The batch is signed halfway through the slot when there is unlikely to be contention for signature production (blocks are <4s, attestations are ~4-6 seconds, aggregates are 8s+). ## Performance Data _See first comment for updated graphs_. Graph of median signing times before this PR: ![signing_times_median](https://user-images.githubusercontent.com/4452260/221495627-3ab3c105-319f-406e-b99d-b5913e0ded9c.png) Graph of update attesters metric (includes selection proof signing) before this PR: ![update_attesters_store](https://user-images.githubusercontent.com/4452260/221497057-01ba40e4-8148-45f6-9e21-36a9567a631a.png) Median signing time after this PR (prototype from 12:00, updated version from 13:30): ![signing_times_median_updated](https://user-images.githubusercontent.com/4452260/221771578-47a040cc-b832-482f-9a1a-d1bd9854e00e.png) 99th percentile on signing times (bounded attestation signing from 16:55, now removed): ![signing_times_99pc](https://user-images.githubusercontent.com/4452260/221772055-e64081a8-2220-45ba-ba6d-9d7e344a5bde.png) Attester map update timing after this PR: ![update_attesters_store_updated](https://user-images.githubusercontent.com/4452260/221771757-c8558a48-7f4e-4bb5-9929-dee177a66c1e.png) Selection proof signings per second change: ![signing_attempts](https://user-images.githubusercontent.com/4452260/221771855-64f5da22-1655-478d-926b-810be8a3650c.png) ## Link to late blocks I believe this is related to the slow block signings because logs from Stakely in #3963 show these two logs almost 5 seconds apart: > Feb 23 18:56:23.978 INFO Received unsigned block, slot: 5862880, service: block, module: validator_client::block_service:393 > Feb 23 18:56:28.552 INFO Publishing signed block, slot: 5862880, service: block, module: validator_client::block_service:416 The only thing that happens between those two logs is the signing of the block: 
https://github.com/sigp/lighthouse/blob/0fb58a680d6f0c9f0dc8beecf142186debff9a8d/validator_client/src/block_service.rs#L410-L414 Helpfully, Stakely noticed this issue without any Lighthouse BNs in the mix, which pointed to a clear issue in the VC. ## TODO - [x] Further testing on testnet infrastructure. - [x] Make the attestation signing parallelism configurable. * Update dependencies incl tempfile (#4048) ## Proposed Changes Fix the cargo audit failure caused by [RUSTSEC-2023-0018](https://rustsec.org/advisories/RUSTSEC-2023-0018) which we were exposed to via `tempfile`. ## Additional Info I've held back the libp2p crate for now because it seemed to introduce another duplicate dependency on libp2p-core, for a total of 3 copies. Maybe that's fine, but we can sort it out later. * Log a `WARN` in the VC for a mismatched Capella fork epoch (#4050) ## Issue Addressed NA ## Proposed Changes - Adds a `WARN` statement for Capella, just like the previous forks. - Adds a hint message to all those WARNs to suggest the user update the BN or VC. ## Additional Info NA * Add VC metric for primary BN latency (#4051) ## Issue Addressed NA ## Proposed Changes In #4024 we added metrics to expose the latency measurements from a VC to each BN. Whilst playing with these new metrics on our infra I realised it would be great to have a single metric to make sure that the primary BN for each VC has a reasonable latency. With the current "metrics for all BNs" it's hard to tell which is the primary. ## Additional Info NA * Set Capella fork epoch for Goerli (#4044) ## Issue Addressed NA ## Proposed Changes Sets the Capella fork epoch as per https://github.com/eth-clients/goerli/pull/160. 
The fork will occur at: - Epoch: 162304 - Slot: 5193728 - UTC: 14/03/2023, 10:25:36 pm ## Additional Info - [x] Blocked on https://github.com/eth-clients/goerli/pull/160 being merged * Add a flag to always use payloads from builders (#4052) ## Issue Addressed #4040 ## Proposed Changes - Add the `always_prefer_builder_payload` field to `Config` in `beacon_node/client/src/config.rs`. - Add that same field to `Inner` in `beacon_node/execution_layer/src/lib.rs` - Modify the logic for picking the payload in `beacon_node/execution_layer/src/lib.rs` - Add the `always-prefer-builder-payload` flag to the beacon node CLI - Test the new flags in `lighthouse/tests/beacon_node.rs` Co-authored-by: Paul Hauner * Release v3.5.1 (#4049) ## Issue Addressed NA ## Proposed Changes Bumps versions to v3.5.1. ## Additional Info - [x] Requires further testing * Fix order of arguments to log_count (#4060) See: https://github.com/sigp/lighthouse/pull/4027 ## Proposed Changes The order of the arguments to `log_count` is swapped in `beacon_node/beacon_chain/src/events.rs`. * Appease Clippy 1.68 and refactor `http_api` (#4068) ## Proposed Changes Two tiny updates to satisfy Clippy 1.68, plus refactoring of the `http_api` into less complex types so the compiler can chew and digest them more easily. Co-authored-by: Michael Sproul * Added warning when new jwt is generated (#4000) ## Issue Addressed #3435 ## Proposed Changes Fire a warning containing the path of the JWT to be created when the path given by `--execution-jwt` is not found. Currently, the same error is logged if the jwt is found but doesn't match the execution client's jwt, and if no jwt was found at the given path. This makes it very hard to tell if you accidentally typed the wrong path, as a new jwt is created silently that won't match the execution client's jwt. So instead, it will now fire a warning stating that a jwt is being generated at the given path.
## Additional Info In the future, it may be smarter to handle this case by adding an InvalidJWTPath member to the Error enum in lib.rs or auth.rs that can be handled during upcheck(). This is my first PR and first project with Rust, so thanks to anyone who looks at this for their patience and help! Co-authored-by: Sebastian Richel <47844429+sebastianrich18@users.noreply.github.com> * Correct /lighthouse/nat implementation (#4069) ## Proposed Changes The current `/lighthouse/nat` implementation checks for _zero_ address updated messages, when it should check for a _non-zero_ number. This was spotted while debugging an issue on Discord where a user's ports weren't forwarded but `/lighthouse/nat` was still returning `true`. * Support for Ipv6 (#4046) ## Issue Addressed Add support for ipv6 and dual stack in lighthouse. ## Proposed Changes From a user perspective, setting an ipv6 address and optionally configuring the ports should now feel exactly the same as using an ipv4 address. If listening over both ipv4 and ipv6 then the user needs to: - use the `--listen-address` flag two times (ipv4 and ipv6 addresses) - `--port6` then becomes required - `--discovery-port6` can now be used to additionally configure the ipv6 udp port ### Rough list of code changes - Discovery: - Table filter and ip mode set to match the listening config. - Ipv6 address, tcp port and udp port set in the ENR builder - Reported addresses now check which tcp port to give to libp2p - LH Network Service: - Can listen over Ipv6, Ipv4, or both. This uses two sockets. Using mapped addresses is disabled from libp2p and it's the most compatible option. - NetworkGlobals: - No longer stores the udp port since it was not used at all. Instead, stores the Ipv4 and Ipv6 TCP ports. - NetworkConfig: - Update names to make it clear that the previous udp and tcp ports in the ENR were Ipv4 - Add fields to configure Ipv6 udp and tcp ports in the ENR - Include advertised enr Ipv6 address.
- Add type to model Listening address that's either Ipv4, Ipv6 or both. A listening address includes the ip, udp port and tcp port. - UPnP: - Kept only for ipv4 - Cli flags: - `--listen-addresses` now can take up to two values - `--port` will apply to ipv4 or ipv6 if only one listening address is given. If two listening addresses are given it will apply only to Ipv4. - `--port6` New flag required when listening over ipv4 and ipv6 that applies exclusively to Ipv6. - `--discovery-port` will now apply to ipv4 and ipv6 if only one listening address is given. - `--discovery-port6` New flag to configure the individual udp port of ipv6 if listening over both ipv4 and ipv6. - `--enr-udp-port` Updated docs to specify that it only applies to ipv4. This is an old behaviour. - `--enr-udp6-port` Added to configure the enr udp6 field. - `--enr-tcp-port` Updated docs to specify that it only applies to ipv4. This is an old behaviour. - `--enr-tcp6-port` Added to configure the enr tcp6 field. - `--enr-addresses` now can take two values. - `--enr-match` updated behaviour. - Common: - rename `unused_port` functions to specify that they are over ipv4. - add functions to get unused ports over ipv6. - Testing binaries - Updated code to reflect network config changes and unused_port changes. ## Additional Info TODOs: - use two sockets in discovery. I'll get back to this and it's on https://github.com/sigp/discv5/pull/160 - lcli allow listening over two sockets in generate_bootnodes_enr - add at least one smoke flag for ipv6 (I have tested this and works for me) - update the book * Add parent_block_number to payload SSE (#4053) ## Issue Addressed In #4027 I forgot to add the `parent_block_number` to the payload attributes SSE. ## Proposed Changes Compute the parent block number while computing the pre-payload attributes. Pass it on to the SSE stream. ## Additional Info Not essential for v3.5.1 as I suspect most builders don't need the `parent_block_root`. 
I would like to use it for my dummy no-op builder however. * Complete match for `has_context_bytes` (#3972) ## Issue Addressed - Add a complete match for `Protocol` here. - The incomplete match was causing us not to append context bytes to the light client protocols - This is the relevant part of the spec where context bytes are defined: https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/p2p-interface.md#getlightclientbootstrap Disclaimer: I have no idea if people are using it, but it shouldn't have been working, so I'm not sure why it wasn't caught. Co-authored-by: realbigsean * Remove Router/Processor Code (#4002) ## Issue Addressed #3938 ## Proposed Changes - `network::Processor` is deleted and all its logic is moved to `network::Router`. - The `network::Router` module is moved to a single file. - The following functions are deleted: `on_disconnect` `send_status` `on_status_response` `on_blocks_by_root_request` `on_lightclient_bootstrap` `on_blocks_by_range_request` `on_block_gossip` `on_unaggregated_attestation_gossip` `on_aggregated_attestation_gossip` `on_voluntary_exit_gossip` `on_proposer_slashing_gossip` `on_attester_slashing_gossip` `on_sync_committee_signature_gossip` `on_sync_committee_contribution_gossip` `on_light_client_finality_update_gossip` `on_light_client_optimistic_update_gossip`. These deletions are possible because the updated `Router` allows the underlying methods to be called directly. * Correct a race condition when dialing peers (#4056) There is a race condition which occurs when multiple discovery queries return at almost the exact same time and they independently contain a useful peer we would like to connect to. It can occur that we add the same peer to the dial queue twice before we get a chance to process the queue. This ends up displaying an error to the user:

```
ERRO Dialing an already dialing peer
```

Although this error is harmless, it's not ideal.
There are two solutions to resolving this: 1. As we decide to dial the peer, we change the state in the peer-db to dialing (before we add it to the queue), which would prevent other requests from adding to the queue. 2. We prevent duplicates in the dial queue This PR has opted for 2. because 1. will complicate the code in that we are changing states in non-intuitive places. Although this technically adds a very slight performance cost, it's probably a cleaner solution as we can keep the state-changing logic in one place. * Siren Ui Lighthouse Version Requirements (#4093) ## Issue Addressed Added a note in the Lighthouse book to instruct users of the minimum Lighthouse version required to run the Siren Ui. * Make more noise when the EL is broken (#3986) ## Issue Addressed Closes #3814, replaces #3818. ## Proposed Changes * Add a WARN log for the case where we are attempting to sync chain segments but can't process them because they're building on an invalid parent. The most common case where we see this is when the execution node database is corrupt, causing sync to stall mysteriously (because we're currently logging the failure only at debug level). * Additionally I've bumped up the logging for invalid execution payloads to `WARN`. This may result in some duplicate logs as we log errors from the `beacon_chain` and then again from the beacon processor. Invalid payloads and corrupt DBs _should_ be rare enough that this doesn't produce overwhelming log volume. * Reduce false positive logging for late builder blocks (#4073) ## Issue Addressed NA ## Proposed Changes When producing a block from a builder, there are two points where we could consider the block "broadcast": 1. When the blinded block is published to the builder. 2.
When the un-blinded block is published to the P2P network (this is always *after* the previous step). Our logging for late block broadcasts was using (2) for builder-blocks, which was creating a lot of false-positive logs. This is because the builder publishes the block on the P2P network themselves before returning it to us and we perform (2). For clarity, the logs were false-positives because we claim that the block was published late by us when it was actually published earlier by the builder. This PR changes our logging behavior so we do our logging at (1) instead. It also updates our metrics for block broadcast to distinguish between local and builder blocks. I believe the metrics change will be natively compatible with existing Grafana dashboards. ## Additional Info One could argue that the builder *should* return the block to us faster, however that's not the case. I think it's more important that we don't desensitize users with false-positives. * Clarify "Ready for Capella" (#4095) ## Issue Addressed Resolves #4061 ## Proposed Changes Adds a message to tell users to check their EE. ## Additional Info I really struggled to come up with something succinct and complete, so I'm totally open to feedback. * Reconstruct Payloads using Payload Bodies Methods (#4028) ## Issue Addressed * #3895 Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com> Co-authored-by: Michael Sproul * Ignore self as a bootnode (#4110) If a node is also a bootnode it can try to add itself to its own local routing table which will emit an error. The error is entirely harmless but we would prefer to avoid emitting the error. This PR does not attempt to add a boot node ENR if that ENR corresponds to our local peer-id/node-id. * Improve Lighthouse Connectivity Via ENR TCP Update (#4057) Currently Lighthouse will remain uncontactable if users port forward a port that is not the same as the one they are listening on. 
For example, if Lighthouse runs with port 9000 TCP/UDP locally but a router is configured to pass 9010 externally to the lighthouse node on 9000, other nodes on the network will not be able to reach the lighthouse node. This occurs because Lighthouse does not update its ENR TCP port on external socket discovery. The intention was always that users should use `--enr-tcp-port` to customise this, but this is non-intuitive. The difficulty arises because we have no discovery mechanism to find our external TCP port. If we discover a new external UDP port, we must guess what our external TCP port might be. This PR assumes the external TCP port is the same as the external UDP port (which may not be the case) and thus updates the TCP port along with the UDP port if the `--enr-tcp-port` flag is not set. Along with this PR, documentation will be added to the Lighthouse book so users can correctly understand and configure their ENR to maximize Lighthouse's connectivity. This relies on https://github.com/sigp/discv5/pull/166 and we should wait for a new release in discv5 before adding this PR. * Customisable shuffling cache size (#4081) This PR enables the user to adjust the shuffling cache size. This is useful for some HTTP API requests which require re-computing old shufflings. This PR currently optimizes the beacon/states/{state_id}/committees HTTP API by first checking the cache before re-building the shuffling. If the shuffling cache size is set to a non-default value, then the HTTP API request will also fill the cache as it constructs new shufflings. If the CLI flag is not present or the value is set to the default of 16, the default behaviour is observed. Co-authored-by: Michael Sproul * Reduce verbosity of reprocess queue logs (#4101) ## Issue Addressed NA ## Proposed Changes Replaces #4058 to attempt to reduce `ERRO Failed to send scheduled attestation` spam and provide more information for diagnosis.
With this PR we achieve: - When dequeuing attestations after a block is received, send only one log which reports `n` failures (rather than `n` logs reporting `n` failures). - Make a distinction in logs between two separate attestation dequeuing events. - Add more information to both log events to help assist with troubleshooting. ## Additional Info NA * Set Capella fork epoch for Mainnet (#4111) ## Issue Addressed NA ## Proposed Changes Sets the mainnet Capella fork epoch as per https://github.com/ethereum/consensus-specs/pull/3300 ## Additional Info I expect the `ef_tests` to fail until we get a compatible consensus spec tests release. * Fork choice modifications and cleanup (#3962) ## Issue Addressed NA ## Proposed Changes - Implements https://github.com/ethereum/consensus-specs/pull/3290/ - Bumps `ef-tests` to [v1.3.0-rc.4](https://github.com/ethereum/consensus-spec-tests/releases/tag/v1.3.0-rc.4). The `CountRealizedFull` concept has been removed and the `--count-unrealized-full` and `--count-unrealized` BN flags now do nothing but log a `WARN` when used. ## Database Migration Debt This PR removes the `best_justified_checkpoint` from fork choice. This field is persisted on-disk and the correct way to go about this would be to make a DB migration to remove the field. However, in this PR I've simply stubbed out the value with a junk value. I've taken this approach because if we're going to do a DB migration I'd love to remove the `Option`s around the justified and finalized checkpoints on `ProtoNode` whilst we're at it. Those options were added in #2822 which was included in Lighthouse v2.1.0. The options were only put there to handle the migration and they've been set to `Some` ever since v2.1.0. There's no reason to keep them as options anymore. I started adding the DB migration to this branch but I started to feel like I was bloating this rather critical PR with nice-to-haves. 
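The first dequeuing change above — one summary log reporting `n` failures rather than `n` individual logs — amounts to tallying failures and logging once. A minimal sketch, with a plain closure standing in for the real slog logger:

```rust
/// Dequeue a batch of scheduled attestation sends, tallying failures and
/// emitting a single summary log instead of one `ERRO` per failure.
/// `log` is an illustrative stand-in for the real logger.
fn dequeue_attestations<T>(
    items: Vec<Result<T, ()>>,
    mut log: impl FnMut(String),
) -> usize {
    let mut failed = 0usize;
    for item in &items {
        if item.is_err() {
            // Count the failure; don't log it individually.
            failed += 1;
        }
    }
    if failed > 0 {
        log(format!("Failed to send {} scheduled attestations", failed));
    }
    failed
}
```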
I've kept the partially-complete migration [over in my repo](https://github.com/paulhauner/lighthouse/tree/fc-pr-18-migration) so we can pick it up after this PR is merged. * Release v4.0.0 (#4112) ## Issue Addressed NA ## Proposed Changes Bump versions to `v4.0.0` ## Additional Info NA * Fix fork choice error message (#4122) ## Issue Addressed NA ## Proposed Changes Ensures that we log the values of the *head* block rather than the *justified* block. ## Additional Info NA * Release Candidate v4.0.1-rc.0 (#4123) * Release v4.0.1 (#4125) ## Issue Addressed NA ## Proposed Changes - Bump versions. - Bump openssl version to resolve various `cargo audit` notices. ## Additional Info - Requires further testing * Update arbitrary (#4139) ## Proposed Changes To prevent breakages from `cargo update`, this updates the `arbitrary` crate to a new commit from my fork. Unfortunately we still need to use my fork (even though my `bound` change was merged) because of this issue: https://github.com/rust-lang/rust-clippy/issues/10185. In a couple of Rust versions it should be resolved upstream. * Update Rust version in lcli Dockerfile (#4121) ## Issue Addressed The minimum supported Rust version has been set to 1.66 as of Lighthouse v4.0.0. This PR updates Rust to 1.66 in the lcli Dockerfile. Co-authored-by: Jimmy Chen * improve error message (#4141) ## Issue Addressed NA ## Proposed Changes Avoid using the magic number directly in the error message. ## Additional Info NA * Add debug fork choice api (#4003) ## Issue Addressed https://github.com/sigp/lighthouse/issues/3669 ## Proposed Changes - A new API to fetch fork choice data, as specified [here](https://github.com/ethereum/beacon-APIs/pull/232) - A new integration test to test the new API ## Additional Info
- `extra_data` field specified in the beacon-API spec is not implemented, please let me know if I should implement it. Co-authored-by: Michael Sproul * Optimise `update_validators` by decrypting key cache only when necessary (#4126) ## Issue Addressed Resolves [#3968: Slow performance of validator client PATCH API with hundreds of keys](https://github.com/sigp/lighthouse/issues/3968) ## Proposed Changes 1. Add a check to determine if there is at least one local definition before decrypting the key cache. 2. Assign an empty `KeyCache` when all definitions are of the `Web3Signer` type. 3. Perform cache-related operations (e.g., saving the modified key cache) only if there are local definitions. ## Additional Info This PR addresses the excessive CPU usage and slow performance experienced when using the `PATCH lighthouse/validators/{pubkey}` request with a large number of keys. The issue was caused by the key cache using cryptography to decipher and cipher the cache entities every time the request was made. This operation called `scrypt`, which was very slow and required a lot of memory when there were many concurrent requests. These changes have no impact on the overall functionality but can lead to significant performance improvements when working with remote signers. Importantly, the key cache is never used when there are only `Web3Signer` definitions, avoiding the expensive operation of decrypting the key cache in such cases. Co-authored-by: Maksim Shcherbo * Correct log for ENR (#4133) ## Issue Addressed https://github.com/sigp/lighthouse/issues/4080 Fixes a log when displaying the initial ENR. * Add `finalized` to HTTP API responses (#3753) ## Issue Addressed #3708 ## Proposed Changes - Add `is_finalized_block` method to `BeaconChain` in `beacon_node/beacon_chain/src/beacon_chain.rs`. - Add `is_finalized_state` method to `BeaconChain` in `beacon_node/beacon_chain/src/beacon_chain.rs`.
- Add `fork_and_execution_optimistic_and_finalized` in `beacon_node/http_api/src/state_id.rs`. - Add `ExecutionOptimisticFinalizedForkVersionedResponse` type in `consensus/types/src/fork_versioned_response.rs`. - Add `execution_optimistic_finalized_fork_versioned_response` function in `beacon_node/http_api/src/version.rs`. - Add `ExecutionOptimisticFinalizedResponse` type in `common/eth2/src/types.rs`. - Add `add_execution_optimistic_finalized` method in `common/eth2/src/types.rs`. - Update API response methods to include finalized. - Remove `execution_optimistic_fork_versioned_response` Co-authored-by: Michael Sproul * Test failing CI tests due to port conflicts (#4134) ## Issue Addressed #4127. PR to test port conflicts in CI tests. ## Proposed Changes See issue for more details; a potential solution could be adding a cache bound by time to the `unused_port` function. * update README of local_testnet (#4114) ## Issue Addressed NA ## Proposed Changes Update the descriptions in the README in `scripts/local_testnet`. ## Additional Info NA * Update database-migrations.md (#4149) ## Issue Addressed Update the database-migrations page to include v4.0.1 for database version v16. ## Proposed Changes Update the table by adding a row.
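The time-bound cache suggested for `unused_port` in #4134 above might look like the following; `RecentPorts` and its API are hypothetical illustrations, not the actual implementation:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Remembers recently handed-out ports for a bounded time, so that
/// concurrent test setups don't both receive the same "unused" port.
struct RecentPorts {
    ttl: Duration,
    seen: HashMap<u16, Instant>,
}

impl RecentPorts {
    fn new(ttl: Duration) -> Self {
        Self { ttl, seen: HashMap::new() }
    }

    /// Returns `true` if this port may be handed out, i.e. it has not
    /// been reserved within the last `ttl`. Passing `now` explicitly
    /// keeps the sketch deterministic for testing.
    fn try_reserve(&mut self, port: u16, now: Instant) -> bool {
        match self.seen.get(&port) {
            Some(&t) if now.duration_since(t) < self.ttl => false,
            _ => {
                self.seen.insert(port, now);
                true
            }
        }
    }
}
```

A TTL slightly longer than the worst-case gap between allocating a port and binding it would suffice to break the race.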
* Rate limiting backfill sync (#3936) ## Issue Addressed #3212 ## Proposed Changes - Introduce a new `rate_limiting_backfill_queue` - any new inbound backfill work events get immediately sent to this FIFO queue **without any processing** - Spawn a `backfill_scheduler` routine that pops a backfill event from the FIFO queue at specified intervals (currently halfway through a slot, or at 6s after slot start for 12s slots) and sends the event to `BeaconProcessor` via a `scheduled_backfill_work_tx` channel - This channel gets polled last in the `InboundEvents`, and any work event received is wrapped in an `InboundEvent::ScheduledBackfillWork` enum variant, which gets processed immediately or queued by the `BeaconProcessor` (existing logic applies from here) Diagram comparing backfill processing with / without rate-limiting: https://github.com/sigp/lighthouse/issues/3212#issuecomment-1386249922 See this comment for @paulhauner's explanation and solution: https://github.com/sigp/lighthouse/issues/3212#issuecomment-1384674956 ## Additional Info I've compared this branch (with backfill processing rate limited to 1 and 3 batches per slot) against the latest stable version. The CPU usage during backfill sync is reduced by ~5% - 20%, more details on this page: https://hackmd.io/@jimmygchen/SJuVpJL3j The above testing was done on Goerli (as I don't currently have hardware for Mainnet); I'm guessing the differences are likely to be bigger on mainnet due to block size. ### TODOs - [x] Experiment with processing multiple batches per slot. (need to think about how to do this for different slot durations) - [x] Add option to disable rate-limiting, enabled by default.
- [x] (No longer required now we're reusing the reprocessing queue) Complete the `backfill_scheduler` task when backfill sync is completed or not required * Add new validator API for voluntary exit (#4119) ## Issue Addressed Addresses #4117 ## Proposed Changes See https://github.com/ethereum/keymanager-APIs/pull/58 for proposed API specification. ## TODO - [x] ~~Add submission to BN~~ - removed, see discussion in [keymanager API](https://github.com/ethereum/keymanager-APIs/pull/58) - [x] ~~Add flag to allow voluntary exit via the API~~ - no longer needed now the VC doesn't submit exit directly - [x] ~~Additional verification / checks, e.g. if validator on same network as BN~~ - to be done on client side - [x] ~~Potentially wait for the message to propagate and return some exit information in the response~~ - not required - [x] Update http tests - [x] ~~Update lighthouse book~~ - not required if this endpoint makes it to the standard keymanager API Co-authored-by: Paul Hauner Co-authored-by: Jimmy Chen * Ban peer race condition (#4140) It is possible that when we go to ban a peer, there is already an unban message in the queue. This could lead to a case where we ban and immediately unban a peer, leaving us in a state where a should-be-banned peer is unbanned. If this banned peer connects to us in this faulty state, we currently do not attempt to re-ban it. This PR also corrects this, so if we do see this error, it will now self-correct (although we shouldn't see the error in the first place). I have also incremented the severity of not supporting protocols, as I see such peers ultimately get banned in a few steps and it seems to make sense to just ban them outright, rather than have them linger. * remove dup log (#4155) ## Issue Addressed NA ## Proposed Changes Remove duplicate log message. ## Additional Info NA * Add `beacon.watch` (#3362) > This is currently a WIP and all features are subject to alteration or removal at any time. ## Overview The successor to #2873.
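The ban/unban race described in #4140 above can be closed by having a ban purge any queued unban for the same peer, so a stale unban can no longer undo a later ban. A minimal sketch with illustrative types (not the actual peer manager):

```rust
use std::collections::{HashSet, VecDeque};

// Stand-in for libp2p's `PeerId`.
type PeerId = u64;

#[derive(Default)]
struct PeerManager {
    banned: HashSet<PeerId>,
    pending_unbans: VecDeque<PeerId>,
}

impl PeerManager {
    fn ban(&mut self, peer: PeerId) {
        // Drop any queued unban that would race with this ban.
        self.pending_unbans.retain(|p| *p != peer);
        self.banned.insert(peer);
    }

    fn queue_unban(&mut self, peer: PeerId) {
        self.pending_unbans.push_back(peer);
    }

    /// Applies all queued unbans (normally driven by the event loop).
    fn process_unbans(&mut self) {
        while let Some(peer) = self.pending_unbans.pop_front() {
            self.banned.remove(&peer);
        }
    }

    fn is_banned(&self, peer: &PeerId) -> bool {
        self.banned.contains(peer)
    }
}
```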
Contains the backbone of `beacon.watch` including syncing code, the initial API, and several core database tables. See `watch/README.md` for more information, requirements and usage. * CI fix: move download web3signer binary out of build script (#4163) ## Issue Addressed Attempt to fix #3812 ## Proposed Changes Move the web3signer binary download script out of the build script to avoid downloading unless necessary. If this works, it should also reduce the build time for all jobs that run compilation. * Add a flag to disable peer scoring (#4135) ## Issue Addressed N/A ## Proposed Changes Adds a flag for disabling peer scoring. This is useful for local testing and for testing small networks for new features. * Remove the unused `ExecutionOptimisticForkVersionedResponse` type (#4160) ## Issue Addressed #4146 ## Proposed Changes Removes the `ExecutionOptimisticForkVersionedResponse` type and the associated Beacon API endpoint, which is now deprecated. Also removes the test associated with the endpoint. * Remove Redundant Trait Bound (#4169) I realized this is redundant while reasoning about how the `store` is implemented, given the [definition of `ItemStore`](https://github.com/sigp/lighthouse/blob/v4.0.1/beacon_node/store/src/lib.rs#L107) ```rust pub trait ItemStore: KeyValueStore + Sync + Send + Sized + 'static { ... } ``` * Make re-org strat more cautious and add more config (#4151) ## Proposed Changes This change attempts to prevent failed re-orgs by: 1. Lowering the re-org cutoff from 2s to 1s. This is informed by a failed re-org attempted by @yorickdowne's node. The failed block was requested in the 1.5-2s window due to a Vouch failure, and failed to propagate to the majority of the network before the attestation deadline at 4s. 2. Allowing users to adjust their re-org cutoff depending on observed network conditions and their risk profile. The static 2 second cutoff was too rigid. 3.
Adding a `--proposer-reorg-disallowed-offsets` flag which can be used to prohibit re-orgs at certain slots. This is intended to help work around an issue whereby re-org blocks at slot 1 are currently taking ~1.6s to propagate on gossip rather than ~500ms. This is suspected to be due to a cache miss in current versions of Prysm, which should be fixed in their next release. ## Additional Info I'm of two minds about removing the `shuffling_stable` check which checks for blocks at slot 0 in the epoch. If we removed it, users would be able to configure Lighthouse to try re-orging at slot 0, which likely wouldn't work very well due to interactions with the proposer index cache. I think we could leave it for now and revisit it later. * Avoid processing redundant RPC blocks (#4179) ## Proposed Changes We already make some attempts to avoid processing RPC blocks when a block from the same proposer is already being processed through gossip. This PR strengthens that guarantee by using the existing cache for `observed_block_producers` to inform whether an RPC block's processing should be delayed. * Update Lighthouse book and some FAQs (#4178) ## Issue Addressed Updated Section 2 of the Lighthouse book and added some FAQs. ## Proposed Changes All changes are made in the book/src .md files. Co-authored-by: chonghe Co-authored-by: Michael Sproul * Use head state for exit verification (#4183) ## Issue Addressed NA ## Proposed Changes Similar to #4181 but without the version bump and with a more nuanced fix. Patches the high CPU usage seen after the Capella fork, which was caused by processing exits when there are skip slots. ## Additional Info ~~This is an imperfect solution that will cause us to drop some exits at the fork boundary.
This is tracked at #4184.~~ * Address observed proposers behaviour (#4192) ## Issue Addressed NA ## Proposed Changes Apply two changes to code introduced in #4179: 1. Remove the `ERRO` log for when we error on `proposer_has_been_observed()`. We were seeing a lot of this in our logs for finalized blocks and it's a bit noisy. 1. Use `false` rather than `true` for `proposal_already_known` when there is an error. If a block raises an error in `proposer_has_been_observed()` then the block must be invalid, so we should process (and reject) it now rather than queuing it. For reference, here is one of the offending `ERRO` logs: ``` ERRO Failed to check observed proposers block_root: 0x5845…878e, source: rpc, error: FinalizedBlock { slot: Slot(5410983), finalized_slot: Slot(5411232) } ``` ## Additional Info NA * Use efficient payload reconstruction for HTTP API (#4102) ## Proposed Changes Builds on #4028 to use the new payload bodies methods in the HTTP API as well. ## Caveats The payloads by range method only works for the finalized chain, so it can't be used in the execution engine integration tests because we try to reconstruct unfinalized payloads there. * Set user agent on requests to builder (#4199) ## Issue Addressed Closes #4185 ## Proposed Changes - Set user agent to `Lighthouse/vX.Y.Z-` by default - Allow tweaking user agent via `--builder-user-agent "agent"` * Bump Rust version (MSRV) (#4204) ## Issue Addressed There was a [`VecDeque` bug](https://github.com/rust-lang/rust/issues/108453) in some recent versions of the Rust standard library (1.67.0 & 1.67.1) that could cause Lighthouse to panic (reported by `@Sea Monkey` on discord). See full logs below. 
The issue was likely introduced in Rust 1.67.0 and [fixed](https://github.com/rust-lang/rust/pull/108475) in 1.68, and we were able to reproduce the panic ourselves using [@michaelsproul's fuzz tests](https://github.com/michaelsproul/lighthouse/blob/fuzz-lru-time-cache/beacon_node/lighthouse_network/src/peer_manager/fuzz.rs#L111) on both Rust 1.67.0 and 1.67.1. Users that use our Docker images or binaries are unlikely to be affected, as our Docker images were built with `1.66`, and the latest binaries were built with the latest stable (`1.68.2`). It likely impacts users that build from source using Rust versions 1.67.x. ## Proposed Changes Bump Rust version (MSRV) to latest stable `1.68.2`. ## Additional Info From `@Sea Monkey` on Lighthouse Discord: > Crash on goerli using `unstable` `dd124b2d6804d02e4e221f29387a56775acccd08` ``` thread 'tokio-runtime-worker' panicked at 'Key must exist', /mnt/goerli/goerli/lighthouse/common/lru_cache/src/time.rs:68:28 stack backtrace: Apr 15 09:37:36.993 WARN Peer sent invalid block in single block lookup, peer_id: 16Uiu2HAm6ZuyJpVpR6y51X4Enbp8EhRBqGycQsDMPX7e5XfPYznG, error: WouldRevertFinalizedSlot { block_slot: Slot(5420212), finalized_slot: Slot(5420224) }, root: 0x10f6…3165, service: sync 0: rust_begin_unwind at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/panicking.rs:575:5 1: core::panicking::panic_fmt at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:64:14 2: core::panicking::panic_display at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:135:5 3: core::panicking::panic_str at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:119:5 4: core::option::expect_failed at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/option.rs:1879:5 5: lru_cache::time::LRUTimeCache::raw_remove 6: lighthouse_network::peer_manager::PeerManager::handle_ban_operation 7: lighthouse_network::peer_manager::PeerManager::handle_score_action
8: lighthouse_network::peer_manager::PeerManager::report_peer 9: network::service::NetworkService::spawn_service::{{closure}} 10: as core::future::future::Future>::poll 11: as core::future::future::Future>::poll 12: ::Output> as core::future::future::Future>::poll 13: tokio::loom::std::unsafe_cell::UnsafeCell::with_mut 14: tokio::runtime::task::core::Core::poll 15: tokio::runtime::task::harness::Harness::poll 16: tokio::runtime::scheduler::multi_thread::worker::Context::run_task 17: tokio::runtime::scheduler::multi_thread::worker::Context::run 18: tokio::macros::scoped_tls::ScopedKey::set 19: tokio::runtime::scheduler::multi_thread::worker::run 20: tokio::loom::std::unsafe_cell::UnsafeCell::with_mut 21: tokio::runtime::task::core::Core::poll 22: tokio::runtime::task::harness::Harness::poll 23: tokio::runtime::blocking::pool::Inner::run note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace. Apr 15 09:37:37.069 INFO Saved DHT state service: network Apr 15 09:37:37.070 INFO Network service shutdown service: network Apr 15 09:37:37.132 CRIT Task panic. This is a bug! advice: Please check above for a backtrace and notify the developers, message: , task_name: network Apr 15 09:37:37.132 INFO Internal shutdown received reason: Panic (fatal error) Apr 15 09:37:37.133 INFO Shutting down.. reason: Failure("Panic (fatal error)") Apr 15 09:37:37.135 WARN Unable to free worker error: channel closed, msg: did not free worker, shutdown may be underway Apr 15 09:37:39.350 INFO Saved beacon chain to disk service: beacon Panic (fatal error) ``` * Check lateness of block before requeuing it (#4208) ## Issue Addressed NA ## Proposed Changes Avoids reprocessing loops introduced in #4179. (Also somewhat related to #4192). Breaks the re-queue loop by only re-queuing when an RPC block is received before the attestation creation deadline. I've put `proposal_is_known` behind a closure to avoid interacting with the `observed_proposers` lock unnecessarily. 
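The re-queue guard from #4208 above — only re-queue an RPC block that arrives before the attestation creation deadline, with the lock-guarded proposer check deferred behind a closure — can be sketched as follows. The constants and function names are illustrative, not Lighthouse's actual implementation:

```rust
const SLOT_MS: u64 = 12_000;
// The attestation deadline falls a third of the way into the slot (4s).
const ATTESTATION_DEADLINE_MS: u64 = SLOT_MS / 3;

/// Decide whether an RPC block should be re-queued for later processing.
/// `proposal_is_known` stands in for the check against the
/// `observed_proposers` cache, which takes a lock in the real code; the
/// closure ensures that lock is only touched when the timing check passes.
fn should_requeue(
    ms_into_slot: u64,
    proposal_is_known: impl FnOnce() -> bool,
) -> bool {
    // Blocks received after the deadline are processed immediately:
    // re-queuing them again would just recreate the reprocessing loop.
    if ms_into_slot >= ATTESTATION_DEADLINE_MS {
        return false;
    }
    proposal_is_known()
}
```

Deferring the cache lookup behind `FnOnce` mirrors the PR's note about avoiding unnecessary interaction with the `observed_proposers` lock.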
## Additional Info NA * Release v4.1.0 (#4191) ## Issue Addressed NA ## Proposed Changes Bump versions. ## Additional Info NA --------- Co-authored-by: realbigsean Co-authored-by: Pawan Dhananjay Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com> Co-authored-by: Mark Mackey Co-authored-by: Paul Hauner Co-authored-by: Jimmy Chen Co-authored-by: Michael Sproul Co-authored-by: Michael Sproul Co-authored-by: realbigsean Co-authored-by: Mac L Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com> Co-authored-by: Madman600 <38760981+Madman600@users.noreply.github.com> Co-authored-by: Adam Patacchiola Co-authored-by: Santiago Medina Co-authored-by: David Theodore Co-authored-by: GeemoCandama Co-authored-by: GeemoCandama <104614073+GeemoCandama@users.noreply.github.com> Co-authored-by: aliask Co-authored-by: Age Manning Co-authored-by: antondlr Co-authored-by: naviechan Co-authored-by: navie Co-authored-by: kevinbogner Co-authored-by: Nazar Hussain Co-authored-by: Divma Co-authored-by: Alan Höng Co-authored-by: Mavrik Co-authored-by: Atanas Minkov Co-authored-by: Daniel Ramirez Chiquillo Co-authored-by: Alex Wied Co-authored-by: Sebastian Richel Co-authored-by: Sebastian Richel <47844429+sebastianrich18@users.noreply.github.com> Co-authored-by: Jimmy Chen Co-authored-by: int88 Co-authored-by: Christopher Chong Co-authored-by: Maksim Shcherbo Co-authored-by: Maksim Shcherbo Co-authored-by: chonghe Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com> --- .cargo/config.toml | 4 + .github/workflows/docker.yml | 18 +- .github/workflows/release.yml | 10 +- .github/workflows/test-suite.yml | 21 +- .gitignore | 1 + Cargo.lock | 1908 +++++++++++------ Cargo.toml | 12 +- Dockerfile | 2 +- Makefile | 45 +- README.md | 2 +- beacon_node/Cargo.toml | 4 +- beacon_node/beacon_chain/Cargo.toml | 5 +- .../beacon_chain/src/attestation_rewards.rs | 195 ++ .../src/attestation_verification.rs | 5 + 
.../beacon_chain/src/beacon_block_reward.rs | 237 ++ .../beacon_chain/src/beacon_block_streamer.rs | 973 +++++++++ beacon_node/beacon_chain/src/beacon_chain.rs | 616 ++++-- .../src/beacon_fork_choice_store.rs | 25 +- .../beacon_chain/src/beacon_snapshot.rs | 6 +- beacon_node/beacon_chain/src/block_reward.rs | 4 +- .../beacon_chain/src/block_verification.rs | 25 +- beacon_node/beacon_chain/src/builder.rs | 23 +- .../beacon_chain/src/canonical_head.rs | 34 +- .../beacon_chain/src/capella_readiness.rs | 122 ++ beacon_node/beacon_chain/src/chain_config.rs | 43 +- beacon_node/beacon_chain/src/errors.rs | 25 +- beacon_node/beacon_chain/src/events.rs | 80 +- .../beacon_chain/src/execution_payload.rs | 113 +- .../beacon_chain/src/fork_choice_signal.rs | 4 +- beacon_node/beacon_chain/src/fork_revert.rs | 3 - beacon_node/beacon_chain/src/lib.rs | 12 +- ...t_client_optimistic_update_verification.rs | 15 + .../beacon_chain/src/merge_readiness.rs | 2 +- .../beacon_chain/src/observed_operations.rs | 53 +- beacon_node/beacon_chain/src/schema_change.rs | 27 + .../src/schema_change/migration_schema_v12.rs | 8 +- .../src/schema_change/migration_schema_v14.rs | 125 ++ .../src/schema_change/migration_schema_v15.rs | 76 + .../src/schema_change/migration_schema_v16.rs | 46 + .../beacon_chain/src/shuffling_cache.rs | 16 +- .../src/sync_committee_rewards.rs | 87 + beacon_node/beacon_chain/src/test_utils.rs | 265 ++- .../beacon_chain/src/validator_monitor.rs | 111 +- .../src/validator_pubkey_cache.rs | 17 +- beacon_node/beacon_chain/tests/capella.rs | 167 ++ beacon_node/beacon_chain/tests/main.rs | 2 + beacon_node/beacon_chain/tests/merge.rs | 57 +- .../tests/payload_invalidation.rs | 35 +- beacon_node/beacon_chain/tests/rewards.rs | 121 ++ beacon_node/beacon_chain/tests/store_tests.rs | 112 +- .../tests/sync_committee_verification.rs | 7 +- beacon_node/beacon_chain/tests/tests.rs | 6 +- beacon_node/builder_client/Cargo.toml | 1 + beacon_node/builder_client/src/lib.rs | 23 +- 
beacon_node/client/Cargo.toml | 6 +- .../client/src/address_change_broadcast.rs | 322 +++ beacon_node/client/src/builder.rs | 26 +- beacon_node/client/src/config.rs | 2 + beacon_node/client/src/lib.rs | 16 +- beacon_node/client/src/notifier.rs | 76 +- beacon_node/eth1/Cargo.toml | 2 +- beacon_node/eth1/tests/test.rs | 3 +- beacon_node/execution_layer/Cargo.toml | 7 +- beacon_node/execution_layer/src/block_hash.rs | 44 +- beacon_node/execution_layer/src/engine_api.rs | 334 ++- .../execution_layer/src/engine_api/http.rs | 488 ++++- .../src/engine_api/json_structures.rs | 425 ++-- beacon_node/execution_layer/src/engines.rs | 112 +- beacon_node/execution_layer/src/lib.rs | 610 ++++-- beacon_node/execution_layer/src/metrics.rs | 4 + .../execution_layer/src/payload_status.rs | 24 +- .../test_utils/execution_block_generator.rs | 148 +- .../src/test_utils/handle_rpc.rs | 295 ++- .../execution_layer/src/test_utils/hook.rs | 8 +- .../src/test_utils/mock_builder.rs | 251 ++- .../src/test_utils/mock_execution_layer.rs | 96 +- .../execution_layer/src/test_utils/mod.rs | 38 +- beacon_node/genesis/src/interop.rs | 164 +- beacon_node/genesis/src/lib.rs | 5 +- beacon_node/http_api/Cargo.toml | 11 +- .../http_api/src/attestation_performance.rs | 4 +- beacon_node/http_api/src/attester_duties.rs | 6 +- beacon_node/http_api/src/block_id.rs | 73 +- beacon_node/http_api/src/block_rewards.rs | 2 +- beacon_node/http_api/src/lib.rs | 962 ++++++--- beacon_node/http_api/src/metrics.rs | 5 +- beacon_node/http_api/src/proposer_duties.rs | 4 +- beacon_node/http_api/src/publish_blocks.rs | 183 +- .../http_api/src/standard_block_rewards.rs | 27 + beacon_node/http_api/src/state_id.rs | 75 +- .../http_api/src/sync_committee_rewards.rs | 77 + .../{tests/common.rs => src/test_utils.rs} | 47 +- beacon_node/http_api/src/ui.rs | 202 +- .../http_api/src/validator_inclusion.rs | 2 +- beacon_node/http_api/src/version.rs | 21 +- beacon_node/http_api/tests/fork_tests.rs | 240 ++- 
.../http_api/tests/interactive_tests.rs | 156 +- beacon_node/http_api/tests/main.rs | 2 - beacon_node/http_api/tests/tests.rs | 822 +++++-- beacon_node/http_metrics/src/lib.rs | 8 +- beacon_node/http_metrics/tests/tests.rs | 9 +- beacon_node/lighthouse_network/Cargo.toml | 7 +- beacon_node/lighthouse_network/src/config.rs | 217 +- .../lighthouse_network/src/discovery/enr.rs | 37 +- .../lighthouse_network/src/discovery/mod.rs | 77 +- beacon_node/lighthouse_network/src/lib.rs | 2 + .../lighthouse_network/src/listen_addr.rs | 97 + beacon_node/lighthouse_network/src/metrics.rs | 5 +- .../src/peer_manager/mod.rs | 83 +- .../src/peer_manager/network_behaviour.rs | 8 +- .../src/peer_manager/peerdb.rs | 54 +- .../lighthouse_network/src/rpc/codec/base.rs | 3 + .../src/rpc/codec/ssz_snappy.rs | 26 +- .../lighthouse_network/src/rpc/config.rs | 173 ++ beacon_node/lighthouse_network/src/rpc/mod.rs | 58 +- .../lighthouse_network/src/rpc/protocol.rs | 67 +- .../src/rpc/rate_limiter.rs | 69 +- .../src/rpc/self_limiter.rs | 202 ++ .../src/service/api_types.rs | 3 +- .../src/service/gossip_cache.rs | 13 + .../lighthouse_network/src/service/mod.rs | 92 +- .../lighthouse_network/src/service/utils.rs | 1 + .../lighthouse_network/src/types/globals.rs | 33 +- .../lighthouse_network/src/types/mod.rs | 4 +- .../lighthouse_network/src/types/pubsub.rs | 28 +- .../lighthouse_network/src/types/topics.rs | 47 +- .../lighthouse_network/tests/common.rs | 15 +- .../lighthouse_network/tests/rpc_tests.rs | 4 +- beacon_node/network/Cargo.toml | 4 +- .../network/src/beacon_processor/mod.rs | 197 +- .../network/src/beacon_processor/tests.rs | 71 +- .../work_reprocessing_queue.rs | 434 +++- .../beacon_processor/worker/gossip_methods.rs | 208 +- .../beacon_processor/worker/rpc_methods.rs | 61 +- .../beacon_processor/worker/sync_methods.rs | 84 +- beacon_node/network/src/metrics.rs | 38 +- beacon_node/network/src/nat.rs | 12 +- beacon_node/network/src/router.rs | 535 +++++ 
beacon_node/network/src/router/mod.rs | 309 --- beacon_node/network/src/router/processor.rs | 459 ---- beacon_node/network/src/service.rs | 33 +- beacon_node/network/src/service/tests.rs | 3 +- .../network/src/subnet_service/tests/mod.rs | 3 + beacon_node/operation_pool/Cargo.toml | 3 +- .../src/bls_to_execution_changes.rs | 147 ++ beacon_node/operation_pool/src/lib.rs | 146 +- beacon_node/operation_pool/src/persistence.rs | 117 +- beacon_node/src/cli.rs | 167 +- beacon_node/src/config.rs | 439 +++- beacon_node/store/Cargo.toml | 2 +- beacon_node/store/src/chunked_vector.rs | 78 +- beacon_node/store/src/errors.rs | 12 +- beacon_node/store/src/hot_cold_store.rs | 44 +- .../store/src/impls/execution_payload.rs | 31 +- beacon_node/store/src/lib.rs | 3 + beacon_node/store/src/metadata.rs | 2 +- beacon_node/store/src/partial_beacon_state.rs | 121 +- beacon_node/store/src/reconstruct.rs | 6 +- beacon_node/tests/test.rs | 1 - book/src/SUMMARY.md | 6 +- book/src/advanced_networking.md | 12 +- book/src/api-lighthouse.md | 2 +- book/src/checkpoint-sync.md | 13 +- book/src/database-migrations.md | 9 +- book/src/docker.md | 49 +- book/src/faq.md | 55 +- book/src/imgs/ui-account-earnings.png | Bin 0 -> 886925 bytes book/src/imgs/ui-balance-modal.png | Bin 0 -> 44421 bytes book/src/imgs/ui-configuration.png | Bin 0 -> 110294 bytes book/src/imgs/ui-dashboard.png | Bin 0 -> 1453496 bytes book/src/imgs/ui-device.png | Bin 0 -> 57810 bytes book/src/imgs/ui-hardware.png | Bin 0 -> 73137 bytes book/src/imgs/ui-settings.png | Bin 0 -> 353862 bytes book/src/imgs/ui-validator-balance1.png | Bin 0 -> 67314 bytes book/src/imgs/ui-validator-balance2.png | Bin 0 -> 90980 bytes book/src/imgs/ui-validator-management.png | Bin 0 -> 391996 bytes book/src/imgs/ui-validator-modal.png | Bin 0 -> 341438 bytes book/src/imgs/ui-validator-table.png | Bin 0 -> 127175 bytes book/src/imgs/ui.png | Bin 0 -> 372824 bytes book/src/installation-binaries.md | 21 +- book/src/installation-source.md | 59 +- 
book/src/installation.md | 25 +- book/src/late-block-re-orgs.md | 9 + book/src/lighthouse-ui.md | 33 + book/src/merge-migration.md | 4 +- book/src/pi.md | 27 +- book/src/run_a_node.md | 2 +- book/src/system-requirements.md | 23 - book/src/ui-configuration.md | 47 + book/src/ui-faqs.md | 16 + book/src/ui-installation.md | 105 + book/src/ui-usage.md | 61 + book/src/validator-inclusion.md | 9 +- boot_node/Cargo.toml | 2 +- boot_node/src/cli.rs | 8 + boot_node/src/config.rs | 84 +- boot_node/src/server.rs | 2 +- bors.toml | 1 - common/compare_fields/src/lib.rs | 10 +- common/compare_fields_derive/src/lib.rs | 3 +- common/eth2/Cargo.toml | 2 +- common/eth2/src/lib.rs | 223 +- common/eth2/src/lighthouse.rs | 77 +- .../src/lighthouse/attestation_rewards.rs | 44 + .../src/lighthouse/standard_block_rewards.rs | 26 + .../src/lighthouse/sync_committee_rewards.rs | 13 + common/eth2/src/lighthouse_vc/http_client.rs | 24 + common/eth2/src/lighthouse_vc/types.rs | 5 + common/eth2/src/types.rs | 147 +- common/eth2_config/src/lib.rs | 20 - common/eth2_network_config/Cargo.toml | 2 +- .../gnosis/config.yaml | 3 + .../kiln/boot_enr.yaml | 3 - .../built_in_network_configs/kiln/config.yaml | 69 - .../kiln/deploy_block.txt | 1 - .../kiln/genesis.ssz.zip | Bin 8576081 -> 0 bytes .../mainnet/config.yaml | 3 + .../prater/config.yaml | 5 +- .../ropsten/boot_enr.yaml | 4 - .../ropsten/config.yaml | 71 - .../ropsten/deploy_block.txt | 1 - .../ropsten/genesis.ssz.zip | Bin 8234124 -> 0 bytes .../sepolia/config.yaml | 4 +- common/eth2_network_config/src/lib.rs | 2 +- common/lighthouse_version/src/lib.rs | 4 +- common/lru_cache/src/time.rs | 77 + common/malloc_utils/Cargo.toml | 12 +- common/malloc_utils/src/jemalloc.rs | 52 + common/malloc_utils/src/lib.rs | 44 +- common/slot_clock/src/lib.rs | 17 +- common/slot_clock/src/manual_slot_clock.rs | 7 +- common/unused_port/Cargo.toml | 3 + common/unused_port/src/lib.rs | 66 +- common/warp_utils/src/lib.rs | 1 + common/warp_utils/src/task.rs | 24 +- 
common/warp_utils/src/uor.rs | 25 + consensus/cached_tree_hash/Cargo.toml | 2 +- consensus/fork_choice/Cargo.toml | 2 +- consensus/fork_choice/src/fork_choice.rs | 253 +-- .../fork_choice/src/fork_choice_store.rs | 10 +- consensus/fork_choice/src/lib.rs | 4 +- consensus/fork_choice/tests/tests.rs | 33 +- consensus/proto_array/Cargo.toml | 2 +- consensus/proto_array/src/error.rs | 1 + .../src/fork_choice_test_definition.rs | 8 +- .../proto_array/src/justified_balances.rs | 2 +- consensus/proto_array/src/lib.rs | 8 +- consensus/proto_array/src/proto_array.rs | 176 +- .../src/proto_array_fork_choice.rs | 262 ++- consensus/proto_array/src/ssz_container.rs | 9 +- consensus/serde_utils/src/lib.rs | 2 +- consensus/serde_utils/src/quoted_int.rs | 36 +- consensus/ssz/Cargo.toml | 2 +- consensus/ssz/src/decode/impls.rs | 1 + consensus/ssz/src/encode/impls.rs | 1 + consensus/ssz/tests/tests.rs | 142 -- consensus/ssz_derive/Cargo.toml | 5 +- consensus/ssz_derive/src/lib.rs | 455 +++- consensus/ssz_derive/tests/tests.rs | 215 ++ consensus/ssz_types/src/bitfield.rs | 4 +- consensus/ssz_types/src/fixed_vector.rs | 2 +- consensus/ssz_types/src/variable_list.rs | 11 +- consensus/state_processing/Cargo.toml | 2 +- .../src/common/slash_validator.rs | 8 +- .../state_processing/src/consensus_context.rs | 6 +- consensus/state_processing/src/genesis.rs | 27 +- consensus/state_processing/src/lib.rs | 2 +- .../src/per_block_processing.rs | 177 +- .../block_signature_verifier.rs | 42 +- .../src/per_block_processing/errors.rs | 35 +- .../process_operations.rs | 45 +- .../per_block_processing/signature_sets.rs | 42 +- .../src/per_block_processing/tests.rs | 30 +- .../verify_bls_to_execution_change.rs | 56 + .../src/per_block_processing/verify_exit.rs | 12 +- .../src/per_epoch_processing.rs | 7 +- .../src/per_epoch_processing/capella.rs | 78 + .../capella/historical_summaries_update.rs | 23 + .../src/per_slot_processing.rs | 6 +- consensus/state_processing/src/upgrade.rs | 2 + 
.../state_processing/src/upgrade/capella.rs | 74 + .../state_processing/src/upgrade/merge.rs | 4 +- .../state_processing/src/verify_operation.rs | 63 +- consensus/tree_hash/Cargo.toml | 2 +- consensus/tree_hash/src/impls.rs | 20 + consensus/tree_hash_derive/src/lib.rs | 1 - consensus/types/Cargo.toml | 32 +- consensus/types/presets/gnosis/capella.yaml | 17 + consensus/types/presets/mainnet/capella.yaml | 17 + consensus/types/presets/minimal/capella.yaml | 17 + consensus/types/src/aggregate_and_proof.rs | 15 +- consensus/types/src/attestation.rs | 13 +- consensus/types/src/attestation_data.rs | 2 +- consensus/types/src/attestation_duty.rs | 3 +- consensus/types/src/attester_slashing.rs | 13 +- consensus/types/src/beacon_block.rs | 240 ++- consensus/types/src/beacon_block_body.rs | 140 +- consensus/types/src/beacon_block_header.rs | 14 +- consensus/types/src/beacon_committee.rs | 3 +- consensus/types/src/beacon_state.rs | 98 +- .../types/src/beacon_state/committee_cache.rs | 1 - .../types/src/beacon_state/exit_cache.rs | 1 - .../types/src/beacon_state/pubkey_cache.rs | 1 - consensus/types/src/beacon_state/tests.rs | 4 +- .../types/src/beacon_state/tree_hash_cache.rs | 31 +- .../types/src/bls_to_execution_change.rs | 57 + consensus/types/src/builder_bid.rs | 69 +- consensus/types/src/chain_spec.rs | 98 +- consensus/types/src/checkpoint.rs | 2 +- consensus/types/src/config_and_preset.rs | 26 +- consensus/types/src/contribution_and_proof.rs | 15 +- consensus/types/src/deposit.rs | 13 +- consensus/types/src/deposit_data.rs | 13 +- consensus/types/src/deposit_message.rs | 14 +- consensus/types/src/enr_fork_id.rs | 13 +- consensus/types/src/eth1_data.rs | 2 +- consensus/types/src/eth_spec.rs | 36 +- consensus/types/src/execution_block_hash.rs | 14 +- consensus/types/src/execution_block_header.rs | 35 +- consensus/types/src/execution_payload.rs | 118 +- .../types/src/execution_payload_header.rs | 192 +- consensus/types/src/fork.rs | 2 +- consensus/types/src/fork_context.rs 
| 7 + consensus/types/src/fork_data.rs | 13 +- consensus/types/src/fork_name.rs | 29 +- .../types/src/fork_versioned_response.rs | 141 ++ consensus/types/src/free_attestation.rs | 14 - consensus/types/src/graffiti.rs | 2 +- consensus/types/src/historical_batch.rs | 15 +- consensus/types/src/historical_summary.rs | 89 + consensus/types/src/indexed_attestation.rs | 13 +- consensus/types/src/lib.rs | 48 +- consensus/types/src/light_client_bootstrap.rs | 14 +- .../types/src/light_client_finality_update.rs | 14 +- .../src/light_client_optimistic_update.rs | 14 +- consensus/types/src/light_client_update.rs | 14 +- consensus/types/src/participation_flags.rs | 2 +- consensus/types/src/payload.rs | 937 ++++++-- consensus/types/src/pending_attestation.rs | 26 +- consensus/types/src/preset.rs | 24 + consensus/types/src/proposer_slashing.rs | 14 +- consensus/types/src/relative_epoch.rs | 6 +- consensus/types/src/selection_proof.rs | 3 +- .../types/src/signed_aggregate_and_proof.rs | 15 +- consensus/types/src/signed_beacon_block.rs | 122 +- .../types/src/signed_beacon_block_header.rs | 14 +- .../src/signed_bls_to_execution_change.rs | 33 + .../src/signed_contribution_and_proof.rs | 15 +- consensus/types/src/signed_voluntary_exit.rs | 13 +- consensus/types/src/signing_data.rs | 14 +- consensus/types/src/slot_epoch.rs | 30 +- consensus/types/src/subnet_id.rs | 3 +- consensus/types/src/sync_aggregate.rs | 13 +- .../src/sync_aggregator_selection_data.rs | 13 +- consensus/types/src/sync_committee.rs | 15 +- .../types/src/sync_committee_contribution.rs | 15 +- consensus/types/src/sync_committee_message.rs | 14 +- consensus/types/src/sync_selection_proof.rs | 3 +- consensus/types/src/sync_subnet_id.rs | 3 +- consensus/types/src/tree_hash_impls.rs | 6 +- consensus/types/src/validator.rs | 60 +- consensus/types/src/voluntary_exit.rs | 13 +- consensus/types/src/withdrawal.rs | 37 + crypto/bls/src/generic_aggregate_signature.rs | 2 +- lcli/Cargo.toml | 7 +- lcli/Dockerfile | 2 +- 
lcli/src/create_payload_header.rs | 32 +- lcli/src/generate_bootnode_enr.rs | 15 +- lcli/src/main.rs | 15 +- lcli/src/new_testnet.rs | 24 +- lighthouse/Cargo.toml | 8 +- lighthouse/environment/src/lib.rs | 4 +- lighthouse/src/main.rs | 27 +- lighthouse/tests/beacon_node.rs | 622 +++++- lighthouse/tests/boot_node.rs | 6 +- lighthouse/tests/validator_client.rs | 25 + scripts/local_testnet/README.md | 2 +- scripts/local_testnet/start_local_testnet.sh | 2 +- scripts/local_testnet/vars.env | 2 +- scripts/tests/doppelganger_protection.sh | 39 +- scripts/tests/vars.env | 3 + slasher/Cargo.toml | 2 +- slasher/service/Cargo.toml | 2 +- testing/antithesis/Dockerfile.libvoidstar | 8 +- testing/ef_tests/Cargo.toml | 2 +- testing/ef_tests/Makefile | 2 +- testing/ef_tests/check_all_files_accessed.py | 11 +- testing/ef_tests/src/cases/common.rs | 1 + .../ef_tests/src/cases/epoch_processing.rs | 52 +- testing/ef_tests/src/cases/fork.rs | 3 +- testing/ef_tests/src/cases/fork_choice.rs | 33 +- .../src/cases/genesis_initialization.rs | 5 +- .../src/cases/merkle_proof_validity.rs | 2 +- testing/ef_tests/src/cases/operations.rs | 136 +- testing/ef_tests/src/cases/transition.rs | 5 + testing/ef_tests/src/handler.rs | 25 +- testing/ef_tests/src/lib.rs | 10 +- testing/ef_tests/src/type_name.rs | 10 + testing/ef_tests/tests/tests.rs | 75 +- testing/eth1_test_rig/src/ganache.rs | 6 +- .../execution_engine_integration/Cargo.toml | 1 + .../src/execution_engine.rs | 6 +- .../execution_engine_integration/src/geth.rs | 4 +- .../execution_engine_integration/src/main.rs | 1 - .../src/nethermind.rs | 10 +- .../src/test_rig.rs | 115 +- testing/node_test_rig/src/lib.rs | 5 +- testing/simulator/src/checks.rs | 4 +- testing/simulator/src/eth1_sim.rs | 5 +- testing/simulator/src/local_network.rs | 22 +- testing/simulator/src/main.rs | 2 - testing/simulator/src/no_eth1_sim.rs | 5 +- testing/simulator/src/sync_sim.rs | 5 +- testing/web3signer_tests/Cargo.toml | 8 +- .../{build.rs => src/get_web3signer.rs} 
| 11 - testing/web3signer_tests/src/lib.rs | 51 +- .../slashing_protection/Cargo.toml | 4 +- .../src/slashing_database.rs | 4 +- validator_client/src/beacon_node_fallback.rs | 75 +- validator_client/src/block_service.rs | 29 +- validator_client/src/cli.rs | 18 + validator_client/src/config.rs | 16 + validator_client/src/duties_service.rs | 244 ++- .../http_api/create_signed_voluntary_exit.rs | 69 + validator_client/src/http_api/mod.rs | 47 + validator_client/src/http_api/tests.rs | 71 +- validator_client/src/http_metrics/metrics.rs | 28 + validator_client/src/http_metrics/mod.rs | 8 +- .../src/initialized_validators.rs | 27 +- validator_client/src/latency.rs | 64 + validator_client/src/lib.rs | 27 +- validator_client/src/signing_method.rs | 11 +- .../src/signing_method/web3signer.rs | 13 +- validator_client/src/validator_store.rs | 47 +- watch/.gitignore | 1 + watch/Cargo.toml | 45 + watch/README.md | 460 ++++ watch/config.yaml.default | 49 + watch/diesel.toml | 5 + watch/migrations/.gitkeep | 0 .../down.sql | 6 + .../up.sql | 36 + .../down.sql | 1 + .../2022-01-01-000000_canonical_slots/up.sql | 6 + .../2022-01-01-000001_beacon_blocks/down.sql | 1 + .../2022-01-01-000001_beacon_blocks/up.sql | 7 + .../2022-01-01-000002_validators/down.sql | 1 + .../2022-01-01-000002_validators/up.sql | 7 + .../2022-01-01-000003_proposer_info/down.sql | 1 + .../2022-01-01-000003_proposer_info/up.sql | 5 + .../2022-01-01-000004_active_config/down.sql | 1 + .../2022-01-01-000004_active_config/up.sql | 5 + .../2022-01-01-000010_blockprint/down.sql | 1 + .../2022-01-01-000010_blockprint/up.sql | 4 + .../2022-01-01-000011_block_rewards/down.sql | 1 + .../2022-01-01-000011_block_rewards/up.sql | 6 + .../2022-01-01-000012_block_packing/down.sql | 1 + .../2022-01-01-000012_block_packing/up.sql | 6 + .../down.sql | 1 + .../up.sql | 8 + .../2022-01-01-000020_capella/down.sql | 2 + .../2022-01-01-000020_capella/up.sql | 3 + watch/postgres_docker_compose/compose.yml | 16 + 
watch/src/block_packing/database.rs | 140 ++ watch/src/block_packing/mod.rs | 38 + watch/src/block_packing/server.rs | 31 + watch/src/block_packing/updater.rs | 211 ++ watch/src/block_rewards/database.rs | 137 ++ watch/src/block_rewards/mod.rs | 38 + watch/src/block_rewards/server.rs | 31 + watch/src/block_rewards/updater.rs | 157 ++ watch/src/blockprint/config.rs | 40 + watch/src/blockprint/database.rs | 224 ++ watch/src/blockprint/mod.rs | 149 ++ watch/src/blockprint/server.rs | 31 + watch/src/blockprint/updater.rs | 172 ++ watch/src/cli.rs | 55 + watch/src/client.rs | 178 ++ watch/src/config.rs | 50 + watch/src/database/compat.rs | 49 + watch/src/database/config.rs | 74 + watch/src/database/error.rs | 55 + watch/src/database/mod.rs | 782 +++++++ watch/src/database/models.rs | 67 + watch/src/database/schema.rs | 102 + watch/src/database/utils.rs | 29 + watch/src/database/watch_types.rs | 119 + watch/src/lib.rs | 12 + watch/src/logger.rs | 24 + watch/src/main.rs | 41 + watch/src/server/config.rs | 28 + watch/src/server/error.rs | 50 + watch/src/server/handler.rs | 266 +++ watch/src/server/mod.rs | 134 ++ watch/src/suboptimal_attestations/database.rs | 224 ++ watch/src/suboptimal_attestations/mod.rs | 56 + watch/src/suboptimal_attestations/server.rs | 299 +++ watch/src/suboptimal_attestations/updater.rs | 236 ++ watch/src/updater/config.rs | 65 + watch/src/updater/error.rs | 56 + watch/src/updater/handler.rs | 471 ++++ watch/src/updater/mod.rs | 234 ++ watch/tests/tests.rs | 1254 +++++++++++ 503 files changed, 28706 insertions(+), 5729 deletions(-) create mode 100644 .cargo/config.toml create mode 100644 beacon_node/beacon_chain/src/attestation_rewards.rs create mode 100644 beacon_node/beacon_chain/src/beacon_block_reward.rs create mode 100644 beacon_node/beacon_chain/src/beacon_block_streamer.rs create mode 100644 beacon_node/beacon_chain/src/capella_readiness.rs create mode 100644 beacon_node/beacon_chain/src/schema_change/migration_schema_v14.rs create mode 
100644 beacon_node/beacon_chain/src/schema_change/migration_schema_v15.rs create mode 100644 beacon_node/beacon_chain/src/schema_change/migration_schema_v16.rs create mode 100644 beacon_node/beacon_chain/src/sync_committee_rewards.rs create mode 100644 beacon_node/beacon_chain/tests/capella.rs create mode 100644 beacon_node/beacon_chain/tests/rewards.rs create mode 100644 beacon_node/client/src/address_change_broadcast.rs create mode 100644 beacon_node/http_api/src/standard_block_rewards.rs create mode 100644 beacon_node/http_api/src/sync_committee_rewards.rs rename beacon_node/http_api/{tests/common.rs => src/test_utils.rs} (82%) create mode 100644 beacon_node/lighthouse_network/src/listen_addr.rs create mode 100644 beacon_node/lighthouse_network/src/rpc/config.rs create mode 100644 beacon_node/lighthouse_network/src/rpc/self_limiter.rs create mode 100644 beacon_node/network/src/router.rs delete mode 100644 beacon_node/network/src/router/mod.rs delete mode 100644 beacon_node/network/src/router/processor.rs create mode 100644 beacon_node/operation_pool/src/bls_to_execution_changes.rs create mode 100644 book/src/imgs/ui-account-earnings.png create mode 100644 book/src/imgs/ui-balance-modal.png create mode 100644 book/src/imgs/ui-configuration.png create mode 100644 book/src/imgs/ui-dashboard.png create mode 100644 book/src/imgs/ui-device.png create mode 100644 book/src/imgs/ui-hardware.png create mode 100644 book/src/imgs/ui-settings.png create mode 100644 book/src/imgs/ui-validator-balance1.png create mode 100644 book/src/imgs/ui-validator-balance2.png create mode 100644 book/src/imgs/ui-validator-management.png create mode 100644 book/src/imgs/ui-validator-modal.png create mode 100644 book/src/imgs/ui-validator-table.png create mode 100644 book/src/imgs/ui.png create mode 100644 book/src/lighthouse-ui.md delete mode 100644 book/src/system-requirements.md create mode 100644 book/src/ui-configuration.md create mode 100644 book/src/ui-faqs.md create mode 100644 
book/src/ui-installation.md create mode 100644 book/src/ui-usage.md create mode 100644 common/eth2/src/lighthouse/attestation_rewards.rs create mode 100644 common/eth2/src/lighthouse/standard_block_rewards.rs create mode 100644 common/eth2/src/lighthouse/sync_committee_rewards.rs delete mode 100644 common/eth2_network_config/built_in_network_configs/kiln/boot_enr.yaml delete mode 100644 common/eth2_network_config/built_in_network_configs/kiln/config.yaml delete mode 100644 common/eth2_network_config/built_in_network_configs/kiln/deploy_block.txt delete mode 100644 common/eth2_network_config/built_in_network_configs/kiln/genesis.ssz.zip delete mode 100644 common/eth2_network_config/built_in_network_configs/ropsten/boot_enr.yaml delete mode 100644 common/eth2_network_config/built_in_network_configs/ropsten/config.yaml delete mode 100644 common/eth2_network_config/built_in_network_configs/ropsten/deploy_block.txt delete mode 100644 common/eth2_network_config/built_in_network_configs/ropsten/genesis.ssz.zip create mode 100644 common/malloc_utils/src/jemalloc.rs create mode 100644 common/warp_utils/src/uor.rs create mode 100644 consensus/ssz_derive/tests/tests.rs create mode 100644 consensus/state_processing/src/per_block_processing/verify_bls_to_execution_change.rs create mode 100644 consensus/state_processing/src/per_epoch_processing/capella.rs create mode 100644 consensus/state_processing/src/per_epoch_processing/capella/historical_summaries_update.rs create mode 100644 consensus/state_processing/src/upgrade/capella.rs create mode 100644 consensus/types/presets/gnosis/capella.yaml create mode 100644 consensus/types/presets/mainnet/capella.yaml create mode 100644 consensus/types/presets/minimal/capella.yaml create mode 100644 consensus/types/src/bls_to_execution_change.rs create mode 100644 consensus/types/src/fork_versioned_response.rs delete mode 100644 consensus/types/src/free_attestation.rs create mode 100644 consensus/types/src/historical_summary.rs create mode 
100644 consensus/types/src/signed_bls_to_execution_change.rs create mode 100644 consensus/types/src/withdrawal.rs rename testing/web3signer_tests/{build.rs => src/get_web3signer.rs} (88%) create mode 100644 validator_client/src/http_api/create_signed_voluntary_exit.rs create mode 100644 validator_client/src/latency.rs create mode 100644 watch/.gitignore create mode 100644 watch/Cargo.toml create mode 100644 watch/README.md create mode 100644 watch/config.yaml.default create mode 100644 watch/diesel.toml create mode 100644 watch/migrations/.gitkeep create mode 100644 watch/migrations/00000000000000_diesel_initial_setup/down.sql create mode 100644 watch/migrations/00000000000000_diesel_initial_setup/up.sql create mode 100644 watch/migrations/2022-01-01-000000_canonical_slots/down.sql create mode 100644 watch/migrations/2022-01-01-000000_canonical_slots/up.sql create mode 100644 watch/migrations/2022-01-01-000001_beacon_blocks/down.sql create mode 100644 watch/migrations/2022-01-01-000001_beacon_blocks/up.sql create mode 100644 watch/migrations/2022-01-01-000002_validators/down.sql create mode 100644 watch/migrations/2022-01-01-000002_validators/up.sql create mode 100644 watch/migrations/2022-01-01-000003_proposer_info/down.sql create mode 100644 watch/migrations/2022-01-01-000003_proposer_info/up.sql create mode 100644 watch/migrations/2022-01-01-000004_active_config/down.sql create mode 100644 watch/migrations/2022-01-01-000004_active_config/up.sql create mode 100644 watch/migrations/2022-01-01-000010_blockprint/down.sql create mode 100644 watch/migrations/2022-01-01-000010_blockprint/up.sql create mode 100644 watch/migrations/2022-01-01-000011_block_rewards/down.sql create mode 100644 watch/migrations/2022-01-01-000011_block_rewards/up.sql create mode 100644 watch/migrations/2022-01-01-000012_block_packing/down.sql create mode 100644 watch/migrations/2022-01-01-000012_block_packing/up.sql create mode 100644 
watch/migrations/2022-01-01-000013_suboptimal_attestations/down.sql create mode 100644 watch/migrations/2022-01-01-000013_suboptimal_attestations/up.sql create mode 100644 watch/migrations/2022-01-01-000020_capella/down.sql create mode 100644 watch/migrations/2022-01-01-000020_capella/up.sql create mode 100644 watch/postgres_docker_compose/compose.yml create mode 100644 watch/src/block_packing/database.rs create mode 100644 watch/src/block_packing/mod.rs create mode 100644 watch/src/block_packing/server.rs create mode 100644 watch/src/block_packing/updater.rs create mode 100644 watch/src/block_rewards/database.rs create mode 100644 watch/src/block_rewards/mod.rs create mode 100644 watch/src/block_rewards/server.rs create mode 100644 watch/src/block_rewards/updater.rs create mode 100644 watch/src/blockprint/config.rs create mode 100644 watch/src/blockprint/database.rs create mode 100644 watch/src/blockprint/mod.rs create mode 100644 watch/src/blockprint/server.rs create mode 100644 watch/src/blockprint/updater.rs create mode 100644 watch/src/cli.rs create mode 100644 watch/src/client.rs create mode 100644 watch/src/config.rs create mode 100644 watch/src/database/compat.rs create mode 100644 watch/src/database/config.rs create mode 100644 watch/src/database/error.rs create mode 100644 watch/src/database/mod.rs create mode 100644 watch/src/database/models.rs create mode 100644 watch/src/database/schema.rs create mode 100644 watch/src/database/utils.rs create mode 100644 watch/src/database/watch_types.rs create mode 100644 watch/src/lib.rs create mode 100644 watch/src/logger.rs create mode 100644 watch/src/main.rs create mode 100644 watch/src/server/config.rs create mode 100644 watch/src/server/error.rs create mode 100644 watch/src/server/handler.rs create mode 100644 watch/src/server/mod.rs create mode 100644 watch/src/suboptimal_attestations/database.rs create mode 100644 watch/src/suboptimal_attestations/mod.rs create mode 100644 
watch/src/suboptimal_attestations/server.rs create mode 100644 watch/src/suboptimal_attestations/updater.rs create mode 100644 watch/src/updater/config.rs create mode 100644 watch/src/updater/error.rs create mode 100644 watch/src/updater/handler.rs create mode 100644 watch/src/updater/mod.rs create mode 100644 watch/tests/tests.rs diff --git a/.cargo/config.toml b/.cargo/config.toml new file mode 100644 index 00000000000..dac01630032 --- /dev/null +++ b/.cargo/config.toml @@ -0,0 +1,4 @@ +[env] +# Set the number of arenas to 16 when using jemalloc. +JEMALLOC_SYS_WITH_MALLOC_CONF = "abort_conf:true,narenas:16" + diff --git a/.github/workflows/docker.yml b/.github/workflows/docker.yml index 13b84116955..f2ccaf438ac 100644 --- a/.github/workflows/docker.yml +++ b/.github/workflows/docker.yml @@ -5,6 +5,7 @@ on: branches: - unstable - stable + - capella tags: - v* @@ -34,6 +35,11 @@ jobs: run: | echo "VERSION=latest" >> $GITHUB_ENV echo "VERSION_SUFFIX=-unstable" >> $GITHUB_ENV + - name: Extract version (if capella) + if: github.event.ref == 'refs/heads/capella' + run: | + echo "VERSION=capella" >> $GITHUB_ENV + echo "VERSION_SUFFIX=" >> $GITHUB_ENV - name: Extract version (if tagged release) if: startsWith(github.event.ref, 'refs/tags') run: | @@ -43,7 +49,7 @@ jobs: VERSION: ${{ env.VERSION }} VERSION_SUFFIX: ${{ env.VERSION_SUFFIX }} build-docker-single-arch: - name: build-docker-${{ matrix.binary }} + name: build-docker-${{ matrix.binary }}${{ matrix.features.version_suffix }} runs-on: ubuntu-22.04 strategy: matrix: @@ -51,6 +57,10 @@ jobs: aarch64-portable, x86_64, x86_64-portable] + features: [ + {version_suffix: "", env: "gnosis,slasher-lmdb,slasher-mdbx,jemalloc"}, + {version_suffix: "-dev", env: "jemalloc,spec-minimal"} + ] include: - profile: maxperf @@ -60,6 +70,7 @@ jobs: DOCKER_CLI_EXPERIMENTAL: enabled VERSION: ${{ needs.extract-version.outputs.VERSION }} VERSION_SUFFIX: ${{ needs.extract-version.outputs.VERSION_SUFFIX }} + FEATURE_SUFFIX: ${{ 
matrix.features.version_suffix }} steps: - uses: actions/checkout@v3 - name: Update Rust @@ -70,7 +81,7 @@ jobs: - name: Cross build Lighthouse binary run: | cargo install cross - env CROSS_PROFILE=${{ matrix.profile }} make build-${{ matrix.binary }} + env CROSS_PROFILE=${{ matrix.profile }} CROSS_FEATURES=${{ matrix.features.env }} make build-${{ matrix.binary }} - name: Move cross-built binary into Docker scope (if ARM) if: startsWith(matrix.binary, 'aarch64') run: | @@ -98,7 +109,8 @@ jobs: docker buildx build \ --platform=linux/${SHORT_ARCH} \ --file ./Dockerfile.cross . \ - --tag ${IMAGE_NAME}:${VERSION}-${SHORT_ARCH}${VERSION_SUFFIX}${MODERNITY_SUFFIX} \ + --tag ${IMAGE_NAME}:${VERSION}-${SHORT_ARCH}${VERSION_SUFFIX}${MODERNITY_SUFFIX}${FEATURE_SUFFIX} \ + --provenance=false \ --push build-docker-multiarch: name: build-docker-multiarch${{ matrix.modernity }} diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 8ca6ab0f923..2e63b4d6c24 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -134,11 +134,17 @@ jobs: - name: Build Lighthouse for Windows portable if: matrix.arch == 'x86_64-windows-portable' - run: cargo install --path lighthouse --force --locked --features portable,gnosis --profile ${{ matrix.profile }} + # NOTE: profile set to release until this rustc issue is fixed: + # + # https://github.com/rust-lang/rust/issues/107781 + # + # tracked at: https://github.com/sigp/lighthouse/issues/3964 + run: cargo install --path lighthouse --force --locked --features portable,gnosis --profile release - name: Build Lighthouse for Windows modern if: matrix.arch == 'x86_64-windows' - run: cargo install --path lighthouse --force --locked --features modern,gnosis --profile ${{ matrix.profile }} + # NOTE: profile set to release (see above) + run: cargo install --path lighthouse --force --locked --features modern,gnosis --profile release - name: Configure GPG and create artifacts if: startsWith(matrix.arch, 
'x86_64-windows') != true diff --git a/.github/workflows/test-suite.yml b/.github/workflows/test-suite.yml index 8d52f7fa7e2..b7321df7848 100644 --- a/.github/workflows/test-suite.yml +++ b/.github/workflows/test-suite.yml @@ -10,9 +10,10 @@ on: pull_request: env: # Deny warnings in CI - RUSTFLAGS: "-D warnings" + # Disable debug info (see https://github.com/sigp/lighthouse/issues/4005) + RUSTFLAGS: "-D warnings -C debuginfo=0" # The Nightly version used for cargo-udeps, might need updating from time to time. - PINNED_NIGHTLY: nightly-2022-12-15 + PINNED_NIGHTLY: nightly-2023-04-16 # Prevent Github API rate limiting. LIGHTHOUSE_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} jobs: @@ -280,7 +281,7 @@ jobs: - uses: actions/checkout@v3 - uses: actions/setup-go@v3 with: - go-version: '1.17' + go-version: '1.20' - uses: actions/setup-dotnet@v3 with: dotnet-version: '6.0.201' @@ -306,16 +307,6 @@ jobs: repo-token: ${{ secrets.GITHUB_TOKEN }} - name: Typecheck benchmark code without running it run: make check-benches - check-consensus: - name: check-consensus - runs-on: ubuntu-latest - needs: cargo-fmt - steps: - - uses: actions/checkout@v3 - - name: Get latest version of stable Rust - run: rustup update stable - - name: Typecheck consensus code in strict mode - run: make check-consensus clippy: name: clippy runs-on: ubuntu-latest @@ -382,14 +373,12 @@ jobs: - uses: actions/checkout@v3 - name: Install Rust (${{ env.PINNED_NIGHTLY }}) run: rustup toolchain install $PINNED_NIGHTLY - # NOTE: cargo-udeps version is pinned until this issue is resolved: - # https://github.com/est31/cargo-udeps/issues/135 - name: Install Protoc uses: arduino/setup-protoc@e52d9eb8f7b63115df1ac544a1376fdbf5a39612 with: repo-token: ${{ secrets.GITHUB_TOKEN }} - name: Install cargo-udeps - run: cargo install cargo-udeps --locked --force --version 0.1.30 + run: cargo install cargo-udeps --locked --force - name: Create Cargo config dir run: mkdir -p .cargo - name: Install custom Cargo config diff --git 
a/.gitignore b/.gitignore index ae9f83c46dd..1b7e5dbb88b 100644 --- a/.gitignore +++ b/.gitignore @@ -12,3 +12,4 @@ genesis.ssz # IntelliJ /*.iml +.idea \ No newline at end of file diff --git a/Cargo.lock b/Cargo.lock index f1daf4dbdfb..a0f9fc7491f 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -88,6 +88,16 @@ dependencies = [ "rand_core 0.6.4", ] +[[package]] +name = "aead" +version = "0.5.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d122413f284cf2d62fb1b7db97e02edb8cda96d769b16e443a4f6195e35662b0" +dependencies = [ + "crypto-common", + "generic-array", +] + [[package]] name = "aes" version = "0.6.0" @@ -113,17 +123,14 @@ dependencies = [ ] [[package]] -name = "aes-gcm" -version = "0.8.0" +name = "aes" +version = "0.8.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5278b5fabbb9bd46e24aa69b2fdea62c99088e0a950a9be40e3e0101298f88da" +checksum = "433cfd6710c9986c576a25ca913c39d66a6474107b406f34f91d4a8923395241" dependencies = [ - "aead 0.3.2", - "aes 0.6.0", - "cipher 0.2.5", - "ctr 0.6.0", - "ghash 0.3.1", - "subtle", + "cfg-if", + "cipher 0.4.4", + "cpufeatures", ] [[package]] @@ -140,6 +147,20 @@ dependencies = [ "subtle", ] +[[package]] +name = "aes-gcm" +version = "0.10.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "82e1366e0c69c9f927b1fa5ce2c7bf9eafc8f9268c0b9800729e8b267612447c" +dependencies = [ + "aead 0.5.2", + "aes 0.8.2", + "cipher 0.4.4", + "ctr 0.9.2", + "ghash 0.5.0", + "subtle", +] + [[package]] name = "aes-soft" version = "0.6.4" @@ -205,15 +226,14 @@ dependencies = [ [[package]] name = "anyhow" -version = "1.0.68" +version = "1.0.70" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2cb2f989d18dd141ab8ae82f64d1a8cdd37e0840f73a406896cf5e99502fab61" +checksum = "7de8ce5e0f9f8d88245311066a578d72b7af3e7088f32783804676302df237e4" [[package]] name = "arbitrary" -version = "1.2.2" -source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "b0224938f92e7aef515fac2ff2d18bd1115c1394ddf4a092e0c87e8be9499ee5" +version = "1.3.0" +source = "git+https://github.com/michaelsproul/arbitrary?rev=f002b99989b561ddce62e4cf2887b0f8860ae991#f002b99989b561ddce62e4cf2887b0f8860ae991" dependencies = [ "derive_arbitrary", ] @@ -226,9 +246,9 @@ checksum = "bddcadddf5e9015d310179a59bb28c4d4b9920ad0f11e8e14dbadf654890c9a6" [[package]] name = "arrayref" -version = "0.3.6" +version = "0.3.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a4c527152e37cf757a3f78aae5a06fbeefdb07ccc535c980a3208ee3060dd544" +checksum = "6b4930d2cb77ce62f89ee5d5289b4ac049559b1c45539271f5ed4fdc7db34545" [[package]] name = "arrayvec" @@ -245,27 +265,27 @@ dependencies = [ "asn1-rs-derive 0.1.0", "asn1-rs-impl", "displaydoc", - "nom 7.1.2", + "nom 7.1.3", "num-traits", "rusticata-macros", "thiserror", - "time 0.3.17", + "time 0.3.20", ] [[package]] name = "asn1-rs" -version = "0.5.1" +version = "0.5.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cf6690c370453db30743b373a60ba498fc0d6d83b11f4abfd87a84a075db5dd4" +checksum = "7f6fd5ddaf0351dff5b8da21b2fb4ff8e08ddd02857f0bf69c47639106c0fff0" dependencies = [ "asn1-rs-derive 0.4.0", "asn1-rs-impl", "displaydoc", - "nom 7.1.2", + "nom 7.1.3", "num-traits", "rusticata-macros", "thiserror", - "time 0.3.17", + "time 0.3.20", ] [[package]] @@ -276,7 +296,7 @@ checksum = "db8b7511298d5b7784b40b092d9e9dcd3a627a5707e4b5e507931ab0d44eeebf" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 1.0.109", "synstructure", ] @@ -288,7 +308,7 @@ checksum = "726535892e8eae7e70657b4c8ea93d26b8553afb1ce617caee529ef96d7dee6c" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 1.0.109", "synstructure", ] @@ -300,7 +320,7 @@ checksum = "2777730b2039ac0f95f093556e61b6d26cebed5393ca6f152717777cec3a42ed" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ 
-311,64 +331,64 @@ checksum = "e22d1f4b888c298a027c99dc9048015fac177587de20fc30232a057dfbe24a21" [[package]] name = "async-io" -version = "1.12.0" +version = "1.13.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8c374dda1ed3e7d8f0d9ba58715f924862c63eae6849c92d3a18e7fbde9e2794" +checksum = "0fc5b45d93ef0529756f812ca52e44c221b35341892d3dcc34132ac02f3dd2af" dependencies = [ "async-lock", "autocfg 1.1.0", + "cfg-if", "concurrent-queue", "futures-lite", - "libc", "log", "parking", "polling", + "rustix", "slab", - "socket2", + "socket2 0.4.9", "waker-fn", - "windows-sys", ] [[package]] name = "async-lock" -version = "2.6.0" +version = "2.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c8101efe8695a6c17e02911402145357e718ac92d3ff88ae8419e84b1707b685" +checksum = "fa24f727524730b077666307f2734b4a1a1c57acb79193127dcc8914d5242dd7" dependencies = [ "event-listener", - "futures-lite", ] [[package]] name = "async-stream" -version = "0.3.3" +version = "0.3.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dad5c83079eae9969be7fadefe640a1c566901f05ff91ab221de4b6f68d9507e" +checksum = "ad445822218ce64be7a341abfb0b1ea43b5c23aa83902542a4542e78309d8e5e" dependencies = [ "async-stream-impl", "futures-core", + "pin-project-lite 0.2.9", ] [[package]] name = "async-stream-impl" -version = "0.3.3" +version = "0.3.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "10f203db73a71dfa2fb6dd22763990fa26f3d2625a6da2da900d23b87d26be27" +checksum = "e4655ae1a7b0cdf149156f780c5bf3f1352bc53cbd9e0a361a7ef7b22947e965" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] name = "async-trait" -version = "0.1.61" +version = "0.1.68" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "705339e0e4a9690e2908d2b3d049d85682cf19fbd5782494498fbf7003a6a282" +checksum = "b9ccdd8f2a161be9bd5c023df56f1b2a0bd1d83872ae53b71a84a12c9bf6e842" 
dependencies = [ "proc-macro2", "quote", - "syn", + "syn 2.0.13", ] [[package]] @@ -397,9 +417,9 @@ dependencies = [ [[package]] name = "atomic-waker" -version = "1.0.0" +version = "1.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "065374052e7df7ee4047b1160cca5e1467a12351a40b3da123c870ba0b8eda2a" +checksum = "debc29dde2e69f9e47506b525f639ed42300fc014a3e007832592448fa8e4599" [[package]] name = "attohttpc" @@ -432,7 +452,7 @@ dependencies = [ "proc-macro-error", "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -464,7 +484,7 @@ dependencies = [ "http", "http-body", "hyper", - "itoa 1.0.5", + "itoa", "matchit", "memchr", "mime", @@ -530,16 +550,22 @@ version = "0.13.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9e1b586273c5702936fe7b7d6896644d8be71e6314cfe09d3167c95f712589e8" +[[package]] +name = "base64" +version = "0.21.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a4a4ddaa51a5bc52a6948f74c06d20aaaddb71924eab79b8c97a8c556e942d6a" + [[package]] name = "base64ct" -version = "1.5.3" +version = "1.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b645a089122eccb6111b4f81cbc1a49f5900ac4666bb93ac027feaecf15607bf" +checksum = "8c3c1a368f70d6cf7302d78f8f7093da241fb8e8807c05cc9e51a125895a6d5b" [[package]] name = "beacon-api-client" version = "0.1.0" -source = "git+https://github.com/ralexstokes/beacon-api-client?rev=7d5d8dad1648f771573f42585ad8080a45b05689#7d5d8dad1648f771573f42585ad8080a45b05689" +source = "git+https://github.com/ralexstokes/beacon-api-client#30679e9e25d61731cde54e14cd8a3688a39d8e5b" dependencies = [ "ethereum-consensus", "http", @@ -601,10 +627,11 @@ dependencies = [ "state_processing", "store", "strum", - "superstruct", + "superstruct 0.5.0", "task_executor", "tempfile", "tokio", + "tokio-stream", "tree_hash", "types", "unused_port", @@ -612,7 +639,7 @@ dependencies = [ [[package]] name = "beacon_node" 
-version = "3.4.0" +version = "4.1.0" dependencies = [ "beacon_chain", "clap", @@ -723,9 +750,9 @@ dependencies = [ [[package]] name = "block-buffer" -version = "0.10.3" +version = "0.10.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "69cce20737498f97b993470a6e536b8523f0af7892a4f928cceb1ac5e52ebe7e" +checksum = "3078c7629b62d3f0439517fa394996acacc5cbc91c5a20d8c658e77abd503a71" dependencies = [ "generic-array", ] @@ -778,9 +805,20 @@ dependencies = [ "zeroize", ] +[[package]] +name = "bollard-stubs" +version = "1.41.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ed2f2e73fffe9455141e170fb9c1feb0ac521ec7e7dcd47a7cab72a658490fb8" +dependencies = [ + "chrono", + "serde", + "serde_with", +] + [[package]] name = "boot_node" -version = "3.4.0" +version = "4.1.0" dependencies = [ "beacon_node", "clap", @@ -810,18 +848,6 @@ version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "771fe0050b883fcc3ea2359b1a96bcfbc090b7116eae7c3c512c7a083fdf23d3" -[[package]] -name = "bstr" -version = "0.2.17" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ba3569f383e8f1598449f1a423e72e99569137b47740b1da11ef19af3d5c3223" -dependencies = [ - "lazy_static", - "memchr", - "regex-automata", - "serde", -] - [[package]] name = "buf_redux" version = "0.8.4" @@ -837,6 +863,7 @@ name = "builder_client" version = "0.1.0" dependencies = [ "eth2", + "lighthouse_version", "reqwest", "sensitive_url", "serde", @@ -845,9 +872,9 @@ dependencies = [ [[package]] name = "bumpalo" -version = "3.11.1" +version = "3.12.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "572f695136211188308f16ad2ca5c851a712c464060ae6974944458eb83880ba" +checksum = "0d261e256854913907f67ed06efbc3338dfe6179796deefc1ff763fc1aee5535" [[package]] name = "byte-slice-cast" @@ -863,9 +890,9 @@ checksum = "14c189c53d098945499cdfa7ecc63567cf3886b3332b312a5b4585d8d3a6a610" 
[[package]] name = "bytes" -version = "1.3.0" +version = "1.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dfb24e866b15a1af2a1b663f10c6b6b8f397a84aadb828f12e5b289ec23a3a3c" +checksum = "89b2fd2a0dcf38d7971e2194b6b6eebab45ae01067456a7fd93d5547a61b70be" dependencies = [ "serde", ] @@ -914,9 +941,9 @@ checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5" [[package]] name = "cc" -version = "1.0.78" +version = "1.0.79" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a20104e2335ce8a659d6dd92a51a767a0c062599c73b343fd152cb401e828c3d" +checksum = "50d30906286121d95be3d479533b458f87493b30a4b5f79a607db8f5d11aa91f" [[package]] name = "ccm" @@ -935,7 +962,7 @@ version = "0.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6fac387a98bb7c37292057cffc56d62ecb629900026402633ae9160df93a8766" dependencies = [ - "nom 7.1.2", + "nom 7.1.3", ] [[package]] @@ -971,14 +998,15 @@ dependencies = [ [[package]] name = "chrono" -version = "0.4.23" +version = "0.4.24" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "16b0a3d9ed01224b22057780a37bb8c5dbfe1be8ba48678e7bf57ec4b385411f" +checksum = "4e3c5919066adf22df73762e50cffcde3a758f2a848b113b586d1f86728b673b" dependencies = [ "iana-time-zone", "js-sys", "num-integer", "num-traits", + "serde", "time 0.1.45", "wasm-bindgen", "winapi", @@ -1002,11 +1030,21 @@ dependencies = [ "generic-array", ] +[[package]] +name = "cipher" +version = "0.4.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "773f3b9af64447d2ce9850330c473515014aa235e6a783b02db81ff39e4a3dad" +dependencies = [ + "crypto-common", + "inout", +] + [[package]] name = "clang-sys" -version = "1.4.0" +version = "1.6.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fa2e27ae6ab525c3d369ded447057bca5438d86dc3a68f6faafb8269ba82ebf3" +checksum = 
"c688fc74432808e3eb684cae8830a86be1d66a2bd58e1f248ed0960a590baf6f" dependencies = [ "glob", "libc", @@ -1063,8 +1101,10 @@ dependencies = [ "lazy_static", "lighthouse_metrics", "lighthouse_network", + "logging", "monitoring_api", "network", + "operation_pool", "parking_lot 0.12.1", "sensitive_url", "serde", @@ -1074,9 +1114,10 @@ dependencies = [ "slasher_service", "slog", "slot_clock", + "state_processing", "store", "task_executor", - "time 0.3.17", + "time 0.3.20", "timer", "tokio", "types", @@ -1084,9 +1125,9 @@ dependencies = [ [[package]] name = "cmake" -version = "0.1.49" +version = "0.1.50" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "db34956e100b30725f2eb215f90d4871051239535632f84fea3bc92722c66b7c" +checksum = "a31c789563b815f77f4250caee12365734369f942439b7defd71e18a48197130" dependencies = [ "cc", ] @@ -1113,14 +1154,14 @@ name = "compare_fields_derive" version = "0.2.0" dependencies = [ "quote", - "syn", + "syn 1.0.109", ] [[package]] name = "concurrent-queue" -version = "2.0.0" +version = "2.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bd7bef69dc86e3c610e4e7aed41035e2a7ed12e72dd7530f61327a6579a4390b" +checksum = "c278839b831783b70278b14df4d45e1beb1aad306c07bb796637de9a0e323e8e" dependencies = [ "crossbeam-utils", ] @@ -1137,9 +1178,9 @@ dependencies = [ [[package]] name = "const-oid" -version = "0.9.1" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cec318a675afcb6a1ea1d4340e2d377e56e47c266f28043ceccbf4412ddfdd3b" +checksum = "520fbf3c07483f94e3e3ca9d0cfd913d7718ef2483d2cfd91c0d9e91474ab913" [[package]] name = "convert_case" @@ -1159,9 +1200,9 @@ dependencies = [ [[package]] name = "core-foundation-sys" -version = "0.8.3" +version = "0.8.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5827cebf4670468b8772dd191856768aedcb1b0278a04f989f7766351917b9dc" +checksum = 
"e496a50fda8aacccc86d7529e2c1e0892dbd0f898a6b5645b5561b89c3210efa" [[package]] name = "core2" @@ -1174,33 +1215,27 @@ dependencies = [ [[package]] name = "cpufeatures" -version = "0.2.5" +version = "0.2.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "28d997bd5e24a5928dd43e46dc529867e207907fe0b239c3477d924f7f2ca320" +checksum = "280a9f2d8b3a38871a3c8a46fb80db65e5e5ed97da80c4d08bf27fb63e35e181" dependencies = [ "libc", ] -[[package]] -name = "cpuid-bool" -version = "0.2.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dcb25d077389e53838a8158c8e99174c5a9d902dee4904320db714f3c653ffba" - [[package]] name = "crc" -version = "3.0.0" +version = "3.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "53757d12b596c16c78b83458d732a5d1a17ab3f53f2f7412f6fb57cc8a140ab3" +checksum = "86ec7a15cbe22e59248fc7eadb1907dab5ba09372595da4d73dd805ed4417dfe" dependencies = [ "crc-catalog", ] [[package]] name = "crc-catalog" -version = "2.1.0" +version = "2.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2d0165d2900ae6778e36e80bbc4da3b5eefccee9ba939761f9c2882a5d9af3ff" +checksum = "9cace84e55f07e7301bae1c519df89cdad8cc3cd868413d3fdbdeca9ff3db484" [[package]] name = "crc32fast" @@ -1249,9 +1284,9 @@ dependencies = [ [[package]] name = "crossbeam-channel" -version = "0.5.6" +version = "0.5.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c2dd04ddaf88237dc3b8d8f9a3c1004b506b54b3313403944054d23c0870c521" +checksum = "cf2b3e8478797446514c91ef04bafcb59faba183e621ad488df88983cc14128c" dependencies = [ "cfg-if", "crossbeam-utils", @@ -1259,9 +1294,9 @@ dependencies = [ [[package]] name = "crossbeam-deque" -version = "0.8.2" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "715e8152b692bba2d374b53d4875445368fdf21a94751410af607a5ac677d1fc" +checksum = 
"ce6fd6f855243022dcecf8702fef0c297d4338e226845fe067f6341ad9fa0cef" dependencies = [ "cfg-if", "crossbeam-epoch", @@ -1270,22 +1305,22 @@ dependencies = [ [[package]] name = "crossbeam-epoch" -version = "0.9.13" +version = "0.9.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "01a9af1f4c2ef74bb8aa1f7e19706bc72d03598c8a570bb5de72243c7a9d9d5a" +checksum = "46bd5f3f85273295a9d14aedfb86f6aadbff6d8f5295c4a9edb08e819dcf5695" dependencies = [ "autocfg 1.1.0", "cfg-if", "crossbeam-utils", - "memoffset 0.7.1", + "memoffset 0.8.0", "scopeguard", ] [[package]] name = "crossbeam-utils" -version = "0.8.14" +version = "0.8.15" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4fb766fa798726286dbbb842f174001dab8abc7b627a1dd86e0b7222a95d929f" +checksum = "3c063cd8cc95f5c377ed0d4b49a4b21f632396ff690e8470c29b3359b346984b" dependencies = [ "cfg-if", ] @@ -1315,6 +1350,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1bfb12502f3fc46cca1bb51ac28df9d618d813cdc3d2f25b9fe775a34af26bb3" dependencies = [ "generic-array", + "rand_core 0.6.4", "typenum", ] @@ -1328,16 +1364,6 @@ dependencies = [ "subtle", ] -[[package]] -name = "crypto-mac" -version = "0.10.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bff07008ec701e8028e2ceb8f83f0e4274ee62bd2dbdc4fefff2e9a91824081a" -dependencies = [ - "generic-array", - "subtle", -] - [[package]] name = "crypto-mac" version = "0.11.1" @@ -1350,13 +1376,12 @@ dependencies = [ [[package]] name = "csv" -version = "1.1.6" +version = "1.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "22813a6dc45b335f9bade10bf7271dc477e81113e89eb251a0bc2a8a81c536e1" +checksum = "0b015497079b9a9d69c02ad25de6c0a6edef051ea6360a327d0bd05802ef64ad" dependencies = [ - "bstr", "csv-core", - "itoa 0.4.8", + "itoa", "ryu", "serde", ] @@ -1372,30 +1397,30 @@ dependencies = [ [[package]] name = "ctr" -version = "0.6.0" +version = 
"0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fb4a30d54f7443bf3d6191dcd486aca19e67cb3c49fa7a06a319966346707e7f" +checksum = "049bb91fb4aaf0e3c7efa6cd5ef877dbbbd15b39dad06d9948de4ec8a75761ea" dependencies = [ - "cipher 0.2.5", + "cipher 0.3.0", ] [[package]] name = "ctr" -version = "0.8.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "049bb91fb4aaf0e3c7efa6cd5ef877dbbbd15b39dad06d9948de4ec8a75761ea" +checksum = "0369ee1ad671834580515889b80f2ea915f23b8be8d0daa4bbaf2ac5c7590835" dependencies = [ - "cipher 0.3.0", + "cipher 0.4.4", ] [[package]] name = "ctrlc" -version = "3.2.4" +version = "3.2.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1631ca6e3c59112501a9d87fd86f21591ff77acd31331e8a73f8d80a65bbdd71" +checksum = "bbcf33c2a618cbe41ee43ae6e9f2e48368cd9f9db2896f10167d8d762679f639" dependencies = [ - "nix 0.26.1", - "windows-sys", + "nix 0.26.2", + "windows-sys 0.45.0", ] [[package]] @@ -1413,9 +1438,9 @@ dependencies = [ [[package]] name = "curve25519-dalek" -version = "4.0.0-pre.5" +version = "4.0.0-rc.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "67bc65846be335cb20f4e52d49a437b773a2c1fdb42b19fc84e79e6f6771536f" +checksum = "03d928d978dbec61a1167414f5ec534f24bea0d7a0d24dd9b6233d3d8223e585" dependencies = [ "cfg-if", "fiat-crypto", @@ -1427,9 +1452,9 @@ dependencies = [ [[package]] name = "cxx" -version = "1.0.86" +version = "1.0.94" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "51d1075c37807dcf850c379432f0df05ba52cc30f279c5cfc43cc221ce7f8579" +checksum = "f61f1b6389c3fe1c316bf8a4dccc90a38208354b330925bce1f74a6c4756eb93" dependencies = [ "cc", "cxxbridge-flags", @@ -1439,9 +1464,9 @@ dependencies = [ [[package]] name = "cxx-build" -version = "1.0.86" +version = "1.0.94" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"5044281f61b27bc598f2f6647d480aed48d2bf52d6eb0b627d84c0361b17aa70" +checksum = "12cee708e8962df2aeb38f594aae5d827c022b6460ac71a7a3e2c3c2aae5a07b" dependencies = [ "cc", "codespan-reporting", @@ -1449,24 +1474,24 @@ dependencies = [ "proc-macro2", "quote", "scratch", - "syn", + "syn 2.0.13", ] [[package]] name = "cxxbridge-flags" -version = "1.0.86" +version = "1.0.94" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "61b50bc93ba22c27b0d31128d2d130a0a6b3d267ae27ef7e4fae2167dfe8781c" +checksum = "7944172ae7e4068c533afbb984114a56c46e9ccddda550499caa222902c7f7bb" [[package]] name = "cxxbridge-macro" -version = "1.0.86" +version = "1.0.94" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "39e61fda7e62115119469c7b3591fd913ecca96fb766cfd3f2e2502ab7bc87a5" +checksum = "2345488264226bf682893e25de0769f3360aac9957980ec49361b083ddaa5bc5" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 2.0.13", ] [[package]] @@ -1481,12 +1506,12 @@ dependencies = [ [[package]] name = "darling" -version = "0.14.2" +version = "0.14.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b0dd3cd20dc6b5a876612a6e5accfe7f3dd883db6d07acfbf14c128f61550dfa" +checksum = "7b750cb3417fd1b327431a470f388520309479ab0bf5e323505daf0290cd3850" dependencies = [ - "darling_core 0.14.2", - "darling_macro 0.14.2", + "darling_core 0.14.4", + "darling_macro 0.14.4", ] [[package]] @@ -1500,21 +1525,21 @@ dependencies = [ "proc-macro2", "quote", "strsim 0.10.0", - "syn", + "syn 1.0.109", ] [[package]] name = "darling_core" -version = "0.14.2" +version = "0.14.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a784d2ccaf7c98501746bf0be29b2022ba41fd62a2e622af997a03e9f972859f" +checksum = "109c1ca6e6b7f82cc233a97004ea8ed7ca123a9af07a8230878fcfda9b158bf0" dependencies = [ "fnv", "ident_case", "proc-macro2", "quote", "strsim 0.10.0", - "syn", + "syn 1.0.109", ] [[package]] @@ -1525,18 +1550,18 @@ checksum 
= "9c972679f83bdf9c42bd905396b6c3588a843a17f0f16dfcfa3e2c5d57441835" dependencies = [ "darling_core 0.13.4", "quote", - "syn", + "syn 1.0.109", ] [[package]] name = "darling_macro" -version = "0.14.2" +version = "0.14.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7618812407e9402654622dd402b0a89dff9ba93badd6540781526117b92aab7e" +checksum = "a4aab4dbc9f7611d8b55048a3a16d2d010c2c8334e46304b40ac1cc14bf3b48e" dependencies = [ - "darling_core 0.14.2", + "darling_core 0.14.4", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -1582,7 +1607,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a5bbed42daaa95e780b60a50546aa345b8413a1e46f9a40a12907d3598f038db" dependencies = [ "data-encoding", - "syn", + "syn 1.0.109", ] [[package]] @@ -1611,12 +1636,12 @@ checksum = "b72465f46d518f6015d9cf07f7f3013a95dd6b9c2747c3d65ae0cce43929d14f" [[package]] name = "delay_map" -version = "0.1.2" +version = "0.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9c4d75d3abfe4830dcbf9bcb1b926954e121669f74dd1ca7aa0183b1755d83f6" +checksum = "e4355c25cbf99edcb6b4a0e906f6bdc6956eda149e84455bea49696429b2f8e8" dependencies = [ "futures", - "tokio-util 0.6.10", + "tokio-util 0.7.7", ] [[package]] @@ -1652,7 +1677,7 @@ checksum = "fe398ac75057914d7d07307bf67dc7f3f574a26783b4fc7805a20ffa9f506e82" dependencies = [ "asn1-rs 0.3.1", "displaydoc", - "nom 7.1.2", + "nom 7.1.3", "num-bigint", "num-traits", "rusticata-macros", @@ -1660,13 +1685,13 @@ dependencies = [ [[package]] name = "der-parser" -version = "8.1.0" +version = "8.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "42d4bc9b0db0a0df9ae64634ac5bdefb7afcb534e182275ca0beadbe486701c1" +checksum = "dbd676fbbab537128ef0278adb5576cf363cff6aa22a7b24effe97347cfab61e" dependencies = [ - "asn1-rs 0.5.1", + "asn1-rs 0.5.2", "displaydoc", - "nom 7.1.2", + "nom 7.1.3", "num-bigint", "num-traits", "rusticata-macros", @@ -1680,18 
+1705,17 @@ checksum = "fcc3dd5e9e9c0b295d6e1e4d811fb6f157d5ffd784b8d202fc62eac8035a770b" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] name = "derive_arbitrary" -version = "1.2.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cf460bbff5f571bfc762da5102729f59f338be7db17a21fade44c5c4f5005350" +version = "1.3.0" +source = "git+https://github.com/michaelsproul/arbitrary?rev=f002b99989b561ddce62e4cf2887b0f8860ae991#f002b99989b561ddce62e4cf2887b0f8860ae991" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -1709,10 +1733,10 @@ version = "0.11.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1f91d4cfa921f1c05904dc3c57b4a32c38aed3340cce209f3a6fd1478babafc4" dependencies = [ - "darling 0.14.2", + "darling 0.14.4", "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -1722,7 +1746,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8f0314b72bed045f3a68671b3c86328386762c93f82d98c65c3cb5e5f573dd68" dependencies = [ "derive_builder_core", - "syn", + "syn 1.0.109", ] [[package]] @@ -1735,7 +1759,44 @@ dependencies = [ "proc-macro2", "quote", "rustc_version 0.4.0", - "syn", + "syn 1.0.109", +] + +[[package]] +name = "diesel" +version = "2.0.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4391a22b19c916e50bec4d6140f29bdda3e3bb187223fe6e3ea0b6e4d1021c04" +dependencies = [ + "bitflags", + "byteorder", + "diesel_derives", + "itoa", + "pq-sys", + "r2d2", +] + +[[package]] +name = "diesel_derives" +version = "2.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0ad74fdcf086be3d4fdd142f67937678fe60ed431c3b2f08599e7687269410c4" +dependencies = [ + "proc-macro-error", + "proc-macro2", + "quote", + "syn 1.0.109", +] + +[[package]] +name = "diesel_migrations" +version = "2.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" 
+checksum = "e9ae22beef5e9d6fab9225ddb073c1c6c1a7a6ded5019d5da11d1e5c5adc34e2" +dependencies = [ + "diesel", + "migrations_internals", + "migrations_macros", ] [[package]] @@ -1753,7 +1814,7 @@ version = "0.10.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8168378f4e5023e7218c89c891c0fd8ecdb5e5e4f18cb78f38cf245dd021e76f" dependencies = [ - "block-buffer 0.10.3", + "block-buffer 0.10.4", "crypto-common", "subtle", ] @@ -1810,18 +1871,18 @@ dependencies = [ [[package]] name = "discv5" -version = "0.1.0" +version = "0.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d767c0e59b3e8d65222d95df723cc2ea1da92bb0f27c563607e6f0bde064f255" +checksum = "b009a99b85b58900df46435307fc5c4c845af7e182582b1fbf869572fa9fce69" dependencies = [ "aes 0.7.5", "aes-gcm 0.9.4", "arrayvec", "delay_map", - "enr", + "enr 0.7.0", "fnv", "futures", - "hashlink", + "hashlink 0.7.0", "hex", "hkdf", "lazy_static", @@ -1832,7 +1893,7 @@ dependencies = [ "rand 0.8.5", "rlp", "smallvec", - "socket2", + "socket2 0.4.9", "tokio", "tokio-stream", "tokio-util 0.6.10", @@ -1850,14 +1911,14 @@ checksum = "3bf95dc3f046b9da4f2d51833c0d3547d8564ef6910f5c1ed130306a75b92886" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] name = "dtoa" -version = "1.0.5" +version = "1.0.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c00704156a7de8df8da0911424e30c2049957b0a714542a44e05fe693dd85313" +checksum = "65d09067bfacaa79114679b279d7f5885b53295b1e2cfb4e79c8e4bd3d633169" [[package]] name = "ecdsa" @@ -1873,9 +1934,9 @@ dependencies = [ [[package]] name = "ed25519" -version = "1.5.2" +version = "1.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1e9c280362032ea4203659fc489832d0204ef09f247a0506f170dafcac08c369" +checksum = "91cff35c70bba8a626e3185d8cd48cc11b5437e1a5bcd15b9b5fa3c64b6dfee7" dependencies = [ "signature", ] @@ -1927,9 +1988,9 @@ dependencies = [ 
[[package]] name = "either" -version = "1.8.0" +version = "1.8.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "90e5c1c8368803113bf0c9584fc495a58b86dc8a29edbf8fe877d21d9507e797" +checksum = "7fcaabb2fef8c910e7f4c7ce9f67a1283a1715879a7c230ca9d6d1ae31f16d91" [[package]] name = "elliptic-curve" @@ -1955,9 +2016,9 @@ dependencies = [ [[package]] name = "encoding_rs" -version = "0.8.31" +version = "0.8.32" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9852635589dc9f9ea1b6fe9f05b50ef208c85c834a562f0c6abb1c475736ec2b" +checksum = "071a31f4ee85403370b58aca746f01041ede6f0da2730960ad001edc2b71b394" dependencies = [ "cfg-if", ] @@ -1968,7 +2029,26 @@ version = "0.6.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "26fa0a0be8915790626d5759eb51fe47435a8eac92c2f212bd2da9aa7f30ea56" dependencies = [ - "base64", + "base64 0.13.1", + "bs58", + "bytes", + "hex", + "k256", + "log", + "rand 0.8.5", + "rlp", + "serde", + "sha3 0.10.6", + "zeroize", +] + +[[package]] +name = "enr" +version = "0.7.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "492a7e5fc2504d5fdce8e124d3e263b244a68b283cac67a69eda0cd43e0aebad" +dependencies = [ + "base64 0.13.1", "bs58", "bytes", "ed25519-dalek", @@ -1991,7 +2071,7 @@ dependencies = [ "heck", "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -2039,6 +2119,27 @@ dependencies = [ "types", ] +[[package]] +name = "errno" +version = "0.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "50d6a0976c999d473fe89ad888d5a284e55366d9dc9038b1ba2aa15128c4afa0" +dependencies = [ + "errno-dragonfly", + "libc", + "windows-sys 0.45.0", +] + +[[package]] +name = "errno-dragonfly" +version = "0.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "aa68f1b12764fab894d2755d2518754e71b4fd80ecfb822714a1206c2aab39bf" +dependencies = [ + "cc", + "libc", +] + [[package]] name = 
"error-chain" version = "0.12.4" @@ -2073,7 +2174,7 @@ dependencies = [ "slog", "sloggers", "state_processing", - "superstruct", + "superstruct 0.5.0", "task_executor", "tokio", "tree_hash", @@ -2145,7 +2246,7 @@ dependencies = [ name = "eth2_interop_keypairs" version = "0.2.0" dependencies = [ - "base64", + "base64 0.13.1", "bls", "eth2_hashing", "hex", @@ -2194,7 +2295,7 @@ dependencies = [ name = "eth2_network_config" version = "0.2.0" dependencies = [ - "enr", + "discv5", "eth2_config", "eth2_ssz", "serde_yaml", @@ -2226,12 +2327,13 @@ dependencies = [ [[package]] name = "eth2_ssz_derive" -version = "0.3.0" +version = "0.3.1" dependencies = [ "darling 0.13.4", + "eth2_ssz", "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -2339,15 +2441,16 @@ dependencies = [ [[package]] name = "ethereum-consensus" version = "0.1.1" -source = "git+https://github.com/ralexstokes/ethereum-consensus?rev=a8110af76d97bf2bf27fb987a671808fcbdf1834#a8110af76d97bf2bf27fb987a671808fcbdf1834" +source = "git+https://github.com/ralexstokes//ethereum-consensus?rev=9b0ee0a8a45b968c8df5e7e64ea1c094e16f053d#9b0ee0a8a45b968c8df5e7e64ea1c094e16f053d" dependencies = [ "async-stream", "blst", "bs58", - "enr", + "enr 0.6.2", "hex", "integer-sqrt", "multiaddr 0.14.0", + "multihash 0.16.3", "rand 0.8.5", "serde", "serde_json", @@ -2422,7 +2525,7 @@ checksum = "a1a9e0597aa6b2fdc810ff58bc95e4eeaa2c219b3e615ed025106ecb027407d8" dependencies = [ "async-trait", "auto_impl", - "base64", + "base64 0.13.1", "ethers-core", "futures-channel", "futures-core", @@ -2470,6 +2573,7 @@ dependencies = [ "fork_choice", "futures", "hex", + "logging", "reqwest", "sensitive_url", "serde_json", @@ -2505,7 +2609,7 @@ dependencies = [ "lazy_static", "lighthouse_metrics", "lru 0.7.8", - "mev-build-rs", + "mev-rs", "parking_lot 0.12.1", "rand 0.8.5", "reqwest", @@ -2517,6 +2621,7 @@ dependencies = [ "ssz-rs", "state_processing", "strum", + "superstruct 0.6.0", "task_executor", "tempfile", "tokio", @@ -2552,9 
+2657,9 @@ checksum = "7360491ce676a36bf9bb3c56c1aa791658183a54d2744120f27285738d90465a" [[package]] name = "fastrand" -version = "1.8.0" +version = "1.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a7a407cfaa3385c4ae6b23e84623d48c2798d06e3e6a1878f7f59f17b3f86499" +checksum = "e51093e27b0797c359783294ca4f0a911c270184cb10f85783b118614a1501be" dependencies = [ "instant", ] @@ -2577,18 +2682,18 @@ checksum = "ec54ac60a7f2ee9a97cad9946f9bf629a3bc6a7ae59e68983dc9318f5a54b81a" [[package]] name = "fiat-crypto" -version = "0.1.17" +version = "0.1.20" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a214f5bb88731d436478f3ae1f8a277b62124089ba9fb67f4f93fb100ef73c90" +checksum = "e825f6987101665dea6ec934c09ec6d721de7bc1bf92248e1d5810c8cd636b77" [[package]] name = "field-offset" -version = "0.3.4" +version = "0.3.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1e1c54951450cbd39f3dbcf1005ac413b49487dabf18a720ad2383eccfeffb92" +checksum = "a3cf3a800ff6e860c863ca6d4b16fd999db8b752819c1606884047b73e468535" dependencies = [ - "memoffset 0.6.5", - "rustc_version 0.3.3", + "memoffset 0.8.0", + "rustc_version 0.4.0", ] [[package]] @@ -2602,7 +2707,8 @@ dependencies = [ [[package]] name = "fixed-hash" version = "0.7.0" -source = "git+https://github.com/paritytech/parity-common?rev=df638ab0885293d21d656dc300d39236b69ce57d#df638ab0885293d21d656dc300d39236b69ce57d" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cfcf0ed7fe52a17a03854ec54a9f76d6d84508d1c0e66bc1793301c73fc8493c" dependencies = [ "byteorder", "rand 0.8.5", @@ -2709,9 +2815,9 @@ checksum = "e6d5a32815ae3f33302d95fdcb2ce17862f8c65363dcfd29360480ba1001fc9c" [[package]] name = "futures" -version = "0.3.25" +version = "0.3.28" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "38390104763dc37a5145a53c29c63c1290b5d316d6086ec32c293f6736051bb0" +checksum = 
 "23342abe12aba583913b2e62f22225ff9c950774065e4bfb61a19cd9770fec40"
 dependencies = [
  "futures-channel",
  "futures-core",
@@ -2724,9 +2830,9 @@ dependencies = [

 [[package]]
 name = "futures-channel"
-version = "0.3.25"
+version = "0.3.28"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "52ba265a92256105f45b719605a571ffe2d1f0fea3807304b522c1d778f79eed"
+checksum = "955518d47e09b25bbebc7a18df10b81f0c766eaf4c4f1cccef2fca5f2a4fb5f2"
 dependencies = [
  "futures-core",
  "futures-sink",
@@ -2734,15 +2840,15 @@ dependencies = [

 [[package]]
 name = "futures-core"
-version = "0.3.25"
+version = "0.3.28"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "04909a7a7e4633ae6c4a9ab280aeb86da1236243a77b694a49eacd659a4bd3ac"
+checksum = "4bca583b7e26f571124fe5b7561d49cb2868d79116cfa0eefce955557c6fee8c"

 [[package]]
 name = "futures-executor"
-version = "0.3.25"
+version = "0.3.28"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7acc85df6714c176ab5edf386123fafe217be88c0840ec11f199441134a074e2"
+checksum = "ccecee823288125bd88b4d7f565c9e58e41858e47ab72e8ea2d64e93624386e0"
 dependencies = [
  "futures-core",
  "futures-task",
@@ -2752,9 +2858,9 @@ dependencies = [

 [[package]]
 name = "futures-io"
-version = "0.3.25"
+version = "0.3.28"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "00f5fb52a06bdcadeb54e8d3671f8888a39697dcb0b81b23b55174030427f4eb"
+checksum = "4fff74096e71ed47f8e023204cfd0aa1289cd54ae5430a9523be060cdb849964"

 [[package]]
 name = "futures-lite"
@@ -2773,13 +2879,13 @@ dependencies = [

 [[package]]
 name = "futures-macro"
-version = "0.3.25"
+version = "0.3.28"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bdfb8ce053d86b91919aad980c220b1fb8401a9394410e1c289ed7e66b61835d"
+checksum = "89ca545a94061b6365f2c7355b4b32bd20df3ff95f02da9329b34ccc3bd6ee72"
 dependencies = [
  "proc-macro2",
  "quote",
- "syn",
+ "syn 2.0.13",
 ]

 [[package]]
@@ -2789,21 +2895,21 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "d2411eed028cdf8c8034eaf21f9915f956b6c3abec4d4c7949ee67f0721127bd"
 dependencies = [
  "futures-io",
- "rustls 0.20.7",
+ "rustls 0.20.8",
  "webpki 0.22.0",
 ]

 [[package]]
 name = "futures-sink"
-version = "0.3.25"
+version = "0.3.28"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "39c15cf1a4aa79df40f1bb462fb39676d0ad9e366c2a33b590d7c66f4f81fcf9"
+checksum = "f43be4fe21a13b9781a69afa4985b0f6ee0e1afab2c6f454a8cf30e2b2237b6e"

 [[package]]
 name = "futures-task"
-version = "0.3.25"
+version = "0.3.28"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2ffb393ac5d9a6eaa9d3fdf37ae2776656b706e200c8e16b1bdb227f5198e6ea"
+checksum = "76d3d132be6c0e6aa1534069c705a74a5997a356c0dc2f86a47765e5617c5b65"

 [[package]]
 name = "futures-timer"
@@ -2813,9 +2919,9 @@ checksum = "e64b03909df88034c26dc1547e8970b91f98bdb65165d6a4e9110d94263dbb2c"

 [[package]]
 name = "futures-util"
-version = "0.3.25"
+version = "0.3.28"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "197676987abd2f9cadff84926f410af1c183608d36641465df73ae8211dc65d6"
+checksum = "26b01e40b772d54cf6c6d721c1d1abd0647a0106a12ecaa1c186273392a69533"
 dependencies = [
  "futures-channel",
  "futures-core",
@@ -2840,9 +2946,9 @@ dependencies = [

 [[package]]
 name = "generic-array"
-version = "0.14.6"
+version = "0.14.7"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bff49e947297f3312447abdca79f45f4738097cc82b06e72054d2223f601f1b9"
+checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a"
 dependencies = [
  "typenum",
  "version_check",
@@ -2897,29 +3003,29 @@ dependencies = [

 [[package]]
 name = "ghash"
-version = "0.3.1"
+version = "0.4.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "97304e4cd182c3846f7575ced3890c53012ce534ad9114046b0a9e00bb30a375"
+checksum = "1583cc1656d7839fd3732b80cf4f38850336cdb9b8ded1cd399ca62958de3c99"
 dependencies = [
  "opaque-debug",
- "polyval 0.4.5",
+ "polyval 0.5.3",
 ]

 [[package]]
 name = "ghash"
-version = "0.4.4"
+version = "0.5.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1583cc1656d7839fd3732b80cf4f38850336cdb9b8ded1cd399ca62958de3c99"
+checksum = "d930750de5717d2dd0b8c0d42c076c0e884c81a73e6cab859bbd2339c71e3e40"
 dependencies = [
  "opaque-debug",
- "polyval 0.5.3",
+ "polyval 0.6.0",
 ]

 [[package]]
 name = "gimli"
-version = "0.27.0"
+version = "0.27.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "dec7af912d60cdbd3677c1af9352ebae6fb8394d165568a2234df0fa00f87793"
+checksum = "ad0a93d233ebf96623465aad4046a8d3aa4da22d4f4beba5388838c8a434bbb4"

 [[package]]
 name = "git-version"
@@ -2940,7 +3046,7 @@ dependencies = [
  "proc-macro-hack",
  "proc-macro2",
  "quote",
- "syn",
+ "syn 1.0.109",
 ]

 [[package]]
@@ -2962,9 +3068,9 @@ dependencies = [

 [[package]]
 name = "h2"
-version = "0.3.15"
+version = "0.3.16"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5f9f29bc9dda355256b2916cf526ab02ce0aeaaaf2bad60d65ef3f12f11dd0f4"
+checksum = "5be7b54589b581f624f566bf5d8eb2bab1db736c51528720b6bd36b96b55924d"
 dependencies = [
  "bytes",
  "fnv",
@@ -2975,7 +3081,7 @@ dependencies = [
  "indexmap",
  "slab",
  "tokio",
- "tokio-util 0.7.4",
+ "tokio-util 0.7.7",
  "tracing",
 ]

@@ -3036,13 +3142,22 @@ dependencies = [
  "hashbrown 0.11.2",
 ]

+[[package]]
+name = "hashlink"
+version = "0.8.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "69fe1fcf8b4278d860ad0548329f892a3631fb63f82574df68275f34cdbe0ffa"
+dependencies = [
+ "hashbrown 0.12.3",
+]
+
 [[package]]
 name = "headers"
 version = "0.3.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "f3e372db8e5c0d213e0cd0b9be18be2aca3d44cf2fe30a9d46a65581cd454584"
 dependencies = [
- "base64",
+ "base64 0.13.1",
  "bitflags",
  "bytes",
  "headers-core",
@@ -3063,9 +3178,9 @@ dependencies = [

 [[package]]
 name = "heck"
-version = "0.4.0"
+version = "0.4.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2540771e65fc8cb83cd6e8a237f70c319bd5c29f78ed1084ba5d50eeac86f7f9"
+checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8"

 [[package]]
 name = "hermit-abi"
@@ -3085,6 +3200,12 @@ dependencies = [
  "libc",
 ]

+[[package]]
+name = "hermit-abi"
+version = "0.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "fed44880c466736ef9a5c5b5facefb5ed0785676d0c02d612db14e54f0d84286"
+
 [[package]]
 name = "hex"
 version = "0.4.3"
@@ -3116,16 +3237,6 @@ dependencies = [
  "digest 0.9.0",
 ]

-[[package]]
-name = "hmac"
-version = "0.10.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c1441c6b1e930e2817404b5046f1f989899143a12bf92de603b69f4e0aee1e15"
-dependencies = [
- "crypto-mac 0.10.1",
- "digest 0.9.0",
-]
-
 [[package]]
 name = "hmac"
 version = "0.11.0"
@@ -3169,13 +3280,13 @@ dependencies = [

 [[package]]
 name = "http"
-version = "0.2.8"
+version = "0.2.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "75f43d41e26995c17e71ee126451dd3941010b0514a81a9d11f3b341debc2399"
+checksum = "bd6effc99afb63425aff9b05836f029929e345a6148a14b7ecd5ab67af944482"
 dependencies = [
  "bytes",
  "fnv",
- "itoa 1.0.5",
+ "itoa",
 ]

 [[package]]
@@ -3205,9 +3316,11 @@ dependencies = [
  "environment",
  "eth1",
  "eth2",
+ "eth2_serde_utils",
  "eth2_ssz",
  "execution_layer",
  "futures",
+ "genesis",
  "hex",
  "lazy_static",
  "lighthouse_metrics",
@@ -3216,6 +3329,7 @@ dependencies = [
  "logging",
  "lru 0.7.8",
  "network",
+ "operation_pool",
  "parking_lot 0.12.1",
  "proto_array",
  "safe_arith",
@@ -3279,9 +3393,9 @@ checksum = "9a3a5bfb195931eeb336b2a7b4d761daec841b97f947d34394601737a7bba5e4"

 [[package]]
 name = "hyper"
-version = "0.14.23"
+version = "0.14.25"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "034711faac9d2166cb1baf1a2fb0b60b1f277f8492fd72176c17f3515e1abd3c"
+checksum = "cc5e554ff619822309ffd57d8734d77cd5ce6238bc956f037ea06c58238c9899"
 dependencies = [
  "bytes",
  "futures-channel",
@@ -3292,9 +3406,9 @@ dependencies = [
  "http-body",
  "httparse",
  "httpdate",
- "itoa 1.0.5",
+ "itoa",
  "pin-project-lite 0.2.9",
- "socket2",
+ "socket2 0.4.9",
  "tokio",
  "tower-service",
  "tracing",
@@ -3309,7 +3423,7 @@ checksum = "1788965e61b367cd03a62950836d5cd41560c3577d90e40e0819373194d1661c"
 dependencies = [
  "http",
  "hyper",
- "rustls 0.20.7",
+ "rustls 0.20.8",
  "tokio",
  "tokio-rustls 0.23.4",
 ]
@@ -3329,16 +3443,16 @@ dependencies = [

 [[package]]
 name = "iana-time-zone"
-version = "0.1.53"
+version = "0.1.54"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "64c122667b287044802d6ce17ee2ddf13207ed924c712de9a66a5814d5b64765"
+checksum = "0c17cc76786e99f8d2f055c11159e7f0091c42474dcc3189fbab96072e873e6d"
 dependencies = [
  "android_system_properties",
  "core-foundation-sys",
  "iana-time-zone-haiku",
  "js-sys",
  "wasm-bindgen",
- "winapi",
+ "windows 0.46.0",
 ]

 [[package]]
@@ -3411,9 +3525,9 @@ dependencies = [

 [[package]]
 name = "if-watch"
-version = "3.0.0"
+version = "3.0.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ba7abdbb86e485125dad06c2691e1e393bf3b08c7b743b43aa162a00fd39062e"
+checksum = "a9465340214b296cd17a0009acdb890d6160010b8adf8f78a00d0d7ab270f79f"
 dependencies = [
  "async-io",
  "core-foundation",
@@ -3425,7 +3539,7 @@ dependencies = [
  "rtnetlink",
  "system-configuration",
  "tokio",
- "windows",
+ "windows 0.34.0",
 ]

 [[package]]
@@ -3456,7 +3570,7 @@ version = "0.6.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "ba6a270039626615617f3f36d15fc827041df3b78c439da2cadfa47455a77f2f"
 dependencies = [
- "parity-scale-codec 3.2.1",
+ "parity-scale-codec 3.4.0",
 ]

 [[package]]
@@ -3494,27 +3608,36 @@ checksum = "11d7a9f6330b71fea57921c9b61c47ee6e84f72d394754eff6163ae67e7395eb"
 dependencies = [
  "proc-macro2",
  "quote",
- "syn",
+ "syn 1.0.109",
 ]

 [[package]]
 name = "indexmap"
-version = "1.9.2"
+version = "1.9.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1885e79c1fc4b10f0e172c475f458b7f7b93061064d98c3293e98c5ba0c8b399"
+checksum = "bd070e393353796e801d209ad339e89596eb4c8d430d18ede6a1cced8fafbd99"
 dependencies = [
  "autocfg 1.1.0",
  "hashbrown 0.12.3",
 ]

 [[package]]
-name = "instant"
-version = "0.1.12"
+name = "inout"
+version = "0.1.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7a5bbe824c507c5da5956355e86a746d82e0e1464f65d862cc5e71da70e94b2c"
+checksum = "a0c10553d664a4d0bcff9f4215d0aac67a639cc68ef660840afe309b807bc9f5"
 dependencies = [
- "cfg-if",
- "js-sys",
+ "generic-array",
+]
+
+[[package]]
+name = "instant"
+version = "0.1.12"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7a5bbe824c507c5da5956355e86a746d82e0e1464f65d862cc5e71da70e94b2c"
+dependencies = [
+ "cfg-if",
+ "js-sys",
  "wasm-bindgen",
  "web-sys",
 ]
@@ -3556,13 +3679,24 @@ dependencies = [
  "webrtc-util",
 ]

+[[package]]
+name = "io-lifetimes"
+version = "1.0.9"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "09270fd4fa1111bc614ed2246c7ef56239a3063d5be0d1ec3b589c505d400aeb"
+dependencies = [
+ "hermit-abi 0.3.1",
+ "libc",
+ "windows-sys 0.45.0",
+]
+
 [[package]]
 name = "ipconfig"
 version = "0.3.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "bd302af1b90f2463a98fa5ad469fc212c8e3175a41c3068601bfa2727591c5be"
 dependencies = [
- "socket2",
+ "socket2 0.4.9",
  "widestring 0.5.1",
  "winapi",
  "winreg",
@@ -3570,9 +3704,9 @@ dependencies = [

 [[package]]
 name = "ipnet"
-version = "2.7.1"
+version = "2.7.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "30e22bd8629359895450b59ea7a776c850561b96a3b1d31321c1949d9e6c9146"
+checksum = "12b6ee2129af8d4fb011108c73d99a1b83a85977f23b82460c0ae2e25bb4b57f"

 [[package]]
 name = "itertools"
@@ -3585,21 +3719,46 @@ dependencies = [

 [[package]]
 name = "itoa"
-version = "0.4.8"
+version = "1.0.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b71991ff56294aa922b450139ee08b3bfc70982c6b2c7562771375cf73542dd4"
+checksum = "453ad9f582a441959e5f0d088b02ce04cfe8d51a8eaf077f12ac6d3e94164ca6"

 [[package]]
-name = "itoa"
-version = "1.0.5"
+name = "jemalloc-ctl"
+version = "0.5.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c1891c671f3db85d8ea8525dd43ab147f9977041911d24a03e5a36187a7bfde9"
+dependencies = [
+ "jemalloc-sys",
+ "libc",
+ "paste",
+]
+
+[[package]]
+name = "jemalloc-sys"
+version = "0.5.3+5.3.0-patched"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f9bd5d616ea7ed58b571b2e209a65759664d7fb021a0819d7a790afc67e47ca1"
+dependencies = [
+ "cc",
+ "libc",
+]
+
+[[package]]
+name = "jemallocator"
+version = "0.5.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "fad582f4b9e86b6caa621cabeb0963332d92eea04729ab12892c2533951e6440"
+checksum = "16c2514137880c52b0b4822b563fadd38257c1f380858addb74a400889696ea6"
+dependencies = [
+ "jemalloc-sys",
+ "libc",
+]

 [[package]]
 name = "js-sys"
-version = "0.3.60"
+version = "0.3.61"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "49409df3e3bf0856b916e2ceaca09ee28e6871cf7d9ce97a692cacfdb2a25a47"
+checksum = "445dde2150c55e483f3d8416706b97ec8e8237c307e5b7b4b8dd15e6af2a0730"
 dependencies = [
  "wasm-bindgen",
 ]
@@ -3621,11 +3780,11 @@ dependencies = [

 [[package]]
 name = "jsonwebtoken"
-version = "8.2.0"
+version = "8.3.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "09f4f04699947111ec1733e71778d763555737579e44b85844cae8e1940a1828"
+checksum = "6971da4d9c3aa03c3d8f3ff0f4155b534aad021292003895a469716b2a230378"
 dependencies = [
- "base64",
+ "base64 0.21.0",
  "pem",
  "ring",
  "serde",
@@ -3682,7 +3841,7 @@ checksum = "830d08ce1d1d941e6b30645f1a0eb5643013d835ce3779a5fc208261dbe10f55"

 [[package]]
 name = "lcli"
-version = "3.4.0"
+version = "4.1.0"
 dependencies = [
  "account_utils",
  "beacon_chain",
@@ -3703,6 +3862,7 @@ dependencies = [
  "lighthouse_network",
  "lighthouse_version",
  "log",
+ "malloc_utils",
  "sensitive_url",
  "serde",
  "serde_json",
@@ -3741,15 +3901,15 @@ dependencies = [

 [[package]]
 name = "libc"
-version = "0.2.139"
+version = "0.2.140"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "201de327520df007757c1f0adce6e827fe8562fbc28bfd9c15571c66ca1f5f79"
+checksum = "99227334921fae1a979cf0bfdfcc6b3e5ce376ef57e16fb6fb3ea2ed6095f80c"

 [[package]]
 name = "libflate"
-version = "1.2.0"
+version = "1.3.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "05605ab2bce11bcfc0e9c635ff29ef8b2ea83f29be257ee7d730cac3ee373093"
+checksum = "97822bf791bd4d5b403713886a5fbe8bf49520fe78e323b0dc480ca1a03e50b0"
 dependencies = [
  "adler32",
  "crc32fast",
@@ -3758,9 +3918,9 @@ dependencies = [

 [[package]]
 name = "libflate_lz77"
-version = "1.1.0"
+version = "1.2.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "39a734c0493409afcd49deee13c006a04e3586b9761a03543c6272c9c51f2f5a"
+checksum = "a52d3a8bfc85f250440e4424db7d857e241a3aebbbe301f3eb606ab15c39acbf"
 dependencies = [
  "rle-decode-fast",
 ]
@@ -3804,9 +3964,9 @@ dependencies = [

 [[package]]
 name = "libp2p"
-version = "0.50.0"
+version = "0.50.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2e0a0d2f693675f49ded13c5d510c48b78069e23cbd9108d7ccd59f6dc568819"
+checksum = "9c7b0104790be871edcf97db9bd2356604984e623a08d825c3f27852290266b8"
 dependencies = [
  "bytes",
  "futures",
@@ -3852,7 +4012,7 @@ dependencies = [
  "libsecp256k1",
  "log",
  "multiaddr 0.14.0",
- "multihash",
+ "multihash 0.16.3",
  "multistream-select 0.11.0",
  "p256",
  "parking_lot 0.12.1",
@@ -3886,7 +4046,7 @@ dependencies = [
  "libsecp256k1",
  "log",
  "multiaddr 0.16.0",
- "multihash",
+ "multihash 0.16.3",
  "multistream-select 0.12.1",
  "once_cell",
  "p256",
@@ -3905,6 +4065,34 @@ dependencies = [
  "zeroize",
 ]

+[[package]]
+name = "libp2p-core"
+version = "0.39.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9b7f8b7d65c070a5a1b5f8f0510648189da08f787b8963f8e21219e0710733af"
+dependencies = [
+ "either",
+ "fnv",
+ "futures",
+ "futures-timer",
+ "instant",
+ "libp2p-identity",
+ "log",
+ "multiaddr 0.17.1",
+ "multihash 0.17.0",
+ "multistream-select 0.12.1",
+ "once_cell",
+ "parking_lot 0.12.1",
+ "pin-project",
+ "quick-protobuf",
+ "rand 0.8.5",
+ "rw-stream-sink",
+ "smallvec",
+ "thiserror",
+ "unsigned-varint 0.7.1",
+ "void",
+]
+
 [[package]]
 name = "libp2p-dns"
 version = "0.38.0"
@@ -3926,7 +4114,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "a173171c71c29bb156f98886c7c4824596de3903dadf01e2e79d2ccdcf38cd9f"
 dependencies = [
  "asynchronous-codec",
- "base64",
+ "base64 0.13.1",
  "byteorder",
  "bytes",
  "fnv",
@@ -3970,6 +4158,24 @@ dependencies = [
  "void",
 ]

+[[package]]
+name = "libp2p-identity"
+version = "0.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8a8ea433ae0cea7e3315354305237b9897afe45278b2118a7a57ca744e70fd27"
+dependencies = [
+ "bs58",
+ "ed25519-dalek",
+ "log",
+ "multiaddr 0.17.1",
+ "multihash 0.17.0",
+ "prost",
+ "quick-protobuf",
+ "rand 0.8.5",
+ "thiserror",
+ "zeroize",
+]
+
 [[package]]
 name = "libp2p-mdns"
 version = "0.42.0"
@@ -3984,7 +4190,7 @@ dependencies = [
  "log",
  "rand 0.8.5",
  "smallvec",
- "socket2",
+ "socket2 0.4.9",
  "tokio",
  "trust-dns-proto",
  "void",
@@ -4077,7 +4283,7 @@ dependencies = [
  "parking_lot 0.12.1",
  "quinn-proto",
  "rand 0.8.5",
- "rustls 0.20.7",
+ "rustls 0.20.8",
  "thiserror",
  "tokio",
 ]
@@ -4112,7 +4318,7 @@ checksum = "9d527d5827582abd44a6d80c07ff8b50b4ee238a8979e05998474179e79dc400"
 dependencies = [
  "heck",
  "quote",
- "syn",
+ "syn 1.0.109",
 ]

 [[package]]
@@ -4127,22 +4333,23 @@ dependencies = [
  "libc",
  "libp2p-core 0.38.0",
  "log",
- "socket2",
+ "socket2 0.4.9",
  "tokio",
 ]

 [[package]]
 name = "libp2p-tls"
-version = "0.1.0-alpha"
+version = "0.1.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f7905ce0d040576634e8a3229a7587cc8beab83f79db6023800f1792895defa8"
+checksum = "ff08d13d0dc66e5e9ba6279c1de417b84fa0d0adc3b03e5732928c180ec02781"
 dependencies = [
  "futures",
  "futures-rustls",
- "libp2p-core 0.38.0",
+ "libp2p-core 0.39.1",
+ "libp2p-identity",
  "rcgen 0.10.0",
  "ring",
- "rustls 0.20.7",
+ "rustls 0.20.8",
  "thiserror",
  "webpki 0.22.0",
  "x509-parser 0.14.0",
@@ -4165,7 +4372,7 @@ dependencies = [
  "libp2p-core 0.38.0",
  "libp2p-noise",
  "log",
- "multihash",
+ "multihash 0.16.3",
  "prost",
  "prost-build",
  "prost-codec",
@@ -4176,7 +4383,7 @@ dependencies = [
  "thiserror",
  "tinytemplate",
  "tokio",
- "tokio-util 0.7.4",
+ "tokio-util 0.7.7",
  "webrtc",
 ]

@@ -4220,7 +4427,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "95b09eff1b35ed3b33b877ced3a691fc7a481919c7e29c53c906226fcf55e2a1"
 dependencies = [
  "arrayref",
- "base64",
+ "base64 0.13.1",
  "digest 0.9.0",
  "hmac-drbg",
  "libsecp256k1-core",
@@ -4263,9 +4470,9 @@ dependencies = [

 [[package]]
 name = "libsqlite3-sys"
-version = "0.22.2"
+version = "0.25.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "290b64917f8b0cb885d9de0f9959fe1f775d7fa12f1da2db9001c1c8ab60f89d"
+checksum = "29f835d03d717946d28b1d1ed632eb6f0e24a299388ee623d0c23118d3e8a7fa"
 dependencies = [
  "cc",
  "pkg-config",
@@ -4285,7 +4492,7 @@ dependencies = [

 [[package]]
 name = "lighthouse"
-version = "3.4.0"
+version = "4.1.0"
 dependencies = [
  "account_manager",
  "account_utils",
@@ -4352,6 +4559,7 @@ dependencies = [
  "lighthouse_metrics",
  "lighthouse_version",
  "lru 0.7.8",
+ "lru_cache",
  "parking_lot 0.12.1",
  "prometheus-client",
  "quickcheck",
@@ -4367,13 +4575,15 @@ dependencies = [
  "smallvec",
  "snap",
  "strum",
- "superstruct",
+ "superstruct 0.5.0",
  "task_executor",
  "tempfile",
  "tiny-keccak",
  "tokio",
  "tokio-io-timeout",
  "tokio-util 0.6.10",
+ "tree_hash",
+ "tree_hash_derive",
  "types",
  "unsigned-varint 0.6.0",
  "unused_port",
@@ -4404,6 +4614,12 @@ version = "0.5.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "0717cef1bc8b636c6e1c1bbdefc09e6322da8a9321966e8928ef80d20f7f770f"

+[[package]]
+name = "linux-raw-sys"
+version = "0.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d59d8c75012853d2e872fb56bc8a2e53718e2cafe1a4c823143141c6d90c322f"
+
 [[package]]
 name = "lmdb-rkv"
 version = "0.14.0"
@@ -4510,6 +4726,8 @@ dependencies = [
 name = "malloc_utils"
 version = "0.1.0"
 dependencies = [
+ "jemalloc-ctl",
+ "jemallocator",
  "lazy_static",
  "libc",
  "lighthouse_metrics",
@@ -4539,9 +4757,9 @@ dependencies = [

 [[package]]
 name = "matches"
-version = "0.1.9"
+version = "0.1.10"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a3e378b66a060d48947b590737b30a1be76706c8dd7b8ba0f2fe3989c68a853f"
+checksum = "2532096657941c2fea9c289d370a250971c689d4f143798ff67113ec042024a5"

 [[package]]
 name = "matchit"
@@ -4586,9 +4804,9 @@ dependencies = [

 [[package]]
 name = "memoffset"
-version = "0.7.1"
+version = "0.8.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5de893c32cde5f383baa4c04c5d6dbdd735cfd4a794b0debdb2bb1b421da5ff4"
+checksum = "d61c719bcfbcf5d62b3a09efa6088de8c54bc0bfcd3ea7ae39fcc186108b8de1"
 dependencies = [
  "autocfg 1.1.0",
 ]
@@ -4625,25 +4843,47 @@ dependencies = [
  "proc-macro2",
  "quote",
  "smallvec",
- "syn",
+ "syn 1.0.109",
 ]

 [[package]]
-name = "mev-build-rs"
+name = "mev-rs"
 version = "0.2.1"
-source = "git+https://github.com/ralexstokes/mev-rs?rev=6c99b0fbdc0427b1625469d2e575303ce08de5b8#6c99b0fbdc0427b1625469d2e575303ce08de5b8"
+source = "git+https://github.com/ralexstokes//mev-rs?rev=7813d4a4a564e0754e9aaab2d95520ba437c3889#7813d4a4a564e0754e9aaab2d95520ba437c3889"
 dependencies = [
  "async-trait",
  "axum",
  "beacon-api-client",
  "ethereum-consensus",
+ "hyper",
  "serde",
- "serde_json",
  "ssz-rs",
  "thiserror",
+ "tokio",
  "tracing",
 ]

+[[package]]
+name = "migrations_internals"
+version = "2.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c493c09323068c01e54c685f7da41a9ccf9219735c3766fbfd6099806ea08fbc"
+dependencies = [
+ "serde",
+ "toml",
+]
+
+[[package]]
+name = "migrations_macros"
+version = "2.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8a8ff27a350511de30cdabb77147501c36ef02e0451d957abea2f30caffb2b58"
+dependencies = [
+ "migrations_internals",
+ "proc-macro2",
+ "quote",
+]
+
 [[package]]
 name = "milagro_bls"
 version = "1.4.2"
@@ -4658,9 +4898,9 @@ dependencies = [

 [[package]]
 name = "mime"
-version = "0.3.16"
+version = "0.3.17"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2a60c7ce501c71e03a9c9c0d35b861413ae925bd979cc7a4e30d060069aaac8d"
+checksum = "6877bb514081ee2a7ff5ef9de3281f14a4dd4bceac4c09388074a6b5df8a139a"

 [[package]]
 name = "mime_guess"
@@ -4689,14 +4929,14 @@ dependencies = [

 [[package]]
 name = "mio"
-version = "0.8.5"
+version = "0.8.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e5d732bc30207a6423068df043e3d02e0735b155ad7ce1a6f76fe2baa5b158de"
+checksum = "5b9d9a46eff5b4ff64b45a9e316a6d1e0bc719ef429cbec4dc630684212bfdf9"
 dependencies = [
  "libc",
  "log",
  "wasi 0.11.0+wasi-snapshot-preview1",
- "windows-sys",
+ "windows-sys 0.45.0",
 ]

 [[package]]
@@ -4735,7 +4975,7 @@ dependencies = [
  "bs58",
  "byteorder",
  "data-encoding",
- "multihash",
+ "multihash 0.16.3",
  "percent-encoding",
  "serde",
  "static_assertions",
@@ -4753,7 +4993,26 @@ dependencies = [
  "byteorder",
  "data-encoding",
  "multibase",
- "multihash",
+ "multihash 0.16.3",
+ "percent-encoding",
+ "serde",
+ "static_assertions",
+ "unsigned-varint 0.7.1",
+ "url",
+]
+
+[[package]]
+name = "multiaddr"
+version = "0.17.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2b36f567c7099511fa8612bbbb52dda2419ce0bdbacf31714e3a5ffdb766d3bd"
+dependencies = [
+ "arrayref",
+ "byteorder",
+ "data-encoding",
+ "log",
+ "multibase",
+ "multihash 0.17.0",
  "percent-encoding",
  "serde",
  "static_assertions",
@@ -4785,6 +5044,19 @@ dependencies = [
  "unsigned-varint 0.7.1",
 ]

+[[package]]
+name = "multihash"
+version = "0.17.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "835d6ff01d610179fbce3de1694d007e500bf33a7f29689838941d6bf783ae40"
+dependencies = [
+ "core2",
+ "digest 0.10.6",
+ "multihash-derive",
+ "sha2 0.10.6",
+ "unsigned-varint 0.7.1",
+]
+
 [[package]]
 name = "multihash-derive"
 version = "0.8.1"
@@ -4795,7 +5067,7 @@ dependencies = [
  "proc-macro-error",
  "proc-macro2",
  "quote",
- "syn",
+ "syn 1.0.109",
  "synstructure",
 ]

@@ -4897,9 +5169,9 @@ dependencies = [

 [[package]]
 name = "netlink-packet-utils"
-version = "0.5.1"
+version = "0.5.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "25af9cf0dc55498b7bd94a1508af7a78706aa0ab715a73c5169273e03c84845e"
+checksum = "0ede8a08c71ad5a95cdd0e4e52facd37190977039a4704eb82a283f713747d34"
 dependencies = [
  "anyhow",
  "byteorder",
@@ -4924,9 +5196,9 @@ dependencies = [

 [[package]]
 name = "netlink-sys"
-version = "0.8.3"
+version = "0.8.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "92b654097027250401127914afb37cb1f311df6610a9891ff07a757e94199027"
+checksum = "6471bf08e7ac0135876a9581bf3217ef0333c191c128d34878079f42ee150411"
 dependencies = [
  "bytes",
  "futures",
@@ -4947,6 +5219,7 @@ dependencies = [
  "eth2_ssz",
  "eth2_ssz_types",
  "ethereum-types 0.14.1",
+ "execution_layer",
  "exit-future",
  "fnv",
  "futures",
@@ -4962,6 +5235,7 @@ dependencies = [
  "lru_cache",
  "matches",
  "num_cpus",
+ "operation_pool",
  "rand 0.8.5",
  "rlp",
  "slog",
@@ -5006,9 +5280,9 @@ dependencies = [

 [[package]]
 name = "nix"
-version = "0.26.1"
+version = "0.26.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "46a58d1d356c6597d08cde02c2f09d785b09e28711837b1ed667dc652c08a694"
+checksum = "bfdda3d196821d6af13126e40375cdf7da646a96114af134d5f417a9a1dc8e1a"
 dependencies = [
  "bitflags",
  "cfg-if",
@@ -5045,9 +5319,9 @@ checksum = "cf51a729ecf40266a2368ad335a5fdde43471f545a967109cd62146ecf8b66ff"

 [[package]]
 name = "nom"
-version = "7.1.2"
+version = "7.1.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e5507769c4919c998e69e49c839d9dc6e693ede4cc4290d6ad8b41d4f09c548c"
+checksum = "d273983c5a657a70a3e8f2a01329822f3b8c8172b73826411a55751e404a0a4a"
 dependencies = [
  "memchr",
  "minimal-lexical",
@@ -5153,9 +5427,9 @@ dependencies = [

 [[package]]
 name = "object"
-version = "0.30.1"
+version = "0.30.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8d864c91689fdc196779b98dba0aceac6118594c2df6ee5d943eb6a8df4d107a"
+checksum = "ea86265d3d3dcb6a27fc51bd29a4bf387fae9d2986b823079d4986af253eb439"
 dependencies = [
  "memchr",
 ]
@@ -5175,14 +5449,14 @@ version = "0.6.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "9bedf36ffb6ba96c2eb7144ef6270557b52e54b20c0a8e1eb2ff99a6c6959bff"
 dependencies = [
- "asn1-rs 0.5.1",
+ "asn1-rs 0.5.2",
 ]

 [[package]]
 name = "once_cell"
-version = "1.17.0"
+version = "1.17.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6f61fba1741ea2b3d6a1e3178721804bb716a68a6aeba1149b5d52e3d464ea66"
+checksum = "b7e5500299e16ebb147ae15a00a942af264cf3688f47923b8fc2cd5858f23ad3"

 [[package]]
 name = "oneshot_broadcast"
@@ -5225,14 +5499,14 @@ dependencies = [
  "bytes",
  "proc-macro2",
  "quote",
- "syn",
+ "syn 1.0.109",
 ]

 [[package]]
 name = "openssl"
-version = "0.10.45"
+version = "0.10.49"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b102428fd03bc5edf97f62620f7298614c45cedf287c271e7ed450bbaf83f2e1"
+checksum = "4d2f106ab837a24e03672c59b1239669a0596406ff657c3c0835b6b7f0f35a33"
 dependencies = [
  "bitflags",
  "cfg-if",
@@ -5245,13 +5519,13 @@ dependencies = [

 [[package]]
 name = "openssl-macros"
-version = "0.1.0"
+version = "0.1.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b501e44f11665960c7e7fcf062c7d96a14ade4aa98116c004b2e37b5be7d736c"
+checksum = "a948666b637a0f465e8564c73e89d4dde00d72d4d473cc972f390fc3dcee7d9c"
 dependencies = [
  "proc-macro2",
  "quote",
- "syn",
+ "syn 2.0.13",
 ]

 [[package]]
@@ -5262,20 +5536,19 @@ checksum = "ff011a302c396a5197692431fc1948019154afc178baf7d8e37367442a4601cf"

 [[package]]
 name = "openssl-src"
-version = "111.24.0+1.1.1s"
+version = "111.25.2+1.1.1t"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3498f259dab01178c6228c6b00dcef0ed2a2d5e20d648c017861227773ea4abd"
+checksum = "320708a054ad9b3bf314688b5db87cf4d6683d64cfc835e2337924ae62bf4431"
 dependencies = [
  "cc",
 ]

 [[package]]
 name = "openssl-sys"
-version = "0.9.80"
+version = "0.9.84"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "23bbbf7854cd45b83958ebe919f0e8e516793727652e27fda10a8384cfc790b7"
+checksum = "3a20eace9dc2d82904039cb76dcf50fb1a0bba071cfd1629720b5d6f1ddba0fa"
 dependencies = [
- "autocfg 1.1.0",
  "cc",
  "libc",
  "openssl-src",
@@ -5297,6 +5570,7 @@ dependencies = [
  "lighthouse_metrics",
  "maplit",
  "parking_lot 0.12.1",
+ "rand 0.8.5",
  "rayon",
  "serde",
  "serde_derive",
@@ -5360,15 +5634,15 @@ dependencies = [

 [[package]]
 name = "parity-scale-codec"
-version = "3.2.1"
+version = "3.4.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "366e44391a8af4cfd6002ef6ba072bae071a96aafca98d7d448a34c5dca38b6a"
+checksum = "637935964ff85a605d114591d4d2c13c5d1ba2806dae97cea6bf180238a749ac"
 dependencies = [
  "arrayvec",
  "bitvec 1.0.1",
  "byte-slice-cast",
  "impl-trait-for-tuples",
- "parity-scale-codec-derive 3.1.3",
+ "parity-scale-codec-derive 3.1.4",
  "serde",
 ]

@@ -5381,19 +5655,19 @@ dependencies = [
  "proc-macro-crate",
  "proc-macro2",
  "quote",
- "syn",
+ "syn 1.0.109",
 ]

 [[package]]
 name = "parity-scale-codec-derive"
-version = "3.1.3"
+version = "3.1.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9299338969a3d2f491d65f140b00ddec470858402f888af98e8642fb5e8965cd"
+checksum = "86b26a931f824dd4eca30b3e43bb4f31cd5f0d3a403c5f5ff27106b805bfde7b"
 dependencies = [
  "proc-macro-crate",
  "proc-macro2",
  "quote",
- "syn",
+ "syn 1.0.109",
 ]

 [[package]]
@@ -5420,7 +5694,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "3742b2c103b9f06bc9fff0a37ff4912935851bee6d36f3c02bcc755bcfec228f"
 dependencies = [
  "lock_api",
- "parking_lot_core 0.9.5",
+ "parking_lot_core 0.9.7",
 ]

 [[package]]
@@ -5432,29 +5706,29 @@ dependencies = [
  "cfg-if",
  "instant",
  "libc",
- "redox_syscall",
+ "redox_syscall 0.2.16",
  "smallvec",
  "winapi",
 ]

 [[package]]
 name = "parking_lot_core"
-version = "0.9.5"
+version = "0.9.7"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7ff9f3fef3968a3ec5945535ed654cb38ff72d7495a25619e2247fb15a2ed9ba"
+checksum = "9069cbb9f99e3a5083476ccb29ceb1de18b9118cafa53e90c9551235de2b9521"
 dependencies = [
  "cfg-if",
  "libc",
- "redox_syscall",
+ "redox_syscall 0.2.16",
  "smallvec",
- "windows-sys",
+ "windows-sys 0.45.0",
 ]

 [[package]]
 name = "paste"
-version = "1.0.11"
+version = "1.0.12"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d01a5bd0424d00070b0098dd17ebca6f961a959dead1dbcbbbc1d1cd8d3deeba"
+checksum = "9f746c4065a8fa3fe23974dd82f15431cc8d40779821001404d10d2e79ca7d79"

 [[package]]
 name = "pbkdf2"
@@ -5482,11 +5756,11 @@ checksum = "19b17cddbe7ec3f8bc800887bab5e717348c95ea2ca0b1bf0837fb964dc67099"

 [[package]]
 name = "pem"
-version = "1.1.0"
+version = "1.1.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "03c64931a1a212348ec4f3b4362585eca7159d0d09cbdf4a7f74f02173596fd4"
+checksum = "a8835c273a76a90455d7344889b0964598e3316e2a79ede8e36f16bdcf2228b8"
 dependencies = [
- "base64",
+ "base64 0.13.1",
 ]

 [[package]]
@@ -5504,21 +5778,11 @@ version = "2.2.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "478c572c3d73181ff3c2539045f6eb99e5491218eae919370993b890cdbdd98e"

-[[package]]
-name = "pest"
-version = "2.5.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0f6e86fb9e7026527a0d46bc308b841d73170ef8f443e1807f6ef88526a816d4"
-dependencies = [
- "thiserror",
- "ucd-trie",
-]
-
 [[package]]
 name = "petgraph"
-version = "0.6.2"
+version = "0.6.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e6d5014253a1331579ce62aa67443b4a658c5e7dd03d4bc6d302b94474888143"
+checksum = "4dd7d28ee937e54fe3080c91faa1c3a46c06de6252988a7f4592ba2310ef22a4"
 dependencies = [
  "fixedbitset",
  "indexmap",
@@ -5534,6 +5798,24 @@ dependencies = [
  "rustc_version 0.4.0",
 ]

+[[package]]
+name = "phf"
+version = "0.11.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "928c6535de93548188ef63bb7c4036bd415cd8f36ad25af44b9789b2ee72a48c"
+dependencies = [
+ "phf_shared",
+]
+
+[[package]]
+name = "phf_shared"
+version = "0.11.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e1fb5f6f826b772a8d4c0394209441e7d37cbbb967ae9c7e0e8134365c9ee676"
+dependencies = [
+ "siphasher",
+]
+
 [[package]]
 name = "pin-project"
 version = "1.0.12"
@@ -5551,7 +5833,7 @@ checksum = "069bdb1e05adc7a8990dce9cc75370895fbe4e3d58b9b73bf1aee56359344a55"
 dependencies = [
  "proc-macro2",
  "quote",
- "syn",
+ "syn 1.0.109",
 ]

 [[package]]
@@ -5630,16 +5912,18 @@ dependencies = [

 [[package]]
 name = "polling"
-version = "2.5.2"
+version = "2.6.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "22122d5ec4f9fe1b3916419b76be1e80bcb93f618d071d2edf841b137b2a2bd6"
+checksum = "7e1f879b2998099c2d69ab9605d145d5b661195627eccc680002c4918a7fb6fa"
 dependencies = [
  "autocfg 1.1.0",
+ "bitflags",
  "cfg-if",
+ "concurrent-queue",
  "libc",
  "log",
- "wepoll-ffi",
- "windows-sys",
+ "pin-project-lite 0.2.9",
+ "windows-sys 0.45.0",
 ]

 [[package]]
@@ -5650,30 +5934,60 @@ checksum = "048aeb476be11a4b6ca432ca569e375810de9294ae78f4774e78ea98a9246ede"
 dependencies = [
  "cpufeatures",
  "opaque-debug",
- "universal-hash",
+ "universal-hash 0.4.1",
 ]

 [[package]]
 name = "polyval"
-version = "0.4.5"
+version = "0.5.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "eebcc4aa140b9abd2bc40d9c3f7ccec842679cd79045ac3a7ac698c1a064b7cd"
+checksum = "8419d2b623c7c0896ff2d5d96e2cb4ede590fed28fcc34934f4c33c036e620a1"
 dependencies = [
- "cpuid-bool",
+ "cfg-if",
+ "cpufeatures",
  "opaque-debug",
- "universal-hash",
+ "universal-hash 0.4.1",
 ]

 [[package]]
 name = "polyval"
-version = "0.5.3"
+version = "0.6.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8419d2b623c7c0896ff2d5d96e2cb4ede590fed28fcc34934f4c33c036e620a1"
+checksum = "7ef234e08c11dfcb2e56f79fd70f6f2eb7f025c0ce2333e82f4f0518ecad30c6"
 dependencies = [
  "cfg-if",
  "cpufeatures",
  "opaque-debug",
- "universal-hash",
+ "universal-hash 0.5.0",
+]
+
+[[package]]
+name = "postgres-protocol"
+version = "0.6.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "78b7fa9f396f51dffd61546fd8573ee20592287996568e6175ceb0f8699ad75d"
+dependencies = [
+ "base64 0.21.0",
+ "byteorder",
+ "bytes",
+ "fallible-iterator",
+ "hmac 0.12.1",
+ "md-5",
+ "memchr",
+ "rand 0.8.5",
+ "sha2 0.10.6",
+ "stringprep",
+]
+
+[[package]]
+name = "postgres-types"
+version = "0.2.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f028f05971fe20f512bcc679e2c10227e57809a3af86a7606304435bc8896cd6"
+dependencies = [
+ "bytes",
+ "fallible-iterator",
+ "postgres-protocol",
 ]

 [[package]]
@@ -5682,14 +5996,23 @@ version = "0.2.17"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "5b40af805b3121feab8a3c29f04d8ad262fa8e0561883e7653e024ae4479e6de"

+[[package]]
+name = "pq-sys"
+version = "0.4.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3b845d6d8ec554f972a2c5298aad68953fd64e7441e846075450b44656a016d1"
+dependencies = [
+ "vcpkg",
+]
+
 [[package]]
 name = "prettyplease"
-version = "0.1.23"
+version = "0.1.25"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e97e3215779627f01ee256d2fad52f3d95e8e1c11e9fc6fd08f7cd455d5d5c78"
+checksum = "6c8646e95016a7a6c4adea95bafa8a16baab64b583356217f2c85db4a39d9a86"
 dependencies = [
  "proc-macro2",
- "syn",
+ "syn 1.0.109",
 ]

 [[package]]
@@ -5738,7 +6061,7 @@ dependencies = [
  "proc-macro-error-attr",
  "proc-macro2",
  "quote",
- "syn",
+ "syn 1.0.109",
  "version_check",
 ]

@@ -5761,9 +6084,9 @@ checksum = "dc375e1527247fe1a97d8b7156678dfe7c1af2fc075c9a4db3690ecd2a148068"

 [[package]]
 name = "proc-macro2"
-version = "1.0.49"
+version = "1.0.55"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "57a8eca9f9c4ffde41714334dee777596264c7825420f521abc92b5b5deb63a5"
+checksum = "1d0dd4be24fcdcfeaa12a432d588dc59bbad6cad3510c67e74a2b6b2fc950564"
 dependencies = [
  "unicode-ident",
 ]

@@ -5802,7 +6125,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "83cd1b99916654a69008fd66b4f9397fbe08e6e51dfe23d4417acf5d3b8cb87c"
 dependencies = [
  "dtoa",
- "itoa 1.0.5",
+ "itoa",
  "parking_lot 0.12.1",
  "prometheus-client-derive-text-encode",
 ]

@@ -5815,14 +6138,14 @@ checksum = "66a455fbcb954c1a7decf3c586e860fd7889cddf4b8e164be736dbac95a953cd"
 dependencies = [
  "proc-macro2",
  "quote",
- "syn",
+ "syn 1.0.109",
 ]

 [[package]]
 name = "prost"
-version = "0.11.5"
+version = "0.11.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c01db6702aa05baa3f57dec92b8eeeeb4cb19e894e73996b32a4093289e54592"
+checksum = "e48e50df39172a3e7eb17e14642445da64996989bc212b583015435d39a58537"
 dependencies = [
  "bytes",
  "prost-derive",
@@ -5830,9 +6153,9 @@ dependencies = [

 [[package]]
 name = "prost-build"
-version = "0.11.5"
+version = "0.11.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "cb5320c680de74ba083512704acb90fe00f28f79207286a848e730c45dd73ed6"
+checksum = "2c828f93f5ca4826f97fedcbd3f9a536c16b12cff3dbbb4a007f932bbad95b12"
 dependencies = [
  "bytes",
  "heck",
@@ -5845,7 +6168,7 @@ dependencies = [
  "prost",
  "prost-types",
  "regex",
- "syn",
+ "syn 1.0.109",
  "tempfile",
  "which",
 ]

@@ -5865,24 +6188,23 @@ dependencies = [

 [[package]]
 name = "prost-derive"
-version = "0.11.5"
+version = "0.11.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c8842bad1a5419bca14eac663ba798f6bc19c413c2fdceb5f3ba3b0932d96720"
+checksum = "4ea9b0f8cbe5e15a8a042d030bd96668db28ecb567ec37d691971ff5731d2b1b"
 dependencies = [
  "anyhow",
  "itertools",
  "proc-macro2",
  "quote",
- "syn",
+ "syn 1.0.109",
 ]

 [[package]]
 name = "prost-types"
-version = "0.11.5"
+version = "0.11.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "017f79637768cde62820bc2d4fe0e45daaa027755c323ad077767c6c5f173091"
+checksum = "379119666929a1afd7a043aa6cf96fa67a6dce9af60c88095a4686dbce4c9c88"
 dependencies = [
- "bytes",
  "prost",
 ]

@@ -5930,6 +6252,15 @@ version = "1.2.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "a1d01941d82fa2ab50be1e79e6714289dd7cde78eba4c074bc5a4374f650dfe0"

+[[package]]
+name = "quick-protobuf"
+version = "0.8.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9d6da84cc204722a989e01ba2f6e1e276e190f22263d0cb6ce8526fcdb0d2e1f"
+dependencies = [
+ "byteorder",
+]
+
 [[package]]
 name = "quickcheck"
 version = "0.9.2"
@@ -5950,7 +6281,7 @@ checksum = "608c156fd8e97febc07dc9c2e2c80bf74cfc6ef26893eae3daf8bc2bc94a4b7f"
 dependencies = [
  "proc-macro2",
  "quote",
- "syn",
+ "syn 1.0.109",
 ]

 [[package]]
@@ -5966,15 +6297,15 @@ dependencies = [

 [[package]]
 name = "quinn-proto"
-version = "0.9.2" +version = "0.9.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "72ef4ced82a24bb281af338b9e8f94429b6eca01b4e66d899f40031f074e74c9" +checksum = "67c10f662eee9c94ddd7135043e544f3c82fa839a1e7b865911331961b53186c" dependencies = [ "bytes", "rand 0.8.5", "ring", "rustc-hash", - "rustls 0.20.7", + "rustls 0.20.8", "slab", "thiserror", "tinyvec", @@ -5984,9 +6315,9 @@ dependencies = [ [[package]] name = "quote" -version = "1.0.23" +version = "1.0.26" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8856d8364d252a14d474036ea1358d63c9e6965c8e5c1885c18f73d70bff9c7b" +checksum = "4424af4bf778aae2051a77b60283332f386554255d722233d09fbfc7e30da2fc" dependencies = [ "proc-macro2", ] @@ -6004,9 +6335,9 @@ dependencies = [ [[package]] name = "r2d2_sqlite" -version = "0.18.0" +version = "0.21.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9d24607049214c5e42d3df53ac1d8a23c34cc6a5eefe3122acb2c72174719959" +checksum = "b4f5d0337e99cd5cacd91ffc326c6cc9d8078def459df560c4f9bf9ba4a51034" dependencies = [ "r2d2", "rusqlite", @@ -6106,9 +6437,9 @@ dependencies = [ [[package]] name = "rayon" -version = "1.6.1" +version = "1.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6db3a213adf02b3bcfd2d3846bb41cb22857d131789e01df434fb7e7bc0759b7" +checksum = "1d2df5196e37bcc87abebc0053e20787d73847bb33134a69841207dd0a47f03b" dependencies = [ "either", "rayon-core", @@ -6116,9 +6447,9 @@ dependencies = [ [[package]] name = "rayon-core" -version = "1.10.1" +version = "1.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cac410af5d00ab6884528b4ab69d1e8e146e8d471201800fa1b4524126de6ad3" +checksum = "4b8f95bd6966f5c87776639160a66bd8ab9895d9d4ab01ddba9fc60661aebe8d" dependencies = [ "crossbeam-channel", "crossbeam-deque", @@ -6134,7 +6465,7 @@ checksum = "6413f3de1edee53342e6138e75b56d32e7bc6e332b3bd62d497b1929d4cfbcdd" dependencies 
= [ "pem", "ring", - "time 0.3.17", + "time 0.3.20", "x509-parser 0.13.2", "yasna", ] @@ -6147,7 +6478,7 @@ checksum = "ffbe84efe2f38dea12e9bfc1f65377fdf03e53a18cb3b995faedf7934c7e785b" dependencies = [ "pem", "ring", - "time 0.3.17", + "time 0.3.20", "yasna", ] @@ -6160,6 +6491,15 @@ dependencies = [ "bitflags", ] +[[package]] +name = "redox_syscall" +version = "0.3.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "567664f262709473930a4bf9e51bf2ebf3348f2e748ccc50dea20646858f8f29" +dependencies = [ + "bitflags", +] + [[package]] name = "redox_users" version = "0.4.3" @@ -6167,15 +6507,15 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b033d837a7cf162d7993aded9304e30a83213c648b6e389db233191f891e5c2b" dependencies = [ "getrandom 0.2.8", - "redox_syscall", + "redox_syscall 0.2.16", "thiserror", ] [[package]] name = "regex" -version = "1.7.1" +version = "1.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "48aaa5748ba571fb95cd2c85c09f629215d3a6ece942baa100950af03a34f733" +checksum = "8b1f693b24f6ac912f4893ef08244d70b6067480d2f1a46e950c9691e6749d1d" dependencies = [ "aho-corasick", "memchr", @@ -6193,26 +6533,17 @@ dependencies = [ [[package]] name = "regex-syntax" -version = "0.6.28" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "456c603be3e8d448b072f410900c09faf164fbce2d480456f50eea6e25f9c848" - -[[package]] -name = "remove_dir_all" -version = "0.5.3" +version = "0.6.29" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3acd125665422973a33ac9d3dd2df85edad0f4ae9b00dafb1a05e43a9f5ef8e7" -dependencies = [ - "winapi", -] +checksum = "f162c6dd7b008981e4d40210aca20b4bd0f9b60ca9271061b07f78537722f2e1" [[package]] name = "reqwest" -version = "0.11.13" +version = "0.11.16" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "68cc60575865c7831548863cc02356512e3f1dc2f3f82cb837d7fc4cc8f3c97c" +checksum 
= "27b71749df584b7f4cac2c426c127a7c785a5106cc98f7a8feb044115f0fa254" dependencies = [ - "base64", + "base64 0.21.0", "bytes", "encoding_rs", "futures-core", @@ -6231,7 +6562,7 @@ dependencies = [ "once_cell", "percent-encoding", "pin-project-lite 0.2.9", - "rustls 0.20.7", + "rustls 0.20.8", "rustls-pemfile", "serde", "serde_json", @@ -6239,11 +6570,12 @@ dependencies = [ "tokio", "tokio-native-tls", "tokio-rustls 0.23.4", - "tokio-util 0.7.4", + "tokio-util 0.7.7", "tower-service", "url", "wasm-bindgen", "wasm-bindgen-futures", + "wasm-streams", "web-sys", "webpki-roots", "winreg", @@ -6309,7 +6641,7 @@ checksum = "e33d7b2abe0c340d8797fe2907d3f20d3b5ea5908683618bfe80df7f621f672a" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -6364,24 +6696,23 @@ dependencies = [ [[package]] name = "rusqlite" -version = "0.25.4" +version = "0.28.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5c4b1eaf239b47034fb450ee9cdedd7d0226571689d8823030c4b6c2cb407152" +checksum = "01e213bc3ecb39ac32e81e51ebe31fd888a940515173e3a18a35f8c6e896422a" dependencies = [ "bitflags", "fallible-iterator", "fallible-streaming-iterator", - "hashlink", + "hashlink 0.8.1", "libsqlite3-sys", - "memchr", "smallvec", ] [[package]] name = "rustc-demangle" -version = "0.1.21" +version = "0.1.22" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7ef03e0a2b150c7a90d01faf6254c9c48a41e95fb2a8c2ac1c6f0d2b9aefc342" +checksum = "d4a36c42d1873f9a77c53bde094f9664d9891bc604a45b4798fd2c389ed12e5b" [[package]] name = "rustc-hash" @@ -6401,16 +6732,7 @@ version = "0.2.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "138e3e0acb6c9fb258b19b67cb8abd63c00679d2851805ea151465464fe9030a" dependencies = [ - "semver 0.9.0", -] - -[[package]] -name = "rustc_version" -version = "0.3.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"f0dfe2087c51c460008730de8b57e6a320782fbfb312e1f4d520e6c6fae155ee" -dependencies = [ - "semver 0.11.0", + "semver 0.9.0", ] [[package]] @@ -6419,7 +6741,7 @@ version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bfa0f585226d2e68097d4f95d113b15b83a82e819ab25717ec0590d9584ef366" dependencies = [ - "semver 1.0.16", + "semver 1.0.17", ] [[package]] @@ -6428,7 +6750,21 @@ version = "4.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "faf0c4a6ece9950b9abdb62b1cfcf2a68b3b67a10ba445b3bb85be2a293d0632" dependencies = [ - "nom 7.1.2", + "nom 7.1.3", +] + +[[package]] +name = "rustix" +version = "0.37.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d097081ed288dfe45699b72f5b5d648e5f15d64d900c7080273baa20c16a6849" +dependencies = [ + "bitflags", + "errno", + "io-lifetimes", + "libc", + "linux-raw-sys", + "windows-sys 0.45.0", ] [[package]] @@ -6437,7 +6773,7 @@ version = "0.19.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "35edb675feee39aec9c99fa5ff985081995a06d594114ae14cbe797ad7b7a6d7" dependencies = [ - "base64", + "base64 0.13.1", "log", "ring", "sct 0.6.1", @@ -6446,9 +6782,9 @@ dependencies = [ [[package]] name = "rustls" -version = "0.20.7" +version = "0.20.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "539a2bfe908f471bfa933876bd1eb6a19cf2176d375f82ef7f99530a40e48c2c" +checksum = "fff78fc74d175294f4e83b28343315ffcfb114b156f0185e9741cb5570f50e2f" dependencies = [ "log", "ring", @@ -6458,18 +6794,18 @@ dependencies = [ [[package]] name = "rustls-pemfile" -version = "1.0.1" +version = "1.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0864aeff53f8c05aa08d86e5ef839d3dfcf07aeba2db32f12db0ef716e87bd55" +checksum = "d194b56d58803a43635bdc398cd17e383d6f71f9182b9a192c127ca42494a59b" dependencies = [ - "base64", + "base64 0.21.0", ] [[package]] name = "rustversion" -version 
= "1.0.11" +version = "1.0.12" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5583e89e108996506031660fe09baa5011b9dd0341b89029313006d1fb508d70" +checksum = "4f3208ce4d8448b3f3e7d168a73f5e0c43a61e32930de3bceeccedb388b6bf06" [[package]] name = "rw-stream-sink" @@ -6484,9 +6820,9 @@ dependencies = [ [[package]] name = "ryu" -version = "1.0.12" +version = "1.0.13" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7b4b9743ed687d4b4bcedf9ff5eaa7398495ae14e61cba0a295704edbc7decde" +checksum = "f91339c0467de62360649f8d3e185ca8de4224ff281f66000de5eb2a77a79041" [[package]] name = "safe_arith" @@ -6518,26 +6854,26 @@ dependencies = [ [[package]] name = "scale-info" -version = "2.3.1" +version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "001cf62ece89779fd16105b5f515ad0e5cedcd5440d3dd806bb067978e7c3608" +checksum = "0cfdffd972d76b22f3d7f81c8be34b2296afd3a25e0a547bd9abe340a4dbbe97" dependencies = [ "cfg-if", "derive_more", - "parity-scale-codec 3.2.1", + "parity-scale-codec 3.4.0", "scale-info-derive", ] [[package]] name = "scale-info-derive" -version = "2.3.1" +version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "303959cf613a6f6efd19ed4b4ad5bf79966a13352716299ad532cfb115f4205c" +checksum = "61fa974aea2d63dd18a4ec3a49d59af9f34178c73a4f56d2f18205628d00681e" dependencies = [ "proc-macro-crate", "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -6546,14 +6882,14 @@ version = "0.1.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "713cfb06c7059f3588fb8044c0fad1d09e3c01d225e25b9220dbfdcf16dbb1b3" dependencies = [ - "windows-sys", + "windows-sys 0.42.0", ] [[package]] name = "scheduled-thread-pool" -version = "0.2.6" +version = "0.2.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "977a7519bff143a44f842fd07e80ad1329295bd71686457f18e496736f4bf9bf" +checksum = 
"3cbc66816425a074528352f5789333ecff06ca41b36b0b0efdfbb29edc391a19" dependencies = [ "parking_lot 0.12.1", ] @@ -6572,9 +6908,9 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd" [[package]] name = "scratch" -version = "1.0.3" +version = "1.0.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ddccb15bcce173023b3fedd9436f882a0739b8dfb45e4f6b6002bee5929f61b2" +checksum = "1792db035ce95be60c3f8853017b3999209281c24e2ba5bc8e59bf97a0c590c1" [[package]] name = "scrypt" @@ -6654,9 +6990,9 @@ dependencies = [ [[package]] name = "security-framework" -version = "2.7.0" +version = "2.8.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2bc1bb97804af6631813c55739f771071e0f2ed33ee20b68c86ec505d906356c" +checksum = "a332be01508d814fed64bf28f798a146d73792121129962fdf335bb3c49a4254" dependencies = [ "bitflags", "core-foundation", @@ -6667,9 +7003,9 @@ dependencies = [ [[package]] name = "security-framework-sys" -version = "2.6.1" +version = "2.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0160a13a177a45bfb43ce71c01580998474f556ad854dcbca936dd2841a5c556" +checksum = "31c9bb296072e961fcbd8853511dd39c2d8be2deb1e17c6860b1d30732b323b4" dependencies = [ "core-foundation-sys", "libc", @@ -6681,23 +7017,14 @@ version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1d7eb9ef2c18661902cc47e535f9bc51b78acd254da71d375c2f6720d9a40403" dependencies = [ - "semver-parser 0.7.0", -] - -[[package]] -name = "semver" -version = "0.11.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f301af10236f6df4160f7c3f04eec6dbc70ace82d23326abad5edee88801c6b6" -dependencies = [ - "semver-parser 0.10.2", + "semver-parser", ] [[package]] name = "semver" -version = "1.0.16" +version = "1.0.17" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"58bc9567378fc7690d6b2addae4e60ac2eeea07becb2c64b9f218b53865cba2a" +checksum = "bebd363326d05ec3e2f532ab7660680f3b02130d780c299bca73469d521bc0ed" [[package]] name = "semver-parser" @@ -6705,20 +7032,11 @@ version = "0.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3" -[[package]] -name = "semver-parser" -version = "0.10.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "00b0bef5b7f9e0df16536d3961cfb6e84331c065b4066afb39768d0e319411f7" -dependencies = [ - "pest", -] - [[package]] name = "send_wrapper" -version = "0.5.0" +version = "0.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "930c0acf610d3fdb5e2ab6213019aaa04e227ebe9547b0649ba599b16d788bd7" +checksum = "cd0b0ec5f1c1ca621c432a25813d8d60c88abe6d3e08a3eb9cf37d97a0fe3d73" [[package]] name = "sensitive_url" @@ -6730,9 +7048,9 @@ dependencies = [ [[package]] name = "serde" -version = "1.0.152" +version = "1.0.159" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bb7d1f0d3021d347a83e556fc4683dea2ea09d87bccdf88ff5c12545d89d5efb" +checksum = "3c04e8343c3daeec41f58990b9d77068df31209f2af111e059e9fe9646693065" dependencies = [ "serde_derive", ] @@ -6759,35 +7077,35 @@ dependencies = [ [[package]] name = "serde_derive" -version = "1.0.152" +version = "1.0.159" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "af487d118eecd09402d70a5d72551860e788df87b464af30e5ea6a38c75c541e" +checksum = "4c614d17805b093df4b147b51339e7e44bf05ef59fba1e45d83500bcfb4d8585" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 2.0.13", ] [[package]] name = "serde_json" -version = "1.0.91" +version = "1.0.95" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "877c235533714907a8c2464236f5c4b2a17262ef1bd71f38f35ea592c8da6883" +checksum = "d721eca97ac802aa7777b701877c8004d950fc142651367300d21c1cc0194744" 
dependencies = [ - "itoa 1.0.5", + "itoa", "ryu", "serde", ] [[package]] name = "serde_repr" -version = "0.1.10" +version = "0.1.12" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9a5ec9fa74a20ebbe5d9ac23dac1fc96ba0ecfe9f50f2843b52e537b10fbcb4e" +checksum = "bcec881020c684085e55a25f7fd888954d56609ef363479dc5a1305eb0d40cab" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 2.0.13", ] [[package]] @@ -6797,7 +7115,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d3491c14715ca2294c4d6a88f15e84739788c1d030eed8c110436aafdaa2f3fd" dependencies = [ "form_urlencoded", - "itoa 1.0.5", + "itoa", "ryu", "serde", ] @@ -6821,7 +7139,7 @@ dependencies = [ "darling 0.13.4", "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -6934,9 +7252,9 @@ checksum = "43b2853a4d09f215c24cc5489c992ce46052d359b5109343cbafbf26bc62f8a3" [[package]] name = "signal-hook-registry" -version = "1.4.0" +version = "1.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e51e73328dc4ac0c7ccbda3a494dfa03df1de2f46018127f60c693f2648455b0" +checksum = "d8229b473baa5980ac72ef434c4415e70c4b5e71b423043adb4ba059f89c99a1" dependencies = [ "libc", ] @@ -6960,7 +7278,7 @@ dependencies = [ "num-bigint", "num-traits", "thiserror", - "time 0.3.17", + "time 0.3.20", ] [[package]] @@ -6981,11 +7299,17 @@ dependencies = [ "types", ] +[[package]] +name = "siphasher" +version = "0.3.10" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7bd3e3206899af3f8b12af284fafc038cc1dc2b41d1b89dd17297221c5d225de" + [[package]] name = "slab" -version = "0.4.7" +version = "0.4.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4614a76b2a8be0058caa9dbbaf66d988527d86d003c11a94fbd335d7661edcef" +checksum = "6528351c9bc8ab22353f9d776db39a20288e8d6c37ef8cfe3317cf875eecfc2d" dependencies = [ "autocfg 1.1.0", ] @@ -7086,7 +7410,7 @@ dependencies = [ "serde", "serde_json", 
"slog", - "time 0.3.17", + "time 0.3.20", ] [[package]] @@ -7131,7 +7455,7 @@ dependencies = [ "slog", "term", "thread_local", - "time 0.3.17", + "time 0.3.20", ] [[package]] @@ -7189,7 +7513,7 @@ dependencies = [ "aes-gcm 0.9.4", "blake2", "chacha20poly1305", - "curve25519-dalek 4.0.0-pre.5", + "curve25519-dalek 4.0.0-rc.2", "rand_core 0.6.4", "ring", "rustc_version 0.4.0", @@ -7199,21 +7523,31 @@ dependencies = [ [[package]] name = "socket2" -version = "0.4.7" +version = "0.4.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "02e2d2db9033d13a1567121ddd7a095ee144db4e1ca1b1bda3419bc0da294ebd" +checksum = "64a4a911eed85daf18834cfaa86a79b7d266ff93ff5ba14005426219480ed662" dependencies = [ "libc", "winapi", ] +[[package]] +name = "socket2" +version = "0.5.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bc8d618c6641ae355025c449427f9e96b98abf99a772be3cef6708d15c77147a" +dependencies = [ + "libc", + "windows-sys 0.45.0", +] + [[package]] name = "soketto" version = "0.7.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "41d1c5305e39e09653383c2c7244f2f78b3bcae37cf50c64cb4789c9f5096ec2" dependencies = [ - "base64", + "base64 0.13.1", "bytes", "flate2", "futures", @@ -7242,11 +7576,10 @@ dependencies = [ [[package]] name = "ssz-rs" version = "0.8.0" -source = "git+https://github.com/ralexstokes/ssz-rs?rev=cb08f1#cb08f18ca919cc1b685b861d0fa9e2daabe89737" +source = "git+https://github.com/ralexstokes//ssz-rs?rev=adf1a0b14cef90b9536f28ef89da1fab316465e1#adf1a0b14cef90b9536f28ef89da1fab316465e1" dependencies = [ "bitvec 1.0.1", "hex", - "lazy_static", "num-bigint", "serde", "sha2 0.9.9", @@ -7257,11 +7590,11 @@ dependencies = [ [[package]] name = "ssz-rs-derive" version = "0.8.0" -source = "git+https://github.com/ralexstokes/ssz-rs?rev=cb08f1#cb08f18ca919cc1b685b861d0fa9e2daabe89737" +source = 
"git+https://github.com/ralexstokes//ssz-rs?rev=adf1a0b14cef90b9536f28ef89da1fab316465e1#adf1a0b14cef90b9536f28ef89da1fab316465e1" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -7334,6 +7667,16 @@ dependencies = [ "types", ] +[[package]] +name = "stringprep" +version = "0.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8ee348cb74b87454fff4b551cbf727025810a004f88aeacae7f85b87f4e9a1c1" +dependencies = [ + "unicode-bidi", + "unicode-normalization", +] + [[package]] name = "strsim" version = "0.8.0" @@ -7365,7 +7708,7 @@ dependencies = [ "proc-macro2", "quote", "rustversion", - "syn", + "syn 1.0.109", ] [[package]] @@ -7374,7 +7717,7 @@ version = "0.4.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a7e94b1ec00bad60e6410e058b52f1c66de3dc5fe4d62d09b3e52bb7d3b73e25" dependencies = [ - "base64", + "base64 0.13.1", "crc", "lazy_static", "md-5", @@ -7413,7 +7756,21 @@ dependencies = [ "proc-macro2", "quote", "smallvec", - "syn", + "syn 1.0.109", +] + +[[package]] +name = "superstruct" +version = "0.6.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "75b9e5728aa1a87141cefd4e7509903fc01fa0dcb108022b1e841a67c5159fc5" +dependencies = [ + "darling 0.13.4", + "itertools", + "proc-macro2", + "quote", + "smallvec", + "syn 1.0.109", ] [[package]] @@ -7427,9 +7784,20 @@ dependencies = [ [[package]] name = "syn" -version = "1.0.107" +version = "1.0.109" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "72b64191b275b66ffe2469e8af2c1cfe3bafa67b529ead792a6d0160888b4237" +dependencies = [ + "proc-macro2", + "quote", + "unicode-ident", +] + +[[package]] +name = "syn" +version = "2.0.13" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1f4064b5b16e03ae50984a5a8ed5d4f8803e6bc1fd170a3cda91a1be4b18e3f5" +checksum = "4c9da457c5285ac1f936ebd076af6dac17a61cfe7826f2076b4d015cf47bc8ec" dependencies = [ 
"proc-macro2", "quote", @@ -7438,9 +7806,9 @@ dependencies = [ [[package]] name = "sync_wrapper" -version = "0.1.1" +version = "0.1.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "20518fe4a4c9acf048008599e464deb21beeae3d3578418951a189c235a7a9a8" +checksum = "2047c6ded9c721764247e62cd3b03c09ffc529b2ba5b10ec482ae507a4a70160" [[package]] name = "synstructure" @@ -7450,15 +7818,15 @@ checksum = "f36bdaa60a83aca3921b5259d5400cbf5e90fc51931376a9bd4a0eb79aa7210f" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 1.0.109", "unicode-xid", ] [[package]] name = "sysinfo" -version = "0.26.8" +version = "0.26.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "29ddf41e393a9133c81d5f0974195366bd57082deac6e0eb02ed39b8341c2bb6" +checksum = "5c18a6156d1f27a9592ee18c1a846ca8dd5c258b7179fc193ae87c74ebb666f5" dependencies = [ "cfg-if", "core-foundation-sys", @@ -7543,16 +7911,15 @@ dependencies = [ [[package]] name = "tempfile" -version = "3.3.0" +version = "3.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5cdb1ef4eaeeaddc8fbd371e5017057064af0911902ef36b39801f67cc6d79e4" +checksum = "b9fbec84f381d5795b08656e4912bec604d162bff9291d6189a78f4c8ab87998" dependencies = [ "cfg-if", "fastrand", - "libc", - "redox_syscall", - "remove_dir_all", - "winapi", + "redox_syscall 0.3.5", + "rustix", + "windows-sys 0.45.0", ] [[package]] @@ -7568,9 +7935,9 @@ dependencies = [ [[package]] name = "termcolor" -version = "1.1.3" +version = "1.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bab24d30b911b2376f3a13cc2cd443142f0c81dda04c118693e35b3835757755" +checksum = "be55cf8942feac5c765c2c993422806843c9a9a45d4d5c407ad6dd2ea95eb9b6" dependencies = [ "winapi-util", ] @@ -7588,7 +7955,24 @@ name = "test_random_derive" version = "0.2.0" dependencies = [ "quote", - "syn", + "syn 1.0.109", +] + +[[package]] +name = "testcontainers" +version = "0.14.0" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "0e2b1567ca8a2b819ea7b28c92be35d9f76fb9edb214321dcc86eb96023d1f87" +dependencies = [ + "bollard-stubs", + "futures", + "hex", + "hmac 0.12.1", + "log", + "rand 0.8.5", + "serde", + "serde_json", + "sha2 0.10.6", ] [[package]] @@ -7602,30 +7986,31 @@ dependencies = [ [[package]] name = "thiserror" -version = "1.0.38" +version = "1.0.40" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6a9cd18aa97d5c45c6603caea1da6628790b37f7a34b6ca89522331c5180fed0" +checksum = "978c9a314bd8dc99be594bc3c175faaa9794be04a5a5e153caba6915336cebac" dependencies = [ "thiserror-impl", ] [[package]] name = "thiserror-impl" -version = "1.0.38" +version = "1.0.40" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1fb327af4685e4d03fa8cbcf1716380da910eeb2bb8be417e7f9fd3fb164f36f" +checksum = "f9456a42c5b0d803c8cd86e73dd7cc9edd429499f37a3550d286d5e86720569f" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 2.0.13", ] [[package]] name = "thread_local" -version = "1.1.4" +version = "1.1.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5516c27b78311c50bf42c071425c560ac799b11c30b31f87e3081965fe5e0180" +checksum = "3fdd6f064ccff2d6567adcb3873ca630700f00b5ad3f060c25b5dcfd9a4ce152" dependencies = [ + "cfg-if", "once_cell", ] @@ -7651,11 +8036,11 @@ dependencies = [ [[package]] name = "time" -version = "0.3.17" +version = "0.3.20" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a561bf4617eebd33bca6434b988f39ed798e527f51a1e797d0ee4f61c0a38376" +checksum = "cd0cbfecb4d19b5ea75bb31ad904eb5b9fa13f21079c3b92017ebdf4999a5890" dependencies = [ - "itoa 1.0.5", + "itoa", "libc", "num_threads", "serde", @@ -7671,9 +8056,9 @@ checksum = "2e153e1f1acaef8acc537e68b44906d2db6436e2b35ac2c6b42640fff91f00fd" [[package]] name = "time-macros" -version = "0.2.6" +version = "0.2.8" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "d967f99f534ca7e495c575c62638eebc2898a8c84c119b89e250477bc4ba16b2" +checksum = "fd80a657e71da814b8e5d60d3374fc6d35045062245d80224748ae522dd76f36" dependencies = [ "time-core", ] @@ -7738,28 +8123,27 @@ dependencies = [ [[package]] name = "tinyvec_macros" -version = "0.1.0" +version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cda74da7e1a664f795bb1f8a87ec406fb89a02522cf6e50620d016add6dbbf5c" +checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20" [[package]] name = "tokio" -version = "1.24.1" +version = "1.27.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1d9f76183f91ecfb55e1d7d5602bd1d979e38a3a522fe900241cf195624d67ae" +checksum = "d0de47a4eecbe11f498978a9b29d792f0d2692d1dd003650c24c76510e3bc001" dependencies = [ "autocfg 1.1.0", "bytes", "libc", - "memchr", "mio", "num_cpus", "parking_lot 0.12.1", "pin-project-lite 0.2.9", "signal-hook-registry", - "socket2", + "socket2 0.4.9", "tokio-macros", - "windows-sys", + "windows-sys 0.45.0", ] [[package]] @@ -7774,25 +8158,49 @@ dependencies = [ [[package]] name = "tokio-macros" -version = "1.8.2" +version = "2.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d266c00fde287f55d3f1c3e96c500c362a2b8c695076ec180f27918820bc6df8" +checksum = "61a573bdc87985e9d6ddeed1b3d864e8a302c847e40d647746df2f1de209d1ce" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 2.0.13", ] [[package]] name = "tokio-native-tls" -version = "0.3.0" +version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f7d995660bd2b7f8c1568414c1126076c13fbb725c40112dc0120b78eb9b717b" +checksum = "bbae76ab933c85776efabc971569dd6119c580d8f5d448769dec1764bf796ef2" dependencies = [ "native-tls", "tokio", ] +[[package]] +name = "tokio-postgres" +version = "0.7.8" +source = "registry+https://github.com/rust-lang/crates.io-index" 
+checksum = "6e89f6234aa8fd43779746012fcf53603cdb91fdd8399aa0de868c2d56b6dde1" +dependencies = [ + "async-trait", + "byteorder", + "bytes", + "fallible-iterator", + "futures-channel", + "futures-util", + "log", + "parking_lot 0.12.1", + "percent-encoding", + "phf", + "pin-project-lite 0.2.9", + "postgres-protocol", + "postgres-types", + "socket2 0.5.1", + "tokio", + "tokio-util 0.7.7", +] + [[package]] name = "tokio-rustls" version = "0.22.0" @@ -7810,21 +8218,21 @@ version = "0.23.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c43ee83903113e03984cb9e5cebe6c04a5116269e900e3ddba8f068a62adda59" dependencies = [ - "rustls 0.20.7", + "rustls 0.20.8", "tokio", "webpki 0.22.0", ] [[package]] name = "tokio-stream" -version = "0.1.11" +version = "0.1.12" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d660770404473ccd7bc9f8b28494a811bc18542b915c0855c51e8f419d5223ce" +checksum = "8fb52b74f05dbf495a8fba459fdc331812b96aa086d9eb78101fa0d4569c3313" dependencies = [ "futures-core", "pin-project-lite 0.2.9", "tokio", - "tokio-util 0.7.4", + "tokio-util 0.7.7", ] [[package]] @@ -7848,7 +8256,7 @@ checksum = "f714dd15bead90401d77e04243611caec13726c2408afd5b31901dfcdcb3b181" dependencies = [ "futures-util", "log", - "rustls 0.20.7", + "rustls 0.20.8", "tokio", "tokio-rustls 0.23.4", "tungstenite 0.17.3", @@ -7874,24 +8282,25 @@ dependencies = [ [[package]] name = "tokio-util" -version = "0.7.4" +version = "0.7.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0bb2e075f03b3d66d8d8785356224ba688d2906a371015e225beeb65ca92c740" +checksum = "5427d89453009325de0d8f342c9490009f76e999cb7672d77e46267448f7e6b2" dependencies = [ "bytes", "futures-core", "futures-io", "futures-sink", "pin-project-lite 0.2.9", + "slab", "tokio", "tracing", ] [[package]] name = "toml" -version = "0.5.10" +version = "0.5.11" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"1333c76748e868a4d9d1017b5ab53171dfd095f70c712fdb4653a406547f598f" +checksum = "f4f7f0dd8d50a853a531c426359045b1998f04219d88799810762cd4ad314234" dependencies = [ "serde", ] @@ -7964,7 +8373,7 @@ checksum = "4017f8f45139870ca7e672686113917c71c7a6e02d4924eda67186083c03081a" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -8032,7 +8441,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ebeb235c5847e2f82cfe0f07eb971d1e5f6804b18dac2ae16349cc604380f82f" dependencies = [ "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -8056,7 +8465,7 @@ version = "0.4.0" dependencies = [ "darling 0.13.4", "quote", - "syn", + "syn 1.0.109", ] [[package]] @@ -8087,7 +8496,7 @@ dependencies = [ "lazy_static", "rand 0.8.5", "smallvec", - "socket2", + "socket2 0.4.9", "thiserror", "tinyvec", "tokio", @@ -8127,7 +8536,7 @@ version = "0.14.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a0b2d8558abd2e276b0a8df5c05a2ec762609344191e5fd23e292c910e9165b5" dependencies = [ - "base64", + "base64 0.13.1", "byteorder", "bytes", "http", @@ -8146,14 +8555,14 @@ version = "0.17.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e27992fd6a8c29ee7eef28fc78349aa244134e10ad447ce3b9f0ac0ed0fa4ce0" dependencies = [ - "base64", + "base64 0.13.1", "byteorder", "bytes", "http", "httparse", "log", "rand 0.8.5", - "rustls 0.20.7", + "rustls 0.20.8", "sha-1 0.10.1", "thiserror", "url", @@ -8168,7 +8577,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4712ee30d123ec7ae26d1e1b218395a16c87cdbaf4b3925d170d684af62ea5e8" dependencies = [ "async-trait", - "base64", + "base64 0.13.1", "futures", "log", "md-5", @@ -8237,7 +8646,7 @@ dependencies = [ "slog", "smallvec", "state_processing", - "superstruct", + "superstruct 0.6.0", "swap_or_not_shuffle", "tempfile", "test_random_derive", @@ -8246,12 +8655,6 @@ dependencies = [ "tree_hash_derive", ] -[[package]] -name 
= "ucd-trie" -version = "0.1.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9e79c4d996edb816c91e4308506774452e55e95c3c9de07b6729e17e15a5ef81" - [[package]] name = "uint" version = "0.9.5" @@ -8282,15 +8685,15 @@ dependencies = [ [[package]] name = "unicode-bidi" -version = "0.3.8" +version = "0.3.13" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "099b7128301d285f79ddd55b9a83d5e6b9e97c92e0ea0daebee7263e932de992" +checksum = "92888ba5573ff080736b3648696b70cafad7d250551175acbaa4e0385b3e1460" [[package]] name = "unicode-ident" -version = "1.0.6" +version = "1.0.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "84a22b9f218b40614adcb3f4ff08b703773ad44fa9423e4e0d346d5db86e4ebc" +checksum = "e5464a87b239f13a63a501f2701565754bae92d243d4bb7eb12f6d57d2269bf4" [[package]] name = "unicode-normalization" @@ -8323,6 +8726,16 @@ dependencies = [ "subtle", ] +[[package]] +name = "universal-hash" +version = "0.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7d3160b73c9a19f7e2939a2fdad446c57c1bbbbf4d919d3213ff1267a580d8b5" +dependencies = [ + "crypto-common", + "subtle", +] + [[package]] name = "unsigned-varint" version = "0.6.0" @@ -8352,6 +8765,11 @@ checksum = "a156c684c91ea7d62626509bce3cb4e1d9ed5c4d978f7b4352658f96a4c26b4a" [[package]] name = "unused_port" version = "0.1.0" +dependencies = [ + "lazy_static", + "lru_cache", + "parking_lot 0.12.1", +] [[package]] name = "url" @@ -8382,9 +8800,9 @@ dependencies = [ [[package]] name = "uuid" -version = "1.2.2" +version = "1.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "422ee0de9031b5b948b97a8fc04e3aa35230001a722ddd27943e0be31564ce4c" +checksum = "1674845326ee10d37ca60470760d4288a6f80f304007d92e5c53bab78c9cfd79" dependencies = [ "getrandom 0.2.8", ] @@ -8508,12 +8926,11 @@ checksum = "9d5b2c62b4012a3e1eca5a7e077d13b3bf498c4073e33ccd58626607748ceeca" [[package]] name 
= "walkdir" -version = "2.3.2" +version = "2.3.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "808cf2735cd4b6866113f648b791c6adc5714537bc222d9347bb203386ffda56" +checksum = "36df944cda56c7d8d8b7496af378e6b16de9284591917d307c9b4d313c44e698" dependencies = [ "same-file", - "winapi", "winapi-util", ] @@ -8595,9 +9012,9 @@ checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423" [[package]] name = "wasm-bindgen" -version = "0.2.83" +version = "0.2.84" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "eaf9f5aceeec8be17c128b2e93e031fb8a4d469bb9c4ae2d7dc1888b26887268" +checksum = "31f8dcbc21f30d9b8f2ea926ecb58f6b91192c17e9d33594b3df58b2007ca53b" dependencies = [ "cfg-if", "wasm-bindgen-macro", @@ -8605,24 +9022,24 @@ dependencies = [ [[package]] name = "wasm-bindgen-backend" -version = "0.2.83" +version = "0.2.84" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4c8ffb332579b0557b52d268b91feab8df3615f265d5270fec2a8c95b17c1142" +checksum = "95ce90fd5bcc06af55a641a86428ee4229e44e07033963a2290a8e241607ccb9" dependencies = [ "bumpalo", "log", "once_cell", "proc-macro2", "quote", - "syn", + "syn 1.0.109", "wasm-bindgen-shared", ] [[package]] name = "wasm-bindgen-futures" -version = "0.4.33" +version = "0.4.34" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "23639446165ca5a5de86ae1d8896b737ae80319560fbaa4c2887b7da6e7ebd7d" +checksum = "f219e0d211ba40266969f6dbdd90636da12f75bee4fc9d6c23d1260dadb51454" dependencies = [ "cfg-if", "js-sys", @@ -8632,9 +9049,9 @@ dependencies = [ [[package]] name = "wasm-bindgen-macro" -version = "0.2.83" +version = "0.2.84" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "052be0f94026e6cbc75cdefc9bae13fd6052cdcaf532fa6c45e7ae33a1e6c810" +checksum = "4c21f77c0bedc37fd5dc21f897894a5ca01e7bb159884559461862ae90c0b4c5" dependencies = [ "quote", "wasm-bindgen-macro-support", @@ -8642,28 
+9059,28 @@ dependencies = [ [[package]] name = "wasm-bindgen-macro-support" -version = "0.2.83" +version = "0.2.84" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "07bc0c051dc5f23e307b13285f9d75df86bfdf816c5721e573dec1f9b8aa193c" +checksum = "2aff81306fcac3c7515ad4e177f521b5c9a15f2b08f4e32d823066102f35a5f6" dependencies = [ "proc-macro2", "quote", - "syn", + "syn 1.0.109", "wasm-bindgen-backend", "wasm-bindgen-shared", ] [[package]] name = "wasm-bindgen-shared" -version = "0.2.83" +version = "0.2.84" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1c38c045535d93ec4f0b4defec448e4291638ee608530863b1e2ba115d4fff7f" +checksum = "0046fef7e28c3804e5e38bfa31ea2a0f73905319b677e57ebe37e49358989b5d" [[package]] name = "wasm-bindgen-test" -version = "0.3.33" +version = "0.3.34" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "09d2fff962180c3fadf677438054b1db62bee4aa32af26a45388af07d1287e1d" +checksum = "6db36fc0f9fb209e88fb3642590ae0205bb5a56216dabd963ba15879fe53a30b" dependencies = [ "console_error_panic_hook", "js-sys", @@ -8675,14 +9092,27 @@ dependencies = [ [[package]] name = "wasm-bindgen-test-macro" -version = "0.3.33" +version = "0.3.34" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4683da3dfc016f704c9f82cf401520c4f1cb3ee440f7f52b3d6ac29506a49ca7" +checksum = "0734759ae6b3b1717d661fe4f016efcfb9828f5edb4520c18eaee05af3b43be9" dependencies = [ "proc-macro2", "quote", ] +[[package]] +name = "wasm-streams" +version = "0.2.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6bbae3363c08332cadccd13b67db371814cd214c2524020932f0804b8cf7c078" +dependencies = [ + "futures-util", + "js-sys", + "wasm-bindgen", + "wasm-bindgen-futures", + "web-sys", +] + [[package]] name = "wasm-timer" version = "0.2.5" @@ -8698,11 +9128,44 @@ dependencies = [ "web-sys", ] +[[package]] +name = "watch" +version = "0.1.0" +dependencies = [ + "axum", 
+ "beacon_chain", + "beacon_node", + "bls", + "byteorder", + "clap", + "diesel", + "diesel_migrations", + "env_logger 0.9.3", + "eth2", + "hex", + "http_api", + "hyper", + "log", + "network", + "r2d2", + "rand 0.7.3", + "reqwest", + "serde", + "serde_json", + "serde_yaml", + "testcontainers", + "tokio", + "tokio-postgres", + "types", + "unused_port", + "url", +] + [[package]] name = "web-sys" -version = "0.3.60" +version = "0.3.61" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bcda906d8be16e728fd5adc5b729afad4e444e106ab28cd1c7256e54fa61510f" +checksum = "e33b99f4b23ba3eec1a53ac264e35a755f00e966e0065077d6027c0f575b0b97" dependencies = [ "js-sys", "wasm-bindgen", @@ -8715,7 +9178,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "44f258e254752d210b84fe117b31f1e3cc9cbf04c0d747eb7f8cf7cf5e370f6d" dependencies = [ "arrayvec", - "base64", + "base64 0.13.1", "bytes", "derive_more", "ethabi 16.0.0", @@ -8765,6 +9228,8 @@ dependencies = [ "eth2_network_config", "exit-future", "futures", + "lazy_static", + "parking_lot 0.12.1", "reqwest", "serde", "serde_derive", @@ -8835,7 +9300,7 @@ dependencies = [ "sha2 0.10.6", "stun", "thiserror", - "time 0.3.17", + "time 0.3.20", "tokio", "turn", "url", @@ -8867,22 +9332,22 @@ dependencies = [ [[package]] name = "webrtc-dtls" -version = "0.7.0" +version = "0.7.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7021987ae0a2ed6c8cd33f68e98e49bb6e74ffe9543310267b48a1bbe3900e5f" +checksum = "942be5bd85f072c3128396f6e5a9bfb93ca8c1939ded735d177b7bcba9a13d05" dependencies = [ "aes 0.6.0", - "aes-gcm 0.8.0", + "aes-gcm 0.10.1", "async-trait", "bincode", "block-modes", "byteorder", "ccm", "curve25519-dalek 3.2.0", - "der-parser 8.1.0", + "der-parser 8.2.0", "elliptic-curve", "hkdf", - "hmac 0.10.1", + "hmac 0.12.1", "log", "oid-registry 0.6.1", "p256", @@ -8894,23 +9359,23 @@ dependencies = [ "rustls 0.19.1", "sec1", "serde", - "sha-1 0.9.8", - "sha2 
0.9.9", + "sha1", + "sha2 0.10.6", "signature", "subtle", "thiserror", "tokio", "webpki 0.21.4", "webrtc-util", - "x25519-dalek 2.0.0-pre.1", + "x25519-dalek 2.0.0-rc.2", "x509-parser 0.13.2", ] [[package]] name = "webrtc-ice" -version = "0.9.0" +version = "0.9.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "494483fbb2f5492620871fdc78b084aed8807377f6e3fe88b2e49f0a9c9c41d7" +checksum = "465a03cc11e9a7d7b4f9f99870558fe37a102b65b93f8045392fef7c67b39e80" dependencies = [ "arc-swap", "async-trait", @@ -8924,7 +9389,7 @@ dependencies = [ "tokio", "turn", "url", - "uuid 1.2.2", + "uuid 1.3.0", "waitgroup", "webrtc-mdns", "webrtc-util", @@ -8937,7 +9402,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f08dfd7a6e3987e255c4dbe710dde5d94d0f0574f8a21afa95d171376c143106" dependencies = [ "log", - "socket2", + "socket2 0.4.9", "thiserror", "tokio", "webrtc-util", @@ -9021,20 +9486,11 @@ dependencies = [ "winapi", ] -[[package]] -name = "wepoll-ffi" -version = "0.1.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d743fdedc5c64377b5fc2bc036b01c7fd642205a0d96356034ae3404d49eb7fb" -dependencies = [ - "cc", -] - [[package]] name = "which" -version = "4.3.0" +version = "4.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1c831fbbee9e129a8cf93e7747a82da9d95ba8e16621cae60ec2cdc849bacb7b" +checksum = "2441c784c52b289a054b7201fc93253e288f094e2f4be9058343127c4226a269" dependencies = [ "either", "libc", @@ -9097,6 +9553,15 @@ dependencies = [ "windows_x86_64_msvc 0.34.0", ] +[[package]] +name = "windows" +version = "0.46.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cdacb41e6a96a052c6cb63a144f24900236121c6f63f4f8219fef5977ecb0c25" +dependencies = [ + "windows-targets", +] + [[package]] name = "windows-acl" version = "0.3.0" @@ -9116,19 +9581,43 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"5a3e1820f08b8513f676f7ab6c1f99ff312fb97b553d30ff4dd86f9f15728aa7" dependencies = [ "windows_aarch64_gnullvm", - "windows_aarch64_msvc 0.42.0", - "windows_i686_gnu 0.42.0", - "windows_i686_msvc 0.42.0", - "windows_x86_64_gnu 0.42.0", + "windows_aarch64_msvc 0.42.2", + "windows_i686_gnu 0.42.2", + "windows_i686_msvc 0.42.2", + "windows_x86_64_gnu 0.42.2", + "windows_x86_64_gnullvm", + "windows_x86_64_msvc 0.42.2", +] + +[[package]] +name = "windows-sys" +version = "0.45.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "75283be5efb2831d37ea142365f009c02ec203cd29a3ebecbc093d52315b66d0" +dependencies = [ + "windows-targets", +] + +[[package]] +name = "windows-targets" +version = "0.42.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8e5180c00cd44c9b1c88adb3693291f1cd93605ded80c250a75d472756b4d071" +dependencies = [ + "windows_aarch64_gnullvm", + "windows_aarch64_msvc 0.42.2", + "windows_i686_gnu 0.42.2", + "windows_i686_msvc 0.42.2", + "windows_x86_64_gnu 0.42.2", "windows_x86_64_gnullvm", - "windows_x86_64_msvc 0.42.0", + "windows_x86_64_msvc 0.42.2", ] [[package]] name = "windows_aarch64_gnullvm" -version = "0.42.0" +version = "0.42.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "41d2aa71f6f0cbe00ae5167d90ef3cfe66527d6f613ca78ac8024c3ccab9a19e" +checksum = "597a5118570b68bc08d8d59125332c54f1ba9d9adeedeef5b99b02ba2b0698f8" [[package]] name = "windows_aarch64_msvc" @@ -9138,9 +9627,9 @@ checksum = "17cffbe740121affb56fad0fc0e421804adf0ae00891205213b5cecd30db881d" [[package]] name = "windows_aarch64_msvc" -version = "0.42.0" +version = "0.42.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dd0f252f5a35cac83d6311b2e795981f5ee6e67eb1f9a7f64eb4500fbc4dcdb4" +checksum = "e08e8864a60f06ef0d0ff4ba04124db8b0fb3be5776a5cd47641e942e58c4d43" [[package]] name = "windows_i686_gnu" @@ -9150,9 +9639,9 @@ checksum = 
"2564fde759adb79129d9b4f54be42b32c89970c18ebf93124ca8870a498688ed" [[package]] name = "windows_i686_gnu" -version = "0.42.0" +version = "0.42.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fbeae19f6716841636c28d695375df17562ca208b2b7d0dc47635a50ae6c5de7" +checksum = "c61d927d8da41da96a81f029489353e68739737d3beca43145c8afec9a31a84f" [[package]] name = "windows_i686_msvc" @@ -9162,9 +9651,9 @@ checksum = "9cd9d32ba70453522332c14d38814bceeb747d80b3958676007acadd7e166956" [[package]] name = "windows_i686_msvc" -version = "0.42.0" +version = "0.42.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "84c12f65daa39dd2babe6e442988fc329d6243fdce47d7d2d155b8d874862246" +checksum = "44d840b6ec649f480a41c8d80f9c65108b92d89345dd94027bfe06ac444d1060" [[package]] name = "windows_x86_64_gnu" @@ -9174,15 +9663,15 @@ checksum = "cfce6deae227ee8d356d19effc141a509cc503dfd1f850622ec4b0f84428e1f4" [[package]] name = "windows_x86_64_gnu" -version = "0.42.0" +version = "0.42.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bf7b1b21b5362cbc318f686150e5bcea75ecedc74dd157d874d754a2ca44b0ed" +checksum = "8de912b8b8feb55c064867cf047dda097f92d51efad5b491dfb98f6bbb70cb36" [[package]] name = "windows_x86_64_gnullvm" -version = "0.42.0" +version = "0.42.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "09d525d2ba30eeb3297665bd434a54297e4170c7f1a44cad4ef58095b4cd2028" +checksum = "26d41b46a36d453748aedef1486d5c7a85db22e56aff34643984ea85514e94a3" [[package]] name = "windows_x86_64_msvc" @@ -9192,9 +9681,9 @@ checksum = "d19538ccc21819d01deaf88d6a17eae6596a12e9aafdbb97916fb49896d89de9" [[package]] name = "windows_x86_64_msvc" -version = "0.42.0" +version = "0.42.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f40009d85759725a34da6d89a94e63d7bdc50a862acf0dbc7c8e488f1edcb6f5" +checksum = 
"9aec5da331524158c6d1a4ac0ab1541149c0b9505fde06423b02f5ef0106b9f0" [[package]] name = "winreg" @@ -9207,13 +9696,14 @@ dependencies = [ [[package]] name = "ws_stream_wasm" -version = "0.7.3" +version = "0.7.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "47ca1ab42f5afed7fc332b22b6e932ca5414b209465412c8cdf0ad23bc0de645" +checksum = "7999f5f4217fe3818726b66257a4475f71e74ffd190776ad053fa159e50737f5" dependencies = [ "async_io_stream", "futures", "js-sys", + "log", "pharos", "rustc_version 0.4.0", "send_wrapper", @@ -9251,12 +9741,13 @@ dependencies = [ [[package]] name = "x25519-dalek" -version = "2.0.0-pre.1" +version = "2.0.0-rc.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e5da623d8af10a62342bcbbb230e33e58a63255a58012f8653c578e54bab48df" +checksum = "fabd6e16dd08033932fc3265ad4510cc2eab24656058a6dcb107ffe274abcc95" dependencies = [ - "curve25519-dalek 3.2.0", + "curve25519-dalek 4.0.0-rc.2", "rand_core 0.6.4", + "serde", "zeroize", ] @@ -9267,16 +9758,16 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9fb9bace5b5589ffead1afb76e43e34cff39cd0f3ce7e170ae0c29e53b88eb1c" dependencies = [ "asn1-rs 0.3.1", - "base64", + "base64 0.13.1", "data-encoding", "der-parser 7.0.0", "lazy_static", - "nom 7.1.2", + "nom 7.1.3", "oid-registry 0.4.0", "ring", "rusticata-macros", "thiserror", - "time 0.3.17", + "time 0.3.20", ] [[package]] @@ -9285,16 +9776,16 @@ version = "0.14.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e0ecbeb7b67ce215e40e3cc7f2ff902f94a223acf44995934763467e7b1febc8" dependencies = [ - "asn1-rs 0.5.1", - "base64", + "asn1-rs 0.5.2", + "base64 0.13.1", "data-encoding", - "der-parser 8.1.0", + "der-parser 8.2.0", "lazy_static", - "nom 7.1.2", + "nom 7.1.3", "oid-registry 0.6.1", "rusticata-macros", "thiserror", - "time 0.3.17", + "time 0.3.20", ] [[package]] @@ -9341,28 +9832,27 @@ version = "0.5.1" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "aed2e7a52e3744ab4d0c05c20aa065258e84c49fd4226f5191b2ed29712710b4" dependencies = [ - "time 0.3.17", + "time 0.3.20", ] [[package]] name = "zeroize" -version = "1.5.7" +version = "1.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c394b5bd0c6f669e7275d9c20aa90ae064cb22e75a1cad54e1b34088034b149f" +checksum = "2a0956f1ba7c7909bfb66c2e9e4124ab6f6482560f6628b5aaeba39207c9aad9" dependencies = [ "zeroize_derive", ] [[package]] name = "zeroize_derive" -version = "1.3.3" +version = "1.4.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "44bf07cb3e50ea2003396695d58bf46bc9887a1f362260446fad6bc4e79bd36c" +checksum = "ce36e65b0d2999d2aafac989fb249189a141aee1f53c612c1f37d72631959f69" dependencies = [ "proc-macro2", "quote", - "syn", - "synstructure", + "syn 2.0.13", ] [[package]] diff --git a/Cargo.toml b/Cargo.toml index b35cbbb89cf..66b2b4e2e9c 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -87,12 +87,22 @@ members = [ "validator_client", "validator_client/slashing_protection", + + "watch", ] +resolver = "2" [patch] [patch.crates-io] -fixed-hash = { git = "https://github.com/paritytech/parity-common", rev="df638ab0885293d21d656dc300d39236b69ce57d" } warp = { git = "https://github.com/macladson/warp", rev="7e75acc368229a46a236a8c991bf251fe7fe50ef" } +arbitrary = { git = "https://github.com/michaelsproul/arbitrary", rev="f002b99989b561ddce62e4cf2887b0f8860ae991" } + +[patch."https://github.com/ralexstokes/mev-rs"] +mev-rs = { git = "https://github.com/ralexstokes//mev-rs", rev = "7813d4a4a564e0754e9aaab2d95520ba437c3889" } +[patch."https://github.com/ralexstokes/ethereum-consensus"] +ethereum-consensus = { git = "https://github.com/ralexstokes//ethereum-consensus", rev = "9b0ee0a8a45b968c8df5e7e64ea1c094e16f053d" } +[patch."https://github.com/ralexstokes/ssz-rs"] +ssz-rs = { git = "https://github.com/ralexstokes//ssz-rs", rev = 
"adf1a0b14cef90b9536f28ef89da1fab316465e1" } [profile.maxperf] inherits = "release" diff --git a/Dockerfile b/Dockerfile index 72423b17c68..0d268c7e1aa 100644 --- a/Dockerfile +++ b/Dockerfile @@ -1,4 +1,4 @@ -FROM rust:1.62.1-bullseye AS builder +FROM rust:1.68.2-bullseye AS builder RUN apt-get update && apt-get -y upgrade && apt-get install -y cmake libclang-dev protobuf-compiler COPY . lighthouse ARG FEATURES diff --git a/Makefile b/Makefile index 33077a6c930..89362d12d82 100644 --- a/Makefile +++ b/Makefile @@ -14,28 +14,48 @@ BUILD_PATH_AARCH64 = "target/$(AARCH64_TAG)/release" PINNED_NIGHTLY ?= nightly CLIPPY_PINNED_NIGHTLY=nightly-2022-05-19 +# List of features to use when building natively. Can be overriden via the environment. +# No jemalloc on Windows +ifeq ($(OS),Windows_NT) + FEATURES?= +else + FEATURES?=jemalloc +endif + # List of features to use when cross-compiling. Can be overridden via the environment. -CROSS_FEATURES ?= gnosis,slasher-lmdb,slasher-mdbx +CROSS_FEATURES ?= gnosis,slasher-lmdb,slasher-mdbx,jemalloc # Cargo profile for Cross builds. Default is for local builds, CI uses an override. CROSS_PROFILE ?= release +# List of features to use when running EF tests. +EF_TEST_FEATURES ?= + # Cargo profile for regular builds. PROFILE ?= release # List of all hard forks. This list is used to set env variables for several tests so that # they run for different forks. -FORKS=phase0 altair merge +FORKS=phase0 altair merge capella + +# Extra flags for Cargo +CARGO_INSTALL_EXTRA_FLAGS?= # Builds the Lighthouse binary in release (optimized). # # Binaries will most likely be found in `./target/release` install: - cargo install --path lighthouse --force --locked --features "$(FEATURES)" --profile "$(PROFILE)" + cargo install --path lighthouse --force --locked \ + --features "$(FEATURES)" \ + --profile "$(PROFILE)" \ + $(CARGO_INSTALL_EXTRA_FLAGS) # Builds the lcli binary in release (optimized). 
install-lcli: - cargo install --path lcli --force --locked --features "$(FEATURES)" --profile "$(PROFILE)" + cargo install --path lcli --force --locked \ + --features "$(FEATURES)" \ + --profile "$(PROFILE)" \ + $(CARGO_INSTALL_EXTRA_FLAGS) # The following commands use `cross` to build a cross-compile. # @@ -101,23 +121,19 @@ cargo-fmt: check-benches: cargo check --workspace --benches -# Typechecks consensus code *without* allowing deprecated legacy arithmetic or metrics. -check-consensus: - cargo check -p state_processing --no-default-features - # Runs only the ef-test vectors. run-ef-tests: rm -rf $(EF_TESTS)/.accessed_file_log.txt - cargo test --release -p ef_tests --features "ef_tests" - cargo test --release -p ef_tests --features "ef_tests,fake_crypto" - cargo test --release -p ef_tests --features "ef_tests,milagro" + cargo test --release -p ef_tests --features "ef_tests,$(EF_TEST_FEATURES)" + cargo test --release -p ef_tests --features "ef_tests,$(EF_TEST_FEATURES),fake_crypto" + cargo test --release -p ef_tests --features "ef_tests,$(EF_TEST_FEATURES),milagro" ./$(EF_TESTS)/check_all_files_accessed.py $(EF_TESTS)/.accessed_file_log.txt $(EF_TESTS)/consensus-spec-tests # Run the tests in the `beacon_chain` crate for all known forks. test-beacon-chain: $(patsubst %,test-beacon-chain-%,$(FORKS)) test-beacon-chain-%: - env FORK_NAME=$* cargo test --release --features fork_from_env -p beacon_chain + env FORK_NAME=$* cargo test --release --features fork_from_env,slasher/lmdb -p beacon_chain # Run the tests in the `operation_pool` crate for all known forks. test-op-pool: $(patsubst %,test-op-pool-%,$(FORKS)) @@ -160,7 +176,8 @@ lint: -A clippy::from-over-into \ -A clippy::upper-case-acronyms \ -A clippy::vec-init-then-push \ - -A clippy::question-mark + -A clippy::question-mark \ + -A clippy::uninlined-format-args nightly-lint: cp .github/custom/clippy.toml . 
@@ -185,7 +202,7 @@ arbitrary-fuzz: # Runs cargo audit (Audit Cargo.lock files for crates with security vulnerabilities reported to the RustSec Advisory Database) audit: cargo install --force cargo-audit - cargo audit --ignore RUSTSEC-2020-0071 --ignore RUSTSEC-2020-0159 + cargo audit --ignore RUSTSEC-2020-0071 # Runs `cargo vendor` to make sure dependencies can be vendored for packaging, reproducibility and archival purpose. vendor: diff --git a/README.md b/README.md index 859d5c4c63a..3565882d6e7 100644 --- a/README.md +++ b/README.md @@ -66,7 +66,7 @@ of the Lighthouse book. The best place for discussion is the [Lighthouse Discord server](https://discord.gg/cyAszAh). -Sign up to the [Lighthouse Development Updates](https://eepurl.com/dh9Lvb/) mailing list for email +Sign up to the [Lighthouse Development Updates](https://eepurl.com/dh9Lvb) mailing list for email notifications about releases, network status and other important information. Encrypt sensitive messages using our [PGP diff --git a/beacon_node/Cargo.toml b/beacon_node/Cargo.toml index cca8cc969ef..95f145a557d 100644 --- a/beacon_node/Cargo.toml +++ b/beacon_node/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "beacon_node" -version = "3.4.0" +version = "4.1.0" authors = ["Paul Hauner ", "Age Manning BeaconChain { + pub fn compute_attestation_rewards( + &self, + epoch: Epoch, + validators: Vec, + log: Logger, + ) -> Result { + debug!(log, "computing attestation rewards"; "epoch" => epoch, "validator_count" => validators.len()); + + // Get state + let spec = &self.spec; + + let state_slot = (epoch + 1).end_slot(T::EthSpec::slots_per_epoch()); + + let state_root = self + .state_root_at_slot(state_slot)? + .ok_or(BeaconChainError::NoStateForSlot(state_slot))?; + + let mut state = self + .get_state(&state_root, Some(state_slot))? 
+ .ok_or(BeaconChainError::MissingBeaconState(state_root))?; + + // Calculate ideal_rewards + let participation_cache = ParticipationCache::new(&state, spec)?; + + let previous_epoch = state.previous_epoch(); + + let mut ideal_rewards_hashmap = HashMap::new(); + + for flag_index in 0..PARTICIPATION_FLAG_WEIGHTS.len() { + let weight = get_flag_weight(flag_index) + .map_err(|_| BeaconChainError::AttestationRewardsError)?; + + let unslashed_participating_indices = participation_cache + .get_unslashed_participating_indices(flag_index, previous_epoch)?; + + let unslashed_participating_balance = + unslashed_participating_indices + .total_balance() + .map_err(|_| BeaconChainError::AttestationRewardsError)?; + + let unslashed_participating_increments = + unslashed_participating_balance.safe_div(spec.effective_balance_increment)?; + + let total_active_balance = participation_cache.current_epoch_total_active_balance(); + + let active_increments = + total_active_balance.safe_div(spec.effective_balance_increment)?; + + let base_reward_per_increment = + BaseRewardPerIncrement::new(total_active_balance, spec)?; + + for effective_balance_eth in 0..=32 { + let effective_balance = + effective_balance_eth.safe_mul(spec.effective_balance_increment)?; + let base_reward = + effective_balance_eth.safe_mul(base_reward_per_increment.as_u64())?; + + let penalty = -(base_reward.safe_mul(weight)?.safe_div(WEIGHT_DENOMINATOR)? as i64); + + let reward_numerator = base_reward + .safe_mul(weight)? + .safe_mul(unslashed_participating_increments)?; + + let ideal_reward = reward_numerator + .safe_div(active_increments)? 
+ .safe_div(WEIGHT_DENOMINATOR)?; + if !state.is_in_inactivity_leak(previous_epoch, spec) { + ideal_rewards_hashmap + .insert((flag_index, effective_balance), (ideal_reward, penalty)); + } else { + ideal_rewards_hashmap.insert((flag_index, effective_balance), (0, penalty)); + } + } + } + + // Calculate total_rewards + let mut total_rewards: Vec = Vec::new(); + + let validators = if validators.is_empty() { + participation_cache.eligible_validator_indices().to_vec() + } else { + validators + .into_iter() + .map(|validator| match validator { + ValidatorId::Index(i) => Ok(i as usize), + ValidatorId::PublicKey(pubkey) => state + .get_validator_index(&pubkey)? + .ok_or(BeaconChainError::ValidatorPubkeyUnknown(pubkey)), + }) + .collect::, _>>()? + }; + + for validator_index in &validators { + let eligible = state.is_eligible_validator(previous_epoch, *validator_index)?; + let mut head_reward = 0u64; + let mut target_reward = 0i64; + let mut source_reward = 0i64; + + if eligible { + let effective_balance = state.get_effective_balance(*validator_index)?; + + for flag_index in 0..PARTICIPATION_FLAG_WEIGHTS.len() { + let (ideal_reward, penalty) = ideal_rewards_hashmap + .get(&(flag_index, effective_balance)) + .ok_or(BeaconChainError::AttestationRewardsError)?; + let voted_correctly = participation_cache + .get_unslashed_participating_indices(flag_index, previous_epoch) + .map_err(|_| BeaconChainError::AttestationRewardsError)? 
+ .contains(*validator_index) + .map_err(|_| BeaconChainError::AttestationRewardsError)?; + if voted_correctly { + if flag_index == TIMELY_HEAD_FLAG_INDEX { + head_reward += ideal_reward; + } else if flag_index == TIMELY_TARGET_FLAG_INDEX { + target_reward += *ideal_reward as i64; + } else if flag_index == TIMELY_SOURCE_FLAG_INDEX { + source_reward += *ideal_reward as i64; + } + } else if flag_index == TIMELY_HEAD_FLAG_INDEX { + head_reward = 0; + } else if flag_index == TIMELY_TARGET_FLAG_INDEX { + target_reward = *penalty; + } else if flag_index == TIMELY_SOURCE_FLAG_INDEX { + source_reward = *penalty; + } + } + } + total_rewards.push(TotalAttestationRewards { + validator_index: *validator_index as u64, + head: head_reward, + target: target_reward, + source: source_reward, + }); + } + + // Convert hashmap to vector + let mut ideal_rewards: Vec = ideal_rewards_hashmap + .iter() + .map( + |((flag_index, effective_balance), (ideal_reward, _penalty))| { + (flag_index, effective_balance, ideal_reward) + }, + ) + .fold( + HashMap::new(), + |mut acc, (flag_index, &effective_balance, ideal_reward)| { + let entry = acc + .entry(effective_balance) + .or_insert(IdealAttestationRewards { + effective_balance, + head: 0, + target: 0, + source: 0, + }); + match *flag_index { + TIMELY_SOURCE_FLAG_INDEX => entry.source += ideal_reward, + TIMELY_TARGET_FLAG_INDEX => entry.target += ideal_reward, + TIMELY_HEAD_FLAG_INDEX => entry.head += ideal_reward, + _ => {} + } + acc + }, + ) + .into_values() + .collect::>(); + ideal_rewards.sort_by(|a, b| a.effective_balance.cmp(&b.effective_balance)); + + Ok(StandardAttestationRewards { + ideal_rewards, + total_rewards, + }) + } +} diff --git a/beacon_node/beacon_chain/src/attestation_verification.rs b/beacon_node/beacon_chain/src/attestation_verification.rs index b60ce7efe5c..04f601fad97 100644 --- a/beacon_node/beacon_chain/src/attestation_verification.rs +++ b/beacon_node/beacon_chain/src/attestation_verification.rs @@ -27,6 +27,11 @@ //! 
▼ //! impl VerifiedAttestation //! ``` + +// Ignore this lint for `AttestationSlashInfo` which is of comparable size to the non-error types it +// is returned alongside. +#![allow(clippy::result_large_err)] + mod batch; use crate::{ diff --git a/beacon_node/beacon_chain/src/beacon_block_reward.rs b/beacon_node/beacon_chain/src/beacon_block_reward.rs new file mode 100644 index 00000000000..786402c9978 --- /dev/null +++ b/beacon_node/beacon_chain/src/beacon_block_reward.rs @@ -0,0 +1,237 @@ +use crate::{BeaconChain, BeaconChainError, BeaconChainTypes}; +use eth2::lighthouse::StandardBlockReward; +use operation_pool::RewardCache; +use safe_arith::SafeArith; +use slog::error; +use state_processing::{ + common::{ + altair, get_attestation_participation_flag_indices, get_attesting_indices_from_state, + }, + per_block_processing::{ + altair::sync_committee::compute_sync_aggregate_rewards, get_slashable_indices, + }, +}; +use store::{ + consts::altair::{PARTICIPATION_FLAG_WEIGHTS, PROPOSER_WEIGHT, WEIGHT_DENOMINATOR}, + RelativeEpoch, +}; +use types::{AbstractExecPayload, BeaconBlockRef, BeaconState, BeaconStateError, Hash256}; + +type BeaconBlockSubRewardValue = u64; + +impl BeaconChain { + pub fn compute_beacon_block_reward>( + &self, + block: BeaconBlockRef<'_, T::EthSpec, Payload>, + block_root: Hash256, + state: &mut BeaconState, + ) -> Result { + if block.slot() != state.slot() { + return Err(BeaconChainError::BlockRewardSlotError); + } + + state.build_committee_cache(RelativeEpoch::Previous, &self.spec)?; + state.build_committee_cache(RelativeEpoch::Current, &self.spec)?; + + let proposer_index = block.proposer_index(); + + let sync_aggregate_reward = + self.compute_beacon_block_sync_aggregate_reward(block, state)?; + + let proposer_slashing_reward = self + .compute_beacon_block_proposer_slashing_reward(block, state) + .map_err(|e| { + error!( + self.log, + "Error calculating proposer slashing reward"; + "error" => ?e + ); + BeaconChainError::BlockRewardError + 
})?; + + let attester_slashing_reward = self + .compute_beacon_block_attester_slashing_reward(block, state) + .map_err(|e| { + error!( + self.log, + "Error calculating attester slashing reward"; + "error" => ?e + ); + BeaconChainError::BlockRewardError + })?; + + let block_attestation_reward = if let BeaconState::Base(_) = state { + self.compute_beacon_block_attestation_reward_base(block, block_root, state) + .map_err(|e| { + error!( + self.log, + "Error calculating base block attestation reward"; + "error" => ?e + ); + BeaconChainError::BlockRewardAttestationError + })? + } else { + self.compute_beacon_block_attestation_reward_altair(block, state) + .map_err(|e| { + error!( + self.log, + "Error calculating altair block attestation reward"; + "error" => ?e + ); + BeaconChainError::BlockRewardAttestationError + })? + }; + + let total_reward = sync_aggregate_reward + .safe_add(proposer_slashing_reward)? + .safe_add(attester_slashing_reward)? + .safe_add(block_attestation_reward)?; + + Ok(StandardBlockReward { + proposer_index, + total: total_reward, + attestations: block_attestation_reward, + sync_aggregate: sync_aggregate_reward, + proposer_slashings: proposer_slashing_reward, + attester_slashings: attester_slashing_reward, + }) + } + + fn compute_beacon_block_sync_aggregate_reward>( + &self, + block: BeaconBlockRef<'_, T::EthSpec, Payload>, + state: &BeaconState, + ) -> Result { + if let Ok(sync_aggregate) = block.body().sync_aggregate() { + let (_, proposer_reward_per_bit) = compute_sync_aggregate_rewards(state, &self.spec) + .map_err(|_| BeaconChainError::BlockRewardSyncError)?; + Ok(sync_aggregate.sync_committee_bits.num_set_bits() as u64 * proposer_reward_per_bit) + } else { + Ok(0) + } + } + + fn compute_beacon_block_proposer_slashing_reward>( + &self, + block: BeaconBlockRef<'_, T::EthSpec, Payload>, + state: &BeaconState, + ) -> Result { + let mut proposer_slashing_reward = 0; + + let proposer_slashings = block.body().proposer_slashings(); + + for 
proposer_slashing in proposer_slashings { + proposer_slashing_reward.safe_add_assign( + state + .get_validator(proposer_slashing.proposer_index() as usize)? + .effective_balance + .safe_div(self.spec.whistleblower_reward_quotient)?, + )?; + } + + Ok(proposer_slashing_reward) + } + + fn compute_beacon_block_attester_slashing_reward>( + &self, + block: BeaconBlockRef<'_, T::EthSpec, Payload>, + state: &BeaconState, + ) -> Result { + let mut attester_slashing_reward = 0; + + let attester_slashings = block.body().attester_slashings(); + + for attester_slashing in attester_slashings { + for attester_index in get_slashable_indices(state, attester_slashing)? { + attester_slashing_reward.safe_add_assign( + state + .get_validator(attester_index as usize)? + .effective_balance + .safe_div(self.spec.whistleblower_reward_quotient)?, + )?; + } + } + + Ok(attester_slashing_reward) + } + + fn compute_beacon_block_attestation_reward_base>( + &self, + block: BeaconBlockRef<'_, T::EthSpec, Payload>, + block_root: Hash256, + state: &BeaconState, + ) -> Result { + // Call compute_block_reward in the base case. + // Since base does not have a sync aggregate, we only grab the attestation portion of the + // returned value + let mut reward_cache = RewardCache::default(); + let block_attestation_reward = self + .compute_block_reward(block, block_root, state, &mut reward_cache, true)? + .attestation_rewards + .total; + + Ok(block_attestation_reward) + } + + fn compute_beacon_block_attestation_reward_altair>( + &self, + block: BeaconBlockRef<'_, T::EthSpec, Payload>, + state: &mut BeaconState, + ) -> Result { + let total_active_balance = state.get_total_active_balance()?; + let base_reward_per_increment = + altair::BaseRewardPerIncrement::new(total_active_balance, &self.spec)?; + + let mut total_proposer_reward = 0; + + let proposer_reward_denominator = WEIGHT_DENOMINATOR + .safe_sub(PROPOSER_WEIGHT)? + .safe_mul(WEIGHT_DENOMINATOR)? 
+ .safe_div(PROPOSER_WEIGHT)?; + + for attestation in block.body().attestations() { + let data = &attestation.data; + let inclusion_delay = state.slot().safe_sub(data.slot)?.as_u64(); + let participation_flag_indices = get_attestation_participation_flag_indices( + state, + data, + inclusion_delay, + &self.spec, + )?; + + let attesting_indices = get_attesting_indices_from_state(state, attestation)?; + + let mut proposer_reward_numerator = 0; + for index in attesting_indices { + let index = index as usize; + for (flag_index, &weight) in PARTICIPATION_FLAG_WEIGHTS.iter().enumerate() { + let epoch_participation = + state.get_epoch_participation_mut(data.target.epoch)?; + let validator_participation = epoch_participation + .get_mut(index) + .ok_or(BeaconStateError::ParticipationOutOfBounds(index))?; + + if participation_flag_indices.contains(&flag_index) + && !validator_participation.has_flag(flag_index)? + { + validator_participation.add_flag(flag_index)?; + proposer_reward_numerator.safe_add_assign( + altair::get_base_reward( + state, + index, + base_reward_per_increment, + &self.spec, + )? 
+ .safe_mul(weight)?, + )?; + } + } + } + total_proposer_reward.safe_add_assign( + proposer_reward_numerator.safe_div(proposer_reward_denominator)?, + )?; + } + + Ok(total_proposer_reward) + } +} diff --git a/beacon_node/beacon_chain/src/beacon_block_streamer.rs b/beacon_node/beacon_chain/src/beacon_block_streamer.rs new file mode 100644 index 00000000000..e43f2a8dd81 --- /dev/null +++ b/beacon_node/beacon_chain/src/beacon_block_streamer.rs @@ -0,0 +1,973 @@ +use crate::{BeaconChain, BeaconChainError, BeaconChainTypes}; +use execution_layer::{ExecutionLayer, ExecutionPayloadBodyV1}; +use slog::{crit, debug, Logger}; +use std::collections::HashMap; +use std::sync::Arc; +use store::DatabaseBlock; +use task_executor::TaskExecutor; +use tokio::sync::{ + mpsc::{self, UnboundedSender}, + RwLock, +}; +use tokio_stream::{wrappers::UnboundedReceiverStream, Stream}; +use types::{ + ChainSpec, EthSpec, ExecPayload, ExecutionBlockHash, ForkName, Hash256, SignedBeaconBlock, + SignedBlindedBeaconBlock, Slot, +}; +use types::{ + ExecutionPayload, ExecutionPayloadCapella, ExecutionPayloadHeader, ExecutionPayloadMerge, +}; + +#[derive(PartialEq)] +pub enum CheckEarlyAttesterCache { + Yes, + No, +} + +#[derive(Debug)] +pub enum Error { + PayloadReconstruction(String), + BlocksByRangeFailure(Box), + RequestNotFound, + BlockResultNotFound, +} + +const BLOCKS_PER_RANGE_REQUEST: u64 = 32; + +// This is the same as a DatabaseBlock but the Arc allows us to avoid an unnecessary clone. 
+enum LoadedBeaconBlock<E: EthSpec> { + Full(Arc<SignedBeaconBlock<E>>), + Blinded(Box<SignedBlindedBeaconBlock<E>>), +} +type LoadResult<E> = Result<Option<LoadedBeaconBlock<E>>, BeaconChainError>; +type BlockResult<E> = Result<Option<Arc<SignedBeaconBlock<E>>>, BeaconChainError>; + +enum RequestState<E: EthSpec> { + UnSent(Vec<BlockParts<E>>), + Sent(HashMap<Hash256, Arc<BlockResult<E>>>), +} + +struct BodiesByRange<E: EthSpec> { + start: u64, + count: u64, + state: RequestState<E>, +} + +// stores the components of a block for future re-construction in a small form +struct BlockParts<E: EthSpec> { + blinded_block: Box<SignedBlindedBeaconBlock<E>>, + header: Box<ExecutionPayloadHeader<E>>, + body: Option<Box<ExecutionPayloadBodyV1<E>>>, +} + +impl<E: EthSpec> BlockParts<E> { + pub fn new( + blinded: Box<SignedBlindedBeaconBlock<E>>, + header: ExecutionPayloadHeader<E>, + ) -> Self { + Self { + blinded_block: blinded, + header: Box::new(header), + body: None, + } + } + + pub fn root(&self) -> Hash256 { + self.blinded_block.canonical_root() + } + + pub fn slot(&self) -> Slot { + self.blinded_block.message().slot() + } + + pub fn block_hash(&self) -> ExecutionBlockHash { + self.header.block_hash() + } +} + +fn reconstruct_default_header_block<E: EthSpec>( + blinded_block: Box<SignedBlindedBeaconBlock<E>>, + header_from_block: ExecutionPayloadHeader<E>, + spec: &ChainSpec, +) -> BlockResult<E> { + let fork = blinded_block + .fork_name(spec) + .map_err(BeaconChainError::InconsistentFork)?; + + let payload: ExecutionPayload<E> = match fork { + ForkName::Merge => ExecutionPayloadMerge::default().into(), + ForkName::Capella => ExecutionPayloadCapella::default().into(), + ForkName::Base | ForkName::Altair => { + return Err(Error::PayloadReconstruction(format!( + "Block with fork variant {} has execution payload", + fork + )) + .into()) + } + }; + + let header_from_payload = ExecutionPayloadHeader::from(payload.to_ref()); + if header_from_payload == header_from_block { + blinded_block + .try_into_full_block(Some(payload)) + .ok_or(BeaconChainError::AddPayloadLogicError) + .map(Arc::new) + .map(Some) + } else { + Err(BeaconChainError::InconsistentPayloadReconstructed { + slot: blinded_block.slot(), + exec_block_hash: header_from_block.block_hash(), + canonical_transactions_root: header_from_block.transactions_root(), + reconstructed_transactions_root:
header_from_payload.transactions_root(), + }) + } +} + +fn reconstruct_blocks( + block_map: &mut HashMap>>, + block_parts_with_bodies: HashMap>, + log: &Logger, +) { + for (root, block_parts) in block_parts_with_bodies { + if let Some(payload_body) = block_parts.body { + match payload_body.to_payload(block_parts.header.as_ref().clone()) { + Ok(payload) => { + let header_from_payload = ExecutionPayloadHeader::from(payload.to_ref()); + if header_from_payload == *block_parts.header { + block_map.insert( + root, + Arc::new( + block_parts + .blinded_block + .try_into_full_block(Some(payload)) + .ok_or(BeaconChainError::AddPayloadLogicError) + .map(Arc::new) + .map(Some), + ), + ); + } else { + let error = BeaconChainError::InconsistentPayloadReconstructed { + slot: block_parts.blinded_block.slot(), + exec_block_hash: block_parts.header.block_hash(), + canonical_transactions_root: block_parts.header.transactions_root(), + reconstructed_transactions_root: header_from_payload + .transactions_root(), + }; + debug!(log, "Failed to reconstruct block"; "root" => ?root, "error" => ?error); + block_map.insert(root, Arc::new(Err(error))); + } + } + Err(string) => { + block_map.insert( + root, + Arc::new(Err(Error::PayloadReconstruction(string).into())), + ); + } + } + } else { + block_map.insert( + root, + Arc::new(Err(BeaconChainError::BlockHashMissingFromExecutionLayer( + block_parts.block_hash(), + ))), + ); + } + } +} + +impl BodiesByRange { + pub fn new(maybe_block_parts: Option>) -> Self { + if let Some(block_parts) = maybe_block_parts { + Self { + start: block_parts.header.block_number(), + count: 1, + state: RequestState::UnSent(vec![block_parts]), + } + } else { + Self { + start: 0, + count: 0, + state: RequestState::UnSent(vec![]), + } + } + } + + pub fn is_unsent(&self) -> bool { + matches!(self.state, RequestState::UnSent(_)) + } + + pub fn push_block_parts(&mut self, block_parts: BlockParts) -> Result<(), BlockParts> { + if self.count == BLOCKS_PER_RANGE_REQUEST { + 
return Err(block_parts); + } + + match &mut self.state { + RequestState::Sent(_) => Err(block_parts), + RequestState::UnSent(blocks_parts_vec) => { + let block_number = block_parts.header.block_number(); + if self.count == 0 { + self.start = block_number; + self.count = 1; + blocks_parts_vec.push(block_parts); + Ok(()) + } else { + // need to figure out if this block fits in the request + if block_number < self.start + || self.start + BLOCKS_PER_RANGE_REQUEST <= block_number + { + return Err(block_parts); + } + + blocks_parts_vec.push(block_parts); + if self.start + self.count <= block_number { + self.count = block_number - self.start + 1; + } + + Ok(()) + } + } + } + } + + async fn execute(&mut self, execution_layer: &ExecutionLayer, log: &Logger) { + if let RequestState::UnSent(blocks_parts_ref) = &mut self.state { + let block_parts_vec = std::mem::take(blocks_parts_ref); + + let mut block_map = HashMap::new(); + match execution_layer + .get_payload_bodies_by_range(self.start, self.count) + .await + { + Ok(bodies) => { + let mut range_map = (self.start..(self.start + self.count)) + .zip(bodies.into_iter().chain(std::iter::repeat(None))) + .collect::>(); + + let mut with_bodies = HashMap::new(); + for mut block_parts in block_parts_vec { + with_bodies + // it's possible the same block is requested twice, using + // or_insert_with() skips duplicates + .entry(block_parts.root()) + .or_insert_with(|| { + let block_number = block_parts.header.block_number(); + block_parts.body = + range_map.remove(&block_number).flatten().map(Box::new); + + block_parts + }); + } + + reconstruct_blocks(&mut block_map, with_bodies, log); + } + Err(e) => { + let block_result = + Arc::new(Err(Error::BlocksByRangeFailure(Box::new(e)).into())); + debug!(log, "Payload bodies by range failure"; "error" => ?block_result); + for block_parts in block_parts_vec { + block_map.insert(block_parts.root(), block_result.clone()); + } + } + } + self.state = RequestState::Sent(block_map); + } + } + + pub 
async fn get_block_result( + &mut self, + root: &Hash256, + execution_layer: &ExecutionLayer, + log: &Logger, + ) -> Option>> { + self.execute(execution_layer, log).await; + if let RequestState::Sent(map) = &self.state { + return map.get(root).cloned(); + } + // Shouldn't reach this point + None + } +} + +#[derive(Clone)] +enum EngineRequest { + ByRange(Arc>>), + // When we already have the data or there's an error + NoRequest(Arc>>>>), +} + +impl EngineRequest { + pub fn new_by_range() -> Self { + Self::ByRange(Arc::new(RwLock::new(BodiesByRange::new(None)))) + } + pub fn new_no_request() -> Self { + Self::NoRequest(Arc::new(RwLock::new(HashMap::new()))) + } + + pub async fn is_unsent(&self) -> bool { + match self { + Self::ByRange(bodies_by_range) => bodies_by_range.read().await.is_unsent(), + Self::NoRequest(_) => false, + } + } + + pub async fn push_block_parts(&mut self, block_parts: BlockParts, log: &Logger) { + match self { + Self::ByRange(bodies_by_range) => { + let mut request = bodies_by_range.write().await; + + if let Err(block_parts) = request.push_block_parts(block_parts) { + drop(request); + let new_by_range = BodiesByRange::new(Some(block_parts)); + *self = Self::ByRange(Arc::new(RwLock::new(new_by_range))); + } + } + Self::NoRequest(_) => { + // this should _never_ happen + crit!( + log, + "Please notify the devs"; + "beacon_block_streamer" => "push_block_parts called on NoRequest Variant", + ); + } + } + } + + pub async fn push_block_result( + &mut self, + root: Hash256, + block_result: BlockResult, + log: &Logger, + ) { + // this function will only fail if something is seriously wrong + match self { + Self::ByRange(_) => { + // this should _never_ happen + crit!( + log, + "Please notify the devs"; + "beacon_block_streamer" => "push_block_result called on ByRange", + ); + } + Self::NoRequest(results) => { + results.write().await.insert(root, Arc::new(block_result)); + } + } + } + + pub async fn get_block_result( + &self, + root: &Hash256, + 
execution_layer: &ExecutionLayer, + log: &Logger, + ) -> Arc> { + match self { + Self::ByRange(by_range) => { + by_range + .write() + .await + .get_block_result(root, execution_layer, log) + .await + } + Self::NoRequest(map) => map.read().await.get(root).cloned(), + } + .unwrap_or_else(|| { + crit!( + log, + "Please notify the devs"; + "beacon_block_streamer" => "block_result not found in request", + "root" => ?root, + ); + Arc::new(Err(Error::BlockResultNotFound.into())) + }) + } +} + +pub struct BeaconBlockStreamer { + execution_layer: ExecutionLayer, + check_early_attester_cache: CheckEarlyAttesterCache, + beacon_chain: Arc>, +} + +impl BeaconBlockStreamer { + pub fn new( + beacon_chain: &Arc>, + check_early_attester_cache: CheckEarlyAttesterCache, + ) -> Result { + let execution_layer = beacon_chain + .execution_layer + .as_ref() + .ok_or(BeaconChainError::ExecutionLayerMissing)? + .clone(); + + Ok(Self { + execution_layer, + check_early_attester_cache, + beacon_chain: beacon_chain.clone(), + }) + } + + fn check_early_attester_cache( + &self, + root: Hash256, + ) -> Option>> { + if self.check_early_attester_cache == CheckEarlyAttesterCache::Yes { + self.beacon_chain.early_attester_cache.get_block(root) + } else { + None + } + } + + fn load_payloads(&self, block_roots: Vec) -> Vec<(Hash256, LoadResult)> { + let mut db_blocks = Vec::new(); + + for root in block_roots { + if let Some(cached_block) = self + .check_early_attester_cache(root) + .map(LoadedBeaconBlock::Full) + { + db_blocks.push((root, Ok(Some(cached_block)))); + continue; + } + + match self.beacon_chain.store.try_get_full_block(&root) { + Err(e) => db_blocks.push((root, Err(e.into()))), + Ok(opt_block) => db_blocks.push(( + root, + Ok(opt_block.map(|db_block| match db_block { + DatabaseBlock::Full(block) => LoadedBeaconBlock::Full(Arc::new(block)), + DatabaseBlock::Blinded(block) => { + LoadedBeaconBlock::Blinded(Box::new(block)) + } + })), + )), + } + } + + db_blocks + } + + /// Pre-process the 
loaded blocks into execution engine requests. + /// + /// The purpose of this function is to separate the blocks into 2 categories: + /// 1) no_request - when we already have the full block or there's an error + /// 2) blocks_by_range - used for blinded blocks + /// + /// The function returns a vector of block roots in the same order as requested + /// along with the engine request that each root corresponds to. + async fn get_requests( + &self, + payloads: Vec<(Hash256, LoadResult)>, + ) -> Vec<(Hash256, EngineRequest)> { + let mut ordered_block_roots = Vec::new(); + let mut requests = HashMap::new(); + + // we sort the by range blocks by slot before adding them to the + // request as it should *better* optimize the number of blocks that + // can fit in the same request + let mut by_range_blocks: Vec> = vec![]; + let mut no_request = EngineRequest::new_no_request(); + + for (root, load_result) in payloads { + // preserve the order of the requested blocks + ordered_block_roots.push(root); + + let block_result = match load_result { + Err(e) => Err(e), + Ok(None) => Ok(None), + Ok(Some(LoadedBeaconBlock::Full(full_block))) => Ok(Some(full_block)), + Ok(Some(LoadedBeaconBlock::Blinded(blinded_block))) => { + match blinded_block + .message() + .execution_payload() + .map(|payload| payload.to_execution_payload_header()) + { + Ok(header) => { + if header.block_hash() == ExecutionBlockHash::zero() { + reconstruct_default_header_block( + blinded_block, + header, + &self.beacon_chain.spec, + ) + } else { + // Add the block to the set requiring a by-range request. + let block_parts = BlockParts::new(blinded_block, header); + by_range_blocks.push(block_parts); + continue; + } + } + Err(e) => Err(BeaconChainError::BeaconStateError(e)), + } + } + }; + + no_request + .push_block_result(root, block_result, &self.beacon_chain.log) + .await; + requests.insert(root, no_request.clone()); + } + + // Now deal with the by_range requests. 
Sort them in order of increasing slot + let mut by_range = EngineRequest::::new_by_range(); + by_range_blocks.sort_by_key(|block_parts| block_parts.slot()); + for block_parts in by_range_blocks { + let root = block_parts.root(); + by_range + .push_block_parts(block_parts, &self.beacon_chain.log) + .await; + requests.insert(root, by_range.clone()); + } + + let mut result = vec![]; + for root in ordered_block_roots { + if let Some(request) = requests.get(&root) { + result.push((root, request.clone())) + } else { + crit!( + self.beacon_chain.log, + "Please notify the devs"; + "beacon_block_streamer" => "request not found", + "root" => ?root, + ); + no_request + .push_block_result( + root, + Err(Error::RequestNotFound.into()), + &self.beacon_chain.log, + ) + .await; + result.push((root, no_request.clone())); + } + } + + result + } + + // used when the execution engine doesn't support the payload bodies methods + async fn stream_blocks_fallback( + &self, + block_roots: Vec, + sender: UnboundedSender<(Hash256, Arc>)>, + ) { + debug!( + self.beacon_chain.log, + "Using slower fallback method of eth_getBlockByHash()" + ); + for root in block_roots { + let cached_block = self.check_early_attester_cache(root); + let block_result = if cached_block.is_some() { + Ok(cached_block) + } else { + self.beacon_chain + .get_block(&root) + .await + .map(|opt_block| opt_block.map(Arc::new)) + }; + + if sender.send((root, Arc::new(block_result))).is_err() { + break; + } + } + } + + async fn stream_blocks( + &self, + block_roots: Vec, + sender: UnboundedSender<(Hash256, Arc>)>, + ) { + let n_roots = block_roots.len(); + let mut n_success = 0usize; + let mut n_sent = 0usize; + let mut engine_requests = 0usize; + + let payloads = self.load_payloads(block_roots); + let requests = self.get_requests(payloads).await; + + for (root, request) in requests { + if request.is_unsent().await { + engine_requests += 1; + } + + let result = request + .get_block_result(&root, &self.execution_layer, 
&self.beacon_chain.log) + .await; + + let successful = result + .as_ref() + .as_ref() + .map(|opt| opt.is_some()) + .unwrap_or(false); + + if sender.send((root, result)).is_err() { + break; + } else { + n_sent += 1; + if successful { + n_success += 1; + } + } + } + + debug!( + self.beacon_chain.log, + "BeaconBlockStreamer finished"; + "requested blocks" => n_roots, + "sent" => n_sent, + "succeeded" => n_success, + "failed" => (n_sent - n_success), + "engine requests" => engine_requests, + ); + } + + pub async fn stream( + self, + block_roots: Vec, + sender: UnboundedSender<(Hash256, Arc>)>, + ) { + match self + .execution_layer + .get_engine_capabilities(None) + .await + .map_err(Box::new) + .map_err(BeaconChainError::EngineGetCapabilititesFailed) + { + Ok(engine_capabilities) => { + if engine_capabilities.get_payload_bodies_by_range_v1 { + self.stream_blocks(block_roots, sender).await; + } else { + // use the fallback method + self.stream_blocks_fallback(block_roots, sender).await; + } + } + Err(e) => { + send_errors(block_roots, sender, e).await; + } + } + } + + pub fn launch_stream( + self, + block_roots: Vec, + executor: &TaskExecutor, + ) -> impl Stream>)> { + let (block_tx, block_rx) = mpsc::unbounded_channel(); + debug!( + self.beacon_chain.log, + "Launching a BeaconBlockStreamer"; + "blocks" => block_roots.len(), + ); + executor.spawn(self.stream(block_roots, block_tx), "get_blocks_sender"); + UnboundedReceiverStream::new(block_rx) + } +} + +async fn send_errors( + block_roots: Vec, + sender: UnboundedSender<(Hash256, Arc>)>, + beacon_chain_error: BeaconChainError, +) { + let result = Arc::new(Err(beacon_chain_error)); + for root in block_roots { + if sender.send((root, result.clone())).is_err() { + break; + } + } +} + +impl From for BeaconChainError { + fn from(value: Error) -> Self { + BeaconChainError::BlockStreamerError(value) + } +} + +#[cfg(test)] +mod tests { + use crate::beacon_block_streamer::{BeaconBlockStreamer, CheckEarlyAttesterCache}; + use 
crate::test_utils::{test_spec, BeaconChainHarness, EphemeralHarnessType}; + use execution_layer::test_utils::{Block, DEFAULT_ENGINE_CAPABILITIES}; + use execution_layer::EngineCapabilities; + use lazy_static::lazy_static; + use std::time::Duration; + use tokio::sync::mpsc; + use types::{ChainSpec, Epoch, EthSpec, Hash256, Keypair, MinimalEthSpec, Slot}; + + const VALIDATOR_COUNT: usize = 48; + lazy_static! { + /// A cached set of keys. + static ref KEYPAIRS: Vec = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT); + } + + fn get_harness( + validator_count: usize, + spec: ChainSpec, + ) -> BeaconChainHarness> { + let harness = BeaconChainHarness::builder(MinimalEthSpec) + .spec(spec) + .keypairs(KEYPAIRS[0..validator_count].to_vec()) + .logger(logging::test_logger()) + .fresh_ephemeral_store() + .mock_execution_layer() + .build(); + + harness.advance_slot(); + + harness + } + + #[tokio::test] + async fn check_all_blocks_from_altair_to_capella() { + let slots_per_epoch = MinimalEthSpec::slots_per_epoch() as usize; + let num_epochs = 8; + let bellatrix_fork_epoch = 2usize; + let capella_fork_epoch = 4usize; + let num_blocks_produced = num_epochs * slots_per_epoch; + + let mut spec = test_spec::(); + spec.altair_fork_epoch = Some(Epoch::new(0)); + spec.bellatrix_fork_epoch = Some(Epoch::new(bellatrix_fork_epoch as u64)); + spec.capella_fork_epoch = Some(Epoch::new(capella_fork_epoch as u64)); + + let harness = get_harness(VALIDATOR_COUNT, spec); + // go to bellatrix fork + harness + .extend_slots(bellatrix_fork_epoch * slots_per_epoch) + .await; + // extend half an epoch + harness.extend_slots(slots_per_epoch / 2).await; + // trigger merge + harness + .execution_block_generator() + .move_to_terminal_block() + .expect("should move to terminal block"); + let timestamp = harness.get_timestamp_at_slot() + harness.spec.seconds_per_slot; + harness + .execution_block_generator() + .modify_last_block(|block| { + if let Block::PoW(terminal_block) = block { + 
terminal_block.timestamp = timestamp; + } + }); + // finish out merge epoch + harness.extend_slots(slots_per_epoch / 2).await; + // finish rest of epochs + harness + .extend_slots((num_epochs - 1 - bellatrix_fork_epoch) * slots_per_epoch) + .await; + + let head = harness.chain.head_snapshot(); + let state = &head.beacon_state; + + assert_eq!( + state.slot(), + Slot::new(num_blocks_produced as u64), + "head should be at the current slot" + ); + assert_eq!( + state.current_epoch(), + num_blocks_produced as u64 / MinimalEthSpec::slots_per_epoch(), + "head should be at the expected epoch" + ); + assert_eq!( + state.current_justified_checkpoint().epoch, + state.current_epoch() - 1, + "the head should be justified one behind the current epoch" + ); + assert_eq!( + state.finalized_checkpoint().epoch, + state.current_epoch() - 2, + "the head should be finalized two behind the current epoch" + ); + + let block_roots: Vec = harness + .chain + .forwards_iter_block_roots(Slot::new(0)) + .expect("should get iter") + .map(Result::unwrap) + .map(|(root, _)| root) + .collect(); + + let mut expected_blocks = vec![]; + // get all blocks the old fashioned way + for root in &block_roots { + let block = harness + .chain + .get_block(root) + .await + .expect("should get block") + .expect("block should exist"); + expected_blocks.push(block); + } + + for epoch in 0..num_epochs { + let start = epoch * slots_per_epoch; + let mut epoch_roots = vec![Hash256::zero(); slots_per_epoch]; + epoch_roots[..].clone_from_slice(&block_roots[start..(start + slots_per_epoch)]); + let streamer = BeaconBlockStreamer::new(&harness.chain, CheckEarlyAttesterCache::No) + .expect("should create streamer"); + let (block_tx, mut block_rx) = mpsc::unbounded_channel(); + streamer.stream(epoch_roots.clone(), block_tx).await; + + for (i, expected_root) in epoch_roots.into_iter().enumerate() { + let (found_root, found_block_result) = + block_rx.recv().await.expect("should get block"); + + assert_eq!( + found_root, 
expected_root, + "expected block root should match" + ); + match found_block_result.as_ref() { + Ok(maybe_block) => { + let found_block = maybe_block.clone().expect("should have a block"); + let expected_block = expected_blocks + .get(start + i) + .expect("should get expected block"); + assert_eq!( + found_block.as_ref(), + expected_block, + "expected block should match found block" + ); + } + Err(e) => panic!("Error retrieving block {}: {:?}", expected_root, e), + } + } + } + } + + #[tokio::test] + async fn check_fallback_altair_to_capella() { + let slots_per_epoch = MinimalEthSpec::slots_per_epoch() as usize; + let num_epochs = 8; + let bellatrix_fork_epoch = 2usize; + let capella_fork_epoch = 4usize; + let num_blocks_produced = num_epochs * slots_per_epoch; + + let mut spec = test_spec::(); + spec.altair_fork_epoch = Some(Epoch::new(0)); + spec.bellatrix_fork_epoch = Some(Epoch::new(bellatrix_fork_epoch as u64)); + spec.capella_fork_epoch = Some(Epoch::new(capella_fork_epoch as u64)); + + let harness = get_harness(VALIDATOR_COUNT, spec); + + // modify execution engine so it doesn't support engine_payloadBodiesBy* methods + let mock_execution_layer = harness.mock_execution_layer.as_ref().unwrap(); + mock_execution_layer + .server + .set_engine_capabilities(EngineCapabilities { + get_payload_bodies_by_hash_v1: false, + get_payload_bodies_by_range_v1: false, + ..DEFAULT_ENGINE_CAPABILITIES + }); + // refresh capabilities cache + harness + .chain + .execution_layer + .as_ref() + .unwrap() + .get_engine_capabilities(Some(Duration::ZERO)) + .await + .unwrap(); + + // go to bellatrix fork + harness + .extend_slots(bellatrix_fork_epoch * slots_per_epoch) + .await; + // extend half an epoch + harness.extend_slots(slots_per_epoch / 2).await; + // trigger merge + harness + .execution_block_generator() + .move_to_terminal_block() + .expect("should move to terminal block"); + let timestamp = harness.get_timestamp_at_slot() + harness.spec.seconds_per_slot; + harness + 
.execution_block_generator() + .modify_last_block(|block| { + if let Block::PoW(terminal_block) = block { + terminal_block.timestamp = timestamp; + } + }); + // finish out merge epoch + harness.extend_slots(slots_per_epoch / 2).await; + // finish rest of epochs + harness + .extend_slots((num_epochs - 1 - bellatrix_fork_epoch) * slots_per_epoch) + .await; + + let head = harness.chain.head_snapshot(); + let state = &head.beacon_state; + + assert_eq!( + state.slot(), + Slot::new(num_blocks_produced as u64), + "head should be at the current slot" + ); + assert_eq!( + state.current_epoch(), + num_blocks_produced as u64 / MinimalEthSpec::slots_per_epoch(), + "head should be at the expected epoch" + ); + assert_eq!( + state.current_justified_checkpoint().epoch, + state.current_epoch() - 1, + "the head should be justified one behind the current epoch" + ); + assert_eq!( + state.finalized_checkpoint().epoch, + state.current_epoch() - 2, + "the head should be finalized two behind the current epoch" + ); + + let block_roots: Vec = harness + .chain + .forwards_iter_block_roots(Slot::new(0)) + .expect("should get iter") + .map(Result::unwrap) + .map(|(root, _)| root) + .collect(); + + let mut expected_blocks = vec![]; + // get all blocks the old fashioned way + for root in &block_roots { + let block = harness + .chain + .get_block(root) + .await + .expect("should get block") + .expect("block should exist"); + expected_blocks.push(block); + } + + for epoch in 0..num_epochs { + let start = epoch * slots_per_epoch; + let mut epoch_roots = vec![Hash256::zero(); slots_per_epoch]; + epoch_roots[..].clone_from_slice(&block_roots[start..(start + slots_per_epoch)]); + let streamer = BeaconBlockStreamer::new(&harness.chain, CheckEarlyAttesterCache::No) + .expect("should create streamer"); + let (block_tx, mut block_rx) = mpsc::unbounded_channel(); + streamer.stream(epoch_roots.clone(), block_tx).await; + + for (i, expected_root) in epoch_roots.into_iter().enumerate() { + let (found_root, 
found_block_result) = + block_rx.recv().await.expect("should get block"); + + assert_eq!( + found_root, expected_root, + "expected block root should match" + ); + match found_block_result.as_ref() { + Ok(maybe_block) => { + let found_block = maybe_block.clone().expect("should have a block"); + let expected_block = expected_blocks + .get(start + i) + .expect("should get expected block"); + assert_eq!( + found_block.as_ref(), + expected_block, + "expected block should match found block" + ); + } + Err(e) => panic!("Error retrieving block {}: {:?}", expected_root, e), + } + } + } + } +} diff --git a/beacon_node/beacon_chain/src/beacon_chain.rs b/beacon_node/beacon_chain/src/beacon_chain.rs index 55d6ae29efb..0165c54dc3b 100644 --- a/beacon_node/beacon_chain/src/beacon_chain.rs +++ b/beacon_node/beacon_chain/src/beacon_chain.rs @@ -4,14 +4,16 @@ use crate::attestation_verification::{ VerifiedUnaggregatedAttestation, }; use crate::attester_cache::{AttesterCache, AttesterCacheKey}; +use crate::beacon_block_streamer::{BeaconBlockStreamer, CheckEarlyAttesterCache}; use crate::beacon_proposer_cache::compute_proposer_duties_from_head; use crate::beacon_proposer_cache::BeaconProposerCache; use crate::block_times_cache::BlockTimesCache; use crate::block_verification::{ - check_block_is_finalized_descendant, check_block_relevancy, get_block_root, + check_block_is_finalized_checkpoint_or_descendant, check_block_relevancy, get_block_root, signature_verify_chain_segment, BlockError, ExecutionPendingBlock, GossipVerifiedBlock, IntoExecutionPendingBlock, PayloadVerificationOutcome, POS_PANDA_BANNER, }; +pub use crate::canonical_head::{CanonicalHead, CanonicalHeadRwLock}; use crate::chain_config::ChainConfig; use crate::early_attester_cache::EarlyAttesterCache; use crate::errors::{BeaconChainError as Error, BlockProductionError}; @@ -56,10 +58,12 @@ use crate::validator_monitor::{ }; use crate::validator_pubkey_cache::ValidatorPubkeyCache; use crate::{metrics, BeaconChainError, 
BeaconForkChoiceStore, BeaconSnapshot, CachedHead}; -use eth2::types::{EventKind, SseBlock, SyncDuty}; +use eth2::types::{EventKind, SseBlock, SseExtendedPayloadAttributes, SyncDuty}; use execution_layer::{ - BuilderParams, ChainHealth, ExecutionLayer, FailedCondition, PayloadAttributes, PayloadStatus, + BlockProposalContents, BuilderParams, ChainHealth, ExecutionLayer, FailedCondition, + PayloadAttributes, PayloadStatus, }; +pub use fork_choice::CountUnrealized; use fork_choice::{ AttestationFromBlock, ExecutionStatus, ForkChoice, ForkchoiceUpdateParameters, InvalidationOperation, PayloadVerificationStatus, ResetPayloadStatuses, @@ -67,9 +71,9 @@ use fork_choice::{ use futures::channel::mpsc::Sender; use itertools::process_results; use itertools::Itertools; -use operation_pool::{AttestationRef, OperationPool, PersistedOperationPool}; +use operation_pool::{AttestationRef, OperationPool, PersistedOperationPool, ReceivedPreCapella}; use parking_lot::{Mutex, RwLock}; -use proto_array::{CountUnrealizedFull, DoNotReOrg, ProposerHeadError}; +use proto_array::{DoNotReOrg, ProposerHeadError}; use safe_arith::SafeArith; use slasher::Slasher; use slog::{crit, debug, error, info, trace, warn, Logger}; @@ -79,13 +83,14 @@ use state_processing::{ common::get_attesting_indices_from_state, per_block_processing, per_block_processing::{ - errors::AttestationValidationError, verify_attestation_for_block_inclusion, - VerifySignatures, + errors::AttestationValidationError, get_expected_withdrawals, + verify_attestation_for_block_inclusion, VerifySignatures, }, per_slot_processing, state_advance::{complete_state_advance, partial_state_advance}, BlockSignatureStrategy, ConsensusContext, SigVerifiedOp, VerifyBlockRoot, VerifyOperation, }; +use std::borrow::Cow; use std::cmp::Ordering; use std::collections::HashMap; use std::collections::HashSet; @@ -98,14 +103,11 @@ use store::{ DatabaseBlock, Error as DBError, HotColdDB, KeyValueStore, KeyValueStoreOp, StoreItem, StoreOp, }; use 
task_executor::{ShutdownReason, TaskExecutor}; +use tokio_stream::Stream; use tree_hash::TreeHash; use types::beacon_state::CloneConfig; -use types::consts::merge::INTERVALS_PER_SLOT; use types::*; -pub use crate::canonical_head::{CanonicalHead, CanonicalHeadRwLock}; -pub use fork_choice::CountUnrealized; - pub type ForkChoiceError = fork_choice::Error; /// Alias to appease clippy. @@ -125,12 +127,6 @@ pub const VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT: Duration = Duration::from_secs(1) /// The timeout for the eth1 finalization cache pub const ETH1_FINALIZATION_CACHE_LOCK_TIMEOUT: Duration = Duration::from_millis(200); -/// The latest delay from the start of the slot at which to attempt a 1-slot re-org. -fn max_re_org_slot_delay(seconds_per_slot: u64) -> Duration { - // Allow at least half of the attestation deadline for the block to propagate. - Duration::from_secs(seconds_per_slot) / INTERVALS_PER_SLOT as u32 / 2 -} - // These keys are all zero because they get stored in different columns, see `DBColumn` type. pub const BEACON_CHAIN_DB_KEY: Hash256 = Hash256::zero(); pub const OP_POOL_DB_KEY: Hash256 = Hash256::zero(); @@ -196,6 +192,9 @@ pub enum ProduceBlockVerification { pub struct PrePayloadAttributes { pub proposer_index: u64, pub prev_randao: Hash256, + /// The parent block number is not part of the payload attributes sent to the EL, but *is* + /// sent to builders via SSE. + pub parent_block_number: u64, } /// Define whether a forkchoiceUpdate needs to be checked for an override (`Yes`) or has already @@ -269,7 +268,7 @@ pub trait BeaconChainTypes: Send + Sync + 'static { } /// Used internally to split block production into discrete functions. 
-struct PartialBeaconBlock { +struct PartialBeaconBlock> { state: BeaconState, slot: Slot, proposer_index: u64, @@ -283,7 +282,8 @@ struct PartialBeaconBlock { deposits: Vec, voluntary_exits: Vec, sync_aggregate: Option>, - prepare_payload_handle: Option>, + prepare_payload_handle: Option>, + bls_to_execution_changes: Vec, } pub type BeaconForkChoice = ForkChoice< @@ -352,7 +352,7 @@ pub struct BeaconChain { /// in recent epochs. pub(crate) observed_sync_aggregators: RwLock>, /// Maintains a record of which validators have proposed blocks for each slot. - pub(crate) observed_block_producers: RwLock>, + pub observed_block_producers: RwLock>, /// Maintains a record of which validators have submitted voluntary exits. pub(crate) observed_voluntary_exits: Mutex>, /// Maintains a record of which validators we've seen proposer slashings for. @@ -360,6 +360,9 @@ pub struct BeaconChain { /// Maintains a record of which validators we've seen attester slashings for. pub(crate) observed_attester_slashings: Mutex, T::EthSpec>>, + /// Maintains a record of which validators we've seen BLS to execution changes for. + pub(crate) observed_bls_to_execution_changes: + Mutex>, /// The most recently validated light client finality update received on gossip. pub latest_seen_finality_update: Mutex>>, /// The most recently validated light client optimistic update received on gossip. @@ -422,6 +425,46 @@ pub struct BeaconChain { type BeaconBlockAndState = (BeaconBlock, BeaconState); impl BeaconChain { + /// Checks if a block is finalized. + /// The finalization check is done with the block slot. The block root is used to verify that + /// the finalized slot is in the canonical chain. 
+ pub fn is_finalized_block( + &self, + block_root: &Hash256, + block_slot: Slot, + ) -> Result { + let finalized_slot = self + .canonical_head + .cached_head() + .finalized_checkpoint() + .epoch + .start_slot(T::EthSpec::slots_per_epoch()); + let is_canonical = self + .block_root_at_slot(block_slot, WhenSlotSkipped::None)? + .map_or(false, |canonical_root| block_root == &canonical_root); + Ok(block_slot <= finalized_slot && is_canonical) + } + + /// Checks if a state is finalized. + /// The finalization check is done with the slot. The state root is used to verify that + /// the finalized state is in the canonical chain. + pub fn is_finalized_state( + &self, + state_root: &Hash256, + state_slot: Slot, + ) -> Result { + let finalized_slot = self + .canonical_head + .cached_head() + .finalized_checkpoint() + .epoch + .start_slot(T::EthSpec::slots_per_epoch()); + let is_canonical = self + .state_root_at_slot(state_slot)? + .map_or(false, |canonical_root| state_root == &canonical_root); + Ok(state_slot <= finalized_slot && is_canonical) + } + /// Persists the head tracker and fork choice. /// /// We do it atomically even though no guarantees need to be made about blocks from @@ -469,7 +512,6 @@ impl BeaconChain { pub fn load_fork_choice( store: BeaconStore, reset_payload_statuses: ResetPayloadStatuses, - count_unrealized_full: CountUnrealizedFull, spec: &ChainSpec, log: &Logger, ) -> Result>, Error> { @@ -486,7 +528,6 @@ impl BeaconChain { persisted_fork_choice.fork_choice, reset_payload_statuses, fc_store, - count_unrealized_full, spec, log, )?)) @@ -933,14 +974,42 @@ impl BeaconChain { /// ## Errors /// /// May return a database error. 
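The new `is_finalized_block` / `is_finalized_state` helpers both combine two conditions: the slot must be at or before the first slot of the finalized epoch, and the root must match the canonical root at that slot. A simplified, standalone sketch of that logic (placeholder `u64` roots and slots stand in for `Hash256`/`Slot`; `canonical_root_at_slot` is a hypothetical stand-in for `block_root_at_slot` with `WhenSlotSkipped::None`, which yields `None` for skipped slots):

```rust
const SLOTS_PER_EPOCH: u64 = 32;

fn finalized_start_slot(finalized_epoch: u64) -> u64 {
    finalized_epoch * SLOTS_PER_EPOCH
}

/// A block is finalized iff its slot is at or before the finalized slot
/// AND it lies on the canonical chain at that slot.
fn is_finalized_block(
    block_root: u64,
    block_slot: u64,
    finalized_epoch: u64,
    canonical_root_at_slot: impl Fn(u64) -> Option<u64>,
) -> bool {
    let finalized_slot = finalized_start_slot(finalized_epoch);
    let is_canonical = canonical_root_at_slot(block_slot)
        .map_or(false, |canonical| canonical == block_root);
    block_slot <= finalized_slot && is_canonical
}

fn main() {
    // Toy canonical chain where root == slot; slot 5 was skipped.
    let chain = |slot: u64| if slot == 5 { None } else { Some(slot) };
    // Finalized epoch 1 => finalized slot 32.
    assert!(is_finalized_block(10, 10, 1, chain)); // canonical, pre-finalized
    assert!(!is_finalized_block(99, 10, 1, chain)); // not canonical
    assert!(!is_finalized_block(40, 40, 1, chain)); // after finalized slot
    assert!(!is_finalized_block(5, 5, 1, chain)); // skipped slot
    println!("ok");
}
```

Checking canonicality via the root (not just the slot) matters because a non-canonical block can share a slot number with a finalized one.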
- pub async fn get_block_checking_early_attester_cache( - &self, - block_root: &Hash256, - ) -> Result>>, Error> { - if let Some(block) = self.early_attester_cache.get_block(*block_root) { - return Ok(Some(block)); - } - Ok(self.get_block(block_root).await?.map(Arc::new)) + pub fn get_blocks_checking_early_attester_cache( + self: &Arc, + block_roots: Vec, + executor: &TaskExecutor, + ) -> Result< + impl Stream< + Item = ( + Hash256, + Arc>>, Error>>, + ), + >, + Error, + > { + Ok( + BeaconBlockStreamer::::new(self, CheckEarlyAttesterCache::Yes)? + .launch_stream(block_roots, executor), + ) + } + + pub fn get_blocks( + self: &Arc, + block_roots: Vec, + executor: &TaskExecutor, + ) -> Result< + impl Stream< + Item = ( + Hash256, + Arc>>, Error>>, + ), + >, + Error, + > { + Ok( + BeaconBlockStreamer::::new(self, CheckEarlyAttesterCache::No)? + .launch_stream(block_roots, executor), + ) } /// Returns the block at the given root, if any. @@ -959,29 +1028,32 @@ impl BeaconChain { Some(DatabaseBlock::Blinded(block)) => block, None => return Ok(None), }; + let fork = blinded_block.fork_name(&self.spec)?; // If we only have a blinded block, load the execution payload from the EL. let block_message = blinded_block.message(); - let execution_payload_header = &block_message + let execution_payload_header = block_message .execution_payload() .map_err(|_| Error::BlockVariantLacksExecutionPayload(*block_root))? - .execution_payload_header; + .to_execution_payload_header(); - let exec_block_hash = execution_payload_header.block_hash; + let exec_block_hash = execution_payload_header.block_hash(); let execution_payload = self .execution_layer .as_ref() .ok_or(Error::ExecutionLayerMissing)? - .get_payload_by_block_hash(exec_block_hash) + .get_payload_for_header(&execution_payload_header, fork) .await - .map_err(|e| Error::ExecutionLayerErrorPayloadReconstruction(exec_block_hash, e))? 
+ .map_err(|e| { + Error::ExecutionLayerErrorPayloadReconstruction(exec_block_hash, Box::new(e)) + })? .ok_or(Error::BlockHashMissingFromExecutionLayer(exec_block_hash))?; // Verify payload integrity. - let header_from_payload = ExecutionPayloadHeader::from(&execution_payload); - if header_from_payload != *execution_payload_header { - for txn in &execution_payload.transactions { + let header_from_payload = ExecutionPayloadHeader::from(execution_payload.to_ref()); + if header_from_payload != execution_payload_header { + for txn in execution_payload.transactions() { debug!( self.log, "Reconstructed txn"; @@ -992,10 +1064,8 @@ impl BeaconChain { return Err(Error::InconsistentPayloadReconstructed { slot: blinded_block.slot(), exec_block_hash, - canonical_payload_root: execution_payload_header.tree_hash_root(), - reconstructed_payload_root: header_from_payload.tree_hash_root(), - canonical_transactions_root: execution_payload_header.transactions_root, - reconstructed_transactions_root: header_from_payload.transactions_root, + canonical_transactions_root: execution_payload_header.transactions_root(), + reconstructed_transactions_root: header_from_payload.transactions_root(), }); } @@ -1861,7 +1931,6 @@ impl BeaconChain { self.slot()?, verified.indexed_attestation(), AttestationFromBlock::False, - &self.spec, ) .map_err(Into::into) } @@ -2137,12 +2206,14 @@ impl BeaconChain { &self, exit: SignedVoluntaryExit, ) -> Result, Error> { - // NOTE: this could be more efficient if it avoided cloning the head state - let wall_clock_state = self.wall_clock_state()?; + let head_snapshot = self.head().snapshot; + let head_state = &head_snapshot.beacon_state; + let wall_clock_epoch = self.epoch()?; + Ok(self .observed_voluntary_exits .lock() - .verify_and_observe(exit, &wall_clock_state, &self.spec) + .verify_and_observe_at(exit, wall_clock_epoch, head_state, &self.spec) .map(|exit| { // this method is called for both API and gossip exits, so this covers all exit events if let 
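The payload-integrity check in the hunk above rebuilds a header from the payload returned by the execution layer and compares it to the header stored in the blinded block, logging the transactions on mismatch. A minimal sketch of the same pattern, with hypothetical `Header`/`Payload` types and a toy checksum in place of the real SSZ hash-tree-root:

```rust
#[derive(Debug, PartialEq)]
struct Header {
    transactions_root: u64,
}

struct Payload {
    transactions: Vec<Vec<u8>>,
}

/// Rebuild a header from a full payload. The real code computes an SSZ
/// hash-tree-root; a byte sum is used here purely for illustration.
fn header_from_payload(payload: &Payload) -> Header {
    let transactions_root: u64 = payload
        .transactions
        .iter()
        .map(|tx| tx.iter().map(|b| *b as u64).sum::<u64>())
        .sum();
    Header { transactions_root }
}

/// Mirror of the reconstruction check: the rebuilt header must match the
/// canonical one stored on-chain, otherwise the payload is rejected.
fn verify_reconstruction(stored: &Header, payload: &Payload) -> Result<(), String> {
    let rebuilt = header_from_payload(payload);
    if rebuilt != *stored {
        return Err(format!(
            "inconsistent reconstruction: canonical root {} vs rebuilt root {}",
            stored.transactions_root, rebuilt.transactions_root
        ));
    }
    Ok(())
}

fn main() {
    let payload = Payload { transactions: vec![vec![1, 2]] };
    assert!(verify_reconstruction(&Header { transactions_root: 3 }, &payload).is_ok());
    assert!(verify_reconstruction(&Header { transactions_root: 9 }, &payload).is_err());
    println!("ok");
}
```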
Some(event_handler) = self.event_handler.as_ref() { @@ -2218,6 +2289,79 @@ impl BeaconChain { } } + /// Verify a signed BLS to execution change before allowing it to propagate on the gossip network. + pub fn verify_bls_to_execution_change_for_http_api( + &self, + bls_to_execution_change: SignedBlsToExecutionChange, + ) -> Result, Error> { + // Before checking the gossip duplicate filter, check that no prior change is already + // in our op pool. Ignore these messages: do not gossip, do not try to override the pool. + match self + .op_pool + .bls_to_execution_change_in_pool_equals(&bls_to_execution_change) + { + Some(true) => return Ok(ObservationOutcome::AlreadyKnown), + Some(false) => return Err(Error::BlsToExecutionConflictsWithPool), + None => (), + } + + // Use the head state to save advancing to the wall-clock slot unnecessarily. The message is + // signed with respect to the genesis fork version, and the slot check for gossip is applied + // separately. This `Arc` clone of the head is nice and cheap. + let head_snapshot = self.head().snapshot; + let head_state = &head_snapshot.beacon_state; + + Ok(self + .observed_bls_to_execution_changes + .lock() + .verify_and_observe(bls_to_execution_change, head_state, &self.spec)?) + } + + /// Verify a signed BLS to execution change before allowing it to propagate on the gossip network. + pub fn verify_bls_to_execution_change_for_gossip( + &self, + bls_to_execution_change: SignedBlsToExecutionChange, + ) -> Result, Error> { + // Ignore BLS to execution changes on gossip prior to Capella. + if !self.current_slot_is_post_capella()? { + return Err(Error::BlsToExecutionPriorToCapella); + } + self.verify_bls_to_execution_change_for_http_api(bls_to_execution_change) + .or_else(|e| { + // On gossip treat conflicts the same as duplicates [IGNORE]. 
+ match e { + Error::BlsToExecutionConflictsWithPool => Ok(ObservationOutcome::AlreadyKnown), + e => Err(e), + } + }) + } + + /// Check if the current slot is greater than or equal to the Capella fork epoch. + pub fn current_slot_is_post_capella(&self) -> Result { + let current_fork = self.spec.fork_name_at_slot::(self.slot()?); + if let ForkName::Base | ForkName::Altair | ForkName::Merge = current_fork { + Ok(false) + } else { + Ok(true) + } + } + + /// Import a BLS to execution change to the op pool. + /// + /// Return `true` if the change was added to the pool. + pub fn import_bls_to_execution_change( + &self, + bls_to_execution_change: SigVerifiedOp, + received_pre_capella: ReceivedPreCapella, + ) -> bool { + if self.eth1_chain.is_some() { + self.op_pool + .insert_bls_to_execution_change(bls_to_execution_change, received_pre_capella) + } else { + false + } + } + /// Attempt to obtain sync committee duties from the head. pub fn sync_committee_duties_from_head( &self, @@ -2714,7 +2858,7 @@ impl BeaconChain { // is so we don't have to think about lock ordering with respect to the fork choice lock. // There are a bunch of places where we lock both fork choice and the pubkey cache and it // would be difficult to check that they all lock fork choice first. - let mut kv_store_ops = self + let mut ops = self .validator_pubkey_cache .try_write_for(VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT) .ok_or(Error::ValidatorPubkeyCacheLockTimeout)? @@ -2736,7 +2880,7 @@ impl BeaconChain { let mut fork_choice = self.canonical_head.fork_choice_write_lock(); // Do not import a block that doesn't descend from the finalized root. - check_block_is_finalized_descendant(self, &fork_choice, &signed_block)?; + check_block_is_finalized_checkpoint_or_descendant(self, &fork_choice, &signed_block)?; // Register the new block with the fork choice service. 
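`current_slot_is_post_capella` deliberately matches on the three pre-Capella forks rather than testing equality with `ForkName::Capella`, so the check remains true for any later fork. A self-contained sketch of that shape, using a hypothetical fork schedule (the epoch boundaries below are made up for illustration):

```rust
enum ForkName {
    Base,
    Altair,
    Merge,
    Capella,
}

// Hypothetical fork schedule, for illustration only.
fn fork_name_at_epoch(epoch: u64) -> ForkName {
    match epoch {
        0..=9 => ForkName::Base,
        10..=19 => ForkName::Altair,
        20..=29 => ForkName::Merge,
        _ => ForkName::Capella,
    }
}

/// Anything that is not one of the pre-Capella forks counts as post-Capella,
/// so a future fork added to `ForkName` keeps this check returning `true`.
fn is_post_capella(epoch: u64) -> bool {
    !matches!(
        fork_name_at_epoch(epoch),
        ForkName::Base | ForkName::Altair | ForkName::Merge
    )
}

fn main() {
    assert!(!is_post_capella(0));
    assert!(!is_post_capella(25));
    assert!(is_post_capella(30));
    assert!(is_post_capella(1000));
    println!("ok");
}
```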
{ @@ -2744,7 +2888,7 @@ impl BeaconChain { metrics::start_timer(&metrics::FORK_CHOICE_PROCESS_BLOCK_TIMES); let block_delay = self .slot_clock - .seconds_from_current_slot_start(self.spec.seconds_per_slot) + .seconds_from_current_slot_start() .ok_or(Error::UnableToComputeTimeAtSlot)?; fork_choice @@ -2756,7 +2900,7 @@ impl BeaconChain { &state, payload_verification_status, &self.spec, - count_unrealized.and(self.config.count_unrealized.into()), + count_unrealized, ) .map_err(|e| BlockError::BeaconChainError(e.into()))?; } @@ -2816,9 +2960,14 @@ impl BeaconChain { // ---------------------------- BLOCK PROBABLY ATTESTABLE ---------------------------------- // Most blocks are now capable of being attested to thanks to the `early_attester_cache` // cache above. Resume non-essential processing. + // + // It is important NOT to return errors here before the database commit, because the block + // has already been added to fork choice and the database would be left in an inconsistent + // state if we returned early without committing. In other words, an error here would + // corrupt the node's database permanently. // ----------------------------------------------------------------------------------------- - self.import_block_update_shuffling_cache(block_root, &mut state)?; + self.import_block_update_shuffling_cache(block_root, &mut state); self.import_block_observe_attestations( block, &state, @@ -2841,17 +2990,16 @@ impl BeaconChain { // If the write fails, revert fork choice to the version from disk, else we can // end up with blocks in fork choice that are missing from disk. 
// See https://github.com/sigp/lighthouse/issues/2028 - let mut ops: Vec<_> = confirmed_state_roots - .into_iter() - .map(StoreOp::DeleteStateTemporaryFlag) - .collect(); + ops.extend( + confirmed_state_roots + .into_iter() + .map(StoreOp::DeleteStateTemporaryFlag), + ); ops.push(StoreOp::PutBlock(block_root, signed_block.clone())); ops.push(StoreOp::PutState(block.state_root(), &state)); let txn_lock = self.store.hot_db.begin_rw_transaction(); - kv_store_ops.extend(self.store.convert_to_kv_batch(ops)?); - - if let Err(e) = self.store.hot_db.do_atomically(kv_store_ops) { + if let Err(e) = self.store.do_atomically(ops) { error!( self.log, "Database write failed!"; @@ -2871,7 +3019,6 @@ impl BeaconChain { ResetPayloadStatuses::always_reset_conditionally( self.config.always_reset_payload_statuses, ), - self.config.count_unrealized_full, &self.store, &self.spec, &self.log, @@ -3280,13 +3427,27 @@ impl BeaconChain { } } + // For the current and next epoch of this state, ensure we have the shuffling from this + // block in our cache. fn import_block_update_shuffling_cache( &self, block_root: Hash256, state: &mut BeaconState, + ) { + if let Err(e) = self.import_block_update_shuffling_cache_fallible(block_root, state) { + warn!( + self.log, + "Failed to prime shuffling cache"; + "error" => ?e + ); + } + } + + fn import_block_update_shuffling_cache_fallible( + &self, + block_root: Hash256, + state: &mut BeaconState, ) -> Result<(), BlockError> { - // For the current and next epoch of this state, ensure we have the shuffling from this - // block in our cache. for relative_epoch in [RelativeEpoch::Current, RelativeEpoch::Next] { let shuffling_id = AttestationShufflingId::new(block_root, state, relative_epoch)?; @@ -3426,7 +3587,7 @@ impl BeaconChain { /// /// The produced block will not be inherently valid, it must be signed by a block producer. /// Block signing is out of the scope of this function and should be done by a separate program. 
- pub async fn produce_block>( + pub async fn produce_block + 'static>( self: &Arc, randao_reveal: Signature, slot: Slot, @@ -3442,7 +3603,9 @@ impl BeaconChain { } /// Same as `produce_block` but allowing for configuration of RANDAO-verification. - pub async fn produce_block_with_verification>( + pub async fn produce_block_with_verification< + Payload: AbstractExecPayload + 'static, + >( self: &Arc, randao_reveal: Signature, slot: Slot, @@ -3578,7 +3741,7 @@ impl BeaconChain { let slot_delay = self .slot_clock - .seconds_from_current_slot_start(self.spec.seconds_per_slot) + .seconds_from_current_slot_start() .or_else(|| { warn!( self.log, @@ -3593,7 +3756,7 @@ impl BeaconChain { // 1. It seems we have time to propagate and still receive the proposer boost. // 2. The current head block was seen late. // 3. The `get_proposer_head` conditions from fork choice pass. - let proposing_on_time = slot_delay < max_re_org_slot_delay(self.spec.seconds_per_slot); + let proposing_on_time = slot_delay < self.config.re_org_cutoff(self.spec.seconds_per_slot); if !proposing_on_time { debug!( self.log, @@ -3623,6 +3786,7 @@ impl BeaconChain { slot, canonical_head, re_org_threshold, + &self.config.re_org_disallowed_offsets, self.config.re_org_max_epochs_since_finalization, ) .map_err(|e| match e { @@ -3767,19 +3931,93 @@ impl BeaconChain { proposer as u64 }; - // Get the `prev_randao` value. - let prev_randao = if proposer_head == parent_block_root { - cached_head.parent_random() + // Get the `prev_randao` and parent block number. 
+ let head_block_number = cached_head.head_block_number()?; + let (prev_randao, parent_block_number) = if proposer_head == parent_block_root { + ( + cached_head.parent_random()?, + head_block_number.saturating_sub(1), + ) } else { - cached_head.head_random() - }?; + (cached_head.head_random()?, head_block_number) + }; Ok(Some(PrePayloadAttributes { proposer_index, prev_randao, + parent_block_number, })) } + pub fn get_expected_withdrawals( + &self, + forkchoice_update_params: &ForkchoiceUpdateParameters, + proposal_slot: Slot, + ) -> Result, Error> { + let cached_head = self.canonical_head.cached_head(); + let head_state = &cached_head.snapshot.beacon_state; + + let parent_block_root = forkchoice_update_params.head_root; + + let (unadvanced_state, unadvanced_state_root) = + if cached_head.head_block_root() == parent_block_root { + (Cow::Borrowed(head_state), cached_head.head_state_root()) + } else if let Some(snapshot) = self + .snapshot_cache + .try_read_for(BLOCK_PROCESSING_CACHE_LOCK_TIMEOUT) + .ok_or(Error::SnapshotCacheLockTimeout)? + .get_cloned(parent_block_root, CloneConfig::none()) + { + debug!( + self.log, + "Hit snapshot cache during withdrawals calculation"; + "slot" => proposal_slot, + "parent_block_root" => ?parent_block_root, + ); + let state_root = snapshot.beacon_state_root(); + (Cow::Owned(snapshot.beacon_state), state_root) + } else { + info!( + self.log, + "Missed snapshot cache during withdrawals calculation"; + "slot" => proposal_slot, + "parent_block_root" => ?parent_block_root + ); + let block = self + .get_blinded_block(&parent_block_root)? + .ok_or(Error::MissingBeaconBlock(parent_block_root))?; + let state = self + .get_state(&block.state_root(), Some(block.slot()))? 
+ .ok_or(Error::MissingBeaconState(block.state_root()))?; + (Cow::Owned(state), block.state_root()) + }; + + // Parent state epoch is the same as the proposal, we don't need to advance because the + // list of expected withdrawals can only change after an epoch advance or a + // block application. + let proposal_epoch = proposal_slot.epoch(T::EthSpec::slots_per_epoch()); + if head_state.current_epoch() == proposal_epoch { + return get_expected_withdrawals(&unadvanced_state, &self.spec) + .map_err(Error::PrepareProposerFailed); + } + + // Advance the state using the partial method. + debug!( + self.log, + "Advancing state for withdrawals calculation"; + "proposal_slot" => proposal_slot, + "parent_block_root" => ?parent_block_root, + ); + let mut advanced_state = unadvanced_state.into_owned(); + partial_state_advance( + &mut advanced_state, + Some(unadvanced_state_root), + proposal_epoch.start_slot(T::EthSpec::slots_per_epoch()), + &self.spec, + )?; + get_expected_withdrawals(&advanced_state, &self.spec).map_err(Error::PrepareProposerFailed) + } + /// Determine whether a fork choice update to the execution layer should be overridden. /// /// This is *only* necessary when proposer re-orgs are enabled, because we have to prevent the @@ -3827,6 +4065,7 @@ impl BeaconChain { .get_preliminary_proposer_head( head_block_root, re_org_threshold, + &self.config.re_org_disallowed_offsets, self.config.re_org_max_epochs_since_finalization, ) .map_err(|e| e.map_inner_error(Error::ProposerHeadForkChoiceError))?; @@ -3837,7 +4076,7 @@ impl BeaconChain { let re_org_block_slot = head_slot + 1; let fork_choice_slot = info.current_slot; - // If a re-orging proposal isn't made by the `max_re_org_slot_delay` then we give up + // If a re-orging proposal isn't made by the `re_org_cutoff` then we give up // and allow the fork choice update for the canonical head through so that we may attest // correctly. 
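The epoch comparison in `get_expected_withdrawals` exploits the fact that the expected-withdrawals list only changes across an epoch transition or block application: the parent state is advanced (via the cheaper partial advance) only when the proposal lands in a later epoch, and then only to that epoch's first slot. A rough sketch of that decision, with plain `u64` slots standing in for `Slot`:

```rust
const SLOTS_PER_EPOCH: u64 = 32;

/// The parent state must be advanced only if the proposal falls in a
/// different epoch; within the same epoch the withdrawals list is stable.
fn needs_advance(parent_state_slot: u64, proposal_slot: u64) -> bool {
    parent_state_slot / SLOTS_PER_EPOCH != proposal_slot / SLOTS_PER_EPOCH
}

/// When an advance is needed, the first slot of the proposal epoch is
/// sufficient; advancing further would be wasted work.
fn advance_target_slot(proposal_slot: u64) -> u64 {
    (proposal_slot / SLOTS_PER_EPOCH) * SLOTS_PER_EPOCH
}

fn main() {
    assert!(!needs_advance(33, 40)); // both in epoch 1
    assert!(needs_advance(31, 32)); // epoch 0 -> epoch 1
    assert_eq!(advance_target_slot(40), 32);
    println!("ok");
}
```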
let current_slot_ok = if head_slot == fork_choice_slot { @@ -3848,7 +4087,7 @@ impl BeaconChain { .and_then(|slot_start| { let now = self.slot_clock.now_duration()?; let slot_delay = now.saturating_sub(slot_start); - Some(slot_delay <= max_re_org_slot_delay(self.spec.seconds_per_slot)) + Some(slot_delay <= self.config.re_org_cutoff(self.spec.seconds_per_slot)) }) .unwrap_or(false) } else { @@ -3962,7 +4201,7 @@ impl BeaconChain { /// The provided `state_root_opt` should only ever be set to `Some` if the contained value is /// equal to the root of `state`. Providing this value will serve as an optimization to avoid /// performing a tree hash in some scenarios. - pub async fn produce_block_on_state>( + pub async fn produce_block_on_state + 'static>( self: &Arc, state: BeaconState, state_root_opt: Option, @@ -3997,12 +4236,13 @@ impl BeaconChain { // // Wait for the execution layer to return an execution payload (if one is required). let prepare_payload_handle = partial_beacon_block.prepare_payload_handle.take(); - let execution_payload = if let Some(prepare_payload_handle) = prepare_payload_handle { - let execution_payload = prepare_payload_handle - .await - .map_err(BlockProductionError::TokioJoin)? - .ok_or(BlockProductionError::ShuttingDown)??; - Some(execution_payload) + let block_contents = if let Some(prepare_payload_handle) = prepare_payload_handle { + Some( + prepare_payload_handle + .await + .map_err(BlockProductionError::TokioJoin)? + .ok_or(BlockProductionError::ShuttingDown)??, + ) } else { None }; @@ -4016,7 +4256,7 @@ impl BeaconChain { move || { chain.complete_partial_beacon_block( partial_beacon_block, - execution_payload, + block_contents, verification, ) }, @@ -4027,7 +4267,7 @@ impl BeaconChain { .map_err(BlockProductionError::TokioJoin)? 
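The patch replaces the hard-coded `max_re_org_slot_delay` with a configurable `re_org_cutoff`. For reference, the deleted helper derived its cutoff from `seconds_per_slot / INTERVALS_PER_SLOT / 2` with `INTERVALS_PER_SLOT = 3`, i.e. half the attestation deadline. A sketch of that default and the timing gate it fed (the function names here are illustrative, not Lighthouse's API):

```rust
use std::time::Duration;

/// The pre-patch default cutoff: half of the attestation deadline, where the
/// attestation deadline is one third of the slot (INTERVALS_PER_SLOT = 3).
fn default_re_org_cutoff(seconds_per_slot: u64) -> Duration {
    Duration::from_secs(seconds_per_slot) / 3 / 2
}

/// A 1-slot re-org proposal is only attempted if the proposer is early
/// enough in the slot to still propagate and collect proposer boost.
fn proposing_on_time(slot_delay: Duration, cutoff: Duration) -> bool {
    slot_delay < cutoff
}

fn main() {
    // Mainnet: 12s slots => 2s cutoff.
    let cutoff = default_re_org_cutoff(12);
    assert_eq!(cutoff, Duration::from_secs(2));
    assert!(proposing_on_time(Duration::from_secs(1), cutoff));
    assert!(!proposing_on_time(Duration::from_secs(2), cutoff));
    println!("ok");
}
```

Making the cutoff configurable lets operators trade re-org aggressiveness against propagation margin without recompiling.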
} - fn produce_partial_beacon_block>( + fn produce_partial_beacon_block + 'static>( self: &Arc, mut state: BeaconState, state_root_opt: Option, @@ -4087,7 +4327,7 @@ impl BeaconChain { // allows it to run concurrently with things like attestation packing. let prepare_payload_handle = match &state { BeaconState::Base(_) | BeaconState::Altair(_) => None, - BeaconState::Merge(_) => { + BeaconState::Merge(_) | BeaconState::Capella(_) => { let prepare_payload_handle = get_execution_payload(self.clone(), &state, proposer_index, builder_params)?; Some(prepare_payload_handle) @@ -4100,6 +4340,10 @@ impl BeaconChain { let eth1_data = eth1_chain.eth1_data_for_block_production(&state, &self.spec)?; let deposits = eth1_chain.deposits_for_block_inclusion(&state, ð1_data, &self.spec)?; + let bls_to_execution_changes = self + .op_pool + .get_bls_to_execution_changes(&state, &self.spec); + // Iterate through the naive aggregation pool and ensure all the attestations from there // are included in the operation pool. let unagg_import_timer = @@ -4258,13 +4502,14 @@ impl BeaconChain { voluntary_exits, sync_aggregate, prepare_payload_handle, + bls_to_execution_changes, }) } - fn complete_partial_beacon_block>( + fn complete_partial_beacon_block>( &self, partial_beacon_block: PartialBeaconBlock, - execution_payload: Option, + block_contents: Option>, verification: ProduceBlockVerification, ) -> Result, BlockProductionError> { let PartialBeaconBlock { @@ -4285,6 +4530,7 @@ impl BeaconChain { // this function. We can assume that the handle has already been consumed in order to // produce said `execution_payload`. 
prepare_payload_handle: _, + bls_to_execution_changes, } = partial_beacon_block; let inner_block = match &state { @@ -4340,8 +4586,35 @@ impl BeaconChain { voluntary_exits: voluntary_exits.into(), sync_aggregate: sync_aggregate .ok_or(BlockProductionError::MissingSyncAggregate)?, - execution_payload: execution_payload - .ok_or(BlockProductionError::MissingExecutionPayload)?, + execution_payload: block_contents + .ok_or(BlockProductionError::MissingExecutionPayload)? + .to_payload() + .try_into() + .map_err(|_| BlockProductionError::InvalidPayloadFork)?, + }, + }), + BeaconState::Capella(_) => BeaconBlock::Capella(BeaconBlockCapella { + slot, + proposer_index, + parent_root, + state_root: Hash256::zero(), + body: BeaconBlockBodyCapella { + randao_reveal, + eth1_data, + graffiti, + proposer_slashings: proposer_slashings.into(), + attester_slashings: attester_slashings.into(), + attestations: attestations.into(), + deposits: deposits.into(), + voluntary_exits: voluntary_exits.into(), + sync_aggregate: sync_aggregate + .ok_or(BlockProductionError::MissingSyncAggregate)?, + execution_payload: block_contents + .ok_or(BlockProductionError::MissingExecutionPayload)? + .to_payload() + .try_into() + .map_err(|_| BlockProductionError::InvalidPayloadFork)?, + bls_to_execution_changes: bls_to_execution_changes.into(), }, }), }; @@ -4532,7 +4805,9 @@ impl BeaconChain { // Nothing to do if there are no proposers registered with the EL, exit early to avoid // wasting cycles. - if !execution_layer.has_any_proposer_preparation_data().await { + if !self.config.always_prepare_payload + && !execution_layer.has_any_proposer_preparation_data().await + { return Ok(()); } @@ -4589,40 +4864,60 @@ impl BeaconChain { // If the execution layer doesn't have any proposer data for this validator then we assume // it's not connected to this BN and no action is required. 
let proposer = pre_payload_attributes.proposer_index; - if !execution_layer - .has_proposer_preparation_data(proposer) - .await + if !self.config.always_prepare_payload + && !execution_layer + .has_proposer_preparation_data(proposer) + .await { return Ok(()); } + // Fetch payload attributes from the execution layer's cache, or compute them from scratch + // if no matching entry is found. This saves recomputing the withdrawals which can take + // considerable time to compute if a state load is required. let head_root = forkchoice_update_params.head_root; - let payload_attributes = PayloadAttributes { - timestamp: self - .slot_clock - .start_of(prepare_slot) - .ok_or(Error::InvalidSlot(prepare_slot))? - .as_secs(), - prev_randao: pre_payload_attributes.prev_randao, - suggested_fee_recipient: execution_layer.get_suggested_fee_recipient(proposer).await, - }; + let payload_attributes = if let Some(payload_attributes) = execution_layer + .payload_attributes(prepare_slot, head_root) + .await + { + payload_attributes + } else { + let withdrawals = match self.spec.fork_name_at_slot::(prepare_slot) { + ForkName::Base | ForkName::Altair | ForkName::Merge => None, + ForkName::Capella => { + let chain = self.clone(); + self.spawn_blocking_handle( + move || { + chain.get_expected_withdrawals(&forkchoice_update_params, prepare_slot) + }, + "prepare_beacon_proposer_withdrawals", + ) + .await? + .map(Some)? + } + }; - debug!( - self.log, - "Preparing beacon proposer"; - "payload_attributes" => ?payload_attributes, - "prepare_slot" => prepare_slot, - "validator" => proposer, - "parent_root" => ?head_root, - ); + let payload_attributes = PayloadAttributes::new( + self.slot_clock + .start_of(prepare_slot) + .ok_or(Error::InvalidSlot(prepare_slot))?

+ .as_secs(), + pre_payload_attributes.prev_randao, + execution_layer.get_suggested_fee_recipient(proposer).await, + withdrawals.map(Into::into), + ); - let already_known = execution_layer - .insert_proposer(prepare_slot, head_root, proposer, payload_attributes) - .await; + execution_layer + .insert_proposer( + prepare_slot, + head_root, + proposer, + payload_attributes.clone(), + ) + .await; - // Only push a log to the user if this is the first time we've seen this proposer for this - // slot. - if !already_known { + // Only push a log to the user if this is the first time we've seen this proposer for + // this slot. info!( self.log, "Prepared beacon proposer"; @@ -4630,6 +4925,24 @@ impl BeaconChain { "validator" => proposer, "parent_root" => ?head_root, ); + payload_attributes + }; + + // Push a server-sent event (probably to a block builder or relay). + if let Some(event_handler) = &self.event_handler { + if event_handler.has_payload_attributes_subscribers() { + event_handler.register(EventKind::PayloadAttributes(ForkVersionedResponse { + data: SseExtendedPayloadAttributes { + proposal_slot: prepare_slot, + proposer_index: proposer, + parent_block_root: head_root, + parent_block_number: pre_payload_attributes.parent_block_number, + parent_block_hash: forkchoice_update_params.head_hash.unwrap_or_default(), + payload_attributes: payload_attributes.into(), + }, + version: Some(self.spec.fork_name_at_slot::(prepare_slot)), + })); + } } let till_prepare_slot = @@ -4652,7 +4965,9 @@ impl BeaconChain { // If we are close enough to the proposal slot, send an fcU, which will have payload // attributes filled in by the execution layer cache we just primed. 
- if till_prepare_slot <= self.config.prepare_payload_lookahead { + if self.config.always_prepare_payload + || till_prepare_slot <= self.config.prepare_payload_lookahead + { debug!( self.log, "Sending forkchoiceUpdate for proposer prep"; @@ -4754,7 +5069,7 @@ impl BeaconChain { { // We are a proposer, check for terminal_pow_block_hash if let Some(terminal_pow_block_hash) = execution_layer - .get_terminal_pow_block_hash(&self.spec, payload_attributes.timestamp) + .get_terminal_pow_block_hash(&self.spec, payload_attributes.timestamp()) .await .map_err(Error::ForkchoiceUpdate)? { @@ -4845,7 +5160,7 @@ impl BeaconChain { latest_valid_hash, ref validation_error, } => { - debug!( + warn!( self.log, "Invalid execution payload"; "validation_error" => ?validation_error, @@ -4854,32 +5169,44 @@ impl BeaconChain { "head_block_root" => ?head_block_root, "method" => "fcU", ); - warn!( - self.log, - "Fork choice update invalidated payload"; - "status" => ?status - ); - // This implies that the terminal block was invalid. We are being explicit in - // invalidating only the head block in this case. - if latest_valid_hash == ExecutionBlockHash::zero() { - self.process_invalid_execution_payload( - &InvalidationOperation::InvalidateOne { - block_root: head_block_root, - }, - ) - .await?; - } else { + match latest_valid_hash { + // The `latest_valid_hash` is set to `None` when the EE + // "cannot determine the ancestor of the invalid + // payload". In such a scenario we should only + // invalidate the head block and nothing else. + None => { + self.process_invalid_execution_payload( + &InvalidationOperation::InvalidateOne { + block_root: head_block_root, + }, + ) + .await?; + } + // An all-zeros execution block hash implies that + // the terminal block was invalid. We are being + // explicit in invalidating only the head block in + // this case. 
+ Some(hash) if hash == ExecutionBlockHash::zero() => { + self.process_invalid_execution_payload( + &InvalidationOperation::InvalidateOne { + block_root: head_block_root, + }, + ) + .await?; + } // The execution engine has stated that all blocks between the // `head_execution_block_hash` and `latest_valid_hash` are invalid. - self.process_invalid_execution_payload( - &InvalidationOperation::InvalidateMany { - head_block_root, - always_invalidate_head: true, - latest_valid_ancestor: latest_valid_hash, - }, - ) - .await?; + Some(latest_valid_hash) => { + self.process_invalid_execution_payload( + &InvalidationOperation::InvalidateMany { + head_block_root, + always_invalidate_head: true, + latest_valid_ancestor: latest_valid_hash, + }, + ) + .await?; + } } Err(BeaconChainError::ExecutionForkChoiceUpdateInvalid { status }) @@ -4887,7 +5214,7 @@ impl BeaconChain { PayloadStatus::InvalidBlockHash { ref validation_error, } => { - debug!( + warn!( self.log, "Invalid execution payload block hash"; "validation_error" => ?validation_error, @@ -4895,11 +5222,6 @@ impl BeaconChain { "head_block_root" => ?head_block_root, "method" => "fcU", ); - warn!( - self.log, - "Fork choice update invalidated payload"; - "status" => ?status - ); // The execution engine has stated that the head block is invalid, however it // hasn't returned a latest valid ancestor. // @@ -4929,7 +5251,7 @@ impl BeaconChain { /// Returns `Ok(false)` if the block is pre-Bellatrix, or has `ExecutionStatus::Valid`. /// Returns `Ok(true)` if the block has `ExecutionStatus::Optimistic` or has /// `ExecutionStatus::Invalid`. - pub fn is_optimistic_or_invalid_block>( + pub fn is_optimistic_or_invalid_block>( &self, block: &SignedBeaconBlock, ) -> Result { @@ -4955,7 +5277,7 @@ impl BeaconChain { /// /// There is a potential race condition when syncing where the block_root of `head_block` could /// be pruned from the fork choice store before being read. 
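The rewritten `fcU` error handling above distinguishes three cases of `latest_valid_hash`: `None` (the EE cannot determine the invalid payload's ancestor) and the all-zero hash (the terminal block itself was invalid) both invalidate only the head, while a real hash invalidates everything between the head and that ancestor. A compact sketch of that three-way match, with `u64` standing in for `ExecutionBlockHash` and `Hash256`:

```rust
#[derive(Debug, PartialEq)]
enum Invalidation {
    /// Invalidate only the head block.
    One { block_root: u64 },
    /// Invalidate every block between the head and the last valid ancestor.
    Many { head_block_root: u64, latest_valid_ancestor: u64 },
}

// Stand-in for ExecutionBlockHash::zero().
const ZERO_HASH: u64 = 0;

fn invalidation_for(head_block_root: u64, latest_valid_hash: Option<u64>) -> Invalidation {
    match latest_valid_hash {
        // EE cannot determine the invalid ancestor: only the head is known bad.
        None => Invalidation::One { block_root: head_block_root },
        // All-zero hash: the terminal block was invalid; again, head only.
        Some(h) if h == ZERO_HASH => Invalidation::One { block_root: head_block_root },
        // A concrete ancestor: everything above it is invalid.
        Some(ancestor) => Invalidation::Many {
            head_block_root,
            latest_valid_ancestor: ancestor,
        },
    }
}

fn main() {
    assert_eq!(invalidation_for(7, None), Invalidation::One { block_root: 7 });
    assert_eq!(invalidation_for(7, Some(0)), Invalidation::One { block_root: 7 });
    assert_eq!(
        invalidation_for(7, Some(3)),
        Invalidation::Many { head_block_root: 7, latest_valid_ancestor: 3 }
    );
    println!("ok");
}
```

Collapsing `None` and the zero hash into the conservative "invalidate head only" path avoids over-invalidating ancestors the EE never actually judged.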
- pub fn is_optimistic_or_invalid_head_block>( + pub fn is_optimistic_or_invalid_head_block>( &self, head_block: &SignedBeaconBlock, ) -> Result { diff --git a/beacon_node/beacon_chain/src/beacon_fork_choice_store.rs b/beacon_node/beacon_chain/src/beacon_fork_choice_store.rs index 0b789b8b615..71160fcb638 100644 --- a/beacon_node/beacon_chain/src/beacon_fork_choice_store.rs +++ b/beacon_node/beacon_chain/src/beacon_fork_choice_store.rs @@ -16,10 +16,18 @@ use std::sync::Arc; use store::{Error as StoreError, HotColdDB, ItemStore}; use superstruct::superstruct; use types::{ - BeaconBlockRef, BeaconState, BeaconStateError, Checkpoint, Epoch, EthSpec, ExecPayload, + AbstractExecPayload, BeaconBlockRef, BeaconState, BeaconStateError, Checkpoint, Epoch, EthSpec, Hash256, Slot, }; +/// Ensure this justified checkpoint has an epoch of 0 so that it is never +/// greater than the justified checkpoint and enshrined as the actual justified +/// checkpoint. +const JUNK_BEST_JUSTIFIED_CHECKPOINT: Checkpoint = Checkpoint { + epoch: Epoch::new(0), + root: Hash256::repeat_byte(0), +}; + #[derive(Debug)] pub enum Error { UnableToReadSlot, @@ -144,7 +152,6 @@ pub struct BeaconForkChoiceStore, Cold: ItemStore< finalized_checkpoint: Checkpoint, justified_checkpoint: Checkpoint, justified_balances: JustifiedBalances, - best_justified_checkpoint: Checkpoint, unrealized_justified_checkpoint: Checkpoint, unrealized_finalized_checkpoint: Checkpoint, proposer_boost_root: Hash256, @@ -194,7 +201,6 @@ where justified_checkpoint, justified_balances, finalized_checkpoint, - best_justified_checkpoint: justified_checkpoint, unrealized_justified_checkpoint: justified_checkpoint, unrealized_finalized_checkpoint: finalized_checkpoint, proposer_boost_root: Hash256::zero(), @@ -212,7 +218,7 @@ where finalized_checkpoint: self.finalized_checkpoint, justified_checkpoint: self.justified_checkpoint, justified_balances: self.justified_balances.effective_balances.clone(), - best_justified_checkpoint: 
self.best_justified_checkpoint, + best_justified_checkpoint: JUNK_BEST_JUSTIFIED_CHECKPOINT, unrealized_justified_checkpoint: self.unrealized_justified_checkpoint, unrealized_finalized_checkpoint: self.unrealized_finalized_checkpoint, proposer_boost_root: self.proposer_boost_root, @@ -234,7 +240,6 @@ where finalized_checkpoint: persisted.finalized_checkpoint, justified_checkpoint: persisted.justified_checkpoint, justified_balances, - best_justified_checkpoint: persisted.best_justified_checkpoint, unrealized_justified_checkpoint: persisted.unrealized_justified_checkpoint, unrealized_finalized_checkpoint: persisted.unrealized_finalized_checkpoint, proposer_boost_root: persisted.proposer_boost_root, @@ -260,7 +265,7 @@ where self.time = slot } - fn on_verified_block>( + fn on_verified_block>( &mut self, _block: BeaconBlockRef, block_root: Hash256, @@ -277,10 +282,6 @@ where &self.justified_balances } - fn best_justified_checkpoint(&self) -> &Checkpoint { - &self.best_justified_checkpoint - } - fn finalized_checkpoint(&self) -> &Checkpoint { &self.finalized_checkpoint } @@ -333,10 +334,6 @@ where Ok(()) } - fn set_best_justified_checkpoint(&mut self, checkpoint: Checkpoint) { - self.best_justified_checkpoint = checkpoint - } - fn set_unrealized_justified_checkpoint(&mut self, checkpoint: Checkpoint) { self.unrealized_justified_checkpoint = checkpoint; } diff --git a/beacon_node/beacon_chain/src/beacon_snapshot.rs b/beacon_node/beacon_chain/src/beacon_snapshot.rs index 8491622cb09..7d89df98293 100644 --- a/beacon_node/beacon_chain/src/beacon_snapshot.rs +++ b/beacon_node/beacon_chain/src/beacon_snapshot.rs @@ -1,20 +1,20 @@ use serde_derive::Serialize; use std::sync::Arc; use types::{ - beacon_state::CloneConfig, BeaconState, EthSpec, ExecPayload, FullPayload, Hash256, + beacon_state::CloneConfig, AbstractExecPayload, BeaconState, EthSpec, FullPayload, Hash256, SignedBeaconBlock, }; /// Represents some block and its associated state. 
Generally, this will be used for tracking the /// head, justified head and finalized head. #[derive(Clone, Serialize, PartialEq, Debug)] -pub struct BeaconSnapshot<E: EthSpec, Payload: ExecPayload<E> = FullPayload<E>> { +pub struct BeaconSnapshot<E: EthSpec, Payload: AbstractExecPayload<E> = FullPayload<E>> { pub beacon_block: Arc<SignedBeaconBlock<E, Payload>>, pub beacon_block_root: Hash256, pub beacon_state: BeaconState<E>, } -impl<E: EthSpec, Payload: ExecPayload<E>> BeaconSnapshot<E, Payload> { +impl<E: EthSpec, Payload: AbstractExecPayload<E>> BeaconSnapshot<E, Payload> { /// Create a new checkpoint. pub fn new( beacon_block: Arc<SignedBeaconBlock<E, Payload>>, diff --git a/beacon_node/beacon_chain/src/block_reward.rs b/beacon_node/beacon_chain/src/block_reward.rs index 3bddd2a5215..fd0cfc7e9bd 100644 --- a/beacon_node/beacon_chain/src/block_reward.rs +++ b/beacon_node/beacon_chain/src/block_reward.rs @@ -5,10 +5,10 @@ use state_processing::{ common::get_attesting_indices_from_state, per_block_processing::altair::sync_committee::compute_sync_aggregate_rewards, }; -use types::{BeaconBlockRef, BeaconState, EthSpec, ExecPayload, Hash256}; +use types::{AbstractExecPayload, BeaconBlockRef, BeaconState, EthSpec, Hash256}; impl<T: BeaconChainTypes> BeaconChain<T> { - pub fn compute_block_reward<Payload: ExecPayload<T::EthSpec>>( + pub fn compute_block_reward<Payload: AbstractExecPayload<T::EthSpec>>( &self, block: BeaconBlockRef<'_, T::EthSpec, Payload>, block_root: Hash256, diff --git a/beacon_node/beacon_chain/src/block_verification.rs b/beacon_node/beacon_chain/src/block_verification.rs index ab317e96b96..5102381a1a1 100644 --- a/beacon_node/beacon_chain/src/block_verification.rs +++ b/beacon_node/beacon_chain/src/block_verification.rs @@ -42,6 +42,11 @@ //! END //! //! ``` + +// Ignore this lint for `BlockSlashInfo` which is of comparable size to the non-error types it is +// returned alongside.
+#![allow(clippy::result_large_err)] + use crate::eth1_finalization_cache::Eth1FinalizationData; use crate::execution_payload::{ is_optimistic_candidate_block, validate_execution_payload_for_gossip, validate_merge_block, @@ -83,6 +88,7 @@ use std::time::Duration; use store::{Error as DBError, HotStateSummary, KeyValueStore, StoreOp}; use task_executor::JoinHandle; use tree_hash::TreeHash; +use types::ExecPayload; use types::{ BeaconBlockRef, BeaconState, BeaconStateError, BlindedPayload, ChainSpec, CloneConfig, Epoch, EthSpec, ExecutionBlockHash, Hash256, InconsistentFork, PublicKey, PublicKeyBytes, @@ -274,10 +280,10 @@ pub enum BlockError { /// /// ## Peer scoring /// - /// TODO(merge): reconsider how we score peers for this. - /// - /// The peer sent us an invalid block, but I'm not really sure how to score this in an - /// "optimistic" sync world. + /// The peer sent us an invalid block, we must penalise harshly. + /// If it's actually our fault (e.g. our execution node database is corrupt) we have bigger + /// problems to worry about than losing peers, and we're doing the network a favour by + /// disconnecting. ParentExecutionPayloadInvalid { parent_root: Hash256 }, } @@ -739,7 +745,7 @@ impl GossipVerifiedBlock { // Do not process a block that doesn't descend from the finalized root. // // We check this *before* we load the parent so that we can return a more detailed error. - check_block_is_finalized_descendant( + check_block_is_finalized_checkpoint_or_descendant( chain, &chain.canonical_head.fork_choice_write_lock(), &block, @@ -1180,7 +1186,7 @@ impl ExecutionPendingBlock { .message() .body() .execution_payload() - .map(|full_payload| full_payload.execution_payload.block_hash); + .map(|full_payload| full_payload.block_hash()); // Ensure the block is a candidate for optimistic import. if !is_optimistic_candidate_block(&chain, block.slot(), block.parent_root()).await? 
@@ -1462,7 +1468,6 @@ impl ExecutionPendingBlock { current_slot, indexed_attestation, AttestationFromBlock::True, - &chain.spec, ) { Ok(()) => Ok(()), // Ignore invalid attestations whilst importing attestations from a block. The @@ -1559,12 +1564,12 @@ fn check_block_against_finalized_slot( /// ## Warning /// /// Taking a lock on the `chain.canonical_head.fork_choice` might cause a deadlock here. -pub fn check_block_is_finalized_descendant( +pub fn check_block_is_finalized_checkpoint_or_descendant( chain: &BeaconChain, fork_choice: &BeaconForkChoice, block: &Arc>, ) -> Result<(), BlockError> { - if fork_choice.is_descendant_of_finalized(block.parent_root()) { + if fork_choice.is_finalized_checkpoint_or_descendant(block.parent_root()) { Ok(()) } else { // If fork choice does *not* consider the parent to be a descendant of the finalized block, @@ -1845,7 +1850,7 @@ fn cheap_state_advance_to_obtain_committees<'a, E: EthSpec>( } /// Obtains a read-locked `ValidatorPubkeyCache` from the `chain`. -fn get_validator_pubkey_cache( +pub fn get_validator_pubkey_cache( chain: &BeaconChain, ) -> Result>, BlockError> { chain diff --git a/beacon_node/beacon_chain/src/builder.rs b/beacon_node/beacon_chain/src/builder.rs index 48419d46edb..6ee97a95c1a 100644 --- a/beacon_node/beacon_chain/src/builder.rs +++ b/beacon_node/beacon_chain/src/builder.rs @@ -18,11 +18,11 @@ use crate::{ }; use eth1::Config as Eth1Config; use execution_layer::ExecutionLayer; -use fork_choice::{ForkChoice, ResetPayloadStatuses}; +use fork_choice::{CountUnrealized, ForkChoice, ResetPayloadStatuses}; use futures::channel::mpsc::Sender; use operation_pool::{OperationPool, PersistedOperationPool}; use parking_lot::RwLock; -use proto_array::ReOrgThreshold; +use proto_array::{DisallowedReOrgOffsets, ReOrgThreshold}; use slasher::Slasher; use slog::{crit, error, info, Logger}; use slot_clock::{SlotClock, TestingSlotClock}; @@ -175,6 +175,15 @@ where self } + /// Sets the proposer re-org disallowed offsets list. 
+ pub fn proposer_re_org_disallowed_offsets( + mut self, + disallowed_offsets: DisallowedReOrgOffsets, + ) -> Self { + self.chain_config.re_org_disallowed_offsets = disallowed_offsets; + self + } + /// Sets the store (database). /// /// Should generally be called early in the build chain. @@ -265,7 +274,6 @@ where ResetPayloadStatuses::always_reset_conditionally( self.chain_config.always_reset_payload_statuses, ), - self.chain_config.count_unrealized_full, &self.spec, log, ) @@ -384,7 +392,6 @@ where &genesis.beacon_block, &genesis.beacon_state, current_slot, - self.chain_config.count_unrealized_full, &self.spec, ) .map_err(|e| format!("Unable to initialize ForkChoice: {:?}", e))?; @@ -503,7 +510,6 @@ where &snapshot.beacon_block, &snapshot.beacon_state, current_slot, - self.chain_config.count_unrealized_full, &self.spec, ) .map_err(|e| format!("Unable to initialize ForkChoice: {:?}", e))?; @@ -681,8 +687,7 @@ where store.clone(), Some(current_slot), &self.spec, - self.chain_config.count_unrealized.into(), - self.chain_config.count_unrealized_full, + CountUnrealized::True, )?; } @@ -765,6 +770,7 @@ where let genesis_time = head_snapshot.beacon_state.genesis_time(); let head_for_snapshot_cache = head_snapshot.clone(); let canonical_head = CanonicalHead::new(fork_choice, Arc::new(head_snapshot)); + let shuffling_cache_size = self.chain_config.shuffling_cache_size; let beacon_chain = BeaconChain { spec: self.spec, @@ -800,6 +806,7 @@ where observed_voluntary_exits: <_>::default(), observed_proposer_slashings: <_>::default(), observed_attester_slashings: <_>::default(), + observed_bls_to_execution_changes: <_>::default(), latest_seen_finality_update: <_>::default(), latest_seen_optimistic_update: <_>::default(), eth1_chain: self.eth1_chain, @@ -817,7 +824,7 @@ where DEFAULT_SNAPSHOT_CACHE_SIZE, head_for_snapshot_cache, )), - shuffling_cache: TimeoutRwLock::new(ShufflingCache::new()), + shuffling_cache: TimeoutRwLock::new(ShufflingCache::new(shuffling_cache_size)), 
eth1_finalization_cache: TimeoutRwLock::new(Eth1FinalizationCache::new(log.clone())), beacon_proposer_cache: <_>::default(), block_times_cache: <_>::default(), diff --git a/beacon_node/beacon_chain/src/canonical_head.rs b/beacon_node/beacon_chain/src/canonical_head.rs index dd64e02edf7..0e1c8a5305d 100644 --- a/beacon_node/beacon_chain/src/canonical_head.rs +++ b/beacon_node/beacon_chain/src/canonical_head.rs @@ -45,8 +45,7 @@ use crate::{ }; use eth2::types::{EventKind, SseChainReorg, SseFinalizedCheckpoint, SseHead, SseLateHead}; use fork_choice::{ - CountUnrealizedFull, ExecutionStatus, ForkChoiceView, ForkchoiceUpdateParameters, ProtoBlock, - ResetPayloadStatuses, + ExecutionStatus, ForkChoiceView, ForkchoiceUpdateParameters, ProtoBlock, ResetPayloadStatuses, }; use itertools::process_results; use parking_lot::{Mutex, RwLock, RwLockReadGuard, RwLockWriteGuard}; @@ -167,6 +166,17 @@ impl CachedHead { .map(|payload| payload.prev_randao()) } + /// Returns the execution block number of the block at the head of the chain. + /// + /// Returns an error if the chain is prior to Bellatrix. + pub fn head_block_number(&self) -> Result { + self.snapshot + .beacon_block + .message() + .execution_payload() + .map(|payload| payload.block_number()) + } + /// Returns the active validator count for the current epoch of the head state. /// /// Should only return `None` if the caches have not been built on the head state (this should @@ -274,19 +284,13 @@ impl CanonicalHead { // defensive programming. mut fork_choice_write_lock: RwLockWriteGuard>, reset_payload_statuses: ResetPayloadStatuses, - count_unrealized_full: CountUnrealizedFull, store: &BeaconStore, spec: &ChainSpec, log: &Logger, ) -> Result<(), Error> { - let fork_choice = >::load_fork_choice( - store.clone(), - reset_payload_statuses, - count_unrealized_full, - spec, - log, - )? - .ok_or(Error::MissingPersistedForkChoice)?; + let fork_choice = + >::load_fork_choice(store.clone(), reset_payload_statuses, spec, log)? 
+ .ok_or(Error::MissingPersistedForkChoice)?; let fork_choice_view = fork_choice.cached_fork_choice_view(); let beacon_block_root = fork_choice_view.head_block_root; let beacon_block = store @@ -930,8 +934,12 @@ impl<T: BeaconChainTypes> BeaconChain<T> { .execution_status .is_optimistic_or_invalid(); - self.op_pool - .prune_all(&new_snapshot.beacon_state, self.epoch()?); + self.op_pool.prune_all( + &new_snapshot.beacon_block, + &new_snapshot.beacon_state, + self.epoch()?, + &self.spec, + ); self.observed_block_producers.write().prune( new_view diff --git a/beacon_node/beacon_chain/src/capella_readiness.rs b/beacon_node/beacon_chain/src/capella_readiness.rs new file mode 100644 index 00000000000..bb729d89997 --- /dev/null +++ b/beacon_node/beacon_chain/src/capella_readiness.rs @@ -0,0 +1,122 @@ +//! Provides tools for checking if a node is ready for the Capella upgrade and following merge +//! transition. + +use crate::{BeaconChain, BeaconChainTypes}; +use execution_layer::http::{ + ENGINE_FORKCHOICE_UPDATED_V2, ENGINE_GET_PAYLOAD_V2, ENGINE_NEW_PAYLOAD_V2, +}; +use serde::{Deserialize, Serialize}; +use std::fmt; +use std::time::Duration; +use types::*; + +/// The time before the Capella fork when we will start issuing warnings about preparation. +use super::merge_readiness::SECONDS_IN_A_WEEK; +pub const CAPELLA_READINESS_PREPARATION_SECONDS: u64 = SECONDS_IN_A_WEEK * 2; +pub const ENGINE_CAPABILITIES_REFRESH_INTERVAL: u64 = 300; + +#[derive(Debug, Serialize, Deserialize)] +#[serde(rename_all = "snake_case")] +#[serde(tag = "type")] +pub enum CapellaReadiness { + /// The execution engine is capella-enabled (as far as we can tell) + Ready, + /// We are connected to an execution engine which doesn't support the V2 engine api methods + V2MethodsNotSupported { error: String }, + /// Exchanging capabilities with the EL failed; there might be a problem with + /// connectivity, authentication or a difference in configuration.
+ ExchangeCapabilitiesFailed { error: String }, + /// The user has not configured an execution endpoint + NoExecutionEndpoint, +} + +impl fmt::Display for CapellaReadiness { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + match self { + CapellaReadiness::Ready => { + write!(f, "This node appears ready for Capella.") + } + CapellaReadiness::ExchangeCapabilitiesFailed { error } => write!( + f, + "Could not exchange capabilities with the \ + execution endpoint: {}", + error + ), + CapellaReadiness::NoExecutionEndpoint => write!( + f, + "The --execution-endpoint flag is not specified, this is a \ + requirement post-merge" + ), + CapellaReadiness::V2MethodsNotSupported { error } => write!( + f, + "Execution endpoint does not support Capella methods: {}", + error + ), + } + } +} + +impl BeaconChain { + /// Returns `true` if capella epoch is set and Capella fork has occurred or will + /// occur within `CAPELLA_READINESS_PREPARATION_SECONDS` + pub fn is_time_to_prepare_for_capella(&self, current_slot: Slot) -> bool { + if let Some(capella_epoch) = self.spec.capella_fork_epoch { + let capella_slot = capella_epoch.start_slot(T::EthSpec::slots_per_epoch()); + let capella_readiness_preparation_slots = + CAPELLA_READINESS_PREPARATION_SECONDS / self.spec.seconds_per_slot; + // Return `true` if Capella has happened or is within the preparation time. + current_slot + capella_readiness_preparation_slots > capella_slot + } else { + // The Capella fork epoch has not been defined yet, no need to prepare. + false + } + } + + /// Attempts to connect to the EL and confirm that it is ready for capella. 
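The readiness window in `is_time_to_prepare_for_capella` above is plain slot arithmetic: convert the two-week preparation period into slots, then check whether the current slot is within that many slots of the fork slot. A standalone sketch with bare integers standing in for `Slot`/`Epoch` (mainnet values: 12-second slots, 32 slots per epoch; the fork epoch below is purely illustrative):

```rust
// Constants copied from the patch; the fork epoch used in `main` is hypothetical.
const SECONDS_IN_A_WEEK: u64 = 7 * 24 * 3600;
const CAPELLA_READINESS_PREPARATION_SECONDS: u64 = SECONDS_IN_A_WEEK * 2;

/// Mirrors `is_time_to_prepare_for_capella`, with plain `u64`s for `Slot`/`Epoch`.
fn is_time_to_prepare(
    current_slot: u64,
    capella_fork_epoch: Option<u64>,
    seconds_per_slot: u64,
    slots_per_epoch: u64,
) -> bool {
    match capella_fork_epoch {
        Some(epoch) => {
            let capella_slot = epoch * slots_per_epoch;
            let preparation_slots = CAPELLA_READINESS_PREPARATION_SECONDS / seconds_per_slot;
            // `true` once Capella has happened or is within the preparation window.
            current_slot + preparation_slots > capella_slot
        }
        // Fork epoch not scheduled yet: nothing to prepare for.
        None => false,
    }
}

fn main() {
    // Two weeks of 12-second slots is 100_800 slots.
    assert_eq!(CAPELLA_READINESS_PREPARATION_SECONDS / 12, 100_800);

    let fork_epoch = 194_048u64; // hypothetical
    let fork_slot = fork_epoch * 32;
    // Just inside the window.
    assert!(is_time_to_prepare(fork_slot - 100_799, Some(fork_epoch), 12, 32));
    // Just outside it.
    assert!(!is_time_to_prepare(fork_slot - 100_800, Some(fork_epoch), 12, 32));
    // No fork epoch scheduled.
    assert!(!is_time_to_prepare(fork_slot, None, 12, 32));
    println!("ok");
}
```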
+ pub async fn check_capella_readiness(&self) -> CapellaReadiness { + if let Some(el) = self.execution_layer.as_ref() { + match el + .get_engine_capabilities(Some(Duration::from_secs( + ENGINE_CAPABILITIES_REFRESH_INTERVAL, + ))) + .await + { + Err(e) => { + // The EL was either unreachable or responded with an error + CapellaReadiness::ExchangeCapabilitiesFailed { + error: format!("{:?}", e), + } + } + Ok(capabilities) => { + let mut missing_methods = String::from("Required Methods Unsupported:"); + let mut all_good = true; + if !capabilities.get_payload_v2 { + missing_methods.push(' '); + missing_methods.push_str(ENGINE_GET_PAYLOAD_V2); + all_good = false; + } + if !capabilities.forkchoice_updated_v2 { + missing_methods.push(' '); + missing_methods.push_str(ENGINE_FORKCHOICE_UPDATED_V2); + all_good = false; + } + if !capabilities.new_payload_v2 { + missing_methods.push(' '); + missing_methods.push_str(ENGINE_NEW_PAYLOAD_V2); + all_good = false; + } + + if all_good { + CapellaReadiness::Ready + } else { + CapellaReadiness::V2MethodsNotSupported { + error: missing_methods, + } + } + } + } + } else { + CapellaReadiness::NoExecutionEndpoint + } + } +} diff --git a/beacon_node/beacon_chain/src/chain_config.rs b/beacon_node/beacon_chain/src/chain_config.rs index cce2fbb971f..9921435313d 100644 --- a/beacon_node/beacon_chain/src/chain_config.rs +++ b/beacon_node/beacon_chain/src/chain_config.rs @@ -1,10 +1,12 @@ -pub use proto_array::{CountUnrealizedFull, ReOrgThreshold}; +pub use proto_array::{DisallowedReOrgOffsets, ReOrgThreshold}; use serde_derive::{Deserialize, Serialize}; use std::time::Duration; use types::{Checkpoint, Epoch}; pub const DEFAULT_RE_ORG_THRESHOLD: ReOrgThreshold = ReOrgThreshold(20); pub const DEFAULT_RE_ORG_MAX_EPOCHS_SINCE_FINALIZATION: Epoch = Epoch::new(2); +/// Default to 1/12th of the slot, which is 1 second on mainnet. 
+pub const DEFAULT_RE_ORG_CUTOFF_DENOMINATOR: u32 = 12; pub const DEFAULT_FORK_CHOICE_BEFORE_PROPOSAL_TIMEOUT: u64 = 250; /// Default fraction of a slot lookahead for payload preparation (12/3 = 4 seconds on mainnet). @@ -34,6 +36,13 @@ pub struct ChainConfig { pub re_org_threshold: Option, /// Maximum number of epochs since finalization for attempting a proposer re-org. pub re_org_max_epochs_since_finalization: Epoch, + /// Maximum delay after the start of the slot at which to propose a reorging block. + pub re_org_cutoff_millis: Option, + /// Additional epoch offsets at which re-orging block proposals are not permitted. + /// + /// By default this list is empty, but it can be useful for reacting to network conditions, e.g. + /// slow gossip of re-org blocks at slot 1 in the epoch. + pub re_org_disallowed_offsets: DisallowedReOrgOffsets, /// Number of milliseconds to wait for fork choice before proposing a block. /// /// If set to 0 then block proposal will not wait for fork choice at all. @@ -48,16 +57,11 @@ pub struct ChainConfig { pub builder_fallback_epochs_since_finalization: usize, /// Whether any chain health checks should be considered when deciding whether to use the builder API. pub builder_fallback_disable_checks: bool, - /// When set to `true`, weigh the "unrealized" FFG progression when choosing a head in fork - /// choice. - pub count_unrealized: bool, /// When set to `true`, forget any valid/invalid/optimistic statuses in fork choice during start /// up. pub always_reset_payload_statuses: bool, /// Whether to apply paranoid checks to blocks proposed by this beacon node. pub paranoid_block_proposal: bool, - /// Whether to strictly count unrealized justified votes. - pub count_unrealized_full: CountUnrealizedFull, /// Optionally set timeout for calls to checkpoint sync endpoint. pub checkpoint_sync_url_timeout: u64, /// The offset before the start of a proposal slot at which payload attributes should be sent. 
@@ -67,6 +71,14 @@ pub struct ChainConfig { pub prepare_payload_lookahead: Duration, /// Use EL-free optimistic sync for the finalized part of the chain. pub optimistic_finalized_sync: bool, + /// The size of the shuffling cache. + pub shuffling_cache_size: usize, + /// Whether to send payload attributes every slot, regardless of connected proposers. + /// + /// This is useful for block builders and testing. + pub always_prepare_payload: bool, + /// Whether backfill sync processing should be rate-limited. + pub enable_backfill_rate_limiting: bool, } impl Default for ChainConfig { @@ -79,19 +91,34 @@ impl Default for ChainConfig { max_network_size: 10 * 1_048_576, // 10M re_org_threshold: Some(DEFAULT_RE_ORG_THRESHOLD), re_org_max_epochs_since_finalization: DEFAULT_RE_ORG_MAX_EPOCHS_SINCE_FINALIZATION, + re_org_cutoff_millis: None, + re_org_disallowed_offsets: DisallowedReOrgOffsets::default(), fork_choice_before_proposal_timeout_ms: DEFAULT_FORK_CHOICE_BEFORE_PROPOSAL_TIMEOUT, // Builder fallback configs that are set in `clap` will override these. builder_fallback_skips: 3, builder_fallback_skips_per_epoch: 8, builder_fallback_epochs_since_finalization: 3, builder_fallback_disable_checks: false, - count_unrealized: true, always_reset_payload_statuses: false, paranoid_block_proposal: false, - count_unrealized_full: CountUnrealizedFull::default(), checkpoint_sync_url_timeout: 60, prepare_payload_lookahead: Duration::from_secs(4), + // This value isn't actually read except in tests. optimistic_finalized_sync: true, + shuffling_cache_size: crate::shuffling_cache::DEFAULT_CACHE_SIZE, + always_prepare_payload: false, + enable_backfill_rate_limiting: true, } } } + +impl ChainConfig { + /// The latest delay from the start of the slot at which to attempt a 1-slot re-org.
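The new `ChainConfig::re_org_cutoff` accessor resolves the cutoff from either the explicit millisecond override (`re_org_cutoff_millis`) or the 1/12th-of-a-slot default. A standalone sketch of that resolution, written as a free function with the constant copied from this patch:

```rust
use std::time::Duration;

// Copied from the patch: default to 1/12th of the slot (1 second on mainnet).
const DEFAULT_RE_ORG_CUTOFF_DENOMINATOR: u32 = 12;

/// Mirrors `ChainConfig::re_org_cutoff`: an explicit override wins,
/// otherwise divide the slot duration by the default denominator.
fn re_org_cutoff(re_org_cutoff_millis: Option<u64>, seconds_per_slot: u64) -> Duration {
    re_org_cutoff_millis
        .map(Duration::from_millis)
        .unwrap_or_else(|| {
            Duration::from_secs(seconds_per_slot) / DEFAULT_RE_ORG_CUTOFF_DENOMINATOR
        })
}

fn main() {
    // Mainnet: 12-second slots / 12 = 1 second.
    assert_eq!(re_org_cutoff(None, 12), Duration::from_secs(1));
    // An explicit override takes precedence over the default.
    assert_eq!(re_org_cutoff(Some(250), 12), Duration::from_millis(250));
    println!("ok");
}
```

`Duration` implements `Div<u32>`, which is why the denominator is a `u32` rather than a `u64`.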
+ pub fn re_org_cutoff(&self, seconds_per_slot: u64) -> Duration { + self.re_org_cutoff_millis + .map(Duration::from_millis) + .unwrap_or_else(|| { + Duration::from_secs(seconds_per_slot) / DEFAULT_RE_ORG_CUTOFF_DENOMINATOR + }) + } +} diff --git a/beacon_node/beacon_chain/src/errors.rs b/beacon_node/beacon_chain/src/errors.rs index 17f58b223f4..e789b54a21b 100644 --- a/beacon_node/beacon_chain/src/errors.rs +++ b/beacon_node/beacon_chain/src/errors.rs @@ -1,4 +1,5 @@ use crate::attester_cache::Error as AttesterCacheError; +use crate::beacon_block_streamer::Error as BlockStreamerError; use crate::beacon_chain::ForkChoiceError; use crate::beacon_fork_choice_store::Error as ForkChoiceStoreError; use crate::eth1_chain::Error as Eth1ChainError; @@ -17,8 +18,9 @@ use ssz_types::Error as SszTypesError; use state_processing::{ block_signature_verifier::Error as BlockSignatureVerifierError, per_block_processing::errors::{ - AttestationValidationError, AttesterSlashingValidationError, ExitValidationError, - ProposerSlashingValidationError, SyncCommitteeMessageValidationError, + AttestationValidationError, AttesterSlashingValidationError, + BlsExecutionChangeValidationError, ExitValidationError, ProposerSlashingValidationError, + SyncCommitteeMessageValidationError, }, signature_sets::Error as SignatureSetError, state_advance::Error as StateAdvanceError, @@ -50,7 +52,6 @@ pub enum BeaconChainError { }, SlotClockDidNotStart, NoStateForSlot(Slot), - UnableToFindTargetRoot(Slot), BeaconStateError(BeaconStateError), DBInconsistent(String), DBError(store::Error), @@ -70,6 +71,7 @@ pub enum BeaconChainError { ExitValidationError(ExitValidationError), ProposerSlashingValidationError(ProposerSlashingValidationError), AttesterSlashingValidationError(AttesterSlashingValidationError), + BlsExecutionChangeValidationError(BlsExecutionChangeValidationError), StateSkipTooLarge { start_slot: Slot, requested_slot: Slot, @@ -141,25 +143,28 @@ pub enum BeaconChainError { BuilderMissing, 
ExecutionLayerMissing, BlockVariantLacksExecutionPayload(Hash256), - ExecutionLayerErrorPayloadReconstruction(ExecutionBlockHash, execution_layer::Error), + ExecutionLayerErrorPayloadReconstruction(ExecutionBlockHash, Box), + EngineGetCapabilititesFailed(Box), BlockHashMissingFromExecutionLayer(ExecutionBlockHash), InconsistentPayloadReconstructed { slot: Slot, exec_block_hash: ExecutionBlockHash, - canonical_payload_root: Hash256, - reconstructed_payload_root: Hash256, canonical_transactions_root: Hash256, reconstructed_transactions_root: Hash256, }, + BlockStreamerError(BlockStreamerError), AddPayloadLogicError, ExecutionForkChoiceUpdateFailed(execution_layer::Error), - PrepareProposerBlockingFailed(execution_layer::Error), + PrepareProposerFailed(BlockProcessingError), ExecutionForkChoiceUpdateInvalid { status: PayloadStatus, }, + BlockRewardError, BlockRewardSlotError, BlockRewardAttestationError, BlockRewardSyncError, + SyncCommitteeRewardsSyncError, + AttestationRewardsError, HeadMissingFromForkChoice(Hash256), FinalizedBlockMissingFromForkChoice(Hash256), HeadBlockMissingFromForkChoice(Hash256), @@ -204,6 +209,9 @@ pub enum BeaconChainError { MissingPersistedForkChoice, CommitteePromiseFailed(oneshot_broadcast::Error), MaxCommitteePromises(usize), + BlsToExecutionPriorToCapella, + BlsToExecutionConflictsWithPool, + InconsistentFork(InconsistentFork), ProposerHeadForkChoiceError(fork_choice::Error), } @@ -213,6 +221,7 @@ easy_from_to!(SyncCommitteeMessageValidationError, BeaconChainError); easy_from_to!(ExitValidationError, BeaconChainError); easy_from_to!(ProposerSlashingValidationError, BeaconChainError); easy_from_to!(AttesterSlashingValidationError, BeaconChainError); +easy_from_to!(BlsExecutionChangeValidationError, BeaconChainError); easy_from_to!(SszTypesError, BeaconChainError); easy_from_to!(OpPoolError, BeaconChainError); easy_from_to!(NaiveAggregationError, BeaconChainError); @@ -227,6 +236,7 @@ easy_from_to!(ForkChoiceStoreError, 
BeaconChainError); easy_from_to!(HistoricalBlockError, BeaconChainError); easy_from_to!(StateAdvanceError, BeaconChainError); easy_from_to!(BlockReplayError, BeaconChainError); +easy_from_to!(InconsistentFork, BeaconChainError); #[derive(Debug)] pub enum BlockProductionError { @@ -259,6 +269,7 @@ pub enum BlockProductionError { MissingExecutionPayload, TokioJoin(tokio::task::JoinError), BeaconChain(BeaconChainError), + InvalidPayloadFork, } easy_from_to!(BlockProcessingError, BlockProductionError); diff --git a/beacon_node/beacon_chain/src/events.rs b/beacon_node/beacon_chain/src/events.rs index 6f4415ef4f3..fed05032374 100644 --- a/beacon_node/beacon_chain/src/events.rs +++ b/beacon_node/beacon_chain/src/events.rs @@ -14,6 +14,7 @@ pub struct ServerSentEventHandler { exit_tx: Sender>, chain_reorg_tx: Sender>, contribution_tx: Sender>, + payload_attributes_tx: Sender>, late_head: Sender>, block_reward_tx: Sender>, log: Logger, @@ -32,6 +33,7 @@ impl ServerSentEventHandler { let (exit_tx, _) = broadcast::channel(capacity); let (chain_reorg_tx, _) = broadcast::channel(capacity); let (contribution_tx, _) = broadcast::channel(capacity); + let (payload_attributes_tx, _) = broadcast::channel(capacity); let (late_head, _) = broadcast::channel(capacity); let (block_reward_tx, _) = broadcast::channel(capacity); @@ -43,6 +45,7 @@ impl ServerSentEventHandler { exit_tx, chain_reorg_tx, contribution_tx, + payload_attributes_tx, late_head, block_reward_tx, log, @@ -50,28 +53,55 @@ impl ServerSentEventHandler { } pub fn register(&self, kind: EventKind) { - let result = match kind { - EventKind::Attestation(attestation) => self + let log_count = |name, count| { + trace!( + self.log, + "Registering server-sent event"; + "kind" => name, + "receiver_count" => count + ); + }; + let result = match &kind { + EventKind::Attestation(_) => self .attestation_tx - .send(EventKind::Attestation(attestation)) - .map(|count| trace!(self.log, "Registering server-sent attestation event"; 
"receiver_count" => count)), - EventKind::Block(block) => self.block_tx.send(EventKind::Block(block)) - .map(|count| trace!(self.log, "Registering server-sent block event"; "receiver_count" => count)), - EventKind::FinalizedCheckpoint(checkpoint) => self.finalized_tx - .send(EventKind::FinalizedCheckpoint(checkpoint)) - .map(|count| trace!(self.log, "Registering server-sent finalized checkpoint event"; "receiver_count" => count)), - EventKind::Head(head) => self.head_tx.send(EventKind::Head(head)) - .map(|count| trace!(self.log, "Registering server-sent head event"; "receiver_count" => count)), - EventKind::VoluntaryExit(exit) => self.exit_tx.send(EventKind::VoluntaryExit(exit)) - .map(|count| trace!(self.log, "Registering server-sent voluntary exit event"; "receiver_count" => count)), - EventKind::ChainReorg(reorg) => self.chain_reorg_tx.send(EventKind::ChainReorg(reorg)) - .map(|count| trace!(self.log, "Registering server-sent chain reorg event"; "receiver_count" => count)), - EventKind::ContributionAndProof(contribution_and_proof) => self.contribution_tx.send(EventKind::ContributionAndProof(contribution_and_proof)) - .map(|count| trace!(self.log, "Registering server-sent contribution and proof event"; "receiver_count" => count)), - EventKind::LateHead(late_head) => self.late_head.send(EventKind::LateHead(late_head)) - .map(|count| trace!(self.log, "Registering server-sent late head event"; "receiver_count" => count)), - EventKind::BlockReward(block_reward) => self.block_reward_tx.send(EventKind::BlockReward(block_reward)) - .map(|count| trace!(self.log, "Registering server-sent contribution and proof event"; "receiver_count" => count)), + .send(kind) + .map(|count| log_count("attestation", count)), + EventKind::Block(_) => self + .block_tx + .send(kind) + .map(|count| log_count("block", count)), + EventKind::FinalizedCheckpoint(_) => self + .finalized_tx + .send(kind) + .map(|count| log_count("finalized checkpoint", count)), + EventKind::Head(_) => self + 
.head_tx + .send(kind) + .map(|count| log_count("head", count)), + EventKind::VoluntaryExit(_) => self + .exit_tx + .send(kind) + .map(|count| log_count("exit", count)), + EventKind::ChainReorg(_) => self + .chain_reorg_tx + .send(kind) + .map(|count| log_count("chain reorg", count)), + EventKind::ContributionAndProof(_) => self + .contribution_tx + .send(kind) + .map(|count| log_count("contribution and proof", count)), + EventKind::PayloadAttributes(_) => self + .payload_attributes_tx + .send(kind) + .map(|count| log_count("payload attributes", count)), + EventKind::LateHead(_) => self + .late_head + .send(kind) + .map(|count| log_count("late head", count)), + EventKind::BlockReward(_) => self + .block_reward_tx + .send(kind) + .map(|count| log_count("block reward", count)), }; if let Err(SendError(event)) = result { trace!(self.log, "No receivers registered to listen for event"; "event" => ?event); @@ -106,6 +136,10 @@ impl ServerSentEventHandler { self.contribution_tx.subscribe() } + pub fn subscribe_payload_attributes(&self) -> Receiver> { + self.payload_attributes_tx.subscribe() + } + pub fn subscribe_late_head(&self) -> Receiver> { self.late_head.subscribe() } @@ -142,6 +176,10 @@ impl ServerSentEventHandler { self.contribution_tx.receiver_count() > 0 } + pub fn has_payload_attributes_subscribers(&self) -> bool { + self.payload_attributes_tx.receiver_count() > 0 + } + pub fn has_late_head_subscribers(&self) -> bool { self.late_head.receiver_count() > 0 } diff --git a/beacon_node/beacon_chain/src/execution_payload.rs b/beacon_node/beacon_chain/src/execution_payload.rs index 7435c3a8cc4..1ac7229cc6d 100644 --- a/beacon_node/beacon_chain/src/execution_payload.rs +++ b/beacon_node/beacon_chain/src/execution_payload.rs @@ -12,22 +12,23 @@ use crate::{ BeaconChain, BeaconChainError, BeaconChainTypes, BlockError, BlockProductionError, ExecutionPayloadError, }; -use execution_layer::{BuilderParams, PayloadStatus}; +use execution_layer::{BlockProposalContents, 
BuilderParams, PayloadAttributes, PayloadStatus}; use fork_choice::{InvalidationOperation, PayloadVerificationStatus}; use proto_array::{Block as ProtoBlock, ExecutionStatus}; use slog::{debug, warn}; use slot_clock::SlotClock; use state_processing::per_block_processing::{ - compute_timestamp_at_slot, is_execution_enabled, is_merge_transition_complete, - partially_verify_execution_payload, + compute_timestamp_at_slot, get_expected_withdrawals, is_execution_enabled, + is_merge_transition_complete, partially_verify_execution_payload, }; use std::sync::Arc; use tokio::task::JoinHandle; use tree_hash::TreeHash; use types::*; -pub type PreparePayloadResult = Result; -pub type PreparePayloadHandle = JoinHandle>>; +pub type PreparePayloadResult = + Result, BlockProductionError>; +pub type PreparePayloadHandle = JoinHandle>>; #[derive(PartialEq)] pub enum AllowOptimisticImport { @@ -68,8 +69,13 @@ impl PayloadNotifier { // where we do not send the block to the EL at all. let block_message = block.message(); let payload = block_message.execution_payload()?; - partially_verify_execution_payload(state, block.slot(), payload, &chain.spec) - .map_err(BlockError::PerBlockProcessingError)?; + partially_verify_execution_payload::<_, FullPayload<_>>( + state, + block.slot(), + payload, + &chain.spec, + ) + .map_err(BlockError::PerBlockProcessingError)?; match notify_execution_layer { NotifyExecutionLayer::No if chain.config.optimistic_finalized_sync => { @@ -81,7 +87,7 @@ impl PayloadNotifier { .ok_or(ExecutionPayloadError::NoExecutionConnection)?; if let Err(e) = - execution_layer.verify_payload_block_hash(&payload.execution_payload) + execution_layer.verify_payload_block_hash(payload.execution_payload_ref()) { warn!( chain.log, @@ -140,7 +146,7 @@ async fn notify_new_payload<'a, T: BeaconChainTypes>( .ok_or(ExecutionPayloadError::NoExecutionConnection)?; let new_payload_response = execution_layer - .notify_new_payload(&execution_payload.execution_payload) + 
.notify_new_payload(&execution_payload.into()) .await; match new_payload_response { @@ -153,12 +159,12 @@ async fn notify_new_payload<'a, T: BeaconChainTypes>( latest_valid_hash, ref validation_error, } => { - debug!( + warn!( chain.log, "Invalid execution payload"; "validation_error" => ?validation_error, "latest_valid_hash" => ?latest_valid_hash, - "execution_block_hash" => ?execution_payload.execution_payload.block_hash, + "execution_block_hash" => ?execution_payload.block_hash(), "root" => ?block.tree_hash_root(), "graffiti" => block.body().graffiti().as_utf8_lossy(), "proposer_index" => block.proposer_index(), @@ -166,32 +172,45 @@ async fn notify_new_payload<'a, T: BeaconChainTypes>( "method" => "new_payload", ); - // latest_valid_hash == 0 implies that this was the terminal block - // Hence, we don't need to run `BeaconChain::process_invalid_execution_payload`. - if latest_valid_hash == ExecutionBlockHash::zero() { - return Err(ExecutionPayloadError::RejectedByExecutionEngine { status }.into()); + // Only trigger payload invalidation in fork choice if the + // `latest_valid_hash` is `Some` and non-zero. + // + // A `None` latest valid hash indicates that the EE was unable + // to determine the most recent valid ancestor. Since `block` + // has not yet been applied to fork choice, there's nothing to + // invalidate. + // + // An all-zeros payload indicates that an EIP-3675 check has + // failed regarding the validity of the terminal block. Rather + // than iterating back in the chain to find the terminal block + // and invalidating that, we simply reject this block without + // invalidating anything else. + if let Some(latest_valid_hash) = + latest_valid_hash.filter(|hash| *hash != ExecutionBlockHash::zero()) + { + // This block has not yet been applied to fork choice, so the latest block that was + // imported to fork choice was the parent. 
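An editorial aside on the invalidation rule this hunk introduces: fork choice is only told to invalidate when the execution engine reports a usable `latest_valid_hash` — `None` means no known valid ancestor, and an all-zeros hash means a failed terminal-block (EIP-3675) check, so in both cases only the block in hand is rejected. A minimal, self-contained sketch of that filter (the `Hash` alias and `Action` enum are stand-ins for illustration, not Lighthouse types):

```rust
/// Stand-in for `ExecutionBlockHash` (assumption: a 32-byte hash).
type Hash = [u8; 32];

/// What to do after an `Invalid` new-payload response.
#[derive(Debug, PartialEq)]
enum Action {
    /// Invalidate descendants of this ancestor in fork choice.
    InvalidateFrom(Hash),
    /// Reject only the block in hand; nothing in fork choice to invalidate.
    RejectOnly,
}

/// Mirror of the patch's rule: only invalidate if the EE reported a
/// latest valid ancestor that is `Some` and non-zero.
fn on_invalid_payload(latest_valid_hash: Option<Hash>) -> Action {
    match latest_valid_hash.filter(|h| *h != [0u8; 32]) {
        Some(ancestor) => Action::InvalidateFrom(ancestor),
        None => Action::RejectOnly,
    }
}
```

`Option::filter` collapses the `None` and all-zeros cases into one branch, which is exactly how the hunk avoids the earlier special-case early return.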
+ let latest_root = block.parent_root(); + + chain + .process_invalid_execution_payload(&InvalidationOperation::InvalidateMany { + head_block_root: latest_root, + always_invalidate_head: false, + latest_valid_ancestor: latest_valid_hash, + }) + .await?; } - // This block has not yet been applied to fork choice, so the latest block that was - // imported to fork choice was the parent. - let latest_root = block.parent_root(); - chain - .process_invalid_execution_payload(&InvalidationOperation::InvalidateMany { - head_block_root: latest_root, - always_invalidate_head: false, - latest_valid_ancestor: latest_valid_hash, - }) - .await?; Err(ExecutionPayloadError::RejectedByExecutionEngine { status }.into()) } PayloadStatus::InvalidBlockHash { ref validation_error, } => { - debug!( + warn!( chain.log, "Invalid execution payload block hash"; "validation_error" => ?validation_error, - "execution_block_hash" => ?execution_payload.execution_payload.block_hash, + "execution_block_hash" => ?execution_payload.block_hash(), "root" => ?block.tree_hash_root(), "graffiti" => block.body().graffiti().as_utf8_lossy(), "proposer_index" => block.proposer_index(), @@ -344,7 +363,7 @@ pub fn validate_execution_payload_for_gossip( } }; - if is_merge_transition_complete || execution_payload != &<_>::default() { + if is_merge_transition_complete || !execution_payload.is_default_with_empty_roots() { let expected_timestamp = chain .slot_clock .start_of(block.slot()) @@ -382,13 +401,13 @@ pub fn validate_execution_payload_for_gossip( /// https://github.com/ethereum/consensus-specs/blob/v1.1.5/specs/merge/validator.md#block-proposal pub fn get_execution_payload< T: BeaconChainTypes, - Payload: ExecPayload + Default + Send + 'static, + Payload: AbstractExecPayload + 'static, >( chain: Arc>, state: &BeaconState, proposer_index: u64, builder_params: BuilderParams, -) -> Result, BlockProductionError> { +) -> Result, BlockProductionError> { // Compute all required values from the `state` now to avoid 
needing to pass it into a spawned // task. let spec = &chain.spec; @@ -398,7 +417,13 @@ pub fn get_execution_payload< compute_timestamp_at_slot(state, state.slot(), spec).map_err(BeaconStateError::from)?; let random = *state.get_randao_mix(current_epoch)?; let latest_execution_payload_header_block_hash = - state.latest_execution_payload_header()?.block_hash; + state.latest_execution_payload_header()?.block_hash(); + let withdrawals = match state { + &BeaconState::Capella(_) => Some(get_expected_withdrawals(state, spec)?.into()), + &BeaconState::Merge(_) => None, + // These shouldn't happen but they're here to make the pattern irrefutable + &BeaconState::Base(_) | &BeaconState::Altair(_) => None, + }; // Spawn a task to obtain the execution payload from the EL via a series of async calls. The // `join_handle` can be used to await the result of the function. @@ -415,6 +440,7 @@ pub fn get_execution_payload< proposer_index, latest_execution_payload_header_block_hash, builder_params, + withdrawals, ) .await }, @@ -448,13 +474,15 @@ pub async fn prepare_execution_payload( proposer_index: u64, latest_execution_payload_header_block_hash: ExecutionBlockHash, builder_params: BuilderParams, -) -> Result + withdrawals: Option>, +) -> Result, BlockProductionError> where T: BeaconChainTypes, - Payload: ExecPayload + Default, + Payload: AbstractExecPayload, { let current_epoch = builder_params.slot.epoch(T::EthSpec::slots_per_epoch()); let spec = &chain.spec; + let fork = spec.fork_name_at_slot::(builder_params.slot); let execution_layer = chain .execution_layer .as_ref() @@ -468,7 +496,7 @@ where if is_terminal_block_hash_set && !is_activation_epoch_reached { // Use the "empty" payload if there's a terminal block hash, but we haven't reached the // terminal block epoch yet. 
- return Ok(<_>::default()); + return BlockProposalContents::default_at_fork(fork).map_err(Into::into); } let terminal_pow_block_hash = execution_layer @@ -481,7 +509,7 @@ where } else { // If the merge transition hasn't occurred yet and the EL hasn't found the terminal // block, return an "empty" payload. - return Ok(<_>::default()); + return BlockProposalContents::default_at_fork(fork).map_err(Into::into); } } else { latest_execution_payload_header_block_hash @@ -505,21 +533,26 @@ where .await .map_err(BlockProductionError::BeaconChain)?; + let suggested_fee_recipient = execution_layer + .get_suggested_fee_recipient(proposer_index) + .await; + let payload_attributes = + PayloadAttributes::new(timestamp, random, suggested_fee_recipient, withdrawals); + // Note: the suggested_fee_recipient is stored in the `execution_layer`, it will add this parameter. // // This future is not executed here, it's up to the caller to await it. - let execution_payload = execution_layer + let block_contents = execution_layer .get_payload::( parent_hash, - timestamp, - random, - proposer_index, + &payload_attributes, forkchoice_update_params, builder_params, + fork, &chain.spec, ) .await .map_err(BlockProductionError::GetPayloadFailed)?; - Ok(execution_payload) + Ok(block_contents) } diff --git a/beacon_node/beacon_chain/src/fork_choice_signal.rs b/beacon_node/beacon_chain/src/fork_choice_signal.rs index fd92de661da..f5424d417eb 100644 --- a/beacon_node/beacon_chain/src/fork_choice_signal.rs +++ b/beacon_node/beacon_chain/src/fork_choice_signal.rs @@ -43,7 +43,7 @@ impl ForkChoiceSignalTx { /// /// Return an error if the provided `slot` is strictly less than any previously provided slot. 
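Stepping back from the diff for a moment: the block-production changes above thread an `Option` of withdrawals through payload preparation, computed only when the state is at Capella. A simplified sketch of that fork gating (the `ForkState` enum and `u64` withdrawal stand-in are assumptions for illustration; Lighthouse matches on its `BeaconState` superstruct and calls `get_expected_withdrawals`):

```rust
/// Simplified stand-in for the fork variants of Lighthouse's `BeaconState`.
enum ForkState {
    Base,
    Altair,
    Merge,
    Capella,
}

/// Pre-Capella payloads carry no withdrawals; from Capella onwards the
/// expected withdrawals are computed from the state. The `Base`/`Altair`
/// arms exist only to keep the match irrefutable, as the patch notes.
fn expected_withdrawals(state: &ForkState) -> Option<Vec<u64>> {
    match state {
        // In Lighthouse this calls `get_expected_withdrawals(state, spec)`.
        ForkState::Capella => Some(Vec::new()),
        ForkState::Merge => None,
        ForkState::Base | ForkState::Altair => None,
    }
}
```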
pub fn notify_fork_choice_complete(&self, slot: Slot) -> Result<(), BeaconChainError> { - let &(ref lock, ref condvar) = &*self.pair; + let (lock, condvar) = &*self.pair; let mut current_slot = lock.lock(); @@ -72,7 +72,7 @@ impl Default for ForkChoiceSignalTx { impl ForkChoiceSignalRx { pub fn wait_for_fork_choice(&self, slot: Slot, timeout: Duration) -> ForkChoiceWaitResult { - let &(ref lock, ref condvar) = &*self.pair; + let (lock, condvar) = &*self.pair; let mut current_slot = lock.lock(); diff --git a/beacon_node/beacon_chain/src/fork_revert.rs b/beacon_node/beacon_chain/src/fork_revert.rs index 6d5b5ddc4ae..ef23248aba6 100644 --- a/beacon_node/beacon_chain/src/fork_revert.rs +++ b/beacon_node/beacon_chain/src/fork_revert.rs @@ -1,7 +1,6 @@ use crate::{BeaconForkChoiceStore, BeaconSnapshot}; use fork_choice::{CountUnrealized, ForkChoice, PayloadVerificationStatus}; use itertools::process_results; -use proto_array::CountUnrealizedFull; use slog::{info, warn, Logger}; use state_processing::state_advance::complete_state_advance; use state_processing::{ @@ -102,7 +101,6 @@ pub fn reset_fork_choice_to_finalization, Cold: It current_slot: Option, spec: &ChainSpec, count_unrealized_config: CountUnrealized, - count_unrealized_full_config: CountUnrealizedFull, ) -> Result, E>, String> { // Fetch finalized block. 
let finalized_checkpoint = head_state.finalized_checkpoint(); @@ -156,7 +154,6 @@ pub fn reset_fork_choice_to_finalization, Cold: It &finalized_snapshot.beacon_block, &finalized_snapshot.beacon_state, current_slot, - count_unrealized_full_config, spec, ) .map_err(|e| format!("Unable to reset fork choice for revert: {:?}", e))?; diff --git a/beacon_node/beacon_chain/src/lib.rs b/beacon_node/beacon_chain/src/lib.rs index ae1c5e4b766..be1522a3b80 100644 --- a/beacon_node/beacon_chain/src/lib.rs +++ b/beacon_node/beacon_chain/src/lib.rs @@ -1,6 +1,8 @@ -#![recursion_limit = "128"] // For lazy-static +pub mod attestation_rewards; pub mod attestation_verification; mod attester_cache; +pub mod beacon_block_reward; +mod beacon_block_streamer; mod beacon_chain; mod beacon_fork_choice_store; pub mod beacon_proposer_cache; @@ -10,6 +12,7 @@ mod block_times_cache; mod block_verification; pub mod builder; pub mod canonical_head; +pub mod capella_readiness; pub mod chain_config; mod early_attester_cache; mod errors; @@ -29,7 +32,7 @@ pub mod migrate; mod naive_aggregation_pool; mod observed_aggregates; mod observed_attesters; -mod observed_block_producers; +pub mod observed_block_producers; pub mod observed_operations; pub mod otb_verification_service; mod persisted_beacon_chain; @@ -37,9 +40,10 @@ mod persisted_fork_choice; mod pre_finalization_cache; pub mod proposer_prep_service; pub mod schema_change; -mod shuffling_cache; +pub mod shuffling_cache; mod snapshot_cache; pub mod state_advance_timer; +pub mod sync_committee_rewards; pub mod sync_committee_verification; pub mod test_utils; mod timeout_rw_lock; @@ -53,7 +57,7 @@ pub use self::beacon_chain::{ INVALID_JUSTIFIED_PAYLOAD_SHUTDOWN_REASON, MAXIMUM_GOSSIP_CLOCK_DISPARITY, }; pub use self::beacon_snapshot::BeaconSnapshot; -pub use self::chain_config::{ChainConfig, CountUnrealizedFull}; +pub use self::chain_config::ChainConfig; pub use self::errors::{BeaconChainError, BlockProductionError}; pub use 
self::historical_blocks::HistoricalBlockError; pub use attestation_verification::Error as AttestationError; diff --git a/beacon_node/beacon_chain/src/light_client_optimistic_update_verification.rs b/beacon_node/beacon_chain/src/light_client_optimistic_update_verification.rs index ec9c90e7355..20d7181808a 100644 --- a/beacon_node/beacon_chain/src/light_client_optimistic_update_verification.rs +++ b/beacon_node/beacon_chain/src/light_client_optimistic_update_verification.rs @@ -2,6 +2,7 @@ use crate::{ beacon_chain::MAXIMUM_GOSSIP_CLOCK_DISPARITY, BeaconChain, BeaconChainError, BeaconChainTypes, }; use derivative::Derivative; +use eth2::types::Hash256; use slot_clock::SlotClock; use std::time::Duration; use strum::AsRefStr; @@ -36,6 +37,8 @@ pub enum Error { SigSlotStartIsNone, /// Failed to construct a LightClientOptimisticUpdate from state. FailedConstructingUpdate, + /// Unknown block with parent root. + UnknownBlockParentRoot(Hash256), /// Beacon chain error occured. BeaconChainError(BeaconChainError), LightClientUpdateError(LightClientUpdateError), @@ -58,6 +61,7 @@ impl From for Error { #[derivative(Clone(bound = "T: BeaconChainTypes"))] pub struct VerifiedLightClientOptimisticUpdate { light_client_optimistic_update: LightClientOptimisticUpdate, + pub parent_root: Hash256, seen_timestamp: Duration, } @@ -107,6 +111,16 @@ impl VerifiedLightClientOptimisticUpdate { None => return Err(Error::SigSlotStartIsNone), } + // check if we can process the optimistic update immediately + // otherwise queue + let canonical_root = light_client_optimistic_update + .attested_header + .canonical_root(); + + if canonical_root != head_block.message().parent_root() { + return Err(Error::UnknownBlockParentRoot(canonical_root)); + } + let optimistic_update = LightClientOptimisticUpdate::new(&chain.spec, head_block, &attested_state)?; @@ -119,6 +133,7 @@ impl VerifiedLightClientOptimisticUpdate { Ok(Self { light_client_optimistic_update, + parent_root: canonical_root, seen_timestamp, 
}) } diff --git a/beacon_node/beacon_chain/src/merge_readiness.rs b/beacon_node/beacon_chain/src/merge_readiness.rs index 4ef2102fd51..c66df39eedf 100644 --- a/beacon_node/beacon_chain/src/merge_readiness.rs +++ b/beacon_node/beacon_chain/src/merge_readiness.rs @@ -8,7 +8,7 @@ use std::fmt::Write; use types::*; /// The time before the Bellatrix fork when we will start issuing warnings about preparation. -const SECONDS_IN_A_WEEK: u64 = 604800; +pub const SECONDS_IN_A_WEEK: u64 = 604800; pub const MERGE_READINESS_PREPARATION_SECONDS: u64 = SECONDS_IN_A_WEEK * 2; #[derive(Default, Debug, Serialize, Deserialize)] diff --git a/beacon_node/beacon_chain/src/observed_operations.rs b/beacon_node/beacon_chain/src/observed_operations.rs index 8d8272b67d7..4121111b3ee 100644 --- a/beacon_node/beacon_chain/src/observed_operations.rs +++ b/beacon_node/beacon_chain/src/observed_operations.rs @@ -1,12 +1,12 @@ use derivative::Derivative; -use smallvec::SmallVec; +use smallvec::{smallvec, SmallVec}; use ssz::{Decode, Encode}; -use state_processing::{SigVerifiedOp, VerifyOperation}; +use state_processing::{SigVerifiedOp, VerifyOperation, VerifyOperationAt}; use std::collections::HashSet; use std::marker::PhantomData; use types::{ - AttesterSlashing, BeaconState, ChainSpec, EthSpec, ForkName, ProposerSlashing, - SignedVoluntaryExit, Slot, + AttesterSlashing, BeaconState, ChainSpec, Epoch, EthSpec, ForkName, ProposerSlashing, + SignedBlsToExecutionChange, SignedVoluntaryExit, Slot, }; /// Number of validator indices to store on the stack in `observed_validators`. @@ -39,7 +39,7 @@ pub enum ObservationOutcome { AlreadyKnown, } -/// Trait for exits and slashings which can be observed using `ObservedOperations`. +/// Trait for operations which can be observed using `ObservedOperations`. pub trait ObservableOperation: VerifyOperation + Sized { /// The set of validator indices involved in this operation. 
/// @@ -49,13 +49,13 @@ pub trait ObservableOperation: VerifyOperation + Sized { impl ObservableOperation for SignedVoluntaryExit { fn observed_validators(&self) -> SmallVec<[u64; SMALL_VEC_SIZE]> { - std::iter::once(self.message.validator_index).collect() + smallvec![self.message.validator_index] } } impl ObservableOperation for ProposerSlashing { fn observed_validators(&self) -> SmallVec<[u64; SMALL_VEC_SIZE]> { - std::iter::once(self.signed_header_1.message.proposer_index).collect() + smallvec![self.signed_header_1.message.proposer_index] } } @@ -80,13 +80,23 @@ impl ObservableOperation for AttesterSlashing { } } +impl ObservableOperation for SignedBlsToExecutionChange { + fn observed_validators(&self) -> SmallVec<[u64; SMALL_VEC_SIZE]> { + smallvec![self.message.validator_index] + } +} + impl, E: EthSpec> ObservedOperations { - pub fn verify_and_observe( + pub fn verify_and_observe_parametric( &mut self, op: T, + validate: F, head_state: &BeaconState, spec: &ChainSpec, - ) -> Result, T::Error> { + ) -> Result, T::Error> + where + F: Fn(T) -> Result, T::Error>, + { self.reset_at_fork_boundary(head_state.slot(), spec); let observed_validator_indices = &mut self.observed_validator_indices; @@ -106,7 +116,7 @@ impl, E: EthSpec> ObservedOperations { } // Validate the op using operation-specific logic (`verify_attester_slashing`, etc). - let verified_op = op.validate(head_state, spec)?; + let verified_op = validate(op)?; // Add the relevant indices to the set of known indices to prevent processing of duplicates // in the future. @@ -115,6 +125,16 @@ impl, E: EthSpec> ObservedOperations { Ok(ObservationOutcome::New(verified_op)) } + pub fn verify_and_observe( + &mut self, + op: T, + head_state: &BeaconState, + spec: &ChainSpec, + ) -> Result, T::Error> { + let validate = |op: T| op.validate(head_state, spec); + self.verify_and_observe_parametric(op, validate, head_state, spec) + } + /// Reset the cache when crossing a fork boundary. 
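The `verify_and_observe_parametric` refactor above separates duplicate detection from validation by taking the validator as a closure, so the same observation logic serves both `validate` and the new epoch-parameterised `validate_at`. A self-contained sketch of the shape (the `Observed`/`Outcome` types and the explicit slice of indices are simplifications, not the real `ObservedOperations` API):

```rust
use std::collections::HashSet;

/// Outcome of observing an operation, as in `ObservationOutcome`.
#[derive(Debug, PartialEq)]
enum Outcome<T> {
    New(T),
    AlreadyKnown,
}

/// Remembers validator indices already seen so duplicate exits,
/// slashings, or BLS changes are dropped before re-validation.
struct Observed {
    seen: HashSet<u64>,
}

impl Observed {
    /// Validation is a caller-supplied closure, so callers can plug in
    /// either plain validation or validation at a specific epoch.
    fn verify_and_observe_parametric<T, F, E>(
        &mut self,
        op: T,
        indices: &[u64],
        validate: F,
    ) -> Result<Outcome<T>, E>
    where
        F: Fn(T) -> Result<T, E>,
    {
        // If every involved validator was seen before, skip re-validation.
        if !indices.is_empty() && indices.iter().all(|i| self.seen.contains(i)) {
            return Ok(Outcome::AlreadyKnown);
        }
        let verified = validate(op)?;
        self.seen.extend(indices.iter().copied());
        Ok(Outcome::New(verified))
    }
}
```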
/// /// This prevents an attacker from crafting a self-slashing which is only valid before the fork @@ -134,3 +154,16 @@ impl, E: EthSpec> ObservedOperations { } } } + +impl + VerifyOperationAt, E: EthSpec> ObservedOperations { + pub fn verify_and_observe_at( + &mut self, + op: T, + verify_at_epoch: Epoch, + head_state: &BeaconState, + spec: &ChainSpec, + ) -> Result, T::Error> { + let validate = |op: T| op.validate_at(head_state, verify_at_epoch, spec); + self.verify_and_observe_parametric(op, validate, head_state, spec) + } +} diff --git a/beacon_node/beacon_chain/src/schema_change.rs b/beacon_node/beacon_chain/src/schema_change.rs index 73906b1b586..5808e648a2c 100644 --- a/beacon_node/beacon_chain/src/schema_change.rs +++ b/beacon_node/beacon_chain/src/schema_change.rs @@ -1,6 +1,9 @@ //! Utilities for managing database schema changes. mod migration_schema_v12; mod migration_schema_v13; +mod migration_schema_v14; +mod migration_schema_v15; +mod migration_schema_v16; use crate::beacon_chain::{BeaconChainTypes, ETH1_CACHE_DB_KEY}; use crate::eth1_chain::SszEth1; @@ -114,6 +117,30 @@ pub fn migrate_schema( Ok(()) } + (SchemaVersion(13), SchemaVersion(14)) => { + let ops = migration_schema_v14::upgrade_to_v14::(db.clone(), log)?; + db.store_schema_version_atomically(to, ops) + } + (SchemaVersion(14), SchemaVersion(13)) => { + let ops = migration_schema_v14::downgrade_from_v14::(db.clone(), log)?; + db.store_schema_version_atomically(to, ops) + } + (SchemaVersion(14), SchemaVersion(15)) => { + let ops = migration_schema_v15::upgrade_to_v15::(db.clone(), log)?; + db.store_schema_version_atomically(to, ops) + } + (SchemaVersion(15), SchemaVersion(14)) => { + let ops = migration_schema_v15::downgrade_from_v15::(db.clone(), log)?; + db.store_schema_version_atomically(to, ops) + } + (SchemaVersion(15), SchemaVersion(16)) => { + let ops = migration_schema_v16::upgrade_to_v16::(db.clone(), log)?; + db.store_schema_version_atomically(to, ops) + } + (SchemaVersion(16), 
SchemaVersion(15)) => { + let ops = migration_schema_v16::downgrade_from_v16::(db.clone(), log)?; + db.store_schema_version_atomically(to, ops) + } // Anything else is an error. (_, _) => Err(HotColdDBError::UnsupportedSchemaVersion { target_version: to, diff --git a/beacon_node/beacon_chain/src/schema_change/migration_schema_v12.rs b/beacon_node/beacon_chain/src/schema_change/migration_schema_v12.rs index bb72b28c0ec..c9aa2097f8a 100644 --- a/beacon_node/beacon_chain/src/schema_change/migration_schema_v12.rs +++ b/beacon_node/beacon_chain/src/schema_change/migration_schema_v12.rs @@ -168,16 +168,14 @@ pub fn downgrade_from_v12( log: Logger, ) -> Result, Error> { // Load a V12 op pool and transform it to V5. - let PersistedOperationPoolV12 { + let PersistedOperationPoolV12:: { attestations, sync_contributions, attester_slashings, proposer_slashings, voluntary_exits, - } = if let Some(PersistedOperationPool::::V12(op_pool)) = - db.get_item(&OP_POOL_DB_KEY)? - { - op_pool + } = if let Some(op_pool_v12) = db.get_item(&OP_POOL_DB_KEY)? { + op_pool_v12 } else { debug!(log, "Nothing to do, no operation pool stored"); return Ok(vec![]); diff --git a/beacon_node/beacon_chain/src/schema_change/migration_schema_v14.rs b/beacon_node/beacon_chain/src/schema_change/migration_schema_v14.rs new file mode 100644 index 00000000000..be913d8cc5f --- /dev/null +++ b/beacon_node/beacon_chain/src/schema_change/migration_schema_v14.rs @@ -0,0 +1,125 @@ +use crate::beacon_chain::{BeaconChainTypes, OP_POOL_DB_KEY}; +use operation_pool::{ + PersistedOperationPool, PersistedOperationPoolV12, PersistedOperationPoolV14, +}; +use slog::{debug, error, info, Logger}; +use slot_clock::SlotClock; +use std::sync::Arc; +use std::time::Duration; +use store::{Error, HotColdDB, KeyValueStoreOp, StoreItem}; +use types::{EthSpec, Hash256, Slot}; + +/// The slot clock isn't usually available before the database is initialized, so we construct a +/// temporary slot clock by reading the genesis state. 
It should always exist if the database is +/// initialized at a prior schema version, however we still handle the lack of genesis state +/// gracefully. +fn get_slot_clock( + db: &HotColdDB, + log: &Logger, +) -> Result, Error> { + let spec = db.get_chain_spec(); + let genesis_block = if let Some(block) = db.get_blinded_block(&Hash256::zero())? { + block + } else { + error!(log, "Missing genesis block"); + return Ok(None); + }; + let genesis_state = + if let Some(state) = db.get_state(&genesis_block.state_root(), Some(Slot::new(0)))? { + state + } else { + error!(log, "Missing genesis state"; "state_root" => ?genesis_block.state_root()); + return Ok(None); + }; + Ok(Some(T::SlotClock::new( + spec.genesis_slot, + Duration::from_secs(genesis_state.genesis_time()), + Duration::from_secs(spec.seconds_per_slot), + ))) +} + +pub fn upgrade_to_v14( + db: Arc>, + log: Logger, +) -> Result, Error> { + // Load a V12 op pool and transform it to V14. + let PersistedOperationPoolV12:: { + attestations, + sync_contributions, + attester_slashings, + proposer_slashings, + voluntary_exits, + } = if let Some(op_pool_v12) = db.get_item(&OP_POOL_DB_KEY)? { + op_pool_v12 + } else { + debug!(log, "Nothing to do, no operation pool stored"); + return Ok(vec![]); + }; + + // initialize with empty vector + let bls_to_execution_changes = vec![]; + let v14 = PersistedOperationPool::V14(PersistedOperationPoolV14 { + attestations, + sync_contributions, + attester_slashings, + proposer_slashings, + voluntary_exits, + bls_to_execution_changes, + }); + Ok(vec![v14.as_kv_store_op(OP_POOL_DB_KEY)]) +} + +pub fn downgrade_from_v14( + db: Arc>, + log: Logger, +) -> Result, Error> { + // We cannot downgrade from V14 once the Capella fork has been reached because there will + // be HistoricalSummaries stored in the database instead of HistoricalRoots and prior versions + // of Lighthouse can't handle that. 
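The downgrade guard described here reduces to a single predicate: once the Capella fork epoch has been reached, the database stores `HistoricalSummaries` that pre-v14 schemas cannot represent, so the downgrade must be refused. A sketch of that check (epochs as plain `u64` is an assumption; Lighthouse uses an `Epoch` newtype):

```rust
/// Returns true when downgrading below schema v14 is still safe.
fn can_downgrade_from_v14(capella_fork_epoch: Option<u64>, current_epoch: u64) -> bool {
    match capella_fork_epoch {
        // Capella scheduled: safe only while strictly before the fork.
        Some(fork_epoch) => current_epoch < fork_epoch,
        // Capella not scheduled on this network: always safe.
        None => true,
    }
}
```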
+ if let Some(capella_fork_epoch) = db.get_chain_spec().capella_fork_epoch { + let current_epoch = get_slot_clock::(&db, &log)? + .and_then(|clock| clock.now()) + .map(|slot| slot.epoch(T::EthSpec::slots_per_epoch())) + .ok_or(Error::SlotClockUnavailableForMigration)?; + + if current_epoch >= capella_fork_epoch { + error!( + log, + "Capella already active: v14+ is mandatory"; + "current_epoch" => current_epoch, + "capella_fork_epoch" => capella_fork_epoch, + ); + return Err(Error::UnableToDowngrade); + } + } + + // Load a V14 op pool and transform it to V12. + let PersistedOperationPoolV14:: { + attestations, + sync_contributions, + attester_slashings, + proposer_slashings, + voluntary_exits, + bls_to_execution_changes, + } = if let Some(op_pool) = db.get_item(&OP_POOL_DB_KEY)? { + op_pool + } else { + debug!(log, "Nothing to do, no operation pool stored"); + return Ok(vec![]); + }; + + info!( + log, + "Dropping bls_to_execution_changes from pool"; + "count" => bls_to_execution_changes.len(), + ); + + let v12 = PersistedOperationPoolV12 { + attestations, + sync_contributions, + attester_slashings, + proposer_slashings, + voluntary_exits, + }; + Ok(vec![v12.as_kv_store_op(OP_POOL_DB_KEY)]) +} diff --git a/beacon_node/beacon_chain/src/schema_change/migration_schema_v15.rs b/beacon_node/beacon_chain/src/schema_change/migration_schema_v15.rs new file mode 100644 index 00000000000..07c86bd931f --- /dev/null +++ b/beacon_node/beacon_chain/src/schema_change/migration_schema_v15.rs @@ -0,0 +1,76 @@ +use crate::beacon_chain::{BeaconChainTypes, OP_POOL_DB_KEY}; +use operation_pool::{ + PersistedOperationPool, PersistedOperationPoolV14, PersistedOperationPoolV15, +}; +use slog::{debug, info, Logger}; +use std::sync::Arc; +use store::{Error, HotColdDB, KeyValueStoreOp, StoreItem}; + +pub fn upgrade_to_v15( + db: Arc>, + log: Logger, +) -> Result, Error> { + // Load a V14 op pool and transform it to V15. 
+ let PersistedOperationPoolV14:: { + attestations, + sync_contributions, + attester_slashings, + proposer_slashings, + voluntary_exits, + bls_to_execution_changes, + } = if let Some(op_pool_v14) = db.get_item(&OP_POOL_DB_KEY)? { + op_pool_v14 + } else { + debug!(log, "Nothing to do, no operation pool stored"); + return Ok(vec![]); + }; + + let v15 = PersistedOperationPool::V15(PersistedOperationPoolV15 { + attestations, + sync_contributions, + attester_slashings, + proposer_slashings, + voluntary_exits, + bls_to_execution_changes, + // Initialize with empty set + capella_bls_change_broadcast_indices: <_>::default(), + }); + Ok(vec![v15.as_kv_store_op(OP_POOL_DB_KEY)]) +} + +pub fn downgrade_from_v15( + db: Arc>, + log: Logger, +) -> Result, Error> { + // Load a V15 op pool and transform it to V14. + let PersistedOperationPoolV15:: { + attestations, + sync_contributions, + attester_slashings, + proposer_slashings, + voluntary_exits, + bls_to_execution_changes, + capella_bls_change_broadcast_indices, + } = if let Some(op_pool) = db.get_item(&OP_POOL_DB_KEY)? 
{ + op_pool + } else { + debug!(log, "Nothing to do, no operation pool stored"); + return Ok(vec![]); + }; + + info!( + log, + "Forgetting address changes for Capella broadcast"; + "count" => capella_bls_change_broadcast_indices.len(), + ); + + let v14 = PersistedOperationPoolV14 { + attestations, + sync_contributions, + attester_slashings, + proposer_slashings, + voluntary_exits, + bls_to_execution_changes, + }; + Ok(vec![v14.as_kv_store_op(OP_POOL_DB_KEY)]) +} diff --git a/beacon_node/beacon_chain/src/schema_change/migration_schema_v16.rs b/beacon_node/beacon_chain/src/schema_change/migration_schema_v16.rs new file mode 100644 index 00000000000..230573b0288 --- /dev/null +++ b/beacon_node/beacon_chain/src/schema_change/migration_schema_v16.rs @@ -0,0 +1,46 @@ +use crate::beacon_chain::{BeaconChainTypes, FORK_CHOICE_DB_KEY}; +use crate::persisted_fork_choice::PersistedForkChoiceV11; +use slog::{debug, Logger}; +use std::sync::Arc; +use store::{Error, HotColdDB, KeyValueStoreOp, StoreItem}; + +pub fn upgrade_to_v16( + db: Arc>, + log: Logger, +) -> Result, Error> { + drop_balances_cache::(db, log) +} + +pub fn downgrade_from_v16( + db: Arc>, + log: Logger, +) -> Result, Error> { + drop_balances_cache::(db, log) +} + +/// Drop the balances cache from the fork choice store. +/// +/// There aren't any type-level changes in this schema migration, however the +/// way that we compute the `JustifiedBalances` has changed due to: +/// https://github.com/sigp/lighthouse/pull/3962 +pub fn drop_balances_cache( + db: Arc>, + log: Logger, +) -> Result, Error> { + let mut persisted_fork_choice = db + .get_item::(&FORK_CHOICE_DB_KEY)? + .ok_or_else(|| Error::SchemaMigrationError("fork choice missing from database".into()))?; + + debug!( + log, + "Dropping fork choice balances cache"; + "item_count" => persisted_fork_choice.fork_choice_store.balances_cache.items.len() + ); + + // Drop all items in the balances cache. 
+ persisted_fork_choice.fork_choice_store.balances_cache = <_>::default(); + + let kv_op = persisted_fork_choice.as_kv_store_op(FORK_CHOICE_DB_KEY); + + Ok(vec![kv_op]) +} diff --git a/beacon_node/beacon_chain/src/shuffling_cache.rs b/beacon_node/beacon_chain/src/shuffling_cache.rs index a01847a0e13..91a1e24d82b 100644 --- a/beacon_node/beacon_chain/src/shuffling_cache.rs +++ b/beacon_node/beacon_chain/src/shuffling_cache.rs @@ -9,7 +9,7 @@ use types::{beacon_state::CommitteeCache, AttestationShufflingId, Epoch, Hash256 /// Each entry should be `8 + 800,000 = 800,008` bytes in size with 100k validators. (8-byte hash + /// 100k indices). Therefore, this cache should be approx `16 * 800,008 = 12.8 MB`. (Note: this /// ignores a few extra bytes in the caches that should be insignificant compared to the indices). -const CACHE_SIZE: usize = 16; +pub const DEFAULT_CACHE_SIZE: usize = 16; /// The maximum number of concurrent committee cache "promises" that can be issued. In effect, this /// limits the number of concurrent states that can be loaded into memory for the committee cache. @@ -54,9 +54,9 @@ pub struct ShufflingCache { } impl ShufflingCache { - pub fn new() -> Self { + pub fn new(cache_size: usize) -> Self { Self { - cache: LruCache::new(CACHE_SIZE), + cache: LruCache::new(cache_size), } } @@ -172,7 +172,7 @@ impl ToArcCommitteeCache for Arc { impl Default for ShufflingCache { fn default() -> Self { - Self::new() + Self::new(DEFAULT_CACHE_SIZE) } } @@ -249,7 +249,7 @@ mod test { fn resolved_promise() { let (committee_a, _) = committee_caches(); let id_a = shuffling_id(1); - let mut cache = ShufflingCache::new(); + let mut cache = ShufflingCache::default(); // Create a promise. let sender = cache.create_promise(id_a.clone()).unwrap(); @@ -276,7 +276,7 @@ mod test { #[test] fn unresolved_promise() { let id_a = shuffling_id(1); - let mut cache = ShufflingCache::new(); + let mut cache = ShufflingCache::default(); // Create a promise. 
let sender = cache.create_promise(id_a.clone()).unwrap(); @@ -301,7 +301,7 @@ mod test { fn two_promises() { let (committee_a, committee_b) = committee_caches(); let (id_a, id_b) = (shuffling_id(1), shuffling_id(2)); - let mut cache = ShufflingCache::new(); + let mut cache = ShufflingCache::default(); // Create promise A. let sender_a = cache.create_promise(id_a.clone()).unwrap(); @@ -355,7 +355,7 @@ mod test { #[test] fn too_many_promises() { - let mut cache = ShufflingCache::new(); + let mut cache = ShufflingCache::default(); for i in 0..MAX_CONCURRENT_PROMISES { cache.create_promise(shuffling_id(i as u64)).unwrap(); diff --git a/beacon_node/beacon_chain/src/sync_committee_rewards.rs b/beacon_node/beacon_chain/src/sync_committee_rewards.rs new file mode 100644 index 00000000000..2221aa1d5eb --- /dev/null +++ b/beacon_node/beacon_chain/src/sync_committee_rewards.rs @@ -0,0 +1,87 @@ +use crate::{BeaconChain, BeaconChainError, BeaconChainTypes}; + +use eth2::lighthouse::SyncCommitteeReward; +use safe_arith::SafeArith; +use slog::error; +use state_processing::per_block_processing::altair::sync_committee::compute_sync_aggregate_rewards; +use std::collections::HashMap; +use store::RelativeEpoch; +use types::{AbstractExecPayload, BeaconBlockRef, BeaconState}; + +impl BeaconChain { + pub fn compute_sync_committee_rewards>( + &self, + block: BeaconBlockRef<'_, T::EthSpec, Payload>, + state: &mut BeaconState, + ) -> Result, BeaconChainError> { + if block.slot() != state.slot() { + return Err(BeaconChainError::BlockRewardSlotError); + } + + let spec = &self.spec; + + state.build_committee_cache(RelativeEpoch::Current, spec)?; + + let sync_aggregate = block.body().sync_aggregate()?; + + let sync_committee = state.current_sync_committee()?.clone(); + + let sync_committee_indices = state.get_sync_committee_indices(&sync_committee)?; + + let (participant_reward_value, proposer_reward_per_bit) = + compute_sync_aggregate_rewards(state, spec).map_err(|e| { + error!( + self.log, 
"Error calculating sync aggregate rewards"; + "error" => ?e + ); + BeaconChainError::SyncCommitteeRewardsSyncError + })?; + + let mut balances = HashMap::<usize, u64>::new(); + + let mut total_proposer_rewards = 0; + let proposer_index = state.get_beacon_proposer_index(block.slot(), spec)?; + + // Apply rewards to participant balances. Keep track of proposer rewards + for (validator_index, participant_bit) in sync_committee_indices + .iter() + .zip(sync_aggregate.sync_committee_bits.iter()) + { + let participant_balance = balances + .entry(*validator_index) + .or_insert_with(|| state.balances()[*validator_index]); + + if participant_bit { + participant_balance.safe_add_assign(participant_reward_value)?; + + balances + .entry(proposer_index) + .or_insert_with(|| state.balances()[proposer_index]) + .safe_add_assign(proposer_reward_per_bit)?; + + total_proposer_rewards.safe_add_assign(proposer_reward_per_bit)?; + } else { + *participant_balance = participant_balance.saturating_sub(participant_reward_value); + } + } + + Ok(balances + .iter() + .filter_map(|(i, new_balance)| { + let reward = if *i != proposer_index { + *new_balance as i64 - state.balances()[*i] as i64 + } else if sync_committee_indices.contains(i) { + *new_balance as i64 + - state.balances()[*i] as i64 + - total_proposer_rewards as i64 + } else { + return None; + }; + Some(SyncCommitteeReward { + validator_index: *i as u64, + reward, + }) + }) + .collect()) + } +} diff --git a/beacon_node/beacon_chain/src/test_utils.rs b/beacon_node/beacon_chain/src/test_utils.rs index 66de3f02d23..3c5d1fd3b1a 100644 --- a/beacon_node/beacon_chain/src/test_utils.rs +++ b/beacon_node/beacon_chain/src/test_utils.rs @@ -2,6 +2,7 @@ pub use crate::persisted_beacon_chain::PersistedBeaconChain; pub use crate::{ beacon_chain::{BEACON_CHAIN_DB_KEY, ETH1_CACHE_DB_KEY, FORK_CHOICE_DB_KEY, OP_POOL_DB_KEY}, migrate::MigratorConfig, + sync_committee_verification::Error as SyncCommitteeError,
validator_monitor::DEFAULT_INDIVIDUAL_TRACKING_THRESHOLD, BeaconChainError, NotifyExecutionLayer, ProduceBlockVerification, }; @@ -12,17 +13,17 @@ use crate::{ StateSkipConfig, }; use bls::get_withdrawal_credentials; -use execution_layer::test_utils::DEFAULT_JWT_SECRET; use execution_layer::{ auth::JwtKey, test_utils::{ - ExecutionBlockGenerator, MockExecutionLayer, TestingBuilder, DEFAULT_TERMINAL_BLOCK, + ExecutionBlockGenerator, MockExecutionLayer, TestingBuilder, DEFAULT_JWT_SECRET, + DEFAULT_TERMINAL_BLOCK, }, ExecutionLayer, }; use fork_choice::CountUnrealized; use futures::channel::mpsc::Receiver; -pub use genesis::{interop_genesis_state, DEFAULT_ETH1_BLOCK_HASH}; +pub use genesis::{interop_genesis_state_with_eth1, DEFAULT_ETH1_BLOCK_HASH}; use int_to_bytes::int_to_bytes32; use merkle_proof::MerkleTree; use parking_lot::Mutex; @@ -107,6 +108,14 @@ pub enum AttestationStrategy { SomeValidators(Vec<usize>), } +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum SyncCommitteeStrategy { + /// All sync committee validators sign. + AllValidators, + /// No validators sign. + NoValidators, +} + /// Indicates whether the `BeaconChainHarness` should use the `state.current_sync_committee` or /// `state.next_sync_committee` when creating sync messages or contributions.
#[derive(Clone, Debug)] @@ -148,6 +157,7 @@ pub struct Builder<T: BeaconChainTypes> { eth_spec_instance: T::EthSpec, spec: Option<ChainSpec>, validator_keypairs: Option<Vec<Keypair>>, + withdrawal_keypairs: Vec<Option<Keypair>>, chain_config: Option<ChainConfig>, store_config: Option<StoreConfig>, #[allow(clippy::type_complexity)] @@ -179,7 +189,7 @@ impl<E: EthSpec> Builder<EphemeralHarnessType<E>> { .unwrap(), ); let mutator = move |builder: BeaconChainBuilder<_>| { - let genesis_state = interop_genesis_state::<E>( + let genesis_state = interop_genesis_state_with_eth1::<E>( &validator_keypairs, HARNESS_GENESIS_TIME, Hash256::from_slice(DEFAULT_ETH1_BLOCK_HASH), @@ -240,7 +250,7 @@ impl<E: EthSpec> Builder<DiskHarnessType<E>> { .expect("cannot build without validator keypairs"); let mutator = move |builder: BeaconChainBuilder<_>| { - let genesis_state = interop_genesis_state::<E>( + let genesis_state = interop_genesis_state_with_eth1::<E>( &validator_keypairs, HARNESS_GENESIS_TIME, Hash256::from_slice(DEFAULT_ETH1_BLOCK_HASH), @@ -282,6 +292,7 @@ where eth_spec_instance, spec: None, validator_keypairs: None, + withdrawal_keypairs: vec![], chain_config: None, store_config: None, store: None, @@ -307,6 +318,26 @@ where self } + /// Initializes the BLS withdrawal keypairs for `num_keypairs` validators to + /// the "deterministic" values, regardless of whether or not the validator has + /// a BLS or execution address in the genesis deposits. + /// + /// This aligns with the withdrawal commitments used in the "interop" + /// genesis states.
+ pub fn deterministic_withdrawal_keypairs(self, num_keypairs: usize) -> Self { + self.withdrawal_keypairs( + types::test_utils::generate_deterministic_keypairs(num_keypairs) + .into_iter() + .map(Option::Some) + .collect(), + ) + } + + pub fn withdrawal_keypairs(mut self, withdrawal_keypairs: Vec<Option<Keypair>>) -> Self { + self.withdrawal_keypairs = withdrawal_keypairs; + self + } + pub fn default_spec(self) -> Self { self.spec_or_default(None) } @@ -384,15 +415,35 @@ where self } + pub fn recalculate_fork_times_with_genesis(mut self, genesis_time: u64) -> Self { + let mock = self + .mock_execution_layer + .as_mut() + .expect("must have mock execution layer to recalculate fork times"); + let spec = self + .spec + .clone() + .expect("cannot recalculate fork times without spec"); + mock.server.execution_block_generator().shanghai_time = + spec.capella_fork_epoch.map(|epoch| { + genesis_time + spec.seconds_per_slot * E::slots_per_epoch() * epoch.as_u64() + }); + + self + } + pub fn mock_execution_layer(mut self) -> Self { let spec = self.spec.clone().expect("cannot build without spec"); + let shanghai_time = spec.capella_fork_epoch.map(|epoch| { + HARNESS_GENESIS_TIME + spec.seconds_per_slot * E::slots_per_epoch() * epoch.as_u64() + }); let mock = MockExecutionLayer::new( self.runtime.task_executor.clone(), - spec.terminal_total_difficulty, DEFAULT_TERMINAL_BLOCK, - spec.terminal_block_hash, - spec.terminal_block_hash_activation_epoch, + shanghai_time, + None, Some(JwtKey::from_slice(&DEFAULT_JWT_SECRET).unwrap()), + spec, None, ); self.execution_layer = Some(mock.el.clone()); @@ -400,19 +451,26 @@ where self } - pub fn mock_execution_layer_with_builder(mut self, beacon_url: SensitiveUrl) -> Self { + pub fn mock_execution_layer_with_builder( + mut self, + beacon_url: SensitiveUrl, + builder_threshold: Option<u128>, + ) -> Self { // Get a random unused port - let port = unused_port::unused_tcp_port().unwrap(); + let port = unused_port::unused_tcp4_port().unwrap(); let builder_url =
SensitiveUrl::parse(format!("http://127.0.0.1:{port}").as_str()).unwrap(); let spec = self.spec.clone().expect("cannot build without spec"); + let shanghai_time = spec.capella_fork_epoch.map(|epoch| { + HARNESS_GENESIS_TIME + spec.seconds_per_slot * E::slots_per_epoch() * epoch.as_u64() + }); let mock_el = MockExecutionLayer::new( self.runtime.task_executor.clone(), - spec.terminal_total_difficulty, DEFAULT_TERMINAL_BLOCK, - spec.terminal_block_hash, - spec.terminal_block_hash_activation_epoch, + shanghai_time, + builder_threshold, Some(JwtKey::from_slice(&DEFAULT_JWT_SECRET).unwrap()), + spec.clone(), Some(builder_url.clone()), ) .move_to_terminal_block(); @@ -504,6 +562,7 @@ where spec: chain.spec.clone(), chain: Arc::new(chain), validator_keypairs, + withdrawal_keypairs: self.withdrawal_keypairs, shutdown_receiver: Arc::new(Mutex::new(shutdown_receiver)), runtime: self.runtime, mock_execution_layer: self.mock_execution_layer, @@ -519,6 +578,12 @@ where /// Used for testing. pub struct BeaconChainHarness<T: BeaconChainTypes> { pub validator_keypairs: Vec<Keypair>, + /// Optional BLS withdrawal keys for each validator. + /// + /// If a validator index is missing from this vec or their entry is `None` then either + /// no BLS withdrawal key was set for them (they had an address from genesis) or the test + /// initializer neglected to set this field.
+ pub withdrawal_keypairs: Vec<Option<Keypair>>, pub chain: Arc<BeaconChain<T>>, pub spec: ChainSpec, @@ -1430,6 +1495,44 @@ where .sign(sk, &fork, genesis_validators_root, &self.chain.spec) } + pub fn make_bls_to_execution_change( + &self, + validator_index: u64, + address: Address, + ) -> SignedBlsToExecutionChange { + let keypair = self.get_withdrawal_keypair(validator_index); + self.make_bls_to_execution_change_with_keys( + validator_index, + address, + &keypair.pk, + &keypair.sk, + ) + } + + pub fn make_bls_to_execution_change_with_keys( + &self, + validator_index: u64, + address: Address, + pubkey: &PublicKey, + secret_key: &SecretKey, + ) -> SignedBlsToExecutionChange { + let genesis_validators_root = self.chain.genesis_validators_root; + BlsToExecutionChange { + validator_index, + from_bls_pubkey: pubkey.compress(), + to_execution_address: address, + } + .sign(secret_key, genesis_validators_root, &self.chain.spec) + } + + pub fn get_withdrawal_keypair(&self, validator_index: u64) -> &Keypair { + self.withdrawal_keypairs + .get(validator_index as usize) + .expect("BLS withdrawal key missing from harness") + .as_ref() + .expect("no withdrawal key for validator") + } + pub fn add_voluntary_exit( &self, block: &mut BeaconBlock<E>, @@ -1657,15 +1760,64 @@ where self.process_attestations(attestations); } + pub fn sync_committee_sign_block( + &self, + state: &BeaconState<E>, + block_hash: Hash256, + slot: Slot, + relative_sync_committee: RelativeSyncCommittee, + ) { + let sync_contributions = + self.make_sync_contributions(state, block_hash, slot, relative_sync_committee); + self.process_sync_contributions(sync_contributions).unwrap() + } + pub async fn add_attested_block_at_slot( &self, slot: Slot, state: BeaconState<E>, state_root: Hash256, validators: &[usize], + ) -> Result<(SignedBeaconBlockHash, BeaconState<E>), BlockError<E>> { + self.add_attested_block_at_slot_with_sync( + slot, + state, + state_root, + validators, + SyncCommitteeStrategy::NoValidators, + ) + .await + } + + pub async fn
add_attested_block_at_slot_with_sync( + &self, + slot: Slot, + state: BeaconState<E>, + state_root: Hash256, + validators: &[usize], + sync_committee_strategy: SyncCommitteeStrategy, ) -> Result<(SignedBeaconBlockHash, BeaconState<E>), BlockError<E>> { let (block_hash, block, state) = self.add_block_at_slot(slot, state).await?; self.attest_block(&state, state_root, block_hash, &block, validators); + + if sync_committee_strategy == SyncCommitteeStrategy::AllValidators + && state.current_sync_committee().is_ok() + { + self.sync_committee_sign_block( + &state, + block_hash.into(), + slot, + if (slot + 1).epoch(E::slots_per_epoch()) + % self.spec.epochs_per_sync_committee_period + == 0 + { + RelativeSyncCommittee::Next + } else { + RelativeSyncCommittee::Current + }, + ); + } + Ok((block_hash, state)) } @@ -1675,10 +1827,35 @@ where state_root: Hash256, slots: &[Slot], validators: &[usize], + ) -> AddBlocksResult<E> { + self.add_attested_blocks_at_slots_with_sync( + state, + state_root, + slots, + validators, + SyncCommitteeStrategy::NoValidators, + ) + .await + } + + pub async fn add_attested_blocks_at_slots_with_sync( + &self, + state: BeaconState<E>, + state_root: Hash256, + slots: &[Slot], + validators: &[usize], + sync_committee_strategy: SyncCommitteeStrategy, ) -> AddBlocksResult<E> { assert!(!slots.is_empty()); - self.add_attested_blocks_at_slots_given_lbh(state, state_root, slots, validators, None) - .await + self.add_attested_blocks_at_slots_given_lbh( + state, + state_root, + slots, + validators, + None, + sync_committee_strategy, + ) + .await } async fn add_attested_blocks_at_slots_given_lbh( @@ -1688,6 +1865,7 @@ slots: &[Slot], validators: &[usize], mut latest_block_hash: Option<SignedBeaconBlockHash>, + sync_committee_strategy: SyncCommitteeStrategy, ) -> AddBlocksResult<E> { assert!( slots.windows(2).all(|w| w[0] <= w[1]), @@ -1697,7 +1875,13 @@ where let mut state_hash_from_slot: HashMap<Slot, BeaconStateHash> = HashMap::new(); for slot in slots { let (block_hash, new_state) = self -
.add_attested_block_at_slot(*slot, state, state_root, validators) + .add_attested_block_at_slot_with_sync( + *slot, + state, + state_root, + validators, + sync_committee_strategy, + ) .await .unwrap(); state = new_state; @@ -1779,6 +1963,7 @@ where &epoch_slots, &validators, Some(head_block), + SyncCommitteeStrategy::NoValidators, // for backwards compat ) .await; @@ -1895,6 +2080,22 @@ where num_blocks: usize, block_strategy: BlockStrategy, attestation_strategy: AttestationStrategy, + ) -> Hash256 { + self.extend_chain_with_sync( + num_blocks, + block_strategy, + attestation_strategy, + SyncCommitteeStrategy::NoValidators, + ) + .await + } + + pub async fn extend_chain_with_sync( + &self, + num_blocks: usize, + block_strategy: BlockStrategy, + attestation_strategy: AttestationStrategy, + sync_committee_strategy: SyncCommitteeStrategy, ) -> Hash256 { let (mut state, slots) = match block_strategy { BlockStrategy::OnCanonicalHead => { @@ -1926,7 +2127,13 @@ where }; let state_root = state.update_tree_hash_cache().unwrap(); let (_, _, last_produced_block_hash, _) = self - .add_attested_blocks_at_slots(state, state_root, &slots, &validators) + .add_attested_blocks_at_slots_with_sync( + state, + state_root, + &slots, + &validators, + sync_committee_strategy, + ) .await; last_produced_block_hash.into() } @@ -1980,6 +2187,30 @@ where (honest_head, faulty_head) } + + pub fn process_sync_contributions( + &self, + sync_contributions: HarnessSyncContributions<E>, + ) -> Result<(), SyncCommitteeError> { + let mut verified_contributions = Vec::with_capacity(sync_contributions.len()); + + for (_, contribution_and_proof) in sync_contributions { + let signed_contribution_and_proof = contribution_and_proof.unwrap(); + + let verified_contribution = self + .chain + .verify_sync_contribution_for_gossip(signed_contribution_and_proof)?; + + verified_contributions.push(verified_contribution); + } + + for verified_contribution in verified_contributions { + self.chain
.add_contribution_to_block_inclusion_pool(verified_contribution)?; + } + + Ok(()) + } } // Junk `Debug` impl to satisfy certain trait bounds during testing. diff --git a/beacon_node/beacon_chain/src/validator_monitor.rs b/beacon_node/beacon_chain/src/validator_monitor.rs index dad5e1517ad..d79a56df6b2 100644 --- a/beacon_node/beacon_chain/src/validator_monitor.rs +++ b/beacon_node/beacon_chain/src/validator_monitor.rs @@ -15,6 +15,7 @@ use std::io; use std::marker::PhantomData; use std::str::Utf8Error; use std::time::{Duration, SystemTime, UNIX_EPOCH}; +use store::AbstractExecPayload; use types::{ AttesterSlashing, BeaconBlockRef, BeaconState, ChainSpec, Epoch, EthSpec, Hash256, IndexedAttestation, ProposerSlashing, PublicKeyBytes, SignedAggregateAndProof, @@ -29,7 +30,7 @@ const TOTAL_LABEL: &str = "total"; /// The validator monitor collects per-epoch data about each monitored validator. Historical data /// will be kept around for `HISTORIC_EPOCHS` before it is pruned. -pub const HISTORIC_EPOCHS: usize = 4; +pub const HISTORIC_EPOCHS: usize = 10; /// Once the validator monitor reaches this number of validators it will stop /// tracking their metrics/logging individually in an effort to reduce @@ -45,7 +46,7 @@ pub enum Error { /// Contains data pertaining to one validator for one epoch. #[derive(Default)] -struct EpochSummary { +pub struct EpochSummary { /* * Attestations with a target in the current epoch. */ @@ -103,6 +104,12 @@ struct EpochSummary { pub proposer_slashings: usize, /// The number of attester slashings observed. pub attester_slashings: usize, + + /* + * Other validator info helpful for the UI. + */ + /// The total balance of the validator.
+ pub total_balance: Option<u64>, } impl EpochSummary { @@ -176,18 +183,60 @@ impl EpochSummary { pub fn register_attester_slashing(&mut self) { self.attester_slashings += 1; } + + pub fn register_validator_total_balance(&mut self, total_balance: u64) { + self.total_balance = Some(total_balance) + } } type SummaryMap = HashMap<Epoch, EpochSummary>; +#[derive(Default)] +pub struct ValidatorMetrics { + pub attestation_hits: u64, + pub attestation_misses: u64, + pub attestation_head_hits: u64, + pub attestation_head_misses: u64, + pub attestation_target_hits: u64, + pub attestation_target_misses: u64, +} + +impl ValidatorMetrics { + pub fn increment_hits(&mut self) { + self.attestation_hits += 1; + } + + pub fn increment_misses(&mut self) { + self.attestation_misses += 1; + } + + pub fn increment_target_hits(&mut self) { + self.attestation_target_hits += 1; + } + + pub fn increment_target_misses(&mut self) { + self.attestation_target_misses += 1; + } + + pub fn increment_head_hits(&mut self) { + self.attestation_head_hits += 1; + } + + pub fn increment_head_misses(&mut self) { + self.attestation_head_misses += 1; + } +} + /// A validator that is being monitored by the `ValidatorMonitor`. -struct MonitoredValidator { +pub struct MonitoredValidator { /// A human-readable identifier for the validator. pub id: String, /// The validator index in the state. pub index: Option<u64>, /// A history of the validator over time. pub summaries: RwLock<SummaryMap>, + /// Validator metrics to be exposed over the HTTP API.
+ pub metrics: RwLock<ValidatorMetrics>, } impl MonitoredValidator { @@ -198,6 +247,7 @@ impl MonitoredValidator { .unwrap_or_else(|| pubkey.to_string()), index, summaries: <_>::default(), + metrics: <_>::default(), } } @@ -252,6 +302,20 @@ impl MonitoredValidator { fn touch_epoch_summary(&self, epoch: Epoch) { self.with_epoch_summary(epoch, |_| {}); } + + fn get_from_epoch_summary<F, U>(&self, epoch: Epoch, func: F) -> Option<U> + where + F: Fn(Option<&EpochSummary>) -> Option<U>, + { + let summaries = self.summaries.read(); + func(summaries.get(&epoch)) + } + + pub fn get_total_balance(&self, epoch: Epoch) -> Option<u64> { + self.get_from_epoch_summary(epoch, |summary_opt| { + summary_opt.and_then(|summary| summary.total_balance) + }) + } } /// Holds a collection of `MonitoredValidator` and is notified about a variety of events on the P2P @@ -347,12 +411,20 @@ impl<T: EthSpec> ValidatorMonitor<T> { if let Some(i) = monitored_validator.index { monitored_validator.touch_epoch_summary(current_epoch); + let i = i as usize; + + // Cache relevant validator info. + if let Some(balance) = state.balances().get(i) { + monitored_validator.with_epoch_summary(current_epoch, |summary| { + summary.register_validator_total_balance(*balance) + }); + } + // Only log the per-validator metrics if it's enabled. if !self.individual_tracking() { continue; } - let i = i as usize; let id = &monitored_validator.id; if let Some(balance) = state.balances().get(i) { @@ -479,6 +551,25 @@ impl<T: EthSpec> ValidatorMonitor<T> { continue; } + // Store some metrics directly to be re-exposed on the HTTP API.
+ let mut validator_metrics = monitored_validator.metrics.write(); + if previous_epoch_matched_any { + validator_metrics.increment_hits(); + if previous_epoch_matched_target { + validator_metrics.increment_target_hits() + } else { + validator_metrics.increment_target_misses() + } + if previous_epoch_matched_head { + validator_metrics.increment_head_hits() + } else { + validator_metrics.increment_head_misses() + } + } else { + validator_metrics.increment_misses() + } + drop(validator_metrics); + // Indicates if any attestation made it on-chain. // // For Base states, this will be *any* attestation whatsoever. For Altair states, @@ -717,6 +808,14 @@ impl<T: EthSpec> ValidatorMonitor<T> { self.validators.values().map(|val| val.id.clone()).collect() } + pub fn get_monitored_validator(&self, index: u64) -> Option<&MonitoredValidator> { + if let Some(pubkey) = self.indices.get(&index) { + self.validators.get(pubkey) + } else { + None + } + } + /// If `self.auto_register == true`, add the `validator_index` to `self.monitored_validators`. /// Otherwise, do nothing. pub fn auto_register_local_validator(&mut self, validator_index: u64) { @@ -1638,9 +1737,9 @@ fn u64_to_i64(n: impl Into<i64>) -> i64 { } /// Returns the delay between the start of `block.slot` and `seen_timestamp`.
-pub fn get_block_delay_ms<S: SlotClock, T: EthSpec>( +pub fn get_block_delay_ms<S: SlotClock, T: EthSpec, P: AbstractExecPayload<T>>( seen_timestamp: Duration, - block: BeaconBlockRef<'_, T>, + block: BeaconBlockRef<'_, T, P>, slot_clock: &S, ) -> Duration { get_slot_delay_ms::<S>(seen_timestamp, block.slot(), slot_clock) diff --git a/beacon_node/beacon_chain/src/validator_pubkey_cache.rs b/beacon_node/beacon_chain/src/validator_pubkey_cache.rs index 26aea2d2722..79910df2923 100644 --- a/beacon_node/beacon_chain/src/validator_pubkey_cache.rs +++ b/beacon_node/beacon_chain/src/validator_pubkey_cache.rs @@ -4,7 +4,7 @@ use ssz::{Decode, Encode}; use std::collections::HashMap; use std::convert::TryInto; use std::marker::PhantomData; -use store::{DBColumn, Error as StoreError, KeyValueStore, KeyValueStoreOp, StoreItem}; +use store::{DBColumn, Error as StoreError, StoreItem, StoreOp}; use types::{BeaconState, Hash256, PublicKey, PublicKeyBytes}; /// Provides a mapping of `validator_index -> validator_publickey`. @@ -38,7 +38,7 @@ impl<T: BeaconChainTypes> ValidatorPubkeyCache<T> { }; let store_ops = cache.import_new_pubkeys(state)?; - store.hot_db.do_atomically(store_ops)?; + store.do_atomically(store_ops)?; Ok(cache) } @@ -79,7 +79,7 @@ impl<T: BeaconChainTypes> ValidatorPubkeyCache<T> { pub fn import_new_pubkeys( &mut self, state: &BeaconState<T::EthSpec>, - ) -> Result<Vec<KeyValueStoreOp>, BeaconChainError> { + ) -> Result<Vec<StoreOp<'static, T::EthSpec>>, BeaconChainError> { if state.validators().len() > self.pubkeys.len() { self.import( state.validators()[self.pubkeys.len()..] @@ -92,7 +92,10 @@ impl<T: BeaconChainTypes> ValidatorPubkeyCache<T> { } /// Adds zero or more validators to `self`. - fn import<I>(&mut self, validator_keys: I) -> Result<Vec<KeyValueStoreOp>, BeaconChainError> + fn import<I>( + &mut self, + validator_keys: I, + ) -> Result<Vec<StoreOp<'static, T::EthSpec>>, BeaconChainError> where I: Iterator<Item = PublicKeyBytes> + ExactSizeIterator, { @@ -112,7 +115,9 @@ // It will be committed atomically when the block that introduced it is written to disk. // Notably it is NOT written while the write lock on the cache is held.
// See: https://github.com/sigp/lighthouse/issues/2327 - store_ops.push(DatabasePubkey(pubkey).as_kv_store_op(DatabasePubkey::key_for_index(i))); + store_ops.push(StoreOp::KeyValueOp( + DatabasePubkey(pubkey).as_kv_store_op(DatabasePubkey::key_for_index(i)), + )); self.pubkeys.push( (&pubkey) @@ -294,7 +299,7 @@ mod test { let ops = cache .import_new_pubkeys(&state) .expect("should import pubkeys"); - store.hot_db.do_atomically(ops).unwrap(); + store.do_atomically(ops).unwrap(); check_cache_get(&cache, &keypairs[..]); drop(cache); diff --git a/beacon_node/beacon_chain/tests/capella.rs b/beacon_node/beacon_chain/tests/capella.rs new file mode 100644 index 00000000000..e910e8134f1 --- /dev/null +++ b/beacon_node/beacon_chain/tests/capella.rs @@ -0,0 +1,167 @@ +#![cfg(not(debug_assertions))] // Tests run too slow in debug. + +use beacon_chain::test_utils::BeaconChainHarness; +use execution_layer::test_utils::Block; +use types::*; + +const VALIDATOR_COUNT: usize = 32; +type E = MainnetEthSpec; + +fn verify_execution_payload_chain(chain: &[FullPayload<E>]) { + let mut prev_ep: Option<FullPayload<E>> = None; + + for ep in chain { + assert!(!ep.is_default_with_empty_roots()); + assert!(ep.block_hash() != ExecutionBlockHash::zero()); + + // Check against previous `ExecutionPayload`.
+ if let Some(prev_ep) = prev_ep { + assert_eq!(prev_ep.block_hash(), ep.parent_hash()); + assert_eq!(prev_ep.block_number() + 1, ep.block_number()); + assert!(ep.timestamp() > prev_ep.timestamp()); + } + prev_ep = Some(ep.clone()); + } +} + +#[tokio::test] +async fn base_altair_merge_capella() { + let altair_fork_epoch = Epoch::new(4); + let altair_fork_slot = altair_fork_epoch.start_slot(E::slots_per_epoch()); + let bellatrix_fork_epoch = Epoch::new(8); + let merge_fork_slot = bellatrix_fork_epoch.start_slot(E::slots_per_epoch()); + let capella_fork_epoch = Epoch::new(12); + let capella_fork_slot = capella_fork_epoch.start_slot(E::slots_per_epoch()); + + let mut spec = E::default_spec(); + spec.altair_fork_epoch = Some(altair_fork_epoch); + spec.bellatrix_fork_epoch = Some(bellatrix_fork_epoch); + spec.capella_fork_epoch = Some(capella_fork_epoch); + + let harness = BeaconChainHarness::builder(E::default()) + .spec(spec) + .logger(logging::test_logger()) + .deterministic_keypairs(VALIDATOR_COUNT) + .fresh_ephemeral_store() + .mock_execution_layer() + .build(); + + /* + * Start with the base fork. + */ + assert!(harness.chain.head_snapshot().beacon_block.as_base().is_ok()); + + /* + * Do the Altair fork. + */ + harness.extend_to_slot(altair_fork_slot).await; + + let altair_head = &harness.chain.head_snapshot().beacon_block; + assert!(altair_head.as_altair().is_ok()); + assert_eq!(altair_head.slot(), altair_fork_slot); + + /* + * Do the merge fork, without a terminal PoW block. + */ + harness.extend_to_slot(merge_fork_slot).await; + + let merge_head = &harness.chain.head_snapshot().beacon_block; + assert!(merge_head.as_merge().is_ok()); + assert_eq!(merge_head.slot(), merge_fork_slot); + assert!( + merge_head + .message() + .body() + .execution_payload() + .unwrap() + .is_default_with_empty_roots(), + "Merge head is default payload" + ); + + /* + * Next merge block shouldn't include an exec payload. 
+ */ + harness.extend_slots(1).await; + + let one_after_merge_head = &harness.chain.head_snapshot().beacon_block; + assert!( + one_after_merge_head + .message() + .body() + .execution_payload() + .unwrap() + .is_default_with_empty_roots(), + "One after merge head is default payload" + ); + assert_eq!(one_after_merge_head.slot(), merge_fork_slot + 1); + + /* + * Trigger the terminal PoW block. + */ + harness + .execution_block_generator() + .move_to_terminal_block() + .unwrap(); + + // Add a slot duration to get to the next slot + let timestamp = harness.get_timestamp_at_slot() + harness.spec.seconds_per_slot; + harness + .execution_block_generator() + .modify_last_block(|block| { + if let Block::PoW(terminal_block) = block { + terminal_block.timestamp = timestamp; + } + }); + harness.extend_slots(1).await; + + let two_after_merge_head = &harness.chain.head_snapshot().beacon_block; + assert!( + two_after_merge_head + .message() + .body() + .execution_payload() + .unwrap() + .is_default_with_empty_roots(), + "Two after merge head is default payload" + ); + assert_eq!(two_after_merge_head.slot(), merge_fork_slot + 2); + + /* + * Next merge block should include an exec payload. + */ + let mut execution_payloads = vec![]; + for _ in (merge_fork_slot.as_u64() + 3)..capella_fork_slot.as_u64() { + harness.extend_slots(1).await; + let block = &harness.chain.head_snapshot().beacon_block; + let full_payload: FullPayload<E> = block + .message() + .body() + .execution_payload() + .unwrap() + .clone() + .into(); + // pre-capella shouldn't have withdrawals + assert!(full_payload.withdrawals_root().is_err()); + execution_payloads.push(full_payload); + } + + /* + * Should enter capella fork now.
+ */ + for _ in 0..16 { + harness.extend_slots(1).await; + let block = &harness.chain.head_snapshot().beacon_block; + let full_payload: FullPayload<E> = block + .message() + .body() + .execution_payload() + .unwrap() + .clone() + .into(); + // post-capella should have withdrawals + assert!(full_payload.withdrawals_root().is_ok()); + execution_payloads.push(full_payload); + } + + verify_execution_payload_chain(execution_payloads.as_slice()); +} diff --git a/beacon_node/beacon_chain/tests/main.rs b/beacon_node/beacon_chain/tests/main.rs index 1c61e9927fc..c81a547406a 100644 --- a/beacon_node/beacon_chain/tests/main.rs +++ b/beacon_node/beacon_chain/tests/main.rs @@ -1,9 +1,11 @@ mod attestation_production; mod attestation_verification; mod block_verification; +mod capella; mod merge; mod op_verification; mod payload_invalidation; +mod rewards; mod store_tests; mod sync_committee_verification; mod tests; diff --git a/beacon_node/beacon_chain/tests/merge.rs b/beacon_node/beacon_chain/tests/merge.rs index c8c47c99041..1e0112a4954 100644 --- a/beacon_node/beacon_chain/tests/merge.rs +++ b/beacon_node/beacon_chain/tests/merge.rs @@ -12,17 +12,14 @@ fn verify_execution_payload_chain(chain: &[FullPayload<E>]) { let mut prev_ep: Option<FullPayload<E>> = None; for ep in chain { - assert!(*ep != FullPayload::default()); + assert!(!ep.is_default_with_empty_roots()); assert!(ep.block_hash() != ExecutionBlockHash::zero()); // Check against previous `ExecutionPayload`.
if let Some(prev_ep) = prev_ep { - assert_eq!(prev_ep.block_hash(), ep.execution_payload.parent_hash); - assert_eq!( - prev_ep.execution_payload.block_number + 1, - ep.execution_payload.block_number - ); - assert!(ep.execution_payload.timestamp > prev_ep.execution_payload.timestamp); + assert_eq!(prev_ep.block_hash(), ep.parent_hash()); + assert_eq!(prev_ep.block_number() + 1, ep.block_number()); + assert!(ep.timestamp() > prev_ep.timestamp()); } prev_ep = Some(ep.clone()); } @@ -89,7 +86,7 @@ async fn merge_with_terminal_block_hash_override() { if i == 0 { assert_eq!(execution_payload.block_hash(), genesis_pow_block_hash); } - execution_payloads.push(execution_payload); + execution_payloads.push(execution_payload.into()); } verify_execution_payload_chain(execution_payloads.as_slice()); @@ -141,9 +138,14 @@ async fn base_altair_merge_with_terminal_block_after_fork() { let merge_head = &harness.chain.head_snapshot().beacon_block; assert!(merge_head.as_merge().is_ok()); assert_eq!(merge_head.slot(), merge_fork_slot); - assert_eq!( - *merge_head.message().body().execution_payload().unwrap(), - FullPayload::default() + assert!( + merge_head + .message() + .body() + .execution_payload() + .unwrap() + .is_default_with_empty_roots(), + "Merge head is default payload" ); /* @@ -153,13 +155,14 @@ async fn base_altair_merge_with_terminal_block_after_fork() { harness.extend_slots(1).await; let one_after_merge_head = &harness.chain.head_snapshot().beacon_block; - assert_eq!( - *one_after_merge_head + assert!( + one_after_merge_head .message() .body() .execution_payload() - .unwrap(), - FullPayload::default() + .unwrap() + .is_default_with_empty_roots(), + "One after merge head is default payload" ); assert_eq!(one_after_merge_head.slot(), merge_fork_slot + 1); @@ -185,26 +188,34 @@ async fn base_altair_merge_with_terminal_block_after_fork() { harness.extend_slots(1).await; - let one_after_merge_head = &harness.chain.head_snapshot().beacon_block; - assert_eq!( - 
*one_after_merge_head + let two_after_merge_head = &harness.chain.head_snapshot().beacon_block; + assert!( + two_after_merge_head .message() .body() .execution_payload() - .unwrap(), - FullPayload::default() + .unwrap() + .is_default_with_empty_roots(), + "Two after merge head is default payload" ); - assert_eq!(one_after_merge_head.slot(), merge_fork_slot + 2); + assert_eq!(two_after_merge_head.slot(), merge_fork_slot + 2); /* * Next merge block should include an exec payload. */ - for _ in 0..4 { harness.extend_slots(1).await; let block = &harness.chain.head_snapshot().beacon_block; - execution_payloads.push(block.message().body().execution_payload().unwrap().clone()); + execution_payloads.push( + block + .message() + .body() + .execution_payload() + .unwrap() + .clone() + .into(), + ); } verify_execution_payload_chain(execution_payloads.as_slice()); diff --git a/beacon_node/beacon_chain/tests/payload_invalidation.rs b/beacon_node/beacon_chain/tests/payload_invalidation.rs index 0b9eaaee0f0..54d7734471c 100644 --- a/beacon_node/beacon_chain/tests/payload_invalidation.rs +++ b/beacon_node/beacon_chain/tests/payload_invalidation.rs @@ -13,9 +13,9 @@ use beacon_chain::{ INVALID_JUSTIFIED_PAYLOAD_SHUTDOWN_REASON, }; use execution_layer::{ - json_structures::{JsonForkChoiceStateV1, JsonPayloadAttributesV1}, + json_structures::{JsonForkchoiceStateV1, JsonPayloadAttributes, JsonPayloadAttributesV1}, test_utils::ExecutionBlockGenerator, - ExecutionLayer, ForkChoiceState, PayloadAttributes, + ExecutionLayer, ForkchoiceState, PayloadAttributes, }; use fork_choice::{ CountUnrealized, Error as ForkChoiceError, InvalidationOperation, PayloadVerificationStatus, @@ -120,7 +120,7 @@ impl InvalidPayloadRig { &self.harness.chain.canonical_head } - fn previous_forkchoice_update_params(&self) -> (ForkChoiceState, PayloadAttributes) { + fn previous_forkchoice_update_params(&self) -> (ForkchoiceState, PayloadAttributes) { let mock_execution_layer = 
self.harness.mock_execution_layer.as_ref().unwrap(); let json = mock_execution_layer .server @@ -129,14 +129,17 @@ impl InvalidPayloadRig { let params = json.get("params").expect("no params"); let fork_choice_state_json = params.get(0).expect("no payload param"); - let fork_choice_state: JsonForkChoiceStateV1 = + let fork_choice_state: JsonForkchoiceStateV1 = serde_json::from_value(fork_choice_state_json.clone()).unwrap(); let payload_param_json = params.get(1).expect("no payload param"); let attributes: JsonPayloadAttributesV1 = serde_json::from_value(payload_param_json.clone()).unwrap(); - (fork_choice_state.into(), attributes.into()) + ( + fork_choice_state.into(), + JsonPayloadAttributes::V1(attributes).into(), + ) } fn previous_payload_attributes(&self) -> PayloadAttributes { @@ -991,20 +994,20 @@ async fn payload_preparation() { .await .unwrap(); - let payload_attributes = PayloadAttributes { - timestamp: rig - .harness + let payload_attributes = PayloadAttributes::new( + rig.harness .chain .slot_clock .start_of(next_slot) .unwrap() .as_secs(), - prev_randao: *head + *head .beacon_state .get_randao_mix(head.beacon_state.current_epoch()) .unwrap(), - suggested_fee_recipient: fee_recipient, - }; + fee_recipient, + None, + ); assert_eq!(rig.previous_payload_attributes(), payload_attributes); } @@ -1138,7 +1141,7 @@ async fn payload_preparation_before_transition_block() { let (fork_choice_state, payload_attributes) = rig.previous_forkchoice_update_params(); let latest_block_hash = rig.latest_execution_block_hash(); - assert_eq!(payload_attributes.suggested_fee_recipient, fee_recipient); + assert_eq!(payload_attributes.suggested_fee_recipient(), fee_recipient); assert_eq!(fork_choice_state.head_block_hash, latest_block_hash); } @@ -1385,18 +1388,16 @@ async fn build_optimistic_chain( .body() .execution_payload() .unwrap() - .execution_payload - == <_>::default(), + .is_default_with_empty_roots(), "the block *has not* undergone the merge transition" ); assert!( - 
post_transition_block + !post_transition_block .message() .body() .execution_payload() .unwrap() - .execution_payload - != <_>::default(), + .is_default_with_empty_roots(), "the block *has* undergone the merge transition" ); diff --git a/beacon_node/beacon_chain/tests/rewards.rs b/beacon_node/beacon_chain/tests/rewards.rs new file mode 100644 index 00000000000..b61bea12429 --- /dev/null +++ b/beacon_node/beacon_chain/tests/rewards.rs @@ -0,0 +1,121 @@ +#![cfg(test)] + +use std::collections::HashMap; + +use beacon_chain::test_utils::{ + generate_deterministic_keypairs, BeaconChainHarness, EphemeralHarnessType, +}; +use beacon_chain::{ + test_utils::{AttestationStrategy, BlockStrategy, RelativeSyncCommittee}, + types::{Epoch, EthSpec, Keypair, MinimalEthSpec}, +}; +use lazy_static::lazy_static; + +pub const VALIDATOR_COUNT: usize = 64; + +lazy_static! { + static ref KEYPAIRS: Vec = generate_deterministic_keypairs(VALIDATOR_COUNT); +} + +fn get_harness() -> BeaconChainHarness> { + let mut spec = E::default_spec(); + + spec.altair_fork_epoch = Some(Epoch::new(0)); // We use altair for all tests + + let harness = BeaconChainHarness::builder(E::default()) + .spec(spec) + .keypairs(KEYPAIRS.to_vec()) + .fresh_ephemeral_store() + .build(); + + harness.advance_slot(); + + harness +} + +#[tokio::test] +async fn test_sync_committee_rewards() { + let num_block_produced = MinimalEthSpec::slots_per_epoch(); + let harness = get_harness::(); + + let latest_block_root = harness + .extend_chain( + num_block_produced as usize, + BlockStrategy::OnCanonicalHead, + AttestationStrategy::AllValidators, + ) + .await; + + // Create and add sync committee message to op_pool + let sync_contributions = harness.make_sync_contributions( + &harness.get_current_state(), + latest_block_root, + harness.get_current_slot(), + RelativeSyncCommittee::Current, + ); + + harness + .process_sync_contributions(sync_contributions) + .unwrap(); + + // Add block + let chain = &harness.chain; + let (head_state, 
head_state_root) = harness.get_current_state_and_root(); + let target_slot = harness.get_current_slot() + 1; + + let (block_root, mut state) = harness + .add_attested_block_at_slot(target_slot, head_state, head_state_root, &[]) + .await + .unwrap(); + + let block = harness.get_block(block_root).unwrap(); + let parent_block = chain + .get_blinded_block(&block.parent_root()) + .unwrap() + .unwrap(); + let parent_state = chain + .get_state(&parent_block.state_root(), Some(parent_block.slot())) + .unwrap() + .unwrap(); + + let reward_payload = chain + .compute_sync_committee_rewards(block.message(), &mut state) + .unwrap(); + + let rewards = reward_payload + .iter() + .map(|reward| (reward.validator_index, reward.reward)) + .collect::>(); + + let proposer_index = state + .get_beacon_proposer_index(target_slot, &MinimalEthSpec::default_spec()) + .unwrap(); + + let mut mismatches = vec![]; + + for validator in state.validators() { + let validator_index = state + .clone() + .get_validator_index(&validator.pubkey) + .unwrap() + .unwrap(); + let pre_state_balance = parent_state.balances()[validator_index]; + let post_state_balance = state.balances()[validator_index]; + let sync_committee_reward = rewards.get(&(validator_index as u64)).unwrap_or(&0); + + if validator_index == proposer_index { + continue; // Ignore proposer + } + + if pre_state_balance as i64 + *sync_committee_reward != post_state_balance as i64 { + mismatches.push(validator_index.to_string()); + } + } + + assert_eq!( + mismatches.len(), + 0, + "Expect 0 mismatches, but these validators have mismatches on balance: {} ", + mismatches.join(",") + ); +} diff --git a/beacon_node/beacon_chain/tests/store_tests.rs b/beacon_node/beacon_chain/tests/store_tests.rs index 8a6ea9cfe1a..2f40443b996 100644 --- a/beacon_node/beacon_chain/tests/store_tests.rs +++ b/beacon_node/beacon_chain/tests/store_tests.rs @@ -2,6 +2,7 @@ use beacon_chain::attestation_verification::Error as AttnError; use 
beacon_chain::builder::BeaconChainBuilder; +use beacon_chain::schema_change::migrate_schema; use beacon_chain::test_utils::{ test_spec, AttestationStrategy, BeaconChainHarness, BlockStrategy, DiskHarnessType, }; @@ -22,6 +23,7 @@ use std::collections::HashSet; use std::convert::TryInto; use std::sync::Arc; use std::time::Duration; +use store::metadata::{SchemaVersion, CURRENT_SCHEMA_VERSION}; use store::{ iter::{BlockRootsIterator, StateRootsIterator}, HotColdDB, LevelDB, StoreConfig, @@ -68,6 +70,7 @@ fn get_harness( let harness = BeaconChainHarness::builder(MinimalEthSpec) .default_spec() .keypairs(KEYPAIRS[0..validator_count].to_vec()) + .logger(store.logger().clone()) .fresh_disk_store(store) .mock_execution_layer() .build(); @@ -1013,8 +1016,8 @@ fn check_shuffling_compatible( // Ensure blocks from abandoned forks are pruned from the Hot DB #[tokio::test] async fn prunes_abandoned_fork_between_two_finalized_checkpoints() { - const HONEST_VALIDATOR_COUNT: usize = 16 + 0; - const ADVERSARIAL_VALIDATOR_COUNT: usize = 8 - 0; + const HONEST_VALIDATOR_COUNT: usize = 32 + 0; + const ADVERSARIAL_VALIDATOR_COUNT: usize = 16 - 0; const VALIDATOR_COUNT: usize = HONEST_VALIDATOR_COUNT + ADVERSARIAL_VALIDATOR_COUNT; let validators_keypairs = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT); let honest_validators: Vec = (0..HONEST_VALIDATOR_COUNT).collect(); @@ -1123,8 +1126,8 @@ async fn prunes_abandoned_fork_between_two_finalized_checkpoints() { #[tokio::test] async fn pruning_does_not_touch_abandoned_block_shared_with_canonical_chain() { - const HONEST_VALIDATOR_COUNT: usize = 16 + 0; - const ADVERSARIAL_VALIDATOR_COUNT: usize = 8 - 0; + const HONEST_VALIDATOR_COUNT: usize = 32 + 0; + const ADVERSARIAL_VALIDATOR_COUNT: usize = 16 - 0; const VALIDATOR_COUNT: usize = HONEST_VALIDATOR_COUNT + ADVERSARIAL_VALIDATOR_COUNT; let validators_keypairs = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT); let honest_validators: Vec = 
(0..HONEST_VALIDATOR_COUNT).collect(); @@ -1255,8 +1258,8 @@ async fn pruning_does_not_touch_abandoned_block_shared_with_canonical_chain() { #[tokio::test] async fn pruning_does_not_touch_blocks_prior_to_finalization() { - const HONEST_VALIDATOR_COUNT: usize = 16; - const ADVERSARIAL_VALIDATOR_COUNT: usize = 8; + const HONEST_VALIDATOR_COUNT: usize = 32; + const ADVERSARIAL_VALIDATOR_COUNT: usize = 16; const VALIDATOR_COUNT: usize = HONEST_VALIDATOR_COUNT + ADVERSARIAL_VALIDATOR_COUNT; let validators_keypairs = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT); let honest_validators: Vec = (0..HONEST_VALIDATOR_COUNT).collect(); @@ -1350,8 +1353,8 @@ async fn pruning_does_not_touch_blocks_prior_to_finalization() { #[tokio::test] async fn prunes_fork_growing_past_youngest_finalized_checkpoint() { - const HONEST_VALIDATOR_COUNT: usize = 16 + 0; - const ADVERSARIAL_VALIDATOR_COUNT: usize = 8 - 0; + const HONEST_VALIDATOR_COUNT: usize = 32 + 0; + const ADVERSARIAL_VALIDATOR_COUNT: usize = 16 - 0; const VALIDATOR_COUNT: usize = HONEST_VALIDATOR_COUNT + ADVERSARIAL_VALIDATOR_COUNT; let validators_keypairs = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT); let honest_validators: Vec = (0..HONEST_VALIDATOR_COUNT).collect(); @@ -1495,8 +1498,8 @@ async fn prunes_fork_growing_past_youngest_finalized_checkpoint() { // This is to check if state outside of normal block processing are pruned correctly. 
#[tokio::test] async fn prunes_skipped_slots_states() { - const HONEST_VALIDATOR_COUNT: usize = 16 + 0; - const ADVERSARIAL_VALIDATOR_COUNT: usize = 8 - 0; + const HONEST_VALIDATOR_COUNT: usize = 32 + 0; + const ADVERSARIAL_VALIDATOR_COUNT: usize = 16 - 0; const VALIDATOR_COUNT: usize = HONEST_VALIDATOR_COUNT + ADVERSARIAL_VALIDATOR_COUNT; let validators_keypairs = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT); let honest_validators: Vec = (0..HONEST_VALIDATOR_COUNT).collect(); @@ -1624,8 +1627,8 @@ async fn prunes_skipped_slots_states() { // This is to check if state outside of normal block processing are pruned correctly. #[tokio::test] async fn finalizes_non_epoch_start_slot() { - const HONEST_VALIDATOR_COUNT: usize = 16 + 0; - const ADVERSARIAL_VALIDATOR_COUNT: usize = 8 - 0; + const HONEST_VALIDATOR_COUNT: usize = 32 + 0; + const ADVERSARIAL_VALIDATOR_COUNT: usize = 16 - 0; const VALIDATOR_COUNT: usize = HONEST_VALIDATOR_COUNT + ADVERSARIAL_VALIDATOR_COUNT; let validators_keypairs = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT); let honest_validators: Vec = (0..HONEST_VALIDATOR_COUNT).collect(); @@ -2529,6 +2532,91 @@ async fn revert_minority_fork_on_resume() { assert_eq!(heads.len(), 1); } +// This test checks whether the schema downgrade from the latest version to some minimum supported +// version is correct. This is the easiest schema test to write without historic versions of +// Lighthouse on-hand, but has the disadvantage that the min version needs to be adjusted manually +// as old downgrades are deprecated. 
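The round-trip strategy described in the comment above (downgrade from the current schema to the minimum supported version, then upgrade back, and verify the store still checks out) can be sketched as a standalone property. Everything below is illustrative: `Store`, `migrate`, `round_trip`, and the version numbers are hypothetical stand-ins, not Lighthouse's actual `migrate_schema` API.

```rust
// Hypothetical model of a versioned store whose migrations must round-trip.
#[derive(Clone, Debug, PartialEq)]
struct Store {
    schema_version: u64,
    // Stands in for the on-disk records that real migrations rewrite.
    records: Vec<String>,
}

// Illustrative bound, mirroring the "min version" in the test above.
const MIN_SUPPORTED_VERSION: u64 = 11;

// Move the store to version `to`, refusing to cross the minimum supported
// version (the *tight* bound the test checks at the end).
fn migrate(mut store: Store, to: u64) -> Result<Store, String> {
    if to < MIN_SUPPORTED_VERSION {
        return Err(format!("cannot migrate below v{MIN_SUPPORTED_VERSION}"));
    }
    // A real migration transforms `records`; this sketch only tracks the version.
    store.schema_version = to;
    Ok(store)
}

// The invariant the test exercises: down-then-up restores the original store.
fn round_trip(store: Store, current: u64, min: u64) -> Result<Store, String> {
    let downgraded = migrate(store, min)?;
    migrate(downgraded, current)
}
```

The real test additionally re-opens the database and re-runs chain checks after the round trip; the property above is the core of what those checks assert.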
+#[tokio::test] +async fn schema_downgrade_to_min_version() { + let num_blocks_produced = E::slots_per_epoch() * 4; + let db_path = tempdir().unwrap(); + let store = get_store(&db_path); + let harness = get_harness(store.clone(), LOW_VALIDATOR_COUNT); + let spec = &harness.chain.spec.clone(); + + harness + .extend_chain( + num_blocks_produced as usize, + BlockStrategy::OnCanonicalHead, + AttestationStrategy::AllValidators, + ) + .await; + + let min_version = if harness.spec.capella_fork_epoch.is_some() { + // Can't downgrade beyond V14 once Capella is reached; for simplicity don't test that + // at all if Capella is enabled. + SchemaVersion(14) + } else { + SchemaVersion(11) + }; + + // Close the database to ensure everything is written to disk. + drop(store); + drop(harness); + + // Re-open the store. + let store = get_store(&db_path); + + // Downgrade. + let deposit_contract_deploy_block = 0; + migrate_schema::>( + store.clone(), + deposit_contract_deploy_block, + CURRENT_SCHEMA_VERSION, + min_version, + store.logger().clone(), + spec, + ) + .expect("schema downgrade to minimum version should work"); + + // Upgrade back. + migrate_schema::>( + store.clone(), + deposit_contract_deploy_block, + min_version, + CURRENT_SCHEMA_VERSION, + store.logger().clone(), + spec, + ) + .expect("schema upgrade from minimum version should work"); + + // Recreate the harness. + let harness = BeaconChainHarness::builder(MinimalEthSpec) + .default_spec() + .keypairs(KEYPAIRS[0..LOW_VALIDATOR_COUNT].to_vec()) + .logger(store.logger().clone()) + .resumed_disk_store(store.clone()) + .mock_execution_layer() + .build(); + + check_finalization(&harness, num_blocks_produced); + check_split_slot(&harness, store.clone()); + check_chain_dump(&harness, num_blocks_produced + 1); + check_iterators(&harness); + + // Check that downgrading beyond the minimum version fails (bound is *tight*).
+ let min_version_sub_1 = SchemaVersion(min_version.as_u64().checked_sub(1).unwrap()); + migrate_schema::>( + store.clone(), + deposit_contract_deploy_block, + CURRENT_SCHEMA_VERSION, + min_version_sub_1, + harness.logger().clone(), + spec, + ) + .expect_err("should not downgrade below minimum version"); +} + /// Checks that two chains are the same, for the purpose of these tests. /// /// Several fields that are hard/impossible to check are ignored (e.g., the store). diff --git a/beacon_node/beacon_chain/tests/sync_committee_verification.rs b/beacon_node/beacon_chain/tests/sync_committee_verification.rs index 1e51b0ffb9b..239f55e7d38 100644 --- a/beacon_node/beacon_chain/tests/sync_committee_verification.rs +++ b/beacon_node/beacon_chain/tests/sync_committee_verification.rs @@ -45,6 +45,7 @@ fn get_valid_sync_committee_message( harness: &BeaconChainHarness>, slot: Slot, relative_sync_committee: RelativeSyncCommittee, + message_index: usize, ) -> (SyncCommitteeMessage, usize, SecretKey, SyncSubnetId) { let head_state = harness.chain.head_beacon_state_cloned(); let head_block_root = harness.chain.head_snapshot().beacon_block_root; @@ -52,7 +53,7 @@ fn get_valid_sync_committee_message( .make_sync_committee_messages(&head_state, head_block_root, slot, relative_sync_committee) .get(0) .expect("sync messages should exist") - .get(0) + .get(message_index) .expect("first sync message should exist") .clone(); @@ -494,7 +495,7 @@ async fn unaggregated_gossip_verification() { let current_slot = harness.chain.slot().expect("should get slot"); let (valid_sync_committee_message, expected_validator_index, validator_sk, subnet_id) = - get_valid_sync_committee_message(&harness, current_slot, RelativeSyncCommittee::Current); + get_valid_sync_committee_message(&harness, current_slot, RelativeSyncCommittee::Current, 0); macro_rules! assert_invalid { ($desc: tt, $attn_getter: expr, $subnet_getter: expr, $($error: pat_param) |+ $( if $guard: expr )?) 
=> { @@ -644,7 +645,7 @@ async fn unaggregated_gossip_verification() { // **Incorrectly** create a sync message using the current sync committee let (next_valid_sync_committee_message, _, _, next_subnet_id) = - get_valid_sync_committee_message(&harness, target_slot, RelativeSyncCommittee::Current); + get_valid_sync_committee_message(&harness, target_slot, RelativeSyncCommittee::Current, 1); assert_invalid!( "sync message on incorrect subnet", diff --git a/beacon_node/beacon_chain/tests/tests.rs b/beacon_node/beacon_chain/tests/tests.rs index d80db132ef9..b4eabc8093f 100644 --- a/beacon_node/beacon_chain/tests/tests.rs +++ b/beacon_node/beacon_chain/tests/tests.rs @@ -19,7 +19,7 @@ use types::{ }; // Should ideally be divisible by 3. -pub const VALIDATOR_COUNT: usize = 24; +pub const VALIDATOR_COUNT: usize = 48; lazy_static! { /// A cached set of keys. @@ -500,7 +500,7 @@ async fn unaggregated_attestations_added_to_fork_choice_some_none() { // Move forward a slot so all queued attestations can be processed. harness.advance_slot(); fork_choice - .update_time(harness.chain.slot().unwrap(), &harness.chain.spec) + .update_time(harness.chain.slot().unwrap()) .unwrap(); let validator_slots: Vec<(usize, Slot)> = (0..VALIDATOR_COUNT) @@ -614,7 +614,7 @@ async fn unaggregated_attestations_added_to_fork_choice_all_updated() { // Move forward a slot so all queued attestations can be processed. 
harness.advance_slot(); fork_choice - .update_time(harness.chain.slot().unwrap(), &harness.chain.spec) + .update_time(harness.chain.slot().unwrap()) .unwrap(); let validators: Vec = (0..VALIDATOR_COUNT).collect(); diff --git a/beacon_node/builder_client/Cargo.toml b/beacon_node/builder_client/Cargo.toml index 48ac0300c98..b79fc5e4073 100644 --- a/beacon_node/builder_client/Cargo.toml +++ b/beacon_node/builder_client/Cargo.toml @@ -10,3 +10,4 @@ sensitive_url = { path = "../../common/sensitive_url" } eth2 = { path = "../../common/eth2" } serde = { version = "1.0.116", features = ["derive"] } serde_json = "1.0.58" +lighthouse_version = { path = "../../common/lighthouse_version" } diff --git a/beacon_node/builder_client/src/lib.rs b/beacon_node/builder_client/src/lib.rs index 3517d06b15b..255c2fdd19b 100644 --- a/beacon_node/builder_client/src/lib.rs +++ b/beacon_node/builder_client/src/lib.rs @@ -1,6 +1,6 @@ use eth2::types::builder_bid::SignedBuilderBid; use eth2::types::{ - BlindedPayload, EthSpec, ExecPayload, ExecutionBlockHash, ExecutionPayload, + AbstractExecPayload, BlindedPayload, EthSpec, ExecutionBlockHash, ExecutionPayload, ForkVersionedResponse, PublicKeyBytes, SignedBeaconBlock, SignedValidatorRegistrationData, Slot, }; @@ -17,6 +17,9 @@ pub const DEFAULT_TIMEOUT_MILLIS: u64 = 15000; /// This timeout is in accordance with v0.2.0 of the [builder specs](https://github.com/flashbots/mev-boost/pull/20). pub const DEFAULT_GET_HEADER_TIMEOUT_MILLIS: u64 = 1000; +/// Default user agent for HTTP requests. 
+pub const DEFAULT_USER_AGENT: &str = lighthouse_version::VERSION; + #[derive(Clone)] pub struct Timeouts { get_header: Duration, @@ -41,23 +44,23 @@ pub struct BuilderHttpClient { client: reqwest::Client, server: SensitiveUrl, timeouts: Timeouts, + user_agent: String, } impl BuilderHttpClient { - pub fn new(server: SensitiveUrl) -> Result { + pub fn new(server: SensitiveUrl, user_agent: Option) -> Result { + let user_agent = user_agent.unwrap_or(DEFAULT_USER_AGENT.to_string()); + let client = reqwest::Client::builder().user_agent(&user_agent).build()?; Ok(Self { - client: reqwest::Client::new(), + client, server, timeouts: Timeouts::default(), + user_agent, }) } - pub fn new_with_timeouts(server: SensitiveUrl, timeouts: Timeouts) -> Result { - Ok(Self { - client: reqwest::Client::new(), - server, - timeouts, - }) + pub fn get_user_agent(&self) -> &str { + &self.user_agent } async fn get_with_timeout( @@ -160,7 +163,7 @@ impl BuilderHttpClient { } /// `GET /eth/v1/builder/header` - pub async fn get_builder_header>( + pub async fn get_builder_header>( &self, slot: Slot, parent_hash: ExecutionBlockHash, diff --git a/beacon_node/client/Cargo.toml b/beacon_node/client/Cargo.toml index d01f2505cce..876458eea52 100644 --- a/beacon_node/client/Cargo.toml +++ b/beacon_node/client/Cargo.toml @@ -6,6 +6,10 @@ edition = "2021" [dev-dependencies] serde_yaml = "0.8.13" +logging = { path = "../../common/logging" } +state_processing = { path = "../../consensus/state_processing" } +operation_pool = { path = "../operation_pool" } +tokio = "1.14.0" [dependencies] beacon_chain = { path = "../beacon_chain" } @@ -35,7 +39,7 @@ time = "0.3.5" directory = {path = "../../common/directory"} http_api = { path = "../http_api" } http_metrics = { path = "../http_metrics" } -slasher = { path = "../../slasher" } +slasher = { path = "../../slasher", default-features = false } slasher_service = { path = "../../slasher/service" } monitoring_api = {path = "../../common/monitoring_api"} 
execution_layer = { path = "../execution_layer" } diff --git a/beacon_node/client/src/address_change_broadcast.rs b/beacon_node/client/src/address_change_broadcast.rs new file mode 100644 index 00000000000..272ee908fba --- /dev/null +++ b/beacon_node/client/src/address_change_broadcast.rs @@ -0,0 +1,322 @@ +use crate::*; +use lighthouse_network::PubsubMessage; +use network::NetworkMessage; +use slog::{debug, info, warn, Logger}; +use slot_clock::SlotClock; +use std::cmp; +use std::collections::HashSet; +use std::mem; +use std::time::Duration; +use tokio::sync::mpsc::UnboundedSender; +use tokio::time::sleep; +use types::EthSpec; + +/// The size of each chunk of address changes to be broadcast at the Capella +/// fork. +const BROADCAST_CHUNK_SIZE: usize = 128; +/// The delay between broadcasting each chunk. +const BROADCAST_CHUNK_DELAY: Duration = Duration::from_millis(500); + +/// If the Capella fork has already been reached, `broadcast_address_changes` is +/// called immediately. +/// +/// If the Capella fork has not been reached, waits until the start of the fork +/// epoch and then calls `broadcast_address_changes`. +pub async fn broadcast_address_changes_at_capella( + chain: &BeaconChain, + network_send: UnboundedSender>, + log: &Logger, +) { + let spec = &chain.spec; + let slot_clock = &chain.slot_clock; + + let capella_fork_slot = if let Some(epoch) = spec.capella_fork_epoch { + epoch.start_slot(T::EthSpec::slots_per_epoch()) + } else { + // Exit now if Capella is not defined. + return; + }; + + // Wait until the Capella fork epoch. + while chain.slot().map_or(true, |slot| slot < capella_fork_slot) { + match slot_clock.duration_to_slot(capella_fork_slot) { + Some(duration) => { + // Sleep until the Capella fork. + sleep(duration).await; + break; + } + None => { + // We were unable to read the slot clock; wait another slot + // and then try again.
+ sleep(slot_clock.slot_duration()).await; + } + } + } + + // The following function will be called in two scenarios: + // + // 1. The node has been running for some time and the Capella fork has just + // been reached. + // 2. The node has just started and it is *after* the Capella fork. + broadcast_address_changes(chain, network_send, log).await +} + +/// Broadcasts any address changes that are flagged for broadcasting at the +/// Capella fork epoch. +/// +/// Address changes are published in chunks, with a delay between each chunk. +/// This helps reduce the load on the P2P network and also helps prevent us from +/// clogging our `network_send` channel and being late to publish +/// blocks, attestations, etc. +pub async fn broadcast_address_changes( + chain: &BeaconChain, + network_send: UnboundedSender>, + log: &Logger, +) { + let head = chain.head_snapshot(); + let mut changes = chain + .op_pool + .get_bls_to_execution_changes_received_pre_capella(&head.beacon_state, &chain.spec); + + while !changes.is_empty() { + // This `split_off` approach is to allow us to have owned chunks of the + // `changes` vec. The `std::slice::Chunks` method uses references and + // the `itertools` iterator that achieves this isn't `Send` so it doesn't + // work well with the `sleep` at the end of the loop. + let tail = changes.split_off(cmp::min(BROADCAST_CHUNK_SIZE, changes.len())); + let chunk = mem::replace(&mut changes, tail); + + let mut published_indices = HashSet::with_capacity(BROADCAST_CHUNK_SIZE); + let mut num_ok = 0; + let mut num_err = 0; + + // Publish each individual address change. + for address_change in chunk { + let validator_index = address_change.message.validator_index; + + let pubsub_message = PubsubMessage::BlsToExecutionChange(Box::new(address_change)); + let message = NetworkMessage::Publish { + messages: vec![pubsub_message], + }; + // It seems highly unlikely that this unbounded send will fail, but + // we handle the result nonetheless.
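The owned-chunk pattern described in the comment above (`Vec::split_off` plus `mem::replace`) generalizes to any `Vec`: each chunk owns its elements, so the loop body can hold one across an `.await` without borrowing from the original vector. A minimal sketch with a hypothetical helper name:

```rust
use std::mem;

/// Split `items` into owned chunks of at most `chunk_size`, preserving order.
/// Unlike `slice::chunks`, the returned chunks own their elements, so each one
/// can be moved into an async task or held across an `.await` point.
fn owned_chunks<T>(mut items: Vec<T>, chunk_size: usize) -> Vec<Vec<T>> {
    assert!(chunk_size > 0);
    let mut chunks = vec![];
    while !items.is_empty() {
        // Keep the first `chunk_size` elements; `items` becomes the tail.
        let tail = items.split_off(chunk_size.min(items.len()));
        let chunk = mem::replace(&mut items, tail);
        chunks.push(chunk);
    }
    chunks
}
```

The broadcast loop above interleaves this splitting with publishing and a sleep rather than pre-computing all chunks, but the ownership trick is the same.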
+ if let Err(e) = network_send.send(message) { + debug!( + log, + "Failed to publish change message"; + "error" => ?e, + "validator_index" => validator_index + ); + num_err += 1; + } else { + debug!( + log, + "Published address change message"; + "validator_index" => validator_index + ); + num_ok += 1; + published_indices.insert(validator_index); + } + } + + // Remove any published indices from the list of indices that need to be + // published. + chain + .op_pool + .register_indices_broadcasted_at_capella(&published_indices); + + info!( + log, + "Published address change messages"; + "num_published" => num_ok, + ); + + if num_err > 0 { + warn!( + log, + "Failed to publish address changes"; + "info" => "failed messages will be retried", + "num_unable_to_publish" => num_err, + ); + } + + sleep(BROADCAST_CHUNK_DELAY).await; + } + + debug!( + log, + "Address change routine complete"; + ); +} + +#[cfg(not(debug_assertions))] // Tests run too slow in debug. +#[cfg(test)] +mod tests { + use super::*; + use beacon_chain::test_utils::{BeaconChainHarness, EphemeralHarnessType}; + use operation_pool::ReceivedPreCapella; + use state_processing::{SigVerifiedOp, VerifyOperation}; + use std::collections::HashSet; + use tokio::sync::mpsc; + use types::*; + + type E = MainnetEthSpec; + + pub const VALIDATOR_COUNT: usize = BROADCAST_CHUNK_SIZE * 3; + pub const EXECUTION_ADDRESS: Address = Address::repeat_byte(42); + + struct Tester { + harness: BeaconChainHarness>, + /// Changes which should be broadcast at the Capella fork. + received_pre_capella_changes: Vec>, + /// Changes which should *not* be broadcast at the Capella fork. 
+ not_received_pre_capella_changes: Vec>, + } + + impl Tester { + fn new() -> Self { + let altair_fork_epoch = Epoch::new(0); + let bellatrix_fork_epoch = Epoch::new(0); + let capella_fork_epoch = Epoch::new(2); + + let mut spec = E::default_spec(); + spec.altair_fork_epoch = Some(altair_fork_epoch); + spec.bellatrix_fork_epoch = Some(bellatrix_fork_epoch); + spec.capella_fork_epoch = Some(capella_fork_epoch); + + let harness = BeaconChainHarness::builder(E::default()) + .spec(spec) + .logger(logging::test_logger()) + .deterministic_keypairs(VALIDATOR_COUNT) + .deterministic_withdrawal_keypairs(VALIDATOR_COUNT) + .fresh_ephemeral_store() + .mock_execution_layer() + .build(); + + Self { + harness, + received_pre_capella_changes: <_>::default(), + not_received_pre_capella_changes: <_>::default(), + } + } + + fn produce_verified_address_change( + &self, + validator_index: u64, + ) -> SigVerifiedOp { + let change = self + .harness + .make_bls_to_execution_change(validator_index, EXECUTION_ADDRESS); + let head = self.harness.chain.head_snapshot(); + + change + .validate(&head.beacon_state, &self.harness.spec) + .unwrap() + } + + fn produce_received_pre_capella_changes(mut self, indices: Vec) -> Self { + for validator_index in indices { + self.received_pre_capella_changes + .push(self.produce_verified_address_change(validator_index)); + } + self + } + + fn produce_not_received_pre_capella_changes(mut self, indices: Vec) -> Self { + for validator_index in indices { + self.not_received_pre_capella_changes + .push(self.produce_verified_address_change(validator_index)); + } + self + } + + async fn run(self) { + let harness = self.harness; + let chain = harness.chain.clone(); + + let mut broadcast_indices = HashSet::new(); + for change in self.received_pre_capella_changes { + broadcast_indices.insert(change.as_inner().message.validator_index); + chain + .op_pool + .insert_bls_to_execution_change(change, ReceivedPreCapella::Yes); + } + + let mut non_broadcast_indices = 
HashSet::new(); + for change in self.not_received_pre_capella_changes { + non_broadcast_indices.insert(change.as_inner().message.validator_index); + chain + .op_pool + .insert_bls_to_execution_change(change, ReceivedPreCapella::No); + } + + harness.set_current_slot( + chain + .spec + .capella_fork_epoch + .unwrap() + .start_slot(E::slots_per_epoch()), + ); + + let (sender, mut receiver) = mpsc::unbounded_channel(); + + broadcast_address_changes_at_capella(&chain, sender, &logging::test_logger()).await; + + let mut broadcasted_changes = vec![]; + while let Some(NetworkMessage::Publish { mut messages }) = receiver.recv().await { + match messages.pop().unwrap() { + PubsubMessage::BlsToExecutionChange(change) => broadcasted_changes.push(change), + _ => panic!("unexpected message"), + } + } + + assert_eq!( + broadcasted_changes.len(), + broadcast_indices.len(), + "all expected changes should have been broadcast" + ); + + for broadcasted in &broadcasted_changes { + assert!( + !non_broadcast_indices.contains(&broadcasted.message.validator_index), + "messages not flagged for broadcast should not have been broadcast" + ); + } + + let head = chain.head_snapshot(); + assert!( + chain + .op_pool + .get_bls_to_execution_changes_received_pre_capella( + &head.beacon_state, + &chain.spec, + ) + .is_empty(), + "there shouldn't be any capella broadcast changes left in the op pool" + ); + } + } + + // Useful for generating even-numbered indices. Required since only even + // numbered genesis validators have BLS credentials. 
+ fn even_indices(start: u64, count: usize) -> Vec { + (start..).filter(|i| i % 2 == 0).take(count).collect() + } + + #[tokio::test] + async fn one_chunk() { + Tester::new() + .produce_received_pre_capella_changes(even_indices(0, 4)) + .produce_not_received_pre_capella_changes(even_indices(10, 4)) + .run() + .await; + } + + #[tokio::test] + async fn multiple_chunks() { + Tester::new() + .produce_received_pre_capella_changes(even_indices(0, BROADCAST_CHUNK_SIZE * 3 / 2)) + .run() + .await; + } +} diff --git a/beacon_node/client/src/builder.rs b/beacon_node/client/src/builder.rs index 3b016ebda9c..d4b785cb119 100644 --- a/beacon_node/client/src/builder.rs +++ b/beacon_node/client/src/builder.rs @@ -1,3 +1,4 @@ +use crate::address_change_broadcast::broadcast_address_changes_at_capella; use crate::config::{ClientGenesis, Config as ClientConfig}; use crate::notifier::spawn_notifier; use crate::Client; @@ -346,12 +347,6 @@ where while block.slot() % slots_per_epoch != 0 { block_slot = (block_slot / slots_per_epoch - 1) * slots_per_epoch; - debug!( - context.log(), - "Searching for aligned checkpoint block"; - "block_slot" => block_slot, - ); - debug!( context.log(), "Searching for aligned checkpoint block"; @@ -802,6 +797,25 @@ where // Spawns a routine that polls the `exchange_transition_configuration` endpoint. execution_layer.spawn_transition_configuration_poll(beacon_chain.spec.clone()); } + + // Spawn a service to publish BLS to execution changes at the Capella fork. 
+ if let Some(network_senders) = self.network_senders { + let inner_chain = beacon_chain.clone(); + let broadcast_context = + runtime_context.service_context("addr_bcast".to_string()); + let log = broadcast_context.log().clone(); + broadcast_context.executor.spawn( + async move { + broadcast_address_changes_at_capella( + &inner_chain, + network_senders.network_send(), + &log, + ) + .await + }, + "addr_broadcast", + ); + } } start_proposer_prep_service(runtime_context.executor.clone(), beacon_chain.clone()); diff --git a/beacon_node/client/src/config.rs b/beacon_node/client/src/config.rs index 22b868256ad..95a00b37492 100644 --- a/beacon_node/client/src/config.rs +++ b/beacon_node/client/src/config.rs @@ -79,6 +79,7 @@ pub struct Config { pub monitoring_api: Option, pub slasher: Option, pub logger_config: LoggerConfig, + pub always_prefer_builder_payload: bool, } impl Default for Config { @@ -105,6 +106,7 @@ impl Default for Config { validator_monitor_pubkeys: vec![], validator_monitor_individual_tracking_threshold: DEFAULT_INDIVIDUAL_TRACKING_THRESHOLD, logger_config: LoggerConfig::default(), + always_prefer_builder_payload: false, } } } diff --git a/beacon_node/client/src/lib.rs b/beacon_node/client/src/lib.rs index 24df8740863..584a0d736de 100644 --- a/beacon_node/client/src/lib.rs +++ b/beacon_node/client/src/lib.rs @@ -1,5 +1,6 @@ extern crate slog; +mod address_change_broadcast; pub mod config; mod metrics; mod notifier; @@ -45,9 +46,18 @@ impl Client { self.http_metrics_listen_addr } - /// Returns the port of the client's libp2p stack, if it was started. - pub fn libp2p_listen_port(&self) -> Option { - self.network_globals.as_ref().map(|n| n.listen_port_tcp()) + /// Returns the ipv4 port of the client's libp2p stack, if it was started. + pub fn libp2p_listen_ipv4_port(&self) -> Option { + self.network_globals + .as_ref() + .and_then(|n| n.listen_port_tcp4()) + } + + /// Returns the ipv6 port of the client's libp2p stack, if it was started. 
+ pub fn libp2p_listen_ipv6_port(&self) -> Option { + self.network_globals + .as_ref() + .and_then(|n| n.listen_port_tcp6()) } /// Returns the list of libp2p addresses the client is listening to. diff --git a/beacon_node/client/src/notifier.rs b/beacon_node/client/src/notifier.rs index 1da7a79707d..1105bc41f67 100644 --- a/beacon_node/client/src/notifier.rs +++ b/beacon_node/client/src/notifier.rs @@ -1,5 +1,6 @@ use crate::metrics; use beacon_chain::{ + capella_readiness::CapellaReadiness, merge_readiness::{MergeConfig, MergeReadiness}, BeaconChain, BeaconChainTypes, ExecutionStatus, }; @@ -313,6 +314,7 @@ pub fn spawn_notifier( eth1_logging(&beacon_chain, &log); merge_readiness_logging(current_slot, &beacon_chain, &log).await; + capella_readiness_logging(current_slot, &beacon_chain, &log).await; } }; @@ -350,12 +352,15 @@ async fn merge_readiness_logging( } if merge_completed && !has_execution_layer { - error!( - log, - "Execution endpoint required"; - "info" => "you need an execution engine to validate blocks, see: \ - https://lighthouse-book.sigmaprime.io/merge-migration.html" - ); + if !beacon_chain.is_time_to_prepare_for_capella(current_slot) { + // logging of the EE being offline is handled in `capella_readiness_logging()` + error!( + log, + "Execution endpoint required"; + "info" => "you need an execution engine to validate blocks, see: \ + https://lighthouse-book.sigmaprime.io/merge-migration.html" + ); + } return; } @@ -419,6 +424,65 @@ async fn merge_readiness_logging( } } +/// Provides some helpful logging to users to indicate if their node is ready for Capella +async fn capella_readiness_logging( + current_slot: Slot, + beacon_chain: &BeaconChain, + log: &Logger, +) { + let capella_completed = beacon_chain + .canonical_head + .cached_head() + .snapshot + .beacon_block + .message() + .body() + .execution_payload() + .map_or(false, |payload| payload.withdrawals_root().is_ok()); + + let has_execution_layer = beacon_chain.execution_layer.is_some(); + + if 
capella_completed && has_execution_layer + || !beacon_chain.is_time_to_prepare_for_capella(current_slot) + { + return; + } + + if capella_completed && !has_execution_layer { + error!( + log, + "Execution endpoint required"; + "info" => "you need a Capella enabled execution engine to validate blocks, see: \ + https://lighthouse-book.sigmaprime.io/merge-migration.html" + ); + return; + } + + match beacon_chain.check_capella_readiness().await { + CapellaReadiness::Ready => { + info!( + log, + "Ready for Capella"; + "info" => "ensure the execution endpoint is updated to the latest Capella/Shanghai release" + ) + } + readiness @ CapellaReadiness::ExchangeCapabilitiesFailed { error: _ } => { + error!( + log, + "Not ready for Capella"; + "hint" => "the execution endpoint may be offline", + "info" => %readiness, + ) + } + readiness => warn!( + log, + "Not ready for Capella"; + "hint" => "try updating the execution endpoint", + "info" => %readiness, + ), + } +} + fn eth1_logging(beacon_chain: &BeaconChain, log: &Logger) { let current_slot_opt = beacon_chain.slot().ok(); diff --git a/beacon_node/eth1/Cargo.toml b/beacon_node/eth1/Cargo.toml index fb988d73989..9e8179aff4f 100644 --- a/beacon_node/eth1/Cargo.toml +++ b/beacon_node/eth1/Cargo.toml @@ -21,7 +21,7 @@ hex = "0.4.2" types = { path = "../../consensus/types"} merkle_proof = { path = "../../consensus/merkle_proof"} eth2_ssz = { version = "0.4.1", path = "../../consensus/ssz" } -eth2_ssz_derive = { version = "0.3.0", path = "../../consensus/ssz_derive" } +eth2_ssz_derive = { version = "0.3.1", path = "../../consensus/ssz_derive" } tree_hash = { version = "0.4.1", path = "../../consensus/tree_hash" } parking_lot = "0.12.0" slog = "2.5.2" diff --git a/beacon_node/eth1/tests/test.rs b/beacon_node/eth1/tests/test.rs index 069a6e4aade..cd680478cc5 100644 --- a/beacon_node/eth1/tests/test.rs +++ b/beacon_node/eth1/tests/test.rs @@ -697,6 +697,7 @@ mod fast { let web3 = eth1.web3(); let now = get_block_number(&web3).await; + 
let spec = MainnetEthSpec::default_spec(); let service = Service::new( Config { endpoint: Eth1Endpoint::NoAuth( @@ -710,7 +711,7 @@ mod fast { ..Config::default() }, log, - MainnetEthSpec::default_spec(), + spec.clone(), ) .unwrap(); let client = diff --git a/beacon_node/execution_layer/Cargo.toml b/beacon_node/execution_layer/Cargo.toml index d1190d85da0..786472ed811 100644 --- a/beacon_node/execution_layer/Cargo.toml +++ b/beacon_node/execution_layer/Cargo.toml @@ -26,6 +26,7 @@ eth2_ssz = { version = "0.4.1", path = "../../consensus/ssz" } eth2_ssz_types = { version = "0.2.2", path = "../../consensus/ssz_types" } eth2 = { path = "../../common/eth2" } state_processing = { path = "../../consensus/state_processing" } +superstruct = "0.6.0" lru = "0.7.1" exit-future = "0.2.0" tree_hash = { version = "0.4.1", path = "../../consensus/tree_hash" } @@ -40,9 +41,9 @@ lazy_static = "1.4.0" ethers-core = "1.0.2" builder_client = { path = "../builder_client" } fork_choice = { path = "../../consensus/fork_choice" } -mev-build-rs = { git = "https://github.com/ralexstokes/mev-rs", rev = "6c99b0fbdc0427b1625469d2e575303ce08de5b8" } -ethereum-consensus = { git = "https://github.com/ralexstokes/ethereum-consensus", rev = "a8110af76d97bf2bf27fb987a671808fcbdf1834" } -ssz-rs = { git = "https://github.com/ralexstokes/ssz-rs", rev = "cb08f1" } +mev-rs = { git = "https://github.com/ralexstokes/mev-rs" } +ethereum-consensus = { git = "https://github.com/ralexstokes/ethereum-consensus" } +ssz-rs = { git = "https://github.com/ralexstokes/ssz-rs" } tokio-stream = { version = "0.1.9", features = [ "sync" ] } strum = "0.24.0" keccak-hash = "0.10.0" diff --git a/beacon_node/execution_layer/src/block_hash.rs b/beacon_node/execution_layer/src/block_hash.rs index f023c038aec..e9b7dcc17f3 100644 --- a/beacon_node/execution_layer/src/block_hash.rs +++ b/beacon_node/execution_layer/src/block_hash.rs @@ -1,4 +1,5 @@ use crate::{ + json_structures::JsonWithdrawal, keccak::{keccak256, KeccakHasher}, 
metrics, Error, ExecutionLayer, }; @@ -6,39 +7,51 @@ use ethers_core::utils::rlp::RlpStream; use keccak_hash::KECCAK_EMPTY_LIST_RLP; use triehash::ordered_trie_root; use types::{ - map_execution_block_header_fields, Address, EthSpec, ExecutionBlockHash, ExecutionBlockHeader, - ExecutionPayload, Hash256, Hash64, Uint256, + map_execution_block_header_fields_except_withdrawals, Address, EthSpec, ExecutionBlockHash, + ExecutionBlockHeader, ExecutionPayloadRef, Hash256, Hash64, Uint256, }; impl ExecutionLayer { /// Verify `payload.block_hash` locally within Lighthouse. /// /// No remote calls to the execution client will be made, so this is quite a cheap check. - pub fn verify_payload_block_hash(&self, payload: &ExecutionPayload) -> Result<(), Error> { + pub fn verify_payload_block_hash(&self, payload: ExecutionPayloadRef) -> Result<(), Error> { let _timer = metrics::start_timer(&metrics::EXECUTION_LAYER_VERIFY_BLOCK_HASH); // Calculate the transactions root. // We're currently using a deprecated Parity library for this. We should move to a // better alternative when one appears, possibly following Reth. let rlp_transactions_root = ordered_trie_root::( - payload.transactions.iter().map(|txn_bytes| &**txn_bytes), + payload.transactions().iter().map(|txn_bytes| &**txn_bytes), ); + // Calculate withdrawals root (post-Capella). + let rlp_withdrawals_root = if let Ok(withdrawals) = payload.withdrawals() { + Some(ordered_trie_root::( + withdrawals.iter().map(|withdrawal| { + rlp_encode_withdrawal(&JsonWithdrawal::from(withdrawal.clone())) + }), + )) + } else { + None + }; + // Construct the block header. let exec_block_header = ExecutionBlockHeader::from_payload( payload, KECCAK_EMPTY_LIST_RLP.as_fixed_bytes().into(), rlp_transactions_root, + rlp_withdrawals_root, ); // Hash the RLP encoding of the block header. 
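The hunk above RLP-encodes each withdrawal as a 4-item list before feeding it into the trie-root computation, using ethers' `RlpStream`. As a standalone illustration of the encoding rules themselves (not that library's API), here is a minimal from-scratch RLP encoder covering only the short-string and short-list forms a 4-field withdrawal needs. Field types are simplified to `u64` — in the real `JsonWithdrawal` the address is a 20-byte value.

```rust
/// RLP-encode a byte string (short form only: payload < 56 bytes).
/// A single byte below 0x80 encodes as itself; otherwise the payload is
/// prefixed with 0x80 + length.
fn rlp_bytes(data: &[u8]) -> Vec<u8> {
    match data {
        [b] if *b < 0x80 => vec![*b],
        _ => {
            assert!(data.len() < 56, "short-string form only");
            let mut out = vec![0x80 + data.len() as u8];
            out.extend_from_slice(data);
            out
        }
    }
}

/// RLP integers are big-endian with leading zeros stripped; zero encodes
/// as the empty string (0x80).
fn rlp_u64(value: u64) -> Vec<u8> {
    let be = value.to_be_bytes();
    let first = be.iter().position(|&b| b != 0).unwrap_or(be.len());
    rlp_bytes(&be[first..])
}

/// RLP-encode a list (short form only): concatenated item encodings,
/// prefixed with 0xc0 + payload length.
fn rlp_list(items: &[Vec<u8>]) -> Vec<u8> {
    let payload: Vec<u8> = items.concat();
    assert!(payload.len() < 56, "short-list form only");
    let mut out = vec![0xc0 + payload.len() as u8];
    out.extend(payload);
    out
}

/// Mirror of the 4-field list shape in `rlp_encode_withdrawal`, with all
/// fields simplified to `u64` for this sketch.
fn rlp_withdrawal(index: u64, validator_index: u64, address: u64, amount: u64) -> Vec<u8> {
    rlp_list(&[
        rlp_u64(index),
        rlp_u64(validator_index),
        rlp_u64(address),
        rlp_u64(amount),
    ])
}
```

The optional `withdrawals_root` appended to the header list in `rlp_encode_block_header` follows the same principle: post-Capella headers simply carry one extra list item, so pre-Capella encodings are untouched.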
let rlp_block_header = rlp_encode_block_header(&exec_block_header); let header_hash = ExecutionBlockHash::from_root(keccak256(&rlp_block_header)); - if header_hash != payload.block_hash { + if header_hash != payload.block_hash() { return Err(Error::BlockHashMismatch { computed: header_hash, - payload: payload.block_hash, + payload: payload.block_hash(), transactions_root: rlp_transactions_root, }); } @@ -47,13 +60,27 @@ impl ExecutionLayer { } } +/// RLP encode a withdrawal. +pub fn rlp_encode_withdrawal(withdrawal: &JsonWithdrawal) -> Vec { + let mut rlp_stream = RlpStream::new(); + rlp_stream.begin_list(4); + rlp_stream.append(&withdrawal.index); + rlp_stream.append(&withdrawal.validator_index); + rlp_stream.append(&withdrawal.address); + rlp_stream.append(&withdrawal.amount); + rlp_stream.out().into() +} + /// RLP encode an execution block header. pub fn rlp_encode_block_header(header: &ExecutionBlockHeader) -> Vec { let mut rlp_header_stream = RlpStream::new(); rlp_header_stream.begin_unbounded_list(); - map_execution_block_header_fields!(&header, |_, field| { + map_execution_block_header_fields_except_withdrawals!(&header, |_, field| { rlp_header_stream.append(field); }); + if let Some(withdrawals_root) = &header.withdrawals_root { + rlp_header_stream.append(withdrawals_root); + } rlp_header_stream.finalize_unbounded_list(); rlp_header_stream.out().into() } @@ -99,6 +126,7 @@ mod test { mix_hash: Hash256::from_str("0000000000000000000000000000000000000000000000000000000000000000").unwrap(), nonce: Hash64::zero(), base_fee_per_gas: 0x036b_u64.into(), + withdrawals_root: None, }; let expected_rlp = 
"f90200a0e0a94a7a3c9617401586b1a27025d2d9671332d22d540e0af72b069170380f2aa01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d4934794ba5e000000000000000000000000000000000000a0ec3c94b18b8a1cff7d60f8d258ec723312932928626b4c9355eb4ab3568ec7f7a050f738580ed699f0469702c7ccc63ed2e51bc034be9479b7bff4e68dee84accfa029b0562f7140574dd0d50dee8a271b22e1a0a7b78fca58f7c60370d8317ba2a9b9010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000830200000188016345785d8a00008301553482079e42a0000000000000000000000000000000000000000000000000000000000000000088000000000000000082036b"; let expected_hash = @@ -126,6 +154,7 @@ mod test { mix_hash: Hash256::from_str("0000000000000000000000000000000000000000000000000000000000020000").unwrap(), nonce: Hash64::zero(), base_fee_per_gas: 0x036b_u64.into(), + withdrawals_root: None, }; let expected_rlp = 
"f901fda0927ca537f06c783a3a2635b8805eef1c8c2124f7444ad4a3389898dd832f2dbea01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d4934794ba5e000000000000000000000000000000000000a0e97859b065bd8dbbb4519c7cb935024de2484c2b7f881181b4360492f0b06b82a050f738580ed699f0469702c7ccc63ed2e51bc034be9479b7bff4e68dee84accfa029b0562f7140574dd0d50dee8a271b22e1a0a7b78fca58f7c60370d8317ba2a9b9010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800188016345785d8a00008301553482079e42a0000000000000000000000000000000000000000000000000000000000002000088000000000000000082036b"; let expected_hash = @@ -154,6 +183,7 @@ mod test { mix_hash: Hash256::from_str("bf5289894b2ceab3549f92f063febbac896b280ddb18129a57cff13113c11b13").unwrap(), nonce: Hash64::zero(), base_fee_per_gas: 0x34187b238_u64.into(), + withdrawals_root: None, }; let expected_hash = Hash256::from_str("6da69709cd5a34079b6604d29cd78fc01dacd7c6268980057ad92a2bede87351") diff --git a/beacon_node/execution_layer/src/engine_api.rs b/beacon_node/execution_layer/src/engine_api.rs index ba0a37736b0..3ecb36d0938 100644 --- a/beacon_node/execution_layer/src/engine_api.rs +++ b/beacon_node/execution_layer/src/engine_api.rs @@ -1,14 +1,26 @@ -use crate::engines::ForkChoiceState; +use crate::engines::ForkchoiceState; +use crate::http::{ + ENGINE_EXCHANGE_TRANSITION_CONFIGURATION_V1, ENGINE_FORKCHOICE_UPDATED_V1, + ENGINE_FORKCHOICE_UPDATED_V2, ENGINE_GET_PAYLOAD_BODIES_BY_HASH_V1, + ENGINE_GET_PAYLOAD_BODIES_BY_RANGE_V1, ENGINE_GET_PAYLOAD_V1, ENGINE_GET_PAYLOAD_V2, + 
ENGINE_NEW_PAYLOAD_V1, ENGINE_NEW_PAYLOAD_V2, +}; +use eth2::types::{SsePayloadAttributes, SsePayloadAttributesV1, SsePayloadAttributesV2}; pub use ethers_core::types::Transaction; +use ethers_core::utils::rlp::{self, Decodable, Rlp}; use http::deposit_methods::RpcError; -pub use json_structures::TransitionConfigurationV1; +pub use json_structures::{JsonWithdrawal, TransitionConfigurationV1}; use reqwest::StatusCode; use serde::{Deserialize, Serialize}; +use std::convert::TryFrom; use strum::IntoStaticStr; +use superstruct::superstruct; pub use types::{ - Address, EthSpec, ExecutionBlockHash, ExecutionPayload, ExecutionPayloadHeader, FixedVector, - Hash256, Uint256, VariableList, + Address, EthSpec, ExecutionBlockHash, ExecutionPayload, ExecutionPayloadHeader, + ExecutionPayloadRef, FixedVector, ForkName, Hash256, Transactions, Uint256, VariableList, + Withdrawal, Withdrawals, }; +use types::{ExecutionPayloadCapella, ExecutionPayloadMerge}; pub mod auth; pub mod http; @@ -38,7 +50,13 @@ pub enum Error { PayloadConversionLogicFlaw, DeserializeTransaction(ssz_types::Error), DeserializeTransactions(ssz_types::Error), + DeserializeWithdrawals(ssz_types::Error), BuilderApi(builder_client::Error), + IncorrectStateVariant, + RequiredMethodUnsupported(&'static str), + UnsupportedForkVariant(String), + BadConversion(String), + RlpDecoderError(rlp::DecoderError), } impl From for Error { @@ -72,6 +90,12 @@ impl From for Error { } } +impl From for Error { + fn from(e: rlp::DecoderError) -> Self { + Error::RlpDecoderError(e) + } +} + #[derive(Clone, Copy, Debug, PartialEq, IntoStaticStr)] #[strum(serialize_all = "snake_case")] pub enum PayloadStatusV1Status { @@ -111,9 +135,18 @@ pub struct ExecutionBlock { pub timestamp: u64, } -/// Representation of an exection block with enough detail to reconstruct a payload. +/// Representation of an execution block with enough detail to reconstruct a payload. 
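The `use superstruct::superstruct;` import above pulls in the macro behind the multi-variant types in this hunk. A hand-rolled sketch of roughly what that expansion produces for a two-version type may clarify the pattern: one struct per version, an enum over them, shared-field getters, and fallible "partial getters" for variant-only fields. This is a simplified `PayloadAttributes` with withdrawals reduced to `u64`s and the error type reduced to a string; it is not the generated code.

```rust
#[derive(Clone, Debug)]
struct PayloadAttributesV1 {
    timestamp: u64,
}

#[derive(Clone, Debug)]
struct PayloadAttributesV2 {
    timestamp: u64,
    withdrawals: Vec<u64>,
}

#[derive(Clone, Debug)]
enum PayloadAttributes {
    V1(PayloadAttributesV1),
    V2(PayloadAttributesV2),
}

impl PayloadAttributes {
    /// Mirrors the patch's `PayloadAttributes::new`: the presence of
    /// withdrawals selects the V2 (post-Capella) variant.
    fn new(timestamp: u64, withdrawals: Option<Vec<u64>>) -> Self {
        match withdrawals {
            Some(withdrawals) => Self::V2(PayloadAttributesV2 { timestamp, withdrawals }),
            None => Self::V1(PayloadAttributesV1 { timestamp }),
        }
    }

    /// Shared-field getter, like the `#[superstruct(getter(copy))]` output.
    fn timestamp(&self) -> u64 {
        match self {
            Self::V1(inner) => inner.timestamp,
            Self::V2(inner) => inner.timestamp,
        }
    }

    /// Partial getter: only V2 carries withdrawals, so the wrong variant
    /// yields an error (the role of `Error::IncorrectStateVariant`).
    fn withdrawals(&self) -> Result<&Vec<u64>, &'static str> {
        match self {
            Self::V2(inner) => Ok(&inner.withdrawals),
            _ => Err("incorrect variant"),
        }
    }
}
```

The payoff of the pattern is that fork-agnostic code calls `timestamp()` without matching, while fork-specific code is forced to handle the wrong-variant case explicitly.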
+#[superstruct( + variants(Merge, Capella), + variant_attributes( + derive(Clone, Debug, PartialEq, Serialize, Deserialize,), + serde(bound = "T: EthSpec", rename_all = "camelCase"), + ), + cast_error(ty = "Error", expr = "Error::IncorrectStateVariant"), + partial_getter_error(ty = "Error", expr = "Error::IncorrectStateVariant") +)] #[derive(Clone, Debug, PartialEq, Serialize, Deserialize)] -#[serde(rename_all = "camelCase")] +#[serde(bound = "T: EthSpec", rename_all = "camelCase", untagged)] pub struct ExecutionBlockWithTransactions { pub parent_hash: ExecutionBlockHash, #[serde(alias = "miner")] @@ -138,13 +171,132 @@ pub struct ExecutionBlockWithTransactions { #[serde(rename = "hash")] pub block_hash: ExecutionBlockHash, pub transactions: Vec, + #[superstruct(only(Capella))] + pub withdrawals: Vec, } -#[derive(Clone, Copy, Debug, PartialEq)] +impl TryFrom> for ExecutionBlockWithTransactions { + type Error = Error; + + fn try_from(payload: ExecutionPayload) -> Result { + let json_payload = match payload { + ExecutionPayload::Merge(block) => Self::Merge(ExecutionBlockWithTransactionsMerge { + parent_hash: block.parent_hash, + fee_recipient: block.fee_recipient, + state_root: block.state_root, + receipts_root: block.receipts_root, + logs_bloom: block.logs_bloom, + prev_randao: block.prev_randao, + block_number: block.block_number, + gas_limit: block.gas_limit, + gas_used: block.gas_used, + timestamp: block.timestamp, + extra_data: block.extra_data, + base_fee_per_gas: block.base_fee_per_gas, + block_hash: block.block_hash, + transactions: block + .transactions + .iter() + .map(|tx| Transaction::decode(&Rlp::new(tx))) + .collect::, _>>()?, + }), + ExecutionPayload::Capella(block) => { + Self::Capella(ExecutionBlockWithTransactionsCapella { + parent_hash: block.parent_hash, + fee_recipient: block.fee_recipient, + state_root: block.state_root, + receipts_root: block.receipts_root, + logs_bloom: block.logs_bloom, + prev_randao: block.prev_randao, + block_number: 
block.block_number, + gas_limit: block.gas_limit, + gas_used: block.gas_used, + timestamp: block.timestamp, + extra_data: block.extra_data, + base_fee_per_gas: block.base_fee_per_gas, + block_hash: block.block_hash, + transactions: block + .transactions + .iter() + .map(|tx| Transaction::decode(&Rlp::new(tx))) + .collect::, _>>()?, + withdrawals: Vec::from(block.withdrawals) + .into_iter() + .map(|withdrawal| withdrawal.into()) + .collect(), + }) + } + }; + Ok(json_payload) + } +} + +#[superstruct( + variants(V1, V2), + variant_attributes(derive(Clone, Debug, Eq, Hash, PartialEq),), + cast_error(ty = "Error", expr = "Error::IncorrectStateVariant"), + partial_getter_error(ty = "Error", expr = "Error::IncorrectStateVariant") +)] +#[derive(Clone, Debug, Eq, Hash, PartialEq)] pub struct PayloadAttributes { + #[superstruct(getter(copy))] pub timestamp: u64, + #[superstruct(getter(copy))] pub prev_randao: Hash256, + #[superstruct(getter(copy))] pub suggested_fee_recipient: Address, + #[superstruct(only(V2))] + pub withdrawals: Vec, +} + +impl PayloadAttributes { + pub fn new( + timestamp: u64, + prev_randao: Hash256, + suggested_fee_recipient: Address, + withdrawals: Option>, + ) -> Self { + match withdrawals { + Some(withdrawals) => PayloadAttributes::V2(PayloadAttributesV2 { + timestamp, + prev_randao, + suggested_fee_recipient, + withdrawals, + }), + None => PayloadAttributes::V1(PayloadAttributesV1 { + timestamp, + prev_randao, + suggested_fee_recipient, + }), + } + } +} + +impl From for SsePayloadAttributes { + fn from(pa: PayloadAttributes) -> Self { + match pa { + PayloadAttributes::V1(PayloadAttributesV1 { + timestamp, + prev_randao, + suggested_fee_recipient, + }) => Self::V1(SsePayloadAttributesV1 { + timestamp, + prev_randao, + suggested_fee_recipient, + }), + PayloadAttributes::V2(PayloadAttributesV2 { + timestamp, + prev_randao, + suggested_fee_recipient, + withdrawals, + }) => Self::V2(SsePayloadAttributesV2 { + timestamp, + prev_randao, + 
suggested_fee_recipient, + withdrawals, + }), + } + } } #[derive(Clone, Debug, PartialEq)] @@ -166,3 +318,171 @@ pub struct ProposeBlindedBlockResponse { pub latest_valid_hash: Option, pub validation_error: Option, } + +#[superstruct( + variants(Merge, Capella), + variant_attributes(derive(Clone, Debug, PartialEq),), + map_into(ExecutionPayload), + map_ref_into(ExecutionPayloadRef), + cast_error(ty = "Error", expr = "Error::IncorrectStateVariant"), + partial_getter_error(ty = "Error", expr = "Error::IncorrectStateVariant") +)] +#[derive(Clone, Debug, PartialEq)] +pub struct GetPayloadResponse { + #[superstruct(only(Merge), partial_getter(rename = "execution_payload_merge"))] + pub execution_payload: ExecutionPayloadMerge, + #[superstruct(only(Capella), partial_getter(rename = "execution_payload_capella"))] + pub execution_payload: ExecutionPayloadCapella, + pub block_value: Uint256, +} + +impl<'a, T: EthSpec> From> for ExecutionPayloadRef<'a, T> { + fn from(response: GetPayloadResponseRef<'a, T>) -> Self { + map_get_payload_response_ref_into_execution_payload_ref!(&'a _, response, |inner, cons| { + cons(&inner.execution_payload) + }) + } +} + +impl From> for ExecutionPayload { + fn from(response: GetPayloadResponse) -> Self { + map_get_payload_response_into_execution_payload!(response, |inner, cons| { + cons(inner.execution_payload) + }) + } +} + +impl From> for (ExecutionPayload, Uint256) { + fn from(response: GetPayloadResponse) -> Self { + match response { + GetPayloadResponse::Merge(inner) => ( + ExecutionPayload::Merge(inner.execution_payload), + inner.block_value, + ), + GetPayloadResponse::Capella(inner) => ( + ExecutionPayload::Capella(inner.execution_payload), + inner.block_value, + ), + } + } +} + +impl GetPayloadResponse { + pub fn execution_payload_ref(&self) -> ExecutionPayloadRef { + self.to_ref().into() + } +} + +#[derive(Clone, Debug)] +pub struct ExecutionPayloadBodyV1 { + pub transactions: Transactions, + pub withdrawals: Option>, +} + +impl 
ExecutionPayloadBodyV1 { + pub fn to_payload( + self, + header: ExecutionPayloadHeader, + ) -> Result, String> { + match header { + ExecutionPayloadHeader::Merge(header) => { + if self.withdrawals.is_some() { + return Err(format!( + "block {} is merge but payload body has withdrawals", + header.block_hash + )); + } + Ok(ExecutionPayload::Merge(ExecutionPayloadMerge { + parent_hash: header.parent_hash, + fee_recipient: header.fee_recipient, + state_root: header.state_root, + receipts_root: header.receipts_root, + logs_bloom: header.logs_bloom, + prev_randao: header.prev_randao, + block_number: header.block_number, + gas_limit: header.gas_limit, + gas_used: header.gas_used, + timestamp: header.timestamp, + extra_data: header.extra_data, + base_fee_per_gas: header.base_fee_per_gas, + block_hash: header.block_hash, + transactions: self.transactions, + })) + } + ExecutionPayloadHeader::Capella(header) => { + if let Some(withdrawals) = self.withdrawals { + Ok(ExecutionPayload::Capella(ExecutionPayloadCapella { + parent_hash: header.parent_hash, + fee_recipient: header.fee_recipient, + state_root: header.state_root, + receipts_root: header.receipts_root, + logs_bloom: header.logs_bloom, + prev_randao: header.prev_randao, + block_number: header.block_number, + gas_limit: header.gas_limit, + gas_used: header.gas_used, + timestamp: header.timestamp, + extra_data: header.extra_data, + base_fee_per_gas: header.base_fee_per_gas, + block_hash: header.block_hash, + transactions: self.transactions, + withdrawals, + })) + } else { + Err(format!( + "block {} is capella but payload body doesn't have withdrawals", + header.block_hash + )) + } + } + } + } +} + +#[derive(Clone, Copy, Debug)] +pub struct EngineCapabilities { + pub new_payload_v1: bool, + pub new_payload_v2: bool, + pub forkchoice_updated_v1: bool, + pub forkchoice_updated_v2: bool, + pub get_payload_bodies_by_hash_v1: bool, + pub get_payload_bodies_by_range_v1: bool, + pub get_payload_v1: bool, + pub get_payload_v2: 
bool, + pub exchange_transition_configuration_v1: bool, +} + +impl EngineCapabilities { + pub fn to_response(&self) -> Vec<&str> { + let mut response = Vec::new(); + if self.new_payload_v1 { + response.push(ENGINE_NEW_PAYLOAD_V1); + } + if self.new_payload_v2 { + response.push(ENGINE_NEW_PAYLOAD_V2); + } + if self.forkchoice_updated_v1 { + response.push(ENGINE_FORKCHOICE_UPDATED_V1); + } + if self.forkchoice_updated_v2 { + response.push(ENGINE_FORKCHOICE_UPDATED_V2); + } + if self.get_payload_bodies_by_hash_v1 { + response.push(ENGINE_GET_PAYLOAD_BODIES_BY_HASH_V1); + } + if self.get_payload_bodies_by_range_v1 { + response.push(ENGINE_GET_PAYLOAD_BODIES_BY_RANGE_V1); + } + if self.get_payload_v1 { + response.push(ENGINE_GET_PAYLOAD_V1); + } + if self.get_payload_v2 { + response.push(ENGINE_GET_PAYLOAD_V2); + } + if self.exchange_transition_configuration_v1 { + response.push(ENGINE_EXCHANGE_TRANSITION_CONFIGURATION_V1); + } + + response + } +} diff --git a/beacon_node/execution_layer/src/engine_api/http.rs b/beacon_node/execution_layer/src/engine_api/http.rs index 74536630128..993957450bc 100644 --- a/beacon_node/execution_layer/src/engine_api/http.rs +++ b/beacon_node/execution_layer/src/engine_api/http.rs @@ -7,8 +7,10 @@ use reqwest::header::CONTENT_TYPE; use sensitive_url::SensitiveUrl; use serde::de::DeserializeOwned; use serde_json::json; +use std::collections::HashSet; +use tokio::sync::Mutex; -use std::time::Duration; +use std::time::{Duration, Instant}; use types::EthSpec; pub use deposit_log::{DepositLog, Log}; @@ -29,22 +31,62 @@ pub const ETH_SYNCING: &str = "eth_syncing"; pub const ETH_SYNCING_TIMEOUT: Duration = Duration::from_secs(1); pub const ENGINE_NEW_PAYLOAD_V1: &str = "engine_newPayloadV1"; +pub const ENGINE_NEW_PAYLOAD_V2: &str = "engine_newPayloadV2"; pub const ENGINE_NEW_PAYLOAD_TIMEOUT: Duration = Duration::from_secs(8); pub const ENGINE_GET_PAYLOAD_V1: &str = "engine_getPayloadV1"; +pub const ENGINE_GET_PAYLOAD_V2: &str = 
"engine_getPayloadV2"; pub const ENGINE_GET_PAYLOAD_TIMEOUT: Duration = Duration::from_secs(2); pub const ENGINE_FORKCHOICE_UPDATED_V1: &str = "engine_forkchoiceUpdatedV1"; +pub const ENGINE_FORKCHOICE_UPDATED_V2: &str = "engine_forkchoiceUpdatedV2"; pub const ENGINE_FORKCHOICE_UPDATED_TIMEOUT: Duration = Duration::from_secs(8); +pub const ENGINE_GET_PAYLOAD_BODIES_BY_HASH_V1: &str = "engine_getPayloadBodiesByHashV1"; +pub const ENGINE_GET_PAYLOAD_BODIES_BY_RANGE_V1: &str = "engine_getPayloadBodiesByRangeV1"; +pub const ENGINE_GET_PAYLOAD_BODIES_TIMEOUT: Duration = Duration::from_secs(10); + pub const ENGINE_EXCHANGE_TRANSITION_CONFIGURATION_V1: &str = "engine_exchangeTransitionConfigurationV1"; pub const ENGINE_EXCHANGE_TRANSITION_CONFIGURATION_V1_TIMEOUT: Duration = Duration::from_secs(1); +pub const ENGINE_EXCHANGE_CAPABILITIES: &str = "engine_exchangeCapabilities"; +pub const ENGINE_EXCHANGE_CAPABILITIES_TIMEOUT: Duration = Duration::from_secs(1); + /// This error is returned during a `chainId` call by Geth. pub const EIP155_ERROR_STR: &str = "chain not synced beyond EIP-155 replay-protection fork block"; - -/// Contains methods to convert arbitary bytes to an ETH2 deposit contract object. +/// This code is returned by all clients when a method is not supported +/// (verified geth, nethermind, erigon, besu) +pub const METHOD_NOT_FOUND_CODE: i64 = -32601; + +pub static LIGHTHOUSE_CAPABILITIES: &[&str] = &[ + ENGINE_NEW_PAYLOAD_V1, + ENGINE_NEW_PAYLOAD_V2, + ENGINE_GET_PAYLOAD_V1, + ENGINE_GET_PAYLOAD_V2, + ENGINE_FORKCHOICE_UPDATED_V1, + ENGINE_FORKCHOICE_UPDATED_V2, + ENGINE_GET_PAYLOAD_BODIES_BY_HASH_V1, + ENGINE_GET_PAYLOAD_BODIES_BY_RANGE_V1, + ENGINE_EXCHANGE_TRANSITION_CONFIGURATION_V1, +]; + +/// This is necessary because a user might run a capella-enabled version of +/// lighthouse before they update to a capella-enabled execution engine. 
+// TODO (mark): rip this out once we are post-capella on mainnet +pub static PRE_CAPELLA_ENGINE_CAPABILITIES: EngineCapabilities = EngineCapabilities { + new_payload_v1: true, + new_payload_v2: false, + forkchoice_updated_v1: true, + forkchoice_updated_v2: false, + get_payload_bodies_by_hash_v1: false, + get_payload_bodies_by_range_v1: false, + get_payload_v1: true, + get_payload_v2: false, + exchange_transition_configuration_v1: true, +}; + +/// Contains methods to convert arbitrary bytes to an ETH2 deposit contract object. pub mod deposit_log { use ssz::Decode; use state_processing::per_block_processing::signature_sets::deposit_pubkey_signature_message; @@ -519,10 +561,39 @@ pub mod deposit_methods { } } +#[derive(Clone, Debug)] +pub struct CapabilitiesCacheEntry { + engine_capabilities: EngineCapabilities, + fetch_time: Instant, +} + +impl CapabilitiesCacheEntry { + pub fn new(engine_capabilities: EngineCapabilities) -> Self { + Self { + engine_capabilities, + fetch_time: Instant::now(), + } + } + + pub fn engine_capabilities(&self) -> EngineCapabilities { + self.engine_capabilities + } + + pub fn age(&self) -> Duration { + Instant::now().duration_since(self.fetch_time) + } + + /// returns `true` if the entry's age is >= age_limit + pub fn older_than(&self, age_limit: Option) -> bool { + age_limit.map_or(false, |limit| self.age() >= limit) + } +} + pub struct HttpJsonRpc { pub client: Client, pub url: SensitiveUrl, pub execution_timeout_multiplier: u32, + pub engine_capabilities_cache: Mutex>, auth: Option, } @@ -535,6 +606,7 @@ impl HttpJsonRpc { client: Client::builder().build()?, url, execution_timeout_multiplier: execution_timeout_multiplier.unwrap_or(1), + engine_capabilities_cache: Mutex::new(None), auth: None, }) } @@ -548,6 +620,7 @@ impl HttpJsonRpc { client: Client::builder().build()?, url, execution_timeout_multiplier: execution_timeout_multiplier.unwrap_or(1), + engine_capabilities_cache: Mutex::new(None), auth: Some(auth), }) } @@ -654,21 +727,40 
@@ impl HttpJsonRpc { pub async fn get_block_by_hash_with_txns( &self, block_hash: ExecutionBlockHash, + fork: ForkName, ) -> Result>, Error> { let params = json!([block_hash, true]); - self.rpc_request( - ETH_GET_BLOCK_BY_HASH, - params, - ETH_GET_BLOCK_BY_HASH_TIMEOUT * self.execution_timeout_multiplier, - ) - .await + Ok(Some(match fork { + ForkName::Merge => ExecutionBlockWithTransactions::Merge( + self.rpc_request( + ETH_GET_BLOCK_BY_HASH, + params, + ETH_GET_BLOCK_BY_HASH_TIMEOUT * self.execution_timeout_multiplier, + ) + .await?, + ), + ForkName::Capella => ExecutionBlockWithTransactions::Capella( + self.rpc_request( + ETH_GET_BLOCK_BY_HASH, + params, + ETH_GET_BLOCK_BY_HASH_TIMEOUT * self.execution_timeout_multiplier, + ) + .await?, + ), + ForkName::Base | ForkName::Altair => { + return Err(Error::UnsupportedForkVariant(format!( + "called get_block_by_hash_with_txns with fork {:?}", + fork + ))) + } + })) } pub async fn new_payload_v1( &self, execution_payload: ExecutionPayload, ) -> Result { - let params = json!([JsonExecutionPayloadV1::from(execution_payload)]); + let params = json!([JsonExecutionPayload::from(execution_payload)]); let response: JsonPayloadStatusV1 = self .rpc_request( @@ -681,13 +773,30 @@ impl HttpJsonRpc { Ok(response.into()) } + pub async fn new_payload_v2( + &self, + execution_payload: ExecutionPayload, + ) -> Result { + let params = json!([JsonExecutionPayload::from(execution_payload)]); + + let response: JsonPayloadStatusV1 = self + .rpc_request( + ENGINE_NEW_PAYLOAD_V2, + params, + ENGINE_NEW_PAYLOAD_TIMEOUT * self.execution_timeout_multiplier, + ) + .await?; + + Ok(response.into()) + } + pub async fn get_payload_v1( &self, payload_id: PayloadId, - ) -> Result, Error> { + ) -> Result, Error> { let params = json!([JsonPayloadIdRequest::from(payload_id)]); - let response: JsonExecutionPayloadV1 = self + let payload_v1: JsonExecutionPayloadV1 = self .rpc_request( ENGINE_GET_PAYLOAD_V1, params, @@ -695,17 +804,58 @@ impl HttpJsonRpc { 
            )
            .await?;

-        Ok(response.into())
+        Ok(GetPayloadResponse::Merge(GetPayloadResponseMerge {
+            execution_payload: payload_v1.into(),
+            // Set the V1 payload values from the EE to be zero. This simulates
+            // the pre-block-value functionality of always choosing the builder
+            // block.
+            block_value: Uint256::zero(),
+        }))
+    }
+
+    pub async fn get_payload_v2<T: EthSpec>(
+        &self,
+        fork_name: ForkName,
+        payload_id: PayloadId,
+    ) -> Result<GetPayloadResponse<T>, Error> {
+        let params = json!([JsonPayloadIdRequest::from(payload_id)]);
+
+        match fork_name {
+            ForkName::Merge => {
+                let response: JsonGetPayloadResponseV1<T> = self
+                    .rpc_request(
+                        ENGINE_GET_PAYLOAD_V2,
+                        params,
+                        ENGINE_GET_PAYLOAD_TIMEOUT * self.execution_timeout_multiplier,
+                    )
+                    .await?;
+                Ok(JsonGetPayloadResponse::V1(response).into())
+            }
+            ForkName::Capella => {
+                let response: JsonGetPayloadResponseV2<T> = self
+                    .rpc_request(
+                        ENGINE_GET_PAYLOAD_V2,
+                        params,
+                        ENGINE_GET_PAYLOAD_TIMEOUT * self.execution_timeout_multiplier,
+                    )
+                    .await?;
+                Ok(JsonGetPayloadResponse::V2(response).into())
+            }
+            ForkName::Base | ForkName::Altair => Err(Error::UnsupportedForkVariant(format!(
+                "called get_payload_v2 with {}",
+                fork_name
+            ))),
+        }
     }

     pub async fn forkchoice_updated_v1(
         &self,
-        forkchoice_state: ForkChoiceState,
+        forkchoice_state: ForkchoiceState,
         payload_attributes: Option<PayloadAttributes>,
     ) -> Result<ForkchoiceUpdatedResponse, Error> {
         let params = json!([
-            JsonForkChoiceStateV1::from(forkchoice_state),
-            payload_attributes.map(JsonPayloadAttributesV1::from)
+            JsonForkchoiceStateV1::from(forkchoice_state),
+            payload_attributes.map(JsonPayloadAttributes::from)
         ]);

         let response: JsonForkchoiceUpdatedV1Response = self
@@ -719,6 +869,71 @@ impl HttpJsonRpc {
         Ok(response.into())
     }

+    pub async fn forkchoice_updated_v2(
+        &self,
+        forkchoice_state: ForkchoiceState,
+        payload_attributes: Option<PayloadAttributes>,
+    ) -> Result<ForkchoiceUpdatedResponse, Error> {
+        let params = json!([
+            JsonForkchoiceStateV1::from(forkchoice_state),
+            payload_attributes.map(JsonPayloadAttributes::from)
+        ]);
+
+        let response: JsonForkchoiceUpdatedV1Response = self
+            .rpc_request(
+                ENGINE_FORKCHOICE_UPDATED_V2,
+                params,
+                ENGINE_FORKCHOICE_UPDATED_TIMEOUT * self.execution_timeout_multiplier,
+            )
+            .await?;
+
+        Ok(response.into())
+    }
+
+    pub async fn get_payload_bodies_by_hash_v1<E: EthSpec>(
+        &self,
+        block_hashes: Vec<ExecutionBlockHash>,
+    ) -> Result<Vec<Option<ExecutionPayloadBodyV1<E>>>, Error> {
+        let params = json!([block_hashes]);
+
+        let response: Vec<Option<JsonExecutionPayloadBodyV1<E>>> = self
+            .rpc_request(
+                ENGINE_GET_PAYLOAD_BODIES_BY_HASH_V1,
+                params,
+                ENGINE_GET_PAYLOAD_BODIES_TIMEOUT * self.execution_timeout_multiplier,
+            )
+            .await?;
+
+        Ok(response
+            .into_iter()
+            .map(|opt_json| opt_json.map(From::from))
+            .collect())
+    }
+
+    pub async fn get_payload_bodies_by_range_v1<E: EthSpec>(
+        &self,
+        start: u64,
+        count: u64,
+    ) -> Result<Vec<Option<ExecutionPayloadBodyV1<E>>>, Error> {
+        #[derive(Serialize)]
+        #[serde(transparent)]
+        struct Quantity(#[serde(with = "eth2_serde_utils::u64_hex_be")] u64);
+
+        let params = json!([Quantity(start), Quantity(count)]);
+        let response: Vec<Option<JsonExecutionPayloadBodyV1<E>>> = self
+            .rpc_request(
+                ENGINE_GET_PAYLOAD_BODIES_BY_RANGE_V1,
+                params,
+                ENGINE_GET_PAYLOAD_BODIES_TIMEOUT * self.execution_timeout_multiplier,
+            )
+            .await?;
+
+        Ok(response
+            .into_iter()
+            .map(|opt_json| opt_json.map(From::from))
+            .collect())
+    }
+
     pub async fn exchange_transition_configuration_v1(
         &self,
         transition_configuration: TransitionConfigurationV1,
@@ -736,6 +951,122 @@ impl HttpJsonRpc {
         Ok(response)
     }
+
+    pub async fn exchange_capabilities(&self) -> Result<EngineCapabilities, Error> {
+        let params = json!([LIGHTHOUSE_CAPABILITIES]);
+
+        let response: Result<HashSet<String>, _> = self
+            .rpc_request(
+                ENGINE_EXCHANGE_CAPABILITIES,
+                params,
+                ENGINE_EXCHANGE_CAPABILITIES_TIMEOUT * self.execution_timeout_multiplier,
+            )
+            .await;
+
+        match response {
+            // TODO (mark): rip this out once we are post capella on mainnet
+            Err(error) => match error {
+                Error::ServerMessage { code, message: _ } if code == METHOD_NOT_FOUND_CODE => {
+                    Ok(PRE_CAPELLA_ENGINE_CAPABILITIES)
+                }
+                _ => Err(error),
+            },
+            Ok(capabilities) => Ok(EngineCapabilities {
+                new_payload_v1: capabilities.contains(ENGINE_NEW_PAYLOAD_V1),
+                new_payload_v2: capabilities.contains(ENGINE_NEW_PAYLOAD_V2),
+                forkchoice_updated_v1: capabilities.contains(ENGINE_FORKCHOICE_UPDATED_V1),
+                forkchoice_updated_v2: capabilities.contains(ENGINE_FORKCHOICE_UPDATED_V2),
+                get_payload_bodies_by_hash_v1: capabilities
+                    .contains(ENGINE_GET_PAYLOAD_BODIES_BY_HASH_V1),
+                get_payload_bodies_by_range_v1: capabilities
+                    .contains(ENGINE_GET_PAYLOAD_BODIES_BY_RANGE_V1),
+                get_payload_v1: capabilities.contains(ENGINE_GET_PAYLOAD_V1),
+                get_payload_v2: capabilities.contains(ENGINE_GET_PAYLOAD_V2),
+                exchange_transition_configuration_v1: capabilities
+                    .contains(ENGINE_EXCHANGE_TRANSITION_CONFIGURATION_V1),
+            }),
+        }
+    }
+
+    pub async fn clear_exchange_capabilties_cache(&self) {
+        *self.engine_capabilities_cache.lock().await = None;
+    }
+
+    /// Returns the execution engine capabilities resulting from a call to
+    /// engine_exchangeCapabilities. If the capabilities cache is not populated,
+    /// or if it is populated with a cached result of age >= `age_limit`, this
+    /// method will fetch the result from the execution engine and populate the
+    /// cache before returning it. Otherwise it will return a cached result from
+    /// a previous call.
+    ///
+    /// Set `age_limit` to `None` to always return the cached result.
+    /// Set `age_limit` to `Some(Duration::ZERO)` to force fetching from the EE.
+    pub async fn get_engine_capabilities(
+        &self,
+        age_limit: Option<Duration>,
+    ) -> Result<EngineCapabilities, Error> {
+        let mut lock = self.engine_capabilities_cache.lock().await;
+
+        if let Some(lock) = lock.as_ref().filter(|entry| !entry.older_than(age_limit)) {
+            Ok(lock.engine_capabilities())
+        } else {
+            let engine_capabilities = self.exchange_capabilities().await?;
+            *lock = Some(CapabilitiesCacheEntry::new(engine_capabilities));
+            Ok(engine_capabilities)
+        }
+    }
+
+    // automatically selects the latest version of
+    // new_payload that the execution engine supports
+    pub async fn new_payload<T: EthSpec>(
+        &self,
+        execution_payload: ExecutionPayload<T>,
+    ) -> Result<PayloadStatusV1, Error> {
+        let engine_capabilities = self.get_engine_capabilities(None).await?;
+        if engine_capabilities.new_payload_v2 {
+            self.new_payload_v2(execution_payload).await
+        } else if engine_capabilities.new_payload_v1 {
+            self.new_payload_v1(execution_payload).await
+        } else {
+            Err(Error::RequiredMethodUnsupported("engine_newPayload"))
+        }
+    }
+
+    // automatically selects the latest version of
+    // get_payload that the execution engine supports
+    pub async fn get_payload<T: EthSpec>(
+        &self,
+        fork_name: ForkName,
+        payload_id: PayloadId,
+    ) -> Result<GetPayloadResponse<T>, Error> {
+        let engine_capabilities = self.get_engine_capabilities(None).await?;
+        if engine_capabilities.get_payload_v2 {
+            self.get_payload_v2(fork_name, payload_id).await
+        } else if engine_capabilities.get_payload_v1 {
+            self.get_payload_v1(payload_id).await
+        } else {
+            Err(Error::RequiredMethodUnsupported("engine_getPayload"))
+        }
+    }
+
+    // automatically selects the latest version of
+    // forkchoice_updated that the execution engine supports
+    pub async fn forkchoice_updated(
+        &self,
+        forkchoice_state: ForkchoiceState,
+        payload_attributes: Option<PayloadAttributes>,
+    ) -> Result<ForkchoiceUpdatedResponse, Error> {
+        let engine_capabilities = self.get_engine_capabilities(None).await?;
+        if
engine_capabilities.forkchoice_updated_v2 { + self.forkchoice_updated_v2(forkchoice_state, payload_attributes) + .await + } else if engine_capabilities.forkchoice_updated_v1 { + self.forkchoice_updated_v1(forkchoice_state, payload_attributes) + .await + } else { + Err(Error::RequiredMethodUnsupported("engine_forkchoiceUpdated")) + } + } } #[cfg(test)] @@ -746,7 +1077,7 @@ mod test { use std::future::Future; use std::str::FromStr; use std::sync::Arc; - use types::{MainnetEthSpec, Transactions, Unsigned, VariableList}; + use types::{ExecutionPayloadMerge, MainnetEthSpec, Transactions, Unsigned, VariableList}; struct Tester { server: MockServer, @@ -852,10 +1183,10 @@ mod test { fn encode_transactions( transactions: Transactions, ) -> Result { - let ep: JsonExecutionPayloadV1 = JsonExecutionPayloadV1 { + let ep: JsonExecutionPayload = JsonExecutionPayload::V1(JsonExecutionPayloadV1 { transactions, ..<_>::default() - }; + }); let json = serde_json::to_value(&ep)?; Ok(json.get("transactions").unwrap().clone()) } @@ -882,8 +1213,8 @@ mod test { json.as_object_mut() .unwrap() .insert("transactions".into(), transactions); - let ep: JsonExecutionPayloadV1 = serde_json::from_value(json)?; - Ok(ep.transactions) + let ep: JsonExecutionPayload = serde_json::from_value(json)?; + Ok(ep.transactions().clone()) } fn assert_transactions_serde( @@ -1029,16 +1360,16 @@ mod test { |client| async move { let _ = client .forkchoice_updated_v1( - ForkChoiceState { + ForkchoiceState { head_block_hash: ExecutionBlockHash::repeat_byte(1), safe_block_hash: ExecutionBlockHash::repeat_byte(1), finalized_block_hash: ExecutionBlockHash::zero(), }, - Some(PayloadAttributes { + Some(PayloadAttributes::V1(PayloadAttributesV1 { timestamp: 5, prev_randao: Hash256::zero(), suggested_fee_recipient: Address::repeat_byte(0), - }), + })), ) .await; }, @@ -1064,16 +1395,16 @@ mod test { .assert_auth_failure(|client| async move { client .forkchoice_updated_v1( - ForkChoiceState { + ForkchoiceState { 
head_block_hash: ExecutionBlockHash::repeat_byte(1), safe_block_hash: ExecutionBlockHash::repeat_byte(1), finalized_block_hash: ExecutionBlockHash::zero(), }, - Some(PayloadAttributes { + Some(PayloadAttributes::V1(PayloadAttributesV1 { timestamp: 5, prev_randao: Hash256::zero(), suggested_fee_recipient: Address::repeat_byte(0), - }), + })), ) .await }) @@ -1109,22 +1440,24 @@ mod test { .assert_request_equals( |client| async move { let _ = client - .new_payload_v1::(ExecutionPayload { - parent_hash: ExecutionBlockHash::repeat_byte(0), - fee_recipient: Address::repeat_byte(1), - state_root: Hash256::repeat_byte(1), - receipts_root: Hash256::repeat_byte(0), - logs_bloom: vec![1; 256].into(), - prev_randao: Hash256::repeat_byte(1), - block_number: 0, - gas_limit: 1, - gas_used: 2, - timestamp: 42, - extra_data: vec![].into(), - base_fee_per_gas: Uint256::from(1), - block_hash: ExecutionBlockHash::repeat_byte(1), - transactions: vec![].into(), - }) + .new_payload_v1::(ExecutionPayload::Merge( + ExecutionPayloadMerge { + parent_hash: ExecutionBlockHash::repeat_byte(0), + fee_recipient: Address::repeat_byte(1), + state_root: Hash256::repeat_byte(1), + receipts_root: Hash256::repeat_byte(0), + logs_bloom: vec![1; 256].into(), + prev_randao: Hash256::repeat_byte(1), + block_number: 0, + gas_limit: 1, + gas_used: 2, + timestamp: 42, + extra_data: vec![].into(), + base_fee_per_gas: Uint256::from(1), + block_hash: ExecutionBlockHash::repeat_byte(1), + transactions: vec![].into(), + }, + )) .await; }, json!({ @@ -1154,22 +1487,24 @@ mod test { Tester::new(false) .assert_auth_failure(|client| async move { client - .new_payload_v1::(ExecutionPayload { - parent_hash: ExecutionBlockHash::repeat_byte(0), - fee_recipient: Address::repeat_byte(1), - state_root: Hash256::repeat_byte(1), - receipts_root: Hash256::repeat_byte(0), - logs_bloom: vec![1; 256].into(), - prev_randao: Hash256::repeat_byte(1), - block_number: 0, - gas_limit: 1, - gas_used: 2, - timestamp: 42, - extra_data: 
vec![].into(), - base_fee_per_gas: Uint256::from(1), - block_hash: ExecutionBlockHash::repeat_byte(1), - transactions: vec![].into(), - }) + .new_payload_v1::(ExecutionPayload::Merge( + ExecutionPayloadMerge { + parent_hash: ExecutionBlockHash::repeat_byte(0), + fee_recipient: Address::repeat_byte(1), + state_root: Hash256::repeat_byte(1), + receipts_root: Hash256::repeat_byte(0), + logs_bloom: vec![1; 256].into(), + prev_randao: Hash256::repeat_byte(1), + block_number: 0, + gas_limit: 1, + gas_used: 2, + timestamp: 42, + extra_data: vec![].into(), + base_fee_per_gas: Uint256::from(1), + block_hash: ExecutionBlockHash::repeat_byte(1), + transactions: vec![].into(), + }, + )) .await }) .await; @@ -1182,7 +1517,7 @@ mod test { |client| async move { let _ = client .forkchoice_updated_v1( - ForkChoiceState { + ForkchoiceState { head_block_hash: ExecutionBlockHash::repeat_byte(0), safe_block_hash: ExecutionBlockHash::repeat_byte(0), finalized_block_hash: ExecutionBlockHash::repeat_byte(1), @@ -1208,7 +1543,7 @@ mod test { .assert_auth_failure(|client| async move { client .forkchoice_updated_v1( - ForkChoiceState { + ForkchoiceState { head_block_hash: ExecutionBlockHash::repeat_byte(0), safe_block_hash: ExecutionBlockHash::repeat_byte(0), finalized_block_hash: ExecutionBlockHash::repeat_byte(1), @@ -1247,16 +1582,16 @@ mod test { |client| async move { let _ = client .forkchoice_updated_v1( - ForkChoiceState { + ForkchoiceState { head_block_hash: ExecutionBlockHash::from_str("0x3b8fb240d288781d4aac94d3fd16809ee413bc99294a085798a589dae51ddd4a").unwrap(), safe_block_hash: ExecutionBlockHash::from_str("0x3b8fb240d288781d4aac94d3fd16809ee413bc99294a085798a589dae51ddd4a").unwrap(), finalized_block_hash: ExecutionBlockHash::zero(), }, - Some(PayloadAttributes { + Some(PayloadAttributes::V1(PayloadAttributesV1 { timestamp: 5, prev_randao: Hash256::zero(), suggested_fee_recipient: Address::from_str("0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b").unwrap(), - }) + })) ) .await; }, @@ 
-1294,16 +1629,16 @@ mod test { |client| async move { let response = client .forkchoice_updated_v1( - ForkChoiceState { + ForkchoiceState { head_block_hash: ExecutionBlockHash::from_str("0x3b8fb240d288781d4aac94d3fd16809ee413bc99294a085798a589dae51ddd4a").unwrap(), safe_block_hash: ExecutionBlockHash::from_str("0x3b8fb240d288781d4aac94d3fd16809ee413bc99294a085798a589dae51ddd4a").unwrap(), finalized_block_hash: ExecutionBlockHash::zero(), }, - Some(PayloadAttributes { + Some(PayloadAttributes::V1(PayloadAttributesV1 { timestamp: 5, prev_randao: Hash256::zero(), suggested_fee_recipient: Address::from_str("0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b").unwrap(), - }) + })) ) .await .unwrap(); @@ -1357,12 +1692,13 @@ mod test { } })], |client| async move { - let payload = client + let payload: ExecutionPayload<_> = client .get_payload_v1::(str_to_payload_id("0xa247243752eb10b4")) .await - .unwrap(); + .unwrap() + .into(); - let expected = ExecutionPayload { + let expected = ExecutionPayload::Merge(ExecutionPayloadMerge { parent_hash: ExecutionBlockHash::from_str("0x3b8fb240d288781d4aac94d3fd16809ee413bc99294a085798a589dae51ddd4a").unwrap(), fee_recipient: Address::from_str("0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b").unwrap(), state_root: Hash256::from_str("0xca3149fa9e37db08d1cd49c9061db1002ef1cd58db2210f2115c8c989b2bdf45").unwrap(), @@ -1377,7 +1713,7 @@ mod test { base_fee_per_gas: Uint256::from(7), block_hash: ExecutionBlockHash::from_str("0x6359b8381a370e2f54072a5784ddd78b6ed024991558c511d4452eb4f6ac898c").unwrap(), transactions: vec![].into(), - }; + }); assert_eq!(payload, expected); }, @@ -1387,7 +1723,7 @@ mod test { // engine_newPayloadV1 REQUEST validation |client| async move { let _ = client - .new_payload_v1::(ExecutionPayload { + .new_payload_v1::(ExecutionPayload::Merge(ExecutionPayloadMerge{ parent_hash: ExecutionBlockHash::from_str("0x3b8fb240d288781d4aac94d3fd16809ee413bc99294a085798a589dae51ddd4a").unwrap(), fee_recipient: 
Address::from_str("0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b").unwrap(), state_root: Hash256::from_str("0xca3149fa9e37db08d1cd49c9061db1002ef1cd58db2210f2115c8c989b2bdf45").unwrap(), @@ -1402,7 +1738,7 @@ mod test { base_fee_per_gas: Uint256::from(7), block_hash: ExecutionBlockHash::from_str("0x3559e851470f6e7bbed1db474980683e8c315bfce99b2a6ef47c057c04de7858").unwrap(), transactions: vec![].into(), - }) + })) .await; }, json!({ @@ -1441,7 +1777,7 @@ mod test { })], |client| async move { let response = client - .new_payload_v1::(ExecutionPayload::default()) + .new_payload_v1::(ExecutionPayload::Merge(ExecutionPayloadMerge::default())) .await .unwrap(); @@ -1460,7 +1796,7 @@ mod test { |client| async move { let _ = client .forkchoice_updated_v1( - ForkChoiceState { + ForkchoiceState { head_block_hash: ExecutionBlockHash::from_str("0x3559e851470f6e7bbed1db474980683e8c315bfce99b2a6ef47c057c04de7858").unwrap(), safe_block_hash: ExecutionBlockHash::from_str("0x3559e851470f6e7bbed1db474980683e8c315bfce99b2a6ef47c057c04de7858").unwrap(), finalized_block_hash: ExecutionBlockHash::from_str("0x3b8fb240d288781d4aac94d3fd16809ee413bc99294a085798a589dae51ddd4a").unwrap(), @@ -1499,7 +1835,7 @@ mod test { |client| async move { let response = client .forkchoice_updated_v1( - ForkChoiceState { + ForkchoiceState { head_block_hash: ExecutionBlockHash::from_str("0x3559e851470f6e7bbed1db474980683e8c315bfce99b2a6ef47c057c04de7858").unwrap(), safe_block_hash: ExecutionBlockHash::from_str("0x3559e851470f6e7bbed1db474980683e8c315bfce99b2a6ef47c057c04de7858").unwrap(), finalized_block_hash: ExecutionBlockHash::from_str("0x3b8fb240d288781d4aac94d3fd16809ee413bc99294a085798a589dae51ddd4a").unwrap(), diff --git a/beacon_node/execution_layer/src/engine_api/json_structures.rs b/beacon_node/execution_layer/src/engine_api/json_structures.rs index 560569c92f2..6d33bbabe2a 100644 --- a/beacon_node/execution_layer/src/engine_api/json_structures.rs +++ 
b/beacon_node/execution_layer/src/engine_api/json_structures.rs @@ -1,7 +1,11 @@ use super::*; use serde::{Deserialize, Serialize}; use strum::EnumString; -use types::{EthSpec, ExecutionBlockHash, FixedVector, Transaction, Unsigned, VariableList}; +use superstruct::superstruct; +use types::{ + EthSpec, ExecutionBlockHash, FixedVector, Transactions, Unsigned, VariableList, Withdrawal, +}; +use types::{ExecutionPayload, ExecutionPayloadCapella, ExecutionPayloadMerge}; #[derive(Debug, PartialEq, Serialize, Deserialize)] #[serde(rename_all = "camelCase")] @@ -56,9 +60,18 @@ pub struct JsonPayloadIdResponse { pub payload_id: PayloadId, } -#[derive(Debug, PartialEq, Default, Serialize, Deserialize)] -#[serde(bound = "T: EthSpec", rename_all = "camelCase")] -pub struct JsonExecutionPayloadHeaderV1 { +#[superstruct( + variants(V1, V2), + variant_attributes( + derive(Debug, PartialEq, Default, Serialize, Deserialize,), + serde(bound = "T: EthSpec", rename_all = "camelCase"), + ), + cast_error(ty = "Error", expr = "Error::IncorrectStateVariant"), + partial_getter_error(ty = "Error", expr = "Error::IncorrectStateVariant") +)] +#[derive(Debug, PartialEq, Serialize, Deserialize)] +#[serde(bound = "T: EthSpec", rename_all = "camelCase", untagged)] +pub struct JsonExecutionPayload { pub parent_hash: ExecutionBlockHash, pub fee_recipient: Address, pub state_root: Hash256, @@ -79,209 +92,265 @@ pub struct JsonExecutionPayloadHeaderV1 { #[serde(with = "eth2_serde_utils::u256_hex_be")] pub base_fee_per_gas: Uint256, pub block_hash: ExecutionBlockHash, - pub transactions_root: Hash256, + #[serde(with = "ssz_types::serde_utils::list_of_hex_var_list")] + pub transactions: Transactions, + #[superstruct(only(V2))] + pub withdrawals: VariableList, } -impl From> for ExecutionPayloadHeader { - fn from(e: JsonExecutionPayloadHeaderV1) -> Self { - // Use this verbose deconstruction pattern to ensure no field is left unused. 
- let JsonExecutionPayloadHeaderV1 { - parent_hash, - fee_recipient, - state_root, - receipts_root, - logs_bloom, - prev_randao, - block_number, - gas_limit, - gas_used, - timestamp, - extra_data, - base_fee_per_gas, - block_hash, - transactions_root, - } = e; +impl From> for JsonExecutionPayloadV1 { + fn from(payload: ExecutionPayloadMerge) -> Self { + JsonExecutionPayloadV1 { + parent_hash: payload.parent_hash, + fee_recipient: payload.fee_recipient, + state_root: payload.state_root, + receipts_root: payload.receipts_root, + logs_bloom: payload.logs_bloom, + prev_randao: payload.prev_randao, + block_number: payload.block_number, + gas_limit: payload.gas_limit, + gas_used: payload.gas_used, + timestamp: payload.timestamp, + extra_data: payload.extra_data, + base_fee_per_gas: payload.base_fee_per_gas, + block_hash: payload.block_hash, + transactions: payload.transactions, + } + } +} +impl From> for JsonExecutionPayloadV2 { + fn from(payload: ExecutionPayloadCapella) -> Self { + JsonExecutionPayloadV2 { + parent_hash: payload.parent_hash, + fee_recipient: payload.fee_recipient, + state_root: payload.state_root, + receipts_root: payload.receipts_root, + logs_bloom: payload.logs_bloom, + prev_randao: payload.prev_randao, + block_number: payload.block_number, + gas_limit: payload.gas_limit, + gas_used: payload.gas_used, + timestamp: payload.timestamp, + extra_data: payload.extra_data, + base_fee_per_gas: payload.base_fee_per_gas, + block_hash: payload.block_hash, + transactions: payload.transactions, + withdrawals: payload + .withdrawals + .into_iter() + .map(Into::into) + .collect::>() + .into(), + } + } +} - Self { - parent_hash, - fee_recipient, - state_root, - receipts_root, - logs_bloom, - prev_randao, - block_number, - gas_limit, - gas_used, - timestamp, - extra_data, - base_fee_per_gas, - block_hash, - transactions_root, +impl From> for JsonExecutionPayload { + fn from(execution_payload: ExecutionPayload) -> Self { + match execution_payload { + 
ExecutionPayload::Merge(payload) => JsonExecutionPayload::V1(payload.into()), + ExecutionPayload::Capella(payload) => JsonExecutionPayload::V2(payload.into()), } } } -#[derive(Debug, PartialEq, Default, Serialize, Deserialize)] -#[serde(bound = "T: EthSpec", rename_all = "camelCase")] -pub struct JsonExecutionPayloadV1 { - pub parent_hash: ExecutionBlockHash, - pub fee_recipient: Address, - pub state_root: Hash256, - pub receipts_root: Hash256, - #[serde(with = "serde_logs_bloom")] - pub logs_bloom: FixedVector, - pub prev_randao: Hash256, - #[serde(with = "eth2_serde_utils::u64_hex_be")] - pub block_number: u64, +impl From> for ExecutionPayloadMerge { + fn from(payload: JsonExecutionPayloadV1) -> Self { + ExecutionPayloadMerge { + parent_hash: payload.parent_hash, + fee_recipient: payload.fee_recipient, + state_root: payload.state_root, + receipts_root: payload.receipts_root, + logs_bloom: payload.logs_bloom, + prev_randao: payload.prev_randao, + block_number: payload.block_number, + gas_limit: payload.gas_limit, + gas_used: payload.gas_used, + timestamp: payload.timestamp, + extra_data: payload.extra_data, + base_fee_per_gas: payload.base_fee_per_gas, + block_hash: payload.block_hash, + transactions: payload.transactions, + } + } +} +impl From> for ExecutionPayloadCapella { + fn from(payload: JsonExecutionPayloadV2) -> Self { + ExecutionPayloadCapella { + parent_hash: payload.parent_hash, + fee_recipient: payload.fee_recipient, + state_root: payload.state_root, + receipts_root: payload.receipts_root, + logs_bloom: payload.logs_bloom, + prev_randao: payload.prev_randao, + block_number: payload.block_number, + gas_limit: payload.gas_limit, + gas_used: payload.gas_used, + timestamp: payload.timestamp, + extra_data: payload.extra_data, + base_fee_per_gas: payload.base_fee_per_gas, + block_hash: payload.block_hash, + transactions: payload.transactions, + withdrawals: payload + .withdrawals + .into_iter() + .map(Into::into) + .collect::>() + .into(), + } + } +} + +impl 
From> for ExecutionPayload { + fn from(json_execution_payload: JsonExecutionPayload) -> Self { + match json_execution_payload { + JsonExecutionPayload::V1(payload) => ExecutionPayload::Merge(payload.into()), + JsonExecutionPayload::V2(payload) => ExecutionPayload::Capella(payload.into()), + } + } +} + +#[superstruct( + variants(V1, V2), + variant_attributes( + derive(Debug, PartialEq, Serialize, Deserialize), + serde(bound = "T: EthSpec", rename_all = "camelCase") + ), + cast_error(ty = "Error", expr = "Error::IncorrectStateVariant"), + partial_getter_error(ty = "Error", expr = "Error::IncorrectStateVariant") +)] +#[derive(Debug, PartialEq, Serialize, Deserialize)] +#[serde(untagged)] +pub struct JsonGetPayloadResponse { + #[superstruct(only(V1), partial_getter(rename = "execution_payload_v1"))] + pub execution_payload: JsonExecutionPayloadV1, + #[superstruct(only(V2), partial_getter(rename = "execution_payload_v2"))] + pub execution_payload: JsonExecutionPayloadV2, + #[serde(with = "eth2_serde_utils::u256_hex_be")] + pub block_value: Uint256, +} + +impl From> for GetPayloadResponse { + fn from(json_get_payload_response: JsonGetPayloadResponse) -> Self { + match json_get_payload_response { + JsonGetPayloadResponse::V1(response) => { + GetPayloadResponse::Merge(GetPayloadResponseMerge { + execution_payload: response.execution_payload.into(), + block_value: response.block_value, + }) + } + JsonGetPayloadResponse::V2(response) => { + GetPayloadResponse::Capella(GetPayloadResponseCapella { + execution_payload: response.execution_payload.into(), + block_value: response.block_value, + }) + } + } + } +} + +#[derive(Debug, PartialEq, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct JsonWithdrawal { #[serde(with = "eth2_serde_utils::u64_hex_be")] - pub gas_limit: u64, + pub index: u64, #[serde(with = "eth2_serde_utils::u64_hex_be")] - pub gas_used: u64, + pub validator_index: u64, + pub address: Address, #[serde(with = 
"eth2_serde_utils::u64_hex_be")] - pub timestamp: u64, - #[serde(with = "ssz_types::serde_utils::hex_var_list")] - pub extra_data: VariableList, - #[serde(with = "eth2_serde_utils::u256_hex_be")] - pub base_fee_per_gas: Uint256, - pub block_hash: ExecutionBlockHash, - #[serde(with = "ssz_types::serde_utils::list_of_hex_var_list")] - pub transactions: - VariableList, T::MaxTransactionsPerPayload>, + pub amount: u64, } -impl From> for JsonExecutionPayloadV1 { - fn from(e: ExecutionPayload) -> Self { - // Use this verbose deconstruction pattern to ensure no field is left unused. - let ExecutionPayload { - parent_hash, - fee_recipient, - state_root, - receipts_root, - logs_bloom, - prev_randao, - block_number, - gas_limit, - gas_used, - timestamp, - extra_data, - base_fee_per_gas, - block_hash, - transactions, - } = e; - +impl From for JsonWithdrawal { + fn from(withdrawal: Withdrawal) -> Self { Self { - parent_hash, - fee_recipient, - state_root, - receipts_root, - logs_bloom, - prev_randao, - block_number, - gas_limit, - gas_used, - timestamp, - extra_data, - base_fee_per_gas, - block_hash, - transactions, + index: withdrawal.index, + validator_index: withdrawal.validator_index, + address: withdrawal.address, + amount: withdrawal.amount, } } } -impl From> for ExecutionPayload { - fn from(e: JsonExecutionPayloadV1) -> Self { - // Use this verbose deconstruction pattern to ensure no field is left unused. 
- let JsonExecutionPayloadV1 { - parent_hash, - fee_recipient, - state_root, - receipts_root, - logs_bloom, - prev_randao, - block_number, - gas_limit, - gas_used, - timestamp, - extra_data, - base_fee_per_gas, - block_hash, - transactions, - } = e; - +impl From for Withdrawal { + fn from(jw: JsonWithdrawal) -> Self { Self { - parent_hash, - fee_recipient, - state_root, - receipts_root, - logs_bloom, - prev_randao, - block_number, - gas_limit, - gas_used, - timestamp, - extra_data, - base_fee_per_gas, - block_hash, - transactions, + index: jw.index, + validator_index: jw.validator_index, + address: jw.address, + amount: jw.amount, } } } -#[derive(Debug, PartialEq, Clone, Serialize, Deserialize)] -#[serde(rename_all = "camelCase")] -pub struct JsonPayloadAttributesV1 { +#[superstruct( + variants(V1, V2), + variant_attributes( + derive(Debug, Clone, PartialEq, Serialize, Deserialize), + serde(rename_all = "camelCase") + ), + cast_error(ty = "Error", expr = "Error::IncorrectStateVariant"), + partial_getter_error(ty = "Error", expr = "Error::IncorrectStateVariant") +)] +#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)] +#[serde(untagged)] +pub struct JsonPayloadAttributes { #[serde(with = "eth2_serde_utils::u64_hex_be")] pub timestamp: u64, pub prev_randao: Hash256, pub suggested_fee_recipient: Address, + #[superstruct(only(V2))] + pub withdrawals: Vec, } -impl From for JsonPayloadAttributesV1 { - fn from(p: PayloadAttributes) -> Self { - // Use this verbose deconstruction pattern to ensure no field is left unused. 
- let PayloadAttributes { - timestamp, - prev_randao, - suggested_fee_recipient, - } = p; - - Self { - timestamp, - prev_randao, - suggested_fee_recipient, +impl From for JsonPayloadAttributes { + fn from(payload_atributes: PayloadAttributes) -> Self { + match payload_atributes { + PayloadAttributes::V1(pa) => Self::V1(JsonPayloadAttributesV1 { + timestamp: pa.timestamp, + prev_randao: pa.prev_randao, + suggested_fee_recipient: pa.suggested_fee_recipient, + }), + PayloadAttributes::V2(pa) => Self::V2(JsonPayloadAttributesV2 { + timestamp: pa.timestamp, + prev_randao: pa.prev_randao, + suggested_fee_recipient: pa.suggested_fee_recipient, + withdrawals: pa.withdrawals.into_iter().map(Into::into).collect(), + }), } } } -impl From for PayloadAttributes { - fn from(j: JsonPayloadAttributesV1) -> Self { - // Use this verbose deconstruction pattern to ensure no field is left unused. - let JsonPayloadAttributesV1 { - timestamp, - prev_randao, - suggested_fee_recipient, - } = j; - - Self { - timestamp, - prev_randao, - suggested_fee_recipient, +impl From for PayloadAttributes { + fn from(json_payload_attributes: JsonPayloadAttributes) -> Self { + match json_payload_attributes { + JsonPayloadAttributes::V1(jpa) => Self::V1(PayloadAttributesV1 { + timestamp: jpa.timestamp, + prev_randao: jpa.prev_randao, + suggested_fee_recipient: jpa.suggested_fee_recipient, + }), + JsonPayloadAttributes::V2(jpa) => Self::V2(PayloadAttributesV2 { + timestamp: jpa.timestamp, + prev_randao: jpa.prev_randao, + suggested_fee_recipient: jpa.suggested_fee_recipient, + withdrawals: jpa.withdrawals.into_iter().map(Into::into).collect(), + }), } } } #[derive(Debug, PartialEq, Clone, Serialize, Deserialize)] #[serde(rename_all = "camelCase")] -pub struct JsonForkChoiceStateV1 { +pub struct JsonForkchoiceStateV1 { pub head_block_hash: ExecutionBlockHash, pub safe_block_hash: ExecutionBlockHash, pub finalized_block_hash: ExecutionBlockHash, } -impl From for JsonForkChoiceStateV1 { - fn from(f: 
ForkChoiceState) -> Self { +impl From for JsonForkchoiceStateV1 { + fn from(f: ForkchoiceState) -> Self { // Use this verbose deconstruction pattern to ensure no field is left unused. - let ForkChoiceState { + let ForkchoiceState { head_block_hash, safe_block_hash, finalized_block_hash, @@ -295,10 +364,10 @@ impl From for JsonForkChoiceStateV1 { } } -impl From for ForkChoiceState { - fn from(j: JsonForkChoiceStateV1) -> Self { +impl From for ForkchoiceState { + fn from(j: JsonForkchoiceStateV1) -> Self { // Use this verbose deconstruction pattern to ensure no field is left unused. - let JsonForkChoiceStateV1 { + let JsonForkchoiceStateV1 { head_block_hash, safe_block_hash, finalized_block_hash, @@ -424,6 +493,30 @@ impl From for JsonForkchoiceUpdatedV1Response { } } +#[derive(Clone, Debug, Serialize, Deserialize)] +#[serde(bound = "E: EthSpec")] +pub struct JsonExecutionPayloadBodyV1 { + #[serde(with = "ssz_types::serde_utils::list_of_hex_var_list")] + pub transactions: Transactions, + pub withdrawals: Option>, +} + +impl From> for ExecutionPayloadBodyV1 { + fn from(value: JsonExecutionPayloadBodyV1) -> Self { + Self { + transactions: value.transactions, + withdrawals: value.withdrawals.map(|json_withdrawals| { + Withdrawals::::from( + json_withdrawals + .into_iter() + .map(Into::into) + .collect::>(), + ) + }), + } + } +} + #[derive(Clone, Copy, Debug, PartialEq, Serialize, Deserialize)] #[serde(rename_all = "camelCase")] pub struct TransitionConfigurationV1 { diff --git a/beacon_node/execution_layer/src/engines.rs b/beacon_node/execution_layer/src/engines.rs index 339006c1ba6..ce413cb1139 100644 --- a/beacon_node/execution_layer/src/engines.rs +++ b/beacon_node/execution_layer/src/engines.rs @@ -1,22 +1,25 @@ //! Provides generic behaviour for multiple execution engines, specifically fallback behaviour. 
use crate::engine_api::{ - Error as EngineApiError, ForkchoiceUpdatedResponse, PayloadAttributes, PayloadId, + EngineCapabilities, Error as EngineApiError, ForkchoiceUpdatedResponse, PayloadAttributes, + PayloadId, }; use crate::HttpJsonRpc; use lru::LruCache; -use slog::{debug, error, info, Logger}; +use slog::{debug, error, info, warn, Logger}; use std::future::Future; use std::sync::Arc; +use std::time::Duration; use task_executor::TaskExecutor; use tokio::sync::{watch, Mutex, RwLock}; use tokio_stream::wrappers::WatchStream; -use types::{Address, ExecutionBlockHash, Hash256}; +use types::ExecutionBlockHash; /// The number of payload IDs that will be stored for each `Engine`. /// -/// Since the size of each value is small (~100 bytes) a large number is used for safety. +/// Since the size of each value is small (~800 bytes) a large number is used for safety. const PAYLOAD_ID_LRU_CACHE_SIZE: usize = 512; +const CACHED_ENGINE_CAPABILITIES_AGE_LIMIT: Duration = Duration::from_secs(900); // 15 minutes /// Stores the remembered state of a engine. #[derive(Copy, Clone, PartialEq, Debug, Eq, Default)] @@ -28,6 +31,14 @@ enum EngineStateInternal { AuthFailed, } +#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)] +enum CapabilitiesCacheAction { + #[default] + None, + Update, + Clear, +} + /// A subset of the engine state to inform other services if the engine is online or offline. 
#[derive(Debug, Clone, PartialEq, Eq, Copy)] pub enum EngineState { @@ -88,7 +99,7 @@ impl State { } #[derive(Copy, Clone, PartialEq, Debug)] -pub struct ForkChoiceState { +pub struct ForkchoiceState { pub head_block_hash: ExecutionBlockHash, pub safe_block_hash: ExecutionBlockHash, pub finalized_block_hash: ExecutionBlockHash, @@ -97,9 +108,7 @@ pub struct ForkChoiceState { #[derive(Hash, PartialEq, std::cmp::Eq)] struct PayloadIdCacheKey { pub head_block_hash: ExecutionBlockHash, - pub timestamp: u64, - pub prev_randao: Hash256, - pub suggested_fee_recipient: Address, + pub payload_attributes: PayloadAttributes, } #[derive(Debug)] @@ -115,7 +124,7 @@ pub struct Engine { pub api: HttpJsonRpc, payload_id_cache: Mutex>, state: RwLock, - latest_forkchoice_state: RwLock>, + latest_forkchoice_state: RwLock>, executor: TaskExecutor, log: Logger, } @@ -142,37 +151,30 @@ impl Engine { pub async fn get_payload_id( &self, - head_block_hash: ExecutionBlockHash, - timestamp: u64, - prev_randao: Hash256, - suggested_fee_recipient: Address, + head_block_hash: &ExecutionBlockHash, + payload_attributes: &PayloadAttributes, ) -> Option { self.payload_id_cache .lock() .await - .get(&PayloadIdCacheKey { - head_block_hash, - timestamp, - prev_randao, - suggested_fee_recipient, - }) + .get(&PayloadIdCacheKey::new(head_block_hash, payload_attributes)) .cloned() } pub async fn notify_forkchoice_updated( &self, - forkchoice_state: ForkChoiceState, + forkchoice_state: ForkchoiceState, payload_attributes: Option, log: &Logger, ) -> Result { let response = self .api - .forkchoice_updated_v1(forkchoice_state, payload_attributes) + .forkchoice_updated(forkchoice_state, payload_attributes.clone()) .await?; if let Some(payload_id) = response.payload_id { - if let Some(key) = - payload_attributes.map(|pa| PayloadIdCacheKey::new(&forkchoice_state, &pa)) + if let Some(key) = payload_attributes + .map(|pa| PayloadIdCacheKey::new(&forkchoice_state.head_block_hash, &pa)) { 
                self.payload_id_cache.lock().await.put(key, payload_id);
            } else {
@@ -187,11 +189,11 @@ impl Engine {
         Ok(response)
     }

-    async fn get_latest_forkchoice_state(&self) -> Option<ForkChoiceState> {
+    async fn get_latest_forkchoice_state(&self) -> Option<ForkchoiceState> {
         *self.latest_forkchoice_state.read().await
     }

-    pub async fn set_latest_forkchoice_state(&self, state: ForkChoiceState) {
+    pub async fn set_latest_forkchoice_state(&self, state: ForkchoiceState) {
         *self.latest_forkchoice_state.write().await = Some(state);
     }
@@ -216,7 +218,7 @@ impl Engine {
         // For simplicity, payload attributes are never included in this call. It may be
         // reasonable to include them in the future.
-        if let Err(e) = self.api.forkchoice_updated_v1(forkchoice_state, None).await {
+        if let Err(e) = self.api.forkchoice_updated(forkchoice_state, None).await {
             debug!(
                 self.log,
                 "Failed to issue latest head to engine";
@@ -239,7 +241,7 @@ impl Engine {
     /// Run the `EngineApi::upcheck` function if the node's last known state is not synced. This
     /// might be used to recover the node if offline.
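The `upcheck` flow in this hunk pairs every health-check outcome with an action on the capabilities cache. A standalone sketch of that mapping, with a stand-in error type in place of the real `EngineApiError`:

```rust
// Sketch of the upcheck result mapping in this diff: each outcome yields
// both an engine state and a capabilities-cache action. Names mirror the
// diff; `UpcheckError` is a simplified stand-in.

#[derive(Copy, Clone, Debug, PartialEq, Eq)]
enum EngineStateInternal {
    Synced,
    Syncing,
    AuthFailed,
    Offline,
}

#[derive(Copy, Clone, Debug, PartialEq, Eq)]
enum CapabilitiesCacheAction {
    Update,
    Clear,
}

#[derive(Debug)]
enum UpcheckError {
    IsSyncing,
    Auth,
    Other,
}

fn map_upcheck_result(
    result: Result<(), UpcheckError>,
) -> (EngineStateInternal, CapabilitiesCacheAction) {
    match result {
        // A reachable engine (synced or syncing) keeps its cache, refreshing
        // it only if the cached entry has aged out.
        Ok(()) => (EngineStateInternal::Synced, CapabilitiesCacheAction::Update),
        Err(UpcheckError::IsSyncing) => {
            (EngineStateInternal::Syncing, CapabilitiesCacheAction::Update)
        }
        // Auth failures and offline engines clear the cache, since the engine
        // is likely being replaced by a version with different capabilities.
        Err(UpcheckError::Auth) => {
            (EngineStateInternal::AuthFailed, CapabilitiesCacheAction::Clear)
        }
        Err(UpcheckError::Other) => {
            (EngineStateInternal::Offline, CapabilitiesCacheAction::Clear)
        }
    }
}

fn main() {
    assert_eq!(
        map_upcheck_result(Ok(())),
        (EngineStateInternal::Synced, CapabilitiesCacheAction::Update)
    );
    assert_eq!(
        map_upcheck_result(Err(UpcheckError::Other)),
        (EngineStateInternal::Offline, CapabilitiesCacheAction::Clear)
    );
}
```

As in the diff, the cache action is computed while holding the state lock but applied only after the guard is dropped, avoiding holding two locks at once.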
pub async fn upcheck(&self) { - let state: EngineStateInternal = match self.api.upcheck().await { + let (state, cache_action) = match self.api.upcheck().await { Ok(()) => { let mut state = self.state.write().await; if **state != EngineStateInternal::Synced { @@ -257,12 +259,12 @@ impl Engine { ); } state.update(EngineStateInternal::Synced); - **state + (**state, CapabilitiesCacheAction::Update) } Err(EngineApiError::IsSyncing) => { let mut state = self.state.write().await; state.update(EngineStateInternal::Syncing); - **state + (**state, CapabilitiesCacheAction::Update) } Err(EngineApiError::Auth(err)) => { error!( @@ -273,7 +275,7 @@ impl Engine { let mut state = self.state.write().await; state.update(EngineStateInternal::AuthFailed); - **state + (**state, CapabilitiesCacheAction::Clear) } Err(e) => { error!( @@ -284,10 +286,30 @@ impl Engine { let mut state = self.state.write().await; state.update(EngineStateInternal::Offline); - **state + // need to clear the engine capabilities cache if we detect the + // execution engine is offline as it is likely the engine is being + // updated to a newer version with new capabilities + (**state, CapabilitiesCacheAction::Clear) } }; + // do this after dropping state lock guard to avoid holding two locks at once + match cache_action { + CapabilitiesCacheAction::None => {} + CapabilitiesCacheAction::Update => { + if let Err(e) = self + .get_engine_capabilities(Some(CACHED_ENGINE_CAPABILITIES_AGE_LIMIT)) + .await + { + warn!(self.log, + "Error during exchange capabilities"; + "error" => ?e, + ) + } + } + CapabilitiesCacheAction::Clear => self.api.clear_exchange_capabilties_cache().await, + } + debug!( self.log, "Execution engine upcheck complete"; @@ -295,6 +317,22 @@ impl Engine { ); } + /// Returns the execution engine capabilities resulting from a call to + /// engine_exchangeCapabilities. 
If the capabilities cache is not populated,
+    /// or if it is populated with a cached result of age >= `age_limit`, this
+    /// method will fetch the result from the execution engine and populate the
+    /// cache before returning it. Otherwise it will return a cached result from
+    /// a previous call.
+    ///
+    /// Set `age_limit` to `None` to always return the cached result
+    /// Set `age_limit` to `Some(Duration::ZERO)` to force fetching from EE
+    pub async fn get_engine_capabilities(
+        &self,
+        age_limit: Option<Duration>,
+    ) -> Result<EngineCapabilities, EngineApiError> {
+        self.api.get_engine_capabilities(age_limit).await
+    }
+
     /// Run `func` on the node regardless of the node's current state.
     ///
     /// ## Note
@@ -303,7 +341,7 @@ impl Engine {
     /// deadlock.
     pub async fn request<'a, F, G, H>(self: &'a Arc<Self>, func: F) -> Result<H, EngineError>
     where
-        F: Fn(&'a Engine) -> G,
+        F: FnOnce(&'a Engine) -> G,
         G: Future<Output = Result<H, EngineApiError>>,
     {
         match func(self).await {
@@ -325,7 +363,7 @@ impl Engine {
                 Ok(result)
             }
             Err(error) => {
-                error!(
+                warn!(
                     self.log,
                     "Execution engine call failed";
                     "error" => ?error,
@@ -348,12 +386,10 @@ impl Engine {
 }

 impl PayloadIdCacheKey {
-    fn new(state: &ForkChoiceState, attributes: &PayloadAttributes) -> Self {
+    fn new(head_block_hash: &ExecutionBlockHash, attributes: &PayloadAttributes) -> Self {
         Self {
-            head_block_hash: state.head_block_hash,
-            timestamp: attributes.timestamp,
-            prev_randao: attributes.prev_randao,
-            suggested_fee_recipient: attributes.suggested_fee_recipient,
+            head_block_hash: *head_block_hash,
+            payload_attributes: attributes.clone(),
         }
     }
 }

diff --git a/beacon_node/execution_layer/src/lib.rs b/beacon_node/execution_layer/src/lib.rs
index a4d15abb364..09be379d240 100644
--- a/beacon_node/execution_layer/src/lib.rs
+++ b/beacon_node/execution_layer/src/lib.rs
@@ -7,12 +7,13 @@
 use crate::payload_cache::PayloadCache;
 use auth::{strip_prefix, Auth, JwtKey};
 use builder_client::BuilderHttpClient;
+pub use engine_api::EngineCapabilities;
 use engine_api::Error as ApiError;
 pub use engine_api::*;
 pub use
engine_api::{http, http::deposit_methods, http::HttpJsonRpc}; use engines::{Engine, EngineError}; -pub use engines::{EngineState, ForkChoiceState}; -use eth2::types::{builder_bid::SignedBuilderBid, ForkVersionedResponse}; +pub use engines::{EngineState, ForkchoiceState}; +use eth2::types::builder_bid::SignedBuilderBid; use fork_choice::ForkchoiceUpdateParameters; use lru::LruCache; use payload_status::process_payload_status; @@ -25,6 +26,7 @@ use std::collections::HashMap; use std::fmt; use std::future::Future; use std::io::Write; +use std::marker::PhantomData; use std::path::PathBuf; use std::sync::Arc; use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH}; @@ -35,14 +37,17 @@ use tokio::{ time::sleep, }; use tokio_stream::wrappers::WatchStream; +use tree_hash::TreeHash; +use types::{AbstractExecPayload, BeaconStateError, ExecPayload, Withdrawals}; use types::{ - BlindedPayload, BlockType, ChainSpec, Epoch, ExecPayload, ExecutionBlockHash, ForkName, + BlindedPayload, BlockType, ChainSpec, Epoch, ExecutionBlockHash, ExecutionPayload, + ExecutionPayloadCapella, ExecutionPayloadMerge, ForkName, ForkVersionedResponse, ProposerPreparationData, PublicKeyBytes, Signature, SignedBeaconBlock, Slot, Uint256, }; mod block_hash; mod engine_api; -mod engines; +pub mod engines; mod keccak; mod metrics; pub mod payload_cache; @@ -72,7 +77,7 @@ const DEFAULT_SUGGESTED_FEE_RECIPIENT: [u8; 20] = const CONFIG_POLL_INTERVAL: Duration = Duration::from_secs(60); /// A payload alongside some information about where it came from. -enum ProvenancedPayload
<P> {
+pub enum ProvenancedPayload<P>
<P> {
     /// A good ol' fashioned farm-to-table payload from your local EE.
     Local(P),
     /// A payload from a builder (e.g. mev-boost).
@@ -98,6 +103,15 @@ pub enum Error {
         transactions_root: Hash256,
     },
     InvalidJWTSecret(String),
+    InvalidForkForPayload,
+    InvalidPayloadBody(String),
+    BeaconStateError(BeaconStateError),
+}
+
+impl From<BeaconStateError> for Error {
+    fn from(e: BeaconStateError) -> Self {
+        Error::BeaconStateError(e)
+    }
 }

 impl From<ApiError> for Error {
@@ -106,6 +120,56 @@ impl From<ApiError> for Error {
     }
 }

+pub enum BlockProposalContents<T: EthSpec, Payload: AbstractExecPayload<T>> {
+    Payload {
+        payload: Payload,
+        block_value: Uint256,
+        // TODO: remove for 4844, since it appears in PayloadAndBlobs
+        _phantom: PhantomData<T>,
+    },
+}
+
+impl<T: EthSpec, Payload: AbstractExecPayload<T>> BlockProposalContents<T, Payload> {
+    pub fn payload(&self) -> &Payload {
+        match self {
+            Self::Payload {
+                payload,
+                block_value: _,
+                _phantom: _,
+            } => payload,
+        }
+    }
+    pub fn to_payload(self) -> Payload {
+        match self {
+            Self::Payload {
+                payload,
+                block_value: _,
+                _phantom: _,
+            } => payload,
+        }
+    }
+    pub fn block_value(&self) -> &Uint256 {
+        match self {
+            Self::Payload {
+                payload: _,
+                block_value,
+                _phantom: _,
+            } => block_value,
+        }
+    }
+    pub fn default_at_fork(fork_name: ForkName) -> Result<Self, BeaconStateError> {
+        Ok(match fork_name {
+            ForkName::Base | ForkName::Altair | ForkName::Merge | ForkName::Capella => {
+                BlockProposalContents::Payload {
+                    payload: Payload::default_at_fork(fork_name)?,
+                    block_value: Uint256::zero(),
+                    _phantom: PhantomData::default(),
+                }
+            }
+        })
+    }
+}
+
 #[derive(Clone, PartialEq)]
 pub struct ProposerPreparationDataEntry {
     update_epoch: Epoch,
@@ -157,6 +221,7 @@ struct Inner<T: EthSpec> {
     payload_cache: PayloadCache<T>,
     builder_profit_threshold: Uint256,
     log: Logger,
+    always_prefer_builder_payload: bool,
 }

 #[derive(Debug, Default, Clone, Serialize, Deserialize)]
@@ -165,6 +230,8 @@ pub struct Config {
     pub execution_endpoints: Vec<SensitiveUrl>,
     /// Endpoint urls for services providing the builder api.
     pub builder_url: Option<SensitiveUrl>,
+    /// User agent to send with requests to the builder API.
+    pub builder_user_agent: Option<String>,
     /// JWT secrets for the above endpoints running the engine api.
     pub secret_files: Vec<PathBuf>,
     /// The default fee recipient to use on the beacon node if none is provided from
@@ -179,6 +246,7 @@ pub struct Config {
     /// The minimum value of an external payload for it to be considered in a proposal.
     pub builder_profit_threshold: u128,
     pub execution_timeout_multiplier: Option<u32>,
+    pub always_prefer_builder_payload: bool,
 }

 /// Provides access to one execution engine and provides a neat interface for consumption by the
@@ -194,6 +262,7 @@ impl<T: EthSpec> ExecutionLayer<T> {
         let Config {
             execution_endpoints: urls,
             builder_url,
+            builder_user_agent,
             secret_files,
             suggested_fee_recipient,
             jwt_id,
@@ -201,6 +270,7 @@ impl<T: EthSpec> ExecutionLayer<T> {
             default_datadir,
             builder_profit_threshold,
             execution_timeout_multiplier,
+            always_prefer_builder_payload,
         } = config;

         if urls.len() > 1 {
@@ -228,6 +298,7 @@ impl<T: EthSpec> ExecutionLayer<T> {
                     .map_err(Error::InvalidJWTSecret)
             } else {
                 // Create a new file and write a randomly generated secret to it if file does not exist
+                warn!(log, "No JWT found on disk.
Generating"; "path" => %secret_file.display()); std::fs::File::options() .write(true) .create_new(true) @@ -252,12 +323,17 @@ impl ExecutionLayer { let builder = builder_url .map(|url| { - let builder_client = BuilderHttpClient::new(url.clone()).map_err(Error::Builder); - info!(log, + let builder_client = BuilderHttpClient::new(url.clone(), builder_user_agent) + .map_err(Error::Builder)?; + + info!( + log, "Connected to external block builder"; "builder_url" => ?url, - "builder_profit_threshold" => builder_profit_threshold); - builder_client + "builder_profit_threshold" => builder_profit_threshold, + "local_user_agent" => builder_client.get_user_agent(), + ); + Ok::<_, Error>(builder_client) }) .transpose()?; @@ -273,6 +349,7 @@ impl ExecutionLayer { payload_cache: PayloadCache::default(), builder_profit_threshold: Uint256::from(builder_profit_threshold), log, + always_prefer_builder_payload, }; Ok(Self { @@ -290,12 +367,12 @@ impl ExecutionLayer { &self.inner.builder } - /// Cache a full payload, keyed on the `tree_hash_root` of its `transactions` field. - fn cache_payload(&self, payload: &ExecutionPayload) -> Option> { - self.inner.payload_cache.put(payload.clone()) + /// Cache a full payload, keyed on the `tree_hash_root` of the payload + fn cache_payload(&self, payload: ExecutionPayloadRef) -> Option> { + self.inner.payload_cache.put(payload.clone_from_ref()) } - /// Attempt to retrieve a full payload from the payload cache by the `transactions_root`. + /// Attempt to retrieve a full payload from the payload cache by the payload root pub fn get_payload_by_root(&self, root: &Hash256) -> Option> { self.inner.payload_cache.pop(root) } @@ -566,19 +643,15 @@ impl ExecutionLayer { /// /// The result will be returned from the first node that returns successfully. No more nodes /// will be contacted. 
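The `cache_payload` change above keys the full-payload cache on the payload's own root rather than its `transactions` root. A minimal sketch of the put/pop semantics, with a `HashMap` and a `u64` standing in for the real LRU cache and `tree_hash_root`:

```rust
use std::collections::HashMap;

// Simplified stand-ins: the real cache keys payloads by tree_hash_root and
// bounds its size with an LRU; here a HashMap and a u64 "root" suffice.
#[derive(Clone, Debug, PartialEq)]
struct Payload {
    root: u64,
    fee: u64,
}

#[derive(Default)]
struct PayloadCache {
    inner: HashMap<u64, Payload>,
}

impl PayloadCache {
    // Returns the previously cached payload for this root, if any — a `Some`
    // here can indicate a redundant proposal attempt.
    fn put(&mut self, payload: Payload) -> Option<Payload> {
        self.inner.insert(payload.root, payload)
    }

    // `pop` removes the entry: a payload is retrieved at most once.
    fn pop(&mut self, root: &u64) -> Option<Payload> {
        self.inner.remove(root)
    }
}

fn main() {
    let mut cache = PayloadCache::default();
    let p = Payload { root: 7, fee: 100 };
    assert!(cache.put(p.clone()).is_none());
    // A duplicate insert signals a redundant proposal attempt.
    assert!(cache.put(p.clone()).is_some());
    assert_eq!(cache.pop(&7), Some(p));
    assert!(cache.pop(&7).is_none());
}
```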
- #[allow(clippy::too_many_arguments)] - pub async fn get_payload>( + pub async fn get_payload>( &self, parent_hash: ExecutionBlockHash, - timestamp: u64, - prev_randao: Hash256, - proposer_index: u64, + payload_attributes: &PayloadAttributes, forkchoice_update_params: ForkchoiceUpdateParameters, builder_params: BuilderParams, + current_fork: ForkName, spec: &ChainSpec, - ) -> Result { - let suggested_fee_recipient = self.get_suggested_fee_recipient(proposer_index).await; - + ) -> Result, Error> { let payload_result = match Payload::block_type() { BlockType::Blinded => { let _timer = metrics::start_timer_vec( @@ -587,11 +660,10 @@ impl ExecutionLayer { ); self.get_blinded_payload( parent_hash, - timestamp, - prev_randao, - suggested_fee_recipient, + payload_attributes, forkchoice_update_params, builder_params, + current_fork, spec, ) .await @@ -603,10 +675,9 @@ impl ExecutionLayer { ); self.get_full_payload( parent_hash, - timestamp, - prev_randao, - suggested_fee_recipient, + payload_attributes, forkchoice_update_params, + current_fork, ) .await .map(ProvenancedPayload::Local) @@ -615,7 +686,7 @@ impl ExecutionLayer { // Track some metrics and return the result. 
match payload_result { - Ok(ProvenancedPayload::Local(payload)) => { + Ok(ProvenancedPayload::Local(block_proposal_contents)) => { metrics::inc_counter_vec( &metrics::EXECUTION_LAYER_GET_PAYLOAD_OUTCOME, &[metrics::SUCCESS], @@ -624,9 +695,9 @@ impl ExecutionLayer { &metrics::EXECUTION_LAYER_GET_PAYLOAD_SOURCE, &[metrics::LOCAL], ); - Ok(payload) + Ok(block_proposal_contents) } - Ok(ProvenancedPayload::Builder(payload)) => { + Ok(ProvenancedPayload::Builder(block_proposal_contents)) => { metrics::inc_counter_vec( &metrics::EXECUTION_LAYER_GET_PAYLOAD_OUTCOME, &[metrics::SUCCESS], @@ -635,7 +706,7 @@ impl ExecutionLayer { &metrics::EXECUTION_LAYER_GET_PAYLOAD_SOURCE, &[metrics::BUILDER], ); - Ok(payload) + Ok(block_proposal_contents) } Err(e) => { metrics::inc_counter_vec( @@ -647,17 +718,15 @@ impl ExecutionLayer { } } - #[allow(clippy::too_many_arguments)] - async fn get_blinded_payload>( + async fn get_blinded_payload>( &self, parent_hash: ExecutionBlockHash, - timestamp: u64, - prev_randao: Hash256, - suggested_fee_recipient: Address, + payload_attributes: &PayloadAttributes, forkchoice_update_params: ForkchoiceUpdateParameters, builder_params: BuilderParams, + current_fork: ForkName, spec: &ChainSpec, - ) -> Result, Error> { + ) -> Result>, Error> { if let Some(builder) = self.builder() { let slot = builder_params.slot; let pubkey = builder_params.pubkey; @@ -682,10 +751,9 @@ impl ExecutionLayer { timed_future(metrics::GET_BLINDED_PAYLOAD_LOCAL, async { self.get_full_payload_caching::( parent_hash, - timestamp, - prev_randao, - suggested_fee_recipient, + payload_attributes, forkchoice_update_params, + current_fork, ) .await }) @@ -701,7 +769,7 @@ impl ExecutionLayer { }, "relay_response_ms" => relay_duration.as_millis(), "local_fee_recipient" => match &local_result { - Ok(header) => format!("{:?}", header.fee_recipient()), + Ok(proposal_contents) => format!("{:?}", proposal_contents.payload().fee_recipient()), Err(_) => "request failed".to_string() }, 
"local_response_ms" => local_duration.as_millis(), @@ -715,7 +783,7 @@ impl ExecutionLayer { "Builder error when requesting payload"; "info" => "falling back to local execution client", "relay_error" => ?e, - "local_block_hash" => ?local.block_hash(), + "local_block_hash" => ?local.payload().block_hash(), "parent_hash" => ?parent_hash, ); Ok(ProvenancedPayload::Local(local)) @@ -725,7 +793,7 @@ impl ExecutionLayer { self.log(), "Builder did not return a payload"; "info" => "falling back to local execution client", - "local_block_hash" => ?local.block_hash(), + "local_block_hash" => ?local.payload().block_hash(), "parent_hash" => ?parent_hash, ); Ok(ProvenancedPayload::Local(local)) @@ -737,22 +805,40 @@ impl ExecutionLayer { self.log(), "Received local and builder payloads"; "relay_block_hash" => ?header.block_hash(), - "local_block_hash" => ?local.block_hash(), + "local_block_hash" => ?local.payload().block_hash(), "parent_hash" => ?parent_hash, ); + let relay_value = relay.data.message.value; + let local_value = *local.block_value(); + if !self.inner.always_prefer_builder_payload + && local_value >= relay_value + { + info!( + self.log(), + "Local block is more profitable than relay block"; + "local_block_value" => %local_value, + "relay_value" => %relay_value + ); + return Ok(ProvenancedPayload::Local(local)); + } + match verify_builder_bid( &relay, parent_hash, - prev_randao, - timestamp, - Some(local.block_number()), + payload_attributes, + Some(local.payload().block_number()), self.inner.builder_profit_threshold, + current_fork, spec, ) { - Ok(()) => { - Ok(ProvenancedPayload::Builder(relay.data.message.header)) - } + Ok(()) => Ok(ProvenancedPayload::Builder( + BlockProposalContents::Payload { + payload: relay.data.message.header, + block_value: relay.data.message.value, + _phantom: PhantomData::default(), + }, + )), Err(reason) if !reason.payload_invalid() => { info!( self.log(), @@ -795,20 +881,28 @@ impl ExecutionLayer { match verify_builder_bid( &relay, 
parent_hash, - prev_randao, - timestamp, + payload_attributes, None, self.inner.builder_profit_threshold, + current_fork, spec, ) { - Ok(()) => { - Ok(ProvenancedPayload::Builder(relay.data.message.header)) - } + Ok(()) => Ok(ProvenancedPayload::Builder( + BlockProposalContents::Payload { + payload: relay.data.message.header, + block_value: relay.data.message.value, + _phantom: PhantomData::default(), + }, + )), // If the payload is valid then use it. The local EE failed // to produce a payload so we have no alternative. - Err(e) if !e.payload_invalid() => { - Ok(ProvenancedPayload::Builder(relay.data.message.header)) - } + Err(e) if !e.payload_invalid() => Ok(ProvenancedPayload::Builder( + BlockProposalContents::Payload { + payload: relay.data.message.header, + block_value: relay.data.message.value, + _phantom: PhantomData::default(), + }, + )), Err(reason) => { metrics::inc_counter_vec( &metrics::EXECUTION_LAYER_GET_PAYLOAD_BUILDER_REJECTIONS, @@ -871,76 +965,62 @@ impl ExecutionLayer { } self.get_full_payload_caching( parent_hash, - timestamp, - prev_randao, - suggested_fee_recipient, + payload_attributes, forkchoice_update_params, + current_fork, ) .await .map(ProvenancedPayload::Local) } /// Get a full payload without caching its result in the execution layer's payload cache. - async fn get_full_payload>( + async fn get_full_payload>( &self, parent_hash: ExecutionBlockHash, - timestamp: u64, - prev_randao: Hash256, - suggested_fee_recipient: Address, + payload_attributes: &PayloadAttributes, forkchoice_update_params: ForkchoiceUpdateParameters, - ) -> Result { + current_fork: ForkName, + ) -> Result, Error> { self.get_full_payload_with( parent_hash, - timestamp, - prev_randao, - suggested_fee_recipient, + payload_attributes, forkchoice_update_params, + current_fork, noop, ) .await } /// Get a full payload and cache its result in the execution layer's payload cache. 
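The blinded-payload path in this diff compares the local block's value against the relay bid before deciding which to propose: the local payload wins whenever it is at least as valuable, unless the operator opted in to `always_prefer_builder_payload`. A sketch of that decision, with `u128` standing in for `Uint256`:

```rust
#[derive(Debug, PartialEq)]
enum Provenance {
    Local,
    Builder,
}

// Sketch of the value comparison from the diff. Values are plain u128 here
// instead of the real Uint256 type.
fn choose_provenance(
    always_prefer_builder_payload: bool,
    local_value: u128,
    relay_value: u128,
) -> Provenance {
    if !always_prefer_builder_payload && local_value >= relay_value {
        Provenance::Local
    } else {
        // The builder bid is still subject to the verify_builder_bid checks
        // (parent hash, randao, timestamp, fork, withdrawals root, signature).
        Provenance::Builder
    }
}

fn main() {
    assert_eq!(choose_provenance(false, 10, 10), Provenance::Local);
    assert_eq!(choose_provenance(false, 9, 10), Provenance::Builder);
    assert_eq!(choose_provenance(true, 10, 9), Provenance::Builder);
}
```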
- async fn get_full_payload_caching>( + async fn get_full_payload_caching>( &self, parent_hash: ExecutionBlockHash, - timestamp: u64, - prev_randao: Hash256, - suggested_fee_recipient: Address, + payload_attributes: &PayloadAttributes, forkchoice_update_params: ForkchoiceUpdateParameters, - ) -> Result { + current_fork: ForkName, + ) -> Result, Error> { self.get_full_payload_with( parent_hash, - timestamp, - prev_randao, - suggested_fee_recipient, + payload_attributes, forkchoice_update_params, + current_fork, Self::cache_payload, ) .await } - async fn get_full_payload_with>( + async fn get_full_payload_with>( &self, parent_hash: ExecutionBlockHash, - timestamp: u64, - prev_randao: Hash256, - suggested_fee_recipient: Address, + payload_attributes: &PayloadAttributes, forkchoice_update_params: ForkchoiceUpdateParameters, - f: fn(&ExecutionLayer, &ExecutionPayload) -> Option>, - ) -> Result { - debug!( - self.log(), - "Issuing engine_getPayload"; - "suggested_fee_recipient" => ?suggested_fee_recipient, - "prev_randao" => ?prev_randao, - "timestamp" => timestamp, - "parent_hash" => ?parent_hash, - ); + current_fork: ForkName, + f: fn(&ExecutionLayer, ExecutionPayloadRef) -> Option>, + ) -> Result, Error> { self.engine() - .request(|engine| async move { + .request(move |engine| async move { let payload_id = if let Some(id) = engine - .get_payload_id(parent_hash, timestamp, prev_randao, suggested_fee_recipient) + .get_payload_id(&parent_hash, payload_attributes) .await { // The payload id has been cached for this engine. 
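The payload-id cache lookup above now keys on the head block hash plus the full `PayloadAttributes`, so any attribute change (including Capella withdrawals) misses the cache and forces a fresh `forkchoiceUpdated` call. A simplified sketch, with fixed-size byte arrays standing in for the real hash and id types:

```rust
use std::collections::HashMap;

// Stand-ins for the diff's types: the real key pairs the head block hash
// with the full PayloadAttributes rather than individual fields.
#[derive(Clone, Hash, PartialEq, Eq, Debug)]
struct PayloadAttributes {
    timestamp: u64,
    prev_randao: [u8; 32],
    suggested_fee_recipient: [u8; 20],
}

#[derive(Hash, PartialEq, Eq)]
struct PayloadIdCacheKey {
    head_block_hash: [u8; 32],
    payload_attributes: PayloadAttributes,
}

#[derive(Default)]
struct PayloadIdCache {
    // The real implementation bounds this with an LruCache.
    inner: HashMap<PayloadIdCacheKey, [u8; 8]>,
}

impl PayloadIdCache {
    fn put(&mut self, head: [u8; 32], attrs: &PayloadAttributes, id: [u8; 8]) {
        let key = PayloadIdCacheKey {
            head_block_hash: head,
            payload_attributes: attrs.clone(),
        };
        self.inner.insert(key, id);
    }

    fn get(&self, head: [u8; 32], attrs: &PayloadAttributes) -> Option<[u8; 8]> {
        let key = PayloadIdCacheKey {
            head_block_hash: head,
            payload_attributes: attrs.clone(),
        };
        self.inner.get(&key).copied()
    }
}

fn main() {
    let attrs = PayloadAttributes {
        timestamp: 12,
        prev_randao: [1; 32],
        suggested_fee_recipient: [2; 20],
    };
    let mut cache = PayloadIdCache::default();
    cache.put([0; 32], &attrs, [9; 8]);
    assert_eq!(cache.get([0; 32], &attrs), Some([9; 8]));
    // A different timestamp is a cache miss: fcU must be issued again.
    let other = PayloadAttributes { timestamp: 13, ..attrs.clone() };
    assert!(cache.get([0; 32], &other).is_none());
}
```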
@@ -956,7 +1036,7 @@ impl ExecutionLayer { &metrics::EXECUTION_LAYER_PRE_PREPARED_PAYLOAD_ID, &[metrics::MISS], ); - let fork_choice_state = ForkChoiceState { + let fork_choice_state = ForkchoiceState { head_block_hash: parent_hash, safe_block_hash: forkchoice_update_params .justified_hash @@ -965,16 +1045,11 @@ impl ExecutionLayer { .finalized_hash .unwrap_or_else(ExecutionBlockHash::zero), }; - let payload_attributes = PayloadAttributes { - timestamp, - prev_randao, - suggested_fee_recipient, - }; let response = engine .notify_forkchoice_updated( fork_choice_state, - Some(payload_attributes), + Some(payload_attributes.clone()), self.log(), ) .await?; @@ -994,33 +1069,46 @@ impl ExecutionLayer { } }; - engine - .api - .get_payload_v1::(payload_id) - .await - .map(|full_payload| { - if full_payload.fee_recipient != suggested_fee_recipient { - error!( - self.log(), - "Inconsistent fee recipient"; - "msg" => "The fee recipient returned from the Execution Engine differs \ - from the suggested_fee_recipient set on the beacon node. This could \ - indicate that fees are being diverted to another address. 
Please \ - ensure that the value of suggested_fee_recipient is set correctly and \ - that the Execution Engine is trusted.", - "fee_recipient" => ?full_payload.fee_recipient, - "suggested_fee_recipient" => ?suggested_fee_recipient, - ); - } - if f(self, &full_payload).is_some() { - warn!( - self.log(), - "Duplicate payload cached, this might indicate redundant proposal \ + let payload_fut = async { + debug!( + self.log(), + "Issuing engine_getPayload"; + "suggested_fee_recipient" => ?payload_attributes.suggested_fee_recipient(), + "prev_randao" => ?payload_attributes.prev_randao(), + "timestamp" => payload_attributes.timestamp(), + "parent_hash" => ?parent_hash, + ); + engine.api.get_payload::(current_fork, payload_id).await + }; + let payload_response = payload_fut.await; + let (execution_payload, block_value) = payload_response.map(|payload_response| { + if payload_response.execution_payload_ref().fee_recipient() != payload_attributes.suggested_fee_recipient() { + error!( + self.log(), + "Inconsistent fee recipient"; + "msg" => "The fee recipient returned from the Execution Engine differs \ + from the suggested_fee_recipient set on the beacon node. This could \ + indicate that fees are being diverted to another address. Please \ + ensure that the value of suggested_fee_recipient is set correctly and \ + that the Execution Engine is trusted.", + "fee_recipient" => ?payload_response.execution_payload_ref().fee_recipient(), + "suggested_fee_recipient" => ?payload_attributes.suggested_fee_recipient(), + ); + } + if f(self, payload_response.execution_payload_ref()).is_some() { + warn!( + self.log(), + "Duplicate payload cached, this might indicate redundant proposal \ attempts." 
- ); - } - full_payload.into() - }) + ); + } + payload_response.into() + })?; + Ok(BlockProposalContents::Payload { + payload: execution_payload.into(), + block_value, + _phantom: PhantomData::default(), + }) }) .await .map_err(Box::new) @@ -1052,14 +1140,14 @@ impl ExecutionLayer { trace!( self.log(), "Issuing engine_newPayload"; - "parent_hash" => ?execution_payload.parent_hash, - "block_hash" => ?execution_payload.block_hash, - "block_number" => execution_payload.block_number, + "parent_hash" => ?execution_payload.parent_hash(), + "block_hash" => ?execution_payload.block_hash(), + "block_number" => execution_payload.block_number(), ); let result = self .engine() - .request(|engine| engine.api.new_payload_v1(execution_payload.clone())) + .request(|engine| engine.api.new_payload(execution_payload.clone())) .await; if let Ok(status) = &result { @@ -1069,7 +1157,7 @@ impl ExecutionLayer { ); } - process_payload_status(execution_payload.block_hash, result, self.log()) + process_payload_status(execution_payload.block_hash(), result, self.log()) .map_err(Box::new) .map_err(Error::EngineError) } @@ -1172,9 +1260,9 @@ impl ExecutionLayer { let payload_attributes = self.payload_attributes(next_slot, head_block_root).await; // Compute the "lookahead", the time between when the payload will be produced and now. 
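The "lookahead" computed here is the gap between the payload's slot timestamp and the current time, and the metric is skipped when the timestamp has already passed (`checked_sub` returns `None`). As a small sketch:

```rust
use std::time::Duration;

// Sketch of the lookahead computation: the payload attributes carry the
// slot timestamp in seconds; the lookahead is how far in the future the
// payload will be produced.
fn payload_lookahead(attributes_timestamp_secs: u64, now: Duration) -> Option<Duration> {
    Duration::from_secs(attributes_timestamp_secs).checked_sub(now)
}

fn main() {
    let now = Duration::from_secs(100);
    assert_eq!(payload_lookahead(104, now), Some(Duration::from_secs(4)));
    // Timestamp already passed: no lookahead metric is recorded.
    assert_eq!(payload_lookahead(99, now), None);
}
```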
- if let Some(payload_attributes) = payload_attributes { + if let Some(ref payload_attributes) = payload_attributes { if let Ok(now) = SystemTime::now().duration_since(UNIX_EPOCH) { - let timestamp = Duration::from_secs(payload_attributes.timestamp); + let timestamp = Duration::from_secs(payload_attributes.timestamp()); if let Some(lookahead) = timestamp.checked_sub(now) { metrics::observe_duration( &metrics::EXECUTION_LAYER_PAYLOAD_ATTRIBUTES_LOOKAHEAD, @@ -1191,7 +1279,7 @@ impl ExecutionLayer { } } - let forkchoice_state = ForkChoiceState { + let forkchoice_state = ForkchoiceState { head_block_hash, safe_block_hash: justified_block_hash, finalized_block_hash, @@ -1273,6 +1361,26 @@ impl ExecutionLayer { } } + /// Returns the execution engine capabilities resulting from a call to + /// engine_exchangeCapabilities. If the capabilities cache is not populated, + /// or if it is populated with a cached result of age >= `age_limit`, this + /// method will fetch the result from the execution engine and populate the + /// cache before returning it. Otherwise it will return a cached result from + /// a previous call. + /// + /// Set `age_limit` to `None` to always return the cached result + /// Set `age_limit` to `Some(Duration::ZERO)` to force fetching from EE + pub async fn get_engine_capabilities( + &self, + age_limit: Option, + ) -> Result { + self.engine() + .request(|engine| engine.get_engine_capabilities(age_limit)) + .await + .map_err(Box::new) + .map_err(Error::EngineError) + } + /// Used during block production to determine if the merge has been triggered. 
/// /// ## Specification @@ -1473,13 +1581,90 @@ impl ExecutionLayer { } } - pub async fn get_payload_by_block_hash( + pub async fn get_payload_bodies_by_hash( + &self, + hashes: Vec, + ) -> Result>>, Error> { + self.engine() + .request(|engine: &Engine| async move { + engine.api.get_payload_bodies_by_hash_v1(hashes).await + }) + .await + .map_err(Box::new) + .map_err(Error::EngineError) + } + + pub async fn get_payload_bodies_by_range( + &self, + start: u64, + count: u64, + ) -> Result>>, Error> { + let _timer = metrics::start_timer(&metrics::EXECUTION_LAYER_GET_PAYLOAD_BODIES_BY_RANGE); + self.engine() + .request(|engine: &Engine| async move { + engine + .api + .get_payload_bodies_by_range_v1(start, count) + .await + }) + .await + .map_err(Box::new) + .map_err(Error::EngineError) + } + + /// Fetch a full payload from the execution node. + /// + /// This will fail if the payload is not from the finalized portion of the chain. + pub async fn get_payload_for_header( + &self, + header: &ExecutionPayloadHeader, + fork: ForkName, + ) -> Result>, Error> { + let hash = header.block_hash(); + let block_number = header.block_number(); + + // Handle default payload body. + if header.block_hash() == ExecutionBlockHash::zero() { + let payload = match fork { + ForkName::Merge => ExecutionPayloadMerge::default().into(), + ForkName::Capella => ExecutionPayloadCapella::default().into(), + ForkName::Base | ForkName::Altair => { + return Err(Error::InvalidForkForPayload); + } + }; + return Ok(Some(payload)); + } + + // Use efficient payload bodies by range method if supported. 
+ let capabilities = self.get_engine_capabilities(None).await?; + if capabilities.get_payload_bodies_by_range_v1 { + let mut payload_bodies = self.get_payload_bodies_by_range(block_number, 1).await?; + + if payload_bodies.len() != 1 { + return Ok(None); + } + + let opt_payload_body = payload_bodies.pop().flatten(); + opt_payload_body + .map(|body| { + body.to_payload(header.clone()) + .map_err(Error::InvalidPayloadBody) + }) + .transpose() + } else { + // Fall back to eth_blockByHash. + self.get_payload_by_hash_legacy(hash, fork).await + } + } + + pub async fn get_payload_by_hash_legacy( &self, hash: ExecutionBlockHash, + fork: ForkName, ) -> Result>, Error> { self.engine() .request(|engine| async move { - self.get_payload_by_block_hash_from_engine(engine, hash) + self.get_payload_by_hash_from_engine(engine, hash, fork) .await }) .await @@ -1487,18 +1672,29 @@ impl ExecutionLayer { .map_err(Error::EngineError) } - async fn get_payload_by_block_hash_from_engine( + async fn get_payload_by_hash_from_engine( &self, engine: &Engine, hash: ExecutionBlockHash, + fork: ForkName, ) -> Result>, ApiError> { let _timer = metrics::start_timer(&metrics::EXECUTION_LAYER_GET_PAYLOAD_BY_BLOCK_HASH); if hash == ExecutionBlockHash::zero() { - return Ok(Some(ExecutionPayload::default())); + return match fork { + ForkName::Merge => Ok(Some(ExecutionPayloadMerge::default().into())), + ForkName::Capella => Ok(Some(ExecutionPayloadCapella::default().into())), + ForkName::Base | ForkName::Altair => Err(ApiError::UnsupportedForkVariant( + format!("called get_payload_by_hash_from_engine with {}", fork), + )), + }; } - let block = if let Some(block) = engine.api.get_block_by_hash_with_txns::(hash).await? { + let block = if let Some(block) = engine + .api + .get_block_by_hash_with_txns::(hash, fork) + .await? 
+ { block } else { return Ok(None); @@ -1506,30 +1702,63 @@ impl ExecutionLayer { let transactions = VariableList::new( block - .transactions - .into_iter() + .transactions() + .iter() .map(|transaction| VariableList::new(transaction.rlp().to_vec())) .collect::>() .map_err(ApiError::DeserializeTransaction)?, ) .map_err(ApiError::DeserializeTransactions)?; - Ok(Some(ExecutionPayload { - parent_hash: block.parent_hash, - fee_recipient: block.fee_recipient, - state_root: block.state_root, - receipts_root: block.receipts_root, - logs_bloom: block.logs_bloom, - prev_randao: block.prev_randao, - block_number: block.block_number, - gas_limit: block.gas_limit, - gas_used: block.gas_used, - timestamp: block.timestamp, - extra_data: block.extra_data, - base_fee_per_gas: block.base_fee_per_gas, - block_hash: block.block_hash, - transactions, - })) + let payload = match block { + ExecutionBlockWithTransactions::Merge(merge_block) => { + ExecutionPayload::Merge(ExecutionPayloadMerge { + parent_hash: merge_block.parent_hash, + fee_recipient: merge_block.fee_recipient, + state_root: merge_block.state_root, + receipts_root: merge_block.receipts_root, + logs_bloom: merge_block.logs_bloom, + prev_randao: merge_block.prev_randao, + block_number: merge_block.block_number, + gas_limit: merge_block.gas_limit, + gas_used: merge_block.gas_used, + timestamp: merge_block.timestamp, + extra_data: merge_block.extra_data, + base_fee_per_gas: merge_block.base_fee_per_gas, + block_hash: merge_block.block_hash, + transactions, + }) + } + ExecutionBlockWithTransactions::Capella(capella_block) => { + let withdrawals = VariableList::new( + capella_block + .withdrawals + .into_iter() + .map(Into::into) + .collect(), + ) + .map_err(ApiError::DeserializeWithdrawals)?; + ExecutionPayload::Capella(ExecutionPayloadCapella { + parent_hash: capella_block.parent_hash, + fee_recipient: capella_block.fee_recipient, + state_root: capella_block.state_root, + receipts_root: capella_block.receipts_root, + 
logs_bloom: capella_block.logs_bloom, + prev_randao: capella_block.prev_randao, + block_number: capella_block.block_number, + gas_limit: capella_block.gas_limit, + gas_used: capella_block.gas_used, + timestamp: capella_block.timestamp, + extra_data: capella_block.extra_data, + base_fee_per_gas: capella_block.base_fee_per_gas, + block_hash: capella_block.block_hash, + transactions, + withdrawals, + }) + } + }; + + Ok(Some(payload)) } pub async fn propose_blinded_beacon_block( @@ -1565,9 +1794,9 @@ impl ExecutionLayer { "Builder successfully revealed payload"; "relay_response_ms" => duration.as_millis(), "block_root" => ?block_root, - "fee_recipient" => ?payload.fee_recipient, - "block_hash" => ?payload.block_hash, - "parent_hash" => ?payload.parent_hash + "fee_recipient" => ?payload.fee_recipient(), + "block_hash" => ?payload.block_hash(), + "parent_hash" => ?payload.parent_hash() ) } Err(e) => { @@ -1575,10 +1804,10 @@ impl ExecutionLayer { &metrics::EXECUTION_LAYER_BUILDER_REVEAL_PAYLOAD_OUTCOME, &[metrics::FAILURE], ); - error!( + warn!( self.log(), "Builder failed to reveal payload"; - "info" => "this relay failure may cause a missed proposal", + "info" => "this is common behaviour for some builders and may not indicate an issue", "error" => ?e, "relay_response_ms" => duration.as_millis(), "block_root" => ?block_root, @@ -1629,6 +1858,10 @@ enum InvalidBuilderPayload { signature: Signature, pubkey: PublicKeyBytes, }, + WithdrawalsRoot { + payload: Option, + expected: Option, + }, } impl InvalidBuilderPayload { @@ -1643,6 +1876,7 @@ impl InvalidBuilderPayload { InvalidBuilderPayload::BlockNumber { .. } => true, InvalidBuilderPayload::Fork { .. } => true, InvalidBuilderPayload::Signature { .. } => true, + InvalidBuilderPayload::WithdrawalsRoot { .. 
} => true, } } } @@ -1678,18 +1912,31 @@ impl fmt::Display for InvalidBuilderPayload { "invalid payload signature {} for pubkey {}", signature, pubkey ), + InvalidBuilderPayload::WithdrawalsRoot { payload, expected } => { + let opt_string = |opt_hash: &Option| { + opt_hash + .map(|hash| hash.to_string()) + .unwrap_or_else(|| "None".to_string()) + }; + write!( + f, + "payload withdrawals root was {} not {}", + opt_string(payload), + opt_string(expected) + ) + } } } } /// Perform some cursory, non-exhaustive validation of the bid returned from the builder. -fn verify_builder_bid>( +fn verify_builder_bid>( bid: &ForkVersionedResponse>, parent_hash: ExecutionBlockHash, - prev_randao: Hash256, - timestamp: u64, + payload_attributes: &PayloadAttributes, block_number: Option, profit_threshold: Uint256, + current_fork: ForkName, spec: &ChainSpec, ) -> Result<(), Box> { let is_signature_valid = bid.data.verify_signature(spec); @@ -1706,6 +1953,13 @@ fn verify_builder_bid>( ); } + let expected_withdrawals_root = payload_attributes + .withdrawals() + .ok() + .cloned() + .map(|withdrawals| Withdrawals::::from(withdrawals).tree_hash_root()); + let payload_withdrawals_root = header.withdrawals_root().ok(); + if payload_value < profit_threshold { Err(Box::new(InvalidBuilderPayload::LowValue { profit_threshold, @@ -1716,35 +1970,36 @@ fn verify_builder_bid>( payload: header.parent_hash(), expected: parent_hash, })) - } else if header.prev_randao() != prev_randao { + } else if header.prev_randao() != payload_attributes.prev_randao() { Err(Box::new(InvalidBuilderPayload::PrevRandao { payload: header.prev_randao(), - expected: prev_randao, + expected: payload_attributes.prev_randao(), })) - } else if header.timestamp() != timestamp { + } else if header.timestamp() != payload_attributes.timestamp() { Err(Box::new(InvalidBuilderPayload::Timestamp { payload: header.timestamp(), - expected: timestamp, + expected: payload_attributes.timestamp(), })) } else if block_number.map_or(false, 
|n| n != header.block_number()) { Err(Box::new(InvalidBuilderPayload::BlockNumber { payload: header.block_number(), expected: block_number, })) - } else if !matches!(bid.version, Some(ForkName::Merge)) { - // Once fork information is added to the payload, we will need to - // check that the local and relay payloads match. At this point, if - // we are requesting a payload at all, we have to assume this is - // the Bellatrix fork. + } else if bid.version != Some(current_fork) { Err(Box::new(InvalidBuilderPayload::Fork { payload: bid.version, - expected: ForkName::Merge, + expected: current_fork, })) } else if !is_signature_valid { Err(Box::new(InvalidBuilderPayload::Signature { signature: bid.data.signature.clone(), pubkey: bid.data.message.pubkey, })) + } else if payload_withdrawals_root != expected_withdrawals_root { + Err(Box::new(InvalidBuilderPayload::WithdrawalsRoot { + payload: payload_withdrawals_root, + expected: expected_withdrawals_root, + })) } else { Ok(()) } @@ -1906,7 +2161,10 @@ mod test { } } -fn noop(_: &ExecutionLayer, _: &ExecutionPayload) -> Option> { +fn noop( + _: &ExecutionLayer, + _: ExecutionPayloadRef, +) -> Option> { None } diff --git a/beacon_node/execution_layer/src/metrics.rs b/beacon_node/execution_layer/src/metrics.rs index 287050f66be..3ed99ca6068 100644 --- a/beacon_node/execution_layer/src/metrics.rs +++ b/beacon_node/execution_layer/src/metrics.rs @@ -45,6 +45,10 @@ lazy_static::lazy_static! 
{
        "execution_layer_get_payload_by_block_hash_time",
        "Time to reconstruct a payload from the EE using eth_getBlockByHash"
    );
+    pub static ref EXECUTION_LAYER_GET_PAYLOAD_BODIES_BY_RANGE: Result<Histogram> = try_create_histogram(
+        "execution_layer_get_payload_bodies_by_range_time",
+        "Time to fetch a range of payload bodies from the EE"
+    );
    pub static ref EXECUTION_LAYER_VERIFY_BLOCK_HASH: Result<Histogram> = try_create_histogram_with_buckets(
        "execution_layer_verify_block_hash_time",
        "Time to verify the execution block hash in Lighthouse, without the EL",
diff --git a/beacon_node/execution_layer/src/payload_status.rs b/beacon_node/execution_layer/src/payload_status.rs
index 7db8e234d11..5405fd70099 100644
--- a/beacon_node/execution_layer/src/payload_status.rs
+++ b/beacon_node/execution_layer/src/payload_status.rs
@@ -10,7 +10,9 @@ use types::ExecutionBlockHash;
 pub enum PayloadStatus {
     Valid,
     Invalid {
-        latest_valid_hash: ExecutionBlockHash,
+        /// The EE will provide a `None` LVH when it is unable to determine the
+        /// latest valid ancestor.
+        latest_valid_hash: Option<ExecutionBlockHash>,
         validation_error: Option<String>,
     },
     Syncing,
@@ -55,22 +57,10 @@ pub fn process_payload_status(
                 })
             }
         }
-        PayloadStatusV1Status::Invalid => {
-            if let Some(latest_valid_hash) = response.latest_valid_hash {
-                // The response is only valid if `latest_valid_hash` is not `null`.
-                Ok(PayloadStatus::Invalid {
-                    latest_valid_hash,
-                    validation_error: response.validation_error.clone(),
-                })
-            } else {
-                Err(EngineError::Api {
-                    error: ApiError::BadResponse(
-                        "new_payload: response.status = INVALID but null latest_valid_hash"
-                            .to_string(),
-                    ),
-                })
-            }
-        }
+        PayloadStatusV1Status::Invalid => Ok(PayloadStatus::Invalid {
+            latest_valid_hash: response.latest_valid_hash,
+            validation_error: response.validation_error,
+        }),
         PayloadStatusV1Status::InvalidBlockHash => {
             // In the interests of being liberal with what we accept, only raise a
             // warning here.
diff --git a/beacon_node/execution_layer/src/test_utils/execution_block_generator.rs b/beacon_node/execution_layer/src/test_utils/execution_block_generator.rs
index 22dcb400708..a8d98a767fb 100644
--- a/beacon_node/execution_layer/src/test_utils/execution_block_generator.rs
+++ b/beacon_node/execution_layer/src/test_utils/execution_block_generator.rs
@@ -1,4 +1,4 @@
-use crate::engines::ForkChoiceState;
+use crate::engines::ForkchoiceState;
 use crate::{
     engine_api::{
         json_structures::{
@@ -12,7 +12,10 @@ use serde::{Deserialize, Serialize};
 use std::collections::HashMap;
 use tree_hash::TreeHash;
 use tree_hash_derive::TreeHash;
-use types::{EthSpec, ExecutionBlockHash, ExecutionPayload, Hash256, Uint256};
+use types::{
+    EthSpec, ExecutionBlockHash, ExecutionPayload, ExecutionPayloadCapella, ExecutionPayloadMerge,
+    ForkName, Hash256, Uint256,
+};
 
 const GAS_LIMIT: u64 = 16384;
 const GAS_USED: u64 = GAS_LIMIT - 1;
@@ -28,21 +31,21 @@ impl Block {
     pub fn block_number(&self) -> u64 {
         match self {
             Block::PoW(block) => block.block_number,
-            Block::PoS(payload) => payload.block_number,
+            Block::PoS(payload) => payload.block_number(),
         }
     }
 
     pub fn parent_hash(&self) -> ExecutionBlockHash {
         match self {
             Block::PoW(block) => block.parent_hash,
-            Block::PoS(payload) => payload.parent_hash,
+            Block::PoS(payload) => payload.parent_hash(),
         }
     }
 
     pub fn block_hash(&self) -> ExecutionBlockHash {
         match self {
             Block::PoW(block) => block.block_hash,
-            Block::PoS(payload) => payload.block_hash,
+            Block::PoS(payload) => payload.block_hash(),
         }
     }
 
@@ -63,33 +66,18 @@ impl Block {
                 timestamp: block.timestamp,
             },
             Block::PoS(payload) => ExecutionBlock {
-                block_hash: payload.block_hash,
-                block_number: payload.block_number,
-                parent_hash: payload.parent_hash,
+                block_hash: payload.block_hash(),
+                block_number: payload.block_number(),
+                parent_hash: payload.parent_hash(),
                 total_difficulty,
-                timestamp: payload.timestamp,
+                timestamp: payload.timestamp(),
             },
         }
     }
 
     pub fn
as_execution_block_with_tx(&self) -> Option> { match self { - Block::PoS(payload) => Some(ExecutionBlockWithTransactions { - parent_hash: payload.parent_hash, - fee_recipient: payload.fee_recipient, - state_root: payload.state_root, - receipts_root: payload.receipts_root, - logs_bloom: payload.logs_bloom.clone(), - prev_randao: payload.prev_randao, - block_number: payload.block_number, - gas_limit: payload.gas_limit, - gas_used: payload.gas_used, - timestamp: payload.timestamp, - extra_data: payload.extra_data.clone(), - base_fee_per_gas: payload.base_fee_per_gas, - block_hash: payload.block_hash, - transactions: vec![], - }), + Block::PoS(payload) => Some(payload.clone().try_into().unwrap()), Block::PoW(_) => None, } } @@ -126,6 +114,10 @@ pub struct ExecutionBlockGenerator { pub pending_payloads: HashMap>, pub next_payload_id: u64, pub payload_ids: HashMap>, + /* + * Post-merge fork triggers + */ + pub shanghai_time: Option, // withdrawals } impl ExecutionBlockGenerator { @@ -133,6 +125,7 @@ impl ExecutionBlockGenerator { terminal_total_difficulty: Uint256, terminal_block_number: u64, terminal_block_hash: ExecutionBlockHash, + shanghai_time: Option, ) -> Self { let mut gen = Self { head_block: <_>::default(), @@ -145,6 +138,7 @@ impl ExecutionBlockGenerator { pending_payloads: <_>::default(), next_payload_id: 0, payload_ids: <_>::default(), + shanghai_time, }; gen.insert_pow_block(0).unwrap(); @@ -176,6 +170,13 @@ impl ExecutionBlockGenerator { } } + pub fn get_fork_at_timestamp(&self, timestamp: u64) -> ForkName { + match self.shanghai_time { + Some(fork_time) if timestamp >= fork_time => ForkName::Capella, + _ => ForkName::Merge, + } + } + pub fn execution_block_by_number(&self, number: u64) -> Option { self.block_by_number(number) .map(|block| block.as_execution_block(self.terminal_total_difficulty)) @@ -198,6 +199,14 @@ impl ExecutionBlockGenerator { .and_then(|block| block.as_execution_block_with_tx()) } + pub fn execution_block_with_txs_by_number( + &self, 
+ number: u64, + ) -> Option> { + self.block_by_number(number) + .and_then(|block| block.as_execution_block_with_tx()) + } + pub fn move_to_block_prior_to_terminal_block(&mut self) -> Result<(), String> { let target_block = self .terminal_block_number @@ -357,7 +366,9 @@ impl ExecutionBlockGenerator { // Update the block hash after modifying the block match &mut block { Block::PoW(b) => b.block_hash = ExecutionBlockHash::from_root(b.tree_hash_root()), - Block::PoS(b) => b.block_hash = ExecutionBlockHash::from_root(b.tree_hash_root()), + Block::PoS(b) => { + *b.block_hash_mut() = ExecutionBlockHash::from_root(b.tree_hash_root()) + } } // Update head. @@ -378,7 +389,7 @@ impl ExecutionBlockGenerator { } pub fn new_payload(&mut self, payload: ExecutionPayload) -> PayloadStatusV1 { - let parent = if let Some(parent) = self.blocks.get(&payload.parent_hash) { + let parent = if let Some(parent) = self.blocks.get(&payload.parent_hash()) { parent } else { return PayloadStatusV1 { @@ -388,7 +399,7 @@ impl ExecutionBlockGenerator { }; }; - if payload.block_number != parent.block_number() + 1 { + if payload.block_number() != parent.block_number() + 1 { return PayloadStatusV1 { status: PayloadStatusV1Status::Invalid, latest_valid_hash: Some(parent.block_hash()), @@ -396,8 +407,8 @@ impl ExecutionBlockGenerator { }; } - let valid_hash = payload.block_hash; - self.pending_payloads.insert(payload.block_hash, payload); + let valid_hash = payload.block_hash(); + self.pending_payloads.insert(payload.block_hash(), payload); PayloadStatusV1 { status: PayloadStatusV1Status::Valid, @@ -406,9 +417,11 @@ impl ExecutionBlockGenerator { } } - pub fn forkchoice_updated_v1( + // This function expects payload_attributes to already be validated with respect to + // the current fork [obtained by self.get_fork_at_timestamp(payload_attributes.timestamp)] + pub fn forkchoice_updated( &mut self, - forkchoice_state: ForkChoiceState, + forkchoice_state: ForkchoiceState, payload_attributes: Option, ) -> 
Result { if let Some(payload) = self @@ -462,24 +475,62 @@ impl ExecutionBlockGenerator { let id = payload_id_from_u64(self.next_payload_id); self.next_payload_id += 1; - let mut execution_payload = ExecutionPayload { - parent_hash: forkchoice_state.head_block_hash, - fee_recipient: attributes.suggested_fee_recipient, - receipts_root: Hash256::repeat_byte(42), - state_root: Hash256::repeat_byte(43), - logs_bloom: vec![0; 256].into(), - prev_randao: attributes.prev_randao, - block_number: parent.block_number() + 1, - gas_limit: GAS_LIMIT, - gas_used: GAS_USED, - timestamp: attributes.timestamp, - extra_data: "block gen was here".as_bytes().to_vec().into(), - base_fee_per_gas: Uint256::one(), - block_hash: ExecutionBlockHash::zero(), - transactions: vec![].into(), + let mut execution_payload = match &attributes { + PayloadAttributes::V1(pa) => ExecutionPayload::Merge(ExecutionPayloadMerge { + parent_hash: forkchoice_state.head_block_hash, + fee_recipient: pa.suggested_fee_recipient, + receipts_root: Hash256::repeat_byte(42), + state_root: Hash256::repeat_byte(43), + logs_bloom: vec![0; 256].into(), + prev_randao: pa.prev_randao, + block_number: parent.block_number() + 1, + gas_limit: GAS_LIMIT, + gas_used: GAS_USED, + timestamp: pa.timestamp, + extra_data: "block gen was here".as_bytes().to_vec().into(), + base_fee_per_gas: Uint256::one(), + block_hash: ExecutionBlockHash::zero(), + transactions: vec![].into(), + }), + PayloadAttributes::V2(pa) => match self.get_fork_at_timestamp(pa.timestamp) { + ForkName::Merge => ExecutionPayload::Merge(ExecutionPayloadMerge { + parent_hash: forkchoice_state.head_block_hash, + fee_recipient: pa.suggested_fee_recipient, + receipts_root: Hash256::repeat_byte(42), + state_root: Hash256::repeat_byte(43), + logs_bloom: vec![0; 256].into(), + prev_randao: pa.prev_randao, + block_number: parent.block_number() + 1, + gas_limit: GAS_LIMIT, + gas_used: GAS_USED, + timestamp: pa.timestamp, + extra_data: "block gen was 
here".as_bytes().to_vec().into(), + base_fee_per_gas: Uint256::one(), + block_hash: ExecutionBlockHash::zero(), + transactions: vec![].into(), + }), + ForkName::Capella => ExecutionPayload::Capella(ExecutionPayloadCapella { + parent_hash: forkchoice_state.head_block_hash, + fee_recipient: pa.suggested_fee_recipient, + receipts_root: Hash256::repeat_byte(42), + state_root: Hash256::repeat_byte(43), + logs_bloom: vec![0; 256].into(), + prev_randao: pa.prev_randao, + block_number: parent.block_number() + 1, + gas_limit: GAS_LIMIT, + gas_used: GAS_USED, + timestamp: pa.timestamp, + extra_data: "block gen was here".as_bytes().to_vec().into(), + base_fee_per_gas: Uint256::one(), + block_hash: ExecutionBlockHash::zero(), + transactions: vec![].into(), + withdrawals: pa.withdrawals.clone().into(), + }), + _ => unreachable!(), + }, }; - execution_payload.block_hash = + *execution_payload.block_hash_mut() = ExecutionBlockHash::from_root(execution_payload.tree_hash_root()); self.payload_ids.insert(id, execution_payload); @@ -566,6 +617,7 @@ mod test { TERMINAL_DIFFICULTY.into(), TERMINAL_BLOCK, ExecutionBlockHash::zero(), + None, ); for i in 0..=TERMINAL_BLOCK { diff --git a/beacon_node/execution_layer/src/test_utils/handle_rpc.rs b/beacon_node/execution_layer/src/test_utils/handle_rpc.rs index 97c52357559..bda0c782dcc 100644 --- a/beacon_node/execution_layer/src/test_utils/handle_rpc.rs +++ b/beacon_node/execution_layer/src/test_utils/handle_rpc.rs @@ -1,25 +1,33 @@ use super::Context; use crate::engine_api::{http::*, *}; use crate::json_structures::*; -use serde::de::DeserializeOwned; +use crate::test_utils::DEFAULT_MOCK_EL_PAYLOAD_VALUE_WEI; +use serde::{de::DeserializeOwned, Deserialize}; use serde_json::Value as JsonValue; use std::sync::Arc; -use types::EthSpec; +use types::{EthSpec, ForkName}; + +pub const GENERIC_ERROR_CODE: i64 = -1234; +pub const BAD_PARAMS_ERROR_CODE: i64 = -32602; +pub const UNKNOWN_PAYLOAD_ERROR_CODE: i64 = -38001; +pub const 
FORK_REQUEST_MISMATCH_ERROR_CODE: i64 = -32000; pub async fn handle_rpc( body: JsonValue, ctx: Arc>, -) -> Result { +) -> Result { *ctx.previous_request.lock() = Some(body.clone()); let method = body .get("method") .and_then(JsonValue::as_str) - .ok_or_else(|| "missing/invalid method field".to_string())?; + .ok_or_else(|| "missing/invalid method field".to_string()) + .map_err(|s| (s, GENERIC_ERROR_CODE))?; let params = body .get("params") - .ok_or_else(|| "missing/invalid params field".to_string())?; + .ok_or_else(|| "missing/invalid params field".to_string()) + .map_err(|s| (s, GENERIC_ERROR_CODE))?; match method { ETH_SYNCING => Ok(JsonValue::Bool(false)), @@ -27,7 +35,8 @@ pub async fn handle_rpc( let tag = params .get(0) .and_then(JsonValue::as_str) - .ok_or_else(|| "missing/invalid params[0] value".to_string())?; + .ok_or_else(|| "missing/invalid params[0] value".to_string()) + .map_err(|s| (s, BAD_PARAMS_ERROR_CODE))?; match tag { "latest" => Ok(serde_json::to_value( @@ -36,7 +45,10 @@ pub async fn handle_rpc( .latest_execution_block(), ) .unwrap()), - other => Err(format!("The tag {} is not supported", other)), + other => Err(( + format!("The tag {} is not supported", other), + BAD_PARAMS_ERROR_CODE, + )), } } ETH_GET_BLOCK_BY_HASH => { @@ -47,7 +59,8 @@ pub async fn handle_rpc( .and_then(|s| { s.parse() .map_err(|e| format!("unable to parse hash: {:?}", e)) - })?; + }) + .map_err(|s| (s, BAD_PARAMS_ERROR_CODE))?; // If we have a static response set, just return that. 
if let Some(response) = *ctx.static_get_block_by_hash_response.lock() { @@ -57,7 +70,8 @@ pub async fn handle_rpc( let full_tx = params .get(1) .and_then(JsonValue::as_bool) - .ok_or_else(|| "missing/invalid params[1] value".to_string())?; + .ok_or_else(|| "missing/invalid params[1] value".to_string()) + .map_err(|s| (s, BAD_PARAMS_ERROR_CODE))?; if full_tx { Ok(serde_json::to_value( ctx.execution_block_generator @@ -74,18 +88,70 @@ pub async fn handle_rpc( .unwrap()) } } - ENGINE_NEW_PAYLOAD_V1 => { - let request: JsonExecutionPayloadV1 = get_param(params, 0)?; + ENGINE_NEW_PAYLOAD_V1 | ENGINE_NEW_PAYLOAD_V2 => { + let request = match method { + ENGINE_NEW_PAYLOAD_V1 => JsonExecutionPayload::V1( + get_param::>(params, 0) + .map_err(|s| (s, BAD_PARAMS_ERROR_CODE))?, + ), + ENGINE_NEW_PAYLOAD_V2 => get_param::>(params, 0) + .map(|jep| JsonExecutionPayload::V2(jep)) + .or_else(|_| { + get_param::>(params, 0) + .map(|jep| JsonExecutionPayload::V1(jep)) + }) + .map_err(|s| (s, BAD_PARAMS_ERROR_CODE))?, + // TODO(4844) add that here.. + _ => unreachable!(), + }; + + let fork = ctx + .execution_block_generator + .read() + .get_fork_at_timestamp(*request.timestamp()); + // validate method called correctly according to shanghai fork time + match fork { + ForkName::Merge => { + if matches!(request, JsonExecutionPayload::V2(_)) { + return Err(( + format!( + "{} called with `ExecutionPayloadV2` before Capella fork!", + method + ), + GENERIC_ERROR_CODE, + )); + } + } + ForkName::Capella => { + if method == ENGINE_NEW_PAYLOAD_V1 { + return Err(( + format!("{} called after Capella fork!", method), + GENERIC_ERROR_CODE, + )); + } + if matches!(request, JsonExecutionPayload::V1(_)) { + return Err(( + format!( + "{} called with `ExecutionPayloadV1` after Capella fork!", + method + ), + GENERIC_ERROR_CODE, + )); + } + } + // TODO(4844) add 4844 error checking here + _ => unreachable!(), + }; // Canned responses set by block hash take priority. 
- if let Some(status) = ctx.get_new_payload_status(&request.block_hash) { + if let Some(status) = ctx.get_new_payload_status(request.block_hash()) { return Ok(serde_json::to_value(JsonPayloadStatusV1::from(status)).unwrap()); } let (static_response, should_import) = if let Some(mut response) = ctx.static_new_payload_response.lock().clone() { if response.status.status == PayloadStatusV1Status::Valid { - response.status.latest_valid_hash = Some(request.block_hash) + response.status.latest_valid_hash = Some(*request.block_hash()) } (Some(response.status), response.should_import) @@ -107,21 +173,140 @@ pub async fn handle_rpc( Ok(serde_json::to_value(JsonPayloadStatusV1::from(response)).unwrap()) } - ENGINE_GET_PAYLOAD_V1 => { - let request: JsonPayloadIdRequest = get_param(params, 0)?; + ENGINE_GET_PAYLOAD_V1 | ENGINE_GET_PAYLOAD_V2 => { + let request: JsonPayloadIdRequest = + get_param(params, 0).map_err(|s| (s, BAD_PARAMS_ERROR_CODE))?; let id = request.into(); let response = ctx .execution_block_generator .write() .get_payload(&id) - .ok_or_else(|| format!("no payload for id {:?}", id))?; + .ok_or_else(|| { + ( + format!("no payload for id {:?}", id), + UNKNOWN_PAYLOAD_ERROR_CODE, + ) + })?; + + // validate method called correctly according to shanghai fork time + if ctx + .execution_block_generator + .read() + .get_fork_at_timestamp(response.timestamp()) + == ForkName::Capella + && method == ENGINE_GET_PAYLOAD_V1 + { + return Err(( + format!("{} called after Capella fork!", method), + FORK_REQUEST_MISMATCH_ERROR_CODE, + )); + } + // TODO(4844) add 4844 error checking here - Ok(serde_json::to_value(JsonExecutionPayloadV1::from(response)).unwrap()) + match method { + ENGINE_GET_PAYLOAD_V1 => { + Ok(serde_json::to_value(JsonExecutionPayload::from(response)).unwrap()) + } + ENGINE_GET_PAYLOAD_V2 => Ok(match JsonExecutionPayload::from(response) { + JsonExecutionPayload::V1(execution_payload) => { + serde_json::to_value(JsonGetPayloadResponseV1 { + execution_payload, + 
block_value: DEFAULT_MOCK_EL_PAYLOAD_VALUE_WEI.into(), + }) + .unwrap() + } + JsonExecutionPayload::V2(execution_payload) => { + serde_json::to_value(JsonGetPayloadResponseV2 { + execution_payload, + block_value: DEFAULT_MOCK_EL_PAYLOAD_VALUE_WEI.into(), + }) + .unwrap() + } + }), + _ => unreachable!(), + } } - ENGINE_FORKCHOICE_UPDATED_V1 => { - let forkchoice_state: JsonForkChoiceStateV1 = get_param(params, 0)?; - let payload_attributes: Option = get_param(params, 1)?; + ENGINE_FORKCHOICE_UPDATED_V1 | ENGINE_FORKCHOICE_UPDATED_V2 => { + let forkchoice_state: JsonForkchoiceStateV1 = + get_param(params, 0).map_err(|s| (s, BAD_PARAMS_ERROR_CODE))?; + let payload_attributes = match method { + ENGINE_FORKCHOICE_UPDATED_V1 => { + let jpa1: Option = + get_param(params, 1).map_err(|s| (s, BAD_PARAMS_ERROR_CODE))?; + jpa1.map(JsonPayloadAttributes::V1) + } + ENGINE_FORKCHOICE_UPDATED_V2 => { + // we can't use `deny_unknown_fields` without breaking compatibility with some + // clients that haven't updated to the latest engine_api spec. So instead we'll + // need to deserialize based on timestamp + get_param::>(params, 1) + .and_then(|pa| { + pa.and_then(|pa| { + match ctx + .execution_block_generator + .read() + .get_fork_at_timestamp(*pa.timestamp()) + { + ForkName::Merge => { + get_param::>(params, 1) + .map(|opt| opt.map(JsonPayloadAttributes::V1)) + .transpose() + } + ForkName::Capella => { + get_param::>(params, 1) + .map(|opt| opt.map(JsonPayloadAttributes::V2)) + .transpose() + } + _ => unreachable!(), + } + }) + .transpose() + }) + .map_err(|s| (s, BAD_PARAMS_ERROR_CODE))? 
+ } + _ => unreachable!(), + }; + + // validate method called correctly according to shanghai fork time + if let Some(pa) = payload_attributes.as_ref() { + match ctx + .execution_block_generator + .read() + .get_fork_at_timestamp(*pa.timestamp()) + { + ForkName::Merge => { + if matches!(pa, JsonPayloadAttributes::V2(_)) { + return Err(( + format!( + "{} called with `JsonPayloadAttributesV2` before Capella fork!", + method + ), + GENERIC_ERROR_CODE, + )); + } + } + ForkName::Capella => { + if method == ENGINE_FORKCHOICE_UPDATED_V1 { + return Err(( + format!("{} called after Capella fork!", method), + FORK_REQUEST_MISMATCH_ERROR_CODE, + )); + } + if matches!(pa, JsonPayloadAttributes::V1(_)) { + return Err(( + format!( + "{} called with `JsonPayloadAttributesV1` after Capella fork!", + method + ), + FORK_REQUEST_MISMATCH_ERROR_CODE, + )); + } + } + // TODO(4844) add 4844 error checking here + _ => unreachable!(), + }; + } if let Some(hook_response) = ctx .hook @@ -145,10 +330,11 @@ pub async fn handle_rpc( let mut response = ctx .execution_block_generator .write() - .forkchoice_updated_v1( + .forkchoice_updated( forkchoice_state.into(), payload_attributes.map(|json| json.into()), - )?; + ) + .map_err(|s| (s, GENERIC_ERROR_CODE))?; if let Some(mut status) = ctx.static_forkchoice_updated_response.lock().clone() { if status.status == PayloadStatusV1Status::Valid { @@ -169,9 +355,68 @@ pub async fn handle_rpc( }; Ok(serde_json::to_value(transition_config).unwrap()) } - other => Err(format!( - "The method {} does not exist/is not available", - other + ENGINE_EXCHANGE_CAPABILITIES => { + let engine_capabilities = ctx.engine_capabilities.read(); + Ok(serde_json::to_value(engine_capabilities.to_response()).unwrap()) + } + ENGINE_GET_PAYLOAD_BODIES_BY_RANGE_V1 => { + #[derive(Deserialize)] + #[serde(transparent)] + struct Quantity(#[serde(with = "eth2_serde_utils::u64_hex_be")] pub u64); + + let start = get_param::(params, 0) + .map_err(|s| (s, BAD_PARAMS_ERROR_CODE))? 
+ .0; + let count = get_param::(params, 1) + .map_err(|s| (s, BAD_PARAMS_ERROR_CODE))? + .0; + + let mut response = vec![]; + for block_num in start..(start + count) { + let maybe_block = ctx + .execution_block_generator + .read() + .execution_block_with_txs_by_number(block_num); + + match maybe_block { + Some(block) => { + let transactions = Transactions::::new( + block + .transactions() + .iter() + .map(|transaction| VariableList::new(transaction.rlp().to_vec())) + .collect::>() + .map_err(|e| { + ( + format!("failed to deserialize transaction: {:?}", e), + GENERIC_ERROR_CODE, + ) + })?, + ) + .map_err(|e| { + ( + format!("failed to deserialize transactions: {:?}", e), + GENERIC_ERROR_CODE, + ) + })?; + + response.push(Some(JsonExecutionPayloadBodyV1:: { + transactions, + withdrawals: block + .withdrawals() + .ok() + .map(|withdrawals| VariableList::from(withdrawals.clone())), + })); + } + None => response.push(None), + } + } + + Ok(serde_json::to_value(response).unwrap()) + } + other => Err(( + format!("The method {} does not exist/is not available", other), + METHOD_NOT_FOUND_CODE, )), } } diff --git a/beacon_node/execution_layer/src/test_utils/hook.rs b/beacon_node/execution_layer/src/test_utils/hook.rs index a3748103e3e..4653811ac90 100644 --- a/beacon_node/execution_layer/src/test_utils/hook.rs +++ b/beacon_node/execution_layer/src/test_utils/hook.rs @@ -1,8 +1,8 @@ use crate::json_structures::*; type ForkChoiceUpdatedHook = dyn Fn( - JsonForkChoiceStateV1, - Option, + JsonForkchoiceStateV1, + Option, ) -> Option + Send + Sync; @@ -15,8 +15,8 @@ pub struct Hook { impl Hook { pub fn on_forkchoice_updated( &self, - state: JsonForkChoiceStateV1, - payload_attributes: Option, + state: JsonForkchoiceStateV1, + payload_attributes: Option, ) -> Option { (self.forkchoice_updated.as_ref()?)(state, payload_attributes) } diff --git a/beacon_node/execution_layer/src/test_utils/mock_builder.rs b/beacon_node/execution_layer/src/test_utils/mock_builder.rs index 
b8f74c1c93f..668d1fb3b1c 100644 --- a/beacon_node/execution_layer/src/test_utils/mock_builder.rs +++ b/beacon_node/execution_layer/src/test_utils/mock_builder.rs @@ -1,17 +1,21 @@ -use crate::test_utils::DEFAULT_JWT_SECRET; +use crate::test_utils::{DEFAULT_BUILDER_PAYLOAD_VALUE_WEI, DEFAULT_JWT_SECRET}; use crate::{Config, ExecutionLayer, PayloadAttributes}; use async_trait::async_trait; use eth2::types::{BlockId, StateId, ValidatorId}; use eth2::{BeaconNodeHttpClient, Timeouts}; -use ethereum_consensus::crypto::{SecretKey, Signature}; -use ethereum_consensus::primitives::BlsPublicKey; pub use ethereum_consensus::state_transition::Context; +use ethereum_consensus::{ + crypto::{SecretKey, Signature}, + primitives::{BlsPublicKey, BlsSignature, ExecutionAddress, Hash32, Root, U256}, + state_transition::Error, +}; use fork_choice::ForkchoiceUpdateParameters; -use mev_build_rs::{ +use mev_rs::{ + bellatrix::{BuilderBid as BuilderBidBellatrix, SignedBuilderBid as SignedBuilderBidBellatrix}, + capella::{BuilderBid as BuilderBidCapella, SignedBuilderBid as SignedBuilderBidCapella}, sign_builder_message, verify_signed_builder_message, BidRequest, BlindedBlockProviderError, BlindedBlockProviderServer, BuilderBid, ExecutionPayload as ServerPayload, - ExecutionPayloadHeader as ServerPayloadHeader, SignedBlindedBeaconBlock, SignedBuilderBid, - SignedValidatorRegistration, + SignedBlindedBeaconBlock, SignedBuilderBid, SignedValidatorRegistration, }; use parking_lot::RwLock; use sensitive_url::SensitiveUrl; @@ -26,7 +30,8 @@ use task_executor::TaskExecutor; use tempfile::NamedTempFile; use tree_hash::TreeHash; use types::{ - Address, BeaconState, BlindedPayload, ChainSpec, EthSpec, ExecPayload, Hash256, Slot, Uint256, + Address, BeaconState, BlindedPayload, ChainSpec, EthSpec, ExecPayload, ForkName, Hash256, Slot, + Uint256, }; #[derive(Clone)] @@ -38,25 +43,129 @@ pub enum Operation { PrevRandao(Hash256), BlockNumber(usize), Timestamp(usize), + WithdrawalsRoot(Hash256), } impl 
Operation { - fn apply(self, bid: &mut BuilderBid) -> Result<(), BlindedBlockProviderError> { + fn apply(self, bid: &mut B) -> Result<(), BlindedBlockProviderError> { match self { Operation::FeeRecipient(fee_recipient) => { - bid.header.fee_recipient = to_ssz_rs(&fee_recipient)? + *bid.fee_recipient_mut() = to_ssz_rs(&fee_recipient)? } - Operation::GasLimit(gas_limit) => bid.header.gas_limit = gas_limit as u64, - Operation::Value(value) => bid.value = to_ssz_rs(&value)?, - Operation::ParentHash(parent_hash) => bid.header.parent_hash = to_ssz_rs(&parent_hash)?, - Operation::PrevRandao(prev_randao) => bid.header.prev_randao = to_ssz_rs(&prev_randao)?, - Operation::BlockNumber(block_number) => bid.header.block_number = block_number as u64, - Operation::Timestamp(timestamp) => bid.header.timestamp = timestamp as u64, + Operation::GasLimit(gas_limit) => *bid.gas_limit_mut() = gas_limit as u64, + Operation::Value(value) => *bid.value_mut() = to_ssz_rs(&value)?, + Operation::ParentHash(parent_hash) => *bid.parent_hash_mut() = to_ssz_rs(&parent_hash)?, + Operation::PrevRandao(prev_randao) => *bid.prev_randao_mut() = to_ssz_rs(&prev_randao)?, + Operation::BlockNumber(block_number) => *bid.block_number_mut() = block_number as u64, + Operation::Timestamp(timestamp) => *bid.timestamp_mut() = timestamp as u64, + Operation::WithdrawalsRoot(root) => *bid.withdrawals_root_mut()? = to_ssz_rs(&root)?, } Ok(()) } } +// contains functions we need for BuilderBids.. 
not sure what to call this +pub trait BidStuff { + fn fee_recipient_mut(&mut self) -> &mut ExecutionAddress; + fn gas_limit_mut(&mut self) -> &mut u64; + fn value_mut(&mut self) -> &mut U256; + fn parent_hash_mut(&mut self) -> &mut Hash32; + fn prev_randao_mut(&mut self) -> &mut Hash32; + fn block_number_mut(&mut self) -> &mut u64; + fn timestamp_mut(&mut self) -> &mut u64; + fn withdrawals_root_mut(&mut self) -> Result<&mut Root, BlindedBlockProviderError>; + + fn sign_builder_message( + &mut self, + signing_key: &SecretKey, + context: &Context, + ) -> Result; + + fn to_signed_bid(self, signature: BlsSignature) -> SignedBuilderBid; +} + +impl BidStuff for BuilderBid { + fn fee_recipient_mut(&mut self) -> &mut ExecutionAddress { + match self { + Self::Bellatrix(bid) => &mut bid.header.fee_recipient, + Self::Capella(bid) => &mut bid.header.fee_recipient, + } + } + + fn gas_limit_mut(&mut self) -> &mut u64 { + match self { + Self::Bellatrix(bid) => &mut bid.header.gas_limit, + Self::Capella(bid) => &mut bid.header.gas_limit, + } + } + + fn value_mut(&mut self) -> &mut U256 { + match self { + Self::Bellatrix(bid) => &mut bid.value, + Self::Capella(bid) => &mut bid.value, + } + } + + fn parent_hash_mut(&mut self) -> &mut Hash32 { + match self { + Self::Bellatrix(bid) => &mut bid.header.parent_hash, + Self::Capella(bid) => &mut bid.header.parent_hash, + } + } + + fn prev_randao_mut(&mut self) -> &mut Hash32 { + match self { + Self::Bellatrix(bid) => &mut bid.header.prev_randao, + Self::Capella(bid) => &mut bid.header.prev_randao, + } + } + + fn block_number_mut(&mut self) -> &mut u64 { + match self { + Self::Bellatrix(bid) => &mut bid.header.block_number, + Self::Capella(bid) => &mut bid.header.block_number, + } + } + + fn timestamp_mut(&mut self) -> &mut u64 { + match self { + Self::Bellatrix(bid) => &mut bid.header.timestamp, + Self::Capella(bid) => &mut bid.header.timestamp, + } + } + + fn withdrawals_root_mut(&mut self) -> Result<&mut Root, 
BlindedBlockProviderError> { + match self { + Self::Bellatrix(_) => Err(BlindedBlockProviderError::Custom( + "withdrawals_root called on bellatrix bid".to_string(), + )), + Self::Capella(bid) => Ok(&mut bid.header.withdrawals_root), + } + } + + fn sign_builder_message( + &mut self, + signing_key: &SecretKey, + context: &Context, + ) -> Result { + match self { + Self::Bellatrix(message) => sign_builder_message(message, signing_key, context), + Self::Capella(message) => sign_builder_message(message, signing_key, context), + } + } + + fn to_signed_bid(self, signature: Signature) -> SignedBuilderBid { + match self { + Self::Bellatrix(message) => { + SignedBuilderBid::Bellatrix(SignedBuilderBidBellatrix { message, signature }) + } + Self::Capella(message) => { + SignedBuilderBid::Capella(SignedBuilderBidCapella { message, signature }) + } + } + } +} + pub struct TestingBuilder { server: BlindedBlockProviderServer>, pub builder: MockBuilder, @@ -111,7 +220,10 @@ impl TestingBuilder { } pub async fn run(&self) { - self.server.run().await + let server = self.server.serve(); + if let Err(err) = server.await { + println!("error while listening for incoming: {err}") + } } } @@ -162,7 +274,7 @@ impl MockBuilder { *self.invalidate_signatures.write() = false; } - fn apply_operations(&self, bid: &mut BuilderBid) -> Result<(), BlindedBlockProviderError> { + fn apply_operations(&self, bid: &mut B) -> Result<(), BlindedBlockProviderError> { let mut guard = self.operations.write(); while let Some(op) = guard.pop() { op.apply(bid)?; @@ -172,7 +284,7 @@ impl MockBuilder { } #[async_trait] -impl mev_build_rs::BlindedBlockProvider for MockBuilder { +impl mev_rs::BlindedBlockProvider for MockBuilder { async fn register_validators( &self, registrations: &mut [SignedValidatorRegistration], @@ -200,6 +312,7 @@ impl mev_build_rs::BlindedBlockProvider for MockBuilder { bid_request: &BidRequest, ) -> Result { let slot = Slot::new(bid_request.slot); + let fork = 
self.spec.fork_name_at_slot::<E>(slot); let signed_cached_data = self .val_registration_cache .read() @@ -215,9 +328,13 @@ impl<E: EthSpec> mev_build_rs::BlindedBlockProvider for MockBuilder<E> { .map_err(convert_err)? .ok_or_else(|| convert_err("missing head block"))?; - let block = head.data.message_merge().map_err(convert_err)?; + let block = head.data.message(); let head_block_root = block.tree_hash_root(); - let head_execution_hash = block.body.execution_payload.execution_payload.block_hash; + let head_execution_hash = block + .body() + .execution_payload() + .map_err(convert_err)? + .block_hash(); if head_execution_hash != from_ssz_rs(&bid_request.parent_hash)? { return Err(BlindedBlockProviderError::Custom(format!( "head mismatch: {} {}", @@ -232,12 +349,11 @@ impl<E: EthSpec> mev_build_rs::BlindedBlockProvider for MockBuilder<E> { .map_err(convert_err)? .ok_or_else(|| convert_err("missing finalized block"))? .data - .message_merge() + .message() + .body() + .execution_payload() .map_err(convert_err)? - .body - .execution_payload - .execution_payload - .block_hash; + .block_hash(); let justified_execution_hash = self .beacon_client @@ -246,12 +362,11 @@ impl<E: EthSpec> mev_build_rs::BlindedBlockProvider for MockBuilder<E> { .map_err(convert_err)? .ok_or_else(|| convert_err("missing finalized block"))? .data - .message_merge() + .message() + .body() + .execution_payload() .map_err(convert_err)?
- .body - .execution_payload - .execution_payload - .block_hash; + .block_hash(); let val_index = self .beacon_client @@ -287,14 +402,22 @@ impl<E: EthSpec> mev_build_rs::BlindedBlockProvider for MockBuilder<E> { .get_randao_mix(head_state.current_epoch()) .map_err(convert_err)?; - let payload_attributes = PayloadAttributes { - timestamp, - prev_randao: *prev_randao, - suggested_fee_recipient: fee_recipient, + let payload_attributes = match fork { + ForkName::Merge => PayloadAttributes::new(timestamp, *prev_randao, fee_recipient, None), + // the withdrawals root is filled in by operations + ForkName::Capella => { + PayloadAttributes::new(timestamp, *prev_randao, fee_recipient, Some(vec![])) + } + ForkName::Base | ForkName::Altair => { + return Err(BlindedBlockProviderError::Custom(format!( + "Unsupported fork: {}", + fork + ))); + } }; self.el - .insert_proposer(slot, head_block_root, val_index, payload_attributes) + .insert_proposer(slot, head_block_root, val_index, payload_attributes.clone()) .await; let forkchoice_update_params = ForkchoiceUpdateParameters { @@ -308,54 +431,64 @@ impl<E: EthSpec> mev_build_rs::BlindedBlockProvider for MockBuilder<E> { .el .get_full_payload_caching::<BlindedPayload<E>>( head_execution_hash, - timestamp, - *prev_randao, - fee_recipient, + &payload_attributes, forkchoice_update_params, + fork, ) .await .map_err(convert_err)?
+ .to_payload() .to_execution_payload_header(); let json_payload = serde_json::to_string(&payload).map_err(convert_err)?; - let mut header: ServerPayloadHeader = - serde_json::from_str(json_payload.as_str()).map_err(convert_err)?; - - header.gas_limit = cached_data.gas_limit; - - let mut message = BuilderBid { - header, - value: ssz_rs::U256::default(), - public_key: self.builder_sk.public_key(), + let mut message = match fork { + ForkName::Capella => BuilderBid::Capella(BuilderBidCapella { + header: serde_json::from_str(json_payload.as_str()).map_err(convert_err)?, + value: to_ssz_rs(&Uint256::from(DEFAULT_BUILDER_PAYLOAD_VALUE_WEI))?, + public_key: self.builder_sk.public_key(), + }), + ForkName::Merge => BuilderBid::Bellatrix(BuilderBidBellatrix { + header: serde_json::from_str(json_payload.as_str()).map_err(convert_err)?, + value: to_ssz_rs(&Uint256::from(DEFAULT_BUILDER_PAYLOAD_VALUE_WEI))?, + public_key: self.builder_sk.public_key(), + }), + ForkName::Base | ForkName::Altair => { + return Err(BlindedBlockProviderError::Custom(format!( + "Unsupported fork: {}", + fork + ))) + } }; + *message.gas_limit_mut() = cached_data.gas_limit; self.apply_operations(&mut message)?; - let mut signature = - sign_builder_message(&mut message, &self.builder_sk, self.context.as_ref())?; + let mut signature = + message.sign_builder_message(&self.builder_sk, self.context.as_ref())?; if *self.invalidate_signatures.read() { signature = Signature::default(); } - let signed_bid = SignedBuilderBid { message, signature }; - Ok(signed_bid) + Ok(message.to_signed_bid(signature)) } async fn open_bid( &self, signed_block: &mut SignedBlindedBeaconBlock, ) -> Result { + let node = match signed_block { + SignedBlindedBeaconBlock::Bellatrix(block) => { + block.message.body.execution_payload_header.hash_tree_root() + } + SignedBlindedBeaconBlock::Capella(block) => { + block.message.body.execution_payload_header.hash_tree_root() + } + } + .map_err(convert_err)?; + let payload = self .el -
.get_payload_by_root(&from_ssz_rs( - &signed_block - .message - .body - .execution_payload_header - .hash_tree_root() - .map_err(convert_err)?, - )?) + .get_payload_by_root(&from_ssz_rs(&node)?) .ok_or_else(|| convert_err("missing payload for tx root"))?; let json_payload = serde_json::to_string(&payload).map_err(convert_err)?; diff --git a/beacon_node/execution_layer/src/test_utils/mock_execution_layer.rs b/beacon_node/execution_layer/src/test_utils/mock_execution_layer.rs index e9d4b2121be..2b512d8b1c2 100644 --- a/beacon_node/execution_layer/src/test_utils/mock_execution_layer.rs +++ b/beacon_node/execution_layer/src/test_utils/mock_execution_layer.rs @@ -9,7 +9,7 @@ use sensitive_url::SensitiveUrl; use task_executor::TaskExecutor; use tempfile::NamedTempFile; use tree_hash::TreeHash; -use types::{Address, ChainSpec, Epoch, EthSpec, FullPayload, Hash256, Uint256}; +use types::{Address, ChainSpec, Epoch, EthSpec, FullPayload, Hash256, MainnetEthSpec}; pub struct MockExecutionLayer<T: EthSpec> { pub server: MockServer<T>, @@ -20,40 +20,41 @@ pub struct MockExecutionLayer<T: EthSpec> { impl<T: EthSpec> MockExecutionLayer<T> { pub fn default_params(executor: TaskExecutor) -> Self { + let mut spec = MainnetEthSpec::default_spec(); + spec.terminal_total_difficulty = DEFAULT_TERMINAL_DIFFICULTY.into(); + spec.terminal_block_hash = ExecutionBlockHash::zero(); + spec.terminal_block_hash_activation_epoch = Epoch::new(0); Self::new( executor, - DEFAULT_TERMINAL_DIFFICULTY.into(), DEFAULT_TERMINAL_BLOCK, - ExecutionBlockHash::zero(), - Epoch::new(0), + None, + None, Some(JwtKey::from_slice(&DEFAULT_JWT_SECRET).unwrap()), + spec, None, ) } + #[allow(clippy::too_many_arguments)] pub fn new( executor: TaskExecutor, - terminal_total_difficulty: Uint256, terminal_block: u64, - terminal_block_hash: ExecutionBlockHash, - terminal_block_hash_activation_epoch: Epoch, + shanghai_time: Option<u64>, + builder_threshold: Option<u128>, jwt_key: Option<JwtKey>, + spec: ChainSpec, builder_url: Option<SensitiveUrl>, ) -> Self { let handle =
executor.handle().unwrap(); - let mut spec = T::default_spec(); - spec.terminal_total_difficulty = terminal_total_difficulty; - spec.terminal_block_hash = terminal_block_hash; - spec.terminal_block_hash_activation_epoch = terminal_block_hash_activation_epoch; - let jwt_key = jwt_key.unwrap_or_else(JwtKey::random); let server = MockServer::new( &handle, jwt_key, - terminal_total_difficulty, + spec.terminal_total_difficulty, terminal_block, - terminal_block_hash, + spec.terminal_block_hash, + shanghai_time, ); let url = SensitiveUrl::parse(&server.url()).unwrap(); @@ -67,7 +68,7 @@ impl MockExecutionLayer { builder_url, secret_files: vec![path], suggested_fee_recipient: Some(Address::repeat_byte(42)), - builder_profit_threshold: DEFAULT_BUILDER_THRESHOLD_WEI, + builder_profit_threshold: builder_threshold.unwrap_or(DEFAULT_BUILDER_THRESHOLD_WEI), ..Default::default() }; let el = @@ -98,21 +99,19 @@ impl MockExecutionLayer { justified_hash: None, finalized_hash: None, }; + let payload_attributes = PayloadAttributes::new( + timestamp, + prev_randao, + Address::repeat_byte(42), + // FIXME: think about how to handle different forks / withdrawals here.. + None, + ); // Insert a proposer to ensure the fork choice updated command works. 
let slot = Slot::new(0); let validator_index = 0; self.el - .insert_proposer( - slot, - head_block_root, - validator_index, - PayloadAttributes { - timestamp, - prev_randao, - suggested_fee_recipient: Address::repeat_byte(42), - }, - ) + .insert_proposer(slot, head_block_root, validator_index, payload_attributes) .await; self.el @@ -132,25 +131,30 @@ impl MockExecutionLayer { slot, chain_health: ChainHealth::Healthy, }; - let payload = self + let suggested_fee_recipient = self.el.get_suggested_fee_recipient(validator_index).await; + let payload_attributes = + PayloadAttributes::new(timestamp, prev_randao, suggested_fee_recipient, None); + let payload: ExecutionPayload = self .el .get_payload::>( parent_hash, - timestamp, - prev_randao, - validator_index, + &payload_attributes, forkchoice_update_params, builder_params, + // FIXME: do we need to consider other forks somehow? What about withdrawals? + ForkName::Merge, &self.spec, ) .await .unwrap() - .execution_payload; - let block_hash = payload.block_hash; - assert_eq!(payload.parent_hash, parent_hash); - assert_eq!(payload.block_number, block_number); - assert_eq!(payload.timestamp, timestamp); - assert_eq!(payload.prev_randao, prev_randao); + .to_payload() + .into(); + + let block_hash = payload.block_hash(); + assert_eq!(payload.parent_hash(), parent_hash); + assert_eq!(payload.block_number(), block_number); + assert_eq!(payload.timestamp(), timestamp); + assert_eq!(payload.prev_randao(), prev_randao); // Ensure the payload cache is empty. 
assert!(self @@ -162,25 +166,29 @@ impl MockExecutionLayer { slot, chain_health: ChainHealth::Healthy, }; + let suggested_fee_recipient = self.el.get_suggested_fee_recipient(validator_index).await; + let payload_attributes = + PayloadAttributes::new(timestamp, prev_randao, suggested_fee_recipient, None); let payload_header = self .el .get_payload::>( parent_hash, - timestamp, - prev_randao, - validator_index, + &payload_attributes, forkchoice_update_params, builder_params, + // FIXME: do we need to consider other forks somehow? What about withdrawals? + ForkName::Merge, &self.spec, ) .await .unwrap() - .execution_payload_header; - assert_eq!(payload_header.block_hash, block_hash); - assert_eq!(payload_header.parent_hash, parent_hash); - assert_eq!(payload_header.block_number, block_number); - assert_eq!(payload_header.timestamp, timestamp); - assert_eq!(payload_header.prev_randao, prev_randao); + .to_payload(); + + assert_eq!(payload_header.block_hash(), block_hash); + assert_eq!(payload_header.parent_hash(), parent_hash); + assert_eq!(payload_header.block_number(), block_number); + assert_eq!(payload_header.timestamp(), timestamp); + assert_eq!(payload_header.prev_randao(), prev_randao); // Ensure the payload cache has the correct payload. 
assert_eq!( diff --git a/beacon_node/execution_layer/src/test_utils/mod.rs b/beacon_node/execution_layer/src/test_utils/mod.rs index f18ecbe6226..9379a3c2389 100644 --- a/beacon_node/execution_layer/src/test_utils/mod.rs +++ b/beacon_node/execution_layer/src/test_utils/mod.rs @@ -22,6 +22,7 @@ use tokio::{runtime, sync::oneshot}; use types::{EthSpec, ExecutionBlockHash, Uint256}; use warp::{http::StatusCode, Filter, Rejection}; +use crate::EngineCapabilities; pub use execution_block_generator::{generate_pow_block, Block, ExecutionBlockGenerator}; pub use hook::Hook; pub use mock_builder::{Context as MockBuilderContext, MockBuilder, Operation, TestingBuilder}; @@ -31,6 +32,19 @@ pub const DEFAULT_TERMINAL_DIFFICULTY: u64 = 6400; pub const DEFAULT_TERMINAL_BLOCK: u64 = 64; pub const DEFAULT_JWT_SECRET: [u8; 32] = [42; 32]; pub const DEFAULT_BUILDER_THRESHOLD_WEI: u128 = 1_000_000_000_000_000_000; +pub const DEFAULT_MOCK_EL_PAYLOAD_VALUE_WEI: u128 = 10_000_000_000_000_000; +pub const DEFAULT_BUILDER_PAYLOAD_VALUE_WEI: u128 = 20_000_000_000_000_000; +pub const DEFAULT_ENGINE_CAPABILITIES: EngineCapabilities = EngineCapabilities { + new_payload_v1: true, + new_payload_v2: true, + forkchoice_updated_v1: true, + forkchoice_updated_v2: true, + get_payload_bodies_by_hash_v1: true, + get_payload_bodies_by_range_v1: true, + get_payload_v1: true, + get_payload_v2: true, + exchange_transition_configuration_v1: true, +}; mod execution_block_generator; mod handle_rpc; @@ -45,6 +59,7 @@ pub struct MockExecutionConfig { pub terminal_difficulty: Uint256, pub terminal_block: u64, pub terminal_block_hash: ExecutionBlockHash, + pub shanghai_time: Option, } impl Default for MockExecutionConfig { @@ -55,6 +70,7 @@ impl Default for MockExecutionConfig { terminal_block: DEFAULT_TERMINAL_BLOCK, terminal_block_hash: ExecutionBlockHash::zero(), server_config: Config::default(), + shanghai_time: None, } } } @@ -74,6 +90,7 @@ impl MockServer { DEFAULT_TERMINAL_DIFFICULTY.into(), 
DEFAULT_TERMINAL_BLOCK, ExecutionBlockHash::zero(), + None, // FIXME(capella): should this be the default? ) } @@ -84,11 +101,16 @@ impl MockServer { terminal_block, terminal_block_hash, server_config, + shanghai_time, } = config; let last_echo_request = Arc::new(RwLock::new(None)); let preloaded_responses = Arc::new(Mutex::new(vec![])); - let execution_block_generator = - ExecutionBlockGenerator::new(terminal_difficulty, terminal_block, terminal_block_hash); + let execution_block_generator = ExecutionBlockGenerator::new( + terminal_difficulty, + terminal_block, + terminal_block_hash, + shanghai_time, + ); let ctx: Arc> = Arc::new(Context { config: server_config, @@ -104,6 +126,7 @@ impl MockServer { hook: <_>::default(), new_payload_statuses: <_>::default(), fcu_payload_statuses: <_>::default(), + engine_capabilities: Arc::new(RwLock::new(DEFAULT_ENGINE_CAPABILITIES)), _phantom: PhantomData, }); @@ -134,12 +157,17 @@ impl MockServer { } } + pub fn set_engine_capabilities(&self, engine_capabilities: EngineCapabilities) { + *self.ctx.engine_capabilities.write() = engine_capabilities; + } + pub fn new( handle: &runtime::Handle, jwt_key: JwtKey, terminal_difficulty: Uint256, terminal_block: u64, terminal_block_hash: ExecutionBlockHash, + shanghai_time: Option, ) -> Self { Self::new_with_config( handle, @@ -149,6 +177,7 @@ impl MockServer { terminal_difficulty, terminal_block, terminal_block_hash, + shanghai_time, }, ) } @@ -452,6 +481,7 @@ pub struct Context { pub new_payload_statuses: Arc>>, pub fcu_payload_statuses: Arc>>, + pub engine_capabilities: Arc>, pub _phantom: PhantomData, } @@ -603,11 +633,11 @@ pub fn serve( "jsonrpc": JSONRPC_VERSION, "result": result }), - Err(message) => json!({ + Err((message, code)) => json!({ "id": id, "jsonrpc": JSONRPC_VERSION, "error": { - "code": -1234, // Junk error code. 
+ "code": code, "message": message } }), diff --git a/beacon_node/genesis/src/interop.rs b/beacon_node/genesis/src/interop.rs index d8c25baec80..122ca8eda6b 100644 --- a/beacon_node/genesis/src/interop.rs +++ b/beacon_node/genesis/src/interop.rs @@ -10,6 +10,20 @@ use types::{ pub const DEFAULT_ETH1_BLOCK_HASH: &[u8] = &[0x42; 32]; +pub fn bls_withdrawal_credentials(pubkey: &PublicKey, spec: &ChainSpec) -> Hash256 { + let mut credentials = hash(&pubkey.as_ssz_bytes()); + credentials[0] = spec.bls_withdrawal_prefix_byte; + Hash256::from_slice(&credentials) +} + +fn eth1_withdrawal_credentials(pubkey: &PublicKey, spec: &ChainSpec) -> Hash256 { + let fake_execution_address = &hash(&pubkey.as_ssz_bytes())[0..20]; + let mut credentials = [0u8; 32]; + credentials[0] = spec.eth1_address_withdrawal_prefix_byte; + credentials[12..].copy_from_slice(fake_execution_address); + Hash256::from_slice(&credentials) +} + /// Builds a genesis state as defined by the Eth2 interop procedure (see below). /// /// Reference: @@ -21,20 +35,75 @@ pub fn interop_genesis_state( execution_payload_header: Option>, spec: &ChainSpec, ) -> Result, String> { + let withdrawal_credentials = keypairs + .iter() + .map(|keypair| bls_withdrawal_credentials(&keypair.pk, spec)) + .collect::>(); + interop_genesis_state_with_withdrawal_credentials::( + keypairs, + &withdrawal_credentials, + genesis_time, + eth1_block_hash, + execution_payload_header, + spec, + ) +} + +// returns an interop genesis state except every other +// validator has eth1 withdrawal credentials +pub fn interop_genesis_state_with_eth1( + keypairs: &[Keypair], + genesis_time: u64, + eth1_block_hash: Hash256, + execution_payload_header: Option>, + spec: &ChainSpec, +) -> Result, String> { + let withdrawal_credentials = keypairs + .iter() + .enumerate() + .map(|(index, keypair)| { + if index % 2 == 0 { + bls_withdrawal_credentials(&keypair.pk, spec) + } else { + eth1_withdrawal_credentials(&keypair.pk, spec) + } + }) + .collect::>(); + 
interop_genesis_state_with_withdrawal_credentials::( + keypairs, + &withdrawal_credentials, + genesis_time, + eth1_block_hash, + execution_payload_header, + spec, + ) +} + +pub fn interop_genesis_state_with_withdrawal_credentials( + keypairs: &[Keypair], + withdrawal_credentials: &[Hash256], + genesis_time: u64, + eth1_block_hash: Hash256, + execution_payload_header: Option>, + spec: &ChainSpec, +) -> Result, String> { + if keypairs.len() != withdrawal_credentials.len() { + return Err(format!( + "wrong number of withdrawal credentials, expected: {}, got: {}", + keypairs.len(), + withdrawal_credentials.len() + )); + } + let eth1_timestamp = 2_u64.pow(40); let amount = spec.max_effective_balance; - let withdrawal_credentials = |pubkey: &PublicKey| { - let mut credentials = hash(&pubkey.as_ssz_bytes()); - credentials[0] = spec.bls_withdrawal_prefix_byte; - Hash256::from_slice(&credentials) - }; - let datas = keypairs .into_par_iter() - .map(|keypair| { + .zip(withdrawal_credentials.into_par_iter()) + .map(|(keypair, &withdrawal_credentials)| { let mut data = DepositData { - withdrawal_credentials: withdrawal_credentials(&keypair.pk), + withdrawal_credentials, pubkey: keypair.pk.clone().into(), amount, signature: Signature::empty().into(), @@ -133,4 +202,83 @@ mod test { "validator count should be correct" ); } + + #[test] + fn interop_state_with_eth1() { + let validator_count = 16; + let genesis_time = 42; + let spec = &TestEthSpec::default_spec(); + + let keypairs = generate_deterministic_keypairs(validator_count); + + let state = interop_genesis_state_with_eth1::( + &keypairs, + genesis_time, + Hash256::from_slice(DEFAULT_ETH1_BLOCK_HASH), + None, + spec, + ) + .expect("should build state"); + + assert_eq!( + state.eth1_data().block_hash, + Hash256::from_slice(&[0x42; 32]), + "eth1 block hash should be co-ordinated junk" + ); + + assert_eq!( + state.genesis_time(), + genesis_time, + "genesis time should be as specified" + ); + + for b in state.balances() { + 
assert_eq!( + *b, spec.max_effective_balance, + "validator balances should be max effective balance" + ); + } + + for (index, v) in state.validators().iter().enumerate() { + let creds = v.withdrawal_credentials.as_bytes(); + if index % 2 == 0 { + assert_eq!( + creds[0], spec.bls_withdrawal_prefix_byte, + "first byte of withdrawal creds should be bls prefix" + ); + assert_eq!( + &creds[1..], + &hash(&v.pubkey.as_ssz_bytes())[1..], + "rest of withdrawal creds should be pubkey hash" + ); + } else { + assert_eq!( + creds[0], spec.eth1_address_withdrawal_prefix_byte, + "first byte of withdrawal creds should be eth1 prefix" + ); + assert_eq!( + creds[1..12], + [0u8; 11], + "bytes [1:12] of withdrawal creds must be zero" + ); + assert_eq!( + &creds[12..], + &hash(&v.pubkey.as_ssz_bytes())[0..20], + "rest of withdrawal creds should be first 20 bytes of pubkey hash" + ) + } + } + + assert_eq!( + state.balances().len(), + validator_count, + "validator balances len should be correct" + ); + + assert_eq!( + state.validators().len(), + validator_count, + "validator count should be correct" + ); + } } diff --git a/beacon_node/genesis/src/lib.rs b/beacon_node/genesis/src/lib.rs index 1233d99fd31..3fb053bf880 100644 --- a/beacon_node/genesis/src/lib.rs +++ b/beacon_node/genesis/src/lib.rs @@ -5,5 +5,8 @@ mod interop; pub use eth1::Config as Eth1Config; pub use eth1::Eth1Endpoint; pub use eth1_genesis_service::{Eth1GenesisService, Statistics}; -pub use interop::{interop_genesis_state, DEFAULT_ETH1_BLOCK_HASH}; +pub use interop::{ + bls_withdrawal_credentials, interop_genesis_state, interop_genesis_state_with_eth1, + interop_genesis_state_with_withdrawal_credentials, DEFAULT_ETH1_BLOCK_HASH, +}; pub use types::test_utils::generate_deterministic_keypairs; diff --git a/beacon_node/http_api/Cargo.toml b/beacon_node/http_api/Cargo.toml index da8331b7ad8..a871e0c35f4 100644 --- a/beacon_node/http_api/Cargo.toml +++ b/beacon_node/http_api/Cargo.toml @@ -36,15 +36,18 @@ tree_hash = { 
version = "0.4.1", path = "../../consensus/tree_hash" } sysinfo = "0.26.5" system_health = { path = "../../common/system_health" } directory = { path = "../../common/directory" } +eth2_serde_utils = { version = "0.1.1", path = "../../consensus/serde_utils" } +operation_pool = { path = "../operation_pool" } +sensitive_url = { path = "../../common/sensitive_url" } +unused_port = {path = "../../common/unused_port"} +logging = { path = "../../common/logging" } +store = { path = "../store" } [dev-dependencies] -store = { path = "../store" } environment = { path = "../../lighthouse/environment" } -sensitive_url = { path = "../../common/sensitive_url" } -logging = { path = "../../common/logging" } serde_json = "1.0.58" proto_array = { path = "../../consensus/proto_array" } -unused_port = {path = "../../common/unused_port"} +genesis = { path = "../genesis" } [[test]] name = "bn_http_api_tests" diff --git a/beacon_node/http_api/src/attestation_performance.rs b/beacon_node/http_api/src/attestation_performance.rs index ca68d4d04cc..3e7d8d5e316 100644 --- a/beacon_node/http_api/src/attestation_performance.rs +++ b/beacon_node/http_api/src/attestation_performance.rs @@ -77,8 +77,8 @@ pub fn get_attestation_performance( // query is within permitted bounds to prevent potential OOM errors. if (end_epoch - start_epoch).as_usize() > MAX_REQUEST_RANGE_EPOCHS { return Err(custom_bad_request(format!( - "end_epoch must not exceed start_epoch by more than 100 epochs. start: {}, end: {}", - query.start_epoch, query.end_epoch + "end_epoch must not exceed start_epoch by more than {} epochs. 
start: {}, end: {}", + MAX_REQUEST_RANGE_EPOCHS, query.start_epoch, query.end_epoch ))); } diff --git a/beacon_node/http_api/src/attester_duties.rs b/beacon_node/http_api/src/attester_duties.rs index 9febae5b197..5c3e420839d 100644 --- a/beacon_node/http_api/src/attester_duties.rs +++ b/beacon_node/http_api/src/attester_duties.rs @@ -114,8 +114,10 @@ fn compute_historic_attester_duties( )?; (state, execution_optimistic) } else { - StateId::from_slot(request_epoch.start_slot(T::EthSpec::slots_per_epoch())) - .state(chain)? + let (state, execution_optimistic, _finalized) = + StateId::from_slot(request_epoch.start_slot(T::EthSpec::slots_per_epoch())) + .state(chain)?; + (state, execution_optimistic) }; // Sanity-check the state lookup. diff --git a/beacon_node/http_api/src/block_id.rs b/beacon_node/http_api/src/block_id.rs index 5c785fe6517..f1a42b87442 100644 --- a/beacon_node/http_api/src/block_id.rs +++ b/beacon_node/http_api/src/block_id.rs @@ -4,13 +4,15 @@ use eth2::types::BlockId as CoreBlockId; use std::fmt; use std::str::FromStr; use std::sync::Arc; -use types::{Hash256, SignedBeaconBlock, SignedBlindedBeaconBlock, Slot}; +use types::{EthSpec, Hash256, SignedBeaconBlock, SignedBlindedBeaconBlock, Slot}; /// Wraps `eth2::types::BlockId` and provides a simple way to obtain a block or root for a given /// `BlockId`. 
#[derive(Debug)] pub struct BlockId(pub CoreBlockId); +type Finalized = bool; + impl BlockId { pub fn from_slot(slot: Slot) -> Self { Self(CoreBlockId::Slot(slot)) @@ -24,7 +26,7 @@ impl BlockId { pub fn root( &self, chain: &BeaconChain, - ) -> Result<(Hash256, ExecutionOptimistic), warp::Rejection> { + ) -> Result<(Hash256, ExecutionOptimistic, Finalized), warp::Rejection> { match &self.0 { CoreBlockId::Head => { let (cached_head, execution_status) = chain @@ -34,22 +36,23 @@ impl BlockId { Ok(( cached_head.head_block_root(), execution_status.is_optimistic_or_invalid(), + false, )) } - CoreBlockId::Genesis => Ok((chain.genesis_block_root, false)), + CoreBlockId::Genesis => Ok((chain.genesis_block_root, false, true)), CoreBlockId::Finalized => { let finalized_checkpoint = chain.canonical_head.cached_head().finalized_checkpoint(); let (_slot, execution_optimistic) = checkpoint_slot_and_execution_optimistic(chain, finalized_checkpoint)?; - Ok((finalized_checkpoint.root, execution_optimistic)) + Ok((finalized_checkpoint.root, execution_optimistic, true)) } CoreBlockId::Justified => { let justified_checkpoint = chain.canonical_head.cached_head().justified_checkpoint(); let (_slot, execution_optimistic) = checkpoint_slot_and_execution_optimistic(chain, justified_checkpoint)?; - Ok((justified_checkpoint.root, execution_optimistic)) + Ok((justified_checkpoint.root, execution_optimistic, false)) } CoreBlockId::Slot(slot) => { let execution_optimistic = chain @@ -66,7 +69,14 @@ impl BlockId { )) }) })?; - Ok((root, execution_optimistic)) + let finalized = *slot + <= chain + .canonical_head + .cached_head() + .finalized_checkpoint() + .epoch + .start_slot(T::EthSpec::slots_per_epoch()); + Ok((root, execution_optimistic, finalized)) } CoreBlockId::Root(root) => { // This matches the behaviour of other consensus clients (e.g. Teku). 
@@ -88,7 +98,20 @@ impl BlockId { .is_optimistic_or_invalid_block(root) .map_err(BeaconChainError::ForkChoiceError) .map_err(warp_utils::reject::beacon_chain_error)?; - Ok((*root, execution_optimistic)) + let blinded_block = chain + .get_blinded_block(root) + .map_err(warp_utils::reject::beacon_chain_error)? + .ok_or_else(|| { + warp_utils::reject::custom_not_found(format!( + "beacon block with root {}", + root + )) + })?; + let block_slot = blinded_block.slot(); + let finalized = chain + .is_finalized_block(root, block_slot) + .map_err(warp_utils::reject::beacon_chain_error)?; + Ok((*root, execution_optimistic, finalized)) } else { Err(warp_utils::reject::custom_not_found(format!( "beacon block with root {}", @@ -103,7 +126,14 @@ impl BlockId { pub fn blinded_block( &self, chain: &BeaconChain, - ) -> Result<(SignedBlindedBeaconBlock, ExecutionOptimistic), warp::Rejection> { + ) -> Result< + ( + SignedBlindedBeaconBlock, + ExecutionOptimistic, + Finalized, + ), + warp::Rejection, + > { match &self.0 { CoreBlockId::Head => { let (cached_head, execution_status) = chain @@ -113,10 +143,11 @@ impl BlockId { Ok(( cached_head.snapshot.beacon_block.clone_as_blinded(), execution_status.is_optimistic_or_invalid(), + false, )) } CoreBlockId::Slot(slot) => { - let (root, execution_optimistic) = self.root(chain)?; + let (root, execution_optimistic, finalized) = self.root(chain)?; chain .get_blinded_block(&root) .map_err(warp_utils::reject::beacon_chain_error) @@ -128,7 +159,7 @@ impl BlockId { slot ))); } - Ok((block, execution_optimistic)) + Ok((block, execution_optimistic, finalized)) } None => Err(warp_utils::reject::custom_not_found(format!( "beacon block with root {}", @@ -137,7 +168,7 @@ impl BlockId { }) } _ => { - let (root, execution_optimistic) = self.root(chain)?; + let (root, execution_optimistic, finalized) = self.root(chain)?; let block = chain .get_blinded_block(&root) .map_err(warp_utils::reject::beacon_chain_error) @@ -149,7 +180,7 @@ impl BlockId { )) }) })?; 
- Ok((block, execution_optimistic)) + Ok((block, execution_optimistic, finalized)) } } } @@ -158,7 +189,14 @@ impl BlockId { pub async fn full_block( &self, chain: &BeaconChain, - ) -> Result<(Arc>, ExecutionOptimistic), warp::Rejection> { + ) -> Result< + ( + Arc>, + ExecutionOptimistic, + Finalized, + ), + warp::Rejection, + > { match &self.0 { CoreBlockId::Head => { let (cached_head, execution_status) = chain @@ -168,10 +206,11 @@ impl BlockId { Ok(( cached_head.snapshot.beacon_block.clone(), execution_status.is_optimistic_or_invalid(), + false, )) } CoreBlockId::Slot(slot) => { - let (root, execution_optimistic) = self.root(chain)?; + let (root, execution_optimistic, finalized) = self.root(chain)?; chain .get_block(&root) .await @@ -184,7 +223,7 @@ impl BlockId { slot ))); } - Ok((Arc::new(block), execution_optimistic)) + Ok((Arc::new(block), execution_optimistic, finalized)) } None => Err(warp_utils::reject::custom_not_found(format!( "beacon block with root {}", @@ -193,14 +232,14 @@ impl BlockId { }) } _ => { - let (root, execution_optimistic) = self.root(chain)?; + let (root, execution_optimistic, finalized) = self.root(chain)?; chain .get_block(&root) .await .map_err(warp_utils::reject::beacon_chain_error) .and_then(|block_opt| { block_opt - .map(|block| (Arc::new(block), execution_optimistic)) + .map(|block| (Arc::new(block), execution_optimistic, finalized)) .ok_or_else(|| { warp_utils::reject::custom_not_found(format!( "beacon block with root {}", diff --git a/beacon_node/http_api/src/block_rewards.rs b/beacon_node/http_api/src/block_rewards.rs index 05886a4d023..828be8e5760 100644 --- a/beacon_node/http_api/src/block_rewards.rs +++ b/beacon_node/http_api/src/block_rewards.rs @@ -4,7 +4,7 @@ use lru::LruCache; use slog::{debug, warn, Logger}; use state_processing::BlockReplayer; use std::sync::Arc; -use types::BlindedBeaconBlock; +use types::beacon_block::BlindedBeaconBlock; use warp_utils::reject::{ beacon_chain_error, beacon_state_error, 
custom_bad_request, custom_server_error, }; diff --git a/beacon_node/http_api/src/lib.rs b/beacon_node/http_api/src/lib.rs index 6cfdaf5db6a..d19187cb44e 100644 --- a/beacon_node/http_api/src/lib.rs +++ b/beacon_node/http_api/src/lib.rs @@ -1,4 +1,3 @@ -#![recursion_limit = "256"] //! This crate contains a HTTP server which serves the endpoints listed here: //! //! https://github.com/ethereum/beacon-APIs @@ -15,8 +14,11 @@ mod database; mod metrics; mod proposer_duties; mod publish_blocks; +mod standard_block_rewards; mod state_id; +mod sync_committee_rewards; mod sync_committees; +pub mod test_utils; mod ui; mod validator_inclusion; mod version; @@ -29,12 +31,15 @@ use beacon_chain::{ pub use block_id::BlockId; use directory::DEFAULT_ROOT_DIR; use eth2::types::{ - self as api_types, EndpointVersion, SkipRandaoVerification, ValidatorId, ValidatorStatus, + self as api_types, EndpointVersion, ForkChoice, ForkChoiceNode, SkipRandaoVerification, + ValidatorId, ValidatorStatus, }; use lighthouse_network::{types::SyncState, EnrExt, NetworkGlobals, PeerId, PubsubMessage}; use lighthouse_version::version_with_platform; use network::{NetworkMessage, NetworkSenders, ValidatorSubscriptionMessage}; +use operation_pool::ReceivedPreCapella; use parking_lot::RwLock; +use publish_blocks::ProvenancedBlock; use serde::{Deserialize, Serialize}; use slog::{crit, debug, error, info, warn, Logger}; use slot_clock::SlotClock; @@ -51,15 +56,15 @@ use system_health::observe_system_health_bn; use tokio::sync::mpsc::{Sender, UnboundedSender}; use tokio_stream::{wrappers::BroadcastStream, StreamExt}; use types::{ - Attestation, AttestationData, AttesterSlashing, BeaconStateError, BlindedPayload, - CommitteeCache, ConfigAndPreset, Epoch, EthSpec, ForkName, FullPayload, + Attestation, AttestationData, AttestationShufflingId, AttesterSlashing, BeaconStateError, + BlindedPayload, CommitteeCache, ConfigAndPreset, Epoch, EthSpec, ForkName, FullPayload, ProposerPreparationData, ProposerSlashing, 
RelativeEpoch, SignedAggregateAndProof, - SignedBeaconBlock, SignedBlindedBeaconBlock, SignedContributionAndProof, - SignedValidatorRegistrationData, SignedVoluntaryExit, Slot, SyncCommitteeMessage, - SyncContributionData, + SignedBeaconBlock, SignedBlindedBeaconBlock, SignedBlsToExecutionChange, + SignedContributionAndProof, SignedValidatorRegistrationData, SignedVoluntaryExit, Slot, + SyncCommitteeMessage, SyncContributionData, }; use version::{ - add_consensus_version_header, execution_optimistic_fork_versioned_response, + add_consensus_version_header, execution_optimistic_finalized_fork_versioned_response, fork_versioned_response, inconsistent_fork_rejection, unsupported_version_rejection, V1, V2, }; use warp::http::StatusCode; @@ -68,7 +73,8 @@ use warp::Reply; use warp::{http::Response, Filter}; use warp_utils::{ query::multi_key_query, - task::{blocking_json_task, blocking_task}, + task::{blocking_json_task, blocking_response_task}, + uor::UnifyingOrFilter, }; const API_PREFIX: &str = "eth"; @@ -517,12 +523,13 @@ pub fn serve( .and(warp::path::end()) .and_then(|state_id: StateId, chain: Arc>| { blocking_json_task(move || { - let (root, execution_optimistic) = state_id.root(&chain)?; - + let (root, execution_optimistic, finalized) = state_id.root(&chain)?; Ok(root) .map(api_types::RootData::from) .map(api_types::GenericResponse::from) - .map(|resp| resp.add_execution_optimistic(execution_optimistic)) + .map(|resp| { + resp.add_execution_optimistic_finalized(execution_optimistic, finalized) + }) }) }); @@ -533,11 +540,12 @@ pub fn serve( .and(warp::path::end()) .and_then(|state_id: StateId, chain: Arc>| { blocking_json_task(move || { - let (fork, execution_optimistic) = - state_id.fork_and_execution_optimistic(&chain)?; - Ok(api_types::ExecutionOptimisticResponse { + let (fork, execution_optimistic, finalized) = + state_id.fork_and_execution_optimistic_and_finalized(&chain)?; + Ok(api_types::ExecutionOptimisticFinalizedResponse { data: fork, 
execution_optimistic: Some(execution_optimistic), + finalized: Some(finalized), }) }) }); @@ -549,23 +557,26 @@ pub fn serve( .and(warp::path::end()) .and_then(|state_id: StateId, chain: Arc>| { blocking_json_task(move || { - let (data, execution_optimistic) = state_id.map_state_and_execution_optimistic( - &chain, - |state, execution_optimistic| { - Ok(( - api_types::FinalityCheckpointsData { - previous_justified: state.previous_justified_checkpoint(), - current_justified: state.current_justified_checkpoint(), - finalized: state.finalized_checkpoint(), - }, - execution_optimistic, - )) - }, - )?; + let (data, execution_optimistic, finalized) = state_id + .map_state_and_execution_optimistic_and_finalized( + &chain, + |state, execution_optimistic, finalized| { + Ok(( + api_types::FinalityCheckpointsData { + previous_justified: state.previous_justified_checkpoint(), + current_justified: state.current_justified_checkpoint(), + finalized: state.finalized_checkpoint(), + }, + execution_optimistic, + finalized, + )) + }, + )?; - Ok(api_types::ExecutionOptimisticResponse { + Ok(api_types::ExecutionOptimisticFinalizedResponse { data, execution_optimistic: Some(execution_optimistic), + finalized: Some(finalized), }) }) }); @@ -582,10 +593,10 @@ pub fn serve( query_res: Result| { blocking_json_task(move || { let query = query_res?; - let (data, execution_optimistic) = state_id - .map_state_and_execution_optimistic( + let (data, execution_optimistic, finalized) = state_id + .map_state_and_execution_optimistic_and_finalized( &chain, - |state, execution_optimistic| { + |state, execution_optimistic, finalized| { Ok(( state .validators() @@ -613,13 +624,15 @@ pub fn serve( }) .collect::>(), execution_optimistic, + finalized, )) }, )?; - Ok(api_types::ExecutionOptimisticResponse { + Ok(api_types::ExecutionOptimisticFinalizedResponse { data, execution_optimistic: Some(execution_optimistic), + finalized: Some(finalized), }) }) }, @@ -637,10 +650,10 @@ pub fn serve( query_res: Result| 
{ blocking_json_task(move || { let query = query_res?; - let (data, execution_optimistic) = state_id - .map_state_and_execution_optimistic( + let (data, execution_optimistic, finalized) = state_id + .map_state_and_execution_optimistic_and_finalized( &chain, - |state, execution_optimistic| { + |state, execution_optimistic, finalized| { let epoch = state.current_epoch(); let far_future_epoch = chain.spec.far_future_epoch; @@ -690,13 +703,15 @@ pub fn serve( }) .collect::>(), execution_optimistic, + finalized, )) }, )?; - Ok(api_types::ExecutionOptimisticResponse { + Ok(api_types::ExecutionOptimisticFinalizedResponse { data, execution_optimistic: Some(execution_optimistic), + finalized: Some(finalized), }) }) }, @@ -715,10 +730,10 @@ pub fn serve( .and_then( |state_id: StateId, chain: Arc>, validator_id: ValidatorId| { blocking_json_task(move || { - let (data, execution_optimistic) = state_id - .map_state_and_execution_optimistic( + let (data, execution_optimistic, finalized) = state_id + .map_state_and_execution_optimistic_and_finalized( &chain, - |state, execution_optimistic| { + |state, execution_optimistic, finalized| { let index_opt = match &validator_id { ValidatorId::PublicKey(pubkey) => { state.validators().iter().position(|v| v.pubkey == *pubkey) @@ -752,13 +767,15 @@ pub fn serve( )) })?, execution_optimistic, + finalized, )) }, )?; - Ok(api_types::ExecutionOptimisticResponse { + Ok(api_types::ExecutionOptimisticFinalizedResponse { data, execution_optimistic: Some(execution_optimistic), + finalized: Some(finalized), }) }) }, @@ -773,46 +790,119 @@ pub fn serve( .and_then( |state_id: StateId, chain: Arc>, query: api_types::CommitteesQuery| { blocking_json_task(move || { - let (data, execution_optimistic) = state_id - .map_state_and_execution_optimistic( + let (data, execution_optimistic, finalized) = state_id + .map_state_and_execution_optimistic_and_finalized( &chain, - |state, execution_optimistic| { + |state, execution_optimistic, finalized| { let 
current_epoch = state.current_epoch(); let epoch = query.epoch.unwrap_or(current_epoch); - let committee_cache = - match RelativeEpoch::from_epoch(current_epoch, epoch) { - Ok(relative_epoch) - if state - .committee_cache_is_initialized(relative_epoch) => - { - state.committee_cache(relative_epoch).map(Cow::Borrowed) - } - _ => CommitteeCache::initialized(state, epoch, &chain.spec) + // Attempt to obtain the committee_cache from the beacon chain + let decision_slot = (epoch.saturating_sub(2u64)) + .end_slot(T::EthSpec::slots_per_epoch()); + // Find the decision block and skip to another method on any kind + // of failure + let shuffling_id = if let Ok(Some(shuffling_decision_block)) = + chain.block_root_at_slot(decision_slot, WhenSlotSkipped::Prev) + { + Some(AttestationShufflingId { + shuffling_epoch: epoch, + shuffling_decision_block, + }) + } else { + None + }; + + // Attempt to read from the chain cache if there exists a + // shuffling_id + let maybe_cached_shuffling = if let Some(shuffling_id) = + shuffling_id.as_ref() + { + chain + .shuffling_cache + .try_write_for(std::time::Duration::from_secs(1)) + .and_then(|mut cache_write| cache_write.get(shuffling_id)) + .and_then(|cache_item| cache_item.wait().ok()) + } else { + None + }; + + let committee_cache = if let Some(ref shuffling) = + maybe_cached_shuffling + { + Cow::Borrowed(&**shuffling) + } else { + let possibly_built_cache = + match RelativeEpoch::from_epoch(current_epoch, epoch) { + Ok(relative_epoch) + if state.committee_cache_is_initialized( + relative_epoch, + ) => + { + state + .committee_cache(relative_epoch) + .map(Cow::Borrowed) + } + _ => CommitteeCache::initialized( + state, + epoch, + &chain.spec, + ) .map(Cow::Owned), - } - .map_err(|e| match e { - BeaconStateError::EpochOutOfBounds => { - let max_sprp = - T::EthSpec::slots_per_historical_root() as u64; - let first_subsequent_restore_point_slot = ((epoch - .start_slot(T::EthSpec::slots_per_epoch()) - / max_sprp) - + 1) - * max_sprp; - if 
epoch < current_epoch { - warp_utils::reject::custom_bad_request(format!( - "epoch out of bounds, try state at slot {}", - first_subsequent_restore_point_slot, - )) - } else { - warp_utils::reject::custom_bad_request( - "epoch out of bounds, too far in future".into(), - ) + } + .map_err(|e| { + match e { + BeaconStateError::EpochOutOfBounds => { + let max_sprp = + T::EthSpec::slots_per_historical_root() + as u64; + let first_subsequent_restore_point_slot = + ((epoch.start_slot( + T::EthSpec::slots_per_epoch(), + ) / max_sprp) + + 1) + * max_sprp; + if epoch < current_epoch { + warp_utils::reject::custom_bad_request( + format!( + "epoch out of bounds, \ + try state at slot {}", + first_subsequent_restore_point_slot, + ), + ) + } else { + warp_utils::reject::custom_bad_request( + "epoch out of bounds, \ + too far in future" + .into(), + ) + } + } + _ => { + warp_utils::reject::beacon_chain_error(e.into()) + } + } + })?; + + // Attempt to write to the beacon cache (only if the cache + // size is not the default value). + if chain.config.shuffling_cache_size + != beacon_chain::shuffling_cache::DEFAULT_CACHE_SIZE + { + if let Some(shuffling_id) = shuffling_id { + if let Some(mut cache_write) = chain + .shuffling_cache + .try_write_for(std::time::Duration::from_secs(1)) + { + cache_write.insert_committee_cache( + shuffling_id, + &*possibly_built_cache, + ); } } - _ => warp_utils::reject::beacon_chain_error(e.into()), - })?; + } + possibly_built_cache + }; // Use either the supplied slot or all slots in the epoch. 
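The shuffling-cache hunk above keys the cache on a "decision block" at the end of `epoch - 2`. A minimal sketch of that arithmetic, using a hypothetical `decision_slot` helper in place of the diff's `(epoch.saturating_sub(2u64)).end_slot(T::EthSpec::slots_per_epoch())`, and assuming the mainnet value of 32 slots per epoch:

```rust
// Hypothetical helper mirroring the decision-slot computation in the hunk
// above. The end slot of epoch `e` is `(e + 1) * slots_per_epoch - 1`;
// `saturating_sub` keeps epochs 0 and 1 from underflowing.
fn decision_slot(epoch: u64, slots_per_epoch: u64) -> u64 {
    let decision_epoch = epoch.saturating_sub(2);
    (decision_epoch + 1) * slots_per_epoch - 1
}

fn main() {
    // The shuffling for epoch 10 is decided at the last slot of epoch 8.
    assert_eq!(decision_slot(10, 32), 287);
    // Early epochs saturate to epoch 0's end slot.
    assert_eq!(decision_slot(0, 32), 31);
    assert_eq!(decision_slot(1, 32), 31);
}
```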
let slots = @@ -859,12 +949,13 @@ pub fn serve( } } - Ok((response, execution_optimistic)) + Ok((response, execution_optimistic, finalized)) }, )?; - Ok(api_types::ExecutionOptimisticResponse { + Ok(api_types::ExecutionOptimisticFinalizedResponse { data, execution_optimistic: Some(execution_optimistic), + finalized: Some(finalized), }) }) }, @@ -881,10 +972,10 @@ pub fn serve( chain: Arc>, query: api_types::SyncCommitteesQuery| { blocking_json_task(move || { - let (sync_committee, execution_optimistic) = state_id - .map_state_and_execution_optimistic( + let (sync_committee, execution_optimistic, finalized) = state_id + .map_state_and_execution_optimistic_and_finalized( &chain, - |state, execution_optimistic| { + |state, execution_optimistic, finalized| { let current_epoch = state.current_epoch(); let epoch = query.epoch.unwrap_or(current_epoch); Ok(( @@ -894,9 +985,10 @@ pub fn serve( .map_err(|e| match e { BeaconStateError::SyncCommitteeNotKnown { .. } => { warp_utils::reject::custom_bad_request(format!( - "state at epoch {} has no sync committee for epoch {}", - current_epoch, epoch - )) + "state at epoch {} has no \ + sync committee for epoch {}", + current_epoch, epoch + )) } BeaconStateError::IncorrectStateVariant => { warp_utils::reject::custom_bad_request(format!( @@ -907,6 +999,7 @@ pub fn serve( e => warp_utils::reject::beacon_state_error(e), })?, execution_optimistic, + finalized, )) }, )?; @@ -928,7 +1021,7 @@ pub fn serve( }; Ok(api_types::GenericResponse::from(response) - .add_execution_optimistic(execution_optimistic)) + .add_execution_optimistic_finalized(execution_optimistic, finalized)) }) }, ); @@ -942,23 +1035,23 @@ pub fn serve( .and_then( |state_id: StateId, chain: Arc>, query: api_types::RandaoQuery| { blocking_json_task(move || { - let (randao, execution_optimistic) = state_id - .map_state_and_execution_optimistic( + let (randao, execution_optimistic, finalized) = state_id + .map_state_and_execution_optimistic_and_finalized( &chain, - |state, 
execution_optimistic| { + |state, execution_optimistic, finalized| { let epoch = query.epoch.unwrap_or_else(|| state.current_epoch()); let randao = *state.get_randao_mix(epoch).map_err(|e| { warp_utils::reject::custom_bad_request(format!( "epoch out of range: {e:?}" )) })?; - Ok((randao, execution_optimistic)) + Ok((randao, execution_optimistic, finalized)) }, )?; Ok( api_types::GenericResponse::from(api_types::RandaoMix { randao }) - .add_execution_optimistic(execution_optimistic), + .add_execution_optimistic_finalized(execution_optimistic, finalized), ) }) }, @@ -980,72 +1073,73 @@ pub fn serve( .and_then( |query: api_types::HeadersQuery, chain: Arc>| { blocking_json_task(move || { - let (root, block, execution_optimistic) = match (query.slot, query.parent_root) - { - // No query parameters, return the canonical head block. - (None, None) => { - let (cached_head, execution_status) = chain - .canonical_head - .head_and_execution_status() - .map_err(warp_utils::reject::beacon_chain_error)?; - ( - cached_head.head_block_root(), - cached_head.snapshot.beacon_block.clone_as_blinded(), - execution_status.is_optimistic_or_invalid(), - ) - } - // Only the parent root parameter, do a forwards-iterator lookup. - (None, Some(parent_root)) => { - let (parent, execution_optimistic) = - BlockId::from_root(parent_root).blinded_block(&chain)?; - let (root, _slot) = chain - .forwards_iter_block_roots(parent.slot()) - .map_err(warp_utils::reject::beacon_chain_error)? - // Ignore any skip-slots immediately following the parent. - .find(|res| { - res.as_ref().map_or(false, |(root, _)| *root != parent_root) - }) - .transpose() - .map_err(warp_utils::reject::beacon_chain_error)? - .ok_or_else(|| { - warp_utils::reject::custom_not_found(format!( - "child of block with root {}", - parent_root - )) - })?; + let (root, block, execution_optimistic, finalized) = + match (query.slot, query.parent_root) { + // No query parameters, return the canonical head block. 
+ (None, None) => { + let (cached_head, execution_status) = chain + .canonical_head + .head_and_execution_status() + .map_err(warp_utils::reject::beacon_chain_error)?; + ( + cached_head.head_block_root(), + cached_head.snapshot.beacon_block.clone_as_blinded(), + execution_status.is_optimistic_or_invalid(), + false, + ) + } + // Only the parent root parameter, do a forwards-iterator lookup. + (None, Some(parent_root)) => { + let (parent, execution_optimistic, _parent_finalized) = + BlockId::from_root(parent_root).blinded_block(&chain)?; + let (root, _slot) = chain + .forwards_iter_block_roots(parent.slot()) + .map_err(warp_utils::reject::beacon_chain_error)? + // Ignore any skip-slots immediately following the parent. + .find(|res| { + res.as_ref().map_or(false, |(root, _)| *root != parent_root) + }) + .transpose() + .map_err(warp_utils::reject::beacon_chain_error)? + .ok_or_else(|| { + warp_utils::reject::custom_not_found(format!( + "child of block with root {}", + parent_root + )) + })?; - BlockId::from_root(root) - .blinded_block(&chain) - // Ignore this `execution_optimistic` since the first value has - // more information about the original request. - .map(|(block, _execution_optimistic)| { - (root, block, execution_optimistic) - })? - } - // Slot is supplied, search by slot and optionally filter by - // parent root. - (Some(slot), parent_root_opt) => { - let (root, execution_optimistic) = - BlockId::from_slot(slot).root(&chain)?; - // Ignore the second `execution_optimistic`, the first one is the - // most relevant since it knows that we queried by slot. - let (block, _execution_optimistic) = - BlockId::from_root(root).blinded_block(&chain)?; - - // If the parent root was supplied, check that it matches the block - // obtained via a slot lookup. 
- if let Some(parent_root) = parent_root_opt { - if block.parent_root() != parent_root { - return Err(warp_utils::reject::custom_not_found(format!( - "no canonical block at slot {} with parent root {}", - slot, parent_root - ))); - } + BlockId::from_root(root) + .blinded_block(&chain) + // Ignore this `execution_optimistic` since the first value has + // more information about the original request. + .map(|(block, _execution_optimistic, finalized)| { + (root, block, execution_optimistic, finalized) + })? } + // Slot is supplied, search by slot and optionally filter by + // parent root. + (Some(slot), parent_root_opt) => { + let (root, execution_optimistic, finalized) = + BlockId::from_slot(slot).root(&chain)?; + // Ignore the second `execution_optimistic`, the first one is the + // most relevant since it knows that we queried by slot. + let (block, _execution_optimistic, _finalized) = + BlockId::from_root(root).blinded_block(&chain)?; + + // If the parent root was supplied, check that it matches the block + // obtained via a slot lookup. 
+ if let Some(parent_root) = parent_root_opt { + if block.parent_root() != parent_root { + return Err(warp_utils::reject::custom_not_found(format!( + "no canonical block at slot {} with parent root {}", + slot, parent_root + ))); + } + } - (root, block, execution_optimistic) - } - }; + (root, block, execution_optimistic, finalized) + } + }; let data = api_types::BlockHeaderData { root, @@ -1057,7 +1151,7 @@ pub fn serve( }; Ok(api_types::GenericResponse::from(vec![data]) - .add_execution_optimistic(execution_optimistic)) + .add_execution_optimistic_finalized(execution_optimistic, finalized)) }) }, ); @@ -1075,10 +1169,10 @@ pub fn serve( .and(chain_filter.clone()) .and_then(|block_id: BlockId, chain: Arc>| { blocking_json_task(move || { - let (root, execution_optimistic) = block_id.root(&chain)?; + let (root, execution_optimistic, finalized) = block_id.root(&chain)?; // Ignore the second `execution_optimistic` since the first one has more // information about the original request. - let (block, _execution_optimistic) = + let (block, _execution_optimistic, _finalized) = BlockId::from_root(root).blinded_block(&chain)?; let canonical = chain @@ -1095,8 +1189,9 @@ pub fn serve( }, }; - Ok(api_types::ExecutionOptimisticResponse { + Ok(api_types::ExecutionOptimisticFinalizedResponse { execution_optimistic: Some(execution_optimistic), + finalized: Some(finalized), data, }) }) @@ -1120,9 +1215,15 @@ pub fn serve( chain: Arc>, network_tx: UnboundedSender>, log: Logger| async move { - publish_blocks::publish_block(None, block, chain, &network_tx, log) - .await - .map(|()| warp::reply()) + publish_blocks::publish_block( + None, + ProvenancedBlock::Local(block), + chain, + &network_tx, + log, + ) + .await + .map(|()| warp::reply().into_response()) }, ); @@ -1146,7 +1247,7 @@ pub fn serve( log: Logger| async move { publish_blocks::publish_blinded_block(block, chain, &network_tx, log) .await - .map(|()| warp::reply()) + .map(|()| warp::reply().into_response()) }, ); @@ -1179,7 
+1280,8 @@ pub fn serve( chain: Arc>, accept_header: Option| { async move { - let (block, execution_optimistic) = block_id.full_block(&chain).await?; + let (block, execution_optimistic, finalized) = + block_id.full_block(&chain).await?; let fork_name = block .fork_name(&chain.spec) .map_err(inconsistent_fork_rejection)?; @@ -1195,10 +1297,11 @@ pub fn serve( e )) }), - _ => execution_optimistic_fork_versioned_response( + _ => execution_optimistic_finalized_fork_versioned_response( endpoint_version, fork_name, execution_optimistic, + finalized, block, ) .map(|res| warp::reply::json(&res).into_response()), @@ -1215,12 +1318,11 @@ pub fn serve( .and(warp::path::end()) .and_then(|block_id: BlockId, chain: Arc>| { blocking_json_task(move || { - let (block, execution_optimistic) = block_id.blinded_block(&chain)?; - + let (block, execution_optimistic, finalized) = block_id.blinded_block(&chain)?; Ok(api_types::GenericResponse::from(api_types::RootData::from( block.canonical_root(), )) - .add_execution_optimistic(execution_optimistic)) + .add_execution_optimistic_finalized(execution_optimistic, finalized)) }) }); @@ -1231,11 +1333,10 @@ pub fn serve( .and(warp::path::end()) .and_then(|block_id: BlockId, chain: Arc>| { blocking_json_task(move || { - let (block, execution_optimistic) = block_id.blinded_block(&chain)?; - + let (block, execution_optimistic, finalized) = block_id.blinded_block(&chain)?; Ok( api_types::GenericResponse::from(block.message().body().attestations().clone()) - .add_execution_optimistic(execution_optimistic), + .add_execution_optimistic_finalized(execution_optimistic, finalized), ) }) }); @@ -1252,8 +1353,9 @@ pub fn serve( |block_id: BlockId, chain: Arc>, accept_header: Option| { - blocking_task(move || { - let (block, execution_optimistic) = block_id.blinded_block(&chain)?; + blocking_response_task(move || { + let (block, execution_optimistic, finalized) = + block_id.blinded_block(&chain)?; let fork_name = block .fork_name(&chain.spec) 
.map_err(inconsistent_fork_rejection)?; @@ -1271,10 +1373,11 @@ pub fn serve( }), _ => { // Post as a V2 endpoint so we return the fork version. - execution_optimistic_fork_versioned_response( + execution_optimistic_finalized_fork_versioned_response( V2, fork_name, execution_optimistic, + finalized, block, ) .map(|res| warp::reply::json(&res).into_response()) @@ -1652,6 +1755,109 @@ pub fn serve( }, ); + // GET beacon/pool/bls_to_execution_changes + let get_beacon_pool_bls_to_execution_changes = beacon_pool_path + .clone() + .and(warp::path("bls_to_execution_changes")) + .and(warp::path::end()) + .and_then(|chain: Arc>| { + blocking_json_task(move || { + let address_changes = chain.op_pool.get_all_bls_to_execution_changes(); + Ok(api_types::GenericResponse::from(address_changes)) + }) + }); + + // POST beacon/pool/bls_to_execution_changes + let post_beacon_pool_bls_to_execution_changes = beacon_pool_path + .clone() + .and(warp::path("bls_to_execution_changes")) + .and(warp::path::end()) + .and(warp::body::json()) + .and(network_tx_filter.clone()) + .and(log_filter.clone()) + .and_then( + |chain: Arc>, + address_changes: Vec, + network_tx: UnboundedSender>, + log: Logger| { + blocking_json_task(move || { + let mut failures = vec![]; + + for (index, address_change) in address_changes.into_iter().enumerate() { + let validator_index = address_change.message.validator_index; + + match chain.verify_bls_to_execution_change_for_http_api(address_change) { + Ok(ObservationOutcome::New(verified_address_change)) => { + let validator_index = + verified_address_change.as_inner().message.validator_index; + let address = verified_address_change + .as_inner() + .message + .to_execution_address; + + // New to P2P *and* op pool, gossip immediately if post-Capella. 
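The `bls_to_execution_changes` handler below gossips a verified change only when the current slot is post-Capella; otherwise it is held in the op pool as `ReceivedPreCapella::Yes`. A pared-down sketch of that decision, with stub types standing in for the real `BeaconChain` query (names mirror the diff, everything else is hypothetical):

```rust
// Sketch of the pre/post-Capella gossip decision. `None` (fork epoch
// unknown or unscheduled) is treated conservatively as pre-Capella,
// matching the diff's `unwrap_or(false)`.
#[derive(Debug, PartialEq, Clone, Copy)]
enum ReceivedPreCapella {
    Yes,
    No,
}

fn classify(current_slot_is_post_capella: Option<bool>) -> ReceivedPreCapella {
    if current_slot_is_post_capella.unwrap_or(false) {
        ReceivedPreCapella::No
    } else {
        ReceivedPreCapella::Yes
    }
}

// Publish to gossip only when the change arrived at or after Capella.
fn should_publish(received: ReceivedPreCapella) -> bool {
    matches!(received, ReceivedPreCapella::No)
}

fn main() {
    assert!(should_publish(classify(Some(true))));
    assert!(!should_publish(classify(Some(false))));
    assert!(!should_publish(classify(None)));
}
```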
+ let received_pre_capella = if chain.current_slot_is_post_capella().unwrap_or(false) { + ReceivedPreCapella::No + } else { + ReceivedPreCapella::Yes + }; + if matches!(received_pre_capella, ReceivedPreCapella::No) { + publish_pubsub_message( + &network_tx, + PubsubMessage::BlsToExecutionChange(Box::new( + verified_address_change.as_inner().clone(), + )), + )?; + } + + // Import to op pool (may return `false` if there's a race). + let imported = + chain.import_bls_to_execution_change(verified_address_change, received_pre_capella); + + info!( + log, + "Processed BLS to execution change"; + "validator_index" => validator_index, + "address" => ?address, + "published" => matches!(received_pre_capella, ReceivedPreCapella::No), + "imported" => imported, + ); + } + Ok(ObservationOutcome::AlreadyKnown) => { + debug!( + log, + "BLS to execution change already known"; + "validator_index" => validator_index, + ); + } + Err(e) => { + warn!( + log, + "Invalid BLS to execution change"; + "validator_index" => validator_index, + "reason" => ?e, + "source" => "HTTP", + ); + failures.push(api_types::Failure::new( + index, + format!("invalid: {e:?}"), + )); + } + } + } + + if failures.is_empty() { + Ok(()) + } else { + Err(warp_utils::reject::indexed_bad_request( + "some BLS to execution changes failed to verify".into(), + failures, + )) + } + }) + }, + ); + // GET beacon/deposit_snapshot let get_beacon_deposit_snapshot = eth_v1 .and(warp::path("beacon")) @@ -1661,7 +1867,7 @@ pub fn serve( .and(eth1_service_filter.clone()) .and_then( |accept_header: Option, eth1_service: eth1::Service| { - blocking_task(move || match accept_header { + blocking_response_task(move || match accept_header { Some(api_types::Accept::Json) | None => { let snapshot = eth1_service.get_deposit_snapshot(); Ok( @@ -1699,6 +1905,118 @@ pub fn serve( }, ); + let beacon_rewards_path = eth_v1 + .and(warp::path("beacon")) + .and(warp::path("rewards")) + .and(chain_filter.clone()); + + // GET 
beacon/rewards/blocks/{block_id} + let get_beacon_rewards_blocks = beacon_rewards_path + .clone() + .and(warp::path("blocks")) + .and(block_id_or_err) + .and(warp::path::end()) + .and_then(|chain: Arc>, block_id: BlockId| { + blocking_json_task(move || { + let (rewards, execution_optimistic, finalized) = + standard_block_rewards::compute_beacon_block_rewards(chain, block_id)?; + Ok(rewards) + .map(api_types::GenericResponse::from) + .map(|resp| { + resp.add_execution_optimistic_finalized(execution_optimistic, finalized) + }) + }) + }); + + /* + * beacon/rewards + */ + + let beacon_rewards_path = eth_v1 + .and(warp::path("beacon")) + .and(warp::path("rewards")) + .and(chain_filter.clone()); + + // POST beacon/rewards/attestations/{epoch} + let post_beacon_rewards_attestations = beacon_rewards_path + .clone() + .and(warp::path("attestations")) + .and(warp::path::param::()) + .and(warp::path::end()) + .and(warp::body::json()) + .and(log_filter.clone()) + .and_then( + |chain: Arc>, + epoch: Epoch, + validators: Vec, + log: Logger| { + blocking_json_task(move || { + let attestation_rewards = chain + .compute_attestation_rewards(epoch, validators, log) + .map_err(|e| match e { + BeaconChainError::MissingBeaconState(root) => { + warp_utils::reject::custom_not_found(format!( + "missing state {root:?}", + )) + } + BeaconChainError::NoStateForSlot(slot) => { + warp_utils::reject::custom_not_found(format!( + "missing state at slot {slot}" + )) + } + BeaconChainError::BeaconStateError( + BeaconStateError::UnknownValidator(validator_index), + ) => warp_utils::reject::custom_bad_request(format!( + "validator is unknown: {validator_index}" + )), + BeaconChainError::ValidatorPubkeyUnknown(pubkey) => { + warp_utils::reject::custom_bad_request(format!( + "validator pubkey is unknown: {pubkey:?}" + )) + } + e => warp_utils::reject::custom_server_error(format!( + "unexpected error: {:?}", + e + )), + })?; + let execution_optimistic = + 
chain.is_optimistic_or_invalid_head().unwrap_or_default(); + + Ok(attestation_rewards) + .map(api_types::GenericResponse::from) + .map(|resp| resp.add_execution_optimistic(execution_optimistic)) + }) + }, + ); + + // POST beacon/rewards/sync_committee/{block_id} + let post_beacon_rewards_sync_committee = beacon_rewards_path + .clone() + .and(warp::path("sync_committee")) + .and(block_id_or_err) + .and(warp::path::end()) + .and(warp::body::json()) + .and(log_filter.clone()) + .and_then( + |chain: Arc>, + block_id: BlockId, + validators: Vec, + log: Logger| { + blocking_json_task(move || { + let (rewards, execution_optimistic, finalized) = + sync_committee_rewards::compute_sync_committee_rewards( + chain, block_id, validators, log, + )?; + + Ok(rewards) + .map(api_types::GenericResponse::from) + .map(|resp| { + resp.add_execution_optimistic_finalized(execution_optimistic, finalized) + }) + }) + }, + ); + /* * config */ @@ -1772,12 +2090,12 @@ pub fn serve( state_id: StateId, accept_header: Option, chain: Arc>| { - blocking_task(move || match accept_header { + blocking_response_task(move || match accept_header { Some(api_types::Accept::Ssz) => { // We can ignore the optimistic status for the "fork" since it's a // specification constant that doesn't change across competing heads of the // beacon chain. 
- let (state, _execution_optimistic) = state_id.state(&chain)?; + let (state, _execution_optimistic, _finalized) = state_id.state(&chain)?; let fork_name = state .fork_name(&chain.spec) .map_err(inconsistent_fork_rejection)?; @@ -1785,7 +2103,9 @@ pub fn serve( .status(200) .header("Content-Type", "application/octet-stream") .body(state.as_ssz_bytes().into()) - .map(|resp| add_consensus_version_header(resp, fork_name)) + .map(|resp: warp::reply::Response| { + add_consensus_version_header(resp, fork_name) + }) .map_err(|e| { warp_utils::reject::custom_server_error(format!( "failed to create response: {}", @@ -1793,16 +2113,17 @@ pub fn serve( )) }) } - _ => state_id.map_state_and_execution_optimistic( + _ => state_id.map_state_and_execution_optimistic_and_finalized( &chain, - |state, execution_optimistic| { + |state, execution_optimistic, finalized| { let fork_name = state .fork_name(&chain.spec) .map_err(inconsistent_fork_rejection)?; - let res = execution_optimistic_fork_versioned_response( + let res = execution_optimistic_finalized_fork_versioned_response( endpoint_version, fork_name, execution_optimistic, + finalized, &state, )?; Ok(add_consensus_version_header( @@ -1852,6 +2173,58 @@ pub fn serve( }, ); + // GET debug/fork_choice + let get_debug_fork_choice = eth_v1 + .and(warp::path("debug")) + .and(warp::path("fork_choice")) + .and(warp::path::end()) + .and(chain_filter.clone()) + .and_then(|chain: Arc>| { + blocking_json_task(move || { + let beacon_fork_choice = chain.canonical_head.fork_choice_read_lock(); + + let proto_array = beacon_fork_choice.proto_array().core_proto_array(); + + let fork_choice_nodes = proto_array + .nodes + .iter() + .map(|node| { + let execution_status = if node.execution_status.is_execution_enabled() { + Some(node.execution_status.to_string()) + } else { + None + }; + + ForkChoiceNode { + slot: node.slot, + block_root: node.root, + parent_root: node + .parent + .and_then(|index| proto_array.nodes.get(index)) + .map(|parent| 
parent.root), + justified_epoch: node + .justified_checkpoint + .map(|checkpoint| checkpoint.epoch), + finalized_epoch: node + .finalized_checkpoint + .map(|checkpoint| checkpoint.epoch), + weight: node.weight, + validity: execution_status, + execution_block_hash: node + .execution_status + .block_hash() + .map(|block_hash| block_hash.into_root()), + } + }) + .collect::>(); + Ok(ForkChoice { + justified_checkpoint: proto_array.justified_checkpoint, + finalized_checkpoint: proto_array.finalized_checkpoint, + fork_choice_nodes, + }) + }) + }); + /* * node */ @@ -1948,7 +2321,7 @@ pub fn serve( .and(warp::path::end()) .and(network_globals.clone()) .and_then(|network_globals: Arc>| { - blocking_task(move || match *network_globals.sync_state.read() { + blocking_response_task(move || match *network_globals.sync_state.read() { SyncState::SyncingFinalized { .. } | SyncState::SyncingHead { .. } | SyncState::SyncTransition @@ -2164,11 +2537,19 @@ pub fn serve( .and(not_while_syncing_filter.clone()) .and(warp::query::()) .and(chain_filter.clone()) + .and(log_filter.clone()) .and_then( |endpoint_version: EndpointVersion, slot: Slot, query: api_types::ValidatorBlocksQuery, - chain: Arc>| async move { + chain: Arc>, + log: Logger| async move { + debug!( + log, + "Block production request from HTTP API"; + "slot" => slot + ); + let randao_reveal = query.randao_reveal.decompress().map_err(|e| { warp_utils::reject::custom_bad_request(format!( "randao reveal is not a valid BLS signature: {:?}", @@ -2204,7 +2585,7 @@ pub fn serve( .map_err(inconsistent_fork_rejection)?; fork_versioned_response(endpoint_version, fork_name, block) - .map(|response| warp::reply::json(&response)) + .map(|response| warp::reply::json(&response).into_response()) }, ); @@ -2261,7 +2642,7 @@ pub fn serve( // Pose as a V2 endpoint so we return the fork `version`. 
fork_versioned_response(V2, fork_name, block) - .map(|response| warp::reply::json(&response)) + .map(|response| warp::reply::json(&response).into_response()) }, ); @@ -2634,7 +3015,7 @@ pub fn serve( )) })?; - Ok::<_, warp::reject::Rejection>(warp::reply::json(&())) + Ok::<_, warp::reject::Rejection>(warp::reply::json(&()).into_response()) }, ); @@ -2743,9 +3124,9 @@ pub fn serve( builder .post_builder_validators(&filtered_registration_data) .await - .map(|resp| warp::reply::json(&resp)) + .map(|resp| warp::reply::json(&resp).into_response()) .map_err(|e| { - error!( + warn!( log, "Relay error when registering validator(s)"; "num_registrations" => filtered_registration_data.len(), @@ -2915,6 +3296,22 @@ pub fn serve( }, ); + // POST lighthouse/ui/validator_info + let post_lighthouse_ui_validator_info = warp::path("lighthouse") + .and(warp::path("ui")) + .and(warp::path("validator_info")) + .and(warp::path::end()) + .and(warp::body::json()) + .and(chain_filter.clone()) + .and_then( + |request_data: ui::ValidatorInfoRequestData, chain: Arc>| { + blocking_json_task(move || { + ui::get_validator_info(request_data, chain) + .map(api_types::GenericResponse::from) + }) + }, + ); + // GET lighthouse/syncing let get_lighthouse_syncing = warp::path("lighthouse") .and(warp::path("syncing")) @@ -2989,7 +3386,7 @@ pub fn serve( .and(warp::path::end()) .and(chain_filter.clone()) .and_then(|chain: Arc>| { - blocking_task(move || { + blocking_response_task(move || { Ok::<_, warp::Rejection>(warp::reply::json(&api_types::GenericResponseRef::from( chain .canonical_head @@ -3108,9 +3505,9 @@ pub fn serve( .and(warp::path::end()) .and(chain_filter.clone()) .and_then(|state_id: StateId, chain: Arc>| { - blocking_task(move || { + blocking_response_task(move || { // This debug endpoint provides no indication of optimistic status. 
- let (state, _execution_optimistic) = state_id.state(&chain)?; + let (state, _execution_optimistic, _finalized) = state_id.state(&chain)?; Response::builder() .status(200) .header("Content-Type", "application/ssz") @@ -3244,9 +3641,10 @@ pub fn serve( .and(chain_filter.clone()) .and_then(|chain: Arc>| async move { let merge_readiness = chain.check_merge_readiness().await; - Ok::<_, warp::reject::Rejection>(warp::reply::json(&api_types::GenericResponse::from( - merge_readiness, - ))) + Ok::<_, warp::reject::Rejection>( + warp::reply::json(&api_types::GenericResponse::from(merge_readiness)) + .into_response(), + ) }); let get_events = eth_v1 @@ -3257,7 +3655,7 @@ pub fn serve( .and_then( |topics_res: Result, chain: Arc>| { - blocking_task(move || { + blocking_response_task(move || { let topics = topics_res?; // for each topic subscribed spawn a new subscription let mut receivers = Vec::with_capacity(topics.topics.len()); @@ -3282,6 +3680,9 @@ pub fn serve( api_types::EventTopic::ContributionAndProof => { event_handler.subscribe_contributions() } + api_types::EventTopic::PayloadAttributes => { + event_handler.subscribe_payload_attributes() + } api_types::EventTopic::LateHead => { event_handler.subscribe_late_head() } @@ -3321,100 +3722,111 @@ pub fn serve( ); // Define the ultimate set of routes that will be provided to the server. + // Use `uor` rather than `or` in order to simplify types (see `UnifyingOrFilter`). 
let routes = warp::get() .and( get_beacon_genesis - .boxed() - .or(get_beacon_state_root.boxed()) - .or(get_beacon_state_fork.boxed()) - .or(get_beacon_state_finality_checkpoints.boxed()) - .or(get_beacon_state_validator_balances.boxed()) - .or(get_beacon_state_validators_id.boxed()) - .or(get_beacon_state_validators.boxed()) - .or(get_beacon_state_committees.boxed()) - .or(get_beacon_state_sync_committees.boxed()) - .or(get_beacon_state_randao.boxed()) - .or(get_beacon_headers.boxed()) - .or(get_beacon_headers_block_id.boxed()) - .or(get_beacon_block.boxed()) - .or(get_beacon_block_attestations.boxed()) - .or(get_beacon_blinded_block.boxed()) - .or(get_beacon_block_root.boxed()) - .or(get_beacon_pool_attestations.boxed()) - .or(get_beacon_pool_attester_slashings.boxed()) - .or(get_beacon_pool_proposer_slashings.boxed()) - .or(get_beacon_pool_voluntary_exits.boxed()) - .or(get_beacon_deposit_snapshot.boxed()) - .or(get_config_fork_schedule.boxed()) - .or(get_config_spec.boxed()) - .or(get_config_deposit_contract.boxed()) - .or(get_debug_beacon_states.boxed()) - .or(get_debug_beacon_heads.boxed()) - .or(get_node_identity.boxed()) - .or(get_node_version.boxed()) - .or(get_node_syncing.boxed()) - .or(get_node_health.boxed()) - .or(get_node_peers_by_id.boxed()) - .or(get_node_peers.boxed()) - .or(get_node_peer_count.boxed()) - .or(get_validator_duties_proposer.boxed()) - .or(get_validator_blocks.boxed()) - .or(get_validator_blinded_blocks.boxed()) - .or(get_validator_attestation_data.boxed()) - .or(get_validator_aggregate_attestation.boxed()) - .or(get_validator_sync_committee_contribution.boxed()) - .or(get_lighthouse_health.boxed()) - .or(get_lighthouse_ui_health.boxed()) - .or(get_lighthouse_ui_validator_count.boxed()) - .or(get_lighthouse_syncing.boxed()) - .or(get_lighthouse_nat.boxed()) - .or(get_lighthouse_peers.boxed()) - .or(get_lighthouse_peers_connected.boxed()) - .or(get_lighthouse_proto_array.boxed()) - 
.or(get_lighthouse_validator_inclusion_global.boxed()) - .or(get_lighthouse_validator_inclusion.boxed()) - .or(get_lighthouse_eth1_syncing.boxed()) - .or(get_lighthouse_eth1_block_cache.boxed()) - .or(get_lighthouse_eth1_deposit_cache.boxed()) - .or(get_lighthouse_beacon_states_ssz.boxed()) - .or(get_lighthouse_staking.boxed()) - .or(get_lighthouse_database_info.boxed()) - .or(get_lighthouse_block_rewards.boxed()) - .or(get_lighthouse_attestation_performance.boxed()) - .or(get_lighthouse_block_packing_efficiency.boxed()) - .or(get_lighthouse_merge_readiness.boxed()) - .or(get_events.boxed()), + .uor(get_beacon_state_root) + .uor(get_beacon_state_fork) + .uor(get_beacon_state_finality_checkpoints) + .uor(get_beacon_state_validator_balances) + .uor(get_beacon_state_validators_id) + .uor(get_beacon_state_validators) + .uor(get_beacon_state_committees) + .uor(get_beacon_state_sync_committees) + .uor(get_beacon_state_randao) + .uor(get_beacon_headers) + .uor(get_beacon_headers_block_id) + .uor(get_beacon_block) + .uor(get_beacon_block_attestations) + .uor(get_beacon_blinded_block) + .uor(get_beacon_block_root) + .uor(get_beacon_pool_attestations) + .uor(get_beacon_pool_attester_slashings) + .uor(get_beacon_pool_proposer_slashings) + .uor(get_beacon_pool_voluntary_exits) + .uor(get_beacon_pool_bls_to_execution_changes) + .uor(get_beacon_deposit_snapshot) + .uor(get_beacon_rewards_blocks) + .uor(get_config_fork_schedule) + .uor(get_config_spec) + .uor(get_config_deposit_contract) + .uor(get_debug_beacon_states) + .uor(get_debug_beacon_heads) + .uor(get_debug_fork_choice) + .uor(get_node_identity) + .uor(get_node_version) + .uor(get_node_syncing) + .uor(get_node_health) + .uor(get_node_peers_by_id) + .uor(get_node_peers) + .uor(get_node_peer_count) + .uor(get_validator_duties_proposer) + .uor(get_validator_blocks) + .uor(get_validator_blinded_blocks) + .uor(get_validator_attestation_data) + .uor(get_validator_aggregate_attestation) + 
.uor(get_validator_sync_committee_contribution) + .uor(get_lighthouse_health) + .uor(get_lighthouse_ui_health) + .uor(get_lighthouse_ui_validator_count) + .uor(get_lighthouse_syncing) + .uor(get_lighthouse_nat) + .uor(get_lighthouse_peers) + .uor(get_lighthouse_peers_connected) + .uor(get_lighthouse_proto_array) + .uor(get_lighthouse_validator_inclusion_global) + .uor(get_lighthouse_validator_inclusion) + .uor(get_lighthouse_eth1_syncing) + .uor(get_lighthouse_eth1_block_cache) + .uor(get_lighthouse_eth1_deposit_cache) + .uor(get_lighthouse_beacon_states_ssz) + .uor(get_lighthouse_staking) + .uor(get_lighthouse_database_info) + .uor(get_lighthouse_block_rewards) + .uor(get_lighthouse_attestation_performance) + .uor(get_lighthouse_block_packing_efficiency) + .uor(get_lighthouse_merge_readiness) + .uor(get_events) + .recover(warp_utils::reject::handle_rejection), ) .boxed() - .or(warp::post().and( - post_beacon_blocks - .boxed() - .or(post_beacon_blinded_blocks.boxed()) - .or(post_beacon_pool_attestations.boxed()) - .or(post_beacon_pool_attester_slashings.boxed()) - .or(post_beacon_pool_proposer_slashings.boxed()) - .or(post_beacon_pool_voluntary_exits.boxed()) - .or(post_beacon_pool_sync_committees.boxed()) - .or(post_validator_duties_attester.boxed()) - .or(post_validator_duties_sync.boxed()) - .or(post_validator_aggregate_and_proofs.boxed()) - .or(post_validator_contribution_and_proofs.boxed()) - .or(post_validator_beacon_committee_subscriptions.boxed()) - .or(post_validator_sync_committee_subscriptions.boxed()) - .or(post_validator_prepare_beacon_proposer.boxed()) - .or(post_validator_register_validator.boxed()) - .or(post_lighthouse_liveness.boxed()) - .or(post_lighthouse_database_reconstruct.boxed()) - .or(post_lighthouse_database_historical_blocks.boxed()) - .or(post_lighthouse_block_rewards.boxed()) - .or(post_lighthouse_ui_validator_metrics.boxed()), - )) + .uor( + warp::post().and( + post_beacon_blocks + .uor(post_beacon_blinded_blocks) + 
.uor(post_beacon_pool_attestations) + .uor(post_beacon_pool_attester_slashings) + .uor(post_beacon_pool_proposer_slashings) + .uor(post_beacon_pool_voluntary_exits) + .uor(post_beacon_pool_sync_committees) + .uor(post_beacon_pool_bls_to_execution_changes) + .uor(post_beacon_rewards_attestations) + .uor(post_beacon_rewards_sync_committee) + .uor(post_validator_duties_attester) + .uor(post_validator_duties_sync) + .uor(post_validator_aggregate_and_proofs) + .uor(post_validator_contribution_and_proofs) + .uor(post_validator_beacon_committee_subscriptions) + .uor(post_validator_sync_committee_subscriptions) + .uor(post_validator_prepare_beacon_proposer) + .uor(post_validator_register_validator) + .uor(post_lighthouse_liveness) + .uor(post_lighthouse_database_reconstruct) + .uor(post_lighthouse_database_historical_blocks) + .uor(post_lighthouse_block_rewards) + .uor(post_lighthouse_ui_validator_metrics) + .uor(post_lighthouse_ui_validator_info) + .recover(warp_utils::reject::handle_rejection), + ), + ) .recover(warp_utils::reject::handle_rejection) .with(slog_logging(log.clone())) .with(prometheus_metrics()) // Add a `Server` header. .map(|reply| warp::reply::with_header(reply, "Server", &version_with_platform())) - .with(cors_builder.build()); + .with(cors_builder.build()) + .boxed(); let http_socket: SocketAddr = SocketAddr::new(config.listen_addr, config.listen_port); let http_server: HttpServer = match config.tls_config { diff --git a/beacon_node/http_api/src/metrics.rs b/beacon_node/http_api/src/metrics.rs index 1c3ab1f6804..26ee183c83f 100644 --- a/beacon_node/http_api/src/metrics.rs +++ b/beacon_node/http_api/src/metrics.rs @@ -29,9 +29,10 @@ lazy_static::lazy_static! 
{ "http_api_beacon_proposer_cache_misses_total", "Count of times the proposer cache has been missed", ); - pub static ref HTTP_API_BLOCK_BROADCAST_DELAY_TIMES: Result = try_create_histogram( + pub static ref HTTP_API_BLOCK_BROADCAST_DELAY_TIMES: Result = try_create_histogram_vec( "http_api_block_broadcast_delay_times", - "Time between start of the slot and when the block was broadcast" + "Time between start of the slot and when the block was broadcast", + &["provenance"] ); pub static ref HTTP_API_BLOCK_PUBLISHED_LATE_TOTAL: Result = try_create_int_counter( "http_api_block_published_late_total", diff --git a/beacon_node/http_api/src/proposer_duties.rs b/beacon_node/http_api/src/proposer_duties.rs index 877d64e20f8..7e946b89e72 100644 --- a/beacon_node/http_api/src/proposer_duties.rs +++ b/beacon_node/http_api/src/proposer_duties.rs @@ -209,7 +209,9 @@ fn compute_historic_proposer_duties( .map_err(warp_utils::reject::beacon_chain_error)?; (state, execution_optimistic) } else { - StateId::from_slot(epoch.start_slot(T::EthSpec::slots_per_epoch())).state(chain)? + let (state, execution_optimistic, _finalized) = + StateId::from_slot(epoch.start_slot(T::EthSpec::slots_per_epoch())).state(chain)?; + (state, execution_optimistic) }; // Ensure the state lookup was correct. 
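The `late_block_logging` helper introduced in `publish_blocks.rs` below compares the observed broadcast delay against two thresholds derived from the slot clock: the unaggregated-attestation production delay ("too late") and half of it ("delayed"). A std-only sketch of that classification — the 4-second threshold and the function name are illustrative, not Lighthouse's API:

```rust
use std::time::Duration;

/// Classify a block-broadcast delay the way `late_block_logging` does:
/// at or past the full threshold the block is likely orphaned ("too late"),
/// at or past half the threshold it may be orphaned ("delayed").
fn classify_delay(delay: Duration, too_late_threshold: Duration) -> &'static str {
    let delayed_threshold = too_late_threshold / 2;
    if delay >= too_late_threshold {
        "too late"
    } else if delay >= delayed_threshold {
        "delayed"
    } else {
        "on time"
    }
}

fn main() {
    // Hypothetical threshold, e.g. a 4s unagg-attestation production delay.
    let threshold = Duration::from_secs(4);
    assert_eq!(classify_delay(Duration::from_secs(5), threshold), "too late");
    assert_eq!(classify_delay(Duration::from_secs(2), threshold), "delayed");
    assert_eq!(classify_delay(Duration::from_millis(500), threshold), "on time");
    println!("ok");
}
```

The same two-tier check is emitted for both provenances; only the `provenance` label ("local" vs "builder") and the point at which `seen_timestamp` is taken differ.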
diff --git a/beacon_node/http_api/src/publish_blocks.rs b/beacon_node/http_api/src/publish_blocks.rs index 5d27f117b02..1a5d5175bc2 100644 --- a/beacon_node/http_api/src/publish_blocks.rs +++ b/beacon_node/http_api/src/publish_blocks.rs @@ -3,36 +3,55 @@ use beacon_chain::validator_monitor::{get_block_delay_ms, timestamp_now}; use beacon_chain::{ BeaconChain, BeaconChainTypes, BlockError, CountUnrealized, NotifyExecutionLayer, }; +use execution_layer::ProvenancedPayload; use lighthouse_network::PubsubMessage; use network::NetworkMessage; -use slog::{error, info, warn, Logger}; +use slog::{debug, error, info, warn, Logger}; use slot_clock::SlotClock; use std::sync::Arc; +use std::time::Duration; use tokio::sync::mpsc::UnboundedSender; use tree_hash::TreeHash; use types::{ - BlindedPayload, ExecPayload, ExecutionBlockHash, ExecutionPayload, FullPayload, Hash256, - SignedBeaconBlock, + AbstractExecPayload, BeaconBlockRef, BlindedPayload, EthSpec, ExecPayload, ExecutionBlockHash, + FullPayload, Hash256, SignedBeaconBlock, }; use warp::Rejection; +pub enum ProvenancedBlock { + /// The payload was built using a local EE. + Local(Arc>>), + /// The payload was built using a remote builder (e.g., via a mev-boost + /// compatible relay). + Builder(Arc>>), +} + /// Handles a request from the HTTP API for full blocks. pub async fn publish_block( block_root: Option, - block: Arc>, + provenanced_block: ProvenancedBlock, chain: Arc>, network_tx: &UnboundedSender>, log: Logger, ) -> Result<(), Rejection> { let seen_timestamp = timestamp_now(); + let (block, is_locally_built_block) = match provenanced_block { + ProvenancedBlock::Local(block) => (block, true), + ProvenancedBlock::Builder(block) => (block, false), + }; + let delay = get_block_delay_ms(seen_timestamp, block.message(), &chain.slot_clock); + + debug!( + log, + "Signed block published to HTTP API"; + "slot" => block.slot() + ); // Send the block, regardless of whether or not it is valid. 
The API // specification is very clear that this is the desired behaviour. - crate::publish_pubsub_message(network_tx, PubsubMessage::BeaconBlock(block.clone()))?; - // Determine the delay after the start of the slot, register it with metrics. - let delay = get_block_delay_ms(seen_timestamp, block.message(), &chain.slot_clock); - metrics::observe_duration(&metrics::HTTP_API_BLOCK_BROADCAST_DELAY_TIMES, delay); + let message = PubsubMessage::BeaconBlock(block.clone()); + crate::publish_pubsub_message(network_tx, message)?; let block_root = block_root.unwrap_or_else(|| block.canonical_root()); @@ -67,31 +86,11 @@ pub async fn publish_block( // head. chain.recompute_head_at_current_slot().await; - // Perform some logging to inform users if their blocks are being produced - // late. - // - // Check to see the thresholds are non-zero to avoid logging errors with small - // slot times (e.g., during testing) - let too_late_threshold = chain.slot_clock.unagg_attestation_production_delay(); - let delayed_threshold = too_late_threshold / 2; - if delay >= too_late_threshold { - error!( - log, - "Block was broadcast too late"; - "msg" => "system may be overloaded, block likely to be orphaned", - "delay_ms" => delay.as_millis(), - "slot" => block.slot(), - "root" => ?root, - ) - } else if delay >= delayed_threshold { - error!( - log, - "Block broadcast was delayed"; - "msg" => "system may be overloaded, block may be orphaned", - "delay_ms" => delay.as_millis(), - "slot" => block.slot(), - "root" => ?root, - ) + // Only perform late-block logging here if the block is local. For + // blocks built with builders we consider the broadcast time to be + // when the blinded block is published to the builder. 
+ if is_locally_built_block { + late_block_logging(&chain, seen_timestamp, block.message(), root, "local", &log) } Ok(()) @@ -139,14 +138,7 @@ pub async fn publish_blinded_block( ) -> Result<(), Rejection> { let block_root = block.canonical_root(); let full_block = reconstruct_block(chain.clone(), block_root, block, log.clone()).await?; - publish_block::( - Some(block_root), - Arc::new(full_block), - chain, - network_tx, - log, - ) - .await + publish_block::(Some(block_root), full_block, chain, network_tx, log).await } /// Deconstruct the given blinded block, and construct a full block. This attempts to use the @@ -157,23 +149,48 @@ async fn reconstruct_block( block_root: Hash256, block: SignedBeaconBlock>, log: Logger, -) -> Result>, Rejection> { - let full_payload = if let Ok(payload_header) = block.message().body().execution_payload() { +) -> Result, Rejection> { + let full_payload_opt = if let Ok(payload_header) = block.message().body().execution_payload() { let el = chain.execution_layer.as_ref().ok_or_else(|| { warp_utils::reject::custom_server_error("Missing execution layer".to_string()) })?; // If the execution block hash is zero, use an empty payload. let full_payload = if payload_header.block_hash() == ExecutionBlockHash::zero() { - ExecutionPayload::default() - // If we already have an execution payload with this transactions root cached, use it. + let payload = FullPayload::default_at_fork( + chain + .spec + .fork_name_at_epoch(block.slot().epoch(T::EthSpec::slots_per_epoch())), + ) + .map_err(|e| { + warp_utils::reject::custom_server_error(format!( + "Default payload construction error: {e:?}" + )) + })? + .into(); + ProvenancedPayload::Local(payload) + // If we already have an execution payload with this transactions root cached, use it. 
} else if let Some(cached_payload) = el.get_payload_by_root(&payload_header.tree_hash_root()) { - info!(log, "Reconstructing a full block using a local payload"; "block_hash" => ?cached_payload.block_hash); - cached_payload - // Otherwise, this means we are attempting a blind block proposal. + info!(log, "Reconstructing a full block using a local payload"; "block_hash" => ?cached_payload.block_hash()); + ProvenancedPayload::Local(cached_payload) + // Otherwise, this means we are attempting a blind block proposal. } else { + // Perform the logging for late blocks when we publish to the + // builder, rather than when we publish to the network. This helps + // prevent false positive logs when the builder publishes to the P2P + // network significantly earlier than when they return the block to + // us. + late_block_logging( + &chain, + timestamp_now(), + block.message(), + block_root, + "builder", + &log, + ); + let full_payload = el .propose_blinded_beacon_block(block_root, &block) .await @@ -183,8 +200,8 @@ async fn reconstruct_block( e )) })?; - info!(log, "Successfully published a block to the builder network"; "block_hash" => ?full_payload.block_hash); - full_payload + info!(log, "Successfully published a block to the builder network"; "block_hash" => ?full_payload.block_hash()); + ProvenancedPayload::Builder(full_payload) }; Some(full_payload) @@ -192,7 +209,71 @@ async fn reconstruct_block( None }; - block.try_into_full_block(full_payload).ok_or_else(|| { + match full_payload_opt { + // A block without a payload is pre-merge and we consider it locally + // built. 
+ None => block + .try_into_full_block(None) + .map(Arc::new) + .map(ProvenancedBlock::Local), + Some(ProvenancedPayload::Local(full_payload)) => block + .try_into_full_block(Some(full_payload)) + .map(Arc::new) + .map(ProvenancedBlock::Local), + Some(ProvenancedPayload::Builder(full_payload)) => block + .try_into_full_block(Some(full_payload)) + .map(Arc::new) + .map(ProvenancedBlock::Builder), + } + .ok_or_else(|| { warp_utils::reject::custom_server_error("Unable to add payload to block".to_string()) }) } + +/// If the `seen_timestamp` is some time after the start of the slot for +/// `block`, create some logs to indicate that the block was published late. +fn late_block_logging>( + chain: &BeaconChain, + seen_timestamp: Duration, + block: BeaconBlockRef, + root: Hash256, + provenance: &str, + log: &Logger, +) { + let delay = get_block_delay_ms(seen_timestamp, block, &chain.slot_clock); + + metrics::observe_timer_vec( + &metrics::HTTP_API_BLOCK_BROADCAST_DELAY_TIMES, + &[provenance], + delay, + ); + + // Perform some logging to inform users if their blocks are being produced + // late. 
+ // + // Check that the thresholds are non-zero to avoid logging errors with small + // slot times (e.g., during testing) + let too_late_threshold = chain.slot_clock.unagg_attestation_production_delay(); + let delayed_threshold = too_late_threshold / 2; + if delay >= too_late_threshold { + error!( + log, + "Block was broadcast too late"; + "msg" => "system may be overloaded, block likely to be orphaned", + "provenance" => provenance, + "delay_ms" => delay.as_millis(), + "slot" => block.slot(), + "root" => ?root, + ) + } else if delay >= delayed_threshold { + error!( + log, + "Block broadcast was delayed"; + "msg" => "system may be overloaded, block may be orphaned", + "provenance" => provenance, + "delay_ms" => delay.as_millis(), + "slot" => block.slot(), + "root" => ?root, + ) + } +} diff --git a/beacon_node/http_api/src/standard_block_rewards.rs b/beacon_node/http_api/src/standard_block_rewards.rs new file mode 100644 index 00000000000..de7e5eb7d3b --- /dev/null +++ b/beacon_node/http_api/src/standard_block_rewards.rs @@ -0,0 +1,27 @@ +use crate::sync_committee_rewards::get_state_before_applying_block; +use crate::BlockId; +use crate::ExecutionOptimistic; +use beacon_chain::{BeaconChain, BeaconChainTypes}; +use eth2::lighthouse::StandardBlockReward; +use std::sync::Arc; +use warp_utils::reject::beacon_chain_error; +//// The difference between block_rewards and beacon_block_rewards is that the latter returns a +//// block reward format that satisfies the beacon-api specs +pub fn compute_beacon_block_rewards( + chain: Arc>, + block_id: BlockId, +) -> Result<(StandardBlockReward, ExecutionOptimistic, bool), warp::Rejection> { + let (block, execution_optimistic, finalized) = block_id.blinded_block(&chain)?; + + let block_ref = block.message(); + + let block_root = block.canonical_root(); + + let mut state = get_state_before_applying_block(chain.clone(), &block)?; + + let rewards = chain + .compute_beacon_block_reward(block_ref, block_root, &mut state) 
.map_err(beacon_chain_error)?; + + Ok((rewards, execution_optimistic, finalized)) +} diff --git a/beacon_node/http_api/src/state_id.rs b/beacon_node/http_api/src/state_id.rs index 44354217bc4..9e4aadef17e 100644 --- a/beacon_node/http_api/src/state_id.rs +++ b/beacon_node/http_api/src/state_id.rs @@ -10,6 +10,9 @@ use types::{BeaconState, Checkpoint, EthSpec, Fork, Hash256, Slot}; #[derive(Debug)] pub struct StateId(pub CoreStateId); +// More clarity when returning if the state is finalized or not in the root function. +type Finalized = bool; + impl StateId { pub fn from_slot(slot: Slot) -> Self { Self(CoreStateId::Slot(slot)) @@ -19,8 +22,8 @@ impl StateId { pub fn root( &self, chain: &BeaconChain, - ) -> Result<(Hash256, ExecutionOptimistic), warp::Rejection> { - let (slot, execution_optimistic) = match &self.0 { + ) -> Result<(Hash256, ExecutionOptimistic, Finalized), warp::Rejection> { + let (slot, execution_optimistic, finalized) = match &self.0 { CoreStateId::Head => { let (cached_head, execution_status) = chain .canonical_head @@ -29,24 +32,36 @@ impl StateId { return Ok(( cached_head.head_state_root(), execution_status.is_optimistic_or_invalid(), + false, )); } - CoreStateId::Genesis => return Ok((chain.genesis_state_root, false)), + CoreStateId::Genesis => return Ok((chain.genesis_state_root, false, true)), CoreStateId::Finalized => { let finalized_checkpoint = chain.canonical_head.cached_head().finalized_checkpoint(); - checkpoint_slot_and_execution_optimistic(chain, finalized_checkpoint)? + let (slot, execution_optimistic) = + checkpoint_slot_and_execution_optimistic(chain, finalized_checkpoint)?; + (slot, execution_optimistic, true) } CoreStateId::Justified => { let justified_checkpoint = chain.canonical_head.cached_head().justified_checkpoint(); - checkpoint_slot_and_execution_optimistic(chain, justified_checkpoint)? 
+ let (slot, execution_optimistic) = + checkpoint_slot_and_execution_optimistic(chain, justified_checkpoint)?; + (slot, execution_optimistic, false) } CoreStateId::Slot(slot) => ( *slot, chain .is_optimistic_or_invalid_head() .map_err(warp_utils::reject::beacon_chain_error)?, + *slot + <= chain + .canonical_head + .cached_head() + .finalized_checkpoint() + .epoch + .start_slot(T::EthSpec::slots_per_epoch()), ), CoreStateId::Root(root) => { if let Some(hot_summary) = chain @@ -61,7 +76,10 @@ impl StateId { .is_optimistic_or_invalid_block_no_fallback(&hot_summary.latest_block_root) .map_err(BeaconChainError::ForkChoiceError) .map_err(warp_utils::reject::beacon_chain_error)?; - return Ok((*root, execution_optimistic)); + let finalized = chain + .is_finalized_state(root, hot_summary.slot) + .map_err(warp_utils::reject::beacon_chain_error)?; + return Ok((*root, execution_optimistic, finalized)); } else if let Some(_cold_state_slot) = chain .store .load_cold_state_slot(root) @@ -77,7 +95,7 @@ impl StateId { .is_optimistic_or_invalid_block_no_fallback(&finalized_root) .map_err(BeaconChainError::ForkChoiceError) .map_err(warp_utils::reject::beacon_chain_error)?; - return Ok((*root, execution_optimistic)); + return Ok((*root, execution_optimistic, true)); } else { return Err(warp_utils::reject::custom_not_found(format!( "beacon state for state root {}", @@ -94,7 +112,7 @@ impl StateId { warp_utils::reject::custom_not_found(format!("beacon state at slot {}", slot)) })?; - Ok((root, execution_optimistic)) + Ok((root, execution_optimistic, finalized)) } /// Return the `fork` field of the state identified by `self`. 
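The new `finalized` flag computed for `CoreStateId::Slot` above reduces to a slot comparison: a state is considered finalized iff its slot is at or before the start slot of the finalized checkpoint's epoch. A minimal sketch of that check, assuming mainnet's 32 slots per epoch; the function name is hypothetical:

```rust
/// A state at `slot` counts as finalized iff it is not after the first slot
/// of the finalized checkpoint's epoch (mirroring the `CoreStateId::Slot`
/// arm's `start_slot` comparison).
fn is_finalized_slot(slot: u64, finalized_epoch: u64, slots_per_epoch: u64) -> bool {
    slot <= finalized_epoch * slots_per_epoch
}

fn main() {
    let slots_per_epoch = 32;
    // Finalized checkpoint at epoch 10 => start slot 320.
    assert!(is_finalized_slot(320, 10, slots_per_epoch));
    assert!(is_finalized_slot(100, 10, slots_per_epoch));
    assert!(!is_finalized_slot(321, 10, slots_per_epoch));
    println!("ok");
}
```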
@@ -103,9 +121,25 @@ impl StateId { &self, chain: &BeaconChain, ) -> Result<(Fork, bool), warp::Rejection> { - self.map_state_and_execution_optimistic(chain, |state, execution_optimistic| { - Ok((state.fork(), execution_optimistic)) - }) + self.map_state_and_execution_optimistic_and_finalized( + chain, + |state, execution_optimistic, _finalized| Ok((state.fork(), execution_optimistic)), + ) + } + + /// Return the `fork` field of the state identified by `self`. + /// Also returns the `execution_optimistic` value of the state. + /// Also returns the `finalized` value of the state. + pub fn fork_and_execution_optimistic_and_finalized( + &self, + chain: &BeaconChain, + ) -> Result<(Fork, bool, bool), warp::Rejection> { + self.map_state_and_execution_optimistic_and_finalized( + chain, + |state, execution_optimistic, finalized| { + Ok((state.fork(), execution_optimistic, finalized)) + }, + ) } /// Convenience function to compute `fork` when `execution_optimistic` isn't desired. @@ -121,8 +155,8 @@ impl StateId { pub fn state( &self, chain: &BeaconChain, - ) -> Result<(BeaconState, ExecutionOptimistic), warp::Rejection> { - let ((state_root, execution_optimistic), slot_opt) = match &self.0 { + ) -> Result<(BeaconState, ExecutionOptimistic, Finalized), warp::Rejection> { + let ((state_root, execution_optimistic, finalized), slot_opt) = match &self.0 { CoreStateId::Head => { let (cached_head, execution_status) = chain .canonical_head @@ -134,6 +168,7 @@ impl StateId { .beacon_state .clone_with_only_committee_caches(), execution_status.is_optimistic_or_invalid(), + false, )); } CoreStateId::Slot(slot) => (self.root(chain)?, Some(*slot)), @@ -152,24 +187,25 @@ impl StateId { }) })?; - Ok((state, execution_optimistic)) + Ok((state, execution_optimistic, finalized)) } /// Map a function across the `BeaconState` identified by `self`. /// - /// The optimistic status of the requested state is also provided to the `func` closure. 
+ /// The optimistic and finalization status of the requested state is also provided to the `func` + /// closure. /// /// This function will avoid instantiating/copying a new state when `self` points to the head /// of the chain. - pub fn map_state_and_execution_optimistic( + pub fn map_state_and_execution_optimistic_and_finalized( &self, chain: &BeaconChain, func: F, ) -> Result where - F: Fn(&BeaconState, bool) -> Result, + F: Fn(&BeaconState, bool, bool) -> Result, { - let (state, execution_optimistic) = match &self.0 { + let (state, execution_optimistic, finalized) = match &self.0 { CoreStateId::Head => { let (head, execution_status) = chain .canonical_head @@ -178,12 +214,13 @@ impl StateId { return func( &head.snapshot.beacon_state, execution_status.is_optimistic_or_invalid(), + false, ); } _ => self.state(chain)?, }; - func(&state, execution_optimistic) + func(&state, execution_optimistic, finalized) } } diff --git a/beacon_node/http_api/src/sync_committee_rewards.rs b/beacon_node/http_api/src/sync_committee_rewards.rs new file mode 100644 index 00000000000..68a06b1ce8c --- /dev/null +++ b/beacon_node/http_api/src/sync_committee_rewards.rs @@ -0,0 +1,77 @@ +use crate::{BlockId, ExecutionOptimistic}; +use beacon_chain::{BeaconChain, BeaconChainError, BeaconChainTypes}; +use eth2::lighthouse::SyncCommitteeReward; +use eth2::types::ValidatorId; +use slog::{debug, Logger}; +use state_processing::BlockReplayer; +use std::sync::Arc; +use types::{BeaconState, SignedBlindedBeaconBlock}; +use warp_utils::reject::{beacon_chain_error, custom_not_found}; + +pub fn compute_sync_committee_rewards( + chain: Arc>, + block_id: BlockId, + validators: Vec, + log: Logger, +) -> Result<(Option>, ExecutionOptimistic, bool), warp::Rejection> { + let (block, execution_optimistic, finalized) = block_id.blinded_block(&chain)?; + + let mut state = get_state_before_applying_block(chain.clone(), &block)?; + + let reward_payload = chain + .compute_sync_committee_rewards(block.message(), 
&mut state) + .map_err(beacon_chain_error)?; + + let data = if reward_payload.is_empty() { + debug!(log, "compute_sync_committee_rewards returned empty"); + None + } else if validators.is_empty() { + Some(reward_payload) + } else { + Some( + reward_payload + .into_iter() + .filter(|reward| { + validators.iter().any(|validator| match validator { + ValidatorId::Index(i) => reward.validator_index == *i, + ValidatorId::PublicKey(pubkey) => match state.get_validator_index(pubkey) { + Ok(Some(i)) => reward.validator_index == i as u64, + _ => false, + }, + }) + }) + .collect::>(), + ) + }; + + Ok((data, execution_optimistic, finalized)) +} + +pub fn get_state_before_applying_block( + chain: Arc>, + block: &SignedBlindedBeaconBlock, +) -> Result, warp::reject::Rejection> { + let parent_block: SignedBlindedBeaconBlock = chain + .get_blinded_block(&block.parent_root()) + .and_then(|maybe_block| { + maybe_block.ok_or_else(|| BeaconChainError::MissingBeaconBlock(block.parent_root())) + }) + .map_err(|e| custom_not_found(format!("Parent block is not available! {:?}", e)))?; + + let parent_state = chain + .get_state(&parent_block.state_root(), Some(parent_block.slot())) + .and_then(|maybe_state| { + maybe_state + .ok_or_else(|| BeaconChainError::MissingBeaconState(parent_block.state_root())) + }) + .map_err(|e| custom_not_found(format!("Parent state is not available! 
{:?}", e)))?; + + let replayer = BlockReplayer::new(parent_state, &chain.spec) + .no_signature_verification() + .state_root_iter([Ok((parent_block.state_root(), parent_block.slot()))].into_iter()) + .minimal_block_root_verification() + .apply_blocks(vec![], Some(block.slot())) + .map_err(beacon_chain_error)?; + + Ok(replayer.into_state()) +} diff --git a/beacon_node/http_api/tests/common.rs b/beacon_node/http_api/src/test_utils.rs similarity index 82% rename from beacon_node/http_api/tests/common.rs rename to beacon_node/http_api/src/test_utils.rs index 7c228d9803f..8dc9be7dd43 100644 --- a/beacon_node/http_api/tests/common.rs +++ b/beacon_node/http_api/src/test_utils.rs @@ -1,10 +1,12 @@ +use crate::{Config, Context}; use beacon_chain::{ - test_utils::{BeaconChainHarness, BoxedMutator, EphemeralHarnessType}, + test_utils::{ + BeaconChainHarness, BoxedMutator, Builder as HarnessBuilder, EphemeralHarnessType, + }, BeaconChain, BeaconChainTypes, }; use directory::DEFAULT_ROOT_DIR; use eth2::{BeaconNodeHttpClient, Timeouts}; -use http_api::{Config, Context}; use lighthouse_network::{ discv5::enr::{CombinedKey, EnrBuilder}, libp2p::{ @@ -55,25 +57,39 @@ pub struct ApiServer> { pub external_peer_id: PeerId, } +type Initializer = Box< + dyn FnOnce(HarnessBuilder>) -> HarnessBuilder>, +>; type Mutator = BoxedMutator, MemoryStore>; impl InteractiveTester { pub async fn new(spec: Option, validator_count: usize) -> Self { - Self::new_with_mutator(spec, validator_count, None).await + Self::new_with_initializer_and_mutator(spec, validator_count, None, None).await } - pub async fn new_with_mutator( + pub async fn new_with_initializer_and_mutator( spec: Option, validator_count: usize, + initializer: Option>, mutator: Option>, ) -> Self { let mut harness_builder = BeaconChainHarness::builder(E::default()) .spec_or_default(spec) - .deterministic_keypairs(validator_count) .logger(test_logger()) - .mock_execution_layer() - .fresh_ephemeral_store(); - + .mock_execution_layer(); + + 
harness_builder = if let Some(initializer) = initializer { + // Apply custom initialization provided by the caller. + initializer(harness_builder) + } else { + // Apply default initial configuration. + harness_builder + .deterministic_keypairs(validator_count) + .fresh_ephemeral_store() + }; + + // Add a mutator for the beacon chain builder which will be called in + // `HarnessBuilder::build`. if let Some(mutator) = mutator { harness_builder = harness_builder.initial_mutator(mutator); } @@ -114,7 +130,7 @@ pub async fn create_api_server( log: Logger, ) -> ApiServer> { // Get a random unused port. - let port = unused_port::unused_tcp_port().unwrap(); + let port = unused_port::unused_tcp4_port().unwrap(); create_api_server_on_port(chain, log, port).await } @@ -135,10 +151,11 @@ pub async fn create_api_server_on_port( let enr = EnrBuilder::new("v4").build(&enr_key).unwrap(); let network_globals = Arc::new(NetworkGlobals::new( enr.clone(), - TCP_PORT, - UDP_PORT, + Some(TCP_PORT), + None, meta_data, vec![], + false, &log, )); @@ -166,7 +183,7 @@ pub async fn create_api_server_on_port( let eth1_service = eth1::Service::new(eth1::Config::default(), log.clone(), chain.spec.clone()).unwrap(); - let context = Arc::new(Context { + let ctx = Arc::new(Context { config: Config { enabled: true, listen_addr: IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), @@ -177,19 +194,19 @@ pub async fn create_api_server_on_port( data_dir: std::path::PathBuf::from(DEFAULT_ROOT_DIR), spec_fork_name: None, }, - chain: Some(chain.clone()), + chain: Some(chain), network_senders: Some(network_senders), network_globals: Some(network_globals), eth1_service: Some(eth1_service), log, }); - let ctx = context.clone(); + let (shutdown_tx, shutdown_rx) = oneshot::channel(); let server_shutdown = async { // It's not really interesting why this triggered, just that it happened. 
let _ = shutdown_rx.await; }; - let (listening_socket, server) = http_api::serve(ctx, server_shutdown).unwrap(); + let (listening_socket, server) = crate::serve(ctx, server_shutdown).unwrap(); ApiServer { server, diff --git a/beacon_node/http_api/src/ui.rs b/beacon_node/http_api/src/ui.rs index a5b3a8b2f2e..e8280a796a3 100644 --- a/beacon_node/http_api/src/ui.rs +++ b/beacon_node/http_api/src/ui.rs @@ -1,5 +1,7 @@ -use beacon_chain::{metrics, BeaconChain, BeaconChainError, BeaconChainTypes}; -use eth2::types::ValidatorStatus; +use beacon_chain::{ + validator_monitor::HISTORIC_EPOCHS, BeaconChain, BeaconChainError, BeaconChainTypes, +}; +use eth2::types::{Epoch, ValidatorStatus}; use serde::{Deserialize, Serialize}; use std::collections::{HashMap, HashSet}; use std::sync::Arc; @@ -71,6 +73,82 @@ pub fn get_validator_count( }) } +#[derive(PartialEq, Serialize, Deserialize)] +pub struct ValidatorInfoRequestData { + #[serde(with = "eth2_serde_utils::quoted_u64_vec")] + indices: Vec, +} + +#[derive(PartialEq, Serialize, Deserialize)] +pub struct ValidatorInfoValues { + #[serde(with = "eth2_serde_utils::quoted_u64")] + epoch: u64, + #[serde(with = "eth2_serde_utils::quoted_u64")] + total_balance: u64, +} + +#[derive(PartialEq, Serialize, Deserialize)] +pub struct ValidatorInfo { + info: Vec, +} + +#[derive(PartialEq, Serialize, Deserialize)] +pub struct ValidatorInfoResponse { + validators: HashMap, +} + +pub fn get_validator_info( + request_data: ValidatorInfoRequestData, + chain: Arc>, +) -> Result { + let current_epoch = chain.epoch().map_err(beacon_chain_error)?; + + let epochs = current_epoch.saturating_sub(HISTORIC_EPOCHS).as_u64()..=current_epoch.as_u64(); + + let validator_ids = chain + .validator_monitor + .read() + .get_all_monitored_validators() + .iter() + .cloned() + .collect::>(); + + let indices = request_data + .indices + .iter() + .map(|index| index.to_string()) + .collect::>(); + + let ids = validator_ids + .intersection(&indices) + .collect::>(); + + 
let mut validators = HashMap::new(); + + for id in ids { + if let Ok(index) = id.parse::() { + if let Some(validator) = chain + .validator_monitor + .read() + .get_monitored_validator(index) + { + let mut info = vec![]; + for epoch in epochs.clone() { + if let Some(total_balance) = validator.get_total_balance(Epoch::new(epoch)) { + info.push(ValidatorInfoValues { + epoch, + total_balance, + }); + } + } + validators.insert(id.clone(), ValidatorInfo { info }); + } + } + } + + Ok(ValidatorInfoResponse { validators }) +} + #[derive(PartialEq, Serialize, Deserialize)] pub struct ValidatorMetricsRequestData { indices: Vec, @@ -119,76 +197,56 @@ pub fn post_validator_monitor_metrics( let mut validators = HashMap::new(); for id in ids { - let attestation_hits = metrics::get_int_counter( - &metrics::VALIDATOR_MONITOR_PREV_EPOCH_ON_CHAIN_ATTESTER_HIT, - &[id], - ) - .map(|counter| counter.get()) - .unwrap_or(0); - let attestation_misses = metrics::get_int_counter( - &metrics::VALIDATOR_MONITOR_PREV_EPOCH_ON_CHAIN_ATTESTER_MISS, - &[id], - ) - .map(|counter| counter.get()) - .unwrap_or(0); - let attestations = attestation_hits + attestation_misses; - let attestation_hit_percentage: f64 = if attestations == 0 { - 0.0 - } else { - (100 * attestation_hits / attestations) as f64 - }; - - let attestation_head_hits = metrics::get_int_counter( - &metrics::VALIDATOR_MONITOR_PREV_EPOCH_ON_CHAIN_HEAD_ATTESTER_HIT, - &[id], - ) - .map(|counter| counter.get()) - .unwrap_or(0); - let attestation_head_misses = metrics::get_int_counter( - &metrics::VALIDATOR_MONITOR_PREV_EPOCH_ON_CHAIN_HEAD_ATTESTER_MISS, - &[id], - ) - .map(|counter| counter.get()) - .unwrap_or(0); - let head_attestations = attestation_head_hits + attestation_head_misses; - let attestation_head_hit_percentage: f64 = if head_attestations == 0 { - 0.0 - } else { - (100 * attestation_head_hits / head_attestations) as f64 - }; - - let attestation_target_hits = metrics::get_int_counter( - 
&metrics::VALIDATOR_MONITOR_PREV_EPOCH_ON_CHAIN_TARGET_ATTESTER_HIT, - &[id], - ) - .map(|counter| counter.get()) - .unwrap_or(0); - let attestation_target_misses = metrics::get_int_counter( - &metrics::VALIDATOR_MONITOR_PREV_EPOCH_ON_CHAIN_TARGET_ATTESTER_MISS, - &[id], - ) - .map(|counter| counter.get()) - .unwrap_or(0); - let target_attestations = attestation_target_hits + attestation_target_misses; - let attestation_target_hit_percentage: f64 = if target_attestations == 0 { - 0.0 - } else { - (100 * attestation_target_hits / target_attestations) as f64 - }; - - let metrics = ValidatorMetrics { - attestation_hits, - attestation_misses, - attestation_hit_percentage, - attestation_head_hits, - attestation_head_misses, - attestation_head_hit_percentage, - attestation_target_hits, - attestation_target_misses, - attestation_target_hit_percentage, - }; - - validators.insert(id.clone(), metrics); + if let Ok(index) = id.parse::() { + if let Some(validator) = chain + .validator_monitor + .read() + .get_monitored_validator(index) + { + let val_metrics = validator.metrics.read(); + let attestation_hits = val_metrics.attestation_hits; + let attestation_misses = val_metrics.attestation_misses; + let attestation_head_hits = val_metrics.attestation_head_hits; + let attestation_head_misses = val_metrics.attestation_head_misses; + let attestation_target_hits = val_metrics.attestation_target_hits; + let attestation_target_misses = val_metrics.attestation_target_misses; + drop(val_metrics); + + let attestations = attestation_hits + attestation_misses; + let attestation_hit_percentage: f64 = if attestations == 0 { + 0.0 + } else { + (100 * attestation_hits / attestations) as f64 + }; + let head_attestations = attestation_head_hits + attestation_head_misses; + let attestation_head_hit_percentage: f64 = if head_attestations == 0 { + 0.0 + } else { + (100 * attestation_head_hits / head_attestations) as f64 + }; + + let target_attestations = attestation_target_hits + 
attestation_target_misses; + let attestation_target_hit_percentage: f64 = if target_attestations == 0 { + 0.0 + } else { + (100 * attestation_target_hits / target_attestations) as f64 + }; + + let metrics = ValidatorMetrics { + attestation_hits, + attestation_misses, + attestation_hit_percentage, + attestation_head_hits, + attestation_head_misses, + attestation_head_hit_percentage, + attestation_target_hits, + attestation_target_misses, + attestation_target_hit_percentage, + }; + + validators.insert(id.clone(), metrics); + } + } } Ok(ValidatorMetricsResponse { validators }) diff --git a/beacon_node/http_api/src/validator_inclusion.rs b/beacon_node/http_api/src/validator_inclusion.rs index 917e85e6493..f22ced1e693 100644 --- a/beacon_node/http_api/src/validator_inclusion.rs +++ b/beacon_node/http_api/src/validator_inclusion.rs @@ -18,7 +18,7 @@ fn end_of_epoch_state( let target_slot = epoch.end_slot(T::EthSpec::slots_per_epoch()); // The execution status is not returned, any functions which rely upon this method might return // optimistic information without explicitly declaring so. 
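The hit-percentage arithmetic repeated in the hunk above (for plain, head, and target attestations) follows one pattern, sketched below. Note that the division happens on integers before the cast, so the result is truncated to a whole percentage:

```rust
// Distilled form of the hit-percentage computation used for each
// attestation category: 0.0 when there are no attestations, otherwise
// integer percentage (truncating division) cast to f64.
fn hit_percentage(hits: u64, misses: u64) -> f64 {
    let total = hits + misses;
    if total == 0 {
        0.0
    } else {
        (100 * hits / total) as f64
    }
}
```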
- let (state, _execution_status) = StateId::from_slot(target_slot).state(chain)?; + let (state, _execution_status, _finalized) = StateId::from_slot(target_slot).state(chain)?; Ok(state) } diff --git a/beacon_node/http_api/src/version.rs b/beacon_node/http_api/src/version.rs index 87ba3a4663f..e01ff982201 100644 --- a/beacon_node/http_api/src/version.rs +++ b/beacon_node/http_api/src/version.rs @@ -1,10 +1,9 @@ -use crate::api_types::{ - EndpointVersion, ExecutionOptimisticForkVersionedResponse, ForkVersionedResponse, -}; +use crate::api_types::fork_versioned_response::ExecutionOptimisticFinalizedForkVersionedResponse; +use crate::api_types::EndpointVersion; use eth2::CONSENSUS_VERSION_HEADER; use serde::Serialize; -use types::{ForkName, InconsistentFork}; -use warp::reply::{self, Reply, WithHeader}; +use types::{ForkName, ForkVersionedResponse, InconsistentFork}; +use warp::reply::{self, Reply, Response}; pub const V1: EndpointVersion = EndpointVersion(1); pub const V2: EndpointVersion = EndpointVersion(2); @@ -27,12 +26,13 @@ pub fn fork_versioned_response( }) } -pub fn execution_optimistic_fork_versioned_response( +pub fn execution_optimistic_finalized_fork_versioned_response( endpoint_version: EndpointVersion, fork_name: ForkName, execution_optimistic: bool, + finalized: bool, data: T, -) -> Result, warp::reject::Rejection> { +) -> Result, warp::reject::Rejection> { let fork_name = if endpoint_version == V1 { None } else if endpoint_version == V2 { @@ -40,16 +40,17 @@ pub fn execution_optimistic_fork_versioned_response( } else { return Err(unsupported_version_rejection(endpoint_version)); }; - Ok(ExecutionOptimisticForkVersionedResponse { + Ok(ExecutionOptimisticFinalizedForkVersionedResponse { version: fork_name, execution_optimistic: Some(execution_optimistic), + finalized: Some(finalized), data, }) } /// Add the `Eth-Consensus-Version` header to a response. 
-pub fn add_consensus_version_header(reply: T, fork_name: ForkName) -> WithHeader { - reply::with_header(reply, CONSENSUS_VERSION_HEADER, fork_name.to_string()) +pub fn add_consensus_version_header(reply: T, fork_name: ForkName) -> Response { + reply::with_header(reply, CONSENSUS_VERSION_HEADER, fork_name.to_string()).into_response() } pub fn inconsistent_fork_rejection(error: InconsistentFork) -> warp::reject::Rejection { diff --git a/beacon_node/http_api/tests/fork_tests.rs b/beacon_node/http_api/tests/fork_tests.rs index 942a1167c2f..8a3ba887b39 100644 --- a/beacon_node/http_api/tests/fork_tests.rs +++ b/beacon_node/http_api/tests/fork_tests.rs @@ -1,8 +1,16 @@ //! Tests for API behaviour across fork boundaries. -use crate::common::*; -use beacon_chain::{test_utils::RelativeSyncCommittee, StateSkipConfig}; -use eth2::types::{StateId, SyncSubcommittee}; -use types::{ChainSpec, Epoch, EthSpec, MinimalEthSpec, Slot}; +use beacon_chain::{ + test_utils::{RelativeSyncCommittee, DEFAULT_ETH1_BLOCK_HASH, HARNESS_GENESIS_TIME}, + StateSkipConfig, +}; +use eth2::types::{IndexedErrorMessage, StateId, SyncSubcommittee}; +use genesis::{bls_withdrawal_credentials, interop_genesis_state_with_withdrawal_credentials}; +use http_api::test_utils::*; +use std::collections::HashSet; +use types::{ + test_utils::{generate_deterministic_keypair, generate_deterministic_keypairs}, + Address, ChainSpec, Epoch, EthSpec, Hash256, MinimalEthSpec, Slot, +}; type E = MinimalEthSpec; @@ -12,6 +20,14 @@ fn altair_spec(altair_fork_epoch: Epoch) -> ChainSpec { spec } +fn capella_spec(capella_fork_epoch: Epoch) -> ChainSpec { + let mut spec = E::default_spec(); + spec.altair_fork_epoch = Some(Epoch::new(0)); + spec.bellatrix_fork_epoch = Some(Epoch::new(0)); + spec.capella_fork_epoch = Some(capella_fork_epoch); + spec +} + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn sync_committee_duties_across_fork() { let validator_count = E::sync_committee_size(); @@ -307,3 +323,219 @@ 
async fn sync_committee_indices_across_fork() { ); } } + +/// Assert that an HTTP API error has the given status code and indexed errors for the given indices. +fn assert_server_indexed_error(error: eth2::Error, status_code: u16, indices: Vec) { + let eth2::Error::ServerIndexedMessage(IndexedErrorMessage { + code, + failures, + .. + }) = error else { + panic!("wrong error, expected ServerIndexedMessage, got: {error:?}") + }; + assert_eq!(code, status_code); + assert_eq!(failures.len(), indices.len()); + for (index, failure) in indices.into_iter().zip(failures) { + assert_eq!(failure.index, index as u64); + } +} + +#[tokio::test(flavor = "multi_thread", worker_threads = 2)] +async fn bls_to_execution_changes_update_all_around_capella_fork() { + let validator_count = 128; + let fork_epoch = Epoch::new(2); + let spec = capella_spec(fork_epoch); + let max_bls_to_execution_changes = E::max_bls_to_execution_changes(); + + // Use a genesis state with entirely BLS withdrawal credentials. + // Offset keypairs by `validator_count` to create keys distinct from the signing keys. 
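The `bls_withdrawal_credentials` helper used by this test builds spec-style BLS credentials: a `0x00` prefix byte followed by the tail of the pubkey's SHA-256 hash. A dependency-free sketch, with the hashing step taken as an input so no crypto crate is needed (the function name and signature are illustrative, not the `genesis` crate's API):

```rust
// Consensus-spec BLS withdrawal prefix.
const BLS_WITHDRAWAL_PREFIX: u8 = 0x00;

// Sketch: given SHA-256(pubkey), the credentials are that hash with its
// first byte replaced by the BLS withdrawal prefix.
fn bls_withdrawal_credentials_from_hash(pubkey_hash: [u8; 32]) -> [u8; 32] {
    let mut credentials = pubkey_hash;
    credentials[0] = BLS_WITHDRAWAL_PREFIX;
    credentials
}
```

The test below then swaps these for eth1 credentials via signed `BLSToExecutionChange` messages.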
+ let validator_keypairs = generate_deterministic_keypairs(validator_count); + let withdrawal_keypairs = (0..validator_count) + .map(|i| Some(generate_deterministic_keypair(i + validator_count))) + .collect::>(); + let withdrawal_credentials = withdrawal_keypairs + .iter() + .map(|keypair| bls_withdrawal_credentials(&keypair.as_ref().unwrap().pk, &spec)) + .collect::>(); + let genesis_state = interop_genesis_state_with_withdrawal_credentials( + &validator_keypairs, + &withdrawal_credentials, + HARNESS_GENESIS_TIME, + Hash256::from_slice(DEFAULT_ETH1_BLOCK_HASH), + None, + &spec, + ) + .unwrap(); + + let tester = InteractiveTester::::new_with_initializer_and_mutator( + Some(spec.clone()), + validator_count, + Some(Box::new(|harness_builder| { + harness_builder + .keypairs(validator_keypairs) + .withdrawal_keypairs(withdrawal_keypairs) + .genesis_state_ephemeral_store(genesis_state) + })), + None, + ) + .await; + let harness = &tester.harness; + let client = &tester.client; + + let all_validators = harness.get_all_validators(); + let all_validators_u64 = all_validators.iter().map(|x| *x as u64).collect::>(); + + // Create a bunch of valid address changes. + let valid_address_changes = all_validators_u64 + .iter() + .map(|&validator_index| { + harness.make_bls_to_execution_change( + validator_index, + Address::from_low_u64_be(validator_index), + ) + }) + .collect::>(); + + // Address changes which conflict with `valid_address_changes` on the address chosen. + let conflicting_address_changes = all_validators_u64 + .iter() + .map(|&validator_index| { + harness.make_bls_to_execution_change( + validator_index, + Address::from_low_u64_be(validator_index + 1), + ) + }) + .collect::>(); + + // Address changes signed with the wrong key. + let wrong_key_address_changes = all_validators_u64 + .iter() + .map(|&validator_index| { + // Use the correct pubkey. + let pubkey = &harness.get_withdrawal_keypair(validator_index).pk; + // And the wrong secret key. 
+ let secret_key = &harness + .get_withdrawal_keypair((validator_index + 1) % validator_count as u64) + .sk; + harness.make_bls_to_execution_change_with_keys( + validator_index, + Address::from_low_u64_be(validator_index), + pubkey, + secret_key, + ) + }) + .collect::>(); + + // Submit some changes before Capella. Just enough to fill two blocks. + let num_pre_capella = validator_count / 4; + let blocks_filled_pre_capella = 2; + assert_eq!( + num_pre_capella, + blocks_filled_pre_capella * max_bls_to_execution_changes + ); + + client + .post_beacon_pool_bls_to_execution_changes(&valid_address_changes[..num_pre_capella]) + .await + .unwrap(); + + let expected_received_pre_capella_messages = valid_address_changes[..num_pre_capella].to_vec(); + + // Conflicting changes for the same validators should all fail. + let error = client + .post_beacon_pool_bls_to_execution_changes(&conflicting_address_changes[..num_pre_capella]) + .await + .unwrap_err(); + assert_server_indexed_error(error, 400, (0..num_pre_capella).collect()); + + // Re-submitting the same changes should be accepted. + client + .post_beacon_pool_bls_to_execution_changes(&valid_address_changes[..num_pre_capella]) + .await + .unwrap(); + + // Invalid changes signed with the wrong keys should all be rejected without affecting the seen + // indices filters (apply ALL of them). + let error = client + .post_beacon_pool_bls_to_execution_changes(&wrong_key_address_changes) + .await + .unwrap_err(); + assert_server_indexed_error(error, 400, all_validators.clone()); + + // Advance to right before Capella. 
+ let capella_slot = fork_epoch.start_slot(E::slots_per_epoch()); + harness.extend_to_slot(capella_slot - 1).await; + assert_eq!(harness.head_slot(), capella_slot - 1); + + assert_eq!( + harness + .chain + .op_pool + .get_bls_to_execution_changes_received_pre_capella( + &harness.chain.head_snapshot().beacon_state, + &spec, + ) + .into_iter() + .collect::>(), + HashSet::from_iter(expected_received_pre_capella_messages.into_iter()), + "all pre-capella messages should be queued for capella broadcast" + ); + + // Add Capella blocks which should be full of BLS to execution changes. + for i in 0..validator_count / max_bls_to_execution_changes { + let head_block_root = harness.extend_slots(1).await; + let head_block = harness + .chain + .get_block(&head_block_root) + .await + .unwrap() + .unwrap(); + + let bls_to_execution_changes = head_block + .message() + .body() + .bls_to_execution_changes() + .unwrap(); + + // Block should be full. + assert_eq!( + bls_to_execution_changes.len(), + max_bls_to_execution_changes, + "block not full on iteration {i}" + ); + + // Included changes should be the ones from `valid_address_changes` in any order. + for address_change in bls_to_execution_changes.iter() { + assert!(valid_address_changes.contains(address_change)); + } + + // After the initial 2 blocks, add the rest of the changes using a large + // request containing all the valid, all the conflicting and all the invalid. + // Despite the invalid and duplicate messages, the new ones should still get picked up by + // the pool. 
+ if i == blocks_filled_pre_capella - 1 { + let all_address_changes: Vec<_> = [ + valid_address_changes.clone(), + conflicting_address_changes.clone(), + wrong_key_address_changes.clone(), + ] + .concat(); + + let error = client + .post_beacon_pool_bls_to_execution_changes(&all_address_changes) + .await + .unwrap_err(); + assert_server_indexed_error( + error, + 400, + (validator_count..3 * validator_count).collect(), + ); + } + } + + // Eventually all validators should have eth1 withdrawal credentials. + let head_state = harness.get_current_state(); + for validator in head_state.validators() { + assert!(validator.has_eth1_withdrawal_credential(&spec)); + } +} diff --git a/beacon_node/http_api/tests/interactive_tests.rs b/beacon_node/http_api/tests/interactive_tests.rs index 17a3624afed..da92419744e 100644 --- a/beacon_node/http_api/tests/interactive_tests.rs +++ b/beacon_node/http_api/tests/interactive_tests.rs @@ -1,14 +1,16 @@ //! Generic tests that make use of the (newer) `InteractiveApiTester` -use crate::common::*; use beacon_chain::{ - chain_config::ReOrgThreshold, - test_utils::{AttestationStrategy, BlockStrategy}, + chain_config::{DisallowedReOrgOffsets, ReOrgThreshold}, + test_utils::{AttestationStrategy, BlockStrategy, SyncCommitteeStrategy}, }; use eth2::types::DepositContractData; -use execution_layer::{ForkChoiceState, PayloadAttributes}; +use execution_layer::{ForkchoiceState, PayloadAttributes}; +use http_api::test_utils::InteractiveTester; use parking_lot::Mutex; use slot_clock::SlotClock; -use state_processing::state_advance::complete_state_advance; +use state_processing::{ + per_block_processing::get_expected_withdrawals, state_advance::complete_state_advance, +}; use std::collections::HashMap; use std::sync::Arc; use std::time::Duration; @@ -55,7 +57,7 @@ struct ForkChoiceUpdates { #[derive(Debug, Clone)] struct ForkChoiceUpdateMetadata { received_at: Duration, - state: ForkChoiceState, + state: ForkchoiceState, payload_attributes: Option, } @@ 
-86,7 +88,7 @@ impl ForkChoiceUpdates { .payload_attributes .as_ref() .map_or(false, |payload_attributes| { - payload_attributes.timestamp == proposal_timestamp + payload_attributes.timestamp() == proposal_timestamp }) }) .cloned() @@ -106,13 +108,17 @@ pub struct ReOrgTest { percent_head_votes: usize, should_re_org: bool, misprediction: bool, + /// Whether to expect withdrawals to change on epoch boundaries. + expect_withdrawals_change_on_epoch: bool, + /// Epoch offsets to avoid proposing reorg blocks at. + disallowed_offsets: Vec, } impl Default for ReOrgTest { /// Default config represents a regular easy re-org. fn default() -> Self { Self { - head_slot: Slot::new(30), + head_slot: Slot::new(E::slots_per_epoch() - 2), parent_distance: 1, head_distance: 1, re_org_threshold: 20, @@ -122,6 +128,8 @@ impl Default for ReOrgTest { percent_head_votes: 0, should_re_org: true, misprediction: false, + expect_withdrawals_change_on_epoch: false, + disallowed_offsets: vec![], } } } @@ -136,8 +144,35 @@ pub async fn proposer_boost_re_org_zero_weight() { #[tokio::test(flavor = "multi_thread", worker_threads = 2)] pub async fn proposer_boost_re_org_epoch_boundary() { proposer_boost_re_org_test(ReOrgTest { - head_slot: Slot::new(31), + head_slot: Slot::new(E::slots_per_epoch() - 1), + should_re_org: false, + ..Default::default() + }) + .await; +} + +#[tokio::test(flavor = "multi_thread", worker_threads = 2)] +pub async fn proposer_boost_re_org_epoch_boundary_skip1() { + // Proposing a block on a boundary after a skip will change the set of expected withdrawals + // sent in the payload attributes. 
+ proposer_boost_re_org_test(ReOrgTest { + head_slot: Slot::new(2 * E::slots_per_epoch() - 2), + head_distance: 2, + should_re_org: false, + expect_withdrawals_change_on_epoch: true, + ..Default::default() + }) + .await; +} + +#[tokio::test(flavor = "multi_thread", worker_threads = 2)] +pub async fn proposer_boost_re_org_epoch_boundary_skip32() { + // Propose a block at 64 after a whole epoch of skipped slots. + proposer_boost_re_org_test(ReOrgTest { + head_slot: Slot::new(E::slots_per_epoch() - 1), + head_distance: E::slots_per_epoch() + 1, should_re_org: false, + expect_withdrawals_change_on_epoch: true, ..Default::default() }) .await; @@ -187,7 +222,7 @@ pub async fn proposer_boost_re_org_finality() { #[tokio::test(flavor = "multi_thread", worker_threads = 2)] pub async fn proposer_boost_re_org_parent_distance() { proposer_boost_re_org_test(ReOrgTest { - head_slot: Slot::new(30), + head_slot: Slot::new(E::slots_per_epoch() - 2), parent_distance: 2, should_re_org: false, ..Default::default() @@ -198,7 +233,7 @@ pub async fn proposer_boost_re_org_parent_distance() { #[tokio::test(flavor = "multi_thread", worker_threads = 2)] pub async fn proposer_boost_re_org_head_distance() { proposer_boost_re_org_test(ReOrgTest { - head_slot: Slot::new(29), + head_slot: Slot::new(E::slots_per_epoch() - 3), head_distance: 2, should_re_org: false, ..Default::default() @@ -206,10 +241,36 @@ pub async fn proposer_boost_re_org_head_distance() { .await; } +// Check that a re-org at a disallowed offset fails. +#[tokio::test(flavor = "multi_thread", worker_threads = 2)] +pub async fn proposer_boost_re_org_disallowed_offset() { + let offset = 4; + proposer_boost_re_org_test(ReOrgTest { + head_slot: Slot::new(E::slots_per_epoch() + offset - 1), + disallowed_offsets: vec![offset], + should_re_org: false, + ..Default::default() + }) + .await; +} + +// Check that a re-org at the *only* allowed offset succeeds. 
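The disallowed-offset behaviour these two tests exercise reduces to a membership check on the block's slot offset within its epoch. A sketch under that assumption (names are illustrative, not the actual `DisallowedReOrgOffsets` API):

```rust
// Sketch: a proposer only attempts a re-org when the candidate block's
// offset within the epoch (slot mod slots_per_epoch) is not disallowed.
fn re_org_offset_allowed(slot: u64, slots_per_epoch: u64, disallowed: &[u64]) -> bool {
    !disallowed.contains(&(slot % slots_per_epoch))
}
```

So disallowing offset 4 blocks a re-org proposal at slot 36 (on a 32-slot epoch), while disallowing every offset except 4 still permits it there.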
+#[tokio::test(flavor = "multi_thread", worker_threads = 2)] +pub async fn proposer_boost_re_org_disallowed_offset_exact() { + let offset = 4; + let disallowed_offsets = (0..E::slots_per_epoch()).filter(|o| *o != offset).collect(); + proposer_boost_re_org_test(ReOrgTest { + head_slot: Slot::new(E::slots_per_epoch() + offset - 1), + disallowed_offsets, + ..Default::default() + }) + .await; +} + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] pub async fn proposer_boost_re_org_very_unhealthy() { proposer_boost_re_org_test(ReOrgTest { - head_slot: Slot::new(31), + head_slot: Slot::new(E::slots_per_epoch() - 1), parent_distance: 2, head_distance: 2, percent_parent_votes: 10, @@ -225,7 +286,6 @@ pub async fn proposer_boost_re_org_very_unhealthy() { #[tokio::test(flavor = "multi_thread", worker_threads = 2)] pub async fn proposer_boost_re_org_weight_misprediction() { proposer_boost_re_org_test(ReOrgTest { - head_slot: Slot::new(30), percent_empty_votes: 70, percent_head_votes: 30, should_re_org: false, @@ -254,12 +314,14 @@ pub async fn proposer_boost_re_org_test( percent_head_votes, should_re_org, misprediction, + expect_withdrawals_change_on_epoch, + disallowed_offsets, }: ReOrgTest, ) { assert!(head_slot > 0); - // We require a network with execution enabled so we can check EL message timings. - let mut spec = ForkName::Merge.make_genesis_spec(E::default_spec()); + // Test using Capella so that we simulate conditions as similar to mainnet as possible. + let mut spec = ForkName::Capella.make_genesis_spec(E::default_spec()); spec.terminal_total_difficulty = 1.into(); // Ensure there are enough validators to have `attesters_per_slot`. 
@@ -278,15 +340,19 @@ pub async fn proposer_boost_re_org_test( let num_empty_votes = Some(attesters_per_slot * percent_empty_votes / 100); let num_head_votes = Some(attesters_per_slot * percent_head_votes / 100); - let tester = InteractiveTester::::new_with_mutator( + let tester = InteractiveTester::::new_with_initializer_and_mutator( Some(spec), validator_count, + None, Some(Box::new(move |builder| { builder .proposer_re_org_threshold(Some(ReOrgThreshold(re_org_threshold))) .proposer_re_org_max_epochs_since_finalization(Epoch::new( max_epochs_since_finalization, )) + .proposer_re_org_disallowed_offsets( + DisallowedReOrgOffsets::new::(disallowed_offsets).unwrap(), + ) })), ) .await; @@ -322,13 +388,15 @@ pub async fn proposer_boost_re_org_test( ) .await; - // Create some chain depth. + // Create some chain depth. Sign sync committee signatures so validator balances don't dip + // below 32 ETH and become ineligible for withdrawals. harness.advance_slot(); harness - .extend_chain( + .extend_chain_with_sync( num_initial as usize, BlockStrategy::OnCanonicalHead, AttestationStrategy::AllValidators, + SyncCommitteeStrategy::AllValidators, ) .await; @@ -342,7 +410,7 @@ pub async fn proposer_boost_re_org_test( .lock() .set_forkchoice_updated_hook(Box::new(move |state, payload_attributes| { let received_at = chain_inner.slot_clock.now_duration().unwrap(); - let state = ForkChoiceState::from(state); + let state = ForkchoiceState::from(state); let payload_attributes = payload_attributes.map(Into::into); let update = ForkChoiceUpdateMetadata { received_at, @@ -363,6 +431,16 @@ pub async fn proposer_boost_re_org_test( let slot_b = slot_a + parent_distance; let slot_c = slot_b + head_distance; + // We need to transition to at least epoch 2 in order to trigger + // `process_rewards_and_penalties`. This allows us to test withdrawals changes at epoch + // boundaries. 
+ if expect_withdrawals_change_on_epoch { + assert!( + slot_c.epoch(E::slots_per_epoch()) >= 2, + "for withdrawals to change, test must end at an epoch >= 2" + ); + } + harness.advance_slot(); let (block_a_root, block_a, state_a) = harness .add_block_at_slot(slot_a, harness.get_current_state()) @@ -456,6 +534,10 @@ pub async fn proposer_boost_re_org_test( // Produce block C. // Advance state_b so we can get the proposer. + assert_eq!(state_b.slot(), slot_b); + let pre_advance_withdrawals = get_expected_withdrawals(&state_b, &harness.chain.spec) + .unwrap() + .to_vec(); complete_state_advance(&mut state_b, None, slot_c, &harness.chain.spec).unwrap(); let proposer_index = state_b @@ -513,6 +595,28 @@ pub async fn proposer_boost_re_org_test( .unwrap(); let payload_attribs = first_update.payload_attributes.as_ref().unwrap(); + // Check that withdrawals from the payload attributes match those computed from the parent's + // advanced state. + let expected_withdrawals = if should_re_org { + let mut state_a_advanced = state_a.clone(); + complete_state_advance(&mut state_a_advanced, None, slot_c, &harness.chain.spec).unwrap(); + get_expected_withdrawals(&state_a_advanced, &harness.chain.spec) + } else { + get_expected_withdrawals(&state_b, &harness.chain.spec) + } + .unwrap() + .to_vec(); + let payload_attribs_withdrawals = payload_attribs.withdrawals().unwrap(); + assert_eq!(expected_withdrawals, *payload_attribs_withdrawals); + assert!(!expected_withdrawals.is_empty()); + + if should_re_org + || expect_withdrawals_change_on_epoch + && slot_c.epoch(E::slots_per_epoch()) != slot_b.epoch(E::slots_per_epoch()) + { + assert_ne!(expected_withdrawals, pre_advance_withdrawals); + } + let lookahead = slot_clock .start_of(slot_c) .unwrap() @@ -521,16 +625,20 @@ pub async fn proposer_boost_re_org_test( if !misprediction { assert_eq!( - lookahead, payload_lookahead, + lookahead, + payload_lookahead, "lookahead={lookahead:?}, timestamp={}, prev_randao={:?}", - 
payload_attribs.timestamp, payload_attribs.prev_randao, + payload_attribs.timestamp(), + payload_attribs.prev_randao(), ); } else { // On a misprediction we issue the first fcU 500ms before creating a block! assert_eq!( - lookahead, fork_choice_lookahead, + lookahead, + fork_choice_lookahead, "timestamp={}, prev_randao={:?}", - payload_attribs.timestamp, payload_attribs.prev_randao, + payload_attribs.timestamp(), + payload_attribs.prev_randao(), ); } } @@ -540,7 +648,7 @@ pub async fn proposer_boost_re_org_test( pub async fn fork_choice_before_proposal() { // Validator count needs to be at least 32 or proposer boost gets set to 0 when computing // `validator_count // 32`. - let validator_count = 32; + let validator_count = 64; let all_validators = (0..validator_count).collect::>(); let num_initial: u64 = 31; diff --git a/beacon_node/http_api/tests/main.rs b/beacon_node/http_api/tests/main.rs index ca6a27530a6..342b72cc7de 100644 --- a/beacon_node/http_api/tests/main.rs +++ b/beacon_node/http_api/tests/main.rs @@ -1,7 +1,5 @@ #![cfg(not(debug_assertions))] // Tests are too slow in debug. 
-#![recursion_limit = "256"] -pub mod common; pub mod fork_tests; pub mod interactive_tests; pub mod tests; diff --git a/beacon_node/http_api/tests/tests.rs b/beacon_node/http_api/tests/tests.rs index 2e795e522d5..a54f17e96f6 100644 --- a/beacon_node/http_api/tests/tests.rs +++ b/beacon_node/http_api/tests/tests.rs @@ -1,4 +1,3 @@ -use crate::common::{create_api_server, create_api_server_on_port, ApiServer}; use beacon_chain::test_utils::RelativeSyncCommittee; use beacon_chain::{ test_utils::{AttestationStrategy, BeaconChainHarness, BlockStrategy, EphemeralHarnessType}, @@ -8,20 +7,26 @@ use environment::null_logger; use eth2::{ mixin::{RequestAccept, ResponseForkName, ResponseOptional}, reqwest::RequestBuilder, - types::{BlockId as CoreBlockId, StateId as CoreStateId, *}, + types::{BlockId as CoreBlockId, ForkChoiceNode, StateId as CoreStateId, *}, BeaconNodeHttpClient, Error, StatusCode, Timeouts, }; -use execution_layer::test_utils::Operation; use execution_layer::test_utils::TestingBuilder; use execution_layer::test_utils::DEFAULT_BUILDER_THRESHOLD_WEI; +use execution_layer::test_utils::{ + Operation, DEFAULT_BUILDER_PAYLOAD_VALUE_WEI, DEFAULT_MOCK_EL_PAYLOAD_VALUE_WEI, +}; use futures::stream::{Stream, StreamExt}; use futures::FutureExt; -use http_api::{BlockId, StateId}; +use http_api::{ + test_utils::{create_api_server, create_api_server_on_port, ApiServer}, + BlockId, StateId, +}; use lighthouse_network::{Enr, EnrExt, PeerId}; use network::NetworkReceivers; use proto_array::ExecutionStatus; use sensitive_url::SensitiveUrl; use slot_clock::SlotClock; +use state_processing::per_block_processing::get_expected_withdrawals; use state_processing::per_slot_processing; use std::convert::TryInto; use std::sync::Arc; @@ -72,38 +77,53 @@ struct ApiTester { mock_builder: Option>>, } +struct ApiTesterConfig { + spec: ChainSpec, + builder_threshold: Option, +} + +impl Default for ApiTesterConfig { + fn default() -> Self { + let mut spec = E::default_spec(); + 
spec.shard_committee_period = 2; + Self { + spec, + builder_threshold: None, + } + } +} + impl ApiTester { pub async fn new() -> Self { // This allows for testing voluntary exits without building out a massive chain. - let mut spec = E::default_spec(); - spec.shard_committee_period = 2; - Self::new_from_spec(spec).await + Self::new_from_config(ApiTesterConfig::default()).await } pub async fn new_with_hard_forks(altair: bool, bellatrix: bool) -> Self { - let mut spec = E::default_spec(); - spec.shard_committee_period = 2; + let mut config = ApiTesterConfig::default(); // Set whether the chain has undergone each hard fork. if altair { - spec.altair_fork_epoch = Some(Epoch::new(0)); + config.spec.altair_fork_epoch = Some(Epoch::new(0)); } if bellatrix { - spec.bellatrix_fork_epoch = Some(Epoch::new(0)); + config.spec.bellatrix_fork_epoch = Some(Epoch::new(0)); } - Self::new_from_spec(spec).await + Self::new_from_config(config).await } - pub async fn new_from_spec(spec: ChainSpec) -> Self { + pub async fn new_from_config(config: ApiTesterConfig) -> Self { // Get a random unused port - let port = unused_port::unused_tcp_port().unwrap(); + let spec = config.spec; + let port = unused_port::unused_tcp4_port().unwrap(); let beacon_url = SensitiveUrl::parse(format!("http://127.0.0.1:{port}").as_str()).unwrap(); let harness = Arc::new( BeaconChainHarness::builder(MainnetEthSpec) .spec(spec.clone()) + .logger(logging::test_logger()) .deterministic_keypairs(VALIDATOR_COUNT) .fresh_ephemeral_store() - .mock_execution_layer_with_builder(beacon_url.clone()) + .mock_execution_layer_with_builder(beacon_url.clone(), config.builder_threshold) .build(), ); @@ -358,6 +378,28 @@ impl ApiTester { tester } + pub async fn new_mev_tester_no_builder_threshold() -> Self { + let mut config = ApiTesterConfig { + builder_threshold: Some(0), + spec: E::default_spec(), + }; + config.spec.altair_fork_epoch = Some(Epoch::new(0)); + config.spec.bellatrix_fork_epoch = Some(Epoch::new(0)); + let tester 
= Self::new_from_config(config) + .await + .test_post_validator_register_validator() + .await; + tester + .mock_builder + .as_ref() + .unwrap() + .builder + .add_operation(Operation::Value(Uint256::from( + DEFAULT_BUILDER_PAYLOAD_VALUE_WEI, + ))); + tester + } + fn skip_slots(self, count: u64) -> Self { for _ in 0..count { self.chain @@ -422,6 +464,264 @@ impl ApiTester { self } + // finalization tests + pub async fn test_beacon_states_root_finalized(self) -> Self { + for state_id in self.interesting_state_ids() { + let state_root = state_id.root(&self.chain); + let state = state_id.state(&self.chain); + + // if .root or .state fail, skip the test. those would be errors outside the scope + // of this test, here we're testing the finalized field assuming the call to .is_finalized_state + // occurs after the state_root and state calls, and that the state_root and state calls + // were correct. + if state_root.is_err() || state.is_err() { + continue; + } + + // now that we know the state is valid, we can unwrap() everything we need + let result = self + .client + .get_beacon_states_root(state_id.0) + .await + .unwrap() + .unwrap() + .finalized + .unwrap(); + + let (state_root, _, _) = state_root.unwrap(); + let (state, _, _) = state.unwrap(); + let state_slot = state.slot(); + let expected = self + .chain + .is_finalized_state(&state_root, state_slot) + .unwrap(); + + assert_eq!(result, expected, "{:?}", state_id); + } + + self + } + + pub async fn test_beacon_states_fork_finalized(self) -> Self { + for state_id in self.interesting_state_ids() { + let state_root = state_id.root(&self.chain); + let state = state_id.state(&self.chain); + + // if .root or .state fail, skip the test. those would be errors outside the scope + // of this test, here we're testing the finalized field assuming the call to .is_finalized_state + // occurs after the state_root and state calls, and that the state_root and state calls + // were correct. 
+ if state_root.is_err() || state.is_err() { + continue; + } + + // now that we know the state is valid, we can unwrap() everything we need + let result = self + .client + .get_beacon_states_fork(state_id.0) + .await + .unwrap() + .unwrap() + .finalized + .unwrap(); + + let (state_root, _, _) = state_root.unwrap(); + let (state, _, _) = state.unwrap(); + let state_slot = state.slot(); + let expected = self + .chain + .is_finalized_state(&state_root, state_slot) + .unwrap(); + + assert_eq!(result, expected, "{:?}", state_id); + } + + self + } + + pub async fn test_beacon_states_finality_checkpoints_finalized(self) -> Self { + for state_id in self.interesting_state_ids() { + let state_root = state_id.root(&self.chain); + let state = state_id.state(&self.chain); + + // if .root or .state fail, skip the test. those would be errors outside the scope + of this test, here we're testing the finalized field assuming the call to .is_finalized_state + occurs after the state_root and state calls, and that the state_root and state calls + were correct. + if state_root.is_err() || state.is_err() { + continue; + } + + // now that we know the state is valid, we can unwrap() everything we need + let result = self + .client + .get_beacon_states_finality_checkpoints(state_id.0) + .await + .unwrap() + .unwrap() + .finalized + .unwrap(); + + let (state_root, _, _) = state_root.unwrap(); + let (state, _, _) = state.unwrap(); + let state_slot = state.slot(); + let expected = self + .chain + .is_finalized_state(&state_root, state_slot) + .unwrap(); + + assert_eq!(result, expected, "{:?}", state_id); + } + + self + } + + pub async fn test_beacon_headers_block_id_finalized(self) -> Self { + for block_id in self.interesting_block_ids() { + let block_root = block_id.root(&self.chain); + let block = block_id.full_block(&self.chain).await; + + // if .root or .full_block fail, skip the test.
those would be errors outside the scope + of this test, here we're testing the finalized field assuming the call to .is_finalized_block + occurs after those calls, and that they were correct. + if block_root.is_err() || block.is_err() { + continue; + } + + // now that we know the block is valid, we can unwrap() everything we need + let result = self + .client + .get_beacon_headers_block_id(block_id.0) + .await + .unwrap() + .unwrap() + .finalized + .unwrap(); + + let (block_root, _, _) = block_root.unwrap(); + let (block, _, _) = block.unwrap(); + let block_slot = block.slot(); + let expected = self + .chain + .is_finalized_block(&block_root, block_slot) + .unwrap(); + + assert_eq!(result, expected, "{:?}", block_id); + } + + self + } + + pub async fn test_beacon_blocks_finalized(self) -> Self { + for block_id in self.interesting_block_ids() { + let block_root = block_id.root(&self.chain); + let block = block_id.full_block(&self.chain).await; + + // if .root or .full_block fail, skip the test. those would be errors outside the scope + of this test, here we're testing the finalized field assuming the call to .is_finalized_block + occurs after those calls, and that they were correct.
+ if block_root.is_err() || block.is_err() { + continue; + } + + // now that we know the block is valid, we can unwrap() everything we need + let result = self + .client + .get_beacon_blocks::(block_id.0) + .await + .unwrap() + .unwrap() + .finalized + .unwrap(); + + let (block_root, _, _) = block_root.unwrap(); + let (block, _, _) = block.unwrap(); + let block_slot = block.slot(); + let expected = self + .chain + .is_finalized_block(&block_root, block_slot) + .unwrap(); + + assert_eq!(result, expected, "{:?}", block_id); + } + + self + } + + pub async fn test_beacon_blinded_blocks_finalized(self) -> Self { + for block_id in self.interesting_block_ids() { + let block_root = block_id.root(&self.chain); + let block = block_id.full_block(&self.chain).await; + + // if .root or .full_block fail, skip the test. those would be errors outside the scope + // of this test, here we're testing the finalized field assuming the call to .is_finalized_block + // occurs after those calls, and that they were correct. + if block_root.is_err() || block.is_err() { + continue; + } + + // now that we know the block is valid, we can unwrap() everything we need + let result = self + .client + .get_beacon_blinded_blocks::(block_id.0) + .await + .unwrap() + .unwrap() + .finalized + .unwrap(); + + let (block_root, _, _) = block_root.unwrap(); + let (block, _, _) = block.unwrap(); + let block_slot = block.slot(); + let expected = self + .chain + .is_finalized_block(&block_root, block_slot) + .unwrap(); + + assert_eq!(result, expected, "{:?}", block_id); + } + + self + } + + pub async fn test_debug_beacon_states_finalized(self) -> Self { + for state_id in self.interesting_state_ids() { + let state_root = state_id.root(&self.chain); + let state = state_id.state(&self.chain); + + // if .root or .state fail, skip the test. 
those would be errors outside the scope + // of this test, here we're testing the finalized field assuming the call to .is_finalized_state + // occurs after the state_root and state calls, and that the state_root and state calls + // were correct. + if state_root.is_err() || state.is_err() { + continue; + } + + // now that we know the state is valid, we can unwrap() everything we need + let result = self + .client + .get_debug_beacon_states::(state_id.0) + .await + .unwrap() + .unwrap() + .finalized + .unwrap(); + + let (state_root, _, _) = state_root.unwrap(); + let (state, _, _) = state.unwrap(); + let state_slot = state.slot(); + let expected = self + .chain + .is_finalized_state(&state_root, state_slot) + .unwrap(); + + assert_eq!(result, expected, "{:?}", state_id); + } + + self + } + pub async fn test_beacon_states_root(self) -> Self { for state_id in self.interesting_state_ids() { let result = self @@ -434,7 +734,7 @@ impl ApiTester { let expected = state_id .root(&self.chain) .ok() - .map(|(root, _execution_optimistic)| root); + .map(|(root, _execution_optimistic, _finalized)| root); assert_eq!(result, expected, "{:?}", state_id); } @@ -468,15 +768,13 @@ impl ApiTester { .unwrap() .map(|res| res.data); - let expected = - state_id - .state(&self.chain) - .ok() - .map(|(state, _execution_optimistic)| FinalityCheckpointsData { - previous_justified: state.previous_justified_checkpoint(), - current_justified: state.current_justified_checkpoint(), - finalized: state.finalized_checkpoint(), - }); + let expected = state_id.state(&self.chain).ok().map( + |(state, _execution_optimistic, _finalized)| FinalityCheckpointsData { + previous_justified: state.previous_justified_checkpoint(), + current_justified: state.current_justified_checkpoint(), + finalized: state.finalized_checkpoint(), + }, + ); assert_eq!(result, expected, "{:?}", state_id); } @@ -489,7 +787,9 @@ impl ApiTester { for validator_indices in self.interesting_validator_indices() { let state_opt = 
state_id.state(&self.chain).ok(); let validators: Vec<Validator> = match state_opt.as_ref() { - Some((state, _execution_optimistic)) => state.validators().clone().into(), + Some((state, _execution_optimistic, _finalized)) => { + state.validators().clone().into() + } None => vec![], }; let validator_index_ids = validator_indices @@ -528,7 +828,7 @@ impl ApiTester { .unwrap() .map(|res| res.data); - let expected = state_opt.map(|(state, _execution_optimistic)| { + let expected = state_opt.map(|(state, _execution_optimistic, _finalized)| { let mut validators = Vec::with_capacity(validator_indices.len()); for i in validator_indices { @@ -558,7 +858,7 @@ impl ApiTester { let state_opt = state_id .state(&self.chain) .ok() - .map(|(state, _execution_optimistic)| state); + .map(|(state, _execution_optimistic, _finalized)| state); let validators: Vec<Validator> = match state_opt.as_ref() { Some(state) => state.validators().clone().into(), None => vec![], @@ -648,7 +948,7 @@ impl ApiTester { let state_opt = state_id .state(&self.chain) .ok() - .map(|(state, _execution_optimistic)| state); + .map(|(state, _execution_optimistic, _finalized)| state); let validators = match state_opt.as_ref() { Some(state) => state.validators().clone().into(), None => vec![], @@ -703,7 +1003,7 @@ impl ApiTester { let mut state_opt = state_id .state(&self.chain) .ok() - .map(|(state, _execution_optimistic)| state); + .map(|(state, _execution_optimistic, _finalized)| state); let epoch_opt = state_opt.as_ref().map(|state| state.current_epoch()); let results = self @@ -750,7 +1050,7 @@ impl ApiTester { let mut state_opt = state_id .state(&self.chain) .ok() - .map(|(state, _execution_optimistic)| state); + .map(|(state, _execution_optimistic, _finalized)| state); let epoch_opt = state_opt.as_ref().map(|state| state.current_epoch()); let result = self @@ -860,7 +1160,7 @@ impl ApiTester { let block_root_opt = block_id .root(&self.chain) .ok() - .map(|(root, _execution_optimistic)| root); + .map(|(root,
_execution_optimistic, _finalized)| root); if let CoreBlockId::Slot(slot) = block_id.0 { if block_root_opt.is_none() { @@ -874,7 +1174,7 @@ impl ApiTester { .full_block(&self.chain) .await .ok() - .map(|(block, _execution_optimistic)| block); + .map(|(block, _execution_optimistic, _finalized)| block); if block_opt.is_none() && result.is_none() { continue; @@ -920,7 +1220,7 @@ impl ApiTester { let expected = block_id .root(&self.chain) .ok() - .map(|(root, _execution_optimistic)| root); + .map(|(root, _execution_optimistic, _finalized)| root); if let CoreBlockId::Slot(slot) = block_id.0 { if expected.is_none() { assert!(SKIPPED_SLOTS.contains(&slot.as_u64())); @@ -967,7 +1267,7 @@ impl ApiTester { .full_block(&self.chain) .await .ok() - .map(|(block, _execution_optimistic)| block); + .map(|(block, _execution_optimistic, _finalized)| block); if let CoreBlockId::Slot(slot) = block_id.0 { if expected.is_none() { @@ -1051,7 +1351,7 @@ impl ApiTester { let expected = block_id .blinded_block(&self.chain) .ok() - .map(|(block, _execution_optimistic)| block); + .map(|(block, _execution_optimistic, _finalized)| block); if let CoreBlockId::Slot(slot) = block_id.0 { if expected.is_none() { @@ -1132,7 +1432,7 @@ impl ApiTester { .map(|res| res.data); let expected = block_id.full_block(&self.chain).await.ok().map( - |(block, _execution_optimistic)| { + |(block, _execution_optimistic, _finalized)| { block.message().body().attestations().clone().into() }, ); @@ -1372,9 +1672,9 @@ impl ApiTester { pub async fn test_get_config_spec(self) -> Self { let result = self .client - .get_config_spec::() + .get_config_spec::() .await - .map(|res| ConfigAndPreset::Bellatrix(res.data)) + .map(|res| ConfigAndPreset::Capella(res.data)) .unwrap(); let expected = ConfigAndPreset::from_chain_spec::(&self.chain.spec, None); @@ -1553,7 +1853,7 @@ impl ApiTester { let mut expected = state_id .state(&self.chain) .ok() - .map(|(state, _execution_optimistic)| state); + .map(|(state, 
_execution_optimistic, _finalized)| state); expected.as_mut().map(|state| state.drop_all_caches()); if let (Some(json), Some(expected)) = (&result_json, &expected) { @@ -1575,21 +1875,6 @@ impl ApiTester { .unwrap(); assert_eq!(result_ssz, expected, "{:?}", state_id); - // Check legacy v1 API. - let result_v1 = self - .client - .get_debug_beacon_states_v1(state_id.0) - .await - .unwrap(); - - if let (Some(json), Some(expected)) = (&result_v1, &expected) { - assert_eq!(json.version, None); - assert_eq!(json.data, *expected, "{:?}", state_id); - } else { - assert_eq!(result_v1, None); - assert_eq!(expected, None); - } - // Check that version headers are provided. let url = self .client @@ -1639,6 +1924,59 @@ impl ApiTester { self } + pub async fn test_get_debug_fork_choice(self) -> Self { + let result = self.client.get_debug_fork_choice().await.unwrap(); + + let beacon_fork_choice = self.chain.canonical_head.fork_choice_read_lock(); + + let expected_proto_array = beacon_fork_choice.proto_array().core_proto_array(); + + assert_eq!( + result.justified_checkpoint, + expected_proto_array.justified_checkpoint + ); + assert_eq!( + result.finalized_checkpoint, + expected_proto_array.finalized_checkpoint + ); + + let expected_fork_choice_nodes: Vec = expected_proto_array + .nodes + .iter() + .map(|node| { + let execution_status = if node.execution_status.is_execution_enabled() { + Some(node.execution_status.to_string()) + } else { + None + }; + ForkChoiceNode { + slot: node.slot, + block_root: node.root, + parent_root: node + .parent + .and_then(|index| expected_proto_array.nodes.get(index)) + .map(|parent| parent.root), + justified_epoch: node.justified_checkpoint.map(|checkpoint| checkpoint.epoch), + finalized_epoch: node.finalized_checkpoint.map(|checkpoint| checkpoint.epoch), + weight: node.weight, + validity: execution_status, + execution_block_hash: node + .execution_status + .block_hash() + .map(|block_hash| block_hash.into_root()), + } + }) + .collect(); + + 
assert_eq!(result.fork_choice_nodes, expected_fork_choice_nodes); + + // need to drop beacon_fork_choice here, else borrow checker will complain + // that self cannot be moved out since beacon_fork_choice borrowed self.chain + // and might still live after self is moved out + drop(beacon_fork_choice); + self + } + fn validator_count(&self) -> usize { self.chain.head_snapshot().beacon_state.validators().len() } @@ -2122,7 +2460,7 @@ impl ApiTester { self } - pub async fn test_blinded_block_production>(&self) { + pub async fn test_blinded_block_production>(&self) { let fork = self.chain.canonical_head.cached_head().head_fork(); let genesis_validators_root = self.chain.genesis_validators_root; @@ -2182,7 +2520,7 @@ impl ApiTester { } } - pub async fn test_blinded_block_production_no_verify_randao>( + pub async fn test_blinded_block_production_no_verify_randao>( self, ) -> Self { for _ in 0..E::slots_per_epoch() { @@ -2206,7 +2544,9 @@ impl ApiTester { self } - pub async fn test_blinded_block_production_verify_randao_invalid>( + pub async fn test_blinded_block_production_verify_randao_invalid< + Payload: AbstractExecPayload, + >( self, ) -> Self { let fork = self.chain.canonical_head.cached_head().head_fork(); @@ -2664,7 +3004,7 @@ impl ApiTester { let (proposer_index, randao_reveal) = self.get_test_randao(slot, epoch).await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(slot, &randao_reveal, None) .await @@ -2673,14 +3013,11 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); let expected_fee_recipient = Address::from_low_u64_be(proposer_index as u64); - assert_eq!( - payload.execution_payload_header.fee_recipient, - expected_fee_recipient - ); - assert_eq!(payload.execution_payload_header.gas_limit, 11_111_111); + assert_eq!(payload.fee_recipient(), expected_fee_recipient); + assert_eq!(payload.gas_limit(), 11_111_111); // If this cache is empty, it indicates fallback was not used, so 
the payload came from the // mock builder. @@ -2707,7 +3044,7 @@ impl ApiTester { let (proposer_index, randao_reveal) = self.get_test_randao(slot, epoch).await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(slot, &randao_reveal, None) .await @@ -2716,14 +3053,11 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); let expected_fee_recipient = Address::from_low_u64_be(proposer_index as u64); - assert_eq!( - payload.execution_payload_header.fee_recipient, - expected_fee_recipient - ); - assert_eq!(payload.execution_payload_header.gas_limit, 30_000_000); + assert_eq!(payload.fee_recipient(), expected_fee_recipient); + assert_eq!(payload.gas_limit(), 30_000_000); // This cache should not be populated because fallback should not have been used. assert!(self @@ -2753,7 +3087,7 @@ impl ApiTester { let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(slot, &randao_reveal, None) .await @@ -2762,12 +3096,9 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); - assert_eq!( - payload.execution_payload_header.fee_recipient, - test_fee_recipient - ); + assert_eq!(payload.fee_recipient(), test_fee_recipient); // This cache should not be populated because fallback should not have been used. 
assert!(self @@ -2801,11 +3132,11 @@ impl ApiTester { .beacon_state .latest_execution_payload_header() .unwrap() - .block_hash; + .block_hash(); let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(slot, &randao_reveal, None) .await @@ -2814,12 +3145,9 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); - assert_eq!( - payload.execution_payload_header.parent_hash, - expected_parent_hash - ); + assert_eq!(payload.parent_hash(), expected_parent_hash); // If this cache is populated, it indicates fallback to the local EE was correctly used. assert!(self @@ -2856,7 +3184,7 @@ impl ApiTester { let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(slot, &randao_reveal, None) .await @@ -2865,12 +3193,9 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); - assert_eq!( - payload.execution_payload_header.prev_randao, - expected_prev_randao - ); + assert_eq!(payload.prev_randao(), expected_prev_randao); // If this cache is populated, it indicates fallback to the local EE was correctly used. assert!(self @@ -2901,12 +3226,12 @@ impl ApiTester { .beacon_state .latest_execution_payload_header() .unwrap() - .block_number + .block_number() + 1; let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(slot, &randao_reveal, None) .await @@ -2915,12 +3240,9 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); - assert_eq!( - payload.execution_payload_header.block_number, - expected_block_number - ); + assert_eq!(payload.block_number(), expected_block_number); // If this cache is populated, it indicates fallback to the local EE was correctly used. 
assert!(self @@ -2951,11 +3273,11 @@ impl ApiTester { .beacon_state .latest_execution_payload_header() .unwrap() - .timestamp; + .timestamp(); let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(slot, &randao_reveal, None) .await @@ -2964,9 +3286,9 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); - assert!(payload.execution_payload_header.timestamp > min_expected_timestamp); + assert!(payload.timestamp() > min_expected_timestamp); // If this cache is populated, it indicates fallback to the local EE was correctly used. assert!(self @@ -2991,7 +3313,7 @@ impl ApiTester { let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(slot, &randao_reveal, None) .await @@ -3000,7 +3322,7 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); // If this cache is populated, it indicates fallback to the local EE was correctly used. assert!(self @@ -3028,7 +3350,7 @@ impl ApiTester { let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(slot, &randao_reveal, None) .await @@ -3037,7 +3359,7 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); // If this cache is populated, it indicates fallback to the local EE was correctly used. 
assert!(self @@ -3071,7 +3393,7 @@ impl ApiTester { .get_test_randao(next_slot, next_slot.epoch(E::slots_per_epoch())) .await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(next_slot, &randao_reveal, None) .await @@ -3080,7 +3402,7 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); // This cache should not be populated because fallback should not have been used. assert!(self @@ -3100,7 +3422,7 @@ impl ApiTester { .get_test_randao(next_slot, next_slot.epoch(E::slots_per_epoch())) .await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(next_slot, &randao_reveal, None) .await @@ -3109,7 +3431,7 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); // If this cache is populated, it indicates fallback to the local EE was correctly used. assert!(self @@ -3149,7 +3471,7 @@ impl ApiTester { .get_test_randao(next_slot, next_slot.epoch(E::slots_per_epoch())) .await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(next_slot, &randao_reveal, None) .await @@ -3158,7 +3480,7 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); // If this cache is populated, it indicates fallback to the local EE was correctly used. assert!(self @@ -3188,7 +3510,7 @@ impl ApiTester { .get_test_randao(next_slot, next_slot.epoch(E::slots_per_epoch())) .await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(next_slot, &randao_reveal, None) .await @@ -3197,7 +3519,7 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); // This cache should not be populated because fallback should not have been used. 
assert!(self @@ -3231,7 +3553,7 @@ impl ApiTester { let (proposer_index, randao_reveal) = self.get_test_randao(slot, epoch).await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(slot, &randao_reveal, None) .await @@ -3240,13 +3562,10 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); let expected_fee_recipient = Address::from_low_u64_be(proposer_index as u64); - assert_eq!( - payload.execution_payload_header.fee_recipient, - expected_fee_recipient - ); + assert_eq!(payload.fee_recipient(), expected_fee_recipient); // If this cache is populated, it indicates fallback to the local EE was correctly used. assert!(self @@ -3275,7 +3594,7 @@ impl ApiTester { let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; - let payload = self + let payload: BlindedPayload = self .client .get_validator_blinded_blocks::>(slot, &randao_reveal, None) .await @@ -3284,7 +3603,7 @@ impl ApiTester { .body() .execution_payload() .unwrap() - .clone(); + .into(); // If this cache is populated, it indicates fallback to the local EE was correctly used. assert!(self @@ -3297,6 +3616,209 @@ impl ApiTester { self } + pub async fn test_builder_payload_chosen_when_more_profitable(self) -> Self { + // Mutate value. 
+ self.mock_builder + .as_ref() + .unwrap() + .builder + .add_operation(Operation::Value(Uint256::from( + DEFAULT_MOCK_EL_PAYLOAD_VALUE_WEI + 1, + ))); + + let slot = self.chain.slot().unwrap(); + let epoch = self.chain.epoch().unwrap(); + + let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; + + let payload: BlindedPayload = self + .client + .get_validator_blinded_blocks::>(slot, &randao_reveal, None) + .await + .unwrap() + .data + .body() + .execution_payload() + .unwrap() + .into(); + + // The builder's payload should've been chosen, so this cache should not be populated + assert!(self + .chain + .execution_layer + .as_ref() + .unwrap() + .get_payload_by_root(&payload.tree_hash_root()) + .is_none()); + self + } + + pub async fn test_local_payload_chosen_when_equally_profitable(self) -> Self { + // Mutate value. + self.mock_builder + .as_ref() + .unwrap() + .builder + .add_operation(Operation::Value(Uint256::from( + DEFAULT_MOCK_EL_PAYLOAD_VALUE_WEI, + ))); + + let slot = self.chain.slot().unwrap(); + let epoch = self.chain.epoch().unwrap(); + + let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; + + let payload: BlindedPayload = self + .client + .get_validator_blinded_blocks::>(slot, &randao_reveal, None) + .await + .unwrap() + .data + .body() + .execution_payload() + .unwrap() + .into(); + + // The local payload should've been chosen, so this cache should be populated + assert!(self + .chain + .execution_layer + .as_ref() + .unwrap() + .get_payload_by_root(&payload.tree_hash_root()) + .is_some()); + self + } + + pub async fn test_local_payload_chosen_when_more_profitable(self) -> Self { + // Mutate value. 
+ self.mock_builder + .as_ref() + .unwrap() + .builder + .add_operation(Operation::Value(Uint256::from( + DEFAULT_MOCK_EL_PAYLOAD_VALUE_WEI - 1, + ))); + + let slot = self.chain.slot().unwrap(); + let epoch = self.chain.epoch().unwrap(); + + let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; + + let payload: BlindedPayload = self + .client + .get_validator_blinded_blocks::>(slot, &randao_reveal, None) + .await + .unwrap() + .data + .body() + .execution_payload() + .unwrap() + .into(); + + // The local payload should've been chosen, so this cache should be populated + assert!(self + .chain + .execution_layer + .as_ref() + .unwrap() + .get_payload_by_root(&payload.tree_hash_root()) + .is_some()); + self + } + + pub async fn test_builder_works_post_capella(self) -> Self { + // Ensure builder payload is chosen + self.mock_builder + .as_ref() + .unwrap() + .builder + .add_operation(Operation::Value(Uint256::from( + DEFAULT_MOCK_EL_PAYLOAD_VALUE_WEI + 1, + ))); + + let slot = self.chain.slot().unwrap(); + let propose_state = self + .harness + .chain + .state_at_slot(slot, StateSkipConfig::WithoutStateRoots) + .unwrap(); + let withdrawals = get_expected_withdrawals(&propose_state, &self.chain.spec).unwrap(); + let withdrawals_root = withdrawals.tree_hash_root(); + // Set withdrawals root for builder + self.mock_builder + .as_ref() + .unwrap() + .builder + .add_operation(Operation::WithdrawalsRoot(withdrawals_root)); + + let epoch = self.chain.epoch().unwrap(); + let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; + + let payload: BlindedPayload = self + .client + .get_validator_blinded_blocks::>(slot, &randao_reveal, None) + .await + .unwrap() + .data + .body() + .execution_payload() + .unwrap() + .into(); + + // The builder's payload should've been chosen, so this cache should not be populated + assert!(self + .chain + .execution_layer + .as_ref() + .unwrap() + .get_payload_by_root(&payload.tree_hash_root()) + .is_none()); + self + } + + pub 
async fn test_lighthouse_rejects_invalid_withdrawals_root(self) -> Self { + // Ensure builder payload *would be* chosen + self.mock_builder + .as_ref() + .unwrap() + .builder + .add_operation(Operation::Value(Uint256::from( + DEFAULT_MOCK_EL_PAYLOAD_VALUE_WEI + 1, + ))); + // Set withdrawals root to something invalid + self.mock_builder + .as_ref() + .unwrap() + .builder + .add_operation(Operation::WithdrawalsRoot(Hash256::repeat_byte(0x42))); + + let slot = self.chain.slot().unwrap(); + let epoch = self.chain.epoch().unwrap(); + let (_, randao_reveal) = self.get_test_randao(slot, epoch).await; + + let payload: BlindedPayload = self + .client + .get_validator_blinded_blocks::>(slot, &randao_reveal, None) + .await + .unwrap() + .data + .body() + .execution_payload() + .unwrap() + .into(); + + // The local payload should've been chosen because the builder's was invalid + assert!(self + .chain + .execution_layer + .as_ref() + .unwrap() + .get_payload_by_root(&payload.tree_hash_root()) + .is_some()); + self + } + #[cfg(target_os = "linux")] pub async fn test_get_lighthouse_health(self) -> Self { self.client.get_lighthouse_health().await.unwrap(); @@ -3380,7 +3902,7 @@ impl ApiTester { let mut expected = state_id .state(&self.chain) .ok() - .map(|(state, _execution_optimistic)| state); + .map(|(state, _execution_optimistic, _finalized)| state); expected.as_mut().map(|state| state.drop_all_caches()); assert_eq!(result, expected, "{:?}", state_id); @@ -3766,9 +4288,9 @@ async fn get_events() { #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn get_events_altair() { - let mut spec = E::default_spec(); - spec.altair_fork_epoch = Some(Epoch::new(0)); - ApiTester::new_from_spec(spec) + let mut config = ApiTesterConfig::default(); + config.spec.altair_fork_epoch = Some(Epoch::new(0)); + ApiTester::new_from_config(config) .await .test_get_events_altair() .await; @@ -3788,6 +4310,20 @@ async fn beacon_get() { .await .test_beacon_genesis() .await + 
.test_beacon_states_root_finalized() + .await + .test_beacon_states_fork_finalized() + .await + .test_beacon_states_finality_checkpoints_finalized() + .await + .test_beacon_headers_block_id_finalized() + .await + .test_beacon_blocks_finalized::() + .await + .test_beacon_blinded_blocks_finalized::() + .await + .test_debug_beacon_states_finalized() + .await .test_beacon_states_root() .await .test_beacon_states_fork() @@ -3924,6 +4460,8 @@ async fn debug_get() { .test_get_debug_beacon_states() .await .test_get_debug_beacon_heads() + .await + .test_get_debug_fork_choice() .await; } @@ -4281,6 +4819,38 @@ async fn builder_inadequate_builder_threshold() { .await; } +#[tokio::test(flavor = "multi_thread", worker_threads = 2)] +async fn builder_payload_chosen_by_profit() { + ApiTester::new_mev_tester_no_builder_threshold() + .await + .test_builder_payload_chosen_when_more_profitable() + .await + .test_local_payload_chosen_when_equally_profitable() + .await + .test_local_payload_chosen_when_more_profitable() + .await; +} + +#[tokio::test(flavor = "multi_thread", worker_threads = 2)] +async fn builder_works_post_capella() { + let mut config = ApiTesterConfig { + builder_threshold: Some(0), + spec: E::default_spec(), + }; + config.spec.altair_fork_epoch = Some(Epoch::new(0)); + config.spec.bellatrix_fork_epoch = Some(Epoch::new(0)); + config.spec.capella_fork_epoch = Some(Epoch::new(0)); + + ApiTester::new_from_config(config) + .await + .test_post_validator_register_validator() + .await + .test_builder_works_post_capella() + .await + .test_lighthouse_rejects_invalid_withdrawals_root() + .await; +} + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn lighthouse_endpoints() { ApiTester::new() diff --git a/beacon_node/http_metrics/src/lib.rs b/beacon_node/http_metrics/src/lib.rs index dfdb8f7ff1b..2895506c3b3 100644 --- a/beacon_node/http_metrics/src/lib.rs +++ b/beacon_node/http_metrics/src/lib.rs @@ -116,7 +116,13 @@ pub fn serve( .and_then(|ctx: Arc>| async 
move { Ok::<_, warp::Rejection>( metrics::gather_prometheus_metrics(&ctx) - .map(|body| Response::builder().status(200).body(body).unwrap()) + .map(|body| { + Response::builder() + .status(200) + .header("Content-Type", "text/plain") + .body(body) + .unwrap() + }) .unwrap_or_else(|e| { Response::builder() .status(500) diff --git a/beacon_node/http_metrics/tests/tests.rs b/beacon_node/http_metrics/tests/tests.rs index b3e02d4cb6f..89fde323746 100644 --- a/beacon_node/http_metrics/tests/tests.rs +++ b/beacon_node/http_metrics/tests/tests.rs @@ -1,6 +1,7 @@ use beacon_chain::test_utils::EphemeralHarnessType; use environment::null_logger; use http_metrics::Config; +use reqwest::header::HeaderValue; use reqwest::StatusCode; use std::net::{IpAddr, Ipv4Addr}; use std::sync::Arc; @@ -45,7 +46,13 @@ async fn returns_200_ok() { listening_socket.port() ); - assert_eq!(reqwest::get(&url).await.unwrap().status(), StatusCode::OK); + let response = reqwest::get(&url).await.unwrap(); + + assert_eq!(response.status(), StatusCode::OK); + assert_eq!( + response.headers().get("Content-Type").unwrap(), + &HeaderValue::from_str("text/plain").unwrap() + ); } .await } diff --git a/beacon_node/lighthouse_network/Cargo.toml b/beacon_node/lighthouse_network/Cargo.toml index c7a2b99c9c3..8928edfb7f9 100644 --- a/beacon_node/lighthouse_network/Cargo.toml +++ b/beacon_node/lighthouse_network/Cargo.toml @@ -5,7 +5,7 @@ authors = ["Sigma Prime "] edition = "2021" [dependencies] -discv5 = { version = "0.1.0", features = ["libp2p"] } +discv5 = { version = "0.2.2", features = ["libp2p"] } unsigned-varint = { version = "0.6.0", features = ["codec"] } types = { path = "../../consensus/types" } eth2_ssz_types = { version = "0.2.2", path = "../../consensus/ssz_types" } @@ -13,6 +13,8 @@ serde = { version = "1.0.116", features = ["derive"] } serde_derive = "1.0.116" eth2_ssz = { version = "0.4.1", path = "../../consensus/ssz" } eth2_ssz_derive = { version = "0.3.0", path = "../../consensus/ssz_derive" } 
+tree_hash = { version = "0.4.1", path = "../../consensus/tree_hash" } +tree_hash_derive = { version = "0.4.0", path = "../../consensus/tree_hash_derive" } slog = { version = "2.5.2", features = ["max_level_trace"] } lighthouse_version = { path = "../../common/lighthouse_version" } tokio = { version = "1.14.0", features = ["time", "macros"] } @@ -25,6 +27,7 @@ lighthouse_metrics = { path = "../../common/lighthouse_metrics" } smallvec = "1.6.1" tokio-io-timeout = "1.1.1" lru = "0.7.1" +lru_cache = { path = "../../common/lru_cache" } parking_lot = "0.12.0" sha2 = "0.10" snap = "1.0.1" @@ -39,7 +42,7 @@ strum = { version = "0.24.0", features = ["derive"] } superstruct = "0.5.0" prometheus-client = "0.18.0" unused_port = { path = "../../common/unused_port" } -delay_map = "0.1.1" +delay_map = "0.3.0" void = "1" [dependencies.libp2p] diff --git a/beacon_node/lighthouse_network/src/config.rs b/beacon_node/lighthouse_network/src/config.rs index 0ae3d9a23b6..d8efa20209c 100644 --- a/beacon_node/lighthouse_network/src/config.rs +++ b/beacon_node/lighthouse_network/src/config.rs @@ -1,3 +1,5 @@ +use crate::listen_addr::{ListenAddr, ListenAddress}; +use crate::rpc::config::OutboundRateLimiterConfig; use crate::types::GossipKind; use crate::{Enr, PeerIdSerialized}; use directory::{ @@ -11,6 +13,7 @@ use libp2p::gossipsub::{ use libp2p::Multiaddr; use serde_derive::{Deserialize, Serialize}; use sha2::{Digest, Sha256}; +use std::net::{Ipv4Addr, Ipv6Addr}; use std::path::PathBuf; use std::sync::Arc; use std::time::Duration; @@ -56,24 +59,24 @@ pub struct Config { /// Data directory where node's keyfile is stored pub network_dir: PathBuf, - /// IP address to listen on. - pub listen_address: std::net::IpAddr, - - /// The TCP port that libp2p listens on. - pub libp2p_port: u16, - - /// UDP port that discovery listens on. - pub discovery_port: u16, + /// IP addresses to listen on. 
+ listen_addresses: ListenAddress, /// The address to broadcast to peers about which address we are listening on. None indicates /// that no discovery address has been set in the CLI args. - pub enr_address: Option, + pub enr_address: (Option, Option), + + /// The udp4 port to broadcast to peers in order to reach back for discovery. + pub enr_udp4_port: Option, + + /// The tcp4 port to broadcast to peers in order to reach back for libp2p services. + pub enr_tcp4_port: Option, - /// The udp port to broadcast to peers in order to reach back for discovery. - pub enr_udp_port: Option, + /// The udp6 port to broadcast to peers in order to reach back for discovery. + pub enr_udp6_port: Option, - /// The tcp port to broadcast to peers in order to reach back for libp2p services. - pub enr_tcp_port: Option, + /// The tcp6 port to broadcast to peers in order to reach back for libp2p services. + pub enr_tcp6_port: Option, /// Target number of connected peers. pub target_peers: usize, @@ -98,6 +101,9 @@ pub struct Config { /// List of trusted libp2p nodes which are not scored. pub trusted_peers: Vec, + /// Disables peer scoring altogether. + pub disable_peer_scoring: bool, + /// Client version pub client_version: String, @@ -133,6 +139,108 @@ pub struct Config { /// Whether light client protocols should be enabled. pub enable_light_client_server: bool, + + /// Configuration for the outbound rate limiter (requests made by this node). + pub outbound_rate_limiter_config: Option, +} + +impl Config { + /// Sets the listening address to use an ipv4 address. The discv5 ip_mode and table filter are + /// adjusted accordingly to ensure addresses that are present in the enr are globally + /// reachable. 
+ pub fn set_ipv4_listening_address(&mut self, addr: Ipv4Addr, tcp_port: u16, udp_port: u16) { + self.listen_addresses = ListenAddress::V4(ListenAddr { + addr, + udp_port, + tcp_port, + }); + self.discv5_config.ip_mode = discv5::IpMode::Ip4; + self.discv5_config.table_filter = |enr| enr.ip4().as_ref().map_or(false, is_global_ipv4) + } + + /// Sets the listening address to use an ipv6 address. The discv5 ip_mode and table filter are + /// adjusted accordingly to ensure addresses that are present in the enr are globally + /// reachable. + pub fn set_ipv6_listening_address(&mut self, addr: Ipv6Addr, tcp_port: u16, udp_port: u16) { + self.listen_addresses = ListenAddress::V6(ListenAddr { + addr, + udp_port, + tcp_port, + }); + self.discv5_config.ip_mode = discv5::IpMode::Ip6 { + enable_mapped_addresses: false, + }; + self.discv5_config.table_filter = |enr| enr.ip6().as_ref().map_or(false, is_global_ipv6) + } + + /// Sets the listening address to use both an ipv4 and ipv6 address. The discv5 ip_mode and + /// table filter are adjusted accordingly to ensure addresses that are present in the enr are + /// globally reachable. 
+ pub fn set_ipv4_ipv6_listening_addresses( + &mut self, + v4_addr: Ipv4Addr, + tcp4_port: u16, + udp4_port: u16, + v6_addr: Ipv6Addr, + tcp6_port: u16, + udp6_port: u16, + ) { + self.listen_addresses = ListenAddress::DualStack( + ListenAddr { + addr: v4_addr, + udp_port: udp4_port, + tcp_port: tcp4_port, + }, + ListenAddr { + addr: v6_addr, + udp_port: udp6_port, + tcp_port: tcp6_port, + }, + ); + + self.discv5_config.ip_mode = discv5::IpMode::Ip6 { + enable_mapped_addresses: true, + }; + self.discv5_config.table_filter = |enr| match (&enr.ip4(), &enr.ip6()) { + (None, None) => false, + (None, Some(ip6)) => is_global_ipv6(ip6), + (Some(ip4), None) => is_global_ipv4(ip4), + (Some(ip4), Some(ip6)) => is_global_ipv4(ip4) && is_global_ipv6(ip6), + }; + } + + pub fn set_listening_addr(&mut self, listen_addr: ListenAddress) { + match listen_addr { + ListenAddress::V4(ListenAddr { + addr, + udp_port, + tcp_port, + }) => self.set_ipv4_listening_address(addr, tcp_port, udp_port), + ListenAddress::V6(ListenAddr { + addr, + udp_port, + tcp_port, + }) => self.set_ipv6_listening_address(addr, tcp_port, udp_port), + ListenAddress::DualStack( + ListenAddr { + addr: ip4addr, + udp_port: udp4_port, + tcp_port: tcp4_port, + }, + ListenAddr { + addr: ip6addr, + udp_port: udp6_port, + tcp_port: tcp6_port, + }, + ) => self.set_ipv4_ipv6_listening_addresses( + ip4addr, tcp4_port, udp4_port, ip6addr, tcp6_port, udp6_port, + ), + } + } + + pub fn listen_addrs(&self) -> &ListenAddress { + &self.listen_addresses + } } impl Default for Config { @@ -179,7 +287,7 @@ impl Default for Config { .filter_rate_limiter(filter_rate_limiter) .filter_max_bans_per_ip(Some(5)) .filter_max_nodes_per_ip(Some(10)) - .table_filter(|enr| enr.ip4().map_or(false, |ip| is_global(&ip))) // Filter non-global IPs + .table_filter(|enr| enr.ip4().map_or(false, |ip| is_global_ipv4(&ip))) // Filter non-global IPs .ban_duration(Some(Duration::from_secs(3600))) .ping_interval(Duration::from_secs(300)) .build(); @@ 
-187,12 +295,16 @@ impl Default for Config { // NOTE: Some of these get overridden by the corresponding CLI default values. Config { network_dir, - listen_address: "0.0.0.0".parse().expect("valid ip address"), - libp2p_port: 9000, - discovery_port: 9000, - enr_address: None, - enr_udp_port: None, - enr_tcp_port: None, + listen_addresses: ListenAddress::V4(ListenAddr { + addr: Ipv4Addr::UNSPECIFIED, + udp_port: 9000, + tcp_port: 9000, + }), + enr_address: (None, None), + enr_udp4_port: None, + enr_tcp4_port: None, + enr_udp6_port: None, + enr_tcp6_port: None, target_peers: 50, gs_config, discv5_config, @@ -200,6 +312,7 @@ impl Default for Config { boot_nodes_multiaddr: vec![], libp2p_nodes: vec![], trusted_peers: vec![], + disable_peer_scoring: false, client_version: lighthouse_version::version_with_platform(), disable_discovery: false, upnp_enabled: true, @@ -211,6 +324,7 @@ impl Default for Config { topics: Vec::new(), metrics_enabled: false, enable_light_client_server: false, + outbound_rate_limiter_config: None, } } } @@ -300,9 +414,7 @@ pub fn gossipsub_config(network_load: u8, fork_context: Arc) -> Gos ) -> Vec { let topic_bytes = message.topic.as_str().as_bytes(); match fork_context.current_fork() { - // according to: https://github.com/ethereum/consensus-specs/blob/dev/specs/merge/p2p-interface.md#the-gossip-domain-gossipsub - // the derivation of the message-id remains the same in the merge - ForkName::Altair | ForkName::Merge => { + ForkName::Altair | ForkName::Merge | ForkName::Capella => { let topic_len_bytes = topic_bytes.len().to_le_bytes(); let mut vec = Vec::with_capacity( prefix.len() + topic_len_bytes.len() + topic_bytes.len() + message.data.len(), @@ -358,7 +470,7 @@ pub fn gossipsub_config(network_load: u8, fork_context: Arc) -> Gos /// Helper function to determine if the IpAddr is a global address or not. The `is_global()` /// function is not yet stable on IpAddr. 
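The helper renamed below approximates the unstable `Ipv4Addr::is_global`. As an illustration of the kind of filtering it performs, here is a minimal standalone sketch (a simplification, not the patch's exact logic — it leans on stable `std::net` predicates):

```rust
use std::net::Ipv4Addr;

/// Simplified stand-in for an `is_global_ipv4`-style check: 192.0.0.9 and
/// 192.0.0.10 are the only globally routable addresses in 192.0.0.0/24,
/// and the well-known non-global ranges are rejected.
pub fn is_global_ipv4(addr: &Ipv4Addr) -> bool {
    let raw = u32::from_be_bytes(addr.octets());
    // The two globally routable exceptions inside 192.0.0.0/24.
    if raw == 0xc000_0009 || raw == 0xc000_000a {
        return true;
    }
    !addr.is_private()
        && !addr.is_loopback()
        && !addr.is_link_local()
        && !addr.is_broadcast()
        && !addr.is_documentation()
        // Nothing in 0.0.0.0/8 is global.
        && addr.octets()[0] != 0
}
```

Discv5 uses such a predicate as a table filter so that only globally reachable addresses enter the routing table.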
#[allow(clippy::nonminimal_bool)] -fn is_global(addr: &std::net::Ipv4Addr) -> bool { +fn is_global_ipv4(addr: &Ipv4Addr) -> bool { // check if this address is 192.0.0.9 or 192.0.0.10. These addresses are the only two // globally routable addresses in the 192.0.0.0/24 range. if u32::from_be_bytes(addr.octets()) == 0xc0000009 @@ -379,3 +491,60 @@ fn is_global(addr: &std::net::Ipv4Addr) -> bool { // Make sure the address is not in 0.0.0.0/8 && addr.octets()[0] != 0 } + +/// NOTE: Docs taken from https://doc.rust-lang.org/stable/std/net/struct.Ipv6Addr.html#method.is_global +/// +/// Returns true if the address appears to be globally reachable as specified by the IANA IPv6 +/// Special-Purpose Address Registry. Whether or not an address is practically reachable will +/// depend on your network configuration. +/// +/// Most IPv6 addresses are globally reachable; unless they are specifically defined as not +/// globally reachable. +/// +/// Non-exhaustive list of notable addresses that are not globally reachable: +/// +/// - The unspecified address (is_unspecified) +/// - The loopback address (is_loopback) +/// - IPv4-mapped addresses +/// - Addresses reserved for benchmarking +/// - Addresses reserved for documentation (is_documentation) +/// - Unique local addresses (is_unique_local) +/// - Unicast addresses with link-local scope (is_unicast_link_local) +// TODO: replace with [`Ipv6Addr::is_global`] once +// [Ip](https://github.com/rust-lang/rust/issues/27709) is stable. 
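The IPv6 helper that follows relies on bit masks over the first 16-bit segment. As a quick sanity check, these two predicates mirror the inner helpers it defines — unique local addresses (`fc00::/7`) and unicast link-local addresses (`fe80::/10`):

```rust
use std::net::Ipv6Addr;

/// Matches fc00::/7 (unique local addresses).
pub fn is_unique_local(addr: &Ipv6Addr) -> bool {
    (addr.segments()[0] & 0xfe00) == 0xfc00
}

/// Matches fe80::/10 (unicast link-local addresses).
pub fn is_unicast_link_local(addr: &Ipv6Addr) -> bool {
    (addr.segments()[0] & 0xffc0) == 0xfe80
}
```

Masking the first segment is enough because both prefixes are shorter than 16 bits.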
+pub const fn is_global_ipv6(addr: &Ipv6Addr) -> bool { + const fn is_documentation(addr: &Ipv6Addr) -> bool { + (addr.segments()[0] == 0x2001) && (addr.segments()[1] == 0xdb8) + } + const fn is_unique_local(addr: &Ipv6Addr) -> bool { + (addr.segments()[0] & 0xfe00) == 0xfc00 + } + const fn is_unicast_link_local(addr: &Ipv6Addr) -> bool { + (addr.segments()[0] & 0xffc0) == 0xfe80 + } + !(addr.is_unspecified() + || addr.is_loopback() + // IPv4-mapped Address (`::ffff:0:0/96`) + || matches!(addr.segments(), [0, 0, 0, 0, 0, 0xffff, _, _]) + // IPv4-IPv6 Translat. (`64:ff9b:1::/48`) + || matches!(addr.segments(), [0x64, 0xff9b, 1, _, _, _, _, _]) + // Discard-Only Address Block (`100::/64`) + || matches!(addr.segments(), [0x100, 0, 0, 0, _, _, _, _]) + // IETF Protocol Assignments (`2001::/23`) + || (matches!(addr.segments(), [0x2001, b, _, _, _, _, _, _] if b < 0x200) + && !( + // Port Control Protocol Anycast (`2001:1::1`) + u128::from_be_bytes(addr.octets()) == 0x2001_0001_0000_0000_0000_0000_0000_0001 + // Traversal Using Relays around NAT Anycast (`2001:1::2`) + || u128::from_be_bytes(addr.octets()) == 0x2001_0001_0000_0000_0000_0000_0000_0002 + // AMT (`2001:3::/32`) + || matches!(addr.segments(), [0x2001, 3, _, _, _, _, _, _]) + // AS112-v6 (`2001:4:112::/48`) + || matches!(addr.segments(), [0x2001, 4, 0x112, _, _, _, _, _]) + // ORCHIDv2 (`2001:20::/28`) + || matches!(addr.segments(), [0x2001, b, _, _, _, _, _, _] if b >= 0x20 && b <= 0x2F) + )) + || is_documentation(addr) + || is_unique_local(addr) + || is_unicast_link_local(addr)) +} diff --git a/beacon_node/lighthouse_network/src/discovery/enr.rs b/beacon_node/lighthouse_network/src/discovery/enr.rs index 6b4b87a5f80..938e7cfa257 100644 --- a/beacon_node/lighthouse_network/src/discovery/enr.rs +++ b/beacon_node/lighthouse_network/src/discovery/enr.rs @@ -145,16 +145,39 @@ pub fn create_enr_builder_from_config( enable_tcp: bool, ) -> EnrBuilder { let mut builder = EnrBuilder::new("v4"); - if let 
Some(enr_address) = config.enr_address { - builder.ip(enr_address); + let (maybe_ipv4_address, maybe_ipv6_address) = &config.enr_address; + + if let Some(ip) = maybe_ipv4_address { + builder.ip4(*ip); + } + + if let Some(ip) = maybe_ipv6_address { + builder.ip6(*ip); + } + + if let Some(udp4_port) = config.enr_udp4_port { + builder.udp4(udp4_port); } - if let Some(udp_port) = config.enr_udp_port { - builder.udp4(udp_port); + + if let Some(udp6_port) = config.enr_udp6_port { + builder.udp6(udp6_port); } - // we always give it our listening tcp port + if enable_tcp { - let tcp_port = config.enr_tcp_port.unwrap_or(config.libp2p_port); - builder.tcp4(tcp_port); + // If the ENR port is not set, and we are listening over that ip version, use the listening port instead. + let tcp4_port = config + .enr_tcp4_port + .or_else(|| config.listen_addrs().v4().map(|v4_addr| v4_addr.tcp_port)); + if let Some(tcp4_port) = tcp4_port { + builder.tcp4(tcp4_port); + } + + let tcp6_port = config + .enr_tcp6_port + .or_else(|| config.listen_addrs().v6().map(|v6_addr| v6_addr.tcp_port)); + if let Some(tcp6_port) = tcp6_port { + builder.tcp6(tcp6_port); + } } builder } diff --git a/beacon_node/lighthouse_network/src/discovery/mod.rs b/beacon_node/lighthouse_network/src/discovery/mod.rs index c41844c2c59..13fdf8ed577 100644 --- a/beacon_node/lighthouse_network/src/discovery/mod.rs +++ b/beacon_node/lighthouse_network/src/discovery/mod.rs @@ -177,6 +177,13 @@ pub struct Discovery { /// always false. pub started: bool, + /// This keeps track of whether an external UDP port change should also indicate an internal + /// TCP port change. As we cannot detect our external TCP port, we assume that the external UDP + /// port is also our external TCP port. This assumption only holds if the user has not + /// explicitly set their ENR TCP port via the CLI config. The first indicates tcp4 and the + /// second indicates tcp6. + update_tcp_port: (bool, bool), + /// Logger for the discovery behaviour. 
log: slog::Logger, } @@ -197,12 +204,18 @@ impl Discovery { }; let local_enr = network_globals.local_enr.read().clone(); + let local_node_id = local_enr.node_id(); info!(log, "ENR Initialised"; "enr" => local_enr.to_base64(), "seq" => local_enr.seq(), "id"=> %local_enr.node_id(), - "ip4" => ?local_enr.ip4(), "udp4"=> ?local_enr.udp4(), "tcp4" => ?local_enr.tcp6() + "ip4" => ?local_enr.ip4(), "udp4"=> ?local_enr.udp4(), "tcp4" => ?local_enr.tcp4(), "tcp6" => ?local_enr.tcp6(), "udp6" => ?local_enr.udp6() ); - - let listen_socket = SocketAddr::new(config.listen_address, config.discovery_port); + let listen_socket = match config.listen_addrs() { + crate::listen_addr::ListenAddress::V4(v4_addr) => v4_addr.udp_socket_addr(), + crate::listen_addr::ListenAddress::V6(v6_addr) => v6_addr.udp_socket_addr(), + crate::listen_addr::ListenAddress::DualStack(_v4_addr, v6_addr) => { + v6_addr.udp_socket_addr() + } + }; // convert the keypair into an ENR key let enr_key: CombinedKey = CombinedKey::from_libp2p(local_key)?; @@ -212,6 +225,10 @@ impl Discovery { // Add bootnodes to routing table for bootnode_enr in config.boot_nodes_enr.clone() { + if bootnode_enr.node_id() == local_node_id { + // If we are a boot node, ignore adding it to the routing table + continue; + } debug!( log, "Adding node to routing table"; @@ -290,6 +307,11 @@ impl Discovery { } } + let update_tcp_port = ( + config.enr_tcp4_port.is_none(), + config.enr_tcp6_port.is_none(), + ); + Ok(Self { cached_enrs: LruCache::new(50), network_globals, @@ -299,6 +321,7 @@ impl Discovery { discv5, event_stream, started: !config.disable_discovery, + update_tcp_port, log, enr_dir, }) @@ -1009,20 +1032,40 @@ impl NetworkBehaviour for Discovery { metrics::check_nat(); // Discv5 will have updated our local ENR. We save the updated version // to disk. 
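The `update_tcp_port` flags added to `Discovery` gate the socket update that follows: the externally observed UDP port is only mirrored onto the TCP side for an IP version whose ENR TCP port was not pinned on the CLI. A standalone sketch of that decision (function name is illustrative):

```rust
use std::net::SocketAddr;

/// Mirror the discovered UDP port onto TCP only when the user did not
/// explicitly configure an ENR TCP port for that IP version. The tuple
/// is (allow tcp4 update, allow tcp6 update), as in the patch.
pub fn should_mirror_tcp_port(update_tcp_port: (bool, bool), socket_addr: &SocketAddr) -> bool {
    (update_tcp_port.0 && socket_addr.is_ipv4())
        || (update_tcp_port.1 && socket_addr.is_ipv6())
}
```

This encodes the assumption, noted in the field's doc comment, that the external TCP port equals the external UDP port.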
+ + if (self.update_tcp_port.0 && socket_addr.is_ipv4()) + || (self.update_tcp_port.1 && socket_addr.is_ipv6()) + { + // Update the TCP port in the ENR + self.discv5.update_local_enr_socket(socket_addr, true); + } let enr = self.discv5.local_enr(); enr::save_enr_to_disk(Path::new(&self.enr_dir), &enr, &self.log); // update network globals *self.network_globals.local_enr.write() = enr; // A new UDP socket has been detected. // Build a multiaddr to report to libp2p - let mut address = Multiaddr::from(socket_addr.ip()); - // NOTE: This doesn't actually track the external TCP port. More sophisticated NAT handling - // should handle this. - address.push(Protocol::Tcp(self.network_globals.listen_port_tcp())); - return Poll::Ready(NBAction::ReportObservedAddr { - address, - score: AddressScore::Finite(1), - }); + let addr = match socket_addr.ip() { + IpAddr::V4(v4_addr) => { + self.network_globals.listen_port_tcp4().map(|tcp4_port| { + Multiaddr::from(v4_addr).with(Protocol::Tcp(tcp4_port)) + }) + } + IpAddr::V6(v6_addr) => { + self.network_globals.listen_port_tcp6().map(|tcp6_port| { + Multiaddr::from(v6_addr).with(Protocol::Tcp(tcp6_port)) + }) + } + }; + + if let Some(address) = addr { + // NOTE: This doesn't actually track the external TCP port. More sophisticated NAT handling + // should handle this. + return Poll::Ready(NBAction::ReportObservedAddr { + address, + score: AddressScore::Finite(1), + }); + } } Discv5Event::EnrAdded { .. 
} | Discv5Event::TalkRequest(_) @@ -1087,7 +1130,6 @@ mod tests { use enr::EnrBuilder; use slog::{o, Drain}; use types::{BitVector, MinimalEthSpec, SubnetId}; - use unused_port::unused_udp_port; type E = MinimalEthSpec; @@ -1105,23 +1147,22 @@ mod tests { async fn build_discovery() -> Discovery { let keypair = libp2p::identity::Keypair::generate_secp256k1(); - let config = NetworkConfig { - discovery_port: unused_udp_port().unwrap(), - ..Default::default() - }; + let mut config = NetworkConfig::default(); + config.set_listening_addr(crate::ListenAddress::unused_v4_ports()); let enr_key: CombinedKey = CombinedKey::from_libp2p(&keypair).unwrap(); let enr: Enr = build_enr::(&enr_key, &config, &EnrForkId::default()).unwrap(); let log = build_log(slog::Level::Debug, false); let globals = NetworkGlobals::new( enr, - 9000, - 9000, + Some(9000), + None, MetaData::V2(MetaDataV2 { seq_number: 0, attnets: Default::default(), syncnets: Default::default(), }), vec![], + false, &log, ); Discovery::new(&keypair, &config, Arc::new(globals), &log) diff --git a/beacon_node/lighthouse_network/src/lib.rs b/beacon_node/lighthouse_network/src/lib.rs index be4da809cb2..3d539af3b28 100644 --- a/beacon_node/lighthouse_network/src/lib.rs +++ b/beacon_node/lighthouse_network/src/lib.rs @@ -10,12 +10,14 @@ pub mod service; #[allow(clippy::mutable_key_type)] // PeerId in hashmaps are no longer permitted by clippy pub mod discovery; +pub mod listen_addr; pub mod metrics; pub mod peer_manager; pub mod rpc; pub mod types; pub use config::gossip_max_size; +pub use listen_addr::*; use serde::{de, Deserialize, Deserializer, Serialize, Serializer}; use std::str::FromStr; diff --git a/beacon_node/lighthouse_network/src/listen_addr.rs b/beacon_node/lighthouse_network/src/listen_addr.rs new file mode 100644 index 00000000000..20d87d403cd --- /dev/null +++ b/beacon_node/lighthouse_network/src/listen_addr.rs @@ -0,0 +1,97 @@ +use std::net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr}; + +use 
libp2p::{multiaddr::Protocol, Multiaddr}; +use serde::{Deserialize, Serialize}; + +/// A listening address composed of an IP, a UDP port and a TCP port. +#[derive(Clone, Debug, Serialize, Deserialize)] +pub struct ListenAddr<Ip> { + pub addr: Ip, + pub udp_port: u16, + pub tcp_port: u16, +} + +impl<Ip: Into<IpAddr> + Clone> ListenAddr<Ip> { + pub fn udp_socket_addr(&self) -> SocketAddr { + (self.addr.clone().into(), self.udp_port).into() + } + + pub fn tcp_socket_addr(&self) -> SocketAddr { + (self.addr.clone().into(), self.tcp_port).into() + } +} + +/// Types of listening addresses Lighthouse can accept. +#[derive(Clone, Debug, Serialize, Deserialize)] +pub enum ListenAddress { + V4(ListenAddr<Ipv4Addr>), + V6(ListenAddr<Ipv6Addr>), + DualStack(ListenAddr<Ipv4Addr>, ListenAddr<Ipv6Addr>), +} + +impl ListenAddress { + /// Return the listening address over IPv4, if any. + pub fn v4(&self) -> Option<&ListenAddr<Ipv4Addr>> { + match self { + ListenAddress::V4(v4_addr) | ListenAddress::DualStack(v4_addr, _) => Some(v4_addr), + ListenAddress::V6(_) => None, + } + } + + /// Return the listening address over IPv6, if any. + pub fn v6(&self) -> Option<&ListenAddr<Ipv6Addr>> { + match self { + ListenAddress::V6(v6_addr) | ListenAddress::DualStack(_, v6_addr) => Some(v6_addr), + ListenAddress::V4(_) => None, + } + } + + /// Returns the TCP addresses. 
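The `v4()`/`v6()` accessors in `listen_addr.rs` let callers ask for one side of the configuration without matching on the enum themselves; dual-stack answers both. A trimmed-down sketch (ports dropped for brevity, so this is not the real type):

```rust
use std::net::{Ipv4Addr, Ipv6Addr};

/// Minimal analogue of the patch's `ListenAddress`: each accessor returns
/// the matching side of the configuration, if present.
pub enum ListenAddress {
    V4(Ipv4Addr),
    V6(Ipv6Addr),
    DualStack(Ipv4Addr, Ipv6Addr),
}

impl ListenAddress {
    pub fn v4(&self) -> Option<&Ipv4Addr> {
        match self {
            ListenAddress::V4(addr) | ListenAddress::DualStack(addr, _) => Some(addr),
            ListenAddress::V6(_) => None,
        }
    }

    pub fn v6(&self) -> Option<&Ipv6Addr> {
        match self {
            ListenAddress::V6(addr) | ListenAddress::DualStack(_, addr) => Some(addr),
            ListenAddress::V4(_) => None,
        }
    }
}
```

The or-pattern (`V4(a) | DualStack(a, _)`) is what keeps each accessor to a single match arm per outcome.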
+ pub fn tcp_addresses(&self) -> impl Iterator + '_ { + let v4_multiaddr = self + .v4() + .map(|v4_addr| Multiaddr::from(v4_addr.addr).with(Protocol::Tcp(v4_addr.tcp_port))); + let v6_multiaddr = self + .v6() + .map(|v6_addr| Multiaddr::from(v6_addr.addr).with(Protocol::Tcp(v6_addr.tcp_port))); + v4_multiaddr.into_iter().chain(v6_multiaddr) + } + + #[cfg(test)] + pub fn unused_v4_ports() -> Self { + ListenAddress::V4(ListenAddr { + addr: Ipv4Addr::UNSPECIFIED, + udp_port: unused_port::unused_udp4_port().unwrap(), + tcp_port: unused_port::unused_tcp4_port().unwrap(), + }) + } + + #[cfg(test)] + pub fn unused_v6_ports() -> Self { + ListenAddress::V6(ListenAddr { + addr: Ipv6Addr::UNSPECIFIED, + udp_port: unused_port::unused_udp6_port().unwrap(), + tcp_port: unused_port::unused_tcp6_port().unwrap(), + }) + } +} + +impl slog::KV for ListenAddress { + fn serialize( + &self, + _record: &slog::Record, + serializer: &mut dyn slog::Serializer, + ) -> slog::Result { + if let Some(v4_addr) = self.v4() { + serializer.emit_arguments("ip4_address", &format_args!("{}", v4_addr.addr))?; + serializer.emit_u16("udp4_port", v4_addr.udp_port)?; + serializer.emit_u16("tcp4_port", v4_addr.tcp_port)?; + } + if let Some(v6_addr) = self.v6() { + serializer.emit_arguments("ip6_address", &format_args!("{}", v6_addr.addr))?; + serializer.emit_u16("udp6_port", v6_addr.udp_port)?; + serializer.emit_u16("tcp6_port", v6_addr.tcp_port)?; + } + slog::Result::Ok(()) + } +} diff --git a/beacon_node/lighthouse_network/src/metrics.rs b/beacon_node/lighthouse_network/src/metrics.rs index 2ee224d5e28..58cc9920126 100644 --- a/beacon_node/lighthouse_network/src/metrics.rs +++ b/beacon_node/lighthouse_network/src/metrics.rs @@ -159,7 +159,7 @@ pub fn check_nat() { if NAT_OPEN.as_ref().map(|v| v.get()).unwrap_or(0) != 0 { return; } - if ADDRESS_UPDATE_COUNT.as_ref().map(|v| v.get()).unwrap_or(0) == 0 + if ADDRESS_UPDATE_COUNT.as_ref().map(|v| v.get()).unwrap_or(0) != 0 || 
NETWORK_INBOUND_PEERS.as_ref().map(|v| v.get()).unwrap_or(0) != 0_i64 { inc_counter(&NAT_OPEN); @@ -167,7 +167,8 @@ pub fn check_nat() { } pub fn scrape_discovery_metrics() { - let metrics = discv5::metrics::Metrics::from(discv5::Discv5::raw_metrics()); + let metrics = + discv5::metrics::Metrics::from(discv5::Discv5::::raw_metrics()); set_float_gauge(&DISCOVERY_REQS, metrics.unsolicited_requests_per_second); set_gauge(&DISCOVERY_SESSIONS, metrics.active_sessions as i64); set_gauge(&DISCOVERY_SENT_BYTES, metrics.bytes_sent as i64); diff --git a/beacon_node/lighthouse_network/src/peer_manager/mod.rs b/beacon_node/lighthouse_network/src/peer_manager/mod.rs index 89670a2eb3c..a461a12e530 100644 --- a/beacon_node/lighthouse_network/src/peer_manager/mod.rs +++ b/beacon_node/lighthouse_network/src/peer_manager/mod.rs @@ -8,11 +8,12 @@ use crate::{Subnet, SubnetDiscovery}; use delay_map::HashSetDelay; use discv5::Enr; use libp2p::identify::Info as IdentifyInfo; +use lru_cache::LRUTimeCache; use peerdb::{client::ClientKind, BanOperation, BanResult, ScoreUpdateResult}; use rand::seq::SliceRandom; use slog::{debug, error, trace, warn}; use smallvec::SmallVec; -use std::collections::VecDeque; +use std::collections::BTreeMap; use std::{ sync::Arc, time::{Duration, Instant}, @@ -39,6 +40,9 @@ mod network_behaviour; /// requests. This defines the interval in seconds. const HEARTBEAT_INTERVAL: u64 = 30; +/// The minimum amount of time we allow peers to reconnect to us after a disconnect when we are +/// saturated with peers. This effectively looks like a swarm BAN for this amount of time. +pub const PEER_RECONNECTION_TIMEOUT: Duration = Duration::from_secs(600); /// This is used in the pruning logic. We avoid pruning peers on sync-committees if doing so would /// lower our peer count below this number. Instead we favour a non-uniform distribution of subnet /// peers. @@ -73,7 +77,21 @@ pub struct PeerManager { /// The target number of peers we would like to connect to. 
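This file swaps the dial queue from a `VecDeque<(PeerId, Option<Enr>)>` to a `BTreeMap`, which de-duplicates repeated dial requests for the same peer and drains in a deterministic key order via `pop_first`. A sketch of that behaviour, with `String` standing in for `PeerId` and `u8` for the optional ENR:

```rust
use std::collections::BTreeMap;

/// Drain a BTreeMap-backed dial queue in key order, as the peer manager's
/// poll loop does with `pop_first`.
pub fn drain_dial_queue(mut peers_to_dial: BTreeMap<String, Option<u8>>) -> Vec<String> {
    let mut order = Vec::new();
    while let Some((peer_id, _maybe_enr)) = peers_to_dial.pop_first() {
        order.push(peer_id);
    }
    order
}
```

With a `VecDeque`, two `dial_peer` calls for the same peer would queue two dials; with the map, the second `insert` simply replaces the first entry.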
target_peers: usize, /// Peers queued to be dialed. - peers_to_dial: VecDeque<(PeerId, Option)>, + peers_to_dial: BTreeMap>, + /// The number of temporarily banned peers. This is used to prevent instantaneous + /// reconnection. + // NOTE: This just prevents re-connections. The state of the peer is otherwise unaffected. A + // peer can be in a disconnected state and new connections will be refused and logged as if the + // peer is banned without it being reflected in the peer's state. + // Also the banned state can out-last the peer's reference in the peer db. So peers that are + // unknown to us can still be temporarily banned. This is fundamentally a relationship with + // the swarm. Regardless of our knowledge of the peer in the db, it will be temporarily banned + // at the swarm layer. + // NOTE: An LRUTimeCache is used compared to a structure that needs to be polled to avoid very + // frequent polling to unban peers. Instead, this cache piggy-backs the PeerManager heartbeat + // to update and clear the cache. Therefore the PEER_RECONNECTION_TIMEOUT only has a resolution + // of the HEARTBEAT_INTERVAL. + temporary_banned_peers: LRUTimeCache, /// A collection of sync committee subnets that we need to stay subscribed to. /// Sync committee subnets are longer term (256 epochs). Hence, we need to re-run /// discovery queries for subnet peers if we disconnect from existing sync @@ -143,6 +161,7 @@ impl PeerManager { outbound_ping_peers: HashSetDelay::new(Duration::from_secs(ping_interval_outbound)), status_peers: HashSetDelay::new(Duration::from_secs(status_interval)), target_peers: target_peer_count, + temporary_banned_peers: LRUTimeCache::new(PEER_RECONNECTION_TIMEOUT), sync_committee_subnets: Default::default(), heartbeat, discovery_enabled, @@ -243,6 +262,15 @@ impl PeerManager { reason: Option, ) { match ban_operation { + BanOperation::TemporaryBan => { + // The peer could be temporarily banned. 
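The `temporary_banned_peers` cache piggy-backs on the heartbeat rather than being polled, so bans expire with heartbeat-interval resolution. An illustrative stand-in for the role it plays (not the real `lru_cache` crate API):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Entries expire after a fixed timeout but are only evicted when the
/// owner sweeps the cache, mirroring how the peer manager clears
/// temporary bans on its heartbeat.
pub struct TemporaryBans {
    timeout: Duration,
    entries: HashMap<String, Instant>,
}

impl TemporaryBans {
    pub fn new(timeout: Duration) -> Self {
        Self { timeout, entries: HashMap::new() }
    }

    /// Record a ban for `peer` starting now.
    pub fn insert(&mut self, peer: String) {
        self.entries.insert(peer, Instant::now());
    }

    pub fn contains(&self, peer: &str) -> bool {
        self.entries.contains_key(peer)
    }

    /// Evict and return every ban older than the timeout.
    pub fn remove_expired(&mut self) -> Vec<String> {
        let timeout = self.timeout;
        let expired: Vec<String> = self
            .entries
            .iter()
            .filter(|(_, banned_at)| banned_at.elapsed() >= timeout)
            .map(|(peer, _)| peer.clone())
            .collect();
        for peer in &expired {
            self.entries.remove(peer);
        }
        expired
    }
}
```

As the patch notes, this keeps the ban purely a swarm-level relationship: the peer's state in the peer DB is untouched, and the ban can even outlive the DB entry.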
We only do this in the case that + // we have currently reached our peer target limit. + if self.network_globals.connected_peers() >= self.target_peers { + // We have enough peers, prevent this reconnection. + self.temporary_banned_peers.raw_insert(*peer_id); + self.events.push(PeerManagerEvent::Banned(*peer_id, vec![])); + } + } BanOperation::DisconnectThePeer => { // The peer was currently connected, so we start a disconnection. // Once the peer has disconnected, its connection state will transition to a @@ -259,9 +287,23 @@ impl PeerManager { BanOperation::ReadyToBan(banned_ips) => { // The peer is not currently connected, we can safely ban it at the swarm // level. - // Inform the Swarm to ban the peer - self.events - .push(PeerManagerEvent::Banned(*peer_id, banned_ips)); + + // If a peer is being banned, this trumps any temporary ban the peer might be + // under. We no longer track it in the temporary ban list. + if !self.temporary_banned_peers.raw_remove(peer_id) { + // If the peer is not already banned, inform the Swarm to ban the peer + self.events + .push(PeerManagerEvent::Banned(*peer_id, banned_ips)); + // If the peer was in the process of being un-banned, remove it (a rare race + // condition) + self.events.retain(|event| { + if let PeerManagerEvent::UnBanned(unbanned_peer_id, _) = event { + unbanned_peer_id != peer_id // Remove matching peer ids + } else { + true + } + }); + } } } } @@ -275,7 +317,7 @@ impl PeerManager { /// proves resource constraining, we should switch to multiaddr dialling here. #[allow(clippy::mutable_key_type)] pub fn peers_discovered(&mut self, results: HashMap>) -> Vec { - let mut to_dial_peers = Vec::new(); + let mut to_dial_peers = Vec::with_capacity(4); let connected_or_dialing = self.network_globals.connected_or_dialing_peers(); for (peer_id, min_ttl) in results { @@ -365,7 +407,7 @@ impl PeerManager { // A peer is being dialed. 
pub fn dial_peer(&mut self, peer_id: &PeerId, enr: Option) { - self.peers_to_dial.push_back((*peer_id, enr)); + self.peers_to_dial.insert(*peer_id, enr); } /// Reports if a peer is banned or not. @@ -519,8 +561,8 @@ impl PeerManager { Protocol::BlocksByRoot => return, Protocol::Goodbye => return, Protocol::LightClientBootstrap => return, - Protocol::MetaData => PeerAction::LowToleranceError, - Protocol::Status => PeerAction::LowToleranceError, + Protocol::MetaData => PeerAction::Fatal, + Protocol::Status => PeerAction::Fatal, } } RPCError::StreamTimeout => match direction { @@ -1109,6 +1151,14 @@ impl PeerManager { } } + /// Unbans any temporarily banned peers that have served their timeout. + fn unban_temporary_banned_peers(&mut self) { + for peer_id in self.temporary_banned_peers.remove_expired() { + self.events + .push(PeerManagerEvent::UnBanned(peer_id, Vec::new())); + } + } + /// The Peer manager's heartbeat maintains the peer count and maintains peer reputations. /// /// It will request discovery queries if the peer count has not reached the desired number of @@ -1141,6 +1191,21 @@ impl PeerManager { // Prune any excess peers back to our target in such a way that incentivises good scores and // a uniform distribution of subnets. self.prune_excess_peers(); + + // Unban any peers that have served their temporary ban timeout + self.unban_temporary_banned_peers(); + + // Maintains memory by shrinking mappings + self.shrink_mappings(); + } + + // Reduce memory footprint by routinely shrinking associating mappings. + fn shrink_mappings(&mut self) { + self.inbound_ping_peers.shrink_to(5); + self.outbound_ping_peers.shrink_to(5); + self.status_peers.shrink_to(5); + self.temporary_banned_peers.shrink_to_fit(); + self.sync_committee_subnets.shrink_to_fit(); } // Update metrics related to peer scoring. 
diff --git a/beacon_node/lighthouse_network/src/peer_manager/network_behaviour.rs b/beacon_node/lighthouse_network/src/peer_manager/network_behaviour.rs index 42eb270c40e..24de83a61da 100644 --- a/beacon_node/lighthouse_network/src/peer_manager/network_behaviour.rs +++ b/beacon_node/lighthouse_network/src/peer_manager/network_behaviour.rs @@ -89,7 +89,7 @@ impl NetworkBehaviour for PeerManager { self.events.shrink_to_fit(); } - if let Some((peer_id, maybe_enr)) = self.peers_to_dial.pop_front() { + if let Some((peer_id, maybe_enr)) = self.peers_to_dial.pop_first() { self.inject_peer_connection(&peer_id, ConnectingType::Dialing, maybe_enr); let handler = self.new_handler(); return Poll::Ready(NetworkBehaviourAction::Dial { @@ -156,8 +156,10 @@ impl PeerManager { BanResult::BadScore => { // This is a faulty state error!(self.log, "Connected to a banned peer. Re-banning"; "peer_id" => %peer_id); - // Reban the peer + // Disconnect the peer. self.goodbye_peer(&peer_id, GoodbyeReason::Banned, ReportSource::PeerManager); + // Re-ban the peer to prevent repeated errors. + self.events.push(PeerManagerEvent::Banned(peer_id, vec![])); return; } BanResult::BannedIp(ip_addr) => { @@ -170,7 +172,7 @@ impl PeerManager { BanResult::NotBanned => {} } - // Count dialing peers in the limit if the peer dialied us. + // Count dialing peers in the limit if the peer dialed us. let count_dialing = endpoint.is_listener(); // Check the connection limits if self.peer_limit_reached(count_dialing) diff --git a/beacon_node/lighthouse_network/src/peer_manager/peerdb.rs b/beacon_node/lighthouse_network/src/peer_manager/peerdb.rs index 1f44488a569..20870656883 100644 --- a/beacon_node/lighthouse_network/src/peer_manager/peerdb.rs +++ b/beacon_node/lighthouse_network/src/peer_manager/peerdb.rs @@ -41,12 +41,14 @@ pub struct PeerDB { disconnected_peers: usize, /// Counts banned peers in total and per ip banned_peers_count: BannedPeersCount, + /// Specifies if peer scoring is disabled. 
+ disable_peer_scoring: bool, /// PeerDB's logger log: slog::Logger, } impl PeerDB { - pub fn new(trusted_peers: Vec, log: &slog::Logger) -> Self { + pub fn new(trusted_peers: Vec, disable_peer_scoring: bool, log: &slog::Logger) -> Self { // Initialize the peers hashmap with trusted peers let peers = trusted_peers .into_iter() @@ -56,6 +58,7 @@ impl PeerDB { log: log.clone(), disconnected_peers: 0, banned_peers_count: BannedPeersCount::default(), + disable_peer_scoring, peers, } } @@ -704,7 +707,11 @@ impl PeerDB { warn!(log_ref, "Updating state of unknown peer"; "peer_id" => %peer_id, "new_state" => ?new_state); } - PeerInfo::default() + if self.disable_peer_scoring { + PeerInfo::trusted_peer_info() + } else { + PeerInfo::default() + } }); // Ban the peer if the score is not already low enough. @@ -844,8 +851,16 @@ impl PeerDB { .collect::>(); return Some(BanOperation::ReadyToBan(banned_ips)); } - PeerConnectionStatus::Disconnecting { .. } - | PeerConnectionStatus::Unknown + PeerConnectionStatus::Disconnecting { .. } => { + // The peer has been disconnected but not banned. Inform the peer manager + // that this peer could be eligible for a temporary ban. + self.disconnected_peers += 1; + info.set_connection_status(PeerConnectionStatus::Disconnected { + since: Instant::now(), + }); + return Some(BanOperation::TemporaryBan); + } + PeerConnectionStatus::Unknown | PeerConnectionStatus::Connected { .. } | PeerConnectionStatus::Dialing { .. } => { self.disconnected_peers += 1; @@ -1177,6 +1192,9 @@ impl From> for ScoreUpdateResult { /// When attempting to ban a peer provides the peer manager with the operation that must be taken. pub enum BanOperation { + /// Optionally temporarily ban this peer to prevent instantaneous reconnection. + /// The peer manager will decide if temporary banning is required. + TemporaryBan, // The peer is currently connected. Perform a graceful disconnect before banning at the swarm // level. 
DisconnectThePeer, @@ -1289,7 +1307,7 @@ mod tests { fn get_db() -> PeerDB { let log = build_log(slog::Level::Debug, false); - PeerDB::new(vec![], &log) + PeerDB::new(vec![], false, &log) } #[test] @@ -1988,7 +2006,7 @@ mod tests { fn test_trusted_peers_score() { let trusted_peer = PeerId::random(); let log = build_log(slog::Level::Debug, false); - let mut pdb: PeerDB = PeerDB::new(vec![trusted_peer], &log); + let mut pdb: PeerDB = PeerDB::new(vec![trusted_peer], false, &log); pdb.connect_ingoing(&trusted_peer, "/ip4/0.0.0.0".parse().unwrap(), None); @@ -2007,4 +2025,28 @@ mod tests { Score::max_score().score() ); } + + #[test] + fn test_disable_peer_scoring() { + let peer = PeerId::random(); + let log = build_log(slog::Level::Debug, false); + let mut pdb: PeerDB = PeerDB::new(vec![], true, &log); + + pdb.connect_ingoing(&peer, "/ip4/0.0.0.0".parse().unwrap(), None); + + // Check trusted status and score + assert!(pdb.peer_info(&peer).unwrap().is_trusted()); + assert_eq!( + pdb.peer_info(&peer).unwrap().score().score(), + Score::max_score().score() + ); + + // Adding/Subtracting score should have no effect on a trusted peer + add_score(&mut pdb, &peer, -50.0); + + assert_eq!( + pdb.peer_info(&peer).unwrap().score().score(), + Score::max_score().score() + ); + } } diff --git a/beacon_node/lighthouse_network/src/rpc/codec/base.rs b/beacon_node/lighthouse_network/src/rpc/codec/base.rs index 53f85d9a7b6..6c6ce2da32f 100644 --- a/beacon_node/lighthouse_network/src/rpc/codec/base.rs +++ b/beacon_node/lighthouse_network/src/rpc/codec/base.rs @@ -193,14 +193,17 @@ mod tests { let mut chain_spec = Spec::default_spec(); let altair_fork_epoch = Epoch::new(1); let merge_fork_epoch = Epoch::new(2); + let capella_fork_epoch = Epoch::new(3); chain_spec.altair_fork_epoch = Some(altair_fork_epoch); chain_spec.bellatrix_fork_epoch = Some(merge_fork_epoch); + chain_spec.capella_fork_epoch = Some(capella_fork_epoch); let current_slot = match fork_name { ForkName::Base => Slot::new(0), 
ForkName::Altair => altair_fork_epoch.start_slot(Spec::slots_per_epoch()), ForkName::Merge => merge_fork_epoch.start_slot(Spec::slots_per_epoch()), + ForkName::Capella => capella_fork_epoch.start_slot(Spec::slots_per_epoch()), }; ForkContext::new::(current_slot, Hash256::zero(), &chain_spec) } diff --git a/beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs b/beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs index eccbf0dd623..28fea40a20d 100644 --- a/beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs +++ b/beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs @@ -15,9 +15,10 @@ use std::io::{Read, Write}; use std::marker::PhantomData; use std::sync::Arc; use tokio_util::codec::{Decoder, Encoder}; +use types::light_client_bootstrap::LightClientBootstrap; use types::{ - light_client_bootstrap::LightClientBootstrap, EthSpec, ForkContext, ForkName, Hash256, - SignedBeaconBlock, SignedBeaconBlockAltair, SignedBeaconBlockBase, SignedBeaconBlockMerge, + EthSpec, ForkContext, ForkName, Hash256, SignedBeaconBlock, SignedBeaconBlockAltair, + SignedBeaconBlockBase, SignedBeaconBlockCapella, SignedBeaconBlockMerge, }; use unsigned_varint::codec::Uvi; @@ -409,6 +410,10 @@ fn context_bytes( return match **ref_box_block { // NOTE: If you are adding another fork type here, be sure to modify the // `fork_context.to_context_bytes()` function to support it as well! + SignedBeaconBlock::Capella { .. } => { + // Capella context being `None` implies that "merge never happened". + fork_context.to_context_bytes(ForkName::Capella) + } SignedBeaconBlock::Merge { .. } => { // Merge context being `None` implies that "merge never happened". 
fork_context.to_context_bytes(ForkName::Merge) @@ -595,6 +600,11 @@ fn handle_v2_response( decoded_buffer, )?), )))), + ForkName::Capella => Ok(Some(RPCResponse::BlocksByRange(Arc::new( + SignedBeaconBlock::Capella(SignedBeaconBlockCapella::from_ssz_bytes( + decoded_buffer, + )?), + )))), }, Protocol::BlocksByRoot => match fork_name { ForkName::Altair => Ok(Some(RPCResponse::BlocksByRoot(Arc::new( @@ -610,6 +620,11 @@ fn handle_v2_response( decoded_buffer, )?), )))), + ForkName::Capella => Ok(Some(RPCResponse::BlocksByRoot(Arc::new( + SignedBeaconBlock::Capella(SignedBeaconBlockCapella::from_ssz_bytes( + decoded_buffer, + )?), + )))), }, _ => Err(RPCError::ErrorResponse( RPCResponseErrorCode::InvalidRequest, @@ -645,8 +660,8 @@ mod tests { }; use std::sync::Arc; use types::{ - BeaconBlock, BeaconBlockAltair, BeaconBlockBase, BeaconBlockMerge, Epoch, ForkContext, - FullPayload, Hash256, Signature, SignedBeaconBlock, Slot, + BeaconBlock, BeaconBlockAltair, BeaconBlockBase, BeaconBlockMerge, EmptyBlock, Epoch, + ForkContext, FullPayload, Hash256, Signature, SignedBeaconBlock, Slot, }; use snap::write::FrameEncoder; @@ -659,14 +674,17 @@ mod tests { let mut chain_spec = Spec::default_spec(); let altair_fork_epoch = Epoch::new(1); let merge_fork_epoch = Epoch::new(2); + let capella_fork_epoch = Epoch::new(3); chain_spec.altair_fork_epoch = Some(altair_fork_epoch); chain_spec.bellatrix_fork_epoch = Some(merge_fork_epoch); + chain_spec.capella_fork_epoch = Some(capella_fork_epoch); let current_slot = match fork_name { ForkName::Base => Slot::new(0), ForkName::Altair => altair_fork_epoch.start_slot(Spec::slots_per_epoch()), ForkName::Merge => merge_fork_epoch.start_slot(Spec::slots_per_epoch()), + ForkName::Capella => capella_fork_epoch.start_slot(Spec::slots_per_epoch()), }; ForkContext::new::(current_slot, Hash256::zero(), &chain_spec) } diff --git a/beacon_node/lighthouse_network/src/rpc/config.rs b/beacon_node/lighthouse_network/src/rpc/config.rs new file mode 100644 
index 00000000000..bea0929fb0b --- /dev/null +++ b/beacon_node/lighthouse_network/src/rpc/config.rs @@ -0,0 +1,173 @@ +use std::{ + fmt::{Debug, Display}, + str::FromStr, + time::Duration, +}; + +use super::{methods, rate_limiter::Quota, Protocol}; + +use serde_derive::{Deserialize, Serialize}; + +/// Auxiliary struct to aid on configuration parsing. +/// +/// A protocol's quota is specified as `protocol_name:tokens/time_in_seconds`. +#[derive(Debug, PartialEq, Eq)] +struct ProtocolQuota { + protocol: Protocol, + quota: Quota, +} + +impl Display for ProtocolQuota { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + write!( + f, + "{}:{}/{}", + self.protocol.as_ref(), + self.quota.max_tokens, + self.quota.replenish_all_every.as_secs() + ) + } +} + +impl FromStr for ProtocolQuota { + type Err = &'static str; + + fn from_str(s: &str) -> Result { + let (protocol_str, quota_str) = s + .split_once(':') + .ok_or("Missing ':' from quota definition.")?; + let protocol = protocol_str + .parse() + .map_err(|_parse_err| "Wrong protocol representation in quota")?; + let (tokens_str, time_str) = quota_str + .split_once('/') + .ok_or("Quota should be defined as \"n/t\" (t in seconds). Missing '/' from quota.")?; + let tokens = tokens_str + .parse() + .map_err(|_| "Failed to parse tokens from quota.")?; + let seconds = time_str + .parse::() + .map_err(|_| "Failed to parse time in seconds from quota.")?; + Ok(ProtocolQuota { + protocol, + quota: Quota { + replenish_all_every: Duration::from_secs(seconds), + max_tokens: tokens, + }, + }) + } +} + +/// Configurations for the rate limiter applied to outbound requests (made by the node itself). 
+#[derive(Clone, Serialize, Deserialize, PartialEq, Eq)] +pub struct OutboundRateLimiterConfig { + pub(super) ping_quota: Quota, + pub(super) meta_data_quota: Quota, + pub(super) status_quota: Quota, + pub(super) goodbye_quota: Quota, + pub(super) blocks_by_range_quota: Quota, + pub(super) blocks_by_root_quota: Quota, +} + +impl OutboundRateLimiterConfig { + pub const DEFAULT_PING_QUOTA: Quota = Quota::n_every(2, 10); + pub const DEFAULT_META_DATA_QUOTA: Quota = Quota::n_every(2, 5); + pub const DEFAULT_STATUS_QUOTA: Quota = Quota::n_every(5, 15); + pub const DEFAULT_GOODBYE_QUOTA: Quota = Quota::one_every(10); + pub const DEFAULT_BLOCKS_BY_RANGE_QUOTA: Quota = + Quota::n_every(methods::MAX_REQUEST_BLOCKS, 10); + pub const DEFAULT_BLOCKS_BY_ROOT_QUOTA: Quota = Quota::n_every(128, 10); +} + +impl Default for OutboundRateLimiterConfig { + fn default() -> Self { + OutboundRateLimiterConfig { + ping_quota: Self::DEFAULT_PING_QUOTA, + meta_data_quota: Self::DEFAULT_META_DATA_QUOTA, + status_quota: Self::DEFAULT_STATUS_QUOTA, + goodbye_quota: Self::DEFAULT_GOODBYE_QUOTA, + blocks_by_range_quota: Self::DEFAULT_BLOCKS_BY_RANGE_QUOTA, + blocks_by_root_quota: Self::DEFAULT_BLOCKS_BY_ROOT_QUOTA, + } + } +} + +impl Debug for OutboundRateLimiterConfig { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + macro_rules! fmt_q { + ($quota:expr) => { + &format_args!( + "{}/{}s", + $quota.max_tokens, + $quota.replenish_all_every.as_secs() + ) + }; + } + + f.debug_struct("OutboundRateLimiterConfig") + .field("ping", fmt_q!(&self.ping_quota)) + .field("metadata", fmt_q!(&self.meta_data_quota)) + .field("status", fmt_q!(&self.status_quota)) + .field("goodbye", fmt_q!(&self.goodbye_quota)) + .field("blocks_by_range", fmt_q!(&self.blocks_by_range_quota)) + .field("blocks_by_root", fmt_q!(&self.blocks_by_root_quota)) + .finish() + } +} + +/// Parse configurations for the outbound rate limiter. Protocols that are not specified use +/// the default values. 
Protocols specified more than once use only the first quota given. +/// +/// The expected format is a ';' separated list of [`ProtocolQuota`]. +impl FromStr for OutboundRateLimiterConfig { + type Err = &'static str; + + fn from_str(s: &str) -> Result { + let mut ping_quota = None; + let mut meta_data_quota = None; + let mut status_quota = None; + let mut goodbye_quota = None; + let mut blocks_by_range_quota = None; + let mut blocks_by_root_quota = None; + for proto_def in s.split(';') { + let ProtocolQuota { protocol, quota } = proto_def.parse()?; + let quota = Some(quota); + match protocol { + Protocol::Status => status_quota = status_quota.or(quota), + Protocol::Goodbye => goodbye_quota = goodbye_quota.or(quota), + Protocol::BlocksByRange => blocks_by_range_quota = blocks_by_range_quota.or(quota), + Protocol::BlocksByRoot => blocks_by_root_quota = blocks_by_root_quota.or(quota), + Protocol::Ping => ping_quota = ping_quota.or(quota), + Protocol::MetaData => meta_data_quota = meta_data_quota.or(quota), + Protocol::LightClientBootstrap => return Err("Lighthouse does not send LightClientBootstrap requests.
Quota should not be set."), + } + } + Ok(OutboundRateLimiterConfig { + ping_quota: ping_quota.unwrap_or(Self::DEFAULT_PING_QUOTA), + meta_data_quota: meta_data_quota.unwrap_or(Self::DEFAULT_META_DATA_QUOTA), + status_quota: status_quota.unwrap_or(Self::DEFAULT_STATUS_QUOTA), + goodbye_quota: goodbye_quota.unwrap_or(Self::DEFAULT_GOODBYE_QUOTA), + blocks_by_range_quota: blocks_by_range_quota + .unwrap_or(Self::DEFAULT_BLOCKS_BY_RANGE_QUOTA), + blocks_by_root_quota: blocks_by_root_quota + .unwrap_or(Self::DEFAULT_BLOCKS_BY_ROOT_QUOTA), + }) + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_quota_inverse() { + let quota = ProtocolQuota { + protocol: Protocol::Goodbye, + quota: Quota { + replenish_all_every: Duration::from_secs(10), + max_tokens: 8, + }, + }; + assert_eq!(quota.to_string().parse(), Ok(quota)) + } +} diff --git a/beacon_node/lighthouse_network/src/rpc/mod.rs b/beacon_node/lighthouse_network/src/rpc/mod.rs index 203a642a8be..31569b820b1 100644 --- a/beacon_node/lighthouse_network/src/rpc/mod.rs +++ b/beacon_node/lighthouse_network/src/rpc/mod.rs @@ -12,7 +12,7 @@ use libp2p::swarm::{ PollParameters, SubstreamProtocol, }; use libp2p::PeerId; -use rate_limiter::{RPCRateLimiter as RateLimiter, RPCRateLimiterBuilder, RateLimitedErr}; +use rate_limiter::{RPCRateLimiter as RateLimiter, RateLimitedErr}; use slog::{crit, debug, o}; use std::marker::PhantomData; use std::sync::Arc; @@ -32,12 +32,17 @@ pub use methods::{ pub(crate) use outbound::OutboundRequest; pub use protocol::{max_rpc_size, Protocol, RPCError}; +use self::config::OutboundRateLimiterConfig; +use self::self_limiter::SelfRateLimiter; + pub(crate) mod codec; +pub mod config; mod handler; pub mod methods; mod outbound; mod protocol; mod rate_limiter; +mod self_limiter; /// Composite trait for a request id. 
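The `ProtocolQuota` parser above accepts strings of the form `protocol_name:tokens/time_in_seconds`. A self-contained sketch of the same two-stage `split_once` parsing, keeping the protocol as a plain string rather than the `Protocol` enum:

```rust
use std::time::Duration;

/// Parsed form of a `protocol:tokens/seconds` quota string. Mirrors the
/// shape of `ProtocolQuota::from_str` in the diff; the `String` protocol
/// field is a simplification.
#[derive(Debug, PartialEq)]
struct ParsedQuota {
    protocol: String,
    max_tokens: u64,
    replenish_all_every: Duration,
}

fn parse_quota(s: &str) -> Result<ParsedQuota, &'static str> {
    // First split protocol from the quota definition.
    let (protocol, quota) = s.split_once(':').ok_or("Missing ':' from quota definition.")?;
    // Then split tokens from the replenish period.
    let (tokens, seconds) = quota.split_once('/').ok_or("Missing '/' from quota.")?;
    Ok(ParsedQuota {
        protocol: protocol.to_string(),
        max_tokens: tokens.parse().map_err(|_| "Failed to parse tokens.")?,
        replenish_all_every: Duration::from_secs(
            seconds.parse().map_err(|_| "Failed to parse seconds.")?,
        ),
    })
}
```

So a CLI value like `beacon_blocks_by_range:1024/10` means 1024 blocks-by-range tokens replenished every 10 seconds.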
pub trait ReqId: Send + 'static + std::fmt::Debug + Copy + Clone {} @@ -100,13 +105,18 @@ pub struct RPCMessage { pub event: HandlerEvent, } +type BehaviourAction = + NetworkBehaviourAction, RPCHandler>; + /// Implements the libp2p `NetworkBehaviour` trait and therefore manages network-level /// logic. pub struct RPC { /// Rate limiter limiter: RateLimiter, + /// Rate limiter for our own requests. + self_limiter: Option>, /// Queue of events to be processed. - events: Vec, RPCHandler>>, + events: Vec>, fork_context: Arc, enable_light_client_server: bool, /// Slog logger for RPC behaviour. @@ -117,10 +127,12 @@ impl RPC { pub fn new( fork_context: Arc, enable_light_client_server: bool, + outbound_rate_limiter_config: Option, log: slog::Logger, ) -> Self { let log = log.new(o!("service" => "libp2p_rpc")); - let limiter = RPCRateLimiterBuilder::new() + + let limiter = RateLimiter::builder() .n_every(Protocol::MetaData, 2, Duration::from_secs(5)) .n_every(Protocol::Ping, 2, Duration::from_secs(10)) .n_every(Protocol::Status, 5, Duration::from_secs(15)) @@ -134,8 +146,14 @@ impl RPC { .n_every(Protocol::BlocksByRoot, 128, Duration::from_secs(10)) .build() .expect("Configuration parameters are valid"); + + let self_limiter = outbound_rate_limiter_config.map(|config| { + SelfRateLimiter::new(config, log.clone()).expect("Configuration parameters are valid") + }); + RPC { limiter, + self_limiter, events: Vec::new(), fork_context, enable_light_client_server, @@ -162,12 +180,24 @@ impl RPC { /// Submits an RPC request. /// /// The peer must be connected for this to succeed. 
- pub fn send_request(&mut self, peer_id: PeerId, request_id: Id, event: OutboundRequest) { - self.events.push(NetworkBehaviourAction::NotifyHandler { - peer_id, - handler: NotifyHandler::Any, - event: RPCSend::Request(request_id, event), - }); + pub fn send_request(&mut self, peer_id: PeerId, request_id: Id, req: OutboundRequest) { + let event = if let Some(self_limiter) = self.self_limiter.as_mut() { + match self_limiter.allows(peer_id, request_id, req) { + Ok(event) => event, + Err(_e) => { + // Request is logged and queued internally in the self rate limiter. + return; + } + } + } else { + NetworkBehaviourAction::NotifyHandler { + peer_id, + handler: NotifyHandler::Any, + event: RPCSend::Request(request_id, req), + } + }; + + self.events.push(event); } /// Lighthouse wishes to disconnect from this peer by sending a Goodbye message. This @@ -272,11 +302,19 @@ where cx: &mut Context, _: &mut impl PollParameters, ) -> Poll> { - // let the rate limiter prune + // let the rate limiter prune. 
let _ = self.limiter.poll_unpin(cx); + + if let Some(self_limiter) = self.self_limiter.as_mut() { + if let Poll::Ready(event) = self_limiter.poll_ready(cx) { + self.events.push(event) + } + } + if !self.events.is_empty() { return Poll::Ready(self.events.remove(0)); } + Poll::Pending } } diff --git a/beacon_node/lighthouse_network/src/rpc/protocol.rs b/beacon_node/lighthouse_network/src/rpc/protocol.rs index 1f40f81971c..a8423e47b0b 100644 --- a/beacon_node/lighthouse_network/src/rpc/protocol.rs +++ b/beacon_node/lighthouse_network/src/rpc/protocol.rs @@ -14,15 +14,16 @@ use std::io; use std::marker::PhantomData; use std::sync::Arc; use std::time::Duration; -use strum::IntoStaticStr; +use strum::{AsRefStr, Display, EnumString, IntoStaticStr}; use tokio_io_timeout::TimeoutStream; use tokio_util::{ codec::Framed, compat::{Compat, FuturesAsyncReadCompatExt}, }; use types::{ - BeaconBlock, BeaconBlockAltair, BeaconBlockBase, BeaconBlockMerge, EthSpec, ForkContext, - ForkName, Hash256, MainnetEthSpec, Signature, SignedBeaconBlock, + BeaconBlock, BeaconBlockAltair, BeaconBlockBase, BeaconBlockCapella, BeaconBlockMerge, + EmptyBlock, EthSpec, ForkContext, ForkName, Hash256, MainnetEthSpec, Signature, + SignedBeaconBlock, }; lazy_static! { @@ -61,6 +62,13 @@ lazy_static! { .as_ssz_bytes() .len(); + pub static ref SIGNED_BEACON_BLOCK_CAPELLA_MAX_WITHOUT_PAYLOAD: usize = SignedBeaconBlock::::from_block( + BeaconBlock::Capella(BeaconBlockCapella::full(&MainnetEthSpec::default_spec())), + Signature::empty(), + ) + .as_ssz_bytes() + .len(); + /// The `BeaconBlockMerge` block has an `ExecutionPayload` field which has a max size ~16 GiB for future proofing. /// We calculate the value from its fields instead of constructing the block and checking the length. /// Note: This is only the theoretical upper bound. We further bound the max size we receive over the network @@ -68,7 +76,11 @@ lazy_static! 
{ pub static ref SIGNED_BEACON_BLOCK_MERGE_MAX: usize = // Size of a full altair block *SIGNED_BEACON_BLOCK_ALTAIR_MAX - + types::ExecutionPayload::::max_execution_payload_size() // adding max size of execution payload (~16gb) + + types::ExecutionPayload::::max_execution_payload_merge_size() // adding max size of execution payload (~16gb) + + ssz::BYTES_PER_LENGTH_OFFSET; // Adding the additional ssz offset for the `ExecutionPayload` field + + pub static ref SIGNED_BEACON_BLOCK_CAPELLA_MAX: usize = *SIGNED_BEACON_BLOCK_CAPELLA_MAX_WITHOUT_PAYLOAD + + types::ExecutionPayload::::max_execution_payload_capella_size() // adding max size of execution payload (~16gb) + ssz::BYTES_PER_LENGTH_OFFSET; // Adding the additional ssz offset for the `ExecutionPayload` field pub static ref BLOCKS_BY_ROOT_REQUEST_MIN: usize = @@ -95,13 +107,13 @@ lazy_static! { ]) .as_ssz_bytes() .len(); - } /// The maximum bytes that can be sent across the RPC pre-merge. pub(crate) const MAX_RPC_SIZE: usize = 1_048_576; // 1M /// The maximum bytes that can be sent across the RPC post-merge. pub(crate) const MAX_RPC_SIZE_POST_MERGE: usize = 10 * 1_048_576; // 10M +pub(crate) const MAX_RPC_SIZE_POST_CAPELLA: usize = 10 * 1_048_576; // 10M /// The protocol prefix the RPC protocol id. const PROTOCOL_PREFIX: &str = "/eth2/beacon_chain/req"; /// Time allowed for the first byte of a request to arrive before we time out (Time To First Byte). @@ -113,8 +125,9 @@ const REQUEST_TIMEOUT: u64 = 15; /// Returns the maximum bytes that can be sent across the RPC. 
pub fn max_rpc_size(fork_context: &ForkContext) -> usize { match fork_context.current_fork() { - ForkName::Merge => MAX_RPC_SIZE_POST_MERGE, ForkName::Altair | ForkName::Base => MAX_RPC_SIZE, + ForkName::Merge => MAX_RPC_SIZE_POST_MERGE, + ForkName::Capella => MAX_RPC_SIZE_POST_CAPELLA, } } @@ -135,25 +148,34 @@ pub fn rpc_block_limits_by_fork(current_fork: ForkName) -> RpcLimits { *SIGNED_BEACON_BLOCK_BASE_MIN, // Base block is smaller than altair and merge blocks *SIGNED_BEACON_BLOCK_MERGE_MAX, // Merge block is larger than base and altair blocks ), + ForkName::Capella => RpcLimits::new( + *SIGNED_BEACON_BLOCK_BASE_MIN, // Base block is smaller than altair and merge blocks + *SIGNED_BEACON_BLOCK_CAPELLA_MAX, // Capella block is larger than base, altair and merge blocks + ), } } /// Protocol names to be used. -#[derive(Debug, Clone, Copy, PartialEq, Eq)] +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EnumString, AsRefStr, Display)] +#[strum(serialize_all = "snake_case")] pub enum Protocol { /// The Status protocol name. Status, /// The Goodbye protocol name. Goodbye, /// The `BlocksByRange` protocol name. + #[strum(serialize = "beacon_blocks_by_range")] BlocksByRange, /// The `BlocksByRoot` protocol name. + #[strum(serialize = "beacon_blocks_by_root")] BlocksByRoot, /// The `Ping` protocol name. Ping, /// The `MetaData` protocol name. + #[strum(serialize = "metadata")] MetaData, /// The `LightClientBootstrap` protocol name. 
+ #[strum(serialize = "light_client_bootstrap")] LightClientBootstrap, } @@ -172,21 +194,6 @@ pub enum Encoding { SSZSnappy, } -impl std::fmt::Display for Protocol { - fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { - let repr = match self { - Protocol::Status => "status", - Protocol::Goodbye => "goodbye", - Protocol::BlocksByRange => "beacon_blocks_by_range", - Protocol::BlocksByRoot => "beacon_blocks_by_root", - Protocol::Ping => "ping", - Protocol::MetaData => "metadata", - Protocol::LightClientBootstrap => "light_client_bootstrap", - }; - f.write_str(repr) - } -} - impl std::fmt::Display for Encoding { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { let repr = match self { @@ -319,7 +326,6 @@ impl ProtocolId { Protocol::Goodbye => RpcLimits::new(0, 0), // Goodbye request has no response Protocol::BlocksByRange => rpc_block_limits_by_fork(fork_context.current_fork()), Protocol::BlocksByRoot => rpc_block_limits_by_fork(fork_context.current_fork()), - Protocol::Ping => RpcLimits::new( ::ssz_fixed_len(), ::ssz_fixed_len(), @@ -338,13 +344,16 @@ impl ProtocolId { /// Returns `true` if the given `ProtocolId` should expect `context_bytes` in the /// beginning of the stream, else returns `false`. 
pub fn has_context_bytes(&self) -> bool { - if self.version == Version::V2 { - match self.message_name { - Protocol::BlocksByRange | Protocol::BlocksByRoot => return true, - _ => return false, - } + match self.message_name { + Protocol::BlocksByRange | Protocol::BlocksByRoot => match self.version { + Version::V2 => true, + Version::V1 => false, + }, + Protocol::LightClientBootstrap => match self.version { + Version::V2 | Version::V1 => true, + }, + Protocol::Goodbye | Protocol::Ping | Protocol::Status | Protocol::MetaData => false, } - false } } diff --git a/beacon_node/lighthouse_network/src/rpc/rate_limiter.rs b/beacon_node/lighthouse_network/src/rpc/rate_limiter.rs index 6ba9f6e9419..a1f7b89a2f2 100644 --- a/beacon_node/lighthouse_network/src/rpc/rate_limiter.rs +++ b/beacon_node/lighthouse_network/src/rpc/rate_limiter.rs @@ -1,6 +1,7 @@ -use crate::rpc::{InboundRequest, Protocol}; +use crate::rpc::Protocol; use fnv::FnvHashMap; use libp2p::PeerId; +use serde_derive::{Deserialize, Serialize}; use std::convert::TryInto; use std::future::Future; use std::hash::Hash; @@ -47,12 +48,31 @@ type Nanosecs = u64; /// n*`replenish_all_every`/`max_tokens` units of time since their last request. /// /// To produce hard limits, set `max_tokens` to 1. +#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)] pub struct Quota { /// How often are `max_tokens` fully replenished. - replenish_all_every: Duration, + pub(super) replenish_all_every: Duration, /// Token limit. This translates on how large can an instantaneous batch of /// tokens be. - max_tokens: u64, + pub(super) max_tokens: u64, +} + +impl Quota { + /// A hard limit of one token every `seconds`. + pub const fn one_every(seconds: u64) -> Self { + Quota { + replenish_all_every: Duration::from_secs(seconds), + max_tokens: 1, + } + } + + /// Allow `n` tokens to be used every `seconds`.
+ pub const fn n_every(n: u64, seconds: u64) -> Self { + Quota { + replenish_all_every: Duration::from_secs(seconds), + max_tokens: n, + } + } } /// Manages rate limiting of requests per peer, with differentiated rates per protocol. @@ -78,6 +98,7 @@ pub struct RPCRateLimiter { } /// Error type for non conformant requests +#[derive(Debug)] pub enum RateLimitedErr { /// Required tokens for this request exceed the maximum TooLarge, @@ -86,7 +107,7 @@ pub enum RateLimitedErr { } /// User-friendly builder of a `RPCRateLimiter` -#[derive(Default)] +#[derive(Default, Clone)] pub struct RPCRateLimiterBuilder { /// Quota for the Goodbye protocol. goodbye_quota: Option, @@ -105,13 +126,8 @@ pub struct RPCRateLimiterBuilder { } impl RPCRateLimiterBuilder { - /// Get an empty `RPCRateLimiterBuilder`. - pub fn new() -> Self { - Default::default() - } - /// Set a quota for a protocol. - fn set_quota(mut self, protocol: Protocol, quota: Quota) -> Self { + pub fn set_quota(mut self, protocol: Protocol, quota: Quota) -> Self { let q = Some(quota); match protocol { Protocol::Ping => self.ping_quota = q, @@ -191,11 +207,40 @@ impl RPCRateLimiterBuilder { } } +pub trait RateLimiterItem { + fn protocol(&self) -> Protocol; + fn expected_responses(&self) -> u64; +} + +impl RateLimiterItem for super::InboundRequest { + fn protocol(&self) -> Protocol { + self.protocol() + } + + fn expected_responses(&self) -> u64 { + self.expected_responses() + } +} + +impl RateLimiterItem for super::OutboundRequest { + fn protocol(&self) -> Protocol { + self.protocol() + } + + fn expected_responses(&self) -> u64 { + self.expected_responses() + } +} impl RPCRateLimiter { - pub fn allows( + /// Get a builder instance. 
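The `Quota` type above is a token bucket: up to `max_tokens` requests can burst instantly, and once drained a token returns every `replenish_all_every / max_tokens`. A standalone sketch of the two constructors plus that per-token interval (the `time_per_token` helper is illustrative, not part of the diff):

```rust
use std::time::Duration;

/// Token-bucket quota as in the diff: `max_tokens` are fully
/// replenished every `replenish_all_every`.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Quota {
    replenish_all_every: Duration,
    max_tokens: u64,
}

impl Quota {
    /// A hard limit of one token every `seconds`.
    const fn one_every(seconds: u64) -> Self {
        Quota { replenish_all_every: Duration::from_secs(seconds), max_tokens: 1 }
    }

    /// Allow `n` tokens to be used every `seconds`.
    const fn n_every(n: u64, seconds: u64) -> Self {
        Quota { replenish_all_every: Duration::from_secs(seconds), max_tokens: n }
    }

    /// Interval at which a single token comes back once the bucket is drained.
    fn time_per_token(&self) -> Duration {
        self.replenish_all_every / self.max_tokens as u32
    }
}
```

So the default ping quota `Quota::n_every(2, 10)` allows a burst of two pings, then one new ping every five seconds.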
+ pub fn builder() -> RPCRateLimiterBuilder { + RPCRateLimiterBuilder::default() + } + + pub fn allows( &mut self, peer_id: &PeerId, - request: &InboundRequest, + request: &Item, ) -> Result<(), RateLimitedErr> { let time_since_start = self.init_time.elapsed(); let tokens = request.expected_responses().max(1); diff --git a/beacon_node/lighthouse_network/src/rpc/self_limiter.rs b/beacon_node/lighthouse_network/src/rpc/self_limiter.rs new file mode 100644 index 00000000000..451c6206f37 --- /dev/null +++ b/beacon_node/lighthouse_network/src/rpc/self_limiter.rs @@ -0,0 +1,202 @@ +use std::{ + collections::{hash_map::Entry, HashMap, VecDeque}, + task::{Context, Poll}, + time::Duration, +}; + +use futures::FutureExt; +use libp2p::{swarm::NotifyHandler, PeerId}; +use slog::{crit, debug, Logger}; +use smallvec::SmallVec; +use tokio_util::time::DelayQueue; +use types::EthSpec; + +use super::{ + config::OutboundRateLimiterConfig, + rate_limiter::{RPCRateLimiter as RateLimiter, RateLimitedErr}, + BehaviourAction, OutboundRequest, Protocol, RPCSend, ReqId, +}; + +/// A request that was rate limited or waiting on rate limited requests for the same peer and +/// protocol. +struct QueuedRequest { + req: OutboundRequest, + request_id: Id, +} + +pub(crate) struct SelfRateLimiter { + /// Requests queued for sending per peer. These requests are stored when the self rate + /// limiter rejects them. Rate limiting is applied on a per-peer, per-protocol basis, so the + /// requests are stored in the same way. + delayed_requests: HashMap<(PeerId, Protocol), VecDeque>>, + /// The delay required to allow a peer's outbound request per protocol. + next_peer_request: DelayQueue<(PeerId, Protocol)>, + /// Rate limiter for our own requests. + limiter: RateLimiter, + /// Requests that are ready to be sent. + ready_requests: SmallVec<[BehaviourAction; 3]>, + /// Slog logger. + log: Logger, +} + +/// Error returned when the rate limiter does not accept a request.
+// NOTE: this is currently not used, but might be useful for debugging. +pub enum Error { + /// There are queued requests for this same peer and protocol. + PendingRequests, + /// Request was tried but rate limited. + RateLimited, +} + +impl SelfRateLimiter { + /// Creates a new [`SelfRateLimiter`] based on configuration values. + pub fn new(config: OutboundRateLimiterConfig, log: Logger) -> Result { + debug!(log, "Using self rate limiting params"; "config" => ?config); + // Destructure to make sure every configuration value is used. + let OutboundRateLimiterConfig { + ping_quota, + meta_data_quota, + status_quota, + goodbye_quota, + blocks_by_range_quota, + blocks_by_root_quota, + } = config; + + let limiter = RateLimiter::builder() + .set_quota(Protocol::Ping, ping_quota) + .set_quota(Protocol::MetaData, meta_data_quota) + .set_quota(Protocol::Status, status_quota) + .set_quota(Protocol::Goodbye, goodbye_quota) + .set_quota(Protocol::BlocksByRange, blocks_by_range_quota) + .set_quota(Protocol::BlocksByRoot, blocks_by_root_quota) + // Manually set the LightClientBootstrap quota, since we use the same rate limiter for + // inbound and outbound requests, and LightClientBootstrap is an inbound-only + // protocol. + .one_every(Protocol::LightClientBootstrap, Duration::from_secs(10)) + .build()?; + + Ok(SelfRateLimiter { + delayed_requests: Default::default(), + next_peer_request: Default::default(), + limiter, + ready_requests: Default::default(), + log, + }) + } + + /// Checks if the rate limiter allows the request. If it's allowed, returns the + /// [`NetworkBehaviourAction`] that should be emitted. When not allowed, the request is delayed + /// until it can be sent. + pub fn allows( + &mut self, + peer_id: PeerId, + request_id: Id, + req: OutboundRequest, + ) -> Result, Error> { + let protocol = req.protocol(); + // First check that there are not already other requests waiting to be sent.
+ if let Some(queued_requests) = self.delayed_requests.get_mut(&(peer_id, protocol)) { + queued_requests.push_back(QueuedRequest { req, request_id }); + + return Err(Error::PendingRequests); + } + match Self::try_send_request(&mut self.limiter, peer_id, request_id, req, &self.log) { + Err((rate_limited_req, wait_time)) => { + let key = (peer_id, protocol); + self.next_peer_request.insert(key, wait_time); + self.delayed_requests + .entry(key) + .or_default() + .push_back(rate_limited_req); + + Err(Error::RateLimited) + } + Ok(event) => Ok(event), + } + } + + /// Auxiliary function to deal with self rate limiting outcomes. If the rate limiter allows the + /// request, the [`NetworkBehaviourAction`] that should be emitted is returned. If the request + /// should be delayed, it's returned with the duration to wait. + fn try_send_request( + limiter: &mut RateLimiter, + peer_id: PeerId, + request_id: Id, + req: OutboundRequest, + log: &Logger, + ) -> Result, (QueuedRequest, Duration)> { + match limiter.allows(&peer_id, &req) { + Ok(()) => Ok(BehaviourAction::NotifyHandler { + peer_id, + handler: NotifyHandler::Any, + event: RPCSend::Request(request_id, req), + }), + Err(e) => { + let protocol = req.protocol(); + match e { + RateLimitedErr::TooLarge => { + // this should never happen with default parameters. Let's just send the request. + // Log a crit since this is a config issue. + crit!( + log, + "Self rate limiting error for a batch that will never fit. Sending request anyway. 
Check configuration parameters."; + "protocol" => %req.protocol() + ); + Ok(BehaviourAction::NotifyHandler { + peer_id, + handler: NotifyHandler::Any, + event: RPCSend::Request(request_id, req), + }) + } + RateLimitedErr::TooSoon(wait_time) => { + debug!(log, "Self rate limiting"; "protocol" => %protocol, "wait_time_ms" => wait_time.as_millis(), "peer_id" => %peer_id); + Err((QueuedRequest { req, request_id }, wait_time)) + } + } + } + } + } + + /// When a peer and protocol are allowed to send a next request, this function checks the + /// queued requests and attempts to mark as ready as many as the limiter allows. + fn next_peer_request_ready(&mut self, peer_id: PeerId, protocol: Protocol) { + if let Entry::Occupied(mut entry) = self.delayed_requests.entry((peer_id, protocol)) { + let queued_requests = entry.get_mut(); + while let Some(QueuedRequest { req, request_id }) = queued_requests.pop_front() { + match Self::try_send_request(&mut self.limiter, peer_id, request_id, req, &self.log) + { + Err((rate_limited_req, wait_time)) => { + let key = (peer_id, protocol); + self.next_peer_request.insert(key, wait_time); + queued_requests.push_back(rate_limited_req); + // If one fails just wait for the next window that allows sending requests. + return; + } + Ok(event) => self.ready_requests.push(event), + } + } + if queued_requests.is_empty() { + entry.remove(); + } + } + } + + pub fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { + // First check the requests that were self rate limited, since those might add events to + // the queue. Also do this before rate limiter pruning to avoid removing and + // immediately adding rate limiting keys. + if let Poll::Ready(Some(Ok(expired))) = self.next_peer_request.poll_expired(cx) { + let (peer_id, protocol) = expired.into_inner(); + self.next_peer_request_ready(peer_id, protocol); + } + // Prune the rate limiter. + let _ = self.limiter.poll_unpin(cx); + + // Finally return any queued events.
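The queue-then-retry flow in `allows` and `next_peer_request_ready` above reduces to a per-`(peer, protocol)` FIFO: a request that finds an existing queue for its key waits behind it, preserving send order. A reduced sketch with stand-in types (`PeerId` as `u8`, `Protocol` as a string) and without the `DelayQueue` timing:

```rust
use std::collections::{HashMap, VecDeque};

// Hypothetical stand-ins for libp2p's PeerId, the Protocol enum, and request ids.
type PeerId = u8;
type Protocol = &'static str;
type RequestId = u32;

/// Per-(peer, protocol) FIFO of rate-limited requests, echoing the
/// `delayed_requests` map in `SelfRateLimiter`.
#[derive(Default)]
struct DelayedRequests {
    queues: HashMap<(PeerId, Protocol), VecDeque<RequestId>>,
}

impl DelayedRequests {
    /// Queue a request; returns `true` if it had to wait behind earlier ones.
    fn enqueue(&mut self, peer: PeerId, protocol: Protocol, id: RequestId) -> bool {
        let queue = self.queues.entry((peer, protocol)).or_default();
        let had_pending = !queue.is_empty();
        queue.push_back(id);
        had_pending
    }

    /// Pop the next request ready to retry for this key; empty queues are removed.
    fn next_ready(&mut self, peer: PeerId, protocol: Protocol) -> Option<RequestId> {
        let queue = self.queues.get_mut(&(peer, protocol))?;
        let id = queue.pop_front();
        if queue.is_empty() {
            self.queues.remove(&(peer, protocol));
        }
        id
    }
}
```

Keying by both peer and protocol means a throttled `status` request to one peer never delays `blocks_by_range` requests to the same peer, or anything to other peers.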
+ if !self.ready_requests.is_empty() { + return Poll::Ready(self.ready_requests.remove(0)); + } + + Poll::Pending + } +} diff --git a/beacon_node/lighthouse_network/src/service/api_types.rs b/beacon_node/lighthouse_network/src/service/api_types.rs index 849a86f51ba..bd3df797699 100644 --- a/beacon_node/lighthouse_network/src/service/api_types.rs +++ b/beacon_node/lighthouse_network/src/service/api_types.rs @@ -1,7 +1,8 @@ use std::sync::Arc; use libp2p::core::connection::ConnectionId; -use types::{light_client_bootstrap::LightClientBootstrap, EthSpec, SignedBeaconBlock}; +use types::light_client_bootstrap::LightClientBootstrap; +use types::{EthSpec, SignedBeaconBlock}; use crate::rpc::{ methods::{ diff --git a/beacon_node/lighthouse_network/src/service/gossip_cache.rs b/beacon_node/lighthouse_network/src/service/gossip_cache.rs index c784191cd30..2865d5b3f6a 100644 --- a/beacon_node/lighthouse_network/src/service/gossip_cache.rs +++ b/beacon_node/lighthouse_network/src/service/gossip_cache.rs @@ -34,6 +34,8 @@ pub struct GossipCache { signed_contribution_and_proof: Option, /// Timeout for sync committee messages. sync_committee_message: Option, + /// Timeout for signed BLS to execution changes. + bls_to_execution_change: Option, /// Timeout for light client finality updates. light_client_finality_update: Option, /// Timeout for light client optimistic updates. @@ -59,6 +61,8 @@ pub struct GossipCacheBuilder { signed_contribution_and_proof: Option, /// Timeout for sync committee messages. sync_committee_message: Option, + /// Timeout for signed BLS to execution changes. + bls_to_execution_change: Option, /// Timeout for light client finality updates. light_client_finality_update: Option, /// Timeout for light client optimistic updates. @@ -121,6 +125,12 @@ impl GossipCacheBuilder { self } + /// Timeout for BLS to execution change messages. 
+ pub fn bls_to_execution_change_timeout(mut self, timeout: Duration) -> Self { + self.bls_to_execution_change = Some(timeout); + self + } + /// Timeout for light client finality update messages. pub fn light_client_finality_update_timeout(mut self, timeout: Duration) -> Self { self.light_client_finality_update = Some(timeout); @@ -144,6 +154,7 @@ impl GossipCacheBuilder { attester_slashing, signed_contribution_and_proof, sync_committee_message, + bls_to_execution_change, light_client_finality_update, light_client_optimistic_update, } = self; @@ -158,6 +169,7 @@ impl GossipCacheBuilder { attester_slashing: attester_slashing.or(default_timeout), signed_contribution_and_proof: signed_contribution_and_proof.or(default_timeout), sync_committee_message: sync_committee_message.or(default_timeout), + bls_to_execution_change: bls_to_execution_change.or(default_timeout), light_client_finality_update: light_client_finality_update.or(default_timeout), light_client_optimistic_update: light_client_optimistic_update.or(default_timeout), } @@ -182,6 +194,7 @@ impl GossipCache { GossipKind::AttesterSlashing => self.attester_slashing, GossipKind::SignedContributionAndProof => self.signed_contribution_and_proof, GossipKind::SyncCommitteeMessage(_) => self.sync_committee_message, + GossipKind::BlsToExecutionChange => self.bls_to_execution_change, GossipKind::LightClientFinalityUpdate => self.light_client_finality_update, GossipKind::LightClientOptimisticUpdate => self.light_client_optimistic_update, }; diff --git a/beacon_node/lighthouse_network/src/service/mod.rs b/beacon_node/lighthouse_network/src/service/mod.rs index 5b3598216b5..f815e3bd36b 100644 --- a/beacon_node/lighthouse_network/src/service/mod.rs +++ b/beacon_node/lighthouse_network/src/service/mod.rs @@ -1,3 +1,5 @@ +use self::behaviour::Behaviour; +use self::gossip_cache::GossipCache; use crate::config::{gossipsub_config, NetworkLoad}; use crate::discovery::{ subnet_predicate, DiscoveredPeers, Discovery, 
FIND_NODE_QUERY_CLOSEST_PEERS, @@ -7,15 +9,16 @@ use crate::peer_manager::{ ConnectionDirection, PeerManager, PeerManagerEvent, }; use crate::peer_manager::{MIN_OUTBOUND_ONLY_FACTOR, PEER_EXCESS_FACTOR, PRIORITY_PEER_EXCESS}; +use crate::rpc::*; use crate::service::behaviour::BehaviourEvent; pub use crate::service::behaviour::Gossipsub; use crate::types::{ - subnet_from_topic_hash, GossipEncoding, GossipKind, GossipTopic, SnappyTransform, Subnet, - SubnetDiscovery, + fork_core_topics, subnet_from_topic_hash, GossipEncoding, GossipKind, GossipTopic, + SnappyTransform, Subnet, SubnetDiscovery, }; +use crate::EnrExt; use crate::Eth2Enr; use crate::{error, metrics, Enr, NetworkGlobals, PubsubMessage, TopicHash}; -use crate::{rpc::*, EnrExt}; use api_types::{PeerRequestId, Request, RequestId, Response}; use futures::stream::StreamExt; use gossipsub_scoring_parameters::{lighthouse_gossip_thresholds, PeerScoreSettings}; @@ -31,20 +34,19 @@ use libp2p::multiaddr::{Multiaddr, Protocol as MProtocol}; use libp2p::swarm::{ConnectionLimits, Swarm, SwarmBuilder, SwarmEvent}; use libp2p::PeerId; use slog::{crit, debug, info, o, trace, warn}; - -use std::marker::PhantomData; use std::path::PathBuf; use std::pin::Pin; -use std::sync::Arc; -use std::task::{Context, Poll}; +use std::{ + marker::PhantomData, + sync::Arc, + task::{Context, Poll}, +}; +use types::ForkName; use types::{ consts::altair::SYNC_COMMITTEE_SUBNET_COUNT, EnrForkId, EthSpec, ForkContext, Slot, SubnetId, }; use utils::{build_transport, strip_peer_id, Context as ServiceContext, MAX_CONNECTIONS_PER_PEER}; -use self::behaviour::Behaviour; -use self::gossip_cache::GossipCache; - pub mod api_types; mod behaviour; mod gossip_cache; @@ -161,14 +163,15 @@ impl Network { let meta_data = utils::load_or_build_metadata(&config.network_dir, &log); let globals = NetworkGlobals::new( enr, - config.libp2p_port, - config.discovery_port, + config.listen_addrs().v4().map(|v4_addr| v4_addr.tcp_port), + 
config.listen_addrs().v6().map(|v6_addr| v6_addr.tcp_port), meta_data, config .trusted_peers .iter() .map(|x| PeerId::from(x.clone())) .collect(), + config.disable_peer_scoring, &log, ); Arc::new(globals) @@ -197,6 +200,7 @@ impl Network { .attester_slashing_timeout(half_epoch * 2) // .signed_contribution_and_proof_timeout(timeout) // Do not retry // .sync_committee_message_timeout(timeout) // Do not retry + .bls_to_execution_change_timeout(half_epoch * 2) .build() }; @@ -262,6 +266,7 @@ impl Network { let eth2_rpc = RPC::new( ctx.fork_context.clone(), config.enable_light_client_server, + config.outbound_rate_limiter_config.clone(), log.clone(), ); @@ -384,36 +389,26 @@ impl Network { async fn start(&mut self, config: &crate::NetworkConfig) -> error::Result<()> { let enr = self.network_globals.local_enr(); info!(self.log, "Libp2p Starting"; "peer_id" => %enr.peer_id(), "bandwidth_config" => format!("{}-{}", config.network_load, NetworkLoad::from(config.network_load).name)); - let discovery_string = if config.disable_discovery { - "None".into() - } else { - config.discovery_port.to_string() - }; - - debug!(self.log, "Attempting to open listening ports"; "address" => ?config.listen_address, "tcp_port" => config.libp2p_port, "udp_port" => discovery_string); - - let listen_multiaddr = { - let mut m = Multiaddr::from(config.listen_address); - m.push(MProtocol::Tcp(config.libp2p_port)); - m - }; - - match self.swarm.listen_on(listen_multiaddr.clone()) { - Ok(_) => { - let mut log_address = listen_multiaddr; - log_address.push(MProtocol::P2p(enr.peer_id().into())); - info!(self.log, "Listening established"; "address" => %log_address); - } - Err(err) => { - crit!( - self.log, - "Unable to listen on libp2p address"; - "error" => ?err, - "listen_multiaddr" => %listen_multiaddr, - ); - return Err("Libp2p was unable to listen on the given listen address.".into()); - } - }; + debug!(self.log, "Attempting to open listening ports"; config.listen_addrs(), "discovery_enabled" => 
!config.disable_discovery); + + for listen_multiaddr in config.listen_addrs().tcp_addresses() { + match self.swarm.listen_on(listen_multiaddr.clone()) { + Ok(_) => { + let mut log_address = listen_multiaddr; + log_address.push(MProtocol::P2p(enr.peer_id().into())); + info!(self.log, "Listening established"; "address" => %log_address); + } + Err(err) => { + crit!( + self.log, + "Unable to listen on libp2p address"; + "error" => ?err, + "listen_multiaddr" => %listen_multiaddr, + ); + return Err("Libp2p was unable to listen on the given listen address.".into()); + } + }; + } // helper closure for dialing peers let mut dial = |mut multiaddr: Multiaddr| { @@ -556,13 +551,20 @@ impl Network { self.unsubscribe(gossip_topic) } - /// Subscribe to all currently subscribed topics with the new fork digest. - pub fn subscribe_new_fork_topics(&mut self, new_fork_digest: [u8; 4]) { + /// Subscribe to all required topics for the `new_fork` with the given `new_fork_digest`. + pub fn subscribe_new_fork_topics(&mut self, new_fork: ForkName, new_fork_digest: [u8; 4]) { + // Subscribe to existing topics with new fork digest let subscriptions = self.network_globals.gossipsub_subscriptions.read().clone(); for mut topic in subscriptions.into_iter() { topic.fork_digest = new_fork_digest; self.subscribe(topic); } + + // Subscribe to core topics for the new fork + for kind in fork_core_topics(&new_fork) { + let topic = GossipTopic::new(kind, GossipEncoding::default(), new_fork_digest); + self.subscribe(topic); + } } /// Unsubscribe from all topics that doesn't have the given fork_digest @@ -1118,7 +1120,7 @@ impl Network { debug!(self.log, "Peer does not support gossipsub"; "peer_id" => %peer_id); self.peer_manager_mut().report_peer( &peer_id, - PeerAction::LowToleranceError, + PeerAction::Fatal, ReportSource::Gossipsub, Some(GoodbyeReason::Unknown), "does_not_support_gossipsub", diff --git a/beacon_node/lighthouse_network/src/service/utils.rs 
b/beacon_node/lighthouse_network/src/service/utils.rs index addaaf5b5e9..625df65ee9d 100644 --- a/beacon_node/lighthouse_network/src/service/utils.rs +++ b/beacon_node/lighthouse_network/src/service/utils.rs @@ -252,6 +252,7 @@ pub(crate) fn create_whitelist_filter( add(ProposerSlashing); add(AttesterSlashing); add(SignedContributionAndProof); + add(BlsToExecutionChange); add(LightClientFinalityUpdate); add(LightClientOptimisticUpdate); for id in 0..attestation_subnet_count { diff --git a/beacon_node/lighthouse_network/src/types/globals.rs b/beacon_node/lighthouse_network/src/types/globals.rs index aadd13a236b..43e8ebd76a5 100644 --- a/beacon_node/lighthouse_network/src/types/globals.rs +++ b/beacon_node/lighthouse_network/src/types/globals.rs @@ -7,7 +7,6 @@ use crate::EnrExt; use crate::{Enr, GossipTopic, Multiaddr, PeerId}; use parking_lot::RwLock; use std::collections::HashSet; -use std::sync::atomic::{AtomicU16, Ordering}; use types::EthSpec; pub struct NetworkGlobals { @@ -17,10 +16,10 @@ pub struct NetworkGlobals { pub peer_id: RwLock, /// Listening multiaddrs. pub listen_multiaddrs: RwLock>, - /// The TCP port that the libp2p service is listening on - pub listen_port_tcp: AtomicU16, - /// The UDP port that the discovery service is listening on - pub listen_port_udp: AtomicU16, + /// The TCP port that the libp2p service is listening on over Ipv4. + listen_port_tcp4: Option, + /// The TCP port that the libp2p service is listening on over Ipv6. + listen_port_tcp6: Option, /// The collection of known peers. pub peers: RwLock>, // The local meta data of our node. 
@@ -36,20 +35,21 @@ pub struct NetworkGlobals { impl NetworkGlobals { pub fn new( enr: Enr, - tcp_port: u16, - udp_port: u16, + listen_port_tcp4: Option, + listen_port_tcp6: Option, local_metadata: MetaData, trusted_peers: Vec, + disable_peer_scoring: bool, log: &slog::Logger, ) -> Self { NetworkGlobals { local_enr: RwLock::new(enr.clone()), peer_id: RwLock::new(enr.peer_id()), listen_multiaddrs: RwLock::new(Vec::new()), - listen_port_tcp: AtomicU16::new(tcp_port), - listen_port_udp: AtomicU16::new(udp_port), + listen_port_tcp4, + listen_port_tcp6, local_metadata: RwLock::new(local_metadata), - peers: RwLock::new(PeerDB::new(trusted_peers, log)), + peers: RwLock::new(PeerDB::new(trusted_peers, disable_peer_scoring, log)), gossipsub_subscriptions: RwLock::new(HashSet::new()), sync_state: RwLock::new(SyncState::Stalled), backfill_state: RwLock::new(BackFillState::NotRequired), @@ -73,13 +73,13 @@ impl NetworkGlobals { } /// Returns the libp2p TCP port that this node has been configured to listen on. - pub fn listen_port_tcp(&self) -> u16 { - self.listen_port_tcp.load(Ordering::Relaxed) + pub fn listen_port_tcp4(&self) -> Option { + self.listen_port_tcp4 } /// Returns the UDP discovery port that this node has been configured to listen on. - pub fn listen_port_udp(&self) -> u16 { - self.listen_port_udp.load(Ordering::Relaxed) + pub fn listen_port_tcp6(&self) -> Option { + self.listen_port_tcp6 } /// Returns the number of libp2p connected peers. 
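The `NetworkGlobals` hunks above replace the mutable `AtomicU16` port fields with immutable `Option<u16>` fields, one per IP version, since the ports are fixed at construction. A minimal standalone sketch of that shape (struct and method names here are simplified stand-ins, not the real `NetworkGlobals`):

```rust
// Illustrative sketch of the dual-stack port storage introduced above:
// ports are known at construction time and never change afterwards, so plain
// `Option<u16>` fields replace the old `AtomicU16` + `Ordering::Relaxed` loads.

struct Globals {
    listen_port_tcp4: Option<u16>,
    listen_port_tcp6: Option<u16>,
}

impl Globals {
    fn new(listen_port_tcp4: Option<u16>, listen_port_tcp6: Option<u16>) -> Self {
        Self {
            listen_port_tcp4,
            listen_port_tcp6,
        }
    }

    /// IPv4 libp2p TCP port, if listening over IPv4.
    fn listen_port_tcp4(&self) -> Option<u16> {
        self.listen_port_tcp4
    }

    /// IPv6 libp2p TCP port, if listening over IPv6.
    fn listen_port_tcp6(&self) -> Option<u16> {
        self.listen_port_tcp6
    }
}

fn main() {
    // An IPv4-only node, mirroring the `Some(9000), None` test default above.
    let globals = Globals::new(Some(9000), None);
    assert_eq!(globals.listen_port_tcp4(), Some(9000));
    assert_eq!(globals.listen_port_tcp6(), None);
}
```

`None` now distinguishes "not listening on this stack" from a real port, which the old always-present `u16` could not express.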
@@ -137,14 +137,15 @@ impl NetworkGlobals { let enr = discv5::enr::EnrBuilder::new("v4").build(&enr_key).unwrap(); NetworkGlobals::new( enr, - 9000, - 9000, + Some(9000), + None, MetaData::V2(MetaDataV2 { seq_number: 0, attnets: Default::default(), syncnets: Default::default(), }), vec![], + false, log, ) } diff --git a/beacon_node/lighthouse_network/src/types/mod.rs b/beacon_node/lighthouse_network/src/types/mod.rs index 2a5ca6c8062..e7457f25dac 100644 --- a/beacon_node/lighthouse_network/src/types/mod.rs +++ b/beacon_node/lighthouse_network/src/types/mod.rs @@ -17,6 +17,6 @@ pub use pubsub::{PubsubMessage, SnappyTransform}; pub use subnet::{Subnet, SubnetDiscovery}; pub use sync_state::{BackFillState, SyncState}; pub use topics::{ - subnet_from_topic_hash, GossipEncoding, GossipKind, GossipTopic, CORE_TOPICS, - LIGHT_CLIENT_GOSSIP_TOPICS, + core_topics_to_subscribe, fork_core_topics, subnet_from_topic_hash, GossipEncoding, GossipKind, + GossipTopic, LIGHT_CLIENT_GOSSIP_TOPICS, }; diff --git a/beacon_node/lighthouse_network/src/types/pubsub.rs b/beacon_node/lighthouse_network/src/types/pubsub.rs index b036e558c99..bb0397de1e2 100644 --- a/beacon_node/lighthouse_network/src/types/pubsub.rs +++ b/beacon_node/lighthouse_network/src/types/pubsub.rs @@ -11,8 +11,9 @@ use std::sync::Arc; use types::{ Attestation, AttesterSlashing, EthSpec, ForkContext, ForkName, LightClientFinalityUpdate, LightClientOptimisticUpdate, ProposerSlashing, SignedAggregateAndProof, SignedBeaconBlock, - SignedBeaconBlockAltair, SignedBeaconBlockBase, SignedBeaconBlockMerge, - SignedContributionAndProof, SignedVoluntaryExit, SubnetId, SyncCommitteeMessage, SyncSubnetId, + SignedBeaconBlockAltair, SignedBeaconBlockBase, SignedBeaconBlockCapella, + SignedBeaconBlockMerge, SignedBlsToExecutionChange, SignedContributionAndProof, + SignedVoluntaryExit, SubnetId, SyncCommitteeMessage, SyncSubnetId, }; #[derive(Debug, Clone, PartialEq)] @@ -33,6 +34,8 @@ pub enum PubsubMessage { 
SignedContributionAndProof(Box>), /// Gossipsub message providing notification of unaggregated sync committee signatures with its subnet id. SyncCommitteeMessage(Box<(SyncSubnetId, SyncCommitteeMessage)>), + /// Gossipsub message for BLS to execution change messages. + BlsToExecutionChange(Box), /// Gossipsub message providing notification of a light client finality update. LightClientFinalityUpdate(Box>), /// Gossipsub message providing notification of a light client optimistic update. @@ -119,6 +122,7 @@ impl PubsubMessage { PubsubMessage::AttesterSlashing(_) => GossipKind::AttesterSlashing, PubsubMessage::SignedContributionAndProof(_) => GossipKind::SignedContributionAndProof, PubsubMessage::SyncCommitteeMessage(data) => GossipKind::SyncCommitteeMessage(data.0), + PubsubMessage::BlsToExecutionChange(_) => GossipKind::BlsToExecutionChange, PubsubMessage::LightClientFinalityUpdate(_) => GossipKind::LightClientFinalityUpdate, PubsubMessage::LightClientOptimisticUpdate(_) => { GossipKind::LightClientOptimisticUpdate @@ -175,6 +179,10 @@ impl PubsubMessage { SignedBeaconBlockMerge::from_ssz_bytes(data) .map_err(|e| format!("{:?}", e))?, ), + Some(ForkName::Capella) => SignedBeaconBlock::::Capella( + SignedBeaconBlockCapella::from_ssz_bytes(data) + .map_err(|e| format!("{:?}", e))?, + ), None => { return Err(format!( "Unknown gossipsub fork digest: {:?}", @@ -214,6 +222,14 @@ impl PubsubMessage { sync_committee, )))) } + GossipKind::BlsToExecutionChange => { + let bls_to_execution_change = + SignedBlsToExecutionChange::from_ssz_bytes(data) + .map_err(|e| format!("{:?}", e))?; + Ok(PubsubMessage::BlsToExecutionChange(Box::new( + bls_to_execution_change, + ))) + } GossipKind::LightClientFinalityUpdate => { let light_client_finality_update = LightClientFinalityUpdate::from_ssz_bytes(data) @@ -251,6 +267,7 @@ impl PubsubMessage { PubsubMessage::Attestation(data) => data.1.as_ssz_bytes(), PubsubMessage::SignedContributionAndProof(data) => data.as_ssz_bytes(), 
PubsubMessage::SyncCommitteeMessage(data) => data.1.as_ssz_bytes(), + PubsubMessage::BlsToExecutionChange(data) => data.as_ssz_bytes(), PubsubMessage::LightClientFinalityUpdate(data) => data.as_ssz_bytes(), PubsubMessage::LightClientOptimisticUpdate(data) => data.as_ssz_bytes(), } @@ -287,6 +304,13 @@ impl std::fmt::Display for PubsubMessage { PubsubMessage::SyncCommitteeMessage(data) => { write!(f, "Sync committee message: subnet_id: {}", *data.0) } + PubsubMessage::BlsToExecutionChange(data) => { + write!( + f, + "Signed BLS to execution change: validator_index: {}, address: {:?}", + data.message.validator_index, data.message.to_execution_address + ) + } PubsubMessage::LightClientFinalityUpdate(_data) => { write!(f, "Light CLient Finality Update") } diff --git a/beacon_node/lighthouse_network/src/types/topics.rs b/beacon_node/lighthouse_network/src/types/topics.rs index e7e3cf4abbe..0e4aefbb5c1 100644 --- a/beacon_node/lighthouse_network/src/types/topics.rs +++ b/beacon_node/lighthouse_network/src/types/topics.rs @@ -1,7 +1,7 @@ use libp2p::gossipsub::{IdentTopic as Topic, TopicHash}; use serde_derive::{Deserialize, Serialize}; use strum::AsRefStr; -use types::{SubnetId, SyncSubnetId}; +use types::{ForkName, SubnetId, SyncSubnetId}; use crate::Subnet; @@ -18,23 +18,49 @@ pub const PROPOSER_SLASHING_TOPIC: &str = "proposer_slashing"; pub const ATTESTER_SLASHING_TOPIC: &str = "attester_slashing"; pub const SIGNED_CONTRIBUTION_AND_PROOF_TOPIC: &str = "sync_committee_contribution_and_proof"; pub const SYNC_COMMITTEE_PREFIX_TOPIC: &str = "sync_committee_"; +pub const BLS_TO_EXECUTION_CHANGE_TOPIC: &str = "bls_to_execution_change"; pub const LIGHT_CLIENT_FINALITY_UPDATE: &str = "light_client_finality_update"; pub const LIGHT_CLIENT_OPTIMISTIC_UPDATE: &str = "light_client_optimistic_update"; -pub const CORE_TOPICS: [GossipKind; 6] = [ +pub const BASE_CORE_TOPICS: [GossipKind; 5] = [ GossipKind::BeaconBlock, GossipKind::BeaconAggregateAndProof, GossipKind::VoluntaryExit, 
GossipKind::ProposerSlashing, GossipKind::AttesterSlashing, - GossipKind::SignedContributionAndProof, ]; +pub const ALTAIR_CORE_TOPICS: [GossipKind; 1] = [GossipKind::SignedContributionAndProof]; + +pub const CAPELLA_CORE_TOPICS: [GossipKind; 1] = [GossipKind::BlsToExecutionChange]; + pub const LIGHT_CLIENT_GOSSIP_TOPICS: [GossipKind; 2] = [ GossipKind::LightClientFinalityUpdate, GossipKind::LightClientOptimisticUpdate, ]; +/// Returns the core topics associated with each fork that are new relative to the previous fork. +pub fn fork_core_topics(fork_name: &ForkName) -> Vec<GossipKind> { + match fork_name { + ForkName::Base => BASE_CORE_TOPICS.to_vec(), + ForkName::Altair => ALTAIR_CORE_TOPICS.to_vec(), + ForkName::Merge => vec![], + ForkName::Capella => CAPELLA_CORE_TOPICS.to_vec(), + } +} + +/// Returns all the topics that we need to subscribe to for a given fork, +/// including topics from older forks and new topics for the current fork. +pub fn core_topics_to_subscribe(mut current_fork: ForkName) -> Vec<GossipKind> { + let mut topics = fork_core_topics(&current_fork); + while let Some(previous_fork) = current_fork.previous_fork() { + let previous_fork_topics = fork_core_topics(&previous_fork); + topics.extend(previous_fork_topics); + current_fork = previous_fork; + } + topics +} + /// A gossipsub topic which encapsulates the type of messages that should be sent and received over /// the pubsub protocol and the way the messages should be encoded. #[derive(Clone, Debug, Serialize, Deserialize, PartialEq, Eq, Hash)] @@ -70,6 +96,8 @@ pub enum GossipKind { /// Topic for publishing unaggregated sync committee signatures on a particular subnet. #[strum(serialize = "sync_committee")] SyncCommitteeMessage(SyncSubnetId), + /// Topic for validator messages which change their withdrawal address. + BlsToExecutionChange, /// Topic for publishing finality updates for light clients. LightClientFinalityUpdate, /// Topic for publishing optimistic updates for light clients.
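The fork-walking accumulation in `core_topics_to_subscribe` above can be sketched standalone. Everything here (the simplified `Fork` enum, string topic names) is illustrative rather than Lighthouse's actual `ForkName`/`GossipKind` types:

```rust
// Sketch of the per-fork topic accumulation: each fork contributes only the
// topics that are *new* relative to its predecessor, and subscribing at a
// fork walks back through every previous fork to collect inherited topics.

#[derive(Clone, Copy, Debug, PartialEq)]
enum Fork {
    Base,
    Altair,
    Merge,
    Capella,
}

impl Fork {
    fn previous(self) -> Option<Fork> {
        match self {
            Fork::Base => None,
            Fork::Altair => Some(Fork::Base),
            Fork::Merge => Some(Fork::Altair),
            Fork::Capella => Some(Fork::Merge),
        }
    }
}

/// Topics that are new at each fork (mirrors `fork_core_topics`).
fn new_topics(fork: Fork) -> Vec<&'static str> {
    match fork {
        Fork::Base => vec!["beacon_block", "beacon_aggregate_and_proof"],
        Fork::Altair => vec!["sync_committee_contribution_and_proof"],
        Fork::Merge => vec![], // the Merge added no new core gossip topics
        Fork::Capella => vec!["bls_to_execution_change"],
    }
}

/// All topics to subscribe to at `fork`, including inherited ones
/// (mirrors `core_topics_to_subscribe`).
fn topics_to_subscribe(mut fork: Fork) -> Vec<&'static str> {
    let mut topics = new_topics(fork);
    while let Some(prev) = fork.previous() {
        topics.extend(new_topics(prev));
        fork = prev;
    }
    topics
}

fn main() {
    let capella = topics_to_subscribe(Fork::Capella);
    // Capella inherits every earlier fork's core topics.
    assert!(capella.contains(&"bls_to_execution_change"));
    assert!(capella.contains(&"sync_committee_contribution_and_proof"));
    assert!(capella.contains(&"beacon_block"));
}
```

This mirrors the patch's `test_core_topics_to_subscribe`, which builds the expected list as Capella topics, then Altair, then Base.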
@@ -147,6 +175,7 @@ impl GossipTopic { VOLUNTARY_EXIT_TOPIC => GossipKind::VoluntaryExit, PROPOSER_SLASHING_TOPIC => GossipKind::ProposerSlashing, ATTESTER_SLASHING_TOPIC => GossipKind::AttesterSlashing, + BLS_TO_EXECUTION_CHANGE_TOPIC => GossipKind::BlsToExecutionChange, LIGHT_CLIENT_FINALITY_UPDATE => GossipKind::LightClientFinalityUpdate, LIGHT_CLIENT_OPTIMISTIC_UPDATE => GossipKind::LightClientOptimisticUpdate, topic => match committee_topic_index(topic) { @@ -207,6 +236,7 @@ impl std::fmt::Display for GossipTopic { GossipKind::SyncCommitteeMessage(index) => { format!("{}{}", SYNC_COMMITTEE_PREFIX_TOPIC, *index) } + GossipKind::BlsToExecutionChange => BLS_TO_EXECUTION_CHANGE_TOPIC.into(), GossipKind::LightClientFinalityUpdate => LIGHT_CLIENT_FINALITY_UPDATE.into(), GossipKind::LightClientOptimisticUpdate => LIGHT_CLIENT_OPTIMISTIC_UPDATE.into(), }; @@ -384,4 +414,15 @@ mod tests { assert_eq!("proposer_slashing", ProposerSlashing.as_ref()); assert_eq!("attester_slashing", AttesterSlashing.as_ref()); } + + #[test] + fn test_core_topics_to_subscribe() { + let mut all_topics = Vec::new(); + all_topics.extend(CAPELLA_CORE_TOPICS); + all_topics.extend(ALTAIR_CORE_TOPICS); + all_topics.extend(BASE_CORE_TOPICS); + + let latest_fork = *ForkName::list_all().last().unwrap(); + assert_eq!(core_topics_to_subscribe(latest_fork), all_topics); + } } diff --git a/beacon_node/lighthouse_network/tests/common.rs b/beacon_node/lighthouse_network/tests/common.rs index b67b412cfc2..d44f20c0806 100644 --- a/beacon_node/lighthouse_network/tests/common.rs +++ b/beacon_node/lighthouse_network/tests/common.rs @@ -13,7 +13,7 @@ use tokio::runtime::Runtime; use types::{ ChainSpec, EnrForkId, Epoch, EthSpec, ForkContext, ForkName, Hash256, MinimalEthSpec, Slot, }; -use unused_port::unused_tcp_port; +use unused_port::unused_tcp4_port; type E = MinimalEthSpec; type ReqId = usize; @@ -25,14 +25,17 @@ pub fn fork_context(fork_name: ForkName) -> ForkContext { let mut chain_spec = 
E::default_spec(); let altair_fork_epoch = Epoch::new(1); let merge_fork_epoch = Epoch::new(2); + let capella_fork_epoch = Epoch::new(3); chain_spec.altair_fork_epoch = Some(altair_fork_epoch); chain_spec.bellatrix_fork_epoch = Some(merge_fork_epoch); + chain_spec.capella_fork_epoch = Some(capella_fork_epoch); let current_slot = match fork_name { ForkName::Base => Slot::new(0), ForkName::Altair => altair_fork_epoch.start_slot(E::slots_per_epoch()), ForkName::Merge => merge_fork_epoch.start_slot(E::slots_per_epoch()), + ForkName::Capella => capella_fork_epoch.start_slot(E::slots_per_epoch()), }; ForkContext::new::(current_slot, Hash256::zero(), &chain_spec) } @@ -72,11 +75,9 @@ pub fn build_config(port: u16, mut boot_nodes: Vec) -> NetworkConfig { .tempdir() .unwrap(); - config.libp2p_port = port; // tcp port - config.discovery_port = port; // udp port - config.enr_tcp_port = Some(port); - config.enr_udp_port = Some(port); - config.enr_address = Some("127.0.0.1".parse().unwrap()); + config.set_ipv4_listening_address(std::net::Ipv4Addr::UNSPECIFIED, port, port); + config.enr_udp4_port = Some(port); + config.enr_address = (Some(std::net::Ipv4Addr::LOCALHOST), None); config.boot_nodes_enr.append(&mut boot_nodes); config.network_dir = path.into_path(); // Reduce gossipsub heartbeat parameters @@ -94,7 +95,7 @@ pub async fn build_libp2p_instance( log: slog::Logger, fork_name: ForkName, ) -> Libp2pInstance { - let port = unused_tcp_port().unwrap(); + let port = unused_tcp4_port().unwrap(); let config = build_config(port, boot_nodes); // launch libp2p service diff --git a/beacon_node/lighthouse_network/tests/rpc_tests.rs b/beacon_node/lighthouse_network/tests/rpc_tests.rs index 9183453492c..ebdbb67421f 100644 --- a/beacon_node/lighthouse_network/tests/rpc_tests.rs +++ b/beacon_node/lighthouse_network/tests/rpc_tests.rs @@ -9,8 +9,8 @@ use std::time::Duration; use tokio::runtime::Runtime; use tokio::time::sleep; use types::{ - BeaconBlock, BeaconBlockAltair, 
BeaconBlockBase, BeaconBlockMerge, Epoch, EthSpec, ForkContext, - ForkName, Hash256, MinimalEthSpec, Signature, SignedBeaconBlock, Slot, + BeaconBlock, BeaconBlockAltair, BeaconBlockBase, BeaconBlockMerge, EmptyBlock, Epoch, EthSpec, + ForkContext, ForkName, Hash256, MinimalEthSpec, Signature, SignedBeaconBlock, Slot, }; mod common; diff --git a/beacon_node/network/Cargo.toml b/beacon_node/network/Cargo.toml index 43c6cef8464..9a0b7946466 100644 --- a/beacon_node/network/Cargo.toml +++ b/beacon_node/network/Cargo.toml @@ -43,8 +43,10 @@ if-addrs = "0.6.4" strum = "0.24.0" tokio-util = { version = "0.6.3", features = ["time"] } derivative = "2.2.0" -delay_map = "0.1.1" +delay_map = "0.3.0" ethereum-types = { version = "0.14.1", optional = true } +operation_pool = { path = "../operation_pool" } +execution_layer = { path = "../execution_layer" } [features] deterministic_long_lived_attnets = [ "ethereum-types" ] diff --git a/beacon_node/network/src/beacon_processor/mod.rs b/beacon_node/network/src/beacon_processor/mod.rs index 743a97a29c2..96032052284 100644 --- a/beacon_node/network/src/beacon_processor/mod.rs +++ b/beacon_node/network/src/beacon_processor/mod.rs @@ -61,13 +61,15 @@ use std::time::Duration; use std::{cmp, collections::HashSet}; use task_executor::TaskExecutor; use tokio::sync::mpsc; +use tokio::sync::mpsc::error::TrySendError; use types::{ Attestation, AttesterSlashing, Hash256, LightClientFinalityUpdate, LightClientOptimisticUpdate, - ProposerSlashing, SignedAggregateAndProof, SignedBeaconBlock, SignedContributionAndProof, - SignedVoluntaryExit, SubnetId, SyncCommitteeMessage, SyncSubnetId, + ProposerSlashing, SignedAggregateAndProof, SignedBeaconBlock, SignedBlsToExecutionChange, + SignedContributionAndProof, SignedVoluntaryExit, SubnetId, SyncCommitteeMessage, SyncSubnetId, }; use work_reprocessing_queue::{ - spawn_reprocess_scheduler, QueuedAggregate, QueuedRpcBlock, QueuedUnaggregate, ReadyWork, + spawn_reprocess_scheduler, QueuedAggregate, 
QueuedLightClientUpdate, QueuedRpcBlock, + QueuedUnaggregate, ReadyWork, }; use worker::{Toolbox, Worker}; @@ -76,7 +78,9 @@ mod tests; mod work_reprocessing_queue; mod worker; -use crate::beacon_processor::work_reprocessing_queue::QueuedGossipBlock; +use crate::beacon_processor::work_reprocessing_queue::{ + QueuedBackfillBatch, QueuedGossipBlock, ReprocessQueueMessage, +}; pub use worker::{ChainSegmentProcessId, GossipAggregatePackage, GossipAttestationPackage}; /// The maximum size of the channel for work events to the `BeaconProcessor`. @@ -137,6 +141,10 @@ const MAX_GOSSIP_FINALITY_UPDATE_QUEUE_LEN: usize = 1_024; /// before we start dropping them. const MAX_GOSSIP_OPTIMISTIC_UPDATE_QUEUE_LEN: usize = 1_024; +/// The maximum number of queued `LightClientOptimisticUpdate` objects received on gossip that will be stored +/// for reprocessing before we start dropping them. +const MAX_GOSSIP_OPTIMISTIC_UPDATE_REPROCESS_QUEUE_LEN: usize = 128; + /// The maximum number of queued `SyncCommitteeMessage` objects that will be stored before we start dropping /// them. const MAX_SYNC_MESSAGE_QUEUE_LEN: usize = 2048; @@ -165,6 +173,12 @@ const MAX_BLOCKS_BY_RANGE_QUEUE_LEN: usize = 1_024; /// will be stored before we start dropping them. const MAX_BLOCKS_BY_ROOTS_QUEUE_LEN: usize = 1_024; +/// Maximum number of `SignedBlsToExecutionChange` messages to queue before dropping them. +/// +/// This value is set high to accommodate the large spike that is expected immediately after Capella +/// is activated. +const MAX_BLS_TO_EXECUTION_CHANGE_QUEUE_LEN: usize = 16_384; + /// The maximum number of queued `LightClientBootstrapRequest` objects received from the network RPC that /// will be stored before we start dropping them. 
const MAX_LIGHT_CLIENT_BOOTSTRAP_QUEUE_LEN: usize = 1_024; @@ -207,12 +221,15 @@ pub const GOSSIP_LIGHT_CLIENT_FINALITY_UPDATE: &str = "light_client_finality_upd pub const GOSSIP_LIGHT_CLIENT_OPTIMISTIC_UPDATE: &str = "light_client_optimistic_update"; pub const RPC_BLOCK: &str = "rpc_block"; pub const CHAIN_SEGMENT: &str = "chain_segment"; +pub const CHAIN_SEGMENT_BACKFILL: &str = "chain_segment_backfill"; pub const STATUS_PROCESSING: &str = "status_processing"; pub const BLOCKS_BY_RANGE_REQUEST: &str = "blocks_by_range_request"; pub const BLOCKS_BY_ROOTS_REQUEST: &str = "blocks_by_roots_request"; pub const LIGHT_CLIENT_BOOTSTRAP_REQUEST: &str = "light_client_bootstrap"; pub const UNKNOWN_BLOCK_ATTESTATION: &str = "unknown_block_attestation"; pub const UNKNOWN_BLOCK_AGGREGATE: &str = "unknown_block_aggregate"; +pub const UNKNOWN_LIGHT_CLIENT_UPDATE: &str = "unknown_light_client_update"; +pub const GOSSIP_BLS_TO_EXECUTION_CHANGE: &str = "gossip_bls_to_execution_change"; /// A simple first-in-first-out queue with a maximum length. struct FifoQueue { @@ -538,6 +555,22 @@ impl WorkEvent { } } + /// Create a new `Work` event for some BLS to execution change. + pub fn gossip_bls_to_execution_change( + message_id: MessageId, + peer_id: PeerId, + bls_to_execution_change: Box, + ) -> Self { + Self { + drop_during_sync: false, + work: Work::GossipBlsToExecutionChange { + message_id, + peer_id, + bls_to_execution_change, + }, + } + } + /// Create a new `Work` event for some block, where the result from computation (if any) is /// sent to the other side of `result_tx`. pub fn rpc_beacon_block( @@ -694,6 +727,24 @@ impl std::convert::From> for WorkEvent { seen_timestamp, }, }, + ReadyWork::LightClientUpdate(QueuedLightClientUpdate { + peer_id, + message_id, + light_client_optimistic_update, + seen_timestamp, + .. 
+ }) => Self { + drop_during_sync: true, + work: Work::UnknownLightClientOptimisticUpdate { + message_id, + peer_id, + light_client_optimistic_update, + seen_timestamp, + }, + }, + ReadyWork::BackfillSync(QueuedBackfillBatch { process_id, blocks }) => { + WorkEvent::chain_segment(process_id, blocks) + } } } } @@ -733,6 +784,12 @@ pub enum Work { aggregate: Box>, seen_timestamp: Duration, }, + UnknownLightClientOptimisticUpdate { + message_id: MessageId, + peer_id: PeerId, + light_client_optimistic_update: Box>, + seen_timestamp: Duration, + }, GossipAggregateBatch { packages: Vec>, }, @@ -813,6 +870,11 @@ pub enum Work { request_id: PeerRequestId, request: BlocksByRootRequest, }, + GossipBlsToExecutionChange { + message_id: MessageId, + peer_id: PeerId, + bls_to_execution_change: Box, + }, LightClientBootstrapRequest { peer_id: PeerId, request_id: PeerRequestId, @@ -838,6 +900,10 @@ impl Work { Work::GossipLightClientFinalityUpdate { .. } => GOSSIP_LIGHT_CLIENT_FINALITY_UPDATE, Work::GossipLightClientOptimisticUpdate { .. } => GOSSIP_LIGHT_CLIENT_OPTIMISTIC_UPDATE, Work::RpcBlock { .. } => RPC_BLOCK, + Work::ChainSegment { + process_id: ChainSegmentProcessId::BackSyncBatchId { .. }, + .. + } => CHAIN_SEGMENT_BACKFILL, Work::ChainSegment { .. } => CHAIN_SEGMENT, Work::Status { .. } => STATUS_PROCESSING, Work::BlocksByRangeRequest { .. } => BLOCKS_BY_RANGE_REQUEST, @@ -845,6 +911,8 @@ impl Work { Work::LightClientBootstrapRequest { .. } => LIGHT_CLIENT_BOOTSTRAP_REQUEST, Work::UnknownBlockAttestation { .. } => UNKNOWN_BLOCK_ATTESTATION, Work::UnknownBlockAggregate { .. } => UNKNOWN_BLOCK_AGGREGATE, + Work::GossipBlsToExecutionChange { .. } => GOSSIP_BLS_TO_EXECUTION_CHANGE, + Work::UnknownLightClientOptimisticUpdate { .. } => UNKNOWN_LIGHT_CLIENT_UPDATE, } } } @@ -979,6 +1047,8 @@ impl BeaconProcessor { // Using a FIFO queue for light client updates to maintain sequence order. 
let mut finality_update_queue = FifoQueue::new(MAX_GOSSIP_FINALITY_UPDATE_QUEUE_LEN); let mut optimistic_update_queue = FifoQueue::new(MAX_GOSSIP_OPTIMISTIC_UPDATE_QUEUE_LEN); + let mut unknown_light_client_update_queue = + FifoQueue::new(MAX_GOSSIP_OPTIMISTIC_UPDATE_REPROCESS_QUEUE_LEN); // Using a FIFO queue since blocks need to be imported sequentially. let mut rpc_block_queue = FifoQueue::new(MAX_RPC_BLOCK_QUEUE_LEN); @@ -990,24 +1060,28 @@ impl BeaconProcessor { let mut status_queue = FifoQueue::new(MAX_STATUS_QUEUE_LEN); let mut bbrange_queue = FifoQueue::new(MAX_BLOCKS_BY_RANGE_QUEUE_LEN); let mut bbroots_queue = FifoQueue::new(MAX_BLOCKS_BY_ROOTS_QUEUE_LEN); + + let mut gossip_bls_to_execution_change_queue = + FifoQueue::new(MAX_BLS_TO_EXECUTION_CHANGE_QUEUE_LEN); + let mut lcbootstrap_queue = FifoQueue::new(MAX_LIGHT_CLIENT_BOOTSTRAP_QUEUE_LEN); + + let chain = match self.beacon_chain.upgrade() { + Some(chain) => chain, + // No need to proceed any further if the beacon chain has been dropped, the client + // is shutting down. + None => return, + }; + // Channels for sending work to the re-process scheduler (`work_reprocessing_tx`) and to // receive them back once they are ready (`ready_work_rx`). let (ready_work_tx, ready_work_rx) = mpsc::channel(MAX_SCHEDULED_WORK_QUEUE_LEN); - let work_reprocessing_tx = { - if let Some(chain) = self.beacon_chain.upgrade() { - spawn_reprocess_scheduler( - ready_work_tx, - &self.executor, - chain.slot_clock.clone(), - self.log.clone(), - ) - } else { - // No need to proceed any further if the beacon chain has been dropped, the client - // is shutting down. 
- return; - } - }; + let work_reprocessing_tx = spawn_reprocess_scheduler( + ready_work_tx, + &self.executor, + chain.slot_clock.clone(), + self.log.clone(), + ); let executor = self.executor.clone(); @@ -1020,12 +1094,55 @@ impl BeaconProcessor { reprocess_work_rx: ready_work_rx, }; + let enable_backfill_rate_limiting = chain.config.enable_backfill_rate_limiting; + loop { let work_event = match inbound_events.next().await { Some(InboundEvent::WorkerIdle) => { self.current_workers = self.current_workers.saturating_sub(1); None } + Some(InboundEvent::WorkEvent(event)) if enable_backfill_rate_limiting => { + match QueuedBackfillBatch::try_from(event) { + Ok(backfill_batch) => { + match work_reprocessing_tx + .try_send(ReprocessQueueMessage::BackfillSync(backfill_batch)) + { + Err(e) => { + warn!( + self.log, + "Unable to queue backfill work event. Will try to process now."; + "error" => %e + ); + match e { + TrySendError::Full(reprocess_queue_message) + | TrySendError::Closed(reprocess_queue_message) => { + match reprocess_queue_message { + ReprocessQueueMessage::BackfillSync( + backfill_batch, + ) => Some(backfill_batch.into()), + other => { + crit!( + self.log, + "Unexpected queue message type"; + "message_type" => other.as_ref() + ); + // This is an unhandled exception, drop the message. + continue; + } + } + } + } + } + Ok(..) => { + // backfill work sent to "reprocessing" queue. Process the next event. + continue; + } + } + } + Err(event) => Some(event), + } + } Some(InboundEvent::WorkEvent(event)) | Some(InboundEvent::ReprocessingWork(event)) => Some(event), None => { @@ -1222,9 +1339,12 @@ impl BeaconProcessor { self.spawn_worker(item, toolbox); } else if let Some(item) = gossip_proposer_slashing_queue.pop() { self.spawn_worker(item, toolbox); - // Check exits last since our validators don't get rewards from them. + // Check exits and address changes late since our validators don't get + // rewards from them. 
} else if let Some(item) = gossip_voluntary_exit_queue.pop() { self.spawn_worker(item, toolbox); + } else if let Some(item) = gossip_bls_to_execution_change_queue.pop() { + self.spawn_worker(item, toolbox); // Handle backfill sync chain segments. } else if let Some(item) = backfill_chain_segment.pop() { self.spawn_worker(item, toolbox); @@ -1346,6 +1466,12 @@ impl BeaconProcessor { Work::UnknownBlockAggregate { .. } => { unknown_block_aggregate_queue.push(work) } + Work::GossipBlsToExecutionChange { .. } => { + gossip_bls_to_execution_change_queue.push(work, work_id, &self.log) + } + Work::UnknownLightClientOptimisticUpdate { .. } => { + unknown_light_client_update_queue.push(work, work_id, &self.log) + } } } } @@ -1398,6 +1524,10 @@ impl BeaconProcessor { &metrics::BEACON_PROCESSOR_ATTESTER_SLASHING_QUEUE_TOTAL, gossip_attester_slashing_queue.len() as i64, ); + metrics::set_gauge( + &metrics::BEACON_PROCESSOR_BLS_TO_EXECUTION_CHANGE_QUEUE_TOTAL, + gossip_bls_to_execution_change_queue.len() as i64, + ); if aggregate_queue.is_full() && aggregate_debounce.elapsed() { error!( @@ -1636,6 +1766,20 @@ impl BeaconProcessor { seen_timestamp, ) }), + /* + * BLS to execution change verification. + */ + Work::GossipBlsToExecutionChange { + message_id, + peer_id, + bls_to_execution_change, + } => task_spawner.spawn_blocking(move || { + worker.process_gossip_bls_to_execution_change( + message_id, + peer_id, + *bls_to_execution_change, + ) + }), /* * Light client finality update verification. 
*/ @@ -1665,6 +1809,7 @@ impl BeaconProcessor { message_id, peer_id, *light_client_optimistic_update, + Some(work_reprocessing_tx), seen_timestamp, ) }), @@ -1787,6 +1932,20 @@ impl BeaconProcessor { seen_timestamp, ) }), + Work::UnknownLightClientOptimisticUpdate { + message_id, + peer_id, + light_client_optimistic_update, + seen_timestamp, + } => task_spawner.spawn_blocking(move || { + worker.process_gossip_optimistic_update( + message_id, + peer_id, + *light_client_optimistic_update, + None, + seen_timestamp, + ) + }), }; } } diff --git a/beacon_node/network/src/beacon_processor/tests.rs b/beacon_node/network/src/beacon_processor/tests.rs index ea1a59e0d05..4b0a159eb4b 100644 --- a/beacon_node/network/src/beacon_processor/tests.rs +++ b/beacon_node/network/src/beacon_processor/tests.rs @@ -9,7 +9,7 @@ use crate::{service::NetworkMessage, sync::SyncMessage}; use beacon_chain::test_utils::{ AttestationStrategy, BeaconChainHarness, BlockStrategy, EphemeralHarnessType, }; -use beacon_chain::{BeaconChain, MAXIMUM_GOSSIP_CLOCK_DISPARITY}; +use beacon_chain::{BeaconChain, ChainConfig, MAXIMUM_GOSSIP_CLOCK_DISPARITY}; use lighthouse_network::{ discv5::enr::{CombinedKey, EnrBuilder}, rpc::methods::{MetaData, MetaDataV2}, @@ -23,8 +23,8 @@ use std::sync::Arc; use std::time::Duration; use tokio::sync::mpsc; use types::{ - Attestation, AttesterSlashing, EthSpec, MainnetEthSpec, ProposerSlashing, SignedBeaconBlock, - SignedVoluntaryExit, SubnetId, + Attestation, AttesterSlashing, Epoch, EthSpec, MainnetEthSpec, ProposerSlashing, + SignedBeaconBlock, SignedVoluntaryExit, SubnetId, }; type E = MainnetEthSpec; @@ -36,7 +36,6 @@ const SMALL_CHAIN: u64 = 2; const LONG_CHAIN: u64 = SLOTS_PER_EPOCH * 2; const TCP_PORT: u16 = 42; -const UDP_PORT: u16 = 42; const SEQ_NUMBER: u64 = 0; /// The default time to wait for `BeaconProcessor` events. 
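The optimistic-update dispatch above passes `Some(work_reprocessing_tx)` on the first gossip pass and `None` when the update comes back out of the reprocess queue, which bounds each update to a single requeue. A minimal std-only sketch of that policy (the names `Update` and `process` are illustrative stand-ins, not Lighthouse's API):

```rust
use std::sync::mpsc;

// Hypothetical stand-in for a light client update whose parent block may not
// have been seen yet.
struct Update {
    parent_known: bool,
}

// First (gossip) pass receives `Some(reprocess_tx)` and may park the update;
// the reprocessed pass receives `None`, so an update whose parent is still
// unknown is dropped rather than cycling through the queue forever.
fn process(update: Update, reprocess_tx: Option<&mpsc::Sender<Update>>) -> &'static str {
    if update.parent_known {
        "processed"
    } else if let Some(tx) = reprocess_tx {
        tx.send(update).expect("reprocess queue alive");
        "requeued"
    } else {
        "dropped"
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Gossip pass: parent unknown, so the update is parked for reprocessing.
    assert_eq!(process(Update { parent_known: false }, Some(&tx)), "requeued");
    // Reprocess pass (after the delay fires): no requeue channel, so it drops.
    let retry = rx.recv().unwrap();
    assert_eq!(process(retry, None), "dropped");
}
```

The `Option<Sender>` argument is what prevents an unresolvable update from being requeued indefinitely.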
@@ -71,6 +70,10 @@ impl Drop for TestRig { impl TestRig { pub async fn new(chain_length: u64) -> Self { + Self::new_with_chain_config(chain_length, ChainConfig::default()).await + } + + pub async fn new_with_chain_config(chain_length: u64, chain_config: ChainConfig) -> Self { // This allows for testing voluntary exits without building out a massive chain. let mut spec = E::default_spec(); spec.shard_committee_period = 2; @@ -79,6 +82,7 @@ impl TestRig { .spec(spec) .deterministic_keypairs(VALIDATOR_COUNT) .fresh_ephemeral_store() + .chain_config(chain_config) .build(); harness.advance_slot(); @@ -177,10 +181,11 @@ impl TestRig { let enr = EnrBuilder::new("v4").build(&enr_key).unwrap(); let network_globals = Arc::new(NetworkGlobals::new( enr, - TCP_PORT, - UDP_PORT, + Some(TCP_PORT), + None, meta_data, vec![], + false, &log, )); @@ -262,6 +267,14 @@ impl TestRig { self.beacon_processor_tx.try_send(event).unwrap(); } + pub fn enqueue_backfill_batch(&self) { + let event = WorkEvent::chain_segment( + ChainSegmentProcessId::BackSyncBatchId(Epoch::default()), + Vec::default(), + ); + self.beacon_processor_tx.try_send(event).unwrap(); + } + pub fn enqueue_unaggregated_attestation(&self) { let (attestation, subnet_id) = self.attestations.first().unwrap().clone(); self.beacon_processor_tx @@ -874,3 +887,49 @@ async fn test_rpc_block_reprocessing() { // cache handle was dropped. assert_eq!(next_block_root, rig.head_root()); } + +/// Ensure that backfill batches get rate-limited and processing is scheduled at specified intervals. +#[tokio::test] +async fn test_backfill_sync_processing() { + let mut rig = TestRig::new(SMALL_CHAIN).await; + // Note: verifying the exact event times in an integration test is not straightforward here + // (it is not straightforward to manipulate `TestingSlotClock` due to cloning of `SlotClock` in code) + // and makes the test very slow, hence the timing calculation is unit tested separately in + // `work_reprocessing_queue`. 
+ for _ in 0..1 { + rig.enqueue_backfill_batch(); + // ensure queued batch is not processed until later + rig.assert_no_events_for(Duration::from_millis(100)).await; + // A new batch should be processed within a slot. + rig.assert_event_journal_with_timeout( + &[CHAIN_SEGMENT_BACKFILL, WORKER_FREED, NOTHING_TO_DO], + rig.chain.slot_clock.slot_duration(), + ) + .await; + } +} + +/// Ensure that backfill batches get processed as fast as they can when rate-limiting is disabled. +#[tokio::test] +async fn test_backfill_sync_processing_rate_limiting_disabled() { + let chain_config = ChainConfig { + enable_backfill_rate_limiting: false, + ..Default::default() + }; + let mut rig = TestRig::new_with_chain_config(SMALL_CHAIN, chain_config).await; + + for _ in 0..3 { + rig.enqueue_backfill_batch(); + } + + // ensure all batches are processed + rig.assert_event_journal_with_timeout( + &[ + CHAIN_SEGMENT_BACKFILL, + CHAIN_SEGMENT_BACKFILL, + CHAIN_SEGMENT_BACKFILL, + ], + Duration::from_millis(100), + ) + .await; +} diff --git a/beacon_node/network/src/beacon_processor/work_reprocessing_queue.rs b/beacon_node/network/src/beacon_processor/work_reprocessing_queue.rs index 2aeec11c325..427be6d5138 100644 --- a/beacon_node/network/src/beacon_processor/work_reprocessing_queue.rs +++ b/beacon_node/network/src/beacon_processor/work_reprocessing_queue.rs @@ -11,31 +11,39 @@ //! Aggregated and unaggregated attestations that failed verification due to referencing an unknown //! block will be re-queued until their block is imported, or until they expire. 
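The requeue-until-imported-or-expired behaviour described in the module docs can be modelled with a tiny delayed queue. This is a std-only sketch with hypothetical names (the real implementation uses tokio's `DelayQueue` and also releases entries early when their block is imported):

```rust
use std::time::{Duration, Instant};

// Minimal model: work that cannot be processed yet is parked with a deadline,
// and `pop_ready` releases only the entries whose delay has elapsed.
struct DelayedQueue<T> {
    delayed: Vec<(Instant, T)>,
}

impl<T> DelayedQueue<T> {
    fn new() -> Self {
        Self { delayed: Vec::new() }
    }

    fn queue(&mut self, item: T, delay: Duration) {
        self.delayed.push((Instant::now() + delay, item));
    }

    // Drain every item whose deadline has passed; everything else stays parked.
    fn pop_ready(&mut self) -> Vec<T> {
        let now = Instant::now();
        let (ready, pending): (Vec<_>, Vec<_>) = self
            .delayed
            .drain(..)
            .partition(|(deadline, _)| *deadline <= now);
        self.delayed = pending;
        ready.into_iter().map(|(_, item)| item).collect()
    }
}

fn main() {
    let mut queue = DelayedQueue::new();
    queue.queue("attestation for unknown block", Duration::from_millis(10));
    // Nothing is released before its delay elapses.
    assert!(queue.pop_ready().is_empty());
    std::thread::sleep(Duration::from_millis(20));
    assert_eq!(queue.pop_ready(), vec!["attestation for unknown block"]);
}
```

A polling loop over such a queue is what turns "too early" gossip into work that is retried at the right moment instead of being dropped.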
use super::MAX_SCHEDULED_WORK_QUEUE_LEN; +use crate::beacon_processor::{ChainSegmentProcessId, Work, WorkEvent}; use crate::metrics; use crate::sync::manager::BlockProcessType; use beacon_chain::{BeaconChainTypes, GossipVerifiedBlock, MAXIMUM_GOSSIP_CLOCK_DISPARITY}; use fnv::FnvHashMap; use futures::task::Poll; use futures::{Stream, StreamExt}; +use itertools::Itertools; use lighthouse_network::{MessageId, PeerId}; use logging::TimeLatch; -use slog::{crit, debug, error, warn, Logger}; +use slog::{crit, debug, error, trace, warn, Logger}; use slot_clock::SlotClock; use std::collections::{HashMap, HashSet}; +use std::future::Future; use std::pin::Pin; use std::sync::Arc; use std::task::Context; use std::time::Duration; +use strum::AsRefStr; use task_executor::TaskExecutor; use tokio::sync::mpsc::{self, Receiver, Sender}; use tokio::time::error::Error as TimeError; use tokio_util::time::delay_queue::{DelayQueue, Key as DelayKey}; -use types::{Attestation, EthSpec, Hash256, SignedAggregateAndProof, SignedBeaconBlock, SubnetId}; +use types::{ + Attestation, EthSpec, Hash256, LightClientOptimisticUpdate, SignedAggregateAndProof, + SignedBeaconBlock, SubnetId, +}; const TASK_NAME: &str = "beacon_processor_reprocess_queue"; const GOSSIP_BLOCKS: &str = "gossip_blocks"; const RPC_BLOCKS: &str = "rpc_blocks"; const ATTESTATIONS: &str = "attestations"; +const LIGHT_CLIENT_UPDATES: &str = "lc_updates"; /// Queue blocks for re-processing with an `ADDITIONAL_QUEUED_BLOCK_DELAY` after the slot starts. /// This is to account for any slight drift in the system clock. @@ -44,8 +52,11 @@ const ADDITIONAL_QUEUED_BLOCK_DELAY: Duration = Duration::from_millis(5); /// For how long to queue aggregated and unaggregated attestations for re-processing. pub const QUEUED_ATTESTATION_DELAY: Duration = Duration::from_secs(12); +/// For how long to queue light client updates for re-processing. 
+pub const QUEUED_LIGHT_CLIENT_UPDATE_DELAY: Duration = Duration::from_secs(12); + /// For how long to queue rpc blocks before sending them back for reprocessing. -pub const QUEUED_RPC_BLOCK_DELAY: Duration = Duration::from_secs(3); +pub const QUEUED_RPC_BLOCK_DELAY: Duration = Duration::from_secs(4); /// Set an arbitrary upper-bound on the number of queued blocks to avoid DoS attacks. The fact that /// we signature-verify blocks before putting them in the queue *should* protect against this, but @@ -55,20 +66,44 @@ const MAXIMUM_QUEUED_BLOCKS: usize = 16; /// How many attestations we keep before new ones get dropped. const MAXIMUM_QUEUED_ATTESTATIONS: usize = 16_384; +/// How many light client updates we keep before new ones get dropped. +const MAXIMUM_QUEUED_LIGHT_CLIENT_UPDATES: usize = 128; + +// Process backfill batch 50%, 60%, 80% through each slot. +// +// Note: use caution to set these fractions in a way that won't cause panic-y +// arithmetic. +pub const BACKFILL_SCHEDULE_IN_SLOT: [(u32, u32); 3] = [ + // One half: 6s on mainnet, 2.5s on Gnosis. + (1, 2), + // Three fifths: 7.2s on mainnet, 3s on Gnosis. + (3, 5), + // Four fifths: 9.6s on mainnet, 4s on Gnosis. + (4, 5), +]; + /// Messages that the scheduler can receive. +#[derive(AsRefStr)] pub enum ReprocessQueueMessage { /// A block that has been received early and we should queue for later processing. EarlyBlock(QueuedGossipBlock), /// A gossip block for hash `X` is being imported, we should queue the rpc block for the same /// hash until the gossip block is imported. RpcBlock(QueuedRpcBlock), - /// A block that was successfully processed. We use this to handle attestations for unknown - /// blocks. - BlockImported(Hash256), + /// A block that was successfully processed. We use this to handle attestations and light client updates + /// for unknown blocks. + BlockImported { + block_root: Hash256, + parent_root: Hash256, + }, /// An unaggregated attestation that references an unknown block. 
UnknownBlockUnaggregate(QueuedUnaggregate), /// An aggregated attestation that references an unknown block. UnknownBlockAggregate(QueuedAggregate), + /// A light client optimistic update that references a parent root that has not been seen as a parent. + UnknownLightClientOptimisticUpdate(QueuedLightClientUpdate), + /// A new backfill batch that needs to be scheduled for processing. + BackfillSync(QueuedBackfillBatch), } /// Events sent by the scheduler once they are ready for re-processing. @@ -77,6 +112,8 @@ pub enum ReadyWork { RpcBlock(QueuedRpcBlock), Unaggregate(QueuedUnaggregate), Aggregate(QueuedAggregate), + LightClientUpdate(QueuedLightClientUpdate), + BackfillSync(QueuedBackfillBatch), } /// An Attestation for which the corresponding block was not seen while processing, queued for @@ -99,6 +136,16 @@ pub struct QueuedAggregate { pub seen_timestamp: Duration, } +/// A light client update for which the corresponding parent block was not seen while processing, +/// queued for later. +pub struct QueuedLightClientUpdate { + pub peer_id: PeerId, + pub message_id: MessageId, + pub light_client_optimistic_update: Box>, + pub parent_root: Hash256, + pub seen_timestamp: Duration, +} + /// A block that arrived early and has been queued for later import. pub struct QueuedGossipBlock { pub peer_id: PeerId, @@ -118,6 +165,40 @@ pub struct QueuedRpcBlock { pub should_process: bool, } +/// A backfill batch work that has been queued for processing later. +#[derive(Clone)] +pub struct QueuedBackfillBatch { + pub process_id: ChainSegmentProcessId, + pub blocks: Vec>>, +} + +impl TryFrom> for QueuedBackfillBatch { + type Error = WorkEvent; + + fn try_from(event: WorkEvent) -> Result> { + match event { + WorkEvent { + work: + Work::ChainSegment { + process_id: process_id @ ChainSegmentProcessId::BackSyncBatchId(_), + blocks, + }, + .. 
+ } => Ok(QueuedBackfillBatch { process_id, blocks }), + _ => Err(event), + } + } +} + +impl From> for WorkEvent { + fn from(queued_backfill_batch: QueuedBackfillBatch) -> WorkEvent { + WorkEvent::chain_segment( + queued_backfill_batch.process_id, + queued_backfill_batch.blocks, + ) + } +} + /// Unifies the different messages processed by the block delay queue. enum InboundEvent { /// A gossip block that was queued for later processing and is ready for import. @@ -127,6 +208,10 @@ enum InboundEvent { ReadyRpcBlock(QueuedRpcBlock), /// An aggregated or unaggregated attestation is ready for re-processing. ReadyAttestation(QueuedAttestationId), + /// A light client update that is ready for re-processing. + ReadyLightClientUpdate(QueuedLightClientUpdateId), + /// A backfill batch that was queued is ready for processing. + ReadyBackfillSync(QueuedBackfillBatch), /// A `DelayQueue` returned an error. DelayQueueError(TimeError, &'static str), /// A message sent to the `ReprocessQueue` @@ -147,6 +232,8 @@ struct ReprocessQueue { rpc_block_delay_queue: DelayQueue>, /// Queue to manage scheduled attestations. attestations_delay_queue: DelayQueue, + /// Queue to manage scheduled light client updates. + lc_updates_delay_queue: DelayQueue, /* Queued items */ /// Queued blocks. @@ -157,15 +244,27 @@ struct ReprocessQueue { queued_unaggregates: FnvHashMap, DelayKey)>, /// Attestations (aggregated and unaggregated) per root. awaiting_attestations_per_root: HashMap>, + /// Queued Light Client Updates. + queued_lc_updates: FnvHashMap, DelayKey)>, + /// Light Client Updates per parent_root. 
+ awaiting_lc_updates_per_parent_root: HashMap>, + /// Queued backfill batches + queued_backfill_batches: Vec>, /* Aux */ /// Next attestation id, used for both aggregated and unaggregated attestations next_attestation: usize, + next_lc_update: usize, early_block_debounce: TimeLatch, rpc_block_debounce: TimeLatch, attestation_delay_debounce: TimeLatch, + lc_update_delay_debounce: TimeLatch, + next_backfill_batch_event: Option>>, + slot_clock: Pin>, } +pub type QueuedLightClientUpdateId = usize; + #[derive(Debug, Clone, Copy, PartialEq, Eq)] enum QueuedAttestationId { Aggregate(usize), @@ -235,6 +334,34 @@ impl Stream for ReprocessQueue { Poll::Ready(None) | Poll::Pending => (), } + match self.lc_updates_delay_queue.poll_expired(cx) { + Poll::Ready(Some(Ok(lc_id))) => { + return Poll::Ready(Some(InboundEvent::ReadyLightClientUpdate( + lc_id.into_inner(), + ))); + } + Poll::Ready(Some(Err(e))) => { + return Poll::Ready(Some(InboundEvent::DelayQueueError(e, "lc_updates_queue"))); + } + // `Poll::Ready(None)` means that there are no more entries in the delay queue and we + // will continue to get this result until something else is added into the queue. + Poll::Ready(None) | Poll::Pending => (), + } + + if let Some(next_backfill_batch_event) = self.next_backfill_batch_event.as_mut() { + match next_backfill_batch_event.as_mut().poll(cx) { + Poll::Ready(_) => { + let maybe_batch = self.queued_backfill_batches.pop(); + self.recompute_next_backfill_batch_event(); + + if let Some(batch) = maybe_batch { + return Poll::Ready(Some(InboundEvent::ReadyBackfillSync(batch))); + } + } + Poll::Pending => (), + } + } + // Last empty the messages channel. 
match self.work_reprocessing_rx.poll_recv(cx) { Poll::Ready(Some(message)) => return Poll::Ready(Some(InboundEvent::Msg(message))), @@ -264,14 +391,22 @@ pub fn spawn_reprocess_scheduler( gossip_block_delay_queue: DelayQueue::new(), rpc_block_delay_queue: DelayQueue::new(), attestations_delay_queue: DelayQueue::new(), + lc_updates_delay_queue: DelayQueue::new(), queued_gossip_block_roots: HashSet::new(), + queued_lc_updates: FnvHashMap::default(), queued_aggregates: FnvHashMap::default(), queued_unaggregates: FnvHashMap::default(), awaiting_attestations_per_root: HashMap::new(), + awaiting_lc_updates_per_parent_root: HashMap::new(), + queued_backfill_batches: Vec::new(), next_attestation: 0, + next_lc_update: 0, early_block_debounce: TimeLatch::default(), rpc_block_debounce: TimeLatch::default(), attestation_delay_debounce: TimeLatch::default(), + lc_update_delay_debounce: TimeLatch::default(), + next_backfill_batch_event: None, + slot_clock: Box::pin(slot_clock.clone()), }; executor.spawn( @@ -386,7 +521,7 @@ impl ReprocessQueue { return; } - // Queue the block for 1/4th of a slot + // Queue the block for 1/3rd of a slot self.rpc_block_delay_queue .insert(rpc_block, QUEUED_RPC_BLOCK_DELAY); } @@ -473,9 +608,52 @@ impl ReprocessQueue { self.next_attestation += 1; } - InboundEvent::Msg(BlockImported(root)) => { + InboundEvent::Msg(UnknownLightClientOptimisticUpdate( + queued_light_client_optimistic_update, + )) => { + if self.lc_updates_delay_queue.len() >= MAXIMUM_QUEUED_LIGHT_CLIENT_UPDATES { + if self.lc_update_delay_debounce.elapsed() { + error!( + log, + "Light client updates delay queue is full"; + "queue_size" => MAXIMUM_QUEUED_LIGHT_CLIENT_UPDATES, + "msg" => "check system clock" + ); + } + // Drop the light client update. + return; + } + + let lc_id: QueuedLightClientUpdateId = self.next_lc_update; + + // Register the delay. 
+ let delay_key = self + .lc_updates_delay_queue + .insert(lc_id, QUEUED_LIGHT_CLIENT_UPDATE_DELAY); + + // Register the light client update for the corresponding root. + self.awaiting_lc_updates_per_parent_root + .entry(queued_light_client_optimistic_update.parent_root) + .or_default() + .push(lc_id); + + // Store the light client update and its info. + self.queued_lc_updates.insert( + self.next_lc_update, + (queued_light_client_optimistic_update, delay_key), + ); + + self.next_lc_update += 1; + } + InboundEvent::Msg(BlockImported { + block_root, + parent_root, + }) => { // Unqueue the attestations we have for this root, if any. - if let Some(queued_ids) = self.awaiting_attestations_per_root.remove(&root) { + if let Some(queued_ids) = self.awaiting_attestations_per_root.remove(&block_root) { + let mut sent_count = 0; + let mut failed_to_send_count = 0; + for id in queued_ids { metrics::inc_counter( &metrics::BEACON_PROCESSOR_REPROCESSING_QUEUE_MATCHED_ATTESTATIONS, @@ -500,10 +678,9 @@ impl ReprocessQueue { // Send the work. if self.ready_work_tx.try_send(work).is_err() { - error!( - log, - "Failed to send scheduled attestation"; - ); + failed_to_send_count += 1; + } else { + sent_count += 1; } } else { // There is a mismatch between the attestation ids registered for this @@ -511,11 +688,81 @@ impl ReprocessQueue { error!( log, "Unknown queued attestation for block root"; - "block_root" => ?root, + "block_root" => ?block_root, "att_id" => ?id, ); } } + + if failed_to_send_count > 0 { + error!( + log, + "Ignored scheduled attestation(s) for block"; + "hint" => "system may be overloaded", + "parent_root" => ?parent_root, + "block_root" => ?block_root, + "failed_count" => failed_to_send_count, + "sent_count" => sent_count, + ); + } + } + // Unqueue the light client optimistic updates we have for this root, if any. 
+ if let Some(queued_lc_id) = self + .awaiting_lc_updates_per_parent_root + .remove(&parent_root) + { + debug!( + log, + "Dequeuing light client optimistic updates"; + "parent_root" => %parent_root, + "count" => queued_lc_id.len(), + ); + + for lc_id in queued_lc_id { + metrics::inc_counter( + &metrics::BEACON_PROCESSOR_REPROCESSING_QUEUE_MATCHED_OPTIMISTIC_UPDATES, + ); + if let Some((work, delay_key)) = self.queued_lc_updates.remove(&lc_id).map( + |(light_client_optimistic_update, delay_key)| { + ( + ReadyWork::LightClientUpdate(light_client_optimistic_update), + delay_key, + ) + }, + ) { + // Remove the delay + self.lc_updates_delay_queue.remove(&delay_key); + + // Send the work + match self.ready_work_tx.try_send(work) { + Ok(_) => trace!( + log, + "reprocessing light client update sent"; + ), + Err(_) => error!( + log, + "Failed to send scheduled light client update"; + ), + } + } else { + // There is a mismatch between the light client update ids registered for this + // root and the queued light client updates. This should never happen. + error!( + log, + "Unknown queued light client update for parent root"; + "parent_root" => ?parent_root, + "lc_id" => ?lc_id, + ); + } + } + } + } + InboundEvent::Msg(BackfillSync(queued_backfill_batch)) => { + self.queued_backfill_batches + .insert(0, queued_backfill_batch); + // only recompute if there is no `next_backfill_batch_event` already scheduled + if self.next_backfill_batch_event.is_none() { + self.recompute_next_backfill_batch_event(); } } // A block that was queued for later processing is now ready to be processed. 
@@ -580,7 +827,9 @@ impl ReprocessQueue { if self.ready_work_tx.try_send(work).is_err() { error!( log, - "Failed to send scheduled attestation"; + "Ignored scheduled attestation"; + "hint" => "system may be overloaded", + "beacon_block_root" => ?root ); } @@ -591,6 +840,65 @@ impl ReprocessQueue { } } } + InboundEvent::ReadyLightClientUpdate(queued_id) => { + metrics::inc_counter( + &metrics::BEACON_PROCESSOR_REPROCESSING_QUEUE_EXPIRED_OPTIMISTIC_UPDATES, + ); + + if let Some((parent_root, work)) = self.queued_lc_updates.remove(&queued_id).map( + |(queued_lc_update, _delay_key)| { + ( + queued_lc_update.parent_root, + ReadyWork::LightClientUpdate(queued_lc_update), + ) + }, + ) { + if self.ready_work_tx.try_send(work).is_err() { + error!( + log, + "Failed to send scheduled light client optimistic update"; + ); + } + + if let Some(queued_lc_updates) = self + .awaiting_lc_updates_per_parent_root + .get_mut(&parent_root) + { + if let Some(index) = + queued_lc_updates.iter().position(|&id| id == queued_id) + { + queued_lc_updates.swap_remove(index); + } + } + } + } + InboundEvent::ReadyBackfillSync(queued_backfill_batch) => { + let millis_from_slot_start = slot_clock + .millis_from_current_slot_start() + .map_or("null".to_string(), |duration| { + duration.as_millis().to_string() + }); + + debug!( + log, + "Sending scheduled backfill work"; + "millis_from_slot_start" => millis_from_slot_start + ); + + if self + .ready_work_tx + .try_send(ReadyWork::BackfillSync(queued_backfill_batch.clone())) + .is_err() + { + error!( + log, + "Failed to send scheduled backfill work"; + "info" => "sending work back to queue" + ); + self.queued_backfill_batches + .insert(0, queued_backfill_batch); + } + } } metrics::set_gauge_vec( @@ -608,5 +916,101 @@ impl ReprocessQueue { &[ATTESTATIONS], self.attestations_delay_queue.len() as i64, ); + metrics::set_gauge_vec( + &metrics::BEACON_PROCESSOR_REPROCESSING_QUEUE_TOTAL, + &[LIGHT_CLIENT_UPDATES], + self.lc_updates_delay_queue.len() as i64, + 
); + } + + fn recompute_next_backfill_batch_event(&mut self) { + // only recompute the `next_backfill_batch_event` if there are backfill batches in the queue + if !self.queued_backfill_batches.is_empty() { + self.next_backfill_batch_event = Some(Box::pin(tokio::time::sleep( + ReprocessQueue::<T>::duration_until_next_backfill_batch_event(&self.slot_clock), + ))); + } else { + self.next_backfill_batch_event = None + } + } + + /// Returns the duration until the next scheduled processing time. The schedule ensures that + /// backfill processing is done in windows of time that aren't critical. + fn duration_until_next_backfill_batch_event(slot_clock: &T::SlotClock) -> Duration { + let slot_duration = slot_clock.slot_duration(); + slot_clock + .millis_from_current_slot_start() + .and_then(|duration_from_slot_start| { + BACKFILL_SCHEDULE_IN_SLOT + .into_iter() + // Convert fractions to durations from slot start. + .map(|(multiplier, divisor)| (slot_duration / divisor) * multiplier) + .find_or_first(|&event_duration_from_slot_start| { + event_duration_from_slot_start > duration_from_slot_start + }) + .map(|next_event_time| { + if duration_from_slot_start >= next_event_time { + // Event is in the next slot, add the duration to the next slot. + let duration_to_next_slot = slot_duration - duration_from_slot_start; + duration_to_next_slot + next_event_time + } else { + next_event_time - duration_from_slot_start + } + }) + }) + // If we can't read the slot clock, just wait another slot. 
+ .unwrap_or(slot_duration) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use beacon_chain::builder::Witness; + use beacon_chain::eth1_chain::CachingEth1Backend; + use slot_clock::TestingSlotClock; + use store::MemoryStore; + use types::MainnetEthSpec as E; + use types::Slot; + + type TestBeaconChainType = + Witness, E, MemoryStore, MemoryStore>; + + #[test] + fn backfill_processing_schedule_calculation() { + let slot_duration = Duration::from_secs(12); + let slot_clock = TestingSlotClock::new(Slot::new(0), Duration::from_secs(0), slot_duration); + let current_slot_start = slot_clock.start_of(Slot::new(100)).unwrap(); + slot_clock.set_current_time(current_slot_start); + + let event_times = BACKFILL_SCHEDULE_IN_SLOT + .map(|(multiplier, divisor)| (slot_duration / divisor) * multiplier); + + for &event_duration_from_slot_start in event_times.iter() { + let duration_to_next_event = + ReprocessQueue::::duration_until_next_backfill_batch_event( + &slot_clock, + ); + + let current_time = slot_clock.millis_from_current_slot_start().unwrap(); + + assert_eq!( + duration_to_next_event, + event_duration_from_slot_start - current_time + ); + + slot_clock.set_current_time(current_slot_start + event_duration_from_slot_start) + } + + // check for next event beyond the current slot + let duration_to_next_slot = slot_clock.duration_to_next_slot().unwrap(); + let duration_to_next_event = + ReprocessQueue::::duration_until_next_backfill_batch_event( + &slot_clock, + ); + assert_eq!( + duration_to_next_event, + duration_to_next_slot + event_times[0] + ); } } diff --git a/beacon_node/network/src/beacon_processor/worker/gossip_methods.rs b/beacon_node/network/src/beacon_processor/worker/gossip_methods.rs index ef23f6761f6..1ec03ae954f 100644 --- a/beacon_node/network/src/beacon_processor/worker/gossip_methods.rs +++ b/beacon_node/network/src/beacon_processor/worker/gossip_methods.rs @@ -12,6 +12,7 @@ use beacon_chain::{ GossipVerifiedBlock, NotifyExecutionLayer, }; use 
lighthouse_network::{Client, MessageAcceptance, MessageId, PeerAction, PeerId, ReportSource}; +use operation_pool::ReceivedPreCapella; use slog::{crit, debug, error, info, trace, warn}; use slot_clock::SlotClock; use ssz::Encode; @@ -22,13 +23,14 @@ use tokio::sync::mpsc; use types::{ Attestation, AttesterSlashing, EthSpec, Hash256, IndexedAttestation, LightClientFinalityUpdate, LightClientOptimisticUpdate, ProposerSlashing, SignedAggregateAndProof, SignedBeaconBlock, - SignedContributionAndProof, SignedVoluntaryExit, Slot, SubnetId, SyncCommitteeMessage, - SyncSubnetId, + SignedBlsToExecutionChange, SignedContributionAndProof, SignedVoluntaryExit, Slot, SubnetId, + SyncCommitteeMessage, SyncSubnetId, }; use super::{ super::work_reprocessing_queue::{ - QueuedAggregate, QueuedGossipBlock, QueuedUnaggregate, ReprocessQueueMessage, + QueuedAggregate, QueuedGossipBlock, QueuedLightClientUpdate, QueuedUnaggregate, + ReprocessQueueMessage, }, Worker, }; @@ -675,6 +677,7 @@ impl Worker { .await { let block_root = gossip_verified_block.block_root; + if let Some(handle) = duplicate_cache.check_and_insert(block_root) { self.process_gossip_verified_block( peer_id, @@ -715,6 +718,10 @@ impl Worker { &metrics::BEACON_BLOCK_GOSSIP_SLOT_START_DELAY_TIME, block_delay, ); + metrics::set_gauge( + &metrics::BEACON_BLOCK_LAST_DELAY, + block_delay.as_millis() as i64, + ); let verification_result = self .chain @@ -827,7 +834,6 @@ impl Worker { | Err(e @ BlockError::WeakSubjectivityConflict) | Err(e @ BlockError::InconsistentFork(_)) | Err(e @ BlockError::ExecutionPayloadError(_)) - // TODO(merge): reconsider peer scoring for this event. | Err(e @ BlockError::ParentExecutionPayloadInvalid { .. }) | Err(e @ BlockError::GenesisBlock) => { warn!(self.log, "Could not verify block for gossip. 
Rejecting the block"; @@ -949,7 +955,10 @@ impl Worker { metrics::inc_counter(&metrics::BEACON_PROCESSOR_GOSSIP_BLOCK_IMPORTED_TOTAL); if reprocess_tx - .try_send(ReprocessQueueMessage::BlockImported(block_root)) + .try_send(ReprocessQueueMessage::BlockImported { + block_root, + parent_root: block.message().parent_root(), + }) .is_err() { error!( @@ -1182,6 +1191,83 @@ impl Worker { metrics::inc_counter(&metrics::BEACON_PROCESSOR_ATTESTER_SLASHING_IMPORTED_TOTAL); } + pub fn process_gossip_bls_to_execution_change( + self, + message_id: MessageId, + peer_id: PeerId, + bls_to_execution_change: SignedBlsToExecutionChange, + ) { + let validator_index = bls_to_execution_change.message.validator_index; + let address = bls_to_execution_change.message.to_execution_address; + + let change = match self + .chain + .verify_bls_to_execution_change_for_gossip(bls_to_execution_change) + { + Ok(ObservationOutcome::New(change)) => change, + Ok(ObservationOutcome::AlreadyKnown) => { + self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Ignore); + debug!( + self.log, + "Dropping BLS to execution change"; + "validator_index" => validator_index, + "peer" => %peer_id + ); + return; + } + Err(e) => { + debug!( + self.log, + "Dropping invalid BLS to execution change"; + "validator_index" => validator_index, + "peer" => %peer_id, + "error" => ?e + ); + // We ignore pre-capella messages without penalizing peers. + if matches!(e, BeaconChainError::BlsToExecutionPriorToCapella) { + self.propagate_validation_result( + message_id, + peer_id, + MessageAcceptance::Ignore, + ); + } else { + // We penalize the peer slightly to prevent overuse of invalids. 
+ self.propagate_validation_result( + message_id, + peer_id, + MessageAcceptance::Reject, + ); + self.gossip_penalize_peer( + peer_id, + PeerAction::HighToleranceError, + "invalid_bls_to_execution_change", + ); + } + return; + } + }; + + metrics::inc_counter(&metrics::BEACON_PROCESSOR_BLS_TO_EXECUTION_CHANGE_VERIFIED_TOTAL); + + self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Accept); + + // Address change messages from gossip are only processed *after* the + // Capella fork epoch. + let received_pre_capella = ReceivedPreCapella::No; + + self.chain + .import_bls_to_execution_change(change, received_pre_capella); + + debug!( + self.log, + "Successfully imported BLS to execution change"; + "validator_index" => validator_index, + "address" => ?address, + ); + + metrics::inc_counter(&metrics::BEACON_PROCESSOR_BLS_TO_EXECUTION_CHANGE_IMPORTED_TOTAL); + } + /// Process the sync committee signature received from the gossip network and: /// /// - If it passes gossip propagation criteria, tell the network thread to forward it. 
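The `process_gossip_bls_to_execution_change` handler above encodes a three-way gossip verdict: duplicates and pre-Capella messages are ignored without penalizing the peer, other verification failures are rejected with a mild (`HighToleranceError`) penalty, and fresh valid changes are accepted and imported. A standalone sketch of that decision table, using hypothetical stand-in types rather than Lighthouse's actual `ObservationOutcome`/`BeaconChainError`:

```rust
// Stand-in verdict type mirroring gossipsub's MessageAcceptance plus the
// peer-scoring side effect applied in the handler above.
#[derive(Debug, PartialEq)]
enum Acceptance {
    Accept,
    Ignore,
    Reject { penalize_peer: bool },
}

// Hypothetical verification errors; `PriorToCapella` stands in for
// `BeaconChainError::BlsToExecutionPriorToCapella`.
#[derive(Debug)]
enum VerifyError {
    PriorToCapella,
    InvalidSignature,
}

fn classify(result: Result<(), VerifyError>, already_known: bool) -> Acceptance {
    if already_known {
        // Duplicate change: don't forward it, but the peer did nothing wrong.
        return Acceptance::Ignore;
    }
    match result {
        Ok(()) => Acceptance::Accept,
        // Pre-Capella changes are expected on the network; ignore quietly.
        Err(VerifyError::PriorToCapella) => Acceptance::Ignore,
        // Genuinely invalid: reject, and penalize the peer slightly.
        Err(_) => Acceptance::Reject { penalize_peer: true },
    }
}

fn main() {
    assert_eq!(classify(Ok(()), true), Acceptance::Ignore);
    assert_eq!(classify(Ok(()), false), Acceptance::Accept);
    assert_eq!(classify(Err(VerifyError::PriorToCapella), false), Acceptance::Ignore);
    assert_eq!(
        classify(Err(VerifyError::InvalidSignature), false),
        Acceptance::Reject { penalize_peer: true }
    );
}
```

The asymmetry is deliberate: ignoring propagates nothing and costs the sender nothing, while rejecting feeds gossipsub peer scoring, so only provably invalid messages should reject.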
@@ -1326,7 +1412,7 @@ impl Worker { LightClientFinalityUpdateError::InvalidLightClientFinalityUpdate => { debug!( self.log, - "LC invalid finality update"; + "Light client invalid finality update"; "peer" => %peer_id, "error" => ?e, ); @@ -1340,7 +1426,7 @@ impl Worker { LightClientFinalityUpdateError::TooEarly => { debug!( self.log, - "LC finality update too early"; + "Light client finality update too early"; "peer" => %peer_id, "error" => ?e, ); @@ -1353,7 +1439,7 @@ impl Worker { } LightClientFinalityUpdateError::FinalityUpdateAlreadySeen => debug!( self.log, - "LC finality update already seen"; + "Light client finality update already seen"; "peer" => %peer_id, "error" => ?e, ), @@ -1362,7 +1448,7 @@ impl Worker { | LightClientFinalityUpdateError::SigSlotStartIsNone | LightClientFinalityUpdateError::FailedConstructingUpdate => debug!( self.log, - "LC error constructing finality update"; + "Light client error constructing finality update"; "peer" => %peer_id, "error" => ?e, ), @@ -1377,22 +1463,77 @@ impl Worker { message_id: MessageId, peer_id: PeerId, light_client_optimistic_update: LightClientOptimisticUpdate, + reprocess_tx: Option>>, seen_timestamp: Duration, ) { - match self - .chain - .verify_optimistic_update_for_gossip(light_client_optimistic_update, seen_timestamp) - { - Ok(_verified_light_client_optimistic_update) => { + match self.chain.verify_optimistic_update_for_gossip( + light_client_optimistic_update.clone(), + seen_timestamp, + ) { + Ok(verified_light_client_optimistic_update) => { + debug!( + self.log, + "Light client successful optimistic update"; + "peer" => %peer_id, + "parent_root" => %verified_light_client_optimistic_update.parent_root, + ); + self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Accept); } Err(e) => { - metrics::register_optimistic_update_error(&e); match e { + LightClientOptimisticUpdateError::UnknownBlockParentRoot(parent_root) => { + metrics::inc_counter( + 
&metrics::BEACON_PROCESSOR_REPROCESSING_QUEUE_SENT_OPTIMISTIC_UPDATES, + ); + debug!( + self.log, + "Optimistic update for unknown block"; + "peer_id" => %peer_id, + "parent_root" => ?parent_root + ); + + if let Some(sender) = reprocess_tx { + let msg = ReprocessQueueMessage::UnknownLightClientOptimisticUpdate( + QueuedLightClientUpdate { + peer_id, + message_id, + light_client_optimistic_update: Box::new( + light_client_optimistic_update, + ), + parent_root, + seen_timestamp, + }, + ); + + if sender.try_send(msg).is_err() { + error!( + self.log, + "Failed to send optimistic update for re-processing"; + ) + } + } else { + debug!( + self.log, + "Not sending light client update because it had been reprocessed"; + "peer_id" => %peer_id, + "parent_root" => ?parent_root + ); + + self.propagate_validation_result( + message_id, + peer_id, + MessageAcceptance::Ignore, + ); + } + return; + } LightClientOptimisticUpdateError::InvalidLightClientOptimisticUpdate => { + metrics::register_optimistic_update_error(&e); + debug!( self.log, - "LC invalid optimistic update"; + "Light client invalid optimistic update"; "peer" => %peer_id, "error" => ?e, ); @@ -1404,9 +1545,10 @@ impl Worker { ) } LightClientOptimisticUpdateError::TooEarly => { + metrics::register_optimistic_update_error(&e); debug!( self.log, - "LC optimistic update too early"; + "Light client optimistic update too early"; "peer" => %peer_id, "error" => ?e, ); @@ -1417,21 +1559,29 @@ impl Worker { "light_client_gossip_error", ); } - LightClientOptimisticUpdateError::OptimisticUpdateAlreadySeen => debug!( - self.log, - "LC optimistic update already seen"; - "peer" => %peer_id, - "error" => ?e, - ), + LightClientOptimisticUpdateError::OptimisticUpdateAlreadySeen => { + metrics::register_optimistic_update_error(&e); + + debug!( + self.log, + "Light client optimistic update already seen"; + "peer" => %peer_id, + "error" => ?e, + ) + } LightClientOptimisticUpdateError::BeaconChainError(_) | 
LightClientOptimisticUpdateError::LightClientUpdateError(_) | LightClientOptimisticUpdateError::SigSlotStartIsNone - | LightClientOptimisticUpdateError::FailedConstructingUpdate => debug!( - self.log, - "LC error constructing optimistic update"; - "peer" => %peer_id, - "error" => ?e, - ), + | LightClientOptimisticUpdateError::FailedConstructingUpdate => { + metrics::register_optimistic_update_error(&e); + + debug!( + self.log, + "Light client error constructing optimistic update"; + "peer" => %peer_id, + "error" => ?e, + ) + } } self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Ignore); } diff --git a/beacon_node/network/src/beacon_processor/worker/rpc_methods.rs b/beacon_node/network/src/beacon_processor/worker/rpc_methods.rs index bfa0ea516fa..81b163bf7ee 100644 --- a/beacon_node/network/src/beacon_processor/worker/rpc_methods.rs +++ b/beacon_node/network/src/beacon_processor/worker/rpc_methods.rs @@ -7,10 +7,10 @@ use itertools::process_results; use lighthouse_network::rpc::StatusMessage; use lighthouse_network::rpc::*; use lighthouse_network::{PeerId, PeerRequestId, ReportSource, Response, SyncInfo}; -use slog::{debug, error}; +use slog::{debug, error, warn}; use slot_clock::SlotClock; -use std::sync::Arc; use task_executor::TaskExecutor; +use tokio_stream::StreamExt; use types::{light_client_bootstrap::LightClientBootstrap, Epoch, EthSpec, Hash256, Slot}; use super::Worker; @@ -131,21 +131,25 @@ impl Worker { request_id: PeerRequestId, request: BlocksByRootRequest, ) { + let requested_blocks = request.block_roots.len(); + let mut block_stream = match self + .chain + .get_blocks_checking_early_attester_cache(request.block_roots.into(), &executor) + { + Ok(block_stream) => block_stream, + Err(e) => return error!(self.log, "Error getting block stream"; "error" => ?e), + }; // Fetching blocks is async because it may have to hit the execution layer for payloads. 
executor.spawn( async move { let mut send_block_count = 0; let mut send_response = true; - for root in request.block_roots.iter() { - match self - .chain - .get_block_checking_early_attester_cache(root) - .await - { + while let Some((root, result)) = block_stream.next().await { + match result.as_ref() { Ok(Some(block)) => { self.send_response( peer_id, - Response::BlocksByRoot(Some(block)), + Response::BlocksByRoot(Some(block.clone())), request_id, ); send_block_count += 1; @@ -190,7 +194,7 @@ impl Worker { self.log, "Received BlocksByRoot Request"; "peer" => %peer_id, - "requested" => request.block_roots.len(), + "requested" => requested_blocks, "returned" => %send_block_count ); @@ -344,14 +348,19 @@ impl Worker { // remove all skip slots let block_roots = block_roots.into_iter().flatten().collect::>(); + let mut block_stream = match self.chain.get_blocks(block_roots, &executor) { + Ok(block_stream) => block_stream, + Err(e) => return error!(self.log, "Error getting block stream"; "error" => ?e), + }; + // Fetching blocks is async because it may have to hit the execution layer for payloads. 
executor.spawn( async move { let mut blocks_sent = 0; let mut send_response = true; - for root in block_roots { - match self.chain.get_block(&root).await { + while let Some((root, result)) = block_stream.next().await { + match result.as_ref() { Ok(Some(block)) => { // Due to skip slots, blocks could be out of the range, we ensure they // are in the range before sending @@ -361,7 +370,7 @@ impl Worker { blocks_sent += 1; self.send_network_message(NetworkMessage::SendResponse { peer_id, - response: Response::BlocksByRange(Some(Arc::new(block))), + response: Response::BlocksByRange(Some(block.clone())), id: request_id, }); } @@ -392,12 +401,26 @@ impl Worker { break; } Err(e) => { - error!( - self.log, - "Error fetching block for peer"; - "block_root" => ?root, - "error" => ?e - ); + if matches!( + e, + BeaconChainError::ExecutionLayerErrorPayloadReconstruction(_block_hash, ref boxed_error) + if matches!(**boxed_error, execution_layer::Error::EngineError(_)) + ) { + warn!( + self.log, + "Error rebuilding payload for peer"; + "info" => "this may occur occasionally when the EE is busy", + "block_root" => ?root, + "error" => ?e, + ); + } else { + error!( + self.log, + "Error fetching block for peer"; + "block_root" => ?root, + "error" => ?e + ); + } // send the stream terminator self.send_error_response( diff --git a/beacon_node/network/src/beacon_processor/worker/sync_methods.rs b/beacon_node/network/src/beacon_processor/worker/sync_methods.rs index 1ec045e97eb..ca2095348ae 100644 --- a/beacon_node/network/src/beacon_processor/worker/sync_methods.rs +++ b/beacon_node/network/src/beacon_processor/worker/sync_methods.rs @@ -9,12 +9,15 @@ use crate::sync::manager::{BlockProcessType, SyncMessage}; use crate::sync::{BatchProcessResult, ChainId}; use beacon_chain::CountUnrealized; use beacon_chain::{ + observed_block_producers::Error as ObserveError, validator_monitor::get_block_delay_ms, BeaconChainError, BeaconChainTypes, BlockError, ChainSegmentResult, 
HistoricalBlockError, NotifyExecutionLayer, }; use lighthouse_network::PeerAction; use slog::{debug, error, info, warn}; +use slot_clock::SlotClock; use std::sync::Arc; +use std::time::{SystemTime, UNIX_EPOCH}; use tokio::sync::mpsc; use types::{Epoch, Hash256, SignedBeaconBlock}; @@ -83,7 +86,68 @@ impl Worker { return; } }; + + // Returns `true` if the time now is after the 4s attestation deadline. + let block_is_late = SystemTime::now() + .duration_since(UNIX_EPOCH) + // If we can't read the system time clock then indicate that the + // block is late (and therefore should *not* be requeued). This + // avoids infinite loops. + .map_or(true, |now| { + get_block_delay_ms(now, block.message(), &self.chain.slot_clock) + > self.chain.slot_clock.unagg_attestation_production_delay() + }); + + // Checks if a block from this proposer is already known. + let proposal_already_known = || { + match self + .chain + .observed_block_producers + .read() + .proposer_has_been_observed(block.message()) + { + Ok(is_observed) => is_observed, + // Both of these blocks will be rejected, so reject them now rather + // than re-queuing them. + Err(ObserveError::FinalizedBlock { .. }) + | Err(ObserveError::ValidatorIndexTooHigh { .. }) => false, + } + }; + + // If we've already seen a block from this proposer *and* the block + // arrived before the attestation deadline, requeue it to ensure it is + // imported late enough that it won't receive a proposer boost. 
+ if !block_is_late && proposal_already_known() { + debug!( + self.log, + "Delaying processing of duplicate RPC block"; + "block_root" => ?block_root, + "proposer" => block.message().proposer_index(), + "slot" => block.slot() + ); + + // Send message to work reprocess queue to retry the block + let reprocess_msg = ReprocessQueueMessage::RpcBlock(QueuedRpcBlock { + block_root, + block: block.clone(), + process_type, + seen_timestamp, + should_process: true, + }); + + if reprocess_tx.try_send(reprocess_msg).is_err() { + error!( + self.log, + "Failed to inform block import"; + "source" => "rpc", + "block_root" => %block_root + ); + } + return; + } + let slot = block.slot(); + let parent_root = block.message().parent_root(); let result = self .chain .process_block( @@ -101,7 +165,10 @@ impl Worker { info!(self.log, "New RPC block received"; "slot" => slot, "hash" => %hash); // Trigger processing for work referencing this block. - let reprocess_msg = ReprocessQueueMessage::BlockImported(hash); + let reprocess_msg = ReprocessQueueMessage::BlockImported { + block_root: hash, + parent_root, + }; if reprocess_tx.try_send(reprocess_msg).is_err() { error!(self.log, "Failed to inform block import"; "source" => "rpc", "block_root" => %hash) }; @@ -509,6 +576,21 @@ impl Worker { }) } } + ref err @ BlockError::ParentExecutionPayloadInvalid { ref parent_root } => { + warn!( + self.log, + "Failed to sync chain built on invalid parent"; + "parent_root" => ?parent_root, + "advice" => "check execution node for corruption then restart it and Lighthouse", + ); + Err(ChainSegmentFailed { + message: format!("Peer sent invalid block. Reason: {err:?}"), + // We need to penalise harshly in case this represents an actual attack. In case + // of a faulty EL it will usually require manual intervention to fix anyway, so + // it's not too bad if we drop most of our peers. 
+ peer_action: Some(PeerAction::LowToleranceError), + }) + } other => { debug!( self.log, "Invalid block received"; diff --git a/beacon_node/network/src/metrics.rs b/beacon_node/network/src/metrics.rs index b4f3f29f934..09caaaa11e3 100644 --- a/beacon_node/network/src/metrics.rs +++ b/beacon_node/network/src/metrics.rs @@ -145,6 +145,19 @@ lazy_static! { "beacon_processor_attester_slashing_imported_total", "Total number of attester slashings imported to the op pool." ); + // Gossip BLS to execution changes. + pub static ref BEACON_PROCESSOR_BLS_TO_EXECUTION_CHANGE_QUEUE_TOTAL: Result = try_create_int_gauge( + "beacon_processor_bls_to_execution_change_queue_total", + "Count of address changes from gossip waiting to be verified." + ); + pub static ref BEACON_PROCESSOR_BLS_TO_EXECUTION_CHANGE_VERIFIED_TOTAL: Result = try_create_int_counter( + "beacon_processor_bls_to_execution_change_verified_total", + "Total number of address changes verified for propagation." + ); + pub static ref BEACON_PROCESSOR_BLS_TO_EXECUTION_CHANGE_IMPORTED_TOTAL: Result = try_create_int_counter( + "beacon_processor_bls_to_execution_change_imported_total", + "Total number of address changes imported to the op pool." + ); // Rpc blocks. pub static ref BEACON_PROCESSOR_RPC_BLOCK_QUEUE_TOTAL: Result = try_create_int_gauge( "beacon_processor_rpc_block_queue_total", @@ -335,10 +348,18 @@ lazy_static! { pub static ref BEACON_BLOCK_GOSSIP_SLOT_START_DELAY_TIME: Result = try_create_histogram_with_buckets( "beacon_block_gossip_slot_start_delay_time", "Duration between when the block is received and the start of the slot it belongs to.", + // Create a custom bucket list for greater granularity in block delay + Ok(vec![0.1, 0.2, 0.3,0.4,0.5,0.75,1.0,1.25,1.5,1.75,2.0,2.5,3.0,3.5,4.0,5.0,6.0,7.0,8.0,9.0,10.0,15.0,20.0]) + // NOTE: Previous values, which we may want to switch back to. 
// [0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50] - decimal_buckets(-1,2) + //decimal_buckets(-1,2) ); + pub static ref BEACON_BLOCK_LAST_DELAY: Result = try_create_int_gauge( + "beacon_block_last_delay", + "Keeps track of the last block's delay from the start of the slot" + ); + pub static ref BEACON_BLOCK_GOSSIP_ARRIVED_LATE_TOTAL: Result = try_create_int_counter( + "beacon_block_gossip_arrived_late_total", + "Count of times when a gossip block arrived from the network later than the attestation deadline.", @@ -362,6 +383,21 @@ lazy_static! { "Number of queued attestations where as matching block has been imported." ); + /* + * Light client update reprocessing queue metrics. + */ + pub static ref BEACON_PROCESSOR_REPROCESSING_QUEUE_EXPIRED_OPTIMISTIC_UPDATES: Result = try_create_int_counter( + "beacon_processor_reprocessing_queue_expired_optimistic_updates", + "Number of queued light client optimistic updates which have expired before a matching block has been found." + ); + pub static ref BEACON_PROCESSOR_REPROCESSING_QUEUE_MATCHED_OPTIMISTIC_UPDATES: Result = try_create_int_counter( + "beacon_processor_reprocessing_queue_matched_optimistic_updates", + "Number of queued light client optimistic updates where a matching block has been imported." + ); + pub static ref BEACON_PROCESSOR_REPROCESSING_QUEUE_SENT_OPTIMISTIC_UPDATES: Result = try_create_int_counter( + "beacon_processor_reprocessing_queue_sent_optimistic_updates", + "Number of light client optimistic updates sent to the reprocessing queue."
+ ); } pub fn update_bandwidth_metrics(bandwidth: Arc) { diff --git a/beacon_node/network/src/nat.rs b/beacon_node/network/src/nat.rs index a2fbe576109..9bf123e8dec 100644 --- a/beacon_node/network/src/nat.rs +++ b/beacon_node/network/src/nat.rs @@ -20,13 +20,13 @@ pub struct UPnPConfig { disable_discovery: bool, } -impl From<&NetworkConfig> for UPnPConfig { - fn from(config: &NetworkConfig) -> Self { - UPnPConfig { - tcp_port: config.libp2p_port, - udp_port: config.discovery_port, +impl UPnPConfig { + pub fn from_config(config: &NetworkConfig) -> Option { + config.listen_addrs().v4().map(|v4_addr| UPnPConfig { + tcp_port: v4_addr.tcp_port, + udp_port: v4_addr.udp_port, disable_discovery: config.disable_discovery, - } + }) } } diff --git a/beacon_node/network/src/router.rs b/beacon_node/network/src/router.rs new file mode 100644 index 00000000000..7f75a27fe25 --- /dev/null +++ b/beacon_node/network/src/router.rs @@ -0,0 +1,535 @@ +//! This module handles incoming network messages. +//! +//! It routes the messages to appropriate services. +//! It handles requests at the application layer in its associated processor and directs +//! syncing-related responses to the Sync manager. 
+#![allow(clippy::unit_arg)] + +use crate::beacon_processor::{ + BeaconProcessor, WorkEvent as BeaconWorkEvent, MAX_WORK_EVENT_QUEUE_LEN, +}; +use crate::error; +use crate::service::{NetworkMessage, RequestId}; +use crate::status::status_message; +use crate::sync::manager::RequestId as SyncId; +use crate::sync::SyncMessage; +use beacon_chain::{BeaconChain, BeaconChainTypes}; +use futures::prelude::*; +use lighthouse_network::rpc::*; +use lighthouse_network::{ + MessageId, NetworkGlobals, PeerId, PeerRequestId, PubsubMessage, Request, Response, +}; +use slog::{debug, o, trace}; +use slog::{error, warn}; +use std::cmp; +use std::sync::Arc; +use std::time::{Duration, SystemTime, UNIX_EPOCH}; +use tokio::sync::mpsc; +use tokio_stream::wrappers::UnboundedReceiverStream; +use types::{EthSpec, SignedBeaconBlock}; + +/// Handles messages from the network and routes them to the appropriate service to be handled. +pub struct Router { + /// Access to the peer db and network information. + network_globals: Arc>, + /// A reference to the underlying beacon chain. + chain: Arc>, + /// A channel to the syncing thread. + sync_send: mpsc::UnboundedSender>, + /// A network context to return and handle RPC requests. + network: HandlerNetworkContext, + /// A multi-threaded, non-blocking processor for applying messages to the beacon chain. + beacon_processor_send: mpsc::Sender>, + /// The `Router` logger. + log: slog::Logger, +} + +/// Types of messages the router can receive. +#[derive(Debug)] +pub enum RouterMessage { + /// Peer has disconnected. + PeerDisconnected(PeerId), + /// An RPC request has been received. + RPCRequestReceived { + peer_id: PeerId, + id: PeerRequestId, + request: Request, + }, + /// An RPC response has been received. + RPCResponseReceived { + peer_id: PeerId, + request_id: RequestId, + response: Response, + }, + /// An RPC request failed + RPCFailed { + peer_id: PeerId, + request_id: RequestId, + }, + /// A gossip message has been received. 
The fields are: message id, the peer that sent us this + /// message, the message itself and a bool which indicates if the message should be processed + /// by the beacon chain after successful verification. + PubsubMessage(MessageId, PeerId, PubsubMessage, bool), + /// The peer manager has requested we re-status a peer. + StatusPeer(PeerId), +} + +impl Router { + /// Initializes and runs the Router. + pub fn spawn( + beacon_chain: Arc>, + network_globals: Arc>, + network_send: mpsc::UnboundedSender>, + executor: task_executor::TaskExecutor, + log: slog::Logger, + ) -> error::Result>> { + let message_handler_log = log.new(o!("service"=> "router")); + trace!(message_handler_log, "Service starting"); + + let (handler_send, handler_recv) = mpsc::unbounded_channel(); + + let (beacon_processor_send, beacon_processor_receive) = + mpsc::channel(MAX_WORK_EVENT_QUEUE_LEN); + + let sync_logger = log.new(o!("service"=> "sync")); + + // spawn the sync thread + let sync_send = crate::sync::manager::spawn( + executor.clone(), + beacon_chain.clone(), + network_globals.clone(), + network_send.clone(), + beacon_processor_send.clone(), + sync_logger, + ); + + BeaconProcessor { + beacon_chain: Arc::downgrade(&beacon_chain), + network_tx: network_send.clone(), + sync_tx: sync_send.clone(), + network_globals: network_globals.clone(), + executor: executor.clone(), + max_workers: cmp::max(1, num_cpus::get()), + current_workers: 0, + importing_blocks: Default::default(), + log: log.clone(), + } + .spawn_manager(beacon_processor_receive, None); + + // generate the Message handler + let mut handler = Router { + network_globals, + chain: beacon_chain, + sync_send, + network: HandlerNetworkContext::new(network_send, log.clone()), + beacon_processor_send, + log: message_handler_log, + }; + + // spawn handler task and move the message handler instance into the spawned thread + executor.spawn( + async move { + debug!(log, "Network message router started"); + 
UnboundedReceiverStream::new(handler_recv) + .for_each(move |msg| future::ready(handler.handle_message(msg))) + .await; + }, + "router", + ); + + Ok(handler_send) + } + + /// Handle all messages incoming from the network service. + fn handle_message(&mut self, message: RouterMessage) { + match message { + // we have initiated a connection to a peer or the peer manager has requested a + // re-status + RouterMessage::StatusPeer(peer_id) => { + self.send_status(peer_id); + } + // A peer has disconnected + RouterMessage::PeerDisconnected(peer_id) => { + self.send_to_sync(SyncMessage::Disconnect(peer_id)); + } + RouterMessage::RPCRequestReceived { + peer_id, + id, + request, + } => { + self.handle_rpc_request(peer_id, id, request); + } + RouterMessage::RPCResponseReceived { + peer_id, + request_id, + response, + } => { + self.handle_rpc_response(peer_id, request_id, response); + } + RouterMessage::RPCFailed { + peer_id, + request_id, + } => { + self.on_rpc_error(peer_id, request_id); + } + RouterMessage::PubsubMessage(id, peer_id, gossip, should_process) => { + self.handle_gossip(id, peer_id, gossip, should_process); + } + } + } + + /* RPC - Related functionality */ + + /// A new RPC request has been received from the network. 
+ fn handle_rpc_request(&mut self, peer_id: PeerId, request_id: PeerRequestId, request: Request) { + if !self.network_globals.peers.read().is_connected(&peer_id) { + debug!(self.log, "Dropping request of disconnected peer"; "peer_id" => %peer_id, "request" => ?request); + return; + } + match request { + Request::Status(status_message) => { + self.on_status_request(peer_id, request_id, status_message) + } + Request::BlocksByRange(request) => self.send_beacon_processor_work( + BeaconWorkEvent::blocks_by_range_request(peer_id, request_id, request), + ), + Request::BlocksByRoot(request) => self.send_beacon_processor_work( + BeaconWorkEvent::blocks_by_roots_request(peer_id, request_id, request), + ), + Request::LightClientBootstrap(request) => self.send_beacon_processor_work( + BeaconWorkEvent::lightclient_bootstrap_request(peer_id, request_id, request), + ), + } + } + + /// An RPC response has been received from the network. + fn handle_rpc_response( + &mut self, + peer_id: PeerId, + request_id: RequestId, + response: Response, + ) { + match response { + Response::Status(status_message) => { + debug!(self.log, "Received Status Response"; "peer_id" => %peer_id, &status_message); + self.send_beacon_processor_work(BeaconWorkEvent::status_message( + peer_id, + status_message, + )) + } + Response::BlocksByRange(beacon_block) => { + self.on_blocks_by_range_response(peer_id, request_id, beacon_block); + } + Response::BlocksByRoot(beacon_block) => { + self.on_blocks_by_root_response(peer_id, request_id, beacon_block); + } + Response::LightClientBootstrap(_) => unreachable!(), + } + } + + /// Handle RPC messages. + /// Note: `should_process` is currently only useful for the `Attestation` variant. + /// if `should_process` is `false`, we only propagate the message on successful verification, + /// else, we propagate **and** import into the beacon chain. 
+ fn handle_gossip( + &mut self, + message_id: MessageId, + peer_id: PeerId, + gossip_message: PubsubMessage, + should_process: bool, + ) { + match gossip_message { + PubsubMessage::AggregateAndProofAttestation(aggregate_and_proof) => self + .send_beacon_processor_work(BeaconWorkEvent::aggregated_attestation( + message_id, + peer_id, + *aggregate_and_proof, + timestamp_now(), + )), + PubsubMessage::Attestation(subnet_attestation) => { + self.send_beacon_processor_work(BeaconWorkEvent::unaggregated_attestation( + message_id, + peer_id, + subnet_attestation.1, + subnet_attestation.0, + should_process, + timestamp_now(), + )) + } + PubsubMessage::BeaconBlock(block) => { + self.send_beacon_processor_work(BeaconWorkEvent::gossip_beacon_block( + message_id, + peer_id, + self.network_globals.client(&peer_id), + block, + timestamp_now(), + )) + } + PubsubMessage::VoluntaryExit(exit) => { + debug!(self.log, "Received a voluntary exit"; "peer_id" => %peer_id); + self.send_beacon_processor_work(BeaconWorkEvent::gossip_voluntary_exit( + message_id, peer_id, exit, + )) + } + PubsubMessage::ProposerSlashing(proposer_slashing) => { + debug!( + self.log, + "Received a proposer slashing"; + "peer_id" => %peer_id + ); + self.send_beacon_processor_work(BeaconWorkEvent::gossip_proposer_slashing( + message_id, + peer_id, + proposer_slashing, + )) + } + PubsubMessage::AttesterSlashing(attester_slashing) => { + debug!( + self.log, + "Received a attester slashing"; + "peer_id" => %peer_id + ); + self.send_beacon_processor_work(BeaconWorkEvent::gossip_attester_slashing( + message_id, + peer_id, + attester_slashing, + )) + } + PubsubMessage::SignedContributionAndProof(contribution_and_proof) => { + trace!( + self.log, + "Received sync committee aggregate"; + "peer_id" => %peer_id + ); + self.send_beacon_processor_work(BeaconWorkEvent::gossip_sync_contribution( + message_id, + peer_id, + *contribution_and_proof, + timestamp_now(), + )) + } + 
PubsubMessage::SyncCommitteeMessage(sync_committtee_msg) => { + trace!( + self.log, + "Received sync committee signature"; + "peer_id" => %peer_id + ); + self.send_beacon_processor_work(BeaconWorkEvent::gossip_sync_signature( + message_id, + peer_id, + sync_committtee_msg.1, + sync_committtee_msg.0, + timestamp_now(), + )) + } + PubsubMessage::LightClientFinalityUpdate(light_client_finality_update) => { + trace!( + self.log, + "Received light client finality update"; + "peer_id" => %peer_id + ); + self.send_beacon_processor_work( + BeaconWorkEvent::gossip_light_client_finality_update( + message_id, + peer_id, + light_client_finality_update, + timestamp_now(), + ), + ) + } + PubsubMessage::LightClientOptimisticUpdate(light_client_optimistic_update) => { + trace!( + self.log, + "Received light client optimistic update"; + "peer_id" => %peer_id + ); + self.send_beacon_processor_work( + BeaconWorkEvent::gossip_light_client_optimistic_update( + message_id, + peer_id, + light_client_optimistic_update, + timestamp_now(), + ), + ) + } + PubsubMessage::BlsToExecutionChange(bls_to_execution_change) => self + .send_beacon_processor_work(BeaconWorkEvent::gossip_bls_to_execution_change( + message_id, + peer_id, + bls_to_execution_change, + )), + } + } + + fn send_status(&mut self, peer_id: PeerId) { + let status_message = status_message(&self.chain); + debug!(self.log, "Sending Status Request"; "peer" => %peer_id, &status_message); + self.network + .send_processor_request(peer_id, Request::Status(status_message)); + } + + fn send_to_sync(&mut self, message: SyncMessage) { + self.sync_send.send(message).unwrap_or_else(|e| { + warn!( + self.log, + "Could not send message to the sync service"; + "error" => %e, + ) + }); + } + + /// An error occurred during an RPC request. The state is maintained by the sync manager, so + /// this function notifies the sync manager of the error. 
+ pub fn on_rpc_error(&mut self, peer_id: PeerId, request_id: RequestId) { + // Check if the failed RPC belongs to sync + if let RequestId::Sync(request_id) = request_id { + self.send_to_sync(SyncMessage::RpcError { + peer_id, + request_id, + }); + } + } + + /// Handle a `Status` request. + /// + /// Processes the `Status` from the remote peer and sends back our `Status`. + pub fn on_status_request( + &mut self, + peer_id: PeerId, + request_id: PeerRequestId, + status: StatusMessage, + ) { + debug!(self.log, "Received Status Request"; "peer_id" => %peer_id, &status); + + // Say status back. + self.network.send_response( + peer_id, + Response::Status(status_message(&self.chain)), + request_id, + ); + + self.send_beacon_processor_work(BeaconWorkEvent::status_message(peer_id, status)) + } + + /// Handle a `BlocksByRange` response from the peer. + /// A `beacon_block` behaves as a stream which is terminated on a `None` response. + pub fn on_blocks_by_range_response( + &mut self, + peer_id: PeerId, + request_id: RequestId, + beacon_block: Option>>, + ) { + let request_id = match request_id { + RequestId::Sync(sync_id) => match sync_id { + SyncId::SingleBlock { .. } | SyncId::ParentLookup { .. } => { + unreachable!("Block lookups do not request BBRange requests") + } + id @ (SyncId::BackFillSync { .. } | SyncId::RangeSync { .. }) => id, + }, + RequestId::Router => unreachable!("All BBRange requests belong to sync"), + }; + + trace!( + self.log, + "Received BlocksByRange Response"; + "peer" => %peer_id, + ); + + self.send_to_sync(SyncMessage::RpcBlock { + peer_id, + request_id, + beacon_block, + seen_timestamp: timestamp_now(), + }); + } + + /// Handle a `BlocksByRoot` response from the peer. + pub fn on_blocks_by_root_response( + &mut self, + peer_id: PeerId, + request_id: RequestId, + beacon_block: Option>>, + ) { + let request_id = match request_id { + RequestId::Sync(sync_id) => match sync_id { + id @ (SyncId::SingleBlock { .. } | SyncId::ParentLookup { .. 
}) => id,
+                SyncId::BackFillSync { .. } | SyncId::RangeSync { .. } => {
+                    unreachable!("Batch syncing does not request BBRoot requests")
+                }
+            },
+            RequestId::Router => unreachable!("All BBRoot requests belong to sync"),
+        };
+
+        trace!(
+            self.log,
+            "Received BlocksByRoot Response";
+            "peer" => %peer_id,
+        );
+        self.send_to_sync(SyncMessage::RpcBlock {
+            peer_id,
+            request_id,
+            beacon_block,
+            seen_timestamp: timestamp_now(),
+        });
+    }
+
+    fn send_beacon_processor_work(&mut self, work: BeaconWorkEvent) {
+        self.beacon_processor_send
+            .try_send(work)
+            .unwrap_or_else(|e| {
+                let work_type = match &e {
+                    mpsc::error::TrySendError::Closed(work)
+                    | mpsc::error::TrySendError::Full(work) => work.work_type(),
+                };
+                error!(&self.log, "Unable to send message to the beacon processor";
+                    "error" => %e, "type" => work_type)
+            })
+    }
+}
+
+/// Wraps a Network Channel to employ various RPC related network functionality for the
+/// processor.
+#[derive(Clone)]
+pub struct HandlerNetworkContext {
+    /// The network channel to relay messages to the Network service.
+    network_send: mpsc::UnboundedSender>,
+    /// Logger for the `NetworkContext`.
+    log: slog::Logger,
+}
+
+impl HandlerNetworkContext {
+    pub fn new(network_send: mpsc::UnboundedSender>, log: slog::Logger) -> Self {
+        Self { network_send, log }
+    }
+
+    /// Sends a message to the network task.
+    fn inform_network(&mut self, msg: NetworkMessage) {
+        self.network_send.send(msg).unwrap_or_else(
+            |e| warn!(self.log, "Could not send message to the network service"; "error" => %e),
+        )
+    }
+
+    /// Sends a request to the network task.
+    pub fn send_processor_request(&mut self, peer_id: PeerId, request: Request) {
+        self.inform_network(NetworkMessage::SendRequest {
+            peer_id,
+            request_id: RequestId::Router,
+            request,
+        })
+    }
+
+    /// Sends a response to the network task.
+ pub fn send_response(&mut self, peer_id: PeerId, response: Response, id: PeerRequestId) { + self.inform_network(NetworkMessage::SendResponse { + peer_id, + id, + response, + }) + } +} + +fn timestamp_now() -> Duration { + SystemTime::now() + .duration_since(UNIX_EPOCH) + .unwrap_or_else(|_| Duration::from_secs(0)) +} diff --git a/beacon_node/network/src/router/mod.rs b/beacon_node/network/src/router/mod.rs deleted file mode 100644 index ce98337cfed..00000000000 --- a/beacon_node/network/src/router/mod.rs +++ /dev/null @@ -1,309 +0,0 @@ -//! This module handles incoming network messages. -//! -//! It routes the messages to appropriate services. -//! It handles requests at the application layer in its associated processor and directs -//! syncing-related responses to the Sync manager. -#![allow(clippy::unit_arg)] - -mod processor; - -use crate::error; -use crate::service::{NetworkMessage, RequestId}; -use beacon_chain::{BeaconChain, BeaconChainTypes}; -use futures::prelude::*; -use lighthouse_network::{ - MessageId, NetworkGlobals, PeerId, PeerRequestId, PubsubMessage, Request, Response, -}; -use processor::Processor; -use slog::{debug, o, trace}; -use std::sync::Arc; -use tokio::sync::mpsc; -use tokio_stream::wrappers::UnboundedReceiverStream; -use types::EthSpec; - -/// Handles messages received from the network and client and organises syncing. This -/// functionality of this struct is to validate an decode messages from the network before -/// passing them to the internal message processor. The message processor spawns a syncing thread -/// which manages which blocks need to be requested and processed. -pub struct Router { - /// Access to the peer db. - network_globals: Arc>, - /// Processes validated and decoded messages from the network. Has direct access to the - /// sync manager. - processor: Processor, - /// The `Router` logger. - log: slog::Logger, -} - -/// Types of messages the handler can receive. 
-#[derive(Debug)] -pub enum RouterMessage { - /// We have initiated a connection to a new peer. - PeerDialed(PeerId), - /// Peer has disconnected, - PeerDisconnected(PeerId), - /// An RPC request has been received. - RPCRequestReceived { - peer_id: PeerId, - id: PeerRequestId, - request: Request, - }, - /// An RPC response has been received. - RPCResponseReceived { - peer_id: PeerId, - request_id: RequestId, - response: Response, - }, - /// An RPC request failed - RPCFailed { - peer_id: PeerId, - request_id: RequestId, - }, - /// A gossip message has been received. The fields are: message id, the peer that sent us this - /// message, the message itself and a bool which indicates if the message should be processed - /// by the beacon chain after successful verification. - PubsubMessage(MessageId, PeerId, PubsubMessage, bool), - /// The peer manager has requested we re-status a peer. - StatusPeer(PeerId), -} - -impl Router { - /// Initializes and runs the Router. - pub fn spawn( - beacon_chain: Arc>, - network_globals: Arc>, - network_send: mpsc::UnboundedSender>, - executor: task_executor::TaskExecutor, - log: slog::Logger, - ) -> error::Result>> { - let message_handler_log = log.new(o!("service"=> "router")); - trace!(message_handler_log, "Service starting"); - - let (handler_send, handler_recv) = mpsc::unbounded_channel(); - - // Initialise a message instance, which itself spawns the syncing thread. 
- let processor = Processor::new( - executor.clone(), - beacon_chain, - network_globals.clone(), - network_send, - &log, - ); - - // generate the Message handler - let mut handler = Router { - network_globals, - processor, - log: message_handler_log, - }; - - // spawn handler task and move the message handler instance into the spawned thread - executor.spawn( - async move { - debug!(log, "Network message router started"); - UnboundedReceiverStream::new(handler_recv) - .for_each(move |msg| future::ready(handler.handle_message(msg))) - .await; - }, - "router", - ); - - Ok(handler_send) - } - - /// Handle all messages incoming from the network service. - fn handle_message(&mut self, message: RouterMessage) { - match message { - // we have initiated a connection to a peer or the peer manager has requested a - // re-status - RouterMessage::PeerDialed(peer_id) | RouterMessage::StatusPeer(peer_id) => { - self.processor.send_status(peer_id); - } - // A peer has disconnected - RouterMessage::PeerDisconnected(peer_id) => { - self.processor.on_disconnect(peer_id); - } - RouterMessage::RPCRequestReceived { - peer_id, - id, - request, - } => { - self.handle_rpc_request(peer_id, id, request); - } - RouterMessage::RPCResponseReceived { - peer_id, - request_id, - response, - } => { - self.handle_rpc_response(peer_id, request_id, response); - } - RouterMessage::RPCFailed { - peer_id, - request_id, - } => { - self.processor.on_rpc_error(peer_id, request_id); - } - RouterMessage::PubsubMessage(id, peer_id, gossip, should_process) => { - self.handle_gossip(id, peer_id, gossip, should_process); - } - } - } - - /* RPC - Related functionality */ - - /// A new RPC request has been received from the network. 
- fn handle_rpc_request(&mut self, peer_id: PeerId, id: PeerRequestId, request: Request) { - if !self.network_globals.peers.read().is_connected(&peer_id) { - debug!(self.log, "Dropping request of disconnected peer"; "peer_id" => %peer_id, "request" => ?request); - return; - } - match request { - Request::Status(status_message) => { - self.processor - .on_status_request(peer_id, id, status_message) - } - Request::BlocksByRange(request) => self - .processor - .on_blocks_by_range_request(peer_id, id, request), - Request::BlocksByRoot(request) => self - .processor - .on_blocks_by_root_request(peer_id, id, request), - Request::LightClientBootstrap(request) => self - .processor - .on_lightclient_bootstrap(peer_id, id, request), - } - } - - /// An RPC response has been received from the network. - // we match on id and ignore responses past the timeout. - fn handle_rpc_response( - &mut self, - peer_id: PeerId, - request_id: RequestId, - response: Response, - ) { - // an error could have occurred. - match response { - Response::Status(status_message) => { - self.processor.on_status_response(peer_id, status_message); - } - Response::BlocksByRange(beacon_block) => { - self.processor - .on_blocks_by_range_response(peer_id, request_id, beacon_block); - } - Response::BlocksByRoot(beacon_block) => { - self.processor - .on_blocks_by_root_response(peer_id, request_id, beacon_block); - } - Response::LightClientBootstrap(_) => unreachable!(), - } - } - - /// Handle RPC messages. - /// Note: `should_process` is currently only useful for the `Attestation` variant. - /// if `should_process` is `false`, we only propagate the message on successful verification, - /// else, we propagate **and** import into the beacon chain. - fn handle_gossip( - &mut self, - id: MessageId, - peer_id: PeerId, - gossip_message: PubsubMessage, - should_process: bool, - ) { - match gossip_message { - // Attestations should never reach the router. 
- PubsubMessage::AggregateAndProofAttestation(aggregate_and_proof) => { - self.processor - .on_aggregated_attestation_gossip(id, peer_id, *aggregate_and_proof); - } - PubsubMessage::Attestation(subnet_attestation) => { - self.processor.on_unaggregated_attestation_gossip( - id, - peer_id, - subnet_attestation.1.clone(), - subnet_attestation.0, - should_process, - ); - } - PubsubMessage::BeaconBlock(block) => { - self.processor.on_block_gossip( - id, - peer_id, - self.network_globals.client(&peer_id), - block, - ); - } - PubsubMessage::VoluntaryExit(exit) => { - debug!(self.log, "Received a voluntary exit"; "peer_id" => %peer_id); - self.processor.on_voluntary_exit_gossip(id, peer_id, exit); - } - PubsubMessage::ProposerSlashing(proposer_slashing) => { - debug!( - self.log, - "Received a proposer slashing"; - "peer_id" => %peer_id - ); - self.processor - .on_proposer_slashing_gossip(id, peer_id, proposer_slashing); - } - PubsubMessage::AttesterSlashing(attester_slashing) => { - debug!( - self.log, - "Received a attester slashing"; - "peer_id" => %peer_id - ); - self.processor - .on_attester_slashing_gossip(id, peer_id, attester_slashing); - } - PubsubMessage::SignedContributionAndProof(contribution_and_proof) => { - trace!( - self.log, - "Received sync committee aggregate"; - "peer_id" => %peer_id - ); - self.processor.on_sync_committee_contribution_gossip( - id, - peer_id, - *contribution_and_proof, - ); - } - PubsubMessage::SyncCommitteeMessage(sync_committtee_msg) => { - trace!( - self.log, - "Received sync committee signature"; - "peer_id" => %peer_id - ); - self.processor.on_sync_committee_signature_gossip( - id, - peer_id, - sync_committtee_msg.1, - sync_committtee_msg.0, - ); - } - PubsubMessage::LightClientFinalityUpdate(light_client_finality_update) => { - trace!( - self.log, - "Received light client finality update"; - "peer_id" => %peer_id - ); - self.processor.on_light_client_finality_update_gossip( - id, - peer_id, - light_client_finality_update, - ); - 
} - PubsubMessage::LightClientOptimisticUpdate(light_client_optimistic_update) => { - trace!( - self.log, - "Received light client optimistic update"; - "peer_id" => %peer_id - ); - self.processor.on_light_client_optimistic_update_gossip( - id, - peer_id, - light_client_optimistic_update, - ); - } - } - } -} diff --git a/beacon_node/network/src/router/processor.rs b/beacon_node/network/src/router/processor.rs deleted file mode 100644 index 999ba29e90a..00000000000 --- a/beacon_node/network/src/router/processor.rs +++ /dev/null @@ -1,459 +0,0 @@ -use crate::beacon_processor::{ - BeaconProcessor, WorkEvent as BeaconWorkEvent, MAX_WORK_EVENT_QUEUE_LEN, -}; -use crate::service::{NetworkMessage, RequestId}; -use crate::status::status_message; -use crate::sync::manager::RequestId as SyncId; -use crate::sync::SyncMessage; -use beacon_chain::{BeaconChain, BeaconChainTypes}; -use lighthouse_network::rpc::*; -use lighthouse_network::{ - Client, MessageId, NetworkGlobals, PeerId, PeerRequestId, Request, Response, -}; -use slog::{debug, error, o, trace, warn}; -use std::cmp; -use std::sync::Arc; -use std::time::{Duration, SystemTime, UNIX_EPOCH}; -use store::SyncCommitteeMessage; -use tokio::sync::mpsc; -use types::{ - Attestation, AttesterSlashing, EthSpec, LightClientFinalityUpdate, LightClientOptimisticUpdate, - ProposerSlashing, SignedAggregateAndProof, SignedBeaconBlock, SignedContributionAndProof, - SignedVoluntaryExit, SubnetId, SyncSubnetId, -}; - -/// Processes validated messages from the network. It relays necessary data to the syncing thread -/// and processes blocks from the pubsub network. -pub struct Processor { - /// A reference to the underlying beacon chain. - chain: Arc>, - /// A channel to the syncing thread. - sync_send: mpsc::UnboundedSender>, - /// A network context to return and handle RPC requests. - network: HandlerNetworkContext, - /// A multi-threaded, non-blocking processor for applying messages to the beacon chain. 
- beacon_processor_send: mpsc::Sender>, - /// The `RPCHandler` logger. - log: slog::Logger, -} - -impl Processor { - /// Instantiate a `Processor` instance - pub fn new( - executor: task_executor::TaskExecutor, - beacon_chain: Arc>, - network_globals: Arc>, - network_send: mpsc::UnboundedSender>, - log: &slog::Logger, - ) -> Self { - let sync_logger = log.new(o!("service"=> "sync")); - let (beacon_processor_send, beacon_processor_receive) = - mpsc::channel(MAX_WORK_EVENT_QUEUE_LEN); - - // spawn the sync thread - let sync_send = crate::sync::manager::spawn( - executor.clone(), - beacon_chain.clone(), - network_globals.clone(), - network_send.clone(), - beacon_processor_send.clone(), - sync_logger, - ); - - BeaconProcessor { - beacon_chain: Arc::downgrade(&beacon_chain), - network_tx: network_send.clone(), - sync_tx: sync_send.clone(), - network_globals, - executor, - max_workers: cmp::max(1, num_cpus::get()), - current_workers: 0, - importing_blocks: Default::default(), - log: log.clone(), - } - .spawn_manager(beacon_processor_receive, None); - - Processor { - chain: beacon_chain, - sync_send, - network: HandlerNetworkContext::new(network_send, log.clone()), - beacon_processor_send, - log: log.new(o!("service" => "router")), - } - } - - fn send_to_sync(&mut self, message: SyncMessage) { - self.sync_send.send(message).unwrap_or_else(|e| { - warn!( - self.log, - "Could not send message to the sync service"; - "error" => %e, - ) - }); - } - - /// Handle a peer disconnect. - /// - /// Removes the peer from the manager. - pub fn on_disconnect(&mut self, peer_id: PeerId) { - self.send_to_sync(SyncMessage::Disconnect(peer_id)); - } - - /// An error occurred during an RPC request. The state is maintained by the sync manager, so - /// this function notifies the sync manager of the error. 
- pub fn on_rpc_error(&mut self, peer_id: PeerId, request_id: RequestId) { - // Check if the failed RPC belongs to sync - if let RequestId::Sync(request_id) = request_id { - self.send_to_sync(SyncMessage::RpcError { - peer_id, - request_id, - }); - } - } - - /// Sends a `Status` message to the peer. - /// - /// Called when we first connect to a peer, or when the PeerManager determines we need to - /// re-status. - pub fn send_status(&mut self, peer_id: PeerId) { - let status_message = status_message(&self.chain); - debug!(self.log, "Sending Status Request"; "peer" => %peer_id, &status_message); - self.network - .send_processor_request(peer_id, Request::Status(status_message)); - } - - /// Handle a `Status` request. - /// - /// Processes the `Status` from the remote peer and sends back our `Status`. - pub fn on_status_request( - &mut self, - peer_id: PeerId, - request_id: PeerRequestId, - status: StatusMessage, - ) { - debug!(self.log, "Received Status Request"; "peer_id" => %peer_id, &status); - - // Say status back. - self.network.send_response( - peer_id, - Response::Status(status_message(&self.chain)), - request_id, - ); - - self.send_beacon_processor_work(BeaconWorkEvent::status_message(peer_id, status)) - } - - /// Process a `Status` response from a peer. - pub fn on_status_response(&mut self, peer_id: PeerId, status: StatusMessage) { - debug!(self.log, "Received Status Response"; "peer_id" => %peer_id, &status); - self.send_beacon_processor_work(BeaconWorkEvent::status_message(peer_id, status)) - } - - /// Handle a `BlocksByRoot` request from the peer. - pub fn on_blocks_by_root_request( - &mut self, - peer_id: PeerId, - request_id: PeerRequestId, - request: BlocksByRootRequest, - ) { - self.send_beacon_processor_work(BeaconWorkEvent::blocks_by_roots_request( - peer_id, request_id, request, - )) - } - - /// Handle a `LightClientBootstrap` request from the peer. 
- pub fn on_lightclient_bootstrap( - &mut self, - peer_id: PeerId, - request_id: PeerRequestId, - request: LightClientBootstrapRequest, - ) { - self.send_beacon_processor_work(BeaconWorkEvent::lightclient_bootstrap_request( - peer_id, request_id, request, - )) - } - - /// Handle a `BlocksByRange` request from the peer. - pub fn on_blocks_by_range_request( - &mut self, - peer_id: PeerId, - request_id: PeerRequestId, - req: BlocksByRangeRequest, - ) { - self.send_beacon_processor_work(BeaconWorkEvent::blocks_by_range_request( - peer_id, request_id, req, - )) - } - - /// Handle a `BlocksByRange` response from the peer. - /// A `beacon_block` behaves as a stream which is terminated on a `None` response. - pub fn on_blocks_by_range_response( - &mut self, - peer_id: PeerId, - request_id: RequestId, - beacon_block: Option>>, - ) { - let request_id = match request_id { - RequestId::Sync(sync_id) => match sync_id { - SyncId::SingleBlock { .. } | SyncId::ParentLookup { .. } => { - unreachable!("Block lookups do not request BBRange requests") - } - id @ (SyncId::BackFillSync { .. } | SyncId::RangeSync { .. }) => id, - }, - RequestId::Router => unreachable!("All BBRange requests belong to sync"), - }; - - trace!( - self.log, - "Received BlocksByRange Response"; - "peer" => %peer_id, - ); - - self.send_to_sync(SyncMessage::RpcBlock { - peer_id, - request_id, - beacon_block, - seen_timestamp: timestamp_now(), - }); - } - - /// Handle a `BlocksByRoot` response from the peer. - pub fn on_blocks_by_root_response( - &mut self, - peer_id: PeerId, - request_id: RequestId, - beacon_block: Option>>, - ) { - let request_id = match request_id { - RequestId::Sync(sync_id) => match sync_id { - id @ (SyncId::SingleBlock { .. } | SyncId::ParentLookup { .. }) => id, - SyncId::BackFillSync { .. } | SyncId::RangeSync { .. 
} => { - unreachable!("Batch syncing do not request BBRoot requests") - } - }, - RequestId::Router => unreachable!("All BBRoot requests belong to sync"), - }; - - trace!( - self.log, - "Received BlocksByRoot Response"; - "peer" => %peer_id, - ); - self.send_to_sync(SyncMessage::RpcBlock { - peer_id, - request_id, - beacon_block, - seen_timestamp: timestamp_now(), - }); - } - - /// Process a gossip message declaring a new block. - /// - /// Attempts to apply to block to the beacon chain. May queue the block for later processing. - /// - /// Returns a `bool` which, if `true`, indicates we should forward the block to our peers. - pub fn on_block_gossip( - &mut self, - message_id: MessageId, - peer_id: PeerId, - peer_client: Client, - block: Arc>, - ) { - self.send_beacon_processor_work(BeaconWorkEvent::gossip_beacon_block( - message_id, - peer_id, - peer_client, - block, - timestamp_now(), - )) - } - - pub fn on_unaggregated_attestation_gossip( - &mut self, - message_id: MessageId, - peer_id: PeerId, - unaggregated_attestation: Attestation, - subnet_id: SubnetId, - should_process: bool, - ) { - self.send_beacon_processor_work(BeaconWorkEvent::unaggregated_attestation( - message_id, - peer_id, - unaggregated_attestation, - subnet_id, - should_process, - timestamp_now(), - )) - } - - pub fn on_aggregated_attestation_gossip( - &mut self, - message_id: MessageId, - peer_id: PeerId, - aggregate: SignedAggregateAndProof, - ) { - self.send_beacon_processor_work(BeaconWorkEvent::aggregated_attestation( - message_id, - peer_id, - aggregate, - timestamp_now(), - )) - } - - pub fn on_voluntary_exit_gossip( - &mut self, - message_id: MessageId, - peer_id: PeerId, - voluntary_exit: Box, - ) { - self.send_beacon_processor_work(BeaconWorkEvent::gossip_voluntary_exit( - message_id, - peer_id, - voluntary_exit, - )) - } - - pub fn on_proposer_slashing_gossip( - &mut self, - message_id: MessageId, - peer_id: PeerId, - proposer_slashing: Box, - ) { - 
self.send_beacon_processor_work(BeaconWorkEvent::gossip_proposer_slashing( - message_id, - peer_id, - proposer_slashing, - )) - } - - pub fn on_attester_slashing_gossip( - &mut self, - message_id: MessageId, - peer_id: PeerId, - attester_slashing: Box>, - ) { - self.send_beacon_processor_work(BeaconWorkEvent::gossip_attester_slashing( - message_id, - peer_id, - attester_slashing, - )) - } - - pub fn on_sync_committee_signature_gossip( - &mut self, - message_id: MessageId, - peer_id: PeerId, - sync_signature: SyncCommitteeMessage, - subnet_id: SyncSubnetId, - ) { - self.send_beacon_processor_work(BeaconWorkEvent::gossip_sync_signature( - message_id, - peer_id, - sync_signature, - subnet_id, - timestamp_now(), - )) - } - - pub fn on_sync_committee_contribution_gossip( - &mut self, - message_id: MessageId, - peer_id: PeerId, - sync_contribution: SignedContributionAndProof, - ) { - self.send_beacon_processor_work(BeaconWorkEvent::gossip_sync_contribution( - message_id, - peer_id, - sync_contribution, - timestamp_now(), - )) - } - - pub fn on_light_client_finality_update_gossip( - &mut self, - message_id: MessageId, - peer_id: PeerId, - light_client_finality_update: Box>, - ) { - self.send_beacon_processor_work(BeaconWorkEvent::gossip_light_client_finality_update( - message_id, - peer_id, - light_client_finality_update, - timestamp_now(), - )) - } - - pub fn on_light_client_optimistic_update_gossip( - &mut self, - message_id: MessageId, - peer_id: PeerId, - light_client_optimistic_update: Box>, - ) { - self.send_beacon_processor_work(BeaconWorkEvent::gossip_light_client_optimistic_update( - message_id, - peer_id, - light_client_optimistic_update, - timestamp_now(), - )) - } - - fn send_beacon_processor_work(&mut self, work: BeaconWorkEvent) { - self.beacon_processor_send - .try_send(work) - .unwrap_or_else(|e| { - let work_type = match &e { - mpsc::error::TrySendError::Closed(work) - | mpsc::error::TrySendError::Full(work) => work.work_type(), - }; - error!(&self.log, 
"Unable to send message to the beacon processor"; - "error" => %e, "type" => work_type) - }) - } -} - -/// Wraps a Network Channel to employ various RPC related network functionality for the -/// processor. -#[derive(Clone)] -pub struct HandlerNetworkContext { - /// The network channel to relay messages to the Network service. - network_send: mpsc::UnboundedSender>, - /// Logger for the `NetworkContext`. - log: slog::Logger, -} - -impl HandlerNetworkContext { - pub fn new(network_send: mpsc::UnboundedSender>, log: slog::Logger) -> Self { - Self { network_send, log } - } - - /// Sends a message to the network task. - fn inform_network(&mut self, msg: NetworkMessage) { - self.network_send.send(msg).unwrap_or_else( - |e| warn!(self.log, "Could not send message to the network service"; "error" => %e), - ) - } - - /// Sends a request to the network task. - pub fn send_processor_request(&mut self, peer_id: PeerId, request: Request) { - self.inform_network(NetworkMessage::SendRequest { - peer_id, - request_id: RequestId::Router, - request, - }) - } - - /// Sends a response to the network task. 
- pub fn send_response(&mut self, peer_id: PeerId, response: Response, id: PeerRequestId) { - self.inform_network(NetworkMessage::SendResponse { - peer_id, - id, - response, - }) - } -} - -fn timestamp_now() -> Duration { - SystemTime::now() - .duration_since(UNIX_EPOCH) - .unwrap_or_else(|_| Duration::from_secs(0)) -} diff --git a/beacon_node/network/src/service.rs b/beacon_node/network/src/service.rs index 4568ed1a229..3e86d2099f0 100644 --- a/beacon_node/network/src/service.rs +++ b/beacon_node/network/src/service.rs @@ -19,7 +19,7 @@ use lighthouse_network::{ Context, PeerAction, PeerRequestId, PubsubMessage, ReportSource, Request, Response, Subnet, }; use lighthouse_network::{ - types::{GossipEncoding, GossipTopic}, + types::{core_topics_to_subscribe, GossipEncoding, GossipTopic}, MessageId, NetworkEvent, NetworkGlobals, PeerId, }; use slog::{crit, debug, error, info, o, trace, warn}; @@ -228,16 +228,21 @@ impl NetworkService { let (network_senders, network_recievers) = NetworkSenders::new(); // try and construct UPnP port mappings if required. 
- let upnp_config = crate::nat::UPnPConfig::from(config); - let upnp_log = network_log.new(o!("service" => "UPnP")); - let upnp_network_send = network_senders.network_send(); - if config.upnp_enabled { - executor.spawn_blocking( - move || { - crate::nat::construct_upnp_mappings(upnp_config, upnp_network_send, upnp_log) - }, - "UPnP", - ); + if let Some(upnp_config) = crate::nat::UPnPConfig::from_config(config) { + let upnp_log = network_log.new(o!("service" => "UPnP")); + let upnp_network_send = network_senders.network_send(); + if config.upnp_enabled { + executor.spawn_blocking( + move || { + crate::nat::construct_upnp_mappings( + upnp_config, + upnp_network_send, + upnp_log, + ) + }, + "UPnP", + ); + } } // get a reference to the beacon chain store @@ -445,7 +450,7 @@ impl NetworkService { let fork_version = self.beacon_chain.spec.fork_version_for_name(fork_name); let fork_digest = ChainSpec::compute_fork_digest(fork_version, self.beacon_chain.genesis_validators_root); info!(self.log, "Subscribing to new fork topics"); - self.libp2p.subscribe_new_fork_topics(fork_digest); + self.libp2p.subscribe_new_fork_topics(fork_name, fork_digest); self.next_fork_subscriptions = Box::pin(None.into()); } else { @@ -467,7 +472,7 @@ impl NetworkService { ) { match ev { NetworkEvent::PeerConnectedOutgoing(peer_id) => { - self.send_to_router(RouterMessage::PeerDialed(peer_id)); + self.send_to_router(RouterMessage::StatusPeer(peer_id)); } NetworkEvent::PeerConnectedIncoming(_) | NetworkEvent::PeerBanned(_) @@ -684,7 +689,7 @@ impl NetworkService { } let mut subscribed_topics: Vec = vec![]; - for topic_kind in lighthouse_network::types::CORE_TOPICS.iter() { + for topic_kind in core_topics_to_subscribe(self.fork_context.current_fork()) { for fork_digest in self.required_gossip_fork_digests() { let topic = GossipTopic::new( topic_kind.clone(), diff --git a/beacon_node/network/src/service/tests.rs b/beacon_node/network/src/service/tests.rs index f0dd0e75ffd..83fcc8c9ac8 100644 --- 
a/beacon_node/network/src/service/tests.rs +++ b/beacon_node/network/src/service/tests.rs @@ -59,10 +59,9 @@ mod tests { ); let mut config = NetworkConfig::default(); + config.set_ipv4_listening_address(std::net::Ipv4Addr::UNSPECIFIED, 21212, 21212); config.discv5_config.table_filter = |_| true; // Do not ignore local IPs - config.libp2p_port = 21212; config.upnp_enabled = false; - config.discovery_port = 21212; config.boot_nodes_enr = enrs.clone(); runtime.block_on(async move { // Create a new network service which implicitly gets dropped at the diff --git a/beacon_node/network/src/subnet_service/tests/mod.rs b/beacon_node/network/src/subnet_service/tests/mod.rs index 9e1c9f51bcc..a407fe1bcf8 100644 --- a/beacon_node/network/src/subnet_service/tests/mod.rs +++ b/beacon_node/network/src/subnet_service/tests/mod.rs @@ -182,6 +182,7 @@ mod attestation_service { #[cfg(feature = "deterministic_long_lived_attnets")] use std::collections::HashSet; + #[cfg(not(windows))] use crate::subnet_service::attestation_subnets::MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD; use super::*; @@ -290,6 +291,7 @@ mod attestation_service { } /// Test to verify that we are not unsubscribing to a subnet before a required subscription. 
+ #[cfg(not(windows))] #[tokio::test] async fn test_same_subnet_unsubscription() { // subscription config @@ -513,6 +515,7 @@ mod attestation_service { assert_eq!(unexpected_msg_count, 0); } + #[cfg(not(windows))] #[tokio::test] async fn test_subscribe_same_subnet_several_slots_apart() { // subscription config diff --git a/beacon_node/operation_pool/Cargo.toml b/beacon_node/operation_pool/Cargo.toml index c61ca6b2cff..3ec24a18490 100644 --- a/beacon_node/operation_pool/Cargo.toml +++ b/beacon_node/operation_pool/Cargo.toml @@ -13,12 +13,13 @@ parking_lot = "0.12.0" types = { path = "../../consensus/types" } state_processing = { path = "../../consensus/state_processing" } eth2_ssz = { version = "0.4.1", path = "../../consensus/ssz" } -eth2_ssz_derive = { version = "0.3.0", path = "../../consensus/ssz_derive" } +eth2_ssz_derive = { version = "0.3.1", path = "../../consensus/ssz_derive" } rayon = "1.5.0" serde = "1.0.116" serde_derive = "1.0.116" store = { path = "../store" } bitvec = "1" +rand = "0.8.5" [dev-dependencies] beacon_chain = { path = "../beacon_chain" } diff --git a/beacon_node/operation_pool/src/bls_to_execution_changes.rs b/beacon_node/operation_pool/src/bls_to_execution_changes.rs new file mode 100644 index 00000000000..c73666e1458 --- /dev/null +++ b/beacon_node/operation_pool/src/bls_to_execution_changes.rs @@ -0,0 +1,147 @@ +use state_processing::SigVerifiedOp; +use std::collections::{hash_map::Entry, HashMap, HashSet}; +use std::sync::Arc; +use types::{ + AbstractExecPayload, BeaconState, ChainSpec, EthSpec, SignedBeaconBlock, + SignedBlsToExecutionChange, +}; + +/// Indicates if a `BlsToExecutionChange` was received before or after the +/// Capella fork. This is used to know which messages we should broadcast at the +/// Capella fork epoch. +#[derive(Copy, Clone)] +pub enum ReceivedPreCapella { + Yes, + No, +} + +/// Pool of BLS to execution changes that maintains a LIFO queue and an index by validator. 
+/// +/// Using the LIFO queue for block production disincentivises spam on P2P at the Capella fork, +/// and is less-relevant after that. +#[derive(Debug, Default)] +pub struct BlsToExecutionChanges { + /// Map from validator index to BLS to execution change. + by_validator_index: HashMap>>, + /// Last-in-first-out (LIFO) queue of verified messages. + queue: Vec>>, + /// Contains a set of validator indices which need to have their changes + /// broadcast at the capella epoch. + received_pre_capella_indices: HashSet, +} + +impl BlsToExecutionChanges { + pub fn existing_change_equals( + &self, + address_change: &SignedBlsToExecutionChange, + ) -> Option { + self.by_validator_index + .get(&address_change.message.validator_index) + .map(|existing| existing.as_inner() == address_change) + } + + pub fn insert( + &mut self, + verified_change: SigVerifiedOp, + received_pre_capella: ReceivedPreCapella, + ) -> bool { + let validator_index = verified_change.as_inner().message.validator_index; + // Wrap in an `Arc` once on insert. + let verified_change = Arc::new(verified_change); + match self.by_validator_index.entry(validator_index) { + Entry::Vacant(entry) => { + self.queue.push(verified_change.clone()); + entry.insert(verified_change); + if matches!(received_pre_capella, ReceivedPreCapella::Yes) { + self.received_pre_capella_indices.insert(validator_index); + } + true + } + Entry::Occupied(_) => false, + } + } + + /// FIFO ordering, used for persistence to disk. + pub fn iter_fifo( + &self, + ) -> impl Iterator>> { + self.queue.iter() + } + + /// LIFO ordering, used for block packing. + pub fn iter_lifo( + &self, + ) -> impl Iterator>> { + self.queue.iter().rev() + } + + /// Returns only those which are flagged for broadcasting at the Capella + /// fork. Uses FIFO ordering, although we expect this list to be shuffled by + /// the caller. 
+    pub fn iter_received_pre_capella(
+        &self,
+    ) -> impl Iterator>> {
+        self.queue.iter().filter(|address_change| {
+            self.received_pre_capella_indices
+                .contains(&address_change.as_inner().message.validator_index)
+        })
+    }
+
+    /// Returns the set of indices which should have their address changes
+    /// broadcast at the Capella fork.
+    pub fn iter_pre_capella_indices(&self) -> impl Iterator {
+        self.received_pre_capella_indices.iter()
+    }
+
+    /// Prune BLS to execution changes that have been applied to the state more than 1 block ago.
+    ///
+    /// The block check is necessary to avoid pruning too eagerly and losing the ability to include
+    /// address changes during re-orgs. This isn't *perfect*, so some address changes could
+    /// still get stuck if there are gnarly re-orgs and the changes can't be widely republished
+    /// due to the gossip duplicate rules.
+    pub fn prune>(
+        &mut self,
+        head_block: &SignedBeaconBlock,
+        head_state: &BeaconState,
+        spec: &ChainSpec,
+    ) {
+        let mut validator_indices_pruned = vec![];
+
+        self.queue.retain(|address_change| {
+            let validator_index = address_change.as_inner().message.validator_index;
+            head_state
+                .validators()
+                .get(validator_index as usize)
+                .map_or(true, |validator| {
+                    let prune = validator.has_eth1_withdrawal_credential(spec)
+                        && head_block
+                            .message()
+                            .body()
+                            .bls_to_execution_changes()
+                            .map_or(true, |recent_changes| {
+                                !recent_changes
+                                    .iter()
+                                    .any(|c| c.message.validator_index == validator_index)
+                            });
+                    if prune {
+                        validator_indices_pruned.push(validator_index);
+                    }
+                    !prune
+                })
+        });
+
+        for validator_index in validator_indices_pruned {
+            self.by_validator_index.remove(&validator_index);
+        }
+    }
+
+    /// Removes `broadcasted` validators from the set of validators that should
+    /// have their BLS changes broadcast at the Capella fork boundary.
+    pub fn register_indices_broadcasted_at_capella(&mut self, broadcasted: &HashSet<u64>) {
+        self.received_pre_capella_indices = self
+            .received_pre_capella_indices
+            .difference(broadcasted)
+            .copied()
+            .collect();
+    }
+}
diff --git a/beacon_node/operation_pool/src/lib.rs b/beacon_node/operation_pool/src/lib.rs
index 4fe5a725458..24c0623f5c3 100644
--- a/beacon_node/operation_pool/src/lib.rs
+++ b/beacon_node/operation_pool/src/lib.rs
@@ -2,25 +2,31 @@
 mod attestation;
 mod attestation_id;
 mod attestation_storage;
 mod attester_slashing;
+mod bls_to_execution_changes;
 mod max_cover;
 mod metrics;
 mod persistence;
 mod reward_cache;
 mod sync_aggregate_id;
 
-pub use attestation::AttMaxCover;
+pub use crate::bls_to_execution_changes::ReceivedPreCapella;
+pub use attestation::{earliest_attestation_validators, AttMaxCover};
 pub use attestation_storage::{AttestationRef, SplitAttestation};
 pub use max_cover::MaxCover;
 pub use persistence::{
-    PersistedOperationPool, PersistedOperationPoolV12, PersistedOperationPoolV5,
+    PersistedOperationPool, PersistedOperationPoolV12, PersistedOperationPoolV14,
+    PersistedOperationPoolV15, PersistedOperationPoolV5,
 };
 pub use reward_cache::RewardCache;
 
 use crate::attestation_storage::{AttestationMap, CheckpointKey};
+use crate::bls_to_execution_changes::BlsToExecutionChanges;
 use crate::sync_aggregate_id::SyncAggregateId;
 use attester_slashing::AttesterSlashingMaxCover;
 use max_cover::maximum_cover;
 use parking_lot::{RwLock, RwLockWriteGuard};
+use rand::seq::SliceRandom;
+use rand::thread_rng;
 use state_processing::per_block_processing::errors::AttestationValidationError;
 use state_processing::per_block_processing::{
     get_slashable_indices_modular, verify_exit, VerifySignatures,
@@ -30,8 +36,9 @@
 use std::collections::{hash_map::Entry, HashMap, HashSet};
 use std::marker::PhantomData;
 use std::ptr;
 use types::{
-    sync_aggregate::Error as SyncAggregateError, typenum::Unsigned, Attestation, AttestationData,
-    AttesterSlashing, BeaconState, BeaconStateError, ChainSpec, Epoch, EthSpec, ProposerSlashing,
+    sync_aggregate::Error as SyncAggregateError, typenum::Unsigned, AbstractExecPayload,
+    Attestation, AttestationData, AttesterSlashing, BeaconState, BeaconStateError, ChainSpec,
+    Epoch, EthSpec, ProposerSlashing, SignedBeaconBlock, SignedBlsToExecutionChange,
     SignedVoluntaryExit, Slot, SyncAggregate, SyncCommitteeContribution, Validator,
 };
@@ -49,6 +56,8 @@ pub struct OperationPool<T: EthSpec> {
     proposer_slashings: RwLock<HashMap<u64, SigVerifiedOp<ProposerSlashing, T>>>,
     /// Map from exiting validator to their exit data.
     voluntary_exits: RwLock<HashMap<u64, SigVerifiedOp<SignedVoluntaryExit, T>>>,
+    /// Map from credential changing validator to their position in the queue.
+    bls_to_execution_changes: RwLock<BlsToExecutionChanges<T>>,
     /// Reward cache for accelerating attestation packing.
     reward_cache: RwLock<RewardCache>,
     _phantom: PhantomData<T>,
@@ -429,7 +438,7 @@ impl<T: EthSpec> OperationPool<T> {
     pub fn prune_proposer_slashings(&self, head_state: &BeaconState<T>) {
         prune_validator_hash_map(
             &mut self.proposer_slashings.write(),
-            |validator| validator.exit_epoch <= head_state.finalized_checkpoint().epoch,
+            |_, validator| validator.exit_epoch <= head_state.finalized_checkpoint().epoch,
             head_state,
         );
     }
@@ -488,7 +497,8 @@ impl<T: EthSpec> OperationPool<T> {
             |exit| {
                 filter(exit.as_inner())
                     && exit.signature_is_still_valid(&state.fork())
-                    && verify_exit(state, exit.as_inner(), VerifySignatures::False, spec).is_ok()
+                    && verify_exit(state, None, exit.as_inner(), VerifySignatures::False, spec)
+                        .is_ok()
             },
             |exit| exit.as_inner().clone(),
             T::MaxVoluntaryExits::to_usize(),
@@ -504,18 +514,121 @@ impl<T: EthSpec> OperationPool<T> {
             //
             // We choose simplicity over the gain of pruning more exits since they are small and
            // should not be seen frequently.
-            |validator| validator.exit_epoch <= head_state.finalized_checkpoint().epoch,
+            |_, validator| validator.exit_epoch <= head_state.finalized_checkpoint().epoch,
             head_state,
         );
     }
 
+    /// Check if an address change equal to `address_change` is already in the pool.
+    ///
+    /// Return `None` if no address change for the validator index exists in the pool.
+    pub fn bls_to_execution_change_in_pool_equals(
+        &self,
+        address_change: &SignedBlsToExecutionChange,
+    ) -> Option<bool> {
+        self.bls_to_execution_changes
+            .read()
+            .existing_change_equals(address_change)
+    }
+
+    /// Insert a BLS to execution change into the pool, *only if* no prior change is known.
+    ///
+    /// Return `true` if the change was inserted.
+    pub fn insert_bls_to_execution_change(
+        &self,
+        verified_change: SigVerifiedOp<SignedBlsToExecutionChange, T>,
+        received_pre_capella: ReceivedPreCapella,
+    ) -> bool {
+        self.bls_to_execution_changes
+            .write()
+            .insert(verified_change, received_pre_capella)
+    }
+
+    /// Get a list of execution changes for inclusion in a block.
+    ///
+    /// They're in LIFO (newest-first) order, which isn't exactly fair, but isn't unfair either.
+    pub fn get_bls_to_execution_changes(
+        &self,
+        state: &BeaconState<T>,
+        spec: &ChainSpec,
+    ) -> Vec<SignedBlsToExecutionChange> {
+        filter_limit_operations(
+            self.bls_to_execution_changes.read().iter_lifo(),
+            |address_change| {
+                address_change.signature_is_still_valid(&state.fork())
+                    && state
+                        .get_validator(address_change.as_inner().message.validator_index as usize)
+                        .map_or(false, |validator| {
+                            !validator.has_eth1_withdrawal_credential(spec)
+                        })
+            },
+            |address_change| address_change.as_inner().clone(),
+            T::MaxBlsToExecutionChanges::to_usize(),
+        )
+    }
+
+    /// Get a list of execution changes to be broadcast at the Capella fork.
+    ///
+    /// The list that is returned will be shuffled to help provide a fair
+    /// broadcast of messages.
+    pub fn get_bls_to_execution_changes_received_pre_capella(
+        &self,
+        state: &BeaconState<T>,
+        spec: &ChainSpec,
+    ) -> Vec<SignedBlsToExecutionChange> {
+        let mut changes = filter_limit_operations(
+            self.bls_to_execution_changes
+                .read()
+                .iter_received_pre_capella(),
+            |address_change| {
+                address_change.signature_is_still_valid(&state.fork())
+                    && state
+                        .get_validator(address_change.as_inner().message.validator_index as usize)
+                        .map_or(false, |validator| {
+                            !validator.has_eth1_withdrawal_credential(spec)
+                        })
+            },
+            |address_change| address_change.as_inner().clone(),
+            usize::max_value(),
+        );
+        changes.shuffle(&mut thread_rng());
+        changes
+    }
+
+    /// Removes `broadcasted` validators from the set of validators that should
+    /// have their BLS changes broadcast at the Capella fork boundary.
+    pub fn register_indices_broadcasted_at_capella(&self, broadcasted: &HashSet<u64>) {
+        self.bls_to_execution_changes
+            .write()
+            .register_indices_broadcasted_at_capella(broadcasted);
+    }
+
+    /// Prune BLS to execution changes that have been applied to the state more than 1 block ago.
+    pub fn prune_bls_to_execution_changes<Payload: AbstractExecPayload<T>>(
+        &self,
+        head_block: &SignedBeaconBlock<T, Payload>,
+        head_state: &BeaconState<T>,
+        spec: &ChainSpec,
+    ) {
+        self.bls_to_execution_changes
+            .write()
+            .prune(head_block, head_state, spec)
+    }
+
     /// Prune all types of transactions given the latest head state and head fork.
-    pub fn prune_all(&self, head_state: &BeaconState<T>, current_epoch: Epoch) {
+    pub fn prune_all<Payload: AbstractExecPayload<T>>(
+        &self,
+        head_block: &SignedBeaconBlock<T, Payload>,
+        head_state: &BeaconState<T>,
+        current_epoch: Epoch,
+        spec: &ChainSpec,
+    ) {
         self.prune_attestations(current_epoch);
         self.prune_sync_contributions(head_state.slot());
         self.prune_proposer_slashings(head_state);
         self.prune_attester_slashings(head_state);
         self.prune_voluntary_exits(head_state);
+        self.prune_bls_to_execution_changes(head_block, head_state, spec);
     }
 
     /// Total number of voluntary exits in the pool.
@@ -581,6 +694,17 @@ impl OperationPool { .map(|(_, exit)| exit.as_inner().clone()) .collect() } + + /// Returns all known `SignedBlsToExecutionChange` objects. + /// + /// This method may return objects that are invalid for block inclusion. + pub fn get_all_bls_to_execution_changes(&self) -> Vec { + self.bls_to_execution_changes + .read() + .iter_fifo() + .map(|address_change| address_change.as_inner().clone()) + .collect() + } } /// Filter up to a maximum number of operations out of an iterator. @@ -614,7 +738,7 @@ fn prune_validator_hash_map( prune_if: F, head_state: &BeaconState, ) where - F: Fn(&Validator) -> bool, + F: Fn(u64, &Validator) -> bool, T: VerifyOperation, { map.retain(|&validator_index, op| { @@ -622,7 +746,7 @@ fn prune_validator_hash_map( && head_state .validators() .get(validator_index as usize) - .map_or(true, |validator| !prune_if(validator)) + .map_or(true, |validator| !prune_if(validator_index, validator)) }); } @@ -1665,7 +1789,7 @@ mod release_tests { fn cross_fork_harness() -> (BeaconChainHarness>, ChainSpec) { - let mut spec = test_spec::(); + let mut spec = E::default_spec(); // Give some room to sign surround slashings. 
spec.altair_fork_epoch = Some(Epoch::new(3)); diff --git a/beacon_node/operation_pool/src/persistence.rs b/beacon_node/operation_pool/src/persistence.rs index ed15369df73..35d2b4ce7ee 100644 --- a/beacon_node/operation_pool/src/persistence.rs +++ b/beacon_node/operation_pool/src/persistence.rs @@ -1,5 +1,6 @@ use crate::attestation_id::AttestationId; use crate::attestation_storage::AttestationMap; +use crate::bls_to_execution_changes::{BlsToExecutionChanges, ReceivedPreCapella}; use crate::sync_aggregate_id::SyncAggregateId; use crate::OpPoolError; use crate::OperationPool; @@ -8,6 +9,8 @@ use parking_lot::RwLock; use ssz::{Decode, Encode}; use ssz_derive::{Decode, Encode}; use state_processing::SigVerifiedOp; +use std::collections::HashSet; +use std::mem; use store::{DBColumn, Error as StoreError, StoreItem}; use types::*; @@ -18,7 +21,7 @@ type PersistedSyncContributions = Vec<(SyncAggregateId, Vec { #[superstruct(only(V5))] pub attestations_v5: Vec<(AttestationId, Vec>)>, /// Attestations and their attesting indices. - #[superstruct(only(V12))] + #[superstruct(only(V12, V14, V15))] pub attestations: Vec<(Attestation, Vec)>, /// Mapping from sync contribution ID to sync contributions and aggregate. pub sync_contributions: PersistedSyncContributions, @@ -40,20 +43,27 @@ pub struct PersistedOperationPool { #[superstruct(only(V5))] pub attester_slashings_v5: Vec<(AttesterSlashing, ForkVersion)>, /// Attester slashings. - #[superstruct(only(V12))] + #[superstruct(only(V12, V14, V15))] pub attester_slashings: Vec, T>>, /// [DEPRECATED] Proposer slashings. #[superstruct(only(V5))] pub proposer_slashings_v5: Vec, /// Proposer slashings with fork information. - #[superstruct(only(V12))] + #[superstruct(only(V12, V14, V15))] pub proposer_slashings: Vec>, /// [DEPRECATED] Voluntary exits. #[superstruct(only(V5))] pub voluntary_exits_v5: Vec, /// Voluntary exits with fork information. 
- #[superstruct(only(V12))] + #[superstruct(only(V12, V14, V15))] pub voluntary_exits: Vec>, + /// BLS to Execution Changes + #[superstruct(only(V14, V15))] + pub bls_to_execution_changes: Vec>, + /// Validator indices with BLS to Execution Changes to be broadcast at the + /// Capella fork. + #[superstruct(only(V15))] + pub capella_bls_change_broadcast_indices: Vec, } impl PersistedOperationPool { @@ -99,17 +109,33 @@ impl PersistedOperationPool { .map(|(_, exit)| exit.clone()) .collect(); - PersistedOperationPool::V12(PersistedOperationPoolV12 { + let bls_to_execution_changes = operation_pool + .bls_to_execution_changes + .read() + .iter_fifo() + .map(|bls_to_execution_change| (**bls_to_execution_change).clone()) + .collect(); + + let capella_bls_change_broadcast_indices = operation_pool + .bls_to_execution_changes + .read() + .iter_pre_capella_indices() + .copied() + .collect(); + + PersistedOperationPool::V15(PersistedOperationPoolV15 { attestations, sync_contributions, attester_slashings, proposer_slashings, voluntary_exits, + bls_to_execution_changes, + capella_bls_change_broadcast_indices, }) } /// Reconstruct an `OperationPool`. - pub fn into_operation_pool(self) -> Result, OpPoolError> { + pub fn into_operation_pool(mut self) -> Result, OpPoolError> { let attester_slashings = RwLock::new(self.attester_slashings()?.iter().cloned().collect()); let proposer_slashings = RwLock::new( self.proposer_slashings()? 
@@ -127,21 +153,46 @@ impl PersistedOperationPool { ); let sync_contributions = RwLock::new(self.sync_contributions().iter().cloned().collect()); let attestations = match self { - PersistedOperationPool::V5(_) => return Err(OpPoolError::IncorrectOpPoolVariant), - PersistedOperationPool::V12(pool) => { + PersistedOperationPool::V5(_) | PersistedOperationPool::V12(_) => { + return Err(OpPoolError::IncorrectOpPoolVariant) + } + PersistedOperationPool::V14(_) | PersistedOperationPool::V15(_) => { let mut map = AttestationMap::default(); - for (att, attesting_indices) in pool.attestations { + for (att, attesting_indices) in self.attestations()?.clone() { map.insert(att, attesting_indices); } RwLock::new(map) } }; + let mut bls_to_execution_changes = BlsToExecutionChanges::default(); + if let Ok(persisted_changes) = self.bls_to_execution_changes_mut() { + let persisted_changes = mem::take(persisted_changes); + + let broadcast_indices = + if let Ok(indices) = self.capella_bls_change_broadcast_indices_mut() { + mem::take(indices).into_iter().collect() + } else { + HashSet::new() + }; + + for bls_to_execution_change in persisted_changes { + let received_pre_capella = if broadcast_indices + .contains(&bls_to_execution_change.as_inner().message.validator_index) + { + ReceivedPreCapella::Yes + } else { + ReceivedPreCapella::No + }; + bls_to_execution_changes.insert(bls_to_execution_change, received_pre_capella); + } + } let op_pool = OperationPool { attestations, sync_contributions, attester_slashings, proposer_slashings, voluntary_exits, + bls_to_execution_changes: RwLock::new(bls_to_execution_changes), reward_cache: Default::default(), _phantom: Default::default(), }; @@ -163,6 +214,48 @@ impl StoreItem for PersistedOperationPoolV5 { } } +impl StoreItem for PersistedOperationPoolV12 { + fn db_column() -> DBColumn { + DBColumn::OpPool + } + + fn as_store_bytes(&self) -> Vec { + self.as_ssz_bytes() + } + + fn from_store_bytes(bytes: &[u8]) -> Result { + 
PersistedOperationPoolV12::from_ssz_bytes(bytes).map_err(Into::into) + } +} + +impl StoreItem for PersistedOperationPoolV14 { + fn db_column() -> DBColumn { + DBColumn::OpPool + } + + fn as_store_bytes(&self) -> Vec { + self.as_ssz_bytes() + } + + fn from_store_bytes(bytes: &[u8]) -> Result { + PersistedOperationPoolV14::from_ssz_bytes(bytes).map_err(Into::into) + } +} + +impl StoreItem for PersistedOperationPoolV15 { + fn db_column() -> DBColumn { + DBColumn::OpPool + } + + fn as_store_bytes(&self) -> Vec { + self.as_ssz_bytes() + } + + fn from_store_bytes(bytes: &[u8]) -> Result { + PersistedOperationPoolV15::from_ssz_bytes(bytes).map_err(Into::into) + } +} + /// Deserialization for `PersistedOperationPool` defaults to `PersistedOperationPool::V12`. impl StoreItem for PersistedOperationPool { fn db_column() -> DBColumn { @@ -175,8 +268,8 @@ impl StoreItem for PersistedOperationPool { fn from_store_bytes(bytes: &[u8]) -> Result { // Default deserialization to the latest variant. - PersistedOperationPoolV12::from_ssz_bytes(bytes) - .map(Self::V12) + PersistedOperationPoolV15::from_ssz_bytes(bytes) + .map(Self::V15) .map_err(Into::into) } } diff --git a/beacon_node/src/cli.rs b/beacon_node/src/cli.rs index 38d81512e4b..25521ec2428 100644 --- a/beacon_node/src/cli.rs +++ b/beacon_node/src/cli.rs @@ -71,7 +71,16 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> { Arg::with_name("listen-address") .long("listen-address") .value_name("ADDRESS") - .help("The address lighthouse will listen for UDP and TCP connections.") + .help("The address lighthouse will listen for UDP and TCP connections. To listen \ + over IpV4 and IpV6 set this flag twice with the different values.\n\ + Examples:\n\ + - --listen-address '0.0.0.0' will listen over Ipv4.\n\ + - --listen-address '::' will listen over Ipv6.\n\ + - --listen-address '0.0.0.0' --listen-address '::' will listen over both \ + Ipv4 and Ipv6. The order of the given addresses is not relevant. 
However, \ + multiple Ipv4, or multiple Ipv6 addresses will not be accepted.") + .multiple(true) + .max_values(2) .default_value("0.0.0.0") .takes_value(true) ) @@ -79,10 +88,21 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> { Arg::with_name("port") .long("port") .value_name("PORT") - .help("The TCP/UDP port to listen on. The UDP port can be modified by the --discovery-port flag.") + .help("The TCP/UDP port to listen on. The UDP port can be modified by the \ + --discovery-port flag. If listening over both Ipv4 and Ipv6 the --port flag \ + will apply to the Ipv4 address and --port6 to the Ipv6 address.") .default_value("9000") .takes_value(true), ) + .arg( + Arg::with_name("port6") + .long("port6") + .value_name("PORT") + .help("The TCP/UDP port to listen on over IpV6 when listening over both Ipv4 and \ + Ipv6. Defaults to 9090 when required.") + .default_value("9090") + .takes_value(true), + ) .arg( Arg::with_name("discovery-port") .long("discovery-port") @@ -90,6 +110,15 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> { .help("The UDP port that discovery will listen on. Defaults to `port`") .takes_value(true), ) + .arg( + Arg::with_name("discovery-port6") + .long("discovery-port6") + .value_name("PORT") + .help("The UDP port that discovery will listen on over IpV6 if listening over \ + both Ipv4 and IpV6. Defaults to `port6`") + .hidden(true) // TODO: implement dual stack via two sockets in discv5. + .takes_value(true), + ) .arg( Arg::with_name("target-peers") .long("target-peers") @@ -130,27 +159,49 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> { Arg::with_name("enr-udp-port") .long("enr-udp-port") .value_name("PORT") - .help("The UDP port of the local ENR. Set this only if you are sure other nodes can connect to your local node on this port.") + .help("The UDP4 port of the local ENR. 
Set this only if you are sure other nodes \ + can connect to your local node on this port over IpV4.") + .takes_value(true), + ) + .arg( + Arg::with_name("enr-udp6-port") + .long("enr-udp6-port") + .value_name("PORT") + .help("The UDP6 port of the local ENR. Set this only if you are sure other nodes \ + can connect to your local node on this port over IpV6.") .takes_value(true), ) .arg( Arg::with_name("enr-tcp-port") .long("enr-tcp-port") .value_name("PORT") - .help("The TCP port of the local ENR. Set this only if you are sure other nodes can connect to your local node on this port.\ - The --port flag is used if this is not set.") + .help("The TCP4 port of the local ENR. Set this only if you are sure other nodes \ + can connect to your local node on this port over IpV4. The --port flag is \ + used if this is not set.") + .takes_value(true), + ) + .arg( + Arg::with_name("enr-tcp6-port") + .long("enr-tcp6-port") + .value_name("PORT") + .help("The TCP6 port of the local ENR. Set this only if you are sure other nodes \ + can connect to your local node on this port over IpV6. The --port6 flag is \ + used if this is not set.") .takes_value(true), ) .arg( Arg::with_name("enr-address") .long("enr-address") .value_name("ADDRESS") - .help("The IP address/ DNS address to broadcast to other peers on how to reach this node. \ - If a DNS address is provided, the enr-address is set to the IP address it resolves to and \ - does not auto-update based on PONG responses in discovery. \ - Set this only if you are sure other nodes can connect to your local node on this address. \ - Discovery will automatically find your external address, if possible.") + .help("The IP address/ DNS address to broadcast to other peers on how to reach \ + this node. If a DNS address is provided, the enr-address is set to the IP \ + address it resolves to and does not auto-update based on PONG responses in \ + discovery. 
Set this only if you are sure other nodes can connect to your \
+                    local node on this address. This will update the `ip4` or `ip6` ENR fields \
+                    accordingly. To update both, set this flag twice with the different values.")
                 .requires("enr-udp-port")
+                .multiple(true)
+                .max_values(2)
                 .takes_value(true),
         )
         .arg(
@@ -158,7 +209,8 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
             .short("e")
             .long("enr-match")
             .help("Sets the local ENR IP address and port to match those set for lighthouse. \
-                Specifically, the IP address will be the value of --listen-address and the UDP port will be --discovery-port.")
+                Specifically, the IP address will be the value of --listen-address and the \
+                UDP port will be --discovery-port.")
         )
         .arg(
             Arg::with_name("disable-enr-auto-update")
@@ -181,6 +233,14 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
                 .help("Disables the discv5 discovery protocol. The node will not search for new peers or participate in the discovery protocol.")
                 .takes_value(false),
         )
+        .arg(
+            Arg::with_name("disable-peer-scoring")
+                .long("disable-peer-scoring")
+                .help("Disables peer scoring in lighthouse. WARNING: This is a dev-only flag and is only meant to be used in local testing scenarios. \
+                    Using this flag on a real network may cause your node to become eclipsed and see a different view of the network.")
+                .takes_value(false)
+                .hidden(true),
+        )
         .arg(
             Arg::with_name("trusted-peers")
                 .long("trusted-peers")
@@ -194,6 +254,29 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
                 .help("Lighthouse by default does not discover private IP addresses. Set this flag to enable connection attempts to local addresses.")
                 .takes_value(false),
         )
+        .arg(
+            Arg::with_name("self-limiter")
+                .long("self-limiter")
+                .help(
+                    "Enables the outbound rate limiter (requests made by this node).\
+                    \
+                    Rate limit quotas per protocol can be set in the form of \
+                    <protocol_name>:<tokens>/<time_in_seconds>. To set quotas for multiple protocols, \
+                    separate them by ';'.
If the self rate limiter is enabled and a protocol is not \
+                    present in the configuration, the quotas used for the inbound rate limiter will be \
+                    used."
+                )
+                .min_values(0)
+                .hidden(true)
+        )
+        .arg(
+            Arg::with_name("disable-backfill-rate-limiting")
+                .long("disable-backfill-rate-limiting")
+                .help("Disable the backfill sync rate-limiting. This allows users to sync the entire chain as fast \
+                    as possible; however, it can result in resource contention which degrades staking performance. Stakers \
+                    should generally choose to avoid this flag since backfill sync is not required for staking.")
+                .takes_value(false),
+        )
         /* REST API related arguments */
         .arg(
             Arg::with_name("http")
@@ -303,6 +386,14 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
                 address of this server (e.g., http://localhost:5054).")
                 .takes_value(true),
         )
+        .arg(
+            Arg::with_name("shuffling-cache-size")
+                .long("shuffling-cache-size")
+                .help("Some HTTP API requests can be optimised by caching the shufflings at each epoch. \
+                    This flag allows the user to set the shuffling cache size in epochs. \
+                    Shufflings are dependent on validator count and setting this value to a large number can consume a large amount of memory.")
+                .takes_value(true)
+        )
         /*
          * Monitoring metrics
@@ -794,6 +885,28 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
                 allowed. Default: 2")
                 .conflicts_with("disable-proposer-reorgs")
         )
+        .arg(
+            Arg::with_name("proposer-reorg-cutoff")
+                .long("proposer-reorg-cutoff")
+                .value_name("MILLISECONDS")
+                .help("Maximum delay after the start of the slot at which to propose a reorging \
+                    block. Lower values can prevent failed reorgs by ensuring the block has \
+                    ample time to propagate and be processed by the network.
The default is \ + 1/12th of a slot (1 second on mainnet)") + .conflicts_with("disable-proposer-reorgs") + ) + .arg( + Arg::with_name("proposer-reorg-disallowed-offsets") + .long("proposer-reorg-disallowed-offsets") + .value_name("N1,N2,...") + .help("Comma-separated list of integer offsets which can be used to avoid \ + proposing reorging blocks at certain slots. An offset of N means that \ + reorging proposals will not be attempted at any slot such that \ + `slot % SLOTS_PER_EPOCH == N`. By default only re-orgs at offset 0 will be \ + avoided. Any offsets supplied with this flag will impose additional \ + restrictions.") + .conflicts_with("disable-proposer-reorgs") + ) .arg( Arg::with_name("prepare-payload-lookahead") .long("prepare-payload-lookahead") @@ -804,6 +917,15 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> { for ensuring the EL is given ample notice. Default: 1/3 of a slot.") .takes_value(true) ) + .arg( + Arg::with_name("always-prepare-payload") + .long("always-prepare-payload") + .help("Send payload attributes with every fork choice update. This is intended for \ + use by block builders, relays and developers. You should set a fee \ + recipient on this BN and also consider adjusting the \ + --prepare-payload-lookahead flag.") + .takes_value(false) + ) .arg( Arg::with_name("fork-choice-before-proposal-timeout") .long("fork-choice-before-proposal-timeout") @@ -878,12 +1000,20 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> { .default_value("0") .takes_value(true) ) + .arg( + Arg::with_name("builder-user-agent") + .long("builder-user-agent") + .value_name("STRING") + .help("The HTTP user agent to send alongside requests to the builder URL. 
The \ + default is Lighthouse's version string.") + .requires("builder") + .takes_value(true) + ) .arg( Arg::with_name("count-unrealized") .long("count-unrealized") .hidden(true) - .help("Enables an alternative, potentially more performant FFG \ - vote tracking method.") + .help("This flag is deprecated and has no effect.") .takes_value(true) .default_value("true") ) @@ -891,7 +1021,7 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> { Arg::with_name("count-unrealized-full") .long("count-unrealized-full") .hidden(true) - .help("Stricter version of `count-unrealized`.") + .help("This flag is deprecated and has no effect.") .takes_value(true) .default_value("false") ) @@ -933,4 +1063,13 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> { This is equivalent to --http and --validator-monitor-auto.") .takes_value(false) ) + .arg( + Arg::with_name("always-prefer-builder-payload") + .long("always-prefer-builder-payload") + .help("If set, the beacon node always uses the payload from the builder instead of the local payload.") + // The builder profit threshold flag is used to provide preference + // to local payloads, therefore it fundamentally conflicts with + // always using the builder. 
+ .conflicts_with("builder-profit-threshold") + ) } diff --git a/beacon_node/src/config.rs b/beacon_node/src/config.rs index 294568cca9f..8cc38a534bc 100644 --- a/beacon_node/src/config.rs +++ b/beacon_node/src/config.rs @@ -1,5 +1,5 @@ use beacon_chain::chain_config::{ - ReOrgThreshold, DEFAULT_PREPARE_PAYLOAD_LOOKAHEAD_FACTOR, + DisallowedReOrgOffsets, ReOrgThreshold, DEFAULT_PREPARE_PAYLOAD_LOOKAHEAD_FACTOR, DEFAULT_RE_ORG_MAX_EPOCHS_SINCE_FINALIZATION, DEFAULT_RE_ORG_THRESHOLD, }; use clap::ArgMatches; @@ -10,13 +10,13 @@ use environment::RuntimeContext; use execution_layer::DEFAULT_JWT_FILE; use genesis::Eth1Endpoint; use http_api::TlsConfig; +use lighthouse_network::ListenAddress; use lighthouse_network::{multiaddr::Protocol, Enr, Multiaddr, NetworkConfig, PeerIdSerialized}; use sensitive_url::SensitiveUrl; use slog::{info, warn, Logger}; use std::cmp; use std::cmp::max; use std::fmt::Debug; -use std::fmt::Write; use std::fs; use std::net::Ipv6Addr; use std::net::{IpAddr, Ipv4Addr, ToSocketAddrs}; @@ -24,7 +24,6 @@ use std::path::{Path, PathBuf}; use std::str::FromStr; use std::time::Duration; use types::{Checkpoint, Epoch, EthSpec, Hash256, PublicKeyBytes, GRAFFITI_BYTES_LEN}; -use unused_port::{unused_tcp_port, unused_udp_port}; /// Gets the fully-initialized global client. /// @@ -78,13 +77,7 @@ pub fn get_config( let data_dir_ref = client_config.data_dir().clone(); - set_network_config( - &mut client_config.network, - cli_args, - &data_dir_ref, - log, - false, - )?; + set_network_config(&mut client_config.network, cli_args, &data_dir_ref, log)?; /* * Staking flag @@ -155,6 +148,10 @@ pub fn get_config( client_config.http_api.allow_sync_stalled = true; } + if let Some(cache_size) = clap_utils::parse_optional(cli_args, "shuffling-cache-size")? 
{ + client_config.chain.shuffling_cache_size = cache_size; + } + /* * Prometheus metrics HTTP server */ @@ -332,6 +329,9 @@ pub fn get_config( let payload_builder = parse_only_one_value(endpoint, SensitiveUrl::parse, "--builder", log)?; el_config.builder_url = Some(payload_builder); + + el_config.builder_user_agent = + clap_utils::parse_optional(cli_args, "builder-user-agent")?; } // Set config values from parse values. @@ -404,13 +404,6 @@ pub fn get_config( * Discovery address is set to localhost by default. */ if cli_args.is_present("zero-ports") { - if client_config.network.enr_address == Some(IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0))) { - client_config.network.enr_address = None - } - client_config.network.libp2p_port = - unused_tcp_port().map_err(|e| format!("Failed to get port for libp2p: {}", e))?; - client_config.network.discovery_port = - unused_udp_port().map_err(|e| format!("Failed to get port for discovery: {}", e))?; client_config.http_api.listen_port = 0; client_config.http_metrics.listen_port = 0; } @@ -696,6 +689,23 @@ pub fn get_config( client_config.chain.re_org_max_epochs_since_finalization = clap_utils::parse_optional(cli_args, "proposer-reorg-epochs-since-finalization")? .unwrap_or(DEFAULT_RE_ORG_MAX_EPOCHS_SINCE_FINALIZATION); + client_config.chain.re_org_cutoff_millis = + clap_utils::parse_optional(cli_args, "proposer-reorg-cutoff")?; + + if let Some(disallowed_offsets_str) = + clap_utils::parse_optional::(cli_args, "proposer-reorg-disallowed-offsets")? + { + let disallowed_offsets = disallowed_offsets_str + .split(',') + .map(|s| { + s.parse() + .map_err(|e| format!("invalid disallowed-offsets: {e:?}")) + }) + .collect::, _>>()?; + client_config.chain.re_org_disallowed_offsets = + DisallowedReOrgOffsets::new::(disallowed_offsets) + .map_err(|e| format!("invalid disallowed-offsets: {e:?}"))?; + } } // Note: This overrides any previous flags that enable this option. 
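The `proposer-reorg-disallowed-offsets` handling above splits the comma-separated flag value, parses each entry, and validates it before building `DisallowedReOrgOffsets`. The same parse-then-validate pattern can be sketched standalone (the `parse_offsets` helper and its error strings are illustrative, not Lighthouse's actual API):

```rust
/// Parse a comma-separated list of slot offsets (e.g. "0,2,5") and reject
/// any offset that is out of range for the epoch length.
/// Hypothetical helper mirroring the flag parsing above; not Lighthouse's API.
fn parse_offsets(input: &str, slots_per_epoch: u64) -> Result<Vec<u64>, String> {
    input
        .split(',')
        .map(|s| {
            // Parse one entry, converting the integer parse error into a message.
            let offset: u64 = s
                .trim()
                .parse()
                .map_err(|e| format!("invalid disallowed-offsets: {e:?}"))?;
            // Validate the entry: an offset must be a valid `slot % SLOTS_PER_EPOCH` value.
            if offset >= slots_per_epoch {
                return Err(format!(
                    "invalid disallowed-offsets: {offset} >= {slots_per_epoch}"
                ));
            }
            Ok(offset)
        })
        .collect()
}
```

Collecting an iterator of `Result`s into `Result<Vec<_>, _>` short-circuits on the first error, which is what lets the flag parser report a single bad entry instead of a partial list.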
@@ -711,16 +721,29 @@ pub fn get_config( / DEFAULT_PREPARE_PAYLOAD_LOOKAHEAD_FACTOR }); + client_config.chain.always_prepare_payload = cli_args.is_present("always-prepare-payload"); + if let Some(timeout) = clap_utils::parse_optional(cli_args, "fork-choice-before-proposal-timeout")? { client_config.chain.fork_choice_before_proposal_timeout_ms = timeout; } - client_config.chain.count_unrealized = - clap_utils::parse_required(cli_args, "count-unrealized")?; - client_config.chain.count_unrealized_full = - clap_utils::parse_required::(cli_args, "count-unrealized-full")?.into(); + if !clap_utils::parse_required::(cli_args, "count-unrealized")? { + warn!( + log, + "The flag --count-unrealized is deprecated and will be removed"; + "info" => "any use of the flag will have no effect" + ); + } + + if clap_utils::parse_required::(cli_args, "count-unrealized-full")? { + warn!( + log, + "The flag --count-unrealized-full is deprecated and will be removed"; + "info" => "setting it to `true` has no effect" + ); + } client_config.chain.always_reset_payload_statuses = cli_args.is_present("reset-payload-statuses"); @@ -751,16 +774,189 @@ pub fn get_config( client_config.chain.optimistic_finalized_sync = !cli_args.is_present("disable-optimistic-finalized-sync"); + // Payload selection configs + if cli_args.is_present("always-prefer-builder-payload") { + client_config.always_prefer_builder_payload = true; + } + + // Backfill sync rate-limiting + client_config.chain.enable_backfill_rate_limiting = + !cli_args.is_present("disable-backfill-rate-limiting"); + Ok(client_config) } -/// Sets the network config from the command line arguments +/// Gets the listening_addresses for lighthouse based on the cli options. 
+pub fn parse_listening_addresses( + cli_args: &ArgMatches, + log: &Logger, +) -> Result { + let listen_addresses_str = cli_args + .values_of("listen-address") + .expect("--listen_addresses has a default value"); + + let use_zero_ports = cli_args.is_present("zero-ports"); + + // parse the possible ips + let mut maybe_ipv4 = None; + let mut maybe_ipv6 = None; + for addr_str in listen_addresses_str { + let addr = addr_str.parse::().map_err(|parse_error| { + format!("Failed to parse listen-address ({addr_str}) as an Ip address: {parse_error}") + })?; + + match addr { + IpAddr::V4(v4_addr) => match &maybe_ipv4 { + Some(first_ipv4_addr) => { + return Err(format!( + "When setting the --listen-address option twice, use an IpV4 address and an Ipv6 address. \ + Got two IpV4 addresses {first_ipv4_addr} and {v4_addr}" + )); + } + None => maybe_ipv4 = Some(v4_addr), + }, + IpAddr::V6(v6_addr) => match &maybe_ipv6 { + Some(first_ipv6_addr) => { + return Err(format!( + "When setting the --listen-address option twice, use an IpV4 address and an Ipv6 address. \ + Got two IpV6 addresses {first_ipv6_addr} and {v6_addr}" + )); + } + None => maybe_ipv6 = Some(v6_addr), + }, + } + } + + // parse the possible tcp ports + let port = cli_args + .value_of("port") + .expect("--port has a default value") + .parse::() + .map_err(|parse_error| format!("Failed to parse --port as an integer: {parse_error}"))?; + let port6 = cli_args + .value_of("port6") + .map(str::parse::) + .transpose() + .map_err(|parse_error| format!("Failed to parse --port6 as an integer: {parse_error}"))? 
+ .unwrap_or(9090); + + // parse the possible udp ports + let maybe_udp_port = cli_args + .value_of("discovery-port") + .map(str::parse::<u16>) + .transpose() + .map_err(|parse_error| { + format!("Failed to parse --discovery-port as an integer: {parse_error}") + })?; + let maybe_udp6_port = cli_args + .value_of("discovery-port6") + .map(str::parse::<u16>) + .transpose() + .map_err(|parse_error| { + format!("Failed to parse --discovery-port6 as an integer: {parse_error}") + })?; + + // Now put everything together + let listening_addresses = match (maybe_ipv4, maybe_ipv6) { + (None, None) => { + // This should never happen unless clap is broken + return Err("No listening addresses provided".into()); + } + (None, Some(ipv6)) => { + // A single ipv6 address was provided. Set the ports + + if cli_args.is_present("port6") { + warn!(log, "When listening only over IpV6, use the --port flag. The value of --port6 will be ignored.") + } + // use zero ports if required. If not, use the given port. + let tcp_port = use_zero_ports + .then(unused_port::unused_tcp6_port) + .transpose()? + .unwrap_or(port); + + if maybe_udp6_port.is_some() { + warn!(log, "When listening only over IpV6, use the --discovery-port flag. The value of --discovery-port6 will be ignored.") + } + // use zero ports if required. If not, use the specific udp port. If none given, use + // the tcp port. + let udp_port = use_zero_ports + .then(unused_port::unused_udp6_port) + .transpose()? + .or(maybe_udp_port) + .unwrap_or(port); + + ListenAddress::V6(lighthouse_network::ListenAddr { + addr: ipv6, + udp_port, + tcp_port, + }) + } + (Some(ipv4), None) => { + // A single ipv4 address was provided. Set the ports + + // use zero ports if required. If not, use the given port. + let tcp_port = use_zero_ports + .then(unused_port::unused_tcp4_port) + .transpose()? + .unwrap_or(port); + // use zero ports if required. If not, use the specific udp port. If none given, use + // the tcp port.
+ let udp_port = use_zero_ports + .then(unused_port::unused_udp4_port) + .transpose()? + .or(maybe_udp_port) + .unwrap_or(port); + ListenAddress::V4(lighthouse_network::ListenAddr { + addr: ipv4, + udp_port, + tcp_port, + }) + } + (Some(ipv4), Some(ipv6)) => { + let ipv4_tcp_port = use_zero_ports + .then(unused_port::unused_tcp4_port) + .transpose()? + .unwrap_or(port); + let ipv4_udp_port = use_zero_ports + .then(unused_port::unused_udp4_port) + .transpose()? + .or(maybe_udp_port) + .unwrap_or(ipv4_tcp_port); + + // Defaults to 9090 when required + let ipv6_tcp_port = use_zero_ports + .then(unused_port::unused_tcp6_port) + .transpose()? + .unwrap_or(port6); + let ipv6_udp_port = use_zero_ports + .then(unused_port::unused_udp6_port) + .transpose()? + .or(maybe_udp6_port) + .unwrap_or(ipv6_tcp_port); + ListenAddress::DualStack( + lighthouse_network::ListenAddr { + addr: ipv4, + udp_port: ipv4_udp_port, + tcp_port: ipv4_tcp_port, + }, + lighthouse_network::ListenAddr { + addr: ipv6, + udp_port: ipv6_udp_port, + tcp_port: ipv6_tcp_port, + }, + ) + } + }; + + Ok(listening_addresses) +} + +/// Sets the network config from the command line arguments. pub fn set_network_config( config: &mut NetworkConfig, cli_args: &ArgMatches, data_dir: &Path, log: &Logger, - use_listening_port_as_enr_port_by_default: bool, ) -> Result<(), String> { // If a network dir has been specified, override the `datadir` definition. 
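The repeated `then(..).transpose()?.or(..).unwrap_or(..)` chains above all encode one precedence rule: `--zero-ports` beats an explicit `--discovery-port`, which beats falling back to the TCP `--port`. A minimal std-only sketch of that chain (the names and signature here are ours, not Lighthouse's; the fallible `unused_port` helper is replaced by a plain `zero_port` value, so the fallible `.then(..).transpose()?` step flattens to `then_some`):

```rust
/// Sketch of the UDP port fallback chain used above. `zero_port` stands in
/// for whatever an `unused_port::unused_udp4_port()`-style helper would pick;
/// in the real code that call can fail, hence `.then(..).transpose()?`.
fn resolve_udp_port(
    use_zero_ports: bool,
    zero_port: u16,            // hypothetical free port picked by the helper
    explicit_udp: Option<u16>, // --discovery-port, if given
    tcp_port: u16,             // --port, the final fallback
) -> u16 {
    use_zero_ports
        .then_some(zero_port) // --zero-ports wins outright...
        .or(explicit_udp)     // ...then any explicit UDP port...
        .unwrap_or(tcp_port)  // ...then reuse the TCP port.
}

fn main() {
    assert_eq!(resolve_udp_port(true, 50000, Some(9001), 9000), 50000);
    assert_eq!(resolve_udp_port(false, 50000, Some(9001), 9000), 9001);
    assert_eq!(resolve_udp_port(false, 50000, None, 9000), 9000);
}
```

The same shape is reused for IPv6 with `port6` as the fallback, which is why a change to the precedence only has to be made in one combinator chain per address family.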
if let Some(dir) = cli_args.value_of("network-dir") { @@ -781,12 +977,7 @@ pub fn set_network_config( config.shutdown_after_sync = true; } - if let Some(listen_address_str) = cli_args.value_of("listen-address") { - let listen_address = listen_address_str - .parse() - .map_err(|_| format!("Invalid listen address: {:?}", listen_address_str))?; - config.listen_address = listen_address; - } + config.set_listening_addr(parse_listening_addresses(cli_args, log)?); if let Some(target_peers_str) = cli_args.value_of("target-peers") { config.target_peers = target_peers_str @@ -794,21 +985,6 @@ pub fn set_network_config( .map_err(|_| format!("Invalid number of target peers: {}", target_peers_str))?; } - if let Some(port_str) = cli_args.value_of("port") { - let port = port_str - .parse::() - .map_err(|_| format!("Invalid port: {}", port_str))?; - config.libp2p_port = port; - config.discovery_port = port; - } - - if let Some(port_str) = cli_args.value_of("discovery-port") { - let port = port_str - .parse::() - .map_err(|_| format!("Invalid port: {}", port_str))?; - config.discovery_port = port; - } - if let Some(value) = cli_args.value_of("network-load") { let network_load = value .parse::() @@ -852,6 +1028,10 @@ pub fn set_network_config( .collect::, _>>()?; } + if cli_args.is_present("disable-peer-scoring") { + config.disable_peer_scoring = true; + } + if let Some(trusted_peers_str) = cli_args.value_of("trusted-peers") { config.trusted_peers = trusted_peers_str .split(',') @@ -864,7 +1044,7 @@ pub fn set_network_config( } if let Some(enr_udp_port_str) = cli_args.value_of("enr-udp-port") { - config.enr_udp_port = Some( + config.enr_udp4_port = Some( enr_udp_port_str .parse::() .map_err(|_| format!("Invalid discovery port: {}", enr_udp_port_str))?, @@ -872,7 +1052,23 @@ pub fn set_network_config( } if let Some(enr_tcp_port_str) = cli_args.value_of("enr-tcp-port") { - config.enr_tcp_port = Some( + config.enr_tcp4_port = Some( + enr_tcp_port_str + .parse::() + .map_err(|_| 
format!("Invalid ENR TCP port: {}", enr_tcp_port_str))?, + ); + } + + if let Some(enr_udp_port_str) = cli_args.value_of("enr-udp6-port") { + config.enr_udp6_port = Some( + enr_udp_port_str + .parse::() + .map_err(|_| format!("Invalid discovery port: {}", enr_udp_port_str))?, + ); + } + + if let Some(enr_tcp_port_str) = cli_args.value_of("enr-tcp6-port") { + config.enr_tcp6_port = Some( enr_tcp_port_str .parse::() .map_err(|_| format!("Invalid ENR TCP port: {}", enr_tcp_port_str))?, @@ -880,58 +1076,106 @@ pub fn set_network_config( } if cli_args.is_present("enr-match") { + // Match the Ip and UDP port in the enr. + // set the enr address to localhost if the address is unspecified - if config.listen_address == IpAddr::V4(Ipv4Addr::UNSPECIFIED) { - config.enr_address = Some(IpAddr::V4(Ipv4Addr::LOCALHOST)); - } else if config.listen_address == IpAddr::V6(Ipv6Addr::UNSPECIFIED) { - config.enr_address = Some(IpAddr::V6(Ipv6Addr::LOCALHOST)); - } else { - config.enr_address = Some(config.listen_address); + if let Some(ipv4_addr) = config.listen_addrs().v4().cloned() { + let ipv4_enr_addr = if ipv4_addr.addr == Ipv4Addr::UNSPECIFIED { + Ipv4Addr::LOCALHOST + } else { + ipv4_addr.addr + }; + config.enr_address.0 = Some(ipv4_enr_addr); + config.enr_udp4_port = Some(ipv4_addr.udp_port); + } + + if let Some(ipv6_addr) = config.listen_addrs().v6().cloned() { + let ipv6_enr_addr = if ipv6_addr.addr == Ipv6Addr::UNSPECIFIED { + Ipv6Addr::LOCALHOST + } else { + ipv6_addr.addr + }; + config.enr_address.1 = Some(ipv6_enr_addr); + config.enr_udp6_port = Some(ipv6_addr.udp_port); } - config.enr_udp_port = Some(config.discovery_port); - } - - if let Some(enr_address) = cli_args.value_of("enr-address") { - let resolved_addr = match enr_address.parse::() { - Ok(addr) => addr, // // Input is an IpAddr - Err(_) => { - let mut addr = enr_address.to_string(); - // Appending enr-port to the dns hostname to appease `to_socket_addrs()` parsing. 
- // Since enr-update is disabled with a dns address, not setting the enr-udp-port - // will make the node undiscoverable. - if let Some(enr_udp_port) = - config - .enr_udp_port - .or(if use_listening_port_as_enr_port_by_default { - Some(config.discovery_port) - } else { - None - }) - { - write!(addr, ":{}", enr_udp_port) - .map_err(|e| format!("Failed to write enr address {}", e))?; - } else { - return Err( - "enr-udp-port must be set for node to be discoverable with dns address" - .into(), - ); + } + + if let Some(enr_addresses) = cli_args.values_of("enr-address") { + let mut enr_ip4 = None; + let mut enr_ip6 = None; + let mut resolved_enr_ip4 = None; + let mut resolved_enr_ip6 = None; + + for addr in enr_addresses { + match addr.parse::() { + Ok(IpAddr::V4(v4_addr)) => { + if let Some(used) = enr_ip4.as_ref() { + warn!(log, "More than one Ipv4 ENR address provided"; "used" => %used, "ignored" => %v4_addr) + } else { + enr_ip4 = Some(v4_addr) + } + } + Ok(IpAddr::V6(v6_addr)) => { + if let Some(used) = enr_ip6.as_ref() { + warn!(log, "More than one Ipv6 ENR address provided"; "used" => %used, "ignored" => %v6_addr) + } else { + enr_ip6 = Some(v6_addr) + } + } + Err(_) => { + // Try to resolve the address + + // NOTE: From checking the `to_socket_addrs` code I don't think the port + // actually matters. Just use the udp port. + + let port = match config.listen_addrs() { + ListenAddress::V4(v4_addr) => v4_addr.udp_port, + ListenAddress::V6(v6_addr) => v6_addr.udp_port, + ListenAddress::DualStack(v4_addr, _v6_addr) => { + // NOTE: slight preference for ipv4 that I don't think is of importance. + v4_addr.udp_port + } + }; + + let addr_str = format!("{addr}:{port}"); + match addr_str.to_socket_addrs() { + Err(_e) => { + return Err(format!("Failed to parse or resolve address {addr}.")) + } + Ok(resolved_addresses) => { + for socket_addr in resolved_addresses { + // Use the first ipv4 and first ipv6 addresses present. 
+ + // NOTE: this means that if two dns addresses are provided, we + // might end up using the ipv4 and ipv6 resolved addresses of just + // the first. + match socket_addr.ip() { + IpAddr::V4(v4_addr) => { + if resolved_enr_ip4.is_none() { + resolved_enr_ip4 = Some(v4_addr) + } + } + IpAddr::V6(v6_addr) => { + if resolved_enr_ip6.is_none() { + resolved_enr_ip6 = Some(v6_addr) + } + } + } + } + } + } } - // `to_socket_addr()` does the dns resolution - // Note: `to_socket_addrs()` is a blocking call - let resolved_addr = if let Ok(mut resolved_addrs) = addr.to_socket_addrs() { - // Pick the first ip from the list of resolved addresses - resolved_addrs - .next() - .map(|a| a.ip()) - .ok_or("Resolved dns addr contains no entries")? - } else { - return Err(format!("Failed to parse enr-address: {}", enr_address)); - }; - config.discv5_config.enr_update = false; - resolved_addr } - }; - config.enr_address = Some(resolved_addr); + } + + // The ENR addresses given as ips should take preference over any resolved address + let used_host_resolution = resolved_enr_ip4.is_some() || resolved_enr_ip6.is_some(); + let ip4 = enr_ip4.or(resolved_enr_ip4); + let ip6 = enr_ip6.or(resolved_enr_ip6); + config.enr_address = (ip4, ip6); + if used_host_resolution { + config.discv5_config.enr_update = false; + } } if cli_args.is_present("disable-enr-auto-update") { @@ -967,6 +1211,13 @@ pub fn set_network_config( // Light client server config. config.enable_light_client_server = cli_args.is_present("light-client-server"); + // This flag can be used both with or without a value. Try to parse it first with a value, if + // no value is defined but the flag is present, use the default params. 
+ config.outbound_rate_limiter_config = clap_utils::parse_optional(cli_args, "self-limiter")?; + if cli_args.is_present("self-limiter") && config.outbound_rate_limiter_config.is_none() { + config.outbound_rate_limiter_config = Some(Default::default()); + } + Ok(()) } diff --git a/beacon_node/store/Cargo.toml b/beacon_node/store/Cargo.toml index 47aef580e13..7df97105cb5 100644 --- a/beacon_node/store/Cargo.toml +++ b/beacon_node/store/Cargo.toml @@ -14,7 +14,7 @@ leveldb = { version = "0.8.6", default-features = false } parking_lot = "0.12.0" itertools = "0.10.0" eth2_ssz = { version = "0.4.1", path = "../../consensus/ssz" } -eth2_ssz_derive = { version = "0.3.0", path = "../../consensus/ssz_derive" } +eth2_ssz_derive = { version = "0.3.1", path = "../../consensus/ssz_derive" } types = { path = "../../consensus/types" } state_processing = { path = "../../consensus/state_processing" } slog = "2.5.2" diff --git a/beacon_node/store/src/chunked_vector.rs b/beacon_node/store/src/chunked_vector.rs index 8c64d4bcc05..73edfbb0744 100644 --- a/beacon_node/store/src/chunked_vector.rs +++ b/beacon_node/store/src/chunked_vector.rs @@ -18,6 +18,7 @@ use self::UpdatePattern::*; use crate::*; use ssz::{Decode, Encode}; use typenum::Unsigned; +use types::historical_summary::HistoricalSummary; /// Description of how a `BeaconState` field is updated during state processing. /// @@ -26,7 +27,18 @@ use typenum::Unsigned; #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum UpdatePattern { /// The value is updated once per `n` slots. - OncePerNSlots { n: u64 }, + OncePerNSlots { + n: u64, + /// The slot at which the field begins to accumulate values. + /// + /// The field should not be read or written until `activation_slot` is reached, and the + /// activation slot should act as an offset when converting slots to vector indices. + activation_slot: Option<Slot>, + /// The slot at which the field ceases to accumulate values. + /// + /// If this is `None` then the field is continually updated.
+ deactivation_slot: Option<Slot>, + }, /// The value is updated once per epoch, for the epoch `current_epoch - lag`. OncePerEpoch { lag: u64 }, } @@ -98,12 +110,30 @@ pub trait Field<E: EthSpec>: Copy { fn start_and_end_vindex(current_slot: Slot, spec: &ChainSpec) -> (usize, usize) { // We take advantage of saturating subtraction on slots and epochs match Self::update_pattern(spec) { - OncePerNSlots { n } => { + OncePerNSlots { + n, + activation_slot, + deactivation_slot, + } => { // Per-slot changes exclude the index for the current slot, because // it won't be set until the slot completes (think of `state_roots`, `block_roots`). // This also works for the `historical_roots` because at the `n`th slot, the 0th // entry of the list is created, and before that the list is empty. + // + // To account for the switch from historical roots to historical summaries at + // Capella we also modify the current slot by the activation and deactivation slots. + // The activation slot acts as an offset (subtraction) while the deactivation slot + // acts as a clamp (min). + let slot_with_clamp = deactivation_slot.map_or(current_slot, |deactivation_slot| { + std::cmp::min(current_slot, deactivation_slot) + }); + let slot_with_clamp_and_offset = if let Some(activation_slot) = activation_slot { + slot_with_clamp - activation_slot + } else { + // Return (0, 0) to indicate that the field should not be read/written.
+ return (0, 0); + }; + let end_vindex = slot_with_clamp_and_offset / n; let start_vindex = end_vindex - Self::Length::to_u64(); (start_vindex.as_usize(), end_vindex.as_usize()) } @@ -295,7 +325,11 @@ field!( Hash256, T::SlotsPerHistoricalRoot, DBColumn::BeaconBlockRoots, - |_| OncePerNSlots { n: 1 }, + |_| OncePerNSlots { + n: 1, + activation_slot: Some(Slot::new(0)), + deactivation_slot: None + }, |state: &BeaconState<_>, index, _| safe_modulo_index(state.block_roots(), index) ); @@ -305,7 +339,11 @@ field!( Hash256, T::SlotsPerHistoricalRoot, DBColumn::BeaconStateRoots, - |_| OncePerNSlots { n: 1 }, + |_| OncePerNSlots { + n: 1, + activation_slot: Some(Slot::new(0)), + deactivation_slot: None, + }, |state: &BeaconState<_>, index, _| safe_modulo_index(state.state_roots(), index) ); @@ -315,8 +353,12 @@ field!( Hash256, T::HistoricalRootsLimit, DBColumn::BeaconHistoricalRoots, - |_| OncePerNSlots { - n: T::SlotsPerHistoricalRoot::to_u64() + |spec: &ChainSpec| OncePerNSlots { + n: T::SlotsPerHistoricalRoot::to_u64(), + activation_slot: Some(Slot::new(0)), + deactivation_slot: spec + .capella_fork_epoch + .map(|fork_epoch| fork_epoch.start_slot(T::slots_per_epoch())), }, |state: &BeaconState<_>, index, _| safe_modulo_index(state.historical_roots(), index) ); @@ -331,6 +373,27 @@ field!( |state: &BeaconState<_>, index, _| safe_modulo_index(state.randao_mixes(), index) ); +field!( + HistoricalSummaries, + VariableLengthField, + HistoricalSummary, + T::HistoricalRootsLimit, + DBColumn::BeaconHistoricalSummaries, + |spec: &ChainSpec| OncePerNSlots { + n: T::SlotsPerHistoricalRoot::to_u64(), + activation_slot: spec + .capella_fork_epoch + .map(|fork_epoch| fork_epoch.start_slot(T::slots_per_epoch())), + deactivation_slot: None, + }, + |state: &BeaconState<_>, index, _| safe_modulo_index( + state + .historical_summaries() + .map_err(|_| ChunkError::InvalidFork)?, + index + ) +); + pub fn store_updated_vector, E: EthSpec, S: KeyValueStore>( field: F, store: &S, @@ -679,6 
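The clamp-and-offset arithmetic described in the comments above is easy to check in isolation. A sketch over bare `u64` slots (the real code uses `Slot`, whose subtraction saturates, mirrored here with `saturating_sub`; the slot values in the asserts are made up, not mainnet constants):

```rust
/// Sketch of `start_and_end_vindex` for the `OncePerNSlots` case described
/// above: the deactivation slot clamps (min), the activation slot offsets
/// (subtract), and a field with no activation slot is never read or written.
fn start_and_end_vindex(
    current_slot: u64,
    n: u64,
    length: u64, // `Self::Length` in the real trait
    activation_slot: Option<u64>,
    deactivation_slot: Option<u64>,
) -> (u64, u64) {
    let clamped = deactivation_slot.map_or(current_slot, |d| current_slot.min(d));
    let Some(activation) = activation_slot else {
        return (0, 0); // field inactive: nothing to read or write
    };
    let end = clamped.saturating_sub(activation) / n; // offset, then chunk
    let start = end.saturating_sub(length);
    (start, end)
}

fn main() {
    // A historical_roots-style field: active from genesis, frozen at a
    // hypothetical Capella start slot of 16384, with n = 8192.
    assert_eq!(start_and_end_vindex(20_000, 8192, 100, Some(0), Some(16_384)), (0, 2));
    // A historical_summaries-style field: only starts accumulating at 16384.
    assert_eq!(start_and_end_vindex(20_000, 8192, 100, Some(16_384), None), (0, 0));
    // No activation slot: the field is never read or written.
    assert_eq!(start_and_end_vindex(20_000, 8192, 100, None, None), (0, 0));
}
```

Note how the frozen field stops growing (its end index stays at 2 no matter how far `current_slot` advances), while the newly activated field counts from zero at its activation slot.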
+742,7 @@ pub enum ChunkError { end_vindex: usize, length: usize, }, + InvalidFork, } #[cfg(test)] diff --git a/beacon_node/store/src/errors.rs b/beacon_node/store/src/errors.rs index 30ee66074f8..fcc40706b30 100644 --- a/beacon_node/store/src/errors.rs +++ b/beacon_node/store/src/errors.rs @@ -3,7 +3,7 @@ use crate::config::StoreConfigError; use crate::hot_cold_store::HotColdDBError; use ssz::DecodeError; use state_processing::BlockReplayError; -use types::{BeaconStateError, Hash256, Slot}; +use types::{BeaconStateError, Hash256, InconsistentFork, Slot}; pub type Result = std::result::Result; @@ -42,9 +42,9 @@ pub enum Error { }, BlockReplayError(BlockReplayError), AddPayloadLogicError, - ResyncRequiredForExecutionPayloadSeparation, SlotClockUnavailableForMigration, - V9MigrationFailure(Hash256), + UnableToDowngrade, + InconsistentFork(InconsistentFork), } pub trait HandleUnavailable { @@ -103,6 +103,12 @@ impl From for Error { } } +impl From for Error { + fn from(e: InconsistentFork) -> Error { + Error::InconsistentFork(e) + } +} + #[derive(Debug)] pub struct DBError { pub message: String, diff --git a/beacon_node/store/src/hot_cold_store.rs b/beacon_node/store/src/hot_cold_store.rs index 4f63f4e7f97..02608f9a0bd 100644 --- a/beacon_node/store/src/hot_cold_store.rs +++ b/beacon_node/store/src/hot_cold_store.rs @@ -1,5 +1,5 @@ use crate::chunked_vector::{ - store_updated_vector, BlockRoots, HistoricalRoots, RandaoMixes, StateRoots, + store_updated_vector, BlockRoots, HistoricalRoots, HistoricalSummaries, RandaoMixes, StateRoots, }; use crate::config::{ OnDiskStoreConfig, StoreConfig, DEFAULT_SLOTS_PER_RESTORE_POINT, @@ -354,7 +354,8 @@ impl, Cold: ItemStore> HotColdDB } else if !self.config.prune_payloads { // If payload pruning is disabled there's a chance we may have the payload of // this finalized block. Attempt to load it but don't error in case it's missing. - if let Some(payload) = self.get_execution_payload(block_root)? 
{ + let fork_name = blinded_block.fork_name(&self.spec)?; + if let Some(payload) = self.get_execution_payload(block_root, fork_name)? { DatabaseBlock::Full( blinded_block .try_into_full_block(Some(payload)) @@ -393,8 +394,9 @@ impl, Cold: ItemStore> HotColdDB blinded_block: SignedBeaconBlock>, ) -> Result, Error> { if blinded_block.message().execution_payload().is_ok() { + let fork_name = blinded_block.fork_name(&self.spec)?; let execution_payload = self - .get_execution_payload(block_root)? + .get_execution_payload(block_root, fork_name)? .ok_or(HotColdDBError::MissingExecutionPayload(*block_root))?; blinded_block.try_into_full_block(Some(execution_payload)) } else { @@ -413,7 +415,7 @@ impl, Cold: ItemStore> HotColdDB } /// Fetch a block from the store, ignoring which fork variant it *should* be for. - pub fn get_block_any_variant>( + pub fn get_block_any_variant>( &self, block_root: &Hash256, ) -> Result>, Error> { @@ -424,7 +426,7 @@ impl, Cold: ItemStore> HotColdDB /// /// This is useful for e.g. ignoring the slot-indicated fork to forcefully load a block as if it /// were for a different fork. - pub fn get_block_with>( + pub fn get_block_with>( &self, block_root: &Hash256, decoder: impl FnOnce(&[u8]) -> Result, ssz::DecodeError>, @@ -437,9 +439,26 @@ impl, Cold: ItemStore> HotColdDB } /// Load the execution payload for a block from disk. + /// This method deserializes with the proper fork. pub fn get_execution_payload( &self, block_root: &Hash256, + fork_name: ForkName, + ) -> Result>, Error> { + let column = ExecutionPayload::::db_column().into(); + let key = block_root.as_bytes(); + + match self.hot_db.get_bytes(column, key)? { + Some(bytes) => Ok(Some(ExecutionPayload::from_ssz_bytes(&bytes, fork_name)?)), + None => Ok(None), + } + } + + /// Load the execution payload for a block from disk. + /// DANGEROUS: this method just guesses the fork. 
+ pub fn get_execution_payload_dangerous_fork_agnostic( + &self, + block_root: &Hash256, ) -> Result>, Error> { self.get_item(block_root) } @@ -727,6 +746,10 @@ impl, Cold: ItemStore> HotColdDB let key = get_key_for_col(DBColumn::ExecPayload.into(), block_root.as_bytes()); key_value_batch.push(KeyValueStoreOp::DeleteKey(key)); } + + StoreOp::KeyValueOp(kv_op) => { + key_value_batch.push(kv_op); + } } } Ok(key_value_batch) @@ -758,6 +781,8 @@ impl, Cold: ItemStore> HotColdDB StoreOp::DeleteState(_, _) => (), StoreOp::DeleteExecutionPayload(_) => (), + + StoreOp::KeyValueOp(_) => (), } } @@ -881,6 +906,7 @@ impl, Cold: ItemStore> HotColdDB store_updated_vector(StateRoots, db, state, &self.spec, ops)?; store_updated_vector(HistoricalRoots, db, state, &self.spec, ops)?; store_updated_vector(RandaoMixes, db, state, &self.spec, ops)?; + store_updated_vector(HistoricalSummaries, db, state, &self.spec, ops)?; // 3. Store restore point. let restore_point_index = state.slot().as_u64() / self.config.slots_per_restore_point; @@ -935,6 +961,7 @@ impl, Cold: ItemStore> HotColdDB partial_state.load_state_roots(&self.cold_db, &self.spec)?; partial_state.load_historical_roots(&self.cold_db, &self.spec)?; partial_state.load_randao_mixes(&self.cold_db, &self.spec)?; + partial_state.load_historical_summaries(&self.cold_db, &self.spec)?; partial_state.try_into() } @@ -1101,6 +1128,11 @@ impl, Cold: ItemStore> HotColdDB &self.spec } + /// Get a reference to the `Logger` used by the database. + pub fn logger(&self) -> &Logger { + &self.log + } + /// Fetch a copy of the current split slot from memory. pub fn get_split_slot(&self) -> Slot { self.split.read_recursive().slot @@ -1709,7 +1741,7 @@ fn no_state_root_iter() -> Option { + impl StoreItem for $ty_name { + fn db_column() -> DBColumn { + DBColumn::ExecPayload + } + + fn as_store_bytes(&self) -> Vec { + self.as_ssz_bytes() + } + + fn from_store_bytes(bytes: &[u8]) -> Result { + Ok(Self::from_ssz_bytes(bytes)?) 
+ } + } + }; +} +impl_store_item!(ExecutionPayloadMerge); +impl_store_item!(ExecutionPayloadCapella); + +/// This fork-agnostic implementation should be only used for writing. +/// +/// It is very inefficient at reading, and decoding the desired fork-specific variant is recommended +/// instead. impl StoreItem for ExecutionPayload { fn db_column() -> DBColumn { DBColumn::ExecPayload @@ -12,6 +36,9 @@ impl StoreItem for ExecutionPayload { } fn from_store_bytes(bytes: &[u8]) -> Result { - Ok(Self::from_ssz_bytes(bytes)?) + ExecutionPayloadCapella::from_ssz_bytes(bytes) + .map(Self::Capella) + .or_else(|_| ExecutionPayloadMerge::from_ssz_bytes(bytes).map(Self::Merge)) + .map_err(Into::into) } } diff --git a/beacon_node/store/src/lib.rs b/beacon_node/store/src/lib.rs index 75aeca058b5..ee01fa1ae15 100644 --- a/beacon_node/store/src/lib.rs +++ b/beacon_node/store/src/lib.rs @@ -161,6 +161,7 @@ pub enum StoreOp<'a, E: EthSpec> { DeleteBlock(Hash256), DeleteState(Hash256, Option), DeleteExecutionPayload(Hash256), + KeyValueOp(KeyValueStoreOp), } /// A unique column identifier. @@ -211,6 +212,8 @@ pub enum DBColumn { /// For Optimistically Imported Merge Transition Blocks #[strum(serialize = "otb")] OptimisticTransitionBlock, + #[strum(serialize = "bhs")] + BeaconHistoricalSummaries, } /// A block from the database, which might have an execution payload or not. diff --git a/beacon_node/store/src/metadata.rs b/beacon_node/store/src/metadata.rs index 5cb3f122008..8e9b3599b14 100644 --- a/beacon_node/store/src/metadata.rs +++ b/beacon_node/store/src/metadata.rs @@ -4,7 +4,7 @@ use ssz::{Decode, Encode}; use ssz_derive::{Decode, Encode}; use types::{Checkpoint, Hash256, Slot}; -pub const CURRENT_SCHEMA_VERSION: SchemaVersion = SchemaVersion(13); +pub const CURRENT_SCHEMA_VERSION: SchemaVersion = SchemaVersion(16); // All the keys that get stored under the `BeaconMeta` column. 
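The fork-agnostic `from_store_bytes` above decodes by trial: try the newest variant first and fall back to the older one. A toy sketch of that shape (the tagged byte strings here are an invented stand-in for SSZ, and the decoder names are ours):

```rust
/// Toy decode-by-trial: attempt the newest fork's codec first, fall back to
/// the older one, and only error if neither accepts the bytes. The
/// "encodings" are invented single-byte tag prefixes, not real SSZ.
#[derive(Debug, PartialEq)]
enum Payload {
    Capella(u8),
    Merge(u8),
}

fn decode_capella(bytes: &[u8]) -> Result<Payload, String> {
    match bytes {
        [b'c', v] => Ok(Payload::Capella(*v)),
        _ => Err("not a Capella payload".into()),
    }
}

fn decode_merge(bytes: &[u8]) -> Result<Payload, String> {
    match bytes {
        [b'm', v] => Ok(Payload::Merge(*v)),
        _ => Err("not a Merge payload".into()),
    }
}

fn decode_any(bytes: &[u8]) -> Result<Payload, String> {
    // Newest fork first, older fork as the fallback.
    decode_capella(bytes).or_else(|_| decode_merge(bytes))
}

fn main() {
    assert_eq!(decode_any(b"c\x01"), Ok(Payload::Capella(1)));
    assert_eq!(decode_any(b"m\x02"), Ok(Payload::Merge(2)));
    assert!(decode_any(b"xx").is_err());
}
```

This guess-the-fork read path is why the store comments above call it inefficient and "dangerous": real SSZ encodings of different forks can be ambiguous, so readers should pass the fork explicitly, as the new `get_execution_payload(block_root, fork_name)` does.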
// diff --git a/beacon_node/store/src/partial_beacon_state.rs b/beacon_node/store/src/partial_beacon_state.rs index 010796afd5b..cd923da40dc 100644 --- a/beacon_node/store/src/partial_beacon_state.rs +++ b/beacon_node/store/src/partial_beacon_state.rs @@ -1,12 +1,13 @@ use crate::chunked_vector::{ - load_variable_list_from_db, load_vector_from_db, BlockRoots, HistoricalRoots, RandaoMixes, - StateRoots, + load_variable_list_from_db, load_vector_from_db, BlockRoots, HistoricalRoots, + HistoricalSummaries, RandaoMixes, StateRoots, }; use crate::{get_key_for_col, DBColumn, Error, KeyValueStore, KeyValueStoreOp}; use ssz::{Decode, DecodeError, Encode}; use ssz_derive::{Decode, Encode}; use std::convert::TryInto; use std::sync::Arc; +use types::historical_summary::HistoricalSummary; use types::superstruct; use types::*; @@ -14,7 +15,7 @@ use types::*; /// /// Utilises lazy-loading from separate storage for its vector fields. #[superstruct( - variants(Base, Altair, Merge), + variants(Base, Altair, Merge, Capella), variant_attributes(derive(Debug, PartialEq, Clone, Encode, Decode)) )] #[derive(Debug, PartialEq, Clone, Encode)] @@ -66,9 +67,9 @@ where pub current_epoch_attestations: VariableList, T::MaxPendingAttestations>, // Participation (Altair and later) - #[superstruct(only(Altair, Merge))] + #[superstruct(only(Altair, Merge, Capella))] pub previous_epoch_participation: VariableList, - #[superstruct(only(Altair, Merge))] + #[superstruct(only(Altair, Merge, Capella))] pub current_epoch_participation: VariableList, // Finality @@ -78,23 +79,41 @@ where pub finalized_checkpoint: Checkpoint, // Inactivity - #[superstruct(only(Altair, Merge))] + #[superstruct(only(Altair, Merge, Capella))] pub inactivity_scores: VariableList, // Light-client sync committees - #[superstruct(only(Altair, Merge))] + #[superstruct(only(Altair, Merge, Capella))] pub current_sync_committee: Arc>, - #[superstruct(only(Altair, Merge))] + #[superstruct(only(Altair, Merge, Capella))] pub 
next_sync_committee: Arc>, // Execution - #[superstruct(only(Merge))] - pub latest_execution_payload_header: ExecutionPayloadHeader, + #[superstruct( + only(Merge), + partial_getter(rename = "latest_execution_payload_header_merge") + )] + pub latest_execution_payload_header: ExecutionPayloadHeaderMerge, + #[superstruct( + only(Capella), + partial_getter(rename = "latest_execution_payload_header_capella") + )] + pub latest_execution_payload_header: ExecutionPayloadHeaderCapella, + + // Capella + #[superstruct(only(Capella))] + pub next_withdrawal_index: u64, + #[superstruct(only(Capella))] + pub next_withdrawal_validator_index: u64, + + #[ssz(skip_serializing, skip_deserializing)] + #[superstruct(only(Capella))] + pub historical_summaries: Option>, } /// Implement the conversion function from BeaconState -> PartialBeaconState. macro_rules! impl_from_state_forgetful { - ($s:ident, $outer:ident, $variant_name:ident, $struct_name:ident, [$($extra_fields:ident),*]) => { + ($s:ident, $outer:ident, $variant_name:ident, $struct_name:ident, [$($extra_fields:ident),*], [$($extra_fields_opt:ident),*]) => { PartialBeaconState::$variant_name($struct_name { // Versioning genesis_time: $s.genesis_time, @@ -135,6 +154,11 @@ macro_rules! 
impl_from_state_forgetful { // Variant-specific fields $( $extra_fields: $s.$extra_fields.clone() + ),*, + + // Variant-specific optional + $( + $extra_fields_opt: None ),* }) } @@ -149,7 +173,8 @@ impl PartialBeaconState { outer, Base, PartialBeaconStateBase, - [previous_epoch_attestations, current_epoch_attestations] + [previous_epoch_attestations, current_epoch_attestations], + [] ), BeaconState::Altair(s) => impl_from_state_forgetful!( s, @@ -162,7 +187,8 @@ impl PartialBeaconState { current_sync_committee, next_sync_committee, inactivity_scores - ] + ], + [] ), BeaconState::Merge(s) => impl_from_state_forgetful!( s, @@ -176,7 +202,25 @@ impl PartialBeaconState { next_sync_committee, inactivity_scores, latest_execution_payload_header - ] + ], + [] + ), + BeaconState::Capella(s) => impl_from_state_forgetful!( + s, + outer, + Capella, + PartialBeaconStateCapella, + [ + previous_epoch_participation, + current_epoch_participation, + current_sync_committee, + next_sync_committee, + inactivity_scores, + latest_execution_payload_header, + next_withdrawal_index, + next_withdrawal_validator_index + ], + [historical_summaries] ), } } @@ -252,6 +296,23 @@ impl PartialBeaconState { Ok(()) } + pub fn load_historical_summaries>( + &mut self, + store: &S, + spec: &ChainSpec, + ) -> Result<(), Error> { + let slot = self.slot(); + if let Ok(historical_summaries) = self.historical_summaries_mut() { + if historical_summaries.is_none() { + *historical_summaries = + Some(load_variable_list_from_db::( + store, slot, spec, + )?); + } + } + Ok(()) + } + pub fn load_randao_mixes>( &mut self, store: &S, @@ -275,7 +336,7 @@ impl PartialBeaconState { /// Implement the conversion from PartialBeaconState -> BeaconState. macro_rules! 
impl_try_into_beacon_state { - ($inner:ident, $variant_name:ident, $struct_name:ident, [$($extra_fields:ident),*]) => { + ($inner:ident, $variant_name:ident, $struct_name:ident, [$($extra_fields:ident),*], [$($extra_opt_fields:ident),*]) => { BeaconState::$variant_name($struct_name { // Versioning genesis_time: $inner.genesis_time, @@ -320,6 +381,11 @@ macro_rules! impl_try_into_beacon_state { // Variant-specific fields $( $extra_fields: $inner.$extra_fields + ),*, + + // Variant-specific optional fields + $( + $extra_opt_fields: unpack_field($inner.$extra_opt_fields)? ),* }) } @@ -338,7 +404,8 @@ impl TryInto> for PartialBeaconState { inner, Base, BeaconStateBase, - [previous_epoch_attestations, current_epoch_attestations] + [previous_epoch_attestations, current_epoch_attestations], + [] ), PartialBeaconState::Altair(inner) => impl_try_into_beacon_state!( inner, @@ -350,7 +417,8 @@ impl TryInto> for PartialBeaconState { current_sync_committee, next_sync_committee, inactivity_scores - ] + ], + [] ), PartialBeaconState::Merge(inner) => impl_try_into_beacon_state!( inner, @@ -363,7 +431,24 @@ impl TryInto> for PartialBeaconState { next_sync_committee, inactivity_scores, latest_execution_payload_header - ] + ], + [] + ), + PartialBeaconState::Capella(inner) => impl_try_into_beacon_state!( + inner, + Capella, + BeaconStateCapella, + [ + previous_epoch_participation, + current_epoch_participation, + current_sync_committee, + next_sync_committee, + inactivity_scores, + latest_execution_payload_header, + next_withdrawal_index, + next_withdrawal_validator_index + ], + [historical_summaries] ), }; Ok(state) diff --git a/beacon_node/store/src/reconstruct.rs b/beacon_node/store/src/reconstruct.rs index c939fd3f51f..c399f1b4571 100644 --- a/beacon_node/store/src/reconstruct.rs +++ b/beacon_node/store/src/reconstruct.rs @@ -1,6 +1,6 @@ //! Implementation of historic state reconstruction (given complete block history). 
use crate::hot_cold_store::{HotColdDB, HotColdDBError}; -use crate::{Error, ItemStore, KeyValueStore}; +use crate::{Error, ItemStore}; use itertools::{process_results, Itertools}; use slog::info; use state_processing::{ @@ -13,8 +13,8 @@ use types::{EthSpec, Hash256}; impl HotColdDB where E: EthSpec, - Hot: KeyValueStore + ItemStore, - Cold: KeyValueStore + ItemStore, + Hot: ItemStore, + Cold: ItemStore, { pub fn reconstruct_historic_states(self: &Arc) -> Result<(), Error> { let mut anchor = if let Some(anchor) = self.get_anchor_info() { diff --git a/beacon_node/tests/test.rs b/beacon_node/tests/test.rs index 1c11a8349dd..bbec70330b7 100644 --- a/beacon_node/tests/test.rs +++ b/beacon_node/tests/test.rs @@ -1,5 +1,4 @@ #![cfg(test)] -#![recursion_limit = "256"] use beacon_chain::StateSkipConfig; use node_test_rig::{ diff --git a/book/src/SUMMARY.md b/book/src/SUMMARY.md index 470407ebee9..ff5c1e9805f 100644 --- a/book/src/SUMMARY.md +++ b/book/src/SUMMARY.md @@ -2,7 +2,6 @@ * [Introduction](./intro.md) * [Installation](./installation.md) - * [System Requirements](./system-requirements.md) * [Pre-Built Binaries](./installation-binaries.md) * [Docker](./docker.md) * [Build from Source](./installation-source.md) @@ -33,6 +32,11 @@ * [Authorization Header](./api-vc-auth-header.md) * [Signature Header](./api-vc-sig-header.md) * [Prometheus Metrics](./advanced_metrics.md) +* [Lighthouse UI (Siren)](./lighthouse-ui.md) + * [Installation](./ui-installation.md) + * [Configuration](./ui-configuration.md) + * [Usage](./ui-usage.md) + * [FAQs](./ui-faqs.md) * [Advanced Usage](./advanced.md) * [Checkpoint Sync](./checkpoint-sync.md) * [Custom Data Directories](./advanced-datadir.md) diff --git a/book/src/advanced_networking.md b/book/src/advanced_networking.md index fb7f07a51a6..08d276ba356 100644 --- a/book/src/advanced_networking.md +++ b/book/src/advanced_networking.md @@ -41,7 +41,7 @@ drastically and use the (recommended) default. 
### NAT Traversal (Port Forwarding) -Lighthouse, by default, used port 9000 for both TCP and UDP. Lighthouse will +Lighthouse, by default, uses port 9000 for both TCP and UDP. Lighthouse will still function if it is behind a NAT without any port mappings. Although Lighthouse still functions, we recommend that some mechanism is used to ensure that your Lighthouse node is publicly accessible. This will typically improve @@ -54,6 +54,16 @@ node will inform you of established routes in this case). If UPnP is not enabled, we recommend you manually set up port mappings to both of Lighthouse's TCP and UDP ports (9000 by default). +> Note: Lighthouse needs to advertise its publicly accessible ports in +> order to inform its peers that it is contactable and how to connect to it. +> Lighthouse has an automated way of doing this for the UDP port. This means +> Lighthouse can detect its external UDP port. There is no such mechanism for the +> TCP port. As such, we assume that the external UDP and external TCP ports are the +> same (i.e. external 5050 UDP/TCP mapping to internal 9000 is fine). If you are setting up differing external UDP and TCP ports, you should +> explicitly specify them using the `--enr-tcp-port` and `--enr-udp-port` flags, as +> explained in the following section.
+ + ### ENR Configuration Lighthouse has a number of CLI parameters for constructing and modifying the diff --git a/book/src/api-lighthouse.md b/book/src/api-lighthouse.md index 05cb0b69cf8..28481809703 100644 --- a/book/src/api-lighthouse.md +++ b/book/src/api-lighthouse.md @@ -141,7 +141,7 @@ curl -X POST "http://localhost:5052/lighthouse/ui/validator_metrics" -d '{"indic "attestation_head_hit_percentage": 100, "attestation_target_hits": 5, "attestation_target_misses": 5, - "attestation_target_hit_percentage": 50 + "attestation_target_hit_percentage": 50 } } } diff --git a/book/src/checkpoint-sync.md b/book/src/checkpoint-sync.md index 736aa08f1cf..47dc03b20c4 100644 --- a/book/src/checkpoint-sync.md +++ b/book/src/checkpoint-sync.md @@ -48,17 +48,6 @@ The Ethereum community provides various [public endpoints](https://eth-clients.g lighthouse bn --checkpoint-sync-url https://example.com/ ... ``` -### Use Infura as a remote beacon node provider - -You can use Infura as the remote beacon node provider to load the initial checkpoint state. - -1. Sign up for the free Infura ETH2 API using the `Create new project tab` on the [Infura dashboard](https://infura.io/dashboard). -2. Copy the HTTPS endpoint for the required network (Mainnet/Prater). -3. Use it as the url for the `--checkpoint-sync-url` flag. e.g. -``` -lighthouse bn --checkpoint-sync-url https://:@eth2-beacon-mainnet.infura.io ... -``` - ## Backfilling Blocks Once forwards sync completes, Lighthouse will commence a "backfill sync" to download the blocks @@ -108,7 +97,7 @@ You can opt-in to reconstructing all of the historic states by providing the The database keeps track of three markers to determine the availability of historic blocks and states: -* `oldest_block_slot`: All blocks with slots less than or equal to this value are available in the +* `oldest_block_slot`: All blocks with slots greater than or equal to this value are available in the database. 
Additionally, the genesis block is always available. * `state_lower_limit`: All states with slots _less than or equal to_ this value are available in the database. The minimum value is 0, indicating that the genesis state is always available. diff --git a/book/src/database-migrations.md b/book/src/database-migrations.md index 0982e10ab90..d2b7b518d75 100644 --- a/book/src/database-migrations.md +++ b/book/src/database-migrations.md @@ -26,10 +26,17 @@ validator client or the slasher**. | v3.1.0 | Sep 2022 | v12 | yes | | v3.2.0 | Oct 2022 | v12 | yes | | v3.3.0 | Nov 2022 | v13 | yes | +| v3.4.0 | Jan 2023 | v13 | yes | +| v3.5.0 | Feb 2023 | v15 | yes before Capella | +| v4.0.1 | Mar 2023 | v16 | yes before Capella | > **Note**: All point releases (e.g. v2.3.1) are schema-compatible with the prior minor release > (e.g. v2.3.0). +> **Note**: Support for old schemas is gradually removed from newer versions of Lighthouse. We +usually do this after a major version has been out for a while and everyone has upgraded. In this +case the above table will continue to record the deprecated schema changes for reference. + ## How to apply a database downgrade To apply a downgrade you need to use the `lighthouse db migrate` command with the correct parameters. @@ -110,7 +117,7 @@ Several conditions need to be met in order to run `lighthouse db`: 2. The command must run as the user that owns the beacon node database. If you are using systemd then your beacon node might run as a user called `lighthousebeacon`. 3. The `--datadir` flag must be set to the location of the Lighthouse data directory. -4. The `--network` flag must be set to the correct network, e.g. `mainnet`, `prater` or `ropsten`. +4. The `--network` flag must be set to the correct network, e.g. `mainnet`, `prater` or `sepolia`. 
The general form for a `lighthouse db` command is: diff --git a/book/src/docker.md b/book/src/docker.md index f22b8a20082..d67b084da63 100644 --- a/book/src/docker.md +++ b/book/src/docker.md @@ -16,21 +16,18 @@ way to run Lighthouse without building the image yourself. Obtain the latest image with: ```bash -$ docker pull sigp/lighthouse +docker pull sigp/lighthouse ``` Download and test the image with: ```bash -$ docker run sigp/lighthouse lighthouse --version +docker run sigp/lighthouse lighthouse --version ``` If you can see the latest [Lighthouse release](https://github.com/sigp/lighthouse/releases) version (see example below), then you've successfully installed Lighthouse via Docker. -> Pro tip: try the `latest-modern` image for a 20-30% speed-up! See [Available Docker -> Images](#available-docker-images) below. - ### Example Version Output ``` @@ -38,6 +35,9 @@ Lighthouse vx.x.xx-xxxxxxxxx BLS Library: xxxx-xxxxxxx ``` +> Pro tip: try the `latest-modern` image for a 20-30% speed-up! See [Available Docker +> Images](#available-docker-images) below. + ### Available Docker Images There are several images available on Docker Hub. @@ -47,17 +47,16 @@ Lighthouse with optimizations enabled. If you are running on older hardware then `latest` image bundles a _portable_ version of Lighthouse which is slower but with better hardware compatibility (see [Portability](./installation-binaries.md#portability)). -To install a specific tag (in this case `latest-modern`) add the tag name to your `docker` commands -like so: +To install a specific tag (in this case `latest-modern`), add the tag name to your `docker` commands: ``` -$ docker pull sigp/lighthouse:latest-modern +docker pull sigp/lighthouse:latest-modern ``` Image tags follow this format: ``` -${version}${arch}${stability}${modernity} +${version}${arch}${stability}${modernity}${features} ``` The `version` is: @@ -65,22 +64,28 @@ The `version` is: * `vX.Y.Z` for a tagged Lighthouse release, e.g. 
`v2.1.1` * `latest` for the `stable` branch (latest release) or `unstable` branch -The `stability` is: - -* `-unstable` for the `unstable` branch -* empty for a tagged release or the `stable` branch - The `arch` is: * `-amd64` for x86_64, e.g. Intel, AMD * `-arm64` for aarch64, e.g. Raspberry Pi 4 * empty for a multi-arch image (works on either `amd64` or `arm64` platforms) +The `stability` is: + +* `-unstable` for the `unstable` branch +* empty for a tagged release or the `stable` branch + The `modernity` is: * `-modern` for optimized builds * empty for a `portable` unoptimized build +The `features` is: + +* `-dev` for a development build with `minimal-spec` preset enabled. +* empty for a standard build with no custom feature enabled. + + Examples: * `latest-unstable-modern`: most recent `unstable` build for all modern CPUs (x86_64 or ARM) @@ -93,13 +98,13 @@ To build the image from source, navigate to the root of the repository and run: ```bash -$ docker build . -t lighthouse:local +docker build . -t lighthouse:local ``` The build will likely take several minutes. Once it's built, test it with: ```bash -$ docker run lighthouse:local lighthouse --help +docker run lighthouse:local lighthouse --help ``` ## Using the Docker image @@ -107,12 +112,12 @@ $ docker run lighthouse:local lighthouse --help You can run a Docker beacon node with the following command: ```bash -$ docker run -p 9000:9000/tcp -p 9000:9000/udp -p 127.0.0.1:5052:5052 -v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse lighthouse --network mainnet beacon --http --http-address 0.0.0.0 +docker run -p 9000:9000/tcp -p 9000:9000/udp -p 127.0.0.1:5052:5052 -v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse lighthouse --network mainnet beacon --http --http-address 0.0.0.0 ``` -> To join the Prater testnet, use `--network prater` instead. +> To join the Goerli testnet, use `--network goerli` instead. -> The `-p` and `-v` and values are described below. 
+> The `-v` (Volumes) and `-p` (Ports) values are described below. ### Volumes @@ -125,7 +130,7 @@ The following example runs a beacon node with the data directory mapped to the users home directory: ```bash -$ docker run -v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse lighthouse beacon +docker run -v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse lighthouse beacon ``` ### Ports @@ -134,14 +139,14 @@ In order to be a good peer and serve other peers you should expose port `9000` f Use the `-p` flag to do this: ```bash -$ docker run -p 9000:9000/tcp -p 9000:9000/udp sigp/lighthouse lighthouse beacon +docker run -p 9000:9000/tcp -p 9000:9000/udp sigp/lighthouse lighthouse beacon ``` If you use the `--http` flag you may also want to expose the HTTP port with `-p 127.0.0.1:5052:5052`. ```bash -$ docker run -p 9000:9000/tcp -p 9000:9000/udp -p 127.0.0.1:5052:5052 sigp/lighthouse lighthouse beacon --http --http-address 0.0.0.0 +docker run -p 9000:9000/tcp -p 9000:9000/udp -p 127.0.0.1:5052:5052 sigp/lighthouse lighthouse beacon --http --http-address 0.0.0.0 ``` [docker_hub]: https://hub.docker.com/repository/docker/sigp/lighthouse/ diff --git a/book/src/faq.md b/book/src/faq.md index 5bfae3fa875..b42e197a003 100644 --- a/book/src/faq.md +++ b/book/src/faq.md @@ -9,6 +9,11 @@ - [What is "Syncing deposit contract block cache"?](#what-is-syncing-deposit-contract-block-cache) - [Can I use redundancy in my staking setup?](#can-i-use-redundancy-in-my-staking-setup) - [How can I monitor my validators?](#how-can-i-monitor-my-validators) +- [I see beacon logs showing `WARN: Execution engine called failed`, what should I do?](#i-see-beacon-logs-showing-warn-execution-engine-called-failed-what-should-i-do) +- [How do I check or update my withdrawal credentials?](#how-do-i-check-or-update-my-withdrawal-credentials) +- [I am missing attestations. Why?](#i-am-missing-attestations-why) +- [Sometimes I miss the attestation head vote, resulting in penalty.
Is this normal?](#sometimes-i-miss-the-attestation-head-vote-resulting-in-penalty-is-this-normal) +- [My beacon node is stuck at downloading historical block using checkpoint sync. What can I do?](#my-beacon-node-is-stuck-at-downloading-historical-block-using-checkpoint-sync-what-can-i-do) ### Why does it take so long for a validator to be activated? @@ -128,8 +133,9 @@ same `datadir` as a previous network. I.e if you have been running the `datadir` (the `datadir` is also printed out in the beacon node's logs on boot-up). -If you find yourself with a low peer count and is not reaching the target you -expect. Try setting up the correct port forwards as described [here](./advanced_networking.md#nat-traversal-port-forwarding). +If you find yourself with a low peer count and it's not reaching the target you +expect, try setting up the correct port forwards as described +[here](./advanced_networking.md#nat-traversal-port-forwarding). @@ -184,4 +190,47 @@ However, there are some components which can be configured with redundancy. See Apart from using block explorers, you may use the "Validator Monitor" built into Lighthouse which provides logging and Prometheus/Grafana metrics for individual validators. See [Validator -Monitoring](./validator-monitoring.md) for more information. +Monitoring](./validator-monitoring.md) for more information. Lighthouse has also developed Lighthouse UI (Siren) to monitor performance, see [Lighthouse UI (Siren)](./lighthouse-ui.md). + +### I see beacon logs showing `WARN: Execution engine called failed`, what should I do? + +The `WARN Execution engine called failed` log is shown when the beacon node cannot reach the execution engine. When this warning occurs, it will be followed by a detailed message.
A frequently encountered example of the error message is: + +`error: Reqwest(reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(8551), path: "/", query: None, fragment: None }, source: TimedOut }), service: exec` + +which says `TimedOut` at the end of the message. This means that the execution engine has not responded in time to the beacon node. There are a few reasons why this can occur: +1. The execution engine is not synced. Check the log of the execution engine to make sure that it is synced. If it is syncing, wait until it is synced and the error will disappear. You will see the beacon node logs `INFO Execution engine online` when it is synced. +1. The computer is overloaded. Check the CPU and RAM usage to see if it is overloaded. You can use `htop` to check for CPU and RAM usage. +1. Your SSD is slow. Check if your SSD is in "The Bad" list [here](https://gist.github.com/yorickdowne/f3a3e79a573bf35767cd002cc977b038). If your SSD is in "The Bad" list, it means it cannot keep in sync with the network and you may want to consider upgrading to a better SSD. + +If the error message is caused by no. 1 above, you may want to look further. If the execution engine is suddenly out of sync, it is usually caused by an ungraceful shutdown. The common causes for an ungraceful shutdown are: +- Power outage. If power outages are an issue at your place, consider getting a UPS to avoid ungraceful shutdown of services. +- The service file is not stopped properly. To overcome this, make sure that the process is stopped properly, e.g., during client updates. +- Out of memory (oom) error. This can happen when the system memory usage has reached its maximum and causes the execution engine to be killed. When this occurs, the log file will show `Main process exited, code=killed, status=9/KILL`.
You can also run `sudo journalctl -a --since "18 hours ago" | grep -i "killed process"` to confirm that the execution client has been killed due to oom. If you are using geth as the execution client, a short-term solution is to reduce the resources used, for example: (1) reduce the cache by adding the flag `--cache 2048` (2) connect to fewer peers using the flag `--maxpeers 10`. If the oom occurs rather frequently, a long-term solution is to increase the memory capacity of the computer. + + +### How do I check or update my withdrawal credentials? +Withdrawals will be available after the Capella/Shanghai upgrades on 12th April 2023. To check if you are eligible for withdrawals, go to the [Staking launchpad](https://launchpad.ethereum.org/en/withdrawals), enter your validator index and click `verify on mainnet`: +- `withdrawals enabled` means you will automatically receive withdrawals to the withdrawal address that you set. +- `withdrawals not enabled` means you will need to update your withdrawal credentials from `0x00` type to `0x01` type. The common way to do this is using `Staking deposit CLI` or `ethdo`, with the instructions available [here](https://launchpad.ethereum.org/en/withdrawals#update-your-keys). + +For the case of `withdrawals not enabled`, you can update your withdrawal credentials **anytime**, and there is no deadline for that. The catch is that as long as you do not update your withdrawal credentials, your rewards will remain locked in the beacon chain. Only after you update the withdrawal credentials will the rewards be withdrawn to the withdrawal address. + + +### I am missing attestations. Why? +The first thing is to ensure both consensus and execution clients are synced with the network. If they are synced, there may still be some issues with the node setup itself that are causing the missed attestations.
Check the setup to ensure that: +- the clock is synced +- the computer has sufficient resources and is not overloaded +- the internet is working well +- you have sufficient peers + +You can see more information on the [Ethstaker KB](https://ethstaker.gitbook.io/ethstaker-knowledge-base/help/missed-attestations). Once the above points are good, missed attestations should be a rare occurrence. + +### Sometimes I miss the attestation head vote, resulting in penalty. Is this normal? + +In general, it is unavoidable to have some penalties occasionally. This is particularly the case when you are assigned to attest on the first slot of an epoch: if the proposer of that slot releases the block late, you will be penalised for missing the target and head votes. Your attestation performance does not only depend on your own setup, but also on everyone else's performance. + + +### My beacon node is stuck at downloading historical block using checkpoint sync. What can I do? + +Check the number of peers you are connected to. If you have a low peer count (less than 50), try setting up port forwarding on port 9000 TCP/UDP to increase the peer count.
\ No newline at end of file diff --git a/book/src/imgs/ui-account-earnings.png b/book/src/imgs/ui-account-earnings.png new file mode 100644 index 0000000000000000000000000000000000000000..69e9456035635e947ddfc97771f646f07ee4e588 GIT binary patch literal 886925 [base85-encoded binary image data omitted]
z9+G;d9anJ%h?lGA5Amn>^>&wECe49M@Q2U$#vxAlA2b z&oFuB{=D#Qwjh)~VRBiJGuhv&%k=%%u{U5x>l&whj#?|mL}}FrDEjw8RX~|dFYdOy zBWdY6uX7(H@?#VmkL#XMzV*+T+$*~)xo3i6bElgR??WTUJaXSR$c{sU-Ix7zh5F|78HG3YkdvecJ%)`YD<~T3I^$o%rqX1fE%l9JJm@vv0Dl7-Zlx;= z%||Gj^WIbbrq%}0(GFrrBSB{BQ%szG`%X3!jerl$$}5XvF*pky>#G!6a>T!jr+`V} z+}@|XCmZNdJelb!lvwz0q1UUvldjx^pY_caFyEbrPVntWYUW9hf3Ae8i;u=wyszAR z(IB8qgz@s+C?Xg~QP})JFyfD55Q8eSOIKbZ2FjAI@==@ub&n-WiM$k30HQKT<9jD3 zkp+6=`wE6^!wgj;bm6JAQ$LK)7k`-bZ{oi!(&E5h7_Z!h{(MFY;ft&_TNxxY>p&rz z74|u+IKz)fdbaX!{rwiQ;1LaJ9EDsC7xT_@^f1{nC*1@$!&3I!?_&o>N_?nGOqs?Cf}=O;Q0+runBT$i#S=Cg?XU8WOX~O*;G%O+ zWpKJD{6kyNoQD;=Q{`R5-J~wp6v96@azLK56?&;oRk%%N(mgtlF9>&~M(pb$oi(4m z6jR4T)=kR3hV$4*i0($zxHs7DlqQ`-y9ImzgInA)?+7VRRD+P@g74M3Kagd3@f^^$ zTGP1EVJPl_fe6Au1^^4O^z)8J9lzHxPG$n?O)S-bpPh5uQ!S(Vn?sxXbODQkOm~0) zrLGc5?gA58Vd|e#|*q5-rx?VoVM-ToC|L!=Zv}r;myq0_dvti$I25m zNQ+`JR$QNrosyKS+g7(p`$qPu*!8Wcj+jaLHirnE=iEoOup^{of_C>wXrRvA^q=n6 z;8XhlluYP|9^9%O#j-^bfs%9y(3E$l@4(*IIfO2n8x*{57XizsJD^wy_CXJ-+p8R-?}}r=WDw@0ZFB)t4Bmtb zwuX)iUj0RdF_Uj=gAfr1*!FSLSYMuqD6hEm;~U~HeJ$4U>UyKL>?R&L<2MsqhJTKd zaSCp2I!ZE_C=uTOO*Q*EZh+2hH?GoGZ@+6eDLN?|p&WLGzHd}qE*jZd58Ribl*#9d z5fEcAq!;+&AUp&3p`y`?_n-tRU$}v}y&vhip5laeV^ETzKg%UbQ>GU%DAAvMw_lWM zwCYlbE)cmxKF;pB$yVFHNv_i|9&`5)hXx<|ltYT)D~>=2lu`Wr z@kEg=pqC-1n5ZgLTF>N;4lIN%K9ysYLY;9^tGhQBsKdO1O`Ge+>@kZz$Yd9Cae=G`-ZZB=*#3inhl*0Gf-y9l(fho z4}Enlhwr=XxoPCt3gIMn>3o1<@-DcM#+nh*=SkTgcIZ<7tDb0FbiF>UxdDEU@LU0>5st0nZ zj0;bJigx15Wop^TXsgK1v3PxjpRg4=)ftpg#B0^>_TbwX#^W|qrWFJH8quP2Q`~i4 zl0KE*m(}{brfVO{+W8;Q&(vSg53r$N|E?(_Mn*(@d&htnG|0+aQV>g0zp<9DH(vkt z9dy>)+K-qx4Z`ZI_Yg2Ba!~-1>Z%%25 zy<@(jYhC0OBzs}8jzNS@u7Q&OpO*OWKliVZ9;Jr;e=UPrNXy`281z3*C~SgLWQU9& zHkvfw4WNG1X{~k^yfzFYbX(cgb~;{ZgMLv~a4reVh1ro6rgQp`h3X+U#e0wqBFBia zES*y3yjjm=?jMwEA_>=DlHE!8N9+G0B{eFTZ5`LuLARfexI|abZV_pgCPXn6BnV?elg4w z`($Yk)&1%4?M|IV*-UwixD(DFu*A*o&wd{>%yQh*Xn3P4*$XFG$}jAZ@muIEuUJDH zqCcb~Ey#Mdo497XIMzbBa*sjjFsJ}5*+tUzmuKu2F*xp9bdIrS`fpAHN-GSTtnKGs zmDmc06&jsR!bJI>!wDm#W3BEwLM25nk8%yhIOt~f?y7O#+H>U$$MT)!HvcH_t9&|5 
z5X*mVf94JKp86g1%KbzVz3V2MN&+jAtHF-vwXY-Jcro{fAE+1Y~wichaJ8qhFL+(ZYMh9G|U)4^X?~Psasri%>gTv6L`a2M)~_aomKDK z(**VkuFF@xSXMb6-6j;dkYu}Mct&-0T+U!+f`va71I(szR{UuV zXHev1+nu;k8yIccV;1Vb*xz%p?-F+19D%g*>h36GA&77ccK3Twgq@O+dW_{ffj11j z@E#FIq1v|L-dJ5xO|RUrzON`adJ$e4Iq!LOS@m$1txM)e(V+YL&+(oqf?(=RHX};N zGCd1E686=md)q76L_CToNQqwnx`!u5p1ts1#`2aB6u6y8sBmWEy+0i(kF0#0U)gt~ z7pc_NoKO+nB6vIzann3Wv2%ZlJSD`^oOojD^8L}jD8n%NU~klyK-e=bmM9M#z8Jve z-=VuW*NG1o9{6S#N=de!RW1l1+9;33n6Pj|CWNGThV#?r*0XE3_1M&b#Ivf;qcd!$rsoJV)O{(EP>iw9GfIcU)8f!2Wn> zB5+S*hKKD86`X?x#El-dX#uWj@z|pFUKN8V4WsE-)l;2Y#ME%15LA(e(96 z*GN6>9$Hp}B8(`F{1fccC)MqrV?3r%;gj*|SV=U>S1^_DWC#S3OD(FTI(?T_mF+uW zl@6pc0-+cY-9S^|il?=&#ki$~blVXZwbnIyN-pCo|i*k|$1fhAO z>hgDrurn%8KRc}0`$NmkVuy0<;^VhNg=n5(BaL-L)Oa(4=iD=gZ4i6ntCyAu*=x@e zw`hcK)Ar$(OxWBjA=+ypIWN2A!uBFw-MNF@utx-AHJC|^ggA0_xC^|;S@RYPa`n0}>_BurM)u^T5n*Cd<~M`#Z>xC`2sw`1+g+mvly$t7 zzB6B)xsvKKbaQ0YEh<{coeq{W>b~zbo{2_iM}LYHknuAK@3r_l2d(tw^`9OK|(~YoBA}KB!&p8j*ML z8O<-B?K+BIMS|(M^wThmgfx986$h-ZUW+i-k(3JT)3DIpP?A?$8uZ*(KOI-=;*P8@ z{AdG9y$i5iNk(s??B4$t>Jwlcg5frt=2)|5-|oK3 z*8M_~he)ieriH_1d(M(^N6&B6CDpt-B!nH!X|RRfyXeC?%<3-udP@$hjYmPJ7}x)E z=8gZ*K0VfmPi^wM`DEy6&a-S__|?;HgEtPBRsSVY*|11aX7 z@(YObZ+J^{R>0IGTN68+&i~Z!l^E`t9w$ZK$7PQIV8cRAUwabJrc~1|ev|)|PA``h{SJekh)~PK`W1!1))jERxA_=~7RCb`&^ zj0FaH3bLeS0=NS2#mcJE)(F5e6wlTWZKkVyA$s3pHJ%e!8s)z=irazFak{!x{@7?X z4VQeq<#Mk;h~M3r#BPt{|C)*Wq!&0*ovm`9!(6eFAV$Ptf6F8Mn#a?2hGKTC@R=r6 zp9^~Nc}Fecg4i;|^d*YjSLuKyORZ-Gb}uC65#N0)h}d794GWZBR#BI;J?pdS|M9ud z3}qTG1%t(ZP;Z5^P}ZOYQAhu0xc(>M+*%1_H%3(9PxK6~2gTZ4RGT13R9W=Q-Cvm|0@KOL?WXw(AlMZ5b zM1Fb@AcE9p4(?C(O&;Iy60ZCnQ;%q5YI8VfFb|ue@j3x=tvs4dy%XeWo?lD}8kJ;w z;^hXHe4JAPp0?!uQ*C;JC?d71{>~&pix}bQ+KZSsF+kn}Yr6tXSeC-5&@|G^9HW3k^84IOqu=qsnJk|3;Pnf*(^)Q~vMQ z1zauN{sS;aav&!JaZ(*4JW3ZZdt1HERHxGmA&``vvu7NFRLSIb(|XP^7Xdm74I@YK zr>1RZO+fc@m)c4}jq95LqTSPle5HqfF_2Rr4+aoC7|F_c=DVv&TKC%{G5GLX%fO#iGJ>BUz?_Pp}0S|#nEwPT^o&)Vo$3@jnC zqSuBA!#L!j+T{vo_g$9Y2s3KG-}}oNuj3MpB((S27bF3_#2+Ux%nC+iwLU02g;2Z^ 
zr})UZ%cDHswG>_1noQBk7peXN)4BH}=u4}b!m^MjA< zWu!jZJdK@`vjOV5s|8&68Qg51jUP;eCg~blRvi_+bKf$ln=-_jW;6*$0c92CqcdZQ z4rTQO;F8*X4~qfsImJ-!hlhCK-@4v4{BMkSqdxaUu#Wm6`5P3{dbdom%h_>!OILNs zekPV<2KDlr4v&A><*$Lad5|%!5DWsAd9BK|!>!CkT!-~q&stb-|L7TB$TJ=a@52UI zr;zvare@zx>let#Sck`PNLQg7U;I^o$R&0~WLxFz7d=(MVI-%-ie#}RO}mPaN$rgJ z8Jz9&mfAqWDo+ZRRe+5|C0W{Wz-b@a!~LTnbXm9=Nxnk%He|I$vbc=jai|+N|0@0o z(MjK1(7K3qBy1d*rf1k(?qC3r3ib(!mAT$f8eXs_@Atoi*$jPS#ycp5N8F}7)(4*W zeOVa{Ho+=ws0%G2!Gzl3NA5d^9k2_GxgYf}+=|Tl68Yb`O3Gfh~LM!S; zsX}${S)z6Ey;9)`P0cN+t2Mx0gBs1eedJ0>-w!PF{2%BbT#Jp8?JOA`q}EujFlnb&*!)@*ZrEh~Tb(K}&{*1xU6c!tq0A93h7*p1C^e9^2aO0aWng1iAm_1b=%bo) za}}*P^TMoss-Q}&jWd+3)q2gx?7)03%x&Jta7;yXo^(EY>-m?{MPOvAlAuw5Ee!AQ zfQd<|sudj@&>Us@miZ_e%|>S+$-ZW@_FY@)+v+W1DEifyA2yY36qCU3a-vDY63=II zq#%2aMq~>6bg0q1;cH@qv>5$1bz-vOc-^aBV=%z@T(XXD~Gf?D?`GdhKySJAc z3mop?!XM}@6bP0XlkiUl^2kw-LCqO=>q^W(^=JXMxOKDcwdi;3TZa(Vwj(+-y$)( z{UjvVgm{_TO2yOmu69;%mon(es6sHgSSF-U+O;el^h@%A^xJx_fFp>I6EiVnBWSb` z7jluvn3?h5=S@!0a~ae6kHeVlpSyZ%*isYvQdKa0=={mnJAXG7Co9*t@|oDB){^w2 zs6LWZTruOcrW~c8u<&JryQ`0YDlZx$W_Fg}W10 zgTG@hbNAB+AHrs9f`fyLYeVH|Rq#D=QSpwY-3XKDoPP{iCx?&P`;wU0$}u%wkGbSd zL!kBZ7lLgoIe@a<>|=&|)+KDgn}OU0qd4U}Askm)vGs}I4sKI-Yl797Qm`&*-(~gh z*J3yq&#vYQCqR=5c39xO!bpJC0fIoa@n~{H$3)hmUK9xv{NMZE5Fd|WZ=NEu|Cx?% z%G^cc?$R73vF)#IFi=AYLkGODeg9MvrtF)22BjVhov6rzfXkPg`*JLjH?4#+*XEoj zAq~f{RmY~e{1JQn>lBl|qXrqF5N(Iz%;dw`3I@amq`~zdNj^x&u#(AO!v zJ{|l>Op$%Qfn8*+TJoqcIWuHJZXx*lJtrOXcmA|>9E`0bIY{w`a8qIC%?&1%BJxLX zK$l2;0p!c&_4wJb4~jMDr`c}oE~E~jOFEmknv->xcmXKE9rx1{Gve@vs1j?F$G>cl zLNXnzki|7QuTYc_(+Q0rWla(PP?YLryzi-YO2``DJir^j*%Z^d&qH(i6t?oV6K2UyAnzE#W z(!wNlGK)lSWYW9jLRS~P!x~Q}Jy>Q=fOML_i`!cJKfl3TZ&=PeuiJIMSIQ`G+EphP z-cF=2Wp>#mkhn*@T|iq~3tJoyhFaX{CbmuI`!_%`t3UMh?Rwrr+<^5gkOwYFvEdi; z62b2v>gFcE*7zQJWmW0x;%bbgPv#z%3WAMi2HwHuCW#5XT1)7MS4W$9UxmZcL2fOv zP=IE=$Mpkf3w8IPEWP&6Yy?|obCt~R^{&p%%*SBHn@wa>Lq=s1pCc&=4iv)!CcGPF zegZ#asU~n|$;H+wSbf2S=y#$mRjy%AbR7idmuzN+&ot}bIpO>2Pw z=teqeO(7t?XiGGK(y7=xx*McsSa;6vRzNdTfj@AYM`2Ty_4#7zY10p7B1eq<_2;s* 
zkvxe;%O^khDG#;9dK{3>%;1ZYET2bF# zGo!xe#nL3UO{$;S(;|g}K;AXPTf(t!lgA}=!^M(%1vKEAqPH;Z_x(`2fQmK=>v+(b ziEC*J9va^~|CaOc>Cw=nZs-;SRg923Jr=Vl{ z#h+w{ugYNR!q}EKp~N_P(T|-k*7}XjbY}WuCh)d*!C6sV=SDg=QD^TwtS%`ZsPz|_ zI%<{6udSEW{cFv4-EN<2`Cnki8L1CN+rBK0H9WQ|JpshcDiJ8V|DwCWxY!iwMEBKm zWnHxu9>wh=H7VWJ;0EMu6DsU>jr_txFsgVk;mo!)QGPa4`bh!OiZifdvJqyNHwB!0 z#%{iN!yFd2G|N0?S~nHzEM@HTz5~lf(_w)Y&<)BlE{<*g!np8>1c1Fcoc%-=AXZa#bXb^Pr#k+~&T+WSu>7(DHf zW;Y>Ntnq^kqFPyhv4O_NTpd5t0z29)Qh}C$orKJsq3-wo1J`XE5J444hKL;6{GWV| zJ6K=j-0hXpW&a2-_;l}@_wma@vu;GsR09=p-78p~-8RrA$_(l~MO_ zv6doYb7;YxDhS5ib8??3t$1yKC4CXTs8Hb$>TWW+;u)DQqi1$h$iU=;4B{!W$Dyqr9ONdIeWcw80m?xQfw~|a zqULqKKsW1&=Z#Dvey-{tUEwi3tjaiyb(Sv>*pU%ZVv)rVbXinLp`_WyMqgyS?$Ve& z$2Y-TI&^l?M3QyY&>!?AeuV?rp4WDx6q~_#$M;ugzioqd*kdSBQatFI0x;NSqMp{V z+VbnUIEqG|85mtUWGB;Wt?4za?~>pFw#llIZx{WIUjO&t5O*E|E86~tXPkW(B)DZH z!t%?LVFlj~NNyD_p8|Z}ytBEar)eu?BA2=uAC#hDkt2}tv%sN%9(-BgHTXl0(o^dk zmFF=;^5j1TnCr!S%3bU7YLli${m5G+S$qx|s~jR}3&=0B-g~85%WS=A@M&7A^+m5- z*AA|&Z08Xf7EH5Q4BCoX>g>1qLb06VPSo=qM6`1Lsg^HHhcX7eHWe?@_VmEE`DveB zhY>z~8s7x+o5jW%h4uo0TvMOVUN5@~FXd|b$CvT^6B=--o~*Q_@Z;qOLxkA4#B2Qx z4P@d`VDfr}Yjl;B+*_xq4K;p1yi>crCAJXtcAW(JFZ|N^@D#5(yTiN$nHa@d&&cbWI?&a0+UfG4W zN31x}Vbt|`CGMwqPuJf~Ab9`2gR{Wmj`k02s?<)tI7B&xYTO~TkQ-@5NyFv_t`)uS zIQV?V%JA?cWXoqh{30N7;_fcB#wAWT)qV`(jzuZhC~Vddnht{8mIUanU~?MhmQtRt zlGQoXD_bB8?NQ53QRe!DV!O|eD@J%*Q zxuptIWLfAwM{nz?-a2|@b3Z1q1lccLrzhWb>i-aZZ~xxD_sLrZbfvg2f8MLc88^KbD3bQ#U}-{R zy%|)C+04R_a*P^ulwwr>qpSoWlzDGF#?@^|==T-n?ML+J{wF?9eWa3-*@kZ(wI6j` z8n2g;LDDvx=lJ~2EAC*3ZbR46l6(5HC-LLgTlU23G@*>syXKtXHWId5lb|=Ns;?rI z2D&3}yAF37@imGtB7yaCq9z(72`aHSBGnQFwB{*p|zP=vVze1Y8ateyF;G z(kbbJo)KiVq+(^qfjor`IZQd3^Hq5mUcN+Bizg&w1K8D^V+2o0r{7Xgi{#qOnMo{O zcB#>$QOpD=ko?$7Zz)TAcihtNVw!J#lLg7&pjnu}@mo6iRP6`T@K2yUvt8!ND#|gJQ$xh|8+t37IYD;}={zxU zrVqguvdXoVwdOm_w4kV9`c0L)7~EZ2>tqE?POmF$(z120uS>o96nC->qi!}YT7xeb zY4qGl50XvmuGZoO)Inzk+^?aQb0mj0?HzpUr|!uZHQoFoM%#-8lmj8bd}nLnvIenR z_yb=Bu~Hkiis4^nXcVgOVNa~G!CB2pTXq1e>gMDa9GInPf!qe$A%6HLU-;Ma?A&th 
z6`h7j9`FXNmnqp+$%t^h@eid6F8$fhk+Bc?(H(*m?SqIQDXjtqu{|0~6IWE@Wierj z9RHA_=5ZupL}KxYWNQ9Ou(Nb5amuLy{RbLuS4)UXDGgOY;!QRCx}?sn&>oSqO=vfp zvT>bpEN-Z8ZY!zmo4!_fiZN4S)Mfra>_KWqB{3_XsWSexkRW#)#>~1LztNk_;>(y^ zp;)5l_4Pj(i8c~=!3EmT1RR->#ssiLhX3$IMXuo6Y%w>9^nkSpJK6z`eTE||#LlCs zQ@48li((}l)-|&)E>LtY}c>9OuWnroQujJNKu9f-MhTiqBL{pCbG6Ba|uy@6WnFv zq5dOWPd~2hS;NiG(MAz};fig`ohGth05=vC3;$ zIrDs;ejg$3-7%5@BA;RyB?FfK4kY$rh2~zrI`7(;heToB0^+GGQeJ+@+uPk@TEn<= zs;FeDiw`m=WHNm)-6ZoVF9A*!lz!Bn8rN)RvUp$?D)E7MV|(0~u)+C@BL~Aq!|1PQ z4SIk01x0ZE+V=TTS)4}pHB=Gx=2V(UgA_{lBH5mGmm2DsP zG{qcNb!g+{o4X|3uEFzCtIC8w>&dmMelvA1BMIeKSl1gzku-F+1LUi$maJk<2a3-Z zyxxB3lT&O=D6q=?RxLAczN-ThKis6aZSqLP$GFSl-1|y;nduX=hnix#^Z(KHmQiu2 zLDOJ@1Oma`NeJ%lLkPj$ox$B*2MZxM!QBRTch}&~;O_1&JNMpw-#vTw+uw6$&itva zuIhfeXqWJWTv)xe$VYJerE8y`slzzLr;Z`y)^BGu4E(tD{J`*6a7& zbw<8fa?(;g*?DBct@k-+J5t!`2i?6FmoS@P>9|5KX z3Hba9@vt9ZUUlgMaqexR?{73d58+S405#QF0g8r;NIQzbbT#D#&lZN|y!~^nr_e+C z1eG!!9=|u4z5Nj)BtM-TG$@alq{~#2U?5RSubO~nqHARmA2*=Sq-*-(05hOq^K8*a zC3(xbJV8(`$kOtXC$G8wd=A+pmyn52Ve-nrGZ$}me!Bvj=Eii<-Anr`=nq{BW2*N) z2Jy}8dj4Vza(mWj;}B)8v-$YH2&+L8M%EQ&_SsDD$GZyOqghfRZNfES^kFh%^Cz=d z7mr|X14@Z7Qg?7hHj9m$Ju^@YURTw)kUtmc2-ie*8_jM}Hg-py?=So&+h`L=wUIq& zu%TR?=@BS0DK)bPh@lMZFAT&!?h}FK&cQMl@YwXtBbXh;g{n$>nxlE@_~Wd>GsQ>y z8B>Gz!^^b3tzQGgA7es*9A71R#NBL65a+B(v_aTVd~JuREq00yU?%%vlC^ag%C3K$ zsj5Co1AM0EdK$(vgtNa_k|{~74G4u?8R9JNA{8^XxGUTE@9bRpQ`Mir%3 ztMt5C_d6 z`F`z#ZAqQ$JJKoQeRS+->`4XQZYn+U4c;!J%cc z_XV@{(z~(!Xc`yg+yq|Wf)YMZSnn)%d+FUDqf?i7@ zFqnp^V2~AUd`*J(wFzm$bX%0!5`mmCG6yG)0ZL*Yc1U)(4j0&jzU4FBFkKb-M92>% z#CLab4v)2uazM99TT^6ZA&(ABld42w>so8u{eEN+9@rYKuv-P~ALqGUdS!!Fe-Do^ z;J|qoGMF0*3pHoM0+EakW8?F@iFs9H0#TeXXm%5~!;zTmYcxjg0!ulpMr~(X1P>5_ z{Qf5R3d2v1ut9eYqYmk!UJ53jf~u8E5_u9D?os5LpdI(LMh zMhORhw*yJqFS1nIiK_QdB7er(r1TT$Y<8gKF$Cuni;hx90wuN+x8Jyy-xtiW z7?db%{Vbx^3!M_e4oQlVk^Qs6XN1ifRIjVXl;su6STEvX@>XSt&NM6O>17WzVm0NH zyLYv6p?pPi@o}7Ede7;gmPr{|8``I8IfP_hO(g6J#v-(d4;al=d|7^&7~{EqHl{|ql1u;8)EVh@vBFZ#; 
z&hNKS{*3pxGdR*zrS8!G$Bi;<=ppl^)W<@W2?e`yhNBse5he>GMprLg$B2WkiIvsJ z{OD56Z$ybMdC!S`^7y){EF=tlkFD&bBOIlGt z$z38{_}0cTiY!?#3H586`)WJ81@WDFd3M*aQ4l!M=-A@n+|)bzsR&A6HDjBSt*lnh z8fQ=L)}FcLH{k-EJ4BFo_;NEHFjJFqq`^56Z1a6drLIFq1oSSWIf3g`w82NeffJWE zV*DR%pDK71&gyf+H{|dAsgsNJnxwFSf0`?;$#VIqic1sF zJY92o(-3)-8dd0&@JQ#TDMQBhMW(R0r5G&+x3q?WEh;tbz}LU!n|YP(@^?#BT|XU@;{ufYLku2D%9~}(stJ*~QO#2wupG3ZfQ6Abz~|zg*pHDHX*^+C zQ;=)x3de8p(!5GWW}XpfV0G~GC-ri2E7B{rgqH>|KYE5rdDb;LP#k~Ud+n!Ns_*sd zjKCzoKxMJv#_oV)Z}8bZ{Ki^3P5fW_QvMfmKeg;^xAfn3%IEKb^nVAk?b%JPDm8cX z3%C7s@E-R-sc5>1ST$1itoUIF`k7Y?B&Cc48E4emJGG71Dt_U^rJw4yJA?GmdtxYr zp78I}Ptg}NiR(=74du~qu)j|euf@Z;BX|{iG68$j5i3BEc2%51ULUW!H%5M~PnGmk6kf3Duh4gB{iEYD6h$y2Nnz zGc*kAtc^6b3J@1BVXixNrK_v&79#62)dXtCE%`GxzuTEeR_>%F3ng5!AK;)5At|6& zjm$lbB|Iqd&$-4${1!Ktpj+ejr(=}uymm{(?VOW-2FEu$EUtaCb-H+5+fh|<%0nNI z=@EHVRM0@y_ddzPBxjYvC3EPT#$P1)4kLxlzrinlucB&()wWax$^iH>l4@X2yib(`zpvZ+o(>VK4R+ABOm2Eu@+C3@uSWXf?1W~H;o%; z^68?~N!Qop{MuPl?dCnB9Ih#@ai#Lcq-diMaOLo`PB6t9A(v^ynQ3z#=eW6BX2u@` zcj&fd@otv<7RJdB-iN41<6~U02jT(Z+`UJq&Q&A#7J@q~YS%xS)R1TsI>0VJ z7O_J0*h9nl!LW?1X5q)g>t`12`%T?p0`?;qJaIGurUS-`+(SB#Xr0csOEd4mNH*z= zbK?ox=IgwnDc0e5?0eYS3DxVVV2ussVAhavY^#L-6SG-edk?bj<}v#m@1y>9mw+Zf zVGAvjGI!yCeu_s>fPUUx{&@O1Ias6Uid;{<^cwzuf*SqV3YOJwq#9gw zXThR!h0vx(xdo)0G7e=Tp%JN?Lh8d%K00am^s@6Dr15_UPHeVw0CXBqK203vQY0o2 zNFP`ABLt93s{Uz7Jqq-ZzwM6`(T{Rz(Lr2iLeT-KF373>A4}`jtw{dwk0rpSCl1_- zKsREphQ1g5+-xrDODT$dFp+k-^<{*d7jmy`3K2)M3QZn1iXTz9@8D>vrdJG@dnwFyns^e0*c1jA5mML8KA*IAJn|FN=gP+r(9ANRQW?W zc>Nu-=pRX4#wKU8{wAA;{E8|sP`<75&SN?fT77?2{sx>U6|co!Q5EJKen~w&P<`u+u&M7(q4;&X|PCGKx1=f%B;L zcogYWJu~trV)Y#U7SGm!W87uK=iOmsU7!PsY4N&xxRY$~5*wT4l`CV6D$(SxTC&$h zL4viF4A4$ez47ueSKr$0fz8PLf@R|t3(LHg4y+>HCgW{sUvX4EhA3b3ygYJ(0@r04 zIb?wRDk#!4@KGX4%)5)ir8e0fD$AOU6PvR~9Q#WhvbPxKac0+*XRp*q0s?y=+H=$~ zMs_elQ!<(3q7DBwhgKE{yr}o$1SU0auFePn4UX`md`MBMYgqq+Xk6?W@oWd$AD(vq zzd%%K&Hq1gtCIaNS4EZzbH`{F!=Fj7j)Htk9k0m@^ZfG2Xa7Qj#HhGmy2Dkxvjjc@ z8*Wlg^Kqu}cgR1KHS_$}IyWeF>4V{dMnUo_9*rQU=KYsHVj>6ce0J 
z>{?ypkhgg0Sb4ML8bThZ*aD(>pO@_!0*j!&^xFE6n9An}GaK40jrzDQ;^v8F zO&+%DzV5NbFZF!eDPl#ufA;uM(dFcw4x<;yZQu$pdG`hOQ?5X3e_(X08=V=v7w0!l zOZXaRr&E6tQg0O%+s#9atzstiP6J=lGrEC@n@N5eWRnto=`uYhUT`a8C0<9&v{t-h zpOw~yl<5ae9ks*-G#`R>n1MeBI={9K4R=$VYT8-d7}n}X3=Yz;qpGEOI2B%hu0;La z3)*<0)5kzWpx-w=a#`JrT!qVh7YO;={orycJLc325d)R-Ilgz6o>zG z09sJ|V)d6#oS?`rXH@ zhA_98IA_{b5k=AdFt{t*x>rPq9Iw60pj|>MlFfD_%q%yO3ybOTA9ku$;1d@*evol0 zM~F`i&MR0@+kDKp!2$N%4cR$ngMdDa9x2RVw5dco`D(kz(i#pVvgS~|$KKh+(dS)) z@nOgBVXwJ^cK zJ3yT2>%T^w>l@-Ir+nE0KI=!Q*#c)xk5DIy(E6k}t_?>m4&)G@=Mo!wgutNvu;J61M^)TnI;WUpYUVHObK zl%!l>t}u^Z*cm>-5o_z;$FaX>d#W+HVpy-UEn*+hMDcF3Tz?U|X~g&3)Y{dXJ8erL z87SPo7+Zz2s4xmH^6IW_h%fN6#en7 z2e?5=e}V{3R!n(l#2A(&HFk^_D_`^@0j^ZiIhQ1VqMHHfyvB32?+g;5I4VlT5D843 zMidTSZep?Qm~x85FO;Zm3P-US$xx>>DVQX*qD zrED~TTT*VqR!q(VJ;%63Tc{Q^TEe=~#gUI=uUj8;e#PBR$hJJVyc7Cq_7>r4b(2xu zM!?aPP4ZGq(WU8|=C-sW;^n!?RxYS>+qX&cmrriUjGpl!T5t1z@T6s-yP5c~SsOnz zo|b&o*vTo{HcW#GAkLh!!R~Mf6TpSj2x0>$w)F zr(_pI@?>E!s{5d5Sbq1P!df@2>DVPWIh|&|dIKKvOT8FBt(J!$dw3`pA_ax3lnX(IrMt&1Do(_TJ94y+~_HCv`lKLIJgd zp?(WSXBl@df2`CqZ(o&K(-7Guvq(sVGFJH%N^Px0+6ATNMEd)6991a$F zfCA8Ed}%lHur|E>S^0@xVQI zECZZ|pdIU-AIjg+>A}G1kTu@bulV?$_pS)K30g?DpFVHdCayr{PX|^n*mO%y_aJ5G zGUrh$D6MbQ8S~FOi$V`PBQ8mv)wh%bH~S*}jvjgxe~pct8i+wuEe1Zf9W@l$Sc+UmmB?rk8+$%LqcL{H#A2_Hv=@9r2`73bg@B1B9@YPt?uvWY8BzvpV^gaP;_4(gpA zN-4A3gVtWgD+`1Eaj1d#HC*$5A?Y;M+Am5}=xy>KbNyMXu}ScDniH5{6Ze;paG0$ddVh{hVD1T!VRQ|3 zJ)k59J|;7=Dl#hxpM>LD9kbMO8x4rp00wJ-CCgw!O|>5eh^|H}5WquR%e_3js3Il6cJM&D!@iD{HQ&_j&%XXr`m znMPt6;(Qk7Qy!-z?yW(BQy1qndHK)*hS)vy*KqbUq(EB$qM<3NDD~3=7BHzIC$-M{UuSH z_BrM$k2yX58alf>f|%UbHJR6%7tgt^z|%C^9hNOW4lxG!DB2=4oaYbuXstz-6eL@X zoiw;JtQBS(BM}~a+{@*;(mwc%0v#c{l(I^O&=;|hc>cgM*D*T@-S!vo4d8q7ymeSi zSr20#GcCFCaukrb3&F(Vr2;eF?^tz0VEb$2p;%jJioMj#p(LAxa zx?`xRhkxT54^MZhkpXiC7xI^5=^Zo%sT1GWaIbkk-55j&vVPDMQ!iE_psJLaxFEFY z`q18d`&`+1BQC^H!i6(c)@dYU^Lkxe_)Md?nBb!SdSR{FUu*cm^53R1N|1%xet~BB zW`BVWd{lL9dofSO#Hq@M5V9DOANqsLouaf*+fN}P`3YdH}IbiUB 
z80?s}nfNB?UWsIqnTmhxk%)ioj$aExo~uW)Wt_f`vmTE0gcHw;s7FaWmPq}-C(u)W zt`*fE-TZJ*^sc9<1cU}~4eO;*nNpn$QDJr1!p2x zK}Yd;{IHgA`RKY)RJz7OP}`K8E#M!7z80QfdtZTcpTBG~UW7$!)3+2d@5VrRlhp9B za%jXW)xe*t!l*;mom8$E%GJhfLB=Ve9cjp}B$=~pbs z+7tJjuRj0P{c`ku1p~!a*T$AH>UPu?pEd1`^?ph-z?rIxu;T$!*uvWbO9Zo0ObWiW zl|>k=(kt~$S-y&-&XM-ix&AFdC+D%*QZAn_*Hb*syx^)j1Wr6z3f*2A0%GmLGHukDk=@u$;WOc8?Z`z{cig|bMliE;%ifY`=rG+|!&@8Ey8VVLUNmC?(@IIdj8m9%5C24&$P=p$+H)d*G%Ks~{QAS1{3ge3_xn}6ha4(awIsP8p-H`+0pvj*x=CdCV}NL?JF zH|^uf^^JI(8tpz?$!xL*A-Lx^DrLzQ`EK z5T`UW)Od`JHuVc3C@ZNSIu(_S1Tm-X3QrBtdTje9xRtlq2wFg(W9oa@;3FERTw?IA zfBJy?a7eLvwuM?hJsV;1qLfKL%brs!#r`-RZ_xqamYTO>Pn#5W(nSBNsQ#4lkRC>D|#+l z*paNdq;z{KG-g>Hk!+W3&73zXHnVAW`Sp<}!EQ!@4L;)f=SmTKL;8PB=rCa{Hqa&h zNCEHQ8#T_m2xL^oAH_~e--`hlSlAUmLIguMAHRuZ=Fbs6he^ulv<_?-xX2c4TjJ@%G5Tg zahi%wl&B#ZP^3^buAsGHMnT^gsIj9L`Nz9f1OI`aDxpZKe6WEsMBv}S^$l4M?AyNf z0?iWyY{uaAAL_0>4#uhEd3q(0o&w!g&BFISvy!E4)=9u*3jRHK!r3Yvutmseu_>I13Bo%FW_F+8Up*Zs+ zw3XPj$_`cUuUD$%Z|?>#C|X^X?}STw=IgtBGGY00z^EFtT0q85#>*OCRW1sqErKq;;nGM?;bsogA$aGNe z&j8G-mBYN8qr`q8{v}C&*$!8yjYzV9<9lc3mQDwJo(um2rM`4QpR1GrJi4T?k7 zMWiC-jl3M);dF)mj$Vc{Jx9VAbVk7S&oItd5ZIA%j2EX!*rG{FI6pAdQH~Fcv{?Sa z(z_hP4VMG_@s$DD(m?*&Zi>#Kf@2e>-o326t$E)nmb|b*tMgKSe!=l;mWQ`F!yo5h z1mlg}djCB0PYFmUMI{2n^!XZ`@g+0+3ZIukU1jau+;`#vCE@H^r7YUVwVuo=Ru;QQ zU$Mo%hxh*|b?=L5>m~h@xo1d;a~I09-_Dm*1Dw<|L}jw5c`KPH+PY?T`S7&KF@3vt zJWj!_8;|B0bnyjH#B6^0{k(<@#%HYxSbEs}9Q`;&LD!oRZyH!TGs3$V@QtX#sr{At zNX*5R7}|}dWV}p`pJm()jsJ*Pxj85}13awattp&9mriU8A7B3>pP{)c)^g#K(CiEn z)3)|C1!4Jq!kfUSmN;7No1xfUtI|SF^0>XfsKLEsi!z|8A&KG$NXG$fp))q5gm`d| z8#dDp2u3rzCWQ?=FJ2{J!RTpBMTm~ge{Ofc2=6)3cKGm`d!>M}#>Z%WkRMUPW;iS5 zg)X5{M6EHpU{7dv(FMxQ;MGMPW54lq?(B7@i*Q&_IipPiQe>tgGmU93Z(YPpDRE^NBW zx5;gv>s9xO$RnC3iI(!kI_#JHwoRhO{S0RJJ)himTLzYQ7>C2MQJ7I%|5nG$9WVI4_yqX^{dX&&T zBJl7VxMd`wEAdwdxb0)beyLcx6dyj_tx8i*K1b=hZ$;h#Lh%`0)F>#k9<`h{&J@!@Z(fdB zy_8Zp)K53TLXfPaAF1w?Hld{Nm|3oQf#iP#@8;LouUC7+bqd)uPs9-I+NQU}`W< zl&2XOum@YNtOuNN{>Dvq`S?z}O@YEd0x@#;C%ZC0ApZFFA8S?T%pB21YUg(VHTOkH 
zK$Vizo!*IH-(h*~j@L^W?GtZm&qN)8fQ?&88Qkt&$mLi}PtBb)N?5hPu=X zj) z{bfseMi@mM zGYPsoxK(f+y>$;{Iw_oXmnFK5d+?gPr34JByP*xqKi?nXWmdOb*Jh1F3XDgCy}V1RyHv!Mf89t@!RUNoNhg|7VThjQ&lKk1ZDyu?A8NEROIY)8Dk2H z)r4LM`;na}izQz*`tpDY7!ha)&5=TKJzi;ZyZwqIjYAFe4`nkC_xUDi;faJpcqX<_ z(L0t*huhm*V#6M3&m)|Bw@?3%h@SQ@VWxDIL;J@+rfuv0D%w`riE+4(8b0#@uPn=$ zh_#*ks(ajuUX9=aIk5E8w*N5u>Um)<%4Fufi!jyCct;54b&)V&^%>+~aI>65ChPkYnOklm%LNjMx~CMVw_BiDfqBm{6PPVZ)J(%*h;T8RkuZ!T_ zsMU4CZD=*6lG_5BI#q`Z``%wN+n;0tok{@P?TyrGP~QTS)7}Zr_ewUm9?+vv{vJgY zK*4)qotU>5`qlLKPRih%S_6Lk1XiB^oHJ6;B1@Ui1D%Fv5zPM%kcO5~y7p}NQF1tc zylCnE*)cNh2;pR^=*_)MpdZ!}e4L9i9I)5p|0_T9)^>j|r$0Qcrx#E6^Jm&eEEXeA zBraWS&PUY(Swaz8z=D1Hw2()VYCPzd{^UhRjmm>bHT10<;XzR zRsT5US0z6cZtBcS?_#kcGW?~$KOWl2an|VLK$cm zJ=}9O9=uV6pYTwAgA$5h05Dse$Mx!xt#bo$+}}mwo~AyA+j9 zN*G<%vY8i4Y7z_v;_b=#8WP0u58o?$=cj3$s^bE+cp64&-bY*xdw~2ec1!5iWUJKi zO7frkFl~;G*dK$b*b9$)qHe>~063_H=T;Jcp^kJlCEdnC z9NlLmig6*!*j85y-`fY{S%MBve5kW##LRj~6j8)Ez0mB$ZsRLgM+LmsX?4GXA`~R; z2a0+9382lAKX0aX>Xz1;b$N(+OjdKH)qxPt+vR!70ikgNuVcFBbPek5CJFLm16rQY zQgU0c(&Dw_woEX4#S32vc>A*_rz&cHAX@R%+kCj8MD}!z04&s>u4x{Td|#nGrn}Vk z*s|8qa&v*|?9%l?$8&x2G_d1V>jrX+*0N&NvDelzK-=i@wjw{T+vW_q+cpEa(BcxE z`mV3FbM+tpkwCw$^L=b<2}5$WAwfIxd@1BQ?ob!5*%AJBDq8W@M%ldAabC4|40y`k zyL%?6Tp}Sm mBza@U6rPW7WsR}x;c| zroB9^Mrz#7&uQ3l>Y5J6|YQIy1rJ%Fs?=_sIDFzuquZwQu&3+I)`=yS;dq zfi^LTS{6Z5IUq`vMVPh|MYf6mCP3<#DZw-CJ%J1|W*HF=J`xagB6N=BKIHLWY!lG7 z9yMz3PK@X#DE6*DSK3>apL^a1gqR(S=a(srN+K12P;6})pi6$1R+s{34_rblR+~ks< z#)?Sc<17!Q*bmf;|8hDkt3@|uIJjWwz7|d$!SJH~5L6ZyqOwgxO>j8)gN+1~&I|A@xLlaKWbPDqVJ>umMepkr}U9DvE=5YDhgro@dnn3W17n+me=G8&iv-s6n-!VnA8{W?8##XD;ExMCpk-zQP_AA zrdb}d=kVy0C11gneiu&p7-jMB-cH$tRY8a}E{8=?ghnJ%4X#da9u+Flnad{b(NSpg?}N=w z?m30^#*N8JaT|r{hPsN&H5T&NYR=+dcrK8`A=L~upgq&Hm&I;jT5sWXRSVCy81=jP zg5i>8Y_=Iv%}G;ByhBbOV;}>JZt3x6yyDwoc7?-IQk_wDWO98v7-Kf+UL?KGD!%mR zRoYKp3DRp%->Uv@7&VszR@En$zLLyIgz8fz!%}#AFoJoF?k!VGkgCbEWj=uUL5~)| zFXx^2=i6;;259^=_3XZpt5Q9^h<5D-*=|u+@f$7LKJyzEnWNhE(^V=B^5YYIOYXe}TD~rOdLCs9i`zB!}Hp9aWzDwm7+VkR( 
zE84Plhtfl@U9;I;sGZvafm!?QEcFkC<9ccz8@H$F(^y$%xQ8j~%%-QACm{CYxLg5> zyGcaRdo=`gp8E~(?e&Yeu6$3|y@9*Eo`4O_q&+^*$_Qb^%pI4KVhp*IbTMyLb6{BuZ;uL-k4B!2&makU%s6?6H1sY44U?dBDuV z19tMXLVfZ9ze+jX2{K9$TGG;Xc2t!=Y%UhXIM-N>D#1YU+w_{s(Hz-kd_F0?{u zIjdw5askVGmp`Zs_YDK67aWoWL7dgpE)`Ina%DwAtH`OS9(MV!iM&Sx%d3v+(5Jo^@4NF@Ewzo zGK7J5FJ?eVXQ4LigL43oaN7cz)G*1fnmKg<&gTR}{9uSLSN^IKGxlM*M8OD_N2u}# zY;jq1yh`DD?%WBtCDThyW`ES4qUFZ|yY*qQgLYCu?bob-TA#pwo}`JSd=DP*mhmeg4|TIcOo{~=tkGB8o3zpc+KInT}7H*Wl9^Hik4)mJTSUw9|Q01Pc5>T(N%*P`L|h4E~#cJO}-Ejyb-upA4re z#xBCTN?pG+es0xE_2SV>pBAJBSIU;{EL+f7l2TD=843g%D?idCs>z;Q4a5Zd&8ctwvdih--Gjv?e}(N&8KFN=02rC8SoQd4`yhXFvM8Epoo~ZtnG*rgGGvT3Sv%)`<^rs&3twi_D%qx zF9-KWuOMFC8RB*7d3j3F@e-Ovg5-XowO;bJ{S6Z2aAo{#N6=Y zV!MG2T(qx(__Tz;GAqmHA2E=B?i;fb{NjJM55D9bi+QSma>Xw0Gesj_nr6 zc(rH1MPxzKA@&9vt2)UiA_2AQe~M4TT8q(x7z(BGgoJ$3$D$`g!@2+b9!{v=qq%y` z-yY_i6^jHFV3N8D@}-=J8rI0}o-E$dO=Z%wQ)#Iii}psQ8q|JA zO4PN6jVhnX1F*Zejr3xn(+vA2me`%7z`oEQYU0fw@Nt>-H6sNsR=Wa*rb%hridOw& zEo;=nNtrb#POk5y{U)yC*HthpFCCB($lgX6HpzZ{7%pCqix1#|fgE?z+mif!XU2Dr z`Ki5LSv}EWD!W@N_k-TGAE#N~H`*1Y=bhj6S-yN^E|>*k>M_+;9c3_)`+*KagAo?G zT-CB>SCmT(m0(%-cr&d53`j&a&(n-EXR1Fk$+wxHpKly!>pKC=JX99={?dI_pj|efoY0?0Skv<2@|W zg{=22A3XKkAwE*j+gETDm+Lj$h;tpkKqL;Kmrjg%Pgz@wIA8W=x0{$X@`hvmWonca7nD82w`VwAs(+zSP5mo>M^L^XaJvu;n4aVH{0S&M zR$Bj=L|!sFW))9BG^1SLFv_6}SG$GG@zBHk;}HR=*5Y*=vM7ZGych97zcxB!^D$0U z)%D&CQyVoIgV8H*VqI5tX+jaqZ7T%4_~TjL%*9fFP!m^#l%_o{2WN6m{;hlrQ`J8{ zxXA#z8%?0Ww`RO{GoNJKY%>2Kf9~nHDON=fd8uAK-&@7;1iX$Vp0r;E6?_o*&KUxl=Z{dFR>L6-->hOFz9a~EZv=W{;ptO6`fV*)I zx*P-g%4U5G!1x4C?RZlMl0CUn`sQDEaC@J%^q@ZZx832LNlY2>3G%M@-3!mwJN)Db z&$NL+$4UnBS?(^JG;NSJHMVo$^( z4OZ8A|8w}a zD+BuT5MZbR$?@cox?1oc=L7Xe+V-J_{EL%H-;+P z&RrKcgOSuZv)4$PdLD_5npQ`!v%&;9Rv~}bk@HOfnHKem^zsRvQRQd#KX2|68ckJS zRNOu(4pK)o!UxROM_+<4fz=ze)zX}<_YS-vB0tX zB_wI+)$L8b-^z3T@jogJ&cCZWZR?9#J%;+OOOfH04GXybwpubNx>Ox~m-oUl?fz%o zEqA7K)3g{xl*<=yFgDs3u*|j#5?)`#H~f#}g|`{(;+D=CkBFt#Mt~=iIN$^6 zGg`Cz4DH5|*E&2XP2Y@+)P_+%nZa%Sm?z+Cr?_e^F-!ASY 
z*9Qecd&IuFmKIFm;QJtn7pd*0dlMHm+i|vIN>|m6?!Uz~R_e~_R(4Nbhp`HS|Hbp4H6JlCfFL2K$|*pCAD@U z-)46gXXYc?CD&eB9+|5FQ&ARXL@&9D$1${_#3yIQbpA$h2s`jvv@Gt5 zuA0qQ>sO|-q)^8SZup1jd0rZzJ>&AJZ7j@(3+IKKbwN~ z)4&CBj-+C5fOF60>mbib8-B|nv*4GmXFNd|Md9o{v+Pr?54Uwn8;HJo}^tvL{ z)5fhxD!ZO0!@;0)V^ru_s5W~6mLrjjz*L03#~%Jy7H4+Fi{Ou~LY66WA2+p8+CvUw z2_S()v6t+$8yj&UNE||iQ$uB1k}at$4mQBE<`oG%va}34%#tG0$@s@PVoK63i4XDg>OQd)8Moy48E1| zaRHdY2lC+l*GMOXRrDmDCw`8_&u80%j&*Lo|c3yry*^Ij`@*d}*UjT^IH zN8&xgx+0HY=-PX3jZV|fO3b#zO!y2 z`a|yv%wQ?Cf2j5!wM9v*+6eXrN=x&%k5yKPbgvYs$vZ3rUFI_~6;7IZO4AY9z~Db_ z&cM8TuN`aw*6|E-Y2X$xC?2GAbL3+@UT^wp`kW^If6?`p0a5qg*0+)Z(h`y)A_CH# zLr5d2NSA;}cR1uoNq6VaAu)6_ba!`m!%zc5JmYo!&$-VX&zpHU@Ahx+@7im9Hm5?@ z#M~2IN11Srrp*}FU;Ykzej&Hfk8LvZ-*S&v%E%*RQdYLHgx-JD*ql(`#|0(dVaqgI zPo>hw@=l?nhFva_$0nMJNQERv{pwkbWOF4P+p?`%T9K@^>lDYb5j^o~zZcxAk+mLw zsL^JM?_*eY1^XZD;_>|MM_tk^QxuckyC3-dT)H^kNIC)J9W`^7HEkk$zMUFCusX4L z^GE%iXmOlgIcKwvd%#`!%$9qc3l3)Wsl5qNR6L|SpB$lDo}hU+x5Z9WFS{c;S(M-8 z5Nu%deJj?0p-c86{`h70F)-9aQP~W}M5EN)-tXLl&4QV_d%(+AH2f7bN#`Ee)z5~$ z-d^$0+p=mK1BpB4ByjJ$no|M{bt5zB581Q7j@jNlnCGFPc<#XSP zuV$G5v)9RX-7x@n6CPg4(c;$EnPzs>eVXr)k9f5Cb2_=|6u(ON_X|Khb@@xfqwzI6 zgs^IDO#nskIF^$sBf4SZ#r8$FOh>-&wVJZ|O$g`#a`bQpcx?1H8wK1LJWd^h9?M&* z+lh;fIIh{(;d$$7+)eb5l$XsuMEnyr;?1K)Qb>6>*aBQQmVM|0FrWYVVUc{hO>vt3@^(A=K+ki|ccxX0cc2|QJMJa_q~vD0 zjvZ6XF+2O>aiYnOKEc|nv4#-jP~l%4)NfjZ21Q6nClg7OKL11fg1$hx zANT@zPoY-yQ&HZjSIQ9~2<+>;j^e{p<5(8Cf2wnAxP-5CC14%8<-+vmJ-xMGK9_8ykiMbMdZY)Lr5m6P-*t zo862ykB{ZNe#tuM*2}3?h^mxAMe>CJUjY$5Y2MIMh^l<|TCGMzVZL`|^*F&bbxFgZ zkZ{>gHmzN;t=PCq^xT0pRC_eY$|B<6y+iL1sio1kewfm-tVN80%X=(^9*1PxR~Ttc ziX0P)Gny!l+C9j;cEO);=nKUJ{a4p@}9tLgisSIh?ZFPP*X5j#h+++ zKYv}elj+GL@)fZyr~~5waZ}ONqKkmqUadQQr737)_7Hf5BG&l+FSk%6)3v#N znq;rl;UvQG0UM+QaUS9$Xs$r-EMoQ5i(ndVuH2fR@1>GIDlo=62Xp)==tZ zfiR5>7ODQkpLE%(o}~Lfrcn~$53@U=YNj)6mMh#|Z4Pe!gv-@&;~W!awM!rW;8)Le z>F{EX9mL$=M0(*>yqSUFJWO6e9^;OCFr2JfL*~D&yuTV*hi|%9(@y0#(@qh$?CGxo za$F!dY0xe)ZG#E+w-wAG&6Taxl&41YaNRBv3}+*TDc@^RHYZp=eUEx;$j2?fsjUIo 
z#TgG~Q`7Yfjsc=?iMt9#amnTS3M{&`Uu1jly@GdH{EiaD>QGhh=Sm$NXA*JQ@su59 z^8Ni*gUR4%0`@Hde4O477H+x8E$zYdM{#*$#p8%u2`;w%`5nq#@Y9>A4*+_Yv{Lc% zI?Zn_zjd#+yC4MVPj4!{DY`$x9gE@qOCN0WQn1xiw}ry_{gJyIZ-{6CP6&!yu52EM z&&ZQc*k1SJ8_fHUh?$mp_e)A}y~9E2h-evN`s6}d=_S>W6@J1#d400wB5_b35&vQV z{*Z!*H{G+|Kge8T&Mn%I2f5hrL|h22z43UDHn?Q4-ZP!L$UUBY34Mracuz930-xre z&%oE3?l&g3Ney!U8_A0;M1LGz>%p(7xA^q7ml2ddh;<9{x-xh== zB7b5ckXL|abBT%rT~10A4kR*H8t)=$0s;`Li=hk58x*w+u5v2b(k4h1LwW}#9dF0< zFjWT;=u_Tl3|L|pR z6lGu~dG+Q?RegHxsVH2-Gf%TRtuj$TZ9nWRuur*}^*(ztTHWE(-7iv+_cqS|#R3>+ z;)k^-$gXo!3*wM1HnS47$i2JH4LiG`h3C0Tg`e=WzIb>C6ZQx%D9Uq)u#F> zJYhExiL1bhaR9)c0?N+mhGqk39%7XKM96&J@F8U!eH)EC3ZurL4Q>r}VI}kUCB%cU zet(+eQ?raGi5ZcEdSXvne6xBc2Fy=c8a1tlzcAE&1=1w1BXv%?s?$zLs~!6n zEbZ2jCb|ltIhYwhtHy4WC<1(Afe&3XMr^gg+g|D$^4Nl}&BAG0oMx}O=et-!JqXba z_Pa28xNM6@xUGjjyNl@TWq;Es-hZu0UbNnX)H)@}^nRmbO&Z<_y=fun3+6`0I~}-x zf>RlHFUS;%`J*~beUTt%lZ;6#YN!4;LlVpB)&sBg9IWCb)7j{229Wvk?)uA?46FxobGu(&yy3$ASM={M7Wce2Hiw+4*u^e>%`6#QHa5T7 z`0<>B`o-yiCza`WjgIY#=c?36c=s~wbOba4szdBDu4Ln_raW_}In!^j?L9}_I+6E} zghfcC|J>Y2u%BoD6z(jA*|#B}662nWzak6)7XkJKuDdNYm6?jy&ylNBj!&9|I3BLfCcDZ+|hhV z<5*S&a($vQ-&uWEF^F8B{BUPVVqgGVP25*xcNROwat^wt4kiQgTA=2IR85w&-tdP8 zW{AHX{RHE&jk)yvyvbzrbg6=wK{Yi$bBj#7pJ2=EFSeA$Cvh^wvS+yw;=1ipjp<)y z4%0e)=ZRh01sJivskEn^nk0?5m2Izc^L!bibDex!A9wO!^MF@wE+Kzw377(zDlC;6 zSm?@Ow29RokM7A$r~MHTQ;7j&nk=CGx_zA^siDUZ!Rv5E=K;A(%KyHmG@lqwg{pu1 z#l-%N=VaUib%uXwR)Ux=v&XF|(gj}q(V@x|hLVf} z#-NfiD<4h-6WEvh%&~SKkxjJY)k6pPObU*j;(^wdS7Tz5oIJig2#|A%#oRrWdeTJE zEX>Mk#5;)os*L3O^jM%M(@NGWYTE;;8t?U!*VOOkk$S~PYL@SNDC5-9F3%?40drmr zxt1sxAxSEVz$YD8_dWZ3L-dKNWz0&FFq&wPtB(z}kD*la zDM%nAXy&+Y( z8FM|B*Dn3B4<*qRbTN+&8{oAO=qO;!c{Dz>du70hE4t#UNV#-#A=i@%W~bRTZu%2O z0P?^h-r%>JcETi*py-zEd}53J{353HpUStz2YOzz9LFxOQ87=}bRr#VwKuho-H+u6 zh!>pwi1#K0a1LDpov%Ff?2~&rzeCmY0_ND)XTs@5sXJKo)ma*Lsqf!CV()jVKH-1s zS=SLqSs5p&p{uxyR1Qo3iB_qeFglO_-&z3MG4Bk8D)w}h$yiACs{wHA2+cRBN$WxW z1Elu&VfYaS=!w2xTp56jmiNo~Qb>?oC0hJ2Blc-J)8_Wg_Mzhx#|C>7erYN74ydAQ zR|I7Ki#FpQ;L-j%O($lx*DEQ2`8orc?qEOhc1n`T7_l 
zR2y#PhID(;O>*)Mwee$bo*p4%8Vb!x^KRAB>tux$YTG$sP+oz3%P{u2D87S2q*FZL z>OHkQt#d92(=4-SH&?|C<|er{sn7>5+}kwTwP^Y_;p?Xwah^A_Gj6VzM_QAiHM4sY zZ91rEP!(WDz4Vk2f3J-!d_7>{+S$&ZcJ1@Vf53vGx8jCY8uT=N0-YImn5SQ2Q*cGi}tL9R81OA*cI>MnUR zME^SIm+MrtfrT#3eFBv<9A?^LejW6)<$E72aS#L}qk(XCjGzt5!i{yW?ow-qzp_&B zh}fVZKas3!7i;w;5FT|6akh4f#g)z-&Lm3_$hnejPJ{ivTg1lqA!n`=!lw|~?ACnN z!#%5q;}ef>NbBly!%S~3Chf8(;-`m>2$fqQTk$@#p^Q8B?l9?qpRq~E73pa0XEdn8 zIbU72)YHz-4(`_1 z`4j74tY%8VlH6AV0aDPv`r`E1-}Qi)P719|o7RtiG(Vo4tjsrm2lxZblu0gFiJ|c{ zyj;bJ=IQSRDR1c75jo*AKfzvx_Y$0*x9k#EBKV|%H@~~}SZNo`j11{%t(mmZ1YRQQ z1RY}hz@?+M@fYab9qRg*gwHvcj2ozuUinXTe?1LyCMizk&~}(^989S8nHCwsa%H5j zASQfxU9W=C@14aO{Ns`YWxEa z)p3ri8mq;)?gzUnxWsC~cnq{fKgj3)#jk0V=CsoO+sZq!!9t<(9(kITIgqITHNZC; z*do!PrUg2&MQ?ECt6s)2{w>7QatCzW!4j%HiSUF|n{yRsH*F*tp!=$Zb;GerxHIOV zh~30=e$*VvUNhftE}*`#4siUZoaFF?lT&=TyllNp$d8*=P1!^dbJc3_d+!VHe}pkeL^*)ExMK&XDUnH|n#t zC4SM9(r0*5?z>*r*pDpBFVf>wR_=bU;6x$q3FF7ACx~s$J*Nd?+F4OVE_mf^AkcO6 zk!#gonJ;Mv-oFFRn^GHD(wj~&B&?rW*go6N2?NLA(QO9 zFvVD^Yp)ym#ufJ&MHdaWw3W>BJ@z^lFLWXblbr;*%rxr8B~#F)JapVR!{oSBXxvXE zeMHWAs5^_gTX%bpQ$r)*`+G|k3HkztwkGavJHZ)953x=n(@eto?0Eo_B~HLmD#58k zCS;`VAWkT4NCUb#)Cf9P1lVVlKRyJ2oUR>PZrOd=uZRPc*D)J*Up25)RxtSLt=2@q zAQ{%f9kY}%r75%K#Z;E(n$R(qfAS{7^7h-c|MvEMO{CqR7W#ia-x*zy9PTZZ{`{*X zTV(O)+b-RgJOgsJ?omqP#+F;i5vMxdy0N3aQ9^MxJ$_oQtLMiq_|)jzC4VL!fqQOb zUK{h*_U$y!`75UsQ)c(Sjond&yHot1`rlKXhz$O1zA>wEJur+<^DhXF7SSn|ei5^F z_ZI6yoQz*geV-{4*OAMBrDJiq&6j8%{H2}Nla6s#^4Xq&cIHR(pFR(gn5ovql!cLF z!{ulcX#9zz}z&;K{8Blm72pQ2mQrA$?ThPuVszkoB10YP~HC$I;gNtbJux~#KC>n;k!vm z)4$<{ewPKsi|l<|k5e%C^M5pW?Z!FO^H9^O8NX);yYRc~)ale$0l4 z{|P&_>x-Cis(j0hg_0YqA?v;$cjZXfrC)u@{dR4Vs!L;%HMMZj*$7asDLQo*+Fa;i8|oTv zHKAclB8)Js_H59cb5>nQTz+WNP{*0K1-&P`>k^zaDc*-!W|Z5iH!19rn3~hyO?;d+ z=yV@pQ`hS1ue(~+o*(G9avYRbhkzBbQD}=DRLjD|Gp-KTNeVe{a@@FNzd^b z={C(5j!=4?erbRwD);=F;Utl0-5y>6I^VU^-JxY2f9smM!Ita-itsQz_RRMSsM9dGDQeGEq|pnkz?r3RzPS z)C`o30Scw?Ttqz1>ik?mphD78;Zcuk;G>7(mB&o1>#=K)>oxdc2SAYKD;iw7&*15! 
z-HPhz<7Et^X|kDR;a@17|CA|P#M69mT*lMzP138Qo!Abr_`6^iai4f!cNy@z<)Dze z?fQky9tQrOCGRX+8V{;r{%PJ@!5xD?c%EOL-sO9Irc0W;AAYu5qVlRKtL{+hnebK; z>6Wm8zjkI8!?HCcOcaj*O1g_N+=Ne-P%qMe{ZnL;QXETa&SLvh)?44~0>p7eD=%oE z)b&)EyNlU@gEBiwvn4o?D~N$f4>{Aq9FtFotF&_AM*8jWd-0ri!Ti!<<|%at3<)ge z??sXJdr;Qzl71iRZ1k|x3E>MoWIjgDGrv0<57HUPH%tU2jCg65q2KKMLBCeXQRpJG zq5V^mze+0)H;SE=e~+Q1wE*E8K=2`;PH)L8%PlLc&3#@m-yi1>F1`nx!uG0)S*KcY zyCbu9%TQlsuWsj@gt_;1@An^kZdvC}RaB$BXl2;vTI;flD5<6g%a6?JbQgAVhO`!T zX0$Apu0FMF69fn}FtN1MrjS5Km7gXh8(^`%@nF{vMTLlfYj?#(6+#AC;fbEb0@s)v z^OM{bY9g?TxJo)+FY8^gL|S+HQ9ph+Ic45u#7zsl*q94iJj%j3O_41xZm$&@;#F-P zZvXT}%eOi{oX}E;{FO4-N68}M|o5&GbO270Ll zn|oJ>kIqKzh$;Zhd{g`3=i!_aVm7^_F={_#`&O}b2^%pEwNlvq2#XuS4ETa^g1Pyo zNJG$^U7_&vl&#@awqe`NrP0i5v2QdkFP9;DmN`M9HxC~d4rR$*IHT*9)L`EGLoW?E z!yGba!XDmqy6>c@SOHA+!EsldFr0aT%+e&QcSc{HvY!WdW3}QAWBpmZU9rO)l?24t zN*8uVGsCyObz*}l+onPyek9BhTG4?Zz9i&HsDJe^{S>+(IEG>P*J#flZ=@oMacJf3F8-c=DPv0R2!^>c;M@y zrXz)#BF&We5o8`C_c#KPmxO}k*Tk8jA zhi$%x3^7>A1PKky?pwwEIR3a5Owe^SiQ=;u;V8p7&)Q6+R%AiiAwYzOm&m1F?h79cQvT=$)A8(4v-6q0_W8I}6 z$D%2$$J#n+pIismYI1km^M=~iyIDG>t z1E3+i&Dh~d?hqIQ{JGkNnREKz~)k zYnB%8l}y-x5?>2PPo4+_Fkjj^V7lWeP)q&tj=${)9dqu&eJngm!2rZ41$yX|b%xcx)11EBMPx z(7|<B7rh0EQWJY$sON z1IOw&l$A%(kY@#q`kCyPC7#3Fqg#?~;RY2I6VY{!48Hj0L+)-l;L1G_piA+`Tc=ez zcH5+6PC4r6(?~)1?D(8)rFx8HVfTRB8LykRx=)J{NeM==qoEAb&R1I$Sy(&tdC5e` zvQPKpAM(Jp(PmkQEl=;uK@E1tRGzAAH5C-uHV$|O<2CFf;1}_@s+%T~ku>!!QemJh zr2`I!p39DB!dqv*&u+dS>w8mq^PZLix`!|*%uozS3hfo&Nu7S*{qzeQQiDPg8>DQn z*`t=Z_4RZN0ydZ~?fh^^7AV_Cm(t^K0Kdzxu=KGqN&j@Z0=VBsLs4=`pK&-~JrQvJ zsn#*=qhHxNux5_uxmX1;Tp_ZE8<{`JB5<15YC+q2TW?-(l^q)#|#02ekjuJ%sDKWz31Z1cp%Y{&$P zvRj|A^$SZ`gTw?L=qo#te&ig>ieXY>#?Z!&LRIjwmb^nENa%!A$V)&XPS0A=oT&~1 zR&^r*{5_2ltB=*YvXb%ne#!Ou8BIusd1v$dGmer`^JxuFwi(xrA3LK2zXC+>5-Eo2 zsU8o>QOR=84M|40yF@Or?v{4gN8hoBERjV zWX<}={XvDYy(NnWt;L)N!vp$E8(!ZfQ{DFR+g-b8S|8cQWC0ADx&#gVv+<$63XOxg zU9GlXD)j;ruxL#icZ^rj{+Li}3J%p5LH=^ELQsL>*TVxo3?Hwf11{7-7FFB+xvD@p 
z==Z^nrZ!SPTQTW%O!3O<@Ekc`QrpE_ORi=jyY8}_9|Gbr(={teep`emye6CWJNdBnKCHFEG0Zro99ut3OnRGAk2lzvwy1r#u` zO@G`fZMoJ6ZK<%Ri?`OB8{`4H&1`JKg8!Ck4u%U{2Sl*6%-1M%92EidZx6QCX~(31 z@-H}SHgDW1187L+PZuCumqT`?rg^s75=nrN^HTVGbD2~E9-}<32a4of$Giv-%ggSn zzbQbb`(wzo?8@ns-$KP9P1zgWD}~5akj)(hRA}w`Y~RVP#rrAT)EVGK;BE|WKW3UGOZx$o=3EnkzzLvdycUrUap!ZzfJcN#n%6hc1J8k*B3)mY4&qZVp8|IXVLE1IJ=3D1NVny z<$E@Xy_T1sOI9h~0VqwC@gpJ>ULtezYf|)d;Fmm0y~ucBFrt5;Vi$a|K4nA6rlhb4 zu(xH(ON|c7Bduv-T+eS-mpn*ru{v@ zqX2|KtUjmu)=YhH3%&lZzB8cZhUVf>{N;2xA^-^dX0c-RjxassX?$_teCW5LE+RFPujt@@` z?%#$gHAB)5$M`EL#XhubHl=AuKZm-qjiB?=Ly9 zfQ9)^R3EaJ8B?h8N{!>09-*Wurqi$G2IDVES<~S;+NX4gy&#=U^mfC%&B?SV)~1;z zBm<41%;_)#+&xph1sP1n7nrR#ZgC9T>@2VnJC}sGxNayT#C26YyA;|TkMVyI@X~2n~!QX zeT$()@fS7uUnb_<-MwSiI}5;J$u|LV9o)ChfehX*S$n@Zh|sPBZtrG&tPAB;>wy^Z z;5v;YGo_6To8;|C0@j*T{`$LmF9Q{Yx0(;%pe}FQoav|^6j`Xxy@xFT$_ezVb$G!I zJfKvTeKabPf!ePNRBtp|Y?sW(w+70e#_2@LTeK9HRGz;&jOSSlR`}SI4w6PPU7pow z4vY#_Yz@KeICEBlG?^N^okl8rKW%NWQ{{x3!Cz(XETrJgh_W5Ny5_>OT;F$a8}xG# zDXeyN5&|son@LPR>l{c1wpyYkcL8r-H8n8Te&_kQM|tvxK1ieY_LZ&K1cL^;?B7Da zG8fMC0-H^_br^PEef@ltX2v6~aYj?ZZCG zOtO8k<9$Oki;{Lb*e&Ku`Q}rk>Y`PB)d4bxO-!BNE3vmEKf$*pX2p{~A82wCa+Htf zlEmNXJwB_)-_ejiR4(q?BI8>o94*YSv@EMQ|H6S|Cty){gb;LL7%(_3$eTd`A9~Rr z2DK8_mG)BEmNWdBge*6Vf8ppI_+%dg0#+ zgyMXdMX+R|lk^iEI;tZaU*mrh<1#Q()Wwf{V<)&%Zzv(RnBnt#7GO2STvbw)2oN08 zKwJpyJTNzVEb`UIrsKje(eOl+8z|NIkzMz>gjdy{s@NR7P3tA-!pD#x0Znc?uxnlG zKZck{AZ=wn7o68dyDR`_wy{xQaNv``^$|&eZ=9gNLzrJRpJ=FwD0Cv&Y?ayuMe#|{ zfM)GH<+%s`_NTCkYB;IkB)8M=DnC1G)?@w*Iso-Aa&_vgLfO@)scR0HnKOIUasY;zgAsSKm%O!TS z)f>5QeWirouvA)W;T*N^L?{cljzTm-831gM4nRNI>rg2Aw0w za$dqTjMd(2@o3C?r?!|2zoGsQWwzP_Eus#*<#H6t-Y_6>jOWiVn`Dd|G=J!x zCfTLK;zp!U7mnCxJ(Ivp^K#`R85dn68l=kgpQKYVgXF-acid;jjFuZ2HOj9FPy8KK z6D~RL)t>W;){|Uz$@@$8D94hW(m%~a*cTKRr9K{bENFj&Iwijp&K^Q9jv)QbcwBrm z?58|7j3SH}O**&7`-050|5I*ugwG(P^hmloSg11I&}cTuX}tYsKj5=d0&B%S`NJM? 
z?ZuCnaV=g`qFWk!*((|rbzpULyVc|Cv!65GUqsHr-fNV0tap7n>QlEbmgE?4(`nS^ zix;%JA8??Z?_+ff&pv*Ipe2{N_9@^4l+%27=08D{trqP26k03iAsm@z(VGBfA4s)e zIMoW|1EOEr;a@ko`3h6N){Zw@#>+e&QoAztomaAV2s^nX(?auxitpKaAwHWMp}`0D z&iI=~k~vM=RT3mv#PSQ;)1Y*)Uob%+mu`$QZomI@-;>36;}08NKBZp-qh4Ne@a;CW z4K)>O49W5}%d1?D-Q775WT9kExXSGP#*+mM^`5LZHD@{1mz2&gSf!hU+x3octlC@x zSfW;^J|XgoL+bxqFHLd3T&;gvBUShLlJ^=DqYkl`8!)@tr&2`; z^5y$(M+%B0uh_rXJrrJ7$h$o%774H9xP4lnH9sb7nle&J(J$NmhKBVBdae2O`sE|p zy=Tp+LKEn#HNq4H^=q^P3fB6?pKP0W9D)`M!dz%}r=`5(+3sV(UlXZO!{_JZ+U*nP zD#g0PxmH8Zp{^HJ`bbX39UsbMB=tTvD&4d_QSZkI7dXsyqxwi>6ntleWAhL|RT5E1v z`iMlA->`fw2DlIQ!D6C<`o_waqlEp<_nrzb?Y6$~npK5Y(bLBW+|^uX_nag*xa|Am zn@=ge(HPrEY%0yS6g_U~kDiVZ9VZ5hVceehYT0jg9pa%;-O%Q(c0 zXL7L$q*5WfETPRZJIumS&T^qr3yxUlWN9W*fg>9SMno27(b}5c-|XLfe!dW9&7L9h zNYYI1JIrr042xF^Ta^BLlP%Iq$?80=?ClFl<;#BsTOxw;Wjn*&Frps}5yn6gcGPNj zLO?|qa6G2UEoPVgW_@uqPG1rS5_{5zmGh~n_*o{KzEUyA>FZ9qsE19mr%`L1x>NL{n^!>j4k*-_Og35S zl{geEVz>8C3;r&XH%Fm6p`FO+d7@uFXyYY#$@1SKSQDQcjQ=Tub^dk3&3Ee|7Pu4@ z8d_SlvDbQYZ*JfNS!9_9@HMk+9FJF#Q_nK!%@yOM`PG{o#VSpT{-L6%=5ppG9E(@* z3k$QFnc%z|1Zfq0IoYLic#!Y zhRWxrYx{^UZ{^YAgvnQ?$uwh}Tc#v~d>t1R`-GSln_=oV7!L9`j3XCE9eaL7I@_k9F3*pP+Xs{f3FO(q4B%#(%SbN2Kt&r z<>c;_FMtnfMfsA=b~FN=T48+|qi$1?+@6$3op2$#=SjBB^Ff9OEYB+%%tE2jEd3JQ zhhH}-Y!SXu@-_S%eF2mZhgBFB$R7E!YdsT1D;mpHg8%z(;RQ-}(pEj{`6Iac1!VeC@0iv~@k z`j=Hcs>s!sOpSKvL*?y(p6{JJz8R{c9^G9j?vCJ4@uEV(O5Y#x!Bpjp0 zvM)}R+0aeZ9<@3R3UXExER^+{9cP*w;fzsvC_%A7@TR=wte*e|ZNBs$V&r!yER02C zK$zYDm*Mx`7H-b7{EmyPR$*jP|Bdr?inc+nvjl2NOQSE!zwt(KY~0yA$x=W5EwMyb z)*7S=*7#WX6KF5*be{G#&+ga;-E<-88y>S;jEo3gR~>&^(hs@xK&SAJ?sAbakqjzl z>vvNhN)Ea_eg-@;Mi`oQAa&8&;G5yz~% zj=}!sq|ne6ekj00Q1}tUNb~Sy?%Xb5Tp)`(kN3>*F0fwA3sv>S#| z^2$s-SJ!nG44+svZMf5(-DRMT#|s-P$Icm60mRQ%&FDYKOYZDV^DyZw0i<+EoXqQV zjeq%xf{uyW%z4?%4lHYZf_Kxow72)2E{Ba8s&JAtr`?o}&X-yn^*tuJq1Ey4$X4)& zuc*B*f&!30S>?BSbKG|2VDm0^s$GX8v(`J-B0!PK{65*>!-V$;8x+wIru`Wh`$W`OWP>hvtJU4YsY3E!=E z$pt!th5n7*-E|B6fm-Z?OpD^}uZtf^%y`0QIY7hLf8(K2X>YgTLFPTSj@fv zN54ui=;o+|%l>aX#p%Gz 
zumBKy*-Xq{8hqQKmVksI7I5Ork_dUWV%pA1^&k)s?=_!t)O_=1>UkPYX5`WSKr4dR zdO|Y;5%xI}^|`rz*HK$YwA1qUW#aMTz8B;RLW9JOoM#+89ZZ3#e8$VTO#K~w$*1=$#QT&8{y&Xj}8{|7Xua2)eq;p z65SD(66xM3pU?e;s;L$LLuISttW#>N~JUMc$Gdnr;=n)8)F(?E)3F zO5y(g$2?qaLnL?$lBxTS6TZV)V&pP$XoAVyWd9Bu$~a%c1*!QWk;=&{8^ke(;{6X^ zgD&9~XoWm5lYiJZsN0ypH{yQUTf7#JdNAcB!jOvB{Dx$MBuD)T3eiQ6r;hKj^0B8i zyUCY}(imxlVq74A>iSF7=1k^R^aNoMv? zIJ22a?w>KCJmWi&t2lPBX?d(Z5;~w6@NLK~9B=on*fQCJoU+F^<5~Mu2A%($>)<)~gi{dW-URYtmg*H{$vmAd%u2ac!Bn*92AS&WQeq|AifuU%0RBh$JzNg^+fInxju z{Z;EXdXKx$tE*Sq5Et>i2Q+#zR?$H84Qxmt-;FVu7agee3&`%wFqRpR><|^T@dz|M zlMsztNuq3~lJ(%Rz*+`w^`AJ64b}zbd>#)Y9I3rDsCbd zkoB&8b3hYiu&xIhr*37M{6aRy$dJ0huokFu{PRkNK1HzuPf~ zfKVY+=Raoj4^Aahv@V!!5k@RBTIk%Hqmn~3&*^|^dIOK0Br6X{;%>pnftN70DRmWU zbF~@qb55Z%;H4ZBgH6tPQTvFM$5r9c+>BAh)dlOx$lXc`iPN2j@~t(7fKUANW9@+H z&6bhkKtV|!GW|w@9UX3``=*PSoy>`U9M-WFu?W?plbHC!ImLnOG;af`UNy_dpB~|! zw_0aTUUPos5ti8cUfjC?-13BnVQBN_b>~|A&!9QOgN=`tyM$Pts2p~I z?W?epB%6}bG(2f}@4)--Uaub*i@$1<-BLdX+AJ%S3O9c4jHq|nn2_;e0GK+Vc1Vbf z4r^}xT&Bn^=IMsfBG%O9h}tg}wK+nA+f}DERiw1R9OQXJc4!mTto83t<i z{(V!9aYUl{Wzt!4yhoq=6@S1z(Fbl-B@Wz3PscY5!dJPRWFw~o-~E~I?&o(mFYR*# z+*}9``bT_T7b_WHKG332c$;q&3lmq2(5xL+9ChlRg*LzmCQbBz)QoBfAPVm+dmmI? z2l>qDCjYw&fPkPTbVzt&QrITXp{8E@FIBrX>4Xn=R-0IuNv&jtdCLuV>m$50*F33L zXE3hjN%A=S)hS&E`@hvZ>4tG>4g|x5gA_jH>iM`T=Z3o%V`OPS;f-a{;n6n8QJCxJy2E)NGr7&Tl)P8U%kRH4{ByWNFf6OgdRvEG)w`_9E9Jyfdqh{3)hXd8dy)HiDjv@))>`*cxF!e4 zN#K;6PGbG)d>PjSgSf2ym1HySm2o@jocsO`p|DRzfQIo(?QT{#f5~Jl#%qyCj*7#Z zU$)y#RktX0&U@)v>V)RP)_(>`brZzwHAU>6vU&BS#{{$3Jfg;St&5sWmtSqI{2vXL z%#SwEoL!dT{Hx9rljm87ZKomEwy`%RP4{V@!9k06q5FQ?A0;|NG1lOpGkf0z;6^dw zg=Em1XKoxJb!-@N7ltnYfSz4rb9 z+Z_cUi9dYYn}smIm993i{(KBy$dzcwDtfkF>XYmSjeIF=KK$%Ye`k=d zLve8Vy36kHt_Y4!WonsxG2v<&K&TYi&8*8*?jB+-L0|TscFyhY9)b_Bw1UKDKxb>> zgF%U30mt&);CIRG=>yn@sbyGlxR9G9RF=0)OHtdTV)U%?C=oA7)F- zP5!jb3=48u9631!JfTl*gyP2vJR~eB(?}{kM-Sr+{bWH3^}oB1|J~8P9X}zen<3D? 
zz;qq+Q1pskriIJe8_)53i%Ac8k{_%hVqq-S__$OmZNBk5f}G3W_fq11dznx^u@+Gp zlxY?I*TsER!V)*bDq5V$q=>ei9U<1dk>?5oj@upQM4MJ_&r3Y)5ev0R?*Yw!ljuzR z8vCs!<8&l+a`OQ zUO=R%GYuR%Ie>4$46%_sqsm#(nFS1vQQ?W`Vv=t{Rhl%7a=AQ1DbN(O@PhL0)w5Zf zOYkkcOUXZR<_oQOSQAp*$d~#v%mNY9RX3XiOVAfNSN^GC-_^1)YZ|r@=Xdq&y@$I0 zM8l!=3q}A;v!`U~to4fU!g{4^Eku|k{@fac(Zvaz*%?6j3OU^k>A-pKkMEYnn0mQ1 zU-l=CX!>}z>Xh*9Oxf+v()&lFs<8tT?#2eBDoeG{D7JNiFwo0;W-W%h+#GWd@6#y4 zHiXXtrn?w7?+cGgYhEQ3Y$aqrdC*?=W-Pd!R&P+^LP$!eZ&5`)<4Y|R%5`0BfO(uJ_=WdJYJ$Or=-t8l~aW^_w^GhTOBV32Y2v%wkSbPJF%?HDx7my$-TX0}$&?&P5?Mx)#JTx2iwQEU9nDdLv#tUn9rgxjZ ztnof$!mM~kRAv6hqh~DSE30IYImJfCcD{89Dug5QK1opS@Q;FI4gj6c+wzIGIs10& zVjYLBkH=qQN{kt84HmX`UwvBV8^Mp92xuNp1QbiD{h4azR1@}bD}r7prXWR?pd}E1 z<&Lfi6=P>0V~n&LMnU&mECr%p(8Z8qzK_jz>=nWsw8u}({KBMT+oFA`F=e6~Ygbsp z4fb=AL7LBpoKRkqzPscXpx43(uoP)h zZkg#L`VaPt>r!(tKV??0ltp!&h1*(Gm{ewIDIdhjF`3M2`CRddBJ_}K_uc8fUs7K| z!*vYQWvc8>dYKXnxpJ}!(ejE#eLNbSGDY>h-e#W%(vFAbDDdiRRYvk0;__5I9+gUO zeSE=km3Ljc>=uMP>E3Si~c5v?O|G1DLVjrpZ6V(&+NH zN63D;5mvEsiVO>D&AYW_R21p%b}s$?XtreYuHZw$})X!zP(kj`` zz$V(WM~t=4210*pbuSb9O*_lS@yYE+%Rfeq7)^mB_~7w;rDXDB7;Otwf%GC;Dj8JF zc=Usri9)1ZBi#bv$QXYS4GQ56^wCG@c0o0=kK(pui&y1A_k<=Zo-KP%$}a500mIdz zl4tFGX+5+>!T5qK2Tx1!GWr#DCwUA=YDB-5J8=d?Q_Q0HznPc zPtEWkhc`mw7r~iJ^)kKKneL7Zg1(8wd|uz|K2GG#H_1ujxYug1v=bb*)==KZriR=;Pb?G_#Ra%RhlEeYjA8MA01*eI&L)mgb=%j zVJ)?*Tgoq0mC_jmb@6jmN$Qt(W;&Ni3jKfC@U2<7?PA`K~O{7Cuh8eIFItN$A6L(o? 
zB-$D!;%~R%C;_%mrFcD*dlCTW@D`5;A#)*jO^{QoW0x%I%tmgw+FkZt)7z5^3b=Gz z_&(}BV#e~=;HJ9q&=}XrH73emy?HM%ilU=>TON^euR`JPzCqBQ#xxo?rw6bvl%Sct zFT!oNx~6D%wt-m;*LVKnqW25^Zr;v*(NO#$OpgUFRWLj*%ewJS7#PAj)TGdmds&m5 zLUReZ_sQB!J+s}|Np4IRC=Cr(BvbSkAs>eys2f4yB{*>*$|N_oP{D>_YZ7S#`RM)P zLUxnxgs@FOZYJIAs5mt7y0o7{0c*_Dxo4GA+i>-(8=dCN_R>S~axbTbjK6>1;vp6B z*j}e#0!6>f4bR~pT)E`m?FL63P13#y(xAMu3Vo^7zEyM*XM(*fr|sN@pp!!+6$p-^ z{eaZdE+Pfr14T=Li&?uL-=O zB5F*uq-Fl~PK#<>>~-(4Z=#ro!Kgg{Xkkcv8v4L0a+8>-YOjDnioQ6a=W55MAQKh3 zRdhS5-tL%})NR^vb^VD^0Gxr2#2#POdy_Rh6aW$9-C9)_@ad|W+m?Kvr74UCEtWMV zn@AfX5Gacg);LQ`koVMwVeUPeqGSVyxJ;Y4OhlE6_9}%2y(+inWdT?rs7*1AswfyR zeXZ;DZTX}e?W|Eh?NHw&x&9VIk!oHnKMkU6wX+h04My}xM01`pT8KJ-rT*sb|2o^W zgP9{T$aCVyIwMJc5rD^qW)yn9FIs4$#hN*JluC=}z`E`JO#i1%#c=){sr~YIi#s<7 z7Kf=Xe&rk|eOG(Thd5l{EWmAA&r6UN1;dfkD$gzp_7PLb%F#Lacjd>7-($EW`C=6| zB4GYJ4bTg_f_^64Mb5d+|0zr!17Ud>WV`Mw`T8AhQP`{Aoou}ozqT6()1pt^xI~!Z zAURke%y2u-|7nrximRNB$ldXL$sT^&F37ueGpW45Z8w`2E%rTSzWbXHh8GbpoCbYg z@<)Rx1MKl9+*zMyz3e*i<*<3u(@b)C&uj8#?1QP_t%G5|SCmLQN*)}KRrDZMuyK0^ zK^MkNTpzAW7fZ)!%ha-yGn9GpmJl@00V|4I7-6`dw%!E;^G0jX6oXri$Em4oKczKJ zk8rG(v|ZOxk`Hvx>L%J6Y)cB&@NH|5O&AFkP>$|)@}j;`z1k*)4rgmd@qOm`v%t&F zv}#|IsmGWJ&_2~&ZQwOQBlDQSj%4@gbefAFQC2h*rRcC}=>Z4iCj=4rG3{9+Wo6Ob zwa`b~Je688$AcTt@RrO{l z>l4_6BNb^`eTh#y5mrb_qh7;L^K9$ppy30mNODjKF@HWTyObAW-(TWCLtj^dL?eI4 z0dcTtRoQ&e;Jy>eVvj_8sL$^RN3A0Hi97^noIe@&^5F@z)NQtA=grrSeUj6>(Epj0dUD&A zgQhuj0Ks3K4+=NeP z;b0U*m674#*!PH=RuP-42lBhYQQUhj^(U@74N=o=btJcidylbo!?Morgc~@D0MA%j zeyuQ36s6r-DP6M?y`uHtdR}0`;LUUPP+bH6HZXGjsy^n+Pv$q>iE`{c@7$7l%Fiu~ zqtV7L8NEp^IYiuc&?8+=C*pmQ%Fm`-nt*;$R?7iqzBO`Sb~;DN$}*5B_nzy;1r_?F@AGH`kJoY29dl^I*L z^n26Hl!_psG)(CR{$! zB3fH1jC^~o+yTwi;rn}hNW*|9jMv_Ga;7R_R{vxX0R<>xe8yoO7julm|Gt4P2S3WmRGfC z%v1i1cKSF^_yLpSSmZD0Lo@?r>AzAWFiv+1O#}LdV`YXo2T7Us&^T?QlC}ojxT$R^SMy5Am^@qI2iK*HP~iexCq` zj!v=lv6K`jKWa51U+tNEYwjOWba)NJ?CyLUe}1g>b(n;$wfAZ2u|n6)BdL9vm$6G! 
zuqJlvbktq4gSZy{ht>ieo+HTrLu+I_>D;uJ^qD9NPLz@N?_kdcjM6>)0}85AH?A9@ zCTC2?U4nVemQD_^W%GzOQFJ;wXs$=XCCy{P$R%8}qYJLbAHxi7QD_&iG|TIac*~8T z9s5#G@X|U<)kh{ivi_0Mcw_Wmy1UZ-*>v0IuhGQ6WCy$rIu9 zVBXjS5A^8(&VR`@0^k(?9I}79i6x^O2Yxk`D2nUnT|OUcs?IpdksKs=07>h5Avk%j6W&?$Dss_G%Nzs! zH6CCeetrWd5eOp+xMZQgj3zQLXN(I~Ubv?3gd$JYbU|>4?txS$hE8a8$PQWUu=1yq zPi<;R_}}}=sazQG__kZqxq)*ybu8+7z~5A()Un2E?L>oCyec)su?m8Pq5I+HVM2`T z=%ZFw^U#-2jkNspe%Q%cz(L#4?`Ad{*8vVpRR;-mY zu>)+rFx{cR6`8;9auGt)3|YDuO(*3lg+|qCc#=KvsMH*tm%ykl^T*r@mQShE@X5y5 zMQB{g2+blQX=fVuAA;Sgdt&Xtk(XN!4on6NZkjS_YhMSqZe8RimTl<_+2beE;<>4= zLFyt+g?Xql2eIw^iTy7(0S`#sT0~bYB*kN=v4PzCzEjdE8SD$cm_r6H7B`cWz-e-O zAj3Xc>3p~QQ=h5W4<|$7fj`q-{d?F2taOfYHDi*(NKDQ{h7^Zr!^G>>M{!h<*#2Dc zD|Fs6Lo^23U3^a~CcyccclyDrxevmB}k0wQzWN==FwsoR;HM1&o zu!u68V}YVA`x9^605X9<{o$-EL8*s8^6@Z)Fg z>&ECbXzLK(Kh-Ud3rsBgVW%VjQJqx6rnML5C5Z7M>Gz`dpjui*lE8Q9DjV4aR8ty0 z+jmc1FZ9h2aCnGO!!f0W4Mv~J(w79VB{7$$_(Fg29tilxgYs+#04r*r?C z7m1hr$&qy4ng_8UljynsrU7loM6F21`eWUbFXu81K-d4(3~dX85~r8;+Heke?x*fa ztUZ6^llu}4p-GeT`hiD2RfL0+vcVT0l}N-SSb%3@SBXTelgFE(BdI=TOY2VGYZ*@J zyW;xmw;}<4m536ma@{5`3tO@*_})AF5k7H6tn&sceEvyO==B(8U57~JoJW>czduK{ z0>A;L={=0*7a+E~DTu^;4sZH>9RjpwK5dvd}t*FXEqPN(H%(g_0wsWo!QOtvzk z`<9ee-RiapN?UFLpW9Cu6LOCimE}5C(9OxaT}ZtQEtbn!(3KxmlD^&1gr{F8KmWtd z{nus48wTA#jtPEyJ5Pw71wG08G$US{oS208Ta6X+_RjKD>J8SkVA9$LP2L$zi{wSj zESJKSl$!VeU6M;Y!!uz+uWr0;pgC(LqSPQQHO4vMc-_mFVm`=ZoK4+YFnfJy zSlo116@tt_0uBb>C7gxvuO#E$DXf-Z8|8!E`r}+0=9q>m(B(vM)sUl?`X%3=|A{=I zx07+NynLq;U(-mEc2J)=7H0BsnF!cZSDuxdmp(aIIS${Sn6_>VEO}PoU0~jlQJC1IGusROo|uES^~(bcXWHVOoLi^7S- zFSE$y$^+A%%GTbGStIx-f=F;Yecf+fSu6N<18B_Si-GHZH~(R;sG2b%^u*7z*sl*o zT9*@3qf(VqKV+v>#G2gI(8(rln01Y*dcgqGD~*<*Mq+E~sm9er$E>_G-Mv4Y>rGIV zC1#+mah5HX)41b0^tiw;4o^>0OJR0AYdCiR0G~KgP6dO&XZX-*M`u=_7uCmmX|Ap{ z)~j*M*vx)O`M071qsI!iQX-(!v^1cLmC3C_FZN#y!nXB5cJe4(kL5*DNH@kG`5Ve! 
zg)9d@TU^D-uz9}3wRssMzM%E$lwHD#%SV)bA~@={dRLd2$wI z>j#==N;F<`BV8b(bbYU2D*9@Ze=SehR8dcQMQZR~DsGf1SqpxI@jro3l6FxKOP?v~ z{eSg53!Jsz4Kjn(0(XLSBNm}G6(HV_MZGHo&$L5Ba+lbpe3svQ48PBZlQeQ4-0SK0 ze@3e%L4y%GJj(|3prqEp7f=&L|HRxb`0)uAXumqo`Zb%!EnYkVwxi@8czN*I+*t_i zDo8@MO|a_QU#|R-bM4v4B$k@o62X zTcxOpKB`?B2vIlo>OKNW6SPL{(~#~7oY0&`?NC3%(rKPAoRIWw#ZAfW{>}lp9WejG zw}mO|amPT3>Q=mX(73VAQV@GbtG|Lz-vg(rnn;F3Z=r>ZR^?xO>r85?lWlegi37b2 zOF2=DY-eoR!`zql;!S+vlF}0A5SN;eg&g4ZkdarNY&};y z;<|OpibME;IcE!a>7v+;2k3oou70uIwROGuTMWz|upU58uDBm4bEp5%lilbLc0VDu z&WIP*-oL9$wqi|w;yW69M!icDv$1x|*DJt`77V`*s+(i1@T!PI)v0-1p}J7v z3!eUa8U|p`*BZMg;u(W_bFx&j4YsBG*7(ZkF7uJ%&o)y6+dEckr)`2vw9dpDu3LJ2 zv{6c`Ap<_}upP70J9S5QS@+#0v0F2rm>sj6&tG>LiUEz%X)SU7LG|L9O=~|!l~0TE zJ|79X>%+oNALF2DOJT#JM-J2GF8_za+Wz4%hCuLtSEQaF++o@xsf@Gkqo++R^6(14 zfOW>Po+Ys%g(aI3v-p{Cwsm>mWkRh*$3T1)g|3pylLn^E-~_w2*bn6o?v_K~RdcSD zElpu3?|ob0#9z9^4EsMSjn+)x&QfQpl{h~87CWmPv~8|-#psaobdOrgHp94_Nb{gG zL_k`p<5t8;_OySQ@3ZMZ#9af+)wXwxSf3i`nzUT*D3Y)N?)wYHc0e zLaVfl(+cb^<9ij@zn%z|A@`_KI?ZlFZkuLZ>}(AfN?b)RN4v|8nS_X|p}7$oYlgFO zM|F%>o5l8v#RHu#E@Lvs|JxG@5{ll2Tb>o3_LrRdXfk{{d{9(6!#Axp4DU40GOmAQ zEAB%Ta3<|^qjj4xrHQF>S<~$X632QX)EK$^NJ*-LA6annE?N&Z@FO*EN=B&G8vFwI z?7U916|@Ej-O_C5-QWy5X_Z7`$Vh^afE7OeDBxKuoo)2RQt_w0`hCGxUZ}Tf@=%FO zWa0f#rRdk(l4&_oid?sSjddOxaO&?A#|l~tgaGotG71xykWZVSq$5^=n`RGOid z-JjP4_T{&?7VmsC9tWR_wgwHvILv_XE-Vb&zWQ~$SS0PpaO)qMOby{Jze8 zljIN8IZ#DNOw5@I!d6{dN??!?BFt51#83YWeOp=xcdxO4D4zstLcVE*w{RzGwp|CT z;8sr52f4$s{%{d$cJR@%!oQXn=WE~i+Q{R{fMwSTrN5JygELd|(0$7wa{JQ9o_&J! 
zU|2ii<8z%HU#!E?M*&vt23oCEU}`m4GBPieuE?PpbLU_nB=r z_z-ey`;>=z65HU(kI21Q>Y_zekB!8#2}d1C6{?#cBFBw2H;9 zr`GR0Pao`6eBJl&sbfPOTDjXUOMLg*;!o(9ceig|k6`7{Uz^F;ZWSKvpT6rg^Su|r z-7-NW3+FU7HEjcCN140tZJiM1B>iWVJ;B(qb>)0fdwasowo{^0KwBiTX0rI9p}q&_ zIQIC#gUslF!tnV(1Jhz3PDz+OU<-_5;GYCMKHv_lJ(fLtsY}S+n}$_42JoY~$o}y2 z*EC?@Qzfyv9JhV)9D{9*6iKGtBx`d2m5lh)p-kSAAd*%wPjoHr!5>3ywYf->8_< z(J1Z+r>MP{2$AQb&j6^v#$hobHIfN%GSIw_(f(OgSm2;;lTJg!Ua+ zxe98n#Zj690Dt8{TZKW0fOsj;Y9dRH+J&8UF`97olL2)cD)OgbdK~K~KCeON`p%;s z%&6`y*Ge|rcf$R3sM{Sm0&-QB38lIXl7i@cTv}P25e(T?PMnRM_s=XP0KAZKWFq$h zPjq5Sr|yl?wAXlr#`10=m$sQ;GkH5B9{Sa012;MydQXW?p{{ikA~+@=nTj<>6#=|(w@9Yxse&7z z7+MD8gJ{chev2Ocgj$A8yM8@ZOqkbHif&M}NCx{oq@yO`f0HBcMx8GjILerhS+6!D zXijcfvKGvv{B|vR1+gb-nX3j>L~|;%;NIdgFR_kq1E|&Y=3#6HCZM#793-#e!1z^Q z4#KPKP47yyprJW_Ke8mW-A|nkdI33ufH@ZDYK;N%dUr(VWtp~-Fjx0%oOOBPx-A!p zPDw?J4N}?a=pZA@;ohC`r})T>f}9g z(=t$n_7NE&c>FTZid;PbEGdOPj8^1gF-RHLiAFAn0 zm*4$Dd$I8T=F84c<9Apw5ne$N+-Y?A8ss__$!$i+&mWPW$t%90OCV$4ucO{h$$Sic zVQicFaef3NbESI1#PDSvkErgq2}RnjOcUHeZO?V$rakR9oB`9iZ?K4H_5|FEPNj|l zgX8+wsU7YBqOQOR?Hm34oIye#0H{;(rn<(N2JRA2OZ7T9BmOdz{l}vLa&F@=D)u!7 z?F#G8Csa=4wh~(#o6KKtOg1X)3NM_|UNdnlXbL2!>Lv?FXxV$Q`ev~t3kStr_y!P< z=+897{%*7k)^#RbC<;?-QWPf4NF{EcaCyYrQ<2%C64siv4_}rSvrb@EM_1p?8xGIS z4S~*|h9mrW!4ohm@<>z9M0uMpvOc>DxQE_Z1`%t_gDvb!Z}RSk%pmWKagRJaFbYZ} zAYRsO7AG_Yhz(qnOz&LbS&{!h+{?2er;JCRB{Kom*uGUWYUDSOBJ)UIv{b#q_W4g7 zjcH>;cAQk%$JD&|D^_@K(S=Cg`!BK+%bI?)S$u-S?P#!--9b{s#@4KtF5=ZDVdmPM zw@2RVnuuW++dsI+aRQ$KR-2{+%5JQo&a7Z{B|1#39A6sB)^XK41>xT8@ak{(3|Yy# z*9r>ff^M1u6UMEH2I@L4sHzSDED%u@>r}0o0JfiSosv5V4u^f1l!_{aAysa>SXjgY zfry$ve>C8feq1nm-%Lrqwp3foIwzMLS)IX)pc36(iYcz&F&^xDZ@?7&W}{U28#75* z-YTIwO@;H<393+y0RIyfd1rbJB`iZfXUsdT^eek+L5W_BX+NbsW)a5PB1rpT2wMc~ z`{1E8?SWYHxd9#4y7!h;I|6=uHQn<2BdQ3bz;qHu8}N#Yf27_-+Oq6KtuC)zaELrH zs7njpd3a*{ScP=Xn|HGy7A;H$3T(_$6MjyqEA-AzBG}fuD~2@2K|sN=dz)O%pQ;$<80>lqlA9KWUfGdA$31I#-F|DxPPTAz zZWo@a;I*g(SJEr#SaBtb3BUnfW=itTb~Jyy!`YYoynTL1@k3EJFx?QO;dkHtF7Zlw 
z46`ja;mf`8<>0|ktQfU(I+sRa`BmS2YJ~A!O>d-{tAQoN5)T!#@BlA6*={JpIh?Hu zbcowh0`Wk{iTio>C(z~W+)KfJ!t>gTw{H4=0T4XHr{?x!vTL_%Cbdw2DF5F`5$5kC z9K8Sv6Xp3LESRe0139wws9_rUW!0m#oEBmpH_2b;fK{MuPR>lvAR9wx?GKLzZQ+4%@@Oa+8xulzMm0 zcaZ1RJ()xl@wl?9H161pOTJ#N*;8KV98grA${4SbD){u0f~Xud`5P5T*<@7S5^`Q; zjqOye%K&~GT< zd+wB>FKnsPG&tIA-=+BREV8X%)BBX!3mH++zQS~)WayA^KvnNQeZ`m! zSc{D+g^`ts@!#9;K0_QGsoQmmR~kv#W48rC0TP1-(5=PI)#(;_W6PIBUYZ^V^@2sI zf7*gW?QWT*eIYn=5nTSslQkjAhC5L)PxfpUw(DLxtcd1mq9{Y6pMRQy;fRVN3@k(& z%0+SE_u5R9i0ir;TzJ_1X$z~t7FP-vPCU+$x8ATojWncR@@I%1IgudGo0RS~0nZ?}z&J38{|< zyOaL75&eWn`Z*`8sD%cG*yKiggi$U{*gch|!@(0`4QNe{%`Awk$e}7O_e>R_7kIz$ zgFYYE!)g>pXsa`>S}WL9K(cM;PY?N~uI_UCjMlq5iB(NVMfq8}wZXe>C6gC5_?WQR zcz^Zf>Nq+c9e|i*{6?@G`kU>e4hjx9uXO?87uai#;&q zZuEDVWBZ8~tR)ort#?Yb6ipsRI2WPa=$>co(>E{PyMELr3zR3*`b{!!)&9T?8(ZY( zCrsFS$fQ{~ecT~BFRbhC&;n|dLiQjVeR8N+#?6f`Niw17p6yIbCMqftiQa}fDO_}7 z6>S7FJ1TGFoFijau-*p!nerIAU}z=%L6(*AZz`5kV)HlI4D>rtM=A2~RJiD+88Abz z*r@IM&wzVa&bUnCuR{-R4Pd|*Eo=Xd58o%;?OjGoAt{Hflo6wC_ubmJUpz|M`zdP# zaDIv6k1gl1U7S>ZTUfe-E60kf-9>nRPCiG=XuWr>(Xp2_(qHQwVa<;ps9N0zBT?tm9hCG3p3pZhI#Z)4XCX4|L1v-(U4Pn1ICf1-s?`A zrfsX;(>!4+G5GbP0;O1&*9dtTJStGf-+06T5&RLu2uB!Pau}$&tD$WLnFN4Q`S^2D zS;zO5o&{r3njai#bYyiWP#aI9&~fQp4Q!JE%3M#frNx0MCP)<_LD) z9CXMIyLyb*m-y>86@PAUvWWCMCc?KABXgcs?s6VJZCo|S!&xPQY@%UvVWy7iDCIXv zKDP-!e!NVk1%6Nxu~c^Qq2MM?w?y0SwAr2891O=A6CHGBEhH*()`-rGW4>+b40ccS zNQ(=YO`LF8R5zJ=iS0H1^h;}Ct4n>=i*}<#zXG-v=OYEEGjfWp7+*KzSa9RD2<7o` z9yqV%tloQ4AUR4=qq=eOg~o9cza{F9f->uuwf@eCTk`C?G#&9>vP-d%-|@WlCI2ME zGw^?8s7KNoXW!8LfkLH1>LlUNqh&?34*3Gck`u5aIvRUvMErAdq+?U5eGIK^BsJ=a z!BCCz_!m-XTq!A2U?v%H4@?dsR>Q!!y}I z^YfWs^<=jn2`mtArsy96>)U@EoMlHXb8^CuM_|#F@a(krk2mYqf?c}*Htpcc5=V2e z*;k4LwGo@LBABw~lp=iYwu@z+<9xcTpERjDhj~slit+jsc`oS}jbC zxK*TTPc}mn>c!I+4jsal6-zG~0=Njo_pO*EP2F>f|zN&PcF?FMwh`ErJJ+kf2 zGHcoDrJ2Ji_=%wyqu@dbfzR^I_Y^>keknhr0x?|OGH0HmnCgty(douD1nmj!G~BhtNi)S-bhBjt$dfxW)@eq0q|3h5NC@9gtDk$rs#Vm`>u zEQwT6Jw=bl6WiDG*MZ_S{hG@+AMy=PO0iKRw}jvs*_5eXstyv^5T5jV1-31wty00J 
zF~S)!*@p78dW~&REgIl1#X~M3j`6P47!(}eXq8I-`(n#CU&_zZM)j2y<2CKB01?gSpNjQ7)S;4{w zXN?}nYuO-p+n#%T@y@xS0#M~uI@#b)q-aA~7GRyTky9s9XC%Uml!mokroLczN=DL} z=UvwSU6KY@B@h1BWexWW3@3z2Yp#Dm(3--7dK)9@XmRavBH~ek{BulJc^;&N23L8{ zSo|urHl z)XrV?(+OgcR=)D>adVB>617W#bJUk*`9iV3wgu&T!XKetpTq3sb~u#?b8!&;r!1)E zGEFdq?`Qxe#qmJ#R0NNnZwRjP+`ZkdEK^_8*Xek4Mz97Y5wt;da?T`c^1xIwp$S6y z5+kf8(JQ-)H-#pF=XSP_x9^XqdvUAAxQ~vrai(HwsGJ8oV!QRpDVQ%azA=_cFYmF~ z-n;GpW>f7Kl3*nOmb*V8^Qarvw{nG&wtpUX-vTVBq3fHT^#^>!k||rK5Zc!LdOGuw zhHpmfl{zxEq;VF6zRw?aRvi{-V)6{Gou3QJ##;??A-WRIstMEjK^x-Y^b zJH{=opX0y;oo-*7iiGHvk*v9QoaDKD&eU@3#qv7Z{DjtnkEgr8{ymT;nms4_M3@z0^;>kVJ@2Y=Gv5lTm1)j=oqu>8qRXG;WLRK%pWjz; z^n2Z!+-ae@#A~XhS9rosuKiR9tOlFZu>?4!Io$xN29akx^vL+T?s)zJ?7F*y^Y*jI zqzoSGH&J@+9y^6xwNAr?g@K=<5$Aew6!Rs6(d_eM>#rhbnip^|j&M=mNJB#9YJokf zm+MpNOJh%O_RL|FcFwi*R3g5&wD?O>+OuNiiqNLE*$p^TK7SEbG`WaVsK#l@P->vd zMgu#@_--IEFBefLyimyqaA?!}2`L`@qIqeH;-jP1%t09FHrxSX<^sj-Lx?~Z#70fb z%2a2v>vr&xBt(r6|4E8|Vq~uV9T_aCAkWpyH3s&l-mfC?z>J|zeU9PbfM`DWZvkZe zI9%>K2w^DwAcTl>lu{54-ilYV5o)=`4s4^)H{(vV_*f*?c@`SC4PZ^eG z4zn3!Dp3?h{-3oC9}wj6p1R7C8p8p_9**tH%c`l?Rn;QTb2!LWCHsL9m_d33s(x;r zW!S602W`Fu48)v>bvotlD*$Y>v=wm6fh=&FZ8pb?$&#Uw$MzeUW0dA`v(7iA=Jw81 zQHj9#)VE4eeD`7-f*1@w;D8Z(`?Kq|ra>oGkwi=2z#62=wZj@92oRDT1S$a%k3 zyaPymDR&)@Q}(82Z!92~%-M%-eyiLZ|aiacO|7PQw3Y^Ca8Ngx<-q2 zZmM{8mXPnv)T&(|6IRnt*?CrfvKhT$w_V)Bk=P_sHV#~FR4BX1){}jle|<1+SUk!u zwJA&YV$TFH9Bi3EHR1}SnZa9_bS{{5*K1Wazve`XUkg(NC!g=&y}9FHq{L6GSxVIB zmim*#=-7d1BGk4}h$$OTdCXX#_%N?;f7UnnR911bleCYSkiTNF2VQfyG|{A6^MYrr zPtna6uhXwMIk*CU(nq|=zYOFYm-QB(1o-&x+Uh?OqyJg-HcNG)=y@!AT1ktkY4n?B zZ^louZM!e57&gSV$J(^fxzHV7=`qo!d!$|H$+d}gka;YKEir&5qM*Jt6-#B~X||G^ z>y0K}BY`moylB;P25_3|M0NWPq>VTc4{duq!o*j9iGWYt@CU%;KAp zMlMM`pdZvIf`Hdxu|eT_#CB5=q?~MAR8Zk`N#*;`n{RW|ci%l=RCD8-as<=8WV*S{ zDD_BwLe9LWWs+UOwX6XCo%$Rn0+M*K#8AlE!H+`;)|o2PcoWtcFg-yp7XKfdj7x6X zOrptE7n(-b+q=BkHm@KjSwvXWO|wc8Jy`IgbCusEdi#4hcY@ydmH=u1Xc2CGTAW92 zC&BnUv`HU7vlv=IMTDpNutwNH`)sgt24nCuyd^w5usEli{GiIcI5cN}J}6L%H94NS z6ko{|IO3w`IzNOc`-Yw 
z5<0^i{EH2~9^kacrk?OQwXl-9v2Q5C)jb7@K`(jkyZ`%);qtg}hx43mQTAj`-hm+t zSs$ zyFGP~q|3QEm`WwhxQ?6C6-@T+skRCAUE{Np5w~pTL|V!+PQ;k;AkrF*gx?f}hld)QhDjo-&8#h|2A_22#D{^Eabf zUaX9oNHYkUJ6OJSD6Ju=((&(R0`UP5mo2t_SO-(}uO8f{G@P(%WBF_!N}t}xUm%!s zX9H%Y<6`mK%ADTuuJ??0o3UW8eVQjvzkKPJVw0vKhG*$4n#j9P5%O43RDY}}avw}{ z{-aLh-TjY~d(4AJg6r@NG~0gM+;G^?DVIc>sUd(i#+(I>yT9(+rWK5uFY0OUv=ZON zVA>r$$C|{<)bbQ|DNMEHxiJ9C2*MKPRe^h)Oe%jEhNrf1pdQea$2-gqB;+=R0zF={ zE-;Jwo^Pfb$;z?XxGUL)6Mpife}{x>pKz%XZjxdoOmqAg(@yO>E6X3ypMT$*1wPGY zgKS(`Q1TS3en2ZBm?dvA-o&YX{IK_}oQ3HSsI(N|embqn3LW8Qr1kBDw@8w1b`_Ad zU0vK7UV+#s`$u``(?{c^uA!bD%8v!Q@=aK*Kb33I{mX(`n;=nsP}+B9yp)8m|Cl7sjFFDT&D~yNrE2lOCHyYp&baF6P$V2H<)Pp7PVI@lNaQXo*dyd&%y&`&!e+w-pag!)V^3keMro;jFWJ~vII`hm@&#eZ^zsPM#b~I zl6@Mj&wqDx<9t5w7=WII`o)Ao5U+A|{p=!)Fbt)XB`s#1JP1}dDm7F3*yo>Ws_EKG zroSCOZ}vx?Fdq%1-P=T5#^;EJ+ z=7!Xb?O#I0X2&vBcRxtSI@Fnn4NdJ^W-wnS%2&FNXqq^9C3xyJr=p1}p)Oz^+)Q+;n zwSoWaNS8T;y`18)9;J)n08@NUj<)O&d1fthv^te|HZ=0-FrBglpjtUheRHH=U(Id|kNqM=PAkS_+-Q9u&)o7|!ss`>uXi(1=jTlZnhEWAmZEvqt8U{&aTv zQ@9+y*6Ff7dt_laYFJloiU+O)&k~m0!*F7({V7x)016@(SLsK?f(_0lI=_DoSfHn2 z9>}}8wyQ5(ZchKbUJaV-8U@bAZ~M`Gv>=`QK+ zmXdCU&Y`;-q`SLeVBj7-&)>DL``-K7ul->E3&(?XeAg%5D;ECpt3PLkHO}C=m`W13 z{(F&nko82F84tUw+E1SdpH;$tLPK+~&X^Hs6f_NsP~#8gwiqmvj;v`XHgP`ETt`0D zGLFz(5pRbnZ?=3-Tw|le7+o^Cpz2BC-3O##xdqkJUuQpOz_jf6`nysy%0}1}nljcm zhe)SCiU^KT_#M_f(4~dI=FnNecLwrlnp4&ys&u%3{O=-CyBYCpVc~pf(qx!VkZ*qG z_*oE5*QAaFL@GV5F@5CscyMfaFAlRN!dwlKd*3hyNH)g4%t9+q(0;!qIhNsy-PUPe zXO{i;-ggGeiw>olq4`${>aY8c4oI9yzPr*7pBy_=`{)@6r0JHoYon21Xv`&EAPE&n z?lh>w1W!7@Uiht-ExKWx%t8l z6++4KuY%T*#7FF*^A+Hi-+(E@z-x zpx}FRXplg_rKM-o6(Hw(K{MM@0wI!CX zuhPGIIRdf6=-*?a6o>^N$Mvw^hV+Qmu%oh2B3JNN(hy5vQfv730I;=ulUT)tjrh?! 
zK(4gak?tr{YO30ER13Q-JFkeRm-Ry)zdhW$z#VR{sVf4D5Un~0vbU3HV|pqxmP4Fu zKMk^H?@#Xn%WH;J?Z(HNqy+bSg2Z?uQ@M0Dk7(T&iwfZxM)3)lBHwKQkXm+e>wqY3 z@0zw2C_@rS-h~9`*K&T?6zZ*F7=O>Z_ABO@0`y86KvqsbXs6dwZ3J()zgdW%=6W9> zoV`!$Te@{O$4Gdd8!=gH0=Tg&-fJ9jq3FETS@=MRn6D#fNc@>0&^x}bBDX%r$jT$i z;H^X?Iam9J2%YITqXadGXhT=u_V_xTIT!UIr$~pFEUcfATjuG8sqFd)tZl9cSh&QX z*kQun|8Yg<>6Ik155tLoF3reYoC>wQ`f=JGAb*XwpvyO(CS969C2v* z6)~cFzhVRD?QYPk8xNa{wz1dM9{qLcpLBTEsBfOMit#)baA!m%D^@SSMg^JDm$&QYMS zJ7-@RB2Xo9BDLh#kw0?pvBLP(7JQo7;p&;c@{@d65)dX~d(KY|4v&46*>*kc(HbGL z=lbPNF^8{BHyb~mKZ;+$#wxeVN?Q8^^&1 zzI&aiF-Jq>ARHt*8xE5)U7i-@jM>A$#!4XAMs0yJ(_o##vetG4@7k&@FSr-XVjx3B zX?xaaQ9zl7GKE|Bat?PXp*$gJ;G*418YgU3z*q3YJB9Oexb@igR{=8lTU<29^`nha zj10m~o6ItZV-a)P3SqO1z#}GyFO`asP31@5!+ggM-E@D~g!LwMlIJ+Eulb}VniXdk zS(x$yXn;MSZ`jjz9My00o}-zvtC3?JA2=NseJx>;?(}WdF(PYY6QcFbl*YU@M<5Fa z*f66Z5HAQ4E`qzetTyloE$y+m?X`6S0;Ai9fmtMi=xiB8KK|i>;RMLcD z=DWN(eby>np4reJ6YYfQeb8{|vs5xXxvkA1BRS+tM#Tb8MWDzzITZOTw0zLB7$TD< z&(&1ne$}~h3{Ek4rnTBx=DI*V!y`%kRUL&nn$5*sgn66^PaY$Cn83>o2Ad=iBwGay z)K5-Ph8%4qD4rtK9pIQ0pIv22wu?r&j!|dl8(z*SM&nw5SFWH@3nfzSiqIpuiRwx( z?%au9&}Z|p@>Sc}djJv6>T={+w)cAqaet-k?r&4B!+ZX~=0BuYCWV?t=5Y+dsSbNW=o`DRXeg|<|$+!9XMl%IY_9Th2b050<&sMoNjx zea1N%pE=`hmQSczULxtZZc(?WrIrb*(C`uOtK&t3BCCscbFD%$p?GZi8x=isJN*_p zsEwAh=v**8tEORVtozKS=H)jO1te4+h;%PM57k1vEmOq>JJlSY7KRxeL4I8cVFjL! 
zIl_*vrACaFFpWKKMw2HHI(`EZoOG^u`A#3LEw-?nZZ+ey%;V*DV~SG^gFf2Ng4W*( zo{5BP`KBhTsfI|+s@*GNGIb)g>=o10fb(Zet0$|PY3>IUHHYh`s*pPCptVTilMa}qpW@3~nNY}(L+VLEg1!9X zIpphQx+pU--F8kcK-;>qzhwpYL3X!sQD=n^*3q~8_RgL;B7Y@rvnaZr0sBc9u_A7a z02O3I`D9{K-c%d@wE#R8_skA$TJA(zAzl|%8fsmX#yn%h?ixH2Xou4o!H>=77?YDo zYrt3kQuAU+hLs5cFmd9JvGI(7!1N`pvz|t2XEJuZQ_R9KRvrf#I%~2Kt8J)Nnz=1@ z``d;jW=Vp&pV6#2m&8*Qy$)$3Kp&3NgCsO}@UW>{MRREoCxcOLqUfz-(@$$%IP1p# z)GL`MAI0r|A4sSr@&({a<;bBw=ntK{;n+#@3+j(nXy1D^6|=Ut-hfXh^7EKJhG`Y_ zACr=}G{9|BKRGfqGKX7{Z)nkI8WieY{P_L03fatrxSw>cT6aRYe(mfFDV!Ri8Xgy3 z;?u9%Na->!!_Q4rwi^xds+mpi#2Nww7`X2R^SboT<~f@80)sKme*AqXkuphe=ipZ> zPgm`53;J)~U35m!p>06y;tJtg^=98u7!`8|R7@Z~xMT%!?JD>B3dC?vtA;VECUeNR z7X$Bi)NRf>k-;-7f#Y=qZ)nr3YS@_2Y-T1ayfZucn*uka|6K-;g!wTtTd;D$R-wpEmVqTCH$5tY^wFzIZV zbW;bpKmvK0Pc$!99bswTV~G*Sa(%Z64aF%rb>z|)Kja|Gv~f8v)*)-gqdG3`Y0g13 zn!=(`8TJ8yvro)pGM_3qJe(bp2~@$TW<;WA z)5#*|XFNHxHg8y*Cz2xcOx|zen#Ah>C^XKDu83|XNeKgT8>9Qbq`5B)N?5bILJxhY zR(=kdXPmj{XR>hLPu7`N;PiP?zFi*tM`jN5e7-5|9}8;p*7RaFphgjwzA8f9wL+{d zYfElV3GviNI_dzYKRFQM8{2X-^y2JE%XEXxXvX~g-jPq&Q zcHl3#S~@|llAww|accMY`utBdS&!uIuPrV-Gv)UQ`+%J+G9|q7p;SS)T!3W_@hQ3Q zuzBMtpDU-c&|~hS(~bSA?)dxb-eKn0IAY~&%zRX3+~>-u8}WgmPRz14 zX13FPWj}T$<{2$uqu}9d)4x+vJ2O@17F^R{y6cP}57E$-4=Q@-?X9oF7j#R963M%A zsSIZ4XX!YLn#8iE!H7s|2X$H-S6V%`u&vVf42u9U5eJdCid_}c(;(SGpjYDu& z6R)oYcxUp{gYXvbr!&^+A)VX%Vr06(Y967M~9n~Cp*l`{)TXY3>A*R+V0J|gTB2r!|w^X z-$N^BL`cB)vyD?S=OYf&{y^ZF*NhS>w}g}Hk}urC>f^zBl1^(dPTLP^qBC2znNFX%h0Ng? z6*}6me)5BNYS+vMkO2Pit#23NkFYaL$hfbGoJa<&D#@G6`GU_3GFh?jvJFJ;TPlLk z5FHJK7$kD;xpHj#@HOi_?~HT*b1=>b`J7A-pkB0ra=(n<@8F z#ybKZ)r~+U!);^=`2E`MjQRo^n`tFeDfP>ghuT6WLUgS!XflPerwe5SYN9eZi;x`c z(|`F@33!VUcKZGFfW|L7PoUWJY@L8p10(d*Tp`~h3YCpkN0*nwr)&DV~ufF%30-{xxm>^ zhL7y~f>LmQRg)GefPgAQOG(Cm9x#fB92 z$J|9(#IQ3Yv59RweBr6HSS^p{8XG(%hxj{jd@_^>zPo*a@=bQ&cnP>XYyh@q>68yT zJpktYz(7(PC*yJXpML?1CEq?I0WNO!p9>%2s~rGvxj>rKYf{XVomy;Wq-9q? 
(binary patch data omitted)
zGBu&Rx_(igY%Lxc1vVo-ctiJu2;J_GY-*TTHHT)kg>81U<(q{*2|PZ{$8xl{;&Ruv zj*;|)@J;yUSZ|!um{x*;H(P-#&ZHNo%5q#ut5n5OkmhA;6SNq*go%ap^Q_GUMA$qh z`4GU4Ku4Y!*|^U}=qiBQgI~*^XCBITaEU>ee+*IN1T9Q*vJgsJ` z?9;uCV^j4G*1@Ro2~t&z1qwO{=SZ#E3z`Z#8Iz8@vz;O8Xm*`IqS`SGi_x}5jO;H@ z8jl{64iQ@&;^H2!t=`16u35bty#55(VV{qD8e@U`OTi%kO%@PEi6tVi#3e#z_luC4 zsfV1Q0KyDJ$@8kqNzpTa8Sb)91pg&A4rTv<=ik^ROF#Ws4OHYrhE&kfta~_# zP5HX{EQnC|DloQ#`>9y5@BrE{<%O@R9<1=7qxdw^fqyV#tPtyInjfMl)sBo~Jvnr+ zB#(SoiihGLar6jV@Q;3e=_Ha;ZtGpI%gjJoXXue{Mt?vCo>~8@> z9qvD-|1^n)+$m7qS@dDkH$E?dJMCBJx}y;^_F_b$oUF>_)vvgF|KM$;n@BAY%e<7K z%$#5S{zvb1R`V1ly$QdK${Z|xmju|Nbh7ZbIF}o=7-zz`HKPdJ_ygRb5@-RZ#m9)#cm%Cd{TzbL3D{WW)?pu@Mdy%b^TY>nIi>Q>F)>!XfTw__g@Z0w= zAGMpaCa5Xl)$k@oF6X91Jg>Sv2_YwX1aEh6v1}7<(Scb#X|1>8YHuBb-@=&NXe?2k zvefn|PV*rS-d%)xGc0m9I$qig4ftvr<>;VSX7ev%Hr~07g!mf8J9-C>#FUg|^Hl=l z!K9K7?{EX$3^!fySPgA{_}f{d)&r{4|6UKQzlBd()OM`mE|lWeap%7;BrJ59{>9jM zO|1F*D~wYh{6$X&HsdC;JRc2QzfS+C&aaoEsB^ZL>(+32v`g0H4Z%d)(QkjTETY+# zz686b`P>V$aKv)!?ElQ43H*m=sZ{9OUf*^)E{lQB5an)wftz^X1u7ti9UnP`9CV5} zPDLehsZ}<{^VfD=_GXUmB+t{)Tj71Fd@`+O;&jmC&5b$;*h^&Ak)Q5%Rq15glp+en z_j_q?gj20WJ%-IZUX_UX?)L2tF=V@iqsa}=a_UVB0r*Wre;i4KzdzR(_4NZ0)rL{3 zkKeRg+7Nm|kxIs>SoKe%?O)$&$)6CJj~k4Bsel_*(@`ZW_}cj}N}uSBeJ-JT_UU@| z6Xk3s@r79W8d++b+@WnV5fFW_*^F9jV`9F-WXH>7uswwSz3V%onA#njFAi5z6DMfD z+q@8_Ts$P2!&i>;b59XsA4$M@-7(;IiC(;l?bdApLNTRe1Pp1N6u$-CETWb5yD-Lp z+UIAf#g9dUWNEfaioP&Pnhs>jhF7&t^0;~e1-_`FFwmViFHo7Z5u64%C{6$PwL z@(;nna>2p}kIvK}4EAYW5=i zC62PK&{Qa&&>&byx+j<-U$AXwL{ghY`ZWR^u-Q$5CO+w>O^0F_ z7WaH!uoC3G9RV@V-+53KP3`f-an~0WF<*V-wKfw!=`BGaF~pJCn*5w0Xkhww?%FFi zc!G1rmiMwfC~Ima)aGPQ=0hi-3a0>Nyts}w4wHa}>ww@d9Y3$AYI?fk5fGnGRiSZv zKfgbE+g70*Y6(o=@G-GM($^SESJn{9(VZJ+*@wi^;OLy~)~u^`9L&Z%190>o>5YMt=DFDFYDSbi(ies zu)!iD)|H!O@C5g6*e!=$Y5k^12 z7hr5ZY~@Am9dKSzTFx~5^IB#0xfy(A^5<-2*WBjFeybs4vo|ix24}t~dumev`m>`p zZ2*okp3U;Uh>0fCIE{RdDU0i$BivAFMXKmQm(haFp4;5oZRc8kYPNsz+CKJ9EZt5) z+3D{lGeglbA7#S`LHTZzIIGd{N1S$_o^b-K4ug32rZU#=MCLWOct(p{Z=y+Yp54fq 
zXfd7be0V|kc4dov^{ru6-Lm2$J;Yfnz7Ymi>eK$d>x^Xu66VUJk0dlceR)d>#iSXd zJKFauIf{H~q%J=naUf=M3CaC6b@C2An3oR;8|sZYWHZhr}Rf1kM}JD#C!3spp1q?s9=t|%6L+17J~qawSO zw}oc!mIJO2C>w^cod=9B#)4I_0^t zh}col%15A}N2#_8ODv)jE5)-HuD+ro@4hGXVf+QoX8VNX8BN<)7XtnQ7d~OY)+68& zGuNy6_FoXC`o}GaOFhW7F++~`#@gx}N@)-t&*7)J&<)B%a%9)U)kv@3cw6> z*WP>PoY`RuP!Y8XC;~~=*Hf`HNTNtbdaaQ3f@V*C2VZ%XJS-N)ZND9Sn<^a^rk2U| zB3ZE4{>d=z!PY9CKou^83brM|u=;H&FZ$*o~NcukHT=3-& z#mhl+dPiLHUV|W9PZAry0hzcb?o~Jh{$k7P@t~?h9WA3{ibB#HE2Q>YS9Ji#0xw%f zg&mnoVU!=OcNa^`$_tdos_jRDyE9OZbJbqQ+0*5iqXgEyBAcAsZL&~ijEr3PPI8BSP780`wXA|o#sbw(vQSr_c?ecPc%*aIuvXEy3 zhVElOD?tEDXv`j%&TFyDM0qEQQ(&LUgNW#+c;gjbBPUt9Jgj^fHnI>WYfPP=ng zuHc=GpG)2Dq+RV8ohamY7R`8`qxMs0!y@LnbHL!1-8Zol0@Q41Q@aXrCW!p_I>k%7 zFb~t%lqcKsqh!UUe7ik?oQvg$4XT0{_{0yW+oWG7r-`UrCF$KteBxQbY%PcNBxelv z@fv7q3#BC|Y5}$+o1pink?_X~Swk2|v6fcmjg{u6CgP}J-^;HfO@WAnG3bCHxx?@L z8v2w&WEZySvc|hoazK3YSF94~{TT4`NYD_r_g@(E5^r%=;(tquOfNGj8`?>$2@N;d zyi3pymaXD@`rlI2N>#PSmohJ>og`cRD@Ec^e<*FzHG9g`L)DkWNCcyZ0T+ZDc_?c* z!sU_^xVq-2mV5juHP^@{LZqW$X&KCsBA$~I8D)$h=esei+Vsf)J)%UIieKEqG6)d@ys!Jog;-?K<` z7F`1Das@3UNx+HQbI;FbGOUtq1N2pTz7N#bDYKmddwVb3ENhm+%YqAi#^Gym{!`>|(Y>2W50{MH z$|Kr@b3kjxA^aTF2gd2C@j*G9#ACvJXv1gb_HCM!ScXcb05K*)+ZMBcD zow`e9-I^NTY=S4*LM3Ej`AFxKEambC9fGg(xwNES-xoOK9IgjkTkLy*D6S}f-8H+b zYU>#XX3a61RPA1BTaC8$Yg|}cw*;Kuw=9v6`$L^OU*(@~-;JyZ26@%V2i92VT;+G2 z;Us#|j_(d*DDV*p@SF7@Ggrn&lEJBexA`8&yO@!c(kF)aUK_8<|~ps85hGb9Pb z7$?G>Ngwa1zYf4MtbS;^4%yhpyOn;np-dC+(1WjfD2vZAREf6X$Lp~&v3XM(aKWBI z;FevzWhndZ z3#AR}o&V2@P^o9;C}8|xgK)oz45*O=8wE=>Jy5O#t}A7pw`Jt zijve%-%9f69lAb!3p%jC_5vZ+{}+>*sfeRQCa_I1ehrt|IhyXnG|A`M2v6rlW&X0H zs+U1YtJnsue?l28T2QBt^qNOKRVIILw;_3Zp)*0?K`YlD##(W>mZcHZn7m=ar0#Nn z*)hd(!7X*34LF%z?IT+)$}jrgLs~y7J0kru%p(+|MSrLPRYi0EPZdpBZdLI>&Dd1a z+%SIH`r;MC%l{tGlnez++d@~9xu=I5_t?}1d znHczCZycX^P^}^FK#UE0S*4PkEQFd_=Ze(A7$-6WfxVVK{liL?@$j(c2ZU$Wt(}<3 z@zm~s`-y^lvq}(XC{c`f>P^DWp(tt(#(W}o;gUzSr&Vhh-I_=6ikA%AUHkHEm(2%~CgMNFxkGzfB*L#F&fXt$DYuQ!h_-Y?Z0g>l! 
zjs*@s+oeCSDt11?rt*mVD6=j;Vb|j`?bt_vKDwDdgq&rREe=!mIU5a~upL^Xk{}zf zRp%8l=BD>|sknPLlvqb;6!x2;h#cxrnJ@FAby~lkfA>NbKH2l+tE2WF6HB2>D z-7T*iMny7g5yJz>8V8dZY>MPK*GKmD@n8I@N~Mu|mLDvpub zBSCfEDvGb52L_R|7SPy2!Pu(Wt)$c4-u@FKxspk{YSH0Z+m;FLSk9of*{l(lY+CJe z=}`CxyX35TP3CCNiD8!HY}F<$;3h|@=iTw_5@UAJF{Pc@1uQMiw}ZWU{6 z?*TQJ`lJ@nrEc7=LIF>$qL+ZMU=nFQMZQ%5oz zj3=ljBAfpQo+$YZPB5@1{=!ov^KY8KG{r0dY;JqO3pv(z}ny&rn5K74T2qfqZvA zui`KU%vA04QylXU-Zh!jVtvggiYy}R;Y$-f%uvkOT3eG~(k?;c(j*he=2#_rSDZU4 zersk2We2xG>?#s5a06^wlQ?)YD^CUl)DAzh;H;B1GoV%Euj}0~#fVE%jS}?(8CIGK z4Fzb=*0hN!@tuVt__SCw! zGaH6laP4HRLX&sP>XFZ#=W@_SyBaGyXJyCZ)?zbP=#O_y(o2q0Gb6eq?xVo%r=^Av z_1jyIX#v-$gNkDb<)Ky_dL3YPD|kMW7eRMIRlgl*4meG0D%F;M|*DDFzAN{IE@3cKsvO4qSF@{$iiLv|c0 zc=jxqB7LC49V>BX3J`th=i0@vb7^1icb;_H%3qO>rx6u&PrTFlR%UWkcZ{b(Bgk;= zi!;;}0N3H+(n$P*MY9QUxbDIy9O@pu4ZIChIq@uOsA0_jj|TP9Ri}@hp4i(v4ajkq zUM`rmDwrHodEr|oUR|ZuQ$BO!%JHn-?;gM%|Jx1PnLX;Z1i-fB71ZyFF8y#=5J`S= zN%qSO253@m&cye2e6*W!((?fMRmF0Dbxa2^LOSelZx!6dZlx>&AQt$|I;v6d@x0qg= z?EN?({rw^K^~qNqeJPjpdQL}`piAtmCzht!enF-aQe6zRDP_wufo}0BA6*Wf*Et)? zK5mW?QndUJ)u@g!a`x~aixqaT0$xN;odl##I`JMG2Z225Rbrkpm4`C)sL z{!!8HP`}Sk$8?ndPZvb2Zl^)%Qt@VM{~5};NDBJh!G9eUTvcUNQ5E^$d%FfT3V`?9 zb3epp6);U*vym)4j(J(PdN2#N7`+Kdk9O+tyGWiQAJl|vQ{l8L;vz8n;v1|{C)f2% zHQk97>;C%~m>2N*i&0c*=p@ij=I>N@Kq0>!1E3CCKq88%Y_#Il?&(^@4FeJ;q;cGrRYxW5SQiS#4mjA2)iZ|7ew74ii|~|QVp9z zT;5G7jNT&JxH6n-Ek`jPfW+f*H~HOt%02U5j!&4k_1m4BC)Sh#cmph=K7R^4$V`rp zor-8|woHANF`7%({SDsUPitJQAzji}o?C3eCGteSmH5<+iOhnA=571m40t`m+g~)- zz3cnJiw#~IbC(RT)1AD18CitB(+hxD%nDUJWLxfzsQmpGf)?0270N=IKcZU>4`}%? 
zrClL0E=Wg|R_txm+m}J)O80VTxw0Q?u=QG zkH{AA`q&U6&$Qi4+q=p!39rLDVJ-2ZVhoIJ`*VpGV%DF9Z^o{} zlP-h;A$nl*WaXsytvv&Tf6d6#{YX&mwB1jFoV2{VQ(MU_faHry)<%VxkVTr78idgn zkwq49ms#K!#LK-)&$(fd{}okV{Gv17z+&u&K@eM?J^i~$<5PY7SyrGvaz!;zIK~V9 z!JJQ~)Ub1cJ@l^RgM#7{1GmVfOis_0jUckL&poe5< zKUV02ciQ&R2pYy!<1)|HDugNQ-75`t7+tWgV4g0nR`c^~y|T^r2~Js?l$yvJOhULGmfN=&iKPVqpG%zycZ4HHKA?HsK_ z-E)?0GjeY~oz(qTgkaV$8V@+De7Ryt&GEvr4qXf7f?k=PL2$%+%{GJODf+7P;ZA4EOX>Y9+=(g_l~1$ivi7eSgfnshIH^ z@MRih=O7|cMM9wuI8ugPpt^&cr0M>5*lqnEQQ4&*Ws6dAP|3{-`G>pjO5=7C--=`| zkX8_6;RLT3fHTA_jb7e5=~lYw+B^sRrA^4cLmj>i)src=f}st~ZFYyAPs{^IvBs+? zi#GNLDQw7pDc|^V&|;e>26g|nb^F1$o5K`w2;$naZH}41)$5FKv)&wWI_SW?4Qzl^bC zn=$+?*_o|mapW(`d43puD9fl`8z@0CcQ(<*WtWitVqNdOU1Ie9QHW#e`b(f3Mx z@&3>H!{v#i(fuCHeM#TLxz)mHwowJavtmU7$=zths@&Gm0>tR5vWNUA4$1XQK9fL7 z>f@qWmjz{WOkxv0$SJ2q@Vh@P`d9D2?$RdBYkv@0FW4i8>|s zcXMHHcoT6DcXRYUxZo+6@Ujn?`4~)lzll;Tp}v6w+lWecYO~RX$MA4yJH8&5+}mk3F^LmHwli{ zC#dhgko3n%JNJyP!M0<-uc@x+8&fL|OfV;W3y2cBfp_saXYRfTnaV zIH_A?d&t`e4J&?HE#3Scrn+#TWcDRo^j*v)nxE4aKh$cL-FEMj+nt4)YefN?e( zl@TLm4x{-AoZf1J1gApe*Rq^7;PQxOXV@JcZbWq4k$E?{f6BRCMVSgh$I(L3s9ah? 
z5@NO%T-JZ45EK7g=$6&DQ^eCaZfdOCtT|idLu%wEEoS9$Sf#^K`24Mah6zviz4g1f zl^K+UgG~2J+D(P&PTa}>7Ugk8&HPqZY>!*N;<^$|=PO0Zjw5Nz_Uq5B7tdtDaYAAP zQ)uf^-4a^1?58lwZ?yzP3X_RABCbVz<(rgn%`0on_0B|*>!)!`ZL!2x&k@}Ndf}+F zYekBBxRRr~F|itl&ByIkI+ojV+^J+B#$47y3$>4ucnk_ybUMj}^72*) z;}@+bEj|8vNnM)%MnVrW@Q>S27q+9s`r>jS6?;?n%GTNrbZGBh$PZW2p`Ut2BL=XJ zs}O!pldWgerW5^Et!{CCeR#xnWvv21xQ&eZ=3hTFz$sT%27`a02DMPD5_tf9)7CZ@ zd?XS$yiZu&Ipw5Sq3sHy{wUt0Er7Ijj}tGL5_A zk4udkZb2@wn=ettJ}=rtGlGwCAM~R6Q@a(vmA*H4v?BKBuV@9AHaL!J{Bg}~DeiHbU-`}TEev-m zV0(WnQq_`SZXPH8#mgP^-@i=7&7+hscRVb5)V}b`&v|k0{RX^!4b7x8Eh70#0I7S` zVvj8*hgp|c<%zXO&Yv5XHt?-#-o*~xlQLX>=)0+G_@?e*_Cgmc7u>9$P=-i1ir3Fn z!YGJ*F_=2is*1Uo>|=mD|NC?m5}`9!68fh^B_6GbU_cIg*+rz-upZc3NK9tIfG?&z zmlhQKSDaVe!3btUM2T=Cn+_{{Mwq!JH%WbukJ}zH$N~M_ z%l(3Sf}4h`JT>GIyFpzTDWarj6o6#Yo(bz=0?5Q{c8JRgEE2UBF4DUYk?qlz{uj# z@KKoc;wkAuo!9CCZ1*_fKCi}pypKKGY#G+6dRF{)2h0P*bD@ATh#$W`f4@}br(n_f z`E95^yQD`)qCXhrai9Nf%x4&8Irzp5Krne^=y9!wQNl-~61PSlTQGfVc!Q(290&~{snUXo0nA&pG)3I|dS z4)$uBUXnZ&hePh?g(c4P{CK~E)|)OVC48>F(7@06-CWFX`Y%z33Cc#1$8xke5k@uU zuI;{-8{7ehg3W)n*M0A4Olv`Wmvzl6aeYHqIqT~rQMo$UBEZ=mxgwdn2}hrcV*_7W z0)xuQ4t6EUPSb1JeQtZFtn2S|IU^Tb%_l=Xr^h|qEYivg<}L>3!@g&v&EBZq%iE9* zyy!Uysy1X*S|4|4<1m$vVCGOj+wrR~DcmrC{9q!jD>2e2^G}h+1eC;)Yb6>-y!HFQ zE_xVb^ut<8HG1Zgd}~rrumdw<|8v?PIKOn=e!pE`JKnSXB~lonz`HW zhG{EI1gWMaN+hc<*8IV-Fq^kqC6Bl=I?&>>;i?X38$P=%YZ9O{1FKTC{jB?xe~j{t z_F6ko|NN_wDe0H|O^I$o{qz$Es>RLLSX?3W9>!0kT6q%8&USDh-mls#T&M%QYAHBp zKa8>XZ2_(C51<5nk-ZlP15LHlQQ|qOznOPpt^`a$rz%nMH+!(brNaZ|3NM22V*TG# zFXsOj)uXWPBcC_(TGy~~Q+0^qwd%u~dxrbxAFdak^b__t(DpCab16xsS27)WzXgT{ zg9`&UQDXc>$A>m>xLYl(Y97z!{(lmnvkda$T>$qfdhnmnmviAUe0z#dgu3yJ_yIe0?W}`zOK9c&MCu7z?wA z62tFuSPRR0-0qbrIW)o8a9__Q zI6kC!tbp2`8b%ll$FwZ+VLhf^JppfLP5303YOf9>lo&C05laepTCDTLZX8{iw-J_* zY$Ycn*a`iB$$ZXtnd(3?^zRvuBl4jJRHVy83 zQOI=^=V;Wl+_&f|eKERUW<-=sG^)_p_5GZCWm^Jb#3y$6Jnp*SZg=>k`D|Cx z%O@Bgr+e=mUrDZ2jF-hS!{dCn)f-)|X1RU5uIv_z#Twl-rug_8*|OM!3Mcw7Ud)OE z%%JwAiXeo^pgRcW5oa065@w0$8Qaj^SJefW4jSQ&!Hs5Pu(K9#?q67vbH5|lqj85_ 
zg_y(Ej}>klc;iP8m2SxTn{J1E_m0sY(8A2qDZ6OyhSDKT_cc9ItVSm}YLK$qYsV{O zA0-V!%hw_Hgf{20o*U+|{K*^XdllFGFtVP0CtNsSXg2Tf)m78@=_*iigTEfZ2XNsA z*#EKR@{H^Y%^neU<)6CSi}bo-p5TTAbgdsR-sUZOUj=c;#7!?~#rpX?<(};httOJY zp*XnGcw*)qb_N@mG#X;=^rYG8+PCSf4MM}$|84_y4!`;RYI?1K|HTtfQW(E+=7i5z zAKj?Cjql-8jS;){Bk(OMzwus)Pjw!N6tjD?H>QovfJ})T1CcuVEa~9z#fK_n%l zySo$xBu5X4A&eABiP0b>Ii$P027;8MySuv^Ho7_c-S;`?b)KF357?vcYuEL;-dURL z%q9%5K1!5YYj7$Gc?~`l8qVV0x5ySSt6GDKHI57hsPMzDPuZ#C(yc!ZJIKrnm`|CdZ}8jG z!`JwCgsq~88QUP+<1^bx5+BZ0Txqzq$ntj5#H~o@=J#-p2l0$EER}*!#Wa^YsMG1V z*IaNi=?^5RZ?O&@C&coLB}uunGjRoZWEy0*P{}85JjE;+!8w#e)^4A-X=4K7Tb>MH z=1|Sk@I4g6uUZo_P^Z#g08;y)z6;XFj+BJuv(x5DILjzm|BN;$KVVe9JW{DPPYF+< za#DJl2Fb>Y+=m3)9P0dJd3<915O`d7H)+QP3u!FPs;tn6E{7PeJ`YMiIZZAmB_sP@$+MUF z-A(af2~C))WYpb-8(qoGc;cs_%_#DXW?8zF4D&oJ=-%|u2fH%GFHlJY-0f16Jp7^8 z=jd-g_rqX@;m7GpibA;bgGYve_p(oE=ZQ;UYw^(RjAY8Kk~evA-!J9)F12ZRLGjYx z<@iR$g`78O0652+(`$-%SQ=CGy-^p@f(tYJN*92?9D+=9TBzTde7Fa2qCM@dR)&*} zxC!``QPc7!o_8Wv!0_*N%H})NCc#~NZIHTp!&uuLf!99sV^b@8W6xo1Fm?0EWwXO| z(PINgKes&+_lo;UeYr?z@Ymh!#ZERS|Fa#wF0JwjR`_jG4=S$tVz6KMjtcmF+qSUSG3_=r zhl!U6lhxm`Eo=*{K_GJUG>tWIBPnAAzzId1eHSkyf@mCYeFEXDg3~0__rW(zL+^m2 za?wBBaIwq3YNeGSaWhUUpV3cz)xZ;^c%hvb0{;ezQEJZ53HIc+0@?U9hD)#;nW;XQ zX(6>Qj}d$C6~xKW6UlIhQp$cS|nCxJ{Ri+J|{|rX)Pxgf-nV{HP9F zrKf&z5KIK;;)DjZw%L%{Se9PgsRQn{`49b1kMHAwVDByNn=72<=6@glwU*3%NWg(} zHB|?LKsJ^~+`RZ0rM!8UWNu3i{P$yzD9r?#`@m(Ti1*LDZkM5tC)Pd}DJK_g-^$%- z(2~Yo_E1=L<(*PHJ&7@ErP=h{(~pbdurHd0r1;$U33llMO|$Gu9_GFDVfW3@ThmCN zws!XYUgDapQ5mgUpGLT3m1D+_Vf+GC*u_J%^7T6isbl&0I zoUU?%fhm#ht{hj{WrR18$TrLnF8H|kczSQNuyS5P^Pf}E!Dzs1^IG;(#zHFJQk4%G zMAAr&^J8PTdnNof*UpUMc)4};S8dxF+aJk19($DH>Di7Je0zn-TnI^`>)5#Tf$$Y;L9fHGZKqsOGS=)!0xq{X$5G?4afEg>T*sW2RV zL9rgs~p9mOw)pZJx&iP<0rNQ&ag%Q(3f^WUO91*dz2 zMIPVdc&oo2i1h1v2U2cH zJENnp>!EUB=`+jlgQiteoPEjd_X|-;_-Pg#Hc%9$Vtph{28JusRLE5!FQuzCecVn zU*qobk4`R+z+s-KEArsSTugBjYup()P6IocHj`tgnOgSuKx;%gB4K$2s^-lQH4V zSuJ|9OiJI1bNSpqw#|ph6myPIztYZd>?YI)jzFR!}tzg|{T&O!Ak<=nRpJ^Wc(k69H-^P<#rHR9UPsdcF}iZ~tVU*g 
z80lYv4Lv^)+T|tx3ef*?tfn!x$L5IuLntBWyxz}+P;`IZNI6~HpD{rWlG-lK6A~Hh zZNW~o_!_nOD&cZ1ZhM5Zj25fhn6`SzvE3OX>o;MdeE&Qh(dCRBpCx<~k&m}GR073z zR~BFAwb!W}y!tG@)C>e5IfeSV*ZgmO6>qc#eK}16s(u+O68>a&dgJHxzUthE@WP_ zh4Z?Yq$Ys;Z^Y;}o2=7|_7%%35cIS7DXZn!AGN8kM}>})f)AGX5s%0aRztBbZUOJ)!YOLYgy{U_oAPyU zf@z5SHheaXB(NiQ8FSP>6Q?VEaPR42Fo^|jz7-Vf!xO47_$X;#zeCG;@(u@1(PmX8@=t%5|-Ln@~U!hp?vFNh$NOM!gUQqdNq z@K19k80%RlByvkcIPVAExP2?(w)d<7fXS?#g>|aBbomka-&eAwdD4qj zC!@%L&3lh`J9x0?+~!Ed$F>x7PX+|tTN`onLErQgD!oPVoze`(C6q}6=WzGVy3N_f zUwE@1UQ)1JLP*d3VsbkNo&X(b6A3jQ0a0V(C+&d+3d!;KsK4&Fv`k{vd6E#hbu=HX zJEzf)z3U?KeCiJ(MSL9OB~i`#bIs`C-1yTXp?rZ>O@XA#wnPUi1cF&;IFwmWEQ%8D zRloISwuv5S5!`LNq}Znz0}PF8Egbed5oXh6;Qek)HW3s7L3SfMO*bAbj#XB?WH4&lCtj$k=SW2~k%eI3|5E(qpE-ajh$80^@q0%T)lZwvCu@pD$0oNl zzB2xcA@Nh^)96gvUkSU8+x!_Z%eCGm4LX<2kB|4u9!7x7#>-}pGCl3T-zF*C%l7sr z9jboeYX5Cxq4Z^9N`F$4WkQQtzOi9Q9m-!hyv(#;lwU)4v*CVg8%&ojTE zJ{%%3vMnK$;FYWLYE5iXXZP#SzctO!pTKL%r2?_VC_M%yZCE|U5#5{SV6D0Yn<9=b zj=KS`p{nhw0E}Z+eBvC5(8t#WEA+!p&_s7+d48fl2NpTti_<#pjEnmYfF7$s|psk9-Z>Oop6&OM71mS@u zf->;E9BNT4M-i`uh&s48HZXlL`_q63iubG`IcM*Qn=E~?Z3)}jE8YiVF0{C^3rS0| zhCN}PJSD@)`iTx7C&l|#hguo_fMgDR@+aLA{bnHy!eYv*q+`e8?zHiY+RBD0;8KKR=HWLc zc6jL1lGCb5bKUWx=#?o~iyig2GP-+d+jg!%()ZK>*Z;5|Afrn z%i=Y?X1i15B+Ifod|D**fdl4zmMeZ}@i-rMlfAk0#yqJ%crSEx#V^Bq(7yfeN#E+O zx5Vo7M@vqjlV1+T%#IyZrs|1IxOJa>GrykEbd0APl~9^t5P^0Jgsmws(XsHA}uqx0-0| zfT7yfLnW?Ezl!<$S)GaP%>{RrBgt0dINi)EKS+YV6zSee48(6n(pw0)$3{Np zQy@k<`_ltj`yrap>mBbNR8)q9kZ0?(;PANbcb^95m*8sE`?)&8TjgH+81SEX^7e=> z?8T}o&c|~yMB?VCG$R;_WOF^|me?4hj1%sR;@mVr)Mm|}ib!bQL()H}=be(B?$*w_ z)=qlO9}_xF@$4u&&d#UtYLAZ2bb^1E5oFdT?*F3xa1N1nJ(iAa^si_c@G)hnsjRyh zt{Dr|y+mwFek7gt`btUKqVYa3e>Q93Dw@`a7%&9t5Ha$Y$rZ=a> z`Sm5P*&Z8%=9JFJXFjY@Q-HQ!AfE$CH0%CJ>TCI)FW%(~PWGGf<&!KV4z_WRkHptDtNs)7rV z0FT-DzCV2yf^gA{*!p3-B`^p@?w(bY3q`O%R}rs|u_ejeR7U3)Uu4BQI(zb!z%Tj1rweF{RFPXc&O%Q9!3UAz5b(Dr}H&{#Vw1v zo1U(~TOI%J_`U@48M7ksK5pz(<(wiZqr6AW0*+B_iCcf*dMctdze55*rq~nfz(=n zQ}KM*JeOzkFXmsk(Z=3M73;a9H%dDy0g*MIqCnrdRP5h(R;>v# 
zf#9O&^a3_s?~7Fk`yp>7cU5oGEsS_QukWQ1h)s%YkJ86pwv^wXWICsrc94ZV zQVHYFHa0ko9bTU?Y3{MOZwkG!Q<#a3XeO+~g%tPEFr`+o>Ih{}EB{IO+Ii>p!|LJX z!QgLaR94f&K(iR{-9^a^M0ve>W83wny5H9V^j(8y9s7`e{)H>X8C|;()U474#dMzO zD|iykrJ626FU9fs?;8Ga`5Lj@C$CA=$Sd-}NE3l^e=o9S)H0NlI%u`G{3@-aaQRSf{wOUInvrhG%Xcg6r;!Y8&4zOTkuFDDWd*+^Zn%1Z+z z5SEXB{#4)Ruk{C=q=uf0->RT86%U0?&FxpB4#?V#GC>rV2&?-@xB#CHYsnUG2ueji zV9tY-TWh0jy)>ch+znCRmV*alC66{)8tL)~l7{zYTgXxW6Qv_)J~Qt|Zh<6{hCfsH zni2=&AvzlHM5;=9gX=$X+HBQiIzY>+Du+ zqY|mN(J%c-Ad{M=%F(`~4B4eF(Kq~gixKKgoEv=nx^Y8I;fmLUROPP*7aIU@#C|6wdo zg~`3nEaInlEkZHOMb&D$)ao(dEYfm|(COzJCX)w&R~@!_nxge6 zUsQ{b5I82Crmk>qi00P{4KVH*$BJD4DElK@iVC9?5tVNw_GK@4Au_Qitl8z16`+cd z^0_*+j>*W9XxZdV;A$>CKu&ubCl`Me_O(`Wtdv`ZjntX+T$TK7*Sy$!^@29i$YaUu z6@?T9N}GyN1#;PEArtfte6IhV_I+%AmDeqgMd+kk^I8n9iW4A0zLb$x6HXwI43wW= zM-PIeX1D9quKc#r5lk3PiK2?@b^@?rmY#*FOxa+bfwv12sg*=lH?()}57q4{d2zUF z;TfkG_KC7tPjzgc)O&EVouFxF{iam8#9qc@V+UAqbua~+EXXc9{1b(ebX^9v~CR~{!)P{SCLR{Bn4Lcb- zPVW+Kim|+!n|^^3!j*5{T2do$TSaEaREZ3|;7+DY0y*L7v?(xE&nJu7iR<==Y%6$is4 z9?q50?AWpJQr-&`n#Lf8A>;aQ3$YE0!ad}fB}<0m780%xZ&B=fgU z^;J}%AkuobLJj9A;yBo=5iG7_D{~l^FfMdl#FL{}rvlecG9rS5f*GRhkXrq6K;*t2 zQnHdN-}yL9`7al=@sX|}9Ixz!HHY!)tCt9?&>FbAmJKuT>+68lE($Y7&^WJmLPO zr$M9oyNlkf>Nk+z?0h4M81-KpGs(&E(L>WrEyW% zm@cgQmxz4lj5YDT`J<38#Tic08cp16x>dJNzNPm3-K)2_7O^f*8KOvVJD)SmJfG`f zF=5&qnoNXBGZlQGv5EQa^l1^ND|-qXBGWIISl@%i&77hd40HVKFT;kDepeTa#c^%^ zAoQNrk)7P&48sufOEAQZo^EU((`=bEaUv?&P1K+gu&ekcLVbg~4NM~mi-x~)x>Lb} zZ=`K1Z1LA+YkKDb8NQ&~IG=27BBHUp$37qp~bv`t0bUgkHD;WeWAQyZJah`31fKUGB>^n|J*egq5Vhb5AZA+@GBr4|7keC{qcJO0`eNIWa~QrWn$FBhE_uUDm9@9|(t#JrjoUMeJ|Y6)L+Rjmm=|HUzJE6rhEv;r+{jvx`j%j#hn zx!1_%hrZ#o5$tES6Ky}S3;C3hB$D8;&0V*HaaobSpF|C(Cc5O|t@|6uWAD$D|Mue~ zg*2s!!t|sIVIb+eCr&H%qSj=C!{@lP>B^6Lq6x=tki$n}&aLVLci7cJN)SPl5k7Xo zL4@-I)sik79UNB$KSBjD=XUiSYW8*#L=NkFLdJ>dWIySLB=RQ(f4Lw!?HZr;UU67` zBFAr+Ok?CSA;}V9UV(%COS527Xp_nzsLYN*spm%%8hSAE-=jah()R~VQc_kykYj^f zM;1aICgF`d5+~$0itEDn&UShLwy|MayohX zra#AlAB_4=aDbe``=TqYpFaUYnl5DHx 
z1c2kd1X|{3Nd%rSa3C7aCL?G3Ljx>3C=LWrCLi4wCQl@f3lSt=!aM@qD-*J;<9M^t ztN#lgNH37c=D!~M2h3_Zeoi^h0}W|yygD(4fcBHfpT;SvY(LebRF4Pmc%i#p6k$)} zF&ZVjMDo~l13#y9xs!)EhC;4{c-GIhH|hs+pbzLepAjCnZ|{GVE}AQ0KNI@>j9n+# zx5e831)dBWF;_z*o=O{*2{_~|lZrII+VPa=cf!4mgVjbZ4pW3T^-&9iI>JIJIX15~ zx%w@y@M{Sn^jP@Dn5brk3mS24duk_%UnEw_^1OrRB_pJ*m>M@a3lTZ+bB;Mry%e3$ z^B4>T7QT(yh^o1g9mK>bz#{kwNSTwYX#L4;ocBA`iC*HF!ykN-@0%BGQsbTN2l6^B zlJ6s&O`pC;QsDh^k0y-E<@-&b{H?XDmmZ;YY<57DUa#+~yYo7NJ=F29;~tTXc=mm6 zRUUSF+i&5DfOhIZa|BQ+(Sa?V&BHP8Pek$hWEpnd19eeQF(Vn+33pgX_$+IF0> zij^e1?PIC>n`$aWa60 z@fuz&L*`8l@GexaP9`%bNe6suJ$~~u3)=swHnt3LAq^!hiZFcLjYXkb*CoaDMxfU zm%A328JKF68PLmUeXHIe!<@lh!R~P2&^l`&Uj*=~ zz$~>}>f1{&(3lvN(mvz7(-U~%g8KveFZl#9B<4nk|8+W~nZ5D%MUs(==hbhJQDm}I zcH}a2bayog)%%JF%Pg#yo8s;yO=7alK zBa7YJXU>ZY(nEv@yU1^J!&TamI~h%;S7Uaw^TF?3YJ(Hz$mO9dR<^_-N(cPTu4B&~ z$lh_&UCD4KF8Zx?QXU z8zz5P37M0C^0uLTb0iwFM^bAflGEkfKFKZP=ZJI(1Hj(x1Sy1kov*x79b%h~*dg-X zzvRucc-+5~BMOGPbDZDhEANkiR>jE>Z0op53U%k5a{DZXDdqg+{pgL%i_H zSNS^Mx81_4-&r`o)wmD%oSZXl1?X?cc9I5v>jd5Y_4Id4Vvd`o+3|n54(00)y6OPNskVoEx%wb^HgHyWjDoxF}S?meC6;AB+a#)BjX7i~`twa@i`r?SvPf0T3! zvpaDQJ@>D};Bqmwvwh@Q5x)FzocAB60z3H+|IHd(VOxLe0g{LP#I>94%+A3vUO2%PB$`{E)J>Gy>)<1Qb>j?@Qt_)vv!SNjL+O86M$>%2)H9eMuX+* z+cI;d#QQq+bR2mvqrcA(oH%cSGcH#0eEhJtY`ZznHxuJNQv+=GvM;3MfzfnRgGKDy ze!4TlGN3`^)^sap@Uis3wnTT8K~I<6JVw2X>vtZgb{F|c04TdiK==0kT-Hu?IO5$O z5KdxsHuPfG|(bW3@C0ka@PWD(UrK_ZxaKnS?Tx_>IZA2ZN*a2@%YRY&f-~l< zxE&xmfO=Tq)m>3PqhW)P+VF`#u!Maref~KtL|LB-srR|YOsr&`;`6iRG#f#%zSnX=Ct`8Fx*KoZhXl5lHKBgtC=u4139FefTSAYz4y;d7Ic@Bc(U`&k zF6~e3+};z2;Gs=#&%=#!StbFP7aD#WClEA%qq25h?M2pLweb3#twM2&Ypt-99#%n`#@d3ozwp#gyfe(hHYd$8j}dI~4;?xXuSTlPl<@yU2$)QxNW^l8oRztEmH8 zm#%)0q?}lB&u0x+W&Tcda{ZU7aaA^9m@CyvtvG^lF;ZJ4NeG)LkF467=2zcKDc)d;5MxBcu~(Ik|AY7?H%_DgX*5d zO<=9}g&%0<^x>*zOjg?a5r7(sz#SB% z2RVX{-G(Yxm5xD&w!)sSLQq*eEy~GoKab+MT}!R#(c53&DNC2ho%F54nFbsEFK9e0 z`hKIrZia7a-*iwoT5kEf(_({lWiVNi0Em~$CGfI$Cn)tC8 ztY;f*N!oo>G)0C`6vZ!UG2S%I-jo@my$ak0Fv-(pLnUGnpag&jpS$e(FNM8m`%}*? 
zv$8X*J_@MsbZ!KItx+ba?msBv%b>o^UpOJE&>x*3w#j#iPC?$e|DgBpp*t)9-yNof z+}Y&;Vo$x$YtG(Eav}7P(dZ(4{iUe16M`cZEV`6YP#TYFtv?SpWNF;Y2_w}nvVLQN zo&uLLoQ|aFimLj7Vr&D|_(I+$5v19qdV+~8pPFI*V&Rq@qO*w~Os0(1FhBH(f&Ijq zA0#?Cse+R|LkIS&c<`97W2S-&oSQ=lj?YE5%p+yPR5ddo;NJr{h9u?V?9{~*MYQC7 zB|l11Q#>o)`mMZk2CjFsj9Mj|dqM;#dI>e&^ZZe-*JElDp&R-pFr`f-f)8xRAzRKP z2H#=qUy%Wl@Y}sMYY5;0X72H(_`^=+_C03^igZ4eQ$E?RFT9d!wsUwoReu7oHUW08 zm~n`v@1i$Vuf-*;R<&0@Q=T7qhSC%UrZ@#LNWiz0!f%ufWCHNQ)H>hQcfL%^Pnjkn z&{TCBJl%CLcgqi_hXQ`FT>F|r!!6hyQ=}ml;YK5QO`qZoXu~?go{{QdsyKajV=N&%4wf=T=)q#6*YlqMK@bb#f!$|sW+9<=Dv3y&U z)1j04E>|hYL;4O1yHS7Ty&IznczpQ6JfAvxY2QU^$N4jB-`7Yp+9Kg{Sb|65{$;{T zP%q)%<#pgP)E0W-nn5Ln1aYX{i;>EWTq$NDdOYFfVpkV5QbG&c9x%D(W#H1W+wJwL zxYQ<7CCgGP@DLwP1W+L>UuTc29D-1pyw7q;4ddIJ>w|0O&o0#&xA!nE|LHJlA9rXk4Anq^ zO(7{Qv02MSEp=UWX}*9XM3YrOD!ICW{KK54LFw3hs*i0OLnG!-YO`=S-o+3uyjdjR zy%?49O?)h;-@{lC#$`Ii{`2B}!T_)Q2bcb4ZvSb5R5I}CK+K=NXO0dTmve6_dl)zS zQg}3F{!bP_WqV5FJcb}Of?iXKFu-!Z-Am-?BLy+SBJr#v;Z2G>>0Yn!)K_aaM4ZgQ zXyj@{7A=;| zerqThuG&KpdJqeIN_p3X>6E4x2MoMfyDH80bHWP11f5)t*yi>(X zJz4NM@~wDr*D8o?buUoB-NdmRo`g(hPPwd2k&o{s;`qNKCf385LYI}ict3XfXp`ec zzjKDsWoze&8IB#s-;Ye!sL)_UZ5tV0I5~t0P+0x3aCg8V1lob=OX|EX<|z3pXaE%a z5~(6S(*mG+&@~fZprTQ{By7FRuVn`L!+Q$mI=xo1D+&+PCx!##ppA!|?T_6Raw2DuTY=sa*$K zr?;OzM}7%5q>e$f?iNwG;FFi187N=w#(lFqDPhlIg?E%TBo@@xarIY+5HJVIZvt0k z=aKK$MOB{+GLQee;h@%R$bQyD#|hPTNBNho^tML@Y{xAq?>^MvVz*1&R0o_dus0t6 zdtRod@qE9WX$xs(Ff;2duLGeqE!7Rom%vpmvM%L-!i>Zs)VZHq4$APn3~-Q+ZL--o ziQ9cvEou%q2(@337Sw^w`jKSU#<-hcoI!Zkou$<9T-~TktpI4uqoy1HxkHf%K2PaQ ztd_79!1>h5i@RkhZt++;flFdSu-`royYJYE%Tf0mpUnlxMe~<4~HE6Q+xpM-+bh^5a6IO>Szc&pIP> zaDl{?``_{W(g7?WhC3nXKCyvnYV95PueH93=Y0+rb<@Ld(vo3@+C`(~3v7#E`MaEn z-B>AEUMGqalOae*ikSwsX;v<1Y%Na6?Q(_n^(n6!gl_9X>2Y(CR)TS8Z?a}alcQ-w zo|=}~Zf$(M@W2j^*b3zdSJtcv+f6z}8fJL=Uu5DAcK3#TmU{4gs@O%=c;k9h+Nc_} zIh5gKf0eYnoZ&iMq#v}3T6PLU+)6B)(s{`(sS3IDRxqt6>m$F-9$%rbgS#J3WZToq z=j7?5+^JCncGM4du$v~8`E`;{l*y5+ae31cKWu~L$n6lV5-Afj@vaHs0_=F-;)>iX 
zP;B(@k6lw-HeC2HUBZ&3>*96Rd;G$t@p%mY`PO!5b%oE2-|B3$t1PtE29%>&`glS! zgg=b7&n}i4ZF{YC?9RA;ucKY3^9U-VvN7n)k-def2d#I*R3#TzivA$YRS}&|H}A=w zPD1p*=GfV~@<#HQNs)SO6i!8s?l}W7!zAo~y}Rz z)PzC7AG``~0{FJ{QT%Fx&rL31i0^d?LA*sSgA5nf&5wUy4=Pl-oo;u^D5axpt~0B} zWaJIV+f%kv0K=X(ycaancj&p5h1nS@LDnG{e_C9yCjwZbppV~6JlK+tF~J8DFTeSR zG{=G$ivV>id4w>2VwDOoe7%#&dY#JsI=6s`avXxp{&6xPn5?`xiDP)JRj35Ee6MQQ zm7Wc6-Hp=W)uCe?bfk!g#IVbTcD|2^o>{;j=Gx^54;&*%YxR#e$QcD*#`XDbcQErL zoXi1I@>L-{364ZA%(Md1I=e0D)RC9ld1pUK?!Chek$9?Oj5g6PA8o!cElB{hvE3kB zxL^Z8{_uiFz2}`WEHCw(Gzx3#KH@X zR+DT8Hc~(k9YY3B7wu}NIIFE0k)3zY9cvPBwbxuMu#?Z!yY~;&)P0zJb3W>gP0m;t zO*W}S-_sWtkVW8lNPTlzn?J|SP4$`?%eLb2t?dW)iFKm#SX}Y~7`NY7PaIjP;`hWMgND(3!U>$^nD&lK?frc3rTjDk6^n0s_o5aWe%*4*yW~aq zWz-rR!ZSqKg@OLy+pTn^%#jkmoyAU|*>6vrv3_({#nRc!by;5wE0&G?nmAjz67Wiv z*LY#^i4udeaxEYmP{wb9#Pw3(+&RYfSuJnheyr+`-Ytg%y*B;@PdDy}?cz3mD5rvq z^>s`(2#-hOxNUWVe6}~+=`-AeMWOpw*vmg`ui{IV81MQDexYfZ@^3B*HD>9(UVSMu zm0{<}eICcJ-Bn8$!A2*&9ux5GdITUk8ylVy|50gOQLU zM|(deb>goM$4*&CuZ_x{9ii%W8}Qm8O>6P14VT5wzgF0VeU!-OF@@5U zr1d|1gx6bM9dIxIqiQ})u4;!Z4pRrQ)qf)#=CYYX$7V|iHXg-L74zpR+*h>T^%83( z>$DECv%gNXdEW&)8xmvtxLwDce8m7W$UM#K|6ws=LkY5!_6O(1dSUrfRu5k-_M0MQ zelF73MYryg53U4)zLaYIC6x9Gb>EU=ysx~RxwCZakO#=nLW!g5oXdvo&s!!AuO@Gr%6xMsHg~ zf6&E`#n;*Cv!{wSvgy(nT=~tX<}Xj`7IP(iCG+H9G;a4Z>+;Y_IUa9#x&H73yicH69EnGAMmyG0@0S7GT?;9LHi zk&k!fF49)g!Um4+Psf(hdewF;EnlKYTJWu} zS!<8>AGPMG>1yxV7%OA8-(QBx#f`-zyM=$D@i8p#CmLLbB#>vgwVgUZ3oSJO7(mPx z`Y8BoWoZvY#_Q7S*bj+$`L+UxCfE61&${b%Z)ZAJKZmP1>Y~|a6K}-Ga7J;RQPmd$EQM4-|8^y&}PhB`#&KwndF@$ z_xej0eq*0kGsNLN^sjGK;{t`NKOA{!v0sS%BWdH$KMdddb9IQ0`55WPI-YPQ$Ov}r z!0{8w8jc=!8(MEfOHe_MN9`(7GkS%*UNOokV?R}RQnuDp-E~Rx>0CtRb9=4c>g;KG z@5EbzP_2EP0!8y<iy`bc9fHrT zVl0`)l$G69=oH9u{<;6fWW_c!PQA>IhSRJ+?FvXyAeX+x>GHVmQpi@6z93-t z<1(`|&E(GvILjBH3UgZeRIh(2fyc?)Adh{L?nY{Ydqi_IvtrY;SOY~G9F((m)#Ro7 zfb-dk@;KiGu<3kl)x+ET82;;yKpI<=j1 z;TAWh1Pl=nc;1<{CoKCMQ;JC*{D8-({{3k!A3r*3#o-_XZfEy>DB;`fF0q$~x!t!& zaB5MUgoWX@{d@_e6Z%Q>QYokQ(^mDD$a{C!N?MN{i)E1oC&@#7M;;x^)UQTt05VU9 
z3WSehJn3VnS^H)!9IE(lFi!R~N9x$Tk@}#O5YkeUZm9V{8nm8}MgzVmZ}g~LHM~F$ z*Ne`Wm^Rq2ljI($cP#GSrSsD>%y6$_ zluW3{kMed)JTWH<$rY$}0$nh?EePAfD5z+lyRU zTIp`HkfRz`?4iB@LzZjRat)hIYEED5DixKSO;ByNM%DXo&lgkmM^zs5_r9EioEtOB z*FMS5n0voq=bVmK+ux1v=G&#DEmYX z_eVTPyi|zlJK42pc~24uiAF%bl`+*T4JWcM_pm(&%rq$g_uiR9j+*<@`{6yNZv-L> zB1r*C*ZgnGRL7%CSRYwSO-O!5=L|!Vq;*T2O3;+J1@lu25lI@v9sAe%C#fJ#2#HA$v3#JIn*dovdTpBJ*}|n+9ujc zQ?CNYywTu&)}FDU?s^9AUaF&Ra0!z3g|||5$<b_K~ zEs>K(eEu@077IIH-Aj$M2U@J_W zb=Q;NKmYo3U~;vzI5B_?-Hg?c;Rtk&|83oZ3G?6tAv*+D7TD{60?$NBo~;$QaXY_i z26W3zUw%6x{V35Y`-ay>j>ka@5qXe7r{s$kwxG!_%{8k9fu9j8ydyK(scKtBZe~4D zqsx(bd;*WLrKW+a6nKLJbgJ3gZotqnJ^H2(B)87WgtKcY9ewEBIB=>@Q@l-1}S4fU9Lyb#J2GkC(@v!m&mG?Mlnu%;eAAaX?B9 z_$M3Kiz0VjN~SMHcsF?>%5s2pxkMP`qG#u7b~oW5-!0o!$<=xuvuprIp3Te7!&4>rXh(ACB8`+#g3s+*cL`BijO7BFjmR=gTd1*ylSmEC2@= z)8WC6cdacn5-{je5Gw252uk~9Sp@PF!D=7jPlrQ7RF%+ND+{mEonDC%I2w>ip^JUi zsb}Zr52tbT0+eGme~X(*{p@yz-^oX^G15rAC%51JBiMj#)(o&NEB?@I9Dtl)@K*Qj zf$3$mS1UuSf~bu)YpEZ(;IBS~ZR!ydj-qC|HIh6&v}xW3NtgpW*+@uU&ML<A7 zktoXB#YQpH!+xhplAv$7nS_0w zHVIKqQorT_ds?6||H-c_t=V@`y`VBri^nNYc|A~?#S+$eWFRSRpH`{ZMC>K-ZmdGy zD<-8MN8WbT&p>rG{&ZOPIfD`JGPeY26z>jvxDxvburKIXLlhBhFlP^_|D%CPX3q6P zS7>yGZ`!n+=#f1`&wN0`{hEZ4yT41?S}O&fD86_j_ym>RGB`@&lC4xzW8jr&0fzg6 zK1GsVFeNB>%1<_GKZ+0G(k~Sk-LvgF^g=*!C(jsxlwd>o=r4awN2|wdhcv&Ngg%s) zY-R7`YGqeQjnAHcx1#boK|7nlj7A?&eX8h2#uEEf#Qy)m*5?mQ&Rome-8|xCpzRz3USWcBf+dcEo`x?xcpJLh*zUyFA#`tK%| zvUs9&CHQ|YdMgt(!J~7sTsQX z62mEeHTh&)Y4Y!`fRFSP>~oD`g_{-~ov8QCU$pY!?Paphxw<2{-(Z>=N%URsRju5K zLH_P;S231j?fs8qZB!<;?MBVgM)DrqUADq}5NLTv;cUTy`2w0C{z8#5#u;AH3*R@i zscwVma!Mme9iQ5fqhFU{c!r`kgO+>4m&XI8AVP%i-Y~5)+>E96kI9(ZgVk`t$9CLN zF+aR`YNh!GM1T9Eipg*pyu84O{qui~$bcwf693sn!GpF91cKxqGq~oS1Gm47kubk(hpbfwyBunIY#V0k;wogfKm4cY)w=zf zN&{B5#`d8$0QDQ5bBg~`j_>?FhxBENy4EIKsc+XGQ?=UI$;(<0HF8wK&G z6d>$T_v>sW0_#X#CXRa6v}iy)LLJ%rHhuNf2X=S9MfhhMw5W$Nth6h!;SiCDz>39>6mM}6VLFKVUPxq!o}k;GrC*pQ-5sty!I`J>yV3i z3DJh>pDEvdn0W~%?SfzMFEy%_4}_S3-FMibzC(J)E8l 
z3;!sHI|&HChNMjr3cKu}`WOBOR)sU{n|};-T*25g){q%zH!!ntTu}SO=4$PurvTK= z)Yr4KMxL7Lt8*AFqYemvBJCF~Vyiu0kS5b;0O5pdO!D5{f(FgPP+ki9`$yf38t_S*s*<*M6Hzv$uju#=unZNP*ukH2<7uYP5CI5TJfiB= zr9L^OLgo*o3x68vz)~%A&OVlQF)8+HY#aGfMY9-vez^3D3eMVcDXooOGgas~QF18Q zyXGGav=qJ)J~-WOglI4A=a-zaz*L3BBV7jnL#J24m#HNo|j#b$hnRvGrwy$h!ePLh%TyHDe-!F zM%6>tZvWsl5uiz(A$KhN3Aqm@H~PYqyU~cDic*gEp8U%-2R$b>GS$$YyMQbJb ze}cgEdv`-1d^~ta7*UzvL7vW3JNIaJhahk%{fUcWKu&^Ae92?jTf$f9Y(eX}p&6d| ztg%)-KLHSt<>?=FXc$qKSN`#@2#dQWU*ZFu!_5RaY-%z75dJp8u2RY}?zWG*zoeW= z689T-lt~0Wf)ggGPCP)`g9m2T5LwyHD3PpRT|Xy#(a|8)Vr`tqO&HQ{p#K5+g0Mkn zOmF#6r?aN60Jc06R7P-oUM0^!t9C62SvU`f)zXT02_gLs4KUv|zrZ1l$UTo*@l*6C z;p(sSO&SsUNvC+t2idMY1hq(;k)IRx=tpWy3ygXWPG?rf>X0;cRL6xF9VumMGOX@V zxi@D&zvhA5cC2D^mZxPJOgcl-Z;}`}z2IniF6Hr_b)u>wveQBt)DhKUL&V$`ZcCnD ziFk*|Y%@hWz~uEi3@M#n*MR$=uf!RJCT;$-ZH}@6&C{_gw@Kfh5|qQgFp(n4%}Nvq zoW>F#n?gau-BeuIYdkJ@vj=+qF^YuRg3DbJM#ozN!7|><&q78wI=2UP2S-WzWlP5i9y;1TZy~4@zTgs%Fi|JE3tS!xX_vq{GO>Xh+Vy1EnXT*qy*0r*@scIs9G?W&pUW zWf+8%I=$@C%Ned|`XOL6s!uXM< zw@y~KnUap?U!nhFProREUVvCMJT{HxT7rllMWZGWCFbkRnK|hREoSu{uq9Um<<3{# zMlJLF?o##n6`zZ4ui$A~cBPw8wXW~bE89Kui7kAqm&oiKs|>S@UwuYu$ocmMF(SQ8=k&;YVq@H z(M>Wz4jY#Tu1%RiIEC#O(7TwhR4_)^ zu+V!)W5ZfQrMLAQl|)G5H@fpqDTQ2|nvIPA{xYA93|8WioSVrLzuO$vKDDgueN9PmXDu`@JMvDj}BL?}x6N=XA z!SmfwMKuOo5QB|2&(7qb2Y($cGi{j0TO)X9bGcgm04M96nAn9elvw5;q1C0%JCu88 z^Bt0fX2cvA&NXOjXNy;>>L2ABvVQ9UX3(QvOIgodt2G>4)LR}WzGS`C^SVwKFXsZ6 z%unKSCz?KuO^V=Dwr6G%;X5M->-ge= z`b(o~jM!3+2iS&a*#`=GPN*8Qi91qTI(U@5=xDa8feDur-3C<(`K924{2f)4FMUQPe{CF0= z1$5weTKmLG5*N0+2=#;XSuVcMxE?O$W_{(bQ7TpTR$$bd@Ne?#mX)z?aDu^UuX*i3 z6Jr1pagPo~QD^y8N3I;_%+5^MvCq74-mOSLOkIkE;r6DPWX`Soyyc^NYx8|<*VTw zJEY4W?TvqU9?le?z4h5!xKvHj+uih+v`$r2a2RXcQ|$5S0oQulV;omXiQ(SuQr*&D zlRnVd{Uf7Cs}pjGS34}0y74J5YrFBtZT;9~W@WmRN_AMQJatpWv)`35>IM2WTI~EJ zzd~44TxvQ#G0uNjmF^%eRxbIvBOYP1qwd||y?5qrUFPnk_hVQX4c4iPx1g2KCh-bI zS()%yAecMfY6Tys@*DaF`utIqDh*e4aa1A@Y%fEQzth|iF{l``qMz>fGqX*_v< zk2XE8v-cFOqnJ-ep?&2R0}I{dGIWIG{>A~xI-cYoO!2wqJ284o-QcfN+1?5gC|-3x 
z+G{iX*HASW1Q*Y_|D=}G?lp3f(VJw>W7oE5E*Ie9!6t(fEP%(`G7TrcdOgn_B~zugEP<0<&n>VcmV zWCgZgZZEMeEkx9I%~n`bG3H~qo5tTUnq{_7h%Dkh2psZuFNR0H|9y?yla3R+`yQr? z?V7||r5*}?+N1ACBJ}De3Yh7#D$pre5m4|-IubPl#(zq2d8h{Uo;@_=^%?3dKH5h? za)C5im-a$ILR2v0nNe-{Kc((Tuq%8j1?#hw1ae?Uq35nOIHLdc}!s1fvyfU3!xd5Jp?@ zpwXh_i^3eq?6if!G?zofvOAMR-}7LN{s;CJc7rAM^i>UJy+yGy8W)3liV>IU_m6K>^_07jIW(2(r3xu+$NZQpp?+0cO+U5!6 z=<(SAgQm|=V02UiuLob$>Ma!p1ztqBZshM;Bp2?#TRafsrLxp{7R}Lr;Pq5zl5NpO zUHPzEOOPfZRIo&5?KXkQmLEbd=9Yq=`{PJM`RB#Y$|1H0T2A}Fs)dhXimvSiOhW1F z8`Xpy7(L7z(NGG-!1TI^O2WvF;fCi4tY4furrr0ZdGwm1gR|ddNdbB_9%Tlm1HE|N zf_@;63@2Y1Ca&rK;Lvz&Nnc=)4-P8Zn455j1})AACeA&Ee|ncO9Nam5C`|=f@gdBX z{KOa@iNO@knIy!i>BzT3-)~cbu2i(p49p@Go$e9^`-ol0d85ine|%~EB(boA>5I(cNHPqg~LjFg%=l^U~S{q&8UH*fkAZmLMU+*(T(M+Yoooa~6 zb{PI;aLuyTb~j1Y4`vcsvP>NpK8vNo@9%(}{k_hGAVhaH#(_&KALM1h#3x~V6WP*-s-V7eUQFGZo117nOGlFvXn@K~np7ZK; ze`_!<7XLp{_TMAi_T{fr zLjtaBc<@Q-VH*_ij;wB&(#UVJ7jA$0JM5fPRK{5Um(BPdax$4c_ch178)72~biJ@I zn$pF0&iD+6m|TtO<7Nogb-`dT=)Obw3pmJb%?R_Ob(OS3?2gsm@V-vc{B3ebF>g=p z><5yJ6N9W^k$yfAsuD64(dB`-wXB;nF_J8dSMyTycEK(-BYn4b0qsz2mL{p3{?eXL zr{`qNBt4CrbQb>XO)6uw7izte+IQiAD80{pNz4pA>n7Z5hhuj~cYP6Y4Xk_#3*hO7 zl2{K*W&I2iV;~2t)xknm8eWJNsA7zd4(0-rI&5}4e&P9_Uk7`(sbVDANZ4qcfpO_0 zl1ah~8^n0nxV4!hwBUB{VgU$0ZxoWt-1I7UICV1^uqJjD>%!R?{Y(SrxBN=aySVdD z!uYaIeBkU>hzRCid0ArI$!!}dsAS}=aoZV7%aNViB_-z_a&dQjXAJ)$ayIgXBb3K- z-%lsx6crdyxTfc>X&eG{cv~_n3EGuUs~e+nd^?N0v+&4T&U9A8h0X2FET7Hy zrqh9aYvbE&Q;s=J&Dbs+2-ktX`+U!ivlolfsT4r=w8nalrTm+DNn>fgn>?|{hxgnh z=hV558*Ir#(ipE)d+x!jG0QWqIZAMdk8X^h&9oI-)agJM(EMH)7bZoP(_)?1kaA4J zSrES9^?Bi2V%ivQjquR_|0HA)VX@UD7sOimZDn%(wrOpKvWJF5KU)*k^)>x?>zFNK zN1`+sKATqY{qsxC??+?(@my58M5p7c(wPp=qZIi- zKU4Tm{z>6PZ(TM{Yzq!7Q%JrGq?y_iImOiXaJn7j?pV(X;%A;oPwoZcAn2Wc2r%=| z5wRBx!GBsbyy^c0CSV>7kg>*68{5M)^VxVRyB~U4%%6&|PyG($WU^!jrS4Xk<(7`z zLZw+PANBtO6PBxu$yiOJH|cCao;N?<^=NIV@b&gEIR(8V8!r?VR3b=ReoDr}F0j+I z`|B~cimSny`=FFJ1LdL!5m@5};0ln`Z%HY2F4zL&h^DX_Kcjd(k}?a6klr!?cmMFTVG)GbQ{ zl~N4q(E^#J@h{z%a$>ne<})Ul>e1hEi~ff;{DuUu-*~}XEr;u!ZC(c%_%#+=M)xNX 
zf~EfQCOV~gc~c46ygYQjnNKO3VxLaYJD3ovnlBkLa}Gcm+$oQvol@^7!EKQBNKr%4BEATsJF?eM4|-Ec zZs1nxXVQ`_q0bTnKaUhFrjV}%$XJ|1J(X(WH@{Aoo1(+${&9AgL|>3d9zbK$*ly`( zRx^cc9IFk*@(+;9XHRM1Wt(3IO&F!k9b9_F3-_ZprrLDqn>oCO%wdAoGHp7Bu0fa{ zwR}T0|3x#k{!ZnA1^uKk*WCT9&Zp--RqPQocC>lt<${2G^S!ADk$zUIx%Funqpc9p z$g7AyLf&q}>y9o%?X;_>)Ej`DN!`NipL5ll6kF=3h+e}0UWL1c`=4q-qmb{B-%96fRSr z;jABa_`lx#7o0QnNrF)L4g^UQc!@-n@>g_(m|;0P17zZL&CK>l{|j1Um}zvt7R2q* zaWVYqMJ200NydO!{?-wOo8R>y;lPLP5TUqkn^`C_vVLgo4c4OK#!!Sno4g>YtE>`m z&kpB`(weycor2B@qJMcN)cr5a!gn|%ZH-N{^enu&D5xGAcw+nos#MnhD<-4@9CsV}_;ZvgMq$BssI$Qf5fm|PfsHuy;paiQ4!h|&f5 z%Xj(NaD}Ill?oA|m0rQb`LKqopM}>p$96hKvEg3dHfa-GRrD}b`5zLf#_@7#2sQi5 zP&si(;q?BAw?ac{jyRvUe;BL%kZtN=OAZ~e?%|U#HH-b4xv`O+UXsUS98yT$-lYnx zpkpaDb0(&Ff+%Bse)$@L70}|ry|a@(1z6^m@nE1B4Hc%OUm1Cmw#+`7I`<-zUII`- z@W!xvDHD-SRcyU)*rFTyEK%@-MeLTby_wVjCksZiNVN_Qu!0rzk$(&Q;%8**^8@%* z9CxUKjf7?5C_B5$LJe>9kG$kqkvU_Er4VWplu!Ug8TM|OW@{UDGsOyj-Zy3;nLi1z7~9dX;GB9ch*KG0T+ z%4%J4lek_#Ay>M(-}7(VcUm{9pU}&aL{hefT&02t(pmDR}` zCnLwQ>WP!`ZAdo`hX~htg0-0Og&DfTCd9*KCE4I*sh6e5({dKeM zPUY65K|841&zZNTCsj2}YCTnwhaI_;eACZ&B*50XqG%$)Em-vTJJ_hp2Ik&f(fclG|<8a}( zaTme|{IPsHSKBDcfDZi@_Wru~don_2o$a@Bo)ktwDoCc})ct#%r7M9`M_LLk-eDbDv2w_7Cztf;mN-eG9bLt9N zN>48px|%UTR^K(`8(CllHE_cYa?D z;W&lQXtrslw0z%Bf5&Oq)o6cb)$K|VCnZ;<*>&CjwV?A)2a_@P6K%AFUDT-a9vSoC z^Z3CUW#Iq#R;~@T`CKNV3BKj|@5dbOF!>=BA>6N|=vt3`Koh@MEP@FjFBMV8J^*Rz zn^0e9{(=Y(bascv*sI{31=Yp8_5Vws7DW<#O?Ytgewxq!j@2@x;desnI=?=PmGvkqF9n^QxVm1=fJkLhNvNCUs7y*8( z`TE=*X9uLu!B++dOK{b*0AJ+ zcVZtH=l;6!Y@@PVC!9L*^kLeTxoU_m2{YclKPhcsRU%n?SSdHfAb|C-Z2BV1oeaiX zclVWx0N!jqigFh^S``qV4R%#;6JuZf*2?d7ntc9YimMkVY1o(z_?Co^kPQJnStJ6t zoxIr3@Q?)C@6M%(=2A^TMo^T!i)N;IphIfY=E!SQHx11m#9kae-(j+FLgqsEqJBCZ zEXhIH!XxZG77cxp(3wnSYrB2Pz9{EIuemN(*xlIDlQ_Go4o=zhK;8o2=$}pZ3pIH9 z5cjd(t8B&u`L-576gFN3I5$H#sloXDkm9HKJlAePLt9|p=uE4c?pwN&LvC&!`O)Nl zuZt1+9O7 zEWs<(TXN$%9|mH<&!nr~>q4SL87IyyPm)j4UAC`u8a7v)61roD5T#yPtrK(vnOE&l zX1dLKC9!^?SdmcEdTvjN(wZk&;g^x-Wl5|Qu>X|*;9a9o{*yG5sZDIlIm1+t!Rk;t 
zN0`}E_BIp}1TBf$OxDQpFJsX5c1)dMmt@}CBG^fdtlP=BU+sFj279L zTi?9X)?L%#k)h;kS#|e#FR|B=oZzKl*|e)o&)2W+@;CL(vLM-7AER%{!RsG*8504# znDk>UUnXOY)Mqq;33I$M%HEMY+h+@$bn00f$U&pMpt_C5nDj zI^iG2dVnH=%oARpP+#qBSs?O$#2m6MIZrxW?JqaqN{d~G&B5*EKKUpgb~-gji}U-e zh({55w(cmv7d_W|>7N0vmkgo9|J4Pg(=VP$f=C?iCp0?e7Ja@np!Ds(a%c+%^ISQQ zJHM+svdyF)D+rRWI}J$Tk1F`eR+&`(`z^GcmtWN&;#soONg>D>06yMq_ScFxWW0Tg zBzC-Q@lluLlHG_uR)r0ETM5TNUc!wJTrL(SMd4k+f z{l4s?p5@H4Z`FO$Kz{eW>ApO+@qkdx#xg{!*#7Bzk(#W@egpI@AlUyR5qqAl0Zl^` zydqXZMgrUSlh27|+ujSV<4xL<)6PTaC!l+u{l&F;^Z82ThzDef7=bf!Ltxv<&0b#^ zx`>qZ;*tNhv(dfL0Ncs@T@%*&czfWTv7>>jG8t-&li1pUu5TfPw$CTEE6Z*8A6-ZS z*M%)Rp2HhJDzXO1!I zLn{5xHJ?=FtOE)`N2xr}WNQ*x_~|E_nZAc&nYKx-b8dpFm7=i z3XS#ZPcT|o`h+#cWAZjM{GO1nW(>TOEoeE{OmwzyYCRkx1pjP`h-m0`Dpn{kC;%gn z>0dRgw?tFa!(M2URcZK@ZfjC1Z5o}T_r3&pcc1h>@qFRf@&3YoardjNzX08L_p+yi zpL%1>l^c}*Sa@$z4+Z!KM`TpK^>a=B&(P>=)Z;0k79JEYDkM}kpB3&q0*NGrHC(1z zG^ZW81*B!S`Fu?7eh~ghz{Cyk#2}SI#a(xONw0O?rQxw!8uHW8-tOe6l0o@lam2)F z-fp9?p#@8dcbV2z9MZdy&k+L3XPxbGt{6m6_k9uTWLz#gfp{u_#N;27)K6Q?x6)DHxlrCl6LX1J+!QruV*qMJ zdE+m1;MT*6hm%Uq*ASRxbEo2AJ%v%-7Nd)era#{g%%g2J%;x{j(Gt@K=Va?LCQ(W0pV@g{+B$ zw!hyKAKr1NcL!&yA?|R>AaOkk8Nij<%a2*=!n;vyFzePe#GM*s=G4o9{4Q9m#j#-> z+f0}vP-Sg5QX0yKC&mz3aG$Vy`0^@oOiF&(-Z!8iFYL z5}uYr*}!2wAgDw5V>1Nr_VQyVQrzYO7JY}h_xFGdwK#m1n~OK{vX<`K6>qfEdFkSI z^#64Mz-M!>+!(G%c_K96oR`0tjU;@|WIbSM6PT@S9LY&st{&!%6*Lg4e<4UyZ1>MZ zWgwvG?=sh65qf02w4%i`G1i(5t(ZG=1=%k*@BOVbv!~ZRy&dGR@%Kw&w=j%1n9jzG z3o^J4^~)b)09&Kgvyd{hh>B?;6%X1&gdzRUGb0~%f`tL@?pFQJ!L?@9Ucr`0;QO;o zqdocH+(7xSke=C)s?b95sDIdQ1@6mpQ8iV&32idZ6L(`=u99?2b5n6fC z&@UmYG4HekT#9LQGWw(5p}leUgp#wQ+T z(Vu6Hz1%oz+s{&UvpN9nOm}@WnI)|~zLFMGlg43rghS8D#Cig@0>((*dV;m~-uG^a zfabY3>|ZFZ$lD*fVS#P%@zX5wU5j8WuBnBo78~hP^RMH-r0JRL9C)19_chf^bn@{N zV@VN@o8%YKV6*sRGnse7+tx*e|W$n8PS7QCV%& z$?;d8`pXv0`T18PT?&$Jo*$qYLABaKTYjedv`iosI7kQ3WL3%Y+#)7%{UBTGj=`ol zOgU>RgdkVO*E=X9di8e3Ehy=TB`?}y8hK}E@m)D9mj4OD$0wscLdypNGC;6KHF9-o zFoZve0^F5XvQ5I+M6JEIf6s5tvk8~tx~+ajj<%~R+#i`4TH~}FJnA8JpmiKqW&;!1 
zY*#*rxZ$lXTQFhp63OrGx9kGZjS}Yj2BtUB^a!YJ{8bIcxPW1f_)%?NRKUO~`zFH#b4e-_rqRN?8=Q9Gb0|k$Gfw)V0y2|F?KyIV|f|IDA=$!S$HswJ!UC zX4X->I^QNbmMJ=8t^Bkst$F@-3q602IbzwGLe}4+iczM<=!#%tpTo0ngCUO4J^PTk zg-SH{tb}G~lzEU2KpTEmup63-H@P?Fg#4jlGs^DrIhG&3F}&H4}o3lrvgV znpqIma0XmIxqY}#&92Yx`*FGQPyy=;?ED!=ZjFgI8MEBxI(6(O<%Yx-EDywNCp^fD zvMyUD=BQH1Q{caqyv%vs8Rxezm{L``D=V=(i=uZF&+jHStL%i``z;1_%Hfk{1x+jh#%rAC{x#Q@#3D9@YzD->QUzU(}A`i63tx51N4>JbL5 zW=H!Lv**6rs|%!psY#yNFT$%FUHnZ=BblEYTdXN(K)53>D9`o-LXOI0lH}L)-rh!B z=vcZbLEAIn>uwOqna;i6Knu>s1VZo&TM}0P+N)LS?ftmbSdN~gaeH83QbHgS0NM>= zKIgBxQO4wNXj!6W@^Aa1N>xg#=`P|*8TYCWq=m*m1IWh)OQOny^7i7}qL(Fo|7N3N zBx|pAocm*u)NLF1`)7sjvmP+kL)p=n_M%$;*NdBmZvv2uYj?{~&aTqrz82s6xi`=O zi|CTezV-(B#NHpR94EhauN+%G_*#%7&a4Mv{Np%RE-0F4B*0GK>*`yY3*`gla!D_h z;{ZWy?@q9GxR=(<^W{R5eK`-M?ZN&FM*(oxq9I^?wloufc779f#8)Nnu=^vt zUa;zQgk;o9zO317WBK*gi5<=3(hGsNg18)Y+v7CwaWaXIBn8eBSsiQ}LxPr#EVatJ zKQ`JP71>b^%lr)lN@>EuoF|OVSkEP^&G`NI@^LC`F=fW!gW7QuWp%n9BV+%( zvQ^}AdxVp-yPCcFtji0o`Il}^3|_rNotGn@i@#4ft&Tf5MG6*46ZpM+?|=E2>!Uzr z=$ju8Nfc0z%W;qXI=bQacCkEtt#L-3#AXM|2Ch+*O{xMy46mN-YDr5=Ir6u2spRVmQweXC9TFPHa@mRYB z&^jKNY??c(CU5(>^Uag#oAhPZDR(PQr(((Rq98)Cqyn;h5t4}?;Ss?xUEY#}pC3Ll zq0yjvU-WdBzq2Cin@0T`mm*c}^H{u|^l!4Xs6K_Opwo>R1|Wb7)vmbRPu(ul@J+-{ zTwq9vM5=SYrD24#0$8 z$n4dC;j7Fh+{S_*ufnfif!J0v-}{>V>Ozl4aQw?!muXW$9~T|hOp!jj@IzB}c4qll zyt4Q8y&ZIXk$L&|;^~sl(TaCR=^?-2*)}x2*fEB0k+$vpM2HvxIlYz*;Zchf_x@05 zXnl>zCt}O9Wj>AE*!}aar}ZD%mwB6YrANAJ5DE-sZvW*JCd_o?C&NCS3tfn^G4tC{ z**s(=i?%MY>Y#vfWY@}hym8wwnkB0Z%J!Tf3)XV{qE6 zL|2(%?50JiGqLj$n;y;=1+!uC`F^%$)W&n2vSHmc;Yx>tHTIdTl@Ufh`4LC}&p{>E z6zOmwjmh#CE-|p!Kkb)r@H?}i1W$2Z;9)eE&dxzJQv7}heIn_@+w%$ zGTDZOYRjrcyk?q{M_5VM1PY)wII~Lc;S+Kj3|@qoCOG5{6NK+;br2}~uBC%}hoYm( z>gacrDa%E3EB0pcF)O)`%3H@D^rbQbH}j|?j8HR_tG(lZDto9jpBVjV&Pk^Z5RJ7g zi2K+*JTOH!-QWf#pK$ay#}Uruy)K-UuOXcc-BHAim6yC;K`9?J*I$1Ez$Hu#U<2;F z`u;ZkOdw!#D6;kH7aqwo02d8O5#Q~s?ziu8l_%YY4Uw~*qKj!LK}a!q&{xrg<(Wxf z6MD~L-+q;p>@vH6-W327_vrP@vES<{^||$PPP5R@+QEcNP`SU?n-vfbx&5SE{!~nQ 
zt87;N{B|qA{w}~WlkxdZ*dC9N3}W6jwKdIS3M*Nw#+E;R{$gTWKp8)DLf0M?=S)Rb z5YJ}C_U#z~wa=rSSG!h)P8-jqrfqjFd3>!O@0wkhXSuz2?$8B%h}sUzN@cc4S_il> zOr#g^q-xlJ%jsMT1zAy03BS?!DFb3qg2-EKYWdpKYkrPe3KpN>;knJZB#)SqO@Bwd*g zxVu5@GS3iBQ;rl>W^_B$BmuKtDj}Y7eQce7bo8&gofPD{R&!jraYIa+W0i#GB4-&X zVk!iCk>7L0M>#j;&51mfA1;`e+2vTzzIi;WTt{P+hNlZK`Qj6-ZB-ume8W%3Du?2< zJs`ag6xT8ZrmH8+6FvA8oY6$X%+inSWxYH1(YB7W1z&vKQ`%wI3%8B!?clVL4qu0e zs{{uc&Nn&~OS-Yx=V)U}P7jdr=-;Meii}UxR9BvjQYB=&Zhlnq_qlgHhiI+`!c?>R2Al$a}z)cha8>O@`dd9u@5 zE&bVp+Rl^C?j-B`ntK1e*4ecS|L);V#@jk=)n?H=G`AEgGEe7%8*NqD!XGNz=M)dog9|e@Y7?^uNhkkD-D+A8268#KK5VMeE3DP!HRA%Q*GxqoJLa>Mstj z!~-p