diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 7f86ba39d2..78e181b2ae 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -57,7 +57,7 @@ jobs: builder: macos-13 - target: os: windows - builder: windows-2019 + builder: windows-2022 defaults: run: diff --git a/AllTests-mainnet.md b/AllTests-mainnet.md index 3bc8cd0210..18aac6c135 100644 --- a/AllTests-mainnet.md +++ b/AllTests-mainnet.md @@ -109,6 +109,8 @@ AllTests-mainnet ``` ## BlobQuarantine data structure test suite [Preset: mainnet] ```diff ++ database and memory overfill protection and pruning test OK ++ database unload/load test OK + overfill protection test OK + popSidecars()/hasSidecars() return []/true on block without blobs OK + pruneAfterFinalization() test OK @@ -159,6 +161,8 @@ AllTests-mainnet ## ColumnQuarantine data structure test suite [Preset: mainnet] ```diff + ColumnMap test OK ++ database and memory overfill protection and pruning test OK ++ database unload/load test OK + overfill protection test OK + popSidecars()/hasSidecars() return []/true on block without columns OK + pruneAfterFinalization() test OK @@ -200,6 +204,12 @@ AllTests-mainnet + Non-tail block in common OK + Tail block only in common OK ``` +## EF - Fulu - BPO forkdigests +```diff ++ Different fork versions OK ++ Different genesis validators roots OK ++ Different lengths and blob limits OK +``` ## EF - KZG ```diff + KZG - Blob to KZG commitment - blob_to_kzg_commitment_case_invalid_blob_0 OK @@ -670,6 +680,7 @@ AllTests-mainnet + Stability subnets OK + isNearSyncCommitteePeriod OK + is_aggregator OK ++ nextForkEpochAtEpoch with BPOs OK ``` ## ImportKeystores requests [Beacon Node] [Preset: mainnet] ```diff @@ -826,6 +837,11 @@ AllTests-mainnet ```diff + prune states OK ``` +## Quarantine [Preset: mainnet] +```diff ++ put/iterate/remove test [BlobSidecars] OK ++ put/iterate/remove test [DataColumnSidecar] OK +``` ## REST JSON encoding and decoding ```diff + Blob OK @@ -960,6 +976,7 @@ AllTests-mainnet + [SyncManager] groupBlobs() test OK + [SyncQueue# & Backward] Combination of missing parent and good blocks [3 peers] test OK + [SyncQueue# & Backward] Empty responses should not advance queue until other peers will no OK ++ [SyncQueue# & Backward] Empty responses should not be accounted [3 peers] test OK + [SyncQueue# & Backward] Failure request push test OK + [SyncQueue# & Backward] Invalid block [3 peers] test OK + [SyncQueue# & Backward] Smoke [3 peers] test OK @@ -967,6 +984,7 @@ AllTests-mainnet + [SyncQueue# & Backward] Unviable block [3 peers] test OK + [SyncQueue# & Forward] Combination of missing parent and good blocks [3 peers] test OK + [SyncQueue# & Forward] Empty responses should not advance queue until other peers will not OK ++ [SyncQueue# & Forward] Empty responses should not be accounted [3 peers] test OK + [SyncQueue# & Forward] Failure request push test OK + [SyncQueue# & Forward] Invalid block [3 peers] test OK + [SyncQueue# & Forward] Smoke [3 peers] test OK @@ -988,7 +1006,8 @@ AllTests-mainnet ```diff + /eth/v1/validator/beacon_committee_selections serialization/deserialization test OK + /eth/v1/validator/sync_committee_selections serialization/deserialization test OK -+ bestSuccess() API timeout test OK ++ bestSuccess() API hard timeout test OK ++ bestSuccess() API soft timeout test OK + firstSuccessParallel() API timeout test OK + getAggregatedAttestationDataScore() default test OK + getAggregatedAttestationDataScore() test vectors OK diff --git a/CHANGELOG.md b/CHANGELOG.md index 
a9e1954b26..a2f60a7f9e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,48 @@
+2025-07-31 v25.7.1
+==================
+
+Nimbus `v25.7.1` is a `medium-urgency` release, fixing a potential syncing-related crash.
+
+### Improvements
+
+- Use Nimbus agent string for builder API calls:
+ https://github.com/status-im/nimbus-eth2/pull/7300
+
+### Fixes
+
+- Fix assertion on syncing:
+ https://github.com/status-im/nimbus-eth2/pull/7315
+
+2025-07-10 v25.7.0
+==================
+
+Nimbus `v25.7.0` is a `low-urgency` release, except for usage of the validator client with non-Nimbus beacon nodes for which it's a `medium-urgency` release.
+
+### Improvements
+
+- Increase default builder API gas limit to 45M:
+ https://github.com/status-im/nimbus-eth2/pull/7234
+
+- Ensure that validator client attests in a timely way even with partially unresponsive beacon nodes:
+ https://github.com/status-im/nimbus-eth2/pull/7276
+
+- Implement postStateValidatorIdentities beacon API endpoint:
+ https://github.com/status-im/nimbus-eth2/pull/7223
+
+- Implement getDebugDataColumnSidecars beacon API endpoint:
+ https://github.com/status-im/nimbus-eth2/pull/7237
+
+### Fixes
+
+- Fix sync-related crash regression in v25.6.0:
+ https://github.com/status-im/nimbus-eth2/pull/7275
+
+- Restore validator client compatibility with beacon nodes providing BPO schedules:
+ https://github.com/status-im/nimbus-eth2/pull/7219
+
+- Add missing `finalized` field to getStateV2 beacon API endpoint:
+ https://github.com/status-im/nimbus-eth2/pull/7248
+
 2025-06-16 v25.6.0
 ==================
diff --git a/ConsensusSpecPreset-mainnet.md b/ConsensusSpecPreset-mainnet.md
index ba8c0479d3..c4b0e07111 100644
--- a/ConsensusSpecPreset-mainnet.md
+++ b/ConsensusSpecPreset-mainnet.md
@@ -3046,6 +3046,7 @@ ConsensusSpecPreset-mainnet
 + [Valid] EF - Electra - Sanity - Blocks - deposit_transition__process_max_eth1_deposits [ OK
 + [Valid] EF - Electra - Sanity - Blocks - deposit_transition__start_index_is_set [Preset: OK
 + [Valid] EF - Electra - Sanity - Blocks - duplicate_attestation_same_block [Preset: mainn OK
++ [Valid] EF - Electra - Sanity - Blocks - effective_balance_increase_changes_lookahead [P OK
 + [Valid] EF - Electra - Sanity - Blocks - empty_block_transition [Preset: mainnet] OK
 + [Valid] EF - Electra - Sanity - Blocks - empty_block_transition_no_tx [Preset: mainnet] OK
 + [Valid] EF - Electra - Sanity - Blocks - empty_epoch_transition [Preset: mainnet] OK
@@ -3094,6 +3095,7 @@ ConsensusSpecPreset-mainnet
 ```diff
 + EF - Electra - Slots - balance_change_affects_proposer [Preset: mainnet] OK
 + EF - Electra - Slots - double_empty_epoch [Preset: mainnet] OK
++ EF - Electra - Slots - effective_decrease_balance_updates_lookahead [Preset: mainnet] OK
 + EF - Electra - Slots - empty_epoch [Preset: mainnet] OK
 + EF - Electra - Slots - historical_accumulator [Preset: mainnet] OK
 + EF - Electra - Slots - multiple_pending_deposits_same_pubkey [Preset: mainnet] OK
@@ -3270,6 +3272,11 @@ ConsensusSpecPreset-mainnet
 + Pending deposits - process_pending_deposits_withdrawable_validator [Preset: mainnet] OK
 + Pending deposits - process_pending_deposits_withdrawable_validator_not_churned [Preset: ma OK
 ```
+## EF - Fulu - Epoch Processing - Proposer lookahead [Preset: mainnet]
+```diff
++ Proposer lookahead - proposer_lookahead_does_not_contain_exited_validators [Preset: mainne OK
++ Proposer lookahead - proposer_lookahead_in_state_matches_computed_lookahead [Preset: mainn OK
+```
## EF - Fulu - Epoch Processing - RANDAO mixes reset [Preset:
mainnet] ```diff + RANDAO mixes reset - updated_randao_mixes [Preset: mainnet] OK @@ -3345,6 +3352,9 @@ ConsensusSpecPreset-mainnet + EF - Fulu - Fork - fulu_fork_random_3 [Preset: mainnet] OK + EF - Fulu - Fork - fulu_fork_random_low_balances [Preset: mainnet] OK + EF - Fulu - Fork - fulu_fork_random_misc_balances [Preset: mainnet] OK ++ EF - Fulu - Fork - lookahead_consistency_at_fork [Preset: mainnet] OK ++ EF - Fulu - Fork - lookahead_consistency_with_effective_balance_change_at_fork [Preset: ma OK ++ EF - Fulu - Fork - proposer_lookahead_init_at_fork_only_contains_active_validators [Preset OK ``` ## EF - Fulu - Operations - Attestation [Preset: mainnet] ```diff @@ -3881,6 +3891,7 @@ ConsensusSpecPreset-mainnet + [Valid] EF - Fulu - Sanity - Blocks - deposit_request_with_same_pubkey_different_withdra OK + [Valid] EF - Fulu - Sanity - Blocks - deposit_top_up [Preset: mainnet] OK + [Valid] EF - Fulu - Sanity - Blocks - duplicate_attestation_same_block [Preset: mainnet] OK ++ [Valid] EF - Fulu - Sanity - Blocks - effective_balance_increase_changes_lookahead [Pres OK + [Valid] EF - Fulu - Sanity - Blocks - empty_block_transition [Preset: mainnet] OK + [Valid] EF - Fulu - Sanity - Blocks - empty_block_transition_no_tx [Preset: mainnet] OK + [Valid] EF - Fulu - Sanity - Blocks - empty_epoch_transition [Preset: mainnet] OK @@ -3927,6 +3938,7 @@ ConsensusSpecPreset-mainnet ```diff + EF - Fulu - Slots - balance_change_affects_proposer [Preset: mainnet] OK + EF - Fulu - Slots - double_empty_epoch [Preset: mainnet] OK ++ EF - Fulu - Slots - effective_decrease_balance_updates_lookahead [Preset: mainnet] OK + EF - Fulu - Slots - empty_epoch [Preset: mainnet] OK + EF - Fulu - Slots - historical_accumulator [Preset: mainnet] OK + EF - Fulu - Slots - multiple_pending_deposits_same_pubkey [Preset: mainnet] OK @@ -4495,6 +4507,23 @@ ConsensusSpecPreset-mainnet + ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/basic OK + ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_bad_parent_root OK ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_future_block Skip ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_inde OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_inde OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_mism OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_mism OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_mism OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_mism OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_mism OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_mism OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_wron OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_wron OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_wron OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_wron OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_wron OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_wron OK ++ ForkChoice - 
mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_zero OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__not_availabl OK ++ ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__ok OK + ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/proposer_boost OK + ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/proposer_boost_is_first_block OK + ForkChoice - mainnet/fulu/fork_choice/on_block/pyspec_tests/proposer_boost_root_same_slot_ OK diff --git a/ConsensusSpecPreset-minimal.md b/ConsensusSpecPreset-minimal.md index 32040f9b93..5e993749b0 100644 --- a/ConsensusSpecPreset-minimal.md +++ b/ConsensusSpecPreset-minimal.md @@ -3210,6 +3210,7 @@ ConsensusSpecPreset-minimal + [Valid] EF - Electra - Sanity - Blocks - deposit_transition__process_max_eth1_deposits [ OK + [Valid] EF - Electra - Sanity - Blocks - deposit_transition__start_index_is_set [Preset: OK + [Valid] EF - Electra - Sanity - Blocks - duplicate_attestation_same_block [Preset: minim OK ++ [Valid] EF - Electra - Sanity - Blocks - effective_balance_increase_changes_lookahead [P OK + [Valid] EF - Electra - Sanity - Blocks - empty_block_transition [Preset: minimal] OK + [Valid] EF - Electra - Sanity - Blocks - empty_block_transition_large_validator_set [Pre OK + [Valid] EF - Electra - Sanity - Blocks - empty_block_transition_no_tx [Preset: minimal] OK @@ -3267,6 +3268,7 @@ ConsensusSpecPreset-minimal ```diff + EF - Electra - Slots - balance_change_affects_proposer [Preset: minimal] OK + EF - Electra - Slots - double_empty_epoch [Preset: minimal] OK ++ EF - Electra - Slots - effective_decrease_balance_updates_lookahead [Preset: minimal] OK + EF - Electra - Slots - empty_epoch [Preset: minimal] OK + EF - Electra - Slots - historical_accumulator [Preset: minimal] OK + EF - Electra - Slots - multiple_pending_deposits_same_pubkey [Preset: minimal] OK @@ -3452,6 +3454,11 @@ ConsensusSpecPreset-minimal + Pending deposits - process_pending_deposits_withdrawable_validator [Preset: minimal] OK + Pending deposits - process_pending_deposits_withdrawable_validator_not_churned [Preset: mi OK ``` +## EF - Fulu - Epoch Processing - Proposer lookahead [Preset: minimal] +```diff ++ Proposer lookahead - proposer_lookahead_does_not_contain_exited_validators [Preset: minima OK ++ Proposer lookahead - proposer_lookahead_in_state_matches_computed_lookahead [Preset: minim OK +``` ## EF - Fulu - Epoch Processing - RANDAO mixes reset [Preset: minimal] ```diff + RANDAO mixes reset - updated_randao_mixes [Preset: minimal] OK @@ -3544,6 +3551,9 @@ ConsensusSpecPreset-minimal + EF - Fulu - Fork - fulu_fork_random_large_validator_set [Preset: minimal] OK + EF - Fulu - Fork - fulu_fork_random_low_balances [Preset: minimal] OK + EF - Fulu - Fork - fulu_fork_random_misc_balances [Preset: minimal] OK ++ EF - Fulu - Fork - lookahead_consistency_at_fork [Preset: minimal] OK ++ EF - Fulu - Fork - lookahead_consistency_with_effective_balance_change_at_fork [Preset: mi OK ++ EF - Fulu - Fork - proposer_lookahead_init_at_fork_only_contains_active_validators [Preset OK ``` ## EF - Fulu - Operations - Attestation [Preset: minimal] ```diff @@ -4119,6 +4129,7 @@ ConsensusSpecPreset-minimal + [Valid] EF - Fulu - Sanity - Blocks - deposit_request_with_same_pubkey_different_withdra OK + [Valid] EF - Fulu - Sanity - Blocks - deposit_top_up [Preset: minimal] OK + [Valid] EF - Fulu - Sanity - Blocks - duplicate_attestation_same_block [Preset: minimal] OK ++ [Valid] EF - Fulu - Sanity - 
Blocks - effective_balance_increase_changes_lookahead [Pres OK + [Valid] EF - Fulu - Sanity - Blocks - empty_block_transition [Preset: minimal] OK + [Valid] EF - Fulu - Sanity - Blocks - empty_block_transition_large_validator_set [Preset OK + [Valid] EF - Fulu - Sanity - Blocks - empty_block_transition_no_tx [Preset: minimal] OK @@ -4174,6 +4185,7 @@ ConsensusSpecPreset-minimal ```diff + EF - Fulu - Slots - balance_change_affects_proposer [Preset: minimal] OK + EF - Fulu - Slots - double_empty_epoch [Preset: minimal] OK ++ EF - Fulu - Slots - effective_decrease_balance_updates_lookahead [Preset: minimal] OK + EF - Fulu - Slots - empty_epoch [Preset: minimal] OK + EF - Fulu - Slots - historical_accumulator [Preset: minimal] OK + EF - Fulu - Slots - multiple_pending_deposits_same_pubkey [Preset: minimal] OK @@ -4978,6 +4990,23 @@ ConsensusSpecPreset-minimal + ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_finalized_skip_slots OK + ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_finalized_skip_slots_ OK ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_future_block Skip ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_inde OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_inde OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_mism OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_mism OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_mism OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_mism OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_mism OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_mism OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_wron OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_wron OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_wron OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_wron OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_wron OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_wron OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__invalid_zero OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__not_availabl OK ++ ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/on_block_peerdas__ok OK + ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/proposer_boost OK + ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/proposer_boost_is_first_block OK + ForkChoice - minimal/fulu/fork_choice/on_block/pyspec_tests/proposer_boost_root_same_slot_ OK diff --git a/beacon_chain/beacon_chain_db.nim b/beacon_chain/beacon_chain_db.nim index ded978182a..e01f750fca 100644 --- a/beacon_chain/beacon_chain_db.nim +++ b/beacon_chain/beacon_chain_db.nim @@ -19,7 +19,10 @@ import forks, presets, state_transition], - "."/[beacon_chain_db_light_client, filepath] + "."/[beacon_chain_db_light_client, + beacon_chain_db_quarantine, + db_utils, + filepath] from ./spec/datatypes/capella import BeaconState from ./spec/datatypes/deneb import 
TrustedSignedBeaconBlock @@ -152,6 +155,10 @@ type ## ## See `summaries` for an index in the other direction. + quarantine: QuarantineDB + ## Pending data that passed basic checks including proposer signature + ## but that is not fully validated / trusted yet. + lcData: LightClientDataDB ## Persistent light client data to avoid expensive recomputations @@ -508,6 +515,10 @@ proc new*(T: type BeaconChainDB, if db.exec("DROP TABLE IF EXISTS validatorIndexFromPubKey;").isErr: debug "Failed to drop the validatorIndexFromPubKey table" + # 2025-06: Empty name table that was accidentally added before Fulu (#6677) + if db.exec("DROP TABLE IF EXISTS ``;").isErr: + debug "Failed to drop the `` table" + var genesisDepositsSeq = DbSeq[DepositData].init(db, "genesis_deposits").expectDb() @@ -516,43 +527,33 @@ proc new*(T: type BeaconChainDB, # V1 - expected-to-be small rows get without rowid optimizations keyValues = kvStore db.openKvStore("key_values", true).expectDb() - blocks = if cfg.FULU_FORK_EPOCH != FAR_FUTURE_EPOCH: [ - kvStore db.openKvStore("blocks").expectDb(), - kvStore db.openKvStore("altair_blocks").expectDb(), - kvStore db.openKvStore("bellatrix_blocks").expectDb(), - kvStore db.openKvStore("capella_blocks").expectDb(), - kvStore db.openKvStore("deneb_blocks").expectDb(), - kvStore db.openKvStore("electra_blocks").expectDb(), - kvStore db.openKvStore("fulu_blocks").expectDb()] - - else: [ + blocks = [ kvStore db.openKvStore("blocks").expectDb(), kvStore db.openKvStore("altair_blocks").expectDb(), kvStore db.openKvStore("bellatrix_blocks").expectDb(), kvStore db.openKvStore("capella_blocks").expectDb(), kvStore db.openKvStore("deneb_blocks").expectDb(), kvStore db.openKvStore("electra_blocks").expectDb(), - kvStore db.openKvStore("").expectDb()] + if cfg.FULU_FORK_EPOCH != FAR_FUTURE_EPOCH: + kvStore db.openKvStore("fulu_blocks").expectDb() + else: + nil + ] stateRoots = kvStore db.openKvStore("state_roots", true).expectDb() - statesNoVal = if cfg.FULU_FORK_EPOCH != FAR_FUTURE_EPOCH: [ - kvStore db.openKvStore("state_no_validators").expectDb(), - kvStore db.openKvStore("altair_state_no_validators").expectDb(), - kvStore db.openKvStore("bellatrix_state_no_validators").expectDb(), - kvStore db.openKvStore("capella_state_no_validator_pubkeys").expectDb(), - kvStore db.openKvStore("deneb_state_no_validator_pubkeys").expectDb(), - kvStore db.openKvStore("electra_state_no_validator_pubkeys").expectDb(), - kvStore db.openKvStore("fulu_state_no_validator_pubkeys").expectDb()] - - else: [ - kvStore db.openKvStore("state_no_validators").expectDb(), - kvStore db.openKvStore("altair_state_no_validators").expectDb(), - kvStore db.openKvStore("bellatrix_state_no_validators").expectDb(), - kvStore db.openKvStore("capella_state_no_validator_pubkeys").expectDb(), - kvStore db.openKvStore("deneb_state_no_validator_pubkeys").expectDb(), - kvStore db.openKvStore("electra_state_no_validator_pubkeys").expectDb(), - kvStore db.openKvStore("").expectDb()] + statesNoVal = [ + kvStore db.openKvStore("state_no_validators").expectDb(), + kvStore db.openKvStore("altair_state_no_validators").expectDb(), + kvStore db.openKvStore("bellatrix_state_no_validators").expectDb(), + kvStore db.openKvStore("capella_state_no_validator_pubkeys").expectDb(), + kvStore db.openKvStore("deneb_state_no_validator_pubkeys").expectDb(), + kvStore db.openKvStore("electra_state_no_validator_pubkeys").expectDb(), + if cfg.FULU_FORK_EPOCH != FAR_FUTURE_EPOCH: + kvStore db.openKvStore("fulu_state_no_validator_pubkeys").expectDb() + else: + nil 
+ ] stateDiffs = kvStore db.openKvStore("state_diffs").expectDb() summaries = kvStore db.openKvStore("beacon_block_summaries", true).expectDb() @@ -593,6 +594,8 @@ proc new*(T: type BeaconChainDB, if cfg.FULU_FORK_EPOCH != FAR_FUTURE_EPOCH: columns = kvStore db.openKvStore("fulu_columns").expectDb() + let quarantine = db.initQuarantineDB().expectDb() + # Versions prior to 1.4.0 (altair) stored validators in `immutable_validators` # which stores validator keys in compressed format - this is # slow to load and has been superceded by `immutable_validators2` which uses @@ -634,6 +637,7 @@ proc new*(T: type BeaconChainDB, stateDiffs: stateDiffs, summaries: summaries, finalizedBlocks: finalizedBlocks, + quarantine: quarantine, lcData: lcData ) @@ -657,6 +661,9 @@ proc new*(T: type BeaconChainDB, dir, "nbc", readOnly = readOnly, manualCheckpoint = true).expectDb() BeaconChainDB.new(db, cfg) +template getQuarantineDB*(db: BeaconChainDB): QuarantineDB = + db.quarantine + template getLightClientDataDB*(db: BeaconChainDB): LightClientDataDB = db.lcData @@ -683,18 +690,6 @@ proc decodeSnappySSZ[T](data: openArray[byte], output: var T): bool = err = e.msg, typ = name(T), dataLen = data.len false -proc decodeSZSSZ[T](data: openArray[byte], output: var T): bool = - try: - let decompressed = decodeFramed(data, checkIntegrity = false) - readSszBytes(decompressed, output, updateRoot = false) - true - except CatchableError as e: - # If the data can't be deserialized, it could be because it's from a - # version of the software that uses a different SSZ encoding - warn "Unable to deserialize data, old database?", - err = e.msg, typ = name(T), dataLen = data.len - false - func encodeSSZ*(v: auto): seq[byte] = try: SSZ.encode(v) @@ -708,14 +703,6 @@ func encodeSnappySSZ(v: auto): seq[byte] = # In-memory encode shouldn't fail! raiseAssert err.msg -func encodeSZSSZ(v: auto): seq[byte] = - # https://github.com/google/snappy/blob/main/framing_format.txt - try: - encodeFramed(SSZ.encode(v)) - except CatchableError as err: - # In-memory encode shouldn't fail! 
- raiseAssert err.msg - proc getRaw(db: KvStoreRef, key: openArray[byte], T: type Eth2Digest): Opt[T] = var res: Opt[T] proc decode(data: openArray[byte]) = @@ -796,6 +783,7 @@ proc close*(db: BeaconChainDB) = if db.db == nil: return # Close things roughly in reverse order + db.quarantine.close() if not isNil(db.columns): discard db.columns.close() if not isNil(db.blobs): @@ -805,10 +793,12 @@ proc close*(db: BeaconChainDB) = discard db.summaries.close() discard db.stateDiffs.close() for kv in db.statesNoVal: - discard kv.close() + if kv != nil: + discard kv.close() discard db.stateRoots.close() for kv in db.blocks: - discard kv.close() + if kv != nil: + discard kv.close() discard db.keyValues.close() db.immutableValidatorsDb.close() @@ -829,20 +819,14 @@ proc putBeaconBlockSummary*( # Summaries are too simple / small to compress, store them as plain SSZ db.summaries.putSSZ(root.data, value) -proc putBlock*( - db: BeaconChainDB, - value: phase0.TrustedSignedBeaconBlock | altair.TrustedSignedBeaconBlock) = - db.withManyWrites: - db.blocks[type(value).kind].putSnappySSZ(value.root.data, value) - db.putBeaconBlockSummary(value.root, value.message.toBeaconBlockSummary()) - -proc putBlock*( - db: BeaconChainDB, - value: bellatrix.TrustedSignedBeaconBlock | - capella.TrustedSignedBeaconBlock | deneb.TrustedSignedBeaconBlock | - electra.TrustedSignedBeaconBlock | fulu.TrustedSignedBeaconBlock) = +proc putBlock*(db: BeaconChainDB, value: ForkyTrustedSignedBeaconBlock) = + const consensusFork = typeof(value).kind + doAssert db.blocks[consensusFork] != nil db.withManyWrites: - db.blocks[type(value).kind].putSZSSZ(value.root.data, value) + when consensusFork >= ConsensusFork.Bellatrix: + db.blocks[consensusFork].putSZSSZ(value.root.data, value) + else: + db.blocks[consensusFork].putSnappySSZ(value.root.data, value) db.putBeaconBlockSummary(value.root, value.message.toBeaconBlockSummary()) proc putBlobSidecar*( @@ -881,48 +865,37 @@ proc updateImmutableValidators*( withdrawal_credentials: immutableValidator.withdrawal_credentials) db.immutableValidators.add immutableValidator -template toBeaconStateNoImmutableValidators(state: phase0.BeaconState): - Phase0BeaconStateNoImmutableValidators = - isomorphicCast[Phase0BeaconStateNoImmutableValidators](state) - -template toBeaconStateNoImmutableValidators(state: altair.BeaconState): - AltairBeaconStateNoImmutableValidators = - isomorphicCast[AltairBeaconStateNoImmutableValidators](state) - -template toBeaconStateNoImmutableValidators(state: bellatrix.BeaconState): - BellatrixBeaconStateNoImmutableValidators = - isomorphicCast[BellatrixBeaconStateNoImmutableValidators](state) - -template toBeaconStateNoImmutableValidators(state: capella.BeaconState): - CapellaBeaconStateNoImmutableValidators = - isomorphicCast[CapellaBeaconStateNoImmutableValidators](state) - -template toBeaconStateNoImmutableValidators(state: deneb.BeaconState): - DenebBeaconStateNoImmutableValidators = - isomorphicCast[DenebBeaconStateNoImmutableValidators](state) - -template toBeaconStateNoImmutableValidators(state: electra.BeaconState): - ElectraBeaconStateNoImmutableValidators = - isomorphicCast[ElectraBeaconStateNoImmutableValidators](state) +template BeaconStateNoImmutableValidators(kind: static ConsensusFork): auto = + when kind == ConsensusFork.Fulu: + typedesc[FuluBeaconStateNoImmutableValidators] + elif kind == ConsensusFork.Electra: + typedesc[ElectraBeaconStateNoImmutableValidators] + elif kind == ConsensusFork.Deneb: + typedesc[DenebBeaconStateNoImmutableValidators] + elif kind 
== ConsensusFork.Capella: + typedesc[CapellaBeaconStateNoImmutableValidators] + elif kind == ConsensusFork.Bellatrix: + typedesc[BellatrixBeaconStateNoImmutableValidators] + elif kind == ConsensusFork.Altair: + typedesc[AltairBeaconStateNoImmutableValidators] + elif kind == ConsensusFork.Phase0: + typedesc[Phase0BeaconStateNoImmutableValidators] + else: + {.error: "BeaconStateNoImmutableValidators does not support " & $kind.} -template toBeaconStateNoImmutableValidators(state: fulu.BeaconState): - FuluBeaconStateNoImmutableValidators = - isomorphicCast[FuluBeaconStateNoImmutableValidators](state) +template toBeaconStateNoImmutableValidators(state: ForkyBeaconState): auto = + isomorphicCast[typeof(state).kind.BeaconStateNoImmutableValidators](state) -proc putState*( - db: BeaconChainDB, key: Eth2Digest, - value: phase0.BeaconState | altair.BeaconState) = +proc putState*(db: BeaconChainDB, key: Eth2Digest, value: ForkyBeaconState) = + const consensusFork = typeof(value).kind + doAssert db.statesNoVal[consensusFork] != nil db.updateImmutableValidators(value.validators.asSeq()) - db.statesNoVal[type(value).kind].putSnappySSZ( - key.data, toBeaconStateNoImmutableValidators(value)) - -proc putState*( - db: BeaconChainDB, key: Eth2Digest, - value: bellatrix.BeaconState | capella.BeaconState | deneb.BeaconState | - electra.BeaconState | fulu.BeaconState) = - db.updateImmutableValidators(value.validators.asSeq()) - db.statesNoVal[type(value).kind].putSZSSZ( - key.data, toBeaconStateNoImmutableValidators(value)) + when consensusFork >= ConsensusFork.Bellatrix: + db.statesNoVal[consensusFork].putSZSSZ( + key.data, toBeaconStateNoImmutableValidators(value)) + else: + db.statesNoVal[consensusFork].putSnappySSZ( + key.data, toBeaconStateNoImmutableValidators(value)) proc putState*(db: BeaconChainDB, state: ForkyHashedBeaconState) = db.withManyWrites: @@ -932,6 +905,7 @@ proc putState*(db: BeaconChainDB, state: ForkyHashedBeaconState) = # For testing rollback proc putCorruptState*( db: BeaconChainDB, fork: static ConsensusFork, key: Eth2Digest) = + doAssert db.statesNoVal[fork] != nil db.statesNoVal[fork].putSnappySSZ(key.data, Validator()) func stateRootKey(root: Eth2Digest, slot: Slot): array[40, byte] = @@ -950,6 +924,7 @@ proc putStateDiff*(db: BeaconChainDB, root: Eth2Digest, value: BeaconStateDiff) db.stateDiffs.putSnappySSZ(root.data, value) proc delBlock*(db: BeaconChainDB, fork: ConsensusFork, key: Eth2Digest): bool = + doAssert db.blocks[fork] != nil var deleted = false db.withManyWrites: discard db.summaries.del(key.data).expectDb() @@ -957,12 +932,15 @@ proc delBlock*(db: BeaconChainDB, fork: ConsensusFork, key: Eth2Digest): bool = deleted proc delState*(db: BeaconChainDB, fork: ConsensusFork, key: Eth2Digest) = + doAssert db.statesNoVal[fork] != nil discard db.statesNoVal[fork].del(key.data).expectDb() proc clearBlocks*(db: BeaconChainDB, fork: ConsensusFork): bool = + doAssert db.blocks[fork] != nil db.blocks[fork].clear().expectDb() proc clearStates*(db: BeaconChainDB, fork: ConsensusFork): bool = + doAssert db.statesNoVal[fork] != nil db.statesNoVal[fork].clear().expectDb() proc delStateRoot*(db: BeaconChainDB, root: Eth2Digest, slot: Slot) = @@ -991,42 +969,26 @@ proc getPhase0Block( # set root after deserializing (so it doesn't get zeroed) result.get().root = key -proc getBlock*( - db: BeaconChainDB, key: Eth2Digest, - T: type phase0.TrustedSignedBeaconBlock): Opt[T] = +proc getBlock*[X: ForkyTrustedSignedBeaconBlock]( + db: BeaconChainDB, key: Eth2Digest, T: typedesc[X]): Opt[T] = # We 
only store blocks that we trust in the database - result.ok(default(T)) - if db.blocks[T.kind].getSnappySSZ(key.data, result.get) != GetResult.found: - # During the initial releases phase0, we stored blocks in a different table - result = db.v0.getPhase0Block(key) - else: - # set root after deserializing (so it doesn't get zeroed) - result.get().root = key - -proc getBlock*( - db: BeaconChainDB, key: Eth2Digest, - T: type altair.TrustedSignedBeaconBlock): Opt[T] = - # We only store blocks that we trust in the database - result.ok(default(T)) - if db.blocks[T.kind].getSnappySSZ(key.data, result.get) == GetResult.found: - # set root after deserializing (so it doesn't get zeroed) - result.get().root = key - else: - result.err() - -proc getBlock*[ - X: bellatrix.TrustedSignedBeaconBlock | capella.TrustedSignedBeaconBlock | - deneb.TrustedSignedBeaconBlock | electra.TrustedSignedBeaconBlock | - fulu.TrustedSignedBeaconBlock]( - db: BeaconChainDB, key: Eth2Digest, - T: type X): Opt[T] = - # We only store blocks that we trust in the database - result.ok(default(T)) - if db.blocks[T.kind].getSZSSZ(key.data, result.get) == GetResult.found: - # set root after deserializing (so it doesn't get zeroed) - result.get().root = key - else: - result.err() + const consensusFork = T.kind + if db.blocks[consensusFork] != nil: + result.ok(default(T)) + let getResult = + when consensusFork >= ConsensusFork.Bellatrix: + db.blocks[consensusFork].getSZSSZ(key.data, result.unsafeGet) + else: + db.blocks[consensusFork].getSnappySSZ(key.data, result.unsafeGet) + if getResult != GetResult.found: + when consensusFork < ConsensusFork.Altair: + # During initial releases phase0, we stored blocks in a different table + result = db.v0.getPhase0Block(key) + else: + result.err() + else: + # set root after deserializing (so it doesn't get zeroed) + result.unsafeGet.root = key proc getPhase0BlockSSZ( db: BeaconChainDBV0, key: Eth2Digest, data: var seq[byte]): bool = @@ -1048,39 +1010,26 @@ proc getPhase0BlockSZ( db.backend.get(subkey(phase0.SignedBeaconBlock, key), decode).expectDb() and success -# SSZ implementations are separate so as to avoid unnecessary data copies -proc getBlockSSZ*( - db: BeaconChainDB, key: Eth2Digest, data: var seq[byte], - T: type phase0.TrustedSignedBeaconBlock): bool = - let dataPtr = addr data # Short-lived - var success = true - func decode(data: openArray[byte]) = - dataPtr[] = snappy.decode(data) - success = dataPtr[].len > 0 - db.blocks[ConsensusFork.Phase0].get(key.data, decode).expectDb() and success or - db.v0.getPhase0BlockSSZ(key, data) - -proc getBlockSSZ*( - db: BeaconChainDB, key: Eth2Digest, data: var seq[byte], - T: type altair.TrustedSignedBeaconBlock): bool = - let dataPtr = addr data # Short-lived - var success = true - func decode(data: openArray[byte]) = - dataPtr[] = snappy.decode(data) - success = dataPtr[].len > 0 - db.blocks[T.kind].get(key.data, decode).expectDb() and success - -proc getBlockSSZ*[ - X: bellatrix.TrustedSignedBeaconBlock | capella.TrustedSignedBeaconBlock | - deneb.TrustedSignedBeaconBlock | electra.TrustedSignedBeaconBlock | - fulu.TrustedSignedBeaconBlock]( - db: BeaconChainDB, key: Eth2Digest, data: var seq[byte], T: type X): bool = +proc getBlockSSZ*[X: ForkyTrustedSignedBeaconBlock]( + db: BeaconChainDB, key: Eth2Digest, + data: var seq[byte], T: typedesc[X]): bool = + const consensusFork = T.kind + if db.blocks[consensusFork] == nil: + return false let dataPtr = addr data # Short-lived var success = true func decode(data: openArray[byte]) = - dataPtr[] = 
decodeFramed(data, checkIntegrity = false) + when consensusFork >= ConsensusFork.Bellatrix: + dataPtr[] = decodeFramed(data, checkIntegrity = false) + else: + dataPtr[] = snappy.decode(data) success = dataPtr[].len > 0 - db.blocks[T.kind].get(key.data, decode).expectDb() and success + var res = + db.blocks[consensusFork].get(key.data, decode).expectDb() and success + when consensusFork < ConsensusFork.Altair: + # During initial releases phase0, we stored blocks in a different table + res = res or db.v0.getPhase0BlockSSZ(key, data) + res proc getBlockSSZ*( db: BeaconChainDB, key: Eth2Digest, data: var seq[byte], @@ -1108,38 +1057,30 @@ proc getDataColumnSidecarSZ*(db: BeaconChainDB, root: Eth2Digest, proc getDataColumnSidecar*(db: BeaconChainDB, root: Eth2Digest, index: ColumnIndex, value: var DataColumnSidecar): bool = + if db.columns == nil: # Fulu has not been scheduled; DB table does not exist + return false db.columns.getSZSSZ(columnkey(root, index), value) == GetResult.found -proc getBlockSZ*( - db: BeaconChainDB, key: Eth2Digest, data: var seq[byte], - T: type phase0.TrustedSignedBeaconBlock): bool = - let dataPtr = addr data # Short-lived - var success = true - func decode(data: openArray[byte]) = - dataPtr[] = snappy.encodeFramed(snappy.decode(data)) - success = dataPtr[].len > 0 - db.blocks[ConsensusFork.Phase0].get(key.data, decode).expectDb() and success or - db.v0.getPhase0BlockSZ(key, data) - -proc getBlockSZ*( - db: BeaconChainDB, key: Eth2Digest, data: var seq[byte], - T: type altair.TrustedSignedBeaconBlock): bool = +proc getBlockSZ*[X: ForkyTrustedSignedBeaconBlock]( + db: BeaconChainDB, key: Eth2Digest, + data: var seq[byte], T: typedesc[X]): bool = + const consensusFork = T.kind + if db.blocks[consensusFork] == nil: + return false let dataPtr = addr data # Short-lived var success = true func decode(data: openArray[byte]) = - dataPtr[] = snappy.encodeFramed(snappy.decode(data)) + when consensusFork >= ConsensusFork.Bellatrix: + assign(dataPtr[], data) + else: + dataPtr[] = snappy.encodeFramed(snappy.decode(data)) success = dataPtr[].len > 0 - db.blocks[T.kind].get(key.data, decode).expectDb() and success - -proc getBlockSZ*[ - X: bellatrix.TrustedSignedBeaconBlock | capella.TrustedSignedBeaconBlock | - deneb.TrustedSignedBeaconBlock | electra.TrustedSignedBeaconBlock | - fulu.TrustedSignedBeaconBlock]( - db: BeaconChainDB, key: Eth2Digest, data: var seq[byte], T: type X): bool = - let dataPtr = addr data # Short-lived - func decode(data: openArray[byte]) = - assign(dataPtr[], data) - db.blocks[T.kind].get(key.data, decode).expectDb() + var res = + db.blocks[consensusFork].get(key.data, decode).expectDb() and success + when consensusFork < ConsensusFork.Altair: + # During initial releases phase0, we stored blocks in a different table + res = res or db.v0.getPhase0BlockSZ(key, data) + res proc getBlockSZ*( db: BeaconChainDB, key: Eth2Digest, data: var seq[byte], @@ -1150,90 +1091,7 @@ proc getBlockSZ*( proc getStateOnlyMutableValidators( immutableValidators: openArray[ImmutableValidatorData2], store: KvStoreRef, key: openArray[byte], - output: var (phase0.BeaconState | altair.BeaconState), - rollback: RollbackProc): bool = - ## Load state into `output` - BeaconState is large so we want to avoid - ## re-allocating it if possible - ## Return `true` iff the entry was found in the database and `output` was - ## overwritten. 
- ## Rollback will be called only if output was partially written - if it was - ## not found at all, rollback will not be called - # TODO rollback is needed to deal with bug - use `noRollback` to ignore: - # https://github.com/nim-lang/Nim/issues/14126 - - let prevNumValidators = output.validators.len - - case store.getSnappySSZ(key, toBeaconStateNoImmutableValidators(output)) - of GetResult.found: - let numValidators = output.validators.len - doAssert immutableValidators.len >= numValidators - - for i in prevNumValidators ..< numValidators: - let - # Bypass hash cache invalidation - dstValidator = addr output.validators.data[i] - - assign( - dstValidator.pubkeyData, - HashedValidatorPubKey.init( - immutableValidators[i].pubkey.toPubKey())) - assign( - dstValidator.withdrawal_credentials, - immutableValidators[i].withdrawal_credentials) - output.validators.clearCaches(i) - - true - of GetResult.notFound: - false - of GetResult.corrupted: - rollback() - false - -proc getStateOnlyMutableValidators( - immutableValidators: openArray[ImmutableValidatorData2], - store: KvStoreRef, key: openArray[byte], - output: var bellatrix.BeaconState, rollback: RollbackProc): bool = - ## Load state into `output` - BeaconState is large so we want to avoid - ## re-allocating it if possible - ## Return `true` iff the entry was found in the database and `output` was - ## overwritten. - ## Rollback will be called only if output was partially written - if it was - ## not found at all, rollback will not be called - # TODO rollback is needed to deal with bug - use `noRollback` to ignore: - # https://github.com/nim-lang/Nim/issues/14126 - - let prevNumValidators = output.validators.len - - case store.getSZSSZ(key, toBeaconStateNoImmutableValidators(output)) - of GetResult.found: - let numValidators = output.validators.len - doAssert immutableValidators.len >= numValidators - - for i in prevNumValidators ..< numValidators: - # Bypass hash cache invalidation - let dstValidator = addr output.validators.data[i] - - assign( - dstValidator.pubkeyData, - HashedValidatorPubKey.init( - immutableValidators[i].pubkey.toPubKey())) - assign( - dstValidator.withdrawal_credentials, - immutableValidators[i].withdrawal_credentials) - output.validators.clearCaches(i) - - true - of GetResult.notFound: - false - of GetResult.corrupted: - rollback() - false - -proc getStateOnlyMutableValidators( - immutableValidators: openArray[ImmutableValidatorData2], - store: KvStoreRef, key: openArray[byte], - output: var (capella.BeaconState | deneb.BeaconState | electra.BeaconState | - fulu.BeaconState), + output: var ForkyBeaconState, rollback: RollbackProc): bool = ## Load state into `output` - BeaconState is large so we want to avoid ## re-allocating it if possible @@ -1243,10 +1101,16 @@ proc getStateOnlyMutableValidators( ## not found at all, rollback will not be called # TODO rollback is needed to deal with bug - use `noRollback` to ignore: # https://github.com/nim-lang/Nim/issues/14126 + const consensusFork = typeof(output).kind + let + prevNumValidators = output.validators.len + getResult = + when consensusFork >= ConsensusFork.Bellatrix: + store.getSZSSZ(key, toBeaconStateNoImmutableValidators(output)) + else: + store.getSnappySSZ(key, toBeaconStateNoImmutableValidators(output)) - let prevNumValidators = output.validators.len - - case store.getSZSSZ(key, toBeaconStateNoImmutableValidators(output)) + case getResult of GetResult.found: let numValidators = output.validators.len doAssert immutableValidators.len >= numValidators @@ -1254,10 
+1118,11 @@ proc getStateOnlyMutableValidators( for i in prevNumValidators ..< numValidators: # Bypass hash cache invalidation let dstValidator = addr output.validators.data[i] - assign( - dstValidator.pubkeyData, - HashedValidatorPubKey.init( - immutableValidators[i].pubkey.toPubKey())) + dstValidator.pubkeyData.assign(HashedValidatorPubKey.init( + immutableValidators[i].pubkey.toPubKey())) + when consensusFork < ConsensusFork.Capella: + dstValidator.withdrawal_credentials.assign( + immutableValidators[i].withdrawal_credentials) output.validators.clearCaches(i) true @@ -1296,31 +1161,9 @@ proc getState( rollback() false -proc getState*( - db: BeaconChainDB, key: Eth2Digest, output: var phase0.BeaconState, - rollback: RollbackProc): bool = - ## Load state into `output` - BeaconState is large so we want to avoid - ## re-allocating it if possible - ## Return `true` iff the entry was found in the database and `output` was - ## overwritten. - ## Rollback will be called only if output was partially written - if it was - ## not found at all, rollback will not be called - # TODO rollback is needed to deal with bug - use `noRollback` to ignore: - # https://github.com/nim-lang/Nim/issues/14126 - type T = type(output) - - if not getStateOnlyMutableValidators( - db.immutableValidators, db.statesNoVal[T.kind], key.data, output, rollback): - db.v0.getState(db.immutableValidators, key, output, rollback) - else: - true - proc getState*( db: BeaconChainDB, key: Eth2Digest, - output: var (altair.BeaconState | bellatrix.BeaconState | - capella.BeaconState | deneb.BeaconState | electra.BeaconState | - fulu.BeaconState), - rollback: RollbackProc): bool = + output: var ForkyBeaconState, rollback: RollbackProc): bool = ## Load state into `output` - BeaconState is large so we want to avoid ## re-allocating it if possible ## Return `true` iff the entry was found in the database and `output` was @@ -1329,10 +1172,15 @@ proc getState*( ## not found at all, rollback will not be called # TODO rollback is needed to deal with bug - use `noRollback` to ignore: # https://github.com/nim-lang/Nim/issues/14126 - type T = type(output) - getStateOnlyMutableValidators( - db.immutableValidators, db.statesNoVal[T.kind], key.data, output, - rollback) + const consensusFork = typeof(output).kind + var res = + db.statesNoVal[consensusFork] != nil and + db.immutableValidators.getStateOnlyMutableValidators( + db.statesNoVal[consensusFork], key.data, output, rollback) + when consensusFork < ConsensusFork.Altair: + # During initial releases phase0, we stored states in a different table + res = res or db.v0.getState(db.immutableValidators, key, output, rollback) + res proc getState*( db: BeaconChainDB, fork: ConsensusFork, state_root: Eth2Digest, @@ -1390,23 +1238,24 @@ proc getGenesisBlock*(db: BeaconChainDB): Opt[Eth2Digest] = proc containsBlock*(db: BeaconChainDBV0, key: Eth2Digest): bool = db.backend.contains(subkey(phase0.SignedBeaconBlock, key)).expectDb() -proc containsBlock*( - db: BeaconChainDB, key: Eth2Digest, - T: type phase0.TrustedSignedBeaconBlock): bool = - db.blocks[T.kind].contains(key.data).expectDb() or - db.v0.containsBlock(key) - -proc containsBlock*[ - X: altair.TrustedSignedBeaconBlock | bellatrix.TrustedSignedBeaconBlock | - capella.TrustedSignedBeaconBlock | deneb.TrustedSignedBeaconBlock | - electra.TrustedSignedBeaconBlock | fulu.TrustedSignedBeaconBlock]( - db: BeaconChainDB, key: Eth2Digest, T: type X): bool = - db.blocks[X.kind].contains(key.data).expectDb() +proc containsBlock*[X: 
ForkyTrustedSignedBeaconBlock]( + db: BeaconChainDB, key: Eth2Digest, T: typedesc[X]): bool = + const consensusFork = T.kind + var res = + db.blocks[consensusFork] != nil and + db.blocks[consensusFork].contains(key.data).expectDb() + when consensusFork < ConsensusFork.Altair: + # During initial releases phase0, we stored states in a different table + res = res or db.v0.containsBlock(key) + res proc containsBlock*(db: BeaconChainDB, key: Eth2Digest, fork: ConsensusFork): bool = case fork - of ConsensusFork.Phase0: containsBlock(db, key, phase0.TrustedSignedBeaconBlock) - else: db.blocks[fork].contains(key.data).expectDb() + of ConsensusFork.Phase0: + containsBlock(db, key, phase0.TrustedSignedBeaconBlock) + else: + db.blocks[fork] != nil and + db.blocks[fork].contains(key.data).expectDb() proc containsBlock*(db: BeaconChainDB, key: Eth2Digest): bool = for fork in countdown(ConsensusFork.high, ConsensusFork.low): @@ -1422,13 +1271,15 @@ proc containsState*(db: BeaconChainDBV0, key: Eth2Digest): bool = proc containsState*(db: BeaconChainDB, fork: ConsensusFork, key: Eth2Digest, legacy: bool = true): bool = + if db.statesNoVal[fork] == nil: return false if db.statesNoVal[fork].contains(key.data).expectDb(): return true (legacy and fork == ConsensusFork.Phase0 and db.v0.containsState(key)) proc containsState*(db: BeaconChainDB, key: Eth2Digest, legacy: bool = true): bool = for fork in countdown(ConsensusFork.high, ConsensusFork.low): - if db.statesNoVal[fork].contains(key.data).expectDb(): return true + if db.statesNoVal[fork] != nil and + db.statesNoVal[fork].contains(key.data).expectDb(): return true (legacy and db.v0.containsState(key)) @@ -1540,26 +1391,17 @@ iterator getAncestorSummaries*(db: BeaconChainDB, root: Eth2Digest): # Backwards compat for reading old databases, or those that for whatever # reason lost a summary along the way.. 
- static: doAssert ConsensusFork.high == ConsensusFork.Fulu while true: - if db.v0.backend.getSnappySSZ( - subkey(BeaconBlockSummary, res.root), res.summary) == GetResult.found: - discard # Just yield below - elif (let blck = db.getBlock(res.root, phase0.TrustedSignedBeaconBlock); blck.isSome()): - res.summary = blck.get().message.toBeaconBlockSummary() - elif (let blck = db.getBlock(res.root, altair.TrustedSignedBeaconBlock); blck.isSome()): - res.summary = blck.get().message.toBeaconBlockSummary() - elif (let blck = db.getBlock(res.root, bellatrix.TrustedSignedBeaconBlock); blck.isSome()): - res.summary = blck.get().message.toBeaconBlockSummary() - elif (let blck = db.getBlock(res.root, capella.TrustedSignedBeaconBlock); blck.isSome()): - res.summary = blck.get().message.toBeaconBlockSummary() - elif (let blck = db.getBlock(res.root, deneb.TrustedSignedBeaconBlock); blck.isSome()): - res.summary = blck.get().message.toBeaconBlockSummary() - elif (let blck = db.getBlock(res.root, electra.TrustedSignedBeaconBlock); blck.isSome()): - res.summary = blck.get().message.toBeaconBlockSummary() - elif (let blck = db.getBlock(res.root, fulu.TrustedSignedBeaconBlock); blck.isSome()): - res.summary = blck.get().message.toBeaconBlockSummary() - else: + var found = false + withAll(ConsensusFork): + if not found: + let blck = db.getBlock(res.root, consensusFork.TrustedSignedBeaconBlock) + if blck.isSome: + res.summary = blck.unsafeGet.message.toBeaconBlockSummary() + found = true + found = found or db.v0.backend.getSnappySSZ( + subkey(BeaconBlockSummary, res.root), res.summary) == GetResult.found + if not found: break yield res diff --git a/beacon_chain/beacon_chain_db_immutable.nim b/beacon_chain/beacon_chain_db_immutable.nim index c95cd9629a..5debdbe87d 100644 --- a/beacon_chain/beacon_chain_db_immutable.nim +++ b/beacon_chain/beacon_chain_db_immutable.nim @@ -502,3 +502,7 @@ type pending_consolidations*: HashList[PendingConsolidation, Limit PENDING_CONSOLIDATIONS_LIMIT] ## [New in Electra:EIP7251] + + # [New in Fulu:EIP7917] + proposer_lookahead*: + HashArray[Limit ((MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH), uint64] diff --git a/beacon_chain/beacon_chain_db_light_client.nim b/beacon_chain/beacon_chain_db_light_client.nim index e14a7b353c..4d1f1f0386 100644 --- a/beacon_chain/beacon_chain_db_light_client.nim +++ b/beacon_chain/beacon_chain_db_light_client.nim @@ -15,7 +15,7 @@ import # Beacon chain internals spec/datatypes/altair, spec/[eth2_ssz_serialization, helpers], - ./db_limits + ./db_utils logScope: topics = "lcdata" @@ -172,11 +172,6 @@ type ## Tracks the finalized sync committee periods for which complete data ## has been imported (from `dag.tail.slot`). -template disposeSafe(s: untyped): untyped = - if distinctBase(s) != nil: - s.dispose() - s = typeof(s)(nil) - proc initHeadersStore( backend: SqStoreRef, name, typeName: string): KvResult[LightClientHeaderStore] = diff --git a/beacon_chain/beacon_chain_db_quarantine.nim b/beacon_chain/beacon_chain_db_quarantine.nim new file mode 100644 index 0000000000..070924a847 --- /dev/null +++ b/beacon_chain/beacon_chain_db_quarantine.nim @@ -0,0 +1,222 @@ +# beacon_chain +# Copyright (c) 2022-2025 Status Research & Development GmbH +# Licensed and distributed under either of +# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT). +# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0). +# at your option. 
This file may not be copied, modified, or distributed except according to those terms. + +{.push raises: [].} + +import + # Status libraries + chronicles, + eth/db/kvstore_sqlite3, + # Beacon chain internals + spec/helpers, + ./db_utils + +# Without this export compilation fails with error +# vendor\nim-chronicles\chronicles.nim(352, 21) Error: undeclared identifier: 'activeChroniclesStream' +# It actually is not needed, because chronicles is not used in this file, +# but because decodeSZSSZ() is generic and uses chronicles - generic expansion +# introduces this issue. +export chronicles + +logScope: topics = "qudata" + +type + ForkyDataSidecar* = deneb.BlobSidecar | fulu.DataColumnSidecar + + DataSidecarStore = object + getStmt: SqliteStmt[array[32, byte], seq[byte]] + putStmt: SqliteStmt[(array[32, byte], seq[byte]), void] + delStmt: SqliteStmt[array[32, byte], void] + countStmt: SqliteStmt[NoParams, int64] + + QuarantineDB* = ref object + backend: SqStoreRef + ## SQLite backend + + electraDataSidecar: DataSidecarStore + ## Proposer signature verified data blob sidecars. + fuluDataSidecar: DataSidecarStore + ## Proposer signature verified data column sidecars. + +template tableName(sidecar: typedesc[ForkyDataSidecar]): string = + when sidecar is deneb.BlobSidecar: + "electra_sidecars_quarantine" + elif sidecar is fulu.DataColumnSidecar: + "fulu_sidecars_quarantine" + else: + static: raiseAssert "Sidecar's fork is not supported" + +proc initDataSidecarStore( + backend: SqStoreRef, + name: string +): KvResult[DataSidecarStore] = + if not(backend.readOnly): + ? backend.exec("BEGIN TRANSACTION;") + ? backend.exec("DROP INDEX IF EXISTS `" & name & "_iblock_root`;") + ? backend.exec("DROP TABLE IF EXISTS `" & name & "`;") + ? backend.exec(""" + CREATE TABLE IF NOT EXISTS `""" & name & """` ( + `block_root` BLOB, -- `Eth2Digest` + `data_sidecar` BLOB -- `DataSidecar` (SZSSZ) + ); + """) + ? backend.exec(""" + CREATE INDEX IF NOT EXISTS `""" & name & """_iblock_root` + ON `""" & name & """`(block_root); + """) + ? backend.exec("COMMIT;") + + if not ? 
backend.hasTable(name): + return ok(DataSidecarStore()) + + let + getStmt = backend.prepareStmt(""" + SELECT `data_sidecar` FROM `""" & name & """` + WHERE `block_root` = ?; + """, array[32, byte], (seq[byte]), managed = false) + .expect("SQL query OK") + putStmt = backend.prepareStmt(""" + INSERT INTO `""" & name & """` ( + `block_root`, `data_sidecar` + ) VALUES (?, ?); + """, (array[32, byte], seq[byte]), void, managed = false).expect("SQL query OK") + delStmt = backend.prepareStmt(""" + DELETE FROM `""" & name & """` WHERE `block_root` == ?; + """, array[32, byte], void, managed = false).expect("SQL query OK") + countStmt = backend.prepareStmt(""" + SELECT COUNT(1) FROM `""" & name & """`; + """, NoParams, int64, managed = false).expect("SQL query OK") + + ok(DataSidecarStore( + getStmt: getStmt, + putStmt: putStmt, + delStmt: delStmt, + countStmt: countStmt + )) + +func close(store: var DataSidecarStore) = + if not(isNil(distinctBase(store.getStmt))): store.getStmt.disposeSafe() + if not(isNil(distinctBase(store.putStmt))): store.putStmt.disposeSafe() + if not(isNil(distinctBase(store.delStmt))): store.delStmt.disposeSafe() + if not(isNil(distinctBase(store.countStmt))): store.countStmt.disposeSafe() + +iterator sidecars*( + db: QuarantineDB, + T: typedesc[ForkyDataSidecar], + blockRoot: Eth2Digest +): T = + when T is deneb.BlobSidecar: + template statement: untyped = + db.electraDataSidecar.getStmt + template storeName: untyped = + "electraDataSidecar" + elif T is fulu.DataColumnSidecar: + template statement: untyped = + db.fuluDataSidecar.getStmt + template storeName: untyped = + "fuluDataSidecar" + else: + static: raiseAssert "Sidecar's fork is not supported" + + if not(isNil(distinctBase(statement))): + var row: statement.Result + for rowRes in statement.exec(blockRoot.data, row): + rowRes.expect("SQL query OK") + var res: T + if not(decodeSZSSZ(row, res)): + error "Quarantine store corrupted", store = storeName, + blockRoot + break + yield res + +proc putDataSidecars*[T: ForkyDataSidecar]( + db: QuarantineDB, + blockRoot: Eth2Digest, + dataSidecars: openArray[ref T] +) = + doAssert(not(db.backend.readOnly)) + + when T is deneb.BlobSidecar: + template statement: untyped = + db.electraDataSidecar.putStmt + elif T is fulu.DataColumnSidecar: + template statement: untyped = + db.fuluDataSidecar.putStmt + else: + static: raiseAssert "Sidecar's fork is not supported" + + if not(isNil(distinctBase(statement))): + db.backend.exec("BEGIN TRANSACTION;").expect("SQL query OK") + for sidecar in dataSidecars: + let blob = encodeSZSSZ(sidecar[]) + statement.exec((blockRoot.data, blob)). 
+ expect("SQL query OK") + db.backend.exec("COMMIT;").expect("SQL query OK") + +proc removeDataSidecars*( + db: QuarantineDB, + T: typedesc[ForkyDataSidecar], + blockRoot: Eth2Digest +) = + doAssert not(db.backend.readOnly) + + when T is deneb.BlobSidecar: + template statement: untyped = + db.electraDataSidecar.delStmt + elif T is fulu.DataColumnSidecar: + template statement: untyped = + db.fuluDataSidecar.delStmt + else: + static: raiseAssert "Sidecar's fork is not supported" + + if not(isNil(distinctBase(statement))): + statement.exec(blockRoot.data).expect("SQL query OK") + +proc sidecarsCount*( + db: QuarantineDB, + T: typedesc[ForkyDataSidecar], +): int64 = + var recordCount = 0'i64 + + when T is deneb.BlobSidecar: + template statement: untyped = + db.electraDataSidecar.countStmt + elif T is fulu.DataColumnSidecar: + template statement: untyped = + db.fuluDataSidecar.countStmt + else: + static: raiseAssert "Sidecar's fork is not supported" + + if not(isNil(distinctBase(statement))): + discard statement.exec do (res: int64): + recordCount = res + recordCount + +proc initQuarantineDB*( + backend: SqStoreRef, +): KvResult[QuarantineDB] = + # Please note that all quarantine tables are temporary, each time the node is + # restarted these tables will be wiped out completely. + # Therefore there is no need to maintain forward or backward compatibility + # guarantees. + let + electraDataSidecar = + ? backend.initDataSidecarStore(tableName(deneb.BlobSidecar)) + fuluDataSidecar = + ? backend.initDataSidecarStore(tableName(fulu.DataColumnSidecar)) + + ok QuarantineDB( + backend: backend, + electraDataSidecar: electraDataSidecar, + fuluDataSidecar: fuluDataSidecar + ) + +proc close*(db: QuarantineDB) = + if not(isNil(db.backend)): + db.electraDataSidecar.close() + db.fuluDataSidecar.close() + db[].reset() diff --git a/beacon_chain/beacon_chain_file.nim b/beacon_chain/beacon_chain_file.nim index 46b7a4bc31..9124b67c15 100644 --- a/beacon_chain/beacon_chain_file.nim +++ b/beacon_chain/beacon_chain_file.nim @@ -84,14 +84,9 @@ func getBlockForkCode(fork: ConsensusFork): uint64 = uint64(fork) func getBlobForkCode(fork: ConsensusFork): uint64 = - case fork - of ConsensusFork.Deneb: - uint64(MaxForksCount) - of ConsensusFork.Electra: + if fork >= ConsensusFork.Deneb: uint64(MaxForksCount) + uint64(fork) - uint64(ConsensusFork.Deneb) - of ConsensusFork.Fulu: - uint64(MaxForksCount) + uint64(fork) - uint64(ConsensusFork.Electra) - of ConsensusFork.Phase0 .. 
ConsensusFork.Capella: + else: raiseAssert "Blobs are not supported for the fork" proc init(t: typedesc[ChainFileError], k: ChainFileErrorType, diff --git a/beacon_chain/beacon_node.nim b/beacon_chain/beacon_node.nim index 04d67e1505..f7b6e4b8f9 100644 --- a/beacon_chain/beacon_node.nim +++ b/beacon_chain/beacon_node.nim @@ -17,7 +17,7 @@ import metrics, metrics/chronos_httpserver, # Local modules - "."/[beacon_clock, beacon_chain_db, conf, light_client], + "."/[beacon_clock, beacon_chain_db, conf, light_client, version], ./gossip_processing/[eth2_processor, block_processor, optimistic_processor], ./networking/eth2_network, ./el/el_manager, @@ -171,4 +171,5 @@ proc getPayloadBuilderClient*( socketFlags = {SocketFlags.TcpNoDelay} RestClientRef.new(payloadBuilderAddress.get, flags = flags, - socketFlags = socketFlags) + socketFlags = socketFlags, + userAgent = nimbusAgentStr) diff --git a/beacon_chain/conf.nim b/beacon_chain/conf.nim index 5734424cf8..e6810357b5 100644 --- a/beacon_chain/conf.nim +++ b/beacon_chain/conf.nim @@ -41,7 +41,7 @@ export defs, parseCmdArg, completeCmdArg, network_metadata, el_conf, network, BlockHashOrNumber, confTomlDefs, confTomlNet, confTomlUri, - LightClientDataImportMode + LightClientDataImportMode, slashing_protection_common declareGauge network_name, "network name", ["name"] @@ -52,7 +52,7 @@ const defaultSigningNodeRequestTimeout* = 60 defaultBeaconNode* = "http://127.0.0.1:" & $defaultEth2RestPort defaultBeaconNodeUri* = parseUri(defaultBeaconNode) - defaultGasLimit* = 36_000_000 + defaultGasLimit* = 45_000_000 defaultAdminListenAddressDesc* = $defaultAdminListenAddress defaultBeaconNodeDesc = $defaultBeaconNode @@ -202,7 +202,6 @@ type web3ForcePolling* {. hidden - desc: "Force the use of polling when determining the head block of Eth1 (obsolete)" name: "web3-force-polling" .}: Option[bool] web3Urls* {. 
diff --git a/beacon_chain/consensus_object_pools/blob_quarantine.nim b/beacon_chain/consensus_object_pools/blob_quarantine.nim index 6b36d0741b..1f1d5b9854 100644 --- a/beacon_chain/consensus_object_pools/blob_quarantine.nim +++ b/beacon_chain/consensus_object_pools/blob_quarantine.nim @@ -10,15 +10,21 @@ import stew/bitops2, std/[sets, tables], - results, + results, metrics, ../spec/datatypes/[deneb, electra, fulu], - ../spec/[presets, helpers] + ../spec/[presets, helpers], + ../beacon_chain_db_quarantine from std/sequtils import mapIt, toSeq from std/strutils import join export results +declareGauge blob_quarantine_memory_slots_total, "Total count of available memory slots inside blob quarantine" +declareGauge blob_quarantine_memory_slots_occupied, "Number of occupied memory slots inside blob quarantine" +declareGauge blob_quarantine_database_slots_total, "Total count of availble database slots inside blob quarantine" +declareGauge blob_quarantine_database_slots_occupied, "Number of occupied database slots inside blob quarantine" + static: doAssert(NUMBER_OF_COLUMNS == 2 * 64, "ColumnMap should be updated") @@ -26,19 +32,41 @@ type ColumnMap* = object data: array[2, uint64] + SidecarHolderKind {.pure.} = enum + Empty, Loaded, Unloaded + + SidecarHolder[A] = object + index: uint64 + proposer_index: uint64 + slot: Slot + case kind: SidecarHolderKind + of SidecarHolderKind.Empty: + discard + of SidecarHolderKind.Unloaded: + discard + of SidecarHolderKind.Loaded: + data: ref A + RootTableRecord[A] = object - sidecars: seq[ref A] + sidecars: seq[SidecarHolder[A]] + slot: Slot + unloaded: int count: int SidecarQuarantine[A, B] = object - maxSidecarsCount: int + minEpochsForSidecarsRequests: uint64 + maxMemSidecarsCount: int + memSidecarsCount: int + maxDiskSidecarsCount: int + diskSidecarsCount: int maxSidecarsPerBlockCount: int - sidecarsCount: int custodyColumns: seq[ColumnIndex] custodyMap: ColumnMap roots: Table[Eth2Digest, RootTableRecord[A]] - usage: OrderedSet[Eth2Digest] + memUsage: OrderedSet[Eth2Digest] + diskUsage: OrderedSet[Eth2Digest] indexMap: seq[int] + db: QuarantineDB onSidecarCallback*: B OnBlobSidecarCallback* = proc( @@ -51,6 +79,15 @@ type ColumnQuarantine* = SidecarQuarantine[DataColumnSidecar, OnDataColumnSidecarCallback] +func isEmpty[A](holder: SidecarHolder[A]): bool = + holder.kind == SidecarHolderKind.Empty + +func isUnloaded[A](holder: SidecarHolder[A]): bool = + holder.kind == SidecarHolderKind.Unloaded + +func isLoaded[A](holder: SidecarHolder[A]): bool = + holder.kind == SidecarHolderKind.Loaded + func init*(t: typedesc[ColumnMap], columns: openArray[ColumnIndex]): ColumnMap = var res: ColumnMap for column in columns: @@ -71,7 +108,7 @@ iterator items*(a: ColumnMap): ColumnIndex = while data0 != 0'u64: let # t = data0 and -data0 - t = data0 and ((0xFFFF_FFFF_FFFF_FFFF'u64 - data0) + 1'u64) + t = data0 and (not(data0) + 1'u64) res = firstOne(data0) yield ColumnIndex(res - 1) data0 = data0 xor t @@ -79,7 +116,7 @@ iterator items*(a: ColumnMap): ColumnIndex = while data1 != 0'u64: let # t = data0 and -data0 - t = data1 and ((0xFFFF_FFFF_FFFF_FFFF'u64 - data1) + 1'u64) + t = data1 and (not(data1) + 1'u64) res = firstOne(data1) yield ColumnIndex(64 + res - 1) data1 = data1 xor t @@ -87,45 +124,61 @@ iterator items*(a: ColumnMap): ColumnIndex = func `$`*(a: ColumnMap): string = "[" & a.items().toSeq().mapIt($it).join(", ") & "]" -func maxSidecars(maxSidecarsPerBlock: uint64): int = +func maxSidecars*(maxSidecarsPerBlock: uint64): int = # Same limit as `MaxOrphans` in 
`block_quarantine`; # blobs may arrive before an orphan is tagged `blobless` 3 * int(SLOTS_PER_EPOCH) * int(maxSidecarsPerBlock) -func shortLog*(x: seq[BlobIndex]): string = - "<" & x.mapIt($it).join(", ") & ">" - func init[A, B]( t: typedesc[RootTableRecord], q: SidecarQuarantine[A, B] ): RootTableRecord[A] = RootTableRecord[A]( - sidecars: newSeq[ref A](q.maxSidecarsPerBlockCount), count: 0) + sidecars: newSeq[SidecarHolder[A]](q.maxSidecarsPerBlockCount), + count: 0, unloaded: 0, slot: FAR_FUTURE_SLOT) func len*[A, B](quarantine: SidecarQuarantine[A, B]): int = - quarantine.sidecarsCount + quarantine.memSidecarsCount + quarantine.diskSidecarsCount + +func lenMemory*[A, B](quarantine: SidecarQuarantine[A, B]): int = + quarantine.memSidecarsCount -func `$`*[A](r: RootTableRecord[A]): string = - if len(r.sidecars) == 0: - return "" - r.sidecars.mapIt(if isNil(it): "." else: "x").join("") +func lenDisk*[A, B](quarantine: SidecarQuarantine[A, B]): int = + quarantine.diskSidecarsCount -func removeRoot[A, B]( +proc removeRoot[A, B]( quarantine: var SidecarQuarantine[A, B], blockRoot: Eth2Digest ) = + # This procedore removes all the sidecars associated with `blockRoot` from + # memory and from disk. var rootRecord: RootTableRecord[A] + sidecarsOnDisk = 0 if quarantine.roots.pop(blockRoot, rootRecord): for index in 0 ..< len(rootRecord.sidecars): - if not(rootRecord.sidecars[index].isNil()): - rootRecord.sidecars[index] = nil - dec(quarantine.sidecarsCount) - - quarantine.usage.excl(blockRoot) - -func remove*[A, B]( + case rootRecord.sidecars[index].kind + of SidecarHolderKind.Empty: + discard + of SidecarHolderKind.Loaded: + rootRecord.sidecars[index].data = nil + dec(quarantine.memSidecarsCount) + blob_quarantine_memory_slots_occupied.set( + int64(quarantine.memSidecarsCount)) + of SidecarHolderKind.Unloaded: + dec(quarantine.diskSidecarsCount) + blob_quarantine_database_slots_occupied.set( + int64(quarantine.diskSidecarsCount)) + inc(sidecarsOnDisk) + + if sidecarsOnDisk > 0 and quarantine.maxMemSidecarsCount > 0: + quarantine.db.removeDataSidecars(A, blockRoot) + quarantine.diskUsage.excl(blockRoot) + + quarantine.memUsage.excl(blockRoot) + +proc remove*[A, B]( quarantine: var SidecarQuarantine[A, B], blockRoot: Eth2Digest ) = @@ -135,16 +188,44 @@ func remove*[A, B]( ## Function do nothing, if ``blockRoot` is not part of the quarantine. quarantine.removeRoot(blockRoot) -func pruneRoot[A, B](quarantine: var SidecarQuarantine[A, B]) = - # Remove the all the blobs related to the oldest block root from the - # quarantine ``quarantine``. 
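Editor's note: the rewritten `ColumnMap` iterator earlier in this hunk isolates the lowest set bit with the two's-complement identity `x and (not(x) + 1)` instead of the previous constant-based subtraction. A minimal, self-contained sketch of the same trick using only the standard library (the `setBits` iterator and the sample bit pattern are illustrative, not nimbus code):

```nim
import std/bitops

iterator setBits(x: uint64): int =
  ## Yields zero-based positions of set bits, lowest first, using the same
  ## trick as the ColumnMap iterator: `x and (not(x) + 1)` isolates the
  ## lowest set bit, which is then cleared with `xor`.
  var data = x
  while data != 0'u64:
    let lowest = data and (not(data) + 1'u64)
    yield countTrailingZeroBits(data)
    data = data xor lowest

when isMainModule:
  var indices: seq[int]
  for i in setBits(0b1010_0110'u64):
    indices.add i
  doAssert indices == @[1, 2, 5, 7]
```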
- if len(quarantine.usage) == 0: - return +func getOldestInMemoryRoot[A, B]( + quarantine: SidecarQuarantine[A, B] +): Eth2Digest = + var oldestRoot: Eth2Digest + for blockRoot in quarantine.memUsage: + oldestRoot = blockRoot + break + oldestRoot + +func getOldestOnDiskRoot[A, B]( + quarantine: SidecarQuarantine[A, B] +): Eth2Digest = var oldestRoot: Eth2Digest - for blockRoot in quarantine.usage: + for blockRoot in quarantine.diskUsage: oldestRoot = blockRoot break - quarantine.remove(oldestRoot) + oldestRoot + +func fitsInMemory[A, B](quarantine: SidecarQuarantine[A, B], count: int): bool = + quarantine.memSidecarsCount + count <= quarantine.maxMemSidecarsCount + +func fitsOnDisk[A, B](quarantine: SidecarQuarantine[A, B], count: int): bool = + quarantine.diskSidecarsCount + count <= quarantine.maxDiskSidecarsCount + +proc pruneInMemoryRoot[A, B](quarantine: var SidecarQuarantine[A, B]) = + # Remove the all the blobs related to the oldest block root from the memory + # storage of quarantine ``quarantine``. + if len(quarantine.memUsage) == 0: + return + quarantine.remove(quarantine.getOldestInMemoryRoot()) + +proc pruneOnDiskRoot[A, B](quarantine: var SidecarQuarantine[A, B]) = + # Remove the all the blobs related to the oldest block root from the disk + # storage of quarantine ``quarantine``. + # Returns `true` if oldest block root on disk is equal to `unloadRoot`. + if len(quarantine.diskUsage) == 0: + return + quarantine.remove(quarantine.getOldestOnDiskRoot()) func getIndex(quarantine: BlobQuarantine, index: BlobIndex): int = quarantine.indexMap[int(index)] @@ -158,7 +239,74 @@ template slot(b: BlobSidecar|DataColumnSidecar): Slot = template proposer_index(b: BlobSidecar|DataColumnSidecar): uint64 = b.signed_block_header.message.proposer_index -func put[A, B](record: var RootTableRecord[A], q: var SidecarQuarantine[A, B], +func unload[A](holder: var SidecarHolder[A]): ref A = + doAssert(holder.kind == SidecarHolderKind.Loaded) + let res = holder.data + holder.data = nil + holder = SidecarHolder[A]( + kind: SidecarHolderKind.Unloaded, + slot: holder.slot, + index: holder.index, + proposer_index: holder.proposer_index, + ) + res + +func load[A](holder: var SidecarHolder[A], sidecar: ref A) = + holder = SidecarHolder[A]( + kind: SidecarHolderKind.Loaded, + slot: holder.slot, + index: holder.index, + proposer_index: holder.proposer_index, + data: sidecar + ) + +proc unloadRoot[A, B](quarantine: var SidecarQuarantine[A, B]) = + doAssert(len(quarantine.memUsage) > 0) + + if quarantine.maxDiskSidecarsCount == 0: + # Disk storage is disabled, so we use should prune memory storage instead. 
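Editor's note: `unload`/`load` below switch a `SidecarHolder` between the `Loaded` and `Unloaded` branches while keeping the cheap metadata (slot, index, proposer). A reduced sketch of that object-variant pattern with a made-up `Holder` type (not a nimbus type):

```nim
type
  HolderKind = enum Empty, Loaded, Unloaded
  Holder = object
    index: uint64                 # metadata survives unloading
    case kind: HolderKind
    of Empty, Unloaded: discard
    of Loaded: data: ref int      # payload only exists while Loaded

func unload(h: var Holder): ref int =
  ## Returns the payload and leaves the holder in the Unloaded state,
  ## keeping only the metadata, similar to SidecarHolder.unload.
  doAssert h.kind == Loaded
  result = h.data
  h = Holder(kind: Unloaded, index: h.index)

when isMainModule:
  var h = Holder(kind: Loaded, index: 3, data: new(int))
  let payload = h.unload()
  doAssert h.kind == Unloaded and h.index == 3 and payload != nil
```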
+ quarantine.pruneInMemoryRoot() + return + + let blockRoot = quarantine.getOldestInMemoryRoot() + + quarantine.roots.withValue(blockRoot, record): + if not(quarantine.fitsOnDisk(record[].count)): + quarantine.pruneOnDiskRoot() + # Pruning on disk also removes sidecars from memory, so this could be + # enough + return + + var res: seq[ref A] + for index in 0 ..< len(record[].sidecars): + if record[].sidecars[index].kind == SidecarHolderKind.Loaded: + res.add(record[].sidecars[index].unload()) + dec(quarantine.memSidecarsCount) + inc(quarantine.diskSidecarsCount) + blob_quarantine_memory_slots_occupied.set( + int64(quarantine.memSidecarsCount)) + blob_quarantine_database_slots_occupied.set( + int64(quarantine.diskSidecarsCount)) + inc(record[].unloaded) + + if len(res) > 0: + quarantine.db.putDataSidecars(blockRoot, res) + quarantine.memUsage.excl(blockRoot) + quarantine.diskUsage.incl(blockRoot) + +proc loadRoot[A, B](quarantine: var SidecarQuarantine[A, B], + blockRoot: Eth2Digest, + record: var RootTableRecord[A]) = + for sidecar in quarantine.db.sidecars(A, blockRoot): + let index = quarantine.getIndex(sidecar.index) + doAssert(index >= 0, "Incorrect sidecar index [" & $sidecar.index & "]") + doAssert(record.sidecars[index].isUnloaded(), + "Database storage is inconsistent") + record.sidecars[index].load(newClone(sidecar)) + dec(record.unloaded) + doAssert(record.unloaded == 0, "Record's unload counter should be zero") + +proc put[A, B](record: var RootTableRecord[A], q: var SidecarQuarantine[A, B], sidecars: openArray[ref A]) = for sidecar in sidecars: # Sidecar should pass validation before being added to quarantine, @@ -168,19 +316,29 @@ func put[A, B](record: var RootTableRecord[A], q: var SidecarQuarantine[A, B], # 3. sidecar.index is in custody columns set for `fulu`. let index = q.getIndex(sidecar.index) doAssert(index >= 0, "Incorrect sidecar index [" & $sidecar.index & "]") - if isNil(record.sidecars[index]): - inc(q.sidecarsCount) + + if isEmpty(record.sidecars[index]): + inc(q.memSidecarsCount) + blob_quarantine_memory_slots_occupied.set(int64(q.memSidecarsCount)) inc(record.count) - record.sidecars[index] = sidecar + record.slot = sidecar[].slot() + + record.sidecars[index] = SidecarHolder[A]( + kind: SidecarHolderKind.Loaded, + slot: sidecar[].slot(), + index: uint64(sidecar[].index), + proposer_index: sidecar[].proposer_index(), + data: sidecar + ) -func put*[A, B]( +proc put*[A, B]( quarantine: var SidecarQuarantine[A, B], blockRoot: Eth2Digest, sidecar: ref A ) = ## Function adds blob or data column sidecar associated with block root ## ``blockRoot`` to the quarantine ``quarantine``. - while quarantine.sidecarsCount >= quarantine.maxSidecarsCount: + while not(quarantine.fitsInMemory(1)): # FIFO if full. For example, sync manager and request manager can race to # put blobs in at the same time, so one gets blob insert -> block resolve # -> blob insert sequence, which leaves garbage blobs. @@ -189,14 +347,14 @@ func put*[A, B]( # blobs which are correctly signed, point to either correct block roots or a # block root which isn't ever seen, and then are for any reason simply never # used. 
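Editor's note: the FIFO comment above now effectively covers two tiers: when the in-memory budget is exceeded the oldest root is unloaded to the database tier, and when that tier is also full its oldest root is dropped entirely. A toy model of that eviction order, using only `std/tables` (the type, names and tiny limits are illustrative only):

```nim
import std/tables

type TwoTierFifo = object
  maxMem, maxDisk: int
  mem: OrderedTable[string, int]    # blockRoot -> sidecar count, in memory
  disk: OrderedTable[string, int]   # blockRoot -> sidecar count, "on disk"

proc oldestKey(t: OrderedTable[string, int]): string =
  for k in t.keys: return k         # OrderedTable iterates in insertion order

proc put(f: var TwoTierFifo, root: string, count: int) =
  while f.mem.len + 1 > f.maxMem:
    let victim = oldestKey(f.mem)
    while f.disk.len + 1 > f.maxDisk:
      f.disk.del(oldestKey(f.disk)) # prune the oldest on-disk root entirely
    f.disk[victim] = f.mem[victim]  # unload the oldest in-memory root
    f.mem.del(victim)
  f.mem[root] = count

when isMainModule:
  var f = TwoTierFifo(maxMem: 2, maxDisk: 2)
  for root in ["a", "b", "c", "d", "e"]:
    f.put(root, 1)
  # "d"/"e" stayed in memory, "b"/"c" were unloaded, "a" was pruned.
  doAssert f.mem.len == 2 and f.disk.len == 2
```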
- quarantine.pruneRoot() + quarantine.unloadRoot() let rootRecord = RootTableRecord.init(quarantine) quarantine.roots.mgetOrPut(blockRoot, rootRecord).put( quarantine, [sidecar]) - quarantine.usage.incl(blockRoot) + quarantine.memUsage.incl(blockRoot) -func put*[A, B]( +proc put*[A, B]( quarantine: var SidecarQuarantine[A, B], blockRoot: Eth2Digest, sidecars: openArray[ref A] @@ -206,7 +364,7 @@ func put*[A, B]( if len(sidecars) == 0: return - while quarantine.sidecarsCount + len(sidecars) >= quarantine.maxSidecarsCount: + while not(quarantine.fitsInMemory(len(sidecars))): # FIFO if full. For example, sync manager and request manager can race to # put blobs in at the same time, so one gets blob insert -> block resolve # -> blob insert sequence, which leaves garbage blobs. @@ -215,13 +373,13 @@ func put*[A, B]( # blobs which are correctly signed, point to either correct block roots or a # block root which isn't ever seen, and then are for any reason simply never # used. - quarantine.pruneRoot() + quarantine.unloadRoot() let rootRecord = RootTableRecord.init(quarantine) quarantine.roots.mgetOrPut(blockRoot, rootRecord).put( quarantine, sidecars) - quarantine.usage.incl(blockRoot) + quarantine.memUsage.incl(blockRoot) template hasSidecarImpl( blockRoot: Eth2Digest, @@ -233,10 +391,10 @@ template hasSidecarImpl( if rootRecord.count == 0: return false let index = quarantine.getIndex(index) - if (index == -1) or (isNil(rootRecord.sidecars[index])): + if (index == -1) or rootRecord.sidecars[index].isEmpty(): return false - if (rootRecord.sidecars[index][].proposer_index() != proposer_index) or - (rootRecord.sidecars[index][].slot() != slot): + if (rootRecord.sidecars[index].proposer_index != proposer_index) or + (rootRecord.sidecars[index].slot != slot): return false true @@ -274,8 +432,8 @@ func hasSidecars*( return true let record = quarantine.roots.getOrDefault(blockRoot) - if len(record.sidecars) == 0: - # block root not found, record.sidecars sequence was not initialized. + if record.count == 0: + # block root not found. return false if record.count < len(blck.message.body.blob_kzg_commitments): @@ -328,7 +486,7 @@ func hasSidecars*( ## ``blck`` with block root ``blockRoot``. hasSidecars(quarantine, blck.root, blck) -func popSidecars*( +proc popSidecars*( quarantine: var BlobQuarantine, blockRoot: Eth2Digest, blck: deneb.SignedBeaconBlock | electra.SignedBeaconBlock | @@ -344,7 +502,7 @@ func popSidecars*( quarantine.remove(blockRoot) return Opt.some(default(seq[ref BlobSidecar])) - let record = quarantine.roots.getOrDefault(blockRoot) + var record = quarantine.roots.getOrDefault(blockRoot) if len(record.sidecars) == 0: # block root not found, record.sidecars sequence was not initialized. return Opt.none(seq[ref BlobSidecar]) @@ -353,15 +511,24 @@ func popSidecars*( # Quarantine does not hold enough blob sidecars. return Opt.none(seq[ref BlobSidecar]) + if record.unloaded > 0: + # Quarantine unloaded some blobs to disk, we should load it back. 
+ quarantine.loadRoot(blockRoot, record) + var sidecars: seq[ref BlobSidecar] for bindex in 0 ..< len(blck.message.body.blob_kzg_commitments): let index = quarantine.getIndex(BlobIndex(bindex)) - doAssert(not(isNil(record.sidecars[index])), - "Record should not store nil values when record's count is correct") - sidecars.add(record.sidecars[index]) + doAssert(record.sidecars[index].isLoaded(), + "Record should only have loaded values at this point") + sidecars.add(record.sidecars[index].data) + + # popSidecars() should remove all the artifacts from the quarantine in both + # memory and disk. + quarantine.removeRoot(blockRoot) + Opt.some(sidecars) -func popSidecars*( +proc popSidecars*( quarantine: var ColumnQuarantine, blockRoot: Eth2Digest, blck: fulu.SignedBeaconBlock @@ -376,7 +543,7 @@ func popSidecars*( quarantine.remove(blockRoot) return Opt.some(default(seq[ref DataColumnSidecar])) - let record = quarantine.roots.getOrDefault(blockRoot) + var record = quarantine.roots.getOrDefault(blockRoot) if len(record.sidecars) == 0: # block root not found, record.sidecars sequence was not allocated. return Opt.none(seq[ref DataColumnSidecar]) @@ -393,24 +560,40 @@ func popSidecars*( # Quarantine does not hold enough column sidecars. return Opt.none(seq[ref DataColumnSidecar]) + if record.unloaded > 0: + # Quarantine unloaded some blobs to disk, we should load it back. + quarantine.loadRoot(blockRoot, record) + var sidecars: seq[ref DataColumnSidecar] if supernode: for sidecar in record.sidecars: # Supernode could have some of the columns not filled. - if not(isNil(sidecar)): - sidecars.add(sidecar) + if not(sidecar.isEmpty()): + doAssert(sidecar.isLoaded(), + "Sidecars should be loaded at this moment") + sidecars.add(sidecar.data) + if len(sidecars) >= (NUMBER_OF_COLUMNS div 2 + 1): + break + doAssert(len(sidecars) >= (NUMBER_OF_COLUMNS div 2 + 1), "Incorrect amount of sidecars in record") - Opt.some(sidecars) else: for cindex in quarantine.custodyColumns: let index = quarantine.getIndex(cindex) - doAssert(not(isNil(record.sidecars[index])), - "Record should not store nil values when record's count is correct") - sidecars.add(record.sidecars[index]) - Opt.some(sidecars) + doAssert(record.sidecars[index].isLoaded(), + "Sidecars should be loaded at this moment") + sidecars.add(record.sidecars[index].data) + + doAssert(len(sidecars) == len(quarantine.custodyColumns), + "Incorrect amount of sidecars in record") + + # popSidecars() should remove all the artifacts from the quarantine in both + # memory and disk. + quarantine.removeRoot(blockRoot) + + Opt.some(sidecars) -func popSidecars*( +proc popSidecars*( quarantine: var BlobQuarantine, blck: deneb.SignedBeaconBlock | electra.SignedBeaconBlock | fulu.SignedBeaconBlock @@ -418,7 +601,7 @@ func popSidecars*( ## Alias for `popSidecars()`. popSidecars(quarantine, blck.root, blck) -func popSidecars*( +proc popSidecars*( quarantine: var ColumnQuarantine, blck: fulu.SignedBeaconBlock ): Opt[seq[ref DataColumnSidecar]] = @@ -444,7 +627,7 @@ func fetchMissingSidecars*( for bindex in 0 ..< commitmentsCount: let index = quarantine.getIndex(BlobIndex(bindex)) - if len(record.sidecars) == 0 or (record.sidecars[index].isNil()): + if len(record.sidecars) == 0 or record.sidecars[index].isEmpty(): res.add(BlobIdentifier(block_root: blockRoot, index: BlobIndex(bindex))) res @@ -497,7 +680,7 @@ func fetchMissingSidecars*( # columns. 
break let index = quarantine.getIndex(column) - if (index == -1) or record.sidecars[index].isNil(): + if (index == -1) or record.sidecars[index].isEmpty(): res.add(DataColumnIdentifier(block_root: blockRoot, index: column)) inc(columnsRequested) else: @@ -512,35 +695,68 @@ func fetchMissingSidecars*( else: for column in (peerMap and quarantine.custodyMap).items(): let index = quarantine.getIndex(column) - if (index == -1) or (record.sidecars[index].isNil()): + if (index == -1) or record.sidecars[index].isEmpty(): res.add(DataColumnIdentifier(block_root: blockRoot, index: column)) res -func pruneAfterFinalization*[A, B]( - quarantine: var SidecarQuarantine[A, B], - epoch: Epoch +proc pruneAfterFinalization*( + quarantine: var BlobQuarantine, + epoch: Epoch, + backfillNeeded: bool ) = - let epochSlot = epoch.start_slot() - var - sidecarsCount = 0 - rootsToRemove: seq[Eth2Digest] + let + startEpoch = + if backfillNeeded: + # Because BlobQuarantine could be used as temporary storage for incoming + # blob sidecars, we should not prune blobs which are behind + # `MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS` epoch. Otherwise we will not + # be able to backfill blobs. + if epoch < quarantine.minEpochsForSidecarsRequests: + Epoch(0) + else: + epoch - quarantine.minEpochsForSidecarsRequests + else: + epoch + epochSlot = (startEpoch + 1).start_slot() + var rootsToRemove: seq[Eth2Digest] for mkey, mrecord in quarantine.roots.mpairs(): - var removeRoot = false - for index in 0 ..< len(mrecord.sidecars): - if not(isNil(mrecord.sidecars[index])) and - mrecord.sidecars[index][].slot < epochSlot: - removeRoot = true - # Preemptively freeing `ref` object reference. - mrecord.sidecars[index] = nil - inc(sidecarsCount) - if removeRoot: + if (mrecord.count > 0) and (mrecord.slot < epochSlot): rootsToRemove.add(mkey) for root in rootsToRemove: - quarantine.roots.del(root) + quarantine.removeRoot(root) - dec(quarantine.sidecarsCount, sidecarsCount) +proc pruneAfterFinalization*( + quarantine: var ColumnQuarantine, + epoch: Epoch, + backfillNeeded: bool +) = + # TODO: In this procedure `MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS` + # should be used instead of `MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS`, but it + # was unavailable in the moment of this code being written. + let + startEpoch = + if backfillNeeded: + # Because ColumnQuarantine could be used as temporary storage for + # incoming data column sidecars, we should not prune data columns which + # are behind `MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS` epoch. + # Otherwise we will not be able to backfill data columns. 
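Editor's note: both `pruneAfterFinalization` overloads reduce to the same cutoff computation: keep the full retention window while a backfill is still pending, otherwise prune everything below the finalized epoch. A worked sketch of that arithmetic, assuming the mainnet values SLOTS_PER_EPOCH = 32 and MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096 (the helper name is illustrative):

```nim
const
  SLOTS_PER_EPOCH = 32'u64                          # mainnet preset
  MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096'u64  # mainnet config value

func pruneCutoffSlot(finalizedEpoch: uint64, backfillNeeded: bool): uint64 =
  ## Sidecars for slots strictly below the returned slot may be pruned.
  let startEpoch =
    if backfillNeeded:
      if finalizedEpoch < MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS:
        0'u64
      else:
        finalizedEpoch - MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
    else:
      finalizedEpoch
  (startEpoch + 1) * SLOTS_PER_EPOCH   # start_slot() of the next epoch

when isMainModule:
  # While backfilling, the full blob retention window is kept around.
  doAssert pruneCutoffSlot(10_000, backfillNeeded = true) ==
    (10_000'u64 - 4096 + 1) * 32
  # Once backfill is done, only sidecars for unfinalized slots are kept.
  doAssert pruneCutoffSlot(10_000, backfillNeeded = false) == 10_001'u64 * 32
```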
+ if epoch < quarantine.minEpochsForSidecarsRequests: + Epoch(0) + else: + epoch - quarantine.minEpochsForSidecarsRequests + else: + epoch + epochSlot = (startEpoch + 1).start_slot() + + var rootsToRemove: seq[Eth2Digest] + for mkey, mrecord in quarantine.roots.mpairs(): + if (mrecord.count > 0) and (mrecord.slot < epochSlot): + rootsToRemove.add(mkey) + + for root in rootsToRemove: + quarantine.removeRoot(root) template onBlobSidecarCallback*( quarantine: BlobQuarantine @@ -552,9 +768,11 @@ template onDataColumnSidecarCallback*( ): OnDataColumnSidecarCallback = quarantine.onSidecarCallback -func init*( +proc init*( T: typedesc[BlobQuarantine], cfg: RuntimeConfig, + database: QuarantineDB, + maxDiskSizeMultipler: int, onBlobSidecarCallback: OnBlobSidecarCallback ): BlobQuarantine = # BlobSidecars maps are trivial, but still useful @@ -563,22 +781,36 @@ func init*( indexMap[index] = index let size = maxSidecars(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + + blob_quarantine_memory_slots_total.set(int64(size)) + blob_quarantine_database_slots_total.set( + int64(size) * int64(maxDiskSizeMultipler)) + blob_quarantine_memory_slots_occupied.set(0'i64) + blob_quarantine_database_slots_occupied.set(0'i64) + BlobQuarantine( - maxSidecarsPerBlockCount: int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA), - maxSidecarsCount: size, - sidecarsCount: 0, + minEpochsForSidecarsRequests: + cfg.MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS, + maxSidecarsPerBlockCount: + int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA), + maxMemSidecarsCount: size, + maxDiskSidecarsCount: size * maxDiskSizeMultipler, + memSidecarsCount: 0, + diskSidecarsCount: 0, indexMap: indexMap, - onSidecarCallback: onBlobSidecarCallback + onSidecarCallback: onBlobSidecarCallback, + db: database ) -func init*( +proc init*( T: typedesc[ColumnQuarantine], cfg: RuntimeConfig, custodyColumns: openArray[ColumnIndex], + database: QuarantineDB, + maxDiskSizeMultipler: int, onBlobSidecarCallback: OnDataColumnSidecarCallback ): ColumnQuarantine = doAssert(len(custodyColumns) <= NUMBER_OF_COLUMNS) - let size = maxSidecars(NUMBER_OF_COLUMNS) var indexMap = newSeqUninit[int](NUMBER_OF_COLUMNS) if len(custodyColumns) < NUMBER_OF_COLUMNS: for i in 0 ..< len(indexMap): @@ -587,12 +819,26 @@ func init*( doAssert(item < uint64(NUMBER_OF_COLUMNS)) indexMap[int(item)] = index + let size = maxSidecars(NUMBER_OF_COLUMNS) + + blob_quarantine_memory_slots_total.set(int64(size)) + blob_quarantine_database_slots_total.set( + int64(size) * int64(maxDiskSizeMultipler)) + blob_quarantine_memory_slots_occupied.set(0'i64) + blob_quarantine_database_slots_occupied.set(0'i64) + ColumnQuarantine( + minEpochsForSidecarsRequests: + cfg.MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS, + # This should be changed to MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS maxSidecarsPerBlockCount: len(custodyColumns), - maxSidecarsCount: size, - sidecarsCount: 0, + maxMemSidecarsCount: size, + maxDiskSidecarsCount: size * maxDiskSizeMultipler, + memSidecarsCount: 0, + diskSidecarsCount: 0, indexMap: indexMap, custodyColumns: @custodyColumns, custodyMap: ColumnMap.init(custodyColumns), + db: database, onSidecarCallback: onBlobSidecarCallback ) diff --git a/beacon_chain/consensus_object_pools/block_pools_types.nim b/beacon_chain/consensus_object_pools/block_pools_types.nim index 986b207ecc..c48fee4c1d 100644 --- a/beacon_chain/consensus_object_pools/block_pools_types.nim +++ b/beacon_chain/consensus_object_pools/block_pools_types.nim @@ -396,6 +396,16 @@ func horizon*(dag: ChainDAGRef): Slot = else: GENESIS_SLOT +func 
earliestAvailableSlot*(dag: ChainDAGRef): Slot = + if dag.backfill.slot < dag.tail.slot and + dag.backfill.slot != GENESIS_SLOT: + # When the BN is backfilling, backfill slot is the earliest + # persisted block. + dag.backfill.slot + else: + # When the BN has backfilled, tail moves progressively. + dag.tail.slot + template epoch*(e: EpochRef): Epoch = e.key.epoch func shortLog*(v: EpochKey): string = diff --git a/beacon_chain/consensus_object_pools/block_quarantine.nim b/beacon_chain/consensus_object_pools/block_quarantine.nim index ccee1527b1..dca0140768 100644 --- a/beacon_chain/consensus_object_pools/block_quarantine.nim +++ b/beacon_chain/consensus_object_pools/block_quarantine.nim @@ -9,8 +9,8 @@ import std/tables, - chronicles, - ../spec/forks + chronicles, chronos, + ../spec/[presets, forks] export tables, forks @@ -21,9 +21,7 @@ const ## Arbitrary MaxOrphans = SLOTS_PER_EPOCH * 3 ## Enough for finalization in an alternative fork - MaxBlobless = SLOTS_PER_EPOCH - ## Arbitrary - MaxColumnless = SLOTS_PER_EPOCH + MaxSidecarless = SLOTS_PER_EPOCH * 128 ## Arbitrary MaxUnviables = 16 * 1024 ## About a day of blocks - most likely not needed but it's quite cheap.. @@ -52,18 +50,18 @@ type ## to be dropped. An orphan block may also be "blobless" (see ## below) - if so, upon resolving the parent, it should be ## added to the blobless table, after verifying its signature. - - blobless*: OrderedTable[Eth2Digest, ForkedSignedBeaconBlock] - ## Blocks that we don't have blobs for. When we have received - ## all blobs for this block, we can proceed to resolving the - ## block as well. A blobless block inserted into this table must + orphansEvent*: AsyncEvent + ## Asynchronous event which will be set, when new block appears in + ## orphans table. + + sidecarless*: OrderedTable[Eth2Digest, ForkedSignedBeaconBlock] + ## Blocks that we don't have sidecars (BlobSidecar/DataColumnSidecar) for. + ## When we have received all sidecars for this block, we can proceed to + ## resolving the block as well. Block inserted into this table must ## have a resolved parent (i.e., it is not an orphan). - - columnless*: OrderedTable[Eth2Digest, ForkedSignedBeaconBlock] - ## Blocks that we don't have columns for. When we have received - ## all columns for this block, we can proceed to resolving the - ## block as well. A columnless block inserted into this table must - ## have a resolved parent (i.e., it is not an orphan) + sidecarlessEvent*: AsyncEvent + ## Asynchronous event which will be set, when new block appears in + ## sidecarless table. unviable*: OrderedTable[Eth2Digest, tuple[]] ## Unviable blocks are those that come from a history that does not @@ -82,9 +80,19 @@ type missing*: Table[Eth2Digest, MissingBlock] ## Roots of blocks that we would like to have (either parent_root of ## unresolved blocks or block roots of attestations) + missingEvent*: AsyncEvent + ## Asynchronous event which will be set, when new block appears in + ## missing table. 
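Editor's note: the new `orphansEvent`/`sidecarlessEvent`/`missingEvent` fields let consumers park on a chronos `AsyncEvent` instead of polling the quarantine tables. A minimal producer/consumer sketch of that pattern (names are illustrative; only chronos is assumed):

```nim
import chronos

let newItemEvent = newAsyncEvent()

proc producer() {.async.} =
  await sleepAsync(10.milliseconds)
  newItemEvent.fire()          # e.g. addMissing()/addOrphan() fire their event

proc consumer() {.async.} =
  await newItemEvent.wait()    # parks here until something has been added
  newItemEvent.clear()         # re-arm for the next notification

waitFor allFutures(producer(), consumer())
```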
-func init*(T: type Quarantine): T = - T() + cfg*: RuntimeConfig + +func init*(T: type Quarantine, cfg: RuntimeConfig): T = + T( + cfg: cfg, + sidecarlessEvent: newAsyncEvent(), + missingEvent: newAsyncEvent(), + orphansEvent: newAsyncEvent() + ) func checkMissing*(quarantine: var Quarantine, max: int): seq[FetchRecord] = ## Return a list of blocks that we should try to resolve from other client - @@ -106,7 +114,7 @@ func checkMissing*(quarantine: var Quarantine, max: int): seq[FetchRecord] = if result.len >= max: break -func addMissing*(quarantine: var Quarantine, root: Eth2Digest) = +proc addMissing*(quarantine: var Quarantine, root: Eth2Digest) = ## Schedule the download a the given block if quarantine.missing.len >= MaxMissingItems: return @@ -129,6 +137,7 @@ func addMissing*(quarantine: var Quarantine, root: Eth2Digest) = # Add if it's not there, but don't update missing counter if not found: discard quarantine.missing.hasKeyOrPut(r, MissingBlock()) + quarantine.missingEvent.fire() return func removeOrphan*( @@ -189,7 +198,7 @@ func removeUnviableOrphanTree( checked -func removeUnviableBloblessTree( +func removeUnviableSidecarlessTree( quarantine: var Quarantine, toCheck: var seq[Eth2Digest], tbl: var OrderedTable[Eth2Digest, ForkedSignedBeaconBlock]) = @@ -223,7 +232,7 @@ func addUnviable*(quarantine: var Quarantine, root: Eth2Digest) = quarantine.cleanupUnviable() var toCheck = @[root] var checked = quarantine.removeUnviableOrphanTree(toCheck, quarantine.orphans) - quarantine.removeUnviableBloblessTree(checked, quarantine.blobless) + quarantine.removeUnviableSidecarlessTree(checked, quarantine.sidecarless) quarantine.unviable[root] = () @@ -238,29 +247,17 @@ func cleanupOrphans(quarantine: var Quarantine, finalizedSlot: Slot) = quarantine.addUnviable k[0] quarantine.orphans.del k -func cleanupBlobless(quarantine: var Quarantine, finalizedSlot: Slot) = +func cleanupSidecarless(quarantine: var Quarantine, finalizedSlot: Slot) = var toDel: seq[Eth2Digest] - for k, v in quarantine.blobless: + for k, v in quarantine.sidecarless: withBlck(v): if not isViable(finalizedSlot, forkyBlck.message.slot): toDel.add k for k in toDel: quarantine.addUnviable k - quarantine.blobless.del k - -func cleanupColumnless(quarantine: var Quarantine, finalizedSlot: Slot) = - var toDel: seq[Eth2Digest] - - for k, v in quarantine.columnless: - withBlck(v): - if not isViable(finalizedSlot, forkyBlck.message.slot): - toDel.add k - - for k in toDel: - quarantine.addUnviable k - quarantine.columnless.del k + quarantine.sidecarless.del k func clearAfterReorg*(quarantine: var Quarantine) = ## Clear missing and orphans to start with a fresh slate in case of a reorg @@ -268,6 +265,28 @@ func clearAfterReorg*(quarantine: var Quarantine) = quarantine.missing.reset() quarantine.orphans.reset() +func pruneAfterFinalization*( + quarantine: var Quarantine, + epoch: Epoch, + needsBackfill: bool +) = + let + startEpoch = + if needsBackfill: + # Because Quarantine could be used as temporary storage for blocks which + # do not have sidecars yet, we should not prune blocks which are behind + # `MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS` epoch. Otherwise we will not + # be able to backfill this blocks properly. 
+ if epoch < quarantine.cfg.MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS: + Epoch(0) + else: + epoch - quarantine.cfg.MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS + else: + epoch + slot = (startEpoch + 1).start_slot() + + quarantine.cleanupSidecarless(slot) + # Typically, blocks will arrive in mostly topological order, with some # out-of-order block pairs. Therefore, it is unhelpful to use either a # FIFO or LIFO discpline, and since by definition each block gets used @@ -279,9 +298,11 @@ func clearAfterReorg*(quarantine: var Quarantine) = # for future slots are rejected before reaching quarantine, this usually # will be a block for the last couple of slots for which the parent is a # likely imminent arrival. -func addOrphan*( - quarantine: var Quarantine, finalizedSlot: Slot, - signedBlock: ForkedSignedBeaconBlock): Result[void, cstring] = +proc addOrphan*( + quarantine: var Quarantine, + finalizedSlot: Slot, + signedBlock: ForkedSignedBeaconBlock +): Result[void, cstring] = ## Adds block to quarantine's `orphans` and `missing` lists. if not isViable(finalizedSlot, getForkedBlockField(signedBlock, slot)): @@ -312,9 +333,10 @@ func addOrphan*( oldest_orphan_key = k break quarantine.orphans.del oldest_orphan_key - quarantine.blobless.del oldest_orphan_key[0] + quarantine.sidecarless.del oldest_orphan_key[0] quarantine.orphans[(signedBlock.root, signedBlock.signature)] = signedBlock + quarantine.orphansEvent.fire() ok() @@ -332,75 +354,94 @@ iterator pop*(quarantine: var Quarantine, root: Eth2Digest): toRemove.add(k) yield v -proc addBlobless*( - quarantine: var Quarantine, finalizedSlot: Slot, +proc addSidecarless( + quarantine: var Quarantine, finalizedSlot: Opt[Slot], signedBlock: deneb.SignedBeaconBlock | electra.SignedBeaconBlock | - fulu.SignedBeaconBlock): bool = - - if not isViable(finalizedSlot, signedBlock.message.slot): - quarantine.addUnviable(signedBlock.root) - return false - - quarantine.cleanupBlobless(finalizedSlot) - - if quarantine.blobless.lenu64 >= MaxBlobless: - var oldest_blobless_key: Eth2Digest - for k in quarantine.blobless.keys: - oldest_blobless_key = k + fulu.SignedBeaconBlock +): bool = + if finalizedSlot.isSome(): + if not isViable(finalizedSlot.get(), signedBlock.message.slot): + quarantine.addUnviable(signedBlock.root) + return false + + if quarantine.sidecarless.lenu64 >= MaxSidecarless: + var oldestKey: Eth2Digest + for k in quarantine.sidecarless.keys: + oldestKey = k break - quarantine.blobless.del oldest_blobless_key + quarantine.sidecarless.del(oldestKey) - debug "block quarantine: Adding blobless", blck = shortLog(signedBlock) - quarantine.blobless[signedBlock.root] = + debug "Block without sidecars has been added to the quarantine", + block_root = shortLog(signedBlock.root) + quarantine.sidecarless[signedBlock.root] = ForkedSignedBeaconBlock.init(signedBlock) quarantine.missing.del(signedBlock.root) + quarantine.sidecarlessEvent.fire() true -proc addColumnless*( - quarantine: var Quarantine, finalizedSlot: Slot, - signedBlock: fulu.SignedBeaconBlock): bool = - - if not isViable(finalizedSlot, signedBlock.message.slot): - quarantine.addUnviable(signedBlock.root) - return false +proc addSidecarless*( + quarantine: var Quarantine, finalizedSlot: Slot, + signedBlock: deneb.SignedBeaconBlock | electra.SignedBeaconBlock | + fulu.SignedBeaconBlock +): bool = + quarantine.addSidecarless(Opt.some(finalizedSlot), signedBlock) - quarantine.cleanupColumnless(finalizedSlot) +proc addSidecarless*( + quarantine: var Quarantine, + signedBlock: deneb.SignedBeaconBlock | 
electra.SignedBeaconBlock | + fulu.SignedBeaconBlock +) = + discard quarantine.addSidecarless(Opt.none(Slot), signedBlock) - if quarantine.columnless.lenu64 >= MaxColumnless: - var oldest_columnless_key: Eth2Digest - for k in quarantine.columnless.keys: - oldest_columnless_key = k - break - quarantine.blobless.del oldest_columnless_key +proc addColumnless*( + quarantine: var Quarantine, finalizedSlot: Slot, + signedBlock: fulu.SignedBeaconBlock +): bool {.deprecated.} = + quarantine.addSidecarless(finalizedSlot, signedBlock) - debug "block quarantine: Adding columnless", blck = shortLog(signedBlock) - quarantine.columnless[signedBlock.root] = - ForkedSignedBeaconBlock.init(signedBlock) - quarantine.missing.del(signedBlock.root) - true +proc addBlobless*( + quarantine: var Quarantine, finalizedSlot: Slot, + signedBlock: deneb.SignedBeaconBlock | electra.SignedBeaconBlock | + fulu.SignedBeaconBlock +): bool {.deprecated.} = + quarantine.addSidecarless(finalizedSlot, signedBlock) -func popBlobless*( +func popSidecarless*( quarantine: var Quarantine, - root: Eth2Digest): Opt[ForkedSignedBeaconBlock] = + root: Eth2Digest +): Opt[ForkedSignedBeaconBlock] = var blck: ForkedSignedBeaconBlock - if quarantine.blobless.pop(root, blck): + if quarantine.sidecarless.pop(root, blck): Opt.some(blck) else: Opt.none(ForkedSignedBeaconBlock) func popColumnless*( quarantine: var Quarantine, - root: Eth2Digest): Opt[ForkedSignedBeaconBlock] = - var blck: ForkedSignedBeaconBlock - if quarantine.columnless.pop(root, blck): - Opt.some(blck) - else: - Opt.none(ForkedSignedBeaconBlock) + root: Eth2Digest +): Opt[ForkedSignedBeaconBlock] {.deprecated.} = + quarantine.popSidecarless(root) + +func popBlobless*( + quarantine: var Quarantine, + root: Eth2Digest +): Opt[ForkedSignedBeaconBlock] {.deprecated.} = + quarantine.popSidecarless(root) + +iterator peekSidecarless*( + quarantine: var Quarantine +): ForkedSignedBeaconBlock = + for k, v in quarantine.sidecarless.mpairs(): + yield v -iterator peekBlobless*(quarantine: var Quarantine): ForkedSignedBeaconBlock = - for k, v in quarantine.blobless.mpairs(): +iterator peekBlobless*( + quarantine: var Quarantine +): ForkedSignedBeaconBlock {.deprecated.} = + for k, v in quarantine.sidecarless.mpairs(): yield v -iterator peekColumnless*(quarantine: var Quarantine): ForkedSignedBeaconBlock = - for k, v in quarantine.columnless.mpairs(): +iterator peekColumnless*( + quarantine: var Quarantine +): ForkedSignedBeaconBlock {.deprecated.} = + for k, v in quarantine.sidecarless.mpairs(): yield v diff --git a/beacon_chain/consensus_object_pools/blockchain_dag.nim b/beacon_chain/consensus_object_pools/blockchain_dag.nim index badc0368cc..9da52f5348 100644 --- a/beacon_chain/consensus_object_pools/blockchain_dag.nim +++ b/beacon_chain/consensus_object_pools/blockchain_dag.nim @@ -2373,6 +2373,53 @@ func checkCompoundingChanges( withState(state): anyIt(vis, forkyState.data.validators[it].has_compounding_withdrawal_credential) +func trackVanityState( + dag: ChainDAGRef, knownValidators: openArray[ValidatorIndex]): auto = + ( + lastHeadKind: dag.headState.kind, + lastHeadEpoch: getStateField(dag.headState, slot).epoch, + lastKnownValidatorsChangeStatuses: + dag.headState.getBlsToExecutionChangeStatuses(knownValidators), + lastKnownCompoundingChangeStatuses: + dag.headState.getCompoundingStatuses(knownValidators) + ) + +proc processVanityLogs(dag: ChainDAGRef, vanityState: auto) = + if dag.headState.kind > vanityState.lastHeadKind: + proc logForkUpgrade(consensusFork: ConsensusFork, 
handler: LogProc) = + if handler != nil and + dag.headState.kind >= consensusFork and + vanityState.lastHeadKind < consensusFork: + handler() + + # Policy: Retain back through Mainnet's second latest fork. + ConsensusFork.Deneb.logForkUpgrade( + dag.vanityLogs.onUpgradeToDeneb) + ConsensusFork.Electra.logForkUpgrade( + dag.vanityLogs.onUpgradeToElectra) + ConsensusFork.Fulu.logForkUpgrade( + dag.vanityLogs.onUpgradeToFulu) + else: + if dag.vanityLogs.onBlobParametersUpdate != nil and + dag.headState.kind >= ConsensusFork.Fulu: + let headEpoch = getStateField(dag.headState, slot).epoch + if headEpoch > vanityState.lastHeadEpoch: + for entry in dag.cfg.BLOB_SCHEDULE: + if headEpoch >= entry.EPOCH: + if vanityState.lastHeadEpoch < entry.EPOCH: + dag.vanityLogs.onBlobParametersUpdate() + break + + if dag.vanityLogs.onKnownBlsToExecutionChange != nil and + checkBlsToExecutionChanges( + dag.headState, vanityState.lastKnownValidatorsChangeStatuses): + dag.vanityLogs.onKnownBlsToExecutionChange() + + if dag.vanityLogs.onKnownCompoundingChange != nil and + checkCompoundingChanges( + dag.headState, vanityState.lastKnownCompoundingChangeStatuses): + dag.vanityLogs.onKnownCompoundingChange() + proc updateHead*( dag: ChainDAGRef, newHead: BlockRef, quarantine: var Quarantine, knownValidators: openArray[ValidatorIndex]) = @@ -2410,11 +2457,7 @@ proc updateHead*( let lastHeadStateRoot = getStateRoot(dag.headState) - lastHeadKind = dag.headState.kind - lastKnownValidatorsChangeStatuses = getBlsToExecutionChangeStatuses( - dag.headState, knownValidators) - lastKnownCompoundingChangeStatuses = getCompoundingStatuses( - dag.headState, knownValidators) + vanityState = dag.trackVanityState(knownValidators) # Start off by making sure we have the right state - updateState will try # to use existing in-memory states to make this smooth @@ -2430,30 +2473,7 @@ proc updateHead*( quit 1 dag.head = newHead - - if dag.headState.kind > lastHeadKind: - proc logForkUpgrade(consensusFork: ConsensusFork, handler: LogProc) = - if handler != nil and - dag.headState.kind >= consensusFork and - lastHeadKind < consensusFork: - handler() - - # Policy: Retain back through Mainnet's second latest fork. - ConsensusFork.Deneb.logForkUpgrade( - dag.vanityLogs.onUpgradeToDeneb) - ConsensusFork.Electra.logForkUpgrade( - dag.vanityLogs.onUpgradeToElectra) - - if dag.vanityLogs.onKnownBlsToExecutionChange != nil and - checkBlsToExecutionChanges( - dag.headState, lastKnownValidatorsChangeStatuses): - dag.vanityLogs.onKnownBlsToExecutionChange() - - if dag.vanityLogs.onKnownCompoundingChange != nil and - checkCompoundingChanges( - dag.headState, lastKnownCompoundingChangeStatuses): - dag.vanityLogs.onKnownCompoundingChange() - + dag.processVanityLogs(vanityState) dag.db.putHeadBlock(newHead.root) updateBeaconMetrics(dag.headState, dag.head.bid, cache) diff --git a/beacon_chain/consensus_object_pools/vanity_logs/fulu/color.ans b/beacon_chain/consensus_object_pools/vanity_logs/fulu/color.ans new file mode 100644 index 0000000000..90bf43981a --- /dev/null +++ b/beacon_chain/consensus_object_pools/vanity_logs/fulu/color.ans @@ -0,0 +1,25 @@ + : : : : : : |`-. /-"| |""""""". + : : ./"""`: : : .-'""""\ |. \/ .| | [] | +.:....:....../ ::::::\ ...........:....:../:::::. \..|:|\../|:|.|...... /..... + : : |.::::::::`\.---""""""""""--/::::: . . | |:| `' |:| |:::[]::\ +.:....:......| .... ::: .""""""mmm""""".. : . |..|_| |_|.|________|.... + : : `\ . .' mmmmmMmmmmmm `. /' .------.. .--. + : : :| .::. "mmm...: .mmmm" :. 
| |..----..| | | + : : :`\ .'.::..: ...:::... } ::, <' |:| |:| | | + : : :./:..:.\.... ..' `. _.../ : : `. |:`----':| |:: """"\ + : : / :::: : \ 0 \ . / 0 /' :: ::`. `--------' `--------' + : : / :: : : """// . . """ : :: \ |-------. .------.. +.:....:......./ :: :: :..'' .: :: \..| /\ |.| .----. |.... + : : / :: : . . . . :: :: \ |.. "" < | | | | +.:....:.....{ :: :: :' : :' :::' :: \..|::|""\ :|.|:`----':|.... + : : ,{ :::: `.: .__ .. ___ .:::: \ |__| |_| `--------' + : : _/:: :::: /' ' \_`---' _} ' :::: \ .--------. |""""""". + : _.:/: : : _-""""(```. `\ /' ...''|---.__ \ | .______| | [] | + :-' : :' .--'_-'""""\```... |". . ... )--. |---.|: ...| |...... / +N : :' : /`` /' __..---.::__-:-_` ::.---. `. `.-_ |::---^--. |:::[]::\ +1 : : : .' : / .' ::::::`--' `\ ` | `.--------' |________| +M ::' : .':. { .'.'.'. } ::`. ::. \ .-------. +B : : | `-,_ `::::::::' .',: |:: :|............| _____|.... +U : : be. `\ """"" ::' :: : | `\..\__ +S : :: at. . `\ ::. .. :: .'':`............_\_ :::\..... + : `::. scribe... `: : ::: `\ .:: .:' |________| diff --git a/beacon_chain/consensus_object_pools/vanity_logs/fulu/mono.txt b/beacon_chain/consensus_object_pools/vanity_logs/fulu/mono.txt new file mode 100644 index 0000000000..fc9e5c3e1c --- /dev/null +++ b/beacon_chain/consensus_object_pools/vanity_logs/fulu/mono.txt @@ -0,0 +1,25 @@ + : : : : : : |`-. /-"| |""""""". + : : ./"""`: : : .-'""""\ |. \/ .| | [] | +.:....:....../ ::::::\ ...........:....:../:::::. \..|:|\../|:|.|...... /..... + : : |.::::::::`\.---""""""""""--/::::: . . | |:| `' |:| |:::[]::\ +.:....:......| .... ::: .""""""mmm""""".. : . |..|_| |_|.|________|.... + : : `\ . .' mmmmmMmmmmmm `. /' .------.. .--. + : : :| .::. "mmm...: .mmmm" :. | |..----..| | | + : : :`\ .'.::..: ...:::... } ::, <' |:| |:| | | + : : :./:..:.\.... ..' `. _.../ : : `. |:`----':| |:: """"\ + : : / :::: : \ 0 \ . / 0 /' :: ::`. `--------' `--------' + : : / :: : : """// . . """ : :: \ |-------. .------.. +.:....:......./ :: :: :..'' .: :: \..| /\ |.| .----. |.... + : : / :: : . . . . :: :: \ |.. "" < | | | | +.:....:.....{ :: :: :' : :' :::' :: \..|::|""\ :|.|:`----':|.... + : : ,{ :::: `.: .__ .. ___ .:::: \ |__| |_| `--------' + : : _/:: :::: /' ' \_`---' _} ' :::: \ .--------. |""""""". + : _.:/: : : _-""""(```. `\ /' ...''|---.__ \ | .______| | [] | + :-' : :' .--'_-'""""\```... |". . ... )--. |---.|: ...| |...... / +N : :' : /`` /' __..---.::__-:-_` ::.---. `. `.-_ |::---^--. |:::[]::\ +1 : : : .' : / .' ::::::`--' `\ ` | `.--------' |________| +M ::' : .':. { .'.'.'. } ::`. ::. \ .-------. +B : : | `-,_ `::::::::' .',: |:: :|............| _____|.... +U : : be. `\ """"" ::' :: : | `\..\__ +S : :: at. . `\ ::. .. :: .'':`............_\_ :::\..... + : `::. scribe... `: : ::: `\ .:: .:' |________| diff --git a/beacon_chain/consensus_object_pools/vanity_logs/vanity_logs.nim b/beacon_chain/consensus_object_pools/vanity_logs/vanity_logs.nim index 7ae2b2e599..a86a79ac5f 100644 --- a/beacon_chain/consensus_object_pools/vanity_logs/vanity_logs.nim +++ b/beacon_chain/consensus_object_pools/vanity_logs/vanity_logs.nim @@ -31,6 +31,14 @@ type # known in a head block. onKnownCompoundingChange*: LogProc + # Gets displayed on upgrade to Fulu. May be displayed multiple times + # in case of chain reorgs around the upgrade. + onUpgradeToFulu*: LogProc + + # Gets displayed on a blob parameters update. + # May be displayed multiple times in case of chain reorgs. 
+ onBlobParametersUpdate*: LogProc + # Created by https://beatscribe.com (beatscribe#1008 on Discord) # These need to be the main body of the log not to be reformatted or escaped. # @@ -45,3 +53,6 @@ proc denebColor*() = notice "\n" & staticRead("deneb" / "color.ans") proc electraMono*() = notice "\n" & staticRead("electra" / "mono.txt") proc electraColor*() = notice "\n" & staticRead("electra" / "color.ans") proc electraBlink*() = notice "\n" & staticRead("electra" / "blink.ans") + +proc fuluMono*() = notice "\n" & staticRead("fulu" / "mono.txt") +proc fuluColor*() = notice "\n" & staticRead("fulu" / "color.ans") diff --git a/beacon_chain/db_limits.nim b/beacon_chain/db_limits.nim deleted file mode 100644 index 567b24eeee..0000000000 --- a/beacon_chain/db_limits.nim +++ /dev/null @@ -1,16 +0,0 @@ -# beacon_chain -# Copyright (c) 2022-2024 Status Research & Development GmbH -# Licensed and distributed under either of -# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT). -# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0). -# at your option. This file may not be copied, modified, or distributed except according to those terms. - -{.push raises: [].} - -import spec/datatypes/constants - -# No `uint64` support in Sqlite -template isSupportedBySQLite*(slot: Slot): bool = - slot <= int64.high.Slot -template isSupportedBySQLite*(period: SyncCommitteePeriod): bool = - period <= int64.high.SyncCommitteePeriod diff --git a/beacon_chain/db_utils.nim b/beacon_chain/db_utils.nim new file mode 100644 index 0000000000..a83667552f --- /dev/null +++ b/beacon_chain/db_utils.nim @@ -0,0 +1,46 @@ +# beacon_chain +# Copyright (c) 2022-2025 Status Research & Development GmbH +# Licensed and distributed under either of +# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT). +# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0). +# at your option. This file may not be copied, modified, or distributed except according to those terms. + +{.push raises: [].} + +import + chronicles, + snappy, + spec/datatypes/constants, + spec/eth2_ssz_serialization + +# No `uint64` support in Sqlite +template isSupportedBySQLite*(slot: Slot): bool = + slot <= int64.high.Slot +template isSupportedBySQLite*(period: SyncCommitteePeriod): bool = + period <= int64.high.SyncCommitteePeriod + +template disposeSafe*(s: untyped): untyped = + if distinctBase(s) != nil: + s.dispose() + s = typeof(s)(nil) + +proc decodeSZSSZ*[T]( + data: openArray[byte], output: var T, updateRoot = false): bool = + try: + let decompressed = decodeFramed(data, checkIntegrity = false) + readSszBytes(decompressed, output, updateRoot) + true + except CatchableError as e: + # If the data can't be deserialized, it could be because it's from a + # version of the software that uses a different SSZ encoding + warn "Unable to deserialize data, old database?", + err = e.msg, typ = name(T), dataLen = data.len + false + +func encodeSZSSZ*(v: auto): seq[byte] = + # https://github.com/google/snappy/blob/main/framing_format.txt + try: + encodeFramed(SSZ.encode(v)) + except CatchableError as err: + # In-memory encode shouldn't fail! 
+ raiseAssert err.msg diff --git a/beacon_chain/el/el_manager.nim b/beacon_chain/el/el_manager.nim index 2c655c7ef5..9d243e437a 100644 --- a/beacon_chain/el/el_manager.nim +++ b/beacon_chain/el/el_manager.nim @@ -425,7 +425,7 @@ proc getPayloadFromSingleEL( if response.payloadStatus.status != PayloadExecutionStatus.valid or response.payloadId.isNone: - raise newException(CatchableError, "Head block is not a valid payload") + raise newException(CatchableError, "Head block is not a valid payload; " & $response) # Give the EL some time to assemble the block await sleepAsync(chronos.milliseconds 500) @@ -1243,6 +1243,3 @@ proc testWeb3Provider*( discard request "Sync status": web3.provider.eth_syncing() - - discard request "Latest block": - web3.provider.eth_getBlockByNumber(blockId("latest"), false) diff --git a/beacon_chain/el/merkle_minimal.nim b/beacon_chain/el/merkle_minimal.nim index e7c90d3345..587714f15f 100644 --- a/beacon_chain/el/merkle_minimal.nim +++ b/beacon_chain/el/merkle_minimal.nim @@ -13,13 +13,12 @@ # --------------------------------------------------------------- import - std/sequtils, - stew/endians2, - # Specs ../spec/[eth2_merkleization, digest], ../spec/datatypes/base -template getProof*( +from std/sequtils import mapIt + +template getProof( proofs: seq[Eth2Digest], idxParam: int): openArray[Eth2Digest] = let idx = idxParam diff --git a/beacon_chain/gossip_processing/block_processor.nim b/beacon_chain/gossip_processing/block_processor.nim index 0420a30594..620b371c03 100644 --- a/beacon_chain/gossip_processing/block_processor.nim +++ b/beacon_chain/gossip_processing/block_processor.nim @@ -26,7 +26,7 @@ from ../consensus_object_pools/block_dag import BlockRef, root, shortLog, slot from ../consensus_object_pools/block_pools_types import EpochRef, VerifierError from ../consensus_object_pools/block_quarantine import - addBlobless, addOrphan, addUnviable, pop, removeOrphan + addSidecarless, addOrphan, addUnviable, pop, removeOrphan from ../consensus_object_pools/blob_quarantine import BlobQuarantine, popSidecars, put from ../validators/validator_monitor import @@ -856,8 +856,7 @@ proc storeBlock( if bres.isSome(): self[].enqueueBlock(MsgSource.gossip, quarantined, bres) else: - discard self.consensusManager.quarantine[].addBlobless( - dag.finalizedHead.slot, forkyBlck) + self.consensusManager.quarantine[].addSidecarless(forkyBlck) ok blck.value() diff --git a/beacon_chain/gossip_processing/eth2_processor.nim b/beacon_chain/gossip_processing/eth2_processor.nim index 0172d6cd51..16dd94fa45 100644 --- a/beacon_chain/gossip_processing/eth2_processor.nim +++ b/beacon_chain/gossip_processing/eth2_processor.nim @@ -246,8 +246,7 @@ proc processSignedBeaconBlock*( if bres.isSome(): bres else: - discard self.quarantine[].addBlobless(self.dag.finalizedHead.slot, - signedBlock) + self.quarantine[].addSidecarless(signedBlock) return v else: Opt.none(BlobSidecars) @@ -301,7 +300,7 @@ proc processBlobSidecar*( debug "Blob validated, putting in blob quarantine" self.blobQuarantine[].put(block_root, newClone(blobSidecar)) - if (let o = self.quarantine[].popBlobless(block_root); o.isSome): + if (let o = self.quarantine[].popSidecarless(block_root); o.isSome): let blobless = o.unsafeGet() withBlck(blobless): when consensusFork >= ConsensusFork.Deneb: @@ -309,8 +308,7 @@ proc processBlobSidecar*( if bres.isSome(): self.blockProcessor[].enqueueBlock(MsgSource.gossip, blobless, bres) else: - discard self.quarantine[].addBlobless( - self.dag.finalizedHead.slot, forkyBlck) + 
self.quarantine[].addSidecarless(forkyBlck) else: raiseAssert "Could not have been added as blobless" diff --git a/beacon_chain/gossip_processing/gossip_validation.nim b/beacon_chain/gossip_processing/gossip_validation.nim index 3415b51dd2..63f1b897f1 100644 --- a/beacon_chain/gossip_processing/gossip_validation.nim +++ b/beacon_chain/gossip_processing/gossip_validation.nim @@ -135,7 +135,7 @@ func check_slot_exact(msgSlot: Slot, wallTime: BeaconTime): ok(msgSlot) -func check_beacon_and_target_block( +proc check_beacon_and_target_block( pool: var AttestationPool, data: AttestationData): Result[BlockSlot, ValidationError] = # The block being voted for (data.beacon_block_root) passes validation - by @@ -304,7 +304,7 @@ template validateBeaconBlockBellatrix( _: BlockRef): untyped = discard -# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/bellatrix/p2p-interface.md#beacon_block +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/specs/bellatrix/p2p-interface.md#beacon_block template validateBeaconBlockBellatrix( signed_beacon_block: bellatrix.SignedBeaconBlock | capella.SignedBeaconBlock | diff --git a/beacon_chain/libnimbus_lc/libnimbus_lc.h b/beacon_chain/libnimbus_lc/libnimbus_lc.h index 528ba6cca0..3866899d8a 100644 --- a/beacon_chain/libnimbus_lc/libnimbus_lc.h +++ b/beacon_chain/libnimbus_lc/libnimbus_lc.h @@ -94,7 +94,7 @@ typedef struct ETHConsensusConfig ETHConsensusConfig; * based on the given `config.yaml` file content - If successful. * @return `NULL` - If the given `config.yaml` is malformed or incompatible. * - * @see https://github.com/ethereum/consensus-specs/blob/v1.5.0/configs/README.md + * @see https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/configs/README.md */ ETH_RESULT_USE_CHECK ETHConsensusConfig *_Nullable ETHConsensusConfigCreateFromYaml(const char *configFileContent); diff --git a/beacon_chain/libnimbus_lc/libnimbus_lc.nim b/beacon_chain/libnimbus_lc/libnimbus_lc.nim index 440af8a58c..1457b39a14 100644 --- a/beacon_chain/libnimbus_lc/libnimbus_lc.nim +++ b/beacon_chain/libnimbus_lc/libnimbus_lc.nim @@ -145,7 +145,7 @@ proc ETHBeaconStateCreateFromSsz( ## * https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/altair/beacon-chain.md#beaconstate ## * https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.8/specs/bellatrix/beacon-chain.md#beaconstate ## * https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.3/specs/capella/beacon-chain.md#beaconstate - ## * https://github.com/ethereum/consensus-specs/blob/v1.5.0/configs/README.md + ## * https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/configs/README.md let consensusFork = ConsensusFork.decodeString($consensusVersion).valueOr: return nil @@ -735,7 +735,7 @@ func ETHLightClientStoreGetFinalizedHeader( ## * Latest finalized header. 
## ## See: - ## * https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/capella/light-client/sync-protocol.md#modified-lightclientheader + ## * https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/capella/light-client/sync-protocol.md#modified-lightclientheader addr store[].finalized_header func ETHLightClientStoreIsNextSyncCommitteeKnown( @@ -755,7 +755,7 @@ func ETHLightClientStoreIsNextSyncCommitteeKnown( ## ## See: ## * https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.0/specs/altair/light-client/sync-protocol.md#is_next_sync_committee_known - ## * https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/altair/light-client/light-client.md + ## * https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/altair/light-client/light-client.md store[].is_next_sync_committee_known func ETHLightClientStoreGetOptimisticHeader( @@ -1296,7 +1296,7 @@ proc ETHExecutionBlockHeaderCreateFromJson( Opt.some data.requestsHash.get.asEth2Digest.to(Hash32) else: Opt.none(Hash32)) - if blockHeader.computeRlpHash() != executionHash[]: + if blockHeader.computeRlpHash().asEth2Digest() != executionHash[]: return nil # Construct withdrawals @@ -1326,15 +1326,15 @@ proc ETHExecutionBlockHeaderCreateFromJson( bytes: rlpBytes) let tr = orderedTrieRoot(wds) - if tr != data.withdrawalsRoot.get.asEth2Digest: + if tr != data.withdrawalsRoot.get: return nil let executionBlockHeader = ETHExecutionBlockHeader.new() executionBlockHeader[] = ETHExecutionBlockHeader( - transactionsRoot: blockHeader.txRoot, - withdrawalsRoot: blockHeader.withdrawalsRoot.get(zeroHash32), + transactionsRoot: blockHeader.txRoot.asEth2Digest(), + withdrawalsRoot: blockHeader.withdrawalsRoot.get(zeroHash32).asEth2Digest(), withdrawals: wds, - requestsHash: blockHeader.requestsHash.get(zeroHash32)) + requestsHash: blockHeader.requestsHash.get(zeroHash32).asEth2Digest()) executionBlockHeader.toUnmanagedPtr() proc ETHExecutionBlockHeaderDestroy( @@ -1600,7 +1600,7 @@ proc ETHTransactionsCreateFromJson( except RlpError: raiseAssert "Unreachable" hash = keccak256(rlpBytes) - if data.hash.asEth2Digest != hash: + if data.hash != hash: return nil func packSignature(r, s: UInt256, yParity: uint8): array[65, byte] = @@ -1667,7 +1667,7 @@ proc ETHTransactionsCreateFromJson( signature: @sig) txs.add ETHTransaction( - hash: keccak256(rlpBytes), + hash: keccak256(rlpBytes).asEth2Digest, chainId: tx.chainId, `from`: ExecutionAddress(data: fromAddress), nonce: tx.nonce, @@ -1688,7 +1688,7 @@ proc ETHTransactionsCreateFromJson( signature: @rawSig, bytes: rlpBytes.TypedTransaction) - if orderedTrieRoot(txs) != transactionsRoot[]: + if orderedTrieRoot(txs).asEth2Digest() != transactionsRoot[]: return nil let transactions = seq[ETHTransaction].new() @@ -2396,7 +2396,7 @@ proc ETHReceiptsCreateFromJson( ReceiptStatusType.Root else: ReceiptStatusType.Status, - root: rec.hash, + root: rec.hash.asEth2Digest(), status: rec.status, gasUsed: distinctBase(data.gasUsed), # Validated during sanity checks. 
logsBloom: BloomLogs(data: rec.logsBloom.data), @@ -2406,7 +2406,7 @@ proc ETHReceiptsCreateFromJson( data: it.data)), bytes: rlpBytes) - if orderedTrieRoot(recs) != receiptsRoot[]: + if orderedTrieRoot(recs).asEth2Digest() != receiptsRoot[]: return nil let receipts = seq[ETHReceipt].new() diff --git a/beacon_chain/light_client_db.nim b/beacon_chain/light_client_db.nim index 20b21f62e8..702ec1b300 100644 --- a/beacon_chain/light_client_db.nim +++ b/beacon_chain/light_client_db.nim @@ -1,5 +1,5 @@ # beacon_chain -# Copyright (c) 2022-2024 Status Research & Development GmbH +# Copyright (c) 2022-2025 Status Research & Development GmbH # Licensed and distributed under either of # * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT). # * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0). @@ -15,7 +15,7 @@ import # Beacon chain internals spec/datatypes/altair, spec/[eth2_ssz_serialization, helpers], - ./db_limits + ./db_utils logScope: topics = "lcdb" diff --git a/beacon_chain/networking/eth2_network.nim b/beacon_chain/networking/eth2_network.nim index 181bc8732e..e63d1f8bba 100644 --- a/beacon_chain/networking/eth2_network.nim +++ b/beacon_chain/networking/eth2_network.nim @@ -79,6 +79,7 @@ type forkId*: ENRForkID discoveryForkId*: ENRForkID forkDigests*: ref ForkDigests + nextForkDigest: ForkDigest rng*: ref HmacDrbgContext peers*: Table[PeerId, Peer] directPeers*: DirectPeers @@ -2702,6 +2703,24 @@ proc updateSyncnetsMetadata*(node: Eth2Node, syncnets: SyncnetBits) = else: debug "Sync committees changed; updated ENR syncnets", syncnets +proc updateNextForkDigest(node: Eth2Node, next_fork_digest: ForkDigest) = + # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/specs/fulu/p2p-interface.md#next-fork-digest + if node.nextForkDigest == next_fork_digest: + return + + node.metadata.seq_number += 1 + node.nextForkDigest = next_fork_digest + + let res = node.discovery.updateRecord({ + enrNextForkDigestField: SSZ.encode(next_fork_digest) + }) + if res.isErr(): + # This should not occur in this scenario as the private key would always + # be the correct one and the ENR will not increase in size. 
+ warn "Failed to update the ENR nfd field", error = res.error + else: + debug "Next fork digest changed; updated ENR nfd", next_fork_digest + proc updateForkId(node: Eth2Node, value: ENRForkID) = node.forkId = value let res = node.discovery.updateRecord({enrForkIdField: SSZ.encode value}) diff --git a/beacon_chain/networking/peer_protocol.nim b/beacon_chain/networking/peer_protocol.nim index f81ad3b68d..3b900bd8e9 100644 --- a/beacon_chain/networking/peer_protocol.nim +++ b/beacon_chain/networking/peer_protocol.nim @@ -26,6 +26,15 @@ type headRoot*: Eth2Digest headSlot*: Slot + # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/specs/fulu/p2p-interface.md#status-v2 + StatusMsgV2* = object + forkDigest*: ForkDigest + finalizedRoot*: Eth2Digest + finalizedEpoch*: Epoch + headRoot*: Eth2Digest + headSlot*: Slot + earliestAvailableSlot*: Slot + PeerSyncNetworkState* {.final.} = ref object of RootObj dag: ChainDAGRef cfg: RuntimeConfig @@ -36,6 +45,7 @@ type PeerSyncPeerState* {.final.} = ref object of RootObj statusLastTime: chronos.Moment statusMsg: StatusMsg + statusMsgV2: Opt[StatusMsgV2] declareCounter nbc_disconnects_count, "Number disconnected peers", labels = ["agent", "reason"] @@ -50,12 +60,23 @@ func shortLog*(s: StatusMsg): auto = ) chronicles.formatIt(StatusMsg): shortLog(it) +func shortLog*(s: StatusMsgV2): auto = + ( + forkDigest: s.forkDigest, + finalizedRoot: shortLog(s.finalizedRoot), + finalizedEpoch: shortLog(s.finalizedEpoch), + headRoot: shortLog(s.headRoot), + headSlot: shortLog(s.headSlot), + earliestAvailableSlot: shortLog(s.earliestAvailableSlot) + ) +chronicles.formatIt(StatusMsgV2): shortLog(it) + func forkDigestAtEpoch(state: PeerSyncNetworkState, epoch: Epoch): ForkDigest = state.forkDigests[].atEpoch(epoch, state.cfg) # https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.0/specs/phase0/p2p-interface.md#status -proc getCurrentStatus(state: PeerSyncNetworkState): StatusMsg = +proc getCurrentStatusV1(state: PeerSyncNetworkState): StatusMsg = let dag = state.dag wallSlot = state.getBeaconTime().slotOrZero @@ -83,7 +104,38 @@ proc getCurrentStatus(state: PeerSyncNetworkState): StatusMsg = headRoot: state.genesisBlockRoot, headSlot: GENESIS_SLOT) -proc checkStatusMsg(state: PeerSyncNetworkState, status: StatusMsg): +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/specs/fulu/p2p-interface.md#status-v2 +proc getCurrentStatusV2(state: PeerSyncNetworkState): StatusMsgV2 = + let + dag = state.dag + wallSlot = state.getBeaconTime().slotOrZero + + if dag != nil: + StatusMsgV2( + forkDigest: state.forkDigestAtEpoch(wallSlot.epoch), + finalizedRoot: + (if dag.finalizedHead.slot.epoch != GENESIS_EPOCH: + dag.finalizedHead.blck.root + else: + # this defaults to `Root(b'\x00' * 32)` for the genesis finalized + # checkpoint + ZERO_HASH), + finalizedEpoch: dag.finalizedHead.slot.epoch, + headRoot: dag.head.root, + headSlot: dag.head.slot, + earliestAvailableSlot: dag.earliestAvailableSlot()) + else: + StatusMsgV2( + forkDigest: state.forkDigestAtEpoch(wallSlot.epoch), + # this defaults to `Root(b'\x00' * 32)` for the genesis finalized + # checkpoint + finalizedRoot: ZERO_HASH, + finalizedEpoch: GENESIS_EPOCH, + headRoot: state.genesisBlockRoot, + headSlot: GENESIS_SLOT, + earliestAvailableSlot: GENESIS_SLOT) + +proc checkStatusMsg(state: PeerSyncNetworkState, status: StatusMsg | StatusMsgV2): Result[void, cstring] = let dag = state.dag @@ -114,12 +166,20 @@ proc checkStatusMsg(state: PeerSyncNetworkState, status: StatusMsg): # apparently 
don't use spec ZERO_HASH as of this writing if not (status.finalizedRoot in [state.genesisBlockRoot, ZERO_HASH]): return err("peer following different finality") - ok() -proc handleStatus(peer: Peer, - state: PeerSyncNetworkState, - theirStatus: StatusMsg): Future[bool] {.async: (raises: [CancelledError]).} +proc handleStatusV1(peer: Peer, + state: PeerSyncNetworkState, + theirStatus: StatusMsg): Future[bool] {.async: (raises: [CancelledError]).} + +proc handleStatusV2(peer: Peer, + state: PeerSyncNetworkState, + theirStatus: StatusMsgV2): Future[bool] {.async: (raises: [CancelledError]).} + +proc setStatusV2Msg(state: PeerSyncPeerState, + statusMsg: Opt[StatusMsgV2]) = + state.statusMsgV2 = statusMsg + state.statusLastTime = Moment.now() {.pop.} # TODO fix p2p macro for raises @@ -142,26 +202,57 @@ p2pProtocol PeerSync(version = 1, # need a dedicated flow in libp2p that resolves the race conditions - # this needs more thinking around the ordering of events and the # given incoming flag + let - ourStatus = peer.networkState.getCurrentStatus() - theirStatus = await peer.status(ourStatus, timeout = RESP_TIMEOUT_DUR) + remoteFork = peer.networkState.getBeaconTime().slotOrZero.epoch() + + if remoteFork >= peer.networkState.cfg.FULU_FORK_EPOCH: + let + ourStatus = peer.networkState.getCurrentStatusV2() + theirStatus = + await peer.statusV2(ourStatus, timeout = RESP_TIMEOUT_DUR) + + if theirStatus.isOk: + discard await peer.handleStatusV2(peer.networkState, theirStatus.get()) + peer.updateAgent() + else: + # Mark status v2 of remote peer as None. + peer.state(PeerSync).setStatusV2Msg(Opt.none(StatusMsgV2)) + debug "Status response not received in time", + peer, errorKind = theirStatus.error.kind + await peer.disconnect(FaultOrError) - if theirStatus.isOk: - discard await peer.handleStatus(peer.networkState, theirStatus.get()) - peer.updateAgent() else: - debug "Status response not received in time", - peer, errorKind = theirStatus.error.kind - await peer.disconnect(FaultOrError) - - proc status(peer: Peer, - theirStatus: StatusMsg, - response: SingleChunkResponse[StatusMsg]) - {.async, libp2pProtocol("status", 1).} = - let ourStatus = peer.networkState.getCurrentStatus() - trace "Sending status message", peer = peer, status = ourStatus + let + ourStatus = peer.networkState.getCurrentStatusV1() + theirStatus = + await peer.statusV1(ourStatus, timeout = RESP_TIMEOUT_DUR) + + if theirStatus.isOk: + discard await peer.handleStatusV1(peer.networkState, theirStatus.get()) + peer.updateAgent() + else: + debug "Status response not received in time", + peer, errorKind = theirStatus.error.kind + await peer.disconnect(FaultOrError) + + proc statusV1(peer: Peer, + theirStatus: StatusMsg, + response: SingleChunkResponse[StatusMsg]) + {.async, libp2pProtocol("status", 1).} = + let ourStatus = peer.networkState.getCurrentStatusV1() + trace "Sending status (v1)", peer = peer, status = ourStatus + await response.send(ourStatus) + discard await peer.handleStatusV1(peer.networkState, theirStatus) + + proc statusV2(peer: Peer, + theirStatus: StatusMsgV2, + response: SingleChunkResponse[StatusMsgV2]) + {.async, libp2pProtocol("status", 2).} = + let ourStatus = peer.networkState.getCurrentStatusV2() + trace "Sending status (v2)", peer = peer, status = ourStatus await response.send(ourStatus) - discard await peer.handleStatus(peer.networkState, theirStatus) + discard await peer.handleStatusV2(peer.networkState, theirStatus) proc ping(peer: Peer, value: uint64): uint64 {.libp2pProtocol("ping", 1).} = @@ -176,7 +267,7 
@@ p2pProtocol PeerSync(version = 1, altair_metadata proc getMetadata_v3(peer: Peer): fulu.MetaData - {. libp2pProtocol("metadata", 3).} = + {.libp2pProtocol("metadata", 3).} = peer.network.metadata proc goodbye(peer: Peer, reason: uint64) {. @@ -192,10 +283,15 @@ proc setStatusMsg(peer: Peer, statusMsg: StatusMsg) = peer.state(PeerSync).statusMsg = statusMsg peer.state(PeerSync).statusLastTime = Moment.now() -proc handleStatus(peer: Peer, - state: PeerSyncNetworkState, - theirStatus: StatusMsg): Future[bool] - {.async: (raises: [CancelledError]).} = +proc setStatusV2Msg(peer: Peer, statusMsg: Opt[StatusMsgV2]) = + debug "Peer statusV2", peer, statusMsg + peer.state(PeerSync).statusMsgV2 = statusMsg + peer.state(PeerSync).statusLastTime = Moment.now() + +proc handleStatusV1(peer: Peer, + state: PeerSyncNetworkState, + theirStatus: StatusMsg): Future[bool] + {.async: (raises: [CancelledError]).} = let res = checkStatusMsg(state, theirStatus) @@ -212,28 +308,81 @@ proc handleStatus(peer: Peer, await peer.handlePeer() true +proc handleStatusV2(peer: Peer, + state: PeerSyncNetworkState, + theirStatus: StatusMsgV2): Future[bool] + {.async: (raises: [CancelledError]).} = + let + res = checkStatusMsg(state, theirStatus) + + return if res.isErr(): + debug "Irrelevant peer", peer, theirStatus, err = res.error() + await peer.disconnect(IrrelevantNetwork) + false + else: + peer.setStatusV2Msg(Opt.some(theirStatus)) + + if peer.connectionState == Connecting: + # As soon as we get here it means that we passed handshake succesfully. So + # we can add this peer to PeerPool. + await peer.handlePeer() + true + proc updateStatus*(peer: Peer): Future[bool] {.async: (raises: [CancelledError]).} = ## Request `status` of remote peer ``peer``. let nstate = peer.networkState(PeerSync) - ourStatus = getCurrentStatus(nstate) - theirStatus = - (await peer.status(ourStatus, timeout = RESP_TIMEOUT_DUR)).valueOr: - return false - await peer.handleStatus(nstate, theirStatus) + if nstate.getBeaconTime().slotOrZero.epoch() >= nstate.cfg.FULU_FORK_EPOCH: + let + ourStatus = getCurrentStatusV2(nstate) + theirStatus = + (await peer.statusV2(ourStatus, timeout = RESP_TIMEOUT_DUR)) + if theirStatus.isOk(): + await peer.handleStatusV2(nstate, theirStatus.get()) + else: + # Mark status v2 of remote peer as None + peer.setStatusV2Msg(Opt.none(StatusMsgV2)) + return false + + else: + let + ourStatus = getCurrentStatusV1(nstate) + theirStatus = + (await peer.statusV1(ourStatus, timeout = RESP_TIMEOUT_DUR)).valueOr: + return false + + await peer.handleStatusV1(nstate, theirStatus) proc getHeadRoot*(peer: Peer): Eth2Digest = - ## Returns head root for specific peer ``peer``. - peer.state(PeerSync).statusMsg.headRoot + let + state = peer.networkState(PeerSync) + pstate = peer.state(PeerSync) + remoteFork = state.getBeaconTime().slotOrZero.epoch() + if pstate.statusMsgV2.isSome(): + pstate.statusMsgV2.get.headRoot + else: + pstate.statusMsg.headRoot proc getHeadSlot*(peer: Peer): Slot = - ## Returns head slot for specific peer ``peer``. - peer.state(PeerSync).statusMsg.headSlot + let + state = peer.networkState(PeerSync) + pstate = peer.state(PeerSync) + remoteFork = state.getBeaconTime().slotOrZero.epoch() + if pstate.statusMsgV2.isSome(): + pstate.statusMsgV2.get.headSlot + else: + pstate.statusMsg.headSlot proc getFinalizedEpoch*(peer: Peer): Epoch = - ## Returns head slot for specific peer ``peer``. 
- peer.state(PeerSync).statusMsg.finalizedEpoch + let + state = peer.networkState(PeerSync) + pstate = peer.state(PeerSync) + remoteFork = state.getBeaconTime().slotOrZero.epoch() + if pstate.statusMsgV2.isSome(): + pstate.statusMsgV2.get.finalizedEpoch + else: + pstate.statusMsg.finalizedEpoch proc getStatusLastTime*(peer: Peer): chronos.Moment = ## Returns head slot for specific peer ``peer``. diff --git a/beacon_chain/nimbus_beacon_node.nim b/beacon_chain/nimbus_beacon_node.nim index 963e27598e..a7f663e5a1 100644 --- a/beacon_chain/nimbus_beacon_node.nim +++ b/beacon_chain/nimbus_beacon_node.nim @@ -146,13 +146,17 @@ func getVanityLogs(stdoutKind: StdoutLogKind): VanityLogs = onKnownBlsToExecutionChange: capellaBlink, onUpgradeToDeneb: denebColor, onUpgradeToElectra: electraColor, - onKnownCompoundingChange: electraBlink) + onKnownCompoundingChange: electraBlink, + onUpgradeToFulu: fuluColor, + onBlobParametersUpdate: fuluColor) of StdoutLogKind.NoColors: VanityLogs( onKnownBlsToExecutionChange: capellaMono, onUpgradeToDeneb: denebMono, onUpgradeToElectra: electraMono, - onKnownCompoundingChange: electraMono) + onKnownCompoundingChange: electraMono, + onUpgradeToFulu: fuluMono, + onBlobParametersUpdate: fuluMono) of StdoutLogKind.Json, StdoutLogKind.None: VanityLogs( onKnownBlsToExecutionChange: @@ -162,12 +166,16 @@ func getVanityLogs(stdoutKind: StdoutLogKind): VanityLogs = onUpgradeToElectra: (proc() = notice "🦒 Compounding is available 🦒"), onKnownCompoundingChange: - (proc() = notice "🦒 Compounding is activated 🦒")) + (proc() = notice "🦒 Compounding is activated 🦒"), + onUpgradeToFulu: + (proc() = notice "🐅 Blobs columnized 🐅"), + onBlobParametersUpdate: + (proc() = notice "🐅 Blob parameters updated 🐅")) func getVanityMascot(consensusFork: ConsensusFork): string = case consensusFork of ConsensusFork.Fulu: - "❓" + "🐅" of ConsensusFork.Electra: "🦒" of ConsensusFork.Deneb: @@ -389,7 +397,7 @@ proc initFullNode( let quarantine = newClone( - Quarantine.init()) + Quarantine.init(dag.cfg)) attestationPool = newClone(AttestationPool.init( dag, quarantine, onPhase0AttestationReceived, onSingleAttestationReceived)) @@ -402,7 +410,7 @@ proc initFullNode( onProposerSlashingAdded, onPhase0AttesterSlashingAdded, onElectraAttesterSlashingAdded)) blobQuarantine = newClone(BlobQuarantine.init( - dag.cfg, onBlobSidecarAdded)) + dag.cfg, dag.db.getQuarantineDB(), 10, onBlobSidecarAdded)) dataColumnQuarantine = newClone(DataColumnQuarantine.init()) supernode = node.config.peerdasSupernode localCustodyGroups = @@ -449,9 +457,10 @@ proc initFullNode( await blockProcessor[].addBlock(MsgSource.gossip, signedBlock, bres, maybeFinalized = maybeFinalized) else: - # We don't have all the blobs for this block, so we have - # to put it in blobless quarantine. - if not quarantine[].addBlobless(dag.finalizedHead.slot, forkyBlck): + # We don't have all the sidecars for this block, so we have + # to put it to the quarantine. 
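The `addSidecarless` call that follows replaces the old blobless-only quarantine path with a sidecar-agnostic one. As a rough illustration only, not the actual `Quarantine` implementation and with made-up, simplified types, the idea is a root-keyed holding area for blocks whose blob or column sidecars have not all arrived yet, pruned once the chain finalizes past them:

```nim
import std/tables

type
  BlockRoot = array[32, byte]          # stand-in for Eth2Digest
  PendingBlock = object                # stand-in for a signed beacon block
    root: BlockRoot
    slot: uint64

  SidecarlessQuarantine = object
    blocks: Table[BlockRoot, PendingBlock]

proc addSidecarless(q: var SidecarlessQuarantine, blck: PendingBlock) =
  ## Park a block until all of its sidecars have been downloaded.
  q.blocks[blck.root] = blck

proc popSidecarless(q: var SidecarlessQuarantine,
                    root: BlockRoot): (bool, PendingBlock) =
  ## Retrieve (and forget) a parked block once its sidecars are complete.
  var blck: PendingBlock
  if q.blocks.pop(root, blck):
    (true, blck)
  else:
    (false, blck)

proc pruneAfterFinalization(q: var SidecarlessQuarantine,
                            finalizedSlot: uint64) =
  ## Drop anything at or below the finalized slot; it is no longer needed.
  var stale: seq[BlockRoot]
  for root, blck in q.blocks:
    if blck.slot <= finalizedSlot:
      stale.add root
  for root in stale:
    q.blocks.del root
```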
+ if not quarantine[].addSidecarless( + dag.finalizedHead.slot, forkyBlck): err(VerifierError.UnviableFork) else: err(VerifierError.MissingParent) @@ -1416,6 +1425,7 @@ proc maybeUpdateActionTrackerNextEpoch( shufflingRef = node.dag.getShufflingRef(node.dag.head, nextEpoch, false).valueOr: # epochRefFallback() won't work in this case either return + # using the separate method of proposer indices calculation in Fulu nextEpochProposers = get_beacon_proposer_indices( forkyState.data, shufflingRef.shuffled_active_validator_indices, nextEpoch) @@ -1641,7 +1651,9 @@ proc onSlotEnd(node: BeaconNode, slot: Slot) {.async.} = node.dag.finalizedHead.slot.epoch() ) node.processor.blobQuarantine[].pruneAfterFinalization( - node.dag.finalizedHead.slot.epoch()) + node.dag.finalizedHead.slot.epoch(), node.dag.needsBackfill()) + node.processor.quarantine[].pruneAfterFinalization( + node.dag.finalizedHead.slot.epoch(), node.dag.needsBackfill()) # Delay part of pruning until latency critical duties are done. # The other part of pruning, `pruneBlocksDAG`, is done eagerly. diff --git a/beacon_chain/nimbus_binary_common.nim b/beacon_chain/nimbus_binary_common.nim index 0d90827673..3cad5f7d6b 100644 --- a/beacon_chain/nimbus_binary_common.nim +++ b/beacon_chain/nimbus_binary_common.nim @@ -298,7 +298,7 @@ proc runSlotLoop*[T](node: T, startTime: BeaconTime, while true: # Start by waiting for the time when the slot starts. Sleeping relinquishes - # control to other tasks which may or may not finish within the alotted + # control to other tasks which may or may not finish within the allotted # time, so below, we need to be wary that the ship might have sailed # already. await sleepAsync(timeToNextSlot) diff --git a/beacon_chain/nimbus_signing_node.nim b/beacon_chain/nimbus_signing_node.nim index 819b3940f8..61bb0b7618 100644 --- a/beacon_chain/nimbus_signing_node.nim +++ b/beacon_chain/nimbus_signing_node.nim @@ -341,7 +341,7 @@ proc asyncInit(sn: SigningNodeRef) {.async: (raises: [SigningNodeError]).} = notice "Launching signing node", version = fullVersionStr, cmdParams = commandLineParams(), config = sn.config - info "Initializaing validators", path = sn.config.validatorsDir() + info "Initializing validators", path = sn.config.validatorsDir() sn.loadKeystores() if sn.attachedValidators.count() == 0: diff --git a/beacon_chain/rpc/rest_beacon_api.nim b/beacon_chain/rpc/rest_beacon_api.nim index 232a6b694b..0a08497431 100644 --- a/beacon_chain/rpc/rest_beacon_api.nim +++ b/beacon_chain/rpc/rest_beacon_api.nim @@ -132,6 +132,62 @@ proc toString*(kind: ValidatorFilterKind): string = of ValidatorFilterKind.WithdrawalDone: "withdrawal_done" +proc handleDataSidecarRequest*[ + InvalidIndexValueError: static string, + DataSidecarsType: typedesc[List]; + getDataSidecar: static proc +]( + node: BeaconNode, + mediaType: Result[MediaType, cstring], + block_id: Result[BlockIdent, cstring], + indices: Result[seq[uint64], cstring], + maxDataSidecars: uint64): RestApiResponse = + let + contentType = mediaType.valueOr: + return RestApiResponse.jsonError( + Http406, ContentNotAcceptableError) + blockIdent = block_id.valueOr: + return RestApiResponse.jsonError( + Http400, InvalidBlockIdValueError, $error) + bid = node.getBlockId(blockIdent).valueOr: + return RestApiResponse.jsonError( + Http404, BlockNotFoundError) + indexFilter = (block: indices.valueOr: + return RestApiResponse.jsonError( + Http400, InvalidIndexValueError, $error)).toHashSet() + + data = newClone(default(DataSidecarsType)) + for dataIndex in 0'u64 ..< 
maxDataSidecars: + if indexFilter.len > 0 and dataIndex notin indexFilter: + continue + let dataSidecar = new DataSidecarsType.T + if getDataSidecar(node.dag.db, bid.root, dataIndex, dataSidecar[]): + discard data[].add dataSidecar[] + + if contentType == sszMediaType: + RestApiResponse.sszResponse( + data[], headers = [("eth-consensus-version", + node.dag.cfg.consensusForkAtEpoch(bid.slot.epoch).toString())]) + elif contentType == jsonMediaType: + RestApiResponse.jsonResponseDataSidecars( + data[].asSeq(), node.dag.cfg.consensusForkAtEpoch(bid.slot.epoch), + Opt.some(node.dag.is_optimistic(bid)), node.dag.isFinalized(bid)) + else: + RestApiResponse.jsonError(Http500, InvalidAcceptError) + +proc handleDataSidecarRequest*[ + InvalidIndexValueError: static string, + DataSidecarsType: typedesc[List]; + getDataSidecar: static proc +]( + node: BeaconNode, + mediaType: Result[MediaType, cstring], + block_id: Result[BlockIdent, cstring], + indices: Result[seq[uint64], cstring]): RestApiResponse = + handleDataSidecarRequest[ + InvalidIndexValueError, DataSidecarsType, getDataSidecar + ](node, mediaType, block_id, indices, DataSidecarsType.maxLen.uint64) + proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) = # https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4881.md router.api2(MethodGet, "/eth/v1/beacon/deposit_snapshot") do ( @@ -333,6 +389,44 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) = ) RestApiResponse.jsonError(Http404, StateNotFoundError) + proc getValidatorIdentities( + node: BeaconNode, + bslot: BlockSlotId, + validatorIds: openArray[ValidatorIdent] + ): RestApiResponse = + node.withStateForBlockSlotId(bslot): + let + indices = node.getIndices(validatorIds, state).valueOr: + return RestApiResponse.jsonError(error) + response = + block: + var res: seq[RestValidatorIdentity] + if len(indices) == 0: + # Case when `len(indices) == 0 and len(validatorIds) != 0` means + # that we can't find validator identifiers in state, so we should + # return empty response. + if len(validatorIds) == 0: + # There are no indices, so we're going to filter all the + # validators. 
+ for index, validator in getStateField(state, validators): + res.add(RestValidatorIdentity.init(ValidatorIndex(index), + validator.pubkeyData.pubkey(), + validator.activation_epoch)) + else: + for index in indices: + let + validator = getStateField(state, validators).item(index) + res.add(RestValidatorIdentity.init(index, + validator.pubkeyData.pubkey(), + validator.activation_epoch)) + res + return RestApiResponse.jsonResponseFinalized( + response, + node.getStateOptimistic(state), + node.dag.isFinalized(bslot.bid) + ) + RestApiResponse.jsonError(Http404, StateNotFoundError) + proc getBalances( node: BeaconNode, bslot: BlockSlotId, @@ -557,6 +651,31 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) = return RestApiResponse.jsonError(Http404, StateNotFoundError, $error) getBalances(node, bslot, validatorIds) + # https://ethereum.github.io/beacon-APIs/#/Beacon/postStateValidatorIdentities + router.metricsApi2( + MethodPost, "/eth/v1/beacon/states/{state_id}/validator_identities", + {RestServerMetricsType.Status, Response}) do ( + state_id: StateIdent, contentBody: Option[ContentBody]) -> RestApiResponse: + let + validatorIds = + block: + if contentBody.isNone(): + return RestApiResponse.jsonError(Http400, EmptyRequestBodyError) + let body = contentBody.get() + decodeBody(seq[ValidatorIdent], body).valueOr: + return RestApiResponse.jsonError( + Http400, InvalidValidatorIdValueError, $error) + sid = state_id.valueOr: + return RestApiResponse.jsonError(Http400, InvalidStateIdValueError, + $error) + bslot = node.getBlockSlotId(sid).valueOr: + if sid.kind == StateQueryKind.Root: + # TODO (cheatfate): Its impossible to retrieve state by `state_root` + # in current version of database. + return RestApiResponse.jsonError(Http500, NoImplementationError) + return RestApiResponse.jsonError(Http404, StateNotFoundError, $error) + getValidatorIdentities(node, bslot, validatorIds) + # https://ethereum.github.io/beacon-APIs/#/Beacon/getEpochCommittees router.metricsApi2( MethodGet, "/eth/v1/beacon/states/{state_id}/committees", @@ -1684,55 +1803,19 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) = # https://ethereum.github.io/beacon-APIs/?urls.primaryName=v2.4.2#/Beacon/getBlobSidecars # https://github.com/ethereum/beacon-APIs/blob/v2.4.2/apis/beacon/blob_sidecars/blob_sidecars.yaml router.api2(MethodGet, "/eth/v1/beacon/blob_sidecars/{block_id}") do ( - block_id: BlockIdent, indices: seq[uint64]) -> RestApiResponse: - let - blockIdent = block_id.valueOr: - return RestApiResponse.jsonError(Http400, InvalidBlockIdValueError, - $error) - bid = node.getBlockId(blockIdent).valueOr: - return RestApiResponse.jsonError(Http404, BlockNotFoundError) - - contentType = block: - let res = preferredContentType(jsonMediaType, - sszMediaType) - if res.isErr(): - return RestApiResponse.jsonError(Http406, ContentNotAcceptableError) - res.get() - + block_id: BlockIdent, indices: seq[uint64]) -> RestApiResponse: # https://github.com/ethereum/beacon-APIs/blob/v2.4.2/types/deneb/blob_sidecar.yaml#L2-L28 # The merkleization limit of the list is `MAX_BLOB_COMMITMENTS_PER_BLOCK`, # the serialization limit is configurable and is: # - `MAX_BLOBS_PER_BLOCK` from Deneb onward # - `MAX_BLOBS_PER_BLOCK_ELECTRA` from Electra. 
- let data = newClone(default( - List[BlobSidecar, Limit MAX_BLOB_COMMITMENTS_PER_BLOCK])) - - if indices.isErr: - return RestApiResponse.jsonError(Http400, - InvalidSidecarIndexValueError) - - let indexFilter = indices.get.toHashSet - - for blobIndex in 0'u64 ..< node.dag.cfg.MAX_BLOBS_PER_BLOCK_ELECTRA: - if indexFilter.len > 0 and blobIndex notin indexFilter: - continue - - var blobSidecar = new BlobSidecar - - if node.dag.db.getBlobSidecar(bid.root, blobIndex, blobSidecar[]): - discard data[].add blobSidecar[] - - if contentType == sszMediaType: - RestApiResponse.sszResponse( - data[], headers = [("eth-consensus-version", - node.dag.cfg.consensusForkAtEpoch(bid.slot.epoch).toString())]) - elif contentType == jsonMediaType: - RestApiResponse.jsonResponseBlobSidecars( - data[].asSeq(), node.dag.cfg.consensusForkAtEpoch(bid.slot.epoch), - Opt.some(node.dag.is_optimistic(bid)), - node.dag.isFinalized(bid)) - else: - RestApiResponse.jsonError(Http500, InvalidAcceptError) + handleDataSidecarRequest[ + InvalidBlobSidecarIndexValueError, + List[BlobSidecar, Limit MAX_BLOB_COMMITMENTS_PER_BLOCK], + getBlobSidecar + ]( + node, preferredContentType(jsonMediaType, sszMediaType), + block_id, indices, node.dag.cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) # https://ethereum.github.io/beacon-APIs/?urls.primaryName=v3.1.0#/Beacon/getPendingDeposits router.metricsApi2( diff --git a/beacon_chain/rpc/rest_config_api.nim b/beacon_chain/rpc/rest_config_api.nim index f680dfab89..cf1c0c8dce 100644 --- a/beacon_chain/rpc/rest_config_api.nim +++ b/beacon_chain/rpc/rest_config_api.nim @@ -7,6 +7,7 @@ {.push raises: [].} +import std/algorithm, json, sequtils import stew/[byteutils, base10], chronicles import ".."/beacon_node, ".."/spec/forks, @@ -16,11 +17,27 @@ export rest_utils logScope: topics = "rest_config" +func cmpBPOconfig(x, y: BlobParameters): int = + cmp(x.EPOCH.distinctBase, y.EPOCH.distinctBase) + proc installConfigApiHandlers*(router: var RestRouter, node: BeaconNode) = template cfg(): auto = node.dag.cfg let cachedForkSchedule = RestApiResponse.prepareJsonResponse(getForkSchedule(cfg)) + # This has been intentionally copied and sorted in ascending order + # as the spec demands the endpoint to be sorted in this fashion. + # The spec says: + # There MUST NOT exist multiple blob schedule entries with the same epoch value. + # The maximum blobs per block limit for blob schedules entries MUST be less than + # or equal to `MAX_BLOB_COMMITMENTS_PER_BLOCK`. The blob schedule entries SHOULD + # be sorted by epoch in ascending order. The blob schedule MAY be empty. 
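The assignments that follow implement exactly that: the schedule is copied, sorted ascending by epoch, and rendered with decimal-string fields for the spec response. A standalone sketch of the same transformation, using a simplified `BlobParameters` stand-in and purely illustrative values rather than the real runtime-config types:

```nim
import std/[algorithm, sequtils]

type BlobParameters = object   # simplified stand-in for the config entry
  EPOCH: uint64
  MAX_BLOBS_PER_BLOCK: uint64

func cmpByEpoch(x, y: BlobParameters): int =
  cmp(x.EPOCH, y.EPOCH)

let
  # illustrative schedule values, listed newest-first here
  blobSchedule = @[
    BlobParameters(EPOCH: 2000, MAX_BLOBS_PER_BLOCK: 12),
    BlobParameters(EPOCH: 1000, MAX_BLOBS_PER_BLOCK: 9)]
  # ascending order by epoch, as the endpoint is expected to present it
  sortedSchedule = blobSchedule.sorted(cmp = cmpByEpoch)
  # decimal-string fields, matching the usual spec-endpoint formatting
  rendered = sortedSchedule.mapIt(
    (epoch: $it.EPOCH, maxBlobsPerBlock: $it.MAX_BLOBS_PER_BLOCK))

doAssert rendered[0].epoch == "1000"
```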
+ sortedBlobSchedule = cfg.BLOB_SCHEDULE.sorted(cmp=cmpBPOconfig) + restBlobSchedule = sortedBlobSchedule.mapIt(%*{ + "EPOCH": Base10.toString(uint64(it.EPOCH)), + "MAX_BLOBS_PER_BLOCK": Base10.toString(uint64(it.MAX_BLOBS_PER_BLOCK)) + }) + cachedConfigSpec = RestApiResponse.prepareJsonResponse( ( @@ -300,6 +317,8 @@ proc installConfigApiHandlers*(router: var RestRouter, node: BeaconNode) = Base10.toString(VALIDATOR_CUSTODY_REQUIREMENT.uint64), BALANCE_PER_ADDITIONAL_CUSTODY_GROUP: Base10.toString(BALANCE_PER_ADDITIONAL_CUSTODY_GROUP), + BLOB_SCHEDULE: + restBlobSchedule, # MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS: # Base10.toString(cfg.MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS), diff --git a/beacon_chain/rpc/rest_constants.nim b/beacon_chain/rpc/rest_constants.nim index 3b0e4f127b..8f240567fe 100644 --- a/beacon_chain/rpc/rest_constants.nim +++ b/beacon_chain/rpc/rest_constants.nim @@ -267,8 +267,10 @@ const "Failed to obtain fork information" InvalidTimestampValue* = "Invalid or missing timestamp value" - InvalidSidecarIndexValueError* = + InvalidBlobSidecarIndexValueError* = "Invalid blob index" + InvalidDataColumnSidecarIndexValueError* = + "Invalid data column index" InvalidBroadcastValidationType* = "Invalid broadcast_validation type value" PathNotFoundError* = diff --git a/beacon_chain/rpc/rest_debug_api.nim b/beacon_chain/rpc/rest_debug_api.nim index 1013c09e75..3ceb2bb6e7 100644 --- a/beacon_chain/rpc/rest_debug_api.nim +++ b/beacon_chain/rpc/rest_debug_api.nim @@ -1,5 +1,5 @@ # beacon_chain -# Copyright (c) 2021-2024 Status Research & Development GmbH +# Copyright (c) 2021-2025 Status Research & Development GmbH # Licensed and distributed under either of # * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT). # * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0). 
@@ -11,7 +11,7 @@ import std/sequtils import chronicles, metrics import ".."/beacon_node, ".."/spec/forks, - "."/[rest_utils, state_ttl_cache] + "."/[rest_beacon_api, rest_utils, state_ttl_cache] from ../fork_choice/proto_array import ProtoArrayItem, items @@ -20,13 +20,27 @@ export rest_utils logScope: topics = "rest_debug" proc installDebugApiHandlers*(router: var RestRouter, node: BeaconNode) = + # https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Debug/getDebugDataColumnSidecars + # https://github.com/ethereum/beacon-APIs/blob/v4.0.0-alpha.0/apis/debug/data_column_sidecars.yaml + router.api2( + MethodGet, "/eth/v1/debug/beacon/data_column_sidecars/{block_id}") do ( + block_id: BlockIdent, indices: seq[uint64]) -> RestApiResponse: + handleDataSidecarRequest[ + InvalidDataColumnSidecarIndexValueError, + List[DataColumnSidecar, NUMBER_OF_COLUMNS], + getDataColumnSidecar + ]( + node, preferredContentType(jsonMediaType, sszMediaType), + block_id, indices) + # https://ethereum.github.io/beacon-APIs/#/Debug/getState router.api2(MethodGet, "/eth/v1/debug/beacon/states/{state_id}") do ( state_id: StateIdent) -> RestApiResponse: RestApiResponse.jsonError( Http410, DeprecatedRemovalBeaconBlocksDebugStateV1) - # https://ethereum.github.io/beacon-APIs/#/Debug/getStateV2 + # https://ethereum.github.io/beacon-APIs/?urls.primaryName=v3.1.0#/Debug/getStateV2 + # https://github.com/ethereum/beacon-APIs/blob/v4.0.0-alpha.0/apis/debug/state.v2.yaml router.metricsApi2( MethodGet, "/eth/v2/debug/beacon/states/{state_id}", {RestServerMetricsType.Status, Response}) do ( @@ -53,7 +67,8 @@ proc installDebugApiHandlers*(router: var RestRouter, node: BeaconNode) = return if contentType == jsonMediaType: RestApiResponse.jsonResponseState( - state, node.getStateOptimistic(state)) + state, node.getStateOptimistic(state), + node.dag.isFinalized(bslot.bid)) elif contentType == sszMediaType: let headers = [("eth-consensus-version", state.kind.toString())] withState(state): diff --git a/beacon_chain/rpc/rest_validator_api.nim b/beacon_chain/rpc/rest_validator_api.nim index 930c712f7f..9bb9265c90 100644 --- a/beacon_chain/rpc/rest_validator_api.nim +++ b/beacon_chain/rpc/rest_validator_api.nim @@ -213,7 +213,7 @@ proc installValidatorApiHandlers*(router: var RestRouter, node: BeaconNode) = # If the requested validator index was not valid within this old # state, it's not possible that it will sit on the sync committee. # Since this API must omit results for validators that don't have - # duties, we can simply ingnore this requested index. + # duties, we can simply ignore this requested index. # (we won't bother to validate it against a more recent state). 
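The debug router above gains `/eth/v1/debug/beacon/data_column_sidecars/{block_id}`, reusing the shared `handleDataSidecarRequest` helper introduced earlier in this diff. A minimal client-side sketch of querying it; the host and the default REST port 5052 are assumptions for illustration, not part of this change:

```nim
import std/httpclient

# Ask for JSON; SSZ is also negotiable via the Accept header, per the
# handler wiring above. `indices` filters which column indices are returned.
let client = newHttpClient()
client.headers = newHttpHeaders({"Accept": "application/json"})
let resp = client.get(
  "http://127.0.0.1:5052/eth/v1/debug/beacon/data_column_sidecars/head?indices=0&indices=1")
echo resp.status     # expect "200 OK" when the block and its columns are known
echo resp.body.len   # size of the JSON payload
```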
continue diff --git a/beacon_chain/spec/beacon_time.nim b/beacon_chain/spec/beacon_time.nim index da0d73a188..ad61658b94 100644 --- a/beacon_chain/spec/beacon_time.nim +++ b/beacon_chain/spec/beacon_time.nim @@ -150,7 +150,7 @@ const # https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.9/specs/altair/light-client/p2p-interface.md#sync-committee lightClientFinalityUpdateSlotOffset* = TimeDiff(nanoseconds: NANOSECONDS_PER_SLOT.int64 div INTERVALS_PER_SLOT) - # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/altair/light-client/p2p-interface.md#sync-committee + # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/altair/light-client/p2p-interface.md#sync-committee lightClientOptimisticUpdateSlotOffset* = TimeDiff(nanoseconds: NANOSECONDS_PER_SLOT.int64 div INTERVALS_PER_SLOT) diff --git a/beacon_chain/spec/beaconstate.nim b/beacon_chain/spec/beaconstate.nim index a9208bb6e5..0792e25144 100644 --- a/beacon_chain/spec/beaconstate.nim +++ b/beacon_chain/spec/beaconstate.nim @@ -2361,7 +2361,8 @@ func upgrade_to_fulu*( earliest_consolidation_epoch: pre.earliest_consolidation_epoch, pending_deposits: pre.pending_deposits, pending_partial_withdrawals: pre.pending_partial_withdrawals, - pending_consolidations: pre.pending_consolidations + pending_consolidations: pre.pending_consolidations, + proposer_lookahead: initialize_proposer_lookahead(pre, cache) ) post diff --git a/beacon_chain/spec/datatypes/base.nim b/beacon_chain/spec/datatypes/base.nim index 60f24ef5b6..b7cdc14ab4 100644 --- a/beacon_chain/spec/datatypes/base.nim +++ b/beacon_chain/spec/datatypes/base.nim @@ -74,7 +74,7 @@ export tables, results, endians2, json_serialization, sszTypes, beacon_time, crypto, digest, presets -const SPEC_VERSION* = "1.6.0-alpha.0" +const SPEC_VERSION* = "1.6.0-alpha.3" ## Spec version we're aiming to be compatible with, right now const diff --git a/beacon_chain/spec/datatypes/capella.nim b/beacon_chain/spec/datatypes/capella.nim index cad86e72c1..769aabc0b4 100644 --- a/beacon_chain/spec/datatypes/capella.nim +++ b/beacon_chain/spec/datatypes/capella.nim @@ -28,7 +28,7 @@ import export json_serialization, base const - # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/capella/light-client/sync-protocol.md#constants + # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/capella/light-client/sync-protocol.md#constants # This index is rooted in `BeaconBlockBody`. # The first member (`randao_reveal`) is 16, subsequent members +1 each. # If there are ever more than 16 members in `BeaconBlockBody`, indices change! 
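`upgrade_to_fulu` above now seeds the new `proposer_lookahead` field via `initialize_proposer_lookahead`, and later hunks in this diff advance it one epoch at a time in `process_proposer_lookahead`. A self-contained sketch of that layout and the per-epoch shift, using plain arrays and mainnet constants instead of the `HashArray`/`BeaconState` machinery (the refill values are placeholders):

```nim
const
  SLOTS_PER_EPOCH = 32
  MIN_SEED_LOOKAHEAD = 1
  LookaheadLen = (MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH

proc shiftAndRefill(lookahead: var array[LookaheadLen, uint64],
                    newProposers: openArray[uint64]) =
  ## Drop the oldest epoch of proposer indices and append the newly computed
  ## epoch, mirroring the shape of `process_proposer_lookahead` in this diff.
  doAssert newProposers.len == SLOTS_PER_EPOCH
  let lastEpochStart = LookaheadLen - SLOTS_PER_EPOCH
  for i in 0 ..< lastEpochStart:
    lookahead[i] = lookahead[i + SLOTS_PER_EPOCH]
  for i in 0 ..< SLOTS_PER_EPOCH:
    lookahead[lastEpochStart + i] = newProposers[i]

var
  lookahead: array[LookaheadLen, uint64]           # current + next epoch
  nextEpochProposers: array[SLOTS_PER_EPOCH, uint64]  # placeholder indices

shiftAndRefill(lookahead, nextEpochProposers)
echo lookahead[0]   # first slot of what is now the current epoch
```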
diff --git a/beacon_chain/spec/datatypes/fulu.nim b/beacon_chain/spec/datatypes/fulu.nim index c768438b2e..45a7f21059 100644 --- a/beacon_chain/spec/datatypes/fulu.nim +++ b/beacon_chain/spec/datatypes/fulu.nim @@ -92,8 +92,6 @@ type CellIndex* = uint64 CustodyIndex* = uint64 - -type DataColumn* = List[KzgCell, Limit(MAX_BLOB_COMMITMENTS_PER_BLOCK)] DataColumnIndices* = List[ColumnIndex, Limit(NUMBER_OF_COLUMNS)] @@ -131,7 +129,7 @@ type CgcCount* = uint8 - # https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/p2p-interface.md#metadata + # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/specs/fulu/p2p-interface.md#enr-structure MetaData* = object seq_number*: uint64 attnets*: AttnetBits @@ -385,6 +383,11 @@ type HashList[PendingPartialWithdrawal, Limit PENDING_PARTIAL_WITHDRAWALS_LIMIT] pending_consolidations*: HashList[PendingConsolidation, Limit PENDING_CONSOLIDATIONS_LIMIT] + + # [New in Fulu:EIP7917] + proposer_lookahead*: + HashArray[Limit ((MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH), uint64] + ## [New in Electra:EIP7251] # TODO Careful, not nil analysis is broken / incomplete and the semantics will diff --git a/beacon_chain/spec/eth2_apis/eth2_rest_serialization.nim b/beacon_chain/spec/eth2_apis/eth2_rest_serialization.nim index b6201dc223..f57a63ba7f 100644 --- a/beacon_chain/spec/eth2_apis/eth2_rest_serialization.nim +++ b/beacon_chain/spec/eth2_apis/eth2_rest_serialization.nim @@ -50,6 +50,7 @@ RestJson.useDefaultSerializationFor( Checkpoint, ConsolidationRequest, ContributionAndProof, + DataColumnSidecar, DataEnclosedObject, DataMetaEnclosedObject, DataOptimisticAndFinalizedObject, @@ -152,6 +153,7 @@ RestJson.useDefaultSerializationFor( RestSyncCommitteeSubscription, RestSyncInfo, RestValidator, + RestValidatorIdentity, RestValidatorBalance, SPDIR, SPDIR_Meta, @@ -268,7 +270,6 @@ RestJson.useDefaultSerializationFor( fulu_mev.BlindedBeaconBlock, fulu_mev.BlindedBeaconBlockBody, fulu_mev.BuilderBid, - fulu_mev.ExecutionPayloadAndBlobsBundle, fulu_mev.SignedBlindedBeaconBlock, fulu_mev.SignedBuilderBid, phase0.AggregateAndProof, @@ -390,8 +391,7 @@ type MevDecodeTypes* = GetHeaderResponseElectra | GetHeaderResponseFulu | - SubmitBlindedBlockResponseElectra | - SubmitBlindedBlockResponseFulu + SubmitBlindedBlockResponseElectra DecodeTypes* = DataEnclosedObject | @@ -642,9 +642,9 @@ proc jsonResponseBlock*(t: typedesc[RestApiResponse], default(seq[byte]) RestApiResponse.response(res, Http200, "application/json", headers = headers) -proc jsonResponseBlobSidecars*( +proc jsonResponseDataSidecars*( t: typedesc[RestApiResponse], - data: openArray[BlobSidecar], + data: openArray[BlobSidecar | DataColumnSidecar], version: ConsensusFork, execOpt: Opt[bool], finalized: bool @@ -669,7 +669,8 @@ proc jsonResponseBlobSidecars*( proc jsonResponseState*(t: typedesc[RestApiResponse], data: ForkedHashedBeaconState, - execOpt: Opt[bool]): RestApiResponse = + execOpt: Opt[bool], + finalized: bool): RestApiResponse = let headers = [("eth-consensus-version", data.kind.toString())] res = @@ -680,6 +681,7 @@ proc jsonResponseState*(t: typedesc[RestApiResponse], writer.writeField("version", data.kind.toString()) if execOpt.isSome(): writer.writeField("execution_optimistic", execOpt.get()) + writer.writeField("finalized", finalized) withState(data): writer.writeField("data", forkyState.data) writer.endRecord() @@ -1406,11 +1408,10 @@ proc writeValue*( ) {.raises: [IOError].} = writeValue(writer, hexOriginal(distinctBase(value))) -## KzgCommitment and KzgProof; both 
are the same type, but this makes it -## explicit. +## KzgCommitment, KzgProof, and KzgCell ## https://github.com/ethereum/beacon-APIs/blob/v2.4.2/types/primitive.yaml#L135-L146 proc readValue*(reader: var JsonReader[RestJson], - value: var (KzgCommitment|KzgProof)) {. + value: var (KzgCommitment|KzgProof|KzgCell)) {. raises: [IOError, SerializationError].} = try: hexToByteArray(reader.readValue(string), distinctBase(value.bytes)) @@ -1419,7 +1420,7 @@ proc readValue*(reader: var JsonReader[RestJson], "KzgCommitment value should be a valid hex string") proc writeValue*( - writer: var JsonWriter[RestJson], value: KzgCommitment | KzgProof + writer: var JsonWriter[RestJson], value: KzgCommitment | KzgProof | KzgCell ) {.raises: [IOError].} = writeValue(writer, hexOriginal(distinctBase(value.bytes))) @@ -2858,7 +2859,13 @@ proc readValue*(reader: var JsonReader[RestJson], value: var VCRuntimeConfig) {. raises: [SerializationError, IOError].} = for fieldName in readObjectFields(reader): - let fieldValue = reader.readValue(string) + let fieldValue = + case toLowerAscii(fieldName) + of "blob_schedule": + string(reader.readValue(JsonString)) + else: + reader.readValue(string) + if value.hasKeyOrPut(toUpperAscii(fieldName), fieldValue): let msg = "Multiple `" & fieldName & "` fields found" reader.raiseUnexpectedField(msg, "VCRuntimeConfig") diff --git a/beacon_chain/spec/eth2_apis/rest_types.nim b/beacon_chain/spec/eth2_apis/rest_types.nim index be7553901e..8f8fdf643d 100644 --- a/beacon_chain/spec/eth2_apis/rest_types.nim +++ b/beacon_chain/spec/eth2_apis/rest_types.nim @@ -223,6 +223,11 @@ type status*: string validator*: Validator + RestValidatorIdentity* = object + index*: ValidatorIndex + pubkey*: ValidatorPubKey + activation_epoch*: Epoch + RestBlockHeader* = object slot*: Slot proposer_index*: ValidatorIndex @@ -545,7 +550,6 @@ type GetHeaderResponseElectra* = DataVersionEnclosedObject[electra_mev.SignedBuilderBid] GetHeaderResponseFulu* = DataVersionEnclosedObject[fulu_mev.SignedBuilderBid] SubmitBlindedBlockResponseElectra* = DataVersionEnclosedObject[electra_mev.ExecutionPayloadAndBlobsBundle] - SubmitBlindedBlockResponseFulu* = DataVersionEnclosedObject[fulu_mev.ExecutionPayloadAndBlobsBundle] RestNodeValidity* {.pure.} = enum valid = "VALID", @@ -775,6 +779,12 @@ func init*(t: typedesc[RestValidator], index: ValidatorIndex, RestValidator(index: index, balance: Base10.toString(balance), status: status, validator: validator) +func init*(t: typedesc[RestValidatorIdentity], index: ValidatorIndex, + pubkey: ValidatorPubKey, + activation_epoch: Epoch): RestValidatorIdentity = + RestValidatorIdentity(index: index, pubkey: pubkey, + activation_epoch: activation_epoch) + func init*(t: typedesc[RestValidatorBalance], index: ValidatorIndex, balance: Gwei): RestValidatorBalance = RestValidatorBalance(index: index, balance: Base10.toString(balance)) diff --git a/beacon_chain/spec/forks.nim b/beacon_chain/spec/forks.nim index 2824582126..72adad2821 100644 --- a/beacon_chain/spec/forks.nim +++ b/beacon_chain/spec/forks.nim @@ -19,6 +19,8 @@ import ./datatypes/[phase0, altair, bellatrix, capella, deneb, electra, fulu], ./mev/[bellatrix_mev, capella_mev, deneb_mev, electra_mev, fulu_mev] +from std/sequtils import mapIt + export extras, block_id, phase0, altair, bellatrix, capella, deneb, electra, fulu, eth2_merkleization, eth2_ssz_serialization, forks_light_client, @@ -353,6 +355,7 @@ type deneb*: ForkDigest electra*: ForkDigest fulu*: ForkDigest + bpos*: seq[(Epoch, ConsensusFork, ForkDigest)] template 
kind*( x: typedesc[ @@ -486,8 +489,7 @@ template kind*( fulu.MsgTrustedSignedBeaconBlock | fulu.TrustedSignedBeaconBlock | fulu_mev.SignedBlindedBeaconBlock | - fulu_mev.SignedBuilderBid | - fulu_mev.ExecutionPayloadAndBlobsBundle]): ConsensusFork = + fulu_mev.SignedBuilderBid]): ConsensusFork = ConsensusFork.Fulu template BeaconState*(kind: static ConsensusFork): auto = @@ -1073,6 +1075,16 @@ func setStateRoot*(x: var ForkedHashedBeaconState, root: Eth2Digest) = withState(x): forkyState.root = root {.pop.} +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/specs/fulu/beacon-chain.md#new-get_blob_parameters +func get_blob_parameters*(cfg: RuntimeConfig, epoch: Epoch): BlobParameters = + ## Return the blob parameters at a given epoch. + for entry in cfg.BLOB_SCHEDULE: + if epoch >= entry.EPOCH: + return entry + BlobParameters( + EPOCH: cfg.ELECTRA_FORK_EPOCH, + MAX_BLOBS_PER_BLOCK: cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + func consensusForkEpoch*( cfg: RuntimeConfig, consensusFork: ConsensusFork): Epoch = case consensusFork @@ -1129,6 +1141,9 @@ func consensusForkForDigest*( elif forkDigest == forkDigests.phase0: ok ConsensusFork.Phase0 else: + for (epoch, consensusFork, bpoForkDigest) in forkDigests.bpos: + if forkDigest == bpoForkDigest: + return ok consensusFork err() func atConsensusFork*( @@ -1151,7 +1166,17 @@ func atConsensusFork*( template atEpoch*( forkDigests: ForkDigests, epoch: Epoch, cfg: RuntimeConfig): ForkDigest = - forkDigests.atConsensusFork(cfg.consensusForkAtEpoch(epoch)) + if epoch >= cfg.FULU_FORK_EPOCH: + var res: Opt[ForkDigest] + for (bpoEpoch, _, forkDigest) in forkDigests.bpos: + if epoch >= bpoEpoch: + res = Opt[ForkDigest].ok(forkDigest) + break + res.valueOr: + # In BPO-compatible fork, without BPOs + forkDigests.atConsensusFork(cfg.consensusForkAtEpoch(epoch)) + else: + forkDigests.atConsensusFork(cfg.consensusForkAtEpoch(epoch)) template asSigned*( x: ForkedMsgTrustedSignedBeaconBlock | @@ -1552,8 +1577,15 @@ func forkVersionAtEpoch*(cfg: RuntimeConfig, epoch: Epoch): Version = of ConsensusFork.Phase0: cfg.GENESIS_FORK_VERSION func nextForkEpochAtEpoch*(cfg: RuntimeConfig, epoch: Epoch): Epoch = + ## Used to construct the eth2 field of ENRs case cfg.consensusForkAtEpoch(epoch) - of ConsensusFork.Fulu: FAR_FUTURE_EPOCH + of ConsensusFork.Fulu: + var res = FAR_FUTURE_EPOCH + for entry in cfg.BLOB_SCHEDULE: + if epoch >= entry.EPOCH: + break + res = entry.EPOCH + res of ConsensusFork.Electra: cfg.FULU_FORK_EPOCH of ConsensusFork.Deneb: cfg.ELECTRA_FORK_EPOCH of ConsensusFork.Capella: cfg.DENEB_FORK_EPOCH @@ -1677,6 +1709,30 @@ func compute_fork_digest*(current_version: Version, compute_fork_data_root( current_version, genesis_validators_root).data.toOpenArray(0, 3) +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/specs/fulu/beacon-chain.md#modified-compute_fork_digest +func compute_fork_digest_fulu*( + cfg: RuntimeConfig, genesis_validators_root: Eth2Digest, epoch: Epoch): + ForkDigest = + ## Return the 4-byte fork digest for the ``version`` and + ## ``genesis_validators_root`` XOR'd with the hash of the blob parameters for + ## ``epoch``. + ## + ## This is a digest primarily used for domain separation on the p2p layer. + ## 4-bytes suffices for practical separation of forks/chains. + let + fork_version = forkVersionAtEpoch(cfg, epoch) + base_digest = compute_fork_data_root(fork_version, genesis_validators_root) + blob_parameters = get_blob_parameters(cfg, epoch) + + var bpo_buf: array[16, byte] + bpo_buf[0 .. 
7] = toBytesLE(distinctBase(blob_parameters.EPOCH)) + bpo_buf[8 .. 15] = toBytesLE(blob_parameters.MAX_BLOBS_PER_BLOCK) + let bpo_digest = eth2digest(bpo_buf) + var res: array[4, byte] + for i in 0 ..< static(len(res)): + res[i] = base_digest.data[i] xor bpo_digest.data[i] + ForkDigest(res) + func init*(T: type ForkDigests, cfg: RuntimeConfig, genesis_validators_root: Eth2Digest): T = @@ -1695,7 +1751,13 @@ func init*(T: type ForkDigests, electra: compute_fork_digest(cfg.ELECTRA_FORK_VERSION, genesis_validators_root), fulu: - compute_fork_digest(cfg.FULU_FORK_VERSION, genesis_validators_root) + compute_fork_digest(cfg.FULU_FORK_VERSION, genesis_validators_root), + bpos: mapIt( + cfg.BLOB_SCHEDULE, + ( + it.EPOCH, + consensusForkAtEpoch(cfg, it.EPOCH), + compute_fork_digest_fulu(cfg, genesis_validators_root, it.EPOCH))) ) func toBlockId*(header: BeaconBlockHeader): BlockId = diff --git a/beacon_chain/spec/helpers.nim b/beacon_chain/spec/helpers.nim index 98a6447dcb..fd2d740ea1 100644 --- a/beacon_chain/spec/helpers.nim +++ b/beacon_chain/spec/helpers.nim @@ -380,7 +380,7 @@ func contextEpoch*(bootstrap: ForkyLightClientBootstrap): Epoch = # https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.5/specs/altair/light-client/p2p-interface.md#lightclientupdatesbyrange # https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.5/specs/altair/light-client/p2p-interface.md#getlightclientfinalityupdate -# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/altair/light-client/p2p-interface.md#getlightclientoptimisticupdate +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/altair/light-client/p2p-interface.md#getlightclientoptimisticupdate func contextEpoch*(update: SomeForkyLightClientUpdate): Epoch = update.attested_header.beacon.slot.epoch diff --git a/beacon_chain/spec/mev/fulu_mev.nim b/beacon_chain/spec/mev/fulu_mev.nim index b8f727cfa1..5d208dda20 100644 --- a/beacon_chain/spec/mev/fulu_mev.nim +++ b/beacon_chain/spec/mev/fulu_mev.nim @@ -70,11 +70,6 @@ type message*: BlindedBeaconBlock signature*: ValidatorSig - # https://github.com/ethereum/builder-specs/blob/v0.5.0/specs/deneb/builder.md#executionpayloadandblobsbundle - ExecutionPayloadAndBlobsBundle* = object - execution_payload*: ExecutionPayload - blobs_bundle*: BlobsBundle - # Not spec, but suggested by spec BlindedExecutionPayloadAndBlobsBundle* = object execution_payload_header*: ExecutionPayloadHeader diff --git a/beacon_chain/spec/mev/rest_fulu_mev_calls.nim b/beacon_chain/spec/mev/rest_fulu_mev_calls.nim index 61bd649bcd..f492e7d856 100644 --- a/beacon_chain/spec/mev/rest_fulu_mev_calls.nim +++ b/beacon_chain/spec/mev/rest_fulu_mev_calls.nim @@ -38,7 +38,7 @@ proc getHeaderFulu*( proc submitBlindedBlockPlain*( body: fulu_mev.SignedBlindedBeaconBlock ): RestPlainResponse {. 
- rest, endpoint: "/eth/v1/builder/blinded_blocks", + rest, endpoint: "/eth/v2/builder/blinded_blocks", meth: MethodPost, connection: {Dedicated, Close}.} ## https://github.com/ethereum/builder-specs/blob/v0.4.0/apis/builder/blinded_blocks.yaml diff --git a/beacon_chain/spec/network.nim b/beacon_chain/spec/network.nim index 758788bb76..b69a50c630 100644 --- a/beacon_chain/spec/network.nim +++ b/beacon_chain/spec/network.nim @@ -15,7 +15,7 @@ export base const # https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/phase0/p2p-interface.md#topics-and-messages - # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/capella/p2p-interface.md#topics-and-messages + # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/capella/p2p-interface.md#topics-and-messages topicBeaconBlocksSuffix = "beacon_block/ssz_snappy" topicVoluntaryExitsSuffix = "voluntary_exit/ssz_snappy" topicProposerSlashingsSuffix = "proposer_slashing/ssz_snappy" @@ -44,6 +44,7 @@ const enrAttestationSubnetsField* = "attnets" enrSyncSubnetsField* = "syncnets" enrCustodySubnetCountField* = "cgc" + enrNextForkDigestField* = "nfd" enrForkIdField* = "eth2" template eth2Prefix(forkDigest: ForkDigest): string = @@ -121,7 +122,7 @@ func compute_subnet_for_blob_sidecar*( func compute_subnet_for_data_column_sidecar*(column_index: ColumnIndex): uint64 = uint64(column_index mod DATA_COLUMN_SIDECAR_SUBNET_COUNT) -# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/altair/light-client/p2p-interface.md#light_client_finality_update +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/altair/light-client/p2p-interface.md#light_client_finality_update func getLightClientFinalityUpdateTopic*(forkDigest: ForkDigest): string = ## For broadcasting or obtaining the latest `LightClientFinalityUpdate`. eth2Prefix(forkDigest) & "light_client_finality_update/ssz_snappy" @@ -131,6 +132,14 @@ func getLightClientOptimisticUpdateTopic*(forkDigest: ForkDigest): string = ## For broadcasting or obtaining the latest `LightClientOptimisticUpdate`. 
eth2Prefix(forkDigest) & "light_client_optimistic_update/ssz_snappy" +func getForkDigest( + cfg: RuntimeConfig, genesis_validators_root: Eth2Digest, + current_fork_version: Version, epoch: Epoch): ForkDigest = + if epoch >= cfg.FULU_FORK_EPOCH: + compute_fork_digest_fulu(cfg, genesis_validators_root, epoch) + else: + compute_fork_digest(current_fork_version, genesis_validators_root) + func getENRForkID*(cfg: RuntimeConfig, epoch: Epoch, genesis_validators_root: Eth2Digest): ENRForkID = @@ -140,8 +149,8 @@ func getENRForkID*(cfg: RuntimeConfig, current_fork_version else: cfg.forkVersionAtEpoch(cfg.nextForkEpochAtEpoch(epoch)) - fork_digest = compute_fork_digest(current_fork_version, - genesis_validators_root) + fork_digest = cfg.getForkDigest( + genesis_validators_root, current_fork_version, epoch) ENRForkID( fork_digest: fork_digest, next_fork_version: next_fork_version, @@ -156,8 +165,8 @@ func getDiscoveryForkID*(cfg: RuntimeConfig, else: let current_fork_version = cfg.forkVersionAtEpoch(epoch) - fork_digest = compute_fork_digest(current_fork_version, - genesis_validators_root) + fork_digest = cfg.getForkDigest( + genesis_validators_root, current_fork_version, epoch) ENRForkID( fork_digest: fork_digest, next_fork_version: current_fork_version, diff --git a/beacon_chain/spec/peerdas_helpers.nim b/beacon_chain/spec/peerdas_helpers.nim index 9ae514b3b1..ced5277427 100644 --- a/beacon_chain/spec/peerdas_helpers.nim +++ b/beacon_chain/spec/peerdas_helpers.nim @@ -293,8 +293,7 @@ func get_extended_sample_count*(samples_per_slot: int, # https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/p2p-interface.md#verify_data_column_sidecar_inclusion_proof proc verify_data_column_sidecar_inclusion_proof*(sidecar: DataColumnSidecar): Result[void, cstring] = - ## Verify if the given KZG Commitments are in included - ## in the beacon block or not + ## Verify if the given KZG commitments included in the given beacon block. let gindex = KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH_GINDEX.GeneralizedIndex if not is_valid_merkle_branch( @@ -311,8 +310,7 @@ proc verify_data_column_sidecar_inclusion_proof*(sidecar: DataColumnSidecar): # https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/p2p-interface.md#verify_data_column_sidecar_kzg_proofs proc verify_data_column_sidecar_kzg_proofs*(sidecar: DataColumnSidecar): Result[void, cstring] = - ## Verify if the KZG Proofs consisting in the `DataColumnSidecar` - ## is valid or not. + ## Verify if the KZG proofs are correct. # Check if the data column sidecar index < NUMBER_OF_COLUMNS if not (sidecar.index < NUMBER_OF_COLUMNS): @@ -327,18 +325,16 @@ proc verify_data_column_sidecar_kzg_proofs*(sidecar: DataColumnSidecar): return err("Sidecar kzg_commitments length is not equal to the kzg_proofs length") # Iterate through the cell indices - var cellIndices = - newSeq[CellIndex](MAX_BLOB_COMMITMENTS_PER_BLOCK) + var cellIndices = newSeqOfCap[CellIndex](sidecar.column.len) for _ in 0.. 
state.latest_block_header.slot): return err("process_block_header: block not newer than latest block header") - # Verify that proposer index is the correct index let proposer_index = get_beacon_proposer_index(state, cache).valueOr: return err("process_block_header: proposer missing") @@ -687,24 +687,6 @@ type proposer_slashings*: Gwei attester_slashings*: Gwei -# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/fulu/das-core.md#get_max_blobs_per_block -func get_max_blobs_per_block(cfg: RuntimeConfig, epoch: Epoch): Opt[uint64] = - ## Return the maximum number of blobs that can be included in a block for a - ## given epoch. - if not len(cfg.BLOB_SCHEDULE) > 0: - return Opt.none(uint64) - - # Spec version of function sorts every time, which should happen only once at - # loading. - for entry in cfg.BLOB_SCHEDULE: - if epoch >= entry.EPOCH: - return Opt.some entry.MAX_BLOBS_PER_BLOCK - - # This is effectively a constant per node instance. - Opt.some foldl( - cfg.BLOB_SCHEDULE, min(a, b.MAX_BLOBS_PER_BLOCK), - cfg.BLOB_SCHEDULE[0].MAX_BLOBS_PER_BLOCK) - # https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.8/specs/phase0/beacon-chain.md#operations # https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.5/specs/capella/beacon-chain.md#modified-process_operations # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/electra/beacon-chain.md#modified-process_operations @@ -815,7 +797,6 @@ func get_participant_reward*(total_active_balance: Gwei): Gwei = func get_proposer_reward*(participant_reward: Gwei): Gwei = participant_reward * PROPOSER_WEIGHT div (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) -# https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.4/specs/altair/beacon-chain.md#sync-aggregate-processing proc process_sync_aggregate*( state: var (altair.BeaconState | bellatrix.BeaconState | capella.BeaconState | deneb.BeaconState | electra.BeaconState | @@ -1104,10 +1085,9 @@ proc process_execution_payload*( return err("process_execution_payload: invalid timestamp") # Verify commitments are under limit - let max_blobs_per_block = - cfg.get_max_blobs_per_block(get_current_epoch(state)).valueOr: - return err("process_execution_payload: missing blob schedule") - if not (lenu64(body.blob_kzg_commitments) <= max_blobs_per_block): + let blob_params = + cfg.get_blob_parameters(get_current_epoch(state)) + if not (lenu64(body.blob_kzg_commitments) <= blob_params.MAX_BLOBS_PER_BLOCK): return err("process_execution_payload: too many KZG commitments") # Verify the execution payload is valid diff --git a/beacon_chain/spec/state_transition_epoch.nim b/beacon_chain/spec/state_transition_epoch.nim index a158b4e603..d8e48f0192 100644 --- a/beacon_chain/spec/state_transition_epoch.nim +++ b/beacon_chain/spec/state_transition_epoch.nim @@ -1393,6 +1393,29 @@ func process_pending_consolidations*( ok() +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.1/specs/fulu/beacon-chain.md#new-process_proposer_lookahead +func process_proposer_lookahead*(state: var fulu.BeaconState, + cache: var StateCache): + Result[void, cstring] = + let + total_slots = state.proposer_lookahead.data.lenu64 + last_epoch_start = total_slots - SLOTS_PER_EPOCH + + for i in 0 ..< last_epoch_start: + mitem(state.proposer_lookahead, i) = + mitem(state.proposer_lookahead, i + SLOTS_PER_EPOCH) + + let + next_epoch = get_current_epoch(state) + MIN_SEED_LOOKAHEAD + 1 + new_proposers = + get_beacon_proposer_indices(state, next_epoch) + + for i in 0 ..< SLOTS_PER_EPOCH: + if 
new_proposers[i].isSome(): + mitem(state.proposer_lookahead, last_epoch_start + i) = new_proposers[i].get.uint64 + + ok() + # https://github.com/ethereum/consensus-specs/blob/v1.4.0/specs/phase0/beacon-chain.md#epoch-processing proc process_epoch*( cfg: RuntimeConfig, state: var phase0.BeaconState, flags: UpdateFlags, @@ -1539,7 +1562,52 @@ proc process_epoch*( # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/electra/beacon-chain.md#epoch-processing proc process_epoch*( - cfg: RuntimeConfig, state: var (electra.BeaconState | fulu.BeaconState), + cfg: RuntimeConfig, state: var electra.BeaconState, + flags: UpdateFlags, cache: var StateCache, info: var altair.EpochInfo): + Result[void, cstring] = + let epoch = get_current_epoch(state) + info.init(state) + + # https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.4/specs/altair/beacon-chain.md#justification-and-finalization + process_justification_and_finalization(state, info.balances, flags) + + # state.slot hasn't been incremented yet. + if strictVerification in flags: + # Rule 2/3/4 finalization results in the most pessimal case. The other + # three finalization rules finalize more quickly as long as the any of + # the finalization rules triggered. + if (epoch >= 2 and state.current_justified_checkpoint.epoch + 2 < epoch) or + (epoch >= 3 and state.finalized_checkpoint.epoch + 3 < epoch): + fatal "The network did not finalize", + epoch, finalizedEpoch = state.finalized_checkpoint.epoch + quit 1 + + process_inactivity_updates(cfg, state, info) + + # https://github.com/ethereum/consensus-specs/blob/v1.4.0/specs/altair/beacon-chain.md#rewards-and-penalties + process_rewards_and_penalties(cfg, state, info) + + # https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.8/specs/phase0/beacon-chain.md#registry-updates + ? process_registry_updates(cfg, state, cache) # [Modified in Electra:EIP7251] + + # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/altair/beacon-chain.md#slashings + process_slashings(state, info.balances.current_epoch) + + process_eth1_data_reset(state) + ? process_pending_deposits(cfg, state, cache) # [New in Electra:EIP7251] + ? process_pending_consolidations(cfg, state) # [New in Electra:EIP7251] + process_effective_balance_updates(state) # [Modified in Electra:EIP7251] + process_slashings_reset(state) + process_randao_mixes_reset(state) + ? process_historical_summaries_update(state) # [Modified in Capella] + process_participation_flag_updates(state) + process_sync_committee_updates(state) + + ok() + +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.1/specs/fulu/beacon-chain.md#modified-process_epoch +proc process_epoch*( + cfg: RuntimeConfig, state: var fulu.BeaconState, flags: UpdateFlags, cache: var StateCache, info: var altair.EpochInfo): Result[void, cstring] = let epoch = get_current_epoch(state) @@ -1579,6 +1647,7 @@ proc process_epoch*( ? process_historical_summaries_update(state) # [Modified in Capella] process_participation_flag_updates(state) process_sync_committee_updates(state) + ? process_proposer_lookahead(state, cache) # [New in Fulu:EIP7917] ok() diff --git a/beacon_chain/spec/validator.nim b/beacon_chain/spec/validator.nim index 8f046284e3..b103f0d989 100644 --- a/beacon_chain/spec/validator.nim +++ b/beacon_chain/spec/validator.nim @@ -439,62 +439,116 @@ func compute_proposer_index(state: ForkyBeaconState, ## Return from ``indices`` a random index sampled by effective balance. 
compute_proposer_index(state, indices, seed, shuffled_index) +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/specs/fulu/beacon-chain.md#new-compute_proposer_indices +func compute_proposer_indices*( + state: ForkyBeaconState, + epoch: Epoch, seed: Eth2Digest, + indices: seq[ValidatorIndex] +): seq[Opt[ValidatorIndex]] = + let startSlot = epoch.start_slot() + var proposerIndices: seq[Opt[ValidatorIndex]] + + for i in 0..= ConsensusFork.Fulu: + let pi = Opt.some(ValidatorIndex item(state.proposer_lookahead, slot mod SLOTS_PER_EPOCH)) + cache.beacon_proposer_indices[slot] = pi + return pi + else: + cache.beacon_proposer_indices.withValue(slot, proposer) do: + return proposer[] + do: + ## Return the beacon proposer index at the current slot. + var buffer: array[32 + 8, byte] + buffer[0..31] = get_seed(state, epoch, DOMAIN_BEACON_PROPOSER).data + # There's exactly one beacon proposer per slot - the same validator may + # however propose several times in the same epoch (however unlikely) + let indices = get_active_validator_indices(state, epoch) + var res: Opt[ValidatorIndex] + for epoch_slot in epoch.slots(): + buffer[32..39] = uint_to_bytes(epoch_slot.asUInt64) + let seed = eth2digest(buffer) + let pi = compute_proposer_index(state, indices, seed) + if epoch_slot == slot: + res = pi + cache.beacon_proposer_indices[epoch_slot] = pi + return res + +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/specs/fulu/beacon-chain.md#new-get_beacon_proposer_indices +func get_beacon_proposer_indices*( + state: ForkyBeaconState, epoch: Epoch +): seq[Opt[ValidatorIndex]] = + ## Return the proposer indices for the given `epoch`. + let indices = get_active_validator_indices(state, epoch) + let seed = get_seed(state, epoch, DOMAIN_BEACON_PROPOSER) + compute_proposer_indices(state, epoch, seed, indices) - cache.beacon_proposer_indices.withValue(slot, proposer) do: - return proposer[] - do: - ## Return the beacon proposer index at the current slot. +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/phase0/beacon-chain.md#get_beacon_proposer_index +func get_beacon_proposer_indices*( + state: ForkyBeaconState, shuffled_indices: openArray[ValidatorIndex], epoch: Epoch): + seq[Opt[ValidatorIndex]] = + ## Return the beacon proposer indices at the current epoch, using shuffled + ## rather than sorted active validator indices. 
+ when typeof(state).kind < ConsensusFork.Fulu: + var + buffer {.noinit.}: array[32 + 8, byte] + res: seq[Opt[ValidatorIndex]] - var buffer: array[32 + 8, byte] buffer[0..31] = get_seed(state, epoch, DOMAIN_BEACON_PROPOSER).data - - # There's exactly one beacon proposer per slot - the same validator may - # however propose several times in the same epoch (however unlikely) - let indices = get_active_validator_indices(state, epoch) - var res: Opt[ValidatorIndex] + let epoch_shuffle_seed = get_seed(state, epoch, DOMAIN_BEACON_ATTESTER) for epoch_slot in epoch.slots(): buffer[32..39] = uint_to_bytes(epoch_slot.asUInt64) - let seed = eth2digest(buffer) - let pi = compute_proposer_index(state, indices, seed) - if epoch_slot == slot: - res = pi - cache.beacon_proposer_indices[epoch_slot] = pi + res.add ( + compute_proposer_index(state, shuffled_indices, eth2digest(buffer)) do: + compute_inverted_shuffled_index( + shuffled_index, seq_len, epoch_shuffle_seed)) - return res + res -# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/phase0/beacon-chain.md#get_beacon_proposer_index -func get_beacon_proposer_indices*( - state: ForkyBeaconState, shuffled_indices: openArray[ValidatorIndex], epoch: Epoch): - seq[Opt[ValidatorIndex]] = - ## Return the beacon proposer indices at the current epoch, using shuffled - ## rather than sorted active validator indices. - var - buffer {.noinit.}: array[32 + 8, byte] - res: seq[Opt[ValidatorIndex]] + else: + # Not using shuffled indices here is not a bug, + # as the method of computing proposer in the below + # function does not require shuffled indices post Fulu + get_beacon_proposer_indices(state, epoch) - buffer[0..31] = get_seed(state, epoch, DOMAIN_BEACON_PROPOSER).data - let epoch_shuffle_seed = get_seed(state, epoch, DOMAIN_BEACON_ATTESTER) - for epoch_slot in epoch.slots(): - buffer[32..39] = uint_to_bytes(epoch_slot.asUInt64) - res.add ( - compute_proposer_index(state, shuffled_indices, eth2digest(buffer)) do: - compute_inverted_shuffled_index( - shuffled_index, seq_len, epoch_shuffle_seed)) +func initialize_proposer_lookahead*(state: electra.BeaconState, + cache: var StateCache): + HashArray[Limit ((MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH), uint64] = + let current_epoch = state.slot.epoch() + var lookahead: HashArray[Limit ((MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH), uint64] - res + for i in 0 ..< (MIN_SEED_LOOKAHEAD + 1): + let + epoch_i = current_epoch + i + proposers = + get_beacon_proposer_indices(state, epoch_i) + + for j in 0 ..< SLOTS_PER_EPOCH: + if proposers[j].isSome(): + mitem(lookahead, i * SLOTS_PER_EPOCH + j) = proposers[j].get.uint64 + + lookahead # https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.6/specs/phase0/beacon-chain.md#get_beacon_proposer_index func get_beacon_proposer_index*(state: ForkyBeaconState, cache: var StateCache): diff --git a/beacon_chain/sync/light_client_manager.nim b/beacon_chain/sync/light_client_manager.nim index fe9a0c5b88..d0e17dcba2 100644 --- a/beacon_chain/sync/light_client_manager.nim +++ b/beacon_chain/sync/light_client_manager.nim @@ -112,7 +112,7 @@ proc isGossipSupported*( finalizedPeriod = self.getFinalizedPeriod(), isNextSyncCommitteeKnown = self.isNextSyncCommitteeKnown()) -# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/altair/light-client/p2p-interface.md#getlightclientbootstrap +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/altair/light-client/p2p-interface.md#getlightclientbootstrap proc doRequest( e: 
typedesc[Bootstrap], peer: Peer, @@ -120,7 +120,7 @@ proc doRequest( ): Future[NetRes[ForkedLightClientBootstrap]] {.async: (raises: [CancelledError], raw: true).} = peer.lightClientBootstrap(blockRoot) -# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/altair/light-client/p2p-interface.md#lightclientupdatesbyrange +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/altair/light-client/p2p-interface.md#lightclientupdatesbyrange type LightClientUpdatesByRangeResponse = NetRes[List[ForkedLightClientUpdate, MAX_REQUEST_LIGHT_CLIENT_UPDATES]] proc doRequest( @@ -138,7 +138,7 @@ proc doRequest( raise newException(ResponseError, e.error) return response -# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/specs/altair/light-client/p2p-interface.md#getlightclientfinalityupdate +# https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/altair/light-client/p2p-interface.md#getlightclientfinalityupdate proc doRequest( e: typedesc[FinalityUpdate], peer: Peer diff --git a/beacon_chain/sync/request_manager.nim b/beacon_chain/sync/request_manager.nim index 27dd3b25a3..91cf3ca574 100644 --- a/beacon_chain/sync/request_manager.nim +++ b/beacon_chain/sync/request_manager.nim @@ -19,7 +19,6 @@ import ../gossip_processing/block_processor from std/algorithm import binarySearch, sort -from std/sequtils import mapIt from std/strutils import join from ../beacon_clock import GetBeaconTimeFn export block_quarantine, sync_manager @@ -152,7 +151,6 @@ func checkResponseSanity( for sidecar in blobs.items(): let - slot = sidecar[].signed_block_header.message.slot block_root = hash_tree_root(sidecar[].signed_block_header.message) sidecarIdent = BlobIdentifier(block_root: block_root, index: sidecar[].index) @@ -290,7 +288,7 @@ proc fetchBlobsFromNetwork(self: RequestManager, for record in records: if record.block_root != curRoot: curRoot = record.block_root - if (let o = self.quarantine[].popBlobless(curRoot); o.isSome): + if (let o = self.quarantine[].popSidecarless(curRoot); o.isSome): let blck = o.unsafeGet() discard await self.blockVerifier(blck, false) # TODO: @@ -376,7 +374,7 @@ proc fetchDataColumnsFromNetwork(rman: RequestManager, let block_root = hash_tree_root(col.signed_block_header.message) if block_root != curRoot: curRoot = block_root - if (let o = rman.quarantine[].popColumnless(curRoot); o.isSome): + if (let o = rman.quarantine[].popSidecarless(curRoot); o.isSome): let col = o.unsafeGet() discard await rman.blockVerifier(col, false) else: @@ -458,7 +456,7 @@ proc getMissingBlobs(rman: RequestManager): seq[BlobIdentifier] = var idents: seq[BlobIdentifier] ready: seq[Eth2Digest] - for blobless in rman.quarantine[].peekBlobless(): + for blobless in rman.quarantine[].peekSidecarless(): withBlck(blobless): when consensusFork >= ConsensusFork.Deneb: # give blobs a chance to arrive over gossip @@ -488,7 +486,7 @@ proc getMissingBlobs(rman: RequestManager): seq[BlobIdentifier] = commitments = len(forkyBlck.message.body.blob_kzg_commitments) for root in ready: - let blobless = rman.quarantine[].popBlobless(root).valueOr: + let blobless = rman.quarantine[].popSidecarless(root).valueOr: continue discard rman.blockVerifier(blobless, false) idents @@ -533,7 +531,7 @@ proc requestManagerBlobLoop( Future[Result[void, VerifierError]] .Raising([CancelledError])](blockRoots.len) for blockRoot in blockRoots: - let blck = rman.quarantine[].popBlobless(blockRoot).valueOr: + let blck = rman.quarantine[].popSidecarless(blockRoot).valueOr: continue 
verifiers.add rman.blockVerifier(blck, maybeFinalized = false) try: @@ -572,7 +570,7 @@ proc getMissingDataColumns(rman: RequestManager): seq[DataColumnsByRootIdentifie fetches: seq[DataColumnsByRootIdentifier] ready: seq[Eth2Digest] - for columnless in rman.quarantine[].peekColumnless(): + for columnless in rman.quarantine[].peekSidecarless(): withBlck(columnless): when consensusFork >= ConsensusFork.Fulu: # granting data columns a chance to arrive over gossip @@ -605,7 +603,7 @@ proc getMissingDataColumns(rman: RequestManager): seq[DataColumnsByRootIdentifie ready.add(columnless.root) for root in ready: - let columnless = rman.quarantine[].popColumnless(root).valueOr: + let columnless = rman.quarantine[].popSidecarless(root).valueOr: continue discard rman.blockVerifier(columnless, false) fetches @@ -649,7 +647,7 @@ proc requestManagerDataColumnLoop( Future[Result[void, VerifierError]] .Raising([CancelledError])](blockRoots.len) for blockRoot in blockRoots: - let blck = rman.quarantine[].popColumnless(blockRoot).valueOr: + let blck = rman.quarantine[].popSidecarless(blockRoot).valueOr: continue verifiers.add rman.blockVerifier(blck, maybeFinalized = false) try: diff --git a/beacon_chain/sync/sync_manager.nim b/beacon_chain/sync/sync_manager.nim index 8e299e243d..1b179e7c97 100644 --- a/beacon_chain/sync/sync_manager.nim +++ b/beacon_chain/sync/sync_manager.nim @@ -640,7 +640,9 @@ proc syncStep[A, B]( proc processCallback() = man.workers[index].status = SyncWorkerStatus.Processing - var jobs: seq[Future[void].Raising([CancelledError])] + var + jobs: seq[Future[void].Raising([CancelledError])] + requests: seq[SyncRequest[Peer]] try: for rindex in 0 ..< man.concurrentRequestsCount: @@ -660,6 +662,7 @@ proc syncStep[A, B]( peer_score = peer.getScore(), peer_speed = peer.netKbps(), index = index, + request_index = rindex, local_head_slot = headSlot, remote_head_slot = peerSlot, queue_input_slot = man.queue.inpSlot, @@ -671,18 +674,22 @@ proc syncStep[A, B]( await sleepAsync(RESP_TIMEOUT_DUR) break + requests.add(request) man.workers[index].status = SyncWorkerStatus.Downloading + let data = (await man.getSyncBlockData(index, request)).valueOr: debug "Failed to get block data", peer = peer, peer_score = peer.getScore(), peer_speed = peer.netKbps(), index = index, + request_index = rindex, reason = error, direction = man.direction, sync_ident = man.ident, topics = "syncman" - man.queue.push(request) + # Mark all requests as failed + man.queue.push(requests) break # Scoring will happen in `syncUpdate`. 
@@ -702,6 +709,9 @@ proc syncStep[A, B]( await allFutures(jobs) except CancelledError as exc: + # Mark all requests as failed + man.queue.push(requests) + # Cancelling all verification jobs let pending = jobs.filterIt(not(it.finished)).mapIt(cancelAndWait(it)) await noCancel allFutures(pending) raise exc diff --git a/beacon_chain/sync/sync_queue.nim b/beacon_chain/sync/sync_queue.nim index c9bad685bc..bd33196607 100644 --- a/beacon_chain/sync/sync_queue.nim +++ b/beacon_chain/sync/sync_queue.nim @@ -37,9 +37,13 @@ type SyncQueueKind* {.pure.} = enum Forward, Backward + SyncRequestFlag* {.pure.} = enum + Void + SyncRequest*[T] = object kind*: SyncQueueKind data*: SyncRange + flags*: set[SyncRequestFlag] item*: T SyncQueueItem[T] = object @@ -472,11 +476,11 @@ func init*[T](t1: typedesc[SyncQueue], t2: typedesc[T], ident: ident ) -func contains[T](requests: openArray[SyncRequest[T]], source: T): bool = - for req in requests: - if req.item == source: - return true - false +func searchPeer[T](requests: openArray[SyncRequest[T]], source: T): int = + for index, request in requests.pairs(): + if request.item == source: + return index + -1 func find[T](sq: SyncQueue[T], req: SyncRequest[T]): Opt[SyncPosition] = if len(sq.requests) == 0: @@ -539,7 +543,8 @@ proc pop*[T](sq: SyncQueue[T], peerMaxSlot: Slot, item: T): SyncRequest[T] = var count = 0 for qitem in sq.requests.mitems(): if len(qitem.requests) < sq.requestsCount: - if item notin qitem.requests: + let sindex = qitem.requests.searchPeer(item) + if sindex < 0: return if qitem.data.slot > peerMaxSlot: # Peer could not satisfy our request, returning empty one. @@ -551,7 +556,9 @@ proc pop*[T](sq: SyncQueue[T], peerMaxSlot: Slot, item: T): SyncRequest[T] = qitem.requests.add(request) request else: - inc(count) + if SyncRequestFlag.Void notin qitem.requests[sindex].flags: + # We only count non-empty requests. + inc(count) doAssert(count < sq.requestsCount, "You should not pop so many requests for single peer") @@ -689,12 +696,17 @@ iterator blocks( for i in countdown(len(blcks) - 1, 0): yield (blcks[i], blobs.getOpt(i)) +proc push*[T](sq: SyncQueue[T], requests: openArray[SyncRequest[T]]) = + ## Push multiple failed requests back to queue. + for request in requests: + let pos = sq.find(request).valueOr: + debug "Request is not relevant anymore", request = request + continue + sq.del(pos) + proc push*[T](sq: SyncQueue[T], sr: SyncRequest[T]) = - ## Push failed request back to queue. - let pos = sq.find(sr).valueOr: - debug "Request is not relevant anymore", request = sr - return - sq.del(pos) + ## Push single failed request back to queue. + sq.push([sr]) proc process[T]( sq: SyncQueue[T], @@ -830,6 +842,10 @@ proc push*[T]( sr.item.updateStats(SyncResponseKind.Empty, 1'u64) inc(sq.requests[position.qindex].voidsCount) + # Mark empty request in queue, so this range will not be requested by + # the same peer. + sq.requests[position.qindex].requests[position.sindex].flags.incl( + SyncRequestFlag.Void) sq.gapList.add(GapItem.init(sr)) # With empty response - advance only when `requestsCount` of different # peers returns empty response for the same range. 
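The `sync_queue.nim` changes above record a request that came back empty by flagging it `SyncRequestFlag.Void`: the peer stays listed for that range, so the same range is not offered to it again, while the `Void` entry no longer counts toward the per-peer limit enforced by `pop()`. A minimal, self-contained sketch of that bookkeeping follows; the `MockEntry` helpers are hypothetical names for illustration only, not part of the real `SyncQueue` API.

```nim
# Illustrative sketch only: `MockEntry`, `alreadyAsked` and `countedRequests`
# are hypothetical and do not exist in sync_queue.nim.
type
  MockFlag = enum
    Void                      # the request was answered with an empty response
  MockEntry = object
    peer: string
    flags: set[MockFlag]

func alreadyAsked(entries: openArray[MockEntry], peer: string): bool =
  ## A peer stays recorded for the range even after an empty response,
  ## so the same range is not offered to it again.
  for entry in entries:
    if entry.peer == peer:
      return true
  false

func countedRequests(entries: openArray[MockEntry], peer: string): int =
  ## Only non-`Void` entries count toward the per-peer request limit.
  for entry in entries:
    if entry.peer == peer and Void notin entry.flags:
      inc result

when isMainModule:
  let entries = @[MockEntry(peer: "A", flags: {Void}), MockEntry(peer: "B")]
  doAssert entries.alreadyAsked("A")          # range not re-offered to peer A
  doAssert entries.countedRequests("A") == 0  # empty response is not accounted
  doAssert entries.countedRequests("B") == 1  # pending request still counts
```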
diff --git a/beacon_chain/validator_client/api.nim b/beacon_chain/validator_client/api.nim index 6fb2f071a7..c0e86a11cc 100644 --- a/beacon_chain/validator_client/api.nim +++ b/beacon_chain/validator_client/api.nim @@ -49,6 +49,17 @@ type data*: ApiResponse[T] score*: X + DoubleTimeoutState {.pure.} = enum + Soft, Hard + + DoubleTimeout* = object + startTime: Moment + softTimeout: Duration + hardTimeout: Duration + betweenTimeout: Duration + timeoutFuture*: Future[void].Raising([CancelledError]) + state: DoubleTimeoutState + const ViableNodeStatus* = { RestBeaconNodeStatus.Compatible, @@ -57,6 +68,70 @@ const RestBeaconNodeStatus.Synced } +proc init( + t: typedesc[DoubleTimeout], + softTimeout, hardTimeout: Duration +): DoubleTimeout = + let + betweenTimeout = + if softTimeout == InfiniteDuration: + ZeroDuration + else: + if hardTimeout == InfiniteDuration: + ZeroDuration + else: + doAssert(hardTimeout >= softTimeout, + "Hard timeout should be bigger than soft timeout") + hardTimeout - softTimeout + future = + if softTimeout == InfiniteDuration: + nil + else: + sleepAsync(softTimeout) + + DoubleTimeout( + startTime: Moment.now(), + softTimeout: softTimeout, + hardTimeout: hardTimeout, + betweenTimeout: betweenTimeout, + timeoutFuture: future, + state: DoubleTimeoutState.Soft + ) + +func timedOut(dt: DoubleTimeout): bool = + if isNil(dt.timeoutFuture): + false + else: + dt.timeoutFuture.finished() + +func hardTimedOut(dt: DoubleTimeout): bool = + (dt.state == DoubleTimeoutState.Hard) and dt.timedOut() + +func softTimedOut(dt: DoubleTimeout): bool = + (dt.state == DoubleTimeoutState.Hard) or + ((dt.state == DoubleTimeoutState.Soft) and dt.timedOut()) + +proc switch(dt: var DoubleTimeout) = + if dt.state == DoubleTimeoutState.Hard: + # It's too late to switch, so doing nothing + return + if not(dt.timedOut()): + # Timeout is not exceeded yet, so doing nothing + return + dt.state = DoubleTimeoutState.Hard + dt.timeoutFuture = + if dt.hardTimeout == InfiniteDuration: + nil + else: + sleepAsync(dt.betweenTimeout) + +proc timePassed(dt: DoubleTimeout): Duration = + Moment.now() - dt.startTime + +proc close(dt: DoubleTimeout): Future[void] {.async: (raises: []).} = + if not(isNil(dt.timeoutFuture)): + await cancelAndWait(dt.timeoutFuture) + proc `$`*[T](s: ApiScore[T]): string = var res = Base10.toString(uint64(s.index)) res.add(": ") @@ -95,7 +170,7 @@ proc lazyWaiter( strategy: ApiStrategyKind ) {.async: (raises: []).} = try: - await allFutures(request) + await request.join() if request.failed(): let failure = ApiNodeFailure.init( ApiFailure.Communication, requestName, strategy, node, @@ -130,6 +205,42 @@ proc lazyWait( else: await allFutures(futures) +proc lazyWait( + nodes: seq[BeaconNodeServerRef], + requests: seq[FutureBase], + timeout: ref DoubleTimeout, + requestName: string, + strategy: ApiStrategyKind +) {.async: (raises: [CancelledError]).} = + doAssert(len(nodes) == len(requests)) + if len(nodes) == 0: + return + + var futures: seq[Future[void]] + for index in 0 ..< len(requests): + futures.add(lazyWaiter(nodes[index], requests[index], requestName, + strategy)) + + if isNil(timeout[].timeoutFuture): + await allFutures(futures) + return + + while true: + try: + await allFutures(futures).wait(timeout[].timeoutFuture) + # All pending jobs finished successfully, exiting + break + except AsyncTimeoutError: + if timeout[].hardTimedOut(): + # Hard timeout exceeded, terminating all the jobs. 
+ let pending = + futures.filterIt(not(it.finished())).mapIt(it.cancelAndWait()) + await noCancel allFutures(pending) + break + else: + # Soft timeout exceeded, switching to hard timeout future. + timeout[].switch() + proc apiResponseOr[T](future: FutureBase, timerFut: Future[void], message: string): ApiResponse[T] = if future.finished() and not(future.cancelled()): @@ -278,27 +389,22 @@ template firstSuccessParallel*( retRes template bestSuccess*( - vc: ValidatorClientRef, - responseType: typedesc, - handlerType: typedesc, - scoreType: typedesc, - timeout: Duration, - statuses: set[RestBeaconNodeStatus], - roles: set[BeaconNodeRole], - bodyRequest, - bodyScore, - bodyHandler: untyped): ApiResponse[handlerType] = + vc: ValidatorClientRef, + responseType: typedesc, + handlerType: typedesc, + scoreType: typedesc, + softTimeout: Duration, + hardTimeout: Duration, + statuses: set[RestBeaconNodeStatus], + roles: set[BeaconNodeRole], + bodyRequest, + bodyScore, + bodyHandler: untyped +): ApiResponse[handlerType] = var it {.inject.}: RestClientRef iterations = 0 - - var timerFut = - if timeout != InfiniteDuration: - sleepAsync(timeout) - else: - nil - - var + timeout = newClone(DoubleTimeout.init(softTimeout, hardTimeout)) retRes: ApiResponse[handlerType] scores: seq[ApiScore[scoreType]] bestResponse: Opt[BestNodeResponse[handlerType, scoreType]] @@ -309,26 +415,31 @@ template bestSuccess*( try: if iterations == 0: # We are not going to wait for BNs if there some available. - await vc.waitNodes(timerFut, statuses, roles, false) + await vc.waitNodes(timeout[].timeoutFuture, statuses, roles, false) else: - # We get here only, if all the requests are failed. To avoid requests - # spam we going to wait for changes in BNs statuses. - await vc.waitNodes(timerFut, statuses, roles, true) + # We get here only, if all the requests are failed. To avoid + # requests spam we going to wait for changes in BNs statuses. + await vc.waitNodes(timeout[].timeoutFuture, statuses, roles, true) vc.filterNodes(statuses, roles) except CancelledError as exc: - if not(isNil(timerFut)) and not(timerFut.finished()): - await timerFut.cancelAndWait() + await timeout[].close() raise exc if len(onlineNodes) == 0: - retRes = ApiResponse[handlerType].err("No online beacon node(s)") - break mainLoop + if timeout[].hardTimedOut(): + retRes = ApiResponse[handlerType].err("No online beacon node(s)") + break mainLoop + else: + debug "Soft timeout exceeded while waiting for beacon node(s)", + time_passed = timeout[].timePassed() + timeout[].switch() else: var (pendingRequests, pendingNodes) = block: - var requests: seq[FutureBase] - var nodes: seq[BeaconNodeServerRef] + var + requests: seq[FutureBase] + nodes: seq[BeaconNodeServerRef] for node {.inject.} in onlineNodes: it = node.client let fut = FutureBase(bodyRequest) @@ -342,24 +453,29 @@ template bestSuccess*( var finishedRequests: seq[FutureBase] finishedNodes: seq[BeaconNodeServerRef] - raceFut: Future[FutureBase].Raising([ValueError, CancelledError]) try: - raceFut = race(pendingRequests) - - if not(isNil(timerFut)): - discard await race(raceFut, timerFut) + if not(isNil(timeout.timeoutFuture)): + try: + discard await race(pendingRequests).wait( + timeout.timeoutFuture) + except ValueError: + raiseAssert "pendingRequests sequence must not be empty!" + except AsyncTimeoutError: + discard else: - await allFutures(raceFut) + try: + discard await race(pendingRequests) + except ValueError: + raiseAssert "pendingRequests sequence must not be empty!" 
for index, future in pendingRequests.pairs(): - if future.finished() or - (not(isNil(timerFut)) and timerFut.finished()): + if future.finished() or timeout[].hardTimedOut(): finishedRequests.add(future) finishedNodes.add(pendingNodes[index]) let node {.inject.} = pendingNodes[index] apiResponse {.inject.} = - apiResponseOr[responseType](future, timerFut, + apiResponseOr[responseType](future, timeout.timeoutFuture, "Timeout exceeded while awaiting for the response") handlerResponse = try: @@ -378,7 +494,7 @@ template bestSuccess*( scores.add(ApiScore.init(node, score)) if bestResponse.isNone() or - (score > bestResponse.get().score): + (score > bestResponse.get().score): bestResponse = Opt.some( BestNodeResponse.init(node, handlerResponse, score)) if perfectScore(score): @@ -387,13 +503,18 @@ template bestSuccess*( else: scores.add(ApiScore.init(node, scoreType)) + if timeout[].softTimedOut(): + timeout[].switch() + if bestResponse.isSome(): + perfectScoreFound = true + if perfectScoreFound: # lazyWait will cancel `pendingRequests` on timeout. - asyncSpawn lazyWait(pendingNodes, pendingRequests, timerFut, - RequestName, strategy) + asyncSpawn lazyWait( + pendingNodes, pendingRequests, timeout, RequestName, strategy) break innerLoop - if not(isNil(timerFut)) and timerFut.finished(): + if timeout[].hardTimedOut(): # If timeout is exceeded we need to cancel all the tasks which # are still running. var pendingCancel: seq[Future[void]] @@ -408,11 +529,9 @@ template bestSuccess*( except CancelledError as exc: var pendingCancel: seq[Future[void]] - # `or` operation does not cancelling Futures passed as arguments. - if not(isNil(raceFut)) and not(raceFut.finished()): - pendingCancel.add(raceFut.cancelAndWait()) - if not(isNil(timerFut)) and not(timerFut.finished()): - pendingCancel.add(timerFut.cancelAndWait()) + # `race` operation does not cancelling Futures passed as + # arguments. + pendingCancel.add(timeout[].close()) # We should cancel all the requests which are still pending. 
for future in pendingRequests.items(): if not(future.finished()): @@ -425,7 +544,7 @@ template bestSuccess*( retRes = bestResponse.get().data break mainLoop else: - if timerFut.finished(): + if timeout[].hardTimedOut(): retRes = ApiResponse[handlerType].err( "Timeout exceeded while awaiting for responses") break mainLoop @@ -439,8 +558,8 @@ template bestSuccess*( debug "Best score result selected", request = RequestName, available_scores = scores, best_score = shortScore(bestResponse.get().score), - best_node = bestResponse.get().node - + best_node = bestResponse.get().node, + time_passed = timeout[].timePassed() retRes template onceToAll*( @@ -1182,6 +1301,7 @@ proc getHeadBlockRoot*( RestPlainResponse, GetBlockRootResponse, float64, + SlotDurationSoft, SlotDuration, ViableNodeStatus, {BeaconNodeRole.SyncCommitteeData}, @@ -1417,6 +1537,7 @@ proc produceAttestationData*( RestPlainResponse, ProduceAttestationDataResponse, float64, + OneThirdDurationSoft, OneThirdDuration, ViableNodeStatus, {BeaconNodeRole.AttestationData}, @@ -1762,6 +1883,7 @@ proc getAggregatedAttestation*( RestPlainResponse, GetAggregatedAttestationResponse, float64, + OneThirdDurationSoft, OneThirdDuration, ViableNodeStatus, {BeaconNodeRole.AggregatedData}, @@ -1902,6 +2024,7 @@ proc getAggregatedAttestationV2*( RestPlainResponse, GetAggregatedAttestationV2Response, float64, + OneThirdDurationSoft, OneThirdDuration, ViableNodeStatus, {BeaconNodeRole.AggregatedData}, @@ -2039,6 +2162,7 @@ proc produceSyncCommitteeContribution*( RestPlainResponse, ProduceSyncCommitteeContributionResponse, float64, + OneThirdDurationSoft, OneThirdDuration, ViableNodeStatus, {BeaconNodeRole.SyncCommitteeData}, @@ -2337,6 +2461,7 @@ proc produceBlockV3*( RestPlainResponse, ProduceBlockResponseV3, UInt256, + SlotDurationSoft, SlotDuration, ViableNodeStatus, {BeaconNodeRole.BlockProposalData}, diff --git a/beacon_chain/validator_client/common.nim b/beacon_chain/validator_client/common.nim index d3c5f17af7..5d646617a5 100644 --- a/beacon_chain/validator_client/common.nim +++ b/beacon_chain/validator_client/common.nim @@ -267,8 +267,14 @@ type const DefaultDutyAndProof* = DutyAndProof(epoch: FAR_FUTURE_EPOCH) DefaultSyncCommitteeDuty* = SyncCommitteeDuty() - SlotDuration* = int64(SECONDS_PER_SLOT).seconds - OneThirdDuration* = int64(SECONDS_PER_SLOT).seconds div INTERVALS_PER_SLOT + SlotDuration* = + int64(SECONDS_PER_SLOT).seconds + SlotDurationSoft* = + (int64(SECONDS_PER_SLOT) div 2).seconds + OneThirdDuration* = + (int64(SECONDS_PER_SLOT) div int64(INTERVALS_PER_SLOT)).seconds + OneThirdDurationSoft* = + (int64(SECONDS_PER_SLOT) div int64(INTERVALS_PER_SLOT) div 2'i64).seconds AllBeaconNodeRoles* = { BeaconNodeRole.Duties, BeaconNodeRole.AttestationData, diff --git a/beacon_chain/validators/beacon_validators.nim b/beacon_chain/validators/beacon_validators.nim index e79b1549c7..43079d2daa 100644 --- a/beacon_chain/validators/beacon_validators.nim +++ b/beacon_chain/validators/beacon_validators.nim @@ -950,8 +950,10 @@ proc proposeBlockMEV( "Unblinded block not returned to proposer" err errMsg -func isEFMainnet(cfg: RuntimeConfig): bool = - cfg.DEPOSIT_CHAIN_ID == 1 and cfg.DEPOSIT_NETWORK_ID == 1 +func isExcludedTestnet(cfg: RuntimeConfig): bool = + ## Ensure that builder API testing can still occur in certain circumstances. 
+ cfg.DEPOSIT_CHAIN_ID == cfg.DEPOSIT_NETWORK_ID and cfg.DEPOSIT_CHAIN_ID in [ + 17000'u64, 560048] # Holesky and Hoodi, respectively proc collectBids( SBBB: typedesc, EPS: typedesc, node: BeaconNode, @@ -967,7 +969,7 @@ proc collectBids( # EL fails -- i.e. it would change priorities, so any block from the # execution layer client would override builder API. But it seems an # odd requirement to produce no block at all in those conditions. - (not node.dag.cfg.isEFMainnet) or (not livenessFailsafeInEffect( + (node.dag.cfg.isExcludedTestnet) or (not livenessFailsafeInEffect( forkyState.data.block_roots.data, forkyState.data.slot)) else: false @@ -1258,7 +1260,7 @@ proc proposeBlock( consensusFork.SignedBlindedBeaconBlock, consensusFork.ExecutionPayloadForSigning) else: - # Pre-Deneb MEV is not supported; this signals that, because it triggers + # Pre-Electra MEV is not supported; this signals that, because it triggers # intentional SignedBlindedBeaconBlock/ExecutionPayload mismatches. proposeBlockContinuation( electra_mev.SignedBlindedBeaconBlock, diff --git a/beacon_chain/validators/message_router_mev.nim b/beacon_chain/validators/message_router_mev.nim index 9fb75f4d22..c151b388c9 100644 --- a/beacon_chain/validators/message_router_mev.nim +++ b/beacon_chain/validators/message_router_mev.nim @@ -49,8 +49,6 @@ proc unblindAndRouteBlockMEV*( electra_mev.SignedBlindedBeaconBlock | fulu_mev.SignedBlindedBeaconBlock): Future[Result[Opt[BlockRef], string]] {.async: (raises: [CancelledError]).} = - const consensusFork = typeof(blindedBlock).kind - info "Proposing blinded Builder API block", blindedBlock = shortLog(blindedBlock) @@ -74,57 +72,49 @@ proc unblindAndRouteBlockMEV*( return err( "REST unable to communicate with remote host, reason " & exc.msg) - const httpOk = 200 - if response.status != httpOk: - # https://github.com/ethereum/builder-specs/blob/v0.4.0/specs/bellatrix/validator.md#proposer-slashing - # This means if a validator publishes a signature for a - # `BlindedBeaconBlock` (via a dissemination of a - # `SignedBlindedBeaconBlock`) then the validator **MUST** not use the - # local build process as a fallback, even in the event of some failure - # with the external builder network. 
- return err("submitBlindedBlock failed with HTTP error code " & - $response.status & ": " & $shortLog(blindedBlock)) - when blindedBlock is electra_mev.SignedBlindedBeaconBlock: - let res = decodeBytesJsonOrSsz( - SubmitBlindedBlockResponseElectra, response.data, response.contentType, - response.headers.getString("eth-consensus-version")) - elif blindedBlock is fulu_mev.SignedBlindedBeaconBlock: - let res = decodeBytesJsonOrSsz( - SubmitBlindedBlockResponseFulu, response.data, response.contentType, - response.headers.getString("eth-consensus-version")) - else: - static: doAssert false - - let bundle = res.valueOr: - return err("Could not decode " & $consensusFork & " blinded block: " & $res.error & - " with HTTP status " & $response.status & ", Content-Type " & - $response.contentType & " and content " & $response.data) - - template execution_payload: untyped = bundle.data.execution_payload - - if hash_tree_root(blindedBlock.message.body.execution_payload_header) != - hash_tree_root(execution_payload): - return err("unblinded payload doesn't match blinded payload header: " & - $blindedBlock.message.body.execution_payload_header) - - # Signature provided is consistent with unblinded execution payload, - # so construct full beacon block - # https://github.com/ethereum/builder-specs/blob/v0.4.0/specs/bellatrix/validator.md#block-proposal - var signedBlock = consensusFork.SignedBeaconBlock( - signature: blindedBlock.signature) - copyFields( - signedBlock.message, blindedBlock.message, - getFieldNames(typeof(signedBlock.message))) - copyFields( - signedBlock.message.body, blindedBlock.message.body, - getFieldNames(typeof(signedBlock.message.body))) - assign(signedBlock.message.body.execution_payload, execution_payload) - signedBlock.root = hash_tree_root(signedBlock.message) - doAssert signedBlock.root == hash_tree_root(blindedBlock.message) - - let blobsOpt = - when consensusFork >= ConsensusFork.Deneb: + if response.status != 200: + # https://github.com/ethereum/builder-specs/blob/v0.5.0/specs/bellatrix/validator.md#proposer-slashing + # This means if a validator publishes a signature for a + # `BlindedBeaconBlock` (via a dissemination of a + # `SignedBlindedBeaconBlock`) then the validator **MUST** not use the + # local build process as a fallback, even in the event of some failure + # with the external builder network. 
+ return err("submitBlindedBlock failed with HTTP error code " & + $response.status & ": " & $shortLog(blindedBlock)) + + let + res = decodeBytesJsonOrSsz( + SubmitBlindedBlockResponseElectra, response.data, response.contentType, + response.headers.getString("eth-consensus-version")) + bundle = res.valueOr: + return err("Could not decode Electra blinded block: " & $res.error & + " with HTTP status " & $response.status & ", Content-Type " & + $response.contentType & " and content " & $response.data) + + template execution_payload: untyped = bundle.data.execution_payload + + if hash_tree_root(blindedBlock.message.body.execution_payload_header) != + hash_tree_root(execution_payload): + return err("unblinded payload doesn't match blinded payload header: " & + $blindedBlock.message.body.execution_payload_header) + + # Signature provided is consistent with unblinded execution payload, + # so construct full beacon block + # https://github.com/ethereum/builder-specs/blob/v0.5.0/specs/bellatrix/validator.md#block-proposal + var signedBlock = electra.SignedBeaconBlock( + signature: blindedBlock.signature) + copyFields( + signedBlock.message, blindedBlock.message, + getFieldNames(typeof(signedBlock.message))) + copyFields( + signedBlock.message.body, blindedBlock.message.body, + getFieldNames(typeof(signedBlock.message.body))) + assign(signedBlock.message.body.execution_payload, execution_payload) + signedBlock.root = hash_tree_root(signedBlock.message) + doAssert signedBlock.root == hash_tree_root(blindedBlock.message) + + let blobsOpt = block: template blobs_bundle: untyped = bundle.data.blobs_bundle if blindedBlock.message.body.blob_kzg_commitments != bundle.data.blobs_bundle.commitments: @@ -138,22 +128,34 @@ proc unblindAndRouteBlockMEV*( return err("unblinded blobs bundle is invalid") Opt.some(signedBlock.create_blob_sidecars( blobs_bundle.proofs, blobs_bundle.blobs)) - else: - Opt.none(seq[BlobSidecar]) - debug "unblindAndRouteBlockMEV: proposing unblinded block", - blck = shortLog(signedBlock) + debug "unblindAndRouteBlockMEV: proposing unblinded block", + blck = shortLog(signedBlock) - let newBlockRef = - (await node.router.routeSignedBeaconBlock( - signedBlock, blobsOpt, checkValidator = false)).valueOr: - # submitBlindedBlock has run, so don't allow fallback to run - return err("routeSignedBeaconBlock error") # Errors logged in router + let newBlockRef = + (await node.router.routeSignedBeaconBlock( + signedBlock, blobsOpt, checkValidator = false)).valueOr: + # submitBlindedBlock has run, so don't allow fallback to run + return err("routeSignedBeaconBlock error") # Errors logged in router - if newBlockRef.isSome: - beacon_block_builder_proposed.inc() - notice "Block proposed (MEV)", - blockRoot = shortLog(signedBlock.root), blck = shortLog(signedBlock), - signature = shortLog(signedBlock.signature) + if newBlockRef.isSome: + beacon_block_builder_proposed.inc() + notice "Block proposed (MEV)", + blockRoot = shortLog(signedBlock.root), blck = shortLog(signedBlock), + signature = shortLog(signedBlock.signature) - ok newBlockRef + ok newBlockRef + elif blindedBlock is fulu_mev.SignedBlindedBeaconBlock: + if response.status == 202: + ok(Opt.none(BlockRef)) + else: + # https://github.com/ethereum/builder-specs/blob/v0.5.0/specs/bellatrix/validator.md#proposer-slashing + # This means if a validator publishes a signature for a + # `BlindedBeaconBlock` (via a dissemination of a + # `SignedBlindedBeaconBlock`) then the validator **MUST** not use the + # local build process as a fallback, even in the 
event of some failure + # with the external builder network. + err("submitBlindedBlock failed with HTTP error code " & + $response.status & ": " & $shortLog(blindedBlock)) + else: + static: doAssert false diff --git a/beacon_chain/validators/slashing_protection_common.nim b/beacon_chain/validators/slashing_protection_common.nim index 665429ff3b..65ba68f8ee 100644 --- a/beacon_chain/validators/slashing_protection_common.nim +++ b/beacon_chain/validators/slashing_protection_common.nim @@ -203,7 +203,7 @@ func `==`*(a, b: BadProposal): bool = proc writeValue*( writer: var JsonWriter, value: PubKey0x) {.inline, raises: [IOError].} = - writer.writeValue("0x" & value.PubKeyBytes.toHex()) + writer.writeValue(value.PubKeyBytes.to0xHex()) proc readValue*(reader: var JsonReader, value: var PubKey0x) {.raises: [SerializationError, IOError].} = @@ -214,7 +214,7 @@ proc readValue*(reader: var JsonReader, value: var PubKey0x) proc writeValue*( w: var JsonWriter, a: Eth2Digest0x) {.inline, raises: [IOError].} = - w.writeValue "0x" & a.Eth2Digest.data.toHex() + w.writeValue a.Eth2Digest.data.to0xHex() proc readValue*(r: var JsonReader, a: var Eth2Digest0x) {.raises: [SerializationError, IOError].} = @@ -272,6 +272,7 @@ chronicles.formatIt EpochString: it.Slot.shortLog chronicles.formatIt Eth2Digest0x: it.Eth2Digest.shortLog chronicles.formatIt SPDIR_SignedBlock: it.shortLog chronicles.formatIt SPDIR_SignedAttestation: it.shortLog +chronicles.formatIt PubKey0x: it.PubKeyBytes.to0xHex # Interchange import # -------------------------------------------- @@ -289,8 +290,7 @@ proc importInterchangeV5Impl*( let key = ValidatorPubKey.fromRaw(spdir.data[v].pubkey.PubKeyBytes) if key.isErr: # The bytes does not describe a valid encoding (length error) - error "Invalid public key.", - pubkey = "0x" & spdir.data[v].pubkey.PubKeyBytes.toHex() + error "Invalid public key.", pubkey = spdir.data[v].pubkey result = siPartial continue @@ -298,8 +298,7 @@ proc importInterchangeV5Impl*( # The bytes don't deserialize to a valid BLS G1 elliptic curve point. # Deserialization is costly but done only once per validator. # and SlashingDB import is a very rare event. - error "Invalid public key.", - pubkey = "0x" & spdir.data[v].pubkey.PubKeyBytes.toHex() + error "Invalid public key.", pubkey = spdir.data[v].pubkey result = siPartial continue diff --git a/beacon_chain/version.nim b/beacon_chain/version.nim index 647ad8e35c..a08910c876 100644 --- a/beacon_chain/version.nim +++ b/beacon_chain/version.nim @@ -18,8 +18,8 @@ const "Copyright (c) 2019-" & compileYear & " Status Research & Development GmbH" versionMajor* = 25 - versionMinor* = 6 - versionBuild* = 0 + versionMinor* = 7 + versionBuild* = 1 versionBlob* = "stateofus" # Single word - ends up in the default graffiti diff --git a/ci/Jenkinsfile b/ci/Jenkinsfile index 31dc700925..4d3d69e6c0 100644 --- a/ci/Jenkinsfile +++ b/ci/Jenkinsfile @@ -31,6 +31,7 @@ pipeline { } options { + disableRestartFromStage() timestamps() ansiColor('xterm') /* This also includes wait time in the queue. */ diff --git a/ci/nix.Jenkinsfile b/ci/nix.Jenkinsfile index 1c8d904cc5..b7e28913dd 100644 --- a/ci/nix.Jenkinsfile +++ b/ci/nix.Jenkinsfile @@ -1,6 +1,6 @@ #!/usr/bin/env groovy /* beacon_chain - * Copyright (c) 2019-2024 Status Research & Development GmbH + * Copyright (c) 2019-2025 Status Research & Development GmbH * Licensed and distributed under either of * * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT). 
* * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0). @@ -26,6 +26,7 @@ pipeline { } options { + disableRestartFromStage() timestamps() ansiColor('xterm') /* This also includes wait time in the queue. */ diff --git a/config.nims b/config.nims index 1423aacf13..aaacbaf35c 100644 --- a/config.nims +++ b/config.nims @@ -121,11 +121,12 @@ elif defined(riscv64): else: switch("passC", "-march=native") switch("passL", "-march=native") - if defined(windows): - # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65782 - # ("-fno-asynchronous-unwind-tables" breaks Nim's exception raising, sometimes) - switch("passC", "-mno-avx512f") - switch("passL", "-mno-avx512f") + # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65782 + # ("-fno-asynchronous-unwind-tables" breaks Nim's exception raising, sometimes) + # For non-Windows targets, https://github.com/bitcoin-core/secp256k1/issues/1623 + # also suggests disabling the same flag to address Ubuntu 22.04/recent AMD CPUs. + switch("passC", "-mno-avx512f") + switch("passL", "-mno-avx512f") # omitting frame pointers in nim breaks the GC # https://github.com/nim-lang/Nim/issues/10625 diff --git a/docs/e2store.md b/docs/e2store.md index 36bf2ca384..1c2e2c5657 100644 --- a/docs/e2store.md +++ b/docs/e2store.md @@ -181,7 +181,7 @@ Each era is identified by when it ends. Thus, the genesis era is era `0`, follow ## File name -`.era` file names follow a simple convention: `---.era`: +`.era` file names follow a simple convention: `--.era`: * `config-name` is the `CONFIG_NAME` field of the runtime configuration (`mainnet`, `sepolia`, `holesky`, `hoodi`, etc) * `era-number` is the number of the _first_ era stored in the file - for example, the genesis era file has number 0 - as a 5-digit 0-filled decimal integer diff --git a/docs/requirements.txt b/docs/requirements.txt index 67f0fabc3f..7eb6d69061 100644 --- a/docs/requirements.txt +++ b/docs/requirements.txt @@ -80,11 +80,11 @@ pyyaml-env-tag==0.1 # via mkdocs regex==2024.9.11 # via mkdocs-material -requests==2.32.3 +requests==2.32.4 # via mkdocs-material six==1.16.0 # via python-dateutil -urllib3==2.2.3 +urllib3==2.5.0 # via requests watchdog==5.0.3 # via mkdocs diff --git a/docs/the_nimbus_book/src/execution-client.md b/docs/the_nimbus_book/src/execution-client.md index 1f9c5ba41e..ec4722ec16 100644 --- a/docs/the_nimbus_book/src/execution-client.md +++ b/docs/the_nimbus_book/src/execution-client.md @@ -25,20 +25,20 @@ cd nimbus-eth1 To build the Nimbus execution client and its dependencies, make sure you have [all prerequisites](./install.md) and then run: ```sh -make -j4 nimbus_execution_client nrpc +make -j4 nimbus_execution_client ``` This may take a few minutes. -When the process finishes, the `nimbus_execution_client` and `nrpc` executables can be found in the `build` subdirectory. +When the process finishes, the `nimbus_execution_client` executables can be found in the `build` subdirectory. -## Import era files +## Syncing using era files Syncing Nimbus requires a set of `era1` and `era` files. These can be generated from a `geth` and `nimbus` consensus client respectively or downloaded from a third-party repository. In addition to the era files themselves, you will need at least 200GB of free space on a fast SSD in your data directory, as set by the `--data-dir` command line option. -!!! info "`era` file download locations" +!!! 
info "`era` file downloading" `era` and `era1` files for testing purposes could at the time of writing be found here - these sources may or may not be available: === "Mainnet" @@ -59,12 +59,52 @@ In addition to the era files themselves, you will need at least 200GB of free sp * https://sepolia.era.nimbus.team/ * https://sepolia.era1.nimbus.team/ + A wider community maintained list of `era` and `era1` files can be found eth-clients github [history-endpoints](https://eth-clients.github.io/history-endpoints/) + + Downloading these files can take a long time, specially if you are downloading sequentially. + For easier and fast download, please use the `era_downloader.sh` script provided in the `nimbus-eth1` repository. + #### You'll need: + - [`aria2`](https://aria2.github.io/) installed: + - **macOS**: `brew install aria2` + - **Ubuntu/Debian**: `sudo apt install aria2` + - Standard Unix tools: `bash`, `awk`, `find`, `grep`, `curl` + + === "Mainnet" + ```sh + cd nimbus-eth1 + chmod +x scripts/era_downloader.sh + ./scripts/era_downloader.sh https://mainnet.era1.nimbus.team/ ../build/era1 + ./scripts/era_downloader.sh https://mainnet.era.nimbus.team/ ../build/era + ``` + + === "Hoodi" + ```sh + cd nimbus-eth1 + chmod +x scripts/era_downloader.sh + ./scripts/era_downloader.sh https://hoodi.era.nimbus.team/ ../build/era + ``` + + === "Holesky" + ```sh + cd nimbus-eth1 + chmod +x scripts/era_downloader.sh + ./scripts/era_downloader.sh https://holesky.era.nimbus.team/ ../build/era + ``` + + === "Sepolia" + ```sh + cd nimbus-eth1 + chmod +x scripts/era_downloader.sh + ./scripts/era_downloader.sh https://sepolia.era1.nimbus.team/ ../build/era1 + ./scripts/era_downloader.sh https://sepolia.era.nimbus.team/ ../build/era + ``` + It is recommended that you place the era files in the data directory under `era1` and `era` respectively. Era files can be shared between multiple nodes and can reside on a slow drive - use the `--era1-dir` and `--era-dir` options if they are located outside of the data directory. See the [era file guide](./era-store.md) for more information. !!! tip "" - Future versions of Nimbus will support other methods of syncing + Future versions of Nimbus will support other methods of syncing, such as snap sync. === "Mainnet" !!! note "" @@ -118,35 +158,50 @@ During startup, a `jwt.hex` file will be placed in the data directory containing build/nimbus_execution_client --network=sepolia --data-dir=build/sepolia --engine-api ``` -## Top up blocks from the consensus node +## Optionally quickstart with a pre-synced database + +!!! warning "Unverified pre-synced database" + The pre-synced database is provided by the Nimbus team which contained the state, but using this database is trusting the team to have provided a valid database. This gives you a headstart on syncing, but if you don't trust the provider, you should do a full sync instead, either from era files or from the p2p network. + The pre-synced database is not available for all networks, and is only available for mainnet + +If you want to skip the era file import and start with a pre-synced database, you can download a pre-synced database from the Nimbus team. This database is for now only available for the mainnet. 
+ +```sh +# Download the pre-synced database +wget https://eth1-db.nimbus.team/mainnet-static-vid-keyed.tar.gz + +# Extract the database into the data directory +tar -xzf mainnet-static-vid-keyed.tar.gz +``` + +This will extract the pre-synced database into the current directory, which you can then use as your data directory. + +## Using the consensus node to sync -While era files cover the majority of chain history, Nimbus currenty relies on the consensus node to sync the most recent blocks using the `nrpc` helper. +While era files cover the majority of chain history, in most cases Nimbus will automatically sync recent blocks via peer-to-peer networking. +However, if your node is stuck, has no peers, or you're on a weak network connection, you can optionally use `nrpc` to sync recent blocks directly from a connected consensus node using the Engine API. This method of syncing loads blocks from the consensus node and passes them to the execution client via the Engine API. === "Mainnet" ```sh - # Start `nrpc` every 2 seconds in case there is a fork or the execution client goes out of sync - while true; do build/nrpc sync --beacon-api=http://localhost:5052 --el-engine-api=http://localhost:8550 --jwt-secret=build/mainnet/jwt.hex; sleep 2; done + ./build/nrpc sync --beacon-api=http://localhost:5052 --el-engine-api=http://localhost:8550 --jwt-secret=build/mainnet/jwt.hex ``` === "Hoodi" ```sh - # Start `nrpc` every 2 seconds in case there is a fork or the execution client goes out of sync - while true; do build/nrpc sync --network=hoodi --beacon-api=http://localhost:5052 --el-engine-api=http://localhost:8550 --jwt-secret=build/hoodi/jwt.hex; sleep 2; done + ./build/nrpc sync --network=hoodi --beacon-api=http://localhost:5052 --el-engine-api=http://localhost:8550 --jwt-secret=build/hoodi/jwt.hex ``` === "Holesky" ```sh - # Start `nrpc` every 2 seconds in case there is a fork or the execution client goes out of sync - while true; do build/nrpc sync --network=holesky --beacon-api=http://localhost:5052 --el-engine-api=http://localhost:8550 --jwt-secret=build/holesky/jwt.hex; sleep 2; done + ./build/nrpc sync --network=holesky --beacon-api=http://localhost:5052 --el-engine-api=http://localhost:8550 --jwt-secret=build/holesky/jwt.hex ``` === "Sepolia" ```sh - # Start `nrpc` every 2 seconds in case there is a fork or the execution client goes out of sync - while true; do build/nrpc sync --network=sepolia --beacon-api=http://localhost:5052 --el-engine-api=http://localhost:8550 --jwt-secret=build/sepolia/jwt.hex; sleep 2; done + ./build/nrpc sync --network=sepolia --beacon-api=http://localhost:5052 --el-engine-api=http://localhost:8550 --jwt-secret=build/sepolia/jwt.hex ``` !!! tip "" - Future versions of Nimbus will support other methods of syncing + Future versions of Nimbus will support snap sync.
diff --git a/ncli/resttest-rules.json b/ncli/resttest-rules.json index fd89c70b1c..588816b936 100644 --- a/ncli/resttest-rules.json +++ b/ncli/resttest-rules.json @@ -2431,6 +2431,59 @@ "body": [{"operator": "jstructcmps", "start": ["data"], "value": [{"index": "", "balance": ""}]}] } }, + { + "topics": ["beacon", "states_validator_identities", "slow", "post"], + "request": { + "method": "POST", + "body": { + "content-type": "application/json", + "data": "[]" + }, + "url": "/eth/v1/beacon/states/head/validator_identities", + "headers": {"Accept": "application/json"} + }, + "response": { + "status": {"operator": "equals", "value": "200"}, + "headers": [{"key": "Content-Type", "value": "application/json", "operator": "equals"}], + "body": [{"operator": "jstructcmps", "start": ["data"], "value": [{"index": "", "pubkey": ""}]}] + } + }, + { + "topics": ["beacon", "states_validator_identities", "post"], + "comment": "Correct hexadecimal values #1", + "request": { + "method": "POST", + "body": { + "content-type": "application/json", + "data": "[\"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\"]" + }, + "url": "/eth/v1/beacon/states/head/validator_identities", + "headers": {"Accept": "application/json"} + }, + "response": { + "status": {"operator": "equals", "value": "200"}, + "headers": [{"key": "Content-Type", "value": "application/json", "operator": "equals"}], + "body": [{"operator": "jstructcmps", "start": ["data"], "value": [{"index": "", "pubkey": ""}]}] + } + }, + { + "topics": ["beacon", "states_validator_identities", "post"], + "comment": "Incorrect hexadecimal values #1", + "request": { + "method": "POST", + "body": { + "content-type": "application/json", + "data": "[\"0x\"]" + }, + "url": "/eth/v1/beacon/states/head/validator_identities", + "headers": {"Accept": "application/json"} + }, + "response": { + "status": {"operator": "equals", "value": "400"}, + "headers": [{"key": "Content-Type", "value": "application/json", "operator": "equals"}], + "body": [{"operator": "jstructcmpns", "value": {"code": 400, "message": ""}}] + } + }, { "topics": ["beacon", "states_committees"], "request": { @@ -4216,7 +4269,7 @@ "response": { "status": {"operator": "equals", "value": "200"}, "headers": [{"key": "Content-Type", "value": "application/json", "operator": "equals"}], - "body": [{"operator": "jstructcmps", "start": ["data"], "value": 
{"MAX_COMMITTEES_PER_SLOT":"","TARGET_COMMITTEE_SIZE":"","MAX_VALIDATORS_PER_COMMITTEE":"","SHUFFLE_ROUND_COUNT":"","HYSTERESIS_QUOTIENT":"","HYSTERESIS_DOWNWARD_MULTIPLIER":"","HYSTERESIS_UPWARD_MULTIPLIER":"","MIN_DEPOSIT_AMOUNT":"","MAX_EFFECTIVE_BALANCE":"","EFFECTIVE_BALANCE_INCREMENT":"","MIN_ATTESTATION_INCLUSION_DELAY":"","SLOTS_PER_EPOCH":"","MIN_SEED_LOOKAHEAD":"","MAX_SEED_LOOKAHEAD":"","EPOCHS_PER_ETH1_VOTING_PERIOD":"","SLOTS_PER_HISTORICAL_ROOT":"","MIN_EPOCHS_TO_INACTIVITY_PENALTY":"","EPOCHS_PER_HISTORICAL_VECTOR":"","EPOCHS_PER_SLASHINGS_VECTOR":"","HISTORICAL_ROOTS_LIMIT":"","VALIDATOR_REGISTRY_LIMIT":"","BASE_REWARD_FACTOR":"","WHISTLEBLOWER_REWARD_QUOTIENT":"","PROPOSER_REWARD_QUOTIENT":"","INACTIVITY_PENALTY_QUOTIENT":"","MIN_SLASHING_PENALTY_QUOTIENT":"","PROPORTIONAL_SLASHING_MULTIPLIER":"","MAX_PROPOSER_SLASHINGS":"","MAX_ATTESTER_SLASHINGS":"","MAX_ATTESTATIONS":"","MAX_DEPOSITS":"","MAX_VOLUNTARY_EXITS":"","INACTIVITY_PENALTY_QUOTIENT_ALTAIR":"","MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR":"","PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR":"","SYNC_COMMITTEE_SIZE":"","EPOCHS_PER_SYNC_COMMITTEE_PERIOD":"","MIN_SYNC_COMMITTEE_PARTICIPANTS":"","UPDATE_TIMEOUT":"","INACTIVITY_PENALTY_QUOTIENT_BELLATRIX":"","MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX":"","PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX":"","MAX_BYTES_PER_TRANSACTION":"","MAX_TRANSACTIONS_PER_PAYLOAD":"","BYTES_PER_LOGS_BLOOM":"","MAX_EXTRA_DATA_BYTES":"","MAX_BLS_TO_EXECUTION_CHANGES":"","MAX_WITHDRAWALS_PER_PAYLOAD":"","MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP":"","FIELD_ELEMENTS_PER_BLOB":"","MAX_BLOB_COMMITMENTS_PER_BLOCK":"","KZG_COMMITMENT_INCLUSION_PROOF_DEPTH":"","PRESET_BASE":"","CONFIG_NAME":"","TERMINAL_TOTAL_DIFFICULTY":"","TERMINAL_BLOCK_HASH":"","TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH":"","MIN_GENESIS_ACTIVE_VALIDATOR_COUNT":"","MIN_GENESIS_TIME":"","GENESIS_FORK_VERSION":"","GENESIS_DELAY":"","ALTAIR_FORK_VERSION":"","ALTAIR_FORK_EPOCH":"","BELLATRIX_FORK_VERSION":"","BELLATRIX_FORK_EPOCH":"","CAPELLA_FORK_VERSION":"","CAPELLA_FORK_EPOCH":"","DENEB_FORK_VERSION":"","DENEB_FORK_EPOCH":"","ELECTRA_FORK_VERSION":"","ELECTRA_FORK_EPOCH":"","FULU_FORK_VERSION":"","FULU_FORK_EPOCH":"","SECONDS_PER_SLOT":"","SECONDS_PER_ETH1_BLOCK":"","MIN_VALIDATOR_WITHDRAWABILITY_DELAY":"","SHARD_COMMITTEE_PERIOD":"","ETH1_FOLLOW_DISTANCE":"","INACTIVITY_SCORE_BIAS":"","INACTIVITY_SCORE_RECOVERY_RATE":"","EJECTION_BALANCE":"","MIN_PER_EPOCH_CHURN_LIMIT":"","CHURN_LIMIT_QUOTIENT":"","MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT":"","PROPOSER_SCORE_BOOST":"","REORG_HEAD_WEIGHT_THRESHOLD":"","REORG_PARENT_WEIGHT_THRESHOLD":"","REORG_MAX_EPOCHS_SINCE_FINALIZATION":"","DEPOSIT_CHAIN_ID":"","DEPOSIT_NETWORK_ID":"","DEPOSIT_CONTRACT_ADDRESS":"","MAX_PAYLOAD_SIZE":"","MAX_REQUEST_BLOCKS":"","EPOCHS_PER_SUBNET_SUBSCRIPTION":"","MIN_EPOCHS_FOR_BLOCK_REQUESTS":"","TTFB_TIMEOUT":"","RESP_TIMEOUT":"","ATTESTATION_PROPAGATION_SLOT_RANGE":"","MAXIMUM_GOSSIP_CLOCK_DISPARITY":"","MESSAGE_DOMAIN_INVALID_SNAPPY":"","MESSAGE_DOMAIN_VALID_SNAPPY":"","SUBNETS_PER_NODE":"","ATTESTATION_SUBNET_COUNT":"","ATTESTATION_SUBNET_EXTRA_BITS":"","ATTESTATION_SUBNET_PREFIX_BITS":"","MAX_REQUEST_BLOCKS_DENEB":"","MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS":"","BLOB_SIDECAR_SUBNET_COUNT":"","MAX_BLOBS_PER_BLOCK":"","MAX_REQUEST_BLOB_SIDECARS":"","MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA":"","MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT":"","BLOB_SIDECAR_SUBNET_COUNT_ELECTRA":"","MAX_BLOBS_PER_BLOCK_ELECTRA":"","MAX_REQUEST_BLOB_SIDECARS_ELECTRA":"","NUMBER_OF_COLUMNS":"","NUMBER_OF_CUSTODY_GRO
UPS":"","DATA_COLUMN_SIDECAR_SUBNET_COUNT":"","MAX_REQUEST_DATA_COLUMN_SIDECARS":"","SAMPLES_PER_SLOT":"","CUSTODY_REQUIREMENT":"","VALIDATOR_CUSTODY_REQUIREMENT":"","BALANCE_PER_ADDITIONAL_CUSTODY_GROUP":"","BLS_WITHDRAWAL_PREFIX":"","ETH1_ADDRESS_WITHDRAWAL_PREFIX":"","DOMAIN_BEACON_PROPOSER":"","DOMAIN_BEACON_ATTESTER":"","DOMAIN_RANDAO":"","DOMAIN_DEPOSIT":"","DOMAIN_VOLUNTARY_EXIT":"","DOMAIN_SELECTION_PROOF":"","DOMAIN_AGGREGATE_AND_PROOF":"","TIMELY_SOURCE_FLAG_INDEX":"","TIMELY_TARGET_FLAG_INDEX":"","TIMELY_HEAD_FLAG_INDEX":"","TIMELY_SOURCE_WEIGHT":"","TIMELY_TARGET_WEIGHT":"","TIMELY_HEAD_WEIGHT":"","SYNC_REWARD_WEIGHT":"","PROPOSER_WEIGHT":"","WEIGHT_DENOMINATOR":"","DOMAIN_SYNC_COMMITTEE":"","DOMAIN_SYNC_COMMITTEE_SELECTION_PROOF":"","DOMAIN_CONTRIBUTION_AND_PROOF":"","DOMAIN_BLS_TO_EXECUTION_CHANGE":"","TARGET_AGGREGATORS_PER_COMMITTEE":"","TARGET_AGGREGATORS_PER_SYNC_SUBCOMMITTEE":"","SYNC_COMMITTEE_SUBNET_COUNT":"","UNSET_DEPOSIT_REQUESTS_START_INDEX":"","FULL_EXIT_REQUEST_AMOUNT":"","COMPOUNDING_WITHDRAWAL_PREFIX":"","DEPOSIT_REQUEST_TYPE":"","WITHDRAWAL_REQUEST_TYPE":"","CONSOLIDATION_REQUEST_TYPE":"","MIN_ACTIVATION_BALANCE":"","MAX_EFFECTIVE_BALANCE_ELECTRA":"","MIN_SLASHING_PENALTY_QUOTIENT_ELECTRA":"","WHISTLEBLOWER_REWARD_QUOTIENT_ELECTRA":"","PENDING_DEPOSITS_LIMIT":"","PENDING_PARTIAL_WITHDRAWALS_LIMIT":"","PENDING_CONSOLIDATIONS_LIMIT":"","MAX_ATTESTER_SLASHINGS_ELECTRA":"","MAX_ATTESTATIONS_ELECTRA":"","MAX_DEPOSIT_REQUESTS_PER_PAYLOAD":"","MAX_WITHDRAWAL_REQUESTS_PER_PAYLOAD":"","MAX_CONSOLIDATION_REQUESTS_PER_PAYLOAD":"","MAX_PENDING_PARTIALS_PER_WITHDRAWALS_SWEEP":"","MAX_PENDING_DEPOSITS_PER_EPOCH":""}}] + "body": [{"operator": "jstructcmps", "start": ["data"], "value": {"MAX_COMMITTEES_PER_SLOT":"","TARGET_COMMITTEE_SIZE":"","MAX_VALIDATORS_PER_COMMITTEE":"","SHUFFLE_ROUND_COUNT":"","HYSTERESIS_QUOTIENT":"","HYSTERESIS_DOWNWARD_MULTIPLIER":"","HYSTERESIS_UPWARD_MULTIPLIER":"","MIN_DEPOSIT_AMOUNT":"","MAX_EFFECTIVE_BALANCE":"","EFFECTIVE_BALANCE_INCREMENT":"","MIN_ATTESTATION_INCLUSION_DELAY":"","SLOTS_PER_EPOCH":"","MIN_SEED_LOOKAHEAD":"","MAX_SEED_LOOKAHEAD":"","EPOCHS_PER_ETH1_VOTING_PERIOD":"","SLOTS_PER_HISTORICAL_ROOT":"","MIN_EPOCHS_TO_INACTIVITY_PENALTY":"","EPOCHS_PER_HISTORICAL_VECTOR":"","EPOCHS_PER_SLASHINGS_VECTOR":"","HISTORICAL_ROOTS_LIMIT":"","VALIDATOR_REGISTRY_LIMIT":"","BASE_REWARD_FACTOR":"","WHISTLEBLOWER_REWARD_QUOTIENT":"","PROPOSER_REWARD_QUOTIENT":"","INACTIVITY_PENALTY_QUOTIENT":"","MIN_SLASHING_PENALTY_QUOTIENT":"","PROPORTIONAL_SLASHING_MULTIPLIER":"","MAX_PROPOSER_SLASHINGS":"","MAX_ATTESTER_SLASHINGS":"","MAX_ATTESTATIONS":"","MAX_DEPOSITS":"","MAX_VOLUNTARY_EXITS":"","INACTIVITY_PENALTY_QUOTIENT_ALTAIR":"","MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR":"","PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR":"","SYNC_COMMITTEE_SIZE":"","EPOCHS_PER_SYNC_COMMITTEE_PERIOD":"","MIN_SYNC_COMMITTEE_PARTICIPANTS":"","UPDATE_TIMEOUT":"","INACTIVITY_PENALTY_QUOTIENT_BELLATRIX":"","MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX":"","PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX":"","MAX_BYTES_PER_TRANSACTION":"","MAX_TRANSACTIONS_PER_PAYLOAD":"","BYTES_PER_LOGS_BLOOM":"","MAX_EXTRA_DATA_BYTES":"","MAX_BLS_TO_EXECUTION_CHANGES":"","MAX_WITHDRAWALS_PER_PAYLOAD":"","MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP":"","FIELD_ELEMENTS_PER_BLOB":"","MAX_BLOB_COMMITMENTS_PER_BLOCK":"","KZG_COMMITMENT_INCLUSION_PROOF_DEPTH":"","PRESET_BASE":"","CONFIG_NAME":"","TERMINAL_TOTAL_DIFFICULTY":"","TERMINAL_BLOCK_HASH":"","TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH":"","MIN_GENESIS_ACTIVE_VALIDATOR_CO
UNT":"","MIN_GENESIS_TIME":"","GENESIS_FORK_VERSION":"","GENESIS_DELAY":"","ALTAIR_FORK_VERSION":"","ALTAIR_FORK_EPOCH":"","BELLATRIX_FORK_VERSION":"","BELLATRIX_FORK_EPOCH":"","CAPELLA_FORK_VERSION":"","CAPELLA_FORK_EPOCH":"","DENEB_FORK_VERSION":"","DENEB_FORK_EPOCH":"","ELECTRA_FORK_VERSION":"","ELECTRA_FORK_EPOCH":"","FULU_FORK_VERSION":"","FULU_FORK_EPOCH":"","SECONDS_PER_SLOT":"","SECONDS_PER_ETH1_BLOCK":"","MIN_VALIDATOR_WITHDRAWABILITY_DELAY":"","SHARD_COMMITTEE_PERIOD":"","ETH1_FOLLOW_DISTANCE":"","INACTIVITY_SCORE_BIAS":"","INACTIVITY_SCORE_RECOVERY_RATE":"","EJECTION_BALANCE":"","MIN_PER_EPOCH_CHURN_LIMIT":"","CHURN_LIMIT_QUOTIENT":"","MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT":"","PROPOSER_SCORE_BOOST":"","REORG_HEAD_WEIGHT_THRESHOLD":"","REORG_PARENT_WEIGHT_THRESHOLD":"","REORG_MAX_EPOCHS_SINCE_FINALIZATION":"","DEPOSIT_CHAIN_ID":"","DEPOSIT_NETWORK_ID":"","DEPOSIT_CONTRACT_ADDRESS":"","MAX_PAYLOAD_SIZE":"","MAX_REQUEST_BLOCKS":"","EPOCHS_PER_SUBNET_SUBSCRIPTION":"","MIN_EPOCHS_FOR_BLOCK_REQUESTS":"","TTFB_TIMEOUT":"","RESP_TIMEOUT":"","ATTESTATION_PROPAGATION_SLOT_RANGE":"","MAXIMUM_GOSSIP_CLOCK_DISPARITY":"","MESSAGE_DOMAIN_INVALID_SNAPPY":"","MESSAGE_DOMAIN_VALID_SNAPPY":"","SUBNETS_PER_NODE":"","ATTESTATION_SUBNET_COUNT":"","ATTESTATION_SUBNET_EXTRA_BITS":"","ATTESTATION_SUBNET_PREFIX_BITS":"","MAX_REQUEST_BLOCKS_DENEB":"","MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS":"","BLOB_SIDECAR_SUBNET_COUNT":"","MAX_BLOBS_PER_BLOCK":"","MAX_REQUEST_BLOB_SIDECARS":"","MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA":"","MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT":"","BLOB_SIDECAR_SUBNET_COUNT_ELECTRA":"","MAX_BLOBS_PER_BLOCK_ELECTRA":"","MAX_REQUEST_BLOB_SIDECARS_ELECTRA":"","NUMBER_OF_COLUMNS":"","NUMBER_OF_CUSTODY_GROUPS":"","DATA_COLUMN_SIDECAR_SUBNET_COUNT":"","MAX_REQUEST_DATA_COLUMN_SIDECARS":"","SAMPLES_PER_SLOT":"","CUSTODY_REQUIREMENT":"","VALIDATOR_CUSTODY_REQUIREMENT":"","BALANCE_PER_ADDITIONAL_CUSTODY_GROUP":"","BLOB_SCHEDULE": [{"EPOCH": "*", "MAX_BLOBS_PER_BLOCK": "*"}],"BLS_WITHDRAWAL_PREFIX":"","ETH1_ADDRESS_WITHDRAWAL_PREFIX":"","DOMAIN_BEACON_PROPOSER":"","DOMAIN_BEACON_ATTESTER":"","DOMAIN_RANDAO":"","DOMAIN_DEPOSIT":"","DOMAIN_VOLUNTARY_EXIT":"","DOMAIN_SELECTION_PROOF":"","DOMAIN_AGGREGATE_AND_PROOF":"","TIMELY_SOURCE_FLAG_INDEX":"","TIMELY_TARGET_FLAG_INDEX":"","TIMELY_HEAD_FLAG_INDEX":"","TIMELY_SOURCE_WEIGHT":"","TIMELY_TARGET_WEIGHT":"","TIMELY_HEAD_WEIGHT":"","SYNC_REWARD_WEIGHT":"","PROPOSER_WEIGHT":"","WEIGHT_DENOMINATOR":"","DOMAIN_SYNC_COMMITTEE":"","DOMAIN_SYNC_COMMITTEE_SELECTION_PROOF":"","DOMAIN_CONTRIBUTION_AND_PROOF":"","DOMAIN_BLS_TO_EXECUTION_CHANGE":"","TARGET_AGGREGATORS_PER_COMMITTEE":"","TARGET_AGGREGATORS_PER_SYNC_SUBCOMMITTEE":"","SYNC_COMMITTEE_SUBNET_COUNT":"","UNSET_DEPOSIT_REQUESTS_START_INDEX":"","FULL_EXIT_REQUEST_AMOUNT":"","COMPOUNDING_WITHDRAWAL_PREFIX":"","DEPOSIT_REQUEST_TYPE":"","WITHDRAWAL_REQUEST_TYPE":"","CONSOLIDATION_REQUEST_TYPE":"","MIN_ACTIVATION_BALANCE":"","MAX_EFFECTIVE_BALANCE_ELECTRA":"","MIN_SLASHING_PENALTY_QUOTIENT_ELECTRA":"","WHISTLEBLOWER_REWARD_QUOTIENT_ELECTRA":"","PENDING_DEPOSITS_LIMIT":"","PENDING_PARTIAL_WITHDRAWALS_LIMIT":"","PENDING_CONSOLIDATIONS_LIMIT":"","MAX_ATTESTER_SLASHINGS_ELECTRA":"","MAX_ATTESTATIONS_ELECTRA":"","MAX_DEPOSIT_REQUESTS_PER_PAYLOAD":"","MAX_WITHDRAWAL_REQUESTS_PER_PAYLOAD":"","MAX_CONSOLIDATION_REQUESTS_PER_PAYLOAD":"","MAX_PENDING_PARTIALS_PER_WITHDRAWALS_SWEEP":"","MAX_PENDING_DEPOSITS_PER_EPOCH":""}}] } }, { @@ -4241,6 +4294,30 @@ "status": {"operator": "equals", "value": "410"} } }, + { + "topics": 
["debug", "beacon_data_column_sidecars_blockid"], + "request": { + "url": "/eth/v1/debug/beacon/data_column_sidecars/head", + "headers": {"Accept": "application/json"} + }, + "response": {"status": {"operator": "equals", "value": "200"}} + }, + { + "topics": ["debug", "beacon_data_column_sidecars_blockid"], + "request": { + "url": "/eth/v1/debug/beacon/data_column_sidecars/finalized", + "headers": {"Accept": "application/json"} + }, + "response": {"status": {"operator": "equals", "value": "200"}} + }, + { + "topics": ["debug", "beacon_data_column_sidecars_blockid"], + "request": { + "url": "/eth/v1/debug/beacon/data_column_sidecars/0x0000000000000000000000000000000000000000000000000000000000000000", + "headers": {"Accept": "application/json"} + }, + "response": {"status": {"operator": "equals", "value": "404"}} + }, { "topics": ["debug", "beacon_states_head_slow", "slow"], "request": { diff --git a/nix/checksums.nix b/nix/checksums.nix index d79345d240..c9c9f3d452 100644 --- a/nix/checksums.nix +++ b/nix/checksums.nix @@ -6,7 +6,7 @@ let in pkgs.fetchFromGitHub { owner = "nim-lang"; repo = "checksums"; - rev = tools.findKeyValue "^ +ChecksumsStableCommit = \"([a-f0-9]+)\"$" sourceFile; + rev = tools.findKeyValue "^ +ChecksumsStableCommit = \"([a-f0-9]+)\".*$" sourceFile; # WARNING: Requires manual updates when Nim compiler version changes. - hash = "sha256-Bm5iJoT2kAvcTexiLMFBa9oU5gf7d4rWjo3OiN7obWQ="; + hash = "sha256-JZhWqn4SrAgNw/HLzBK0rrj3WzvJ3Tv1nuDMn83KoYY="; } diff --git a/nix/nimble.nix b/nix/nimble.nix index 39c5e0fff7..1eabe11dde 100644 --- a/nix/nimble.nix +++ b/nix/nimble.nix @@ -7,7 +7,7 @@ in pkgs.fetchFromGitHub { owner = "nim-lang"; repo = "nimble"; fetchSubmodules = true; - rev = tools.findKeyValue "^ +NimbleStableCommit = \"([a-f0-9]+)\".+" sourceFile; + rev = tools.findKeyValue "^ +NimbleStableCommit = \"([a-f0-9]+)\".*$" sourceFile; # WARNING: Requires manual updates when Nim compiler version changes. - hash = "sha256-Rz48sGUKZEAp+UySla+MlsOfsERekuGKw69Tm11fDz8="; + hash = "sha256-wgzFhModFkwB8st8F5vSkua7dITGGC2cjoDvgkRVZMs="; } diff --git a/nix/sat.nix b/nix/sat.nix index ca6403f68f..dc3d5df740 100644 --- a/nix/sat.nix +++ b/nix/sat.nix @@ -6,7 +6,7 @@ let in pkgs.fetchFromGitHub { owner = "nim-lang"; repo = "sat"; - rev = tools.findKeyValue "^ +SatStableCommit = \"([a-f0-9]+)\"$" sourceFile; + rev = tools.findKeyValue "^ +SatStableCommit = \"([a-f0-9]+)\".*$" sourceFile; # WARNING: Requires manual updates when Nim compiler version changes. hash = "sha256-JFrrSV+mehG0gP7NiQ8hYthL0cjh44HNbXfuxQNhq7c="; } diff --git a/research/block_sim.nim b/research/block_sim.nim index 35bcfa92fe..6bb2cd5d23 100644 --- a/research/block_sim.nim +++ b/research/block_sim.nim @@ -143,7 +143,7 @@ cli do(slots = SLOTS_PER_EPOCH * 7, echo "Starting simulation..." - let db = BeaconChainDB.new("block_sim_db") + let db = BeaconChainDB.new("block_sim_db", cfg) defer: db.close() ChainDAGRef.preInit(db, genesisState[]) @@ -157,7 +157,7 @@ cli do(slots = SLOTS_PER_EPOCH * 7, except Exception as exc: raiseAssert "Failed to initialize Taskpool: " & exc.msg verifier = BatchVerifier.init(rng, taskpool) - quarantine = newClone(Quarantine.init()) + quarantine = newClone(Quarantine.init(cfg)) attPool = AttestationPool.init(dag, quarantine) batchCrypto = BatchCrypto.new( rng, eager = func(): bool = true, @@ -465,4 +465,4 @@ cli do(slots = SLOTS_PER_EPOCH * 7, echo "Done!" 
- printTimers(dag.headState, attesters, true, timers) \ No newline at end of file + printTimers(dag.headState, attesters, true, timers) diff --git a/tests/consensus_spec/all_tests.nim b/tests/consensus_spec/all_tests.nim index d17bf5f540..5ece555cce 100644 --- a/tests/consensus_spec/all_tests.nim +++ b/tests/consensus_spec/all_tests.nim @@ -1,5 +1,5 @@ # beacon_chain -# Copyright (c) 2018-2024 Status Research & Development GmbH +# Copyright (c) 2018-2025 Status Research & Development GmbH # Licensed and distributed under either of # * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT). # * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0). @@ -14,6 +14,7 @@ # Tests that do not depend on `mainnet` vs `minimal` compile-time configuration import + ./test_fixture_fork_digest, ./test_fixture_kzg, ./test_fixture_networking, ./test_fixture_ssz_generic_types diff --git a/tests/consensus_spec/altair/test_fixture_light_client_sync_protocol.nim b/tests/consensus_spec/altair/test_fixture_light_client_sync_protocol.nim index 25543d0a32..4f05531dec 100644 --- a/tests/consensus_spec/altair/test_fixture_light_client_sync_protocol.nim +++ b/tests/consensus_spec/altair/test_fixture_light_client_sync_protocol.nim @@ -203,7 +203,7 @@ proc runTest(storeDataFork: static LightClientDataFork) = store.optimistic_header == update.attested_header store.current_max_active_participants > 0 - # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.0/tests/core/pyspec/eth2spec/test/altair/unittests/light_client/test_sync_protocol.py#L64-L96 + # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/tests/core/pyspec/eth2spec/test/altair/unittests/light_client/test_sync_protocol.py#L64-L96 test "test_process_light_client_update_at_period_boundary": var forked = assignClone(genesisState[]) template state(): auto = forked[].altairData.data diff --git a/tests/consensus_spec/fulu/test_fixture_ssz_consensus_objects.nim b/tests/consensus_spec/fulu/test_fixture_ssz_consensus_objects.nim index 8287104083..0b23639e4c 100644 --- a/tests/consensus_spec/fulu/test_fixture_ssz_consensus_objects.nim +++ b/tests/consensus_spec/fulu/test_fixture_ssz_consensus_objects.nim @@ -123,7 +123,7 @@ suite "EF - Fulu - SSZ consensus objects " & preset(): of "BeaconBlock": checkSSZ(electra.BeaconBlock, path, hash) of "BeaconBlockBody": checkSSZ(electra.BeaconBlockBody, path, hash) of "BeaconBlockHeader": checkSSZ(BeaconBlockHeader, path, hash) - of "BeaconState": checkSSZ(electra.BeaconState, path, hash) + of "BeaconState": checkSSZ(fulu.BeaconState, path, hash) of "BlobIdentifier": checkSSZ(BlobIdentifier, path, hash) of "BlobSidecar": checkSSZ(BlobSidecar, path, hash) of "BLSToExecutionChange": checkSSZ(BLSToExecutionChange, path, hash) diff --git a/tests/consensus_spec/fulu/test_fixture_state_transition_epoch.nim b/tests/consensus_spec/fulu/test_fixture_state_transition_epoch.nim index 465cbe9982..b27161ef79 100644 --- a/tests/consensus_spec/fulu/test_fixture_state_transition_epoch.nim +++ b/tests/consensus_spec/fulu/test_fixture_state_transition_epoch.nim @@ -41,6 +41,8 @@ const HistoricalSummariesUpdateDir = RootDir/"historical_summaries_update" PendingConsolidationsDir = RootDir/"pending_consolidations" PendingDepositsDir = RootDir/"pending_deposits" + ProposerLookaheadDir = RootDir/"proposer_lookahead" + doAssert (toHashSet(mapIt(toSeq(walkDir(RootDir, relative = false)), it.path)) - toHashSet([SyncCommitteeDir])) == @@ -49,7 
+51,7 @@ doAssert (toHashSet(mapIt(toSeq(walkDir(RootDir, relative = false)), it.path)) - SlashingsDir, Eth1DataResetDir, EffectiveBalanceUpdatesDir, SlashingsResetDir, RandaoMixesResetDir, ParticipationFlagDir, RewardsAndPenaltiesDir, HistoricalSummariesUpdateDir, - PendingDepositsDir, PendingConsolidationsDir]) + PendingDepositsDir, PendingConsolidationsDir, ProposerLookaheadDir]) template runSuite( suiteDir, testName: string, transitionProc: untyped): untyped = @@ -153,6 +155,11 @@ runSuite(PendingDepositsDir, "Pending deposits"): runSuite(PendingConsolidationsDir, "Pending consolidations"): process_pending_consolidations(cfg, state) +# Proposer lookahead +# --------------------------------------------------------------- +runSuite(ProposerLookaheadDir, "Proposer lookahead"): + process_proposer_lookahead(state, cache) + # Sync committee updates # --------------------------------------------------------------- diff --git a/tests/consensus_spec/test_fixture_fork_choice.nim b/tests/consensus_spec/test_fixture_fork_choice.nim index 15b83220c0..29a74e69bf 100644 --- a/tests/consensus_spec/test_fixture_fork_choice.nim +++ b/tests/consensus_spec/test_fixture_fork_choice.nim @@ -27,12 +27,21 @@ import from std/json import JsonNode, getBool, getInt, getStr, hasKey, items, len, pairs, `$`, `[]` from std/sequtils import mapIt, toSeq -from std/strutils import contains +from std/strutils import contains, rsplit from stew/byteutils import fromHex from ../testbcutil import addHeadBlock +from ../../beacon_chain/spec/peerdas_helpers import + verify_data_column_sidecar_inclusion_proof, + verify_data_column_sidecar_kzg_proofs from ../../beacon_chain/spec/state_transition_block import check_attester_slashing, validate_blobs +block: + template sourceDir: string = currentSourcePath.rsplit(io2.DirSep, 1)[0] + doAssert loadTrustedSetup( + sourceDir & + "/../../vendor/nim-kzg4844/kzg4844/csources/src/trusted_setup.txt", 0).isOk + # Test format described at https://github.com/ethereum/consensus-specs/tree/v1.3.0/tests/formats/fork_choice # Note that our implementation has been optimized with "ProtoArray" # instead of following the spec (in particular the "store"). 
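# -- Editorial aside (illustrative sketch, not part of the patch) --------------
# The fork-choice fixture runner above is step-driven: each test directory
# carries a list of operations (ticks, blocks with optional blob/column data,
# attestations, head checks) that the runner dispatches in order and compares
# against an expected outcome. The sketch below only shows that dispatch shape
# with stand-in types; `StepKind`, `Step` and `applyStep` are hypothetical and
# do not correspond to the real `Operation`/`stepOnBlock` code in
# test_fixture_fork_choice.nim.
type
  StepKind = enum
    skTick, skBlock, skAttestation, skChecks
  Step = object
    kind: StepKind
    valid: bool          # expected outcome, mirroring `step.valid` in fixtures

proc applyStep(step: Step): bool =
  # Stand-in for on_tick / on_block / on_attestation / head checks.
  case step.kind
  of skTick, skAttestation, skChecks: true
  of skBlock: step.valid   # a block step succeeds only when the fixture says so

when isMainModule:
  let steps = @[
    Step(kind: skTick, valid: true),
    Step(kind: skBlock, valid: true),
    Step(kind: skChecks, valid: true)]
  for step in steps:
    doAssert applyStep(step) == step.valid
# ------------------------------------------------------------------------------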
@@ -66,6 +75,7 @@ type of opOnBlock: blck: ForkedSignedBeaconBlock blobData: Opt[BlobData] + columnsValid: bool of opOnMergeBlock: powBlock: PowBlock of opOnPhase0AttesterSlashing: @@ -128,7 +138,8 @@ proc loadOps( let blck = loadBlock(path/filename & ".ssz_snappy", consensusFork) blobData = - when consensusFork >= ConsensusFork.Deneb: + when consensusFork in [ConsensusFork.Deneb, ConsensusFork.Electra]: + doAssert not step.hasKey"columns" if step.hasKey"blobs": numExtraFields += 2 Opt.some BlobData( @@ -143,9 +154,29 @@ proc loadOps( doAssert not step.hasKey"blobs" Opt.none(BlobData) + var columnsValid = true + when consensusFork >= ConsensusFork.Fulu: + doAssert not step.hasKey"blobs" + if step.hasKey"columns": + numExtraFields += 1 + if step["columns"].len < 64: + columnsValid = false + for column_name in step["columns"]: + let column = parseTest( + path/(column_name.getStr()) & ".ssz_snappy", SSZ, + DataColumnSidecar) + columnsValid = columnsValid and + verify_data_column_sidecar_inclusion_proof(column).isOk and + verify_data_column_sidecar_kzg_proofs(column).isOk + if not columnsValid: + break + else: + doAssert not step.hasKey"columns" + result.add Operation(kind: opOnBlock, blck: ForkedSignedBeaconBlock.init(blck), - blobData: blobData) + blobData: blobData, + columnsValid: columnsValid) elif step.hasKey"attester_slashing": let filename = step["attester_slashing"].getStr() if fork >= ConsensusFork.Electra: @@ -184,11 +215,12 @@ proc stepOnBlock( stateCache: var StateCache, signedBlock: ForkySignedBeaconBlock, blobData: Opt[BlobData], + columnsValid: bool, time: BeaconTime, invalidatedHashes: Table[Eth2Digest, Eth2Digest]): Result[BlockRef, VerifierError] = - # 1. Validate blobs - when typeof(signedBlock).kind >= ConsensusFork.Deneb: + # 1. Validate blobs and columns + when typeof(signedBlock).kind in [ConsensusFork.Deneb, ConsensusFork.Electra]: let kzgCommits = signedBlock.message.body.blob_kzg_commitments.asSeq if kzgCommits.len > 0 or blobData.isSome: if blobData.isNone or kzgCommits.validate_blobs( @@ -197,6 +229,9 @@ proc stepOnBlock( else: doAssert blobData.isNone, "Pre-Deneb test with specified blob data" + if not columnsValid: + return err(VerifierError.Invalid) + # 2. Move state to proper slot doAssert dag.updateState( state, @@ -241,7 +276,7 @@ proc stepOnBlock( doAssert status.isOk() # 5. 
Update DAG with new head - var quarantine = Quarantine.init() + var quarantine = Quarantine.init(dag.cfg) let newHead = fkChoice[].get_head(dag, time).get() dag.updateHead(dag.getBlockRef(newHead).get(), quarantine, []) if dag.needStateCachesAndForkChoicePruning(): @@ -295,7 +330,9 @@ proc stepChecks( proc doRunTest( path: string, fork: ConsensusFork) {.raises: [KeyError, ValueError].} = - let db = BeaconChainDB.new("", inMemory = true) + let db = withConsensusFork(fork): + BeaconChainDB.new( + "", consensusFork.genesisTestRuntimeConfig, inMemory = true) defer: db.close() @@ -343,7 +380,7 @@ proc doRunTest( let status = stepOnBlock( stores.dag, stores.fkChoice, verifier, state[], stateCache, - forkyBlck, step.blobData, time, invalidatedHashes) + forkyBlck, step.blobData, step.columnsValid, time, invalidatedHashes) doAssert status.isOk == step.valid of opOnPhase0AttesterSlashing: let indices = check_attester_slashing( diff --git a/tests/consensus_spec/test_fixture_fork_digest.nim b/tests/consensus_spec/test_fixture_fork_digest.nim new file mode 100644 index 0000000000..ebd4f54415 --- /dev/null +++ b/tests/consensus_spec/test_fixture_fork_digest.nim @@ -0,0 +1,70 @@ +# beacon_chain +# Copyright (c) 2025 Status Research & Development GmbH +# Licensed and distributed under either of +# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT). +# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0). +# at your option. This file may not be copied, modified, or distributed except according to those terms. + +# https://github.com/ethereum/consensus-specs/blob/18387696969c0bb34e96164434a3a36edca296c9/tests/core/pyspec/eth2spec/test/fulu/validator/test_compute_fork_digest.py + +{.push raises: [].} +{.used.} + +import + unittest2, + ../../beacon_chain/spec/forks + +var cfg = defaultRuntimeConfig +cfg.ALTAIR_FORK_EPOCH = GENESIS_EPOCH +cfg.BELLATRIX_FORK_EPOCH = GENESIS_EPOCH +cfg.CAPELLA_FORK_EPOCH = GENESIS_EPOCH +cfg.DENEB_FORK_EPOCH = GENESIS_EPOCH +cfg.ELECTRA_FORK_EPOCH = 9.Epoch +cfg.FULU_FORK_EPOCH = 100.Epoch +cfg.BLOB_SCHEDULE = @[ + BlobParameters(EPOCH: 300.Epoch, MAX_BLOBS_PER_BLOCK: 300), + BlobParameters(EPOCH: 250.Epoch, MAX_BLOBS_PER_BLOCK: 275), + BlobParameters(EPOCH: 200.Epoch, MAX_BLOBS_PER_BLOCK: 200), + BlobParameters(EPOCH: 150.Epoch, MAX_BLOBS_PER_BLOCK: 175), + BlobParameters(EPOCH: 100.Epoch, MAX_BLOBS_PER_BLOCK: 100), + BlobParameters(EPOCH: 9.Epoch, MAX_BLOBS_PER_BLOCK: 9)] + +proc cfd( + epoch: uint64, genesis_validators_root: Eth2Digest, + fork_version: array[4, byte], expected: array[4, byte]) = + var cfg = cfg + cfg.FULU_FORK_VERSION = Version(fork_version) + check: + ForkDigest(expected) == atEpoch( + ForkDigests.init(cfg, genesis_validators_root), epoch.Epoch, cfg) + ForkDigest(expected) == compute_fork_digest_fulu( + cfg, genesis_validators_root, epoch.Epoch) + +func getGvr(filling: uint8): Eth2Digest = + var res: Eth2Digest + for i in 0 ..< res.data.len: + res.data[i] = filling + res + +suite "EF - Fulu - BPO forkdigests": + test "Different lengths and blob limits": + cfd(100, getGvr(0), [6'u8, 0, 0, 0], [0xdf'u8, 0x67, 0x55, 0x7b]) + cfd(101, getGvr(0), [6'u8, 0, 0, 0], [0xdf'u8, 0x67, 0x55, 0x7b]) + cfd(150, getGvr(0), [6'u8, 0, 0, 0], [0x8a'u8, 0xb3, 0x8b, 0x59]) + cfd(199, getGvr(0), [6'u8, 0, 0, 0], [0x8a'u8, 0xb3, 0x8b, 0x59]) + cfd(200, getGvr(0), [6'u8, 0, 0, 0], [0xd9'u8, 0xb8, 0x14, 0x38]) + cfd(201, getGvr(0), [6'u8, 0, 0, 0], [0xd9'u8, 0xb8, 0x14, 0x38]) + 
cfd(250, getGvr(0), [6'u8, 0, 0, 0], [0x4e'u8, 0xf3, 0x2a, 0x62]) + cfd(299, getGvr(0), [6'u8, 0, 0, 0], [0x4e'u8, 0xf3, 0x2a, 0x62]) + cfd(300, getGvr(0), [6'u8, 0, 0, 0], [0xca'u8, 0x10, 0x0d, 0x64]) + cfd(301, getGvr(0), [6'u8, 0, 0, 0], [0xca'u8, 0x10, 0x0d, 0x64]) + + test "Different genesis validators roots": + cfd(100, getGvr(1), [6'u8, 0, 0, 0], [0xfd'u8, 0x3a, 0xa2, 0xa2]) + cfd(100, getGvr(2), [6'u8, 0, 0, 0], [0x80'u8, 0xc6, 0xbd, 0x97]) + cfd(100, getGvr(3), [6'u8, 0, 0, 0], [0xf2'u8, 0x09, 0xfd, 0xfc]) + + test "Different fork versions": + cfd(100, getGvr(0), [6'u8, 0, 0, 1], [0x44'u8, 0xa5, 0x71, 0xe8]) + cfd(100, getGvr(0), [7'u8, 0, 0, 0], [0x70'u8, 0x6f, 0x46, 0x1a]) + cfd(100, getGvr(0), [7'u8, 0, 0, 1], [0x1a'u8, 0x34, 0x15, 0xc2]) \ No newline at end of file diff --git a/tests/consensus_spec/test_fixture_kzg.nim b/tests/consensus_spec/test_fixture_kzg.nim index 07eb573479..5873e63948 100644 --- a/tests/consensus_spec/test_fixture_kzg.nim +++ b/tests/consensus_spec/test_fixture_kzg.nim @@ -76,7 +76,7 @@ proc runVerifyKzgProofTest(suiteName, suitePath, path: string) = y = fromHex[32](data["input"]["y"].getStr) proof = fromHex[48](data["input"]["proof"].getStr) - # https://github.com/ethereum/consensus-specs/blob/v1.5.0/tests/formats/kzg_4844/verify_kzg_proof.md#condition + # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/tests/formats/kzg_4844/verify_kzg_proof.md#condition # "If the commitment or proof is invalid (e.g. not on the curve or not in # the G1 subgroup of the BLS curve) or `z` or `y` are not a valid BLS # field element, it should error, i.e. the output should be `null`." @@ -209,7 +209,7 @@ proc runComputeCellsTest(suiteName, suitePath, path: string) = output = data["output"] blob = fromHex[131072](data["input"]["blob"].getStr) - # https://github.com/ethereum/consensus-specs/blob/v1.5.0/tests/formats/kzg_7594/compute_cells.md#condition + # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/tests/formats/kzg_7594/compute_cells.md#condition if blob.isNone: check output.kind == JNull else: @@ -256,7 +256,7 @@ proc runVerifyCellKzgProofBatchTest(suiteName, suitePath, path: string) = cells = data["input"]["cells"].mapIt(fromHex[2048](it.getStr)) proofs = data["input"]["proofs"].mapIt(fromHex[48](it.getStr)) - # https://github.com/ethereum/consensus-specs/blob/v1.5.0/tests/formats/kzg_7594/verify_cell_kzg_proof_batch.md#condition + # https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.2/tests/formats/kzg_7594/verify_cell_kzg_proof_batch.md#condition # If the blob is invalid (e.g. incorrect length or one of the 32-byte # blocks does not represent a BLS field element) it should error, i.e. the # the output should be `null`. 
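# -- Editorial aside (illustrative sketch, not part of the patch) --------------
# The spec-format comments above share one convention: when an input is
# malformed (wrong length, point not on the curve, value not a field element),
# the fixture's expected output is JSON `null`, so the test asserts that the
# verification call fails rather than comparing a boolean. The stub below only
# illustrates that mapping with stdlib types; `tryVerify` is hypothetical and
# stands in for the Result-returning kzg4844 wrappers used in
# test_fixture_kzg.nim.
import std/[json, options]

proc tryVerify(inputWellFormed: bool): Option[bool] =
  # Stand-in for a KZG verification call; yields none() on malformed input.
  if inputWellFormed:
    some(true)
  else:
    none(bool)

proc checkCase(expected: JsonNode, inputWellFormed: bool) =
  let res = tryVerify(inputWellFormed)
  if res.isNone:
    doAssert expected.kind == JNull          # malformed input => expect null
  else:
    doAssert expected.getBool == res.get()   # well-formed input => expect bool

when isMainModule:
  checkCase(newJNull(), inputWellFormed = false)
  checkCase(%true, inputWellFormed = true)
# ------------------------------------------------------------------------------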
diff --git a/tests/consensus_spec/test_fixture_light_client_data_collection.nim b/tests/consensus_spec/test_fixture_light_client_data_collection.nim index aa91acef2e..bd71669877 100644 --- a/tests/consensus_spec/test_fixture_light_client_data_collection.nim +++ b/tests/consensus_spec/test_fixture_light_client_data_collection.nim @@ -146,7 +146,7 @@ proc runTest(suiteName, path: string, consensusFork: static ConsensusFork) = taskpool = Taskpool.new() var verifier = BatchVerifier.init(rng, taskpool) - quarantine = newClone(Quarantine.init()) + quarantine = newClone(Quarantine.init(cfg)) let steps = loadSteps(path, dag.forkDigests[]) for i, step in steps: diff --git a/tests/consensus_spec/test_fixture_ssz_generic_types.nim b/tests/consensus_spec/test_fixture_ssz_generic_types.nim index 2cbda1ab76..d46b43063d 100644 --- a/tests/consensus_spec/test_fixture_ssz_generic_types.nim +++ b/tests/consensus_spec/test_fixture_ssz_generic_types.nim @@ -73,6 +73,12 @@ type F: HashArray[4, FixedTestStruct] G: HashArray[2, VarTestStruct] + ProgressiveTestStruct = object + A: seq[byte] + B: seq[uint64] + C: seq[SmallTestStruct] + D: seq[seq[VarTestStruct]] + BitsStruct = object A: BitList[5] B: BitArray[2] @@ -99,6 +105,34 @@ proc checkBasic( # TODO check the value +proc checkProgressiveList( + sszSubType, dir: string, expectedHash: SSZHashTreeRoot +) {.raises: [ + IOError, SerializationError, TestSizeError, UnconsumedInput, ValueError].} = + var typeIdent: string + let wasMatched = + try: + scanf(sszSubType, "proglist_$+", typeIdent) + except ValueError: + false # Parsed `size` is out of range + doAssert wasMatched + + case typeIdent + of "bool": + checkBasic(seq[bool], dir, expectedHash) + of "uint8": + checkBasic(seq[uint8], dir, expectedHash) + of "uint16": + checkBasic(seq[uint16], dir, expectedHash) + of "uint32": + checkBasic(seq[uint32], dir, expectedHash) + of "uint64": + checkBasic(seq[uint64], dir, expectedHash) + of "uint128": + checkBasic(seq[UInt128], dir, expectedHash) + of "uint256": + checkBasic(seq[UInt256], dir, expectedHash) + macro testVector(typeIdent: string, size: int): untyped = # find the compile-time type to test # against the runtime combination (cartesian product) of @@ -237,6 +271,7 @@ proc sszCheck(baseDir, sszType, sszSubType: string) {.raises: [IOError, OSError, SerializationError, UnconsumedInput, ValueError, YamlConstructionError, YamlParserError].} = let dir = baseDir/sszSubType + checkpoint dir # Hash tree root var expectedHash: SSZHashTreeRoot @@ -265,6 +300,8 @@ proc sszCheck(baseDir, sszType, sszSubType: string) of 256: checkBasic(UInt256, dir, expectedHash) else: raise newException(ValueError, "unknown uint in test: " & sszSubType) + of "basic_progressivelist": + checkProgressiveList(sszSubType, dir, expectedHash) of "basic_vector": checkVector(sszSubType, dir, expectedHash) of "bitvector": checkBitVector(sszSubType, dir, expectedHash) of "bitlist": checkBitList(sszSubType, dir, expectedHash) @@ -280,6 +317,8 @@ proc sszCheck(baseDir, sszType, sszSubType: string) of "ComplexTestStruct": checkBasic(ComplexTestStruct, dir, expectedHash) checkBasic(HashArrayComplexTestStruct, dir, expectedHash) + of "ProgressiveTestStruct": + checkBasic(ProgressiveTestStruct, dir, expectedHash) of "BitsStruct": checkBasic(BitsStruct, dir, expectedHash) else: raise newException(ValueError, "unknown container in test: " & sszSubType) diff --git a/tests/test_attestation_pool.nim b/tests/test_attestation_pool.nim index 85831c5583..875a30f415 100644 --- a/tests/test_attestation_pool.nim +++ 
b/tests/test_attestation_pool.nim @@ -79,7 +79,7 @@ suite "Attestation pool processing" & preset(): validatorMonitor, {}) taskpool = Taskpool.new() verifier {.used.} = BatchVerifier.init(rng, taskpool) - quarantine = newClone(Quarantine.init()) + quarantine = newClone(Quarantine.init(dag.cfg)) pool = newClone(AttestationPool.init(dag, quarantine)) state = newClone(dag.headState) cache = StateCache() @@ -767,7 +767,7 @@ suite "Attestation pool electra processing" & preset(): makeTestDB( TOTAL_COMMITTEES * TARGET_COMMITTEE_SIZE * SLOTS_PER_EPOCH, cfg = cfg), validatorMonitor, {}) - quarantine = newClone(Quarantine.init()) + quarantine = newClone(Quarantine.init(dag.cfg)) pool = newClone(AttestationPool.init(dag, quarantine)) state = newClone(dag.headState) cache = StateCache() diff --git a/tests/test_beacon_chain_db.nim b/tests/test_beacon_chain_db.nim index b56ddfcf3a..07aca93581 100644 --- a/tests/test_beacon_chain_db.nim +++ b/tests/test_beacon_chain_db.nim @@ -1,5 +1,5 @@ # beacon_chain -# Copyright (c) 2018-2024 Status Research & Development GmbH +# Copyright (c) 2018-2025 Status Research & Development GmbH # Licensed under either of # * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or https://www.apache.org/licenses/LICENSE-2.0) # * MIT license ([LICENSE-MIT](LICENSE-MIT) or https://opensource.org/licenses/MIT) @@ -10,7 +10,7 @@ import unittest2, - ../beacon_chain/beacon_chain_db, + ../beacon_chain/[beacon_chain_db, beacon_chain_db_quarantine], ../beacon_chain/consensus_object_pools/block_dag, ../beacon_chain/spec/forks, ./testutil @@ -25,6 +25,7 @@ from ../beacon_chain/spec/beaconstate import initialize_hashed_beacon_state_from_eth1 from ../beacon_chain/spec/state_transition import noRollback from ../beacon_chain/validators/validator_monitor import ValidatorMonitor +from ./consensus_spec/fixtures_utils import genesisTestruntimeConfig from ./mocking/mock_genesis import mockEth1BlockHash from ./testblockutil import makeInitialDeposits from ./testdbutil import makeTestDB @@ -171,7 +172,8 @@ suite "Beacon chain DB" & preset(): db.getBlock(ZERO_HASH, phase0.TrustedSignedBeaconBlock).isNone test "sanity check phase 0 blocks" & preset(): - let db = BeaconChainDB.new("", inMemory = true) + let db = BeaconChainDB.new( + "", ConsensusFork.Phase0.genesisTestRuntimeConfig, inMemory = true) let signedBlock = withDigest((phase0.TrustedBeaconBlock)()) @@ -221,7 +223,8 @@ suite "Beacon chain DB" & preset(): db.close() test "sanity check Altair blocks" & preset(): - let db = BeaconChainDB.new("", inMemory = true) + let db = BeaconChainDB.new( + "", ConsensusFork.Altair.genesisTestRuntimeConfig, inMemory = true) let signedBlock = withDigest((altair.TrustedBeaconBlock)()) @@ -272,7 +275,8 @@ suite "Beacon chain DB" & preset(): db.close() test "sanity check Bellatrix blocks" & preset(): - let db = BeaconChainDB.new("", inMemory = true) + let db = BeaconChainDB.new( + "", ConsensusFork.Bellatrix.genesisTestRuntimeConfig, inMemory = true) let signedBlock = withDigest((bellatrix.TrustedBeaconBlock)()) @@ -323,7 +327,8 @@ suite "Beacon chain DB" & preset(): db.close() test "sanity check Capella blocks" & preset(): - let db = BeaconChainDB.new("", inMemory = true) + let db = BeaconChainDB.new( + "", ConsensusFork.Capella.genesisTestRuntimeConfig, inMemory = true) let signedBlock = withDigest((capella.TrustedBeaconBlock)()) @@ -374,7 +379,8 @@ suite "Beacon chain DB" & preset(): db.close() test "sanity check Deneb blocks" & preset(): - let db = BeaconChainDB.new("", inMemory = true) + let db = 
BeaconChainDB.new( + "", ConsensusFork.Deneb.genesisTestRuntimeConfig, inMemory = true) let signedBlock = withDigest((deneb.TrustedBeaconBlock)()) @@ -424,7 +430,8 @@ suite "Beacon chain DB" & preset(): db.close() test "sanity check Electra blocks" & preset(): - let db = BeaconChainDB.new("", inMemory = true) + let db = BeaconChainDB.new( + "", ConsensusFork.Electra.genesisTestRuntimeConfig, inMemory = true) let signedBlock = withDigest((electra.TrustedBeaconBlock)()) @@ -473,7 +480,8 @@ suite "Beacon chain DB" & preset(): db.close() test "sanity check Fulu blocks" & preset(): - let db = BeaconChainDB.new("", inMemory = true) + let db = BeaconChainDB.new( + "", ConsensusFork.Fulu.genesisTestRuntimeConfig, inMemory = true) let signedBlock = withDigest((fulu.TrustedBeaconBlock)()) @@ -1151,7 +1159,7 @@ suite "Beacon chain DB" & preset(): blockRoot0 = hash_tree_root(blockHeader0.message) blockRoot1 = hash_tree_root(blockHeader1.message) - # Ensure minimal-difference pairs on both block root and + # Ensure minimal-difference pairs on both block root and # data column index to verify that the columnkey uses both dataColumnSidecar0 = DataColumnSidecar(signed_block_header: blockHeader0, index: 3) dataColumnSidecar1 = DataColumnSidecar(signed_block_header: blockHeader0, index: 2) @@ -1172,7 +1180,7 @@ suite "Beacon chain DB" & preset(): not db.getDataColumnSidecarSZ(blockRoot1, 2, buf) db.putDataColumnSidecar(dataColumnSidecar0) - + check: db.getDataColumnSidecar(blockRoot0, 3, dataColumnSidecar) dataColumnSidecar == dataColumnSidecar0 @@ -1240,6 +1248,223 @@ suite "Beacon chain DB" & preset(): db.close() +suite "Quarantine" & preset(): + const cfg = defaultRuntimeConfig + + setup: + let + db = BeaconChainDB.new("", inMemory = true, cfg = cfg) + quarantine = db.getQuarantineDB() + + teardown: + db.close() + + func genBlockRoot(index: int): Eth2Digest = + var res: Eth2Digest + let tmp = uint64(index).toBytesLE() + copyMem(addr res.data[0], unsafeAddr tmp[0], sizeof(uint64)) + res + + func genKzgCommitment(index: int): KzgCommitment = + var res: KzgCommitment + let tmp = uint64(index).toBytesLE() + copyMem(addr res.bytes[0], unsafeAddr tmp[0], sizeof(uint64)) + res + + func genBlobSidecar( + index: int, + slot: int, + kzg_commitment: int, + proposer_index: int + ): BlobSidecar = + BlobSidecar( + index: BlobIndex(index), + kzg_commitment: genKzgCommitment(kzg_commitment), + signed_block_header: SignedBeaconBlockHeader( + message: BeaconBlockHeader( + slot: Slot(slot), + proposer_index: uint64(proposer_index)))) + + func genDataColumnSidecar( + index: int, + slot: int, + proposer_index: int + ): DataColumnSidecar = + DataColumnSidecar( + index: ColumnIndex(index), + signed_block_header: SignedBeaconBlockHeader( + message: BeaconBlockHeader( + slot: Slot(slot), + proposer_index: uint64(proposer_index)))) + + proc cmp( + a: openArray[ref BlobSidecar|ref DataColumnSidecar], + b: openArray[ref BlobSidecar|ref DataColumnSidecar] + ): bool = + if len(a) != len(b): + return false + for index in 0 ..< len(a): + if a[index][] != b[index][]: + return false + true + + proc generateBlobSidecars(): seq[ref BlobSidecar] = + @[ + newClone(genBlobSidecar(0, 100, 10, 24)), + newClone(genBlobSidecar(1, 100, 11, 24)), + newClone(genBlobSidecar(2, 100, 12, 24)), + newClone(genBlobSidecar(3, 100, 13, 24)), + newClone(genBlobSidecar(4, 100, 14, 24)), + newClone(genBlobSidecar(5, 100, 15, 24)), + newClone(genBlobSidecar(6, 100, 16, 24)), + newClone(genBlobSidecar(7, 100, 17, 24)), + newClone(genBlobSidecar(8, 100, 18, 24)) 
+ ] + + proc generateDataColumnSidecars(): seq[ref DataColumnSidecar] = + @[ + newClone(genDataColumnSidecar(0, 200, 100234)), + newClone(genDataColumnSidecar(7, 200, 100234)), + newClone(genDataColumnSidecar(14, 200, 100234)), + newClone(genDataColumnSidecar(21, 200, 100234)), + newClone(genDataColumnSidecar(28, 200, 100234)), + newClone(genDataColumnSidecar(35, 200, 100234)), + newClone(genDataColumnSidecar(42, 200, 100234)), + newClone(genDataColumnSidecar(49, 200, 100234)), + newClone(genDataColumnSidecar(56, 200, 100234)), + newClone(genDataColumnSidecar(63, 200, 100234)), + newClone(genDataColumnSidecar(70, 200, 100234)), + newClone(genDataColumnSidecar(77, 200, 100234)), + newClone(genDataColumnSidecar(84, 200, 100234)), + newClone(genDataColumnSidecar(91, 200, 100234)), + newClone(genDataColumnSidecar(98, 200, 100234)), + newClone(genDataColumnSidecar(127, 200, 100234)), + ] + + proc getSidecars( + quarantine: QuarantineDB, + T: typedesc[BlobSidecar|DataColumnSidecar], + blockRoot: Eth2Digest + ): seq[ref T] = + var res: seq[ref T] + for item in quarantine.sidecars(T, blockRoot): + res.add(newClone(item)) + res + + proc runDataSidecarTest( + quarantine: QuarantineDB, + T: typedesc[ForkyDataSidecar] + ) = + let + broots = @[ + genBlockRoot(100), genBlockRoot(200), genBlockRoot(300) + ] + sidecars = + when T is deneb.BlobSidecar: + generateBlobSidecars() + else: + generateDataColumnSidecars() + offsets = + when T is deneb.BlobSidecar: + @[(0, 8), (0, 3), (0, 5)] + else: + @[(0, 15), (4, 11), (0, 7)] + + check: + len(quarantine.getSidecars(T, broots[0])) == 0 + len(quarantine.getSidecars(T, broots[1])) == 0 + len(quarantine.getSidecars(T, broots[2])) == 0 + quarantine.sidecarsCount(T) == 0 + + quarantine.removeDataSidecars(T, broots[0]) + quarantine.removeDataSidecars(T, broots[1]) + quarantine.removeDataSidecars(T, broots[2]) + + quarantine.putDataSidecars(broots[0], + sidecars.toOpenArray(offsets[0][0], offsets[0][1])) + + block: + let + res1 = quarantine.getSidecars(T, broots[0]) + check: + quarantine.sidecarsCount(T) == len(res1) + len(res1) == (offsets[0][1] - offsets[0][0] + 1) + cmp(res1, sidecars.toOpenArray(offsets[0][0], offsets[0][1])) == true + len(quarantine.getSidecars(T, broots[1])) == 0 + len(quarantine.getSidecars(T, broots[2])) == 0 + + quarantine.putDataSidecars(broots[1], + sidecars.toOpenArray(offsets[1][0], offsets[1][1])) + + block: + let + res1 = quarantine.getSidecars(T, broots[0]) + res2 = quarantine.getSidecars(T, broots[1]) + check: + quarantine.sidecarsCount(T) == len(res1) + len(res2) + len(res1) == (offsets[0][1] - offsets[0][0] + 1) + len(res2) == (offsets[1][1] - offsets[1][0] + 1) + cmp(res1, sidecars.toOpenArray(offsets[0][0], offsets[0][1])) == true + cmp(res2, sidecars.toOpenArray(offsets[1][0], offsets[1][1])) == true + len(quarantine.getSidecars(T, broots[2])) == 0 + + quarantine.putDataSidecars(broots[2], + sidecars.toOpenArray(offsets[2][0], offsets[2][1])) + + block: + let + res1 = quarantine.getSidecars(T, broots[0]) + res2 = quarantine.getSidecars(T, broots[1]) + res3 = quarantine.getSidecars(T, broots[2]) + check: + len(res1) == (offsets[0][1] - offsets[0][0] + 1) + len(res2) == (offsets[1][1] - offsets[1][0] + 1) + len(res3) == (offsets[2][1] - offsets[2][0] + 1) + quarantine.sidecarsCount(T) == len(res1) + len(res2) + len(res3) + cmp(res1, sidecars.toOpenArray(offsets[0][0], offsets[0][1])) == true + cmp(res2, sidecars.toOpenArray(offsets[1][0], offsets[1][1])) == true + cmp(res3, sidecars.toOpenArray(offsets[2][0], offsets[2][1])) == 
true + + quarantine.removeDataSidecars(T, broots[1]) + + block: + let + res1 = quarantine.getSidecars(T, broots[0]) + res3 = quarantine.getSidecars(T, broots[2]) + check: + len(res1) == (offsets[0][1] - offsets[0][0] + 1) + cmp(res1, sidecars.toOpenArray(offsets[0][0], offsets[0][1])) == true + len(quarantine.getSidecars(T, broots[1])) == 0 + len(res3) == (offsets[2][1] - offsets[2][0] + 1) + cmp(res3, sidecars.toOpenArray(offsets[2][0], offsets[2][1])) == true + quarantine.sidecarsCount(T) == len(res1) + len(res3) + + quarantine.removeDataSidecars(T, broots[0]) + + block: + let + res3 = quarantine.getSidecars(T, broots[2]) + check: + len(quarantine.getSidecars(T, broots[0])) == 0 + len(quarantine.getSidecars(T, broots[1])) == 0 + len(res3) == (offsets[2][1] - offsets[2][0] + 1) + cmp(res3, sidecars.toOpenArray(offsets[2][0], offsets[2][1])) == true + quarantine.sidecarsCount(T) == len(res3) + + quarantine.removeDataSidecars(T, broots[2]) + + check: + len(quarantine.getSidecars(T, broots[0])) == 0 + len(quarantine.getSidecars(T, broots[1])) == 0 + len(quarantine.getSidecars(T, broots[2])) == 0 + quarantine.sidecarsCount(T) == 0 + + test "put/iterate/remove test [BlobSidecars]": + quarantine.runDataSidecarTest(deneb.BlobSidecar) + + test "put/iterate/remove test [DataColumnSidecar]": + quarantine.runDataSidecarTest(fulu.DataColumnSidecar) + suite "FinalizedBlocks" & preset(): test "Basic ops" & preset(): var @@ -1267,4 +1492,4 @@ suite "FinalizedBlocks" & preset(): check: k in [Slot 0, Slot 5] items += 1 - check: items == 2 \ No newline at end of file + check: items == 2 diff --git a/tests/test_block_processor.nim b/tests/test_block_processor.nim index db4943ca94..553491b11e 100644 --- a/tests/test_block_processor.nim +++ b/tests/test_block_processor.nim @@ -47,7 +47,7 @@ suite "Block processor" & preset(): dag = init(ChainDAGRef, cfg, db, validatorMonitor, {}) var taskpool = Taskpool.new() - quarantine = newClone(Quarantine.init()) + quarantine = newClone(Quarantine.init(cfg)) blobQuarantine = newClone(BlobQuarantine()) attestationPool = newClone(AttestationPool.init(dag, quarantine)) elManager = new ELManager # TODO: initialise this properly diff --git a/tests/test_block_quarantine.nim b/tests/test_block_quarantine.nim index 85b6c140b7..43b8a11bcc 100644 --- a/tests/test_block_quarantine.nim +++ b/tests/test_block_quarantine.nim @@ -11,7 +11,7 @@ import unittest2, chronicles, - ../beacon_chain/spec/forks, + ../beacon_chain/spec/[forks, presets], ../beacon_chain/spec/datatypes/[phase0, deneb], ../beacon_chain/consensus_object_pools/block_quarantine @@ -40,7 +40,7 @@ suite "Block quarantine": b5 = makeBlobbyBlock(Slot 4, b3.root) b6 = makeBlobbyBlock(Slot 4, b4.root) - var quarantine: Quarantine + var quarantine = Quarantine.init(defaultRuntimeConfig) quarantine.addMissing(b1.root) check: @@ -54,20 +54,20 @@ suite "Block quarantine": quarantine.addOrphan(Slot 0, b3).isOk quarantine.addOrphan(Slot 0, b4).isOk - quarantine.addBlobless(Slot 0, b5) - quarantine.addBlobless(Slot 0, b6) + quarantine.addSidecarless(Slot 0, b5) + quarantine.addSidecarless(Slot 0, b6) (b4.root, ValidatorSig()) in quarantine.orphans - b5.root in quarantine.blobless - b6.root in quarantine.blobless + b5.root in quarantine.sidecarless + b6.root in quarantine.sidecarless quarantine.addUnviable(b4.root) check: (b4.root, ValidatorSig()) notin quarantine.orphans - b5.root in quarantine.blobless - b6.root notin quarantine.blobless + b5.root in quarantine.sidecarless + b6.root notin quarantine.sidecarless 
quarantine.addUnviable(b1.root) @@ -76,8 +76,8 @@ suite "Block quarantine": (b2.root, ValidatorSig()) notin quarantine.orphans (b3.root, ValidatorSig()) notin quarantine.orphans - b5.root notin quarantine.blobless - b6.root notin quarantine.blobless + b5.root notin quarantine.sidecarless + b6.root notin quarantine.sidecarless test "Recursive missing parent": let @@ -85,7 +85,7 @@ suite "Block quarantine": b1 = makeBlock(Slot 1, b0.root) b2 = makeBlock(Slot 2, b1.root) - var quarantine: Quarantine + var quarantine = Quarantine.init(defaultRuntimeConfig) check: b0.root notin quarantine.missing b1.root notin quarantine.missing @@ -121,7 +121,7 @@ suite "Block quarantine": b2.root notin quarantine.missing test "Keep downloading parent chain even if we hit missing limit": - var quarantine: Quarantine + var quarantine = Quarantine.init(defaultRuntimeConfig) var blocks = @[makeBlock(Slot 0, ZERO_HASH)] for i in 0.. RestApiResponse: + + if contentBody.isNone: + return RestApiResponse.jsonError(Http400, EmptyRequestBodyError) + + let + rawVersion = request.headers.getString("eth-consensus-version") + consensusFork = ConsensusFork.decodeString(rawVersion).valueOr: + return RestApiResponse.jsonError(Http400, "Invalid consensus version") + contentType = preferredContentType(jsonMediaType, + sszMediaType).valueOr: + return RestApiResponse.jsonError(Http406, "Content type not acceptable") + + if consensusFork < ConsensusFork.Fulu: + return RestApiResponse.jsonError(Http400, "Unsupported fork version") + + if contentType in [sszMediaType, jsonMediaType]: + RestApiResponse.response( + Http202, headers=[("eth-consensus-version", consensusFork.toString)]) + else: + RestApiResponse.jsonError(Http415, "Invalid Accept") + router.api2(MethodGet, "/eth/v1/builder/status") do () -> RestApiResponse: RestApiResponse.response(Http200) proc testSuite() = - suite "MEV calls serialization/deserialization and behavior test suite": let rng = HmacDrbgContext.new() @@ -470,7 +477,7 @@ proc testSuite() = else: ("application/json,application/octet-stream;q=0.9", ApplicationJsonMediaType) - (restAcceptType3, responseMediaType3) = + (restAcceptType3, _) = if responseKind == TestKind.Ssz: ("application/json;q=0.5,application/octet-stream;q=1.0", OctetStreamMediaType) @@ -502,7 +509,7 @@ proc testSuite() = check: response1.status == 200 response2.status == 200 - response3.status == 200 + response3.status == 202 let version1 = response1.headers.getString("eth-consensus-version") @@ -512,10 +519,8 @@ proc testSuite() = check: response1.contentType.isSome() response2.contentType.isSome() - response3.contentType.isSome() response1.contentType.get().mediaType == responseMediaType1 response2.contentType.get().mediaType == responseMediaType2 - response3.contentType.get().mediaType == responseMediaType3 version1 == ConsensusFork.Electra.toString() version2 == ConsensusFork.Electra.toString() version3 == ConsensusFork.Fulu.toString() @@ -527,17 +532,12 @@ proc testSuite() = payload2res = decodeBytesJsonOrSsz(SubmitBlindedBlockResponseElectra, response2.data, response2.contentType, version2) - payload3res = - decodeBytesJsonOrSsz(SubmitBlindedBlockResponseFulu, - response3.data, response3.contentType, version3) check: payload1res.isOk() payload2res.isOk() - payload3res.isOk() payload1res.get().data.execution_payload.parent_hash == parent_hash1 payload2res.get().data.execution_payload.parent_hash == parent_hash2 - payload3res.get().data.execution_payload.parent_hash == parent_hash3 asyncTest "/eth/v1/builder/status test": let response = 
await client.getStatus() diff --git a/tests/test_quarantine.nim b/tests/test_quarantine.nim index bd93edb38d..2234021b53 100644 --- a/tests/test_quarantine.nim +++ b/tests/test_quarantine.nim @@ -12,6 +12,7 @@ import std/[strutils, sequtils], stew/endians2, kzg4844/kzg, unittest2, ./testutil, + ../beacon_chain/[beacon_chain_db, beacon_chain_db_quarantine], ../beacon_chain/spec/datatypes/[deneb, electra, fulu], ../beacon_chain/spec/[presets, helpers], ../beacon_chain/consensus_object_pools/blob_quarantine @@ -100,6 +101,18 @@ func compareSidecars( return false true +func compareSidecarsByValue( + a, b: openArray[ref BlobSidecar|ref DataColumnSidecar] +): bool = + if len(a) != len(b): + return false + if len(a) == 0: + return true + for i in 0 ..< len(a): + if a[i][] != b[i][]: + return false + true + func compareSidecars( blockRoot: Eth2Digest, a: openArray[ref BlobSidecar|ref DataColumnSidecar], @@ -133,10 +146,16 @@ func supernodeColumns(): seq[ColumnIndex] = suite "BlobQuarantine data structure test suite " & preset(): setup: - let cfg = defaultRuntimeConfig + let + cfg {.used.} = defaultRuntimeConfig + db {.used.} = BeaconChainDB.new("", inMemory = true, cfg = cfg) + quarantine {.used.} = db.getQuarantineDB() + + teardown: + db.close() test "put()/hasSidecar(index, slot, proposer_index)/remove() test": - var bq = BlobQuarantine.init(cfg, nil) + var bq = BlobQuarantine.init(cfg, quarantine, 0, nil) let broot1 = genBlockRoot(1) broot2 = genBlockRoot(2) @@ -277,7 +296,7 @@ suite "BlobQuarantine data structure test suite " & preset(): len(bq) == 0 test "put(sidecar)/put([sidecars])/hasSidecars/popSidecars/remove() test": - var bq = BlobQuarantine.init(cfg, nil) + var bq = BlobQuarantine.init(cfg, quarantine, 0, nil) let broot1 = genBlockRoot(1) broot2 = genBlockRoot(2) @@ -306,9 +325,15 @@ suite "BlobQuarantine data structure test suite " & preset(): bq.put(broot1, sidecars1) + check: + len(bq) == len(sidecars1) + + var counter = 0 for index in 0 ..< len(sidecars2): if index mod 2 != 1: bq.put(broot2, sidecars2[index]) + inc(counter) + check len(bq) == len(sidecars1) + counter check: bq.hasSidecars(denebBlock) == true @@ -318,37 +343,38 @@ suite "BlobQuarantine data structure test suite " & preset(): check: dres.isOk() compareSidecars(dres.get(), sidecars1) == true + len(bq) == counter bq.put(broot2, sidecars2[1]) check: bq.hasSidecars(electraBlock) == false bq.popSidecars(electraBlock).isNone() == true + len(bq) == counter + 1 bq.put(broot2, sidecars2[3]) check: bq.hasSidecars(electraBlock) == false bq.popSidecars(electraBlock).isNone() == true + len(bq) == counter + 2 bq.put(broot2, sidecars2[5]) check: bq.hasSidecars(electraBlock) == false bq.popSidecars(electraBlock).isNone() == true + len(bq) == counter + 3 bq.put(broot2, sidecars2[7]) check: + len(bq) == len(sidecars2) bq.hasSidecars(electraBlock) == true let eres = bq.popSidecars(electraBlock) check: eres.isOk() compareSidecars(eres.get(), sidecars2) == true - - bq.remove(broot1) - bq.remove(broot2) - check: len(bq) == 0 test "put()/fetchMissingSidecars/remove test": - var bq = BlobQuarantine.init(cfg, nil) + var bq = BlobQuarantine.init(cfg, quarantine, 0, nil) let broot1 = genBlockRoot(1) broot2 = genBlockRoot(2) @@ -393,8 +419,7 @@ suite "BlobQuarantine data structure test suite " & preset(): check len(bq) == 0 test "popSidecars()/hasSidecars() return []/true on block without blobs": - var - bq = BlobQuarantine.init(cfg, nil) + var bq = BlobQuarantine.init(cfg, quarantine, 0, nil) let blockRoot1 = genBlockRoot(100) blockRoot2 = 
genBlockRoot(5337) @@ -428,7 +453,7 @@ suite "BlobQuarantine data structure test suite " & preset(): test "overfill protection test": var - bq = BlobQuarantine.init(cfg, nil) + bq = BlobQuarantine.init(cfg, quarantine, 0, nil) sidecars: seq[tuple[sidecar: ref BlobSidecar, blockRoot: Eth2Digest]] let maxSidecars = int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA * SLOTS_PER_EPOCH) * 3 @@ -538,7 +563,7 @@ suite "BlobQuarantine data structure test suite " & preset(): test "put() duplicate items should not affect counters": var - bq = BlobQuarantine.init(cfg, nil) + bq = BlobQuarantine.init(cfg, quarantine, 0, nil) sidecars1: seq[ref BlobSidecar] sidecars1d: seq[ref BlobSidecar] sidecars2: seq[ref BlobSidecar] @@ -637,7 +662,7 @@ suite "BlobQuarantine data structure test suite " & preset(): (root: 10, slot: 127, kzg: 35, index: 2, proposer_index: 29) ] - var bq = BlobQuarantine.init(cfg, nil) + var bq = BlobQuarantine.init(cfg, quarantine, 0, nil) for item in TestVectors: let sidecar = newClone( @@ -654,7 +679,7 @@ suite "BlobQuarantine data structure test suite " & preset(): genBlockRoot(item.root), Slot(item.slot), uint64(item.proposer_index), BlobIndex(item.index)) == true - bq.pruneAfterFinalization(Epoch(1)) + bq.pruneAfterFinalization(Epoch(0), false) check: len(bq) == len(TestVectors) - 5 @@ -669,7 +694,7 @@ suite "BlobQuarantine data structure test suite " & preset(): genBlockRoot(item.root), Slot(item.slot), uint64(item.proposer_index), BlobIndex(item.index)) == res - bq.pruneAfterFinalization(Epoch(2)) + bq.pruneAfterFinalization(Epoch(1), false) check: len(bq) == len(TestVectors) - 5 - 6 @@ -684,7 +709,7 @@ suite "BlobQuarantine data structure test suite " & preset(): genBlockRoot(item.root), Slot(item.slot), uint64(item.proposer_index), BlobIndex(item.index)) == res - bq.pruneAfterFinalization(Epoch(3)) + bq.pruneAfterFinalization(Epoch(2), false) check: len(bq) == len(TestVectors) - 5 - 6 - 12 @@ -699,7 +724,7 @@ suite "BlobQuarantine data structure test suite " & preset(): genBlockRoot(item.root), Slot(item.slot), uint64(item.proposer_index), BlobIndex(item.index)) == res - bq.pruneAfterFinalization(Epoch(4)) + bq.pruneAfterFinalization(Epoch(3), false) check: len(bq) == 0 @@ -709,9 +734,324 @@ suite "BlobQuarantine data structure test suite " & preset(): genBlockRoot(item.root), Slot(item.slot), uint64(item.proposer_index), BlobIndex(item.index)) == false + test "database unload/load test": + var + bq = BlobQuarantine.init(cfg, quarantine, 2, nil) + sidecars: seq[tuple[sidecar: ref BlobSidecar, blockRoot: Eth2Digest]] + + let maxSidecars = int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA * SLOTS_PER_EPOCH) * 3 + for i in 0 ..< maxSidecars: + let + index = i mod int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + slot = i div int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + 100 + blockRoot = genBlockRoot(slot) + sidecar = newClone(genBlobSidecar(index, slot, i, proposer_index = i)) + sidecars.add((sidecar, blockRoot)) + + for item in sidecars: + bq.put(item.blockRoot, item.sidecar) + + # put(sidecar) test + + check: + len(bq) == maxSidecars + lenMemory(bq) == maxSidecars + lenDisk(bq) == 0 + quarantine.sidecarsCount(typedesc[BlobSidecar]) == 0 + + for i in 0 ..< int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars[i].sidecar[].index + ) == true + + let + 
sidecar = newClone(genBlobSidecar(index = 0, slot = 10000, 100000, + proposer_index = 1000000)) + blockRoot1 = genBlockRoot(10000) + check: + bq.hasSidecar(blockRoot = blockRoot1, slot = Slot(10000), + proposer_index = 1000000'u64, index = BlobIndex(0)) == false + + bq.put(blockRoot1, sidecar) + + check: + len(bq) == len(sidecars) + 1 + lenDisk(bq) == int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + quarantine.sidecarsCount(typedesc[BlobSidecar]) == + int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + lenMemory(bq) == len(sidecars) - int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + 1 + bq.hasSidecar(blockRoot = blockRoot1, slot = Slot(10000), + proposer_index = 1000000'u64, index = BlobIndex(0)) == true + + for i in 0 ..< int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars[i].sidecar[].index + ) == true + + let + blockRoot2 = + genBlockRoot( + int(sidecars[0].sidecar[].signed_block_header.message.slot)) + sidecars2 = + sidecars.toOpenArray(0, int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) - 1). + mapIt(it.sidecar) + blck = genElectraSignedBeaconBlock(blockRoot2, sidecars2) + dres = bq.popSidecars(blockRoot2, blck) + + check: + dres.isOk() + compareSidecarsByValue(dres.get(), sidecars2) == true + len(bq) == len(sidecars) - int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + 1 + lenDisk(bq) == 0 + quarantine.sidecarsCount(typedesc[BlobSidecar]) == 0 + + # put(openArray[sidecar]) test + + let + msidecars = + block: + var res: seq[ref BlobSidecar] + for i in 0 ..< int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA): + let sidecar = + newClone(genBlobSidecar(index = i, slot = 100_000, 200000, + proposer_index = 2000000)) + res.add(sidecar) + res + mblockRoot = genBlockRoot(20000) + + check: + len(bq) == (len(sidecars) - int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + 1) + + for s in msidecars: + check: + bq.hasSidecar(mblockRoot, + s.signed_block_header.message.slot, + s.signed_block_header.message.proposer_index, + s.index) == false + + bq.put(mblockRoot, msidecars) + + check: + lenDisk(bq) == int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + quarantine.sidecarsCount(typedesc[BlobSidecar]) == + int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + len(bq) == len(sidecars) + 1 + + for s in msidecars: + check: + bq.hasSidecar(mblockRoot, + s.signed_block_header.message.slot, + s.signed_block_header.message.proposer_index, + s.index) == true + + for i in 0 ..< int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA): + let j = int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + i + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars[j].sidecar[].signed_block_header.message.slot)), + slot = + sidecars[j].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars[j].sidecar[].signed_block_header.message.proposer_index, + index = sidecars[j].sidecar[].index + ) == true + + let + i3 = int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + blockRoot3 = + genBlockRoot( + int(sidecars[i3].sidecar[].signed_block_header.message.slot)) + sidecars3 = + sidecars.toOpenArray(i3, i3 + int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) - 1). 
+ mapIt(it.sidecar) + blck2 = genElectraSignedBeaconBlock(blockRoot3, sidecars3) + dres2 = bq.popSidecars(blockRoot3, blck2) + + check: + dres2.isOk() + compareSidecarsByValue(dres2.get(), sidecars3) == true + len(bq) == len(sidecars) - int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + 1 + lenDisk(bq) == 0 + quarantine.sidecarsCount(typedesc[BlobSidecar]) == 0 + + test "database and memory overfill protection and pruning test": + var + bq = BlobQuarantine.init(cfg, quarantine, 1, nil) + sidecars1: seq[tuple[sidecar: ref BlobSidecar, blockRoot: Eth2Digest]] + sidecars2: seq[tuple[sidecar: ref BlobSidecar, blockRoot: Eth2Digest]] + epochs1: seq[Epoch] + epochs2: seq[Epoch] + + let maxSidecars = int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA * SLOTS_PER_EPOCH) * 3 + for i in 0 ..< maxSidecars: + let + index = i mod int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + slot1 = i div int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + 100 + slot2 = i div int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + 1000 + epoch1 = Slot(slot1).epoch() + epoch2 = Slot(slot2).epoch() + blockRoot1 = genBlockRoot(slot1) + blockRoot2 = genBlockRoot(slot2) + sidecar1 = newClone(genBlobSidecar(index, slot1, i, proposer_index = i)) + sidecar2 = newClone(genBlobSidecar(index, slot2, i + maxSidecars, + proposer_index = 100 + i)) + sidecars1.add((sidecar1, blockRoot1)) + sidecars2.add((sidecar2, blockRoot2)) + if len(epochs1) == 0 or epochs1[^1] != epoch1: + epochs1.add(epoch1) + if len(epochs2) == 0 or epochs2[^1] != epoch2: + epochs2.add(epoch2) + + for item in sidecars1: + bq.put(item.blockRoot, item.sidecar) + + check: + len(bq) == len(sidecars1) + lenDisk(bq) == 0 + quarantine.sidecarsCount(typedesc[BlobSidecar]) == 0 + + for i in 0 ..< SLOTS_PER_EPOCH * 3: + let + start = int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) * int(i) + finish = start + int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) - 1 + blockRoot = sidecars2[start].blockRoot + sidecars = sidecars2.toOpenArray(start, finish).mapIt(it.sidecar) + bq.put(blockRoot, sidecars) + + check: + len(bq) == len(sidecars1) + len(sidecars2) + lenDisk(bq) == len(sidecars1) + quarantine.sidecarsCount(typedesc[BlobSidecar]) == len(sidecars1) + lenMemory(bq) == len(sidecars2) + + for i in 0 ..< len(sidecars1): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars1[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars1[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars1[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars1[i].sidecar[].index + ) == true + + for i in 0 ..< len(sidecars2): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars2[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars2[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars2[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars2[i].sidecar[].index + ) == true + + let + sidecar = newClone(genBlobSidecar(index = 0, slot = 100000, 100000, + proposer_index = 1000000)) + blockRoot = genBlockRoot(100000) + + check: + bq.hasSidecar(blockRoot = blockRoot, slot = Slot(100000), + proposer_index = 1000000'u64, + index = BlobIndex(0)) == false + + bq.put(blockRoot, sidecar) + + check: + len(bq) == len(sidecars1) + len(sidecars2) - + int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + 1 + lenDisk(bq) == len(sidecars1) + quarantine.sidecarsCount(typedesc[BlobSidecar]) == len(sidecars1) + lenMemory(bq) == len(sidecars2) - + int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) + 1 + bq.hasSidecar(blockRoot = blockRoot, slot = Slot(100000), + proposer_index = 
1000000'u64, index = BlobIndex(0)) == true + + for i in 0 ..< int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars1[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars1[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars1[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars1[i].sidecar[].index + ) == false + + for i in int(cfg.MAX_BLOBS_PER_BLOCK_ELECTRA) ..< len(sidecars1): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars1[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars1[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars1[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars1[i].sidecar[].index + ) == true + + for i in 0 ..< len(sidecars2): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars2[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars2[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars2[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars2[i].sidecar[].index + ) == true + + # Pruning memory and database + for epoch in epochs1: + bq.pruneAfterFinalization(epoch, false) + for epoch in epochs2: + bq.pruneAfterFinalization(epoch, false) + + check: + len(bq) == 1 + + bq.pruneAfterFinalization(Slot(100000).epoch(), false) + + check: + len(bq) == 0 + suite "ColumnQuarantine data structure test suite " & preset(): setup: - let cfg {.used.} = defaultRuntimeConfig + let + cfg {.used.} = defaultRuntimeConfig + db {.used.} = BeaconChainDB.new("", inMemory = true, cfg = cfg) + quarantine {.used.} = db.getQuarantineDB() + + teardown: + db.close() test "ColumnMap test": # Filling columns of different sizes with all bits [8, 128) @@ -811,7 +1151,7 @@ suite "ColumnQuarantine data structure test suite " & preset(): test "put()/hasSidecar(index, slot, proposer_index)/remove() test": let custodyColumns = [0, 31, 32, 63, 64, 95, 96, 127].mapIt(ColumnIndex(it)) - var bq = ColumnQuarantine.init(cfg, custodyColumns, nil) + var bq = ColumnQuarantine.init(cfg, custodyColumns, quarantine, 0, nil) let broot1 = genBlockRoot(1) broot2 = genBlockRoot(2) @@ -960,7 +1300,7 @@ suite "ColumnQuarantine data structure test suite " & preset(): test "put(sidecar)/put([sidecars])/hasSidecars/popSidecars/remove() [node] test": let custodyColumns = [0, 31, 32, 63, 64, 95, 96, 127].mapIt(ColumnIndex(it)) - var bq = ColumnQuarantine.init(cfg, custodyColumns, nil) + var bq = ColumnQuarantine.init(cfg, custodyColumns, quarantine, 0, nil) let broot1 = genBlockRoot(1) broot2 = genBlockRoot(2) @@ -994,10 +1334,15 @@ suite "ColumnQuarantine data structure test suite " & preset(): bq.popSidecars(fuluBlock2).isNone() == true bq.put(broot1, sidecars1) + check: + len(bq) == len(sidecars1) + var counter = 0 for index in 0 ..< len(sidecars2): if index notin [1, 3, 5, 7]: bq.put(broot2, sidecars2[index]) + inc(counter) + check len(bq) == len(sidecars1) + counter check: bq.hasSidecars(fuluBlock1) == true @@ -1007,38 +1352,40 @@ suite "ColumnQuarantine data structure test suite " & preset(): check: dres.isOk() compareSidecars(dres.get(), sidecars1) == true + len(bq) == counter bq.put(broot2, sidecars2[1]) check: bq.hasSidecars(fuluBlock2) == false bq.popSidecars(fuluBlock2).isNone() == true + len(bq) == counter + 1 bq.put(broot2, sidecars2[3]) check: bq.hasSidecars(fuluBlock2) == false bq.popSidecars(fuluBlock2).isNone() == true + len(bq) == 
counter + 2 bq.put(broot2, sidecars2[5]) check: bq.hasSidecars(fuluBlock2) == false bq.popSidecars(fuluBlock2).isNone() == true + len(bq) == counter + 3 bq.put(broot2, sidecars2[7]) check: bq.hasSidecars(fuluBlock2) == true + len(bq) == len(sidecars2) let eres = bq.popSidecars(fuluBlock2) check: eres.isOk() compareSidecars(eres.get(), sidecars2) == true - - bq.remove(broot1) - bq.remove(broot2) - check len(bq) == 0 + len(bq) == 0 test "put(sidecar)/put([sidecars])/hasSidecars/popSidecars/remove() [supernode] test": let custodyColumns = supernodeColumns() - var bq = ColumnQuarantine.init(cfg, custodyColumns, nil) + var bq = ColumnQuarantine.init(cfg, custodyColumns, quarantine, 0, nil) let broot1 = genBlockRoot(1) broot2 = genBlockRoot(2) @@ -1123,7 +1470,7 @@ suite "ColumnQuarantine data structure test suite " & preset(): peerCustodyColumns2 = [1, 2, 3, 4, 5, 6, 7, 8].mapIt(ColumnIndex(it)) - var bq = ColumnQuarantine.init(cfg, custodyColumns, nil) + var bq = ColumnQuarantine.init(cfg, custodyColumns, quarantine, 0, nil) let broot1 = genBlockRoot(1) broot2 = genBlockRoot(2) @@ -1220,7 +1567,7 @@ suite "ColumnQuarantine data structure test suite " & preset(): peerCustodyColumns1 = [63, 64, 65, 66, 95, 96, 97, 98].mapIt(ColumnIndex(it)) - var bq = ColumnQuarantine.init(cfg, custodyColumns, nil) + var bq = ColumnQuarantine.init(cfg, custodyColumns, quarantine, 0, nil) let broot1 = genBlockRoot(1) broot2 = genBlockRoot(2) @@ -1307,8 +1654,7 @@ suite "ColumnQuarantine data structure test suite " & preset(): let custodyColumns = [63, 64, 65, 66, 95, 96, 97, 98].mapIt(ColumnIndex(it)) - var - bq = ColumnQuarantine.init(cfg, custodyColumns, nil) + var bq = ColumnQuarantine.init(cfg, custodyColumns, quarantine, 0, nil) let blockRoot1 = genBlockRoot(100) blockRoot2 = genBlockRoot(5337) @@ -1341,7 +1687,7 @@ suite "ColumnQuarantine data structure test suite " & preset(): [63, 64, 65, 66, 95, 96, 97, 98].mapIt(ColumnIndex(it)) var - bq = ColumnQuarantine.init(cfg, custodyColumns, nil) + bq = ColumnQuarantine.init(cfg, custodyColumns, quarantine, 0, nil) sidecars: seq[tuple[sidecar: ref DataColumnSidecar, blockRoot: Eth2Digest]] @@ -1461,7 +1807,7 @@ suite "ColumnQuarantine data structure test suite " & preset(): custodyColumns = [63, 64, 65, 66, 95, 96, 97, 98].mapIt(ColumnIndex(it)) var - bq = ColumnQuarantine.init(cfg, custodyColumns, nil) + bq = ColumnQuarantine.init(cfg, custodyColumns, quarantine, 0, nil) sidecars1: seq[ref DataColumnSidecar] sidecars1d: seq[ref DataColumnSidecar] sidecars2: seq[ref DataColumnSidecar] @@ -1562,7 +1908,7 @@ suite "ColumnQuarantine data structure test suite " & preset(): (root: 10, slot: 127, index: 98, proposer_index: 29) ] - var bq = ColumnQuarantine.init(cfg, custodyColumns, nil) + var bq = ColumnQuarantine.init(cfg, custodyColumns, quarantine, 0, nil) for item in TestVectors: let sidecar = newClone( @@ -1579,7 +1925,7 @@ suite "ColumnQuarantine data structure test suite " & preset(): genBlockRoot(item.root), Slot(item.slot), uint64(item.proposer_index), BlobIndex(item.index)) == true - bq.pruneAfterFinalization(Epoch(1)) + bq.pruneAfterFinalization(Epoch(0), false) check: len(bq) == len(TestVectors) - 5 @@ -1594,7 +1940,7 @@ suite "ColumnQuarantine data structure test suite " & preset(): genBlockRoot(item.root), Slot(item.slot), uint64(item.proposer_index), BlobIndex(item.index)) == res - bq.pruneAfterFinalization(Epoch(2)) + bq.pruneAfterFinalization(Epoch(1), false) check: len(bq) == len(TestVectors) - 5 - 6 @@ -1609,7 +1955,7 @@ suite "ColumnQuarantine data 
structure test suite " & preset(): genBlockRoot(item.root), Slot(item.slot), uint64(item.proposer_index), BlobIndex(item.index)) == res - bq.pruneAfterFinalization(Epoch(3)) + bq.pruneAfterFinalization(Epoch(2), false) check: len(bq) == len(TestVectors) - 5 - 6 - 12 @@ -1624,7 +1970,7 @@ suite "ColumnQuarantine data structure test suite " & preset(): genBlockRoot(item.root), Slot(item.slot), uint64(item.proposer_index), BlobIndex(item.index)) == res - bq.pruneAfterFinalization(Epoch(4)) + bq.pruneAfterFinalization(Epoch(3), false) check: len(bq) == 0 @@ -1633,3 +1979,338 @@ suite "ColumnQuarantine data structure test suite " & preset(): bq.hasSidecar( genBlockRoot(item.root), Slot(item.slot), uint64(item.proposer_index), BlobIndex(item.index)) == false + + test "database unload/load test": + let + custodyColumns = + [63, 64, 65, 66, 95, 96, 97, 98].mapIt(ColumnIndex(it)) + + var + bq = ColumnQuarantine.init(cfg, custodyColumns, quarantine, 2, nil) + sidecars: seq[tuple[sidecar: ref DataColumnSidecar, + blockRoot: Eth2Digest]] + + let maxSidecars = int(NUMBER_OF_COLUMNS * SLOTS_PER_EPOCH) * 3 + for i in 0 ..< maxSidecars: + let + index = i mod len(custodyColumns) + slot = i div len(custodyColumns) + 100 + blockRoot = genBlockRoot(slot) + sidecar = newClone( + genDataColumnSidecar(index = int(custodyColumns[index]), + slot, proposer_index = i)) + sidecars.add((sidecar, blockRoot)) + + for item in sidecars: + bq.put(item.blockRoot, item.sidecar) + + # put(sidecar) test + + check: + len(bq) == maxSidecars + lenMemory(bq) == maxSidecars + lenDisk(bq) == 0 + quarantine.sidecarsCount(typedesc[DataColumnSidecar]) == 0 + + for i in 0 ..< len(custodyColumns): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars[i].sidecar[].index + ) == true + + let + sidecar = newClone( + genDataColumnSidecar(index = int(custodyColumns[0]), slot = 10000, + proposer_index = 1000000)) + blockRoot1 = genBlockRoot(10000) + check: + bq.hasSidecar( + blockRoot = blockRoot1, slot = Slot(10000), + proposer_index = 1000000'u64, index = custodyColumns[0]) == false + + bq.put(blockRoot1, sidecar) + + check: + len(bq) == len(sidecars) + 1 + lenDisk(bq) == len(custodyColumns) + quarantine.sidecarsCount(typedesc[DataColumnSidecar]) == + len(custodyColumns) + lenMemory(bq) == len(sidecars) - len(custodyColumns) + 1 + bq.hasSidecar( + blockRoot = blockRoot1, slot = Slot(10000), + proposer_index = 1000000'u64, index = custodyColumns[0]) == true + + for i in 0 ..< len(custodyColumns): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars[i].sidecar[].index + ) == true + + let + blockRoot2 = + genBlockRoot( + int(sidecars[0].sidecar[].signed_block_header.message.slot)) + sidecars2 = + sidecars.toOpenArray(0, len(custodyColumns) - 1).mapIt(it.sidecar) + commitments2 = + @[genKzgCommitment(1), genKzgCommitment(2), genKzgCommitment(3)] + blck = genFuluSignedBeaconBlock(blockRoot2, commitments2) + dres = bq.popSidecars(blockRoot2, blck) + + check: + dres.isOk() + compareSidecarsByValue(dres.get(), sidecars2) == true + len(bq) == len(sidecars) - len(custodyColumns) + 1 
+ lenDisk(bq) == 0 + quarantine.sidecarsCount(typedesc[DataColumnSidecar]) == 0 + + # put(openArray[sidecar]) test + + let + msidecars = + block: + var res: seq[ref DataColumnSidecar] + for i in 0 ..< len(custodyColumns): + let sidecar = + newClone( + genDataColumnSidecar( + index = int(custodyColumns[i]), slot = 100_000, + proposer_index = 2000000)) + res.add(sidecar) + res + mblockRoot = genBlockRoot(20000) + + check: + len(bq) == len(sidecars) - len(custodyColumns) + 1 + + for s in msidecars: + check: + bq.hasSidecar(mblockRoot, + s.signed_block_header.message.slot, + s.signed_block_header.message.proposer_index, + s.index) == false + + bq.put(mblockRoot, msidecars) + + check: + lenDisk(bq) == len(custodyColumns) + quarantine.sidecarsCount(typedesc[DataColumnSidecar]) == + len(custodyColumns) + len(bq) == len(sidecars) + 1 + + for s in msidecars: + check: + bq.hasSidecar(mblockRoot, + s.signed_block_header.message.slot, + s.signed_block_header.message.proposer_index, + s.index) == true + + for i in 0 ..< len(custodyColumns): + let j = len(custodyColumns) + i + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars[j].sidecar[].signed_block_header.message.slot)), + slot = + sidecars[j].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars[j].sidecar[].signed_block_header.message.proposer_index, + index = sidecars[j].sidecar[].index + ) == true + + let + i3 = len(custodyColumns) + blockRoot3 = + genBlockRoot( + int(sidecars[i3].sidecar[].signed_block_header.message.slot)) + sidecars3 = + sidecars.toOpenArray(i3, i3 + len(custodyColumns) - 1). + mapIt(it.sidecar) + commitments3 = + @[genKzgCommitment(5), genKzgCommitment(6), genKzgCommitment(7)] + blck3 = genFuluSignedBeaconBlock(blockRoot3, commitments3) + dres3 = bq.popSidecars(blockRoot3, blck3) + + check: + dres3.isOk() + compareSidecarsByValue(dres3.get(), sidecars3) == true + len(bq) == len(sidecars) - len(custodyColumns) + 1 + lenDisk(bq) == 0 + quarantine.sidecarsCount(typedesc[DataColumnSidecar]) == 0 + + test "database and memory overfill protection and pruning test": + let + custodyColumns = + [63, 64, 65, 66, 95, 96, 97, 98].mapIt(ColumnIndex(it)) + var + bq = ColumnQuarantine.init(cfg, custodyColumns, quarantine, 1, nil) + sidecars1: seq[tuple[sidecar: ref DataColumnSidecar, + blockRoot: Eth2Digest]] + sidecars2: seq[tuple[sidecar: ref DataColumnSidecar, + blockRoot: Eth2Digest]] + epochs1: seq[Epoch] + epochs2: seq[Epoch] + + let maxSidecars = int(NUMBER_OF_COLUMNS * SLOTS_PER_EPOCH) * 3 + for i in 0 ..< maxSidecars: + let + index = i mod len(custodyColumns) + slot1 = i div len(custodyColumns) + 100 + slot2 = i div len(custodyColumns) + 100000 + epoch1 = Slot(slot1).epoch() + epoch2 = Slot(slot2).epoch() + blockRoot1 = genBlockRoot(slot1) + blockRoot2 = genBlockRoot(slot2) + sidecar1 = newClone( + genDataColumnSidecar(int(custodyColumns[index]), slot1, + proposer_index = i)) + sidecar2 = newClone( + genDataColumnSidecar(int(custodyColumns[index]), slot2, + proposer_index = 100 + i)) + + sidecars1.add((sidecar1, blockRoot1)) + sidecars2.add((sidecar2, blockRoot2)) + if len(epochs1) == 0 or epochs1[^1] != epoch1: + epochs1.add(epoch1) + if len(epochs2) == 0 or epochs2[^1] != epoch2: + epochs2.add(epoch2) + + for item in sidecars1: + bq.put(item.blockRoot, item.sidecar) + + check: + len(bq) == len(sidecars1) + lenDisk(bq) == 0 + quarantine.sidecarsCount(typedesc[DataColumnSidecar]) == 0 + + for i in 0 ..< (maxSidecars div len(custodyColumns)): + let + start = len(custodyColumns) * int(i) + 
finish = start + len(custodyColumns) - 1 + blockRoot = sidecars2[start].blockRoot + sidecars = sidecars2.toOpenArray(start, finish).mapIt(it.sidecar) + bq.put(blockRoot, sidecars) + + check: + len(bq) == len(sidecars1) + len(sidecars2) + lenDisk(bq) == len(sidecars1) + quarantine.sidecarsCount(typedesc[DataColumnSidecar]) == + len(sidecars1) + lenMemory(bq) == len(sidecars2) + + for i in 0 ..< len(sidecars1): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars1[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars1[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars1[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars1[i].sidecar[].index + ) == true + + for i in 0 ..< len(sidecars2): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars2[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars2[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars2[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars2[i].sidecar[].index + ) == true + + let + sidecar = newClone(genDataColumnSidecar( + index = int(custodyColumns[0]), slot = 1000000, + proposer_index = 2000000)) + blockRoot = genBlockRoot(1000000) + + check: + bq.hasSidecar(blockRoot = blockRoot, slot = Slot(1000000), + proposer_index = 2000000'u64, + index = custodyColumns[0]) == false + + bq.put(blockRoot, sidecar) + + check: + len(bq) == len(sidecars1) + len(sidecars2) - len(custodyColumns) + 1 + lenDisk(bq) == len(sidecars1) + quarantine.sidecarsCount(typedesc[DataColumnSidecar]) == len(sidecars1) + lenMemory(bq) == len(sidecars2) - len(custodyColumns) + 1 + bq.hasSidecar( + blockRoot = blockRoot, slot = Slot(1000000), + proposer_index = 2000000'u64, index = custodyColumns[0]) == true + + for i in 0 ..< len(custodyColumns): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars1[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars1[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars1[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars1[i].sidecar[].index + ) == false + + for i in len(custodyColumns) ..< len(sidecars1): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars1[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars1[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars1[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars1[i].sidecar[].index + ) == true + + for i in 0 ..< len(sidecars2): + check: + bq.hasSidecar( + blockRoot = + genBlockRoot( + int(sidecars2[i].sidecar[].signed_block_header.message.slot)), + slot = + sidecars2[i].sidecar[].signed_block_header.message.slot, + proposer_index = + sidecars2[i].sidecar[].signed_block_header.message.proposer_index, + index = sidecars2[i].sidecar[].index + ) == true + + # Pruning memory and database + for epoch in epochs1: + bq.pruneAfterFinalization(epoch, false) + for epoch in epochs2: + bq.pruneAfterFinalization(epoch, false) + + check: + len(bq) == 1 + + bq.pruneAfterFinalization(Slot(1000000).epoch(), false) + + check: + len(bq) == 0 + quarantine.sidecarsCount(typedesc[DataColumnSidecar]) == 0 diff --git a/tests/test_sync_manager.nim b/tests/test_sync_manager.nim index 364020b9ae..c176b58115 100644 --- a/tests/test_sync_manager.nim +++ b/tests/test_sync_manager.nim @@ -855,6 +855,88 @@ suite "SyncManager test suite": sq.inpSlot == finishSlot sq.outSlot == finishSlot + 
asyncTest "[SyncQueue# & " & $kind & "] Empty responses should not " & + "be accounted [3 peers] test": + var emptyResponse: seq[ref ForkedSignedBeaconBlock] + let + scenario = + case kind + of SyncQueueKind.Forward: + [ + (Slot(0) .. Slot(31), Opt.none(VerifierError)), + (Slot(32) .. Slot(63), Opt.none(VerifierError)), + (Slot(64) .. Slot(95), Opt.none(VerifierError)), + (Slot(96) .. Slot(127), Opt.none(VerifierError)), + (Slot(128) .. Slot(159), Opt.none(VerifierError)) + ] + of SyncQueueKind.Backward: + [ + (Slot(128) .. Slot(159), Opt.none(VerifierError)), + (Slot(96) .. Slot(127), Opt.none(VerifierError)), + (Slot(64) .. Slot(95), Opt.none(VerifierError)), + (Slot(32) .. Slot(63), Opt.none(VerifierError)), + (Slot(0) .. Slot(31), Opt.none(VerifierError)) + ] + verifier = setupVerifier(kind, scenario) + sq = + case kind + of SyncQueueKind.Forward: + SyncQueue.init(SomeTPeer, kind, Slot(0), Slot(159), + 32'u64, # 32 slots per request + 3, # 3 concurrent requests + 2, # 2 failures allowed + getStaticSlotCb(Slot(0)), + verifier.collector) + of SyncQueueKind.Backward: + SyncQueue.init(SomeTPeer, kind, Slot(159), Slot(0), + 32'u64, # 32 slots per request + 3, # 3 concurrent requests + 2, # 2 failures allowed + getStaticSlotCb(Slot(159)), + verifier.collector) + slots = + case kind + of SyncQueueKind.Forward: + @[Slot(0), Slot(32), Slot(64), Slot(96), Slot(128)] + of SyncQueueKind.Backward: + @[Slot(128), Slot(96), Slot(64), Slot(32), Slot(0)] + peer1 = SomeTPeer.init("1") + peer2 = SomeTPeer.init("2") + peer3 = SomeTPeer.init("3") + + let + r11 = sq.pop(Slot(159), peer1) + r21 = sq.pop(Slot(159), peer2) + await sq.push(r11, emptyResponse, Opt.none(seq[BlobSidecars])) + let + r12 = sq.pop(Slot(159), peer1) + r13 = sq.pop(Slot(159), peer1) + # This should not raise an assertion, as the previously sent empty + # response should not be taken into account. + r14 = sq.pop(Slot(159), peer1) + + expect AssertionError: + let r1e {.used.} = sq.pop(Slot(159), peer1) + + check: + r11.data.slot == slots[0] + r12.data.slot == slots[1] + r13.data.slot == slots[2] + r14.data.slot == slots[3] + + # Scenario requires some finish steps + await sq.push(r21, createChain(r21.data), Opt.none(seq[BlobSidecars])) + let r22 = sq.pop(Slot(159), peer2) + await sq.push(r22, createChain(r22.data), Opt.none(seq[BlobSidecars])) + let r23 = sq.pop(Slot(159), peer2) + await sq.push(r23, createChain(r23.data), Opt.none(seq[BlobSidecars])) + let r24 = sq.pop(Slot(159), peer2) + await sq.push(r24, createChain(r24.data), Opt.none(seq[BlobSidecars])) + let r35 = sq.pop(Slot(159), peer3) + await sq.push(r35, createChain(r35.data), Opt.none(seq[BlobSidecars])) + + await noCancel wait(verifier.verifier, 2.seconds) + asyncTest "[SyncQueue# & " & $kind & "] Combination of missing parent " & "and good blocks [3 peers] test": let diff --git a/tests/test_validator_client.nim b/tests/test_validator_client.nim index 8da419d56a..9fea0f0e14 100644 --- a/tests/test_validator_client.nim +++ b/tests/test_validator_client.nim @@ -1,5 +1,5 @@ # beacon_chain -# Copyright (c) 2018-2024 Status Research & Development GmbH +# Copyright (c) 2018-2025 Status Research & Development GmbH # Licensed and distributed under either of # * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT). # * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0). 
@@ -9,7 +9,7 @@ {.used.} import std/strutils -import httputils +import httputils, stew/base10 import chronos/apps/http/httpserver import chronos/unittest2/asynctests import ../beacon_chain/spec/eth2_apis/eth2_rest_serialization, @@ -863,7 +863,7 @@ suite "Validator Client test suite": response.isErr() gotCancellation == true - asyncTest "bestSuccess() API timeout test": + asyncTest "bestSuccess() API hard timeout test": let uri = parseUri("http://127.0.0.1/") beaconNodes = @[BeaconNodeServerRef.init(uri, 0).tryGet()] @@ -893,6 +893,7 @@ suite "Validator Client test suite": RestPlainResponse, uint64, float64, + 50.milliseconds, 100.milliseconds, AllBeaconNodeStatuses, {BeaconNodeRole.Duties}, @@ -908,6 +909,237 @@ suite "Validator Client test suite": response.isErr() gotCancellation == true + asyncTest "bestSuccess() API soft timeout test": + let + strategy = ApiStrategyKind.Best + beaconNodes = @[ + BeaconNodeServerRef.init(parseUri("http://127.0.0.1/"), 0).tryGet(), + BeaconNodeServerRef.init(parseUri("http://127.0.0.2/"), 1).tryGet(), + BeaconNodeServerRef.init(parseUri("http://127.0.0.3/"), 2).tryGet(), + BeaconNodeServerRef.init(parseUri("http://127.0.0.4/"), 3).tryGet() + ] + vconf = ValidatorClientConf.load( + cmdLine = mapIt([ + "--beacon-node=http://127.0.0.1", + "--beacon-node=http://127.0.0.2", + "--beacon-node=http://127.0.0.3", + "--beacon-node=http://127.0.0.4" + ], it)) + epoch = Epoch(1) + + let + vc = newClone(ValidatorClient(config: vconf, beaconNodes: beaconNodes)) + + vc.fallbackService = await FallbackServiceRef.init(vc) + + proc getIndex(hostname: string): int = + case hostname + of "127.0.0.1": 0 + of "127.0.0.2": 1 + of "127.0.0.3": 2 + of "127.0.0.4": 3 + else: -1 + + proc init(t: typedesc[RestPlainResponse], data: string): RestPlainResponse = + RestPlainResponse( + status: 200, + contentType: Opt.some(getContentType("text/plain").get()), + data: stringToBytes(data) + ) + + template generateTestProcedures( + tm1, tm2, tm3, tm4: untyped, + rsps1, rsps2, rsps3, rsps4: static string, + rspu1, rspu2, rspu3, rspu4: static uint64, + score1, score2, score3, score4: static float64 + ) = + proc getTestDuties( + client: RestClientRef, + epoch: Epoch + ): Future[RestPlainResponse] {.async: (raises: [CancelledError]).} = + let index = getIndex(client.address.hostname) + try: + case index + of 0: + await sleepAsync(tm1) + events[0].fire() + RestPlainResponse.init(rsps1) + of 1: + await sleepAsync(tm2) + events[1].fire() + RestPlainResponse.init(rsps2) + of 2: + await sleepAsync(tm3) + events[2].fire() + RestPlainResponse.init(rsps3) + of 3: + await sleepAsync(tm4) + events[3].fire() + RestPlainResponse.init(rsps4) + else: + raiseAssert "Should not be here" + except CancelledError as exc: + cancellations[index] = true + events[index].fire() + raise exc + + proc getTestScore(data: uint64): float64 = + case data + of rspu1: + score1 + of rspu2: + score2 + of rspu3: + score3 + of rspu4: + score4 + else: + raiseAssert "Should not be here" + + const + RequestName = "getTestDuties" + + block: + let events = @[ + newAsyncEvent(), newAsyncEvent(), newAsyncEvent(), newAsyncEvent() + ] + var cancellations = @[false, false, false, false] + + generateTestProcedures( + 1500.milliseconds, + 900.milliseconds, + 600.milliseconds, + 1200.milliseconds, + "0", "10", "100", "1000", + 0'u64, 10'u64, 100'u64, 1000'u64, + 0'f64, 10'f64, 100'f64, 1000'f64 + ) + + let + response = + vc.bestSuccess( + RestPlainResponse, + uint64, + float64, + 500.milliseconds, + 1000.milliseconds, + AllBeaconNodeStatuses, 
+ {BeaconNodeRole.Duties}, + getTestDuties(it, epoch), + getTestScore(itresponse)): + if apiResponse.isErr(): + ApiResponse[uint64].err(apiResponse.error) + else: + let response = apiResponse.get() + case response.status + of 200: + ApiResponse[uint64].ok( + Base10.decode(uint64, response.data).get()) + else: + ApiResponse[uint64].ok(0'u64) + pendingFutures = events.mapIt(it.wait()) + + await allFutures(pendingFutures) + + check: + cancellations == @[true, false, false, true] + response.isOk() + response.get() == 100'u64 + + block: + let events = @[ + newAsyncEvent(), newAsyncEvent(), newAsyncEvent(), newAsyncEvent() + ] + var cancellations = @[false, false, false, false] + + generateTestProcedures( + 1500.milliseconds, + 100.milliseconds, + 1200.milliseconds, + 1100.milliseconds, + "0", "10", "100", "1000", + 0'u64, 10'u64, 100'u64, 1000'u64, + 0'f64, 10'f64, 100'f64, 1000'f64 + ) + + let + response = + vc.bestSuccess( + RestPlainResponse, + uint64, + float64, + 500.milliseconds, + 1000.milliseconds, + AllBeaconNodeStatuses, + {BeaconNodeRole.Duties}, + getTestDuties(it, epoch), + getTestScore(itresponse)): + if apiResponse.isErr(): + ApiResponse[uint64].err(apiResponse.error) + else: + let response = apiResponse.get() + case response.status + of 200: + ApiResponse[uint64].ok( + Base10.decode(uint64, response.data).get()) + else: + ApiResponse[uint64].ok(0'u64) + pendingFutures = events.mapIt(it.wait()) + + await allFutures(pendingFutures) + + check: + cancellations == @[true, false, true, true] + response.isOk() + response.get() == 10'u64 + + block: + let events = @[ + newAsyncEvent(), newAsyncEvent(), newAsyncEvent(), newAsyncEvent() + ] + var cancellations = @[false, false, false, false] + + generateTestProcedures( + 1500.milliseconds, + 100.milliseconds, + 300.milliseconds, + 1200.milliseconds, + "0", "10", "100", "1000", + 0'u64, 10'u64, 100'u64, 1000'u64, + 0'f64, 10'f64, 100'f64, 1000'f64 + ) + + let + response = + vc.bestSuccess( + RestPlainResponse, + uint64, + float64, + 500.milliseconds, + 1000.milliseconds, + AllBeaconNodeStatuses, + {BeaconNodeRole.Duties}, + getTestDuties(it, epoch), + getTestScore(itresponse)): + if apiResponse.isErr(): + ApiResponse[uint64].err(apiResponse.error) + else: + let response = apiResponse.get() + case response.status + of 200: + ApiResponse[uint64].ok( + Base10.decode(uint64, response.data).get()) + else: + ApiResponse[uint64].ok(0'u64) + pendingFutures = events.mapIt(it.wait()) + + await allFutures(pendingFutures) + + check: + cancellations == @[true, false, false, true] + response.isOk() + response.get() == 100'u64 + test "getLiveness() response deserialization test": proc generateLivenessResponse(T: typedesc[string], start, count, modv: int): string = diff --git a/tests/testblockutil.nim b/tests/testblockutil.nim index 3dc7c501c2..40de5dbd58 100644 --- a/tests/testblockutil.nim +++ b/tests/testblockutil.nim @@ -11,6 +11,7 @@ import chronicles, stew/endians2, ../beacon_chain/consensus_object_pools/sync_committee_msg_pool, + ../beacon_chain/el/engine_api_conversions, ../beacon_chain/spec/datatypes/bellatrix, ../beacon_chain/spec/[ beaconstate, helpers, keystore, signatures, state_transition, validator] @@ -102,7 +103,7 @@ func build_empty_merge_execution_payload(state: bellatrix.BeaconState): var payload = bellatrix.ExecutionPayload( parent_hash: latest.block_hash, state_root: latest.state_root, # no changes to the state - receipts_root: EMPTY_ROOT_HASH, + receipts_root: EMPTY_ROOT_HASH.asEth2Digest, block_number: latest.block_number + 1, 
     prev_randao: randao_mix,
     gas_limit: 30000000, # retain same limit
@@ -134,7 +135,7 @@ func build_empty_execution_payload(
     parent_hash: latest.block_hash,
     fee_recipient: bellatrix.ExecutionAddress(data: distinctBase(feeRecipient)),
     state_root: latest.state_root, # no changes to the state
-    receipts_root: EMPTY_ROOT_HASH,
+    receipts_root: EMPTY_ROOT_HASH.asEth2Digest,
     block_number: latest.block_number + 1,
     prev_randao: randao_mix,
     gas_limit: latest.gas_limit, # retain same limit
diff --git a/vendor/nim-blscurve b/vendor/nim-blscurve
index dec99b4d86..bcfb3e77a2 160000
--- a/vendor/nim-blscurve
+++ b/vendor/nim-blscurve
@@ -1 +1 @@
-Subproject commit dec99b4d868ce0dc8268b7043d17e2a3cd6712cc
+Subproject commit bcfb3e77a2c5e1a02611ee4d03f3a655fe902eb1
diff --git a/vendor/nim-eth b/vendor/nim-eth
index 5c3969a5c1..92a02b672f 160000
--- a/vendor/nim-eth
+++ b/vendor/nim-eth
@@ -1 +1 @@
-Subproject commit 5c3969a5c12c7c5acc3d223723a8c005467deea6
+Subproject commit 92a02b672f60e6b5e5ea570d684904c289b495fa
diff --git a/vendor/nim-eth2-scenarios b/vendor/nim-eth2-scenarios
index 01ea1e9706..dbdf898961 160000
--- a/vendor/nim-eth2-scenarios
+++ b/vendor/nim-eth2-scenarios
@@ -1 +1 @@
-Subproject commit 01ea1e970672072c8af7a186939b9a78e56dadb7
+Subproject commit dbdf898961ad8895398cec5756f6f452c060e685
diff --git a/vendor/nim-faststreams b/vendor/nim-faststreams
index c51315d0ae..6b3fea903e 160000
--- a/vendor/nim-faststreams
+++ b/vendor/nim-faststreams
@@ -1 +1 @@
-Subproject commit c51315d0ae5eb2594d0bf41181d0e1aca1b3c01d
+Subproject commit 6b3fea903ea0ee058ac7698c5e6af63b3a43fed5
diff --git a/vendor/nim-libp2p b/vendor/nim-libp2p
index cd60b254a0..d803352bd6 160000
--- a/vendor/nim-libp2p
+++ b/vendor/nim-libp2p
@@ -1 +1 @@
-Subproject commit cd60b254a0700b0daac7a6cb2c0c48860b57c539
+Subproject commit d803352bd63fe2215e149a5f72de98229cfb7867
diff --git a/vendor/nim-sqlite3-abi b/vendor/nim-sqlite3-abi
index 38f84f1556..bdf01cf423 160000
--- a/vendor/nim-sqlite3-abi
+++ b/vendor/nim-sqlite3-abi
@@ -1 +1 @@
-Subproject commit 38f84f155662e22a39509fd45b85c9ea8f87efa4
+Subproject commit bdf01cf4236fb40788f0733466cdf6708783cbac
diff --git a/vendor/nim-ssz-serialization b/vendor/nim-ssz-serialization
index 0f7515524e..3ad8102750 160000
--- a/vendor/nim-ssz-serialization
+++ b/vendor/nim-ssz-serialization
@@ -1 +1 @@
-Subproject commit 0f7515524e23ede6510d156fd7b34766083990eb
+Subproject commit 3ad8102750e6dd24323b2c114189ece79fd605c0
diff --git a/vendor/nim-stew b/vendor/nim-stew
index 58abb4891f..9cc65bbf9d 160000
--- a/vendor/nim-stew
+++ b/vendor/nim-stew
@@ -1 +1 @@
-Subproject commit 58abb4891f97c6cdc07335e868414e0c7b736c68
+Subproject commit 9cc65bbf9d42d11efe3c3dcc687f6f69f4951627
diff --git a/vendor/nim-testutils b/vendor/nim-testutils
index 94d68e796c..e4d37dc165 160000
--- a/vendor/nim-testutils
+++ b/vendor/nim-testutils
@@ -1 +1 @@
-Subproject commit 94d68e796c045d5b37cabc6be32d7bfa168f8857
+Subproject commit e4d37dc1652d5c63afb89907efb5a5e812261797