This can happen at any point during node execution, but it is easily reproducible on the initial setup. After the initial sync and a couple of minutes of running, an initial pruning kicks in and, for some reason, it produces a "block is from the future" error. Here is an example log:
INFO 17:43:12.001 [StateDb] Pruning started. slot=5963552
INFO 17:43:12.001 [BlobDb] Pruning started. slot=5832480
INFO 17:43:12.001 [BlockDb] Pruning started. slot=5963552
INFO 17:43:12.001 [BlobDb] Pruning finished. 0 blobs removed.
INFO 17:43:12.003 [Libp2p] Slot transition slot=5963616
INFO 17:43:12.009 [BlockDb] Pruning finished. 52 blocks removed.
INFO 17:43:12.019 [StateDb] Pruning finished. 52 states removed.
INFO 17:43:12.684 [Gossip] Block received, block.slot: 5963616.
INFO 17:43:24.002 [Libp2p] Slot transition slot=5963617
INFO 17:43:24.216 [Gossip] Block received, block.slot: 5963617.
INFO 17:43:29.977 [Fork choice] Adding new block slot=5963613 root=0x3de..fc05
INFO 17:43:30.204 [Fork choice] Block processed. Recomputing head.
ERROR 17:43:30.204 GenServer LambdaEthereumConsensus.Libp2pPort terminating
** (RuntimeError) Parent 14C35B27D611342466C990ABE85147AFCDA13DBE2D5C05C4B3155309CF94FFFD not found in tree
(lambda_ethereum_consensus 0.1.0) lib/lambda_ethereum_consensus/fork_choice/simple_tree.ex:84: LambdaEthereumConsensus.ForkChoice.Simple.Tree.get_children!/2
(lambda_ethereum_consensus 0.1.0) lib/types/store.ex:147: Types.Store.get_children/2
(lambda_ethereum_consensus 0.1.0) lib/lambda_ethereum_consensus/fork_choice/head.ex:90: LambdaEthereumConsensus.ForkChoice.Head.filter_block_tree/4
(lambda_ethereum_consensus 0.1.0) lib/lambda_ethereum_consensus/fork_choice/head.ex:85: LambdaEthereumConsensus.ForkChoice.Head.get_filtered_block_tree/1
(lambda_ethereum_consensus 0.1.0) lib/lambda_ethereum_consensus/fork_choice/head.ex:14: LambdaEthereumConsensus.ForkChoice.Head.get_head/1
(lambda_ethereum_consensus 0.1.0) lib/lambda_ethereum_consensus/fork_choice/fork_choice.ex:269: LambdaEthereumConsensus.ForkChoice.recompute_head/1
(lambda_ethereum_consensus 0.1.0) lib/lambda_ethereum_consensus/fork_choice/fork_choice.ex:62: anonymous fn/1 in LambdaEthereumConsensus.ForkChoice.on_block/2
(telemetry 1.3.0) /home/admin/lambda_ethereum_consensus/deps/telemetry/src/telemetry.erl:324: :telemetry.span/3
Last message: {#Port<0.16>, {:data, <<50, 145, 166, 1, 10, 36, 55, 100, 49, 52, 49, 57, 53, 49, 45, 97, 100, 56, 50, 45, 52, 50, 101, 102, 45, 97, 102, 102, 101, 45, 48, 57, 102, 54, 50, 55, 101, 97, 57, 56, 102, 101, 16, 1, 26, 229, ...>>}}
INFO 17:43:30.227 [Optimistic Sync] Waiting 10.0 seconds to discover some peers before requesting blocks.
INFO 17:43:30.227 [Fork choice] Adding new block slot=5963613 root=0x3de..fc05
ERROR 17:43:30.280 [Fork choice] Failed to add block: block is from the future slot=5963613 root=0x3de..fc05
ERROR 17:43:30.280 [PendingBlocks] Saving block as invalid block is from the future slot=5963613 root=0x3de..fc05
INFO 17:43:36.001 [Libp2p] Slot transition slot=5963618
INFO 17:43:40.228 [Optimistic sync] Performing optimistic sync between slots 5963489 and 5963618, for a total of 130 slots.
This causes sync to kick in again and may trigger #1308. For context, the spec-level check behind the error is sketched below.
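The "block is from the future" rejection corresponds to the consensus-spec check in `on_block` that compares the block's slot against the slot derived from the store's time. Below is a minimal sketch of that check, using illustrative module and field names (not the actual lambda_ethereum_consensus code) and assuming mainnet's 12-second slots:

```elixir
# Sketch of the spec-level "block is from the future" check. Names here are
# illustrative, not the project's real modules or structs.
defmodule ForkChoiceSketch do
  @seconds_per_slot 12

  # The current slot is derived from the store's notion of time, not directly
  # from the wall clock, so a stale store time can reject a block whose slot
  # has in fact already passed.
  def current_slot(%{time: time, genesis_time: genesis_time}) do
    div(time - genesis_time, @seconds_per_slot)
  end

  def validate_not_from_future(store, block_slot) do
    if current_slot(store) >= block_slot do
      :ok
    else
      {:error, "block is from the future"}
    end
  end
end
```

Note that in the log above the rejected block (slot 5963613) is already several slots behind the wall clock (slot transition to 5963617 was logged before it), so a check of this shape can only fail if the store time used in the comparison lags behind, which suggests the store state involved in the retry after the Libp2pPort crash was stale.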