
[FIX] Processing checkpoint blocks by fetcher #866


Merged
merged 1 commit into develop from fix-fetcher-processing-checkpoints on Jan 5, 2021

Conversation

pslaski
Contributor

@pslaski pslaski commented Dec 21, 2020

Description

Currently, the fetcher does not process new checkpoint blocks that are older than the current top.

Important Changes Introduced

  • changed processing of lastBlock in BlockFetcherState (reflects the last block imported by blockchain)
  • added processing checkpoint blocks by BlockFetcher
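The change can be summarized as a routing decision: a checkpoint above the known top keeps going through the existing future-block path, while one at or below it now gets a dedicated handler. The handler names (`handleFutureBlock`, `handleNewCheckpointBlockNotOnTop`) appear in the diff in this conversation; the condition and signature below are a simplified sketch, not the PR's actual code.

```scala
// Simplified sketch of the new dispatch; real code operates on
// Block and BlockFetcherState, not bare numbers.
def routeCheckpoint(blockNumber: BigInt, knownTop: BigInt): String =
  if (blockNumber > knownTop) "handleFutureBlock"     // above the top: existing path
  else "handleNewCheckpointBlockNotOnTop"             // at or below the top: path added by this PR
```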

Testing

Syncing with Mordor testnet: ✔️
Still needs to be tested with federation nodes.

@pslaski pslaski requested review from kapke and lemastero December 21, 2020 14:53

private def handleNewCheckpointBlockNotOnTop(block: Block, peerId: PeerId, state: BlockFetcherState): Unit = {
log.debug("Got checkpoint block")
if (state.waitingHeaders.exists(_.hash == block.hash) || state.readyBlocks.exists(_.hash == block.hash)) {
Contributor

I think it would be quite beneficial to:

  1. extract methods (in the fetcher state) for checking header and block existence by hash
  2. save the results to vals here, so the computation is not repeated in each if
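The suggestion above can be sketched roughly as follows. All names here (`FetcherState`, `hasHeader`, `hasBlock`, `classify`) are illustrative stand-ins, not the PR's actual API:

```scala
// Minimal stand-ins for the real header/block types.
final case class Header(hash: String)
final case class Block(hash: String)

final case class FetcherState(waitingHeaders: Seq[Header], readyBlocks: Seq[Block]) {
  // 1. existence checks extracted into the state
  def hasHeader(hash: String): Boolean = waitingHeaders.exists(_.hash == hash)
  def hasBlock(hash: String): Boolean = readyBlocks.exists(_.hash == hash)
}

def classify(state: FetcherState, blockHash: String): String = {
  // 2. results cached in vals so each queue is scanned at most once
  val headerKnown = state.hasHeader(blockHash)
  val blockKnown = state.hasBlock(blockHash)
  if (headerKnown || blockKnown) "already queued" else "not queued"
}
```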

log.debug(
s"Checkpoint block ${ByteStringUtils.hash2string(block.hash)} does not fit into the queues - clearing the queues"
)
val newState = state
Contributor

Shouldn't the block be saved somehow, or passed to the block importer in this case? This case means, more or less, "we're on a different fork", and we don't do anything to switch onto the checkpointed branch.

.withPeerForBlocks(peerId, Seq(block.number))
.withKnownTopAt(block.number)
fetchBlocks(newState)
} else handleFutureBlock(block, state)
Contributor

Same here: we just try to update the known top block and trust that other nodes will do their job, so that we land on the checkpointed branch.
If this code doesn't fork on the checkpointed testnet, we can probably leave it as is and file a follow-up task for the team. WDYT?

fetchBlocks(newState)
if (state.readyBlocks.exists(_.header.hash == block.header.parentHash)) {
log.debug(s"Checkpoint block fit into current ready blocks queue")
val newState = state
Contributor

For inserting the checkpoint into the right slot, please add and use methods that protect the fetcher state's invariants (like appendHeaders for headers).
It could probably even simplify the code here significantly, to something along the lines of:

state.tryInsertCheckpointBlock(block, peerId) match {
  case Left(err) if block.number <= state.lastBlock =>
    log.debug("Couldn't insert checkpoint to queues, passing to importer")
    state.importer ! NewCheckpointBlock(block, peerId)
  case Left(err) =>
    log.debug("Couldn't insert checkpoint to queues")
    //whatever
  case Right(newState) =>
    fetchBlocks(newState)
}

@@ -179,10 +176,22 @@ case class BlockFetcherState(
}
.getOrElse(this)

def enqueueHeader(header: BlockHeader): BlockFetcherState = {
Contributor

Please remove these methods and use the safer counterparts; the lack of them has already caused a bit of a headache.
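A "safer counterpart" in this sense would validate the queue's invariant before mutating it and report failure via `Either` instead of appending blindly. The sketch below is hypothetical; `Hdr`, `HeaderQueue` and `tryAppendHeader` are invented names, and the only invariant checked is the parent-hash link:

```scala
// Minimal header with just the fields needed for the invariant.
final case class Hdr(number: Int, hash: String, parentHash: String)

final case class HeaderQueue(waitingHeaders: Vector[Hdr]) {
  // Append only if the new header attaches to the last waiting one.
  def tryAppendHeader(h: Hdr): Either[String, HeaderQueue] =
    waitingHeaders.lastOption match {
      case Some(last) if h.parentHash != last.hash =>
        Left(s"header ${h.number} does not attach to ${last.number}")
      case _ =>
        Right(copy(waitingHeaders = waitingHeaders :+ h))
    }
}
```

Callers then have to handle the `Left` case explicitly, which is exactly what an unchecked `enqueueHeader` lets them forget.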

@@ -313,6 +312,253 @@ class BlockFetcherSpec
}
}

"should process checkpoint blocks when checkpoint can fit into ready blocks queue" in new TestSetup {
startFetcher()
Contributor

Does it make sense to extract some common parts from these tests?
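One common way to do that in Scala test suites is a shared setup trait that each case mixes in, overriding only what differs. The names below are invented for illustration, not taken from BlockFetcherSpec:

```scala
// Hypothetical shared scaffolding for the checkpoint tests.
trait CheckpointTestSetup {
  val chainLength: Int = 10
  // stand-in for the chain of test blocks built in each spec
  lazy val chain: Vector[Int] = (1 to chainLength).toVector
}

// Each test case states only what differs from the default setup.
object FitsReadyQueueCase extends CheckpointTestSetup
object OlderThanTopCase extends CheckpointTestSetup { override val chainLength = 3 }
```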

@pslaski pslaski force-pushed the fix-fetcher-processing-checkpoints branch from 4b84b5a to af7d8ff Compare December 23, 2020 09:08
Contributor

@kapke kapke left a comment


Minor things, besides that looks good to me!

)
})

def tryInsertBlock(block: Block, peerId: PeerId): Either[String, BlockFetcherState] = {
val blockHash = block.hash
if (isExistInReadyBlocks(blockHash) || isExistInWaitingHeaders(blockHash)) {
Contributor

just exists?


// Fetcher should enqueue all the received blocks
importer.send(blockFetcher, PickBlocks(chain.size))
importer.send(blockFetcher, PickBlocks(firstBlocksBatch.size))
Contributor

Isn't this send call a duplicate now?

@pslaski pslaski force-pushed the fix-fetcher-processing-checkpoints branch from af7d8ff to e1fbab4 Compare December 23, 2020 11:12
@pslaski pslaski force-pushed the fix-fetcher-processing-checkpoints branch 2 times, most recently from 83dd3b3 to fbb54f4 Compare January 4, 2021 09:54
@pslaski pslaski force-pushed the fix-fetcher-processing-checkpoints branch from fbb54f4 to 4ad48cf Compare January 5, 2021 10:19
@pslaski pslaski merged commit dac4214 into develop Jan 5, 2021
@pslaski pslaski deleted the fix-fetcher-processing-checkpoints branch January 5, 2021 11:29