
Nakamoto Miner[3.0] - Miners should handle tenure extend transactions #4709

Closed
saralab opened this issue Apr 23, 2024 · 2 comments
Comments

@saralab
Contributor

saralab commented Apr 23, 2024

  • Details of cases where miners ought to handle tenure extensions
  • There are 2 cases (a sketch of the corresponding tenure-change causes follows this list):
    1. The subsequent Bitcoin block didn't choose a sortition winner - this is 3.0 blocking.
    2. The subsequent sortition winner tried to produce a malicious block; rather than stalling the network for a block, the prior winning miner should take over - this isn't 3.0 blocking.
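
For context, these two cases map onto the tenure-change causes discussed in the comment below. A minimal sketch of the distinction (the real TenureChangeCause enum lives in the Stacks chainstate code and may differ in its details):

/// Sketch only: simplified stand-in for the chainstate's TenureChangeCause.
pub enum TenureChangeCause {
    /// A sortition chose a winner for the new burnchain block; that winner
    /// starts a fresh tenure.
    BlockFound,
    /// No usable winner in the new burnchain block, so the previous winning
    /// miner extends its ongoing tenure.
    Extended,
}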
@kantai
Member

kantai commented May 23, 2024

Adding a bit more context here:

The Nakamoto miner implementation currently works as essentially a two-thread process. The relayer thread (implemented in testnet/stacks-node/src/nakamoto_node/relayer.rs) receives events from the networking stack and checks whether any new burnchain blocks have been processed. When a new burnchain block has been processed, it checks whether there was a sortition and, if this node won it, kicks off a miner thread (nakamoto_node/miner.rs). That miner thread's first block contains a TenureChange transaction with a BlockFound reason. To handle this issue, when a new burnchain block is processed, the relayer thread should check whether there was a sortition; if there wasn't one and this node is the last winning miner, it should stop the prior miner thread and spawn a new one. The new miner thread should create a TenureChange transaction with a TenureChangeCause::Extended reason.
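
In outline, the per-burnchain-block decision (including the extension case this issue asks for) would look roughly like the sketch below. The enum and function are illustrative stand-ins, not the actual MinerDirective handling shown in the code pointers that follow.

// Illustrative sketch of the intended decision per new burnchain block;
// these names are stand-ins, not the real stacks-node types.
enum Decision {
    BeginTenure,    // we won the sortition: first block carries BlockFound
    StopTenure,     // another miner won: stop our miner thread if running
    ContinueTenure, // no sortition, and we won the last one: issue Extended
    Noop,           // no sortition and this node was not the last winner
}

fn decide(sortition_occurred: bool, we_won: bool, we_won_last_sortition: bool) -> Decision {
    if sortition_occurred {
        if we_won {
            Decision::BeginTenure
        } else {
            Decision::StopTenure
        }
    } else if we_won_last_sortition {
        Decision::ContinueTenure
    } else {
        Decision::Noop
    }
}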

Some relevant code pointers:

In relayer.rs:

...
        // Decide what the miner should do for the new burnchain block:
        if sn.sortition {
            if won_sortition {
                // This node won the sortition: begin a new tenure.
                MinerDirective::BeginTenure {
                    parent_tenure_start: committed_index_hash,
                    burnchain_tip: sn,
                }
            } else {
                // Another miner won the sortition: stop any current tenure.
                MinerDirective::StopTenure
            }
        } else {
            // No sortition occurred in this burnchain block.
            MinerDirective::ContinueTenure {
                new_burn_view: consensus_hash,
            }
        }

and

...
        match miner_instruction {
            MinerDirective::BeginTenure {
                parent_tenure_start,
                burnchain_tip,
            } => match self.start_new_tenure(parent_tenure_start, burnchain_tip) {
                Ok(()) => {
                    debug!("Relayer: successfully started new tenure.");
                }
                Err(e) => {
                    error!("Relayer: Failed to start new tenure: {:?}", e);
                }
            },
            MinerDirective::ContinueTenure { new_burn_view: _ } => {
                // TODO: in this case, we eventually want to undergo a tenure
                //  change to switch to the new burn view, but right now, we will
                //  simply end our current tenure if it exists
                match self.stop_tenure() {
                    Ok(()) => {
                        debug!("Relayer: successfully stopped tenure.");
                    }
                    Err(e) => {
                        error!("Relayer: Failed to stop tenure: {:?}", e);
                    }
                }
            }
            MinerDirective::StopTenure => match self.stop_tenure() {
                Ok(()) => {
                    debug!("Relayer: successfully stopped tenure.");
                }
                Err(e) => {
                    error!("Relayer: Failed to stop tenure: {:?}", e);
                }
            },
        }
...

I think the most straightforward way to implement this is to add a field to the BlockMinerThread struct that designates the thread's reason as either BlockFound or Extended. The TODO above could then be replaced with a check of whether this miner was the latest winning miner and, if so, an invocation of start_new_tenure with the Extended reason (and the correct parent_tenure_id).
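
A rough sketch of what the replaced ContinueTenure arm might look like; continue_tenure() and is_last_winning_miner() are hypothetical names used for illustration, not the actual relayer API:

            // Hypothetical sketch only: continue_tenure() and
            // is_last_winning_miner() are illustrative names, not real API.
            MinerDirective::ContinueTenure { new_burn_view } => {
                if self.is_last_winning_miner() {
                    // Spawn a new miner thread (stopping the prior one) whose
                    // first block carries TenureChangeCause::Extended.
                    match self.continue_tenure(new_burn_view) {
                        Ok(()) => {
                            debug!("Relayer: successfully continued tenure.");
                        }
                        Err(e) => {
                            error!("Relayer: Failed to continue tenure: {:?}", e);
                        }
                    }
                } else {
                    // Not the last winning miner: keep the existing behavior.
                    if let Err(e) = self.stop_tenure() {
                        error!("Relayer: Failed to stop tenure: {:?}", e);
                    }
                }
            }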

You can check if the current miner is the last winning miner with something like:

self.sortdb.index_handle_at_tip().get_last_snapshot_with_sortition_from_tip()?.winning_block_txid == self.current_mining_commit_tx

current_mining_commit_tx would need to be a new field that tracks the last commit that the relayer submitted which won (this could be set around where won_sortition is currently computed in the relayer).
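
For illustration, maintaining that field might look roughly like this (a sketch only; the field and surrounding variable names are assumptions, not the actual relayer.rs code):

        // Hypothetical sketch: set alongside the existing won_sortition
        // computation in relayer.rs; names here are assumptions.
        if won_sortition {
            // Remember the commit that won this sortition so that a later
            // burnchain block without a sortition can be recognized as an
            // opportunity to extend our own tenure.
            self.current_mining_commit_tx = sn.winning_block_txid.clone();
        }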

get_last_snapshot_with_sortition_from_tip() would also be a new function in sortdb.rs:

diff --git a/stackslib/src/chainstate/burn/db/sortdb.rs b/stackslib/src/chainstate/burn/db/sortdb.rs
index e3802d6ec..1d515f2ef 100644
--- a/stackslib/src/chainstate/burn/db/sortdb.rs
+++ b/stackslib/src/chainstate/burn/db/sortdb.rs
@@ -2226,6 +2226,29 @@ impl<'a> SortitionHandleConn<'a> {
         })
     }
 
+    /// Get the latest block snapshot on this fork where a sortition occurred.
+    pub fn get_last_snapshot_with_sortition_from_tip(
+        &self,
+    ) -> Result<BlockSnapshot, db_error> {
+        let ancestor_hash = match self.get_indexed(&self.context.chain_tip, &db_keys::last_sortition())? {
+            Some(hex_str) => BurnchainHeaderHash::from_hex(&hex_str).unwrap_or_else(|_| {
+                panic!(
+                    "FATAL: corrupt database: failed to parse {} into a hex string",
+                    &hex_str
+                )
+            }),
+            None => {
+                // no prior sortitions, so get the first
+                return self.get_first_block_snapshot();
+            }
+        };
+
+        self.get_block_snapshot(&ancestor_hash).map(|snapshot_opt| {
+            snapshot_opt
+                .unwrap_or_else(|| panic!("FATAL: corrupt index: no snapshot {}", ancestor_hash))
+        })
+    }
+
     pub fn get_leader_key_at(
         &self,
         key_block_height: u64,

@blockstack-devops
Contributor

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@stacks-network stacks-network locked as resolved and limited conversation to collaborators Oct 27, 2024