
feat: quorums work with a single member #6533


Open: wants to merge 5 commits into develop from quorum-single-member

Conversation

knst
Collaborator

@knst knst commented Jan 17, 2025

Issue being fixed or feature implemented

Platform development currently requires running 4 Dash Core instances: a wallet node and at least 3 Evo Nodes, which form a quorum.
This was partially addressed in #6261 by allowing the use of only 2 Evo Nodes. However, that solution still requires all DKG steps and heavy network communication, and suffers from various instability.

What was done?

Added support for the RegTest quorums llmq_test and llmq_test_instantsend to form and sign messages even when only one node is available.
While a regular quorum with several nodes goes through 7 creation steps (Initialization, Contribution, Complaining, Justification, Commitment, Finalization, Mining), this type of quorum jumps from Initialization straight to Finalization.
The signing process is also simplified: whereas a regular quorum requires multiple nodes to sign shares, a single-node quorum signs messages immediately with just one node.
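For illustration, here is a minimal standalone sketch (not Dash Core code; the enum, function, and output below are hypothetical) of the phase shortcut described above:

#include <cstddef>
#include <iostream>
#include <vector>

// Phase names as listed above.
enum class QuorumPhase { Initialization, Contribution, Complaining, Justification,
                         Commitment, Finalization, Mining };

// Illustrative only: a size-1 quorum skips the DKG phases and goes from
// Initialization straight to Finalization (the commitment is then mined as usual).
std::vector<QuorumPhase> PhasesFor(std::size_t quorum_size)
{
    if (quorum_size == 1) {
        return {QuorumPhase::Initialization, QuorumPhase::Finalization, QuorumPhase::Mining};
    }
    return {QuorumPhase::Initialization, QuorumPhase::Contribution, QuorumPhase::Complaining,
            QuorumPhase::Justification, QuorumPhase::Commitment, QuorumPhase::Finalization,
            QuorumPhase::Mining};
}

int main()
{
    std::cout << "size-1 quorum phases: " << PhasesFor(1).size() << "\n";   // 3
    std::cout << "regular quorum phases: " << PhasesFor(3).size() << "\n";  // 7
}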

Note that a single-node quorum does not generate a new private/public key pair; instead it reuses the operator's private and public key. It is not clear whether this has any downsides.

This is an alternative solution to #6437, which introduces a brand-new quorum type.
Advantages of this solution:

  • no breaking changes
  • no new quorums introduced
  • quorums with one node can be of either type: "evo only" (like the platform quorum) or mixed "evo and regular" nodes
  • it's possible to sign EHF signals too

How Has This Been Tested?

See a new functional test feature_llmq_singlenode.py

To use this feature, Dash Core should be started with these 3 parameters: -llmqtestparams=1:1 -llmqtestinstantsendparams=1:1 -llmqtestplatformparams=1:1. This lets InstantSend locks and ChainLocks form, EHF signals appear, and messages be signed with just one masternode or Evo node.
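As a rough idea of how such a size:threshold value (e.g. "1:1") could be parsed, here is a standalone sketch; the struct and helper below are hypothetical, and the actual handling lives in src/chainparams.cpp and may differ:

#include <exception>
#include <optional>
#include <string>

struct LLMQTestParams { int size; int threshold; };

// Hypothetical parser for the "size:threshold" format used by the options above,
// e.g. "-llmqtestplatformparams=1:1".
std::optional<LLMQTestParams> ParseLLMQTestParams(const std::string& value)
{
    const auto sep = value.find(':');
    if (sep == std::string::npos) return std::nullopt;
    try {
        const int size = std::stoi(value.substr(0, sep));
        const int threshold = std::stoi(value.substr(sep + 1));
        if (size < 1 || threshold < 1 || threshold > size) return std::nullopt;
        return LLMQTestParams{size, threshold};
    } catch (const std::exception&) {
        return std::nullopt;
    }
}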

Breaking Changes

N/A

Checklist:

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have added or updated relevant unit/integration/functional/e2e tests
  • I have made corresponding changes to the documentation
  • I have assigned this pull request to a milestone

@knst knst added this to the 22.1 milestone Jan 17, 2025
@DashCoreAutoGuix

Guix Automation has begun to build this PR tagged as v22.1.0-devpr6533.a4f35825. A new comment will be made when the image is pushed.


coderabbitai bot commented Jan 17, 2025

Walkthrough

The pull request introduces comprehensive modifications to the Dash blockchain's Long Living Masternode Quorums (LLMQs) implementation, with a specific focus on supporting single-node quorum scenarios. The changes span multiple source files across the project, including core consensus parameters, DKG (Distributed Key Generation) session handling, signature management, quorum commitment processing, RPC interfaces, and functional testing.

The modifications enhance the system's capability to handle quorums with a single member, introducing new methods and conditional logic to support this edge case. This includes updates to verification processes, signature creation and recovery mechanisms, quorum commitment finalization, and the addition of command-line options for test platform quorum parameters. The RPC interface and quorum info reporting are adjusted to correctly represent single-node quorum data. Additionally, new functional tests are added to validate single-node quorum creation, signing, and signature verification workflows.

The changes also include improvements to existing test frameworks and activation procedures, as well as minor fixes and enhancements to logging and error messages. Overall, the modifications ensure that single-node quorums can be created, signed, and processed correctly while maintaining the existing multi-node quorum infrastructure.


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5fb1acc and d33ca99.

📒 Files selected for processing (1)
  • src/rpc/quorums.cpp (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/rpc/quorums.cpp
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: x86_64-apple-darwin / Build depends


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (6)
src/llmq/options.cpp (1)

157-159: Avoid hardcoding activation heights for testnet quorums

The activation height for LLMQ_25_67 is hardcoded with a temporary activation constant:

// TODO: remove it in case of testnet reset
case Consensus::LLMQType::LLMQ_25_67:
    return pindexPrev->nHeight >= TESTNET_LLMQ_25_67_ACTIVATION_HEIGHT;

Consider referencing a consensus parameter or configuration setting instead of a hardcoded value to improve flexibility and maintainability. This change will simplify future updates and prevent potential issues during testnet resets.
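A minimal standalone sketch of the suggested direction (the struct and field name below are hypothetical; the real Consensus::Params is defined in src/consensus/params.h and is far richer):

// Standalone sketch; the field below would live in Consensus::Params and be set
// per network in chainparams.cpp instead of being a hardcoded constant.
struct ConsensusParamsSketch {
    int llmq_25_67_activation_height{-1}; // -1 = never active
};

// Replaces the hardcoded TESTNET_LLMQ_25_67_ACTIVATION_HEIGHT with a per-network
// consensus parameter, so a testnet reset only needs a chainparams change.
bool IsLLMQ2567Active(int prev_height, const ConsensusParamsSketch& params)
{
    return params.llmq_25_67_activation_height >= 0 &&
           prev_height >= params.llmq_25_67_activation_height;
}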

test/functional/feature_llmq_singlenode.py (1)

69-72: Simplify return statement in check_sigs method

The check_sigs method uses multiple if statements to check conditions and return False accordingly. This can be simplified by directly returning the combined condition:

def check_sigs(self, hasrecsigs, isconflicting1, isconflicting2):
    return has_sig == hasrecsigs and conflicting_1 == isconflicting1 and conflicting_2 == isconflicting2

This simplification improves code readability and aligns with Python best practices.

Apply this diff to simplify the method:

-    if has_sig != hasrecsigs:
-        return False
-    if conflicting_1 != isconflicting1:
-        return False
-    if conflicting_2 != isconflicting2:
-        return False
-
-    return True
+    return has_sig == hasrecsigs and conflicting_1 == isconflicting1 and conflicting_2 == isconflicting2
🧰 Tools
🪛 Ruff (0.8.2)

69-72: Return the condition not conflicting_2 != isconflicting2 directly

Replace with return not conflicting_2 != isconflicting2

(SIM103)

src/llmq/dkgsessionhandler.cpp (1)

552-562: Consider enhancing error handling for null commitments.

The implementation correctly handles single-member quorums, but the error handling for null commitments could be more explicit. Consider adding debug logging when the commitment is null to help diagnose issues.

 if (params.size == 1) {
     LogPrint(BCLog::LLMQ_DKG, "CDKGSessionManager::%s -- %s qi[%d] h=%s - finalize-single-commitment \n", __func__, params.name, quorumIndex, quorumHash.ToString());
     auto finalCommitment = curSession->FinalizeSingleCommitment();
     if (!finalCommitment.IsNull()) { // it can be null only if we are not member
         if (auto inv_opt = quorumBlockProcessor.AddMineableCommitment(finalCommitment); inv_opt.has_value()) {
             peerman.RelayInv(inv_opt.value());
         }
+    } else {
+        LogPrint(BCLog::LLMQ_DKG, "CDKGSessionManager::%s -- %s qi[%d] h=%s - null commitment (not a member)\n",
+                 __func__, params.name, quorumIndex, quorumHash.ToString());
     }
     WaitForNextPhase(QuorumPhase::Initialized, QuorumPhase::Contribute, curQuorumHash);
     return;
 }
src/llmq/dkgsession.cpp (1)

169-170: Verify the assertion's correctness.

The assertion prevents single-node quorums from reaching the contribution phase, which is correct since they don't need DKG. However, the comment could be more descriptive.

-    assert(params.threshold > 1); // we should not get there with single-node-quorums
+    assert(params.threshold > 1); // Single-node quorums bypass DKG and use direct signing instead
src/llmq/signing_shares.cpp (2)

781-795: Improve single-node signature recovery logging.

The log message is too long and should be split into multiple lines for better readability.

-                LogPrint(BCLog::LLMQ_SIGS, "CSigSharesManager::%s -- impossible to recover single-node signature - no shares yet. id=%s, msgHash=%s\n", __func__,
-                          id.ToString(), msgHash.ToString());
+                LogPrint(BCLog::LLMQ_SIGS, "CSigSharesManager::%s -- impossible to recover single-node signature - no shares yet.\n"
+                        "  id=%s\n"
+                        "  msgHash=%s\n",
+                        __func__, id.ToString(), msgHash.ToString());

1543-1571: Review single-node signature creation.

The implementation has a TODO regarding signing with operator key instead of quorum key.

Consider implementing proper quorum key signing in a follow-up PR to replace the temporary operator key workaround.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4c166e1 and a4f3582.

📒 Files selected for processing (14)
  • src/chainparams.cpp (3 hunks)
  • src/llmq/commitment.cpp (3 hunks)
  • src/llmq/dkgsession.cpp (4 hunks)
  • src/llmq/dkgsession.h (1 hunks)
  • src/llmq/dkgsessionhandler.cpp (1 hunks)
  • src/llmq/options.cpp (1 hunks)
  • src/llmq/params.h (1 hunks)
  • src/llmq/quorums.cpp (1 hunks)
  • src/llmq/signing_shares.cpp (2 hunks)
  • src/rpc/quorums.cpp (4 hunks)
  • test/functional/feature_dip4_coinbasemerkleroots.py (1 hunks)
  • test/functional/feature_llmq_singlenode.py (1 hunks)
  • test/functional/test_framework/test_framework.py (3 hunks)
  • test/functional/test_runner.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • src/llmq/params.h
🧰 Additional context used
🪛 GitHub Actions: Clang Diff Format Check
src/llmq/dkgsessionhandler.cpp

[warning] 550-550: Line exceeds formatting standards. Long log print statement needs to be split into multiple lines.

src/llmq/commitment.cpp

[warning] 131-131: Line exceeds formatting standards. Long log print statement needs to be split into multiple lines.


[warning] 169-169: Line exceeds formatting standards. Long log print statement needs to be split into multiple lines.

src/llmq/dkgsession.cpp

[warning] 1326-1329: Line exceeds formatting standards. Long version assignment statement needs to be split into multiple lines.

src/llmq/signing_shares.cpp

[warning] 780-783: Line exceeds formatting standards. Long log print statement needs to be split into multiple lines.

src/rpc/quorums.cpp

[warning] 179-179: Line exceeds formatting standards. Long function declaration needs to be split into multiple lines.

🪛 Ruff (0.8.2)
test/functional/feature_llmq_singlenode.py

69-72: Return the condition not conflicting_2 != isconflicting2 directly

Replace with return not conflicting_2 != isconflicting2

(SIM103)

⏰ Context from checks skipped due to timeout of 90000ms (8)
  • GitHub Check: Build (linux64_ubsan, x86_64-pc-linux-gnu, linux64)
  • GitHub Check: Build (linux64_tsan, x86_64-pc-linux-gnu, linux64)
  • GitHub Check: Build (linux64_sqlite, x86_64-pc-linux-gnu, linux64)
  • GitHub Check: Build (linux64_nowallet, x86_64-pc-linux-gnu, linux64)
  • GitHub Check: Build (linux64_fuzz, x86_64-pc-linux-gnu, linux64)
  • GitHub Check: Build (linux64_cxx20, x86_64-pc-linux-gnu, linux64)
  • GitHub Check: Build (linux64, x86_64-pc-linux-gnu, linux64)
  • GitHub Check: Build (arm-linux, arm-linux-gnueabihf, arm-linux)
🔇 Additional comments (13)
test/functional/test_framework/test_framework.py (1)

Line range hint 1201-1212: LGTM! The changes improve test flexibility by adding controlled activation speeds.

The addition of the slow_mode parameter with a default value of True allows tests to control the batch size of block generation during softfork activation. This is particularly useful for:

  1. Faster test execution when slow_mode=False (50 blocks per batch)
  2. More granular testing when slow_mode=True (10 blocks per batch)
test/functional/feature_dip4_coinbasemerkleroots.py (1)

271-273: LGTM! The changes improve code maintainability and observability.

The replacement of the custom activate_dip8 method with the generic activate_by_name method:

  1. Reduces code duplication
  2. Provides consistent activation behavior across different DIPs
  3. Improves test observability with additional logging
src/rpc/quorums.cpp (3)

252-253: Handle invalid LLMQ types gracefully

The check for llmq_opt.has_value() correctly handles invalid LLMQ types. Throwing a JSONRPCError with "invalid LLMQ type" provides clear feedback to the caller.


213-219: Verify public key share assignment for single-node quorums

In the case of a single-node quorum, the public key share is assigned using the operator's public key:

mo.pushKV("pubKeyShare", dmn->pdmnState->pubKeyOperator.ToString());

Ensure that using the operator's public key as the pubKeyShare is appropriate and that this aligns with the expected behavior for single-node quorums.

✅ Verification successful

Using operator's public key as pubKeyShare in single-node quorums is appropriate

The implementation is cryptographically sound as the operator's public key (pubKeyOperator) is already a BLS key used for verification in the system, and in a single-node quorum scenario, using it directly as the pubKeyShare maintains the required security properties while appropriately simplifying the protocol for the single-participant case.

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Search for single_node_quorum usage and context
rg "single_node_quorum" -B 2 -A 2

# Search for documentation or comments about quorum public keys
rg -i "quorum.*public.*key.*share" -B 2 -A 2

# Search for test cases related to single node quorums
fd "test.*cpp" --exec rg "single.*node.*quorum" {} \;

# Look for similar patterns of pubKeyOperator usage
rg "pubKeyOperator" -B 2 -A 2

Length of output: 49859


Script:

#!/bin/bash
# Search for documentation or comments about pubKeyShare
rg -i "pubKeyShare.*" -B 3 -A 3

# Look for quorum-specific implementations
ast-grep --pattern 'GetPubKeyShare($$$)'

Length of output: 7706


268-268: Confirm the condition for single-node quorum determination

The condition llmq_opt->size == 1 is used to determine if the quorum is a single-node quorum:

return BuildQuorumInfo(*llmq_ctx.quorum_block_processor, quorum, true, includeSkShare, llmq_opt->size == 1);

Verify that using llmq_opt->size == 1 reliably identifies single-node quorums in all scenarios, especially considering any future changes to LLMQ configurations.

✅ Verification successful

Using llmq_opt->size == 1 is a reliable way to identify single-node quorums

The codebase consistently uses the size parameter to identify single-node quorums across critical components like signing, DKG sessions, and RPC interfaces. This check is intentionally designed into the system and is reliable for determining single-node quorums.

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Search for LLMQ configuration definitions and size usage
ast-grep --pattern 'Params.LLMQ_$_'
rg -A 5 "LLMQ_" src/
rg -l "single.*node.*quorum" --type cpp --type md
rg "size.*=.*1" src/ --type cpp

Length of output: 69842


Script:

#!/bin/bash
# Look for documentation and code related to single-node quorums
rg -B 2 -A 2 "single.?node" --type cpp --type md
rg -B 2 -A 2 "size\s*=\s*1" --type cpp --type md
rg -B 2 -A 2 "SetQuorumSize" --type cpp

Length of output: 13684

test/functional/feature_llmq_singlenode.py (1)

115-115: Confirm intentional use of msgHash as quorumHash in test

In the test case:

assert not self.mninfo[1].node.quorum("sign", q_type, id, msgHash, msgHash)

You are passing msgHash as the quorumHash parameter to intentionally provide an invalid quorum hash. Confirm that this is the desired approach to test error handling for invalid quorum hashes. If so, consider adding a comment to clarify this for future readers.

✅ Verification successful

Intentional test case for invalid quorum hash validation

The test is correctly using msgHash as an invalid quorumHash parameter to verify error handling. This is explicitly documented in the test comment "Providing an invalid quorum hash should fail" and is followed by a proper test case using a valid quorum hash. The `assert not` correctly expects the operation to fail.

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Get the test context
cat test/functional/feature_llmq_singlenode.py

# Find other quorum sign usages
rg "quorum.*sign" -A 3 -B 3

Length of output: 66068

src/llmq/commitment.cpp (2)

73-76: LGTM! Conditional check for quorumVvecHash.

The modification correctly bypasses the quorumVvecHash verification for single-member quorums while maintaining the check for multi-member quorums.


118-136: LGTM! Optimized signature verification for single-member quorums.

The implementation correctly handles both cases:

  • For single-member quorums: Uses direct signature verification
  • For multi-member quorums: Uses aggregated signature verification

The logging of the operator's public key aids in debugging.
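Conceptually, the branch can be sketched as follows (standalone illustration only; the BLS types are abstracted away as callables, whereas the real commitment.cpp operates on CBLSSignature and CBLSPublicKey):

#include <cstddef>
#include <functional>

// Standalone sketch: a one-member quorum has exactly one signer, so a plain
// signature check against that member's operator key suffices; larger quorums
// need the aggregated-key path.
bool VerifyCommitmentSig(std::size_t quorum_size,
                         const std::function<bool()>& verify_against_operator_key,
                         const std::function<bool()>& verify_against_aggregate_key)
{
    return quorum_size == 1 ? verify_against_operator_key()
                            : verify_against_aggregate_key();
}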

🧰 Tools
🪛 GitHub Actions: Clang Diff Format Check

[warning] 131-131: Line exceeds formatting standards. Long log print statement needs to be split into multiple lines.

src/llmq/dkgsession.h (1)

385-387: LGTM! Clean method declaration for single-member commitment.

The FinalizeSingleCommitment() method declaration is well-placed and follows the class's style conventions. The comment accurately describes its purpose.

test/functional/test_runner.py (1)

137-137: LGTM! Test script addition follows conventions.

The new test script is correctly added to BASE_SCRIPTS with appropriate dependency note.

src/llmq/quorums.cpp (1)

417-422: LGTM: Efficient caching for single-node quorums.

The implementation correctly optimizes single-node quorums by caching them immediately, skipping unnecessary contribution building steps.

src/chainparams.cpp (2)

1153-1163: LGTM: Clean test platform quorum parameter handling.

The implementation correctly adds support for configuring test platform quorum parameters while maintaining consistency with existing parameter handling.


1392-1392: LGTM: Added command line argument for test platform quorum parameters.

The new argument follows the existing pattern and is properly documented.

Comment on lines 1299 to 1355
CFinalCommitment CDKGSession::FinalizeSingleCommitment()
{
    if (!AreWeMember()) {
        return {};
    }

    CDKGLogger logger(*this, __func__, __LINE__);

    std::vector<CBLSId> signerIds;
    std::vector<CBLSSignature> thresholdSigs;

    CFinalCommitment fqc(params, m_quorum_base_block_index->GetBlockHash());

    fqc.signers = {true};
    fqc.validMembers = {true};

    CBLSSecretKey sk1;
    sk1.MakeNewKey();

    fqc.quorumPublicKey = sk1.GetPublicKey();
    fqc.quorumVvecHash = {};

    // use just MN's operator public key as quorum pubkey.
    // TODO: use sk1 here instead and use recovery mechanism from shares, but that's not trivial to do
    const bool workaround_qpublic_key = true;
    if (workaround_qpublic_key) {
        fqc.quorumPublicKey = m_mn_activeman->GetPubKey();
    }
    const bool isQuorumRotationEnabled{false};
    fqc.nVersion = CFinalCommitment::GetVersion(isQuorumRotationEnabled, DeploymentActiveAfter(m_quorum_base_block_index, Params().GetConsensus(), Consensus::DEPLOYMENT_V19));
    fqc.quorumIndex = 0;

    uint256 commitmentHash = BuildCommitmentHash(fqc.llmqType, fqc.quorumHash, fqc.validMembers, fqc.quorumPublicKey, fqc.quorumVvecHash);
    fqc.quorumSig = sk1.Sign(commitmentHash, m_use_legacy_bls);

    fqc.membersSig = m_mn_activeman->Sign(commitmentHash, m_use_legacy_bls);

    if (workaround_qpublic_key) {
        fqc.quorumSig = fqc.membersSig;
    }

    if (!fqc.Verify(m_dmnman, m_qsnapman, m_quorum_base_block_index, true)) {
        logger.Batch("failed to verify final commitment");
        assert(false);
    }

    logger.Batch("final commitment: validMembers=%d, signers=%d, quorumPublicKey=%s",
                 fqc.CountValidMembers(), fqc.CountSigners(), fqc.quorumPublicKey.ToString());

    logger.Flush();

    return fqc;
}

⚠️ Potential issue

Review the single commitment finalization implementation.

The implementation has several concerns:

  1. Using operator's public key as quorum public key is a temporary workaround
  2. The verification step uses an assertion which could crash in production
  3. The error handling could be improved

Suggested improvements:

  1. Replace the assertion with proper error handling:
-        assert(false);
+        return CFinalCommitment(); // Return empty commitment on verification failure
  2. Consider implementing proper key generation instead of using the workaround:
-    // TODO: use sk1 here instead and use recovery mechanism from shares, but that's not trivial to do
-    const bool workaround_qpublic_key = true;
-    if (workaround_qpublic_key) {
-        fqc.quorumPublicKey = m_mn_activeman->GetPubKey();
-    }
+    // Generate and use a proper quorum key
+    fqc.quorumPublicKey = sk1.GetPublicKey();

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 GitHub Actions: Clang Diff Format Check

[warning] 1326-1329: Line exceeds formatting standards. Long version assignment statement needs to be split into multiple lines.

@knst knst mentioned this pull request Jan 17, 2025
5 tasks
@knst knst force-pushed the quorum-single-member branch from a4f3582 to f194aca on January 17, 2025 at 19:02

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (5)
test/functional/feature_llmq_singlenode.py (1)

9-10: Fix typo in docstring comment

The docstring contains a typo: "signle" should be "single".

-Checks creating LLMQs signle node quorum creation and signing
+Checks creating LLMQs single node quorum creation and signing
src/llmq/dkgsession.cpp (1)

169-170: Verify assertion condition for single-node quorums

The assertion prevents multi-node quorums from using the single-node path, but it might be clearer to explicitly check for single-node quorums.

-assert(params.threshold > 1); // we should not get there with single-node-quorums
+// Ensure we don't enter this path for single-node quorums
+assert(params.size > 1 && params.threshold > 1);
src/llmq/signing_shares.cpp (1)

781-795: Improve single-node signature recovery implementation

The single-node signature recovery logic is correct but could be more concise and clearer.

 if (quorum->params.size == 1) {
-    if (sigSharesForSignHash->empty()) {
-        LogPrint(BCLog::LLMQ_SIGS, "CSigSharesManager::%s -- impossible to recover single-node signature - no shares yet. id=%s, msgHash=%s\n", __func__,
-                  id.ToString(), msgHash.ToString());
-        return;
-    }
-    const auto& sigShare = sigSharesForSignHash->begin()->second;
-    CBLSSignature recoveredSig = sigShare.sigShare.Get();
-    LogPrint(BCLog::LLMQ_SIGS, "CSigSharesManager::%s -- recover single-node signature. id=%s, msgHash=%s\n", __func__,
-              id.ToString(), msgHash.ToString());
+    if (const auto it = sigSharesForSignHash->begin(); it != sigSharesForSignHash->end()) {
+        CBLSSignature recoveredSig = it->second.sigShare.Get();
+        LogPrint(BCLog::LLMQ_SIGS, "CSigSharesManager::%s -- recover single-node signature. id=%s, msgHash=%s\n", __func__,
+                  id.ToString(), msgHash.ToString());
+        auto rs = std::make_shared<CRecoveredSig>(quorum->params.type, quorum->qc->quorumHash, id, msgHash, recoveredSig);
+        sigman.ProcessRecoveredSig(rs, peerman);
+    }
+    return;
+}
src/llmq/quorums.cpp (2)

417-422: LGTM! Efficient optimization for single-node quorums.

The optimization path for single-node quorums improves performance by bypassing unnecessary DKG steps and disk I/O operations. The implementation is thread-safe and follows existing patterns.

Consider adding a brief comment explaining the optimization for single-node quorums:

+    // Optimize single-node quorums by bypassing DKG steps and disk I/O
     if (populate_cache && llmq_params_opt->size == 1) {
         WITH_LOCK(cs_map_quorums, mapQuorumsCache[llmqType].insert(quorumHash, quorum));
 
         return quorum;
     }

417-422: Well-designed architectural changes for single-node quorums.

The implementation elegantly introduces single-node quorum optimization while:

  • Maintaining backward compatibility
  • Not affecting existing multi-node quorum paths
  • Following the principle of least surprise
  • Being properly tested
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a4f3582 and f194aca.

📒 Files selected for processing (12)
  • src/chainparams.cpp (3 hunks)
  • src/llmq/commitment.cpp (3 hunks)
  • src/llmq/dkgsession.cpp (4 hunks)
  • src/llmq/dkgsession.h (1 hunks)
  • src/llmq/dkgsessionhandler.cpp (1 hunks)
  • src/llmq/params.h (1 hunks)
  • src/llmq/quorums.cpp (1 hunks)
  • src/llmq/signing_shares.cpp (2 hunks)
  • src/rpc/quorums.cpp (4 hunks)
  • test/functional/feature_llmq_singlenode.py (1 hunks)
  • test/functional/test_framework/test_framework.py (3 hunks)
  • test/functional/test_runner.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
  • src/llmq/params.h
  • src/llmq/commitment.cpp
  • test/functional/test_runner.py
  • src/llmq/dkgsession.h
  • src/llmq/dkgsessionhandler.cpp
  • src/chainparams.cpp
🧰 Additional context used
🪛 GitHub Actions: Clang Diff Format Check
src/llmq/dkgsession.cpp

[warning] 1326-1329: Line needs proper formatting and indentation adjustment


[warning] 1343-1346: Log statement needs proper line wrapping and formatting

src/llmq/signing_shares.cpp

[warning] 780-783: Log print statement needs proper line wrapping and formatting


[warning] 1555-1558: Log print statement requires proper formatting and line breaks

src/rpc/quorums.cpp

[warning] 179-179: Function declaration needs proper line wrapping and formatting

🪛 Ruff (0.8.2)
test/functional/feature_llmq_singlenode.py

69-72: Return the condition not conflicting_2 != isconflicting2 directly

Replace with return not conflicting_2 != isconflicting2

(SIM103)

⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Build Dependencies (linux64, x86_64-pc-linux-gnu)
  • GitHub Check: Build Dependencies (arm-linux, arm-linux-gnueabihf)
🔇 Additional comments (7)
test/functional/test_framework/test_framework.py (2)

1201-1201: LGTM! Clear and backward-compatible parameter addition.

The new slow_mode parameter with a default value of True maintains backward compatibility while providing control over the block generation speed.


Line range hint 1212-1254: LGTM! Well-implemented batch size control.

The conditional batch size (50 or 10 blocks) based on slow_mode provides:

  • More granular testing when needed (slow_mode=True)
  • Faster test execution when granularity isn't required (slow_mode=False)

The implementation maintains test correctness while adding flexibility in execution speed.

test/functional/feature_llmq_singlenode.py (1)

176-176: Investigate issue preventing ChainLocks and InstantSend after logging connections

The TODO comment indicates an issue where ChainLocks (CL) and InstantSend (IS) are no longer formed after logging connections. This suggests a side-effect that may interfere with network messaging or state.

Do you want me to help identify the root cause and suggest a fix for this issue?

src/llmq/dkgsession.cpp (1)

1299-1352: ⚠️ Potential issue

Review single commitment finalization implementation

The implementation has several concerns:

  1. Using operator's public key as quorum public key is a temporary workaround
  2. The verification step uses an assertion which could crash in production
  3. The error handling could be improved

Suggested improvements:

  1. Replace the assertion with proper error handling:
-    assert(false);
+    return CFinalCommitment(); // Return empty commitment on verification failure
  2. Consider implementing proper key generation instead of the workaround:
-    // TODO: use sk1 here instead and use recovery mechanism from shares, but that's not trivial to do
-    const bool workaround_qpublic_key = true;
-    if (workaround_qpublic_key) {
-        fqc.quorumPublicKey = m_mn_activeman->GetPubKey();
-    }
+    // Generate and use a proper quorum key
+    fqc.quorumPublicKey = sk1.GetPublicKey();

Likely invalid or redundant comment.

🧰 Tools
🪛 GitHub Actions: Clang Diff Format Check

[warning] 1326-1329: Line needs proper formatting and indentation adjustment


[warning] 1343-1346: Log statement needs proper line wrapping and formatting

src/rpc/quorums.cpp (2)

182-182: Conform to formatting standards: Split long function declaration

The function declaration exceeds the line length standards as indicated by the pipeline failure logs. Splitting the parameters into multiple lines improves readability and conforms to the project's formatting guidelines.

Apply this diff to fix the formatting:

-static UniValue BuildQuorumInfo(const llmq::CQuorumBlockProcessor& quorum_block_processor, const llmq::CQuorumCPtr& quorum, bool includeMembers, bool includeSkShare, bool single_node_quorum = false)
+static UniValue BuildQuorumInfo(
+    const llmq::CQuorumBlockProcessor& quorum_block_processor,
+    const llmq::CQuorumCPtr& quorum,
+    bool includeMembers,
+    bool includeSkShare,
+    bool single_node_quorum = false)

213-219: Verify single-node quorum member key handling

The code uses operator's public key for single-node quorums, which matches the workaround in FinalizeSingleCommitment. This should be updated when proper key generation is implemented.

Run this script to verify the consistency of this workaround across the codebase:

✅ Verification successful

Review comment verified - single-node quorum key handling is a documented workaround

The codebase confirms that using operator's public key for single-node quorums is a deliberate workaround. This is explicitly documented in FinalizeSingleCommitment() with a TODO noting it should be replaced with proper key generation and share recovery mechanism. The handling is consistent across the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for consistent handling of single-node quorum keys
# Expect: Only these two occurrences of using operator's public key

rg -A 5 "pubKeyOperator" 

Length of output: 53102


Script:

#!/bin/bash
# Search for FinalizeSingleCommitment implementation and single_node_quorum usage
ast-grep --pattern 'FinalizeSingleCommitment($$$)'
rg -A 5 "single_node_quorum"

Length of output: 3397


Script:

#!/bin/bash
# Search for LLMQ commitment implementations and single node handling
fd -e cpp -e h . src/llmq | xargs rg -l "Commitment|commitment|single.*quorum" 
rg -A 5 "FinalizeSingleCommitment|single.*quorum" src/llmq/

Length of output: 4525


Script:

#!/bin/bash
# Get the full implementation of FinalizeSingleCommitment
rg -B 5 -A 30 "^CFinalCommitment CDKGSession::FinalizeSingleCommitment" src/llmq/dkgsession.cpp

Length of output: 1183

src/llmq/quorums.cpp (1)

417-422: Verify safety constraints for single-node optimization.

Please ensure that this optimization is properly constrained to test quorums (llmq_test and llmq_test_instantsend) to prevent accidental usage in production quorums.

Let's verify the LLMQ parameter definitions:

✅ Verification successful

Single-node quorum optimization is properly constrained to test environments

The optimization is safe because:

  • Default quorum sizes are > 1 (test quorums: 3-4 nodes, production: 25-400 nodes)
  • Single-node configuration is only possible via debug parameters (-llmqtestparams) in regtest mode
  • Code properly handles size=1 cases with appropriate validation
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check LLMQ parameter definitions to ensure single-node quorums are properly constrained
# Expected: Only test quorums should allow size=1

# Search for LLMQ parameter definitions
rg -A 5 "LLMQ.*size.*=.*1"

Length of output: 28


Script:

#!/bin/bash
# Try different patterns to locate LLMQ parameter definitions
echo "=== Searching for LLMQ parameter definitions ==="
rg -B 2 -A 5 "Llmq.*Params" src/

echo "=== Searching for test quorum configurations ==="
rg -B 2 -A 5 "llmq.*test" src/

echo "=== Searching for size parameter in quorum context ==="
rg -B 2 -A 5 "size\s*=|size:" src/llmq/

Length of output: 23225

Comment on lines 1543 to 1576
if (quorum->params.size == 1) {
    int memberIdx = quorum->GetMemberIndex(activeMasterNodeProTxHash);
    if (memberIdx == -1) {
        // this should really not happen (IsValidMember gave true)
        return std::nullopt;
    }

    CSigShare sigShare(quorum->params.type, quorum->qc->quorumHash, id, msgHash, uint16_t(memberIdx), {});
    uint256 signHash = sigShare.buildSignHash();

    // TODO: This one should be SIGN by QUORUM key, not by OPERATOR key
    // see TODO in CDKGSession::FinalizeSingleCommitment for details
    sigShare.sigShare.Set(m_mn_activeman->Sign(signHash, bls::bls_legacy_scheme.load()), bls::bls_legacy_scheme.load());

    if (!sigShare.sigShare.Get().IsValid()) {
        LogPrintf("CSigSharesManager::%s -- failed to sign sigShare. signHash=%s, id=%s, msgHash=%s, time=%s\n", __func__,
                  signHash.ToString(), sigShare.getId().ToString(), sigShare.getMsgHash().ToString(), t.count());
        return std::nullopt;
    }

    sigShare.UpdateKey();

    LogPrint(BCLog::LLMQ_SIGS, "CSigSharesManager::%s -- created sigShare. signHash=%s, id=%s, msgHash=%s, llmqType=%d, quorum=%s, time=%s\n", __func__,
             signHash.ToString(), sigShare.getId().ToString(), sigShare.getMsgHash().ToString(), ToUnderlying(quorum->params.type), quorum->qc->quorumHash.ToString(), t.count());

    return sigShare;
}

🛠️ Refactor suggestion

Review single-node signature creation implementation

The implementation has several issues:

  1. Uses operator's key instead of quorum key (as noted in TODO)
  2. Duplicates error handling logic from multi-node case
  3. Could be more concise

Suggested improvements:

  1. Extract common error handling:
+bool CSigSharesManager::CreateAndValidateSigShare(const CQuorumCPtr& quorum, const uint256& signHash,
+    CBLSSecretKey& signingKey, CSigShare& sigShare) const
+{
+    sigShare.sigShare.Set(signingKey.Sign(signHash, bls::bls_legacy_scheme.load()), bls::bls_legacy_scheme.load());
+    if (!sigShare.sigShare.Get().IsValid()) {
+        LogPrintf("CSigSharesManager::%s -- failed to sign sigShare. signHash=%s, id=%s, msgHash=%s\n", __func__,
+                  signHash.ToString(), sigShare.getId().ToString(), sigShare.getMsgHash().ToString());
+        return false;
+    }
+    return true;
+}
  2. Simplify single-node case:
 if (quorum->params.size == 1) {
     int memberIdx = quorum->GetMemberIndex(activeMasterNodeProTxHash);
-    if (memberIdx == -1) {
-        // this should really not happen (IsValidMember gave true)
-        return std::nullopt;
-    }
+    assert(memberIdx != -1); // IsValidMember already checked this

     CSigShare sigShare(quorum->params.type, quorum->qc->quorumHash, id, msgHash, uint16_t(memberIdx), {});
     uint256 signHash = sigShare.buildSignHash();

     // TODO: This one should be SIGN by QUORUM key, not by OPERATOR key
-    sigShare.sigShare.Set(m_mn_activeman->Sign(signHash, bls::bls_legacy_scheme.load()), bls::bls_legacy_scheme.load());
-    if (!sigShare.sigShare.Get().IsValid()) {
-        LogPrintf("CSigSharesManager::%s -- failed to sign sigShare. signHash=%s, id=%s, msgHash=%s\n", __func__,
-                  signHash.ToString(), sigShare.getId().ToString(), sigShare.getMsgHash().ToString());
-        return std::nullopt;
+    CBLSSecretKey signingKey = m_mn_activeman->GetOperatorKey();
+    if (!CreateAndValidateSigShare(quorum, signHash, signingKey, sigShare)) {
+        return std::nullopt;
     }

     sigShare.UpdateKey();
     return sigShare;
 }

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 GitHub Actions: Clang Diff Format Check

[warning] 1555-1558: Log print statement requires proper formatting and line breaks

@DashCoreAutoGuix

Guix Automation has completed; a release should be present here: https://github.com/dashpay/dash-dev-branches/releases/tag/v22.1.0-devpr6533.a4f35825. The image should be on dockerhub soon.

@DashCoreAutoGuix

Guix Automation has begun to build this PR tagged as v22.1.0-devpr6533.f194acab. A new comment will be made when the image is pushed.

@DashCoreAutoGuix

Guix Automation has completed; a release should be present here: https://github.com/dashpay/dash-dev-branches/releases/tag/v22.1.0-devpr6533.f194acab. The image should be on dockerhub soon.

@knst knst force-pushed the quorum-single-member branch from f194aca to b068998 on January 22, 2025 at 08:26
@knst
Collaborator Author

knst commented Jan 22, 2025

@knst knst force-pushed the quorum-single-member branch from f194aca to b068998

Force-pushed to fix multiple intermittent errors in the new functional test.

@DashCoreAutoGuix

Guix Automation has begun to build this PR tagged as v22.1.0-devpr6533.b0689983. A new comment will be made when the image is pushed.

@knst knst removed the guix-build label Jan 22, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (4)
test/functional/feature_llmq_singlenode.py (3)

9-9: Fix typo in docstring comment

The word "signle" is misspelled in the docstring.

-Checks creating LLMQs signle node quorum creation and signing
+Checks creating LLMQs single node quorum creation and signing

34-36: Consider using constants for magic numbers

The test parameters use magic numbers (1, 0, 1, 1) which could be made more maintainable by using named constants.

+MASTERNODE_COUNT = 1
+INITIAL_NODES = 0
+EVO_COUNT = 2
+LLMQ_TEST_PARAMS = (1, 1)

-        self.set_dash_test_params(1, 0, [["-llmqtestinstantsenddip0024=llmq_test_instantsend", "-peertimeout=300000000"]],
-                evo_count=2)
-        self.set_dash_llmq_test_params(1, 1)
+        self.set_dash_test_params(MASTERNODE_COUNT, INITIAL_NODES, 
+                [["-llmqtestinstantsenddip0024=llmq_test_instantsend", "-peertimeout=300000000"]],
+                evo_count=EVO_COUNT)
+        self.set_dash_llmq_test_params(*LLMQ_TEST_PARAMS)

166-167: Add comment explaining TODO

The TODO comment lacks context about the issue it's addressing.

-        # TODO: somehow nodes here logs connections and no more CL / IS are formed
+        # TODO: Investigate why logging connections at this point prevents ChainLocks (CL) 
+        # and InstantSend (IS) from being formed. This might be related to network 
+        # messaging or connection state changes during logging.
src/llmq/signing_shares.cpp (1)

1555-1558: Fix log statement formatting.

The log statement needs proper line wrapping and formatting according to the pipeline failure.

Apply this formatting:

-            LogPrintf("CSigSharesManager::%s -- failed to sign sigShare. signHash=%s, id=%s, msgHash=%s, time=%s\n", __func__,
-                      signHash.ToString(), sigShare.getId().ToString(), sigShare.getMsgHash().ToString(), t.count());
+            LogPrintf("CSigSharesManager::%s -- failed to sign sigShare. signHash=%s, id=%s, msgHash=%s, time=%s\n",
+                    __func__, signHash.ToString(), sigShare.getId().ToString(),
+                    sigShare.getMsgHash().ToString(), t.count());
🧰 Tools
🪛 GitHub Actions: Clang Diff Format Check

[warning] 1555-1558: Log print statement requires proper formatting and line breaks

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f194aca and b068998.

📒 Files selected for processing (12)
  • src/chainparams.cpp (3 hunks)
  • src/llmq/commitment.cpp (3 hunks)
  • src/llmq/dkgsession.cpp (4 hunks)
  • src/llmq/dkgsession.h (1 hunks)
  • src/llmq/dkgsessionhandler.cpp (1 hunks)
  • src/llmq/params.h (1 hunks)
  • src/llmq/quorums.cpp (1 hunks)
  • src/llmq/signing_shares.cpp (2 hunks)
  • src/rpc/quorums.cpp (4 hunks)
  • test/functional/feature_llmq_singlenode.py (1 hunks)
  • test/functional/test_framework/test_framework.py (3 hunks)
  • test/functional/test_runner.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • src/llmq/params.h
  • src/llmq/dkgsession.h
  • src/llmq/dkgsessionhandler.cpp
  • test/functional/test_runner.py
  • src/llmq/commitment.cpp
🧰 Additional context used
🪛 GitHub Actions: Clang Diff Format Check
src/llmq/signing_shares.cpp

[warning] 780-783: Log print statement needs proper line wrapping and formatting


[warning] 1555-1558: Log print statement requires proper formatting and line breaks

src/llmq/dkgsession.cpp

[warning] 1326-1329: Line needs proper formatting and indentation adjustment


[warning] 1343-1346: Log statement needs proper line wrapping and formatting

src/rpc/quorums.cpp

[warning] 179-179: Function declaration needs proper line wrapping and formatting

🪛 Ruff (0.8.2)
test/functional/feature_llmq_singlenode.py

21-21: test_framework.util.force_finish_mnsync imported but unused

Remove unused import: test_framework.util.force_finish_mnsync

(F401)


70-73: Return the condition not conflicting_2 != isconflicting2 directly

Replace with return not conflicting_2 != isconflicting2

(SIM103)

⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Build Dependencies (linux64, x86_64-pc-linux-gnu)
  • GitHub Check: Build Dependencies (arm-linux, arm-linux-gnueabihf)
🔇 Additional comments (8)
test/functional/test_framework/test_framework.py (1)

Line range hint 1201-1254: LGTM! The slow mode parameter provides better control over mining speed.

The changes introduce a slow_mode parameter to control the batch size during mining for softfork activation. This is a good improvement as it allows:

  • Faster testing when slow_mode=False (batch size of 50)
  • More granular testing when slow_mode=True (batch size of 10)

The implementation is correct and maintains backward compatibility with the default slow_mode=True.

src/llmq/dkgsession.cpp (2)

1323-1327: ⚠️ Potential issue

Replace temporary workaround with proper key generation

The current implementation uses a workaround by using the operator's public key instead of implementing proper key generation and recovery from shares.

-    // TODO: use sk1 here instead and use recovery mechanism from shares, but that's not trivial to do
-    const bool workaround_qpublic_key = true;
-    if (workaround_qpublic_key) {
-        fqc.quorumPublicKey = m_mn_activeman->GetPubKey();
-    }
+    // Generate and use a proper quorum key
+    fqc.quorumPublicKey = sk1.GetPublicKey();
+    // Implement proper key recovery mechanism from shares
+    bool success = RecoverKeyFromShares(sk1, shares, fqc.quorumPublicKey);
+    if (!success) {
+        logger.Batch("Failed to recover key from shares");
+        return CFinalCommitment();
+    }

Likely invalid or redundant comment.


1341-1344: ⚠️ Potential issue

Replace assertion with proper error handling

Using assertions in production code can lead to crashes. Replace with proper error handling.

-    if (!fqc.Verify(m_dmnman, m_qsnapman, m_quorum_base_block_index, true)) {
-        logger.Batch("failed to verify final commitment");
-        assert(false);
-    }
+    if (!fqc.Verify(m_dmnman, m_qsnapman, m_quorum_base_block_index, true)) {
+        logger.Batch("failed to verify final commitment");
+        return CFinalCommitment(); // Return empty commitment on verification failure
+    }

Likely invalid or redundant comment.

src/rpc/quorums.cpp (1)

182-182: 🛠️ Refactor suggestion

Split long function declaration

The function declaration exceeds line length standards. Split parameters into multiple lines for better readability.

-static UniValue BuildQuorumInfo(const llmq::CQuorumBlockProcessor& quorum_block_processor, const llmq::CQuorumCPtr& quorum, bool includeMembers, bool includeSkShare, bool single_node_quorum = false)
+static UniValue BuildQuorumInfo(
+    const llmq::CQuorumBlockProcessor& quorum_block_processor,
+    const llmq::CQuorumCPtr& quorum,
+    bool includeMembers,
+    bool includeSkShare,
+    bool single_node_quorum = false)

Likely invalid or redundant comment.

src/llmq/signing_shares.cpp (2)

781-795: LGTM! Well-implemented single-node signature recovery logic.

The implementation correctly handles the special case of single-node quorums by:

  1. Checking for available shares
  2. Using the share directly as the recovered signature
  3. Providing appropriate logging

1543-1571: Security consideration: Using operator key instead of quorum key.

While the implementation is functional, using the operator key instead of the quorum key (as noted in the TODO) may have security implications. This should be addressed in a future update to align with the security model of multi-node quorums.

Run the following to verify the impact:

✅ Verification successful

Confirmed: Using operator key is a known technical workaround

The verification confirms this is a deliberate temporary solution acknowledged in multiple places in the codebase. The FinalizeSingleCommitment implementation shows this is a known technical debt, with comments indicating that a proper solution would involve using quorum keys with a recovery mechanism from shares, though that implementation would be non-trivial.

  • src/llmq/dkgsession.cpp: Uses operator public key as quorum pubkey with explicit workaround flag
  • src/llmq/signing_shares.cpp: Uses operator key for signing instead of quorum key
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other instances where operator key is used instead of quorum key
rg -A 2 "TODO.*QUORUM key" 

Length of output: 386


Script:

#!/bin/bash
# Search for FinalizeSingleCommitment implementation and context
ast-grep --pattern 'FinalizeSingleCommitment($$$) {
  $$$
}'

# Also search for any related comments
rg -B 2 -A 5 "FinalizeSingleCommitment"

Length of output: 2232


Script:

#!/bin/bash
# Get the complete implementation of FinalizeSingleCommitment
rg -A 30 "^CFinalCommitment CDKGSession::FinalizeSingleCommitment"

# Search for discussions about operator vs quorum key security
rg -i "operator.{0,20}key.{0,30}(security|risk)" 

Length of output: 1896

🧰 Tools
🪛 GitHub Actions: Clang Diff Format Check

[warning] 1555-1558: Log print statement requires proper formatting and line breaks

src/chainparams.cpp (2)

928-928: LGTM! Consistent handling of LLMQ_TEST_PLATFORM parameters.

The implementation follows the established pattern for updating LLMQ parameters from command line arguments.


1153-1163: LGTM! Well-structured parameter handling for LLMQ_TEST_PLATFORM.

The implementation:

  1. Properly validates LLMQ types through assertions
  2. Uses consistent parameter naming
  3. Follows the established pattern for parameter handling

Comment on lines +43 to +47
skip_count = 24 - (self.nodes[0].getblockcount() % 24)
if skip_count != 0:
    self.bump_mocktime(1)
    self.generate(self.nodes[0], skip_count)
time.sleep(1)
self.generate(self.nodes[0], 30)

🛠️ Refactor suggestion

Add error handling for skip_count calculation

The skip count calculation lacks error handling for potential edge cases.

-        skip_count = 24 - (self.nodes[0].getblockcount() % 24)
-        if skip_count != 0:
-            self.bump_mocktime(1)
-            self.generate(self.nodes[0], skip_count)
+        try:
+            current_block_count = self.nodes[0].getblockcount()
+            if current_block_count < 0:
+                raise ValueError("Invalid block count")
+            skip_count = 24 - (current_block_count % 24)
+            if skip_count != 0:
+                self.bump_mocktime(1)
+                self.generate(self.nodes[0], skip_count)
+        except Exception as e:
+            self.log.error(f"Error calculating skip count: {str(e)}")
+            raise

Comment on lines 213 to 216
if (single_node_quorum) {
    mo.pushKV("pubKeyShare", dmn->pdmnState->pubKeyOperator.ToString());
} else {
    CBLSPublicKey pubKey = quorum->GetPubKeyShare(i);
    if (pubKey.IsValid()) {
        mo.pushKV("pubKeyShare", pubKey.ToString());
    }

🛠️ Refactor suggestion

Add validation for single-node quorum public key

The code should validate that the operator's public key is valid before using it in single-node quorums.

     if (single_node_quorum) {
-        mo.pushKV("pubKeyShare", dmn->pdmnState->pubKeyOperator.ToString());
+        const auto& pubKeyOperator = dmn->pdmnState->pubKeyOperator;
+        if (!pubKeyOperator.IsValid()) {
+            throw JSONRPCError(RPC_INVALID_PARAMETER, "Invalid operator public key for single-node quorum");
+        }
+        mo.pushKV("pubKeyShare", pubKeyOperator.ToString());
     } else {
         CBLSPublicKey pubKey = quorum->GetPubKeyShare(i);
         if (pubKey.IsValid()) {
             mo.pushKV("pubKeyShare", pubKey.ToString());
         }
     }

Comment on lines +417 to +422
if (populate_cache && llmq_params_opt->size == 1) {
    WITH_LOCK(cs_map_quorums, mapQuorumsCache[llmqType].insert(quorumHash, quorum));

    return quorum;
}


🛠️ Refactor suggestion

Add validation before caching single-node quorum

The code should validate the quorum before caching it for single-node quorums.

     if (populate_cache && llmq_params_opt->size == 1) {
+        // Validate quorum before caching
+        if (!quorum->qc->IsValidMember(quorum->members[0]->proTxHash)) {
+            LogPrint(BCLog::LLMQ, "CQuorumManager::%s -- Invalid member in single-node quorum\n", __func__);
+            return nullptr;
+        }
+        if (!quorum->qc->quorumPublicKey.IsValid()) {
+            LogPrint(BCLog::LLMQ, "CQuorumManager::%s -- Invalid public key in single-node quorum\n", __func__);
+            return nullptr;
+        }
         WITH_LOCK(cs_map_quorums, mapQuorumsCache[llmqType].insert(quorumHash, quorum));
 
         return quorum;
     }

@knst knst force-pushed the quorum-single-member branch 3 times, most recently from ef38b2a to 119159c on January 22, 2025 08:41
@DashCoreAutoGuix

Guix Automation has completed; a release should be present here: https://github.com/dashpay/dash-dev-branches/releases/tag/v22.1.0-devpr6533.b0689983. The image should be on dockerhub soon.

@PastaPastaPasta PastaPastaPasta modified the milestones: 22.1, 22, 23 Jan 27, 2025
@knst knst marked this pull request as draft January 31, 2025 13:17
@knst
Collaborator Author

knst commented Jan 31, 2025

It is an alternate solution to #6437 which introduces brand-new quorum type.

@lklimek noticed that Platform can't change the parameters of each quorum as this PR proposes, because all of these params are hard-coded. It would require a significant amount of work to make them configurable.

I will revive #6437 soon and make it review-ready.

Still, the current approach seems workable.

@knst knst marked this pull request as ready for review April 17, 2025 07:59

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
test/functional/feature_llmq_singlenode.py (4)

9-10: Fix the typo in the file description.

There's a small typo in the comment: "signle" should be "single".

-Checks creating LLMQs signle node quorum creation and signing
+Checks creating LLMQs single node quorum creation and signing

53-73: Simplify the return logic in check_sigs method.

The check_sigs method can be simplified by directly returning the result of the condition checks instead of using multiple if-statements and early returns.

def check_sigs(self, hasrecsigs, isconflicting1, isconflicting2):
    has_sig = False
    conflicting_1 = False
    conflicting_2 = False

    for mn in self.mninfo:
        if mn.node.quorum("hasrecsig", q_type, id, msgHash):
            has_sig = True
        if mn.node.quorum("isconflicting", q_type, id, msgHash):
            conflicting_1 = True
        if mn.node.quorum("isconflicting", q_type, id, msgHashConflict):
            conflicting_2 = True
-    if has_sig != hasrecsigs:
-        return False
-    if conflicting_1 != isconflicting1:
-        return False
-    if conflicting_2 != isconflicting2:
-        return False
-
-    return True
+    return (has_sig == hasrecsigs and 
+            conflicting_1 == isconflicting1 and 
+            conflicting_2 == isconflicting2)
🧰 Tools
🪛 Ruff (0.8.2)

69-72: Return the condition not conflicting_2 != isconflicting2 directly

Replace with return not conflicting_2 != isconflicting2

(SIM103)


120-129: Add a comment explaining the fallback mechanism for signature creation.

The code tries to create a signature share with one node, and if that fails, it tries with another node. Adding a comment would make this fallback mechanism clearer.

qnode = self.mninfo[0].node
try:
    qnode.quorum("sign", q_type, id, msgHash, quorumHash, False)
except JSONRPCException as e:
    if e.error['code'] == -8: # failed to create sigShare
+        # Fallback to another node if the first one fails to create a signature share.
+        # This can happen if the first node is not part of the quorum.
        qnode = self.mninfo[1].node
        qnode.quorum("sign", q_type, id, msgHash, quorumHash, False)
    else:
        raise e

137-137: Consider verifying recovered signatures on all nodes.

The current assertion checks if any node has a recovered signature, but it might be better to verify that all nodes eventually receive the signature to ensure proper propagation across the network.

- assert (has0 or has1 or has2)
+ # First check if at least one node has the signature
+ assert (has0 or has1 or has2)
+ # Then wait for all nodes to have the signature
+ self.wait_until(lambda: self.nodes[0].quorum("hasrecsig", q_type, id, msgHash) and 
+                         self.nodes[1].quorum("hasrecsig", q_type, id, msgHash) and 
+                         self.nodes[2].quorum("hasrecsig", q_type, id, msgHash), timeout=10)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b068998 and 119159c.

📒 Files selected for processing (12)
  • src/chainparams.cpp (3 hunks)
  • src/llmq/commitment.cpp (3 hunks)
  • src/llmq/dkgsession.cpp (4 hunks)
  • src/llmq/dkgsession.h (1 hunks)
  • src/llmq/dkgsessionhandler.cpp (1 hunks)
  • src/llmq/params.h (1 hunks)
  • src/llmq/quorums.cpp (1 hunks)
  • src/llmq/signing_shares.cpp (2 hunks)
  • src/rpc/quorums.cpp (4 hunks)
  • test/functional/feature_llmq_singlenode.py (1 hunks)
  • test/functional/test_framework/test_framework.py (3 hunks)
  • test/functional/test_runner.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • src/llmq/params.h
🚧 Files skipped from review as they are similar to previous changes (10)
  • src/llmq/dkgsession.h
  • test/functional/test_runner.py
  • src/llmq/commitment.cpp
  • src/llmq/dkgsessionhandler.cpp
  • src/llmq/quorums.cpp
  • src/rpc/quorums.cpp
  • src/chainparams.cpp
  • test/functional/test_framework/test_framework.py
  • src/llmq/signing_shares.cpp
  • src/llmq/dkgsession.cpp
🧰 Additional context used
🪛 Ruff (0.8.2)
test/functional/feature_llmq_singlenode.py

69-72: Return the condition not conflicting_2 != isconflicting2 directly

Replace with return not conflicting_2 != isconflicting2

(SIM103)

🔇 Additional comments (2)
test/functional/feature_llmq_singlenode.py (2)

165-173: LGTM! Great approach for testing time-dependent behavior.

The use of mocktime to fast forward and test the cleanup of old signatures is a good approach. This effectively tests the time-based lifecycle management of recovered signatures in single-node quorums.
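
A minimal sketch of that pattern, reusing the q_type, id and msgHash names from this test and assuming the node's recovered-sig max age is the default one week (the actual test may configure a shorter window):

assert self.mninfo[0].node.quorum("hasrecsig", q_type, id, msgHash)
# Jump mocktime past the recovered-sig max age; cleanup runs periodically, so poll until the sig is gone
self.bump_mocktime(7 * 24 * 60 * 60 + 1)
self.wait_until(lambda: not self.mninfo[0].node.quorum("hasrecsig", q_type, id, msgHash), timeout=60)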


175-183: LGTM! Comprehensive testing of chainlocks and InstantSend with new quorums.

The test properly verifies that chainlocks and InstantSend continue to function after quorum changes and time progression, which is crucial for ensuring the stability of these features with single-node quorums.
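
For readers unfamiliar with these helpers, a hedged sketch of how such a check is usually expressed with the DashTestFramework helpers wait_for_instantlock and wait_for_chainlocked_block_all_nodes (the actual test may structure it differently):

# Send a transaction and expect it to be InstantSend-locked by the single-node quorum
txid = self.nodes[0].sendtoaddress(self.nodes[0].getnewaddress(), 1)
self.wait_for_instantlock(txid, self.nodes[0])
# Mine a block on top and expect it to become chainlocked on every node
tip = self.generate(self.nodes[0], 1)[0]
self.wait_for_chainlocked_block_all_nodes(tip)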

@knst knst force-pushed the quorum-single-member branch from 119159c to b3aeaa4 on April 22, 2025 10:39
@knst
Collaborator Author

knst commented Apr 22, 2025

The current approach has been proven to work with Platform's test suite: the platform chain starts and platform contracts run.

Please help review this PR and get it merged into develop @UdjinM6 @PastaPastaPasta

@knst knst force-pushed the quorum-single-member branch from 5fb1acc to d33ca99 on April 22, 2025 13:28

@UdjinM6 UdjinM6 left a comment


utACK d33ca99
