
Conversation

@PastaPastaPasta
Member

@PastaPastaPasta PastaPastaPasta commented Nov 10, 2025

Issue being fixed or feature implemented

Today's testing on testnet showed that, under load, cs_main was a major contention source:

(note, however, that the major contention sources from before #6912 / #6468 are now significantly reduced)

========================================================================================================================
TOP 40 LOCATIONS BY TOTAL CONTENTION TIME (AGGREGATE)
========================================================================================================================
Lock Name                                Location                               Count    Total(μs)    Avg(μs)    Max(μs)  Nodes
------------------------------------------------------------------------------------------------------------------------
cs                                       llmq/signing_shares.cpp:507            11952     81357095     6807.0     299734      8
::cs_main                                instantsend/signing.cpp:210            75241     62985844      837.1      93240      8
cs_main                                  index/base.cpp:340                     29584     27040349      914.0    1613365      8
cs_main                                  net_processing.cpp:5462                69291     19784473      285.5    1329191      8
cs                                       llmq/signing_shares.cpp:1732            2355     17466035     7416.6     127471      8
::cs_main                                llmq/quorums.cpp:1214                  38975     15478543      397.1     883884      8
m_nodes_mutex                            net.cpp:2043                           33457     14736059      440.4      31622      8
::cs_main                                llmq/quorums.cpp:526                   10587     10577503      999.1    2083501      8
pnode->cs_vSend                          net.cpp:2435                           68528      9671050      141.1      37265      8
cs_main                                  net_processing.cpp:4196                 4945      8608157     1740.8    1326146      8
::cs_main                                instantsend/instantsend.cpp:272         5602      8416269     1502.4    1260997      8
cs                                       txmempool.cpp:1319                      7942      8059684     1014.8     356199      8
::cs_main                                validation.cpp:3747                      309      6468144    20932.5    1225274      8
::cs_main                                validation.cpp:6009                    21855      3393152      155.3     228195      8
pnode->cs_vSend                          net.cpp:4709                            7532      2299813      305.3      41543      8
m_nodes_mutex                            ./net.h:1374                            6229      1998506      320.8      15593      8
inv_relay->m_tx_inventory_mutex          net_processing.cpp:1169                 7687      1871859      243.5      16292      8
cs_db                                    instantsend/db.cpp:239                  4323      1527297      353.3      20779      8
cs_cache                                 spork.cpp:244                          18083      1331472       73.6      27722      8
::cs_main                                chainlock/signing.cpp:58                  98      1312496    13392.8    1273691      8
cs_db                                    ./instantsend/db.h:139                  4381      1305738      298.0      29777      8
mutexMsgProc                             net.cpp:2604                            4877      1243737      255.0      24641      8
What was done?

  • Add unordered_lru_cache for block heights in CInstantSendManager (see the sketch below)
  • Provide GetTipHeight() (non-optional) and GetBlockHeight() helpers
  • Use the cache in InstantSendSigner::CheckCanLock
  • Cache-first lookup of the cycleHash height in the ISLOCK message path and in batch verification
  • Cache-first mined-height lookup for HasChainLock and WriteInstantSendLockMined
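
A declaration-level sketch of the new cache members, for orientation only: the mutex and size constant below appear verbatim in the diff, while the container's template arguments and the tip-height field are assumptions about their rough shape (the real declarations live in src/instantsend/instantsend.h and carry thread-safety annotations such as GUARDED_BY(cs_height_cache)).

    // Sketch, not the exact code from the PR:
    mutable Mutex cs_height_cache;
    static constexpr size_t MAX_BLOCK_HEIGHT_CACHE{16384};
    // block hash -> height, capped at MAX_BLOCK_HEIGHT_CACHE entries
    unordered_lru_cache<uint256, int, StaticSaltedHasher> m_cached_block_heights GUARDED_BY(cs_height_cache);
    int m_cached_tip_height GUARDED_BY(cs_height_cache){-1};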

How Has This Been Tested?

Breaking Changes

Please describe any breaking changes your code introduces

Checklist:

Go over all the following points, and put an x in all the boxes that apply.

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have added or updated relevant unit/integration/functional/e2e tests
  • I have made corresponding changes to the documentation
  • I have assigned this pull request to a milestone (for repository code-owners and collaborators only)

- Add unordered_lru_cache for block heights in CInstantSendManager
- Provide GetTipHeight() (non-optional) and GetBlockHeight() helpers
- Use cache in InstantSendSigner::CheckCanLock
- Cache-first for cycleHash height in ISLOCK message path and batch verify
- Cache-first mined-height for HasChainLock and WriteInstantSendLockMined

Only InstantSend files are included in this commit.
@PastaPastaPasta PastaPastaPasta added this to the 23.1 milestone Nov 10, 2025
@github-actions

github-actions bot commented Nov 10, 2025

⚠️ Potential Merge Conflicts Detected

This PR has potential conflicts with the following open PRs:

Please coordinate with the authors of these PRs to avoid merge conflicts.

@DashCoreAutoGuix

Guix Automation has begun to build this PR tagged as v23.0.0-devpr6953.ce5601c6. A new comment will be made when the image is pushed.

@coderabbitai

coderabbitai bot commented Nov 10, 2025

Walkthrough

The patch adds an explicit block-height cache to CInstantSendManager (m_cached_block_heights, m_cached_tip_height) with CacheBlockHeight/CacheBlockHeightInternal/GetBlockHeight/GetTipHeight APIs. ISLOCK processing (ProcessMessage, ProcessPendingInstantSendLocks, ProcessInstantSendLock, ResolveBlockConflicts, RemoveConflictingLock) now consults and updates the height cache instead of direct CBlockIndex lookups; mined heights are persisted when resolved. BlockConnected, BlockDisconnected, and UpdatedBlockTip synchronize and maintain the cache and cached tip height. Locking annotations were updated to require cs_height_cache where appropriate.
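
The pattern is small enough to illustrate with a self-contained stand-in. The sketch below is not the PR's code: std::string, std::unordered_map and std::mutex replace uint256, unordered_lru_cache and Mutex, and a callback stands in for the cs_main-guarded LookupBlockIndex(); it only demonstrates the cache-first lookup described above.

    #include <functional>
    #include <iostream>
    #include <mutex>
    #include <optional>
    #include <string>
    #include <unordered_map>

    class HeightCache
    {
    public:
        explicit HeightCache(std::function<std::optional<int>(const std::string&)> slow_lookup)
            : m_slow_lookup(std::move(slow_lookup)) {}

        // Cache-first: only fall back to the expensive lookup (cs_main in Dash) on a miss.
        std::optional<int> GetBlockHeight(const std::string& hash)
        {
            {
                std::lock_guard<std::mutex> lock(m_cs);
                auto it = m_heights.find(hash);
                if (it != m_heights.end()) return it->second;
            }
            std::optional<int> height = m_slow_lookup(hash);
            if (height) CacheBlockHeight(hash, *height);
            return height;
        }

        void CacheBlockHeight(const std::string& hash, int height)
        {
            std::lock_guard<std::mutex> lock(m_cs);
            if (m_heights.size() >= MAX_ENTRIES) m_heights.clear(); // crude stand-in for LRU eviction
            m_heights[hash] = height;
        }

    private:
        static constexpr size_t MAX_ENTRIES{16384};
        std::mutex m_cs;
        std::unordered_map<std::string, int> m_heights;
        std::function<std::optional<int>(const std::string&)> m_slow_lookup;
    };

    int main()
    {
        int slow_calls = 0;
        HeightCache cache([&](const std::string&) { ++slow_calls; return std::optional<int>{1234}; });
        cache.GetBlockHeight("cycle-hash"); // miss: takes the slow path once
        cache.GetBlockHeight("cycle-hash"); // hit: served from the cache
        std::cout << "slow lookups: " << slow_calls << '\n'; // prints 1
    }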

Sequence Diagram(s)

sequenceDiagram
    participant P2P as P2P Message
    participant ISM as InstantSendManager
    participant Cache as HeightCache (m_cached_block_heights)
    participant Chain as ChainState
    participant Mempool as Mempool/UTXO

    P2P->>ISM: Receive ISLOCK (ProcessMessage)
    ISM->>Cache: GetBlockHeight(cycleHash)
    alt cached
        Cache-->>ISM: return height
    else not cached
        ISM->>Chain: LookupBlockIndex(cycleHash)
        Chain-->>ISM: return blockIndex/height?
        ISM->>Cache: CacheBlockHeight(cycleHash,height)
    end
    ISM->>ISM: validate cycleHeight vs dkgInterval
    ISM->>ISM: ProcessInstantSendLock(...)
    ISM->>Mempool: Find TX / check inputs
    alt TX mined
        ISM->>Cache: GetBlockHeight(hashBlock)
        alt cached
            Cache-->>ISM: minedHeight
        else not cached
            ISM->>Chain: LookupBlockIndex(hashBlock)
            Chain-->>ISM: return height
            ISM->>Cache: CacheBlockHeight(hashBlock,height)
        end
        ISM->>ISM: WriteInstantSendLockMined(minedHeight)
    else TX unmined
        ISM->>ISM: handle mempool path / mark pending
    end
sequenceDiagram
    participant Node as Node
    participant ISM as InstantSendManager
    participant Cache as HeightCache

    Node->>ISM: BlockConnected(newBlock)
    ISM->>Cache: CacheBlockHeight(newBlock.hash, newBlock.height)
    ISM->>Cache: Update m_cached_tip_height

    Node->>ISM: BlockDisconnected(oldBlock)
    ISM->>Cache: Invalidate/Adjust cached entries for oldBlock and recompute new tip
    ISM->>Cache: CacheBlockHeight(newTip.hash,newTip.height) (if known)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:

  • Cache correctness: eviction policy (MAX_BLOCK_HEIGHT_CACHE) and CacheBlockHeightInternal logic.
  • Concurrency: correct locking (cs_height_cache) across all modified methods and interaction with other locks (e.g., cs_main).
  • Reorg handling: correctness of BlockDisconnected logic and tip-height updates under reorgs.
  • Persistence and correctness of minedHeight handling and WriteInstantSendLockMined calls.
  • Signer interface change: InstantSendSignerParent additions (GetBlockHeight, GetTipHeight) and all implementations/consumers.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The PR title accurately and clearly summarizes the main changes: adding a height cache mechanism to InstantSend and reducing cs_main lock contention.
  • Description check: ✅ Passed. The PR description is related to the changeset; it explains the motivation (cs_main contention under load), lists key implementation changes (cache infrastructure, helper functions, usage patterns), and includes performance profiling data demonstrating improvements.

@DashCoreAutoGuix

Guix Automation has completed; a release should be present here: https://github.com/dashpay/dash-dev-branches/releases/tag/v23.0.0-devpr6953.ce5601c6. The image should be on dockerhub soon.

@DashCoreAutoGuix

Guix Automation has begun to build this PR tagged as v23.0.0-devpr6953.8fc12bc1. A new comment will be made when the image is pushed.

@DashCoreAutoGuix

Guix Automation has completed; a release should be present here: https://github.com/dashpay/dash-dev-branches/releases/tag/v23.0.0-devpr6953.8fc12bc1. The image should be on dockerhub soon.

@DashCoreAutoGuix

Guix Automation has begun to build this PR tagged as v23.0.0-devpr6953.8a5b845a. A new comment will be made when the image is pushed.

@DashCoreAutoGuix

Guix Automation has completed; a release should be present here: https://github.com/dashpay/dash-dev-branches/releases/tag/v23.0.0-devpr6953.8a5b845a. The image should be on dockerhub soon.

- Updated log messages in CInstantSendManager to enhance clarity and maintain consistency.
- Continued log message formatting for better readability.
- No functional changes were made to the logic.
@DashCoreAutoGuix

Guix Automation has begun to build this PR tagged as v23.0.0-devpr6953.8bdb71d4. A new comment will be made when the image is pushed.

@DashCoreAutoGuix

Guix Automation has completed; a release should be present here: https://github.com/dashpay/dash-dev-branches/releases/tag/v23.0.0-devpr6953.8bdb71d4. The image should be on dockerhub soon.

Uint256HashMap<int64_t> timingsTxSeen GUARDED_BY(cs_timingsTxSeen);

mutable Mutex cs_height_cache;
static constexpr size_t MAX_BLOCK_HEIGHT_CACHE{16384};
Collaborator

Isn't 16k a bit too much? Rotation quorums are not alive that long, and it's a very unlikely situation that a tx will exist that long before it gets mined.

16k blocks is roughly 28 days.
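
For reference, at Dash's ~2.5-minute target block spacing: 16384 blocks × 2.5 min ≈ 40,960 min ≈ 683 h ≈ 28.4 days.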

@PastaPastaPasta
Member Author

Deployed the analysis during load on testnet. Note that these two nodes can't be compared 1:1 (they may be connected slightly differently), but we see significant improvements.

🔍 Overall Contention Summary

Metric                         Before         After          Change
Total Lock Contentions         134,190        157,051        +17% increase
Total Contention Time (μs)     140,162,326    126,780,252    −9.5% decrease
Average Contention Time (μs)   1,044.5        807.3          −22.7% decrease
Unique Lock Locations          171            164            −4% fewer locks

Interpretation:
Although the number of lock contentions increased (~17%), the total contention time and average wait time per contention both improved notably.
This suggests higher concurrency efficiency — the system is hitting more locks but each contention is shorter, likely due to improved lock granularity or reduced critical section duration.
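
As a sanity check, the averages follow directly from the totals in the table: 140,162,326 μs / 134,190 contentions ≈ 1,044.5 μs before, and 126,780,252 μs / 157,051 contentions ≈ 807.3 μs after.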


⚙️ Top Locks by Total Contention Time

1. Major Improvements

  • ::cs_main instantsend/signing.cpp:210
    Removed entirely.
    This lock was the 2nd heaviest in “before,” with 25M μs total contention.
    Its removal likely shifted contention away from InstantSend signing to other modules (notably llmq/quorums.cpp).

  • cs_main index/base.cpp:340
    → Contention time dropped from 13.1M → 9.7M μs (−26%).
    Average contention decreased from 1589 → 950 μs, a major win for block index operations.

  • m_nodes_mutex net.cpp:2043
    → Total contention time increased from 5.23M → 5.98M μs, but the per-lock average dropped from 1022 → 686 μs: contention events increased, but each block was shorter.

  • ::cs_main validation.cpp:6009
    → Total contention dropped from 2.31M → 0.53M μs, with average from 452 → 99 μs — likely due to better validation flow parallelism.

  • ::cs_main validation.cpp:3747
    → Still present, but total contention time ~1M μs vs 0.82M μs, about the same but now less dominant in overall share.

2. New or Increased Contention

  • ::cs_main llmq/quorums.cpp:1214
    → Total contention jumped from 8.25M → 29.7M μs (+260%).
    Average ~949 μs.
    This moved to the #2 hotspot overall, implying new concurrency bottlenecks in quorum processing (likely after the InstantSend refactors moved more work here).

  • cs llmq/signing_shares.cpp:507
    → Slight decrease in total contention (41.0M → 37.7M μs), but it remains the #1 contention hotspot.
    Average lock duration improved from 11.6 ms → 8.96 ms (−23%).

  • cs llmq/signing_shares.cpp:1732
    → Roughly unchanged (10.56M → 10.28M μs).
    Still high average (10.2 ms), suggesting signing share aggregation is still a major contention source.

  • txmempool.cpp:1319
    → Total contention rose from 7.00M → 9.34M μs (+33%), with a small rise in average duration (~1977 μs).
    Indicates heavier mempool locking — possibly due to more transaction throughput.


📈 Top Locks by Contention Count

Lock (Location)                     Before Count   After Count   ΔCount   Avg Time Change
pnode->cs_vSend net.cpp:2435        22,502         34,780        +55%     +11% (130 → 145 μs)
::cs_main llmq/quorums.cpp:1214     9,017          31,341        +247%    +4%
cs_main net_processing.cpp:5462     16,558         14,634        −12%     −24%
cs_main index/base.cpp:340          8,285          10,230        +24%     −40%
m_nodes_mutex net.cpp:2043          5,115          8,713         +70%     −33%
::cs_main validation.cpp:6009       5,107          5,406         +6%      −78%

Observations:

  • The highest-frequency locks (e.g. pnode->cs_vSend, m_nodes_mutex) show more contention events but much faster execution — again supporting that thread concurrency increased and lock contention periods shortened.
  • Network-related locks (cs_vSend, m_nodes_mutex, connman.m_nodes_mutex) became more active but more efficient, suggesting better message batching or thread handling.

🔄 Structural & Behavioral Shifts

✅ Improved

  • InstantSend & Validation path: contention drastically reduced or removed (instantsend/signing.cpp lock gone, validation.cpp locks improved).
    Likely due to better critical section partitioning or async signing refactor.
  • Average contention time per lock dropped across most major areas.

⚠️ Regressed

  • Quorum operations (llmq/quorums.cpp) now dominate total contention time.
    This suggests some of the synchronization load from InstantSend was moved to the LLMQ code.
  • txmempool.cpp contention increased, hinting at heavier mempool updates (potentially due to faster block validation or more aggressive transaction relay).

🧠 Interpretation

Overall, this transition indicates:

  • Refactor Success: The shift reduced overall contention time and made locking more granular — a clear performance gain.
  • New Bottleneck: Contention has migrated to LLMQ quorum signing and sharing, particularly around llmq/quorums.cpp:1214 and signing_shares.cpp:507.
  • Networking Improvements: Higher cs_vSend counts but low average time suggest increased concurrency and better throughput.
  • InstantSend codepaths were streamlined and now cause negligible lock contention.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
src/instantsend/instantsend.h (1)

97-97: Reconsider the cache size constant.

The MAX_BLOCK_HEIGHT_CACHE constant is set to 16384, which represents approximately 28 days of blocks (at ~2.5 minute block time). As noted in previous review feedback, this may be excessive for the typical lifetime of transactions in the InstantSend flow. Consider reducing this to a more reasonable value based on typical InstantSend lock processing times and memory constraints.

🧹 Nitpick comments (2)
src/instantsend/instantsend.cpp (2)

131-141: Consider minor optimization in cache update.

When the block index is found via fallback lookup (line 133-138), you're calling blockIndex->GetBlockHash() again on line 139. Since islock->cycleHash should already match the block hash, you could use islock->cycleHash directly for caching.

Apply this diff:

-        CacheBlockHeight(blockIndex->GetBlockHash(), blockIndex->nHeight);
+        CacheBlockHeight(islock->cycleHash, blockIndex->nHeight);

391-399: Simplify fallback logic by relying on GetBlockHeight.

The fallback pattern here (lines 393-399) duplicates logic already present in GetBlockHeight (lines 977-995). Since GetBlockHeight already performs the lookup and caching internally, the explicit fallback here may be redundant. The current code calls GetBlockHeight once (line 391), then does manual lookup and caching if it returns nullopt (lines 393-399).

However, GetBlockHeight already does this internally. The only scenario where this explicit fallback helps is if you need to ensure caching happens even when the block isn't initially found. But GetBlockHeight already caches on successful lookup.

Consider whether this duplication is necessary or if a single GetBlockHeight call suffices.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8fc12bc and 13727fb.

📒 Files selected for processing (3)
  • src/instantsend/instantsend.cpp (9 hunks)
  • src/instantsend/instantsend.h (6 hunks)
  • src/instantsend/signing.cpp (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/instantsend/signing.cpp
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: kwvg
Repo: dashpay/dash PR: 6543
File: src/wallet/receive.cpp:240-251
Timestamp: 2025-02-06T14:34:30.466Z
Learning: Pull request #6543 is focused on move-only changes and refactoring, specifically backporting from Bitcoin. Behavior changes should be proposed in separate PRs.
Learnt from: knst
Repo: dashpay/dash PR: 6692
File: src/llmq/blockprocessor.cpp:217-224
Timestamp: 2025-08-19T14:57:31.801Z
Learning: In PR #6692, knst acknowledged a null pointer dereference issue in ProcessBlock() method where LookupBlockIndex may return nullptr but is passed to gsl::not_null, and created follow-up PR #6789 to address it, consistent with avoiding scope creep in performance-focused PRs.
📚 Learning: 2025-08-19T14:57:31.801Z
Learnt from: knst
Repo: dashpay/dash PR: 6692
File: src/llmq/blockprocessor.cpp:217-224
Timestamp: 2025-08-19T14:57:31.801Z
Learning: In PR #6692, knst acknowledged a null pointer dereference issue in ProcessBlock() method where LookupBlockIndex may return nullptr but is passed to gsl::not_null, and created follow-up PR #6789 to address it, consistent with avoiding scope creep in performance-focused PRs.

Applied to files:

  • src/instantsend/instantsend.cpp
📚 Learning: 2025-07-15T14:53:04.819Z
Learnt from: knst
Repo: dashpay/dash PR: 6691
File: src/test/llmq_params_tests.cpp:148-151
Timestamp: 2025-07-15T14:53:04.819Z
Learning: In the Dash Core LLMQ implementation, signingActiveQuorumCount is never 0 in the actual parameters defined in params.h, making division by zero scenarios unrealistic in the max_cycles() function.

Applied to files:

  • src/instantsend/instantsend.cpp
📚 Learning: 2025-05-05T12:45:44.781Z
Learnt from: knst
Repo: dashpay/dash PR: 6658
File: src/evo/creditpool.cpp:177-185
Timestamp: 2025-05-05T12:45:44.781Z
Learning: The GetAncestor() function in CBlockIndex safely handles negative heights by returning nullptr rather than asserting, making it safe to call with potentially negative values.

Applied to files:

  • src/instantsend/instantsend.cpp
📚 Learning: 2025-10-21T11:09:34.688Z
Learnt from: kwvg
Repo: dashpay/dash PR: 6849
File: src/governance/governance.cpp:1339-1343
Timestamp: 2025-10-21T11:09:34.688Z
Learning: In the Dash Core codebase, `CacheMap` (defined in src/cachemap.h) is internally thread-safe and uses its own `mutable CCriticalSection cs` to protect access to its members. Methods like `GetSize()`, `Insert()`, `Get()`, `HasKey()`, etc., can be called without holding external locks.

Applied to files:

  • src/instantsend/instantsend.h
📚 Learning: 2025-01-02T21:50:00.967Z
Learnt from: kwvg
Repo: dashpay/dash PR: 6504
File: src/llmq/context.cpp:42-43
Timestamp: 2025-01-02T21:50:00.967Z
Learning: LLMQContext manages concurrency for the `CInstantSendManager`. Previously, this was handled globally; now it's handled as a class member in `LLMQContext`, but the concurrency control remains consistent.

Applied to files:

  • src/instantsend/instantsend.h
📚 Learning: 2025-09-09T21:36:11.833Z
Learnt from: kwvg
Repo: dashpay/dash PR: 6837
File: src/stats/rawsender.cpp:0-0
Timestamp: 2025-09-09T21:36:11.833Z
Learning: In RawSender class (src/stats/rawsender.cpp), cs_net is the appropriate mutex for protecting socket access (m_sock) and network operations, not additional custom locks. The implementation correctly uses cs_net with GUARDED_BY annotations and EXCLUSIVE_LOCKS_REQUIRED to synchronize socket access between SendDirectly() and ReconnectThread().

Applied to files:

  • src/instantsend/instantsend.h
🧬 Code graph analysis (2)
src/instantsend/instantsend.cpp (1)
src/node/interfaces.cpp (23)
  • hash (893-910)
  • hash (893-893)
  • hash (932-936)
  • hash (932-932)
  • hash (958-963)
  • hash (958-958)
  • LOCK (581-585)
  • LOCK (591-598)
  • LOCK (599-606)
  • LOCK (868-877)
  • LOCK (911-915)
  • LOCK (1095-1099)
  • height (563-573)
  • height (563-563)
  • height (878-885)
  • height (878-878)
  • height (886-892)
  • height (886-886)
  • height (937-941)
  • height (937-937)
  • tip (97-105)
  • tip (586-590)
  • tip (607-615)
src/instantsend/instantsend.h (2)
src/saltedhasher.h (1)
  • uint256 (38-44)
src/instantsend/instantsend.cpp (14)
  • CacheBlockHeightInternal (965-969)
  • CacheBlockHeightInternal (965-965)
  • ProcessPendingInstantSendLocks (183-246)
  • ProcessPendingInstantSendLocks (183-183)
  • ProcessPendingInstantSendLocks (248-360)
  • ProcessPendingInstantSendLocks (248-251)
  • ProcessInstantSendLock (362-449)
  • ProcessInstantSendLock (362-363)
  • CacheBlockHeight (971-975)
  • CacheBlockHeight (971-971)
  • GetBlockHeight (977-995)
  • GetBlockHeight (977-977)
  • GetTipHeight (997-1015)
  • GetTipHeight (997-997)
🪛 GitHub Actions: Clang Diff Format Check
src/instantsend/instantsend.cpp

[error] 393-395: Clang format differences found. Apply clang-format changes to src/instantsend/instantsend.cpp. Exit code 1 due to unformatted code.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: linux64_tsan-build / Build source
  • GitHub Check: arm-linux-build / Build source
  • GitHub Check: win64-build / Build source
  • GitHub Check: mac-build / Build source
  • GitHub Check: linux64_ubsan-build / Build source
  • GitHub Check: linux64_fuzz-build / Build source
  • GitHub Check: linux64-build / Build source
  • GitHub Check: linux64_sqlite-build / Build source
  • GitHub Check: linux64_nowallet-build / Build source
  • GitHub Check: Lint / Run linters
🔇 Additional comments (10)
src/instantsend/instantsend.cpp (9)

25-25: LGTM!

Adding the cassert include is appropriate for the assert statements used in the implementation.


280-292: LGTM!

The height cache integration correctly uses GetBlockHeight to fetch the cycle height and GetTipHeight to determine the appropriate signing height for quorum selection. The thread-safe cache access reduces cs_main contention as intended.


509-513: LGTM!

The block height caching during connection correctly maintains the cache and updates the tip height atomically under the lock. This ensures the cache stays synchronized with chain state.


542-550: LGTM with minor observation.

The cache invalidation logic correctly removes the disconnected block and updates the tip. The code handles the case where there's no previous block (genesis disconnect) by setting m_cached_tip_height to -1.

One minor note: when caching new_tip (line 548), we assume pprev is a valid block. This is safe because pprev is either a valid ancestor or nullptr, and we guard the caching with if (new_tip).


673-677: LGTM!

The tip height caching in UpdatedBlockTip correctly maintains synchronization between the chain state and the height cache.


859-859: LGTM!

Using GetTipHeight here reduces cs_main contention as intended by the PR objectives.


965-975: LGTM!

The internal/external caching method pattern correctly separates lock acquisition from the locked operation, allowing efficient batch caching when the lock is already held.
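
For readers unfamiliar with the idiom, here is a self-contained stand-in (std::mutex and plain containers in place of Dash's Mutex, LOCK and EXCLUSIVE_LOCKS_REQUIRED annotations) showing why the public/Internal split helps when several entries are cached under one lock acquisition:

    #include <mutex>
    #include <string>
    #include <unordered_map>
    #include <vector>

    class Cache
    {
    public:
        // Public entry point: takes the lock, then delegates to the Internal helper.
        void CacheBlockHeight(const std::string& hash, int height)
        {
            std::lock_guard<std::mutex> lock(m_cs);
            CacheBlockHeightInternal(hash, height);
        }

        // Batch path: acquire the lock once and reuse the Internal helper for every entry.
        void CacheBlockHeights(const std::vector<std::pair<std::string, int>>& entries)
        {
            std::lock_guard<std::mutex> lock(m_cs);
            for (const auto& [hash, height] : entries) CacheBlockHeightInternal(hash, height);
        }

    private:
        // Precondition: m_cs is held (the real code encodes this with thread-safety annotations).
        void CacheBlockHeightInternal(const std::string& hash, int height) { m_heights[hash] = height; }

        std::mutex m_cs;
        std::unordered_map<std::string, int> m_heights;
    };

    int main()
    {
        Cache c;
        c.CacheBlockHeight("tip", 100);
        c.CacheBlockHeights({{"a", 98}, {"b", 99}});
    }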


977-995: LGTM!

The GetBlockHeight implementation correctly:

  1. Returns std::nullopt for null/unknown hashes
  2. Checks the cache first to avoid cs_main contention
  3. Falls back to block index lookup when necessary
  4. Caches the result before returning

This achieves the PR's goal of reducing cs_main contention in the InstantSend flow.


997-1015: Document assumption in GetTipHeight assertion.

The assertion at line 1007 (assert(tip != nullptr)) assumes the chain tip is always valid. While this is typically true after genesis, consider whether this is always guaranteed in all call contexts (e.g., during initialization or chain reorgs).

If this assumption is architectural (i.e., GetTipHeight should never be called before the chain is initialized), the assert is appropriate but could benefit from a comment explaining the invariant. Otherwise, consider returning a sentinel value or throwing a more descriptive error.

Based on learnings about safe handling of potential nullptr returns from chain operations.

src/instantsend/instantsend.h (1)

98-99: StaticSaltedHasher is properly compatible with uint256.

The SaltedHasherImpl<uint256> specialization exists and uses SipHashUint256(k0, k1, v) for hashing uint256 values. StaticSaltedHasher's templated operator() delegates to SaltedHasherImpl<T>::CalcHash, which for uint256 keys provides proper cryptographic distribution. This pattern is used consistently throughout the codebase for LRU caches with uint256 keys (e.g., masternode/meta.h, llmq/signing.h), and there's even a dedicated Uint256LruHashMap type alias for this use case.

Comment on lines +391 to 415
std::optional<int> minedHeight = GetBlockHeight(hashBlock);
if (found_transaction) {
    if (!minedHeight.has_value()) {
        const CBlockIndex* pindexMined = WITH_LOCK(::cs_main, return m_chainstate.m_blockman.LookupBlockIndex(hashBlock));
        if (pindexMined != nullptr) {
            CacheBlockHeight(pindexMined->GetBlockHash(), pindexMined->nHeight);
            minedHeight = pindexMined->nHeight;
        }
    }
    // Let's see if the TX that was locked by this islock is already mined in a ChainLocked block. If yes,
    // we can simply ignore the islock, as the ChainLock implies locking of all TXs in that chain
-   if (pindexMined != nullptr && clhandler.HasChainLock(pindexMined->nHeight, pindexMined->GetBlockHash())) {
-       LogPrint(BCLog::INSTANTSEND, "CInstantSendManager::%s -- txlock=%s, islock=%s: dropping islock as it already got a ChainLock in block %s, peer=%d\n", __func__,
-                islock->txid.ToString(), hash.ToString(), hashBlock.ToString(), from);
+   if (minedHeight.has_value() && clhandler.HasChainLock(*minedHeight, hashBlock)) {
+       LogPrint(BCLog::INSTANTSEND, /* Continued */
+                "CInstantSendManager::%s -- txlock=%s, islock=%s: dropping islock as it already got a " /* Continued */
+                "ChainLock in block %s, peer=%d\n",
+                __func__, islock->txid.ToString(), hash.ToString(), hashBlock.ToString(), from);
        return {};
    }
}

if (found_transaction) {
    db.WriteNewInstantSendLock(hash, islock);
-   if (pindexMined) {
-       db.WriteInstantSendLockMined(hash, pindexMined->nHeight);
+   if (minedHeight.has_value()) {
+       db.WriteInstantSendLockMined(hash, *minedHeight);
    }

⚠️ Potential issue | 🟡 Minor

Fix clang-format issue.

The pipeline indicates clang-format differences at lines 393-395. Please run clang-format to resolve the formatting issue.

🧰 Tools
🪛 GitHub Actions: Clang Diff Format Check

[error] 393-395: Clang format differences found. Apply clang-format changes to src/instantsend/instantsend.cpp. Exit code 1 due to unformatted code.

🤖 Prompt for AI Agents
In src/instantsend/instantsend.cpp around lines 391 to 415, the submitted
changes trigger a clang-format mismatch specifically around lines 393-395;
reformat the file (or at least that region) with the project's clang-format
configuration to fix spacing/line breaks so the code matches style rules, then
stage and commit the formatted file. Ensure no semantic changes are
introduced—only whitespace/formatting adjustments.

@DashCoreAutoGuix

Guix Automation has begun to build this PR tagged as v23.0.0-devpr6953.13727fb3. A new comment will be made when the image is pushed.

@DashCoreAutoGuix

Guix Automation has completed; a release should be present here: https://github.com/dashpay/dash-dev-branches/releases/tag/v23.0.0-devpr6953.13727fb3. The image should be on dockerhub soon.

@UdjinM6

UdjinM6 commented Nov 14, 2025

pls consider ffd84b6
