-
Notifications
You must be signed in to change notification settings - Fork 7
fix(spv): implement filter re-checking when gap limits change during sync #204
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: v0.41-dev
Are you sure you want to change the base?
Conversation
…sync During blockchain sync, compact filters are checked to determine which blocks contain relevant transactions. However, gap limit maintenance happens AFTER blocks are processed, meaning newly-generated addresses are not checked against previously-evaluated filters. This causes transactions to addresses beyond the initial gap limit to be permanently missed. This commit implements automatic filter re-checking when gap limits change: Changes: - Add gap limit change tracking to GapLimit::mark_used() - Modify WalletInterface::process_block() to return gap limit change indicator - Add FilterRecheckQueue to track and manage filter re-check operations - Implement filter re-checking logic in SequentialSyncManager - Queue affected filter ranges when gap limits change during block processing - Process queued re-checks before phase transitions - Add comprehensive logging and iteration limits (max 10) to prevent loops Impact: - Fixes missing coinjoin and other transactions beyond initial gap limit - Zero overhead when gap limits don't change (common case) - Automatic and transparent - no configuration required - Includes safety limits and comprehensive error handling Fixes issue where coinjoin transactions were skipped during sync. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
WalkthroughThis change introduces gap limit change detection and filter re-checking infrastructure. When wallet address generation expands (gap limit changes), the system now detects this condition, returns a flag from Changes
Sequence Diagram(s)sequenceDiagram
participant BlockProc as Block Processor
participant Wallet as Wallet Manager
participant Queue as Recheck Queue
participant SyncMgr as Sequential Sync Manager
participant FilterCheck as Filter Check
BlockProc->>Wallet: process_block(block, height)
Wallet->>Wallet: Detect address generation<br/>(addresses_after > addresses_before)
Wallet-->>BlockProc: (txids, gap_limit_changed=true)
BlockProc->>SyncMgr: Handle block message<br/>gap_limit_changed=true
SyncMgr->>Queue: add_range(start, end, iteration=0)
Queue-->>SyncMgr: Range enqueued
Note over SyncMgr: Before phase transition
SyncMgr->>Queue: has_pending()?
alt Pending ranges exist
loop For each pending range
SyncMgr->>Queue: next_range()
Queue-->>SyncMgr: RecheckRange
SyncMgr->>FilterCheck: recheck_filters_for_range(range)
FilterCheck-->>SyncMgr: new_matches: Vec<(BlockHash, height)>
SyncMgr->>SyncMgr: new_matches.len() > 0?
alt New blocks to download
SyncMgr->>SyncMgr: Re-enter DownloadingBlocks phase
else No new blocks
SyncMgr->>Queue: mark_completed(range)
end
end
else No pending ranges
SyncMgr->>SyncMgr: Proceed with normal phase transition
end
Estimated code review effort🎯 4 (Complex) | ⏱️ ~50 minutes
Poem
Pre-merge checks and finishing touches✅ Passed checks (3 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
🧹 Nitpick comments (9)
key-wallet/src/gap_limit.rs (1)
85-125:mark_usedreturn semantics look correct; simplify unreachable branch for clarityThe new
boolreturn correctly signals only whenhighest_used_indexincreases, which is what callers need to know for gap extension. However, after you updateself.highest_used_indexwithmax(current, index), theif index > highestbranch can never be taken (highestis always ≥index), so you always fall back tocalculate_current_gap().This is logically fine but the comment “Reset unused count if this breaks a gap” no longer matches behavior and the branch is dead code.
Consider simplifying to always recalculate the gap, or, if you really want the “breaks a gap” fast-path, compare against
old_highestinstead of the updatedhighest:- if let Some(highest) = self.highest_used_index { - if index > highest { - self.current_unused_count = 0; - } else { - // Recalculate unused count from highest used - self.current_unused_count = self.calculate_current_gap(); - } - } + if self.highest_used_index.is_some() { + // Recalculate unused count from highest used + self.current_unused_count = self.calculate_current_gap(); + }This keeps behavior while making intent obvious.
dash-spv/src/sync/sequential/lifecycle.rs (1)
41-45: Recheck queue initialization is correct; consider future configurabilityCreating
FilterRecheckConfig::default()and wiring aFilterRecheckQueueinto the manager’s state is consistent with the new queue type and keeps defaults centralized inrecheck.rs.If you anticipate needing to tune
enabled/max_iterationsper deployment, consider plumbing these fromClientConfiglater so behavior is not hard‑coded here. For now this is fine.Also applies to: 46-67
key-wallet-manager/tests/spv_integration_tests.rs (1)
100-122: Test updates match new signature; consider adding a positive gap-limit caseDestructuring
(txids, gap_limit_changed)and assertingtxids.len() == 0and!gap_limit_changedis consistent with the currentWalletManager::process_blockbehavior for a fresh wallet and a non-matching transaction.As a follow‑up, it would be useful to add at least one test that exercises a real gap-limit change (e.g., a transaction that causes new addresses to be generated) and asserts
gap_limit_changed == true, to guard the new behavior end‑to‑end.dash-spv/src/sync/sequential/filter_recheck.rs (1)
1-105: Recheck logic is sound; narrow wallet lock scope to avoid long-held write locksThe overall flow—iterating the height range, skipping missing headers/filters, rebuilding
BlockFilterfrom stored bytes, and re-runningcheck_compact_filteragainst the updated wallet state—matches the intended “best-effort recheck” behavior and should correctly surface newly-relevant blocks.One concern is that you acquire a
write()lock onself.walletonce (Line 38) and then hold it across allstorage.get_header(...).awaitandstorage.load_filter(...).awaitcalls inside the loop. This:
- Blocks any other concurrent wallet operations for the entire recheck range.
- Slightly increases deadlock risk if a wallet implementation ever ends up waiting on storage or other locks internally.
You can keep semantics while scoping the wallet lock only around the actual
check_compact_filtercall:- // Lock wallet once for the entire range - let mut wallet = self.wallet.write().await; - - // Iterate through the height range - for height in range.start..=range.end { + // Iterate through the height range + for height in range.start..=range.end { // Get the block hash for this height let header = match storage.get_header(height).await { // ... }; let block_hash = header.block_hash(); // Get the compact filter for this height let filter_data = match storage.load_filter(height).await { // ... }; let filter = dashcore::bip158::BlockFilter::new(&filter_data[..]); - // Check filter with wallet's CURRENT (updated) address set - let matches = - wallet.check_compact_filter(&filter, &block_hash, self.config.network).await; + // Check filter with wallet's CURRENT (updated) address set + let matches = { + let mut wallet = self.wallet.write().await; + wallet + .check_compact_filter(&filter, &block_hash, self.config.network) + .await + };This keeps recheck behavior but avoids holding the wallet write lock across unrelated I/O.
dash-spv/src/client/block_processor.rs (1)
229-238: Gap-limit flag is plumbed correctly but currently only logged hereThe tuple destructuring and logging look fine, and behavior is unchanged aside from the new warning when
gap_limit_changedis true. However, unlike the sequential sync path, this code path does not enqueue any filter re-checks yet; if blocks can be processed via this worker outside of the sequential sync manager, gap-limit changes here will not currently trigger re-checks.Consider either:
- documenting that re-checks are only orchestrated from the sequential sync flow, or
- wiring this TODO to the same
FilterRecheckQueuemechanism if this worker can see gap-limit-triggering blocks first.key-wallet-manager/src/wallet_manager/process_block.rs (1)
21-24: Gap-limit detection via address count is reasonable and consistentUsing total
monitored_addresses(network).len()before/after processing to derivegap_limit_changedis simple and correct as long as address sets are append-only, which matches typical HD wallet/gap-limit behavior. The new return shape and comments are coherent with the SPV-side consumers.If
monitored_addressesallocates heavily, you might later consider a lighter-weight counter cached per network, but this is not required for correctness.Also applies to: 54-63
dash-spv/src/client/block_processor_test.rs (1)
44-56: Mocks updated correctly to the new(Vec<Txid>, bool)shapeBoth
MockWalletandNonMatchingWalletnow return the required tuple withgap_limit_changed = false, which keeps existing tests valid and clearly documents that these mocks don’t model gap-limit dynamics.If you later add tests around filter re-check orchestration, consider adding a mock that can toggle
gap_limit_changedto exercise that path.Also applies to: 270-278
dash-spv/src/sync/sequential/message_handlers.rs (1)
765-827: Range selection for re-checks is coherent but may be too narrowThe logic to derive
[min_height, max_height]frompending_blocks,downloading, andcompletedis internally consistent and safely bounded by the known block heights in the currentDownloadingBlocksbatch.However, gap-limit expansions can reveal addresses that already have history in earlier blocks outside this batch. Restricting re-checks to only the current batch’s min/max heights could miss older filters whose blocks were previously skipped due to the smaller address set.
Consider whether the re-check range should instead be anchored to something like:
- the earliest relevant height for the wallet (
earliest_required_height), or- at least the earliest height for which filters have already been downloaded,
and expand downwards accordingly, or make this window configurable. At minimum, some documentation or tests around expected coverage would help clarify the intended behavior.
dash-spv/src/sync/filters/recheck.rs (1)
54-61: Minor inconsistency betweenlen()andis_empty()for inverted ranges
RecheckRange::len()usesend.saturating_sub(start).saturating_add(1), so forstart > endit returns1, whileis_empty()correctly reportstrue(end < start). That inconsistency probably won’t matter in practice (callers shouldn’t be constructing inverted ranges), but it’s a subtle trap.If you want a fully self-consistent API, you could do:
- pub fn len(&self) -> u32 { - self.end.saturating_sub(self.start).saturating_add(1) - } + pub fn len(&self) -> u32 { + if self.is_empty() { + 0 + } else { + self.end - self.start + 1 + } + }Not urgent, but worth considering.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
dash-spv/src/client/block_processor.rs(1 hunks)dash-spv/src/client/block_processor_test.rs(2 hunks)dash-spv/src/sync/filters/mod.rs(2 hunks)dash-spv/src/sync/filters/recheck.rs(1 hunks)dash-spv/src/sync/sequential/filter_recheck.rs(1 hunks)dash-spv/src/sync/sequential/lifecycle.rs(2 hunks)dash-spv/src/sync/sequential/manager.rs(1 hunks)dash-spv/src/sync/sequential/message_handlers.rs(2 hunks)dash-spv/src/sync/sequential/mod.rs(1 hunks)key-wallet-manager/src/wallet_interface.rs(1 hunks)key-wallet-manager/src/wallet_manager/process_block.rs(2 hunks)key-wallet-manager/tests/spv_integration_tests.rs(1 hunks)key-wallet/src/gap_limit.rs(5 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: DCG-Claude
Repo: dashpay/rust-dashcore PR: 0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.
📚 Learning: 2025-06-26T16:01:37.609Z
Learnt from: DCG-Claude
Repo: dashpay/rust-dashcore PR: 0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.
Applied to files:
dash-spv/src/sync/sequential/manager.rsdash-spv/src/sync/sequential/lifecycle.rsdash-spv/src/sync/sequential/mod.rsdash-spv/src/client/block_processor.rsdash-spv/src/sync/sequential/message_handlers.rsdash-spv/src/sync/filters/mod.rsdash-spv/src/client/block_processor_test.rsdash-spv/src/sync/sequential/filter_recheck.rskey-wallet-manager/src/wallet_manager/process_block.rsdash-spv/src/sync/filters/recheck.rs
📚 Learning: 2025-06-26T15:54:02.509Z
Learnt from: DCG-Claude
Repo: dashpay/rust-dashcore PR: 0
File: :0-0
Timestamp: 2025-06-26T15:54:02.509Z
Learning: The `StorageManager` trait in `dash-spv/src/storage/mod.rs` uses `&mut self` methods but is also `Send + Sync`, and implementations often use interior mutability for concurrency. This can be confusing, so explicit documentation should clarify thread-safety expectations and the rationale for the API design.
Applied to files:
dash-spv/src/sync/sequential/manager.rsdash-spv/src/sync/sequential/mod.rs
📚 Learning: 2025-06-26T15:47:37.438Z
Learnt from: DCG-Claude
Repo: dashpay/rust-dashcore PR: 0
File: :0-0
Timestamp: 2025-06-26T15:47:37.438Z
Learning: Transaction IDs (txids) and block hashes in Dash are always 32 bytes and should be passed as fixed-size byte arrays in FFI interfaces, not as C strings, to avoid inefficiency and encoding issues.
Applied to files:
dash-spv/src/client/block_processor.rsdash-spv/src/client/block_processor_test.rs
🧬 Code graph analysis (5)
dash-spv/src/sync/sequential/lifecycle.rs (1)
dash-spv/src/sync/filters/recheck.rs (3)
default(19-24)new(40-46)new(79-86)
dash-spv/src/client/block_processor.rs (1)
dash-spv/src/client/core.rs (1)
wallet(161-163)
key-wallet-manager/src/wallet_interface.rs (2)
dash-spv/src/client/block_processor_test.rs (2)
process_block(44-56)process_block(271-278)key-wallet-manager/src/wallet_manager/process_block.rs (1)
process_block(16-64)
dash-spv/src/sync/sequential/message_handlers.rs (1)
dash-spv/src/sync/filters/recheck.rs (3)
pending_count(141-143)stats(151-158)len(54-56)
dash-spv/src/client/block_processor_test.rs (1)
dash-spv/src/client/block_processor.rs (1)
new(46-64)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: fuzz (hashes_sha256)
- GitHub Check: fuzz (hashes_json)
- GitHub Check: fuzz (dash_deserialize_block)
- GitHub Check: fuzz (hashes_sha512_256)
- GitHub Check: fuzz (dash_deserialize_witness)
- GitHub Check: fuzz (hashes_sha512)
- GitHub Check: fuzz (hashes_cbor)
- GitHub Check: fuzz (dash_deserialize_script)
- GitHub Check: fuzz (hashes_ripemd160)
- GitHub Check: fuzz (dash_deserialize_address)
- GitHub Check: fuzz (dash_deser_net_msg)
- GitHub Check: fuzz (dash_script_bytes_to_asm_fmt)
- GitHub Check: fuzz (dash_deserialize_amount)
- GitHub Check: fuzz (dash_outpoint_string)
- GitHub Check: RPC Tests (stable, true)
- GitHub Check: Strict Warnings and Clippy Checks
- GitHub Check: Core Components Tests
- GitHub Check: SPV Components Tests
- GitHub Check: Key Wallet Components Tests
🔇 Additional comments (5)
key-wallet/src/gap_limit.rs (1)
370-396: Tests exercise newmark_usedcontract appropriatelyThe updated tests now assert:
- First use returns
true.- Subsequent increases of
highest_used_indexreturntrue.- Re-marking an existing index or using an index ≤ current highest returns
false.This matches the documented contract (“returns true if the highest_used_index increased”) and gives good coverage around the new API.
Also applies to: 398-415, 441-459
dash-spv/src/sync/sequential/mod.rs (1)
33-37:filter_recheckmodule is wired in cleanlyThe new documentation bullet and
pub mod filter_recheck;cleanly expose the recheck functionality from the sequential sync namespace without altering existing behavior.dash-spv/src/sync/filters/mod.rs (1)
16-17: Filterrecheckmodule is integrated consistentlyThe new
recheckdocumentation entry andpub mod recheck;fit well with the existing sub-module layout and provide a clear home for the recheck queue/config logic.Also applies to: 26-32
key-wallet-manager/src/wallet_interface.rs (1)
16-25:process_blocktuple return cleanly surfaces gap-limit changes (API break to be aware of)Returning
(Vec<Txid>, bool)with clear docs forgap_limit_changedis a good way to signal recheck triggers to callers.This is, however, a breaking change for any external
WalletInterfaceimplementors. Ensure all in-tree and out-of-tree implementations are updated and that any published crate using this trait bumps its version appropriately.dash-spv/src/sync/sequential/manager.rs (1)
103-105: Newfilter_recheck_queuefield is correctly added to manager stateThe
filter_recheck_queuefield is typed and scoped appropriately (pub(super)) for internal orchestration by sequential submodules, and it aligns with the initialization done inlifecycle.rs.
| if should_transition { | ||
| self.transition_to_next_phase(storage, network, "All blocks downloaded").await?; | ||
| // Before transitioning, process any pending filter re-checks | ||
| if self.filter_recheck_queue.has_pending() { | ||
| tracing::info!( | ||
| "🔄 Processing {} pending filter re-check(s) before transitioning", | ||
| self.filter_recheck_queue.pending_count() | ||
| ); | ||
|
|
||
| // Execute the next phase (if any) | ||
| self.execute_current_phase(network, storage).await?; | ||
| // Process all pending re-checks | ||
| while let Some(range) = self.filter_recheck_queue.next_range() { | ||
| tracing::info!( | ||
| "🔍 Re-checking filters for heights {}-{} (iteration {}/{})", | ||
| range.start, | ||
| range.end, | ||
| range.iteration + 1, | ||
| self.filter_recheck_queue.stats().config.max_iterations | ||
| ); | ||
|
|
||
| // Re-check filters for this range | ||
| match self.recheck_filters_for_range(storage, network, &range).await { | ||
| Ok(new_matches) => { | ||
| if !new_matches.is_empty() { | ||
| tracing::info!( | ||
| "✅ Found {} new filter matches after re-check for range {}-{}", | ||
| new_matches.len(), | ||
| range.start, | ||
| range.end | ||
| ); | ||
|
|
||
| // Download the newly matched blocks | ||
| // This will trigger another iteration if gap limits change again | ||
| for (block_hash, height) in new_matches { | ||
| tracing::debug!( | ||
| "📦 Requesting block {} at height {} from filter re-check", | ||
| block_hash, | ||
| height | ||
| ); | ||
| // Add to pending blocks for download | ||
| // Note: We'll re-enter the DownloadingBlocks phase | ||
| } | ||
| } | ||
| self.filter_recheck_queue.mark_completed(&range); | ||
| } | ||
| Err(e) => { | ||
| tracing::error!( | ||
| "Failed to re-check filters for range {}-{}: {}", | ||
| range.start, | ||
| range.end, | ||
| e | ||
| ); | ||
| // Continue with other re-checks | ||
| } | ||
| } | ||
| } | ||
|
|
||
| // If we added new blocks to download, don't transition yet | ||
| if self.no_more_pending_blocks() { | ||
| self.transition_to_next_phase( | ||
| storage, | ||
| network, | ||
| "All blocks downloaded (after filter re-check)", | ||
| ) | ||
| .await?; | ||
| self.execute_current_phase(network, storage).await?; | ||
| } else { | ||
| tracing::info!( | ||
| "🔄 Filter re-check found new blocks to download, staying in DownloadingBlocks phase" | ||
| ); | ||
| } | ||
| } else { | ||
| // No pending re-checks, normal transition | ||
| self.transition_to_next_phase(storage, network, "All blocks downloaded").await?; | ||
| self.execute_current_phase(network, storage).await?; | ||
| } |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
Re-check results (new_matches) are only logged; ensure blocks actually get requested
The re-check loop correctly drains filter_recheck_queue, calls recheck_filters_for_range, logs progress, and defers phase transition based on no_more_pending_blocks(). But inside the Ok(new_matches) branch, the code currently only logs each (block_hash, height) with comments about requesting blocks and updating pending state, without actually:
- enqueueing these blocks into the
DownloadingBlocksstructures, or - sending
GetDatarequests vianetwork.
If recheck_filters_for_range already performs the block-request and pending_blocks updates, this is fine but non-obvious; in that case, consider either:
- removing the commented “Download the newly matched blocks” section, or
- explicitly documenting in the function comment that
recheck_filters_for_rangeboth re-checks filters and schedules block downloads, withnew_matchesused purely for reporting.
If recheck_filters_for_range does not schedule downloads, then currently new matches discovered during re-checks will never be fetched, and transactions beyond the initial gap limit remain missed—the original bug persists.
Either way, adding an integration test that simulates:
- a first pass with a smaller gap limit,
- a gap-limit-increasing block,
- a re-check that finds additional matches,
and asserting that the corresponding blocks are actually requested/downloaded would solidify this path.
I'll verify the review comment by examining the codebase to understand whether recheck_filters_for_range handles block enqueuing and whether integration tests exist.
---
🏁 Script executed:
```shell
#!/bin/bash
# Find the recheck_filters_for_range function definition
rg -nA 20 "fn recheck_filters_for_range" --type=rs
Length of output: 26
🏁 Script executed:
#!/bin/bash
# Search for where blocks are normally enqueued in DownloadingBlocks phase
rg -nP "pending_blocks|GetData" --type=rs -C 3 | head -100Length of output: 26
🏁 Script executed:
#!/bin/bash
# Look for integration or unit tests related to filter recheck
fd -e rs | xargs grep -l "recheck" | grep -i testLength of output: 0
🏁 Script executed:
#!/bin/bash
# Search for filter_recheck_queue usage to understand flow
rg -n "filter_recheck_queue" --type=rs -C 2Length of output: 26
🏁 Script executed:
#!/bin/bash
# Check the structure around line 864-937 for context
sed -n '840,950p' dash-spv/src/sync/sequential/message_handlers.rsLength of output: 4351
---
🏁 Script executed:
```shell
#!/bin/bash
# Search using correct rust file type
rg -n "fn recheck_filters_for_range" --type=rust -A 30
Length of output: 2636
🏁 Script executed:
#!/bin/bash
# Search for normal block enqueuing patterns
rg -n "add_pending\|enqueue\|pending_blocks" --type=rust -C 3 | head -150Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look for where new_matches is used elsewhere
rg -n "new_matches" --type=rust -B 2 -A 5Length of output: 4387
🏁 Script executed:
#!/bin/bash
# Search for test files and test patterns
fd "test" --type f | grep -E "(test|spec)" | head -20Length of output: 742
🏁 Script executed:
#!/bin/bash
# Check if recheck_filters_for_range might be async and perform enqueuing
rg -n "recheck_filters_for_range" --type=rust -B 5 -A 50Length of output: 10618
---
🏁 Script executed:
```shell
#!/bin/bash
# Find how blocks are normally added to the downloading phase
rg -n "DownloadingBlocks\|downloading\.insert\|downloading\.add" --type=rust -C 3 | head -200
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look for GetData or similar block request logic
rg -n "GetData\|request.*block" --type=rust -C 2 | head -150Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check for integration tests on filter_recheck
fd "test" --type f -path "*/sync/*" | xargs cat 2>/dev/null | grep -A 30 -i "recheck\|filter_recheck_queue" | head -100Length of output: 168
🏁 Script executed:
#!/bin/bash
# Find where pending blocks logic exists - search for no_more_pending_blocks
rg -n "no_more_pending_blocks" --type=rust -B 5 -A 5Length of output: 2730
🏁 Script executed:
#!/bin/bash
# Search for struct DownloadingBlocks to understand its structure
rg -n "struct DownloadingBlocks\|DownloadingBlocks {" --type=rust -A 10 | head -100Length of output: 153
---
🏁 Script executed:
```shell
#!/bin/bash
# Find SyncPhase enum definition
rg -n "enum SyncPhase" --type=rust -A 30
Length of output: 2332
🏁 Script executed:
#!/bin/bash
# Look for where blocks are normally added to downloading phase
rg -n "downloading.insert\|HashSet::new\|BTreeSet::new" --type=rust -C 2 | head -150Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for how pending blocks flow in the normal (non-recheck) case
rg -n "filter_matches\|request_blocks\|request_data" --type=rust -C 3 | head -150Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check the complete block handler flow
sed -n '820,870p' dash-spv/src/sync/sequential/message_handlers.rsLength of output: 1693
🏁 Script executed:
#!/bin/bash
# Look for where blocks are initially added to downloading set in normal flow
rg -n "let.*downloading\|mut downloading" --type=rust -C 2 | head -100Length of output: 0
Blocks discovered during filter re-check are logged but never enqueued for download; no_more_pending_blocks() stub prevents detection
The review concern is confirmed. recheck_filters_for_range() returns matched blocks but your caller in message_handlers.rs (lines 895–903) only logs them—the loop contains no code to add these blocks to the downloading HashSet. Additionally, the stub no_more_pending_blocks() function in phase_execution.rs (line 526–530) unconditionally returns true with a "for now" comment, so even if blocks were enqueued, this check would not detect them. As a result, newly discovered blocks during re-checks are never fetched, and the original gap-limit bug persists.
To fix this:
- In the loop at lines 895–903, actually add each matched block to the pending/downloading structures (similar to the normal block download flow)
- Implement
no_more_pending_blocks()to check whether there are pending blocks awaiting download, or ensure newly enqueued blocks are properly tracked - Add an integration test that simulates a gap-limit increase triggering a re-check, verifies new matches are found, and asserts those blocks are actually requested
🤖 Prompt for AI Agents
In dash-spv/src/sync/sequential/message_handlers.rs around lines 864–937: the
re-check loop logs new_matches but never enqueues them for download; also
no_more_pending_blocks() is a stub that always returns true so newly discovered
blocks are never detected or fetched. Fix by: in the loop over new_matches
actually add each (block_hash, height) into the same pending/downloading
structures used by the normal download flow (update any indexes/counters and
emit the same request/logging path so the state machine re-enters
DownloadingBlocks); implement no_more_pending_blocks() in phase_execution.rs to
inspect the real pending/downloading collections and return false when any are
present; and add an integration test that simulates a gap-limit re-check,
asserts matched blocks are enqueued, and verifies the nodes request/download
those blocks.
UNTESTED
During blockchain sync, compact filters are checked to determine which blocks contain relevant transactions. However, gap limit maintenance happens AFTER blocks are processed, meaning newly-generated addresses are not checked against previously-evaluated filters. This causes transactions to addresses beyond the initial gap limit to be permanently missed.
This commit implements automatic filter re-checking when gap limits change:
Changes:
Impact:
Fixes issue where coinjoin transactions were skipped during sync.
🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes
✏️ Tip: You can customize this high-level summary in your review settings.