refactor: split big files in dash-spv #160
Conversation
Walkthrough
This pull request performs a major refactor and expansion of the Dash SPV client, restructuring the monolithic codebase into modular, generic components. The client module is reorganized into focused sub-modules (core, lifecycle, events, sync coordination, etc.), disk storage is refactored from a single file into a segmented, modularized architecture with dedicated I/O and state management, and filter synchronization is extracted into a comprehensive 10-module subsystem. Sequential sync is similarly decomposed into lifecycle, phase execution, and message handling modules.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as DashSpvClient<W,N,S>
    participant SyncMgr as SequentialSyncManager
    participant FilterSync as FilterSyncManager
    participant Storage as StorageManager
    participant Network as NetworkManager
    Client->>SyncMgr: start_sync()
    SyncMgr->>SyncMgr: initialize phase (DownloadingHeaders)
    SyncMgr->>Network: send_initial_requests()
    rect rgb(200, 220, 255)
        note over SyncMgr,Storage: Phase: DownloadingHeaders
        Network-->>SyncMgr: Headers message
        SyncMgr->>Storage: store_headers()
        SyncMgr->>SyncMgr: transition_to_next_phase()
    end
    rect rgb(200, 255, 220)
        note over FilterSync,Storage: Phase: DownloadingFilters
        SyncMgr->>FilterSync: sync_filters()
        FilterSync->>Network: request_filters()
        Network-->>FilterSync: CFilter message
        FilterSync->>Storage: store_filter_headers()
        FilterSync->>FilterSync: check_filter_gap()
        alt Gap detected
            FilterSync->>Network: retry_missing_filters()
        end
    end
    SyncMgr->>Client: emit_event(SyncPhaseTransition)
    Client-->>Client: emit_progress(SyncProgress)
```
```mermaid
sequenceDiagram
    participant App as Application
    participant Client as DashSpvClient
    participant LifeCycle as lifecycle module
    participant Monitor as monitor_network loop
    participant ChainLock as chainlock module
    participant Storage as DiskStorageManager
    App->>Client: new() constructor
    Client->>LifeCycle: initialize all managers
    LifeCycle->>Storage: load chain state & sync state
    LifeCycle->>Client: ready
    App->>Client: start()
    Client->>LifeCycle: setup network & sync
    LifeCycle->>Client: running = true
    App->>Client: monitor_network()
    loop Network Event Loop
        Monitor->>Monitor: poll network messages
        alt ChainLock/ISLock received
            Monitor->>ChainLock: process_chainlock()
            ChainLock->>Storage: store chain state
            ChainLock-->>Monitor: emit ChainLockReceived
        else Regular message
            Monitor->>Monitor: route to sync coordinator
        end
        Monitor->>Client: emit_progress(status)
    end
```
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~60 minutes
Rationale: This PR introduces substantial new functionality across multiple interrelated systems (client lifecycle, storage modularity, filter sync orchestration, sequential sync coordination) with dense generic logic, async state management, and intricate control flows. While many changes follow consistent refactoring patterns (decomposition into modules, trait-bound generics), each subsystem exhibits unique complexity (background workers, gap detection, phase transitions, recovery paths). The breadth spans ~20+ new files with interdependencies, necessitating holistic reasoning about trait bounds, error propagation, concurrency, and state consistency. No single file is trivial, but patterns repeat across cohorts (modularization, generic trait usage), offsetting some cognitive load.
Possibly related PRs
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 32
🧹 Nitpick comments (62)
dash-spv/src/storage/disk/mod.rs (2)
31-33: Make segment sizing/tuning configurable. Hard-coding `HEADERS_PER_SEGMENT = 50_000` and `MAX_ACTIVE_SEGMENTS = 10` can be sub-optimal across devices. Consider feature flags, env vars, or constructor params on `DiskStorageManager` to tune these. Also applies to: 34-35
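As a sketch of the constructor-param option (the `SegmentTuning` type and `with_tuning` constructor are hypothetical, not part of this PR):

```rust
/// Hypothetical tuning knobs for DiskStorageManager (illustrative only).
pub struct SegmentTuning {
    pub headers_per_segment: u32,
    pub max_active_segments: usize,
}

impl Default for SegmentTuning {
    fn default() -> Self {
        // Mirror the current hard-coded constants as defaults.
        Self { headers_per_segment: 50_000, max_active_segments: 10 }
    }
}

// Sketch of a tuned construction:
// let storage = DiskStorageManager::with_tuning(
//     path,
//     SegmentTuning { headers_per_segment: 10_000, ..Default::default() },
// ).await?;
```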
22-27: Consider gating disk storage behind a crate feature. This module is always compiled; add `#[cfg(feature = "disk-storage")]` on submodules and re-exports for optional builds, per guidelines on feature flags.
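A minimal sketch of what the gating could look like in the parent `storage` module, assuming a `disk-storage` Cargo feature is declared (the module layout shown is an assumption):

```rust
// storage/mod.rs (sketch)
#[cfg(feature = "disk-storage")]
pub mod disk;

#[cfg(feature = "disk-storage")]
pub use disk::DiskStorageManager;

// MemoryStorageManager stays unconditional so the StorageManager trait
// always has at least one implementation available.
pub mod memory;
pub use memory::MemoryStorageManager;
```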
dash-spv/src/client/events.rs (1)
27-31: Don't silently drop when the channel is closed; log at trace/debug. Currently the send result is discarded; if the receiver is dropped, events/progress vanish without visibility.
Apply:
```diff
-    pub(crate) fn emit_event(&self, event: SpvEvent) {
-        tracing::debug!("Emitting event: {:?}", event);
-        let _ = self.event_tx.send(event);
-    }
+    pub(crate) fn emit_event(&self, event: SpvEvent) {
+        tracing::trace!("Emitting event");
+        if let Err(_e) = self.event_tx.send(event) {
+            tracing::debug!("Event receiver dropped; event not delivered");
+        }
+    }

-    pub(super) fn emit_progress(&self, progress: DetailedSyncProgress) {
-        if let Some(ref sender) = self.progress_sender {
-            let _ = sender.send(progress);
-        }
-    }
+    pub(super) fn emit_progress(&self, progress: DetailedSyncProgress) {
+        if let Some(ref sender) = self.progress_sender {
+            if sender.send(progress).is_err() {
+                tracing::trace!("Progress receiver dropped; update not delivered");
+            }
+        }
+    }
```

Also consider bounded channels for backpressure if producers can outpace consumers.
Also applies to: 41-45
dash-spv/src/storage/disk/segments.rs (1)
48-59: Centralize sentinel definition/check.
`create_sentinel_header` and the sentinel-skip logic in io.rs should share a single predicate (e.g., `is_sentinel(&BlockHeader)`) to prevent drift. Extract it to `super::io` or a small `sentinel.rs`:

```rust
pub(super) fn is_sentinel(h: &BlockHeader) -> bool { /* mirror fields */ }
```

Use it in `save_segment_to_disk` instead of duplicating the field checks.
dash-spv/src/sync/filters/headers.rs (1)
361-457: Tip stop-hash fallback is sensible; add one more guard. When `header_tip_height == 0` and the tip header is missing, you try `tip - 1` guarded by `if header_tip_height > 0`. Consider an explicit error for the impossible "no header at tip=0" case to avoid ambiguous validation errors.
dash-spv/src/sync/sequential/message_handlers.rs (4)
352-354: Avoid cloning headers just to read length/emptiness. Compute counts before moving the Vec into header_sync to remove a clone and reduce allocs.
```diff
-        let continue_sync =
-            self.header_sync.handle_headers_message(headers.clone(), storage, network).await?;
+        let received_count = headers.len() as u32;
+        let was_empty = headers.is_empty();
+        let continue_sync = self
+            .header_sync
+            .handle_headers_message(headers, storage, network)
+            .await?;
@@
-        *headers_downloaded += headers.len() as u32;
+        *headers_downloaded += received_count;
@@
-        if headers.is_empty() {
+        if was_empty {
             *received_empty_response = true;
         }
```
582-591: Reuse the computed block height to avoid duplicate storage queries. You fetch the height for CFilter validation, then query it again for progress accounting. Cache it.
```diff
-            let height = storage
+            let height = storage
                 .get_header_height_by_hash(&cfilter.block_hash)
                 .await
@@
-            if let Ok(Some(height)) = storage.get_header_height_by_hash(&cfilter.block_hash).await {
-                completed_heights.insert(height);
+            {
+                completed_heights.insert(height);
```
478-499: QRInfo increments diffs_processed; track it separately to avoid skewed stats. QRInfo is a different artifact than MnListDiff. Counting it as a diff can inflate progress metrics.
Consider a qrinfos_processed counter or a generic “items_processed” that sums both. Update SyncPhase::DownloadingMnList fields accordingly.
42-55: Blocks are allowed during the MnList phase but ignored; make the intent explicit. You accept Block in is_message_expected_in_phase for DownloadingMnList but immediately ignore it. Either process, buffer, or document clearly why it's ignored.
Add a comment to is_message_expected_in_phase or drop the allowance to reduce noise.
Also applies to: 195-285
dash-spv/CODE_ANALYSIS_SUMMARY.md (2)
153-158: Add a language to the fenced code block (markdownlint MD040). Specify a language to silence the CI lint.

````diff
-```
+```text
 2000+ lines: 0 files ✅ (all large files refactored)
 <500 lines: 95+ files ✅ (excellent - focused modules)
````

---

3-6: Neutralize the external vendor reference and stale timestamp. Replace "Claude (Anthropic AI)" and the fixed date with neutral wording to keep the repo vendor-agnostic and avoid going stale.

```diff
-**Date:** 2025-01-21
-**Analyzer:** Claude (Anthropic AI)
+**Date:** 2025-10-21
+**Analyzer:** Automated static review
```

dash-spv/src/client/progress.rs (2)
35-37: Avoid the narrowing cast from usize -> u32. Use TryFrom or saturate to keep clippy happy and prevent theoretical overflow.
```diff
-        stats.connected_peers = self.network.peer_count() as u32;
-        stats.total_peers = self.network.peer_count() as u32; // TODO: Track total discovered peers
+        stats.connected_peers = u32::try_from(self.network.peer_count()).unwrap_or(u32::MAX);
+        stats.total_peers = u32::try_from(self.network.peer_count()).unwrap_or(u32::MAX); // TODO: Track total discovered peers
```
82-88: Stage mapping for MnList -> ValidatingHeaders looks off. ValidatingHeaders doesn't match masternode list sync semantically. Ensure SyncStage has an MnList/Quorum-related variant, or rename for clarity.
If available, map to SyncStage::DownloadingMnList/ValidatingMasternodes; otherwise add such a variant for accurate reporting.
dash-spv/src/client/chainlock.rs (1)
86-106: InstantSendLock handling is TODO; define a minimal validation path. Implement signature/quorum verification, input lock checks, and persistence, or gate with a feature flag until ready.
I can scaffold an ISLock validator integrating your masternode engine and storage, behind a feature flag. Want a draft?
dash-spv/src/client/lifecycle.rs (2)
146-166: Prefer bounded channels for backpressure. Unbounded channels can grow without limit under load. Use a sensible capacity.
```diff
-        let (block_processor_tx, _block_processor_rx) = mpsc::unbounded_channel();
+        let (block_processor_tx, _block_processor_rx) = mpsc::channel(1024);
@@
-        let (progress_sender, progress_receiver) = mpsc::unbounded_channel();
+        let (progress_sender, progress_receiver) = mpsc::channel(1024);
@@
-        let (event_tx, event_rx) = mpsc::unbounded_channel();
+        let (event_tx, event_rx) = mpsc::channel(1024);
```

Also applies to: 71-75, 73-79
120-130: MempoolFilter monitored set is empty; wire up wallet addresses. Currently HashSet::new() disables address-based filtering. Fetch the monitored script pubkeys from the wallet.
I can add a WalletInterface method to enumerate monitored scripts and plumb it here. Interested?
Also applies to: 131-144
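For illustration, the wallet hook could look roughly like this; the trait and method names are hypothetical, and the real `WalletInterface` may expose scripts differently:

```rust
use std::collections::HashSet;

/// Hypothetical wallet extension (names illustrative only).
pub trait WalletScripts {
    /// Script pubkeys the wallet wants watched by the mempool filter,
    /// returned as raw bytes to stay agnostic of the script type in use.
    fn monitored_script_pubkeys(&self) -> HashSet<Vec<u8>>;
}

// At MempoolFilter construction (sketch):
// let monitored: HashSet<Vec<u8>> = wallet.monitored_script_pubkeys();
// let filter = MempoolFilter::new(monitored, /* other params */);
```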
dash-spv/src/sync/filters/requests.rs (3)
74-91: Stop-hash lookup per batch is correct; consider a minor optimization. For large ranges, repeated `get_header(batch_end)` is fine but can be batched/cached if StorageManager exposes a range API. Optional.
152-160: Initial sends are sequential awaits; acceptable, but parallel fire-and-forget is possible. If `request_filters` is non-blocking, the current approach is fine. Otherwise, consider sending the first N with `futures::future::try_join_all` to reduce latency.
212-221: Doc comment is stale w.r.t. behavior. This function only backfills available slots; it doesn't mark completions. Update the comment to avoid confusion.
dash-spv/src/client/mempool.rs (1)
145-157: Filter (re)initialization duplicates the enablement path; okay for now. Once wallet integration lands, consider centralizing filter construction to avoid drift.
dash-spv/src/storage/disk/io.rs (1)
18-49: JoinError mapping is fine. Consider factoring common file I/O helpers. The repeated open/flush/read/write mappings could live in small helpers to keep closures tight.
Also applies to: 51-83, 84-97, 99-132, 134-158, 160-178
dash-spv/src/sync/filters/retry.rs (5)
5-7: Docs claim exponential backoff, but the code retries immediately. Either implement backoff or adjust the docs. Recommend a simple capped exponential backoff keyed by retry_count.
Apply a minimal backoff when re-queuing (illustrative):
```diff
-        // Add to front of queue for priority retry
-        self.pending_cfheader_requests.push_front(retry_request);
+        // Add to front of queue for priority retry with backoff metadata
+        // (Option A) enqueue immediately but delay sending in processor based on retry_count
+        self.pending_cfheader_requests.push_front(retry_request);
+        // TODO: in process_next_queued_cfheader_requests(), sleep for
+        //       backoff = min(2_u64.pow(retry_count as u32) * 100, 5_000) ms before send.
```
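A minimal sketch of the capped backoff itself, assuming the retry processor can sleep before re-sending (constants and the send call are illustrative):

```rust
use std::time::Duration;

/// Capped exponential backoff: 100 ms, 200 ms, 400 ms, ... capped at 5 s.
fn cfheader_retry_backoff(retry_count: u32) -> Duration {
    let millis = 100u64.saturating_mul(1u64 << retry_count.min(16)).min(5_000);
    Duration::from_millis(millis)
}

// In the retry processor (sketch; the send call is hypothetical):
// tokio::time::sleep(cfheader_retry_backoff(request.retry_count)).await;
// self.send_cfheader_request(network, request.start_height, request.stop_hash).await?;
```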
129-139: Use consistent error kinds for storage lookups. These are storage retrieval failures, not validation errors. Switch Validation → Storage to align with the error taxonomy and clippy lints.
```diff
-            Ok(None) => {
-                return Err(SyncError::Validation(format!(
+            Ok(None) => {
+                return Err(SyncError::Storage(format!(
                     "Tip header not found at height {} (genesis) during recovery",
                     header_tip_height
                 )));
             }
-            Err(e) => {
-                return Err(SyncError::Validation(format!(
+            Err(e) => {
+                return Err(SyncError::Storage(format!(
                     "Failed to get tip header during recovery: {}",
                     e
                 )));
             }
```
191-199: Remove unused parameters in handle_cfheader_request_timeout (and the call site). _network and _storage are not used. Keep signatures tight to avoid needless coupling.
```diff
-    async fn handle_cfheader_request_timeout(
-        &mut self,
-        start_height: u32,
-        stop_hash: BlockHash,
-        _network: &mut N,
-        _storage: &S,
-    ) -> SyncResult<()> {
+    async fn handle_cfheader_request_timeout(
+        &mut self,
+        start_height: u32,
+        stop_hash: BlockHash,
+    ) -> SyncResult<()> {
```

And the caller:

```diff
-        for (start_height, stop_hash) in timed_out_requests {
-            self.handle_cfheader_request_timeout(start_height, stop_hash, network, storage).await?;
-        }
+        for (start_height, stop_hash) in timed_out_requests {
+            self.handle_cfheader_request_timeout(start_height, stop_hash).await?;
+        }
```

Also applies to: 181-184
273-279: Avoid a trait-object param here; keep signatures uniform. handle_request_timeout takes &mut dyn NetworkManager but never uses it. Either drop the param or make it &mut N for consistency with the impl's generics.
```diff
-    async fn handle_request_timeout(
+    async fn handle_request_timeout(
         &mut self,
         range: (u32, u32),
-        _network: &mut dyn NetworkManager,
-        storage: &S,
+        storage: &S,
     ) -> SyncResult<()> {
```

And update the call:

```diff
-            self.handle_request_timeout(range, network, storage).await?;
+            self.handle_request_timeout(range, storage).await?;
```

Also applies to: 250-259
357-361: Non-blocking try_lock may hide real timeouts under contention. Returning early on lock contention can starve detection. Consider a short bounded wait or fall back to a read-only snapshot taken elsewhere.
Example:
```diff
-        let heights = match self.received_filter_heights.try_lock() {
-            Ok(heights) => heights.clone(),
-            Err(_) => return timed_out,
-        };
+        let heights = match self.received_filter_heights.try_lock() {
+            Ok(h) => h.clone(),
+            Err(_) => {
+                // Fallback: brief wait to reduce false negatives under load
+                if let Ok(h) = self.received_filter_heights.lock().await.try_clone() {
+                    h
+                } else {
+                    return timed_out;
+                }
+            }
+        };
```

Note: replace try_clone with an actual clone since the lock guard derefs to the set.
dash-spv/src/sync/sequential/post_sync.rs (1)
306-384: CFHeaders range selection looks good; minor polish. Good stop computation via the previous block to avoid tip races, and good filter_tip short-circuit. Consider logging the computed count (stop_height - start_height + 1) for observability.
dash-spv/src/client/queries.rs (3)
38-41: Drop the redundant async getter or make it truly async. get_peer_count() is async but does no await; prefer one sync API, or add an async network call if needed.
```diff
-    pub async fn get_peer_count(&self) -> usize {
-        self.network.peer_count()
-    }
+    // Remove this method, or if an async variant is needed, forward to an async network metric.
```
65-70: Returning references into internal engine maps. This exposes internal lifetimes and can hinder refactors. Consider returning an owned Arc or a read-only domain DTO instead of &MasternodeList.
154-163: Balance API stub should error, not return empty data. Returning {} may be misread as "zero balances". Prefer a clear Config/Unimplemented error until wallet integration lands.
```diff
-        Ok(std::collections::HashMap::new())
+        Err(SpvError::Config(
+            "Balance queries must be served by the wallet implementation".to_string(),
+        ))
```

dash-spv/src/sync/filters/types.rs (3)
69-71: Remove unnecessary #[allow(dead_code)] attributes. Both CFHeaderRequest.is_retry and ReceivedCFHeaderBatch.received_at are used (or intended). Keeping allow(dead_code) may hide real issues under clippy -D warnings.
```diff
-    #[allow(dead_code)]
     pub is_retry: bool,
```

```diff
-    #[allow(dead_code)]
     pub received_at: Instant,
```
41-44: Consider a bounded channel for FilterNotificationSender. UnboundedSender risks unbounded memory under peer spam. A small bounded channel with a drop/backpressure policy is safer in libraries.

```diff
-pub type FilterNotificationSender =
-    mpsc::UnboundedSender<dashcore::network::message_filter::CFilter>;
+pub type FilterNotificationSender =
+    mpsc::Sender<dashcore::network::message_filter::CFilter>;
```

And adjust the creation site to `mpsc::channel(capacity)`.
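A sketch of the bounded creation site with an explicit drop policy when the consumer lags (the capacity of 256 and the helper names are illustrative):

```rust
use dashcore::network::message_filter::CFilter;
use tokio::sync::mpsc;

fn make_filter_channel() -> (mpsc::Sender<CFilter>, mpsc::Receiver<CFilter>) {
    // Bounded: memory use is capped even under peer spam.
    mpsc::channel(256)
}

fn notify_filter(tx: &mpsc::Sender<CFilter>, cfilter: CFilter) {
    // Drop the notification rather than block the network task.
    if let Err(mpsc::error::TrySendError::Full(_)) = tx.try_send(cfilter) {
        tracing::warn!("filter notification channel full; dropping CFilter");
    }
}
```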
16-21: Timeout defaults seem aggressive. SYNC_TIMEOUT_SECONDS = 5s and REQUEST_TIMEOUT_SECONDS = 30s may be too low on congested networks. Consider making them config-driven and raising the defaults (e.g., 30s/90s).
Also applies to: 31-36
dash-spv/src/storage/disk/headers.rs (2)
288-339: Range-to-segment translation looks correct; add a guard for empty ranges. Consider an early return if range.start >= range.end to avoid work.
```diff
+        if range.start >= range.end {
+            return Ok(Vec::new());
+        }
```
36-38: Minor: avoid loading chain state per batch if already cached. If load_chain_state() hits disk, consider caching sync_base_height in memory and refreshing it via state updates.
dash-spv/src/sync/sequential/lifecycle.rs (2)
54-56: Make timeouts/retries configurable.
`phase_timeout` and `max_phase_retries` are hard-coded. Read these from `ClientConfig` (like the other CFHeaders settings) to allow tuning and test control.
175-188: Don’t discard potential errors from sub-resets.
`let _ = self.header_sync.reset_pending_requests();` silently ignores failures. Log and proceed, or bubble up. Example:

```diff
-        let _ = self.header_sync.reset_pending_requests();
+        if let Err(e) = self.header_sync.reset_pending_requests() {
+            tracing::warn!("Failed to reset header sync pending requests: {}", e);
+        }
```

dash-spv/src/sync/sequential/phase_execution.rs (3)
367-370: Avoid magic constant for recovery batch threshold.
`active_count < 10` is a baked-in limit. Make it configurable (e.g., via `ClientConfig`) or derive it from a constant shared with `FilterSyncManager`.
342-365: Remove redundant timeout check call.
`check_filter_request_timeouts` is invoked twice in the same branch (lines 342 and 364). Keep one to reduce duplicate work and log noise.
71-87: Unused `_safe_height`. Computed but not used. Remove it or wire it into masternode sync bounds to avoid confusion.
dash-spv/src/storage/disk/manager.rs (2)
150-156: Use `tracing::error!` instead of `eprintln!` in the async worker. Swap to structured logging for consistency.
```diff
-                eprintln!("Failed to save segment {}: {}", segment_id, e);
+                tracing::error!("Failed to save segment {}: {}", segment_id, e);
...
-                eprintln!("Failed to save filter segment {}: {}", segment_id, e);
+                tracing::error!("Failed to save filter segment {}: {}", segment_id, e);
...
-            if let Err(e) = save_index_to_disk(&path, &index).await {
-                eprintln!("Failed to save index: {}", e);
+            if let Err(e) = save_index_to_disk(&path, &index).await {
+                tracing::error!("Failed to save index: {}", e);
```

Also applies to: 168-176, 186-193
82-99: Blocking std::fs in async contexts.
`create_dir_all`/`read_dir` are sync calls inside async fns. Consider `tokio::fs` or `spawn_blocking` for the directory scan/rebuild to avoid blocking the runtime. Also applies to: 291-399
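A sketch of the non-blocking variants, assuming the scan only needs the entry paths (function names are illustrative):

```rust
use std::path::{Path, PathBuf};

/// Async directory creation via tokio::fs instead of std::fs.
async fn ensure_dir(path: &Path) -> std::io::Result<()> {
    tokio::fs::create_dir_all(path).await
}

/// Directory scan moved off the async runtime onto the blocking pool.
async fn list_segment_files(dir: PathBuf) -> std::io::Result<Vec<PathBuf>> {
    tokio::task::spawn_blocking(move || {
        let mut files = Vec::new();
        for entry in std::fs::read_dir(&dir)? {
            files.push(entry?.path());
        }
        Ok(files)
    })
    .await
    .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?
}
```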
dash-spv/ARCHITECTURE.md (1)
4-4: Fix markdown lint and staleness.
- Add languages to fenced blocks (use `text`, `rust`, or `mermaid` as appropriate) to satisfy MD040.
- Replace emphasis-as-heading with real headings to satisfy MD036.
- Update “Last Updated” to 2025-10-21 (PR authored on Oct 21, 2025) to reflect current state; verify counts (files/tests) before publishing.
Example:
````diff
-```
+```text
 ┌─────────────────────────────────────────────────────────────┐
 │                    DashSpvClient<W,N,S>                      │
````

```diff
-**Last Updated:** 2025-01-21
+**Last Updated:** 2025-10-21
```

Also applies to: 69-74, 81-106, 110-132, 521-537, 855-864, 1011-1024, 1090-1103, 1455-1467
dash-spv/src/client/core.rs (2)
263-277: Propagate config changes to sub-managers or document constraints.
`update_config` mutates `self.config` but doesn't refresh `sync_manager`, `header_sync`, `filter_sync`, or timeouts derived from config. Either (a) plumb a `reload_config(&ClientConfig)` through the subcomponents, or (b) document that only pre-start changes are supported.
225-259: Ensure filter-reset covers all internal queues and flow-control state. You clear storage, in-memory state, and stats. Confirm `filter_sync_mut().clear_filter_state()` also resets:
- pending/active requests (filters and CFHeaders),
- retry counters and flow-control windows,
- any processing thread tracking sets.
If not, add explicit resets here to avoid phantom timeouts after a reset.
dash-spv/src/client/sync_coordinator.rs (3)
403-417: Prefer monotonic time for the periodic save cadence. Using `SystemTime` to gate a 30s interval is subject to clock changes. Use `Instant`-based elapsed tracking.
Example pattern:
- Store an `Instant` in `last_sync_state_save`.
- Compare with `elapsed() >= Duration::from_secs(30)`, then reset to `Instant::now()`.
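A compact sketch of that pattern (the field and type names here are assumed, not taken from the PR):

```rust
use std::time::{Duration, Instant};

const SAVE_INTERVAL: Duration = Duration::from_secs(30);

struct SaveCadence {
    last_sync_state_save: Instant,
}

impl SaveCadence {
    /// True when the periodic save should run; monotonic time means wall-clock
    /// adjustments can neither skip nor double-fire the save.
    fn should_save(&mut self) -> bool {
        if self.last_sync_state_save.elapsed() >= SAVE_INTERVAL {
            self.last_sync_state_save = Instant::now();
            true
        } else {
            false
        }
    }
}
```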
278-286: Duplicate progress snapshot work; consider DRY-ing it into one helper. Two similar blocks build `SyncProgress` snapshots per loop. Extract a helper to compute once and reuse for both emissions. Also applies to: 349-358
243-247: Hard-coded "30+ seconds" string; tie it to a single timeout constant. Keep logs consistent with the configured timeout. Source a shared constant (e.g., `SYNC_TIMEOUT_SECONDS`) to avoid drift.
dash-spv/src/sync/filters/stats.rs (3)
25-31: `try_lock` on the hot path can undercount; consider awaiting the lock or a sampling strategy. Returning 0 on contention may mislead UIs/logic. If accurate counts matter, use `lock().await`; if you must avoid blocking, document this as "best-effort".
129-142: Unify 30s timeout as a constant/config.
`30` appears in two places. Define it once (e.g., `const SYNC_TIMEOUT_SECONDS: u64`) or use a config field to keep behavior consistent and tunable.

```diff
-        last_received.elapsed() > std::time::Duration::from_secs(30)
+        last_received.elapsed() > std::time::Duration::from_secs(SYNC_TIMEOUT_SECONDS)
```

Also applies to: 157-164
201-232: Coverage vs. corrected missing can diverge; compute coverage from the corrected missing when it applies. When `corrected_total_missing` adjusts gaps, `actual_coverage` may still show 100%. Align them to avoid UI inconsistencies.

```diff
-    let actual_coverage = filter_sync.get_actual_coverage_percentage();
+    let actual_coverage = if total_missing == 0
+        && stats_lock.filters_received < stats_lock.filters_requested
+    {
+        // Use basic stats-derived coverage in inconsistency case
+        (stats_lock.filters_received as f64 / stats_lock.filters_requested as f64) * 100.0
+    } else {
+        filter_sync.get_actual_coverage_percentage()
+    };
```

dash-spv/src/sync/filters/matching.rs (3)
69-76: Remove `#[allow(dead_code)]` by integrating or scoping the helper. Either use `filter_matches_scripts` in the flow (e.g., in the processing thread) or mark it `pub(super)` and add a unit test to justify it.
198-206: Pending-queue scan is O(n); consider an auxiliary `HashSet<BlockHash>` for O(1) membership. Keep the `VecDeque` for order, but maintain a parallel `queued_blocks: HashSet<BlockHash>` to avoid repeated linear scans. Also applies to: 220-231
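A minimal sketch of the paired-collection idea (the `BlockQueue` type is illustrative, not from the PR):

```rust
use dashcore::BlockHash;
use std::collections::{HashSet, VecDeque};

/// Queue that preserves request order but answers membership in O(1).
#[derive(Default)]
struct BlockQueue {
    order: VecDeque<BlockHash>,
    queued: HashSet<BlockHash>,
}

impl BlockQueue {
    fn push(&mut self, hash: BlockHash) {
        // Only enqueue if not already pending; keeps both structures in sync.
        if self.queued.insert(hash) {
            self.order.push_back(hash);
        }
    }

    fn pop(&mut self) -> Option<BlockHash> {
        let hash = self.order.pop_front()?;
        self.queued.remove(&hash);
        Some(hash)
    }

    fn contains(&self, hash: &BlockHash) -> bool {
        self.queued.contains(hash)
    }
}
```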
264-283: Height set to 0 for processing-thread requests; consider resolving the real height. Look up the height (e.g., via the storage index) to avoid ambiguous 0-height matches in downstream consumers.
dash-spv/src/sync/filters/gaps.rs (2)
27-33: Avoid dropping receipts under contention; use an awaited lock. `try_lock` may silently miss heights during contention. Prefer `lock().await` here; writes are short.

```diff
-    pub fn record_filter_received(&mut self, height: u32) {
-        if let Ok(mut heights) = self.received_filter_heights.try_lock() {
-            heights.insert(height);
-            tracing::trace!("📊 Recorded filter received at height {}", height);
-        }
-    }
+    pub async fn record_filter_received(&self, height: u32) {
+        let mut heights = self.received_filter_heights.lock().await;
+        heights.insert(height);
+        tracing::trace!("📊 Recorded filter received at height {}", height);
+    }
```

Note: adjust call sites accordingly.
261-272: Magic `30` timeout and `MAX_FILTER_REQUEST_SIZE` commentary; centralize the constants. Use the same shared `SYNC_TIMEOUT_SECONDS` constant as elsewhere; ensure a single source of truth for request-size limits and mention it in docs/comments. Also applies to: 327-341
dash-spv/src/sync/filters/manager.rs (1)
287-296: Reset methods: prefer consistent naming and scope. You have `reset`, `reset_pending_requests`, and `clear_filter_sync_state`. Consider consolidating semantics or documenting the differences to prevent misuse. Also applies to: 295-309
dash-spv/src/sync/filters/download.rs (3)
325-327: Clarify log: "to {}" currently prints a hash, not a height. The message reads like a height range. Make it explicit to avoid confusion.
```diff
-        tracing::trace!("Requested filters from height {} to {}", start_height, stop_hash);
+        tracing::trace!("Requested filters: start_height={}, stop_hash={}", start_height, stop_hash);
```
298-303: Prefer a typed filter type if available. If `GetCFilters::filter_type` has an enum (e.g., `FilterType::Basic`), use it instead of a raw `0` for clarity and safety. If it's a u8 in the protocol struct, ignore this.
443-481: Method currently always returns `false`. `download_and_check_filter` initiates the request then returns `Ok(false)`. Either document that this is fire-and-forget and the result comes via an event, or plumb an await/notification path. I can wire a small completion notifier (e.g., oneshot channel) pattern if helpful.
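A sketch of such a notifier with `tokio::sync::oneshot` (the pending-map plumbing is assumed, not shown in the PR):

```rust
use dashcore::BlockHash;
use std::collections::HashMap;
use tokio::sync::oneshot;

/// Caller side (sketch): register a one-shot waiter keyed by block hash,
/// fire the filter request, then await the returned receiver for the result.
fn register_filter_waiter(
    pending: &mut HashMap<BlockHash, oneshot::Sender<bool>>,
    block_hash: BlockHash,
) -> oneshot::Receiver<bool> {
    let (tx, rx) = oneshot::channel();
    pending.insert(block_hash, tx);
    // ... send the GetCFilters request for `block_hash` here ...
    rx
}

// Handler side (sketch): when the CFilter arrives and has been checked,
// resolve the waiter if one exists.
// if let Some(tx) = pending.remove(&cfilter.block_hash) {
//     let _ = tx.send(filter_matched);
// }
```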
dash-spv/src/sync/sequential/manager.rs (1)
200-207: Duplicate engine accessors
`masternode_list_engine()` and `get_masternode_engine()` are equivalent. Consider keeping one and deprecating the other to reduce surface area. Also applies to: 224-228
dash-spv/src/storage/disk/state.rs (1)
97-109: Consistency: apply atomic writes to other critical files (optional). Consider using the temp-file + rename pattern for:
- state/masternode.json
- state/&lt;key&gt;.dat from store_metadata
- chainlocks/chainlock_*.bin
- islocks/islock_*.bin
- checkpoints/checkpoint_*.json

This improves durability under crashes.
Also applies to: 379-395, 254-272, 335-355, 189-209
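A minimal sketch of the temp-file + rename pattern with tokio (error handling simplified; fsync of the file and parent directory is omitted here):

```rust
use std::path::Path;

/// Write `bytes` to `path` atomically: write a sibling temp file first,
/// then rename it over the destination so readers never see a partial file.
async fn write_atomic(path: &Path, bytes: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    tokio::fs::write(&tmp, bytes).await?;
    // Rename within the same directory is atomic on POSIX filesystems.
    tokio::fs::rename(&tmp, path).await
}
```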
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (34)
- dash-spv/ARCHITECTURE.md (11 hunks)
- dash-spv/CODE_ANALYSIS_SUMMARY.md (2 hunks)
- dash-spv/src/client/chainlock.rs (1 hunks)
- dash-spv/src/client/core.rs (1 hunks)
- dash-spv/src/client/events.rs (1 hunks)
- dash-spv/src/client/lifecycle.rs (1 hunks)
- dash-spv/src/client/mempool.rs (1 hunks)
- dash-spv/src/client/progress.rs (1 hunks)
- dash-spv/src/client/queries.rs (1 hunks)
- dash-spv/src/client/sync_coordinator.rs (1 hunks)
- dash-spv/src/storage/disk.rs (0 hunks)
- dash-spv/src/storage/disk/filters.rs (1 hunks)
- dash-spv/src/storage/disk/headers.rs (1 hunks)
- dash-spv/src/storage/disk/io.rs (1 hunks)
- dash-spv/src/storage/disk/manager.rs (1 hunks)
- dash-spv/src/storage/disk/mod.rs (1 hunks)
- dash-spv/src/storage/disk/segments.rs (1 hunks)
- dash-spv/src/storage/disk/state.rs (1 hunks)
- dash-spv/src/sync/filters/download.rs (1 hunks)
- dash-spv/src/sync/filters/gaps.rs (1 hunks)
- dash-spv/src/sync/filters/headers.rs (1 hunks)
- dash-spv/src/sync/filters/manager.rs (1 hunks)
- dash-spv/src/sync/filters/matching.rs (1 hunks)
- dash-spv/src/sync/filters/mod.rs (1 hunks)
- dash-spv/src/sync/filters/requests.rs (1 hunks)
- dash-spv/src/sync/filters/retry.rs (1 hunks)
- dash-spv/src/sync/filters/stats.rs (1 hunks)
- dash-spv/src/sync/filters/types.rs (1 hunks)
- dash-spv/src/sync/sequential/lifecycle.rs (1 hunks)
- dash-spv/src/sync/sequential/manager.rs (1 hunks)
- dash-spv/src/sync/sequential/message_handlers.rs (1 hunks)
- dash-spv/src/sync/sequential/mod.rs (1 hunks)
- dash-spv/src/sync/sequential/phase_execution.rs (1 hunks)
- dash-spv/src/sync/sequential/post_sync.rs (1 hunks)
💤 Files with no reviewable changes (1)
- dash-spv/src/storage/disk.rs
🧰 Additional context used
📓 Path-based instructions (4)
dash-spv/**/*.rs
📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)
dash-spv/**/*.rs: Enforce Rust formatting via `cargo fmt --check` on all Rust source files
All code must be clippy-clean: run `cargo clippy --all-targets --all-features -- -D warnings`
Files:
dash-spv/src/client/progress.rsdash-spv/src/storage/disk/mod.rsdash-spv/src/sync/sequential/post_sync.rsdash-spv/src/sync/sequential/message_handlers.rsdash-spv/src/storage/disk/segments.rsdash-spv/src/sync/sequential/lifecycle.rsdash-spv/src/sync/filters/headers.rsdash-spv/src/storage/disk/filters.rsdash-spv/src/sync/filters/retry.rsdash-spv/src/sync/filters/requests.rsdash-spv/src/storage/disk/headers.rsdash-spv/src/client/chainlock.rsdash-spv/src/client/events.rsdash-spv/src/sync/filters/types.rsdash-spv/src/client/core.rsdash-spv/src/sync/filters/stats.rsdash-spv/src/client/queries.rsdash-spv/src/sync/filters/mod.rsdash-spv/src/sync/sequential/mod.rsdash-spv/src/client/mempool.rsdash-spv/src/sync/sequential/manager.rsdash-spv/src/sync/filters/gaps.rsdash-spv/src/client/sync_coordinator.rsdash-spv/src/client/lifecycle.rsdash-spv/src/sync/filters/matching.rsdash-spv/src/sync/filters/download.rsdash-spv/src/storage/disk/manager.rsdash-spv/src/sync/filters/manager.rsdash-spv/src/sync/sequential/phase_execution.rsdash-spv/src/storage/disk/io.rsdash-spv/src/storage/disk/state.rs
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Use proper error types with thiserror and propagate errors appropriately
Use the tokio runtime for async operations in Rust
Use conditional compilation feature flags for optional features (#[cfg(feature = ...)])
Format Rust code with cargo fmt (and enforce via cargo fmt --check)
Run clippy with -D warnings and fix all lints
Adhere to MSRV Rust 1.89 (avoid features requiring newer compiler)
**/*.rs: Format Rust code with rustfmt (per rustfmt.toml); run cargo fmt --all before commits
Lint with clippy; treat warnings as errors in CI
Follow Rust naming: snake_case for functions/variables, UpperCamelCase for types/traits, SCREAMING_SNAKE_CASE for consts
Prefer async using tokio where applicable
Files:
dash-spv/src/client/progress.rsdash-spv/src/storage/disk/mod.rsdash-spv/src/sync/sequential/post_sync.rsdash-spv/src/sync/sequential/message_handlers.rsdash-spv/src/storage/disk/segments.rsdash-spv/src/sync/sequential/lifecycle.rsdash-spv/src/sync/filters/headers.rsdash-spv/src/storage/disk/filters.rsdash-spv/src/sync/filters/retry.rsdash-spv/src/sync/filters/requests.rsdash-spv/src/storage/disk/headers.rsdash-spv/src/client/chainlock.rsdash-spv/src/client/events.rsdash-spv/src/sync/filters/types.rsdash-spv/src/client/core.rsdash-spv/src/sync/filters/stats.rsdash-spv/src/client/queries.rsdash-spv/src/sync/filters/mod.rsdash-spv/src/sync/sequential/mod.rsdash-spv/src/client/mempool.rsdash-spv/src/sync/sequential/manager.rsdash-spv/src/sync/filters/gaps.rsdash-spv/src/client/sync_coordinator.rsdash-spv/src/client/lifecycle.rsdash-spv/src/sync/filters/matching.rsdash-spv/src/sync/filters/download.rsdash-spv/src/storage/disk/manager.rsdash-spv/src/sync/filters/manager.rsdash-spv/src/sync/sequential/phase_execution.rsdash-spv/src/storage/disk/io.rsdash-spv/src/storage/disk/state.rs
**/src/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
**/src/**/*.rs: Each crate keeps sources in src/
Avoid unwrap()/expect() in library code; use proper error types (e.g., thiserror)
Place unit tests alongside code with #[cfg(test)]
Files:
dash-spv/src/client/progress.rsdash-spv/src/storage/disk/mod.rsdash-spv/src/sync/sequential/post_sync.rsdash-spv/src/sync/sequential/message_handlers.rsdash-spv/src/storage/disk/segments.rsdash-spv/src/sync/sequential/lifecycle.rsdash-spv/src/sync/filters/headers.rsdash-spv/src/storage/disk/filters.rsdash-spv/src/sync/filters/retry.rsdash-spv/src/sync/filters/requests.rsdash-spv/src/storage/disk/headers.rsdash-spv/src/client/chainlock.rsdash-spv/src/client/events.rsdash-spv/src/sync/filters/types.rsdash-spv/src/client/core.rsdash-spv/src/sync/filters/stats.rsdash-spv/src/client/queries.rsdash-spv/src/sync/filters/mod.rsdash-spv/src/sync/sequential/mod.rsdash-spv/src/client/mempool.rsdash-spv/src/sync/sequential/manager.rsdash-spv/src/sync/filters/gaps.rsdash-spv/src/client/sync_coordinator.rsdash-spv/src/client/lifecycle.rsdash-spv/src/sync/filters/matching.rsdash-spv/src/sync/filters/download.rsdash-spv/src/storage/disk/manager.rsdash-spv/src/sync/filters/manager.rsdash-spv/src/sync/sequential/phase_execution.rsdash-spv/src/storage/disk/io.rsdash-spv/src/storage/disk/state.rs
dash-spv/src/storage/**/*.rs
📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)
Provide both `MemoryStorageManager` and `DiskStorageManager` behind the `StorageManager` trait
Files:
dash-spv/src/storage/disk/mod.rs, dash-spv/src/storage/disk/segments.rs, dash-spv/src/storage/disk/filters.rs, dash-spv/src/storage/disk/headers.rs, dash-spv/src/storage/disk/manager.rs, dash-spv/src/storage/disk/io.rs, dash-spv/src/storage/disk/state.rs
🧠 Learnings (4)
📓 Common learnings
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/storage/**/*.rs : Provide both `MemoryStorageManager` and `DiskStorageManager` behind the `StorageManager` trait
Applied to files:
dash-spv/src/storage/disk/mod.rs, dash-spv/src/storage/disk/filters.rs, dash-spv/src/storage/disk/headers.rs, dash-spv/ARCHITECTURE.md, dash-spv/src/storage/disk/manager.rs, dash-spv/src/storage/disk/io.rs, dash-spv/src/storage/disk/state.rs
📚 Learning: 2025-08-16T04:15:57.225Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: swift-dash-core-sdk/Examples/DashHDWalletExample/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:15:57.225Z
Learning: Applies to swift-dash-core-sdk/Examples/DashHDWalletExample/**/*.swift : When accessing SPV functionality, use DashSDK public APIs instead of direct client access; add public wrapper methods to DashSDK if needed
Applied to files:
dash-spv/src/client/core.rs
📚 Learning: 2025-06-26T16:01:37.609Z
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.
Applied to files:
dash-spv/src/client/mempool.rs
🧬 Code graph analysis (29)
dash-spv/src/client/progress.rs (1)
dash-spv/src/network/peer.rs (1)
connected_peers(69-71)
dash-spv/src/sync/sequential/post_sync.rs (5)
dash-spv/src/sync/sequential/manager.rs (1)
current_phase(196-198)dash-spv/src/sync/filters/manager.rs (1)
new(111-154)dash-spv/src/client/lifecycle.rs (1)
new(36-105)dash-spv/src/sync/sequential/lifecycle.rs (1)
new(31-62)dash/src/blockdata/constants.rs (1)
genesis_block(112-185)
dash-spv/src/sync/sequential/message_handlers.rs (4)
dash-spv/src/client/core.rs (3)
network(166-168)storage(171-173)wallet(161-163)dash-spv/src/sync/sequential/manager.rs (2)
current_phase(196-198)filter_sync(231-233)dash-spv/src/sync/filters/headers.rs (1)
handle_cfheaders_message(140-480)dash-spv/src/sync/filters/manager.rs (2)
pending_download_count(273-275)active_request_count(278-280)
dash-spv/src/storage/disk/segments.rs (2)
dash-spv/src/storage/disk/io.rs (4)
load_headers_from_file(19-49)save_segment_to_disk(100-132)load_filter_headers_from_file(52-82)save_filter_segment_to_disk(135-158)dash-spv/src/storage/disk/manager.rs (1)
new(79-129)
dash-spv/src/sync/sequential/lifecycle.rs (4)
dash-spv/src/storage/disk/manager.rs (1)
new(79-129)dash-spv/src/sync/filters/manager.rs (2)
new(111-154)reset_pending_requests(334-346)dash-spv/src/client/lifecycle.rs (1)
new(36-105)dash-spv/src/sync/sequential/manager.rs (1)
current_phase(196-198)
dash-spv/src/sync/filters/headers.rs (2)
dash-spv/src/sync/sequential/message_handlers.rs (1)
handle_cfheaders_message(503-555)dash-spv/src/sync/filters/manager.rs (2)
header_abs_to_storage_index(163-169)filter_abs_to_storage_index(173-179)
dash-spv/src/storage/disk/filters.rs (4)
dash-spv/src/storage/disk/state.rs (7)
store_filter_headers(568-573)load_filter_headers(575-580)get_filter_header(582-587)get_filter_tip_height(589-591)store_filter(609-611)load_filter(613-615)clear_filters(629-631)dash-spv/src/sync/filters/download.rs (1)
store_filter_headers(483-630)dash-spv/src/storage/disk/manager.rs (3)
get_segment_id(218-220)get_segment_offset(223-225)new(79-129)dash-spv/src/storage/disk/segments.rs (2)
ensure_filter_segment_loaded(150-193)save_dirty_segments(228-322)
dash-spv/src/sync/filters/retry.rs (2)
dash-spv/src/sync/filters/manager.rs (1)
new(111-154)dash-spv/src/sync/sequential/lifecycle.rs (1)
new(31-62)
dash-spv/src/sync/filters/requests.rs (1)
dash-spv/src/client/lifecycle.rs (1)
start(108-259)
dash-spv/src/storage/disk/headers.rs (3)
dash-spv/src/storage/disk/segments.rs (3)
create_sentinel_header(50-59)ensure_segment_loaded(62-117)save_dirty_segments(228-322)dash-spv/src/storage/disk/manager.rs (3)
get_segment_id(218-220)get_segment_offset(223-225)new(79-129)dash-spv/src/storage/disk/state.rs (5)
load_headers(556-558)get_header(560-562)get_tip_height(564-566)get_header_height_by_hash(637-639)get_headers_batch(641-647)
dash-spv/src/client/chainlock.rs (2)
dash-spv/src/storage/disk/manager.rs (1)
new(79-129)dash-spv/src/client/lifecycle.rs (1)
new(36-105)
dash-spv/src/client/events.rs (3)
dash-spv/src/storage/disk/manager.rs (2)
mpsc(135-135)mpsc(136-136)dash-spv/src/sync/filters/matching.rs (1)
mpsc(298-298)dash-spv/src/sync/sequential/phases.rs (1)
progress(278-524)
dash-spv/src/sync/filters/types.rs (1)
dash-spv/src/sync/filters/matching.rs (1)
mpsc(298-298)
dash-spv/src/client/core.rs (2)
dash-spv/src/storage/disk/state.rs (5)
clear_sync_state(180-187)clear_sync_state(660-662)clear_filters(629-631)stats(443-474)stats(633-635)dash-spv/src/sync/filters/manager.rs (1)
new(111-154)
dash-spv/src/sync/filters/stats.rs (1)
dash-spv/src/sync/sequential/manager.rs (1)
filter_sync(231-233)
dash-spv/src/client/queries.rs (2)
dash-spv/src/sync/sequential/manager.rs (1)
masternode_list_engine(202-206)dash-spv/src/sync/filters/manager.rs (1)
is_filter_sync_available(199-203)
dash-spv/src/sync/sequential/mod.rs (1)
dash-spv/src/sync/sequential/phases.rs (1)
progress(278-524)
dash-spv/src/client/mempool.rs (1)
dash-spv/src/client/lifecycle.rs (1)
new(36-105)
dash-spv/src/sync/sequential/manager.rs (2)
dash-spv/src/client/core.rs (2)
network(166-168)storage(171-173)dash-spv/src/client/queries.rs (2)
network(46-49)masternode_list_engine(61-63)
dash-spv/src/sync/filters/gaps.rs (2)
dash-spv/src/sync/filters/manager.rs (1)
new(111-154)dash-spv/src/sync/sequential/lifecycle.rs (1)
new(31-62)
dash-spv/src/client/sync_coordinator.rs (7)
dash-spv/src/client/core.rs (4)
network(166-168)storage(171-173)tip_hash(194-197)chain_state(206-209)dash-spv/src/sync/filters/manager.rs (1)
new(111-154)dash-spv/src/client/lifecycle.rs (2)
new(36-105)start(108-259)dash-spv/src/sync/sequential/lifecycle.rs (1)
new(31-62)dash-spv/src/client/progress.rs (3)
sync_progress(24-27)map_phase_to_stage(61-114)stats(30-58)dash-spv/src/sync/sequential/manager.rs (1)
filter_sync(231-233)dash-spv/src/storage/sync_state.rs (1)
from_chain_state(154-200)
dash-spv/src/client/lifecycle.rs (5)
dash-spv/src/storage/disk/manager.rs (3)
mpsc(135-135)mpsc(136-136)new(79-129)dash-spv/src/sync/sequential/lifecycle.rs (2)
new(31-62)start_sync(108-142)dash-spv/src/types.rs (1)
new_for_network(292-329)dash-spv/src/storage/disk/state.rs (4)
stats(443-474)stats(633-635)shutdown(477-491)shutdown(749-751)dash-spv/src/chain/checkpoints.rs (2)
mainnet_checkpoints(261-348)testnet_checkpoints(351-402)
dash-spv/src/sync/filters/matching.rs (2)
dash-spv/src/sync/filters/manager.rs (1)
new(111-154)dash-spv/src/sync/filters/stats.rs (1)
update_filter_received(81-87)
dash-spv/src/sync/filters/download.rs (3)
dash-spv/src/storage/disk/manager.rs (1)
new(79-129)dash-spv/src/sync/filters/manager.rs (1)
new(111-154)dash-spv/src/storage/disk/filters.rs (1)
store_filter_headers(15-91)
dash-spv/src/storage/disk/manager.rs (2)
dash-spv/src/storage/disk/io.rs (5)
save_filter_segment_to_disk(135-158)save_index_to_disk(161-178)save_segment_to_disk(100-132)load_index_from_file(85-97)load_headers_from_file(19-49)dash-spv/src/storage/disk/segments.rs (2)
ensure_segment_loaded(62-117)ensure_filter_segment_loaded(150-193)
dash-spv/src/sync/filters/manager.rs (4)
dash-spv/src/client/queries.rs (2)
network(46-49)is_filter_sync_available(168-172)dash-spv/src/storage/disk/manager.rs (1)
new(79-129)dash-spv/src/client/lifecycle.rs (1)
new(36-105)dash-spv/src/sync/sequential/lifecycle.rs (1)
new(31-62)
dash-spv/src/sync/sequential/phase_execution.rs (2)
dash-spv/src/sync/sequential/manager.rs (1)
current_phase(196-198)dash-spv/src/sync/sequential/phases.rs (1)
progress(278-524)
dash-spv/src/storage/disk/io.rs (1)
dash-spv/src/storage/disk/manager.rs (1)
new(79-129)
dash-spv/src/storage/disk/state.rs (4)
dash-spv/src/storage/disk/manager.rs (1)
new(79-129)dash-spv/src/storage/disk/segments.rs (1)
save_dirty_segments(228-322)dash-spv/src/storage/disk/headers.rs (5)
load_headers(289-339)get_header(342-404)get_tip_height(407-419)get_header_height_by_hash(422-424)get_headers_batch(427-448)dash-spv/src/storage/disk/filters.rs (6)
store_filter_headers(15-91)load_filter_headers(94-140)get_filter_header(143-178)get_filter_tip_height(181-183)store_filter(186-190)load_filter(193-201)
🪛 LanguageTool
dash-spv/ARCHITECTURE.md
[grammar] ~1468-~1468: Use a hyphen to join words.
Context: ...table complexity levels. The 1,000-1,500 line files contain inherently complex lo...
(QB_NEW_EN_HYPHEN)
🪛 markdownlint-cli2 (0.18.1)
dash-spv/ARCHITECTURE.md
35-35: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
520-520: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
854-854: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
1011-1011: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
1091-1091: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
dash-spv/CODE_ANALYSIS_SUMMARY.md
153-153: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
```rust
let _ = ui
    .update_status(|status| {
        status.peer_count = 1; // Connected to one peer
        status.headers = header_height;
        status.filter_headers = filter_height;
    })
    .await;
}
```
Don’t hard-code UI peer_count = 1
Use the live network peer count to avoid misleading status.
```diff
-            status.peer_count = 1; // Connected to one peer
+            status.peer_count = self.network.peer_count() as u32;
```

Also applies to: 236-246
🤖 Prompt for AI Agents
In dash-spv/src/client/lifecycle.rs around lines 236-246 and 249-256, the UI
update is hard-coding status.peer_count = 1; replace that with the actual peer
count from the runtime network state (e.g., call or pass in the peer
list/manager and set status.peer_count = peers.len() or the provided peer_count
variable, handling Option/Result as needed), ensure the value is captured
outside the async closure or cloned into it if necessary, and keep the rest of
the status updates the same so the UI reflects the live network peer count
instead of a constant.
```rust
let genesis_header = match self.config.network {
    dashcore::Network::Dash => {
        // Use the actual Dash mainnet genesis block parameters
        BlockHeader {
            version: Version::from_consensus(1),
            prev_blockhash: dashcore::BlockHash::from([0u8; 32]),
            merkle_root: "e0028eb9648db56b1ac77cf090b99048a8007e2bb64b68f092c03c7f56a662c7"
                .parse()
                .unwrap_or_else(|_| dashcore::hashes::sha256d::Hash::all_zeros().into()),
            time: 1390095618,
            bits: CompactTarget::from_consensus(0x1e0ffff0),
            nonce: 28917698,
        }
```
Genesis merkle_root parse fallback hides config mistakes
Silently falling back to all-zero hash can mask a bad constant and cause a mismatch elsewhere. Prefer failing fast.
```diff
-                    merkle_root: "e0028e...662c7"
-                        .parse()
-                        .unwrap_or_else(|_| dashcore::hashes::sha256d::Hash::all_zeros().into()),
+                    merkle_root: "e0028e...662c7"
+                        .parse()
+                        .map_err(|_| SpvError::Config("Invalid mainnet genesis merkle_root".into()))?,
@@
-                    merkle_root: "e0028e...662c7"
-                        .parse()
-                        .unwrap_or_else(|_| dashcore::hashes::sha256d::Hash::all_zeros().into()),
+                    merkle_root: "e0028e...662c7"
+                        .parse()
+                        .map_err(|_| SpvError::Config("Invalid testnet genesis merkle_root".into()))?,
```

Ensure the function signature supports returning Result in this block (it already does).
Also applies to: 476-484
🤖 Prompt for AI Agents
In dash-spv/src/client/lifecycle.rs around lines 443-455 (and apply same change
to 476-484), the merkle_root parse currently falls back to an all-zero hash
which hides bad constants; change the code to propagate the parse error instead
of silently substituting a zero hash: replace the parse().unwrap_or_else(...)
pattern with using parse()? or map_err(|e| SomeError::new(format!(...)))? to
return an Err from the function with a clear error message (ensure the existing
Result return type is used), so bad genesis constants fail fast and provide
contextual diagnostics.
```rust
/// Get mempool balance for an address.
pub async fn get_mempool_balance(
    &self,
    address: &dashcore::Address,
) -> Result<crate::types::MempoolBalance> {
    let _wallet = self.wallet.read().await;
    let mempool_state = self.mempool_state.read().await;

    let mut pending = 0i64;
    let mut pending_instant = 0i64;

    // Calculate pending balances from mempool transactions
    for tx in mempool_state.transactions.values() {
        // Check if this transaction affects the given address
        let mut address_affected = false;
        for addr in &tx.addresses {
            if addr == address {
                address_affected = true;
                break;
            }
        }

        if address_affected {
            // Calculate the actual balance change for this specific address
            // by examining inputs and outputs directly
            let mut address_balance_change = 0i64;

            // Check outputs to this address (incoming funds)
            for output in &tx.transaction.output {
                if let Ok(out_addr) =
                    dashcore::Address::from_script(&output.script_pubkey, self.config.network)
                {
                    if &out_addr == address {
                        address_balance_change += output.value as i64;
                    }
                }
            }

            // Check inputs from this address (outgoing funds)
            // We need to check if any of the inputs were previously owned by this address
            // Note: This requires the wallet to have knowledge of the UTXOs being spent
            // In a real implementation, we would need to look up the previous outputs
            // For now, we'll rely on the is_outgoing flag and net_amount when we can't determine ownership

            // Validate that the calculated balance change is consistent with net_amount
            // for transactions where this address is involved
            if address_balance_change != 0 {
                // For outgoing transactions, net_amount should be negative if we're spending
                // For incoming transactions, net_amount should be positive if we're receiving
                // Mixed transactions (both sending and receiving) should have the net effect

                // Apply the validated balance change
                if tx.is_instant_send {
                    pending_instant += address_balance_change;
                } else {
                    pending += address_balance_change;
                }
            } else if tx.net_amount != 0 && tx.is_outgoing {
                // Edge case: If we calculated zero change but net_amount is non-zero,
                // and it's an outgoing transaction, it might be a fee-only transaction
                // In this case, we should not affect the balance for this address
                // unless it's the sender paying the fee
                continue;
            }
        }
    }

    // Convert to unsigned values, ensuring no negative balances
    let pending_sats = if pending < 0 {
        0
    } else {
        pending as u64
    };
    let pending_instant_sats = if pending_instant < 0 {
        0
    } else {
        pending_instant as u64
    };

    Ok(crate::types::MempoolBalance {
        pending: dashcore::Amount::from_sat(pending_sats),
        pending_instant: dashcore::Amount::from_sat(pending_instant_sats),
    })
}
```
🧩 Analysis chain
Mempool “balance” drops outgoing amounts; likely incorrect.
You accumulate negative deltas into pending/pending_instant but then clamp to zero, discarding spends/fees. This misreports net mempool effect.
Suggestions:
- If `MempoolBalance` supports it, return incoming and outgoing separately, or a signed net.
- Short-term: if precise input ownership isn't available, fall back to `tx.net_amount` for address-involved txs (document the limitations).
- Update the docstring to clarify semantics (incoming-only vs net).
Please confirm crate::types::MempoolBalance fields so we can provide an exact patch. Run:
🏁 Script executed:
```bash
#!/bin/bash
ast-grep --pattern $'struct MempoolBalance { $$$ }' -n
rg -nP 'struct\s+UnconfirmedTransaction|net_amount|is_outgoing|is_instant_send'
```

Length of output: 7916
🏁 Script executed:
```bash
#!/bin/bash
# Find MempoolBalance struct definition
rg -A5 'struct MempoolBalance' dash-spv/src/types.rs
# Check UnconfirmedTransaction constructor to understand net_amount
rg -A15 'impl UnconfirmedTransaction' dash-spv/src/types.rs | head -40
# Look for MempoolBalance tests or usage
rg -B2 -A8 'MempoolBalance {' dash-spv/src/
```

Length of output: 804
🏁 Script executed:
```bash
#!/bin/bash
# Find get_mempool_balance usage and any tests
rg -B3 -A10 'get_mempool_balance' dash-spv/src/ --type=rs
# Look for any type that might track both incoming and outgoing
rg -B2 -A5 'outgoing|net_effect' dash-spv/src/types.rs | head -50
# Check if there are doc comments or tests that explain pending balance semantics
rg -B5 'pending.*balance|balance.*pending' dash-spv/src/types.rs | head -30
```

Length of output: 1813
🏁 Script executed:
```bash
#!/bin/bash
# Check if MempoolState tracks per-address balances
rg -B3 -A20 'struct MempoolState' dash-spv/src/types.rs
# Look at the actual get_mempool_balance function context (before/after the snippet)
rg -B10 -A50 'fn get_mempool_balance' dash-spv/src/client/mempool.rs | head -80
# See if there are tests for get_mempool_balance
rg -B5 -A15 'test.*mempool_balance|mempool_balance.*test' dash-spv/
```

Length of output: 7386
Use tx.net_amount directly instead of recalculating only outputs; address filter requires per-tx net amount, not gross incoming.
The function drops outgoing amounts by only examining tx.transaction.output and clamping negatives to zero. The tx.net_amount field is pre-calculated (signed) and accounts for both inputs and outputs; use it directly for address-involved transactions rather than attempting incomplete output-only reconstruction.
For address-filtered balance:
- When `address_affected` is true, apply `tx.net_amount` directly (it already includes the net effect for the wallet)
- If precise address-level input ownership is unavailable, this is the correct fallback
- Update the docstring to clarify whether `MempoolBalance` represents incoming-only or the net pending effect
Example fix:
```rust
if address_affected {
    // Use pre-calculated net amount (includes inputs & outputs)
    if tx.is_instant_send {
        pending_instant += tx.net_amount;
    } else {
        pending += tx.net_amount;
    }
}
```

Then handle the unsigned conversion at the end (clamping only once, not mid-accumulation).
🤖 Prompt for AI Agents
In dash-spv/src/client/mempool.rs around lines 52 to 135, the code recalculates
address effects by only inspecting outputs and ignores the precomputed signed
tx.net_amount, causing outgoing amounts to be dropped and incorrect clamping;
instead, when address_affected is true apply tx.net_amount directly to pending
or pending_instant (depending on tx.is_instant_send), accumulate signed values
throughout, and perform unsigned clamping only once at the end before
constructing MempoolBalance; also update the docstring to state that
MempoolBalance represents the net pending effect for the address.
```rust
tracing::info!(
    "✅ Initial sync requests sent! Current state - Headers: {}, Filter headers: {}",
    result.header_height,
    result.filter_header_height
);
tracing::info!("📊 Actual sync will complete asynchronously through monitoring loop");
```
Misleading log message.
“Initial sync requests sent!” isn’t true here—no requests are sent in sync_to_tip; monitoring loop does that. Reword to avoid confusion.
Example: “Prepared initial sync state; requests will be sent by the monitoring loop.”
🤖 Prompt for AI Agents
In dash-spv/src/client/sync_coordinator.rs around lines 55 to 61, the log
message "✅ Initial sync requests sent! ..." is misleading because sync_to_tip
does not send requests; the monitoring loop does. Update the tracing::info
message to indicate setup/preparation only (e.g., "Prepared initial sync state;
requests will be sent by the monitoring loop") and keep the subsequent info
about async completion unchanged so logs accurately reflect that requests are
sent later by the monitor.
```rust
let handler_result = {
    let mut storage = self.storage.lock().await;

    // Create a MessageHandler instance with all required parameters
    let mut handler = MessageHandler::new(
        &mut self.sync_manager,
        &mut *storage,
        &mut self.network,
        &self.config,
        &self.stats,
        &self.block_processor_tx,
        &self.mempool_filter,
        &self.mempool_state,
        &self.event_tx,
    );

    // Delegate message handling to the MessageHandler
    handler.handle_network_message(message.clone()).await
};
```
Don’t hold the storage mutex across an .await inside message handling.
storage is locked, then handler.handle_network_message(...).await runs, potentially awaiting network I/O while holding the lock. This can serialize unrelated storage users and risks deadlocks.
Refactor MessageHandler to:
- Build a plan without holding `&mut S`.
- Acquire `&mut S` in short, non-async apply steps.
Or split handling: parse/validate (no lock) -> minimal locked writes.
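A self-contained sketch of the lock-scoping pattern (the types are stand-ins; the point is that awaits happen outside the mutex guard and the guard is held only for a short synchronous apply step):

```rust
use std::sync::Arc;
use tokio::sync::Mutex;

async fn handle_message(storage: Arc<Mutex<Vec<u64>>>, message: u64) {
    // 1. Parse/validate and plan the write without touching storage.
    let planned_write = message.wrapping_mul(2);

    // 2. Any network I/O happens here, before the lock is taken.
    tokio::task::yield_now().await;

    // 3. Take the lock, apply synchronously, drop it before any further awaits.
    {
        let mut guard = storage.lock().await;
        guard.push(planned_write);
    } // guard dropped; other storage users can proceed
}
```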
```rust
// Get the actual peer ID from the network manager
let peer_id = network.get_last_message_peer_id().await;
self.handle_headers2_message(headers2, peer_id, network, storage).await?;
}
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Peer attribution may race; consider passing peer id/addr with the message
Relying on get_last_message_peer_id/addr can mismatch under concurrency if multiple messages arrive. Prefer delivering peer identity alongside the message from the network layer.
Also applies to: 133-135, 510-516, 564-576
🏁 Script executed:
#!/bin/bash
# Inspect how peer identity is plumbed into message delivery.
rg -nP -C2 '\bget_last_message_peer_(id|addr)\b'
rg -nP -C3 'handle_message\s*\(' dash-spv/src/network

Length of output: 10864
The search results from the script execution will help determine the scope and feasibility of the refactor. Let me generate the verification now with a careful analysis.
Based on my analysis of the code and architectural patterns, here is my assessment:
The race condition concern is architecturally valid
Shared mutable state that is read after the fact invites races; the current pattern of calling get_last_message_peer_id() after receiving a message creates a time-of-check-time-of-use (TOCTOU) window.
However, the practical impact depends on:
- Whether NetworkMessage enum can be modified – If it's part of an external protocol contract, adding peer context may not be feasible.
- Concurrency model – If messages from different peers are processed sequentially (not concurrently), the race is theoretical.
- Usage pattern – The flagged instances show mixed patterns (some only log, others use peer for functional logic).
The suggestion is sound for critical paths where peer identity determines routing/validation logic (lines 85-86, 133-134). For diagnostics-only logging (lines 510, 564), the urgency is lower.
🏁 Script executed:
#!/bin/bash
# Find NetworkMessage enum definition and handlers that use get_last_message_peer_id
fd -t f '\.(rs)$' dash-spv/src --exec grep -l 'enum NetworkMessage' {} \;
# Show the definition
rg -n -A 30 'enum NetworkMessage' dash-spv/src

Length of output: 152
🏁 Script executed:
#!/bin/bash
# Find NetworkMessage enum definition
rg -n -A 40 'enum NetworkMessage' dash-spv/src
# Find where messages are created/dispatched with peer context
rg -n -B5 -A5 'NetworkMessage::' dash-spv/src/sync/sequential/message_handlers.rs | head -100

Length of output: 3807
🏁 Script executed:
#!/bin/bash
# Find the actual NetworkMessage enum definition
rg -n 'pub enum NetworkMessage' dash-spv/src
# Show the full enum definition
rg -n -A 50 'pub enum NetworkMessage' dash-spv/src
# Find where handle_sync_message is invoked
rg -n 'handle_sync_message' dash-spv/src -B3 -A3

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Broader search for NetworkMessage enum (without pub restriction)
rg -n 'enum NetworkMessage' dash-spv/src
# Find where NetworkMessage is imported/used
rg -n 'use.*NetworkMessage' dash-spv/src | head -20
# Check types module
rg -n 'NetworkMessage' dash-spv/src/types.rs | head -30
# Find the definition by looking at where it's constructed
rg -n 'NetworkMessage::(Block|Headers|Headers2)' dash-spv/src --type=rs | head -20

Length of output: 1428
🏁 Script executed:
#!/bin/bash
# Find where handle_sync_message is called
rg -n 'handle_sync_message' dash-spv/src -B5 -A2
# Find the message dispatch/reception code flow
rg -n 'NetworkMessage' dash-spv/src/network/multi_peer.rs -B2 -A2 | head -60
# Check if there's a correlation mechanism between message and peer
rg -n 'last_message_peer' dash-spv/src/network/multi_peer.rs -B3 -A3 | head -80

Length of output: 6411
🏁 Script executed:
#!/bin/bash
# Find where messages are dispatched to handlers after being received from the channel
rg -n -A20 'Ok(Some(\(addr, msg\)))' dash-spv/src/network/multi_peer.rs
# Look for where handle_sync_message or similar is called with just the message
rg -n 'handle_sync_message\|handle_message' dash-spv/src/network/multi_peer.rs -B5 -A5

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look at code after line 1065 in multi_peer.rs to see message dispatch
rg -n -A 30 'Store the last message peer' dash-spv/src/network/multi_peer.rs
# Find who calls receive_message on the network
rg -n 'receive_message\|\.recv\(' dash-spv/src -B3 -A3 | head -80

Length of output: 1303
🏁 Script executed:
#!/bin/bash
# Find all call sites of receive_message
rg -n '\.receive_message\(\)' dash-spv/src -B3 -A5
# Check the trait definition for receive_message
rg -n 'fn receive_message' dash-spv/src -B2 -A3

Length of output: 4990
Peer attribution race condition confirmed; refactor required for functional logic paths
The review comment is correct. The message channel in dash-spv/src/network/multi_peer.rs receives messages as (SocketAddr, NetworkMessage) tuples (line 1061), but the peer address is stored separately in a Mutex<Option<SocketAddr>> while only the message is returned from receive_message(). Handlers then query this Mutex later via get_last_message_peer_id(), creating a time-of-check-time-of-use (TOCTOU) window where another message from a different peer could update the Mutex before the handler reads it.
This is a real issue for lines 84-87 and 133-135 where peer_id is used for functional logic in handle_headers2_message(), as peer misattribution could cause incorrect header processing. Lines 510-516 and 564-576 are lower priority since they use peer context only for diagnostics/logging.
The fix is feasible: modify the receive_message() trait method return type to bundle peer identity with the message (e.g., Option<(PeerId, NetworkMessage)>), eliminating the race by keeping them together from reception through handling.
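A sketch of the suggested trait change, using placeholder types (PeerId as a SocketAddr alias and the NetworkMessage stub below are assumptions; substitute the crate's real peer identifier and message type):

use async_trait::async_trait;
use std::net::SocketAddr;

type PeerId = SocketAddr; // placeholder for the crate's peer identifier

// Placeholder message type for the sketch.
enum NetworkMessage {
    Headers2(Vec<Vec<u8>>),
    Other,
}

#[async_trait]
trait NetworkManager {
    // Before: `async fn receive_message(&mut self) -> Option<NetworkMessage>`
    // plus a separate get_last_message_peer_id() lookup, which is racy.
    //
    // After: peer identity travels with the message, so handlers never read
    // a shared "last peer" slot that another message may have overwritten.
    async fn receive_message(&mut self) -> Option<(PeerId, NetworkMessage)>;
}

Call sites then destructure the returned tuple and pass peer_id straight into handle_headers2_message() and friends, instead of querying get_last_message_peer_id() after the fact.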
// Calculate blockchain height before borrowing self.current_phase
let blockchain_height = self.get_blockchain_height_from_storage(storage).await.unwrap_or(0);

// Update phase state and check if we need to transition
let should_transition = if let SyncPhase::DownloadingHeaders {
    current_height,
Don’t mask storage errors when deriving blockchain height
Using unwrap_or(0) hides real failures and can skew phase logic. Propagate the error so callers can react.
Apply:
- let blockchain_height = self.get_blockchain_height_from_storage(storage).await.unwrap_or(0);
+ let blockchain_height = self
+ .get_blockchain_height_from_storage(storage)
+ .await?;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Calculate blockchain height before borrowing self.current_phase
let blockchain_height = self
    .get_blockchain_height_from_storage(storage)
    .await?;

// Update phase state and check if we need to transition
let should_transition = if let SyncPhase::DownloadingHeaders {
    current_height,
🤖 Prompt for AI Agents
In dash-spv/src/sync/sequential/message_handlers.rs around lines 309 to 314, the
code currently calls
self.get_blockchain_height_from_storage(storage).await.unwrap_or(0) which masks
real storage errors; change this to propagate the error instead of defaulting to
0 (e.g., use the ? operator or map_err to convert into the function's Result
error type) so callers receive the actual failure; ensure the surrounding
function signature and return paths support returning a Result and adjust
control flow to handle the propagated error rather than proceeding with a bogus
height.
// Calculate blockchain height before borrowing self.current_phase
let blockchain_height = self.get_blockchain_height_from_storage(storage).await.unwrap_or(0);
Same here: avoid unwrap_or(0) on storage height
Return the actual error; 0 is a misleading sentinel.
- let blockchain_height = self.get_blockchain_height_from_storage(storage).await.unwrap_or(0);
+ let blockchain_height = self
+ .get_blockchain_height_from_storage(storage)
+ .await?;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Calculate blockchain height before borrowing self.current_phase
let blockchain_height = self
    .get_blockchain_height_from_storage(storage)
    .await?;
🤖 Prompt for AI Agents
In dash-spv/src/sync/sequential/message_handlers.rs around lines 355 to 357,
avoid using unwrap_or(0) when reading blockchain height from storage because
returning 0 hides the real error; instead propagate the storage error (e.g., use
the ? operator or map_err to convert it to the handler's error type) so the
caller receives the actual failure, and adjust the function signature/return
type if necessary to return Result rather than swallowing the error.
if filter_header_tip > 0 {
    // Download all filters for complete blockchain history
    // This ensures the wallet can find transactions from any point in history
    let start_height = self.header_sync.get_sync_base_height().max(1);
    let count = filter_header_tip - start_height + 1;

    tracing::info!(
        "Starting filter download from height {} to {} ({} filters)",
        start_height,
        filter_header_tip,
        count
    );

    // Update the phase to track the expected total
    if let SyncPhase::DownloadingFilters {
        total_filters,
        ..
    } = &mut self.current_phase
    {
        *total_filters = count;
    }
Fix u32 underflow when computing filter count.
If start_height > filter_header_tip, filter_header_tip - start_height + 1 underflows (panic in debug, wrap in release). Use saturating math and early-exit when nothing to fetch.
- if filter_header_tip > 0 {
- // Download all filters for complete blockchain history
- // This ensures the wallet can find transactions from any point in history
- let start_height = self.header_sync.get_sync_base_height().max(1);
- let count = filter_header_tip - start_height + 1;
+ if filter_header_tip > 0 {
+ // Calculate safe range (avoid underflow)
+ let base = self.header_sync.get_sync_base_height();
+ let start_height = base.min(filter_header_tip);
+ let count = filter_header_tip
+ .saturating_sub(start_height)
+ .saturating_add(1);
+ if count == 0 {
+ self.transition_to_next_phase(
+ storage,
+ network,
+ "No filters to download",
+ ).await?;
+ return Ok(());
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if filter_header_tip > 0 {
    // Calculate safe range (avoid underflow)
    let base = self.header_sync.get_sync_base_height();
    let start_height = base.min(filter_header_tip);
    let count = filter_header_tip
        .saturating_sub(start_height)
        .saturating_add(1);
    if count == 0 {
        self.transition_to_next_phase(
            storage,
            network,
            "No filters to download",
        ).await?;
        return Ok(());
    }

    tracing::info!(
        "Starting filter download from height {} to {} ({} filters)",
        start_height,
        filter_header_tip,
        count
    );

    // Update the phase to track the expected total
    if let SyncPhase::DownloadingFilters {
        total_filters,
        ..
    } = &mut self.current_phase
    {
        *total_filters = count;
    }
🤖 Prompt for AI Agents
In dash-spv/src/sync/sequential/phase_execution.rs around lines 152 to 173, the
computation `let count = filter_header_tip - start_height + 1` can underflow if
start_height > filter_header_tip; change the logic to avoid underflow by
computing the count with saturating/checked subtraction and early-exiting when
there are no filters to fetch: compute count as either
filter_header_tip.saturating_sub(start_height).saturating_add(1) or use
checked_sub to return/skip the download when None, and ensure you only update
the SyncPhase::DownloadingFilters.total_filters when count > 0 (or set it to 0
if you prefer explicit tracking) so the code never performs an underflowing
subtraction.
let get_headers = NetworkMessage::GetHeaders(
    dashcore::network::message_blockdata::GetHeadersMessage {
        version: dashcore::network::constants::PROTOCOL_VERSION,
        locator_hashes,
        stop_hash: BlockHash::from_raw_hash(dashcore::hashes::Hash::all_zeros()),
    },
);
🧩 Analysis chain
Build a proper block locator (not a single tip hash).
Using only the tip as locator reduces robustness across forks/reorgs. Construct a standard backtracked locator (tip, tip-1, tip-2, tip-4, …, genesis).
- let locator_hashes = match base_hash {
- Some(hash) => {
- tracing::info!("📍 Using tip hash as locator: {}", hash);
- vec![hash]
- }
- None => {
- tracing::info!("📍 No headers found in storage, using empty locator for initial sync");
- Vec::new()
- }
- };
+ let locator_hashes = match base_hash {
+ Some(tip) => self.build_block_locator(storage, tip).await?,
+ None => Vec::new(),
+ };
If build_block_locator doesn’t exist here, lift the existing locator builder from header_sync or add a small helper in this impl.
Also applies to: 41-55
🏁 Script executed:
#!/bin/bash
# Find an existing block locator builder
rg -nP --type=rust -C2 '\b(build_)?block(_)?locator\b'

Length of output: 1620
I need to examine the actual code in post_sync.rs and verify whether headers.rs has the proper multi-hash locator logic or also uses a single-hash approach.
🏁 Script executed:
#!/bin/bash
# Retrieve post_sync.rs lines 41-66 and headers.rs lines 370-395
echo "=== post_sync.rs lines 41-66 ==="
sed -n '41,66p' dash-spv/src/sync/sequential/post_sync.rs
echo -e "\n=== headers.rs lines 370-395 ==="
sed -n '370,395p' dash-spv/src/sync/headers.rs
echo -e "\n=== Search for any fn containing 'locator' ==="
rg -nP --type=rust '^\s*(?:async\s+)?fn\s+\w*locator' dash-spv/

Length of output: 2590
Add a proper exponential backoff block locator helper; single-hash locators reduce sync robustness across reorgs.
The current code uses only the tip hash as the locator (vec![hash]), which is insufficient per SPV protocol. A standard locator requires exponential backtracking: [tip, tip-1, tip-2, tip-4, …, genesis]. Neither post_sync.rs nor headers.rs contains a multi-hash locator builder; you must create one.
Required locations:
- dash-spv/src/sync/sequential/post_sync.rs lines 41-55 and 59-66: replace the single-hash logic with a call to the new helper
- dash-spv/src/sync/headers.rs lines 370-390: uses the same single-hash pattern and should call the new helper
Create a helper method (or utility function) that queries the header storage, walks backwards with exponential intervals (1, 2, 4, 8, …), and returns the properly constructed locator vector. Then update both call sites to use it.
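A minimal sketch of such a helper under assumed names (HeaderStore and get_header_hash_at are hypothetical stand-ins for whatever the storage layer actually exposes, and BlockHash is reduced to a byte array for the sketch):

type BlockHash = [u8; 32]; // stand-in for dashcore::BlockHash in this sketch

#[async_trait::async_trait]
trait HeaderStore {
    type Error;
    async fn get_header_hash_at(&self, height: u32) -> Result<Option<BlockHash>, Self::Error>;
}

// Walk back from the tip, first one height at a time, then doubling the
// stride after the first 10 entries, always ending at genesis.
async fn build_exponential_block_locator<S: HeaderStore>(
    storage: &S,
    tip_height: u32,
) -> Result<Vec<BlockHash>, S::Error> {
    let mut locator = Vec::new();
    let mut step: u32 = 1;
    let mut height = tip_height;
    loop {
        // Skip heights the store cannot resolve instead of failing the sync.
        if let Some(hash) = storage.get_header_hash_at(height).await? {
            locator.push(hash);
        }
        if height == 0 {
            break; // genesis reached and included
        }
        if locator.len() >= 10 {
            step = step.saturating_mul(2);
        }
        height = height.saturating_sub(step);
    }
    Ok(locator)
}

Both GetHeaders call sites can then pass the returned vector as locator_hashes unchanged.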
🤖 Prompt for AI Agents
In dash-spv/src/sync/sequential/post_sync.rs lines 41-55 and 59-66 and
dash-spv/src/sync/headers.rs lines 370-390, the code currently builds a locator
using a single tip hash (vec![hash]) which is not robust; create a new helper
function (e.g., build_exponential_block_locator) in dash-spv/src/sync/headers.rs
(or a shared util module) that: queries the header storage for the current tip,
walks backwards producing hashes at exponential steps (1,2,4,8,...) until
genesis or headers exhausted, and returns Vec<BlockHash> starting with tip and
ending with genesis; make it return the same hash type used by GetHeaders and
replace the single-hash construction at both call sites with a call to this
helper; ensure the helper handles missing headers gracefully and is
unit-testable and public/internal as needed for the two callers.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Did some high-level testing and noticed no regressions.
This is purely a refactoring of the large files in the codebase. It was done with Claude Sonnet 4.5.