Conversation

@PastaPastaPasta
Member

@PastaPastaPasta PastaPastaPasta commented Sep 26, 2025

Summary by CodeRabbit

  • New Features

    • Added verification of compact filters against the header chain before processing or storing, improving sync integrity and preventing acceptance of mismatched data.
  • Bug Fixes

    • Improved sync progress polling to avoid premature termination when progress data is temporarily unavailable, leading to more reliable progress tracking and completion.

- Compute BlockFilter header from received cfilter and compare to stored filter header
- Reject cfilters whose computed header mismatches expected value
- Apply verification in both initial sync and post-sync filter handling
@coderabbitai
Contributor

coderabbitai bot commented Sep 26, 2025

Walkthrough

Adds header-based verification for compact filters before processing or storing them, integrates the verification into sequential sync handlers, and adjusts the FFI CLI sync progress polling to continue waiting when progress data is temporarily unavailable.

Changes

| Cohort / File(s) | Summary of Changes |
| --- | --- |
| **FFI CLI polling behavior**<br>`dash-spv-ffi/src/bin/ffi_cli.rs` | Removed early-exit on null progress pointer; the loop now continues sleeping/polling until progress becomes available or other conditions trigger exit. |
| **Filter verification logic**<br>`dash-spv/src/sync/filters.rs` | Added `verify_cfilter_against_headers` to compute and compare filter headers against stored CFHeaders; imported `BlockFilter` to support header computation; returns `Ok(true/false)` based on match and propagates storage/format errors. |
| **Sequential sync integration**<br>`dash-spv/src/sync/sequential/mod.rs` | Inserted header verification before processing/storing cfilters in `handle_cfilter_message` and `handle_post_sync_cfilter`; on mismatch, logs a warning and returns early without wallet checks or storage writes. |
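The FFI CLI polling change can be sketched as follows. This is an illustrative model, not the actual `ffi_cli` code; `poll_until_complete` and its percentage-based progress are assumptions for the example:

```rust
// Illustrative sketch of the revised polling loop: a temporarily missing
// progress value no longer terminates the loop early; we keep polling until
// progress reports completion or the iteration budget runs out.
fn poll_until_complete(
    mut fetch_progress: impl FnMut() -> Option<u32>,
    max_iters: u32,
) -> Option<u32> {
    for _ in 0..max_iters {
        match fetch_progress() {
            // Progress temporarily unavailable: keep waiting instead of exiting.
            None => continue,
            Some(pct) if pct >= 100 => return Some(pct),
            Some(_) => continue, // still syncing; poll again
        }
    }
    None // budget exhausted without completion
}
```

The real loop sleeps roughly 300ms between polls; the sleep is omitted here for brevity.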

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor Peer
  participant Seq as SequentialSync
  participant FSM as FilterSyncManager
  participant Store as Storage

  Peer->>Seq: CFilter(block_hash, filter_data)
  Seq->>Store: get_height(block_hash)
  alt height found
    Seq->>FSM: verify_cfilter_against_headers(filter_data, height, Store)
    FSM->>Store: load CFHeader(height-1), CFHeader(height)
    Store-->>FSM: prev_header, expected_header
    FSM-->>Seq: true/false
    alt verified (true)
      Seq->>Seq: proceed with wallet checks / processing
      Seq->>Store: persist filter (as applicable)
    else mismatch (false)
      Seq->>Seq: log warning and return early
    end
  else height missing
    Seq->>Seq: handle as existing logic (error/skip)
  end
```
```mermaid
sequenceDiagram
  autonumber
  participant CLI as ffi_cli
  loop poll every 300ms
    CLI->>CLI: fetch progress pointer
    alt pointer is null
      Note over CLI: Continue polling (no early exit)
    else pointer valid
      CLI->>CLI: process progress
    end
  end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • QuantumExplorer
  • ogabrielides

Poem

A nibble of bytes, a hop through the chain,
I sniff every filter for header-domain.
If hashes align, I twitch with delight—
If not, I thump: “No carrot tonight!”
I’ll wait in the loop, ears up, eyes bright—
Sync by sync, we make it right. 🥕🐇

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title Check | ✅ Passed | The title clearly summarizes the primary change of adding validation of cfilter hashes against the cfheader chain, is concise, free of extraneous detail, and accurately reflects the main feature introduced by this pull request. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%. |
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/validate-cfilter-hash-against-cfheader-chain


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ef7374d and f636fdf.

📒 Files selected for processing (3)
  • dash-spv-ffi/src/bin/ffi_cli.rs (0 hunks)
  • dash-spv/src/sync/filters.rs (2 hunks)
  • dash-spv/src/sync/sequential/mod.rs (2 hunks)
💤 Files with no reviewable changes (1)
  • dash-spv-ffi/src/bin/ffi_cli.rs
🧰 Additional context used
📓 Path-based instructions (3)
dash-spv/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

dash-spv/**/*.rs: Enforce Rust formatting via cargo fmt --check on all Rust source files
All code must be clippy-clean: run cargo clippy --all-targets --all-features -- -D warnings

Files:

  • dash-spv/src/sync/filters.rs
  • dash-spv/src/sync/sequential/mod.rs
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.rs: Use proper error types with thiserror and propagate errors appropriately
Use the tokio runtime for async operations in Rust
Use conditional compilation feature flags for optional features (#[cfg(feature = ...)])
Format Rust code with cargo fmt (and enforce via cargo fmt --check)
Run clippy with -D warnings and fix all lints
Adhere to MSRV Rust 1.89 (avoid features requiring newer compiler)

**/*.rs: Format Rust code with rustfmt (per rustfmt.toml); run cargo fmt --all before commits
Lint with clippy; treat warnings as errors in CI
Follow Rust naming: snake_case for functions/variables, UpperCamelCase for types/traits, SCREAMING_SNAKE_CASE for consts
Prefer async using tokio where applicable

Files:

  • dash-spv/src/sync/filters.rs
  • dash-spv/src/sync/sequential/mod.rs
**/src/**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

**/src/**/*.rs: Each crate keeps sources in src/
Avoid unwrap()/expect() in library code; use proper error types (e.g., thiserror)
Place unit tests alongside code with #[cfg(test)]

Files:

  • dash-spv/src/sync/filters.rs
  • dash-spv/src/sync/sequential/mod.rs
🧬 Code graph analysis (1)
dash-spv/src/sync/sequential/mod.rs (1)
dash-spv/src/client/mod.rs (1)
  • storage (2397-2399)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (20)
  • GitHub Check: RPC Tests (stable, true)
  • GitHub Check: Integration Tests (stable, 22.1.3)
  • GitHub Check: Strict Warnings and Clippy Checks
  • GitHub Check: Core Components Tests
  • GitHub Check: fuzz (dash_deserialize_block)
  • GitHub Check: fuzz (dash_deserialize_script)
  • GitHub Check: fuzz (hashes_sha256)
  • GitHub Check: fuzz (hashes_ripemd160)
  • GitHub Check: fuzz (hashes_sha512_256)
  • GitHub Check: fuzz (hashes_sha1)
  • GitHub Check: fuzz (dash_script_bytes_to_asm_fmt)
  • GitHub Check: SPV Components Tests
  • GitHub Check: fuzz (hashes_sha512)
  • GitHub Check: fuzz (hashes_cbor)
  • GitHub Check: fuzz (dash_outpoint_string)
  • GitHub Check: fuzz (dash_deser_net_msg)
  • GitHub Check: fuzz (dash_deserialize_witness)
  • GitHub Check: fuzz (dash_deserialize_address)
  • GitHub Check: fuzz (hashes_json)
  • GitHub Check: fuzz (dash_deserialize_amount)

Comment on lines +118 to +151
```rust
        }

        // Load previous and expected headers
        let prev_header = storage.get_filter_header(height - 1).await.map_err(|e| {
            SyncError::Storage(format!("Failed to load previous filter header: {}", e))
        })?;
        let expected_header = storage.get_filter_header(height).await.map_err(|e| {
            SyncError::Storage(format!("Failed to load expected filter header: {}", e))
        })?;

        let (Some(prev_header), Some(expected_header)) = (prev_header, expected_header) else {
            tracing::warn!(
                "Missing filter headers in storage for height {} (prev and/or expected)",
                height
            );
            return Ok(false);
        };

        // Compute the header from the received filter bytes and compare
        let filter = BlockFilter::new(filter_data);
        let computed_header = filter.filter_header(&prev_header);

        let matches = computed_header == expected_header;
        if !matches {
            tracing::error!(
                "CFilter header mismatch at height {}: computed={:?}, expected={:?}",
                height,
                computed_header,
                expected_header
            );
        }

        Ok(matches)
    }
```
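The chained construction this function checks follows BIP 157: each filter header commits to the filter's hash and to the previous filter header, so a verified header pins every filter below it. A toy model of that chain, using `DefaultHasher` as a stand-in for the real double-SHA256 (purely illustrative, not the production hashing):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Stand-in for double-SHA256; illustrative only, not the BIP 157 hash.
fn toy_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    h.write(data);
    h.finish()
}

// BIP 157 shape: header_n = H(filter_hash_n || header_{n-1})
fn filter_header(filter_data: &[u8], prev_header: u64) -> u64 {
    let filter_hash = toy_hash(filter_data);
    let mut buf = filter_hash.to_le_bytes().to_vec();
    buf.extend_from_slice(&prev_header.to_le_bytes());
    toy_hash(&buf)
}

// Mirrors verify_cfilter_against_headers: recompute from the received
// filter bytes plus the stored previous header, then compare to the
// stored expected header.
fn verify(filter_data: &[u8], prev_header: u64, expected_header: u64) -> bool {
    filter_header(filter_data, prev_header) == expected_header
}
```

Any tampering with the filter bytes changes the recomputed header, so a mismatch against the already-validated CFHeader chain rejects the filter.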

⚠️ Potential issue | 🔴 Critical

Checkpoint-start filters are rejected forever

When we sync from a checkpoint the first filter range starts at sync_base_height. This code then tries to load the previous filter header (height - 1), but we never store headers below the checkpoint base. As a result prev_header is None, verify_cfilter_against_headers returns Ok(false), and every filter at the base height is dropped. Because that height never marks as received, the DownloadingFilters phase can’t finish and the sync stalls. Please skip header-chain verification for heights at or below the checkpoint base (mirroring the CFHeader logic) instead of rejecting those filters.

```diff
         if height == 0 {
             tracing::debug!("Skipping cfilter verification at genesis height 0");
             return Ok(true);
         }

+        if self.sync_base_height > 0 && height <= self.sync_base_height {
+            tracing::debug!(
+                "Skipping cfilter verification at checkpoint base height {}",
+                height
+            );
+            return Ok(true);
+        }
+
         // Load previous and expected headers
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```rust
        if height == 0 {
            tracing::debug!("Skipping cfilter verification at genesis height 0");
            return Ok(true);
        }

        if self.sync_base_height > 0 && height <= self.sync_base_height {
            tracing::debug!(
                "Skipping cfilter verification at checkpoint base height {}",
                height
            );
            return Ok(true);
        }

        // Load previous and expected headers
        let prev_header = storage.get_filter_header(height - 1).await.map_err(|e| {
            SyncError::Storage(format!("Failed to load previous filter header: {}", e))
        })?;
        let expected_header = storage.get_filter_header(height).await.map_err(|e| {
            SyncError::Storage(format!("Failed to load expected filter header: {}", e))
        })?;

        let (Some(prev_header), Some(expected_header)) = (prev_header, expected_header) else {
            tracing::warn!(
                "Missing filter headers in storage for height {} (prev and/or expected)",
                height
            );
            return Ok(false);
        };

        // Compute the header from the received filter bytes and compare
        let filter = BlockFilter::new(filter_data);
        let computed_header = filter.filter_header(&prev_header);

        let matches = computed_header == expected_header;
        if !matches {
            tracing::error!(
                "CFilter header mismatch at height {}: computed={:?}, expected={:?}",
                height,
                computed_header,
                expected_header
            );
        }

        Ok(matches)
    }
```
🤖 Prompt for AI Agents
In dash-spv/src/sync/filters.rs around lines 118 to 151, the current logic
rejects filters at the checkpoint base because it attempts to load a previous
header that we never store below the checkpoint; update the function to skip
header-chain verification for heights at or below the checkpoint base (mirror
CFHeader logic) by checking the sync/checkpoint base value early and returning
Ok(true) for those heights so the base-range filters are accepted; otherwise
keep the normal previous-header lookup and verification flow and only log/error
when a real mismatch occurs.

Comment on lines +1365 to +1389
```rust
        // First, verify filter data matches expected filter header chain
        let height = storage
            .get_header_height_by_hash(&cfilter.block_hash)
            .await
            .map_err(|e| SyncError::Storage(format!("Failed to get filter block height: {}", e)))?
            .ok_or_else(|| {
                SyncError::Validation(format!(
                    "Block height not found for cfilter block {}",
                    cfilter.block_hash
                ))
            })?;

        let header_ok = self
            .filter_sync
            .verify_cfilter_against_headers(&cfilter.filter, height, &*storage)
            .await?;

        if !header_ok {
            tracing::warn!(
                "Rejecting CFilter for block {} at height {} due to header mismatch",
                cfilter.block_hash,
                height
            );
            return Ok(());
        }
```

⚠️ Potential issue | 🔴 Critical

Don’t leak filter requests on header mismatch

We bail out before calling mark_filter_received, so the outstanding request stays in the FilterSyncManager’s active set. That slot is never released, so the download phase can stall indefinitely once a single bad filter is seen. Please ensure we notify the filter sync manager (e.g., mark the request as failed and requeue it) before returning.

🤖 Prompt for AI Agents
In dash-spv/src/sync/sequential/mod.rs around lines 1365-1389, we currently
return early on header mismatch without notifying the FilterSyncManager, leaking
the active request slot; before returning when header_ok is false, call the
filter sync manager API to mark this cfilter request as failed (or mark it
received with an error) so the manager can release the active slot and requeue
or drop the request, await that call and handle/log any error, then return;
ensure this notification happens on the same code path where we log the header
mismatch so the request is never left active.
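One way to avoid the leak, sketched with hypothetical names (`mark_filter_failed` and the `active`/`retry_queue` fields are illustrative, not the actual `FilterSyncManager` API): release the active slot and requeue the height before the early return.

```rust
use std::collections::{HashSet, VecDeque};

// Hypothetical slice of FilterSyncManager state; field names are illustrative.
struct FilterSyncManager {
    active: HashSet<u32>,       // heights with an outstanding cfilter request
    retry_queue: VecDeque<u32>, // heights to re-request from another peer
}

impl FilterSyncManager {
    // On a header mismatch, release the active request slot and requeue the
    // height so the download phase cannot stall on a single bad filter.
    fn mark_filter_failed(&mut self, height: u32) {
        if self.active.remove(&height) {
            self.retry_queue.push_back(height);
        }
    }
}
```

Both the initial-sync and post-sync mismatch paths could route through a helper like this before returning, so neither leaves a request permanently active.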

Comment on lines +1993 to +2004
```rust
        let header_ok = self
            .filter_sync
            .verify_cfilter_against_headers(&cfilter.filter, height, &*storage)
            .await?;
        if !header_ok {
            tracing::warn!(
                "Rejecting post-sync CFilter for block {} at height {} due to header mismatch",
                cfilter.block_hash,
                height
            );
            return Ok(());
        }
```

⚠️ Potential issue | 🟠 Major

Post-sync filters need a retry path on header mismatch

In the FullySynced path we drop the filter and return without re-requesting it, leaving that block without a stored filter (and the next header validation will continue to fail). We should treat this the same as the main sync path—clear the outstanding request and schedule a retry from another peer—instead of silently abandoning it.

🤖 Prompt for AI Agents
In dash-spv/src/sync/sequential/mod.rs around lines 1993 to 2004, the
FullySynced branch currently drops a post-sync CFilter on header mismatch and
returns, leaving the block without a stored filter; instead replicate the main
sync failure handling: clear the outstanding request for this block/filter and
schedule a retry from another peer (i.e., invoke the same functions used in the
main sync path to remove the request and re-queue the fetch or peer-request for
this block), and log the retry; ensure the function returns after scheduling the
retry rather than silently abandoning the filter.

@QuantumExplorer QuantumExplorer merged commit fe99e22 into v0.40-dev Sep 27, 2025
28 of 29 checks passed
@QuantumExplorer QuantumExplorer deleted the feat/validate-cfilter-hash-against-cfheader-chain branch October 20, 2025 08:28