feat: validate cfilter hash against cfheader chain #149
Conversation
- Compute BlockFilter header from received cfilter and compare to stored filter header
- Reject cfilters whose computed header mismatches expected value
- Apply verification in both initial sync and post-sync filter handling
Walkthrough

Adds header-based verification for compact filters before processing or storing them, integrates the verification into sequential sync handlers, and adjusts the FFI CLI sync progress polling to continue waiting when progress data is temporarily unavailable.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Peer
    participant Seq as SequentialSync
    participant FSM as FilterSyncManager
    participant Store as Storage
    Peer->>Seq: CFilter(block_hash, filter_data)
    Seq->>Store: get_height(block_hash)
    alt height found
        Seq->>FSM: verify_cfilter_against_headers(filter_data, height, Store)
        FSM->>Store: load CFHeader(height-1), CFHeader(height)
        Store-->>FSM: prev_header, expected_header
        FSM-->>Seq: true/false
        alt verified (true)
            Seq->>Seq: proceed with wallet checks / processing
            Seq->>Store: persist filter (as applicable)
        else mismatch (false)
            Seq->>Seq: log warning and return early
        end
    else height missing
        Seq->>Seq: handle as existing logic (error/skip)
    end
```

```mermaid
sequenceDiagram
    autonumber
    participant CLI as ffi_cli
    loop poll every 300ms
        CLI->>CLI: fetch progress pointer
        alt pointer is null
            Note over CLI: Continue polling (no early exit)
        else pointer valid
            CLI->>CLI: process progress
        end
    end
```
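The verification step in the first diagram hinges on BIP 157's filter-header chain: each filter header commits to the double-SHA256 hash of the filter bytes concatenated with the previous filter header, so tampering with any filter breaks every later header. The sketch below models only that chaining structure; it uses `DefaultHasher` as a stand-in for double-SHA256, and the function names are illustrative (the real check in this PR goes through `BlockFilter::filter_header`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for double-SHA256: this models only the chaining, not the real hash.
fn toy_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

// header_n = H(hash(filter_n) || header_{n-1}): each header commits to the
// filter bytes and to the entire chain before it.
fn filter_header(filter_bytes: &[u8], prev_header: u64) -> u64 {
    let filter_hash = toy_hash(filter_bytes);
    let mut buf = filter_hash.to_le_bytes().to_vec();
    buf.extend_from_slice(&prev_header.to_le_bytes());
    toy_hash(&buf)
}

// The check this PR adds: recompute the header from the received filter
// bytes and compare it against the header already stored for that height.
fn verify_cfilter(filter_bytes: &[u8], prev_header: u64, expected: u64) -> bool {
    filter_header(filter_bytes, prev_header) == expected
}

fn main() {
    let prev = 0u64; // the genesis filter header is all zeros in BIP 157
    let expected = filter_header(b"filter-bytes", prev);
    assert!(verify_cfilter(b"filter-bytes", prev, expected));
    assert!(!verify_cfilter(b"tampered", prev, expected));
    println!("ok");
}
```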
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- dash-spv-ffi/src/bin/ffi_cli.rs (0 hunks)
- dash-spv/src/sync/filters.rs (2 hunks)
- dash-spv/src/sync/sequential/mod.rs (2 hunks)
💤 Files with no reviewable changes (1)
- dash-spv-ffi/src/bin/ffi_cli.rs
🧰 Additional context used
📓 Path-based instructions (3)
dash-spv/**/*.rs
📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)
dash-spv/**/*.rs: Enforce Rust formatting via `cargo fmt --check` on all Rust source files
All code must be clippy-clean: run `cargo clippy --all-targets --all-features -- -D warnings`
Files:
- dash-spv/src/sync/filters.rs
- dash-spv/src/sync/sequential/mod.rs
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Use proper error types with thiserror and propagate errors appropriately
Use the tokio runtime for async operations in Rust
Use conditional compilation feature flags for optional features (#[cfg(feature = ...)])
Format Rust code with cargo fmt (and enforce via cargo fmt --check)
Run clippy with -D warnings and fix all lints
Adhere to MSRV Rust 1.89 (avoid features requiring newer compiler)
**/*.rs: Format Rust code with rustfmt (per rustfmt.toml); run cargo fmt --all before commits
Lint with clippy; treat warnings as errors in CI
Follow Rust naming: snake_case for functions/variables, UpperCamelCase for types/traits, SCREAMING_SNAKE_CASE for consts
Prefer async using tokio where applicable
Files:
- dash-spv/src/sync/filters.rs
- dash-spv/src/sync/sequential/mod.rs
**/src/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
**/src/**/*.rs: Each crate keeps sources in src/
Avoid unwrap()/expect() in library code; use proper error types (e.g., thiserror)
Place unit tests alongside code with #[cfg(test)]
Files:
- dash-spv/src/sync/filters.rs
- dash-spv/src/sync/sequential/mod.rs
🧬 Code graph analysis (1)
dash-spv/src/sync/sequential/mod.rs (1)
dash-spv/src/client/mod.rs (1)
storage(2397-2399)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (20)
- GitHub Check: RPC Tests (stable, true)
- GitHub Check: Integration Tests (stable, 22.1.3)
- GitHub Check: Strict Warnings and Clippy Checks
- GitHub Check: Core Components Tests
- GitHub Check: fuzz (dash_deserialize_block)
- GitHub Check: fuzz (dash_deserialize_script)
- GitHub Check: fuzz (hashes_sha256)
- GitHub Check: fuzz (hashes_ripemd160)
- GitHub Check: fuzz (hashes_sha512_256)
- GitHub Check: fuzz (hashes_sha1)
- GitHub Check: fuzz (dash_script_bytes_to_asm_fmt)
- GitHub Check: SPV Components Tests
- GitHub Check: fuzz (hashes_sha512)
- GitHub Check: fuzz (hashes_cbor)
- GitHub Check: fuzz (dash_outpoint_string)
- GitHub Check: fuzz (dash_deser_net_msg)
- GitHub Check: fuzz (dash_deserialize_witness)
- GitHub Check: fuzz (dash_deserialize_address)
- GitHub Check: fuzz (hashes_json)
- GitHub Check: fuzz (dash_deserialize_amount)
```rust
    }

    // Load previous and expected headers
    let prev_header = storage.get_filter_header(height - 1).await.map_err(|e| {
        SyncError::Storage(format!("Failed to load previous filter header: {}", e))
    })?;
    let expected_header = storage.get_filter_header(height).await.map_err(|e| {
        SyncError::Storage(format!("Failed to load expected filter header: {}", e))
    })?;

    let (Some(prev_header), Some(expected_header)) = (prev_header, expected_header) else {
        tracing::warn!(
            "Missing filter headers in storage for height {} (prev and/or expected)",
            height
        );
        return Ok(false);
    };

    // Compute the header from the received filter bytes and compare
    let filter = BlockFilter::new(filter_data);
    let computed_header = filter.filter_header(&prev_header);

    let matches = computed_header == expected_header;
    if !matches {
        tracing::error!(
            "CFilter header mismatch at height {}: computed={:?}, expected={:?}",
            height,
            computed_header,
            expected_header
        );
    }

    Ok(matches)
}
```
Checkpoint-start filters are rejected forever
When we sync from a checkpoint, the first filter range starts at `sync_base_height`. This code then tries to load the previous filter header (`height - 1`), but we never store headers below the checkpoint base. As a result `prev_header` is `None`, `verify_cfilter_against_headers` returns `Ok(false)`, and every filter at the base height is dropped. Because that height is never marked as received, the DownloadingFilters phase can't finish and the sync stalls. Please skip header-chain verification for heights at or below the checkpoint base (mirroring the CFHeader logic) instead of rejecting those filters.
```diff
     if height == 0 {
         tracing::debug!("Skipping cfilter verification at genesis height 0");
         return Ok(true);
     }
+    if self.sync_base_height > 0 && height <= self.sync_base_height {
+        tracing::debug!(
+            "Skipping cfilter verification at checkpoint base height {}",
+            height
+        );
+        return Ok(true);
+    }
+
     // Load previous and expected headers
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
    }

    if height == 0 {
        tracing::debug!("Skipping cfilter verification at genesis height 0");
        return Ok(true);
    }
    if self.sync_base_height > 0 && height <= self.sync_base_height {
        tracing::debug!(
            "Skipping cfilter verification at checkpoint base height {}",
            height
        );
        return Ok(true);
    }

    // Load previous and expected headers
    let prev_header = storage.get_filter_header(height - 1).await.map_err(|e| {
        SyncError::Storage(format!("Failed to load previous filter header: {}", e))
    })?;
    let expected_header = storage.get_filter_header(height).await.map_err(|e| {
        SyncError::Storage(format!("Failed to load expected filter header: {}", e))
    })?;

    let (Some(prev_header), Some(expected_header)) = (prev_header, expected_header) else {
        tracing::warn!(
            "Missing filter headers in storage for height {} (prev and/or expected)",
            height
        );
        return Ok(false);
    };

    // Compute the header from the received filter bytes and compare
    let filter = BlockFilter::new(filter_data);
    let computed_header = filter.filter_header(&prev_header);

    let matches = computed_header == expected_header;
    if !matches {
        tracing::error!(
            "CFilter header mismatch at height {}: computed={:?}, expected={:?}",
            height,
            computed_header,
            expected_header
        );
    }

    Ok(matches)
}
```
🤖 Prompt for AI Agents

```text
In dash-spv/src/sync/filters.rs around lines 118 to 151, the current logic
rejects filters at the checkpoint base because it attempts to load a previous
header that we never store below the checkpoint; update the function to skip
header-chain verification for heights at or below the checkpoint base (mirror
CFHeader logic) by checking the sync/checkpoint base value early and returning
Ok(true) for those heights so the base-range filters are accepted; otherwise
keep the normal previous-header lookup and verification flow and only log/error
when a real mismatch occurs.
```
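The gating condition from the suggestion can be isolated as a small predicate. `should_verify` below is a hypothetical helper (the actual fix inlines these checks at the top of `verify_cfilter_against_headers`), but it captures the decision table: skip at genesis and at or below the checkpoint base, verify everywhere else.

```rust
// Hypothetical helper mirroring the suggested fix; the real change inlines
// these checks rather than extracting a function.
fn should_verify(height: u32, sync_base_height: u32) -> bool {
    if height == 0 {
        return false; // genesis: no previous filter header exists
    }
    if sync_base_height > 0 && height <= sync_base_height {
        return false; // checkpoint sync: headers below the base were never stored
    }
    true
}

fn main() {
    assert!(!should_verify(0, 0)); // genesis is always skipped
    assert!(should_verify(1, 0)); // full sync from genesis: verify
    assert!(!should_verify(900_000, 900_000)); // at the checkpoint base: skip
    assert!(should_verify(900_001, 900_000)); // first height above the base: verify
    println!("ok");
}
```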
```rust
    // First, verify filter data matches expected filter header chain
    let height = storage
        .get_header_height_by_hash(&cfilter.block_hash)
        .await
        .map_err(|e| SyncError::Storage(format!("Failed to get filter block height: {}", e)))?
        .ok_or_else(|| {
            SyncError::Validation(format!(
                "Block height not found for cfilter block {}",
                cfilter.block_hash
            ))
        })?;

    let header_ok = self
        .filter_sync
        .verify_cfilter_against_headers(&cfilter.filter, height, &*storage)
        .await?;

    if !header_ok {
        tracing::warn!(
            "Rejecting CFilter for block {} at height {} due to header mismatch",
            cfilter.block_hash,
            height
        );
        return Ok(());
    }
```
Don’t leak filter requests on header mismatch
We bail out before calling `mark_filter_received`, so the outstanding request stays in the `FilterSyncManager`'s active set. That slot is never released, so the download phase can stall indefinitely once a single bad filter is seen. Please ensure we notify the filter sync manager (e.g., mark the request as failed and requeue it) before returning.
🤖 Prompt for AI Agents

```text
In dash-spv/src/sync/sequential/mod.rs around lines 1365-1389, we currently
return early on header mismatch without notifying the FilterSyncManager, leaking
the active request slot; before returning when header_ok is false, call the
filter sync manager API to mark this cfilter request as failed (or mark it
received with an error) so the manager can release the active slot and requeue
or drop the request, await that call and handle/log any error, then return;
ensure this notification happens on the same code path where we log the header
mismatch so the request is never left active.
```
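A minimal model of the slot leak the reviewer describes — the types and method names below are illustrative, not the real FilterSyncManager API — showing why every completion path, including rejection, must release the outstanding request:

```rust
use std::collections::HashSet;

// Minimal model of active-request bookkeeping; names are illustrative.
struct ActiveRequests {
    in_flight: HashSet<u32>, // heights with an outstanding CFilter request
}

impl ActiveRequests {
    fn new() -> Self {
        Self { in_flight: HashSet::new() }
    }
    fn start(&mut self, height: u32) {
        self.in_flight.insert(height);
    }
    /// Must be called on every completion path, success *and* rejection;
    /// otherwise the slot leaks and the download phase stalls.
    fn finish(&mut self, height: u32) {
        self.in_flight.remove(&height);
    }
    fn has_capacity(&self, max: usize) -> bool {
        self.in_flight.len() < max
    }
}

// Returns whether the filter was accepted; releases the slot either way.
fn handle_cfilter(reqs: &mut ActiveRequests, height: u32, header_ok: bool) -> bool {
    if !header_ok {
        // The fix: release the slot (and requeue elsewhere) before bailing out.
        reqs.finish(height);
        return false;
    }
    reqs.finish(height);
    true
}

fn main() {
    let mut reqs = ActiveRequests::new();
    reqs.start(100);
    assert!(!handle_cfilter(&mut reqs, 100, false)); // rejected filter
    assert!(reqs.has_capacity(1)); // slot released, sync can continue
    println!("ok");
}
```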
```rust
    let header_ok = self
        .filter_sync
        .verify_cfilter_against_headers(&cfilter.filter, height, &*storage)
        .await?;
    if !header_ok {
        tracing::warn!(
            "Rejecting post-sync CFilter for block {} at height {} due to header mismatch",
            cfilter.block_hash,
            height
        );
        return Ok(());
    }
```
Post-sync filters need a retry path on header mismatch
In the `FullySynced` path we drop the filter and return without re-requesting it, leaving that block without a stored filter (and the next header validation will continue to fail). We should treat this the same as the main sync path — clear the outstanding request and schedule a retry from another peer — instead of silently abandoning it.
🤖 Prompt for AI Agents

```text
In dash-spv/src/sync/sequential/mod.rs around lines 1993 to 2004, the
FullySynced branch currently drops a post-sync CFilter on header mismatch and
returns, leaving the block without a stored filter; instead replicate the main
sync failure handling: clear the outstanding request for this block/filter and
schedule a retry from another peer (i.e., invoke the same functions used in the
main sync path to remove the request and re-queue the fetch or peer-request for
this block), and log the retry; ensure the function returns after scheduling the
retry rather than silently abandoning the filter.
```
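The retry behaviour the reviewer asks for can be sketched as a small bounded queue. Everything here is an assumption for illustration (the real code would reuse the main sync path's request bookkeeping); the point is that a mismatch schedules a re-fetch instead of dropping the filter, with a retry budget so a persistently bad filter cannot loop forever:

```rust
use std::collections::VecDeque;

// Illustrative retry bookkeeping for the FullySynced path; the type and
// method names are assumptions, not the real FilterSyncManager API.
struct RetryQueue {
    pending: VecDeque<(u32, u8)>, // (height, attempts so far)
    max_attempts: u8,
}

impl RetryQueue {
    fn new(max_attempts: u8) -> Self {
        Self { pending: VecDeque::new(), max_attempts }
    }

    /// Requeue a filter fetch; returns false once the retry budget is spent.
    fn schedule_retry(&mut self, height: u32, attempts: u8) -> bool {
        if attempts >= self.max_attempts {
            return false; // bounded retries: don't loop forever on a bad peer
        }
        self.pending.push_back((height, attempts + 1));
        true
    }
}

// On mismatch, clear the outstanding request and schedule a re-fetch
// (ideally from a different peer) instead of silently dropping the filter.
fn on_post_sync_mismatch(queue: &mut RetryQueue, height: u32, attempts: u8) -> bool {
    queue.schedule_retry(height, attempts)
}

fn main() {
    let mut queue = RetryQueue::new(3);
    assert!(on_post_sync_mismatch(&mut queue, 2_000, 0)); // retry scheduled
    assert_eq!(queue.pending.front(), Some(&(2_000, 1)));
    assert!(!on_post_sync_mismatch(&mut queue, 2_000, 3)); // budget exhausted
    println!("ok");
}
```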
Summary by CodeRabbit
New Features
Bug Fixes