feat: balance calculations #144
Conversation
- Eliminate duplicate ChainState initialization by sharing a single Arc&lt;RwLock&lt;ChainState&gt;&gt; across client and header sync.
- HeaderSyncManagerWithReorg now holds shared state and no longer constructs its own ChainState.
- SequentialSyncManager::new signature updated to accept the shared ChainState and plumb it through.
- Client passes its ChainState to sync manager and removes the client-side header copy path.
- Read/write updated to use RwLock guards; added lightweight cached checkpoint flags in header sync.

This removes the duplicate "Initialized ChainState" logs and unifies state as a single source of truth.
- total_headers_synced should be base + headers_len - 1 when headers exist, or base/0 when empty.
- Prevents overstating height by 1 in both checkpoint and normal sync paths.
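For reference, a minimal stand-alone sketch of the corrected calculation; the function and parameter names are illustrative, not the actual fields in dash-spv:

```rust
/// Tip height implied by `headers_len` contiguous headers starting at `sync_base_height`.
fn total_headers_synced(sync_base_height: u32, headers_len: u32) -> u32 {
    if headers_len == 0 {
        sync_base_height // 0 when syncing from genesis with no headers yet
    } else {
        // The first stored header occupies the base height itself,
        // so the tip is base + (len - 1), not base + len.
        sync_base_height + headers_len - 1
    }
}

fn main() {
    assert_eq!(total_headers_synced(0, 0), 0);
    assert_eq!(total_headers_synced(1_000_000, 101), 1_000_100);
}
```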
- Introduced logging for wallet transactions, capturing net balance changes and transaction context (mempool or block).
- Implemented UTXO ingestion for outputs that pay to monitored addresses, ensuring accurate tracking of spendable funds.
- Removed UTXOs that are spent by transaction inputs, improving wallet state management.
- Added periodic logging of detected transactions and wallet balances to enhance monitoring capabilities.
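To make the ingest/evict flow concrete, here is a self-contained sketch using simplified types (plain tuples and byte vectors rather than the key-wallet/dashcore types); it illustrates the described behavior and is not the PR's implementation:

```rust
use std::collections::{HashMap, HashSet};

#[derive(Clone)]
struct Utxo {
    value: u64, // in duffs
    script_pubkey: Vec<u8>,
}

type OutPoint = (String, u32); // (txid, output index)

struct WalletUtxos {
    monitored_scripts: HashSet<Vec<u8>>,
    utxos: HashMap<OutPoint, Utxo>,
}

impl WalletUtxos {
    /// Applies one transaction: evicts UTXOs spent by its inputs, ingests outputs
    /// that pay to monitored scripts, and returns the net balance change.
    fn apply_tx(
        &mut self,
        txid: &str,
        inputs: &[OutPoint],
        outputs: &[(u64, Vec<u8>)],
    ) -> i64 {
        let mut net: i64 = 0;
        for prev in inputs {
            if let Some(spent) = self.utxos.remove(prev) {
                net -= spent.value as i64; // one of our UTXOs was spent
            }
        }
        for (vout, (value, script)) in outputs.iter().enumerate() {
            if self.monitored_scripts.contains(script) {
                self.utxos.insert(
                    (txid.to_string(), vout as u32),
                    Utxo { value: *value, script_pubkey: script.clone() },
                );
                net += *value as i64; // new spendable output for us
            }
        }
        net
    }
}
```

In the real code this update happens atomically together with recording the transaction and marking addresses as used, per the wallet-state guidance cited later in this review.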
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
key-wallet/Cargo.toml (1)
1-10: Add MSRV to Cargo manifest. Per repo guidelines, set rust-version = "1.89" in Cargo.toml.
Apply:
[package] name = "key-wallet" version = { workspace = true } authors = ["The Dash Core Developers"] edition = "2021" +rust-version = "1.89" description = "Key derivation and wallet functionality for Dash"key-wallet/src/transaction_checking/wallet_checker.rs (1)
147-148: Fix is_ours semantics.
is_ours: net_amount < 0 marks outgoing as “ours” and incoming as not; this is likely inverted. Use involvement or non‑zero net instead.

Apply:
-            is_ours: net_amount < 0,
+            // Transaction affects our wallet if net != 0
+            is_ours: net_amount != 0,

dash-spv/src/main.rs (1)
227-233: Remove hard‑coded mnemonic (secrets in source). Embedding a mnemonic violates security/compliance guidance. Read from env/CLI or generate at runtime.
Apply:
-    let wallet_id = wallet_manager.create_wallet_from_mnemonic(
-        "enemy check owner stumble unaware debris suffer peanut good fabric bleak outside",
-        "",
-        &[network],
-        None,
-        key_wallet::wallet::initialization::WalletAccountCreationOptions::default(),
-    )?;
+    let mnemonic = std::env::var("DASH_SPV_MNEMONIC")
+        .map_err(|_| "DASH_SPV_MNEMONIC not set")?;
+    let wallet_id = wallet_manager.create_wallet_from_mnemonic(
+        &mnemonic,
+        "",
+        &[network],
+        None,
+        key_wallet::wallet::initialization::WalletAccountCreationOptions::default(),
+    )?;

Optionally add a --mnemonic CLI flag instead. I can draft it if you want.

dash-spv/src/sync/headers_with_reorg.rs (2)
687-699: Fix TOCTOU when deriving checkpoint hash (prepare_sync, None tip case). Same double-lock pattern; take one read lock and use headers.first().
Apply:
-            if self.is_synced_from_checkpoint()
-                && !self.chain_state.read().await.headers.is_empty()
-            {
-                // We're syncing from a checkpoint and have the checkpoint header
-                let checkpoint_header = &self.chain_state.read().await.headers[0];
-                let checkpoint_hash = checkpoint_header.block_hash();
+            if self.is_synced_from_checkpoint() {
+                // We're syncing from a checkpoint; derive hash under a single read lock
+                let checkpoint_hash = {
+                    let cs = self.chain_state.read().await;
+                    cs.headers.first().map(|h| h.block_hash())
+                };
+                if let Some(checkpoint_hash) = checkpoint_hash {

and keep the existing body; close the new if-let accordingly.
746-764: Fix TOCTOU at checkpoint-height branch. Single read lock; avoid separate is_empty() + [0] indexing.
Apply:
-        if self.is_synced_from_checkpoint() && height == self.get_sync_base_height() {
+        if self.is_synced_from_checkpoint() && height == self.get_sync_base_height() {
             // We're at the checkpoint height - use the checkpoint hash from chain state
             tracing::info!(
                 "At checkpoint height {}. Chain state has {} headers",
                 height,
-                self.chain_state.read().await.headers.len()
+                self.chain_state.read().await.headers.len()
             );
-            // The checkpoint header should be the first (and possibly only) header
-            if !self.chain_state.read().await.headers.is_empty() {
-                let checkpoint_header = &self.chain_state.read().await.headers[0];
-                let hash = checkpoint_header.block_hash();
+            // The checkpoint header should be the first (and possibly only) header
+            let hash = {
+                let cs = self.chain_state.read().await;
+                cs.headers.first().map(|h| h.block_hash())
+            };
+            if let Some(hash) = hash {
                 tracing::info!("Using checkpoint hash for height {}: {}", height, hash);
                 Some(hash)
             } else {
                 tracing::error!("Synced from checkpoint but no headers in chain state!");
                 None
             }
🧹 Nitpick comments (11)
key-wallet/Cargo.toml (1)
45-45: Tracing dep OK; consider making it optional for no_std builds. Gate logging behind a feature (e.g., logging = ["dep:tracing"]) to avoid pulling tracing in minimal builds. Non-blocking.
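A possible Rust-side shape for such gating, assuming a hypothetical `logging` cargo feature that enables an optional `tracing` dependency (feature and macro names are illustrative):

```rust
#[cfg(feature = "logging")]
macro_rules! wallet_debug {
    ($($arg:tt)*) => { tracing::debug!($($arg)*) };
}

#[cfg(not(feature = "logging"))]
macro_rules! wallet_debug {
    ($($arg:tt)*) => {}; // compiled out entirely in minimal builds
}

#[allow(unused_variables)] // `delta` is unused when the feature is disabled
fn record_balance_change(delta: i64) {
    wallet_debug!("wallet balance changed by {} duffs", delta);
}
```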
dash-spv/src/main.rs (1)
372-465: Event logger OK; consider load-shedding for large wallets. Snapshot walks all stored transactions every 10s. Make interval configurable (arg/env) and short‑circuit if tx count is large, or compute incrementally.
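One possible load-shedding shape, sketched with placeholder types (`WalletSnapshot` and the thresholds are assumptions, not the PR's API):

```rust
use std::{sync::Arc, time::Duration};
use tokio::sync::RwLock;

struct WalletSnapshot {
    tx_count: usize,
    balance_duffs: u64,
}

async fn run_wallet_logger(
    snapshot: Arc<RwLock<WalletSnapshot>>,
    interval_secs: u64,     // configurable via CLI/env instead of a fixed 10s
    max_txs_to_walk: usize, // short-circuit threshold for large wallets
) {
    let mut ticker = tokio::time::interval(Duration::from_secs(interval_secs));
    loop {
        ticker.tick().await;
        let snap = snapshot.read().await;
        if snap.tx_count > max_txs_to_walk {
            // Skip the expensive per-transaction walk; log only the cheap summary.
            tracing::info!(
                "wallet: {} txs (detail walk skipped), balance {} duffs",
                snap.tx_count, snap.balance_duffs
            );
            continue;
        }
        tracing::info!("wallet: {} txs, balance {} duffs", snap.tx_count, snap.balance_duffs);
    }
}
```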
dash-spv/src/sync/sequential/mod.rs (2)
1969-1992: Avoid potential u32 overflow on height math. Use saturating_add when composing absolute heights (defensive).
Apply:
-        let mn_blockchain_height = if is_ckpt && sync_base > 0 {
-            sync_base + mn_state.last_height
+        let mn_blockchain_height = if is_ckpt && sync_base > 0 {
+            sync_base.saturating_add(mn_state.last_height)
         } else {
             mn_state.last_height
         };
2089-2095: Same here: saturating add for absolute height. Defensive math; negligible cost.
Apply:
-        Ok(self.header_sync.get_sync_base_height() + storage_height)
+        Ok(self.header_sync.get_sync_base_height().saturating_add(storage_height))

dash-spv/src/sync/headers_with_reorg.rs (7)
57-57: Cache coherence: ensure cached_* stays in sync with ChainState. cached_synced_from_checkpoint and cached_sync_base_height can drift if ChainState changes elsewhere. Either (a) refresh from ChainState on demand, or (b) add a small helper to rebuild caches from an actual read lock.
Example helper (outside this hunk):
impl<S: StorageManager + Send + Sync + 'static, N: NetworkManager + Send + Sync + 'static>
    HeaderSyncManagerWithReorg<S, N>
{
    pub async fn refresh_cache_from_chain_state(&mut self) {
        let cs = self.chain_state.read().await;
        self.cached_synced_from_checkpoint = cs.synced_from_checkpoint;
        self.cached_sync_base_height = cs.sync_base_height;
        let headers_len = cs.headers.len() as u32;
        drop(cs);
        self.update_cached_from_state_snapshot(
            self.cached_synced_from_checkpoint,
            self.cached_sync_base_height,
            headers_len,
        );
    }
}

Also applies to: 65-66, 1116-1135
177-183: Reduce per-header overhead while holding write lock. We hold a write lock and push headers one by one. Consider a bulk method on ChainState (reserve + extend) to cut per-iteration overhead and improve throughput for large batches.
If ChainState can expose add_headers_bulk(&[BlockHeader]), switch to a single call inside this lock.
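A sketch of what such a bulk method could look like, using stand-in types (the real ChainState and BlockHeader live in dash-spv / dashcore):

```rust
#[derive(Clone, Copy)]
struct BlockHeader; // stand-in for dashcore's header type

struct ChainState {
    headers: Vec<BlockHeader>,
}

impl ChainState {
    /// Appends an already-validated batch in one call instead of per-header pushes,
    /// shrinking the window during which the caller holds the write lock.
    fn add_headers_bulk(&mut self, new_headers: &[BlockHeader]) {
        self.headers.reserve(new_headers.len());
        self.headers.extend_from_slice(new_headers);
    }
}
```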
207-217: Minor: progress log denominator can be misleading. tip_height is a storage index (0-based), while loaded_count is a count. Logs may look off-by-one (and differ between checkpoint vs genesis paths). Consider computing expected_total = (tip_height + 1 - start_index) for accurate progress.
290-292: Avoid magic constants for “early block.” 0x1e0ffff0 and the timestamp cutoff are magic values. Prefer named consts (network-specific) or deriving from network params.
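For illustration, named constants could look like the following; only the 0x1e0ffff0 bits value comes from the code under review, while the cutoff value and the exact predicate are assumptions:

```rust
/// Minimum-difficulty compact target observed in early blocks.
const EARLY_BLOCK_MIN_DIFFICULTY_BITS: u32 = 0x1e0ffff0;
/// Headers timestamped before this Unix time are treated as "early" (assumed value).
const EARLY_BLOCK_TIMESTAMP_CUTOFF: u32 = 1_390_000_000;

fn is_early_block(bits: u32, time: u32) -> bool {
    bits == EARLY_BLOCK_MIN_DIFFICULTY_BITS && time < EARLY_BLOCK_TIMESTAMP_CUTOFF
}
```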
561-564: Gate unreachable Headers2 code behind a feature. Even with #[allow(unreachable_code)], it still type-checks and adds maintenance burden. Wrap the block with #[cfg(feature = "headers2")] so it’s compiled only when enabled.
Apply:
-        #[allow(unreachable_code)]
-        {
+        #[cfg(feature = "headers2")]
+        #[allow(unreachable_code)]
+        {
             // ... existing block ...
         }

Also applies to: 572-578, 602-611
529-551: Behavior of handle_headers2_message: error return vs. graceful fallback. Currently always returns Err, relying on callers to fall back. If callers treat any Err as fatal, this could abort sync. Consider returning Ok(false) after setting headers2_failed to trigger the existing fallback path without bubbling an error.
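A minimal sketch of the graceful-fallback shape (field and error types are placeholders, not the actual module's):

```rust
struct Headers2Sync {
    headers2_failed: bool,
}

impl Headers2Sync {
    /// Ok(true): compressed headers handled; Ok(false): caller should fall back to
    /// plain `headers`; Err: genuinely fatal conditions only.
    fn handle_headers2_message(&mut self, payload: &[u8]) -> Result<bool, String> {
        if payload.is_empty() {
            // Mark headers2 unusable and let the existing fallback path run,
            // rather than returning an error the caller might treat as fatal.
            self.headers2_failed = true;
            return Ok(false);
        }
        // ... decompress and apply headers here ...
        Ok(true)
    }
}
```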
399-424: Verbosity: downgrade very chatty logs to debug. Locator details and raw message bytes (when re-enabled) are noisy at info. Consider debug/trace to keep prod logs lean.
Also applies to: 453-524
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- dash-spv/src/client/mod.rs (3 hunks)
- dash-spv/src/main.rs (6 hunks)
- dash-spv/src/sync/headers_with_reorg.rs (31 hunks)
- dash-spv/src/sync/sequential/mod.rs (7 hunks)
- key-wallet/Cargo.toml (1 hunk)
- key-wallet/src/transaction_checking/wallet_checker.rs (3 hunks)
🧰 Additional context used
📓 Path-based instructions (10)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Use proper error types with thiserror and propagate errors appropriately
Use the tokio runtime for async operations in Rust
Use conditional compilation feature flags for optional features (#[cfg(feature = ...)])
Format Rust code with cargo fmt (and enforce via cargo fmt --check)
Run clippy with -D warnings and fix all lints
Adhere to MSRV Rust 1.89 (avoid features requiring newer compiler)
**/*.rs: Format Rust code with rustfmt (per rustfmt.toml); run cargo fmt --all before commits
Lint with clippy; treat warnings as errors in CI
Follow Rust naming: snake_case for functions/variables, UpperCamelCase for types/traits, SCREAMING_SNAKE_CASE for consts
Prefer async using tokio where applicable
Files:
- key-wallet/src/transaction_checking/wallet_checker.rs
- dash-spv/src/client/mod.rs
- dash-spv/src/main.rs
- dash-spv/src/sync/headers_with_reorg.rs
- dash-spv/src/sync/sequential/mod.rs
{key-wallet,swift-dash-core-sdk}/**/*.{rs,swift}
📄 CodeRabbit inference engine (CLAUDE.md)
Never log or expose private keys
Files:
key-wallet/src/transaction_checking/wallet_checker.rs
key-wallet/**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
Use secure random number generation for key material
Files:
key-wallet/src/transaction_checking/wallet_checker.rs
key-wallet/src/**/*.rs
📄 CodeRabbit inference engine (key-wallet/CLAUDE.md)
key-wallet/src/**/*.rs: Never serialize, print, or log private keys; avoid exposing secret key material in any form
Prefer using public keys or key fingerprints for identification in logs and messages instead of private keys
Always validate network consistency; never mix mainnet and testnet operations
Use the custom Error type; propagate errors with the ? operator and provide contextual messages
Never panic in library code; return Result<T, Error> for all fallible operations
Files:
key-wallet/src/transaction_checking/wallet_checker.rs
key-wallet/src/{managed_account,transaction_checking}/**/*.rs
📄 CodeRabbit inference engine (key-wallet/CLAUDE.md)
Update wallet state atomically: add transaction, update UTXOs, recalculate balance, and mark addresses used in a single operation
Files:
key-wallet/src/transaction_checking/wallet_checker.rs
key-wallet/src/transaction_checking/**/*.rs
📄 CodeRabbit inference engine (key-wallet/CLAUDE.md)
Use transaction type routing to check only relevant accounts
Files:
key-wallet/src/transaction_checking/wallet_checker.rs
**/src/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
**/src/**/*.rs: Each crate keeps sources in src/
Avoid unwrap()/expect() in library code; use proper error types (e.g., thiserror)
Place unit tests alongside code with #[cfg(test)]
Files:
- key-wallet/src/transaction_checking/wallet_checker.rs
- dash-spv/src/client/mod.rs
- dash-spv/src/main.rs
- dash-spv/src/sync/headers_with_reorg.rs
- dash-spv/src/sync/sequential/mod.rs
key-wallet/**/Cargo.toml
📄 CodeRabbit inference engine (key-wallet/CLAUDE.md)
Set MSRV via rust-version = "1.89" in Cargo.toml
Files:
key-wallet/Cargo.toml
**/Cargo.toml
📄 CodeRabbit inference engine (AGENTS.md)
MSRV is 1.89; set and respect rust-version = "1.89" in Cargo.toml
Files:
key-wallet/Cargo.toml
dash-spv/**/*.rs
📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)
dash-spv/**/*.rs: Enforce Rust formatting via cargo fmt --check on all Rust source files
All code must be clippy-clean: run cargo clippy --all-targets --all-features -- -D warnings
Files:
- dash-spv/src/client/mod.rs
- dash-spv/src/main.rs
- dash-spv/src/sync/headers_with_reorg.rs
- dash-spv/src/sync/sequential/mod.rs
🧠 Learnings (11)
📓 Common learnings
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/src/{managed_account,transaction_checking}/**/*.rs : Update wallet state atomically: add transaction, update UTXOs, recalculate balance, and mark addresses used in a single operation
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/src/{managed_account,transaction_checking}/**/*.rs : Update wallet state atomically: add transaction, update UTXOs, recalculate balance, and mark addresses used in a single operation
Applied to files:
- key-wallet/src/transaction_checking/wallet_checker.rs
- key-wallet/Cargo.toml
- dash-spv/src/main.rs
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/src/transaction_checking/**/*.rs : Use transaction type routing to check only relevant accounts
Applied to files:
key-wallet/src/transaction_checking/wallet_checker.rs
📚 Learning: 2025-06-26T16:01:37.609Z
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.
Applied to files:
- key-wallet/src/transaction_checking/wallet_checker.rs
- dash-spv/src/client/mod.rs
- dash-spv/src/main.rs
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/**/Cargo.toml : Set MSRV via rust-version = "1.89" in Cargo.toml
Applied to files:
key-wallet/Cargo.toml
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/tests/**/*.rs : Keep test vectors in sync with Dash Core releases
Applied to files:
key-wallet/Cargo.toml
📚 Learning: 2025-06-26T15:54:02.509Z
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T15:54:02.509Z
Learning: The `StorageManager` trait in `dash-spv/src/storage/mod.rs` uses `&mut self` methods but is also `Send + Sync`, and implementations often use interior mutability for concurrency. This can be confusing, so explicit documentation should clarify thread-safety expectations and the rationale for the API design.
Applied to files:
dash-spv/src/client/mod.rs
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/src/{bip32.rs,dip9.rs} : Keep Dash-specific derivation paths up to date (e.g., DIP-9) in bip32.rs and dip9.rs
Applied to files:
dash-spv/src/main.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/**/*.rs : All code must be clippy-clean: run `cargo clippy --all-targets --all-features -- -D warnings`
Applied to files:
dash-spv/src/main.rs
📚 Learning: 2025-08-16T04:15:57.225Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: swift-dash-core-sdk/Examples/DashHDWalletExample/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:15:57.225Z
Learning: Applies to swift-dash-core-sdk/Examples/DashHDWalletExample/**/*.swift : When accessing SPV functionality, use DashSDK public APIs instead of direct client access; add public wrapper methods to DashSDK if needed
Applied to files:
dash-spv/src/main.rs
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/tests/mnemonic_tests.rs : Place mnemonic generation and validation tests in tests/mnemonic_tests.rs
Applied to files:
dash-spv/src/main.rs
🧬 Code graph analysis (4)
key-wallet/src/transaction_checking/wallet_checker.rs (2)
key-wallet/src/managed_account/mod.rs (2)
new (63-73), network (787-789)
dash/src/address.rs (3)
from_script (497-515), from_script (1140-1142), network (944-946)
dash-spv/src/main.rs (2)
key-wallet/src/wallet/managed_wallet_info/wallet_info_interface.rs (4)
wallet_id (30-30), wallet_id (122-124), monitored_addresses (60-60), monitored_addresses (162-173)
key-wallet-manager/src/wallet_manager/mod.rs (3)
get_network_state (1051-1053), get_wallet_balance (993-1000), monitored_addresses (1061-1067)
dash-spv/src/sync/headers_with_reorg.rs (3)
dash-spv/src/sync/sequential/mod.rs (1)
new (89-118)
dash-spv/src/storage/disk.rs (1)
new (210-324)
dash-spv/src/types.rs (4)
new (184-186), new (897-917), tip_height (229-238), tip_hash (241-243)
dash-spv/src/sync/sequential/mod.rs (3)
dash-spv/src/client/status_display.rs (2)
chain_state (120-123), new (23-37)
dash-spv/src/sync/headers_with_reorg.rs (1)
new (73-107)
dash-spv/src/storage/disk.rs (1)
new (210-324)
🔇 Additional comments (14)
key-wallet/src/transaction_checking/wallet_checker.rs (2)
191-243: UTXO ingest/evict flow looks good; verify coinbase maturity handling. You store coinbase UTXOs as confirmed (with is_coin_base() set). Ensure the downstream spend path enforces maturity (100 blocks) before spending.
Would you like a follow‑up test that spends a coinbase UTXO at height <100 and asserts rejection?
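A minimal maturity check along the lines suggested (100 is the standard coinbase maturity; the function shape is illustrative, not key-wallet's API):

```rust
const COINBASE_MATURITY: u32 = 100;

/// A coinbase output is spendable once it has COINBASE_MATURITY confirmations,
/// counting the block that contains it.
fn is_spendable(is_coinbase: bool, utxo_height: u32, tip_height: u32) -> bool {
    if !is_coinbase {
        return true;
    }
    tip_height >= utxo_height + COINBASE_MATURITY - 1
}

fn main() {
    assert!(!is_spendable(true, 1_000, 1_050)); // only 51 confirmations
    assert!(is_spendable(true, 1_000, 1_099));  // 100th confirmation
}
```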
291-300: Balance update and logging after atomic state update: LGTM. Order (tx record → UTXOs → mark used → maintain gap → update balance → log) is consistent with atomic wallet update guidance.
dash-spv/src/client/mod.rs (2)
193-199: Pass shared ChainState into sync manager: LGTM. Creates a single source of truth for checkpoint-aware state across components.
2176-2178: Confirm headers_len semantics for cache update.
headers_len derives from chain_state_for_storage.headers.len() as u32. Verify update_chain_state_cache(.., headers_len) expects the header vector length (not tip height). If it expects tip height, pass the absolute height instead.
Also applies to: 2198-2204
dash-spv/src/main.rs (1)
264-265: Wallet ID plumbing: LGTM. Signature and call sites updated consistently; good context propagation for logger task.
Also applies to: 282-283, 294-303, 316-317
dash-spv/src/sync/sequential/mod.rs (2)
90-95: Constructor takes shared ChainState: LGTM. Clean injection into HeaderSyncManagerWithReorg; good step toward unified state handling.
Also applies to: 102-105
804-816: Cache update API: LGTM. Interface is clear and localized; avoids exposing mutable state setters.
dash-spv/src/sync/headers_with_reorg.rs (7)
119-124: Good: initialize in-memory state from persisted snapshot. Loading ChainState from storage and seeding caches up front looks correct.
270-274: Good: fail fast if batch doesn’t connect to our current tip. This guards against misaligned batches before doing work.
322-334: Checkpoint validation per prospective height: solid. Computes height ahead of time and validates vs checkpoints before mutation.
338-343: Good: single write lock for batch append. Batch add under one lock avoids interleaving and reduces contention.
650-653: Good: introspective logging before requests. Helpful for diagnosing initial state.
952-956: Solid simplification: use total_headers_synced for checkpoint boundary. This keeps “past checkpoints” logic consistent regardless of base height.
1102-1104: Good: async tip hash accessor. This avoids leaking internals and keeps locking localized.
use std::sync::Arc;
use tokio::sync::RwLock;
🧩 Analysis chain
Good: async-safe locking primitives.
Using Arc + tokio::sync::RwLock for ChainState is the right choice for async contexts.
To ensure there’s no accidental use of std::sync::RwLock (which can deadlock in async), run:
🏁 Script executed:
#!/bin/bash
rg -nP --type=rs -C1 '\bstd::sync::RwLock\b|parking_lot::RwLock'
rg -nP --type=rs -C2 'Arc\s*<\s*(std::sync::)?RwLock\s*<\s*ChainState'

Length of output: 53
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== search: std::sync::RwLock or parking_lot::RwLock in .rs files ==="
if command -v rg >/dev/null 2>&1; then
rg -n -C1 -g '**/*.rs' '\bstd::sync::RwLock\b|parking_lot::RwLock' || true
else
find . -type f -name '*.rs' -print0 | xargs -0 grep -n -E -C1 '\bstd::sync::RwLock\b|parking_lot::RwLock' || true
fi
echo "=== search: Arc<...RwLock<ChainState>> patterns ==="
if command -v rg >/dev/null 2>&1; then
rg -n -C2 -g '**/*.rs' 'Arc\s*<\s*(?:std::sync::)?RwLock\s*<\s*ChainState' || true
else
find . -type f -name '*.rs' -print0 | xargs -0 grep -n -E -C2 'Arc\s*<\s*(std::sync::)?RwLock\s*<\s*ChainState' || true
fi
echo "=== search: use std::sync::RwLock imports ==="
if command -v rg >/dev/null 2>&1; then
rg -n -g '**/*.rs' 'use\s+std::sync::RwLock' || true
else
find . -type f -name '*.rs' -print0 | xargs -0 grep -n -E 'use\s+std::sync::RwLock' || true
fi
echo "=== search: Cargo.toml for parking_lot or tokio dependencies ==="
if command -v rg >/dev/null 2>&1; then
rg -n --hidden -g 'Cargo.toml' 'parking_lot|tokio' || true
else
grep -n -E 'parking_lot|tokio' Cargo.toml || true
fi
echo "=== done ==="Length of output: 3478
Critical: replace std::sync::RwLock in async code
dash-spv/src/storage/sync_storage.rs:7 imports std::sync::RwLock — this can deadlock in async contexts; replace it with tokio::sync::RwLock or ensure the API is strictly synchronous. Other files (e.g., dash-spv/src/sync/headers_with_reorg.rs) already use tokio::sync::RwLock.
🤖 Prompt for AI Agents
In dash-spv/src/storage/sync_storage.rs around line 7, the code imports
std::sync::RwLock which can deadlock in async contexts; replace that import with
tokio::sync::RwLock (and adjust any fully-qualified uses) so async tasks use the
async-aware lock; ensure any Arc<RwLock<...>> types remain compatible and update
any tests/usages to await or use read().await/write().await as needed.
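For reference, a self-contained sketch of the async-aware pattern (requires tokio with the rt and macros features enabled; `SyncState` is a placeholder for whatever sync_storage.rs actually guards):

```rust
use std::sync::Arc;
use tokio::sync::RwLock;

#[derive(Default)]
struct SyncState {
    last_synced_height: u32,
}

async fn bump_height(state: Arc<RwLock<SyncState>>, new_height: u32) {
    // The async-aware lock is awaited, so tasks yield instead of blocking the
    // executor thread the way a held std::sync::RwLock guard would.
    let mut guard = state.write().await;
    guard.last_synced_height = guard.last_synced_height.max(new_height);
    // Guard drops here, before any further .await point.
}

#[tokio::main]
async fn main() {
    let state = Arc::new(RwLock::new(SyncState::default()));
    bump_height(state.clone(), 42).await;
    assert_eq!(state.read().await.last_synced_height, 42);
}
```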
             if self.is_synced_from_checkpoint()
                 && !self.chain_state.read().await.headers.is_empty()
             {
                 // Use the checkpoint hash from chain state
-                let checkpoint_hash = self.chain_state.headers[0].block_hash();
+                let checkpoint_hash = self.chain_state.read().await.headers[0].block_hash();
                 tracing::info!(
                     "📍 No base_hash provided but syncing from checkpoint at height {}. Using checkpoint hash: {}",
-                    self.chain_state.sync_base_height,
+                    self.get_sync_base_height(),
                     checkpoint_hash
Fix TOCTOU on chain_state.headers (double read-lock + index).
You check is_empty() and then index [0] via separate read().await calls. A concurrent writer could invalidate the check and cause a panic. Take one read lock and derive the value once.
Apply:
- if self.is_synced_from_checkpoint()
- && !self.chain_state.read().await.headers.is_empty()
- {
- // Use the checkpoint hash from chain state
- let checkpoint_hash = self.chain_state.read().await.headers[0].block_hash();
+ if self.is_synced_from_checkpoint() {
+ // Use the checkpoint hash from chain state (single read lock)
+ let checkpoint_hash_opt = {
+ let cs = self.chain_state.read().await;
+ cs.headers.first().map(|h| h.block_hash())
+ };
+ let checkpoint_hash = match checkpoint_hash_opt {
+ Some(h) => h,
+ None => {
+ tracing::error!("Synced from checkpoint but no header in chain state; falling back to genesis locator");
+ self.config.network.known_genesis_block_hash().unwrap_or(BlockHash::from_byte_array([0; 32]))
+ }
+ };📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if self.is_synced_from_checkpoint()
    && !self.chain_state.read().await.headers.is_empty()
{
    // Use the checkpoint hash from chain state
    let checkpoint_hash = self.chain_state.headers[0].block_hash();
    let checkpoint_hash = self.chain_state.read().await.headers[0].block_hash();
    tracing::info!(
        "📍 No base_hash provided but syncing from checkpoint at height {}. Using checkpoint hash: {}",
        self.chain_state.sync_base_height,
        self.get_sync_base_height(),
        checkpoint_hash

if self.is_synced_from_checkpoint() {
    // Use the checkpoint hash from chain state (single read lock)
    let checkpoint_hash_opt = {
        let cs = self.chain_state.read().await;
        cs.headers.first().map(|h| h.block_hash())
    };
    let checkpoint_hash = match checkpoint_hash_opt {
        Some(h) => h,
        None => {
            tracing::error!("Synced from checkpoint but no header in chain state; falling back to genesis locator");
            self.config.network.known_genesis_block_hash().unwrap_or(BlockHash::from_byte_array([0; 32]))
        }
    };
    tracing::info!(
        "📍 No base_hash provided but syncing from checkpoint at height {}. Using checkpoint hash: {}",
        self.get_sync_base_height(),
        checkpoint_hash
🤖 Prompt for AI Agents
In dash-spv/src/sync/headers_with_reorg.rs around lines 427 to 435, you
currently call self.chain_state.read().await twice (once for is_empty() and
again for indexing [0]), creating a TOCTOU race that can panic if a writer
changes headers between reads; fix by taking a single read lock: let headers =
self.chain_state.read().await; check headers.is_empty() (or use headers.first())
and derive the checkpoint_hash from headers[0].block_hash() (or
headers.first().map(|h| h.block_hash()).cloned()), then drop the read guard (or
clone the hash) before logging so you don’t hold the lock during tracing::info!.
                 // Use the checkpoint hash from chain state
-                if !self.chain_state.headers.is_empty() {
-                    let checkpoint_hash = self.chain_state.headers[0].block_hash();
+                if !self.chain_state.read().await.headers.is_empty() {
+                    let checkpoint_hash =
+                        self.chain_state.read().await.headers[0].block_hash();
                     tracing::info!(
                         "Using checkpoint hash for recovery: {} (chain state has {} headers, first header time: {})",
                         checkpoint_hash,
-                        self.chain_state.headers.len(),
-                        self.chain_state.headers[0].time
+                        self.chain_state.read().await.headers.len(),
+                        self.chain_state.read().await.headers[0].time
                     );
Fix TOCTOU in timeout recovery path.
Multiple read locks with [0] indexing; derive all needed values under a single read lock.
Apply:
- if !self.chain_state.read().await.headers.is_empty() {
- let checkpoint_hash =
- self.chain_state.read().await.headers[0].block_hash();
+ {
+ let (checkpoint_hash_opt, count_time_opt) = {
+ let cs = self.chain_state.read().await;
+ (cs.headers.first().map(|h| h.block_hash()),
+ cs.headers.first().map(|h| (cs.headers.len(), h.time)))
+ };
+ if let (Some(checkpoint_hash), Some((hdrs_len, first_time))) = (checkpoint_hash_opt, count_time_opt) {
tracing::info!(
"Using checkpoint hash for recovery: {} (chain state has {} headers, first header time: {})",
checkpoint_hash,
- self.chain_state.read().await.headers.len(),
- self.chain_state.read().await.headers[0].time
+ hdrs_len,
+ first_time
);
Some(checkpoint_hash)
} else {
tracing::warn!("No checkpoint header in chain state for recovery");
None
}
- } else {
- tracing::warn!("No checkpoint header in chain state for recovery");
- None
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Use the checkpoint hash from chain state
if !self.chain_state.headers.is_empty() {
    let checkpoint_hash = self.chain_state.headers[0].block_hash();
if !self.chain_state.read().await.headers.is_empty() {
    let checkpoint_hash =
        self.chain_state.read().await.headers[0].block_hash();
    tracing::info!(
        "Using checkpoint hash for recovery: {} (chain state has {} headers, first header time: {})",
        checkpoint_hash,
        self.chain_state.headers.len(),
        self.chain_state.headers[0].time
        self.chain_state.read().await.headers.len(),
        self.chain_state.read().await.headers[0].time
    );

// Use the checkpoint hash from chain state
{
    let (checkpoint_hash_opt, count_time_opt) = {
        let cs = self.chain_state.read().await;
        (cs.headers.first().map(|h| h.block_hash()),
         cs.headers.first().map(|h| (cs.headers.len(), h.time)))
    };
    if let (Some(checkpoint_hash), Some((hdrs_len, first_time))) = (checkpoint_hash_opt, count_time_opt) {
        tracing::info!(
            "Using checkpoint hash for recovery: {} (chain state has {} headers, first header time: {})",
            checkpoint_hash,
            hdrs_len,
            first_time
        );
        Some(checkpoint_hash)
    } else {
        tracing::warn!("No checkpoint header in chain state for recovery");
        None
    }
}
🤖 Prompt for AI Agents
In dash-spv/src/sync/headers_with_reorg.rs around lines 854 to 863, the recovery
path currently takes multiple async read locks and indexes headers[0]
separately, creating a TOCTOU risk; change it to acquire a single read guard,
extract the needed values (checkpoint_hash, headers length, first header time)
from that guard while still held, then drop the guard and call tracing::info
with those captured values so no subsequent reads or indexing occur after the
lock is released.
let mut involved_addrs = alloc::collections::BTreeSet::new();
for info in account_match.account_type_match.all_involved_addresses() {
    involved_addrs.insert(info.address.clone());
}
🛠️ Refactor suggestion
Replace alloc:: uses (no extern crate alloc here).
Use std equivalents (BTreeSet, format!) to avoid build issues and simplify.
Apply:
- let mut involved_addrs = alloc::collections::BTreeSet::new();
+        let mut involved_addrs = std::collections::BTreeSet::new();

and
- let ctx = match context {
- TransactionContext::Mempool => alloc::format!("mempool"),
+ let ctx = match context {
+ TransactionContext::Mempool => format!("mempool"),
TransactionContext::InBlock {
height,
..
- } => alloc::format!("block {}", height),
+ } => format!("block {}", height),
TransactionContext::InChainLockedBlock {
height,
..
} => {
- alloc::format!("chainlocked block {}", height)
+ format!("chainlocked block {}", height)
}
        };

Optionally add at top:
use std::collections::BTreeSet; // if preferred

Also applies to: 301-312
🤖 Prompt for AI Agents
In key-wallet/src/transaction_checking/wallet_checker.rs around lines 198-201
(and similarly around 301-312), the code uses alloc::collections::BTreeSet and
alloc::format!, which fails because there is no extern crate alloc; replace
alloc::collections::BTreeSet::new() with std::collections::BTreeSet::new() (or
add use std::collections::BTreeSet; at the top and call BTreeSet::new()), and
replace alloc::format! with the standard format! macro; make the same
replacements in the other mentioned block (301-312).
Summary by CodeRabbit
New Features
Refactor
Chores