Feat/mempool bloom filters chain management wip 1 #87
Conversation
Remove hardcoded localhost addresses from mainnet and testnet seed lists. These were causing unnecessary connection attempts during initial sync
- Add DetailedSyncProgress struct with performance metrics - Headers per second, bytes per second, ETA calculation - Sync stage tracking (Connecting, Querying, Downloading, etc.) - Add SyncStage enum for granular progress states - Add filter_sync_available field to track peer capabilities - Add supports_compact_filters() helper to PeerInfo - Add progress channel to DashSpvClient for real-time updates - Add is_filter_sync_available() method to check peer support This enables better progress reporting and performance monitoring for SPV sync operations.
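A rough sketch of the shape these progress types could take, based only on the fields named in this commit message (the exact definitions in dash-spv may differ):

```rust
/// Illustrative only: stages named in the commit message.
pub enum SyncStage {
    Connecting,
    Querying,
    Downloading,
    Complete,
}

/// Illustrative only: metrics named in the commit message.
pub struct DetailedSyncProgress {
    pub stage: SyncStage,
    pub headers_per_second: f64,
    pub bytes_per_second: f64,
    pub eta_seconds: Option<u64>,
    pub filter_sync_available: bool,
}
```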
- Add intermediate handshake states for better debugging: - VersionReceivedVerackSent: Version received, verack sent - VerackReceived: Verack received from peer - Add tracking flags for version_received, verack_received, version_sent - Improve logging throughout handshake process - Better error messages with handshake state information This makes handshake debugging easier and provides clearer state transitions during peer connection establishment.
- Reduce header sync timeout from 10s to 4s for faster failure detection - Change status update interval from 5s to 500ms for smoother progress - Add detailed logging for sync process including locator info - Improve empty locator handling for genesis sync These changes provide more responsive sync behavior and better real-time progress feedback.
- Enhanced status display with better formatting and metrics - Improved network connection handling and error recovery - Updated network constants for better peer management - Enhanced multi-peer connection logic with better peer selection - Improved filter sync with better error handling - General sync module improvements for reliability These changes collectively improve the stability and performance of the SPV client's network and synchronization operations.
Change header sync timeout from 4 seconds to 500 milliseconds for more responsive timeout detection during sync operations.
- Add Swift SDK package with comprehensive Dash Core functionality - Implement SPV client FFI bindings with async/await support - Add HD wallet integration via key-wallet-ffi - Create example iOS app demonstrating wallet and SPV features - Include persistent wallet storage and transaction management - Add network FFI bindings for Swift integration - Update CLAUDE.md with build instructions and project structure - Include build scripts for iOS targets (arm64, x86_64) - Add comprehensive documentation and implementation plans 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Add env_logger dependency for better debugging - Add FFIDetailedSyncProgress struct with comprehensive sync metrics - Add FFISyncStage enum for granular sync state tracking - Add sync-specific callbacks for progress and completion events - Add test_sync function for verifying sync functionality - Update cbindgen configuration to exclude internal callback types - Enhance FFI client with sync progress channels and callbacks - Update tests to handle new sync progress features This enables FFI consumers to track detailed sync progress including headers per second, ETA, sync stages, and real-time updates.
- Add detailed project overview and structure - Include build commands for Rust and FFI targets - Add test commands and environment variables - Document development commands (linting, formatting, docs) - Include key features (Dash-specific and architecture) - Add code style guidelines and constraints - Document Git workflow and current status - Add security considerations and API stability notes - Include known limitations This provides comprehensive guidance for Claude Code when working with the rust-dashcore repository.
- Simplify DashSPVFFI.swift by removing empty namespace enum - Update dash_spv_ffi.h with new FFI types matching Rust implementation - Add FFISyncStage, FFIDetailedSyncProgress types - Update function signatures for enhanced sync tracking - Add invalidArgument error case to DashSDKError - Provides better error handling for invalid inputs - Includes helpful error messages and recovery suggestions These changes align the Swift SDK with the updated Rust FFI layer and improve error handling capabilities.
Add documentation file to provide Claude Code with context and guidance when working with the Swift SDK package. This helps maintain consistency and understanding of the SDK structure.
- Add EnhancedSyncProgressView for detailed sync progress display - Add SettingsView for app configuration - Add ModelContainerHelper for SwiftData management - Update project configuration and build scripts - Improve wallet services with better error handling - Enhanced UI with platform-specific colors and layouts - Add comprehensive documentation: - DEBUG_SUMMARY.md for debugging tips - IOS_APP_SETUP_GUIDE.md for setup instructions - LINKING_FIX.md for resolving linking issues - SPM_LINKING_SOLUTION.md for Swift Package Manager setup - Add DashSPVFFI.xcframework support - Update build phase script for better library management - Improve wallet models with better persistence support This provides a more complete example app demonstrating HD wallet functionality with SPV sync capabilities and improved user experience.
Remove libdash_spv_ffi.a from version control. Binary artifacts should be built locally and not committed to the repository.
- Build and copy key_wallet_ffi libraries for both simulator and device - Create symlinks for both dash_spv_ffi and key_wallet_ffi libraries - Default symlinks point to simulator versions for development - Improve output messages with detailed library information This enables the iOS example app to use both SPV and wallet FFI functionality with proper library management.
- Add mempool_filter module for transaction filtering - Implement mempool transaction storage and callbacks - Add mempool transaction types and event handling - Update storage layer to support mempool transactions - Add FFI bindings for mempool events - Add Swift types for mempool transactions This enables SPV clients to track and filter mempool transactions, providing real-time transaction updates before blockchain confirmation. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Enhance block processor for better balance updates - Update FFI wallet bindings with improved error handling - Add immature balance tracking to Swift Balance model - Improve persistent wallet management with better error handling - Add comprehensive balance calculations in WalletManager 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Add SPV client verification methods and retry management - Implement comprehensive mempool support in Swift SDK - Add WatchStatusView for monitoring address watch status - Enhance WalletService with improved SPV integration - Update FFI bridge with new callback handlers - Improve error handling and state management - Add support for mempool transactions in example app - Update build script with improved error handling 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Update root CLAUDE.md with recent development notes - Add mempool implementation details to dash-spv CLAUDE.md - Update dash-spv README with mempool features - Enhance Swift SDK documentation with new features - Update BUILD.md with improved build instructions 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Enhance address derivation with better error handling - Update BIP32 implementation for improved compatibility - Refactor DIP9 address derivation methods - Update Dash network constants and checkpoint data - Improve examples and test coverage 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Add mempool configuration options to client config - Enhance consistency checks and filter sync - Improve network connection and message handling - Add comprehensive chainlock and instantlock validation - Refactor sync module for better modularity - Update multi-peer connection management - Add new error types for mempool operations - Update dependencies and test configurations This improves the overall reliability and performance of the SPV client with better error handling and validation mechanisms. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Add terminal block data to support deterministic masternode list synchronization. Terminal blocks contain masternode list snapshots at specific heights, enabling efficient sync without downloading all intermediate blocks. - Add mainnet terminal blocks (heights 1088640-2000000) - Add testnet terminal blocks (heights 387480-900000) - Implement terminal block loading and validation - Add tests and examples for terminal block usage - Include script for fetching terminal block data from nodes This enables SPV clients to quickly sync masternode lists by jumping to known terminal blocks instead of processing every block from genesis. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Add common library file extensions to prevent build artifacts from being committed: - *.a (static libraries) - *.so (Linux shared libraries) - *.dylib (macOS dynamic libraries) - *.dll (Windows dynamic libraries) 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Add simple example that verifies genesis block generation for mainnet and testnet networks. This helps ensure the genesis block constants are correctly defined. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Enhanced sync managers for headers and masternodes - Improved error handling and logging in sync components - Added support for sequential sync coordination - Better timeout and retry logic for sync operations 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Move implementation documentation to organized structure - Add bloom filter specification - Document sequential sync design - Include peer reputation system documentation - Document fixes and implementation summaries 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Add multi-peer connection management - Improve handshake protocol handling - Enhanced message routing and error handling - Better connection resilience and retry logic - Add peer reputation tracking foundation 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
…ntegration - Add mempool transaction tracking with configurable strategies - Implement balance calculations including unconfirmed transactions - Enhanced wallet address management and UTXO tracking - Improved error handling with specific error types - Add transaction relevance checking for watched addresses 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Add mempool tracking configuration to FFI - Implement mempool-specific event callbacks - Add balance queries including mempool transactions - Enhanced error handling with specific error codes - Support for recording sent transactions 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Add BIP37 bloom filter message type definitions - Define bloom filter constants (max size, hash functions) - Add bloom filter update flags enum - Update network protocol with bloom filter support - Prepare for future bloom filter implementation 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
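For reference, a sketch of the constants and update flags this commit prepares for; the numeric values come from the BIP37 specification, while the identifiers here are illustrative rather than the crate's actual names:

```rust
/// Maximum serialized bloom filter size allowed by BIP37, in bytes.
pub const MAX_BLOOM_FILTER_SIZE: usize = 36_000;
/// Maximum number of hash functions allowed by BIP37.
pub const MAX_HASH_FUNCS: u32 = 50;

/// BIP37 filter update flags carried in the `filterload` message.
#[repr(u8)]
pub enum BloomUpdateFlag {
    /// Never update the filter when a transaction matches.
    None = 0,
    /// Add outpoints of matching outputs to the filter.
    All = 1,
    /// Update only for pay-to-pubkey / bare multisig outputs.
    PubkeyOnly = 2,
}
```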
- Add mempool transaction tracking to Swift SDK - Implement mempool event types and callbacks - Enhanced balance calculations with unconfirmed transactions - Improved transaction model with mempool state - Add error handling types for better debugging - Update example app to demonstrate mempool features 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Revert devnet to use correct dash_genesis_tx() like all other networks - Fix devnet merkle root to match Dash genesis transaction - Correct devnet ChainHash constant to proper value - Remove unnecessary bitcoin_genesis_tx() function - Update test assertions to expect correct values - Add missing set_sync_base_height method to FilterSyncManager - Add wallet_creation_time field to ClientConfig for checkpoint sync Devnets should use the same genesis transaction as all other Dash networks, not Bitcoin's genesis transaction. This fixes the incorrect changes introduced during the rebase process. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Reverted changes in: - key-wallet/src/address.rs - key-wallet/src/derivation.rs - key-wallet/tests/address_tests.rs These files now match the v0.40-dev branch state.
…module The platform_integration module was referenced but the file doesn't exist, causing build failures. Removed the module declaration and its re-export. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Remove obsolete fix summaries from docs/fixes/ directory. These were temporary documentation files for issues that have already been resolved and no longer provide value for future maintenance. Removed files: - ADDRESS_DERIVATION_FIX.md - BALANCE_CALLBACK_FIX.md - BALANCE_FIX_COMPLETE.md - EVENT_CALLBACK_FIX_SUMMARY.md - FILTER_SYNC_FIX_SUMMARY.md - INVENTORY_HANDLING_IMPLEMENTATION.md - SEGMENT_BOUNDARY_FIX_SUMMARY.md - TIP_HEADER_FIX_SUMMARY.md - XCODE_BUILD_FIX.md 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Remove obsolete temporary documentation files from the Swift SDK that were used during development for debugging and one-time setup instructions. These files no longer provide value now that the issues have been resolved. Removed files: - swift-dash-core-sdk/BUILD_FIXES_SUMMARY.md - swift-dash-core-sdk/Examples/DashHDWalletExample/ADD_PACKAGE_DEPENDENCY.md - swift-dash-core-sdk/Examples/DashHDWalletExample/DEBUG_SUMMARY.md - swift-dash-core-sdk/Examples/DashHDWalletExample/FIX_MODULE_ERROR.md - swift-dash-core-sdk/Examples/DashHDWalletExample/LINKING_FIX.md - swift-dash-core-sdk/Examples/DashHDWalletExample/SPM_LINKING_SOLUTION.md 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Remove obsolete test scripts and data files that were used during headers2 protocol development and are no longer needed. Removed files: - test_headers2.rs - Temporary headers2 test - test_headers2.sh - Temporary test runner - test_simple_headers2.rs - Another temporary test - run_headers2_test.sh - Duplicate test runner - peer_reputation.json - Test data file with only 2 entries 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Remove standalone extraction script that is not used by any commands. The /fix-coderabbit-pr command has its own inline extraction logic. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
… v0.40-dev state Remove linting configuration from hashes/Cargo.toml and revert formatting changes in key-wallet/src/derivation.rs to match v0.40-dev baseline. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Created private async helper `calculate_filter_header_height()` in StatusDisplay - Replaced duplicated filter header height calculation logic in: - get_sync_progress() method (lines 78-103) - update_status_display() method with terminal UI (lines 159-163) - update_status_display() method without terminal UI (lines 251-255) - This refactoring improves code clarity and maintainability by eliminating duplication Fixes CodeRabbitAI review comment from PR dashpay#80 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Extract the duplicated header height calculation logic from sync_progress() and update_status_display() into a private helper method calculate_header_height_with_logging(). This improves maintainability and reduces code repetition as identified by CodeRabbitAI in PR dashpay#80. The helper method supports optional debug logging to preserve the existing logging behavior in update_status_display().
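A minimal sketch of what such a helper could look like; the method name comes from the commit message, but the fields and body are assumptions for illustration:

```rust
struct StatusDisplay {
    base_height: u32,
    stored_headers: u32,
}

impl StatusDisplay {
    /// One shared place to compute the header height, with opt-in debug logging
    /// so existing call sites keep their current output.
    fn calculate_header_height_with_logging(&self, log: bool) -> u32 {
        let height = self.base_height + self.stored_headers.saturating_sub(1);
        if log {
            println!("calculated header height: {}", height);
        }
        height
    }
}
```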
…sing Replace the panic-inducing expect() call when parsing start_height_str with proper error handling using map_err() and the ? operator. This ensures the program gracefully exits with a descriptive error message instead of panicking when an invalid start height is provided. Fixes CodeRabbitAI review comment from PR dashpay#80
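A small sketch of the pattern described here (graceful error propagation instead of `expect()`); names are illustrative:

```rust
fn parse_start_height(start_height_str: &str) -> Result<u32, String> {
    // map_err converts the parse error into a descriptive message,
    // and `?` lets the caller exit cleanly instead of panicking.
    start_height_str
        .parse::<u32>()
        .map_err(|e| format!("Invalid start height '{}': {}", start_height_str, e))
}

fn main() -> Result<(), String> {
    let height = parse_start_height("1088640")?;
    println!("Starting from height {}", height);
    Ok(())
}
```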
Implemented comprehensive tests to verify actual reentrancy behavior and thread safety in the FFI callback system as requested by CodeRabbitAI: - Enhanced test_callback_reentrancy to: - Actually trigger callbacks through real FFI operations - Test reentrancy by attempting to call FFI functions from within callbacks - Detect potential deadlocks with timeout monitoring - Track and verify reentrancy protection mechanisms - Added test_callback_thread_safety to: - Invoke callbacks from multiple threads concurrently - Monitor concurrent callback executions - Detect race conditions through shared state access - Verify thread-safe behavior without deadlocks or data races These tests ensure the FFI layer properly handles nested/concurrent callback invocations without issues. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Changed valid_count update logic in dash-spv/src/storage/disk.rs to only increment when offset equals current valid_count. This ensures valid_count always reflects the count of contiguous valid headers without gaps, preventing sentinel headers from being incorrectly counted as valid when headers are stored non-sequentially. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
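The contiguity rule can be sketched like this (field names are illustrative, not the actual disk.rs code):

```rust
struct HeaderIndex {
    /// Count of contiguous valid headers starting at offset 0.
    valid_count: u64,
}

impl HeaderIndex {
    fn on_header_stored(&mut self, offset: u64) {
        // Only advance when the header lands exactly at the end of the
        // contiguous range; out-of-order writes leave valid_count alone,
        // so sentinel/gap positions are never counted as valid.
        if offset == self.valid_count {
            self.valid_count += 1;
        }
    }
}
```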
Previously, get_mempool_balance relied solely on the sign of net_amount to determine transaction effects on an address, which could lead to incorrect balance calculations. This fix: - Examines transaction outputs directly to calculate actual balance changes - Validates that calculated changes are consistent with transaction metadata - Handles edge cases like zero amounts and conflicting signs - Adds comprehensive unit tests covering various scenarios The implementation now: 1. Calculates incoming funds by checking outputs to the address 2. Notes that outgoing funds would require UTXO knowledge (commented) 3. Applies the validated balance change rather than trusting net_amount alone 4. Ensures no negative balances are returned 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
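A simplified sketch of the output-based calculation (types and names are placeholders, not the real wallet API):

```rust
struct TxOut {
    value: u64,
    address: String,
}

struct MempoolTx {
    outputs: Vec<TxOut>,
}

/// Sum the outputs paying to `address`; outgoing amounts would additionally
/// require UTXO knowledge, so the result is clamped to never go negative.
fn mempool_balance_change(tx: &MempoolTx, address: &str, net_amount: i64) -> u64 {
    let incoming: u64 = tx
        .outputs
        .iter()
        .filter(|o| o.address == address)
        .map(|o| o.value)
        .sum();
    // Validate against the transaction metadata rather than trusting it blindly.
    if net_amount < 0 && incoming == 0 {
        0
    } else {
        incoming
    }
}
```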
…iring new ones Fixed potential deadlock issue in BloomFilterManager by ensuring write locks are explicitly released before acquiring subsequent locks. This addresses the CodeRabbitAI review comment from PR dashpay#80 about nested write locks in the bloom manager. Changes: - Wrapped write lock acquisitions in explicit scopes in add_address(), add_outpoint(), and add_data() methods to ensure locks are dropped before acquiring filter lock - Refactored clear() method to use explicit scopes for each lock acquisition This ensures consistent lock ordering and prevents potential deadlocks when multiple threads access the bloom filter manager concurrently. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
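The lock-scoping pattern being described looks roughly like this (a sketch with placeholder fields, not the actual manager):

```rust
use std::sync::RwLock;

struct BloomFilterManager {
    watched_addresses: RwLock<Vec<String>>,
    filter: RwLock<Vec<u8>>,
}

impl BloomFilterManager {
    fn add_address(&self, addr: String) {
        {
            // Explicit scope: the write guard is dropped at the closing brace...
            let mut addrs = self.watched_addresses.write().unwrap();
            addrs.push(addr);
        }
        // ...so the filter lock is never acquired while the address lock is held,
        // keeping lock ordering consistent across threads.
        let mut filter = self.filter.write().unwrap();
        filter.clear(); // placeholder for rebuilding the filter
    }
}
```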
Implement atomic-like behavior for wallet rollback and storage operations during chain reorganization to prevent inconsistent state. Changes: - Create chain state checkpoint before any modifications - Wrap all reorg operations in a single async block - Restore chain state on any failure - Track successfully stored headers for better error reporting - Add clear error messages indicating partial state issues This addresses the CodeRabbitAI review comment about ensuring atomicity of operations during reorganization to maintain consistency. Note: Due to StorageManager trait limitations, we cannot fully rollback wallet state or storage operations. The error messages guide users to resync from a checkpoint if needed. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
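A minimal sketch of the checkpoint-and-restore idea (the real code operates on chain state and storage; this only shows the control flow):

```rust
#[derive(Clone)]
struct ChainState {
    tip_height: u32,
}

fn apply_reorg<F>(state: &mut ChainState, reorg_ops: F) -> Result<(), String>
where
    F: FnOnce(&mut ChainState) -> Result<(), String>,
{
    // Snapshot before any modification...
    let checkpoint = state.clone();
    if let Err(e) = reorg_ops(state) {
        // ...and restore it on any failure so the caller never observes
        // a partially-applied reorganization.
        *state = checkpoint;
        return Err(format!("reorg failed, chain state restored: {}", e));
    }
    Ok(())
}
```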
…orum_public_key Added size parameter to ffi_dash_spv_get_quorum_public_key function to prevent buffer overflow vulnerabilities. The function now validates that the output buffer is at least 48 bytes before writing to it. - Added `out_pubkey_size` parameter to function signature - Implemented size validation that returns InvalidArgument error if buffer is too small - Updated function documentation to reflect new size requirements - Added NotImplemented error code for unimplemented platform functions - Created platform_integration module with placeholder implementations - Added test to verify buffer size validation works correctly This addresses the security issue identified by CodeRabbitAI in PR dashpay#80. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
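The buffer-size guard can be illustrated like this (error codes and the function name are placeholders; the real signature lives in dash-spv-ffi):

```rust
const QUORUM_PUBKEY_LEN: usize = 48;
const ERR_INVALID_ARGUMENT: i32 = -1;
const SUCCESS: i32 = 0;

/// # Safety
/// `out_pubkey` must point to at least `out_pubkey_size` writable bytes.
pub unsafe extern "C" fn get_quorum_public_key_example(
    out_pubkey: *mut u8,
    out_pubkey_size: usize,
) -> i32 {
    // Reject null or undersized buffers before any write happens.
    if out_pubkey.is_null() || out_pubkey_size < QUORUM_PUBKEY_LEN {
        return ERR_INVALID_ARGUMENT;
    }
    let key = [0u8; QUORUM_PUBKEY_LEN]; // stand-in for the real key lookup
    unsafe {
        std::ptr::copy_nonoverlapping(key.as_ptr(), out_pubkey, QUORUM_PUBKEY_LEN);
    }
    SUCCESS
}
```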
…t-aware reorg protection This commit implements several key improvements to the Dash SPV client: ## Network & Peer Discovery - Enable DNS-first peer discovery for immediate startup without hardcoded peers - Remove hardcoded peer IPs in favor of automatic DNS seed discovery - Fix network connection timeouts (increase from 15ms to 100ms) to prevent TCP partial reads - Add comprehensive timeout documentation with historical context - Change terminal UI from default-enabled to opt-in via --terminal-ui flag ## Chain Management & Sync - Add checkpoint-aware reorganization protection to prevent reorgs past checkpoint base - Improve fork detection during checkpoint sync with proper height calculations - Use blockchain height instead of storage height for checkpoint-aware progress tracking - Enhance headers locator logic for checkpoint sync (checkpoint-only, no genesis fallback) - Add validation for headers received from before checkpoint base height ## FFI Safety & SDK - Add buffer size validation to ffi_dash_spv_get_quorum_public_key for overflow protection - Update Swift SDK FFI interface with out_pubkey_size parameter - Minor formatting improvements in Swift configuration files ## Bug Fixes - Fix TCP connection timeout issues that caused message corruption and checksum failures - Resolve peer connection issues during startup when no explicit peers configured - Improve test stability with corrected storage manager initialization These changes enhance reliability, security, and performance of the SPV client, particularly for checkpoint sync scenarios and network connectivity edge cases. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
When header sync timeout triggers re-sending of requests, the same headers were being processed twice, causing false fork detection warnings. Added duplicate header check using storage.get_header_height_by_hash() to skip already processed headers, preventing unnecessary fork warnings and reorg operations during normal sync. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
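The duplicate check amounts to a lookup before fork detection, roughly like this (trait and names are illustrative):

```rust
trait HeaderStore {
    /// Returns the stored height for a header hash, if it is already known.
    fn height_by_hash(&self, hash: &[u8; 32]) -> Option<u32>;
}

fn should_process_header(storage: &dyn HeaderStore, hash: &[u8; 32]) -> bool {
    if storage.height_by_hash(hash).is_some() {
        // Already stored (e.g. a timed-out request was re-sent), so skip it
        // instead of treating the re-delivery as a fork.
        return false;
    }
    true
}
```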
…idation When fully synced, the SPV client was not requesting masternode list updates for new blocks, causing ChainLock validation to fail with "No masternode lists in engine" errors. ChainLocks require masternode lists from 8 blocks prior (per DIP24) to validate signatures. Changes: - Add masternode diff requests in handle_new_headers when masternodes enabled - Implement handle_post_sync_mnlistdiff to process diffs in FullySynced phase - Add gap detection and catch-up logic for missing masternode lists - Enhanced logging to show required masternode list heights for debugging This ensures ChainLocks can be validated immediately when they arrive for new blocks after the initial sync is complete. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Replace get_header_by_height with get_header method calls - Fix GetMnListDiff message struct naming - Comment out unimplemented chain manager functionality 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
…onization - Add FFI functions to configure start height and wallet creation time - Implement store_headers_from_height in disk storage for checkpoint-based sync - Improve header sync error handling for checkpoint scenarios with minimal headers - Add Swift SDK support for enabling checkpoint sync - Update terminal block data modules (formatting fixes) This allows SPV clients to start syncing from predefined checkpoints instead of genesis, significantly reducing initial sync time. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Update README with Unified SDK option for iOS integration - Add UNIFIED_SDK.md documenting the unified architecture approach - Update FFI crate READMEs with notes about Unified SDK availability - Modify SwiftDashCoreSDK to use static library linking for unified framework - Fix SPVClient type from OpaquePointer to UnsafeMutablePointer<FFIDashSpvClient> The Unified SDK combines Core (dash-spv-ffi, key-wallet-ffi) and Platform (dash-sdk-ffi) functionality into a single optimized XCFramework. This provides 79.4% size reduction while maintaining full API compatibility. See UNIFIED_SDK.md for integration details and platform-ios/packages/rs-sdk-ffi for the build implementation. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Add await keywords to async initializers in test setup - Update Package.swift to include macOS library paths for testing - Fix maxPeers assertion to match actual default value (12 instead of 8) - Add aarch64-apple-darwin target paths for both DashSPVFFI and KeyWalletFFI These changes allow the SwiftDashCoreSDK tests to compile and pass on macOS. All 21 tests now pass successfully. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Walkthrough

This update introduces major new features and architectural improvements to the Dash SPV client, including modular support for Bloom filters, advanced chain management (chain tips, chain work, chainlocks, checkpoints, fork detection, orphans, and reorgs), persistent sync state, mempool filtering, peer reputation, and enhanced storage for terminal blocks and mempool data. The network layer and client configuration were expanded for mempool and peer management. Extensive documentation, tests, and example programs were added to cover new functionality and usage.

Changes
Sequence Diagram(s)

sequenceDiagram
participant User
participant SPVClient
participant NetworkManager
participant Storage
participant Wallet
participant BloomManager
participant ChainManager
participant MempoolFilter
participant PeerReputationManager
User->>SPVClient: Start sync (optionally from checkpoint)
SPVClient->>NetworkManager: Connect peers (with reputation check)
NetworkManager->>PeerReputationManager: Update on connect/ban
SPVClient->>ChainManager: Load chain state, tips, checkpoints
SPVClient->>BloomManager: Build Bloom filter from wallet
SPVClient->>NetworkManager: Request headers/blocks/filters
NetworkManager->>SPVClient: Receive headers/blocks/filters
SPVClient->>ChainManager: Process headers, detect forks/reorgs
ChainManager->>Storage: Store headers, manage tips/orphans
SPVClient->>Wallet: Process transactions, update balances
SPVClient->>MempoolFilter: Filter/process mempool transactions
MempoolFilter->>Wallet: Update unconfirmed state
SPVClient->>Storage: Persist sync state, mempool, terminal blocks
SPVClient->>User: Emit events (progress, balance, reorg, etc.)
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 Clippy (1.86.0): `warning: profiles for the non root package will be ignored, specify profiles at the workspace root`
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 29
🔭 Outside diff range comments (2)
dash-spv/src/main.rs (1)
306-310: Invalid example regtest addresses. The regtest addresses appear to be invalid placeholders and don't follow the correct format.
These addresses should either be:
- Valid regtest addresses generated from a test environment
- Clearly marked as placeholders with a comment
- Removed if no valid examples are available
The current addresses like "yQ9J8qK3nNW8JL8h5T6tB3VZwwH9h5T6tB" appear malformed (duplicate suffix pattern).
dash-spv/src/network/connection.rs (1)
134-184: Address TODO: Eliminate duplicate connection setup code. The connection setup logic is duplicated between `connect()` and `connect_instance()`. This violates DRY principles and increases maintenance burden. Consider extracting the common setup logic into a shared method:

```rust
/// Common connection setup logic
fn setup_connection(stream: TcpStream, read_timeout: Duration) -> NetworkResult<ConnectionState> {
    stream.set_nodelay(true).map_err(|e| {
        NetworkError::ConnectionFailed(format!("Failed to set TCP_NODELAY: {}", e))
    })?;
    stream.set_read_timeout(Some(read_timeout)).map_err(|e| {
        NetworkError::ConnectionFailed(format!("Failed to set read timeout: {}", e))
    })?;
    let read_stream = stream.try_clone().map_err(|e| {
        NetworkError::ConnectionFailed(format!("Failed to clone stream: {}", e))
    })?;
    Ok(ConnectionState {
        stream,
        read_buffer: BufReader::new(read_stream),
    })
}
```

Would you like me to create a complete refactor that eliminates this duplication?
🧹 Nitpick comments (40)
dash-spv/examples/test_genesis.rs (1)
12-12: Consider consistent formatting for bits output. The bits are printed in hex format (`{:x}`) but the expected value shows `1e0ffff0` format. Consider using consistent formatting for easier comparison.

```diff
- println!("Bits: {:x}", mainnet_genesis.header.bits.to_consensus());
+ println!("Bits: {:08x}", mainnet_genesis.header.bits.to_consensus());
```

dash-spv/examples/test_terminal_blocks.rs (2)
13-16: Consider making test heights configurable or dynamically determined. The hardcoded test heights may become outdated as the blockchain grows. Consider making them configurable or determining them dynamically.
```diff
- let test_heights = vec![
-     387480, 400000, 450000, 500000, 550000, 600000, 650000, 700000, 750000, 760000, 800000,
-     850000, 900000,
- ];
+ // Generate test heights at regular intervals
+ let test_heights: Vec<u32> = (400_000..=900_000).step_by(50_000).collect();
+ let mut test_heights = test_heights;
+ test_heights.push(387480); // Include known terminal block
```
36-44: Add error handling for robustness. Consider adding error handling for potential failures in the terminal block operations.
```diff
 for target in test_target_heights {
-    if let Some(best) = manager.find_best_terminal_block_with_data(target) {
+    match manager.find_best_terminal_block_with_data(target) {
+        Some(best) => {
             println!(
                 "For target height {}: best terminal block is {} with {} masternodes",
                 target, best.height, best.masternode_count
             );
-    } else {
+        }
+        None => {
             println!("For target height {}: no terminal block with data found", target);
+        }
     }
 }
```

dash-spv/data/testnet/mod.rs (1)
5-110: Functional terminal block loading with appropriate error handling. The loading pattern is repetitive but functional for an auto-generated file. The error handling with `if let Ok()` is appropriate here, silently skipping invalid JSON data rather than failing the entire loading process.

The use of `include_str!` to embed JSON at compile time is efficient and ensures the data is always available. The sequential loading of terminal blocks from 387480 to 900000 provides good coverage for testnet synchronization optimization.

For potential future improvements, consider:
- Adding logging when JSON deserialization fails
- Validating that the loaded states have the expected block heights
- Adding a macro to reduce repetition if this becomes manually maintained
```diff
 // Example improvement for error handling:
 if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
     manager.add_state(state);
+} else {
+    log::warn!("Failed to parse terminal block data for height {}", height);
 }
```

dash-spv/examples/test_initial_sync.rs (3)
18-18: Consider making temporary directory creation more robust. Using process ID for temporary directory names could potentially lead to conflicts if multiple instances run simultaneously or if the directory already exists from a previous run.
Consider using the `tempfile` crate for safer temporary directory handling:

```diff
-let data_dir = PathBuf::from(format!("/tmp/dash-spv-initial-sync-{}", std::process::id()));
+use tempfile::TempDir;
+let temp_dir = TempDir::new()?;
+let data_dir = temp_dir.path().to_path_buf();
```
22-25: Hardcoded peer addresses may become stale. The hardcoded testnet peer addresses could become unreachable over time, causing test failures unrelated to the code being tested.
Consider using peer discovery or maintaining a list of known stable peers, or make the peers configurable via environment variables for CI/testing scenarios.
37-37: Fixed timing assumptions may cause flaky tests. The 10-second waits are arbitrary and may not be sufficient on slower systems or networks, potentially causing false negatives in testing.
Consider implementing a polling mechanism with timeout instead of fixed delays:
```diff
-tokio::time::sleep(Duration::from_secs(10)).await;
+// Poll for progress with timeout
+let mut attempts = 0;
+let max_attempts = 30; // 30 seconds with 1-second intervals
+loop {
+    let progress = client.sync_progress().await?;
+    if progress.header_height > 0 || attempts >= max_attempts {
+        break;
+    }
+    tokio::time::sleep(Duration::from_secs(1)).await;
+    attempts += 1;
+}
```

Also applies to: 47-48
dash-spv/data/mainnet/mod.rs (1)
8-12: Consider logging deserialization failures. The silent error handling may hide issues with corrupted or invalid terminal block data. While this might be acceptable for auto-generated code, consider adding logging for debugging purposes.
```diff
 if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
     manager.add_state(state);
+} else {
+    tracing::warn!("Failed to deserialize terminal block data for height 1088640");
 }
```

dash-spv/examples/test_headers2.rs (1)
24-27: Consider making peer addresses configurable. The hardcoded peer addresses may become unreachable over time, making the example less reliable. Consider accepting peers as command-line arguments or environment variables.
```diff
+use std::env;
+
 #[tokio::main]
 async fn main() -> Result<(), SpvError> {
+    let peers = env::args().nth(1)
+        .map(|p| vec![p.parse().unwrap()])
+        .unwrap_or_else(|| vec![
+            "seed.dash.org:9999".parse().unwrap(),
+            "dnsseed.dash.org:9999".parse().unwrap(),
+        ]);
+
-    config.peers = vec![
-        "seed.dash.org:9999".parse().unwrap(),
-        "dnsseed.dash.org:9999".parse().unwrap(),
-    ];
+    config.peers = peers;
```

dash-spv/examples/test_header_count.rs (1)
22-24: Consider proper error handling in examples. While this is an example, it should demonstrate best practices. Consider using proper error handling instead of
`unwrap()` to set a good example for users.

```diff
-    if std::path::Path::new(storage_dir).exists() {
-        std::fs::remove_dir_all(storage_dir)?;
-    }
+    if std::path::Path::new(storage_dir).exists() {
+        if let Err(e) = std::fs::remove_dir_all(storage_dir) {
+            eprintln!("Warning: Failed to clean up previous test data: {}", e);
+        }
+    }
```
- std::fs::remove_dir_all(storage_dir)?; + if let Err(e) = std::fs::remove_dir_all(storage_dir) { + eprintln!("Warning: Failed to clean up test data: {}", e); + }Also applies to: 42-43, 51-52, 95-95
dash-spv/src/chain/reorg_test.rs (1)
59-59: Document the ReorgManager parameters for clarity. The magic number
`100` should be documented to explain what it represents (likely max reorg depth).

```diff
-    let reorg_manager = ReorgManager::new(100, false);
+    // Create reorg manager with max reorg depth of 100 blocks
+    let reorg_manager = ReorgManager::new(100, false);
```

dash-spv-ffi/tests/test_mempool_tracking.rs (2)
16-43: Add null checks in callback functions for defensive programming. While the test setup ensures valid pointers, adding null checks would make these extern "C" functions more robust.
Example for the first callback:
extern "C" fn test_mempool_added( _txid: *const [u8; 32], _amount: i64, _addresses: *const c_char, _is_instant_send: bool, user_data: *mut c_void, ) { + if user_data.is_null() { + return; + } let callbacks = unsafe { &*(user_data as *const TestCallbacks) }; let mut count = callbacks.mempool_added_count.lock().unwrap(); *count += 1; }Apply similar changes to the other two callbacks.
169-172: Fix comment formatting. Minor formatting inconsistency in the comment.
```diff
     let balance_data = (*balance);
     assert_eq!(balance_data.confirmed, 0); // No confirmed balance in mempool
-    // mempool and mempool_instant fields contain the actual mempool balance
+    // mempool and mempool_instant fields contain the actual mempool balance
     dash_spv_ffi_balance_destroy(balance);
```

dash-spv/src/main.rs (1)
180-183: Document the special handling of u32::MAX for "now". The use of
`u32::MAX` as a special value should be documented in the code for clarity.

```diff
 if start_height_str == "now" {
-    // Use a very high number to get the latest checkpoint
+    // Use u32::MAX as a special sentinel value that the sync logic
+    // will interpret as "start from the latest available checkpoint"
     config.start_from_height = Some(u32::MAX);
     tracing::info!("Will start syncing from the latest available checkpoint");
```

dash-spv-ffi/tests/test_event_callbacks.rs (1)
125-126: Consider making the test timeout configurable or use a more robust waiting mechanism. The fixed 5-second sleep can make tests flaky, especially in CI environments. Consider using a configurable timeout or implementing a polling mechanism that checks for events periodically.
```diff
-    // Wait a bit for events to be processed
-    thread::sleep(Duration::from_secs(5));
+    // Wait for events to be processed with configurable timeout
+    let timeout = Duration::from_secs(std::env::var("TEST_TIMEOUT").map(|s| s.parse().unwrap_or(5)).unwrap_or(5));
+    thread::sleep(timeout);
```

dash-spv/src/chain/chain_work.rs (1)
97-112: Enhance hex parsing with input validation. The hex parsing implementation handles basic cases but could benefit from additional validation for edge cases.
```diff
 pub fn from_hex(hex: &str) -> Result<Self, String> {
     // Remove 0x prefix if present
     let hex = hex.strip_prefix("0x").unwrap_or(hex);

+    // Validate hex string contains only valid hex characters
+    if !hex.chars().all(|c| c.is_ascii_hexdigit()) {
+        return Err("Invalid hex characters".to_string());
+    }
+
     // Parse hex string to bytes
     let bytes = hex::decode(hex).map_err(|e| format!("Invalid hex: {}", e))?;
```

dash-spv/docs/TERMINAL_BLOCKS.md (2)
28-31: Add language specifier to code block for better rendering. The code block should specify a language for proper syntax highlighting and linting compliance.
````diff
-```
+```text
 Request: Genesis (0) → Current (1,276,272)
 Diff size: ~500MB, covering 1.2M blocks
````

34-37: Add language specifier to code block for consistency. Similar to the previous code block, this should specify a language for proper rendering.

````diff
-```
+```text
 Request: Terminal Block (900,000) → Current (1,276,272)
 Diff size: ~100MB, covering 376K blocks
````

dash-spv/src/chain/chain_tip.rs (1)

104-104: Remove unnecessary clone() on BlockHash. `BlockHash` implements `Copy`, so the `clone()` call is redundant.

```diff
-    self.tips.insert(tip_hash.clone(), tip);
+    self.tips.insert(*tip_hash, tip);
```

dash-spv/src/chain/orphan_pool.rs (1)
7-10: Consider making orphan pool limits configurable. The hardcoded constants
`MAX_ORPHAN_BLOCKS` and `ORPHAN_TIMEOUT` might not be suitable for all use cases. Consider accepting these as configuration parameters to allow flexibility for different network conditions or resource constraints.

dash-spv/src/bloom/builder.rs (1)
124-127: Improve error handling specificity. Consider preserving the original error type instead of wrapping in a generic
`SpvError::General`. This would provide better error context for callers. Either add a specific error variant for bloom filter creation failures or use a more structured error approach that preserves the original error as a source.
dash-spv/src/network/mod.rs (2)
310-314: Track the headers2 compatibility issue. The headers2 support is intentionally disabled with a TODO comment. Please ensure this is tracked in your issue tracker to avoid it being forgotten.
Would you like me to open an issue to track the headers2 decompression fix?
325-339: Clarify the peer update behavior. The comment mentions "update the peer info if we have one", but the implementation only logs and stores the preference locally. Consider clarifying whether this is the intended behavior for single-peer connections or if actual peer state update is needed.
```diff
-    // For single peer connection, update the peer info if we have one
+    // For single peer connection, log the DSQ preference change
     if let Some(connection) = &self.connection {
```

dash-spv/src/bloom/manager.rs (1)
277-298: Consider consolidating the internal state for more efficient clearing. While the current approach is safe (separate scopes prevent deadlock), acquiring 5 write locks sequentially could be optimized by consolidating the internal state into a single lockable structure.
dash-spv/src/network/handshake.rs (3)
38-42: Consider documenting the redundancy between state enum and boolean flags. The handshake now tracks state both via the
`HandshakeState` enum and separate boolean flags (`version_received`, `verack_received`, `version_sent`). While this provides more granular tracking, it creates potential for inconsistency. Consider adding a comment explaining why both approaches are used, or consolidate into a single state representation:
pub struct HandshakeManager { _network: Network, state: HandshakeState, our_version: u32, peer_version: Option<u32>, peer_services: Option<ServiceFlags>, + // Boolean flags provide granular state tracking independent of the state enum + // This allows for more flexible handshake flow, especially for peer-initiated handshakes version_received: bool, verack_received: bool, version_sent: bool, mempool_strategy: MempoolStrategy, }
252-255: Consider using a more descriptive error for address parsing. While the error handling is correct, the error message could be more specific about what address failed to parse.
- let local_addr = "127.0.0.1:0" - .parse() - .map_err(|_| NetworkError::AddressParse("Failed to parse local address".to_string()))?; + let local_addr = "127.0.0.1:0" + .parse() + .map_err(|e| NetworkError::AddressParse(format!("Failed to parse local address '127.0.0.1:0': {}", e)))?;
291-297: Document why headers2 is disabled. The method always sends SendHeaders regardless of peer support, with a comment that headers2 is disabled. Consider adding more context about why it's disabled and any plans to enable it.
```diff
 async fn negotiate_headers2(&self, connection: &mut TcpConnection) -> NetworkResult<()> {
-    // Headers2 is currently disabled due to protocol compatibility issues
+    // Headers2 (compressed headers) is currently disabled due to protocol compatibility issues
+    // TODO: Enable headers2 support once protocol issues are resolved (see issue #XX)
     // Always send SendHeaders regardless of peer support
     tracing::info!("Headers2 is disabled - sending SendHeaders only");
     connection.send_message(NetworkMessage::SendHeaders).await?;
     Ok(())
 }
```

dash-spv/src/chain/reorg.rs (1)
275-280: Consider using interior mutability to avoid needing write access to storage during the read phase. The current implementation requires
`&mut dyn StorageManager` even in the read-only collect phase. According to the learnings, the StorageManager trait uses `&mut self` but implementations often use interior mutability.
&dyn StorageManagerin the collect phase methods to better reflect the read-only nature and reduce potential for borrow conflicts. The StorageManager implementations can handle concurrency internally.dash-spv/src/mempool_filter.rs (1)
54-57: Add documentation for BloomFilter strategy behavior.The comment mentions that bloom filter checking is "handled by the network layer" but doesn't explain the implications.
MempoolStrategy::BloomFilter => { - // For bloom filter strategy, we would check the bloom filter - // This is handled by the network layer + // For bloom filter strategy, always return true here. + // The actual filtering is performed by BIP37 bloom filters at the network layer, + // which prevent irrelevant transactions from being announced via INV messages. true }dash-spv/src/chain/chainlock_manager.rs (1)
17-26: Consider using a time abstraction for testability.The ChainLockEntry uses std::time::SystemTime directly, which makes testing time-based behavior difficult.
17-26: Consider using a time abstraction for testability. The ChainLockEntry uses std::time::SystemTime directly, which makes testing time-based behavior difficult.
pub trait TimeProvider: Send + Sync { fn now(&self) -> std::time::SystemTime; } pub struct ChainLockEntry { pub chain_lock: ChainLock, pub received_at: std::time::SystemTime, pub validated: bool, }dash-spv/src/bloom/stats.rs (2)
42-42: Consider using Option for min_query_time_us to handle initialization more clearly.The current logic uses 0 as a sentinel value, which prevents recording actual zero-microsecond queries. Using Option would make the intent clearer.
-pub min_query_time_us: u64, +pub min_query_time_us: Option<u64>,Then update the comparison:
-if self.stats.query_performance.min_query_time_us == 0 - || micros < self.stats.query_performance.min_query_time_us -{ - self.stats.query_performance.min_query_time_us = micros; -} +match self.stats.query_performance.min_query_time_us { + None => self.stats.query_performance.min_query_time_us = Some(micros), + Some(min) if micros < min => self.stats.query_performance.min_query_time_us = Some(micros), + _ => {} +}Also applies to: 111-115
158-160: Document or make configurable the bandwidth estimation multiplier.The hardcoded multiplier of 10 for bandwidth estimation is arbitrary and could produce misleading statistics.
Consider adding a constant with documentation:
+/// Estimated ratio of unrelated transactions to relevant ones in the network +const BANDWIDTH_MULTIPLIER: u64 = 10; -self.stats.network_impact.bandwidth_saved_bytes += (tx_size * 10) as u64; -// Rough estimate +// Estimate bandwidth saved by not downloading unrelated transactions +// Assumes roughly BANDWIDTH_MULTIPLIER unrelated txs for each relevant one +self.stats.network_impact.bandwidth_saved_bytes += (tx_size * BANDWIDTH_MULTIPLIER) as u64;dash-spv/src/network/multi_peer.rs (2)
1271-1275: Track the Headers2 protocol issue.Headers2 support is currently disabled. The TODO indicates decompression issues need to be fixed.
Would you like me to create an issue to track the Headers2 decompression fix? This seems like an important protocol feature that should be properly tracked.
1285-1302: Consider the race condition in DSQ preference updates.The
update_peer_dsq_preferencemethod relies onlast_message_peerwhich could be updated by another thread between checking and using it. While this is logged as informational only, consider if this could cause issues.If precise peer tracking is needed, consider passing the peer address explicitly rather than relying on the last message peer.
dash-network-ffi/src/dash_network_ffiFFI.h (1)
11-11: Fix typo in comment: "uniffied" → "unified"-// of the FFI, and thus useful to multiple uniffied crates. +// of the FFI, and thus useful to multiple unified crates.dash-spv/src/chain/checkpoints.rs (1)
154-166: Optimize checkpoint search by timestamp.The current implementation scans all checkpoints linearly. Since checkpoints are ordered by height and timestamps generally increase with height, consider using the sorted heights for a more efficient search.
pub fn last_checkpoint_before_timestamp(&self, timestamp: u32) -> Option<&Checkpoint> { - let mut best_checkpoint = None; - let mut best_height = 0; - - for checkpoint in self.checkpoints.values() { - if checkpoint.timestamp <= timestamp && checkpoint.height >= best_height { - best_height = checkpoint.height; - best_checkpoint = Some(checkpoint); - } - } - - best_checkpoint + // Binary search would be more complex here since we're searching by timestamp + // but checkpoints are sorted by height. For now, iterate in reverse order + // to find the highest checkpoint before the timestamp + self.sorted_heights + .iter() + .rev() + .filter_map(|&height| self.checkpoints.get(&height)) + .find(|checkpoint| checkpoint.timestamp <= timestamp) }dash-spv/src/sync/filters.rs (3)
384-424: Good defensive programming for chain tip handling. The addition of fallback logic to try the previous header when the exact tip header is not found is a solid improvement that handles edge cases at the chain tip gracefully.
Consider extracting this repeated pattern into a helper method to reduce code duplication:
+ /// Get header at height with fallback to previous height if at chain tip + async fn get_header_with_tip_fallback( + &self, + storage: &dyn StorageManager, + height: u32, + is_tip: bool, + ) -> SyncResult<BlockHash> { + match storage.get_header(height).await { + Ok(Some(header)) => Ok(header.block_hash()), + Ok(None) if is_tip && height > 0 => { + tracing::debug!( + "Tip header not found at height {}, trying previous header", + height + ); + storage + .get_header(height - 1) + .await + .map_err(|e| { + SyncError::Storage(format!( + "Failed to get previous header: {}", + e + )) + })? + .ok_or_else(|| { + SyncError::Storage(format!( + "Neither tip ({}) nor previous header found", + height + )) + }) + .map(|h| h.block_hash()) + } + Ok(None) => { + Err(SyncError::Validation(format!( + "Header not found at height {}", + height + ))) + } + Err(e) => { + Err(SyncError::Storage(format!( + "Failed to get header at height {}: {}", + height, e + ))) + } + } + }Also applies to: 538-578, 684-722
2266-2273: Consider explicit error handling for lock failures. The method silently returns 0 when the mutex lock fails, which could hide underlying issues.
Consider logging the lock failure or propagating the error:
- pub fn get_received_filter_count(&self) -> u32 { - if let Ok(heights) = self.received_filter_heights.lock() { - heights.len() as u32 - } else { - 0 - } - } + pub fn get_received_filter_count(&self) -> u32 { + match self.received_filter_heights.lock() { + Ok(heights) => heights.len() as u32, + Err(e) => { + tracing::error!("Failed to lock received filter heights: {}", e); + 0 + } + } + }
2071-2073: Good defensive programming for collection access. Adding error handling for the
`remove()` operation is good defensive programming, even though this case should theoretically never occur. Consider a more descriptive error message:
- let mut filter_match = self.pending_block_downloads.remove(pos) - .ok_or_else(|| SyncError::InvalidState("filter match should exist at position".to_string()))?; + let mut filter_match = self.pending_block_downloads.remove(pos) + .ok_or_else(|| SyncError::InvalidState(format!( + "Inconsistent state: pending_block_downloads should have element at position {} for block {}", + pos, block_hash + )))?;dash-spv-ffi/src/client.rs (1)
1102-1110: Verify event listener state instead of just logging. The comment "The event listener should already be running from start()" suggests uncertainty. Consider actively checking if the listener thread is running and starting it if needed, rather than just logging.
- // The event listener should already be running from start() - // but we log this for debugging + // Ensure event listener is running + if !self.is_event_listener_running() { + self.start_event_listener(); + }
```rust
#[test]
#[serial]
fn test_sync_diagnostic() {
    unsafe {
        // Create testnet config for the diagnostic test
        let config = dash_spv_ffi_config_testnet();
        let temp_dir = TempDir::new().unwrap();
        let path = CString::new(temp_dir.path().to_str().unwrap()).unwrap();
        dash_spv_ffi_config_set_data_dir(config, path.as_ptr());

        // Enable test mode to use deterministic peers
        dash_spv_ffi_enable_test_mode();

        // Create client
        let client = dash_spv_ffi_client_new(config);
        assert!(!client.is_null(), "Failed to create client");

        // Start the client
        let start_result = dash_spv_ffi_client_start(client);
        if start_result != FFIErrorCode::Success as i32 {
            println!("Warning: Failed to start client, error code: {}", start_result);
            let error = dash_spv_ffi_get_last_error();
            if !error.is_null() {
                let error_str = CString::from_raw(error as *mut _);
                println!("Error message: {:?}", error_str);
            }
        }

        // Run the diagnostic sync test
        println!("Running sync diagnostic test...");
        let test_result = dash_spv_ffi_client_test_sync(client);

        if test_result == FFIErrorCode::Success as i32 {
            println!("✅ Sync test passed!");
        } else {
            println!("❌ Sync test failed with error code: {}", test_result);
            let error = dash_spv_ffi_get_last_error();
            if !error.is_null() {
                let error_str = std::ffi::CStr::from_ptr(error);
                println!("Error message: {:?}", error_str);
            }
        }

        // Stop and cleanup
        let _stop_result = dash_spv_ffi_client_stop(client);
        dash_spv_ffi_client_destroy(client);
        dash_spv_ffi_config_destroy(config);
    }
}
```
Fix incorrect CString::from_raw usage and improve error handling.
The test has several issues that need to be addressed:
- Incorrect `CString::from_raw` usage (line 236): the function expects a mutable pointer, but casting to `*mut _` here is unsafe
- Inconsistent error string handling: lines 236 and 251 use different approaches to handle error strings
- Memory safety: The current error handling could lead to memory leaks or double-free issues
Apply this diff to fix the error handling:
- let error_str = CString::from_raw(error as *mut _);
- println!("Error message: {:?}", error_str);
+ let error_str = std::ffi::CStr::from_ptr(error);
+ println!("Error message: {:?}", error_str);Also, ensure consistent error handling throughout the test by using CStr::from_ptr instead of CString::from_raw since the error string is managed by the FFI layer.
🤖 Prompt for AI Agents
In dash-spv-ffi/tests/test_client.rs from lines 213 to 261, fix the incorrect
usage of CString::from_raw by replacing it with CStr::from_ptr for error string
handling to avoid unsafe mutable pointer casting and potential memory issues.
Ensure consistent error handling by using CStr::from_ptr in all places where
error strings are accessed from the FFI layer, preventing memory leaks or
double-free errors since the FFI manages the error string memory. Update both
error handling blocks accordingly to use CStr::from_ptr and adjust the printing
logic to display the error messages safely.
// DNS seeds for Dash testnet
- pub const TESTNET_DNS_SEEDS: &[&str] = &["testnet-seed.dashdot.io", "test.dnsseed.masternode.io"];
+ pub const TESTNET_DNS_SEEDS: &[&str] = &["testnet-seed.dashdot.io"];
💡 Verification agent
🧩 Analysis chain
Verify testnet peer discovery with reduced DNS seeds.
Reducing TESTNET_DNS_SEEDS from 2 to 1 entry limits peer discovery options. Ensure the remaining seed (testnet-seed.dashdot.io) is reliable and provides sufficient peer diversity.
Consider monitoring testnet connectivity to ensure adequate peer discovery with the single DNS seed.
🌐 Web query:
Is testnet-seed.dashdot.io a reliable and actively maintained DNS seed for Dash testnet?
💡 Result:
The Dash Core documentation lists the following DNS seeds for the Dash testnet:
- test.dnsseed.masternode.io
- testnet-seed.darkcoin.qa
- testnet-seed.dashpay.io
These are the officially recognized DNS seeds for the Dash testnet. (docs.dash.org)
The DNS seed testnet-seed.dashdot.io is not listed among the official Dash testnet DNS seeds. While it may function as a DNS seed, its reliability and maintenance status are not officially documented. For optimal performance and security, it's recommended to use the official DNS seeds provided by Dash.
Replace unverified DNS seed with official Dash testnet seeds
The seed testnet-seed.dashdot.io isn’t listed in the Dash Core docs as an official testnet DNS seed and may not be reliable long-term. To ensure robust peer discovery and diversity, update this to use the official seeds:
• Replace in dash-spv/src/network/constants.rs (around line 31):
-pub const TESTNET_DNS_SEEDS: &[&str] = &["testnet-seed.dashdot.io"];
+pub const TESTNET_DNS_SEEDS: &[&str] = &[
+ "test.dnsseed.masternode.io",
+ "testnet-seed.darkcoin.qa",
+ "testnet-seed.dashpay.io",
+];

• Confirm connectivity on CI or in staging to catch any peer-discovery regressions.
• Optionally monitor testnet node counts over time to validate sufficient peer diversity.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- pub const TESTNET_DNS_SEEDS: &[&str] = &["testnet-seed.dashdot.io"];
+ pub const TESTNET_DNS_SEEDS: &[&str] = &[
+     "test.dnsseed.masternode.io",
+     "testnet-seed.darkcoin.qa",
+     "testnet-seed.dashpay.io",
+ ];
🤖 Prompt for AI Agents
In dash-spv/src/network/constants.rs at line 31, replace the current testnet DNS
seed "testnet-seed.dashdot.io" with the official Dash testnet DNS seeds as
listed in the Dash Core documentation to ensure reliable and diverse peer
discovery. After updating the seeds, verify connectivity through CI or staging
environments to detect any peer-discovery issues and consider monitoring testnet
node counts over time for validation.
this is the proper testnet seed used by dash core.
@PastaPastaPasta Thank you for the correction! I appreciate you clarifying that testnet-seed.dashdot.io is indeed the proper testnet seed used by Dash Core. My previous assessment was incorrect - I apologize for the confusion. It's good to know that the change in the diff is removing an outdated or unreliable seed while keeping the correct one.
🐰
(•‿•)
/| |
/ |___|
Learning something new!
✏️ Learnings added
Learnt from: PastaPastaPasta
PR: dashpay/rust-dashcore#87
File: dash-spv/src/network/constants.rs:31-31
Timestamp: 2025-07-16T15:59:17.291Z
Learning: The testnet DNS seed "testnet-seed.dashdot.io" is the proper testnet seed used by Dash Core, not the seeds listed in some documentation. This is the correct and recommended seed for Dash testnet peer discovery.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: QuantumExplorer
PR: dashpay/rust-dashcore#74
File: key-wallet/src/address.rs:30-40
Timestamp: 2025-06-15T15:31:44.136Z
Learning: In the key-wallet crate, QuantumExplorer prefers keeping wildcard arms that default unknown Network variants to testnet prefixes rather than using exhaustive matches or panics. This fallback behavior is intentional.
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.
pub fn load_mainnet_terminal_blocks(manager: &mut TerminalBlockDataManager) {
    // Terminal block 1088640
    {
        let data = include_str!("terminal_block_1088640.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1100000
    {
        let data = include_str!("terminal_block_1100000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1150000
    {
        let data = include_str!("terminal_block_1150000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1200000
    {
        let data = include_str!("terminal_block_1200000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1250000
    {
        let data = include_str!("terminal_block_1250000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1300000
    {
        let data = include_str!("terminal_block_1300000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1350000
    {
        let data = include_str!("terminal_block_1350000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1400000
    {
        let data = include_str!("terminal_block_1400000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1450000
    {
        let data = include_str!("terminal_block_1450000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1500000
    {
        let data = include_str!("terminal_block_1500000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1550000
    {
        let data = include_str!("terminal_block_1550000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1600000
    {
        let data = include_str!("terminal_block_1600000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1650000
    {
        let data = include_str!("terminal_block_1650000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1700000
    {
        let data = include_str!("terminal_block_1700000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1720000
    {
        let data = include_str!("terminal_block_1720000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1750000
    {
        let data = include_str!("terminal_block_1750000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1800000
    {
        let data = include_str!("terminal_block_1800000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1850000
    {
        let data = include_str!("terminal_block_1850000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1900000
    {
        let data = include_str!("terminal_block_1900000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 1950000
    {
        let data = include_str!("terminal_block_1950000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
    // Terminal block 2000000
    {
        let data = include_str!("terminal_block_2000000.json");
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        }
    }
}
💡 Verification agent
🧩 Analysis chain
Verify terminal block data integrity.
Since this module loads critical terminal block data that affects synchronization, ensure the embedded JSON files are validated during the build process.
🏁 Script executed:
#!/bin/bash
# Description: Verify that all referenced JSON files exist and are valid JSON
# Expected: All JSON files should exist and be parseable
echo "Checking for terminal block JSON files..."
for height in 1088640 1100000 1150000 1200000 1250000 1300000 1350000 1400000 1450000 1500000 1550000 1600000 1650000 1700000 1720000 1750000 1800000 1850000 1900000 1950000 2000000; do
file="dash-spv/data/mainnet/terminal_block_${height}.json"
if [ -f "$file" ]; then
echo "✓ Found: $file"
if ! jq empty "$file" 2>/dev/null; then
echo "✗ Invalid JSON: $file"
fi
else
echo "✗ Missing: $file"
fi
done

Length of output: 9284
Missing terminal block JSON files
The load_mainnet_terminal_blocks function references 21 JSON files (terminal_block_1088640.json through terminal_block_2000000.json), but only terminal_block_2000000.json is present under dash-spv/data/mainnet/. This will cause compile‐time failures and prevent synchronization.
Please address the following:
- Add all missing terminal_block_<height>.json files to dash-spv/data/mainnet/.
- Introduce a build.rs (or CI script) that:
  - Ensures each include_str!-referenced file exists before compilation.
  - Validates the JSON schema/format (e.g., try deserializing with serde_json).
- Optionally, add a unit test that iterates over dash-spv/data/mainnet/*.json and asserts successful parsing to catch invalid JSON early.
🤖 Prompt for AI Agents
In dash-spv/data/mainnet/mod.rs from lines 5 to 174, the function
load_mainnet_terminal_blocks references 21 JSON files but only
terminal_block_2000000.json exists, causing compile-time errors. Fix this by
adding all missing terminal_block_<height>.json files to the
dash-spv/data/mainnet/ directory. Additionally, create a build.rs script or CI
check that verifies each included JSON file exists and can be successfully
deserialized with serde_json before compilation. Optionally, add a unit test
that iterates over all JSON files in dash-spv/data/mainnet/ and asserts they
parse correctly to catch invalid JSON early.
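A minimal sketch of the suggested unit test, assuming the JSON files live under data/mainnet/ in the crate and that TerminalBlockMasternodeState implements serde's Deserialize (both taken from the snippet above); the module and test names are illustrative:

#[cfg(test)]
mod terminal_block_data_tests {
    use super::TerminalBlockMasternodeState;

    #[test]
    fn all_terminal_block_json_files_parse() {
        // Iterate over every JSON file shipped with the crate and assert it deserializes.
        let dir = std::path::Path::new(env!("CARGO_MANIFEST_DIR")).join("data/mainnet");
        let mut checked = 0usize;
        for entry in std::fs::read_dir(&dir).expect("data/mainnet directory should exist") {
            let path = entry.expect("readable directory entry").path();
            if path.extension().and_then(|e| e.to_str()) == Some("json") {
                let data = std::fs::read_to_string(&path).expect("readable JSON file");
                serde_json::from_str::<TerminalBlockMasternodeState>(&data)
                    .unwrap_or_else(|e| panic!("{} failed to deserialize: {}", path.display(), e));
                checked += 1;
            }
        }
        assert!(checked > 0, "no terminal block JSON files found in {}", dir.display());
    }
}

Because include_str! already fails the build when a file is missing, the test mainly guards against files that exist but do not deserialize, which the silent if let Ok(...) in load_mainnet_terminal_blocks would otherwise swallow.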
| println!("\n🧪 Testing headers2 fix...\n"); | ||
|
|
||
| let addr = "192.168.1.163:19999".parse().unwrap(); |
🛠️ Refactor suggestion
Hardcoded IP address reduces portability.
The hardcoded IP address 192.168.1.163:19999 appears to be a private network address, making this example only usable in specific network environments.
Consider accepting the address as a command-line argument:
+use std::env;
+
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
- let addr = "192.168.1.163:19999".parse().unwrap();
+ let addr = env::args().nth(1)
+ .unwrap_or_else(|| "192.168.1.163:19999".to_string())
+ .parse()
+     .expect("Invalid address format");

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- let addr = "192.168.1.163:19999".parse().unwrap();
+ use std::env;
+
+ #[tokio::main]
+ async fn main() -> Result<(), Box<dyn std::error::Error>> {
+     let addr = env::args().nth(1)
+         .unwrap_or_else(|| "192.168.1.163:19999".to_string())
+         .parse()
+         .expect("Invalid address format");
+     // rest of your implementation...
+ }
🤖 Prompt for AI Agents
In dash-spv/examples/test_headers2_fix.rs at line 20, the IP address is
hardcoded to a private network address, reducing portability. Modify the code to
accept the address as a command-line argument instead of hardcoding it. Use a
suitable argument parsing method to read the address from the command line and
parse it, ensuring the example can run in different network environments.
connection_timeout: Duration::from_secs(5),
message_timeout: Duration::from_secs(30),
sync_timeout: Duration::from_secs(60),
read_timeout: Duration::from_millis(15),
🛠️ Refactor suggestion
Consider increasing the read timeout value.
A 15ms read timeout is extremely low for network operations and may cause flaky tests. Consider using at least 1 second for test stability.
- read_timeout: Duration::from_millis(15),
+ read_timeout: Duration::from_secs(1),

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- read_timeout: Duration::from_millis(15),
+ read_timeout: Duration::from_secs(1),
🤖 Prompt for AI Agents
In dash-spv/src/network/tests.rs at line 24, the read_timeout is set to 15
milliseconds, which is too low for reliable network operations in tests.
Increase the read_timeout value to at least 1 second (1000 milliseconds) to
improve test stability and reduce flakiness.
/// Cancels the sync operation.
///
/// **Note**: This function currently only stops the SPV client and clears sync callbacks,
/// but does not fully abort the ongoing sync process. The sync operation may continue
/// running in the background until it completes naturally. Full sync cancellation with
/// proper task abortion is not yet implemented.
///
/// # Safety
/// The client pointer must be valid and non-null.
///
/// # Returns
/// Returns 0 on success, or an error code on failure.
#[no_mangle]
pub unsafe extern "C" fn dash_spv_ffi_client_cancel_sync(client: *mut FFIDashSpvClient) -> i32 {
    null_check!(client);

    let client = &(*client);

    // Clear callbacks to stop progress updates and unregister from the registry
    let mut cb_guard = client.sync_callbacks.lock().unwrap();
    if let Some(ref callback_data) = *cb_guard {
        CALLBACK_REGISTRY.lock().unwrap().unregister(callback_data.callback_id);
    }
    *cb_guard = None;

    // TODO: Implement proper sync task cancellation using cancellation tokens or abort handles.
    // Currently, this only stops the client, but the sync task may continue running in the background.
    let inner = client.inner.clone();
    let result = client.runtime.block_on(async {
        let mut guard = inner.lock().unwrap();
        if let Some(ref mut spv_client) = *guard {
            spv_client.stop().await
        } else {
            Err(dash_spv::SpvError::Config("Client not initialized".to_string()))
        }
    });

    match result {
        Ok(_) => FFIErrorCode::Success as i32,
        Err(e) => {
            set_last_error(&e.to_string());
            FFIErrorCode::from(e) as i32
        }
    }
}
🛠️ Refactor suggestion
Document cancellation limitations more prominently.
While the limitation is documented, users might still expect immediate cancellation. Consider:
- Returning a status indicating partial cancellation
- Adding a force_cancel parameter for future implementation
- Tracking active sync operations to provide status feedback

The current behavior could lead to resource waste if users think sync has stopped but it continues in the background.
Would you like me to help implement proper sync cancellation using tokio's abort handles or cancellation tokens?
🤖 Prompt for AI Agents
In dash-spv-ffi/src/client.rs around lines 710 to 754, the current cancel_sync
function only stops the client and clears callbacks but does not fully abort the
ongoing sync task, which may mislead users. To fix this, update the function
signature to include a force_cancel boolean parameter for future use, modify the
return value to indicate partial cancellation when full abort is not
implemented, and enhance the documentation to clearly state these limitations
and expected behavior. Additionally, consider adding internal tracking of active
sync operations to provide accurate status feedback to callers.
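A minimal sketch of the cancellation-token approach mentioned above, assuming the sync loop runs as a task on the client's tokio runtime and that tokio-util is available; the SyncTask type and its fields are illustrative, not an existing API in this crate:

use tokio::task::JoinHandle;
use tokio_util::sync::CancellationToken;

struct SyncTask {
    handle: JoinHandle<()>,
    token: CancellationToken,
}

impl SyncTask {
    /// Spawn a long-running sync loop that checks the token between units of work.
    fn spawn(runtime: &tokio::runtime::Runtime) -> Self {
        let token = CancellationToken::new();
        let child = token.clone();
        let handle = runtime.spawn(async move {
            loop {
                tokio::select! {
                    // Stop cooperatively as soon as cancellation is requested.
                    _ = child.cancelled() => break,
                    // Placeholder for one unit of sync work (e.g. one getheaders round-trip).
                    _ = tokio::time::sleep(std::time::Duration::from_millis(100)) => {}
                }
            }
        });
        Self { handle, token }
    }

    /// Request cooperative cancellation and, as a last resort, abort the task.
    fn cancel(self) {
        self.token.cancel();
        self.handle.abort();
    }
}

With something like this stored on the FFI client, dash_spv_ffi_client_cancel_sync could cancel the token first and only abort if the task does not wind down promptly.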
/// Performs a test synchronization of the SPV client
///
/// # Parameters
/// - `client`: Pointer to an FFIDashSpvClient instance
///
/// # Returns
/// - `0` on success
/// - Negative error code on failure
///
/// # Safety
/// This function is unsafe because it dereferences a raw pointer.
/// The caller must ensure that the client pointer is valid.
#[no_mangle]
pub unsafe extern "C" fn dash_spv_ffi_client_test_sync(client: *mut FFIDashSpvClient) -> i32 {
    null_check!(client);

    let client = &(*client);
    let result = client.runtime.block_on(async {
        let mut guard = client.inner.lock().unwrap();
        if let Some(ref mut spv_client) = *guard {
            println!("Starting test sync...");

            // Get initial height
            let start_height = match spv_client.sync_progress().await {
                Ok(progress) => progress.header_height,
                Err(e) => {
                    eprintln!("Failed to get initial height: {}", e);
                    return Err(e);
                }
            };
            println!("Initial height: {}", start_height);

            // Start sync
            match spv_client.sync_to_tip().await {
                Ok(_) => println!("Sync started successfully"),
                Err(e) => {
                    eprintln!("Failed to start sync: {}", e);
                    return Err(e);
                }
            }

            // Wait a bit for headers to download
            tokio::time::sleep(Duration::from_secs(10)).await;

            // Check if headers increased
            let end_height = match spv_client.sync_progress().await {
                Ok(progress) => progress.header_height,
                Err(e) => {
                    eprintln!("Failed to get final height: {}", e);
                    return Err(e);
                }
            };
            println!("Final height: {}", end_height);

            if end_height > start_height {
                println!("✅ Sync working! Downloaded {} headers", end_height - start_height);
                Ok(())
            } else {
-               let err = dash_spv::SpvError::Storage(dash_spv::StorageError::NotFound(
-                   "Client not initialized".to_string(),
-               ));
-               callbacks.call_completion(false, Some(&err.to_string()));
-               Err(err)
+               let msg = "No headers downloaded".to_string();
+               eprintln!("❌ {}", msg);
+               Err(dash_spv::SpvError::Sync(dash_spv::SyncError::SyncFailed(msg)))
            }
-           });
        } else {
            Err(dash_spv::SpvError::Config("Client not initialized".to_string()))
        }
    });

-   if let Err(e) = result {
+   match result {
+       Ok(_) => FFIErrorCode::Success as i32,
+       Err(e) => {
+           set_last_error(&e.to_string());
+           FFIErrorCode::from(e) as i32
+       }
+   }
}
🛠️ Refactor suggestion
Test/debug code should be clearly marked or improved.
This function appears to be for testing/debugging purposes but is exposed in the production API. Consider:
- Moving to a test module if it's only for testing
- Using proper logging instead of println!
- Making the wait duration configurable instead of hardcoding 10 seconds
- Adding documentation clarifying its purpose
🤖 Prompt for AI Agents
In dash-spv-ffi/src/client.rs lines 470 to 544, the test synchronization
function is exposed in the production API and uses println! for output, has a
hardcoded 10-second wait, and lacks clear documentation of its test/debug
purpose. To fix this, move the function to a dedicated test module if it is only
for testing, replace println! and eprintln! calls with proper logging macros,
make the sleep duration configurable via a parameter or configuration, and
update the function documentation to clearly state it is intended for testing or
debugging only.
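One shape the logging and configurable-wait suggestions could take, sketched as a free-standing helper rather than the crate's actual FFI entry point; the helper name and the use of the log facade are assumptions for illustration:

use std::time::Duration;

/// Illustrative helper: the wait becomes a parameter instead of a hardcoded 10 seconds,
/// and output goes through the `log` facade instead of println!/eprintln!.
async fn wait_for_header_progress<F>(wait: Duration, start_height: u32, current_height: F) -> bool
where
    F: Fn() -> u32,
{
    tokio::time::sleep(wait).await;
    let end_height = current_height();
    if end_height > start_height {
        log::info!("sync progressed: {} -> {} headers", start_height, end_height);
        true
    } else {
        log::warn!("no headers downloaded after {:?}", wait);
        false
    }
}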
    // Parse the txid string to a Txid type
    if let Ok(txid_parsed) = txid.parse::<dashcore::Txid>() {
        callbacks.call_transaction(&txid_parsed, confirmed, amount as i64, addresses, block_height);
    } else {
        tracing::error!("Failed to parse transaction ID: {}", txid);
    }
}
dash_spv::types::SpvEvent::BlockProcessed { height, ref hash, transactions_count, relevant_transactions } => {
    tracing::info!("📦 Block processed: height={}, hash={}, total_tx={}, relevant_tx={}",
        height, hash, transactions_count, relevant_transactions);
    // Parse the block hash string to a BlockHash type
    if let Ok(hash_parsed) = hash.parse::<dashcore::BlockHash>() {
        callbacks.call_block(height, &hash_parsed);
    } else {
        tracing::error!("Failed to parse block hash: {}", hash);
    }
🛠️ Refactor suggestion
Consider notifying callbacks of parse failures.
When txid or block hash parsing fails, the error is only logged and the event is silently dropped. Consider either:
- Calling the callback with a special error indicator
- Having a separate error callback for such failures
- At minimum, incrementing an error counter that can be queried
This would help developers diagnose missing events.
🤖 Prompt for AI Agents
In dash-spv-ffi/src/client.rs around lines 234 to 249, the code logs errors when
parsing txid or block hash fails but does not notify callbacks, causing silent
drops. Modify the code to notify callbacks of these parse failures by either
invoking a callback with an error indicator, adding a dedicated error callback
for parse failures, or incrementing an error counter accessible for diagnostics.
This ensures that parse errors are communicated and can be tracked effectively.
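A minimal sketch of the error-counter option, assuming a process-wide atomic counter and a new FFI getter are acceptable; the names below are illustrative, not part of the existing API:

use std::sync::atomic::{AtomicU64, Ordering};

/// Hypothetical counter tracking events dropped because a txid or block hash failed to parse.
static PARSE_ERROR_COUNT: AtomicU64 = AtomicU64::new(0);

fn record_parse_failure(kind: &str, raw: &str) {
    PARSE_ERROR_COUNT.fetch_add(1, Ordering::Relaxed);
    tracing::error!("Failed to parse {}: {}", kind, raw);
}

/// Hypothetical FFI accessor so consumers can detect silently dropped events.
#[no_mangle]
pub extern "C" fn dash_spv_ffi_get_parse_error_count() -> u64 {
    PARSE_ERROR_COUNT.load(Ordering::Relaxed)
}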
Ok(mempool_balance) => {
    // Convert MempoolBalance to FFIBalance
    let balance = FFIBalance {
        confirmed: 0, // No confirmed balance in mempool
        pending: mempool_balance.pending.to_sat(),
        instantlocked: 0, // No confirmed instantlocked in mempool
        mempool: mempool_balance.pending.to_sat()
            + mempool_balance.pending_instant.to_sat(),
        mempool_instant: mempool_balance.pending_instant.to_sat(),
        total: mempool_balance.pending.to_sat() + mempool_balance.pending_instant.to_sat(),
    };
    Box::into_raw(Box::new(balance))
💡 Verification agent
🧩 Analysis chain
Consider using From trait for balance conversion.
The manual construction of FFIBalance could be replaced with a From trait implementation if one exists for MempoolBalance to FFIBalance conversion. This would centralize the conversion logic and reduce maintenance burden.
🏁 Script executed:
#!/bin/bash
# Check if From trait exists for MempoolBalance to FFIBalance conversion
ast-grep --pattern 'impl From<$_> for FFIBalance {
$$$
}'
# Also check for any existing conversion methods
rg -A 5 'impl.*FFIBalance|From.*Balance.*for.*FFIBalance'Length of output: 2690
Add From<MempoolBalance> for FFIBalance and simplify conversion
Centralize the mempool balance → FFIBalance logic by defining a From<MempoolBalance> impl and replacing the manual construction in client.rs.
• In dash-spv-ffi/src/wallet.rs (add after the existing From impls):
impl From<dash_spv::types::MempoolBalance> for FFIBalance {
fn from(balance: dash_spv::types::MempoolBalance) -> Self {
let pending = balance.pending.to_sat();
let pending_instant = balance.pending_instant.to_sat();
FFIBalance {
confirmed: 0,
pending,
instantlocked: 0,
mempool: pending + pending_instant,
mempool_instant: pending_instant,
total: pending + pending_instant,
}
}
}• In dash-spv-ffi/src/client.rs (lines 1831–1842), replace:
// current manual construction
let balance = FFIBalance {
confirmed: 0,
pending: mempool_balance.pending.to_sat(),
instantlocked: 0,
mempool: mempool_balance.pending.to_sat()
+ mempool_balance.pending_instant.to_sat(),
mempool_instant: mempool_balance.pending_instant.to_sat(),
total: mempool_balance.pending.to_sat()
+ mempool_balance.pending_instant.to_sat(),
};
Box::into_raw(Box::new(balance))

with:
// use the From impl
let ffi_balance: FFIBalance = mempool_balance.into();
Box::into_raw(Box::new(ffi_balance))

🤖 Prompt for AI Agents
In dash-spv-ffi/src/wallet.rs after the existing From impls, add an
implementation of From<MempoolBalance> for FFIBalance that centralizes the
conversion logic by extracting pending and pending_instant sats and constructing
FFIBalance accordingly. Then in dash-spv-ffi/src/client.rs lines 1831 to 1842,
replace the manual FFIBalance construction with a call to mempool_balance.into()
to use the new From impl, and box that result as before.
let result = client.runtime.block_on(async {
    let guard = inner.lock().unwrap();
    if let Some(ref spv_client) = *guard {
        // Get all watched addresses
        let watch_items = spv_client.get_watch_items().await;
        let mut total_confirmed = 0u64;
        let mut total_unconfirmed = 0u64;

        // Sum up balances for all watched addresses
        for item in watch_items {
            if let dash_spv::types::WatchItem::Address {
                address,
                ..
            } = item
            {
                match spv_client.get_address_balance(&address).await {
                    Ok(balance) => {
                        total_confirmed += balance.confirmed.to_sat();
                        total_unconfirmed += balance.unconfirmed.to_sat();
                        tracing::debug!(
                            "Address {} balance: confirmed={}, unconfirmed={}",
                            address,
                            balance.confirmed,
                            balance.unconfirmed
                        );
                    }
                    Err(e) => {
                        tracing::warn!("Failed to get balance for address {}: {}", address, e);
                    }
                }
            }
        }
    });

        Ok(dash_spv::types::AddressBalance {
            confirmed: dashcore::Amount::from_sat(total_confirmed),
            unconfirmed: dashcore::Amount::from_sat(total_unconfirmed),
            pending: dashcore::Amount::from_sat(0),
            pending_instant: dashcore::Amount::from_sat(0),
        })
    } else {
        Err(dash_spv::SpvError::Storage(dash_spv::StorageError::NotFound(
            "Client not initialized".to_string(),
        )))
    }
});
Address balance errors should be propagated.
Currently, if getting balance for an address fails, it's only logged as a warning and excluded from the total. This could report an incorrect total balance without the caller knowing. Consider:
- Collecting errors and returning them alongside the balance
- Failing the entire operation if any address fails
- At minimum, returning a flag indicating partial results
- Err(e) => {
- tracing::warn!("Failed to get balance for address {}: {}", address, e);
- }
+ Err(e) => {
+ tracing::warn!("Failed to get balance for address {}: {}", address, e);
+ // Consider propagating this error or tracking failed addresses
+ return Err(e);
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
let result = client.runtime.block_on(async {
    let guard = inner.lock().unwrap();
    if let Some(ref spv_client) = *guard {
        // Get all watched addresses
        let watch_items = spv_client.get_watch_items().await;
        let mut total_confirmed = 0u64;
        let mut total_unconfirmed = 0u64;

        // Sum up balances for all watched addresses
        for item in watch_items {
            if let dash_spv::types::WatchItem::Address {
                address,
                ..
            } = item
            {
                match spv_client.get_address_balance(&address).await {
                    Ok(balance) => {
                        total_confirmed += balance.confirmed.to_sat();
                        total_unconfirmed += balance.unconfirmed.to_sat();
                        tracing::debug!(
                            "Address {} balance: confirmed={}, unconfirmed={}",
                            address,
                            balance.confirmed,
                            balance.unconfirmed
                        );
                    }
                    Err(e) => {
                        tracing::warn!("Failed to get balance for address {}: {}", address, e);
                        // Consider propagating this error or tracking failed addresses
                        return Err(e);
                    }
                }
            }
        }

        Ok(dash_spv::types::AddressBalance {
            confirmed: dashcore::Amount::from_sat(total_confirmed),
            unconfirmed: dashcore::Amount::from_sat(total_unconfirmed),
            pending: dashcore::Amount::from_sat(0),
            pending_instant: dashcore::Amount::from_sat(0),
        })
    } else {
        Err(dash_spv::SpvError::Storage(dash_spv::StorageError::NotFound(
            "Client not initialized".to_string(),
        )))
    }
});
🤖 Prompt for AI Agents
In dash-spv-ffi/src/client.rs around lines 1528 to 1572, the code currently logs
errors when failing to get an address balance but continues processing, which
can lead to incorrect total balances without notifying the caller. To fix this,
modify the logic to propagate these errors by either collecting all errors and
returning them with the balance, or by failing the entire operation immediately
upon any error. At minimum, add a mechanism to indicate partial results so the
caller is aware of incomplete data.