fix: invalid pcv #831

Merged
merged 1 commit into main from fix/invalid-pcv on Mar 28, 2025

Conversation

@gfyrag (Collaborator) commented Mar 28, 2025

No description provided.

@gfyrag gfyrag requested a review from a team as a code owner March 28, 2025 09:29

coderabbitai bot commented Mar 28, 2025

Walkthrough

This PR introduces multiple updates to SQL migration scripts. A new JSON entry is added to the postings array in one test file. Several migration scripts update the transaction identifier from transactions_id to transactions_seq, revising aggregation logic and indexing accordingly. In addition, the expected test JSON output is expanded with new SELL key-value pairs. One migration script implements a new procedure that creates a temporary table, processes data in batches with notifications, and cleans up after execution.

Changes

  • internal/storage/bucket/migrations/0-init-schema/up_tests_after.sql: Inserted a new JSON object into the postings array, with a dynamic "destination" field and static "source", "asset", and "amount" fields, positioned before the existing orders entry.
  • internal/storage/bucket/migrations/19-transactions-fill-pcv/up.sql and internal/storage/bucket/migrations/27-fix-invalid-pcv/up.sql: Replaced transactions_id with transactions_seq across temporary table creation, aggregation logic (using jsonb_build_object and public.aggregate_objects), query grouping, and index creation.
  • internal/storage/bucket/migrations/19-transactions-fill-pcv/up_tests_after.sql: Updated the expected JSON output to include additional SELL-related key-value pairs for both "world" and "sellers:0".
  • internal/storage/bucket/migrations/28-fix-pcv-missing-asset/up.sql: Introduced a new procedure that processes transaction volumes in batches: it creates and indexes a temporary table, checks for records, processes the data in a loop with progress notifications, and drops the temporary table when done.
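
For orientation, the post-commit volumes ("pcv") column stores, per transaction, a jsonb document of per-account, per-asset totals. A plausible value for one of the test transactions, inferred from the SELL expectations discussed below (the input/output field names are an assumption, not taken from this PR):

    -- hypothetical pcv document for a single transaction
    select '{
      "world":     {"SELL": {"input": 0, "output": 1}},
      "sellers:0": {"SELL": {"input": 1, "output": 0}}
    }'::jsonb as post_commit_volumes;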

Sequence Diagram(s)

sequenceDiagram
    participant Proc as Procedure
    participant TV as Temporary Table (moves_view)
    participant TT as Transactions Table
    participant Notif as Notification System

    Proc->>TV: Drop existing moves_view and create new one
    Proc->>TV: Create index on transactions_seq
    Proc->>TV: Check for records
    alt moves_view is empty
        Proc->>Notif: Send exit notification
        Proc->>Proc: Exit procedure
    else moves_view has records
        loop For each batch
            Proc->>TV: Select batch of records
            Proc->>TT: Update transactions with aggregated JSON data
            Proc->>Notif: Send progress notification
        end
    end
    Proc->>TV: Drop moves_view

Possibly related PRs

Suggested labels

build-images

Suggested reviewers

  • Dav-14
  • flemzord

Poem

I’m a rabbit in a maze of code,
Hop by hop, I follow each new node.
SQL carrots line my trail so bright,
Migrating errors into data light.
With a twitch of my nose and a joyful leap,
I celebrate changes before I go to sleep!
🐰


codecov bot commented Mar 28, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 81.97%. Comparing base (d38461e) to head (bc1b96b).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #831      +/-   ##
==========================================
+ Coverage   81.92%   81.97%   +0.05%     
==========================================
  Files         137      137              
  Lines        7512     7512              
==========================================
+ Hits         6154     6158       +4     
+ Misses       1040     1038       -2     
+ Partials      318      316       -2     


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (5)
internal/storage/bucket/migrations/19-transactions-fill-pcv/up_tests_after.sql (1)

2-3: Verify Expected JSON Update

The updated expected variable now includes the additional "SELL" key for both the "world" and "sellers:0" objects. Please confirm that this new JSON structure matches the schema and test expectations introduced in the linked migration scripts.

internal/storage/bucket/migrations/0-init-schema/up_tests_after.sql (1)

17-22: Verify New "SELL" Posting Entry

A new posting entry for the "SELL" asset has been inserted into the JSON array. The snippet correctly constructs the object with:

  • "destination": "sellers:" || (seq % 5)
  • "source": "world"
  • "asset": "SELL"
  • "amount": 1

Ensure that this aligns with the updated test expectations and the related changes in the schema.
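
For reference, the entry's shape can be reproduced with jsonb_build_object; this is a sketch only, since the test file may inline the object as a JSON literal instead (generate_series stands in for the test's seq source):

    -- sketch: one "SELL" posting per seq value
    select jsonb_build_object(
        'destination', 'sellers:' || (seq % 5),
        'source',      'world',
        'asset',       'SELL',
        'amount',      1
    ) as posting
    from generate_series(1, 5) as seq;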

internal/storage/bucket/migrations/19-transactions-fill-pcv/up.sql (1)

10-11: Validate Aggregation Logic for Transaction Volumes

The creation of the temporary table moves_view now uses transactions_seq with aggregation via public.aggregate_objects and JSON operations. Verify that casting transactions_seq to numeric (as seen in the inner query) is required and that it correctly supports downstream operations.
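
A condensed sketch of the pattern under discussion, with column names taken from this review (public.aggregate_objects is assumed to be the repository's custom jsonb-merging aggregate, and moves.seq as the ordering column is likewise an assumption):

    create temporary table moves_view as
    select transactions_seq,
           public.aggregate_objects(jsonb_build_object(
               accounts_address,
               jsonb_build_object(asset, post_commit_volumes)
           )) as volumes
    from (
        -- keep only the latest row per (transaction, account, asset)
        select distinct on (transactions_seq, accounts_address, asset)
               transactions_seq, accounts_address, asset,
               first_value(post_commit_volumes) over (
                   partition by transactions_seq, accounts_address, asset
                   order by seq desc
               ) as post_commit_volumes
        from moves
    ) latest
    group by transactions_seq;

    create index moves_view_idx on moves_view (transactions_seq);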

internal/storage/bucket/migrations/27-fix-invalid-pcv/up.sql (1)

10-11: Validate Aggregation Logic for PCV

The temporary table moves_view is again built using transactions_seq and the aggregation function. Ensure that the JSON construction (via jsonb_build_object) and conversion to numeric are returning the expected format for downstream PCV calculations.
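
The assumed semantics of public.aggregate_objects is a recursive jsonb merge; a quick way to sanity-check that assumption against the real definition:

    -- assumed: merging two per-account documents combines their asset keys
    select public.aggregate_objects(doc) as merged
    from (values
        ('{"world": {"USD":  {"input": 0, "output": 10}}}'::jsonb),
        ('{"world": {"SELL": {"input": 0, "output": 1}}}'::jsonb)
    ) as t(doc);
    -- expected: {"world": {"USD":  {"input": 0, "output": 10},
    --                      "SELL": {"input": 0, "output": 1}}}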

internal/storage/bucket/migrations/28-fix-pcv-missing-asset/up.sql (1)

1-7: New Migration for PCV Missing Asset

This new migration script addresses the missing asset issue in the PCV calculations. It constructs the temporary table moves_view by aggregating volume data and, importantly, utilizes json_build_object(asset, json_build_object(...)) to ensure the asset is included. Please confirm that this change fully resolves the missing asset problem in related PCV logic.
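
A minimal before/after sketch of the aggregated value, illustrating why nesting the asset via json_build_object(asset, json_build_object(...)) fixes the problem (key names assumed):

    -- before the fix, the asset key was missing from the document:
    --   {"world": {"input": 0, "output": 1}}
    -- after the fix, the asset is an explicit intermediate key:
    --   {"world": {"SELL": {"input": 0, "output": 1}}}
    select jsonb_build_object(
        'world',
        jsonb_build_object('SELL',
            jsonb_build_object('input', 0, 'output', 1))
    ) as post_commit_volumes;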

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d38461e and bc1b96b.

⛔ Files ignored due to path filters (4)
  • internal/storage/bucket/migrations/28-fix-pcv-missing-asset/notes.yaml is excluded by !**/*.yaml
  • internal/storage/bucket/migrations/29-accounts-recreate-unique-index/notes.yaml is excluded by !**/*.yaml
  • internal/storage/bucket/migrations/30-clean-database/notes.yaml is excluded by !**/*.yaml
  • internal/storage/bucket/migrations/31-logs-async-hash-procedure/notes.yaml is excluded by !**/*.yaml
📒 Files selected for processing (5)
  • internal/storage/bucket/migrations/0-init-schema/up_tests_after.sql (1 hunks)
  • internal/storage/bucket/migrations/19-transactions-fill-pcv/up.sql (3 hunks)
  • internal/storage/bucket/migrations/19-transactions-fill-pcv/up_tests_after.sql (1 hunks)
  • internal/storage/bucket/migrations/27-fix-invalid-pcv/up.sql (3 hunks)
  • internal/storage/bucket/migrations/28-fix-pcv-missing-asset/up.sql (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Tests
🔇 Additional comments (10)
internal/storage/bucket/migrations/19-transactions-fill-pcv/up_tests_after.sql (1)

9-9: PL/pgSQL Block Terminator Check

The closing delimiter ($$) flagged in this diff appears correctly placed. Just ensure that formatting or minor changes around the closing block do not affect execution.

internal/storage/bucket/migrations/19-transactions-fill-pcv/up.sql (4)

15-20: Ensure Consistency in Transaction Sequence Handling

The inner query uses DISTINCT ON (moves.transactions_seq, accounts_address, asset) together with a window function to pick the latest post_commit_volumes. This change—migrating from the old identifier to transactions_seq—appears correct. Just double-check that all parts of the system now consistently refer to transactions_seq.


29-29: Index Creation on Transaction Sequence

The index on moves_view(transactions_seq) supports the new query pattern and the updated foreign key. This change is appropriate for maintaining performance.


37-47: Review Batch Update Logic

In the loop, the update statement matches rows via where transactions.seq = data.transactions_seq. Please verify that the field transactions.seq is the correct reference for the new transaction identifier and that the batching logic (using _offset and _batch_size) correctly processes all records.
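
A sketch of the batching shape this comment refers to; only the variable names _offset and _batch_size and the join condition are taken from the script, the rest is assumed:

    do $$
    declare
        _batch_size integer := 1000;
        _offset     integer := 0;
    begin
        loop
            update transactions
            set post_commit_volumes = data.volumes
            from (
                select transactions_seq, volumes
                from moves_view
                order by transactions_seq
                offset _offset
                limit _batch_size
            ) data
            where transactions.seq = data.transactions_seq;

            exit when not found;  -- no rows left to update
            _offset := _offset + _batch_size;
            raise notice 'updated through offset %', _offset;
        end loop;
    end
    $$;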


60-63: Alter Table Constraint Addition

Adding the post_commit_volumes_not_null constraint at the end appears safe. Confirm that the data updated during the batch process meets this constraint and that the not valid clause is intentional (perhaps for a later validation phase).
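
If the not valid clause is indeed there to defer validation, the usual two-step shape applies; only the constraint name and the not valid clause are taken from the script, the check expression is an assumption:

    alter table transactions
        add constraint post_commit_volumes_not_null
        check (post_commit_volumes is not null)
        not valid;

    -- later, once the backfill is confirmed complete:
    -- alter table transactions
    --     validate constraint post_commit_volumes_not_null;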

internal/storage/bucket/migrations/27-fix-invalid-pcv/up.sql (3)

15-20: Confirm Transaction Sequence Grouping

The inner query now groups by moves.transactions_seq, accounts_address, and asset using DISTINCT ON and window functions. This logic appears to be designed to extract the latest posting values—please confirm that this meets the desired data semantics.


29-29: Indexing Verification

Creating an index on moves_view(transactions_seq) is consistent with recent changes and should help query performance during the update phase.


37-47: Review Batch Processing Update

The loop for batch processing uses the updated transactions_seq to select and update data. Verify that the join condition where transactions.seq = data.transactions_seq correctly maps to the new schema and that no legacy references remain.

internal/storage/bucket/migrations/28-fix-pcv-missing-asset/up.sql (2)

15-26: Check Aggregation Query for Asset Volumes

The inner query groups by transactions_seq and accounts_address while employing a DISTINCT ON clause and window function to correctly extract the latest post_commit_volumes. Verify that this updated aggregation accurately reflects the intended per-asset volume breakdown.


37-47: Review Batch Processing Loop

The batch update loop is consistent with other migration scripts, updating the transactions table using the newly aggregated volumes. Please ensure that the join condition involving transactions.seq matches the transactions_seq field and that the batching logic performs as expected.

@gfyrag gfyrag added this pull request to the merge queue Mar 28, 2025
Merged via the queue into main with commit ace5f9a Mar 28, 2025
10 checks passed
@gfyrag gfyrag deleted the fix/invalid-pcv branch March 28, 2025 11:10