
Conversation


@gfyrag gfyrag commented Nov 18, 2025

No description provided.

@gfyrag gfyrag requested a review from a team as a code owner November 18, 2025 15:15

coderabbitai bot commented Nov 18, 2025

Walkthrough

A PostgreSQL migration file removes 11 indexes from various tables, using a DO block to set the schema search_path and execute the DROP INDEX statements.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Database index cleanup migration**<br>`internal/storage/bucket/migrations/41-clean-indexes/up.sql` | Adds a `DO $$ ... $$` migration block that sets the schema search_path and drops 11 indexes across multiple tables (no new objects created) |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Verify the 11 index names match existing indexes in the target schema
  • Confirm the migration's search_path value is correct for intended environments
  • Ensure DROP INDEX statements are safe to run (no dependencies or unintended side effects)
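As a starting point for the first item, a query along these lines can list which of the targeted index names actually exist in the target schema. This is a sketch: `'my_schema'` is a placeholder for the resolved `{{ .Schema }}` value, not a name taken from the repository.

```sql
-- Sketch: list which of the targeted indexes exist before migrating.
-- 'my_schema' is a placeholder for the resolved {{ .Schema }} value.
select indexname, tablename, indexdef
from pg_indexes
where schemaname = 'my_schema'
  and indexname in (
    'accounts_address_array', 'accounts_address_array_length',
    'moves_ledger', 'moves_range_dates', 'moves_account_address',
    'moves_date', 'moves_asset', 'transactions_metadata_metadata',
    'accounts_metadata_metadata', 'transactions_date',
    'transactions_metadata_index'
  );
```

Any name missing from the result set would make the corresponding unconditional `DROP INDEX` fail.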

Poem

🐰 I hopped through schemas, tidy and spry,
Thirteen old indexes waved goodbye,
A DO block whispered, "Clean and neat,"
The meadow of storage now skips a beat,
I nibble on bytes and hop in delight.

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Description check | ❓ Inconclusive | No pull request description was provided by the author. | Add a description explaining why these specific indexes are being removed and the expected performance impact. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title clearly describes the main change: removing indexes to improve write performance in a database migration. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 51d9cec and 63a4a41.

⛔ Files ignored due to path filters (1)
  • internal/storage/bucket/migrations/41-clean-indexes/notes.yaml is excluded by !**/*.yaml
📒 Files selected for processing (1)
  • internal/storage/bucket/migrations/41-clean-indexes/up.sql (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • internal/storage/bucket/migrations/41-clean-indexes/up.sql
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: BenchmarkCompare
  • GitHub Check: Tests
  • GitHub Check: Dirty

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
internal/storage/bucket/migrations/41-clean-indexes/up.sql (1)

17-17: Add newline at end of file.

Following POSIX convention, add a newline character at the end of the file.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between fba221d and 237db46.

⛔ Files ignored due to path filters (1)
  • internal/storage/bucket/migrations/41-clean-indexes/notes.yaml is excluded by !**/*.yaml
📒 Files selected for processing (1)
  • internal/storage/bucket/migrations/41-clean-indexes/up.sql (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Dirty
  • GitHub Check: Tests
  • GitHub Check: BenchmarkCompare
🔇 Additional comments (1)
internal/storage/bucket/migrations/41-clean-indexes/up.sql (1)

1-17: Validate that removing these indexes doesn't degrade query performance.

The PR objective is to "try to remove indexes to speed up writes," but dropping indexes can significantly impact read performance. Before merging, ensure:

  • Query performance has been profiled with and without these indexes
  • Read-heavy queries have been tested to confirm acceptable performance
  • There's a clear performance baseline and acceptance criteria

Can you provide evidence or results from performance testing comparing query execution times before and after index removal? Have slow query logs or EXPLAIN ANALYZE results been collected?
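One way to gather that evidence is an EXPLAIN ANALYZE baseline captured before and after the drop. The query below is purely illustrative: the table and column names are assumptions inferred from the index names (`moves_account_address`, `moves_date`), not taken from the actual schema.

```sql
-- Sketch: baseline a representative read path before and after the migration.
-- Table/column names are assumptions inferred from the index names.
explain (analyze, buffers)
select count(*)
from moves
where account_address = 'orders:1234'          -- hypothetical value
  and date >= now() - interval '30 days';
```

Comparing the plans (index scan vs. sequential scan) and actual execution times for a handful of such queries would give a concrete before/after picture.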

Comment on lines 1 to 19
```sql
do $$
	begin
		set search_path = '{{ .Schema }}';

		drop index accounts_address_array;
		drop index accounts_address_array_length;
		drop index moves_ledger;
		drop index moves_range_dates;
		drop index moves_account_address;
		drop index moves_date;
		drop index moves_asset;
		drop index transactions_metadata_metadata;
		drop index accounts_metadata_metadata;
		drop index transactions_date;
		drop index transactions_metadata_index;
	end
$$;
```

(no newline at end of file)

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify the down migration exists and correctly recreates the dropped indexes.

Since this is an "up" migration that permanently removes indexes, ensure a corresponding "down" migration (down.sql) exists that recreates these indexes with their original definitions (including uniqueness, column order, and any special options).

Run the following script to verify the down migration file exists and contains the index recreations:


🏁 Script executed:

```shell
#!/bin/bash
# Description: Verify down migration exists and contains CREATE INDEX statements

DOWN_MIGRATION="internal/storage/bucket/migrations/41-clean-indexes/down.sql"

if [ ! -f "$DOWN_MIGRATION" ]; then
  echo "ERROR: Down migration file not found at $DOWN_MIGRATION"
  exit 1
fi

echo "Down migration file found. Checking for CREATE INDEX statements:"
grep -c "create index" "$DOWN_MIGRATION" || echo "WARNING: No 'create index' statements found in down migration"

echo ""
echo "Down migration content:"
cat "$DOWN_MIGRATION"
```

Length of output: 100



Wait, I need the codebase to check these files. Let me correct this:

---

🏁 Script executed:

```shell
#!/bin/bash
# Description: Find all files in the 41-clean-indexes migration directory

echo "Checking migration directory structure:"
fd . "internal/storage/bucket/migrations/41-clean-indexes" || echo "Directory not found"

echo ""
echo "Listing all migration files in 41-clean-indexes:"
ls -la "internal/storage/bucket/migrations/41-clean-indexes" 2>/dev/null || echo "Cannot list directory"

echo ""
echo "Searching for any down.sql files in entire migrations directory:"
fd "down\.sql" "internal/storage/bucket/migrations"
```

Length of output: 381


🏁 Script executed:

```shell
#!/bin/bash
# Description: Verify migration directory structure and search for down migration

echo "Checking if migration directory exists:"
ls -la "internal/storage/bucket/migrations/41-clean-indexes/" 2>&1

echo ""
echo "Searching for any down.sql files in migrations:"
find "internal/storage/bucket/migrations" -name "down.sql" 2>/dev/null | head -20

echo ""
echo "Checking migration directory layout:"
fd --type f "41-clean-indexes" "internal/storage/bucket/migrations"
```

Length of output: 293


Down migration file is completely missing — migration cannot be rolled back.

Verification confirms that internal/storage/bucket/migrations/41-clean-indexes/ contains only up.sql with no corresponding down.sql file. The directory has been searched exhaustively and no down migrations exist anywhere in the migrations directory.

This is a critical issue: the up migration permanently drops 11 indexes (accounts_address_array, accounts_address_array_length, moves_ledger, moves_range_dates, moves_account_address, moves_date, moves_asset, transactions_metadata_metadata, accounts_metadata_metadata, transactions_date, transactions_metadata_index) with zero rollback capability.

Required action: Create internal/storage/bucket/migrations/41-clean-indexes/down.sql that recreates all 11 dropped indexes with their original definitions (including uniqueness constraints, column order, and any special options).

🤖 Prompt for AI Agents
In internal/storage/bucket/migrations/41-clean-indexes/ around lines 1-17, the
up migration drops 11 indexes but there is no down.sql to recreate them; add a
new file internal/storage/bucket/migrations/41-clean-indexes/down.sql that
recreates each dropped index with the exact original definitions (names, column
lists and order, uniqueness, expression/functional definitions, partial WHERE
clauses, and any index method or options) using the same schema variable (set
search_path = '{{ .Schema }}'); if the original definitions are not known,
retrieve them from git history or the production DB (pg_indexes/pg_get_indexdef)
and reproduce them verbatim in the down.sql so the migration can be safely
rolled back.
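For the retrieval step the prompt mentions, the `pg_indexes.indexdef` column (built on `pg_get_indexdef`) yields ready-to-run CREATE INDEX statements. A sketch, with `'my_schema'` again standing in for the resolved schema name:

```sql
-- Sketch: dump the original index definitions from an environment where
-- they still exist, then copy them verbatim into down.sql.
select indexdef || ';'
from pg_indexes
where schemaname = 'my_schema'
  and indexname in (
    'accounts_address_array', 'accounts_address_array_length',
    'moves_ledger', 'moves_range_dates', 'moves_account_address',
    'moves_date', 'moves_asset', 'transactions_metadata_metadata',
    'accounts_metadata_metadata', 'transactions_date',
    'transactions_metadata_index'
  );
```

The emitted statements preserve column order, expressions, partial WHERE clauses, and the index method, which is exactly what the down migration needs to reproduce.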

Comment on lines +5 to +15
```sql
drop index accounts_address_array;
drop index accounts_address_array_length;
drop index moves_ledger;
drop index moves_range_dates;
drop index moves_account_address;
drop index moves_date;
drop index moves_asset;
drop index transactions_metadata_metadata;
drop index accounts_metadata_metadata;
drop index transactions_date;
drop index transactions_metadata_index;
```

⚠️ Potential issue | 🔴 Critical

Add IF EXISTS to all DROP INDEX statements to ensure idempotent migrations.

Dropping indexes without IF EXISTS will cause the entire migration to fail if any index doesn't exist in a given environment (due to deployment variance, pre-existing manual drops, or schema state differences). This is a critical issue for production migrations.

Apply this diff to make the migration idempotent:

```diff
 do $$
 	begin
 		set search_path = '{{ .Schema }}';
 
-		drop index accounts_address_array;
-		drop index accounts_address_array_length;
-		drop index moves_ledger;
-		drop index moves_range_dates;
-		drop index moves_account_address;
-		drop index moves_date;
-		drop index moves_asset;
-		drop index transactions_metadata_metadata;
-		drop index accounts_metadata_metadata;
-		drop index transactions_date;
-		drop index transactions_metadata_index;
+		drop index if exists accounts_address_array;
+		drop index if exists accounts_address_array_length;
+		drop index if exists moves_ledger;
+		drop index if exists moves_range_dates;
+		drop index if exists moves_account_address;
+		drop index if exists moves_date;
+		drop index if exists moves_asset;
+		drop index if exists transactions_metadata_metadata;
+		drop index if exists accounts_metadata_metadata;
+		drop index if exists transactions_date;
+		drop index if exists transactions_metadata_index;
 	end
 $$;
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change:

```sql
do $$
	begin
		set search_path = '{{ .Schema }}';

		drop index if exists accounts_address_array;
		drop index if exists accounts_address_array_length;
		drop index if exists moves_ledger;
		drop index if exists moves_range_dates;
		drop index if exists moves_account_address;
		drop index if exists moves_date;
		drop index if exists moves_asset;
		drop index if exists transactions_metadata_metadata;
		drop index if exists accounts_metadata_metadata;
		drop index if exists transactions_date;
		drop index if exists transactions_metadata_index;
	end
$$;
```
🤖 Prompt for AI Agents
internal/storage/bucket/migrations/41-clean-indexes/up.sql lines 5-15: the DROP
INDEX statements are not idempotent and will fail if an index is already absent;
update each DROP INDEX to use "DROP INDEX IF EXISTS <index_name>;" so the
migration becomes safe to run multiple times and tolerant of environments where
some indexes are missing.


codecov bot commented Nov 18, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.13%. Comparing base (fba221d) to head (63a4a41).
⚠️ Report is 4 commits behind head on main.

Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main    #1137      +/-   ##
==========================================
+ Coverage   78.38%   82.13%   +3.75%
==========================================
  Files         190      190
  Lines        9408     9319      -89
==========================================
+ Hits         7374     7654     +280
+ Misses       1305     1214      -91
+ Partials      729      451     -278
```

☔ View full report in Codecov by Sentry.

@gfyrag gfyrag force-pushed the experiments/write-only branch from 237db46 to 51d9cec Compare November 19, 2025 15:27
@gfyrag gfyrag force-pushed the experiments/write-only branch from 51d9cec to 63a4a41 Compare November 21, 2025 15:51
2 participants