Create data migration for moving to new tables #315
Conversation
Walkthrough

Renamed schema tables (users_new→users, account_followers→account_follows), added user_relays, made relays.url UNIQUE, updated/added updated_at triggers, and added a six-step data migration (0006) to populate users, relays, user_relays, and accounts_new. Rust SQL and tests updated to use the new table names.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor MigrationRunner
    participant SQLite
    MigrationRunner->>SQLite: INSERT INTO users (pubkey,metadata) SELECT DISTINCT pubkey, json(...) FROM contacts WHERE NOT EXISTS(users.pubkey)
    MigrationRunner->>SQLite: INSERT OR IGNORE INTO relays(url) SELECT DISTINCT url FROM json_each(contacts.*)
    MigrationRunner->>SQLite: INSERT OR IGNORE INTO user_relays(user_id,relay_id,relay_type) SELECT users.id, relays.id, 'nip65' FROM contacts↔users↔relays
    MigrationRunner->>SQLite: INSERT OR IGNORE INTO relays(url) SELECT DISTINCT url FROM json_each(accounts.*)
    MigrationRunner->>SQLite: INSERT OR IGNORE INTO user_relays(...) SELECT users.id, relays.id, relay_type FROM accounts↔users↔relays
    MigrationRunner->>SQLite: INSERT INTO accounts_new(pubkey,user_id,settings,last_synced_at) SELECT ..., datetime(last_synced,'unixepoch') FROM accounts WHERE NOT EXISTS(...)
```

```mermaid
sequenceDiagram
    actor App
    participant DB as SQLite
    App->>DB: SELECT u.*, f.* FROM users u JOIN account_follows f ON f.user_id = u.id WHERE f.account_id = ?
    DB-->>App: rows -> mapped User objects
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~35 minutes
jgmontoya left a comment:
Looks good, just the account relays are missing.
Actionable comments posted: 5
🧹 Nitpick comments (2)
db_migrations/0006_data_migration.sql (2)
14-22: Normalize relay URLs to avoid false duplicates and empty/whitespace entries.

Optional but recommended: trim whitespace during extraction and filter on TRIM(url) != ''. Consider lowercasing the scheme/host if appropriate for your domain.
Example diff for one block (apply similarly to others):
```diff
-SELECT DISTINCT
-    json_extract(relay_value.value, '$') as url
+SELECT DISTINCT
+    TRIM(json_extract(relay_value.value, '$')) AS url
 ...
-    AND json_extract(relay_value.value, '$') IS NOT NULL
-    AND json_extract(relay_value.value, '$') != '';
+    AND json_extract(relay_value.value, '$') IS NOT NULL
+    AND TRIM(json_extract(relay_value.value, '$')) != '';
```

If you adopt trimming here, make sure the JOINs to relays in Steps 3 and 5 also use the same TRIM() so lookups match the normalized relays.
Also applies to: 24-31, 34-41, 88-96, 98-105, 108-115
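A runnable sketch of the trimming suggestion, using Python's sqlite3 against an in-memory database (the staging table and relay URLs are hypothetical, purely for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE relays (url TEXT UNIQUE)")
con.execute("CREATE TABLE staging (url TEXT)")

# Raw values as they might appear inside a JSON relay list: the same URL
# with stray whitespace, plus a whitespace-only entry.
raw = ["wss://relay.example ", " wss://relay.example", "wss://relay.example", "   "]
con.executemany("INSERT INTO staging VALUES (?)", [(u,) for u in raw])

# TRIM before inserting and filter on the trimmed value, mirroring the diff above;
# OR IGNORE lets relays.url UNIQUE absorb any remaining duplicates.
con.execute("""
    INSERT OR IGNORE INTO relays (url)
    SELECT DISTINCT TRIM(url) FROM staging
    WHERE url IS NOT NULL AND TRIM(url) != ''
""")
rows = con.execute("SELECT url FROM relays").fetchall()
print(rows)  # one normalized row instead of three near-duplicates
```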
1-1: Consider wrapping the migration in a transaction.

To ensure atomicity and speed, wrap the migration in BEGIN IMMEDIATE; ... COMMIT; unless your migration runner already does.

```sql
BEGIN IMMEDIATE;
-- migration statements...
COMMIT;
```
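A minimal sketch of the atomicity this buys: if any statement fails mid-migration, the rollback leaves no half-migrated rows (table and URLs here are made up):

```python
import sqlite3

# isolation_level=None gives manual transaction control in Python's sqlite3 module.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE relays (url TEXT UNIQUE)")

try:
    con.execute("BEGIN IMMEDIATE")
    con.execute("INSERT INTO relays VALUES ('wss://a.example')")
    con.execute("INSERT INTO relays VALUES ('wss://a.example')")  # UNIQUE violation
    con.execute("COMMIT")
except sqlite3.IntegrityError:
    con.execute("ROLLBACK")

count = con.execute("SELECT COUNT(*) FROM relays").fetchone()[0]
print(count)  # 0: the first insert was rolled back along with the failed one
```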
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
db_migrations/0006_data_migration.sql (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: check (ubuntu-latest, native)
🔇 Additional comments (1)
db_migrations/0006_data_migration.sql (1)
167-169: Verify last_synced units (seconds vs milliseconds) before using datetime(..., 'unixepoch').

If last_synced is milliseconds, datetime(..., 'unixepoch') will treat it as seconds and produce dates far in the future (or NULL once the value falls outside SQLite's supported date range). If it's seconds, you're good. Confirm the units and adjust (e.g., last_synced/1000) if needed.
Run these quick checks in your DB to infer typical ranges:
```sql
-- Inspect a sample of values
SELECT MIN(last_synced), MAX(last_synced)
FROM accounts
WHERE last_synced IS NOT NULL;

-- Compare converted dates
SELECT last_synced,
       datetime(last_synced, 'unixepoch') AS as_seconds,
       datetime(last_synced/1000, 'unixepoch') AS as_millis
FROM accounts
WHERE last_synced IS NOT NULL
LIMIT 20;
```
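To see why the units matter, a quick in-memory check (the timestamp value is illustrative, not from the actual database):

```python
import sqlite3

con = sqlite3.connect(":memory:")
secs = 1_700_000_000   # a plausible seconds-based timestamp (Nov 2023)
millis = secs * 1000   # the same instant stored as milliseconds

as_seconds, millis_as_seconds, millis_scaled = con.execute(
    "SELECT datetime(?, 'unixepoch'), datetime(?, 'unixepoch'), datetime(? / 1000, 'unixepoch')",
    (secs, millis, millis),
).fetchone()

print(as_seconds)         # 2023-11-14 22:13:20 - correct when the column holds seconds
print(millis_as_seconds)  # far future or NULL when milliseconds are fed in unscaled
print(millis_scaled)      # dividing by 1000 first recovers the correct date
```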
```sql
CASE
    WHEN metadata IS NOT NULL AND metadata != '' THEN json(metadata)
    ELSE NULL
END as metadata
```
Avoid nullifying metadata: only call json() on valid JSON.

json(metadata) fails on invalid JSON, so this will break the migration or drop non-JSON-but-previously-populated metadata. Check json_valid first.
```diff
-    CASE
-        WHEN metadata IS NOT NULL AND metadata != '' THEN json(metadata)
-        ELSE NULL
-    END as metadata
+    CASE
+        WHEN json_valid(metadata) THEN json(metadata)
+        ELSE NULL
+    END AS metadata
```

🤖 Prompt for AI Agents
In db_migrations/0006_data_migration.sql around lines 5 to 8, the current CASE
calls json(metadata) unconditionally which yields NULL for invalid JSON and
drops existing non-JSON metadata; update the condition to only call
json(metadata) when json_valid(metadata) is true and otherwise preserve the
original metadata (do not return NULL) so invalid/non-JSON strings remain
unchanged.
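A sketch of the guarded CASE, extended as the prompt above suggests so that non-JSON text passes through untouched (the table and values are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contacts (pubkey TEXT, metadata TEXT)")
con.executemany(
    "INSERT INTO contacts VALUES (?, ?)",
    [("pk1", '{"name": "alice"}'), ("pk2", "plain text, not JSON"), ("pk3", None)],
)

# json_valid() gates the json() call: valid JSON is minified,
# non-JSON text is preserved as-is instead of being dropped.
rows = con.execute("""
    SELECT pubkey,
           CASE
               WHEN json_valid(metadata) THEN json(metadata)
               WHEN metadata IS NOT NULL AND metadata != '' THEN metadata
               ELSE NULL
           END AS metadata
    FROM contacts ORDER BY pubkey
""").fetchall()
print(rows)
```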
```sql
JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
    json_each(c.nip65_relays) as relay_value
WHERE json_valid(c.nip65_relays)
```
Fix JOIN order: ON clause references alias not yet in scope (will error in SQLite).
In each block, the JOIN relays r ... ON ... references relay_value before json_each(...) relay_value is introduced (appears after a comma). In SQLite, the ON clause can only reference tables to its left. Reorder to introduce json_each(...) first (via CROSS JOIN), then join relays.
Apply these diffs:
--- Step 3: nip65_relays

```diff
-FROM contacts c
-JOIN users u ON u.pubkey = c.pubkey
-JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
-    json_each(c.nip65_relays) as relay_value
+FROM contacts c
+JOIN users u ON u.pubkey = c.pubkey
+CROSS JOIN json_each(c.nip65_relays) AS relay_value
+JOIN relays r ON r.url = json_extract(relay_value.value, '$')
 WHERE json_valid(c.nip65_relays)
```

--- Step 3: inbox_relays

```diff
-FROM contacts c
-JOIN users u ON u.pubkey = c.pubkey
-JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
-    json_each(c.inbox_relays) as relay_value
+FROM contacts c
+JOIN users u ON u.pubkey = c.pubkey
+CROSS JOIN json_each(c.inbox_relays) AS relay_value
+JOIN relays r ON r.url = json_extract(relay_value.value, '$')
 WHERE json_valid(c.inbox_relays)
```

--- Step 3: key_package_relays

```diff
-FROM contacts c
-JOIN users u ON u.pubkey = c.pubkey
-JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
-    json_each(c.key_package_relays) as relay_value
+FROM contacts c
+JOIN users u ON u.pubkey = c.pubkey
+CROSS JOIN json_each(c.key_package_relays) AS relay_value
+JOIN relays r ON r.url = json_extract(relay_value.value, '$')
 WHERE json_valid(c.key_package_relays)
```

Also applies to: 66-68, 80-82
🤖 Prompt for AI Agents
In db_migrations/0006_data_migration.sql around lines 52-54 (also apply same
change at 66-68 and 80-82), the JOIN ordering is incorrect because the ON clause
references the alias relay_value before json_each(... ) AS relay_value is
introduced; reorder the joins so json_each(c.nip65_relays) AS relay_value (as a
CROSS JOIN or comma-separated table placed to the left) is introduced first,
then JOIN relays r ON r.url = json_extract(relay_value.value, '$'); ensure
json_valid(c.nip65_relays) remains in the WHERE clause and update each affected
block accordingly.
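A runnable sketch of the corrected ordering, simplified to join on relay_value.value directly (the schema and values below are made up for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE contacts (pubkey TEXT, nip65_relays TEXT);
    CREATE TABLE users  (id INTEGER PRIMARY KEY, pubkey TEXT UNIQUE);
    CREATE TABLE relays (id INTEGER PRIMARY KEY, url TEXT UNIQUE);
    INSERT INTO contacts VALUES ('pk1', '["wss://a.example","wss://b.example"]');
    INSERT INTO users (pubkey) VALUES ('pk1');
    INSERT INTO relays (url) VALUES ('wss://a.example'), ('wss://b.example');
""")

# json_each() is introduced (via CROSS JOIN) before any ON clause refers to it,
# so every alias an ON clause mentions is already in scope to its left.
pairs = con.execute("""
    SELECT u.id, r.id
    FROM contacts c
    JOIN users u ON u.pubkey = c.pubkey
    CROSS JOIN json_each(c.nip65_relays) AS relay_value
    JOIN relays r ON r.url = relay_value.value
    WHERE json_valid(c.nip65_relays)
    ORDER BY r.id
""").fetchall()
print(pairs)  # [(1, 1), (1, 2)]
```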
```sql
JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
    json_each(a.nip65_relays) as relay_value
WHERE json_valid(a.nip65_relays)
```
Same JOIN scoping bug in Step 5; reorder to bring json_each into scope before joining relays.
These three blocks have the same issue; reorder as in Step 3.
Apply these diffs:
--- Step 5: accounts.nip65_relays

```diff
-FROM accounts a
-JOIN users u ON u.pubkey = a.pubkey
-JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
-    json_each(a.nip65_relays) as relay_value
+FROM accounts a
+JOIN users u ON u.pubkey = a.pubkey
+CROSS JOIN json_each(a.nip65_relays) AS relay_value
+JOIN relays r ON r.url = json_extract(relay_value.value, '$')
 WHERE json_valid(a.nip65_relays)
```

--- Step 5: accounts.inbox_relays

```diff
-FROM accounts a
-JOIN users u ON u.pubkey = a.pubkey
-JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
-    json_each(a.inbox_relays) as relay_value
+FROM accounts a
+JOIN users u ON u.pubkey = a.pubkey
+CROSS JOIN json_each(a.inbox_relays) AS relay_value
+JOIN relays r ON r.url = json_extract(relay_value.value, '$')
 WHERE json_valid(a.inbox_relays)
```

--- Step 5: accounts.key_package_relays

```diff
-FROM accounts a
-JOIN users u ON u.pubkey = a.pubkey
-JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
-    json_each(a.key_package_relays) as relay_value
+FROM accounts a
+JOIN users u ON u.pubkey = a.pubkey
+CROSS JOIN json_each(a.key_package_relays) AS relay_value
+JOIN relays r ON r.url = json_extract(relay_value.value, '$')
 WHERE json_valid(a.key_package_relays)
```

Also applies to: 140-142, 154-156
```sql
-- Step 6: Migrate accounts to accounts_new table
INSERT INTO accounts_new (pubkey, user_id, settings, last_synced_at)
SELECT
    a.pubkey,
    u.id as user_id,
    a.settings,
    CASE
        WHEN a.last_synced IS NOT NULL THEN datetime(a.last_synced, 'unixepoch')
        ELSE NULL
    END as last_synced_at
FROM accounts a
JOIN users u ON u.pubkey = a.pubkey
WHERE NOT EXISTS (SELECT 1 FROM accounts_new WHERE accounts_new.pubkey = a.pubkey);
```
🛠️ Refactor suggestion
Accounts with no matching user are dropped; insert missing users from accounts first.
Currently, accounts_new is populated by joining to users. If a pubkey exists in accounts but not in contacts (Step 1), its account will be skipped. Insert any missing users from accounts before Step 6.
Add this block right after Step 1 (before Step 2):
```sql
-- Step 1b: Ensure all account pubkeys exist in users
INSERT INTO users (pubkey, metadata)
SELECT DISTINCT a.pubkey, NULL
FROM accounts a
WHERE a.pubkey IS NOT NULL
  AND a.pubkey != ''
  AND NOT EXISTS (SELECT 1 FROM users u WHERE u.pubkey = a.pubkey);
```

🤖 Prompt for AI Agents
In db_migrations/0006_data_migration.sql around lines 160 to 172, accounts rows
with pubkeys that have no matching users are being skipped by the JOIN, so add a
pre-migration step right after Step 1 (before Step 2) that inserts any distinct,
non-null, non-empty pubkeys from the accounts table into users with metadata set
to NULL if they don't already exist; this ensures the subsequent INSERT INTO
accounts_new ... JOIN users will find a user row for every account pubkey and no
accounts are dropped.
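A sketch of the effect of the suggested Step 1b (pubkeys are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE accounts (pubkey TEXT);
    CREATE TABLE users (id INTEGER PRIMARY KEY, pubkey TEXT UNIQUE);
    -- 'pk_orphan' has an account but never appeared in contacts, so Step 1 skipped it.
    INSERT INTO accounts VALUES ('pk_known'), ('pk_orphan');
    INSERT INTO users (pubkey) VALUES ('pk_known');
""")

before = con.execute(
    "SELECT COUNT(*) FROM accounts a JOIN users u ON u.pubkey = a.pubkey"
).fetchone()[0]

# Step 1b: backfill users for account pubkeys that had no contacts row.
con.execute("""
    INSERT INTO users (pubkey)
    SELECT DISTINCT a.pubkey FROM accounts a
    WHERE a.pubkey IS NOT NULL AND a.pubkey != ''
      AND NOT EXISTS (SELECT 1 FROM users u WHERE u.pubkey = a.pubkey)
""")

after = con.execute(
    "SELECT COUNT(*) FROM accounts a JOIN users u ON u.pubkey = a.pubkey"
).fetchone()[0]
print(before, after)  # 1 2: the orphan account is no longer dropped by the JOIN
```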
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/whitenoise/relays.rs (1)
964-967: Nit: reuse relay1 instead of re-parsing the same URL.

Avoid parsing the same string twice; clone relay1 for target_relay.
```diff
-let relay1 = RelayUrl::parse("ws://localhost:7777").unwrap();
-let relay2 = RelayUrl::parse("ws://localhost:8080").unwrap();
-let target_relay = RelayUrl::parse("ws://localhost:7777").unwrap();
+let relay1 = RelayUrl::parse("ws://localhost:7777").unwrap();
+let relay2 = RelayUrl::parse("ws://localhost:8080").unwrap();
+let target_relay = relay1.clone();
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/whitenoise/relays.rs (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: check (ubuntu-latest, native)
- GitHub Check: check (macos-14, native)
```rust
// Test relay URLs - use localhost relays that are available in test environment
let relay1 = RelayUrl::parse("ws://localhost:7777").unwrap();
let relay2 = RelayUrl::parse("ws://localhost:8080").unwrap();
let target_relay = RelayUrl::parse("ws://localhost:7777").unwrap();
```
💡 Verification agent
🧩 Analysis chain
Nit: reuse relay1 instead of re-parsing the same URL + quick verification script
Same dedupe applies here. Also, consider verifying there are no remaining TLS endpoints in tests.
```diff
-let relay1 = RelayUrl::parse("ws://localhost:7777").unwrap();
-let relay2 = RelayUrl::parse("ws://localhost:8080").unwrap();
-let target_relay = RelayUrl::parse("ws://localhost:7777").unwrap();
+let relay1 = RelayUrl::parse("ws://localhost:7777").unwrap();
+let relay2 = RelayUrl::parse("ws://localhost:8080").unwrap();
+let target_relay = relay1.clone();
```

🏁 Script executed:
```shell
#!/bin/bash
# Find remaining TLS endpoints in code/tests
rg -n -S 'wss://|https://' || true
```

Length of output: 60277
Refactor duplicate parsing in relays.rs
Instead of parsing the same URL twice, clone relay1 for target_relay:
• File: src/whitenoise/relays.rs (around lines 1043–1047)
```diff
-let relay1 = RelayUrl::parse("ws://localhost:7777").unwrap();
-let relay2 = RelayUrl::parse("ws://localhost:8080").unwrap();
-let target_relay = RelayUrl::parse("ws://localhost:7777").unwrap();
+let relay1 = RelayUrl::parse("ws://localhost:7777").unwrap();
+let relay2 = RelayUrl::parse("ws://localhost:8080").unwrap();
+let target_relay = relay1.clone();
```

I ran a global search for wss:// and https:// and found many TLS URIs in parsers, integration tests, examples, etc. Those are intentionally testing URL parsing and can remain as-is.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
// Test relay URLs - use localhost relays that are available in test environment
let relay1 = RelayUrl::parse("ws://localhost:7777").unwrap();
let relay2 = RelayUrl::parse("ws://localhost:8080").unwrap();
let target_relay = relay1.clone();
```
🤖 Prompt for AI Agents
In src/whitenoise/relays.rs around lines 1043 to 1047, the test parses the same
URL twice — create relay1 then set target_relay by cloning relay1 instead of
calling RelayUrl::parse again; update target_relay = relay1.clone() (or
.to_owned() if preferred) to remove the duplicate parse and keep behavior
identical.
Actionable comments posted: 1
♻️ Duplicate comments (8)
db_migrations/0006_data_migration.sql (8)
6-14: Do not NULL-out non-JSON metadata; only call json() when the input is valid JSON.

json(metadata) fails on invalid JSON, so the current CASE will break the migration or drop previously populated non-JSON metadata. Guard with json_valid and preserve the original text when it is not JSON.
Apply:
```diff
 INSERT INTO users (pubkey, metadata)
 SELECT
     pubkey,
-    CASE
-        WHEN metadata IS NOT NULL AND metadata != '' THEN json(metadata)
-        ELSE NULL
-    END as metadata
+    CASE
+        WHEN json_valid(metadata) THEN json(metadata)
+        WHEN metadata IS NOT NULL AND metadata != '' THEN metadata
+        ELSE NULL
+    END AS metadata
 FROM contacts
 WHERE NOT EXISTS (SELECT 1 FROM users WHERE users.pubkey = contacts.pubkey);
```
165-176: Accounts without a contacts row will be dropped; seed missing users from accounts first.

If a.pubkey doesn't exist in users (because it wasn't in contacts during Step 1), this JOIN drops the account. Insert any missing users from accounts before Step 2.
Add immediately after Step 1:
```sql
-- Step 1b: Ensure all account pubkeys exist in users
INSERT INTO users (pubkey, metadata)
SELECT DISTINCT a.pubkey, NULL
FROM accounts a
WHERE a.pubkey IS NOT NULL
  AND a.pubkey != ''
  AND NOT EXISTS (SELECT 1 FROM users u WHERE u.pubkey = a.pubkey);
```
63-74: Same join scoping fix needed for inbox_relays (contacts).

Apply:

```diff
 FROM contacts c
 JOIN users u ON u.pubkey = c.pubkey
-JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
-    json_each(c.inbox_relays) as relay_value
+CROSS JOIN json_each(c.inbox_relays) AS relay_value
+JOIN relays r ON r.url = json_extract(relay_value.value, '$')
 WHERE json_valid(c.inbox_relays)
```
77-88: Same join scoping fix needed for key_package_relays (contacts).

Apply:

```diff
 FROM contacts c
 JOIN users u ON u.pubkey = c.pubkey
-JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
-    json_each(c.key_package_relays) as relay_value
+CROSS JOIN json_each(c.key_package_relays) AS relay_value
+JOIN relays r ON r.url = json_extract(relay_value.value, '$')
 WHERE json_valid(c.key_package_relays)
```
137-148: Same join scoping fix needed for accounts.inbox_relays.

Apply:

```diff
 FROM accounts a
 JOIN users u ON u.pubkey = a.pubkey
-JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
-    json_each(a.inbox_relays) as relay_value
+CROSS JOIN json_each(a.inbox_relays) AS relay_value
+JOIN relays r ON r.url = json_extract(relay_value.value, '$')
 WHERE json_valid(a.inbox_relays)
```
151-162: Same join scoping fix needed for accounts.key_package_relays.

Apply:

```diff
 FROM accounts a
 JOIN users u ON u.pubkey = a.pubkey
-JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
-    json_each(a.key_package_relays) as relay_value
+CROSS JOIN json_each(a.key_package_relays) AS relay_value
+JOIN relays r ON r.url = json_extract(relay_value.value, '$')
 WHERE json_valid(a.key_package_relays)
```
49-60: SQLite join scoping bug: bring json_each into scope before joining relays.

The ON clause references relay_value before it exists; reorder to CROSS JOIN json_each first, then JOIN relays.

Apply:

```diff
 FROM contacts c
 JOIN users u ON u.pubkey = c.pubkey
-JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
-    json_each(c.nip65_relays) as relay_value
+CROSS JOIN json_each(c.nip65_relays) AS relay_value
+JOIN relays r ON r.url = json_extract(relay_value.value, '$')
 WHERE json_valid(c.nip65_relays)
     AND json_extract(relay_value.value, '$') IS NOT NULL
     AND json_extract(relay_value.value, '$') != '';
```
123-134: Same join scoping fix needed for accounts.nip65_relays.

Apply:

```diff
 FROM accounts a
 JOIN users u ON u.pubkey = a.pubkey
-JOIN relays r ON r.url = json_extract(relay_value.value, '$'),
-    json_each(a.nip65_relays) as relay_value
+CROSS JOIN json_each(a.nip65_relays) AS relay_value
+JOIN relays r ON r.url = json_extract(relay_value.value, '$')
 WHERE json_valid(a.nip65_relays)
```
🧹 Nitpick comments (2)
db_migrations/0006_data_migration.sql (2)
18-26: OK: deduped relay URL extraction from contacts.nip65_relays. Optional: filter to strings only.

Current checks avoid NULL/empty and rely on relays.url UNIQUE. Consider ensuring only string values are inserted to avoid numeric/boolean/object artifacts.

Apply:

```diff
 WHERE json_valid(contacts.nip65_relays)
     AND json_extract(relay_value.value, '$') IS NOT NULL
-    AND json_extract(relay_value.value, '$') != '';
+    AND json_extract(relay_value.value, '$') != ''
+    AND json_type(relay_value.value) = 'text';
```

Same optional filter applies to lines 33-35 and 43-45.
92-100: OK: accounts.nip65_relays URL extraction with OR IGNORE. Optional: ensure text values.

Same optional string-only filter as in contacts:

```diff
 WHERE json_valid(accounts.nip65_relays)
     AND json_extract(relay_value.value, '$') IS NOT NULL
-    AND json_extract(relay_value.value, '$') != '';
+    AND json_extract(relay_value.value, '$') != ''
+    AND json_type(relay_value.value) = 'text';
```

Apply similarly at 107-109 and 117-119.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
db_migrations/0006_data_migration.sql (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: check (macos-14, native)
- GitHub Check: check (ubuntu-latest, native)
🔇 Additional comments (1)
db_migrations/0006_data_migration.sql (1)
1-3: Good: unique constraint up front to enforce idempotency and prevent dupes.

Creating idx_user_relays_unique before backfilling relationships is the right move, combined with OR IGNORE on inserts.
```sql
FROM contacts
WHERE NOT EXISTS (SELECT 1 FROM users WHERE users.pubkey = contacts.pubkey);
```
Avoid inserting users with NULL/empty pubkeys.
Rows with NULL/empty pubkey will pass the NOT EXISTS check and attempt to insert, likely violating NOT NULL/PK constraints.
Apply:
```diff
-FROM contacts
-WHERE NOT EXISTS (SELECT 1 FROM users WHERE users.pubkey = contacts.pubkey);
+FROM contacts
+WHERE pubkey IS NOT NULL
+  AND pubkey != ''
+  AND NOT EXISTS (SELECT 1 FROM users WHERE users.pubkey = contacts.pubkey);
```

📝 Committable suggestion
```sql
FROM contacts
WHERE pubkey IS NOT NULL
  AND pubkey != ''
  AND NOT EXISTS (SELECT 1 FROM users WHERE users.pubkey = contacts.pubkey);
```
🤖 Prompt for AI Agents
In db_migrations/0006_data_migration.sql around lines 13 to 14, the INSERT
selects from contacts and will attempt to insert rows where contacts.pubkey is
NULL or empty because the NOT EXISTS check alone doesn't exclude them; update
the WHERE clause to exclude NULL and empty (and optionally whitespace-only)
pubkeys before the NOT EXISTS check (e.g., add conditions like contacts.pubkey
IS NOT NULL AND contacts.pubkey <> '' or use TRIM(contacts.pubkey) <> '') so
only valid non-empty pubkeys are considered for insertion.
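A sketch of the failure mode and the fix (schema simplified, values hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE contacts (pubkey TEXT, metadata TEXT);
    CREATE TABLE users (pubkey TEXT NOT NULL, metadata TEXT);
    INSERT INTO contacts VALUES ('pk1', NULL), (NULL, NULL), ('', NULL);
""")

# Unfiltered: NULL = anything is never true, so the NULL-pubkey row passes
# NOT EXISTS and trips the NOT NULL constraint on users.pubkey.
try:
    con.execute("""
        INSERT INTO users (pubkey, metadata)
        SELECT pubkey, metadata FROM contacts
        WHERE NOT EXISTS (SELECT 1 FROM users WHERE users.pubkey = contacts.pubkey)
    """)
    failed = False
except sqlite3.IntegrityError:
    failed = True  # the whole statement aborts and is rolled back

# Filtered: exclude NULL/empty pubkeys up front, as in the suggestion above.
con.execute("""
    INSERT INTO users (pubkey, metadata)
    SELECT pubkey, metadata FROM contacts
    WHERE pubkey IS NOT NULL AND pubkey != ''
      AND NOT EXISTS (SELECT 1 FROM users WHERE users.pubkey = contacts.pubkey)
""")
count = con.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(failed, count)  # True 1
```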
Summary by CodeRabbit
New Features
Bug Fixes
Chores