Big plans: Drop old tables, migrate to new schema/structure #318
Conversation
This method handles both TEXT and INTEGER timestamps from the database. Also refactors current models in database/* so that they use the new parse_timestamp method
…accounts. Also fixes incorrect behavior
Walkthrough

Large refactor moving from in-memory/LMDB/DashSet patterns to a DB-first model: adds persistent AppSettings and follow/relay persistence, introduces per-call MLS flows, removes contacts/metadata example modules, updates migrations, public APIs, event handlers, and many DB row mappings.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant WN as Whitenoise
    participant DB as Database
    participant NM as NostrManager
    App->>WN: initialize()
    WN->>DB: run migrations, ensure app_settings & default relays
    WN->>NM: start event processing
    WN->>DB: Account::all()
    loop for each account
        WN->>NM: setup_account_subscriptions(account, &[Relay])
        WN->>WN: spawn background_fetch_account_data(account)
    end
```
```mermaid
sequenceDiagram
    participant Client
    participant WN as Whitenoise
    participant DB as Database
    participant U as User
    participant A as Account
    Client->>WN: follow_user(account, pubkey)
    WN->>DB: User::find_or_create_by_pubkey(pubkey)
    DB-->>WN: (User, created?)
    WN->>DB: Account::follow_user(user)
    WN-->>Client: Result
```
```mermaid
sequenceDiagram
    participant NM as NostrManager
    participant WN as Whitenoise
    participant Handler as EventHandler
    participant DB as Database
    NM-->>WN: incoming Event (Metadata / RelayList)
    WN->>Handler: dispatch to handle_metadata/handle_relay_list(event)
    Handler->>DB: User::find_or_create_by_pubkey / Relay::find_or_create_by_url
    Handler->>DB: save metadata/associations
    Handler-->>WN: Ok / Err(EventProcessor)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Actionable comments posted: 30
🔭 Outside diff range comments (2)
src/whitenoise/event_processor/event_handlers/handle_giftwrap.rs (1)
47-54: Don't unwrap: propagate MLS creation failure instead of panicking

Unwrapping on Account::create_nostr_mls will abort the task if storage creation fails. Convert to a typed error and return Result.
Apply:

```diff
-            let nostr_mls =
-                Account::create_nostr_mls(account.pubkey, &self.config.data_dir).unwrap();
+            let nostr_mls = Account::create_nostr_mls(account.pubkey, &self.config.data_dir)
+                .map_err(|e| {
+                    WhitenoiseError::Configuration(format!(
+                        "Failed to create MLS storage for {}: {}",
+                        account.pubkey.to_hex(),
+                        e
+                    ))
+                })?;
```

src/nostr_manager/mod.rs (1)
409-418: Avoid unwrap when accessing signer in extract_invite_events
`self.client.signer().await` can return `None`. The current `unwrap()` can panic. Apply this diff:
```diff
-        for event in gw_events.iter() {
-            if let Ok(unwrapped) = extract_rumor(&self.client.signer().await.unwrap(), event).await
-            {
-                if unwrapped.rumor.kind == Kind::MlsWelcome {
-                    invite_events.push((event.id, unwrapped.rumor));
-                }
-            }
-        }
+        for event in gw_events.iter() {
+            if let Some(signer) = self.client.signer().await {
+                if let Ok(unwrapped) = extract_rumor(&signer, event).await {
+                    if unwrapped.rumor.kind == Kind::MlsWelcome {
+                        invite_events.push((event.id, unwrapped.rumor));
+                    }
+                }
+            }
+        }
```
🧹 Nitpick comments (57)
db_migrations/0007_drop_old_tables.sql (1)
12-17: Trigger is immediately removed in 0009; consider avoiding churn

You add `update_app_settings_updated_at` here but drop it in migration 0009. Either omit it here or merge the drop with this migration to reduce churn. If you keep it, ensure `PRAGMA recursive_triggers` remains OFF (default) to avoid recursion.

db_migrations/0009_fix_timestamp_triggers.sql (1)
1-3: Ensure app-layer updates `updated_at` and add coverage

With triggers gone, app code must update `updated_at` on writes. Add/update tests to verify this for all affected tables (users, accounts, account_follows, relays, user_relays, app_settings).

I can generate a checklist of repository call-sites and a test scaffold to assert `updated_at` changes across CRUD paths. Want me to draft that?

src/whitenoise/error.rs (1)
101-103: Consider carrying structured error context instead of String for EventProcessor

Storing only a String loses error context and backtraces from sources like serde_json. Prefer holding a concrete error type to preserve causality.
Apply this diff to make EventProcessor preserve error context:
```diff
-    #[error("Event handler error: {0}")]
-    EventProcessor(String),
+    #[error("Event handler error: {0}")]
+    EventProcessor(anyhow::Error),
```

Follow-up: call sites should wrap with `anyhow::anyhow!(e)` rather than `e.to_string()`.

Note: While here, Line 95 has a typo in the display string ("nip04 direcet"). Consider: "NIP-04 direct message error".
src/nostr_manager/search.rs (2)
18-22: Deduplicate pubkeys to reduce redundant author filters

If multiple metadata events exist per user, authors(pubkeys) may include duplicates. Deduplicate to reduce request size.
Apply this diff to dedup:
```diff
-        let pubkeys: Vec<PublicKey> = metadata_events
-            .iter()
-            .map(|user| user.pubkey)
-            .collect::<Vec<PublicKey>>();
+        use std::collections::HashSet;
+        let pubkeys: Vec<PublicKey> = metadata_events
+            .iter()
+            .map(|e| e.pubkey)
+            .collect::<HashSet<_>>()
+            .into_iter()
+            .collect();
```
12-17: Minor cleanups: avoid unnecessary clone; log parse issues instead of silently defaulting; clarify variable naming
- filter.clone() is unnecessary since filter is not reused.
- Swallowing parse errors with unwrap_or_default hides issues; consider logging at debug level.
- Rename loop variable from user (it’s actually an Event) to event for clarity.
Apply this diff to address all three:
```diff
-        let metadata_events = self
-            .client
-            .fetch_events(filter.clone(), self.timeout().await.unwrap())
+        let metadata_events = self
+            .client
+            .fetch_events(filter, timeout)
             .await
             .map_err(NostrManagerError::from)?;
@@
-        for user in metadata_events {
-            let enriched_contact = EnrichedContact {
-                metadata: Metadata::from_json(&user.content).unwrap_or_default(),
+        for event in metadata_events {
+            let metadata = match Metadata::from_json(&event.content) {
+                Ok(m) => m,
+                Err(e) => {
+                    tracing::debug!(
+                        target: "nostr_manager::search",
+                        "Failed to parse metadata for {}: {}",
+                        event.pubkey,
+                        e
+                    );
+                    Metadata::default()
+                }
+            };
+            let enriched_contact = EnrichedContact {
+                metadata,
                 nip17: enriching_events
                     .iter()
-                    .any(|event| event.kind == Kind::InboxRelays && event.pubkey == user.pubkey),
+                    .any(|e| e.kind == Kind::InboxRelays && e.pubkey == event.pubkey),
                 nip104: enriching_events
                     .iter()
-                    .any(|event| event.kind == Kind::MlsKeyPackage && event.pubkey == user.pubkey),
+                    .any(|e| e.kind == Kind::MlsKeyPackage && e.pubkey == event.pubkey),
                 nostr_relays: Vec::new(), // Only identity info is needed when searching for a person to start a conversation with; full data is fetched later.
                 inbox_relays: Vec::new(),
                 key_package_relays: Vec::new(),
             };
-            enriched_contacts.insert(user.pubkey.to_hex(), enriched_contact);
+            enriched_contacts.insert(event.pubkey.to_hex(), enriched_contact);
         }
```

Also applies to: 38-52
src/whitenoise/event_processor/event_handlers/handle_metadata.rs (2)
13-15: Drop redundant let _ = when using ?; it’s a no-op
`let _ = ...?;` is redundant. Just use `...?;` to propagate errors. Apply this diff:
```diff
-        let _ = user.save(&self.database).await?;
+        user.save(&self.database).await?;
```
17-23: Use a precise tracing target for this handler

Current target references nostr_manager; use an event_processor-specific target for clarity in logs.
Apply this diff:
```diff
-        tracing::error!(
-            target: "whitenoise::nostr_manager::fetch_all_user_data",
+        tracing::error!(
+            target: "whitenoise::event_processor::handle_metadata",
             "Failed to parse metadata for user {}: {}",
             event.pubkey,
             e
         );
```

src/whitenoise/database/utils.rs (1)
54-58: Differentiate ColumnNotFound vs decode failures for better diagnostics

Currently, missing columns and decode failures both return ColumnDecode. Consider returning ColumnNotFound when the column truly doesn't exist to aid debugging.
One approach: use `row.columns()` to check for the presence of `column_name` and return `sqlx::Error::ColumnNotFound(column_name.to_string())` before attempting to decode.

src/lib.rs (1)
16-17: Consider adding short doc comments for new re-exports to match the style used for nostr types.Optional consistency tweak: brief docstrings for Account, AppSettings, and ThemeMode would help IDE hover docs and keep the public surface uniformly documented.
Example:
```diff
-pub use crate::whitenoise::accounts::Account;
-pub use crate::whitenoise::app_settings::{AppSettings, ThemeMode};
+/// Whitenoise account handle for a pubkey (lean public type).
+pub use crate::whitenoise::accounts::Account;
+/// Application-wide settings persisted in the database.
+/// Theme mode for the UI.
+pub use crate::whitenoise::app_settings::{AppSettings, ThemeMode};
```

src/whitenoise/event_processor/event_handlers/handle_relay_list.rs (3)
12-13: Avoid cloning the full Event just to extract relay URLs

NostrManager::relay_urls_from_event takes Event by value, forcing a clone. If feasible, refactor that helper to accept &Event to avoid an extra allocation here.
Proposed change in this file (after refactoring the helper signature):
```diff
-        let relay_urls = NostrManager::relay_urls_from_event(event.clone());
+        let relay_urls = NostrManager::relay_urls_from_event(&event);
```

If changing the helper isn't practical right now, feel free to skip.
13-17: Make user-relay associations idempotent to avoid unique constraint churn

If user.add_relay inserts into a composite PK (user_id, relay_id, relay_type), repeated events will cause constraint violations unless the method upserts/ignores duplicates. Ensure add_relay uses an UPSERT/ON CONFLICT DO NOTHING pattern.
If add_relay currently does a plain INSERT, consider changing it to:
```sql
INSERT INTO user_relays (user_id, relay_id, relay_type, created_at, updated_at)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(user_id, relay_id, relay_type) DO UPDATE SET updated_at = excluded.updated_at;
```
9-20: Consider a single transaction for all associations derived from one event

If a later insert fails, some associations will have been created while others won't. Wrapping per-event processing in a DB transaction would make this atomic.
Happy to provide a patch if you want to move this loop into a transaction on self.database.
src/whitenoise/database/mod.rs (2)
150-162: Quote table names when generating DROP statements

Even though names come from sqlite_master (not user input), quoting is a safer default and prevents edge cases with reserved words or odd names.
Apply:
```diff
-    for (table_name,) in tables {
-        let drop_query = format!("DROP TABLE IF EXISTS {}", table_name);
-        sqlx::query(&drop_query).execute(&mut *txn).await?;
-    }
+    for (table_name,) in tables {
+        // Double any embedded quotes for safety
+        let quoted = table_name.replace('"', "\"\"");
+        let drop_query = format!(r#"DROP TABLE IF EXISTS "{}""#, quoted);
+        sqlx::query(&drop_query).execute(&mut *txn).await?;
+    }
```
171-173: Failure mode leaves an empty DB if migrations fail

If MIGRATOR.run fails here, the DB is left with no tables. If that's acceptable for "factory reset" semantics, ignore. Otherwise, consider running migrations in a fresh temporary file and swapping, or at least surface a stronger error to upstream so the UI can guide the user to retry.
src/whitenoise/event_processor/event_handlers/handle_giftwrap.rs (1)
8-12: Update log message: this handler does implement processing

The message "processing not yet implemented" is misleading now that welcome handling is in place.
```diff
     tracing::info!(
         target: "whitenoise::event_handlers::handle_giftwrap",
-        "Giftwrap received for account: {} - processing not yet implemented",
+        "Giftwrap received for account: {} - decrypting and dispatching",
         account.pubkey.to_hex()
     );
```

src/whitenoise/database/app_settings.rs (3)
83-97: Upsert updated_at semantics are sound; consider tiny readability improvement

The ON CONFLICT clause correctly refreshes updated_at while preserving created_at. For readability, consider formatting the query string over multiple lines or using a query! macro later, but the current logic is fine.
99-125: Update API is correct; consider returning whether a row was actually updated

The UPDATE targets id = 1 and refreshes updated_at. Optionally return the number of affected rows (0/1) to let callers detect absence of the default row.
128-344: Great unit test coverage; add round-trip tests for find_or_create_default and save() upsert

The parser/conversion tests are solid. Add tests that:
- call find_or_create_default on empty DB (inserts default)
- call save() twice to exercise the ON CONFLICT path and verify updated_at changed while created_at stayed constant
Would you like a test scaffold for these cases?
src/whitenoise/users.rs (2)
8-15: Deriving Hash on User may fail or be semantically brittle; verify trait bounds and intent

User derives Hash over all fields (including Metadata and timestamps). Ensure:
- Metadata implements Hash (in nostr_sdk). If not, this won’t compile.
- Hash/Eq semantics are desired across mutable fields; otherwise sets/maps may break when metadata/timestamps change.
Safer alternative: hash by stable identity (id if Some, else pubkey). If you want this, replace the derive and implement Hash/PartialEq/Eq accordingly.
If you choose the identity-based approach, here’s a sketch:
```diff
-#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq, Hash)]
+#[derive(Serialize, Deserialize, Clone, Debug)]
 pub struct User {
     pub id: Option<i64>,
     pub pubkey: PublicKey,
     pub metadata: Metadata,
     pub created_at: DateTime<Utc>,
     pub updated_at: DateTime<Utc>,
 }
```

Then implement Eq/PartialEq/Hash based on (id.unwrap_or_default(), pubkey). I can generate the exact impls if you prefer.
147-151: user_metadata clones current metadata; consider returning a reference when possible

If call sites don't need ownership, returning a reference (&Metadata) avoids a clone. If API stability favors owned, keep as is.
src/whitenoise/accounts/messages.rs (4)
19-27: Docs param mismatch: function takes account, not sender_pubkey

Update the parameter docs to reflect the signature and avoid confusion.
```diff
-    /// * `sender_pubkey` - The public key of the user sending the message. This is used
-    ///   to identify the sender and fetch their account for message creation.
+    /// * `account` - The account of the user sending the message.
```
83-87: Sequential DB lookups per relay; optionally parallelize and skip missing relays

The current loop fails the entire send if any relay URL isn't in the DB. Consider:
- Parallel DB lookups
- Skipping not-found relays (log at debug) to still publish to others
```diff
-        let mut relays: HashSet<Relay> = HashSet::new();
-        for relay in group_relays {
-            let db_relay = Relay::find_by_url(&relay, &self.database).await?;
-            relays.insert(db_relay);
-        }
+        let mut relays: HashSet<Relay> = HashSet::new();
+        let futs = group_relays
+            .into_iter()
+            .map(|r| Relay::find_by_url(&r, &self.database));
+        for res in futures::future::join_all(futs).await {
+            match res {
+                Ok(db_relay) => {
+                    relays.insert(db_relay);
+                }
+                Err(e) => {
+                    tracing::debug!(
+                        target: "whitenoise::accounts::messages::send_message_to_group",
+                        "Skipping unknown relay URL for group: {}",
+                        e
+                    );
+                }
+            }
+        }
```

Add at top of file (outside selected range):
```rust
use futures::future::join_all;
```
89-91: Publishing path looks correct with de-dup; consider small readability tweak

The HashSet -> Vec conversion is fine. Minor: collect into a vec variable before the call to make debugging easier.
210-227: Unsigned event builder is solid; consider taking kind: Kind instead of u16 to enforce type safety

Accepting Kind reduces accidental misuse and conversions. Keep as-is if API stability requires u16.
src/whitenoise/event_processor/event_handlers/handle_mls_message.rs (2)
43-93: Tests exercise the happy path well; consider asserting process side-effects if available

If process_message writes to storage or queues, add assertions to validate state changes (optional and may require test hooks).
95-149: Error-path test is good; reduce the sleep if possible

Sleep(300ms) is already reduced elsewhere to 100ms; if stable, use the shorter delay to speed up CI.
src/whitenoise/event_processor/mod.rs (3)
37-41: Typo in doc: 'accouny_pubkey'

Fix minor typo in the subscription ID format description.
```diff
-    /// where hashed_pubkey = SHA256(session salt || accouny_pubkey)[..12]
+    /// where hashed_pubkey = SHA256(session salt || account_pubkey)[..12]
```
41-68: O(n) scan per event over all accounts; acceptable for now, but consider caching

The logic is correct and robust against multiple underscores. For scale, consider:
- Keeping an in-memory map of {12-char-hash -> PublicKey} refreshed on account changes
- Or, store this short hash in the DB with an index to resolve in O(1)
241-245: Preserve original error cause instead of replacing it

Mapping to a generic InvalidEvent loses context. Prefer propagating the error from extract_pubkey_from_subscription_id directly; tests only assert is_err, so this should be safe.
```diff
-        let target_pubkey = self
-            .extract_pubkey_from_subscription_id(&sub_id)
-            .await
-            .map_err(|_| {
-                WhitenoiseError::InvalidEvent(format!(
-                    "Cannot extract pubkey from subscription ID: {}",
-                    sub_id
-                ))
-            })?;
+        let target_pubkey = self
+            .extract_pubkey_from_subscription_id(&sub_id)
+            .await?;
```
107-145: Docs say "load and save," but implementation performs a direct DB update

The docstring claims this method loads the current settings, mutates, and saves them; the implementation delegates to a DB-level update that updates `theme_mode` (and presumably `updated_at`) directly. Consider aligning the comment with the actual behavior.

Proposed doc tweak:
```diff
-    /// This is a convenience method that loads the current settings,
-    /// updates only the theme mode, and saves the settings back to the
-    /// database. This is more efficient than manually loading, modifying,
-    /// and saving when you only need to change the theme.
+    /// Convenience method that updates only the theme mode directly in the database
+    /// (and refreshes `updated_at`). Use this when you only need to change the theme,
+    /// without loading or returning the full settings record.
```

src/whitenoise/follows.rs (2)
21-29: Doc claims conflict errors, but DB uses upsert (idempotent follow)

Docs say an error may occur if the relationship already exists, but the DB layer uses an upsert: `INSERT ... ON CONFLICT(account_id, user_id) DO UPDATE SET updated_at = ?`, so this call is effectively idempotent and should not error on duplicates. Please update the comment accordingly.

Suggested tweak:
```diff
-    /// * The follow relationship already exists (depending on database constraints)
+    /// This operation is idempotent and will not error if the relationship already exists.
```
161-168: Normalize user resolution path for consistency and reuse

`follow_user` uses `User::find_or_create_by_pubkey(...)` while `follow_users` calls `self.find_or_create_user_by_pubkey(...)`. Consider standardizing on one path (ideally the Whitenoise helper) to keep behavior uniform and centralize logging/metrics.

src/whitenoise/accounts/key_packages.rs (3)
26-44: Docs reference a "lock" that isn't used

The comment mentions trying to acquire a `nostr_mls` lock, but no lock is present. Consider removing or updating this wording to avoid confusion.
68-78: Docs mention a key_package_relays argument that is not part of the signature

The argument list documents `key_package_relays`, but the function takes only `(account, event_id, delete_mls_stored_keys)`. Update the docs to match the signature.
48-61: Avoid duplicate DB calls for relays in publish flow
`publish_key_package_for_account` calls `encoded_key_package` (which fetches relays) and then fetches relays again for publishing. Cache the relays once and reuse to save a trip.

One option is to have `encoded_key_package` accept relay URLs instead of reading them internally, or refactor it to return both `(payload, tags, relays)` to reuse downstream.

src/whitenoise/database/relays.rs (2)
84-90: Minor: use the From impl to reduce duplication

You already implemented `From<RelayRow> for Relay`. Use it here to keep things DRY.

```diff
-        Ok(Relay {
-            id: Some(relay_row.id),
-            url: relay_row.url,
-            created_at: relay_row.created_at,
-            updated_at: relay_row.updated_at,
-        })
+        Ok(relay_row.into())
```
156-164: Tests: add UNIQUE(url) to exercise ON CONFLICT path

Your table schema in tests lacks a unique constraint on `url`, so `ON CONFLICT(url)` won't be exercised. Consider making `url` unique in the test schema to cover upsert behavior.

```diff
-            "CREATE TABLE relays (
+            "CREATE TABLE relays (
                 id INTEGER PRIMARY KEY AUTOINCREMENT,
-                url TEXT NOT NULL,
+                url TEXT NOT NULL UNIQUE,
                 created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
                 updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
             )",
```

src/nostr_manager/subscriptions.rs (1)
75-78: Extract a tiny helper to avoid repeating relay->url mapping

The `relays.iter().map(|r| r.url.clone()).collect()` pattern appears in 3 places. Consider a small helper to keep it DRY.

For example:
```rust
fn relay_urls(relays: &[Relay]) -> Vec<RelayUrl> {
    relays.iter().map(|r| r.url.clone()).collect()
}
```

Then:
```diff
-        let urls: Vec<RelayUrl> = user_relays.iter().map(|r| r.url.clone()).collect();
+        let urls = relay_urls(user_relays);
```

Also applies to: 105-108, 145-148, 183-186
src/whitenoise/accounts/welcomes.rs (1)
165-168: Deduplicate relays without requiring ordering

Using sort/dedup assumes `RelayUrl: Ord` and does O(n log n) work. A HashSet avoids ordering constraints and is O(n). Apply this refactor:
```diff
-    // Collect all relays from all groups into a single vector
-    for group in &groups {
-        let relays = nostr_mls.get_relays(&group.mls_group_id)?;
-        group_relays.extend(relays);
-    }
-
-    // Remove duplicates by sorting and deduplicating
-    group_relays.sort();
-    group_relays.dedup();
-    Ok((group_ids, group_relays))
+    // Collect and de-duplicate relays from all groups
+    let mut relay_set: HashSet<RelayUrl> = HashSet::new();
+    for group in &groups {
+        let relays = nostr_mls.get_relays(&group.mls_group_id)?;
+        relay_set.extend(relays);
+    }
+    Ok((group_ids, relay_set.into_iter().collect::<Vec<_>>()))
```

Also applies to: 153-163
src/whitenoise/accounts/groups.rs (2)
105-107: Prefer Duration math for clarity and consistency

Elsewhere you use `Timestamp::now() + Duration::from_secs(...)`. Stick to one style for readability.

```diff
-        use std::ops::Add;
-        let one_month_future = Timestamp::now().add(30 * 24 * 60 * 60);
+        let one_month_future = Timestamp::now() + Duration::from_secs(30 * 24 * 60 * 60);
```
107-115: Consider a fallback when a user has no inbox relays configured

When sending gift wraps, `relays_to_use` may be empty for some users. Consider falling back to `account.nip65_relays(self).await?` to ensure delivery, similar to your evolution-event fallback.

Would you like me to wire a fallback like below?
```diff
-        let relays_to_use = user.relays(RelayType::Inbox, &self.database).await?;
+        let mut relays_to_use = user.relays(RelayType::Inbox, &self.database).await?;
+        if relays_to_use.is_empty() {
+            tracing::warn!(
+                target: "whitenoise::gift_wrap",
+                "User has no inbox relays; falling back to account Nip65 relays"
+            );
+            relays_to_use = account.nip65_relays(self).await?;
+        }
```

Also applies to: 321-330
src/whitenoise/mod.rs (3)
149-171: Guard against double-starting the event loop on repeated initialization

If
initialize_whitenoiseis called multiple times,get_or_try_initwill return the existing instance, but the code still starts a new processing loop with fresh channels that no producer uses. Guard the loop start to only run when a new instance is created.Proposed approach:
- Early return if
GLOBAL_WHITENOISE.get().is_some().- Or split channel initialization so it only happens when actually initializing the instance (inside the
get_or_try_initclosure), and return a flag indicating whether initialization occurred.Want me to draft a minimal refactor for this?
Also applies to: 220-227
97-101: Debug impl includes stale field name
nostr_mls_mapno longer exists; including it (even as redacted) is misleading in logs. Consider renaming to a generic placeholder likeinternals.- .field("nostr_mls_map", &"<REDACTED>") + .field("internals", &"<REDACTED>")
491-494: Minor: avoid connecting twice to relays in testsIn
create_mock_whitenoisea background task connects, and thennostr.client.connect().awaitis also called. One is enough.- tokio::spawn({ - let client = nostr.client.clone(); - async move { - client.connect().await; - } - }); - nostr.client.connect().await; + tokio::spawn({ + let client = nostr.client.clone(); + async move { + client.connect().await; + } + });src/whitenoise/relays.rs (1)
53-62: Avoid defaulting unknown Kinds toNip65Mapping unknown
Kindvariants toRelayType::Nip65risks misclassification. Consider returning aResult<RelayType, _>or using a dedicatedUnknownvariant to make invalid data explicit.Possible change:
-impl From<Kind> for RelayType { - fn from(kind: Kind) -> Self { - match kind { - Kind::RelayList => RelayType::Nip65, - Kind::InboxRelays => RelayType::Inbox, - Kind::MlsKeyPackageRelays => RelayType::KeyPackage, - _ => RelayType::Nip65, // Default fallback - } - } -} +impl TryFrom<Kind> for RelayType { + type Error = (); + fn try_from(kind: Kind) -> Result<Self, Self::Error> { + match kind { + Kind::RelayList => Ok(RelayType::Nip65), + Kind::InboxRelays => Ok(RelayType::Inbox), + Kind::MlsKeyPackageRelays => Ok(RelayType::KeyPackage), + _ => Err(()), + } + } +}Adjust call sites as needed.
src/whitenoise/accounts/core.rs (1)
59-73: Prefer avoiding unwrap() in default relay parsingThese are constant URLs, but using
unwrap()will still panic if they’re ever edited incorrectly.Apply this diff to improve diagnostics:
- relays - .iter() - .map(|url| RelayUrl::parse(url).unwrap()) - .collect() + relays + .iter() + .map(|url| RelayUrl::parse(url).expect("invalid default relay URL")) + .collect()src/whitenoise/database/users.rs (1)
151-166: Guard against None user IDs when querying relays
self.idisOption<i64>. BindingNonewill result in NULL and an empty result set, masking logic errors. Fail early with a clear error.Apply this diff:
- let relay_rows = sqlx::query_as::<_, RelayRow>( + let user_id = self.id.ok_or(WhitenoiseError::UserNotFound)?; + let relay_rows = sqlx::query_as::<_, RelayRow>( "SELECT r.id, r.url, r.created_at, r.updated_at FROM relays r INNER JOIN user_relays ur ON r.id = ur.relay_id WHERE ur.user_id = ? AND ur.relay_type = ?", ) - .bind(self.id) + .bind(user_id)src/nostr_manager/mod.rs (2)
560-639: Potential race: connecting to newly added relays is spawned and not awaitedPublishing/subscribing immediately after may run before connections are attempted. Consider awaiting
connect()when you just added relays, or document that the system tolerates partial connectivity.One approach:
- tokio::spawn({ - let client = self.client.clone(); - async move { - client.connect().await; - } - }); + self.client.connect().await;Chill take: if you rely on retry/subscription resiliency, keeping the spawn is acceptable.
456-511: delete_all_data() docs don’t match implementationDocs promise a full wipe including DB removal, but the function only unsets signer and unsubscribes. This is misleading.
- Either implement the full wipe (client reset + DB/file removal) or
- Trim the docs to reflect the lighter behavior.
I can help wire a storage wipe if you confirm where the Nostr DB lives now.
src/whitenoise/database/accounts.rs (3)
215-227: Inconsistent error mapping in is_following_userYou return
WhitenoiseError, but propagate rawsqlx::Errorwith?. Map to your DB error consistently.Apply this diff:
- let result = sqlx::query( + let result = sqlx::query( "SELECT COUNT(*) FROM account_follows WHERE account_id = ? AND user_id = ?", ) .bind(self.id) .bind(user.id) - .fetch_one(&database.pool) - .await?; + .fetch_one(&database.pool) + .await + .map_err(DatabaseError::Sqlx) + .map_err(WhitenoiseError::Database)?;
244-258: follow_user: wrap DB errors in WhitenoiseError for consistencyCurrently only mapped to
DatabaseError::Sqlx. Wrap toWhitenoiseError::Databaseas elsewhere.Apply this diff:
- .await - .map_err(DatabaseError::Sqlx)?; + .await + .map_err(DatabaseError::Sqlx) + .map_err(WhitenoiseError::Database)?;
275-285: unfollow_user: propagate DB errors consistentlySame nuance as follow_user; don’t return raw
sqlx::Error.Apply this diff:
- .await?; + .await + .map_err(DatabaseError::Sqlx) + .map_err(WhitenoiseError::Database)?;src/bin/integration_test.rs (5)
36-36: Consider using a more concise separator.While the separator line serves its purpose, consider using a shorter separator for improved readability, especially given the extensive logging throughout the test.
- tracing::info!("=================================== Starting Whitenoise Integration Test ==================================="); + tracing::info!("===== Starting Whitenoise Integration Test =====");
74-74: Consider using a constant for timing delays.Multiple hardcoded sleep durations appear throughout the test. Consider defining constants for these values to make them easier to adjust and maintain.
Add these constants at the beginning of the file:
const RELAY_CONNECTION_DELAY_MS: u64 = 300; const EVENT_PROCESSING_DELAY_MS: u64 = 500; const AGGREGATION_DELAY_MS: u64 = 1000;Then replace the hardcoded delays:
```diff
-    tokio::time::sleep(tokio::time::Duration::from_millis(300)).await;
+    tokio::time::sleep(tokio::time::Duration::from_millis(RELAY_CONNECTION_DELAY_MS)).await;
```
144-145: Ensure metadata is properly loaded before assertion

The test fetches metadata and immediately asserts on the name field. Consider checking if the metadata fetch was successful before asserting on specific fields.
```diff
     let metadata = whitenoise.user_metadata(&account3.pubkey).await.unwrap();
+    assert!(metadata.name.is_some(), "Metadata name should be present");
     assert_eq!(metadata.name, Some("Known User".to_string()));
```
1059-1059: Fix redundant clone operation

The `clone()` call on `account2` is unnecessary since `pubkey` is `Copy`.

```diff
-    whitenoise.logout(&account2.clone().pubkey).await.unwrap();
+    whitenoise.logout(&account2.pubkey).await.unwrap();
```
1087-1087: Remove redundant clone operation

Similar to the previous comment, the `clone()` call is unnecessary.

```diff
-        .fetch_aggregated_messages_for_group(&account2.clone().pubkey, &test_group.mls_group_id)
+        .fetch_aggregated_messages_for_group(&account2.pubkey, &test_group.mls_group_id)
```
src/whitenoise/event_processor/event_handlers/handle_mls_message.rs
also use find_or_create_by_pubkey on Account#new
…oups, key_packages, messages and welcomes
…mls with proper error handling
a599dab to 7a54e51 (Compare)
Actionable comments posted: 8
♻️ Duplicate comments (3)
src/whitenoise/event_processor/event_handlers/handle_mls_message.rs (1)
100-101: Same note as earlier: verify no remaining old signature usages

This mirrors the earlier call-site change. Use the same AST grep in the earlier comment.
src/whitenoise/database/accounts.rs (2)
121-129: Don’t map all DB failures to AccountNotFound

Only translate RowNotFound; propagate other sqlx errors as Database errors.

```diff
-        let account_row =
-            sqlx::query_as::<_, AccountRow>("SELECT * FROM accounts WHERE pubkey = ?")
-                .bind(pubkey.to_hex().as_str())
-                .fetch_one(&database.pool)
-                .await
-                .map_err(|_| WhitenoiseError::AccountNotFound)?;
+        let account_row =
+            sqlx::query_as::<_, AccountRow>("SELECT * FROM accounts WHERE pubkey = ?")
+                .bind(pubkey.to_hex().as_str())
+                .fetch_one(&database.pool)
+                .await
+                .map_err(|e| match e {
+                    sqlx::Error::RowNotFound => WhitenoiseError::AccountNotFound,
+                    _ => WhitenoiseError::Database(DatabaseError::Sqlx(e)),
+                })?;
```
175-185: Surface real DB failures in follows(); empty results are OK

fetch_all returns an empty Vec on “no rows” – not an error. Only map actual DB errors to Database errors, not AccountNotFound.

```diff
-            .fetch_all(&database.pool)
-            .await
-            .map_err(|_| WhitenoiseError::AccountNotFound)?;
+            .fetch_all(&database.pool)
+            .await
+            .map_err(DatabaseError::Sqlx)
+            .map_err(WhitenoiseError::Database)?;
```
🧹 Nitpick comments (23)
src/whitenoise/event_processor/event_handlers/handle_mls_message.rs (5)
67-69: Avoid `unwrap()` in tests; use `expect()` with context

Better failure messages and easier debugging.

```diff
-        let nostr_mls =
-            Account::create_nostr_mls(creator_account.pubkey, &whitenoise.config.data_dir).unwrap();
+        let nostr_mls = Account::create_nostr_mls(
+            creator_account.pubkey,
+            &whitenoise.config.data_dir,
+        ).expect("failed to create test MLS instance for creator");
```
76-85: Optional: include event id in success-path debug to aid test triage

If you keep the detailed log above in production path, mirroring it here isn’t necessary. Otherwise, consider logging the created message’s id in tests to simplify diagnosing flakes.
116-118: Avoid `unwrap()` in tests; use `expect()` with context

Same rationale as above.

```diff
-        let nostr_mls =
-            Account::create_nostr_mls(creator_account.pubkey, &whitenoise.config.data_dir).unwrap();
+        let nostr_mls = Account::create_nostr_mls(
+            creator_account.pubkey,
+            &whitenoise.config.data_dir,
+        ).expect("failed to create test MLS instance for creator");
```
52-54: Reduce flakiness: prefer readiness checks over fixed sleeps

Fixed sleeps can be flaky under load. When feasible, poll for key package availability or event propagation rather than sleeping a fixed 100/300ms.
I can draft a small polling helper (with timeout) to check readiness and replace both sleeps—want me to open a follow-up?
Also applies to: 100-103
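As a sketch of the polling helper offered above (all names here are illustrative, not from the codebase; the real tests would use an async variant built on `tokio::time` with the same deadline logic), a std-only version could look like:

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Polls `check` every `interval` until it returns true or `timeout` elapses.
/// Returns true if the condition became ready in time, false on timeout.
fn wait_until(timeout: Duration, interval: Duration, mut check: impl FnMut() -> bool) -> bool {
    let deadline = Instant::now() + timeout;
    loop {
        if check() {
            return true;
        }
        if Instant::now() >= deadline {
            return false;
        }
        sleep(interval);
    }
}

fn main() {
    // Simulated readiness: the "key package" becomes visible after 3 polls.
    let mut polls = 0;
    let ready = wait_until(Duration::from_millis(500), Duration::from_millis(10), || {
        polls += 1;
        polls >= 3
    });
    assert!(ready);
    println!("ready after {} polls", polls);
}
```

A test would replace the fixed `sleep(300ms)` with `wait_until(...)` on the actual readiness condition, so it finishes as soon as the relay catches up instead of always paying the worst-case delay.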
14-33: Optional refactor: enrich logs with event ID and use `into()` for error conversion

Leveraging the existing `#[from]` on `WhitenoiseError::NostrMlsError`, you can replace the manual variant construction with `e.into()`, and include the event’s ID in your logs for better traceability.

In src/whitenoise/event_processor/event_handlers/handle_mls_message.rs, update:

```diff
 match nostr_mls.process_message(&event) {
     Ok(result) => {
         tracing::debug!(
             target: "whitenoise::event_handlers::handle_mls_message",
-            "Handled MLS message - Result: {:?}",
-            result
+            "Handled MLS message for account {} (event {}): {:?}",
+            account.pubkey.to_hex(),
+            event.id,
+            result
         );
         Ok(())
     }
     Err(e) => {
         tracing::error!(
             target: "whitenoise::event_handlers::handle_mls_message",
-            "MLS message handling failed for account {}: {}",
-            account.pubkey.to_hex(),
-            e
+            "MLS message handling failed for account {} (event {}): {}",
+            account.pubkey.to_hex(),
+            event.id,
+            e
         );
-        Err(WhitenoiseError::NostrMlsError(e))
+        Err(e.into())
     }
 }
```

Since `WhitenoiseError::NostrMlsError(#[from] nostr_mls::Error)` is already declared, `e.into()` will automatically perform the conversion.

src/whitenoise/accounts/messages.rs (2)
19-21: Doc-param mismatch: function takes Account, not sender_pubkey

Comment still references sender_pubkey, but the API takes `account: &Account`. Please update for clarity.

```diff
-    /// * `sender_pubkey` - The public key of the user sending the message. This is used
-    ///   to identify the sender and fetch their account for message creation.
+    /// * `account` - The sender’s account (used to derive pubkey and MLS context)
```
204-206: Avoid erasing aggregator error details with anyhow wrappingMapping the aggregator error through anyhow and then into WhitenoiseError loses specificity. Prefer a concrete WhitenoiseError variant (e.g., AggregationError) or direct From mapping if available.
Would you like me to add a WhitenoiseError::Aggregation(Error) variant and update this mapping?
src/whitenoise/database/relays.rs (1)
84-90: Minor: use the existing From conversion

Avoid manual struct construction; it’s already implemented.

```diff
-        Ok(Relay {
-            id: Some(relay_row.id),
-            url: relay_row.url,
-            created_at: relay_row.created_at,
-            updated_at: relay_row.updated_at,
-        })
+        Ok(relay_row.into())
```

src/whitenoise/relays.rs (1)
56-63: Consider TryFrom instead of defaulting unknown kinds to Nip65Defaulting to Nip65 can hide bugs when encountering unexpected kinds. If practical, prefer TryFrom -> Result<RelayType, _> at call sites that can handle failure.
src/whitenoise/accounts/core.rs (4)
739-744: Avoid mapping Blossom errors to Other(anyhow)

Wrapping the Blossom client error in anyhow and then Other loses fidelity. Prefer a dedicated WhitenoiseError variant (e.g., BlossomError) or implement From<nostr_blossom::Error> for WhitenoiseError and use map_err(WhitenoiseError::from).
I can add a specific error variant and conversion if you confirm the desired naming.
642-656: Minor: avoid duplicate DB calls for relays in setup_subscriptions

You call `account.nip65_relays(self).await?` twice. Cache results once for clarity and fewer queries.

```diff
-    let user_relays = account.nip65_relays(self).await?;
+    let user_relays = account.nip65_relays(self).await?;
+    let inbox_relays = account.inbox_relays(self).await?;
     ...
     self.nostr
         .setup_account_subscriptions_with_signer(
             account.pubkey,
-            &account.nip65_relays(self).await?,
-            &account.inbox_relays(self).await?,
+            &user_relays,
+            &inbox_relays,
             &group_relays_vec,
             &nostr_group_ids,
             keys,
         )
```
929-932: Outdated comment mentions DashSet

The comment references DashSet, but the code uses Vec. Update the comment to reflect the current data structure.

```diff
-    // Convert DashSet to Vec to avoid iterator type issues
+    // Convert defaults to a Vec for comparison
```
1126-1175: Gate network-dependent test to avoid CI flakiness

test_update_metadata requires local relays (it even hints at this in the expect message). Consider marking it #[ignore] or gating behind an “integration” feature/env flag.
I can wire a small helper to conditionally skip when relays aren’t reachable.
src/whitenoise/database/users.rs (4)
150-169: Guard against unsaved User (None id) before querying relays

Binding `self.id` (Option) can silently turn into NULL and yield misleading empty results. Fail fast if the user isn’t persisted.

```diff
 pub(crate) async fn relays(
     &self,
     relay_type: RelayType,
     database: &Database,
 ) -> Result<Vec<Relay>, WhitenoiseError> {
+    let user_id = self
+        .id
+        .ok_or_else(|| WhitenoiseError::UserNotFound)?;
     let relay_type_str = String::from(relay_type);
     @@
-        .bind(self.id)
+        .bind(user_id)
```

If a dedicated error (e.g., UserNotPersisted) exists, prefer that over UserNotFound.
170-179: Use existing RelayRow -> Relay conversion to reduce duplication

RelayRow already converts into Relay. Prefer `Into::into` for clarity and consistency.

```diff
-        let relays = relay_rows
-            .into_iter()
-            .map(|row| Relay {
-                id: Some(row.id),
-                url: row.url,
-                created_at: row.created_at,
-                updated_at: row.updated_at,
-            })
-            .collect();
+        let relays = relay_rows.into_iter().map(Into::into).collect();
```
196-223: Consistency: error mapping style is mixed in save()

You sometimes rely on From (with `?`), and elsewhere map to WhitenoiseError::Database explicitly. Consider standardizing to one approach across the module for readability.

Can we confirm that WhitenoiseError has `impl From<DatabaseError>`? If so, we can simplify `.map_err(DatabaseError::Sqlx).map_err(WhitenoiseError::Database)?` call sites.
300-324: Single-query delete is good; add persisted-user guard

Great improvement using URL-based subselect. Add the same persisted-user check as above to avoid binding NULL user_id.

```diff
 pub(crate) async fn remove_relay(
     &self,
     relay: &Relay,
     relay_type: RelayType,
     database: &Database,
 ) -> Result<(), WhitenoiseError> {
+    let user_id = self.id.ok_or(WhitenoiseError::UserNotFound)?;
     let result = sqlx::query(
         "DELETE FROM user_relays
          WHERE user_id = ?
          AND relay_id = (SELECT id FROM relays WHERE url = ?)
          AND relay_type = ?",
     )
-    .bind(self.id)
+    .bind(user_id)
     .bind(relay.url.to_string())
     .bind(String::from(relay_type))
     .execute(&database.pool)
```

src/whitenoise/database/accounts.rs (6)
129-137: Reuse AccountRow::into_account to avoid duplication

You already defined into_account; use it here for consistency and less boilerplate.

```diff
-        Ok(Account {
-            id: Some(account_row.id),
-            user_id: account_row.user_id,
-            pubkey: account_row.pubkey,
-            last_synced_at: account_row.last_synced_at,
-            created_at: account_row.created_at,
-            updated_at: account_row.updated_at,
-        })
+        Ok(account_row.into_account()?)
```
186-197: Prefer Into::into for UserRow -> User

You already have From for User; leverage it for clarity and to avoid drift.

```diff
-        let users = user_rows
-            .into_iter()
-            .map(|row| User {
-                id: Some(row.id),
-                pubkey: row.pubkey,
-                metadata: row.metadata,
-                created_at: row.created_at,
-                updated_at: row.updated_at,
-            })
-            .collect();
+        let users = user_rows.into_iter().map(Into::into).collect();
```
215-227: Validate persisted IDs before “is_following_user” check

If either id is None, you’ll bind NULL and get false negatives. Fail fast for clarity.

```diff
 pub(crate) async fn is_following_user(
     &self,
     user: &User,
     database: &Database,
 ) -> Result<bool, WhitenoiseError> {
+    let account_id = self.id.ok_or(WhitenoiseError::AccountNotFound)?;
+    let user_id = user.id.ok_or(WhitenoiseError::UserNotFound)?;
     let result = sqlx::query(
         "SELECT COUNT(*) FROM account_follows WHERE account_id = ? AND user_id = ?",
     )
-    .bind(self.id)
-    .bind(user.id)
+    .bind(account_id)
+    .bind(user_id)
     .fetch_one(&database.pool)
     .await?;
     Ok(result.get::<i64, _>(0) > 0)
 }
```
248-257: Good upsert pattern for follow_user; consider wrapping in a tx for bulk ops

The ON CONFLICT update is solid. If you’re calling follow_user repeatedly (e.g., follow_users), wrapping in a single transaction can reduce IO and lock overhead.
301-310: Batch following should use a transaction

Minimize round-trips and ensure atomicity for multi-follow operations.

```diff
 pub(crate) async fn follow_users(
     &self,
     users: &[User],
     database: &Database,
 ) -> Result<(), WhitenoiseError> {
-    for user in users {
-        self.follow_user(user, database).await?;
-    }
-    Ok(())
+    let mut tx = database.pool.begin().await.map_err(DatabaseError::Sqlx)?;
+    for user in users {
+        sqlx::query("INSERT INTO account_follows (account_id, user_id, created_at, updated_at) VALUES (?, ?, ?, ?) ON CONFLICT(account_id, user_id) DO UPDATE SET updated_at = ?")
+            .bind(self.id)
+            .bind(user.id)
+            .bind(self.created_at.timestamp_millis())
+            .bind(self.updated_at.timestamp_millis())
+            .bind(Utc::now().timestamp_millis())
+            .execute(&mut *tx)
+            .await
+            .map_err(DatabaseError::Sqlx)?;
+    }
+    tx.commit().await.map_err(DatabaseError::Sqlx)?;
+    Ok(())
 }
```

Note: Keep the single-row variant for ergonomics; use this fast path for bulk.
45-51: Optional: simplify last_synced_at decoding

Current code probes Option then re-parses. To keep flexibility if column type changes (INTEGER/TEXT), you can rely solely on parse_timestamp with an Option<String/i64> probe and avoid double reads.
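For reference, the dual INTEGER/TEXT handling that parse_timestamp performs can be sketched as follows. This is a std-only approximation with hypothetical names: the real helper in src/whitenoise/database/utils.rs operates on sqlx rows and chrono types, and its TEXT branch parses datetime strings rather than decimal seconds.

```rust
/// Normalizes a raw SQLite value to i64 milliseconds since the epoch.
/// INTEGER columns arrive as plain digit strings; for illustration, TEXT
/// columns are treated here as a decimal seconds value (an assumption --
/// the real code parses datetime strings via chrono).
fn parse_timestamp_millis(raw: &str) -> Result<i64, String> {
    // INTEGER column: already millis.
    if let Ok(ms) = raw.parse::<i64>() {
        return Ok(ms);
    }
    // TEXT column (simplified): fractional seconds -> millis.
    if let Ok(secs) = raw.parse::<f64>() {
        return Ok((secs * 1000.0) as i64);
    }
    Err(format!("unparseable timestamp: {raw}"))
}

fn main() {
    assert_eq!(parse_timestamp_millis("1734000000000").unwrap(), 1_734_000_000_000);
    assert_eq!(parse_timestamp_millis("1734000000.5").unwrap(), 1_734_000_000_500);
    assert!(parse_timestamp_millis("not-a-time").is_err());
}
```

The point of the single-probe design is that one fallible conversion covers both column representations, so callers never need to know which type the migration left behind.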
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (18)
- db_migrations/0007_drop_old_tables.sql (1 hunks)
- src/nostr_manager/query.rs (1 hunks)
- src/nostr_manager/search.rs (0 hunks)
- src/whitenoise/accounts/core.rs (1 hunks)
- src/whitenoise/accounts/groups.rs (13 hunks)
- src/whitenoise/accounts/key_packages.rs (1 hunks)
- src/whitenoise/accounts/messages.rs (4 hunks)
- src/whitenoise/accounts/welcomes.rs (6 hunks)
- src/whitenoise/database/accounts.rs (33 hunks)
- src/whitenoise/database/app_settings.rs (1 hunks)
- src/whitenoise/database/mod.rs (10 hunks)
- src/whitenoise/database/relays.rs (6 hunks)
- src/whitenoise/database/user_relays.rs (3 hunks)
- src/whitenoise/database/users.rs (27 hunks)
- src/whitenoise/database/utils.rs (1 hunks)
- src/whitenoise/event_processor/event_handlers/handle_giftwrap.rs (3 hunks)
- src/whitenoise/event_processor/event_handlers/handle_mls_message.rs (5 hunks)
- src/whitenoise/relays.rs (2 hunks)
💤 Files with no reviewable changes (1)
- src/nostr_manager/search.rs
🚧 Files skipped from review as they are similar to previous changes (9)
- db_migrations/0007_drop_old_tables.sql
- src/whitenoise/event_processor/event_handlers/handle_giftwrap.rs
- src/whitenoise/database/user_relays.rs
- src/whitenoise/database/app_settings.rs
- src/whitenoise/accounts/key_packages.rs
- src/whitenoise/accounts/welcomes.rs
- src/whitenoise/database/mod.rs
- src/whitenoise/accounts/groups.rs
- src/whitenoise/database/utils.rs
🧰 Additional context used
🧠 Learnings (7)
📚 Learning: 2025-08-17T19:34:30.290Z
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/messages.rs:193-195
Timestamp: 2025-08-17T19:34:30.290Z
Learning: In the whitenoise codebase, `Account::create_nostr_mls()` returns `Result<NostrMls<NostrMlsSqliteStorage>, AccountError>`, and `WhitenoiseError` has an `Account(#[from] AccountError)` variant that automatically converts. When handling MLS creation errors, use the `?` operator to let `AccountError` convert naturally to `WhitenoiseError::Account` rather than manually mapping to `WhitenoiseError::Configuration`.
Applied to files:
- src/whitenoise/accounts/messages.rs
- src/whitenoise/accounts/core.rs
- src/whitenoise/database/accounts.rs
- src/whitenoise/database/users.rs
- src/whitenoise/event_processor/event_handlers/handle_mls_message.rs
- src/whitenoise/relays.rs
📚 Learning: 2025-08-17T19:35:53.509Z
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/welcomes.rs:53-53
Timestamp: 2025-08-17T19:35:53.509Z
Learning: In the Whitenoise codebase, Account::create_nostr_mls should have its errors mapped to WhitenoiseError::NostrMlsError instead of using anyhow::anyhow!() wrapping, as this preserves type safety and specific error information.
Applied to files:
- src/whitenoise/accounts/messages.rs
- src/whitenoise/accounts/core.rs
- src/whitenoise/database/accounts.rs
- src/whitenoise/database/users.rs
- src/whitenoise/event_processor/event_handlers/handle_mls_message.rs
- src/whitenoise/relays.rs
📚 Learning: 2025-08-17T19:21:11.346Z
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/groups.rs:61-63
Timestamp: 2025-08-17T19:21:11.346Z
Learning: When suggesting error handling improvements in the Whitenoise codebase, prefer using existing specific WhitenoiseError variants (like NostrMlsError, DatabaseError, Configuration) over generic anyhow::anyhow!() wrapping, as this maintains type safety and consistency with the established error handling patterns.
Applied to files:
- src/whitenoise/accounts/messages.rs
- src/whitenoise/accounts/core.rs
- src/whitenoise/event_processor/event_handlers/handle_mls_message.rs
📚 Learning: 2025-08-11T10:25:55.436Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/mod.rs:669-692
Timestamp: 2025-08-11T10:25:55.436Z
Learning: In the Whitenoise codebase, the `NostrManager::publish_event_builder_with_signer` method automatically ensures relay connectivity by calling `self.add_relays()` before publishing events. This means explicit relay connectivity checks are not needed before calling publishing methods that use this function.
Applied to files:
- src/whitenoise/accounts/messages.rs
- src/nostr_manager/query.rs
📚 Learning: 2025-08-11T10:23:59.406Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/relays.rs:260-278
Timestamp: 2025-08-11T10:23:59.406Z
Learning: In `src/whitenoise/accounts/relays.rs`, the `publish_relay_list_for_account` method is only called after the account setup is complete, which guarantees that `account.nip65_relays` is already populated and will never be empty. Therefore, no fallback logic is needed when `target_relays` is None.
Applied to files:
- src/whitenoise/accounts/core.rs
- src/whitenoise/database/users.rs
- src/whitenoise/event_processor/event_handlers/handle_mls_message.rs
- src/whitenoise/relays.rs
- src/nostr_manager/query.rs
📚 Learning: 2025-08-17T16:15:29.420Z
Learnt from: erskingardner
PR: parres-hq/whitenoise#318
File: src/nostr_manager/query.rs:39-47
Timestamp: 2025-08-17T16:15:29.420Z
Learning: In Whitenoise's NostrManager, NIP-65 relay lists use Tag::reference() to create "r" tags (the correct NIP-65 format), while other relay types like Inbox and KeyPackage use Tag::custom(TagKind::Relay, ...) to create "relay" tags. These different tag formats should not be unified as they follow different Nostr specifications.
Applied to files:
- src/whitenoise/relays.rs
- src/nostr_manager/query.rs
📚 Learning: 2025-08-11T10:25:55.436Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/mod.rs:669-692
Timestamp: 2025-08-11T10:25:55.436Z
Learning: In the Whitenoise codebase, the `NostrManager::publish_event_builder_with_signer` method automatically ensures relay connectivity by calling `self.ensure_relays_connected()` before publishing events. This means explicit relay connectivity checks are not needed before calling methods like `publish_relay_list_for_pubkey` that internally use `publish_event_builder_with_signer`.
Applied to files:
src/nostr_manager/query.rs
🧬 Code Graph Analysis (8)
src/whitenoise/accounts/messages.rs (6)
- src/whitenoise/accounts/groups.rs (4): relays (129-129, 292-292, 388-388), nostr_mls (69-73)
- src/whitenoise/accounts/core.rs (3): create_nostr_mls (38-45, 773-775), new (17-36)
- src/whitenoise/accounts/key_packages.rs (1): account (13-18)
- src/whitenoise/relays.rs (1): new (68-75)
- src/whitenoise/database/relays.rs (1): find_by_url (74-90)
- src/whitenoise/database/accounts.rs (1): find_by_pubkey (118-137)
src/whitenoise/accounts/core.rs (7)
- src/whitenoise/database/users.rs (5): relays (150-181), find_by_pubkey (123-134), sqlx (127-127, 157-157, 214-214)
- src/whitenoise/relays.rs (1): new (68-75)
- src/whitenoise/users.rs (2): new (18-26), user_relays (143-145)
- src/nostr_manager/mod.rs (2): new (79-188), event (884-889)
- src/whitenoise/database/accounts.rs (7): user (152-159), find_by_pubkey (118-137), sqlx (93-93, 123-123, 153-153, 175-175), all (92-102)
- src/whitenoise/utils.rs (1): capitalize_first_letter (114-120)
- src/whitenoise/database/relays.rs (3): sqlx (78-78, 133-133), find_by_url (74-90)
src/whitenoise/database/accounts.rs (4)
- src/whitenoise/database/utils.rs (3): parse_timestamp (27-52), row (38-38, 44-44)
- src/whitenoise/database/users.rs (4): sqlx (127-127, 157-157, 214-214), from_row (30-58)
- src/whitenoise/users.rs (1): new (18-26)
- src/whitenoise/accounts/core.rs (1): new (17-36)
src/whitenoise/database/users.rs (4)
- src/whitenoise/database/utils.rs (3): parse_timestamp (27-52), row (38-38, 44-44)
- src/whitenoise/database/accounts.rs (8): row (45-45), find_by_pubkey (118-137), user (152-159), save (325-337), sqlx (93-93, 123-123, 153-153, 175-175)
- src/whitenoise/database/relays.rs (4): from (49-56), save (119-142), sqlx (78-78, 133-133)
- src/whitenoise/accounts/core.rs (2): new (17-36), add_relay (107-118)
src/whitenoise/event_processor/event_handlers/handle_mls_message.rs (4)
- src/whitenoise/accounts/groups.rs (1): nostr_mls (69-73)
- src/whitenoise/accounts/core.rs (4): create_nostr_mls (38-45, 773-775), groups (636-639), new (17-36)
- src/whitenoise/mod.rs (2): setup_multiple_test_accounts (626-655), new (43-57)
- src/whitenoise/accounts/welcomes.rs (1): groups (154-157)
src/whitenoise/relays.rs (1)
- src/whitenoise/database/relays.rs (2): from (49-56), find_or_create_by_url (92-104)
src/nostr_manager/query.rs (3)
- src/whitenoise/relays.rs (1): new (68-75)
- src/nostr_manager/mod.rs (3): new (79-188), event (884-889), relay_urls_from_event (424-434)
- src/whitenoise/utils.rs (1): urls (103-105)
src/whitenoise/database/relays.rs (3)
- src/whitenoise/database/utils.rs (3): parse_timestamp (27-52), row (38-38, 44-44)
- src/whitenoise/database/accounts.rs (6): row (45-45), save (325-337), sqlx (93-93, 123-123, 153-153, 175-175)
- src/whitenoise/relays.rs (4): from (37-43, 47-53, 57-64), new (68-75)
🔇 Additional comments (5)
src/whitenoise/event_processor/event_handlers/handle_mls_message.rs (2)
14-14: Good: switched to `?` for MLS creation error propagation

This aligns with the codebase’s pattern of leveraging #[from] on AccountError into WhitenoiseError and avoids panics. Nice.
49-51: No remaining three-argument calls to `setup_multiple_test_accounts`

AST-based search confirmed there are no stale 3-argument usages—every call (including in src/whitenoise/event_processor/event_handlers/handle_mls_message.rs) now uses the updated two-argument signature.

src/whitenoise/accounts/messages.rs (1)
74-76: Good fix: per-call MLS and idiomatic error propagation

Using Account::create_nostr_mls(...)? here is correct. It lets AccountError convert into WhitenoiseError via #[from], matching project error-handling patterns.
src/nostr_manager/query.rs (1)
49-58: Correct tag formats per spec (NIP-65 vs. others) — looks good

Using Tag::reference for NIP-65 and Tag::custom(TagKind::Relay, ...) for Inbox/KeyPackage matches the intended specifications.
src/whitenoise/database/users.rs (1)
61-71: LGTM: Accurate UserRow -> User mapping

Promotes DB id into Some(id) and carries timestamps/metadata through. Matches broader refactor direction.
```diff
 #[derive(Serialize, Deserialize, Debug, PartialEq, Eq, Clone, Hash)]
 pub struct Relay {
-    pub id: i64,
+    pub id: Option<i64>,
     pub url: RelayUrl,
     pub created_at: DateTime<Utc>,
     pub updated_at: DateTime<Utc>,
 }
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Relay Eq/Hash derive is incorrect for dedup; implement URL-based equality
Deriving Eq/Hash across all fields (id, created_at, updated_at) breaks deduplication by URL. Two Relay instances pointing to the same URL but different timestamps won’t be equal, defeating HashSet-based dedup in callers.
```diff
-#[derive(Serialize, Deserialize, Debug, PartialEq, Eq, Clone, Hash)]
+#[derive(Serialize, Deserialize, Debug, Clone)]
 pub struct Relay {
     pub id: Option<i64>,
     pub url: RelayUrl,
     pub created_at: DateTime<Utc>,
     pub updated_at: DateTime<Utc>,
 }
+
+impl PartialEq for Relay {
+    fn eq(&self, other: &Self) -> bool {
+        self.url == other.url
+    }
+}
+impl Eq for Relay {}
+impl std::hash::Hash for Relay {
+    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
+        self.url.hash(state)
+    }
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct Relay {
    pub id: Option<i64>,
    pub url: RelayUrl,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
}

impl PartialEq for Relay {
    fn eq(&self, other: &Self) -> bool {
        self.url == other.url
    }
}
impl Eq for Relay {}
impl std::hash::Hash for Relay {
    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
        self.url.hash(state)
    }
}
```
🤖 Prompt for AI Agents
In src/whitenoise/relays.rs around lines 8 to 14, the Relay struct currently
derives Eq and Hash across all fields which prevents deduplication by URL;
change the implementation so equality and hashing are based solely on the url
field. Remove Eq and Hash from the derive list (keep Serialize, Deserialize,
Debug, Clone, PartialEq if desired) and add manual impls: implement PartialEq
and Eq (or only PartialEq if already derived) comparing only self.url ==
other.url, and implement Hash using only self.url.hash(state). This ensures
HashSet/HashMap dedupe uses the RelayUrl value instead of id/timestamps.
@coderabbitai - if we have a uniqueness constraint on Relay in the database, wouldn't using the macro derives for Hash, Eq, and PartialEq work just fine to ensure that we're getting the right relay? This feels like an extra layer of validation that we might not need?
🧩 Analysis chain
Let me examine the database schema and usage patterns to better understand this concern.
@erskingardner You make an excellent point! You're absolutely right. Let me examine the actual usage patterns and database constraints.
Looking at the codebase, I can see that:
- Database uniqueness constraint: The relays table likely has a UNIQUE constraint on the `url` column, ensuring only one record per URL exists in the database.
- `find_or_create_by_url` pattern: This method will always return the same database record (with the same `id`, `created_at`, `updated_at`) for the same URL, so two `Relay` instances representing the same URL will have identical field values.
- Real-world usage: The `Relay` instances are typically created through database operations rather than being constructed with different timestamps in memory.
You're correct that the database uniqueness constraint effectively ensures that Relay instances for the same URL will have the same field values, making the default derived implementations of Hash, Eq, and PartialEq work correctly.
My suggestion was based on a theoretical concern about in-memory deduplication, but in practice, the database layer already handles this constraint. The extra validation layer I suggested is indeed unnecessary.
The current approach with derived traits is the right choice here - it's simpler, follows Rust conventions, and works correctly given the database constraints. Thanks for the clarification!
🏁 Script executed:
#!/bin/bash
# Check for database schema/migration files to see if there's a uniqueness constraint on relay URLs
find . -name "*.sql" -type f | head -10
Length of output: 419
🏁 Script executed:
#!/bin/bash
# Look for uniqueness constraints on relay URLs in migration files
rg -n -A 3 -B 3 "UNIQUE.*url|url.*UNIQUE" --type=sql
Length of output: 549
🏁 Script executed:
#!/bin/bash
# Check how Relay is used in HashSet contexts in the codebase
rg -n -A 5 -B 5 "HashSet.*Relay|HashMap.*Relay" --type=rust
Length of output: 2236
🏁 Script executed:
#!/bin/bash
# Look for the database schema for relays table to understand constraints
rg -n -A 10 -B 5 "CREATE TABLE.*relays|CREATE.*TABLE.*relays" --type=sql
Length of output: 1699
🏁 Script executed:
#!/bin/bash
# Check the find_or_create_by_url implementation to understand the database behavior
rg -n -A 15 -B 5 "find_or_create_by_url" --type=rust
Length of output: 4678
@erskingardner You are absolutely correct! The evidence confirms your point perfectly.
Looking at the database schema in db_migrations/0005_new_tables.sql:
```sql
CREATE TABLE relays (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    url TEXT NOT NULL UNIQUE,  -- <-- This is the key constraint
    created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```

And examining the find_or_create_by_url implementation, it's clear that:
- Database enforces uniqueness: The `UNIQUE` constraint on the `url` column ensures only one record per URL exists
- Consistent instances: `find_or_create_by_url` either returns the existing record or creates a new one, but never creates duplicates
- Same field values: Any two `Relay` instances representing the same URL will have identical `id`, `url`, `created_at`, and `updated_at` values
I can also see this pattern working correctly in practice - the code uses HashSet<Relay> for deduplication in places like messages.rs and welcomes.rs, and it works fine with the derived traits.
My suggestion for custom URL-based equality was solving a problem that doesn't exist in this architecture. The database constraint handles the uniqueness concern at the right level, and the derived implementations are simpler, more maintainable, and follow Rust conventions.
Thank you for the clarification - the current approach is definitely the right choice!
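To make the trade-off concrete, here is a self-contained demonstration of the point above. It uses a simplified `Relay` (a `String` URL standing in for `RelayUrl`, and a bare millis timestamp) with derived traits: when equal URLs imply identical rows, derived Eq/Hash dedupe correctly; only hand-built instances with divergent timestamps would escape dedup, which the database layer prevents.

```rust
use std::collections::HashSet;

// Simplified stand-in for the real struct; derived traits, as in the codebase.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct Relay {
    id: Option<i64>,
    url: String, // stand-in for RelayUrl
    created_at_ms: i64,
}

fn main() {
    // find_or_create_by_url + UNIQUE(url) means the same URL always yields
    // the same row, so two instances for one URL compare equal field-by-field.
    let a = Relay { id: Some(1), url: "wss://relay.example".into(), created_at_ms: 100 };
    let b = a.clone();
    let deduped: HashSet<Relay> = [a.clone(), b].into_iter().collect();
    assert_eq!(deduped.len(), 1); // derived traits dedupe as expected

    // The theoretical hazard: same URL, different timestamp. This can only
    // happen for in-memory instances that bypassed the database.
    let c = Relay { id: Some(1), url: "wss://relay.example".into(), created_at_ms: 200 };
    let not_deduped: HashSet<Relay> = [a, c].into_iter().collect();
    assert_eq!(not_deduped.len(), 2);
}
```

The second assertion is exactly the case the custom URL-only equality would have collapsed to one entry; since the schema guarantees it cannot arise from DB-sourced values, the simpler derived implementations suffice.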
🧬 Code Graph Analysis Results
Based on the provided code snippets, there are a few potentially relevant code snippets that could be helpful in understanding the file and the user's comment:
- src/whitenoise/database/relays.rs: The `find_or_create_by_url` function at line range 92-104 is relevant, as it handles the logic of finding or creating a `Relay` instance based on the given URL. This could be relevant to the user's comment about the uniqueness constraint on the `Relay` entity.
- src/whitenoise/relays.rs: The `Relay` struct at the beginning of the file is relevant, as it contains the definition of the `Relay` entity, including the `id`, `url`, `created_at`, and `updated_at` fields. The `find_or_create_relay` function at the end of the file is also relevant, as it calls the `find_or_create_by_url` function from the `relays.rs` file to handle the logic of finding or creating a `Relay` instance.
The other code snippets from the provided list do not seem directly relevant to the user's comment, as they deal with different aspects of the Whitenoise application, such as secrets management, message aggregation, error handling, user management, and database setup.
In summary, the most relevant code snippets are from the src/whitenoise/database/relays.rs and src/whitenoise/relays.rs files, as they contain the logic for managing the Relay entity, which seems to be the focus of the user's comment.
🧠 Learnings used
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/relays.rs:260-278
Timestamp: 2025-08-11T10:23:59.406Z
Learning: In `src/whitenoise/accounts/relays.rs`, the `publish_relay_list_for_account` method is only called after the account setup is complete, which guarantees that `account.nip65_relays` is already populated and will never be empty. Therefore, no fallback logic is needed when `target_relays` is None.
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/messages.rs:193-195
Timestamp: 2025-08-17T19:34:30.290Z
Learning: In the whitenoise codebase, `Account::create_nostr_mls()` returns `Result<NostrMls<NostrMlsSqliteStorage>, AccountError>`, and `WhitenoiseError` has an `Account(#[from] AccountError)` variant that automatically converts. When handling MLS creation errors, use the `?` operator to let `AccountError` convert naturally to `WhitenoiseError::Account` rather than manually mapping to `WhitenoiseError::Configuration`.
```rust
    Ok((account, keys))
}

pub(crate) fn create_nostr_mls(
```
This usage is not efficient or ergonomic.
For every message operation (sending, fetching, creating groups, etc.), a new rusqlite connection is established and the migrations are run.
If we don't want this field in the Account struct, then we could keep a hashmap in the global whitenoise object.
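The hashmap alternative suggested above can be sketched with std types only. `MlsHandle` here is a hypothetical stand-in for the real `NostrMls` storage handle, and `Whitenoise` is reduced to just the cache field:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Stand-in for the expensive-to-create NostrMls instance
// (real creation opens a rusqlite connection and runs migrations).
#[derive(Debug)]
struct MlsHandle {
    account: String,
}

#[derive(Default)]
struct Whitenoise {
    // One handle per account pubkey, created lazily and then reused,
    // instead of rebuilding it on every message operation.
    mls_cache: Mutex<HashMap<String, Arc<MlsHandle>>>,
}

impl Whitenoise {
    fn nostr_mls(&self, pubkey: &str) -> Arc<MlsHandle> {
        let mut cache = self.mls_cache.lock().unwrap();
        cache
            .entry(pubkey.to_string())
            .or_insert_with(|| {
                // Expensive setup happens only on first access.
                Arc::new(MlsHandle { account: pubkey.to_string() })
            })
            .clone()
    }
}

fn main() {
    let wn = Whitenoise::default();
    let a = wn.nostr_mls("pubkey-a");
    let b = wn.nostr_mls("pubkey-a");
    // Both calls return the same cached instance.
    assert!(Arc::ptr_eq(&a, &b));
    println!("reused cached handle for {}", a.account);
}
```

This is only a shape sketch; the real change would also need a story for invalidation on logout and for the `!Send` concerns of the actual storage type.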
@jgmontoya and I talked about this and decided to just leave it horribly inefficient for now. We know it's not optimal but don't want to spend any more time on that now. We can fix this later when it becomes an issue and we have real data to back up how slow/inefficient it is.
delcin-raj left a comment
Have an issue with the nostr_mls object usage
Distinguish RowNotFound from genuine DB errors, add UserNotPersisted validation, and refactor add_relay to use relay.save()
Commence Operation Big Plans!
Actionable comments posted: 1
🔭 Outside diff range comments (1)
src/whitenoise/mod.rs (1)
145-148: Idempotency bug: subsequent initialize_whitenoise calls start orphan loops and leak channels

initialize_whitenoise always creates fresh channels and starts a new event loop, even when GLOBAL_WHITENOISE is already initialized. On the 2nd+ call, the new event_receiver isn't paired with the stored event_sender inside the OnceCell instance, resulting in an idle loop and leaked channels. This is high-impact and hard to diagnose.
Guard against re-initialization early to keep initialization idempotent and ensure the event loop is started exactly once.
Apply this diff to short-circuit repeated initialization:
```diff
 pub async fn initialize_whitenoise(config: WhitenoiseConfig) -> Result<()> {
-    // Create event processing channels
-    let (event_sender, event_receiver) = mpsc::channel(500);
-    let (shutdown_sender, shutdown_receiver) = mpsc::channel(1);
+    // Return early if we've already initialized the global instance
+    if GLOBAL_WHITENOISE.get().is_some() {
+        tracing::debug!(
+            target: "whitenoise::initialize_whitenoise",
+            "Whitenoise already initialized; skipping re-initialization"
+        );
+        return Ok(());
+    }
+
+    // Create event processing channels
+    let (event_sender, event_receiver) = mpsc::channel(500);
+    let (shutdown_sender, shutdown_receiver) = mpsc::channel(1);
 @@
-    Self::start_event_processing_loop(whitenoise_ref, event_receiver, shutdown_receiver).await;
+    // Start event processing loop once (first initialization only)
+    Self::start_event_processing_loop(whitenoise_ref, event_receiver, shutdown_receiver).await;
```

If you prefer, we can move channel creation and loop start into the get_or_try_init closure and keep them strictly tied to the initial setup; the early-return is the minimal fix.
Also applies to: 220-227
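The guard-then-initialize shape suggested above can be shown in miniature with std's `OnceLock` (a simplified stand-in for the `GLOBAL_WHITENOISE` `OnceCell`; the counter stands in for channel creation and the event loop):

```rust
use std::sync::OnceLock;
use std::sync::atomic::{AtomicUsize, Ordering};

static GLOBAL: OnceLock<&'static str> = OnceLock::new();
static LOOPS_STARTED: AtomicUsize = AtomicUsize::new(0);

// Simplified initialize: if the global is already set, return before
// creating channels or starting another event loop.
fn initialize() -> Result<(), String> {
    if GLOBAL.get().is_some() {
        return Ok(()); // idempotent: no orphan loop, no leaked channels
    }
    // "channel creation" and "event loop start" happen only once.
    LOOPS_STARTED.fetch_add(1, Ordering::SeqCst);
    GLOBAL
        .set("whitenoise")
        .map_err(|_| "already set".to_string())?;
    Ok(())
}

fn main() {
    initialize().unwrap();
    initialize().unwrap(); // second call short-circuits
    assert_eq!(LOOPS_STARTED.load(Ordering::SeqCst), 1);
    println!("event loop started exactly once");
}
```

The check-then-set here is fine single-threaded; the real fix would want the work inside `get_or_try_init` (as the comment notes) to stay race-free under concurrent callers.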
♻️ Duplicate comments (2)
src/nostr_manager/mod.rs (1)
642-643: Consider renaming to `sync_all_user_data` for clarity. The method name should reflect its action rather than imply a return value.
src/whitenoise/database/users.rs (1)
286-307: remove_relay: robust URL-based deletion – LGTM

Deleting via relay URL avoids reliance on relay.id being Some and fixes the prior edge case.
🧹 Nitpick comments (11)
src/nostr_manager/mod.rs (1)
642-734: Add error handling for empty relays list before streaming.

While `ensure_relays_connected` handles empty relay lists gracefully, the stream operations might fail silently with no relays. Consider checking if any events were actually fetched and log a warning if streams are empty.

```diff
 pub async fn sync_all_user_data(
     &self,
     signer: impl NostrSigner + 'static,
     account: &Account,
     group_ids: Vec<String>,
 ) -> Result<()> {
     self.client.set_signer(signer).await;
     let mut contacts_and_self = match self.client.get_contact_list_public_keys(self.timeout).await {
         Ok(contacts) => contacts,
         Err(e) => {
             tracing::error!(
                 target: "whitenoise::nostr_manager::fetch_all_user_data_to_nostr_cache",
                 "Failed to get contact list public keys: {}",
                 e
             );
             self.client.unset_signer().await;
             return Err(NostrManagerError::Client(e));
         }
     };
     contacts_and_self.push(account.pubkey);
     let metadata_filter = Filter::new()
         .authors(contacts_and_self.clone())
         .kinds(vec![Kind::Metadata]);
     let relay_filter = Filter::new().authors(contacts_and_self).kinds(vec![
         Kind::RelayList,
         Kind::InboxRelays,
         Kind::MlsKeyPackageRelays,
     ]);
     let giftwrap_filter = Filter::new().kind(Kind::GiftWrap).pubkey(account.pubkey);
     let group_messages_filter = Filter::new()
         .kind(Kind::MlsGroupMessage)
         .custom_tags(SingleLetterTag::lowercase(Alphabet::H), group_ids)
         .since(Timestamp::from(
             account.last_synced_at.unwrap_or_default().timestamp() as u64,
         ));
     let timeout_duration = Duration::from_secs(10);
+    let mut event_count = 0;
+
     let mut metadata_events = self
         .client
         .stream_events(metadata_filter, timeout_duration)
         .await?;
     let mut relay_events = self
         .client
         .stream_events(relay_filter, timeout_duration)
         .await?;
     let mut giftwrap_events = self
         .client
         .stream_events(giftwrap_filter, timeout_duration)
         .await?;
     let mut group_messages = self
         .client
         .stream_events(group_messages_filter, timeout_duration)
         .await?;
     let whitenoise = Whitenoise::get_instance()
         .map_err(|e| NostrManagerError::EventProcessingError(e.to_string()))?;
     while let Some(event) = metadata_events.next().await {
+        event_count += 1;
         whitenoise
             .handle_metadata(event)
             .await
             .map_err(|e| NostrManagerError::EventProcessingError(e.to_string()))?;
     }
     while let Some(event) = relay_events.next().await {
+        event_count += 1;
         whitenoise
             .handle_relay_list(event)
             .await
             .map_err(|e| NostrManagerError::EventProcessingError(e.to_string()))?;
     }
     while let Some(event) = giftwrap_events.next().await {
+        event_count += 1;
         whitenoise
             .handle_giftwrap(account, event)
             .await
             .map_err(|e| NostrManagerError::EventProcessingError(e.to_string()))?;
     }
     while let Some(event) = group_messages.next().await {
+        event_count += 1;
         whitenoise
             .handle_mls_message(account, event)
             .await
             .map_err(|e| NostrManagerError::EventProcessingError(e.to_string()))?;
     }
+    if event_count == 0 {
+        tracing::warn!(
+            target: "whitenoise::nostr_manager::sync_all_user_data",
+            "No events fetched during sync for account {}",
+            account.pubkey
+        );
+    } else {
+        tracing::debug!(
+            target: "whitenoise::nostr_manager::sync_all_user_data",
+            "Successfully synced {} events for account {}",
+            event_count,
+            account.pubkey
+        );
+    }
+
     self.client.unset_signer().await;
     Ok(())
 }
```

src/whitenoise/database/accounts.rs (4)
45-52: Handle `None` case for `last_synced_at` parsing without intermediate value check.

The current pattern of checking for `Some(_)` and then calling `parse_timestamp` is redundant since `parse_timestamp` already handles the conversion properly.

```diff
-        let last_synced_at = match row.try_get::<Option<i64>, _>("last_synced_at")? {
-            Some(_) => Some(parse_timestamp(row, "last_synced_at")?),
-            None => None,
-        };
+        let last_synced_at = row
+            .try_get::<Option<i64>, _>("last_synced_at")?
+            .map(|_| parse_timestamp(row, "last_synced_at"))
+            .transpose()?;
```
65-76: Clarify or remove the outdated NostrMls comment.

The comment on Line 65 mentions "creating the required NostrMls instance" but the method no longer does that.

```diff
-    /// Converts an AccountRow to an Account by creating the required NostrMls instance.
+    /// Converts an AccountRow to an Account.
```
248-258: Use CURRENT_TIMESTAMP for consistency.

The upsert uses manually calculated timestamps. Consider using database-level CURRENT_TIMESTAMP for the updated_at field to ensure consistency.

```diff
-        sqlx::query("INSERT INTO account_follows (account_id, user_id, created_at, updated_at) VALUES (?, ?, ?, ?) ON CONFLICT(account_id, user_id) DO UPDATE SET updated_at = ?")
+        sqlx::query("INSERT INTO account_follows (account_id, user_id, created_at, updated_at) VALUES (?, ?, ?, ?) ON CONFLICT(account_id, user_id) DO UPDATE SET updated_at = CURRENT_TIMESTAMP")
             .bind(self.id)
             .bind(user.id)
             .bind(self.created_at.timestamp_millis())
             .bind(self.updated_at.timestamp_millis())
-            .bind(Utc::now().timestamp_millis())
             .execute(&database.pool)
             .await
             .map_err(DatabaseError::Sqlx)?;
```
326-337: Consider using RETURNING clause for atomic upsert.

The current upsert pattern could benefit from using SQLite's RETURNING clause (available in SQLite 3.35.0+) to get the inserted/updated row atomically.

```diff
-        sqlx::query("INSERT INTO accounts (pubkey, user_id, last_synced_at, created_at, updated_at) VALUES (?, ?, ?, ?, ?) ON CONFLICT(pubkey) DO UPDATE SET user_id = excluded.user_id, last_synced_at = excluded.last_synced_at, updated_at = ?")
+        let _result = sqlx::query("INSERT INTO accounts (pubkey, user_id, last_synced_at, created_at, updated_at) VALUES (?, ?, ?, ?, ?) ON CONFLICT(pubkey) DO UPDATE SET user_id = excluded.user_id, last_synced_at = excluded.last_synced_at, updated_at = ? RETURNING id")
             .bind(self.pubkey.to_hex().as_str())
             .bind(self.user_id)
             .bind(self.last_synced_at.map(|ts| ts.timestamp_millis()))
             .bind(self.created_at.timestamp_millis())
             .bind(self.updated_at.timestamp_millis())
             .bind(Utc::now().timestamp_millis())
-            .execute(&database.pool)
+            .fetch_one(&database.pool)
             .await
             .map_err(DatabaseError::Sqlx)?;
```

src/whitenoise/mod.rs (3)
193-201: Creating default relays/app settings on first run – LGTM

Seeding the DB with default relays and app settings on initialization is sound. Consider batching relay inserts later (as noted by your TODO) for fewer transactions.
253-261: Outdated docstring: mentions removing from "in-memory accounts list"

The logout docs still reference an in-memory accounts list, which no longer exists after the DB-first refactor. Update to avoid confusion.
Apply this doc-only change:
```diff
-    /// - Removes the account from the in-memory accounts list.
+    /// - Updates any in-memory state as needed (no in-memory accounts list in DB-first model).
```
489-504: Duplicate connect calls in tests

`create_mock_whitenoise` spawns `client.connect()` and also calls `nostr.client.connect().await`. The second call is redundant.

Apply this diff:

```diff
     tokio::spawn({
         let client = nostr.client.clone();
         async move {
             client.connect().await;
         }
     });
-    nostr.client.connect().await;
```

src/whitenoise/accounts/core.rs (3)
739-744: Prefer typed error mapping over anyhow for Blossom upload

Mapping Blossom upload errors to `WhitenoiseError::Other(anyhow!(err))` loses type information. Use a dedicated error variant or `From` conversion to preserve context.

For example, if nostr_blossom exposes a concrete error type (e.g., `nostr_blossom::client::Error`), add a conversion and use it:

```diff
-        .await
-        .map_err(|err| WhitenoiseError::Other(anyhow::anyhow!(err)))?;
+        .await
+        .map_err(WhitenoiseError::from)?;
```

and implement `From<nostr_blossom::client::Error> for WhitenoiseError` (or add a `WhitenoiseError::Blossom` variant). I can draft the enum update and impl if you'd like.
642-656: Avoid duplicate DB calls for nip65 relays in setup_subscriptions

You fetch `user_relays = account.nip65_relays(self).await?` and then call `account.nip65_relays(self).await?` again when invoking `setup_account_subscriptions_with_signer`. Use the already fetched `user_relays`.

Apply this diff:

```diff
-    let user_relays = account.nip65_relays(self).await?;
+    let user_relays = account.nip65_relays(self).await?;
 @@
-        .setup_account_subscriptions_with_signer(
+        .setup_account_subscriptions_with_signer(
             account.pubkey,
-            &account.nip65_relays(self).await?,
-            &account.inbox_relays(self).await?,
+            &user_relays,
+            &account.inbox_relays(self).await?,
             &group_relays_vec,
             &nostr_group_ids,
             keys,
         )
```
75-85: Relay connection dedup – fine, but consider dedup by URL

Using HashSet relies on Relay's Eq/Hash impl. Deduping by URL avoids surprises if Eq/Hash consider timestamps/IDs. If Relay's Eq/Hash are already by URL only, ignore this.
If needed, switch to a set of RelayUrl:
- Collect into a HashSet directly (from nip65/inbox) and add by URL.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (8)
- src/nostr_manager/mod.rs (17 hunks)
- src/whitenoise/accounts/core.rs (1 hunks)
- src/whitenoise/accounts/messages.rs (4 hunks)
- src/whitenoise/database/accounts.rs (33 hunks)
- src/whitenoise/database/relays.rs (6 hunks)
- src/whitenoise/database/users.rs (27 hunks)
- src/whitenoise/error.rs (2 hunks)
- src/whitenoise/mod.rs (13 hunks)
🧰 Additional context used
🧠 Learnings (8)
📚 Learning: 2025-08-17T19:34:30.290Z
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/messages.rs:193-195
Timestamp: 2025-08-17T19:34:30.290Z
Learning: In the whitenoise codebase, `Account::create_nostr_mls()` returns `Result<NostrMls<NostrMlsSqliteStorage>, AccountError>`, and `WhitenoiseError` has an `Account(#[from] AccountError)` variant that automatically converts. When handling MLS creation errors, use the `?` operator to let `AccountError` convert naturally to `WhitenoiseError::Account` rather than manually mapping to `WhitenoiseError::Configuration`.
Applied to files:
- src/whitenoise/accounts/messages.rs
- src/whitenoise/accounts/core.rs
- src/whitenoise/database/accounts.rs
- src/whitenoise/mod.rs
- src/whitenoise/database/users.rs
- src/nostr_manager/mod.rs
📚 Learning: 2025-08-17T19:35:53.509Z
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/welcomes.rs:53-53
Timestamp: 2025-08-17T19:35:53.509Z
Learning: In the Whitenoise codebase, Account::create_nostr_mls should have its errors mapped to WhitenoiseError::NostrMlsError instead of using anyhow::anyhow!() wrapping, as this preserves type safety and specific error information.
Applied to files:
- src/whitenoise/accounts/messages.rs
- src/whitenoise/accounts/core.rs
- src/whitenoise/database/accounts.rs
- src/whitenoise/database/users.rs
📚 Learning: 2025-08-17T19:21:11.346Z
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/groups.rs:61-63
Timestamp: 2025-08-17T19:21:11.346Z
Learning: When suggesting error handling improvements in the Whitenoise codebase, prefer using existing specific WhitenoiseError variants (like NostrMlsError, DatabaseError, Configuration) over generic anyhow::anyhow!() wrapping, as this maintains type safety and consistency with the established error handling patterns.
Applied to files:
- src/whitenoise/accounts/messages.rs
- src/whitenoise/accounts/core.rs
📚 Learning: 2025-08-18T07:22:47.078Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/messages.rs:83-91
Timestamp: 2025-08-18T07:22:47.078Z
Learning: In the Whitenoise codebase, `nostr_mls.get_relays()` returns a `std::collections::BTreeSet<RelayUrl>`, which means relay URLs are already deduplicated and don't need additional deduplication logic.
Applied to files:
- src/whitenoise/accounts/messages.rs
- src/whitenoise/database/users.rs
- src/nostr_manager/mod.rs
📚 Learning: 2025-08-11T10:23:59.406Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/relays.rs:260-278
Timestamp: 2025-08-11T10:23:59.406Z
Learning: In `src/whitenoise/accounts/relays.rs`, the `publish_relay_list_for_account` method is only called after the account setup is complete, which guarantees that `account.nip65_relays` is already populated and will never be empty. Therefore, no fallback logic is needed when `target_relays` is None.
Applied to files:
- src/whitenoise/accounts/messages.rs
- src/whitenoise/accounts/core.rs
- src/whitenoise/mod.rs
- src/whitenoise/database/users.rs
📚 Learning: 2025-08-11T10:25:55.436Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/mod.rs:669-692
Timestamp: 2025-08-11T10:25:55.436Z
Learning: In the Whitenoise codebase, the `NostrManager::publish_event_builder_with_signer` method automatically ensures relay connectivity by calling `self.add_relays()` before publishing events. This means explicit relay connectivity checks are not needed before calling publishing methods that use this function.
Applied to files:
- src/whitenoise/accounts/messages.rs
- src/whitenoise/mod.rs
- src/nostr_manager/mod.rs
📚 Learning: 2025-08-07T12:15:27.541Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#295
File: src/whitenoise/accounts/mod.rs:317-319
Timestamp: 2025-08-07T12:15:27.541Z
Learning: In the login flow, background_fetch_account_data should block and fail the login if it encounters an error, as this ensures users don't miss recent events. This is different from load_accounts where background fetch failures are logged but don't prevent account loading, since application startup should be resilient to partial failures.
Applied to files:
src/whitenoise/mod.rs
📚 Learning: 2025-08-11T10:25:55.436Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/mod.rs:669-692
Timestamp: 2025-08-11T10:25:55.436Z
Learning: In the Whitenoise codebase, the `NostrManager::publish_event_builder_with_signer` method automatically ensures relay connectivity by calling `self.ensure_relays_connected()` before publishing events. This means explicit relay connectivity checks are not needed before calling methods like `publish_relay_list_for_pubkey` that internally use `publish_event_builder_with_signer`.
Applied to files:
- src/whitenoise/mod.rs
- src/nostr_manager/mod.rs
🧬 Code Graph Analysis (7)
src/whitenoise/accounts/messages.rs (5)
- src/whitenoise/database/users.rs (2): relays (153-185), find_by_pubkey (123-137)
- src/whitenoise/accounts/core.rs (2): create_nostr_mls (38-45), create_nostr_mls (773-775)
- src/whitenoise/accounts/key_packages.rs (1): account (13-18)
- src/whitenoise/database/relays.rs (1): find_or_create_by_url (95-107)
- src/whitenoise/database/accounts.rs (1): find_by_pubkey (118-137)
src/whitenoise/database/relays.rs (2)
- src/whitenoise/database/utils.rs (3): parse_timestamp (27-52), row (38-38), row (44-44)
- src/whitenoise/relays.rs (4): from (37-43), from (47-53), from (57-64), new (68-75)
src/whitenoise/accounts/core.rs (6)
- src/whitenoise/database/users.rs (5): relays (153-185), find_by_pubkey (123-137), sqlx (127-127), sqlx (161-161), sqlx (218-218)
- src/whitenoise/relays.rs (1): new (68-75)
- src/whitenoise/users.rs (2): new (18-26), user_relays (143-145)
- src/whitenoise/database/accounts.rs (7): user (152-159), find_by_pubkey (118-137), sqlx (93-93), sqlx (123-123), sqlx (153-153), sqlx (175-175), all (92-102)
- src/whitenoise/accounts/key_packages.rs (1): account (13-18)
- src/whitenoise/utils.rs (1): capitalize_first_letter (114-120)
src/whitenoise/database/accounts.rs (4)
- src/whitenoise/database/utils.rs (3): parse_timestamp (27-52), row (38-38), row (44-44)
- src/whitenoise/database/users.rs (5): sqlx (127-127), sqlx (161-161), sqlx (218-218), from_row (30-58), find_by_pubkey (123-137)
- src/whitenoise/accounts/core.rs (1): new (17-36)
- src/whitenoise/users.rs (1): new (18-26)
src/whitenoise/mod.rs (8)
- src/whitenoise/app_settings.rs (2): app_settings (103-105), new (64-71)
- src/whitenoise/database/accounts.rs (2): follows (174-198), all (92-102)
- src/nostr_manager/mod.rs (2): new (79-188), default_timeout (70-72)
- src/whitenoise/accounts/core.rs (2): new (17-36), default_relays (59-73)
- src/whitenoise/relays.rs (1): new (68-75)
- src/nostr_manager/parser.rs (1): new (87-89)
- src/whitenoise/database/app_settings.rs (1): find_or_create_default (53-70)
- src/whitenoise/accounts/key_packages.rs (1): account (13-18)
src/whitenoise/database/users.rs (4)
- src/whitenoise/database/utils.rs (3): parse_timestamp (27-52), row (38-38), row (44-44)
- src/whitenoise/database/relays.rs (3): from (49-56), sqlx (78-78), sqlx (136-136)
- src/whitenoise/relays.rs (4): from (37-43), from (47-53), from (57-64), new (68-75)
- src/whitenoise/accounts/core.rs (1): new (17-36)
src/nostr_manager/mod.rs (10)
- src/whitenoise/database/users.rs (2): relays (153-185), from (62-70)
- src/whitenoise/utils.rs (1): urls (103-105)
- src/whitenoise/accounts/core.rs (2): inbox_relays (93-97), new (17-36)
- src/whitenoise/mod.rs (2): new (43-57), get_instance (331-335)
- src/whitenoise/database/mod.rs (1): new (48-92)
- src/whitenoise/relays.rs (4): new (68-75), from (37-43), from (47-53), from (57-64)
- src/nostr_manager/parser.rs (2): new (87-89), from (38-53)
- src/whitenoise/accounts/key_packages.rs (1): account (13-18)
- src/whitenoise/database/relays.rs (1): from (49-56)
- src/whitenoise/error.rs (1): from (112-114)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: check (ubuntu-latest, native)
🔇 Additional comments (26)
src/whitenoise/error.rs (3)
38-39: LGTM! The new error variant is well-documented and follows the existing pattern.
47-49: LGTM! The error variant is appropriately named and documented.
104-106: LGTM! The `EventProcessor` variant follows the established pattern with clear error messaging.

src/nostr_manager/mod.rs (7)
44-46: LGTM! The new error variant is well-placed and follows the existing error pattern.
79-88: LGTM! The simplified constructor removes the database path dependency, aligning with the architectural shift away from LMDB.
206-212: LGTM! The method correctly converts the `Relay` slice to a `RelayUrl` vector for compatibility with the SDK.
232-245: LGTM! The updated signature properly handles the conversion from `Relay` to `RelayUrl`.
279-292: LGTM! The method correctly handles the new parameter types and properly converts `Relay` to `RelayUrl`.
424-434: LGTM! The return type change from `DashSet` to `HashSet` is appropriate for the new architecture.
560-640: LGTM! The method properly handles the conversion from `Relay` to `RelayUrl` and ensures relay connectivity.

src/whitenoise/database/accounts.rs (3)
93-102: LGTM! The method properly handles database errors and transforms rows to accounts correctly.
153-159: LGTM! Good correction using the foreign key. Using `user_id` instead of `pubkey` for the lookup is the correct approach, following proper relational database design.
175-198: LGTM! The method correctly fetches and converts followed users.
src/whitenoise/accounts/messages.rs (4)
73-79: LGTM! Proper error handling for MLS creation. The error handling correctly uses the `?` operator to let `AccountError` propagate naturally to `WhitenoiseError::Account`.
82-86: LGTM! Good use of the find_or_create pattern. Using `find_or_create_by_url` ensures resilience when relays haven't been persisted to the database yet.
147-148: LGTM! Consistent error handling. The MLS creation follows the same pattern as other methods with proper error propagation.
192-193: LGTM! Consistent implementation. The MLS creation and error handling are consistent with the rest of the file.
src/whitenoise/database/relays.rs (4)
78-85: LGTM! Proper error discrimination. The error mapping correctly distinguishes between `RowNotFound` and other database errors.
95-107: LGTM! Robust find-or-create pattern. The method properly handles the case where a relay doesn't exist in the database.
123-144: LGTM! Proper transaction usage. The save method correctly uses a transaction and fetches by URL to handle both insert and update cases reliably.
405-427: LGTM! Good test coverage for upsert behavior. The test properly verifies that the ID remains stable and `updated_at` progresses on subsequent saves.
169-171: NostrManager::new call updated – LGTM

The new signature using (event_sender.clone(), NostrManager::default_timeout()) aligns with the refactor and removes the data_dir dependency. Good simplification.
src/whitenoise/accounts/core.rs (2)
223-251: Login does not block on data sync (behavior regression vs. prior guidance)

background_sync_account_data spawns a task and returns immediately. In earlier flows, login blocked and failed if the background fetch failed, to ensure users don't miss recent events. Confirm whether the non-blocking login is intentional.
Would you like me to add a blocking sync_login_account_data that awaits nostr.sync_all_user_data and propagates errors, while keeping background behavior for startup account loading?
I can provide a patch with a feature-gated or parameterized option to choose blocking vs. non-blocking.
529-540: publish_relay_list signature and usage – LGTM

The method now awaits the publish call and returns Result<()>; this aligns with prior review feedback to avoid returning Output inadvertently.
src/whitenoise/database/users.rs (2)
127-136: Correct error mapping for find_by_pubkey – LGTM

Only RowNotFound maps to UserNotFound; other sqlx errors become Database errors. This preserves root causes.
200-227: save(): transactional upsert with proper serialization – LGTM
- Uses INSERT ... ON CONFLICT(pubkey) DO UPDATE with updated_at bump.
- Avoids unwrap on serde_json::to_string and maps to DatabaseError::Serialization.
- Reads back the updated row within the transaction.
Solid.
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Documentation
Chores