
Conversation


@erskingardner erskingardner commented Aug 21, 2025

Summary by CodeRabbit

  • New Features

    • Per-account relay and profile metadata management.
    • MLS welcomes: list, accept, decline invites and auto-subscribe to group messages.
    • Fetch a user’s MLS key package event.
  • Improvements

    • Initialization auto-seeds relays, ensures settings, connects client, and starts per-account background sync.
    • Follows auto-fetches new user data and publishes updated follow lists.
    • Media cache now records owning account for cached files.
  • Refactor

    • Public API and module reorganizations; import cleanup and surface simplifications.


coderabbitai bot commented Aug 21, 2025

Walkthrough

Refactors whitenoise to replace global user/relay APIs with per-account methods, adds RelayType and expanded Account/User APIs, moves welcomes into Whitenoise, introduces a Whitenoise singleton init flow, adds NostrManager error/relay helpers, and updates media cache to persist account_pubkey.
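The singleton init flow, roughly, as a hedged sketch assuming tokio's async OnceCell; the config type, constructor, and error variant named here are stand-ins, not the actual implementation:

use tokio::sync::OnceCell;

static WHITENOISE: OnceCell<Whitenoise> = OnceCell::const_new();

// Hedged sketch: seed default relays/settings, connect the client, and spawn
// background tasks inside the one-time initializer, then hand out a 'static ref.
pub async fn initialize_whitenoise(config: WhitenoiseConfig) -> Result<&'static Whitenoise, WhitenoiseError> {
    WHITENOISE
        .get_or_try_init(|| async { Whitenoise::new(config).await })
        .await
}

pub fn get_instance() -> Result<&'static Whitenoise, WhitenoiseError> {
    WHITENOISE.get().ok_or(WhitenoiseError::Initialization)
}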

Changes

Cohort / File(s) Summary
Per-account API & Account surface
src/whitenoise/accounts.rs, src/whitenoise/accounts/mod.rs (removed), src/whitenoise/accounts/welcomes.rs (removed)
New standalone Account type and AccountError; added Account methods (relays, metadata, update_metadata, add/remove_relay, upload_profile_picture, create_nostr_mls, load_nostr_group_ids, connect_relays) and crate-level account helpers; removed old accounts/mod wrapper and its prior public items.
User API & follow/metadata changes
src/whitenoise/users.rs, src/whitenoise/follows.rs
Replaced user metadata/update path with User::sync_metadata, added key_package_event and relays_by_type; removed older Whitenoise-level user_relays/user_metadata/find_or_create helpers; updated follow logic to use User::find_or_create_by_pubkey.
Whitenoise top-level: init, welcomes, modules
src/whitenoise/mod.rs, src/whitenoise/welcomes.rs, src/lib.rs
Added Whitenoise singleton init (initialize_whitenoise using OnceCell), expanded startup steps (seed default relays/settings, client connect, background tasks), introduced welcomes module on Whitenoise, and reorganized public re-exports (added User, NostrGroupConfigData, RelayType adjustments).
NostrManager: errors & relay extraction
src/nostr_manager/mod.rs, src/nostr_manager/parser.rs, src/nostr_manager/query.rs, src/nostr_manager/subscriptions.rs
Added NostrManagerError enum, relay_urls_from_event (plus tag helpers), and pub mod subscriptions; mostly import reorganizations.
Media cache change
src/media/cache.rs
add_to_cache now requires account_pubkey: &str and persists it to DB (SQL INSERT updated); rest of caching logic unchanged.
Event handlers / processor imports
src/whitenoise/event_processor/... (handle_giftwrap.rs, handle_metadata.rs, handle_mls_message.rs, handle_relay_list.rs, mod.rs)
Consolidated grouped imports; no behavior changes.
Database import hygiene
src/whitenoise/database/... (accounts.rs, app_settings.rs, group_information.rs, mod.rs, relays.rs, user_relays.rs, users.rs)
Consolidated/reordered imports and added necessary trait imports (FromStr) for clarity; no logic changes.
Relays / groups / messages / utils docs & imports
src/whitenoise/relays.rs, src/whitenoise/groups.rs, src/whitenoise/messages.rs, src/whitenoise/key_packages.rs, src/whitenoise/group_information.rs, src/whitenoise/app_settings.rs, src/whitenoise/utils.rs
Trimmed/removed long doc comments and consolidated imports; function signatures unchanged.
Media import reorderings
src/media/encryption.rs, src/media/sanitizer.rs, src/media/types.rs
Import consolidation/reordering only; no behavioral changes.
Error import consolidation
src/whitenoise/error.rs
Grouped imports for error types; no change to WhitenoiseError/Result signatures.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Test as Integration/Test
  participant W as Whitenoise (singleton)
  participant DB as Database
  participant NM as NostrManager
  participant A as Account
  participant U as User

  Note over W: initialize_whitenoise()
  Test->>W: initialize_whitenoise(cfg)
  W->>DB: ensure default relays & app settings
  W->>NM: add relays if none, connect client
  W->>W: spawn background tasks, load accounts

  Note over A: Per-account operations
  Test->>A: metadata(whitenoise)
  A->>DB: read stored metadata
  A->>NM: publish/persist when update_metadata called
  Test->>A: relays(RelayType::Nip65, whitenoise)
  A->>DB: fetch relays filtered by type
sequenceDiagram
  autonumber
  actor User as External User
  participant W as Whitenoise
  participant MLS as NostrMls
  participant NM as NostrManager
  participant G as Groups

  Note over W: Welcomes flow
  User->>W: pending_welcomes(pubkey)
  W->>MLS: load MLS for account
  MLS->>W: return welcomes
  User->>W: accept_welcome(pubkey, event_id)
  W->>MLS: accept welcome
  W->>G: aggregate group relays
  W->>NM: subscribe to group messages with signer keys

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • delcin-raj
  • jgmontoya

Poem

A rabbit hops through code so neat,
Per-account voices now meet.
Relays and welcomes take their place,
A singleton hums — steady pace.
I hide a byte beneath the root, thump-thump, hooray — reboot! 🐇✨


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (23)
src/media/encryption.rs (2)

44-49: Fix decryption API and nonce handling to match encryption.

  • Key length should be 32 bytes, same as encrypt_file.
  • Pass a proper Nonce instead of relying on into() from &[u8] (which is brittle across crate versions).

Apply this diff:

-#[allow(dead_code)]
-pub fn decrypt_file(data: &[u8], key: &[u8], nonce: &[u8]) -> Result<Vec<u8>, MediaError> {
-    let cipher = ChaCha20Poly1305::new(Key::from_slice(key));
-    cipher
-        .decrypt(nonce.into(), data)
-        .map_err(|e| MediaError::Decryption(e.to_string()))
-}
+#[allow(dead_code)]
+pub fn decrypt_file(
+    data: &[u8],
+    key: &[u8; 32],
+    nonce: &[u8],
+) -> Result<Vec<u8>, MediaError> {
+    let cipher = ChaCha20Poly1305::new(Key::from_slice(key));
+    let nonce = Nonce::from_slice(nonce);
+    cipher
+        .decrypt(nonce, data)
+        .map_err(|e| MediaError::Decryption(e.to_string()))
+}

21-31: Explicitly Use OsRng for Nonce Generation

The current call to rand::rng() relies on a generic RNG and can obscure whether you’re using a CSPRNG. Even though this crate pins rand = "0.9" (and the lock file shows v0.9.1), for AEAD nonces you should explicitly use an OS‐backed CSPRNG to guarantee cryptographic quality and make your intent clear.

File: src/media/encryption.rs
Lines: 21–31

Suggested patch:

 use chacha20poly1305::{
     aead::{Aead, KeyInit},
     ChaCha20Poly1305, Key, Nonce,
 };
 use rand::RngCore;
+use rand::rngs::OsRng;

     let cipher = ChaCha20Poly1305::new(Key::from_slice(key));
     let mut nonce_bytes = [0u8; 12];
-    rand::rng().fill_bytes(&mut nonce_bytes);
+    // Use OS CSPRNG for nonce generation
+    let mut rng = OsRng;
+    rng.fill_bytes(&mut nonce_bytes);
     let nonce = Nonce::from_slice(&nonce_bytes);

     cipher
         .encrypt(nonce, data)
         .map(|encrypted| (encrypted, nonce_bytes.to_vec()))
         .map_err(|e| MediaError::Encryption(e.to_string()))
 }

If you’d still prefer rand::rng(), the repository does indeed depend on rand 0.9.x (per Cargo.toml and Cargo.lock), so that function is stable—but for maximum clarity and security, using OsRng is strongly recommended.

src/media/cache.rs (4)

76-95: Avoid unwrap() when serializing JSON for DB bind.

A serialization failure will panic the process. Propagate as MediaError::Cache instead.

-    .bind(file_metadata.map(|m| serde_json::to_value(m).unwrap()))
+    .bind(
+        match file_metadata {
+            Some(m) => Some(serde_json::to_value(m).map_err(|e| MediaError::Cache(e.to_string()))?),
+            None => None,
+        }
+    )

109-121: Scope fetch by account_pubkey to avoid cross-account collisions.

With the new account_pubkey dimension, SELECT * WHERE mls_group_id AND file_hash can fetch another account’s entry if group and hash overlap.

Proposed change (signature + query):

-pub async fn fetch_cached_file(
-    group: &group_types::Group,
-    file_hash: &str,
-    db: &Database,
-) -> Result<Option<CachedMediaFile>, MediaError> {
+pub async fn fetch_cached_file(
+    group: &group_types::Group,
+    file_hash: &str,
+    account_pubkey: &str,
+    db: &Database,
+) -> Result<Option<CachedMediaFile>, MediaError> {
@@
-    let media_file = sqlx::query_as::<_, MediaFile>(
-        "SELECT * FROM media_files WHERE mls_group_id = ? AND file_hash = ?",
-    )
+    let media_file = sqlx::query_as::<_, MediaFile>(
+        "SELECT * FROM media_files WHERE mls_group_id = ? AND file_hash = ? AND account_pubkey = ?",
+    )
     .bind(group.mls_group_id.as_slice())
     .bind(file_hash)
+    .bind(account_pubkey)

Adjust call sites and tests accordingly.


145-169: Scope deletion by account_pubkey to prevent deleting another account’s cached file.

Same rationale as fetch; include the account in both the prefetch step and the DELETE.

-pub async fn delete_cached_file(
-    group: &group_types::Group,
-    file_hash: &str,
-    db: &Database,
-) -> Result<(), MediaError> {
+pub async fn delete_cached_file(
+    group: &group_types::Group,
+    file_hash: &str,
+    account_pubkey: &str,
+    db: &Database,
+) -> Result<(), MediaError> {
@@
-    let cached_media_file = fetch_cached_file(group, file_hash, db).await?;
+    let cached_media_file = fetch_cached_file(group, file_hash, account_pubkey, db).await?;
@@
-        sqlx::query("DELETE FROM media_files WHERE mls_group_id = ? AND file_hash = ?")
+        sqlx::query(
+            "DELETE FROM media_files WHERE mls_group_id = ? AND file_hash = ? AND account_pubkey = ?",
+        )
             .bind(group.mls_group_id.as_slice())
             .bind(file_hash)
+            .bind(account_pubkey)
             .execute(&db.pool)
             .await?;

373-384: Scope fetch_cached_file and delete_cached_file by account_pubkey

Both fetch_cached_file and delete_cached_file currently only filter on mls_group_id and file_hash, but the media_files table schema and the cache‐insert API include an account_pubkey. To enforce per‐account isolation, these helpers must be updated:

• In src/media/cache.rs (around lines 107–112), change

pub async fn fetch_cached_file(
    group: &group_types::Group,
    file_hash: &str,
    db: &Database,
) -> Result<Option<CachedMediaFile>, MediaError>

to

pub async fn fetch_cached_file(
    group: &group_types::Group,
    account_pubkey: &str,
    file_hash: &str,
    db: &Database,
) -> Result<Option<CachedMediaFile>, MediaError>

and update the SELECT SQL to include AND account_pubkey = ?.

• In src/media/cache.rs (around lines 145–149), change

pub async fn delete_cached_file(
    group: &group_types::Group,
    file_hash: &str,
    db: &Database,
) -> Result<(), MediaError>

to include account_pubkey: &str, and add the same filter to the DELETE statement.

• Update all call sites to pass the account_pubkey argument:
– In src/media/mod.rs (lines ~168, ~185): change cache::fetch_cached_file(group, file_hash, db) to cache::fetch_cached_file(group, &account_pubkey, file_hash, db), and similarly for delete_cached_file.
– In the tests in src/media/cache.rs (around lines 307–311, 366–382), pass the test’s account_pubkey when calling both helpers.

• Add a test case asserting that using the wrong account_pubkey returns None from fetch_cached_file and that delete_cached_file does not remove another account’s file.
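A minimal sketch of such a test, assuming the scoped signatures proposed above; the setup helpers named here are illustrative stand-ins for whatever cache.rs tests already use:

#[tokio::test]
async fn test_cache_is_scoped_per_account() {
    // Assumed helpers: set up a test DB/group and cache one file for `owner`,
    // returning the file hash recorded by the cache.
    let (db, group) = setup_test_db_and_group().await;
    let owner = "owner_pubkey_hex";
    let other = "other_pubkey_hex";
    let file_hash = cache_test_file_for(&group, owner, &db).await;

    // Wrong account must not see the owner's cached file...
    assert!(fetch_cached_file(&group, other, &file_hash, &db).await.unwrap().is_none());
    // ...nor delete it.
    delete_cached_file(&group, other, &file_hash, &db).await.unwrap();
    assert!(fetch_cached_file(&group, owner, &file_hash, &db).await.unwrap().is_some());
}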

src/whitenoise/database/relays.rs (1)

95-107: Don’t create the relay on errors other than RelayNotFound.

find_or_create_by_url currently creates a new relay for any error from find_by_url, not just RelayNotFound. This risks masking transient DB errors and may introduce inconsistent state.

Harden the match to only create on RelayNotFound and propagate other errors:

-        match Relay::find_by_url(url, database).await {
-            Ok(relay) => Ok(relay),
-            Err(_) => {
-                let relay = Relay::new(url);
-                let new_relay = relay.save(database).await?;
-                Ok(new_relay)
-            }
-        }
+        match Relay::find_by_url(url, database).await {
+            Ok(relay) => Ok(relay),
+            Err(WhitenoiseError::RelayNotFound) => {
+                let relay = Relay::new(url);
+                let new_relay = relay.save(database).await?;
+                Ok(new_relay)
+            }
+            Err(e) => Err(e),
+        }
src/whitenoise/error.rs (2)

101-103: Fix typo in user-facing error message.

"direcet" → "direct".

Apply this diff:

-    #[error("nip04 direcet message error")]
+    #[error("nip04 direct message error")]

114-118: Preserve error chain when converting from boxed errors.

Using anyhow!(err.to_string()) loses the original source chain and backtrace. Prefer anyhow::Error::from(err).

 impl From<Box<dyn std::error::Error + Send + Sync>> for WhitenoiseError {
     fn from(err: Box<dyn std::error::Error + Send + Sync>) -> Self {
-        WhitenoiseError::Other(anyhow::anyhow!(err.to_string()))
+        WhitenoiseError::Other(anyhow::Error::from(err))
     }
 }
src/whitenoise/database/users.rs (2)

254-256: Avoid expect on relay.id; return a typed error instead.

Panicking here turns a recoverable situation into a crash. Resolve the ID via find_by_url when absent and propagate RelayNotFound if it can’t be resolved.

-        let relay_id = relay.id.expect("Relay should have ID after save");
+        let relay_id = match relay.id {
+            Some(id) => id,
+            None => {
+                Relay::find_by_url(&relay.url, database)
+                    .await?
+                    .id
+                    .ok_or(WhitenoiseError::RelayNotFound)?
+            }
+        };

263-265: Use association timestamps for user_relays, not the user’s timestamps.

Recording the association time with the user’s created_at/updated_at is misleading and breaks auditability. Use “now” for both.

-        .bind(self.created_at.timestamp_millis())
-        .bind(self.updated_at.timestamp_millis())
+        .bind(chrono::Utc::now().timestamp_millis())
+        .bind(chrono::Utc::now().timestamp_millis())
src/whitenoise/database/accounts.rs (1)

45-48: Bug: last_synced_at NULL/text handling can misdecode.

row.try_get::<Option<i64>>("last_synced_at")? will error if the DB column is TEXT (even when NULL), preventing the intended fallback in parse_timestamp from ever running. This breaks environments where timestamps are stored as TEXT.

Use a type-agnostic NULL check and then delegate to parse_timestamp only when non-NULL. One portable approach is to try Option<String> first, then Option<i64>:

-        let last_synced_at = match row.try_get::<Option<i64>, _>("last_synced_at")? {
-            Some(_) => Some(parse_timestamp(row, "last_synced_at")?),
-            None => None,
-        };
+        let last_synced_at = match row.try_get::<Option<String>, _>("last_synced_at") {
+            Ok(Some(_)) => Some(parse_timestamp(row, "last_synced_at")?),
+            Ok(None) => None,
+            Err(_) => match row.try_get::<Option<i64>, _>("last_synced_at") {
+                Ok(Some(_)) => Some(parse_timestamp(row, "last_synced_at")?),
+                Ok(None) => None,
+                Err(e) => return Err(e),
+            },
+        };

This preserves the dual INTEGER/TEXT compatibility provided by parse_timestamp.

src/whitenoise/messages.rs (1)

119-121: Introduce a dedicated WhitenoiseError variant for message processing failures

To preserve the original ProcessingError type and improve downstream error handling/observability, add a new WhitenoiseError variant and derive From<ProcessingError>, then replace the current anyhow! wrapper:

• In src/whitenoise/error.rs, extend the WhitenoiseError enum:

 #[derive(Error, Debug)]
 pub enum WhitenoiseError {
     // … existing variants …

+    #[error("Message processing error: {0}")]
+    MessageProcessing(#[from] crate::whitenoise::message_aggregator::ProcessingError),
 
     #[error("Other error: {0}")]
     Other(#[from] anyhow::Error),
 }

• In src/whitenoise/messages.rs, simplify the map_err on aggregation:

-.await
-.map_err(|e| {
-    WhitenoiseError::from(anyhow::anyhow!("Message aggregation failed: {}", e))
-})
+.await
+.map_err(WhitenoiseError::from)

This change maintains the original error type, enables pattern‐matching on WhitenoiseError::MessageProcessing, and removes the generic anyhow wrapping.

src/whitenoise/relays.rs (1)

101-113: Docs parameter mismatch (account vs pubkey).

The function takes &Account but docs still reference pubkey. Update to prevent API confusion.

-    /// Fetches the status of relays associated with a user's public key.
+    /// Fetches the status of relays associated with an account.
@@
-    /// * `pubkey` - The `PublicKey` of the user whose relay statuses should be fetched.
+    /// * `account` - The account whose relay statuses should be fetched.
src/whitenoise/groups.rs (2)

110-113: Bug: invalid time arithmetic on Timestamp.

Timestamp::now().add(…) with a raw integer likely won’t compile (or is semantically wrong). Elsewhere you correctly use Duration. Do the same here.

-            use std::ops::Add;
-            let one_month_future = Timestamp::now().add(30 * 24 * 60 * 60);
+            let one_month_future = Timestamp::now() + Duration::from_secs(30 * 24 * 60 * 60);

63-65: Typo in log message (“Succefully”).

Fix spelling to avoid noisy logs and ease searching.

-        tracing::debug!("Succefully fetched the key packages of members");
+        tracing::debug!("Successfully fetched the key packages of members");
src/whitenoise/mod.rs (1)

262-265: Replace unsafe unwrap() on get_instance()

There’s one call site in src/whitenoise/mod.rs that still uses

Whitenoise::get_instance().unwrap()

which will panic if the singleton hasn’t been initialized. All other callers already use ? or map the error:

  • src/whitenoise/users.rs: let whitenoise = Whitenoise::get_instance()?;
  • src/nostr_manager/mod.rs:
    let whitenoise = Whitenoise::get_instance()
        .map_err(|e| NostrManagerError::EventProcessingError(e.to_string()))?;
  • src/bin/integration_test.rs: let whitenoise = Whitenoise::get_instance()?;

Please update the convenience function in src/whitenoise/mod.rs to return a Result (or otherwise handle the Err(WhitenoiseError::Initialization) case) instead of unconditionally unwrapping. For example:

- pub fn instance() -> &'static Whitenoise {
-     Whitenoise::get_instance().unwrap()
- }
+ pub fn instance() -> Result<&'static Whitenoise, WhitenoiseError> {
+     Whitenoise::get_instance()
+ }

Or, if you really need a panicking getter, document that it will panic and add an explicit message:

pub fn instance_or_panic() -> &'static Whitenoise {
    Whitenoise::get_instance()
        .expect("Whitenoise must be initialized before calling instance_or_panic()")
}

This ensures that uninitialized-access errors are handled deliberately, rather than triggering a silent panic deep in the library.

src/whitenoise/users.rs (1)

195-283: Excessive relay-list publishing due to per-item add/remove calls

sync_relays_for_type removes and adds relays one-by-one via self.remove_relay / self.add_relay. In this codebase, those calls trigger a background publish of the relay list each time, causing N redundant publications and potential rate-limiting or event spam when many changes occur.

Consider batching:

  • Apply all DB mutations silently, then publish once at the end if any changes occurred.
  • Options:
    • Add “silent” variants of add_relay/remove_relay on User that don’t publish.
    • Or, at the end of this method, fetch the account via Whitenoise::find_account_by_pubkey and call background_publish_account_relay_list(account, relay_type) once.

Illustrative change at the end of the method:

-        Ok(true)
+        // After applying all changes, publish once
+        if let Ok(account) = whitenoise.find_account_by_pubkey(&self.pubkey).await {
+            let _ = whitenoise
+                .background_publish_account_relay_list(&account, relay_type)
+                .await;
+        }
+        Ok(true)

If you prefer not to introduce a dependency on Account here, I can propose “silent” DB helpers on User and move the publish to callers.

src/whitenoise/accounts.rs (5)

118-149: add_relay always publishes — provide a way to batch

This method triggers background_publish_account_relay_list on every add. In setup flows that add multiple relays, this creates unnecessary duplicate publications. Provide a way to:

  • Add multiple relays without publishing, then publish once; or
  • Accept a publish: bool flag.

This will also harmonize with the batching suggestion in User::sync_relays_for_type.

Example:

-    pub async fn add_relay(&self, relay: &Relay, relay_type: RelayType, whitenoise: &Whitenoise) -> Result<()> {
+    pub async fn add_relay(&self, relay: &Relay, relay_type: RelayType, whitenoise: &Whitenoise) -> Result<()> {
       let user = self.user(&whitenoise.database).await?;
       user.add_relay(relay, relay_type, &whitenoise.database).await?;
-      whitenoise.background_publish_account_relay_list(self, relay_type).await?;
+      whitenoise.background_publish_account_relay_list(self, relay_type).await?;
       tracing::debug!(...);
       Ok(())
     }
+
+    // Optionally add:
+    pub(crate) async fn add_relay_silent(&self, relay: &Relay, relay_type: RelayType, whitenoise: &Whitenoise) -> Result<()> {
+      let user = self.user(&whitenoise.database).await?;
+      user.add_relay(relay, relay_type, &whitenoise.database).await
+    }

Then use the silent variant inside bulk-setup paths.


150-179: Same batching concern for remove_relay

Mirrors the comment above — per-item publish will spam events during large diffs. Provide a “silent” remove or batch at the caller.

I can update both setup paths to publish once at the end.


503-534: Duplicate publications in new-account setup

setup_relays_for_new_account calls add_relays_to_account (which publishes on every add), then explicitly calls publish_relay_list for each type, causing redundant events.

Fix options:

  • Use “silent” add inside setup, then publish once per type (preferred).
  • Or remove the explicit publish_relay_list calls and rely on a single publish after adding all relays of the type.

Minimal change:

-        let nip65_relays = self
-            .setup_new_account_relay_type(account, RelayType::Nip65, &default_relays)
+        let nip65_relays = self
+            .setup_new_account_relay_type(account, RelayType::Nip65, &default_relays)
             .await?;
...
-        // Always publish all relay lists for new accounts
+        // Publish once per type for new accounts
         self.publish_relay_list(&nip65_relays, RelayType::Nip65, &nip65_relays, &keys).await?;
         self.publish_relay_list(&inbox_relays, RelayType::Inbox, &nip65_relays, &keys).await?;
         self.publish_relay_list(&key_package_relays, RelayType::KeyPackage, &nip65_relays, &keys).await?;

…and change setup_new_account_relay_type to perform silent DB adds.


536-590: Duplicate publications in existing-account setup; violates “only publish when needed” intent

This path also uses add_relays_to_account (publishes per add) and then conditionally publishes again when should_publish_* is true, defeating the optimization.

Adopt silent adds and publish only when should_publish_* is true. See previous comment for pattern.


649-659: add_relays_to_account should not publish per item in bulk flows

This helper loops over relays and calls account.add_relay, publishing each time. Provide a bulk-add path that avoids per-item publishes (or switch to silent add).

Proposed helper:

-    async fn add_relays_to_account(&self, account: &mut Account, relays: &[Relay], relay_type: RelayType) -> Result<()> {
-        for relay in relays {
-            account.add_relay(relay, relay_type, self).await?;
-        }
-        Ok(())
-    }
+    async fn add_relays_to_account_silent(&self, account: &mut Account, relays: &[Relay], relay_type: RelayType) -> Result<()> {
+        for relay in relays {
+            let user = account.user(&self.database).await?;
+            user.add_relay(relay, relay_type, &self.database).await?;
+        }
+        Ok(())
+    }

Then use the silent version in setup paths and publish once.

🧹 Nitpick comments (44)
src/media/encryption.rs (1)

56-86: Add negative-path tests (wrong key and wrong nonce).

Consider adding tests that ensure decryption fails with an incorrect key and with a mutated nonce. This guards against accidental misuse and regressions.

@@
     async fn test_decrypt_file() {
@@
         assert_eq!(decrypted, data);
     }
+
+    #[tokio::test]
+    async fn test_decrypt_file_wrong_key_fails() {
+        let keys = Keys::generate();
+        let other_keys = Keys::generate();
+        let data = b"test data";
+        let encrypted = encrypt_file(data, &keys.secret_key().to_secret_bytes()).unwrap();
+        let err = decrypt_file(&encrypted.0, &other_keys.secret_key().to_secret_bytes(), &encrypted.1).unwrap_err();
+        assert!(matches!(err, MediaError::Decryption(_)));
+    }
+
+    #[tokio::test]
+    async fn test_decrypt_file_wrong_nonce_fails() {
+        let keys = Keys::generate();
+        let data = b"test data";
+        let (ciphertext, mut nonce) = encrypt_file(data, &keys.secret_key().to_secret_bytes()).unwrap();
+        nonce[0] ^= 0xFF;
+        let err = decrypt_file(&ciphertext, &keys.secret_key().to_secret_bytes(), &nonce).unwrap_err();
+        assert!(matches!(err, MediaError::Decryption(_)));
+    }
src/media/sanitizer.rs (3)

147-190: Compute alpha and bits-per-pixel from the decoded image, not the container format.

has_alpha and bits_per_pixel are inferred from the input container, which can be wrong (e.g., WebP can have alpha). Use img.color() to derive accurate values.

Apply this diff:

@@
-    Ok(SafeMediaMetadata {
+    let color = img.color();
+    Ok(SafeMediaMetadata {
@@
-        has_alpha: Some(matches!(input_format, ImageFormat::Png | ImageFormat::Gif)),
-        bits_per_pixel: Some(match input_format {
-            ImageFormat::Png => 32,
-            ImageFormat::Jpeg => 24,
-            ImageFormat::Gif => 8,
-            _ => 24,
-        }),
+        has_alpha: Some(color.has_alpha()),
+        bits_per_pixel: Some((color.bytes_per_pixel() as u8) * 8),

Update the WebP metadata assertions accordingly if needed.


208-226: Extract metadata from the sanitized bytes to reflect the final artifact.

Currently metadata is extracted from the original data, which may differ from the sanitized output (format/quality). Extract from sanitized_data instead.

@@ fn sanitize_image_file(data: &[u8], mime_type: &str) -> Result<SanitizedMedia, MediaError> {
-    // Extract metadata
-    let metadata = extract_image_metadata(data, mime_type)?;
+    // Extract metadata from the sanitized data
+    let metadata = extract_image_metadata(&sanitized_data, mime_type)?;

271-289: Doc comment mentions mp4parse, but implementation doesn’t use it.

The comment says “Videos (extracts metadata using mp4parse)” but the code returns basic metadata only. Update the docs or wire in mp4 parsing.

@@
-/// - Videos (extracts metadata using mp4parse)
+/// - Videos (basic metadata only; no container parsing yet)
src/media/cache.rs (4)

171-223: Test table schema aligns with code; consider adding an index.

The schema includes account_pubkey. For performance and uniqueness guarantees at scale, consider an index on (mls_group_id, file_hash, account_pubkey).

CREATE INDEX IF NOT EXISTS idx_media_files_lookup
ON media_files(mls_group_id, file_hash, account_pubkey);

270-281: Add assertion for account_pubkey once MediaFile exposes it.

After adding account_pubkey to MediaFile, assert it here to harden the test.

         let media_file = add_to_cache(
@@
         .unwrap();
 
         // Verify database entry
         assert_eq!(media_file.mls_group_id, group.mls_group_id);
+        assert_eq!(media_file.account_pubkey, account_pubkey);

308-321: Prefer reading raw bytes in the test to avoid UTF-8 assumptions.

read_to_string will fail for binary objects. Since the cache stores arbitrary bytes, compare bytes.

-        let contents = fs::read_to_string(&expected_path).unwrap();
-        assert_eq!(contents, "test file content");
+        let contents = fs::read(&expected_path).unwrap();
+        assert_eq!(contents, b"test file content");

38-47: All add_to_cache call sites updated with account_pubkey

Verified that every invocation of add_to_cache now includes the new account_pubkey parameter:

  • src/media/cache.rs:270 (production code)
  • src/media/cache.rs:346 (test code)
  • src/media/mod.rs:111 (public API surface)

No missing call sites remain—this change won’t break existing callers.

Optional improvement: consider defining a newtype (e.g. struct AccountPubkey(String)) for account_pubkey to improve clarity and type safety as you expand multi-account support.
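If that direction is attractive, a minimal sketch of the newtype (illustrative only, not part of this PR):

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct AccountPubkey(String);

impl AccountPubkey {
    // Illustrative constructor/accessor; naming would follow project conventions.
    pub fn new(hex: impl Into<String>) -> Self {
        Self(hex.into())
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}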

src/whitenoise/key_packages.rs (2)

29-31: Stale comment about a lock — there’s no lock being held here.

The comment “Extract key package data while holding the lock” is misleading; this code path doesn’t acquire any lock. Please update or remove to prevent confusion during future maintenance.

-        // Extract key package data while holding the lock
+        // Extract key package data for the account
         let (encoded_key_package, tags) = self.encoded_key_package(account).await?;

64-74: Avoid buffering the entire stream when only the first (unique) event is needed.

For a specific event_id, the first match suffices. Short-circuiting reduces memory and latency, and makes intent clearer.

-        let mut key_package_events = Vec::new();
-        while let Some(event) = key_package_stream.next().await {
-            key_package_events.push(event);
-        }
+        let first_event = key_package_stream.next().await;

And then below:

-        if let Some(event) = key_package_events.first() {
+        if let Some(event) = first_event.as_ref() {
             if delete_mls_stored_keys {
                 let nostr_mls = Account::create_nostr_mls(account.pubkey, &self.config.data_dir)?;
                 let key_package = nostr_mls.parse_key_package(event)?;
                 nostr_mls.delete_key_package_from_storage(&key_package)?;
             }
src/whitenoise/event_processor/event_handlers/handle_metadata.rs (2)

21-25: Fix incorrect tracing target to reflect this module.

The target references nostr_manager::fetch_all_user_data, which doesn’t match this handler and makes log filtering harder.

Apply:

-                tracing::error!(
-                    target: "whitenoise::nostr_manager::fetch_all_user_data",
+                tracing::error!(
+                    target: "whitenoise::event_handlers::handle_metadata",
                     "Failed to parse metadata for user {}: {}",
                     event.pubkey,
                     e
                 );

16-16: Drop redundant let _ = ...?; binding.

You’re already propagating errors with ?. The binding to _ is unnecessary.

-                let _ = user.save(&self.database).await?;
+                user.save(&self.database).await?;
src/whitenoise/database/relays.rs (1)

87-93: Use the existing From<RelayRow> for Relay to reduce duplication.

Minor simplification—use the conversion you defined.

-        Ok(Relay {
-            id: Some(relay_row.id),
-            url: relay_row.url,
-            created_at: relay_row.created_at,
-            updated_at: relay_row.updated_at,
-        })
+        Ok(relay_row.into())
src/whitenoise/secrets_store.rs (2)

1-4: Grouped std imports are fine; align fs usage for consistency.

You import fs but still call std::fs::... in a few places. Prefer the imported fs to keep it uniform.

Examples to adjust elsewhere (non-exhaustive):

  • Replace std::fs::read_to_string with fs::read_to_string
  • Replace std::fs::create_dir_all with fs::create_dir_all
  • Replace std::fs::write with fs::write

218-222: Doc/behavior mismatch for keyring entry creation errors.

The docstring says this returns an error if Entry creation fails, but the implementation swallows that error by using if let Ok(entry) = Entry::new(...).

Propagate Entry creation errors while still ignoring deletion failures:

-        } else {
-            let entry = Entry::new(SERVICE_NAME, hex_pubkey.as_str());
-            if let Ok(entry) = entry {
-                let _ = entry.delete_credential();
-            }
-        }
+        } else {
+            let entry = Entry::new(SERVICE_NAME, hex_pubkey.as_str())
+                .map_err(SecretsStoreError::KeyringError)?;
+            let _ = entry.delete_credential(); // keep idempotent semantics
+        }
src/whitenoise/error.rs (1)

56-58: Standardize error casing (“MLS”).

Minor copy edit for consistency with the rest of the codebase and the MLS acronym.

-    #[error("Nostr Mls error: {0}")]
+    #[error("Nostr MLS error: {0}")]
src/whitenoise/database/group_information.rs (1)

154-171: Avoid manual placeholder construction; use QueryBuilder for safety and clarity.

Building the IN clause with string formatting is easy to get wrong. sqlx::QueryBuilder produces the right number of placeholders and bindings safely.

Apply this diff:

-        // Build dynamic query with correct number of placeholders
-        let placeholders = "?,".repeat(id_bytes.len());
-        let placeholders = placeholders.trim_end_matches(',');
-
-        let query = format!(
-            "SELECT id, mls_group_id, group_type, created_at, updated_at
-             FROM group_information
-             WHERE mls_group_id IN ({})",
-            placeholders
-        );
-
-        // Build and execute query with bindings
-        let mut query_builder = sqlx::query_as::<_, GroupInformationRow>(&query);
-        for id_bytes in &id_bytes {
-            query_builder = query_builder.bind(id_bytes);
-        }
-
-        let rows = query_builder.fetch_all(&database.pool).await?;
+        // Build and execute query using QueryBuilder to avoid manual placeholder formatting
+        let mut qb = sqlx::QueryBuilder::new(
+            "SELECT id, mls_group_id, group_type, created_at, updated_at FROM group_information WHERE mls_group_id IN (",
+        );
+        {
+            let mut separated = qb.separated(", ");
+            for bytes in &id_bytes {
+                separated.push_bind(bytes);
+            }
+        }
+        qb.push(")");
+        let rows: Vec<GroupInformationRow> = qb
+            .build_query_as()
+            .fetch_all(&database.pool)
+            .await?;

Add this import at the top of the file:

use sqlx::QueryBuilder;
src/whitenoise/event_processor/mod.rs (4)

41-42: Fix doc comment typo in subscription hash formula.

“accouny_pubkey” → “account_pubkey”.

-    /// where hashed_pubkey = SHA256(session salt || accouny_pubkey)[..12]
+    /// where hashed_pubkey = SHA256(session salt || account_pubkey)[..12]

56-66: Consider avoiding O(n) DB scan to resolve subscription hashes.

Account::all(&self.database) on every event can become a hotspot with many accounts. Cache a hash→pubkey map (refresh on account changes) or index in DB to resolve in O(1) or O(log n).

I can sketch a small in-memory cache keyed by the 12-char hash if helpful.
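One possible shape for that cache, as a hedged sketch (type names and the PublicKey import path are assumptions):

use std::collections::HashMap;
use std::sync::RwLock;

use nostr_sdk::PublicKey; // assumed import path

// Maps the 12-char subscription hash to an account pubkey; refresh entries
// whenever accounts are created or removed so lookups stay O(1).
#[derive(Default)]
pub struct SubscriptionHashCache {
    map: RwLock<HashMap<String, PublicKey>>,
}

impl SubscriptionHashCache {
    pub fn insert(&self, hash: String, pubkey: PublicKey) {
        self.map.write().unwrap().insert(hash, pubkey);
    }

    pub fn resolve(&self, hash: &str) -> Option<PublicKey> {
        self.map.read().unwrap().get(hash).cloned()
    }
}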


242-245: Generalize error message; function is used beyond MLS.

account_from_subscription_id is called for giftwrap and other flows; message mentions “MLS” specifically.

-        let sub_id = subscription_id.ok_or_else(|| {
-            WhitenoiseError::InvalidEvent(
-                "MLS message received without subscription ID".to_string(),
-            )
-        })?;
+        let sub_id = subscription_id.ok_or_else(|| {
+            WhitenoiseError::InvalidEvent("Event received without subscription ID".to_string())
+        })?;

257-261: Fix tracing target/context.

Log target references process_mls_message while inside account_from_subscription_id. Update for accurate filtering.

-        tracing::debug!(
-        target: "whitenoise::event_processor::process_mls_message",
-        "Processing MLS message for account: {}",
-        target_pubkey.to_hex()
-        );
+        tracing::debug!(
+            target: "whitenoise::event_processor::account_from_subscription_id",
+            "Resolved account for subscription: {}",
+            target_pubkey.to_hex()
+        );
src/whitenoise/event_processor/event_handlers/handle_giftwrap.rs (1)

11-15: Fix misleading log message.

The handler does implement MLS welcome processing below; the log currently says “processing not yet implemented”.

Apply this diff to correct the message:

-        tracing::info!(
-            target: "whitenoise::event_handlers::handle_giftwrap",
-            "Giftwrap received for account: {} - processing not yet implemented",
-            account.pubkey.to_hex()
-        );
+        tracing::info!(
+            target: "whitenoise::event_handlers::handle_giftwrap",
+            "Giftwrap received for account: {} - attempting unwrap and dispatch",
+            account.pubkey.to_hex()
+        );
src/whitenoise/database/accounts.rs (1)

219-227: Prefer explicit “persisted ID” checks before relational ops.

These queries bind self.id / user.id (Options) directly; binding NULL may lead to surprising behavior or SQL errors. Returning a clear app error when the entities aren’t persisted improves diagnostics.

Apply the pattern below, assuming WhitenoiseError::{AccountNotPersisted,UserNotPersisted} exist (adjust to your actual variants):

     pub(crate) async fn is_following_user(
         &self,
         user: &User,
         database: &Database,
     ) -> Result<bool, WhitenoiseError> {
-        let result = sqlx::query(
+        let account_id = self.id.ok_or(WhitenoiseError::AccountNotPersisted)?;
+        let user_id = user.id.ok_or(WhitenoiseError::UserNotPersisted)?;
+        let result = sqlx::query(
             "SELECT COUNT(*) FROM account_follows WHERE account_id = ? AND user_id = ?",
         )
-        .bind(self.id)
-        .bind(user.id)
+        .bind(account_id)
+        .bind(user_id)
         .fetch_one(&database.pool)
         .await?;
         Ok(result.get::<i64, _>(0) > 0)
     }

Make analogous changes in follow_user and unfollow_user.

Also applies to: 243-258, 275-285

src/whitenoise/event_processor/event_handlers/handle_relay_list.rs (1)

12-16: Avoid cloning event if not required.

If NostrManager::relay_urls_from_event can accept &Event, pass a reference to eliminate the clone.

-        let relay_urls = NostrManager::relay_urls_from_event(event.clone());
+        let relay_urls = NostrManager::relay_urls_from_event(&event);

If the helper currently takes ownership, consider changing its signature to borrow.

src/whitenoise/app_settings.rs (1)

1-2: Imports reorg LGTM; minor ergonomics nit.

  • Grouped imports are cleaner and consistent with the refactor.
  • Minor: ThemeMode is small and trivially copyable—consider deriving Copy for convenience.
-#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, Hash)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Hash)]
 pub enum ThemeMode {

Also applies to: 6-6

src/whitenoise/messages.rs (2)

44-49: Avoid variable shadowing and normalize error type path.

  • The local binding named message shadows the message: String parameter, which lowers readability. Rename the binding to something descriptive.
  • Elsewhere we use nostr_mls::Error; here it’s nostr_mls::error::Error. Normalize to the former for consistency.

Apply:

-        let message = nostr_mls
+        let stored_message = nostr_mls
             .get_message(&event_id)?
-            .ok_or(WhitenoiseError::NostrMlsError(
-                nostr_mls::error::Error::MessageNotFound,
-            ))?;
+            .ok_or(WhitenoiseError::NostrMlsError(nostr_mls::Error::MessageNotFound))?;
@@
-        let tokens = self.nostr.parse(&message.content);
+        let tokens = self.nostr.parse(&stored_message.content);
@@
-        Ok(MessageWithTokens::new(message, tokens))
+        Ok(MessageWithTokens::new(stored_message, tokens))

Also applies to: 61-64


139-141: Avoid unwrap(); use expect() for safer invariants.

ensure_id() should guarantee presence, but expect() gives a clearer backtrace if invariants break.

-        let event_id = inner_event.id.unwrap(); // This is guaranteed to be Some by ensure_id
+        let event_id = inner_event
+            .id
+            .expect("ensure_id() must set an event id before use");
src/whitenoise/relays.rs (2)

57-66: Avoid silent fallback when mapping Kind -> RelayType.

Defaulting unknown Kind to RelayType::Nip65 can hide bugs. At minimum, log a warning; ideally, use TryFrom<Kind> and propagate an error for unsupported kinds.

Low-impact mitigation:

-            _ => RelayType::Nip65, // Default fallback
+            _ => {
+                tracing::warn!(
+                    target: "whitenoise::relays",
+                    "Unexpected Kind {:?}; defaulting to RelayType::Nip65. Consider handling explicitly.",
+                    kind
+                );
+                RelayType::Nip65
+            }
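If the TryFrom route is preferred, a hedged sketch (the Kind variant names are assumptions about the nostr crate in use and should be adjusted to the kinds actually handled here):

impl TryFrom<Kind> for RelayType {
    type Error = WhitenoiseError;

    fn try_from(kind: Kind) -> Result<Self, Self::Error> {
        match kind {
            Kind::RelayList => Ok(RelayType::Nip65),
            Kind::InboxRelays => Ok(RelayType::Inbox),
            Kind::MlsKeyPackageRelays => Ok(RelayType::KeyPackage),
            other => Err(WhitenoiseError::InvalidEvent(format!(
                "unsupported relay-list kind: {other:?}"
            ))),
        }
    }
}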

115-124: Simplify deduplication.

Use iterator collection to a set directly; avoids a temporary Vec -> Set dance and tightens intent.

-        let mut unique_relay_urls = HashSet::new();
-        for relay in all_relays {
-            unique_relay_urls.insert(relay.url);
-        }
+        let unique_relay_urls: HashSet<_> = all_relays.into_iter().map(|r| r.url).collect();
src/whitenoise/groups.rs (2)

246-249: Avoid cloning group_relays.

You can iterate by reference; find_or_create_relay_by_url accepts &RelayUrl.

-        for relay_url in group_relays.clone() {
-            let db_relay = self.find_or_create_relay_by_url(&relay_url).await?;
+        for relay_url in &group_relays {
+            let db_relay = self.find_or_create_relay_by_url(relay_url).await?;
             relays.insert(db_relay);
         }

314-316: Comment/code mismatch about inbox-relay fallback.

The comment promises a fallback if a contact has no inbox relays, but the code doesn’t implement it. Either implement a fallback (e.g., account’s NIP-65 relays) or update the comment to reflect actual behavior.

If a fallback is desired, confirm the intended source (creator’s NIP-65 relays vs. defaults) and I can draft the patch.

src/whitenoise/utils.rs (1)

53-57: Hard-coded column name in SQL decode error.

index: "nip65_relays" will be misleading if this utility is reused for inbox or key-package columns. Either pass the column name as a parameter or use a neutral label.

Minimal tweak:

-            .map_err(|e| sqlx::Error::ColumnDecode {
-                index: "nip65_relays".to_owned(),
+            .map_err(|e| sqlx::Error::ColumnDecode {
+                index: "relays".to_owned(),
                 source: Box::new(e),
             })

Alternatively, introduce parse_relays_from_sql_with_col(column: &str, relays: String) and have the current function call it with "relays".

src/whitenoise/follows.rs (2)

73-82: Backfill background fetch for batch follows.

follow_user triggers background_fetch_user_data when a user record is newly created; follow_users currently doesn’t. Mirror the behavior to avoid inconsistent state when bulk-following.

-        for pubkey in pubkeys {
-            let (user, _) = User::find_or_create_by_pubkey(pubkey, &self.database).await?;
-            users.push(user);
-        }
+        for pubkey in pubkeys {
+            let (user, newly_created) = User::find_or_create_by_pubkey(pubkey, &self.database).await?;
+            if newly_created {
+                self.background_fetch_user_data(&user).await?;
+            }
+            users.push(user);
+        }

If you prefer, I can batch these fetches concurrently with a bounded buffer to avoid long-tail latency.


39-41: Use the User API directly in unfollow_user (and for consistency, in is_following_user)

The find_user_by_pubkey helper on Whitenoise simply delegates to User::find_by_pubkey, so calling the model’s API directly removes one extra indirection and aligns with our “remove whitenoise methods” goal.

• In src/whitenoise/follows.rs
– Lines 39–41 (unfollow_user):

-        let user = self.find_user_by_pubkey(pubkey).await?;
+        let user = User::find_by_pubkey(pubkey, &self.database).await?;
         account.unfollow_user(&user, &self.database).await?;
         self.background_publish_account_follow_list(account).await?;

• In the same file, lines 56–58 (is_following_user):

-        let user = self.find_user_by_pubkey(pubkey).await?;
+        let user = User::find_by_pubkey(pubkey, &self.database).await?;
         account.is_following_user(&user, &self.database).await

No other breaking changes—find_user_by_pubkey remains available on Whitenoise (and still used by our integration tests), so this is purely an internal, optional cleanup.

src/whitenoise/welcomes.rs (2)

84-96: Optimize relay deduplication using HashSet.

The current approach sorts and deduplicates the vector which has O(n log n) complexity. Since you're already importing HashSet, use it directly for O(n) deduplication.

-            // Collect all relays from all groups into a single vector
-            for group in &groups {
-                let relays = nostr_mls.get_relays(&group.mls_group_id)?;
-                group_relays.extend(relays);
-            }
-
-            // Remove duplicates by sorting and deduplicating
-            group_relays.sort();
-            group_relays.dedup();
+            // Collect all relays from all groups using HashSet for deduplication
+            let mut relay_set = HashSet::new();
+            for group in &groups {
+                let relays = nostr_mls.get_relays(&group.mls_group_id)?;
+                relay_set.extend(relays);
+            }
+            let group_relays: Vec<_> = relay_set.into_iter().collect();

203-204: Remove or implement the commented sleep.

There's a commented sleep(Duration::from_secs(3)) which suggests timing concerns during testing. Either remove it if no longer needed, or uncomment and properly handle the timing requirements.

         // Give some time for the event processor to process welcome messages
-        // sleep(Duration::from_secs(3));
+        tokio::time::sleep(tokio::time::Duration::from_secs(3)).await;
src/whitenoise/mod.rs (1)

208-242: Consider extracting account loading logic.

The account loading and subscription setup logic within initialize_whitenoise could be extracted to a separate method for better separation of concerns and testability.

+    async fn load_and_setup_accounts(whitenoise_ref: &'static Whitenoise) -> Result<()> {
+        let accounts = Account::all(&whitenoise_ref.database).await?;
+        for account in accounts {
+            // Trigger background data fetch for each account (non-critical)
+            if let Err(e) = whitenoise_ref.background_sync_account_data(&account).await {
+                tracing::warn!(
+                    target: "whitenoise::load_accounts",
+                    "Failed to trigger background fetch for account {}: {}",
+                    account.pubkey.to_hex(),
+                    e
+                );
+                // Continue - background fetch failure should not prevent account loading
+            }
+            // Setup subscriptions for this account
+            match whitenoise_ref.setup_subscriptions(&account).await {
+                Ok(()) => {
+                    tracing::debug!(
+                        target: "whitenoise::initialize_whitenoise",
+                        "Successfully set up subscriptions for account: {}",
+                        account.pubkey.to_hex()
+                    );
+                }
+                Err(e) => {
+                    tracing::warn!(
+                        target: "whitenoise::initialize_whitenoise",
+                        "Failed to set up subscriptions for account {}: {}",
+                        account.pubkey.to_hex(),
+                        e
+                    );
+                    // Continue with other accounts instead of failing completely
+                }
+            }
+        }
+        Ok(())
+    }

     // Fetch events and setup subscriptions for all accounts after event processing has started
-    {
-        let accounts = Account::all(&whitenoise_ref.database).await?;
-        for account in accounts {
-            // Trigger background data fetch for each account (non-critical)
-            if let Err(e) = whitenoise_ref.background_sync_account_data(&account).await {
-                tracing::warn!(
-                    target: "whitenoise::load_accounts",
-                    "Failed to trigger background fetch for account {}: {}",
-                    account.pubkey.to_hex(),
-                    e
-                );
-                // Continue - background fetch failure should not prevent account loading
-            }
-            // Setup subscriptions for this account
-            match whitenoise_ref.setup_subscriptions(&account).await {
-                Ok(()) => {
-                    tracing::debug!(
-                        target: "whitenoise::initialize_whitenoise",
-                        "Successfully set up subscriptions for account: {}",
-                        account.pubkey.to_hex()
-                    );
-                }
-                Err(e) => {
-                    tracing::warn!(
-                        target: "whitenoise::initialize_whitenoise",
-                        "Failed to set up subscriptions for account {}: {}",
-                        account.pubkey.to_hex(),
-                        e
-                    );
-                    // Continue with other accounts instead of failing completely
-                }
-            }
-        }
-    }
+    Self::load_and_setup_accounts(whitenoise_ref).await?;
src/whitenoise/users.rs (2)

39-52: Syncs only on content change, not recency — clarify behavior

The implementation updates when metadata differs, assuming fetch_metadata_from returns the freshest event. If “newer” semantics are important, either compare event timestamps or adjust the docs to say “if changed.”

Would you like me to update the docstring to explicitly say “if the fetched metadata differs from the cached metadata”?


474-485: Minor: test user construction duplicated across tests

Multiple tests inline identical User initializations. Consider a small helper to reduce duplication and keep tests focused on behavior.

I can provide a fn make_user(pubkey: PublicKey, name: &str) -> User helper if you want.

src/whitenoise/accounts.rs (5)

19-38: Good error taxonomy; consider adding a Blossom error variant

AccountError is well-structured with From mappings. In upload_profile_picture you downcast to WhitenoiseError::Other(anyhow!(...)). For parity and type safety, consider adding:

  • BlossomError(#[from] nostr_blossom::Error) (or the client’s error type)
    This avoids losing specificity and makes handling easier upstream.

I can wire this through WhitenoiseError via From<AccountError> if desired.


276-286: Relay connect: dedup and error handling

  • Dedup by URL rather than Relay to avoid relying on Hash/Eq for the whole struct:
    let mut urls: HashSet<RelayUrl> = HashSet::new(); urls.extend(self.nip65_relays(...).await?.into_iter().map(|r| r.url)); ...
  • Consider propagating errors from connect() (if available) instead of ignoring the result.

I can provide the exact diff once we confirm nostr.client.connect()’s signature.


384-397: Inefficient account count (loads full rows)

get_accounts_count loads all accounts into memory just to count them. Add a SELECT COUNT(*) query in database/accounts.rs and use that here.

Example:

-    pub async fn get_accounts_count(&self) -> Result<usize> {
-        let accounts = Account::all(&self.database).await?;
-        Ok(accounts.len())
-    }
+    pub async fn get_accounts_count(&self) -> Result<usize> {
+        let count = crate::whitenoise::database::accounts::count(&self.database).await?;
+        Ok(count as usize)
+    }

I can add the count helper with the exact SQL if you want.
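For reference, the helper could be as small as this sketch (assumes the existing accounts table and the sqlx pool exposed as database.pool):

use sqlx::Row;

pub(crate) async fn count(database: &Database) -> Result<i64, WhitenoiseError> {
    // Single COUNT(*) instead of materializing every account row.
    let row = sqlx::query("SELECT COUNT(*) FROM accounts")
        .fetch_one(&database.pool)
        .await?;
    Ok(row.get::<i64, _>(0))
}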


813-882: Redundant dedup in subscriptions (non-blocking)

nostr_mls.get_relays() already returns a deduplicated set (BTreeSet). You additionally dedup with a HashSet and then collect again. It’s harmless but could be simplified by directly iterating the set.

This keeps the method leaner without changing behavior.


11-18: Import style inconsistency for RelayType across files

Here you import RelayType from the crate root (use crate::RelayType;), while users.rs imports it via crate::whitenoise::relays::{Relay, RelayType}. Prefer one style for consistency across the module.

I can submit a follow-up patch normalizing imports.


@jgmontoya jgmontoya left a comment


LGTM! I think CodeRabbit comments make sense though


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/whitenoise/accounts.rs (3)

669-693: Spawned task drops errors; log failures inside the task

Using ? inside tokio::spawn returns an Err that gets dropped since the JoinHandle isn’t awaited. This causes silent failures for metadata publishes.

Apply this diff to log errors instead of dropping them:

@@
-        tokio::spawn(async move {
-            tracing::debug!(target: "whitenoise::accounts::background_publish_user_metadata", "Background task: Publishing metadata for account: {:?}", account_clone.pubkey);
-
-            nostr
-                .publish_metadata_with_signer(&user.metadata, &relays, signer)
-                .await?;
-
-            tracing::debug!(target: "whitenoise::accounts::background_publish_user_metadata", "Successfully published metadata for account: {:?}", account_clone.pubkey);
-            Ok::<(), WhitenoiseError>(())
-        });
+        tokio::spawn(async move {
+            tracing::debug!(
+                target: "whitenoise::accounts::background_publish_user_metadata",
+                "Background task: Publishing metadata for account: {:?}",
+                account_clone.pubkey
+            );
+            if let Err(e) = nostr
+                .publish_metadata_with_signer(&user.metadata, &relays, signer)
+                .await
+            {
+                tracing::error!(
+                    target: "whitenoise::accounts::background_publish_user_metadata",
+                    "Failed to publish metadata for account {}: {}",
+                    account_clone.pubkey.to_hex(),
+                    e
+                );
+            } else {
+                tracing::debug!(
+                    target: "whitenoise::accounts::background_publish_user_metadata",
+                    "Successfully published metadata for account: {:?}",
+                    account_clone.pubkey
+                );
+            }
+        });

694-722: Same error-dropping issue for relay list publication

Mirror the fix from metadata publishing to avoid silent failures when publishing relay lists.

Apply this diff:

@@
-        tokio::spawn(async move {
+        tokio::spawn(async move {
             tracing::debug!(target: "whitenoise::accounts::background_publish_account_relay_list", "Background task: Publishing relay list for account: {:?}", account_clone.pubkey);
-
-            nostr
-                .publish_relay_list_with_signer(&relays, relay_type, &target_relays, keys)
-                .await?;
-
-            tracing::debug!(target: "whitenoise::accounts::background_publish_account_relay_list", "Successfully published relay list for account: {:?}", account_clone.pubkey);
-            Ok::<(), WhitenoiseError>(())
-        });
+            if let Err(e) = nostr
+                .publish_relay_list_with_signer(&relays, relay_type, &target_relays, keys)
+                .await
+            {
+                tracing::error!(
+                    target: "whitenoise::accounts::background_publish_account_relay_list",
+                    "Failed to publish relay list for account {}: {}",
+                    account_clone.pubkey.to_hex(),
+                    e
+                );
+            } else {
+                tracing::debug!(
+                    target: "whitenoise::accounts::background_publish_account_relay_list",
+                    "Successfully published relay list for account: {:?}",
+                    account_clone.pubkey
+                );
+            }
+        });

724-748: Same error-dropping issue for follow list publication

Ensure follow-list publish failures are logged rather than silently dropped.

Apply this diff:

@@
-        tokio::spawn(async move {
+        tokio::spawn(async move {
             tracing::debug!(target: "whitenoise::accounts::background_publish_account_follow_list", "Background task: Publishing follow list for account: {:?}", account_clone.pubkey);
-
-            nostr
-                .publish_follow_list_with_signer(&follows_pubkeys, &relays, keys)
-                .await?;
-
-            tracing::debug!(target: "whitenoise::accounts::background_publish_account_follow_list", "Successfully published follow list for account: {:?}", account_clone.pubkey);
-            Ok::<(), WhitenoiseError>(())
-        });
+            if let Err(e) = nostr
+                .publish_follow_list_with_signer(&follows_pubkeys, &relays, keys)
+                .await
+            {
+                tracing::error!(
+                    target: "whitenoise::accounts::background_publish_account_follow_list",
+                    "Failed to publish follow list for account {}: {}",
+                    account_clone.pubkey.to_hex(),
+                    e
+                );
+            } else {
+                tracing::debug!(
+                    target: "whitenoise::accounts::background_publish_account_follow_list",
+                    "Successfully published follow list for account: {:?}",
+                    account_clone.pubkey
+                );
+            }
+        });
🧹 Nitpick comments (6)
src/whitenoise/accounts.rs (6)

50-70: Avoid unwrap on persisted User.id; propagate a concrete error instead

user.id.unwrap() can panic if an invariant is ever violated. Prefer explicit error propagation to preserve failure context and keep panic-free code paths.

Apply this diff:

@@
-        let account = Account {
-            id: None,
-            user_id: user.id.unwrap(),
+        let user_id = user.id.ok_or(WhitenoiseError::UserNotPersisted)?;
+        let account = Account {
+            id: None,
+            user_id,
             pubkey: keys.public_key(),
             last_synced_at: None,
             created_at: Utc::now(),
             updated_at: Utc::now(),
         };

118-148: Reduce redundant relay-list publishes when (bulk) adding/removing relays

Every add/remove triggers background_publish_account_relay_list, and setup flows later publish the whole list again. During initial setup this can spawn N+1 publishes per relay type and cause avoidable network churn.

  • Keep Account::add_relay/remove_relay as-is for external API.
  • In internal bulk paths (e.g., add_relays_to_account), write directly via User::add_relay and publish once afterwards (which you already do in setup flows).

Apply this diff to make add_relays_to_account bulk-write without auto-publish side effects:

@@
     async fn add_relays_to_account(
         &self,
         account: &mut Account,
         relays: &[Relay],
         relay_type: RelayType,
     ) -> Result<()> {
-        for relay in relays {
-            account.add_relay(relay, relay_type, self).await?;
-        }
+        // Bulk insert without triggering per-relay publishes; callers publish once afterwards.
+        let user = account.user(&self.database).await?;
+        for relay in relays {
+            user.add_relay(relay, relay_type, &self.database).await?;
+        }
         Ok(())
     }

This keeps the public API ergonomics while avoiding redundant background tasks during setup/migrations.

Also applies to: 150-178, 644-654


217-249: Prefer concrete error variants over Other(anyhow!(...)) for Blossom upload

Type-erasing Blossom upload errors into WhitenoiseError::Other(anyhow::Error) loses actionable detail. Consider adding a dedicated error variant (e.g., WhitenoiseError::Blossom(#[from] nostr_blossom::Error)) or mapping to an existing specific variant to preserve context and enable targeted handling/telemetry.

If you want, I can open a follow-up to add a Blossom error variant in WhitenoiseError and update call sites.


379-392: Avoid loading all rows just to count accounts

get_accounts_count() currently loads all accounts and takes len(). For large datasets this is wasteful.

Prefer SELECT COUNT(*) FROM accounts via a lightweight DB call (e.g., sqlx::query_scalar::<_, i64>("SELECT COUNT(*) FROM accounts")). If you’d like, I can add a Database::count_accounts() helper and wire this up.


355-377: Docstring mismatch for logout parameter; update to reflect pubkey

The docs mention “the account to log out,” but the method takes a &PublicKey. Also the note about “in-memory accounts list” isn’t reflected in this method.

Apply this doc-only diff:

@@
-    /// Logs out the user associated with the given account.
+    /// Logs out the user associated with the given public key.
@@
-    /// * `account` - The account to log out.
+    /// * `pubkey` - Public key of the account to log out.

If there’s still an in-memory list elsewhere, consider mentioning where it’s updated (or clarify that it’s managed lazily on next startup).


808-876: Minor: reuse already-fetched user relays to avoid duplicate DB hits

You already fetch user_relays at Line 854 for logging; pass it to the subscription setup instead of re-awaiting.

@@
-        let user_relays = account.nip65_relays(self).await?;
+        let user_relays = account.nip65_relays(self).await?;
@@
-        self.nostr
-            .setup_account_subscriptions_with_signer(
-                account.pubkey,
-                &account.nip65_relays(self).await?,
-                &account.inbox_relays(self).await?,
-                &group_relays_vec,
-                &nostr_group_ids,
-                keys,
-            )
+        self.nostr
+            .setup_account_subscriptions_with_signer(
+                account.pubkey,
+                &user_relays,
+                &account.inbox_relays(self).await?,
+                &group_relays_vec,
+                &nostr_group_ids,
+                keys,
+            )
             .await?;
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between c3a0467 and 341f5ff.

📒 Files selected for processing (2)
  • src/whitenoise/accounts.rs (8 hunks)
  • src/whitenoise/users.rs (8 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/whitenoise/users.rs
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/relays.rs:260-278
Timestamp: 2025-08-11T10:23:59.406Z
Learning: In `src/whitenoise/accounts/relays.rs`, the `publish_relay_list_for_account` method is only called after the account setup is complete, which guarantees that `account.nip65_relays` is already populated and will never be empty. Therefore, no fallback logic is needed when `target_relays` is None.
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/welcomes.rs:53-53
Timestamp: 2025-08-17T19:35:53.509Z
Learning: In the Whitenoise codebase, Account::create_nostr_mls should have its errors mapped to WhitenoiseError::NostrMlsError instead of using anyhow::anyhow!() wrapping, as this preserves type safety and specific error information.
📚 Learning: 2025-08-17T19:34:30.290Z
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/messages.rs:193-195
Timestamp: 2025-08-17T19:34:30.290Z
Learning: In the whitenoise codebase, `Account::create_nostr_mls()` returns `Result<NostrMls<NostrMlsSqliteStorage>, AccountError>`, and `WhitenoiseError` has an `Account(#[from] AccountError)` variant that automatically converts. When handling MLS creation errors, use the `?` operator to let `AccountError` convert naturally to `WhitenoiseError::Account` rather than manually mapping to `WhitenoiseError::Configuration`.

Applied to files:

  • src/whitenoise/accounts.rs
📚 Learning: 2025-08-17T19:35:53.509Z
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/welcomes.rs:53-53
Timestamp: 2025-08-17T19:35:53.509Z
Learning: In the Whitenoise codebase, Account::create_nostr_mls should have its errors mapped to WhitenoiseError::NostrMlsError instead of using anyhow::anyhow!() wrapping, as this preserves type safety and specific error information.

Applied to files:

  • src/whitenoise/accounts.rs
📚 Learning: 2025-08-11T10:23:59.406Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/relays.rs:260-278
Timestamp: 2025-08-11T10:23:59.406Z
Learning: In `src/whitenoise/accounts/relays.rs`, the `publish_relay_list_for_account` method is only called after the account setup is complete, which guarantees that `account.nip65_relays` is already populated and will never be empty. Therefore, no fallback logic is needed when `target_relays` is None.

Applied to files:

  • src/whitenoise/accounts.rs
🧬 Code graph analysis (1)
src/whitenoise/accounts.rs (4)
src/whitenoise/database/users.rs (3)
  • relays (157-189)
  • from (66-74)
  • find_by_pubkey (127-141)
src/whitenoise/database/relays.rs (1)
  • from (49-56)
src/whitenoise/relays.rs (3)
  • from (38-44)
  • from (48-54)
  • from (58-65)
src/whitenoise/database/accounts.rs (3)
  • user (152-159)
  • all (92-102)
  • find_by_pubkey (118-137)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: check (ubuntu-latest, native)
🔇 Additional comments (6)
src/whitenoise/accounts.rs (6)

85-93: Relays retrieval via user linkage looks good

Clean separation: account resolves its user, then defers to user->relays by type. Straightforward, minimal DB round trips.


194-215: Metadata update pipeline is solid

Local persistence followed by background publish is the right balance between UX and reliability. Tracepoints are helpful.


250-257: MLS storage initialization helper is concise and focused

Solid helper that isolates path layout and storage creation; errors flow via AccountError as expected.


259-269: Group-id loading helper is correct and returns stable hex IDs

Good use of create_nostr_mls and hex encoding for stable identifiers.


271-281: Explicitly handle errors from Client::connect()

The call to

whitenoise.nostr.client.connect().await;

silently drops any potential failure. If connect() returns a Result, you should either propagate the error or handle it explicitly—otherwise failed connections will go unnoticed at runtime.

Suggested change:

-whitenoise.nostr.client.connect().await;
+// Propagate any error from connect()
+whitenoise.nostr.client.connect().await?;

• Location: src/whitenoise/accounts.rs:279
• Please verify the signature of nostr_sdk::Client::connect() (in the upstream crate) to confirm whether it returns a Result or (), and update this call accordingly.


766-779: No DB type mismatch—last_synced_at is stored as INTEGER and consistently handled as epoch-millis

All evidence in the code shows that we persist and load last_synced_at as an integer number of milliseconds since the epoch:

  • In the SQLite schema (used for in-process or test databases), the table is created as:
    CREATE TABLE accounts (
        …,
        last_synced_at INTEGER,
        created_at     INTEGER NOT NULL,
        updated_at     INTEGER NOT NULL
    )
    (src/whitenoise/database/accounts.rs)
  • The Rust AccountRow struct declares
    pub last_synced_at: Option<DateTime<Utc>>,
    and its loader uses
    let last_synced_at = match row.try_get::<Option<i64>, _>("last_synced_at")? {
        Some(_) => Some(parse_timestamp(row, "last_synced_at")?),
        None    => None,
    };
    where parse_timestamp converts the stored i64 (millis) into a DateTime<Utc>.
  • All INSERT/UPDATE calls—including the snippet in question—bind ts.timestamp_millis() (an i64) against that same INTEGER column.

Because the column is defined as INTEGER and the code uniformly writes and reads epoch-millis, there is no mismatch here.

Likely an incorrect or invalid review comment.

@erskingardner erskingardner merged commit f1a3314 into master Aug 21, 2025
4 checks passed
@erskingardner erskingardner deleted the remove-whitenoise-methods branch August 21, 2025 12:03
@coderabbitai coderabbitai bot mentioned this pull request Oct 7, 2025