Remove whitenoise methods #322
Conversation
Walkthrough

Refactors whitenoise to replace global user/relay APIs with per-account methods, adds `RelayType` and expanded `Account`/`User` APIs, moves welcomes into `Whitenoise`, introduces a `Whitenoise` singleton init flow, adds `NostrManager` error/relay helpers, and updates the media cache to persist `account_pubkey`.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Test as Integration/Test
    participant W as Whitenoise (singleton)
    participant DB as Database
    participant NM as NostrManager
    participant A as Account
    participant U as User
    Note over W: initialize_whitenoise()
    Test->>W: initialize_whitenoise(cfg)
    W->>DB: ensure default relays & app settings
    W->>NM: add relays if none, connect client
    W->>W: spawn background tasks, load accounts
    Note over A: Per-account operations
    Test->>A: metadata(whitenoise)
    A->>DB: read stored metadata
    A->>NM: publish/persist when update_metadata called
    Test->>A: relays(RelayType::Nip65, whitenoise)
    A->>DB: fetch relays filtered by type
```

```mermaid
sequenceDiagram
    autonumber
    actor User as External User
    participant W as Whitenoise
    participant MLS as NostrMls
    participant NM as NostrManager
    participant G as Groups
    Note over W: Welcomes flow
    User->>W: pending_welcomes(pubkey)
    W->>MLS: load MLS for account
    MLS->>W: return welcomes
    User->>W: accept_welcome(pubkey, event_id)
    W->>MLS: accept welcome
    W->>G: aggregate group relays
    W->>NM: subscribe to group messages with signer keys
```
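For orientation, here is a minimal sketch of how the flows above look from calling code. The method names (`initialize_whitenoise`, `get_instance`, `find_account_by_pubkey`, `metadata`, `relays(RelayType::Nip65, …)`) come from this PR's walkthrough and diagrams; the crate path, config constructor, and error plumbing are assumptions for illustration only:

```rust
// Hedged sketch of the singleton init + per-account flow, assuming the
// `whitenoise` crate exports these types as referenced in this review.
use nostr_sdk::PublicKey;
use whitenoise::{RelayType, Whitenoise, WhitenoiseConfig, WhitenoiseError};

async fn example(config: WhitenoiseConfig, pubkey: PublicKey) -> Result<(), WhitenoiseError> {
    // One-time initialization: ensures default relays/app settings,
    // connects the Nostr client, and spawns background tasks.
    Whitenoise::initialize_whitenoise(config).await?;
    let whitenoise = Whitenoise::get_instance()?;

    // Per-account operations replace the old global user/relay APIs.
    let account = whitenoise.find_account_by_pubkey(&pubkey).await?;
    let _metadata = account.metadata(whitenoise).await?;
    let _nip65_relays = account.relays(RelayType::Nip65, whitenoise).await?;
    Ok(())
}
```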
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (23)
src/media/encryption.rs (2)
44-49: Fix decryption API and nonce handling to match encryption.

- Key length should be 32 bytes, same as `encrypt_file`.
- Pass a proper `Nonce` instead of relying on `into()` from `&[u8]` (which is brittle across crate versions).

Apply this diff:

```diff
-#[allow(dead_code)]
-pub fn decrypt_file(data: &[u8], key: &[u8], nonce: &[u8]) -> Result<Vec<u8>, MediaError> {
-    let cipher = ChaCha20Poly1305::new(Key::from_slice(key));
-    cipher
-        .decrypt(nonce.into(), data)
-        .map_err(|e| MediaError::Decryption(e.to_string()))
-}
+#[allow(dead_code)]
+pub fn decrypt_file(
+    data: &[u8],
+    key: &[u8; 32],
+    nonce: &[u8],
+) -> Result<Vec<u8>, MediaError> {
+    let cipher = ChaCha20Poly1305::new(Key::from_slice(key));
+    let nonce = Nonce::from_slice(nonce);
+    cipher
+        .decrypt(nonce, data)
+        .map_err(|e| MediaError::Decryption(e.to_string()))
+}
```
21-31: Explicitly use OsRng for nonce generation.

The current call to `rand::rng()` relies on a generic RNG and can obscure whether you're using a CSPRNG. Even though this crate pins `rand = "0.9"` (and the lock file shows v0.9.1), for AEAD nonces you should explicitly use an OS-backed CSPRNG to guarantee cryptographic quality and make your intent clear.

File: `src/media/encryption.rs`, lines 21–31. Suggested patch:

```diff
 use chacha20poly1305::{
     aead::{Aead, KeyInit},
     ChaCha20Poly1305, Key, Nonce,
 };
-use rand::RngCore;
-use rand::rng();
+use rand::RngCore;
+use rand::rngs::OsRng;

     let cipher = ChaCha20Poly1305::new(Key::from_slice(key));
     let mut nonce_bytes = [0u8; 12];
-    rand::rng().fill_bytes(&mut nonce_bytes);
+    // Use OS CSPRNG for nonce generation
+    let mut rng = OsRng;
+    rng.fill_bytes(&mut nonce_bytes);
     let nonce = Nonce::from_slice(&nonce_bytes);
     cipher
         .encrypt(nonce, data)
         .map(|encrypted| (encrypted, nonce_bytes.to_vec()))
         .map_err(|e| MediaError::Encryption(e.to_string()))
 }
```

If you'd still prefer `rand::rng()`, the repository does indeed depend on rand 0.9.x (per Cargo.toml and Cargo.lock), so that function is stable—but for maximum clarity and security, using `OsRng` is strongly recommended.

src/media/cache.rs (4)
76-95: Avoid `unwrap()` when serializing JSON for DB bind.

A serialization failure will panic the process. Propagate as `MediaError::Cache` instead.

```diff
-    .bind(file_metadata.map(|m| serde_json::to_value(m).unwrap()))
+    .bind(
+        match file_metadata {
+            Some(m) => Some(
+                serde_json::to_value(m).map_err(|e| MediaError::Cache(e.to_string()))?,
+            ),
+            None => None,
+        },
+    )
```
109-121: Scope fetch by `account_pubkey` to avoid cross-account collisions.

With the new `account_pubkey` dimension, `SELECT * WHERE mls_group_id AND file_hash` can fetch another account's entry if group and hash overlap.

Proposed change (signature + query):

```diff
-pub async fn fetch_cached_file(
-    group: &group_types::Group,
-    file_hash: &str,
-    db: &Database,
-) -> Result<Option<CachedMediaFile>, MediaError> {
+pub async fn fetch_cached_file(
+    group: &group_types::Group,
+    file_hash: &str,
+    account_pubkey: &str,
+    db: &Database,
+) -> Result<Option<CachedMediaFile>, MediaError> {
@@
-    let media_file = sqlx::query_as::<_, MediaFile>(
-        "SELECT * FROM media_files WHERE mls_group_id = ? AND file_hash = ?",
-    )
+    let media_file = sqlx::query_as::<_, MediaFile>(
+        "SELECT * FROM media_files WHERE mls_group_id = ? AND file_hash = ? AND account_pubkey = ?",
+    )
     .bind(group.mls_group_id.as_slice())
     .bind(file_hash)
+    .bind(account_pubkey)
```

Adjust call sites and tests accordingly.
145-169: Scope deletion by `account_pubkey` to prevent deleting another account's cached file.

Same rationale as fetch; include the account in both the prefetch step and the DELETE.

```diff
-pub async fn delete_cached_file(
-    group: &group_types::Group,
-    file_hash: &str,
-    db: &Database,
-) -> Result<(), MediaError> {
+pub async fn delete_cached_file(
+    group: &group_types::Group,
+    file_hash: &str,
+    account_pubkey: &str,
+    db: &Database,
+) -> Result<(), MediaError> {
@@
-    let cached_media_file = fetch_cached_file(group, file_hash, db).await?;
+    let cached_media_file = fetch_cached_file(group, file_hash, account_pubkey, db).await?;
@@
-    sqlx::query("DELETE FROM media_files WHERE mls_group_id = ? AND file_hash = ?")
+    sqlx::query(
+        "DELETE FROM media_files WHERE mls_group_id = ? AND file_hash = ? AND account_pubkey = ?",
+    )
     .bind(group.mls_group_id.as_slice())
     .bind(file_hash)
+    .bind(account_pubkey)
     .execute(&db.pool)
     .await?;
```
373-384: Scope `fetch_cached_file` and `delete_cached_file` by `account_pubkey`.

Both `fetch_cached_file` and `delete_cached_file` currently only filter on `mls_group_id` and `file_hash`, but the `media_files` table schema and the cache-insert API include an `account_pubkey`. To enforce per-account isolation, these helpers must be updated:

- In `src/media/cache.rs` (around lines 107–112), change

```rust
pub async fn fetch_cached_file(
    group: &group_types::Group,
    file_hash: &str,
    db: &Database,
) -> Result<Option<CachedMediaFile>, MediaError>
```

  to

```rust
pub async fn fetch_cached_file(
    group: &group_types::Group,
    account_pubkey: &str,
    file_hash: &str,
    db: &Database,
) -> Result<Option<CachedMediaFile>, MediaError>
```

  and update the SELECT SQL to include `AND account_pubkey = ?`.
- In `src/media/cache.rs` (around lines 145–149), change `pub async fn delete_cached_file(group: &group_types::Group, file_hash: &str, db: &Database) -> Result<(), MediaError>` to include `account_pubkey: &str`, and add the same filter to the DELETE statement.
- Update all call sites to pass the `account_pubkey` argument:
  - In `src/media/mod.rs` (lines ~168, ~185): `cache::fetch_cached_file(group, file_hash, db)` → `cache::fetch_cached_file(group, &account_pubkey, file_hash, db)`, and similarly for `delete_cached_file`.
  - In the tests in `src/media/cache.rs` (around lines 307–311, 366–382), pass the test's `account_pubkey` when calling both helpers.
- Add a test case asserting that using the wrong `account_pubkey` returns `None` from `fetch_cached_file` and that `delete_cached_file` does not remove another account's file.

src/whitenoise/database/relays.rs (1)
95-107: Don't create on non-not-found errors.

`find_or_create_by_url` currently creates a new relay for any error from `find_by_url`, not just `RelayNotFound`. This risks masking transient DB errors and may introduce inconsistent state.

Harden the match to only create on `RelayNotFound` and propagate other errors:

```diff
-    match Relay::find_by_url(url, database).await {
-        Ok(relay) => Ok(relay),
-        Err(_) => {
-            let relay = Relay::new(url);
-            let new_relay = relay.save(database).await?;
-            Ok(new_relay)
-        }
-    }
+    match Relay::find_by_url(url, database).await {
+        Ok(relay) => Ok(relay),
+        Err(WhitenoiseError::RelayNotFound) => {
+            let relay = Relay::new(url);
+            let new_relay = relay.save(database).await?;
+            Ok(new_relay)
+        }
+        Err(e) => Err(e),
+    }
```

src/whitenoise/error.rs (2)
101-103: Fix typo in user-facing error message.

"direcet" → "direct".

Apply this diff:

```diff
-    #[error("nip04 direcet message error")]
+    #[error("nip04 direct message error")]
```
114-118: Preserve error chain when converting from boxed errors.

Using `anyhow!(err.to_string())` loses the original source chain and backtrace. Prefer `anyhow::Error::from(err)`.

```diff
 impl From<Box<dyn std::error::Error + Send + Sync>> for WhitenoiseError {
     fn from(err: Box<dyn std::error::Error + Send + Sync>) -> Self {
-        WhitenoiseError::Other(anyhow::anyhow!(err.to_string()))
+        WhitenoiseError::Other(anyhow::Error::from(err))
     }
 }
```

src/whitenoise/database/users.rs (2)
254-256: Avoid `expect` on `relay.id`; return a typed error instead.

Panicking here turns a recoverable situation into a crash. Resolve the ID via `find_by_url` when absent and propagate `RelayNotFound` if it can't be resolved.

```diff
-    let relay_id = relay.id.expect("Relay should have ID after save");
+    let relay_id = match relay.id {
+        Some(id) => id,
+        None => {
+            Relay::find_by_url(&relay.url, database)
+                .await?
+                .id
+                .ok_or(WhitenoiseError::RelayNotFound)?
+        }
+    };
```
263-265: Use association timestamps for user_relays, not the user's timestamps.

Recording the association time with the user's `created_at`/`updated_at` is misleading and breaks auditability. Use "now" for both.

```diff
-    .bind(self.created_at.timestamp_millis())
-    .bind(self.updated_at.timestamp_millis())
+    .bind(chrono::Utc::now().timestamp_millis())
+    .bind(chrono::Utc::now().timestamp_millis())
```

src/whitenoise/database/accounts.rs (1)
45-48: Bug: `last_synced_at` NULL/text handling can misdecode.

`row.try_get::<Option<i64>>("last_synced_at")?` will error if the DB column is TEXT (even when NULL), preventing the intended fallback in `parse_timestamp` from ever running. This breaks environments where timestamps are stored as TEXT.

Use a type-agnostic NULL check and then delegate to `parse_timestamp` only when non-NULL. One portable approach is to try `Option<String>` first, then `Option<i64>`:

```diff
-    let last_synced_at = match row.try_get::<Option<i64>, _>("last_synced_at")? {
-        Some(_) => Some(parse_timestamp(row, "last_synced_at")?),
-        None => None,
-    };
+    let last_synced_at = match row.try_get::<Option<String>, _>("last_synced_at") {
+        Ok(Some(_)) => Some(parse_timestamp(row, "last_synced_at")?),
+        Ok(None) => None,
+        Err(_) => match row.try_get::<Option<i64>, _>("last_synced_at") {
+            Ok(Some(_)) => Some(parse_timestamp(row, "last_synced_at")?),
+            Ok(None) => None,
+            Err(e) => return Err(e),
+        },
+    };
```

This preserves the dual INTEGER/TEXT compatibility provided by `parse_timestamp`.

src/whitenoise/messages.rs (1)
119-121: Introduce a dedicated WhitenoiseError variant for message processing failures.

To preserve the original `ProcessingError` type and improve downstream error handling/observability, add a new `WhitenoiseError` variant and derive `From<ProcessingError>`, then replace the current `anyhow!` wrapper:

- In src/whitenoise/error.rs, extend the `WhitenoiseError` enum:

```diff
 #[derive(Error, Debug)]
 pub enum WhitenoiseError {
     // … existing variants …
+    #[error("Message processing error: {0}")]
+    MessageProcessing(#[from] crate::whitenoise::message_aggregator::ProcessingError),
     #[error("Other error: {0}")]
     Other(#[from] anyhow::Error),
 }
```

- In src/whitenoise/messages.rs, simplify the `map_err` on aggregation:

```diff
-.await
-.map_err(|e| {
-    WhitenoiseError::from(anyhow::anyhow!("Message aggregation failed: {}", e))
-})
+.await
+.map_err(WhitenoiseError::from)
```

This change maintains the original error type, enables pattern-matching on `WhitenoiseError::MessageProcessing`, and removes the generic `anyhow` wrapping.

src/whitenoise/relays.rs (1)
101-113: Docs parameter mismatch (account vs pubkey).

The function takes `&Account` but docs still reference `pubkey`. Update to prevent API confusion.

```diff
-    /// Fetches the status of relays associated with a user's public key.
+    /// Fetches the status of relays associated with an account.
@@
-    /// * `pubkey` - The `PublicKey` of the user whose relay statuses should be fetched.
+    /// * `account` - The account whose relay statuses should be fetched.
```

src/whitenoise/groups.rs (2)
110-113: Bug: invalid time arithmetic on `Timestamp`.

`Timestamp::now().add(…)` with a raw integer likely won't compile (or is semantically wrong). Elsewhere you correctly use `Duration`. Do the same here.

```diff
-    use std::ops::Add;
-    let one_month_future = Timestamp::now().add(30 * 24 * 60 * 60);
+    let one_month_future = Timestamp::now() + Duration::from_secs(30 * 24 * 60 * 60);
```
63-65: Typo in log message ("Succefully").

Fix spelling to avoid noisy logs and ease searching.

```diff
-    tracing::debug!("Succefully fetched the key packages of members");
+    tracing::debug!("Successfully fetched the key packages of members");
```

src/whitenoise/mod.rs (1)
262-265: Replace unsafe `unwrap()` on `get_instance()`.

There's one call site in `src/whitenoise/mod.rs` that still uses

```rust
Whitenoise::get_instance().unwrap()
```

which will panic if the singleton hasn't been initialized. All other callers already use `?` or map the error:

- src/whitenoise/users.rs:

```rust
let whitenoise = Whitenoise::get_instance()?;
```

- src/nostr_manager/mod.rs:

```rust
let whitenoise = Whitenoise::get_instance()
    .map_err(|e| NostrManagerError::EventProcessingError(e.to_string()))?;
```

- src/bin/integration_test.rs:

```rust
let whitenoise = Whitenoise::get_instance()?;
```

Please update the convenience function in src/whitenoise/mod.rs to return a `Result` (or otherwise handle the `Err(WhitenoiseError::Initialization)` case) instead of unconditionally unwrapping. For example:

```diff
-    pub fn instance() -> &'static Whitenoise {
-        Whitenoise::get_instance().unwrap()
-    }
+    pub fn instance() -> Result<&'static Whitenoise, WhitenoiseError> {
+        Whitenoise::get_instance()
+    }
```

Or, if you really need a panicking getter, document that it will panic and add an explicit message:

```rust
pub fn instance_or_panic() -> &'static Whitenoise {
    Whitenoise::get_instance()
        .expect("Whitenoise must be initialized before calling instance_or_panic()")
}
```

This ensures that uninitialized-access errors are handled deliberately, rather than triggering a silent panic deep in the library.
src/whitenoise/users.rs (1)
195-283: Excessive relay-list publishing due to per-item add/remove calls.

`sync_relays_for_type` removes and adds relays one-by-one via `self.remove_relay`/`self.add_relay`. In this codebase, those calls trigger a background publish of the relay list each time, causing N redundant publications and potential rate-limiting or event spam when many changes occur.

Consider batching:

- Apply all DB mutations silently, then publish once at the end if any changes occurred.
- Options:
  - Add "silent" variants of `add_relay`/`remove_relay` on `User` that don't publish.
  - Or, at the end of this method, fetch the account via `Whitenoise::find_account_by_pubkey` and call `background_publish_account_relay_list(account, relay_type)` once.

Illustrative change at the end of the method:

```diff
-    Ok(true)
+    // After applying all changes, publish once
+    if let Ok(account) = whitenoise.find_account_by_pubkey(&self.pubkey).await {
+        let _ = whitenoise
+            .background_publish_account_relay_list(&account, relay_type)
+            .await;
+    }
+    Ok(true)
```

If you prefer not to introduce a dependency on `Account` here, I can propose "silent" DB helpers on `User` and move the publish to callers.

src/whitenoise/accounts.rs (5)
118-149: `add_relay` always publishes — provide a way to batch.

This method triggers `background_publish_account_relay_list` on every add. In setup flows that add multiple relays, this creates unnecessary duplicate publications. Provide a way to:

- Add multiple relays without publishing, then publish once; or
- Accept a `publish: bool` flag.

This will also harmonize with the batching suggestion in `User::sync_relays_for_type`.

Example:

```diff
     pub async fn add_relay(&self, relay: &Relay, relay_type: RelayType, whitenoise: &Whitenoise) -> Result<()> {
         let user = self.user(&whitenoise.database).await?;
         user.add_relay(relay, relay_type, &whitenoise.database).await?;
         whitenoise.background_publish_account_relay_list(self, relay_type).await?;
         tracing::debug!(...);
         Ok(())
     }
+
+    // Optionally add:
+    pub(crate) async fn add_relay_silent(&self, relay: &Relay, relay_type: RelayType, whitenoise: &Whitenoise) -> Result<()> {
+        let user = self.user(&whitenoise.database).await?;
+        user.add_relay(relay, relay_type, &whitenoise.database).await
+    }
```

Then use the silent variant inside bulk-setup paths.
150-179: Same batching concern for `remove_relay`.

Mirrors the comment above — per-item publish will spam events during large diffs. Provide a "silent" remove or batch at the caller.

I can update both setup paths to publish once at the end.
503-534: Duplicate publications in new-account setup.

`setup_relays_for_new_account` calls `add_relays_to_account` (which publishes on every add), then explicitly calls `publish_relay_list` for each type, causing redundant events.

Fix options:

- Use "silent" add inside setup, then publish once per type (preferred).
- Or remove the explicit `publish_relay_list` calls and rely on a single publish after adding all relays of the type.

Minimal change:

```diff
-    let nip65_relays = self
-        .setup_new_account_relay_type(account, RelayType::Nip65, &default_relays)
+    let nip65_relays = self
+        .setup_new_account_relay_type(account, RelayType::Nip65, &default_relays)
         .await?;
     ...
-    // Always publish all relay lists for new accounts
+    // Publish once per type for new accounts
     self.publish_relay_list(&nip65_relays, RelayType::Nip65, &nip65_relays, &keys).await?;
     self.publish_relay_list(&inbox_relays, RelayType::Inbox, &nip65_relays, &keys).await?;
     self.publish_relay_list(&key_package_relays, RelayType::KeyPackage, &nip65_relays, &keys).await?;
```

…and change `setup_new_account_relay_type` to perform silent DB adds.
536-590: Duplicate publications in existing-account setup; violates "only publish when needed" intent.

This path also uses `add_relays_to_account` (publishes per add) and then conditionally publishes again when `should_publish_*` is true, defeating the optimization.

Adopt silent adds and publish only when `should_publish_*` is true. See previous comment for pattern.
649-659: `add_relays_to_account` should not publish per item in bulk flows.

This helper loops over relays and calls `account.add_relay`, publishing each time. Provide a bulk-add path that avoids per-item publishes (or switch to silent add).

Proposed helper:

```diff
-    async fn add_relays_to_account(&self, account: &mut Account, relays: &[Relay], relay_type: RelayType) -> Result<()> {
-        for relay in relays {
-            account.add_relay(relay, relay_type, self).await?;
-        }
-        Ok(())
-    }
+    async fn add_relays_to_account_silent(&self, account: &mut Account, relays: &[Relay], relay_type: RelayType) -> Result<()> {
+        for relay in relays {
+            let user = account.user(&self.database).await?;
+            user.add_relay(relay, relay_type, &self.database).await?;
+        }
+        Ok(())
+    }
```

Then use the silent version in setup paths and publish once.
🧹 Nitpick comments (44)
src/media/encryption.rs (1)
56-86: Add negative-path tests (wrong key and wrong nonce).

Consider adding tests that ensure decryption fails with an incorrect key and with a mutated nonce. This guards against accidental misuse and regressions.

```diff
 @@ async fn test_decrypt_file() {
 @@
     assert_eq!(decrypted, data);
 }
+
+#[tokio::test]
+async fn test_decrypt_file_wrong_key_fails() {
+    let keys = Keys::generate();
+    let other_keys = Keys::generate();
+    let data = b"test data";
+    let encrypted = encrypt_file(data, &keys.secret_key().to_secret_bytes()).unwrap();
+    let err = decrypt_file(&encrypted.0, &other_keys.secret_key().to_secret_bytes(), &encrypted.1).unwrap_err();
+    assert!(matches!(err, MediaError::Decryption(_)));
+}
+
+#[tokio::test]
+async fn test_decrypt_file_wrong_nonce_fails() {
+    let keys = Keys::generate();
+    let data = b"test data";
+    let (ciphertext, mut nonce) = encrypt_file(data, &keys.secret_key().to_secret_bytes()).unwrap();
+    nonce[0] ^= 0xFF;
+    let err = decrypt_file(&ciphertext, &keys.secret_key().to_secret_bytes(), &nonce).unwrap_err();
+    assert!(matches!(err, MediaError::Decryption(_)));
+}
```

src/media/sanitizer.rs (3)
147-190: Compute alpha and bits-per-pixel from the decoded image, not the container format.

`has_alpha` and `bits_per_pixel` are inferred from the input container, which can be wrong (e.g., WebP can have alpha). Use `img.color()` to derive accurate values.

Apply this diff:

```diff
@@
-    Ok(SafeMediaMetadata {
+    let color = img.color();
+    Ok(SafeMediaMetadata {
@@
-        has_alpha: Some(matches!(input_format, ImageFormat::Png | ImageFormat::Gif)),
-        bits_per_pixel: Some(match input_format {
-            ImageFormat::Png => 32,
-            ImageFormat::Jpeg => 24,
-            ImageFormat::Gif => 8,
-            _ => 24,
-        }),
+        has_alpha: Some(color.has_alpha()),
+        bits_per_pixel: Some((color.bytes_per_pixel() as u8) * 8),
```

Update the WebP metadata assertions accordingly if needed.
208-226: Extract metadata from the sanitized bytes to reflect the final artifact.

Currently metadata is extracted from the original `data`, which may differ from the sanitized output (format/quality). Extract from `sanitized_data` instead.

```diff
 @@ fn sanitize_image_file(data: &[u8], mime_type: &str) -> Result<SanitizedMedia, MediaError> {
-    // Extract metadata
-    let metadata = extract_image_metadata(data, mime_type)?;
+    // Extract metadata from the sanitized data
+    let metadata = extract_image_metadata(&sanitized_data, mime_type)?;
```
271-289: Doc comment mentions mp4parse, but implementation doesn't use it.

The comment says "Videos (extracts metadata using mp4parse)" but the code returns basic metadata only. Update the docs or wire in mp4 parsing.

```diff
-/// - Videos (extracts metadata using mp4parse)
+/// - Videos (basic metadata only; no container parsing yet)
```

src/media/cache.rs (4)
171-223: Test table schema aligns with code; consider adding an index.

The schema includes `account_pubkey`. For performance and uniqueness guarantees at scale, consider an index on `(mls_group_id, file_hash, account_pubkey)`.

```sql
CREATE INDEX IF NOT EXISTS idx_media_files_lookup
ON media_files(mls_group_id, file_hash, account_pubkey);
```
270-281: Add assertion for `account_pubkey` once `MediaFile` exposes it.

After adding `account_pubkey` to `MediaFile`, assert it here to harden the test.

```diff
     let media_file = add_to_cache(
 @@
     .unwrap();
     // Verify database entry
     assert_eq!(media_file.mls_group_id, group.mls_group_id);
+    assert_eq!(media_file.account_pubkey, account_pubkey);
```
308-321: Prefer reading raw bytes in the test to avoid UTF-8 assumptions.

`read_to_string` will fail for binary objects. Since the cache stores arbitrary bytes, compare bytes.

```diff
-    let contents = fs::read_to_string(&expected_path).unwrap();
-    assert_eq!(contents, "test file content");
+    let contents = fs::read(&expected_path).unwrap();
+    assert_eq!(contents, b"test file content");
```
38-47: All `add_to_cache` call sites updated with `account_pubkey`.

Verified that every invocation of `add_to_cache` now includes the new `account_pubkey` parameter:

- src/media/cache.rs:270 (production code)
- src/media/cache.rs:346 (test code)
- src/media/mod.rs:111 (public API surface)

No missing call sites remain—this change won't break existing callers.

Optional improvement: consider defining a newtype (e.g. `struct AccountPubkey(String)`) for `account_pubkey` to improve clarity and type safety as you expand multi-account support; see the sketch below.
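A minimal sketch of what that newtype could look like — the name, validation hook, and accessor are illustrative, not part of this PR:

```rust
// Hypothetical newtype for the account public key stored with cached media.
// This PR currently passes a plain &str; the API below is an assumption.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct AccountPubkey(String);

impl AccountPubkey {
    // Constructor is the single place to add hex/length validation later.
    pub fn new(hex: impl Into<String>) -> Self {
        Self(hex.into())
    }

    // Borrow as &str for sqlx `.bind(...)` at existing call sites.
    pub fn as_str(&self) -> &str {
        &self.0
    }
}
```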
src/whitenoise/key_packages.rs (2)

29-31: Stale comment about a lock — there's no lock being held here.

The comment "Extract key package data while holding the lock" is misleading; this code path doesn't acquire any lock. Please update or remove to prevent confusion during future maintenance.

```diff
-    // Extract key package data while holding the lock
+    // Extract key package data for the account
     let (encoded_key_package, tags) = self.encoded_key_package(account).await?;
```
64-74: Avoid buffering the entire stream when only the first (unique) event is needed.

For a specific `event_id`, the first match suffices. Short-circuiting reduces memory and latency, and makes intent clearer.

```diff
-    let mut key_package_events = Vec::new();
-    while let Some(event) = key_package_stream.next().await {
-        key_package_events.push(event);
-    }
+    let first_event = key_package_stream.next().await;
```

And then below:

```diff
-    if let Some(event) = key_package_events.first() {
+    if let Some(event) = first_event.as_ref() {
         if delete_mls_stored_keys {
             let nostr_mls = Account::create_nostr_mls(account.pubkey, &self.config.data_dir)?;
             let key_package = nostr_mls.parse_key_package(event)?;
             nostr_mls.delete_key_package_from_storage(&key_package)?;
         }
```

src/whitenoise/event_processor/event_handlers/handle_metadata.rs (2)
21-25: Fix incorrect tracing target to reflect this module.

The target references `nostr_manager::fetch_all_user_data`, which doesn't match this handler and makes log filtering harder.

Apply:

```diff
-    tracing::error!(
-        target: "whitenoise::nostr_manager::fetch_all_user_data",
+    tracing::error!(
+        target: "whitenoise::event_handlers::handle_metadata",
         "Failed to parse metadata for user {}: {}",
         event.pubkey, e
     );
```
16-16: Drop redundant `let _ = ...?;` binding.

You're already propagating errors with `?`. The binding to `_` is unnecessary.

```diff
-    let _ = user.save(&self.database).await?;
+    user.save(&self.database).await?;
```

src/whitenoise/database/relays.rs (1)
87-93: Use the existing `From<RelayRow> for Relay` to reduce duplication.

Minor simplification—use the conversion you defined.

```diff
-    Ok(Relay {
-        id: Some(relay_row.id),
-        url: relay_row.url,
-        created_at: relay_row.created_at,
-        updated_at: relay_row.updated_at,
-    })
+    Ok(relay_row.into())
```

src/whitenoise/secrets_store.rs (2)
1-4: Grouped std imports are fine; align `fs` usage for consistency.

You import `fs` but still call `std::fs::...` in a few places. Prefer the imported `fs` to keep it uniform.

Examples to adjust elsewhere (non-exhaustive):

- Replace `std::fs::read_to_string` with `fs::read_to_string`
- Replace `std::fs::create_dir_all` with `fs::create_dir_all`
- Replace `std::fs::write` with `fs::write`
218-222: Doc/behavior mismatch for keyring entry creation errors.

The docstring says this returns an error if Entry creation fails, but the implementation swallows that error by using `if let Ok(entry) = Entry::new(...)`.

Propagate Entry creation errors while still ignoring deletion failures:

```diff
-    } else {
-        let entry = Entry::new(SERVICE_NAME, hex_pubkey.as_str());
-        if let Ok(entry) = entry {
-            let _ = entry.delete_credential();
-        }
-    }
+    } else {
+        let entry = Entry::new(SERVICE_NAME, hex_pubkey.as_str())
+            .map_err(SecretsStoreError::KeyringError)?;
+        let _ = entry.delete_credential(); // keep idempotent semantics
+    }
```

src/whitenoise/error.rs (1)
56-58: Standardize error casing ("MLS").

Minor copy edit for consistency with the rest of the codebase and the MLS acronym.

```diff
-    #[error("Nostr Mls error: {0}")]
+    #[error("Nostr MLS error: {0}")]
```

src/whitenoise/database/group_information.rs (1)
154-171: Avoid manual placeholder construction; use QueryBuilder for safety and clarity.

Building the `IN` clause with string formatting is easy to get wrong. `sqlx::QueryBuilder` produces the right number of placeholders and bindings safely.

Apply this diff:

```diff
-    // Build dynamic query with correct number of placeholders
-    let placeholders = "?,".repeat(id_bytes.len());
-    let placeholders = placeholders.trim_end_matches(',');
-
-    let query = format!(
-        "SELECT id, mls_group_id, group_type, created_at, updated_at
-         FROM group_information
-         WHERE mls_group_id IN ({})",
-        placeholders
-    );
-
-    // Build and execute query with bindings
-    let mut query_builder = sqlx::query_as::<_, GroupInformationRow>(&query);
-    for id_bytes in &id_bytes {
-        query_builder = query_builder.bind(id_bytes);
-    }
-
-    let rows = query_builder.fetch_all(&database.pool).await?;
+    // Build and execute query using QueryBuilder to avoid manual placeholder formatting
+    let mut qb = sqlx::QueryBuilder::new(
+        "SELECT id, mls_group_id, group_type, created_at, updated_at FROM group_information WHERE mls_group_id IN (",
+    );
+    {
+        let mut separated = qb.separated(", ");
+        for bytes in &id_bytes {
+            separated.push_bind(bytes);
+        }
+    }
+    qb.push(")");
+    let rows: Vec<GroupInformationRow> = qb
+        .build_query_as()
+        .fetch_all(&database.pool)
+        .await?;
```

Add this import at the top of the file:

```rust
use sqlx::QueryBuilder;
```

src/whitenoise/event_processor/mod.rs (4)
41-42: Fix doc comment typo in subscription hash formula.

"accouny_pubkey" → "account_pubkey".

```diff
-    /// where hashed_pubkey = SHA256(session salt || accouny_pubkey)[..12]
+    /// where hashed_pubkey = SHA256(session salt || account_pubkey)[..12]
```
56-66: Consider avoiding O(n) DB scan to resolve subscription hashes.

`Account::all(&self.database)` on every event can become a hotspot with many accounts. Cache a hash→pubkey map (refresh on account changes) or index in DB to resolve in O(1) or O(log n).

I can sketch a small in-memory cache keyed by the 12-char hash if helpful; a rough outline follows.
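A minimal sketch of that cache. The `hash_for` closure stands in for the SHA256(session salt || account_pubkey)[..12] computation quoted in the doc comment above; the type name and locking choice are illustrative:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

use nostr_sdk::PublicKey;

// Hypothetical in-memory index from the 12-char subscription hash to the
// account pubkey. Rebuilt whenever accounts are added/removed, so event
// handling resolves accounts in O(1) instead of scanning the accounts table.
pub struct SubscriptionHashCache {
    map: RwLock<HashMap<String, PublicKey>>,
}

impl SubscriptionHashCache {
    pub fn new() -> Self {
        Self { map: RwLock::new(HashMap::new()) }
    }

    // Refresh from the current account list.
    pub fn rebuild(&self, pubkeys: &[PublicKey], hash_for: impl Fn(&PublicKey) -> String) {
        let mut map = self.map.write().expect("cache lock poisoned");
        map.clear();
        for pk in pubkeys {
            map.insert(hash_for(pk), *pk);
        }
    }

    pub fn resolve(&self, hash: &str) -> Option<PublicKey> {
        self.map.read().expect("cache lock poisoned").get(hash).cloned()
    }
}
```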
242-245: Generalize error message; function is used beyond MLS.

`account_from_subscription_id` is called for giftwrap and other flows; the message mentions "MLS" specifically.

```diff
-    let sub_id = subscription_id.ok_or_else(|| {
-        WhitenoiseError::InvalidEvent(
-            "MLS message received without subscription ID".to_string(),
-        )
-    })?;
+    let sub_id = subscription_id.ok_or_else(|| {
+        WhitenoiseError::InvalidEvent("Event received without subscription ID".to_string())
+    })?;
```
257-261: Fix tracing target/context.

Log target references `process_mls_message` while inside `account_from_subscription_id`. Update for accurate filtering.

```diff
-    tracing::debug!(
-        target: "whitenoise::event_processor::process_mls_message",
-        "Processing MLS message for account: {}",
-        target_pubkey.to_hex()
-    );
+    tracing::debug!(
+        target: "whitenoise::event_processor::account_from_subscription_id",
+        "Resolved account for subscription: {}",
+        target_pubkey.to_hex()
+    );
```

src/whitenoise/event_processor/event_handlers/handle_giftwrap.rs (1)
11-15: Fix misleading log message.

The handler does implement MLS welcome processing below; the log currently says "processing not yet implemented".

Apply this diff to correct the message:

```diff
-    tracing::info!(
-        target: "whitenoise::event_handlers::handle_giftwrap",
-        "Giftwrap received for account: {} - processing not yet implemented",
-        account.pubkey.to_hex()
-    );
+    tracing::info!(
+        target: "whitenoise::event_handlers::handle_giftwrap",
+        "Giftwrap received for account: {} - attempting unwrap and dispatch",
+        account.pubkey.to_hex()
+    );
```

src/whitenoise/database/accounts.rs (1)
219-227: Prefer explicit "persisted ID" checks before relational ops.

These queries bind `self.id`/`user.id` (Options) directly; binding NULL may lead to surprising behavior or SQL errors. Returning a clear app error when the entities aren't persisted improves diagnostics.

Apply the pattern below, assuming `WhitenoiseError::{AccountNotPersisted, UserNotPersisted}` exist (adjust to your actual variants):

```diff
 pub(crate) async fn is_following_user(
     &self,
     user: &User,
     database: &Database,
 ) -> Result<bool, WhitenoiseError> {
-    let result = sqlx::query(
+    let account_id = self.id.ok_or(WhitenoiseError::AccountNotPersisted)?;
+    let user_id = user.id.ok_or(WhitenoiseError::UserNotPersisted)?;
+    let result = sqlx::query(
         "SELECT COUNT(*) FROM account_follows WHERE account_id = ? AND user_id = ?",
     )
-    .bind(self.id)
-    .bind(user.id)
+    .bind(account_id)
+    .bind(user_id)
     .fetch_one(&database.pool)
     .await?;
     Ok(result.get::<i64, _>(0) > 0)
 }
```

Make analogous changes in `follow_user` and `unfollow_user`.
follow_userandunfollow_user.Also applies to: 243-258, 275-285
src/whitenoise/event_processor/event_handlers/handle_relay_list.rs (1)
12-16: Avoid cloning `event` if not required.

If `NostrManager::relay_urls_from_event` can accept `&Event`, pass a reference to eliminate the clone.

```diff
-    let relay_urls = NostrManager::relay_urls_from_event(event.clone());
+    let relay_urls = NostrManager::relay_urls_from_event(&event);
```

If the helper currently takes ownership, consider changing its signature to borrow.
src/whitenoise/app_settings.rs (1)
1-2: Imports reorg LGTM; minor ergonomics nit.

- Grouped imports are cleaner and consistent with the refactor.
- Minor: `ThemeMode` is small and trivially copyable—consider deriving `Copy` for convenience.

```diff
-#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, Hash)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Hash)]
 pub enum ThemeMode {
```

Also applies to: 6-6
src/whitenoise/messages.rs (2)
44-49: Avoid variable shadowing and normalize error type path.

- The local binding named `message` shadows the `message: String` parameter, which lowers readability. Rename the binding to something descriptive.
- Elsewhere we use `nostr_mls::Error`; here it's `nostr_mls::error::Error`. Normalize to the former for consistency.

Apply:

```diff
-    let message = nostr_mls
+    let stored_message = nostr_mls
         .get_message(&event_id)?
-        .ok_or(WhitenoiseError::NostrMlsError(
-            nostr_mls::error::Error::MessageNotFound,
-        ))?;
+        .ok_or(WhitenoiseError::NostrMlsError(nostr_mls::Error::MessageNotFound))?;
@@
-    let tokens = self.nostr.parse(&message.content);
+    let tokens = self.nostr.parse(&stored_message.content);
@@
-    Ok(MessageWithTokens::new(message, tokens))
+    Ok(MessageWithTokens::new(stored_message, tokens))
```

Also applies to: 61-64
139-141: Avoid `unwrap()`; use `expect()` for safer invariants.

`ensure_id()` should guarantee presence, but `expect()` gives a clearer backtrace if invariants break.

```diff
-    let event_id = inner_event.id.unwrap(); // This is guaranteed to be Some by ensure_id
+    let event_id = inner_event
+        .id
+        .expect("ensure_id() must set an event id before use");
```

src/whitenoise/relays.rs (2)
57-66: Avoid silent fallback when mapping Kind -> RelayType.

Defaulting unknown `Kind` to `RelayType::Nip65` can hide bugs. At minimum, log a warning; ideally, use `TryFrom<Kind>` and propagate an error for unsupported kinds.

Low-impact mitigation:

```diff
-    _ => RelayType::Nip65, // Default fallback
+    _ => {
+        tracing::warn!(
+            target: "whitenoise::relays",
+            "Unexpected Kind {:?}; defaulting to RelayType::Nip65. Consider handling explicitly.",
+            kind
+        );
+        RelayType::Nip65
+    }
```
- _ => RelayType::Nip65, // Default fallback + _ => { + tracing::warn!( + target: "whitenoise::relays", + "Unexpected Kind {:?}; defaulting to RelayType::Nip65. Consider handling explicitly.", + kind + ); + RelayType::Nip65 + }
115-124: Simplify deduplication.

Use iterator collection to a set directly; avoids a temporary Vec -> Set dance and tightens intent.

```diff
-    let mut unique_relay_urls = HashSet::new();
-    for relay in all_relays {
-        unique_relay_urls.insert(relay.url);
-    }
+    let unique_relay_urls: HashSet<_> = all_relays.into_iter().map(|r| r.url).collect();
```

src/whitenoise/groups.rs (2)
246-249: Avoid cloning `group_relays`.

You can iterate by reference; `find_or_create_relay_by_url` accepts `&RelayUrl`.

```diff
-    for relay_url in group_relays.clone() {
-        let db_relay = self.find_or_create_relay_by_url(&relay_url).await?;
+    for relay_url in &group_relays {
+        let db_relay = self.find_or_create_relay_by_url(relay_url).await?;
         relays.insert(db_relay);
     }
```
314-316: Comment/code mismatch about inbox-relay fallback.

The comment promises a fallback if a contact has no inbox relays, but the code doesn't implement it. Either implement a fallback (e.g., account's NIP-65 relays) or update the comment to reflect actual behavior.

If a fallback is desired, confirm the intended source (creator's NIP-65 relays vs. defaults) and I can draft the patch; a rough sketch of the NIP-65 variant follows.
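A minimal sketch of the NIP-65 fallback, assuming the `relays(RelayType::Inbox, …)` and `nip65_relays(…)` accessors discussed elsewhere in this review; the helper name and bindings are illustrative, and the fallback source is still to be confirmed:

```rust
// Hypothetical helper: resolve where to send a contact's welcome, falling
// back to the creator's NIP-65 relays when the contact has no inbox relays.
async fn welcome_relays_for(
    member: &User,
    creator: &Account,
    whitenoise: &Whitenoise,
) -> Result<Vec<Relay>, WhitenoiseError> {
    let inbox = member.relays(RelayType::Inbox, &whitenoise.database).await?;
    if !inbox.is_empty() {
        return Ok(inbox);
    }
    // Fallback source to confirm: creator NIP-65 vs. default relays.
    creator.nip65_relays(whitenoise).await
}
```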
src/whitenoise/utils.rs (1)
53-57: Hard-coded column name in SQL decode error.

`index: "nip65_relays"` will be misleading if this utility is reused for inbox or key-package columns. Either pass the column name as a parameter or use a neutral label.

Minimal tweak:

```diff
-    .map_err(|e| sqlx::Error::ColumnDecode {
-        index: "nip65_relays".to_owned(),
+    .map_err(|e| sqlx::Error::ColumnDecode {
+        index: "relays".to_owned(),
         source: Box::new(e),
     })
```

Alternatively, introduce `parse_relays_from_sql_with_col(column: &str, relays: String)` and have the current function call it with `"relays"`.

src/whitenoise/follows.rs (2)
73-82: Backfill background fetch for batch follows.

`follow_user` triggers `background_fetch_user_data` when a user record is newly created; `follow_users` currently doesn't. Mirror the behavior to avoid inconsistent state when bulk-following.

```diff
-    for pubkey in pubkeys {
-        let (user, _) = User::find_or_create_by_pubkey(pubkey, &self.database).await?;
-        users.push(user);
-    }
+    for pubkey in pubkeys {
+        let (user, newly_created) = User::find_or_create_by_pubkey(pubkey, &self.database).await?;
+        if newly_created {
+            self.background_fetch_user_data(&user).await?;
+        }
+        users.push(user);
+    }
```

If you prefer, I can batch these fetches concurrently with a bounded buffer to avoid long-tail latency; a sketch follows.
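A minimal sketch of that bounded batching, assuming the `futures` crate is (or can be made) a dependency; the concurrency limit of 8 and the free-function framing are illustrative, while `User::find_or_create_by_pubkey` and `background_fetch_user_data` are the methods from the diff above:

```rust
use futures::stream::{self, StreamExt, TryStreamExt};
use nostr_sdk::PublicKey;

// Hypothetical bounded-concurrency variant of the loop above: at most 8
// find-or-create + background-fetch operations are in flight at once.
// (Whitenoise, User, WhitenoiseError are the types used throughout this PR.)
async fn follow_targets(
    whitenoise: &Whitenoise,
    pubkeys: Vec<PublicKey>,
) -> Result<Vec<User>, WhitenoiseError> {
    stream::iter(pubkeys)
        .map(|pubkey| async move {
            let (user, newly_created) =
                User::find_or_create_by_pubkey(pubkey, &whitenoise.database).await?;
            if newly_created {
                whitenoise.background_fetch_user_data(&user).await?;
            }
            Ok::<_, WhitenoiseError>(user)
        })
        .buffer_unordered(8) // bound on in-flight fetches; completion order may vary
        .try_collect()
        .await
}
```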
39-41: Use the User API directly in `unfollow_user` (and for consistency, in `is_following_user`).

The `find_user_by_pubkey` helper on `Whitenoise` simply delegates to `User::find_by_pubkey`, so calling the model's API directly removes one extra indirection and aligns with our "remove whitenoise methods" goal.

- In `src/whitenoise/follows.rs`, lines 39–41 (`unfollow_user`):

```diff
-    let user = self.find_user_by_pubkey(pubkey).await?;
+    let user = User::find_by_pubkey(pubkey, &self.database).await?;
     account.unfollow_user(&user, &self.database).await?;
     self.background_publish_account_follow_list(account).await?;
```

- In the same file, lines 56–58 (`is_following_user`):

```diff
-    let user = self.find_user_by_pubkey(pubkey).await?;
+    let user = User::find_by_pubkey(pubkey, &self.database).await?;
     account.is_following_user(&user, &self.database).await
```
find_user_by_pubkeyremains available onWhitenoise(and still used by our integration tests), so this is purely an internal, optional cleanup.src/whitenoise/welcomes.rs (2)
84-96: Optimize relay deduplication using HashSet.The current approach sorts and deduplicates the vector which has O(n log n) complexity. Since you're already importing
HashSet, use it directly for O(n) deduplication.- // Collect all relays from all groups into a single vector - for group in &groups { - let relays = nostr_mls.get_relays(&group.mls_group_id)?; - group_relays.extend(relays); - } - - // Remove duplicates by sorting and deduplicating - group_relays.sort(); - group_relays.dedup(); + // Collect all relays from all groups using HashSet for deduplication + let mut relay_set = HashSet::new(); + for group in &groups { + let relays = nostr_mls.get_relays(&group.mls_group_id)?; + relay_set.extend(relays); + } + let group_relays: Vec<_> = relay_set.into_iter().collect();
203-204: Remove or implement the commented sleep.There's a commented
sleep(Duration::from_secs(3))which suggests timing concerns during testing. Either remove it if no longer needed, or uncomment and properly handle the timing requirements.// Give some time for the event processor to process welcome messages - // sleep(Duration::from_secs(3)); + tokio::time::sleep(tokio::time::Duration::from_secs(3)).await;src/whitenoise/mod.rs (1)
208-242: Consider extracting account loading logic.

The account loading and subscription setup logic within `initialize_whitenoise` could be extracted to a separate method for better separation of concerns and testability.

```diff
+    async fn load_and_setup_accounts(whitenoise_ref: &'static Whitenoise) -> Result<()> {
+        let accounts = Account::all(&whitenoise_ref.database).await?;
+        for account in accounts {
+            // Trigger background data fetch for each account (non-critical)
+            if let Err(e) = whitenoise_ref.background_sync_account_data(&account).await {
+                tracing::warn!(
+                    target: "whitenoise::load_accounts",
+                    "Failed to trigger background fetch for account {}: {}",
+                    account.pubkey.to_hex(),
+                    e
+                );
+                // Continue - background fetch failure should not prevent account loading
+            }
+            // Setup subscriptions for this account
+            match whitenoise_ref.setup_subscriptions(&account).await {
+                Ok(()) => {
+                    tracing::debug!(
+                        target: "whitenoise::initialize_whitenoise",
+                        "Successfully set up subscriptions for account: {}",
+                        account.pubkey.to_hex()
+                    );
+                }
+                Err(e) => {
+                    tracing::warn!(
+                        target: "whitenoise::initialize_whitenoise",
+                        "Failed to set up subscriptions for account {}: {}",
+                        account.pubkey.to_hex(),
+                        e
+                    );
+                    // Continue with other accounts instead of failing completely
+                }
+            }
+        }
+        Ok(())
+    }

     // Fetch events and setup subscriptions for all accounts after event processing has started
-    {
-        let accounts = Account::all(&whitenoise_ref.database).await?;
-        for account in accounts {
-            // ... inline per-account fetch + subscription block moved into the helper above ...
-        }
-    }
+    Self::load_and_setup_accounts(whitenoise_ref).await?;
```

src/whitenoise/users.rs (2)
39-52: Syncs only on content change, not recency — clarify behavior.

The implementation updates when metadata differs, assuming `fetch_metadata_from` returns the freshest event. If "newer" semantics are important, either compare event timestamps or adjust the docs to say "if changed."

Would you like me to update the docstring to explicitly say "if the fetched metadata differs from the cached metadata"?
474-485: Minor: test user construction duplicated across tests.

Multiple tests inline identical `User` initializations. Consider a small helper to reduce duplication and keep tests focused on behavior.

I can provide a `fn make_user(pubkey: PublicKey, name: &str) -> User` helper if you want; a sketch follows.
fn make_user(pubkey: PublicKey, name: &str) -> Userhelper if you want.src/whitenoise/accounts.rs (5)
19-38: Good error taxonomy; consider adding a Blossom error variant.

`AccountError` is well-structured with `From` mappings. In `upload_profile_picture` you downcast to `WhitenoiseError::Other(anyhow!(...))`. For parity and type safety, consider adding `BlossomError(#[from] nostr_blossom::Error)` (or the client's error type). This avoids losing specificity and makes handling easier upstream.

I can wire this through `WhitenoiseError` via `From<AccountError>` if desired.
WhitenoiseErrorviaFrom<AccountError>if desired.
276-286: Relay connect: dedup and error handling
- Dedup by URL rather than
Relayto avoid relying onHash/Eqfor the whole struct:
let mut urls: HashSet<RelayUrl> = HashSet::new(); urls.extend(self.nip65_relays(...).await?.into_iter().map(|r| r.url)); ...- Consider propagating errors from
connect()(if available) instead of ignoring the result.I can provide the exact diff once we confirm
nostr.client.connect()’s signature.
384-397: Inefficient account count (loads full rows).

`get_accounts_count` loads all accounts into memory just to count them. Add a `SELECT COUNT(*)` query in `database/accounts.rs` and use that here.

Example:

```diff
-    pub async fn get_accounts_count(&self) -> Result<usize> {
-        let accounts = Account::all(&self.database).await?;
-        Ok(accounts.len())
-    }
+    pub async fn get_accounts_count(&self) -> Result<usize> {
+        let count = crate::whitenoise::database::accounts::count(&self.database).await?;
+        Ok(count as usize)
+    }
```

I can add the `count` helper with the exact SQL if you want; a sketch is below.
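A minimal sketch of that helper, assuming the existing `Database` wrapper with a sqlx `pool` field used elsewhere in this review; the module placement and error plumbing are illustrative:

```rust
// Hypothetical count helper for database/accounts.rs.
pub(crate) async fn count(database: &Database) -> Result<i64, WhitenoiseError> {
    // COUNT(*) avoids materializing every account row just to measure length.
    let count: i64 = sqlx::query_scalar("SELECT COUNT(*) FROM accounts")
        .fetch_one(&database.pool)
        .await?;
    Ok(count)
}
```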
813-882: Redundant dedup in subscriptions (non-blocking).

`nostr_mls.get_relays()` already returns a deduplicated set (`BTreeSet`). You additionally dedup with a `HashSet` and then collect again. It's harmless but could be simplified by directly iterating the set. This keeps the method leaner without changing behavior.
11-18: Import style inconsistency for RelayType across files.

Here you import `RelayType` from the crate root (`use crate::RelayType;`), while `users.rs` imports it via `crate::whitenoise::relays::{Relay, RelayType}`. Prefer one style for consistency across the module.
RelayTypefrom the crate root (use crate::RelayType;), whileusers.rsimports it viacrate::whitenoise::relays::{Relay, RelayType}. Prefer one style for consistency across the module.I can submit a follow-up patch normalizing imports.
jgmontoya
left a comment
LGTM! I think CodeRabbit comments make sense though
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/whitenoise/accounts.rs (3)
669-693: Spawned task drops errors; log failures inside the task.

Using `?` inside `tokio::spawn` returns an `Err` that gets dropped since the `JoinHandle` isn't awaited. This causes silent failures for metadata publishes.

Apply this diff to log errors instead of dropping them:

```diff
-    tokio::spawn(async move {
-        tracing::debug!(target: "whitenoise::accounts::background_publish_user_metadata", "Background task: Publishing metadata for account: {:?}", account_clone.pubkey);
-
-        nostr
-            .publish_metadata_with_signer(&user.metadata, &relays, signer)
-            .await?;
-
-        tracing::debug!(target: "whitenoise::accounts::background_publish_user_metadata", "Successfully published metadata for account: {:?}", account_clone.pubkey);
-        Ok::<(), WhitenoiseError>(())
-    });
+    tokio::spawn(async move {
+        tracing::debug!(
+            target: "whitenoise::accounts::background_publish_user_metadata",
+            "Background task: Publishing metadata for account: {:?}",
+            account_clone.pubkey
+        );
+        if let Err(e) = nostr
+            .publish_metadata_with_signer(&user.metadata, &relays, signer)
+            .await
+        {
+            tracing::error!(
+                target: "whitenoise::accounts::background_publish_user_metadata",
+                "Failed to publish metadata for account {}: {}",
+                account_clone.pubkey.to_hex(),
+                e
+            );
+        } else {
+            tracing::debug!(
+                target: "whitenoise::accounts::background_publish_user_metadata",
+                "Successfully published metadata for account: {:?}",
+                account_clone.pubkey
+            );
+        }
+    });
```
694-722: Same error-dropping issue for relay list publication.

Mirror the fix from metadata publishing to avoid silent failures when publishing relay lists.

Apply this diff:

```diff
     tokio::spawn(async move {
         tracing::debug!(target: "whitenoise::accounts::background_publish_account_relay_list", "Background task: Publishing relay list for account: {:?}", account_clone.pubkey);
-
-        nostr
-            .publish_relay_list_with_signer(&relays, relay_type, &target_relays, keys)
-            .await?;
-
-        tracing::debug!(target: "whitenoise::accounts::background_publish_account_relay_list", "Successfully published relay list for account: {:?}", account_clone.pubkey);
-        Ok::<(), WhitenoiseError>(())
-    });
+        if let Err(e) = nostr
+            .publish_relay_list_with_signer(&relays, relay_type, &target_relays, keys)
+            .await
+        {
+            tracing::error!(
+                target: "whitenoise::accounts::background_publish_account_relay_list",
+                "Failed to publish relay list for account {}: {}",
+                account_clone.pubkey.to_hex(),
+                e
+            );
+        } else {
+            tracing::debug!(
+                target: "whitenoise::accounts::background_publish_account_relay_list",
+                "Successfully published relay list for account: {:?}",
+                account_clone.pubkey
+            );
+        }
+    });
```
724-748: Same error-dropping issue for follow list publication.

Ensure follow-list publish failures are logged rather than silently dropped.

Apply this diff:

```diff
     tokio::spawn(async move {
         tracing::debug!(target: "whitenoise::accounts::background_publish_account_follow_list", "Background task: Publishing follow list for account: {:?}", account_clone.pubkey);
-
-        nostr
-            .publish_follow_list_with_signer(&follows_pubkeys, &relays, keys)
-            .await?;
-
-        tracing::debug!(target: "whitenoise::accounts::background_publish_account_follow_list", "Successfully published follow list for account: {:?}", account_clone.pubkey);
-        Ok::<(), WhitenoiseError>(())
-    });
+        if let Err(e) = nostr
+            .publish_follow_list_with_signer(&follows_pubkeys, &relays, keys)
+            .await
+        {
+            tracing::error!(
+                target: "whitenoise::accounts::background_publish_account_follow_list",
+                "Failed to publish follow list for account {}: {}",
+                account_clone.pubkey.to_hex(),
+                e
+            );
+        } else {
+            tracing::debug!(
+                target: "whitenoise::accounts::background_publish_account_follow_list",
+                "Successfully published follow list for account: {:?}",
+                account_clone.pubkey
+            );
+        }
+    });
```
🧹 Nitpick comments (6)
src/whitenoise/accounts.rs (6)
50-70: Avoid unwrap on persisted User.id; propagate a concrete error instead.

`user.id.unwrap()` can panic if an invariant is ever violated. Prefer explicit error propagation to preserve failure context and keep panic-free code paths.

Apply this diff:

```diff
-    let account = Account {
-        id: None,
-        user_id: user.id.unwrap(),
+    let user_id = user.id.ok_or(WhitenoiseError::UserNotPersisted)?;
+    let account = Account {
+        id: None,
+        user_id,
         pubkey: keys.public_key(),
         last_synced_at: None,
         created_at: Utc::now(),
         updated_at: Utc::now(),
     };
```
118-148: Reduce redundant relay-list publishes when (bulk) adding/removing relays.

Every add/remove triggers `background_publish_account_relay_list`, and setup flows later publish the whole list again. During initial setup this can spawn N+1 publishes per relay type and cause avoidable network churn.

- Keep `Account::add_relay`/`remove_relay` as-is for external API.
- In internal bulk paths (e.g., `add_relays_to_account`), write directly via `User::add_relay` and publish once afterwards (which you already do in setup flows).

Apply this diff to make `add_relays_to_account` bulk-write without auto-publish side effects:

```diff
     async fn add_relays_to_account(
         &self,
         account: &mut Account,
         relays: &[Relay],
         relay_type: RelayType,
     ) -> Result<()> {
-        for relay in relays {
-            account.add_relay(relay, relay_type, self).await?;
-        }
+        // Bulk insert without triggering per-relay publishes; callers publish once afterwards.
+        let user = account.user(&self.database).await?;
+        for relay in relays {
+            user.add_relay(relay, relay_type, &self.database).await?;
+        }
         Ok(())
     }
```

This keeps the public API ergonomics while avoiding redundant background tasks during setup/migrations.
add_relays_to_accountbulk-write without auto-publish side effects:@@ async fn add_relays_to_account( &self, account: &mut Account, relays: &[Relay], relay_type: RelayType, ) -> Result<()> { - for relay in relays { - account.add_relay(relay, relay_type, self).await?; - } + // Bulk insert without triggering per-relay publishes; callers publish once afterwards. + let user = account.user(&self.database).await?; + for relay in relays { + user.add_relay(relay, relay_type, &self.database).await?; + } Ok(()) }This keeps the public API ergonomics while avoiding redundant background tasks during setup/migrations.
Also applies to: 150-178, 644-654
217-249: Prefer concrete error variants overOther(anyhow!(...))for Blossom uploadType-erasing Blossom upload errors into
WhitenoiseError::Other(anyhow::Error)loses actionable detail. Consider adding a dedicated error variant (e.g.,WhitenoiseError::Blossom(#[from] nostr_blossom::Error)) or mapping to an existing specific variant to preserve context and enable targeted handling/telemetry.If you want, I can open a follow-up to add a
Blossomerror variant inWhitenoiseErrorand update call sites.
379-392: Avoid loading all rows just to count accounts
get_accounts_count()currently loads all accounts and takeslen(). For large datasets this is wasteful.Prefer
SELECT COUNT(*) FROM accountsvia a lightweight DB call (e.g.,sqlx::query_scalar::<_, i64>("SELECT COUNT(*) FROM accounts")). If you’d like, I can add aDatabase::count_accounts()helper and wire this up.
355-377: Docstring mismatch for logout parameter; update to reflectpubkeyThe docs mention “the account to log out,” but the method takes a
&PublicKey. Also the note about “in-memory accounts list” isn’t reflected in this method.Apply this doc-only diff:
@@ - /// Logs out the user associated with the given account. + /// Logs out the user associated with the given public key. @@ - /// * `account` - The account to log out. + /// * `pubkey` - Public key of the account to log out.If there’s still an in-memory list elsewhere, consider mentioning where it’s updated (or clarify that it’s managed lazily on next startup).
808-876: Minor: reuse already-fetched user relays to avoid duplicate DB hitsYou already fetch
user_relaysat Line 854 for logging; pass it to the subscription setup instead of re-awaiting.@@ - let user_relays = account.nip65_relays(self).await?; + let user_relays = account.nip65_relays(self).await?; @@ - self.nostr - .setup_account_subscriptions_with_signer( - account.pubkey, - &account.nip65_relays(self).await?, - &account.inbox_relays(self).await?, - &group_relays_vec, - &nostr_group_ids, - keys, - ) + self.nostr + .setup_account_subscriptions_with_signer( + account.pubkey, + &user_relays, + &account.inbox_relays(self).await?, + &group_relays_vec, + &nostr_group_ids, + keys, + ) .await?;
📜 Review details
📒 Files selected for processing (2)
- src/whitenoise/accounts.rs (8 hunks)
- src/whitenoise/users.rs (8 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/whitenoise/users.rs
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/relays.rs:260-278
Timestamp: 2025-08-11T10:23:59.406Z
Learning: In `src/whitenoise/accounts/relays.rs`, the `publish_relay_list_for_account` method is only called after the account setup is complete, which guarantees that `account.nip65_relays` is already populated and will never be empty. Therefore, no fallback logic is needed when `target_relays` is None.
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/welcomes.rs:53-53
Timestamp: 2025-08-17T19:35:53.509Z
Learning: In the Whitenoise codebase, Account::create_nostr_mls should have its errors mapped to WhitenoiseError::NostrMlsError instead of using anyhow::anyhow!() wrapping, as this preserves type safety and specific error information.
📚 Learning: 2025-08-17T19:34:30.290Z
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/messages.rs:193-195
Timestamp: 2025-08-17T19:34:30.290Z
Learning: In the whitenoise codebase, `Account::create_nostr_mls()` returns `Result<NostrMls<NostrMlsSqliteStorage>, AccountError>`, and `WhitenoiseError` has an `Account(#[from] AccountError)` variant that automatically converts. When handling MLS creation errors, use the `?` operator to let `AccountError` convert naturally to `WhitenoiseError::Account` rather than manually mapping to `WhitenoiseError::Configuration`.
Applied to files:
src/whitenoise/accounts.rs
📚 Learning: 2025-08-17T19:35:53.509Z
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/welcomes.rs:53-53
Timestamp: 2025-08-17T19:35:53.509Z
Learning: In the Whitenoise codebase, Account::create_nostr_mls should have its errors mapped to WhitenoiseError::NostrMlsError instead of using anyhow::anyhow!() wrapping, as this preserves type safety and specific error information.
Applied to files:
src/whitenoise/accounts.rs
📚 Learning: 2025-08-11T10:23:59.406Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/relays.rs:260-278
Timestamp: 2025-08-11T10:23:59.406Z
Learning: In `src/whitenoise/accounts/relays.rs`, the `publish_relay_list_for_account` method is only called after the account setup is complete, which guarantees that `account.nip65_relays` is already populated and will never be empty. Therefore, no fallback logic is needed when `target_relays` is None.
Applied to files:
src/whitenoise/accounts.rs
🧬 Code graph analysis (1)
src/whitenoise/accounts.rs (4)
- src/whitenoise/database/users.rs (3): `relays` (157-189), `from` (66-74), `find_by_pubkey` (127-141)
- src/whitenoise/database/relays.rs (1): `from` (49-56)
- src/whitenoise/relays.rs (3): `from` (38-44), `from` (48-54), `from` (58-65)
- src/whitenoise/database/accounts.rs (3): `user` (152-159), `all` (92-102), `find_by_pubkey` (118-137)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: check (ubuntu-latest, native)
🔇 Additional comments (6)
src/whitenoise/accounts.rs (6)
85-93: Relays retrieval via user linkage looks good.

Clean separation: account resolves its user, then defers to user->relays by type. Straightforward, minimal DB round trips.
194-215: Metadata update pipeline is solid.

Local persistence followed by background publish is the right balance between UX and reliability. Tracepoints are helpful.
250-257: MLS storage initialization helper is concise and focused.

Solid helper that isolates path layout and storage creation; errors flow via `AccountError` as expected.
259-269: Group-id loading helper is correct and returns stable hex IDs.

Good use of `create_nostr_mls` and hex encoding for stable identifiers.
271-281: Explicitly handle errors from Client::connect().

The call to `whitenoise.nostr.client.connect().await;` silently drops any potential failure. If `connect()` returns a `Result`, you should either propagate the error or handle it explicitly—otherwise failed connections will go unnoticed at runtime.

Suggested change:

```diff
-whitenoise.nostr.client.connect().await;
+// Propagate any error from connect()
+whitenoise.nostr.client.connect().await?;
```

- Location: src/whitenoise/accounts.rs:279
- Please verify the signature of `nostr_sdk::Client::connect()` (in the upstream crate) to confirm whether it returns a `Result` or `()`, and update this call accordingly.
766-779: No DB type mismatch—`last_synced_at` is stored as INTEGER and consistently handled as epoch-millis.

All evidence in the code shows that we persist and load `last_synced_at` as an integer number of milliseconds since the epoch:

- In the SQLite schema (used for in-process or test databases), the table is created as (src/whitenoise/database/accounts.rs):

```sql
CREATE TABLE accounts (
    …,
    last_synced_at INTEGER,
    created_at INTEGER NOT NULL,
    updated_at INTEGER NOT NULL
)
```

- The Rust `AccountRow` struct declares

```rust
pub last_synced_at: Option<DateTime<Utc>>,
```

  and its loader uses

```rust
let last_synced_at = match row.try_get::<Option<i64>, _>("last_synced_at")? {
    Some(_) => Some(parse_timestamp(row, "last_synced_at")?),
    None => None,
};
```

  where `parse_timestamp` converts the stored `i64` (millis) into a `DateTime<Utc>`.

- All `INSERT`/`UPDATE` calls—including the snippet in question—bind `ts.timestamp_millis()` (an `i64`) against that same INTEGER column.

Because the column is defined as INTEGER and the code uniformly writes and reads epoch-millis, there is no mismatch here.
Likely an incorrect or invalid review comment.