Feat/add group information table #319
Conversation
…upInformation struct
Create group_information record based on it
Walkthrough
Adds persistent group metadata via a new group_information table.
Sequence Diagram(s)
sequenceDiagram
autonumber
participant C as Caller / Test
participant W as Whitenoise
participant MLS as nostr_mls
participant DB as group_information DB
C->>W: create_group(creator, members, admins, config, group_type?)
W->>MLS: create_mls_group(creator, members, admins, config)
MLS-->>W: mls_group_id
W->>W: determine group_type (explicit or infer by size)
W->>DB: find_or_create_by_mls_group_id(mls_group_id, group_type)
DB-->>W: GroupInformation
W-->>C: Group (mls_group_id, persisted metadata)
sequenceDiagram
autonumber
participant S as Onboarding Trigger
participant W as Whitenoise
participant U as User
participant NR as Network Relays
participant DB as Stored Relays
S->>W: new user created / onboarding
W->>U: update_relay_lists()
U->>DB: get_query_relays() or Relay::defaults()
U->>NR: fetch_user_relays(type)
NR-->>U: network relays
U->>DB: sync_relays_for_type(remove/add)
U->>W: update_metadata() (fetch & persist)
W-->>S: onboarding complete (relays/metadata updated)
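As a rough caller-side illustration of the first diagram, the flow might look like the sketch below. This is a sketch only: the parameter order follows the diagram, and the variable names (whitenoise, creator_account, member_pubkeys, admin_pubkeys, config) and exact signatures are assumptions rather than code from this PR.

```rust
// Sketch: create a group, letting Whitenoise infer the type from the member count,
// then read back the persisted metadata. Exact signatures may differ from the PR.
let group = whitenoise
    .create_group(
        &creator_account,       // creator
        member_pubkeys.clone(), // members
        admin_pubkeys.clone(),  // admins
        config,                 // NostrGroupConfigData
        None,                   // group_type: Option<GroupType>; None => infer by participant count
    )
    .await?;

// Metadata is persisted via find_or_create_by_mls_group_id during creation,
// so a lookup by the MLS group id should return the inferred (or explicit) type.
let info = GroupInformation::get_by_mls_group_id(&whitenoise, &group.mls_group_id).await?;
assert_eq!(info.group_type, GroupType::Group); // a 2-participant group would be DirectMessage
```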
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
Poem
Actionable comments posted: 0
🧹 Nitpick comments (6)
src/whitenoise/relays.rs (1)
77-92: Log parse failures for default relay URLs (and consider minor ergonomic tweaks).
Current logic silently drops any malformed URL. Given these are static defaults, a warn-level log helps surface accidental typos. Also, iterating explicitly makes adding logging straightforward.
Apply this diff to add logging while preserving behavior:
```diff
-    pub(crate) fn defaults() -> Vec<Relay> {
-        let urls: &[&str] = if cfg!(debug_assertions) {
-            &["ws://localhost:8080", "ws://localhost:7777"]
-        } else {
-            &[
-                "wss://relay.damus.io",
-                "wss://relay.primal.net",
-                "wss://nos.lol",
-            ]
-        };
-
-        urls.iter()
-            .filter_map(|&url_str| RelayUrl::parse(url_str).ok())
-            .map(|url| Relay::new(&url))
-            .collect()
-    }
+    pub(crate) fn defaults() -> Vec<Relay> {
+        let urls: &[&str] = if cfg!(debug_assertions) {
+            &["ws://localhost:8080", "ws://localhost:7777"]
+        } else {
+            &[
+                "wss://relay.damus.io",
+                "wss://relay.primal.net",
+                "wss://nos.lol",
+            ]
+        };
+
+        let mut relays = Vec::with_capacity(urls.len());
+        for url_str in urls {
+            match RelayUrl::parse(url_str) {
+                Ok(url) => relays.push(Relay::new(&url)),
+                Err(e) => {
+                    tracing::warn!(
+                        target: "whitenoise::relays::defaults",
+                        "Skipping invalid default relay URL {}: {}",
+                        url_str,
+                        e
+                    );
+                }
+            }
+        }
+        relays
+    }
```
src/whitenoise/accounts/core.rs (2)
348-350: Optionally parallelize find_or_create to speed up startup.
The default list is small, so this is non-critical, but you can resolve all insert/finds concurrently to reduce latency.
```diff
-        for Relay { url, .. } in Relay::defaults() {
-            let relay = self.find_or_create_relay(&url).await?;
-            default_relays.push(relay);
-        }
-        Ok(default_relays)
+        let futs = Relay::defaults()
+            .into_iter()
+            .map(|Relay { url, .. }| self.find_or_create_relay(&url));
+        let default_relays = futures::future::try_join_all(futs).await?;
+        Ok(default_relays)
```
Note: This uses futures::future::try_join_all; ensure the futures crate is available (it commonly is). If not, keep the current sequential approach.
893-917: Outdated comment: no DashSet involved here.
The comment mentions converting a DashSet, but Relay::defaults() already returns a Vec. Update the wording to avoid confusion.
```diff
-        // Verify that all relay sets contain the same default relays
-        // Convert DashSet to Vec to avoid iterator type issues
+        // Verify that all relay sets contain the same default relay URLs
         let default_relays_vec: Vec<RelayUrl> =
             default_relays.iter().map(|r| r.url.clone()).collect();
```
src/whitenoise/follows.rs (1)
40-41: Don’t block follow on relay-list update failures for newly created users.
Failing update_relay_lists now aborts follow_user, which can degrade UX on transient errors. Consider best-effort behavior: log and continue to establish the follow.
```diff
-            user.update_relay_lists(self).await?;
+            if let Err(e) = user.update_relay_lists(self).await {
+                tracing::warn!(
+                    target: "whitenoise::follows::follow_user",
+                    "Failed to update relay lists for newly created user {}: {}",
+                    pubkey.to_hex(),
+                    e
+                );
+            }
```
Follow-up: For consistency, you might want to add the same best-effort relay-list update in follow_users when a user is newly created. I can draft that change if you’d like.
src/whitenoise/accounts/groups.rs (1)
49-61: Consider extracting the relay list update to a separate concern.
The relay list update logic (lines 50-61) adds complexity to the group creation flow. While the error handling is graceful (logging warnings without aborting), this side effect makes the function harder to test and reason about. Consider extracting this to a separate post-creation hook or making it configurable.
src/whitenoise/database/group_information.rs (1)
150-159: Consider SQL injection safety for the dynamic query.
While the current implementation uses parameterized bindings (which is good), building dynamic SQL with string formatting could be a maintenance risk. The placeholders are correctly generated and the values are properly bound, but consider using a query builder or a more robust approach for constructing IN clauses.
Consider using a more robust approach for building dynamic IN queries:
```diff
-    // Build dynamic query with correct number of placeholders
-    let placeholders = "?,".repeat(id_bytes.len());
-    let placeholders = placeholders.trim_end_matches(',');
-
-    let query = format!(
-        "SELECT id, mls_group_id, group_type, created_at, updated_at
-         FROM group_information
-         WHERE mls_group_id IN ({})",
-        placeholders
-    );
+    // Build dynamic query with correct number of placeholders
+    let placeholders: Vec<&str> = vec!["?"; id_bytes.len()];
+    let placeholders_str = placeholders.join(",");
+
+    let query = format!(
+        "SELECT id, mls_group_id, group_type, created_at, updated_at
+         FROM group_information
+         WHERE mls_group_id IN ({})",
+        placeholders_str
+    );
```
This approach is slightly cleaner and avoids the trim operation.
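For completeness, here is a hedged sketch of how the dynamically built IN query might then be executed with per-value binds. The names pool, id_bytes, and GroupInformationRow are assumed from the surrounding module and are not guaranteed to match the PR's exact code.

```rust
// Sketch only: every value goes through a bind, never into the SQL string itself.
let placeholders = vec!["?"; id_bytes.len()].join(",");
let query = format!(
    "SELECT id, mls_group_id, group_type, created_at, updated_at \
     FROM group_information WHERE mls_group_id IN ({placeholders})"
);

let mut stmt = sqlx::query_as::<_, GroupInformationRow>(&query);
for id in &id_bytes {
    stmt = stmt.bind(id.as_slice()); // parameterized, so the IN list stays injection-safe
}
let rows = stmt.fetch_all(&pool).await?;
```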
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (15)
- db_migrations/0010_add_group_information_table.sql (1 hunks)
- src/bin/integration_test.rs (1 hunks)
- src/lib.rs (1 hunks)
- src/media/cache.rs (0 hunks)
- src/whitenoise/accounts/core.rs (3 hunks)
- src/whitenoise/accounts/groups.rs (12 hunks)
- src/whitenoise/accounts/welcomes.rs (1 hunks)
- src/whitenoise/database/group_information.rs (1 hunks)
- src/whitenoise/database/mod.rs (1 hunks)
- src/whitenoise/event_processor/event_handlers/handle_mls_message.rs (2 hunks)
- src/whitenoise/follows.rs (1 hunks)
- src/whitenoise/group_information.rs (1 hunks)
- src/whitenoise/mod.rs (5 hunks)
- src/whitenoise/relays.rs (1 hunks)
- src/whitenoise/users.rs (3 hunks)
💤 Files with no reviewable changes (1)
- src/media/cache.rs
🧰 Additional context used
📓 Path-based instructions (1)
src/bin/integration_test.rs
📄 CodeRabbit Inference Engine (.cursor/rules/integration-test.mdc)
The integration test in src/bin/integration_test.rs should ALWAYS be run with the just int-test command and not run on its own with different log and data directories.
Files:
src/bin/integration_test.rs
🧠 Learnings (4)
📚 Learning: 2025-08-18T07:22:47.078Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/messages.rs:83-91
Timestamp: 2025-08-18T07:22:47.078Z
Learning: In the Whitenoise codebase, `nostr_mls.get_relays()` returns a `std::collections::BTreeSet<RelayUrl>`, which means relay URLs are already deduplicated and don't need additional deduplication logic.
Applied to files:
src/whitenoise/relays.rs, src/whitenoise/mod.rs, src/whitenoise/accounts/core.rs
📚 Learning: 2025-08-11T10:23:59.406Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/relays.rs:260-278
Timestamp: 2025-08-11T10:23:59.406Z
Learning: In `src/whitenoise/accounts/relays.rs`, the `publish_relay_list_for_account` method is only called after the account setup is complete, which guarantees that `account.nip65_relays` is already populated and will never be empty. Therefore, no fallback logic is needed when `target_relays` is None.
Applied to files:
src/whitenoise/mod.rs, src/whitenoise/accounts/core.rs
📚 Learning: 2025-08-11T10:25:55.436Z
Learnt from: jgmontoya
PR: parres-hq/whitenoise#310
File: src/whitenoise/accounts/mod.rs:669-692
Timestamp: 2025-08-11T10:25:55.436Z
Learning: In the Whitenoise codebase, the `NostrManager::publish_event_builder_with_signer` method automatically ensures relay connectivity by calling `self.add_relays()` before publishing events. This means explicit relay connectivity checks are not needed before calling publishing methods that use this function.
Applied to files:
src/whitenoise/mod.rs
📚 Learning: 2025-08-17T19:34:30.290Z
Learnt from: jmcorgan
PR: parres-hq/whitenoise#318
File: src/whitenoise/accounts/messages.rs:193-195
Timestamp: 2025-08-17T19:34:30.290Z
Learning: In the whitenoise codebase, `Account::create_nostr_mls()` returns `Result<NostrMls<NostrMlsSqliteStorage>, AccountError>`, and `WhitenoiseError` has an `Account(#[from] AccountError)` variant that automatically converts. When handling MLS creation errors, use the `?` operator to let `AccountError` convert naturally to `WhitenoiseError::Account` rather than manually mapping to `WhitenoiseError::Configuration`.
Applied to files:
src/whitenoise/accounts/groups.rs
🧬 Code Graph Analysis (7)
src/whitenoise/relays.rs (2)
- src/whitenoise/utils.rs (1)
  - urls (103-105)
- src/whitenoise/mod.rs (1)
  - new (45-59)
src/whitenoise/follows.rs (1)
- src/whitenoise/database/accounts.rs (1)
  - user (152-159)
src/whitenoise/mod.rs (2)
- src/whitenoise/database/users.rs (1)
  - relays (153-185)
- src/whitenoise/relays.rs (1)
  - defaults (77-92)
src/whitenoise/users.rs (3)
- src/whitenoise/relays.rs (2)
  - defaults (77-92)
  - new (68-75)
- src/whitenoise/mod.rs (2)
  - create_mock_whitenoise (467-523)
  - new (45-59)
- src/whitenoise/database/users.rs (1)
  - relays (153-185)
src/whitenoise/accounts/core.rs (1)
- src/whitenoise/relays.rs (1)
  - defaults (77-92)
src/whitenoise/group_information.rs (2)
- src/whitenoise/mod.rs (2)
  - new (45-59)
  - create_mock_whitenoise (467-523)
- src/whitenoise/database/group_information.rs (2)
  - find_or_create_by_mls_group_id (108-123)
  - find_by_mls_group_ids (140-172)
src/whitenoise/accounts/groups.rs (3)
- src/whitenoise/database/users.rs (1)
  - find_or_create_by_pubkey (88-107)
- src/whitenoise/group_information.rs (2)
  - create_for_group (69-84)
  - get_by_mls_group_id (87-98)
- src/whitenoise/mod.rs (1)
  - create_nostr_group_config_data (620-628)
🔇 Additional comments (29)
src/whitenoise/database/mod.rs (1)
15-15: Module exposure LGTM.
Adding pub mod group_information; cleanly wires the new DB layer. The migration runner (MIGRATOR) will pick up the new migration as usual.
src/lib.rs (1)
59-59: Potential breaking change: GroupType no longer re-exported.
Dropping GroupType from the public surface may break downstream users. If intentional, consider calling it out in the release notes. If accidental, or if you want to preserve API stability, re-export the new GroupType from your domain layer.
You can re-expose the Whitenoise GroupType alongside other re-exports:
```rust
// Add near other pub use lines
pub use crate::whitenoise::group_information::GroupType;
```
src/whitenoise/event_processor/event_handlers/handle_mls_message.rs (2)
62-62: LGTM! Test correctly updated for the new create_group signature.
The test correctly passes None for the new fifth parameter (group_type: Option<GroupType>), which maintains the default behavior while adapting to the extended API.
112-112: LGTM! Test correctly updated for the new create_group signature.
The test correctly passes None for the new fifth parameter (group_type: Option<GroupType>), maintaining consistency with the first test case.
src/bin/integration_test.rs (1)
282-282: LGTM! Integration test correctly updated for the new create_group signature.
The integration test correctly passes None for the new fifth parameter (group_type: Option<GroupType>), which maintains the default behavior while adapting to the extended API signature that supports group typing functionality.
db_migrations/0010_add_group_information_table.sql (1)
1-8: Well-designed migration for group information metadata.
The migration creates a clean table structure for persisting group metadata with appropriate constraints:
- Uses BLOB NOT NULL UNIQUE for mls_group_id, which correctly enforces a one-to-one relationship with MLS groups
- Defaults group_type to 'group', which provides sensible fallback behavior
- Includes standard audit columns (created_at, updated_at) with proper defaults
The foreign key relationship is clearly documented in the comment, and the table structure aligns well with the GroupInformation domain model.
src/whitenoise/mod.rs (5)
14-14: LGTM! New group_information module properly exposed.
The module is correctly added to expose the GroupInformation and GroupType functionality to the public API.
29-29: LGTM! Relay imports correctly added.
The wildcard import enables access to Relay-related abstractions needed for the updated relay handling logic.
197-199: LGTM! Default relay initialization properly updated.
The code correctly switches from Account::default_relays() to Relay::defaults() and properly extracts the URL from the Relay struct for database operations. This aligns with the new relay abstraction pattern.
207-209: LGTM! Nostr client relay addition properly updated.
The code correctly uses Relay::defaults() and extracts the URL for the Nostr client operations, maintaining consistency with the new relay handling approach.
498-500: LGTM! Test utilities properly updated for the new relay pattern.
The test utilities correctly use Relay::defaults() and extract URLs into a Vec for the nostr client, maintaining consistency with the new relay abstraction while preserving the test functionality.
src/whitenoise/accounts/welcomes.rs (1)
276-276: LGTM! Test correctly updated for the new create_group signature.
The test correctly passes None for the new fifth parameter (group_type: Option<GroupType>), maintaining the default behavior while adapting to the extended API that supports group typing functionality.
src/whitenoise/accounts/groups.rs (3)
150-158: Good implementation of group type persistence.
The group information persistence is well-structured. The participant count calculation correctly includes the creator, and the error handling is appropriate, with the ? operator propagating database errors properly.
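As a minimal illustration of the counting and error propagation described above (the names and call shape here are assumptions, not the PR's exact code):

```rust
// Sketch: the creator is not part of member_pubkeys, so it is counted explicitly.
let participant_count = member_pubkeys.len() + 1;

// The ? operator lets any database error bubble up to the caller as a WhitenoiseError.
let group_info = GroupInformation::create_for_group(
    whitenoise,
    &mls_group_id,
    group_type,        // Option<GroupType>: explicit type, or None to infer from size
    participant_count,
)
.await?;
```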
485-492: Good test coverage for the DirectMessage group type.
The test case properly validates the DirectMessage group type inference for 2-participant groups. The assertion pattern is consistent with other test cases and properly checks both the group creation and the persisted group information.
613-641: Well-structured test case for DirectMessage groups.
The test properly validates DirectMessage group creation and type persistence. The test follows the established pattern and includes appropriate assertions for the group type.
src/whitenoise/users.rs (5)
39-63: Comprehensive relay list update implementation with good error handling.
The update_relay_lists method is well-structured, with appropriate logging and graceful error handling. The pattern of updating NIP-65 relays first, then using those (or falling back) for secondary relay types is logical. The method continues processing even when individual relay type updates fail, which is a good resilience pattern.
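A rough sketch of that orchestration pattern follows; the secondary relay-type variant names (Inbox, KeyPackage) and the method signatures are assumptions based on this review, not verified code:

```rust
// Sketch: refresh NIP-65 first, then the remaining relay lists; per-type failures
// are logged and skipped so one bad fetch does not abort the whole update.
if let Err(e) = self.sync_relays_for_type(whitenoise, RelayType::Nip65).await {
    tracing::warn!("Failed to update NIP-65 relays for {}: {}", self.pubkey, e);
}
for relay_type in [RelayType::Inbox, RelayType::KeyPackage] {
    if let Err(e) = self.sync_relays_for_type(whitenoise, relay_type).await {
        tracing::warn!("Failed to update {:?} relays for {}: {}", relay_type, self.pubkey, e);
    }
}
```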
65-78: Appropriate fallback to default relays.
The get_query_relays method correctly falls back to default relays when no NIP-65 relays are stored. This ensures that relay operations can proceed even for new users without configured relays.
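The fallback could look roughly like this; a sketch under the assumption that relays() and Relay::defaults() behave as described elsewhere in this review:

```rust
// Sketch: prefer the user's stored NIP-65 relays, otherwise fall back to defaults
// so that new users still have somewhere to send queries.
async fn get_query_relays(&self, whitenoise: &Whitenoise) -> Result<Vec<Relay>> {
    let stored = self.relays(RelayType::Nip65, &whitenoise.database).await?;
    if stored.is_empty() {
        Ok(Relay::defaults())
    } else {
        Ok(stored)
    }
}
```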
148-234: Robust relay synchronization with HashSet comparison.
The sync_relays_for_type method efficiently compares stored and network relays using HashSet operations (a sketch follows after this comment). The implementation correctly:
- Fetches network relay state
- Fetches network relay state
- Compares with stored state using HashSet equality
- Only applies changes when necessary
- Handles individual relay add/remove failures gracefully
The use of HashSet for URL comparison is appropriate and efficient.
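The comparison step could be sketched like this, assuming the stored relays and the network relay URLs have already been fetched; the names are illustrative only:

```rust
use std::collections::HashSet;

// Sketch: build URL sets for both sides, bail out early when they already match,
// then compute what to remove and what to add.
let stored_urls: HashSet<RelayUrl> = stored_relays.iter().map(|r| r.url.clone()).collect();
if stored_urls == network_urls {
    return Ok(()); // nothing changed on the network side
}
let to_remove: Vec<&Relay> = stored_relays
    .iter()
    .filter(|r| !network_urls.contains(&r.url))
    .collect();
let to_add: Vec<RelayUrl> = network_urls.difference(&stored_urls).cloned().collect();
// Each removal/addition is then applied individually, logging (not propagating) failures.
```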
368-386: Good test coverage for HashSet behavior with RelayUrls.
The test validates that RelayUrl instances can be properly compared for equality and used in HashSet collections. This is important for the relay synchronization logic that relies on HashSet operations.
388-439: Comprehensive integration test for relay list updates.
The test properly validates the relay update flow with initial relays. The test acknowledges that network calls would be mocked in a real environment and handles both success and error cases appropriately.
src/whitenoise/group_information.rs (5)
9-40: Well-designed GroupType enum with proper trait implementations.
The GroupType enum is properly designed with appropriate trait implementations:
- Serde support for serialization
- Default implementation returning Group
- Display for string representation
- FromStr for parsing with proper error handling
The string representations ("group", "direct_message") are clear and consistent.
52-57: Correct inference logic for group types.
The type inference correctly identifies 2-participant groups as DirectMessage and all others as Group. This aligns with typical messaging patterns where 1-on-1 conversations are treated differently from group chats.
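For illustration, here is a sketch of what such an enum plus size-based inference can look like. This is not the PR's exact code: the serde derives are omitted for brevity, and the plain String error type stands in for whatever error type the crate actually uses.

```rust
use std::{fmt, str::FromStr};

#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub enum GroupType {
    #[default]
    Group,
    DirectMessage,
}

impl fmt::Display for GroupType {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            GroupType::Group => write!(f, "group"),
            GroupType::DirectMessage => write!(f, "direct_message"),
        }
    }
}

impl FromStr for GroupType {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "group" => Ok(GroupType::Group),
            "direct_message" => Ok(GroupType::DirectMessage),
            other => Err(format!("unknown group type: {other}")),
        }
    }
}

// Size-based inference: exactly two participants means a direct message.
fn infer_group_type(participant_count: usize) -> GroupType {
    if participant_count == 2 {
        GroupType::DirectMessage
    } else {
        GroupType::Group
    }
}
```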
69-84: Well-documented create_for_group method.
The method properly handles both explicit and inferred group types. The implementation correctly:
- Uses explicit type when provided
- Falls back to inference based on participant count
- Delegates to the database layer for persistence
100-138: Efficient batch retrieval with automatic creation.
The get_by_mls_group_ids method efficiently handles multiple group IDs (see the sketch after this list):
- Returns early if all records exist
- Creates missing records with default type
- Preserves the input order of IDs
The use of HashMap for lookups is appropriate for performance.
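A hedged sketch of that batch pattern follows; the helper names (find_by_mls_group_ids, find_or_create_by_mls_group_id) come from this review, but their exact signatures here are assumptions:

```rust
use std::collections::HashMap;

// Sketch: fetch existing records in one query, create any that are missing with the
// default type, and return results in the caller's original id order.
// Assumes GroupId: Eq + Hash + Clone.
async fn get_by_mls_group_ids(
    whitenoise: &Whitenoise,
    ids: &[GroupId],
) -> Result<Vec<GroupInformation>> {
    let existing = GroupInformation::find_by_mls_group_ids(whitenoise, ids).await?;
    let mut by_id: HashMap<GroupId, GroupInformation> = existing
        .into_iter()
        .map(|info| (info.mls_group_id.clone(), info))
        .collect();

    let mut ordered = Vec::with_capacity(ids.len());
    for id in ids {
        match by_id.remove(id) {
            Some(info) => ordered.push(info),
            None => {
                // Missing records get created with the default type (Group).
                let created = GroupInformation::find_or_create_by_mls_group_id(
                    whitenoise,
                    id,
                    GroupType::default(),
                )
                .await?;
                ordered.push(created);
            }
        }
    }
    Ok(ordered)
}
```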
157-513: Excellent test coverage.
The test suite is comprehensive and covers all important scenarios:
- Default values and trait implementations
- Type inference logic
- Idempotency of create operations
- Batch operations with mixed existing/missing records
- Order preservation in batch operations
- Empty input handling
All test cases are well-structured and properly assert expected behavior.
src/whitenoise/database/group_information.rs (4)
18-43: Correct sqlx::FromRow implementation.
The FromRow implementation properly handles the database-to-struct mapping with appropriate type conversions. The use of parse_timestamp for date parsing and GroupId::from_slice for the MLS group ID are correct.
45-62: Appropriate error mapping in conversion.
The into_group_information method correctly maps string parsing errors to WhitenoiseError::Configuration with a descriptive error message. This provides good context for debugging invalid database values.
175-195: Clean implementation of insert_new.
The private helper method properly handles new record insertion with a RETURNING clause for atomic insert-and-fetch. The use of the current timestamp in milliseconds for both created_at and updated_at is correct for new records.
198-404: Thorough database layer testing.
The test suite provides excellent coverage of the database layer:
- Not found scenarios
- Create and find operations
- Type preservation when finding existing records
- Default type handling
- Batch operations with various edge cases
- Type conversion round-trips
All tests properly use the mock whitenoise setup and assert expected database behavior.
```rust
pub async fn follow_user(&self, account: &Account, pubkey: &PublicKey) -> Result<()> {
    let (user, newly_created) = User::find_or_create_by_pubkey(pubkey, &self.database).await?;
    if newly_created {
        user.update_relay_lists(self).await?;
```
Why don't we also fetch metadata here? Just to make the time we block shorter?
I just didn't do it because it was a bit out-of-scope wrt this PR and wasn't sure if we wanted to block here for longer but taking a second look at it I think it's ok. Will add it
done
src/whitenoise/accounts/groups.rs
Outdated
```rust
assert!(group.admin_pubkeys.contains(&creator_account.pubkey));
assert!(group.admin_pubkeys.contains(&member_pubkeys[0]));

if let Ok(group_info) =
```
The result of GroupInformation::get_by_mls_group_id being Err should make this test fail?
well spotted, fixing!
done
Actionable comments posted: 0
🧹 Nitpick comments (9)
src/whitenoise/users.rs (9)
29-40: Persist metadata only when it changes and bump updated_at.
Avoid unnecessary writes when fetched metadata is identical, and explicitly update updated_at to reflect changes.
```diff
-        if let Some(metadata) = metadata {
-            self.metadata = metadata;
-            self.save(&whitenoise.database).await?;
-        }
+        if let Some(metadata) = metadata {
+            if metadata != self.metadata {
+                self.metadata = metadata;
+                self.updated_at = Utc::now();
+                self.save(&whitenoise.database).await?;
+            } else {
+                tracing::debug!(
+                    target: "whitenoise::users::update_metadata",
+                    "Metadata unchanged for user {}",
+                    self.pubkey
+                );
+            }
+        }
```
52-76: Add tracing instrumentation to correlate logs for a given user.
You already log steps inside; adding an instrumented span makes correlation and sampling easier without sprinkling extra fields in each log.
```diff
-    pub(crate) async fn update_relay_lists(&self, whitenoise: &Whitenoise) -> Result<()> {
+    #[tracing::instrument(skip(whitenoise), level = "info", fields(pubkey = %self.pubkey))]
+    pub(crate) async fn update_relay_lists(&self, whitenoise: &Whitenoise) -> Result<()> {
```
102-111: Clarify log wording (“other types” → “subsequent queries”).
The current message could be read as updating relays for “other types”. It actually refreshes the NIP-65 set for subsequent use.
```diff
-    "Updated NIP-65 relays for user {}, now using {} relays for other types",
+    "Updated NIP-65 relays for user {}; using {} relays for subsequent queries",
```
180-187: Avoid borrowing-from-Vec dance; compare owned sets to simplify and reduce lifetimes juggling.
Construct HashSet<RelayUrl> for both sides. This removes the need to keep a Vec alive just to back reference-based sets and makes intent clearer.
```diff
-        let stored_relays = self.relays(relay_type, &whitenoise.database).await?;
-        let network_relay_urls_vec: Vec<_> = network_relay_urls.into_iter().collect();
-
-        // Check if there are any changes needed
-        let stored_urls: HashSet<&RelayUrl> = stored_relays.iter().map(|r| &r.url).collect();
-        let network_urls_set = network_relay_urls_vec.iter().collect();
+        let stored_relays = self.relays(relay_type, &whitenoise.database).await?;
+        // Build owned sets for straight comparison and membership checks
+        let stored_urls: HashSet<RelayUrl> = stored_relays.iter().map(|r| r.url.clone()).collect();
+        let network_urls_set: HashSet<RelayUrl> = network_relay_urls.into_iter().collect();
@@
-        for existing_relay in &stored_relays {
-            if !network_urls_set.contains(&existing_relay.url) {
+        for existing_relay in &stored_relays {
+            if !network_urls_set.contains(&existing_relay.url) {
                 if let Err(e) = self
                     .remove_relay(existing_relay, relay_type, &whitenoise.database)
                     .await
                 {
@@
-        for new_relay_url in &network_relay_urls_vec {
-            if !stored_urls.contains(new_relay_url) {
-                let relay = whitenoise.find_or_create_relay(new_relay_url).await?;
+        for new_relay_url in &network_urls_set {
+            if !stored_urls.contains(new_relay_url) {
+                let relay = whitenoise.find_or_create_relay(new_relay_url).await?;
                 if let Err(e) = self
                     .add_relay(&relay, relay_type, &whitenoise.database)
                     .await
                 {
```
Also applies to: 207-224, 226-244
197-205: Consider wrapping remove/add in a DB transaction per relay type.
Atomicity would prevent ending up with partial updates if the process crashes mid-way. Since individual failures are already tolerated, this is optional but improves consistency.
Would you like a quick sketch using your DB abstraction to group the remove/add batch in a single transaction?
401-452: Strengthen expectation: update_relay_lists should generally not error.
Given the implementation swallows per-type network failures and proceeds, an Err here typically indicates a DB-level failure. Consider asserting is_ok() to catch regressions earlier.
Apply this diff to simplify:
```diff
-        // The test should succeed even if network calls fail (graceful degradation)
-        match result {
-            Ok(()) => {
-                // Success case - verify the user still has relays
-                let relays = saved_user
-                    .relays(RelayType::Nip65, &whitenoise.database)
-                    .await
-                    .unwrap();
-                assert!(
-                    !relays.is_empty(),
-                    "User should still have relays after update"
-                );
-            }
-            Err(_) => {
-                // Error case is also acceptable since we don't have real network responses
-                // The important part is that the method handles errors gracefully
-            }
-        }
+        assert!(result.is_ok(), "update_relay_lists should degrade gracefully");
+        // Verify the user still has some NIP-65 relays (initial or updated)
+        let relays = saved_user
+            .relays(RelayType::Nip65, &whitenoise.database)
+            .await
+            .unwrap();
+        assert!(
+            !relays.is_empty(),
+            "User should still have relays after update"
+        );
```
455-484: Same point: prefer asserting success over accepting both Ok/Err.
The method is designed to degrade gracefully; asserting is_ok() here will surface unexpected critical failures.
```diff
-        // Should not panic and should handle the no-relays case gracefully
-        match result {
-            Ok(()) => {
-                // Check if any default relays were potentially added
-                println!("Update completed successfully");
-            }
-            Err(e) => {
-                // Errors are expected when there are no network responses
-                println!("Update failed as expected: {}", e);
-            }
-        }
+        assert!(result.is_ok(), "Expected graceful handling with default relays");
```
516-531: Optionally validate that defaults are actually returned.
To tighten the contract, assert equality with Relay::defaults() URLs (order-insensitive) rather than just non-empty.
I can add a small helper to compare by URL ignoring order if you’d like.
595-633: Consider asserting updated_at behavior once you bump it on change.
If you adopt the updated_at = Utc::now() suggestion, it’s worth adding an assertion here to ensure we don’t regress that behavior.
Want me to update this test to capture-and-assert updated_at semantics?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- src/whitenoise/accounts/groups.rs (11 hunks)
- src/whitenoise/follows.rs (1 hunks)
- src/whitenoise/users.rs (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- src/whitenoise/follows.rs
- src/whitenoise/accounts/groups.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: check (ubuntu-latest, native)
🔇 Additional comments (5)
src/whitenoise/users.rs (5)
78-91: Sane defaulting for query relays.
Falling back to Relay::defaults() when no stored NIP-65 relays exist is a pragmatic choice and keeps network operations resilient.
167-179: Graceful network error handling is good.
Logging at warn and propagating the error to the caller (so upstream can decide) is appropriate. Downstream call sites already degrade gracefully.
381-399: RelayUrl value-equality test looks good.
Confirms expected Hash/Eq semantics, including HashSet de-duplication by value.
532-569: Good preservation test for metadata when the network yields no update.
Confirms no unintended mutations and that update_metadata is idempotent with no network changes.
570-594: Metadata update with no NIP-65 relays: sensible behavior.
Using defaults for fetch and preserving metadata on no response is correct.
3771c34 to 515a47d
Summary by CodeRabbit
New Features
Refactor
Chores
Tests