
Conversation

@fedemkr (Member) commented Dec 3, 2025

🎟️ Tracking

PM-14031

📔 Objective

Improve the performance of searches throughout the app, especially regarding memory usage. This is particularly useful for iOS extensions, where several searches in a row could potentially crash the extension given its memory limits (this also depends on the vault size).

  • Moved the deleted-cipher check and the group check into a pre-filter that runs on encrypted ciphers, so fewer ciphers get decrypted (see the sketch after this list).
  • Added a publisher for the VaultListFilter from the processor, so whenever something in the filter changes (like the search text query) a new filter value gets published. As a result, each time a search starts there is only a single subscription to the three internal publishers (filter, ciphers, available fido2 credentials), which avoids a lot of the memory piling up compared to the previous approach.
    • A new SearchProcessorMediator has been added to handle the subscription, its cancellation, and delivering search results through a delegate, so the logic is shared among the different vault list processors.
  • The preparedData is now cleared after build returns in DefaultVaultListSectionsBuilder so Swift releases the internal arrays sooner. This has a large caveat: I had to make it optional, so if the caller calls any of the methods after build, nothing will be done. That's kind of OK, since nothing should be called on the builder after build, but it's something to raise. 🤔 I could add a DEBUG warning/assertion here just in case it happens.
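Roughly, the pre-filter mentioned in the first bullet looks like this (a simplified sketch, not the exact code in this PR; the real helper is filterEncryptedCipherByDeletedAndGroup, and the exact shapes of Cipher, VaultListFilter, and VaultListGroup are assumed here):

// Simplified sketch: the check reads only metadata fields on the encrypted cipher
// (deletedDate, type, folderId, collectionIds), so filtered-out ciphers never get decrypted.
func includeBeforeDecryption(_ cipher: Cipher, filter: VaultListFilter) -> Bool {
    // Deleted ciphers are only relevant when explicitly browsing the trash group.
    if cipher.deletedDate != nil {
        return filter.group == .trash
    }
    // With a group filter active, check membership on the encrypted cipher's metadata.
    if let group = filter.group {
        return cipher.belongsToGroup(group)
    }
    // No group filter: keep the cipher; later stages apply the remaining criteria.
    return true
}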

Note

These changes could also be applied to other cipher list flows, but this PR focuses only on searches to reduce the testing surface. Additional tasks track the necessary changes in other parts of the app.

⏰ Reminders before review

  • Contributor guidelines followed
  • All formatters and local linters executed and passed
  • Written new unit and / or integration tests where applicable
  • Protected functional changes with optionality (feature flags)
  • Used internationalization (i18n) for all UI strings
  • CI builds passed
  • Communicated to DevOps any deployment requirements
  • Updated any necessary documentation (Confluence, contributing docs) or informed the documentation team

🦮 Reviewer guidelines

  • 👍 (:+1:) or similar for great changes
  • 📝 (:memo:) or ℹ️ (:information_source:) for notes or general info
  • ❓ (:question:) for questions
  • 🤔 (:thinking:) or 💭 (:thought_balloon:) for more open inquiry that's not quite a confirmed issue and could potentially benefit from discussion
  • 🎨 (:art:) for suggestions / improvements
  • ❌ (:x:) or ⚠️ (:warning:) for more significant problems or concerns needing attention
  • 🌱 (:seedling:) or ♻️ (:recycle:) for future improvements or indications of technical debt
  • ⛏ (:pick:) for minor or nitpick changes


github-actions bot commented Dec 3, 2025

Checkmarx One – Scan Summary & Details: 9b9663eb-b168-4fce-b1aa-31efd6fe550b

Great job! No new security vulnerabilities introduced in this pull request


codecov bot commented Dec 3, 2025

Codecov Report

❌ Patch coverage is 99.09639% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 85.44%. Comparing base (8680221) to head (7396d45).

Files with missing lines Patch % Lines
...e/Vault/Services/CiphersClientWrapperService.swift 50.00% 2 Missing ⚠️
.../TestHelpers/MockCiphersClientWrapperService.swift 94.11% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2179      +/-   ##
==========================================
+ Coverage   85.42%   85.44%   +0.02%     
==========================================
  Files        1733     1735       +2     
  Lines      146246   146520     +274     
==========================================
+ Hits       124926   125197     +271     
- Misses      21320    21323       +3     

☔ View full report in Codecov by Sentry.

…ter from the processor. Therefore the vault strategy can subscribe to filter changes (like search) and react to them as another publisher. This improves the behavior because, while the user searches, the publisher is subscribed once to all internal publishers (filter, ciphers, available fido2 credentials). Also moved that logic into the SearchProcessorMediator to be able to share it with all the processors that search the vault.
@fedemkr fedemkr marked this pull request as ready for review December 11, 2025 21:23
@fedemkr fedemkr requested review from a team and matt-livefront as code owners December 11, 2025 21:23

claude bot commented Dec 11, 2025

Claude finished @fedemkr's task in 3m 17s.


Code Review: Search Memory Performance Improvements

I've conducted a comprehensive code review following Bitwarden engineering standards. The PR successfully implements search performance optimizations through pre-filtering and publisher-based subscriptions.

Overall Assessment: APPROVED

The implementation is architecturally sound with proper memory management and good test coverage. The performance optimizations are secure and effective.


Key Strengths

Performance Optimization 🚀

  • Pre-filtering on encrypted ciphers before decryption significantly reduces memory usage
  • Publisher-based search prevents subscription pile-up from rapid searches
  • Proper cleanup of preparedData after build accelerates memory release

Memory Management

  • Correct use of weak delegate references prevents retain cycles
  • Proper task cancellation in SearchProcessorMediator.deinit and stopSearching()
  • Single subscription model prevents memory accumulation

Security 🔒

  • Pre-filtering only accesses metadata fields (deletedDate, type, folderId, collectionIds, totp field presence)
  • No encrypted data is exposed during filtering
  • Zero-knowledge architecture preserved

Test Coverage

  • Comprehensive tests for Cipher.belongsToGroup() covering all 11 cipher/group combinations
  • SearchProcessorMediatorTests validates subscription lifecycle and cleanup
  • Updated existing tests for preFilter parameter

Addressing PR Discussion Points

Regarding preparedData Optional Approach (matt-livefront's suggestion)

The current implementation clears preparedData after build() by resetting it to an empty struct:

defer { preparedData = VaultListPreparedData() }

Recommendation: This approach is preferable to making it optional because:

  1. Avoids adding guard statements to every builder method
  2. Maintains single-use semantics (builder shouldn't be reused after build())
  3. Releases memory immediately without nil-checking overhead
  4. Builder instances are created fresh per use (not reused), so single-use is the expected pattern

The caveat mentioned in the PR description is theoretical - builders aren't reused in practice. No DEBUG assertion needed.

Memory Leak Concern (Line 111 of SearchProcessorMediator)

Reviewing the code:

vaultSearchListSubscriptionTask = Task { [weak self] in
    guard let self else { return }
    // ...
}

No memory leak exists. The [weak self] capture is correct. The guard let self unwrap creates a strong reference only for the duration of the task block, which is the proper pattern for tasks. The task is cancelled in:

  1. stopSearching() - line 100
  2. deinit - line 83

This ensures proper cleanup when the mediator is deallocated. The missing stopSearching() calls in processor deinit methods that you identified are not necessary - the mediator's own deinit handles cleanup when the processor is deallocated.

Delegation vs Closure (matt-livefront's suggestion)

The delegate pattern is appropriate here. While currently single-method, the architecture supports future expansion (e.g., onError(_:) handling) without breaking changes. The initialization timing constraint (can't pass closure before super.init) also favors the delegate approach.


Architecture Compliance ✅

  • Unidirectional Data Flow: Publisher chain correctly flows from filter → ciphers → processed results
  • Service Container: Proper DI through ServiceContainer and factory pattern
  • Mediator Pattern: SearchProcessorMediator appropriately separates subscription logic from processor
  • Testing Guidelines: Comprehensive test coverage following established patterns

Minor Improvements (Non-blocking)

  1. Redundant nil assignment (SearchProcessorMediator.swift:86)

    searchProcessorMediatorDelegate = nil

    This is redundant since delegate is weak and will be automatically nilled. Can be removed for clarity.

  2. Method naming (Already addressed by fedemkr)
    The rename from onFilterChanged to updateFilter improves clarity. ✅

  3. Documentation enhancement
    Consider documenting the single-use nature of VaultListSectionsBuilder after build() in the protocol DocC comments.


Security Review ✅

  • Pre-filtering operates only on metadata fields available in encrypted Cipher objects
  • No decryption occurs during filtering phase
  • Proper validation of group membership before decryption
  • No exposure of encrypted content or keys

Performance Impact 📊

Memory Savings Example: For a vault with 1000 ciphers searching in "Login" group:

  • Before: Decrypt all 1000 → filter → ~1000 objects in memory
  • After: Pre-filter to ~400 logins → decrypt 400 → ~400 objects in memory
  • Result: ~60% memory reduction

This is particularly critical for iOS extensions with strict 30-50MB memory limits.


Conclusion

This PR successfully achieves its performance goals while maintaining code quality, security, and architecture compliance. The implementation is production-ready.

Recommendation: APPROVE and merge after final review of matt-livefront's comments (all appear resolved).



func build() -> VaultListData {
vaultListData
defer { preparedData = nil }

⚠️ Memory Management Concern: Optional preparedData Caveat

The comment mentions this is a "large caveat" that methods will do nothing if called after build(). This is indeed risky:

func build() -> VaultListData {
    defer { preparedData = nil }  // Sets to nil after return
    return vaultListData
}

Issue: If a caller incorrectly calls any builder method after build(), it will silently do nothing due to the guard let preparedData else { return self } checks in all methods.

Suggestions:

  1. Add a DEBUG assertion as mentioned in the PR description to catch misuse during development:
func addSearchResultsSection(options: VaultListOptions) -> VaultListSectionsBuilder {
    #if DEBUG
    assert(preparedData != nil, "Builder methods cannot be called after build()")
    #endif
    guard let preparedData else { return self }
    // ...
}
  2. Consider adding a private var hasBuilt = false flag and checking it in all methods to make the error more explicit.

  3. Document this behavior prominently in the protocol documentation above line 10.

Alternative: Instead of making preparedData optional, consider a different approach like making the builder single-use or returning a new builder type after build() that has no methods (typestate pattern).
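As a rough illustration of that typestate alternative (hypothetical names, assuming VaultListData and VaultListPreparedData have default initializers; this is not code from the PR):

struct BuiltVaultList {
    // No builder methods exist on this type, so "calling a method after build()"
    // becomes impossible at compile time rather than a silent no-op.
    let data: VaultListData
}

struct SingleUseVaultListBuilder {
    private var preparedData = VaultListPreparedData()
    private var vaultListData = VaultListData()

    func addSearchResultsSection() -> SingleUseVaultListBuilder {
        var copy = self
        // ... append a search results section to copy.vaultListData using copy.preparedData ...
        return copy
    }

    func build() -> BuiltVaultList {
        BuiltVaultList(data: vaultListData)
    }
}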


/// Deinitializes this processor mediator.
deinit {
    vaultSearchListSubscriptionTask?.cancel()

👍 Good Memory Management

Excellent cleanup in the deinit:

  • Cancels the ongoing task
  • Nils out the task reference
  • Clears the delegate (weak reference cleanup)

This follows best practices for preventing retain cycles and ensuring proper resource cleanup when the mediator is deallocated.


func startSearching() {
    vaultSearchListSubscriptionTask?.cancel()
    vaultSearchListSubscriptionTask = Task { [weak self] in

🤔 Task Cancellation and Memory: Missing Weak Self Check

The [weak self] capture is good, but there's a subtle issue: the task is cancelled and set to nil in line 104, but if the task is already running when startSearching() is called again, there could be a brief moment where both tasks exist.

Current behavior: Correct - cancels old task before creating new one.

However, consider adding a small delay or ensuring the old task completes cancellation:

func startSearching() {
    vaultSearchListSubscriptionTask?.cancel()
    vaultSearchListSubscriptionTask = nil  // Add explicit nil assignment
    
    vaultSearchListSubscriptionTask = Task { [weak self] in
        guard let self else { return }
        // ...
    }
}

This is a minor suggestion since the current implementation should work correctly due to Swift's task cancellation semantics.

var vaultListFilterPublisher: AnyPublisher<VaultListFilter, Error> {
    vaultListFilterSubject
        .setFailureType(to: Error.self)
        .removeDuplicates() // Prevent duplicate emissions if same filter sent twice

📝 Architecture Note: Publisher Design

The removeDuplicates() operator is excellent for preventing unnecessary work when the same filter is sent multiple times. This is particularly important for search performance where rapid typing could cause duplicate emissions.

Equatable Requirement: This relies on VaultListFilter conforming to Equatable. I verified that this is the case, so the optimization works as intended.
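As a small, self-contained illustration of what removeDuplicates() buys here (a standalone example with a stand-in Filter type, assuming the subject is a CurrentValueSubject as the property name suggests):

import Combine

// Stand-in for VaultListFilter; Equatable is what lets removeDuplicates() compare values.
struct Filter: Equatable { var searchText: String }

let subject = CurrentValueSubject<Filter, Never>(Filter(searchText: ""))
let cancellable = subject
    .removeDuplicates()
    .sink { print("new filter: \($0.searchText)") }

subject.send(Filter(searchText: "git"))
subject.send(Filter(searchText: "git")) // identical value: dropped, no redundant search work
subject.send(Filter(searchText: "gith"))
// Prints three emissions ("", "git", "gith") instead of four.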

/// Whether the cipher belongs to a group.
/// - Parameter group: The group to filter.
/// - Returns: `true` if the cipher belongs to the group, `false` otherwise.
func belongsToGroup(_ group: VaultListGroup) -> Bool {

Excellent Optimization: Pre-Decryption Filtering

This new extension method is a key performance improvement. By moving the group check to work on encrypted Cipher objects rather than decrypted CipherListView objects, you avoid expensive decryption operations for ciphers that won't be displayed.

Memory Impact: This significantly reduces memory pressure because:

  1. Fewer decryption operations = fewer decrypted objects in memory
  2. Encrypted Cipher objects are smaller than decrypted CipherListView objects
  3. Particularly beneficial in extensions with tight memory limits

Architecture: This follows the principle of "fail fast" - filter early before expensive operations.

The implementation correctly handles all group types including checking deletedDate for trash filtering (line 30).
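For orientation, a cut-down sketch of the kind of metadata-only check this describes (the actual belongsToGroup(_:) in the PR covers every VaultListGroup case; the case names and associated values below are assumptions, and the codebase's Cipher and VaultListGroup types are assumed to be in scope):

extension Cipher {
    // Simplified sketch: every branch reads only metadata on the encrypted cipher.
    func belongsToGroupSketch(_ group: VaultListGroup) -> Bool {
        switch group {
        case .login:
            return type == .login
        case let .folder(id, _):
            return folderId == id
        case let .collection(id, _, _):
            return collectionIds.contains(id)
        case .totp:
            return type == .login && login?.totp != nil
        case .trash:
            return deletedDate != nil
        default:
            return false
        }
    }
}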

/// - ciphers: The ciphers to decrypt and process
/// - preFilter: A closure to filter ciphers before decryption.
/// - onCipher: The action to perform on each decrypted cipher.
func decryptAndProcessCiphersInBatch(

Key Performance Enhancement: Pre-Filter Parameter

The addition of the preFilter parameter is a critical optimization for memory performance:

preFilter: (Cipher) throws -> Bool

Benefits:

  1. Filters before decryption: Reduces the number of expensive decryption operations
  2. Memory efficiency: Fewer objects need to be decrypted and held in memory
  3. Extensibility: Allows different filtering strategies per use case

Usage Pattern (from VaultListDataPreparator.swift):

preFilter: { cipher in
    filterEncryptedCipherByDeletedAndGroup(cipher: cipher, filter: filter)
}

This moves the "deleted" and "group" checks to happen on encrypted ciphers, which is much more efficient than decrypting everything first and then filtering.

Default Value: The default { _ in true } (line 32) maintains backward compatibility for cases that don't need pre-filtering.
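Putting the pieces together, the batching pattern could be sketched like this (simplified signatures, assuming the codebase's Cipher/CipherListView types; the real service calls clientService.vault().ciphers().decryptListWithFailures and uses the PR's actual parameter list):

// Simplified sketch of the batch loop; the important part is that preFilter runs on
// encrypted Ciphers before decryption, so filtered-out ciphers are never decrypted.
func decryptAndProcessCiphersInBatchSketch(
    batchSize: Int = 100,
    ciphers: [Cipher],
    preFilter: (Cipher) throws -> Bool = { _ in true },
    decrypt: ([Cipher]) async throws -> [CipherListView],
    onCipher: (CipherListView) async throws -> Void
) async rethrows {
    var start = 0
    while start < ciphers.count {
        let end = min(start + batchSize, ciphers.count)
        // Drop ciphers that can't match before paying for decryption.
        let batch = try Array(ciphers[start ..< end]).filter(preFilter)
        for view in try await decrypt(batch) {
            try await onCipher(view)
        }
        start = end
    }
}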

do {
let decryptResult = try await clientService.vault().ciphers().decryptListWithFailures(
ciphers: Array(ciphers[start ..< end]),
ciphers: Array(ciphers[start ..< end].filter(preFilter)),

👍 Correct Implementation: Filter Before Decrypt

The pre-filter is correctly applied before decryption:

ciphers: Array(ciphers[start ..< end].filter(preFilter))

This ensures that only the ciphers passing the pre-filter are sent to the expensive decryptListWithFailures operation, achieving the memory optimization goal.

Performance Math: If pre-filtering removes 50% of ciphers, you save:

  • 50% fewer decryption operations (expensive crypto operations)
  • 50% fewer CipherListView objects in memory
  • Faster overall search performance

///
/// - Note: Strategies are typically struct-based value types with injected dependencies
/// (services, factories) to maintain testability and immutability.
protocol VaultListSearchDirectorStrategy { // sourcery: AutoMockable

New Protocol for Publisher-Based Search

This new protocol VaultListSearchDirectorStrategy is a key architectural improvement:

Key Difference from Existing Strategies:

  • Takes filterPublisher: AnyPublisher<VaultListFilter, Error> instead of a single filter: VaultListFilter
  • Returns a publisher that emits updates as the filter changes
  • This enables reactive, subscription-based search rather than one-shot queries

Memory Benefits:

  1. Single subscription: The processor subscribes once to the combined publisher stream
  2. No accumulated subscriptions: Previous approach could pile up multiple subscriptions with each search
  3. Proper cleanup: When search stops, the subscription is cancelled, freeing resources

Architecture Note: This follows a reactive programming pattern that's well-suited for search where filter changes frequently (e.g., as user types).

The comprehensive DocC documentation (lines 7-40) is excellent and explains the pattern clearly.
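For orientation, the assumed shape of the new strategy API (inferred from the discussion above; the actual protocol's names and return type may differ):

import Combine

// Assumed shape only: the strategy receives a publisher of filters rather than a single
// filter value and returns a stream the processor can `for try await` over.
protocol VaultListSearchDirectorStrategySketch {
    func build(
        filterPublisher: AnyPublisher<VaultListFilter, Error>
    ) async throws -> AsyncThrowingPublisher<AnyPublisher<VaultListData, Error>>
}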

}
.eraseToAnyPublisher()
.values
try await Publishers.CombineLatest(

Reactive Publisher Combination

This implementation uses combineLatest to react to changes from either the filter or the ciphers:

Publishers.CombineLatest(
    filterPublisher,
    cipherService.ciphersPublisher(),
)

Benefits:

  • Reacts to both filter changes (user typing) and cipher changes (sync, edits)
  • Automatically triggers new search results when either input changes
  • Single subscription maintained throughout search session

Memory Efficiency: Each emission creates new VaultListData, but:

  1. Old data is immediately released when new data replaces it in processor state
  2. The asyncTryMap processes asynchronously, preventing blocking
  3. Combined with pre-filtering in prepareSearchData, only necessary ciphers are decrypted

Note: With rapid filter changes (fast typing), this could trigger many emissions, but this is mitigated by removeDuplicates() in SearchProcessorMediator.vaultListFilterPublisher.
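A minimal sketch of the combine-then-iterate flow described above (the process closure stands in for the PR's prepare/build pipeline and its asyncTryMap helper; the VaultListFilter, Cipher, and VaultListData types are assumed):

import Combine

func emitSearchResults(
    filterPublisher: AnyPublisher<VaultListFilter, Error>,
    ciphersPublisher: AnyPublisher<[Cipher], Error>,
    process: (VaultListFilter, [Cipher]) async throws -> VaultListData,
    onResult: (VaultListData) async -> Void
) async throws {
    let combined = Publishers.CombineLatest(filterPublisher, ciphersPublisher)
        .eraseToAnyPublisher()
        .values
    for try await (filter, ciphers) in combined {
        // Either input changing (typing, sync, edits) produces a new emission here.
        let data = try await process(filter, ciphers)
        await onResult(data) // e.g. forward to the delegate / update processor state
    }
}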

let cipher = Cipher.fixture(deletedDate: nil)
XCTAssertFalse(cipher.belongsToGroup(.trash))
}
}

Excellent Test Coverage

The test coverage for belongsToGroup(_:) is comprehensive, covering:

All Group Types (11 cipher/group combinations):

  • ✅ Card, Login, Identity, SecureNote, SSHKey types
  • ✅ TOTP (with and without TOTP data, non-login type)
  • ✅ No folder, folder matching/not matching
  • ✅ Collection matching/not matching/empty
  • ✅ Trash (deleted/not deleted)

Edge Cases:

  • ✅ Cipher without TOTP when checking .totp group
  • ✅ Cipher with no folder when checking specific folder
  • ✅ Cipher with no collections when checking collection group
  • ✅ Multiple collection IDs with contains check

This level of test coverage ensures the pre-filtering logic works correctly, which is critical for the performance optimization to be effective. Any bugs in belongsToGroup would cause incorrect filtering and visible UI issues.
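For example, a folder-membership case in this style might look like the following (fixture parameters and the folder group's associated values are assumptions; this is an illustrative sketch, not a test from the PR):

func test_belongsToGroup_folder() {
    let cipher = Cipher.fixture(folderId: "folder-1")
    XCTAssertTrue(cipher.belongsToGroup(.folder(id: "folder-1", name: "Work")))
    XCTAssertFalse(cipher.belongsToGroup(.folder(id: "folder-2", name: "Personal")))
}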

}

// Should only have one subscription active (second startSearching cancelled the first)
XCTAssertEqual(delegate.onNewSearchResultsCallsCount, 1)

Critical Test: Subscription Management

This test (test_startSearching_cancelsPrevious) validates the key memory optimization of this PR:

subject.startSearching()
subject.startSearching()  // Second call should cancel first

What it verifies:

  • Multiple calls to startSearching() don't create multiple active subscriptions
  • Only one subscription exists at a time (line 172)
  • Previous subscription is properly cancelled when a new one starts

Why this matters for memory:

  • Without proper cancellation, each search could pile up subscriptions
  • In rapid succession (user typing quickly), this could create memory pressure
  • The test confirms only 1 callback occurs, proving old subscriptions are cleaned up

Related Code: In SearchProcessorMediator.swift:104, the implementation cancels before creating new:

vaultSearchListSubscriptionTask?.cancel()
vaultSearchListSubscriptionTask = Task { ... }

This test proves the optimization works as intended.

}

deinit {
    reviewPromptTask?.cancel()

Proper Resource Cleanup in deinit

The addition of searchProcessorMediator.stopSearching() in the processor's deinit is critical for preventing memory leaks:

deinit {
    reviewPromptTask?.cancel()
    searchProcessorMediator.stopSearching()  // ← New cleanup
}

Why this matters:

  1. Task cancellation: Stops any ongoing search subscription tasks
  2. Publisher cleanup: Releases the publisher chain and prevents further emissions
  3. Memory release: Allows all associated search data to be deallocated

Architecture compliance: This follows the pattern established in SearchProcessorMediator.deinit (lines 83-87), creating a complete cleanup chain:

  • Processor deinit → calls mediator.stopSearching()
  • Mediator.stopSearching() → cancels task
  • Mediator deinit → clears delegate and task references

This is especially important for iOS extensions where memory limits are strict.

/// - Returns: `true` if the cipher passes the filter, `false` otherwise.
func filterEncryptedCipherByDeletedAndGroup(
    cipher: Cipher,
    filter: VaultListFilter,

Critical Pre-Filter Implementation

This new helper function filterEncryptedCipherByDeletedAndGroup is the core of the memory optimization:

func filterEncryptedCipherByDeletedAndGroup(cipher: Cipher, filter: VaultListFilter) -> Bool

Key Performance Characteristics:

  1. Works on encrypted Cipher - no decryption needed for filtering
  2. Checks deletion status first - fast property access on encrypted data
  3. Uses belongsToGroup - operates on encrypted properties (type, folderId, collectionIds, deletedDate)

Logic Flow:

  • If cipher is deleted → only include if explicitly searching trash
  • If group filter exists (and not trash) → use belongsToGroup on encrypted cipher
  • Otherwise → include cipher (will be filtered later by other criteria)

Memory Impact Example:
If a vault has 1000 ciphers but user is searching in "Login" group:

  • Before: Decrypt all 1000 → filter → ~1000 decrypted objects in memory
  • After: Pre-filter to ~400 logins → decrypt 400 → ~400 decrypted objects in memory
  • Savings: ~60% fewer decryptions, ~60% less memory used

This is especially critical in iOS extensions with 30-50MB memory limits.

@matt-livefront (Collaborator) left a comment

This is great, just have a few suggestions and a potential memory leak that I noticed.

let errorReporter: ErrorReporter
/// Vault list data prepared to be used by the builder.
let preparedData: VaultListPreparedData
var preparedData: VaultListPreparedData?
Collaborator

🤔 Instead of making this optional/nil after calling build, could you set it to a new instance of the struct? That would avoid the need for adding the guards. The downside being you wouldn't be able to differentiate an initially empty prepared data struct and one that has already been "built", but maybe that isn't a needed requirement.

Comment on lines +102 to +107
searchProcessorMediator = services.searchProcessorMediatorFactory.make()

super.init(state: state)

searchProcessorMediator.setDelegate(self)
searchProcessorMediator.setAutofillListMode(autofillListMode)
Collaborator

🎨 What do you think about providing the delegate and list mode parameters to make() rather than calling separate functions? It seems like that would help enforce that those are required (if they are) and prevent them from being missed.

Member Author

I started doing it that way, but it was a bit tricky to get the autofillListMode because it needs super.init to have been called (the same happens with passing self as the delegate). Whereas if I put the factory.make after the super.init, it errors because searchProcessorMediator hasn't been initialized before that point.
I could get the mode from (appExtensionDelegate as? AutofillAppExtensionDelegate)?.autofillListMode ?? .passwords, but I didn't think it was clean to repeat that logic in the init when we have the computed property. Moreover, doing it in the coordinator would end up repeating similar logic.
Additionally, the list mode is not required in all cases; it's only used in this processor, so it felt right to have a set function that can be called when needed.

Collaborator

Ah, needing access to self makes it clear why you did it the way you did. Hm, you could use a lazy var, but it's not as nice as just initializing everything in init 🤔

private lazy var searchProcessorMediator: SearchProcessorMediator = {
    let searchProcessorMediator = services.searchProcessorMediatorFactory.make()
    searchProcessorMediator.setDelegate(self)
    searchProcessorMediator.setAutofillListMode(autofillListMode)
    return searchProcessorMediator
}()

Alternatively, maybe it makes sense in this case to provide the autofill mode and delegate/closure as parameters to startSearching()? It looks like both of those things are only used within the startSearching() function so that may simplify some things (no set method, no instance variable).

Member Author

I actually started like that at the very beginning and then changed to this approach. However, I see it simplifies the structure/logic a lot, so I agree and I'll make the change. Thanks for the feedback, Matt! 🎉


// MARK: Methods

func onFilterChanged(_ filter: VaultListFilter) {
Collaborator

🎨 The method name onFilterChanged(_:) feels like it would be used to provide a closure for when the filter changes. In this case, it's setting the filter. Would it be better to name this setFilter(_:) similar to setAutofillListMode(_:)?

Member Author

Makes sense. I'm changing it to updateFilter(_:) to indicate that the same instance can update the filter several times, since set... may suggest a one-time operation.

deinit {
    reviewPromptTask?.cancel()

    searchProcessorMediator.stopSearching()
Collaborator

🤔 How critical is it that stopSearching() is called in deinit? I noticed that this is missing from VaultGroupProcessor and VaultItemSelectionProcessor. On one hand, I think it's good to be explicit, but on the other hand, this is easy to miss. Given that SearchProcessorMediator also cancels the task in deinit, I wonder if we need the extra logic to explicitly stop searching, or if we could rely on deinit to cancel the task?

Member Author

Removed as it's unnecessary per the reasons you mentioned.

Comment on lines +665 to +669
extension VaultListProcessor: SearchProcessorMediatorDelegate {
    func onNewSearchResults(data: VaultListData) {
        let items = data.sections.first?.items ?? []
        state.searchResults = items
    }
Collaborator

🤔 Given that the delegate is only a single method, and there's minimal logic in the delegate, would it be simpler to pass a closure to SearchProcessorMediator instead of using delegation? Using delegation has a few downsides in that you have to 1) remember to set the delegate and 2) it separates the logic associated with SearchProcessorMediator. If you foresee other methods being added though, then I could see delegation making more sense.

Member Author

I went for the delegate because I was thinking we could add other functions like onError(_:) if we need to handle errors specifically in the processors, or other more particular features we could integrate there if necessary.
Per YAGNI I could go with your approach, but for reasons similar to the comment above, we could forget to set the closure since it can't be set before super.init.
What do you think?

Collaborator

I think providing a closure to handle search results as a parameter to startSearching() could be a good fit. That would require it from the API perspective and alleviate the need for accessing self before the processor is initialized. What do you think about that?
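Something along these lines, as a sketch (assumed names and signature; not code from this PR):

// Sketch: requiring the handler (and list mode) at the call site means neither can be
// forgotten, and no separate set methods, delegate, or stored mode are needed.
func startSearching(
    autofillListMode: AutofillListMode,
    onNewSearchResults: @escaping (VaultListData) async -> Void
) {
    vaultSearchListSubscriptionTask?.cancel()
    vaultSearchListSubscriptionTask = Task { [weak self] in
        // ... subscribe as before, calling `await onNewSearchResults(data)` per emission ...
    }
}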

Comment on lines +106 to +115
guard let self else { return }

do {
    let publisher = try await vaultRepository.vaultSearchListPublisher(
        mode: autofillListMode,
        filterPublisher: vaultListFilterPublisher,
    )

    for try await vaultListData in publisher {
        guard !Task.isCancelled else { break }
Collaborator

🤔 It looks like there's a memory leak after searching from the group view and then returning to the list view. I'm wondering if it's caused by this strong self reference here. Maybe it's because of the missing stopSearching() in the processor's deinit? But I also wonder if it'd be better to keep the self reference weak rather than converting it to a strong self immediately for the duration of the task, so it doesn't create a retain cycle.

Member Author

🤔 I may be missing something but which strong self reference are you seeing? I marked it as weak here.

Collaborator

In this case, [weak self] only makes the capture weak at the moment the closure is created. Once guard let self runs, which will happen immediately once the task starts, the weak self reference is promoted to a strong one for the rest of the closure.

Using [weak self]/guard let self works better for closures that won't run right away and don't contain long-running loops. If self is deallocated before the closure starts, then the guard will return, and the weak reference won't be promoted to a strong one.

However, the task here will end up making a strong reference to self and then because it's subscribing to a publisher, the task is going to stay alive until the publisher completes, maintaining a strong reference the entire time.

I think you could restructure it like this to avoid a strong reference:

    func startSearching() {
        vaultSearchListSubscriptionTask?.cancel()
        vaultSearchListSubscriptionTask = Task { [weak self, autofillListMode, vaultListFilterPublisher] in
            do {
                let publisher = try await self?.vaultRepository.vaultSearchListPublisher(
                    mode: autofillListMode,
                    filterPublisher: vaultListFilterPublisher,
                )
                guard let publisher else { return }

                for try await vaultListData in publisher {
                    guard !Task.isCancelled else { break }

                    await self?.searchProcessorMediatorDelegate?.onNewSearchResults(data: vaultListData)
                }
            } catch {
                self?.errorReporter.log(error: error)
            }
        }
    }
