[PM-14031] Improve search memory performance #2179
Conversation
…prove performance so fewer ciphers get decrypted when not necessary.
Great job! No new security vulnerabilities introduced in this pull request.

Codecov Report: ❌ Patch coverage is …
Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #2179      +/-   ##
==========================================
+ Coverage   85.42%   85.44%   +0.02%
==========================================
  Files        1733     1735       +2
  Lines      146246   146520     +274
==========================================
+ Hits       124926   125197     +271
- Misses      21320    21323       +3
```

☔ View full report in Codecov by Sentry.
…ter from the processor. Therefore the vault strategy can subscribe to filter changes (like search) and react to that as another publisher. This improves the behavior given that, while the user searches, the publisher is subscribed once to all internal publishers (filter, ciphers, available Fido2 credentials). Also moved the logic for that into the SearchProcessorMediator so it can be shared with all the processors that search the vault.
Claude finished @fedemkr's task in 3m 17s

Code Review: Search Memory Performance Improvements

I've conducted a comprehensive code review following Bitwarden engineering standards. The PR successfully implements search performance optimizations through pre-filtering and publisher-based subscriptions.

Overall Assessment: APPROVED ✅

The implementation is architecturally sound, with proper memory management and good test coverage. The performance optimizations are secure and effective.

Key Strengths

Performance Optimization 🚀
Memory Management ✅
Security 🔒
Test Coverage ✅
Addressing PR Discussion Points

Regarding …
```swift
func build() -> VaultListData {
    vaultListData
    defer { preparedData = nil }
```
The comment mentions this is a "large caveat" that methods will do nothing if called after build(). This is indeed risky:
```swift
func build() -> VaultListData {
    defer { preparedData = nil } // Sets to nil after return
    return vaultListData
}
```

Issue: If a caller incorrectly calls any builder method after `build()`, it will silently do nothing due to the `guard let preparedData else { return self }` checks in all methods.
Suggestions:

- Add a `DEBUG` assertion, as mentioned in the PR description, to catch misuse during development:

```swift
func addSearchResultsSection(options: VaultListOptions) -> VaultListSectionsBuilder {
    #if DEBUG
    assert(preparedData != nil, "Builder methods cannot be called after build()")
    #endif
    guard let preparedData else { return self }
    // ...
}
```

- Consider adding a `private var hasBuilt = false` flag and checking it in all methods to make the error more explicit.
- Document this behavior prominently in the protocol documentation above line 10.
Alternative: Instead of making preparedData optional, consider a different approach like making the builder single-use or returning a new builder type after build() that has no methods (typestate pattern).
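To make the typestate idea concrete, here is a minimal sketch using Swift's noncopyable types (Swift 5.9+). `PreparedData`, `ListData`, and `SectionsBuilder` are hypothetical stand-ins for the real `VaultListPreparedData`/`VaultListData`/`VaultListSectionsBuilder`, and the fluent chaining of the real builder is dropped for brevity; this is one possible shape, not the repo's implementation:

```swift
/// Hypothetical, simplified stand-ins for the real prepared data and list data.
struct PreparedData {
    var ciphers: [String] = []
}

struct ListData {
    var sections: [[String]] = []
}

/// Single-use builder: because the type is noncopyable and `build()` is
/// `consuming`, any use of the builder after `build()` is a compile-time error.
struct SectionsBuilder: ~Copyable {
    private var preparedData: PreparedData
    private var sections: [[String]] = []

    init(preparedData: PreparedData) {
        self.preparedData = preparedData
    }

    /// Adds a section built from the prepared data (no chaining, for brevity).
    mutating func addSearchResultsSection() {
        sections.append(preparedData.ciphers)
    }

    /// Consumes the builder; the compiler forbids further calls afterwards.
    consuming func build() -> ListData {
        ListData(sections: sections)
    }
}

var builder = SectionsBuilder(preparedData: PreparedData(ciphers: ["login-1"]))
builder.addSearchResultsSection()
let data = builder.build()
print(data.sections.count) // 1
// builder.addSearchResultsSection() // ❌ compile-time error: 'builder' used after consume
```

With this shape, the misuse that a `DEBUG` assertion would only catch at runtime becomes a compile-time error instead.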
```swift
/// Deinitializes this processor mediator.
deinit {
    vaultSearchListSubscriptionTask?.cancel()
```
👍 Good Memory Management
Excellent cleanup in the deinit:
- Cancels the ongoing task
- Nils out the task reference
- Clears the delegate (weak reference cleanup)
This follows best practices for preventing retain cycles and ensuring proper resource cleanup when the mediator is deallocated.
```swift
func startSearching() {
    vaultSearchListSubscriptionTask?.cancel()
    vaultSearchListSubscriptionTask = Task { [weak self] in
```
🤔 Task Cancellation and Memory: Missing Weak Self Check
The [weak self] capture is good, but there's a subtle issue: the task is cancelled and set to nil in line 104, but if the task is already running when startSearching() is called again, there could be a brief moment where both tasks exist.
Current behavior: Correct - cancels old task before creating new one.
However, consider adding a small delay or ensuring the old task completes cancellation:
```swift
func startSearching() {
    vaultSearchListSubscriptionTask?.cancel()
    vaultSearchListSubscriptionTask = nil // Add explicit nil assignment
    vaultSearchListSubscriptionTask = Task { [weak self] in
        guard let self else { return }
        // ...
    }
}
```

This is a minor suggestion since the current implementation should work correctly due to Swift's task cancellation semantics.
```swift
var vaultListFilterPublisher: AnyPublisher<VaultListFilter, Error> {
    vaultListFilterSubject
        .setFailureType(to: Error.self)
        .removeDuplicates() // Prevent duplicate emissions if same filter sent twice
```
📝 Architecture Note: Publisher Design
The removeDuplicates() operator is excellent for preventing unnecessary work when the same filter is sent multiple times. This is particularly important for search performance where rapid typing could cause duplicate emissions.
Equatable Requirement: This relies on `VaultListFilter` conforming to `Equatable`. Verified that this is the case, which is required for the optimization to work correctly.
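To illustrate why the `Equatable` conformance matters here, a small self-contained sketch with a hypothetical `SearchFilter` standing in for `VaultListFilter`:

```swift
import Combine

/// Hypothetical, simplified filter; the real `VaultListFilter` has more fields.
struct SearchFilter: Equatable {
    var searchText: String
    var group: String?
}

let filterSubject = CurrentValueSubject<SearchFilter, Never>(SearchFilter(searchText: "", group: nil))

let cancellable = filterSubject
    .removeDuplicates() // Drops values equal to the previous one; requires Equatable.
    .sink { filter in
        print("searching for: \(filter.searchText)")
    }

filterSubject.send(SearchFilter(searchText: "git", group: nil))  // emitted
filterSubject.send(SearchFilter(searchText: "git", group: nil))  // suppressed as a duplicate
filterSubject.send(SearchFilter(searchText: "gith", group: nil)) // emitted
_ = cancellable // keep the subscription alive in this sketch
```

Without `Equatable` (and therefore without `removeDuplicates()`), the second `send` would trigger a full, redundant search pass.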
```swift
/// Whether the cipher belongs to a group.
/// - Parameter group: The group to filter.
/// - Returns: `true` if the cipher belongs to the group, `false` otherwise.
func belongsToGroup(_ group: VaultListGroup) -> Bool {
```
✨ Excellent Optimization: Pre-Decryption Filtering
This new extension method is a key performance improvement. By moving the group check to work on encrypted Cipher objects rather than decrypted CipherListView objects, you avoid expensive decryption operations for ciphers that won't be displayed.
Memory Impact: This significantly reduces memory pressure because:
- Fewer decryption operations = fewer decrypted objects in memory
- Encrypted `Cipher` objects are smaller than decrypted `CipherListView` objects
- Particularly beneficial in extensions with tight memory limits
Architecture: This follows the principle of "fail fast" - filter early before expensive operations.
The implementation correctly handles all group types including checking deletedDate for trash filtering (line 30).
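As a rough sketch of the shape of such a pre-decryption check, using hypothetical trimmed-down types (`EncryptedCipher`, `CipherKind`, `Group`) in place of the SDK's `Cipher` and the app's `VaultListGroup`; the real method covers more cases (e.g. TOTP, identity, "no folder"):

```swift
import Foundation

/// Hypothetical, trimmed-down stand-ins; the real types have many more cases and fields.
struct EncryptedCipher {
    var type: CipherKind
    var folderId: String?
    var collectionIds: [String]
    var deletedDate: Date?
}

enum CipherKind { case login, card, identity, secureNote, sshKey }

enum Group {
    case card
    case login
    case folder(id: String)
    case collection(id: String)
    case trash
}

extension EncryptedCipher {
    /// Group membership decided purely from encrypted metadata,
    /// so no decryption is needed before filtering.
    func belongsToGroup(_ group: Group) -> Bool {
        switch group {
        case .card: return type == .card
        case .login: return type == .login
        case let .folder(id): return folderId == id
        case let .collection(id): return collectionIds.contains(id)
        case .trash: return deletedDate != nil
        }
    }
}
```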
```swift
/// - ciphers: The ciphers to decrypt and process
/// - preFilter: A closure to filter ciphers before decryption.
/// - onCipher: The action to perform on each decrypted cipher.
func decryptAndProcessCiphersInBatch(
```
✨ Key Performance Enhancement: Pre-Filter Parameter
The addition of the preFilter parameter is a critical optimization for memory performance:
```swift
preFilter: (Cipher) throws -> Bool
```

Benefits:
- Filters before decryption: Reduces the number of expensive decryption operations
- Memory efficiency: Fewer objects need to be decrypted and held in memory
- Extensibility: Allows different filtering strategies per use case
Usage Pattern (from VaultListDataPreparator.swift):
```swift
preFilter: { cipher in
    filterEncryptedCipherByDeletedAndGroup(cipher: cipher, filter: filter)
}
```

This moves the "deleted" and "group" checks to happen on encrypted ciphers, which is much more efficient than decrypting everything first and then filtering.
Default Value: The default { _ in true } (line 32) maintains backward compatibility for cases that don't need pre-filtering.
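For illustration, a minimal, self-contained sketch of how a batched decrypt-and-process helper with a pre-filter can be structured. The types and the `decryptBatch` stub are hypothetical; the real code calls the SDK's `decryptListWithFailures` and works with `Cipher`/`CipherListView`:

```swift
// Hypothetical, simplified stand-ins; `decryptBatch` replaces the real SDK call.
struct EncryptedItem { var id: String }
struct DecryptedItem { var id: String }

func decryptBatch(_ batch: [EncryptedItem]) async throws -> [DecryptedItem] {
    batch.map { DecryptedItem(id: $0.id) }
}

/// Decrypts items in fixed-size batches, filtering on encrypted data first
/// so that non-matching items never reach the expensive decryption step.
func decryptAndProcessInBatches(
    items: [EncryptedItem],
    batchSize: Int = 100,
    preFilter: (EncryptedItem) throws -> Bool = { _ in true }, // default keeps old behavior
    onItem: (DecryptedItem) async throws -> Void
) async throws {
    var start = 0
    while start < items.count {
        let end = min(start + batchSize, items.count)
        // Pre-filter on encrypted data before paying for decryption.
        let batch = try items[start ..< end].filter(preFilter)
        for decrypted in try await decryptBatch(batch) {
            try await onItem(decrypted)
        }
        start = end
    }
}
```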
```diff
  do {
      let decryptResult = try await clientService.vault().ciphers().decryptListWithFailures(
-         ciphers: Array(ciphers[start ..< end]),
+         ciphers: Array(ciphers[start ..< end].filter(preFilter)),
```
👍 Correct Implementation: Filter Before Decrypt
The pre-filter is correctly applied before decryption:
```swift
ciphers: Array(ciphers[start ..< end].filter(preFilter))
```

This ensures that only the ciphers passing the pre-filter are sent to the expensive `decryptListWithFailures` operation, achieving the memory optimization goal.
Performance Math: If pre-filtering removes 50% of ciphers, you save:
- 50% fewer decryption operations (expensive crypto operations)
- 50% fewer `CipherListView` objects in memory
- Faster overall search performance
```swift
///
/// - Note: Strategies are typically struct-based value types with injected dependencies
///   (services, factories) to maintain testability and immutability.
protocol VaultListSearchDirectorStrategy { // sourcery: AutoMockable
```
✨ New Protocol for Publisher-Based Search
This new protocol VaultListSearchDirectorStrategy is a key architectural improvement:
Key Difference from Existing Strategies:
- Takes `filterPublisher: AnyPublisher<VaultListFilter, Error>` instead of a single `filter: VaultListFilter`
- This enables reactive, subscription-based search rather than one-shot queries
Memory Benefits:
- Single subscription: The processor subscribes once to the combined publisher stream
- No accumulated subscriptions: Previous approach could pile up multiple subscriptions with each search
- Proper cleanup: When search stops, the subscription is cancelled, freeing resources
Architecture Note: This follows a reactive programming pattern that's well-suited for search where filter changes frequently (e.g., as user types).
The comprehensive DocC documentation (lines 7-40) is excellent and explains the pattern clearly.
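To make the contrast concrete, a hedged sketch of the two styles side by side, with hypothetical simplified signatures rather than the repo's exact protocols:

```swift
import Combine

// Hypothetical, simplified stand-ins for the real filter and list data types.
struct SearchFilter: Equatable { var searchText: String }
struct ListData { var sections: [String] }

/// One-shot style: build results once for a single filter value.
protocol OneShotVaultListStrategy {
    func build(filter: SearchFilter) async throws -> ListData
}

/// Publisher-driven style: take a filter publisher and return a publisher of results,
/// so the caller subscribes once and receives updates as the filter changes.
protocol SearchVaultListStrategy {
    func build(
        filterPublisher: AnyPublisher<SearchFilter, Error>
    ) async throws -> AnyPublisher<ListData, Error>
}
```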
```swift
}
.eraseToAnyPublisher()
.values
try await Publishers.CombineLatest(
```
✨ Reactive Publisher Combination
This implementation uses combineLatest to react to changes from either the filter or the ciphers:
```swift
Publishers.CombineLatest(
    filterPublisher,
    cipherService.ciphersPublisher(),
)
```

Benefits:
- Reacts to both filter changes (user typing) and cipher changes (sync, edits)
- Automatically triggers new search results when either input changes
- Single subscription maintained throughout search session
Memory Efficiency: Each emission creates new VaultListData, but:
- Old data is immediately released when new data replaces it in processor state
- The `asyncTryMap` processes asynchronously, preventing blocking
- Combined with pre-filtering in `prepareSearchData`, only necessary ciphers are decrypted
Note: With rapid filter changes (fast typing), this could trigger many emissions, but this is mitigated by removeDuplicates() in SearchProcessorMediator.vaultListFilterPublisher.
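A small self-contained sketch of the general pattern (combine the latest filter and data, rebuild results on each emission), using hypothetical types and a plain `map` where the repo uses its own async-mapping operator:

```swift
import Combine
import Foundation

struct SearchQuery: Equatable { var searchText: String }

let querySubject = CurrentValueSubject<SearchQuery, Never>(SearchQuery(searchText: ""))
let ciphersSubject = CurrentValueSubject<[String], Never>(["github.com", "gitlab.com"])

// Emits whenever either the query or the cipher list changes.
let results = Publishers.CombineLatest(querySubject, ciphersSubject)
    .map { query, ciphers -> [String] in
        guard !query.searchText.isEmpty else { return ciphers }
        return ciphers.filter { $0.localizedCaseInsensitiveContains(query.searchText) }
    }

let cancellable = results.sink { matches in
    print("matches: \(matches)")
}

querySubject.send(SearchQuery(searchText: "git")) // filter change triggers a new emission
ciphersSubject.send(["github.com"])               // data change triggers a new emission too
_ = cancellable
```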
```swift
        let cipher = Cipher.fixture(deletedDate: nil)
        XCTAssertFalse(cipher.belongsToGroup(.trash))
    }
}
```
✅ Excellent Test Coverage
The test coverage for belongsToGroup(_:) is comprehensive, covering:
All Group Types (11 cipher types):
- ✅ Card, Login, Identity, SecureNote, SSHKey types
- ✅ TOTP (with and without TOTP data, non-login type)
- ✅ No folder, folder matching/not matching
- ✅ Collection matching/not matching/empty
- ✅ Trash (deleted/not deleted)
Edge Cases:
- ✅ Cipher without TOTP when checking .totp group
- ✅ Cipher with no folder when checking specific folder
- ✅ Cipher with no collections when checking collection group
- ✅ Multiple collection IDs with contains check
This level of test coverage ensures the pre-filtering logic works correctly, which is critical for the performance optimization to be effective. Any bugs in belongsToGroup would cause incorrect filtering and visible UI issues.
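To illustrate the shape of such tests, a small self-contained example written against hypothetical stand-in types; the repo's real tests use `Cipher.fixture` and `VaultListGroup` instead:

```swift
import Foundation
import XCTest

// Hypothetical stand-ins mirroring the shape of the repo's `Cipher` extension.
struct EncryptedCipher {
    var deletedDate: Date?
    var collectionIds: [String] = []
}

enum Group { case trash, collection(id: String) }

extension EncryptedCipher {
    func belongsToGroup(_ group: Group) -> Bool {
        switch group {
        case .trash: return deletedDate != nil
        case let .collection(id): return collectionIds.contains(id)
        }
    }
}

final class BelongsToGroupTests: XCTestCase {
    /// A cipher with no deleted date is not part of the trash group.
    func test_belongsToGroup_trash_notDeleted() {
        let cipher = EncryptedCipher(deletedDate: nil)
        XCTAssertFalse(cipher.belongsToGroup(.trash))
    }

    /// A cipher belongs to a collection group only if its IDs contain that collection.
    func test_belongsToGroup_collection() {
        let cipher = EncryptedCipher(deletedDate: nil, collectionIds: ["c1"])
        XCTAssertTrue(cipher.belongsToGroup(.collection(id: "c1")))
        XCTAssertFalse(cipher.belongsToGroup(.collection(id: "c2")))
    }
}
```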
```swift
// Should only have one subscription active (second startSearching cancelled the first)
XCTAssertEqual(delegate.onNewSearchResultsCallsCount, 1)
```
✅ Critical Test: Subscription Management
This test (test_startSearching_cancelsPrevious) validates the key memory optimization of this PR:
```swift
subject.startSearching()
subject.startSearching() // Second call should cancel first
```

What it verifies:
- Multiple calls to `startSearching()` don't create multiple active subscriptions
- Only one subscription exists at a time (line 172)
- Previous subscription is properly cancelled when a new one starts
Why this matters for memory:
- Without proper cancellation, each search could pile up subscriptions
- In rapid succession (user typing quickly), this could create memory pressure
- The test confirms only 1 callback occurs, proving old subscriptions are cleaned up
Related Code: In SearchProcessorMediator.swift:104, the implementation cancels before creating new:
```swift
vaultSearchListSubscriptionTask?.cancel()
vaultSearchListSubscriptionTask = Task { ... }
```

This test proves the optimization works as intended.
```swift
deinit {
    reviewPromptTask?.cancel()
```
✅ Proper Resource Cleanup in deinit
The addition of searchProcessorMediator.stopSearching() in the processor's deinit is critical for preventing memory leaks:
```swift
deinit {
    reviewPromptTask?.cancel()
    searchProcessorMediator.stopSearching() // ← New cleanup
}
```

Why this matters:
- Task cancellation: Stops any ongoing search subscription tasks
- Publisher cleanup: Releases the publisher chain and prevents further emissions
- Memory release: Allows all associated search data to be deallocated
Architecture compliance: This follows the pattern established in SearchProcessorMediator.deinit (lines 83-87), creating a complete cleanup chain:
- Processor deinit → calls mediator.stopSearching()
- Mediator.stopSearching() → cancels task
- Mediator deinit → clears delegate and task references
This is especially important for iOS extensions where memory limits are strict.
```swift
/// - Returns: `true` if the cipher passes the filter, `false` otherwise.
func filterEncryptedCipherByDeletedAndGroup(
    cipher: Cipher,
    filter: VaultListFilter,
```
✨ Critical Pre-Filter Implementation
This new helper function filterEncryptedCipherByDeletedAndGroup is the core of the memory optimization:
```swift
func filterEncryptedCipherByDeletedAndGroup(cipher: Cipher, filter: VaultListFilter) -> Bool
```

Key Performance Characteristics:
- Works on encrypted `Cipher`: no decryption needed for filtering
- Checks deletion status first: fast property access on encrypted data
- Uses `belongsToGroup`, which operates on encrypted properties (type, folderId, collectionIds, deletedDate)
Logic Flow:
- If cipher is deleted → only include if explicitly searching trash
- If a group filter exists (and it's not trash) → use `belongsToGroup` on the encrypted cipher
- Otherwise → include the cipher (it will be filtered later by other criteria)
Memory Impact Example:
If a vault has 1000 ciphers but user is searching in "Login" group:
- Before: Decrypt all 1000 → filter → ~1000 decrypted objects in memory
- After: Pre-filter to ~400 logins → decrypt 400 → ~400 decrypted objects in memory
- Savings: ~60% fewer decryptions, ~60% less memory used
This is especially critical in iOS extensions with 30-50MB memory limits.
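Based on the logic flow described above, a hedged sketch of what this pre-filter can look like, with hypothetical simplified types; the real function operates on the SDK `Cipher` and the app's `VaultListFilter`/`VaultListGroup`:

```swift
import Foundation

// Hypothetical, simplified stand-ins for the real SDK `Cipher` and `VaultListFilter`.
struct EncryptedCipher {
    var deletedDate: Date?
    var folderId: String?
}

enum Group: Equatable {
    case folder(id: String)
    case trash
}

struct ListFilter {
    var group: Group?
}

extension EncryptedCipher {
    func belongsToGroup(_ group: Group) -> Bool {
        switch group {
        case let .folder(id): return folderId == id
        case .trash: return deletedDate != nil
        }
    }
}

/// Pre-filter applied to encrypted ciphers, before any decryption happens.
func filterEncryptedCipherByDeletedAndGroup(cipher: EncryptedCipher, filter: ListFilter) -> Bool {
    // 1. Deleted ciphers are only included when explicitly searching the trash.
    if cipher.deletedDate != nil {
        return filter.group == .trash
    }
    // 2. With a (non-trash) group filter, check membership on encrypted metadata.
    if let group = filter.group, group != .trash {
        return cipher.belongsToGroup(group)
    }
    // 3. Otherwise include; later stages filter on decrypted fields (e.g. search text).
    return true
}
```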
matt-livefront left a comment
This is great, just have a few suggestions and a potential memory leak that I noticed.
```diff
  let errorReporter: ErrorReporter

  /// Vault list data prepared to be used by the builder.
- let preparedData: VaultListPreparedData
+ var preparedData: VaultListPreparedData?
```
🤔 Instead of making this optional/nil after calling build, could you set it to a new instance of the struct? That would avoid the need for adding the guards. The downside being you wouldn't be able to differentiate an initially empty prepared data struct and one that has already been "built", but maybe that isn't a needed requirement.
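For illustration, a minimal sketch of that alternative with hypothetical simplified types: `build()` hands back the result and resets `preparedData` to an empty value, so the internal arrays are released without introducing optionality or guards:

```swift
// Hypothetical, simplified stand-ins for the real builder types.
struct PreparedData {
    var ciphers: [String] = []
}

struct ListData {
    var sections: [[String]] = []
}

final class SectionsBuilder {
    private var preparedData: PreparedData
    private var listData = ListData()

    init(preparedData: PreparedData) {
        self.preparedData = preparedData
    }

    @discardableResult
    func addSearchResultsSection() -> SectionsBuilder {
        // No guard needed: preparedData always exists, it may just be empty.
        listData.sections.append(preparedData.ciphers)
        return self
    }

    func build() -> ListData {
        defer { preparedData = PreparedData() } // Reset to a fresh, empty value.
        return listData
    }
}
```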
```swift
searchProcessorMediator = services.searchProcessorMediatorFactory.make()

super.init(state: state)

searchProcessorMediator.setDelegate(self)
searchProcessorMediator.setAutofillListMode(autofillListMode)
```
🎨 What do you think about providing the delegate and list mode parameters to make() rather than calling separate functions? It seems like that would help enforce that those are required (if they are) and prevent them from being missed.
I started doing it that way, but it was a bit tricky to get the autofillListMode since it needs super.init to be called first (the same happens with the delegate self). Whereas if I put the factory.make() after super.init, it errors because searchProcessorMediator hasn't been initialized before that point.
I could get the mode from `(appExtensionDelegate as? AutofillAppExtensionDelegate)?.autofillListMode ?? .passwords`, but I didn't think it was clean to repeat that logic in the init when we have the computed property. Moreover, doing it in the coordinator would end up repeating similar logic.
Additionally, the list mode is not required in all cases; it's only used in this processor, so it felt right to have a set function that can be called when needed.
Ah, needing access to self explains why you did it the way you did. Hm, you could use a lazy var, but it's not as nice as just initializing everything in init 🤔
```swift
private lazy var searchProcessorMediator: SearchProcessorMediator = {
    let searchProcessorMediator = services.searchProcessorMediatorFactory.make()
    searchProcessorMediator.setDelegate(self)
    searchProcessorMediator.setAutofillListMode(autofillListMode)
    return searchProcessorMediator
}()
```
Alternatively, maybe it makes sense in this case to provide the autofill mode and delegate/closure as parameters to startSearching()? It looks like both of those things are only used within the startSearching() function so that may simplify some things (no set method, no instance variable).
At the very beginning I started like that and then changed to this approach. However, I see it simplifies the structure/logic a lot, so I agree and I'll make the change. Thanks for the feedback Matt! 🎉
```swift
// MARK: Methods

func onFilterChanged(_ filter: VaultListFilter)
```
🎨 The method name onFilterChanged(_:) feels like it would be used to provide a closure for when the filter changes. In this case, it's setting the filter. Would it be better to name this setFilter(_:) similar to setAutofillListMode(_:)?
Makes sense. I'm changing it to updateFilter(_:) to indicate that the same instance can update the filter several times, since I see set... as possibly indicating a one-time operation.
```swift
deinit {
    reviewPromptTask?.cancel()

    searchProcessorMediator.stopSearching()
```
🤔 How critical is it that stopSearching() is called in deinit? I noticed that this is missing from VaultGroupProcessor and VaultItemSelectionProcessor. On one hand, I think it's good to be explicit, but on the other hand, this is easy to miss. Given that SearchProcessorMediator also cancels the task in deinit, I wonder if we need the extra logic to explicitly stop searching, or if we could rely on deinit to cancel the task?
Removed as it's unnecessary per the reasons you mentioned.
```swift
extension VaultListProcessor: SearchProcessorMediatorDelegate {
    func onNewSearchResults(data: VaultListData) {
        let items = data.sections.first?.items ?? []
        state.searchResults = items
    }
```
🤔 Given that the delegate is only a single method, and there's minimal logic in the delegate, would it be simpler to pass a closure to SearchProcessorMediator instead of using delegation? Using delegation has a few downsides in that you have to 1) remember to set the delegate and 2) it separates the logic associated with SearchProcessorMediator. If you foresee other methods being added though, then I could see delegation making more sense.
I went for the delegate because I was thinking we could add some other functions like onError(_:) if we'd need to handle that particularly on the processors or some more particular features we could integrate in there if necessary.
By YAGNI I could go with your approach, but for a reason similar to the comment above, we could forget to set the closure, since it can't be set before super.init.
What do you think?
I think providing a closure to handle search results as a parameter to startSearching() could be a good fit. That would require it from the API perspective and alleviate the need for accessing self before the processor is initialized. What do you think about that?
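A hedged sketch of that shape, with hypothetical simplified types and signatures; the real mediator would also wire up the vault repository's search publisher:

```swift
// Hypothetical, simplified types; the real mediator also wires up the
// vault repository's search publisher and the rest of the search pipeline.
struct ListData { var sections: [String] }
enum AutofillListMode { case passwords, all }

final class SearchMediator {
    private var subscriptionTask: Task<Void, Never>?

    /// The results handler is part of the call itself, so it can't be forgotten
    /// and no delegate or separate set-up step is needed.
    func startSearching(
        mode: AutofillListMode,
        onNewSearchResults: @escaping @MainActor (ListData) -> Void
    ) {
        subscriptionTask?.cancel()
        subscriptionTask = Task { [weak self] in
            guard self != nil else { return }
            // In the real implementation this would iterate the vault search
            // publisher and forward each emission to the handler:
            await onNewSearchResults(ListData(sections: []))
        }
    }

    func stopSearching() {
        subscriptionTask?.cancel()
        subscriptionTask = nil
    }
}
```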
```swift
guard let self else { return }

do {
    let publisher = try await vaultRepository.vaultSearchListPublisher(
        mode: autofillListMode,
        filterPublisher: vaultListFilterPublisher,
    )

    for try await vaultListData in publisher {
        guard !Task.isCancelled else { break }
```
🤔 It looks like there's a memory leak after searching from the group view and then returning to the list view. I'm wondering if it's caused by this strong self reference here. Maybe it's because of the missing stopSearching() in the processor's deinit? But I also wonder if it'd be better to keep the self reference weak rather than converting it to a strong self immediately for the duration of the task, so it doesn't create a retain cycle.
🤔 I may be missing something but which strong self reference are you seeing? I marked it as weak here.
In this case, [weak self] only makes the capture weak at the moment the closure is created. Once guard let self runs, which will happen immediately once the task starts, the weak self reference is promoted to a strong one for the rest of the closure.
Using [weak self]/guard let self works better for closures that won't run right away and don't contain long-running loops. If self is deallocated before the closure starts, then the guard will return, and the weak reference won't be promoted to a strong one.
However, the task here will end up making a strong reference to self and then because it's subscribing to a publisher, the task is going to stay alive until the publisher completes, maintaining a strong reference the entire time.
I think you could restructure it like this to avoid a strong reference:
```swift
func startSearching() {
    vaultSearchListSubscriptionTask?.cancel()
    vaultSearchListSubscriptionTask = Task { [weak self, autofillListMode, vaultListFilterPublisher] in
        do {
            let publisher = try await self?.vaultRepository.vaultSearchListPublisher(
                mode: autofillListMode,
                filterPublisher: vaultListFilterPublisher,
            )
            guard let publisher else { return }
            for try await vaultListData in publisher {
                guard !Task.isCancelled else { break }
                await self?.searchProcessorMediatorDelegate?.onNewSearchResults(data: vaultListData)
            }
        } catch {
            self?.errorReporter.log(error: error)
        }
    }
}
```
…istSectionsBuilder so that preparedData is not optional and is set back to its default struct when building.

🎟️ Tracking
PM-14031
📔 Objective
Improve the performance of searches throughout the app, especially regarding memory usage. This is particularly useful for iOS extensions, where several searches in a row could potentially crash the extension given the memory limits (this also depends on the vault size).
- A publisher of `VaultListFilter` is now received from the processor, so any time something in the filter (like the search text query) changes, a new filter value gets published. Therefore, each time the search starts, the publisher is subscribed only once for the three internal ones (filter, ciphers, available Fido2 credentials), which saves a lot of memory piling up vs the previous approach.
- `SearchProcessorMediator` has been added to handle the subscription, cancellation, and handling of search results with a delegate, to share the logic amongst the different vault list processors.
- `preparedData` is now cleaned after `build` returns in `DefaultVaultListSectionsBuilder` to make Swift clean the internal arrays faster. This has a large caveat given that I had to make it optional, so if the caller calls any of the methods after `build`, nothing will be done; which is kind of OK as nothing should be called after `build` in the builder, but something to raise up. 🤔 I could add a DEBUG warning/assertion here just in case it happens.

Note
These changes could also be done to other cipher list flows but this PR only focuses on searches to decrease the testing surface. Additional tasks track the necessary changes on other parts of the app.
🦮 Reviewer guidelines

- 👍 (`:+1:`) or similar for great changes
- 📝 (`:memo:`) or ℹ️ (`:information_source:`) for notes or general info
- ❓ (`:question:`) for questions
- 🤔 (`:thinking:`) or 💭 (`:thought_balloon:`) for more open inquiry that's not quite a confirmed issue and could potentially benefit from discussion
- 🎨 (`:art:`) for suggestions / improvements
- ❌ (`:x:`) or ⚠️ (`:warning:`) for more significant problems or concerns needing attention
- 🌱 (`:seedling:`) or ♻️ (`:recycle:`) for future improvements or indications of technical debt
- ⛏ (`:pick:`) for minor or nitpick changes