fix(flagd): Fix stream observation in FlagSyncService #1590
Conversation
… added Stream cancellation Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Summary of Changes
Hello @guidobrei, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly refactors the FlagSyncService to enhance the stability and correctness of stream observation. The primary goal is to prevent a potential deadlock scenario and improve the synchronization of stream cancellation. By removing a blocking queue and implementing a unified cancellation context, the service now handles metadata retrieval and flag synchronization more robustly, ensuring that stream failures or metadata issues lead to proper stream restarts.
Highlights
- Deadlock Prevention: Eliminated a potential deadlock in FlagSyncService by removing the incomingQueue and its blocking take() operation.
- Stream Synchronization: Resolved synchronization issues between getMetadata and syncFlags by sharing a single Context.CancellableContext, ensuring consistent cancellation behavior.
- Refactored Stream Observation: Introduced a dedicated SyncStreamObserver class to manage gRPC stream events directly, simplifying the stream handling logic.
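To make the shared-cancellation and SyncStreamObserver ideas above concrete, here is a minimal sketch of how a single Context.CancellableContext can scope both RPCs. This is not the PR's exact code: stub, metadataRequest, syncRequest, and the SyncStreamObserver constructor are assumed names used only for illustration.

```java
// Sketch only: io.grpc.Context provides the cancellation primitive; all other
// names (stub, metadataRequest, syncRequest, SyncStreamObserver) are assumed.
private void observeOnce() {
    Context.CancellableContext streamContext = Context.current().withCancellation();
    SyncStreamObserver observer = new SyncStreamObserver(streamContext);
    try {
        // Issue both RPCs inside the shared context so cancelling it aborts either one.
        streamContext.run(() -> observer.metadata = stub.getMetadata(metadataRequest));
        streamContext.run(() -> stub.syncFlags(syncRequest, observer));

        // The restart loop watches the same token the observer cancels on stream errors.
        while (!shutdown.get() && !streamContext.isCancelled()) {
            // ... process events delivered by the observer ...
        }
    } catch (RuntimeException e) {
        // A metadata failure cancels the shared token, tearing down the sync stream
        // as well, so the outer loop can open a fresh stream.
        streamContext.cancel(e);
    }
}
```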
Code Review
This pull request effectively addresses a potential deadlock in FlagSyncService by removing the blocking incomingQueue.take() call, and it refactors the stream handling logic to be more robust. The fix for the synchronization issue between getMetadata and syncFlags by sharing a CancellableContext is also a good improvement. My review includes a few suggestions to further improve the new implementation, mainly around error handling and thread-interruption best practices. Specifically, I've pointed out an opportunity to restore the error message in the error payload, adjust a logging level, and properly handle InterruptedException.
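On the InterruptedException point, the conventional pattern is to restore the thread's interrupt flag rather than swallow the exception. A minimal sketch, assuming a hypothetical back-off wait between stream restarts (retryBackoffMillis is not from the PR):

```java
// Illustrative only; retryBackoffMillis is a hypothetical field, not from the PR.
try {
    Thread.sleep(retryBackoffMillis); // wait before reconnecting
} catch (InterruptedException ie) {
    Thread.currentThread().interrupt(); // preserve the interrupt status for the outer loop
    return;                             // stop the observation thread promptly
}
```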
-        localStub.syncFlags(syncRequest.build(), new QueueingStreamObserver<SyncFlagsResponse>(incomingQueue));
+        localStub.syncFlags(syncRequest.build(), streamObserver);

         while (!shutdown.get() && !streamObserver.context.isCancelled()) {
Instead of this loop, couldn't we cancel the context in the shutdown() method?
This method is executed in a thread. shutdown is an AtomicBoolean used to signal shutdown to the stream RPC and the underlying channel connector. But you're right, we may want to cancel the stream when we get a shutdown signal.
Will modify the code.
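A rough sketch of that change, assuming the service keeps a reference to the active stream's context in a field (currentContext is a hypothetical name, not from the PR):

```java
// Hypothetical shape of shutdown(): besides flipping the AtomicBoolean, cancel the
// shared context so a stream blocked in syncFlags terminates immediately.
public void shutdown() {
    shutdown.set(true);
    Context.CancellableContext ctx = currentContext; // context of the active stream, if any
    if (ctx != null) {
        ctx.cancel(null); // aborts the in-flight RPCs bound to this context
    }
}
```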
/gemini summary

Hello guidobrei, thank you for submitting this pull request and for requesting a review. I've analyzed the changes; here's a summary and a detailed review.

Pull Request Summary
This pull request, titled "fix(flagd): Fix stream observation in FlagSyncService", addresses two critical issues within the FlagSyncService stream handling.

Detailed Review
I've reviewed the changes based on correctness, efficiency, and maintainability, adhering to the Google Java Style Guide for any stylistic suggestions.

Overall Feedback
This is a well-structured and impactful change. The refactoring effectively addresses the identified issues, leading to a more robust and maintainable stream observation mechanism. The separation of concerns into the dedicated SyncStreamObserver simplifies the stream handling logic.

Review Comments
File:
File:

Conclusion
The changes in this pull request are well-thought-out and effectively address the described problems. The refactoring improves the code's correctness, efficiency, and maintainability. I recommend merging this PR once any minor suggestions are addressed.
            // inner loop for handling messages
            while (!shutdown.get() && !Context.current().isCancelled()) {
Was this call to Context.current().isCancelled() intentional, or should it have been a call to context.isCancelled(), the context created before and used in the metadata block above, to also skip flag sync when getMetadata fails? Note that Context.current() != context, because the call to Context.current().withCancellation() only creates a new context, but does not set it as the current context.
In the previous version, flags are always synced, even if getMetadata fails. This behavior is different in my implementation: in the new version, when getMetadata fails, we continue the loop and try again.
@toddbaert I saw this change was introduced by you here. Any chance you can recall the intention? Based on the log message in the metadata catch block, I'd say we want the syncFlags stream to start over again.
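To make the distinction explicit, here is a minimal illustration (variable names assumed) of why Context.current() and the context returned by withCancellation() are not the same thing:

```java
// withCancellation() derives a new context but does NOT make it the current one.
Context.CancellableContext context = Context.current().withCancellation();
context.cancel(null);

boolean viaCurrent = Context.current().isCancelled(); // still reflects the enclosing context
boolean viaReference = context.isCancelled();         // true: observes the cancellation

// Only inside context.run(...) (or after context.attach()) does Context.current()
// resolve to the cancelled context:
context.run(() -> System.out.println(Context.current().isCancelled())); // prints true
```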
    observer.metadata = getMetadata();
} catch (Exception metaEx) {
    // retry if getMetadata fails
    log.warn("Metadata request exception, retrying.", metaEx);
Not sure about the log level. Elevating from debug to warn might be too obtrusive.
This looks like a good simplification. I will thoroughly review on Monday 🙏
This PR
Fixes stream observation in FlagSyncService by removing the blocking incomingQueue and sharing a single cancellation context between getMetadata and syncFlags (see Notes below).

Related Issues
Fixes #1583

Notes
We've identified two related issues in the current implementation:
1) incomingQueue.take() blocks until some message is put in the blocking queue, which is a potential deadlock (see the sketch below).
2) When getMetadata fails, we want to cancel the stream. However, this never worked properly because the two parts listened on different cancellation tokens.

The new implementation eliminates problem 1) by removing incomingQueue entirely and mitigates 2) by sharing a cancellation token.
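For reference, a minimal sketch of the removed pattern behind issue 1), based on the old call shown in the diff earlier; the surrounding details and the payload variable are assumptions:

```java
// Old shape (simplified): the observer enqueues responses, the consumer blocks on take().
localStub.syncFlags(syncRequest.build(), new QueueingStreamObserver<SyncFlagsResponse>(incomingQueue));
// If the stream stalls without delivering a message or an error, this waits indefinitely:
SyncFlagsResponse payload = incomingQueue.take();
```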
Follow-up Tasks
Still open is the removal of the deprecated getMetadata (open-feature/flagd#1584).
How to test
The flagd testbed covers FlagSyncService.