Add watch request timeout to prevent watch request hang #5732
base: master
Conversation
…hang Signed-off-by: xigang <wangxigang2014@gmail.com>
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
Codecov Report
Attention: Patch coverage is
@@ Coverage Diff @@
## master #5732 +/- ##
=======================================
  Coverage   40.90%   40.90%
=======================================
  Files         650      650
  Lines       55182    55196   +14
=======================================
+ Hits        22573    22579    +6
- Misses      31171    31179    +8
  Partials     1438     1438
cc @XiShanYongYe-Chang @RainbowMango @ikaven1024 PTAL.
	return nil, err
case <-time.After(30 * time.Second):
	// If the watch request times out, return an error, and the client will retry.
	return nil, fmt.Errorf("timeout waiting for watch for resource %v in cluster %q", gvr.String(), cluster)
@xigang Hi, if a watch request is hanging and causes a timeout, will the hanging watch request continue to exist in the subprocess?
@zhzhuang-zju Yes, there is this issue. When a watch request times out, the goroutine needs to be terminated.
Good point! In that case we have to cancel the context passed to cache.Watch().
So this patch intends to terminate the hanging watch by raising an error after a period of time. Is this the idea?
Another question: before starting the Watch, we tried to get the cache of that cluster. I'm curious why this cache still exists even after the cluster is gone. Do we have a chance to clean the cache?
karmada/pkg/search/proxy/store/multi_cluster_cache.go
Lines 333 to 336 in e7b6513
cache := c.cacheForClusterResource(cluster, gvr)
if cache == nil {
	continue
}
What type of PR is this?
/kind bug
What this PR does / why we need it:
When the federate-apiserver's watch request to a member cluster hangs, the watch request from the federated client hangs as well.
Which issue(s) this PR fixes:
Fixes #5672
Special notes for your reviewer:
Does this PR introduce a user-facing change?: