
Add watch request timeout to prevent watch request hang #5732

Open

wants to merge 1 commit into base: master
Conversation

@xigang xigang commented Oct 23, 2024

What type of PR is this?
/kind bug

What this PR does / why we need it:
When the federated-apiserver's watch request to a member cluster hangs, the watch request from the federated client hangs as well.

Which issue(s) this PR fixes:
Fixes #5672

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

NONE


Signed-off-by: xigang <wangxigang2014@gmail.com>
@karmada-bot karmada-bot added the kind/bug Categorizes issue or PR as related to a bug. label Oct 23, 2024
@karmada-bot (Collaborator) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign ikaven1024 for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Oct 23, 2024
@xigang xigang changed the title Add timeout for watch requests to member clusters to prevent request … Add watch request timeout to prevent watch request hang Oct 23, 2024
@codecov-commenter commented:

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

Attention: Patch coverage is 57.14286% with 9 lines in your changes missing coverage. Please review.

Project coverage is 40.90%. Comparing base (331145f) to head (5e6e24b).

Files with missing lines                        Patch %   Lines
pkg/search/proxy/store/multi_cluster_cache.go   57.14%    8 Missing and 1 partial ⚠️


Additional details and impacted files
@@           Coverage Diff           @@
##           master    #5732   +/-   ##
=======================================
  Coverage   40.90%   40.90%           
=======================================
  Files         650      650           
  Lines       55182    55196   +14     
=======================================
+ Hits        22573    22579    +6     
- Misses      31171    31179    +8     
  Partials     1438     1438           
Flag Coverage Δ
unittests 40.90% <57.14%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.


@xigang (Member, Author) commented Oct 23, 2024:

```go
	return nil, err
case <-time.After(30 * time.Second):
	// If the watch request times out, return an error, and the client will retry.
	return nil, fmt.Errorf("timeout waiting for watch for resource %v in cluster %q", gvr.String(), cluster)
```
@zhzhuang-zju (Contributor) commented:

@xigang Hi, if a watch request is hanging and causes a timeout, will the hanging watch request continue to exist in the background goroutine?

@xigang (Member, Author) replied:

@zhzhuang-zju Yes, there is this issue. When a watch request times out, the goroutine needs to be terminated.

A Member commented:

Good point! In that case we have to cancel the context passed to cache.Watch().

A Member commented:

So this patch intends to terminate the hanging request by raising an error after a period of time. Is this the idea?

A Member commented:

Another question: before starting the watch, we try to get the cache of that cluster. I'm curious why this cache still exists even after the cluster is gone. Do we have a chance to clean the cache?

```go
cache := c.cacheForClusterResource(cluster, gvr)
if cache == nil {
	continue
}
```
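The lookup-and-skip logic quoted above, together with the cleanup the reviewer asks about, can be sketched as a small registry. The map-based registry, `cacheForCluster`, and `removeCluster` are assumptions for illustration, not Karmada's actual implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// clusterCacheRegistry maps cluster names to their caches (simplified to
// strings here; in practice the value would be a per-cluster cache object).
type clusterCacheRegistry struct {
	mu     sync.RWMutex
	caches map[string]string
}

// cacheForCluster returns the cache for a cluster, or false if absent,
// letting callers skip clusters that have no cache (the `continue` above).
func (r *clusterCacheRegistry) cacheForCluster(cluster string) (string, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	c, ok := r.caches[cluster]
	return c, ok
}

// removeCluster drops the cache when a cluster is removed, so stale
// entries do not linger after the cluster is gone.
func (r *clusterCacheRegistry) removeCluster(cluster string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.caches, cluster)
}

func main() {
	reg := &clusterCacheRegistry{caches: map[string]string{"member1": "cache1"}}
	if _, ok := reg.cacheForCluster("member1"); ok {
		fmt.Println("member1 cache present")
	}
	reg.removeCluster("member1")
	if _, ok := reg.cacheForCluster("member1"); !ok {
		fmt.Println("member1 cache removed")
	}
}
```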

Labels
kind/bug Categorizes issue or PR as related to a bug. size/M Denotes a PR that changes 30-99 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Watch Request Blocked When Member Cluster Offline
5 participants