NodeNotReady test flakes on Release-1.3 test jobs #9379

Comments
/triage accepted
Would be good to link the Slack thread directly :)
Did we talk to sig-k8s-infra and ask them if our problems could be related to their problem?
Absolutely! I wasn't sure if it was OK to link a Slack discussion.
Thanks for bringing this up. I was not sure if we wanted to triage this on our end first.
Good point, sorry I forgot what I said yesterday :). But I guess we're probably already at the point where we can't do much.
Any news? (I was on PTO the last few weeks)
I am following up on #sigs-k8s-infra over here https://kubernetes.slack.com/archives/CCK68P2Q2/p1698254656470629?thread_ts=1694458280.112599&cid=CCK68P2Q2 Thanks for the reminder!
Hm @ameukam as far as I can tell the issue still exists: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-mink8s-release-1-3/1724739536589688832
FYI, still getting jobs with
@chrischdi can we open a new issue and close this one? The title is highly misleading.
Created follow-up issue #9901. /close Because the release-1.3 jobs are gone.
@chrischdi: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Which jobs are flaking?
periodic-cluster-api-e2e-release-1-3
Which tests are flaking?
Not really applicable, since the tests never get triggered.
Since when has it been flaking?
The exact start date is unknown, but the flake has been evident since around Sept 1st.
Refer to: kubernetes/org#4433 (comment)
Refer to: https://kubernetes.slack.com/archives/C8TSNPY4T/p1694020825316969
Testgrid link
https://testgrid.k8s.io/sig-cluster-lifecycle-cluster-api-1.3#capi-e2e-release-1-3
Reason for failure (if possible)
The tests do not get triggered. Scrolling to the bottom of the pod info shows:
Node not ready
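As background, a node's readiness is reported via the `Ready` condition in its status. A minimal sketch (a hypothetical helper, not part of this issue or the CAPI codebase) of filtering the output of `kubectl get nodes -o json` for `NotReady` nodes:

```python
import json


def not_ready_nodes(nodes_json: dict) -> list:
    """Return names of nodes whose Ready condition is not 'True'."""
    names = []
    for node in nodes_json.get("items", []):
        conditions = node.get("status", {}).get("conditions", [])
        ready = next((c for c in conditions if c["type"] == "Ready"), None)
        # A missing Ready condition also counts as not ready.
        if ready is None or ready["status"] != "True":
            names.append(node["metadata"]["name"])
    return names


# Hand-written sample input (not real cluster output):
sample = {
    "items": [
        {"metadata": {"name": "node-a"},
         "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
        {"metadata": {"name": "node-b"},
         "status": {"conditions": [{"type": "Ready", "status": "False"}]}},
    ]
}

print(not_ready_nodes(sample))  # -> ['node-b']
```

In a live cluster this could be fed with `kubectl get nodes -o json`, e.g. piped into a small script that calls `json.load(sys.stdin)`.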
Anything else we need to know?
This might be related to the thread going on in #sig-k8s-infra on
Nodes are randomly freezing and failing 🧵
Label(s) to be applied
/kind flake
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.