Automation of OCP-55033 #30460
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED.

This pull-request has been approved by: asahay19. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Risk analysis has seen new tests most likely introduced by this PR. New Test Risks for sha: c55cf6a
New tests seen in this PR at sha: c55cf6a
@@ -0,0 +1,19 @@
package node
The file name can follow the convention other tests use, for example: test/extended/node/image_volume.go. I suggest test/extended/node/check_log_level.go.
Actually, this file was created for all the node QE e2e test cases that will be migrating from openshift-test-private to origin, because not all of the cases are linked to a feature. For a specific feature, yes, we can create a file like that, but for the cases coming from OTP we need one common file like this.
The origin test suite can already run test cases in parallel, which would be faster than grouping the tests. If checking the log level is one test case, it can run independently, so we may not need a common "node.go".
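For illustration, a minimal sketch of what a standalone spec file (e.g. the suggested test/extended/node/check_log_level.go) might look like; the Describe wording, the NewCLI fixture name, and how assertKubeletLogLevel is wired up are assumptions for this sketch, not the PR's actual code:

```go
package node

import (
	g "github.com/onsi/ginkgo/v2"

	exutil "github.com/openshift/origin/test/extended/util"
)

// Standalone spec file so the case can run on its own in the parallel suite.
var _ = g.Describe("[sig-node] kubelet log level", func() {
	defer g.GinkgoRecover()

	// assumed per-spec CLI fixture, following the pattern used elsewhere in origin
	oc := exutil.NewCLI("check-log-level")

	g.It("should have KUBELET_LOG_LEVEL set to 2", func() {
		// assertKubeletLogLevel is the helper added by this PR; here it is simply
		// invoked from its own spec instead of a shared node.go.
		assertKubeletLogLevel(oc)
	})
})
```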
test/extended/node/node.go
Outdated
	//e2e "k8s.io/kubernetes/test/e2e/framework"
)

var _ = g.Describe("[sig-node] NODE initContainer policy,volume,readines,quota", func() {
Already using [sig-node], so "NODE" can be removed.
@@ -0,0 +1,61 @@
package node
This can be part of the test itself instead of node_utils, since this is the real test.
| e2e "k8s.io/kubernetes/test/e2e/framework" | ||
| ) | ||
|
|
||
| func assertKubeletLogLevel(oc *exutil.CLI) { |
Did the test pass on all the platforms? I think the debug pod image may not be available on all platforms.
I checked on aws, gcp, arm64 and amd64 builds.
Did you check whether all the tests that run as part of the PR itself pass? For example, microshift.
@@ -0,0 +1,12 @@
reviewers:
Adding reviewers could be a different PR.
No, it can be in the same PR. I have updated it now.
I'm requesting that the OWNERS change be made in a different PR so that this PR focuses on the test.
Job Failure Risk Analysis for sha: bc75f72
Risk analysis has seen new tests most likely introduced by this PR. New Test Risks for sha: bc75f72
New tests seen in this PR at sha: bc75f72
test/extended/node/node.go
Outdated
	//e2e "k8s.io/kubernetes/test/e2e/framework"
)

var _ = g.Describe("[sig-node] NODE initContainer policy,volume,readines,quota", func() {
I think "NODE initContainer policy,volume,readines,quota" is historically-created, not precise, use "Kubelet, CRI-O, CPU manager" can be better and we can update it along with other feature cases coming in.
test/extended/node/node.go
Outdated
import (
	g "github.com/onsi/ginkgo/v2"
	exutil "github.com/openshift/origin/test/extended/util"
	//e2e "k8s.io/kubernetes/test/e2e/framework"
Please remove the comment if it is never used in the code
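With the commented-out import dropped, the block would simply be:

```go
import (
	g "github.com/onsi/ginkgo/v2"

	exutil "github.com/openshift/origin/test/extended/util"
)
```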
test/extended/node/node_utils.go
Outdated
			return false, nil
		}
	} else {
		e2e.Logf("\n NODES ARE NOT READY\n ")
| e2e.Logf("\n NODES ARE NOT READY\n ") | |
| e2e.Logf("\n Node %s is not Ready, Skipping\n ", node) |
test/extended/node/node_utils.go
Outdated
	if waitErr != nil {
		e2e.Logf("Kubelet Log level is:\n %v\n", kubeservice)
		e2e.Logf("Running Proccess of kubelet are:\n %v\n", kublet)
		AssertWaitPollNoErr(waitErr, "KUBELET_LOG_LEVEL is not expected")
I think you can delete func AssertWaitPollNoErr() and replace it with:
o.Expect(waitErr).NotTo(o.HaveOccurred(), "KUBELET_LOG_LEVEL is not expected, timed out")
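A rough sketch of how that could look in context, keeping the diagnostic logging from the diff above (waitErr, kubeservice, and kublet are assumed to come from the surrounding test code, e.g. a wait.Poll call):

```go
if waitErr != nil {
	// keep the diagnostics so a timeout is easy to debug
	e2e.Logf("Kubelet log level is:\n%v\n", kubeservice)
	e2e.Logf("Running processes of kubelet are:\n%v\n", kublet)
}
// fail the spec directly with gomega instead of a custom AssertWaitPollNoErr helper
o.Expect(waitErr).NotTo(o.HaveOccurred(), "KUBELET_LOG_LEVEL is not expected, timed out")
```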
/test verify
@asahay19: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
New PR link: http://github.com/openshift/origin/pull/30512
This case is about checking the Kubelet log level is 2.
Here is the test case link : https://polarion.engineering.redhat.com/polarion/#/project/OSE/workitem?id=OCP-55033
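For context, a hypothetical sketch of how such a check could be implemented; the debug-pod approach matches the discussion above, but the exact kubelet unit path and jsonpath lookup are assumptions, not necessarily what the PR does:

```go
// checkKubeletLogLevel is an illustrative stand-in for the PR's assertKubeletLogLevel helper.
func checkKubeletLogLevel(oc *exutil.CLI) {
	// pick one node (assumption: the first node is representative)
	nodeName, err := oc.AsAdmin().WithoutNamespace().Run("get").
		Args("nodes", "-o", "jsonpath={.items[0].metadata.name}").Output()
	o.Expect(err).NotTo(o.HaveOccurred())

	// read the kubelet systemd unit through a debug pod (assumed path on RHCOS)
	unit, err := oc.AsAdmin().WithoutNamespace().Run("debug").
		Args("node/"+nodeName, "--", "chroot", "/host",
			"cat", "/etc/systemd/system/kubelet.service").Output()
	o.Expect(err).NotTo(o.HaveOccurred())
	o.Expect(unit).To(o.ContainSubstring("KUBELET_LOG_LEVEL=2"))
}
```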
I ran it locally and it passed:

```
./openshift-tests run-test "[sig-node] NODE initContainer policy,volume,readines,quota check KUBELET_LOG_LEVEL is 2 [Suite:openshift/conformance/parallel]"
Running Suite: - /Users/asahay/OCP-55033/origin
Random Seed: 1762945017 - will randomize all specs
Will run 1 of 1 specs
[sig-node] NODE initContainer policy,volume,readines,quota check KUBELET_LOG_LEVEL is 2
github.com/openshift/origin/test/extended/node/node.go:15
STEP: Creating a kubernetes client @ 11/12/25 16:27:01.774
I1112 16:27:01.775613 60942 discovery.go:214] Invalidating discovery information
STEP: check Kubelet Log Level
@ 11/12/25 16:27:01.775
I1112 16:27:13.581561 60942 node_utils.go:22]
Node Names are ip-10-0-11-243.us-east-2.compute.internal ip-10-0-15-30.us-east-2.compute.internal ip-10-0-55-165.us-east-2.compute.internal ip-10-0-56-233.us-east-2.compute.internal ip-10-0-75-169.us-east-2.compute.internal ip-10-0-95-145.us-east-2.compute.internal
I1112 16:27:14.689276 60942 node_utils.go:28]
Node ip-10-0-11-243.us-east-2.compute.internal Status is True
I1112 16:27:24.692931 60942 node_utils.go:37] KUBELET_LOG_LEVEL is 2.
• [22.939 seconds]
Ran 1 of 1 Specs in 22.939 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[
{
"name": "[sig-node] NODE initContainer policy,volume,readines,quota check KUBELET_LOG_LEVEL is 2 [Suite:openshift/conformance/parallel]",
"lifecycle": "blocking",
"duration": 22939,
"startTime": "2025-11-12 10:57:01.757463 UTC",
"endTime": "2025-11-12 10:57:24.697116 UTC",
"result": "passed",
"output": " STEP: Creating a kubernetes client @ 11/12/25 16:27:01.774\n STEP: check Kubelet Log Level\n @ 11/12/25 16:27:01.775\nI1112 16:27:13.581561 60942 node_utils.go:22] \nNode Names are ip-10-0-11-243.us-east-2.compute.internal ip-10-0-15-30.us-east-2.compute.internal ip-10-0-55-165.us-east-2.compute.internal ip-10-0-56-233.us-east-2.compute.internal ip-10-0-75-169.us-east-2.compute.internal ip-10-0-95-145.us-east-2.compute.internal\nI1112 16:27:14.689276 60942 node_utils.go:28] \nNode ip-10-0-11-243.us-east-2.compute.internal Status is True\n\nI1112 16:27:24.692931 60942 node_utils.go:37] KUBELET_LOG_LEVEL is 2. \n\n"
}
]
```