enhance: add active column for profile list cmd #14079
Conversation
Welcome @vharsh!
Hi @vharsh. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can one of the admins verify this patch?
@medyagh PTAL, thanks.
@medyagh @spowelljr Can you approve the workflows now? Do the current changes look okay?
I think it would be good to implement a small integration test to make sure that the JSON output is acting correctly, i.e. start a cluster, check that the correct cluster has `Active: true`, then change the active profile with `minikube profile <profile>`, and then check the JSON output again to make sure the `Active: true` changed.
@spowelljr Should the |
Okay, I think I've understood what I need to do.
```go
// TODO(@vharsh): Figure out why go vet complains when this is moved into a loop
defer CleanupWithLogs(t, profiles[0], cancel)
defer CleanupWithLogs(t, profiles[1], cancel)
```
@spowelljr For some reason calling `CleanupWithLogs` with the cancel func reference inside a for loop triggers warnings, but calling it outside one doesn't. The exact error: `the cancel function is not used on all paths (possible context leak)`.
My guess is that it's assuming that `profiles` could be empty (even though that's not possible) and that `cancel` would never be called. Could maybe try doing `defer cancel()` right after `ctx, cancel := context.WithTimeout` and then putting `CleanupWithLogs` back in the `for` loop.
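A sketch of that suggested shape (the timeout value and the loop body are illustrative; `CleanupWithLogs` is the helper from the test above):

```go
// Sketch of the suggested restructuring: defer cancel() immediately after the
// context is created, which satisfies go vet's lostcancel check, and keep the
// per-profile cleanup inside the loop.
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Minute)
defer cancel()

for _, p := range profiles {
	// Deferring in a loop is intentional here: each profile should be cleaned
	// up (with its logs collected) when the test function returns.
	defer CleanupWithLogs(t, p, cancel)
	// ... start and exercise the profile ...
}
```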
/ok-to-test
kvm2 driver with docker runtime
Times for minikube start: 53.0s 51.6s 54.1s 52.3s 53.0s
Times for minikube ingress: 31.1s 28.2s 28.7s 25.2s 29.2s

docker driver with docker runtime
```go
defer CleanupWithLogs(t, profiles[0], cancel)
defer CleanupWithLogs(t, profiles[1], cancel)
for _, p := range profiles {
	rr, err := Run(t, exec.CommandContext(ctx, Target(), "start", "-p", p))
```
Will need to append `StartArgs()` to the args list, then pass the args list into the command.

```go
args := append([]string{"start", "-p", p}, StartArgs()...)
rr, err := Run(t, exec.CommandContext(ctx, Target(), args...))
```
Okay, I believe `StartArgs()` is what makes the E2E runs use the different minikube drivers; I'll add the `StartArgs()` call to all `minikube start` operations in my test.
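Applied to the start loop from the diff above, that could look like the following sketch (abbreviated; the rest of the test is omitted):

```go
// Sketch: every `minikube start` in the test appends StartArgs() so it picks
// up the driver/runtime flags the E2E job is configured with.
for _, p := range profiles {
	args := append([]string{"start", "-p", p}, StartArgs()...)
	if _, err := Run(t, exec.CommandContext(ctx, Target(), args...)); err != nil {
		t.Fatalf("failed to start profile %q: %v", p, err)
	}
}
```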
These are the flake rates of all failed tests.
Too many tests failed - See test logs for more details. To see the flake rates of all tests by environment, click here.
kvm2 driver with docker runtime
Times for minikube start: 52.1s 51.4s 49.0s 51.1s 51.4s
Times for minikube ingress: 25.5s 30.1s 27.1s 25.5s 26.0s

docker driver with docker runtime
Times for minikube start: 41.1s 24.6s 24.5s 23.6s 24.1s
Times for minikube ingress: 19.9s 20.9s 21.9s 23.9s 23.4s

docker driver with containerd runtime
Times for minikube start: 33.6s 28.3s 32.1s 27.7s 32.2s
Times for minikube ingress: 18.9s 31.4s 17.9s 20.9s 17.9s
These are the flake rates of all failed tests.
Too many tests failed - See test logs for more details. To see the flake rates of all tests by environment, click here.
/retest
/retest-this-please
kvm2 driver with docker runtime
Times for minikube start: 54.0s 51.3s 52.3s 52.4s 52.2s
Times for minikube ingress: 28.1s 29.7s 28.2s 28.1s 26.1s

docker driver with docker runtime
Times for minikube start: 29.4s 25.1s 25.4s 25.3s 26.2s
Times for minikube ingress: 23.5s 24.0s 21.4s 22.5s 22.9s

docker driver with containerd runtime
Times for minikube start: 32.0s 40.6s 30.1s 33.4s 32.7s
Times for minikube ingress: 22.4s 22.4s 32.4s 18.9s 18.9s
These are the flake rates of all failed tests.
Too many tests failed - See test logs for more details. To see the flake rates of all tests by environment, click here.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: spowelljr, vharsh. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
fixes: #14073
Before:
After:
Signed-off-by: Harsh Vardhan harsh.vardhan@mayadata.io