improve UI advice when user needs to delete the cluster #10460
Comments
@medyagh You mean we do not need the log?
/assign
@medyagh btw, I forgot to add the output I received after running the command without the changes done in #10473. I was not getting the errors below.
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/assign If I understand the issue, it's redundant and confusing to show both of these warnings.

The first warning comes from the line where, if the memory flag or the memory size has been changed, the warning is generated. Note that memory size is not the only resource with a warning like this; there are similar warnings for CPUs, disk size, and the number of extra disks. The second warning comes from the line where, if the host fails to start twice, it fails with a dynamically generated message.

Both warnings are reached from a code path that appears to re-generate the cluster config just after deleting the cluster, producing the first warning. It then tries to provision a new cluster, leading to the second warning. In this case, does the first warning still make sense? If we are re-generating the config after deleting the cluster, is it still a problem to change the memory size?

We could change updateExistingConfigFromFlags() to not log existing-cluster conflict warnings, since there is no existing cluster; we are only using the old cluster's config to generate a new one. We could add a new flag argument to indicate that there is no existing cluster, or we could add a new function that re-generates a config without assuming an existing cluster exists. I will have a crack at the code this week, and any advice would be much appreciated, since I have probably made a lot of mistakes and would like to learn.
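A minimal sketch of the flag-argument idea floated above. The names here (`updateExistingConfigFromFlags`, `ClusterConfig`, the `clusterExists` parameter) are simplified stand-ins for illustration, not minikube's actual signatures: the helper only emits the existing-cluster conflict warning when a cluster is actually on disk, and otherwise applies the flag value silently.

```go
package main

import "fmt"

// ClusterConfig is a hypothetical, pared-down stand-in for
// minikube's persisted cluster configuration.
type ClusterConfig struct {
	Memory int // MB
	CPUs   int
}

// updateExistingConfigFromFlags applies flag overrides to cc.
// When clusterExists is true, changing an immutable resource
// only warns; when false (e.g. the config is being reused to
// seed a fresh cluster after deletion), the new value is applied
// without the misleading "please first delete" advice.
func updateExistingConfigFromFlags(cc *ClusterConfig, memFlag, cpuFlag int, clusterExists bool) {
	if memFlag != 0 && memFlag != cc.Memory {
		if clusterExists {
			fmt.Println("❗ You cannot change the memory size for an existing minikube cluster. Please first delete the cluster.")
		} else {
			cc.Memory = memFlag // no cluster on disk, safe to apply
		}
	}
	if cpuFlag != 0 && cpuFlag != cc.CPUs {
		if clusterExists {
			fmt.Println("❗ You cannot change the CPU count for an existing minikube cluster. Please first delete the cluster.")
		} else {
			cc.CPUs = cpuFlag
		}
	}
}

func main() {
	cc := ClusterConfig{Memory: 2048, CPUs: 2}
	// Cluster was just deleted, so the override is applied quietly.
	updateExistingConfigFromFlags(&cc, 4096, 0, false)
	fmt.Println(cc.Memory)
}
```

The alternative mentioned in the comment, a separate function that regenerates a config with no existing-cluster assumption, would avoid threading a boolean through every call site, at the cost of some duplication.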
@Zanderax please feel free to take a stab at it and make a PR!
Sorry, my state came out of a 3-month lockdown last week, so I forgot about this a little while indulging in the novelty of leaving my house. I'll try to get around to it soon.
Sorry, I'm going to unassign myself. I didn't expect to be this busy after lockdown finished and I won't be able to get to this. Sorry about that :)
/assign
@e-tienne Are you still working on this? I unassigned this task from you, but if you are actively working on it please reassign yourself, thanks!
/assign
Looks like this occurs when the driver is set to
/assign
/assign
In contrast to the discussion here: #10460 (comment), I found that the first statement was not caused by deletion and recreation of the cluster.
Here we confuse the user with two things:
❗ You cannot change the memory size for an existing minikube cluster. Please first delete the cluster.
and
😿 Failed to start ssh bare metal machine. Running "minikube delete -p foo" may fix it: config: please provide an IP address
We should have stuck with the first piece of advice.