emit more status output during 'Preparing Kubernetes' #12082
@drewpca minikube not reporting a status line for more than 90 seconds would definitely be an issue under normal operations, as the entire process itself should finish within 90 seconds. That said, from looking at the logs, minikube was not in a healthy state, evidenced by the extreme number of retries being executed. One question about this run in particular: was this run after gcloud was cancelled via Ctrl-C?
Yes, there was a Ctrl-C (that's part of our repro). I believe we run 'minikube stop' in that case. We run 'minikube delete' if you use 'gcloud code clean-up'.
See #12232 for a fix to the cloudshell/cgroups issue.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
I'm testing 'gcloud beta code dev' in Cloud Shell. A pretty reproducible problem is that minikube emits some status JSON lines and then is quiet for 90 seconds. More than 90 seconds between steps is too long, and gcloud decides minikube is hung. I'd rather not raise that timeout, since it's bad UX for the status to be stalled even for 10 seconds. Please consider emitting more status lines. Per our early discussions, you might use fractional step numbers if you're reporting a long single step.

I'm attaching the log of a broken run. Because of buffering, the --output=json status lines are at the bottom. However, we write some status-bar ASCII to the output, so you can see that the last status display appeared here at 23:01:03:
...and then 90 seconds later, at about 23:03:33, we can see gcloud killing minikube:
run2.txt
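To make the timeout behavior concrete, here is a minimal sketch (not gcloud's actual implementation) of a consumer that reads the line-delimited JSON events from 'minikube start --output=json' and treats a long silence between events as a hang. The event field names used below (type, data.currentstep, data.totalsteps, data.name) are my approximation of minikube's step events and may not match the real schema exactly; the 90-second threshold is the one described in this report.

```go
// Sketch of a wrapper that watches minikube's --output=json stream and
// kills the process if no status line arrives within an idle timeout.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
	"time"
)

// stepEvent loosely models one JSON line from `minikube start --output=json`.
// Field names are assumptions for illustration; "currentstep" is where a
// fractional step number (e.g. "7.1") could surface during a long step.
type stepEvent struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
		TotalSteps  string `json:"totalsteps"`
		Name        string `json:"name"`
	} `json:"data"`
}

func main() {
	const idleTimeout = 90 * time.Second // silence longer than this is treated as a hang

	cmd := exec.Command("minikube", "start", "--output=json")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Feed stdout lines into a channel so we can select against a timer.
	lines := make(chan string)
	go func() {
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			lines <- sc.Text()
		}
		close(lines)
	}()

	timer := time.NewTimer(idleTimeout)
	for {
		select {
		case line, ok := <-lines:
			if !ok {
				fmt.Println("minikube exited")
				return
			}
			var ev stepEvent
			if json.Unmarshal([]byte(line), &ev) == nil && ev.Data.Name != "" {
				fmt.Printf("step %s/%s: %s\n", ev.Data.CurrentStep, ev.Data.TotalSteps, ev.Data.Name)
			}
			// Any output counts as progress; reset the hang detector.
			if !timer.Stop() {
				<-timer.C
			}
			timer.Reset(idleTimeout)
		case <-timer.C:
			fmt.Println("no status line for 90s; assuming minikube is hung, killing it")
			_ = cmd.Process.Kill()
			return
		}
	}
}
```

With a watcher like this, every extra status line minikube emits during long phases such as 'Preparing Kubernetes' resets the idle timer, which is why more frequent (or fractional-step) events would avoid the false hang detection described above.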