Upgrade minikube to kubernetes 1.10.5 to address backoffLimit bug #3074
Comments
Thank you!

Until we get a release with this out, you should be able to specify version 1.10.5 on your own:

$ minikube start --kubernetes-version=v1.10.5
Starting local Kubernetes v1.10.5 cluster...
Starting VM...
Still not fixed in the latest minikube release. This bug seems critical to me.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Obsolete, since we're at v1.13.2 nowadays.
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Critical bug in kubernetes v1.10.x
See: kubernetes/kubernetes#62382
Please provide the following details:
Environment:
Win64
Minikube version (use minikube version): 0.28.2
What happened:
When a job pod fails, kubernetes ignores backoffLimit and creates an infinite number of pods. See: kubernetes/kubernetes#62382
What you expected to happen:
The job should stop creating pods after reaching the backoff limit.
How to reproduce it (as minimally and precisely as possible):
Create a failing job with .spec.backoffLimit set to 3 (a minimal example manifest is sketched at the end of this report).
Output of minikube logs (if applicable):
Anything else do we need to know:
Fixed in Kubernetes 1.10.5.
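For reference, a minimal Job manifest of the kind the reproduction step describes. This is an illustrative sketch, not taken from the original report; the name, image, and command are placeholder choices.

# Hypothetical reproduction manifest (placeholders, not from the report):
# a Job whose pod always fails, with the backoff limit from the report.
apiVersion: batch/v1
kind: Job
metadata:
  name: backofflimit-repro       # placeholder name
spec:
  backoffLimit: 3                # the job should give up after ~3 retries
  template:
    spec:
      restartPolicy: Never       # each retry creates a new pod
      containers:
      - name: fail
        image: busybox           # illustrative image choice
        command: ["sh", "-c", "exit 1"]   # always fails

Applying this with kubectl apply -f <file> and watching kubectl get pods on an affected v1.10.x cluster should show failed pods accumulating well past the limit; on v1.10.5 or later the Job stops retrying and is marked Failed.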