
Minikube fails to start kube-dns #2019

Closed
gregd72002 opened this issue Oct 1, 2017 · 12 comments
Labels
co/none-driver kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@gregd72002

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Environment:

Environment variables:

  • CHANGE_MINIKUBE_NONE_USER=true
  • KUBECONFIG=/home/ec2-user/.kube/config
  • MINIKUBE_HOME=/home/ec2-user

Minikube version (use minikube version): minikube version: v0.22.2

  • OS (e.g. from /etc/os-release): Amazon Linux AMI 2017.03
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): none
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): [none?]
  • Install tools:
  • Others:

What happened:
After starting minikube (sudo -E minikube start --memory 8000 --cpus 2 --vm-driver=none)
kube-dns fails to start
kube-system po/kube-dns-910330662-qb464 1/3 CrashLoopBackOff 12 15m
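A couple of standard kubectl commands can pull more detail on a crash like this. The pod name below is taken from the report above and will differ once the pod is recreated; this is only a diagnostic sketch:

```shell
# Gather more detail on the crashing kube-dns pod.
# Pod name comes from the report above; it changes when the pod is recreated.
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n kube-system describe pod kube-dns-910330662-qb464 || true
  # Logs from the previous (crashed) instance of the kubedns container
  kubectl -n kube-system logs kube-dns-910330662-qb464 -c kubedns --previous || true
else
  echo "kubectl not found; run these on the minikube host"
fi
```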

What you expected to happen:
kube-dns starts

Output of `kubectl logs kube-dns-910330662-qb464 --namespace=kube-system -c kubedns`:

I1001 14:32:09.527073     141 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...

E1001 14:32:34.027299     141 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.ConfigMap: Get https://10.0.0.1:443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-dns&resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout

I1001 14:32:34.527053     141 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...

I1001 14:32:35.027022     141 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...

E1001 14:32:35.031767     141 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.0.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout

E1001 14:32:35.031827     141 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.0.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout
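The repeated i/o timeouts above indicate the pod cannot reach the apiserver's ClusterIP at all, which usually points at host networking (iptables, SELinux, firewall) rather than kube-dns itself. A quick connectivity probe from inside the cluster (hypothetical pod name `nettest`; assumes a busybox image can be pulled, and that 10.0.0.1 is the service IP shown in these logs):

```shell
# Probe the kubernetes service ClusterIP from a throwaway busybox pod.
# 'nettest' is a hypothetical pod name; substitute the ClusterIP from your logs.
if command -v kubectl >/dev/null 2>&1; then
  kubectl run nettest --rm -it --image=busybox --restart=Never -- \
    nc -w 3 10.0.0.1 443 && echo "ClusterIP reachable" || echo "connection timed out"
else
  echo "kubectl not found; run this on the minikube host"
fi
```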

Anything else we need to know:
Output of `kubectl get all --all-namespaces`:

kubectl get all --all-namespaces
NAMESPACE     NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deploy/kube-dns   1         1         1            0           15m

NAMESPACE     NAME                    DESIRED   CURRENT   READY     AGE
kube-system   rs/kube-dns-910330662   1         1         0         15m

NAMESPACE     NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deploy/kube-dns   1         1         1            0           15m

NAMESPACE     NAME                                     READY     STATUS             RESTARTS   AGE
kube-system   po/kube-addon-manager-ip-172-31-43-108   1/1       Running            6          15m
kube-system   po/kube-dns-910330662-qb464              1/3       CrashLoopBackOff   12         15m
kube-system   po/kubernetes-dashboard-qmgwx            0/1       CrashLoopBackOff   7          15m

NAMESPACE     NAME                      DESIRED   CURRENT   READY     AGE
kube-system   rc/kubernetes-dashboard   1         1         0         15m
@gregd72002 gregd72002 changed the title Minikube Minikube fails to start kube-dns Oct 1, 2017
@r2d4 r2d4 added co/none-driver kind/bug Categorizes issue or PR as related to a bug. labels Oct 5, 2017
@m3co-code

m3co-code commented Oct 13, 2017

The same problem exists locally on my Mac. Same minikube version.

@Siilwyn

Siilwyn commented Oct 23, 2017

@gregd72002 not sure but could this be the same issue as #2027?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 21, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 20, 2018
@a4abhishek

The problem still persists.
minikube version: 0.24.0
Kubernetes version: Client: 1.9.3, Server: 1.8.0
OS: Ubuntu 17.10
VM-Driver: none

Find minikube logs here.

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 20, 2018
@gokhandincer

I have the same problem.
I ran minikube start with this command: minikube start --extra-config=apiserver.Authorization.Mode=RBAC

kube-system   kube-addon-manager-minikube             1/1       Running            0          4m
kube-system   kube-dns-54cccfbdf8-m7wdr               2/3       CrashLoopBackOff   5          4m
kube-system   kubernetes-dashboard-77d8b98585-bf6tw   0/1       CrashLoopBackOff   5          4m
kube-system   storage-provisioner                     1/1       Running            0          4m

minikube version : 0.25.2
Kubernetes version: 1.9.4
OS: Mac OS X High Sierra v10.13.3

@Hartimer

Hartimer commented Apr 4, 2018

The very same thing happens to me. I have a slightly more recent EC2 image:

Linux ip-172-31-34-27 4.9.77-31.58.amzn1.x86_64 #1 SMP Thu Jan 18 22:15:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Both the dashboard pod and the storage-provisioner also fail as a consequence (they reach Running but eventually crash).

Dashboard:

2018/04/04 22:59:01 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ

Storage provisioner:
F0404 22:59:11.144046 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout

Any ideas? @gregd72002 have you figured out the problem?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 3, 2018
@malagant
Copy link

We had success after disabling SELinux.
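One sketch of that workaround on the Amazon Linux / RHEL family, under the assumption of a standard SELinux install (a temporary switch plus a persistent config change):

```shell
# Print the current SELinux mode (Enforcing/Permissive/Disabled), if available
command -v getenforce >/dev/null 2>&1 && getenforce

# Temporarily switch to permissive mode; this is lost on reboot
command -v setenforce >/dev/null 2>&1 && sudo setenforce 0

# Persist across reboots by flipping the mode in /etc/selinux/config
[ -f /etc/selinux/config ] && \
  sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config || true
```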

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 9, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
