
Localkube crashing: "Connection reset by peer" #1252

Closed
imathews opened this issue Mar 16, 2017 · 43 comments
Labels
kind/bug

Comments

@imathews

This is a BUG REPORT

Minikube version: v0.17.1
Environment:

  • OS: macOS 10.12.3
  • VM Driver: virtualbox
  • ISO version: minikube-v1.0.7.iso

What happened:
When running minikube, my node.js application is failing fairly regularly (~ every 15-30min), printing the error:

error: read tcp 192.168.99.1:50064->192.168.99.100:8443: read: connection reset by peer

When I then run, for example, kubectl get pods, I get the message:

The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?

minikube status prints:

minikubeVM: Running
localkube: Stopped

In order to get things back up and running, I need to run minikube start (which for some reason takes several minutes) — though at this point the networking and name resolution between different services is broken (e.g., nginx can't discover the nodejs app), and the only practical resolution is to restart all of my kubernetes services.
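
For reference, the recovery dance looks roughly like this (just a sketch; kubectl delete pods --all is simply our blunt way of letting the deployments recreate everything, i.e. the "restart all services" step):

minikube start                 # bring localkube back up (takes several minutes)
kubectl get pods               # wait until everything reports Running again
kubectl delete pods --all      # bounce all pods so discovery/name resolution recovers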

What you expected to happen:
Minikube and localkube should persist until they are explicitly stopped.

How to reproduce it (as minimally and precisely as possible):
This is the hardest part — sometimes I get crashes every 5 minutes, sometimes it goes for hours without any problem, and crashes seem to be independent of my development behavior. This is affecting all four developers on our team, who all have fairly similar setups. I've tried downgrading all the way to v0.13 with no luck.

@r2d4
Contributor

r2d4 commented Mar 16, 2017

Can you provide the output of minikube logs?

r2d4 added the kind/bug label on Mar 16, 2017
@r2d4
Contributor

r2d4 commented Mar 16, 2017

And any steps to reproduce would help; I haven't seen localkube panicking like this before.

@imathews
Author

Hi @r2d4 — see attached for the logs.

minikube_crash_log.txt

With respect to recreating this issue, it's quite tricky because of how sporadic it is, and my understanding of minikube internals is probably too limited to know how what I'm doing might interact with it. However, maybe I can describe my environment a bit more and what's going on when the crash happens. First, to list out our services and pods:

kubectl get pods
NAME                              READY     STATUS    RESTARTS   AGE
app-4022611113-7v8d7              1/1       Running   1          3h
data-master-2974411674-56cg4      1/1       Running   1          3h
data-slave-1560132676-643x2       1/1       Running   1          3h
data-slave-1560132676-hkvb0       1/1       Running   1          3h
data-slave-1560132676-xzg43       1/1       Running   1          3h
maps-572703488-xgzh1              1/1       Running   1          3h
model-compiler-2579016729-svmxz   1/1       Running   1          3h
nginx-1306570553-zx4rh            1/1       Running   2          3h
postgres-2463450622-24zz4         1/1       Running   1          3h
redis-master-547309603-40dnq      1/1       Running   1          3h
redis-slave-1382215924-4p2l2      1/1       Running   1          3h
redis-slave-1382215924-pqfgd      1/1       Running   1          3h
kubectl get services
NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
app              10.0.0.173   <none>        8443/TCP                     3h
data-master      10.0.0.123   <none>        8080/TCP                     3h
data-slave       10.0.0.198   <none>        8080/TCP                     3h
kubernetes       10.0.0.1     <none>        443/TCP                      3h
maps             10.0.0.15    <none>        8080/TCP                     3h
model-compiler   10.0.0.156   <none>        8080/TCP                     3h
nginx            10.0.0.45    <pending>     80:31022/TCP,443:30968/TCP   3h
postgres         10.0.0.95    <none>        5432/TCP                     3h
redis-master     10.0.0.42    <none>        6379/TCP                     3h
redis-slave      10.0.0.204   <none>        6379/TCP                     3h

The cluster runs with 4GB of memory, and while it's up we're generally forwarding three ports to localhost (nginx -> 8443, postgres -> 5432, redis -> 6379). We're mounting a couple of volumes based on the host path, and streaming the logs for one of the pods.
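
Concretely, the day-to-day commands look roughly like this (a sketch; the pod names are taken from the listing above and change on every deploy, and the exact local:remote port mappings are approximate):

minikube start --memory 4096                                  # 4GB VM
kubectl port-forward nginx-1306570553-zx4rh 8443:443          # nginx    -> localhost:8443
kubectl port-forward postgres-2463450622-24zz4 5432:5432      # postgres -> localhost:5432
kubectl port-forward redis-master-547309603-40dnq 6379:6379   # redis    -> localhost:6379
kubectl logs -f app-4022611113-7v8d7                          # stream logs for one pod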

Beyond this, everything (I think) is fairly standard. We run nodejs for several of the containers, and have nodemon polling the hostMount to look for changes in files and restart the server accordingly. This bug seems to happen irrespective of whether or not anything is really happening within the cluster, and doesn't seem to be related to resource usage.

If there's any other detail I can provide, or troubleshooting I can do, please let me know. Many thanks!

@r2d4
Contributor

r2d4 commented Mar 17, 2017

Thanks for the detailed reports

One more thing: can you run minikube ssh ps -aux and see if there are any zombie processes?

rootDiskErr: <nil>, rootInodeErr: cmd [find /mnt/sda1/var/lib/docker/aufs/diff/21e1f05d4ffcab32160a3bebd5279e559c7955a38137221ff72793316262677e -xdev -printf .] failed. stderr: find: unrecognized: -printf

This error from the logs was the result of an old bug in the image that has since been fixed (#923).

If you see a bunch of zombie processes, you might want to make sure you're using the latest version of the minikube ISO: run minikube delete, then rm -rf ~/.minikube/cache, and then start it normally.
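
That is, something like (a sketch; ~/.minikube/cache is the default cache location on the host):

minikube ssh ps -aux        # look for processes in state Z / <defunct>
minikube delete             # if there are zombies, recreate the VM on a fresh ISO
rm -rf ~/.minikube/cache
minikube start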

@imathews
Author

Thanks @r2d4 — I just realized that those logs were from a colleague's machine on an older build of minikube (0.14); unfortunately I'm still getting this behavior in 0.17.1. I ran minikube delete and removed the cache, and then started minikube again (confirmed minikube version prints v0.17.1).

Interestingly, when I run minikube logs after the crash, it now just prints -- no entries --. I'm attaching some relevant files that might help us debug:

[before crash] minikube logs.txt — this was taken when everything was up and running and seemingly stable; crashed 15min later without me touching anything.

[after crash] minikube logs.txt

[before crash] minikube ps -aux.txt

[after crash] minikube ps -aux.txt

@imathews
Author

^ @r2d4 Just pinging on this to see if you have any guidance on how I might best debug this. It's definitely hampering our team quite a bit, and I'm happy to sink a fair amount of time into investigating, but any pointers would be incredibly useful and would save me a lot of time.

Many thanks!
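
In the meantime, one way to capture more state before the next crash (a rough sketch, since minikube logs comes back empty afterwards) is to snapshot status and logs in a loop from the host:

while true; do
  date >> minikube-watch.log
  minikube status >> minikube-watch.log 2>&1
  minikube logs > "logs-$(date +%s).txt" 2>&1    # keep the last pre-crash logs around
  sleep 60
done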

@stevesloka
Contributor

FYI, on my Linux machine I'm seeing the same behavior: I run minikube start and localkube ends up stopped. I've tested with both KVM and VirtualBox, same behavior. Happy to provide my logs if that will help.

@r2d4
Contributor

r2d4 commented Mar 23, 2017

@imathews I haven't been able to get to the root of the issue, although this repeated line from the logs might be worth investigating:

Mar 17 18:51:54 minikube localkube[3706]: E0317 18:51:54.663464 3706 utils.go:91] Unable to get uid from job migrations in namespace default

Which comes from some of the cron job code:
https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/cronjob/utils.go#L94

It looks like there are still a few issues with cronjobs keeping around deleted pods (kubernetes/kubernetes#28977). Maybe they aren't getting deleted and that's causing issues? A shot in the dark, though.
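
If you want to check that theory, something like this might show whether the migrations job is leaving stale pods around (a guess based on the log line above; --show-all also lists terminated pods):

kubectl get jobs                  # is the "migrations" job still there?
kubectl get pods --show-all       # includes completed/failed pods a job may have left behind
kubectl describe job migrations   # see what it still references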

You might want to try our kubernetes 1.6 branch from CI. (We'll be merging this into master once kubernetes 1.6 goes GA)

https://storage.googleapis.com/minikube-builds/1266/minikube-darwin-amd64
https://storage.googleapis.com/minikube-builds/1266/minikube-linux-amd64
https://storage.googleapis.com/minikube-builds/1266/minikube-windows-amd64.exe
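
On macOS, trying that build would look roughly like this (a sketch; swap in the linux or windows URL as appropriate):

curl -Lo minikube https://storage.googleapis.com/minikube-builds/1266/minikube-darwin-amd64
chmod +x minikube && sudo mv minikube /usr/local/bin/
minikube delete                   # recreate the cluster on the new build
minikube start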

And @stevesloka any additional logs would help!

@imathews
Author

^ Awesome, thanks for the pointers. I'll investigate and let you know what I find.

@sebgoa

sebgoa commented Mar 25, 2017

@r2d4 I am seeing similar behavior with v0.17.1. It seems very unstable, and indeed localkube seems to crash.

OSX 10.12.3

no cron jobs ever..

@sebgoa

sebgoa commented Mar 25, 2017

minikube logs

Mar 25 18:32:18 minikube systemd[1]: localkube.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Mar 25 18:32:18 minikube systemd[1]: localkube.service: Unit entered failed state.
Mar 25 18:32:18 minikube systemd[1]: localkube.service: Failed with result 'exit-code'.
Mar 25 18:32:21 minikube systemd[1]: localkube.service: Service hold-off time over, scheduling restart.
Mar 25 18:32:21 minikube systemd[1]: Stopped Localkube.
Mar 25 18:32:21 minikube systemd[1]: Starting Localkube...
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.000213   24074 start.go:77] Feature gates:%!(EXTRA string=)
Mar 25 18:32:22 minikube localkube[24074]: localkube host ip address: 10.0.2.15
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.011282   24074 server.go:215] Using iptables Proxier.
Mar 25 18:32:22 minikube localkube[24074]: W0325 18:32:22.011848   24074 server.go:468] Failed to retrieve node info: Get http://127.0.0.1:8080/api/v1/nodes/minikube: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: W0325 18:32:22.012137   24074 proxier.go:249] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
Mar 25 18:32:22 minikube localkube[24074]: W0325 18:32:22.012284   24074 proxier.go:254] clusterCIDR not specified, unable to distinguish between internal and external traffic
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.012515   24074 server.go:227] Tearing down userspace rules.
Mar 25 18:32:22 minikube localkube[24074]: Starting etcd...
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.040442   24074 reflector.go:188] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.041227   24074 reflector.go:188] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: recovered store from snapshot at index 130013
Mar 25 18:32:22 minikube localkube[24074]: name = kubeetcd
Mar 25 18:32:22 minikube localkube[24074]: data dir = /var/lib/localkube/etcd
Mar 25 18:32:22 minikube localkube[24074]: member dir = /var/lib/localkube/etcd/member
Mar 25 18:32:22 minikube localkube[24074]: heartbeat = 100ms
Mar 25 18:32:22 minikube localkube[24074]: election = 1000ms
Mar 25 18:32:22 minikube localkube[24074]: snapshot count = 10000
Mar 25 18:32:22 minikube localkube[24074]: advertise client URLs = http://0.0.0.0:2379
Mar 25 18:32:22 minikube localkube[24074]: restarting member fcf2ad36debdd5bb in cluster 7f055ae3b0912328 at commit index 130290
Mar 25 18:32:22 minikube localkube[24074]: fcf2ad36debdd5bb became follower at term 90
Mar 25 18:32:22 minikube localkube[24074]: newRaft fcf2ad36debdd5bb [peers: [fcf2ad36debdd5bb], term: 90, commit: 130290, applied: 130013, lastindex: 130290, lastterm: 90]
Mar 25 18:32:22 minikube localkube[24074]: enabled capabilities for version 3.0
Mar 25 18:32:22 minikube localkube[24074]: added member fcf2ad36debdd5bb [http://0.0.0.0:2380] to cluster 7f055ae3b0912328 from store
Mar 25 18:32:22 minikube localkube[24074]: set the cluster version to 3.0 from store
Mar 25 18:32:22 minikube localkube[24074]: starting server... [version: 3.0.14, cluster version: 3.0]
Mar 25 18:32:22 minikube localkube[24074]: Starting apiserver...
Mar 25 18:32:22 minikube localkube[24074]: Starting controller-manager...
Mar 25 18:32:22 minikube localkube[24074]: Starting scheduler...
Mar 25 18:32:22 minikube localkube[24074]: Starting kubelet...
Mar 25 18:32:22 minikube localkube[24074]: Starting proxy...
Mar 25 18:32:22 minikube localkube[24074]: Starting storage-provisioner...
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.245344   24074 config.go:527] Will report 10.0.2.15 as public IP address.
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.252614   24074 controllermanager.go:125] unable to register configz: register config "componentconfig" twice
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.253765   24074 server.go:78] unable to register configz: register config "componentconfig" twice
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.254756   24074 feature_gate.go:189] feature gates: map[]
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.254998   24074 server.go:297] unable to register configz: register config "componentconfig" twice
Mar 25 18:32:22 minikube localkube[24074]: W0325 18:32:22.255146   24074 server.go:605] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Using default client config instead.
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.255597   24074 docker.go:356] Connecting to docker on unix:///var/run/docker.sock
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.255781   24074 docker.go:376] Start docker client with request timeout=2m0s
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.256159   24074 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.267762   24074 leaderelection.go:228] error retrieving resource lock kube-system/kube-controller-manager: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.268948   24074 conntrack.go:66] Setting conntrack hashsize to 32768
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.273852   24074 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.274130   24074 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.274488   24074 leaderelection.go:228] error retrieving resource lock kube-system/kube-scheduler: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: storage-provisioner: Exit with error: Error getting server version: Get http://localhost:8080/version: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.290830   24074 event.go:208] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: getsockopt: connection refused' (may retry after sleeping)
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.290906   24074 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed to list *extensions.ReplicaSet: Get http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.290953   24074 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.PersistentVolumeClaim: Get http://127.0.0.1:8080/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.291011   24074 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:473: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.291071   24074 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:470: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.291114   24074 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed to list *api.PersistentVolume: Get http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.291159   24074 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.291208   24074 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.291280   24074 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:457: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.291325   24074 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.291382   24074 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:75: Failed to list *storage.StorageClass: Get http://127.0.0.1:8080/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.291715   24074 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119: Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.292246   24074 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103: Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.309321   24074 manager.go:143] cAdvisor running in container: "/system.slice/localkube.service"
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.367031   24074 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.367682   24074 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.369579   24074 fs.go:117] Filesystem partitions: map[/dev/sda1:{mountpoint:/mnt/sda1 major:8 minor:1 fsType:ext4 blockSize:0}]
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.371383   24074 manager.go:198] Machine: {NumCores:2 CpuFrequency:1599999 MemoryCapacity:2097799168 MachineID:431b7ae93e8747ea8e594b465d74fded SystemUUID:01A6AC49-1B9B-4AEF-9FEB-61E7AD3D0E4F BootID:a6299bd4-711f-4c02-949b-3eb5db54d6cc Filesystems:[{Device:rootfs Capacity:0 Type:vfs Inodes:0 HasInodes:true} {Device:/dev/sda1 Capacity:19163156480 Type:vfs Inodes:2434064 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:20971520000 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:08:00:27:d3:25:94 Speed:1000 Mtu:1500} {Name:eth1 MacAddress:08:00:27:48:51:84 Speed:1000 Mtu:1500} {Name:sit0 MacAddress:00:00:00:00 Speed:0 Mtu:1480}] Topology:[{Id:0 Memory:2097799168 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:3145728 Type:Unified Level:3}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:3145728 Type:Unified Level:3}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.373789   24074 manager.go:204] Version: {KernelVersion:4.7.2 ContainerOsVersion:Buildroot 2016.08 DockerVersion:1.11.1 CadvisorVersion: CadvisorRevision:}
Mar 25 18:32:22 minikube localkube[24074]: W0325 18:32:22.384614   24074 container_manager_linux.go:205] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.385154   24074 kubelet.go:242] Adding manifest file: /etc/kubernetes/manifests
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.385318   24074 kubelet.go:252] Watching apiserver
Mar 25 18:32:22 minikube localkube[24074]: [restful] 2017/03/25 18:32:22 log.go:30: [restful/swagger] listing is available at https://10.0.2.15:8443/swaggerapi/
Mar 25 18:32:22 minikube localkube[24074]: [restful] 2017/03/25 18:32:22 log.go:30: [restful/swagger] https://10.0.2.15:8443/swaggerui/ is mapped to folder /swagger-ui/
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.409446   24074 reflector.go:188] pkg/kubelet/kubelet.go:386: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.409575   24074 reflector.go:188] pkg/kubelet/kubelet.go:378: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.409640   24074 reflector.go:188] pkg/kubelet/config/apiserver.go:44: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: W0325 18:32:22.410350   24074 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.410600   24074 kubelet.go:477] Hairpin mode set to "hairpin-veth"
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.428991   24074 docker_manager.go:256] Setting dockerRoot to /mnt/sda1/var/lib/docker
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.429297   24074 docker_manager.go:259] Setting cgroupDriver to cgroupfs
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.429852   24074 kubelet_network.go:226] Setting Pod CIDR:  -> 10.180.1.0/24
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.431846   24074 server.go:770] Started kubelet v1.5.3
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.433094   24074 kubelet.go:1145] Image garbage collection failed: unable to find data for container /
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.435189   24074 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.436128   24074 server.go:123] Starting to listen on 0.0.0.0:10250
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.465264   24074 event.go:208] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: getsockopt: connection refused' (may retry after sleeping)
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.480495   24074 kubelet.go:1634] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
Mar 25 18:32:22 minikube localkube[24074]: E0325 18:32:22.482684   24074 kubelet.go:1642] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.484513   24074 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.484842   24074 status_manager.go:129] Starting to sync pod status with apiserver
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.484978   24074 kubelet.go:1714] Starting kubelet main sync loop.
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.485095   24074 kubelet.go:1725] skipping pod synchronization - [container runtime is down]
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.487053   24074 volume_manager.go:242] Starting Kubelet Volume Manager
Mar 25 18:32:22 minikube localkube[24074]: storage-provisioner: Exit with error: Error getting server version: Get http://localhost:8080/version: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.487233   24074 serve.go:88] Serving securely on 0.0.0.0:8443
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.502688   24074 serve.go:102] Serving insecurely on 127.0.0.1:8080
Mar 25 18:32:22 minikube systemd[1]: Started Localkube.
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.565883   24074 factory.go:295] Registering Docker factory
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.603705   24074 factory.go:89] Registering Rkt factory
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.605343   24074 factory.go:54] Registering systemd factory
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.606147   24074 factory.go:86] Registering Raw factory
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.607454   24074 manager.go:1106] Started watching for new ooms in manager
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.603874   24074 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.621608   24074 oomparser.go:185] oomparser using systemd
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.622500   24074 manager.go:288] Starting recovery of all containers
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.654803   24074 kubelet_node_status.go:74] Attempting to register node minikube
Mar 25 18:32:22 minikube localkube[24074]: fcf2ad36debdd5bb is starting a new election at term 90
Mar 25 18:32:22 minikube localkube[24074]: fcf2ad36debdd5bb became candidate at term 91
Mar 25 18:32:22 minikube localkube[24074]: fcf2ad36debdd5bb received vote from fcf2ad36debdd5bb at term 91
Mar 25 18:32:22 minikube localkube[24074]: fcf2ad36debdd5bb became leader at term 91
Mar 25 18:32:22 minikube localkube[24074]: raft.node: fcf2ad36debdd5bb elected leader fcf2ad36debdd5bb at term 91
Mar 25 18:32:22 minikube localkube[24074]: published {Name:kubeetcd ClientURLs:[http://0.0.0.0:2379]} to cluster 7f055ae3b0912328
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.858889   24074 controller.go:262] Starting provisioner controller 6313c72c-1189-11e7-9bbe-080027d32594!
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.905963   24074 kubelet_node_status.go:113] Node minikube was previously registered
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.906007   24074 kubelet_node_status.go:77] Successfully registered node minikube
Mar 25 18:32:22 minikube localkube[24074]: I0325 18:32:22.923983   24074 kubelet_network.go:226] Setting Pod CIDR: 10.180.1.0/24 ->
Mar 25 18:32:23 minikube localkube[24074]: I0325 18:32:23.547022   24074 manager.go:293] Recovery completed
Mar 25 18:32:23 minikube localkube[24074]: I0325 18:32:23.556336   24074 rkt.go:56] starting detectRktContainers thread
Mar 25 18:32:24 minikube localkube[24074]: I0325 18:32:24.390590   24074 leaderelection.go:188] sucessfully acquired lease kube-system/kube-scheduler
Mar 25 18:32:24 minikube localkube[24074]: I0325 18:32:24.390663   24074 event.go:217] Event(api.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"e2fb3ccb-0fd5-11e7-ba4d-080027d32594", APIVersion:"v1", ResourceVersion:"49171", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Mar 25 18:32:25 minikube localkube[24074]: I0325 18:32:25.533081   24074 leaderelection.go:188] sucessfully acquired lease kube-system/kube-controller-manager
Mar 25 18:32:25 minikube localkube[24074]: I0325 18:32:25.543210   24074 event.go:217] Event(api.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"e34a9921-0fd5-11e7-ba4d-080027d32594", APIVersion:"v1", ResourceVersion:"49173", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Mar 25 18:32:25 minikube localkube[24074]: I0325 18:32:25.544186   24074 plugins.go:94] No cloud provider specified.
Mar 25 18:32:25 minikube localkube[24074]: W0325 18:32:25.544492   24074 controllermanager.go:285] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
Mar 25 18:32:25 minikube localkube[24074]: W0325 18:32:25.544657   24074 controllermanager.go:289] Unsuccessful parsing of service CIDR : invalid CIDR address:
Mar 25 18:32:25 minikube localkube[24074]: I0325 18:32:25.545361   24074 nodecontroller.go:189] Sending events to api server.
Mar 25 18:32:25 minikube localkube[24074]: E0325 18:32:25.547008   24074 controllermanager.go:305] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
Mar 25 18:32:25 minikube localkube[24074]: I0325 18:32:25.547093   24074 controllermanager.go:322] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
Mar 25 18:32:25 minikube localkube[24074]: E0325 18:32:25.547485   24074 util.go:45] Metric for replenishment_controller already registered
Mar 25 18:32:25 minikube localkube[24074]: E0325 18:32:25.547519   24074 util.go:45] Metric for replenishment_controller already registered
Mar 25 18:32:25 minikube localkube[24074]: E0325 18:32:25.547532   24074 util.go:45] Metric for replenishment_controller already registered
Mar 25 18:32:25 minikube localkube[24074]: E0325 18:32:25.547651   24074 util.go:45] Metric for replenishment_controller already registered
Mar 25 18:32:25 minikube localkube[24074]: E0325 18:32:25.547664   24074 util.go:45] Metric for replenishment_controller already registered
Mar 25 18:32:25 minikube localkube[24074]: I0325 18:32:25.547939   24074 replication_controller.go:219] Starting RC Manager
Mar 25 18:32:25 minikube localkube[24074]: panic: runtime error: invalid memory address or nil pointer dereference
Mar 25 18:32:25 minikube localkube[24074]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0xdecd8d]
Mar 25 18:32:25 minikube localkube[24074]: goroutine 2017 [running]:
Mar 25 18:32:25 minikube localkube[24074]: panic(0x3592cc0, 0xc420018030)
Mar 25 18:32:25 minikube localkube[24074]:         /usr/local/go/src/runtime/panic.go:500 +0x1a1
Mar 25 18:32:25 minikube localkube[24074]: k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered.(*APIRegistrationManager).RESTMapper(0xc4200eea50, 0x0, 0x0, 0x0, 0xc424dc2a48, 0x2)
Mar 25 18:32:25 minikube localkube[24074]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered/registered.go:313 +0x24d
Mar 25 18:32:25 minikube localkube[24074]: k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered.(*APIRegistrationManager).RESTMapper-fm(0x0, 0x0, 0x0, 0x0, 0x0)
Mar 25 18:32:25 minikube localkube[24074]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered/registered.go:106 +0x48
Mar 25 18:32:25 minikube localkube[24074]: k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.StartControllers(0xc420125680, 0xc420b461a0, 0x675a7c0, 0xc420b461a0, 0x675a7c0, 0xc420b461a0, 0xc424961aa0, 0x6756300, 0xc42191e080, 0xc422816790, ...)
Mar 25 18:32:25 minikube localkube[24074]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:367 +0x1996
Mar 25 18:32:25 minikube localkube[24074]: k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.Run.func2(0xc424961aa0)
Mar 25 18:32:25 minikube localkube[24074]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:180 +0xc4
Mar 25 18:32:25 minikube localkube[24074]: created by k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/leaderelection.(*LeaderElector).Run
Mar 25 18:32:25 minikube localkube[24074]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/leaderelection/leaderelection.go:150 +0x97
Mar 25 18:32:25 minikube systemd[1]: localkube.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Mar 25 18:32:25 minikube systemd[1]: localkube.service: Unit entered failed state.
Mar 25 18:32:25 minikube systemd[1]: localkube.service: Failed with result 'exit-code'.
Mar 25 18:32:28 minikube systemd[1]: localkube.service: Service hold-off time over, scheduling restart.
Mar 25 18:32:28 minikube systemd[1]: Stopped Localkube.
Mar 25 18:32:28 minikube systemd[1]: Starting Localkube...
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.026845   24289 start.go:77] Feature gates:%!(EXTRA string=)
Mar 25 18:32:29 minikube localkube[24289]: localkube host ip address: 10.0.2.15
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.036079   24289 server.go:215] Using iptables Proxier.
Mar 25 18:32:29 minikube localkube[24289]: W0325 18:32:29.036820   24289 server.go:468] Failed to retrieve node info: Get http://127.0.0.1:8080/api/v1/nodes/minikube: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: W0325 18:32:29.036947   24289 proxier.go:249] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
Mar 25 18:32:29 minikube localkube[24289]: W0325 18:32:29.036955   24289 proxier.go:254] clusterCIDR not specified, unable to distinguish between internal and external traffic
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.036990   24289 server.go:227] Tearing down userspace rules.
Mar 25 18:32:29 minikube localkube[24289]: Starting etcd...
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.063248   24289 reflector.go:188] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.063693   24289 reflector.go:188] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: recovered store from snapshot at index 130013
Mar 25 18:32:29 minikube localkube[24289]: name = kubeetcd
Mar 25 18:32:29 minikube localkube[24289]: data dir = /var/lib/localkube/etcd
Mar 25 18:32:29 minikube localkube[24289]: member dir = /var/lib/localkube/etcd/member
Mar 25 18:32:29 minikube localkube[24289]: heartbeat = 100ms
Mar 25 18:32:29 minikube localkube[24289]: election = 1000ms
Mar 25 18:32:29 minikube localkube[24289]: snapshot count = 10000
Mar 25 18:32:29 minikube localkube[24289]: advertise client URLs = http://0.0.0.0:2379
Mar 25 18:32:29 minikube localkube[24289]: restarting member fcf2ad36debdd5bb in cluster 7f055ae3b0912328 at commit index 130304
Mar 25 18:32:29 minikube localkube[24289]: fcf2ad36debdd5bb became follower at term 91
Mar 25 18:32:29 minikube localkube[24289]: newRaft fcf2ad36debdd5bb [peers: [fcf2ad36debdd5bb], term: 91, commit: 130304, applied: 130013, lastindex: 130304, lastterm: 91]
Mar 25 18:32:29 minikube localkube[24289]: enabled capabilities for version 3.0
Mar 25 18:32:29 minikube localkube[24289]: added member fcf2ad36debdd5bb [http://0.0.0.0:2380] to cluster 7f055ae3b0912328 from store
Mar 25 18:32:29 minikube localkube[24289]: set the cluster version to 3.0 from store
Mar 25 18:32:29 minikube localkube[24289]: starting server... [version: 3.0.14, cluster version: 3.0]
Mar 25 18:32:29 minikube localkube[24289]: Starting apiserver...
Mar 25 18:32:29 minikube localkube[24289]: Starting controller-manager...
Mar 25 18:32:29 minikube localkube[24289]: Starting scheduler...
Mar 25 18:32:29 minikube localkube[24289]: Starting kubelet...
Mar 25 18:32:29 minikube localkube[24289]: Starting proxy...
Mar 25 18:32:29 minikube localkube[24289]: Starting storage-provisioner...
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.269272   24289 config.go:527] Will report 10.0.2.15 as public IP address.
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.269824   24289 feature_gate.go:189] feature gates: map[]
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.269949   24289 server.go:297] unable to register configz: register config "componentconfig" twice
Mar 25 18:32:29 minikube localkube[24289]: W0325 18:32:29.269980   24289 server.go:605] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Using default client config instead.
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.270262   24289 docker.go:356] Connecting to docker on unix:///var/run/docker.sock
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.270333   24289 docker.go:376] Start docker client with request timeout=2m0s
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.270708   24289 controllermanager.go:125] unable to register configz: register config "componentconfig" twice
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.271414   24289 server.go:78] unable to register configz: register config "componentconfig" twice
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.282929   24289 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.285297   24289 leaderelection.go:228] error retrieving resource lock kube-system/kube-controller-manager: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.285458   24289 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:457: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.289992   24289 leaderelection.go:228] error retrieving resource lock kube-system/kube-scheduler: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.290883   24289 conntrack.go:66] Setting conntrack hashsize to 32768
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.292536   24289 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.292675   24289 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.292755   24289 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.292823   24289 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed to list *extensions.ReplicaSet: Get http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.292867   24289 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:75: Failed to list *storage.StorageClass: Get http://127.0.0.1:8080/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.292920   24289 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.PersistentVolumeClaim: Get http://127.0.0.1:8080/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.292964   24289 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119: Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.293003   24289 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:473: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.293042   24289 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103: Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: storage-provisioner: Exit with error: Error getting server version: Get http://localhost:8080/version: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.293118   24289 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:470: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.293154   24289 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed to list *api.PersistentVolume: Get http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.294856   24289 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.294981   24289 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.296803   24289 event.go:208] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: getsockopt: connection refused' (may retry after sleeping)
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.313083   24289 manager.go:143] cAdvisor running in container: "/system.slice/localkube.service"
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.357962   24289 fs.go:117] Filesystem partitions: map[/dev/sda1:{mountpoint:/mnt/sda1 major:8 minor:1 fsType:ext4 blockSize:0}]
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.363351   24289 manager.go:198] Machine: {NumCores:2 CpuFrequency:1599999 MemoryCapacity:2097799168 MachineID:431b7ae93e8747ea8e594b465d74fded SystemUUID:01A6AC49-1B9B-4AEF-9FEB-61E7AD3D0E4F BootID:a6299bd4-711f-4c02-949b-3eb5db54d6cc Filesystems:[{Device:/dev/sda1 Capacity:19163156480 Type:vfs Inodes:2434064 HasInodes:true} {Device:rootfs Capacity:0 Type:vfs Inodes:0 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:20971520000 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:08:00:27:d3:25:94 Speed:1000 Mtu:1500} {Name:eth1 MacAddress:08:00:27:48:51:84 Speed:1000 Mtu:1500} {Name:sit0 MacAddress:00:00:00:00 Speed:0 Mtu:1480}] Topology:[{Id:0 Memory:2097799168 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:3145728 Type:Unified Level:3}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:3145728 Type:Unified Level:3}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.372748   24289 manager.go:204] Version: {KernelVersion:4.7.2 ContainerOsVersion:Buildroot 2016.08 DockerVersion:1.11.1 CadvisorVersion: CadvisorRevision:}
Mar 25 18:32:29 minikube localkube[24289]: W0325 18:32:29.380832   24289 container_manager_linux.go:205] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.381244   24289 kubelet.go:242] Adding manifest file: /etc/kubernetes/manifests
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.381395   24289 kubelet.go:252] Watching apiserver
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.401836   24289 reflector.go:188] pkg/kubelet/kubelet.go:386: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.401956   24289 reflector.go:188] pkg/kubelet/kubelet.go:378: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.402070   24289 reflector.go:188] pkg/kubelet/config/apiserver.go:44: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.405139   24289 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.405457   24289 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: W0325 18:32:29.407998   24289 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.408055   24289 kubelet.go:477] Hairpin mode set to "hairpin-veth"
Mar 25 18:32:29 minikube localkube[24289]: [restful] 2017/03/25 18:32:29 log.go:30: [restful/swagger] listing is available at https://10.0.2.15:8443/swaggerapi/
Mar 25 18:32:29 minikube localkube[24289]: [restful] 2017/03/25 18:32:29 log.go:30: [restful/swagger] https://10.0.2.15:8443/swaggerui/ is mapped to folder /swagger-ui/
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.428548   24289 docker_manager.go:256] Setting dockerRoot to /mnt/sda1/var/lib/docker
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.428907   24289 docker_manager.go:259] Setting cgroupDriver to cgroupfs
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.429303   24289 kubelet_network.go:226] Setting Pod CIDR:  -> 10.180.1.0/24
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.431419   24289 server.go:770] Started kubelet v1.5.3
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.435086   24289 kubelet.go:1145] Image garbage collection failed: unable to find data for container /
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.435859   24289 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.437497   24289 server.go:123] Starting to listen on 0.0.0.0:10250
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.469250   24289 event.go:208] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: getsockopt: connection refused' (may retry after sleeping)
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.479841   24289 kubelet.go:1634] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.479941   24289 kubelet.go:1642] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.482467   24289 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.482510   24289 status_manager.go:129] Starting to sync pod status with apiserver
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.482569   24289 kubelet.go:1714] Starting kubelet main sync loop.
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.482579   24289 kubelet.go:1725] skipping pod synchronization - [container runtime is down]
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.482965   24289 volume_manager.go:242] Starting Kubelet Volume Manager
Mar 25 18:32:29 minikube localkube[24289]: storage-provisioner: Exit with error: Error getting server version: Get http://localhost:8080/version: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.586118   24289 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.605854   24289 factory.go:295] Registering Docker factory
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.632093   24289 kubelet.go:1634] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.635086   24289 kubelet.go:1642] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.635257   24289 kubelet_node_status.go:74] Attempting to register node minikube
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.636349   24289 factory.go:89] Registering Rkt factory
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.636391   24289 factory.go:54] Registering systemd factory
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.636715   24289 factory.go:86] Registering Raw factory
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.637083   24289 manager.go:1106] Started watching for new ooms in manager
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.640929   24289 oomparser.go:185] oomparser using systemd
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.641688   24289 manager.go:288] Starting recovery of all containers
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.739295   24289 serve.go:88] Serving securely on 0.0.0.0:8443
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.739388   24289 serve.go:102] Serving insecurely on 127.0.0.1:8080
Mar 25 18:32:29 minikube systemd[1]: Started Localkube.
Mar 25 18:32:29 minikube localkube[24289]: E0325 18:32:29.740450   24289 kubelet_node_status.go:98] Unable to register node "minikube" with API server: Post http://127.0.0.1:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.816907   24289 controller.go:262] Starting provisioner controller 673859eb-1189-11e7-857f-080027d32594!
Mar 25 18:32:29 minikube localkube[24289]: fcf2ad36debdd5bb is starting a new election at term 91
Mar 25 18:32:29 minikube localkube[24289]: fcf2ad36debdd5bb became candidate at term 92
Mar 25 18:32:29 minikube localkube[24289]: fcf2ad36debdd5bb received vote from fcf2ad36debdd5bb at term 92
Mar 25 18:32:29 minikube localkube[24289]: fcf2ad36debdd5bb became leader at term 92
Mar 25 18:32:29 minikube localkube[24289]: raft.node: fcf2ad36debdd5bb elected leader fcf2ad36debdd5bb at term 92
Mar 25 18:32:29 minikube localkube[24289]: published {Name:kubeetcd ClientURLs:[http://0.0.0.0:2379]} to cluster 7f055ae3b0912328
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.943694   24289 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
Mar 25 18:32:29 minikube localkube[24289]: I0325 18:32:29.961075   24289 kubelet_node_status.go:74] Attempting to register node minikube
Mar 25 18:32:30 minikube localkube[24289]: I0325 18:32:30.096487   24289 kubelet_node_status.go:113] Node minikube was previously registered
Mar 25 18:32:30 minikube localkube[24289]: I0325 18:32:30.096565   24289 kubelet_node_status.go:77] Successfully registered node minikube
Mar 25 18:32:30 minikube localkube[24289]: I0325 18:32:30.119380   24289 kubelet_network.go:226] Setting Pod CIDR: 10.180.1.0/24 ->
Mar 25 18:32:30 minikube localkube[24289]: I0325 18:32:30.525968   24289 manager.go:293] Recovery completed
Mar 25 18:32:30 minikube localkube[24289]: I0325 18:32:30.527224   24289 rkt.go:56] starting detectRktContainers thread
Mar 25 18:32:31 minikube localkube[24289]: I0325 18:32:31.862475   24289 leaderelection.go:188] sucessfully acquired lease kube-system/kube-controller-manager
Mar 25 18:32:31 minikube localkube[24289]: I0325 18:32:31.865822   24289 plugins.go:94] No cloud provider specified.
Mar 25 18:32:31 minikube localkube[24289]: W0325 18:32:31.866255   24289 controllermanager.go:285] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
Mar 25 18:32:31 minikube localkube[24289]: W0325 18:32:31.866652   24289 controllermanager.go:289] Unsuccessful parsing of service CIDR : invalid CIDR address:
Mar 25 18:32:31 minikube localkube[24289]: I0325 18:32:31.867627   24289 nodecontroller.go:189] Sending events to api server.
Mar 25 18:32:31 minikube localkube[24289]: E0325 18:32:31.868902   24289 controllermanager.go:305] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
Mar 25 18:32:31 minikube localkube[24289]: I0325 18:32:31.881558   24289 controllermanager.go:322] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
Mar 25 18:32:31 minikube localkube[24289]: E0325 18:32:31.883424   24289 util.go:45] Metric for replenishment_controller already registered
Mar 25 18:32:31 minikube localkube[24289]: E0325 18:32:31.883498   24289 util.go:45] Metric for replenishment_controller already registered
Mar 25 18:32:31 minikube localkube[24289]: E0325 18:32:31.883516   24289 util.go:45] Metric for replenishment_controller already registered
Mar 25 18:32:31 minikube localkube[24289]: E0325 18:32:31.883559   24289 util.go:45] Metric for replenishment_controller already registered
Mar 25 18:32:31 minikube localkube[24289]: E0325 18:32:31.883576   24289 util.go:45] Metric for replenishment_controller already registered
Mar 25 18:32:31 minikube localkube[24289]: I0325 18:32:31.869694   24289 event.go:217] Event(api.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"e34a9921-0fd5-11e7-ba4d-080027d32594", APIVersion:"v1", ResourceVersion:"49177", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Mar 25 18:32:31 minikube localkube[24289]: I0325 18:32:31.877230   24289 replication_controller.go:219] Starting RC Manager
Mar 25 18:32:31 minikube localkube[24289]: panic: runtime error: invalid memory address or nil pointer dereference
Mar 25 18:32:31 minikube localkube[24289]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0xdecd8d]
Mar 25 18:32:31 minikube localkube[24289]: goroutine 2000 [running]:
Mar 25 18:32:31 minikube localkube[24289]: panic(0x3592cc0, 0xc420018030)
Mar 25 18:32:31 minikube localkube[24289]:         /usr/local/go/src/runtime/panic.go:500 +0x1a1
Mar 25 18:32:31 minikube localkube[24289]: k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered.(*APIRegistrationManager).RESTMapper(0xc42022c410, 0x0, 0x0, 0x0, 0xc42215f2bc, 0x2)
Mar 25 18:32:31 minikube localkube[24289]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered/registered.go:313 +0x24d
Mar 25 18:32:31 minikube localkube[24289]: k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered.(*APIRegistrationManager).RESTMapper-fm(0x0, 0x0, 0x0, 0x0, 0x0)
Mar 25 18:32:31 minikube localkube[24289]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered/registered.go:106 +0x48
Mar 25 18:32:31 minikube localkube[24289]: k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.StartControllers(0xc420259400, 0xc4216bc000, 0x675a7c0, 0xc4216bc000, 0x675a7c0, 0xc4216bc000, 0xc4248d1080, 0x6756300, 0xc4216a1580, 0x4, ...)
Mar 25 18:32:31 minikube localkube[24289]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:367 +0x1996
Mar 25 18:32:31 minikube localkube[24289]: k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.Run.func2(0xc4248d1080)
Mar 25 18:32:31 minikube localkube[24289]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:180 +0xc4
Mar 25 18:32:31 minikube localkube[24289]: created by k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/leaderelection.(*LeaderElector).Run
Mar 25 18:32:31 minikube localkube[24289]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/leaderelection/leaderelection.go:150 +0x97
Mar 25 18:32:31 minikube systemd[1]: localkube.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Mar 25 18:32:31 minikube systemd[1]: localkube.service: Unit entered failed state.
Mar 25 18:32:31 minikube systemd[1]: localkube.service: Failed with result 'exit-code'.


@imathews
Author

@sebgoa @r2d4 After much trial and error, I've been able to keep minikube running consistently by using the k8s 1.6 branch (pull 1266) and the xhyve driver.

I was never able to get it stable when using virtualbox, even after trying several permutations of removing jobs, other services, etc. But I'm very glad to have a solution that works.
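
For reference, the rough sequence we use now looks like this (just a sketch; it assumes the xhyve driver is already installed, and the version flag is illustrative since we're actually running a localkube built from the 1.6 branch):

# recreate the VM on the xhyve driver with a 1.6 localkube (illustrative flags)
minikube delete
minikube start --vm-driver=xhyve --kubernetes-version=v1.6.0
kubectl get pods   # sanity check that the apiserver stays reachable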

@sebgoa

sebgoa commented Mar 25, 2017

And the VirtualBox version is 5.1.18 r114002.

I'm pretty sure it is a problem in v0.17.1; I never had this problem before, and it started appearing with this version.

@kokhang
Contributor

kokhang commented Mar 31, 2017

I'm also running into the same issue. I started minikube with minikube --vm-driver=kvm start --kubernetes-version=1.6.0. When it starts, it works fine. But as soon as I restart localkube (systemctl restart localkube), I hit this error and localkube never recovers. Here are my logs: https://gist.github.com/kokhang/999ac2af0aabad67d22b32b7ef8249aa

My minikube version is also v0.17.1
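
For completeness, the whole reproduction looks roughly like this (a sketch; it assumes journalctl is usable inside the VM, since localkube runs as a systemd unit there):

minikube --vm-driver=kvm start --kubernetes-version=1.6.0
kubectl get pods                 # fine at this point
minikube ssh                     # then, inside the VM:
sudo systemctl restart localkube
sudo journalctl -u localkube --no-pager | tail -n 50   # the panic shows up here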

@donspaulding

I'm hitting this issue on 0.17.1 as well. My logs look similar to @sebgoa's. My localkube never recovers, even with minikube stop && minikube start. Notably, if I run the localkube command via minikube ssh, I get the same error that's in the logs: it panics after being elected the leader and then logs "Starting RC Manager".

E0331 18:05:18.725999    5839 proxier.go:1108] can't open "nodePort for kube-system/kubernetes-dashboard:" (:30000/tcp), skipping this nodePort: listen tcp :30000: bind: address already in use
E0331 18:05:18.726416    5839 proxier.go:1108] can't open "nodePort for kube-system/default-http-backend:" (:30001/tcp), skipping this nodePort: listen tcp :30001: bind: address already in use
I0331 18:05:21.016048    5839 leaderelection.go:188] sucessfully acquired lease kube-system/kube-controller-manager
I0331 18:05:21.027744    5839 event.go:217] Event(api.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"4448f432-163a-11e7-adee-080027fc20a7", APIVersion:"v1", ResourceVersion:"2605", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
I0331 18:05:21.036610    5839 plugins.go:94] No cloud provider specified.
W0331 18:05:21.045275    5839 controllermanager.go:285] Unsuccessful parsing of cluster CIDR : invalid CIDR address: 
W0331 18:05:21.045366    5839 controllermanager.go:289] Unsuccessful parsing of service CIDR : invalid CIDR address: 
I0331 18:05:21.045832    5839 nodecontroller.go:189] Sending events to api server.
E0331 18:05:21.046362    5839 controllermanager.go:305] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
I0331 18:05:21.046417    5839 controllermanager.go:322] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
E0331 18:05:21.047258    5839 util.go:45] Metric for replenishment_controller already registered
E0331 18:05:21.047307    5839 util.go:45] Metric for replenishment_controller already registered
E0331 18:05:21.047315    5839 util.go:45] Metric for replenishment_controller already registered
E0331 18:05:21.047333    5839 util.go:45] Metric for replenishment_controller already registered
E0331 18:05:21.047340    5839 util.go:45] Metric for replenishment_controller already registered
I0331 18:05:21.037994    5839 replication_controller.go:219] Starting RC Manager
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0xdee55d]

goroutine 2155 [running]:
panic(0x3576740, 0xc4200140b0)
	/usr/local/go/src/runtime/panic.go:500 +0x1a1
k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered.(*APIRegistrationManager).RESTMapper(0xc42005c7d0, 0x0, 0x0, 0x0, 0xc4219c19bc, 0x2)
	/var/lib/jenkins/go2/src/k8s.io/minikube/_gopath/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered/registered.go:313 +0x24d
k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered.(*APIRegistrationManager).RESTMapper-fm(0x0, 0x0, 0x0, 0x0, 0x0)
	/var/lib/jenkins/go2/src/k8s.io/minikube/_gopath/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered/registered.go:106 +0x48
k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.StartControllers(0xc420437400, 0xc42160ad00, 0x6766ea0, 0xc42160ad00, 0x6766ea0, 0xc42160ad00, 0xc42361bc80, 0x67629e0, 0xc421849580, 0x3b64f77, ...)
	/var/lib/jenkins/go2/src/k8s.io/minikube/_gopath/src/k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:367 +0x1996
k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.Run.func2(0xc42361bc80)
	/var/lib/jenkins/go2/src/k8s.io/minikube/_gopath/src/k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:180 +0xc4
created by k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/leaderelection.(*LeaderElector).Run
	/var/lib/jenkins/go2/src/k8s.io/minikube/_gopath/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/leaderelection/leaderelection.go:150 +0x97

I wonder if it's related to an addon whose ReplicationController is crashing? Here's my minikube addons list:

- default-storageclass: enabled
- kube-dns: enabled
- heapster: disabled
- ingress: enabled
- registry-creds: enabled
- addon-manager: enabled
- dashboard: enabled

And a kubectl get rc --namespace=kube-system:

NAME                       DESIRED   CURRENT   READY     AGE
default-http-backend       1         1         0         16m
kube-dns-v20               1         1         0         40m
kubernetes-dashboard       1         1         0         40m
nginx-ingress-controller   1         1         0         16m
registry-creds             1         1         0         25m

My localkube flaps up and down as systemctl attempts to restart it continuously.
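
In case it helps, this is roughly how I'm watching the flapping from the host (just a sketch):

# status alternates between Running and Stopped as systemd restarts localkube
while true; do minikube status; sleep 5; done

# and from inside the VM (after minikube ssh) the unit state tells the same story:
sudo systemctl status localkube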

@r2d4
Contributor

r2d4 commented Mar 31, 2017

@donspaulding Are you running kubernetes 1.6? There might be some weird behavior with some of the addons (since they haven't all been upgraded to the latest versions).

There also might be weirdness if you upgrade your cluster from 1.5 -> 1.6 without deleting. We don't guarantee in-place upgrades right now, but it's something we would really like to have in the future.

@donspaulding

@r2d4 Nope, 1.5.3.

don@box : $ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}

@donspaulding

FWIW, without knowing what's really happening, I can't shake the feeling that the panic log is pointing right at the error. Here's the last line in the codebase which spits a message out to the log:

https://github.com/kubernetes/kubernetes/blob/v1.5.3/pkg/controller/replication/replication_controller.go#L219

I suspect one of the two goroutines in that file is the problem, since we never get to the "Shutting down RC Manager" logging call at the bottom. I don't know Go or the k8s codebase, so I could be way off, and of course there's been a huge amount of churn in those files between 1.5.3 and 1.6. I'll try to see in the next couple of days whether or not running k8s 1.6.0 makes a difference.
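
If it helps anyone more familiar with the code, the panic frame itself points one level deeper, at the vendored registered.go rather than the RC manager, so another (purely hypothetical) way to chase it is to read that exact line in a minikube checkout at the matching tag:

git clone https://github.com/kubernetes/minikube
cd minikube && git checkout v0.17.1    # match the minikube version that produced the trace
sed -n '305,315p' vendor/k8s.io/kubernetes/pkg/apimachinery/registered/registered.go   # panic is at line 313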

@donspaulding

This seems related to minikube #1090 and perhaps even kubernetes #43430.

@r2d4
Contributor

r2d4 commented Apr 3, 2017

Thanks for the additional debugging @donspaulding. Were you able to test this with 1.6?

@donspaulding

I'm recreating my minikube VM with 1.6.0 now. I've had this issue off and on, so I don't know that I'll know very quickly how successful the version bump is. But I'm willing to give it a go for a couple days.
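
Concretely, the recreate is just this (a sketch):

minikube delete
minikube start --kubernetes-version=v1.6.0
kubectl version   # confirm the server now reports v1.6.0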

That being said, it will probably be a bit before we upgrade our production clusters to 1.6.0, and one of the main reasons to use minikube is to achieve dev/prod parity. I'm happy to do my part with debugging this, but if upgrading to 1.6.0 fixes the issue, would you expect this issue to be closed as "wontfix-pending-1.6.0-adoption"? If not, what would the next steps be? How can I help get a fix for this on 1.5.X?

Thanks for your help with this @r2d4!

@thrawn01

thrawn01 commented Apr 5, 2017

I just ran into this issue also.

minikube version: v0.17.1
DriverName: "virtualbox"
Boot2DockerURL: "--snip--/.minikube/cache/iso/minikube-v1.0.7.iso

Can confirm it only happens after minikube stop && minikube start

Apr 05 18:20:56 minikube localkube[10177]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0xdecd8d]
Apr 05 18:20:56 minikube localkube[10177]: goroutine 1680 [running]:
Apr 05 18:20:56 minikube localkube[10177]: panic(0x3592cc0, 0xc420018030)
Apr 05 18:20:56 minikube localkube[10177]:         /usr/local/go/src/runtime/panic.go:500 +0x1a1
Apr 05 18:20:56 minikube localkube[10177]: k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered.(*APIRegistrationManager).RESTMapper(0xc4200ee640, 0x0, 0x0, 0x0, 0xc420854cfc, 0x2)
Apr 05 18:20:56 minikube localkube[10177]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered/registered.go:313 +0x24d
Apr 05 18:20:56 minikube localkube[10177]: k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered.(*APIRegistrationManager).RESTMapper-fm(0x0, 0x0, 0x0, 0x0, 0x0)
Apr 05 18:20:56 minikube localkube[10177]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/apimachinery/registered/registered.go:106 +0x48
Apr 05 18:20:56 minikube localkube[10177]: k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.StartControllers(0xc420664c80, 0xc4207f1040, 0x675a7c0, 0xc4207f1040, 0x675a7c0, 0xc4207f1040, 0xc421472780, 0x6756300, 0xc421e10cc0, 0x0, ...)
Apr 05 18:20:56 minikube localkube[10177]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:367 +0x1996
Apr 05 18:20:56 minikube localkube[10177]: k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.Run.func2(0xc421472780)
Apr 05 18:20:56 minikube localkube[10177]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:180 +0xc4
Apr 05 18:20:56 minikube localkube[10177]: created by k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/leaderelection.(*LeaderElector).Run
Apr 05 18:20:56 minikube localkube[10177]:         /go/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/leaderelection/leaderelection.go:150 +0x97
Apr 05 18:20:56 minikube systemd[1]: localkube.service: Main process exited, code=exited, status=2/INVALIDARGUMENT

@sebgoa

sebgoa commented Apr 8, 2017

@r2d4 this is also happening with v1.6.0

Here is the user experience (with 1.5.3), in the span of 20 seconds:

$ kubectl get pods
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
foobar:foobar sebgoa$ kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
minio-minio-2719945787-lgvqd   1/1       Running   1          2d
slack-1979453829-704p1         1/1       Running   0          20h
thumb-2132546596-8j1h9         1/1       Running   0          20h
foobar:foobar sebgoa$ kubectl get pods
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
foobar:foobar sebgoa$ kubectl get pods
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
foobar:foobar sebgoa$ kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
minio-minio-2719945787-lgvqd   1/1       Running   1          2d
slack-1979453829-704p1         1/1       Running   0          20h
thumb-2132546596-8j1h9         1/1       Running   0          20h
foobar:foobar sebgoa$ minikube version
minikube version: v0.17.1
foobar:foobar sebgoa$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
foobar:foobar sebgoa$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
foobar:foobar sebgoa$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
foobar:foobar sebgoa$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
foobar:foobar sebgoa$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}

This should become a blocker; it is a big user-facing issue (whatever is happening).

@zcahana
Contributor

zcahana commented Apr 9, 2017

I was experiencing this too, intermittently, with minikube v0.17.1 and kube 1.5.3.
Bumped up to minikube v0.18 with kube 1.6, and it seems to be resolved (at least, I haven't seen this happen since the upgrade).

Note that the panicking code path (kube-controller-manager --> StartControllers() --> APIRegistrationManager.RESTMapper()) doesn't exist in the vendored k8s packages in v0.18, so it's a good bet that it's indeed resolved.

@donspaulding

Following up on this, it seems that when I start my minikube with minikube start --kubernetes-version=1.6.0 it doesn't exhibit this problem. I've been running on 1.6.0 for over a week with no indication of this bug. So, hooray, I guess?

I think for now, I'll just plan on running on k8s v1.6.0 in my minikube dev environment, but my previous questions/misgivings about this as a solution remain.

Anything else I can do to figure out what the exact nature of the issue is?

@donspaulding

For reference, my minikube version...

don@box : $ minikube version
minikube version: v0.17.1

... and kubernetes version...

don@box : $ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"dirty", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7", Compiler:"gc", Platform:"linux/amd64"}

@r2d4
Contributor

r2d4 commented Apr 11, 2017

Yeah, I'm keeping this open until we figure out exactly what's causing the issue in 1.5.3. Although for those reading, it seems like this was fixed in 1.6.

I haven't been able to reproduce this yet. Have you been able to reproduce it on a vanilla minikube cluster, @donspaulding? If not, could you share some more information on the types of resources you're running on minikube (TPRs, etc.)?
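
Something like the following (a sketch) would capture most of what I'm after:

kubectl get thirdpartyresource
kubectl get all --all-namespaces
minikube addons list
minikube logs > localkube.log   # and attach the section around the panic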

@etburke

etburke commented Apr 14, 2017

I'm experiencing this behavior with VirtualBox v5.1.18 and these versions:

minikube version: v0.18.0
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"dirty", BuildDate:"2017-04-07T20:46:46Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}

Here are my logs:
minikube.log.txt

@donspaulding

@r2d4 I can't say that I've ever experienced this on a "vanilla minikube" because we script the setup of minikube to get our dev clusters in a deploy-ready state, so I'm always running a number of pods even on an idle cluster.

Also, I've just recently started experiencing this with version 1.6.0, or maybe something different: I get a different traceback now, which is perhaps not surprising, but maybe it's still related?

Regarding TPRs, here's the only one I have:

don@box : $ kubectl get thirdpartyresource --all-namespaces
NAMESPACE                       NAME                                                        DESCRIPTION   VERSION(S)
certificate.stable.k8s.psg.io   A specification of a Let's Encrypt Certificate to manage.   v1

That resource in particular is created upon installation of kube-cert-manager.

I'm about to delete/recreate my cluster, and this time I'll see if just deploying k-c-m is enough to trigger the behavior. For reference, here's the logs I'm getting when I hit this issue on 1.6.0:

Apr 17 20:01:59 minikube systemd[1]: Starting Localkube...
Apr 17 20:02:00 minikube localkube[12227]: I0417 20:02:00.064333   12227 start.go:77] Feature gates:%!(EXTRA string=)
Apr 17 20:02:00 minikube localkube[12227]: recovered store from snapshot at index 130020
Apr 17 20:02:00 minikube localkube[12227]: name = kubeetcd
Apr 17 20:02:00 minikube localkube[12227]: data dir = /var/lib/localkube/etcd
Apr 17 20:02:00 minikube localkube[12227]: member dir = /var/lib/localkube/etcd/member
Apr 17 20:02:00 minikube localkube[12227]: heartbeat = 100ms
Apr 17 20:02:00 minikube localkube[12227]: election = 1000ms
Apr 17 20:02:00 minikube localkube[12227]: snapshot count = 10000
Apr 17 20:02:00 minikube localkube[12227]: advertise client URLs = http://0.0.0.0:2379
Apr 17 20:02:00 minikube localkube[12227]: restarting member fcf2ad36debdd5bb in cluster 7f055ae3b0912328 at commit index 137930
Apr 17 20:02:00 minikube localkube[12227]: fcf2ad36debdd5bb became follower at term 8220
Apr 17 20:02:00 minikube localkube[12227]: newRaft fcf2ad36debdd5bb [peers: [fcf2ad36debdd5bb], term: 8220, commit: 137930, applied: 130020, lastindex: 137930, lastterm: 8220]
Apr 17 20:02:00 minikube localkube[12227]: enabled capabilities for version 3.0
Apr 17 20:02:00 minikube localkube[12227]: added member fcf2ad36debdd5bb [http://0.0.0.0:2380] to cluster 7f055ae3b0912328 from store
Apr 17 20:02:00 minikube localkube[12227]: set the cluster version to 3.0 from store
Apr 17 20:02:00 minikube localkube[12227]: starting server... [version: 3.0.17, cluster version: 3.0]
Apr 17 20:02:00 minikube localkube[12227]: localkube host ip address: 10.0.2.15
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.277687   12227 authentication.go:362] AnonymousAuth is not allowed with the AllowAll authorizer.  Resetting AnonymousAuth to false. You should use a different authorizer
Apr 17 20:02:00 minikube localkube[12227]: I0417 20:02:00.345365   12227 server.go:225] Using iptables Proxier.
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.352317   12227 server.go:469] Failed to retrieve node info: Get http://127.0.0.1:8080/api/v1/nodes/minikube: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.352512   12227 proxier.go:304] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.352603   12227 proxier.go:309] clusterCIDR not specified, unable to distinguish between internal and external traffic
Apr 17 20:02:00 minikube localkube[12227]: I0417 20:02:00.352696   12227 server.go:249] Tearing down userspace rules.
Apr 17 20:02:00 minikube localkube[12227]: E0417 20:02:00.375728   12227 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/proxy/config/api.go:49: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Apr 17 20:02:00 minikube localkube[12227]: E0417 20:02:00.375934   12227 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/proxy/config/api.go:46: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.619611   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.621221   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.621431   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.621727   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.622028   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.622287   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.622577   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.622942   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.623281   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.623632   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.624024   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.624992   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.626210   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.627026   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.627625   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.627775   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.647906   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.647956   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.648242   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.648313   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.649724   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.649907   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.650128   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.650315   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.650362   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.650559   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.650832   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.651125   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.651447   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.651648   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.651892   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.652053   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.652221   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.652394   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.652494   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.652610   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.652670   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.652726   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.652826   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.652916   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.653136   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.653363   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.653444   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: W0417 20:02:00.653515   12227 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Apr 17 20:02:00 minikube localkube[12227]: E0417 20:02:00.674670   12227 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.LimitRange: Get https://localhost:8443/api/v1/limitranges?resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Apr 17 20:02:00 minikube localkube[12227]: E0417 20:02:00.675458   12227 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ServiceAccount: Get https://localhost:8443/api/v1/serviceaccounts?resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Apr 17 20:02:00 minikube localkube[12227]: E0417 20:02:00.680585   12227 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Secret: Get https://localhost:8443/api/v1/secrets?resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Apr 17 20:02:00 minikube localkube[12227]: E0417 20:02:00.680969   12227 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ResourceQuota: Get https://localhost:8443/api/v1/resourcequotas?resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Apr 17 20:02:00 minikube localkube[12227]: E0417 20:02:00.681177   12227 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *storage.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Apr 17 20:02:00 minikube localkube[12227]: E0417 20:02:00.681443   12227 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Namespace: Get https://localhost:8443/api/v1/namespaces?resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Apr 17 20:02:00 minikube localkube[12227]: [restful] 2017/04/17 20:02:00 log.go:30: [restful/swagger] listing is available at https://10.0.2.15:8443/swaggerapi/
Apr 17 20:02:00 minikube localkube[12227]: [restful] 2017/04/17 20:02:00 log.go:30: [restful/swagger] https://10.0.2.15:8443/swaggerui/ is mapped to folder /swagger-ui/
Apr 17 20:02:00 minikube localkube[12227]: I0417 20:02:00.751650   12227 serve.go:79] Serving securely on 0.0.0.0:8443
Apr 17 20:02:00 minikube localkube[12227]: I0417 20:02:00.751845   12227 serve.go:94] Serving insecurely on 127.0.0.1:8080
Apr 17 20:02:00 minikube systemd[1]: Started Localkube.
Apr 17 20:02:00 minikube localkube[12227]: fcf2ad36debdd5bb is starting a new election at term 8220
Apr 17 20:02:00 minikube localkube[12227]: fcf2ad36debdd5bb became candidate at term 8221
Apr 17 20:02:00 minikube localkube[12227]: fcf2ad36debdd5bb received vote from fcf2ad36debdd5bb at term 8221
Apr 17 20:02:00 minikube localkube[12227]: fcf2ad36debdd5bb became leader at term 8221
Apr 17 20:02:00 minikube localkube[12227]: raft.node: fcf2ad36debdd5bb elected leader fcf2ad36debdd5bb at term 8221
Apr 17 20:02:00 minikube localkube[12227]: published {Name:kubeetcd ClientURLs:[http://0.0.0.0:2379]} to cluster 7f055ae3b0912328
Apr 17 20:02:01 minikube localkube[12227]: Starting controller-manager...
Apr 17 20:02:01 minikube localkube[12227]: Starting scheduler...
Apr 17 20:02:01 minikube localkube[12227]: Starting kubelet...
Apr 17 20:02:01 minikube localkube[12227]: Starting proxy...
Apr 17 20:02:01 minikube localkube[12227]: Starting storage-provisioner...
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.478458   12227 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.480087   12227 conntrack.go:66] Setting conntrack hashsize to 32768
Apr 17 20:02:01 minikube localkube[12227]: E0417 20:02:01.482103   12227 controllermanager.go:120] unable to register configz: register config "componentconfig" twice
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.486286   12227 feature_gate.go:144] feature gates: map[]
Apr 17 20:02:01 minikube localkube[12227]: E0417 20:02:01.488320   12227 server.go:312] unable to register configz: register config "componentconfig" twice
Apr 17 20:02:01 minikube localkube[12227]: W0417 20:02:01.488575   12227 server.go:715] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Using default client config instead.
Apr 17 20:02:01 minikube localkube[12227]: E0417 20:02:01.488783   12227 server.go:157] unable to register configz: register config "componentconfig" twice
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.490273   12227 docker.go:364] Connecting to docker on unix:///var/run/docker.sock
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.490423   12227 docker.go:384] Start docker client with request timeout=2m0s
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.486832   12227 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.491737   12227 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.487374   12227 leaderelection.go:179] attempting to acquire leader lease...
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.487428   12227 leaderelection.go:179] attempting to acquire leader lease...
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.504101   12227 controller.go:249] Starting provisioner controller b87f6266-23a8-11e7-8d61-080027588d6f!
Apr 17 20:02:01 minikube localkube[12227]: E0417 20:02:01.562374   12227 controller.go:401] Claim "kube-system/kcm-kube-cert-manager": StorageClass "default" not found
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.572304   12227 manager.go:143] cAdvisor running in container: "/system.slice/localkube.service"
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.586123   12227 fs.go:117] Filesystem partitions: map[/dev/sda1:{mountpoint:/mnt/sda1 major:8 minor:1 fsType:ext4 blockSize:0}]
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.587365   12227 manager.go:198] Machine: {NumCores:2 CpuFrequency:2713460 MemoryCapacity:2097647616 MachineID:xxxxx SystemUUID:xxxxxx BootID:xxxxxx Filesystems:[{Device:rootfs Capacity:0 Type:vfs Inodes:0 HasInodes:true} {Device:/dev/sda1 Capacity:19163156480 Type:vfs Inodes:2434064 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:20971520000 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:xxxxxx Speed:1000 Mtu:1500} {Name:eth1 MacAddress:xxxxx Speed:1000 Mtu:1500} {Name:sit0 MacAddress:00:00:00:00 Speed:0 Mtu:1480}] Topology:[{Id:0 Memory:2097647616 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:8388608 Type:Unified Level:3}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:8388608 Type:Unified Level:3}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.589670   12227 manager.go:204] Version: {KernelVersion:4.7.2 ContainerOsVersion:Buildroot 2016.08 DockerVersion:1.11.1 CadvisorVersion: CadvisorRevision:}
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.590171   12227 server.go:509] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Apr 17 20:02:01 minikube localkube[12227]: W0417 20:02:01.594164   12227 container_manager_linux.go:218] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.594255   12227 container_manager_linux.go:245] container manager verified user specified cgroup-root exists: /
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.594264   12227 container_manager_linux.go:250] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false EnableCRI:true NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[]}
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.594385   12227 kubelet.go:255] Adding manifest file: /etc/kubernetes/manifests
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.594444   12227 kubelet.go:265] Watching apiserver
Apr 17 20:02:01 minikube localkube[12227]: W0417 20:02:01.598605   12227 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.599589   12227 kubelet.go:494] Hairpin mode set to "hairpin-veth"
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.616820   12227 docker_service.go:187] Docker cri networking managed by kubernetes.io/no-op
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.620078   12227 docker_service.go:204] Setting cgroupDriver to cgroupfs
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.629398   12227 remote_runtime.go:41] Connecting to runtime service /var/run/dockershim.sock
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.630751   12227 kuberuntime_manager.go:171] Container runtime docker initialized, version: 1.11.1, apiVersion: 1.23.0
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.630915   12227 kuberuntime_manager.go:902] updating runtime config through cri with podcidr 10.180.1.0/24
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.631200   12227 docker_service.go:277] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.180.1.0/24,},}
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.631423   12227 kubelet_network.go:326] Setting Pod CIDR:  -> 10.180.1.0/24
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.632281   12227 server.go:869] Started kubelet v1.6.0
Apr 17 20:02:01 minikube localkube[12227]: E0417 20:02:01.632753   12227 kubelet.go:1165] Image garbage collection failed: unable to find data for container /
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.632884   12227 server.go:127] Starting to listen on 0.0.0.0:10250
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.635633   12227 server.go:294] Adding debug handlers to kubelet server.
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.636423   12227 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.636433   12227 status_manager.go:140] Starting to sync pod status with apiserver
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.636440   12227 kubelet.go:1741] Starting kubelet main sync loop.
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.636450   12227 kubelet.go:1752] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.636941   12227 volume_manager.go:248] Starting Kubelet Volume Manager
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.652470   12227 factory.go:309] Registering Docker factory
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.654901   12227 factory.go:89] Registering Rkt factory
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.655061   12227 factory.go:54] Registering systemd factory
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.655713   12227 factory.go:86] Registering Raw factory
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.656133   12227 manager.go:1106] Started watching for new ooms in manager
Apr 17 20:02:01 minikube localkube[12227]: W0417 20:02:01.663544   12227 docker_sandbox.go:263] Couldn't find network status for roadrunner/roadrunner-new-ui-roadrunner-2109328614-b25sd through plugin: invalid network status for
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.664031   12227 oomparser.go:185] oomparser using systemd
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.666743   12227 manager.go:288] Starting recovery of all containers
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.708606   12227 trace.go:61] Trace "Update /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication" (started 2017-04-17 20:02:00.899840939 +0000 UTC):
Apr 17 20:02:01 minikube localkube[12227]: [19.215µs] [19.215µs] About to convert to expected version
Apr 17 20:02:01 minikube localkube[12227]: [72.508µs] [53.293µs] Conversion done
Apr 17 20:02:01 minikube localkube[12227]: [76.004µs] [3.496µs] About to store object in database
Apr 17 20:02:01 minikube localkube[12227]: [808.651867ms] [808.575863ms] Object stored in database
Apr 17 20:02:01 minikube localkube[12227]: [808.658386ms] [6.519µs] Self-link added
Apr 17 20:02:01 minikube localkube[12227]: "Update /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication" [808.707818ms] [49.432µs] END
Apr 17 20:02:01 minikube localkube[12227]: W0417 20:02:01.728609   12227 docker_sandbox.go:263] Couldn't find network status for kube-system/kubernetes-dashboard-np0qz through plugin: invalid network status for
Apr 17 20:02:01 minikube localkube[12227]: W0417 20:02:01.735165   12227 docker_sandbox.go:263] Couldn't find network status for kube-system/kubernetes-dashboard-np0qz through plugin: invalid network status for
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.745458   12227 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.754617   12227 kubelet_node_status.go:77] Attempting to register node minikube
Apr 17 20:02:01 minikube localkube[12227]: W0417 20:02:01.784186   12227 docker_sandbox.go:263] Couldn't find network status for kube-system/tiller-deploy-1012751306-zds5d through plugin: invalid network status for
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.796013   12227 kubelet_node_status.go:128] Node minikube was previously registered
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.796736   12227 kubelet_node_status.go:80] Successfully registered node minikube
Apr 17 20:02:01 minikube localkube[12227]: W0417 20:02:01.798608   12227 docker_sandbox.go:263] Couldn't find network status for kube-system/tiller-deploy-1012751306-zds5d through plugin: invalid network status for
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.796991   12227 leaderelection.go:189] successfully acquired lease kube-system/kube-controller-manager
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.797135   12227 event.go:217] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"xxxxxx", APIVersion:"v1", ResourceVersion:"144689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.815607   12227 kuberuntime_manager.go:902] updating runtime config through cri with podcidr
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.816089   12227 docker_service.go:277] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.821363   12227 leaderelection.go:189] successfully acquired lease kube-system/kube-scheduler
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.821723   12227 kubelet_network.go:326] Setting Pod CIDR: 10.180.1.0/24 ->
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.822458   12227 event.go:217] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"xxxxxx", APIVersion:"v1", ResourceVersion:"144691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Apr 17 20:02:01 minikube localkube[12227]: W0417 20:02:01.906815   12227 docker_sandbox.go:263] Couldn't find network status for kube-system/nginx-ingress-controller-3578480276-739tp through plugin: invalid network status for
Apr 17 20:02:01 minikube localkube[12227]: W0417 20:02:01.907914   12227 docker_sandbox.go:263] Couldn't find network status for kube-system/nginx-ingress-controller-3578480276-739tp through plugin: invalid network status for
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.946275   12227 controllermanager.go:437] Started "podgc"
Apr 17 20:02:01 minikube localkube[12227]: E0417 20:02:01.946492   12227 util.go:45] Metric for serviceaccount_controller already registered
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.946527   12227 controllermanager.go:437] Started "serviceaccount"
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.946878   12227 controllermanager.go:437] Started "job"
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.947221   12227 controllermanager.go:437] Started "deployment"
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.947611   12227 controllermanager.go:437] Started "replicaset"
Apr 17 20:02:01 minikube localkube[12227]: E0417 20:02:01.947742   12227 certificates.go:38] Failed to start certificate controller: open /etc/kubernetes/ca/ca.pem: no such file or directory
Apr 17 20:02:01 minikube localkube[12227]: W0417 20:02:01.947754   12227 controllermanager.go:434] Skipping "certificatesigningrequests"
Apr 17 20:02:01 minikube localkube[12227]: W0417 20:02:01.947762   12227 controllermanager.go:421] "bootstrapsigner" is disabled
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.948087   12227 controllermanager.go:437] Started "replicationcontroller"
Apr 17 20:02:01 minikube localkube[12227]: I0417 20:02:01.948376   12227 controllermanager.go:437] Started "statefuleset"
Apr 17 20:02:01 minikube localkube[12227]: panic: runtime error: invalid memory address or nil pointer dereference
Apr 17 20:02:01 minikube localkube[12227]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x50 pc=0x179ecbd]
Apr 17 20:02:01 minikube localkube[12227]: goroutine 2004 [running]:
Apr 17 20:02:01 minikube localkube[12227]: panic(0x37c8480, 0xc420016030)
Apr 17 20:02:01 minikube localkube[12227]:         /usr/local/go/src/runtime/panic.go:500 +0x1a1
Apr 17 20:02:01 minikube localkube[12227]: k8s.io/minikube/vendor/k8s.io/apimachinery/pkg/apimachinery/registered.(*APIRegistrationManager).RESTMapper(0xc42017def0, 0x0, 0x0, 0x0, 0x20, 0x1100000009)
Apr 17 20:02:01 minikube localkube[12227]:         /usr/local/google/home/mrick/go/src/k8s.io/minikube/_gopath/src/k8s.io/minikube/vendor/k8s.io/apimachinery/pkg/apimachinery/registered/registered.go:286 +0x24d
Apr 17 20:02:01 minikube localkube[12227]: k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.startNamespaceController(0x6909e20, 0xc424d4b500, 0x6914e40, 0xc4257e2330, 0x0, 0x0, 0x0, 0x0, 0xc42020ac10, 0x1, ...)
Apr 17 20:02:01 minikube localkube[12227]:         /usr/local/google/home/mrick/go/src/k8s.io/minikube/_gopath/src/k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/core.go:103 +0x5d
Apr 17 20:02:01 minikube localkube[12227]: k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.StartControllers(0xc4257b7620, 0xc422484580, 0x6909e20, 0xc424d4b500, 0x6909e20, 0xc424d4b500, 0xc421322ea0, 0xc422517200, 0x20002)
Apr 17 20:02:01 minikube localkube[12227]:         /usr/local/google/home/mrick/go/src/k8s.io/minikube/_gopath/src/k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:428 +0x5b3
Apr 17 20:02:01 minikube localkube[12227]: k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.Run.func2(0xc421322ea0)
Apr 17 20:02:01 minikube localkube[12227]:         /usr/local/google/home/mrick/go/src/k8s.io/minikube/_gopath/src/k8s.io/minikube/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:180 +0xc6
Apr 17 20:02:01 minikube localkube[12227]: created by k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/leaderelection.(*LeaderElector).Run
Apr 17 20:02:01 minikube localkube[12227]:         /usr/local/google/home/mrick/go/src/k8s.io/minikube/_gopath/src/k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/leaderelection/leaderelection.go:150 +0x97
Apr 17 20:02:01 minikube systemd[1]: localkube.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Apr 17 20:02:01 minikube systemd[1]: localkube.service: Unit entered failed state.
Apr 17 20:02:01 minikube systemd[1]: localkube.service: Failed with result 'exit-code'.
Apr 17 20:02:05 minikube systemd[1]: localkube.service: Service hold-off time over, scheduling restart.
Apr 17 20:02:05 minikube systemd[1]: Stopped Localkube.

@donspaulding

I've figured out a way to reproduce this (there's a consolidated script after the numbered steps below).

Basic steps

  1. Disable all addons (done mainly to limit the variables in play), which requires a running cluster, AFAIK.
  2. Start the cluster (--kubernetes-version doesn't matter, I tried it with both the default and 1.6.0)
  3. Run kubectl get po --namespace=kube-system enough times to satisfy yourself that the cluster is responding normally (I usually run it about 10 times to make sure that it's up).
  4. Run helm init
  5. Run helm install https://mirusresearch.github.io/charts/stable/kube-cert-manager-1.0.0.tgz --name=kcm --namespace=kube-system --set=api_key=abc123,api_secret=xyz456
  6. Run kubectl get po --namespace=kube-system enough times to satisfy yourself that the cluster is still responding normally.
  7. Run minikube stop && minikube start
  8. Run kubectl get po --namespace=kube-system and you'll see that localkube is flapping up and down every few seconds; this command responds with the error from the initial issue report.
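
Condensed as a shell sketch (assuming minikube, kubectl, and helm are on the PATH; the addon names are the ones from my addons list below, and the chart URL and --set values are the ones from the steps above):

# 1-2. disable every addon except addon-manager (needs a running cluster), then restart clean
minikube start
for addon in default-storageclass kube-dns heapster ingress registry-creds dashboard; do
  minikube addons disable $addon
done
minikube stop && minikube start
# 3. confirm the cluster responds
kubectl get po --namespace=kube-system
# 4-5. install tiller and the kube-cert-manager chart
helm init
helm install https://mirusresearch.github.io/charts/stable/kube-cert-manager-1.0.0.tgz \
  --name=kcm --namespace=kube-system --set=api_key=abc123,api_secret=xyz456
# 6. still responding normally at this point
kubectl get po --namespace=kube-system
# 7. the problematic restart
minikube stop && minikube start
# 8. now localkube flaps and this returns "connection refused"
kubectl get po --namespace=kube-system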

For reference, here are my vital statistics:

Mon Apr 17 18:10:22 CDT 2017  ~/dev/roadrunner @new-ui  minikube 
don@box : $ minikube version
minikube version: v0.18.0

Mon Apr 17 18:10:49 CDT 2017  ~/dev/roadrunner @new-ui  minikube 
don@box : $ minikube addons list
- default-storageclass: disabled
- kube-dns: disabled
- heapster: disabled
- ingress: disabled
- registry-creds: disabled
- addon-manager: enabled
- dashboard: disabled

Mon Apr 17 18:11:01 CDT 2017  ~/dev/roadrunner @new-ui  minikube 
don@box : $ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"dirty", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7", Compiler:"gc", Platform:"linux/amd64"}

Mon Apr 17 18:11:31 CDT 2017  ~/dev/roadrunner @new-ui  minikube 
don@box : $ kubectl get po --namespace=kube-system
NAME                                     READY     STATUS    RESTARTS   AGE
kcm-kube-cert-manager-2090276146-2ck08   0/2       Pending   0          8m
kube-addon-manager-minikube              1/1       Running   0          11m
tiller-deploy-1012751306-47j12           1/1       Running   0          9m

Notice the kcm pod is in Pending status. It's waiting on a PVC to be fulfilled.

Mon Apr 17 18:14:10 CDT 2017  ~/dev/roadrunner @new-ui  minikube 
don@box : $ kubectl describe po kcm-kube-cert-manager-2090276146-2ck08
Name:		kcm-kube-cert-manager-2090276146-2ck08
Namespace:	kube-system
Node:		/
Labels:		app=kcm-kube-cert-manager
		pod-template-hash=2090276146
Status:		Pending
IP:		
Controllers:	ReplicaSet/kcm-kube-cert-manager-2090276146
Containers:
  kube-cert-manager:
    Image:	alectroemel/kube-cert-manager:latest
    Port:	
    Args:
      -data-dir=/var/lib/cert-manager
      -acme-url=https://acme-v01.api.letsencrypt.org/directory
    Volume Mounts:
      /var/lib/cert-manager from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5hxq6 (ro)
    Environment Variables:
      DNSMADEEASY_API_KEY:	<set to the key 'api_key' in secret 'kcm-kube-cert-manager'>
      DNSMADEEASY_API_SECRET:	<set to the key 'api_secret' in secret 'kcm-kube-cert-manager'>
  kubectl:
    Image:	palmstonegames/kubectl-proxy:1.4.0
    Port:	
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5hxq6 (ro)
    Environment Variables:	<none>
Conditions:
  Type		Status
  PodScheduled 	False 
Volumes:
  data:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	kcm-kube-cert-manager
    ReadOnly:	false
  default-token-5hxq6:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-5hxq6
QoS Class:	BestEffort
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----			-------------	--------	------			-------
  10m		10m		6	{default-scheduler }			Warning		FailedScheduling	SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "kcm-kube-cert-manager", which is unexpected.
  7m		7m		1	{default-scheduler }			Warning		FailedScheduling	SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "kcm-kube-cert-manager", which is unexpected.
  7m		7m		1	{default-scheduler }			Warning		FailedScheduling	SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "kcm-kube-cert-manager", which is unexpected.
  7m		7m		1	{default-scheduler }			Warning		FailedScheduling	SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "kcm-kube-cert-manager", which is unexpected.
...SNIP LOTS MORE LIKE THIS...

Here's the PVC:

don@box : $ kubectl describe pvc
Name:		kcm-kube-cert-manager
Namespace:	kube-system
StorageClass:	default
Status:		Pending
Volume:		
Labels:		app=kcm-kube-cert-manager
Capacity:	
Access Modes:	
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----				-------------	--------	------			-------
  14m		14m		3	{persistentvolume-controller }			Warning		ProvisioningFailed	storageclass.storage.k8s.io "default" not found
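
Incidentally, the claim stays pending because no StorageClass named "default" exists (the default-storageclass addon is disabled). As an aside, a statically provisioned hostPath volume along these lines would let it bind; the name, path, size, and access mode below are illustrative guesses, not values taken from the chart:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kcm-data                  # hypothetical name
spec:
  storageClassName: default       # must match the class the PVC requests
  capacity:
    storage: 1Gi                  # guess; adjust to the chart's request
  accessModes:
    - ReadWriteOnce               # guess; adjust to the chart's request
  hostPath:
    path: /data/kcm-cert-manager  # any path inside the minikube VM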

It seems as though the problem doesn't show up until the first restart of minikube after the problematic resources have been deployed. I doubt anybody else is experiencing this issue for the same reason that I am (i.e. because they're using the mirusresearch/stable/kube-cert-manager helm chart). Still, it would seem that all it takes is some combination of resources that trips up localkube on startup, and then you get these same symptoms.

@r2d4
Copy link
Contributor

r2d4 commented Apr 18, 2017

Hey @donspaulding thanks for the detailed notes. I was able to reproduce this. Taking a deeper look into it now.

@r2d4
Copy link
Contributor

r2d4 commented Apr 18, 2017

cc @aaron-prindle @dlorenc, it seems a lot of people have been having this issue

@r2d4
Copy link
Contributor

r2d4 commented Apr 19, 2017

I'm able to reproduce it with just a minikube start, creating a TPR, minikube stop, minikube start:

apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: cron-tab.stable.example.com
description: "A specification of a Pod to run on a cron style schedule"
versions:
- name: v1
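
Spelled out as commands (the filename cron-tab-tpr.yaml is just for illustration; the manifest is the one above):

minikube start
kubectl create -f cron-tab-tpr.yaml   # register the ThirdPartyResource
minikube stop
minikube start                        # the controller manager now panics during startup
kubectl get po                        # fails with "connection refused" while localkube flaps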

@r2d4
Copy link
Contributor

r2d4 commented Apr 21, 2017

I sent a PR to fix this issue upstream

kubernetes/kubernetes#44771

r2d4 added a commit to r2d4/minikube that referenced this issue May 1, 2017
Reference: kubernetes/kubernetes#44771

Fixes kubernetes#1252

TPRs are incorrectly coupled with the RestMapper right now.  The real
solution is for TPRs to not register themselves with the RestMapper.
This is a short term patch for minikube until the work is done
upstream.  On start/stop, the namespace controller and the garbage
collector controller both call this code and panic since TPRs have
registered themselves with enabled versions but have no group metadata.
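
For anyone curious how a TPR with "enabled versions but no group metadata" turns into the SIGSEGV at the top of this thread, here is a toy Go sketch of that pattern. It is not the actual apimachinery code; all names are made up for illustration:

package main

import "fmt"

// A registration manager that has enabled a group/version without ever
// storing group metadata dereferences a nil *GroupMeta when asked for a
// RESTMapper, analogous to the localkube stack trace above.

type GroupMeta struct {
	GroupVersion string
}

type registrationManager struct {
	enabledVersions []string              // versions enabled at startup
	groupMetaMap    map[string]*GroupMeta // metadata registered per group
}

func (m *registrationManager) RESTMapper() []string {
	var groups []string
	for _, gv := range m.enabledVersions {
		meta := m.groupMetaMap[gv]                 // nil for the TPR's group
		groups = append(groups, meta.GroupVersion) // nil pointer dereference: panic
	}
	return groups
}

func main() {
	m := &registrationManager{
		enabledVersions: []string{"stable.example.com/v1"}, // the TPR's version is enabled...
		groupMetaMap:    map[string]*GroupMeta{},           // ...but no metadata was registered
	}
	fmt.Println(m.RESTMapper()) // panics with SIGSEGV
}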
@r2d4 r2d4 closed this as completed in #1431 May 2, 2017
dalehamel pushed a commit to dalehamel/minikube that referenced this issue May 3, 2017
@r2d4 r2d4 mentioned this issue May 9, 2017
@ash2k
Copy link
Member

ash2k commented May 26, 2017

I'm sorry, but why is this issue closed? It still happens with 0.19.0 running kube 1.6.3.

@r2d4
Copy link
Contributor

r2d4 commented May 29, 2017

Sorry, there was a slight copy-paste error in my patch; the fix will be in the next release:

#1497

@vyfster
Copy link

vyfster commented Jul 20, 2017

Is this fixed in minikube v0.20.0 with kubernetes v1.6.4? I'm experiencing this and it seems like it could be the same issue. I had to restart (minikube stop && minikube start) multiple times when following this example on setting up ingress.

minikube logs prints -- no entries --

> kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"dirty", BuildDate:"2017-06-22T04:31:09Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
> minikube version
minikube version: v0.20.0

I'm on Windows 10 running in hyper-v. Happy to provide any other info if it would help.

@DenisBiondic
Copy link
Contributor

I am also having a "similar" issue: Win 10, Hyper-V, minikube 0.20, kube 1.6.4. It seems to have something to do with the ingress addon for me; I keep getting the localkube service crashing without a trace on the minikube VM. I was using helm & draft; now that I disabled ingress it seems to work fine.
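
For anyone following along, disabling the addon is just the following, with a restart so localkube comes back up cleanly:

minikube addons disable ingress
minikube stop && minikube start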

@vyfster
Copy link

vyfster commented Jul 21, 2017

I don't think this is related. I believe my issue is caused by dynamic memory being turned on (Hyper-V). If I turn off dynamic memory then I don't seem to have a problem.

I noticed the following in the event viewer:
'minikube' has encountered a fatal error. The guest operating system reported that it failed with the following error codes: ErrorCode0: 0x7F2454576109, ErrorCode1: 0x40000000, ErrorCode2: 0x1686F10, ErrorCode3: 0x7F245366F5C0, ErrorCode4: 0x7F24547B9548. If the problem persists, contact Product Support for the guest operating system. (Virtual machine ID D10E910C-6528-42CC-AA19-7378D1071A91)

The VM automatically restarts, after which localkube is not running. This happens at around the 1'45" mark after minikube start (I realise this could/would vary depending on the hardware the VM is running on). Memory allocation goes from 2048 MB to 3840 MB, lasts for 10 seconds, and then the VM restarts.

** disclaimer: I don't yet use minikube / k8s in anger, as I'm still learning how to use it.

@dsanders1234
Copy link

I too am seeing the issue: Win 10, Hyper-V, minikube v0.20 with kubernetes 1.7.0. I disabled the ingress addon and that seems to have fixed the problem.

@DenisBiondic
Copy link
Contributor

I've documented the Hyper-V dynamic memory issue here: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperv-driver

@dsanders1234
Copy link

Thank you @DenisBiondic. Disabling dynamic memory on Hyper-V seems to have fixed the problem, as I can now use the ingress addon. I did have one case where localkube had stopped, but I had shut down my laptop and turned it back on while plugged into a docking station with Ethernet, and the primary virtual switch was set up to point to the WiFi adapter.

@DenisBiondic
Copy link
Contributor

@dsanders1234 the funny thing is, it has nothing to do with ingress or any other addon... the problem is simply that when you have something active going on, Hyper-V tends to fail with dynamic memory (because it tries to allocate more). I managed to crash it with draft & helm as well. Perhaps there is a better fix than turning dynamic memory off completely, but I don't have one at the moment. Note, though, that after minikube delete / start the machine in Hyper-V will again be in dynamic memory mode...
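
For reference, a sketch of turning dynamic memory back off after each delete/start cycle, run from an elevated PowerShell prompt on the host. It assumes the VM is named "minikube" (the driver's default) and picks 2GB as the startup size; adjust as needed:

> minikube stop
> Set-VMMemory minikube -DynamicMemoryEnabled $false -StartupBytes 2GB
> minikube start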
