v1.13.0 tunnel on Windows: not listening at port #9189

Closed
cowwoc opened this issue Sep 5, 2020 · 11 comments
Labels
area/tunnel: Support for the tunnel command
kind/support: Categorizes issue or PR as a support question.
long-term-support: Long-term support issues that can't be fixed in code
os/windows
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments


cowwoc commented Sep 5, 2020

Steps to reproduce the issue:

1. `minikube start --driver=docker` returns:
* minikube v1.13.0 on Microsoft Windows 10 Pro 10.0.19041 Build 19041
* Using the docker driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Creating docker container (CPUs=2, Memory=8100MB) ...
* Preparing Kubernetes v1.19.0 on Docker 19.03.8 ...
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube" by default

2. `kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10` returns:

deployment.apps/hello-minikube created
3. `kubectl expose deployment hello-minikube --type=NodePort --port=8080` returns:
service/hello-minikube exposed
4. `minikube service hello-minikube --url --alsologtostderr` returns:
I0905 00:17:04.540011   34448 mustload.go:66] Loading cluster: minikube
I0905 00:17:04.602010   34448 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0905 00:17:04.725011   34448 host.go:65] Checking if "minikube" exists ...
I0905 00:17:04.755010   34448 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0905 00:17:04.882011   34448 api_server.go:146] Checking apiserver status ...
I0905 00:17:04.930012   34448 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0905 00:17:04.965011   34448 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0905 00:17:05.086012   34448 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32799 SSHKeyPath:C:\Users\Gili\.minikube\machines\minikube\id_rsa Username:docker}
I0905 00:17:05.245010   34448 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/1737/cgroup
I0905 00:17:05.261012   34448 api_server.go:162] apiserver freezer: "20:freezer:/docker/9d1bad48ea81732681b72aeeafd3e18d7b5c2784f4ec179c2428f9ab6fdfc12b/kubepods/burstable/pod824fa06b554fc8c2b6258d0a0c8718d2/f72a2fdad4d48175a252d4d643f477b064552d1d33aee4d565e7cd31762e2f2e"
I0905 00:17:05.308010   34448 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/9d1bad48ea81732681b72aeeafd3e18d7b5c2784f4ec179c2428f9ab6fdfc12b/kubepods/burstable/pod824fa06b554fc8c2b6258d0a0c8718d2/f72a2fdad4d48175a252d4d643f477b064552d1d33aee4d565e7cd31762e2f2e/freezer.state
I0905 00:17:05.318011   34448 api_server.go:184] freezer state: "THAWED"
I0905 00:17:05.318011   34448 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32796/healthz ...
I0905 00:17:05.327011   34448 api_server.go:241] https://127.0.0.1:32796/healthz returned 200:
ok
I0905 00:17:05.354010   34448 service.go:213] Found service: &Service{ObjectMeta:{hello-minikube  default /api/v1/namespaces/default/services/hello-minikube 82a6b581-db49-40bd-bc8f-7bc71c12ed16 454 0 2020-09-05 00:15:39 -0400 EDT <nil> <nil> map[app:hello-minikube] map[] [] []  [{kubectl-expose Update v1 2020-09-05 00:15:39 -0400 EDT FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 112 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 101 120 116 101 114 110 97 108 84 114 97 102 102 105 99 80 111 108 105 99 121 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 112 111 114 116 92 34 58 56 48 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 112 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 44 34 102 58 116 97 114 103 101 116 80 111 114 116 34 58 123 125 125 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 112 34 58 123 125 125 44 34 102 58 115 101 115 115 105 111 110 65 102 102 105 110 105 116 121 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125],}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:8080,TargetPort:{0 8080 },NodePort:31498,},},Selector:map[string]string{app: hello-minikube,},ClusterIP:10.108.180.54,Type:NodePort,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamily:nil,TopologyKeys:[],},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}
I0905 00:17:05.368014   34448 service.go:213] Found service: &Service{ObjectMeta:{hello-minikube  default /api/v1/namespaces/default/services/hello-minikube 82a6b581-db49-40bd-bc8f-7bc71c12ed16 454 0 2020-09-05 00:15:39 -0400 EDT <nil> <nil> map[app:hello-minikube] map[] [] []  [{kubectl-expose Update v1 2020-09-05 00:15:39 -0400 EDT FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 112 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 101 120 116 101 114 110 97 108 84 114 97 102 102 105 99 80 111 108 105 99 121 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 112 111 114 116 92 34 58 56 48 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 112 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 44 34 102 58 116 97 114 103 101 116 80 111 114 116 34 58 123 125 125 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 112 34 58 123 125 125 44 34 102 58 115 101 115 115 105 111 110 65 102 102 105 110 105 116 121 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125],}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:8080,TargetPort:{0 8080 },NodePort:31498,},},Selector:map[string]string{app: hello-minikube,},ClusterIP:10.108.180.54,Type:NodePort,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamily:nil,TopologyKeys:[],},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}
I0905 00:17:05.375012   34448 host.go:65] Checking if "minikube" exists ...
I0905 00:17:05.415011   34448 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0905 00:17:05.605010   34448 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0905 00:17:05.757012   34448 out.go:109] * Starting tunnel for service hello-minikube.
* Starting tunnel for service hello-minikube.
|-----------|----------------|-------------|------------------------|
| NAMESPACE |      NAME      | TARGET PORT |          URL           |
|-----------|----------------|-------------|------------------------|
| default   | hello-minikube |             | http://127.0.0.1:17309 |
|-----------|----------------|-------------|------------------------|
I0905 00:17:06.762057   34448 out.go:109] http://127.0.0.1:17309
http://127.0.0.1:17309
W0905 00:17:06.763056   34448 out.go:145] ! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
5. Opened http://127.0.0.1:17058 in a web browser, which returned "connection refused".
6. Checked `netstat -an` and, sure enough, no process is listening on that port (see the check sketched below).
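For reference, a minimal way to confirm that nothing is listening on the tunnel port on Windows (assuming PowerShell; the port number is whatever `minikube service` printed, 17309 in the run above):

```powershell
# Filter netstat output for the port reported by `minikube service`
netstat -an | findstr 17309

# Or probe the port directly; TcpTestSucceeded should be True if the tunnel is listening
Test-NetConnection -ComputerName 127.0.0.1 -Port 17309
```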

Optional: Full output of minikube logs command:

* ==> Docker <==
* -- Logs begin at Sat 2020-09-05 04:14:14 UTC, end at Sat 2020-09-05 04:17:42 UTC. --
* Sep 05 04:14:14 minikube systemd[1]: Starting Docker Application Container Engine...
* Sep 05 04:14:14 minikube dockerd[161]: time="2020-09-05T04:14:14.766333600Z" level=info msg="Starting up"
* Sep 05 04:14:14 minikube dockerd[161]: time="2020-09-05T04:14:14.768792600Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Sep 05 04:14:14 minikube dockerd[161]: time="2020-09-05T04:14:14.768817800Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Sep 05 04:14:14 minikube dockerd[161]: time="2020-09-05T04:14:14.768849700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
* Sep 05 04:14:14 minikube dockerd[161]: time="2020-09-05T04:14:14.768860100Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Sep 05 04:14:14 minikube dockerd[161]: time="2020-09-05T04:14:14.794959100Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Sep 05 04:14:14 minikube dockerd[161]: time="2020-09-05T04:14:14.794984100Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Sep 05 04:14:14 minikube dockerd[161]: time="2020-09-05T04:14:14.794998600Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
* Sep 05 04:14:14 minikube dockerd[161]: time="2020-09-05T04:14:14.795005500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Sep 05 04:14:15 minikube dockerd[161]: time="2020-09-05T04:14:15.530906700Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
* Sep 05 04:14:15 minikube dockerd[161]: time="2020-09-05T04:14:15.571967100Z" level=warning msg="Your kernel does not support cgroup blkio weight"
* Sep 05 04:14:15 minikube dockerd[161]: time="2020-09-05T04:14:15.572007200Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Sep 05 04:14:15 minikube dockerd[161]: time="2020-09-05T04:14:15.572015100Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
* Sep 05 04:14:15 minikube dockerd[161]: time="2020-09-05T04:14:15.572019200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
* Sep 05 04:14:15 minikube dockerd[161]: time="2020-09-05T04:14:15.572023900Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
* Sep 05 04:14:15 minikube dockerd[161]: time="2020-09-05T04:14:15.572027800Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
* Sep 05 04:14:15 minikube dockerd[161]: time="2020-09-05T04:14:15.573808600Z" level=info msg="Loading containers: start."
* Sep 05 04:14:15 minikube dockerd[161]: time="2020-09-05T04:14:15.577051000Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/4.19.104-microsoft-standard\nmodprobe: WARNING: Module br_netfilter not found in directory /lib/modules/4.19.104-microsoft-standard\n, error: exit status 1"
* Sep 05 04:14:16 minikube dockerd[161]: time="2020-09-05T04:14:16.003657200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Sep 05 04:14:16 minikube dockerd[161]: time="2020-09-05T04:14:16.062939500Z" level=info msg="Loading containers: done."
* Sep 05 04:14:16 minikube dockerd[161]: time="2020-09-05T04:14:16.380524100Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
* Sep 05 04:14:16 minikube dockerd[161]: time="2020-09-05T04:14:16.380634600Z" level=info msg="Daemon has completed initialization"
* Sep 05 04:14:16 minikube dockerd[161]: time="2020-09-05T04:14:16.433659300Z" level=info msg="API listen on /run/docker.sock"
* Sep 05 04:14:16 minikube systemd[1]: Started Docker Application Container Engine.
* Sep 05 04:14:27 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
* Sep 05 04:14:27 minikube systemd[1]: Stopping Docker Application Container Engine...
* Sep 05 04:14:27 minikube dockerd[161]: time="2020-09-05T04:14:27.462764300Z" level=info msg="Processing signal 'terminated'"
* Sep 05 04:14:27 minikube dockerd[161]: time="2020-09-05T04:14:27.463495300Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
* Sep 05 04:14:27 minikube dockerd[161]: time="2020-09-05T04:14:27.463956300Z" level=info msg="Daemon shutdown complete"
* Sep 05 04:14:27 minikube systemd[1]: docker.service: Succeeded.
* Sep 05 04:14:27 minikube systemd[1]: Stopped Docker Application Container Engine.
* Sep 05 04:14:27 minikube systemd[1]: Starting Docker Application Container Engine...
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.502359900Z" level=info msg="Starting up"
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.504340300Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.504376400Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.504710400Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.504888100Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.506849600Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.506887400Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.506904800Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.506911800Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.513895300Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.522005800Z" level=warning msg="Your kernel does not support cgroup blkio weight"
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.522047800Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.522055000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.522059200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.522063000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.522066700Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.522216600Z" level=info msg="Loading containers: start."
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.523626600Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/4.19.104-microsoft-standard\nmodprobe: WARNING: Module br_netfilter not found in directory /lib/modules/4.19.104-microsoft-standard\n, error: exit status 1"
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.608034900Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.640289600Z" level=info msg="Loading containers: done."
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.650697200Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.650764600Z" level=info msg="Daemon has completed initialization"
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.665902600Z" level=info msg="API listen on /var/run/docker.sock"
* Sep 05 04:14:27 minikube dockerd[389]: time="2020-09-05T04:14:27.665919100Z" level=info msg="API listen on [::]:2376"
* Sep 05 04:14:27 minikube systemd[1]: Started Docker Application Container Engine.
* 
* ==> container status <==
* CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
* 966e9115b99e4       k8s.gcr.io/echoserver@sha256:cb5c1bddd1b5665e1867a7fa1b5fa843a47ee433bbb75d4293888b71def53229   2 minutes ago       Running             echoserver                0                   c6c3cd1f83bd1
* 6804ce7070231       bad58561c4be7                                                                                   2 minutes ago       Running             storage-provisioner       0                   1c4fd385c37eb
* 88abb1de26a48       bfe3a36ebd252                                                                                   2 minutes ago       Running             coredns                   0                   99a8950690493
* 987b00aae63e2       bc9c328f379ce                                                                                   2 minutes ago       Running             kube-proxy                0                   39573b25b022f
* defaec38d15dd       cbdc8369d8b15                                                                                   3 minutes ago       Running             kube-scheduler            0                   5bdc8a8e327cf
* 6ff1f556f8b89       09d665d529d07                                                                                   3 minutes ago       Running             kube-controller-manager   0                   2e540c623e8a7
* f72a2fdad4d48       1b74e93ece2f5                                                                                   3 minutes ago       Running             kube-apiserver            0                   b8cb3d713ef36
* 2a161151ff02a       d4ca8726196cb                                                                                   3 minutes ago       Running             etcd                      0                   453cec9b5eb39
* 
* ==> coredns [88abb1de26a4] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
* CoreDNS-1.7.0
* linux/amd64, go1.14.4, f59c03d
* 
* ==> describe nodes <==
* Name:               minikube
* Roles:              master
* Labels:             beta.kubernetes.io/arch=amd64
*                     beta.kubernetes.io/os=linux
*                     kubernetes.io/arch=amd64
*                     kubernetes.io/hostname=minikube
*                     kubernetes.io/os=linux
*                     minikube.k8s.io/commit=0c5e9de4ca6f9c55147ae7f90af97eff5befef5f-dirty
*                     minikube.k8s.io/name=minikube
*                     minikube.k8s.io/updated_at=2020_09_05T00_14_58_0700
*                     minikube.k8s.io/version=v1.13.0
*                     node-role.kubernetes.io/master=
* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
*                     node.alpha.kubernetes.io/ttl: 0
*                     volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp:  Sat, 05 Sep 2020 04:14:39 +0000
* Taints:             <none>
* Unschedulable:      false
* Lease:
*   HolderIdentity:  minikube
*   AcquireTime:     <unset>
*   RenewTime:       Sat, 05 Sep 2020 04:17:33 +0000
* Conditions:
*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
*   ----             ------  -----------------                 ------------------                ------                       -------
*   MemoryPressure   False   Sat, 05 Sep 2020 04:15:44 +0000   Sat, 05 Sep 2020 04:14:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
*   DiskPressure     False   Sat, 05 Sep 2020 04:15:44 +0000   Sat, 05 Sep 2020 04:14:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
*   PIDPressure      False   Sat, 05 Sep 2020 04:15:44 +0000   Sat, 05 Sep 2020 04:14:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
*   Ready            True    Sat, 05 Sep 2020 04:15:44 +0000   Sat, 05 Sep 2020 04:14:53 +0000   KubeletReady                 kubelet is posting ready status
* Addresses:
*   InternalIP:  172.17.0.3
*   Hostname:    minikube
* Capacity:
*   cpu:                4
*   ephemeral-storage:  263174212Ki
*   hugepages-2Mi:      0
*   memory:             26208280Ki
*   pods:               110
* Allocatable:
*   cpu:                4
*   ephemeral-storage:  263174212Ki
*   hugepages-2Mi:      0
*   memory:             26208280Ki
*   pods:               110
* System Info:
*   Machine ID:                 bb96923417d143dc8cc5dcc9328b1aeb
*   System UUID:                bb96923417d143dc8cc5dcc9328b1aeb
*   Boot ID:                    4c461974-ce1a-46c7-a895-535c2231cdfd
*   Kernel Version:             4.19.104-microsoft-standard
*   OS Image:                   Ubuntu 20.04 LTS
*   Operating System:           linux
*   Architecture:               amd64
*   Container Runtime Version:  docker://19.3.8
*   Kubelet Version:            v1.19.0
*   Kube-Proxy Version:         v1.19.0
* Non-terminated Pods:          (8 in total)
*   Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
*   ---------                   ----                                ------------  ----------  ---------------  -------------  ---
*   default                     hello-minikube-5d9b964bfb-l9qvl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
*   kube-system                 coredns-f9fd979d6-4s474             100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m53s
*   kube-system                 etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
*   kube-system                 kube-apiserver-minikube             250m (6%)     0 (0%)      0 (0%)           0 (0%)         2m58s
*   kube-system                 kube-controller-manager-minikube    200m (5%)     0 (0%)      0 (0%)           0 (0%)         2m58s
*   kube-system                 kube-proxy-f6r7r                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
*   kube-system                 kube-scheduler-minikube             100m (2%)     0 (0%)      0 (0%)           0 (0%)         2m58s
*   kube-system                 storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
* Allocated resources:
*   (Total limits may be over 100 percent, i.e., overcommitted.)
*   Resource           Requests    Limits
*   --------           --------    ------
*   cpu                650m (16%)  0 (0%)
*   memory             70Mi (0%)   170Mi (0%)
*   ephemeral-storage  0 (0%)      0 (0%)
*   hugepages-2Mi      0 (0%)      0 (0%)
* Events:
*   Type    Reason                   Age                  From                  Message
*   ----    ------                   ----                 ----                  -------
*   Normal  NodeHasSufficientMemory  3m8s (x5 over 3m8s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
*   Normal  NodeHasNoDiskPressure    3m8s (x5 over 3m8s)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
*   Normal  NodeHasSufficientPID     3m8s (x4 over 3m8s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
*   Normal  Starting                 2m59s                kubelet, minikube     Starting kubelet.
*   Normal  NodeHasSufficientMemory  2m59s                kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
*   Normal  NodeHasNoDiskPressure    2m59s                kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
*   Normal  NodeHasSufficientPID     2m59s                kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
*   Normal  NodeAllocatableEnforced  2m59s                kubelet, minikube     Updated Node Allocatable limit across pods
*   Normal  Starting                 2m52s                kube-proxy, minikube  Starting kube-proxy.
*   Normal  NodeReady                2m49s                kubelet, minikube     Node minikube status is now: NodeReady
* 
* ==> dmesg <==
* [Sep 5 03:11] WSL2: Performing memory compaction.
* [Sep 5 03:13] WSL2: Performing memory compaction.
* [Sep 5 03:14] WSL2: Performing memory compaction.
* [Sep 5 03:15] WSL2: Performing memory compaction.
* [Sep 5 03:16] WSL2: Performing memory compaction.
* [Sep 5 03:17] WSL2: Performing memory compaction.
* [Sep 5 03:18] WSL2: Performing memory compaction.
* [Sep 5 03:19] WSL2: Performing memory compaction.
* [Sep 5 03:20] WSL2: Performing memory compaction.
* [Sep 5 03:21] WSL2: Performing memory compaction.
* [Sep 5 03:22] WSL2: Performing memory compaction.
* [Sep 5 03:23] WSL2: Performing memory compaction.
* [Sep 5 03:24] WSL2: Performing memory compaction.
* [Sep 5 03:25] WSL2: Performing memory compaction.
* [Sep 5 03:26] WSL2: Performing memory compaction.
* [Sep 5 03:27] WSL2: Performing memory compaction.
* [Sep 5 03:28] WSL2: Performing memory compaction.
* [Sep 5 03:29] WSL2: Performing memory compaction.
* [Sep 5 03:30] WSL2: Performing memory compaction.
* [Sep 5 03:31] WSL2: Performing memory compaction.
* [Sep 5 03:32] WSL2: Performing memory compaction.
* [Sep 5 03:33] WSL2: Performing memory compaction.
* [Sep 5 03:34] WSL2: Performing memory compaction.
* [Sep 5 03:35] WSL2: Performing memory compaction.
* [Sep 5 03:36] WSL2: Performing memory compaction.
* [Sep 5 03:37] WSL2: Performing memory compaction.
* [Sep 5 03:38] WSL2: Performing memory compaction.
* [Sep 5 03:39] WSL2: Performing memory compaction.
* [Sep 5 03:40] WSL2: Performing memory compaction.
* [Sep 5 03:42] WSL2: Performing memory compaction.
* [Sep 5 03:43] WSL2: Performing memory compaction.
* [Sep 5 03:44] WSL2: Performing memory compaction.
* [Sep 5 03:45] WSL2: Performing memory compaction.
* [Sep 5 03:46] WSL2: Performing memory compaction.
* [Sep 5 03:47] WSL2: Performing memory compaction.
* [Sep 5 03:48] WSL2: Performing memory compaction.
* [Sep 5 03:49] WSL2: Performing memory compaction.
* [Sep 5 03:50] WSL2: Performing memory compaction.
* [Sep 5 03:51] WSL2: Performing memory compaction.
* [Sep 5 03:52] WSL2: Performing memory compaction.
* [  +7.588089] ICMPv6: process `sysctl' is using deprecated sysctl (syscall) net.ipv6.neigh.docker0.base_reachable_time - use net.ipv6.neigh.docker0.base_reachable_time_ms instead
* [Sep 5 03:53] WSL2: Performing memory compaction.
* [Sep 5 03:54] WSL2: Performing memory compaction.
* [Sep 5 03:55] WSL2: Performing memory compaction.
* [Sep 5 03:56] WSL2: Performing memory compaction.
* [Sep 5 03:58] WSL2: Performing memory compaction.
* [Sep 5 04:03] WSL2: Performing memory compaction.
* [Sep 5 04:04] WSL2: Performing memory compaction.
* [Sep 5 04:05] WSL2: Performing memory compaction.
* [Sep 5 04:06] WSL2: Performing memory compaction.
* [Sep 5 04:07] WSL2: Performing memory compaction.
* [Sep 5 04:08] WSL2: Performing memory compaction.
* [Sep 5 04:09] WSL2: Performing memory compaction.
* [Sep 5 04:10] WSL2: Performing memory compaction.
* [Sep 5 04:12] WSL2: Performing memory compaction.
* [Sep 5 04:13] WSL2: Performing memory compaction.
* [Sep 5 04:14] WSL2: Performing memory compaction.
* [Sep 5 04:15] WSL2: Performing memory compaction.
* [Sep 5 04:16] WSL2: Performing memory compaction.
* [Sep 5 04:17] WSL2: Performing memory compaction.
* 
* ==> etcd [2a161151ff02] <==
* 2020-09-05 04:14:36.098563 I | etcdmain: etcd Version: 3.4.9
* 2020-09-05 04:14:36.098594 I | etcdmain: Git SHA: 54ba95891
* 2020-09-05 04:14:36.098597 I | etcdmain: Go Version: go1.12.17
* 2020-09-05 04:14:36.098599 I | etcdmain: Go OS/Arch: linux/amd64
* 2020-09-05 04:14:36.098602 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-09-05 04:14:36.098660 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
* 2020-09-05 04:14:36.099036 I | embed: name = minikube
* 2020-09-05 04:14:36.099044 I | embed: data dir = /var/lib/minikube/etcd
* 2020-09-05 04:14:36.099047 I | embed: member dir = /var/lib/minikube/etcd/member
* 2020-09-05 04:14:36.099120 I | embed: heartbeat = 100ms
* 2020-09-05 04:14:36.099123 I | embed: election = 1000ms
* 2020-09-05 04:14:36.099126 I | embed: snapshot count = 10000
* 2020-09-05 04:14:36.099131 I | embed: advertise client URLs = https://172.17.0.3:2379
* 2020-09-05 04:14:36.156526 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
* raft2020/09/05 04:14:36 INFO: b273bc7741bcb020 switched to configuration voters=()
* raft2020/09/05 04:14:36 INFO: b273bc7741bcb020 became follower at term 0
* raft2020/09/05 04:14:36 INFO: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
* raft2020/09/05 04:14:36 INFO: b273bc7741bcb020 became follower at term 1
* raft2020/09/05 04:14:36 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
* 2020-09-05 04:14:36.172592 W | auth: simple token is not cryptographically signed
* 2020-09-05 04:14:36.243133 I | etcdserver: starting server... [version: 3.4.9, cluster version: to_be_decided]
* 2020-09-05 04:14:36.244597 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
* 2020-09-05 04:14:36.244710 I | embed: listening for metrics on http://127.0.0.1:2381
* 2020-09-05 04:14:36.245250 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
* 2020-09-05 04:14:36.246142 I | embed: listening for peers on 172.17.0.3:2380
* raft2020/09/05 04:14:36 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
* 2020-09-05 04:14:36.246313 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
* raft2020/09/05 04:14:36 INFO: b273bc7741bcb020 is starting a new election at term 1
* raft2020/09/05 04:14:36 INFO: b273bc7741bcb020 became candidate at term 2
* raft2020/09/05 04:14:36 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2
* raft2020/09/05 04:14:36 INFO: b273bc7741bcb020 became leader at term 2
* raft2020/09/05 04:14:36 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2
* 2020-09-05 04:14:36.629954 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2
* 2020-09-05 04:14:36.630140 I | embed: ready to serve client requests
* 2020-09-05 04:14:36.630889 I | embed: serving client requests on 172.17.0.3:2379
* 2020-09-05 04:14:36.630941 I | etcdserver: setting up the initial cluster version to 3.4
* 2020-09-05 04:14:36.631507 I | embed: ready to serve client requests
* 2020-09-05 04:14:36.632190 I | embed: serving client requests on 127.0.0.1:2379
* 2020-09-05 04:14:36.650652 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-09-05 04:14:36.650724 I | etcdserver/api: enabled capabilities for version 3.4
* 2020-09-05 04:14:46.703512 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:14:48.376548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:14:58.376741 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:15:08.376761 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:15:18.376601 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:15:28.376717 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:15:38.376745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:15:48.376614 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:15:58.377054 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:16:08.376460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:16:18.376814 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:16:28.376866 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:16:38.376656 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:16:48.377083 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:16:58.376585 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:17:08.376437 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:17:18.376656 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:17:28.376486 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-05 04:17:38.376989 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 
* ==> kernel <==
*  04:17:43 up 6 days,  8:08,  0 users,  load average: 0.14, 0.38, 0.54
* Linux minikube 4.19.104-microsoft-standard #1 SMP Wed Feb 19 06:37:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Ubuntu 20.04 LTS"
* 
* ==> kube-apiserver [f72a2fdad4d4] <==
* I0905 04:14:39.830567       1 secure_serving.go:197] Serving securely on [::]:8443
* I0905 04:14:39.830787       1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0905 04:14:39.830993       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
* I0905 04:14:39.831002       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
* I0905 04:14:39.831030       1 autoregister_controller.go:141] Starting autoregister controller
* I0905 04:14:39.831032       1 cache.go:32] Waiting for caches to sync for autoregister controller
* I0905 04:14:39.831346       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
* I0905 04:14:39.831352       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
* I0905 04:14:39.831629       1 crdregistration_controller.go:111] Starting crd-autoregister controller
* I0905 04:14:39.831635       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
* I0905 04:14:39.831654       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0905 04:14:39.831668       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0905 04:14:39.835678       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
* I0905 04:14:39.835825       1 available_controller.go:404] Starting AvailableConditionController
* I0905 04:14:39.835888       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
* I0905 04:14:39.835953       1 controller.go:83] Starting OpenAPI AggregationController
* I0905 04:14:39.835993       1 customresource_discovery_controller.go:209] Starting DiscoveryController
* I0905 04:14:39.852456       1 controller.go:86] Starting OpenAPI controller
* I0905 04:14:39.852629       1 naming_controller.go:291] Starting NamingConditionController
* I0905 04:14:39.852713       1 establishing_controller.go:76] Starting EstablishingController
* I0905 04:14:39.852785       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
* I0905 04:14:39.857483       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
* I0905 04:14:39.857531       1 crd_finalizer.go:266] Starting CRDFinalizer
* I0905 04:14:39.931564       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
* I0905 04:14:39.931870       1 shared_informer.go:247] Caches are synced for crd-autoregister 
* I0905 04:14:39.931583       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I0905 04:14:39.931600       1 cache.go:39] Caches are synced for autoregister controller
* I0905 04:14:39.936149       1 cache.go:39] Caches are synced for AvailableConditionController controller
* E0905 04:14:39.952093       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg: 
* I0905 04:14:40.830191       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I0905 04:14:40.830230       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I0905 04:14:40.834459       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
* I0905 04:14:40.838203       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
* I0905 04:14:40.838357       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
* I0905 04:14:41.363997       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I0905 04:14:41.409313       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* W0905 04:14:41.514121       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.17.0.3]
* I0905 04:14:41.515031       1 controller.go:606] quota admission added evaluator for: endpoints
* I0905 04:14:41.518796       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
* I0905 04:14:42.290377       1 controller.go:606] quota admission added evaluator for: serviceaccounts
* I0905 04:14:43.080108       1 controller.go:606] quota admission added evaluator for: deployments.apps
* I0905 04:14:43.485885       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
* I0905 04:14:43.602417       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
* I0905 04:14:49.290084       1 controller.go:606] quota admission added evaluator for: replicasets.apps
* I0905 04:14:49.347157       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
* I0905 04:15:08.444748       1 client.go:360] parsed scheme: "passthrough"
* I0905 04:15:08.444780       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I0905 04:15:08.444786       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0905 04:15:45.720059       1 client.go:360] parsed scheme: "passthrough"
* I0905 04:15:45.720106       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I0905 04:15:45.720113       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0905 04:16:19.729277       1 client.go:360] parsed scheme: "passthrough"
* I0905 04:16:19.729337       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I0905 04:16:19.729343       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0905 04:16:52.700372       1 client.go:360] parsed scheme: "passthrough"
* I0905 04:16:52.700421       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I0905 04:16:52.700428       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0905 04:17:36.945916       1 client.go:360] parsed scheme: "passthrough"
* I0905 04:17:36.945964       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I0905 04:17:36.945971       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* 
* ==> kube-controller-manager [6ff1f556f8b8] <==
* I0905 04:14:48.237851       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
* E0905 04:14:48.487218       1 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
* W0905 04:14:48.487249       1 controllermanager.go:541] Skipping "service"
* I0905 04:14:48.737322       1 controllermanager.go:549] Started "persistentvolume-expander"
* I0905 04:14:48.737434       1 expand_controller.go:319] Starting expand controller
* I0905 04:14:48.737442       1 shared_informer.go:240] Waiting for caches to sync for expand
* I0905 04:14:48.987296       1 controllermanager.go:549] Started "job"
* I0905 04:14:48.987355       1 job_controller.go:148] Starting job controller
* I0905 04:14:48.987361       1 shared_informer.go:240] Waiting for caches to sync for job
* I0905 04:14:49.237135       1 controllermanager.go:549] Started "pvc-protection"
* I0905 04:14:49.237176       1 pvc_protection_controller.go:110] Starting PVC protection controller
* I0905 04:14:49.237184       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
* I0905 04:14:49.237951       1 shared_informer.go:240] Waiting for caches to sync for resource quota
* W0905 04:14:49.244521       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
* I0905 04:14:49.285788       1 shared_informer.go:247] Caches are synced for ReplicaSet 
* I0905 04:14:49.287180       1 shared_informer.go:247] Caches are synced for deployment 
* I0905 04:14:49.287762       1 shared_informer.go:247] Caches are synced for service account 
* I0905 04:14:49.288467       1 shared_informer.go:247] Caches are synced for job 
* I0905 04:14:49.288490       1 shared_informer.go:247] Caches are synced for endpoint_slice 
* I0905 04:14:49.288497       1 shared_informer.go:247] Caches are synced for ReplicationController 
* I0905 04:14:49.291409       1 shared_informer.go:247] Caches are synced for namespace 
* I0905 04:14:49.294860       1 shared_informer.go:247] Caches are synced for endpoint 
* I0905 04:14:49.297390       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1"
* I0905 04:14:49.313559       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-4s474"
* I0905 04:14:49.316250       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
* I0905 04:14:49.332095       1 shared_informer.go:247] Caches are synced for disruption 
* I0905 04:14:49.332128       1 disruption.go:339] Sending events to api server.
* I0905 04:14:49.341079       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
* I0905 04:14:49.341156       1 shared_informer.go:247] Caches are synced for daemon sets 
* I0905 04:14:49.341173       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
* I0905 04:14:49.341188       1 shared_informer.go:247] Caches are synced for stateful set 
* I0905 04:14:49.341580       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
* I0905 04:14:49.341611       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
* I0905 04:14:49.341716       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
* I0905 04:14:49.341929       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
* I0905 04:14:49.342274       1 shared_informer.go:247] Caches are synced for GC 
* I0905 04:14:49.342415       1 shared_informer.go:247] Caches are synced for PVC protection 
* I0905 04:14:49.342498       1 shared_informer.go:247] Caches are synced for persistent volume 
* I0905 04:14:49.342649       1 shared_informer.go:247] Caches are synced for PV protection 
* I0905 04:14:49.342680       1 shared_informer.go:247] Caches are synced for expand 
* I0905 04:14:49.342796       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
* I0905 04:14:49.343965       1 shared_informer.go:247] Caches are synced for TTL 
* I0905 04:14:49.385413       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f6r7r"
* I0905 04:14:49.413358       1 shared_informer.go:247] Caches are synced for attach detach 
* I0905 04:14:49.487380       1 shared_informer.go:247] Caches are synced for HPA 
* I0905 04:14:49.538213       1 shared_informer.go:247] Caches are synced for resource quota 
* I0905 04:14:49.539085       1 shared_informer.go:247] Caches are synced for resource quota 
* I0905 04:14:49.587244       1 shared_informer.go:247] Caches are synced for taint 
* I0905 04:14:49.587315       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
* W0905 04:14:49.587347       1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
* I0905 04:14:49.587459       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
* I0905 04:14:49.587507       1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0905 04:14:49.587596       1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
* I0905 04:14:49.594828       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
* I0905 04:14:49.887679       1 shared_informer.go:247] Caches are synced for garbage collector 
* I0905 04:14:49.887700       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I0905 04:14:49.895546       1 shared_informer.go:247] Caches are synced for garbage collector 
* I0905 04:14:54.587757       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
* I0905 04:14:56.493118       1 event.go:291] "Event occurred" object="default/hello-minikube" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-minikube-5d9b964bfb to 1"
* I0905 04:14:56.508134       1 event.go:291] "Event occurred" object="default/hello-minikube-5d9b964bfb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-minikube-5d9b964bfb-l9qvl"
* 
* ==> kube-proxy [987b00aae63e] <==
* I0905 04:14:50.155587       1 node.go:136] Successfully retrieved node IP: 172.17.0.3
* I0905 04:14:50.155797       1 server_others.go:111] kube-proxy node IP is an IPv4 address (172.17.0.3), assume IPv4 operation
* W0905 04:14:50.176544       1 proxier.go:639] Failed to read file /lib/modules/4.19.104-microsoft-standard/modules.builtin with error open /lib/modules/4.19.104-microsoft-standard/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
* W0905 04:14:50.177874       1 proxier.go:649] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
* W0905 04:14:50.179154       1 proxier.go:649] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
* W0905 04:14:50.180400       1 proxier.go:649] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
* W0905 04:14:50.181790       1 proxier.go:649] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
* W0905 04:14:50.183064       1 proxier.go:649] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
* W0905 04:14:50.183187       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
* I0905 04:14:50.183253       1 server_others.go:186] Using iptables Proxier.
* W0905 04:14:50.183260       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I0905 04:14:50.183263       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I0905 04:14:50.183477       1 server.go:650] Version: v1.19.0
* I0905 04:14:50.183785       1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0905 04:14:50.183850       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0905 04:14:50.183896       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0905 04:14:50.184015       1 config.go:315] Starting service config controller
* I0905 04:14:50.184082       1 shared_informer.go:240] Waiting for caches to sync for service config
* I0905 04:14:50.184123       1 config.go:224] Starting endpoint slice config controller
* I0905 04:14:50.184142       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
* I0905 04:14:50.284327       1 shared_informer.go:247] Caches are synced for endpoint slice config 
* I0905 04:14:50.284361       1 shared_informer.go:247] Caches are synced for service config 
* 
* ==> kube-scheduler [defaec38d15d] <==
* I0905 04:14:36.838194       1 registry.go:173] Registering SelectorSpread plugin
* I0905 04:14:36.838253       1 registry.go:173] Registering SelectorSpread plugin
* I0905 04:14:37.544303       1 serving.go:331] Generated self-signed cert in-memory
* W0905 04:14:39.936914       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0905 04:14:39.936957       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0905 04:14:39.936974       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0905 04:14:39.936979       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0905 04:14:39.947443       1 registry.go:173] Registering SelectorSpread plugin
* I0905 04:14:39.947770       1 registry.go:173] Registering SelectorSpread plugin
* I0905 04:14:39.953744       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
* I0905 04:14:39.954775       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0905 04:14:39.954941       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0905 04:14:39.955116       1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0905 04:14:39.958326       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0905 04:14:39.961383       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E0905 04:14:39.961459       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0905 04:14:39.967634       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0905 04:14:39.967708       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0905 04:14:39.968537       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E0905 04:14:39.968646       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0905 04:14:39.968746       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0905 04:14:39.968826       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0905 04:14:39.968892       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0905 04:14:39.968960       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0905 04:14:39.969031       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E0905 04:14:39.969117       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0905 04:14:40.866167       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0905 04:14:40.878482       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0905 04:14:40.890840       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0905 04:14:40.993082       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0905 04:14:41.084301       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0905 04:14:41.123127       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0905 04:14:41.158407       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* I0905 04:14:43.755560       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
* 
* ==> kubelet <==
* -- Logs begin at Sat 2020-09-05 04:14:14 UTC, end at Sat 2020-09-05 04:17:44 UTC. --
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.589899    2089 volume_manager.go:265] Starting Kubelet Volume Manager
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.590254    2089 desired_state_of_world_populator.go:139] Desired state populator starts to run
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.679603    2089 status_manager.go:158] Starting to sync pod status with apiserver
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.679648    2089 kubelet.go:1741] Starting kubelet main sync loop.
* Sep 05 04:14:43 minikube kubelet[2089]: E0905 04:14:43.679685    2089 kubelet.go:1765] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.737939    2089 client.go:87] parsed scheme: "unix"
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.737983    2089 client.go:87] scheme "unix" not registered, fallback to default scheme
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.738000    2089 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.738005    2089 clientconn.go:948] ClientConn switching balancer to "pick_first"
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.755395    2089 kubelet_node_status.go:70] Attempting to register node minikube
* Sep 05 04:14:43 minikube kubelet[2089]: E0905 04:14:43.780372    2089 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.789772    2089 kubelet_node_status.go:108] Node minikube was previously registered
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.789850    2089 kubelet_node_status.go:73] Successfully registered node minikube
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.954973    2089 cpu_manager.go:184] [cpumanager] starting with none policy
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.955003    2089 cpu_manager.go:185] [cpumanager] reconciling every 10s
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.955019    2089 state_mem.go:36] [cpumanager] initializing new in-memory state store
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.955142    2089 state_mem.go:88] [cpumanager] updated default cpuset: ""
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.955149    2089 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.955156    2089 policy_none.go:43] [cpumanager] none policy: Start
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.960784    2089 plugin_manager.go:114] Starting Kubelet Plugin Manager
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.980545    2089 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.992125    2089 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Sep 05 04:14:43 minikube kubelet[2089]: I0905 04:14:43.998676    2089 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.011684    2089 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.031283    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/13118b761000f8fe2c4662d5f32d9532-etcd-certs") pod "etcd-minikube" (UID: "13118b761000f8fe2c4662d5f32d9532")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.031327    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/13118b761000f8fe2c4662d5f32d9532-etcd-data") pod "etcd-minikube" (UID: "13118b761000f8fe2c4662d5f32d9532")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131566    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/282bfbc855c0e9296869c3f54a940e7a-kubeconfig") pod "kube-scheduler-minikube" (UID: "282bfbc855c0e9296869c3f54a940e7a")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131618    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/824fa06b554fc8c2b6258d0a0c8718d2-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "824fa06b554fc8c2b6258d0a0c8718d2")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131634    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/fed4517a236f09d37c85c5e69aa2a890-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "fed4517a236f09d37c85c5e69aa2a890")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131644    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/824fa06b554fc8c2b6258d0a0c8718d2-ca-certs") pod "kube-apiserver-minikube" (UID: "824fa06b554fc8c2b6258d0a0c8718d2")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131656    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/824fa06b554fc8c2b6258d0a0c8718d2-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "824fa06b554fc8c2b6258d0a0c8718d2")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131665    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/fed4517a236f09d37c85c5e69aa2a890-ca-certs") pod "kube-controller-manager-minikube" (UID: "fed4517a236f09d37c85c5e69aa2a890")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131675    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/fed4517a236f09d37c85c5e69aa2a890-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "fed4517a236f09d37c85c5e69aa2a890")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131688    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/fed4517a236f09d37c85c5e69aa2a890-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "fed4517a236f09d37c85c5e69aa2a890")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131717    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/fed4517a236f09d37c85c5e69aa2a890-k8s-certs") pod "kube-controller-manager-minikube" (UID: "fed4517a236f09d37c85c5e69aa2a890")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131744    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/fed4517a236f09d37c85c5e69aa2a890-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "fed4517a236f09d37c85c5e69aa2a890")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131781    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/fed4517a236f09d37c85c5e69aa2a890-kubeconfig") pod "kube-controller-manager-minikube" (UID: "fed4517a236f09d37c85c5e69aa2a890")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131797    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/824fa06b554fc8c2b6258d0a0c8718d2-k8s-certs") pod "kube-apiserver-minikube" (UID: "824fa06b554fc8c2b6258d0a0c8718d2")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131818    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/824fa06b554fc8c2b6258d0a0c8718d2-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "824fa06b554fc8c2b6258d0a0c8718d2")
* Sep 05 04:14:44 minikube kubelet[2089]: I0905 04:14:44.131832    2089 reconciler.go:157] Reconciler: start to sync state
* Sep 05 04:14:49 minikube kubelet[2089]: I0905 04:14:49.392797    2089 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Sep 05 04:14:49 minikube kubelet[2089]: I0905 04:14:49.452896    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/53481da0-cfa9-429d-a10e-370306278ab1-xtables-lock") pod "kube-proxy-f6r7r" (UID: "53481da0-cfa9-429d-a10e-370306278ab1")
* Sep 05 04:14:49 minikube kubelet[2089]: I0905 04:14:49.452942    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/53481da0-cfa9-429d-a10e-370306278ab1-kube-proxy") pod "kube-proxy-f6r7r" (UID: "53481da0-cfa9-429d-a10e-370306278ab1")
* Sep 05 04:14:49 minikube kubelet[2089]: I0905 04:14:49.452959    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/53481da0-cfa9-429d-a10e-370306278ab1-lib-modules") pod "kube-proxy-f6r7r" (UID: "53481da0-cfa9-429d-a10e-370306278ab1")
* Sep 05 04:14:49 minikube kubelet[2089]: I0905 04:14:49.452973    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-r2k54" (UniqueName: "kubernetes.io/secret/53481da0-cfa9-429d-a10e-370306278ab1-kube-proxy-token-r2k54") pod "kube-proxy-f6r7r" (UID: "53481da0-cfa9-429d-a10e-370306278ab1")
* Sep 05 04:14:49 minikube kubelet[2089]: W0905 04:14:49.997716    2089 pod_container_deletor.go:79] Container "39573b25b022f978fb4bf0aa81208687936d79946eb60b20a701e63a3645a94b" not found in pod's containers
* Sep 05 04:14:56 minikube kubelet[2089]: I0905 04:14:56.509959    2089 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Sep 05 04:14:56 minikube kubelet[2089]: I0905 04:14:56.571754    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9f79t" (UniqueName: "kubernetes.io/secret/bcb8fd3e-d313-496b-8e34-4a4db658b178-default-token-9f79t") pod "hello-minikube-5d9b964bfb-l9qvl" (UID: "bcb8fd3e-d313-496b-8e34-4a4db658b178")
* Sep 05 04:14:57 minikube kubelet[2089]: I0905 04:14:57.203395    2089 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Sep 05 04:14:57 minikube kubelet[2089]: I0905 04:14:57.273682    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/776545cd-05b0-4429-94eb-955706c190b6-config-volume") pod "coredns-f9fd979d6-4s474" (UID: "776545cd-05b0-4429-94eb-955706c190b6")
* Sep 05 04:14:57 minikube kubelet[2089]: I0905 04:14:57.273742    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-cchqm" (UniqueName: "kubernetes.io/secret/776545cd-05b0-4429-94eb-955706c190b6-coredns-token-cchqm") pod "coredns-f9fd979d6-4s474" (UID: "776545cd-05b0-4429-94eb-955706c190b6")
* Sep 05 04:14:57 minikube kubelet[2089]: W0905 04:14:57.521554    2089 pod_container_deletor.go:79] Container "c6c3cd1f83bd1e9e85edd1483ba71ca250157a5c6a392a129064c30a34416aec" not found in pod's containers
* Sep 05 04:14:57 minikube kubelet[2089]: W0905 04:14:57.521782    2089 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-minikube-5d9b964bfb-l9qvl through plugin: invalid network status for
* Sep 05 04:14:58 minikube kubelet[2089]: W0905 04:14:58.107728    2089 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-4s474 through plugin: invalid network status for
* Sep 05 04:14:58 minikube kubelet[2089]: W0905 04:14:58.526728    2089 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-minikube-5d9b964bfb-l9qvl through plugin: invalid network status for
* Sep 05 04:14:58 minikube kubelet[2089]: W0905 04:14:58.529810    2089 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-4s474 through plugin: invalid network status for
* Sep 05 04:15:02 minikube kubelet[2089]: I0905 04:15:02.175119    2089 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Sep 05 04:15:02 minikube kubelet[2089]: I0905 04:15:02.302873    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/eb1561bb-3f74-4479-8e3b-b382a943d750-tmp") pod "storage-provisioner" (UID: "eb1561bb-3f74-4479-8e3b-b382a943d750")
* Sep 05 04:15:02 minikube kubelet[2089]: I0905 04:15:02.302918    2089 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-6nd6m" (UniqueName: "kubernetes.io/secret/eb1561bb-3f74-4479-8e3b-b382a943d750-storage-provisioner-token-6nd6m") pod "storage-provisioner" (UID: "eb1561bb-3f74-4479-8e3b-b382a943d750")
* Sep 05 04:15:16 minikube kubelet[2089]: W0905 04:15:16.626560    2089 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-minikube-5d9b964bfb-l9qvl through plugin: invalid network status for
* 
* ==> storage-provisioner [6804ce707023] <==
* I0905 04:15:03.108475       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
* I0905 04:15:03.137582       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
* I0905 04:15:03.137918       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a71e96bf-e965-45f5-a6d0-cb20983c861a", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_77868073-c049-4f1b-a590-353349e05916 became leader
* I0905 04:15:03.137944       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_77868073-c049-4f1b-a590-353349e05916!
* I0905 04:15:03.238145       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_77868073-c049-4f1b-a590-353349e05916!
@tstromberg tstromberg changed the title Minikube tunnel does not bind to port v1.13.0 tunnel on Windows: not listening at port Sep 5, 2020
@tstromberg tstromberg added area/tunnel Support for the tunnel command os/windows labels Sep 5, 2020
@tstromberg
Contributor

I wonder if this has to do with the native SSH change in v1.13. Are you able to replicate this, @sharifelgamal?

@cowwoc
Author

cowwoc commented Sep 6, 2020

@tstromberg I just tried version 1.12.3 and had the same problem.

@cowwoc
Author

cowwoc commented Sep 6, 2020

Interestingly, if I invoke kubectl port-forward service/hello-minikube 17309:8080, I can hit the service on port 17309 just fine.
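
For reference, a minimal sketch of that workaround check, assuming the same service name and the arbitrary host port 17309:

# forward local port 17309 to the service's port 8080 inside the cluster
kubectl port-forward service/hello-minikube 17309:8080

# in a second shell, the echoserver should answer on the forwarded port
curl http://localhost:17309/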

@priyawadhwa priyawadhwa added the kind/support Categorizes issue or PR as a support question. label Sep 8, 2020
@priyawadhwa priyawadhwa added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Oct 21, 2020
@Meg4Bit

Meg4Bit commented Oct 24, 2020

Same issue, and no solution.

@medyagh
Member

medyagh commented Jan 20, 2021

@cowwoc @Meg4Bit I am curious whether you have tried running PowerShell with administrative privileges?

I also wonder if you have OpenSSH installed, or whether installing OpenSSH on Windows would fix this?
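
For reference, a rough way to check that from an elevated PowerShell, assuming Windows 10's optional-feature packaging (the capability name and version suffix may differ by build):

# show whether the OpenSSH client/server capabilities are installed
Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'

# install the client if it is reported as NotPresent
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0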

@Meg4Bit

Meg4Bit commented Jan 20, 2021

I've tried running PowerShell as administrator. OpenSSH was installed.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 20, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 20, 2021
@ilya-zuyev ilya-zuyev removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 26, 2021
@spowelljr
Member

I tried this using minikube v1.22.0 in PowerShell with admin privileges and it worked for me. Could you try it and let me know if it works for you now, @cowwoc @Meg4Bit?

@spowelljr spowelljr added the long-term-support Long-term support issues that can't be fixed in code label Jul 28, 2021
@cowwoc
Author

cowwoc commented Jul 28, 2021

I'm sorry, I'm not set up to test this at the moment. I would need to disable hardware virtualization in Windows and so on, and I haven't worked with Docker for months. Is someone else able to confirm this is fixed?

@spowelljr
Member

@cowwoc No worries, thanks for responding. I'm going to close this issue since I'm pretty sure it is resolved, but if someone comments saying otherwise I'll be happy to reopen it.
