registry plugin doesn't support --image-mirror-country: Client.Timeout exceeded while awaiting headers #6352

Open
jiaqiang-cmcc opened this issue Jan 20, 2020 · 8 comments
Labels: area/registry, help wanted, kind/bug, lifecycle/frozen, priority/backlog

jiaqiang-cmcc commented Jan 20, 2020

The exact command to reproduce the issue:

minikube start --insecure-registry=registry.kube-system.svc.cluster.local:80 --image-mirror-country=cn --registry-mirror=https://registry.docker-cn.com --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers

minikube addons enable registry

It seems that the image "gcr.io/google_containers/kube-registry-proxy:0.4" cannot be pulled into my minikube VM.

The following images were pulled while minikube was starting, but gcr.io/google_containers/kube-registry-proxy:0.4 is not among them. Since I don't have access to gcr.io, what should I do? Is there anything I should add to --image-repository= when minikube is started? (A manual workaround idea is sketched after the listing below.)

$ docker images
REPOSITORY                                                                    TAG                 IMAGE ID            CREATED             SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.17.0             7d54289267dc        6 weeks ago         116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.17.0             0cae8d5cc64c        6 weeks ago         171MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.17.0             5eb3b7486872        6 weeks ago         161MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.17.0             78c190f736b1        6 weeks ago         94.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard                 v2.0.0-beta8        eb51a3597525        6 weeks ago         90.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.6.5               70f311871ae1        2 months ago        41.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        2 months ago        288MB
registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper           v1.0.2              3b08661dc379        2 months ago        40.1MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-addon-manager        v9.0.2              bd12a212f9dc        5 months ago        83.1MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner       v1.8.1              4689081edb10        2 years ago         80.8MB
registry.hub.docker.com/library/registry                                      2.6.1               c2a449c9f834        2 years ago         33.2MB
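
As a manual workaround, I could presumably pull the proxy image from some mirror that is reachable from my network, inside the minikube VM, and retag it under the gcr.io name (the mirror path below is only a guess; I have not verified that this mirror actually hosts kube-registry-proxy). Since the DaemonSet pins tag 0.4, the default imagePullPolicy is IfNotPresent, so kubelet should then use the local copy:

$ minikube ssh
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-registry-proxy:0.4   # unverified mirror path
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-registry-proxy:0.4 gcr.io/google_containers/kube-registry-proxy:0.4
$ exit
$ kubectl delete po registry-proxy-qkjhx --namespace=kube-system   # the DaemonSet recreates the pod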

The full output of the command that failed:

$ kubectl get po --namespace=kube-system
NAME                               READY   STATUS             RESTARTS   AGE
coredns-7f9c544f75-q4sq5           1/1     Running            0          13m
coredns-7f9c544f75-xfthn           1/1     Running            0          13m
etcd-minikube                      1/1     Running            0          13m
kube-addon-manager-minikube        1/1     Running            0          13m
kube-apiserver-minikube            1/1     Running            0          13m
kube-controller-manager-minikube   1/1     Running            0          13m
kube-proxy-qwj97                   1/1     Running            0          13m
kube-scheduler-minikube            1/1     Running            0          13m
registry-nx6zh                     1/1     Running            0          13m
registry-proxy-qkjhx               0/1     ImagePullBackOff   0          13m
storage-provisioner                1/1     Running            0          13m
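
For what it's worth, the registry-proxy DaemonSet appears to request the gcr.io image no matter what --image-repository was set to. One way to read the image field directly (just a jsonpath query; any way of inspecting the DaemonSet spec works):

$ kubectl get daemonset registry-proxy --namespace=kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'
gcr.io/google_containers/kube-registry-proxy:0.4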

$ kubectl describe po registry-proxy-qkjhx --namespace=kube-system
Name:         registry-proxy-qkjhx
Namespace:    kube-system
Priority:     0
Node:         minikube/192.168.39.218
Start Time:   Mon, 20 Jan 2020 17:11:29 +0800
Labels:       addonmanager.kubernetes.io/mode=Reconcile
              controller-revision-hash=675799b8c9
              kubernetes.io/minikube-addons=registry
              pod-template-generation=1
              registry-proxy=true
Annotations:  <none>
Status:       Pending
IP:           172.17.0.4
IPs:
  IP:           172.17.0.4
Controlled By:  DaemonSet/registry-proxy
Containers:
  registry-proxy:
    Container ID:   
    Image:          gcr.io/google_containers/kube-registry-proxy:0.4
    Image ID:       
    Port:           80/TCP
    Host Port:      5000/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      REGISTRY_HOST:  registry.kube-system.svc.cluster.local
      REGISTRY_PORT:  80
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bq5h4 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-bq5h4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bq5h4
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned kube-system/registry-proxy-qkjhx to minikube
  Normal   Pulling    15m (x4 over 18m)     kubelet, minikube  Pulling image "gcr.io/google_containers/kube-registry-proxy:0.4"
  Warning  Failed     15m (x4 over 18m)     kubelet, minikube  Failed to pull image "gcr.io/google_containers/kube-registry-proxy:0.4": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     15m (x4 over 18m)     kubelet, minikube  Error: ErrImagePull
  Normal   BackOff    8m18s (x33 over 18m)  kubelet, minikube  Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4"
  Warning  Failed     3m22s (x54 over 18m)  kubelet, minikube  Error: ImagePullBackOff
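
To rule out anything subtler than plain connectivity, the same v2 endpoint the daemon keeps timing out on can be probed from inside the VM (in my environment this never returns headers, which matches the daemon log below):

$ minikube ssh
$ curl -m 10 -sSI https://gcr.io/v2/   # -m 10 caps the wait at 10 seconds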


The output of the minikube logs command:

$ minikube logs
==> Docker <==
-- Logs begin at Mon 2020-01-20 09:08:55 UTC, end at Mon 2020-01-20 09:25:19 UTC. --
Jan 20 09:09:12 minikube dockerd[2093]: time="2020-01-20T09:09:12.626744514Z" level=info msg="parsed scheme: "unix"" module=grpc
Jan 20 09:09:12 minikube dockerd[2093]: time="2020-01-20T09:09:12.626794931Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jan 20 09:09:12 minikube dockerd[2093]: time="2020-01-20T09:09:12.626832469Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Jan 20 09:09:12 minikube dockerd[2093]: time="2020-01-20T09:09:12.626868004Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jan 20 09:09:13 minikube dockerd[2093]: time="2020-01-20T09:09:13.035747267Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jan 20 09:09:13 minikube dockerd[2093]: time="2020-01-20T09:09:13.036147045Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jan 20 09:09:13 minikube dockerd[2093]: time="2020-01-20T09:09:13.036303712Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Jan 20 09:09:13 minikube dockerd[2093]: time="2020-01-20T09:09:13.036506124Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Jan 20 09:09:13 minikube dockerd[2093]: time="2020-01-20T09:09:13.036647150Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Jan 20 09:09:13 minikube dockerd[2093]: time="2020-01-20T09:09:13.036781524Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Jan 20 09:09:13 minikube dockerd[2093]: time="2020-01-20T09:09:13.037535770Z" level=info msg="Loading containers: start."
Jan 20 09:09:13 minikube dockerd[2093]: time="2020-01-20T09:09:13.386030523Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 20 09:09:13 minikube dockerd[2093]: time="2020-01-20T09:09:13.546088285Z" level=info msg="Loading containers: done."
Jan 20 09:09:13 minikube dockerd[2093]: time="2020-01-20T09:09:13.664409608Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
Jan 20 09:09:13 minikube dockerd[2093]: time="2020-01-20T09:09:13.664841660Z" level=info msg="Daemon has completed initialization"
Jan 20 09:09:14 minikube dockerd[2093]: time="2020-01-20T09:09:14.116447960Z" level=info msg="API listen on /var/run/docker.sock"
Jan 20 09:09:14 minikube systemd[1]: Started Docker Application Container Engine.
Jan 20 09:09:14 minikube dockerd[2093]: time="2020-01-20T09:09:14.116659259Z" level=info msg="API listen on [::]:2376"
Jan 20 09:10:44 minikube dockerd[2093]: time="2020-01-20T09:10:44.390250834Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0506afe5606067424b6b554daa7578e4ad7aae338fc0509f96ac65eeba516c24/shim.sock" debug=false pid=3756
Jan 20 09:10:45 minikube dockerd[2093]: time="2020-01-20T09:10:45.496045792Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9819a22dfdf7ac7d496a0477a4286cc4a5a254e9ba8cdd3ca61e51cb9eae3d17/shim.sock" debug=false pid=3810
Jan 20 09:10:45 minikube dockerd[2093]: time="2020-01-20T09:10:45.640301680Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f24393214be1f517310c4137f1f61e65ebb348fd3e60238f3041613e72759c29/shim.sock" debug=false pid=3854
Jan 20 09:10:46 minikube dockerd[2093]: time="2020-01-20T09:10:46.270941016Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/52d51244c4289ff40e4bc301877c18b2c9cb0693bb0e3f2193241bceee63bd0d/shim.sock" debug=false pid=3897
Jan 20 09:10:46 minikube dockerd[2093]: time="2020-01-20T09:10:46.997431251Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/60505c8e707d1bc0e579f0472d2645f319aca1a64d2e853354561061508fb569/shim.sock" debug=false pid=3968
Jan 20 09:10:47 minikube dockerd[2093]: time="2020-01-20T09:10:47.503258158Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/edc13c382ffa5410912c7984d9389f760ef128c5ad74fc6cf26e256a8e356448/shim.sock" debug=false pid=4030
Jan 20 09:10:48 minikube dockerd[2093]: time="2020-01-20T09:10:48.652595005Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0eaf0c140933b62f048292fece8c3eced4d4d206f80a2872b7cf6b840fa1d115/shim.sock" debug=false pid=4117
Jan 20 09:10:48 minikube dockerd[2093]: time="2020-01-20T09:10:48.945113378Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f5620a6df0dabfbe87b66fc2bc2a655040a5cc6cc04df09dee49abeaed8c7542/shim.sock" debug=false pid=4161
Jan 20 09:10:49 minikube dockerd[2093]: time="2020-01-20T09:10:49.610289999Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7ed12d7e8609b55f346cf9ec5e36ef27a34f4170492d37b23bbd60affffa5594/shim.sock" debug=false pid=4283
Jan 20 09:10:49 minikube dockerd[2093]: time="2020-01-20T09:10:49.766636316Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dbd3a9dbd7f56019deaccd19d61900db4a9b702292340121cf16f6073320d9cb/shim.sock" debug=false pid=4333
Jan 20 09:11:38 minikube dockerd[2093]: time="2020-01-20T09:11:38.953869188Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1fa68df87dc9c487b83fef0190423fd74518c8ba1e5c8eb4eabbba1e8fee75be/shim.sock" debug=false pid=5246
Jan 20 09:11:39 minikube dockerd[2093]: time="2020-01-20T09:11:39.141995732Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d3d6af3deec058fde23667f1da024218e5d1f7539cbefc523f168d1d57f8cf4c/shim.sock" debug=false pid=5307
Jan 20 09:11:41 minikube dockerd[2093]: time="2020-01-20T09:11:41.636152498Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/10c48b8194af46f85fde113caf19bc0c90922119631ea2807b519130640c1903/shim.sock" debug=false pid=5427
Jan 20 09:11:42 minikube dockerd[2093]: time="2020-01-20T09:11:42.490400455Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ae8e777e5d98e350fdeb9933b6812763f8321ce5063c76f2fbfa38d5e8b1a57e/shim.sock" debug=false pid=5483
Jan 20 09:11:43 minikube dockerd[2093]: time="2020-01-20T09:11:43.351864079Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/872fee44c6f25df8d332b6a597e4cb85ee8ee1059863add9763ace786a7eed41/shim.sock" debug=false pid=5553
Jan 20 09:11:43 minikube dockerd[2093]: time="2020-01-20T09:11:43.716630709Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4dae73137d89405fe256a5775da5d34668c232171ecdaa9b8e8e3a43538c1533/shim.sock" debug=false pid=5614
Jan 20 09:11:44 minikube dockerd[2093]: time="2020-01-20T09:11:44.604819004Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d28e5e2b0517d41ef43cbe1499846ed75a053dd4d5a0aaa185344dcada04946e/shim.sock" debug=false pid=5679
Jan 20 09:11:44 minikube dockerd[2093]: time="2020-01-20T09:11:44.798066048Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ed39fe945b8e556c4ba92810c10d4fa73102c171bfe7d8ec77e2186ecff71a73/shim.sock" debug=false pid=5730
Jan 20 09:11:45 minikube dockerd[2093]: time="2020-01-20T09:11:45.681258838Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/354ae5e40ed3fe822739b41c87f4beaac5b69aff9837c465d2c7437bb72f7028/shim.sock" debug=false pid=5777
Jan 20 09:11:46 minikube dockerd[2093]: time="2020-01-20T09:11:46.492616050Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4c15cd2a379427a8904a18b90705e0147b399969603112f9755aec2c4ce55795/shim.sock" debug=false pid=5824
Jan 20 09:11:59 minikube dockerd[2093]: time="2020-01-20T09:11:59.663470926Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:11:59 minikube dockerd[2093]: time="2020-01-20T09:11:59.663563714Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:11:59 minikube dockerd[2093]: time="2020-01-20T09:11:59.663697381Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:12:46 minikube dockerd[2093]: time="2020-01-20T09:12:46.634197990Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/db49403fc86034805b68ad8c58682cde60952cfe7e334807413e7da3a8855e28/shim.sock" debug=false pid=6658
Jan 20 09:12:59 minikube dockerd[2093]: time="2020-01-20T09:12:59.365262267Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:12:59 minikube dockerd[2093]: time="2020-01-20T09:12:59.365547741Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:12:59 minikube dockerd[2093]: time="2020-01-20T09:12:59.365628788Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:13:41 minikube dockerd[2093]: time="2020-01-20T09:13:41.046112676Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:13:41 minikube dockerd[2093]: time="2020-01-20T09:13:41.046584969Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:13:41 minikube dockerd[2093]: time="2020-01-20T09:13:41.046651696Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:14:47 minikube dockerd[2093]: time="2020-01-20T09:14:47.914368710Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:14:47 minikube dockerd[2093]: time="2020-01-20T09:14:47.914955555Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:14:47 minikube dockerd[2093]: time="2020-01-20T09:14:47.915065529Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:16:23 minikube dockerd[2093]: time="2020-01-20T09:16:23.878521890Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:16:23 minikube dockerd[2093]: time="2020-01-20T09:16:23.878689340Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:16:23 minikube dockerd[2093]: time="2020-01-20T09:16:23.880726015Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:19:31 minikube dockerd[2093]: time="2020-01-20T09:19:31.088300022Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:19:31 minikube dockerd[2093]: time="2020-01-20T09:19:31.090465619Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:19:31 minikube dockerd[2093]: time="2020-01-20T09:19:31.090840107Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:24:57 minikube dockerd[2093]: time="2020-01-20T09:24:57.088139808Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:24:57 minikube dockerd[2093]: time="2020-01-20T09:24:57.088175661Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:24:57 minikube dockerd[2093]: time="2020-01-20T09:24:57.088203020Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
db49403fc8603 registry.hub.docker.com/library/registry@sha256:5eaafa2318aa0c4c52f95077c2a68bed0b13f6d2b464835723d4de1484052299 12 minutes ago Running registry 0 4dae73137d894
4c15cd2a37942 70f311871ae12 13 minutes ago Running coredns 0 ae8e777e5d98e
354ae5e40ed3f 4689081edb103 13 minutes ago Running storage-provisioner 0 10c48b8194af4
ed39fe945b8e5 7d54289267dc5 13 minutes ago Running kube-proxy 0 d3d6af3deec05
d28e5e2b0517d 70f311871ae12 13 minutes ago Running coredns 0 1fa68df87dc9c
dbd3a9dbd7f56 bd12a212f9dcb 14 minutes ago Running kube-addon-manager 0 60505c8e707d1
7ed12d7e8609b 303ce5db0e90d 14 minutes ago Running etcd 0 52d51244c4289
f5620a6df0dab 5eb3b74868724 14 minutes ago Running kube-controller-manager 0 9819a22dfdf7a
0eaf0c140933b 78c190f736b11 14 minutes ago Running kube-scheduler 0 f24393214be1f
edc13c382ffa5 0cae8d5cc64c7 14 minutes ago Running kube-apiserver 0 0506afe560606

==> coredns ["4c15cd2a3794"] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
E0120 09:12:19.081795 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0120 09:12:19.081722 1 trace.go:82] Trace[1529501427]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-01-20 09:11:49.046382204 +0000 UTC m=+2.438647545) (total time: 30.035039158s):
Trace[1529501427]: [30.035039158s] [30.035039158s] END
E0120 09:12:19.081795 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.081795 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.081795 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0120 09:12:19.082994 1 trace.go:82] Trace[1475512200]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-01-20 09:11:49.08076108 +0000 UTC m=+2.473026438) (total time: 30.002169113s):
Trace[1475512200]: [30.002169113s] [30.002169113s] END
E0120 09:12:19.083050 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.083050 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.083050 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.083050 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0120 09:12:19.102484 1 trace.go:82] Trace[1555623021]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-01-20 09:11:49.046503913 +0000 UTC m=+2.438769211) (total time: 30.055898344s):
Trace[1555623021]: [30.055898344s] [30.055898344s] END
E0120 09:12:19.102523 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.102523 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.102523 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.102523 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"

==> coredns ["d28e5e2b0517"] <==
E0120 09:12:19.082825 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.083521 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
E0120 09:12:19.102903 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0120 09:12:19.082416 1 trace.go:82] Trace[766649685]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-01-20 09:11:49.046413403 +0000 UTC m=+2.711122651) (total time: 30.035188007s):
Trace[766649685]: [30.035188007s] [30.035188007s] END
E0120 09:12:19.082825 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.082825 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.082825 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0120 09:12:19.082550 1 trace.go:82] Trace[610316780]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-01-20 09:11:49.080831302 +0000 UTC m=+2.745540587) (total time: 30.001649774s):
Trace[610316780]: [30.001649774s] [30.001649774s] END
E0120 09:12:19.083521 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.083521 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.083521 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0120 09:12:19.102864 1 trace.go:82] Trace[1493103633]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-01-20 09:11:49.037812884 +0000 UTC m=+2.702522130) (total time: 30.065016111s):
Trace[1493103633]: [30.065016111s] [30.065016111s] END
E0120 09:12:19.102903 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.102903 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0120 09:12:19.102903 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> dmesg <==
[Jan20 09:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.018555] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +11.781286] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.565095] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.004696] systemd-fstab-generator[1141]: Ignoring "noauto" for root device
[ +0.002042] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000001] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.784965] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +1.100964] vboxguest: loading out-of-tree module taints kernel.
[ +0.002716] vboxguest: PCI device not found, probably running on physical hardware.
[Jan20 09:09] systemd-fstab-generator[1995]: Ignoring "noauto" for root device
[Jan20 09:10] systemd-fstab-generator[2820]: Ignoring "noauto" for root device
[ +12.303548] systemd-fstab-generator[3205]: Ignoring "noauto" for root device
[ +22.542467] kauditd_printk_skb: 68 callbacks suppressed
[ +15.078777] NFSD: Unable to end grace period: -110
[Jan20 09:11] systemd-fstab-generator[4763]: Ignoring "noauto" for root device
[ +24.034937] kauditd_printk_skb: 29 callbacks suppressed
[ +20.642162] kauditd_printk_skb: 5 callbacks suppressed
[Jan20 09:12] kauditd_printk_skb: 38 callbacks suppressed

==> kernel <==
09:25:19 up 16 min, 0 users, load average: 1.53, 1.21, 0.75
Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.7"

==> kube-addon-manager ["dbd3a9dbd7f5"] <==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-20T09:24:53+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-20T09:24:55+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-20T09:24:57+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-20T09:24:59+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-20T09:25:02+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-20T09:25:04+00:00 ==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-20T09:25:07+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-20T09:25:09+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-20T09:25:12+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-20T09:25:14+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-20T09:25:17+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-20T09:25:19+00:00 ==

==> kube-apiserver ["edc13c382ffa"] <==
Trace[628829937]: [627.459101ms] [627.02932ms] Object stored in database
I0120 09:11:34.701762 1 trace.go:116] Trace[105510540]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-01-20 09:11:34.155974311 +0000 UTC m=+46.580632552) (total time: 545.725663ms):
Trace[105510540]: [545.672413ms] [545.576442ms] Transaction committed
I0120 09:11:34.702156 1 trace.go:116] Trace[1125741886]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.17.0 (linux/amd64) kubernetes/70132b0/leader-election,client:127.0.0.1 (started: 2020-01-20 09:11:34.15592512 +0000 UTC m=+46.580583354) (total time: 546.009449ms):
Trace[1125741886]: [545.884129ms] [545.852525ms] Object stored in database
I0120 09:11:34.704830 1 trace.go:116] Trace[830124614]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-01-20 09:11:34.159302685 +0000 UTC m=+46.583960926) (total time: 545.449434ms):
Trace[830124614]: [545.23791ms] [544.853612ms] Transaction committed
I0120 09:11:34.705614 1 trace.go:116] Trace[1801897007]: "Patch" url:/api/v1/namespaces/kube-system/pods/registry-proxy-qkjhx/status,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2020-01-20 09:11:34.159259163 +0000 UTC m=+46.583917398) (total time: 546.295676ms):
Trace[1801897007]: [545.940973ms] [545.620474ms] Object stored in database
I0120 09:11:34.711145 1 trace.go:116] Trace[1914401000]: "Create" url:/apis/storage.k8s.io/v1/storageclasses,user-agent:kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab,client:127.0.0.1 (started: 2020-01-20 09:11:34.158254592 +0000 UTC m=+46.582912859) (total time: 552.827372ms):
Trace[1914401000]: [552.827372ms] [552.721828ms] END
I0120 09:11:35.357014 1 trace.go:116] Trace[753099451]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.0 (linux/amd64) kubernetes/70132b0/leader-election,client:127.0.0.1 (started: 2020-01-20 09:11:34.36372386 +0000 UTC m=+46.788382093) (total time: 992.077971ms):
Trace[753099451]: [991.989714ms] [991.977289ms] About to write a response
I0120 09:11:35.358452 1 trace.go:116] Trace[202119255]: "Get" url:/api/v1/namespaces/kube-system/pods/registry-nx6zh,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2020-01-20 09:11:34.708971062 +0000 UTC m=+47.133629387) (total time: 649.337055ms):
Trace[202119255]: [649.236401ms] [649.221398ms] About to write a response
I0120 09:11:36.396872 1 trace.go:116] Trace[724693616]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-01-20 09:11:35.741202967 +0000 UTC m=+48.165861323) (total time: 655.598151ms):
Trace[724693616]: [655.556857ms] [655.04304ms] Transaction committed
I0120 09:11:36.397719 1 trace.go:116] Trace[1035564814]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.17.0 (linux/amd64) kubernetes/70132b0/leader-election,client:127.0.0.1 (started: 2020-01-20 09:11:35.740984613 +0000 UTC m=+48.165642931) (total time: 656.664252ms):
Trace[1035564814]: [656.453425ms] [656.301959ms] Object stored in database
I0120 09:11:36.400813 1 trace.go:116] Trace[1796178468]: "Get" url:/api/v1/namespaces/kube-system/pods/etcd-minikube,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2020-01-20 09:11:35.749834086 +0000 UTC m=+48.174492410) (total time: 650.909051ms):
Trace[1796178468]: [650.698698ms] [650.683601ms] About to write a response
I0120 09:11:36.410551 1 trace.go:116] Trace[1785181933]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-01-20 09:11:35.370685687 +0000 UTC m=+47.795344040) (total time: 1.039798949s):
Trace[1785181933]: [369.593802ms] [369.593802ms] initial value restored
Trace[1785181933]: [1.032769813s] [663.176011ms] Transaction prepared
I0120 09:11:38.178690 1 trace.go:116] Trace[274382530]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-01-20 09:11:37.657238349 +0000 UTC m=+50.081896595) (total time: 521.386537ms):
Trace[274382530]: [521.386537ms] [521.355056ms] END
I0120 09:11:38.179084 1 trace.go:116] Trace[1091355541]: "Patch" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab,client:127.0.0.1 (started: 2020-01-20 09:11:37.657096311 +0000 UTC m=+50.081754546) (total time: 521.934571ms):
Trace[1091355541]: [521.637571ms] [521.27512ms] Object stored in database
I0120 09:11:40.348845 1 trace.go:116] Trace[850181359]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-01-20 09:11:39.717923949 +0000 UTC m=+52.142582195) (total time: 630.830741ms):
Trace[850181359]: [630.830741ms] [630.768908ms] END
I0120 09:11:40.349217 1 trace.go:116] Trace[887761306]: "Patch" url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-minikube/status,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2020-01-20 09:11:39.717729137 +0000 UTC m=+52.142387376) (total time: 631.438317ms):
Trace[887761306]: [631.173124ms] [630.657575ms] Object stored in database
I0120 09:11:46.394091 1 trace.go:116] Trace[604830255]: "Create" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings,user-agent:kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab,client:127.0.0.1 (started: 2020-01-20 09:11:45.753293427 +0000 UTC m=+58.177951685) (total time: 640.775357ms):
Trace[604830255]: [640.775357ms] [640.668537ms] END
I0120 09:11:48.252946 1 trace.go:116] Trace[1416904277]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.0 (linux/amd64) kubernetes/70132b0/leader-election,client:127.0.0.1 (started: 2020-01-20 09:11:47.748495895 +0000 UTC m=+60.173154230) (total time: 504.379565ms):
Trace[1416904277]: [504.25712ms] [504.215791ms] About to write a response
I0120 09:11:48.253125 1 trace.go:116] Trace[1116841882]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-proxy-qwj97,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2020-01-20 09:11:47.099073872 +0000 UTC m=+59.523732112) (total time: 1.153980059s):
Trace[1116841882]: [1.153846838s] [1.153843102s] About to write a response
I0120 09:11:48.905740 1 trace.go:116] Trace[1622123314]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-01-20 09:11:48.29479326 +0000 UTC m=+60.719451597) (total time: 610.886723ms):
Trace[1622123314]: [610.845967ms] [610.477854ms] Transaction committed
I0120 09:11:48.906768 1 trace.go:116] Trace[960103258]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-01-20 09:11:48.257746613 +0000 UTC m=+60.682404853) (total time: 648.988666ms):
Trace[960103258]: [648.959181ms] [648.880204ms] Transaction committed
I0120 09:11:48.907263 1 trace.go:116] Trace[327127564]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.0 (linux/amd64) kubernetes/70132b0/leader-election,client:127.0.0.1 (started: 2020-01-20 09:11:48.257704972 +0000 UTC m=+60.682363206) (total time: 649.465632ms):
Trace[327127564]: [649.198129ms] [649.17131ms] Object stored in database
I0120 09:11:48.907377 1 trace.go:116] Trace[1306892583]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2020-01-20 09:11:48.294569531 +0000 UTC m=+60.719227864) (total time: 612.661602ms):
Trace[1306892583]: [612.141758ms] [611.985298ms] Object stored in database
I0120 09:11:48.906528 1 trace.go:116] Trace[1340785861]: "Get" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2020-01-20 09:11:48.262149458 +0000 UTC m=+60.686807693) (total time: 644.321576ms):
Trace[1340785861]: [644.163095ms] [644.159293ms] About to write a response
I0120 09:11:48.913389 1 trace.go:116] Trace[1700638672]: "GuaranteedUpdate etcd3" type:*apps.DaemonSet (started: 2020-01-20 09:11:48.259855234 +0000 UTC m=+60.684513474) (total time: 653.449984ms):
Trace[1700638672]: [653.339451ms] [653.23957ms] Transaction committed
I0120 09:11:48.913883 1 trace.go:116] Trace[1591865192]: "Update" url:/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy/status,user-agent:kube-controller-manager/v1.17.0 (linux/amd64) kubernetes/70132b0/system:serviceaccount:kube-system:daemon-set-controller,client:127.0.0.1 (started: 2020-01-20 09:11:48.259779946 +0000 UTC m=+60.684438181) (total time: 654.050412ms):
Trace[1591865192]: [653.874104ms] [653.829354ms] Object stored in database
I0120 09:12:21.680771 1 trace.go:116] Trace[778312184]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.0 (linux/amd64) kubernetes/70132b0/leader-election,client:127.0.0.1 (started: 2020-01-20 09:12:20.765063984 +0000 UTC m=+93.189722240) (total time: 915.601586ms):
Trace[778312184]: [915.476438ms] [915.459476ms] About to write a response
I0120 09:12:21.686987 1 trace.go:116] Trace[233099635]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.17.0 (linux/amd64) kubernetes/70132b0/leader-election,client:127.0.0.1 (started: 2020-01-20 09:12:20.897582627 +0000 UTC m=+93.322240861) (total time: 789.298646ms):
Trace[233099635]: [789.139766ms] [789.125114ms] About to write a response
I0120 09:12:37.890088 1 trace.go:116] Trace[1181824132]: "List etcd3" key:/ingress/kube-system,resourceVersion:,limit:0,continue: (started: 2020-01-20 09:12:37.389363466 +0000 UTC m=+109.814021874) (total time: 500.633459ms):
Trace[1181824132]: [500.633459ms] [500.633459ms] END
I0120 09:12:37.890442 1 trace.go:116] Trace[287554782]: "List" url:/apis/extensions/v1beta1/namespaces/kube-system/ingresses,user-agent:kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab,client:127.0.0.1 (started: 2020-01-20 09:12:37.389142608 +0000 UTC m=+109.813800972) (total time: 501.145353ms):
Trace[287554782]: [500.98254ms] [500.821172ms] Listing from storage done

==> kube-controller-manager ["f5620a6df0da"] <==
I0120 09:11:12.606044 1 controllermanager.go:533] Started "replicationcontroller"
I0120 09:11:12.606370 1 replica_set.go:180] Starting replicationcontroller controller
I0120 09:11:12.606471 1 shared_informer.go:197] Waiting for caches to sync for ReplicationController
I0120 09:11:12.657646 1 node_lifecycle_controller.go:77] Sending events to api server
E0120 09:11:12.657862 1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
W0120 09:11:12.657885 1 controllermanager.go:525] Skipping "cloud-node-lifecycle"
W0120 09:11:12.657920 1 controllermanager.go:525] Skipping "ttl-after-finished"
I0120 09:11:12.816177 1 controllermanager.go:533] Started "namespace"
I0120 09:11:12.816458 1 namespace_controller.go:200] Starting namespace controller
I0120 09:11:12.816500 1 shared_informer.go:197] Waiting for caches to sync for namespace
I0120 09:11:13.273660 1 controllermanager.go:533] Started "disruption"
I0120 09:11:13.274256 1 disruption.go:330] Starting disruption controller
I0120 09:11:13.274608 1 shared_informer.go:197] Waiting for caches to sync for disruption
I0120 09:11:13.499903 1 controllermanager.go:533] Started "ttl"
I0120 09:11:13.499951 1 ttl_controller.go:116] Starting TTL controller
I0120 09:11:13.501376 1 shared_informer.go:197] Waiting for caches to sync for TTL
I0120 09:11:13.504847 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0120 09:11:13.521601 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0120 09:11:13.537811 1 shared_informer.go:204] Caches are synced for PV protection
W0120 09:11:13.545355 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0120 09:11:13.575368 1 shared_informer.go:204] Caches are synced for service account
I0120 09:11:13.592182 1 shared_informer.go:204] Caches are synced for taint
I0120 09:11:13.592335 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0120 09:11:13.592429 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0120 09:11:13.592494 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0120 09:11:13.592595 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0120 09:11:13.592729 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"8f77c17f-451f-4708-914d-52543478fa76", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0120 09:11:13.593360 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I0120 09:11:13.597069 1 shared_informer.go:204] Caches are synced for PVC protection
I0120 09:11:13.601504 1 shared_informer.go:204] Caches are synced for TTL
I0120 09:11:13.606441 1 shared_informer.go:204] Caches are synced for daemon sets
I0120 09:11:13.614571 1 shared_informer.go:204] Caches are synced for job
I0120 09:11:13.616763 1 shared_informer.go:204] Caches are synced for namespace
I0120 09:11:13.617608 1 shared_informer.go:204] Caches are synced for HPA
I0120 09:11:13.623858 1 shared_informer.go:204] Caches are synced for GC
I0120 09:11:13.629415 1 shared_informer.go:204] Caches are synced for attach detach
I0120 09:11:13.652769 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0120 09:11:13.734032 1 shared_informer.go:204] Caches are synced for expand
I0120 09:11:13.783851 1 shared_informer.go:204] Caches are synced for persistent volume
I0120 09:11:13.806816 1 shared_informer.go:204] Caches are synced for ReplicationController
I0120 09:11:13.837474 1 shared_informer.go:204] Caches are synced for endpoint
I0120 09:11:13.915036 1 shared_informer.go:204] Caches are synced for deployment
I0120 09:11:13.925286 1 shared_informer.go:204] Caches are synced for ReplicaSet
E0120 09:11:13.939773 1 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0120 09:11:13.969030 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"27015eb6-e02b-470b-b42c-7afa74e0be82", APIVersion:"apps/v1", ResourceVersion:"215", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-7f9c544f75 to 2
I0120 09:11:13.972145 1 shared_informer.go:204] Caches are synced for resource quota
I0120 09:11:13.975500 1 shared_informer.go:204] Caches are synced for disruption
I0120 09:11:13.975568 1 disruption.go:338] Sending events to api server.
I0120 09:11:14.005868 1 shared_informer.go:204] Caches are synced for stateful set
I0120 09:11:14.021844 1 shared_informer.go:204] Caches are synced for resource quota
I0120 09:11:14.043093 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7f9c544f75", UID:"adf5d52a-559d-4ff2-8cc5-4905240624c4", APIVersion:"apps/v1", ResourceVersion:"324", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7f9c544f75-xfthn
I0120 09:11:14.043720 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"7b935b22-40d7-4dd5-90e0-19a1a1c630c0", APIVersion:"apps/v1", ResourceVersion:"228", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-qwj97
I0120 09:11:14.071676 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
I0120 09:11:14.100426 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
I0120 09:11:14.105989 1 shared_informer.go:204] Caches are synced for garbage collector
I0120 09:11:14.130428 1 shared_informer.go:204] Caches are synced for garbage collector
I0120 09:11:14.130471 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0120 09:11:14.280288 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7f9c544f75", UID:"adf5d52a-559d-4ff2-8cc5-4905240624c4", APIVersion:"apps/v1", ResourceVersion:"324", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7f9c544f75-q4sq5
I0120 09:11:16.982246 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"registry-proxy", UID:"783d45f5-ba07-4011-9b7c-bcdbcb2599d3", APIVersion:"apps/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-proxy-qkjhx
I0120 09:11:17.042849 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"registry", UID:"d02125e8-8c78-4e20-bba4-6fb77b2eb983", APIVersion:"v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-nx6zh

==> kube-proxy ["ed39fe945b8e"] <==
W0120 09:11:51.549586 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0120 09:11:51.877956 1 node.go:135] Successfully retrieved node IP: 192.168.39.218
I0120 09:11:51.878055 1 server_others.go:145] Using iptables Proxier.
W0120 09:11:51.900165 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0120 09:11:51.903889 1 server.go:571] Version: v1.17.0
I0120 09:11:51.921259 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0120 09:11:51.921289 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0120 09:11:51.921800 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0120 09:11:51.926409 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0120 09:11:51.926455 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0120 09:11:51.948568 1 config.go:313] Starting service config controller
I0120 09:11:51.948584 1 shared_informer.go:197] Waiting for caches to sync for service config
I0120 09:11:51.950811 1 config.go:131] Starting endpoints config controller
I0120 09:11:51.950823 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0120 09:11:52.151120 1 shared_informer.go:204] Caches are synced for endpoints config
I0120 09:11:52.151724 1 shared_informer.go:204] Caches are synced for service config

==> kube-scheduler ["0eaf0c140933"] <==
E0120 09:10:56.854237 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 09:10:56.857403 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 09:10:56.859509 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 09:10:56.861565 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 09:10:56.864510 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 09:10:56.864832 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 09:10:57.841409 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 09:10:57.845068 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0120 09:10:57.846512 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 09:10:57.849914 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0120 09:10:57.852488 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 09:10:57.855017 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 09:10:57.857796 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 09:10:57.860246 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 09:10:57.861953 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 09:10:57.864180 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 09:10:57.865920 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 09:10:57.867581 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 09:10:58.843955 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 09:10:58.848495 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 09:10:58.850243 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0120 09:10:58.853137 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0120 09:10:58.854674 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 09:10:58.857414 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 09:10:58.859829 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 09:10:58.862201 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 09:10:58.864734 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 09:10:58.866854 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 09:10:58.868038 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 09:10:58.869791 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 09:10:59.847494 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 09:10:59.854171 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 09:10:59.855485 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0120 09:10:59.856891 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0120 09:10:59.857946 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 09:10:59.860066 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 09:10:59.862243 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 09:10:59.865090 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 09:10:59.867854 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 09:10:59.870829 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 09:10:59.871837 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 09:10:59.872358 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 09:11:00.849852 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 09:11:00.856909 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 09:11:00.861441 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0120 09:11:00.861768 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0120 09:11:00.871242 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 09:11:00.873107 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 09:11:00.873647 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 09:11:00.874117 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 09:11:00.874951 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 09:11:00.876044 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 09:11:00.876609 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 09:11:00.877105 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 09:11:01.856832 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0120 09:11:02.081669 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0120 09:11:02.504287 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0120 09:11:02.870483 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 09:11:03.871649 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0120 09:11:04.967373 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

==> kubelet <==
-- Logs begin at Mon 2020-01-20 09:08:55 UTC, end at Mon 2020-01-20 09:25:19 UTC. --
Jan 20 09:14:09 minikube kubelet[4772]: E0120 09:14:09.868591 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:14:20 minikube kubelet[4772]: E0120 09:14:20.866758 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:14:47 minikube kubelet[4772]: E0120 09:14:47.915394 4772 remote_image.go:113] PullImage "gcr.io/google_containers/kube-registry-proxy:0.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:14:47 minikube kubelet[4772]: E0120 09:14:47.915936 4772 kuberuntime_image.go:50] Pull image "gcr.io/google_containers/kube-registry-proxy:0.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:14:47 minikube kubelet[4772]: E0120 09:14:47.916076 4772 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:14:47 minikube kubelet[4772]: E0120 09:14:47.916200 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:15:02 minikube kubelet[4772]: E0120 09:15:02.868867 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:15:16 minikube kubelet[4772]: E0120 09:15:16.868916 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:15:31 minikube kubelet[4772]: E0120 09:15:31.868878 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:15:44 minikube kubelet[4772]: E0120 09:15:44.868517 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:15:55 minikube kubelet[4772]: E0120 09:15:55.866668 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:16:23 minikube kubelet[4772]: E0120 09:16:23.881563 4772 remote_image.go:113] PullImage "gcr.io/google_containers/kube-registry-proxy:0.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:16:23 minikube kubelet[4772]: E0120 09:16:23.881650 4772 kuberuntime_image.go:50] Pull image "gcr.io/google_containers/kube-registry-proxy:0.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:16:23 minikube kubelet[4772]: E0120 09:16:23.881741 4772 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:16:23 minikube kubelet[4772]: E0120 09:16:23.881803 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:16:34 minikube kubelet[4772]: E0120 09:16:34.868256 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:16:46 minikube kubelet[4772]: E0120 09:16:46.868147 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:16:57 minikube kubelet[4772]: E0120 09:16:57.870589 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:17:08 minikube kubelet[4772]: E0120 09:17:08.869766 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:17:20 minikube kubelet[4772]: E0120 09:17:20.865800 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:17:31 minikube kubelet[4772]: E0120 09:17:31.868686 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:17:46 minikube kubelet[4772]: E0120 09:17:46.868784 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:18:00 minikube kubelet[4772]: E0120 09:18:00.866770 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:18:13 minikube kubelet[4772]: E0120 09:18:13.869379 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:18:25 minikube kubelet[4772]: E0120 09:18:25.869228 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:18:36 minikube kubelet[4772]: E0120 09:18:36.869591 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:18:47 minikube kubelet[4772]: E0120 09:18:47.870864 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:19:02 minikube kubelet[4772]: E0120 09:19:02.869632 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:19:31 minikube kubelet[4772]: E0120 09:19:31.092497 4772 remote_image.go:113] PullImage "gcr.io/google_containers/kube-registry-proxy:0.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:19:31 minikube kubelet[4772]: E0120 09:19:31.092739 4772 kuberuntime_image.go:50] Pull image "gcr.io/google_containers/kube-registry-proxy:0.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:19:31 minikube kubelet[4772]: E0120 09:19:31.093049 4772 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:19:31 minikube kubelet[4772]: E0120 09:19:31.093250 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:19:46 minikube kubelet[4772]: E0120 09:19:46.870414 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:19:57 minikube kubelet[4772]: E0120 09:19:57.889270 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:20:12 minikube kubelet[4772]: E0120 09:20:12.882167 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:20:27 minikube kubelet[4772]: E0120 09:20:27.873225 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:20:39 minikube kubelet[4772]: E0120 09:20:39.871968 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:20:54 minikube kubelet[4772]: E0120 09:20:54.868115 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:21:08 minikube kubelet[4772]: E0120 09:21:08.870827 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:21:22 minikube kubelet[4772]: E0120 09:21:22.868918 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:21:33 minikube kubelet[4772]: E0120 09:21:33.871779 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:21:48 minikube kubelet[4772]: E0120 09:21:48.868728 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:21:59 minikube kubelet[4772]: E0120 09:21:59.876856 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:22:13 minikube kubelet[4772]: E0120 09:22:13.866434 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:22:28 minikube kubelet[4772]: E0120 09:22:28.868233 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:22:40 minikube kubelet[4772]: E0120 09:22:40.866953 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:22:52 minikube kubelet[4772]: E0120 09:22:52.866427 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:23:03 minikube kubelet[4772]: E0120 09:23:03.869311 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:23:15 minikube kubelet[4772]: E0120 09:23:15.873927 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:23:26 minikube kubelet[4772]: E0120 09:23:26.868580 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:23:37 minikube kubelet[4772]: E0120 09:23:37.887053 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:23:51 minikube kubelet[4772]: E0120 09:23:51.868861 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:24:04 minikube kubelet[4772]: E0120 09:24:04.868285 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:24:15 minikube kubelet[4772]: E0120 09:24:15.868865 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:24:26 minikube kubelet[4772]: E0120 09:24:26.868554 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""
Jan 20 09:24:57 minikube kubelet[4772]: E0120 09:24:57.089015 4772 remote_image.go:113] PullImage "gcr.io/google_containers/kube-registry-proxy:0.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:24:57 minikube kubelet[4772]: E0120 09:24:57.089401 4772 kuberuntime_image.go:50] Pull image "gcr.io/google_containers/kube-registry-proxy:0.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:24:57 minikube kubelet[4772]: E0120 09:24:57.089558 4772 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:24:57 minikube kubelet[4772]: E0120 09:24:57.089639 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:25:10 minikube kubelet[4772]: E0120 09:25:10.878769 4772 pod_workers.go:191] Error syncing pod bf12065c-9adb-46bd-a09b-287a9db5e53b ("registry-proxy-qkjhx_kube-system(bf12065c-9adb-46bd-a09b-287a9db5e53b)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4""

==> storage-provisioner ["354ae5e40ed3"] <==

The operating system version:

$ uname -a
Linux myarch 5.3.13-arch1-1 #1 SMP PREEMPT Sun, 24 Nov 2019 10:15:50 +0000 x86_64 GNU/Linux

medyagh added the area/registry label on Jan 22, 2020
medyagh (Member) commented Jan 22, 2020

@jiaqiang-cmcc do you mind sharing the minikube version and the driver you are using?

Also, do you happen to be using a VPN or proxy?

jiaqiang-cmcc (Author) commented:

@medyagh There is no proxy or VPN in use on my Arch Linux PC. For the VM driver, I am using KVM.

$ minikube version
minikube version: v1.6.2
commit: 54f28ac5d3a815d1196cd5d57d707439ee4bb392

jiaqiang-cmcc (Author) commented Jan 22, 2020

Why does the pod registry-proxy-qkjhx in my minikube always try to pull the image from gcr.io instead of from the Aliyun mirror registry?

I just tried pulling the image from my Arch PC with a plain docker pull command and it works, which means there is no network problem between my PC and registry.cn-hangzhou.aliyuncs.com/google_containers, right?

$ eval $(minikube docker-env)
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-registry-proxy:0.4
0.4: Pulling from google_containers/kube-registry-proxy
5040bd298390: Pull complete 
e915ff4147fc: Pull complete 
c63757cad867: Pull complete 
edfc538c7cdf: Pull complete 
c8a3a64ed327: Pull complete 
Digest: sha256:554eed9e04d023edfbb5e5197396a4682a6c99857839fd111fc3ee40d18c3f03
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-registry-proxy:0.4
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-registry-proxy:0.4

$ kubectl get po --namespace=kube-system
NAME                               READY   STATUS         RESTARTS   AGE
coredns-7f9c544f75-q4sq5           1/1     Running        1          43h
coredns-7f9c544f75-xfthn           1/1     Running        1          43h
etcd-minikube                      1/1     Running        1          43h
kube-addon-manager-minikube        1/1     Running        1          43h
kube-apiserver-minikube            1/1     Running        1          43h
kube-controller-manager-minikube   1/1     Running        1          43h
kube-proxy-qwj97                   1/1     Running        1          43h
kube-scheduler-minikube            1/1     Running        1          43h
registry-nx6zh                     1/1     Running        1          43h
registry-proxy-qkjhx               0/1     ErrImagePull   0          43h
storage-provisioner                1/1     Running        1          43h
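
Since the pull above shows the mirrored image is now present in the minikube Docker daemon, one possible workaround (a sketch only: it assumes the kubelet just needs the image to exist locally under the exact gcr.io name the addon requests, and that the pod's imagePullPolicy resolves to IfNotPresent for the fixed 0.4 tag) is to retag it:

$ eval $(minikube docker-env)
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-registry-proxy:0.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-registry-proxy:0.4 \
    gcr.io/google_containers/kube-registry-proxy:0.4
# the kubelet should now find gcr.io/google_containers/kube-registry-proxy:0.4 in the
# local image cache; deleting the stuck pod lets the registry-proxy DaemonSet recreate it
$ kubectl -n kube-system delete pod registry-proxy-qkjhx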

medyagh added the kind/support label on Jan 22, 2020
CrossBound (Contributor) commented:

Not sure whether this was resolved, but I've got the same issue.

minikube version: v1.7.3
commit: 436667c

NAME                               READY   STATUS             RESTARTS   AGE
coredns-6955765f44-99666           1/1     Running            0          9m50s
coredns-6955765f44-tcfkb           1/1     Running            0          9m50s
etcd-minikube                      1/1     Running            0          9m36s
kube-apiserver-minikube            1/1     Running            0          9m36s
kube-controller-manager-minikube   1/1     Running            0          9m36s
kube-proxy-fwfgg                   1/1     Running            0          9m50s
kube-scheduler-minikube            1/1     Running            0          9m36s
registry-proxy-h9hvt               0/1     ImagePullBackOff   0          2m12s
registry-rwrxx                     1/1     Running            0          2m12s
storage-provisioner                1/1     Running            0          9m53s
PS D:\Projects\LEAF\Kubernetes\Config>
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m26s                default-scheduler  Successfully assigned kube-system/registry-proxy-h9hvt to minikube
  Normal   Pulling    43s (x3 over 2m25s)  kubelet, minikube  Pulling image "gcr.io/google_containers/kube-registry-proxy:0.4"
  Warning  Failed     13s (x3 over 110s)   kubelet, minikube  Failed to pull image "gcr.io/google_containers/kube-registry-proxy:0.4": rpc error: code = Unknown desc = error pulling image configuration: Get https://storage.googleapis.com/artifacts.google-containers.appspot.com/containers/images/sha256:60dc18151daf8df97f82f5d510aaf2657916cb473abf872ddeec9df443d333ce: dial tcp 172.217.1.240:443: i/o timeout
  Warning  Failed     13s (x3 over 110s)   kubelet, minikube  Error: ErrImagePull
  Normal   BackOff    2s (x3 over 110s)    kubelet, minikube  Back-off pulling image "gcr.io/google_containers/kube-registry-proxy:0.4"
  Warning  Failed     2s (x3 over 110s)    kubelet, minikube  Error: ImagePullBackOff
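
One way to confirm whether the addon's DaemonSet hard-codes the gcr.io image regardless of --image-repository (a diagnostic sketch; the DaemonSet name registry-proxy is inferred from the pod names above):

$ kubectl -n kube-system get daemonset registry-proxy \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
# if this prints gcr.io/google_containers/kube-registry-proxy:0.4, the addon manifest is
# ignoring the configured mirror, so the pull can only succeed with direct gcr.io access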

wushuzh commented Feb 27, 2020

No, there has been no response or suggestion on this issue yet.

@medyagh is it possible to give us some help with this issue?

CrossBound (Contributor) commented:

I am amazed: this is a show-stopper problem, and not a single comment from a maintainer. I can't work with minikube without the ability to deploy my custom Docker images to Kubernetes. I have switched to Canonical's MicroK8s, and it is working for me.

CrossBound (Contributor) commented:

Correction, there was a single comment in January.

tstromberg changed the title from "fail to pull the image for addon registry" to "registry plugin doesn't support --image-mirror-country: Client.Timeout exceeded while awaiting headers" on Apr 16, 2020
tstromberg (Contributor) commented:

@CrossBound - Your issue appears to be different, and indicative of a VM that is unable to access the internet as configured. Please open a separate issue.
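
A quick way to check that hypothesis from inside the VM (a sketch, assuming curl is available in the minikube VM image):

$ minikube ssh
$ curl -m 10 -sI https://gcr.io/v2/
# a 401 response here is fine (it means gcr.io is reachable); a hang followed by a
# timeout reproduces the kubelet's "Client.Timeout exceeded while awaiting headers"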

tstromberg added the kind/bug and priority/important-soon labels and removed the kind/support label on Apr 16, 2020
tstromberg added the help wanted and priority/backlog labels and removed the priority/important-soon label on May 28, 2020
medyagh added the lifecycle/frozen label on Jul 22, 2020