VPN causes: waiting for apiserver: timed out waiting for the condition #4302
Hey @nwyatt! First, my apologies for the poor user experience here. Things look healthy in the logs, so it seems like a bug on our part. We just changed the logic for apiserver checking in minikube v1.1.0; do you mind upgrading to v1.1.0 to check if it fixes your particular case? Many thanks for the bug report!
I believe this issue was resolved in the v1.1.0 release. Please try upgrading to the latest release of minikube and run it again. If the same issue occurs, please re-open this bug. Thank you for opening this bug report, and for your patience!
Retried with 1.1.0 and it seems to hang on "Launching Kubernetes". Will try a couple things and reopen if I can't get it to work.
The exact command to reproduce the issue:
minikube start
The full output of the command that failed:
😄 minikube v1.1.0 on darwin (amd64)
The output of the minikube logs command:
==> coredns <==
==> dmesg <==
==> kernel <==
==> kube-addon-manager <==
==> kube-apiserver <==
==> kube-proxy <==
==> kube-scheduler <==
==> kubelet <==
==> storage-provisioner <==
Yes, it looks like I have the same or similar problem on 1.1.0:
😄 minikube v1.1.0 on darwin (amd64)
💣 Error restarting cluster: waiting for apiserver: timed out waiting for the condition
😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
I don't think I have permissions to reopen this issue.
/reopen
@tstromberg: Reopened this issue in response to the /reopen command above.
I made a little progress on this. I was able to get through the quickstart by adding a route manually, as described in #3747.
vboxnet0 is added by VirtualBox, but for some reason the macOS kernel isn't routing to it?
…and as mentioned in that issue, the Cisco AnyConnect VPN is implicated. When I run the VPN, everything gets routed through the VPN's gateway, which has no idea about the 192.168.99.0 network. If I try to add the route at that point, I get an error.
When I shut down the VPN, the route that I manually added is still there, but it doesn't get used (as reported by traceroute). If I re-add it, everything works again (the commands are sketched below). So if this is a minikube issue, it's only about the diagnostics; the cause is the VPN.
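For reference, the manual workaround described above and in #3747 boils down to re-adding the host-only route on the Mac. A minimal sketch, assuming the default VirtualBox host-only network 192.168.99.0/24 on vboxnet0 (verify both with ifconfig and minikube ip before running anything):

sudo route -n delete 192.168.99.0/24                         # clear the stale route, if one exists
sudo route -n add -net 192.168.99.0/24 -interface vboxnet0   # point the host-only network at vboxnet0
netstat -rn | grep 192.168.99                                # confirm the route is back and uses vboxnet0

As noted above, adding the route while the VPN is up will typically fail, so do it after disconnecting.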
Ouch. Thanks for the additional diagnostics here. Does the behavior change at all if you are starting from a fresh VM, like using minikube delete first?
I agree with your assessment that this issue is almost certainly VPN-related. This sounds very much like this bug report: https://www.virtualbox.org/ticket/14293. I don't think this is DNS-related, but do you mind trying this workaround? I see it mentioned on several forums where people discuss solving Cisco AnyConnect and VirtualBox networking issues:
Alternatively, Cisco AnyConnect may behave differently with the hyperkit VM driver. Thanks for the very detailed bug report; it's definitely helpful.
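If anyone wants to try the hyperkit suggestion, a minimal sketch for the minikube v1.1.x era, assuming the hyperkit driver is already installed on macOS:

minikube delete                       # start from a fresh VM
minikube start --vm-driver=hyperkit   # use hyperkit instead of virtualbox

Newer minikube releases spell the flag --driver rather than --vm-driver.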
I'm experiencing this issue verbatim, except I don't have to re-add the route.
minikube v1.4 does a slightly better job of debugging this issue, but in the end it now comes down to making sure the VPN is configured to allow access to the launched virtual machine. We now have some documentation around this: https://minikube.sigs.k8s.io/docs/reference/networking/vpn/
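As a quick way to tell whether the VPN is what is blocking access to the VM, a sketch of connectivity checks from the host (8443 is the port the minikube apiserver listens on by default):

minikube ip                       # the VM's address, e.g. 192.168.99.100
ping -c 3 "$(minikube ip)"        # basic reachability from the host
nc -vz "$(minikube ip)" 8443      # can the host reach the apiserver port?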
Closing, as this should no longer happen in minikube v1.6, at least not for the same reasons.
minikube version: v1.23.2 - still have the same issue under the Cisco AnyConnect VPN.
Still have the same issue today with minikube 1.24 and Cisco VPN -- clearly not resolved -- even with the allow-LAN-traffic option enabled in Cisco's settings.
Please take a look at this comment: #1099 (comment)
The exact command to reproduce the issue:
minikube start
The full output of the command that failed:
🤹 Downloading Kubernetes v1.14.1 images in the background ...
🔥 Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.99.100
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.3-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
💾 Downloading kubeadm v1.14.1
💾 Downloading kubelet v1.14.1
🚜 Pulling images required by Kubernetes v1.14.1 ...
🚀 Launching Kubernetes v1.14.1 using kubeadm ...
⌛ Waiting for pods: apiserver
💣 Error starting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition
😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
❌ Problems detected in "kube-addon-manager":
error: no objeINctsF passed tO: ==o a Entepply
error: no objects passed tINFO:o apply
error: no objects passcluedsterrolebind to aing.rbacppl.authorizaty
The output of the minikube logs command:
==> coredns <==
.:53
2019-05-21T01:00:44.788Z [INFO] CoreDNS-1.3.1
2019-05-21T01:00:44.788Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-05-21T01:00:44.789Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
==> dmesg <==
[ +5.000714] hpet1: lost 318 rtc interrupts
[ +5.001340] hpet1: lost 318 rtc interrupts
[ +5.000573] hpet1: lost 318 rtc interrupts
[May21 01:10] hpet1: lost 318 rtc interrupts
[ +5.002394] hpet1: lost 319 rtc interrupts
[ +4.999774] hpet1: lost 319 rtc interrupts
[ +5.001611] hpet1: lost 318 rtc interrupts
[ +5.003641] hpet1: lost 318 rtc interrupts
[ +5.002075] hpet1: lost 319 rtc interrupts
[ +5.003913] hpet1: lost 318 rtc interrupts
[ +5.004012] hpet1: lost 318 rtc interrupts
[ +5.003597] hpet1: lost 319 rtc interrupts
[ +5.001587] hpet1: lost 318 rtc interrupts
[ +5.003505] hpet1: lost 319 rtc interrupts
[ +5.003605] hpet1: lost 318 rtc interrupts
[May21 01:11] hpet1: lost 319 rtc interrupts
[ +5.000461] hpet1: lost 318 rtc interrupts
[ +5.001194] hpet1: lost 318 rtc interrupts
[ +5.000937] hpet1: lost 318 rtc interrupts
[ +5.000990] hpet1: lost 318 rtc interrupts
[ +5.001918] hpet1: lost 318 rtc interrupts
[ +5.000062] hpet1: lost 318 rtc interrupts
[ +5.001929] hpet1: lost 318 rtc interrupts
[ +5.000839] hpet1: lost 318 rtc interrupts
[ +5.001052] hpet1: lost 318 rtc interrupts
[ +5.003339] hpet1: lost 318 rtc interrupts
[ +5.002997] hpet1: lost 320 rtc interrupts
[May21 01:12] hpet1: lost 318 rtc interrupts
[ +5.001611] hpet1: lost 318 rtc interrupts
[ +5.002261] hpet1: lost 318 rtc interrupts
[ +5.001908] hpet1: lost 318 rtc interrupts
[ +5.000722] hpet1: lost 318 rtc interrupts
[ +5.003967] hpet1: lost 318 rtc interrupts
[ +5.004101] hpet1: lost 319 rtc interrupts
[ +5.001064] hpet1: lost 319 rtc interrupts
[ +5.001861] hpet1: lost 319 rtc interrupts
[ +4.999888] hpet1: lost 319 rtc interrupts
[ +5.003522] hpet1: lost 318 rtc interrupts
[ +5.004202] hpet1: lost 318 rtc interrupts
[May21 01:13] hpet1: lost 319 rtc interrupts
[ +5.001042] hpet1: lost 318 rtc interrupts
[ +5.005190] hpet1: lost 320 rtc interrupts
[ +5.001376] hpet1: lost 318 rtc interrupts
[ +5.001892] hpet1: lost 318 rtc interrupts
[ +5.001782] hpet1: lost 318 rtc interrupts
[ +5.001287] hpet1: lost 318 rtc interrupts
[ +5.001858] hpet1: lost 318 rtc interrupts
[ +5.001843] hpet1: lost 318 rtc interrupts
[ +5.001376] hpet1: lost 318 rtc interrupts
[ +5.001668] hpet1: lost 319 rtc interrupts
==> kernel <==
01:13:55 up 16 min, 0 users, load average: 0.46, 0.28, 0.27
Linux minikube 4.15.0 #1 SMP Thu Apr 25 20:51:48 UTC 2019 x86_64 GNU/Linux
==> kube-addon-manager <==
error: no objects passed to apply
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:06:12+00:00 ==
IeNFO: rLreaoder is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:07:10+00:00 ==
r: no objects passed to apply
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:07:12+00:00 ==
eINFO: Leader is mrrinor: ikubeno o
bjects passed to apply
error: no objects passed to apply
error: no objects passed Ito applyNFO: == Ku
bernetes addon ensure errorcompleted at 2: n0o objects passed to apply
19-05-21T01:08:10+00:00 ==
error: no objects passed to apply
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:08:11+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:09:11+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:09:12+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:10:10+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:10:12+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:11:10+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:11:12+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:12:10+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:12:12+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:13:10+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:13:11+00:00 ==
==> kube-apiserver <==
I0521 01:00:01.934081 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0521 01:00:01.972553 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0521 01:00:02.011988 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0521 01:00:02.053148 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0521 01:00:02.091327 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0521 01:00:02.132058 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0521 01:00:02.173028 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0521 01:00:02.213673 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0521 01:00:02.253271 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0521 01:00:02.292192 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0521 01:00:02.332378 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0521 01:00:02.372968 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0521 01:00:02.413483 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0521 01:00:02.453303 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0521 01:00:02.491931 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0521 01:00:02.532516 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0521 01:00:02.574068 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0521 01:00:02.615355 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0521 01:00:02.651882 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0521 01:00:02.692424 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0521 01:00:02.732514 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0521 01:00:02.772421 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0521 01:00:02.813252 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0521 01:00:02.851364 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0521 01:00:02.893279 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0521 01:00:02.933081 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0521 01:00:02.971367 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0521 01:00:02.974040 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0521 01:00:03.015618 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0521 01:00:03.051902 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0521 01:00:03.091517 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0521 01:00:03.131413 1 controller.go:606] quota admission added evaluator for: endpoints
I0521 01:00:03.132330 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0521 01:00:03.173998 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0521 01:00:03.215681 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0521 01:00:03.249995 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0521 01:00:03.253558 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0521 01:00:03.296259 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0521 01:00:03.333993 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0521 01:00:03.378831 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0521 01:00:03.433266 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0521 01:00:03.455323 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0521 01:00:03.496190 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
W0521 01:00:03.649536 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.99.100]
I0521 01:00:04.359725 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0521 01:00:04.913323 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0521 01:00:05.189354 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0521 01:00:11.306246 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0521 01:00:11.337800 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
E0521 01:12:27.783315 1 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
==> kube-proxy <==
W0521 01:00:12.973115 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
I0521 01:00:13.106524 1 server_others.go:147] Using iptables Proxier.
W0521 01:00:13.114072 1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0521 01:00:13.120602 1 server.go:555] Version: v1.14.1
I0521 01:00:13.178350 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0521 01:00:13.178427 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0521 01:00:13.178971 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0521 01:00:13.183047 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0521 01:00:13.183101 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0521 01:00:13.183183 1 config.go:202] Starting service config controller
I0521 01:00:13.183210 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0521 01:00:13.183221 1 config.go:102] Starting endpoints config controller
I0521 01:00:13.183227 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0521 01:00:13.286639 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0521 01:00:13.286768 1 controller_utils.go:1034] Caches are synced for service config controller
==> kube-scheduler <==
I0521 00:59:57.399041 1 serving.go:319] Generated self-signed cert in-memory
W0521 00:59:57.691838 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0521 00:59:57.691863 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0521 00:59:57.691873 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0521 00:59:57.694717 1 server.go:142] Version: v1.14.1
I0521 00:59:57.694764 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0521 00:59:57.696397 1 authorization.go:47] Authorization is disabled
W0521 00:59:57.696422 1 authentication.go:55] Authentication is disabled
I0521 00:59:57.696434 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0521 00:59:57.697263 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0521 01:00:00.194839 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0521 01:00:00.205644 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0521 01:00:00.210168 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0521 01:00:00.215920 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0521 01:00:00.216970 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0521 01:00:00.217143 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0521 01:00:00.217356 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0521 01:00:00.218615 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0521 01:00:00.219952 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0521 01:00:00.220817 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0521 01:00:01.197279 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0521 01:00:01.207802 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0521 01:00:01.212044 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0521 01:00:01.217864 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0521 01:00:01.218677 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0521 01:00:01.220656 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0521 01:00:01.220826 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0521 01:00:01.221785 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0521 01:00:01.223468 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0521 01:00:01.224690 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0521 01:00:03.100821 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0521 01:00:03.203038 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0521 01:00:03.203245 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0521 01:00:03.209227 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler
==> kubelet <==
-- Logs begin at Tue 2019-05-21 00:58:03 UTC, end at Tue 2019-05-21 01:13:55 UTC. --
May 21 00:59:58 minikube kubelet[3219]: I0521 00:59:58.173979 3219 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 21 00:59:58 minikube kubelet[3219]: I0521 00:59:58.174274 3219 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 21 00:59:58 minikube kubelet[3219]: I0521 00:59:58.174569 3219 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.271869 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.373672 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.474065 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.574671 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.674900 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.775786 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.876074 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.976206 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.077716 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: I0521 00:59:59.176564 3219 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.178368 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.278575 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.379668 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.479896 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.580916 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.682320 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.782839 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.883482 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.983911 3219 kubelet.go:2244] node "minikube" not found
May 21 01:00:00 minikube kubelet[3219]: E0521 01:00:00.084967 3219 kubelet.go:2244] node "minikube" not found
May 21 01:00:00 minikube kubelet[3219]: E0521 01:00:00.186713 3219 kubelet.go:2244] node "minikube" not found
May 21 01:00:00 minikube kubelet[3219]: E0521 01:00:00.242761 3219 controller.go:194] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found
May 21 01:00:00 minikube kubelet[3219]: E0521 01:00:00.287339 3219 kubelet.go:2244] node "minikube" not found
May 21 01:00:00 minikube kubelet[3219]: I0521 01:00:00.287400 3219 reconciler.go:154] Reconciler: start to sync state
May 21 01:00:00 minikube kubelet[3219]: I0521 01:00:00.301917 3219 kubelet_node_status.go:75] Successfully registered node minikube
May 21 01:00:00 minikube kubelet[3219]: E0521 01:00:00.327055 3219 controller.go:115] failed to ensure node lease exists, will retry in 1.6s, error: namespaces "kube-node-lease" not found
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.057721 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6d61f4257", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f14287ec5e57, ext:152723365, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f14287ec5e57, ext:152723365, loc:(*time.Location)(0x800e8e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.113062 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6db396a74", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d068674, ext:238323679, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d068674, ext:238323679, loc:(*time.Location)(0x800e8e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.171969 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6db38eed1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d060ad1, ext:238292012, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d060ad1, ext:238292012, loc:(*time.Location)(0x800e8e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.230649 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6db397c0f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d06980f, ext:238328168, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d06980f, ext:238328168, loc:(*time.Location)(0x800e8e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.289852 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6db38eed1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d060ad1, ext:238292012, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428e269453, ext:257201569, loc:(*time.Location)(0x800e8e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.346943 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6db396a74", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d068674, ext:238323679, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428e26a274, ext:257205195, loc:(*time.Location)(0x800e8e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.412734 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6db397c0f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d06980f, ext:238328168, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428e26ae16, ext:257208165, loc:(*time.Location)(0x800e8e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.473622 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6dc9031b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428e5d4db7, ext:260787980, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428e5d4db7, ext:260787980, loc:(*time.Location)(0x800e8e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.450791 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c8dfa847-7b63-11e9-867f-0800279b5afd-kube-proxy") pod "kube-proxy-6j77l" (UID: "c8dfa847-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.451376 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/c8dfa847-7b63-11e9-867f-0800279b5afd-xtables-lock") pod "kube-proxy-6j77l" (UID: "c8dfa847-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.451455 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-npzwt" (UniqueName: "kubernetes.io/secret/c8dfa847-7b63-11e9-867f-0800279b5afd-kube-proxy-token-npzwt") pod "kube-proxy-6j77l" (UID: "c8dfa847-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.451509 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/c8dfa847-7b63-11e9-867f-0800279b5afd-lib-modules") pod "kube-proxy-6j77l" (UID: "c8dfa847-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.552803 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c8dc2d71-7b63-11e9-867f-0800279b5afd-config-volume") pod "coredns-fb8b8dccf-6dflw" (UID: "c8dc2d71-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.553724 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-twpd7" (UniqueName: "kubernetes.io/secret/c8dc2d71-7b63-11e9-867f-0800279b5afd-coredns-token-twpd7") pod "coredns-fb8b8dccf-6dflw" (UID: "c8dc2d71-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.553961 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c8d985e8-7b63-11e9-867f-0800279b5afd-config-volume") pod "coredns-fb8b8dccf-n5s8g" (UID: "c8d985e8-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.554050 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-twpd7" (UniqueName: "kubernetes.io/secret/c8d985e8-7b63-11e9-867f-0800279b5afd-coredns-token-twpd7") pod "coredns-fb8b8dccf-n5s8g" (UID: "c8d985e8-7b63-11e9-867f-0800279b5afd")
May 21 01:00:12 minikube kubelet[3219]: I0521 01:00:12.056906 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-nxpcd" (UniqueName: "kubernetes.io/secret/c93abb08-7b63-11e9-867f-0800279b5afd-storage-provisioner-token-nxpcd") pod "storage-provisioner" (UID: "c93abb08-7b63-11e9-867f-0800279b5afd")
May 21 01:00:12 minikube kubelet[3219]: I0521 01:00:12.056950 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/c93abb08-7b63-11e9-867f-0800279b5afd-tmp") pod "storage-provisioner" (UID: "c93abb08-7b63-11e9-867f-0800279b5afd")
May 21 01:00:12 minikube kubelet[3219]: W0521 01:00:12.489905 3219 pod_container_deletor.go:75] Container "39b0d11437407df2e84d8dde7f186925c1dfb0a0241dd0c9febf58d8aa558b81" not found in pod's containers
May 21 01:00:12 minikube kubelet[3219]: W0521 01:00:12.529307 3219 container.go:409] Failed to create summary reader for "/system.slice/run-rd90405dcc3304eef8455488f89f786ef.scope": none of the resources are being tracked.
May 21 01:00:13 minikube kubelet[3219]: W0521 01:00:13.108753 3219 pod_container_deletor.go:75] Container "d275bdcacb5ea334f17970fb1a7071b14a0bdceb03bdefca48c8df3d0d0306ba" not found in pod's containers
==> storage-provisioner <==
The operating system version:
MacOS 10.13.6
Output of minikube status
host: Running
kubelet: Running
apiserver: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100