Unable to override node-name: lookup minikube on x:53: no such host #7161

Closed
ialidzhikov opened this issue Mar 23, 2020 · 3 comments · Fixed by #7238
Labels
kind/bug · kind/support

Comments

@ialidzhikov

The exact command to reproduce the issue:

$ minikube start \
  --profile profile-v1.18 \
  --kubernetes-version "v1.18.0-rc.1" \
  --vm-driver virtualbox \
  --extra-config=kubeadm.node-name=minikube \
  --extra-config=kubelet.hostname-override=minikube
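
For context: the kubeadm.node-name setting above surfaces as --node-name=minikube on the kubeadm init invocation (visible in the failure output below), and kubelet.hostname-override should likewise be passed to the kubelet as --hostname-override=minikube. Since the failure below reports that the name cannot be resolved, a quick check from inside the VM looks roughly like this (a sketch only; it assumes minikube ssh forwards a command to the guest and that nslookup is available in the guest image):

$ minikube ssh -p profile-v1.18 -- "cat /etc/resolv.conf"
$ minikube ssh -p profile-v1.18 -- "nslookup minikube"
# The [WARNING Hostname] entries below are the guest resolver (the VirtualBox
# NAT DNS at 10.0.2.3) answering that "minikube" does not exist.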

The full output of the command that failed:

😄  [test-v1.18] minikube v1.8.2 on Darwin 10.15.3
    ▪ KUBECONFIG=/Users/i331370/.kube/config
✨  Using the virtualbox driver based on user configuration
🔥  Creating virtualbox VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.0-rc.1 on Docker 19.03.6 ...
    ▪ kubeadm.node-name=minikube
    ▪ kubelet.hostname-override=minikube
🚀  Launching Kubernetes ...

💣  Error starting cluster: init failed. output: "-- stdout --\n[init] Using Kubernetes version: v1.18.0-rc.1\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.99.105 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.99.105 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\n[apiclient] All control plane components are healthy after 13.003771 seconds\n[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n[kubelet] Creating a ConfigMap \"kubelet-config-1.18\" in namespace kube-system with the configuration for the kubelets in the cluster\n[kubelet-check] Initial timeout of 40s passed.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.\n\n-- /stdout --\n** stderr ** \nW0323 16:58:21.182181    2851 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]\n\t[WARNING Hostname]: hostname \"minikube\" could not be reached\n\t[WARNING Hostname]: hostname \"minikube\": lookup minikube on 10.0.2.3:53: no such host\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nW0323 16:58:24.553548    2851 manifests.go:225] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\"\nW0323 16:58:24.554639    2851 manifests.go:225] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\"\nerror execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition\nTo see the stack trace of this error execute with --v=5 or higher\n\n** /stderr **": /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0-rc.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --node-name=minikube --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.0-rc.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.99.105 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.99.105 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.003771 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

stderr:
W0323 16:58:21.182181    2851 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING Hostname]: hostname "minikube" could not be reached
	[WARNING Hostname]: hostname "minikube": lookup minikube on 10.0.2.3:53: no such host
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0323 16:58:24.553548    2851 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0323 16:58:24.554639    2851 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

The output of the minikube logs command:

$ minikube logs -p profile-v1.18

==> Docker <==
-- Logs begin at Mon 2020-03-23 16:44:38 UTC, end at Mon 2020-03-23 16:52:25 UTC. --
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.739963449Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740019389Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740069743Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740081504Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740089997Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740098927Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740107418Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740116625Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740124752Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740135846Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740192086Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740303086Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740666051Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740696495Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740726141Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740769314Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740778991Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740786957Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740794282Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740802279Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740809801Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740817049Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740824343Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740851384Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740861156Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740868769Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.740876167Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.741012489Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.741061541Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.741072508Z" level=info msg="containerd successfully booted in 0.004406s"
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.752227050Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.752370277Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.752495671Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.752562888Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.754992630Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.755033402Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.755058876Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.755078448Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.773694886Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.773735035Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.773742195Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.773747284Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.773752373Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.773757508Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.774037951Z" level=info msg="Loading containers: start."
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.891735369Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 23 16:45:10 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:10.926117240Z" level=info msg="Loading containers: done."
Mar 23 16:45:11 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:11.339575304Z" level=info msg="Docker daemon" commit=369ce74a3c graphdriver(s)=overlay2 version=19.03.6
Mar 23 16:45:11 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:11.339665291Z" level=info msg="Daemon has completed initialization"
Mar 23 16:45:11 test-v1.18 systemd[1]: Started Docker Application Container Engine.
Mar 23 16:45:11 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:11.376904888Z" level=info msg="API listen on /var/run/docker.sock"
Mar 23 16:45:11 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:11.377155448Z" level=info msg="API listen on [::]:2376"
Mar 23 16:45:17 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:17.622025760Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/00eaca5acffc072c53ee0db156ba7e7a015ca23357166281449370d28967c4c3/shim.sock" debug=false pid=3460
Mar 23 16:45:17 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:17.648544066Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7c0d506ca234975d8f96214aac439b6e443ae0511141ed026a59ec9bf9230504/shim.sock" debug=false pid=3478
Mar 23 16:45:17 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:17.657949052Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ca1f6495340577c7ffbe8a61227c3b6a479e5621f4fae5affcab2ff50e519c88/shim.sock" debug=false pid=3480
Mar 23 16:45:17 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:17.722335145Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6d53e84e3bb280d5d37adfc7b16a427a8792402ff55038fd27780a7cb98b7b89/shim.sock" debug=false pid=3523
Mar 23 16:45:17 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:17.928491986Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b966653912a458891549b84293140a1e74958a970b1f6bed9d1d2f6eceec0047/shim.sock" debug=false pid=3607
Mar 23 16:45:17 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:17.942994630Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/607fa8f8e39b07624ecebe530fcd2d7203549892c436806b5134301859ca4d2a/shim.sock" debug=false pid=3618
Mar 23 16:45:17 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:17.949803358Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4eeda3a79711abc3704c6b6bee92d0367f6134c4f1523e20566fe7825c6834a6/shim.sock" debug=false pid=3631
Mar 23 16:45:18 test-v1.18 dockerd[2591]: time="2020-03-23T16:45:18.077544672Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/09afecbf4abde9c25ae301c3bde0d4626644b420bbb87b2860d0457fef06921c/shim.sock" debug=false pid=3686

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
09afecbf4abde       bce13e0cc95a6       7 minutes ago       Running             kube-scheduler            0                   6d53e84e3bb28
4eeda3a79711a       b4f6b0bffa351       7 minutes ago       Running             kube-controller-manager   0                   ca1f649534057
607fa8f8e39b0       5347d260989ad       7 minutes ago       Running             kube-apiserver            0                   7c0d506ca2349
b966653912a45       303ce5db0e90d       7 minutes ago       Running             etcd                      0                   00eaca5acffc0

==> dmesg <==
[Mar23 16:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[  +0.201394] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[  +2.803375] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +10.296134] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[  +0.008755] systemd-fstab-generator[1349]: Ignoring "noauto" for root device
[  +0.001580] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.422280] vboxvideo: loading out-of-tree module taints kernel.
[  +0.000039] vboxvideo: Unknown symbol ttm_bo_mmap (err -2)
[  +0.000010] vboxvideo: Unknown symbol ttm_bo_global_release (err -2)
[  +0.000009] vboxvideo: Unknown symbol ttm_bo_manager_func (err -2)
[  +0.000003] vboxvideo: Unknown symbol ttm_bo_global_init (err -2)
[  +0.000007] vboxvideo: Unknown symbol ttm_bo_device_release (err -2)
[  +0.000013] vboxvideo: Unknown symbol ttm_bo_kunmap (err -2)
[  +0.000006] vboxvideo: Unknown symbol ttm_bo_del_sub_from_lru (err -2)
[  +0.000007] vboxvideo: Unknown symbol ttm_bo_device_init (err -2)
[  +0.000039] vboxvideo: Unknown symbol ttm_bo_init_mm (err -2)
[  +0.000018] vboxvideo: Unknown symbol ttm_bo_dma_acc_size (err -2)
[  +0.000004] vboxvideo: Unknown symbol ttm_tt_init (err -2)
[  +0.000001] vboxvideo: Unknown symbol ttm_bo_kmap (err -2)
[  +0.000005] vboxvideo: Unknown symbol ttm_bo_add_to_lru (err -2)
[  +0.000004] vboxvideo: Unknown symbol ttm_mem_global_release (err -2)
[  +0.000002] vboxvideo: Unknown symbol ttm_mem_global_init (err -2)
[  +0.000010] vboxvideo: Unknown symbol ttm_bo_init (err -2)
[  +0.000001] vboxvideo: Unknown symbol ttm_bo_validate (err -2)
[  +0.000004] vboxvideo: Unknown symbol ttm_bo_put (err -2)
[  +0.000003] vboxvideo: Unknown symbol ttm_tt_fini (err -2)
[  +0.000002] vboxvideo: Unknown symbol ttm_bo_eviction_valuable (err -2)
[  +0.025140] vgdrvHeartbeatInit: Setting up heartbeat to trigger every 2000 milliseconds
[  +0.000337] vboxguest: misc device minor 58, IRQ 20, I/O port d020, MMIO at 00000000f0000000 (size 0x400000)
[  +0.195456] hpet1: lost 699 rtc interrupts
[  +0.036591] VBoxService 5.2.32 r132073 (verbosity: 0) linux.amd64 (Jul 12 2019 10:32:28) release log
              00:00:00.004066 main     Log opened 2020-03-23T16:44:38.792228000Z
[  +0.000058] 00:00:00.004182 main     OS Product: Linux
[  +0.000030] 00:00:00.004217 main     OS Release: 4.19.94
[  +0.000028] 00:00:00.004246 main     OS Version: #1 SMP Fri Mar 6 11:41:28 PST 2020
[  +0.000035] 00:00:00.004273 main     Executable: /usr/sbin/VBoxService
              00:00:00.004274 main     Process ID: 2093
              00:00:00.004274 main     Package type: LINUX_64BITS_GENERIC
[  +0.000029] 00:00:00.004310 main     5.2.32 r132073 started. Verbose level = 0
[  +0.002175] 00:00:00.006476 main     Error: Service 'control' failed to initialize: VERR_INVALID_PARAMETER
[  +0.000078] 00:00:00.006561 main     Session 0 is about to close ...
[  +0.000045] 00:00:00.006607 main     Stopping all guest processes ...
[  +0.000028] 00:00:00.006636 main     Closing all guest files ...
[  +0.000703] 00:00:00.007330 main     Ended.
[  +0.422174] hpet1: lost 11 rtc interrupts
[  +0.140077] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +12.085718] systemd-fstab-generator[2343]: Ignoring "noauto" for root device
[  +0.153867] systemd-fstab-generator[2359]: Ignoring "noauto" for root device
[  +0.145606] systemd-fstab-generator[2375]: Ignoring "noauto" for root device
[Mar23 16:45] kauditd_printk_skb: 65 callbacks suppressed
[  +1.364183] systemd-fstab-generator[2795]: Ignoring "noauto" for root device
[  +1.348352] systemd-fstab-generator[2995]: Ignoring "noauto" for root device
[  +3.476763] kauditd_printk_skb: 107 callbacks suppressed
[Mar23 16:46] NFSD: Unable to end grace period: -110

==> kernel <==
 16:52:25 up 8 min,  0 users,  load average: 0.39, 0.54, 0.33
Linux test-v1.18 4.19.94 #1 SMP Fri Mar 6 11:41:28 PST 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.9"

==> kube-apiserver [607fa8f8e39b] <==
I0323 16:45:20.148614       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0323 16:45:20.156746       1 client.go:361] parsed scheme: "endpoint"
I0323 16:45:20.156778       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
W0323 16:45:20.277248       1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
W0323 16:45:20.284530       1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0323 16:45:20.294038       1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0323 16:45:20.310631       1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0323 16:45:20.313764       1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0323 16:45:20.326577       1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0323 16:45:20.346383       1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0323 16:45:20.346582       1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0323 16:45:20.356941       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0323 16:45:20.357109       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0323 16:45:20.358673       1 client.go:361] parsed scheme: "endpoint"
I0323 16:45:20.358737       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0323 16:45:20.366452       1 client.go:361] parsed scheme: "endpoint"
I0323 16:45:20.366581       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0323 16:45:22.305946       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0323 16:45:22.306113       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0323 16:45:22.306512       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0323 16:45:22.307043       1 secure_serving.go:178] Serving securely on [::]:8443
I0323 16:45:22.307874       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0323 16:45:22.308250       1 crd_finalizer.go:266] Starting CRDFinalizer
I0323 16:45:22.308316       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0323 16:45:22.308367       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0323 16:45:22.310797       1 controller.go:81] Starting OpenAPI AggregationController
I0323 16:45:22.310837       1 autoregister_controller.go:141] Starting autoregister controller
I0323 16:45:22.310841       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0323 16:45:22.311419       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0323 16:45:22.311440       1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0323 16:45:22.311659       1 available_controller.go:387] Starting AvailableConditionController
I0323 16:45:22.311675       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0323 16:45:22.312060       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0323 16:45:22.312078       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0323 16:45:22.312098       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0323 16:45:22.312241       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0323 16:45:22.372442       1 controller.go:86] Starting OpenAPI controller
I0323 16:45:22.375452       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0323 16:45:22.375723       1 naming_controller.go:291] Starting NamingConditionController
I0323 16:45:22.375872       1 establishing_controller.go:76] Starting EstablishingController
I0323 16:45:22.375994       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0323 16:45:22.376104       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
E0323 16:45:22.384750       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.104, ResourceVersion: 0, AdditionalErrorMsg:
I0323 16:45:22.423638       1 shared_informer.go:230] Caches are synced for crd-autoregister
I0323 16:45:22.423813       1 cache.go:39] Caches are synced for autoregister controller
I0323 16:45:22.424130       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0323 16:45:22.424190       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0323 16:45:22.515190       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0323 16:45:23.305786       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0323 16:45:23.305827       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0323 16:45:23.326510       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0323 16:45:23.343832       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0323 16:45:23.343880       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0323 16:45:24.230737       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0323 16:45:24.315291       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0323 16:45:24.456316       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.99.104]
I0323 16:45:24.457442       1 controller.go:606] quota admission added evaluator for: endpoints
I0323 16:45:24.462595       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0323 16:45:25.852828       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0323 16:45:26.196790       1 controller.go:606] quota admission added evaluator for: serviceaccounts

==> kube-controller-manager [4eeda3a79711] <==
I0323 16:45:31.243291       1 controllermanager.go:533] Started "endpointslice"
I0323 16:45:31.243317       1 endpointslice_controller.go:213] Starting endpoint slice controller
I0323 16:45:31.243740       1 shared_informer.go:223] Waiting for caches to sync for endpoint_slice
I0323 16:45:31.493012       1 controllermanager.go:533] Started "podgc"
I0323 16:45:31.493332       1 gc_controller.go:89] Starting GC controller
I0323 16:45:31.493427       1 shared_informer.go:223] Waiting for caches to sync for GC
I0323 16:45:32.401120       1 controllermanager.go:533] Started "garbagecollector"
W0323 16:45:32.401159       1 controllermanager.go:525] Skipping "root-ca-cert-publisher"
I0323 16:45:32.401521       1 garbagecollector.go:133] Starting garbage collector controller
I0323 16:45:32.402545       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0323 16:45:32.402627       1 graph_builder.go:282] GraphBuilder running
I0323 16:45:32.426455       1 controllermanager.go:533] Started "tokencleaner"
I0323 16:45:32.426483       1 tokencleaner.go:118] Starting token cleaner controller
I0323 16:45:32.426820       1 shared_informer.go:223] Waiting for caches to sync for token_cleaner
I0323 16:45:32.426826       1 shared_informer.go:230] Caches are synced for token_cleaner
I0323 16:45:32.453107       1 controllermanager.go:533] Started "daemonset"
I0323 16:45:32.453151       1 daemon_controller.go:257] Starting daemon sets controller
I0323 16:45:32.453461       1 shared_informer.go:223] Waiting for caches to sync for daemon sets
I0323 16:45:32.643392       1 controllermanager.go:533] Started "job"
I0323 16:45:32.643500       1 job_controller.go:144] Starting job controller
I0323 16:45:32.643509       1 shared_informer.go:223] Waiting for caches to sync for job
I0323 16:45:32.893515       1 controllermanager.go:533] Started "statefulset"
I0323 16:45:32.894304       1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0323 16:45:32.894419       1 stateful_set.go:146] Starting stateful set controller
I0323 16:45:32.894437       1 shared_informer.go:223] Waiting for caches to sync for stateful set
I0323 16:45:32.932986       1 shared_informer.go:230] Caches are synced for namespace
I0323 16:45:32.943437       1 shared_informer.go:230] Caches are synced for HPA
I0323 16:45:32.943821       1 shared_informer.go:230] Caches are synced for expand
I0323 16:45:32.944422       1 shared_informer.go:230] Caches are synced for service account
I0323 16:45:32.945313       1 shared_informer.go:230] Caches are synced for PVC protection
I0323 16:45:32.953807       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I0323 16:45:32.958220       1 shared_informer.go:230] Caches are synced for ReplicationController
I0323 16:45:32.970897       1 shared_informer.go:230] Caches are synced for persistent volume
E0323 16:45:32.993642       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0323 16:45:32.994165       1 shared_informer.go:230] Caches are synced for ReplicaSet
E0323 16:45:32.994702       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0323 16:45:32.995336       1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0323 16:45:32.997506       1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0323 16:45:32.997563       1 shared_informer.go:230] Caches are synced for GC
I0323 16:45:32.997623       1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0323 16:45:32.997677       1 shared_informer.go:230] Caches are synced for TTL
I0323 16:45:33.006281       1 shared_informer.go:230] Caches are synced for PV protection
I0323 16:45:33.042775       1 shared_informer.go:230] Caches are synced for disruption
I0323 16:45:33.042906       1 disruption.go:339] Sending events to api server.
I0323 16:45:33.044441       1 shared_informer.go:230] Caches are synced for deployment
E0323 16:45:33.045838       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0323 16:45:33.240954       1 shared_informer.go:230] Caches are synced for endpoint
I0323 16:45:33.243992       1 shared_informer.go:230] Caches are synced for endpoint_slice
I0323 16:45:33.346439       1 shared_informer.go:230] Caches are synced for taint
I0323 16:45:33.346584       1 taint_manager.go:187] Starting NoExecuteTaintManager
I0323 16:45:33.443894       1 shared_informer.go:230] Caches are synced for job
I0323 16:45:33.453840       1 shared_informer.go:230] Caches are synced for daemon sets
I0323 16:45:33.494524       1 shared_informer.go:230] Caches are synced for stateful set
I0323 16:45:33.505156       1 shared_informer.go:230] Caches are synced for garbage collector
I0323 16:45:33.505194       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0323 16:45:33.532314       1 shared_informer.go:230] Caches are synced for attach detach
I0323 16:45:33.594710       1 shared_informer.go:230] Caches are synced for resource quota
I0323 16:45:33.596365       1 shared_informer.go:230] Caches are synced for resource quota
I0323 16:45:33.895099       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0323 16:45:33.895276       1 shared_informer.go:230] Caches are synced for garbage collector

==> kube-scheduler [09afecbf4abd] <==
I0323 16:45:18.562514       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0323 16:45:18.563527       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0323 16:45:19.141334       1 serving.go:313] Generated self-signed cert in-memory
W0323 16:45:22.404198       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0323 16:45:22.404548       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0323 16:45:22.404635       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0323 16:45:22.406313       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0323 16:45:22.431637       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0323 16:45:22.432053       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0323 16:45:22.436674       1 authorization.go:47] Authorization is disabled
W0323 16:45:22.436868       1 authentication.go:40] Authentication is disabled
I0323 16:45:22.437264       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0323 16:45:22.438644       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0323 16:45:22.438670       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0323 16:45:22.439150       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0323 16:45:22.439276       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0323 16:45:22.441239       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0323 16:45:22.441769       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0323 16:45:22.442078       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0323 16:45:22.442378       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0323 16:45:22.443141       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0323 16:45:22.443147       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0323 16:45:22.443419       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0323 16:45:22.443507       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0323 16:45:22.443634       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0323 16:45:22.443639       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0323 16:45:22.444489       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0323 16:45:22.446369       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0323 16:45:22.449781       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0323 16:45:22.451161       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0323 16:45:22.451956       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0323 16:45:22.454103       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0323 16:45:22.455601       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0323 16:45:23.853318       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0323 16:45:24.839119       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0323 16:45:25.839842       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0323 16:45:25.870111       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Mon 2020-03-23 16:44:38 UTC, end at Mon 2020-03-23 16:52:25 UTC. --
Mar 23 16:52:19 test-v1.18 kubelet[11089]: E0323 16:52:19.886727   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:19 test-v1.18 kubelet[11089]: E0323 16:52:19.988040   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:20 test-v1.18 kubelet[11089]: E0323 16:52:20.088521   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:20 test-v1.18 kubelet[11089]: E0323 16:52:20.189985   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:20 test-v1.18 kubelet[11089]: E0323 16:52:20.290662   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:20 test-v1.18 kubelet[11089]: E0323 16:52:20.392106   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:20 test-v1.18 kubelet[11089]: E0323 16:52:20.492778   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:20 test-v1.18 kubelet[11089]: E0323 16:52:20.594766   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:20 test-v1.18 kubelet[11089]: E0323 16:52:20.695317   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:20 test-v1.18 kubelet[11089]: E0323 16:52:20.795792   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:20 test-v1.18 kubelet[11089]: E0323 16:52:20.896870   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:20 test-v1.18 kubelet[11089]: E0323 16:52:20.998109   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:21 test-v1.18 kubelet[11089]: E0323 16:52:21.099304   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:21 test-v1.18 kubelet[11089]: E0323 16:52:21.199756   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:21 test-v1.18 kubelet[11089]: E0323 16:52:21.301188   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:21 test-v1.18 kubelet[11089]: E0323 16:52:21.401333   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:21 test-v1.18 kubelet[11089]: E0323 16:52:21.502136   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:21 test-v1.18 kubelet[11089]: E0323 16:52:21.603383   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:21 test-v1.18 kubelet[11089]: E0323 16:52:21.704025   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:21 test-v1.18 kubelet[11089]: E0323 16:52:21.804550   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:21 test-v1.18 kubelet[11089]: E0323 16:52:21.905238   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:22 test-v1.18 kubelet[11089]: E0323 16:52:22.005891   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:22 test-v1.18 kubelet[11089]: E0323 16:52:22.106174   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:22 test-v1.18 kubelet[11089]: E0323 16:52:22.208534   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:22 test-v1.18 kubelet[11089]: E0323 16:52:22.309457   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:22 test-v1.18 kubelet[11089]: E0323 16:52:22.409953   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:22 test-v1.18 kubelet[11089]: E0323 16:52:22.510868   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:22 test-v1.18 kubelet[11089]: E0323 16:52:22.612114   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:22 test-v1.18 kubelet[11089]: E0323 16:52:22.712379   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:22 test-v1.18 kubelet[11089]: E0323 16:52:22.812665   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:22 test-v1.18 kubelet[11089]: E0323 16:52:22.912796   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:23 test-v1.18 kubelet[11089]: E0323 16:52:23.013033   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:23 test-v1.18 kubelet[11089]: E0323 16:52:23.113410   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:23 test-v1.18 kubelet[11089]: E0323 16:52:23.215934   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:23 test-v1.18 kubelet[11089]: E0323 16:52:23.316352   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:23 test-v1.18 kubelet[11089]: E0323 16:52:23.417511   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:23 test-v1.18 kubelet[11089]: E0323 16:52:23.518136   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:23 test-v1.18 kubelet[11089]: E0323 16:52:23.618957   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:23 test-v1.18 kubelet[11089]: E0323 16:52:23.719471   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:23 test-v1.18 kubelet[11089]: E0323 16:52:23.819664   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:23 test-v1.18 kubelet[11089]: E0323 16:52:23.920556   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:24 test-v1.18 kubelet[11089]: E0323 16:52:24.021664   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:24 test-v1.18 kubelet[11089]: E0323 16:52:24.122058   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:24 test-v1.18 kubelet[11089]: E0323 16:52:24.222992   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:24 test-v1.18 kubelet[11089]: E0323 16:52:24.324068   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:24 test-v1.18 kubelet[11089]: E0323 16:52:24.424375   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:24 test-v1.18 kubelet[11089]: E0323 16:52:24.524710   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:24 test-v1.18 kubelet[11089]: E0323 16:52:24.625574   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:24 test-v1.18 kubelet[11089]: E0323 16:52:24.726173   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:24 test-v1.18 kubelet[11089]: E0323 16:52:24.826945   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:24 test-v1.18 kubelet[11089]: E0323 16:52:24.927321   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:25 test-v1.18 kubelet[11089]: E0323 16:52:25.027986   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:25 test-v1.18 kubelet[11089]: E0323 16:52:25.128520   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:25 test-v1.18 kubelet[11089]: E0323 16:52:25.229171   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:25 test-v1.18 kubelet[11089]: E0323 16:52:25.329718   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:25 test-v1.18 kubelet[11089]: E0323 16:52:25.398041   11089 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "m01" is forbidden: User "system:node:minikube" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
Mar 23 16:52:25 test-v1.18 kubelet[11089]: E0323 16:52:25.430094   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:25 test-v1.18 kubelet[11089]: E0323 16:52:25.530702   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:25 test-v1.18 kubelet[11089]: E0323 16:52:25.631054   11089 kubelet.go:2267] node "m01" not found
Mar 23 16:52:25 test-v1.18 kubelet[11089]: E0323 16:52:25.731458   11089 kubelet.go:2267] node "m01" not found

The operating system version:

$ sw_vers
ProductName:	Mac OS X
ProductVersion:	10.15.3
BuildVersion:	19D76
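
One way to pin down the mismatch visible in the kubelet log above (the kubelet keeps looking for node "m01" even though kubeadm was invoked with --node-name=minikube) is to check which node name the kubelet actually received inside the VM. This is only a sketch: it assumes minikube ssh forwards a command to the guest; otherwise run the same commands from an interactive minikube ssh -p profile-v1.18 session.

$ minikube ssh -p profile-v1.18 -- "cat /var/lib/kubelet/kubeadm-flags.env"
$ minikube ssh -p profile-v1.18 -- "sudo systemctl cat kubelet | grep -i hostname-override"
# If the kubelet ends up with node name "m01" while its kubeconfig credentials
# are for system:node:minikube (as the lease error above shows), both the
# node "m01" not found messages and the lease permission error follow directly.
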
@ialidzhikov
Author

/kind bug

@k8s-ci-robot added the kind/bug label Mar 23, 2020
@vpnachev

I believe this is related to #6200

@priyawadhwa added the kind/support label Mar 25, 2020
@tstromberg changed the title from "Cannot specify the --node-name" to "Cannot specify the --node-name: lookup minikube on 10.0.2.3:53: no such host" Mar 25, 2020
@tstromberg changed the title from "Cannot specify the --node-name: lookup minikube on 10.0.2.3:53: no such host" to "Unable to override node-name: lookup minikube on x:53: no such host" Mar 25, 2020
@tstromberg
Contributor

tstromberg commented Mar 25, 2020

I assumed this was fixed at head, but I'm seeing the same behavior there:

[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

stderr:
W0325 17:33:03.257504    2929 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING Hostname]: hostname "minikube" could not be reached
	[WARNING Hostname]: hostname "minikube": lookup minikube on 10.0.2.3:53: no such host
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0325 17:33:05.972259    2929 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0325 17:33:05.973298    2929 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
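
For anyone hitting the hostname lookup warning specifically: mapping the overridden name to the VM's own IP inside the guest silences the [WARNING Hostname] lookup failure, although it does not by itself address the node-name mismatch described above. A rough sketch (adjust the profile name to yours; command forwarding through minikube ssh is assumed):

$ VM_IP=$(minikube ip -p profile-v1.18)
$ minikube ssh -p profile-v1.18 -- "echo \"${VM_IP} minikube\" | sudo tee -a /etc/hosts"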
