vmware: Shared folder does not work: VMware tools not installed #6013

Closed · mvgijssel opened this issue Dec 4, 2019 · 5 comments
Labels:
- co/vmware-driver: Issues with vmware driver
- help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
- kind/feature: Categorizes issue or PR as related to a new feature.
- priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

@mvgijssel

Running minikube with the vmware driver does not share a folder with the host as documented at https://kubernetes.io/docs/setup/learning-environment/minikube/#mounted-host-folders.

The shared folder is shown in VMware Fusion, but is not available inside the minikube VM.
[screenshot: VMware Fusion sharing settings showing the shared folder]

The exact command to reproduce the issue:

minikube start --vm-driver vmware
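
To confirm the symptom, one can check for the HGFS mount point and the guest tools from the host (a minimal sketch; /mnt/hgfs is the conventional VMware shared-folder mount point and vmware-hgfsclient ships with VMware tools, so both are assumptions about the guest image):

# Without VMware tools in the guest, these checks come back empty.
minikube ssh -- "ls -la /mnt/hgfs"          # conventional hgfs mount point (assumption)
minikube ssh -- "which vmware-hgfsclient"   # present only if VMware tools are installed
minikube ssh -- "mount | grep -i hgfs"      # no hgfs entry means the share is not mounted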

The output of the minikube logs command:


==> Docker <==
-- Logs begin at Wed 2019-12-04 15:43:15 UTC, end at Wed 2019-12-04 16:43:14 UTC. --
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522680214Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522693004Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522701213Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522716349Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522727226Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522737049Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522744937Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522752316Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522759905Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522805574Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522818274Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522826476Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.522834378Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.523226677Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.523335954Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.523348006Z" level=info msg="containerd successfully booted in 0.009235s"
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.531287779Z" level=info msg="parsed scheme: "unix"" module=grpc
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.531379080Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.531415725Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.531484601Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.533298135Z" level=info msg="parsed scheme: "unix"" module=grpc
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.533338296Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.533366540Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.533381909Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.548578207Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.548650097Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.548659031Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.548663566Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.548668164Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.548672225Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.549148169Z" level=info msg="Loading containers: start."
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.614899793Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.648409421Z" level=info msg="Loading containers: done."
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.694027623Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.694154249Z" level=info msg="Daemon has completed initialization"
Dec 04 15:43:29 minikube systemd[1]: Started Docker Application Container Engine.
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.726619503Z" level=info msg="API listen on /var/run/docker.sock"
Dec 04 15:43:29 minikube dockerd[5224]: time="2019-12-04T15:43:29.726689186Z" level=info msg="API listen on [::]:2376"
Dec 04 15:44:40 minikube dockerd[5224]: time="2019-12-04T15:44:40.856577826Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/57b17617968f24dbcf51d966f9015701bb0c007d0fbe7387d03f017a2288a0dd/shim.sock" debug=false pid=7578
Dec 04 15:44:40 minikube dockerd[5224]: time="2019-12-04T15:44:40.859142512Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b65c312c1cdce21a3d36ed37d3b0357eaf0dfef0e1b2cde75959a113c21d9e8b/shim.sock" debug=false pid=7586
Dec 04 15:44:40 minikube dockerd[5224]: time="2019-12-04T15:44:40.859200925Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/92a301eacae2837203102030629bfb3b4fe48d60336b66e69b0ec9a86b31e00c/shim.sock" debug=false pid=7584
Dec 04 15:44:40 minikube dockerd[5224]: time="2019-12-04T15:44:40.860325691Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1166366e4d121cc41d19d164e8be368951d9c74653db057ed4b62f7d1675c3ef/shim.sock" debug=false pid=7593
Dec 04 15:44:40 minikube dockerd[5224]: time="2019-12-04T15:44:40.860642702Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e538155a44f554567d63a68cd4d9e64703269195484656edf92366eb704edc7f/shim.sock" debug=false pid=7596
Dec 04 15:44:41 minikube dockerd[5224]: time="2019-12-04T15:44:41.464540164Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/122dbd9e4a959349bc552c3c6032f901d769262423622458d75867118dc24ef9/shim.sock" debug=false pid=7879
Dec 04 15:44:41 minikube dockerd[5224]: time="2019-12-04T15:44:41.465182985Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c20e417a964f63dac28d5a2c65c87ea9635a8964038f586908024e5be8e6c29d/shim.sock" debug=false pid=7880
Dec 04 15:44:41 minikube dockerd[5224]: time="2019-12-04T15:44:41.467169838Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e9ec7ab1531ed6fa96ca40a4cbd869ae21392c48a620c0b0de5f81786a4d1a31/shim.sock" debug=false pid=7890
Dec 04 15:44:41 minikube dockerd[5224]: time="2019-12-04T15:44:41.477140353Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4d75a3a09fd359ba39e266ff5b5aeed0db91ef98f890791ccc22eaa3b2eb0d53/shim.sock" debug=false pid=7932
Dec 04 15:44:45 minikube dockerd[5224]: time="2019-12-04T15:44:45.506053194Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e3a8c1d4f37ca229d0a593f1425325d406ecd6eb84fd6192d915c6599ed093c2/shim.sock" debug=false pid=8416
Dec 04 15:45:09 minikube dockerd[5224]: time="2019-12-04T15:45:09.736069724Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8986144d02fc1ed739e7c7e25dd4281fbf58c81ad177d25452dee02aa29cdba8/shim.sock" debug=false pid=9296
Dec 04 15:45:09 minikube dockerd[5224]: time="2019-12-04T15:45:09.931863617Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a52bc0bcd17c4620afb3ca83db66961a77d7972b490c621fe15512a1e6d05e52/shim.sock" debug=false pid=9347
Dec 04 15:45:10 minikube dockerd[5224]: time="2019-12-04T15:45:10.028084021Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/32452ef8578827e912cf0e39bc84a08b67f59140ca4bacee14c1f699ae8d5cbb/shim.sock" debug=false pid=9413
Dec 04 15:45:10 minikube dockerd[5224]: time="2019-12-04T15:45:10.057454070Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/32da4028fdbbdecda609b277ce23c2cc2bc17b90161ca13ead1e762059ba39a6/shim.sock" debug=false pid=9457
Dec 04 15:45:10 minikube dockerd[5224]: time="2019-12-04T15:45:10.393507860Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/eaa43b33314168d154bbc696210481aaa4e085d5854536e889264d3bf74a62bc/shim.sock" debug=false pid=9600
Dec 04 15:45:10 minikube dockerd[5224]: time="2019-12-04T15:45:10.660956647Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/44813a0b37bdb8010e7c5af5a0e10cb9e1a03c88cb870d04744aeedbfbd09eb3/shim.sock" debug=false pid=9701
Dec 04 15:45:11 minikube dockerd[5224]: time="2019-12-04T15:45:11.492422226Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4b06bfdacdafafa4704237511ebed425897e66550b22ba00712084169fe8a4a4/shim.sock" debug=false pid=9798
Dec 04 15:45:11 minikube dockerd[5224]: time="2019-12-04T15:45:11.495924922Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ff2bb92cc64212a43e56823a2d4e9882f570b8f3d2c673e5e19d4b29c8d0f944/shim.sock" debug=false pid=9805
Dec 04 15:45:11 minikube dockerd[5224]: time="2019-12-04T15:45:11.977717259Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7d66059d1d974af6796528de8619022b639881e3c7756049b808f65defc4a107/shim.sock" debug=false pid=9922
Dec 04 15:45:12 minikube dockerd[5224]: time="2019-12-04T15:45:12.033352641Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3102283318a5dfa40cb3321ef3e560d24d90e520e59ad6e47029cface841330a/shim.sock" debug=false pid=9948
Dec 04 15:45:15 minikube dockerd[5224]: time="2019-12-04T15:45:15.002797170Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b38541543fa473c8ec90bd6b5027dec24ee72eff4ca90030e3aee5909de09657/shim.sock" debug=false pid=10111
Dec 04 15:45:29 minikube dockerd[5224]: time="2019-12-04T15:45:29.992865941Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/98c8115af1d0b062affff7cfefc598e5de93d13bcb53fbc13fc9a32f7a9ea3ac/shim.sock" debug=false pid=10574

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
98c8115af1d0b gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da 4 minutes ago Running registry-proxy 0 32da4028fdbbd
b38541543fa47 registry.hub.docker.com/library/registry@sha256:5eaafa2318aa0c4c52f95077c2a68bed0b13f6d2b464835723d4de1484052299 4 minutes ago Running registry 0 32452ef857882
3102283318a5d 70f311871ae12 4 minutes ago Running coredns 0 4b06bfdacdafa
7d66059d1d974 70f311871ae12 4 minutes ago Running coredns 0 ff2bb92cc6421
44813a0b37bdb 4689081edb103 4 minutes ago Running storage-provisioner 0 eaa43b3331416
a52bc0bcd17c4 22243b9b56e72 4 minutes ago Running kube-proxy 0 8986144d02fc1
e3a8c1d4f37ca k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 4 minutes ago Running kube-addon-manager 0 1166366e4d121
4d75a3a09fd35 303ce5db0e90d 5 minutes ago Running etcd 0 b65c312c1cdce
e9ec7ab1531ed f691d6df3b823 5 minutes ago Running kube-controller-manager 0 92a301eacae28
122dbd9e4a959 82be4be24fb6b 5 minutes ago Running kube-scheduler 0 e538155a44f55
c20e417a964f6 44ebc8208c5da 5 minutes ago Running kube-apiserver 0 57b17617968f2

==> coredns ["3102283318a5"] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> coredns ["7d66059d1d97"] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> dmesg <==
[Dec 4 15:42] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.574945] core: CPUID marked event: 'cpu cycles' unavailable
[ +0.000001] core: CPUID marked event: 'instructions' unavailable
[ +0.000001] core: CPUID marked event: 'bus cycles' unavailable
[ +0.000000] core: CPUID marked event: 'cache references' unavailable
[ +0.000001] core: CPUID marked event: 'cache misses' unavailable
[ +0.000001] core: CPUID marked event: 'branch instructions' unavailable
[ +0.000000] core: CPUID marked event: 'branch misses' unavailable
[ +0.006906] #2
[ +0.002076] #3
[ +0.003274] #4
[ +0.003033] #5
[ +0.004137] #6
[ +0.004476] #7
[ +0.008025] #9
[ +0.003396] #10
[ +0.003176] #11
[ +0.036575] pmd_set_huge: Cannot satisfy [mem 0xf0000000-0xf0200000] with a huge-page mapping due to MTRR override.
[Dec 4 15:43] sd 0:0:0:0: [sda] Assuming drive cache: write through
[ +0.105952] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +5.480115] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.019118] systemd-fstab-generator[2787]: Ignoring "noauto" for root device
[ +0.001864] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +1.239543] vboxguest: loading out-of-tree module taints kernel.
[ +0.004445] vboxguest: PCI device not found, probably running on physical hardware.
[ +0.110490] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +7.500558] systemd-fstab-generator[5111]: Ignoring "noauto" for root device
[ +28.939540] systemd-fstab-generator[6119]: Ignoring "noauto" for root device
[Dec 4 15:44] systemd-fstab-generator[7019]: Ignoring "noauto" for root device
[ +23.506002] kauditd_printk_skb: 68 callbacks suppressed
[ +8.154380] systemd-fstab-generator[8591]: Ignoring "noauto" for root device
[Dec 4 15:45] kauditd_printk_skb: 29 callbacks suppressed
[ +5.613971] NFSD: Unable to end grace period: -110
[ +3.632780] kauditd_printk_skb: 59 callbacks suppressed

==> kernel <==
15:49:42 up 6 min, 0 users, load average: 1.07, 1.09, 0.57
Linux minikube 4.19.81 #1 SMP Tue Nov 26 10:19:39 PST 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.7"

==> kube-addon-manager ["e3a8c1d4f37c"] <==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-04T15:49:16+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-04T15:49:17+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-04T15:49:20+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-04T15:49:22+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-04T15:49:25+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-04T15:49:26+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-04T15:49:30+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-04T15:49:32+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-04T15:49:35+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-04T15:49:36+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-04T15:49:40+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-04T15:49:42+00:00 ==

==> kube-apiserver ["c20e417a964f"] <==
W1204 15:44:42.917707 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1204 15:44:42.920236 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1204 15:44:42.928983 1 client.go:361] parsed scheme: "endpoint"
I1204 15:44:42.929060 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W1204 15:44:42.934885 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1204 15:44:42.951246 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1204 15:44:42.951296 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1204 15:44:42.989462 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1204 15:44:42.989548 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1204 15:44:42.992488 1 client.go:361] parsed scheme: "endpoint"
I1204 15:44:42.992526 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1204 15:44:43.001693 1 client.go:361] parsed scheme: "endpoint"
I1204 15:44:43.001792 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1204 15:44:45.646738 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1204 15:44:45.646845 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1204 15:44:45.647035 1 secure_serving.go:178] Serving securely on [::]:8443
I1204 15:44:45.647127 1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I1204 15:44:45.647170 1 tlsconfig.go:219] Starting DynamicServingCertificateController
I1204 15:44:45.647390 1 autoregister_controller.go:140] Starting autoregister controller
I1204 15:44:45.647432 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1204 15:44:45.647475 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1204 15:44:45.647481 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I1204 15:44:45.647656 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1204 15:44:45.647664 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I1204 15:44:45.647686 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1204 15:44:45.647691 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1204 15:44:45.647712 1 controller.go:81] Starting OpenAPI AggregationController
I1204 15:44:45.647744 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1204 15:44:45.647768 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1204 15:44:45.647859 1 available_controller.go:386] Starting AvailableConditionController
I1204 15:44:45.647897 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1204 15:44:45.647932 1 crd_finalizer.go:263] Starting CRDFinalizer
I1204 15:44:45.648141 1 naming_controller.go:288] Starting NamingConditionController
I1204 15:44:45.648306 1 establishing_controller.go:73] Starting EstablishingController
I1204 15:44:45.648331 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1204 15:44:45.648346 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
E1204 15:44:45.648386 1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.46.131, ResourceVersion: 0, AdditionalErrorMsg:
I1204 15:44:45.648604 1 controller.go:85] Starting OpenAPI controller
I1204 15:44:45.648647 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I1204 15:44:45.747937 1 cache.go:39] Caches are synced for autoregister controller
I1204 15:44:45.747977 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1204 15:44:45.747939 1 shared_informer.go:204] Caches are synced for crd-autoregister
I1204 15:44:45.747969 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
I1204 15:44:45.748077 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1204 15:44:46.648653 1 controller.go:107] OpenAPI AggregationController: Processing item
I1204 15:44:46.648698 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1204 15:44:46.648715 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1204 15:44:46.653983 1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I1204 15:44:46.659078 1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I1204 15:44:46.659123 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I1204 15:44:46.979502 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1204 15:44:47.011695 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1204 15:44:47.100790 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.46.131]
I1204 15:44:47.101615 1 controller.go:606] quota admission added evaluator for: endpoints
I1204 15:44:47.797302 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1204 15:44:48.425733 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1204 15:44:48.449264 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1204 15:44:48.657738 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1204 15:44:55.672084 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1204 15:44:55.872741 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps

==> kube-controller-manager ["e9ec7ab1531e"] <==
I1204 15:44:54.670045 1 controllermanager.go:533] Started "deployment"
I1204 15:44:54.670130 1 deployment_controller.go:152] Starting deployment controller
I1204 15:44:54.670137 1 shared_informer.go:197] Waiting for caches to sync for deployment
I1204 15:44:54.818868 1 controllermanager.go:533] Started "csrsigning"
I1204 15:44:54.818925 1 certificate_controller.go:118] Starting certificate controller "csrsigning"
I1204 15:44:54.818940 1 shared_informer.go:197] Waiting for caches to sync for certificate-csrsigning
I1204 15:44:54.969402 1 controllermanager.go:533] Started "csrapproving"
I1204 15:44:54.969480 1 certificate_controller.go:118] Starting certificate controller "csrapproving"
I1204 15:44:54.969489 1 shared_informer.go:197] Waiting for caches to sync for certificate-csrapproving
I1204 15:44:55.222108 1 controllermanager.go:533] Started "persistentvolume-binder"
I1204 15:44:55.222218 1 pv_controller_base.go:294] Starting persistent volume controller
I1204 15:44:55.222229 1 shared_informer.go:197] Waiting for caches to sync for persistent volume
I1204 15:44:55.470750 1 controllermanager.go:533] Started "persistentvolume-expander"
I1204 15:44:55.470780 1 expand_controller.go:319] Starting expand controller
I1204 15:44:55.470823 1 shared_informer.go:197] Waiting for caches to sync for expand
I1204 15:44:55.471301 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I1204 15:44:55.473798 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
W1204 15:44:55.489628 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1204 15:44:55.519920 1 shared_informer.go:204] Caches are synced for service account
I1204 15:44:55.519964 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
I1204 15:44:55.519977 1 shared_informer.go:204] Caches are synced for PV protection
I1204 15:44:55.520200 1 shared_informer.go:204] Caches are synced for ReplicaSet
I1204 15:44:55.532877 1 shared_informer.go:204] Caches are synced for PVC protection
I1204 15:44:55.548795 1 shared_informer.go:204] Caches are synced for job
I1204 15:44:55.567082 1 shared_informer.go:204] Caches are synced for namespace
I1204 15:44:55.569909 1 shared_informer.go:204] Caches are synced for GC
I1204 15:44:55.570006 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
I1204 15:44:55.570344 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I1204 15:44:55.571431 1 shared_informer.go:204] Caches are synced for HPA
I1204 15:44:55.571758 1 shared_informer.go:204] Caches are synced for attach detach
I1204 15:44:55.572805 1 shared_informer.go:204] Caches are synced for endpoint
I1204 15:44:55.589798 1 shared_informer.go:204] Caches are synced for TTL
I1204 15:44:55.670457 1 shared_informer.go:204] Caches are synced for deployment
I1204 15:44:55.674462 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"5aebf104-6488-4f84-95d0-43666e046788", APIVersion:"apps/v1", ResourceVersion:"176", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
I1204 15:44:55.677499 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"eb90d961-bfb9-49e9-9b7d-a56a931c98b9", APIVersion:"apps/v1", ResourceVersion:"303", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-ld72c
I1204 15:44:55.680340 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"eb90d961-bfb9-49e9-9b7d-a56a931c98b9", APIVersion:"apps/v1", ResourceVersion:"303", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-4qjt4
I1204 15:44:55.722426 1 shared_informer.go:204] Caches are synced for ReplicationController
I1204 15:44:55.770056 1 shared_informer.go:204] Caches are synced for taint
I1204 15:44:55.770157 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
I1204 15:44:55.770188 1 taint_manager.go:186] Starting NoExecuteTaintManager
W1204 15:44:55.770206 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1204 15:44:55.770247 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1204 15:44:55.770278 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"187c9d64-2846-4f8f-bc17-317a23cf1a65", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I1204 15:44:55.822401 1 shared_informer.go:204] Caches are synced for persistent volume
I1204 15:44:55.870503 1 shared_informer.go:204] Caches are synced for daemon sets
I1204 15:44:55.871428 1 shared_informer.go:204] Caches are synced for expand
I1204 15:44:55.877161 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"dbb00c5b-d41e-44c2-bcff-6ddb93d0c9b4", APIVersion:"apps/v1", ResourceVersion:"181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-4rzdp
I1204 15:44:55.954269 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
E1204 15:44:55.966291 1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I1204 15:44:56.031573 1 shared_informer.go:204] Caches are synced for garbage collector
I1204 15:44:56.031612 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1204 15:44:56.033864 1 shared_informer.go:204] Caches are synced for stateful set
I1204 15:44:56.043717 1 shared_informer.go:204] Caches are synced for resource quota
I1204 15:44:56.070735 1 shared_informer.go:204] Caches are synced for disruption
I1204 15:44:56.070765 1 disruption.go:338] Sending events to api server.
I1204 15:44:56.071813 1 shared_informer.go:204] Caches are synced for resource quota
I1204 15:44:56.074045 1 shared_informer.go:204] Caches are synced for garbage collector
I1204 15:44:56.482104 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"registry", UID:"bf29de1b-621e-454e-8313-9caaf7aefc1a", APIVersion:"v1", ResourceVersion:"338", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-rbmbw
I1204 15:45:09.215688 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"registry-proxy", UID:"720fe64a-083d-4eb5-845a-eb7a3e29902e", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-proxy-scl75
I1204 15:45:10.771748 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.

==> kube-proxy ["a52bc0bcd17c"] <==
W1204 15:45:10.121169 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I1204 15:45:10.128847 1 node.go:135] Successfully retrieved node IP: 192.168.46.131
I1204 15:45:10.128878 1 server_others.go:145] Using iptables Proxier.
W1204 15:45:10.129001 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1204 15:45:10.129896 1 server.go:571] Version: v1.17.0-rc.1
I1204 15:45:10.130311 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 393216
I1204 15:45:10.130345 1 conntrack.go:52] Setting nf_conntrack_max to 393216
I1204 15:45:10.130630 1 conntrack.go:83] Setting conntrack hashsize to 98304
I1204 15:45:10.144223 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1204 15:45:10.144302 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1204 15:45:10.145070 1 config.go:313] Starting service config controller
I1204 15:45:10.145113 1 shared_informer.go:197] Waiting for caches to sync for service config
I1204 15:45:10.145138 1 config.go:131] Starting endpoints config controller
I1204 15:45:10.145149 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1204 15:45:10.245341 1 shared_informer.go:204] Caches are synced for service config
I1204 15:45:10.245341 1 shared_informer.go:204] Caches are synced for endpoints config

==> kube-scheduler ["122dbd9e4a95"] <==
I1204 15:44:42.058596 1 serving.go:312] Generated self-signed cert in-memory
W1204 15:44:42.266652 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1204 15:44:42.266739 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1204 15:44:45.671946 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1204 15:44:45.671969 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1204 15:44:45.672257 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W1204 15:44:45.672269 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W1204 15:44:45.687055 1 authorization.go:47] Authorization is disabled
W1204 15:44:45.687100 1 authentication.go:92] Authentication is disabled
I1204 15:44:45.687119 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1204 15:44:45.688512 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1204 15:44:45.688551 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1204 15:44:45.689450 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I1204 15:44:45.689687 1 tlsconfig.go:219] Starting DynamicServingCertificateController
E1204 15:44:45.690966 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1204 15:44:45.691638 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1204 15:44:45.692552 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1204 15:44:45.692606 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1204 15:44:45.692823 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1204 15:44:45.692899 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1204 15:44:45.693146 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1204 15:44:45.693184 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1204 15:44:45.693219 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1204 15:44:45.693392 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1204 15:44:45.693425 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1204 15:44:45.693392 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1204 15:44:46.692425 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1204 15:44:46.693054 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1204 15:44:46.694629 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1204 15:44:46.695655 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1204 15:44:46.697260 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1204 15:44:46.698102 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1204 15:44:46.699448 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1204 15:44:46.700358 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1204 15:44:46.701348 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1204 15:44:46.702799 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1204 15:44:46.703958 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1204 15:44:46.705283 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
I1204 15:44:47.789056 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1204 15:44:47.789842 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I1204 15:44:47.799054 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Wed 2019-12-04 15:43:15 UTC, end at Wed 2019-12-04 16:43:14 UTC. --
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.028832 8600 server.go:1113] Started kubelet
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.029371 8600 server.go:143] Starting to listen on 0.0.0.0:10250
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.030231 8600 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.030346 8600 volume_manager.go:265] Starting Kubelet Volume Manager
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.035158 8600 server.go:354] Adding debug handlers to kubelet server.
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.037790 8600 desired_state_of_world_populator.go:138] Desired state populator starts to run
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.054611 8600 status_manager.go:157] Starting to sync pod status with apiserver
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.054656 8600 kubelet.go:1820] Starting kubelet main sync loop.
Dec 04 15:45:09 minikube kubelet[8600]: E1204 15:45:09.054699 8600 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.131139 8600 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Dec 04 15:45:09 minikube kubelet[8600]: E1204 15:45:09.154969 8600 kubelet.go:1844] skipping pod synchronization - container runtime status check may not have completed yet
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.158568 8600 cpu_manager.go:173] [cpumanager] starting with none policy
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.158598 8600 cpu_manager.go:174] [cpumanager] reconciling every 10s
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.158606 8600 policy_none.go:43] [cpumanager] none policy: Start
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.159857 8600 plugin_manager.go:114] Starting Kubelet Plugin Manager
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.162237 8600 kubelet_node_status.go:70] Attempting to register node minikube
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.168450 8600 kubelet_node_status.go:112] Node minikube was previously registered
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.168726 8600 kubelet_node_status.go:73] Successfully registered node minikube
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.540831 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/f9a5c7d1c73bf2fdb45f281377ee3cf2-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "f9a5c7d1c73bf2fdb45f281377ee3cf2")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.540975 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/9894acea9018c47f3a9c819337b62838-ca-certs") pod "kube-controller-manager-minikube" (UID: "9894acea9018c47f3a9c819337b62838")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541107 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/9894acea9018c47f3a9c819337b62838-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "9894acea9018c47f3a9c819337b62838")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541175 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/09b49a2e-fd4f-43e5-8bf7-a0c7035e7a53-xtables-lock") pod "kube-proxy-4rzdp" (UID: "09b49a2e-fd4f-43e5-8bf7-a0c7035e7a53")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541276 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-v2pwz" (UniqueName: "kubernetes.io/secret/09b49a2e-fd4f-43e5-8bf7-a0c7035e7a53-kube-proxy-token-v2pwz") pod "kube-proxy-4rzdp" (UID: "09b49a2e-fd4f-43e5-8bf7-a0c7035e7a53")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541356 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c3e29047da86ce6690916750ab69c40b-kubeconfig") pod "kube-addon-manager-minikube" (UID: "c3e29047da86ce6690916750ab69c40b")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541390 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/f9a5c7d1c73bf2fdb45f281377ee3cf2-ca-certs") pod "kube-apiserver-minikube" (UID: "f9a5c7d1c73bf2fdb45f281377ee3cf2")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541417 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/f9a5c7d1c73bf2fdb45f281377ee3cf2-k8s-certs") pod "kube-apiserver-minikube" (UID: "f9a5c7d1c73bf2fdb45f281377ee3cf2")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541460 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/48daf585b3e35fb9498804418872e32c-etcd-data") pod "etcd-minikube" (UID: "48daf585b3e35fb9498804418872e32c")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541490 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/4a36f01b-0b32-4025-98ec-40e1a464dbd6-tmp") pod "storage-provisioner" (UID: "4a36f01b-0b32-4025-98ec-40e1a464dbd6")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541551 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/9894acea9018c47f3a9c819337b62838-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "9894acea9018c47f3a9c819337b62838")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541585 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/9894acea9018c47f3a9c819337b62838-k8s-certs") pod "kube-controller-manager-minikube" (UID: "9894acea9018c47f3a9c819337b62838")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541648 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/0624b0cab6f4c5c9272df35c28be2760-kubeconfig") pod "kube-scheduler-minikube" (UID: "0624b0cab6f4c5c9272df35c28be2760")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541689 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/c3e29047da86ce6690916750ab69c40b-addons") pod "kube-addon-manager-minikube" (UID: "c3e29047da86ce6690916750ab69c40b")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541740 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-mn8fs" (UniqueName: "kubernetes.io/secret/4a36f01b-0b32-4025-98ec-40e1a464dbd6-storage-provisioner-token-mn8fs") pod "storage-provisioner" (UID: "4a36f01b-0b32-4025-98ec-40e1a464dbd6")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541792 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-8gmlf" (UniqueName: "kubernetes.io/secret/f1610b95-7aa2-423a-9d38-66a9bec68c1e-default-token-8gmlf") pod "registry-proxy-scl75" (UID: "f1610b95-7aa2-423a-9d38-66a9bec68c1e")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541834 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-8gmlf" (UniqueName: "kubernetes.io/secret/a3b634dc-ad30-4869-8f5a-450374eb4576-default-token-8gmlf") pod "registry-rbmbw" (UID: "a3b634dc-ad30-4869-8f5a-450374eb4576")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.541920 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/9894acea9018c47f3a9c819337b62838-kubeconfig") pod "kube-controller-manager-minikube" (UID: "9894acea9018c47f3a9c819337b62838")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.542000 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/09b49a2e-fd4f-43e5-8bf7-a0c7035e7a53-kube-proxy") pod "kube-proxy-4rzdp" (UID: "09b49a2e-fd4f-43e5-8bf7-a0c7035e7a53")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.542180 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/09b49a2e-fd4f-43e5-8bf7-a0c7035e7a53-lib-modules") pod "kube-proxy-4rzdp" (UID: "09b49a2e-fd4f-43e5-8bf7-a0c7035e7a53")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.542270 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/48daf585b3e35fb9498804418872e32c-etcd-certs") pod "etcd-minikube" (UID: "48daf585b3e35fb9498804418872e32c")
Dec 04 15:45:09 minikube kubelet[8600]: I1204 15:45:09.542320 8600 reconciler.go:156] Reconciler: start to sync state
Dec 04 15:45:10 minikube kubelet[8600]: W1204 15:45:10.487696 8600 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-rbmbw through plugin: invalid network status for
Dec 04 15:45:10 minikube kubelet[8600]: W1204 15:45:10.509518 8600 pod_container_deletor.go:75] Container "32da4028fdbbdecda609b277ce23c2cc2bc17b90161ca13ead1e762059ba39a6" not found in pod's containers
Dec 04 15:45:10 minikube kubelet[8600]: W1204 15:45:10.509698 8600 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-proxy-scl75 through plugin: invalid network status for
Dec 04 15:45:10 minikube kubelet[8600]: W1204 15:45:10.512558 8600 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-rbmbw through plugin: invalid network status for
Dec 04 15:45:10 minikube kubelet[8600]: W1204 15:45:10.516554 8600 pod_container_deletor.go:75] Container "32452ef8578827e912cf0e39bc84a08b67f59140ca4bacee14c1f699ae8d5cbb" not found in pod's containers
Dec 04 15:45:10 minikube kubelet[8600]: I1204 15:45:10.748776 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/64d4c684-2c9d-49a4-8b6f-6cd266f58e8d-config-volume") pod "coredns-6955765f44-4qjt4" (UID: "64d4c684-2c9d-49a4-8b6f-6cd266f58e8d")
Dec 04 15:45:10 minikube kubelet[8600]: I1204 15:45:10.748912 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-k276n" (UniqueName: "kubernetes.io/secret/6130b4c6-87e0-42b0-a364-5a56daeb42ba-coredns-token-k276n") pod "coredns-6955765f44-ld72c" (UID: "6130b4c6-87e0-42b0-a364-5a56daeb42ba")
Dec 04 15:45:10 minikube kubelet[8600]: I1204 15:45:10.748965 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-k276n" (UniqueName: "kubernetes.io/secret/64d4c684-2c9d-49a4-8b6f-6cd266f58e8d-coredns-token-k276n") pod "coredns-6955765f44-4qjt4" (UID: "64d4c684-2c9d-49a4-8b6f-6cd266f58e8d")
Dec 04 15:45:10 minikube kubelet[8600]: I1204 15:45:10.748998 8600 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6130b4c6-87e0-42b0-a364-5a56daeb42ba-config-volume") pod "coredns-6955765f44-ld72c" (UID: "6130b4c6-87e0-42b0-a364-5a56daeb42ba")
Dec 04 15:45:11 minikube kubelet[8600]: W1204 15:45:11.541780 8600 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-proxy-scl75 through plugin: invalid network status for
Dec 04 15:45:11 minikube kubelet[8600]: W1204 15:45:11.544841 8600 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-rbmbw through plugin: invalid network status for
Dec 04 15:45:11 minikube kubelet[8600]: W1204 15:45:11.910194 8600 pod_container_deletor.go:75] Container "ff2bb92cc64212a43e56823a2d4e9882f570b8f3d2c673e5e19d4b29c8d0f944" not found in pod's containers
Dec 04 15:45:11 minikube kubelet[8600]: W1204 15:45:11.910385 8600 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-ld72c through plugin: invalid network status for
Dec 04 15:45:11 minikube kubelet[8600]: W1204 15:45:11.969324 8600 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-4qjt4 through plugin: invalid network status for
Dec 04 15:45:11 minikube kubelet[8600]: W1204 15:45:11.970606 8600 pod_container_deletor.go:75] Container "4b06bfdacdafafa4704237511ebed425897e66550b22ba00712084169fe8a4a4" not found in pod's containers
Dec 04 15:45:12 minikube kubelet[8600]: W1204 15:45:12.978392 8600 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-ld72c through plugin: invalid network status for
Dec 04 15:45:12 minikube kubelet[8600]: W1204 15:45:12.983422 8600 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-4qjt4 through plugin: invalid network status for
Dec 04 15:45:15 minikube kubelet[8600]: W1204 15:45:15.008654 8600 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-rbmbw through plugin: invalid network status for
Dec 04 15:45:16 minikube kubelet[8600]: W1204 15:45:16.180796 8600 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-rbmbw through plugin: invalid network status for
Dec 04 15:45:30 minikube kubelet[8600]: W1204 15:45:30.280887 8600 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-proxy-scl75 through plugin: invalid network status for

==> storage-provisioner ["44813a0b37bd"] <==

The operating system version:
minikube version

minikube version: v1.6.0-beta.1
commit: a7a5d1e981c85e97f35be38ff5cb8f510a570eea-dirty

vmrun --version

vmrun version 1.17.0 build-15018442
@tstromberg tstromberg changed the title Shared folder does not work for vmware driver Shared folder does not work for vmware driver: VMware tools not installed Dec 19, 2019
@tstromberg tstromberg added kind/bug Categorizes issue or PR as related to a bug. co/vmware-driver Issues with vmware driver labels Dec 19, 2019
@tstromberg tstromberg changed the title Shared folder does not work for vmware driver: VMware tools not installed vmware: Shared folder does not work: VMware tools not installed Dec 19, 2019
@tstromberg
Contributor

It looks like this has never worked for the VMware driver, probably because VMware tools isn't installed, as your warning message mentions. Installing it would require, as far as I know, a buildroot package similar to the one we use for the VirtualBox tools:

https://github.com/kubernetes/minikube/tree/master/deploy/iso/minikube-iso/package/vbox-guest

Help wanted!

In the meantime, one can use minikube mount, though the userspace mount is definitely slower and quirkier than VMware's.
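
For example, with an illustrative host path (the general syntax is minikube mount host-path:vm-path):

minikube mount $HOME/shared:/minikube-host

The share then appears inside the VM at /minikube-host, served over minikube's userspace 9p server.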

@tstromberg tstromberg added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. priority/backlog Higher priority than priority/awaiting-more-evidence. and removed kind/bug Categorizes issue or PR as related to a bug. labels Dec 19, 2019
@mvgijssel
Author

Okay, good to know I'm not crazy! I'm currently using the minikube mount command, which suffices.

@tercenya

tercenya commented Jan 8, 2020

I can't reproduce either part of this issue; I see the shares mounted:

» minikube --vm-driver=vmware start
😄  minikube v1.6.2 on Darwin 10.15.2
✨  Selecting 'vmware' driver from user configuration (alternates: [hyperkit vmwarefusion])
🔥  Creating vmware VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
🚜  Pulling images ...
🚀  Launching Kubernetes ...
⌛  Waiting for cluster to come online ...
🏄  Done! kubectl is now configured to use "minikube"

» minikube version
minikube version: v1.6.2
commit: 54f28ac5d3a815d1196cd5d57d707439ee4bb392

» vmrun | head -2
vmrun version 1.17.0 build-15018442

» minikube ssh

$ mount | grep hgfs
vmhgfs-fuse on /mnt/hgfs type fuse.vmhgfs-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
$ ls /mnt/hgfs
Users
$ ls /mnt/hgfs/Users/
Shared  tercenya

I also see vmtoolsd running:

$ ps ax | grep vm
 3203 ?        Sl     0:01 /usr/bin/vmtoolsd
 3386 ?        Ssl    0:00 /usr/bin/vmhgfs-fuse .host:/ /mnt/hgfs -o subtype=vmhgfs-fuse,allow_other
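
Equivalently, a quick sanity check from the host, combining the steps above into one command I'd expect to work:

» minikube ssh -- "mount | grep hgfs && ls /mnt/hgfs"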

I didn't look at the sources to see how this was accomplished.

That said, the documentation is misleading: it suggests the folder in the VM will be /Users (instead of /mnt/hgfs/Users), but this is better anyway.
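
For anyone consuming the share from Kubernetes, here is a minimal, hypothetical pod that mounts the actual in-VM path via hostPath (the pod and volume names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hgfs-demo
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-users
      mountPath: /host-users
  volumes:
  - name: host-users
    hostPath:
      path: /mnt/hgfs/Users
EOF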

@mvgijssel If you're running Catalina, something to check would be that you've given Fusion (and maybe Terminal/iTerm) Full Disk Access in Security & Privacy.

mvgijssel added a commit to mvgijssel/website that referenced this issue Jan 14, 2020
Related kubernetes/minikube#6013

When using the `vmware` driver for minikube, the shared folder will be in `/mnt/hgfs` instead of directly on root `/`.
@mvgijssel
Author

Thanks @tercenya, you're totally right! I confirmed that the VMware shared folders functionality is working, and created a PR for the documentation: kubernetes/website#18674.

Thanks again for the help :)

k8s-ci-robot pushed a commit to kubernetes/website that referenced this issue Jan 15, 2020
wawa0210 pushed a commit to wawa0210/website that referenced this issue Mar 2, 2020
@sanarena

sanarena commented Apr 22, 2020

Running VMware 11.5, I ran into this problem.

'minikube ssh -- vmhgfs-fuse --enabled' returns 'vmhgfs-fuse: 0 - HGFS FUSE client enabled'; however, the /Users folder is empty and there is no hgfs folder in /mnt/.

If I re-create the vmware cluster (minikube delete, then minikube start --driver=vmwarefusion --kubernetes-version v1.18.0), it works perfectly: the /Users folder is mounted correctly. However, if I run 'minikube stop' and 'minikube start' again, the mounted folder is gone.

I found a temporary workaround: if I go to the VMware settings, disable shared folders, and re-enable them, the folder is mounted under /mnt/hgfs/Users again.
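
An untested alternative might be to remount the share from inside the VM, reusing the vmhgfs-fuse invocation shown in the ps output earlier in this thread (just a sketch, not verified):

minikube ssh -- "sudo mkdir -p /mnt/hgfs && sudo vmhgfs-fuse .host:/ /mnt/hgfs -o subtype=vmhgfs-fuse,allow_other"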

Running minikube mount throws this error:

minikube mount ${HOME}:/vm

💣 Error getting the host IP address to use from within the VM: Error, attempted to get host ip address for unsupported driver

😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose

This is on a clean install of minikube over VMware.
