
Failed to start minikube - failed to acquire bootstrap client lock: bad file descriptor #11022

Closed
bamason14 opened this issue Apr 8, 2021 · 15 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-problem-regex priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@bamason14

Steps to reproduce the issue:

  1. minikube start --driver=docker --cpus=2 --memory=8g --addons=ingress

$ minikube start --driver=docker --cpus=2 --memory=8g --addons=ingress --alsologtostderr
I0408 07:05:54.554411 4173 out.go:239] Setting OutFile to fd 1 ...
I0408 07:05:54.554717 4173 out.go:286] TERM=vt100,COLORTERM=, which probably does not support color
I0408 07:05:54.554732 4173 out.go:252] Setting ErrFile to fd 2...
I0408 07:05:54.554739 4173 out.go:286] TERM=vt100,COLORTERM=, which probably does not support color
I0408 07:05:54.554900 4173 root.go:308] Updating PATH: /rhome/dadmmason/.minikube/bin
W0408 07:05:54.555235 4173 root.go:283] Error reading config file at /rhome/dadmmason/.minikube/config/config.json: open /rhome/dadmmason/.minikube/config/config.json: no such file or directory
I0408 07:05:54.568864 4173 out.go:246] Setting JSON to false
I0408 07:05:54.570570 4173 start.go:108] hostinfo: {"hostname":"ohdlawx0001.dev.mig.corp","uptime":1056,"bootTime":1617878898,"procs":247,"os":"linux","platform":"redhat","platformFamily":"rhel","platformVersion":"8.3","kernelVersion":"4.18.0-240.15.1.el8_3.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"cee83a86-c71e-4c49-8bbe-e9a356a31865"}
I0408 07:05:54.570639 4173 start.go:118] virtualization:
I0408 07:05:54.574648 4173 out.go:129] * minikube v1.18.1 on Redhat 8.3

  • minikube v1.18.1 on Redhat 8.3
    I0408 07:05:54.575046 4173 notify.go:126] Checking for updates...
    I0408 07:05:54.575230 4173 driver.go:323] Setting default libvirt URI to qemu:///system
    I0408 07:05:54.631667 4173 docker.go:118] docker version: linux-20.10.5
    I0408 07:05:54.631730 4173 cli_runner.go:115] Run: docker system info --format "{{json .}}"
    I0408 07:05:54.720395 4173 info.go:253] docker info: {ID:FBFI:BBUD:H4M2:Y6U4:SY7R:GT7R:RLC5:7733:BL4B:VBBP:LENX:NONN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2021-04-08 07:05:54.667907792 -0400 EDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.18.0-240.15.1.el8_3.x86_64 OperatingSystem:Red Hat Enterprise Linux 8.3 (Ootpa) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:12347408384 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ohdlawx0001.dev.mig.corp Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} 
SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:}}
    I0408 07:05:54.720485 4173 docker.go:215] overlay module found
    I0408 07:05:54.725267 4173 out.go:129] * Using the docker driver based on user configuration
  • Using the docker driver based on user configuration
    I0408 07:05:54.725292 4173 start.go:276] selected driver: docker
    I0408 07:05:54.725302 4173 start.go:718] validating driver "docker" against
    I0408 07:05:54.725329 4173 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    W0408 07:05:54.725449 4173 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
    W0408 07:05:54.725501 4173 out.go:191] ! Your cgroup does not allow setting memory.
    ! Your cgroup does not allow setting memory.
    I0408 07:05:54.728750 4173 out.go:129] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
    • More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
      I0408 07:05:54.729049 4173 cli_runner.go:115] Run: docker system info --format "{{json .}}"
      I0408 07:05:54.806350 4173 info.go:253] docker info: {ID:FBFI:BBUD:H4M2:Y6U4:SY7R:GT7R:RLC5:7733:BL4B:VBBP:LENX:NONN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2021-04-08 07:05:54.763151806 -0400 EDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.18.0-240.15.1.el8_3.x86_64 OperatingSystem:Red Hat Enterprise Linux 8.3 (Ootpa) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:12347408384 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ohdlawx0001.dev.mig.corp Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} 
SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:}}
      I0408 07:05:54.806476 4173 start_flags.go:251] no existing cluster config was found, will generate one from the flags
      I0408 07:05:54.806598 4173 start_flags.go:696] Wait components to verify : map[apiserver:true system_pods:true]
      I0408 07:05:54.806625 4173 cni.go:74] Creating CNI manager for ""
      I0408 07:05:54.806635 4173 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
      I0408 07:05:54.806644 4173 start_flags.go:395] config:
      {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false}
      I0408 07:05:54.810463 4173 out.go:129] * Starting control plane node minikube in cluster minikube
  • Starting control plane node minikube in cluster minikube
    I0408 07:05:54.846194 4173 cache.go:120] Beginning downloading kic base image for docker with docker
    I0408 07:05:54.850304 4173 out.go:129] * Pulling base image ...
  • Pulling base image ...
    I0408 07:05:54.850352 4173 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
    I0408 07:05:54.850632 4173 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e to local daemon
    I0408 07:05:54.850672 4173 image.go:140] Writing gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e to local daemon
    I0408 07:05:54.892802 4173 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
    I0408 07:05:54.892821 4173 cache.go:54] Caching tarball of preloaded images
    I0408 07:05:54.892852 4173 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
    I0408 07:05:54.931980 4173 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
    I0408 07:05:54.934984 4173 out.go:129] * Downloading Kubernetes v1.20.2 preload ...
  • Downloading Kubernetes v1.20.2 preload ...
    I0408 07:05:54.935351 4173 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 -> /rhome/dadmmason/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4

    preloaded-images-k8s-v9-v1....: 491.22 MiB / 491.22 MiB 100.00% 35.61 Mi
    I0408 07:06:10.033058 4173 preload.go:160] saving checksum for preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 ...
    I0408 07:06:10.190864 4173 preload.go:177] verifying checksumm of /rhome/dadmmason/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 ...
    I0408 07:06:11.981545 4173 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker
    I0408 07:06:11.981946 4173 profile.go:148] Saving config to /rhome/dadmmason/.minikube/profiles/minikube/config.json ...
    I0408 07:06:11.981985 4173 lock.go:36] WriteFile acquiring /rhome/dadmmason/.minikube/profiles/minikube/config.json: {Name:mkdd6468410fe3fb8a81afb70f8741815dfa701f Clock:{} Delay:500ms Timeout:1m0s Cancel:}
    I0408 07:06:15.101380 4173 cache.go:148] successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
    I0408 07:06:15.101422 4173 cache.go:185] Successfully downloaded all kic artifacts
    I0408 07:06:15.101471 4173 start.go:313] acquiring machines lock for minikube: {Name:mke106008088022af601d1ad8a563b2b2afd8f7d Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0408 07:06:15.101687 4173 start.go:317] acquired machines lock for "minikube" in 196.097µs
    I0408 07:06:15.102185 4173 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
    I0408 07:06:15.102283 4173 start.go:126] createHost starting for "" (driver="docker")
    I0408 07:06:15.109615 4173 out.go:150] * Creating docker container (CPUs=2, Memory=8192MB) ...

  • Creating docker container (CPUs=2, Memory=8192MB) ...
    I0408 07:06:15.109852 4173 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
    I0408 07:06:15.109892 4173 client.go:168] LocalClient.Create starting
    I0408 07:06:15.110307 4173 client.go:171] LocalClient.Create took 405.321µs
    I0408 07:06:17.111158 4173 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
    I0408 07:06:17.111324 4173 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
    W0408 07:06:17.160619 4173 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
    I0408 07:06:17.160767 4173 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
    stdout:

stderr:
Error: No such container: minikube
I0408 07:06:17.437396 4173 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 07:06:17.481955 4173 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 07:06:17.482064 4173 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 07:06:18.022550 4173 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 07:06:18.065523 4173 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 07:06:18.065624 4173 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 07:06:18.721180 4173 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 07:06:18.764668 4173 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 07:06:18.764767 4173 retry.go:31] will retry after 791.196345ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 07:06:19.556677 4173 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 07:06:19.600683 4173 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
W0408 07:06:19.600788 4173 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube

W0408 07:06:19.600808 4173 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 07:06:19.600820 4173 start.go:129] duration metric: createHost completed in 4.498526015s
I0408 07:06:19.600828 4173 start.go:80] releasing machines lock for "minikube", held for 4.499126077s
W0408 07:06:19.600850 4173 start.go:425] error starting host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
I0408 07:06:19.600913 4173 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
W0408 07:06:19.644039 4173 cli_runner.go:162] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0408 07:06:19.644097 4173 delete.go:46] couldn't inspect container "minikube" before deleting: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 07:06:19.644300 4173 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
W0408 07:06:19.666438 4173 cli_runner.go:162] sudo -n podman container inspect minikube --format={{.State.Status}} returned with exit code 1
I0408 07:06:19.666501 4173 delete.go:46] couldn't inspect container "minikube" before deleting: unknown state "minikube": sudo -n podman container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
sudo: podman: command not found
W0408 07:06:19.666541 4173 start.go:430] delete host: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
W0408 07:06:19.666678 4173 out.go:191] ! StartHost failed, but will try again: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
! StartHost failed, but will try again: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
I0408 07:06:19.666713 4173 start.go:440] Will try again in 5 seconds ...
I0408 07:06:24.667885 4173 start.go:313] acquiring machines lock for minikube: {Name:mke106008088022af601d1ad8a563b2b2afd8f7d Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0408 07:06:24.668501 4173 start.go:317] acquired machines lock for "minikube" in 545.261µs
I0408 07:06:24.668553 4173 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0408 07:06:24.668681 4173 start.go:126] createHost starting for "" (driver="docker")
I0408 07:06:24.673516 4173 out.go:150] * Creating docker container (CPUs=2, Memory=8192MB) ...

  • Creating docker container (CPUs=2, Memory=8192MB) ...
    I0408 07:06:24.673668 4173 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
    I0408 07:06:24.673701 4173 client.go:168] LocalClient.Create starting
    I0408 07:06:24.673831 4173 client.go:171] LocalClient.Create took 118.461µs
    I0408 07:06:26.674348 4173 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
    I0408 07:06:26.674416 4173 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
    W0408 07:06:26.720037 4173 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
    I0408 07:06:26.720199 4173 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
    stdout:

stderr:
Error: No such container: minikube
I0408 07:06:26.952146 4173 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 07:06:27.007420 4173 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 07:06:27.007517 4173 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 07:06:27.453344 4173 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 07:06:27.498271 4173 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 07:06:27.498371 4173 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 07:06:27.816973 4173 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 07:06:27.861005 4173 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 07:06:27.861106 4173 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 07:06:28.415952 4173 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 07:06:28.459541 4173 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 07:06:28.459655 4173 retry.go:31] will retry after 755.539547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 07:06:29.216213 4173 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 07:06:29.276618 4173 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
W0408 07:06:29.276727 4173 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube

W0408 07:06:29.276743 4173 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 07:06:29.276755 4173 start.go:129] duration metric: createHost completed in 4.60806132s
I0408 07:06:29.276762 4173 start.go:80] releasing machines lock for "minikube", held for 4.608237876s
W0408 07:06:29.276912 4173 out.go:191] * Failed to start docker container. Running "minikube delete" may fix it: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor

  • Failed to start docker container. Running "minikube delete" may fix it: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
    I0408 07:06:29.283409 4173 out.go:129]

W0408 07:06:29.283628 4173 out.go:191] X Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
X Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
W0408 07:06:29.283689 4173 out.go:191] *
*
W0408 07:06:29.283730 4173 out.go:191] * If the above advice does not help, please let us know:

Full output of failed command:

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command:
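
The failing step is certificate bootstrapping, which acquires a file lock under the minikube home directory. One unconfirmed possibility, given the `/rhome/dadmmason` home directory, is that `$HOME` sits on a network filesystem where file locks can fail with "bad file descriptor". A quick check is sketched below; the `MINIKUBE_HOME` environment variable is real, but the local path shown is only an example, and this diagnosis is a hypothesis, not a confirmed fix:

```shell
# Hypothesis only: file locks can fail with "bad file descriptor" on some
# network filesystems. Check which filesystem the home directory (and
# therefore ~/.minikube) lives on.
fstype=$(stat -f -c %T "$HOME")
echo "home filesystem type: $fstype"

# If it reports nfs/nfs4, try keeping minikube state on local disk before
# retrying (the path below is illustrative, any local directory works):
#   export MINIKUBE_HOME=/var/tmp/$USER-minikube
#   minikube delete
#   minikube start --driver=docker --cpus=2 --memory=8g --addons=ingress
```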

@ilya-zuyev ilya-zuyev added the kind/support Categorizes issue or PR as a support question. label Apr 8, 2021
@ilya-zuyev
Contributor

Hi @bamason14! Thanks for reporting this. Does minikube start without the ingress add-on enabled? Could you collect more logs using the -v=5 --alsologtostderr options?

@bamason14
Author

It still fails.

$ minikube start --driver=docker --cpus=2 --memory=8g -v=5 --alsologtostderr
I0408 16:09:33.406874 40810 out.go:239] Setting OutFile to fd 1 ...
I0408 16:09:33.407161 40810 out.go:286] TERM=vt100,COLORTERM=, which probably does not support color
I0408 16:09:33.407172 40810 out.go:252] Setting ErrFile to fd 2...
I0408 16:09:33.407176 40810 out.go:286] TERM=vt100,COLORTERM=, which probably does not support color
I0408 16:09:33.407265 40810 root.go:308] Updating PATH: /rhome/dadmmason/.minikube/bin
W0408 16:09:33.407527 40810 root.go:283] Error reading config file at /rhome/dadmmason/.minikube/config/config.json: open /rhome/dadmmason/.minikube/config/config.json: no such file or directory
I0408 16:09:33.417788 40810 out.go:246] Setting JSON to false
I0408 16:09:33.419293 40810 start.go:108] hostinfo: {"hostname":"ohdlawx0001.dev.mig.corp","uptime":33675,"bootTime":1617878898,"procs":244,"os":"linux","platform":"redhat","platformFamily":"rhel","platformVersion":"8.3","kernelVersion":"4.18.0-240.15.1.el8_3.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"cee83a86-c71e-4c49-8bbe-e9a356a31865"}
I0408 16:09:33.419352 40810 start.go:118] virtualization:
I0408 16:09:33.422530 40810 out.go:129] * minikube v1.18.1 on Redhat 8.3

  • minikube v1.18.1 on Redhat 8.3
    I0408 16:09:33.422886 40810 notify.go:126] Checking for updates...
    I0408 16:09:33.423245 40810 driver.go:323] Setting default libvirt URI to qemu:///system
    I0408 16:09:33.478611 40810 docker.go:118] docker version: linux-20.10.5
    I0408 16:09:33.478794 40810 cli_runner.go:115] Run: docker system info --format "{{json .}}"
    I0408 16:09:33.562891 40810 info.go:253] docker info: {ID:FBFI:BBUD:H4M2:Y6U4:SY7R:GT7R:RLC5:7733:BL4B:VBBP:LENX:NONN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:33 SystemTime:2021-04-08 16:09:33.515323851 -0400 EDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.18.0-240.15.1.el8_3.x86_64 OperatingSystem:Red Hat Enterprise Linux 8.3 (Ootpa) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:12347408384 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ohdlawx0001.dev.mig.corp Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} 
SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:}}
    I0408 16:09:33.562981 40810 docker.go:215] overlay module found
    I0408 16:09:33.566396 40810 out.go:129] * Using the docker driver based on user configuration
  • Using the docker driver based on user configuration
    I0408 16:09:33.566416 40810 start.go:276] selected driver: docker
    I0408 16:09:33.566423 40810 start.go:718] validating driver "docker" against
    I0408 16:09:33.566441 40810 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    W0408 16:09:33.566476 40810 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
    W0408 16:09:33.566520 40810 out.go:191] ! Your cgroup does not allow setting memory.
    ! Your cgroup does not allow setting memory.
    I0408 16:09:33.569240 40810 out.go:129] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
    • More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
      I0408 16:09:33.569529 40810 cli_runner.go:115] Run: docker system info --format "{{json .}}"
      I0408 16:09:33.649891 40810 info.go:253] docker info: {ID:FBFI:BBUD:H4M2:Y6U4:SY7R:GT7R:RLC5:7733:BL4B:VBBP:LENX:NONN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:33 SystemTime:2021-04-08 16:09:33.604220242 -0400 EDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.18.0-240.15.1.el8_3.x86_64 OperatingSystem:Red Hat Enterprise Linux 8.3 (Ootpa) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:12347408384 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ohdlawx0001.dev.mig.corp Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} 
SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:}}
      I0408 16:09:33.649999 40810 start_flags.go:251] no existing cluster config was found, will generate one from the flags
      I0408 16:09:33.650124 40810 start_flags.go:696] Wait components to verify : map[apiserver:true system_pods:true]
      I0408 16:09:33.650143 40810 cni.go:74] Creating CNI manager for ""
      I0408 16:09:33.650153 40810 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
      I0408 16:09:33.650179 40810 start_flags.go:395] config:
      {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false}
      I0408 16:09:33.653911 40810 out.go:129] * Starting control plane node minikube in cluster minikube
  • Starting control plane node minikube in cluster minikube
    I0408 16:09:33.690608 40810 cache.go:120] Beginning downloading kic base image for docker with docker
    I0408 16:09:33.693348 40810 out.go:129] * Pulling base image ...
  • Pulling base image ...
    I0408 16:09:33.693393 40810 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
    I0408 16:09:33.693709 40810 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e to local daemon
    I0408 16:09:33.693721 40810 image.go:140] Writing gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e to local daemon
    I0408 16:09:33.693749 40810 image.go:145] Getting image gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
    I0408 16:09:33.753088 40810 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
    I0408 16:09:33.753109 40810 cache.go:54] Caching tarball of preloaded images
    I0408 16:09:33.753135 40810 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
    I0408 16:09:33.795948 40810 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
    I0408 16:09:33.798790 40810 out.go:129] * Downloading Kubernetes v1.20.2 preload ...
  • Downloading Kubernetes v1.20.2 preload ...
    I0408 16:09:33.799281 40810 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 -> /rhome/dadmmason/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4

    preloaded-images-k8s-v9-v1....: 5.12 MiB / 491.22 MiB [>__] 1.04% ? p/s ?I0408 16:09:34.295617 40810 image.go:158] Writing image gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
    preloaded-images-k8s-v9-v1....: 491.22 MiB / 491.22 MiB 100.00% 27.65 Mi
    I0408 16:09:52.795494 40810 preload.go:160] saving checksum for preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 ...
    I0408 16:09:52.937302 40810 preload.go:177] verifying checksumm of /rhome/dadmmason/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 ...
    I0408 16:09:54.662906 40810 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker
    I0408 16:09:54.663216 40810 profile.go:148] Saving config to /rhome/dadmmason/.minikube/profiles/minikube/config.json ...
    I0408 16:09:54.663255 40810 lock.go:36] WriteFile acquiring /rhome/dadmmason/.minikube/profiles/minikube/config.json: {Name:mkdd6468410fe3fb8a81afb70f8741815dfa701f Clock:{} Delay:500ms Timeout:1m0s Cancel:}
    I0408 16:09:55.620913 40810 cache.go:148] successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
    I0408 16:09:55.620945 40810 cache.go:185] Successfully downloaded all kic artifacts
    I0408 16:09:55.620980 40810 start.go:313] acquiring machines lock for minikube: {Name:mke106008088022af601d1ad8a563b2b2afd8f7d Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0408 16:09:55.621148 40810 start.go:317] acquired machines lock for "minikube" in 147.045µs
    I0408 16:09:55.621603 40810 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
    I0408 16:09:55.621690 40810 start.go:126] createHost starting for "" (driver="docker")
    I0408 16:09:55.625412 40810 out.go:150] * Creating docker container (CPUs=2, Memory=8192MB) ...

  • Creating docker container (CPUs=2, Memory=8192MB) ...| I0408 16:09:55.625632 40810 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
    I0408 16:09:55.625657 40810 client.go:168] LocalClient.Create starting
    I0408 16:09:55.626000 40810 client.go:171] LocalClient.Create took 337.047µs \ I0408 16:09:57.626845 40810 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
    I0408 16:09:57.626962 40810 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube | W0408 16:09:57.672613 40810 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
    I0408 16:09:57.672733 40810 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
    stdout:

stderr:
Error: No such container: minikube \ I0408 16:09:57.949208 40810 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 16:09:57.991606 40810 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 16:09:57.991707 40810 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube | I0408 16:09:58.533001 40810 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube / W0408 16:09:58.574641 40810 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 16:09:58.574739 40810 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube \ I0408 16:09:59.230163 40810 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube | W0408 16:09:59.275294 40810 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 16:09:59.275418 40810 retry.go:31] will retry after 791.196345ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube | I0408 16:10:00.066799 40810 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 16:10:00.133947 40810 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
W0408 16:10:00.134049 40810 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube

W0408 16:10:00.134065 40810 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 16:10:00.134082 40810 start.go:129] duration metric: createHost completed in 4.51237909s
I0408 16:10:00.134091 40810 start.go:80] releasing machines lock for "minikube", held for 4.512916004s
W0408 16:10:00.134118 40810 start.go:425] error starting host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
I0408 16:10:00.134187 40810 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} / W0408 16:10:00.187715 40810 cli_runner.go:162] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0408 16:10:00.187777 40810 delete.go:46] couldn't inspect container "minikube" before deleting: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 16:10:00.187981 40810 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
W0408 16:10:00.216529 40810 cli_runner.go:162] sudo -n podman container inspect minikube --format={{.State.Status}} returned with exit code 1
I0408 16:10:00.216599 40810 delete.go:46] couldn't inspect container "minikube" before deleting: unknown state "minikube": sudo -n podman container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
sudo: podman: command not found
W0408 16:10:00.216638 40810 start.go:430] delete host: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
W0408 16:10:00.216821 40810 out.go:191] ! StartHost failed, but will try again: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
! StartHost failed, but will try again: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
I0408 16:10:00.216842 40810 start.go:440] Will try again in 5 seconds ...
I0408 16:10:05.217021 40810 start.go:313] acquiring machines lock for minikube: {Name:mke106008088022af601d1ad8a563b2b2afd8f7d Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0408 16:10:05.217550 40810 start.go:317] acquired machines lock for "minikube" in 208.251µs
I0408 16:10:05.217619 40810 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0408 16:10:05.217727 40810 start.go:126] createHost starting for "" (driver="docker")
I0408 16:10:05.222821 40810 out.go:150] * Creating docker container (CPUs=2, Memory=8192MB) ...

  • Creating docker container (CPUs=2, Memory=8192MB) ...| I0408 16:10:05.222979 40810 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
    I0408 16:10:05.223009 40810 client.go:168] LocalClient.Create starting
    I0408 16:10:05.223201 40810 client.go:171] LocalClient.Create took 184.83µs \ I0408 16:10:07.223683 40810 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
    I0408 16:10:07.223876 40810 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube | W0408 16:10:07.270698 40810 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
    I0408 16:10:07.270802 40810 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
    stdout:

stderr:
Error: No such container: minikube - I0408 16:10:07.502792 40810 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube \ W0408 16:10:07.543983 40810 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 16:10:07.544082 40810 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube \ I0408 16:10:07.989949 40810 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 16:10:08.032282 40810 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 16:10:08.032381 40810 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube \ I0408 16:10:08.351123 40810 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 16:10:08.398349 40810 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 16:10:08.398478 40810 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube / I0408 16:10:08.952878 40810 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 16:10:08.996104 40810 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0408 16:10:08.996225 40810 retry.go:31] will retry after 755.539547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube / I0408 16:10:09.752561 40810 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0408 16:10:09.794190 40810 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
W0408 16:10:09.794306 40810 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube

W0408 16:10:09.794336 40810 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0408 16:10:09.794390 40810 start.go:129] duration metric: createHost completed in 4.576654714s
I0408 16:10:09.794405 40810 start.go:80] releasing machines lock for "minikube", held for 4.576841834s
W0408 16:10:09.794624 40810 out.go:191] * Failed to start docker container. Running "minikube delete" may fix it: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor

  • Failed to start docker container. Running "minikube delete" may fix it: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
    I0408 16:10:09.797653 40810 out.go:129]

W0408 16:10:09.797750 40810 out.go:191] X Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
X Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
W0408 16:10:09.797785 40810 out.go:191] *
*
W0408 16:10:09.797845 40810 out.go:191] * If the above advice does not help, please let us know:

@bamason14
Author

I rebuilt the host and now everything is working. Closing the issue as I have no way of recreating it.

@spowelljr spowelljr changed the title Failed to start minikube Failed to start minikube - failed to acquire bootstrap client lock: bad file descriptor May 5, 2021
@kiennguyen94

@bamason14 Hi, could you elaborate on rebuilding the host? What specifically did you do to it?

@craustin

craustin commented May 21, 2021

I can reproduce this issue on RHEL 8. Let me know if anyone has debugging commands to try.
creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor

@defurn

defurn commented May 25, 2021

I guess having an NFS share for the home directory could cause this: the locks can't be acquired on NFS?
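If the bootstrap lock is an advisory file lock, the failure mode is easy to illustrate with the util-linux flock(1) utility (a stand-in here, not minikube's actual locking code): on a local filesystem the lock is granted, while on some NFS mounts the underlying flock(2) call fails with EBADF, matching the "bad file descriptor" in the logs above.

```shell
# Illustration only: flock(1) takes the same class of advisory lock.
# Run with the lock file on the suspect (NFS-mounted) filesystem to compare.
lockfile="$HOME/.minikube-lock-test"
if flock --nonblock "$lockfile" true; then
  echo "lock acquired on $lockfile"
else
  echo "lock failed on $lockfile"
fi
rm -f "$lockfile"
```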

@craustin

Yes, I think my NFS home directory caused this in my case. Setting MINIKUBE_HOME to a path outside my home directory fixed it. (https://minikube.sigs.k8s.io/docs/handbook/config/#environment-variables)
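For anyone else hitting this on an NFS home directory, a minimal workaround sketch (the /var/tmp path below is just an example of a local, non-NFS location; any local directory works):

```shell
# Point minikube's state directory at local disk instead of the NFS home.
export MINIKUBE_HOME="/var/tmp/${USER:-$(id -un)}/minikube"
mkdir -p "$MINIKUBE_HOME"
echo "minikube state will live in $MINIKUBE_HOME"
# then start as usual, e.g.:
#   minikube start --driver=docker --cpus=2 --memory=8g
```

Add the export to your shell profile if you want it to persist across sessions.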

@medyagh medyagh reopened this May 26, 2021
@medyagh
Member

medyagh commented May 26, 2021

@craustin @defurn is there a way that minikube could detect an NFS home dir, so we could at least relax the lock for NFS? Or suggest that the user change their home dir?
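A user-level sketch of such a check (stat -f -c %T is GNU coreutils on Linux; minikube itself would presumably make the equivalent statfs call in Go): inspect the filesystem type backing the minikube home directory and warn when it looks like NFS.

```shell
# Preflight sketch: warn if the minikube home directory is NFS-backed.
dir="${MINIKUBE_HOME:-$HOME/.minikube}"
[ -e "$dir" ] || dir="$HOME"      # .minikube may not exist yet
fstype=$(stat -f -c %T "$dir")    # e.g. "ext2/ext3", "xfs", "nfs"
echo "$dir is on: $fstype"
case "$fstype" in
  nfs*) echo "warning: NFS detected; file locking may fail. Set MINIKUBE_HOME to a local path." ;;
esac
```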

@medyagh medyagh added kind/bug Categorizes issue or PR as related to a bug. needs-problem-regex labels May 26, 2021
@andriyDev andriyDev added the triage/needs-information Indicates an issue needs more information in order to work on it. label Jun 30, 2021
@spowelljr spowelljr removed kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it. labels Jul 14, 2021
@sharifelgamal sharifelgamal added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Jul 28, 2021
@netwrk197

I have the same issue:

root@minikube:/containerfile# ls
ls: cannot access 'server.php': Bad file descriptor
ls: cannot access 'node_modules': Bad file descriptor
ls: cannot access 'README.md': Bad file descriptor
ls: cannot access 'resources': Bad file descriptor
ls: cannot access 'app': Bad file descriptor
ls: cannot access 'bootstrap': Bad file descriptor
ls: cannot access 'webpack.mix.js': Bad file descriptor
ls: cannot access 'artisan': Bad file descriptor
ls: cannot access 'package.json': Bad file descriptor
ls: cannot access 'config': Bad file descriptor
ls: cannot access 'phpunit.xml': Bad file descriptor
ls: cannot access 'composer.json': Bad file descriptor
ls: cannot access 'tests': Bad file descriptor
ls: cannot access 'khanglq.txt': Bad file descriptor
ls: cannot access 'storage': Bad file descriptor
ls: cannot access 'composer.lock': Bad file descriptor
ls: cannot access 'docker': Bad file descriptor
ls: cannot access 'docker-compose.yml': Bad file descriptor
ls: cannot access 'vendor': Bad file descriptor
ls: cannot access 'frontend': Bad file descriptor
ls: cannot access 'public': Bad file descriptor
ls: cannot access 'package-lock.json': Bad file descriptor
ls: cannot access 'database': Bad file descriptor
ls: cannot access 'routes': Bad file descriptor
README.md artisan composer.json config docker frontend node_modules package.json public routes storage vendor
app bootstrap composer.lock database docker-compose.yml khanglq.txt package-lock.json phpunit.xml resources server.php tests webpack.mix.js
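One way to confirm the pattern reported in this thread is to check what filesystem backs the directory where `ls` fails. `/containerfile` below is the path from the paste above; substitute your own. On an NFS-backed mount, the FSTYPE column shows `nfs` or `nfs4`.

```shell
# Show the mountpoint, filesystem type, and source backing the directory
# where ls reports "Bad file descriptor"; fall back to the current dir
# if the example path does not exist on this machine.
findmnt -n -T /containerfile -o TARGET,FSTYPE,SOURCE 2>/dev/null \
  || findmnt -n -T . -o TARGET,FSTYPE,SOURCE
```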

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 10, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 10, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ilanRosenbaum

ilanRosenbaum commented Oct 10, 2022

Reproduced on Centos 7.9, more information: https://stackoverflow.com/questions/74020334/minikube-failing-to-start-in-centos-7-9

@lucasandre22

Yes, I think my NFS homedir caused this in my case. Setting MINIKUBE_HOME outside my homedir fixed it. (https://minikube.sigs.k8s.io/docs/handbook/config/#environment-variables)

Thank you! This was exactly the issue I was having, and I couldn't find anywhere how to fix it.
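The workaround confirmed above can be applied like this: point `MINIKUBE_HOME` at local disk so minikube's lock files land on a filesystem that supports them. `/var/tmp/minikube` is only an example location, not a required path.

```shell
# Move minikube's state (and its lock files) off the NFS home directory.
export MINIKUBE_HOME=/var/tmp/minikube
mkdir -p "$MINIKUBE_HOME"
# Then start as usual, e.g.:
#   minikube start --driver=docker --cpus=2 --memory=8g --addons=ingress
```

To make the change permanent, add the `export` line to your shell profile.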
