
kvm2: Machine didn't return an IP after 120 seconds #3566

Closed
cmdpwnd opened this issue Jan 21, 2019 · 20 comments
Labels
cause/firewall-or-proxy (When firewalls or proxies seem to be interfering), co/kvm2-driver (KVM2 driver related issues), kind/support (Categorizes issue or PR as a support question), lifecycle/frozen (Indicates that an issue or PR should not be auto-closed due to staleness)

Comments

cmdpwnd commented Jan 21, 2019

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG

Please provide the following details:

Environment: VMware Workstation 11

Minikube version: v0.33.1

  • OS:
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
  • VM Driver: "DriverName": "kvm2",
  • ISO version:
"Boot2DockerURL": "file:///home/cmdpwnd/.minikube/cache/iso/minikube-v0.33.1.iso",
        "ISO": "/home/cmdpwnd/.minikube/machines/minikube/boot2docker.iso",
  • Install tools: N/A
  • Others:
VMware Workstation 11:
    Enable VM Settings/CPU/Virtualize Intel VT-x/EPT or AMD-V/RVI
    Enable VM Settings/CPU/Virtualize CPU performance counters

What happened:
E0121 11:39:38.385150 1862 start.go:205] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds.
What you expected to happen:
Success??
How to reproduce it: copy/paste will do:

sudo apt install qemu-kvm libvirt-clients libvirt-daemon-system
newgrp libvirtd
sudo adduser $(whoami) libvirtd
sudo adduser $(whoami) libvirt
sudo adduser $(whoami) libvirt-qemu
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
sudo install docker-machine-driver-kvm2 /usr/local/bin/
sudo chown -R $(whoami):libvirtd /var/run/libvirt
sudo systemctl restart libvirtd
virsh --connect qemu:///system net-start default
minikube start -v9 --vm-driver kvm2

Anything else we need to know?: If you don't clear minikube's state after the initial failure, expect this on rerun:

Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
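
A minimal cleanup sketch before retrying (hedged; assumes the default machine name "minikube", matching the ISO path above):

minikube delete
# if delete itself fails, remove the stale machine directory by hand:
rm -rf ~/.minikube/machines/minikube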
@cmdpwnd cmdpwnd changed the title VMware Workstation / Debian 9: Machine didn't return an IP after 120 seconds Debian 9: Machine didn't return an IP after 120 seconds Jan 21, 2019
@tstromberg tstromberg changed the title Debian 9: Machine didn't return an IP after 120 seconds kvm2 on top of VMware workstation: Machine didn't return an IP after 120 seconds Jan 23, 2019
@tstromberg tstromberg changed the title kvm2 on top of VMware workstation: Machine didn't return an IP after 120 seconds kvm2 on VMware workstation: Machine didn't return an IP after 120 seconds Jan 23, 2019
@tstromberg tstromberg added os/linux co/kvm2-driver KVM2 driver related issues kind/bug Categorizes issue or PR as related to a bug. labels Jan 23, 2019
tstromberg (Contributor) commented Jan 23, 2019

I'm pretty confident this is a kvm/libvirt issue that we should be able to detect, but I don't know how to yet. It's probably also related to the use of nested VMs. Have you tried running minikube outside of VMware Workstation?

https://fedoraproject.org/wiki/How_to_debug_Virtualization_problems has some guidance on debugging kvm/libvirt issues, but I am especially curious what this command emits:

virt-host-validate

along with:

virsh net-list --all

@tstromberg tstromberg added cause/nested-vm-config When nested VM's appear to play a role priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. triage/needs-information Indicates an issue needs more information in order to work on it. labels Jan 23, 2019
gbraad (Contributor) commented Jan 26, 2019 via email

cmdpwnd (Author) commented Jan 29, 2019

@tstromberg I'd read through those related minishift issues before opening an issue here. I've replicated my scenario with stretch installed on hardware, using the info described in this issue, and have already ruled out VMware as the root cause, hence the initial title change. This is hardware-agnostic and could be specific to Debian 9. To my knowledge (not sure), kvm2 is the only driver that works with Debian 9 and minikube.

For additional info, though (this is the same output regardless of virtualization, on Intel):

I'm not worried about IOMMU because there's no need for a passthrough device.

cmdpwnd@debian:~$ virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (No ACPI DMAR table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller mount-point                  : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpu' controller mount-point                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'cpuset' controller mount-point                  : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'devices' controller mount-point                 : PASS
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking for cgroup 'blkio' controller mount-point                   : PASS
cmdpwnd@debian:~$
cmdpwnd@debian:~$ cat /etc/default/grub | grep intel
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
cmdpwnd@debian:~$
cmdpwnd@debian:~$ virsh --connect qemu:///system net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 minikube-net         active     yes           yes

cmdpwnd@debian:~$

@cmdpwnd cmdpwnd changed the title kvm2 on VMware workstation: Machine didn't return an IP after 120 seconds kvm2 on Debian 9: Machine didn't return an IP after 120 seconds Jan 30, 2019
tstromberg (Contributor) commented Jan 30, 2019

@cmdpwnd - thanks for the update! Do you mind running a few commands to help narrow this issue down a bit further?

I use kvm2 on Debian every day, but I suspect we have some environmental differences. First, let's get the virsh version:

virsh --version
// my output: 4.10.0

We can roughly emulate the path the kvm driver uses to determine the IP address by first finding the name of the bridge for minikube-net (it's probably virbr1):

virsh --connect qemu:///system dumpxml minikube | grep minikube-net
// my output: <source network='minikube-net' bridge='virbr1'/>

From there, the kvm driver (even libvirt upstream!) parses the dnsmasq status file (?!?!?) to get the IP address of the interface from the bridge name we just discovered:

grep ip-address /var/lib/libvirt/dnsmasq/virbr1.status
// my output: ip-address": "192.168.39.150",
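
A hedged sketch chaining those two steps into one lookup (assumes GNU grep and a domain named minikube):

BRIDGE=$(virsh --connect qemu:///system dumpxml minikube | grep minikube-net | grep -oP "bridge='\K[^']+")
# the status file is owned by root on most distros, hence sudo
sudo grep '"ip-address"' "/var/lib/libvirt/dnsmasq/${BRIDGE}.status"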

It seems like there should be a more straightforward way to do this with recent releases of libvirt, since virsh has no problem displaying the IP address here:

sudo virsh domifaddr minikube

// my output:

 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      70:16:ec:d1:8c:51    ipv4         192.168.122.182/24
 vnet1      a0:3d:49:b1:84:02    ipv4         192.168.39.150/24
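
As a hedged aside, newer libvirt can also be asked explicitly where the address should come from; the agent variant assumes qemu-guest-agent is running inside the VM, which the minikube ISO may not ship:

# read addresses from the dnsmasq lease database
sudo virsh domifaddr minikube --source lease
# or ask the guest agent inside the VM
sudo virsh domifaddr minikube --source agent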

If you don't mind repeating the same commands, I think I can figure out how to improve the kvm driver to do the correct thing here.

cmdpwnd (Author) commented Jan 30, 2019

@tstromberg alright, I think we're getting somewhere now 👍 . My networking is totally dead.

cmdpwnd@debian:~$ virsh --version
3.0.0
cmdpwnd@debian:~$ virsh --connect qemu:///system dumpxml minikube | grep minikube-ne
      <source network='minikube-net' bridge='virbr1'/>
cmdpwnd@debian:~$ grep ip-address /var/lib/libvirt/dnsmasq/virbr1.status
#The file is empty, same for virbr0 (virsh network "default")
cmdpwnd@debian:~$ sudo virsh domifaddr minikube
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
cmdpwnd@debian:~$

Additionally, here's the config that libvirtd should be pulling:

cmdpwnd@debian:~$ sudo cat /var/lib/libvirt/dnsmasq/minikube-net.conf
##WARNING:  THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST.  Changes to this configuration should be made using:
##    virsh net-edit minikube-net
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
port=0
pid-file=/var/run/libvirt/network/minikube-net.pid
except-interface=lo
bind-dynamic
interface=virbr1
dhcp-option=3
no-resolv
ra-param=*,0,0
dhcp-range=192.168.39.2,192.168.39.254
dhcp-no-override
dhcp-authoritative
dhcp-lease-max=253
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/minikube-net.hostsfile
cmdpwnd@debian:~$
cmdpwnd@debian:~$ sudo cat /var/lib/libvirt/dnsmasq/default.conf
##WARNING:  THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST.  Changes to this configuration should be made using:
##    virsh net-edit default
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
pid-file=/var/run/libvirt/network/default.pid
except-interface=lo
bind-dynamic
interface=virbr0
dhcp-range=192.168.122.2,192.168.122.254
dhcp-no-override
dhcp-authoritative
dhcp-lease-max=253
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile
addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts
cmdpwnd@debian:~$
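
Given that dead state, a hedged debugging sketch (assumes libvirt spawns one dnsmasq instance per network, and virbr1 as the minikube-net bridge, as above):

# is the per-network dnsmasq instance running at all?
pgrep -af 'dnsmasq.*minikube-net'
# are any DHCP requests reaching the bridge while the VM boots?
sudo tcpdump -ni virbr1 port 67 or port 68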

anselvo commented Feb 20, 2019

Hi, I have the same problem. This is my output:

$ virsh --version
4.6.0
$ virsh --connect qemu:///system dumpxml minikube | grep minikube-net
     <source network='minikube-net' bridge='virbr2'/>
$ grep ip-address /var/lib/libvirt/dnsmasq/virbr1.status
#The file is empty
$ sudo virsh domifaddr minikube
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 #empty

@cmdpwnd Did you resolve this problem? If yes, can you tell me how?

anselvo commented Feb 20, 2019

My logs for this command:

$ minikube start --vm-driver kvm2 -v 8 --alsologtostderr
logs.txt

riggtravis commented:

I'm running into this same issue on Ubuntu 18.10. I'm going to dump all my information to compare against what everyone else is experiencing.

Environment:

Distributor ID:	Ubuntu
Description:	Ubuntu 18.10
Release:	18.10
Codename:	cosmic

Minikube Version: 0.34.1

VM Driver: kvm2

virt-host-validate:

  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (No ACPI IVRS table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller mount-point                  : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpu' controller mount-point                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'cpuset' controller mount-point                  : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'devices' controller mount-point                 : PASS
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking for cgroup 'blkio' controller mount-point                   : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS

virsh net-list --all:

 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 minikube-net         active     yes           yes

virsh --version: 4.6.0

virsh --connect qemu:///system dumpxml minikube | grep minikube-net:

grep ip-address /var/lib/libvirt/dnsmasq/virbr1.status: 192.168.39.230

sudo virsh domifaddr minikube:

 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      28:82:76:da:78:91    ipv4         192.168.122.31/24
 vnet1      6c:a2:16:03:e1:54    ipv4         192.168.39.230/24

Hopefully this adds some more data points to help us figure out what's going on here.

analogue commented:

To add another datapoint, I was running into the same issue (how I got here), but in the process of reproducing, it magically fixed itself (the worst kind of fix!).

Environment: VMware Fusion 10.1.5 running on macOS 10.14.3

Minikube version: v0.35.0

OS:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"

VM Driver: "DriverName": "kvm2"

Others:

VMware Workstation 11:
    Enable VM Settings/CPU/Virtualize Intel VT-x/EPT or AMD-V/RVI
    Enable VM Settings/CPU/Virtualize CPU performance counters
    Enable IO MMU

Initial failure was exactly the same as @cmdpwnd's.

Additional info:

spatel@vm-yelp:~$ virt-host-validate 
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller mount-point                  : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpu' controller mount-point                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'cpuset' controller mount-point                  : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'devices' controller mount-point                 : PASS
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking for cgroup 'blkio' controller mount-point                   : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS
spatel@vm-yelp:~$ virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 minikube-net         active     yes           yes
spatel@vm-yelp:~$ virsh --version
4.0.0
spatel@vm-yelp:~$ virsh --connect qemu:///system dumpxml minikube | grep minikube-net
      <source network='minikube-net' bridge='virbr1'/>
spatel@vm-yelp:~$ grep ip-address /var/lib/libvirt/dnsmasq/virbr1.status
    "ip-address": "192.168.39.242",
spatel@vm-yelp:~$ sudo virsh domifaddr minikube
[sudo] password for spatel: 
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      [redacted]        ipv4         192.168.122.146/24
 vnet1      [redacted]        ipv4         192.168.39.242/24
spatel@vm-yelp:~$ sudo cat /var/lib/libvirt/dnsmasq/minikube-net.conf
##WARNING:  THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST.  Changes to this configuration should be made using:
##    virsh net-edit minikube-net
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
user=libvirt-dnsmasq
port=0
pid-file=/var/run/libvirt/network/minikube-net.pid
except-interface=lo
bind-dynamic
interface=virbr1
dhcp-option=3
no-resolv
ra-param=*,0,0
dhcp-range=192.168.39.2,192.168.39.254
dhcp-no-override
dhcp-authoritative
dhcp-lease-max=253
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/minikube-net.hostsfile
spatel@vm-yelp:~$ sudo cat /var/lib/libvirt/dnsmasq/default.conf
##WARNING:  THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST.  Changes to this configuration should be made using:
##    virsh net-edit default
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
user=libvirt-dnsmasq
pid-file=/var/run/libvirt/network/default.pid
except-interface=lo
bind-dynamic
interface=virbr0
dhcp-range=192.168.122.2,192.168.122.254
dhcp-no-override
dhcp-authoritative
dhcp-lease-max=253
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile
addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts

So how did things magically start working? After a fresh reboot of the VM, a minikube delete, and then the start:

$ minikube start --vm-driver kvm2 -v 8 --alsologtostderr

output.log

spatel@vm-yelp:~$ minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.39.242

tx0c commented Apr 24, 2019

Another reproduction here; I checked all the virsh outputs above and got similar results:

[...]
(minikube) DBG | Waiting for machine to come up 19/40
(minikube) DBG | Waiting for machine to come up 20/40
[...]
(minikube) DBG | Waiting for machine to come up 40/40
I0424 17:06:53.580893   28159 start.go:384] StartHost: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds
I0424 17:06:53.580971   28159 utils.go:122] non-retriable error: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds
W0424 17:06:53.581310   28159 exit.go:99] Unable to start VM: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds

💣  Unable to start VM: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new
Command exited with non-zero status 70
0.12user 0.02system 2:07.16elapsed 0%CPU (0avgtext+0avgdata 29468maxresident)k
16inputs+16outputs (0major+2903minor)pagefaults 0swaps

real	2m7.165s
user	0m0.128s
sys	0m0.031s

> tstromberg commented on Jan 30: "@cmdpwnd - thanks for the update! Do you mind running a few commands to help narrow this issue down a bit further?"

@tstromberg I don't mind running more debugging commands, but do you (or any other developer) have an update on this?

@tstromberg tstromberg changed the title kvm2 on Debian 9: Machine didn't return an IP after 120 seconds Error in driver during machine creation: Machine didn't return an IP after 120 seconds May 14, 2019
tstromberg (Contributor) commented May 22, 2019
tstromberg commented May 22, 2019

If you run into this, please try upgrading to the most recent kvm2 machine driver and report back:

curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 && sudo install docker-machine-driver-kvm2 /usr/local/bin/

Then run minikube delete to remove the old state.
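
To confirm the upgrade took effect, the driver binary can report its own version (hedged; recent driver releases respond to the version subcommand):

docker-machine-driver-kvm2 version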

@tstromberg tstromberg added r/2019q2 Issue was last reviewed 2019q2 and removed cause/nested-vm-config When nested VM's appear to play a role labels May 22, 2019
@tstromberg tstromberg added cause/firewall-or-proxy When firewalls or proxies seem to be interfering needs-solution-message Issues where where offering a solution for an error would be helpful priority/backlog Higher priority than priority/awaiting-more-evidence. and removed triage/needs-information Indicates an issue needs more information in order to work on it. kind/bug Categorizes issue or PR as related to a bug. os/linux priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Jul 17, 2019
@sharifelgamal sharifelgamal added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jul 18, 2019
sdstewar commented:

Also having this issue as described with v1.2.0 of the driver.

rihardsk commented Aug 5, 2019

I was having the same issue (even with kvm driver v1.2.0). It turns out that, for me too, conflicting nftables rules were at fault, and sudo nft flush ruleset fixed the issue.

Now I just have to figure out what rules to add to /etc/nftables.conf to solve this properly.

rihardsk commented:

I found out what was causing problems in my config. The rules in my nftables input chain were dropping packets coming from the minikube network interfaces. I had to add:

iifname "virbr1" counter return
iifname "virbr0" counter return

to fix that.

Here's the full set of rules in my input chain, if anyone's interested:

table inet filter {
  chain input {
    type filter hook input priority 0;

    # allow established/related connections
    ct state {established, related} accept

    # early drop of invalid connections
    ct state invalid counter drop

    # allow from loopback
    iifname lo accept

    # allow icmp
    ip protocol icmp accept
    ip6 nexthdr icmpv6 accept

    # allow ssh
    tcp dport ssh accept

    # don't clash with minikube
    iifname "virbr1" counter return
    iifname "virbr0" counter return

    # everything else
    counter
    reject with icmpx type port-unreachable
  }
}

I also made sure to use iptables-nft instead of iptables-legacy (I installed iptables-nft on Arch Linux, which replaces iptables) to rule out conflicts between iptables and nftables.
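
A hedged sketch for making the fix survive reboots, assuming the nftables systemd unit loads /etc/nftables.conf at boot:

# after adding the two iifname rules to /etc/nftables.conf:
sudo nft -c -f /etc/nftables.conf    # dry-run: syntax-check without applying
sudo systemctl restart nftables      # reload the ruleset from the config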

abdulhadad commented:

It seems to be an issue with VMware nested virtualization. I got it working with VMware Player 15 on Windows 10 1903 and these settings in the .vmx file:

vhv.enable = "TRUE"
vpmc.enable = "TRUE"
vvtd.enable = "TRUE"

But if the vpmc.enable = "TRUE" option is deleted from the .vmx, or the hypervisor.cpuid.v0 = "FALSE" option is present in the .vmx, minikube fails to start.

I also tested the minikube ISO on libvirt using this command, and it shows the same behaviour:

virt-install --virt-type=kvm --name=test --ram 2048 --vcpus=1 --virt-type=kvm --hvm --cdrom  ~/.minikube/cache/iso/minikube-v1.3.0.iso --network network=default --disk pool=default,size=20,bus=virtio,format=qcow2 

References:
https://fabianlee.org/2018/08/27/kvm-bare-metal-virtualization-on-ubuntu-with-kvm/
https://communities.vmware.com/docs/DOC-8970

@tstromberg tstromberg added kind/support Categorizes issue or PR as a support question. and removed help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. needs-solution-message Issues where where offering a solution for an error would be helpful priority/backlog Higher priority than priority/awaiting-more-evidence. r/2019q2 Issue was last reviewed 2019q2 labels Sep 20, 2019
tstromberg (Contributor) commented:

minikube v1.4 now gives a little bit more documentation around this, but I'll leave this open for others who run into this support issue.

@tstromberg tstromberg added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Sep 20, 2019
medyagh (Member) commented Mar 4, 2020

@cmdpwnd I will close this issue due to no updates. Sorry to hear you had this issue; please feel free to reopen.

In the meantime, I recommend giving our newest driver a try in the latest release:

minikube start --vm-driver=docker

That might solve your issue!

@medyagh medyagh closed this as completed Mar 4, 2020
zioalex commented Oct 6, 2020

JFYI, I was able to work around this just by restarting the libvirtd service:

sudo systemctl restart libvirtd.service

Without changing anything else, minikube started perfectly.
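
A lighter-weight variant that may suffice is bouncing only the libvirt network instead of the whole daemon (hedged; the network is named minikube-net on older minikube releases and mk-minikube on newer ones, so check virsh net-list --all first):

virsh --connect qemu:///system net-destroy minikube-net
virsh --connect qemu:///system net-start minikube-net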

nikimanoledaki commented Sep 20, 2022

I am also facing this issue, and the error persists despite trying the suggested fixes :( Posting this here in case there are other folks in my situation! My error and logs are pretty much identical to those of the earlier users who posted theirs.

One thing that may be of interest is that the DHCP lease table appears empty immediately after the minikube start failure:

virsh net-dhcp-leases mk-minikube
 Expiry Time   MAC address   Protocol   IP address   Hostname   Client ID or DUID
-----------------------------------------------------------------------------------

A few seconds later, an address is provided:

virsh net-dhcp-leases mk-minikube
 Expiry Time           MAC address         Protocol   IP address         Hostname   Client ID or DUID
----------------------------------------------------------------------------------------------------------
 2022-09-16 09:23:31   52:54:00:ca:cc:bd   ipv4       192.168.39.60/24   minikube   01:52:54:00:ca:cc:bd
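
A hedged way to watch for the lease arriving in real time (assumes the network name mk-minikube as above):

watch -n1 'virsh net-dhcp-leases mk-minikube'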

I tried increasing the retry timeout (the one behind "machine minikube didn't return IP after 1 minute") to 3m in the code itself. That randomly made it work, but later it started failing again (the worst kind of fix!). Trying it again did not solve the issue.

I went with QEMU as a driver instead, which works without any issue.
