
GUEST_PROVISION: Failed to start host #14845

Closed
Eason0729 opened this issue Aug 23, 2022 · 13 comments
Labels
kind/support: Categorizes issue or PR as a support question.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@Eason0729

What Happened?

-> % minikube start --driver=kvm2
😄  minikube v1.26.1 on Raspbian 11.4 (arm64)
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🔥  Deleting "minikube" in kvm2 ...
🤦  StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine minikube didn't return IP after 1 minute
🔥  Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
😿  Failed to start kvm2 VM. Running "minikube delete" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine minikube didn't return IP after 1 minute

❌  Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine minikube didn't return IP after 1 minute

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

In addition, I went through the troubleshooting steps in the docs. I followed the instructions, but it didn't work.

-> % sudo virsh net-list --all
 Name          State    Autostart   Persistent
------------------------------------------------
 default       active   yes         yes
 mk-minikube   active   yes         yes
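
For more detail, two commands that may help narrow this down (assuming the mk-minikube network name that minikube creates by default):

# check whether the mk-minikube network ever hands out a DHCP lease
virsh net-dhcp-leases mk-minikube

# re-run minikube start with verbose driver logging so the IP-wait loop is visible
minikube start --driver=kvm2 --alsologtostderr -v=7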

Attach the log file

logs.txt

Operating System

Ubuntu

Driver

KVM2

@nikimanoledaki

I am facing the same issue.

I am running the command in an Ubuntu VM created through Vagrant + VirtualBox.

The error I'm getting when running minikube start --driver=kvm2 is:

😿  Failed to start kvm2 VM. Running "minikube delete" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine minikube didn't return IP after 1 minute

❌  Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine minikube didn't return IP after 1 minute

Validating KVM support:

sudo virsh net-list --all
 Name          State    Autostart   Persistent
------------------------------------------------
 default       active   yes         yes
 mk-minikube   active   yes         yes
virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (No ACPI DMAR table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'freezer' controller support                     : PASS
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS

Attach the log file

I0915 19:47:11.588863    9435 main.go:134] libmachine: (minikube) DBG | unable to find current IP address of domain minikube in network mk-minikube
I0915 19:47:11.588890    9435 main.go:134] libmachine: (minikube) DBG | I0915 19:47:11.588790    9630 retry.go:31] will retry after 9.953714808s: waiting for machine to come up
I0915 19:47:21.547328    9435 main.go:134] libmachine: (minikube) DBG | domain minikube has defined MAC address 52:54:00:ca:cc:bd in network mk-minikube
I0915 19:47:21.547968    9435 main.go:134] libmachine: (minikube) DBG | unable to find current IP address of domain minikube in network mk-minikube
I0915 19:47:21.548231    9435 main.go:134] libmachine: (minikube) DBG | unable to start VM: IP not available after waiting: machine minikube didn't return IP after 1 minute
I0915 19:47:21.554359    9435 client.go:171] LocalClient.Create took 53.955793964s
I0915 19:47:23.558822    9435 start.go:135] duration metric: createHost completed in 55.990871336s
I0915 19:47:23.558836    9435 start.go:82] releasing machines lock for "minikube", held for 55.991711494s
W0915 19:47:23.559944    9435 out.go:239] 😿  Failed to start kvm2 VM. Running "minikube delete" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine minikube didn't return IP after 1 minute
I0915 19:47:23.567863    9435 out.go:177]
W0915 19:47:23.569916    9435 out.go:239] ❌  Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine minikube didn't return IP after 1 minute
W0915 19:47:23.570222    9435 out.go:239]
W0915 19:47:23.573593    9435 out.go:239]

Operating System

Ubuntu

Driver

KVM2

@nikimanoledaki

nikimanoledaki commented Sep 16, 2022

I have good news! 🎉

I looked at the MAC address of the VM that was trying to start (it shows up right before minikube errors out, and can be found in logs.txt):

I0916 08:41:52.891415   10345 main.go:134] libmachine: (minikube) DBG | domain minikube has defined MAC address 52:54:00:be:1c:2a in network mk-minikube

There were no DHCP leases on the mk-minikube network:

virsh net-dhcp-leases mk-minikube
 Expiry Time   MAC address   Protocol   IP address   Hostname   Client ID or DUID
-----------------------------------------------------------------------------------

However, without doing anything, after a few minutes, there was one!

virsh net-dhcp-leases mk-minikube
 Expiry Time           MAC address         Protocol   IP address          Hostname   Client ID or DUID
-----------------------------------------------------------------------------------------------------------
 2022-09-16 09:42:08   52:54:00:be:1c:2a   ipv4       192.168.39.134/24   minikube   01:52:54:00:be:1c:2a

It seems the VM was eventually given an address from the range, but only a few minutes after minikube had already timed out.

So, TL;DR: we need to increase the timeout of this step (machine minikube didn't return IP after 1 minute) to maybe 5 minutes. Is there currently a way to do that? Any help would be appreciated :)

Alternatively, is it possible to kickstart/continue a failed minikube start?

Last thing: I can confirm that an IP address was eventually assigned on mk-minikube's virbr1 🎉:

arp -e
Address                  HWtype  HWaddress           Flags Mask            Iface
10.0.2.3                 ether   52:54:00:12:35:03   C                     eth0
192.168.122.124          ether   52:54:00:5e:79:f7   C                     virbr0
192.168.39.134           ether   52:54:00:be:1c:2a   C                     virbr1
_gateway                 ether   52:54:00:12:35:02   C                     eth0
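
A minimal way to watch for that lease showing up in real time, assuming the default mk-minikube network name (watch is a standard Linux utility):

# poll the mk-minikube network every 10 seconds until a DHCP lease appears
watch -n 10 virsh net-dhcp-leases mk-minikube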

@nikimanoledaki

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Sep 16, 2022
@nikimanoledaki

nikimanoledaki commented Sep 16, 2022

Adding the following wait flags solved it!

minikube start --driver=kvm2 --profile=minikube --wait-timeout 15m0s --wait all

. . .

🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

@Eason0729 I hope this helps with your issue too :)
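
As I understand these flags, --wait selects which Kubernetes components minikube verifies after the cluster comes up and --wait-timeout extends how long it waits for them; I am not certain they change the driver's one-minute IP wait. A quick sanity check once start completes:

# confirm the node and control plane report healthy
minikube status
kubectl get nodes -o wide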

@Eason0729
Author

I will try it later.

@nikimanoledaki

nikimanoledaki commented Sep 20, 2022

Actually, this only solved the problem momentarily. The command started failing again after I rebooted my machine. Trying the wait flags again did not fix the problem, so they are not the solution. 😞

@Eason0729 have you tried the suggestions in this issue: #3566?
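
One more thing that might be worth checking after a reboot (an assumption on my side, not something confirmed in this thread) is whether the libvirt networks came back up and are set to autostart:

# list libvirt networks and their autostart state
sudo virsh net-list --all

# start the minikube network if it is inactive, and mark it to autostart
sudo virsh net-start mk-minikube
sudo virsh net-autostart mk-minikube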

@Eason0729
Author

Yes, I have tried it.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 19, 2022
@DUBANGARCIA

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 10, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 10, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 10, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Jun 9, 2023