
kvm2: minikube network does not exist #3636

Closed
rnmhdn opened this issue Feb 7, 2019 · 19 comments
Labels
co/kvm2-driver: KVM2 driver related issues
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/backlog: Higher priority than priority/awaiting-more-evidence.
r/2019q2: Issue was last reviewed 2019q2

Comments


rnmhdn commented Feb 7, 2019

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Please provide the following details:

Environment:

Minikube version (use minikube version): v0.33.1

  • OS (e.g. from /etc/os-release): Linux 4.20.6-arch1-1-ARCH #1 SMP PREEMPT Thu Jan 31 08:22:01 UTC 2019 x86_64 GNU/Linux
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): "DriverName": "kvm2",
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): "Boot2DockerURL": "file:///home/aran/.minikube/cache/iso/minikube-v0.33.1.iso",
    "ISO": "/home/aran/.minikube/machines/minikube/boot2docker.iso"
$ minikube ssh cat /etc/VERSION
E0207 22:36:29.854750    2550 ssh.go:54] Error attempting to ssh/run-ssh-command: Error getting state of host: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'minikube'')
  • Install tools:
  • Others:
    The above can be generated in one go with the following commands (can be copied and pasted directly into your terminal):
minikube version
echo "";
echo "OS:";
cat /etc/os-release
echo "";
echo "VM driver:"; 
grep DriverName ~/.minikube/machines/minikube/config.json
echo "";
echo "ISO version";
grep -i ISO ~/.minikube/machines/minikube/config.json

What happened:
minikube network does not exist
What you expected to happen:
minikube network exists.
How to reproduce it (as minimally and precisely as possible):

    torify curl -LO https://storage.googleapis.com/kubernetes-release/release/$(torify curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x ./kubectl && sudo mv ./kubectl /usr/local/bin/kubectl 
    torify curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.33.1/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
    sudo pacman -Sy libvirt qemu-headless ebtables dnsmasq 
    sudo systemctl enable libvirtd.service
    sudo pacman -Sy docker-machine
    installed minikube-bin kubectl-bin docker-machine-driver-kvm2 from AUR
    sudo pacman -S dmidecode
    sudo usermod -aG libvirt aran
    sudo reboot 
    torify wget "https://storage.googleapis.com/minikube/iso/minikube-v0.33.1.iso"
    python -m http.server

Output of minikube logs (if applicable):

F0207 22:45:38.784909    3156 logs.go:50] Error getting cluster bootstrapper: getting kubeadm bootstrapper: getting ssh client: Error creating new ssh host from driver: Error getting ssh host name for driver: machine in unknown state: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'minikube'')

Anything else we need to know:

$ sudo virsh net-list --all
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   no          yes
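A quick way to confirm this state is to check for the driver's private network by name. This is a diagnostic sketch, assuming the kvm2 driver's default network name "minikube" (adjust if you pass a custom --kvm-network):

```shell
# Check whether the kvm2 driver's private network exists.
if virsh -c qemu:///system net-list --all | grep -qw minikube; then
  echo "minikube network present"
else
  echo "minikube network missing"
fi
```

If it is missing, the driver is expected to create it on the next minikube start.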

**Update:**
I removed ~/.minikube, restarted libvirtd, and ran the command again; now I get:
virError(Code=9, Domain=20, Message='operation failed: domain 'minikube' already exists with uuid f718bd7b-3f63-4151-a89d-02cca2a39c9b')
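For the "already exists with uuid" state, a minimal cleanup sketch, assuming the leftover domain really is a stale minikube VM you intend to recreate:

```shell
# Remove the stale libvirt domain left behind by a failed start.
# WARNING: this destroys the VM; only run it if you plan to recreate the cluster.
virsh -c qemu:///system destroy minikube 2>/dev/null || true  # stop it if it is running
virsh -c qemu:///system undefine minikube                     # drop the stale definition
minikube delete                                               # clear minikube's own state
```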
**Update:**
I did minikube delete and then:

$ minikube start --iso-url "http://localhost:8000/minikube-v0.33.1.iso" --vm-driver kvm2
Starting local Kubernetes v1.13.2 cluster...
Starting VM...

and it's stuck there.
Here are the last messages in my journal:

Feb 07 23:11:43 Christopher audit[606]: VIRT_RESOURCE pid=606 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=vcpu reason=start vm="minikube" uuid=6130b1e6-5c16-454c-93ca-b6c46edfbf70 old-vcpu=0 new-vcpu=2 exe="/usr/bin/libvirtd" hostname=? addr=? terminal=? res=success'
Feb 07 23:11:43 Christopher audit[606]: VIRT_CONTROL pid=606 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm op=start reason=booted vm="minikube" uuid=6130b1e6-5c16-454c-93ca-b6c46edfbf70 vm-pid=2910 exe="/usr/bin/libvirtd" hostname=? addr=? terminal=? res=success'
Feb 07 23:11:45 Christopher kernel: virbr1: port 2(vnet1) entered learning state
Feb 07 23:11:45 Christopher kernel: virbr0: port 2(vnet0) entered learning state
Feb 07 23:11:47 Christopher NetworkManager[592]: <info>  [1549568507.1642] device (virbr0): carrier: link connected
Feb 07 23:11:47 Christopher NetworkManager[592]: <info>  [1549568507.1644] device (virbr1): carrier: link connected
Feb 07 23:11:47 Christopher kernel: virbr0: port 2(vnet0) entered forwarding state
Feb 07 23:11:47 Christopher kernel: virbr0: topology change detected, propagating
Feb 07 23:11:47 Christopher kernel: virbr1: port 2(vnet1) entered forwarding state
Feb 07 23:11:47 Christopher kernel: virbr1: topology change detected, propagating
Feb 07 23:12:08 Christopher dnsmasq-dhcp[696]: DHCPDISCOVER(virbr0) 6c:66:db:0f:21:13
Feb 07 23:12:08 Christopher dnsmasq-dhcp[696]: DHCPOFFER(virbr0) 192.168.122.215 6c:66:db:0f:21:13
Feb 07 23:12:08 Christopher dnsmasq-dhcp[696]: DHCPDISCOVER(virbr0) 6c:66:db:0f:21:13
Feb 07 23:12:08 Christopher dnsmasq-dhcp[696]: DHCPOFFER(virbr0) 192.168.122.215 6c:66:db:0f:21:13
Feb 07 23:12:08 Christopher dnsmasq-dhcp[696]: DHCPREQUEST(virbr0) 192.168.122.215 6c:66:db:0f:21:13
Feb 07 23:12:08 Christopher dnsmasq-dhcp[696]: DHCPACK(virbr0) 192.168.122.215 6c:66:db:0f:21:13 minikube
Feb 07 23:12:08 Christopher dnsmasq-dhcp[2899]: DHCPDISCOVER(virbr1) 70:8a:20:5b:bf:09
Feb 07 23:12:08 Christopher dnsmasq-dhcp[2899]: DHCPOFFER(virbr1) 192.168.39.44 70:8a:20:5b:bf:09
Feb 07 23:12:08 Christopher dnsmasq-dhcp[2899]: DHCPDISCOVER(virbr1) 70:8a:20:5b:bf:09
Feb 07 23:12:08 Christopher dnsmasq-dhcp[2899]: DHCPOFFER(virbr1) 192.168.39.44 70:8a:20:5b:bf:09
Feb 07 23:12:08 Christopher dnsmasq-dhcp[2899]: DHCPREQUEST(virbr1) 192.168.39.44 70:8a:20:5b:bf:09
Feb 07 23:12:08 Christopher dnsmasq-dhcp[2899]: DHCPACK(virbr1) 192.168.39.44 70:8a:20:5b:bf:09 minikube
@rnmhdn rnmhdn changed the title virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'minikube'') Minikube stuck in Starting VM... Feb 7, 2019

tstromberg commented Feb 12, 2019

Hey @aranmohyeddin - Do you mind sharing the output of: minikube start -v=8 --alsologtostderr --vm-driver=kvm2

It should show us exactly which step it's blocked on, but probably something related to networking.

Also, what release of libvirt are you using?

@tstromberg tstromberg changed the title Minikube stuck in Starting VM... kvm2: Minikube stuck in Starting VM... Feb 12, 2019
@tstromberg tstromberg added kind/bug Categorizes issue or PR as related to a bug. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. co/kvm2-driver KVM2 driver related issues triage/needs-information Indicates an issue needs more information in order to work on it. labels Feb 12, 2019

rnmhdn commented Feb 13, 2019

$ minikube start -v=8 --alsologtostderr --vm-driver=kvm2 > output
I0213 15:02:41.001125   31231 notify.go:121] Checking for updates...
I0213 15:02:41.974045   31231 start.go:120] Viper configuration:
I0213 15:02:41.977092   31231 cluster.go:74] Skipping create...Using existing machine configuration
Found binary path at /usr/bin/docker-machine-driver-kvm2
Launching plugin server for driver kvm2
Plugin server listening at address 127.0.0.1:44173
() Calling .GetVersion
Using API Version  1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .DriverName
(minikube) Calling .GetState
I0213 15:02:42.012188   31231 cluster.go:86] Machine state:  Running
(minikube) Calling .DriverName
Getting to WaitForSSH function...
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x83a9a0] 0x83a970  [] 0s}  22 <nil> <nil>}
About to run SSH command:
exit 0
Error dialing TCP: dial tcp :22: connect: connection refused
Error dialing TCP: dial tcp :22: connect: connection refused

and

$ cat output 
Aliases:
map[string]string{}
Override:
map[string]interface {}{"v":"8", "alsologtostderr":"true"}
PFlags:
map[string]viper.FlagValue{"docker-env":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc140)}, "enable-default-cni":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dca00)}, "hyperv-virtual-switch":viper.pflagValue{flag:(*pflag.Flag)(0xc000445e00)}, "keep-context":viper.pflagValue{flag:(*pflag.Flag)(0xc0004457c0)}, "memory":viper.pflagValue{flag:(*pflag.Flag)(0xc000445b80)}, "apiserver-ips":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc460)}, "apiserver-port":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc280)}, "dns-domain":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc500)}, "hyperkit-vsock-ports":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dcdc0)}, "service-cluster-ip-range":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc5a0)}, "apiserver-names":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc3c0)}, "cri-socket":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc820)}, "uuid":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dcc80)}, "profile":viper.pflagValue{flag:(*pflag.Flag)(0xc000444dc0)}, "cache-images":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dcb40)}, "gpu":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dce60)}, "hyperkit-vpnkit-sock":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dcd20)}, "kubernetes-version":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc8c0)}, "kvm-network":viper.pflagValue{flag:(*pflag.Flag)(0xc000445ea0)}, "apiserver-name":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc320)}, "cpus":viper.pflagValue{flag:(*pflag.Flag)(0xc000445c20)}, "disable-driver-mounts":viper.pflagValue{flag:(*pflag.Flag)(0xc0004459a0)}, "insecure-registry":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc640)}, "nfs-share":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc000)}, "xhyve-disk-driver":viper.pflagValue{flag:(*pflag.Flag)(0xc000445f40)}, "bootstrapper":viper.pflagValue{flag:(*pflag.Flag)(0xc000444e60)}, "container-runtime":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc780)}, "extra-config":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dcbe0)}, 
"feature-gates":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dcaa0)}, "host-only-cidr":viper.pflagValue{flag:(*pflag.Flag)(0xc000445d60)}, "iso-url":viper.pflagValue{flag:(*pflag.Flag)(0xc000445a40)}, "network-plugin":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc960)}, "docker-opt":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc1e0)}, "mount":viper.pflagValue{flag:(*pflag.Flag)(0xc000445860)}, "registry-mirror":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc6e0)}, "disk-size":viper.pflagValue{flag:(*pflag.Flag)(0xc000445cc0)}, "mount-string":viper.pflagValue{flag:(*pflag.Flag)(0xc000445900)}, "nfs-shares-root":viper.pflagValue{flag:(*pflag.Flag)(0xc0001dc0a0)}, "vm-driver":viper.pflagValue{flag:(*pflag.Flag)(0xc000445ae0)}}
Env:
map[string]string{}
Key/Value Store:
map[string]interface {}{}
Config:
map[string]interface {}{"wantreporterrorprompt":false}
Defaults:
map[string]interface {}{"alsologtostderr":"false", "log_dir":"", "wantnonedriverwarning":true, "showdriverdeprecationnotification":true, "wantreporterror":false, "wantreporterrorprompt":true, "wantkubectldownloadmsg":true, "showbootstrapperdeprecationnotification":true, "v":"0", "wantupdatenotification":true, "reminderwaitperiodinhours":24}
Starting local Kubernetes v1.13.2 cluster...
Starting VM...
Waiting for SSH to be available...

and also:

$ libvirtd --version
libvirtd (libvirt) 5.0.0


anselvo commented Feb 14, 2019

Hello, I have the same issue. After 120 seconds I get this error:

E0214 13:40:20.008272   15961 start.go:205] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds.

OS: Ubuntu 18.10
libvirtd (libvirt) 4.6.0

$ minikube status
Error getting cluster bootstrapper: getting kubeadm bootstrapper: getting ssh client: Error dialing tcp via ssh client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain


tstromberg commented Feb 20, 2019

minikube v0.34.1 includes an updated DHCP configuration. @aranmohyeddin - do you mind testing against this release to see how the error messages are different?

Please allow minikube the 5 minutes to time out, and paste the entire output of minikube start so that I can see if there are any other clues. Are there any firewall rules on this machine? I wonder if it simply isn't allowed to connect to the newly started VM.

@aelsergeev - That may be #3434 or #3566


j1cs commented Feb 28, 2019

I had the same issue.
I installed libvirtd and firewalld (on Artix, an OpenRC-based Arch Linux derivative). For some reason firewalld was not letting the VM get an IP; maybe I missed some configuration (I honestly don't know whether firewalld needs anything special for this).
On top of that, the libvirtd init script pulls in firewalld as a dependency, so I removed firewalld, restarted the libvirtd service, and minikube finally started up.
I hope this is useful to someone.
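If firewalld interference is suspected, a quick diagnostic sketch using standard firewalld tools (whether a "libvirt" zone exists depends on your versions; libvirt >= 5.1 registers its own zone):

```shell
# Is firewalld running, and which zone did the libvirt bridges land in?
systemctl is-active firewalld
firewall-cmd --get-active-zones
# libvirt >= 5.1 puts virbr* interfaces into its own "libvirt" zone, which
# allows DHCP/DNS to guests; on older combinations the bridge may sit in a
# zone that silently drops those packets, matching the symptom here.
firewall-cmd --zone=libvirt --list-all 2>/dev/null || echo "no libvirt zone"
```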

@tstromberg tstromberg changed the title kvm2: Minikube stuck in Starting VM... kvm2: Minikube stuck in Starting VM: minikube network does not exist Mar 22, 2019
@tstromberg tstromberg changed the title kvm2: Minikube stuck in Starting VM: minikube network does not exist kvm2: minikube network does not exist Mar 22, 2019

tstromberg commented May 22, 2019

Can someone confirm whether or not minikube v1.1 with the latest kvm2 driver addresses this bug? I suspect it might. To upgrade the kvm2 driver:

curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 && sudo install docker-machine-driver-kvm2 /usr/local/bin/

@tstromberg tstromberg added the r/2019q2 Issue was last reviewed 2019q2 label May 22, 2019

scorbin commented Jul 1, 2019

@tstromberg
I tried minikube v1.2 and
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 && sudo install docker-machine-driver-kvm2 /usr/local/bin/
I get the same error

BTW, I did get minikube working on Fedora Silverblue.

@tstromberg tstromberg added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. triage/needs-information Indicates an issue needs more information in order to work on it. labels Jul 17, 2019
@sharifelgamal sharifelgamal added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jul 18, 2019
@tstromberg
Contributor

Marking closed, because minikube v1.4 will tear down and recreate failed kvm2 domains.

If you do see it in v1.4, please say /reopen and attach the output of minikube start --alsologtostderr -v=8

@FancyBanana

/reopen

❯ minikube start -v=8 --alsologtostderr --vm-driver=kvm2
I1107 10:56:16.898629    8234 start.go:251] hostinfo: {"hostname":"FanciestLinux","uptime":495,"bootTime":1573120081,"procs":292,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"18.1.2","kernelVersion":"5.3.8-3-MANJARO","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"ca54c42b-3d1f-4300-b63b-920ad5ac5c51"}
I1107 10:56:16.899756    8234 start.go:261] virtualization: kvm host
😄  minikube v1.5.2 on Arch 18.1.2
I1107 10:56:16.900308    8234 start.go:547] selectDriver: flag="kvm2", old=&{{false false https://storage.googleapis.com/minikube/iso/minikube-v1.5.1.iso 2000 2 20000 kvm2 docker  [] [] [] [] 192.168.99.1/24  default qemu:///system false false <nil> [] false [] /nfsshares  false false true} {v1.16.2  8443 minikube minikubeCA [] [] cluster.local docker    10.96.0.0/12  [] true false}}
I1107 10:56:16.948205    8234 main.go:110] libmachine: Found binary path at /home/vharabari/.minikube/bin/docker-machine-driver-kvm2
I1107 10:56:16.948297    8234 main.go:110] libmachine: Launching plugin server for driver kvm2
I1107 10:56:16.964116    8234 main.go:110] libmachine: Plugin server listening at address 127.0.0.1:36585
I1107 10:56:16.964549    8234 main.go:110] libmachine: () Calling .GetVersion
I1107 10:56:16.965079    8234 main.go:110] libmachine: Using API Version  1
I1107 10:56:16.965097    8234 main.go:110] libmachine: () Calling .SetConfigRaw
I1107 10:56:16.965353    8234 main.go:110] libmachine: () Calling .GetMachineName
I1107 10:56:16.965474    8234 main.go:110] libmachine: (minikube) Calling .DriverName
I1107 10:56:16.965548    8234 start.go:293] selected: kvm2
I1107 10:56:16.965614    8234 install.go:102] Validating docker-machine-driver-kvm2, PATH=/home/vharabari/.minikube/bin:/home/vharabari/.nvm/versions/node/v10.16.3/bin:/home/vharabari/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/snapd/snap/bin:/home/vharabari/.yarn/bin:/home/vharabari/.config/yarn/global/node_modules/.bin
I1107 10:56:16.988063    8234 downloader.go:60] Not caching ISO, using https://storage.googleapis.com/minikube/iso/minikube-v1.5.1.iso
I1107 10:56:16.988149    8234 profile.go:82] Saving config to /home/vharabari/.minikube/profiles/minikube/config.json ...
I1107 10:56:16.988227    8234 lock.go:41] attempting to write to file "/home/vharabari/.minikube/profiles/minikube/config.json.tmp537476698" with filemode -rw-------
I1107 10:56:16.988271    8234 cache_images.go:296] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/vharabari/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I1107 10:56:16.988279    8234 cache_images.go:296] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.2
I1107 10:56:16.988302    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 exists
I1107 10:56:16.988313    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.2 exists
I1107 10:56:16.988324    8234 cache_images.go:298] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.2 completed in 50.166µs
I1107 10:56:16.988339    8234 cache_images.go:296] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.2
I1107 10:56:16.988355    8234 cache_images.go:296] CacheImage: k8s.gcr.io/kube-proxy:v1.16.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.2
I1107 10:56:16.988370    8234 cache_images.go:296] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.2
I1107 10:56:16.988383    8234 cache_images.go:296] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13
I1107 10:56:16.988386    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.2 exists
I1107 10:56:16.988398    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 exists
I1107 10:56:16.988408    8234 cache_images.go:298] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 completed in 28.815µs
I1107 10:56:16.988423    8234 cache_images.go:83] CacheImage k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 succeeded
I1107 10:56:16.988423    8234 cache_images.go:296] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
I1107 10:56:16.988437    8234 cache_images.go:296] CacheImage: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1
I1107 10:56:16.988447    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 exists
I1107 10:56:16.988453    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 exists
I1107 10:56:16.988463    8234 cache_images.go:298] CacheImage: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 completed in 33.646µs
I1107 10:56:16.988475    8234 cache_images.go:83] CacheImage k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 succeeded
I1107 10:56:16.988479    8234 cache_images.go:296] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13
I1107 10:56:16.988489    8234 cache_images.go:296] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1107 10:56:16.988499    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 exists
I1107 10:56:16.988505    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 exists
I1107 10:56:16.988515    8234 cache_images.go:298] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 completed in 30.044µs
I1107 10:56:16.988456    8234 cache_images.go:298] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 completed in 37.283µs
I1107 10:56:16.988530    8234 cluster.go:101] Skipping create...Using existing machine configuration
I1107 10:56:16.988357    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.2 exists
I1107 10:56:16.988618    8234 cache_images.go:298] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.2 completed in 282.814µs
I1107 10:56:16.988627    8234 cache_images.go:83] CacheImage k8s.gcr.io/kube-controller-manager:v1.16.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.2 succeeded
I1107 10:56:16.988534    8234 cache_images.go:83] CacheImage k8s.gcr.io/coredns:1.6.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 succeeded
I1107 10:56:16.988372    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.2 exists
I1107 10:56:16.988646    8234 cache_images.go:298] CacheImage: k8s.gcr.io/kube-proxy:v1.16.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.2 completed in 296.392µs
I1107 10:56:16.988653    8234 cache_images.go:83] CacheImage k8s.gcr.io/kube-proxy:v1.16.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.2 succeeded
I1107 10:56:16.988413    8234 cache_images.go:296] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
I1107 10:56:16.988670    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 exists
I1107 10:56:16.988467    8234 cache_images.go:296] CacheImage: k8s.gcr.io/pause:3.1 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/pause_3.1
I1107 10:56:16.988679    8234 cache_images.go:298] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 completed in 274.155µs
I1107 10:56:16.988697    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
I1107 10:56:16.988399    8234 cache_images.go:298] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.2 completed in 35.003µs
I1107 10:56:16.988714    8234 cache_images.go:83] CacheImage k8s.gcr.io/kube-apiserver:v1.16.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.2 succeeded
I1107 10:56:16.988702    8234 cache_images.go:83] CacheImage k8s.gcr.io/etcd:3.3.15-0 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 succeeded
I1107 10:56:16.988313    8234 cache_images.go:298] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/vharabari/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 completed in 50.338µs
I1107 10:56:16.988733    8234 cache_images.go:83] CacheImage gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/vharabari/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 succeeded
I1107 10:56:16.988519    8234 cache_images.go:296] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0
I1107 10:56:16.988750    8234 cache_images.go:302] /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0 exists
I1107 10:56:16.988759    8234 cache_images.go:298] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0 completed in 243.055µs
I1107 10:56:16.988767    8234 cache_images.go:83] CacheImage k8s.gcr.io/kube-addon-manager:v9.0 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0 succeeded
I1107 10:56:16.988508    8234 cache_images.go:298] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 completed in 33.704µs
I1107 10:56:16.988779    8234 cache_images.go:83] CacheImage k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 succeeded
I1107 10:56:16.988340    8234 cache_images.go:83] CacheImage k8s.gcr.io/kube-scheduler:v1.16.2 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.2 succeeded
I1107 10:56:16.988527    8234 cache_images.go:83] CacheImage k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 succeeded
I1107 10:56:16.988707    8234 cache_images.go:298] CacheImage: k8s.gcr.io/pause:3.1 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/pause_3.1 completed in 244.455µs
I1107 10:56:16.988800    8234 cache_images.go:83] CacheImage k8s.gcr.io/pause:3.1 -> /home/vharabari/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
I1107 10:56:16.988807    8234 cache_images.go:90] Successfully cached all images.
I1107 10:56:16.988931    8234 main.go:110] libmachine: Found binary path at /home/vharabari/.minikube/bin/docker-machine-driver-kvm2
I1107 10:56:16.988965    8234 main.go:110] libmachine: Launching plugin server for driver kvm2
I1107 10:56:17.005318    8234 main.go:110] libmachine: Plugin server listening at address 127.0.0.1:42471
I1107 10:56:17.005668    8234 main.go:110] libmachine: () Calling .GetVersion
I1107 10:56:17.006081    8234 main.go:110] libmachine: Using API Version  1
I1107 10:56:17.006101    8234 main.go:110] libmachine: () Calling .SetConfigRaw
I1107 10:56:17.006365    8234 main.go:110] libmachine: () Calling .GetMachineName
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
I1107 10:56:17.006516    8234 main.go:110] libmachine: (minikube) Calling .GetState
I1107 10:56:24.234770    8234 cluster.go:113] Machine state:  Error
🔄  Retriable failure: Error getting state for host: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'minikube'')
I1107 10:56:24.235404    8234 main.go:110] libmachine: Found binary path at /home/vharabari/.minikube/bin/docker-machine-driver-kvm2
I1107 10:56:24.235463    8234 main.go:110] libmachine: Launching plugin server for driver kvm2
I1107 10:56:24.258910    8234 main.go:110] libmachine: Plugin server listening at address 127.0.0.1:34587
I1107 10:56:24.259549    8234 main.go:110] libmachine: () Calling .GetVersion
I1107 10:56:24.260319    8234 main.go:110] libmachine: Using API Version  1
I1107 10:56:24.260356    8234 main.go:110] libmachine: () Calling .SetConfigRaw
I1107 10:56:24.261099    8234 main.go:110] libmachine: () Calling .GetMachineName
I1107 10:56:24.262040    8234 main.go:110] libmachine: Found binary path at /home/vharabari/.minikube/bin/docker-machine-driver-kvm2
I1107 10:56:24.262103    8234 main.go:110] libmachine: Launching plugin server for driver kvm2
I1107 10:56:24.282867    8234 main.go:110] libmachine: Plugin server listening at address 127.0.0.1:39805
I1107 10:56:24.283166    8234 main.go:110] libmachine: () Calling .GetVersion
I1107 10:56:24.283580    8234 main.go:110] libmachine: Using API Version  1
I1107 10:56:24.283608    8234 main.go:110] libmachine: () Calling .SetConfigRaw
I1107 10:56:24.283886    8234 main.go:110] libmachine: () Calling .GetMachineName
I1107 10:56:24.284009    8234 main.go:110] libmachine: (minikube) Calling .GetState
⚠️  Unable to get the status of the minikube cluster.
W1107 10:56:26.355828    8234 start.go:974] DeleteHost: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I1107 10:56:31.879709    8234 cluster.go:101] Skipping create...Using existing machine configuration
I1107 10:56:31.880097    8234 main.go:110] libmachine: Found binary path at /home/vharabari/.minikube/bin/docker-machine-driver-kvm2
I1107 10:56:31.880146    8234 main.go:110] libmachine: Launching plugin server for driver kvm2
I1107 10:56:31.897170    8234 main.go:110] libmachine: Plugin server listening at address 127.0.0.1:38047
I1107 10:56:31.897547    8234 main.go:110] libmachine: () Calling .GetVersion
I1107 10:56:31.898012    8234 main.go:110] libmachine: Using API Version  1
I1107 10:56:31.898031    8234 main.go:110] libmachine: () Calling .SetConfigRaw
I1107 10:56:31.898283    8234 main.go:110] libmachine: () Calling .GetMachineName
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
I1107 10:56:31.898460    8234 main.go:110] libmachine: (minikube) Calling .GetState
I1107 10:56:44.909577    8234 cluster.go:113] Machine state:  Error
🔄  Retriable failure: Error getting state for host: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'minikube'')
I1107 10:56:44.911239    8234 main.go:110] libmachine: Found binary path at /home/vharabari/.minikube/bin/docker-machine-driver-kvm2
I1107 10:56:44.911357    8234 main.go:110] libmachine: Launching plugin server for driver kvm2
I1107 10:56:44.938779    8234 main.go:110] libmachine: Plugin server listening at address 127.0.0.1:43491
I1107 10:56:44.939181    8234 main.go:110] libmachine: () Calling .GetVersion
I1107 10:56:44.940028    8234 main.go:110] libmachine: Using API Version  1
I1107 10:56:44.940058    8234 main.go:110] libmachine: () Calling .SetConfigRaw
I1107 10:56:44.940359    8234 main.go:110] libmachine: () Calling .GetMachineName
I1107 10:56:44.940842    8234 main.go:110] libmachine: Found binary path at /home/vharabari/.minikube/bin/docker-machine-driver-kvm2
I1107 10:56:44.940887    8234 main.go:110] libmachine: Launching plugin server for driver kvm2
I1107 10:56:44.968418    8234 main.go:110] libmachine: Plugin server listening at address 127.0.0.1:41365
I1107 10:56:44.968839    8234 main.go:110] libmachine: () Calling .GetVersion
I1107 10:56:44.969341    8234 main.go:110] libmachine: Using API Version  1
I1107 10:56:44.969374    8234 main.go:110] libmachine: () Calling .SetConfigRaw
I1107 10:56:44.969670    8234 main.go:110] libmachine: () Calling .GetMachineName
I1107 10:56:44.969782    8234 main.go:110] libmachine: (minikube) Calling .GetState
⚠️  Unable to get the status of the minikube cluster.
W1107 10:56:47.023478    8234 start.go:974] DeleteHost: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
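For anyone else hitting the same `virError(Code=42, ... Domain not found: no domain with matching name 'minikube')`, a recovery sequence that is worth trying (a sketch, assuming libvirt's default `qemu:///system` connection; paths and behavior may differ on your distro) is:

```shell
# Check whether libvirt still knows about the 'minikube' domain and network
virsh --connect qemu:///system list --all
virsh --connect qemu:///system net-list --all

# If the domain is gone but minikube kept stale state, clear it out
minikube delete

# Fallback only if 'minikube delete' itself errors out:
# remove the leftover machine directory that confuses the driver
rm -rf ~/.minikube/machines/minikube

# Recreate the VM (the kvm2 driver should recreate the 'minikube'
# libvirt network if it is missing) from scratch
minikube start --vm-driver=kvm2
```

`minikube delete` is usually enough on its own; the manual `rm -rf` is only a last resort when the stale config blocks the delete as well.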

@k8s-ci-robot
Contributor

@FancyBanana: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@rnmhdn
Author

rnmhdn commented Nov 11, 2019

/reopen

@k8s-ci-robot
Contributor

@aranmohyeddin: Reopened this issue.

In response to this:

/reopen


@k8s-ci-robot k8s-ci-robot reopened this Nov 11, 2019
@medyagh
Member

medyagh commented Dec 16, 2019

CC @josedonizetti
@aranmohyeddin @FancyBanana
Do you mind trying with the latest version of minikube? Do you still have this issue?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 15, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 15, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close


@ge79so

ge79so commented Sep 5, 2020

/reopen

#9190

@k8s-ci-robot
Contributor

@ge79so: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

#9190

