vmwarefusion: failed to start after stop: Error configuring auth on host: Too many retries waiting for SSH to be available #1382
Comments
+1. Also seeing this behavior on three machines. Exact same environment.
Same for me too. It does not start once stopped. Looks similar to #1107; it gets to the WaitForSSH function...
Same issue here. I am hitting the errors below, and minikube keeps retrying: Starting VM...
Experiencing the same issue here. I did some digging with vmrun and found that the guest's /home/docker/.ssh dir is missing. As a workaround, I found I could get the cluster running again by:
Then running this script on the host to restore the missing SSH keys in the guest:

```shell
#!/bin/bash
MINIKUBE="${HOME}/.minikube/machines/minikube"
VMX="$MINIKUBE/minikube.vmx"
DOCKER_PUB_KEY="$MINIKUBE/id_rsa.pub"

function vmrun {
  GUESTCMD=$1; shift
  "/Applications/VMware Fusion.app/Contents/Library/vmrun" -gu docker -gp tcuser "$GUESTCMD" "$VMX" "$@"
}

vmrun runScriptInGuest /bin/bash "mkdir -p /home/docker/.ssh"
vmrun CopyFileFromHostToGuest "$DOCKER_PUB_KEY" /home/docker/.ssh/authorized_keys
vmrun runScriptInGuest /bin/bash "chown -R docker /home/docker/.ssh"
vmrun runScriptInGuest /bin/bash "chmod -R 700 /home/docker/.ssh"
```

Then running start again, now that SSH access is restored, brings it up.

I did some quick digging for a cause and found this in the minikube-automount logs: minikube-automount restores userdata.tar to populate the /home/docker/.ssh dir, so without it we get the 255 error from the SSH client.
/var/lib/boot2docker points to persistent storage, so that is good. But there is no userdata.tar contained within.
I have yet to find out why userdata.tar is missing, but it looks to be handled here: https://github.com/kubernetes/minikube/blob/k8s-v1.7/deploy/iso/minikube-iso/package/automount/minikube-automount — so I'm thinking the logs from the guest on first boot may tell us more.
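A quick host-side check for whether the guest's persistent storage actually contains the tar. This is a sketch reusing the credentials and paths from the workaround script above, not a verified command sequence:

```shell
#!/bin/bash
# List userdata.tar on the guest's persistent storage, from the host.
# Paths are assumptions carried over from the earlier workaround script.
VMX="${HOME}/.minikube/machines/minikube/minikube.vmx"
VMRUN="/Applications/VMware Fusion.app/Contents/Library/vmrun"

check_userdata() {
  # Prints the tar's listing if present; errors out if it is missing.
  "$VMRUN" -gu docker -gp tcuser runScriptInGuest "$VMX" /bin/bash \
    "ls -l /var/lib/boot2docker/userdata.tar"
}

# Only attempt the check when VMware Fusion's vmrun is actually installed.
if [ -x "$VMRUN" ]; then
  check_userdata
fi
```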
I created a cluster from scratch: userdata.tar gets uploaded to the guest early in minikube create via vmrun, so at that point it is present on the guest. Later on, when minikube-automount is enabled and started, it gets wiped by
Without knowledge of the other drivers, a possible fix might be to change minikube-automount to
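Building on the findings above, here is a hedged sketch of the "put the tar file in by hand" workaround: rebuild userdata.tar on the host from the machine's public key, then copy it to the guest's persistent storage so minikube-automount can restore /home/docker/.ssh on the next boot. The paths and vmrun invocation are assumptions mirroring the earlier workaround script, and the tar layout is inferred from what minikube-automount unpacks; adjust for your setup.

```shell
#!/bin/bash
# Sketch of the manual userdata.tar restore (paths and tar layout assumed).
set -eu

restore_userdata() {
  local minikube="${HOME}/.minikube/machines/minikube"
  local vmx="$minikube/minikube.vmx"
  local vmrun="/Applications/VMware Fusion.app/Contents/Library/vmrun"
  local workdir
  workdir="$(mktemp -d)"

  # Rebuild the tar with .ssh/authorized_keys from the host-side public key.
  mkdir -p "$workdir/.ssh"
  cp "$minikube/id_rsa.pub" "$workdir/.ssh/authorized_keys"
  chmod 700 "$workdir/.ssh"
  chmod 600 "$workdir/.ssh/authorized_keys"
  tar -C "$workdir" -cf "$minikube/userdata.tar" .ssh

  # Copy it onto the guest's persistent storage so it survives reboots.
  # Skipped when VMware Fusion's vmrun is not installed.
  if [ -x "$vmrun" ]; then
    "$vmrun" -gu docker -gp tcuser CopyFileFromHostToGuest \
      "$vmx" "$minikube/userdata.tar" /var/lib/boot2docker/userdata.tar
  fi
}

# Only run when this host actually has a minikube machine directory.
if [ -f "${HOME}/.minikube/machines/minikube/id_rsa.pub" ]; then
  restore_userdata
fi
```

After this, `minikube start --vm-driver=vmwarefusion` should find the keys restored on boot, as later commenters report.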
Thanks. After making a fresh cluster I put the tar file in by hand.
I used the last piece of advice from @b333z and did the
as I wasn't able to get the /var/lib/boot2docker copy to work. I'm using 0.21. But now it works, so thanks ever so much for that investigation!
This commit seems to be a fix for the issue (minikube itself has no code dictating when userdata is copied). Can we pull it into minikube?
Ping? I can try just blindly replacing the commit SHA1 in Godeps.json and seeing if tests pass...
Using the latest v0.23.0 and still getting the same issue. Is the fix included in that version? Is there any nightly build to test it? The easiest way of fixing it is just
I couldn't get 0.23.0 to work on macOS at all, so thanks for the fix @urbaniak!
I used the script from @urbaniak to get minikube to come up in VMware Fusion 10.0.1 as well. I had the same error as #2126.
Oddly, I have to use it every single time I start minikube.
This issue seems resolved with minikube 0.25.0.
I no longer have VMware running on my MBP, so I cannot verify it. If more people confirm it's working, I'll close it.
It is fixed for me on v0.25.0.
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Minikube version (use `minikube version`): 0.18.0

Environment:
- VM driver (`cat ~/.minikube/machines/minikube/config.json | grep DriverName`): vmwarefusion
- ISO version (`cat ~/.minikube/machines/minikube/config.json | grep -i ISO` or `minikube ssh cat /etc/VERSION`): boot2docker.iso

What happened:
Using VMware Fusion on macOS, the first time minikube is started it works flawlessly. However, after `minikube stop`, if I run `minikube start --vm-driver=vmwarefusion` again, it fails and minikube never runs.

What you expected to happen:
To be able to start the cluster after stopping it.

How to reproduce it (as minimally and precisely as possible):

Anything else do we need to know:
The only solution I've found so far is to `minikube delete` and start over.