none: reusing node: detecting provisioner: Too many retries waiting for SSH to be available #4132
Hmm, why is it trying to SSH to itself? That doesn't make sense. It's not really creating a new VM; that is supplied by the user. |
We test this sequence in test/integration/start_stop_delete_test.go, so I'm curious what's going on here that is different from our test environment. |
Questioning 'We test this sequence': the test names in that func are nocache_oldest, feature_gates_newest_cni, containerd_and_non_default_apiserver_port, and crio_ignore_preflights. None of them exercise the none driver, which seems to indicate that no startStop tests are performed with the none driver.
Addressing 'why is it SSHing to itself': the pull request for #3387 added the DetectProvisioner invocation into startHost. DetectProvisioner runs SSH commands, and the none driver doesn't support SSH commands. |
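To make that failure mode concrete, here is a minimal, self-contained Go sketch of the code path described above. The Driver interface, noneDriver type, retry count, and probe command are illustrative stand-ins, not minikube's or libmachine's actual API:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Driver is a minimal stand-in for libmachine's driver interface
// (names are illustrative only).
type Driver interface {
	DriverName() string
	RunSSHCommand(cmd string) (string, error)
}

// noneDriver models minikube's "none" driver: workloads run directly on
// the local host, so there is no SSH server to connect to.
type noneDriver struct{}

func (noneDriver) DriverName() string { return "none" }

func (noneDriver) RunSSHCommand(string) (string, error) {
	return "", errors.New("driver does not support ssh commands")
}

// detectProvisioner models what the #3387 change introduced into
// startHost: probe the guest OS over SSH, retrying before giving up.
func detectProvisioner(d Driver) error {
	for attempt := 0; attempt < 3; attempt++ {
		if _, err := d.RunSSHCommand("cat /etc/os-release"); err == nil {
			return nil
		}
		time.Sleep(10 * time.Millisecond)
	}
	return errors.New("Too many retries waiting for SSH to be available")
}

func main() {
	// Reusing an existing none-driver node walks straight into the SSH
	// probe, reproducing the error in this issue's title.
	if err := detectProvisioner(noneDriver{}); err != nil {
		fmt.Println("detecting provisioner:", err)
	}
}
```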
I am having the same issue with v1.0.0 |
I confirm this issue; to reproduce, see the attached minikube output and minikube logs. |
making sure minikube is deleted before setup to avoid kubernetes#4132
@cduke-nokia good finding! You are right, we are skipping the none driver in those tests. PRs are welcome :) |
Update: I cannot replicate this issue, even on 1.0.0. I have no idea how to reproduce this error anymore, even though I had this issue myself. @cduke-nokia, do you still have this issue? |
Tested with minikube v1.2.0: the same problem occurs; 'Waiting for SSH access ...' appears. |
Upstream docker-machine has this lovely hack before calling the provisioner:

```go
// TODO: Not really a fan of just checking "none" or "ci-test" here.
if h.Driver.DriverName() == "none" || h.Driver.DriverName() == "ci-test" {
	return nil
}
```

Like the OP says, the none driver doesn't support SSH commands. |
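For illustration, here is how that guard short-circuits provisioning in a minimal, runnable sketch; the Driver interface and noneDriver type are stand-ins, not libmachine's actual definitions:

```go
package main

import "fmt"

// Driver is a stand-in for libmachine's driver interface (illustrative only).
type Driver interface {
	DriverName() string
}

// noneDriver models minikube's "none" driver: workloads run on the local host.
type noneDriver struct{}

func (noneDriver) DriverName() string { return "none" }

// skipProvisioning mirrors the upstream hack quoted above: hosts whose
// driver has no SSH server are excused from provisioner detection.
func skipProvisioning(d Driver) bool {
	name := d.DriverName()
	return name == "none" || name == "ci-test"
}

func main() {
	if skipProvisioning(noneDriver{}) {
		fmt.Println("skipping provisioner detection for the none driver")
	}
}
```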
I have the same issue in v1.3.0 |
Hello, same here in v1.3.1 on Debian 10.0 |
Hello, same here in v1.3.1 on CentOS 7.2 |
Hello, same here in v1.3.1 on Debian 9 |
Can anyone confirm whether minikube delete helps here? Also, if anyone can replicate this on v1.3.1, please share the output of your minikube start run:
Thank you! |
@tstromberg - I tried sudo minikube delete and restarted minikube with the following commands: … FYI, I am running minikube version 1.3.1 on CentOS 7. I still get this error about driver none not supporting SSH: … |
Hello, same in v1.3.1 on CentOS 7. Any workaround? |
Hello, same issue in 1.3.1, CentOS 7.6, with a fresh install. |
Output of minikube start (1.3.1) on CentOS 7: minikube start --vm-driver none --memory 3048 --cpus 3 --alsologtostderr --v=8 |
Hi, looks like the "none" driver is still broken. |
Hi, Thanks |
To OlivierPiron: this problem affects Minikube 1.0.0 and up; I did not encounter it in pre-1.0.0 versions. I can start minikube 1.0.0 to 1.3.1 with the "none" driver, but not after minikube stop. The workaround is to use minikube delete; then minikube start will work. In other words, start / stop / start fails, but start / stop / delete / start works (both sequences are spelled out below). |
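Spelled out, using the same commands as the reproduction steps in the issue body below (the delete step is the workaround):

sudo minikube start --vm-driver=none
sudo minikube stop
sudo minikube start --vm-driver=none   # fails

sudo minikube start --vm-driver=none
sudo minikube stop
sudo minikube delete
sudo minikube start --vm-driver=none   # works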
Can someone confirm whether minikube v1.4 also suffers from this issue? v1.4 includes an updated machine-drivers version, which may help. I wasn't able to replicate it locally in v1.4. This works on my Debian-based machine: … |
I'm seeing the same issue with 1.4.0 on Oracle Linux 7.6 (CentOS 7 base). |
On 1.5.0, RHEL 7.6: same issue, and the workaround sequence also does not work. |
This is happening for me running an Ubuntu 19.10 VM (VirtualBox 6.0.8 r130520) with minikube 1.5.2 built from the git repo. |
Delete /root/.minikube/machines and try again. |
Confirmed on Ubuntu 19.10 VM, minikube version 1.5.2 |
Bug still exists in: … Start options for minikube are: … |
This issue seems to be related to this one: … @SimonMal84 |
Re-opening because I don't see the relationship between cgroups and this issue. It's very strange that the provisioner is even attempting to use SSH. Help wanted! If you see this, you may get some mileage by using … |
I have the same problem with minikube version v1.7.3. I didn't have it with 1.6.x |
I was facing the same issue in v1.3.1. I did a bit of a code walkthrough in v1.3.1 and found that the failure happens in "configureHost", which, as I understand it, is only required when the setup is done in a VM; I did not observe any configuration specific to the 'none' driver. I tried a couple of scenarios, and the fix below worked for me. I did not face any issues restarting multiple times or using the cluster after this change. Fix: https://github.com/kubernetes/minikube/compare/v1.3.1...rajeshkudaka:fix-4132?expand=1 I will create a PR if the change can still go into v1.3.1. Please let me know. |
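Based on the comment's description (not an inspection of the linked diff), a guard of roughly this shape would skip SSH-based host configuration for the none driver; all names here are assumptions for illustration:

```go
package main

import "fmt"

// configureHost is a stand-in for the function named in the comment above.
// The real fix lives in the linked fix-4132 branch; this only sketches the
// driver check it describes.
func configureHost(driverName string) error {
	if driverName == "none" {
		// No VM to SSH into: skip certificate and provisioner setup.
		return nil
	}
	fmt.Println("running SSH-based host configuration for driver:", driverName)
	return nil
}

func main() {
	if err := configureHost("none"); err != nil {
		fmt.Println("configureHost failed:", err)
	}
}
```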
Environment:

minikube version: v1.0.0
OS: Ubuntu 16.04 LTS (Xenial Xerus)
VM Driver: none

What happened:
Created a VM with the none driver, stopped it, then started it again. The VM failed to start and minikube reported that it crashed.
To reproduce:
sudo minikube start --vm-driver=none
sudo minikube stop
sudo minikube start --vm-driver=none
Starting a stopped VM was working in minikube v0.28.