VM crashes after accessing a service with type loadbalancer on the loadbalancer ip with the nodeport port (with minikube tunnel running) #4151
Thank you for the amazingly detailed reproduction steps. We don't quite understand what's going on here, but it sounds really interesting. Possibly an infinite loop causing memory or CPU exhaustion, or at least some kind of networking panic loop. Help wanted!
Any chance this can be replicated without the tunnel running, such as running curl from within the VM?
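A sketch of that check, assuming curl is available inside the VM and using the cluster IP and ports from the transcript later in this thread (illustrative values, not fixed constants):

$ minikube ssh
# inside the VM: the service's cluster IP on the service port should respond
$ curl http://10.106.235.139:8080/
# the cluster IP on the nodeport is the suspect combination
$ curl http://10.106.235.139:31955/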
I just tried some more things again. Here are my observations:
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@jonenst could you try to see if this issue still exists with the latest version of minikube?
AFAIK, this is still an issue, though it's possible that the tmpfs migration work may have affected it. We should just follow the repro case and check.
Update: I ran the commands above on minikube 1.4.0 with the kvm driver, and the issue seems to have been resolved. @jonenst do you mind checking if it is resolved for you as well? And thank you for such great repeatable instructions!
On the other terminals I have minikube tunnel and also minikube ssh (running top inside).
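For reference, a sketch of that terminal layout (which command runs in which terminal is an assumption beyond what the comment states):

# terminal 1: keep the tunnel running
$ minikube tunnel
# terminal 2: watch the VM from the inside
$ minikube ssh
$ top
# terminal 3: run the repro commands (kubectl / curl)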
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Hi,
@jonenst do you use a corp network or a VPN?
Same issue with Minikube 1.5.2 and vm-driver virtualbox. The issue occurs if the external IP address is accessed on any port except the "right" port (8080 with the instructions above). I'm not using any VPN.
/remove-lifecycle rotten
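A hedged illustration of that "right port" distinction, using the addresses from the repro transcript below (actual IPs and nodeports vary per run):

# service port exposed by the LoadBalancer -- works
$ curl http://10.106.235.139:8080/
# any other port on the external IP, e.g. the nodeport -- triggers the issue
$ curl http://10.106.235.139:31955/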
no
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@jonenst I tested again with Minikube 1.8.2 (and Virtualbox driver); this time, I could not reproduce the issue. Could you please test as well?
/remove-lifecycle stale
I tested again and the problem still shows up:
@jonenst If I'm not wrong, this is normal behaviour (whereas previous versions of Minikube had the issue you reported above). In my tests I changed the IP addresses and ports to match yours:
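A sketch of the behaviour being described as normal, using the addresses from the transcript below (an illustration, not the commenter's captured output):

# external IP + service port: the tunnel routes this, so it responds
$ curl http://10.106.235.139:8080/
# external IP + nodeport: nothing listens there via the tunnel, so it times out
$ curl http://10.106.235.139:31955/
curl: (7) Failed to connect to 10.106.235.139 port 31955: Connection timed out
# crucially, under this interpretation the VM should stay healthy
$ minikube status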
When I tested yesterday, it did crash the minikube VM. minikube status didn't work anymore; a terminal with minikube ssh running top froze; VBoxHeadless was running at 100% CPU on the host. I didn't wait for the initial curl to time out, though. Sorry for not mentioning this in my first reply.
I just tested again, and it crashed. Waiting for the bad curl to time out didn't fix the problem. Interrupting …
Strange. As noted above, when I tested last November with Minikube 1.5.2, I had the exact same problem as you, reproduced multiple times. I tested yesterday (Minikube 1.8.2, Fedora 31, Virtualbox 6.1.4) and it worked. Now I have tested again (same configuration) and the issue is back 🤕
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I wonder if this behavior changes if CNI is enabled.
@jonenst do you mind giving it another try with CNI enabled (with the latest version of minikube)? I am curious whether this would fix the problem.
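A sketch of that retry (the choice of kindnet here is an assumption; recent minikube releases accept --cni with values such as auto, bridge, calico, cilium, flannel, or kindnet):

$ minikube delete
$ minikube start --driver=virtualbox --cni=kindnet
# then repeat the repro: expose the service, run minikube tunnel in another
# terminal, and curl the external IP on the nodeport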
still broken with driver virtualbox on latest minikube, with or without "--container-runtime=containerd".
without "--container-runtime=containerd":
$ minikube start --driver=virtualbox
😄 minikube v1.18.1 on Fedora 29
✨ Using the virtualbox driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
💾 Downloading Kubernetes v1.20.2 preload ...
> preloaded-images-k8s-v9-v1....: 491.22 MiB / 491.22 MiB 100.00% 2.36 MiB
🔥 Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4
🌟 Enabled addons: storage-provisioner, default-storageclass
💡 kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
deployment.apps/hello-node created
$ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
service/hello-node exposed
$ minikube service --url hello-node
http://192.168.99.113:31955
$ curl http://192.168.99.113:31955
# ... OK
# minikube tunnel in other shell
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node LoadBalancer 10.106.235.139 10.106.235.139 8080:31955/TCP 75s
$ curl http://10.106.235.139:31955/ ; echo
curl: (7) Failed to connect to 10.106.235.139 port 31955: Connection timed out
# and other shell running minikube ssh freezes
# and minikube status doesn't return anymore
$ curl http://192.168.99.113:31955
# still works though. same thing with --container-runtime=containerd:
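A note on the kubectl get svc output above: in PORT(S), 8080:31955/TCP means 8080 is the service (LoadBalancer) port and 31955 is the auto-assigned nodeport. The crash is triggered by pairing the tunnel's external IP with the nodeport rather than the service port.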
Hi, running minikube tunnel and then accessing a service with the wrong port (using the nodeport instead of the loadbalancer port) crashes minikube commands. I'm using minikube 1.0 on Fedora 29.
Here are commands to reproduce:
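The gist of the reproduction, reconstructed from the transcript above (exact IPs and the nodeport will differ per run):

$ kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
$ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
# in a separate terminal:
$ minikube tunnel
# note the EXTERNAL-IP and the nodeport in PORT(S), e.g. 8080:31955/TCP
$ kubectl get svc
# this curl hangs, and the VM stops responding (minikube status and minikube ssh freeze)
$ curl http://<EXTERNAL-IP>:<nodeport>/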