
VM crashes after accessing a service with type loadbalancer on the loadbalancer ip on with the nodeport port (with minikube tunnel running) #4151

Open
jonenst opened this issue Apr 25, 2019 · 23 comments
Labels
area/networking networking issues area/tunnel Support for the tunnel command help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@jonenst

jonenst commented Apr 25, 2019

Hi, running minikube tunnel and then accessing a service with the wrong port (using the nodeport instead of the loadbalancer port) crashes minikube commands. I'm using minikube 1.0 on Fedora 29.

Here are commands to reproduce:

$ ./minikube start
😄  minikube v1.0.0 on linux (amd64)
🤹  Downloading Kubernetes v1.14.0 images in the background ...
🔥  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶  "minikube" IP address is 192.168.99.122
🌐  Found network options:
    ▪ HTTP_PROXY=http://10.135.89.71:3128
    ▪ HTTPS_PROXY=http://10.135.89.71:3128
🐳  Configuring Docker as the container runtime ...
    ▪ env HTTP_PROXY=http://10.135.89.71:3128
    ▪ env HTTPS_PROXY=http://10.135.89.71:3128
🐳  Version of container runtime is 18.06.2-ce
⌛  Waiting for image downloads to complete ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.14.0 ...
🚀  Launching Kubernetes v1.14.0 using kubeadm ... 
⌛  Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑  Configuring cluster permissions ...
🤔  Verifying component health .....
💗  kubectl is now configured to use "minikube"
💡  For best results, install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
🏄  Done! Thank you for using minikube!

$ kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
deployment.apps/hello-node created

$ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
service/hello-node exposed

$ minikube service --url hello-node 
http://192.168.99.122:30863

# the nodeport works 
$ curl http://192.168.99.122:30863 ; echo
Hello World!

# no external ip because we haven't started minikube tunnel yet
$ ./kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-node   LoadBalancer   10.106.41.211   <pending>     8080:30863/TCP   78s
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          3m30s

# now start minikube tunnel on another terminal :  $ minikube tunnel
$ ./kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
hello-node   LoadBalancer   10.106.41.211   10.106.41.211   8080:30863/TCP   90s
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP          3m42s

# the nodeport still works 
$ curl http://192.168.99.122:30863 ; echo
Hello World!

# the external ip and port work too
$ curl http://10.106.41.211:8080 ; echo
Hello World!

# minikube and kubectl still work too
$ minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.122
$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.122:8443
KubeDNS is running at https://192.168.99.122:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# optionally open another terminal and open top in the minikube vm: $ minikube ssh and then top

# This crashes minikube: using the wrong combination of loadbalancer ip and nodeport
$ curl http://10.106.41.211:30863 ; echo

# Now the ssh session running top has frozen, and the VBoxHeadless process spikes to 120% CPU. You have to reboot the minikube virtualbox VM to fix things.
# minikube commands don't work anymore (minikube logs, minikube ssh, minikube status).
# Note that accessing the service still works after that (through the nodeport, or through the external ip and port), as does kubectl cluster-info. Only minikube commands are broken:
$ minikube status

💣  Error getting bootstrapper: getting kubeadm bootstrapper: command runner: getting ssh client for bootstrapper: Error dialing tcp via ssh client: ssh: handshake failed: read tcp 127.0.0.1:52960->127.0.0.1:45625: read: connection reset by peer

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new
@tstromberg
Contributor

Thank you for the amazingly detailed reproduction steps. We don't quite understand what's going on here, but it sounds really, really interesting. Possibly an infinite loop causing memory or CPU exhaustion, or at least some kind of networking panic loop.
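
One way to watch the suspected CPU exhaustion from the host is to keep an eye on the VM process while triggering the bad curl; a minimal sketch, assuming the virtualbox driver and a Linux host with pgrep available (the VBoxHeadless spike is described in the repro above):

# on the host, in a separate terminal:
$ top -p "$(pgrep -f VBoxHeadless | head -1)"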

Help wanted!

@tstromberg tstromberg added area/networking networking issues area/tunnel Support for the tunnel command priority/backlog Higher priority than priority/awaiting-more-evidence. kind/bug Categorizes issue or PR as related to a bug. labels Apr 25, 2019
@tstromberg tstromberg changed the title minikube commands hang after accessing a service with type loadbalancer on the loadbalancer ip on with the nodeport port (with minikube tunnel running) VM crashes after accessing a service with type loadbalancer on the loadbalancer ip on with the nodeport port (with minikube tunnel running) Apr 25, 2019
@tstromberg
Contributor

Any chance this can be replicated without the tunnel running, such as running curl from within minikube ssh?
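
A minimal sketch of that check, reusing the cluster IP and NodePort from the repro above (whether this hangs without the tunnel is exactly the open question):

# with minikube tunnel NOT running:
$ minikube ssh
# then, from inside the VM:
$ curl --max-time 10 http://10.106.41.211:30863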

@jonenst
Author

jonenst commented Apr 26, 2019

I just tried a few more things. Here are my observations:

  • you need minikube tunnel active: if you never started the tunnel, or if you started it and then stopped it, it doesn't break.
  • to trigger the breakage, you can connect from the host or from the minikube vm (in both cases, to the clusterip with the nodeport port); see the sketch after this list
  • it's the same behavior with a service of type NodePort
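
To make the second point concrete, a sketch using the addresses from the original repro (cluster IP 10.106.41.211, NodePort 30863):

# with minikube tunnel running in another terminal:
$ curl --max-time 10 http://10.106.41.211:30863   # from the host: triggers the breakage
# the same curl, run from inside a minikube ssh session, triggers it as well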

@tstromberg tstromberg added r/2019q2 Issue was last reviewed 2019q2 help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels May 24, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 22, 2019
@tstromberg tstromberg removed the r/2019q2 Issue was last reviewed 2019q2 label Sep 20, 2019
@medyagh
Member

medyagh commented Sep 24, 2019

@jonenst could you try to see if this issue still exists with the latest version of minikube?
I will close this issue; please feel free to re-open if it still exists.

@medyagh medyagh closed this as completed Sep 24, 2019
@tstromberg tstromberg reopened this Sep 24, 2019
@tstromberg
Contributor

AFAIK, this is still an issue, though it's possible that the tmpfs migration work may have affected it. We should just follow the repro case and check.

@medyagh
Member

medyagh commented Sep 24, 2019

Update: I ran the commands above on minikube 1.4.0 with the kvm driver, and the issue seems to have been resolved. @jonenst, do you mind checking if it is resolved for you as well?

And thank you for such great repeatable instructions!

 minikube service --url hello-node
http://192.168.39.23:30485
 minikube service list
|----------------------|---------------------------|----------------------------|-----|
|      NAMESPACE       |           NAME            |        TARGET PORT         | URL |
|----------------------|---------------------------|----------------------------|-----|
| default              | hello-node                | http://192.168.39.23:30485 |
| default              | kubernetes                | No node port               |
| kube-system          | kube-dns                  | No node port               |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port               |
| kubernetes-dashboard | kubernetes-dashboard      | No node port               |
|----------------------|---------------------------|----------------------------|-----|
curl http://192.168.39.23:30485/
Hello World!



curl http://192.168.39.23:30485/
Hello World!



kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-node   LoadBalancer   10.102.27.230   <pending>     8080:30485/TCP   110s
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          3m26s
kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
hello-node   LoadBalancer   10.102.27.230   10.102.27.230   8080:30485/TCP   2m34s
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP          4m10s
curl http://192.168.39.23:30485/
Hello World!

curl  10.102.27.230:8080
Hello World!
minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.39.23



kubectl cluster-info
Kubernetes master is running at https://192.168.39.23:8443
KubeDNS is running at https://192.168.39.23:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
curl http://192.168.39.23:30485/
Hello World!
curl  10.102.27.230:8080
Hello World!


minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.39.23


On the other terminals I have minikube tunnel running, and also minikube ssh (running top inside):

$ top

top - 18:58:57 up 10 min,  1 user,  load average: 1.13, 0.58, 0.30
Tasks: 134 total,   1 running, 133 sleeping,   0 stopped,   0 zombie
%Cpu0  :   2.7/2.1     5[|||||                                                                                               ]
%Cpu1  :   2.7/2.0     5[|||||                                                                                               ]
GiB Mem : 61.1/1.8      [                                                                                                    ]
GiB Swap:  0.0/0.0      [                                                                                                    ]

  PID USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND                                                                                                                             
    1 root      20   0  116.9m   7.2m   0.0   0.4   0:15.42 S /sbin/init noembed norestore                                                                                                        
 1348 root      20   0   36.0m   6.4m   0.0   0.3   0:00.26 S  `- /usr/lib/systemd/systemd-journald                                                                                               
 1637 systemd+  20   0  109.1m   2.3m   0.0   0.1   0:00.02 S  `- /usr/lib/systemd/systemd-timesyncd                                                                                              
 1641 root      20   0   43.1m   5.1m   0.0   0.3   0:00.07 S  `- /usr/lib/systemd/systemd-udevd                                                                                                  
 1655 systemd+  20   0   46.1m   4.8m   0.0   0.3   0:00.03 S  `- /usr/lib/systemd/systemd-networkd                                                                                               
 1657 systemd+  20   0   33.3m   3.8m   0.0   0.2   0:00.11 S  `- /usr/lib/systemd/systemd-resolved                                                                                               
 1665 root      20   0   12.9m   0.5m   0.0   0.0   0:00.00 S  `- /usr/sbin/rpc.mountd                                                                                                            
 1668 root      20   0    9.0m   0.5m   0.0   0.0   0:00.00 S  `- /sbin/getty -L ttyS0 115200 vt100                                                                                               
 1669 dbus      20   0   22.1m   3.4m   0.0   0.2   0:00.13 S  `- /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only                        
 1680 root      20   0    9.0m   0.4m   0.0   0.0   0:00.00 S  `- /sbin/getty -L tty1 115200 vt100                                                                                                
 1682 root      20   0   19.3m   2.2m   0.0   0.1   0:00.00 S  `- /usr/bin/rpcbind                                                                                                                
 1684 root      20   0   27.7m   4.1m   0.0   0.2   0:00.07 S  `- /usr/lib/systemd/systemd-logind                                                                                                 
 1751 root      20   0   23.3m   4.4m   0.0   0.2   0:00.00 S  `- /usr/sbin/sshd -D -e                                                                                                            
 7959 root      20   0   23.5m   4.4m   0.0   0.2   0:00.00 S      `- sshd: docker [priv]                                                                                                         
 7961 docker    20   0   23.5m   2.6m   0.0   0.1   0:00.05 S          `- sshd: docker@pts/0                                                                                                      
 7962 docker    20   0   15.9m   3.1m   0.0   0.2   0:00.00 S              `- -bash                                                                                                               

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 24, 2019
@jonenst
Author

jonenst commented Oct 25, 2019

Hi,
the problem is the same with minikube 1.4

$ ./minikube-linux-amd64 start
😄  minikube v1.4.0 on Fedora 29
🔥  Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🚜  Pulling images ...
🚀  Launching Kubernetes ... 
⌛  Waiting for: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"
💡  For best results, install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
$ ./kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
deployment.apps/hello-node created
$ ./kubectl expose deployment hello-node --type=LoadBalancer --port=8080
service/hello-node exposed
$ ./kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-node   LoadBalancer   10.99.216.132   <pending>     8080:31631/TCP   9s
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          100s
$ ./kubectl get svc # here I launched minikube tunnel in another terminal
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
hello-node   LoadBalancer   10.99.216.132   10.99.216.132   8080:31631/TCP   27s
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP          118s
$ ./minikube-linux-amd64 status 
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.137
$ ./minikube-linux-amd64 status # here I launched curl http://10.99.216.132:31631 in another terminal; hangs
💣  Error getting bootstrapper: getting kubeadm bootstrapper: command runner: getting ssh client for bootstrapper: Error dialing tcp via ssh client: ssh: handshake failed: read tcp 127.0.0.1:32828->127.0.0.1:41263: read: connection reset by peer

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose

@medyagh
Member

medyagh commented Nov 4, 2019

@jonenst do you use a corp network or a VPN?

@olivierlemasle
Member

olivierlemasle commented Nov 22, 2019

Same issue with Minikube 1.5.2 and vm-driver virtualbox.

The issue occurs if the external IP address is accessed on any port except the "right" port (8080 with the instructions above).
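
For instance, with the addresses from the original repro (external IP 10.106.41.211, service port 8080, NodePort 30863), the pattern would be:

$ curl http://10.106.41.211:8080                  # the "right" port: works
$ curl --max-time 10 http://10.106.41.211:30863   # any other port, NodePort or otherwise: triggers the breakage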

I'm not using any VPN.

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Nov 22, 2019
@jonenst
Author

jonenst commented Dec 17, 2019

do you use a corp network or a VPN?

no

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 16, 2020
@olivierlemasle
Member

@jonenst I tested again with Minikube 1.8.2 (and Virtualbox driver); this time, I could not reproduce the issue. Could you please test as well?

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 17, 2020
@jonenst
Author

jonenst commented Mar 17, 2020

I tested again and the problem still shows up:

$ minikube start --driver=virtualbox
😄  minikube v1.8.2 on Fedora 29
✨  Using the virtualbox driver based on user configuration
🔥  Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
🚀  Launching Kubernetes ... 
🌟  Enabling addons: default-storageclass, storage-provisioner
⌛  Waiting for cluster to come online ...
🏄  Done! kubectl is now configured to use "minikube"
$ kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
deployment.apps/hello-node created
$ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
service/hello-node exposed
$ minikube service --url hello-node 
http://192.168.99.103:32000
$ curl http://192.168.99.103:32000 ; echo
Hello World!
$ kubectl get svc # with minikube tunnel launched in another terminal
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
hello-node   LoadBalancer   10.108.145.239   10.108.145.239   8080:32000/TCP   6m22s
kubernetes   ClusterIP      10.96.0.1        <none>           443/TCP          83m
$ curl http://10.108.145.239:32000 # crashed
^C

@olivierlemasle
Member

@jonenst If I'm not mistaken, this is normal behaviour (whereas previous versions of Minikube had the issue you reported above).

In my tests (changing the IP addresses and ports to match yours):

  • curl http://192.168.99.103:32000 works (which is normal)
  • curl http://10.108.145.239:8080 works (which is normal with minikube tunnel)
  • curl http://10.108.145.239:32000 freezes and then fails after a long time ("Connection timed out")... which is also normal because it is not the right port (see the command sketch after this list)
  • in the meantime, I can still use minikube status, kubectl get svc or other kubectl/minikube commands (which failed in the previous versions once I curled the wrong combination of address and port)
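
Put together, a sketch of those combinations using the IPs and ports from the comment above (192.168.99.103 is the node IP, 10.108.145.239 the external IP assigned by the tunnel):

$ curl http://192.168.99.103:32000                 # node IP + NodePort: works
$ curl http://10.108.145.239:8080                  # external IP + service port: works with the tunnel
$ curl --max-time 60 http://10.108.145.239:32000   # external IP + NodePort: times out, but should not break anything
$ minikube status                                  # keeps working in the meantime on 1.8.2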

@jonenst
Author

jonenst commented Mar 18, 2020

When I tested yesterday, it did crash the minikube VM: minikube status didn't work anymore, a terminal with minikube ssh running 'top' froze, and VBoxHeadless was running at 100% CPU on the host. I didn't wait for the initial curl to time out, though. Sorry for not mentioning this in my first reply.

@jonenst
Author

jonenst commented Mar 18, 2020

I just tested again, and it crashed. Waiting for the bad curl to time out didn't fix the problem, and neither did interrupting minikube tunnel. I had to minikube delete the VM.

🔥  Deleting "minikube" in virtualbox ...
❌  Failed to delete cluster: host remove: /usr/bin/VBoxManage controlvm minikube poweroff failed:
0%...10%...20%...30%...40%...50%...
Progress state: NS_ERROR_FAILURE
VBoxManage: error: Failed to power off machine
VBoxManage: error: The VM session was aborted
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component SessionMachine, interface ISession
VBoxManage: error: Context: "RTEXITCODE handleControlVM(HandlerArg*)" at line 580 of file VBoxManageControlVM.cpp

📌  You may need to manually remove the "minikube" VM from your hypervisor
🔥  Removing /home/jonenst/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
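
If the VM really is left behind (as the "manually remove" warning above suggests), a manual cleanup along these lines should work; a sketch assuming the VirtualBox CLI, not taken from the thread itself:

$ VBoxManage controlvm minikube poweroff      # may fail, as it did above
$ VBoxManage unregistervm minikube --delete   # remove the VM and its files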

@olivierlemasle
Member

Strange. As noted above, when I tested last November with Minikube 1.5.2, I had the exact same problem as you, reproduced multiple times.

I tested yesterday (Minikube 1.8.2, Fedora 31, Virtualbox 6.1.4) and it worked. Now I tested again (same configuration) and the issue is back 🤕

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 16, 2020
@tstromberg tstromberg added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 24, 2020
@tstromberg
Contributor

I wonder if this behavior changes if CNI is enabled.

@medyagh
Member

medyagh commented Feb 18, 2021

@jonenst do you mind giving it another try with CNI enabled?

I am curious whether this would fix the problem (with the latest version of minikube):

minikube delete --all
minikube start --container-runtime=containerd
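
As an aside (my assumption, not part of the suggestion above): newer minikube releases also expose a --cni flag, so a CNI can be enabled while keeping Docker as the runtime, e.g.:

minikube delete --all
minikube start --driver=virtualbox --cni=bridge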

@jonenst
Author

jonenst commented Mar 31, 2021

Still broken with the virtualbox driver on the latest minikube, with or without "--container-runtime=containerd":

without "--container-runtime=containerd":

$ minikube start --driver=virtualbox
😄  minikube v1.18.1 on Fedora 29
✨  Using the virtualbox driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.20.2 preload ...
    > preloaded-images-k8s-v9-v1....: 491.22 MiB / 491.22 MiB  100.00% 2.36 MiB
🔥  Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
deployment.apps/hello-node created

$ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
service/hello-node exposed

$ minikube service --url hello-node 
http://192.168.99.113:31955

$ curl http://192.168.99.113:31955
# ... OK

# minikube tunnel in other shell
$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
hello-node   LoadBalancer   10.106.235.139   10.106.235.139   8080:31955/TCP   75s

$ curl http://10.106.235.139:31955/ ; echo
curl: (7) Failed to connect to 10.106.235.139 port 31955: Connection timed out
# and other shell running minikube ssh freezes
# and minikube status doesn't return anymore

$ curl http://192.168.99.113:31955
# still works though.

same thing with --container-runtime=containerd:

$ minikube start --container-runtime=containerd  --driver=virtualbox
😄  minikube v1.18.1 on Fedora 29
✨  Using the virtualbox driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.20.2 preload ...
    > preloaded-images-k8s-v9-v1....: 910.67 MiB / 910.67 MiB  100.00% 2.58 MiB
🔥  Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
📦  Preparing Kubernetes v1.20.2 on containerd 1.4.3 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
deployment.apps/hello-node created

$ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
service/hello-node exposed

$ minikube service --url hello-node 
http://192.168.99.114:31727

$ curl http://192.168.99.114:31727 ; echo 
# ok..

# minikube tunnel in other shell
$ ./kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
hello-node   LoadBalancer   10.110.171.116   10.110.171.116   8080:31727/TCP   116s

$ curl http://10.110.171.116:31727 ; echo 
^C
# top in minikube ssh freezes
# minikube status doesn't answer

$ curl http://192.168.99.114:31727 ; echo 
# still ok..
