
Minikube tunnel hangs on Ubuntu 20 in Windows 10 #12115

Closed
johnnybigert opened this issue Aug 4, 2021 · 28 comments
Labels
  • area/tunnel: Support for the tunnel command
  • kind/support: Categorizes issue or PR as a support question.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • long-term-support: Long-term support issues that can't be fixed in code
  • os/wsl-windows: microsoft wsl related issues

Comments

@johnnybigert

Steps to reproduce the issue:

  1. minikube start
  2. kubectl apply -f manifests/storage-deployment.yaml
  3. kubectl apply -f manifests/storage-service.yaml
  4. minikube tunnel

Full output of minikube logs command:
out.txt

Full output of failed command:
Observed behavior: The minikube tunnel command provides no output and appears to be frozen. (However, using htop I can see that the tunnel command is using cpu and memory. Also, if I do minikube stop in another terminal, the tunnel command starts outputting error messages.)
Expected behavior: Output of service IP addresses etc. like the output in this example.

Note that the Ubuntu 20.04 is running in Windows 10 (WSL2).

@spowelljr spowelljr added area/tunnel Support for the tunnel command kind/support Categorizes issue or PR as a support question. os/wsl-windows microsoft wsl related issues labels Aug 4, 2021
@spowelljr
Member

Hi @johnnybigert, thanks for reporting your issue with minikube!

There's a large discussion about running minikube on WSL2 in issue #7879.

There are multiple comments there discussing the tunnel command; I'd recommend reading through them to see if any apply to you and perhaps solve your issue, thanks!

@johnnybigert
Author

@spowelljr Thanks for responding. I read #7879; nothing helpful there as far as I could see, just @ashishsecdev having the same issue as me.

@spowelljr
Member

@johnnybigert Thanks for checking out that issue, I'll see if I have time to look into it today. In the meantime, try running minikube tunnel --alsologtostderr; that should give you more verbose output and possibly show where the problem lies.

@johnnybigert
Author

johnnybigert commented Aug 10, 2021

Nice @spowelljr, got some more info:

$ minikube tunnel --alsologtostderr
I0810 16:20:47.832318 42964 out.go:286] Setting OutFile to fd 1 ...
I0810 16:20:47.833475 42964 out.go:338] isatty.IsTerminal(1) = true
I0810 16:20:47.833541 42964 out.go:299] Setting ErrFile to fd 2...
I0810 16:20:47.834013 42964 out.go:338] isatty.IsTerminal(2) = true
I0810 16:20:47.834427 42964 root.go:312] Updating PATH: /home/johnny/.minikube/bin
I0810 16:20:47.836683 42964 mustload.go:65] Loading cluster: minikube
I0810 16:20:47.838820 42964 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0810 16:20:48.061876 42964 host.go:66] Checking if "minikube" exists ...
I0810 16:20:48.062366 42964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0810 16:20:48.250879 42964 api_server.go:164] Checking apiserver status ...
I0810 16:20:48.251030 42964 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0810 16:20:48.251212 42964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:20:48.443480 42964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65391 SSHKeyPath:/home/johnny/.minikube/machines/minikube/id_rsa Username:docker}
I0810 16:20:48.604169 42964 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2229/cgroup
I0810 16:20:48.616791 42964 api_server.go:180] apiserver freezer: "7:freezer:/docker/504ab5c5ef196d5a3224239f6d9c3fcd9260f9d769071c0ef37fc7d95b970bc7/kubepods/burstable/podcefbe66f503bf010430ec3521cf31be8/4aa3af29e94614945e7ae4185c8028c567c6e5561d7042dd0100083b2b1cf6b4"
I0810 16:20:48.616910 42964 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/504ab5c5ef196d5a3224239f6d9c3fcd9260f9d769071c0ef37fc7d95b970bc7/kubepods/burstable/podcefbe66f503bf010430ec3521cf31be8/4aa3af29e94614945e7ae4185c8028c567c6e5561d7042dd0100083b2b1cf6b4/freezer.state
I0810 16:20:48.628164 42964 api_server.go:202] freezer state: "THAWED"
I0810 16:20:48.628233 42964 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:65390/healthz ...
I0810 16:20:48.638548 42964 api_server.go:265] https://127.0.0.1:65390/healthz returned 200: ok
I0810 16:20:48.638635 42964 tunnel.go:57] Checking for tunnels to cleanup...
I0810 16:20:48.648585 42964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube

...and this is where it freezes.

@pfeigl

pfeigl commented Aug 31, 2021

I'm having the very same problem. Here's my log in case it helps, but it looks very similar to @johnnybigert's.

I0831 18:22:04.551095    3328 out.go:286] Setting OutFile to fd 88 ...
I0831 18:22:04.575864    3328 out.go:333] TERM=,COLORTERM=, which probably does not support color
I0831 18:22:04.575864    3328 out.go:299] Setting ErrFile to fd 92...
I0831 18:22:04.575864    3328 out.go:333] TERM=,COLORTERM=, which probably does not support color
I0831 18:22:04.591706    3328 mustload.go:65] Loading cluster: minikube
I0831 18:22:04.623706    3328 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0831 18:22:05.787724    3328 cli_runner.go:168] Completed: docker container inspect minikube --format={{.State.Status}}: (1.1639982s)
I0831 18:22:05.787724    3328 host.go:66] Checking if "minikube" exists ...
I0831 18:22:05.803656    3328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0831 18:22:06.142604    3328 api_server.go:164] Checking apiserver status ...
I0831 18:22:06.169184    3328 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0831 18:22:06.183977    3328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0831 18:22:06.525544    3328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:C:\Users\philipp.feigl\.minikube\machines\minikube\id_rsa Username:docker}
I0831 18:22:06.688352    3328 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2083/cgroup
I0831 18:22:06.699709    3328 api_server.go:180] apiserver freezer: "20:freezer:/docker/563f5ed6dc71139f783d154e8b6d83c6d3e88b4cdf35c12882ab51dee0ab7373/kubepods/burstable/podcefbe66f503bf010430ec3521cf31be8/26a2826f49d8b2eeda603e849abc41548dc757aab1fd6a03c9d2087f6118c318"
I0831 18:22:06.727813    3328 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/563f5ed6dc71139f783d154e8b6d83c6d3e88b4cdf35c12882ab51dee0ab7373/kubepods/burstable/podcefbe66f503bf010430ec3521cf31be8/26a2826f49d8b2eeda603e849abc41548dc757aab1fd6a03c9d2087f6118c318/freezer.state
I0831 18:22:06.740717    3328 api_server.go:202] freezer state: "THAWED"
I0831 18:22:06.740717    3328 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
I0831 18:22:06.749297    3328 api_server.go:265] https://127.0.0.1:53000/healthz returned 200: ok
I0831 18:22:06.749297    3328 tunnel.go:57] Checking for tunnels to cleanup...
I0831 18:22:06.774848    3328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube

At this point the tunnel just sits there: no tunneling works and no user input is requested. I also tried running the tunnel in an admin console, to no avail.

@pfeigl

pfeigl commented Aug 31, 2021

Oh, I think I found something. Running that last command directly in the shell gives the following error:

C:\WINDOWS\system32>docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
Template parsing error: template: :1: unexpected "/" in operand

Looking at the command, it seems the problem is the " characters being inside the actual inspect format string.

I just dug a little deeper. It feels like this is the core of the problem:
docker/cli#3241

So far I couldn't get the command to run; even after fixing the quotes, it still errors out on an unexpected / with the following error:

PS C:\Users\philipp.feigl> docker inspect --format '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' minikube
Template parsing error: template: :1: unexpected "/" in operand

@pfeigl

pfeigl commented Aug 31, 2021

OK, after some more digging, the correct command seems to be the following:

docker container inspect -f '{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}' minikube

The following changes were needed:

  • Removed outside double quotes
  • Escaped double quotes in filter

With that, the actual port was reported back as expected.

The line that probably needs to be changed is this:

rr, err = runCmd(exec.Command(ociBin, "container", "inspect", "-f", fmt.Sprintf("'{{(index (index .NetworkSettings.Ports \"%d/tcp\") 0).HostPort}}'", contPort), ociID))

What I'm not sure about, since I don't currently have an old Docker installation available, is whether we can simply add the escaped quotes and have it still work with both older and newer versions, or whether we have to check the Docker version and use different implementations.

@pgschr

pgschr commented Sep 14, 2021

@pfeigl did you fix the problem with your change? Did you recompile the executable?

@pfeigl

pfeigl commented Sep 15, 2021

No, tbh I just analyzed the problem and tested the fixed command directly in the shell. I never changed or compiled the minikube code.

@speedyankur

Also facing the same issue

@thanili

thanili commented Oct 27, 2021

Any updates on that? I am also facing the same issue :/

@spowelljr spowelljr self-assigned this Nov 17, 2021
@spowelljr spowelljr added this to the 1.25.0 milestone Nov 17, 2021
@spowelljr
Member

spowelljr commented Nov 17, 2021

Thanks for finding the cause @pfeigl, I'll take a look at implementing your change and seeing if it's backwards compatible with older versions of Docker as you mentioned.

@spowelljr
Member

So I was looking into this a bit; the minikube tunnel command documentation is possibly lacking.

minikube tunnel is meant to be opened in a separate terminal and left running; the command won't "complete". You're meant to Ctrl+C it when you're done using the tunnel.

If you do the following:

$ minikube start --driver=docker
$ kubectl create deployment hello-minikube1 --image=k8s.gcr.io/echoserver:1.4
$ kubectl expose deployment hello-minikube1 --type=LoadBalancer --port=8080
$ minikube tunnel
🏃  Starting tunnel for service hello-minikube1.

That's what I got on Ubuntu 20.04 WSL.

If you don't have a service running and run minikube tunnel --alsologtostderr:

./out/minikube tunnel --alsologtostderr
I1117 22:42:00.663169   47004 out.go:297] Setting OutFile to fd 1 ...
I1117 22:42:00.663674   47004 out.go:349] isatty.IsTerminal(1) = true
I1117 22:42:00.663773   47004 out.go:310] Setting ErrFile to fd 2...
I1117 22:42:00.663877   47004 out.go:349] isatty.IsTerminal(2) = true
I1117 22:42:00.664154   47004 root.go:315] Updating PATH: /home/powellsteven/.minikube/bin
I1117 22:42:00.664765   47004 mustload.go:65] Loading cluster: minikube
I1117 22:42:00.665406   47004 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1117 22:42:00.666603   47004 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1117 22:42:01.111875   47004 host.go:66] Checking if "minikube" exists ...
I1117 22:42:01.113424   47004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I1117 22:42:01.436035   47004 api_server.go:165] Checking apiserver status ...
I1117 22:42:01.436640   47004 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1117 22:42:01.436997   47004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1117 22:42:01.759673   47004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58115 SSHKeyPath:/home/powellsteven/.minikube/machines/minikube/id_rsa Username:docker}
I1117 22:42:01.900785   47004 ssh_runner.go:152] Run: sudo egrep ^[0-9]+:freezer: /proc/1948/cgroup
I1117 22:42:01.920846   47004 api_server.go:181] apiserver freezer: "20:freezer:/docker/5b546678c9426c1ce5505e451243d9aecfd4f22d552b4bc97d5c6a762e68ec88/kubepods/burstable/pod5a60ad17d917e03c0e9b4ca796aa9460/7e5ed79f72feeabeb51161a159dc6f7e5d43b3a96e8ee7666fceb1de6f0b0a1d"
I1117 22:42:01.921200   47004 ssh_runner.go:152] Run: sudo cat /sys/fs/cgroup/freezer/docker/5b546678c9426c1ce5505e451243d9aecfd4f22d552b4bc97d5c6a762e68ec88/kubepods/burstable/pod5a60ad17d917e03c0e9b4ca796aa9460/7e5ed79f72feeabeb51161a159dc6f7e5d43b3a96e8ee7666fceb1de6f0b0a1d/freezer.state
I1117 22:42:01.939607   47004 api_server.go:203] freezer state: "THAWED"
I1117 22:42:01.939830   47004 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58114/healthz ...
I1117 22:42:01.950950   47004 api_server.go:266] https://127.0.0.1:58114/healthz returned 200:
ok
I1117 22:42:01.951140   47004 tunnel.go:57] Checking for tunnels to cleanup...
I1117 22:42:01.956582   47004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube

This is the exact same output that was reported, but the tunnel should be operational.

minikube should do a better job notifying that the tunnel is running.

Can anyone try the steps above and let me know if their tunnel is in fact working or if there's an actual error on their end, thanks!

@rfk

rfk commented Nov 26, 2021

Can anyone try the steps above and let me know if their tunnel is in fact working or if there's an actual error on their end, thanks!

I find myself in a similar situation to this issue, Pengwin 21.10.1 on WSL2.

When I try your steps above for hello-minikube1 I get the same result, including minikube tunnel reporting that it is Starting tunnel for service hello-minikube1 (I was a little surprised that it does not prompt me for a password or generate any of the output suggested in this example, though).

If I run kubectl get svc without the tunnel process, I see this output:

$ kubectl get svc
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-minikube1   LoadBalancer   10.104.136.25   <pending>     8080:32372/TCP   15m
kubernetes        ClusterIP      10.96.0.1       <none>        443/TCP          16m

With minikube tunnel running in a separate terminal, it does seem to allocate an external IP:

$ kubectl get svc
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-minikube1   LoadBalancer   10.104.136.25   127.0.0.1     8080:32372/TCP   17m
kubernetes        ClusterIP      10.96.0.1       <none>        443/TCP          19m

And I can then wget http://127.0.0.1:8080 and I get a dump of the HTTP request details. Is that enough to confirm that the tunnel is working as expected?

@rfk

rfk commented Nov 26, 2021

Comparing the setup that led me to this bug to the apparently-working hello-minikube example, I see that kubectl is reporting my services as type ClusterIP rather than type LoadBalancer. I don't know enough about all these tools yet to know how to debug that, but it seems like a useful clue so I figured I'd share it in case it helps anyone else.

@rfk

rfk commented Nov 26, 2021

I can confirm that if I use kubectl edit svc to change the type of my service to LoadBalancer, then the minikube tunnel running in another terminal will detect that change and successfully open a tunnel to the service. So I think at least for me, the tunnel was working correctly and it was my service setup that was incorrect. Running minikube tunnel didn't generate any output because there weren't any services for it to expose.

However, other folks on my team who are using Linux rather than WSL observed minikube tunnel appearing to work without this change, allowing them to connect to services of type ClusterIP. Is there maybe some side-effect of the tunnel that would make ClusterIP services available on Linux but not when running under WSL?
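For anyone else hitting the ClusterIP case described above: a quick, editor-free way to do the same switch is kubectl patch. A sketch, using the hello-minikube1 service from earlier in this thread as a placeholder for your own service name:

```shell
# Change the service type so `minikube tunnel` (running in another
# terminal) will pick it up and assign an external IP.
kubectl patch svc hello-minikube1 -p '{"spec": {"type": "LoadBalancer"}}'

# Verify: EXTERNAL-IP should move from <pending> to an address
# once the tunnel is running.
kubectl get svc hello-minikube1
```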

@spowelljr
Member

spowelljr commented Dec 29, 2021

Hi @rfk,

And I can then wget http://127.0.0.1:8080 and I get a dump of the HTTP request details. Is that enough to confirm that the tunnel is working as expected?

Yes you should be getting something like:

Password:
Status:
 machine: minikube
 pid: 39087
 route: 10.96.0.0/12 -> 192.168.64.194
 minikube: Running
 services: [hello-minikube]
    errors:
  minikube: no errors
  router: no errors
  loadbalancer emulator: no errors

So I think at least for me, the tunnel was working correctly and it was my service setup that was incorrect. Running minikube tunnel didn't generate any output because there weren't any services for it to expose.

Based on the discussion on this issue I created a PR (#12976) that will be included with the next release of minikube.

Before (tunnel without service):

$ minikube tunnel

After:

$ minikube tunnel
✅  Tunnel successfully started

📌  NOTE: This process must stay alive for the tunnel to be accessible ...

Is there maybe some side-effect of the tunnel that would make ClusterIP services available on Linux but not when running under WSL?

It's possible, it will require further investigation to confirm.

@spowelljr spowelljr added the long-term-support Long-term support issues that can't be fixed in code label Dec 29, 2021
@lord22shark

I got the same issue when running minikube tunnel --alsologtostderr.
Environment:

  • macOS Catalina 10.15.7
  • Docker 20.10.11
  • Minikube v1.24.0

When tunnel invokes cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube, the process gets stuck at this step. I tried the same command without the outer double quotes (docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' minikube) and it ran successfully. Is there a simple way to recompile minikube so I can remove the quotes?

Extra information:

$ minikube version
minikube version: v1.24.0
commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b

---

$ docker version
Client:
 Cloud integration: v1.0.22
 Version:           20.10.11
 API version:       1.41
 Go version:        go1.16.10
 Git commit:        dea9396
 Built:             Thu Nov 18 00:36:09 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.11
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.9
  Git commit:       847da18
  Built:            Thu Nov 18 00:35:39 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

@spowelljr
Member

Hi @lord22shark, can you please download this development version of minikube and let me know what the output of the tunnel command is, thanks!

https://storage.googleapis.com/minikube-builds/master/minikube-darwin-amd64

@lord22shark

Hey @spowelljr
Your build worked successfully :-D. Here's the output:

$ minikube tunnel --alsologtostderr
I0104 23:08:52.514231    1794 out.go:297] Setting OutFile to fd 1 ...
I0104 23:08:52.514616    1794 out.go:349] isatty.IsTerminal(1) = true
I0104 23:08:52.514627    1794 out.go:310] Setting ErrFile to fd 2...
I0104 23:08:52.514636    1794 out.go:349] isatty.IsTerminal(2) = true
I0104 23:08:52.514766    1794 root.go:315] Updating PATH: /XXXXXXXX/.minikube/bin
I0104 23:08:52.515070    1794 mustload.go:65] Loading cluster: minikube
I0104 23:08:52.550899    1794 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I0104 23:08:52.559470    1794 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
W0104 23:08:56.309157    1794 out.go:241] ❗  Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 3.749208354s
❗  Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 3.749208354s
W0104 23:08:56.309281    1794 out.go:241] 💡  Restarting the docker service may improve performance.
💡  Restarting the docker service may improve performance.
I0104 23:08:56.309301    1794 cli_runner.go:186] Completed: docker container inspect minikube --format={{.State.Status}}: (3.749208354s)
I0104 23:08:56.309325    1794 host.go:66] Checking if "minikube" exists ...
I0104 23:08:56.309740    1794 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0104 23:08:56.803654    1794 api_server.go:165] Checking apiserver status ...
I0104 23:08:56.977056    1794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0104 23:08:56.977236    1794 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0104 23:08:57.382218    1794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50252 SSHKeyPath:/XXXXXXXXXX/.minikube/machines/minikube/id_rsa Username:docker}
I0104 23:08:59.302240    1794 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.325170979s)
I0104 23:08:59.302439    1794 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1703/cgroup
W0104 23:08:59.630079    1794 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1703/cgroup: Process exited with status 1
stdout:

stderr:
I0104 23:08:59.630123    1794 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50251/healthz ...
I0104 23:08:59.643639    1794 api_server.go:266] https://127.0.0.1:50251/healthz returned 200:
ok
I0104 23:08:59.643677    1794 tunnel.go:59] Checking for tunnels to cleanup...
I0104 23:08:59.695516    1794 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0104 23:09:00.524141    1794 out.go:176] ✅  Tunnel successfully started
✅  Tunnel successfully started
I0104 23:09:00.745550    1794 out.go:176] 

I0104 23:09:00.834476    1794 out.go:176] 📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
I0104 23:09:00.912486    1794 out.go:176] 

I0104 23:09:01.012652    1794 out.go:176] ❗  The service/ingress ingress-analytics requires privileged ports to be exposed: [80 443]
❗  The service/ingress ingress-analytics requires privileged ports to be exposed: [80 443]
I0104 23:09:01.090376    1794 out.go:176] 🔑  sudo permission will be asked for it.
🔑  sudo permission will be asked for it.
I0104 23:09:01.212571    1794 out.go:176] 🏃  Starting tunnel for service ingress-analytics.
🏃  Starting tunnel for service ingress-analytics.
Password:
^CI0104 23:10:40.509208    1794 out.go:176] ✋  Stopped tunnel for service ingress-analytics.
✋  Stopped tunnel for service ingress-analytics.

@spowelljr spowelljr removed their assignment Jan 26, 2022
@laundy

laundy commented Mar 4, 2022

Until the fix is shipped, you can also use kubectl port-forward service/<service-name> <local port>:<node port> (see https://minikube.sigs.k8s.io/docs/start/ #4). This worked for me on Windows 10 with WSL2 and Docker driver.
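As a concrete sketch of that workaround (service name and ports below are taken from the hello-minikube1 example earlier in this thread; substitute your own):

```shell
# Forward local port 7080 to the service's port 8080; no tunnel needed.
# Leave this running, like `minikube tunnel`, while you use the service.
kubectl port-forward service/hello-minikube1 7080:8080

# Then, in another terminal:
#   curl http://localhost:7080
```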

@hnviradiya

I am facing this issue on windows 11.

@hnviradiya

Until the fix is shipped, you can also use kubectl port-forward service/<service-name> <local port>:<node port> (see https://minikube.sigs.k8s.io/docs/start/ #4). This worked for me on Windows 10 with WSL2 and Docker driver.

Looks like ingress is also not working because of this; it also hangs.

@oldManLemon

Yeah, mine also hangs, and I'm also on Windows 10. I got error: timed out waiting for the condition as a result. I also tried kubectl expose deployment <deployment> --type=LoadBalancer --port=8100 straight from https://minikube.sigs.k8s.io/docs/handbook/accessing/ with no luck either.

It claims it exists but I am constantly getting a timeout.

@spowelljr spowelljr modified the milestones: 1.26.0, 1.27.0-candidate Jun 24, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 22, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 22, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Nov 21, 2022
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
