
minikube does not support exposing services to Codespaces #15928

Closed
worldofgeese opened this issue Feb 26, 2023 · 18 comments
Labels
  • co/docker-driver: Issues related to kubernetes in container
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@worldofgeese

worldofgeese commented Feb 26, 2023

What Happened?

A user on the #minikube Slack channel previously asked for help getting minikube to work on GitHub Codespaces. I have confirmed that, whether a service is exposed as a NodePort via minikube service or as a LoadBalancer via minikube tunnel, minikube exposes it on the internal network of the Codespaces VM, 192.168.49.2, and not on localhost. This is a non-starter for any workflow on Codespaces that requires access to services.

Attach the log file

I've since moved to k3d and kind in all my Dev Container and Codespaces workflows (both expose to 0.0.0.0), so I don't have logs to offer, but it should be easy to reproduce: fire up this minimal Dev Container containing minikube, deploy a hello-world service, then attempt to expose it with minikube service or minikube tunnel (with the appropriate ingress controller deployed, if desired).
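
A rough reproduction along those lines might look like the following (the deployment name and image are placeholders; any small web service will do):

    minikube start --driver=docker
    kubectl create deployment hello-world --image=nginx
    kubectl expose deployment hello-world --type=NodePort --port=80
    # prints a http://192.168.49.2:<nodeport> URL, which Codespaces cannot forward
    minikube service hello-world --url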

Operating System

Linux

Driver

Docker

@afbjorklund
Collaborator

It would be good if you could explain what you are proposing to change, and how it would help the minikube user?

Deploying the development k8s cluster to localhost, or to a private VM network, is done on purpose (for security).

@worldofgeese
Author

worldofgeese commented Feb 26, 2023

Because a codespace is already running in a secure VM, minikube should at least expose services on 127.0.0.1 and not on the Codespaces VM's network. Currently minikube will expose a service at e.g. http://192.168.49.2:30318, but minikube dashboard (which does work in Codespaces) on 127.0.0.1. Codespaces does support rewriting and forwarding links that point to localhost or 127.0.0.1 (see how that works here), but not links that point to the IP of the VM.

Users will expect minikube to just work in Codespaces, and there's no current method that I know of to make it do so. Even the GitHub Codespaces development team chooses k3d over minikube, for this reason and for performance.

We're going to see a lot more users running cloud development environments in the future: it'd be great to see this use-case supported.

@afbjorklund
Collaborator

afbjorklund commented Feb 26, 2023

This sounds like it would need more adoption, similar to the katacoda examples, which currently use the "none" driver.

But the docker driver is supposed to publish all the ports to localhost, if it is running in a container (and not in a VM).

We're going to see a lot more users running cloud development environments in the future: it'd be great to see this use-case supported.

That is true; I don't think the current "let's log in as root on the control plane" tutorial is pointing in the right direction...

But it would probably need some better docs on how to run in Codespaces; the JSON link you sent was a bit terse?

@afbjorklund
Collaborator

afbjorklund commented Feb 26, 2023

Previous attempts to make minikube work better in Lima and in Multipass have not been so successful.

Maybe this one will be better? Reading https://docs.github.com/en/codespaces/getting-started/deep-dive

@afbjorklund
Collaborator

afbjorklund commented Feb 26, 2023

The earlier attempts are mostly about providing virtual machines, and then running the "none" driver.

I was wondering whether running with "KIC" (or kind or k3d) would be a separate scenario / tutorial?

The single-node cluster is simpler to explain, with less magic and fewer moving parts, but it is also more limited
(assuming there that setting up a multi-node VM cluster is out of scope, due to extra resource requirements).

So instead of explaining how to set up the cluster on localhost from within a virtual machine, it would show
how to set up a cluster using fake nodes running in a container runtime (that is: KIC) on said virtual machine?

The same minikube start is supposed to work in both scenarios...

But the extra complexity of the virtual machine not being on the actual user machine could need some extra care.
There is the same problem on katacoda, which is currently worked around by still hardcoding the port to 30000.

Knowing how it is supposed to work (with, for instance, kind or k3d) would make it clearer how to fix it for minikube.
And it might improve life for users who are stuck with environments with inferior networking, like Docker Desktop?

@afbjorklund
Collaborator

I consider k8s vs k3s to be out of scope for this discussion; there are pros and cons, better discussed elsewhere...

But if it is working with kind, then it should work with minikube. Similar to the "if it is working with kubeadm", before?

@worldofgeese
Author

I consider k8s vs k3s to be out of scope for this discussion; there are pros and cons, better discussed elsewhere...

I hope that's not the takeaway from my issue: I value the diversity and health of all these projects. With k3d and kind, both do just work given pure defaults, so perhaps their implementation can serve as an example for minikube's?

@afbjorklund
Collaborator

afbjorklund commented Feb 26, 2023

With k3d and kind, both do just work given pure defaults

Probably just bugs in minikube then, similar to the current issues with the "none" driver? (compared with kubeadm)

Possibly related to:

    --listen-address='':
	IP Address to use to expose ports (docker and podman driver only)
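
If that flag is the relevant knob, an attempt might look like the following when starting the cluster (untested in Codespaces; the wildcard address is only meant to illustrate the idea):

    minikube start --driver=docker --listen-address=0.0.0.0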

afbjorklund added the co/docker-driver, kind/feature, and priority/awaiting-more-evidence labels on Feb 26, 2023
@afbjorklund
Collaborator

afbjorklund commented Feb 27, 2023

@worldofgeese: What is needed is to enable extra port-forwarding, the same that is currently being done for (and hardcoded to) Docker Desktop. This will set up a localhost tunnel that Codespaces will see and forward to the real host.

When an application running inside a codespace prints output to the terminal that contains a localhost URL, such as http://localhost:PORT or http://127.0.0.1:PORT, the port is automatically forwarded

// NeedsPortForward returns true if driver is unable to provide direct IP connectivity
func NeedsPortForward(name string) bool {
        if !IsKIC(name) {
                return false
        }
        if oci.IsExternalDaemonHost(name) {
                return true
        }
        // Docker for Desktop
        if runtime.GOOS == "darwin" || runtime.GOOS == "windows" || detect.IsMicrosoftWSL() {
                return true
        }

        si, err := oci.CachedDaemonInfo(name)
        if err != nil {
                panic(err)
        }
        if runtime.GOOS == "linux" && si.DockerOS == "Docker Desktop" {
                return true
        }
        return si.Rootless
}

Since the docker0 bridge is visible from the host (the console), there is normally no need to tunnel that when using Docker Engine... But it could be added as a configuration, and that could be enabled when Codespaces is detected?

I was confused about 192.168.49.2 being mentioned as a VM.

@afbjorklund
Collaborator

afbjorklund commented Feb 27, 2023

The environment variable to look for is called CODESPACES, it is always set when running in GitHub Codespaces.

https://docs.github.com/en/codespaces/developing-in-codespaces/default-environment-variables-for-your-codespace
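
A quick way to confirm that detection from a shell inside the codespace (minikube would presumably do the equivalent check in Go before deciding to port-forward):

    # CODESPACES is documented to be set to "true" inside every GitHub codespace
    if [ "$CODESPACES" = "true" ]; then
        echo "running inside GitHub Codespaces, forward ports to 127.0.0.1"
    fi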

@worldofgeese
Author

worldofgeese commented Feb 27, 2023

@worldofgeese: What is needed is to enable extra port-forwarding, the same that is currently being done for (and hardcoded to) Docker Desktop. This will set up a localhost tunnel that Codespaces will see and forward to the real host.

Ah, that might explain why I have difficulty reaching service endpoints opened in a web browser outside a Dev Container when I use Dev Containers on WSL and the container engine is provided by the docker-ce package rather than Docker Desktop. But that is another issue, if I wish to try and replicate it.

@afbjorklund
Collaborator

Same code, different bug?

@phillies

I can confirm this issue; we use Codespaces with minikube for a simple dev environment and we need the NodePort from one service exposed via Codespaces. Using --listen-address='127.0.0.1' did not work.

My workaround for now: installing an ssh server in codespaces and creating an ssh tunnel with port forwarding from 192.168.49.2:xxxx to xxxx, which makes the port accessible from the outside.
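
For reference, that workaround looks roughly like this (assuming sshd is already running inside the codespace; the NodePort 30318 is just a placeholder):

    # bind the NodePort onto 127.0.0.1, where Codespaces will detect and forward it
    ssh -f -N -L 127.0.0.1:30318:192.168.49.2:30318 localhost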

@afbjorklund
Collaborator

afbjorklund commented Apr 21, 2023

My workaround for now: installing an ssh server in codespaces and creating an ssh tunnel with port forwarding

That is very similar to what NeedsPortForward means.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jul 20, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Feb 19, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
