minikube does not support exposing services to Codespaces #15928
It would be good if you could explain what you are proposing to change, and how it would help the minikube user? Deploying the development k8s cluster to localhost, or to a private VM network, is done on purpose (for security).
Because Codespaces is already running in a secure VM, minikube should at least expose services on 127.0.0.1 rather than on the Codespaces VM's internal network. Currently minikube will expose a service as e.g. http://192.168.49.2:30318, but users will expect minikube to just work in Codespaces. There's no current method that I know of to fix minikube for use in Codespaces. Even the GitHub Codespaces development team chooses k3d over minikube, for this and for performance reasons. We're going to see a lot more users running cloud development environments in the future: it'd be great to see this use case supported.
This sounds like it would need more adoption, similar to the katacoda examples that are currently using the "none" driver. But the docker driver is supposed to publish all the ports to localhost if it is running in a container (and not in a VM).
That is true, I don't think the current "let's log in as root on the control plane" tutorial is pointing in the right direction... But it would probably need some better docs on how to run in Codespaces; the JSON link you sent was a bit terse?
Previous attempts to make minikube work better in Lima and in Multipass have not been so successful. Maybe this one will be better? Reading https://docs.github.com/en/codespaces/getting-started/deep-dive
The earlier attempts are mostly about providing virtual machines and then running the "none" driver. I was wondering whether running with "KIC" (or kind, or k3d) would be a separate scenario / tutorial? The single-node cluster is simpler to explain, with less magic and fewer moving parts, but it is also more limited. So instead of explaining how to set up the cluster on localhost from within a virtual machine, it would show the same. But the extra complexity of the virtual machine not being on the actual user machine could need some extra care. Knowing how it is supposed to work (with, for instance, kind or k3d) would make it clearer how to fix it for minikube.
I consider k8s vs k3s to be out of scope for this discussion; there are pros and cons, better discussed elsewhere... But if it is working with
I hope that's not the takeaway from my issue: I value the diversity and health of all these projects. k3d and kind both just work given pure defaults, so perhaps their implementations can serve as an example for minikube's?
Probably just bugs in minikube then, similar to the current issues with the "none" driver? (compared with ...) Possibly related to: ...
@worldofgeese: what is needed is to enable extra port-forwarding, the same that is currently being done for (and hardcoded to) Docker Desktop. This will set up a localhost tunnel that Codespaces will see and forward to the real host.
// NeedsPortForward returns true if driver is unable provide direct IP connectivity
func NeedsPortForward(name string) bool {
if !IsKIC(name) {
return false
}
if oci.IsExternalDaemonHost(name) {
return true
}
// Docker for Desktop
if runtime.GOOS == "darwin" || runtime.GOOS == "windows" || detect.IsMicrosoftWSL() {
return true
}
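// For other cases, ask the container runtime about itself: Docker Desktop
// on Linux and rootless daemons also need the localhost port-forward.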
si, err := oci.CachedDaemonInfo(name)
if err != nil {
panic(err)
}
if runtime.GOOS == "linux" && si.DockerOS == "Docker Desktop" {
return true
}
return si.Rootless
}

Since the ..., I was confused about 192.168.49.2 being mentioned as a VM.
The environment variable to look for is called ...
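A minimal sketch of what such a check could look like, assuming the variable in question is CODESPACES (GitHub documents it as being set to "true" inside a codespace; the exact name is not spelled out in this thread):

package driver

import "os"

// isCodespaces reports whether minikube appears to be running inside a
// GitHub Codespace. The CODESPACES variable name is an assumption here,
// not something confirmed in this thread.
func isCodespaces() bool {
	return os.Getenv("CODESPACES") == "true"
}

NeedsPortForward could then return true early when isCodespaces() reports true, the same way it already does for Docker Desktop and WSL, so that the usual localhost tunnel gets set up and Codespaces can forward it.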
Ah, that might explain why I have difficulty reaching service endpoints opened in a web browser outside a Dev Container when using Dev Containers on WSL and when the container engine is provided by the ...
Same code, different bug?
I can confirm this issue; we use Codespaces with minikube for a simple dev environment and we need the NodePort from one service exposed via Codespaces. Using ... My workaround for now: installing an ssh server in Codespaces and creating an ssh tunnel with port forwarding from ...
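For illustration, one possible shape of such a tunnel, run inside the codespace, is a local forward that binds the NodePort on 127.0.0.1 so Codespaces' own port forwarding can pick it up. The addresses and port below are assumptions taken from the URL quoted earlier in this thread, not the commenter's exact command:

ssh -N -L 127.0.0.1:30318:192.168.49.2:30318 localhost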
That is very similar to what ...
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What Happened?
A user on the #minikube Slack channel previously asked for help getting minikube to work on GitHub Codespaces. I have confirmed that, whether a service is exposed as a NodePort or a LoadBalancer with minikube service or minikube tunnel respectively, minikube attempts to expose it using the internal VM network of Codespaces, 192.168.49.2, and not localhost. This is a non-starter for any workflow on Codespaces that requires service access.

Attach the log file

I've since moved to using k3d and kind in all my Dev Container and Codespaces workflows, which do expose services on 0.0.0.0, so I don't have logs to offer, but it should be easy to reproduce: fire up this minimal Dev Container containing minikube, deploy a hello-world service, then attempt to expose it with minikube service or minikube tunnel, with the appropriate ingress controller deployed if desired.
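For reference, a minimal reproduction could look roughly like this; the echo-server image is an assumption (any small HTTP service will do), and the expected/observed notes restate the behaviour described above:

kubectl create deployment hello-world --image=kicbase/echo-server:1.0
kubectl expose deployment hello-world --type=NodePort --port=8080
minikube service hello-world --url
# expected: a URL on 127.0.0.1 that Codespaces can pick up and forward
# observed in Codespaces: a URL on the internal 192.168.49.2 network instead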
Operating System
Linux
Driver
Docker