More documentation around vm-driver=none for local use #2575
@jakefeasel Kinda unrelated, but do you know how to let minikube pull images I created locally? All the docs mention using minikube docker-env, but it returns none in the case of using the docker driver.
@harpratap See #2443; your locally available images are also available to Kubernetes with vm-driver=none.
@ncabatoff Do I need to specify some additional parameter when starting minikube to let it access those locally created images? I get this error:
Not that I'm aware of. Note that you don't even have to pull them: with vm-driver=none it's the same docker daemon as on your desktop, i.e. the one you've been populating when you build images. So just skip the pull you're doing now, reference the images normally in your container manifests, and it should just work. At least that's been my experience.
@ncabatoff Yes, that is what I did and I got the image pull error; I didn't do any manual pulling of images. Edit: Never mind, got it working by setting imagePullPolicy to Never. Thanks! Edit again: Don't change the imagePullPolicy; you just need to tag your local images with a version other than latest, so I just built my image with an explicit tag.
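A minimal sketch of that workflow (the image name, tag, and manifest below are hypothetical, not taken from this thread): build against the local Docker daemon with a tag other than latest, then reference the image normally; with a non-latest tag the default imagePullPolicy is IfNotPresent, so the kubelet uses the locally built image instead of trying to pull it.

```sh
# Build against the local Docker daemon (with --vm-driver=none this is the same
# daemon the kubelet uses). "myapp:0.1" is a hypothetical name/tag; any tag other
# than :latest gives the default imagePullPolicy of IfNotPresent.
docker build -t myapp:0.1 .

# Reference the image normally in a manifest; no registry push or pull is needed.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:0.1
EOF
```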
Seems like the right place to add a couple of items that I've discovered along the way with vm-driver=none:
Also having issues similar to those described in https://brokenco.de/2018/09/04/minikube-vmdriver-none.html
Hey y'all. I'd appreciate your feedback on the pull request for an initial set of documentation for --vm-driver=none: #3240 Preview link: https://github.com/kubernetes/minikube/blob/22afb79a37436b3d98171dd09212f193fb6f45ca/docs/vmdriver-none.md Thanks!
@jakefeasel, in your issue description could you fix the typo in
by adding a space between
Also, I'm getting an
See also:
@tstromberg can we standardize on either casing?
@cdancy - done. Now with lowercase.
@tstromberg LGTM
It appears my whole premise was flawed, based on the content of this PR. However, it does strike me that a lot of people are similarly misguided, given the interest in this issue and the various blog posts and comments from people trying something similar. Is there any interest within the minikube team to intentionally support this use-case?
I'm also curious. Operating from a Linux VDI, I'm unable to run the extra layer of virtualization needed for a nested virtual machine. The CI/CD instructions have gotten me to the point where the cluster is running in Docker within the VDI. Some pods are running on the default bridge network, others on the default host network. This, I'm sure, is the root of my dashboard and DNS issues, since those pods are running on the bridge while the remaining pods are running on the host. Now that I've discovered @jakefeasel was the author of the note to specify a bridge network, I'd like to ask for some clarification on the note's meaning. Do I need to create a new bridge network for minikube? Are your start options above doing what you intended with the note?
@edthorne the note I added back then was probably wrong. I wouldn't put much stock in it. Instead, I would refer to @tstromberg's PR.
I wrote a quickstart guide for running Minikube on Linux with --vm-driver=none. The current version is here, but obviously I'd remove the section on installing Gestalt before contributing it.
@sbernheim, I tried your quickstart guide and made a few observations which could be addressed in the guide: Existing
If I run minikube with
External DNS resolution works fine for me with --vm-driver=none.
@bennettellis Did it work out of the box, or did you have to fiddle with the networking configuration? I used this to bootstrap:
For me, out of the box. I've only done this inside AWS Ubuntu 18 LTS EC2 instances, as well as with a VirtualBox instance running that same Ubuntu 18 LTS OS. So "out of the box" is with those specific boxes and pretty much the default networking there (other than restricting ingress).
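Not from the thread, but a quick way to check external DNS resolution from inside the cluster is an ad-hoc busybox pod (busybox:1.28 is pinned here because nslookup in newer busybox images is known to misbehave):

```sh
# Run a throwaway pod and resolve an external name from inside the cluster.
kubectl run dnstest --image=busybox:1.28 --restart=Never --rm -it -- nslookup google.com
```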
I tried this locally on a laptop. Anyone else facing the issue?
@slayerjain the specific commands I used to install k8s and minikube:
Once that was done, I started up minikube with:
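The actual commands were stripped from the comment above. Purely as an illustration, and not the commenter's exact steps (package sources, versions, and sudo usage are assumptions), an install-and-start sequence for the none driver on Ubuntu from that era looked roughly like this:

```sh
# Install kubectl from the upstream apt repository (illustrative only).
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubectl

# Install the minikube binary.
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube && sudo mv minikube /usr/local/bin/

# The none driver runs the Kubernetes components directly on the host, so it needs root.
sudo minikube start --vm-driver=none
```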
I need to pass the flag from kubernetes/kubeadm#845 to kubelet, related to systemd resolvconf. Otherwise the coredns pod crashes.
@bhack Thanks! How do you pass the flag through minikube?
@slayerjain OS details for the host in question would help here. Also, this should probably be a new issue, since this one is about documentation.
@slayerjain Yes, the upstream flag was passed through Minikube. This currently affects Debian and Ubuntu with systemd resolvconf, but probably also other Linux distros with systemd resolvconf.
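For reference, minikube's usual mechanism for handing flags to kubelet is its --extra-config option; for the systemd-resolved issue from kubernetes/kubeadm#845 that would presumably look something like this (the path and value are assumptions, not quoted from the thread):

```sh
# Point kubelet at the resolv.conf managed by systemd-resolved so CoreDNS does not
# loop on the 127.0.0.53 stub listener (the path is an assumption for a typical
# systemd-resolved setup).
sudo minikube start --vm-driver=none \
  --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
```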
I'm on Pop OS (Ubuntu 18.04 based). @bhack Do you mean something like this?
Yes.
Using this, minikube doesn't really start on my system. I get this:
Do you have
On my laptop (host), yes.
I think your problem is unrelated to that flag. Check #3150.
@jakefeasel Couldn't agree more! It is painful to look things up in the docs for vm-driver=none.
@slayerjain Also check #2707.
Initial doc has been submitted; please send further PRs to improve this doc with any tips or tricks you might know of: https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md Thanks!
Interesting. Btw, I just noticed that Minikube starts by default on boot. Is there any way to disable that?
@akaihola - Thanks for the feedback! I've added a message to the top of the Installing Minikube on Linux guide referencing @tstromberg's cautionary MD file to indicate that developers should not use the
I'll need to test out that
I'm not sure how you might adapt to an IP address change while running Minikube, but maybe try the
In my case, I'm running Linux within a VM rather than directly on my laptop, so the IP address doesn't tend to change while the VM is running. But it does change whenever I stop/start/restart the VM, and that command will reconnect local
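The exact command referenced above wasn't captured, but minikube does ship a subcommand for the IP-change situation being described: minikube update-context rewrites the server address in ~/.kube/config to the cluster's current IP, which is presumably what was meant. A minimal usage sketch:

```sh
# After the host/VM IP changes, rewrite the existing kubeconfig entry to match
# the cluster's current address.
minikube update-context

# Verify kubectl can reach the API server again.
kubectl cluster-info
```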
@harpratap Can you tell me where you set imagePullPolicy to Never?
Eliminating the VM is great if you are on a Linux box. At this time, I am doing it as you suggest. However, I just came across microk8s, which seems to target exactly that niche. I wonder how it compares to the minikube-based approach outlined here.
I don't think microk8s can work with kubectl contexts, because it is installed inside snap. So you need to use a specialized command instead of kubectl.
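For comparison (not part of the original discussion), microk8s exposes its bundled client through a snap-namespaced command, so day-to-day usage looks like this:

```sh
# microk8s ships its own client behind the snap namespace; this stands in for plain kubectl.
microk8s.kubectl get nodes
microk8s.kubectl get pods --all-namespaces
```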
Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST
Please provide the following details:
Environment:
Minikube version (use `minikube version`): v0.25.0
VM driver (use `cat ~/.minikube/machines/minikube/config.json | grep DriverName`): none
ISO version (use `cat ~/.minikube/machines/minikube/config.json | grep -i ISO` or `minikube ssh cat /etc/VERSION`): n/a

What happened: I spent a long time struggling to get things working properly with vm-driver=none.
What you expected to happen: Find documentation which provides coverage for this very valuable use-case.
Anything else we need to know:
I have a Linux laptop that I want to use k8s on. I do not want to run a VM, since that is an inefficient use of my laptop's memory (always either too much or too little memory allocated for the given docker containers I want to run) and slower due to the extra virtualization. Also, it shouldn't be necessary: I have a Linux environment, so why should I have to run another VM just to run Linux? That's why I bought this laptop.
The use of vm-driver=none is hardly documented at all. What little there is does not seem to consider the value for developer machines.
Yesterday I filed #2571, but that was actually not the full story. I had a red herring with IP addresses, which led me to believe that the docker0 interface was the root of the problem. It turns out the real problem was that whatever IP address I happened to have bound to my ethernet interface would be used in the construction of the cluster. If that IP address changed for any reason (as often happens on laptops), the whole environment became inaccessible.
The workaround was not to specify a bridge IP for docker, as I had thought. Instead you need to start minikube like so:
And then go and edit ~/.kube/config, replacing the server IP that was detected from the main network interface with "localhost". For example, mine now looks like this:
With this configuration, I can access my local cluster all of the time, even if the main network interface is disabled.
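The start command and kubeconfig excerpt referred to above were lost in formatting. As a hedged reconstruction only (the flags exist in minikube, but the exact values here are assumptions): the idea is to make the apiserver certificate valid for localhost so the kubeconfig can point there.

```sh
# Make the apiserver certificate valid for localhost (flag values are illustrative,
# not quoted from the issue; both flags exist on `minikube start`).
sudo minikube start --vm-driver=none \
  --apiserver-ips=127.0.0.1 --apiserver-name=localhost

# Then point the kubeconfig at localhost instead of the detected interface IP
# (8443 is minikube's usual apiserver port); equivalent to hand-editing ~/.kube/config.
kubectl config set-cluster minikube --server=https://localhost:8443
```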
Also, we should note that it is required to have "socat" installed in the Linux environment. See this issue for details: kubernetes/kubernetes#19765. I saw this when I tried to use helm to connect to my local cluster; I got errors with port-forwarding. Since I'm using Ubuntu, all I had to do was
sudo apt-get install socat
and then everything worked as expected.