MEP: Change from custom Buildroot ISO to standard Kubernetes OS #9992
Comments
TODO: Write the actual MEP
@afbjorklund I wonder how much larger our ISO becomes if we move off Buildroot?
The guess is "double" (< 500M), but then again nobody actually cares... The software that we add on top of the base OS is so much bigger than the underlying image, so that it almost "disappears".
Hopefully the total (compressed) download can be kept to around 1G or so:
Like I mentioned above, this MEP is similar to when we changed the bootstrapper from "localkube" to "kubeadm"... The image does become bigger (in both cases), but you get a more standard installation and easier maintenance. Anyway, the vagrant image (ubuntu/focal64) is 499M:
https://cloud-images.ubuntu.com/ The docker image is well below that, without the kernel etc.
https://hub.docker.com/_/ubuntu The Buildroot OS base is around that size as well, i.e. 72M. I'm still interested in the smaller distributions, like docker-machine or podman-machine or localkube or k3s... But I think the custom distribution is more of a side project, stealing focus from minikube development?
I thought this presentation was interesting: https://landley.net/aboriginal/presentation.html
Or in other words, "OS development is hard, let's go bowling". See also https://distrowatch.com/
It would be cool if we could have the Kubernetes org adopt our Buildroot OS and invest in it as a community project, with minikube as one of its users. Could we start that conversation? There has been good work put into it for years, and it has been battle-tested in many environments. wdyt?
I see there is more interest in the vanilla images, or in some proprietary LinuxKit image like with Docker Desktop. But the discussion should definitely be had; that was (more or less) why I opened this issue in the first place... I like the Tiny Core Linux (for docker-machine) and Buildroot (for minikube) images, mostly on technical merits. But it has been hard to sell that, since people (and kubeadm) seem to prefer using Ubuntu or Fedora distributions?
I fail to see how this is a support question?
@medyagh:
I like this idea, and I think that we could start such a project. But it is still rather different from the rest of minikube. In my own experiments I also wanted to include everything needed for
Since Buildroot doesn't have a package manager, it would mean there would be a lot of different images around... One workaround would be to use tarballs, but then one would still have to invent a whole ecosystem around that? There is a separate discussion about creating a new Tiny Core Linux distro, for running docker-machine/podman-machine. It's like a smaller version of the same or similar project, with a different upstream and slightly different design goals: Machine:
Minikube:
One alternative to the current tmpfs implementation on an .iso would be to just use a standard ext4 file system and an .img. Another variation would be to switch from the current isolinux over to a standard boot loader like GRUB, for better portability.
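As a rough illustration of the .img idea, a persistent ext4 file system can be created from a plain file; the size, label, and paths below are illustrative, not actual minikube build values:

```shell
PATH="$PATH:/sbin:/usr/sbin"  # mkfs tools live in sbin on many distros
truncate -s 64M minikube.img  # sparse file, grows on demand
mkfs.ext4 -q -F -L minikube-rootfs minikube.img  # works on a regular file, no root needed
# Populating it would need a loop mount (root required), e.g.:
#   mount -o loop minikube.img /mnt && cp -a rootfs/. /mnt && umount /mnt
```

Unlike the tmpfs-from-ISO approach, changes made inside the VM would then persist across reboots.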
There is work in most of the distros to create a minimal one in the container space. For example, in Fedora there are different alternatives created with kickstart: https://pagure.io/fedora-kickstarts/tree/main, and in Ubuntu https://cdimage.ubuntu.com/netboot/. Starting from those, adding packages through the package manager is much easier to do and maintain.
@rgordill: The CoreOS installation is supposed to be ephemeral/immutable; instead it uses toolbox for debugging purposes: https://github.com/coreos/toolbox For the prototype we are using the "cloud" images (as available through Vagrant), rather than kickstart and netboot and such.
Usually the "cloud" images are built with those alternatives. Anyway, the fact is that some low-level workloads require certain OOB capabilities in the OS to be deployable, for example nmstate (https://nmstate.io/kubernetes-nmstate/user-guide/102-configuration), or they need a particular kernel version on the host. I know this is not the general use case, but if a simple way of building the ISO were in place, we could easily select the one we need, instead of looking for an alternate solution.
I'm not sure updating or building the ISO was complex; it was more like "maybe it shouldn't be handled by the minikube project".
But we said that we would do one more iteration of the ISO, based on Buildroot 2021.02 (which Linux kernel version it will use is still unclear). We will cover for the lack of a package manager by using tarballs, which will only be used for the container runtime and for Kubernetes... A drop-in replacement for the minikube.iso would be nice; it would be easier to compare if there was a concrete example.
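As a sketch of that tarball workaround (with a made-up package name, and a stub script standing in for the real Kubernetes binaries), the "package" and "install" steps could look like:

```shell
# Pre-built components are shipped as tarballs and unpacked straight into the rootfs.
mkdir -p build/pkg/usr/local/bin rootfs
printf '#!/bin/sh\necho kubeadm-stub\n' > build/pkg/usr/local/bin/kubeadm
chmod +x build/pkg/usr/local/bin/kubeadm
tar -C build/pkg -czf kubernetes-node.tar.gz .   # "package" step, done at build time
tar -C rootfs -xzf kubernetes-node.tar.gz        # "install" step, a drop-in to the rootfs
rootfs/usr/local/bin/kubeadm                     # prints: kubeadm-stub
```

There is no dependency resolution or removal tracking here, which is exactly the "whole ecosystem" one would otherwise have to invent.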
Possibly https://github.com/tianon/boot2docker-debian could be used as a base, for the ubuntu alternative.
There have been no volunteers for this project, so any prototyping would have to use an external VM and the SSH driver. |
Currently we are using a custom image, which is built using the Buildroot tool...
It is a replacement for the original Boot2Docker image, based on Tiny Core Linux.
(The board config also draws some inspiration from the original CoreOS images)
For the KIC base image, we have instead changed to using a standard ubuntu:20.04.
We could do the same for the ISO image, and base it on a standard Linux distribution.
(Preferably one of the official ones, that are supported and tested by kubeadm.)
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin
Something like:
Comparison table:
It's easier to use Ubuntu and CentOS (than Debian and RHEL), due to licensing issues etc.
Suggestion to use the same OS version as the KIC base image, which is Ubuntu 20.04 LTS.
It only needs the kernel capability to run containers, the container runtime can be provisioned.
Using a package manager makes it easier to add/remove software, compared to rebuilding.
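To illustrate the "kernel capability" point above: whether a kernel can run containers can be roughly probed from its build config. This sketch assumes the config is at /boot/config-$(uname -r) (the path varies by distro; some expose /proc/config.gz instead), and the option list is a small illustrative subset, not a complete requirements check:

```shell
# Probe a few container-related kernel options; -s silences a missing config file.
conf="/boot/config-$(uname -r)"
results=""
for opt in CONFIG_NAMESPACES CONFIG_CGROUPS CONFIG_VETH CONFIG_OVERLAY_FS; do
  if grep -qs "^${opt}=[ym]" "$conf"; then state=ok; else state=missing; fi
  results="$results $opt=$state"
done
echo "$results"
```

Since only the kernel matters, the container runtime itself can then be provisioned on any distribution that passes such a check.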
When using one of the cloud drivers for docker-machine, it uses an Ubuntu 16.04 image.
You can see how it works by checking out the minikube "generic" driver (with any VM).
https://docs.docker.com/machine/get-started-cloud/
#9545 (comment)
The package list could be the same as the docker image, to start with (minus kind-specific)
Currently the ISO image works by copying everything from the rootfs on the CD into a tmpfs.
The motivation for this change is that it takes considerable engineering effort to keep the image updated.
It also introduces differences between the minikube environment, and a production environment.
The downsides would be less control over the image, and possibly also a larger image to download.
And most likely we would still have to do our own image adaptation/build - or add cloud-init support.
But maybe it would be easier to let someone else handle the distribution, rather than making our own.
This is similar to letting kubeadm handle the installation, rather than doing our own bespoke "localkube"...

"Dude, Let's Go Bowling" --Walter Sobchak