Make custom image for KIC similar to minikube ISO #6942
Comments
ping @medyagh
This sounds like a good idea to do. I support having an option to run the KIC drivers with the buildroot image in docker.
@afbjorklund would you be interested in taking this? Could I add this to the milestone?
@medyagh : I will investigate, but I don't know yet if it is reasonable to do in two weeks.
I would like to postpone this for the minikube 2.0 release, to become a target for Buildroot 2020.02.
One thing that needs to be addressed here is settling on a kernel version to support. If you just create an image from the same rootfs, it will not run on Ubuntu 16.04 or 18.04, because the default setting in the glibc build is to support only the current kernel.
Besides glibc, systemd now also requires Stack Smashing Protection.
We should be able to lower this to something like 3.10 (or perhaps 4.0): http://www.linuxfromscratch.org/lfs/view/systemd/chapter06/glibc.html
Not sure if we miss any major features by doing so, but we can check.
https://en.wikipedia.org/wiki/Ubuntu_version_history#Table_of_versions Not sure what LinuxKit (the Docker Desktop VM) has, but it's 4.9 or 4.14.
We don't have to support Ubuntu; we could just do whatever kernel version makes sense for us.
This is the user's laptop we are talking about... It would be perfectly fine with an arbitrary 4.0 as well.
There should be no major side effects from keeping the ISO glibc more compatible with old kernels.
Note: systemd requires kernel 3.10 rather than 3.2 (BR2_PACKAGE_SYSTEMD_ARCH_SUPPORTS). It also requires glibc with stack-protector (BR2_TOOLCHAIN_USES_GLIBC). We have to bump the minimum to 3.12, up from 3.10 (which was up from the LFS baseline of 3.2).
It's needed by containerd and podman and crio (and also wanted by runc).
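For reference, a hypothetical Buildroot defconfig fragment along these lines (the exact symbols depend on the Buildroot version; glibc's --enable-kernel minimum follows the configured kernel-headers version, so pinning older headers is the usual lever):

```
# Assumption: manually pin the kernel headers the toolchain is built
# against, instead of "same as the kernel", so the resulting glibc keeps
# supporting older host kernels (e.g. the 4.4 kernel in Ubuntu 16.04).
BR2_KERNEL_HEADERS_VERSION=y
BR2_DEFAULT_KERNEL_HEADERS="3.12.74"
```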
There are some minor tweaks needed for compatibility with busybox and other things. But other than that, it seems to be "booting":
The container runtimes failing to start is expected, because they ship unconfigured on the ISO. That "systemd-networkd-wait-online" fails is not good (2 min timeout), but was also to be expected... Systemd fails to understand that...
As usual with systemd, it also fails to understand that our terminal doesn't have any color support.
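A common workaround (an assumption here, not something decided in this thread) is to mask the wait-online unit in the staged rootfs so boot doesn't block on its timeout; masking is just a symlink to /dev/null:

```shell
# Sketch: mask systemd-networkd-wait-online in a staged rootfs
# (ROOTFS is a hypothetical staging directory, not a real minikube path).
ROOTFS="${ROOTFS:-$(mktemp -d)}"
mkdir -p "$ROOTFS/etc/systemd/system"
# This is what `systemctl mask` does: link the unit to /dev/null.
ln -sf /dev/null "$ROOTFS/etc/systemd/system/systemd-networkd-wait-online.service"
```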
This was another problem, not sure why though:
Or what implications this particular unit failure has.
For some reason the initial output does not show on the console:
And it seems like setting $container makes the network happier:
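systemd's container detection keys off the $container environment variable, so one way to set it (an assumption, mirroring what the Ubuntu-based kind image does) is to bake it into the Dockerfile:

```dockerfile
# Assumption: declare the container environment so systemd's
# virtualization detection reports "docker" instead of guessing.
ENV container docker
```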
Here is the current minikube.Dockerfile (adapted from kindbase and kicbase):
FROM buildroot
COPY entrypoint /usr/local/bin/entrypoint
RUN chmod +x /usr/local/bin/entrypoint
# After installing packages we cleanup by:
# - removing unwanted systemd services
# - disabling kmsg in journald (these log entries would be confusing)
#
# Next we ensure the /etc/kubernetes/manifests directory exists. Normally
# a kubeadm debian / rpm package would ensure that this exists, but we install
# freshly built binaries directly when we build the node image.
#
# Finally we adjust tempfiles cleanup to be 1 minute after "boot" instead of 15m
# This is plenty after we've done initial setup for a node, but before we are
# likely to try to export logs etc.
RUN echo "Ensuring scripts are executable ..." \
&& chmod +x /usr/local/bin/entrypoint \
&& echo "Installing Packages ..." \
&& find /lib/systemd/system/sysinit.target.wants/ -name "systemd-tmpfiles-setup.service" -delete \
&& rm -f /lib/systemd/system/multi-user.target.wants/* \
&& rm -f /etc/systemd/system/*.wants/* \
&& rm -f /lib/systemd/system/local-fs.target.wants/* \
&& rm -f /lib/systemd/system/sockets.target.wants/*udev* \
&& rm -f /lib/systemd/system/sockets.target.wants/*initctl* \
&& rm -f /lib/systemd/system/basic.target.wants/* \
&& echo "ReadKMsg=no" >> /etc/systemd/journald.conf \
&& echo "Ensuring /etc/kubernetes/manifests" \
&& mkdir -p /etc/kubernetes/manifests \
&& echo "Adjusting systemd-tmpfiles timer" \
&& sed -i /usr/lib/systemd/system/systemd-tmpfiles-clean.timer -e 's#OnBootSec=.*#OnBootSec=1min#'
# systemd exits on SIGRTMIN+3, not SIGTERM (which re-executes it)
# https://bugzilla.redhat.com/show_bug.cgi?id=1201657
STOPSIGNAL SIGRTMIN+3
# NOTE: this is *only* for documentation, the entrypoint is overridden later
ENTRYPOINT [ "/usr/local/bin/entrypoint", "/sbin/init" ]
USER docker
RUN mkdir /home/docker/.ssh
USER root
# kind base-image entry-point expects a "kind" folder for product_name,product_uuid
# https://github.com/kubernetes-sigs/kind/blob/master/images/base/files/usr/local/bin/entrypoint
RUN mkdir -p /kind
Note that minikube-automount will currently put it on the ...
Some additional cleanup:
RUN rm -f /usr/sbin/minikube-automount \
&& echo '#!/bin/sh' > /usr/sbin/minikube-automount \
&& chmod +x /usr/sbin/minikube-automount
# Remove kernel modules
RUN rm -r /lib/modules/*
RUN systemctl enable sshd
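As a side note, the OnBootSec tweak in the Dockerfile above can be exercised in isolation; a small sketch against a sample timer file (path and contents are illustrative, not the real unit file):

```shell
# Sketch: the same sed expression as in the Dockerfile, applied to a
# sample systemd timer unit, rewriting the default OnBootSec to 1min.
timer=$(mktemp)
printf '[Timer]\nOnBootSec=15min\nOnUnitActiveSec=1d\n' > "$timer"
sed -i -e 's#OnBootSec=.*#OnBootSec=1min#' "$timer"
grep OnBootSec "$timer"   # prints: OnBootSec=1min
```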
There are still a lot of assumptions about VM==Buildroot and KIC==Ubuntu in the code base :-(
// fastDetectProvisioner provides a shortcut for provisioner detection
func fastDetectProvisioner(h *host.Host) (libprovision.Provisioner, error) {
d := h.Driver.DriverName()
switch {
case driver.IsKIC(d):
return provision.NewUbuntuProvisioner(h.Driver), nil
case driver.BareMetal(d):
return libprovision.DetectProvisioner(h.Driver)
default:
return provision.NewBuildrootProvisioner(h.Driver), nil
}
}
Maybe we should even make an Ubuntu ISO variant, just to try to iron some more of them out? I'm not sure how hard it will be; we can probably reuse a lot of the packaging and some boot2docker:
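One way to relax the KIC==Ubuntu assumption (a sketch under the assumption that the node image ships a standard os-release file; this is not code from minikube) would be to detect the distribution from the image itself instead of from the driver name:

```shell
# Sketch: select a provisioner from the image's os-release ID rather
# than assuming KIC => Ubuntu. A sample file is staged for illustration.
os_release=$(mktemp)
printf 'ID=buildroot\nVERSION_ID=2020.02\n' > "$os_release"
id=$(. "$os_release"; echo "$ID")
case "$id" in
  ubuntu)    provisioner=ubuntu ;;
  buildroot) provisioner=buildroot ;;
  *)         provisioner=generic ;;
esac
echo "$provisioner"   # prints: buildroot
```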
Can make this available for early testing, but it is not ready for public beta testing.
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Currently we are using the KIND base image for the KIC driver:
https://github.com/kubernetes/minikube/blob/master/hack/images/kicbase.Dockerfile
This image (docker.io/kindest/base) is in turn based on Ubuntu:
https://github.com/kubernetes-sigs/kind/blob/master/images/base/Dockerfile
As with the other base images, this one starts from a rootfs tarball:
https://hub.docker.com/_/ubuntu
See https://docs.docker.com/develop/develop-images/baseimages/
We might want to investigate using the same Linux that we are using for minikube.iso,
a custom distribution built using Buildroot which also includes systemd and the runtimes.
https://github.com/kubernetes/minikube/tree/master/deploy/iso/minikube-iso
The main difference would be that the regular ISO also includes a kernel (currently 4.19), while the container image uses the host kernel (so it doesn't need to waste all that space).
Links:
There are lots of other small tricks needed in order to make a running container image, including a lot of workarounds and hacks to be able to run systemd in a container... As per above, you can see several of these in the original Ubuntu image (vs the Ubuntu ISO), as well as in the KIND and KIC projects respectively. Some of these will need to be added.
https://github.com/kubernetes-sigs/kind/blob/master/images/base/files/usr/local/bin/entrypoint