Build ISO minikube image for ARM (aarch64) #9228
Looked into this a bit, and there is no syslinux support on the arm64 platform - so it is unlikely to be an ISO-9660.
It would work the same way though, but more like a "minikube.img". The nearest configs are BR2_TARGET_ROOTFS_EXT2=y and BR2_TARGET_ROOTFS_CPIO=y. So far I have used Raspberry Pi, and it has a custom bootloader and a custom config. The other real hardware is using "Das U-Boot", but we only need something for a VM... |
Here is how it starts QEMU by default, for amd64 and arm64:
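The original command lines were not captured here; they look roughly like this (a sketch based on Buildroot's typical defconfig readmes; paths and flags vary by defconfig):

```
# amd64 (sketch, based on a typical pc/x86_64 defconfig readme)
qemu-system-x86_64 -M pc -m 1024 \
  -kernel output/images/bzImage \
  -drive file=output/images/rootfs.ext2,if=virtio,format=raw \
  -append "rootwait root=/dev/vda console=ttyS0" \
  -nographic

# arm64 (sketch, based on the qemu_aarch64_virt defconfig readme)
qemu-system-aarch64 -M virt -cpu cortex-a53 -m 1024 -smp 1 \
  -kernel output/images/Image \
  -initrd output/images/rootfs.cpio.gz \
  -append "console=ttyAMA0" \
  -nographic
```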
So what is needed is a nice way to bundle the kernel (Image) and the rootfs (initrd.cpio.gz) into one disk image. The buildroot "genimage" script could help with this, perhaps. Then we just need some simple bootloader for it... |
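As a rough illustration of the genimage approach (a sketch; file names, sizes, and the GPT/ESP layout are assumptions, not the actual minikube config):

```
# genimage.cfg (sketch): bundle the kernel and initrd into a FAT
# "boot" partition on a single disk image, bootable via EFI
image boot.vfat {
        vfat {
                files = { "Image", "initrd.cpio.gz", "EFI" }
        }
        size = 64M
}

image minikube.img {
        hdimage {
                partition-table-type = "gpt"
        }
        partition boot {
                partition-type-uuid = "U"  # EFI system partition
                bootable = true
                image = "boot.vfat"
        }
}
```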
Using grub2 with efi seems to be the simplest, since it has built-in support (unlike syslinux or gummiboot). See boot/grub2/readme.txt

amd64: board/pc, ovmf: /usr/share/OVMF/OVMF_CODE.fd
arm64: board/aarch64-efi/, qemu-efi-aarch64: /usr/share/qemu-efi-aarch64/QEMU_EFI.fd

For now we will continue without a root partition, since minikube assumes that it will be running from tmpfs. |
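For reference, booting such an EFI image in QEMU with the distro-packaged firmware would look roughly like this (a sketch; firmware paths as packaged on Debian/Ubuntu, image names hypothetical):

```
qemu-system-x86_64 -m 2048 \
  -bios /usr/share/OVMF/OVMF_CODE.fd \
  -cdrom minikube-amd64.iso

qemu-system-aarch64 -M virt -cpu cortex-a57 -m 2048 \
  -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd \
  -cdrom minikube-arm64.iso
```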
Here are the reference board sizes:
Will put them up as a separate project. See: https://github.com/afbjorklund/minimal-buildroot

PS. They are both using ext4, just with different filenames when generating (and then a symlink between them). For the real minikube OS, we have the files in an initrd, so it will go on a "boot" partition instead of the "root" partition.
|
@afbjorklund It is possible to create an iso image to boot on arm64 and (hybrid bios/uefi) amd64 if you wanted to maintain backwards compatibility, and not require UEFI firmware on Intel. I have this working in a branch. |
@bluestealth : nice, seems like you have already gotten started on it. I saw that Debian had a livecd for arm64 as well, so it should be possible. That will probably make it easier to interface with libmachine, but it seems like your kvm2 driver needed some hacks in it... |
@afbjorklund Yes, I used the debian documentation to get it working, which is really good. Some of my KVM hacks were to get it working with QEMU; I have even more hacks in another branch to allow minikube to work as a client to foreign architectures, which is kind of a mess. |
I put the "client to a remote server" topic in a different issue (#9593); it would still be useful, but I think we will handle it separately. |
Will see if I can add an ISO target to the "hello world"... Then look more at the other changes, and join Slack (k8s/cncf) to chat. |
I'd love to try this on my Raspberry Pi. What all is it going to take, as far as we know today? Here's my naive assumption:
@bluestealth - it looks like you've poured a lot of work into your fork -- what's left before we can start playing with it? |
We played with this during Kubecon, and there were no problems as long as you shifted over to GRUB (from isolinux). It was possible to use that (efi) for amd64 as well, if we wanted to have the same bootloader for both architectures... |
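For reference, a shared GRUB menu of the kind discussed here could be as small as this (a sketch; the file names are assumptions, not the actual ISO layout):

```
# grub.cfg (sketch): one menu for both EFI builds; only the
# kernel/initrd artifacts differ per architecture
set default=0
set timeout=0

menuentry "minikube" {
        linux /boot/vmlinuz
        initrd /boot/initrd
}
```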
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. |
any chances to use even |
/assign |
This will be the issue where I track all our combined ISO progress, for both x86_64 EFI and aarch64. The PR where I'm testing (and where the change eventually should end up) is #13762. Things that have already happened:
Things that still need to happen:
|
The above error was fixed by giving QEMU a larger memory allocation. The regulatory db error is spurious. |
The networking issue for the amd64 KVM2 driver was an issue with the machine I was testing on rather than an issue with our configuration. I can verify the ISO works properly. The AppArmor issue remains, which seems to be an issue with the version of libvirt we are using and its incompatibility with UEFI. |
The workaround for the AppArmor issue is to disable AppArmor as a security driver for libvirt. This is extremely not recommended for actual use, but it's been useful for my current debugging. Modify the libvirt QEMU config, and after the config file is saved, restart libvirt. For the record, we plan on fixing this issue (which I believe is related to the version of libvirt we're using). |
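Concretely, the workaround looks something like this (assuming the stock libvirt config location):

```
# /etc/libvirt/qemu.conf (assumed path): disable the security driver.
# Debugging only; this removes VM confinement entirely.
security_driver = "none"
```

Then, after the config file is saved:

```
sudo systemctl restart libvirtd
```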
Testing the arm64 ISO on QEMU showed that the arm64 ISO rootfs isn't getting some of the systemd files copied in. That's what I'm currently looking into.
The arm64 ISO with QEMU is now booting properly, but kubeadm is crashing. It looks like all the appropriate k8s docker images are loaded properly but are never started, so none of the essential pods can boot. |
🎉 QEMU on M1 mac (almost) works!
relevant logging:
coredns logs:
storage-prov logs:
|
I tried with this PR and this ISO Sharif gave me (https://storage.googleapis.com/minikube-builds/iso/testing/minikube-arm64.iso), and I also see the coredns pod saying:
In the kubectl logs of coredns it shows the same thing.
The storage provisioner uses client-go to get a k8s client, but the IP it gets is 10.96.0.1:443.
In the pods I see the IP is set to "10.0.2.15" (I don't know if that is supposed to be the same or not).
The IP that client-go is trying to hit is the cluster IP that I also see in the services.
And I confirmed that I cannot hit the service URL manually either, as this command times out:
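The exact command wasn't captured; a check along these lines reproduces the timeout (10.96.0.1:443 is the cluster IP mentioned above):

```
# Hypothetical reproduction: the kubernetes service cluster IP never answers
curl -k --max-time 5 https://10.96.0.1:443/version
```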
|
@josedonizetti suggested using tcpdump to see if coredns traffic is going through the apiserver, so I tried this.
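The capture was roughly along these lines (illustrative; the interface name is an assumption):

```
# Watch for traffic between the coredns pod and the apiserver cluster IP
sudo tcpdump -ni eth0 host 10.96.0.1
```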
Here are the IPs I get from the -o wide output:
I did tcpdump on the coredns container, and this is the "weird"/incorrect thing I see:
|
tl;dr: I think that the problem is not with coredns; rather, the problem is with the iptables, that is, the complete lack of k8s-related rules.

Details: I've modified the coredns deployment to use the latest image, so that it's a bit more specific about the error (should remember to revert changes back to what it was):
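The image bump was presumably something like this (hypothetical tag; remember to revert):

```
kubectl -n kube-system set image deployment/coredns coredns=coredns/coredns:latest
```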
OK, let's see what's going on from the inside:
It would not reach the API server via the cluster IP, but the API server itself actually works:
Let's check the iptables; indeed, the rules are missing:
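An assumed check of this kind shows the gap (on a healthy node this count is large; here it is near zero):

```
# Count kube-proxy managed rules on the node
iptables-save | grep -c 'KUBE-'
```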
It should have something like this (taken from a working non-arm instance):
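The elided output would contain rules of this shape (illustrative lines from typical iptables-save output on a working cluster):

```
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
```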
The iptables rules should be set by kube-proxy, so let's check its logs; indeed:
Now, I'm not sure what mode would work for kube-proxy on the arm arch, as, according to https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/, there isn't one:
I think we just need to understand why kube-proxy is unable to restore the rules. The specific error (from the above logs) is: |
An update here: the arm64 ISO Linux kernel config was missing a whole BOATLOAD of networking modules. I have no explanation for why this happened. We're currently testing out a fix.
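For context, kube-proxy's iptables mode relies on netfilter kernel options along these lines (an illustrative subset, not the exact list that was missing):

```
# Sketch of netfilter/conntrack options needed for kube-proxy rules
CONFIG_NF_CONNTRACK=m
CONFIG_NF_NAT=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_NAT=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_IP_VS=m
```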
If we want to run any hypervisor driver on arm64, we need a new "minikube.iso" that works on the architecture.
Currently the image is full of amd64 binaries that we download from other places rather than building from source.
Buildroot does support the ARM architectures (armv7, arm64).
For instance the Raspberry Pi OS still uses 32-bit by default...