
minikube on raspberry pi (ARM desktop) #9762

Closed
afbjorklund opened this issue Nov 21, 2020 · 30 comments
Labels
kind/documentation Categorizes issue or PR as related to documentation. kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@afbjorklund
Collaborator

This is a follow-up to the previous issues about supporting arm architectures in general, and Raspberry Pi in particular.

We have had support for running the "none" driver for some time, and soon there will also be support for running the "docker" driver.

Normally the Raspberry Pi is used to build clusters, which you then connect to from a separate machine, such as a laptop.

But with the Raspberry Pi 4 (and 400), it is now possible to run both the desktop and the cluster on the same machine...

Distros

Drivers

4 GB of memory is recommended. If your Raspberry Pi 4 has 2 GB, you need to add 2 GB of swap (dphys-swapfile)

It is not recommended to run minikube directly on the Raspberry Pi 2 or 3, since they only have 1 GB of memory.
(You can still use these older machines remotely, for instance with the "generic" driver, but that is a separate story...)
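For the 2 GB case, here is a minimal sketch of bumping the dphys-swapfile swap size; `set_swapsize` is a hypothetical helper, the real work is just editing CONF_SWAPSIZE in /etc/dphys-swapfile:

```shell
# Hypothetical helper: set CONF_SWAPSIZE (in MB) in a dphys-swapfile config.
set_swapsize() {  # usage: set_swapsize <config-text> <megabytes>
  printf '%s\n' "$1" | sed "s/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=$2/"
}

# Typical use on Raspberry Pi OS (then re-create and activate the swap):
#   set_swapsize "$(cat /etc/dphys-swapfile)" 2048 | sudo tee /etc/dphys-swapfile
#   sudo dphys-swapfile setup && sudo dphys-swapfile swapon
```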

If running a virtual desktop in the cloud, the recommendation is also to use a separate VM instead of nested virtualization.

i.e. get two of them


Remaining issues:

  • kicbase image is not available
  • etcd still adds the arch tag
  • storage-provisioner v3 missing
  • warnings due to resources

We also need some better documentation for non-amd64 architectures #6159
It should show how to pick "minikube-linux-arm" or "minikube-linux-arm64"

 curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-arm
 sudo install minikube-linux-arm /usr/local/bin/minikube
 curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-arm64
 sudo install minikube-linux-arm64 /usr/local/bin/minikube
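To pick the right one automatically, something like this sketch could work; `pick_minikube_binary` is a hypothetical helper keyed on `uname -m`:

```shell
# Map the kernel's architecture name to the matching minikube release binary.
pick_minikube_binary() {
  case "$1" in
    aarch64)       echo minikube-linux-arm64 ;;
    armv6l|armv7l) echo minikube-linux-arm ;;
    x86_64)        echo minikube-linux-amd64 ;;
    *) echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

bin="$(pick_minikube_binary "$(uname -m)")" || exit 1
echo "binary to download: $bin"
# then: curl -LO "https://storage.googleapis.com/minikube/releases/latest/$bin"
#       sudo install "$bin" /usr/local/bin/minikube
```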

In order to run containers, support for memory cgroups needs to be added to the kernel cmdline, followed by a reboot.
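On Raspberry Pi OS the kernel command line lives in /boot/cmdline.txt (on Ubuntu for the Pi, /boot/firmware/cmdline.txt). A sketch, with `enable_memory_cgroups` as a hypothetical idempotent helper:

```shell
# Append the memory-cgroup flags to a kernel command line, exactly once.
enable_memory_cgroups() {
  case "$1" in
    *cgroup_memory=1*) echo "$1" ;;  # already enabled, leave unchanged
    *) echo "$1 cgroup_enable=memory cgroup_memory=1" ;;
  esac
}

# Typical use (then reboot for it to take effect):
#   sudo cp /boot/cmdline.txt /boot/cmdline.txt.bak
#   enable_memory_cgroups "$(cat /boot/cmdline.txt)" | sudo tee /boot/cmdline.txt
#   sudo reboot
```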

When running with Raspbian or Xfce, you also need to install the "gnome-terminal" and "fonts-noto-color-emoji" packages.
Otherwise the emoji will be replaced by "missing" symbols. Alternatively, you can set MINIKUBE_IN_STYLE=false

There are other distros (such as Fedora) and other drivers (such as Podman), but they are not "supported"

Eventually KVM will be available as well, but it depends on the ISO image #9228

Example output:

😄  minikube v1.15.1 on Ubuntu 20.04 (arm64)
✨  Using the docker driver based on existing profile

⛔  Requested memory allocation (1848MB) is less than the recommended minimum 1907MB. Deployments may fail.


🧯  The requested memory allocation of 1848MiB does not leave room for system overhead (total system memory: 1848MiB). You may face stability issues.
💡  Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1848mb'

👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
E1120 14:23:55.277341    3763 cache.go:63] save image to file "k8s.gcr.io/etcd-arm64:3.4.13-0" -> "/home/ubuntu/.minikube/cache/images/k8s.gcr.io/etcd-arm64_3.4.13-0" failed: nil image for k8s.gcr.io/etcd-arm64:3.4.13-0: GET https://k8s.gcr.io/v2/etcd-arm64/manifests/3.4.13-0: MANIFEST_UNKNOWN: Failed to fetch "3.4.13-0" from request "/v2/etcd-arm64/manifests/3.4.13-0".
E1120 14:24:08.221227    3763 cache.go:193] Error caching images:  Caching images for kubeadm: caching images: caching image "/home/ubuntu/.minikube/cache/images/k8s.gcr.io/etcd-arm64_3.4.13-0": nil image for k8s.gcr.io/etcd-arm64:3.4.13-0: GET https://k8s.gcr.io/v2/etcd-arm64/manifests/3.4.13-0: MANIFEST_UNKNOWN: Failed to fetch "3.4.13-0" from request "/v2/etcd-arm64/manifests/3.4.13-0".
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
❌  Unable to load cached images: loading cached images: stat /home/ubuntu/.minikube/cache/images/k8s.gcr.io/etcd-arm64_3.4.13-0: no such file or directory
🔎  Verifying Kubernetes components...
❗  Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 3.923854777s
❗  Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 3.924723398s
💡  Restarting the docker service may improve performance.
💡  Restarting the docker service may improve performance.
🌟  Enabled addons: default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
@afbjorklund afbjorklund added kind/feature Categorizes issue or PR as related to a new feature. kind/documentation Categorizes issue or PR as related to documentation. labels Nov 21, 2020
@priyawadhwa priyawadhwa added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Nov 30, 2020
@ioef

ioef commented Jan 16, 2021

ubuntu@ubuntu:~$ ./minikube-linux-arm64 version
minikube version: v1.16.0
commit: 9f1e482

ubuntu@ubuntu:~$ minikube start
😄 minikube v1.16.0 on Ubuntu 20.04 (arm64)
✨ Using the docker driver based on user configuration

🤷 Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
💡 Suggestion: Try other drivers
📘 Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

Any suggestion for the way forward is highly appreciated!!!

Thank you!

Update: Using currently the none driver instead of docker which seems to be operating.

@afbjorklund
Collaborator Author

afbjorklund commented Jan 17, 2021

Docker driver was still not allowed/"supported" in v1.16.0

It needs 66a671f (to remove the previous 24971a5)
Above, I was using the patches from the other issue...

#9227 (comment)

Update: Using currently the none driver instead of docker which seems to be operating.

That would be the way to go until then. Note that you have to install docker yourself, and you don't get any "node" isolation.

https://minikube.sigs.k8s.io/docs/drivers/none/

This means that it is not recommended to run on your desktop, and thus not enough to solve this particular issue...

But supported on a dedicated node, since #6843

@ioef

ioef commented Jan 17, 2021

Thank you, this is completely understood.

I wonder if there is a possibility to go with the kvm driver. I would definitely like to stick with minikube on raspberry pi 4 and avoid other solutions.

@afbjorklund
Collaborator Author

I wonder if there is a possibility to go with the kvm driver. I would definitely like to stick with minikube on raspberry pi 4 and avoid other solutions.

It's outlined at the start of this issue: the main missing piece is the ISO (for arm64), plus some minor details like a KVM driver (for arm64)

@spowelljr spowelljr added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Mar 3, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 1, 2021
@yaleman

yaleman commented Jun 4, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 4, 2021
@yaleman

yaleman commented Jun 4, 2021

I'd love to see some better support for this - installer packages are available for minikube, but the docker images for starting it up on armv7 aren't, and while the arm64 install works, the tutorials don't. :(

@afbjorklund
Collaborator Author

I think the docker images for arm (v7) are still missing, but all the pieces (like Ubuntu) should be there for building them. Something to add to minikube next, perhaps, after getting the ISO image up on arm64

Back when I wrote this it was easier to build the image yourself, so that should still be possible. It mostly installs packages and scripts on top of ubuntu:20.04, so it should not be very heavy other than the download and I/O?

@Luttik

Luttik commented Jul 18, 2021

@afbjorklund Could you share the steps to do so? That would be great for people (like me) who run into this issue before it is resolved (by officially supporting the KIC base image).

@afbjorklund
Collaborator Author

afbjorklund commented Jul 30, 2021

I think it is mostly the integrated push that is the problem; using BuildKit and buildx should be possible for most.

KICBASE_ARCH = linux/amd64,linux/arm64,linux/arm

push-kic-base-image: docker-multi-arch-builder ## Push multi-arch local/kicbase:latest to all remote registries

But if you are on an arm server, you can use the "local-kicbase" target instead of cross-building with qemu:

$ make local-kicbase

local-kicbase: ## Builds the kicbase image and tags it local/kicbase:latest and local/kicbase:$(KIC_VERSION)-$(COMMIT_SHORT)

@Luttik

Luttik commented Aug 24, 2021

@afbjorklund should the merge above solve the problem for minikube on raspberry pi when using docker? And do you happen to have an ETA for when end-users have access to the fix?

Thanks for the effort!

@afbjorklund
Collaborator Author

Good question! But it hasn't kicked in for the temporary builds, so maybe it will only affect the upcoming release?

$ docker manifest inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 7820,
         "digest": "sha256:50043aeed4b48d15fc257efa371e0eda231333f4878d9a4ea9e55676bd5c7d22",
         "platform": {
            "architecture": "arm64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 7821,
         "digest": "sha256:f75e30b3ec3579436be6869fade4119fe62277cf991fc6f4414c5b04f76820f1",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      }
   ]
}

You should be able to build your own image meanwhile, though. And use that, for the --base-image ?

make build-kic-base-image KICBASE_ARCH=linux/arm
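Once built and tagged, the image can be passed to minikube start via the --base-image flag. A sketch; the local/kicbase:latest tag below is illustrative, use whatever your make target actually produced:

```shell
# Compose the start command for a locally built kicbase image (example tag).
base_image="local/kicbase:latest"
cmd="minikube start --driver=docker --base-image=$base_image"
echo "$cmd"   # run this on the Pi once the image exists locally
```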

@Luttik

Luttik commented Aug 24, 2021

So I tried the following steps:

  1. Install Go (this seems to be required); for my Raspberry Pi 4 Model B I needed the armv6l version.
  2. Clone this repo.
  3. In the root folder of said repo, run make build-kic-base-image KICBASE_ARCH=linux/arm

This results in the following error:

$ make build-kic-base-image KICBASE_ARCH=linux/arm

env DOCKER_CLI_EXPERIMENTAL=enabled docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
Unable to find image 'multiarch/qemu-user-static:latest' locally
latest: Pulling from multiarch/qemu-user-static
b71f96345d44: Pull complete
d54997e8dda4: Pull complete
30abb83a18eb: Pull complete
0657daef200b: Pull complete
c4e9493f462e: Pull complete
Digest: sha256:8cf3d90c0370693a0e4cab830d54126a554bda08be75eea967d587ce379bce0e
Status: Downloaded newer image for multiarch/qemu-user-static:latest
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v7) and no specific platform was requested
docker: Error response from daemon: failed to create endpoint happy_meninsky on network bridge: failed to add the host (veth78b297c) <=> sandbox (veth6f963e8) pair interfaces: operation not supported.
make: *** [Makefile:681: docker-multi-arch-builder] Error 125

@afbjorklund Any clue for noobs like me on how to fix this?

@afbjorklund
Collaborator Author

afbjorklund commented Aug 24, 2021

Any clue for noobs like me on how to fix this?

Sorry for assuming it would "just work". When building locally on the RPi, rather than on the laptop, all that complexity is not needed.

make local-kicbase

I think the go requirement comes from "auto-pause"; in the future this might be distributed as a proper binary package instead...

FROM golang:1.16
WORKDIR /src
# because auto-pause binary depends on minikube's code we need to pass the whole source code as the context
ADD . .
RUN cd ./cmd/auto-pause/ && go build

@Luttik

Luttik commented Aug 24, 2021

So based on your comment above I tried the following:

  1. Install Go (this seems to be required); for my Raspberry Pi 4 Model B I needed the armv6l version.
  2. Clone this repo.
  3. In the root folder of said repo, run make local-kicbase

Which returns this:

$ make local-kicbase

docker build -f ./deploy/kicbase/Dockerfile -t local/kicbase:v0.0.25-1628619379-12032  --build-arg COMMIT_SHA=v1.22.0-"6584abaea8dfa67a79c025811519fb71c4206bac" --cache-from gcr.io/k8s-minikube/kicbase:v0.0.25-1628619379-12032 .
Sending build context to Docker daemon  381.4MB
Step 1/48 : FROM golang:1.16
1.16: Pulling from library/golang
1ce31b8c318c: Pull complete
3bb5a38f9519: Pull complete
005a545de7dc: Pull complete
0801bbb4cc34: Pull complete
055a9859d390: Pull complete
795e61bafb4e: Pull complete
74a9e505060c: Pull complete
Digest: sha256:87cbbe43ece5024f0745be543c81ae6bf7b88291a8bc2b4429a43b7236254eca
Status: Downloaded newer image for golang:1.16
 ---> be9ca699ecce
Step 2/48 : WORKDIR /src
 ---> Running in f65d58a2ea9c
Removing intermediate container f65d58a2ea9c
 ---> 3a6a891b838a
Step 3/48 : ADD . .
 ---> 0e506417e8c0
Step 4/48 : RUN cd ./cmd/auto-pause/ && go build
 ---> Running in dbf871d8ad28
failed to create endpoint loving_beaver on network bridge: failed to add the host (veth3df5b2f) <=> sandbox (veth2164a4c) pair interfaces: operation not supported
make: *** [Makefile:692: local-kicbase] Error 1

I don't believe I've done anything weird with my docker or raspberry pi's networking (which seems to be the source of the issue). Any clue on how to solve this?

I'd like to get this checklist to a point where not only I can use it, but also people who are truly inexperienced with Linux / K8s, since Raspberry Pis are often the tool used to learn these things.

@afbjorklund
Collaborator Author

afbjorklund commented Aug 24, 2021

I will give it a try later; hopefully it will be easier once the new release is out with the new image...

If you are eager to get started, I would recommend Ubuntu 20.04 for arm64 on a spare SD card...
The support for legacy arm32 is somewhat lacking when it comes to Kubernetes and friends.

EDIT: some say that a reboot fixed similar issues on their Raspberry Pi, after upgrading things

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 22, 2021
@Luttik

Luttik commented Nov 22, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 22, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 20, 2022
@yaleman

yaleman commented Feb 20, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 20, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 21, 2022
@sebdanielsson

/remove

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 21, 2022
@Luttik

Luttik commented Jun 22, 2022

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 22, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 20, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 20, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Nov 19, 2022
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sebdanielsson

/reopen

@k8s-ci-robot
Contributor

@sebdanielsson: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


10 participants