
Add support for arm64, armV7, armV8 #426

Closed
utegental opened this issue Jan 21, 2021 · 60 comments
Assignees
Labels: Hacktoberfest, kind/feature (Categorizes issue or PR as related to a new feature), lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed)

Comments

@utegental

What would you like to be added:
There is no arm support. It would be good to see support for these architectures.

Why is this needed:
Getting "standard_init_linux.go:219: exec user process caused: exec format error" for pods scheduled on arm nodes.
Usually this is easy to do: just make the image multi-architecture =)

@utegental utegental added the kind/feature label Jan 21, 2021
@marquiz
Contributor

marquiz commented Jan 21, 2021

I agree. There have been attempts in the past, e.g. #203 and #327 at least. Now that we have K8s test-infra (gcb as the image builder) as the CI pipeline, this might be doable.

@Diaoul

Diaoul commented Feb 27, 2021

I would be interested in this as well. Can you elaborate on what kind of work is required to make this happen? From the previous PRs, it doesn't seem that a great deal of code is required.

@xunholy

xunholy commented Mar 14, 2021

One is currently being maintained in parity with upstream at docker.io/raspbernetes/node-feature-discovery, which is built for armv7 & arm64.

@Diaoul

Diaoul commented Mar 14, 2021

Great! Maybe this can be merged upstream?

@zvonkok
Contributor

zvonkok commented Mar 16, 2021

@marquiz We can easily do this with GitHub Actions:

jobs:
  build_job:
    runs-on: ubuntu-latest
    name: Build on ${{ matrix.arch }}

    strategy:
      matrix:
        include:
          - arch: armv7
            distro: ubuntu20.04
          - arch: aarch64
            distro: ubuntu20.04
          - arch: s390x
            distro: ubuntu20.04
          - arch: ppc64le
            distro: ubuntu20.04
    steps:
      # The steps were cut off in the original comment. A plausible completion
      # (an assumption, not part of the original) is uraimo/run-on-arch-action,
      # whose documented matrix format this arch/distro pairing matches:
      - uses: actions/checkout@v2
      - uses: uraimo/run-on-arch-action@v2
        with:
          arch: ${{ matrix.arch }}
          distro: ${{ matrix.distro }}
          run: |
            uname -a   # placeholder for the actual build commands

@zvonkok
Contributor

zvonkok commented Mar 16, 2021

The only question is: does gcr.io support manifest lists?

@marquiz
Contributor

marquiz commented Mar 16, 2021

@marquiz We can easily do this with GitHub Actions:

Yeah, sure. But we use prow for building images.

@anthr76

anthr76 commented Jun 13, 2021

Any updates on this?

@anta5010

@marquiz Are there any plans to progress with this issue?
It looks like the main problem is not in updating the Makefile. This change is quite simple if buildx can be used. But before creating a PR, the CI pipelines need to be updated to support buildx. Do you know if it's possible and what help is required?

@marquiz
Contributor

marquiz commented Jul 7, 2021

@marquiz Are there any plans to progress with this issue?
It looks like the main problem is not in updating the Makefile. This change is quite simple if buildx can be used. But before creating a PR, the CI pipelines need to be updated to support buildx. Do you know if it's possible and what help is required?

Hmm, kubernetes test-infra might actually support docker buildx nowadays, so we could do this. What needs to be done:

  • update Makefile
  • (probably) update scripts/test-infra/* to build images on all architectures
  • update docs (docs/get-started/deployment-and-usage.md and docs/advanced/developer-guide.md quickly come to mind)

Patches are welcome 😄
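
To make this concrete, here is a minimal sketch of the buildx invocation such a Makefile update would boil down to (the registry and tag below are illustrative placeholders, not NFD's actual values):

$ docker buildx build --platform linux/amd64,linux/arm64 \
    -t registry.example.com/nfd/node-feature-discovery:devel \
    --push .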

@ArangoGutierrez
Contributor

The best example I could find is https://github.com/kubernetes-sigs/service-catalog/blob/master/Makefile#L343,
but I am not a big fan of cross-compilation, so the question is: do we want to do it like that?

@marquiz
Contributor

marquiz commented Aug 9, 2021

I am not a big fan of cross-compilation, so the question is: do we want to do it like that?

With docker buildx you shouldn't need that. Just something like docker buildx build --platform linux/arm64 . (I haven't experimented with this myself, though). Thus, I think we should be OK with relatively simple modifications to the Makefile.

@ArangoGutierrez
Contributor

kubernetes/test-infra#22977 enables docker buildx on prow

@ArangoGutierrez
Contributor

ArangoGutierrez commented Aug 9, 2021

We need to watch the output of

@utegental
Author

https://hub.docker.com/r/raspbernetes/node-feature-discovery - these guys added multi-arch support several months ago. I don't know exactly how it was done, but this container works without issues.

@marquiz
Contributor

marquiz commented Aug 10, 2021

I don't know exactly how it was done, but this container works without issues.

I think they use docker buildx: https://github.com/raspbernetes/multi-arch-images

@ArangoGutierrez
Contributor

https://hub.docker.com/r/raspbernetes/node-feature-discovery is not a kubernetes-sigs repo using prow. I think the main idea is to use prow as our official image builder, right?

@anthr76

anthr76 commented Aug 10, 2021

If prow isn't sufficient to emulate other architectures and there isn't infra to natively build on them, why use it? Is there a requirement? I know of other SIGs building multi-arch.

Other architectures exist, and this helps adoption greatly <3

@marquiz
Contributor

marquiz commented Aug 11, 2021

I think the main idea is to use prow as our official image builder, right?

Yes. We use k8s test-infra (prow) for CI and we're staying there, as it's tightly coupled with k8s container image hosting (k8s.gcr.io). Moreover, we want to serve all the container images from a single registry.

If prow isn't sufficient to emulate other architectures and there isn't infra to natively build on them, why use it? Is there a requirement? I know of other SIGs building multi-arch.

I'm a bit lost here 🧐 As I commented earlier, AFAIU prow nowadays supports multi-arch builds via docker buildx. And AFAIU, Raspbernetes uses the same build method.

We just need a PR in NFD to enable multi-arch builds with docker buildx.

@kmhaeren

kmhaeren commented Aug 17, 2021

+1

This would be really handy to discover features on Jetson Xaviers

@jonkerj
Contributor

jonkerj commented Aug 23, 2021

FWIW, I build (local) multi-arch (arm64/amd64) docker images of NFD using this command line:

$ git checkout v0.9.0
$ make IMAGE_BUILD_CMD="docker buildx build --push --platform linux/amd64,linux/arm64" IMAGE_REGISTRY="docker.io/jonkerj"

It should not be too difficult to patch up the Makefile to do this by default. Maybe it needs some var containing the supported platforms; it's a matter of taste. See the sketch below.
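
Building on that idea, a hedged sketch of an overridable platform list (the PLATFORMS variable and the docker.io/example registry are hypothetical; only IMAGE_BUILD_CMD and IMAGE_REGISTRY come from the invocation above):

$ PLATFORMS="linux/amd64,linux/arm64,linux/arm/v7"
$ make IMAGE_BUILD_CMD="docker buildx build --push --platform ${PLATFORMS}" IMAGE_REGISTRY="docker.io/example"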

@ArangoGutierrez
Contributor

@jonkerj running your command returns:

 Done setting up docker in docker.
+ WRAPPED_COMMAND_PID=174
+ wait 174
+ scripts/test-infra/build-image.sh
namespace: node-feature-discovery
image: k8s.gcr.io/nfd/node-feature-discovery:v0.10.0-devel-2-g58e147d
docker buildx build --platform linux/amd64,linux/arm64 --build-arg VERSION=v0.10.0-devel-2-g58e147d \
    --target full \
    --build-arg HOSTMOUNT_PREFIX=/host- \
    --build-arg BASE_IMAGE_FULL=debian:buster-slim \
    --build-arg BASE_IMAGE_MINIMAL=gcr.io/distroless/base \
    -t k8s.gcr.io/nfd/node-feature-discovery:v0.10.0-devel-2-g58e147d \
     \
     ./
error: multiple platforms feature is currently not supported for docker driver. Please switch to a different driver (eg. "docker buildx create --use")
make: *** [Makefile:70: image] Error 1 
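
For reference, this error is buildx's default docker driver refusing multi-platform builds; the usual fix is a one-time switch to a docker-container builder, plus QEMU binfmt handlers for the emulated architectures. A sketch of that setup, assuming a Linux host:

$ # register QEMU handlers so foreign-architecture binaries run under emulation
$ docker run --privileged --rm tonistiigi/binfmt --install all
$ # create and select a builder backed by the docker-container driver
$ docker buildx create --name multiarch --driver docker-container --use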

@ArangoGutierrez
Contributor

After splitting out the build by platform, it seems to work OK -> cbd42af

@jonkerj
Contributor

jonkerj commented Aug 23, 2021

It could be a feature/limitation of your docker environment or buildx initialization; the above invocation works for me (docker 20.10.8 / buildx v0.6.1 / moby/buildkit:buildx-stable-1).

@jonkerj
Contributor

jonkerj commented Aug 23, 2021

The Raspbernetes project automatically builds multi-arch images of several projects, including NFD. You could take a look at the workflow file, which essentially wraps docker buildx build

@ArangoGutierrez
Contributor

It's not "my" environment, it's the kubernetes/test-infra prow environment, the one we use for all NFD CI and image building.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten label May 27, 2022
@zvonkok
Contributor

zvonkok commented Jun 1, 2022

@disconn3ct We're talking here about arm32v7 and arm64v8, right? Those are the terms docker buildx understands. Just to be on the same page.

@eero-t

eero-t commented Aug 15, 2022

With #698 we now have ARM64

@marquiz That was merged over half a year ago, into a release done in March.

@disconn3ct We're talking here about arm32v7 and arm64v8, right? Those are the terms docker buildx understands. Just to be on the same page.

Is this ticket still open until all ARM targets supported by docker buildx have been added?

@liupeng0518
Member

Hi guys,
when I build the arm64 binaries, an error occurs:

$ GOOS=linux GOARCH=arm64 go install ./cmd/...
# sigs.k8s.io/node-feature-discovery/source/cpu
source/cpu/cpu.go:203:63: undefined: getCpuidFlags

@marquiz
Contributor

marquiz commented Oct 24, 2022

when I build the arm64 binaries, an error occurs:

Probably because you don't have cross-build tools for C code. Try docker buildx (make image-all).
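
For anyone who wants a native cross-build instead of buildx: the undefined getCpuidFlags symbol is consistent with cgo sources being dropped, since CGO_ENABLED defaults to 0 when cross-compiling. A hedged sketch, assuming a Debian/Ubuntu host and its aarch64 cross-toolchain package:

$ sudo apt-get install gcc-aarch64-linux-gnu
$ CGO_ENABLED=1 CC=aarch64-linux-gnu-gcc GOOS=linux GOARCH=arm64 go install ./cmd/...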

@liupeng0518
Member

when I build the arm64 binaries, an error occurs:

Probably because you don't have cross-build tools for C code. Try docker buildx (make image-all).

OK, thanks.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jan 23, 2023
@ArangoGutierrez
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Jan 23, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Apr 23, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label May 23, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Jun 22, 2023
@Links2004
Contributor

/reopen

v0.15.3 does not work on armv7l

# arch
armv7l

# crictl --runtime-endpoint=unix:///run/containerd/containerd.sock pull registry.k8s.io/nfd/node-feature-discovery:v0.15.3
E0401 19:41:41.168147   29602 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"registry.k8s.io/nfd/node-feature-discovery:v0.15.3\": no match for platform in manifest: not found" image="registry.k8s.io/nfd/node-feature-discovery:v0.15.3"
FATA[0001] pulling image: rpc error: code = NotFound desc = failed to pull and unpack image "registry.k8s.io/nfd/node-feature-discovery:v0.15.3": no match for platform in manifest: not found 
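
A quick way to confirm which platforms a published tag actually carries is to inspect its manifest list; a sketch assuming a docker CLI with manifest support (linux/arm/v7 would simply be absent from the output here):

$ docker manifest inspect registry.k8s.io/nfd/node-feature-discovery:v0.15.3 \
    | grep -E '"(architecture|variant)"'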

@k8s-ci-robot
Contributor

@Links2004: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

v0.15.3 does not work on armv7l

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@marquiz
Contributor

marquiz commented Apr 2, 2024

@Links2004 would you be willing to work on this?

@Links2004
Contributor

Sure, I can make some free time for this next week.
I only need some pointers on how the build system works.
It looks like some external tooling is used (at least I don't see a GitHub workflow for the image build).

@ArangoGutierrez
Contributor

/reopen
/assign Links2004

@k8s-ci-robot
Contributor

@ArangoGutierrez: Reopened this issue.

In response to this:

/reopen
/assign Links2004

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) May 2, 2024