[Experimental] Enable KleidiAI examples to run on Graviton3 #1721
base: main
Conversation
Hi @akote123, thanks for the PR! Can you tell me more about which kernels you're trying to run on Graviton3?
@metascroy , |
(Force-pushed from 59511cd to 456fecc.)
Hi @akote123, so there are two kinds of KleidiAI int4 kernels available. One kind is available in PyTorch itself, and models can be quantized with it like this: https://github.com/pytorch/ao/blob/main/torchao/experimental/tests/test_packed_linear_int8_dynamic_activation_intx_weight_layout_target_aten.py#L48-L60

The other lives in the torchao experimental kernels (#1826) and can be built from the ao directory with the TORCHAO_BUILD_KLEIDIAI flag enabled (note that TORCHAO_BUILD_CPU_AARCH64 is automatically set on Arm-based Mac machines). You can see how to quantize a model using these kernels here: https://github.com/pytorch/ao/blob/main/torchao/experimental/tests/test_int8_dynamic_activation_intx_weight.py#L62-L72

(KleidiAI kernels will only be used with int4 and has_weight_zeros=false; otherwise our "universal" kernels will be used. If you build with TORCHAO_BUILD_KLEIDIAI=0, the universal kernels will be used instead of KleidiAI for int4/has_weight_zeros=false as well.)
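The build command itself did not survive in the comment above. As a rough sketch only, assuming setup.py picks up the USE_CPP, TORCHAO_BUILD_CPU_AARCH64, and TORCHAO_BUILD_KLEIDIAI flags from the environment, the invocation would look something like this (the exact command in the original thread may have differed):

```bash
# Hypothetical reconstruction, not the verbatim command from the thread.
# Run from the ao directory; the flags are the ones named in the comment above.
USE_CPP=1 TORCHAO_BUILD_CPU_AARCH64=1 TORCHAO_BUILD_KLEIDIAI=1 pip install .
```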
Hi @metascroy! I've been trying to follow your instructions to get the KleidiAI int4 kernels working on a Scaleway Arm instance (4x16), but I'm still encountering issues. I've built and installed KleidiAI. However, when I try to run code that uses the KleidiAI kernels, I get an error indicating that the _pack_8bit_act_4bit_weight operator is not registered.

My CPU definitely has the required Arm features (verified from the reported CPU flags), including asimd (NEON), asimddp (Dot Product), etc. Is there something specific I need to do to get the _pack_8bit_act_4bit_weight operator registered? Are there any diagnostic steps I can take to debug this further?
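One quick diagnostic, not from the thread: assuming the compiled extension registers its operators under the torch.ops.torchao namespace when torchao is imported (both the namespace and the import behavior are assumptions; the op name comes from the question above), you can check whether the packing op is visible at all:

```python
import torch
import torchao  # importing torchao should load the compiled extension, if one was built

# If the experimental kernels were built and loaded, this should print True;
# False would point at a build/packaging problem rather than a CPU-feature problem.
print(hasattr(torch.ops.torchao, "_pack_8bit_act_4bit_weight"))
```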
We currently only enable them on the Mac platform (https://github.com/pytorch/ao/blob/main/setup.py#L52C1-L56). We could probably relax this condition for Linux by dropping the platform.system() == "Darwin" condition. By the way: when adding the flag TORCHAO_BUILD_KLEIDIAI=1, you do not need to build/install KleidiAI separately; setup.py will do it.
Thanks for your response! I've tried implementing your suggested fix:

```python
build_torchao_experimental = (
    use_cpp == "1"
    and platform.machine().startswith("arm64")
    and (platform.system() == "Darwin" or platform.system() == "Linux")
)
```

I then did a complete rebuild.
However, I'm still getting the same error. Does this suggest that the KleidiAI operators aren't being registered properly, or aren't loading correctly?
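One possible explanation, not called out explicitly in the thread: on Linux, platform.machine() typically reports "aarch64" rather than "arm64", so the startswith("arm64") check above still evaluates to False on Graviton3 and the experimental kernels are never built. A hypothetical sketch of a condition that also matches Linux Arm follows; the use_cpp handling is an assumption, and this is not necessarily the change that was eventually adopted:

```python
import os
import platform

# Assumption: mirrors how setup.py reads the USE_CPP flag from the environment.
use_cpp = os.getenv("USE_CPP", "1")

# Linux Arm machines usually report "aarch64", while macOS reports "arm64";
# accept both spellings of the 64-bit Arm architecture.
is_arm64 = platform.machine().lower() in ("arm64", "aarch64")

build_torchao_experimental = (
    use_cpp == "1"
    and is_arm64
    and platform.system() in ("Darwin", "Linux")
)
```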
Moving the conversation to an issue.
Enable KleidiAI int4 experimental features to run on Graviton3.
cc: @metascroy