Enable fp16+int4 mixed precision path for int4 xpu path with int zero point #2240


Open · liangan1 wants to merge 3 commits into main

Conversation

liangan1 commented

Background
On the XPU device, when the user selects an int zero point, the torch.ops.aten._weight_int4pack_mm_with_scales_and_zeros operator is used for the A16W4 computation. This op supports both FP16 and BF16 activations with int4 weights on XPU, but torchao currently enables only the BF16 activation path. This PR unlocks FP16 activation support.
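For illustration, here is a minimal sketch of the intended usage: quantize a model to int4 weight-only with an int zero point on XPU, then run it with fp16 activations. This is not code from the PR; the config names below (Int4XPULayout, ZeroPointDomain.INT, the int4_weight_only arguments) are assumptions based on torchao's public API around the time of this PR, so check them against your installed version.

```python
# Minimal sketch (assumed API, not the PR's own test): int4 weight-only
# quantization with an int zero point on XPU, run with fp16 activations.
import torch
from torchao.dtypes import Int4XPULayout
from torchao.quantization import int4_weight_only, quantize_
from torchao.quantization.quant_primitives import ZeroPointDomain

model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).to("xpu", torch.float16)

# An int zero point routes the matmul to the
# _weight_int4pack_mm_with_scales_and_zeros kernel on XPU.
quantize_(
    model,
    int4_weight_only(
        group_size=128,
        layout=Int4XPULayout(),
        zero_point_domain=ZeroPointDomain.INT,
    ),
)

# Before this PR, torchao accepted only bf16 activations on this path;
# fp16 is the activation dtype this PR unlocks.
x = torch.randn(8, 1024, device="xpu", dtype=torch.float16)
with torch.no_grad():
    y = model(x)
```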

pytorch-bot bot commented May 22, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2240

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label on May 22, 2025
liangan1 (Author) commented

@jerryzh168 can you help review?

liangan1 (Author) commented

@EikanWang

Labels: CLA Signed
Projects: None yet
Participants: 2