Enable aten.relu_.default in the CadenceQuantizer #4344
Conversation
Summary: As titled. The kernel will be used by the quantizer but is missing the meta kernel. Differential Revision: D60070844
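For context, a meta kernel is a shape/dtype-only implementation that export needs in order to trace the op without a real device kernel. Below is a minimal hedged sketch of what registering one with `torch.library` looks like; the namespace, op name, and schema are illustrative placeholders, not the actual Cadence definitions.

```python
# Sketch only: registering a meta kernel for a custom op so export can infer
# output shape/dtype. The namespace, op name, and schema are illustrative,
# not the real Cadence ones.
import torch
from torch.library import Library, impl

_lib = Library("example_ns", "DEF")  # hypothetical namespace
_lib.define("quantized_relu(Tensor X, Tensor X_zero_point) -> (Tensor Y)")

@impl(_lib, "quantized_relu", "Meta")
def quantized_relu_meta(X: torch.Tensor, X_zero_point: torch.Tensor) -> torch.Tensor:
    # A meta kernel only describes the output; it never reads tensor data.
    return X.new_empty(X.size(), dtype=X.dtype)
```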
Summary: As titled. Some models use `torch.ops.aten.relu_.default` instead of `torch.ops.aten.relu.default`. Enable that in the quantizer. Differential Revision: D60071019
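A minimal sketch of the kind of change this enables, assuming a PT2E-style graph scan; the helper name and structure are hypothetical, not the actual CadenceQuantizer source:

```python
# Hypothetical helper (not the actual CadenceQuantizer code): treat the
# in-place relu_ overload the same as relu when scanning the exported graph.
import torch
from torch.fx import GraphModule

# Some exported models contain the in-place variant, so match both overloads.
_RELU_TARGETS = (
    torch.ops.aten.relu.default,
    torch.ops.aten.relu_.default,
)

def _find_relu_nodes(gm: GraphModule):
    """Yield every call_function node whose target is relu or relu_."""
    for node in gm.graph.nodes:
        if node.op == "call_function" and node.target in _RELU_TARGETS:
            yield node
```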
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/4344
Note: Links to docs will display an error until the docs builds have been completed. ✅ No failures as of commit 1ffc76e with merge base f0364e8. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D60071019
lgtm
@@ -31,7 +31,6 @@
 lib.define(
     "quantized_layer_norm(Tensor X, Tensor X_scale, Tensor X_zero_point, int[] normalized_shape, Tensor weight, Tensor bias, float eps, float output_scale, int output_zero_point) -> (Tensor Y)"
 )
-
blank lines removed by accident?
No, I made all cases look the same (grouping the default and out overloads).
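For context, this is the grouping style the reply refers to, shown as a hedged sketch: the functional schema and its out-variant are defined back to back. Only the functional `quantized_layer_norm` schema comes from the diff above; the namespace and the `.out` signature are assumptions.

```python
# Illustrative sketch of the "group default and out overloads" layout.
# Only the functional quantized_layer_norm schema is taken from the diff
# above; the namespace and the .out signature are assumptions.
from torch.library import Library

lib = Library("example_ns", "DEF")  # hypothetical namespace

lib.define(
    "quantized_layer_norm(Tensor X, Tensor X_scale, Tensor X_zero_point, int[] normalized_shape, Tensor weight, Tensor bias, float eps, float output_scale, int output_zero_point) -> (Tensor Y)"
)
lib.define(
    "quantized_layer_norm.out(Tensor X, Tensor X_scale, Tensor X_zero_point, int[] normalized_shape, Tensor weight, Tensor bias, float eps, float output_scale, int output_zero_point, *, Tensor(a!) out) -> Tensor(a!)"
)
```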
This pull request has been merged in 48da61a.