
Enable aten.relu_.default in the CadenceQuantizer #4344

Closed · mcremon-meta wants to merge 2 commits

Conversation

mcremon-meta (Contributor) commented:

Summary: As titled. Some models use `torch.ops.aten.relu_.default` instead of `torch.ops.aten.relu.default`. Enable that in the quantizer.

Differential Revision: D60071019
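
As a quick illustration of what matching both overloads involves, here is a minimal, hypothetical sketch; the tuple `RELU_TARGETS` and the helper `is_relu_node` are invented for this example and are not the actual CadenceQuantizer code:

```python
import torch
from torch.fx import Node

# Both ReLU overloads a quantizer pattern may need to recognize.
# torch.ops.aten.relu_.default is the in-place variant produced by x.relu_().
RELU_TARGETS = (
    torch.ops.aten.relu.default,   # functional: y = torch.relu(x)
    torch.ops.aten.relu_.default,  # in-place:   x.relu_()
)

def is_relu_node(node: Node) -> bool:
    """Return True if an FX node calls either ReLU overload."""
    return node.op == "call_function" and node.target in RELU_TARGETS
```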

Commit 1:
Summary: As titled. The kernel will be used by the quantizer but is missing the meta kernel.

Differential Revision: D60070844

Commit 2:
Summary: As titled. Some models use `torch.ops.aten.relu_.default` instead of `torch.ops.aten.relu.default`. Enable that in the quantizer.

Differential Revision: D60071019
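
Commit 1 refers to a missing meta kernel. For readers unfamiliar with the term, here is a hedged sketch of how a meta kernel is registered for a custom op with `torch.library`; the namespace `examplelib` and op `my_relu` are invented for illustration, while the real op lives in the Cadence ops library:

```python
import torch
from torch.library import Library, impl

# Hypothetical library and op, for illustration only.
lib = Library("examplelib", "DEF")
lib.define("my_relu(Tensor X) -> (Tensor Y)")

@impl(lib, "my_relu", "CompositeExplicitAutograd")
def my_relu(X: torch.Tensor) -> torch.Tensor:
    # Reference eager implementation.
    return torch.relu(X)

@impl(lib, "my_relu", "Meta")
def my_relu_meta(X: torch.Tensor) -> torch.Tensor:
    # The meta kernel only propagates shape and dtype; it never touches
    # data. Export/tracing runs ops on meta tensors, so without this
    # registration the op cannot be traced.
    return torch.empty_like(X)
```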
pytorch-bot commented Jul 22, 2024

🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/4344
Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures
As of commit 1ffc76e with merge base f0364e8:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label on Jul 22, 2024 (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed).
facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D60071019

facebook-github-bot pushed a commit that referenced this pull request on Jul 22, 2024, carrying the same summary and Differential Revision (D60071019) as above.
zonglinpengmeta (Contributor) left a comment:

lgtm

@@ -31,7 +31,6 @@
lib.define(
"quantized_layer_norm(Tensor X, Tensor X_scale, Tensor X_zero_point, int[] normalized_shape, Tensor weight, Tensor bias, float eps, float output_scale, int output_zero_point) -> (Tensor Y)"
)

Contributor commented:

Blank lines removed by accident?

Contributor (Author) replied:

No, I made all cases look the same (grouping the default and out overloads).
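
For illustration, the grouping the author describes declares each op's functional ("default") overload directly next to its `.out` variant. A minimal sketch, with a made-up namespace and simplified schemas rather than the exact Cadence ones:

```python
from torch.library import Library

# Hypothetical namespace; the real schemas live in the Cadence ops library.
lib = Library("examplelib2", "DEF")

# Default overload and its out-variant, declared as one group.
lib.define("quantized_relu(Tensor X, Tensor X_zero_point) -> (Tensor Y)")
lib.define(
    "quantized_relu.out(Tensor X, Tensor X_zero_point, "
    "*, Tensor(a!) out) -> Tensor(a!)"
)
```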

facebook-github-bot (Contributor) commented:

This pull request has been merged in 48da61a.

Labels: CLA Signed, fb-exported, Merged
3 participants