Add small repro test for unsigned -> signed et loss error #8506
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8506
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 1 Unrelated Failure as of commit 13a997c with merge base c00c798.
NEW FAILURES - The following jobs have failed.
BROKEN TRUNK - The following job failed but was already present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D69668881
Force-pushed from 79dff9a to 60b0255.
@pytorchbot label "topic: not user facing"
Summary: There was a difference in behavior between `quantized_decomposed.quantize_per_tensor` and `cadence.quantize_per_tensor`, specifically in how half values are rounded. The former rounds halves toward even (it is based on `torch.round`, which does that); the latter rounds them away from zero. Make sure the Python implementation matches the ExecuTorch implementation in this regard. Reviewed By: sabarishsnk. Differential Revision: D69668881
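The distinction only shows up for values that land exactly halfway between two integers after scaling. A minimal sketch of the two rounding modes, using only stock PyTorch; the helper `round_half_away_from_zero` is illustrative, not an ExecuTorch API:

```python
import torch

def round_half_away_from_zero(x: torch.Tensor) -> torch.Tensor:
    # floor(|x| + 0.5) rounds half values up in magnitude; restoring the
    # sign afterwards gives 0.5 -> 1.0 and -0.5 -> -1.0.
    return torch.sign(x) * torch.floor(torch.abs(x) + 0.5)

halves = torch.tensor([-2.5, -1.5, -0.5, 0.5, 1.5, 2.5])

# torch.round breaks ties toward the nearest even integer (bankers' rounding).
print(torch.round(halves))                # tensor([-2., -2., -0., 0., 2., 2.])
# The away-from-zero mode breaks ties by increasing the magnitude.
print(round_half_away_from_zero(halves))  # tensor([-3., -2., -1., 1., 2., 3.])
```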
Force-pushed from 60b0255 to c364a2c.
Force-pushed from c364a2c to 13a997c.
```diff
@@ -16,6 +16,9 @@
 from numbers import Number
 from typing import cast, Sequence

+# Import these for the cadence function signatures.
+import executorch.backends.cadence.aot.ops_registrations # noqa: F401
```
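The import is wanted only for its side effects: modules like `ops_registrations` typically define custom op schemas at import time, which is why the otherwise-unused import carries `# noqa: F401`. A hedged sketch of that pattern, using a hypothetical `cadence_demo` namespace and op rather than the real Cadence registrations:

```python
import torch
from torch.library import Library

# Hypothetical namespace for illustration; the real registrations live in
# executorch.backends.cadence.aot.ops_registrations.
demo_lib = Library("cadence_demo", "DEF")

# Importing a module that executes a definition like this registers the op
# schema globally, so later tracing and lowering can resolve its signature.
demo_lib.define("quantize_demo(Tensor x, float scale, int zero_point) -> Tensor")

print(torch.ops.cadence_demo.quantize_demo)  # resolves once the schema is defined
```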
This line broke the linter job: https://github.com/pytorch/executorch/actions/runs/13398623818/job/37423762070#step:15:7593. I think it needs two spaces before the `#`. Please run `lintrunner -a` to fix this, and make sure the job is passing before merging PRs.
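For reference, this is likely flake8's E261 check ("at least two spaces before inline comment"). A minimal before/after of the offending line, assuming that is the rule being tripped:

```python
# Broken: a single space before the inline comment trips the linter (E261).
import executorch.backends.cadence.aot.ops_registrations # noqa: F401

# Fixed: at least two spaces before the inline comment.
import executorch.backends.cadence.aot.ops_registrations  # noqa: F401
```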
Sent #8575 to fix this
Fix lint issue in backends/cadence/aot/fuse_ops.py introduced in #8506