[Arm backend] Fix for TOSA BI clamp ops #3092
Conversation
Min/max range values need to be in quantized form.

Change-Id: I68d091306890f0a500d829ce20fc337e6cbe9dba
Signed-off-by: Fredrik Knutsson <fredrik.knutsson@arm.com>
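For context, here is a minimal sketch of the conversion this fix performs, assuming the affine int8 quantization used by the TOSA BI profile (q = round(x / scale) + zero_point). The helper name and signature are illustrative, not the PR's actual code:

```python
# Illustrative sketch only: maps float clamp bounds into the int8 quantized
# domain, so a TOSA BI clamp compares against values on the same scale as
# the quantized input tensor.
def quantize_clamp_bounds(min_fp, max_fp, scale, zero_point):
    # Affine quantization: q = round(x / scale) + zero_point
    min_qs = round(min_fp / scale) + zero_point
    max_qs = round(max_fp / scale) + zero_point
    # Keep the bounds representable in int8; see the review note further
    # down about reading qmin/qmax from the quant node instead of
    # hardcoding them.
    return max(min_qs, -128), min(max_qs, 127)
```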
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/3092
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 unrelated failure) As of commit ef71c6d with merge base 73599f4.
BROKEN TRUNK - The following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid this failure.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Change-Id: Ie59881824d76d5a9c30e95a8024dbbb11055577b
-     reference_output,
-     quantization_scale,
- ) = self._calculate_reference_output(
+ (reference_output, quantization_scale,) = self._calculate_reference_output(
lintrunner did this
hmm, the CI lintrunner seems to think differently than my local one
Weird. I use the same version, 0.11.0, as mandated by the contribution guidelines. I had to re-run lintrunner init to get the same result. I'll push the update soon...
-     atol=atol,
-     rtol=rtol,
- ), (
+ assert torch.allclose(model, ref, atol=atol, rtol=rtol,), (
ditto
Change-Id: Ia773c04d17f24bd155365a412b3e96c3b3d9aa63
CI failures are not related to this PR.
scale, zp = get_quant_node_args(node.all_input_nodes[0])
# Convert to quantized representation
clamp_min_qs = round((inputs[1].number / scale) + zp)
clamp_min_qs = max(clamp_min_qs, -128)
we should get qmin/qmax from the quant node
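A sketch of that suggestion, assuming the upstream node is quantized_decomposed.quantize_per_tensor, whose arguments are (input, scale, zero_point, qmin, qmax, dtype); the helper and the argument indices are assumptions, not the merged code:

```python
# Assumed arg layout for quantize_per_tensor:
#   (input, scale, zero_point, qmin, qmax, dtype)
# Read the quantized range from the node instead of hardcoding -128/127.
def get_quant_range(quant_node):
    qmin, qmax = quant_node.args[3], quant_node.args[4]
    return qmin, qmax

# Then the clamp bounds would be limited with:
#   clamp_min_qs = max(clamp_min_qs, qmin)
#   clamp_max_qs = min(clamp_max_qs, qmax)
```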
Change-Id: I9dee9bec58e51fcef57ebb287dbad62016c221d1
@digantdesai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@digantdesai merged this pull request in b0a400c.