
Conversation

@MartinPavella (Collaborator) commented on Aug 21, 2025

Summary

This PR fixes cases where padding with the value `0` was used for quantized operators. Now, the zero point is used instead.
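
For context: under an affine quantization scheme, a real value is recovered as `real = scale * (q - zero_point)`, so the quantized value that represents real `0` is the zero point itself, not `0`. Padding a quantized tensor with `0` therefore injects spurious non-zero real values at the borders. A minimal NumPy sketch of the effect (illustrative only; the `scale` and `zero_point` values are made up, and this is not the backend's actual padding code):

```python
import numpy as np

scale, zero_point = 0.05, 128  # example uint8 quantization parameters

q = np.array([130, 140, 150], dtype=np.uint8)  # quantized tensor

# Padding with 0 injects real values of 0.05 * (0 - 128) = -6.4, not zeros:
padded_wrong = np.pad(q, 1, constant_values=0)

# Padding with the zero point injects exact real zeros:
padded_right = np.pad(q, 1, constant_values=zero_point)

dequantize = lambda x: scale * (x.astype(np.int32) - zero_point)
print(dequantize(padded_wrong))  # [-6.4  0.1  0.6  1.1 -6.4]
print(dequantize(padded_right))  # [ 0.   0.1  0.6  1.1  0. ]
```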

Test plan

Unit tests provided.

cc @digantdesai @JakeStevens @robert-kalmar

@pytorch-bot bot commented on Aug 21, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13576

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures, 1 Pending

As of commit 54ef48c with merge base c70aeda:


This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label on Aug 21, 2025
@MartinPavella (Collaborator, Author) commented:
@pytorchbot label "module: nxp" "release notes: nxp"

@pytorch-bot bot added the module: nxp and release notes: nxp labels on Aug 21, 2025
Review thread on the following diff excerpt:

```python
# be included in the computation!
input_quantization = t_op.tmp_inputs[0].quantization
pad_value = (
    None
```
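
The excerpt cuts off mid-expression; based on the PR summary and the discussion below, the full conditional presumably reads something like the following reconstruction (not the verbatim diff; the exact `zero_point` attribute access is an assumption):

```python
pad_value = (
    None  # non-quantized: defer to the builder's default padding value (0)
    if input_quantization is None
    else input_quantization.zero_point[0]  # quantized: pad with the zero point
)
```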
A Contributor commented:

Why `None` instead of `0` in the non-quantized case?

@MartinPavella (Collaborator, Author) replied on Aug 21, 2025:

`None` is the default value of the `builder.create_pad_operator_before()` method's `constant_value` parameter. This way, the actual default padding value (`0`) is defined in only one place.

But it's hard to imagine that the default padding value would ever change, and using `0` here would make the code easier to understand. I have no problem using `0` instead of `None` if you prefer.
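
To illustrate the pattern being discussed, here is a hedged sketch; the real `builder.create_pad_operator_before()` signature is not shown in this thread, so everything other than the `constant_value` parameter is an assumption:

```python
# Hypothetical sketch of the pattern described above; the actual
# builder.create_pad_operator_before() signature may differ.
def create_pad_operator_before(t_op, paddings, constant_value=None):
    if constant_value is None:
        constant_value = 0  # the default padding value, defined only here
    # ... build and insert the Pad operator using `constant_value` ...
    return constant_value

# Non-quantized input: pass None and defer to the single default (0).
create_pad_operator_before("Conv2D", [[1, 1], [1, 1]])         # pads with 0
# Quantized input: pass the zero point explicitly.
create_pad_operator_before("Conv2D", [[1, 1], [1, 1]], 128)    # pads with 128
```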

@MartinPavella force-pushed the upstream/main-nxp/EIEX-497-upstream-padding-with-zero_point branch from 0f68cd2 to 52d6c1b on August 25, 2025 at 07:04
@MartinPavella force-pushed the upstream/main-nxp/EIEX-497-upstream-padding-with-zero_point branch from 52d6c1b to 54ef48c on August 25, 2025 at 10:55
@JakeStevens merged commit 49bc664 into pytorch:main on Aug 25, 2025
101 of 104 checks passed
agrima1304 pushed a commit to agrima1304/executorch that referenced this pull request on Aug 26, 2025
kimishpatel pushed a commit that referenced this pull request on Sep 2, 2025
@robert-kalmar deleted the upstream/main-nxp/EIEX-497-upstream-padding-with-zero_point branch on September 3, 2025 at 06:57