NXP backend: Add option to run test reference quantized in Python #17619
Conversation
@StrycekSimon feel free to review and use this functionality in your PR.
Corresponding ExecuTorch Integration PR: https://bitbucket.sw.nxp.com/projects/AITEC/repos/executorch-integration/pull-requests/127/overview
Internal CI: https://bamboo3.sw.nxp.com/browse/MLTECE-EXIGH93-2
StrycekSimon left a comment:
Nice, looks good to me. 👍
@roman-janik-nxp I have no idea what happened, but when I rebased onto main, this PR hit some sort of failure, closed automatically, and I can neither reopen it nor see the 3 commits ahead of main. So I have created a new PR with exactly the same commits you just approved (#17733). I would appreciate an approval there so I can merge it.
Summary
NXP tests run models delegated to Neutron using the NSYS simulator. To determine the correct output, a reference model is run on the CPU. Previously there were two choices for the reference: either a non-delegated `.pte` file running in C++, or the original non-quantized float32 PyTorch model running in Python. This PR adds a third option, running the quantized edge-dialect program in Python, and makes it easy to extend to further options in the future.
Test plan
Unit tests provided.
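As a rough sketch of the kind of extensible reference selection described above (all names here are hypothetical, assumed for illustration; the actual identifiers and runner implementations live in the PR diff):

```python
from enum import Enum, auto


class ReferenceMode(Enum):
    """Where the expected output for a delegated test comes from.

    Hypothetical enum; the real option names are defined in the PR.
    """
    NON_DELEGATED_PTE_CPP = auto()    # non-delegated .pte executed via the C++ runtime
    FLOAT32_PYTORCH = auto()          # original float32 PyTorch model, run in Python
    QUANTIZED_EDGE_PYTHON = auto()    # new: quantized edge-dialect program, run in Python


def compute_reference(mode, runners, *inputs):
    """Look up and invoke the reference runner registered for `mode`.

    `runners` maps a ReferenceMode to a callable, so adding another
    reference option later is just one more dictionary entry.
    """
    try:
        run = runners[mode]
    except KeyError:
        raise NotImplementedError(f"no reference runner registered for {mode}")
    return run(*inputs)


# Toy stand-ins for real runners; actual ones would execute the model.
runners = {
    ReferenceMode.FLOAT32_PYTORCH: lambda x: x * 2.0,
    ReferenceMode.QUANTIZED_EDGE_PYTHON: lambda x: round(x * 2.0),
}

reference_output = compute_reference(
    ReferenceMode.QUANTIZED_EDGE_PYTHON, runners, 1.3
)
```

The dictionary dispatch is what makes the "easy extension to more options" claim concrete: an unregistered mode fails loudly with `NotImplementedError` instead of silently falling back to a wrong reference.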
cc @robert-kalmar @JakeStevens @digantdesai