
Make the quantized path the main testing path, and introduce a nop quantizer for fp32 cases #7915


Merged: 1 commit, Jan 24, 2025

Conversation

mcremon-meta (Contributor)

Summary:
For a while, the testing APIs were `quantize_and_run` and `run_and_verify`, with the former calling the latter. That flow is inconvenient because the quantized and fp32 cases are not handled consistently, and the names are also inconsistent.
This diff renames the two main APIs to `export_run_and_verify` and `quantize_export_run_and_verify`, which are more descriptive.
It also changes the calling order: we now use a nop quantizer for the fp32 case, which lets it follow exactly the same flow as the quantized cases.
The existing `run_and_verify` function is made "private" (as far as Python allows) and now takes an `ExportedProgram` instead of the `torch.nn.Module` it took before.
Finally, it removes the `eval()` call from `export_program`, since everything now goes through the quantizer (including as a nop).

Reviewed By: zonglinpeng, hsharma35

Differential Revision: D67561806
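The nop-quantizer idea above can be sketched in a few lines. This is a minimal, self-contained illustration, not the ExecuTorch implementation: the `NopQuantizer` class and the body of `quantize_export_run_and_verify` below are hypothetical stand-ins, and the `annotate`/`validate` method names only mirror the shape of the `torch.ao.quantization.quantizer.Quantizer` interface.

```python
# Sketch of the unified-flow pattern: a "nop" quantizer satisfies the
# quantizer interface but leaves the model untouched, so fp32 callers
# can reuse the exact same path as quantized callers.

class NopQuantizer:
    """Quantizer that annotates nothing: the model passes through unchanged."""

    def annotate(self, model):
        # Attach no quantization annotations; return the model as-is.
        return model

    def validate(self, model):
        # Nothing to validate on the fp32 path.
        pass


def quantize_export_run_and_verify(model, inputs, quantizer):
    # Hypothetical unified entry point: fp32 callers pass NopQuantizer(),
    # quantized callers pass a real quantizer; both take the same route.
    model = quantizer.annotate(model)
    quantizer.validate(model)
    # ... export, run, and verification would follow here ...
    return model


# With the nop quantizer, the model comes back untouched.
model = {"weights": [1.0, 2.0]}
result = quantize_export_run_and_verify(model, inputs=None, quantizer=NopQuantizer())
assert result is model
```

The benefit is that the fp32 case no longer needs a separate code path or a separate `eval()` step; the quantizer becomes the single seam where the two cases differ.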


pytorch-bot bot commented Jan 23, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7915

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit b72f4b5 with merge base d68ca28:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jan 23, 2025
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D67561806

@facebook-github-bot facebook-github-bot merged commit 9a0b51c into main Jan 24, 2025
44 of 49 checks passed
@facebook-github-bot facebook-github-bot deleted the export-D67561806 branch January 24, 2025 02:23
YIWENX14 pushed a commit that referenced this pull request Jan 28, 2025
…antizer for fp32 cases

Differential Revision: D67561806

Pull Request resolved: #7915
zonglinpeng pushed a commit to zonglinpeng/executorch that referenced this pull request Jan 30, 2025
…antizer for fp32 cases

Differential Revision: D67561806

Pull Request resolved: pytorch#7915
Labels: CLA Signed, fb-exported, topic: not user facing
3 participants