Qualcomm AI Engine Direct - Backend awareness quantizer #17665

Open
shewu-quic wants to merge 2 commits into pytorch:main from CodeLinaro:dev1/hutton/backend_awareness_quantizer

Conversation

@shewu-quic
Collaborator

@shewu-quic shewu-quic commented Feb 24, 2026

Summary:

  • Add a file backend_opinfo_adapter.py which adapts BackendOpInfo from the QNN SDK for use with ExecuTorch
    • The BackendOpInfo API is supported starting from QNN SDK 2.41.
    • BackendOpInfo, a pybind library, contains a list of quantization constraints for each operator. These constraints follow the operator definitions in the QNN documents (https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-10/operations.html#backend-supplements).
  • Refactor QnnQuantizer
    • Add support for backend-specific annotation by implementing lazy loading of xxx_rules.py in registry_loader.py.
    • Enable validation for quantization annotation with BackendOpInfo
      • Validation is bypassed for QNN SDK versions earlier than 2.41 because the BackendOpInfo API is not available in those versions.
      • Add the backend and soc_model parameters to allow configuration of BackendOpInfo.
      • Introduce a strict parameter. It is enabled by default, causing the validation stage to raise a ValueError when quantization constraints are not met; in this mode, all constraints must be satisfied for the graph to be fully delegated to the QNN backend. When disabled, violations are only logged as warnings.
      • Validation items include:
        • Verify htp_arch for LPBQ support on ops such as conv2d.
        • Verify htp_arch for 16a16w support on ops such as matmul.
        • Ensure SharedQuantizationSpec is used for math-invariant ops (is_math_invariant), such as view.
        • Check scale and zero_point constraints for certain ops, such as requiring scale = 1 / (q_max - q_min + 1) and zero_point = 0 for sigmoid.
        • Confirm qscheme meets symmetric constraints.
        • Validate that the input and output dtypes are supported.
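The strict/non-strict behavior described above can be sketched as follows. This is an illustrative sketch only, not the PR's actual implementation; the function name and signature are hypothetical:

```python
import logging

def validate_annotation(op_name, actual, expected, strict=True):
    """Check one quantization constraint for an operator.

    Hypothetical helper mirroring the `strict` semantics described in this
    PR: in strict mode a violated constraint raises ValueError (so the op
    cannot be silently delegated with an invalid config); otherwise it is
    only logged as a warning and validation reports failure.
    """
    if actual == expected:
        return True
    msg = f"{op_name}: quantization constraint violated, expected {expected}, got {actual}"
    if strict:
        raise ValueError(msg)
    logging.warning(msg)
    return False

# Non-strict mode merely warns and returns False instead of raising.
validate_annotation("aten.sigmoid.default", 1, 0, strict=False)
```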

Test plan

Successfully tested test_qnn_delegate.py and static llama with QNN version 2.41 and above.
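As a worked example of the fixed-qparams constraint listed in the summary (scale = 1 / (q_max - q_min + 1), zero_point = 0 for sigmoid): for an unsigned 8-bit output range [0, 255], the required scale is 1/256. The helper below is hypothetical, not code from this PR:

```python
def expected_sigmoid_qparams(q_min: int, q_max: int):
    """Return the fixed (scale, zero_point) a sigmoid output must use.

    Hypothetical illustration of the constraint described in this PR:
    sigmoid's output lies in [0, 1), so the scale must map the full
    quantized range onto it exactly, with zero_point pinned to 0.
    """
    scale = 1.0 / (q_max - q_min + 1)
    zero_point = 0
    return scale, zero_point

scale, zp = expected_sigmoid_qparams(0, 255)
print(scale, zp)  # 0.00390625 0  (i.e. 1/256, zero_point 0)
```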

@shewu-quic shewu-quic requested a review from cccclai as a code owner February 24, 2026 08:15
@pytorch-bot

pytorch-bot bot commented Feb 24, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17665

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 New Failures, 2 Unrelated Failures

As of commit 78161c0 with merge base a5423eb:

NEW FAILURES - The following jobs have failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Feb 24, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@shewu-quic
Collaborator Author

shewu-quic commented Feb 25, 2026

Hi @cccclai,
This PR introduces the BackendOpInfo API from the QNN SDK, which is used to validate the quantization configuration for each operator.
Could you please take a look?
Thanks
