NXP backend: Condition Linear+BatchNorm fusing passes to valid combination only #17736
Open
StrycekSimon wants to merge 2 commits into pytorch:main from
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17736
❌ 8 New Failures as of commit da6cae3 with merge base f78535d.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Pull request overview
This PR tightens the NXP backend’s Linear+BatchNorm fusion logic by conditioning fusion passes on valid shape combinations (only fuse when Linear and BN operate on the same dimension), and avoids unnecessary graph recompilation in the QAT “remove simulated fusion” pass.
Changes:
- Add shape-based gating (via FX `tensor_meta` shapes) to the Linear+BN fusion passes to prevent invalid fusions (see the sketch below).
- Track whether graph edits were made in `RemoveSimulatedLinearBatchNormFusionQATPass` and only recompile when needed.
- Update/refactor NXP backend tests and test models; add new negative tests ensuring incompatible Linear+BN pairs are not fused.
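A minimal sketch of what such `tensor_meta`-based shape gating could look like in a torch.fx pass (the helper and predicate names here are illustrative, not the PR's actual code):

```python
import torch

# Hypothetical helpers; the real pass in this PR may be structured differently.

def _output_shape(node: torch.fx.Node):
    """Read a node's output shape from FX tensor metadata, if available."""
    tensor_meta = node.meta.get("tensor_meta")
    return tuple(tensor_meta.shape) if tensor_meta is not None else None

def _can_fuse_linear_bn(linear_node: torch.fx.Node) -> bool:
    """Linear acts on the last dimension; BatchNorm acts on the channel
    dimension. They coincide only for 2D [N, C] tensors, so gate on rank."""
    shape = _output_shape(linear_node)
    return shape is not None and len(shape) == 2
```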
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| backends/nxp/aten_passes/fuse_batch_norm_with_linear_pass.py | Adds output-shape conditioning before fusing Linear+BN. |
| backends/nxp/aten_passes/simulated_linear_bn_fusion_passes/add_simulated_linear_bn_fusion_qat_pass.py | Adds output-shape conditioning to simulated Linear+BN QAT fusion insertion. |
| backends/nxp/aten_passes/simulated_linear_bn_fusion_passes/remove_simulated_linear_bn_fusion_qat_pass.py | Returns/propagates a “made changes” flag to avoid unnecessary recompiles (see the sketch after the table). |
| backends/nxp/backend/graph_utils.py | Adds get_output_shape helper for retrieving tensor_meta shapes. |
| backends/nxp/tests/ir/edge_passes/test_linear_bn_fusing.py | Updates model usage and adds negative tests for non-fusable shapes. |
| backends/nxp/tests/models.py | Refactors BN-related helper modules used by tests (but currently introduces a problematic import / rename compatibility issue). |
| backends/nxp/tests/test_batch_norm_fusion.py | Updates fusion tests to use shared helpers and adds incompatible-shape full-pipeline coverage. |
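For the recompile fix, a common torch.fx pattern is to track whether any edits happened and recompile only then; a hedged sketch under that assumption (the class and matcher below are placeholders, not the file's actual code):

```python
import torch
from torch.fx.passes.infra.pass_base import PassBase, PassResult


class RemoveSimulatedFusionSketch(PassBase):
    """Illustrative pass skeleton; not the PR's actual implementation."""

    def call(self, graph_module: torch.fx.GraphModule) -> PassResult:
        made_changes = False
        for node in list(graph_module.graph.nodes):
            if self._is_simulated_fusion(node):  # hypothetical matcher
                # ... rewrite the graph around `node` ...
                made_changes = True
        if made_changes:
            # recompile() regenerates the module's forward from the graph;
            # skipping it when nothing changed avoids redundant work.
            graph_module.recompile()
        return PassResult(graph_module, made_changes)

    def _is_simulated_fusion(self, node: torch.fx.Node) -> bool:
        return False  # placeholder for the real pattern match
```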
Force-pushed from 9a50827 to da6cae3.
Summary
- Introduce shape conditioning to the Linear+BN fusion-related passes.
- Fix `RemoveSimulatedLinearBatchNormFusionQATPass` so it only recompiles the graph when necessary.

Problem description:
Linear and BatchNorm layers are fusable only when both operate on the same dimension. A Linear layer works strictly on the last dimension of its input. BatchNorm, on the other hand, works on the channels dimension (the second one in most cases), except when BatchNorm1d is given an input of shape `[N, C]` (see the official PyTorch documentation for more info). In that one case, BatchNorm and Linear work on the same dimension and can be fused correctly.

Example:
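An illustrative pair of cases, assuming standard torch modules (shapes chosen arbitrarily):

```python
import torch
import torch.nn as nn

# Fusable: on a [N, C] input, BatchNorm1d normalizes the last dimension,
# which is exactly the dimension Linear(8 -> 16) produces.
linear = nn.Linear(8, 16)
bn = nn.BatchNorm1d(16)
x = torch.randn(4, 8)        # [N, C]
y = bn(linear(x))            # both ops act on dim -1 -> fusable

# Not fusable: on a [N, C, L] input, BatchNorm1d normalizes dim 1 (C),
# while Linear still acts on the last dimension (L), so the two ops touch
# different dimensions and cannot be folded into a single Linear.
linear3d = nn.Linear(8, 16)
bn3d = nn.BatchNorm1d(5)
x3d = torch.randn(4, 5, 8)   # [N, C, L]
y3d = bn3d(linear3d(x3d))    # BN on dim 1, Linear on dim -1 -> not fusable
```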
Test plan
New unit test cases were added, and relevant existing ones were adjusted.
cc @robert-kalmar @JakeStevens @digantdesai