[BugFix] Use torch.zeros for argument in torch.where #3239
+7
−5
Description
This reorders the function calls so that the device lookup happens right before it is used. It also passes torch.zeros_like as the argument to the torch.where call instead of relying on autocasting.
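A minimal sketch of the pattern being changed, using a hypothetical helper (the actual function and file touched by this PR may differ):

```python
import torch


def mask_below_threshold(img: torch.Tensor, threshold: float = 0.0) -> torch.Tensor:
    """Hypothetical helper illustrating the pattern described above."""
    # The device lookup now happens right before it is needed, rather than
    # earlier in the function.
    device = img.device
    mask = img > torch.as_tensor(threshold, device=device)
    # Previously the zero branch was a plain scalar, relying on autocasting:
    #   return torch.where(mask, img, 0)
    # Now an explicit zero tensor is passed instead.
    return torch.where(mask, img, torch.zeros_like(img))
```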
Motivation and Context
We are observing issues when exporting the module to ONNX on Intel devices.
This seems to be caused by the tracer failing to match the constant inserted for the
0 against the autocast constant-zero node. Error:
Interestingly, this issue only appears intermittently; we believe this is because our CI lands on Intel CPUs with specific features activated.
A direct reproduction is therefore hard to detail.
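For context, a rough, hypothetical sketch of the export path where the failure was observed (the real module and inputs differ, and the failure only shows up on specific CPU feature sets):

```python
import torch
import torch.nn as nn


class WhereModule(nn.Module):
    # Hypothetical stand-in for the exported module; it only reproduces the
    # pattern of torch.where receiving a separately inserted zero constant
    # that the tracer then has to match and autocast.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        zero = torch.tensor(0.0)  # becomes a constant node during tracing
        return torch.where(x > 0, x, zero)


# An export along these lines failed intermittently on the Intel CPU runners.
torch.onnx.export(WhereModule().eval(), torch.randn(1, 3, 8, 8), "where_module.onnx")
```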
Types of changes
What types of changes does your code introduce? Remove all that do not apply:
Checklist
Go over all the following points, and put an x in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!