Description
Expected behaviour:
When an arm_cpu target is used, the grouped convolution should compile successfully without an error.
Actual behaviour:
When an arm_cpu target is used, the model fails to compile during type inference with:
Incompatible broadcast type TensorType([1, 8, 8, 2], int32) and TensorType([1, 1, 1, 16], int32)
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
Environment:
Tested with TVM at 6a3fadc. The issue was found as a result of the changes in #16513, but it can be reproduced without them, as described below.
How to reproduce:
Run the test pytest tests/python/frontend/tflite/test_forward.py -k test_forward_quantized_convolution with an arm_cpu target. Note: remember to remove any skip condition that currently exists in the test.
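For reference, below is a minimal standalone sketch that exercises the same code path outside of the TFLite frontend: a quantized grouped (non-depthwise) conv2d compiled for an arm_cpu target. The shapes, quantization parameters, and exact target string are illustrative assumptions and may need adjusting; this is not the test case itself.

```python
# Illustrative reproducer sketch (shapes/quant params are assumptions, not the
# values used by test_forward_quantized_convolution).
import numpy as np
import tvm
from tvm import relay

data_shape = (1, 8, 8, 16)               # NHWC input, 16 channels
groups = 8                               # grouped, but not depthwise
kernel_shape = (3, 3, 16 // groups, 16)  # HWIO kernel for a grouped conv

data = relay.var("data", shape=data_shape, dtype="uint8")
weight = relay.const(np.random.randint(0, 255, size=kernel_shape).astype("uint8"))

# Quantized conv2d with groups > 1; zero points and scales are placeholders.
conv = relay.qnn.op.conv2d(
    data,
    weight,
    input_zero_point=relay.const(128, "int32"),
    kernel_zero_point=relay.const(128, "int32"),
    input_scale=relay.const(0.5, "float32"),
    kernel_scale=relay.const(0.5, "float32"),
    kernel_size=(3, 3),
    channels=16,
    groups=groups,
    padding=(1, 1),
    data_layout="NHWC",
    kernel_layout="HWIO",
)

mod = tvm.IRModule.from_expr(relay.Function([data], conv))
target = tvm.target.Target("llvm -device=arm_cpu -mtriple=aarch64-linux-gnu")
with tvm.transform.PassContext(opt_level=3):
    relay.build(mod, target=target)  # expected to fail during type inference
```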
Group convolution likely needs to be handled correctly in tvm/python/tvm/relay/qnn/op/legalizations.py (line 489 in 8a2ffee).
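It is not clear from the error alone what the exact fix should be, but one plausible direction is sketched below. This is a hypothetical illustration only: the function name, the depthwise check, and the fallback strategy are assumptions, not the actual contents of legalizations.py at the referenced line.

```python
# Hypothetical sketch: an arm_cpu-specific QNN conv2d legalization could defer
# to the default lowering when the convolution is grouped but not depthwise.
def _arm_cpu_qnn_conv2d_legalize_sketch(attrs, inputs, types):
    groups = int(attrs.groups)
    data_type = types[0]
    if attrs.data_layout == "NHWC":
        in_channels = int(data_type.shape[3])
    else:  # assume NCHW otherwise
        in_channels = int(data_type.shape[1])

    is_depthwise = groups > 1 and groups == in_channels
    if groups > 1 and not is_depthwise:
        # Grouped (non-depthwise) convolutions break the per-channel broadcast
        # assumptions of the specialised lowering, so returning None here would
        # fall back to the default legalization.
        return None

    # ... target-specific legalization for the remaining cases ...
    return None
```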