
[Bug] Type inference error compiling quantized group convolution on arm_cpu target #16532

Closed as not planned
@lhutton1

Description


Expected behaviour:

When an arm_cpu target is used, the grouped convolution should compile successfully without an error.

Actual behaviour:

When an arm_cpu target is used, the model fails to compile during type inference with:

```
Incompatible broadcast type TensorType([1, 8, 8, 2], int32) and TensorType([1, 1, 1, 16], int32)
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
```
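For context, the two tensor types in the error cannot be broadcast together because their trailing channel dimensions (2 vs. 16) are neither equal nor 1. A quick NumPy sketch, using the shapes copied from the error above, reproduces the same incompatibility (the question of *why* these mismatched shapes arise in the legalized graph is what this issue tracks):

```python
import numpy as np

# Shapes taken from the type inference error above.
lhs = np.zeros((1, 8, 8, 2), dtype=np.int32)
rhs = np.zeros((1, 1, 1, 16), dtype=np.int32)

try:
    _ = lhs + rhs  # broadcast rule: trailing dims must match or be 1
    print("broadcast ok")
except ValueError as e:
    print("broadcast failed:", e)
```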

Environment:

Tested with TVM at 6a3fadc. The issue was found as a result of the changes in #16513, but it can be reproduced without those changes, as described below.

How to reproduce:

Run the test `pytest tests/python/frontend/tflite/test_forward.py -k test_forward_quantized_convolution` with an arm_cpu target. Note: any skip condition currently applied to this test must first be removed.


The group convolution case likely needs to be handled correctly in:

`def _qnn_conv2d_legalize_arm_cpu(attrs, inputs, types):`

Metadata

Assignees: no one assigned

Labels: needs-triage, topi, type: bug
