
Conversation

@anijain2305
Contributor

This PR does 2 things

  • Test QNN on a real image. QNN and TFLite outputs differ due to rounding differences, so random input is not a good candidate for label testing. This PR therefore adds a real image for testing QNN (a comparison sketch follows this list).
  • @inadob also found a bug in QNN parsing ([Frontend][TFLite] Fix quantized pad value for convolution #4807). This PR needs that fix to enable the real image tests, so it is included here.
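
For reference, a minimal sketch of the label-based comparison the real image enables, assuming `tvm_output` and `tflite_output` are the classification outputs already produced by the two runtimes on the same image (the names and helper below are illustrative, not the actual test code):

```python
import numpy as np

def assert_same_label(tvm_output, tflite_output):
    # Raw quantized tensors can differ by +/-1 due to rounding, so compare
    # the predicted class index rather than the element values.
    tvm_label = np.argmax(np.asarray(tvm_output).flatten())
    tflite_label = np.argmax(np.asarray(tflite_output).flatten())
    assert tvm_label == tflite_label, (tvm_label, tflite_label)
```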

Thanks to @inadob for finding the bug

@FrozenGene @kevinthesun @inadob Can you please review?

@anijain2305
Contributor Author

anijain2305 commented Feb 5, 2020

@inadob Can you please review the TFLite pad test portion? I am not very familiar with using fake quant, and I just replicated your code for Pad. So, closer scrutiny would be good.
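
Roughly what I replicated (a sketch only; the shapes, min/max ranges, and names here are illustrative and assume TF 1.x graph mode, not the exact test code):

```python
import tensorflow as tf  # assumes TF 1.x graph mode

def build_quantized_pad_graph():
    data = tf.placeholder(tf.float32, shape=(1, 28, 28, 3), name="input")
    # Wrap the input in fake quantization so the TFLite converter emits a
    # quantized PAD operator instead of a float one.
    fq_in = tf.quantization.fake_quant_with_min_max_args(
        data, min=-3.0, max=3.0, name="input_fq")
    out = tf.pad(fq_in, paddings=[[0, 0], [1, 1], [1, 1], [0, 0]])
    return tf.quantization.fake_quant_with_min_max_args(
        out, min=-3.0, max=3.0, name="output_fq")
```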

@anijain2305
Contributor Author

Ping @FrozenGene

@FrozenGene
Member

Ping @inadob, would you review it again? If you have no other comments, could you set an explicit approve? Thanks!

@inadob
Contributor

inadob left a comment

Almost ready, I have a couple of small changes below.

@anijain2305
Contributor Author

@inadob Done

@anijain2305
Contributor Author

@FrozenGene Good to merge?

@FrozenGene
Member

Thanks @anijain2305 @inadob. It is merged now.

@FrozenGene merged commit 902e21b into apache:master Feb 11, 2020
@masahi
Member

masahi commented Feb 12, 2020

What happens if the zero point is a vector, as in per-channel quantization? What should the pad value be?

@u99127

u99127 commented Feb 20, 2020

What happens if the zero point is a vector, as in per-channel quantization? What should the pad value be?

@masahi good question, but please file a separate issue affecting only master, since per-channel quantization came in after this. Otherwise I fear this will be missed.

Ramana

@u99127

u99127 commented Feb 20, 2020

@anijain2305 - does this need a backport to the 0.6 tag?

@masahi
Member

masahi commented Feb 20, 2020

@u99127 I figured out the answer myself. QNN support for per-channel quantization applies only to weight quantization (which seems to be a common restriction in other frameworks as well). For the pad value we can use the scalar zero point associated with the activations.
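
To spell it out: with affine quantization real = scale * (q - zero_point), so a quantized value equal to the zero point dequantizes to exactly 0.0, i.e. padding the quantized tensor with the activation's scalar zero point is a plain zero pad in real space. A quick numeric check (the scale/zero_point values below are made up for illustration):

```python
import numpy as np

scale, zero_point = 0.05, 128  # per-tensor activation params (illustrative)
q = np.array([[130, 125], [128, 140]], dtype=np.uint8)

# Pad the quantized tensor with its zero point ...
q_padded = np.pad(q, pad_width=1, constant_values=zero_point)
# ... and the dequantized border comes out as exactly 0.0.
real_padded = scale * (q_padded.astype(np.float32) - zero_point)
assert np.all(real_padded[0, :] == 0.0) and np.all(real_padded[:, 0] == 0.0)
```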

@anijain2305
Contributor Author

@masahi You are right (as always :) )

@u99127 Let me look into this.

alexwong pushed a commit to alexwong/tvm that referenced this pull request Feb 26, 2020
* [TFLite] Using real image for QNN testing.

* Setting seed for SSD mobilenet for fixed input.

* Support quantized Pad op.

* Remove unnecessary line.

* Ina comments.
alexwong pushed a commit to alexwong/tvm that referenced this pull request Feb 28, 2020
* [TFLite] Using real image for QNN testing.

* Setting seed for SSD mobilenet for fixed input.

* Support quantized Pad op.

* Remove unnecessary line.

* Ina comments.
zhiics pushed a commit to neo-ai/tvm that referenced this pull request Mar 2, 2020
* [TFLite] Using real image for QNN testing.

* Setting seed for SSD mobilenet for fixed input.

* Support quantized Pad op.

* Remove unnecessary line.

* Ina comments.