
Conversation

TomWildenhain-Microsoft (Collaborator)

No description provided.

Signed-off-by: Tom Wildenhain <tomwi@microsoft.com>

lgtm-com bot commented Feb 6, 2021

This pull request introduces 2 alerts when merging 3a98be1 into 8114f4b - view on LGTM.com

new alerts:

  • 1 for Unused local variable
  • 1 for Unused import

Signed-off-by: Tom Wildenhain <tomwi@microsoft.com>
TomWildenhain-Microsoft (Collaborator Author)

NVM, disregard this review request until after #1321 merges; it contains changes from both PRs.


lgtm-com bot commented Feb 8, 2021

This pull request introduces 1 alert when merging 679bc21 into 70bc2b6 - view on LGTM.com

new alerts:

  • 1 for Unused local variable

Signed-off-by: Tom Wildenhain <tomwi@microsoft.com>
self.structured_outputs = structured_outputs  # Needed to determine output order for tf_function
self.rtol = rtol  # Relative tolerance when comparing converted outputs to TF outputs
self.atol = atol  # Absolute tolerance for the same comparison
self.ptol = ptol  # Percent of output entries allowed to differ from the TF outputs
TomWildenhain-Microsoft (Collaborator Author)

The new models in this commit contain post-processing with an ArgMax op, so it makes more sense to allow p percent of the output tensor entries to differ from the TF model rather than allowing a large atol or rtol. The converted model has a small number of differing entries, but they are not necessarily close to their corresponding TF entries.
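
For context, here is a minimal sketch of how a percent-tolerance comparison like this could be implemented (the function name and signature are hypothetical illustrations, not the actual tf2onnx test harness code):

import numpy as np

def within_percent_tolerance(expected, actual, ptol):
    # Pass if at most `ptol` percent of entries differ between the two tensors.
    # After ArgMax post-processing, a few entries can land on a different class
    # index and be nowhere near the reference value numerically, so counting
    # mismatches is more meaningful than rtol/atol closeness.
    expected = np.asarray(expected)
    actual = np.asarray(actual)
    mismatched = np.count_nonzero(expected != actual)
    return 100.0 * mismatched / expected.size <= ptol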

model: "deeplabv3_mnv2_ade20k_uint8.tflite"
model_type: tflite
input_get: get_ade20k_uint8
ptol: 1.0
TomWildenhain-Microsoft (Collaborator Author) commented Feb 8, 2021

This is a 1% error rate, which is surprisingly high. Dequantization may mean the ORT result is slower but actually more accurate than (though different from) the TF results.
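
For scale, a rough illustration of what ptol: 1.0 permits (the output shape below is hypothetical, chosen only to show the arithmetic):

import numpy as np

# Hypothetical ArgMax segmentation output; the real model's shape may differ.
output = np.zeros((1, 512, 512), dtype=np.int64)
allowed = int(output.size * 1.0 / 100)  # ptol of 1.0 => up to 1% of entries
print(allowed)  # 2621 of 262144 entries may disagree with the TF output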

Signed-off-by: Tom Wildenhain <tomwi@microsoft.com>
