
[Continue] Do not ignore the entire fused pattern if non-primary nodes are in the ignored scopes #1314

Merged (2 commits) on Oct 13, 2022

Conversation

ljaljushkin
Contributor

Changes

Adjusted the common quantizer propagation algorithm so that the entire fused pattern is ignored only if the primary node of the pattern matches an ignored scope.
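
As a rough sketch of the new rule (hypothetical names and node representation; the real quantizer propagation code operates on NNCF graph nodes and uses more elaborate scope matching):

```python
from dataclasses import dataclass

@dataclass
class Node:
    scope_str: str  # fully qualified scope of the operation

def matches_ignored_scope(node: Node, ignored_scopes: list) -> bool:
    # Simplified scope matching; NNCF's real matching is more elaborate.
    return any(scope in node.scope_str for scope in ignored_scopes)

def should_ignore_fused_pattern(primary_node: Node, pattern_nodes: list,
                                ignored_scopes: list) -> bool:
    # Old behaviour (removed): the whole fused pattern was dropped if ANY
    # of its nodes matched an ignored scope, i.e.:
    #   return any(matches_ignored_scope(n, ignored_scopes)
    #              for n in pattern_nodes)
    # New behaviour: only the primary node of the pattern decides.
    return matches_ignored_scope(primary_node, ignored_scopes)

conv = Node("LeNet/Conv2d[conv1]")
relu = Node("LeNet/ReLU[relu1]")
# Ignoring the fused ReLU no longer suppresses quantization for the Conv:
assert not should_ignore_fused_pattern(conv, [conv, relu], ["ReLU[relu1]"])
```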

Reason for changes

Improves the logical consistency of "ignored_scopes" and its effect on the algorithm's result.

Related tickets

92302

Tests

Updated references + added new test cases in test_activation_ignored_scope.
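
For illustration, a config update of the kind these test cases exercise might look like the following (the scope string and exact structure are assumptions, not the test's literal parameters):

```python
# Hypothetical config fragment in the style of NNCF JSON configs; the
# exact scope string used by test_activation_ignored_scope is an assumption.
config_update = {
    "compression": {
        "algorithm": "quantization",
        "ignored_scopes": ["LeNet/ReLU[relu1]"],  # ignore only the activation
    }
}
# Expected with this PR: the fused Conv + ReLU pattern is not dropped
# wholesale; the ReLU's quantizer is removed while the quantizer on the
# Conv input remains.
```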

@github-actions github-actions bot added NNCF Common Pull request that updates NNCF Common NNCF PT Pull requests that updates NNCF PyTorch labels Oct 12, 2022
@ljaljushkin
Contributor Author

Some tests were failing in #1289:

  1. test_min_max_quantization_graph[model_to_test3] (Maskrcnn-12)
    Solution: made sure that the updated reference .dot file has new quantizers in the expected places.
    Supporting screenshots: maskrcnn_diff, maskrcnn_diff_fq onnx, maskrcnn_diff_graph.
    Only the Add operation is ignored.
    MatMul + Add is fused for ONNX: https://github.com/openvinotoolkit/nncf/blob/develop/nncf/experimental/onnx/hardware/fused_patterns.py#L56
    The entire fused pattern is not ignored and MatMul is quantized, which is expected with the proposed changes (#1289: Do not ignore the entire fused pattern if non-primary nodes are in the ignored scopes); a sketch follows this list.

  2. quantization.test_algo_quantization.test_activation_ignored_scope[update_config_info1-should_ignore_quantizers1]
    quantization.test_algo_quantization.test_activation_ignored_scope[update_config_info2-should_ignore_quantizers2]
    Solution: made sure that the test produces quantizers in the expected places.
    ReLU is ignored in the Conv + ReLU pattern, but Conv's input is quantized, as expected.
    Screenshot: lenet_test_case.
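
To make item 1 concrete, here is a minimal sketch of the expected behaviour for the ONNX MATMUL + ADD fused pattern; the node names and the membership check are illustrative, not NNCF's actual pattern-matching code:

```python
# Illustrative sketch of the Mask R-CNN case in item 1; names are
# hypothetical, not NNCF's actual pattern-matching code.
fused_pattern = ("MatMul", "Add")   # fused for ONNX, quantized as one unit
ignored_scopes = {"Add"}            # only the Add op falls into ignored scopes
primary_node = fused_pattern[0]     # MatMul acts as the pattern's primary node

# Old behaviour: any ignored node dropped the whole pattern.
old_pattern_ignored = any(node in ignored_scopes for node in fused_pattern)
# New behaviour: only the primary node decides.
new_pattern_ignored = primary_node in ignored_scopes

assert old_pattern_ignored is True   # MatMul would have lost its quantizer
assert new_pattern_ignored is False  # MatMul stays quantized, as in the diff
```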

@ljaljushkin ljaljushkin requested review from vshampor and kshpv October 12, 2022 19:23
@ljaljushkin ljaljushkin force-pushed the do_not_ignore_pattern branch from 07ba3bf to b2b433e on October 13, 2022 10:02