
[ModuleSparsificationInfo] Add loggable_items method #1468

Conversation

dbogunowicz (Contributor) commented Mar 22, 2023

Feature Preview

from sparseml.pytorch.models import ModelRegistry
from sparseml.pytorch.utils import PythonLogger, log_module_sparsification_info
from sparseml.pytorch.optim import ScheduledModifierManager
import logging

logging.basicConfig(level=logging.DEBUG)

ZOO_STUB = "zoo:cv/classification/resnet_v1-18/pytorch/sparseml/imagenet/pruned-conservative"
QUANT_RECIPE = """
!QuantizationModifier
    start_epoch: 0.0
    scheme:
        input_activations:
            num_bits: 8
            symmetric: False
        weights:
            num_bits: 8
            symmetric: True
    scheme_overrides:
        feature_extractor: "default"
        classifier:
            input_activations:
                num_bits: 8
                symmetric: False
            weights: null
        Conv2d:
            input_activations:
                num_bits: 8
                symmetric: True
    ignore: ["ReLU", "input"]
    disable_quantization_observer_epoch: 2.0
    freeze_bn_stats_epoch: 3.0
    model_fuse_fn_name: 'fuse_module'
    strict: True"""

# SparseZoo stub points to a pre-trained, pruned ResNet-18 for the ImageNet dataset
model = ModelRegistry.create(
    key="resnet18",
    pretrained_path=ZOO_STUB,
)

manager = ScheduledModifierManager.from_yaml(QUANT_RECIPE)
manager.apply(model)

log_module_sparsification_info(model, logger=PythonLogger())

Returns the following logs.

Note: every log line currently appears four times (three extra copies) because of a bug; the fix is tracked in #1483.

python SparsificationSummaries/OperationCounts step None: {'ConvBnReLU2d': 9, 'MaxPool2d': 1, 'ConvBn2d': 11, 'AdaptiveAvgPool2d': 1, 'Linear': 1, 'Softmax': 1}
python SparsificationSummaries/OperationCounts step None: {'ConvBnReLU2d': 9, 'MaxPool2d': 1, 'ConvBn2d': 11, 'AdaptiveAvgPool2d': 1, 'Linear': 1, 'Softmax': 1}
python SparsificationSummaries/OperationCounts step None: {'ConvBnReLU2d': 9, 'MaxPool2d': 1, 'ConvBn2d': 11, 'AdaptiveAvgPool2d': 1, 'Linear': 1, 'Softmax': 1}
python SparsificationSummaries/OperationCounts step None: {'ConvBnReLU2d': 9, 'MaxPool2d': 1, 'ConvBn2d': 11, 'AdaptiveAvgPool2d': 1, 'Linear': 1, 'Softmax': 1}
python SparsificationSummaries/ParameterCounts step None: {'input.conv.module.weight': 9408, 'input.conv.module.bn.weight': 64, 'input.conv.module.bn.bias': 64, 'sections.0.0.conv1.module.weight': 36864, 'sections.0.0.conv1.module.bn.weight': 64, 'sections.0.0.conv1.module.bn.bias': 64, 'sections.0.0.conv2.module.weight': 36864, 'sections.0.0.conv2.module.bn.weight': 64, 'sections.0.0.conv2.module.bn.bias': 64, 'sections.0.1.conv1.module.weight': 36864, 'sections.0.1.conv1.module.bn.weight': 64, 'sections.0.1.conv1.module.bn.bias': 64, 'sections.0.1.conv2.module.weight': 36864, 'sections.0.1.conv2.module.bn.weight': 64, 'sections.0.1.conv2.module.bn.bias': 64, 'sections.1.0.conv1.module.weight': 73728, 'sections.1.0.conv1.module.bn.weight': 128, 'sections.1.0.conv1.module.bn.bias': 128, 'sections.1.0.conv2.module.weight': 147456, 'sections.1.0.conv2.module.bn.weight': 128, 'sections.1.0.conv2.module.bn.bias': 128, 'sections.1.0.identity.conv.module.weight': 8192, 'sections.1.0.identity.conv.module.bn.weight': 128, 'sections.1.0.identity.conv.module.bn.bias': 128, 'sections.1.1.conv1.module.weight': 147456, 'sections.1.1.conv1.module.bn.weight': 128, 'sections.1.1.conv1.module.bn.bias': 128, 'sections.1.1.conv2.module.weight': 147456, 'sections.1.1.conv2.module.bn.weight': 128, 'sections.1.1.conv2.module.bn.bias': 128, 'sections.2.0.conv1.module.weight': 294912, 'sections.2.0.conv1.module.bn.weight': 256, 'sections.2.0.conv1.module.bn.bias': 256, 'sections.2.0.conv2.module.weight': 589824, 'sections.2.0.conv2.module.bn.weight': 256, 'sections.2.0.conv2.module.bn.bias': 256, 'sections.2.0.identity.conv.module.weight': 32768, 'sections.2.0.identity.conv.module.bn.weight': 256, 'sections.2.0.identity.conv.module.bn.bias': 256, 'sections.2.1.conv1.module.weight': 589824, 'sections.2.1.conv1.module.bn.weight': 256, 'sections.2.1.conv1.module.bn.bias': 256, 'sections.2.1.conv2.module.weight': 589824, 'sections.2.1.conv2.module.bn.weight': 256, 
'sections.2.1.conv2.module.bn.bias': 256, 'sections.3.0.conv1.module.weight': 1179648, 'sections.3.0.conv1.module.bn.weight': 512, 'sections.3.0.conv1.module.bn.bias': 512, 'sections.3.0.conv2.module.weight': 2359296, 'sections.3.0.conv2.module.bn.weight': 512, 'sections.3.0.conv2.module.bn.bias': 512, 'sections.3.0.identity.conv.module.weight': 131072, 'sections.3.0.identity.conv.module.bn.weight': 512, 'sections.3.0.identity.conv.module.bn.bias': 512, 'sections.3.1.conv1.module.weight': 2359296, 'sections.3.1.conv1.module.bn.weight': 512, 'sections.3.1.conv1.module.bn.bias': 512, 'sections.3.1.conv2.module.weight': 2359296, 'sections.3.1.conv2.module.bn.weight': 512, 'sections.3.1.conv2.module.bn.bias': 512, 'classifier.fc.module.weight': 512000, 'classifier.fc.module.bias': 1000}
...
python SparsificationSummaries/QuantizedOperations/count step None: 24
python SparsificationSummaries/QuantizedOperations/count step None: 24
python SparsificationSummaries/QuantizedOperations/count step None: 24
python SparsificationSummaries/QuantizedOperations/count step None: 24
python SparsificationSummaries/QuantizedOperations/percent step None: 1.0
python SparsificationSummaries/QuantizedOperations/percent step None: 1.0
python SparsificationSummaries/QuantizedOperations/percent step None: 1.0
python SparsificationSummaries/QuantizedOperations/percent step None: 1.0
python SparsificationSummaries/PrunedParameters/count step None: 19
python SparsificationSummaries/PrunedParameters/count step None: 19
python SparsificationSummaries/PrunedParameters/count step None: 19
python SparsificationSummaries/PrunedParameters/count step None: 19
python SparsificationSummaries/PrunedParameters/percent step None: 0.3064516129032258
python SparsificationSummaries/PrunedParameters/percent step None: 0.3064516129032258
python SparsificationSummaries/PrunedParameters/percent step None: 0.3064516129032258
python SparsificationSummaries/PrunedParameters/percent step None: 0.3064516129032258
python SparsificationPruning/ZeroParameters/input.conv.module.weight/count step None: 0
python SparsificationPruning/ZeroParameters/input.conv.module.weight/count step None: 0
python SparsificationPruning/ZeroParameters/input.conv.module.weight/count step None: 0
python SparsificationPruning/ZeroParameters/input.conv.module.weight/count step None: 0
python SparsificationPruning/ZeroParameters/input.conv.module.weight/percent step None: 0.0
python SparsificationPruning/ZeroParameters/input.conv.module.weight/percent step None: 0.0
python SparsificationPruning/ZeroParameters/input.conv.module.weight/percent step None: 0.0
python SparsificationPruning/ZeroParameters/input.conv.module.weight/percent step None: 0.0
python SparsificationPruning/ZeroParameters/input.conv.module.bn.weight/count step None: 0
python SparsificationPruning/ZeroParameters/input.conv.module.bn.weight/count step None: 0
python SparsificationPruning/ZeroParameters/input.conv.module.bn.weight/count step None: 0
python SparsificationPruning/ZeroParameters/input.conv.module.bn.weight/count step None: 0
python SparsificationPruning/ZeroParameters/input.conv.module.bn.weight/percent step None: 0.0
python SparsificationPruning/ZeroParameters/input.conv.module.bn.weight/percent step None: 0.0
python SparsificationPruning/ZeroParameters/input.conv.module.bn.weight/percent step None: 0.0
python SparsificationPruning/ZeroParameters/input.conv.module.bn.weight/percent step None: 0.0
...
python SparsificationQuantization/ConvBnReLU2d/enabled step None: True
python SparsificationQuantization/ConvBnReLU2d/enabled step None: True
python SparsificationQuantization/ConvBnReLU2d/enabled step None: True
python SparsificationQuantization/ConvBnReLU2d/enabled step None: True
python SparsificationQuantization/ConvBnReLU2d/precision/weights/num_bits step None: 8
python SparsificationQuantization/ConvBnReLU2d/precision/weights/num_bits step None: 8
python SparsificationQuantization/ConvBnReLU2d/precision/weights/num_bits step None: 8
python SparsificationQuantization/ConvBnReLU2d/precision/weights/num_bits step None: 8
python SparsificationQuantization/ConvBnReLU2d/precision/input_activations/num_bits step None: 8
python SparsificationQuantization/ConvBnReLU2d/precision/input_activations/num_bits step None: 8
python SparsificationQuantization/ConvBnReLU2d/precision/input_activations/num_bits step None: 8
python SparsificationQuantization/ConvBnReLU2d/precision/input_activations/num_bits step None: 8
python SparsificationQuantization/MaxPool2d/enabled step None: True
python SparsificationQuantization/MaxPool2d/enabled step None: True
python SparsificationQuantization/MaxPool2d/enabled step None: True
python SparsificationQuantization/MaxPool2d/enabled step None: True
python SparsificationQuantization/MaxPool2d/precision/weights/num_bits step None: 8
python SparsificationQuantization/MaxPool2d/precision/weights/num_bits step None: 8
python SparsificationQuantization/MaxPool2d/precision/weights/num_bits step None: 8
python SparsificationQuantization/MaxPool2d/precision/weights/num_bits step None: 8
python SparsificationQuantization/MaxPool2d/precision/input_activations/num_bits step None: 8
python SparsificationQuantization/MaxPool2d/precision/input_activations/num_bits step None: 8
python SparsificationQuantization/MaxPool2d/precision/input_activations/num_bits step None: 8
python SparsificationQuantization/MaxPool2d/precision/input_activations/num_bits step None: 8
python SparsificationQuantization/ConvBnReLU2d_1/enabled step None: True
python SparsificationQuantization/ConvBnReLU2d_1/enabled step None: True
python SparsificationQuantization/ConvBnReLU2d_1/enabled step None: True
python SparsificationQuantization/ConvBnReLU2d_1/enabled step None: True
...
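As a sanity check on the summary values above: `SparsificationSummaries/PrunedParameters/percent` is simply the count of pruned parameter tensors over the total number of parameter tensors (the `ParameterCounts` dict above lists 62 entries, 19 of which are reported as pruned):

```python
# Sanity check on the logged summary values: percent = pruned count / total
# number of parameter tensors (62 entries in the ParameterCounts dict above).
pruned_count = 19
total_parameter_tensors = 62
percent = pruned_count / total_parameter_tensors
print(percent)  # 0.3064516129032258, matching SparsificationSummaries/PrunedParameters/percent
```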

@@ -58,5 +59,13 @@ def from_module(cls, module: torch.nn.Module) -> "ModuleSparsificationInfo":
             quantization_info=SparsificationQuantization.from_module(module),
         )
 
-    def loggable_items(self) -> Iterable[Tuple[str, float]]:
-        raise NotImplementedError()
+    def loggable_items(self) -> Generator[Tuple[str, Any], None, None]:
dbogunowicz (Contributor, Author) commented:

Thoughts on making it an __iter__, such that we can do:

info = ModuleSparsificationInfo.from_module(...)
for tag, item in info:
    ...
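A minimal sketch of this suggestion (a toy stand-in class, not the actual SparseML implementation): `__iter__` could simply delegate to the `loggable_items()` generator, making the class directly iterable.

```python
# Toy sketch (hypothetical, not SparseML code): making a sparsification-info
# object iterable by delegating __iter__ to its loggable_items() generator.
from typing import Any, Dict, Generator, Tuple


class SparsificationInfoSketch:
    def __init__(self, items: Dict[str, Any]):
        self._items = items  # e.g. {"SparsificationSummaries/.../count": 19}

    def loggable_items(self) -> Generator[Tuple[str, Any], None, None]:
        for tag, value in self._items.items():
            yield tag, value

    def __iter__(self):
        # Delegation makes `for tag, item in info:` iterate loggable_items().
        return self.loggable_items()


info = SparsificationInfoSketch({"SparsificationSummaries/PrunedParameters/count": 19})
for tag, item in info:
    print(tag, item)
```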

Contributor replied:

Let's not do that for now, since the logged items here should directly match the logging targets from our PRD rather than be representative of everything contained in this class.

dbogunowicz (Contributor, Author) commented:

Also note: while working on this feature I noticed a bug in the quantization info. Because operators can share the same name (e.g. Conv2d for every convolutional operator), we need to append an identifier (such as an integer) to keep them separate. Each of them needs to store its own information, e.g. its dtype or whether quantization is enabled.
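An illustrative sketch of the fix described above (hypothetical helper, not the actual SparseML code): append an integer suffix so repeated operator class names stay distinct, matching the `ConvBnReLU2d_1` style visible in the logs:

```python
# Illustrative sketch: disambiguate repeated operator class names by appending
# an integer suffix, e.g. ["Conv2d", "Conv2d"] -> ["Conv2d", "Conv2d_1"].
from collections import Counter
from typing import List


def unique_operation_names(operations: List[str]) -> List[str]:
    seen = Counter()  # how many times each class name has appeared so far
    names = []
    for op in operations:
        names.append(op if seen[op] == 0 else f"{op}_{seen[op]}")
        seen[op] += 1
    return names


print(unique_operation_names(["ConvBnReLU2d", "MaxPool2d", "ConvBnReLU2d"]))
# ['ConvBnReLU2d', 'MaxPool2d', 'ConvBnReLU2d_1']
```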

@dbogunowicz dbogunowicz changed the title [Sparsification Info] Add logabble_items method [ModuleSparsificationInfo] Add logabble_items method Mar 22, 2023

src/sparseml/pytorch/utils/sparsification_info/helpers.py (outdated; conversation resolved)
@dbogunowicz dbogunowicz changed the base branch from feature/damian/summary_configs to feature/damian/add_tests_sparsification March 24, 2023 19:04
Base automatically changed from feature/damian/add_tests_sparsification to feature/damian/summary_configs March 27, 2023 15:28
Base automatically changed from feature/damian/summary_configs to feature/damian/module_sparsification_info March 27, 2023 16:00
@@ -108,6 +108,22 @@ def from_module(
             operation_counts=Counter([op.__class__.__name__ for op in operations]),
         )
 
+    def loggable_items(
Contributor commented:

Thoughts on making an abstract class that all of these inherit from? That's the only other way I can think of to ensure the object you get has loggable_items (or you could just call hasattr).

dbogunowicz (Contributor, Author) replied:

Yeah, sounds like a good idea.
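A minimal sketch of the abstract-base-class idea discussed above (hypothetical names, not the actual SparseML classes): subclasses are forced to implement `loggable_items()`, so callers can rely on its presence without a `hasattr` check:

```python
# Hypothetical sketch of the abstract base class discussed above: any subclass
# must implement loggable_items(), or instantiation fails with a TypeError.
from abc import ABC, abstractmethod
from typing import Any, Generator, Tuple


class LoggableInfo(ABC):
    @abstractmethod
    def loggable_items(self) -> Generator[Tuple[str, Any], None, None]:
        ...


class SummariesSketch(LoggableInfo):
    def loggable_items(self) -> Generator[Tuple[str, Any], None, None]:
        yield "SparsificationSummaries/PrunedParameters/count", 19


print(next(SummariesSketch().loggable_items()))
```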

corey-nm (Contributor) left a comment:

lgtm, nice use of generators

bfineran (Contributor) left a comment:

LGTM pending comment


        quantization_scheme = self.quantization_scheme[operation]
        if quantization_scheme is None:
            yield f"{main_tag}/{operation}/precision", None
Contributor commented:

Why None instead of 32/16 based on the param?
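One way to implement the reviewer's suggestion (an illustrative helper with hypothetical names, not SparseML code): when no quantization scheme is set, fall back to the bit width of the parameter's floating-point dtype rather than yielding None:

```python
# Illustrative fallback (hypothetical helper): report the bit width of the
# parameter's dtype when no quantization scheme applies, instead of None.
def fallback_num_bits(dtype_name: str) -> int:
    widths = {"float64": 64, "float32": 32, "float16": 16, "bfloat16": 16}
    if dtype_name not in widths:
        raise ValueError(f"unknown dtype: {dtype_name}")
    return widths[dtype_name]


print(fallback_num_bits("float32"))  # 32
print(fallback_num_bits("float16"))  # 16
```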

@dbogunowicz dbogunowicz merged commit 8052c54 into feature/damian/module_sparsification_info Mar 28, 2023
@dbogunowicz dbogunowicz deleted the feature/damian/loggable_items branch March 28, 2023 09:12
dbogunowicz added a commit that referenced this pull request Mar 28, 2023
* initial_commit

* [ModuleSparsificationInfo] Proposal of the main logic (#1454)

* initial commit

* initial developement

* sync with ben

* prototype ready

* included ben's comments

* Update src/sparseml/pytorch/utils/sparsification_info/configs.py

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

* Update src/sparseml/pytorch/utils/sparsification_info/configs.py

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

* address comments

* fix quantization logic

* remove identities from leaf operations

* fix the Identity removal

* [ModuleSparsificationInfo][Tests] Proposal of the main logic (#1479)

* Update src/sparseml/pytorch/utils/sparsification_info/helpers.py

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

* addressing PR comments

---------

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

* [ModuleSparsificationInfo] Add `logabble_items` method (#1468)

* initial commit

* initial developement

* sync with ben

* prototype ready

* included ben's comments

* initial commit

* Update src/sparseml/pytorch/utils/sparsification_info/configs.py

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

* Update src/sparseml/pytorch/utils/sparsification_info/configs.py

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

* address comments

* fix quantization logic

* formatting standardised to the PRD

* remove identities from leaf operations

* fix the Identity removal

* initial commit

* cleanup

* correct tests

* address PR comments

---------

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

* [ModuleSparisificationInfo] LoggingModifier (#1484)

* initial commit

* initial developement

* sync with ben

* prototype ready

* included ben's comments

* initial commit

* Update src/sparseml/pytorch/utils/sparsification_info/configs.py

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

* Update src/sparseml/pytorch/utils/sparsification_info/configs.py

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

* address comments

* fix quantization logic

* formatting standardised to the PRD

* remove identities from leaf operations

* fix the Identity removal

* initial commit

* initial commit

* checkpoint

* Delete modifier_logging.py

* Apply suggestions from code review

* tested the modifier

* Apply suggestions from code review

---------

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

* [ModuleSparsificationInfo][Tests] SparsificationLoggingModifier (#1485)

* Apply suggestions from code review

* Trigger tests

---------

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>