
[BUG] Creating a pretrained CoaT model with out_features and return_interm_layers fails #1912

Closed
sakvaua opened this issue Aug 10, 2023 · 0 comments
Labels
bug Something isn't working
sakvaua commented Aug 10, 2023

timm.create_model fails with "AttributeError: 'CoaT' object has no attribute 'norm2'" if I try to create a pretrained model with out_features and return_interm_layers set. It works fine if I set pretrained=False.

Steps to reproduce the behavior:

import timm
model = timm.create_model('coat_mini', return_interm_layers=True, out_features=['x1_nocls', 'x2_nocls', 'x3_nocls', 'x4_nocls'], pretrained=True)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[12], line 2
      1 import timm
----> 2 model=timm.create_model('coat_mini',in_chans=1, return_interm_layers=True, out_features=['x1_nocls','x2_nocls','x3_nocls','x4_nocls'], pretrained=True)

File ~/anaconda3/lib/python3.10/site-packages/timm/models/_factory.py:114, in create_model(model_name, pretrained, pretrained_cfg, pretrained_cfg_overlay, checkpoint_path, scriptable, exportable, no_jit, **kwargs)
    112 create_fn = model_entrypoint(model_name)
    113 with set_layer_config(scriptable=scriptable, exportable=exportable, no_jit=no_jit):
--> 114     model = create_fn(
    115         pretrained=pretrained,
    116         pretrained_cfg=pretrained_cfg,
    117         pretrained_cfg_overlay=pretrained_cfg_overlay,
    118         **kwargs,
    119     )
    121 if checkpoint_path:
    122     load_checkpoint(model, checkpoint_path)

File ~/anaconda3/lib/python3.10/site-packages/timm/models/coat.py:752, in coat_mini(pretrained, **kwargs)
    748 @register_model
    749 def coat_mini(pretrained=False, **kwargs) -> CoaT:
    750     model_cfg = dict(
    751         patch_size=4, embed_dims=[152, 216, 216, 216], serial_depths=[2, 2, 2, 2], parallel_depth=6)
--> 752     model = _create_coat('coat_mini', pretrained=pretrained, **dict(model_cfg, **kwargs))
    753     return model

File ~/anaconda3/lib/python3.10/site-packages/timm/models/coat.py:704, in _create_coat(variant, pretrained, default_cfg, **kwargs)
    701 if kwargs.get('features_only', None):
    702     raise RuntimeError('features_only not implemented for Vision Transformer models.')
--> 704 model = build_model_with_cfg(
    705     CoaT,
    706     variant,
    707     pretrained,
    708     pretrained_filter_fn=checkpoint_filter_fn,
    709     **kwargs,
    710 )
    711 return model

File ~/anaconda3/lib/python3.10/site-packages/timm/models/_builder.py:393, in build_model_with_cfg(model_cls, variant, pretrained, pretrained_cfg, pretrained_cfg_overlay, model_cfg, feature_cfg, pretrained_strict, pretrained_filter_fn, kwargs_filter, **kwargs)
    391 num_classes_pretrained = 0 if features else getattr(model, 'num_classes', kwargs.get('num_classes', 1000))
    392 if pretrained:
--> 393     load_pretrained(
    394         model,
    395         pretrained_cfg=pretrained_cfg,
    396         num_classes=num_classes_pretrained,
    397         in_chans=kwargs.get('in_chans', 3),
    398         filter_fn=pretrained_filter_fn,
    399         strict=pretrained_strict,
    400     )
    402 # Wrap the model in a feature extraction module if enabled
    403 if features:

File ~/anaconda3/lib/python3.10/site-packages/timm/models/_builder.py:193, in load_pretrained(model, pretrained_cfg, num_classes, in_chans, filter_fn, strict)
    191 if filter_fn is not None:
    192     try:
--> 193         state_dict = filter_fn(state_dict, model)
    194     except TypeError as e:
    195         # for backwards compat with filter fn that take one arg
    196         state_dict = filter_fn(state_dict)

File ~/anaconda3/lib/python3.10/site-packages/timm/models/coat.py:693, in checkpoint_filter_fn(state_dict, model)
    689 state_dict = state_dict.get('model', state_dict)
    690 for k, v in state_dict.items():
    691     # original model had unused norm layers, removing them requires filtering pretrained checkpoints
    692     if k.startswith('norm1') or \
--> 693             (model.norm2 is None and k.startswith('norm2')) or \
    694             (model.norm3 is None and k.startswith('norm3')):
    695         continue
    696     out_dict[k] = v

File ~/anaconda3/lib/python3.10/site-packages/torch/nn/modules/module.py:1640, in Module.__getattr__(self, name)
   1638     if name in modules:
   1639         return modules[name]
-> 1640 raise AttributeError("'{}' object has no attribute '{}'".format(
   1641     type(self).__name__, name))

AttributeError: 'CoaT' object has no attribute 'norm2'
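For context, the traceback shows that checkpoint_filter_fn in timm/models/coat.py accesses model.norm2 and model.norm3 directly; with return_interm_layers=True the CoaT module apparently never registers those submodules, so nn.Module.__getattr__ raises AttributeError. Below is a minimal, self-contained sketch of the failing pattern with a getattr-based guard. This is one possible defensive workaround, not the actual timm patch, and it may not be the semantically right behavior for interm-layers mode; Model here is just a stand-in for CoaT without the norm attributes.

```python
class Model:
    # Stand-in for CoaT built with return_interm_layers=True:
    # no norm2 / norm3 attributes exist on the instance.
    pass


def checkpoint_filter_fn(state_dict, model):
    """Drop checkpoint keys for norm layers the model does not have.

    Mirrors the filter logic from the traceback, but uses
    getattr(..., None) so a missing attribute is treated the same
    as an attribute that is explicitly None, instead of raising.
    """
    out_dict = {}
    for k, v in state_dict.items():
        if k.startswith('norm1') or \
                (getattr(model, 'norm2', None) is None and k.startswith('norm2')) or \
                (getattr(model, 'norm3', None) is None and k.startswith('norm3')):
            continue
        out_dict[k] = v
    return out_dict


sd = {'norm1.weight': 1, 'norm2.weight': 2, 'patch_embed.weight': 3}
print(checkpoint_filter_fn(sd, Model()))  # only patch_embed.weight survives
```

With the direct attribute access (model.norm2) this same call would raise AttributeError, reproducing the crash above without needing the pretrained checkpoint.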
sakvaua added the bug label Aug 10, 2023