
has_attentions - consistent test skipping logic and tf tests #17495

Merged

Conversation

amyeroberts
Collaborator

What does this PR do?

Two linked changes regarding the control of which tests are run:

PyTorch and TF consistency: the has_attentions flag is used in test_modeling_common.py to control some of the test logic, applying attention-related checks only if the model has attention layers. This PR adds equivalent logic to the tests in test_modeling_tf_common.py.

Test-skipping consistency: unittest.skip is used to skip entire tests that cannot or do not apply to a given model, e.g. the input-embeddings test for ConvNext. For test_attention_outputs this was previously controlled with an if/else statement. It is now controlled with unittest.skip instead, for two reasons: 1) consistency with the rest of the code base, and 2) clearer pytest output, i.e. models without attention are reported as skipping test_attention_outputs rather than passing it. A sketch of both patterns is shown below.
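For illustration only, here is a minimal sketch of the two patterns described above. The class names mirror the real test mixins, but the method bodies, the skip reason, and the example test are simplified stand-ins rather than the actual test suite code:

```python
import unittest


class TFModelTesterMixin:
    # Shared test mixin: attention-free model test classes override this to False
    # so that attention-specific assertions in the common tests are not applied.
    has_attentions = True

    def test_hidden_states_output(self):
        if self.has_attentions:
            pass  # attention-shape checks would go here, alongside hidden-state checks


# For tests that do not apply to a model at all, skipping is preferred over an
# if/else no-op: pytest then reports the test as skipped rather than passed.
class TFConvNextModelTest(TFModelTesterMixin, unittest.TestCase):
    has_attentions = False

    @unittest.skip(reason="ConvNext does not output attentions")
    def test_attention_outputs(self):
        pass
```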

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • Did you write any new necessary tests?


else:
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
seq_len = getattr(self.model_tester, "seq_length", None)
Collaborator Author

I'm not sure why the diff is so confusing for this file (possibly blank lines?). The only change is:

  • Removing the if / else statement
  • Un-indenting the logic in the else block

Apologies for the difficulty this introduces when reviewing.

Collaborator

Using a GUI tool for the review is OK here.

@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented May 31, 2022

The documentation is not available anymore as the PR was closed or merged.

@amyeroberts
Collaborator Author

@NielsRogge, @ydshieh, @patrickvonplaten, adding you to cover git blame ownership, test ownership, and some git blame plus general transformers ownership, respectively. I hope that's OK. Feel free to remove yourselves and/or add others you think are more suitable.

Collaborator

@ydshieh ydshieh left a comment

Thank you! I left a comment about a potentially wrong reason.

tests/models/poolformer/test_modeling_poolformer.py (outdated review comment, resolved)


tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
check_equivalence(model, tuple_inputs, dict_inputs, {"output_attentions": True})
if self.has_attentions:
Collaborator

@ydshieh ydshieh May 31, 2022

What happened with this test before this PR? Did it fail? I mean, previously we didn't have the condition if self.has_attentions:

Collaborator Author

Pretty much - a TensorFlow model without attention layers would fail once tests equivalent to the PyTorch ones were implemented for it. The same logic I'm adding here is already in test_modeling_common.py. I found this when working on the ResNet port. All of the models this affects only have PyTorch implementations, apart from ConvNext, which uses unittest.skip in its tests, so there are no failing tests on main at the moment.

Collaborator

All yes (I somehow unconsciously assumed that all the models have a TF version) 😄
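For context, a minimal, self-contained sketch of the gating discussed in this thread, assuming a check_equivalence helper along the lines of the one the common tests define inline. The helper body, the test class, and the inputs here are stubs, not the real code:

```python
import unittest


def check_equivalence(model, tuple_inputs, dict_inputs, extra_kwargs=None):
    # Stub for the helper the real test defines inline: it would run the model on
    # both input styles (with extra_kwargs applied) and assert the outputs match.
    pass


class OutputEquivalenceSketch(unittest.TestCase):
    has_attentions = False  # an attention-free model, e.g. a ConvNext-style test class

    def test_model_outputs_equivalence(self):
        model, tuple_inputs, dict_inputs = None, (), {}

        # Tuple vs dict outputs are always compared with default settings.
        check_equivalence(model, tuple_inputs, dict_inputs)

        # Requesting attentions is gated: before this PR the TF tests always passed
        # output_attentions=True, which an attention-free TF model cannot satisfy.
        if self.has_attentions:
            check_equivalence(model, tuple_inputs, dict_inputs, {"output_attentions": True})
```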

@patrickvonplaten
Contributor

Thanks a lot for cleaning this up @amyeroberts !

Member

@LysandreJik LysandreJik left a comment

Thank you, @amyeroberts!

@amyeroberts
Collaborator Author

amyeroberts commented Jun 8, 2022

@NielsRogge I decided not to remove has_attentions in this PR, and would like to focus on making the PT/TF tests consistent. If that's OK with you, I'll go ahead and merge.

@amyeroberts amyeroberts merged commit dfc76b2 into huggingface:main Jun 9, 2022
@amyeroberts amyeroberts deleted the add-tf-test-has-attentions-logic branch June 9, 2022 07:50
elusenji pushed a commit to elusenji/transformers that referenced this pull request Jun 12, 2022
amyeroberts added a commit to amyeroberts/transformers that referenced this pull request Jun 16, 2022