```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
in <cell line: 21>()
     19
     20 print("=> Creating model")
---> 21 model = VideoRecap(old_args, eval_only=True)
     22 model = model.cuda()
     23 model.load_state_dict(state_dict, strict=True)

5 frames
/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py in _check_and_enable_sdpa(cls, config, hard_check_only)
   1729         if hard_check_only:
   1730             if not cls._supports_sdpa:
-> 1731                 raise ValueError(
   1732                     f"{cls.__name__} does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet."
   1733                     " Please request the support for this architecture: huggingface/transformers#28005. If you believe"

ValueError: GPT2LMHeadModel does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: huggingface/transformers#28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")
```
This is the output of the demo notebook while trying to load the VideoRecap model in the Clip Caption section.
Which version of PyTorch can run it?
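The error message itself suggests a workaround: force the GPT-2 decoder to use the "eager" attention implementation instead of torch.nn.functional.scaled_dot_product_attention. Below is a minimal sketch of that workaround, assuming the GPT-2 language model inside VideoRecap is created with Hugging Face's `from_pretrained`; the `"gpt2"` checkpoint name is a placeholder, not necessarily what VideoRecap actually loads.

```python
# Hedged sketch of the workaround the error message recommends:
# load the GPT-2 decoder with the classic "eager" attention implementation,
# which bypasses the SDPA support check that raises the ValueError.
from transformers import GPT2LMHeadModel

decoder = GPT2LMHeadModel.from_pretrained(
    "gpt2",                        # placeholder checkpoint name
    attn_implementation="eager",   # avoid the scaled_dot_product_attention path
)
```

If VideoRecap instantiates the decoder internally, the same `attn_implementation="eager"` argument would need to be passed wherever its GPT-2 model is constructed; alternatively, pinning `transformers` to a release that predates the SDPA check should also sidestep the error.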