Fix device mismatch error in Whisper model during feature extraction #35866
Conversation
cc @eustlb
```diff
@@ -298,6 +298,7 @@ def test_torch_integration_batch(self):
         )
         # fmt: on

+        torch.set_default_device("cuda")
```
I verified that this test fails with this addition and without my patch. Is adding this line okay? If not, can someone recommend a better way to write/update the test case, since it depends on CUDA?
To the best of my knowledge, we are not testing `set_default_device` in Transformers. I would rather go with:

```diff
-torch.set_default_device("cuda")
+with torch.device("cuda"):
```
followed by indenting the block:

```python
with torch.device("cuda"):
    input_speech = self._load_datasamples(3)
    feature_extractor = WhisperFeatureExtractor()
    input_features = feature_extractor(input_speech, return_tensors="pt").input_features
```
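A minimal sketch of why the context manager is preferable to the global setter: factory calls inside the `with torch.device(...)` block pick up that device, and the previous default is restored on exit, so the change cannot leak into later tests the way `torch.set_default_device` does. The sketch uses the `"meta"` device instead of `"cuda"` so it runs without a GPU; the actual test would use `"cuda"`.

```python
import torch

# Inside the block, factory functions default to the chosen device;
# outside, the prior default (cpu) is restored automatically.
with torch.device("meta"):
    inside = torch.empty(2)
outside = torch.empty(2)

print(inside.device.type, outside.device.type)  # meta cpu
```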
Great catch, thanks! 🤗 Minor change to clean a bit further but otherwise LGTM
Thanks for iterating! LGTM
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Hi @eustlb, are you waiting on something before merging? This is blocking my other PR, so I'd like to get it merged as soon as possible.
Hi @eustlb, what is the reason for the hold-up?
It got buried in my GitHub notifications, sorry about that!
What does this PR do?
Fixes a device mismatch error in Whisper. For instance, when the torch default device is set to CUDA via `torch.set_default_device`, Whisper fails with "stft input and window must be on the same device but got self on cpu and window on cuda:0". This is because torch factory functions (like `torch.hann_window`) use the torch default device, which can lead to a device mismatch with input tensors that live on the CPU.
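The failure mode and the shape of the fix can be sketched as follows. This is not the PR's actual diff; it is a minimal illustration, with a hypothetical helper `stft_features`, of the pattern involved: create the window explicitly on the input's device rather than relying on the global default.

```python
import torch

def stft_features(waveform: torch.Tensor, n_fft: int = 400, hop: int = 160) -> torch.Tensor:
    # Without device=..., torch.hann_window allocates on the *default* device.
    # If torch.set_default_device("cuda") is active while the waveform is on
    # CPU, torch.stft raises the "same device" error quoted above. Pinning the
    # window to the waveform's device sidesteps the global default entirely.
    window = torch.hann_window(n_fft, device=waveform.device)
    return torch.stft(waveform, n_fft, hop_length=hop, window=window, return_complex=True)

features = stft_features(torch.randn(16000))
print(features.shape)  # (n_fft // 2 + 1) frequency bins x time frames
```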