Add FP32 support for routing_score dtype #4352
Conversation
✅ Deploy Preview for pytorch-fbgemm-docs ready!
This pull request was exported from Phabricator. Differential Revision: D76679848
Summary:
Pull Request resolved: pytorch#4352
X-link: facebookresearch/FBGEMM#1420

This diff adds support for both bfloat16 and float data types for the routing_score tensor in the index_shuffling_torch function. Previously, the function only supported bfloat16.

* Updated the type check to accept both bfloat16 and float data types
* Simplified the kernel selection using the DISPATCH_CASE_FLOATING_TYPES macro

Differential Revision: D76679848
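The check-and-dispatch pattern described in the summary might look roughly like the sketch below. This is not the actual FBGEMM source: the kernel name, the function signature, and the use of the standard ATen `AT_DISPATCH_SWITCH`/`AT_DISPATCH_CASE` macros (in place of FBGEMM's own DISPATCH_CASE_FLOATING_TYPES macro) are assumptions made for illustration.

```cpp
#include <ATen/ATen.h>
#include <ATen/Dispatch.h>

// Placeholder for the dtype-templated kernel launcher; the real arguments
// and launch logic live in the FBGEMM CUDA sources.
template <typename scalar_t>
void index_shuffling_kernel(const at::Tensor& routing_score /* , ... */) {}

void index_shuffling_torch(const at::Tensor& routing_score /* , ... */) {
  // Previously the check only allowed bfloat16; now fp32 is accepted too.
  TORCH_CHECK(
      routing_score.scalar_type() == at::kBFloat16 ||
          routing_score.scalar_type() == at::kFloat,
      "routing_score must be bfloat16 or float32");

  // Select the kernel instantiation from the routing_score dtype.
  AT_DISPATCH_SWITCH(
      routing_score.scalar_type(),
      "index_shuffling",
      AT_DISPATCH_CASE(at::ScalarType::BFloat16, [&] {
        index_shuffling_kernel<scalar_t>(routing_score);
      })
      AT_DISPATCH_CASE(at::ScalarType::Float, [&] {
        index_shuffling_kernel<scalar_t>(routing_score);
      }));
}
```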
72a38ec to b9d3802 Compare
b9d3802 to 7e88ea4 Compare
7e88ea4 to 8fc603a Compare
Summary:
Pull Request resolved: pytorch#4352
X-link: facebookresearch/FBGEMM#1420

This diff adds support for both bfloat16 and float data types for the routing_score tensor in the index_shuffling_torch function. Previously, the function only supported bfloat16.

* Updated the type check to accept both bfloat16 and float data types
* Simplified the kernel selection using the DISPATCH_CASE_FLOATING_TYPES macro

Reviewed By: q10

Differential Revision: D76679848
8fc603a to 15a62cb Compare
15a62cb to 34e7cc4 Compare
This pull request has been merged in 8a01350.
Summary:
X-link: https://github.com/facebookresearch/FBGEMM/pull/1420
This diff adds support for both bfloat16 and float data types for the routing_score tensor in the index_shuffling_torch function. Previously, the function only supported bfloat16.
Differential Revision: D76679848
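A hypothetical caller-side view of the change (not taken from the PR's tests): with this diff, a float32 routing_score is accepted alongside the previously supported bfloat16 one. The tensor shapes and the commented-out call to index_shuffling_torch are illustrative assumptions, not the operator's real signature.

```cpp
#include <ATen/ATen.h>

void routing_score_dtype_example() {
  // e.g. 128 tokens routed over 16 experts.
  const auto opts = at::TensorOptions().device(at::kCUDA);
  at::Tensor score_bf16 = at::randn({128, 16}, opts.dtype(at::kBFloat16));
  at::Tensor score_fp32 = at::randn({128, 16}, opts.dtype(at::kFloat));

  // Both dtypes now pass the check; before this diff, the fp32 tensor
  // would have been rejected by the bfloat16-only TORCH_CHECK.
  // index_shuffling_torch(score_bf16, /* ... other args ... */);
  // index_shuffling_torch(score_fp32, /* ... other args ... */);
}
```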