
Add FP32 support for routing_score dtype #4352


Closed

jianyuh wants to merge 1 commit from the export-D76679848 branch

Conversation

jianyuh (Member) commented Jun 15, 2025

Summary:
X-link: https://github.com/facebookresearch/FBGEMM/pull/1420

This diff adds support for both bfloat16 and float data types for the routing_score tensor in the index_shuffling_torch function. Previously, the function only supported bfloat16.

  • Updated the type check to accept both bfloat16 and float data types
  • Simplified the kernel selection using the DISPATCH_CASE_FLOATING_TYPES macro

Differential Revision: D76679848
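For illustration, here is a minimal C++ sketch of the kind of change described above. It uses PyTorch's ATen dispatch macros (AT_DISPATCH_SWITCH / AT_DISPATCH_CASE) as a stand-in for FBGEMM's DISPATCH_CASE_FLOATING_TYPES macro; the real function signature differs, and launch_index_shuffling is a hypothetical placeholder for the actual kernel launcher:

```cpp
#include <ATen/ATen.h>
#include <ATen/Dispatch.h>

// Hypothetical stand-in for the real FBGEMM kernel launcher.
template <typename scalar_t>
void launch_index_shuffling(const at::Tensor& routing_score) {
  // ... launch the kernel specialized for scalar_t ...
}

void index_shuffling_torch(const at::Tensor& routing_score) {
  // Type check widened from bfloat16-only to bfloat16 + float32.
  TORCH_CHECK(
      routing_score.scalar_type() == at::kBFloat16 ||
          routing_score.scalar_type() == at::kFloat,
      "routing_score must be bfloat16 or float32, got ",
      routing_score.scalar_type());

  // Kernel selection via composable dispatch cases instead of a
  // hand-written if/else on the dtype; each case defines scalar_t
  // for the body of its lambda.
  AT_DISPATCH_SWITCH(
      routing_score.scalar_type(),
      "index_shuffling",
      AT_DISPATCH_CASE(at::kBFloat16, [&] {
        launch_index_shuffling<scalar_t>(routing_score);
      })
      AT_DISPATCH_CASE(at::kFloat, [&] {
        launch_index_shuffling<scalar_t>(routing_score);
      }));
}
```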


netlify bot commented Jun 15, 2025

Deploy Preview for pytorch-fbgemm-docs ready!

🔨 Latest commit: 34e7cc4
🔍 Latest deploy log: https://app.netlify.com/projects/pytorch-fbgemm-docs/deploys/684f3a87538d73000886a4c6
😎 Deploy Preview: https://deploy-preview-4352--pytorch-fbgemm-docs.netlify.app

facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D76679848


jianyuh added a commit to jianyuh/FBGEMM that referenced this pull request Jun 15, 2025
jianyuh force-pushed the export-D76679848 branch from 72a38ec to b9d3802 on June 15, 2025 at 08:31

jianyuh force-pushed the export-D76679848 branch from b9d3802 to 7e88ea4 on June 15, 2025 at 08:35
jianyuh added a commit to jianyuh/FBGEMM that referenced this pull request Jun 15, 2025

jianyuh added a commit to jianyuh/FBGEMM that referenced this pull request Jun 15, 2025
jianyuh force-pushed the export-D76679848 branch from 7e88ea4 to 8fc603a on June 15, 2025 at 08:40

jianyuh added a commit to jianyuh/FBGEMM that referenced this pull request Jun 15, 2025

Reviewed By: q10

Differential Revision: D76679848
jianyuh force-pushed the export-D76679848 branch from 8fc603a to 15a62cb on June 15, 2025 at 21:22

facebook-github-bot (Contributor)

This pull request has been merged in 8a01350.
