Feature/sm100 low latency nvfp4 kernels #1214
Conversation
Summary of Changes
Hello @azhurkevich, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances FlashInfer's Mixture-of-Experts (MoE) capabilities by introducing a new, highly optimized backend for Blackwell GPUs based on NVIDIA's TensorRT-LLM Gen framework. It brings support for advanced FP4/FP8 quantization schemes, flexible routing algorithms, and specialized weight transformations, aiming to deliver speed-of-light low-latency kernels for large language models.
Highlights
- New TRT-LLM Gen MoE Backend: Introduces a new fused Mixture-of-Experts (MoE) kernel implementation specifically for Blackwell (SM100) GPUs, leveraging NVIDIA's TensorRT-LLM Gen framework for highly optimized low-latency operations.
- Advanced Quantization Support: Enables FP4 and FP8 block-scaled quantization for MoE layers, including support for different scale factor layouts (swizzled, linear) and quantization types (FP16_TO_FP4, FP8_TO_FP4, FP16_TO_MXFP8).
- Flexible Routing Methods: Integrates multiple routing algorithms, such as DeepSeekV3, Renormalize (TopK -> Softmax), and RenormalizeNaive (Softmax -> TopK -> Renormalize), providing diverse options for MoE expert selection (see the sketch after this list).
- Optimized Weight Transformations: Implements specialized preprocessing steps for weight matrices, including row reordering for gated activations and shuffling for transposed MMA outputs, to maximize performance on Blackwell architecture.
- Code Refactoring: Renames the existing FusedMoeRunner class to FusedMoeCutlassRunner and updates its references, improving code clarity and distinguishing it from the new TRT-LLM Gen backend.
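To make the routing bullet concrete, below is a minimal standalone C++ sketch (an illustration under stated assumptions, not FlashInfer or TRT-LLM Gen code) of the two renormalization orders: Renormalize takes the TopK of the raw logits and then applies softmax over the selected experts, while RenormalizeNaive applies softmax over all experts, takes the TopK, and renormalizes the selected probabilities. The 4-expert logits and k = 2 are made-up values; for the same selected set the two orders produce identical weights in exact arithmetic, so the practical difference is presumably in how and where the backend fuses these steps numerically.

```cpp
// Illustrative sketch only: contrasts the two routing orders named above.
// Not FlashInfer/TRT-LLM Gen code; logits and k are made up.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

// Softmax over a vector of logits.
static std::vector<float> softmax(const std::vector<float>& x) {
  float m = *std::max_element(x.begin(), x.end());
  std::vector<float> y(x.size());
  float sum = 0.f;
  for (size_t i = 0; i < x.size(); ++i) { y[i] = std::exp(x[i] - m); sum += y[i]; }
  for (float& v : y) v /= sum;
  return y;
}

// Indices of the k largest entries, in descending order.
static std::vector<int> topk(const std::vector<float>& x, int k) {
  std::vector<int> idx(x.size());
  std::iota(idx.begin(), idx.end(), 0);
  std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                    [&](int a, int b) { return x[a] > x[b]; });
  idx.resize(k);
  return idx;
}

int main() {
  const std::vector<float> logits = {2.0f, 0.5f, 1.5f, -1.0f};  // one token, 4 experts
  const int k = 2;

  // Renormalize: TopK -> Softmax (softmax restricted to the selected experts).
  std::vector<int> sel = topk(logits, k);
  std::vector<float> selLogits;
  for (int i : sel) selLogits.push_back(logits[i]);
  std::vector<float> wRenorm = softmax(selLogits);

  // RenormalizeNaive: Softmax -> TopK -> Renormalize (rescale selected probs to sum to 1).
  std::vector<float> probs = softmax(logits);
  std::vector<int> selNaive = topk(probs, k);
  float s = 0.f;
  for (int i : selNaive) s += probs[i];

  for (int j = 0; j < k; ++j)
    std::printf("Renormalize:      expert %d weight %.4f\n", sel[j], wRenorm[j]);
  for (int j = 0; j < k; ++j)
    std::printf("RenormalizeNaive: expert %d weight %.4f\n", selNaive[j], probs[selNaive[j]] / s);
  return 0;
}
```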
Code Review
This pull request introduces enhancements for FP4 quantization on the SM100 architecture, including a new trtllmgen backend for fused MoE operations. The changes involve refactoring existing quantization kernels for better generality and adding new functionality and APIs. The new tests are comprehensive and cover the new features well. I've provided a few comments to improve code clarity and robustness.
at::Tensor num_non_exiting_ctas =
    at::empty({1}, at::TensorOptions().device(routing_logits.device()).dtype(at::ScalarType::Int));

// FIXME: check shape
CHECK_CONTIGUOUS(self);
CHECK_INPUT_TYPE(globalScale, c10::ScalarType::Float);
TORCH_CHECK(sfVecSize == 16, "sfVecSize can only be 16");
if (sfUseUE8M0) {
  TORCH_CHECK(globalScale.has_value(), "globalScale is required for UE8M0");
The sfVecSize argument is checked to be 16 or 32, but it's not used to select the kernel implementation. The selection is based on sfUseUE8M0, which implicitly determines the vector size (16 for false, 32 for true). This can be confusing and error-prone. To make the code more robust and self-documenting, add a check to enforce this relationship. For example:

TORCH_CHECK((sfUseUE8M0 && sfVecSize == 32) || (!sfUseUE8M0 && sfVecSize == 16),
            "sfVecSize must be 32 for mxfp4 (sfUseUE8M0=true) and 16 for nvfp4 (sfUseUE8M0=false)");

This will ensure that the arguments are consistent with the kernel's expectations.
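For readers less familiar with the two FP4 flavors, the sketch below illustrates why the block size and the scale format travel together. It is an illustration under assumptions, not the kernel's actual logic: it assumes nvfp4 uses 16-element blocks with one FP8 E4M3 scale each, mxfp4 uses 32-element blocks with one power-of-two UE8M0 scale each, that FP4 (E2M1) tops out at a magnitude of 6, and that the per-block scale is simply max|x| / 6; the names blockScales and kFp4MaxAbs are invented for the example.

```cpp
// Illustrative sketch only: how sfVecSize and sfUseUE8M0 are coupled in
// block-scaled FP4 quantization. Block sizes, the max|x|/6 scale rule, and
// all names here are assumptions for illustration, not the real kernel.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

constexpr float kFp4MaxAbs = 6.0f;  // assumed largest magnitude representable in FP4 (E2M1)

// One scale per block: blockSize = 16 for nvfp4 (E4M3 scales),
// blockSize = 32 for mxfp4 (UE8M0, i.e. power-of-two, scales).
std::vector<float> blockScales(const std::vector<float>& x, size_t blockSize, bool useUE8M0) {
  std::vector<float> scales;
  for (size_t start = 0; start < x.size(); start += blockSize) {
    float maxAbs = 0.f;
    for (size_t i = start; i < std::min(x.size(), start + blockSize); ++i)
      maxAbs = std::max(maxAbs, std::fabs(x[i]));
    float scale = maxAbs / kFp4MaxAbs;
    if (useUE8M0)  // mxfp4: round the scale up to the nearest power of two
      scale = std::exp2(std::ceil(std::log2(std::max(scale, 1e-38f))));
    scales.push_back(scale);
  }
  return scales;
}

int main() {
  std::vector<float> x(64, 3.0f);
  auto nv = blockScales(x, /*blockSize=*/16, /*useUE8M0=*/false);  // 4 scales for 64 values
  auto mx = blockScales(x, /*blockSize=*/32, /*useUE8M0=*/true);   // 2 scales for 64 values
  std::printf("nvfp4 blocks: %zu, mxfp4 blocks: %zu\n", nv.size(), mx.size());
  return 0;
}
```

With that coupling in mind, the suggested TORCH_CHECK simply rejects argument combinations the kernel could never honor.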
TORCH_CHECK(false, "Other layout not implemented yet.");
}
This TORCH_CHECK(false, ...) will always fail if an unsupported layout is provided. It's better to provide a more informative error message by checking the layout value directly; this makes debugging easier for users. Suggested change:

}
else
{
  TORCH_CHECK(false, "Unsupported QuantizationSFLayout. Only SWIZZLED and LINEAR are supported.");
}
(cherry picked from commit efc0d82)
a65a04f to 7d1d265
Hi, may I know whether this will be merged today? Thanks!
That's the plan, yes.
Looking forward to it!
It's gonna be a long night :)
📌 Description
Enable Blackwell with speed-of-light low-latency kernels. Collaboration with @nekorobov. Supporting: @aleozlx, Kaixi Hou, Dongfeng Yu.
🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- I have installed pre-commit by running pip install pre-commit (or used your preferred method).
- I have installed the hooks with pre-commit install.
- I have run the hooks before submitting the PR with pre-commit run --all-files and fixed any reported issues.

🧪 Tests
- Tests have been added or updated as needed and are passing (unittest, etc.).

Reviewer Notes