Upgrade FlashInfer to v0.3.0 #24086
Conversation
Code Review
This pull request upgrades FlashInfer to v0.3.0, reflected consistently in both the Dockerfile and setup.py. This is a welcome update that should bring in new features and bug fixes.
My review raises two opportunities to clean up obsolete code and workarounds tied to earlier FlashInfer versions. Addressing these would make the upgrade more complete and could improve maintainability and, potentially, performance. Please see the detailed comments.
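For reference, the setup.py side of the bump amounts to updating the pinned requirement. A minimal sketch, assuming a flat install_requires list (vLLM's real setup.py assembles its requirement lists dynamically from requirements files, so this is illustrative only):

```python
# Illustrative only: vLLM's actual setup.py builds its requirement lists
# from requirements files, so this flat pin is a simplification.
from setuptools import setup

setup(
    name="vllm",
    install_requires=[
        "flashinfer-python==0.3.0",  # the version this PR upgrades to
    ],
)
```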
docker/Dockerfile (Outdated)
With this upgrade to FlashInfer v0.3.0, it's worth investigating whether the workarounds for the 'FlashInfer AOT wheel' issue are still necessary. The TODOs on lines 18 and 428 of this file relate to it. If the issue is resolved in v0.3.0, those sections could be cleaned up as part of this upgrade to simplify the Dockerfile.
setup.py (Outdated)
This upgrade to flashinfer-python==0.3.0 is great. The release notes for this version mention that dynamic tile size for MoE kernels is now enabled. This likely resolves the TODO on line 28 of vllm/model_executor/layers/quantization/utils/flashinfer_utils.py, which hardcodes tile_tokens_dim = 8 due to issues in a previous FlashInfer version. It would be beneficial to update that logic to take advantage of the new version's capabilities and potentially improve performance.
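As a rough illustration of what the dynamic logic could look like, the tile size might be derived from the expected per-expert token count rather than hardcoded to 8. This is a hypothetical sketch, not vLLM's or FlashInfer's actual implementation; the function name, the heuristic, and the [8, 64] clamp range are all assumptions that would need to be validated against the v0.3.0 kernels:

```python
import math

def calculate_tile_tokens_dim(num_tokens: int, top_k: int, num_experts: int) -> int:
    # Hypothetical heuristic: size tiles to the average number of tokens
    # routed to each expert, rounded up to the next power of two.
    tokens_per_expert = max(1, (num_tokens * top_k) // num_experts)
    tile = 2 ** math.ceil(math.log2(tokens_per_expert))
    # Clamp to a range the MoE kernels are assumed to support.
    return min(max(tile, 8), 64)
```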
Force-pushed from 5b14725 to 4858c46 (head branch updated by a user without write access).
Mainly to get the GPT-OSS MXFP4 trtllm-gen MoE autotuning and the bug fix in flashinfer-ai/flashinfer#1573.
Signed-off-by: Po-Han Huang <pohanh@nvidia.com>
Force-pushed from 4858c46 to e7e16d8.
The two distributed tests are failing on main: https://app.hex.tech/533fe68e-dcd8-4a52-a101-aefba762f581/app/vLLM-CI-030kdEgDv6lSlh1UPYOkWP/latest
Signed-off-by: Po-Han Huang <pohanh@nvidia.com>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>
Purpose
Mainly to get the GPT-OSS MXFP4 trtllm-gen MoE autotuning and the bug fix in flashinfer-ai/flashinfer#1573.
Test Plan
Test Result
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.