[1/N][refactor] torchair fused_moe refactor #2438
Conversation
Code Review
This pull request refactors the fused_moe logic for torchair by moving it into a new, dedicated file, vllm_ascend/torchair/ops/torchair_fused_moe.py. This is a good step towards better code organization. The changes in torchair_deepseek_v2.py correctly adopt the new refactored module. However, I've identified a critical issue in the newly created torchair_fused_moe.py file which appears to be a copy-paste error from the refactoring. It will cause a NameError at runtime and needs to be addressed.
```python
if envs_ascend.VLLM_ASCEND_ENABLE_MOE_ALL2ALL_SEQ and isinstance(
        self.quant_method, AscendUnquantizedFusedMoEMethod):
```
AscendUnquantizedFusedMoEMethod is not defined or imported in this file, which will lead to a NameError at runtime. Based on the refactoring, it seems the check should be against TorchairAscendUnquantizedFusedMoEMethod, which is defined in this file.
Suggested change:
```diff
-if envs_ascend.VLLM_ASCEND_ENABLE_MOE_ALL2ALL_SEQ and isinstance(
-        self.quant_method, AscendUnquantizedFusedMoEMethod):
+if envs_ascend.VLLM_ASCEND_ENABLE_MOE_ALL2ALL_SEQ and isinstance(
+        self.quant_method, TorchairAscendUnquantizedFusedMoEMethod):
```
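For illustration, a minimal, self-contained sketch of the failure mode and the fix: the host class TorchairAscendFusedMoE, its attributes, and the uses_all2all_seq helper are hypothetical stand-ins; only the two quant-method class names come from the review above.

```python
# Hypothetical sketch; only the quant-method class names are taken from
# the review, everything else is a stand-in for illustration.

class TorchairAscendUnquantizedFusedMoEMethod:
    """Torchair-specific unquantized fused-MoE method, defined in
    torchair_fused_moe.py itself."""


class TorchairAscendFusedMoE:
    """Hypothetical host class for the all2all-seq check."""

    def __init__(self, quant_method, enable_all2all_seq: bool):
        self.quant_method = quant_method
        self.enable_all2all_seq = enable_all2all_seq

    def uses_all2all_seq(self) -> bool:
        # isinstance(self.quant_method, AscendUnquantizedFusedMoEMethod)
        # would raise NameError here, because that name is neither defined
        # nor imported in the refactored file; the check must reference the
        # class that actually lives in this module.
        return self.enable_all2all_seq and isinstance(
            self.quant_method, TorchairAscendUnquantizedFusedMoEMethod)


moe = TorchairAscendFusedMoE(TorchairAscendUnquantizedFusedMoEMethod(),
                             enable_all2all_seq=True)
assert moe.uses_all2all_seq()
```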
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Force-pushed from 9ece025 to 986ed47
Force-pushed from 06cef8b to 7109ffb
Signed-off-by: hust17yixuan <303660421@qq.com>
Force-pushed from 7109ffb to e97ad15
Codecov Report
❌ Your patch check has failed because the patch coverage (65.55%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files:
```
@@            Coverage Diff             @@
##             main    #2438      +/-   ##
==========================================
- Coverage   78.04%   77.54%   -0.50%
==========================================
  Files         132      134       +2
  Lines       17557    18270     +713
==========================================
+ Hits        13702    14168     +466
- Misses       3855     4102     +247
```
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
What this PR does / why we need it?
Move the torchair-related fused_moe code into torchair_fused_moe to make the code clearer. As a next step, we'll remove all torchair-related code outside of torchair_fused_moe.
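To make "moving the torchair fused_moe code" concrete, here is a rough caller-side sketch: the new module path is taken from this PR, but the old import path and the exact exported symbols are assumptions, not the PR's actual API.

```python
# Hypothetical sketch of the caller-side change in torchair_deepseek_v2.py.
# The new module path appears in this PR; the old path and the imported
# symbol are assumptions for illustration only.

# Before: torchair code imported the fused-MoE pieces from the shared ops
# package (path assumed):
#   from vllm_ascend.ops.fused_moe import AscendUnquantizedFusedMoEMethod

# After: torchair-specific code imports from the dedicated module:
try:
    from vllm_ascend.torchair.ops.torchair_fused_moe import (
        TorchairAscendUnquantizedFusedMoEMethod)
except ImportError:
    # vllm_ascend is only installable on Ascend deployments; the import
    # path itself is the point of this sketch.
    TorchairAscendUnquantizedFusedMoEMethod = None
```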
Does this PR introduce any user-facing change?
No
How was this patch tested?
vLLM version: v0.10.0
vLLM main: vllm-project/vllm@08d5f71