Fix tensor shapes for DeepEP and DeepGEMM assertions #19546
```diff
@@ -120,6 +120,8 @@ def apply(
         a2q = a2q.view(E, max_num_tokens, -1)
         a2q_scale = a2q_scale.view(E, max_num_tokens, -1)
 
+        output = output.view(E, max_num_tokens, -1)
+
         dg.m_grouped_gemm_fp8_fp8_bf16_nt_masked((a2q, a2q_scale),
                                                  (w2, w2_scale),
                                                  out=output,
```

Review comment on `output = output.view(E, max_num_tokens, -1)`:

> If the output shape is incorrect, it should be fixed in […]

Reply:

> Thanks @ptarasiewiczNV - this is the fix I have: #19515. In addition to resolving this, it fixes a few other issues as well. Can you please take a look and see if it works for you? Much appreciated 🙌 Thanks.

Reply:

> Thanks, this looks great. I will be able to test it tomorrow, but this is definitely the proper direction, so I will close this PR.
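For intuition about the added `output` view above, here is a minimal sketch with invented sizes (`E`, `max_num_tokens`, and `hidden` are illustrative, and the assumption that the workspace arrives as a flat 2-D buffer is mine, inferred from the fix):

```python
import torch

# Illustrative sizes only; the real values come from the MoE config.
E, max_num_tokens, hidden = 8, 64, 128

# Assume the output workspace arrives flattened as (E * max_num_tokens, hidden).
output = torch.empty(E * max_num_tokens, hidden, dtype=torch.bfloat16)

# The masked grouped GEMM asserts a 3-D (num_groups, m, n) layout (per the
# PR title), so the buffer is viewed as (E, max_num_tokens, hidden) first.
output = output.view(E, max_num_tokens, -1)
assert output.shape == (E, max_num_tokens, hidden)
```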
```diff
@@ -174,6 +174,11 @@ def finalize(self, output: torch.Tensor, fused_expert_output: torch.Tensor,
             # weights have already been applied.
             combine_topk_weights = torch.ones_like(topk_weights)
 
+        _, _, num_max_dispatch_tokens_per_rank, _, num_experts = self.handle
+        fused_expert_output = fused_expert_output.view(
+            num_experts // self.buffer.group_size,
+            self.buffer.group_size * num_max_dispatch_tokens_per_rank, -1)
+
         # TODO (varun) : Enable zero copy mode
         _, event, hook = self.buffer.low_latency_combine(
             fused_expert_output,
```

Review comment on lines +177 to +180:

> It would be helpful to add a comment explaining this reshape. For instance, mentioning that this view operation transforms […]
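A small numeric sketch of the view itself (sizes are invented; in the PR the values come from `self.handle` and `self.buffer.group_size`, and the "layout the combine expects" framing is my reading of the PR's goal of fixing DeepEP assertions):

```python
import torch

# Placeholder values; in the PR these come from self.handle and
# self.buffer.group_size.
num_experts = 16
group_size = 4
num_max_dispatch_tokens_per_rank = 32
hidden = 128

# One (max_tokens, hidden) block per global expert.
fused_expert_output = torch.randn(
    num_experts, num_max_dispatch_tokens_per_rank, hidden)

# Reinterpret the buffer as (num_experts // group_size,
# group_size * max_tokens, hidden): each local expert now owns one
# contiguous slab spanning all ranks' token blocks.
fused_expert_output = fused_expert_output.view(
    num_experts // group_size,
    group_size * num_max_dispatch_tokens_per_rank, -1)

assert fused_expert_output.shape == (4, 128, hidden)
```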
Review comment (also on the `output` view in `apply`):

> Consider adding a comment here to explain why this reshape is necessary. For example, clarifying that this view aligns the `output` tensor with the shape expected by the `dg.m_grouped_gemm_fp8_fp8_bf16_nt_masked` kernel would improve code clarity for future readers.
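If that suggestion were adopted, the annotated line might read something like the following (a sketch of possible wording, not the merged text):

```python
# dg.m_grouped_gemm_fp8_fp8_bf16_nt_masked asserts a 3-D
# (num_experts, max_num_tokens, hidden) output, so view the flat
# workspace buffer accordingly before the kernel call.
output = output.view(E, max_num_tokens, -1)
```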