moe quant with dedicated kernels [wip] #2325
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2325
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 1 Unrelated Failure as of commit 186708f with merge base f0f1f6c.
NEW FAILURES - The following jobs have failed:
FLAKY - The following job failed but was likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
if (
    isinstance(self.w1, torchao.quantization.linear_activation_quantized_tensor.LinearActivationQuantizedTensor) and
    isinstance(self.w1.original_weight_tensor._layout, torchao.dtypes.floatx.float8_layout.Float8Layout)
):
    final_out = fp8_dq_moe_op(x, self.w1, self.w2, self.w3, expert_indices, scores)
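For readability, here is the same guard with the dotted paths hoisted into imports. This is only a sketch of the surrounding method body: self.w1/w2/w3, x, expert_indices, and scores come from the MoE module's forward, and the non-fp8 fallback branch is not shown in the diff.

from torchao.quantization.linear_activation_quantized_tensor import (
    LinearActivationQuantizedTensor,
)
from torchao.dtypes.floatx.float8_layout import Float8Layout

# Dispatch to the fused fp8 op only when the expert weights are
# dynamically-quantized fp8 tensors with a Float8Layout.
if isinstance(self.w1, LinearActivationQuantizedTensor) and isinstance(
    self.w1.original_weight_tensor._layout, Float8Layout
):
    final_out = fp8_dq_moe_op(x, self.w1, self.w2, self.w3, expert_indices, scores)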
Is it possible to call this op without modifying the source model?
Is there a grouped_mm for bfloat16 that we can override and dispatch to scaled_grouped_mm?
Yes, there is _grouped_mm in PyTorch core that does that.
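A minimal sketch of that bf16 path, assuming the private torch._grouped_mm prototype op (shapes here are illustrative, the signature may change between releases, and the op requires a recent CUDA GPU):

import torch

# Tokens are pre-sorted by expert; offs holds the cumulative (int32) end row
# of each expert's group. Weights are stacked per expert as (E, K, N).
num_experts, k, n = 4, 64, 128
tokens_per_expert = torch.tensor([3, 5, 2, 6], device="cuda")
x = torch.randn(int(tokens_per_expert.sum()), k, device="cuda", dtype=torch.bfloat16)
w = torch.randn(num_experts, k, n, device="cuda", dtype=torch.bfloat16)
offs = torch.cumsum(tokens_per_expert, dim=0).to(torch.int32)

# One grouped GEMM over all experts instead of a Python loop of matmuls.
out = torch._grouped_mm(x, w, offs=offs)  # -> (sum(tokens_per_expert), n)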
That's a better integration point, but I'm not sure I'll be able to complete it before I head out on leave.
Also, I'd probably make that a separate PR instead of combining everything into one, since it would be a significant change to the base moe integration.
PR that should remove the need for padding groups is here: pytorch/pytorch#155466.
alignment = 16
if _torchtitan_available:
    num_ranks = 1
    padded_indices, m_offsets = torchtitan_pad(num_tokens_per_expert, alignment, num_ranks)
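For intuition, a hedged sketch of what such alignment padding computes (illustrative only, not torchtitan's helper, which also returns gather indices and accounts for multiple ranks):

import torch

def padded_group_offsets(num_tokens_per_expert: torch.Tensor, alignment: int) -> torch.Tensor:
    # Round each expert's token count up to a multiple of `alignment` so every
    # group size satisfies the grouped-GEMM kernel's alignment requirement.
    padded = ((num_tokens_per_expert + alignment - 1) // alignment) * alignment
    return torch.cumsum(padded, dim=0).to(torch.int32)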
Heads up: soon we won't need padding, once #155466 lands.
input_fp8[valid_values] = q_input_data[token_shuffle]
input_scale[valid_values] = q_input_scale[token_shuffle] if q_input_scale.numel() > 1 else q_input_scale

if use_fbgemm_kernel:
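For context, a hedged sketch of where the padded buffers could flow on the non-fbgemm branch. Names beyond the diff (w1_fp8, w1_scale) are assumptions, and _scaled_grouped_mm is a private PyTorch op whose signature may differ across versions.

import torch

out = torch._scaled_grouped_mm(
    input_fp8,                  # (padded_tokens, K) activations in float8_e4m3fn
    w1_fp8.transpose(-2, -1),   # stacked expert weights in fp8 (name assumed)
    input_scale,                # per-row activation scales
    w1_scale,                   # per-row weight scales (name assumed)
    offs=m_offsets,             # int32 group end offsets from the padding step
    out_dtype=torch.bfloat16,
)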
We have fbgemm-like kernels available via autotuning in torch.compile thanks to #155138; do you think we still need a separate fbgemm path?
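If the autotuned path pans out, the integration could be as simple as compiling with autotuning enabled; a toy sketch (the Linear is only a stand-in for the MoE block):

import torch
import torch.nn as nn

moe = nn.Linear(64, 64, device="cuda", dtype=torch.bfloat16)  # stand-in module
x = torch.randn(8, 64, device="cuda", dtype=torch.bfloat16)

# mode="max-autotune" lets inductor pick tuned GEMM templates, which is where
# the fbgemm-like grouped kernels would come from.
compiled = torch.compile(moe, mode="max-autotune")
out = compiled(x)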
Summary: extending the torchao moe support to have more performant kernels. This PR supports both scaled_grouped_mm and fbgemm's grouped_gemm_fp8_rowwise, though it seems like grouped_gemm_fp8_rowwise is a bit buggy (need to make a clear repro).
todo: run benchmarks, debug fbgemm kernel, unit tests
Summary:
current status:
Both kernels are working. Padding is a significant issue with compile for the pytorch kernel, while the fbgemm kernel doesn't seem compatible with compile at all. Hopefully this can be handled using the changes mentioned below to avoid the data-dependent padding.
todo:
test the no-padding compilable pytorch kernel
change base integration to grouped_gemm (another PR)