[AMD][Kernel][BugFix] fix test_rocm_compressed_tensors_w8a8 for rocm #19509
```diff
@@ -144,10 +144,10 @@ def triton_scaled_mm(input: torch.Tensor,
     scale_b = scale_b.reshape(-1, 1) if scale_b.dim() <= 1 else scale_b

     assert scale_a.dtype == scale_b.dtype and scale_a.is_floating_point()
-    assert scale_a.shape == torch.Size([1, 1]) or scale_a.shape == torch.Size(
-        [M, 1])
-    assert scale_b.shape == torch.Size([1, 1]) or scale_b.shape == torch.Size(
-        [N, 1])
+    assert scale_a.shape[1] == 1 and (scale_a.shape[0] == 1
+                                      or scale_a.shape[0] == M)
+    assert scale_b.shape[1] == 1 and (scale_b.shape[0] == 1
+                                      or scale_b.shape[0] == N)
```
**Comment on lines +147 to +150**

> Modifying the assertions to avoid direct comparison with `torch.Size` addresses the issue. The new assertions are logically equivalent to the old ones and ensure compatibility. The way they are split across lines maintains readability.
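The equivalence the reviewer describes can be sketched with a small, hypothetical helper (not part of the PR) that mirrors the new element-wise checks. Plain tuples stand in for tensor shapes here; checking individual elements accepts any shape-like sequence, whereas comparing against a constructed `torch.Size` only matches that exact type and value:

```python
# Hypothetical helper mirroring the new assertion logic from the diff:
# a scale's shape must be (1, 1) for a per-tensor scale, or (dim, 1)
# for a per-row scale. Tuples stand in for torch shapes in this sketch.
def scale_shape_ok(shape: tuple, dim: int) -> bool:
    return shape[1] == 1 and (shape[0] == 1 or shape[0] == dim)

M = 128
print(scale_shape_ok((1, 1), M))    # per-tensor scale -> True
print(scale_shape_ok((128, 1), M))  # per-row scale -> True
print(scale_shape_ok((64, 1), M))   # wrong row count -> False
```

Both the old and the new assertions accept exactly these two shapes; the new form just avoids constructing `torch.Size` objects for the comparison.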
```diff
     assert out_dtype.is_floating_point
     assert bias is None or bias.is_floating_point()
     assert is_weak_contiguous(input)
```
> Replacing the dynamic import using `importlib.import_module` with a direct `from ... import ...` statement is a good change. This directly addresses the stated `torch.compile` incompatibility with `importlib` and generally improves code clarity by making the dependency explicit. The use of `# noqa` is appropriate here to suppress linter warnings for an import that is not at the top level of the module, given its conditional nature within the `if` block.
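The two import styles the reviewer contrasts can be illustrated with a minimal sketch. The `math` module is used as a stand-in, since the actual module imported by the PR is not shown in this excerpt; the point is that the string-based dynamic form is opaque to static analysis and graph tracers such as `torch.compile`, while the direct form is explicit:

```python
import importlib

# Dynamic import: the module is resolved at runtime from a string name,
# which static analysis and graph tracers cannot see through.
# (math is a stand-in; the PR's actual module is not shown above.)
mod = importlib.import_module("math")
print(mod.sqrt(16.0))  # 4.0

# Direct import: the dependency is explicit and statically analyzable.
# The noqa comment suppresses the "import not at top of module" warning,
# as discussed in the review comment.
from math import sqrt  # noqa: E402
print(sqrt(16.0))  # 4.0
```

Both forms produce the same object at runtime; the difference matters only to tools that inspect the code without executing it.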