Add support for RDNA1 GPUs #3220
Conversation
Oh, this issue is a compiler optimization bug that seems to only happen with FP64:

```diff
diff --git a/include/ck/tensor_operation/gpu/grid/gridwise_elementwise_2d.hpp b/include/ck/tensor_operation/gpu/grid/gridwise_elementwise_2d.hpp
index 839a68a978..142f084a67 100644
--- a/include/ck/tensor_operation/gpu/grid/gridwise_elementwise_2d.hpp
+++ b/include/ck/tensor_operation/gpu/grid/gridwise_elementwise_2d.hpp
@@ -33,6 +33,16 @@
     const Block2TileMap block_2_tile_map,
     const ElementwiseOperation elementwise_op)
 {
+#if defined(__gfx101__)
+    // Workaround for gfx101x FP64 compiler bug: prevent incorrect optimization
+    // The compiler appears to misoptimize FP64 elementwise kernels without this barrier
+    if(threadIdx.x == 0 && blockIdx.x == 0)
+    {
+        // Use volatile pointer dereference to force memory dependency
+        auto ptr = p_in_global_tuple[Number<0>{}];
+        asm volatile("" : "+v"(ptr) :: "memory");
+    }
+#endif
     GridwiseElementwiseFunctor::Run(in_grid_desc_tuple,
                                     out_grid_desc_tuple,
                                     p_in_global_tuple,
```

This fixes the test. Unfortunately, I'm unable to create a minimal working example.
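For illustration, here is a minimal stand-alone HIP sketch of the same empty-inline-asm barrier idiom, not CK code; `opt_barrier` and `scale_f64` are hypothetical names introduced only for this example:

```cpp
// Stand-alone sketch of the barrier idiom from the diff above.
// opt_barrier and scale_f64 are hypothetical names, not CK symbols.
#include <hip/hip_runtime.h>

// Empty inline asm that claims to modify the pointer ("+v" keeps it in a
// VGPR) and clobbers memory, so the compiler must assume global memory
// may have changed and cannot reorder or fold the surrounding accesses.
template <typename T>
__device__ inline void opt_barrier(T*& ptr)
{
    asm volatile("" : "+v"(ptr) : : "memory");
}

__global__ void scale_f64(double* p, double s, int n)
{
#if defined(__gfx101__)
    // Mirror the workaround: a single thread forces the memory dependency.
    if(threadIdx.x == 0 && blockIdx.x == 0)
        opt_barrier(p);
#endif
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if(i < n)
        p[i] *= s;
}
```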
Thanks, I appreciate the effort to add support for gfx101x. Our main issue with gfx101x is that we don't have the hardware to keep running and testing on a regular basis; hence, no official support. But I would be open to merging this.
Thank you! I completely understand that gfx101x has no official support, so this patch was written so that it shouldn't affect other architectures.
OK, I've made sure this passes through the CI and made inquiries about the hardware. Sounds like we may have something we could use. So I think we can merge this. |
Thank you! |
* Allow compilation for RDNA1 (__gfx101__)

Signed-off-by: Gavin Zhao <git@gzgz.dev>

* More RDNA1 changes

Signed-off-by: Gavin Zhao <git@gzgz.dev>

* Even more RDNA1 changes

Signed-off-by: Gavin Zhao <git@gzgz.dev>

* cmake: skip build quantization for unsupported arches
* add gfx10-1-generic support as well
* add gfx1013 and complete gfx10-1-generic
* fix clang format
* enable DL kernels on gfx101x

---------

Signed-off-by: Gavin Zhao <git@gzgz.dev>
Co-authored-by: illsilin_amdeng <Illia.Silin@amd.com>
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
Proposed changes
Support compiling and running CK on RDNA1 GPUs (including the new `gfx10-1-generic` target). This just involves extending `#if defined(__gfx10x__)`-style guards to also include `defined(__gfx101__)`. Building `ckProfiler` previously failed with an "unable to find -ldevice_quantization" linker error, since device quantization instances seem to be incompatible/unavailable for RDNA1 GPUs, so the quantization build is skipped for these arches.

Fixes #2411.
Fixes #3185.
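As a minimal sketch of what such a guard extension might look like (hypothetical code, not the actual CK header; the `__gfx1013__` and `__gfx10_1_generic__` macro spellings are assumptions based on the commit list above):

```cpp
// Hypothetical sketch: derive an umbrella __gfx101__ macro from the
// per-target compiler macros, so each existing architecture guard only
// needs one extra clause.
#if defined(__gfx1010__) || defined(__gfx1011__) || defined(__gfx1012__) || \
    defined(__gfx1013__) || defined(__gfx10_1_generic__)
#define __gfx101__
#endif

// An existing RDNA guard can then be extended like this:
#if defined(__gfx103__) || defined(__gfx101__)
// wave32 (RDNA) code path
#else
// wave64 (GCN/CDNA) code path
#endif
```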
Test plan
Built CK on an AWS `g4ad.xlarge` instance (gfx1011) and ran the test suite: 109/110 tests reliably pass. `test_batchnorm_infer_rank_4` fails (<1% of values are wrong) due to a compiler bug documented at the end.

Checklist
Please put an `x` into the boxes that apply. You can also fill these out after creating the PR. If you're not sure, please don't hesitate to ask.

- [x] I have run `clang-format` on all changed files

Discussion
I don't understand the hardware capabilities of RDNA1 or what capabilities CK requires, so I hope the CK maintainers can check whether RDNA1 GPUs are indeed incompatible with device quantization, and whether there is a more robust solution to the linker error.
Running unmodified, test `test_batchnorm_infer_rank_4` fails with the output below:

Test output
However, upon applying this patch, which adds only debug output, the test succeeds no matter how many times I run it:

Logging patch
UPDATE: this is a compiler bug, see the comment above.