[AMDGPU] Bugfix for scaled MFMA parsing FP literals #142493
Conversation
Fix a bug in parsing FP literals for scale values in scaled MFMA instructions. Because the operand order differs between the MCInst and the parsed operands, FP literal immediates for the scale values were not parsed correctly.
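The shape at issue is a scaled MFMA whose scale operands are floating-point literal immediates rather than registers or integer inline constants, e.g. (taken from the codegen tests added below):

v_mfma_scale_f32_16x16x128_f8f6f4 a[0:3], v[0:7], v[8:15], a[0:3], 1.0, 0.15915494 op_sel:[1,1,0] op_sel_hi:[1,0,0]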
@llvm/pr-subscribers-backend-amdgpu @llvm/pr-subscribers-mc
Author: Vigneshwar Jayakumar (VigneshwarJ)
Changes: Fix a bug in parsing FP literals for scale values in the scaled MFMA. Because the operand order differs between the MCInst and the parsed operands, FP literal immediates for the scale values were not parsed correctly.
Patch is 29.52 KiB, truncated to 20.00 KiB below; full version: https://github.com/llvm/llvm-project/pull/142493.diff
5 Files Affected:
diff --git a/llvm/lib/Target/AMDGPU/AsmParser/AMDGPUAsmParser.cpp b/llvm/lib/Target/AMDGPU/AsmParser/AMDGPUAsmParser.cpp
index 95932c4932327..4ba499f807732 100644
--- a/llvm/lib/Target/AMDGPU/AsmParser/AMDGPUAsmParser.cpp
+++ b/llvm/lib/Target/AMDGPU/AsmParser/AMDGPUAsmParser.cpp
@@ -8835,6 +8835,13 @@ void AMDGPUAsmParser::cvtScaledMFMA(MCInst &Inst,
for (unsigned E = Operands.size(); I != E; ++I) {
AMDGPUOperand &Op = static_cast<AMDGPUOperand &>(*Operands[I]);
+ // the order of operands in MCInst and parsed operands are different.
+ // Adding dummy blgp and cbsz operands at corresponding MCInst operand
+ // indeces for parsing scale values correctly.
+ if (I == 5) {
+ Inst.addOperand(MCOperand::createImm(0));
+ Inst.addOperand(MCOperand::createImm(0));
+ }
if (isRegOrImmWithInputMods(Desc, Inst.getNumOperands())) {
Op.addRegOrImmWithFPInputModsOperands(Inst, 2);
} else if (Op.isImmModifier()) {
@@ -8845,12 +8852,21 @@ void AMDGPUAsmParser::cvtScaledMFMA(MCInst &Inst,
}
// Insert CBSZ and BLGP operands for F8F6F4 variants
- int InsertPos = AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::cbsz);
- addOptionalImmOperand(Inst, Operands, OptionalIdx, AMDGPUOperand::ImmTyCBSZ,
- 0, InsertPos);
- InsertPos = AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::blgp);
- addOptionalImmOperand(Inst, Operands, OptionalIdx, AMDGPUOperand::ImmTyBLGP,
- 0, InsertPos);
+ auto CbszInsIdx = AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::cbsz);
+ auto CbszIdx = OptionalIdx.find(AMDGPUOperand::ImmTyCBSZ);
+ if (CbszIdx != OptionalIdx.end()) {
+ auto CbszVal =
+ static_cast<const AMDGPUOperand &>(*Operands[CbszIdx->second]).getImm();
+ Inst.getOperand(CbszInsIdx).setImm(CbszVal);
+ }
+
+ auto BlgpInsIdx = AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::blgp);
+ auto BlgpIdx = OptionalIdx.find(AMDGPUOperand::ImmTyBLGP);
+ if (BlgpIdx != OptionalIdx.end()) {
+ auto BlgpVal =
+ static_cast<const AMDGPUOperand &>(*Operands[BlgpIdx->second]).getImm();
+ Inst.getOperand(BlgpInsIdx).setImm(BlgpVal);
+ }
// Add dummy src_modifiers
Inst.addOperand(MCOperand::createImm(0));
diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.scale.f32.16x16x128.f8f6f4.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.scale.f32.16x16x128.f8f6f4.ll
index e027dda957a6d..04ee0bbd17673 100644
--- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.scale.f32.16x16x128.f8f6f4.ll
+++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.scale.f32.16x16x128.f8f6f4.ll
@@ -2024,6 +2024,205 @@ define amdgpu_kernel void @test_mfma_scale_f32_16x16x128_f8f6f4__vgprcd___scaleA
ret void
}
+define amdgpu_kernel void @test_mfma_scale_f32_16x16x128_f8f6f4__vgprcd___scaleA_kimm__scaleB__FP_literal(<8 x i32> %arg0, <8 x i32> %arg1, <4 x float> %arg2, ptr addrspace(1) %ptr) #0 {
+; SDAG-LABEL: test_mfma_scale_f32_16x16x128_f8f6f4__vgprcd___scaleA_kimm__scaleB__FP_literal:
+; SDAG: ; %bb.0:
+; SDAG-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x0
+; SDAG-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x40
+; SDAG-NEXT: s_movk_i32 s6, 0x41
+; SDAG-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x50
+; SDAG-NEXT: v_mov_b32_e32 v16, 0
+; SDAG-NEXT: s_waitcnt lgkmcnt(0)
+; SDAG-NEXT: v_mov_b32_e32 v0, s8
+; SDAG-NEXT: v_mov_b32_e32 v1, s9
+; SDAG-NEXT: v_mov_b32_e32 v2, s10
+; SDAG-NEXT: v_mov_b32_e32 v3, s11
+; SDAG-NEXT: v_mov_b32_e32 v4, s12
+; SDAG-NEXT: v_mov_b32_e32 v5, s13
+; SDAG-NEXT: v_mov_b32_e32 v6, s14
+; SDAG-NEXT: v_mov_b32_e32 v7, s15
+; SDAG-NEXT: v_accvgpr_write_b32 a0, s0
+; SDAG-NEXT: v_mov_b32_e32 v8, s16
+; SDAG-NEXT: v_mov_b32_e32 v9, s17
+; SDAG-NEXT: v_mov_b32_e32 v10, s18
+; SDAG-NEXT: v_mov_b32_e32 v11, s19
+; SDAG-NEXT: v_mov_b32_e32 v12, s20
+; SDAG-NEXT: v_mov_b32_e32 v13, s21
+; SDAG-NEXT: v_mov_b32_e32 v14, s22
+; SDAG-NEXT: v_mov_b32_e32 v15, s23
+; SDAG-NEXT: v_accvgpr_write_b32 a1, s1
+; SDAG-NEXT: v_accvgpr_write_b32 a2, s2
+; SDAG-NEXT: v_accvgpr_write_b32 a3, s3
+; SDAG-NEXT: s_nop 1
+; SDAG-NEXT: v_mfma_scale_f32_16x16x128_f8f6f4 a[0:3], v[0:7], v[8:15], a[0:3], s6, 1.0 op_sel:[1,1,0] op_sel_hi:[1,0,0]
+; SDAG-NEXT: s_nop 7
+; SDAG-NEXT: s_nop 3
+; SDAG-NEXT: global_store_dwordx4 v16, a[0:3], s[4:5]
+; SDAG-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_mfma_scale_f32_16x16x128_f8f6f4__vgprcd___scaleA_kimm__scaleB__FP_literal:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x0
+; GISEL-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x40
+; GISEL-NEXT: v_mov_b32_e32 v16, 0x41
+; GISEL-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x50
+; GISEL-NEXT: s_waitcnt lgkmcnt(0)
+; GISEL-NEXT: v_mov_b64_e32 v[0:1], s[8:9]
+; GISEL-NEXT: v_mov_b64_e32 v[2:3], s[10:11]
+; GISEL-NEXT: v_mov_b64_e32 v[4:5], s[12:13]
+; GISEL-NEXT: v_mov_b64_e32 v[6:7], s[14:15]
+; GISEL-NEXT: v_mov_b64_e32 v[8:9], s[16:17]
+; GISEL-NEXT: v_accvgpr_write_b32 a0, s0
+; GISEL-NEXT: v_mov_b64_e32 v[10:11], s[18:19]
+; GISEL-NEXT: v_mov_b64_e32 v[12:13], s[20:21]
+; GISEL-NEXT: v_mov_b64_e32 v[14:15], s[22:23]
+; GISEL-NEXT: v_accvgpr_write_b32 a1, s1
+; GISEL-NEXT: v_accvgpr_write_b32 a2, s2
+; GISEL-NEXT: v_accvgpr_write_b32 a3, s3
+; GISEL-NEXT: s_nop 1
+; GISEL-NEXT: v_mfma_scale_f32_16x16x128_f8f6f4 a[0:3], v[0:7], v[8:15], a[0:3], v16, 1.0 op_sel:[1,1,0] op_sel_hi:[1,0,0]
+; GISEL-NEXT: v_mov_b32_e32 v0, 0
+; GISEL-NEXT: s_nop 7
+; GISEL-NEXT: s_nop 2
+; GISEL-NEXT: global_store_dwordx4 v0, a[0:3], s[4:5]
+; GISEL-NEXT: s_endpgm
+ %result = call <4 x float> @llvm.amdgcn.mfma.scale.f32.16x16x128.f8f6f4.v8i32.v8i32(<8 x i32> %arg0, <8 x i32> %arg1, <4 x float> %arg2, i32 0, i32 0, i32 3, i32 65, i32 1, i32 1065353216)
+ store <4 x float> %result, ptr addrspace(1) %ptr, align 16
+ ret void
+}
+
+define amdgpu_kernel void @test_mfma_scale_f32_16x16x128_f8f6f4__vgprcd___scaleA_FP_literal__scaleB__inline_imm(<8 x i32> %arg0, <8 x i32> %arg1, <4 x float> %arg2, ptr addrspace(1) %ptr) #0 {
+; SDAG-LABEL: test_mfma_scale_f32_16x16x128_f8f6f4__vgprcd___scaleA_FP_literal__scaleB__inline_imm:
+; SDAG: ; %bb.0:
+; SDAG-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x0
+; SDAG-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x40
+; SDAG-NEXT: v_mov_b32_e32 v16, 0
+; SDAG-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x50
+; SDAG-NEXT: s_waitcnt lgkmcnt(0)
+; SDAG-NEXT: v_mov_b32_e32 v0, s8
+; SDAG-NEXT: v_mov_b32_e32 v1, s9
+; SDAG-NEXT: v_mov_b32_e32 v2, s10
+; SDAG-NEXT: v_mov_b32_e32 v3, s11
+; SDAG-NEXT: v_mov_b32_e32 v4, s12
+; SDAG-NEXT: v_mov_b32_e32 v5, s13
+; SDAG-NEXT: v_mov_b32_e32 v6, s14
+; SDAG-NEXT: v_mov_b32_e32 v7, s15
+; SDAG-NEXT: v_accvgpr_write_b32 a0, s0
+; SDAG-NEXT: v_mov_b32_e32 v8, s16
+; SDAG-NEXT: v_mov_b32_e32 v9, s17
+; SDAG-NEXT: v_mov_b32_e32 v10, s18
+; SDAG-NEXT: v_mov_b32_e32 v11, s19
+; SDAG-NEXT: v_mov_b32_e32 v12, s20
+; SDAG-NEXT: v_mov_b32_e32 v13, s21
+; SDAG-NEXT: v_mov_b32_e32 v14, s22
+; SDAG-NEXT: v_mov_b32_e32 v15, s23
+; SDAG-NEXT: v_accvgpr_write_b32 a1, s1
+; SDAG-NEXT: v_accvgpr_write_b32 a2, s2
+; SDAG-NEXT: v_accvgpr_write_b32 a3, s3
+; SDAG-NEXT: s_nop 1
+; SDAG-NEXT: v_mfma_scale_f32_16x16x128_f8f6f4 a[0:3], v[0:7], v[8:15], a[0:3], 1.0, -2 op_sel:[1,1,0] op_sel_hi:[1,0,0]
+; SDAG-NEXT: s_nop 7
+; SDAG-NEXT: s_nop 3
+; SDAG-NEXT: global_store_dwordx4 v16, a[0:3], s[4:5]
+; SDAG-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_mfma_scale_f32_16x16x128_f8f6f4__vgprcd___scaleA_FP_literal__scaleB__inline_imm:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x0
+; GISEL-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x40
+; GISEL-NEXT: s_waitcnt lgkmcnt(0)
+; GISEL-NEXT: v_mov_b64_e32 v[0:1], s[8:9]
+; GISEL-NEXT: v_mov_b64_e32 v[2:3], s[10:11]
+; GISEL-NEXT: v_mov_b64_e32 v[4:5], s[12:13]
+; GISEL-NEXT: v_mov_b64_e32 v[6:7], s[14:15]
+; GISEL-NEXT: v_mov_b64_e32 v[8:9], s[16:17]
+; GISEL-NEXT: v_accvgpr_write_b32 a0, s0
+; GISEL-NEXT: v_mov_b64_e32 v[10:11], s[18:19]
+; GISEL-NEXT: v_mov_b64_e32 v[12:13], s[20:21]
+; GISEL-NEXT: v_mov_b64_e32 v[14:15], s[22:23]
+; GISEL-NEXT: v_accvgpr_write_b32 a1, s1
+; GISEL-NEXT: v_accvgpr_write_b32 a2, s2
+; GISEL-NEXT: v_accvgpr_write_b32 a3, s3
+; GISEL-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x50
+; GISEL-NEXT: s_nop 0
+; GISEL-NEXT: v_mfma_scale_f32_16x16x128_f8f6f4 a[0:3], v[0:7], v[8:15], a[0:3], 1.0, -2 op_sel:[1,1,0] op_sel_hi:[1,0,0]
+; GISEL-NEXT: v_mov_b32_e32 v0, 0
+; GISEL-NEXT: s_waitcnt lgkmcnt(0)
+; GISEL-NEXT: s_nop 7
+; GISEL-NEXT: s_nop 1
+; GISEL-NEXT: global_store_dwordx4 v0, a[0:3], s[4:5]
+; GISEL-NEXT: s_endpgm
+ %result = call <4 x float> @llvm.amdgcn.mfma.scale.f32.16x16x128.f8f6f4.v8i32.v8i32(<8 x i32> %arg0, <8 x i32> %arg1, <4 x float> %arg2, i32 0, i32 0, i32 3, i32 1065353216, i32 1, i32 -2)
+ store <4 x float> %result, ptr addrspace(1) %ptr, align 16
+ ret void
+}
+
+define amdgpu_kernel void @test_mfma_scale_f32_16x16x128_f8f6f4__vgprcd___scaleA_FP_literal__scaleB__FP_literal(<8 x i32> %arg0, <8 x i32> %arg1, <4 x float> %arg2, ptr addrspace(1) %ptr) #0 {
+; SDAG-LABEL: test_mfma_scale_f32_16x16x128_f8f6f4__vgprcd___scaleA_FP_literal__scaleB__FP_literal:
+; SDAG: ; %bb.0:
+; SDAG-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x0
+; SDAG-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x40
+; SDAG-NEXT: v_mov_b32_e32 v16, 0
+; SDAG-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x50
+; SDAG-NEXT: s_waitcnt lgkmcnt(0)
+; SDAG-NEXT: v_mov_b32_e32 v0, s8
+; SDAG-NEXT: v_mov_b32_e32 v1, s9
+; SDAG-NEXT: v_mov_b32_e32 v2, s10
+; SDAG-NEXT: v_mov_b32_e32 v3, s11
+; SDAG-NEXT: v_mov_b32_e32 v4, s12
+; SDAG-NEXT: v_mov_b32_e32 v5, s13
+; SDAG-NEXT: v_mov_b32_e32 v6, s14
+; SDAG-NEXT: v_mov_b32_e32 v7, s15
+; SDAG-NEXT: v_accvgpr_write_b32 a0, s0
+; SDAG-NEXT: v_mov_b32_e32 v8, s16
+; SDAG-NEXT: v_mov_b32_e32 v9, s17
+; SDAG-NEXT: v_mov_b32_e32 v10, s18
+; SDAG-NEXT: v_mov_b32_e32 v11, s19
+; SDAG-NEXT: v_mov_b32_e32 v12, s20
+; SDAG-NEXT: v_mov_b32_e32 v13, s21
+; SDAG-NEXT: v_mov_b32_e32 v14, s22
+; SDAG-NEXT: v_mov_b32_e32 v15, s23
+; SDAG-NEXT: v_accvgpr_write_b32 a1, s1
+; SDAG-NEXT: v_accvgpr_write_b32 a2, s2
+; SDAG-NEXT: v_accvgpr_write_b32 a3, s3
+; SDAG-NEXT: s_nop 1
+; SDAG-NEXT: v_mfma_scale_f32_16x16x128_f8f6f4 a[0:3], v[0:7], v[8:15], a[0:3], 1.0, 0.15915494 op_sel:[1,1,0] op_sel_hi:[1,0,0]
+; SDAG-NEXT: s_nop 7
+; SDAG-NEXT: s_nop 3
+; SDAG-NEXT: global_store_dwordx4 v16, a[0:3], s[4:5]
+; SDAG-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_mfma_scale_f32_16x16x128_f8f6f4__vgprcd___scaleA_FP_literal__scaleB__FP_literal:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x0
+; GISEL-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x40
+; GISEL-NEXT: s_waitcnt lgkmcnt(0)
+; GISEL-NEXT: v_mov_b64_e32 v[0:1], s[8:9]
+; GISEL-NEXT: v_mov_b64_e32 v[2:3], s[10:11]
+; GISEL-NEXT: v_mov_b64_e32 v[4:5], s[12:13]
+; GISEL-NEXT: v_mov_b64_e32 v[6:7], s[14:15]
+; GISEL-NEXT: v_mov_b64_e32 v[8:9], s[16:17]
+; GISEL-NEXT: v_accvgpr_write_b32 a0, s0
+; GISEL-NEXT: v_mov_b64_e32 v[10:11], s[18:19]
+; GISEL-NEXT: v_mov_b64_e32 v[12:13], s[20:21]
+; GISEL-NEXT: v_mov_b64_e32 v[14:15], s[22:23]
+; GISEL-NEXT: v_accvgpr_write_b32 a1, s1
+; GISEL-NEXT: v_accvgpr_write_b32 a2, s2
+; GISEL-NEXT: v_accvgpr_write_b32 a3, s3
+; GISEL-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x50
+; GISEL-NEXT: s_nop 0
+; GISEL-NEXT: v_mfma_scale_f32_16x16x128_f8f6f4 a[0:3], v[0:7], v[8:15], a[0:3], 1.0, 0.15915494 op_sel:[1,1,0] op_sel_hi:[1,0,0]
+; GISEL-NEXT: v_mov_b32_e32 v0, 0
+; GISEL-NEXT: s_waitcnt lgkmcnt(0)
+; GISEL-NEXT: s_nop 7
+; GISEL-NEXT: s_nop 1
+; GISEL-NEXT: global_store_dwordx4 v0, a[0:3], s[4:5]
+; GISEL-NEXT: s_endpgm
+ %result = call <4 x float> @llvm.amdgcn.mfma.scale.f32.16x16x128.f8f6f4.v8i32.v8i32(<8 x i32> %arg0, <8 x i32> %arg1, <4 x float> %arg2, i32 0, i32 0, i32 3, i32 1065353216, i32 1, i32 1042479491)
+ store <4 x float> %result, ptr addrspace(1) %ptr, align 16
+ ret void
+}
+
; This should be optimized to avoid the scale
define <4 x float> @test_mfma_scale_f32_16x16x128_f8f6f4___constant_scale_0_0_a(<8 x i32> %arg0, <8 x i32> %arg1, <4 x float> %arg2, i32 %scale0, i32 %scale1) {
; GCN-LABEL: test_mfma_scale_f32_16x16x128_f8f6f4___constant_scale_0_0_a:
diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.scale.f32.32x32x64.f8f6f4.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.scale.f32.32x32x64.f8f6f4.ll
index 5574313f22a47..a7ea385185190 100644
--- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.scale.f32.32x32x64.f8f6f4.ll
+++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.scale.f32.32x32x64.f8f6f4.ll
@@ -4252,6 +4252,191 @@ define <16 x float> @test_mfma_scale_f32_32x32x64_f8f6f4_0_0__scaleA_kimm__scale
ret <16 x float> %result
}
+define <16 x float> @test_mfma_scale_f32_32x32x64_f8f6f4_0_0__scaleA_kimm__scaleB_FP_literal(<8 x i32> %arg0, <8 x i32> %arg1, <16 x float> %arg2, i32 %scale0, i32 %scale1) {
+; SDAG-LABEL: test_mfma_scale_f32_32x32x64_f8f6f4_0_0__scaleA_kimm__scaleB_FP_literal:
+; SDAG: ; %bb.0:
+; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; SDAG-NEXT: scratch_load_dword a15, off, s32
+; SDAG-NEXT: s_movk_i32 s0, 0x41
+; SDAG-NEXT: v_accvgpr_write_b32 a0, v16
+; SDAG-NEXT: v_accvgpr_write_b32 a1, v17
+; SDAG-NEXT: v_accvgpr_write_b32 a2, v18
+; SDAG-NEXT: v_accvgpr_write_b32 a3, v19
+; SDAG-NEXT: v_accvgpr_write_b32 a4, v20
+; SDAG-NEXT: v_accvgpr_write_b32 a5, v21
+; SDAG-NEXT: v_accvgpr_write_b32 a6, v22
+; SDAG-NEXT: v_accvgpr_write_b32 a7, v23
+; SDAG-NEXT: v_accvgpr_write_b32 a8, v24
+; SDAG-NEXT: v_accvgpr_write_b32 a9, v25
+; SDAG-NEXT: v_accvgpr_write_b32 a10, v26
+; SDAG-NEXT: v_accvgpr_write_b32 a11, v27
+; SDAG-NEXT: v_accvgpr_write_b32 a12, v28
+; SDAG-NEXT: v_accvgpr_write_b32 a13, v29
+; SDAG-NEXT: v_accvgpr_write_b32 a14, v30
+; SDAG-NEXT: s_waitcnt vmcnt(0)
+; SDAG-NEXT: s_nop 0
+; SDAG-NEXT: v_mfma_scale_f32_32x32x64_f8f6f4 a[0:15], v[0:7], v[8:15], a[0:15], s0, 1.0 op_sel_hi:[1,1,0]
+; SDAG-NEXT: s_nop 7
+; SDAG-NEXT: s_nop 7
+; SDAG-NEXT: s_nop 3
+; SDAG-NEXT: v_accvgpr_read_b32 v0, a0
+; SDAG-NEXT: v_accvgpr_read_b32 v1, a1
+; SDAG-NEXT: v_accvgpr_read_b32 v2, a2
+; SDAG-NEXT: v_accvgpr_read_b32 v3, a3
+; SDAG-NEXT: v_accvgpr_read_b32 v4, a4
+; SDAG-NEXT: v_accvgpr_read_b32 v5, a5
+; SDAG-NEXT: v_accvgpr_read_b32 v6, a6
+; SDAG-NEXT: v_accvgpr_read_b32 v7, a7
+; SDAG-NEXT: v_accvgpr_read_b32 v8, a8
+; SDAG-NEXT: v_accvgpr_read_b32 v9, a9
+; SDAG-NEXT: v_accvgpr_read_b32 v10, a10
+; SDAG-NEXT: v_accvgpr_read_b32 v11, a11
+; SDAG-NEXT: v_accvgpr_read_b32 v12, a12
+; SDAG-NEXT: v_accvgpr_read_b32 v13, a13
+; SDAG-NEXT: v_accvgpr_read_b32 v14, a14
+; SDAG-NEXT: v_accvgpr_read_b32 v15, a15
+; SDAG-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL-LABEL: test_mfma_scale_f32_32x32x64_f8f6f4_0_0__scaleA_kimm__scaleB_FP_literal:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GISEL-NEXT: scratch_load_dword a15, off, s32
+; GISEL-NEXT: v_mov_b32_e32 v31, 0x41
+; GISEL-NEXT: v_accvgpr_write_b32 a0, v16
+; GISEL-NEXT: v_accvgpr_write_b32 a1, v17
+; GISEL-NEXT: v_accvgpr_write_b32 a2, v18
+; GISEL-NEXT: v_accvgpr_write_b32 a3, v19
+; GISEL-NEXT: v_accvgpr_write_b32 a4, v20
+; GISEL-NEXT: v_accvgpr_write_b32 a5, v21
+; GISEL-NEXT: v_accvgpr_write_b32 a6, v22
+; GISEL-NEXT: v_accvgpr_write_b32 a7, v23
+; GISEL-NEXT: v_accvgpr_write_b32 a8, v24
+; GISEL-NEXT: v_accvgpr_write_b32 a9, v25
+; GISEL-NEXT: v_accvgpr_write_b32 a10, v26
+; GISEL-NEXT: v_accvgpr_write_b32 a11, v27
+; GISEL-NEXT: v_accvgpr_write_b32 a12, v28
+; GISEL-NEXT: v_accvgpr_write_b32 a13, v29
+; GISEL-NEXT: v_accvgpr_write_b32 a14, v30
+; GISEL-NEXT: s_waitcnt vmcnt(0)
+; GISEL-NEXT: s_nop 0
+; GISEL-NEXT: v_mfma_scale_f32_32x32x64_f8f6f4 a[0:15], v[0:7], v[8:15], a[0:15], v31, 1.0 op_sel_hi:[1,1,0]
+; GISEL-NEXT: s_nop 7
+; GISEL-NEXT: s_nop 7
+; GISEL-NEXT: s_nop 3
+; GISEL-NEXT: v_accvgpr_read_b32 v0, a0
+; GISEL-NEXT: v_accvgpr_read_b32 v1, a1
+; GISEL-NEXT: v_accvgpr_read_b32 v2, a2
+; GISEL-NEXT: v_accvgpr_read_b32 v3, a3
+; GISEL-NEXT: v_accvgpr_read_b32 v4, a4
+; GISEL-NEXT: v_accvgpr_read_b32 v5, a5
+; GISEL-NEXT: v_accvgpr_read_b32 v6, a6
+; GISEL-NEXT: v_accvgpr_read_b32 v7, a7
+; GISEL-NEXT: v_accvgpr_read_b32 v8, a8
+; GISEL-NEXT: v_accvgpr_read_b32 v9, a9
+; GISEL-NEXT: v_accvgpr_read_b32 v10, a10
+; GISEL-NEXT: v_accvgpr_read_b32 v11, a11
+; GISEL-NEXT: v_accvgpr_read_b32 v12, a12
+; GISEL-NEXT: v_accvgpr_read_b32 v13, a13
+; GISEL-NEXT: v_accvgpr_read_b32 v14, a14
+; GISEL-NEXT: v_accvgpr_read_b32 v15, a15
+; GISEL-NEXT: s_setpc_b64 s[30:31]
+ %result = call <16 x float> @llvm.amdgcn.mfma.scale.f32.32x32x64.f8f6f4.v8i32.v8i32(<8 x i32> %arg0, <8 x i32> %arg1, <16 x float> %arg2, i32 0, i32 0, i32 2, i32 65, i32 2, i32 1065353216)
+ ret <16 x float> %result
+}
+
+define <16 x float> @test_mfma_scale_f32_32x32x64_f8f6f4_0_0__scaleA_FP_literal__scaleB_inlineimm(<8 x i32> %arg0, <8 x i32> %arg1, <16 x float> %arg2, i32 %scale0, i32 %scale1) {
+; GCN-LABEL: test_mfma_scale_f32_32x32x64_f8f6f4_0_0__scaleA_FP_literal__scaleB_inlineimm:
+; GCN: ; %bb.0:
+; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT: scratch_load_dword a15, off, s32
+; GCN-NEXT: v_accvgpr_write_b32 a0, v16
+; GCN-NEXT: v_accvgpr_write_b32 a1, v17
+; GCN-NEXT: v_accvgpr_write_b32 a2, v18
+; GCN-NEXT: v_accvgpr_write_b32 a3, v19
+; GCN-NEXT: v_accvgpr_write_b32 a4, v20
+; GCN-NEXT: v_accvgpr_write_b32 a5, v21
+; GCN-NEXT: v_accvgpr_write_b32 a6, v22
+; GCN-NEXT: v_accvgpr_write_b32 a7, v23
+; GCN-NEXT: v_accvgpr_write_b32 a8, v24
+; GCN-NEXT: v_accvgpr_write_b32 a9, v25
+; GCN-NEXT: v_accvgpr_write_b32 a10, v26
+; GCN-NEXT: v_accvgpr_write_b32 a11, v27
+; GCN-NEXT: v_accvgpr_write_b32 a12, v28
+; GCN-NEXT: v_accvgpr_write_b32 a13, v29
+; GCN-NEXT: v_accvgpr_write_b32 a14, v30
+; GCN-NEXT: s_waitcnt vmcnt(0)
+; GCN-NEXT: s_nop 0
+; GCN-NEXT: v_mfma_scale_f32_32x32x64_f8f6f4 a[0:15], v[0:7], v[8:15], a[0:15], 1.0, -2 op_sel_hi:[1,1,0]
+; GCN-NEXT: s_nop 7
+; GCN-NEXT: s_nop 7
+; GCN-NEXT: s_nop 3
+; GCN-NEXT: v_accvgpr_read_b32 v0, a0
+; GCN-NEXT: v_accvgpr_read_b32 v1, a1
+; GCN-NEXT: v_accvgpr_read_b32 v2, a2
+; GCN-NEXT: v_accvgpr_read_b32 v3, a3
+; GCN-NEXT: v_accvgpr_read_b32 v4, a4
+; GCN-NEXT: v_accvgpr_read_b32 v5, a5
+; GCN-NEXT: v_accvgpr_read_b32 v6, a6
+; GCN-NEXT: v_accvgpr_read_b32 v7, a7
+; GCN-NEXT: v_accvgpr_read_b32 v8, a8
+; GCN-NEXT: v_accvgpr_read_b32 v9, a9
+; GCN-NEXT: v_accvgpr_read_b32 v10, a10
+; GCN-NEXT: v_accvgpr_read_b32 v11, a11
+; GCN-NEXT: v_accvgpr_read_b32 v12, a12
+; GCN-NEXT: v_accvgpr_read_b32 v13, a13
+; GCN-NEXT: v_accvgpr_read_b32 v14, a14
+; GCN-NEXT: v_accvgpr_read_b32 v15, a15
+; GCN-NEXT: s_setpc_b64 s[30:31]
+ %result = call <16 x float> @llvm.amdgcn.mfma.scale.f32.32x32x64.f8f6f4.v8i32.v8i32(<8 x i32> %arg0, <8 x i32> %arg1, <16 x float> %arg2, i32 0, i32 0, i32 2, i32 1065353216, i32 2, i32 -2)
+ ret <16 x float> %result
+}
+
+define <16 x float> @test_mfma_scale_f32_32x32x64_f8f6f4_0_0__scaleA_FP_literal__scaleB_FP_literal(<8 x i32> %arg0, <8 x i32> %arg1, <16 x float> %arg2, i32 %scale0, i32 %scale1) {
+; GCN-LABEL: test_mfma_scale_f32_32x32x64_f8f6f4_0_0__scaleA_FP_literal__scaleB_FP_literal:
+; GCN: ; %bb.0:
+; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT: scratch_load_dword a15, off, s32
+; GCN-NEXT: v_accvgpr_write_b32 a0, v16
+; GCN-NEXT: v_accvgpr_write_b32 a1, v17
+; GCN-NEXT: v_accvgpr_write_b32 a2, v18
+; GCN-NEXT: v_accvgpr_write_b32 a3, v19
+; GCN-NEXT: v_accvgpr_write_b32 a4, v20
+; GCN-NEXT: v_accvgpr_write_b32 a5, v21
+; GCN-NEXT: v_accvgpr_write_b32 a6, v22
+; GCN-NEXT: v_accvgpr_write_b32 a7, v23
+; GCN-NEXT: v_accvgpr_write_b32 a8, v24
+; GCN-NEXT: v_accvgpr_write_b32 ...
[truncated]
@@ -8835,6 +8835,13 @@ void AMDGPUAsmParser::cvtScaledMFMA(MCInst &Inst,
for (unsigned E = Operands.size(); I != E; ++I) {
AMDGPUOperand &Op = static_cast<AMDGPUOperand &>(*Operands[I]);

// the order of operands in MCInst and parsed operands are different.
// Adding dummy blgp and cbsz operands at corresponding MCInst operand
// indeces for parsing scale values correctly.
s/the/The/ s/indeces/indices/
// the order of operands in MCInst and parsed operands are different.
// Adding dummy blgp and cbsz operands at corresponding MCInst operand
// indeces for parsing scale values correctly.
if (I == 5) {
Could you use getNamedOperandIdx() instead of using a hardcoded value?
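One possible shape for that suggestion (a sketch only, not the author's actual follow-up; it assumes the dummies belong at the MCInst slot that getNamedOperandIdx() reports for cbsz, with blgp adjacent):

// Hypothetical sketch: insert the placeholder cbsz/blgp operands once the
// MCInst has been filled up to the cbsz slot, rather than keying off the
// parsed-operand index 5.
int CbszPos = AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::cbsz);
if (CbszPos >= 0 && (int)Inst.getNumOperands() == CbszPos) {
  Inst.addOperand(MCOperand::createImm(0)); // dummy cbsz
  Inst.addOperand(MCOperand::createImm(0)); // dummy blgp
}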
auto CbszIdx = OptionalIdx.find(AMDGPUOperand::ImmTyCBSZ);
if (CbszIdx != OptionalIdx.end()) {
  auto CbszVal =
      static_cast<const AMDGPUOperand &>(*Operands[CbszIdx->second]).getImm();
It is not common to see a static_cast
auto BlgpInsIdx = AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::blgp);
auto BlgpIdx = OptionalIdx.find(AMDGPUOperand::ImmTyBLGP);
if (BlgpIdx != OptionalIdx.end()) {
  auto BlgpVal =
no auto
and same comment as above
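Taken together ("no auto" on the value, plus the earlier remark about the const cast), the hunk might be reworked along these lines — a sketch only, reusing the non-const cast idiom from the top of the loop and assuming getImm() returns int64_t:

int BlgpInsIdx = AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::blgp);
auto BlgpIdx = OptionalIdx.find(AMDGPUOperand::ImmTyBLGP);
if (BlgpIdx != OptionalIdx.end()) {
  // Explicit int64_t instead of auto; same cast style as the loop above.
  int64_t BlgpVal =
      static_cast<AMDGPUOperand &>(*Operands[BlgpIdx->second]).getImm();
  Inst.getOperand(BlgpInsIdx).setImm(BlgpVal);
}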
@@ -2024,6 +2024,205 @@ define amdgpu_kernel void @test_mfma_scale_f32_16x16x128_f8f6f4__vgprcd___scaleA
ret void
}

define amdgpu_kernel void @test_mfma_scale_f32_16x16x128_f8f6f4__vgprcd___scaleA_kimm__scaleB__FP_literal(<8 x i32> %arg0, <8 x i32> %arg1, <4 x float> %arg2, ptr addrspace(1) %ptr) #0 { |
This is a bug fix for asm parser but why do you add IR tests?
I'd guess if we're missing codegen cases that hit the asm printer side of this we might need it?
Yeah, it's not directly related, but there were no tests for FP literals in the IR tests. I thought I would add them too.
…8f6f4.ll Co-authored-by: Matt Arsenault <arsenm2@gmail.com>
This picks up a bug fix for AMDGPU scaled mfma: * llvm/llvm-project#142493