[AMDGPU] Extend SRA i64 simplification for shift amts in range [33:62] #138913

Merged 7 commits on May 30, 2025
23 changes: 9 additions & 14 deletions llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
@@ -4165,22 +4165,17 @@ SDValue AMDGPUTargetLowering::performSraCombine(SDNode *N,
   SDLoc SL(N);
   unsigned RHSVal = RHS->getZExtValue();
 
-  // (sra i64:x, 32) -> build_pair x, (sra hi_32(x), 31)
-  if (RHSVal == 32) {
+  // For C >= 32
+  // (sra i64:x, C) -> build_pair (sra hi_32(x), C - 32), (sra hi_32(x), 31)
Contributor:

This fails alive: https://alive2.llvm.org/ce/z/ohLB4J

We need a freeze somewhere, maybe this https://alive2.llvm.org/ce/z/kcwGVF (but need to scale it down to a smaller size to get this to complete)

Contributor Author:

Added freeze.

Contributor:

Can you post the scaled-down alive link that shows this works?

Contributor Author (@LU-JOHN, May 29, 2025):

I tried to create a scaled-down verification for shifting by 33, but alive2 times out if ashr is used:

https://alive2.llvm.org/ce/z/S-gaBW

Replacing ashr with the commented 4-instruction sequence in the link avoids the timeout. Is this an issue with alive2's handling of ashr?

Contributor Author:

Alive2 verification with 8/16-bit sizes: https://alive2.llvm.org/ce/z/YWP8qy.

Contributor:

Can also add nuw to the shift https://alive2.llvm.org/ce/z/ocnLgT and nuw nsw on the sub: https://alive2.llvm.org/ce/z/tRDbS2

And if the shift is exact, can preserve exact on the new right shift: https://alive2.llvm.org/ce/z/rie9oU

Contributor Author (@LU-JOHN, May 30, 2025):

> Can also add nuw to the shift https://alive2.llvm.org/ce/z/ocnLgT and nuw nsw on the sub: https://alive2.llvm.org/ce/z/tRDbS2
>
> And if the shift is exact, can preserve exact on the new right shift: https://alive2.llvm.org/ce/z/rie9oU

This PR is only handling shifts by a constant. The sub is only in the alive2 verification to validate multiple shift values.

This PR does not use a 'shl' and 'or' to construct the 64-bit value as in the old alive2 verification. Instead it uses 'insertelement' and 'bitcast', https://alive2.llvm.org/ce/z/4bs2mv.

+  if (RHSVal >= 32) {
     SDValue Hi = getHiHalf64(N->getOperand(0), DAG);
-    SDValue NewShift = DAG.getNode(ISD::SRA, SL, MVT::i32, Hi,
-                                   DAG.getConstant(31, SL, MVT::i32));
+    Hi = DAG.getFreeze(Hi);
+    SDValue HiShift = DAG.getNode(ISD::SRA, SL, MVT::i32, Hi,
+                                  DAG.getConstant(31, SL, MVT::i32));
+    SDValue LoShift = DAG.getNode(ISD::SRA, SL, MVT::i32, Hi,
+                                  DAG.getConstant(RHSVal - 32, SL, MVT::i32));
 
-    SDValue BuildVec = DAG.getBuildVector(MVT::v2i32, SL, {Hi, NewShift});
-    return DAG.getNode(ISD::BITCAST, SL, MVT::i64, BuildVec);
-  }
-
-  // (sra i64:x, 63) -> build_pair (sra hi_32(x), 31), (sra hi_32(x), 31)
-  if (RHSVal == 63) {
-    SDValue Hi = getHiHalf64(N->getOperand(0), DAG);
-    SDValue NewShift = DAG.getNode(ISD::SRA, SL, MVT::i32, Hi,
-                                   DAG.getConstant(31, SL, MVT::i32));
-    SDValue BuildVec = DAG.getBuildVector(MVT::v2i32, SL, {NewShift, NewShift});
+    SDValue BuildVec = DAG.getBuildVector(MVT::v2i32, SL, {LoShift, HiShift});
     return DAG.getNode(ISD::BITCAST, SL, MVT::i64, BuildVec);
   }

16 changes: 8 additions & 8 deletions llvm/test/CodeGen/AMDGPU/ashr.v2i16.ll
@@ -685,16 +685,16 @@ define amdgpu_kernel void @ashr_v_imm_v4i16(ptr addrspace(1) %out, ptr addrspace
 ; CI-NEXT: buffer_load_dwordx2 v[2:3], v[0:1], s[4:7], 0 addr64
 ; CI-NEXT: s_mov_b64 s[2:3], s[6:7]
 ; CI-NEXT: s_waitcnt vmcnt(0)
-; CI-NEXT: v_bfe_i32 v6, v3, 0, 16
-; CI-NEXT: v_ashr_i64 v[3:4], v[2:3], 56
-; CI-NEXT: v_bfe_i32 v5, v2, 0, 16
+; CI-NEXT: v_bfe_i32 v4, v2, 0, 16
+; CI-NEXT: v_bfe_i32 v5, v3, 0, 16
+; CI-NEXT: v_ashrrev_i32_e32 v3, 24, v3
 ; CI-NEXT: v_ashrrev_i32_e32 v2, 24, v2
-; CI-NEXT: v_bfe_u32 v4, v6, 8, 16
-; CI-NEXT: v_lshlrev_b32_e32 v2, 16, v2
-; CI-NEXT: v_bfe_u32 v5, v5, 8, 16
 ; CI-NEXT: v_lshlrev_b32_e32 v3, 16, v3
-; CI-NEXT: v_or_b32_e32 v3, v4, v3
-; CI-NEXT: v_or_b32_e32 v2, v5, v2
+; CI-NEXT: v_bfe_u32 v5, v5, 8, 16
+; CI-NEXT: v_lshlrev_b32_e32 v2, 16, v2
+; CI-NEXT: v_bfe_u32 v4, v4, 8, 16
+; CI-NEXT: v_or_b32_e32 v3, v5, v3
+; CI-NEXT: v_or_b32_e32 v2, v4, v2
 ; CI-NEXT: buffer_store_dwordx2 v[2:3], v[0:1], s[0:3], 0 addr64
 ; CI-NEXT: s_endpgm
 ;
6 changes: 3 additions & 3 deletions llvm/test/CodeGen/AMDGPU/dagcomb-mullohi.ll
@@ -150,9 +150,9 @@ define i32 @mul_one_bit_hi_hi_u32_lshr_ashr(i32 %arg, i32 %arg1, ptr %arg2) {
 ; CHECK-LABEL: mul_one_bit_hi_hi_u32_lshr_ashr:
 ; CHECK: ; %bb.0: ; %bb
 ; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; CHECK-NEXT: v_mul_hi_u32 v4, v1, v0
-; CHECK-NEXT: v_ashrrev_i64 v[0:1], 33, v[3:4]
-; CHECK-NEXT: flat_store_dword v[2:3], v4
+; CHECK-NEXT: v_mul_hi_u32 v0, v1, v0
+; CHECK-NEXT: flat_store_dword v[2:3], v0
+; CHECK-NEXT: v_ashrrev_i32_e32 v0, 1, v0
 ; CHECK-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)
 ; CHECK-NEXT: s_setpc_b64 s[30:31]
 bb:
7 changes: 4 additions & 3 deletions llvm/test/CodeGen/AMDGPU/div_i128.ll
@@ -4398,9 +4398,10 @@ define i128 @v_sdiv_i128_v_pow2k(i128 %lhs) {
 ; GFX9-NEXT: v_addc_co_u32_e32 v2, vcc, 0, v2, vcc
 ; GFX9-NEXT: v_addc_co_u32_e32 v3, vcc, 0, v3, vcc
 ; GFX9-NEXT: v_lshlrev_b64 v[0:1], 31, v[2:3]
-; GFX9-NEXT: v_lshrrev_b32_e32 v4, 1, v4
-; GFX9-NEXT: v_ashrrev_i64 v[2:3], 33, v[2:3]
-; GFX9-NEXT: v_or_b32_e32 v0, v4, v0
+; GFX9-NEXT: v_lshrrev_b32_e32 v2, 1, v4
+; GFX9-NEXT: v_or_b32_e32 v0, v2, v0
+; GFX9-NEXT: v_ashrrev_i32_e32 v2, 1, v3
+; GFX9-NEXT: v_ashrrev_i32_e32 v3, 31, v3
 ; GFX9-NEXT: s_setpc_b64 s[30:31]
 ;
 ; GFX9-O0-LABEL: v_sdiv_i128_v_pow2k:
28 changes: 19 additions & 9 deletions llvm/test/CodeGen/AMDGPU/fptoi.i128.ll
@@ -1433,15 +1433,25 @@ define i128 @fptoui_f32_to_i128(float %x) {
 }
 
 define i128 @fptosi_f16_to_i128(half %x) {
-; GCN-LABEL: fptosi_f16_to_i128:
-; GCN: ; %bb.0:
-; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GCN-NEXT: v_cvt_f32_f16_e32 v0, v0
-; GCN-NEXT: v_cvt_i32_f32_e32 v0, v0
-; GCN-NEXT: v_ashrrev_i32_e32 v1, 31, v0
-; GCN-NEXT: v_mov_b32_e32 v2, v1
-; GCN-NEXT: v_mov_b32_e32 v3, v1
-; GCN-NEXT: s_setpc_b64 s[30:31]
+; SDAG-LABEL: fptosi_f16_to_i128:
+; SDAG: ; %bb.0:
+; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; SDAG-NEXT: v_cvt_f32_f16_e32 v0, v0
+; SDAG-NEXT: v_cvt_i32_f32_e32 v0, v0
+; SDAG-NEXT: v_ashrrev_i32_e32 v1, 31, v0
+; SDAG-NEXT: v_ashrrev_i32_e32 v2, 31, v1
+; SDAG-NEXT: v_mov_b32_e32 v3, v2
+; SDAG-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL-LABEL: fptosi_f16_to_i128:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GISEL-NEXT: v_cvt_f32_f16_e32 v0, v0
+; GISEL-NEXT: v_cvt_i32_f32_e32 v0, v0
+; GISEL-NEXT: v_ashrrev_i32_e32 v1, 31, v0
+; GISEL-NEXT: v_mov_b32_e32 v2, v1
+; GISEL-NEXT: v_mov_b32_e32 v3, v1
+; GISEL-NEXT: s_setpc_b64 s[30:31]
 %cvt = fptosi half %x to i128
 ret i128 %cvt
 }