
AMDGPU: Improve v16f16/v16bf16 copysign handling #142176


Merged

Conversation

arsenm
Contributor

@arsenm arsenm commented May 30, 2025

No description provided.

@llvmbot
Member

llvmbot commented May 30, 2025

@llvm/pr-subscribers-backend-amdgpu

Author: Matt Arsenault (arsenm)

Changes

Patch is 56.04 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/142176.diff

3 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/SIISelLowering.cpp (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/fcopysign.bf16.ll (+75-490)
  • (modified) llvm/test/CodeGen/AMDGPU/fcopysign.f16.ll (+48-383)
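For context, the lowering this patch extends to v16f16/v16bf16 performs a packed copysign with a single `v_bfi_b32` per 32-bit register: with the mask `0x7fff7fff`, the bitfield insert keeps the exponent and mantissa bits of both 16-bit halves from the magnitude operand and the sign bits from the sign operand. A minimal sketch of the bit trick in Python (the helper name and the f16 encodings are illustrative, not part of the patch):

```python
def packed_copysign32(mag: int, sign: int) -> int:
    """Copysign on two 16-bit floats packed into one 32-bit word,
    mimicking `v_bfi_b32 dst, 0x7fff7fff, mag, sign`, which computes
    dst = (mask & mag) | (~mask & sign)."""
    mask = 0x7FFF7FFF  # exponent + mantissa bits of both halves
    # ~mask (mod 2^32) is 0x80008000: the two sign bits
    return (mag & mask) | (sign & 0x80008000)


# Two f16 values 1.0 (0x3C00) taking their signs from <-1.0, 1.0>
# (0xBC00 is -1.0 in IEEE binary16):
result = packed_copysign32(0x3C003C00, 0xBC003C00)
```

Before the change, each 16-bit half needed its own shift/BFI/or sequence; treating the pair as one 32-bit word collapses that to one instruction plus one constant materialize, which is exactly what the shrunken check lines in the tests below show.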
diff --git a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
index ecfa6daf7803d..3535eb41682d9 100644
--- a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
@@ -759,7 +759,7 @@ SITargetLowering::SITargetLowering(const TargetMachine &TM,
     // Can do this in one BFI plus a constant materialize.
     setOperationAction(ISD::FCOPYSIGN,
                        {MVT::v2f16, MVT::v2bf16, MVT::v4f16, MVT::v4bf16,
-                        MVT::v8f16, MVT::v8bf16},
+                        MVT::v8f16, MVT::v8bf16, MVT::v16f16, MVT::v16bf16},
                        Custom);
 
     setOperationAction({ISD::FMAXNUM, ISD::FMINNUM}, MVT::f16, Custom);
@@ -5942,8 +5942,8 @@ SDValue SITargetLowering::splitBinaryVectorOp(SDValue Op,
   assert(VT == MVT::v4i16 || VT == MVT::v4f16 || VT == MVT::v4bf16 ||
          VT == MVT::v4f32 || VT == MVT::v8i16 || VT == MVT::v8f16 ||
          VT == MVT::v8bf16 || VT == MVT::v16i16 || VT == MVT::v16f16 ||
-         VT == MVT::v8f32 || VT == MVT::v16f32 || VT == MVT::v32f32 ||
-         VT == MVT::v32i16 || VT == MVT::v32f16);
+         VT == MVT::v16bf16 || VT == MVT::v8f32 || VT == MVT::v16f32 ||
+         VT == MVT::v32f32 || VT == MVT::v32i16 || VT == MVT::v32f16);
 
   auto [Lo0, Hi0] = DAG.SplitVectorOperand(Op.getNode(), 0);
   auto [Lo1, Hi1] = DAG.SplitVectorOperand(Op.getNode(), 1);
diff --git a/llvm/test/CodeGen/AMDGPU/fcopysign.bf16.ll b/llvm/test/CodeGen/AMDGPU/fcopysign.bf16.ll
index ab4cff2469467..4bbd170529ad0 100644
--- a/llvm/test/CodeGen/AMDGPU/fcopysign.bf16.ll
+++ b/llvm/test/CodeGen/AMDGPU/fcopysign.bf16.ll
@@ -1719,87 +1719,31 @@ define amdgpu_ps <8 x i32> @s_copysign_v16bf16(<16 x bfloat> inreg %arg_mag, <16
 ;
 ; GFX8-LABEL: s_copysign_v16bf16:
 ; GFX8:       ; %bb.0:
-; GFX8-NEXT:    s_movk_i32 s16, 0x7fff
+; GFX8-NEXT:    s_mov_b32 s16, 0x7fff7fff
 ; GFX8-NEXT:    v_mov_b32_e32 v0, s7
 ; GFX8-NEXT:    v_mov_b32_e32 v1, s15
-; GFX8-NEXT:    s_lshr_b32 s15, s15, 16
-; GFX8-NEXT:    s_lshr_b32 s7, s7, 16
 ; GFX8-NEXT:    v_bfi_b32 v0, s16, v0, v1
-; GFX8-NEXT:    v_mov_b32_e32 v1, s7
-; GFX8-NEXT:    v_mov_b32_e32 v2, s15
-; GFX8-NEXT:    v_bfi_b32 v1, s16, v1, v2
-; GFX8-NEXT:    v_lshlrev_b32_e32 v1, 16, v1
-; GFX8-NEXT:    v_or_b32_sdwa v0, v0, v1 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0 src1_sel:DWORD
 ; GFX8-NEXT:    v_mov_b32_e32 v1, s6
 ; GFX8-NEXT:    v_mov_b32_e32 v2, s14
-; GFX8-NEXT:    s_lshr_b32 s7, s14, 16
-; GFX8-NEXT:    s_lshr_b32 s6, s6, 16
 ; GFX8-NEXT:    v_bfi_b32 v1, s16, v1, v2
-; GFX8-NEXT:    v_mov_b32_e32 v2, s6
-; GFX8-NEXT:    v_mov_b32_e32 v3, s7
-; GFX8-NEXT:    v_bfi_b32 v2, s16, v2, v3
-; GFX8-NEXT:    v_lshlrev_b32_e32 v2, 16, v2
-; GFX8-NEXT:    v_or_b32_sdwa v1, v1, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0 src1_sel:DWORD
 ; GFX8-NEXT:    v_mov_b32_e32 v2, s5
 ; GFX8-NEXT:    v_mov_b32_e32 v3, s13
-; GFX8-NEXT:    s_lshr_b32 s6, s13, 16
-; GFX8-NEXT:    s_lshr_b32 s5, s5, 16
 ; GFX8-NEXT:    v_bfi_b32 v2, s16, v2, v3
-; GFX8-NEXT:    v_mov_b32_e32 v3, s5
-; GFX8-NEXT:    v_mov_b32_e32 v4, s6
-; GFX8-NEXT:    v_bfi_b32 v3, s16, v3, v4
-; GFX8-NEXT:    v_lshlrev_b32_e32 v3, 16, v3
-; GFX8-NEXT:    v_or_b32_sdwa v2, v2, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0 src1_sel:DWORD
 ; GFX8-NEXT:    v_mov_b32_e32 v3, s4
 ; GFX8-NEXT:    v_mov_b32_e32 v4, s12
-; GFX8-NEXT:    s_lshr_b32 s5, s12, 16
-; GFX8-NEXT:    s_lshr_b32 s4, s4, 16
 ; GFX8-NEXT:    v_bfi_b32 v3, s16, v3, v4
-; GFX8-NEXT:    v_mov_b32_e32 v4, s4
-; GFX8-NEXT:    v_mov_b32_e32 v5, s5
-; GFX8-NEXT:    v_bfi_b32 v4, s16, v4, v5
-; GFX8-NEXT:    v_lshlrev_b32_e32 v4, 16, v4
-; GFX8-NEXT:    v_or_b32_sdwa v3, v3, v4 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0 src1_sel:DWORD
 ; GFX8-NEXT:    v_mov_b32_e32 v4, s3
 ; GFX8-NEXT:    v_mov_b32_e32 v5, s11
-; GFX8-NEXT:    s_lshr_b32 s4, s11, 16
-; GFX8-NEXT:    s_lshr_b32 s3, s3, 16
 ; GFX8-NEXT:    v_bfi_b32 v4, s16, v4, v5
-; GFX8-NEXT:    v_mov_b32_e32 v5, s3
-; GFX8-NEXT:    v_mov_b32_e32 v6, s4
-; GFX8-NEXT:    v_bfi_b32 v5, s16, v5, v6
-; GFX8-NEXT:    v_lshlrev_b32_e32 v5, 16, v5
-; GFX8-NEXT:    v_or_b32_sdwa v4, v4, v5 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0 src1_sel:DWORD
 ; GFX8-NEXT:    v_mov_b32_e32 v5, s2
 ; GFX8-NEXT:    v_mov_b32_e32 v6, s10
-; GFX8-NEXT:    s_lshr_b32 s3, s10, 16
-; GFX8-NEXT:    s_lshr_b32 s2, s2, 16
 ; GFX8-NEXT:    v_bfi_b32 v5, s16, v5, v6
-; GFX8-NEXT:    v_mov_b32_e32 v6, s2
-; GFX8-NEXT:    v_mov_b32_e32 v7, s3
-; GFX8-NEXT:    v_bfi_b32 v6, s16, v6, v7
-; GFX8-NEXT:    v_lshlrev_b32_e32 v6, 16, v6
-; GFX8-NEXT:    v_or_b32_sdwa v5, v5, v6 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0 src1_sel:DWORD
 ; GFX8-NEXT:    v_mov_b32_e32 v6, s1
 ; GFX8-NEXT:    v_mov_b32_e32 v7, s9
-; GFX8-NEXT:    s_lshr_b32 s2, s9, 16
-; GFX8-NEXT:    s_lshr_b32 s1, s1, 16
 ; GFX8-NEXT:    v_bfi_b32 v6, s16, v6, v7
-; GFX8-NEXT:    v_mov_b32_e32 v7, s1
-; GFX8-NEXT:    v_mov_b32_e32 v8, s2
-; GFX8-NEXT:    v_bfi_b32 v7, s16, v7, v8
-; GFX8-NEXT:    v_lshlrev_b32_e32 v7, 16, v7
-; GFX8-NEXT:    v_or_b32_sdwa v6, v6, v7 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0 src1_sel:DWORD
 ; GFX8-NEXT:    v_mov_b32_e32 v7, s0
 ; GFX8-NEXT:    v_mov_b32_e32 v8, s8
-; GFX8-NEXT:    s_lshr_b32 s1, s8, 16
-; GFX8-NEXT:    s_lshr_b32 s0, s0, 16
 ; GFX8-NEXT:    v_bfi_b32 v7, s16, v7, v8
-; GFX8-NEXT:    v_mov_b32_e32 v8, s0
-; GFX8-NEXT:    v_mov_b32_e32 v9, s1
-; GFX8-NEXT:    v_bfi_b32 v8, s16, v8, v9
-; GFX8-NEXT:    v_lshlrev_b32_e32 v8, 16, v8
-; GFX8-NEXT:    v_or_b32_sdwa v7, v7, v8 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0 src1_sel:DWORD
 ; GFX8-NEXT:    v_readfirstlane_b32 s0, v7
 ; GFX8-NEXT:    v_readfirstlane_b32 s1, v6
 ; GFX8-NEXT:    v_readfirstlane_b32 s2, v5
@@ -1812,87 +1756,31 @@ define amdgpu_ps <8 x i32> @s_copysign_v16bf16(<16 x bfloat> inreg %arg_mag, <16
 ;
 ; GFX9-LABEL: s_copysign_v16bf16:
 ; GFX9:       ; %bb.0:
-; GFX9-NEXT:    s_movk_i32 s16, 0x7fff
+; GFX9-NEXT:    s_mov_b32 s16, 0x7fff7fff
 ; GFX9-NEXT:    v_mov_b32_e32 v0, s7
 ; GFX9-NEXT:    v_mov_b32_e32 v1, s15
-; GFX9-NEXT:    s_lshr_b32 s15, s15, 16
-; GFX9-NEXT:    s_lshr_b32 s7, s7, 16
 ; GFX9-NEXT:    v_bfi_b32 v0, s16, v0, v1
-; GFX9-NEXT:    v_mov_b32_e32 v1, s7
-; GFX9-NEXT:    v_mov_b32_e32 v2, s15
-; GFX9-NEXT:    v_bfi_b32 v1, s16, v1, v2
-; GFX9-NEXT:    v_and_b32_e32 v0, 0xffff, v0
-; GFX9-NEXT:    v_lshl_or_b32 v0, v1, 16, v0
 ; GFX9-NEXT:    v_mov_b32_e32 v1, s6
 ; GFX9-NEXT:    v_mov_b32_e32 v2, s14
-; GFX9-NEXT:    s_lshr_b32 s7, s14, 16
-; GFX9-NEXT:    s_lshr_b32 s6, s6, 16
 ; GFX9-NEXT:    v_bfi_b32 v1, s16, v1, v2
-; GFX9-NEXT:    v_mov_b32_e32 v2, s6
-; GFX9-NEXT:    v_mov_b32_e32 v3, s7
-; GFX9-NEXT:    v_bfi_b32 v2, s16, v2, v3
-; GFX9-NEXT:    v_and_b32_e32 v1, 0xffff, v1
-; GFX9-NEXT:    v_lshl_or_b32 v1, v2, 16, v1
 ; GFX9-NEXT:    v_mov_b32_e32 v2, s5
 ; GFX9-NEXT:    v_mov_b32_e32 v3, s13
-; GFX9-NEXT:    s_lshr_b32 s6, s13, 16
-; GFX9-NEXT:    s_lshr_b32 s5, s5, 16
 ; GFX9-NEXT:    v_bfi_b32 v2, s16, v2, v3
-; GFX9-NEXT:    v_mov_b32_e32 v3, s5
-; GFX9-NEXT:    v_mov_b32_e32 v4, s6
-; GFX9-NEXT:    v_bfi_b32 v3, s16, v3, v4
-; GFX9-NEXT:    v_and_b32_e32 v2, 0xffff, v2
-; GFX9-NEXT:    v_lshl_or_b32 v2, v3, 16, v2
 ; GFX9-NEXT:    v_mov_b32_e32 v3, s4
 ; GFX9-NEXT:    v_mov_b32_e32 v4, s12
-; GFX9-NEXT:    s_lshr_b32 s5, s12, 16
-; GFX9-NEXT:    s_lshr_b32 s4, s4, 16
 ; GFX9-NEXT:    v_bfi_b32 v3, s16, v3, v4
-; GFX9-NEXT:    v_mov_b32_e32 v4, s4
-; GFX9-NEXT:    v_mov_b32_e32 v5, s5
-; GFX9-NEXT:    v_bfi_b32 v4, s16, v4, v5
-; GFX9-NEXT:    v_and_b32_e32 v3, 0xffff, v3
-; GFX9-NEXT:    v_lshl_or_b32 v3, v4, 16, v3
 ; GFX9-NEXT:    v_mov_b32_e32 v4, s3
 ; GFX9-NEXT:    v_mov_b32_e32 v5, s11
-; GFX9-NEXT:    s_lshr_b32 s4, s11, 16
-; GFX9-NEXT:    s_lshr_b32 s3, s3, 16
 ; GFX9-NEXT:    v_bfi_b32 v4, s16, v4, v5
-; GFX9-NEXT:    v_mov_b32_e32 v5, s3
-; GFX9-NEXT:    v_mov_b32_e32 v6, s4
-; GFX9-NEXT:    v_bfi_b32 v5, s16, v5, v6
-; GFX9-NEXT:    v_and_b32_e32 v4, 0xffff, v4
-; GFX9-NEXT:    v_lshl_or_b32 v4, v5, 16, v4
 ; GFX9-NEXT:    v_mov_b32_e32 v5, s2
 ; GFX9-NEXT:    v_mov_b32_e32 v6, s10
-; GFX9-NEXT:    s_lshr_b32 s3, s10, 16
-; GFX9-NEXT:    s_lshr_b32 s2, s2, 16
 ; GFX9-NEXT:    v_bfi_b32 v5, s16, v5, v6
-; GFX9-NEXT:    v_mov_b32_e32 v6, s2
-; GFX9-NEXT:    v_mov_b32_e32 v7, s3
-; GFX9-NEXT:    v_bfi_b32 v6, s16, v6, v7
-; GFX9-NEXT:    v_and_b32_e32 v5, 0xffff, v5
-; GFX9-NEXT:    v_lshl_or_b32 v5, v6, 16, v5
 ; GFX9-NEXT:    v_mov_b32_e32 v6, s1
 ; GFX9-NEXT:    v_mov_b32_e32 v7, s9
-; GFX9-NEXT:    s_lshr_b32 s2, s9, 16
-; GFX9-NEXT:    s_lshr_b32 s1, s1, 16
 ; GFX9-NEXT:    v_bfi_b32 v6, s16, v6, v7
-; GFX9-NEXT:    v_mov_b32_e32 v7, s1
-; GFX9-NEXT:    v_mov_b32_e32 v8, s2
-; GFX9-NEXT:    v_bfi_b32 v7, s16, v7, v8
-; GFX9-NEXT:    v_and_b32_e32 v6, 0xffff, v6
-; GFX9-NEXT:    v_lshl_or_b32 v6, v7, 16, v6
 ; GFX9-NEXT:    v_mov_b32_e32 v7, s0
 ; GFX9-NEXT:    v_mov_b32_e32 v8, s8
-; GFX9-NEXT:    s_lshr_b32 s1, s8, 16
-; GFX9-NEXT:    s_lshr_b32 s0, s0, 16
 ; GFX9-NEXT:    v_bfi_b32 v7, s16, v7, v8
-; GFX9-NEXT:    v_mov_b32_e32 v8, s0
-; GFX9-NEXT:    v_mov_b32_e32 v9, s1
-; GFX9-NEXT:    v_bfi_b32 v8, s16, v8, v9
-; GFX9-NEXT:    v_and_b32_e32 v7, 0xffff, v7
-; GFX9-NEXT:    v_lshl_or_b32 v7, v8, 16, v7
 ; GFX9-NEXT:    v_readfirstlane_b32 s0, v7
 ; GFX9-NEXT:    v_readfirstlane_b32 s1, v6
 ; GFX9-NEXT:    v_readfirstlane_b32 s2, v5
@@ -1906,74 +1794,26 @@ define amdgpu_ps <8 x i32> @s_copysign_v16bf16(<16 x bfloat> inreg %arg_mag, <16
 ; GFX10-LABEL: s_copysign_v16bf16:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    v_mov_b32_e32 v0, s15
-; GFX10-NEXT:    v_mov_b32_e32 v2, s14
-; GFX10-NEXT:    s_lshr_b32 s14, s14, 16
-; GFX10-NEXT:    s_lshr_b32 s15, s15, 16
-; GFX10-NEXT:    v_mov_b32_e32 v3, s14
-; GFX10-NEXT:    v_mov_b32_e32 v1, s15
-; GFX10-NEXT:    v_bfi_b32 v0, 0x7fff, s7, v0
-; GFX10-NEXT:    v_bfi_b32 v2, 0x7fff, s6, v2
-; GFX10-NEXT:    v_mov_b32_e32 v4, s13
-; GFX10-NEXT:    s_lshr_b32 s6, s6, 16
-; GFX10-NEXT:    s_lshr_b32 s7, s7, 16
-; GFX10-NEXT:    v_bfi_b32 v3, 0x7fff, s6, v3
-; GFX10-NEXT:    s_lshr_b32 s6, s13, 16
-; GFX10-NEXT:    v_bfi_b32 v1, 0x7fff, s7, v1
-; GFX10-NEXT:    v_and_b32_e32 v0, 0xffff, v0
-; GFX10-NEXT:    v_and_b32_e32 v2, 0xffff, v2
-; GFX10-NEXT:    v_mov_b32_e32 v5, s6
-; GFX10-NEXT:    v_bfi_b32 v4, 0x7fff, s5, v4
-; GFX10-NEXT:    s_lshr_b32 s5, s5, 16
-; GFX10-NEXT:    v_lshl_or_b32 v0, v1, 16, v0
-; GFX10-NEXT:    v_lshl_or_b32 v1, v3, 16, v2
-; GFX10-NEXT:    v_bfi_b32 v2, 0x7fff, s5, v5
-; GFX10-NEXT:    v_and_b32_e32 v3, 0xffff, v4
-; GFX10-NEXT:    s_lshr_b32 s5, s12, 16
-; GFX10-NEXT:    v_mov_b32_e32 v4, s12
-; GFX10-NEXT:    v_mov_b32_e32 v5, s5
+; GFX10-NEXT:    v_mov_b32_e32 v1, s14
+; GFX10-NEXT:    v_mov_b32_e32 v2, s13
+; GFX10-NEXT:    v_mov_b32_e32 v3, s8
+; GFX10-NEXT:    v_mov_b32_e32 v4, s9
+; GFX10-NEXT:    v_mov_b32_e32 v5, s10
 ; GFX10-NEXT:    v_mov_b32_e32 v6, s11
-; GFX10-NEXT:    v_lshl_or_b32 v2, v2, 16, v3
-; GFX10-NEXT:    s_lshr_b32 s5, s4, 16
-; GFX10-NEXT:    v_bfi_b32 v3, 0x7fff, s4, v4
-; GFX10-NEXT:    s_lshr_b32 s4, s11, 16
-; GFX10-NEXT:    v_bfi_b32 v4, 0x7fff, s5, v5
-; GFX10-NEXT:    v_bfi_b32 v5, 0x7fff, s3, v6
-; GFX10-NEXT:    v_mov_b32_e32 v6, s4
-; GFX10-NEXT:    s_lshr_b32 s4, s10, 16
-; GFX10-NEXT:    v_mov_b32_e32 v7, s10
-; GFX10-NEXT:    v_mov_b32_e32 v8, s4
-; GFX10-NEXT:    s_lshr_b32 s3, s3, 16
-; GFX10-NEXT:    v_mov_b32_e32 v9, s9
-; GFX10-NEXT:    v_mov_b32_e32 v10, s8
-; GFX10-NEXT:    v_bfi_b32 v6, 0x7fff, s3, v6
-; GFX10-NEXT:    s_lshr_b32 s3, s2, 16
-; GFX10-NEXT:    v_bfi_b32 v7, 0x7fff, s2, v7
-; GFX10-NEXT:    v_bfi_b32 v8, 0x7fff, s3, v8
-; GFX10-NEXT:    s_lshr_b32 s2, s9, 16
-; GFX10-NEXT:    s_lshr_b32 s3, s8, 16
-; GFX10-NEXT:    v_bfi_b32 v9, 0x7fff, s1, v9
-; GFX10-NEXT:    v_mov_b32_e32 v11, s2
-; GFX10-NEXT:    v_mov_b32_e32 v12, s3
-; GFX10-NEXT:    v_bfi_b32 v10, 0x7fff, s0, v10
-; GFX10-NEXT:    s_lshr_b32 s1, s1, 16
-; GFX10-NEXT:    s_lshr_b32 s0, s0, 16
-; GFX10-NEXT:    v_bfi_b32 v11, 0x7fff, s1, v11
-; GFX10-NEXT:    v_bfi_b32 v12, 0x7fff, s0, v12
-; GFX10-NEXT:    v_and_b32_e32 v10, 0xffff, v10
-; GFX10-NEXT:    v_and_b32_e32 v9, 0xffff, v9
-; GFX10-NEXT:    v_and_b32_e32 v7, 0xffff, v7
-; GFX10-NEXT:    v_and_b32_e32 v5, 0xffff, v5
-; GFX10-NEXT:    v_and_b32_e32 v3, 0xffff, v3
-; GFX10-NEXT:    v_lshl_or_b32 v10, v12, 16, v10
-; GFX10-NEXT:    v_lshl_or_b32 v9, v11, 16, v9
-; GFX10-NEXT:    v_lshl_or_b32 v7, v8, 16, v7
-; GFX10-NEXT:    v_lshl_or_b32 v5, v6, 16, v5
-; GFX10-NEXT:    v_lshl_or_b32 v3, v4, 16, v3
-; GFX10-NEXT:    v_readfirstlane_b32 s0, v10
-; GFX10-NEXT:    v_readfirstlane_b32 s1, v9
-; GFX10-NEXT:    v_readfirstlane_b32 s2, v7
-; GFX10-NEXT:    v_readfirstlane_b32 s3, v5
-; GFX10-NEXT:    v_readfirstlane_b32 s4, v3
+; GFX10-NEXT:    v_mov_b32_e32 v7, s12
+; GFX10-NEXT:    v_bfi_b32 v0, 0x7fff7fff, s7, v0
+; GFX10-NEXT:    v_bfi_b32 v1, 0x7fff7fff, s6, v1
+; GFX10-NEXT:    v_bfi_b32 v2, 0x7fff7fff, s5, v2
+; GFX10-NEXT:    v_bfi_b32 v3, 0x7fff7fff, s0, v3
+; GFX10-NEXT:    v_bfi_b32 v4, 0x7fff7fff, s1, v4
+; GFX10-NEXT:    v_bfi_b32 v5, 0x7fff7fff, s2, v5
+; GFX10-NEXT:    v_bfi_b32 v6, 0x7fff7fff, s3, v6
+; GFX10-NEXT:    v_bfi_b32 v7, 0x7fff7fff, s4, v7
+; GFX10-NEXT:    v_readfirstlane_b32 s0, v3
+; GFX10-NEXT:    v_readfirstlane_b32 s1, v4
+; GFX10-NEXT:    v_readfirstlane_b32 s2, v5
+; GFX10-NEXT:    v_readfirstlane_b32 s3, v6
+; GFX10-NEXT:    v_readfirstlane_b32 s4, v7
 ; GFX10-NEXT:    v_readfirstlane_b32 s5, v2
 ; GFX10-NEXT:    v_readfirstlane_b32 s6, v1
 ; GFX10-NEXT:    v_readfirstlane_b32 s7, v0
@@ -1981,73 +1821,24 @@ define amdgpu_ps <8 x i32> @s_copysign_v16bf16(<16 x bfloat> inreg %arg_mag, <16
 ;
 ; GFX11-LABEL: s_copysign_v16bf16:
 ; GFX11:       ; %bb.0:
-; GFX11-NEXT:    v_mov_b32_e32 v0, s15
-; GFX11-NEXT:    v_mov_b32_e32 v2, s14
-; GFX11-NEXT:    s_lshr_b32 s14, s14, 16
-; GFX11-NEXT:    s_lshr_b32 s15, s15, 16
-; GFX11-NEXT:    v_mov_b32_e32 v3, s14
-; GFX11-NEXT:    v_mov_b32_e32 v1, s15
-; GFX11-NEXT:    v_bfi_b32 v0, 0x7fff, s7, v0
-; GFX11-NEXT:    v_bfi_b32 v2, 0x7fff, s6, v2
-; GFX11-NEXT:    v_mov_b32_e32 v4, s13
-; GFX11-NEXT:    s_lshr_b32 s6, s6, 16
-; GFX11-NEXT:    s_lshr_b32 s7, s7, 16
-; GFX11-NEXT:    v_bfi_b32 v3, 0x7fff, s6, v3
-; GFX11-NEXT:    s_lshr_b32 s6, s13, 16
-; GFX11-NEXT:    v_bfi_b32 v1, 0x7fff, s7, v1
-; GFX11-NEXT:    v_dual_mov_b32 v5, s6 :: v_dual_and_b32 v0, 0xffff, v0
-; GFX11-NEXT:    v_dual_mov_b32 v7, s10 :: v_dual_and_b32 v2, 0xffff, v2
-; GFX11-NEXT:    v_bfi_b32 v4, 0x7fff, s5, v4
-; GFX11-NEXT:    s_lshr_b32 s5, s5, 16
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_3) | instskip(NEXT) | instid1(VALU_DEP_3)
-; GFX11-NEXT:    v_lshl_or_b32 v0, v1, 16, v0
-; GFX11-NEXT:    v_lshl_or_b32 v1, v3, 16, v2
-; GFX11-NEXT:    v_bfi_b32 v2, 0x7fff, s5, v5
-; GFX11-NEXT:    v_dual_mov_b32 v4, s12 :: v_dual_and_b32 v3, 0xffff, v4
-; GFX11-NEXT:    s_lshr_b32 s5, s12, 16
-; GFX11-NEXT:    v_dual_mov_b32 v6, s11 :: v_dual_mov_b32 v9, s9
-; GFX11-NEXT:    v_mov_b32_e32 v5, s5
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_3)
-; GFX11-NEXT:    v_lshl_or_b32 v2, v2, 16, v3
-; GFX11-NEXT:    s_lshr_b32 s5, s4, 16
-; GFX11-NEXT:    v_bfi_b32 v3, 0x7fff, s4, v4
-; GFX11-NEXT:    s_lshr_b32 s4, s11, 16
-; GFX11-NEXT:    v_bfi_b32 v4, 0x7fff, s5, v5
-; GFX11-NEXT:    v_bfi_b32 v5, 0x7fff, s3, v6
-; GFX11-NEXT:    v_mov_b32_e32 v6, s4
-; GFX11-NEXT:    s_lshr_b32 s4, s10, 16
-; GFX11-NEXT:    v_bfi_b32 v9, 0x7fff, s1, v9
-; GFX11-NEXT:    v_mov_b32_e32 v8, s4
-; GFX11-NEXT:    s_lshr_b32 s3, s3, 16
-; GFX11-NEXT:    v_bfi_b32 v7, 0x7fff, s2, v7
-; GFX11-NEXT:    v_mov_b32_e32 v10, s8
-; GFX11-NEXT:    v_bfi_b32 v6, 0x7fff, s3, v6
-; GFX11-NEXT:    s_lshr_b32 s3, s2, 16
-; GFX11-NEXT:    s_lshr_b32 s2, s9, 16
-; GFX11-NEXT:    v_and_b32_e32 v9, 0xffff, v9
-; GFX11-NEXT:    v_bfi_b32 v8, 0x7fff, s3, v8
-; GFX11-NEXT:    s_lshr_b32 s3, s8, 16
-; GFX11-NEXT:    s_delay_alu instid0(SALU_CYCLE_1)
-; GFX11-NEXT:    v_dual_mov_b32 v11, s2 :: v_dual_mov_b32 v12, s3
-; GFX11-NEXT:    v_and_b32_e32 v5, 0xffff, v5
-; GFX11-NEXT:    v_and_b32_e32 v7, 0xffff, v7
-; GFX11-NEXT:    v_bfi_b32 v10, 0x7fff, s0, v10
-; GFX11-NEXT:    s_lshr_b32 s1, s1, 16
-; GFX11-NEXT:    s_lshr_b32 s0, s0, 16
-; GFX11-NEXT:    v_bfi_b32 v11, 0x7fff, s1, v11
-; GFX11-NEXT:    v_bfi_b32 v12, 0x7fff, s0, v12
-; GFX11-NEXT:    v_and_b32_e32 v10, 0xffff, v10
-; GFX11-NEXT:    v_and_b32_e32 v3, 0xffff, v3
-; GFX11-NEXT:    v_lshl_or_b32 v7, v8, 16, v7
-; GFX11-NEXT:    v_lshl_or_b32 v9, v11, 16, v9
-; GFX11-NEXT:    v_lshl_or_b32 v5, v6, 16, v5
-; GFX11-NEXT:    v_lshl_or_b32 v10, v12, 16, v10
-; GFX11-NEXT:    v_lshl_or_b32 v3, v4, 16, v3
-; GFX11-NEXT:    v_readfirstlane_b32 s2, v7
-; GFX11-NEXT:    v_readfirstlane_b32 s1, v9
-; GFX11-NEXT:    v_readfirstlane_b32 s3, v5
-; GFX11-NEXT:    v_readfirstlane_b32 s0, v10
-; GFX11-NEXT:    v_readfirstlane_b32 s4, v3
+; GFX11-NEXT:    v_dual_mov_b32 v0, s15 :: v_dual_mov_b32 v1, s14
+; GFX11-NEXT:    v_dual_mov_b32 v2, s13 :: v_dual_mov_b32 v3, s8
+; GFX11-NEXT:    v_dual_mov_b32 v4, s9 :: v_dual_mov_b32 v5, s10
+; GFX11-NEXT:    v_dual_mov_b32 v6, s11 :: v_dual_mov_b32 v7, s12
+; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_4)
+; GFX11-NEXT:    v_bfi_b32 v0, 0x7fff7fff, s7, v0
+; GFX11-NEXT:    v_bfi_b32 v1, 0x7fff7fff, s6, v1
+; GFX11-NEXT:    v_bfi_b32 v2, 0x7fff7fff, s5, v2
+; GFX11-NEXT:    v_bfi_b32 v3, 0x7fff7fff, s0, v3
+; GFX11-NEXT:    v_bfi_b32 v4, 0x7fff7fff, s1, v4
+; GFX11-NEXT:    v_bfi_b32 v5, 0x7fff7fff, s2, v5
+; GFX11-NEXT:    v_bfi_b32 v6, 0x7fff7fff, s3, v6
+; GFX11-NEXT:    v_bfi_b32 v7, 0x7fff7fff, s4, v7
+; GFX11-NEXT:    v_readfirstlane_b32 s0, v3
+; GFX11-NEXT:    v_readfirstlane_b32 s1, v4
+; GFX11-NEXT:    v_readfirstlane_b32 s2, v5
+; GFX11-NEXT:    v_readfirstlane_b32 s3, v6
+; GFX11-NEXT:    v_readfirstlane_b32 s4, v7
 ; GFX11-NEXT:    v_readfirstlane_b32 s5, v2
 ; GFX11-NEXT:    v_readfirstlane_b32 s6, v1
 ; GFX11-NEXT:    v_readfirstlane_b32 s7, v0
@@ -2717,262 +2508,56 @@ define <16 x bfloat> @v_copysign_v16bf16(<16 x bfloat> %mag, <16 x bfloat> %sign
 ; GFX8-LABEL: v_copysign_v16bf16:
 ; GFX8:       ; %bb.0:
 ; GFX8-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX8-NEXT:    v_lshrrev_b32_e32 v16, 16, v15
-; GFX8-NEXT:    v_lshrrev_b32_e32 v17, 16, v7
-; GFX8-NEXT:    s_movk_i32 s4, 0x7fff
-; GFX8-NEXT:    v_bfi_b32 v16, s4, v17, v16
-; GFX8-NEXT:    v_bfi_b32 v7, s4, v7, v15
-; GFX8-NEXT:    v_lshrrev_b32_e32 v15, 16, v14
-; GFX8-NEXT:    v_lshrrev_b32_e32 v17, 16, v6
-; GFX8-NEXT:    v_bfi_b32 v15, s4, v17, v15
-; GFX8-NEXT:    v_bfi_b32 v6, s4, v6, v14
-; GFX8-NEXT:    v_lshrrev_b32_e32 v14, 16, v13
-; GFX8-NEXT:    v_lshrrev_b32_e32 v17, 16, v5
-; GFX8-NEXT:    v_bfi_b32 v14, s4, v17, v14
-; GFX8-NEXT:    v_bfi_b32 v5, s4, v5, v13
-; GFX8-NEXT:    v_lshrrev_b32_e32 v13, 16, v12
-; GFX8-NEXT:    v_lshrrev_b32_e32 v17, 16, v4
-; GFX8-NEXT:    v_bfi_b32 v13, s4, v17, v13
-; GFX8-NEXT:    v_bfi_b32 v4, s4, v4, v12
-; GFX8-NEXT:    v_lshrrev_b32_e32 v12, 16, v11
-; GFX8-NEXT:    v_lshrrev_b32_e32 v17, 16, v3
-; GFX8-NEXT:    v_bfi_b32 v12, s4, v17, v12
-; GFX8-NEXT:    v_bfi_b32 v3, s4, v3, v11
-; GFX8-NEXT:    v_lshrrev_b32_e32 v11, 16, v10
-; GFX8-NEXT:    v_lshrrev_b32_e32 v17, 16, v2
-; GFX8-NEXT:    v_bfi_b32 v11, s4, v17, v11
-; GFX8-NEXT:    v_bfi_b32 v2, s4, v2, v10
-; GFX8-NEXT:    v_lshrrev_b32_e32 v10, 16, v9
-; GFX8-NEXT:    v_lshrrev_b32_e32 v17, 16, v1
-; GFX8-NEXT:    v_bfi_b32 v10, s4, v17, v10
-; GFX8-NEXT:    v_bfi_b32 v1, s4, v1, v9
-; GFX8-NEXT:    v_lshrrev_b32_e32 v9, 16, v8
-; GFX8-NEXT:    v_lshrrev_b32_e32 v17, 16, v0
-; GFX8-NEXT:    v_bfi_b32 v9, s4, v17, v9
+; GFX8-NEXT:    s_mov_b32 s4, 0x7fff7fff
 ; GFX8-NEXT:    v_bfi_b32 v0, s4, v0, v8
-; GFX8-NEXT:    v_lshlrev_b32_e32 v8, 16, v9
-; GFX8-NEXT:    v_or_b32_sdwa v0, v0, v8 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0 src1_sel:DWORD
-; GFX8-NEXT:    v_lshlrev_b32_e32 v8, 16, v10
-; GFX8-NEXT:    v_or_b32_sdwa v1, v1, v8 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0 src1_sel:DWORD
-; GFX8-NEXT:    v_lshlrev_b32_e32 v8, 16, v11
-; GFX8-NEXT:    v_or_b32_sdwa v2, v2, v8 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0 src1_sel:DWORD
-; GFX8-NEXT:    v_lshlrev_b32_e32 v8, 16, v12
-; GFX8-NEXT:    v_or_b32_sdwa v3, v3, v8 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0 src1_sel:DWORD
-; GFX8-NEXT: ...
[truncated]

@arsenm arsenm force-pushed the users/arsenm/amdgpu/improve-v16f16-v16bf16-copysign-lowering branch from a05deda to 9374893 Compare May 30, 2025 15:58
@arsenm arsenm force-pushed the users/arsenm/amdgpu/improve-v8f16-v8bf16-copysign-lowering branch from 196b010 to 883a508 Compare May 30, 2025 15:58
@arsenm arsenm force-pushed the users/arsenm/amdgpu/improve-v8f16-v8bf16-copysign-lowering branch from 883a508 to d94e349 Compare May 30, 2025 18:05
@arsenm arsenm force-pushed the users/arsenm/amdgpu/improve-v16f16-v16bf16-copysign-lowering branch from 9374893 to bae3443 Compare May 30, 2025 18:05
Contributor Author

arsenm commented May 31, 2025

Merge activity

  • May 31, 5:58 AM UTC: A user started a stack merge that includes this pull request via Graphite.
  • May 31, 6:16 AM UTC: Graphite rebased this pull request as part of a merge.
  • May 31, 6:18 AM UTC: @arsenm merged this pull request with Graphite.

@arsenm arsenm force-pushed the users/arsenm/amdgpu/improve-v8f16-v8bf16-copysign-lowering branch 2 times, most recently from 42f5f13 to aa7ccbf Compare May 31, 2025 06:12
Base automatically changed from users/arsenm/amdgpu/improve-v8f16-v8bf16-copysign-lowering to main May 31, 2025 06:15
@arsenm arsenm force-pushed the users/arsenm/amdgpu/improve-v16f16-v16bf16-copysign-lowering branch from bae3443 to 05e5a9c Compare May 31, 2025 06:15
@arsenm arsenm merged commit 3aeffcf into main May 31, 2025
6 of 11 checks passed
@arsenm arsenm deleted the users/arsenm/amdgpu/improve-v16f16-v16bf16-copysign-lowering branch May 31, 2025 06:18
3 participants