
[AtomicExpand] Add bitcasts when expanding load atomic vector #120716


Open
Wants to merge 1 commit into base: users/jofrn/spr/main/80b9b6a7
14 changes: 14 additions & 0 deletions llvm/include/llvm/Target/TargetSelectionDAG.td
@@ -1904,6 +1904,20 @@ def atomic_load_64 :
let MemoryVT = i64;
}

def atomic_load_128_v2i64 :
PatFrag<(ops node:$ptr),
(atomic_load node:$ptr)> {
let IsAtomic = true;
let MemoryVT = v2i64;
}

def atomic_load_128_v4i32 :
PatFrag<(ops node:$ptr),
(atomic_load node:$ptr)> {
let IsAtomic = true;
let MemoryVT = v4i32;
Contributor commented on lines +1907 to +1918:
This patch should not require adding this, or touching any of the backend patterns.

Contributor Author replied:
The tests that use them must also have the changes from AtomicExpand.

}

def atomic_load_nonext_8 :
PatFrag<(ops node:$ptr), (atomic_load_nonext node:$ptr)> {
let IsAtomic = true; // FIXME: Should be IsLoad and/or IsAtomic?
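For context on the thread above, here is a minimal IR sketch (not part of this PR's diff) of the load these PatFrags are meant to catch. On a target where AtomicExpand leaves a 16-byte-aligned atomic vector load intact, instruction selection sees an atomic_load node whose MemoryVT is the vector type itself, which atomic_load_128_v2i64 can then match:

```llvm
; Assumes an AVX-capable x86-64 target where the load is *not* expanded
; to a libcall: the resulting DAG node is an atomic load with
; MemoryVT = v2i64, matched by the atomic_load_128_v2i64 PatFrag above.
define <2 x i64> @load_atomic_v2i64(ptr %p) {
  %v = load atomic <2 x i64>, ptr %p acquire, align 16
  ret <2 x i64> %v
}
```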
15 changes: 12 additions & 3 deletions llvm/lib/CodeGen/AtomicExpandPass.cpp
@@ -2066,9 +2066,18 @@ bool AtomicExpandImpl::expandAtomicOpToLibcall(
     I->replaceAllUsesWith(V);
   } else if (HasResult) {
     Value *V;
-    if (UseSizedLibcall)
-      V = Builder.CreateBitOrPointerCast(Result, I->getType());
-    else {
+    if (UseSizedLibcall) {
+      // Add bitcasts from Result's scalar type to I's <n x ptr> vector type
+      auto *PtrTy = dyn_cast<PointerType>(I->getType()->getScalarType());
+      auto *VTy = dyn_cast<VectorType>(I->getType());
+      if (VTy && PtrTy && !Result->getType()->isVectorTy()) {
Contributor commented on lines +2071 to +2073:
There should probably be a utility for this somewhere.

+        unsigned AS = PtrTy->getAddressSpace();
+        Value *BC = Builder.CreateBitCast(
+            Result, VTy->getWithNewType(DL.getIntPtrType(Ctx, AS)));
+        V = Builder.CreateIntToPtr(BC, I->getType());
Contributor suggested a change:
-        V = Builder.CreateIntToPtr(BC, I->getType());
+        V = Builder.CreateIntToPtr(BC, VTy);

+      } else
+        V = Builder.CreateBitOrPointerCast(Result, I->getType());
Contributor suggested a change:
-        V = Builder.CreateBitOrPointerCast(Result, I->getType());
+        V = Builder.CreateBitOrPointerCast(Result, VTy);

+    } else {
       V = Builder.CreateAlignedLoad(I->getType(), AllocaResult,
                                     AllocaAlignment);
       Builder.CreateLifetimeEnd(AllocaResult, SizeVal64);
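To see the new casts in action, here is a sketch of the sized-libcall path for the <2 x ptr> test added below, assuming an x86-64 datalayout (i64 integer-pointer type in address space 0, __atomic_load_16 returning i128, ordering 2 = acquire, matching the movl $2, %esi in the CHECK lines):

```llvm
; Input: the atomic pointer-vector load from the X86 test added below.
define <2 x ptr> @atomic_vec2_ptr_align(ptr %x) {
  %ret = load atomic <2 x ptr>, ptr %x acquire, align 16
  ret <2 x ptr> %ret
}

; Sketch of the expanded form: bitcast the scalar i128 libcall result to
; the integer-element vector <2 x i64>, then inttoptr to <2 x ptr>.
define <2 x ptr> @atomic_vec2_ptr_align.expanded(ptr %x) {
  %r = call i128 @__atomic_load_16(ptr %x, i32 2)
  %bc = bitcast i128 %r to <2 x i64>
  %ret = inttoptr <2 x i64> %bc to <2 x ptr>
  ret <2 x ptr> %ret
}

declare i128 @__atomic_load_16(ptr, i32)
```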
5 changes: 5 additions & 0 deletions llvm/lib/Target/X86/X86InstrCompiler.td
@@ -1211,6 +1211,11 @@ def : Pat<(v4i32 (scalar_to_vector (i32 (atomic_load_32 addr:$src)))),
def : Pat<(v2i64 (scalar_to_vector (i64 (atomic_load_64 addr:$src)))),
(MOV64toPQIrm addr:$src)>; // load atomic <2 x i32,float>

def : Pat<(v2i64 (atomic_load_128_v2i64 addr:$src)),
(VMOVAPDrm addr:$src)>; // load atomic <2 x i64>
def : Pat<(v4i32 (atomic_load_128_v4i32 addr:$src)),
(VMOVAPDrm addr:$src)>; // load atomic <4 x i32>
Contributor Author commented:
These are required for 128-bit vectors in SSE/AVX. The tests added in this PR require both the AtomicExpand change and these td records.

RKSimon (Collaborator) commented on Jun 2, 2025:
These require AVX/AVX512 variants (see below), but x86 doesn't guarantee atomics for anything above 8 bytes (and those must be aligned to avoid cache-line crossing).

RKSimon (Collaborator) commented on Jun 2, 2025:
I think all known AVX-capable x86 targets allow 16-byte aligned atomics; it's not official, but we assume it in X86TargetLowering::shouldExpandAtomicLoadInIR.


// Floating point loads/stores.
def : Pat<(atomic_store_32 (i32 (bitconvert (f32 FR32:$src))), addr:$dst),
(MOVSSmr addr:$dst, FR32:$src)>, Requires<[UseSSE1]>;
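A sketch illustrating the alignment point from the thread above (assumed behavior, mirroring the tests in this PR): only the 16-byte-aligned form may be selected to a single 128-bit vector move on AVX-capable targets; an under-aligned atomic load could straddle a cache line, so it is expected to take the __atomic_load_16 libcall path instead.

```llvm
; 16-byte aligned: assumed atomic on AVX-capable targets, so it can be
; selected to a single aligned 128-bit load (vmovaps/vmovapd).
define <2 x i64> @aligned_load(ptr %p) {
  %v = load atomic <2 x i64>, ptr %p acquire, align 16
  ret <2 x i64> %v
}

; Only 8-byte aligned: a 128-bit access could cross a cache line, so this
; is expected to be expanded to a __atomic_load_16 libcall instead.
define <2 x i64> @underaligned_load(ptr %p) {
  %v = load atomic <2 x i64>, ptr %p acquire, align 8
  ret <2 x i64> %v
}
```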
51 changes: 51 additions & 0 deletions llvm/test/CodeGen/ARM/atomic-load-store.ll
@@ -983,3 +983,54 @@ define void @store_atomic_f64__seq_cst(ptr %ptr, double %val1) {
store atomic double %val1, ptr %ptr seq_cst, align 8
ret void
}

define <1 x ptr> @atomic_vec1_ptr(ptr %x) #0 {
; ARM-LABEL: atomic_vec1_ptr:
; ARM: @ %bb.0:
; ARM-NEXT: ldr r0, [r0]
; ARM-NEXT: dmb ish
; ARM-NEXT: bx lr
;
; ARMOPTNONE-LABEL: atomic_vec1_ptr:
; ARMOPTNONE: @ %bb.0:
; ARMOPTNONE-NEXT: ldr r0, [r0]
; ARMOPTNONE-NEXT: dmb ish
; ARMOPTNONE-NEXT: bx lr
;
; THUMBTWO-LABEL: atomic_vec1_ptr:
; THUMBTWO: @ %bb.0:
; THUMBTWO-NEXT: ldr r0, [r0]
; THUMBTWO-NEXT: dmb ish
; THUMBTWO-NEXT: bx lr
;
; THUMBONE-LABEL: atomic_vec1_ptr:
; THUMBONE: @ %bb.0:
; THUMBONE-NEXT: push {r7, lr}
; THUMBONE-NEXT: movs r1, #0
; THUMBONE-NEXT: mov r2, r1
; THUMBONE-NEXT: bl __sync_val_compare_and_swap_4
; THUMBONE-NEXT: pop {r7, pc}
;
; ARMV4-LABEL: atomic_vec1_ptr:
; ARMV4: @ %bb.0:
; ARMV4-NEXT: push {r11, lr}
; ARMV4-NEXT: mov r1, #2
; ARMV4-NEXT: bl __atomic_load_4
; ARMV4-NEXT: pop {r11, lr}
; ARMV4-NEXT: mov pc, lr
;
; ARMV6-LABEL: atomic_vec1_ptr:
; ARMV6: @ %bb.0:
; ARMV6-NEXT: ldr r0, [r0]
; ARMV6-NEXT: mov r1, #0
; ARMV6-NEXT: mcr p15, #0, r1, c7, c10, #5
; ARMV6-NEXT: bx lr
;
; THUMBM-LABEL: atomic_vec1_ptr:
; THUMBM: @ %bb.0:
; THUMBM-NEXT: ldr r0, [r0]
; THUMBM-NEXT: dmb sy
; THUMBM-NEXT: bx lr
%ret = load atomic <1 x ptr>, ptr %x acquire, align 4
ret <1 x ptr> %ret
}
93 changes: 93 additions & 0 deletions llvm/test/CodeGen/X86/atomic-load-store.ll
@@ -860,6 +860,53 @@ define <2 x i32> @atomic_vec2_i32(ptr %x) nounwind {
ret <2 x i32> %ret
}

; Move td records to AtomicExpand
define <2 x ptr> @atomic_vec2_ptr_align(ptr %x) nounwind {
; CHECK-O3-LABEL: atomic_vec2_ptr_align:
; CHECK-O3: # %bb.0:
; CHECK-O3-NEXT: pushq %rax
; CHECK-O3-NEXT: movl $2, %esi
; CHECK-O3-NEXT: callq __atomic_load_16@PLT
; CHECK-O3-NEXT: movq %rdx, %xmm1
; CHECK-O3-NEXT: movq %rax, %xmm0
; CHECK-O3-NEXT: punpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm1[0]
; CHECK-O3-NEXT: popq %rax
; CHECK-O3-NEXT: retq
;
; CHECK-SSE-O3-LABEL: atomic_vec2_ptr_align:
; CHECK-SSE-O3: # %bb.0:
; CHECK-SSE-O3-NEXT: vmovaps (%rdi), %xmm0
; CHECK-SSE-O3-NEXT: retq
;
; CHECK-AVX-O3-LABEL: atomic_vec2_ptr_align:
; CHECK-AVX-O3: # %bb.0:
; CHECK-AVX-O3-NEXT: vmovaps (%rdi), %xmm0
; CHECK-AVX-O3-NEXT: retq
;
; CHECK-O0-LABEL: atomic_vec2_ptr_align:
; CHECK-O0: # %bb.0:
; CHECK-O0-NEXT: pushq %rax
; CHECK-O0-NEXT: movl $2, %esi
; CHECK-O0-NEXT: callq __atomic_load_16@PLT
; CHECK-O0-NEXT: movq %rdx, %xmm1
; CHECK-O0-NEXT: movq %rax, %xmm0
; CHECK-O0-NEXT: punpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm1[0]
; CHECK-O0-NEXT: popq %rax
; CHECK-O0-NEXT: retq
;
; CHECK-SSE-O0-LABEL: atomic_vec2_ptr_align:
; CHECK-SSE-O0: # %bb.0:
; CHECK-SSE-O0-NEXT: vmovapd (%rdi), %xmm0
; CHECK-SSE-O0-NEXT: retq
;
; CHECK-AVX-O0-LABEL: atomic_vec2_ptr_align:
; CHECK-AVX-O0: # %bb.0:
; CHECK-AVX-O0-NEXT: vmovapd (%rdi), %xmm0
; CHECK-AVX-O0-NEXT: retq
%ret = load atomic <2 x ptr>, ptr %x acquire, align 16
ret <2 x ptr> %ret
}

define <4 x i8> @atomic_vec4_i8(ptr %x) nounwind {
; CHECK-O3-LABEL: atomic_vec4_i8:
; CHECK-O3: # %bb.0:
@@ -903,6 +950,52 @@ define <4 x i16> @atomic_vec4_i16(ptr %x) nounwind {
ret <4 x i16> %ret
}

define <4 x ptr addrspace(270)> @atomic_vec4_ptr270(ptr %x) nounwind {
; CHECK-O3-LABEL: atomic_vec4_ptr270:
; CHECK-O3: # %bb.0:
; CHECK-O3-NEXT: pushq %rax
; CHECK-O3-NEXT: movl $2, %esi
; CHECK-O3-NEXT: callq __atomic_load_16@PLT
; CHECK-O3-NEXT: movq %rdx, %xmm1
; CHECK-O3-NEXT: movq %rax, %xmm0
; CHECK-O3-NEXT: punpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm1[0]
; CHECK-O3-NEXT: popq %rax
; CHECK-O3-NEXT: retq
;
; CHECK-SSE-O3-LABEL: atomic_vec4_ptr270:
; CHECK-SSE-O3: # %bb.0:
; CHECK-SSE-O3-NEXT: vmovaps (%rdi), %xmm0
; CHECK-SSE-O3-NEXT: retq
;
; CHECK-AVX-O3-LABEL: atomic_vec4_ptr270:
; CHECK-AVX-O3: # %bb.0:
; CHECK-AVX-O3-NEXT: vmovaps (%rdi), %xmm0
; CHECK-AVX-O3-NEXT: retq
;
; CHECK-O0-LABEL: atomic_vec4_ptr270:
; CHECK-O0: # %bb.0:
; CHECK-O0-NEXT: pushq %rax
; CHECK-O0-NEXT: movl $2, %esi
; CHECK-O0-NEXT: callq __atomic_load_16@PLT
; CHECK-O0-NEXT: movq %rdx, %xmm1
; CHECK-O0-NEXT: movq %rax, %xmm0
; CHECK-O0-NEXT: punpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm1[0]
; CHECK-O0-NEXT: popq %rax
; CHECK-O0-NEXT: retq
;
; CHECK-SSE-O0-LABEL: atomic_vec4_ptr270:
; CHECK-SSE-O0: # %bb.0:
; CHECK-SSE-O0-NEXT: vmovapd (%rdi), %xmm0
; CHECK-SSE-O0-NEXT: retq
;
; CHECK-AVX-O0-LABEL: atomic_vec4_ptr270:
; CHECK-AVX-O0: # %bb.0:
; CHECK-AVX-O0-NEXT: vmovapd (%rdi), %xmm0
; CHECK-AVX-O0-NEXT: retq
%ret = load atomic <4 x ptr addrspace(270)>, ptr %x acquire, align 16
ret <4 x ptr addrspace(270)> %ret
}

define <4 x half> @atomic_vec4_half(ptr %x) nounwind {
; CHECK-LABEL: atomic_vec4_half:
; CHECK: # %bb.0: