
JIT: Unblock Vector###<long> intrinsics on x86 #112728


Merged: 20 commits into dotnet:main on Mar 20, 2025

Conversation

saucecontrol (Member) commented Feb 20, 2025

Resolves #11626

This resolves a large number of TODOs around HWIntrinsic expansion involving scalar longs on x86.

The most significant change here is promoting CreateScalar and ToScalar to be code-generating intrinsics instead of converting them to other intrinsics at lowering. This was necessary in order to emit movq for scalar long loads/stores, but it also unlocks several other optimizations: CreateScalar and ToScalar can now be contained, and codegen can be specialized depending on whether they end up loading/storing from/to memory. Some example improvements on x64:

Vector128.CreateScalar(ref float):

-       vinsertps xmm0, xmm0, dword ptr [rbp+0x10], 14
+       vmovss   xmm0, dword ptr [rbp+0x10]

Vector128.CreateScalar(ref double):

-       vxorps   xmm0, xmm0, xmm0
-       vmovsd   xmm1, qword ptr [rbp-0x08]
-       vmovsd   xmm0, xmm0, xmm1
+       vmovsd   xmm0, qword ptr [rbp-0x08]

ref byte = Vector128<byte>.ToScalar():

-       vmovd    r9d, xmm3
-       mov      byte  ptr [r10], r9b
+       vpextrb  byte  ptr [r10], xmm3, 0

Vector<byte>.ToScalar():

-       vmovups  ymm0, ymmword ptr [esp+0x04]
-       vmovd    eax, xmm0
-       movzx    eax, al
+       movzx    eax, byte  ptr [esp+0x04]

And the less realistic, but still interesting,
Sse.AddScalar(Vector128.CreateScalar(ref float), Vector128.CreateScalar(ref float)).ToScalar():

-       xorps    xmm0, xmm0
-       movss    xmm1, dword ptr [rcx]
-       movss    xmm0, xmm1
-       xorps    xmm1, xmm1
-       movss    xmm2, dword ptr [rdx]
-       movss    xmm1, xmm2
-       addss    xmm0, xmm1
+       movss    xmm0, dword ptr [rcx]
+       addss    xmm0, dword ptr [rdx]
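
For context, here is a minimal C# sketch (illustrative class and method names, not taken from the PR) of the kind of source that produces the diff above. Because CreateScalar can now be contained, each float is loaded straight from memory into movss/addss instead of being built up through zeroed temporaries.

using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class Example
{
    // Illustrative only: with containment, the JIT can fold both memory loads
    // directly into the scalar add rather than materializing zeroed vectors first.
    public static float AddScalars(ref float a, ref float b) =>
        Sse.AddScalar(Vector128.CreateScalar(a), Vector128.CreateScalar(b)).ToScalar();
}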

This also removes some redundant casts for CreateScalar of small types. Previously, a zero-extending cast was inserted unconditionally; it was sometimes removed by the peephole optimizer on x64 but often wasn't.

Vector128.CreateScalar(short):

-       movsx    rax, dx
-       movzx    rax, ax
-       movd     xmm0, rax
+       movzx    rax, dx
+       movd     xmm0, eax

Vector128.CreateScalar(checked((byte)val)):

        cmp      edx, 255
        ja       SHORT G_M000_IG04
        mov      eax, edx
-       movzx    rax, al
-       vmovd    xmm0, rax
+       vmovd    xmm0, eax

Vector128.CreateScalar(ref sbyte):

-       movsx    rax, byte  ptr [rdx]
-       movzx    rax, al
-       vmovd    xmm0, rax
+       movzx    rax, byte  ptr [rdx]
+       vmovd    xmm0, eax
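
For reference, a minimal C# sketch of the small-type pattern above (the helper name is hypothetical, not from the PR). Only the low 8 bits end up in the vector element, so a single zero-extending load is sufficient and the extra cast-inserted movzx is no longer emitted.

using System.Runtime.Intrinsics;

static class SmallTypeExample
{
    // Illustrative only: the value is read with one zero-extending movzx;
    // the earlier sign-extend + zero-extend pair was redundant.
    public static Vector128<sbyte> LoadScalar(ref sbyte value) =>
        Vector128.CreateScalar(value);
}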

x86 diffs are much more significant, because of the newly-enabled intrinsic expansion:

| Collection | Base size (bytes) | Diff size (bytes) | PerfScore in Diffs |
|---|---|---|---|
| benchmarks.run.windows.x86.checked.mch | 7,149,204 | -1,892 | -2.17% |
| benchmarks.run_pgo.windows.x86.checked.mch | 46,986,713 | -738 | +0.03% |
| benchmarks.run_tiered.windows.x86.checked.mch | 9,470,045 | -976 | +0.11% |
| coreclr_tests.run.windows.x86.checked.mch | 320,065,247 | -205,564 | -6.41% |
| libraries.crossgen2.windows.x86.checked.mch | 31,314,339 | -15,854 | -4.11% |
| libraries.pmi.windows.x86.checked.mch | 34,326,245 | -14,416 | -2.19% |
| libraries_tests.run.windows.x86.Release.mch | 215,517,600 | -55,366 | -2.41% |
| libraries_tests_no_tiered_compilation.run.windows.x86.Release.mch | 115,783,488 | -80,576 | -3.65% |
| realworld.run.windows.x86.checked.mch | 9,587,950 | -467 | -0.45% |

saucecontrol (Member, Author) left a comment:

This is ready for review.
cc @tannergooding

Comment on lines -489 to -504
// Keep casts with operands usable from memory.
if (castOp->isContained() || castOp->IsRegOptional())
{
return op;
}
saucecontrol (Member, Author) commented:

This condition, added in #72719, made this method effectively useless. Removing it was a zero-diff change. I can look in future at containing the casts rather than removing them.

@@ -4677,19 +4539,16 @@ GenTree* Lowering::LowerHWIntrinsicCreate(GenTreeHWIntrinsic* node)
return LowerNode(node);
}

GenTree* op2 = node->Op(2);

// TODO-XArch-AVX512 : Merge the NI_Vector512_Create and NI_Vector256_Create paths below.
saucecontrol (Member, Author) commented:

The churn in this section is just taking care of this TODO.


assert(comp->compIsaSupportedDebugOnly(InstructionSet_SSE2));

tmp2 = InsertNewSimdCreateScalarUnsafeNode(TYP_SIMD16, op2, simdBaseJitType, 16);
LowerNode(tmp2);

node->ResetHWIntrinsicId(NI_SSE_MoveLowToHigh, tmp1, tmp2);
saucecontrol (Member, Author) commented Feb 22, 2025:

Changing this to UnpackLow shows up as a regression in a few places because movlhps is one byte smaller, but it enables other optimizations, since unpcklpd can take a memory operand as well as an EVEX mask and embedded broadcast.

Vector128.Create(double, 1.0):

-       vmovups  xmm0, xmmword ptr [reloc @RWD00]
-       vmovlhps xmm0, xmm1, xmm0
+       vunpcklpd xmm0, xmm1, qword ptr [reloc @RWD00] {1to2}

A reviewer (Member) commented:

This should probably be peepholed back to vmovlhps if both operands come from registers.

saucecontrol (Member, Author) replied:

I was thinking the same but would rather save that for a follow-up. LLVM has a replacement list of equivalent instructions that have different sizes; unpcklpd is on it, as are things like vpermilps, which is replaced by pshufd.

It's worth having a discussion about whether we'd also want to do replacements that switch between float and integer domains. I'll open an issue.

saucecontrol (Member, Author) commented:

I looked at this again, and there's actually only a size difference for legacy SSE encoding, so it's probably not worth special casing.

Comment on lines +2391 to +2395
if (varDsc->lvIsParam)
{
// Promotion blocks combined read optimizations for SIMD loads of long params
return;
}
saucecontrol (Member, Author) commented:

In isolation, this change produced a small number of diffs and was mostly an improvement. A few regressions show up in the SPMI reports, but the overall impact is good, especially considering the places where we can now load a long into a vector with movq.
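
As a rough illustration (hypothetical method, not from the PR) of the pattern this affects: on 32-bit x86, if a long parameter is promoted into two int halves, it cannot be read as a single 8-byte value; keeping it unpromoted lets the JIT load it into an xmm register with one movq.

using System.Runtime.Intrinsics;

static class LongParamExample
{
    // Illustrative only: blocking promotion of the long parameter allows a single
    // 8-byte movq load on x86 instead of assembling the vector from two 4-byte halves.
    public static Vector128<long> FromLong(long value) =>
        Vector128.CreateScalar(value);
}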

@saucecontrol saucecontrol marked this pull request as ready for review February 22, 2025 00:18
saucecontrol (Member, Author) commented:

It occurred to me that the optimization to emit pinsrb/w for CreateScalarUnsafe was a bad idea: those instructions merge into the destination register, so they create a false dependency on its upper bits. Removed that.

tannergooding (Member) left a comment:

CC @dotnet/jit-contrib, @EgorBo, @BruceForstall for secondary review.

@BruceForstall BruceForstall merged commit 16236fd into dotnet:main Mar 20, 2025
110 checks passed
@saucecontrol saucecontrol deleted the createscalar64 branch March 20, 2025 01:00
jakobbotsch pushed a commit that referenced this pull request Mar 25, 2025
* fix cast asserts

* fix containment of CreateScalar

* add tests
github-actions bot locked and limited conversation to collaborators Apr 19, 2025
Labels
area-CodeGen-coreclr: CLR JIT compiler in src/coreclr/src/jit and related components such as SuperPMI
community-contribution: Indicates that the PR has been added by a community member
Development

Successfully merging this pull request may close these issues.

Investigate emitting movq for the CreateScalarUnsafe helper intrinsics that take a long/ulong on x86
4 participants