JIT: Unblock Vector###<long> intrinsics on x86 #112728
Conversation
This is ready for review.
cc @tannergooding
// Keep casts with operands usable from memory.
if (castOp->isContained() || castOp->IsRegOptional())
{
    return op;
}
This condition, added in #72719, made this method effectively useless. Removing it was a zero-diff change. I can look in future at containing the casts rather than removing them.
@@ -4677,19 +4539,16 @@ GenTree* Lowering::LowerHWIntrinsicCreate(GenTreeHWIntrinsic* node)
        return LowerNode(node);
    }

    GenTree* op2 = node->Op(2);

    // TODO-XArch-AVX512 : Merge the NI_Vector512_Create and NI_Vector256_Create paths below.
The churn in this section is just taking care of this TODO.
assert(comp->compIsaSupportedDebugOnly(InstructionSet_SSE2));

tmp2 = InsertNewSimdCreateScalarUnsafeNode(TYP_SIMD16, op2, simdBaseJitType, 16);
LowerNode(tmp2);

node->ResetHWIntrinsicId(NI_SSE_MoveLowToHigh, tmp1, tmp2);
Changing this to UnpackLow shows up as a regression in a few places, because movlhps is one byte smaller, but it enables other optimizations, since unpcklpd takes a memory operand and supports embedded masking and broadcast.

Vector128.Create(double, 1.0):
- vmovups xmm0, xmmword ptr [reloc @RWD00]
- vmovlhps xmm0, xmm1, xmm0
+ vunpcklpd xmm0, xmm1, qword ptr [reloc @RWD00] {1to2}
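For reference, a minimal C# sketch of the shape that produces the diff above (the method name is assumed, not taken from the PR); with the UnpackLow lowering, the 1.0 constant can be folded as an embedded-broadcast memory operand:

```csharp
using System.Runtime.Intrinsics;

static class CreateExample
{
    // Creates { x, 1.0 }; the constant second element comes from a data section,
    // which vunpcklpd can consume directly as a broadcast memory operand.
    static Vector128<double> WithOne(double x) => Vector128.Create(x, 1.0);
}
```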
This should probably be peepholed back to vmovlhps if both operands come from registers.
I was thinking the same but would rather save that for a followup. LLVM has a replacement list of equivalent instructions that have different sizes, and unpcklpd is on it, as are things like vpermilps, which is replaced by pshufd.

It's worth having a discussion about whether we'd also want to do replacements that switch between float and integer domains. I'll open an issue.
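To make the idea concrete, here is a hedged sketch of what such an equivalence list might look like; the shape and entries are illustrative only, not LLVM's or RyuJIT's actual tables:

```csharp
// Pairs of instructions that produce the same register-to-register result but
// encode at different sizes. CrossesDomain marks replacements that would move
// between the float and integer execution domains (the open question above).
record ShorterEquivalent(string Wider, string Shorter, bool CrossesDomain);

static class InsTables
{
    static readonly ShorterEquivalent[] Entries =
    {
        new("unpcklpd",  "movlhps", CrossesDomain: false),
        new("vpermilps", "pshufd",  CrossesDomain: true),
    };
}
```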
I looked at this again, and there's actually only a size difference for legacy SSE encoding, so it's probably not worth special casing.
if (varDsc->lvIsParam)
{
    // Promotion blocks combined read optimizations for SIMD loads of long params
    return;
}
In isolation, this change produced a small number of diffs and was mostly an improvement. A few regressions show up in the SPMI reports, but the overall impact is good, especially considering the places where we can now load a long to vector with movq.

It occurred to me the optimization to emit …
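A minimal C# illustration of the long-param case described above (the method name is assumed; this is not code from the PR):

```csharp
using System.Runtime.Intrinsics;

static class LongParamExample
{
    // With the long parameter no longer promoted into two int fields, the value
    // can be read as a single combined load into an XMM register via movq.
    static Vector128<long> FromLong(long value) => Vector128.CreateScalar(value);
}
```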
cc @dotnet/jit-contrib, @EgorBo, @BruceForstall for secondary review
Resolves #11626
This resolves a large number of TODOs around HWIntrinsic expansion involving scalar longs on x86.
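For context, a hypothetical example (not from the PR) of the kind of Vector128&lt;long&gt; code affected on 32-bit x86, where a long does not fit in a single general-purpose register:

```csharp
using System.Runtime.Intrinsics;

static class X86LongExample
{
    // On 32-bit x86 the scalar long spans two 32-bit GPRs, which previously
    // blocked intrinsic expansion for these patterns; with this change they
    // expand, using instructions such as movq for the 64-bit moves.
    static Vector128<long> Splat(long value) => Vector128.Create(value);

    static long Low(Vector128<long> v) => v.ToScalar();
}
```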
The most significant change here is in promoting CreateScalar and ToScalar to be code generating intrinsics instead of converting them to other intrinsics at lowering. This was necessary in order to handle emitting movq for scalar long loads/stores, but it also unlocks several other optimizations, since we can now allow CreateScalar and ToScalar to be contained and can specialize codegen depending on whether they end up loading/storing from/to memory or not. Some example improvements on x64 (a sketch of these patterns follows the list):

Vector128.CreateScalar(ref float):
Vector128.CreateScalar(ref double):
ref byte = Vector128<byte>.ToScalar():
Vector<byte>.ToScalar()

And the less realistic, but still interesting, Sse.AddScalar(Vector128.CreateScalar(ref float), Vector128.CreateScalar(ref float)).ToScalar():
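A hedged C# sketch of the patterns named above (method and parameter names are assumed; the PR's actual assembly diffs are not reproduced here):

```csharp
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class ContainmentExamples
{
    // CreateScalar from memory: the load can now be contained in the intrinsic
    // instead of going through a general-purpose register first.
    static Vector128<float> LoadScalar(ref float src) => Vector128.CreateScalar(src);

    // ToScalar to memory: the store can likewise come straight from the vector register.
    static void StoreScalar(Vector128<byte> v, ref byte dst) => dst = v.ToScalar();

    // The "less realistic, but still interesting" round-trip from the description.
    static float AddScalars(ref float a, ref float b) =>
        Sse.AddScalar(Vector128.CreateScalar(a), Vector128.CreateScalar(b)).ToScalar();
}
```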
This also removes some redundant casts for CreateScalar of small types. Previously, a zero-extending cast was inserted unconditionally and was sometimes removed by peephole opt on x64 but often wasn't. Examples (sketched below):

Vector128.CreateScalar(short):
Vector128.CreateScalar(checked((byte)val)):
Vector128.CreateScalar(ref sbyte):
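A hedged C# sketch of those small-type patterns (names assumed; not code from the PR):

```csharp
using System.Runtime.Intrinsics;

static class SmallTypeExamples
{
    // Per the description above, the zero-extending cast of the scalar is no
    // longer inserted unconditionally for cases like these.
    static Vector128<short> FromShort(short value) => Vector128.CreateScalar(value);

    static Vector128<byte> FromCheckedByte(int val) =>
        Vector128.CreateScalar(checked((byte)val));

    static Vector128<sbyte> FromRef(ref sbyte src) => Vector128.CreateScalar(src);
}
```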
x86 diffs are much more significant, because of the newly-enabled intrinsic expansion: