
Commit a21b95b

jkotas authored and dotnet-bot committed
Workaround memset alignment sensitivity (dotnet/coreclr#24302)
memset is up to 2x slower on a misaligned block on some types of hardware. The problem is the uneven performance of "rep stosb", which is used to implement memset in some cases. The exact matrix of when it is slower, and by how much, is very complex.

This change works around the issue by aligning the memory block before it is passed to memset and filling in the potentially misaligned part manually. The workaround will regress performance by a few percent (<10%) in some cases, but gains up to a 2x improvement in other cases.

Fixes dotnet#24300

Signed-off-by: dotnet-bot <dotnet-bot@microsoft.com>
1 parent ae8426b commit a21b95b
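The align-then-fill workaround described in the commit message can be sketched as follows. This is an illustrative C sketch, not the actual CoreCLR code: it clears the bytes before the first 16-byte boundary by hand, so the bulk memset call starts on an aligned address (the 16-byte alignment target and the function name `zero_aligned` are assumptions for illustration).

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of the workaround: clear the misaligned head
 * manually, then hand an aligned block to memset so "rep stosb" style
 * implementations start on an aligned address. */
static void zero_aligned(unsigned char *dst, size_t len)
{
    /* Number of bytes until dst reaches 16-byte alignment (0 if already aligned). */
    size_t head = (size_t)(-(uintptr_t)dst & 15);
    if (head > len)
        head = len;

    for (size_t i = 0; i < head; i++)   /* fill the misaligned part by hand */
        dst[i] = 0;

    memset(dst + head, 0, len - head);  /* bulk clear now starts aligned */
}
```

The head loop is at most 15 iterations, so its cost is bounded regardless of block size; the trade-off the commit message describes is that this small fixed overhead can regress short clears slightly while avoiding the misaligned slow path on large ones.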

File tree

1 file changed (+3, -1 lines)


src/Common/src/CoreLib/System/SpanHelpers.cs

Lines changed: 3 additions & 1 deletion
@@ -24,7 +24,9 @@ public static unsafe void ClearWithoutReferences(ref byte b, nuint byteLength)
             return;

 #if CORECLR && (AMD64 || ARM64)
-        if (byteLength > 4096)
+        // The exact matrix on when RhZeroMemory is faster than InitBlockUnaligned is very complex. The factors to consider include
+        // the type of hardware and memory alignment. This threshold was chosen as a good balance across different configurations.
+        if (byteLength > 768)
             goto PInvoke;
         Unsafe.InitBlockUnaligned(ref b, 0, (uint)byteLength);
         return;

Comments (0)