Description
I took all of our benchmarks and ran them for the following configurations (a sketch of the corresponding job setup follows the list):
- .NET Core 2.1 using the native `CpuMathNative` library, which uses SSE
- .NET Core 3.0 using the new managed Hardware Intrinsics API, which uses AVX, with tiered compilation enabled
- .NET Core 3.0 using the new managed Hardware Intrinsics API, which uses AVX, with tiered compilation disabled
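For reference, the three runs can be expressed as BenchmarkDotNet jobs roughly like this. This is a sketch, not the exact configuration behind the numbers in this issue: the class name `TieredVsIntrinsicsConfig` and the job IDs are illustrative, and the API shown is the current BenchmarkDotNet surface, which may differ slightly from the 0.11.1 nightly listed below. Tiered compilation is toggled via the `COMPlus_TieredCompilation` environment variable.

```csharp
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;

// Hypothetical config comparing the three runs described above.
public class TieredVsIntrinsicsConfig : ManualConfig
{
    public TieredVsIntrinsicsConfig()
    {
        // Baseline: .NET Core 2.1 (native CpuMathNative / SSE path).
        AddJob(Job.Default.WithRuntime(CoreRuntime.Core21).WithId("Core 2.1"));

        // .NET Core 3.0, managed hardware intrinsics, tiered compilation on (the default).
        AddJob(Job.Default.WithRuntime(CoreRuntime.Core30).WithId("Core 3.0 Tiered"));

        // .NET Core 3.0, managed hardware intrinsics, tiered compilation disabled.
        AddJob(Job.Default.WithRuntime(CoreRuntime.Core30)
                          .WithEnvironmentVariable("COMPlus_TieredCompilation", "0")
                          .WithId("Core 3.0 NonTiered"));
    }
}
```

The config is then attached to a benchmark class with `[Config(typeof(TieredVsIntrinsicsConfig))]`.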
I will add a separate comment to this issue for every benchmark, with some analysis.
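For context on what the managed path looks like, here is a simplified, hypothetical sketch of an AVX-accelerated sum using `System.Runtime.Intrinsics.X86` with a scalar fallback. This is not the actual CpuMath code; it only illustrates the kind of managed code that takes the place of the native SSE routines.

```csharp
using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

public static class SumExample
{
    // Hypothetical illustration: sum a float span with AVX when available,
    // falling back to scalar code otherwise.
    public static unsafe float Sum(ReadOnlySpan<float> values)
    {
        float result = 0;
        int i = 0;

        if (Avx.IsSupported && values.Length >= 8)
        {
            fixed (float* p = values)
            {
                var acc = Vector256<float>.Zero;
                for (; i <= values.Length - 8; i += 8)
                    acc = Avx.Add(acc, Avx.LoadVector256(p + i)); // 8 floats per iteration

                // Horizontal reduce: spill the accumulator and add its lanes.
                float* tmp = stackalloc float[8];
                Avx.Store(tmp, acc);
                for (int j = 0; j < 8; j++)
                    result += tmp[j];
            }
        }

        for (; i < values.Length; i++) // scalar tail (and fallback when AVX is unavailable)
            result += values[i];

        return result;
    }
}
```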
Environment info (updated):
```
BenchmarkDotNet=v0.11.1.786-nightly, OS=Windows 10.0.17134.285 (1803/April2018Update/Redstone4)
Intel Xeon CPU E5-1650 v4 3.60GHz, 1 CPU, 12 logical and 6 physical cores
Frequency=3507500 Hz, Resolution=285.1033 ns, Timer=TSC
.NET Core SDK=2.2.100-preview2-009404
  [Host]             : .NET Core 2.1.4 (CoreCLR 4.6.26814.03, CoreFX 4.6.26814.02), 64bit RyuJIT
  Core 2.1           : .NET Core 2.1.4 (CoreCLR 4.6.26814.03, CoreFX 4.6.26814.02), 64bit RyuJIT
  Core 3.0 NonTiered : .NET Core 3.0.0-preview1-27004-04 (CoreCLR 4.6.27003.04, CoreFX 4.6.27003.02), 64bit RyuJIT
```