Optimized large SIMD butterflies for better register usage #134

Merged: 14 commits merged into master from optimized-butterflies on Feb 27, 2024

Conversation

ejmahler (Owner)

This PR rewrites the size-16, size-24, and size-32 SSE, NEON, and WASM SIMD butterflies. It optimizes their use of temporary variables to reduce the number of register spills, similar to the way the bigger AVX butterflies do: it loads data as late as possible, does as much work as possible on a single piece of data before moving on, then stores it as soon as possible.
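To illustrate the idea in the simplest possible setting, here is a hypothetical sketch (not the actual RustFFT code): eight independent size-2 butterflies across the two halves of a 16-element buffer, i.e. the add/subtract skeleton of a first radix-2 pass with twiddles omitted. Both functions compute the same result, but the second keeps only one pair of values live at a time instead of all 16, which is what keeps a register-constrained target from spilling.

```rust
// Hypothetical sketch, not the actual RustFFT code.

#[derive(Clone, Copy, Default)]
struct Complex32 {
    re: f32,
    im: f32,
}

fn add(a: Complex32, b: Complex32) -> Complex32 {
    Complex32 { re: a.re + b.re, im: a.im + b.im }
}

fn sub(a: Complex32, b: Complex32) -> Complex32 {
    Complex32 { re: a.re - b.re, im: a.im - b.im }
}

// Load everything up front: all 16 values stay live across the whole computation,
// which forces spills on a target with only 16 vector registers.
fn butterflies_load_everything(buffer: &mut [Complex32; 16]) {
    let loaded = *buffer;
    for i in 0..8 {
        buffer[i] = add(loaded[i], loaded[i + 8]);
        buffer[i + 8] = sub(loaded[i], loaded[i + 8]);
    }
}

// Load as late as possible, store as soon as possible: only one pair is live
// per iteration, so there is almost nothing to spill.
fn butterflies_load_late_store_early(buffer: &mut [Complex32; 16]) {
    for i in 0..8 {
        let a = buffer[i];
        let b = buffer[i + 8];
        buffer[i] = add(a, b);
        buffer[i + 8] = sub(a, b);
    }
}
```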

I tried applying the same transformation to the scalar butterflies and to the AVX butterflies that don't have it, or don't do it completely, but didn't see any benefit. For AVX at least, the optimizer aggressively resisted the code transformation and reordered the code to be less performant. I didn't look into the code generation for the scalar code, but it wouldn't surprise me if it just doesn't use registers very well in the first place.

SSE benefited the most from this change, because it's the most register-constrained: it has only 16 FP registers, while NEON has 32, and I assume WASM SIMD is JIT compiled and doesn't use registers optimally at all. The size-32 butterfly is faster for all 3 instruction sets though - fast enough that I was able to remove the section of code in the planner that switched between butterfly8 and butterfly32 as the radix-4 base. All 3 now use butterfly32 unconditionally.
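As a rough illustration of that planner change, here is a hypothetical sketch (not the actual RustFFT planner code): for a power-of-two length 2^n, a radix-4 plan needs a base butterfly whose exponent has the same parity as n, so the odd-exponent case could previously pick either the size-8 or the size-32 butterfly. With the size-32 butterfly now faster across the board, that choice collapses to always preferring 32.

```rust
// Hypothetical sketch of the planner simplification, not the actual RustFFT planner.
// `n` is the exponent of a power-of-two FFT length (len = 1 << n).
fn radix4_base_len(n: u32) -> usize {
    match n {
        // Small sizes are handled by a single butterfly; no radix-4 decomposition.
        0..=5 => 1 << n,
        // Even exponent: 16 * 4^k reaches the target length.
        _ if n % 2 == 0 => 16,
        // Odd exponent: 32 * 4^k reaches the target length.
        // Previously a heuristic sometimes picked 8 here instead.
        _ => 32,
    }
}
```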

Benchmarks (in each pair of results, the first value is before this change and the second is after):

SSE

test sse_butterfly32_16             ... bench:         135 ns/iter (+/- 2)
test sse_butterfly32_16             ... bench:          93 ns/iter (+/- 7)

test sse_butterfly64_16             ... bench:         142 ns/iter (+/- 4)
test sse_butterfly64_16             ... bench:         141 ns/iter (+/- 7)

test sse_butterfly32_24             ... bench:         205 ns/iter (+/- 4)
test sse_butterfly32_24             ... bench:         182 ns/iter (+/- 7)

test sse_butterfly64_24             ... bench:         264 ns/iter (+/- 6)
test sse_butterfly64_24             ... bench:         247 ns/iter (+/- 7)

test sse_butterfly32_32             ... bench:         304 ns/iter (+/- 17)
test sse_butterfly32_32             ... bench:         259 ns/iter (+/- 20)

test sse_butterfly64_32             ... bench:         436 ns/iter (+/- 11)
test sse_butterfly64_32             ... bench:         370 ns/iter (+/- 15)

NEON

test neon_butterfly32_16             ... bench:          63 ns/iter (+/- 0)
test neon_butterfly32_16             ... bench:          64 ns/iter (+/- 1)

test neon_butterfly64_16             ... bench:         112 ns/iter (+/- 4)
test neon_butterfly64_16             ... bench:         106 ns/iter (+/- 1)

test neon_butterfly32_24             ... bench:         113 ns/iter (+/- 3)
test neon_butterfly32_24             ... bench:         113 ns/iter (+/- 0)

test neon_butterfly64_24             ... bench:         199 ns/iter (+/- 3)
test neon_butterfly64_24             ... bench:         198 ns/iter (+/- 7)

test neon_butterfly32_32             ... bench:         198 ns/iter (+/- 6)
test neon_butterfly32_32             ... bench:         169 ns/iter (+/- 6)

test neon_butterfly64_32             ... bench:         338 ns/iter (+/- 11)
test neon_butterfly64_32             ... bench:         300 ns/iter (+/- 7)

WASM SIMD

test wasm_simd_butterfly32_16             ... bench:         116 ns/iter (+/- 3)
test wasm_simd_butterfly32_16             ... bench:         116 ns/iter (+/- 6)

test wasm_simd_butterfly64_16             ... bench:         182 ns/iter (+/- 11)
test wasm_simd_butterfly64_16             ... bench:         175 ns/iter (+/- 15)

test wasm_simd_butterfly32_24             ... bench:         208 ns/iter (+/- 9)
test wasm_simd_butterfly32_24             ... bench:         207 ns/iter (+/- 10)

test wasm_simd_butterfly64_24             ... bench:         309 ns/iter (+/- 20)
test wasm_simd_butterfly64_24             ... bench:         301 ns/iter (+/- 14)

test wasm_simd_butterfly32_32             ... bench:         348 ns/iter (+/- 10)
test wasm_simd_butterfly32_32             ... bench:         307 ns/iter (+/- 11)

test wasm_simd_butterfly64_32             ... bench:         507 ns/iter (+/- 17)
test wasm_simd_butterfly64_32             ... bench:         452 ns/iter (+/- 36)

This is independent of the WIP deinterleaving project in the other PR, but I did stumble across this optimization while testing different options for butterfly optimization. In particular, I determined that if the butterflies start out interleaved, deinterleaving them was not faster than just computing them interleaved - at least not on SSE, which is probably the best at handling interleaved multiplies thanks to the addsub instruction. I intend to test deinterleaved butterflies on the other platforms, but I'm reluctant to implement different algorithms on different platforms, so even if it's marginally faster to deinterleave NEON or WASM SIMD butterflies, I will probably leave them interleaved just to keep the implementations identical.
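For context on the addsub point, a minimal sketch (assuming SSE3, and not taken from RustFFT's actual code) of an interleaved complex multiply: two complex products stay in their natural [re, im, re, im] layout, and _mm_addsub_ps supplies exactly the subtract-even/add-odd sign pattern the product needs, so there is little to gain from deinterleaving into separate real and imaginary vectors first.

```rust
// Minimal sketch, assuming SSE3; not RustFFT's actual implementation.
#[cfg(target_arch = "x86_64")]
mod sketch {
    use core::arch::x86_64::*;

    // left  = [a.re, a.im, b.re, b.im]
    // right = [c.re, c.im, d.re, d.im]
    // returns [(a*c).re, (a*c).im, (b*d).re, (b*d).im]
    #[target_feature(enable = "sse3")]
    pub unsafe fn mul_complex_interleaved(left: __m128, right: __m128) -> __m128 {
        let left_re = _mm_moveldup_ps(left); // [a.re, a.re, b.re, b.re]
        let left_im = _mm_movehdup_ps(left); // [a.im, a.im, b.im, b.im]
        // Swap re/im within each complex number: [c.im, c.re, d.im, d.re]
        let right_swapped = _mm_shuffle_ps::<0xB1>(right, right);
        // addsub subtracts in even lanes and adds in odd lanes, matching
        // (re*re - im*im, re*im + im*re) for both products at once.
        _mm_addsub_ps(_mm_mul_ps(left_re, right), _mm_mul_ps(left_im, right_swapped))
    }
}
```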

I got an Apple M1 laptop so I could do NEON work directly, and this is my first time benchmarking with it. I'm very impressed with its performance compared to SSE on my relatively new Intel PC.

ejmahler merged commit 1043e40 into master on Feb 27, 2024
18 checks passed
ejmahler deleted the optimized-butterflies branch on February 27, 2024 at 06:09