Fix some hyperthreading errors. #5295
base: develop

Conversation
Hi @XiWeiGu, @martin-frbg. On LoongArch systems with hyper-threading (SMT), I noticed that OpenBLAS fails to bind threads to physical CPU cores correctly, which can lead to suboptimal performance from contention between logical cores that share the same physical core. I've implemented fixes for some of these hyper-threading core-binding issues. Could you review the changes when you have time?
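For anyone trying to reproduce this, a quick way to see which logical CPUs share a physical core on Linux is to read the kernel's sysfs topology files. A minimal diagnostic sketch (standard Linux sysfs paths, nothing OpenBLAS-specific):

```c
/* Print which logical CPUs share a physical core with CPU 0.
 * On an SMT machine this prints something like "0,32" or "0-1". */
#include <stdio.h>

int main(void) {
  FILE *f = fopen("/sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "r");
  if (!f) { perror("fopen"); return 1; }
  char buf[64];
  if (fgets(buf, sizeof buf, f))
    printf("logical CPUs sharing cpu0's physical core: %s", buf);
  fclose(f);
  return 0;
}
```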
This is mostly original GotoBLAS code, last modified by Xianyi over ten years ago in response to #112. I must admit that I'm a bit wary of changing it globally; I wonder whether this is something more likely to occur on LoongArch hardware, or whether it has simply gone unreported all these years.
From my analysis of the code, this modification does make the logic compatible with a wider range of platforms. I've run functional tests on several x86 platforms without any issues, but the test coverage is far from all hardware platforms. If you're concerned about potential regressions elsewhere, I can restrict this modification to the LoongArch platform.
I recommend taking a more conservative approach: keep the logic unchanged for other platforms, and implement separate handling only for the LoongArch platform. |
Hi @XiWeiGu, @martin-frbg, I've updated the code, limiting the changes to the LoongArch platform. Could you review it when possible? |
LGTM, thanks!
1. When there are multiple NUMA nodes and hyper-threading causes adjacent logical cores to share a physical core (e.g., common->avail[i] = 0x5555555555555555UL), the numa_mapping function should not filter with the bitmask itself, as that would duplicate the masking already done by the subsequent local_cpu_map function; see the sketch after this list for how such redundant filtering can drop valid cores.
2. In the scenario described above, final_num_procs can no longer accurately represent the actual number of valid CPU cores, whereas num_procs can. num_procs is therefore used as a replacement, and final_num_procs has been removed.
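To make point 1 concrete, here is a minimal, hypothetical sketch in plain C. It is not the actual OpenBLAS code; the two stages only loosely stand in for numa_mapping and local_cpu_map, and the second filter is an assumed illustration of what "redundant masking" can do once the CPU list has already been compacted:

```c
/* Illustration: applying the SMT-sibling filter twice discards valid cores. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
  /* One logical CPU per physical core, as in common->avail[i] above. */
  uint64_t avail = 0x5555555555555555ULL;

  /* Stage 1 (loosely numa_mapping): compact available CPUs into a list.
   * This keeps 32 of the 64 logical CPUs: 0, 2, 4, ..., 62. */
  int mapping[64], n = 0;
  for (int cpu = 0; cpu < 64; cpu++)
    if (avail & (1ULL << cpu)) mapping[n++] = cpu;
  printf("after first filter: %d cores\n", n);          /* 32 */

  /* Stage 2 (loosely local_cpu_map): if the same every-other-entry filter
   * is applied again to the already-compacted list, every other *valid*
   * core is dropped, leaving only half of the usable cores. */
  int m = 0;
  for (int i = 0; i < n; i++)
    if (i % 2 == 0) mapping[m++] = mapping[i];
  printf("after redundant second filter: %d cores\n", m); /* 16, wrong */
  return 0;
}
```

This also shows point 2: the count of set bits in the mask (32, matching n after the first filter) is what num_procs should reflect, whereas iterating up to the highest logical CPU index (64, what final_num_procs amounted to here) overcounts.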