[SYCL] Redistribute USM aspects among CUDA devices #18782


Merged
merged 1 commit into intel:sycl from sycl-cuda-usm-aspects
Jun 5, 2025

Conversation

frasercrmck
Contributor

@frasercrmck frasercrmck commented Jun 3, 2025

We were previously reporting all USM aspects as supported on all CUDA devices. This is incorrect behaviour as many devices do not support USM system allocations, nor atomic host/shared USM allocations.

Unfortunately it is very difficult to get a conclusive list of which devices support which features.

Links such as [1] suggest that pageable memory access (which the UR adapter uses to determine the runtime equivalents of these aspects) is limited to a Grace Hopper device or newer, or to Linux systems with HMM enabled. HMM is not something we can currently determine at compile time for these aspects. This change is therefore conservative for older devices (SM6.X) with HMM enabled, where we will now report "false".

For atomic host/shared allocations, the documentation on the 'hostNativeAtomicSupported' property at [1] and [2] suggests that we need a hardware-coherent system, for which [3] suggests we again need at least a Grace Hopper device. However, note again that only "some" hardware-coherent systems support the host native atomics, "including" NVLink-connected devices. This is therefore not an exhaustive list and we can't derive anything conclusive from it. This change might again be conservative for architectures older than Grace Hopper.

In short, this PR essentially just punts the problem slightly further down the road and prevents these three USM aspects from being reported as supported for SM89 devices and earlier.

We were previously reporting all USM aspects as supported on all CUDA
devices. This is incorrect behaviour as many devices do not support USM
system allocations, nor atomic host/shared USM allocations.

Unfortunately it is very difficult to get a conclusive list of which
devices support which features.

Links such as [1] suggest that pageable memory access (which the UR
adapter uses to determine the runtime equivalents of these aspects) is
limited to at least a Grace Hopper device, and possibly only with Linux
systems with HMM enabled. This is not something we can currently
determine at compile time for these aspects. Nevertheless, we can
probably assume that devices below Grace Hopper cannot support USM
system allocations.

For atomic host/shared allocations, the documentation on the
'hostNativeAtomicSupported' property at [1] and [2] suggests that we
need a hardware-coherent system, for which [3] suggests we again need
at least a Grace Hopper device. However, note again that only "some"
hardware-coherent systems support the host native atomics, "including"
NVLink-connected devices. This is therefore not an exhaustive list and
we can't derive anything conclusive from it.

In short, this PR essentially just punts the problem slightly further
down the road and prevents these three USM aspects from being reported
as supported for SM89 devices and earlier.

[1]: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#system-requirements-for-unified-memory
[2]: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#host-native-atomics
[3]: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#cpu-and-gpu-page-tables-hardware-coherency-vs-software-coherency
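
The conservative compile-time rule described above can be sketched as follows. This is a minimal illustration only, not the actual DeviceConfigFile code; the function name and the integer SM encoding are hypothetical:

```cpp
// Minimal sketch (hypothetical, not the actual DeviceConfigFile logic) of
// the conservative rule described above: the system / atomic-host /
// atomic-shared USM aspects are reported as supported only for SM90
// (Hopper generation) and newer, so SM89 devices and earlier now report
// false at compile time.
bool reports_usm_aspect_conservatively(int sm_version) {
    return sm_version >= 90; // SM90 and newer: true; SM89 and earlier: false
}

// Example: reports_usm_aspect_conservatively(75) is false (e.g. an SM75
// device), while reports_usm_aspect_conservatively(90) is true.
```

A capable system (e.g. SM7.5 with HMM) can still discover support by re-querying the aspect through a runtime device query.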
@frasercrmck frasercrmck requested a review from a team as a code owner June 3, 2025 11:15
@frasercrmck frasercrmck requested a review from GeorgeWeb June 3, 2025 11:15
@frasercrmck frasercrmck requested a review from ldrumm June 3, 2025 11:15
Contributor

@ldrumm ldrumm left a comment

Good writeup. Too much of my knowledge of what is and isn't supported on NVIDIA has come as rumours from press-releases and blog posts rather than real technical docs

@frasercrmck
Contributor Author

frasercrmck commented Jun 3, 2025

> Good writeup. Too much of my knowledge of what is and isn't supported on NVIDIA has come as rumours from press-releases and blog posts rather than real technical docs

Unfortunately @GeorgeWeb's device is reporting sycl::aspect::usm_system_allocations: 1 on SM75 but with Linux HMM. So I've probably got something wrong and now we're too conservative.

It looks as if that's because of HMM, which we can't determine in the compiler. I've updated the PR description to explain why we report 'false' for such situations. This might be something we can enhance in the future, but it's inherently limited as HMM is not a property of the CUDA device, but of the host system, operating system, etc.

@GeorgeWeb
Contributor

These docs https://docs.nvidia.com/cuda/archive/12.1.0/pascal-tuning-guide/index.html#unified-memory-improvements support the fact that, on supported operating systems, system memory can be accessed from the GPU. However, as @frasercrmck already said, this relies on the Linux kernel, the CUDA driver's open kernel modules, etc. supporting HMM (Heterogeneous Memory Management) in CUDA.

This blog post https://developer.nvidia.com/blog/simplifying-gpu-application-development-with-heterogeneous-memory-management/#enabling_and_detecting_hmm is an interesting read and shows how to check whether your NVIDIA device supports memory addressing via HMM by using nvidia-smi.

Just as extra demonstration:
My GeForce GTX 1650 (SM 7.5 with HMM), running a suitable Linux kernel version and a CUDA driver with the open kernel modules, is an example of system USM being supported on older cards too, as long as the requirements are met.

NVIDIA-SMI 560.35.05              Driver Version: 560.35.05      CUDA Version: 12.6
$ nvidia-smi --query-gpu=compute_cap --format=csv
compute_cap
7.5
$ nvidia-smi -q | grep Addressing
Addressing Mode : HMM

That being said, judging by Fraser's case, it cannot be 100% guaranteed that all NVIDIA GPUs prior to GH (SM 9.0) will support the feature out of the box, so being conservative here for the compile-time/static value of the aspect is probably okay. Currently we have no good way of determining this, as the HMM property is not part of the CUDA device.

If the user's system does have the required setup for Cuda's USM System Memory (and a device newer than SM 6.x) and they need to use the feature safely, they can simply re-query the value for that aspect via a runtime device query in SYCL.
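
As a side note, the HMM check described above could be automated by scanning captured `nvidia-smi -q` output for the Addressing Mode field. A minimal sketch, with a hypothetical helper name and assuming the output format shown in the comment above:

```cpp
#include <sstream>
#include <string>

// Hypothetical helper (illustration only): extract the value of the
// "Addressing Mode" field from captured `nvidia-smi -q` output. The
// value is "HMM" when HMM-based system memory addressing is active.
std::string addressing_mode(const std::string& smi_output) {
    std::istringstream in(smi_output);
    std::string line;
    while (std::getline(in, line)) {
        if (line.find("Addressing Mode") == std::string::npos)
            continue;
        auto colon = line.find(':');
        if (colon == std::string::npos)
            continue;
        std::string value = line.substr(colon + 1);
        // Trim surrounding whitespace from the field value.
        auto b = value.find_first_not_of(" \t\r");
        auto e = value.find_last_not_of(" \t\r");
        return (b == std::string::npos) ? std::string{}
                                        : value.substr(b, e - b + 1);
    }
    return {};
}

// Example: addressing_mode("    Addressing Mode : HMM") yields "HMM".
```

Of course, this only detects the host system's configuration at runtime, so it would not help with the compile-time DeviceConfigFile value this PR changes.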

Contributor

@GeorgeWeb GeorgeWeb left a comment

Left a larger general info comment separately.

LGTM. This more conservative approach seems best for now for the DeviceConfigFile.

@frasercrmck
Contributor Author

@intel/llvm-gatekeepers this is ready to merge, thanks

@kbenzie
Contributor

kbenzie commented Jun 5, 2025

Approval is still required from @intel/dpcpp-tools-reviewers

@frasercrmck
Contributor Author

> Approval is still required from @intel/dpcpp-tools-reviewers

How did I miss that??

Contributor

@maarquitos14 maarquitos14 left a comment

LGTM.

@ldrumm ldrumm merged commit cfc803c into intel:sycl Jun 5, 2025
24 checks passed
@frasercrmck frasercrmck deleted the sycl-cuda-usm-aspects branch June 5, 2025 13:56