Add intrinsic for dynamic shared memory #146181
| rustbot has assigned @petrochenkov. Use … |
| Some changes occurred in src/tools/compiletest (cc @jieyouxu). Some changes occurred in compiler/rustc_codegen_ssa. Some changes occurred to the intrinsics. Make sure the CTFE / Miri interpreter … |
      
        
force-pushed from 0aa0e58 to 3ebaccb
    
      
        
force-pushed from 3ebaccb to 2378959
            
          
library/core/src/intrinsics/mod.rs (outdated)
| #[rustc_nounwind] |
| #[unstable(feature = "dynamic_shared_memory", issue = "135513")] |
| #[cfg(any(target_arch = "amdgpu", target_arch = "nvptx64"))] |
| pub fn dynamic_shared_memory<T: ?Sized>() -> *mut T; |
Note that outside the GPU world, "shared memory" typically refers to memory shared between processes. So I would suggest using a name that's less likely to be confused, like something that explicitly involves "GPU" or so.
This sounds like a form of "global" memory (similar to a static item), but then apparently OpenCL calls it "local" which is very confusing...
Does it make sense to add a mod gpu?
I think there are more intrinsics for GPUs that could be added (although more in the traditional intrinsic sense, relating to an instruction; edit: re-exposing intrinsics from core::arch::nvptx and the amdgpu equivalent).
Or should it be in core::arch::gpu?
(From #135516 (comment), cc @workingjubilee)
Rust intrinsic names are not namespaced. They are exposed in a module, but inside the compiler they are identified entirely by their name. So moving them into a different module doesn't alleviate the need for a clear name that will be understandable to non-GPU people working in the compiler (which is the vast majority of compiler devs).
If there's more GPU intrinsics to come, moving them into a gpu.rs file here still might make sense.
I don't have a strong opinion on how the eventually stable public API is organized; I am commenting entirely as someone who has an interest in keeping the set of intrinsics the Rust compiler offers understandable and well-defined (the ones in this folder, not the ones in core::arch, which you call "more traditional" but that's very dependent on your background ;)).

These intrinsics are just an implementation detail, but every intrinsic we add here is a new language primitive -- it's like adding a new keyword, just without the syntax discussions and perma-unstable. In the past we used to have intrinsics that entirely broke the internal consistency of the language, and intrinsics whose safety requirements were very poorly documented.
        
          
library/core/src/intrinsics/mod.rs (outdated)
| /// All pointers returned by `dynamic_shared_memory` point to the same address, |
| /// so alias the same memory. |
Does this mean that all dynamic_shared_memory::<T> for the same T return the same pointer, or do even dynamic_shared_memory::<T> and dynamic_shared_memory::<U> point to the same memory?
All of them alias, independent of the type.
It’s probably worth explaining the concept of shared memory in the comment?
Maybe this makes it clearer:

> Returns a pointer to dynamic shared memory.
>
> Shared memory is a memory region that is shared between all threads in the same block/work-group. It is usually faster than global memory, which is shared between all threads on a GPU.
>
> Dynamic shared memory is in the shared memory region, though the allocated size is specified late, when launching a gpu-kernel. The pointer returned by `dynamic_shared_memory()` is the start of the dynamic shared memory region. All calls to `dynamic_shared_memory` in a block/work-group, independent of the generic type, return the same address, so alias the same memory. The returned pointer is aligned by at least the alignment of `T`.
>
> # Other APIs
>
> CUDA and HIP call this shared memory, shared between threads in a block.
> OpenCL and SYCL call this local memory, shared between threads in a work-group.
> GLSL calls this shared memory, shared between invocations in a work group.
> DirectX calls this groupshared memory, shared between threads in a thread-group.
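To make the aliasing and alignment rules concrete, here is a minimal sketch (not from the PR; it assumes the feature gate and intrinsic path from the diff above, and that the call itself is safe, as the signature suggests):

```rust
#![feature(core_intrinsics, dynamic_shared_memory)]
// Only compiles for `target_arch = "amdgpu"` or `"nvptx64"`, per the
// `#[cfg]` on the intrinsic declaration.
use core::intrinsics::dynamic_shared_memory;

fn aliasing_demo() {
    // Both calls return the start of the same dynamic shared memory
    // region, independent of the generic type...
    let a: *mut u32 = dynamic_shared_memory::<u32>();
    let b: *mut [f64; 8] = dynamic_shared_memory::<[f64; 8]>();
    // ...so the two pointers alias:
    assert_eq!(a as usize, b as usize);
    // The only per-call difference is the alignment guarantee: `b` is
    // aligned to at least `align_of::<[f64; 8]>()`, i.e. 8 bytes.
}
```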
> It’s probably worth explaining the concept of shared memory in the comment?
It's probably worth using a more specific term ("GPU shared memory" or so) since many people reading this will think "shared memory" refers to its more standard meaning of memory shared across different processes (often set up via mmap). It's unfortunate that GPU vendors chose to overload this term, but when working in a more general-purpose codebase you can't just assume everyone to know the conventions of the GPU community, and you can't give general-purpose terms a GPU-specific meaning without risking confusion.
> Other APIs
>
> CUDA and HIP call this shared memory, shared between threads in a block.
> OpenCL and SYCL call this local memory, shared between threads in a work-group.
> GLSL calls this shared memory, shared between invocations in a work group.
> DirectX calls this groupshared memory, shared between threads in a thread-group.
This sort of "translation guide" is not actually useful if you are not familiar with any of these things, so I would just leave it out as it is a distraction from the actual description. Especially since it's very easy to go look up a definition of, say, OpenCL's local memory, see it referred to as "this is GLSL's shared memory", look up a definition of that and see it referred to as basically the same idea as groupshared memory in DirectX, then look up a definition of that and... you get the idea. Our definition should stand on its own.
The translation guide might be useful to people that are familiar with these things and wondering why we are making up our own terms.
Then I will merely continue to insist that the description should make sense without external references, to prevent infinite regress.
Exceedingly fine details, of course, can be handled elsewhere by other sources, but the concepts should be usefully clear here.
For sure, we need something reasonably concise such that a rust compiler dev with zero GPU knowledge has a rough idea of what this does after reading the description. I don't think that's that hard, GPUs aren't that special, but they use a lot of "weird" (read: grown-over-time) terminology that presents a big barrier to entry.
| /// The returned pointer is the start of the dynamic shared memory region. |
| /// All pointers returned by `dynamic_shared_memory` point to the same address, |
| /// so alias the same memory. |
| /// The returned pointer is aligned by at least the alignment of `T`. |
Speaking of safety requirements... how does one use this pointer? I get that it is aligned, but does it point to enough memory to store a T? If it's always the same address, doesn't everyone overwrite each other's data all the time? This API looks very odd for a non-GPU person, and it's not clear to me whether that is resolved by having more magic behavior (which should be documented or at least referenced here), or whether there's higher-level APIs built on top that deal with this (but this intrinsic provides so few guarantees, I can't see how that should be possible).
Typically, intrinsic documentations should be detailed enough that I can read and write code using the intrinsic and know exactly whether the code is correct and what it will do in all circumstances. I don't know if there's any hope of achieving that with GPU intrinsics, but if not then we need to have a bit of a wider discussion -- we have had bad experience with just importing "externally defined" semantics into Rust without considering all the interactions (in general, it is not logically coherent to have semantics externally defined).
The current docs would let me implement this intrinsic by just always returning 1024, and emitting a compile error if T has alignment bigger than 1024. I doubt that's a legal implementation. But that means the docs are not precise enough to describe what the implementation must do.
| /// The returned pointer is the start of the dynamic shared memory region. |
| /// All pointers returned by `dynamic_shared_memory` point to the same address, |
| /// so alias the same memory. |
| /// The returned pointer is aligned by at least the alignment of `T`. |
Is there some prior discussion of the design decision to determine the alignment by giving a type parameter? I could also be a const generic parameter, for instance. I don't have an opinion on the matter since I am an outsider to the GPU world, but as a compiler team member it'd be good to know if this is something you thought about for 5 minutes or whether there's some sort of larger design by a team that has a vision of how all these things will fit together.
There is some discussion in #135516. I don’t mind either way, I thought (for 5 minutes ;)) that specifying the type of the returned pointer makes sense.
I’m not much of a GPU programmer, but I think in most cases, you would store an array in dynamic shared memory, or maybe a struct followed by a dynamically sized array (or maybe two/n arrays of different types).
For just a struct, static shared memory would make more sense, though we don’t support that yet (there’s some discussion in the tracking issue, but I think that’s more complicated to design and implement).
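To illustrate the layout described above (my sketch, not from the PR; `Header` and the offset computation are made up), one could carve a header plus a trailing `f32` array out of the region:

```rust
use core::intrinsics::dynamic_shared_memory;
use core::mem::{align_of, size_of};

#[repr(C)]
struct Header {
    count: u32,
    flags: u32,
}

// Split the dynamic shared memory region into a header followed by a
// dynamically sized `f32` array of `n` elements.
fn header_then_array(n: usize) -> (*mut Header, *mut [f32]) {
    // The type parameter only controls the minimum alignment; `Header`'s
    // alignment (4) happens to satisfy `f32`'s as well.
    let header: *mut Header = dynamic_shared_memory::<Header>();
    let offset = size_of::<Header>().next_multiple_of(align_of::<f32>());
    // SAFETY: in bounds only if the launcher provided at least
    // `offset + n * size_of::<f32>()` bytes of dynamic shared memory.
    let array = unsafe { header.cast::<u8>().add(offset).cast::<f32>() };
    (header, core::ptr::slice_from_raw_parts_mut(array, n))
}
```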
Sorry for drowning you in questions here, but extending the core language with new operations (as in, adding a new intrinsic doing things that couldn't be done before) is a big deal, and we had a bad experience in the past when this was done without wider discussion in the team to ensure that the intrinsics actually make sense in the context of Rust. Not everything that exists in the hardware can be 1:1 exposed in Rust; sometimes this requires a lot of work and sometimes it's just basically impossible. It can be a lot of work to clean these things up later, and as someone who did a bunch of that work, I'd rather not have to do it again. :)

I agree that it makes a lot of sense to have the discussion now. Thanks for taking a look and helping to design something useful!
Heh, yes, that’s something that should be mentioned in the doc comment as well. (Especially comments on how to safely use it.)

Does it point to enough memory to store a `T`? That depends on the size specified on the CPU side when launching the gpu-kernel. It may or it may not.

There are “higher-level APIs” like “do a fast matrix-matrix multiplication”, but not much in-between. I’d assume that people usually use this in its raw form.

Two general use cases are:

1. All threads in a group load a part from global memory (the RAM/VRAM) and store it in shared memory. Then all threads read from the collaboratively loaded data.
2. All threads in a group do some work and collaborate on shared memory (with atomics or so) to aggregate results. Then one of the threads stores the final result to global memory.

So, shared memory is meant to be accessed collaboratively and the developer must ensure proper synchronization (see the sketch below). It is hard to provide a safe abstraction for this and tbh, I don’t want to try 😅 (though I can see 3rd-party crates doing this, at least to some extent). From Rust’s perspective, guarantees should be the same as with memory that’s shared between processes.

I agree, it would be nice to have good documentation for the intrinsics in Rust!
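A sketch of use case 1 (mine, not from the PR), assuming the unstable `core::arch::nvptx` helpers `_thread_idx_x` and `_syncthreads` for the thread index and the barrier (feature gates elided):

```rust
use core::arch::nvptx::{_syncthreads, _thread_idx_x};
use core::intrinsics::dynamic_shared_memory;

const TILE: usize = 256;

// Each thread of a TILE-sized block loads one element into the shared
// tile; after the barrier, every thread reads the whole tile.
unsafe fn sum_tile(global: *const f32) -> f32 {
    let tile: *mut f32 = dynamic_shared_memory::<f32>();
    let i = _thread_idx_x() as usize;
    // The launcher must have allocated at least TILE * 4 bytes of
    // dynamic shared memory for this kernel.
    *tile.add(i) = *global.add(i);
    // All stores must be visible before anyone reads: barrier.
    _syncthreads();
    let mut acc = 0.0;
    for j in 0..TILE {
        acc += *tile.add(j);
    }
    acc
}
```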
Wait, there's a single static size set when launching the kernel? Why is it called "dynamic" memory? "dynamic" memory usually means … Are you saying dynamic shared memory is neither dynamic in the normal sense nor shared in the normal sense? ;)
r? @RalfJung
I won't be able to do the final approval here; I can just help with ensuring that the intrinsics are documented well enough that they can be understood without GPU expertise, and that the LLVM codegen looks vaguely reasonable. I don't know if we have anyone who actually knows what the generated LLVM IR should look like and can ensure it makes sense. r? @nikic maybe?
I agree with Ralf. More specifically, I suggest referring to this idea internally as "GPU workgroup memory" (or "GPU threadgroup memory"). "Block" is a term that is highly loaded in the compiler, so while it is somewhat fortunate that CUDA and HIP prefer the same terms, we actually want to pick words that don't overlap.
I'm also not entirely sure of this entire approach, but we will likely encounter this definition-level issue regardless of approach, and some of the code will be similar anyway, yeah?
        
          
library/core/src/intrinsics/mod.rs (outdated)
| ) |
| } |
| |
| /// Returns a pointer to dynamic shared memory. |
I know this is a short description, but it is just repeating the function signature:
pub fn dynamic_shared_memory<T: ?Sized>() -> *mut T;
/*     ^^^^^^^^^^^^^^^^^^^^               ^^ ^^^^^^
       |                                  |  |
       |                                  |  |
       -- "dynamic shared memory"         |  - "a pointer"
                                          - "returns"
*/

I'm basically repeating the same comment I already made about "shared with what?", except adding that "dynamic" also is pretty vague, since in computer programming it's always relative to some notion of "static". Yet the "static" is not obviously present in this description.
        
          
I'm honestly still lost about the overall picture here -- it looks like the program is in charge of controlling the alignment, but some other component (the CPU code that launches the kernel, presumably) is in charge of controlling the size of this memory? How on Earth does it make sense to split the responsibility of picking size and align across such a big abstraction barrier?
I’ll need to look more into the alignment and how it works. I’m not too confident that what I claim here is actually correct; I just assumed that the alignment on the extern global is used for something (I can’t find anything documented, I’ll try to look into the amdgpu source code).

Yes, it is more dynamic than the traditional, static shared memory though! In graphics shaders (which existed long before cuda/hip/…), shared memory is used by declaring an annotated global variable (the same also works in cuda/hip/…). Therefore the amount of shared memory available in a kernel is fixed at kernel compile-time. Just to mention it, an alternative to the intrinsic would be to follow cuda/hip and get the pointer by declaring an …
Right, thanks for the explanation. Both "dynamic" and "shared" need to be carefully qualified then to avoid confusing them with the usual meaning these terms have outside the GPU realm (in particular since the other meanings do show up quite a bit in rustc, at least for "dynamic"). Since you mentioned it, I like "group-shared" as a term. That makes it clear that it's a specific kind of shared, and it's hopefully something one can google. "dynamic" is awkward... it's basically "fixed at launch time" (in contrast to "fixed at build time"). Not sure what a good alternative term would be here. From a Rust AM perspective, this is much closer to static than to dynamic. If the intrinsic name also includes …
To pick up on your nomenclature: …
FWIW, since static groupshared memory has been mentioned a couple of times: I think static groupshared variables need to be declared as proper global variables in Rust. Getting them through an intrinsic will not work out well (think about wanting two globals with the same type, the intrinsic getting duplicated through inlining or tail duplication, or wanting to link to a cuda/hip program using the same variable).
Yeah, what you call "static memory" seems to be very similar to Rust's `static` items. The "dynamic" ones OTOH seem to be more like a single global pointer that's just passed in by whomever launches the kernel. Well, not quite, something has to decide which "group" or so sees which value... or maybe they all initially see the same value, but when they change it that's only visible within the same group or so? So, … But if that's all too alien I can also live with "dynamic", as long as it's clearly documented. It's not the only term that has different meanings in different communities.
I found the code for amdgpu. It indeed takes the maximum alignment of all external addrspace(3) declarations, so the implementation here does the correct thing in that regard.
So it truly seems to be the case that every kernel has a fixed "alignment requirement for the dynamic groupshared memory", and then whoever launches the kernel just gets to decide how big that memory is (well, they closely coordinate that with the kernel, I suppose). That's a very strange division of labor; usually size and alignment are picked together and by the same party...
| Yes, "group-shared memory" also seems fine. Using any of the existing terms in a reasonably obvious composition seems fine. We shouldn't make up our own terms, but IME when reading GPU stuff you inevitably run into "this person is referring to things this way" and you just have to deal with it, and they don't explain the context except by handwavy reference to "oh, this other thing" as if that explains it (it doesn't, because you then click 12 links deep into borderline-recursive references... I am speaking from my experience, there). | 
☔ The latest upstream changes (presumably #146683) made this pull request unmergeable. Please resolve the merge conflicts.
Group-shared memory is a memory region that is shared between all threads in a work-group on GPUs. Dynamic group-shared memory is in that memory region, though the allocated size is specified late, when launching a kernel, instead of early at compile-time.

Group-shared memory in amdgpu and nvptx lives in address space 3. Dynamic group-shared memory is implemented by creating an external global variable in address space 3. The global is declared with size 0, as the actual size is only known at runtime. It is defined behavior in LLVM to access an external global outside the defined size.

As far as I know, there is no similar way to get the allocated size of dynamic shared memory on amdgpu and nvptx, so users have to pass this out-of-band or rely on target-specific ways.
force-pushed from 2378959 to a9f43d5
This PR was rebased onto a different master commit. Here's a range-diff highlighting what actually changed. Rebasing is a normal part of keeping PRs up to date, so no action is needed; this note is just to help reviewers.
Shared memory is a memory region that is shared between all threads in a thread block/workgroup on GPUs. Dynamic shared memory is in that memory region, though the allocated size is specified late, when launching a kernel.
Shared memory in amdgpu and nvptx lives in address space 3. Dynamic shared memory is implemented by creating an external global variable in address space 3. The global is declared with size 0, as the actual size is only known at runtime. It is defined behavior in LLVM to access an external global outside the defined size.
As far as I know, there is no similar way to get the allocated size of dynamic shared memory on amdgpu and nvptx, so users have to pass this out-of-band or rely on target-specific ways.
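As an illustration of that out-of-band pattern (my sketch, invented names): the kernel fixes the alignment via the type parameter, while the byte size chosen by the launcher arrives as an ordinary kernel argument:

```rust
use core::intrinsics::dynamic_shared_memory;
use core::mem::size_of;

// `shared_len` is the byte size the host chose when launching the
// kernel, passed out-of-band as a normal kernel argument.
unsafe fn zero_shared(shared_len: usize) {
    // The kernel side picks the alignment (align_of::<u64>() == 8)...
    let buf: *mut u64 = dynamic_shared_memory::<u64>();
    // ...the host side picked the size; never touch more than that.
    // (A real kernel would split this work across threads and sync.)
    let n = shared_len / size_of::<u64>();
    for i in 0..n {
        *buf.add(i) = 0;
    }
}
```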
Tracking issue: #135516