Commit …_expansion
MIT License

Copyright (c) 2017-2024 Advanced Micro Devices, Inc. All rights reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
.. ##
.. ## Copyright (c) 2016-24, Lawrence Livermore National Security, LLC
.. ## and RAJA project contributors. See the RAJA/LICENSE file
.. ## for details.
.. ##
.. ## SPDX-License-Identifier: (BSD-3-Clause)
.. ##

.. _cook-book-label:

************************
RAJA Cook Book
************************

The following sections show common use case patterns and the recommended
RAJA features and policies to use with them. They are intended to provide
users with usage examples beyond what can be found in other parts of the
RAJA User Guide. In particular, the examples and discussion provide guidance
on RAJA execution policy selection to improve performance of user
application codes.

.. toctree::
   :maxdepth: 2

   cook_book/reduction
.. ##
.. ## Copyright (c) 2016-24, Lawrence Livermore National Security, LLC
.. ## and other RAJA project contributors. See the RAJA/LICENSE file
.. ## for details.
.. ##
.. ## SPDX-License-Identifier: (BSD-3-Clause)
.. ##

.. _cook-book-reductions-label:

=======================
Cooking with Reductions
=======================

Please see the following section for an overview discussion of RAJA reductions:

* :ref:`feat-reductions-label`.


----------------------------
Reductions with RAJA::forall
----------------------------
Here is the setup for a simple reduction example::

  const int N = 1000;

  int vec[N];

  for (int i = 0; i < N; ++i) {
    vec[i] = 1;
  }

Here a simple sum reduction is performed in a for loop::

  int vsum = 0;

  // Accumulate the sum in a C-style for loop
  for (int i = 0; i < N; ++i) {
    vsum += vec[i];
  }
The results of these operations will yield the following values:

* ``vsum == 1000``

RAJA uses policy types to specify how the loop and the reduction are
implemented.
The forall *execution policy* specifies how the loop is run by the
``RAJA::forall`` method. For example, ``RAJA::seq_exec`` runs a C-style for
loop sequentially on a CPU, while ``RAJA::cuda_exec_with_reduce<256>`` runs
the loop as a CUDA GPU kernel with 256 threads per block and other CUDA
kernel launch parameters, like the number of blocks, optimized for
performance with reducers. The following execution policies, and others like
them, could be applied::

  using exec_policy = RAJA::seq_exec;
  // using exec_policy = RAJA::omp_parallel_for_exec;
  // using exec_policy = RAJA::omp_target_parallel_for_exec<256>;
  // using exec_policy = RAJA::cuda_exec_with_reduce<256>;
  // using exec_policy = RAJA::hip_exec_with_reduce<256>;
  // using exec_policy = RAJA::sycl_exec<256>;

The *reduction policy* specifies how the reduction is done and must match the
execution policy. For example, ``RAJA::seq_reduce`` does a sequential
reduction and can only be used with sequential execution policies. The
``RAJA::cuda_reduce_atomic`` policy uses atomics, if possible with the given
data type, and can only be used with CUDA execution policies. The same
pairing rules apply to other RAJA execution back-ends, such as HIP and
OpenMP. Here are example RAJA reduction policies whose names indicate which
execution policies they work with::

  using reduce_policy = RAJA::seq_reduce;
  // using reduce_policy = RAJA::omp_reduce;
  // using reduce_policy = RAJA::omp_target_reduce;
  // using reduce_policy = RAJA::cuda_reduce_atomic;
  // using reduce_policy = RAJA::hip_reduce_atomic;
  // using reduce_policy = RAJA::sycl_reduce;

Here a simple sum reduction is performed using RAJA::

  RAJA::ReduceSum<reduce_policy, int> vsum(0);

  RAJA::forall<exec_policy>( RAJA::RangeSegment(0, N),
    [=](RAJA::Index_type i) {

    vsum += vec[i];

  });

The results of these operations will yield the following values:

* ``vsum.get() == 1000``
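
For reference, here is a minimal sketch of a complete program that assembles
the pieces above. It assumes RAJA has been installed so the umbrella header
``RAJA/RAJA.hpp`` is available, and it uses the sequential policies so no
device memory management is needed::

  #include <cassert>

  #include "RAJA/RAJA.hpp"

  int main()
  {
    const int N = 1000;

    int vec[N];
    for (int i = 0; i < N; ++i) {
      vec[i] = 1;
    }

    using exec_policy   = RAJA::seq_exec;
    using reduce_policy = RAJA::seq_reduce;

    RAJA::ReduceSum<reduce_policy, int> vsum(0);

    RAJA::forall<exec_policy>( RAJA::RangeSegment(0, N),
      [=](RAJA::Index_type i) {
      vsum += vec[i];
    });

    assert(vsum.get() == 1000);
    return 0;
  }

For GPU execution policies, ``vec`` would need to be allocated in memory
accessible from the device (e.g., unified or device memory).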
Another option for the execution policy when using the CUDA or HIP back-ends
is to use the base policies, which have a boolean parameter to choose between
the general-use ``cuda/hip_exec`` policy and the
``cuda/hip_exec_with_reduce`` policy::

  // static constexpr bool with_reduce = ...;
  // using exec_policy = RAJA::cuda_exec_base<with_reduce, 256>;
  // using exec_policy = RAJA::hip_exec_base<with_reduce, 256>;

Similarly, another option for the reduction policy when using the CUDA or HIP
back-ends is to use the base policies, which have a boolean parameter to
choose between the atomic ``cuda/hip_reduce_atomic`` policy and the
non-atomic ``cuda/hip_reduce`` policy::

  // static constexpr bool with_atomic = ...;
  // using reduce_policy = RAJA::cuda_reduce_base<with_atomic>;
  // using reduce_policy = RAJA::hip_reduce_base<with_atomic>;
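
As a sketch of how the base policies compose with the earlier example,
assuming the HIP back-end is enabled and ``vec`` points to device-accessible
memory, compile-time flags can select the variants without changing the rest
of the code::

  static constexpr bool with_reduce = true;
  static constexpr bool with_atomic = true;

  using exec_policy   = RAJA::hip_exec_base<with_reduce, 256>;
  using reduce_policy = RAJA::hip_reduce_base<with_atomic>;

  RAJA::ReduceSum<reduce_policy, int> vsum(0);

  RAJA::forall<exec_policy>( RAJA::RangeSegment(0, N),
    [=] RAJA_DEVICE (RAJA::Index_type i) {
    vsum += vec[i];
  });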
/*!
 ******************************************************************************
 *
 * \file
 *
 * \brief   Header file containing RAJA intrinsics templates for HIP execution.
 *
 *          These methods should work on any platform that supports
 *          HIP devices.
 *
 ******************************************************************************
 */

//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~//
// Copyright (c) 2016-24, Lawrence Livermore National Security, LLC
// and RAJA project contributors. See the RAJA/LICENSE file for details.
//
// SPDX-License-Identifier: (BSD-3-Clause)
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~//

#ifndef RAJA_hip_intrinsics_HPP
#define RAJA_hip_intrinsics_HPP

#include "RAJA/config.hpp"

#if defined(RAJA_ENABLE_HIP)

#include <type_traits>

#include <hip/hip_runtime.h>

#include "RAJA/util/macros.hpp"
#include "RAJA/util/SoAArray.hpp"
#include "RAJA/util/types.hpp"

#include "RAJA/policy/hip/policy.hpp"


namespace RAJA
{

namespace hip
{

namespace impl
{

/*!
 * \brief Abstracts access to memory when coordinating between threads at
 *        device scope. The fences provided here are to be used with relaxed
 *        atomics in order to guarantee memory ordering and visibility of the
 *        accesses done through this class.
 *
 * \Note This uses device scope fences to ensure ordering and to flush local
 *       caches so that memory accesses become visible to the whole device.
 * \Note This class uses normal memory accesses that are cached in local caches
 *       so device scope fences are required to make memory accesses visible
 *       to the whole device.
 */
struct AccessorDeviceScopeUseDeviceFence : RAJA::detail::DefaultAccessor
{
  static RAJA_DEVICE RAJA_INLINE void fence_acquire()
  {
    __threadfence();
  }

  static RAJA_DEVICE RAJA_INLINE void fence_release()
  {
    __threadfence();
  }
};

/*!
 ******************************************************************************
 *
 * \brief Abstracts access to memory when coordinating between threads at
 *        device scope. The fences provided here are to be used with relaxed
 *        atomics in order to guarantee memory ordering and visibility of the
 *        accesses done through this class.
 *
 * \Note This may use block scope fences to ensure ordering and avoid flushing
 *       local caches so special memory accesses are used to ensure visibility
 *       to the whole device.
 * \Note This class uses device scope atomic memory accesses to bypass local
 *       caches so memory accesses are visible to the whole device without
 *       device scope fences.
 * \Note A memory access may be split into multiple memory accesses, so
 *       even though atomic instructions are used concurrent accesses between
 *       different threads are not thread safe.
 *
 ******************************************************************************
 */
struct AccessorDeviceScopeUseBlockFence
{
  // hip has 32 and 64 bit atomics
  static constexpr size_t min_atomic_int_type_size = sizeof(unsigned int);
  static constexpr size_t max_atomic_int_type_size = sizeof(unsigned long long);

  template < typename T >
  static RAJA_DEVICE RAJA_INLINE T get(T* in_ptr, size_t idx)
  {
    using ArrayType = RAJA::detail::AsIntegerArray<T, min_atomic_int_type_size, max_atomic_int_type_size>;
    using integer_type = typename ArrayType::integer_type;

    ArrayType u;
    auto ptr = const_cast<integer_type*>(reinterpret_cast<const integer_type*>(in_ptr + idx));

    for (size_t i = 0; i < u.array_size(); ++i) {
#if defined(RAJA_USE_HIP_INTRINSICS) && RAJA_INTERNAL_CLANG_HAS_BUILTIN(__hip_atomic_load)
      u.array[i] = __hip_atomic_load(&ptr[i], __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_AGENT);
#else
      u.array[i] = atomicAdd(&ptr[i], integer_type(0));
#endif
    }

    return u.get_value();
  }

  template < typename T >
  static RAJA_DEVICE RAJA_INLINE void set(T* in_ptr, size_t idx, T val)
  {
    using ArrayType = RAJA::detail::AsIntegerArray<T, min_atomic_int_type_size, max_atomic_int_type_size>;
    using integer_type = typename ArrayType::integer_type;

    ArrayType u;
    u.set_value(val);
    auto ptr = reinterpret_cast<integer_type*>(in_ptr + idx);

    for (size_t i = 0; i < u.array_size(); ++i) {
#if defined(RAJA_USE_HIP_INTRINSICS) && RAJA_INTERNAL_CLANG_HAS_BUILTIN(__hip_atomic_store)
      __hip_atomic_store(&ptr[i], u.array[i], __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_AGENT);
#else
      atomicExch(&ptr[i], u.array[i]);
#endif
    }
  }

  static RAJA_DEVICE RAJA_INLINE void fence_acquire()
  {
#if defined(RAJA_USE_HIP_INTRINSICS) && RAJA_INTERNAL_CLANG_HAS_BUILTIN(__builtin_amdgcn_fence)
    __builtin_amdgcn_fence(__ATOMIC_ACQUIRE, "workgroup");
#else
    __threadfence();
#endif
  }

  static RAJA_DEVICE RAJA_INLINE void fence_release()
  {
#if defined(RAJA_USE_HIP_INTRINSICS) && RAJA_INTERNAL_CLANG_HAS_BUILTIN(__builtin_amdgcn_fence) && \
    RAJA_INTERNAL_CLANG_HAS_BUILTIN(__builtin_amdgcn_s_waitcnt)
    __builtin_amdgcn_fence(__ATOMIC_RELEASE, "workgroup");
    // Wait until all vmem operations complete (s_waitcnt vmcnt(0))
    __builtin_amdgcn_s_waitcnt(/*vmcnt*/ 0 | (/*exp_cnt*/ 0x7 << 4) | (/*lgkmcnt*/ 0xf << 8));
#else
    __threadfence();
#endif
  }
};
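
// Illustrative usage sketch (hypothetical, not part of this header): pairing
// the accessor's relaxed accesses with its fences in a flag/payload handoff,
// as described in the \Note comments above.
//
//   using Acc = AccessorDeviceScopeUseBlockFence;
//
//   // producer thread
//   Acc::set(payload_ptr, 0, value);          // relaxed store of the payload
//   Acc::fence_release();                     // order payload before flag
//   atomicExch(flag_ptr, 1u);                 // publish the ready flag
//
//   // consumer thread
//   while (atomicAdd(flag_ptr, 0u) == 0u) {}  // spin until flag is set
//   Acc::fence_acquire();                     // order flag before payload
//   value = Acc::get(payload_ptr, 0);         // relaxed load of the payload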


// hip only has shfl primitives for 32 bits
constexpr size_t min_shfl_int_type_size = sizeof(unsigned int);
constexpr size_t max_shfl_int_type_size = sizeof(unsigned int);

/*!
 ******************************************************************************
 *
 * \brief Method to shuffle 32b registers in sum reduction for arbitrary type.
 *
 * \Note Returns an undefined value if src lane is inactive (divergence).
 *       Returns this lane's value if src lane is out of bounds or has exited.
 *
 ******************************************************************************
 */
template <typename T>
RAJA_DEVICE RAJA_INLINE T shfl_xor_sync(T var, int laneMask)
{
  RAJA::detail::AsIntegerArray<T, min_shfl_int_type_size, max_shfl_int_type_size> u;
  u.set_value(var);

  for (size_t i = 0; i < u.array_size(); ++i) {
    u.array[i] = ::__shfl_xor(u.array[i], laneMask);
  }
  return u.get_value();
}

template <typename T>
RAJA_DEVICE RAJA_INLINE T shfl_sync(T var, int srcLane)
{
  RAJA::detail::AsIntegerArray<T, min_shfl_int_type_size, max_shfl_int_type_size> u;
  u.set_value(var);

  for (size_t i = 0; i < u.array_size(); ++i) {
    u.array[i] = ::__shfl(u.array[i], srcLane);
  }
  return u.get_value();
}


template <>
RAJA_DEVICE RAJA_INLINE int shfl_xor_sync<int>(int var, int laneMask)
{
  return ::__shfl_xor(var, laneMask);
}

template <>
RAJA_DEVICE RAJA_INLINE float shfl_xor_sync<float>(float var, int laneMask)
{
  return ::__shfl_xor(var, laneMask);
}

template <>
RAJA_DEVICE RAJA_INLINE int shfl_sync<int>(int var, int srcLane)
{
  return ::__shfl(var, srcLane);
}

template <>
RAJA_DEVICE RAJA_INLINE float shfl_sync<float>(float var, int srcLane)
{
  return ::__shfl(var, srcLane);
}

//! reduce values in a warp into warp lane 0
template <typename Combiner, typename T>
RAJA_DEVICE RAJA_INLINE T warp_reduce(T val, T RAJA_UNUSED_ARG(identity))
{
  int numThreads = blockDim.x * blockDim.y * blockDim.z;

  int threadId = threadIdx.x + blockDim.x * threadIdx.y +
                 (blockDim.x * blockDim.y) * threadIdx.z;

  T temp = val;

  if (numThreads % policy::hip::WARP_SIZE == 0) {

    // reduce each warp
    for (int i = 1; i < policy::hip::WARP_SIZE; i *= 2) {
      T rhs = shfl_xor_sync(temp, i);
      Combiner{}(temp, rhs);
    }

  } else {

    // reduce each warp
    for (int i = 1; i < policy::hip::WARP_SIZE; i *= 2) {
      int srcLane = threadId ^ i;
      T rhs = shfl_sync(temp, srcLane);
      // only add from threads that exist (don't double count own value)
      if (srcLane < numThreads) {
        Combiner{}(temp, rhs);
      }
    }
  }

  return temp;
}

/*!
 * Allreduce values in a warp.
 *
 * This does a butterfly pattern leaving each lane with the full reduction.
 */
template <typename Combiner, typename T>
RAJA_DEVICE RAJA_INLINE T warp_allreduce(T val)
{
  T temp = val;

  for (int i = 1; i < policy::hip::WARP_SIZE; i *= 2) {
    T rhs = shfl_xor_sync(temp, i);
    Combiner{}(temp, rhs);
  }

  return temp;
}


//! reduce values in block into thread 0
template <typename Combiner, typename T>
RAJA_DEVICE RAJA_INLINE T block_reduce(T val, T identity)
{
  int numThreads = blockDim.x * blockDim.y * blockDim.z;

  int threadId = threadIdx.x + blockDim.x * threadIdx.y +
                 (blockDim.x * blockDim.y) * threadIdx.z;

  int warpId = threadId % policy::hip::WARP_SIZE;
  int warpNum = threadId / policy::hip::WARP_SIZE;

  T temp = val;

  if (numThreads % policy::hip::WARP_SIZE == 0) {

    // reduce each warp
    for (int i = 1; i < policy::hip::WARP_SIZE; i *= 2) {
      T rhs = shfl_xor_sync(temp, i);
      Combiner{}(temp, rhs);
    }

  } else {

    // reduce each warp
    for (int i = 1; i < policy::hip::WARP_SIZE; i *= 2) {
      int srcLane = threadId ^ i;
      T rhs = shfl_sync(temp, srcLane);
      // only add from threads that exist (don't double count own value)
      if (srcLane < numThreads) {
        Combiner{}(temp, rhs);
      }
    }
  }

  // reduce per warp values
  if (numThreads > policy::hip::WARP_SIZE) {

    static_assert(policy::hip::MAX_WARPS <= policy::hip::WARP_SIZE,
                  "Max Warps must be less than or equal to Warp Size for this algorithm to work");

    __shared__ unsigned char tmpsd[sizeof(RAJA::detail::SoAArray<T, policy::hip::MAX_WARPS>)];
    RAJA::detail::SoAArray<T, policy::hip::MAX_WARPS>* sd =
        reinterpret_cast<RAJA::detail::SoAArray<T, policy::hip::MAX_WARPS> *>(tmpsd);

    // write per warp values to shared memory
    if (warpId == 0) {
      sd->set(warpNum, temp);
    }

    __syncthreads();

    if (warpNum == 0) {

      // read per warp values
      if (warpId * policy::hip::WARP_SIZE < numThreads) {
        temp = sd->get(warpId);
      } else {
        temp = identity;
      }

      for (int i = 1; i < policy::hip::MAX_WARPS; i *= 2) {
        T rhs = shfl_xor_sync(temp, i);
        Combiner{}(temp, rhs);
      }
    }

    __syncthreads();
  }

  return temp;
}
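
// Illustrative call pattern (hypothetical): inside a kernel, each thread
// contributes one value and thread 0 of the block receives the combined
// result; other threads are left with an unspecified partial value.
//
//   // sum-reduce a per-thread value across the block, assuming a combiner
//   // like RAJA::reduce::sum<T> that accumulates into its first argument
//   T block_val = block_reduce<RAJA::reduce::sum<T>>(thread_val, T(0));
//   if (threadId == 0) {
//     // block_val now holds the sum over all threads in this block
//   }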

}  // end namespace impl

}  // end namespace hip

}  // end namespace RAJA

#endif  // closing endif for RAJA_ENABLE_HIP guard

#endif  // closing endif for header file include guard