Feature/prefetch2 #1604
Open
maddyscientist wants to merge 82 commits into develop from feature/prefetch2
+1,704 −419
Conversation
…tead of logic operations when computing the neighboring index; this is branch-free and uses fewer operations
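For illustration, a minimal sketch of the idea with hypothetical helper names (not QUDA's actual code): the wrap-around comparison is folded into plain arithmetic, so no branch is needed.

```cpp
// Branch-free neighbor indices on a periodic dimension of extent X.
// The comparison evaluates to 0 or 1, so the wrap-around becomes a
// multiply-add instead of a conditional.
__host__ __device__ inline int neighbor_fwd(int x, int X)
{
  return (x + 1) - X * ((x + 1) == X); // x + 1, wrapping X - 1 -> 0
}

__host__ __device__ inline int neighbor_bwd(int x, int X)
{
  return (x - 1) + X * (x == 0); // x - 1, wrapping 0 -> X - 1
}
```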
…quarter precision support
…for executing single-thread regions of code. On CUDA, install the latest version of CCCL via CPM, since we need some new features
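A minimal sketch of what such a guard can look like; the PR exposes it as target::is_thread_zero() (see the summary below), but the body here is an assumption, not QUDA's implementation.

```cpp
// Assumed shape of a thread-zero guard: true only for the first thread
// of the block, so the guarded region runs exactly once per block.
__device__ inline bool is_thread_zero()
{
  return threadIdx.x == 0 && threadIdx.y == 0 && threadIdx.z == 0;
}

__global__ void some_kernel()
{
  if (is_thread_zero()) {
    // single-thread region, e.g. issuing a TMA bulk copy for the block
  }
  // all-thread code continues here
}
```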
…slash kernels. Disabled by default (set with the Arg::prefetch_distance parameter); TMA prefetch will be added in the next push
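For context, a per-thread software prefetch on CUDA can be issued with inline PTX. This is a sketch of the general mechanism with a made-up wrapper name, not the PR's actual helper:

```cpp
// Ask the memory system to stage a global-memory line into L2 ahead of use.
// prefetch.global.L2 is standard PTX; the wrapper name is hypothetical.
__device__ inline void prefetch_l2(const void *ptr)
{
  asm volatile("prefetch.global.L2 [%0];" ::"l"(ptr));
}
```

With a prefetch distance d, a kernel would call this on the address it expects to load d iterations ahead of the current one.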
…ith QUDA_DSLASH_PREFETCH_BULK=ON). Prefetch distance is now set via CMake (QUDA_DSLASH_PREFETCH_DISTANCE_WILSON and QUDA_DSLASH_PREFETCH_DISTANCE_STAGGERED)
…ble on CUDA platform
…ants of vector_load and vector_store: these allow the pointer offset and the index to be computed together first in 32-bit, before accumulation onto the pointer in 64-bit, reducing pointer-arithmetic overheads
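A hedged sketch of that indexing scheme (the function name and signature are illustrative): the 32-bit offset and index are combined first, leaving a single widening add onto the 64-bit base pointer.

```cpp
// Three-operand load, illustrative only: one 32-bit add, then one 64-bit
// pointer accumulation, instead of two separate 64-bit pointer adds.
template <typename T>
__device__ inline T vector_load3(const T *base, int offset, int idx)
{
  int combined = offset + idx; // 32-bit arithmetic
  return base[combined];       // single 64-bit accumulation
}
```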
…d and vector_store to reduce indexing overheads
…tNOrder uses optimized 3-operand indexing
TMA (Tensor Memory Accelerator) is only available on Hopper (sm_90) and later architectures. This commit wraps the cuTensorMapEncodeTiled calls in a compile-time guard to prevent runtime errors on Volta/Ampere GPUs.
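For context, a guarded descriptor-encoding call might look like the sketch below. cuTensorMapEncodeTiled and its enum arguments are the real CUDA driver-API names; the guard macro, tile shape, and wrapper function are illustrative assumptions.

```cpp
#include <cuda.h>

#if defined(QUDA_ENABLE_TMA) // hypothetical macro standing in for the PR's sm_90+ guard
// Encode a 2-d tiled TMA descriptor for a row-major float matrix.
CUtensorMap make_tile_map(void *global_ptr, cuuint64_t rows, cuuint64_t cols)
{
  CUtensorMap map;
  cuuint64_t dims[2] = {cols, rows};              // fastest-varying dimension first
  cuuint64_t strides[1] = {cols * sizeof(float)}; // rank - 1 strides, in bytes
  cuuint32_t box[2] = {64, 8};                    // tile staged per bulk copy (illustrative)
  cuuint32_t elem_strides[2] = {1, 1};

  cuTensorMapEncodeTiled(&map, CU_TENSOR_MAP_DATA_TYPE_FLOAT32, 2, global_ptr, dims,
                         strides, box, elem_strides, CU_TENSOR_MAP_INTERLEAVE_NONE,
                         CU_TENSOR_MAP_SWIZZLE_NONE, CU_TENSOR_MAP_L2_PROMOTION_NONE,
                         CU_TENSOR_MAP_FLOAT_OOB_FILL_NONE);
  return map;
}
#endif
```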
…configs should be chosen when doing full dslash (whether or not TMA is used)
maddyscientist commented on Dec 10, 2025
CPMAddPackage(
  NAME CCCL
  GITHUB_REPOSITORY nvidia/cccl
  GIT_TAG main # Fetches the latest commit on the main branch
)
Fix this with a specific tag
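A pinned version of the snippet above might read as follows; the tag shown is illustrative, not necessarily the one ultimately chosen:

```cmake
CPMAddPackage(
  NAME CCCL
  GITHUB_REPOSITORY nvidia/cccl
  GIT_TAG v2.3.2 # pin a released tag so builds are reproducible
)
```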
…nd legacy architectures. Updated some deprecated calls to modern equivalents
…t can lead to catestrophic cancelation
…fresh, not copying or moving it
…er tests should all now pass
… not include RHS dimension). Remove legacy dslash constants no longer used
…100, e.g., we only tune over max L1 or max shared memory. No observed effect on performance, and the default can be overridden with an environment variable
This work is the latest towards optimizing QUDA for Blackwell:
* … `vector_load`. At present, not deployed anywhere.
* … `QUDA_DSLASH_PREFETCH` CMake parameter, with 0 = per-thread, 1 = TMA bulk, and 2 = TMA descriptor.
* … `target::is_thread_zero()`, which should be used for TMA issuance.
* … `QUDA_DSLASH_DOUBLE_STORE=ON`, which is required for TMA-based prefetching (for alignment reasons).
* Prefetching is exposed for both ColorSpinorFields and GaugeFields, though only the latter is actually used at present.
* … `QUDA_DSLASH_PREFETCH_DISTANCE_WILSON` and `QUDA_DSLASH_PREFETCH_DISTANCE_STAGGERED` CMake parameters.
* … `vector_load` and `vector_store` to this end (respectively).
* … `int` with division by `fast_intdiv`).

The end result of this work is that both the Staggered and Wilson dslash kernels can saturate over 90% of memory bandwidth for most variants. The outstanding exceptions are the half-precision variants using reconstruction, which are still lagging; these will be the focus of a subsequent PR.
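Putting the new build options together, an illustrative configure line (the option names are from this PR; the values are example choices, not recommendations):

```sh
cmake -S . -B build \
  -DQUDA_DSLASH_PREFETCH=1 \
  -DQUDA_DSLASH_DOUBLE_STORE=ON \
  -DQUDA_DSLASH_PREFETCH_DISTANCE_WILSON=2 \
  -DQUDA_DSLASH_PREFETCH_DISTANCE_STAGGERED=2
```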