Add GPU support using CuPy #464
Open

AlexanderSinn wants to merge 76 commits into LASY-org:development from AlexanderSinn:gpu_lasy
Conversation
Author

Now the CI passes when I run it on a GPU (A100) with CuPy. It is twice as fast compared to running it on the CPU.
Labels: design (Raise discussion on LASY architecture and design), gpu (Related to running LASY on GPU), non-backward-compatible
Based on #437
This PR adds GPU support using CuPy. Similar to how it is done in pywarpx, the `lasy.backend` file defines `xp`, which is then used everywhere instead of numpy's `np`. `xp` is either cupy or numpy, depending on whether the cupy module was found. Additionally, `lasy.backend` defines the helper functions `to_cpu` and `to_gpu`, which can be used to convert between numpy and cupy arrays. If numpy is used as the backend, these functions are still available but do nothing.
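The code listing for `lasy.backend` did not survive in this view. As a rough sketch only, assuming the usual try/except import pattern that the description implies, the module could look like this (everything other than the names `xp`, `to_cpu`, and `to_gpu` is inferred, not taken from the PR):

```python
# Hypothetical sketch of lasy/backend.py, inferred from the PR description;
# not the actual file from the PR.
try:
    import cupy as xp

    def to_gpu(array):
        # Move a numpy array onto the GPU as a cupy array.
        return xp.asarray(array)

    def to_cpu(array):
        # Copy a cupy array back to the CPU as a numpy array.
        return xp.asnumpy(array)

except ImportError:
    import numpy as xp

    def to_gpu(array):
        # cupy is unavailable: arrays already live on the CPU, so do nothing.
        return array

    def to_cpu(array):
        # Likewise a no-op on the numpy backend.
        return array
```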
Changes in this PR:

For every occurrence in lasy of `import numpy as np`, this PR substitutes `from lasy.backend import xp`, except in `refractive_index.py`, where numpy is needed to work with numdifftools.

IMPORTANT: All future additions to lasy should also use only `xp` instead of `np`.

Whenever a lasy array is passed to a Python library that does not support cupy, such as matplotlib, numdifftools, scipy (for the functions not provided in `cupyx.scipy`), openpmd-api, and axiprop (which has a CuPy backend, but whose interface mostly uses CPU arrays), the `to_cpu` helper function has to be used. For example:
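The PR's own example was not rendered here; the following is an illustrative sketch (array contents and file name are made up) of handing data to matplotlib:

```python
import matplotlib.pyplot as plt

from lasy.backend import to_cpu, xp

# On the cupy backend this array lives on the GPU.
field = xp.linspace(0.0, 1.0, 256) ** 2

# matplotlib cannot consume cupy arrays, so copy to the CPU first.
# On the numpy backend, to_cpu is a no-op.
plt.plot(to_cpu(field))
plt.savefig("field.png")
```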
When indexing into a CuPy array or using `xp.sum(...)`, a zero-dimensional array is returned instead of a Python float, which avoids copies to the CPU and synchronization. This makes it necessary to cast explicitly with `float()` in the few places where a scalar is needed.
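A brief illustration of this behavior (variable names are illustrative, not from the PR):

```python
from lasy.backend import xp

energy = xp.ones((128, 128))

# On the cupy backend, both results are zero-dimensional device arrays,
# not Python floats; no copy to the CPU has happened yet.
total = xp.sum(energy)
corner = energy[0, 0]

# float() forces the device-to-host copy and yields a true Python scalar,
# which is what CPU-only code paths expect.
print(f"total = {float(total)}, corner = {float(corner)}")
```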