Summary:
This diff does the following:
Integrates torch.device into CrypTen. If a user provides a CUDA tensor as input, CrypTen will detect that and run the backend on a GPU.
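The input-driven device detection described above can be sketched roughly as follows. This is a hypothetical illustration, not CrypTen's actual code: `FakeTensor` and `select_backend` are made-up names standing in for a torch tensor's `is_cuda` attribute and the backend-selection logic.

```python
# Hypothetical sketch of input-driven device detection (not CrypTen code).
class FakeTensor:
    """Stands in for a torch tensor, which exposes .device and .is_cuda."""
    def __init__(self, device):
        self.device = device
        self.is_cuda = device.startswith("cuda")

def select_backend(tensor):
    # Mirror the torch-style check: run the GPU backend only when the
    # user's input tensor lives on a CUDA device.
    return "gpu" if getattr(tensor, "is_cuda", False) else "cpu"
```

The point of keying off the input tensor is that the user never passes an explicit device flag; the library follows wherever the data already lives.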
Applies cuda_patches to beaver.py. As lvdmaaten suggested, this design could make our code error-prone if mpc.py and arithmetic.py were littered with if/else statements. After running tests on the backend, however, we discovered that cuda_patches is only needed in a few places related to arithmetic functions; the rest of the codebase works smoothly with few changes. It is probably fine to use cuda_patches as a temporary hack. What do you think? lvdmaaten knottb
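The cuda_patches idea above can be sketched as a small override table. This is a hedged, torch-free sketch with hypothetical names (`matmul_cpu`, `matmul_cuda`, `dispatch`), not the actual beaver.py implementation: rather than scattering if/else branches through mpc.py and arithmetic.py, only the few arithmetic functions that need CUDA-specific behavior get an entry in the patch table.

```python
def matmul_cpu(a, b):
    # Plain nested-list matrix multiply; stands in for the default backend.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def matmul_cuda(a, b):
    # Placeholder for a CUDA-specific kernel; here it just reuses the
    # CPU path so the sketch stays runnable without a GPU.
    return matmul_cpu(a, b)

# Only functions that actually need a CUDA variant appear here.
cuda_patches = {"matmul": matmul_cuda}
_defaults = {"matmul": matmul_cpu}

def dispatch(name, use_cuda, *args):
    # One lookup replaces per-call-site if/else branches.
    fn = cuda_patches.get(name) if use_cuda else None
    return (fn or _defaults[name])(*args)
```

Keeping the CUDA-specific code behind a single lookup table localizes the divergence, which is what makes cuda_patches tolerable as a temporary hack.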
This diff only integrates CUDA support for TFP; a future diff will extend it to TTP.
Differential Revision: D21952814