
CUDA backend integration for TFP #134

Closed
wants to merge 4 commits into from

Conversation

jeffreysijuntan
Contributor

Summary:
This diff does the following:

  1. Integrate torch.device into CrypTen. If a user provides a CUDA tensor as input, CrypTen detects this and runs the backend on the GPU.

  2. Apply cuda_patches to beaver.py. As lvdmaaten suggested, littering mpc.py and arithmetic.py with if/else statements would make the code error-prone. After running tests on the backend, however, we found that cuda_patches is only needed in a few places related to arithmetic functions; the rest of the codebase works smoothly with few changes. It is probably fine to use cuda_patches as a temporary hack. What do you think? lvdmaaten knottb

This diff only integrates CUDA support for TFP; future diffs will extend it to TTP.
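The device-detection and cuda_patches routing described above can be sketched roughly as follows. Everything here is a hypothetical illustration, not CrypTen's actual API: the `FakeTensor` stand-in plays the role of a torch tensor that knows its `torch.device`, and `dispatch` shows how a patch table keeps device branching in one place instead of scattering if/else through mpc.py and arithmetic.py.

```python
from dataclasses import dataclass

# Minimal stand-in for a tensor that knows its device; in CrypTen this role
# is played by torch tensors and torch.device. (Hypothetical sketch.)
@dataclass
class FakeTensor:
    data: int
    device: str = "cpu"

def add_cpu(a, b):
    return FakeTensor(a.data + b.data, "cpu")

def add_cuda(a, b):
    # A real CUDA-specific implementation would differ; same math for the sketch.
    return FakeTensor(a.data + b.data, "cuda")

# cuda_patches maps op names to CUDA-specific implementations, so a module
# like beaver.py can route by device in a single place.
cuda_patches = {"add": add_cuda}

def dispatch(name, a, b):
    # Detect a CUDA input and swap in the patched op; otherwise use the
    # default CPU implementation.
    if a.device == "cuda" and name in cuda_patches:
        return cuda_patches[name](a, b)
    return globals()[name + "_cpu"](a, b)
```

For example, `dispatch("add", FakeTensor(2, "cuda"), FakeTensor(3, "cuda"))` routes to `add_cuda`, while the same call with CPU tensors takes the default path.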

Differential Revision: D21952814

jeffreysijuntan added 4 commits June 9, 2020 07:53
Summary:
Pull Request resolved: #131

This diff does the following two things:

1. Implement an encoder for LongTensor that embeds fixed-point arithmetic into floating-point arithmetic. This lets us use existing floating-point CUDA kernels to compute matrix multiplication on LongTensors.

2. Add a new test file, test_cuda.py, for future CUDA-related testing.

The encoder currently only supports matmul. Conv2d will be supported in future diffs.
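One standard way to embed int64 matmul into floating-point kernels is block decomposition: split each 64-bit entry into 16-bit blocks, multiply the blocks with float64 matmuls (which stay exact, since 16-bit block products summed over a modest inner dimension fit in float64's 52-bit mantissa), and recombine the partial products with shifts mod 2**64. The NumPy sketch below illustrates the idea only; it is not CrypTen's actual encoder, which operates on torch CUDA tensors.

```python
import numpy as np

BLOCK = 16  # bits per block: products of 16-bit blocks stay exact in float64

def _encode(x):
    # Split each int64 entry into four 16-bit blocks, cast to float64.
    return np.stack([((x >> (BLOCK * i)) & 0xFFFF).astype(np.float64)
                     for i in range(4)])

def long_matmul(a, b):
    # int64 matmul expressed as float64 matmuls (the kind of kernel a GPU
    # provides), recombined with shifts mod 2**64.
    ea, eb = _encode(a), _encode(b)
    out = np.zeros((a.shape[0], b.shape[1]), dtype=np.int64)
    for i in range(4):
        for j in range(4):
            if i + j < 4:  # blocks shifted past bit 63 vanish mod 2**64
                out += (ea[i] @ eb[j]).astype(np.int64) << (BLOCK * (i + j))
    return out
```

Because all arithmetic is mod 2**64, the recombination agrees with native int64 matmul, including for values too large to multiply exactly in a single float64.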

The API design is based on: D21860688

Differential Revision: D21882865

fbshipit-source-id: 1c46741a81ca5f386ecbb974dc937bbe759a1af2
Summary: Support CUDA for conv1d, conv2d, conv1d_transpose, and conv2d_transpose

Differential Revision: D21885098

fbshipit-source-id: 74f7b59ebc4e7e57519f0a8a38b81dca6d363c5c
Summary: Add comprehensive test cases to TestCUDA that enable us to check the correctness of the CUDA integration
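A TestCUDA-style correctness check typically runs the same operation along the reference path and the device-specific path and compares results. The stand-in functions below are hypothetical placeholders for the real CrypTen ops, which would run once on CPU tensors and once on CUDA tensors:

```python
import unittest

# Hypothetical stand-ins: in the real TestCUDA these would be the same CrypTen
# op executed on the CPU (reference) and CUDA (patched) backends.
def op_reference(values):
    return [v * 2 for v in values]

def op_patched(values):
    return [v + v for v in values]

class TestCUDA(unittest.TestCase):
    def test_patched_matches_reference(self):
        # Correctness check: device-specific path must agree with reference.
        data = [1, 2, 3, -4]
        self.assertEqual(op_patched(data), op_reference(data))
```

Run with `python -m unittest` to execute the suite.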

Differential Revision: D21911937

fbshipit-source-id: 0b65a38f0d838d91cb58f20cbf525ea3ac7f5504
Summary: Integrate torch.device into CrypTen and apply cuda_patches to beaver.py; see the PR summary above. CUDA support is integrated for TFP only; future diffs will extend it to TTP.

Differential Revision: D21952814

fbshipit-source-id: 0811c4848ae9fd7a115df82ef998a9f5202938bf
@facebook-github-bot added the CLA Signed (managed by the Facebook bot; authors must sign the CLA before a PR can be reviewed) and fb-exported labels on Jun 9, 2020
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D21952814

@knottb closed this on Jun 30, 2020
tanjuntao pushed a commit to tanjuntao/CrypTen that referenced this pull request Nov 27, 2023
Summary:
## Types of changes

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [x] Docs change / refactoring / dependency upgrade

## Motivation and Context / Related issue

Cleaning up and restructuring

## How Has This Been Tested (if it applies)

Tested locally on a MacBook.

## Checklist

- [x] The documentation is up-to-date with the changes I made.
- [x] I have read the **CONTRIBUTING** document and completed the CLA (see **CONTRIBUTING**).
- [ ] All tests passed, and additional code has been covered with new tests.
Pull Request resolved: fairinternal/CrypTen#134

Differential Revision: D17850045

Pulled By: vshobha

fbshipit-source-id: 5f8e13d3031bc0212d6db10660df2f791b9aa114