Fix PT2 compliant opcheck tests #3404


Closed
spcyppt wants to merge 1 commit

Conversation

spcyppt
Contributor

@spcyppt spcyppt commented Nov 22, 2024

Summary:
Add decorator to skip PT2 compliant opcheck tests.

```
AssertionError: op 'fbgemm::split_embedding_codegen_lookup_rowwise_adagrad_function' was tagged with torch.Tag.pt2_compliant_tag but it failed some of the generated opcheck tests (['BackwardAdagradGlobalWeightDecay.test_faketensor__test_backward_adagrad_global_weight_decay',
```

This is due to the faketensor test being added to the failure_dict.

Faketensor tests fail because they expect all tensors to be on the same device, whereas we set learning_rate_tensor to be on CPU to avoid a D->H sync, since it will be converted to a float for the kernel.
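For illustration only, here is one possible shape such a skip decorator could take. The helper name, the prefix list, and the method-name check are assumptions, not the code in this diff; the pattern it shows is simply "skip only the generated opcheck variants, keep the plain eager test running":

```python
# Hypothetical sketch, not FBGEMM's actual helper: skip only the auto-generated
# PT2-compliance opcheck variants (test_faketensor__*, etc.) of a test method,
# while the original eager test still runs.
import functools
import unittest
from typing import Callable

# Assumed prefixes used by the generated opcheck test names.
_OPCHECK_PREFIXES = ("test_schema__", "test_faketensor__", "test_aot_dispatch_")


def skip_pt2_compliant_opcheck(reason: str) -> Callable:
    def decorator(test_func: Callable) -> Callable:
        @functools.wraps(test_func)
        def wrapper(self, *args, **kwargs):
            # The generated variants run on the same TestCase instance, so the
            # current method name tells us whether this is an opcheck run.
            if self._testMethodName.startswith(_OPCHECK_PREFIXES):
                raise unittest.SkipTest(reason)
            return test_func(self, *args, **kwargs)

        return wrapper

    return decorator


# Possible usage on the failing test named in the error above:
#
# @skip_pt2_compliant_opcheck(
#     "learning_rate_tensor intentionally lives on CPU to avoid a D->H sync"
# )
# def test_backward_adagrad_global_weight_decay(self): ...
```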

Reviewed By: q10

Differential Revision: D66346179
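For context on what the generated test_faketensor__ variants exercise, below is a hedged, self-contained sketch using the public torch.library API (PyTorch 2.4+). The op mylib::scaled_copy and its kernels are made up and are not the fbgemm operator; they only mirror the pattern of a learning-rate scalar kept on CPU while the fake kernel merely propagates metadata:

```python
# Assumed example, not FBGEMM code: a custom op whose real kernel reads a CPU
# scalar learning rate, plus a registered fake kernel that only propagates
# metadata. torch.library.opcheck runs the same family of checks (schema,
# fake tensor, autograd registration) as the generated opcheck tests.
import torch


@torch.library.custom_op("mylib::scaled_copy", mutates_args=())
def scaled_copy(x: torch.Tensor, lr: torch.Tensor) -> torch.Tensor:
    # Real kernel: reads the learning rate as a Python float, so keeping lr on
    # CPU avoids a device-to-host sync inside the op itself.
    return x * lr.item()


@scaled_copy.register_fake
def _(x: torch.Tensor, lr: torch.Tensor) -> torch.Tensor:
    # Fake kernel never touches lr's data, so a CPU lr next to a GPU x is
    # harmless here; it only reports the output's metadata.
    return torch.empty_like(x)


if __name__ == "__main__":
    x = torch.randn(4)
    lr = torch.tensor(0.05)  # CPU scalar, mirroring learning_rate_tensor in this PR
    torch.library.opcheck(scaled_copy, (x, lr))
```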

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D66346179


netlify bot commented Nov 22, 2024

Deploy Preview for pytorch-fbgemm-docs ready!

Name | Link
--- | ---
🔨 Latest commit | 6a3a99e
🔍 Latest deploy log | https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/67410e20e83ec7000858904e
😎 Deploy Preview | https://deploy-preview-3404--pytorch-fbgemm-docs.netlify.app

spcyppt added a commit to spcyppt/FBGEMM that referenced this pull request Nov 22, 2024
Summary:
X-link: facebookresearch/FBGEMM#492


Add decorator to skip PT2 compliant opcheck tests.

```
AssertionError: op 'fbgemm::split_embedding_codegen_lookup_rowwise_adagrad_function' was tagged with torch.Tag.pt2_compliant_tag but it failed some of the generated opcheck tests (['BackwardAdagradGlobalWeightDecay.test_faketensor__test_backward_adagrad_global_weight_decay', 
```
This is due to the faketensor test being added to the failure_dict.

Faketensor tests fail because they expect all tensors to be on the same device, whereas we set learning_rate_tensor to be on CPU to avoid a D->H sync, since it will be converted to a float for the kernel.

Reviewed By: q10

Differential Revision: D66346179
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D66346179

spcyppt added a commit to spcyppt/FBGEMM that referenced this pull request Nov 22, 2024
Summary:
X-link: facebookresearch/FBGEMM#492


Add decorator to skip PT2 compliant opcheck tests.

```
AssertionError: op 'fbgemm::split_embedding_codegen_lookup_rowwise_adagrad_function' was tagged with torch.Tag.pt2_compliant_tag but it failed some of the generated opcheck tests (['BackwardAdagradGlobalWeightDecay.test_faketensor__test_backward_adagrad_global_weight_decay', 
```
This is due to the faketensor test being added to the failure_dict.

Faketensor tests fail because they expect all tensors to be on the same device, whereas we set learning_rate_tensor to be on CPU to avoid a D->H sync, since it will be converted to a float for the kernel.

Reviewed By: q10

Differential Revision: D66346179
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D66346179

@facebook-github-bot
Contributor

This pull request has been merged in ed15cca.

q10 pushed a commit to q10/FBGEMM that referenced this pull request Apr 10, 2025
Summary:
Pull Request resolved: facebookresearch/FBGEMM#492

X-link: pytorch#3404

Add decorator to skip PT2 compliant opcheck tests.

```
AssertionError: op 'fbgemm::split_embedding_codegen_lookup_rowwise_adagrad_function' was tagged with torch.Tag.pt2_compliant_tag but it failed some of the generated opcheck tests (['BackwardAdagradGlobalWeightDecay.test_faketensor__test_backward_adagrad_global_weight_decay',
```
This is due to the faketensor test being added to the failure_dict.

Faketensor tests fail because they expect all tensors to be on the same device, whereas we set learning_rate_tensor to be on CPU to avoid a D->H sync, since it will be converted to a float for the kernel.

Reviewed By: q10

Differential Revision: D66346179

fbshipit-source-id: 8b0784071a9845a33ba3a75ab8a486d727a92672