
primitive scale fix #2210


Open · YIWENX14 wants to merge 1 commit into main from export-D74446877
Conversation

YIWENX14 (Contributor):

Differential Revision: D74446877


pytorch-bot (bot) commented May 14, 2025:

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2210

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

✅ No Failures

As of commit ed81130 with merge base 554cb60:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label (May 14, 2025)
facebook-github-bot (Contributor):

This pull request was exported from Phabricator. Differential Revision: D74446877

YIWENX14 (Contributor, Author):

@pytorchbot label "topic: not user facing"

pytorch-bot added the topic: not user facing label (May 14, 2025)
YIWENX14 added a commit to YIWENX14/ao that referenced this pull request (May 14, 2025). Summary: Pull Request resolved: pytorch#2210. Differential Revision: D74446877

YIWENX14 force-pushed the export-D74446877 branch from 3fc617b to 0db5fc2 (May 14, 2025, 22:44)

facebook-github-bot (Contributor): This pull request was exported from Phabricator. Differential Revision: D74446877

YIWENX14 force-pushed the export-D74446877 branch from 0db5fc2 to cdc082f (May 14, 2025, 23:54)

YIWENX14 added a commit to YIWENX14/ao that referenced this pull request (May 14, 2025). Summary: Pull Request resolved: pytorch#2210. Differential Revision: D74446877

facebook-github-bot (Contributor): This pull request was exported from Phabricator. Differential Revision: D74446877

YIWENX14 added a commit to YIWENX14/ao that referenced this pull request (May 15, 2025). Summary: Pull Request resolved: pytorch#2210. Differential Revision: D74446877

YIWENX14 force-pushed the export-D74446877 branch from cdc082f to fa63a56 (May 15, 2025, 00:01)

facebook-github-bot (Contributor): This pull request was exported from Phabricator. Differential Revision: D74446877

YIWENX14 added a commit to YIWENX14/ao that referenced this pull request (May 15, 2025). Summary: Pull Request resolved: pytorch#2210. Differential Revision: D74446877

YIWENX14 force-pushed the export-D74446877 branch from fa63a56 to ed81130 (May 15, 2025, 00:17)

facebook-github-bot (Contributor): This pull request was exported from Phabricator. Differential Revision: D74446877

drisspg requested a review from jerryzh168 (May 15, 2025, 00:20)
@@ -948,7 +952,9 @@ def _choose_qparams_affine(
         scale = torch.clamp(scale, min=eps)
     else:
         assert mapping_type == MappingType.ASYMMETRIC.name
-        scale = (max_val_pos - min_val_neg) / float(quant_max - quant_min)
+        scale = (max_val_pos - min_val_neg) / torch.tensor(
+            float(quant_max - quant_min), dtype=min_val_neg.dtype, device=min_val_neg.device
+        )  # note: this line is truncated in the rendered diff; the arguments shown are a reconstruction from the review discussion below
Contributor:
what if you did:
(max_val_pos - min_val_neg) / (quant_max - quant_min).to(torch.float32)

YIWENX14 (Contributor, Author):
Casting to float32 doesn't help the discrepancy on CPU vs GPU.

drisspg (Contributor), May 15, 2025:

Ahh sorry, I see that quant_max and quant_min are ints. Why is this better than the existing code? Maybe a test might be helpful:

import torch
from transformer_nuggets.utils.tracing import LoggingMode
a = torch.randn(2, 3, device="cuda")
with LoggingMode():
    out = a / float(32 - 12)
    print(out)


with LoggingMode():
    out = a / torch.tensor(float(32 - 12), dtype=a.dtype, device=a.device)
    print(out)
    

Produces:

$1: f32[2, 3] = aten.div.Tensor($0, 20.0)
tensor([[-0.0313, -0.0028, -0.0344],
       [ 0.0299,  0.0099, -0.0426]], device='cuda:0')
$0: f32[] = aten.lift_fresh.default($0)
$2: f32[2, 3] = aten.div.Tensor($1, $0)
tensor([[-0.0313, -0.0028, -0.0344],
       [ 0.0299,  0.0099, -0.0426]], device='cuda:0')

Contributor:
Driss, the issue is that "float", as in Python float, is actually float64.
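
For illustration, a minimal sketch of the two division paths being compared; the names here are stand-ins, with n playing the role of quant_max - quant_min:

import torch

a = torch.randn(2, 3, dtype=torch.float32)
n = 20  # stand-in for quant_max - quant_min

# Scalar path: float(n) is a Python float, i.e. an IEEE-754 double (float64).
# The kernel receives a float64 scalar, and how each backend rounds it during
# the division can differ between CPU and CUDA.
out_scalar = a / float(n)

# Tensor path: the divisor is rounded to float32 once, up front, so every
# backend divides by exactly the same float32 value.
out_tensor = a / torch.tensor(float(n), dtype=a.dtype, device=a.device)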

Contributor:
So this isn't about fixing a device but about fixing the dtype?

jerryzh168 (Contributor) left a comment:

OK, I think it's fine to merge as long as it doesn't break existing tests. There is some flexibility in these quant primitive ops, since we never defined them very precisely.
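
For reference, a self-contained sketch of the asymmetric branch with the tensor divisor; the function name and the dtype/device choices are simplified assumptions, not the exact torchao code:

import torch

def asymmetric_scale(min_val_neg, max_val_pos, quant_min: int, quant_max: int):
    # Old behavior: divide by a Python float (float64); the scalar's rounding
    # inside the kernel is backend-dependent, so CPU and CUDA could disagree
    # in the last ulp:
    #   scale = (max_val_pos - min_val_neg) / float(quant_max - quant_min)

    # New behavior: materialize the divisor as a tensor in the operands'
    # dtype so every backend divides by the identical value.
    denom = torch.tensor(
        float(quant_max - quant_min),
        dtype=max_val_pos.dtype,
        device=max_val_pos.device,
    )
    return (max_val_pos - min_val_neg) / denom

Example usage with float32 inputs: asymmetric_scale(torch.tensor(-1.0), torch.tensor(1.0), 0, 255).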
