
Re-land the PR of "Add INT8 SDPA path for CPU" #2215


Merged · 5 commits merged into pytorch:main on May 21, 2025

Conversation

@Valentine233 (Collaborator) commented on May 16, 2025:

Re-land #1372.

Based on the original PR, there are two main modifications:

  1. Fix the wheel issue ("nightly build for mac stops on 0422", #2157) by disabling the build of cpp files by default. The cpp kernels are only built if USE_CPP_KERNELS=1 is set manually when building from source; support for pip installation with cpp kernels will be follow-up work. (A sketch of the gating follows this list.)
  2. Rename the API from scaled_dot_product_int8 to qscaled_dot_product, so that the same API can be reused for future FP8 SDPA support.
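
A minimal sketch of the gating described in item 1, assuming a setuptools-style setup.py: the flag name use_cpp_kernels comes from the review exchange below, while the extension name and source path are illustrative placeholders, not the PR's actual diff.

import os

# Compiled cpp kernels are opt-in: the flag defaults to off, so regular wheel
# builds (e.g. the failing mac nightly, #2157) never compile the cpp sources.
use_cpp_kernels = os.getenv("USE_CPP_KERNELS", "0") == "1"

ext_modules = []
if use_cpp_kernels:
    from torch.utils.cpp_extension import CppExtension

    ext_modules.append(
        CppExtension(
            name="torchao._C_cpu",  # hypothetical extension name
            sources=["torchao/csrc/cpu/int8_sdpa.cpp"],  # illustrative path
        )
    )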

pytorch-bot (bot) commented on May 16, 2025:

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2215

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 65f7d50 with merge base 96aec6a:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the "CLA Signed" label on May 16, 2025.
@Valentine233 added the "topic: not user facing" label on May 16, 2025.
@Valentine233 marked this pull request as draft on May 16, 2025 at 06:11.
@Valentine233 marked this pull request as ready for review on May 16, 2025 at 07:27.
from torchao.prototype.inductor.fx_passes.int8_sdpa_fusion import _int8_sdpa_init
from torchao.utils import TORCH_VERSION_AT_LEAST_2_7

use_cpp_avx512 = os.getenv("USE_AVX512", "0") == "1"
Collaborator commented:

This feels wrong:

  1. If the user didn't set the flag during the build phase but only when testing, will that cause a CI failure?
  2. If the user built the custom op but didn't enable this flag when testing, will the tests just be skipped?

One way that comes to mind is to check whether this custom op has been registered to the CPU dispatch key correctly, for example via torch._C._dispatch_dump("torchao::qscaled_dot_product"). Feel free to explore if you have a better idea.

@Valentine233 (Collaborator, Author) replied on May 19, 2025:

Thanks for the suggestion; replaced with "CPU" in torch._C._dispatch_dump("torchao::qscaled_dot_product").
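
As a minimal sketch of the adopted check: torch._C._dispatch_dump returns a textual dump of the kernels registered for an operator (an empty string when the op is unknown), so a "CPU" entry means the cpp kernels were built and registered. The test decorator shown in the comment is illustrative usage, not the PR's exact code.

import torch

# False both when the op is missing entirely (empty dump) and when no CPU
# kernel was registered for it.
CPU_KERNELS_AVAILABLE = "CPU" in torch._C._dispatch_dump(
    "torchao::qscaled_dot_product"
)

# Illustrative gating in a test module:
# @unittest.skipIf(not CPU_KERNELS_AVAILABLE, "cpp kernels were not built")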

self.device, enabled=enable_autocast, dtype=torch.bfloat16
),
):
_int8_sdpa_init()
Collaborator commented:

For how to register the custom pass, could we follow the suggestion in pytorch/pytorch#153532 (comment)?

@Valentine233 (Collaborator, Author) replied:

Thanks and modified!
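
The linked suggestion is not reproduced above; as a rough sketch of one way to register a custom Inductor pass cache-safely, recent PyTorch exposes a CustomGraphPass interface. The class name and the choice of the post_grad_custom_post_pass slot are illustrative assumptions, not necessarily what the PR ended up doing.

import torch
from torch._inductor import config
from torch._inductor.custom_graph_pass import CustomGraphPass, get_hash_for_files


class Int8SdpaFusionPass(CustomGraphPass):  # hypothetical name
    def __call__(self, graph: torch.fx.Graph) -> None:
        # The int8 SDPA pattern match/rewrite would run here, replacing the
        # matched subgraph with a call to torchao::qscaled_dot_product.
        pass

    def uuid(self):
        # Hashing this file lets Inductor's cache invalidate compiled
        # artifacts whenever the pass implementation changes.
        return get_hash_for_files((__file__,))


config.post_grad_custom_post_pass = Int8SdpaFusionPass()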

test/test_ops.py Outdated
compute_max_diff,
)

use_cpp_avx512 = os.getenv("USE_AVX512", "0") == "1"
Collaborator commented:

Ditto

@Valentine233 (Collaborator, Author) replied:

Thanks for the suggestion; replaced with "CPU" in torch._C._dispatch_dump("torchao::qscaled_dot_product").

setup.py Outdated
@@ -55,6 +55,10 @@ def read_version(file_path="version.txt"):
and platform.system() == "Darwin"
)

use_cpp_avx512 = os.getenv("USE_AVX512", "0") == "1"
Collaborator commented:

This name might not be intuitive. This flag actually decides whether the CPP kernels are built or not.

@Valentine233 (Collaborator, Author) replied:

Thanks. Changed to use_cpp_kernels.

@atalman (Contributor) left a review:

wheel build looks good

@Valentine233 merged commit 1bbeed1 into pytorch:main on May 21, 2025 · 35 checks passed
drisspg added a commit that referenced this pull request on May 21, 2025.