If the input is contiguous, short-circuit infer_size_dv in reshape #95216
Conversation
The main improvement is that this avoids guards from infer_size_dv, although this also counts as a minor perf improvement too. Signed-off-by: Edward Z. Yang <ezyang@meta.com> [ghstack-poisoned]
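For context, here is a rough Python stand-in for what size inference in `reshape` has to do (the real `infer_size_dv` is C++ in ATen; the names and details below are illustrative, not the actual implementation). Resolving a single `-1` in the target shape requires divisibility and element-count checks, and under symbolic shapes each such check turns into a guard — which is what the contiguous short-circuit avoids:

```python
def infer_size(shape, numel):
    """Resolve at most one -1 entry in `shape` so it covers `numel` elements.

    Illustrative sketch of the checks that become guards under symbolic
    shapes; not the actual ATen infer_size_dv implementation.
    """
    infer_dim = None
    known = 1
    for i, d in enumerate(shape):
        if d == -1:
            if infer_dim is not None:
                raise ValueError("only one dimension can be inferred")
            infer_dim = i
        else:
            known *= d  # under symbolic shapes: guards on each concrete size
    if infer_dim is not None:
        # divisibility check -> another guard when sizes are symbolic
        if known == 0 or numel % known != 0:
            raise ValueError(f"shape {shape} is invalid for input of size {numel}")
        shape = list(shape)
        shape[infer_dim] = numel // known
        return shape
    # total element-count check -> another guard
    if known != numel:
        raise ValueError(f"shape {shape} is invalid for input of size {numel}")
    return list(shape)
```

When the input is contiguous, `reshape` can dispatch straight to the view path and skip this bookkeeping, which is why the PR both drops the guards and counts as a minor perf win.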
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/95216
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 0f5279e. This comment was automatically generated by Dr. CI and updates every 15 minutes.
I guess that's ok as long as you're not doing anything funky with `dispatch_sizes_strides_policy`.

You do need to add the same

`if (!self.is_xla() && !self.is_lazy() && !self.is_ipu() && !at::isTensorSubclassLike(self)) {`

as below, though.
I think it is sound to omit the checks. Let us enumerate the cases:
lol but CI HAS PROVED ME WRONGGGG
… reshape" The main improvement is that this avoids guards from infer_size_dv, although this also counts as a minor perf improvement too. Signed-off-by: Edward Z. Yang <ezyang@meta.com> [ghstack-poisoned]
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Sounds good!
…95216) The main improvement is that this avoids guards from infer_size_dv, although this also counts as a minor perf improvement too. Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: pytorch/pytorch#95216 Approved by: https://github.com/albanD
…shape (pytorch#95216)" This reverts commit e5785f1.
…ytorch#95216) The main improvement is that this avoids guards from infer_size_dv, although this also counts as a minor perf improvement too. Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: pytorch#95216 Approved by: https://github.com/albanD
One recent problem is that unbacked SymInts hit this case, and when that happens we lose the option to copy as a default last-resort behaviour.
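A tiny illustrative model of why an unknown contiguity bit is awkward here (the function name and return values below are invented for exposition; this is not PyTorch code):

```python
# Toy model of the reshape dispatch decision, for exposition only.
def choose_path(contiguous):
    """contiguous: True, False, or None when the bit is unbacked,
    i.e. not known at trace time."""
    if contiguous is True:
        # Short-circuit path: commits to returning a view up front,
        # so the copy fallback is no longer available later.
        return "view"
    if contiguous is False:
        # General path: try to compute view strides, else copy.
        return "view-or-copy"
    # Unbacked: taking either branch would require guarding on the
    # symbolic shape, which is exactly what the change tries to avoid.
    return "guard-required"
```

The point is that the contiguous short-circuit picks the view path eagerly; once an unbacked size forces that decision, the copy fallback that the general path keeps in reserve is gone.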
Stack from ghstack (oldest at bottom):
The main improvement is that this avoids guards from infer_size_dv,
although this also counts as a minor perf improvement too.
Signed-off-by: Edward Z. Yang <ezyang@meta.com>