Add lowering for adaptive avg pool #962
Conversation
Hm, I would think they reduce to a 1x1 output and thus could be replaced by a sum, not a no-op?
torchinductor/lowering.py
Outdated
# no-op if the same input and output
if h_in == h_out and w_in == w_out:
    # TODO: do I need to copy? _to_copy does not
    return x
Better to do a copy, yes (to make sure that previously run functionalization is still valid).
torchdynamo/torchinductor/lowering.py, line 915 at 4b212e1:
def clone(x, *, memory_format=0):
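The suggestion amounts to returning a copy instead of the input tensor in the same-size case. A minimal eager-mode sketch of the idea (hypothetical and simplified; the real lowering works on inductor IR, not eager tensors):

```python
import torch
import torch.nn.functional as F

def adaptive_avg_pool2d_with_fast_path(x: torch.Tensor, output_size):
    # Hypothetical simplified sketch of the lowering's same-size fast path.
    h_in, w_in = x.shape[-2:]
    h_out, w_out = output_size
    if h_in == h_out and w_in == w_out:
        # Return a clone rather than x itself, so that previously run
        # functionalization remains valid (the reviewer's suggestion).
        return x.clone()
    return F.adaptive_avg_pool2d(x, output_size)
```

The clone keeps the output from aliasing the input, which is what makes the "no-op" branch safe to use as a lowering.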
The output size is given, not the kernel size, which differs from the other pooling APIs. When output_size == input_size for adaptive average pooling, the kernel size is [1, 1], so each output element is a 1-element average (a no-op).
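Concretely, adaptive pooling derives each output element's window from the input and output sizes. With the standard start/end computation, equal input and output sizes give 1-element windows along that dimension:

```python
import math

def adaptive_window(i, in_size, out_size):
    # Input index range [start, end) averaged for output index i,
    # using the standard adaptive-pooling window computation.
    start = (i * in_size) // out_size
    end = math.ceil((i + 1) * in_size / out_size)
    return start, end
```

For `in_size == out_size` every window is `(i, i + 1)`, i.e. exactly one element, which is why the op degenerates to the identity.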
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
We should add a test for this.
Kind of funny that these are no-ops, though it's not the first time I've seen cases like that.
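A sketch of the invariant such a test could check (in the actual suite it would go through inductor's compile-and-compare harness; this is just the eager-mode property):

```python
import torch
import torch.nn.functional as F

def check_same_size_adaptive_avg_pool():
    # Same-size adaptive avg pool should be the identity: each
    # window averages exactly one element.
    x = torch.randn(2, 3, 5, 5)
    y = F.adaptive_avg_pool2d(x, (5, 5))
    assert torch.allclose(y, x)
    return True
```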
This was already implemented as a lowering in pytorch/torchdynamo#962. I'm putting the idea up here (I haven't even run this code, so it surely has *many* issues, but I reckon the general idea should hopefully be alright). [ghstack-poisoned]
This was already implemented as a lowering in pytorch/torchdynamo#962. I'm putting the idea up here ~(I haven't even run this code, so it surely has *many* issues, but I reckon the general idea should hopefully be alright).~ The tests now pass and I corrected the issues that the first implementation had. [ghstack-poisoned]
Summary: This was already implemented as a lowering in pytorch/torchdynamo#962. I'm putting the idea up here ~(I haven't even run this code, so it surely has *many* issues, but I reckon the general idea should hopefully be alright).~ The tests now pass and I corrected the issues that the first implementation had. Pull Request resolved: #84062 Approved by: https://github.com/jansel Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/9fc02f6bc558f26a460241cbeaea915ea2b41005 Reviewed By: weiwangmeta Differential Revision: D39091747 fbshipit-source-id: f984c729e355d2d1a4abe0c01a97b30fbdca95b7
The two occurrences of this op in torchbench are both no-ops, lol. When you disable the no-op optimization, the op is ~6x faster (again, for those two sets of inputs).
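A rough way to reproduce that kind of comparison (hypothetical wall-clock sketch; the actual torchbench measurement pipeline differs, and the input shape here is made up):

```python
import time
import torch
import torch.nn.functional as F

def bench(fn, iters=100):
    # Crude wall-clock average; real benchmarks warm up more carefully
    # and synchronize the device when measuring on GPU.
    fn()  # warmup
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - t0) / iters

x = torch.randn(8, 512, 7, 7)  # hypothetical shape where output == input
t_pool = bench(lambda: F.adaptive_avg_pool2d(x, (7, 7)))  # full pooling op
t_copy = bench(lambda: x.clone())                         # no-op fast path
```

Comparing `t_pool` against `t_copy` gives a ballpark for how much the no-op path saves on a given shape.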