forked from pytorch/pytorch
Fuse tensor-scalar ops when scalar is constant (pytorch#10511)
Summary: This is on the way to resolving pytorch#9940 and fixes pytorch#10501. This PR modifies the graph fuser to fuse operations that have constant scalar arguments; those constants are inlined directly into the kernel body. The context is that LSTM backward (in particular, sigmoid backward) contains many add(x, 1.) operations, so this change should be sufficient for LSTM backward to be fused by the graph fuser.

cc apaszke zdevito

Pull Request resolved: pytorch#10511
Differential Revision: D9378896
Pulled By: zou3519
fbshipit-source-id: 6a7a2987f5b6e8edaaf4b599cd200df33361650f
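As context for why constant-scalar fusion matters here: sigmoid backward computes grad * y * (1 - y), and the "(1 - y)" term is exactly a tensor-scalar op with the constant 1. A minimal plain-Python sketch of that math (purely illustrative, no PyTorch dependency; in the real JIT graph these would be aten ops on tensors):

```python
import math

def sigmoid(x):
    # Scalar sigmoid, standing in for the element-wise tensor op.
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_backward(grad_out, y):
    # d/dx sigmoid(x) = y * (1 - y), where y = sigmoid(x).
    # The "(1 - y)" is the constant-scalar operation this commit
    # lets the graph fuser inline into the fused kernel body.
    return grad_out * y * (1.0 - y)

y = sigmoid(0.0)              # sigmoid(0) = 0.5
g = sigmoid_backward(1.0, y)  # 0.5 * (1 - 0.5) = 0.25
```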
1 parent: f3ac619
Commit: 86c9856
Showing 8 changed files with 300 additions and 168 deletions.
test/expect/TestScript.test_tensor_scalar_fusion_cuda-1.expect (11 additions, 0 deletions)
@@ -0,0 +1,11 @@
graph(%x : Float(2, 2)) {
  %1 : Float(2, 2) = prim::FusionGroup_0[device=0](%x)
  return (%1);
}
with prim::FusionGroup_0 = graph(%0 : Float(2, 2)) {
  %z : float = prim::Constant[value=3]()
  %4 : int = prim::Constant[value=1]()
  %y : Float(2, 2) = aten::add(%0, %z, %4)
  %2 : Float(2, 2) = aten::mul(%0, %y)
  return (%2);
}
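The fused graph above computes y = x + 3 (with the scalar 3 as a prim::Constant inlined into the FusionGroup) followed by x * y. A hypothetical plain-Python reconstruction of the test function that would produce it (the names and the scalar value are read off the graph; the actual test source lives in the TestScript suite):

```python
# Hypothetical reconstruction: y = x + 3 (compile-time constant
# scalar, so the fuser can inline it), result = x * y.
def fn(x):
    y = x + 3.0
    return x * y

# Element-wise over a nested list, standing in for a 2x2 Float tensor.
out = [[fn(v) for v in row] for row in [[1.0, 2.0], [3.0, 4.0]]]
```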
test/expect/TestScript.test_tensor_scalar_fusion_cuda-2.expect (8 additions, 0 deletions)
@@ -0,0 +1,8 @@
graph(%x : Float(2, 2)
      %z : Float()) {
  %2 : int = prim::TensorToNum(%z)
  %3 : int = prim::Constant[value=1]()
  %y : Dynamic = aten::add(%x, %2, %3)
  %5 : Dynamic = aten::mul(%x, %y)
  return (%5);
}
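For contrast, the second expect file shows the same computation when the scalar arrives as a runtime tensor input (%z converted via prim::TensorToNum): since it is not a compile-time constant, no FusionGroup is formed. A hypothetical sketch of that variant in plain Python:

```python
# Hypothetical reconstruction of the non-fused case: z is a runtime
# value rather than a compile-time constant, so the graph fuser
# cannot inline it into a kernel body (hence no FusionGroup above).
def fn(x, z):
    y = x + z
    return x * y
```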