[Opt depthwise_conv2d] Simplify depthwise_conv2d use_cudnn attribute (#48010)

* simplify depthwise_conv2d phi kernel selection

* fix depthwise_conv2d
jiahy0825 authored Nov 16, 2022
1 parent 8e6315e commit 7c30458
Showing 3 changed files with 7 additions and 9 deletions.
8 changes: 4 additions & 4 deletions paddle/phi/api/yaml/legacy_backward.yaml
@@ -391,7 +391,7 @@
optional : mask

- backward_op : depthwise_conv2d_double_grad
-  forward : depthwise_conv2d_grad (Tensor input, Tensor filter, Tensor grad_out, int[] strides, int[] paddings, str padding_algorithm, int groups, int[] dilations, str data_format, bool use_gpudnn) -> Tensor(grad_input), Tensor(grad_filter)
+  forward : depthwise_conv2d_grad (Tensor input, Tensor filter, Tensor grad_out, int[] strides, int[] paddings, str padding_algorithm, int groups, int[] dilations, str data_format) -> Tensor(grad_input), Tensor(grad_filter)
args : (Tensor input, Tensor filter, Tensor grad_out, Tensor grad_input_grad, Tensor grad_filter_grad, int[] strides, int[] paddings, str padding_algorithm, int groups, int[] dilations, str data_format)
output : Tensor(input_grad), Tensor(filter_grad), Tensor(grad_out_grad)
infer_meta :
@@ -402,16 +402,16 @@
optional : grad_input_grad, grad_filter_grad

- backward_op : depthwise_conv2d_grad
-  forward : depthwise_conv2d (Tensor input, Tensor filter, int[] strides, int[] paddings, str padding_algorithm, int groups, int[] dilations, str data_format, bool use_gpudnn) -> Tensor(out)
-  args : (Tensor input, Tensor filter, Tensor out_grad, int[] strides, int[] paddings, str padding_algorithm, int groups, int[] dilations, str data_format, bool use_gpudnn)
+  forward : depthwise_conv2d (Tensor input, Tensor filter, int[] strides, int[] paddings, str padding_algorithm, int groups, int[] dilations, str data_format) -> Tensor(out)
+  args : (Tensor input, Tensor filter, Tensor out_grad, int[] strides, int[] paddings, str padding_algorithm, int groups, int[] dilations, str data_format)
output : Tensor(input_grad), Tensor(filter_grad)
infer_meta :
func : GeneralBinaryGradInferMeta
param : [input, filter]
kernel :
func : depthwise_conv2d_grad
param : [input, filter, out_grad, strides, paddings, padding_algorithm, groups, dilations, data_format]
-    use_gpudnn : use_gpudnn
+    use_gpudnn : True
backward : depthwise_conv2d_double_grad

- backward_op : depthwise_conv2d_transpose_grad
4 changes: 2 additions & 2 deletions paddle/phi/api/yaml/legacy_ops.yaml
@@ -541,15 +541,15 @@
backward : deformable_conv_grad

- op : depthwise_conv2d
-  args : (Tensor x, Tensor filter, int[] strides, int[] paddings, str padding_algorithm, int groups, int[] dilations, str data_format, bool use_gpudnn)
+  args : (Tensor x, Tensor filter, int[] strides, int[] paddings, str padding_algorithm, int groups, int[] dilations, str data_format)
output : Tensor(out)
infer_meta :
func : DepthwiseConvInferMeta
param : [x, filter, strides, paddings, padding_algorithm, groups, dilations, data_format]
kernel :
func : depthwise_conv2d
param : [x, filter, strides, paddings, padding_algorithm, groups, dilations, data_format]
-    use_gpudnn : use_gpudnn
+    use_gpudnn : true
backward : depthwise_conv2d_grad

- op : depthwise_conv2d_transpose
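The YAML entries above feed Paddle's operator code generation; this commit stops threading a per-call `use_gpudnn` attribute through the API and instead fixes `use_gpudnn : true` in the op definition, so the backend choice is made by the kernel-selection machinery alone. As a rough, hypothetical sketch (not Paddle's actual generated code; `select_kernel` and its arguments are illustrative names), the resulting dispatch logic looks like:

```python
# Hypothetical sketch of kernel selection for an op whose YAML entry sets
# `use_gpudnn : true`: prefer the GPUDNN-backed kernel whenever the op runs
# on a GPU place in a GPUDNN-enabled build, otherwise fall back to the
# plain PHI kernel. No runtime `use_gpudnn` argument is needed.

def select_kernel(op_name: str, place: str, gpudnn_available: bool) -> str:
    """Return the kernel name to dispatch to for the given place/build."""
    if place == "gpu" and gpudnn_available:
        return f"{op_name}_gpudnn"   # e.g. the cuDNN-backed variant
    return op_name                   # plain PHI kernel otherwise

print(select_kernel("depthwise_conv2d", "gpu", True))   # -> depthwise_conv2d_gpudnn
print(select_kernel("depthwise_conv2d", "cpu", True))   # -> depthwise_conv2d
```

The point of the change is visible here: the caller in Python no longer decides, so the `use_cudnn` plumbing in `conv.py` below can be deleted.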
4 changes: 1 addition & 3 deletions python/paddle/nn/functional/conv.py
@@ -172,7 +172,6 @@ def _conv_nd(
groups,
dilation,
data_format,
-        use_cudnn,
)
if bias is not None:
channel_dim = (
@@ -484,7 +483,7 @@ def conv1d(
conv2d_data_format,
)
else:
-        out = getattr(_C_ops, l_type)(
+        out = _C_ops.depthwise_conv2d(
x,
weight,
stride,
@@ -497,7 +496,6 @@
-1,
False,
False,
-            use_cudnn,
)
if bias is not None:
out = nn.elementwise_add(out, bias, axis=channel_dim)
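For reference, `depthwise_conv2d` is the grouped convolution where `groups == in_channels`: each input channel is convolved with its own filter, independently of the others. A minimal NumPy sketch of the forward computation, assuming NCHW layout, stride 1, and no padding (illustrative only, not Paddle's kernel):

```python
import numpy as np

def depthwise_conv2d(x, filt):
    """Depthwise conv: x is (N, C, H, W), filt is (C, 1, kH, kW).
    Channel c of the output depends only on channel c of the input."""
    n, c, h, w = x.shape
    _, _, kh, kw = filt.shape
    oh, ow = h - kh + 1, w - kw + 1          # valid (no-padding) output size
    out = np.zeros((n, c, oh, ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            patch = x[:, :, i:i + kh, j:j + kw]            # (N, C, kH, kW)
            # Per-channel dot product with the channel's own filter.
            out[:, :, i, j] = (patch * filt[:, 0]).sum(axis=(2, 3))
    return out

x = np.arange(2 * 3 * 4 * 4, dtype=np.float64).reshape(2, 3, 4, 4)
f = np.ones((3, 1, 2, 2))
y = depthwise_conv2d(x, f)
print(y.shape)  # (2, 3, 3, 3)
```

Because each channel's work is independent, a dedicated depthwise kernel (or cuDNN's grouped-convolution path, as selected via `use_gpudnn` above) can be much faster than a general conv2d.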
