
【Hackathon 5th No.38】Add FractionalMaxPool2d / FractionalMaxPool3d API to Paddle -kernel #59847

Merged
merged 25 commits into from
Jan 12, 2024
Changes from 5 commits
Commits
25 commits
ada4fa5
[Init] add fractional max pool kernel and api
megemini Dec 8, 2023
6411892
[Fix] pooling.cu seed offset
megemini Dec 9, 2023
ac2151c
[Change] remove adaptive from fractional max pool
megemini Dec 10, 2023
6d6dcf8
[Change] fractional max 2d gpu pooling.cu grad
megemini Dec 10, 2023
7b3ef68
[Change] fractional max 2d gpu pooling.cu grad with dim3
megemini Dec 11, 2023
80caf0f
[Change] use UnchangedInferMeta
megemini Dec 18, 2023
f512f96
[Change] test api with uint16
megemini Dec 19, 2023
f092060
[Change] wrap test disable_static
megemini Dec 19, 2023
a23d0f2
[Change] regiester float16/bfloat16
megemini Dec 19, 2023
a6f18d4
[Change] remove bfloat16 from cpu kernrl
megemini Dec 19, 2023
8fd2c3f
[Change] test dtypes in cpu and gpu
megemini Dec 19, 2023
940499d
[Change] test_fractional_max_pool3d_2d/3d timeout to 30s
megemini Dec 19, 2023
36aee10
[Fix] resolve conflict
megemini Dec 19, 2023
7d65773
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
megemini Dec 20, 2023
f6182d4
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
megemini Dec 20, 2023
0e8df70
[Change] win32 cannot detect bfloat16 correctly
megemini Dec 20, 2023
6f1f822
[Change] force set_device
megemini Dec 21, 2023
331cd95
[Add] test random_u is None
megemini Dec 21, 2023
7706c4a
[Change] use kernel_size for overlapping mode
megemini Jan 3, 2024
ba2970f
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
megemini Jan 5, 2024
98d4c9b
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
megemini Jan 8, 2024
34b4d28
[Change] clean headers
megemini Jan 9, 2024
c1305fe
[CodeStyle] pooling
megemini Jan 9, 2024
6cc617d
[Change] rename op
megemini Jan 10, 2024
e24739a
[Change] rename func without index
megemini Jan 11, 2024
18 changes: 18 additions & 0 deletions paddle/phi/api/yaml/backward.yaml
@@ -911,6 +911,24 @@
data_type : out_grad
no_need_buffer : x

- backward_op : fractional_max_pool2d_with_index_grad
forward : fractional_max_pool2d_with_index(Tensor x, int[] output_size, float random_u = 0.0) -> Tensor(out), Tensor(mask)
args : (Tensor x, Tensor mask, Tensor out_grad, int[] output_size, float random_u)
output : Tensor(x_grad)
infer_meta :
func : FractionalMaxPoolWithIndexGradInferMeta
kernel :
func : fractional_max_pool2d_with_index_grad

- backward_op : fractional_max_pool3d_with_index_grad
forward : fractional_max_pool3d_with_index(Tensor x, int[] output_size, float random_u = 0.0) -> Tensor(out), Tensor(mask)
args : (Tensor x, Tensor mask, Tensor out_grad, int[] output_size, float random_u)
output : Tensor(x_grad)
infer_meta :
func : FractionalMaxPoolWithIndexGradInferMeta
Contributor

This implementation looks the same as UnchangedInferMeta; you can just configure UnchangedInferMeta directly, and the backward op does not need a new InferMeta function.
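
A minimal sketch of the suggested configuration, reusing UnchangedInferMeta instead of adding FractionalMaxPoolWithIndexGradInferMeta (the `param` list is an assumption based on how other backward ops in backward.yaml are wired, not part of this PR):

```yaml
# Hypothetical rewrite of the backward op above, per the review suggestion.
- backward_op : fractional_max_pool3d_with_index_grad
  forward : fractional_max_pool3d_with_index(Tensor x, int[] output_size, float random_u = 0.0) -> Tensor(out), Tensor(mask)
  args : (Tensor x, Tensor mask, Tensor out_grad, int[] output_size, float random_u)
  output : Tensor(x_grad)
  infer_meta :
    func : UnchangedInferMeta
    param : [x]   # assumption: x alone determines x_grad's meta, as in dx->share_meta(x)
  kernel :
    func : fractional_max_pool3d_with_index_grad
```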

kernel :
func : fractional_max_pool3d_with_index_grad

- backward_op : frame_grad
forward : frame(Tensor x, int frame_length, int hop_length, int axis=-1) -> Tensor(out)
args : (Tensor x, Tensor out_grad, int frame_length, int hop_length, int axis)
12 changes: 12 additions & 0 deletions paddle/phi/api/yaml/op_compat.yaml
@@ -1171,6 +1171,18 @@
outputs :
out : Y

- op : fractional_max_pool2d_with_index
inputs :
{x : X}
outputs :
{out : Out, mask : Mask}

- op : fractional_max_pool3d_with_index
inputs :
{x : X}
outputs :
{out : Out, mask : Mask}

- op : frame
backward : frame_grad
inputs :
18 changes: 18 additions & 0 deletions paddle/phi/api/yaml/ops.yaml
@@ -1011,6 +1011,24 @@
func: fold
backward: fold_grad

- op : fractional_max_pool2d_with_index
args : (Tensor x, int[] output_size, float random_u = 0.0)
output : Tensor(out), Tensor(mask)
infer_meta :
func : FractionalMaxPoolWithIndexInferMeta
Contributor

Why does the InferMeta function name include the extra WithIndex?

Contributor Author

The "with index" naming was used earlier to stay consistent with the other pooling ops that return an index. I'll change it.

kernel :
func : fractional_max_pool2d_with_index
backward : fractional_max_pool2d_with_index_grad

- op : fractional_max_pool3d_with_index
args : (Tensor x, int[] output_size, float random_u = 0.0)
output : Tensor(out), Tensor(mask)
infer_meta :
func : FractionalMaxPoolWithIndexInferMeta
kernel :
func : fractional_max_pool3d_with_index
backward : fractional_max_pool3d_with_index_grad

- op : frame
args : (Tensor x, int frame_length, int hop_length, int axis=-1)
output : Tensor(out)
10 changes: 10 additions & 0 deletions paddle/phi/infermeta/backward.cc
@@ -662,6 +662,16 @@ void MaxPoolWithIndexGradInferMeta(const MetaTensor& x,
dx->share_meta(x);
}

void FractionalMaxPoolWithIndexGradInferMeta(
const MetaTensor& x,
const MetaTensor& mask,
const MetaTensor& dout,
const std::vector<int>& output_size,
float random_u,
MetaTensor* dx) {
dx->share_meta(x);
}

void MemoryEfficientAttentionGradInferMeta(const MetaTensor& query,
const MetaTensor& key,
const MetaTensor& value,
8 changes: 8 additions & 0 deletions paddle/phi/infermeta/backward.h
@@ -312,6 +312,14 @@ void MaxPoolWithIndexGradInferMeta(const MetaTensor& x,
bool adaptive,
MetaTensor* dx);

void FractionalMaxPoolWithIndexGradInferMeta(
const MetaTensor& x,
const MetaTensor& mask,
const MetaTensor& dout,
const std::vector<int>& output_size,
float random_u,
MetaTensor* dx);

void MeshgridGradInferMeta(const std::vector<const MetaTensor*>& inputs,
const std::vector<const MetaTensor*>& outputs_grad,
std::vector<MetaTensor*> inputs_grad);
36 changes: 36 additions & 0 deletions paddle/phi/infermeta/unary.cc
@@ -2333,6 +2333,42 @@ void MaxPoolWithIndexInferMeta(const MetaTensor& x,
mask->set_dtype(phi::CppTypeToDataType<int>::Type());
}

void FractionalMaxPoolWithIndexInferMeta(const MetaTensor& x,
const std::vector<int>& output_size,
float random_u,
MetaTensor* out,
MetaTensor* mask,
MetaConfig config) {
std::vector<int> output_size_ = output_size;

auto x_dims = x.dims();

PADDLE_ENFORCE_EQ(
(x_dims.size() == 4 || x_dims.size() == 5),
true,
errors::InvalidArgument("Pooling intput should be 4-D or "
"5-D tensor but received %dD-Tensor",
x_dims.size()));

PADDLE_ENFORCE_EQ(
x_dims.size() - output_size_.size(),
2U,
errors::InvalidArgument(
"The input size %d minus the output size %d should equal to 2.",
x_dims.size(),
output_size_.size()));

std::vector<int64_t> output_shape({x_dims[0], x_dims[1]});
Contributor

Above, x's dim size is required to be 4 or 5, so why is the output dim size here only allowed to be 2? Shouldn't 3 also be possible?

Contributor Author

output_size only needs 2 or 3 entries; it only specifies the last (spatial) sizes.

For example, with the 2d API the input is [2, 3, 32, 32] and output_size is [18, 18]; with the 3d API the input is [2, 3, 32, 32, 32] and output_size is [18, 18, 18].

So the first two dims of x are kept.
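
A minimal Python sketch of the shape rule described above (illustrative only; the helper function name is invented for this example and is not part of the PR):

```python
# The output keeps the leading N and C dims of x and replaces the spatial
# dims with output_size, mirroring FractionalMaxPoolWithIndexInferMeta above.
def fractional_pool_out_shape(x_shape, output_size):
    # x_shape is NCHW (4-D) or NCDHW (5-D); output_size has len(x_shape) - 2 entries.
    assert len(x_shape) - len(output_size) == 2
    return list(x_shape[:2]) + list(output_size)

print(fractional_pool_out_shape([2, 3, 32, 32], [18, 18]))           # [2, 3, 18, 18]
print(fractional_pool_out_shape([2, 3, 32, 32, 32], [18, 18, 18]))   # [2, 3, 18, 18, 18]
```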

output_shape.insert(
output_shape.end(), output_size_.begin(), output_size_.end());

out->set_dims(common::make_ddim(output_shape));
out->set_dtype(x.dtype());

mask->set_dims(common::make_ddim(output_shape));
mask->set_dtype(phi::CppTypeToDataType<int>::Type());
}

void MeanAllInferMeta(const MetaTensor& x, MetaTensor* out) {
out->set_dims(common::make_ddim({}));
out->set_dtype(x.dtype());
7 changes: 7 additions & 0 deletions paddle/phi/infermeta/unary.h
@@ -348,6 +348,13 @@ void MaxPoolWithIndexInferMeta(const MetaTensor& x,
MetaTensor* mask,
MetaConfig config = MetaConfig());

void FractionalMaxPoolWithIndexInferMeta(const MetaTensor& x,
Contributor

Please keep the InferMeta functions in alphabetical order.

const std::vector<int>& output_size,
float random_u,
MetaTensor* out,
MetaTensor* mask,
MetaConfig config = MetaConfig());

void MeanAllInferMeta(const MetaTensor& x, MetaTensor* out);

void ModeInferMeta(const MetaTensor& x,
18 changes: 18 additions & 0 deletions paddle/phi/kernels/cpu/pool_grad_kernel.cc
@@ -44,3 +44,21 @@ PD_REGISTER_KERNEL(max_pool3d_with_index_grad,
double) {
kernel->InputAt(1).SetDataType(phi::CppTypeToDataType<int>::Type());
}

PD_REGISTER_KERNEL(fractional_max_pool2d_with_index_grad,
CPU,
ALL_LAYOUT,
phi::FractionalMaxPool2dWithIndexGradKernel,
float,
double) {
kernel->InputAt(1).SetDataType(phi::CppTypeToDataType<int>::Type());
}

PD_REGISTER_KERNEL(fractional_max_pool3d_with_index_grad,
CPU,
ALL_LAYOUT,
phi::FractionalMaxPool3dWithIndexGradKernel,
float,
double) {
kernel->InputAt(1).SetDataType(phi::CppTypeToDataType<int>::Type());
}
18 changes: 18 additions & 0 deletions paddle/phi/kernels/cpu/pool_kernel.cc
@@ -36,3 +36,21 @@ PD_REGISTER_KERNEL(max_pool3d_with_index,
double) {
kernel->OutputAt(1).SetDataType(phi::CppTypeToDataType<int>::Type());
}

PD_REGISTER_KERNEL(fractional_max_pool2d_with_index,
CPU,
ALL_LAYOUT,
phi::FractionalMaxPool2dWithIndexKernel,
float,
double) {
kernel->OutputAt(1).SetDataType(phi::CppTypeToDataType<int>::Type());
}

PD_REGISTER_KERNEL(fractional_max_pool3d_with_index,
CPU,
ALL_LAYOUT,
phi::FractionalMaxPool3dWithIndexKernel,
float,
double) {
kernel->OutputAt(1).SetDataType(phi::CppTypeToDataType<int>::Type());
}