[PTen] Add variable transform to/from ptenTensor and add cast kernel #36916

Merged
merged 63 commits into PaddlePaddle:develop from the cast_kernel branch on Nov 22, 2021

Conversation

MingMingShangTian (Contributor)

@MingMingShangTian commented Nov 1, 2021

PR types

New features

PR changes

Others

Describe

The main changes in this PR are as follows:

  1. Refactored the PTen cast kernel and replaced the original Op kernel with it.
  2. Added the PD_VISIT_ALL_TYPES macro, which matches the corresponding data type and executes a user-specified operation (see the usage sketch after this list).
  3. DenseTensor's mutable_data now supports setting data_type from the specialized template type when data_type is not yet defined.
  4. Added the logic for writing results back to the original Variable after a PTen kernel finishes executing in the original framework.
  5. Fixed the following bugs in the existing PTen code:
  • dtype was not set in some scenarios when writing a PTen tensor back to the original Variable
  • mutable_data did not perform Malloc when the requested byte size was 0
  • the inplace operation in the reshape kernel caused the wrong type for xshape
  • the place of the generated PTen tensor was inconsistent with the place of the original Variable
  • in ReMakePtenDenseTensor, the place of out was inconsistent with the place of the original Variable
  • reverted the earlier change that set lod in the PTen tensor's resize
  • ResetHolderAndType failed because resetting the holder performs a size check, which fails when the new type is smaller than the original
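
As a rough illustration of item 2 above (a hedged sketch only, not the actual Paddle code: the helper CastToType and the exact macro signature are assumptions), a dispatch macro of this style is typically invoked with a dtype, a name used in error messages, and a lambda in which data_t is bound to the matched C++ type:

```cpp
// Hypothetical usage sketch of a PD_VISIT_ALL_TYPES-style macro.
// CastToType and the surrounding signatures are placeholders, not PR code.
template <typename OutT>
void CastToType(const pten::DenseTensor& x, pten::DenseTensor* out);

void CastSketch(const pten::DenseTensor& x, pten::DataType out_dtype,
                pten::DenseTensor* out) {
  PD_VISIT_ALL_TYPES(out_dtype, "CastSketch", ([&] {
                       // data_t is the C++ type matched to out_dtype.
                       CastToType<data_t>(x, out);
                     }));
}
```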

@paddle-bot-old (bot) commented Nov 1, 2021

Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@@ -209,5 +209,7 @@ void Tensor::ResetHolderWithType(std::shared_ptr<memory::Allocation> holder,
type_ = type;
}

void Tensor::setType(const proto::VarType::Type type) { type_ = type; }
Contributor:

setType -> set_type

Contributor Author:

Done

@@ -552,14 +552,13 @@ class Reshape2Op : public ReshapeOp {
const framework::ExecutionContext &ctx) const override {
auto multi_inputs = ctx.MultiInput<framework::Tensor>("ShapeTensor");
if (multi_inputs.size() > 0) {
return framework::KernelSignature(
"reshape2.mulhost.mid", {"X", "ShapeTensor"}, {}, {"XShape", "Out"});
return framework::KernelSignature("reshape2.mulhost",
Contributor:

Why can the XShape output be removed here? Also, it would be better to split the "mul" and "host" suffixes.

Contributor Author:

Discussed with the RD in charge; keeping it as-is for now.

@@ -21,6 +21,8 @@ namespace experimental {

PD_DLL_DECL Tensor flatten(const Tensor& x, int start_axis, int stop_axis);

Tensor cast(const Tensor& x, DataType out_dtype);
Contributor:

The PD_DLL_DECL declaration needs to be added.

Contributor Author:

Done
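
For reference, a call to the cast API declared above would look roughly like the following; the include path and the wrapper function are assumptions for illustration, and only the cast(const Tensor& x, DataType out_dtype) signature comes from the PR.

```cpp
// Hedged usage sketch; header path and wrapper name are assumed.
#include "paddle/pten/api/include/manipulation.h"  // assumed location of cast()

paddle::experimental::Tensor CastToInt64(
    const paddle::experimental::Tensor& x) {
  // Returns a new tensor whose data is converted to the requested dtype.
  return paddle::experimental::cast(x, pten::DataType::INT64);
}
```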

#define PTEN_PRIVATE_CASE_TYPE(NAME, enum_type, type, ...) \
PTEN_PRIVATE_CASE_TYPE_USING_HINT(NAME, enum_type, type, data_t, __VA_ARGS__)

#define PTEN_DISPATCH_ALL_TYPES(TYPE, NAME, ...) \
Contributor:

It would be better to reuse the existing implementation in ext/dispatch.h here, or to add the new macro to ext/dispatch.h.

Contributor Author:

Done
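
Purely as a simplified illustration of the dispatch pattern being discussed (these DEMO_* macros are not the Paddle definitions, and the include path is an assumption), such a macro expands to a switch over the dtype enum and binds data_t to the matched C++ type in each case:

```cpp
// Simplified illustration only; the real macros are the PTEN_*/PD_* ones in
// this PR and in ext/dispatch.h. Include path is an assumption.
#include "paddle/pten/common/data_type.h"

#define DEMO_CASE_TYPE(enum_type, cpp_type, ...) \
  case enum_type: {                              \
    using data_t = cpp_type;                     \
    __VA_ARGS__();                               \
    break;                                       \
  }

#define DEMO_DISPATCH_FLOAT_TYPES(TYPE, NAME, ...)                        \
  switch (TYPE) {                                                         \
    DEMO_CASE_TYPE(pten::DataType::FLOAT32, float, __VA_ARGS__)           \
    DEMO_CASE_TYPE(pten::DataType::FLOAT64, double, __VA_ARGS__)          \
    default:                                                              \
      break; /* the real macros raise an unsupported-dtype error here */  \
  }
```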

@@ -74,6 +74,12 @@ DenseTensorMeta FlattenInferShape(const DenseTensorMeta& x_meta,
return return_meta;
}

DenseTensorMeta CastInferShape(const DenseTensorMeta& x_meta,
Contributor:

Suggest naming this InferMeta directly; the naming will need to be unified later anyway.

Contributor Author:

Done
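
As a rough sketch of what an infer-meta routine for cast typically computes (the constructor call below is an assumption; the exact signature merged in the PR may differ), the output metadata keeps the input's dims and layout and only replaces the dtype:

```cpp
// Rough sketch; the merged function is the (renamed) CastInferShape above,
// whose exact signature and members may differ.
DenseTensorMeta CastInferMetaSketch(const DenseTensorMeta& x_meta,
                                    DataType out_dtype) {
  // Same shape and layout as the input; only the data type changes.
  return DenseTensorMeta(out_dtype, x_meta.dims, x_meta.layout);
}
```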

Comment on lines 1815 to 1839
size_t start_idx =
(i == 0 ? 0 : pt_kernel_context_->InputRangeAt(i - 1).second);
size_t end_idx = start_idx + ins_vector.size();

if (pt_kernel_context_->InputsSize() == start_idx) {
paddle::SmallVector<std::shared_ptr<pten::TensorBase>> tmp_inputs;
for (auto* var : ins_vector) {
tmp_inputs.emplace_back(
experimental::MakePtenTensorBaseFromVar(*var, in_def));
}
pt_kernel_context_->EmplaceBackInputs(std::move(tmp_inputs));
} else {
} else if (pt_kernel_context_->InputsSize() > start_idx) {
size_t input_size = pt_kernel_context_->InputsSize();
for (size_t j = 0; j < ins_vector.size(); ++j) {
if (input_size > i + j) {
if (input_size > start_idx + j) {
experimental::ReMakePtenDenseTensorFromVar(
*ins_vector[j], in_def,
pt_kernel_context_->MutableInputAt<pten::DenseTensor>(i + j));
pt_kernel_context_->MutableInputAt<pten::DenseTensor>(start_idx +
j));
} else {
pt_kernel_context_->EmplaceBackInputWithoutSetRange(
experimental::MakePtenTensorBaseFromVar(*ins_vector[j], in_def));
}
// TODO(chenweihang): adapt multi-input case later
}
pt_kernel_context_->MutableInputRangeAt(i) =
Contributor:

It would be good to add detailed comments to this part of the logic to make the code easier to read and understand.

Contributor Author:

Comments have been added.
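
For readers of the excerpt above, the following standalone sketch models the range bookkeeping that the branch performs (it is not the merged code, and the types are deliberately simplified, with an int standing in for a tensor): each op input slot i owns a contiguous range [start_idx, end_idx) inside one flat vector of kernel inputs, existing entries are reused in place, and missing ones are appended.

```cpp
// Simplified, standalone model of the input-range bookkeeping.
#include <cstddef>
#include <utility>
#include <vector>

void EmplaceOrReuseInputs(const std::vector<int>& ins_vector, size_t i,
                          std::vector<int>* kernel_inputs,
                          std::vector<std::pair<size_t, size_t>>* ranges) {
  const size_t start_idx = (i == 0 ? 0 : (*ranges)[i - 1].second);
  const size_t end_idx = start_idx + ins_vector.size();

  for (size_t j = 0; j < ins_vector.size(); ++j) {
    if (kernel_inputs->size() > start_idx + j) {
      // Slot already exists from a previous run: overwrite it in place
      // (the real code calls ReMakePtenDenseTensorFromVar here).
      (*kernel_inputs)[start_idx + j] = ins_vector[j];
    } else {
      // Slot does not exist yet: append a freshly built entry
      // (the real code calls MakePtenTensorBaseFromVar here).
      kernel_inputs->push_back(ins_vector[j]);
    }
  }
  // Record which contiguous range of kernel_inputs belongs to op input i.
  if (ranges->size() <= i) ranges->resize(i + 1);
  (*ranges)[i] = {start_idx, end_idx};
}
```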

@@ -60,6 +60,40 @@ PD_DLL_DECL Tensor flatten(const Tensor& x, int start_axis, int stop_axis) {
return out;
}

Tensor cast(const Tensor& x, DataType out_dtype) {
Contributor:

The PD_DLL_DECL declaration needs to be added.

Contributor Author:

Done

@MingMingShangTian changed the title from "[PTen]add cast kernel" to "[PTen] Add variable transform to/from ptenTensor and add cast kernel" on Nov 18, 2021
Comment on lines 1843 to 1845
"error start index when trying to set new tensor to inputs, start "
"index is `%d`, but current pt_kernel_context_.inputs.size() is "
"`%d` ",
Contributor:

Capitalize the first letter of the error message and end it with a period.

Contributor Author:

Done.

Comment on lines 1882 to 1884
"error start index when trying to set new tensor to inputs, start "
"index is `%d`, but current pt_kernel_context_.outputs.size() is "
"`%d` ",
Contributor:

Same as above.

Contributor Author:

Done

Comment on lines 326 to 328
"error start index when trying to set new tensor to inputs, start "
"index is `%d`, but current pt_kernel_context_.inputs.size() is "
"`%d` ",
Contributor:

Capitalize the first letter of the error message and end it with a period.

Contributor Author:

Done

Comment on lines 364 to 366
"error start index when trying to set new tensor to inputs, start "
"index is `%d`, but current pt_kernel_context_.outputs.size() is "
"`%d` ",
Contributor:

Capitalize the first letter of the error message and end it with a period.

Contributor Author:

Done

auto* tensor = variable->GetMutable<framework::SelectedRows>();
auto dtype = pten::TransToProtoVarType(src->dtype());

if (tensor->value().IsInitialized()) {
Contributor:

Since there is no handling logic inside the if branch, can it be merged with the else?

Contributor Author:

Merged.

ReshapeFromVectorValImpl(dev_ctx, x, shape, out, false);
auto out_meta = InferShapeFromVecValue(x.meta(), shape);
if (&x == out) {
LOG(INFO) << "out_meta dims:" << out_meta.dims;
Contributor:

Is this LOG(INFO) leftover debugging code?

Contributor Author:

Removed.

@@ -209,5 +209,7 @@ void Tensor::ResetHolderWithType(std::shared_ptr<memory::Allocation> holder,
type_ = type;
}

void Tensor::set_type(const proto::VarType::Type type) { type_ = type; }
Contributor:

const proto::VarType::Type -> const proto::VarType::Type&

Contributor Author:

Done

@chenwhql (Contributor) left a comment

LGTM

@@ -1901,5 +1961,26 @@ void OperatorWithKernel::BuildPtenKernelContext(
}
}

void OperatorWithKernel::WriteBackToOutputs(RuntimeContext* ctx) const {
// auto& input_names = std::get<0>(pt_kernel_signature_->args);
Contributor:

If this comment is no longer needed, it would be best to remove it in the next PR.

Contributor Author:

OK.
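
The WriteBackToOutputs excerpt above shows only the start of the function; the following standalone sketch models the write-back idea described in the PR summary (it is not the merged code, and the container types are placeholders): each kernel output, located via its recorded range, is copied back into the framework Variable it originally came from.

```cpp
// Simplified, standalone model of the write-back step. A std::map of ints
// stands in for the operator's output Variables.
#include <cstddef>
#include <map>
#include <string>
#include <utility>
#include <vector>

void WriteBackToOutputsSketch(
    const std::vector<std::string>& output_names,
    const std::vector<std::pair<size_t, size_t>>& output_ranges,
    const std::vector<int>& kernel_outputs,
    std::map<std::string, int>* variables) {
  for (size_t i = 0; i < output_names.size(); ++i) {
    // Each op output i owns the contiguous range [first, second) of
    // kernel_outputs; single-tensor outputs have a range of length one.
    const auto& range = output_ranges[i];
    for (size_t j = range.first; j < range.second; ++j) {
      // The real code shares the DenseTensor's meta and allocation back into
      // the Variable (including dtype and place); here we just copy the value.
      (*variables)[output_names[i]] = kernel_outputs[j];
    }
  }
}
```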

@MingMingShangTian merged commit 5caa6fc into PaddlePaddle:develop on Nov 22, 2021
@MingMingShangTian deleted the cast_kernel branch on November 22, 2021 06:26
@zhiqiu mentioned this pull request on Dec 9, 2021