
MKLDNN layout: Support for convolution operator #11099

Merged

Conversation

@mozga-intel (Contributor) commented on May 31, 2018:

This PR is waiting for support of the MKLDNN layout.

Please have a look at the convolution operator with MKLDNN layout support and assess whether we can keep this design of the code.

This code uses the layout implementation from the previous pull request. Therefore some of the functions used in this code are not defined in this pull request, but are available there.

This version of the code can be merged into the main branch once the layout pull request is accepted.

Splitting the long code into separate pull requests was proposed by @luotao1.

This pull request is related to #11040.

@mozga-intel force-pushed the mozga-intel/Conv_mkldnn_layout branch from ae9b5a5 to 9908d3c on June 10, 2018 21:16.
@mozga-intel (Contributor, Author) commented:

@luotao1 The code is ready for code review.

@luotao1 requested a review from tensor-tang on June 11, 2018 07:29.

```cpp
if (memory::primitive_desc(conv_pd->src_primitive_desc()) !=
    user_src_memory.get_primitive_desc()) {
  src_memory = memory(conv_pd->src_primitive_desc());
  reorder_src = reorder(user_src_memory, src_memory);
}
```
@tensor-tang (Contributor) commented:

Is this an in-place reorder, according to L108? Would the result be correct?

@mozga-intel (Contributor, Author) replied:

Yes, this is an in-place reorder. During testing I didn't see any problems with the result; all tests passed.

@tensor-tang (Contributor) replied:

Actually, I think that is not an in-place reorder, since you write `src_memory = memory(conv_pd->src_primitive_desc());`. That will allocate new memory.

@mozga-intel (Contributor, Author) replied on Jun 13, 2018:

@tensor-tang Oh yes, this is not in-place. I could not find documentation about it. All tests pass, so I think this code is correct. Should it be out-of-place?
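
For reference, here is a minimal sketch of the out-of-place pattern under discussion, written against the MKLDNN 0.x C++ API; the helper name `prepare_src` and the `pipeline` parameter are illustrative only, not code from this PR.

```cpp
#include <vector>
#include "mkldnn.hpp"
using namespace mkldnn;

// Illustrative helper: return a memory object in the layout the
// convolution primitive expects, scheduling a reorder if needed.
memory prepare_src(const convolution_forward::primitive_desc &conv_pd,
                   const memory &user_src_memory,
                   std::vector<primitive> &pipeline) {
  auto expected_pd = memory::primitive_desc(conv_pd.src_primitive_desc());
  // If the user's layout already matches what the primitive expects,
  // the user buffer can be used directly and no reorder is needed.
  if (user_src_memory.get_primitive_desc() == expected_pd) {
    return user_src_memory;
  }
  // Out-of-place: this constructor allocates a *new* buffer in the
  // expected layout, so the user buffer is left untouched.
  memory src_memory(expected_pd);
  // The reorder primitive reads from the user buffer and writes into
  // the newly allocated one.
  pipeline.push_back(reorder(user_src_memory, src_memory));
  return src_memory;
}
```

Because `memory(conv_pd.src_primitive_desc())` allocates a fresh buffer, the reorder reads from the user buffer and writes into the new one, which matches @tensor-tang's observation that this is out-of-place rather than in-place.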

```cpp
if (memory::primitive_desc(conv_pd->weights_primitive_desc()) !=
    user_weights_memory.get_primitive_desc()) {
  weights_memory = memory(conv_pd->weights_primitive_desc());
  reorder_weights = reorder(user_weights_memory, weights_memory);
}
```
@tensor-tang (Contributor) commented:

Same as above.

```cpp
if (memory::primitive_desc(conv_bwd_weights_pd.src_primitive_desc()) !=
    user_src_memory.get_primitive_desc()) {
  src_memory = memory(conv_bwd_weights_pd.src_primitive_desc());
  reorder_src = reorder(user_src_memory, src_memory);
}
```
@tensor-tang (Contributor) commented:

Same as above.

```diff
@@ -75,9 +75,8 @@ void ConvOp::InferShape(framework::InferShapeContext* ctx) const {
 framework::OpKernelType ConvOp::GetExpectedKernelType(
     const framework::ExecutionContext& ctx) const {
   framework::LibraryType library{framework::LibraryType::kPlain};

   std::string data_format = ctx.Attr<std::string>("data_format");
   // TODO(pzelazko-intel): enable MKLDNN layout when it's ready
```
@tensor-tang (Contributor) commented:

Should this comment be removed?

@mozga-intel (Contributor, Author) replied:

Yes.
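
For context, this is roughly the shape the function takes once the TODO is resolved and the MKLDNN kernel can be selected. This is a sketch following PaddlePaddle conventions of that period, not verbatim code from this PR; in particular, the `use_mkldnn` attribute gating is an assumption here.

```cpp
framework::OpKernelType ConvOp::GetExpectedKernelType(
    const framework::ExecutionContext& ctx) const {
  framework::LibraryType library{framework::LibraryType::kPlain};
#ifdef PADDLE_WITH_MKLDNN
  // use_mkldnn is the conventional opt-in attribute on PaddlePaddle ops
  // of this era; assumed here rather than quoted from the PR.
  if (library == framework::LibraryType::kPlain &&
      ctx.Attr<bool>("use_mkldnn")) {
    library = framework::LibraryType::kMKLDNN;
  }
#endif
  std::string data_format = ctx.Attr<std::string>("data_format");
  framework::DataLayout layout = framework::StringToDataLayout(data_format);
  // Build the kernel key from the input's data type, the place, the
  // requested layout, and the library type chosen above.
  return framework::OpKernelType(
      framework::ToDataType(ctx.Input<framework::Tensor>("Input")->type()),
      ctx.GetPlace(), layout, library);
}
```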

@tensor-tang (Contributor) left a review:

LGTM.

Thanks for your great work!

@tensor-tang tensor-tang merged commit 647c0eb into PaddlePaddle:develop Jun 13, 2018