Conversation

@tensor-tang tensor-tang commented Nov 16, 2017

fix #5728

@tensor-tang tensor-tang requested a review from luotao1 November 16, 2017 09:28
outFmt = format::nc;
} else {
outFmt = format::nchw;
}
Contributor
I see that has16c, has8c, and hasnc are each used only once, so lines 107-130 could be simplified to:

if (inputs[i]->getFormat() == format::nc &&  oc_ % 16 == 0) {
   outFmt = format::nChw16c;
}
... 

Contributor Author

No. The block around line 108 initializes these bools: each flag is set to true if any one of the input layers has that format. The initialization over all inputs has to finish first, and only then is the required output format chosen by priority, so the two steps can't easily be merged into one.

channels_.resize(inputLayers_.size());
channels_[0] = ic;
// need change the output channel, so use oc_ instead
// TODO(TJ): change API, use &oc
Contributor
I don't understand this comment. Change which API? And change the output channel to what?

Contributor Author

It refers to the reshape function. oc is the only parameter not passed by reference; when the interface was defined, oc was not expected to be modified, but this layer does modify oc. Changing the signature would touch many other layers, so it will be switched to a reference for all of them soon.

Contributor

"Soon"? When exactly will it be changed, in the next PR?

Contributor Author

Sure. I originally planned to change it here, but since it touches many layers it doesn't fit in this concat PR, so I decided to submit it separately.

bool has8c = false, has16c = false, hasnc = false;
for (size_t i = 0; i < inputs.size(); i++) {
// resetInValue will use ic_ so temporary change as current input's channel
// TODO(TJ): change ic_ as vector then can remove channels_
Contributor

"so temporary change as"? The object is missing: change *what* to the current input's channel?

Contributor Author

The intended comment is: resetInValue will use ic_, so temporarily change ic_ to the current input's channel.

Same as above, there is a TODO here as well. This comment is going to be removed anyway, so I didn't pay much attention to it. It will be removed very soon; I can also fix it up if needed.

VLOG(MKLDNN_FMTS) << "Input " << i << ", " << inputLayers_[i]->getName()
<< ": " << inGrads_[i]->getFormat() << "<<<";
}
}
Contributor
@luotao1 luotao1 Nov 17, 2017

How do these print functions differ from the base class versions? As far as I can see they only additionally print << i << ", " << inputLayers_[i]->getName(). Could the base class functions be extended so these overrides aren't needed?

Apart from MKLDNNAddtoLayer, which implements this separately, no other layer does this.

Contributor Author

Yes. Only Addto and concat take multiple inputs, which is why these functions are overridden here. But as with the TODO above, the plan is to move this into MKLDNNLayer, so for now they are overridden.

Contributor

For the TODOs, could you record them in an issue or a project card? There are already quite a few of them.

Contributor Author

Sure.

std::vector<std::shared_ptr<mkldnn::primitive>>& prims,
std::vector<MKLDNNMatrixPtr>& inputs,
MKLDNNMatrixPtr& out);
};
Contributor

Could the series of reset* functions here in the .h file all be removed? Since they are inherited from the parent class, implementing them directly in the .cpp should be enough. Same for the other layers.

Contributor Author

No, that is not possible: a subclass still has to declare, in its header, which of the parent class's virtual functions it overrides.

Contributor
@luotao1 luotao1 left a comment

LGTM

@luotao1 luotao1 merged commit 9b56074 into PaddlePaddle:develop Nov 17, 2017
@tensor-tang tensor-tang deleted the mkldnn_concat branch November 17, 2017 08:43
@tensor-tang tensor-tang mentioned this pull request Nov 20, 2017
Development

Successfully merging this pull request may close these issues.

Add MKLDNNConcatLayer
