Description
@jczaja Hi, I have a question about the inplace mechanism in mkldnn ops.
I was implementing an operator that can view real tensors (dtype: float32 or float64) of shape (d1, d2, ..., 2) as complex tensors (dtype: complex64 or complex128, respectively) of shape (d1, d2, ...). This requires sharing the buffer (or storage) between two tensors with different data types. (PR: #37240)
Paddle shares storage with ShareBufferWith:

```cpp
void ShareBufferWith(const Tensor& tensor) {
  holder_ = tensor.holder_;
  offset_ = tensor.offset_;
  type_ = tensor.type_;
}
```
I tried removing `type_ = tensor.type_;`, and with that change I could actually share storage between real and complex tensors. But I noticed there was a related PR where this line was added:
PR #24853 "Fix bug in ShareBufferWith causing non-fp32 inplace uts to fail":
> During call to ShareBufferWith, the type wasn't assigned when holders were assigned. This causes inplace UTs that use non-fp32 tensors (for example int8, uint8) to fail on FetchOp.
Also, removing this line (PR #37247 "disable copying of datatype when sharing buffer between two tensors") causes a failure of test_elementwise_add_bf16_mkldnn_op. The build log (see the attachment) shows a mismatch in memory size, possibly caused by an error in the data type.
But I think there could be a better way to solve this problem. By definition, the data type is metadata associated with a Tensor, not with the storage object.
Question
So my question is: why does sharing the data type in ShareBufferWith fix unit tests for mkldnn ops that use non-float data types when the inplace strategy is enabled? If we can figure this out, we could fix those tests while avoiding sharing data types in ShareBufferWith, which would make sharing storage between Tensors with different data types possible.
Thank you!