[Dup] Fix SAME_UPPER/SAME_LOWER (auto_pad attribute) in ConvTranspose #12537
Conversation
We should confirm that the CPU EP is the only EP affected by this, not also DML, CUDA, etc. Generally seems fine to me if it was a mistake in the implementation per the spec, but I wouldn't want to just fix the CPU EP and end up with discrepancies between EPs.
Looking at ConvolutionHelperBase::InitializeKernelAndShapesTransposed, the DML EP may have the same issue.
LGTM. Please run this by @pranavsharma to get a sign-off as well.
Thanks @fdwr for the review! I did check the CUDA EP, and it seems it just uses the CPU implementation to calculate ConvTranspose's attributes; see the line here. @hariharans29 Please correct me if I am wrong.
Good catch. I think that line is incorrect; by contrast, the line in the same file is right. I can include the fix in this PR. I also roughly went through the other EPs: ORT Web implemented it correctly originally (see this line). Other than that, it seems the other EPs do not have a specific kernel for ConvTranspose's SAME_UPPER/SAME_LOWER case.
@@ -721,7 +721,7 @@ namespace OperatorHelper
     int paddings = gsl::narrow_cast<int>((inputDimensions[i + dimOffset] - 1) * stride + windowSize - m_outputShapes[0].GetShape()[i + dimOffset]);
     paddings = std::max<int>(0, paddings);

-    m_kernel.startPadding[i] = m_kernel.autoPadSameUpper ? (paddings + 1) / 2 : paddings / 2;
+    m_kernel.startPadding[i] = m_kernel.autoPadSameUpper ? paddings / 2 : (paddings + 1) / 2;
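To illustrate the one-line change with a concrete number (not from the PR): for an odd total of paddings = 3, the old expression gave startPadding = (3 + 1) / 2 = 2 for SAME_UPPER, placing the extra padding at the beginning, while the fixed expression gives startPadding = 3 / 2 = 1, leaving the extra padding for the end as the ONNX definition of SAME_UPPER requires.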
FYI @jeffbloo.
Description:
Dup of #5368, recreated for a cleaner commit log and to retarget the main branch instead of the master branch. Target it to ORT 1.13.
To sync the definition of SAME_UPPER/SAME_LOWER among all operators and make it match the ONNX definition, switch the logic of SAME_UPPER and SAME_LOWER in ConvTranspose.
The definitions of SAME_UPPER and SAME_LOWER should be as follows: the total padding is split between the two sides equally or almost equally; when the total is odd, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER.
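A minimal sketch of that split rule (a hypothetical helper for illustration, not ORT's actual code; `SplitPadding` and its parameters are made up), assuming the total padding has already been computed from the input size, stride, and kernel size:

```cpp
#include <cstdint>

// Hypothetical helper: split total_padding between the two sides per the
// ONNX auto_pad definition. When the total is odd, SAME_UPPER places the
// extra element at the end, SAME_LOWER at the beginning.
void SplitPadding(int64_t total_padding, bool same_upper,
                  int64_t& pad_begin, int64_t& pad_end) {
  if (same_upper) {
    pad_begin = total_padding / 2;        // smaller half at the beginning
  } else {                                // SAME_LOWER
    pad_begin = (total_padding + 1) / 2;  // extra (if odd) at the beginning
  }
  pad_end = total_padding - pad_begin;
}
```

For an odd total such as 3, SAME_UPPER yields pads (1, 2) and SAME_LOWER yields (2, 1), which matches the fixed ternary in the diff above.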
Also, revert the temporary warning from #11984.
Motivation and Context
The auto_pad attribute values SAME_UPPER and SAME_LOWER of ConvTranspose behave differently from the auto_pad attribute of other operators (pool- and conv-related operators). The behavior of the same attribute should be consistent across all operators, and the current ConvTranspose behavior also does not match the definition in ONNX.
SAME_UPPER and SAME_LOWER in other operators:
onnxruntime/onnxruntime/core/providers/cpu/nn/pool_attributes.h, line 149 in c20fcf2
https://github.com/onnx/onnx/blob/b2ed660d0a065b8346816f2c3a95d79ca79b88c9/onnx/defs/nn/defs.cc#L1222
Update spec for Convtranspose to make it sync onnx/onnx#3019
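As a worked end-to-end example (a standalone sketch, not ORT code; it assumes unit dilations and zero output_padding): for ConvTranspose with auto_pad SAME_UPPER or SAME_LOWER, ONNX targets an output size of input_size * stride, and the total padding follows the same formula as in the DML diff above.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>

int main() {
  // Standalone illustration; assumes unit dilations and zero output_padding.
  const int64_t input_size = 4, stride = 2, kernel = 3;
  const int64_t output_size = input_size * stride;  // SAME_* target for ConvTranspose
  const int64_t total = std::max<int64_t>(
      0, (input_size - 1) * stride + kernel - output_size);  // = 1 here

  const int64_t upper_begin = total / 2;        // SAME_UPPER: extra goes at the end
  const int64_t lower_begin = (total + 1) / 2;  // SAME_LOWER: extra goes at the beginning

  std::cout << "total=" << total
            << " SAME_UPPER=(" << upper_begin << "," << total - upper_begin << ")"
            << " SAME_LOWER=(" << lower_begin << "," << total - lower_begin << ")\n";
  // Prints: total=1 SAME_UPPER=(0,1) SAME_LOWER=(1,0)
}
```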