[ARM CPU] Fix AUGRU layer in dien.xml model #21925
Conversation
```diff
@@ -293,7 +293,8 @@ MemoryPtr DynamicBuffer::create_buffer(const dnnl::engine& eng) {
     const auto estimated_iters = estimate_iters();
     const Shape _shape = Shape({count, static_cast<size_t>(abs_stride * estimated_iters), len/elem_size});
     auto _descCreator = BlockedDescCreator::getCommonCreators().at(LayoutType::ncsp);
     auto new_buffer_desc = _descCreator->createSharedDesc(from->getDesc().getPrecision(), _shape);
+    auto prec = from->getDesc().getPrecision() == ov::element::f16 ? ov::element::f32 : from->getDesc().getPrecision();
```
Why do we need this change?
TensorIterator has nothing x64- or ARM-specific from the implementation perspective.
This change depends on the AUGRU RNN layer: if we don't set fp32, a segfault occurs in the loop after the RNN layer.
It doesn't mean we need to create this kind of WA.
TI should work correctly without depending on the parent operation. We need to analyze the issue and find the real root cause.
@dmitry-gorokhov This change became irrelevant: dien.xml works without the TensorIterator changes (the whole problem was caused by fp16 in RNN and by shape calculation in the scheduler).
This PR will be closed in a week because of 2 weeks of no activity.
Although the dien model still can't be inferred successfully on ARM:
@allnes can we fix this issue in this PR?
@allnes @alvoron So does Dien work successfully now, or do we still have an issue?
@dmitry-gorokhov dien.xml works.
### Details:
- *Fixed problem with precision in RNN and TensorIterator layers*
- *Corrected shape calculation of current layers in ACL Scheduler*

### Tickets:
- CVS-123900
- CVS-134520