[CINN]fix layer_norm bug in combinatorial operator #69553

Merged: 4 commits into PaddlePaddle:develop on Nov 22, 2024

Conversation

zhanghonggeng (Contributor)

PR Category

Performance Optimization

PR Types

Performance

Description

When AutoMixedPrecisionPass runs before CINN, cast ops are inserted to convert the three inputs of layer_norm to fp16, and its three outputs become fp16 as well. In the logic of the decomposed composite operator, however, the last two outputs are still fp32, which makes the check in check_decomp_outputs fail:
PreconditionNotMetError: [Prim] For op pd_op.layer_norm, its origin 1-index output dtype float16 is not equal to decomp output dtype float32
Pcard-67164
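
For illustration, here is a minimal Python sketch of the idea behind the fix (the function and variable names are hypothetical, and this is not Paddle's actual decomp rule): a layer_norm decomposition that computes the statistics in fp32 and then casts the mean/variance outputs back to the origin dtype, so all three decomposed outputs match the fp16 outputs of the original op after AutoMixedPrecisionPass:

```python
import paddle

def layer_norm_decomp(x, scale, bias, epsilon=1e-5, begin_norm_axis=1):
    # Illustrative decomposition of pd_op.layer_norm into primitive ops.
    # After AutoMixedPrecisionPass, x/scale/bias arrive as fp16.
    origin_dtype = x.dtype
    # Compute the statistics in fp32 for numerical stability.
    x_fp32 = paddle.cast(x, "float32")
    axes = list(range(begin_norm_axis, len(x.shape)))
    mean = paddle.mean(x_fp32, axis=axes, keepdim=True)
    variance = paddle.mean(paddle.square(x_fp32 - mean), axis=axes, keepdim=True)
    out = (x_fp32 - mean) * paddle.rsqrt(variance + epsilon)
    out = paddle.cast(out, origin_dtype) * scale + bias
    # The bug: returning mean/variance as fp32 while the origin op's
    # outputs are fp16 trips check_decomp_outputs. Casting them back
    # makes the decomposed outputs agree with the origin output dtypes.
    mean = paddle.cast(paddle.flatten(mean), origin_dtype)
    variance = paddle.cast(paddle.flatten(variance), origin_dtype)
    return out, mean, variance
```

Computing in fp32 and casting only the final outputs keeps the reduction numerically stable while still satisfying the dtype check.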


paddle-bot bot commented Nov 20, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

yuanlehome (Contributor) left a comment


LGTM

zyfncg merged commit 703c4de into PaddlePaddle:develop on Nov 22, 2024
27 of 28 checks passed
3 participants