
remove fp32 tmp tensor and cast op for initializer.Normal and initializer.Constant #38818

Merged (3 commits, Jan 10, 2022)

Conversation

GuoxiaWang
Contributor

@GuoxiaWang GuoxiaWang commented Jan 9, 2022

PR types

Function optimization

PR changes

APIs

Describe

remove fp32 tmp tensor and cast op for initializer.Normal and initializer.Constant

Background:
When the parameter is FP16 and the tensor to be initialized is very large (for example, a 17 GB tensor), the existing code first allocates a temporary FP32 tensor, initializes it, then casts it to FP16 and copies it into the FP16 tensor. That temporary FP32 tensor alone requires 34 GB of GPU memory, which immediately OOMs on a 32 GB V100. Analysis shows these intermediate variables are entirely unnecessary.

PLSC project

Before this change:
Static graph: supports an FC layer of up to 60 million classes
Dynamic graph: supports an FC layer of up to 67 million classes

After this change:
Static graph: supports an FC layer of up to 92 million classes
Dynamic graph: supports an FC layer of up to 87 million classes
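The memory arithmetic behind the OOM can be sketched as follows (a minimal illustration, not Paddle code; the 17 GB and 32 GB figures come from the description above):

```python
GB = 1024 ** 3

def fp32_temp_bytes(fp16_param_bytes):
    """Memory needed by the temporary FP32 tensor the old path allocates.

    The temporary holds the same element count as the FP16 parameter,
    but at 4 bytes per element instead of 2, so it needs exactly twice
    the FP16 parameter's footprint.
    """
    return 2 * fp16_param_bytes

param_bytes = 17 * GB                     # FP16 parameter from the example above
temp_bytes = fp32_temp_bytes(param_bytes)

print(temp_bytes // GB)                   # 34 -> exceeds a 32 GB V100, hence OOM
```

With the temporary tensor and cast op removed, peak memory during initialization drops from the FP32 temporary plus the FP16 destination down to just the FP16 destination itself, which is what allows the larger FC layers reported above.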

@paddle-bot-old

paddle-bot-old bot commented Jan 9, 2022

Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.


@sandyhouse sandyhouse left a comment


LGTM

Contributor

@wangxicoding wangxicoding left a comment


LGTM

3 participants