Conversation

@kexinzhao (Contributor) commented Mar 20, 2018

fix #9266

I originally planned to add a single line in activation_op.cu to register the fp16 forward kernel for all activation ops. However, this triggered a flood of Eigen error messages that would require extensive modifications to float16.h. Given the limited time, I am temporarily putting that plan on hold and simply adding fp16 support for the relu op in this PR.
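For illustration, here is a minimal sketch of how such an fp16 relu test is typically written against Paddle's `OpTest` helper (the class name, input shape, and value range below are assumptions for illustration, not the PR's exact code):

```python
import numpy as np
from op_test import OpTest  # test helper shipped with Paddle's unit-test suite


class TestFP16Relu(OpTest):
    def setUp(self):
        self.op_type = "relu"
        # relu forward is max(x, 0); build a float16 input and the
        # matching float16 reference output to check the kernel against.
        x = np.random.uniform(-1, 1, [11, 17]).astype(np.float16)
        self.inputs = {'X': x}
        self.outputs = {'Out': np.maximum(x, 0)}
```

The output check itself is the guarded `test_check_output` quoted in the review excerpt below.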

@kexinzhao kexinzhao requested a review from chengduoZH March 21, 2018 00:35
@kexinzhao kexinzhao added the 预测 (Prediction; originally named Inference, covers C-API inference issues, etc.) label Mar 21, 2018
```python
def test_check_output(self):
    if core.is_compiled_with_cuda():
        place = core.CUDAPlace(0)
        if core.is_float16_supported(place):
```
@helinwang (Contributor) commented Mar 21, 2018

Just to double check: will the condition ever become True?

@kexinzhao (Contributor, Author) replied
It will be true if the compute capability of the GPU is >= 5.3, which means all Pascal and Volta GPUs will be tested.
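For context, a self-contained sketch of that capability probe (the printed messages are illustrative; the `core` helpers are the same ones used in the test above):

```python
# Probe whether the current GPU can run fp16 kernels.
# core.is_float16_supported(place) returns True only when the device's
# compute capability is >= 5.3 (Pascal, Volta, and newer architectures).
import paddle.fluid.core as core

if core.is_compiled_with_cuda():
    place = core.CUDAPlace(0)
    if core.is_float16_supported(place):
        print("fp16 supported: the fp16 relu test will run on this GPU")
    else:
        print("compute capability < 5.3: the fp16 test will be skipped")
else:
    print("not compiled with CUDA: the fp16 test will be skipped")
```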

@helinwang (Contributor) replied

Ok, thank you!

@helinwang (Contributor) left a review comment

LGTM!

@kexinzhao kexinzhao merged commit b9e6364 into PaddlePaddle:develop Mar 21, 2018
@kexinzhao kexinzhao deleted the new_relu_fp16 branch March 21, 2018 20:58
blacksheep-Aristotle pushed a commit to blacksheep-Aristotle/Paddle that referenced this pull request Nov 22, 2024

Labels

预测 (Prediction; originally named Inference, covers C-API inference issues, etc.)

Development

Successfully merging this pull request may close these issues:

Need float16 support for relu op (#9266)
