Adam operator optimized with Eigen #10229
Conversation
@luotao1 Could you have a look at this PR?
LGTM! Thanks very much!
@dzhwinter How about the same Eigen optimization on GPU execution?
    param_out_(param_out) {}

  void operator()(size_t numel) const {
    Eigen::Map<const Eigen::Array<T, 1, Eigen::Dynamic>> g{
@tpatejko You can use framework::EigenTensor<T, 1> to replace Eigen::Map<const Eigen::Array<T, 1, Eigen::Dynamic>> in the next PR. We have an Eigen wrapper in https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/eigen.h.
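For reference, a short sketch of what the suggested replacement could look like, assuming the wrapper's usual From/Flatten interface; the helper name and the grad/param_out tensors are hypothetical, not code from this PR:

```cpp
#include "paddle/fluid/framework/eigen.h"
#include "paddle/fluid/framework/tensor.h"

// Hypothetical helper: builds rank-1 Eigen views of framework::Tensor objects
// through the Paddle wrapper instead of raw Eigen::Map over pointers.
template <typename T>
void ExampleEigenViews(const paddle::framework::Tensor& grad,
                       paddle::framework::Tensor* param_out) {
  namespace fw = paddle::framework;
  // Rank-1 view over the gradient tensor:
  auto g = fw::EigenTensor<T, 1>::From(grad);
  // EigenVector<T>::Flatten gives the same rank-1 view for a tensor of any rank:
  auto p_out = fw::EigenVector<T>::Flatten(*param_out);
  // ... element-wise update expressions would go here ...
  (void)g;
  (void)p_out;
}
```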
@luotao1 Thanks for the remark. I will use it next time.
This PR implements the Adam operator optimized with Eigen. This is important for CPU execution.
Two benchmarks were taken into account: mnist and machine_translation. The profiling results with default parameters on a Skylake CPU are listed below:
machine_translation:
mnist:
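For context, here is a minimal, self-contained sketch of the kind of Eigen-vectorized Adam update described for the CPU path. The functor layout and member names (beta1_, moment1_, param_out_, ...) are illustrative assumptions based on the snippet quoted above, not the exact code of this PR.

```cpp
#include <Eigen/Core>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Illustrative Adam functor (member names are assumptions, not the PR's code).
// Update rule: m_t = b1*m + (1-b1)*g;  v_t = b2*v + (1-b2)*g^2;
//              p_t = p - lr * sqrt(1-b2^t)/(1-b1^t) * m_t / (sqrt(v_t) + eps)
template <typename T>
struct AdamFunctorSketch {
  T beta1_, beta2_, epsilon_;
  const T* beta1_pow_;
  const T* beta2_pow_;
  const T* lr_;
  const T* grad_;
  const T* moment1_;
  const T* moment2_;
  const T* param_;
  T* moment1_out_;
  T* moment2_out_;
  T* param_out_;

  void operator()(size_t numel) const {
    using ConstRow = Eigen::Map<const Eigen::Array<T, 1, Eigen::Dynamic>>;
    using Row = Eigen::Map<Eigen::Array<T, 1, Eigen::Dynamic>>;
    const Eigen::Index n = static_cast<Eigen::Index>(numel);

    ConstRow g{grad_, n}, m1{moment1_, n}, m2{moment2_, n}, p{param_, n};
    Row m1_out{moment1_out_, n}, m2_out{moment2_out_, n}, p_out{param_out_, n};

    // Bias-corrected learning rate, computed once per call.
    const T lr = *lr_ * std::sqrt(T(1) - *beta2_pow_) / (T(1) - *beta1_pow_);

    // Whole-array expressions let Eigen vectorize the element-wise update.
    m1_out = beta1_ * m1 + (T(1) - beta1_) * g;
    m2_out = beta2_ * m2 + (T(1) - beta2_) * g * g;
    p_out = p - lr * (m1_out / (m2_out.sqrt() + epsilon_));
  }
};

int main() {
  const size_t n = 4;
  std::vector<float> grad{0.1f, 0.2f, 0.3f, 0.4f};
  std::vector<float> m1(n, 0.f), m2(n, 0.f), param(n, 1.f);
  std::vector<float> m1_out(n), m2_out(n), param_out(n);
  float beta1_pow = 0.9f, beta2_pow = 0.999f, lr = 1e-3f;

  AdamFunctorSketch<float> adam{0.9f, 0.999f, 1e-8f, &beta1_pow, &beta2_pow,
                                &lr, grad.data(), m1.data(), m2.data(),
                                param.data(), m1_out.data(), m2_out.data(),
                                param_out.data()};
  adam(n);
  std::cout << "param_out[0] = " << param_out[0] << "\n";
  return 0;
}
```

The point of the Map/Array formulation is that one expression covers the whole buffer, so Eigen can emit SIMD code instead of a scalar per-element loop, which is where the CPU speedup comes from.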