
Fix/adam float64 #10407


Merged: 3 commits into PaddlePaddle:develop on May 6, 2018

Conversation

dzhwinter (Contributor):

fix #10405

abhinavarora (Contributor):

@dzhwinter Should this also be done in sgd_op and ftrl_op?

dzhwinter (Contributor, Author):

That's true. Done.

sidgoyal78 (Contributor) left a comment:

LGTM, thanks for the PR Zhihong.

sidgoyal78 (Contributor) left a comment:

@dzhwinter: It seems that after changing the datatype to float64, we get an error:

paddle.fluid.core.EnforceNotMet: Tensor holds the wrong type, it holds f at [/paddle/paddle/fluid/framework/tensor_impl.h:84]

Did I miss something?
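For context, the error above comes from a type check on tensor reads: the tensor was created holding `float` (printed as `f`), but the kernel requests `float64`, and the tensor is not converted on access. A minimal sketch of that failure mode (toy code with hypothetical names, not Paddle's actual implementation in tensor_impl.h):

```python
import numpy as np

def get_data_as(tensor: np.ndarray, dtype):
    # Mimics an enforce-style check: the tensor must already hold the
    # requested element type; it is never converted on read.
    if tensor.dtype != np.dtype(dtype):
        raise TypeError(
            f"Tensor holds the wrong type, it holds {tensor.dtype}")
    return tensor

params = np.zeros(4, dtype=np.float32)   # tensor created as float32 ('f')
try:
    get_data_as(params, np.float64)      # kernel asks for float64
except TypeError as e:
    print(e)  # Tensor holds the wrong type, it holds float32
```

The fix discussed in this thread is to make the stored dtype and the requested dtype agree, not to cast at read time.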

@dzhwinter dzhwinter merged commit a28dffb into PaddlePaddle:develop May 6, 2018
dzhwinter (Contributor, Author):

@sidgoyal78 We also need to change the optimizer datatype.
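The point about the optimizer datatype can be illustrated with a toy Adam step (plain NumPy, not Paddle's API): Adam keeps per-parameter moment buffers, and those buffers must be allocated with the same dtype as the parameter, otherwise a type check like the one above fails when the kernel reads them.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8):
    # Standard Adam update; m and v are the first/second moment buffers.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)        # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

p = np.ones(3, dtype=np.float64)
# Allocate optimizer state with the parameter's dtype, not a fixed float32:
m = np.zeros_like(p)                    # inherits float64 from p
v = np.zeros_like(p)
g = np.full(3, 0.5, dtype=p.dtype)
p, m, v = adam_step(p, g, m, v, t=1)
assert p.dtype == m.dtype == v.dtype == np.float64
```

Using `zeros_like` (or otherwise deriving the dtype from the parameter) is the pattern being asked for here; hard-coding float32 for the moments is what breaks once the parameters become float64.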

sidgoyal78 (Contributor) commented May 7, 2018:

@dzhwinter Do you have an example? I don't quite understand how we could change the optimizer datatype, since the optimizer API doesn't expose the dtype.


Successfully merging this pull request may close these issues.

Non-deterministic outputs for book chapters (recognize_digits, etc) for a given random seed on GPU
3 participants