Use float64 instead of float32 in logger & metric & result #11386
Labels: bug (Something isn't working) · logging (Related to the `LoggerConnector` and `log()`) · priority: 0 (High priority task)
🐛 Bug
Torch metrics and logged values use float32 as the default dtype.
However, float32 is not appropriate for large values, because it cannot represent them exactly.
For example, when I call log_dict with tensor(67128463., dtype=torch.float64),
the reported result is tensor(67128464.). It seems the logger stores values with the default float32 dtype:
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/connectors/logger_connector/result.py#L217
I think the default dtype for logged values and metrics should be float64.
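To make the rounding concrete, here is a small standalone illustration (independent of Lightning): odd integers above 2**24 cannot be represented exactly in float32, while float64 stores integers exactly up to 2**53.

```python
import torch

value = 67128463.0

# float32 has a 24-bit significand: integers above 2**24 = 16777216 may be
# rounded to the nearest representable value. float64 (53-bit significand)
# stores this value exactly.
print(torch.tensor(value, dtype=torch.float32))  # tensor(67128464.)
print(torch.tensor(value, dtype=torch.float64))  # tensor(67128463., dtype=torch.float64)
```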
To Reproduce
```python
In [17]: torch.tensor(67128463., dtype=torch.float64)
Out[17]: tensor(67128463., dtype=torch.float64)

In [18]: torch.tensor(67128463., dtype=torch.float32)
Out[18]: tensor(67128464.)
```
In the model, calling self.log_dict({"a": torch.tensor(67128463., dtype=torch.float64)}) reports the value as tensor(67128464.).
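For completeness, below is a minimal self-contained reproduction sketch. The module and dataset names are made up for illustration, and it assumes PyTorch Lightning ~1.5 with a default Trainer.

```python
import torch
from torch.utils.data import DataLoader, Dataset
import pytorch_lightning as pl


class RandomDataset(Dataset):
    """Tiny random dataset, only here to drive a single training epoch."""

    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return torch.randn(4)


class PrecisionModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        # Log a float64 value that float32 cannot represent exactly.
        self.log_dict({"a": torch.tensor(67128463.0, dtype=torch.float64)})
        return self.layer(batch).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


if __name__ == "__main__":
    trainer = pl.Trainer(max_epochs=1, log_every_n_steps=1)
    trainer.fit(PrecisionModel(), DataLoader(RandomDataset(), batch_size=4))
    print(trainer.logged_metrics)
```

If the result store is float32-backed as suspected, the printed value for "a" shows the same rounding as in the snippet above.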
Expected behavior
The logged value keeps its float64 precision, i.e. "a" is reported as tensor(67128463., dtype=torch.float64).
Environment
- How you installed PyTorch (`conda`, `pip`, source):
- If compiling from source, the output of `torch.__config__.show()`:

Additional context
cc @tchaton @rohitgr7 @carmocca @edward-io @ananthsub @kamil-kaczmarek @Raalsky @Blaizzy