Clarify mixed precision training support (OpenNMT#1458)
Change the wording to avoid confusion: mixed precision training provides both higher arithmetic throughput and numerical stability, and is not exactly synonymous with pure half-precision/FP16 training. Also mention tensor cores, since older-generation GPUs without tensor cores do not support true mixed precision training.
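For context, a minimal sketch of what mixed precision training typically looks like, using PyTorch's generic `torch.cuda.amp` API rather than OpenNMT-py's actual implementation (the model, data, and hyperparameters below are hypothetical):

```python
import torch
from torch import nn
from torch.cuda.amp import autocast, GradScaler

# Hypothetical toy model and data, only to illustrate the training loop shape.
model = nn.Linear(512, 512).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = GradScaler()  # dynamic loss scaling keeps small FP16 gradients from underflowing

for step in range(10):
    x = torch.randn(32, 512, device="cuda")
    target = torch.randn(32, 512, device="cuda")
    optimizer.zero_grad()
    # Selected ops run in FP16 (using tensor cores where the GPU has them),
    # while the master weights stay in FP32 for numerical stability.
    with autocast():
        loss = nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()   # scale the loss before backprop
    scaler.step(optimizer)          # unscale gradients, then update FP32 weights
    scaler.update()                 # adjust the loss scale dynamically
```

This is what distinguishes mixed precision from pure FP16 training: the FP32 master weights and loss scaling preserve accuracy, while the FP16 compute delivers the throughput gain, which in turn requires tensor-core-capable hardware to pay off.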