Clarify mixed precision training support (OpenNMT#1458)
Change the wording to avoid confusion: mixed precision ensures both higher arithmetic throughput and numerical stability, so it is not exactly synonymous with pure half-precision/FP16 training. Also mention Tensor Cores, since older-generation GPUs without Tensor Cores don't support true mixed-precision training.
khoa-ho authored and vince62s committed Jun 5, 2019
1 parent 065c99f commit a7e5cee
Showing 1 changed file with 1 addition and 1 deletion: README.md
@@ -57,7 +57,7 @@ Note that we currently only support PyTorch 1.1 (should work with 1.0)
 - Inference time loss functions.
 - [Conv2Conv convolution model]
 - SRU "RNNs faster than CNN" paper
-- FP16 training (mixed-precision with Apex)
+- Mixed-precision training with [APEX](https://github.com/NVIDIA/apex), optimized on [Tensor Cores](https://developer.nvidia.com/tensor-cores)
 
 ## Quickstart
 
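For context, below is a minimal sketch of what mixed-precision training with APEX typically looks like in a PyTorch training loop. It is a generic illustration using Apex's `amp.initialize` / `amp.scale_loss` API, not OpenNMT-py's actual training code; the model, data, and hyperparameters are invented for the example.

```python
# Minimal sketch of mixed-precision training with NVIDIA Apex AMP
# (generic PyTorch example, not OpenNMT-py's training loop).
import torch
from apex import amp

model = torch.nn.Linear(512, 512).cuda()          # Apex AMP expects a CUDA model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# opt_level="O1" runs eligible ops in FP16 (using Tensor Cores when available)
# while keeping FP32 master weights and dynamic loss scaling.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for step in range(10):
    inputs = torch.randn(64, 512, device="cuda")
    loss = model(inputs).pow(2).mean()
    optimizer.zero_grad()
    # Scale the loss so small FP16 gradients do not underflow before backward().
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
```

The FP32 master weights and loss scaling are what make this "mixed" precision rather than pure FP16 training, which is the distinction the reworded README line draws.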
