[MXNET-139] Tutorial for mixed precision training with float16 #10391
Conversation
docs/tutorials/python/float16.md
Outdated
Note the accuracy you observe above. You can change DTYPE above to float32 if you want to observe the speedup gained by using float16.
### Finetuning
Should be "Fine-tuning" ?
docs/tutorials/python/float16.md
Outdated
@@ -0,0 +1,280 @@
# Mixed precision training using float16
The computational resources required for training deep neural networks has been increasing of late because of complexity of the architectures and size of models. Mixed precision training allows us to reduces the resources required by using lower precision arithmetic. In this approach we train using 16 bit floating points (half precision) while using 32 bit floating points (single precision) for output buffers of float16 computation. This combination of single and half precision gives rise to the name Mixed precision. It allows us to achieve the same accuracy as training with single precision, while decreasing the required memory and training or inference time.
resources required for training deep neural networks has ->
resources required for training deep neural networks have
gives rise to the name Mixed precision: why capital M?
docs/tutorials/python/float16.md
Outdated
The float16 data type, is a 16 bit floating point representation according to the IEEE 754 standard. It has a dynamic range where the precision can go from 0.0000000596046 (highest, for values closest to 0) to 32 (lowest, for values in the range 32768-65536). Despite the decreased precision when compared to single precision (float32), float16 computation can be much faster on supported hardware. The motivation for using float16 for deep learning comes from the idea that deep neural network architectures have natural resilience to errors due to backpropagation. Half precision is typically sufficient for training neural networks. This means that on hardware with specialized support for float16 computation we can greatly improve the speed of training and inference. This speedup results from faster matrix multiplication, saving on memory bandwidth and reduced communication costs. It also reduces the size of the model, allowing us to train larger models and use larger batch sizes.
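These dynamic-range figures can be checked directly with NumPy; the following is a small sketch of my own (not part of the tutorial under review):

```python
import numpy as np

print(np.finfo(np.float16).max)      # 65504.0, the largest finite float16 value
print(np.float16(2.0 ** -24))        # ~5.96e-08, the smallest positive (subnormal) float16
# Between 32768 and 65536 adjacent float16 values are 32 apart, so adding 16
# is lost to rounding while adding 32 survives:
x = np.float16(32768)
print(x + np.float16(16))            # 32768.0
print(x + np.float16(32))            # 32800.0
```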
The Volta range of Graphics Processing Units (GPUs) from Nvidia have Tensor Cores which perform efficient float16 computation. A tensor core allows accumulation of half precision products into single or half precision outputs. For the rest of this tutorial we assume that we are working with Nvidia's Tensor Cores on a Volta GPU.
Put a reference link to Tensor Cores?
docs/tutorials/python/float16.md
Outdated
2. Cast the data to float16 to match the input type expected by the blocks if necessary.
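A minimal Gluon sketch of this casting step, using assumptions of my own (resnet50_v2, a single Volta GPU, and a random batch) rather than the tutorial's exact code:

```python
import mxnet as mx
from mxnet import gluon, nd

ctx = mx.gpu(0)
net = gluon.model_zoo.vision.get_model('resnet50_v2', classes=101)
net.cast('float16')                              # cast the blocks (parameters) to float16
net.initialize(mx.init.Xavier(), ctx=ctx)

data = nd.random.uniform(shape=(1, 3, 224, 224), ctx=ctx)
data = data.astype('float16')                    # cast the data to match the blocks' input type
print(net(data).dtype)                           # <class 'numpy.float16'>
```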
### Training
Let us look at an example of training a Resnet50 model with the Caltech101 dataset with float16. |
Add a reference link to the dataset description?
docs/tutorials/python/float16.md
Outdated
from mxnet.gluon.data.vision.datasets import ImageFolderDataset
```
Let us start by fetching the Caltech101 dataset and extracting it. |
Could you add a reminder of how big the dataset is (number of images, size in GB)?
docs/tutorials/python/float16.md
Outdated
- Volta range of Nvidia GPUs
- Cuda 9 or higher
- CUDNN v7 or higher
Could you start with an overview that the tutorial covers both Gluon and Symbolic APIs?
docs/tutorials/python/float16.md
Outdated
return net
```
It is preferable to use **multi_precision mode of optimizer** when training in float16. This mode of optimizer maintains the weights in float32 even when the training is in float16. This helps increase precision of the weights and leads to faster convergence for some networks. (Further discussion on this towards the end.)
Do all optimizers support this mode?
SGD supports this natively, as in there's a special kernel for that. Other optimizers support this by making a copy in Python, which can be slightly slower.
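For context, a sketch of how the flag is typically passed through a Gluon `Trainer` (my own illustration, not code from this PR):

```python
import mxnet as mx
from mxnet import gluon

net = gluon.model_zoo.vision.get_model('resnet50_v2', classes=101)
net.cast('float16')
net.initialize(mx.init.Xavier(), ctx=mx.gpu(0))

# 'multi_precision' keeps a float32 master copy of the float16 weights;
# per the comment above, SGD has a fused kernel for this while other
# optimizers fall back to a Python-side copy.
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.1, 'momentum': 0.9,
                         'multi_precision': True})
```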
I feel this tutorial is a little bit too complex for the reader to follow.
Style:
Use the pattern of 'you' instead of 'we'. It's ok to say we prepared this tutorial, but the steps and the prerequisites are for 'you'. Please make this update throughout.
Otherwise, a few other suggestions inline.
docs/tutorials/python/float16.md
Outdated
The computational resources required for training deep neural networks has been increasing of late because of complexity of the architectures and size of models. Mixed precision training allows us to reduces the resources required by using lower precision arithmetic. In this approach we train using 16 bit floating points (half precision) while using 32 bit floating points (single precision) for output buffers of float16 computation. This combination of single and half precision gives rise to the name Mixed precision. It allows us to achieve the same accuracy as training with single precision, while decreasing the required memory and training or inference time.
The float16 data type, is a 16 bit floating point representation according to the IEEE 754 standard. It has a dynamic range where the precision can go from 0.0000000596046 (highest, for values closest to 0) to 32 (lowest, for values in the range 32768-65536). Despite the decreased precision when compared to single precision (float32), float16 computation can be much faster on supported hardware. The motivation for using float16 for deep learning comes from the idea that deep neural network architectures have natural resilience to errors due to backpropagation. Half precision is typically sufficient for training neural networks. This means that on hardware with specialized support for float16 computation we can greatly improve the speed of training and inference. This speedup results from faster matrix multiplication, saving on memory bandwidth and reduced communication costs. It also reduces the size of the model, allowing us to train larger models and use larger batch sizes.
no comma after type
docs/tutorials/python/float16.md
Outdated
The Volta range of Graphics Processing Units (GPUs) from Nvidia have Tensor Cores which perform efficient float16 computation. A tensor core allows accumulation of half precision products into single or half precision outputs. For the rest of this tutorial we assume that we are working with Nvidia's Tensor Cores on a Volta GPU.
In this tutorial we will walk through how one can train deep learning neural networks with mixed precision on supported hardware. We will first see how to use float16 and then some techniques on achieving good performance and accuracy.
I'd move this to the top as the main intro, then use ## Background for the rest.
docs/tutorials/python/float16.md
Outdated
The Volta range of Graphics Processing Units (GPUs) from Nvidia have Tensor Cores which perform efficient float16 computation. A tensor core allows accumulation of half precision products into single or half precision outputs. For the rest of this tutorial we assume that we are working with Nvidia's Tensor Cores on a Volta GPU.
In this tutorial we will walk through how one can train deep learning neural networks with mixed precision on supported hardware. We will first see how to use float16 and then some techniques on achieving good performance and accuracy.
In this tutorial you will learn how you can train...
You will first see how...
Please continue on in this pattern.
docs/tutorials/python/float16.md
Outdated
test_data = gluon.data.DataLoader(dataset_test, BATCH_SIZE, shuffle=False, num_workers=NUM_WORKERS)
```
Next, we'll define softmax cross entropy as our loss, accuracy as our metric and the context on which to run our training jobs. It is set by default to gpu. Please note that float16 on CPU might not be supported for all operators, as float16 on CPU is slower than float32.
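A minimal sketch of what such definitions usually look like (my own example; the tutorial's variable names may differ):

```python
import mxnet as mx
from mxnet import gluon

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
metric = mx.metric.Accuracy()
ctx = mx.gpu(0)   # default to GPU; float16 on CPU is slower and not supported by all operators
```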
GPU or gpu?
docs/tutorials/python/float16.md
Outdated
### Finetuning
You can also finetune in float16, a model which was originally trained in float32. The section of the code which builds the network would now look as follows. We first fetch the pretrained resnet50_v2 model from model zoo. This was trained using Imagenet data, so we need to pass classes as 1000 for fetching the pretrained model. Then we create our new network for Caltech 101 by passing number of classes as 101. We will then cast it to `float16` so that we cast all parameters to `float16`.
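A hedged sketch of the steps described in that paragraph (my reconstruction; the actual code in the diff may differ in details):

```python
import mxnet as mx
from mxnet import gluon
from mxnet.gluon.model_zoo import vision as models

ctx = mx.gpu(0)
dtype = 'float16'

# Pretrained weights exist only for the 1000-class Imagenet variant.
pretrained_net = models.get_model('resnet50_v2', pretrained=True, ctx=ctx, classes=1000)

# New network with 101 output classes for Caltech101, reusing the pretrained features.
net = models.get_model('resnet50_v2', classes=101)
net.features = pretrained_net.features
net.output.initialize(mx.init.Xavier(), ctx=ctx)   # only the new head needs initialization
net.cast(dtype)                                    # cast all parameters to float16
```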
the model zoo.
docs/tutorials/python/float16.md
Outdated
There are a few examples of building such networks which can handle float16 input in [examples/image-classification/symbols/](https://github.com/apache/incubator-mxnet/tree/master/example/image-classification/symbols). Specifically you could look at the [resnet](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/symbols/resnet.py) example.
An illustration of the relevant section of the code is below.
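The snippet itself is not visible in this hunk; the pattern used in the linked resnet example is roughly the following (a sketch with a stand-in body, not the tutorial's exact code):

```python
import mxnet as mx
import numpy as np

dtype = 'float16'
data = mx.sym.Variable('data')
if dtype == 'float16':
    data = mx.sym.Cast(data=data, dtype=np.float16)       # run the body in float16
body = mx.sym.FullyConnected(data=data, num_hidden=101)   # stand-in for the real network body
out = mx.sym.Cast(data=body, dtype=np.float32)            # cast back to float32 ...
out = mx.sym.SoftmaxOutput(data=out, name='softmax')      # ... before the softmax loss
```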
Try to avoid "above", "below", "left", and "right". Use "follows" or "as follows", or "previously". This supports different reading modes.
docs/tutorials/python/float16.md
Outdated
## Things to keep in mind

### For performance
1. Nvidia Tensor core essentially perform the computation D = A * B + C, where A and B are half precision matrices, while C and D could be either half precision or full precision. The tensor cores are most efficient when dimensions of these matrices are multiples of 8. This means that Tensor Cores can not be used in all cases for fast float16 computation. When training models like Resnet50 on the Cifar10 dataset, the tensors involved are sometimes smaller, and tensor cores can not always be used. The computation in that case falls back to slower algorithms and using float16 turns out to be slower than float32 on a single GPU. Note that when using multiple GPUs, using float16 can still be faster than float32 because of reduction in communication costs.
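As a small illustration of the multiples-of-8 rule (my own sketch, not from the tutorial):

```python
def tensor_core_friendly(*dims):
    """Rough heuristic: Tensor Cores are used most efficiently when the
    relevant matrix dimensions are all multiples of 8."""
    return all(d % 8 == 0 for d in dims)

print(tensor_core_friendly(128, 1024, 1000))  # False: 1000 is not a multiple of 8
print(tensor_core_friendly(128, 1024, 1008))  # True
```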
cores perform or core performs
Use consistent CapitaliZation throughout.
ping @rahul003
Thanks guys for your comments. I'll address them soon and update the PR.
docs/tutorials/python/float16.md
Outdated
1. [Training with Mixed Precision User Guide](http://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html)
2. [Mixed Precision Training at ICLR 2018](https://arxiv.org/pdf/1710.03740.pdf)
3. [Mixed-Precision Training of Deep Neural Networks](https://devblogs.nvidia.com/mixed-precision-training-deep-neural-networks/)
Can you add this <!-- INSERT SOURCE DOWNLOAD BUTTONS -->
at the end of your .md file to enable the notebook download of your tutorial? Thanks!
docs/tutorials/python/float16.md
Outdated
url = "https://s3.us-east-2.amazonaws.com/mxnet-public/101_ObjectCategories.tar.gz"
dataset_name = "101_ObjectCategories"
data_folder = "data"
if not os.path.isdir(data_folder):
these lines are unnecessary as the mx.gluon.utils.download will create the directory if it does not exist 😃
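A sketch of that simplification (my own wording of the suggestion, with the tar extraction added for completeness; not the actual patch):

```python
import os
import tarfile
import mxnet as mx

url = "https://s3.us-east-2.amazonaws.com/mxnet-public/101_ObjectCategories.tar.gz"
data_folder = "data"

# Per the comment above, download() creates the target directory itself,
# so the os.path.isdir check and mkdir call can be dropped.
archive = mx.gluon.utils.download(url, path=os.path.join(data_folder, "101_ObjectCategories.tar.gz"))
with tarfile.open(archive) as tar:
    tar.extractall(path=data_folder)
```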
docs/tutorials/python/float16.md
Outdated
pretrained_net = models.get_model(name='resnet50_v2', ctx=ctx, pretrained=True, classes=1000)
pretrained_net.hybridize()
pretrained_net.cast(dtype)
a simpler way of fine-tuning a model from the model-zoo is to use pretrained_net.output = gluon.nn.Dense(101) and then initialize it.
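In code, that suggestion might look roughly like this (my own sketch, not the reviewer's exact snippet):

```python
import mxnet as mx
from mxnet import gluon
from mxnet.gluon.model_zoo import vision as models

ctx = mx.gpu(0)
net = models.get_model('resnet50_v2', pretrained=True, ctx=ctx)
net.output = gluon.nn.Dense(101)                 # swap in a new 101-way classification head
net.output.initialize(mx.init.Xavier(), ctx=ctx) # only the new head needs initialization
net.cast('float16')
```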
docs/tutorials/python/float16.md
Outdated
train(net, dtype=DTYPE, num_epochs=25)
```
We can confirm above that the pretrained model helps achieve much higher accuracy of about 0.97 in the same number of epochs.
Sorry much higher accuracy than?
I think float16 helps you train much faster than float32, but I didn't know it would give you a higher accuracy for a given number of epochs?
On second thought, as per the feedback above, I changed this from a runnable tutorial style to a document focusing on the changes needed to switch to mixed precision. I updated an example in the source to add the example I had in this tutorial, and put a command to run it in this document. I've set up two runs of Resnet50 on Imagenet with float32 and float16, whose plots I will add tomorrow. I also added a link to the video tutorial we have from the MXNet Meetup. I'm hesitant to share raw performance numbers as those would soon become outdated as we improve. I could mention a rough speedup factor instead. What do you guys think?
@ Reviewers, please check the tutorial now. I think we can merge it and keep updating it if you have suggestions for other things to add. Even as is, it would be very useful.
Great work! Pls resolve conflicts
can we merge this?
if d == mx.cpu() and dtype == 'float16':
    #float16 is not supported on CPU
    continue
elif net in ['inception-bn', 'alexnet'] and dt == 'float16':
Benchmark crashes here since dt is not defined.
Thanks for letting me know, fixing it here #11533
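For readers who hit the same crash before the fix lands, a standalone sketch of the intended check, using `dtype` consistently (variable names guessed from the quoted snippet; not the actual patch in #11533):

```python
import mxnet as mx

networks = ['resnet-50', 'inception-bn', 'alexnet']
contexts = [mx.cpu(), mx.gpu(0)]
for net_name in networks:
    for d in contexts:
        for dtype in ['float32', 'float16']:
            if d == mx.cpu() and dtype == 'float16':
                # float16 is not supported on CPU
                continue
            if net_name in ['inception-bn', 'alexnet'] and dtype == 'float16':
                # these networks are skipped for float16 in the benchmark
                continue
            print('would benchmark', net_name, 'on', d, 'with', dtype)
```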
…e#10391)
* dtype for data, working fp16
* test dtype fp16 gluon
* add gluon fine tuning code
* data iter caltech
* caltech iter
* working finetuning for fp16, but is it using pretrained params
* benchmark fp16
* add wip tutorials
* working notebook fp16
* changes to symbolic examples
* changes to symbolic examples
* add fp16 notebook
* remove extra files
* remove output of notebook
* update md file
* remove from faq
* WIP address feedback
* gluon example
* add top5 back
* clean up gluon example
* address feedback
* address comments
* move tutorial to faq
* Add training curves
* formatting
* update image
* trigger ci
Description
Adds a FAQ page on mixed precision training with float16. Explains usage for both Gluon and Symbolic, and discusses tips to improve performance and accuracy when using mixed precision.
https://issues.apache.org/jira/browse/MXNET-139