I'm playing with PyTorch on the CIFAR10 dataset.
Pros:
- Built-in data loading and augmentation, very nice! (see the loader sketch after this list)
- Training is fast, maybe even a little bit faster.
- Very memory efficient!
Cons:
- No progress bar, sad :(
- No built-in log.
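The built-in loading and augmentation come from `torchvision`. A minimal sketch of a CIFAR10 training loader with the usual pad-and-crop plus horizontal-flip augmentation (the normalization statistics, batch size, and worker count are common defaults, not necessarily the exact values used here):

```python
import torch
import torchvision
import torchvision.transforms as transforms

# Standard CIFAR10 augmentation: pad-and-crop plus random horizontal flip.
# Mean/std are commonly quoted CIFAR10 statistics (assumed, not taken from this repo).
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])

trainset = torchvision.datasets.CIFAR10(
    root='./data', train=True, download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(
    trainset, batch_size=128, shuffle=True, num_workers=2)
```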
| Model            | Acc.   |
| ---------------- | ------ |
| VGG16            | 92.64% |
| ResNet18         | 93.02% |
| ResNet50         | 93.62% |
| ResNet101        | 93.75% |
| MobileNetV2      | 94.43% |
| ResNeXt29(32x4d) | 94.73% |
| ResNeXt29(2x64d) | 94.82% |
| DenseNet121      | 95.04% |
| PreActResNet18   | 95.11% |
| DPN92            | 95.16% |
I manually change the `lr` during training:
- `0.1` for epoch `[0,150)`
- `0.01` for epoch `[150,250)`
- `0.001` for epoch `[250,350)`

Resume the training with `python main.py --resume --lr=0.01`.
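Here the lr change is done by hand (restarting with a different `--lr`), but the same piecewise schedule can be expressed with `torch.optim.lr_scheduler.MultiStepLR`. A minimal sketch, with a placeholder model and assumed SGD hyperparameters:

```python
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import MultiStepLR

net = nn.Linear(3 * 32 * 32, 10)  # placeholder model; swap in VGG16, ResNet18, etc.
# SGD settings below are assumptions, not necessarily what main.py uses.
optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

# Drops the lr by 10x at epochs 150 and 250: 0.1 -> 0.01 -> 0.001.
scheduler = MultiStepLR(optimizer, milestones=[150, 250], gamma=0.1)

for epoch in range(350):
    # ... one epoch of training (forward/backward/optimizer.step()) goes here ...
    scheduler.step()
```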
`python main.py`

Specify a batch size by using `-b <batch_size>`. By default it is `128`.
By default the output is not saved to a file.
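For reference, a minimal `argparse` sketch of the options mentioned in this README (this is an illustration, not necessarily the exact code in `main.py`):

```python
import argparse

parser = argparse.ArgumentParser(description='PyTorch CIFAR10 Training')
parser.add_argument('--lr', default=0.1, type=float, help='learning rate')
parser.add_argument('-b', '--batch-size', default=128, type=int,
                    help='mini-batch size (default: 128)')
parser.add_argument('--resume', action='store_true', help='resume from checkpoint')
args = parser.parse_args()
```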
`python compare_batches.py <name>`

When training the model, have the output written to `log/<name>-<batch_size>`, where `<name>` will be used in the title of the produced graph. The batch sizes used in the graph are specified in the `batch_no` variable.
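A rough sketch of how such a comparison graph could be produced from those log files (the log format, the `Acc:` pattern, and the `batch_no` values below are assumptions; `compare_batches.py` may differ):

```python
import re
import sys

import matplotlib.pyplot as plt

name = sys.argv[1]              # e.g. "resnet18"
batch_no = [32, 64, 128, 256]   # batch sizes to compare (assumed values)

for bs in batch_no:
    accs = []
    # Assumed log format: one "Acc: 93.02%"-style entry per epoch.
    with open(f'log/{name}-{bs}') as f:
        for line in f:
            m = re.search(r'Acc:\s*([\d.]+)', line)
            if m:
                accs.append(float(m.group(1)))
    plt.plot(accs, label=f'batch size {bs}')

plt.title(name)
plt.xlabel('epoch')
plt.ylabel('test accuracy (%)')
plt.legend()
plt.show()
```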