- [InceptionV4](https://github.com/Cadene/pretrained-models.pytorch#inception)
- [NASNet-A-Large](https://github.com/Cadene/pretrained-models.pytorch#nasnet)
- [NASNet-A-Mobile](https://github.com/Cadene/pretrained-models.pytorch#nasnet)
- [PNASNet-5-Large](https://github.com/Cadene/pretrained-models.pytorch#pnasnet)
- [ResNeXt101_32x4d](https://github.com/Cadene/pretrained-models.pytorch#resnext)
- [ResNeXt101_64x4d](https://github.com/Cadene/pretrained-models.pytorch#resnext)
- [ResNet101](https://github.com/Cadene/pretrained-models.pytorch#torchvision)
```python
import pretrainedmodels
print(pretrainedmodels.model_names)
> ['fbresnet152', 'bninception', 'resnext101_32x4d', 'resnext101_64x4d', 'inceptionv4', 'inceptionresnetv2', 'alexnet', 'densenet121', 'densenet169', 'densenet201', 'densenet161', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'inceptionv3', 'squeezenet1_0', 'squeezenet1_1', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn', 'vgg19_bn', 'vgg19', 'nasnetalarge', 'nasnetamobile', 'cafferesnet101', 'senet154', 'se_resnet50', 'se_resnet101', 'se_resnet152', 'se_resnext50_32x4d', 'se_resnext101_32x4d', 'pnasnet5large']
```

- To print the available pretrained settings for a chosen model:
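As a rough sketch of what those settings look like, each model maps a pretraining name to its preprocessing metadata; the exact keys and values below are an assumption modeled on the library's layout, not a verbatim copy:

```python
# Hypothetical sketch of the per-model pretrained-settings structure;
# the real pretrainedmodels.pretrained_settings dict may differ in detail.
pretrained_settings = {
    'nasnetalarge': {
        'imagenet': {
            'input_size': [3, 331, 331],  # channels, height, width expected by the network
            'mean': [0.5, 0.5, 0.5],      # per-channel normalization mean
            'std': [0.5, 0.5, 0.5],       # per-channel normalization std
            'num_classes': 1000,
        },
    },
}

settings = pretrained_settings['nasnetalarge']['imagenet']
print(settings['input_size'])  # [3, 331, 331]
```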
Results were obtained using (center cropped) images of the same size as during the training process.

Model | Version | Acc@1 | Acc@5
--- | --- | --- | ---
PNASNet-5-Large | [Tensorflow](https://github.com/tensorflow/models/tree/master/research/slim) | 82.858 | 96.182
[PNASNet-5-Large](https://github.com/Cadene/pretrained-models.pytorch#pnasnet) | Our porting | 82.736 | 95.992
NASNet-A-Large | [Tensorflow](https://github.com/tensorflow/models/tree/master/research/slim) | 82.693 | 96.163
[NASNet-A-Large](https://github.com/Cadene/pretrained-models.pytorch#nasnet) | Our porting | 82.566 | 96.086
SENet154 | [Caffe](https://github.com/hujie-frank/SENet) | 81.32 | 95.53
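The center-crop arithmetic behind this evaluation protocol can be sketched with plain Python; `center_crop_box` is a hypothetical helper (not part of the library) and the 331×331 evaluation size is taken from the NASNet/PNASNet settings:

```python
def center_crop_box(width, height, crop_size):
    """Return the (left, upper, right, lower) box of a centered
    crop_size x crop_size square inside a width x height image."""
    left = (width - crop_size) // 2
    upper = (height - crop_size) // 2
    return (left, upper, left + crop_size, upper + crop_size)

# PNASNet-5-Large and NASNet-A-Large evaluate on 331x331 center crops.
print(center_crop_box(400, 400, 331))  # (34, 34, 365, 365)
```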
Source: [Caffe repo of Jie Hu](https://github.com/hujie-frank/SENet)

- `se_resnext50_32x4d(num_classes=1000, pretrained='imagenet')`
- `se_resnext101_32x4d(num_classes=1000, pretrained='imagenet')`
#### PNASNet*

Source: [TensorFlow Slim repo](https://github.com/tensorflow/models/tree/master/research/slim)

- `pnasnet5large(num_classes=1000, pretrained='imagenet')`
- `pnasnet5large(num_classes=1001, pretrained='imagenet+background')`
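The `num_classes=1001` variant keeps the extra background class from the TF Slim checkpoint. A minimal sketch of mapping a 1001-class prediction index back to the standard 1000-class ImageNet label space, assuming (as in TF Slim checkpoints) that the background class sits at index 0; the helper name is hypothetical:

```python
def to_imagenet_index(idx_1001):
    """Map an index from the 1001-class space (background at index 0,
    per the TF Slim convention) to the 1000-class ImageNet space."""
    if idx_1001 == 0:
        raise ValueError("index 0 is the background class")
    return idx_1001 - 1

print(to_imagenet_index(1))  # maps to the first ImageNet class, index 0
```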

#### TorchVision
