
# PIRL

Self-Supervised Learning of Pretext-Invariant Representations

Ishan Misra, Laurens van der Maaten

[arXiv] [BibTeX]

*(PIRL teaser figure)*

## Training

All model configs used to train PIRL models are located in the `configs/config/pretrain/pirl` directory.

For example, to train the ResNet-50 model used in the PIRL paper, you can run:

```bash
python tools/run_distributed_engines.py config=pretrain/pirl/pirl_jigsaw_4node_resnet50
```
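
The config above assumes a 4-node setup. As a rough sketch of how you might scale it down to a single machine, VISSL configs can be overridden from the command line; treat the `config.DISTRIBUTED.NUM_NODES` and `config.DISTRIBUTED.NUM_PROC_PER_NODE` keys below as assumptions about the config schema and check them against your VISSL version:

```bash
# Hypothetical single-node run: override the distributed settings on the command line.
# The NUM_NODES / NUM_PROC_PER_NODE key names are assumptions about the VISSL config schema.
python tools/run_distributed_engines.py config=pretrain/pirl/pirl_jigsaw_4node_resnet50 \
    config.DISTRIBUTED.NUM_NODES=1 \
    config.DISTRIBUTED.NUM_PROC_PER_NODE=8
```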

## Improvements to PIRL training

We can train the PIRL model with improvements from SimCLR (Chen et al., 2020), namely the MLP head for feature projection and the Gaussian blur data augmentation:

```bash
python tools/run_distributed_engines.py config=pretrain/pirl/pirl_jigsaw_4node_resnet50 \
    +config/pretrain/pirl/models=resnet50_mlphead \
    +config/pretrain/pirl/transforms=photo_gblur
```

## Model Zoo

We provide the following pretrained models and report their single-crop top-1 accuracy on the ImageNet validation set.

| Model | Epochs | Head   | Top-1 (%) | Checkpoint |
|-------|--------|--------|-----------|------------|
| R50   | 200    | Linear | 62.9      | model      |
| R50   | 200    | MLP    | 65.8      | model      |
| R50   | 800    | Linear | 64.29     | model      |
| R50   | 800    | MLP    | 69.9      | model      |
| R50w2 | 400    | Linear | 69.3      | model      |
| R50w2 | 400    | MLP    | 70.9      | model      |
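
A common way to use one of these checkpoints is to plug it into a linear-evaluation run. The sketch below assumes your VISSL checkout ships the standard ImageNet-1k linear benchmark config; the exact benchmark config path and the `config.MODEL.WEIGHTS_INIT.PARAMS_FILE` key are assumptions to verify against your version, and `<path/to/pirl_checkpoint.torch>` is a placeholder for a downloaded checkpoint:

```bash
# Hypothetical linear-evaluation run on a downloaded PIRL checkpoint.
# The benchmark config path and the WEIGHTS_INIT key are assumptions about the VISSL config schema.
python tools/run_distributed_engines.py \
    config=benchmark/linear_image_classification/imagenet1k/eval_resnet_8gpu_transfer_in1k_linear \
    config.MODEL.WEIGHTS_INIT.PARAMS_FILE=<path/to/pirl_checkpoint.torch>
```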

## Citing PIRL

If you find PIRL useful, please consider citing the following paper:

```bibtex
@inproceedings{misra2020pirl,
  title={Self-Supervised Learning of Pretext-Invariant Representations},
  author={Misra, Ishan and van der Maaten, Laurens},
  booktitle={CVPR},
  year={2020}
}
```