This repository contains supplementary code for the paper *A Closer Look at Benchmarking Self-Supervised Pre-Training with Image Classification*.
- Set up Python environment (3.8 or higher)
- Run `setup.sh` to install dependencies
- Set paths in `paths.yaml` (no need to create the folders manually); a hypothetical example of loading such a config is sketched after this list
- Manually run `download_imagenet.sh` or `download_imagenet_d.sh` if needed; other datasets and checkpoints are downloaded on the fly
- Use one of the following scripts as your entry point:
  - `run_linear_probe.py`
  - `run_finetuning.py`
  - `run_knn_probe.py` (we recommend precalculating embeddings with `precalculate_embeddings.py` beforehand; see the k-NN sketch after this list)
  - `run_fewshot_finetuning.py`
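The schema of `paths.yaml` is defined by this repo; purely as an illustration, a path config of this kind is usually loaded along these lines (the key names below are hypothetical, not the repo's actual keys):

```python
# Hypothetical sketch of reading a YAML path config with PyYAML.
# The actual keys in paths.yaml are defined by this repo and may differ.
import yaml

with open("paths.yaml") as f:
    paths = yaml.safe_load(f)

data_dir = paths["data_dir"]              # hypothetical key
checkpoint_dir = paths["checkpoint_dir"]  # hypothetical key
```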
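For intuition, here is a minimal sketch of what a k-NN probe over precalculated embeddings boils down to. The file names and the scikit-learn dependency are assumptions for illustration; this is not the implementation in `run_knn_probe.py`:

```python
# Minimal k-NN probe sketch over frozen-backbone embeddings.
# File names below are placeholders, not outputs of this repo.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

train_embs = np.load("train_embs.npy")      # (N, D) precalculated features
train_labels = np.load("train_labels.npy")  # (N,)
test_embs = np.load("test_embs.npy")
test_labels = np.load("test_labels.npy")

# Cosine distance is a common choice for SSL embeddings; k=20 is a typical default.
knn = KNeighborsClassifier(n_neighbors=20, metric="cosine")
knn.fit(train_embs, train_labels)
print(f"k-NN top-1 accuracy: {knn.score(test_embs, test_labels):.4f}")
```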
All scripts currently log results to Weights & Biases (wandb). If you do not want to use wandb, you will need to adapt the logging calls in the scripts; one possible approach is sketched below.
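As a minimal sketch (not the logging code actually used by these scripts), one way to make wandb optional is a thin wrapper that falls back to printing JSON lines:

```python
# Illustrative sketch of optional wandb logging; assumes wandb.init()
# has already been called when use_wandb is True.
import json

try:
    import wandb
    WANDB_AVAILABLE = True
except ImportError:
    WANDB_AVAILABLE = False

def log_metrics(metrics: dict, step: int, use_wandb: bool = True) -> None:
    """Log to wandb if requested and available, otherwise print JSON lines."""
    if use_wandb and WANDB_AVAILABLE:
        wandb.log(metrics, step=step)
    else:
        print(json.dumps({"step": step, **metrics}))
```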
If you find this repo useful, please consider citing us:
@article{marks2024benchmarking,
title={A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification},
author={Marks, Markus and Knott, Manuel and Kondapaneni, Neehar and Cole, Elijah and Defraeye, Thijs and Perez-Cruz, Fernando and Perona, Pietro},
journal={arXiv preprint arXiv:2407.12210},
year={2024}
}
This repo uses code and checkpoints adapted from the following repositories:
- Jigsaw (from VISSL model zoo)
- RotNet (from VISSL model zoo)
- NPID (from VISSL model zoo)
- SeLa-v2 (from SwAV repo)
- NPID++ (from VISSL model zoo)
- PIRL (from VISSL model zoo)
- ClusterFit (from VISSL model zoo)
- DeepCluster-v2 (from SwAV repo)
- SwAV
- SimCLR (from VISSL model zoo)
- MoCo v2
- SimSiam (from MMSelfSup model zoo)
- BYOL (unofficial PyTorch implementation)
- Barlow Twins (from MMSelfSup model zoo)
- DenseCL
- DINO
- MoCo v3
- iBOT
- MAE
- MaskFeat (from MMSelfSup model zoo)
- BEiT v2
- MILAN
- EVA (from MMSelfSup model zoo)
- PixMIM (from MMSelfSup model zoo)
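Checkpoints from different model zoos typically store backbone weights under different layouts and key prefixes. Purely as an illustration of the kind of adaptation involved (the prefix, file name, and `state_dict` nesting below are generic examples, not the specifics of these checkpoints):

```python
# Illustrative sketch: stripping a key prefix when adapting a checkpoint
# from an external model zoo. Prefix and file name are examples only.
import torch

state = torch.load("external_checkpoint.pth", map_location="cpu")
state = state.get("state_dict", state)  # many zoos nest weights under "state_dict"

prefix = "backbone."  # example prefix; varies per model zoo
cleaned = {k[len(prefix):]: v for k, v in state.items() if k.startswith(prefix)}

# A model with a matching architecture could then load these weights, e.g.:
# model.load_state_dict(cleaned, strict=False)
```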