[ICML 2023] Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN

Masked image modeling (MIM), an emerging self-supervised pre-training method, has shown impressive success across numerous downstream vision tasks with Vision Transformers (ViTs). Its underlying idea is simple: a portion of the input image is randomly masked out and then reconstructed via a pretext task. However, why MIM works well is still not well explained, and previous studies insist that MIM primarily works for the Transformer family but is incompatible with CNNs. In this paper, we first study interactions among patches to understand what knowledge is learned and how it is acquired via the MIM task. We observe that MIM essentially teaches the model to learn better middle-order interactions among patches and to extract more generalized features. Based on this observation, we propose an Architecture-Agnostic Masked Image Modeling framework (A2MIM), which is compatible with not only Transformers but also CNNs in a unified way. Extensive experiments on popular benchmarks show that A2MIM learns better representations and endows the backbone model with a stronger capability to transfer to various downstream tasks, for both Transformers and CNNs.
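
The masking-and-reconstruction idea described above can be summarized in a few lines. Below is a minimal, generic sketch of the MIM pretext task (random patch masking plus a reconstruction loss on the masked pixels only), not the exact A2MIM implementation (A2MIM additionally masks intermediate feature maps and uses an extra frequency-domain loss); `encoder`, `decoder`, and the hyper-parameters are placeholders.

```python
import torch
import torch.nn.functional as F

def mim_loss(encoder, decoder, images, patch_size=32, mask_ratio=0.6):
    """One illustrative MIM step: mask random patches, reconstruct the pixels."""
    B, C, H, W = images.shape
    ph, pw = H // patch_size, W // patch_size
    # Per-image random patch mask: 1 = masked, 0 = visible.
    patch_mask = (torch.rand(B, ph, pw, device=images.device) < mask_ratio).float()
    pixel_mask = patch_mask.repeat_interleave(patch_size, dim=1)
    pixel_mask = pixel_mask.repeat_interleave(patch_size, dim=2).unsqueeze(1)  # (B, 1, H, W)
    masked_images = images * (1.0 - pixel_mask)    # zero out the masked patches
    pred = decoder(encoder(masked_images))         # reconstruct the full image
    # L1 reconstruction loss, computed only on the masked pixels.
    loss = (F.l1_loss(pred, images, reduction="none") * pixel_mask).sum()
    return loss / (pixel_mask.sum() * C + 1e-8)
```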

Table of Contents
  1. Catalog
  2. License
  3. Acknowledgement
  4. Citation

Catalog

We have released implementations of A2MIM based on OpenMixup. In the future, we plan to add A2MIM implementations to MMPretrain. Pre-trained and fine-tuned models are released on GitHub / Baidu Cloud.

Pre-training on ImageNet

1. Installation

Please refer to INSTALL.md for installation instructions.

2. Pre-training and fine-tuning

We provide a script for multi-GPU pre-training with a specified CONFIG_FILE:

bash tools/dist_train.sh ${CONFIG_FILE} ${GPUS} [optional arguments]

For example, you can run the script below to pre-train ResNet-50 with A2MIM on ImageNet with 8 GPUs:

PORT=29500 bash tools/dist_train.sh configs/openmixup/pretrain/a2mim/imagenet/r50_l3_sz224_init_8xb256_cos_ep300.py 8

After pre-training, you can fine-tune and evaluate the models with the corresponding scripts:

python tools/model_converters/extract_backbone_weights.py work_dirs/openmixup/pretrain/a2mim/imagenet/r50_l3_sz224_init_8xb256_cos_ep300/latest.pth ${PATH_TO_CHECKPOINT}
PORT=29500 bash tools/dist_train_ft_8gpu.sh configs/openmixup/finetune/imagenet/r50_rsb_a3_ft_sz160_4xb512_cos_fp16_ep100.py ${PATH_TO_CHECKPOINT}
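
For reference, the extraction step above roughly amounts to keeping only the backbone weights from the pre-training checkpoint before fine-tuning. The snippet below is a hedged sketch of that idea; the key names (`state_dict`, the `backbone.` prefix) and the example file names are assumptions, and the actual logic lives in tools/model_converters/extract_backbone_weights.py.

```python
import torch

def extract_backbone(src_path, dst_path, prefix="backbone."):
    """Keep only the backbone weights from a pre-training checkpoint (sketch)."""
    ckpt = torch.load(src_path, map_location="cpu")
    state = ckpt.get("state_dict", ckpt)   # unwrap the checkpoint dict (assumption)
    backbone = {k[len(prefix):]: v         # strip the "backbone." prefix (assumption)
                for k, v in state.items() if k.startswith(prefix)}
    torch.save({"state_dict": backbone}, dst_path)

# Hypothetical usage:
# extract_backbone("latest.pth", "r50_a2mim_backbone.pth")
```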

Results and Models

We provide a summary of the pre-training and fine-tuning results of A2MIM and baselines on ImageNet-1K (fine-tuning top-1 accuracy, %).

| Backbone | # Params. (M) | Supervised (Label) | SimMIM (RGB) | A2MIM (RGB) |
|---|---|---|---|---|
| ViT-S | 48.8 | 79.9 | 81.7 | 82.1 |
| ViT-B | 86.7 | 81.8 | 83.8 | 84.2 |
| ViT-L | 304.6 | 82.6 | 85.6 | 86.1 |
| ResNet-50 | 25.6 | 79.8 | 79.9 | 80.4 |
| ResNet-101 | 44.5 | 81.3 | 81.3 | 81.9 |
| ResNet-152 | 60.2 | 81.8 | 81.9 | 82.5 |
| ResNet-200 | 64.7 | 82.1 | 82.2 | 83.0 |
| ConvNeXt-S | 50.2 | 83.1 | 83.2 | 83.7 |
| ConvNeXt-B | 88.6 | 83.5 | 83.6 | 84.1 |

Config files, models, logs, and visualizations of reconstructions are provided as follows. These files can also be downloaded from a2mim-in1k-weights or Baidu Cloud: A2MIM (extraction code: 3q5i).

ViT-S/B/L on ImageNet-1K.

| Method | Backbone | Epoch | Fine-tuning Top-1 | Pre-training | Fine-tuning | Results |
|---|---|---|---|---|---|---|
| SimMIM | ViT-Small | 800 | 81.7 | config / ckpt / vis | config | ckpt / log |
| A2MIM | ViT-Small | 800 | 82.1 | config / ckpt / vis | config | ckpt / log |
| SimMIM | ViT-Base | 800 | 83.8 | config / ckpt / vis | config | ckpt / log |
| A2MIM | ViT-Base | 800 | 84.3 | config / ckpt / vis | config | ckpt / log |
| SimMIM | ViT-Large | 800 | 85.6 | config / ckpt / vis | config | log |
| A2MIM | ViT-Large | 800 | 86.1 | config / ckpt / vis | config | log |
ResNet-50/101/152/200 on ImageNet-1K.

| Method | Backbone | Epoch | Fine-tuning (A2) Top-1 | Pre-training | Fine-tuning | Results |
|---|---|---|---|---|---|---|
| SimMIM | ResNet-50 | 300 | 79.9 | config / ckpt / vis | RSB A2 | - |
| A2MIM | ResNet-50 | 100 | 78.8 | config / ckpt / vis | RSB A3 | ckpt / log |
| A2MIM | ResNet-50 | 300 | 80.4 | config / ckpt / vis | RSB A2 | ckpt / log |
| SimMIM | ResNet-101 | 300 | 81.3 | config / ckpt | RSB A2 | ckpt (A3) / log (A3) |
| A2MIM | ResNet-101 | 300 | 81.9 | config / ckpt (300ep) / ckpt (800ep) | RSB A2 | ckpt (A2) / log (A2) |
| SimMIM | ResNet-152 | 300 | 81.9 | config / ckpt | RSB A2 | log (A3) |
| A2MIM | ResNet-152 | 300 | 82.5 | config / ckpt (300ep) / ckpt (800ep) | RSB A2 | ckpt (A2) / log (A2) |
| SimMIM | ResNet-200 | 300 | 82.2 | config / ckpt / vis | RSB A2 | ckpt / log |
| A2MIM | ResNet-200 | 300 | 83.0 | config / ckpt / vis | RSB A2 | ckpt / log |
ConvNeXt-S/B on ImageNet-1K.

| Method | Backbone | Epoch | Fine-tuning (A2) Top-1 | Pre-training | Fine-tuning | Results |
|---|---|---|---|---|---|---|
| SimMIM | ConvNeXt-S | 300 | 83.2 | config / ckpt / vis | RSB A2 | - |
| A2MIM | ConvNeXt-S | 300 | 83.7 | config / ckpt / vis | RSB A2 | ckpt / log |
| SimMIM | ConvNeXt-B | 300 | 83.6 | config / ckpt | RSB A2 | ckpt / log |
| A2MIM | ConvNeXt-B | 300 | 84.1 | config / ckpt | RSB A2 | ckpt (A2) / ckpt (A3) / log (A2) / log (A3) |

License

This project is released under the Apache 2.0 license.

Acknowledgement

Our implementation is mainly based on the following codebases. We sincerely thank the authors for their wonderful work.

  • OpenMixup: Open-source toolbox for supervised and self-supervised visual representation learning.
  • pytorch-image-models: PyTorch image models, scripts, pretrained weights.
  • SimMIM: Official PyTorch implementation of SimMIM.
  • MMPretrain: OpenMMLab Pre-training Toolbox and Benchmark.
  • MMDetection: OpenMMLab Detection Toolbox and Benchmark.
  • MMSegmentation: OpenMMLab Semantic Segmentation Toolbox and Benchmark.

Citation

If you find this repository helpful, please consider citing our paper:

@inproceedings{icml2023a2mim,
  title={Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN},
  author={Li, Siyuan and Wu, Di and Wu, Fang and Zang, Zelin and Li, Stan Z.},
  booktitle={International Conference on Machine Learning},
  year={2023},
}
