We provide links to download our checkpoints, including pretrained and finetuned models for different tasks. If you would like to use OFA with Transformers, please download the checkpoints at https://huggingface.co/OFA-Sys and check out the code in the `feature/add_transformers` branch.
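For illustration, here is a minimal sketch of fetching a checkpoint from the OFA-Sys organization on the Hugging Face Hub and loading it with the Transformers fork. The repo id `OFA-Sys/OFA-base` and the `OFATokenizer`/`OFAModel` class names are assumptions; check https://huggingface.co/OFA-Sys and the `feature/add_transformers` branch for the exact names.

```python
# Sketch: download an OFA checkpoint from the Hugging Face Hub and load it.
# Assumes Transformers is installed from the feature/add_transformers branch
# of the OFA repository; the repo id and class names below are assumptions.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="OFA-Sys/OFA-base")  # hypothetical repo id

from transformers import OFATokenizer, OFAModel  # assumed class names from the branch

tokenizer = OFATokenizer.from_pretrained(local_dir)
model = OFAModel.from_pretrained(local_dir)

# Encode a simple text prompt; image inputs are handled separately by the model.
inputs = tokenizer(["what does the image describe?"], return_tensors="pt")
print(inputs.input_ids.shape)
```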
- Pre-trained checkpoint (OFA-Huge) (~930M parameters)
- Pre-trained checkpoint (OFA-Large) (~470M parameters)
- Pre-trained checkpoint (OFA-Base) (~180M parameters)
- Pre-trained checkpoint (OFA-Medium) (~93M parameters)
- Pre-trained checkpoint (OFA-Tiny) (~33M parameters)
- Finetuned checkpoint for Caption on COCO
- Finetuned checkpoint for Caption on COCO during stage-1 finetuning
- Finetuned checkpoint for RefCOCO
- Finetuned checkpoint for RefCOCO+
- Finetuned checkpoint for RefCOCOg
- Finetuned checkpoint for VQAv2
- Finetuned checkpoint for SNLI-VE
- Finetuned checkpoint for Text-to-Image Generation on COCO (with CLIP checkpoint && VQGAN checkpoint)
- Finetuned checkpoint for ImageNet-1K
- Finetuned checkpoint for Gigaword
- Finetuned base checkpoint for Caption on COCO
- Finetuned base checkpoint for RefCOCO
- Finetuned base checkpoint for RefCOCO+
- Finetuned base checkpoint for RefCOCOg
- Finetuned base checkpoint for VQAv2
- Finetuned base checkpoint for SNLI-VE
- Finetuned base checkpoint for Text-to-Image Generation on COCO
To follow our multimodal pretraining, we suggest initializing from pretrained language models. Note that for the base-size and large-size models, we directly use BART-base and BART-large, while for the other sizes we pretrained the tiny-size, medium-size, and huge-size OFA-based language models ourselves.