Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, Yue Cao
We launch EVA, a vision-centric foundation model to Explore the limits of Visual representation at scAle using only publicly accessible data and academic resources. EVA is a vanilla ViT pre-trained to reconstruct the masked-out, image-text aligned vision features (i.e., CLIP features) conditioned on visible image patches. Via this pretext task, we can efficiently scale up EVA to one billion parameters and set new records on a broad range of representative vision downstream tasks.
EVA is the first open-sourced billion-scale vision foundation model that achieves state-of-the-art performance on a broad range of downstream tasks.
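The pretext task above can be summarized in a few lines of simplified PyTorch-style code. This is a minimal sketch under stated assumptions, not the official training code: the function name `eva_pretrain_loss`, the frozen CLIP encoder interface, the masking convention, and the cosine-similarity regression loss are illustrative choices.

```python
import torch
import torch.nn.functional as F

def eva_pretrain_loss(student_vit, clip_vision_tower, images, mask):
    """Sketch of EVA's pretext task: regress the CLIP vision features of
    masked-out patches, conditioned on the visible ones.

    student_vit       -- plain ViT that receives the image plus a boolean patch mask
    clip_vision_tower -- frozen CLIP image encoder producing per-patch features (B, N, C)
    images            -- (B, 3, H, W) input batch
    mask              -- (B, N) boolean tensor, True where a patch is masked out
    """
    with torch.no_grad():
        targets = clip_vision_tower(images)            # (B, N, C) regression targets

    preds = student_vit(images, bool_masked_pos=mask)  # (B, N, C) predictions

    # Regress only the masked positions; a negative cosine similarity is one
    # common choice of loss when the targets are (normalized) feature vectors.
    preds = F.normalize(preds[mask], dim=-1)           # (num_masked, C)
    targets = F.normalize(targets[mask], dim=-1)
    return -(preds * targets).sum(dim=-1).mean()
```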
- Dec 12, 2022: EVA and EVA-L model weights are added to the awesome `timm` library, thanks @rwightman!
- Dec 07, 2022: launch EVA-L, the best ViT-L (304M) to date that can reach up to 89.2 top-1 acc on IN-1K (weights & logs) by leveraging vision features from EVA-CLIP.
- Nov 25, 2022: release EVA-CLIP zero-shot evaluation results on 35 benchmarks.
- Nov 22, 2022: release code & models of object detection and instance segmentation.
- Nov 21, 2022: release code & models of video classification, semantic segmentation, and EVA-CLIP.
- Nov 20, 2022: release code & models of pre-training and image classification.
- Nov 18, 2022: release wandb log & statistics of 1.1B EVA-CLIP training.
All EVA model checkpoints are now available at 🤗 Hugging Face Models and BAAI ModelHub (EVA & EVA-CLIP). Try them out!
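For a quick start, the `timm`-hosted EVA weights mentioned above can be loaded with a few lines of Python. This is a hypothetical usage sketch: the exact model name below is an assumption, so list the available variants in your `timm` version first and substitute the one you need.

```python
# Hypothetical quick-start sketch for the timm-hosted EVA weights.
import timm
import torch

# Discover which EVA variants your timm version ships with pretrained weights.
print(timm.list_models('eva*', pretrained=True))

# The model name below is an assumption taken from such a listing;
# replace it with whichever variant you actually want.
model = timm.create_model('eva_giant_patch14_336.clip_ft_in1k', pretrained=True)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 336, 336))  # dummy input at the model's resolution
print(logits.shape)  # e.g. torch.Size([1, 1000]) for an IN-1K fine-tuned head
```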
- Pre-training
- Image Classification
- Video Classification
- Object Detection & Instance Segmentation
- Semantic Segmentation
- CLIP
image & video classification
| model | #param. | IN-1K, e2e ft | IN-1K, linear | IN-1K, zero-shot | 12 avg. zero-shot | K400 | K600 | K700 |
|---|---|---|---|---|---|---|---|---|
| EVA or EVA-CLIP | 1.0B | 89.7 | 86.5 | 78.5 | 75.7 | 89.7 | 89.8 | 82.9 |

All numbers are top-1 accuracy (%). The IN-1K columns cover image classification (end-to-end fine-tuning, linear probing, zero-shot, and the average zero-shot accuracy over 12 benchmarks); K400/K600/K700 are Kinetics video classification.
object detection & segmentation
| model | #param. | COCO det (test-dev) | COCO det (val) | COCO ins seg (test-dev) | COCO ins seg (val) | LVIS det | LVIS ins seg | COCO-Stuff sem seg | ADE20K sem seg |
|---|---|---|---|---|---|---|---|---|---|
| EVA | 1.0B | 64.7 | 64.5 | 55.5 | 55.0 | 62.2 | 55.0 | 53.4 | 62.3 |

Detection and instance segmentation numbers are AP (box / mask) on COCO and LVIS; semantic segmentation numbers are mIoU on COCO-Stuff and ADE20K.
If you find our work helpful, please star🌟 this repo and cite📑 our paper. Thanks for your support!
@article{EVA,
title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale},
author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2211.07636},
year={2022}
}
If you find our open-sourced code & models helpful to your research, please also consider citing📑 this repo.
@misc{EVA_code_models,
author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
title={Code and Models of EVA: Exploring the Limits of Masked Visual Representation Learning at Scale},
year={2022},
howpublished = {\url{https://github.com/baaivision/EVA}}
}
The content of this project itself is licensed under the terms of the LICENSE file.
- For help, issues, or bug reports related to EVA, please open a GitHub Issue. Let's build a better & stronger EVA together :)
- We are hiring at all levels at the BAAI Vision Team, including full-time researchers, engineers, and interns. If you are interested in working with us on foundation models, self-supervised learning, and multimodal learning, please contact Yue Cao (caoyue@baai.ac.cn) and Xinlong Wang (wangxinlong@baai.ac.cn).