EVA: Exploring the Limits of Masked Visual Representation Learning at Scale (https://arxiv.org/abs/2211.07636)

We launch EVA, a vision-centric foundation model to Explore the limits of Visual representation at scAle using only publicly accessible data and academic resources. EVA is a vanilla ViT pre-trained to reconstruct the masked-out image-text aligned vision features (i.e., CLIP features) conditioned on visible image patches. Via this pretext task, we efficiently scale up EVA to one billion parameters and set new records on a broad range of representative vision downstream tasks.
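In PyTorch-style pseudocode, one training step of this pretext task might look like the sketch below. This is a minimal illustration, not the repository's actual training code: the encoder interfaces, the masking convention, and the negative-cosine loss form are assumptions for clarity.

```python
import torch
import torch.nn.functional as F

def eva_pretrain_step(student_vit, clip_teacher, images, mask):
    """One masked-image-modeling step: regress CLIP vision features
    at the masked patch positions.

    student_vit  -- a plain ViT that replaces masked patches with a
                    learnable [MASK] token and outputs one feature per patch
    clip_teacher -- a frozen CLIP image encoder producing per-patch targets
    images       -- [B, 3, H, W] input batch
    mask         -- [B, N] boolean, True at masked patch positions
    """
    with torch.no_grad():
        targets = clip_teacher(images)      # [B, N, D], frozen teacher

    preds = student_vit(images, mask)       # [B, N, D]

    # Regress the masked positions only; the exact loss form here
    # (negative cosine similarity) is an illustrative assumption.
    p, t = preds[mask], targets[mask]       # [M, D] each
    loss = 1.0 - F.cosine_similarity(p, t, dim=-1).mean()
    return loss
```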

EVA is the first open-sourced billion-scale vision foundation model that achieves state-of-the-art performance on a broad range of downstream tasks.

News

$\color{red}{\text{All the code and dozens of state-of-the-art billion-scale models are open-sourced!}}$

Catalog

All EVA model checkpoints are now available at 🤗 Hugging Face Models and BAAI ModelHub (EVA & EVA-CLIP). Try them out!
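For instance, a checkpoint can be pulled from the Hugging Face Hub with `huggingface_hub`. The `repo_id` and `filename` below are placeholders, not verified names; check the model cards linked above for the exact values.

```python
import torch
from huggingface_hub import hf_hub_download

# NOTE: repo_id and filename are illustrative placeholders -- consult
# the Hugging Face model cards for the actual repository and file names.
ckpt_path = hf_hub_download(repo_id="BAAI/EVA", filename="eva_psz14.pt")
state_dict = torch.load(ckpt_path, map_location="cpu")
```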

Summary of EVA's performance

image & video classification

| model | #param. | IN-1K, e2e ft | IN-1K, linear | IN-1K, zero-shot | 12 avg. zero-shot | K400 | K600 | K700 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EVA or EVA-CLIP | 1.0B | 89.7 | 86.5 | 78.5 | 75.7 | 89.7 | 89.8 | 82.9 |

(The IN-1K and 12-benchmark-average columns are image classification; K400/K600/K700 are Kinetics-400/600/700 video classification.)
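The zero-shot columns follow the usual CLIP recipe: embed prompted class names with the text encoder, embed the image, and pick the class with the highest cosine similarity. A minimal sketch, assuming a generic CLIP-style encoder pair rather than the repository's own evaluation code:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(image_encoder, text_encoder, tokenizer, image, class_names):
    """Standard CLIP-style zero-shot classification sketch.

    The encoders and tokenizer are stand-ins for an EVA-CLIP model
    loaded from the checkpoints listed in the Catalog section.
    """
    prompts = tokenizer([f"a photo of a {c}" for c in class_names])
    text_emb = F.normalize(text_encoder(prompts), dim=-1)   # [C, D]
    img_emb = F.normalize(image_encoder(image), dim=-1)     # [1, D]
    sims = img_emb @ text_emb.T                             # cosine similarities
    return class_names[sims.argmax(dim=-1).item()]
```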

object detection & segmentation

| model | #param. | COCO det (test) | COCO det (val) | COCO ins seg (test) | COCO ins seg (val) | LVIS det | LVIS ins seg | COCO-Stuff sem seg | ADE20K sem seg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EVA | 1.0B | 64.7 | 64.5 | 55.5 | 55.0 | 62.2 | 55.0 | 53.4 | 62.3 |

Citation

If you find our work helpful, please star🌟 this repo and cite📑 our paper. Thanks for your support!

@article{EVA,
  title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale},
  author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
  journal={arXiv preprint arXiv:2211.07636},
  year={2022}
}

If you find our open-sourced code & models helpful to your research, please also consider citing📑 this repo.

@misc{EVA_code_models,
  author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
  title={Code and Models of EVA: Exploring the Limits of Masked Visual Representation Learning at Scale},
  year={2022},
  howpublished={\url{https://github.com/baaivision/EVA}}
}

License

The content of this project itself is licensed under the terms of the LICENSE file in this repository.

Contact

  • For help and issues associated with EVA, or reporting a bug, please open a GitHub Issue. Let's build a better & stronger EVA together :)

  • We are hiring at all levels at BAAI Vision Team, including full-time researchers, engineers, and interns. If you are interested in working with us on foundation models, self-supervised learning, and multimodal learning, please contact Yue Cao (caoyue@baai.ac.cn) and Xinlong Wang (wangxinlong@baai.ac.cn).

