👻 Self-Supervised Learning
PyTorch code for training Vision Transformers with the self-supervised learning method DINO
PyTorch implementation of MoCo v3: https://arxiv.org/abs/2104.02057
PyTorch implementation of MoCo: https://arxiv.org/abs/1911.05722
[ICLR'23 Spotlight🔥] The first successful BERT/MAE-style pretraining on any convolutional network; PyTorch impl. of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling"
Parametric Contrastive Learning (ICCV2021) & GPaCo (TPAMI 2023)
EVA Series: Visual Representation Fantasies from BAAI
(CVPR 2022) PyTorch implementation of "Self-supervised transformers for unsupervised object discovery using normalized cut"
[ICLR'23] Effective Self-supervised Pre-training on Low-compute networks without Distillation
MLCD & UNICOM: Large-Scale Visual Representation Model
A general representation model across vision, audio, and language modalities. Paper: "ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities"
FFCV-SSL: Fast Forward Computer Vision for Self-Supervised Learning.
Reproduces the results of the paper "EMP-SSL: Towards Self-Supervised Learning in One Training Epoch".
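Several entries above (MoCo, MoCo v3, PaCo) are contrastive methods whose training objective is an InfoNCE-style loss: pull a query toward its positive key and push it away from a queue of negatives. As a rough illustration only, here is a minimal NumPy sketch of that loss; the function name, shapes, and default temperature are my own choices for the example, not taken from any of the listed repositories.

```python
import numpy as np

def info_nce_loss(q, k, queue, temperature=0.07):
    """InfoNCE loss sketch in the style of MoCo (illustrative, not the repo code).

    q:     (N, D) query features
    k:     (N, D) positive key features (one per query)
    queue: (K, D) negative key features
    All feature vectors are assumed L2-normalized.
    """
    # Positive logits: similarity of each query with its own key -> (N, 1)
    l_pos = np.sum(q * k, axis=1, keepdims=True)
    # Negative logits: similarity of each query with every queued key -> (N, K)
    l_neg = q @ queue.T
    # Concatenate so the positive sits at column 0, then apply temperature
    logits = np.concatenate([l_pos, l_neg], axis=1) / temperature
    # Cross-entropy with the positive as the target class (index 0),
    # computed with the usual max-subtraction for numerical stability
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[:, 0].mean()
```

In the real methods the queries and keys come from two encoders (the key encoder updated as a momentum average of the query encoder), and the queue is refreshed with each batch's keys; the sketch only shows the loss computed on given features.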