Description
Recently, we introduced Tied-Augment, a simple framework that combines self-supervised learning and supervised learning by making forward passes on two augmented views of the data with tied (shared) weights. In addition to the classification loss, it adds a similarity term to enforce invariance between the features of the augmented views. We found that our framework improves the effectiveness of both simple flips-and-crops (Crop-Flip) and aggressive augmentations (RandAugment), even for few-epoch training. Because the effect of data augmentation is amplified, sample efficiency increases as well.
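To make the idea concrete, here is a minimal PyTorch sketch of a Tied-Augment-style objective: both augmented views pass through the same (weight-tied) model, and the loss combines cross-entropy on each view with a feature-similarity term. The toy backbone, the MSE similarity measure, and the `sim_weight` coefficient are illustrative assumptions, not the exact formulation from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedAugmentModel(nn.Module):
    # Toy backbone + linear head; stands in for any timm model
    # that can return both features and logits.
    def __init__(self, in_dim=32, feat_dim=16, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = self.backbone(x)
        return feats, self.head(feats)

def tied_augment_loss(model, view1, view2, labels, sim_weight=1.0):
    # Both views go through the same model, so the weights are tied.
    f1, logits1 = model(view1)
    f2, logits2 = model(view2)
    # Supervised classification loss on both augmented views.
    ce = F.cross_entropy(logits1, labels) + F.cross_entropy(logits2, labels)
    # Similarity term enforcing invariance between the view features
    # (MSE here; the paper's choice of similarity may differ).
    sim = F.mse_loss(f1, f2)
    return ce + sim_weight * sim
```

In a training loop, `view1` and `view2` would be two independent augmentations (e.g. RandAugment) of the same batch, and the returned loss is backpropagated as usual.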
I believe Tied-Augment would be a nice addition to the timm training script. It can significantly improve mixup/RandAugment results (77.6% → 79.6%) at marginal extra cost. Here is my reference implementation.