Closed
🚀 The feature
Add an argument to torchvision.models constructors that makes loading imagenet22k pretrained weights as painless as loading the imagenet1k ones.
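A minimal sketch of what the requested API could look like (all names and URLs here are illustrative placeholders, not actual torchvision identifiers): a per-architecture registry keyed by pretraining dataset, so callers request imagenet22k weights the same way they request imagenet1k ones.

```python
# Hypothetical weight registry; the URLs are placeholders, not real checkpoints.
WEIGHT_URLS = {
    "resnet50": {
        "imagenet1k": "https://example.com/resnet50_in1k.pth",
        "imagenet22k": "https://example.com/resnet50_in22k.pth",
    },
}

def resolve_weights(arch: str, pretrained_on: str = "imagenet1k") -> str:
    """Return the checkpoint URL for the requested pretraining dataset.

    Raises a ValueError listing the available options when the requested
    dataset has no published weights for the architecture.
    """
    try:
        return WEIGHT_URLS[arch][pretrained_on]
    except KeyError:
        available = ", ".join(WEIGHT_URLS.get(arch, {}))
        raise ValueError(
            f"No {pretrained_on!r} weights for {arch!r}; available: {available}"
        )
```

With something like this in place, switching to larger pretraining data would be a one-argument change, e.g. `resolve_weights("resnet50", "imagenet22k")`, with a clear error when 22k weights are not published for a given model.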
Motivation, pitch
Since most people who use pretrained models will simply grab a torchvision.models model, it would be great to have an easy way to request imagenet22k (or at least larger-than-1k) weights when they are available. Features from models trained with more classes do much better on downstream tasks, and these models also fine-tune better. I suspect a large portion of the user base is leaving performance on the table by sticking with the default imagenet1k weights.
Alternatives
No response
Additional context
No response
cc @datumbox