Conversation
💊 CI failures summary and remediations. As of commit ecc598e (more details on the Dr. CI page): 💚 💚 Looks good so far! There are no failures yet. 💚 💚 This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
datumbox
left a comment
@normster I would love your input on the implementation since you are one of the key authors. Your feedback is welcome in all parts of the PR, but I specifically flagged a couple of places within the implementation to get your input below.
Thanks in advance!
In later data augmentation papers such as PixMix, we used all of these PIL augmentations. Consequently, I think the PyTorch vision AugMix implementation could use these too.
@hendrycks Thanks for the input! In the current implementation, we support this and make it optional to use the extra 4 transforms by setting a dedicated constructor flag. Let me know if you have any other feedback on the implementation. The main changes are in torchvision/transforms/autoaugment.py.
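For reference, here is a minimal sketch of what toggling the extra operations could look like from the user's side. The flag name `all_ops` is an assumption for illustration and may differ from the final API:

```python
import torch
from torchvision import transforms

# Assumed flag name `all_ops`: a boolean constructor argument that toggles
# the extra 4 PIL-based operations on top of the original paper's set.
augmix_full = transforms.AugMix(all_ops=True)    # include the extra 4 transforms
augmix_paper = transforms.AugMix(all_ops=False)  # stick to the original paper's op set

img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)
out = augmix_full(img)
```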
@normster @hendrycks We intend to merge soon. If you have any thoughts on the comments above, let us know. Thanks!
normster
left a comment
This looks great! I had a small question about per-image/per-batch processing that I noted inline, but what you have should work fine in practice.
@normster Thanks a lot for the feedback. In relation to the sampling of weights, I will benchmark your proposal. Out of curiosity, did you do any experiments on sampling per image vs. per batch?
I spoke offline with Norman and he mentioned they didn't do experiments with using the same weight for the entire batch. So to align with the official implementation, I just updated the code to sample one weight per image in the batch. |
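As an illustration of the per-image scheme (a minimal sketch, not the torchvision code), the chain-mixing weights and the final blending weight can be drawn once per image with standard distributions:

```python
import torch
from torch.distributions import Beta, Dirichlet

def sample_augmix_weights(batch_size: int, mixture_width: int = 3, alpha: float = 1.0):
    # One Dirichlet draw per image: weights used to combine the augmented chains.
    chain_weights = Dirichlet(torch.full((mixture_width,), alpha)).sample((batch_size,))
    # One Beta draw per image: weight used to blend the mixed image with the original.
    m = Beta(alpha, alpha).sample((batch_size,))
    return chain_weights, m

chain_weights, m = sample_augmix_weights(batch_size=8)
print(chain_weights.shape, m.shape)  # torch.Size([8, 3]) torch.Size([8])
```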
```python
fill = self.fill
if isinstance(orig_img, Tensor):
    img = orig_img
    # Normalize `fill` to a list of floats, one value per image channel.
    if isinstance(fill, (int, float)):
        fill = [float(fill)] * F.get_image_num_channels(img)
    elif fill is not None:
        fill = [float(f) for f in fill]
else:
    # PIL input: convert to a tensor so the ops below only deal with tensors.
    img = self._pil_to_tensor(orig_img)
```
Later we may want to refactor this part of the code, since it could be applicable to all the other augmentation strategies...
Certainly. We would also need to consider how videos can be handled here.
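For instance, such a refactoring could pull the snippet above into a shared helper roughly like the following. The helper name and exact signature are hypothetical, not part of torchvision:

```python
from torch import Tensor
from torchvision.transforms import functional as F

def _normalize_inputs(img, fill):
    """Hypothetical shared helper: normalize `fill` to a per-channel list of floats
    and convert PIL images to tensors, so AutoAugment, RandAugment, TrivialAugment
    and AugMix could all reuse the same logic."""
    if isinstance(img, Tensor):
        if isinstance(fill, (int, float)):
            fill = [float(fill)] * F.get_image_num_channels(img)
        elif fill is not None:
            fill = [float(f) for f in fill]
    else:
        # PIL input: convert to a uint8 tensor so downstream ops only see tensors.
        img = F.pil_to_tensor(img)
    return img, fill
```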
Force-pushed from 79ac4bf to e4b62be
Co-authored-by: vfdev <vfdev.5@gmail.com>
Changing the default severity value to get by default the same strength as RandAugment.
Though the official repo used a default severity of 1, I've decided to change our default value to 3. This aligns the "intensity" of the transform with other methods such as RandAugment.
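As a rough illustration of that alignment (defaults assumed from this discussion): severity 3 on AugMix's 1-10 scale sits at about the same relative strength as RandAugment's default magnitude of 9 on its 0-30 scale.

```python
from torchvision import transforms

# Defaults assumed from the discussion above, for comparison only.
augmix = transforms.AugMix(severity=3)           # new default discussed here (1-10 scale)
randaugment = transforms.RandAugment(magnitude=9)  # default magnitude (0-30 scale)
```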
Summary:

* Adding basic augmix implementation.
* Finish the implementation.
* Add tests and documentation.
* Fix tests.
* Simplify code.
* Speed optimizations.
* Per image weights instead of per batch.
* Fix tests.
* Update torchvision/transforms/autoaugment.py
* Changing the default severity value to get by default the same strength as RandAugment.

Reviewed By: jdsgomes
Differential Revision: D34475319
fbshipit-source-id: 4637ad23deace03cf1f96b5c19a310c360f179d5
Co-authored-by: vfdev <vfdev.5@gmail.com>
Adding the AugMix data augmentation method. Inspired by the work on the official repo.
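A minimal usage sketch, assuming the new transform ends up exposed as `transforms.AugMix` and is applied to PIL images before tensor conversion and normalization in a typical training pipeline:

```python
import torch
from torchvision import transforms

# AugMix slots into a standard training pipeline like any other transform;
# it operates on PIL images (or uint8 tensors).
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.AugMix(),
    transforms.PILToTensor(),
    transforms.ConvertImageDtype(torch.float),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```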