Description
Maybe "bug" is too harsh, but should we be setting module_output.training = module.training
in convert_sync_batchnorm()?
This is what torch.nn.SyncBatchNorm does now too, so personally I think we should.
I ran into some issues with mmdetection
when this wasn't being set, but of course that could be mitigated by changing how/when model.eval()
is called. Still, I think setting the module_output.training
flag is correct.
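For concreteness, here is a minimal sketch of what I have in mind, modeled on PyTorch's own torch.nn.SyncBatchNorm.convert_sync_batchnorm (the standalone convert_sync_batchnorm name below is just a stand-in for this library's actual conversion helper):

```python
import torch
import torch.nn as nn

def convert_sync_batchnorm(module, process_group=None):
    # Recursively replace BatchNorm layers with SyncBatchNorm, copying
    # parameters and buffers from the original module.
    module_output = module
    if isinstance(module, nn.modules.batchnorm._BatchNorm):
        module_output = nn.SyncBatchNorm(
            module.num_features,
            module.eps,
            module.momentum,
            module.affine,
            module.track_running_stats,
            process_group,
        )
        if module.affine:
            with torch.no_grad():
                module_output.weight = module.weight
                module_output.bias = module.bias
        module_output.running_mean = module.running_mean
        module_output.running_var = module.running_var
        module_output.num_batches_tracked = module.num_batches_tracked
        # The proposed fix: preserve the train/eval state of the original
        # module instead of leaving the replacement in its default
        # (training=True) state.
        module_output.training = module.training
    for name, child in module.named_children():
        module_output.add_module(
            name, convert_sync_batchnorm(child, process_group)
        )
    del module
    return module_output
```

Without the module_output.training line, converting a model that is already in eval mode silently puts the new SyncBatchNorm layers back into training mode.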
Thoughts?