
[BUG] Training flag in convert_sync_batchnorm() #2422

Closed
@collinmccarthy

Description


Maybe "bug" is too harsh, but should we be setting module_output.training = module.training in convert_sync_batchnorm()?

This is what torch.nn.SyncBatchNorm does now too, so personally I think we should.

I ran into some issues with mmdetection when this wasn't being set: a newly constructed module defaults to training=True, so any BN layer that was in eval mode before conversion silently flips back to training mode afterward. That could be mitigated by changing how/when model.eval() is called, but I still think setting the module_output.training flag is correct.
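For concreteness, here is a minimal sketch of the conversion logic, modeled on torch.nn.SyncBatchNorm.convert_sync_batchnorm in recent PyTorch (not the actual code in this repo); the key line is the training-flag assignment near the end:

```python
import torch
import torch.nn as nn

def convert_sync_batchnorm(module: nn.Module, process_group=None) -> nn.Module:
    """Recursively replace BatchNorm*d layers with SyncBatchNorm,
    copying parameters, running stats, and the training flag."""
    module_output = module
    if isinstance(module, nn.modules.batchnorm._BatchNorm):
        module_output = nn.SyncBatchNorm(
            module.num_features,
            module.eps,
            module.momentum,
            module.affine,
            module.track_running_stats,
            process_group,
        )
        if module.affine:
            with torch.no_grad():
                module_output.weight = module.weight
                module_output.bias = module.bias
        module_output.running_mean = module.running_mean
        module_output.running_var = module.running_var
        module_output.num_batches_tracked = module.num_batches_tracked
        # The fix discussed in this issue: preserve train/eval mode so a
        # layer frozen via .eval() stays frozen after conversion.
        module_output.training = module.training
    for name, child in module.named_children():
        module_output.add_module(name, convert_sync_batchnorm(child, process_group))
    return module_output
```

With that assignment in place, calling model.eval() before conversion leaves the converted SyncBatchNorm layers in eval mode, matching the original model's state.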

Thoughts?


Labels: bug (Something isn't working)
