Conversation

@Shukla-Gaurav (Collaborator)
- This commit adds support for the `aten.native_batch_norm` operation.
- The current implementation only supports the inference mode of the
  `aten.native_batch_norm` op.

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
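For context, inference-mode batch norm normalizes each channel with the stored running statistics rather than statistics computed from the current batch. A minimal NumPy sketch of that computation (the function name and the NCHW layout assumption are illustrative, not the op's actual lowering in this PR):

```python
import numpy as np

def batch_norm_inference(x, weight, bias, running_mean, running_var, eps=1e-5):
    # Inference mode: normalize with the stored running statistics
    # (not batch statistics), then scale and shift per channel.
    # Assumes channels-first (NCHW) input with the channel axis at 1.
    shape = (1, -1) + (1,) * (x.ndim - 2)  # broadcast over N (and H, W)
    inv_std = 1.0 / np.sqrt(running_var + eps)
    return ((x - running_mean.reshape(shape)) * inv_std.reshape(shape)
            * weight.reshape(shape) + bias.reshape(shape))
```

Because no batch statistics are computed and no running statistics are updated, inference mode is a pure elementwise affine transform, which is why it is the easier case to support first.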

@cathyzhyi (Contributor) left a comment:

Looks good! I just have one question and one nit.

@ramiro050 (Collaborator) left a comment:

Thanks for the comments in the decomposition. They made things very easy to follow. I just have one small comment.

@Shukla-Gaurav force-pushed the gaurav/native_batch_norm branch from 5a36240 to b6338f6 on February 7, 2022 at 21:07.
@byronyi (Contributor) commented on Apr 29, 2022:

@Shukla-Gaurav any plan to support training mode for batch norm?

@Shukla-Gaurav (Collaborator, Author)

@byronyi I can take a look into it. There is some issue with the training mode, but I will post a draft for it first. This may take a couple of days.

@cathyzhyi (Contributor)

@Shukla-Gaurav FYI, there is an issue #663 regarding training mode, and Sean posted some suggestions in that issue on how to proceed.
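For context on what training mode adds beyond the inference path: it normalizes with the current batch's statistics and mutates the running statistics. A minimal NumPy sketch under PyTorch's conventions (biased variance for normalization, unbiased variance for the running estimate); the function name and in-place update style are illustrative and not taken from this PR:

```python
import numpy as np

def batch_norm_training(x, weight, bias, running_mean, running_var,
                        momentum=0.1, eps=1e-5):
    # Training mode: normalize with the current batch's statistics and
    # update the running statistics in place (PyTorch-style momentum,
    # where new = (1 - momentum) * old + momentum * batch_stat).
    axes = (0,) + tuple(range(2, x.ndim))  # reduce over N (and H, W)
    mean = x.mean(axis=axes)
    var = x.var(axis=axes)                 # biased variance, used to normalize
    n = x.size // x.shape[1]               # elements per channel
    running_mean[:] = (1 - momentum) * running_mean + momentum * mean
    running_var[:] = (1 - momentum) * running_var + momentum * var * n / (n - 1)
    shape = (1, -1) + (1,) * (x.ndim - 2)
    x_hat = (x - mean.reshape(shape)) / np.sqrt(var.reshape(shape) + eps)
    return x_hat * weight.reshape(shape) + bias.reshape(shape)
```

The in-place update of the running statistics is one reason training mode is harder to lower: the op has side effects on its buffer operands, unlike the purely functional inference path.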
