
Conversation

@gprateek93
Collaborator

This PR adds two commits:

  1. Adds support for the `aten.native_layer_norm_backward` operation. It also adds support for matching constant bools stored in a boolean list.
  2. Fixes `aten.native_layer_norm`. Previously, this operation did not compute the correct shapes for the mean and the inverted standard deviation (STD). This commit corrects that, and adds new helper functions to calculate the inverted STD and to broadcast a given input with the help of a broadcast mask.
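As a rough illustration of the shape fix (this is a NumPy sketch, not the actual torch-mlir lowering; `native_layer_norm` and `broadcast_with_mask` below are illustrative stand-ins): the mean and inverted STD keep the batch dimensions and have size 1 on the normalized dimensions, and a broadcast mask marks which dimensions must be expanded back to the input shape.

```python
import numpy as np

def native_layer_norm(x, normalized_shape, weight, bias, eps=1e-5):
    # Normalization happens over the trailing len(normalized_shape) dims.
    axes = tuple(range(x.ndim - len(normalized_shape), x.ndim))
    # keepdims=True gives mean/rstd size 1 on the normalized dims, so they
    # broadcast cleanly against the input.
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)  # biased variance, as in aten
    rstd = 1.0 / np.sqrt(var + eps)        # inverted STD
    out = (x - mean) * rstd * weight + bias
    return out, mean, rstd

def broadcast_with_mask(t, input_shape, mask):
    # mask[i] == True means dim i was reduced and must be re-expanded.
    shape = [1 if m else s for s, m in zip(input_shape, mask)]
    return np.broadcast_to(t.reshape(shape), input_shape)

x = np.random.randn(2, 3, 4)
out, mean, rstd = native_layer_norm(x, (4,), np.ones(4), np.zeros(4))
# mean and rstd have shape (2, 3, 1): batch dims kept, normalized dim -> 1.
expanded = broadcast_with_mask(mean, x.shape, [False, False, True])
```

The broadcast mask is what lets the backward pass map the reduced mean/rstd tensors back onto the full input shape.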

Signed-off-by: Prateek Gupta <prateek@nod-labs.com>

This commit fixes the lowering of the `aten.native_layer_norm` operation.
Previously, this operation did not compute the correct shapes for the
mean and the inverted STD. This has been corrected in this commit.
Some new helper functions are added to calculate the inverted STD
and to broadcast a given input with the help of a broadcast mask.

Signed-off-by: Prateek Gupta <prateek@nod-labs.com>
This commit adds support for the `aten.native_layer_norm_backward`
operation. It also adds support for matching constant bools stored in a
boolean list.

Signed-off-by: Prateek Gupta <prateek@nod-labs.com>
Contributor

@cathyzhyi cathyzhyi left a comment


I would recommend implementing this in DecomposeComplexOps if possible. Math calculations in DecomposeComplexOps are easier to implement, and more readable and maintainable.

@gprateek93
Collaborator Author

@cathyzhyi I have added a PR for this decomposition in functorch: pytorch/functorch#525
So I guess having the decomposition here also will duplicate it.

@cathyzhyi
Contributor


@gprateek93 After the decomposition is merged, can you add an e2e test to make sure aten::native_layer_norm_backward works?

@gprateek93
Collaborator Author

gprateek93 commented Mar 30, 2022

The functorch PR for aten.native_layer_norm_backward is merged. We are waiting for the latest functorch to be integrated into torch-mlir. Once that is done, we can safely close this PR.

@henrytwo
Member

henrytwo commented May 18, 2022

Hey, are there any updates on this PR? We seem to have support for native_layer_norm, but it would be great to have native_layer_norm_backward working too.

Ideally we'd like to have native_layer_norm_backward stay as a high level op, rather than have it be decomposed.

cc: @antoniojkim @ke1337

@henrytwo
Member

henrytwo commented May 30, 2022

FYI: I'm working on a new PR that will borrow some code from this one (with credits and reference), since I need support for aten.native_layer_norm_backward soon. #888

qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request Oct 3, 2022
(trivial drive-by fix)

Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Co-authored-by: Kevin O'Brien <caomhin@us.ibm.com>
@xgupta
Contributor

xgupta commented Nov 26, 2022

This PR can be closed; it seems #888 has been merged, adding the aten.native_layer_norm_backward op.

@pashu123 pashu123 closed this Nov 26, 2022
