
More API Compat Fixes #64

Merged · 1 commit merged into main on Jul 14, 2023

Conversation

@123epsilon (Contributor) commented on Jul 13, 2023

This contributes the following:

  • Implement torch.tensor.add.
  • Implement torch.tensor.norm. This resolves the expected test case's API issue (NormalizeModule_basic), but the case still fails at the IR level: an additional pass in torch-mlir lowers torch.aten.norm.ScalarOpt_dim to torch.aten.linalg_vector_norm using the operand value, and since we match the raw IR I added this case to xfails; it is functionally the same.
  • Add the permute pybind back to the autogeneration script and address some failing permute cases caused by the precedence of the *args signature. Adding back the original pybind is still necessary to handle the case where dims is passed by keyword (a small illustration follows the error output below).
  • Change the base class of the layout and memory_format enum classes to IntEnum so that they can be readily converted to integers where necessary (see the sketch after the error output below). This resolves errors of the type:
1 : ElementwiseCloneChannelsLastMemoryFormatModule_basic
 clone(): incompatible function arguments. The following argument types are supported:
     1. (self: pi.mlir._mlir_libs._pi_mlir.Tensor, memory_format: pi.mlir._mlir_libs._pi_mlir.AnyTorchOptionalIntValue = None, *, loc: mlir.ir.Location = None, ip: mlir.ir.InsertionPoint = None) -> pi.mlir._mlir_libs._pi_mlir.Tensor

 Invoked with: Tensor(<block argument> of type '!torch.tensor' at index: 0); kwargs: memory_format=<memory_format.channels_last: 2>
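A minimal sketch of why the IntEnum change helps (illustrative names and a stub function, not the actual pi bindings): a plain Enum member is not an int and is rejected where a binding expects an integer-valued argument, while an IntEnum member is an int and converts implicitly.

```python
from enum import Enum, IntEnum

class memory_format_plain(Enum):   # old style: members are not ints
    channels_last = 2

class memory_format(IntEnum):      # new style: members are ints
    channels_last = 2

def clone_stub(memory_format: int) -> int:
    # Stands in for a binding that expects an integer-valued memory_format.
    return int(memory_format)

print(isinstance(memory_format.channels_last, int))       # True
print(clone_stub(memory_format.channels_last))             # 2
print(isinstance(memory_format_plain.channels_last, int))  # False -- this is the
# shape of the "incompatible function arguments" failure shown above.
```

And a minimal sketch of the *args precedence issue with permute (a hypothetical wrapper, not the actual generated binding): a varargs-only signature cannot receive dims as a keyword argument, which is why the original list-taking pybind is still needed.

```python
def permute_varargs(*dims):
    # Accepts only positional dims, e.g. t.permute(1, 0).
    return dims

permute_varargs(1, 0)           # ok: (1, 0)
# permute_varargs(dims=(1, 0))  # TypeError: unexpected keyword argument 'dims'
```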

While this last change resolves a number of test cases, it is worth noting that for one case, ToDtypeLayoutStridedModule_basic, the API failure is resolved but the test still fails with:

Failure while executing pass pipeline:
error: unknown: found an op that was marked as backend illegal
note: unknown: see current operation: %4 = "torch.aten.to.dtype_layout"(%arg0, %3, %2, %1, %1, %0, %0, %1) : (!torch.vtensor<[?,?],f32>, !torch.int, !torch.int, !torch.none, !torch.none, !torch.bool, !torch.bool, !torch.none) -> !torch.vtensor<[?,?],f64>
note: unknown: this is likely due to DecomposeComplexOps being unable to decompose this op

emitted from here

I couldn't find a decomposition for this op (aten.to.dtype_layout) in DecomposeComplexOps.cpp, so this may be something missing from torch-mlir. If it's better to remove this part from the PR, that's fine too; it is only relevant to changing the layout enum. @makslevental
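For reference, here is a hedged sketch (not the actual torch-mlir e2e test) of the kind of call that selects the aten::to.dtype_layout overload reported as backend-illegal above: passing both dtype and layout to Tensor.to.

```python
import torch

class ToDtypeLayout(torch.nn.Module):
    def forward(self, x):
        # Supplying dtype together with layout selects the aten::to.dtype_layout
        # overload, matching the f32 -> f64 conversion shown in the IR above.
        return x.to(dtype=torch.float64, layout=torch.strided)

scripted = torch.jit.script(ToDtypeLayout())
print(scripted.graph)  # expected to contain aten::to.dtype_layout
```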

@brucekimrokcmu (Contributor) left a comment

LGTM!

Comment on lines +191 to 194
class layout(IntEnum):
    strided = 1
    sparse_coo = 2
    sparse_csr = 3
A Collaborator left a comment containing an embedded image (not captured in this export).

@123epsilon merged commit b1becd0 into main on Jul 14, 2023. 5 checks passed.