[EXIR] Register _clone_dim_order op and map aten.clone #12971
```diff
@@ -15,6 +15,7 @@
 from coremltools.converters.mil.frontend.torch.ops import (
     _get_inputs,
     _get_kwinputs,
+    noop,
     NUM_TO_NUMPY_DTYPE,
     NUM_TO_TORCH_DTYPE,
     split,
```

```diff
@@ -67,6 +68,28 @@ def _to_dim_order_copy(context, node):
     to(context, node)


+@register_torch_op(
```
Contributor: FYI @metascroy

Contributor: This change looks OK to me, but do we have a test that covers it?

Author: I added a test for `_clone_dim_order` here. Is this what you had in mind?

Contributor: Looks good.
```diff
+    torch_alias=[
+        "dim_order_ops::_clone_dim_order",
+        "dim_order_ops._clone_dim_order",
+    ],
+    override=False,
+)
+def _clone_dim_order(context, node):
+    dim_order = _get_kwinputs(context, node, "dim_order", default=[None])[0]
+    node.kwinputs.pop("dim_order")
+
+    # In CoreML, dim_order.val will be a ndarray, so we convert it to a list to check memory format.
+    dim_order = [int(d) for d in dim_order.val]
+    memory_format = get_memory_format(dim_order)
+    assert (
+        memory_format == _torch.contiguous_format
+    ), "Only contiguous memory format is supported in CoreML"
+
+    # Since CoreML only supports contiguous format, no dim_order preservation is needed. Treat this as a no-op clone.
+    noop(context, node)
```
Contributor: @keyprocedure Why not just call `clone(context, node)` here? It will be equivalent to `noop` now, but let's use `clone` if that's what the op is, just in case coremltools updates `clone` to be something other than a noop in the future.

Author: That makes a lot of sense! But I don't see `clone` exposed in coremltools for import; it looks like it's only handled as an alias to `noop`.

Contributor: I guess this is OK then.
```diff

 # https://github.com/apple/coremltools/pull/2558
 @register_torch_op(
     torch_alias=["torchao::dequantize_affine", "torchao.dequantize_affine"],
```
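Relating to the test question in the review thread above, the following is a minimal, hypothetical sketch of what such coverage could look like; the module, input shapes, and import paths (`CoreMLPartitioner`, `to_edge_transform_and_lower`) are assumptions about the surrounding ExecuTorch APIs rather than code from this PR:

```python
# Hypothetical sketch only: checks that a model containing aten.clone can be
# exported and delegated to CoreML once _clone_dim_order is registered.
# Import paths and API details may differ across ExecuTorch versions.
import torch

from executorch.backends.apple.coreml.partition import CoreMLPartitioner
from executorch.exir import to_edge_transform_and_lower


class CloneModel(torch.nn.Module):
    def forward(self, x):
        # clone() is expected to surface as dim_order_ops._clone_dim_order
        # in the edge dialect when dim-order ops are enabled.
        return x.clone() + 1.0


def test_clone_dim_order_delegates_to_coreml():
    example_inputs = (torch.randn(1, 3, 8, 8),)
    exported = torch.export.export(CloneModel(), example_inputs)
    edge = to_edge_transform_and_lower(exported, partitioner=[CoreMLPartitioner()])
    # If the mapping works, the clone node is consumed by the CoreML delegate
    # rather than being rejected during partitioning.
    edge.to_executorch()
```

The point of the sketch is simply that a graph containing `aten.clone` should now partition to CoreML instead of failing on an unmapped `_clone_dim_order` op.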
The following file is new (`@@ -0,0 +1,76 @@`):

```python
# Copyright 2024-2025 Arm Limited and/or its affiliates.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

# pyre-unsafe
import logging

import torch
import torch.fx as fx

from executorch.backends.arm.operator_support.tosa_supported_operators import (
    register_tosa_support_check,
    SupportedTOSAOperatorCheck,
)
from executorch.backends.arm.tosa_specification import TosaSpecification
from executorch.exir.dialects._ops import ops as exir_ops

logger = logging.getLogger(__name__)


@register_tosa_support_check
class CloneDimOrderSupport(SupportedTOSAOperatorCheck):
```
Contributor: Did we remove support to partition the aten.clone op?

Author: We didn't,
```python
    targets = [
        exir_ops.edge.dim_order_ops._clone_dim_order.default,
    ]

    tosa_specs = [
        TosaSpecification.create_from_string("TOSA-1.0+INT"),
        TosaSpecification.create_from_string("TOSA-1.0+FP"),
    ]

    def is_node_tosa_supported(
        self, node: fx.Node, tosa_spec: TosaSpecification
    ) -> bool:
        assert node.target in self.targets

        # Check input type
        assert len(node.all_input_nodes) == 1
        input_val = node.all_input_nodes[0].meta["val"]
        assert isinstance(input_val, torch._subclasses.FakeTensor)
        input_dtype = input_val.dtype

        # Check output type
        output_val = node.meta["val"]
        assert isinstance(output_val, torch._subclasses.FakeTensor)
        if output_val.dtype != input_dtype:
            self.reporter.report_reject(
                node,
                f"Input dtype {input_val.dtype} does not match {output_val.dtype}.",
            )
            return False

        # Check memory format
        if "memory_format" in node.kwargs:
            if node.kwargs["memory_format"] in (torch.preserve_format,):
                self.reporter.report_reject(
                    node,
                    f"Argument 'memory_format' is not supported for "
                    f"{node.target} right now.",
                )
                return False

        # Check dim_order
        if "dim_order" in node.kwargs:
            dim_order = node.kwargs["dim_order"]
            # pyre-ignore[6]
            if dim_order != list(range(len(dim_order))):  # type: ignore[arg-type]
                self.reporter.report_reject(
                    node,
                    f"Argument {dim_order=} is not supported for "
                    f"{node.target} right now.",
                )
                return False

        return True
```
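To make the memory-format and dim_order checks concrete, here is a small illustration (not part of the PR); it assumes `torch.Tensor.dim_order()` is available, as in recent PyTorch releases:

```python
import torch


def is_identity_dim_order(dim_order):
    # Only the identity permutation (contiguous layout) passes the dim_order
    # check in is_node_tosa_supported and the contiguous assert in the CoreML
    # handler above.
    return list(dim_order) == list(range(len(dim_order)))


x = torch.randn(1, 3, 8, 8)                  # contiguous NCHW tensor
print(x.dim_order())                         # (0, 1, 2, 3)
print(is_identity_dim_order(x.dim_order()))  # True -> supported

y = x.to(memory_format=torch.channels_last)  # NHWC (channels_last) layout
print(y.dim_order())                         # (0, 2, 3, 1)
print(is_identity_dim_order(y.dim_order()))  # False -> rejected
```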
```diff
@@ -38,7 +38,7 @@
 ]
 linear_residual_exir_op: list[str] = [
     "executorch_exir_dialects_edge__ops_aten_gelu_default",
-    "executorch_exir_dialects_edge__ops_aten_clone_default",
+    "executorch_exir_dialects_edge__ops_dim_order_ops__clone_dim_order_default",
```
Contributor: Removing aten.clone here because the corresponding test is dim-order only?

Author: Since ARM's pass manager doesn't seem to pass the

Contributor: Yeah, my point is that we should make our update consistent with Arm's target. That is, if we expect dim-order-only in the Arm backend, there should be no
However, if we should support dim_order both on and off, we may test both cases for the clone operator, though I'm OK with splitting this into several PRs for better structure.

Author: That makes sense, thanks for breaking down the options.

Contributor: FYI see 135e875. Also cc @oscarandersson8218 regarding what to do if there is no aten.clone support.

Author: @digantdesai thanks for looking into this and sharing the
```diff
     "executorch_exir_dialects_edge__ops_aten_linear_default",
     "executorch_exir_dialects_edge__ops_aten_add_Tensor",
 ]
```
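As a side note on the dim-order on/off discussion above, the updated expectation string corresponds to exporting with dim-order ops enabled; below is a hedged sketch of how that mode could be selected, assuming ExecuTorch's `EdgeCompileConfig` and its `_skip_dim_order` flag behave as in current versions:

```python
# Hedged sketch: how aten.clone appears in the edge dialect depending on the
# dim-order setting. Flag names and defaults are assumptions about
# executorch.exir and may differ between versions.
import torch

from executorch.exir import EdgeCompileConfig, to_edge


class Clone(torch.nn.Module):
    def forward(self, x):
        return x.clone()


ep = torch.export.export(Clone(), (torch.randn(2, 3),))

# With dim-order ops enabled, aten.clone is expected to be rewritten to
# dim_order_ops._clone_dim_order, which is what the updated expectation string
# above refers to. Passing _skip_dim_order=True instead would keep aten.clone.
edge = to_edge(ep, compile_config=EdgeCompileConfig(_skip_dim_order=False))

print(
    [
        str(node.target)
        for node in edge.exported_program().graph.nodes
        if node.op == "call_function"
    ]
)
```

Testing both settings, as suggested above, would amount to running the same export with `_skip_dim_order=True` and checking for the plain `aten.clone` edge op instead.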
Contributor: Did we remove support to partition the aten.clone op?

Author: I didn't explicitly remove support for aten.clone. From my understanding, this change only adds support for `_clone_dim_order`; aten.clone is still supported and handled by the noop handler in coremltools.