Change the elementwise broadcasting contract from graph to kernel
Summary:
Currently, a graph-level pass handles limited broadcasting of elementwise ops when the input tensors are not the same size.

With this diff, we move that responsibility down to the kernels, which is how ET and the portable ops handle it. For now, only `add`, `sub`, `mul`, and `div` are affected, but more ops will follow.

We retain separate implementations for the reference kernels, because we want to avoid linking the portable ops directly, which takes a very long time at compile time. The reference kernels can also use a much smaller set of types (basically only `float`).

This change lets us remove a hack in the RNNT Joiner and run it natively. Performance takes a significant hit, which will be fixed by getting broadcast-friendly kernels from Cadence.

Finally, we remove the binop tests in `test_aten_ops.py`, which also used unusual types and had been on the chopping block for a while.

Differential Revision: D58207691
mcremon-meta authored and facebook-github-bot committed Jun 7, 2024
1 parent 6554fa5 commit d52c39f
2 changes: 1 addition & 1 deletion kernels/portable/cpu/util/targets.bzl
@@ -47,7 +47,7 @@ def define_common_targets():
"//executorch/runtime/kernel:kernel_includes",
"//executorch/runtime/core/exec_aten/util:tensor_util",
],
-        visibility = ["//executorch/kernels/portable/cpu/...", "//executorch/kernels/optimized/cpu/..."],
+        visibility = ["//executorch/kernels/portable/cpu/...", "//executorch/kernels/optimized/cpu/...", "@EXECUTORCH_CLIENTS"],
)

runtime.cxx_library(