[ET-VK][Ops] aten.embedding #3762
Conversation
## The Operator

`nn.Module` invocations on the embedding returned by [`torch.nn.Embedding`](https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html) get compiled to `aten.embedding.default` in the Edge Dialect, which carries the following signature.

```
- func: embedding(Tensor weight, Tensor indices, SymInt padding_idx=-1, bool scale_grad_by_freq=False, bool sparse=False) -> Tensor
```

## Implementation

This is a C-packing-only implementation. Interestingly, the 1D-`indices` case is equivalent to the `dim=0` case of the preceding `aten.index_select`: #3744

```
- func: index_select(Tensor self, int dim, Tensor index) -> Tensor
```

I naïvely thought the rest of the operator would be similarly easy, but it wasn't. The 2D and 3D-`indices` cases are involved enough that they require a standalone `cpp`/`glsl` file.

## Codegen

We add support for generating 2D and 3D index tensors. This requires new generation functions as well as updating the `case_name` string generation to recursively handle nested `pylist`s.

```
// 1D
Test(weight=[10, 9], indices=[0, 2]),
// 2D
Test(weight=[10, 9], indices=[[0, 2], [1, 4], [7, 7]]),
// 3D
Test(weight=[10, 9], indices=[[[3, 1, 4], [1, 5, 9]], [[2, 6, 5], [3, 5, 8]]]),
```

Differential Revision: [D57880520](https://our.internmc.facebook.com/intern/diff/D57880520/)
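For reference, a minimal PyTorch sketch (not part of this PR) illustrating the two facts above: with 1D `indices`, `aten.embedding` matches `aten.index_select` along `dim=0`, and with 2D/3D `indices` each index picks a row of `weight`, so the output shape is `indices.shape + (embedding_dim,)`.

```python
import torch

weight = torch.randn(10, 9)  # 10 embeddings of dimension 9

# 1D indices: embedding is exactly index_select along dim 0.
idx_1d = torch.tensor([0, 2])
assert torch.equal(
    torch.ops.aten.embedding.default(weight, idx_1d),
    torch.index_select(weight, 0, idx_1d),
)

# 2D / 3D indices: each index is replaced by its embedding row,
# so the output shape is indices.shape + (embedding_dim,).
idx_2d = torch.tensor([[0, 2], [1, 4], [7, 7]])
assert torch.ops.aten.embedding.default(weight, idx_2d).shape == (3, 2, 9)

idx_3d = torch.tensor([[[3, 1, 4], [1, 5, 9]], [[2, 6, 5], [3, 5, 8]]])
assert torch.ops.aten.embedding.default(weight, idx_3d).shape == (2, 2, 3, 9)
```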
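And a hypothetical sketch of the codegen side, assuming helpers along these lines (the names `make_index_tensor`, `case_name`, and the name format are illustrative, not the actual functions added in this PR): an index tensor is built from an arbitrarily nested Python list, and the case name is derived by recursing into the nesting.

```python
import torch

# Hypothetical helpers; the real generation functions in this PR may differ.
def make_index_tensor(pylist):
    # Nested Python lists map directly to 1D/2D/3D int64 index tensors.
    return torch.tensor(pylist, dtype=torch.int64)

def case_name(pylist):
    # Recursively flatten nested lists into a readable test-case name.
    if isinstance(pylist, list):
        return "_".join(case_name(item) for item in pylist)
    return str(pylist)

print(case_name([[0, 2], [1, 4], [7, 7]]))  # 0_2_1_4_7_7
print(make_index_tensor([[[3, 1, 4], [1, 5, 9]],
                         [[2, 6, 5], [3, 5, 8]]]).shape)  # torch.Size([2, 2, 3])
```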
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/3762. Note: links to docs will display an error until the docs builds have been completed. ❗ There is 1 currently active SEV. ✅ No failures as of commit 7b4b2ae with merge base 55d11e1. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D57880520
This pull request has been merged in a36ace7.