Reuse GELU implementation from PyTorch core (#8322)
* Reuse GELU implementation from PyTorch core
Pull Request resolved: #7041
kernels/optimized doesn't need to support embedded systems, so it can just take a header-only dep on PyTorch.
Note that ATen vec lets us pick up Sleef in internal builds while ignoring it externally, so this PR is also able to enable the optimized GELU kernel in OSS.
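For reference, the exact ("erf") form of GELU that the shared PyTorch implementation computes is gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2))). The sketch below is a minimal scalar version of that formula for illustration only; it is not the PR's kernel, which goes through ATen's vectorized primitives (and Sleef, when available) rather than a scalar loop:

```cpp
#include <cmath>

// Exact ("erf") GELU: gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2))).
// Scalar reference sketch only -- the actual optimized kernel uses
// ATen vec (and Sleef internally) instead of per-element std::erf.
inline float gelu_erf_reference(float x) {
  constexpr float kInvSqrt2 = 0.70710678118654752f;  // 1 / sqrt(2)
  return 0.5f * x * (1.0f + std::erf(x * kInvSqrt2));
}
```

As a sanity check, gelu(0) is exactly 0, gelu(x) approaches x for large positive x, and gelu(x) approaches 0 for large negative x.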
Testing: CI to make sure this doesn't break mobile build modes; happy to take advice on anything not currently covered that might break.
ghstack-source-id: 265190627
@exported-using-ghexport
Differential Revision: [D66335522](https://our.internmc.facebook.com/intern/diff/D66335522/)
* apply @huydhn's fix to new code
* extra tweak from @huydhn
---------
Co-authored-by: Github Executorch <github_executorch@arm.com>
Co-authored-by: Scott Wolchok <swolchok@meta.com>