
Update performance benchmarking script to include a layer with ops that rely on fallbacks for autovectorization #455

Closed · LukeWood opened this issue May 25, 2022 · 2 comments

@LukeWood (Contributor):

tensorflow/tensorflow#56242

@bhack (Contributor) commented May 25, 2022:

tensorflow/tensorflow#56242 does not import keras-cv, so it is related to Keras only.

I think some preprocessing layers in Keras 2.9 were refactored onto the new base class, like the KPL layers in Keras-CV, acquiring a within-batch randomization policy plus vectorized_map (the transformation is randomized for every single element in the batch), roughly the pattern sketched below.
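For illustration, a minimal sketch of that within-batch pattern, assuming a stateless-seed scheme; the op, shapes, and parameter ranges here are hypothetical, not the actual Keras base class:

```python
import tensorflow as tf

def augment_one(args):
    image, seed = args
    # Per-element seed: every image draws its own brightness delta.
    delta = tf.random.stateless_uniform([], seed=seed, minval=-0.2, maxval=0.2)
    return tf.image.adjust_brightness(image, delta)

images = tf.random.uniform([8, 32, 32, 3])
seeds = tf.random.uniform([8, 2], maxval=2**31 - 1, dtype=tf.int32)
# Within-batch randomization: the transformation varies per element.
augmented = tf.vectorized_map(augment_one, (images, seeds))
```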

Other than missing converters, I think the problem often lies in native ops that don't support a batch of different parameters:
tensorflow/tensorflow#55639
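As a concrete (hypothetical) illustration, tf.image.adjust_hue only accepts a scalar delta, so it cannot consume a batch of per-image deltas directly; vectorized_map works around that, but only delivers a speedup when pfor has a converter for the underlying op:

```python
import tensorflow as tf

images = tf.random.uniform([8, 32, 32, 3])
deltas = tf.random.uniform([8], minval=-0.5, maxval=0.5)

# Rejected: adjust_hue requires a scalar delta, not a [batch] vector.
# tf.image.adjust_hue(images, deltas)

# Works, but if pfor cannot convert the op it silently falls back to a
# (much slower) while_loop over the batch.
hued = tf.vectorized_map(
    lambda args: tf.image.adjust_hue(args[0], args[1]), (images, deltas))
```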

As already raised in #291, nightly has a new vectorized_map signature that exposes the root cause of the fallback:

```python
def vectorized_map(fn, elems, fallback_to_while_loop=True, warn=True):
```
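So on a nightly build that has this signature, warn=True should surface which op forced the while_loop fallback (the kwarg is assumed to exist only on those builds):

```python
import tensorflow as tf

images = tf.random.uniform([8, 32, 32, 3])
deltas = tf.random.uniform([8], minval=-0.5, maxval=0.5)

# warn=True (nightly-only at the time of writing) makes pfor log the op it
# could not convert before falling back to a while_loop.
out = tf.vectorized_map(
    lambda args: tf.image.adjust_hue(args[0], args[1]),
    (images, deltas),
    fallback_to_while_loop=True,
    warn=True,
)
```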

If you really want to benchmark the performance gap, here and in the refactored Keras preprocessing layers, I suppose what you actually want to test is batch augmentation (with a fixed parameter) vs. within-batch augmentation.
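A rough timing sketch of that comparison; the op, batch size, and iteration count are arbitrary assumptions, not the actual benchmark script:

```python
import time
import tensorflow as tf

images = tf.random.uniform([128, 224, 224, 3])
seeds = tf.random.uniform([128, 2], maxval=2**31 - 1, dtype=tf.int32)

@tf.function
def fixed_param(images):
    # Batch augmentation: one parameter shared by the whole batch.
    return tf.image.adjust_brightness(images, 0.1)

@tf.function
def within_batch(images, seeds):
    # Within-batch augmentation: a different parameter per image.
    def one(args):
        image, seed = args
        delta = tf.random.stateless_uniform([], seed=seed,
                                            minval=-0.2, maxval=0.2)
        return tf.image.adjust_brightness(image, delta)
    return tf.vectorized_map(one, (images, seeds))

for name, call in [("fixed-parameter batch", lambda: fixed_param(images)),
                   ("within-batch", lambda: within_batch(images, seeds))]:
    call()  # warm-up / tracing
    start = time.perf_counter()
    for _ in range(10):
        call()
    print(name, (time.perf_counter() - start) / 10, "s/step")
```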

Related:
#372 (comment)

@bhack (Contributor) commented May 25, 2022:

P.S. Just to make the history more complete for anyone landing on this ticket directly: we started this discussion with @qlzh727 three months ago in #146.
