
[Easy and help welcome] Migrating all tests from run_all_in_graph_and_eager_mode to the new pytest fixture. #1328

Closed
gabrieldemarmiesse opened this issue Mar 18, 2020 · 7 comments
Labels: good first issue, help wanted (Needs help as a contribution), test-cases (Related to Addons tests)

Comments

@gabrieldemarmiesse
Member

gabrieldemarmiesse commented Mar 18, 2020

This issue is to track the progression. Help is welcome; we have a lot of work to do, but it's fairly easy. You can use the pull requests made before as examples.

The idea is that before, using @run_all_in_graph_and_eager_mode meant that we needed to write the tests in the TensorFlow 1.x style, with things like tf.compat.v1.initialize_global_variables, self.evaluate and other scary functions like that.

Now we use @pytest.mark.usefixtures("maybe_run_functions_eagerly"), which runs the test function twice: once normally, and once with tf.config.experimental_run_functions_eagerly(True). This means the tests are eager-first: no need to initialize variables, and you get the result with .numpy(). To check values, use the numpy testing module instead of the tf.test.TestCase methods like self.assertAllClose.

You can take as reference the two pull requests already made:
#1288
#1327

When opening a pull request, please do not convert more than 2 tests at once.

The policy is that when we're testing a simple function (for example, one using a custom op), there is no need to use tf.function or @pytest.mark.usefixtures("maybe_run_functions_eagerly"), because we're not afraid of Python side effects.
When working with complex functions (for loops, if/else with tensors...), we need to add tf.function and @pytest.mark.usefixtures("maybe_run_functions_eagerly") to make sure we don't rely on some pythonic behavior. To avoid having to write a complex input_signature in tf.function, we can isolate the sensitive part (if/else/for loops with tensors) in a separate tf.function and leave the main one undecorated.
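For illustration (the function names below are hypothetical), isolating the tensor-dependent control flow might look like this:

```python
import tensorflow as tf

# Hypothetical helper: only the data-dependent if/else is wrapped in
# tf.function, so the main function needs no complex input_signature.
@tf.function
def _clip_if_negative_sum(x):
    if tf.reduce_sum(x) < 0:  # condition on a tensor: graph control flow
        return tf.zeros_like(x)
    return x

def scale_then_clip(x, factor):
    # the main function stays undecorated and runs eagerly
    return _clip_if_negative_sum(x * factor)
```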

@fsx950223
Member

fsx950223 commented Mar 20, 2020

I suggest using tf.autograph.to_graph instead of tf.function when working with complex functions. There are two reasons:
1. The function will raise errors when I set autograph=False with tf.config.experimental_run_functions_eagerly(False)
2. Subgraphs will use more resources
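A minimal sketch of that suggestion (the function here is made up for illustration):

```python
import tensorflow as tf

def double_until(x, limit):
    # python-style while loop over tensor values; AutoGraph
    # converts it to graph-compatible control flow
    while x < limit:
        x = x * 2.0
    return x

# tf.autograph.to_graph returns the converted, graph-compatible function
graph_double_until = tf.autograph.to_graph(double_until)
```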

And as for the lack of graph tests, I suggest that every op should add a tf.function test case, which could cover TensorFlow graph edge cases, e.g. TensorArray.

@gabrieldemarmiesse
Member Author

@fsx950223 you're right, we need more discussion about when and how to use tf.function. We may sometimes have complex signatures (when multiple dtypes are accepted, for example), and they're hard to express with tf.function and tf.autograph.to_graph. We still need to test graph mode though (not all the time). So we need to find a solution.
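For context on why multiple dtypes are awkward here (a sketch with a made-up function): an explicit input_signature pins a single dtype, so covering several dtypes needs extra signatures or separate traced functions.

```python
import tensorflow as tf

# Hypothetical op: the signature below accepts only float32, so testing
# the same op for float16/float64 would need additional signatures.
@tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
def scale(x):
    return x * 2.0

# scale(tf.constant([1, 2]))  # int32 input raises a signature mismatch error
```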

@stmugisha

> I suggest using tf.autograph.to_graph instead of tf.function when working with complex functions. There are two reasons:
> 1. The function will raise errors when I set autograph=False with tf.config.experimental_run_functions_eagerly(False)

@fsx950223 I tried recreating this error, and the problem arose with the arguments passed to the function(s). Once autograph is set to False in @tf.function and tf.config.experimental_run_functions_eagerly(False) is active, functions expect actual float values rather than tensors,
hence this error:
OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed: AutoGraph is disabled in this function. Try decorating it directly with @tf.function.
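A minimal reproduction of that error (sketch; the function is hypothetical):

```python
import tensorflow as tf

@tf.function(autograph=False)
def absolute(x):
    # A python `if` on a symbolic tensor: without autograph, tracing
    # calls bool() on the tensor and raises
    # OperatorNotAllowedInGraphError in graph mode.
    if x > 0:
        return x
    return -x

# absolute(tf.constant(1.0))  # raises when traced as a graph
```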

@fsx950223
Member

fsx950223 commented Mar 24, 2020

> > I suggest using tf.autograph.to_graph instead of tf.function when working with complex functions. There are two reasons:
> > 1. The function will raise errors when I set autograph=False with tf.config.experimental_run_functions_eagerly(False)
>
> @fsx950223 I tried recreating this error, and the problem arose with the arguments passed to the function(s). Once autograph is set to False in @tf.function and tf.config.experimental_run_functions_eagerly(False) is active, functions expect actual float values rather than tensors,
> hence this error:
> OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed: AutoGraph is disabled in this function. Try decorating it directly with @tf.function.

Yeah, it's a problem. You could fix the bug by decorating the target function with tf.autograph.to_graph.

@stmugisha

stmugisha commented Mar 24, 2020

Why would you disable autograph in tf.function only for you to use it with tf.autograph.to_graph?? I'm a little confused.

@gabrieldemarmiesse
Member Author

Thanks @autoih for all the help!

@seanpmorgan
Member

> Thanks @autoih for all the help!

@gabrieldemarmiesse @autoih Much thanks to both of you for all of your work in accomplishing this! The new test suite is significantly easier to work with and more understandable from a graph vs. eager perspective.

jrruijli pushed a commit to jrruijli/addons that referenced this issue Dec 23, 2020
…sorflow#1405)

* Moved test out of run_all_in_graph_and_eager_mode in softshrink.

See tensorflow#1328

* Small fix.
jrruijli pushed a commit to jrruijli/addons that referenced this issue Dec 23, 2020
…ensorflow#1430)

* Moved test out of run_all_in_graph_and_eager_mode sparse_image_wrap.

See tensorflow#1328

* Update tensorflow_addons/image/sparse_image_warp_test.py