Description
This issue is to track the progress. Help is welcome: we have a lot of work to do, but it's fairly easy. You can use the pull requests made before as examples.
The idea is that before, using `@run_all_in_graph_and_eager_mode` meant we had to write the tests in the TensorFlow 1.x style, with things like `tf.compat.v1.global_variables_initializer()`, `self.evaluate` and other scary functions like that.
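For contrast, here is a minimal sketch of that old style. The layer, values and test name are made up for illustration, not taken from an actual addons test:

```python
import numpy as np
import tensorflow as tf


class DenseLayerTest(tf.test.TestCase):  # hypothetical example for illustration
    def test_dense_layer(self):
        layer = tf.keras.layers.Dense(2, kernel_initializer="ones", use_bias=False)
        out = layer(tf.ones((1, 3)))
        # In graph mode, variables had to be initialized and tensors evaluated
        # in a session; self.evaluate() hid that behind a single call.
        self.assertAllClose(self.evaluate(out), np.full((1, 2), 3.0))
```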
Now we use `@pytest.mark.usefixtures("maybe_run_functions_eagerly")`, which runs the test function twice: once normally, and once with `tf.config.experimental_run_functions_eagerly(True)`. This means the tests are eager-first: no need to initialize variables, and you get the result with `.numpy()`. To check the values, use the numpy testing module instead of the `tf.test.TestCase` methods like `self.assertAllClose` and such.
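Here is a minimal sketch of the new style, assuming the `maybe_run_functions_eagerly` fixture is available from the project's pytest configuration. The layer and values are made up for illustration:

```python
import numpy as np
import pytest
import tensorflow as tf


@pytest.mark.usefixtures("maybe_run_functions_eagerly")
def test_dense_layer():
    layer = tf.keras.layers.Dense(2, kernel_initializer="ones", use_bias=False)
    out = layer(tf.ones((1, 3)))
    # Eager-first: no session, no variable initialization, just .numpy()
    # and a plain numpy assertion.
    np.testing.assert_allclose(out.numpy(), np.full((1, 2), 3.0))
```

The fixture is shown here to illustrate the decorator; as explained below, it only really matters when the code under test goes through `tf.function`.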
You can take the two pull requests already made as references:
#1288
#1327
When doing a pull request, please do not migrate more than 2 tests at once.
The policy is that when we're testing a simple function (for example, one using a custom op), there is no need to use `tf.function` and no need to use `@pytest.mark.usefixtures("maybe_run_functions_eagerly")`, because we're not afraid of Python side effects.
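A sketch of that simple case; the op and values here are placeholders, not an actual addons test:

```python
import numpy as np
import tensorflow as tf


def test_squared_difference():
    # A plain op call: no tf.function wrapper and no
    # maybe_run_functions_eagerly fixture needed.
    out = tf.math.squared_difference(
        tf.constant([1.0, 2.0]), tf.constant([3.0, 1.0])
    )
    np.testing.assert_allclose(out.numpy(), [4.0, 1.0])
```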
When working with complex functions (for loops, if/else with tensors...), we need to add `tf.function` and `@pytest.mark.usefixtures("maybe_run_functions_eagerly")` to make sure we don't rely on some Pythonic behavior. To avoid having to write a complex `input_signature` for `tf.function`, we can isolate the sensitive part (the if/else or for loop with tensors) in a separate `tf.function` and leave the main one undecorated.
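A sketch of that pattern, with hypothetical function names (`_clip_if_negative_sum`, `preprocess`) invented for illustration, and the fixture again assumed to come from the project's pytest configuration:

```python
import numpy as np
import pytest
import tensorflow as tf


@tf.function
def _clip_if_negative_sum(x):
    # Only this tensor-dependent if/else is wrapped in tf.function,
    # so it gets exercised both eagerly and as a traced graph.
    if tf.reduce_sum(x) < 0:
        return tf.zeros_like(x)
    else:
        return x


def preprocess(x):
    # The main function stays undecorated, so the test does not need
    # to craft a complex input_signature for it.
    return _clip_if_negative_sum(tf.cast(x, tf.float32)) * 2.0


@pytest.mark.usefixtures("maybe_run_functions_eagerly")
def test_preprocess():
    out = preprocess(np.array([1.0, -3.0, 4.0]))
    np.testing.assert_allclose(out.numpy(), [2.0, -6.0, 8.0])
```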