In TensorFlow graph mode, RandAugment repeatedly applies the same augmentations, which were sampled once during graph tracing.
It relies on shuffling a Python list with `random.shuffle`, which only runs during eager execution. In graph mode the sampled operations are compiled into the graph, but the sampling step itself is not, so the same operations are reused on every call.
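The underlying behavior can be reproduced outside RandAugment. In this minimal sketch (op names and the `pick_op` helper are hypothetical, chosen only to mirror the issue), `random.shuffle` executes in Python once during tracing, so the choice is frozen into the compiled graph:

```python
import random
import tensorflow as tf

OPS = ["equalization", "random_posterization", "random_sharpness"]

@tf.function
def pick_op():
    # random.shuffle is plain Python: it runs once, at trace time.
    # Whatever order it produces is baked into the compiled graph.
    random.shuffle(OPS)
    return tf.constant(OPS[0])

# Every subsequent call reuses the traced graph, so the "randomly"
# chosen op never changes.
first = pick_op()
```

Calling `pick_op()` again and again returns the same op name, matching the repeated `equalization` / `random_posterization` pairs in the test output below.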
If I add a `tf.print` statement to this code:
```python
random.shuffle(self._AUGMENT_LAYERS)
for layer_name in self._AUGMENT_LAYERS[: self.num_ops]:
    tf.print(layer_name, tf.executing_eagerly())  # <----
    augmentation_layer = getattr(self, layer_name)
    transformation[layer_name] = (
        augmentation_layer.get_random_transformation(
            data,
            training=training,
            seed=self._get_seed_generator(self.backend._backend),
        )
    )
```
then run this test:
```python
def test_graph_issue(self):
    input_data = np.random.random((10, 8, 8, 3))
    layer = layers.RandAugment()
    ds = tf_data.Dataset.from_tensor_slices(input_data).batch(2).map(layer)
    print()
    for output in ds:
        output.numpy()
```
I get this output:
```
equalization False
random_posterization False
equalization False
random_posterization False
equalization False
random_posterization False
equalization False
random_posterization False
equalization False
random_posterization False
```
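For comparison, a graph-compatible way to sample would use TensorFlow random ops, which are part of the traced graph and so draw a fresh value on every call. This is only a sketch of the general technique, not the RandAugment fix itself (`OPS` and `pick_op` are made up for illustration):

```python
import tensorflow as tf

OPS = ["equalization", "random_posterization", "random_sharpness"]

@tf.function
def pick_op():
    # tf.random.uniform is a graph op, so a new index is drawn on
    # every call rather than once at trace time.
    idx = tf.random.uniform([], maxval=len(OPS), dtype=tf.int32)
    return tf.gather(tf.constant(OPS), idx)
```

Repeated calls now yield varying op names even inside a `tf.data` pipeline, because the randomness lives in the graph instead of in Python.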