Resolves core dump caused by tf.data.experimental.save with prefetch
`repeat` and `prefetch` in combination cause the snapshot reader's `Initialize` function to be invoked multiple times.
However, on the very last iteration there is nothing to prefetch, so `Prefetch` issues a `CancelThreads` call while the snapshot thread is still trying to initialize. See https://github.com/tensorflow/tensorflow/blob/6446dda92eaadf11d22377e2354307642d739d73/tensorflow/core/kernels/data/prefetch_dataset_op.cc#L151
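As an illustration of the timing only (the names and the `cancelled` flag below are toy stand-ins, not the TensorFlow implementation), the cancellation lands while initialization is still in flight:

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<bool> cancelled{false};

// Stand-in for the snapshot reader's Initialize(): bails out with an error
// as soon as it observes cancellation.
bool Initialize() {
  for (int step = 0; step < 100; ++step) {
    if (cancelled.load()) return false;  // errors out mid-initialization
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
  }
  return true;
}

int main() {
  std::thread reader([] {
    std::printf("Initialize %s\n", Initialize() ? "succeeded" : "cancelled");
  });
  // Stand-in for Prefetch's CancelThreads(): with nothing left to prefetch
  // on the final iteration, it cancels while Initialize() is still running.
  cancelled.store(true);
  reader.join();
}
```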
Currently, the dataset reference counting is done asymmetrically: the reference increment happens at the end of initialization, whereas the reference decrement happens in the destructor. When prefetch cancels the snapshot thread, the initialization function errors out before it ever reaches the reference increment. The reference decrement happens regardless, though, because the destructor is always invoked during cleanup. This results in an attempt to decrement the null dataset pointer, and therefore a segmentation fault.
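Here is a minimal standalone sketch of the failure mode; all class and member names are simplified stand-ins rather than the actual TensorFlow code:

```cpp
#include <cstdio>

// Toy reference-counted dataset standing in for DatasetBase.
class Dataset {
 public:
  void Ref() { ++refcount_; }
  void Unref() { --refcount_; }
 private:
  int refcount_ = 1;
};

// Pre-fix asymmetry: Ref() only at the end of Initialize(),
// Unref() unconditionally in the destructor.
class SnapshotReaderIterator {
 public:
  explicit SnapshotReaderIterator(Dataset* input) : input_(input) {}

  // Returns false when prefetch cancels the snapshot thread mid-init.
  bool Initialize(bool cancelled) {
    if (cancelled) return false;  // early error: dataset_ never assigned
    dataset_ = input_;
    dataset_->Ref();              // increment only reached on success
    return true;
  }

  ~SnapshotReaderIterator() {
    dataset_->Unref();  // always runs; dereferences a garbage/null pointer
                        // if Initialize() errored out -> segfault
  }

 private:
  Dataset* input_;
  Dataset* dataset_;  // deliberately left uninitialized, as before the fix
};

int main() {
  Dataset ds;
  SnapshotReaderIterator it(&ds);
  it.Initialize(/*cancelled=*/true);  // prefetch cancelled the reader
  // The destructor now calls Unref() through an uninitialized pointer: crash.
}
```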
This differs from all other dataset ops, where the dataset reference increment happens in the constructor and the decrement in the destructor, so the two are symmetric.
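Sketched with the same toy `Dataset` type as above, the symmetric pattern pairs every decrement with an increment that has already happened:

```cpp
// Symmetric pattern (sketch): the Ref taken at construction is
// unconditionally matched by the Unref at destruction, so no error path
// can unbalance the count.
class OtherDatasetOp {
 public:
  explicit OtherDatasetOp(Dataset* input) : input_(input) {
    input_->Ref();                          // increment in the constructor
  }
  ~OtherDatasetOp() { input_->Unref(); }    // decrement in the destructor
 private:
  Dataset* const input_;
};
```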
The solution is to ensure that the dataset reference is always initialized to `nullptr`, and to check for null before decrementing it.
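A sketch of that fix, again using the toy types from the sketches above:

```cpp
// Fixed iterator (sketch): dataset_ always starts as nullptr, and the
// destructor only decrements when a reference was actually taken.
class FixedSnapshotReaderIterator {
 public:
  explicit FixedSnapshotReaderIterator(Dataset* input) : input_(input) {}

  bool Initialize(bool cancelled) {
    if (cancelled) return false;  // early error path is now harmless
    dataset_ = input_;
    dataset_->Ref();
    return true;
  }

  ~FixedSnapshotReaderIterator() {
    if (dataset_ != nullptr) dataset_->Unref();  // null check before decrement
  }

 private:
  Dataset* input_;
  Dataset* dataset_ = nullptr;  // always initialized to nullptr
};
```

With `dataset_` defaulted to `nullptr`, the destructor is safe on every path, including the cancelled-initialization path that previously crashed.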