Write to TensorBoard every x samples. #11152
Conversation
…d` method to the TensorBoard class
…rboard-callback-modifications. Resolved conflicts and tested callbacks # Conflicts: # keras/callbacks.py # tests/keras/test_callbacks.py
…om/sameermanek/keras into batch_tensorboard
the metrics and losses.
time it wrote to the logs.
Seems like a useful feature, thanks for the PR. @omalleyt12 would you mind taking a look? Thank you.
keras/callbacks.py
@@ -715,6 +715,12 @@ class TensorBoard(Callback):
        `embeddings_layer_names`. Numpy array (if the model has a single
        input) or list of Numpy arrays (if the model has multiple inputs).
        Learn [more about embeddings](https://www.tensorflow.org/programmers_guide/embedding)
    write_step: `'batch'` or `'epoch'` or integer. When using `'batch'`, writes
There's already an `embeddings_freq` argument, and this new argument `write_step` is not consistent with it. Maybe `write_freq` or `update_freq` would be better. Thoughts?
Naming things is as hard as ever. I like `update_freq`, since we "abstract" the writing to the file in this case. The user doesn't really care (take this with a grain of salt) what happens to the file; they want to see the plot being updated. That's what matters.
ping @omalleyt12
LGTM, a useful feature and a nice improvement over the PR it is based on. The new param names and options are much better.
Thanks, LGTM!
Great! Note that we will need to port that same feature to tf.keras as well.
Thanks. I don't think I'll make the PR in the TensorFlow repo for lack of time; I'll let someone else do it, if you don't mind.
I just recently switched from Keras to tf.keras, and noticed that TensorBoard summaries looked a little bit different there: with tensorflow-gpu==1.11.0 and tensorboard==1.11.0, I no longer have data in val_loss but in epoch_val_loss, and in addition to loss there are epoch_loss and batch_loss. Is that a difference between the two Keras versions? Will it impact porting this feature to tf.keras?
Summary
Closes #7617.
This PR is an update of #7617. Credit goes to @sameermanek for his work.
The goal of the PR is to make it possible to see the metrics and losses on a per-batch basis instead of a per-epoch basis.
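For illustration, the per-sample counting that this implies can be sketched as below. This is not the PR's actual code; the class and method names here are hypothetical, and it only models the decision of *when* to write, assuming the three accepted values `'batch'`, `'epoch'`, or an integer number of samples:

```python
class UpdateFreqCounter:
    """Illustrative sketch (not the actual Keras implementation) of
    deciding when the TensorBoard callback should write summaries.

    - 'epoch':   never write inside an epoch (epoch-end handles it)
    - 'batch':   write after every batch
    - integer N: write once at least N samples were seen since the
                 last write
    """

    def __init__(self, update_freq='epoch'):
        # Treat 'batch' as "every 1 sample", so any batch triggers a write.
        self.update_freq = 1 if update_freq == 'batch' else update_freq
        self.samples_seen = 0
        self.samples_seen_at_last_write = 0

    def should_write(self, batch_size):
        """Call at the end of each batch; returns True if it is time to write."""
        if self.update_freq == 'epoch':
            return False
        self.samples_seen += batch_size
        if self.samples_seen - self.samples_seen_at_last_write >= self.update_freq:
            self.samples_seen_at_last_write = self.samples_seen
            return True
        return False
```

For example, with `update_freq=1000` and a batch size of 512, a write happens after the second and fourth batches (1024 and 2048 samples seen), not after every batch.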
Related Issues
#6692
PR Overview
This PR is backward compatible.
A new argument has been added to the constructor of the TensorBoard callback. Here is the description that I added in the docs:
Feedback welcome. Especially from @fchollet since the API has been changed.