Improve tensorboard monitoring #514
Conversation
summary_writer = get_tensorboard_writer()
if summary_writer:
    with summary_writer.as_default(step=get_step_number()):
        if tf.rank(query_points) == 2:
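For context, the pattern in this excerpt can be sketched as follows. This is a minimal, self-contained sketch, not the PR's actual code: `get_tensorboard_writer` and `get_step_number` here are hypothetical stand-ins for the helpers in the diff, and the scalar summary at the end is illustrative.

```python
import os
import tempfile

import tensorflow as tf

logdir = tempfile.mkdtemp()

# hypothetical stand-ins for the helpers used in the diff above
_summary_writer = tf.summary.create_file_writer(logdir)

def get_tensorboard_writer():
    return _summary_writer

def get_step_number():
    return 0

query_points = tf.constant([[0.1, 0.2], [0.3, 0.4]])  # shape [N, D], i.e. rank 2

summary_writer = get_tensorboard_writer()
if summary_writer:
    # as_default(step=...) sets the default step for every summary in the block
    with summary_writer.as_default(step=get_step_number()):
        # the rank check from the diff: rank 2 means no extra batch axis
        if tf.rank(query_points) == 2:
            tf.summary.scalar("query_points.mean", tf.reduce_mean(query_points))
summary_writer.flush()
```

Wrapping the logging in `if summary_writer:` keeps monitoring entirely optional: when no writer has been configured, the whole block is skipped.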
why wouldn't this be true?
Because the type system doesn't tell me so :-)
Seriously: does it have to be true (i.e. do we assume/assert it anywhere), or is it just always true in practice?
LGTM
    f"{tag}.observation.best_overall",
    np.min(datasets[tag].observations),
    step=step,
tf.summary.histogram(
histogram makes sense only for batches, no?
I mean, it still works for batch size one (though it doesn't provide any more information than the best_in_batch scalar plot).
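As a sketch of the two summary types being compared here (the tags and data are illustrative, not the PR's actual code):

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

logdir = tempfile.mkdtemp()
writer = tf.summary.create_file_writer(logdir)

observations = np.array([[0.9], [0.4], [0.7]])  # a batch of three observations

with writer.as_default():
    # a scalar tracks a single value per step, e.g. the best observation
    tf.summary.scalar("demo.observation.best_overall", np.min(observations), step=0)
    # a histogram shows the spread within the batch; with batch size one
    # it collapses to a single value and adds nothing over the scalar
    tf.summary.histogram("demo.observation.batch", observations, step=0)
writer.flush()
```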
looks ok to me
what I found was needed was recording some info about the run, like the size of the data at the beginning (i.e. the number of initial points), the budget aka the number of steps, and information about the model and acquisition rule - this can all be filed as text and can serve as useful metadata when comparing several runs
good suggestion, though i'd rather add text summaries in a separate pr
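The metadata suggestion above could look something like this. This is a hypothetical sketch using `tf.summary.text`; the tags and values are illustrative and not part of this PR:

```python
import os
import tempfile

import tensorflow as tf

logdir = tempfile.mkdtemp()
writer = tf.summary.create_file_writer(logdir)

# hypothetical run metadata, recorded once at step 0 so it appears in
# TensorBoard's Text tab and can be compared across runs
run_info = {
    "num_initial_points": "10",
    "num_steps": "25",
    "model": "GaussianProcessRegression",
    "acquisition_rule": "EfficientGlobalOptimization",
}

with writer.as_default():
    for key, value in run_info.items():
        tf.summary.text(f"run_info/{key}", value, step=0)
writer.flush()
```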
Some initial improvements, based on suggestions at #401: