collecting latest step as a stat #5264
```diff
@@ -211,6 +211,7 @@ def _increment_step(self, n_steps: int, name_behavior_id: str) -> None:
         p = self.get_policy(name_behavior_id)
         if p:
             p.increment_step(n_steps)
+        self.stats_reporter.set_stat("Step", float(self.get_step))
```
Comment: @ervteng Is there a place that I can collect more granular/frequent step updates?

Reply: I think this is the right place if we care about synchronizing with the other metrics. The other place would be in the AgentProcessor, but that is a bit decoupled from the other metrics, as it would be added pre-trajectory assembly.

Reply: At some point (e.g. if the inference and training are separate processes) it might be useful to emit both as different stats (e.g.
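For context, here is a minimal sketch of the distinction the added line relies on. This is an illustration with assumed names, not the actual ml-agents implementation: `set_stat` overwrites a key with only its latest value (appropriate for a monotonic counter like the step), while `add_stat` accumulates values that a writer would typically average at the next summary write.

```python
# Sketch (assumed behavior, not the ml-agents source) of a stats reporter
# distinguishing "set" (keep latest value only) from "add" (accumulate).
from collections import defaultdict
from typing import Dict, List


class StatsReporter:
    def __init__(self) -> None:
        self._stats: Dict[str, List[float]] = defaultdict(list)

    def add_stat(self, key: str, value: float) -> None:
        # Accumulate; a writer would typically report the mean of these.
        self._stats[key].append(value)

    def set_stat(self, key: str, value: float) -> None:
        # Overwrite: only the most recent value (e.g. the step count) matters.
        self._stats[key] = [value]


reporter = StatsReporter()
reporter.add_stat("Environment/Cumulative Reward", 1.0)
reporter.add_stat("Environment/Cumulative Reward", 3.0)
reporter.set_stat("Step", 100.0)
reporter.set_stat("Step", 200.0)
```

With this shape, calling `set_stat("Step", ...)` once per `_increment_step` keeps the reported step synchronized with whatever other stats are flushed in the same summary window.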
```python
def _get_next_interval_step(self, interval: int) -> int:
    """
```
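The helper above is shown only as truncated diff context; a plausible reading is that it returns the next step count falling on a reporting-interval boundary. A standalone sketch of that assumed behavior (the rounding rule is a guess, not taken from the PR):

```python
def get_next_interval_step(current_step: int, interval: int) -> int:
    # Assumed behavior: the next multiple of `interval` strictly after the
    # current step, so a step already on a boundary advances a full interval.
    return current_step + (interval - current_step % interval)
```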
Comment: Not a fan of the magic strings here. I think these would be better as a static frozenset() in TensorBoardWriter. Any thoughts?
Reply: I encouraged @mahon94 to remove it as a magic string from the TensorboardWriter. My thinking is that the writer has no responsibility for the implementation of the trainer or its stats, and as such, configuration/behavior should be decoupled from the details of said implementation. The get_default_stats_writers plugin in stats_writer.py, on the other hand, is more closely related to the main entry point and execution of the trainer.

If it weren't mutable, I'd say this should just be the default value of a TensorboardWriter param and call it a day. Transforming it from a list to a frozenset resolves the mutability issue -- even if it's still a collection of magic strings in the writer. Do you have opinions on the coupling implications?
Reply: I don't have a strong opinion. I'm okay leaving it as is.
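As a hedged illustration of the frozenset option discussed above (all names here, including HIDDEN_KEYS, should_write, and the key strings, are hypothetical and not taken from the ml-agents codebase): a module-level frozenset is immutable, so unlike a list it is safe to share as a default parameter value without the usual mutable-default pitfalls.

```python
# Hypothetical sketch of a writer that skips a fixed set of stat keys.
from typing import FrozenSet

# Immutable module-level constant: safe to use as a shared default.
HIDDEN_KEYS: FrozenSet[str] = frozenset({"Is Training", "Step"})


class TensorboardWriterSketch:
    def __init__(self, hidden_keys: FrozenSet[str] = HIDDEN_KEYS) -> None:
        self.hidden_keys = hidden_keys

    def should_write(self, key: str) -> bool:
        # Only keys outside the hidden set are emitted to TensorBoard.
        return key not in self.hidden_keys


writer = TensorboardWriterSketch()
```

The coupling concern raised in the thread still applies: the set of hidden keys lives in the writer either way; passing it in from the trainer's entry point (as the comment suggests via get_default_stats_writers) keeps the writer agnostic of trainer internals.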