🚀 Feature
Here is the feature I would like to have, shown for example with `TensorboardLogger`, but we could implement it for all experiment tracking systems:
```python
tb_logger = TensorboardLogger(log_dir="experiments/tb_logs")

tb_logger.attach_output_handler(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    tag="training",
    state_attrs=["alpha", "beta"],  # <-> trainer.state.alpha and trainer.state.beta
)
```
where `state_attrs` defines which attributes of `trainer.state` to log to TB.
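For context, a minimal sketch of how such attributes might end up on `trainer.state` in user code; the names `alpha`/`beta` and the update rule are purely illustrative assumptions, not part of ignite's API:

```python
from ignite.engine import Events

# Purely illustrative: store custom scalar values (e.g. loss weights)
# on the trainer state so that a logger could later pick them up.
@trainer.on(Events.ITERATION_STARTED)
def update_loss_weights(engine):
    engine.state.alpha = min(1.0, engine.state.iteration / 1000)
    engine.state.beta = 1.0 - engine.state.alpha
```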
Currently, this could be done as follows:
```python
tb_logger = TensorboardLogger(log_dir="experiments/tb_logs")

def trainer_state_logger(state_attrs, tag=None):
    def wrapper(engine, logger, event_name):
        assert engine == trainer
        global_step = engine.state.get_event_attrib_value(event_name)
        tag_prefix = f"{tag}/" if tag else ""
        for name in state_attrs:
            # attributes live on engine.state, e.g. trainer.state.alpha
            value = getattr(engine.state, name, None)
            if value is None:
                continue
            logger.writer.add_scalar(f"{tag_prefix}{name}", value, global_step)
    return wrapper

tb_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=trainer_state_logger(["alpha", "beta"], tag="training"),
)
```
We would like to support the same value types for state attributes as we support for metrics; see `ignite/contrib/handlers/tensorboard_logger.py`, lines 287 to 296 at commit 4d6d220:
```python
for key, value in metrics.items():
    if isinstance(value, numbers.Number):
        logger.writer.add_scalar(f"{self.tag}/{key}", value, global_step)
    elif isinstance(value, torch.Tensor) and value.ndimension() == 0:
        logger.writer.add_scalar(f"{self.tag}/{key}", value.item(), global_step)
    elif isinstance(value, torch.Tensor) and value.ndimension() == 1:
        for i, v in enumerate(value):
            logger.writer.add_scalar(f"{self.tag}/{key}/{i}", v.item(), global_step)
    else:
        warnings.warn(f"TensorboardLogger output_handler can not log metrics value type {type(value)}")
```
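A minimal sketch of how the proposed handler could reuse this type handling for state attributes; the helper name `_log_state_attrs` and its exact structure are assumptions for illustration, not a finalized design:

```python
import numbers
import warnings

import torch


def _log_state_attrs(engine, logger, state_attrs, tag, global_step):
    # Hypothetical helper: mirrors the metrics type handling above,
    # but reads values from engine.state instead of engine.state.metrics.
    tag_prefix = f"{tag}/" if tag else ""
    for name in state_attrs:
        value = getattr(engine.state, name, None)
        if isinstance(value, numbers.Number):
            logger.writer.add_scalar(f"{tag_prefix}{name}", value, global_step)
        elif isinstance(value, torch.Tensor) and value.ndimension() == 0:
            logger.writer.add_scalar(f"{tag_prefix}{name}", value.item(), global_step)
        elif isinstance(value, torch.Tensor) and value.ndimension() == 1:
            for i, v in enumerate(value):
                logger.writer.add_scalar(f"{tag_prefix}{name}/{i}", v.item(), global_step)
        else:
            warnings.warn(f"Can not log state attribute {name} of type {type(value)}")
```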
This issue can be handled in several PRs that should implement the feature for the following experiment tracking systems and loggers:
- TensorBoard logger
- Visdom logger
- ClearML logger
- Neptune logger
- WandB logger
- MLFlow logger
- Polyaxon logger
- tqdm logger
If you would like to work on this issue, please comment and say which part you would like to cover.
A PR should contain new code, tests and documentation updates. Please see our contributing guide: https://github.com/pytorch/ignite/blob/master/CONTRIBUTING.md
A draft PR can be sent and maintainers can help to iterate over the remaining tasks.