🚀 Feature
Improve validation output during training.
Motivation
IMHO it would be very helpful to improve the visualization output during training itself, either by streaming results to W&B/TensorBoard or by writing out the image results of validation, since these are computed anyway. This would of course increase storage consumption during training, but the advantages are substantial.
Beyond simply monitoring the training process, this would also help to isolate the hardest training examples as early as possible.
Is there already a switch available that enhances the training output in this way?
Do you have any hints on what else might help achieve the same goals, other than evaluating the model after training?
@thhart there is a W&B PR which more formally implements your suggestions. In addition, the validation labels and predictions are already plotted for the first and last epochs by default and stored in your runs/train/exp directory. You can modify this very simply here to print these jpgs every epoch, and even every batch if you'd like.
We will not enable this by default, however, due to the much slower performance and the much larger storage requirement it would introduce for all users compared to the current implementation.
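For illustration, below is a minimal sketch of the kind of per-epoch image logging being discussed. It is not YOLOv5's actual implementation: `draw_predictions`, `val_batch`, and `model_outputs` are placeholders for whatever utility produces annotated validation images in your setup, and the TensorBoard tags and log directory are arbitrary.

```python
# Hypothetical sketch, not taken from YOLOv5: write a few annotated
# validation images to TensorBoard at the end of every epoch.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/train/exp")  # any log directory works

def log_val_images(writer, images, epoch, max_images=4):
    """Log up to `max_images` validation images (CHW float tensors in [0, 1])."""
    for i, img in enumerate(images[:max_images]):
        writer.add_image(f"val/pred_{i}", img, global_step=epoch)

# Inside the training loop, after validation has produced annotated images:
#   annotated = draw_predictions(val_batch, model_outputs)  # placeholder helper
#   log_val_images(writer, annotated, epoch)
```

The W&B route is analogous, e.g. `wandb.log({"val/predictions": [wandb.Image(img) for img in annotated]}, step=epoch)`; either way, logging every epoch (or every batch) trades training speed and storage for earlier visibility into hard examples.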
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.