Use the correct batch values in the output #2191
Conversation
@svekars thanks! The other thing I want to mention is that I couldn’t find a way to update the values in the
Fixed merge conflict.
The current logic uses the enumerate counter (i.e. the variable `batch`) and displays the loss for trained data in increments of 100 batches via `if batch % 100 == 0`, i.e. for the 1st batch, the 101st batch, and so on (which maps to batch = 0, 100, ...). So for the first batch, the loss displayed should be `[ 64/60000]` instead of `[ 0/60000]`; for the second print it should be `[ 6464/60000]` instead of `[ 6400/60000]`, and so on. As written, the values displayed in the `Out:` text box on the tutorial page, e.g. `loss: 2.306185 [ 0/60000]`, read as a loss observed for zero input samples, which is incorrect: that loss was for the first batch of 64 input samples, and the next was for the 101st batch, by which point 6464 samples had been seen.

cc @suraj813
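For context, here is a minimal sketch of the change being described, loosely following the train loop from the quickstart tutorial (the function body and names here are approximate, not the exact diff):

```python
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)  # 60000 for the FashionMNIST training set
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        # Forward pass and loss
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            # Before: current = batch * len(X) prints "[    0/60000]" for the
            # first batch even though 64 samples have already been trained on.
            # After: count the current batch too, so the first print shows 64,
            # the second 6464, and so on (for a batch size of 64).
            current = (batch + 1) * len(X)
            print(f"loss: {loss.item():>7f}  [{current:>5d}/{size:>5d}]")
```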
thanks
@svekars hello, considering the changes, I am wondering if I need to request a review from any specific reviewer? No rush, I just thought it would be a good idea to ask in case it's needed. Thanks!
@spzala thank you for improving the tutorial!
@suraj813 you're welcome, and thank you so much for the review!