
Use the correct batch values in the output #2191

Merged
merged 2 commits into pytorch:main on Feb 13, 2023
Conversation

spzala
Contributor

@spzala spzala commented Feb 2, 2023

The current logic uses the enumerate counter (i.e., the variable `batch`). It displays the loss for trained data in increments of 100 batches using `if batch % 100 == 0`, i.e., the 1st batch, the 101st batch, and so on for the given dataset (this maps to `batch` = 0, 100, and so on). So for the first batch, the loss displayed should be [ 64/60000] instead of [ 0/60000]. For the second it should be [ 6464/60000] instead of [ 6400/60000], and so on. The values displayed in the Out: text box on the tutorial page, e.g. loss: 2.306185 [ 0/60000], read as a loss observed for zero input samples, which seems incorrect. The loss shown there was for the first batch, which contained 64 input samples. The second was for the 101st batch, by which point 6464 samples had been processed, and so on.
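For context, here is a minimal, hypothetical sketch of the reporting change. It mirrors the shape of the quickstart tutorial's `train` loop but substitutes dummy batches for a real model and dataloader, so `batches`, `batch_size`, and `size` are stand-ins rather than the tutorial's exact code:

```python
# Hypothetical, self-contained demo of the progress-print change.
# It mimics the quickstart train loop's counting, with dummy batches instead of a model.
batch_size, size = 64, 60000
batches = [list(range(batch_size)) for _ in range(size // batch_size)]  # 937 full batches

for batch, X in enumerate(batches):
    if batch % 100 == 0:
        old = batch * len(X)            # before the fix: 0, 6400, 12800, ...
        current = (batch + 1) * len(X)  # after the fix: 64, 6464, 12864, ...
        print(f"[{current:>5d}/{size:>5d}]  (was [{old:>5d}/{size:>5d}])")
```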

cc @suraj813

@netlify

netlify bot commented Feb 2, 2023

Deploy Preview for pytorch-tutorials-preview ready!

| Name | Link |
|------|------|
| 🔨 Latest commit | 2fdff89 |
| 🔍 Latest deploy log | https://app.netlify.com/sites/pytorch-tutorials-preview/deploys/63ea57abe4e82200079d5392 |
| 😎 Deploy Preview | https://deploy-preview-2191--pytorch-tutorials-preview.netlify.app |

@spzala
Contributor Author

spzala commented Feb 2, 2023

@svekars thanks! The other thing I want to mention is that I couldn't find a way to update the values in the Out: box on the tutorial page, so I hope you can help there :) If the PR passes review, here are the values that should be used in every epoch (a quick sketch of how they are derived follows the list). We can leave the loss values as they are, considering this is an example and the values will differ in each user's environment.

[   64/60000]
[ 6464/60000]
[12864/60000]
[19264/60000]
[25664/60000]
[32064/60000]
[38464/60000]
[44864/60000]
[51264/60000]
[57664/60000]
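Those numbers follow directly from the corrected formula; a quick, hypothetical check (assuming the tutorial's batch size of 64 and a progress print every 100 batches):

```python
# (batch + 1) * batch_size at every batch index where the tutorial prints progress.
batch_size = 64
for batch in range(0, 1000, 100):  # batch indices 0, 100, ..., 900 satisfy batch % 100 == 0
    print(f"[{(batch + 1) * batch_size:>5d}/60000]")
# prints 64, 6464, 12864, 19264, 25664, 32064, 38464, 44864, 51264, 57664
```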

@spzala
Contributor Author

spzala commented Feb 2, 2023

Fixed merge conflict.

@spzala
Contributor Author

spzala commented Feb 2, 2023

cc @HamidShojanazeri

@usteiner9

thanks

@spzala
Contributor Author

spzala commented Feb 6, 2023

@svekars hello, considering the changes, I am wondering if I need to request a review from any specific reviewer? No rush, I just thought it would be a good idea to ask in case it's needed. Thanks!

@subramen subramen self-requested a review February 13, 2023 15:31
Contributor

@subramen subramen left a comment


@spzala thank you for improving the tutorial!

@spzala
Contributor Author

spzala commented Feb 13, 2023

@suraj813 you're welcome, and thank you so much for the review!

@svekars svekars merged commit 327f259 into pytorch:main Feb 13, 2023