forked from Atcold/NYU-DLSP20
Commit 39fc6fa: Update 09-echo_data.ipynb (Atcold#801)
Hi Alfredo,
I realized that when we increase batch_size from 5 to 100, the reported accuracy goes above 100%. I propose the following changes to correct it:
Change 1:
Added one constant:
total_values_in_one_chunck = batch_size * BPTT_T
Change 2:
Changed:
correct += (pred == target.byte()).int().sum().item()
To (in both def train(hidden) and def test(hidden)):
correct += (pred == target.byte()).int().sum().item()/total_values_in_one_chunck
(After this change, each comparison adds a number between 0 and 1 to the running correct value, instead of a number between 0 and total_values_in_one_chunck, i.e. batch_size * BPTT_T.)
Change 3:
Changed:
train_accuracy = float(correct) / train_size # train_size = num_of_chuncks
To:
train_accuracy = float(correct)*100 / train_size # train_size = num_of_chuncks
After these 3 changes, we get an accuracy below 100% for batch_size = 100 and above, because what is now added to "correct" after each comparison is a fraction between 0 and 1. To me this makes more sense for evaluating the equality rate of two vectors than counting the total number of equal values.
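The three changes together can be sketched as follows. This is a minimal stand-in using NumPy arrays in place of the notebook's PyTorch tensors; the sizes, the number of chunks, and the 10% error rate are made up for illustration:

```python
import numpy as np

# Hypothetical sizes standing in for the notebook's values
batch_size, BPTT_T = 100, 20
total_values_in_one_chunck = batch_size * BPTT_T  # Change 1

def chunk_accuracy_fraction(pred, target):
    # Change 2: each chunk now contributes a fraction in [0, 1],
    # not a raw count in [0, total_values_in_one_chunck]
    return (pred == target).sum() / total_values_in_one_chunck

rng = np.random.default_rng(0)
num_of_chuncks = 10
correct = 0.0
for _ in range(num_of_chuncks):
    target = rng.integers(0, 2, size=(batch_size, BPTT_T))
    pred = target.copy()
    # flip roughly 10% of predictions so accuracy sits below 100%
    mask = rng.random(target.shape) < 0.1
    pred[mask] = 1 - pred[mask]
    correct += chunk_accuracy_fraction(pred, target)

train_size = num_of_chuncks
train_accuracy = float(correct) * 100 / train_size  # Change 3
```

With this scheme, train_accuracy is bounded by 100 regardless of batch_size, since each chunk contributes at most 1 to correct.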
Please let me know what you think.
Thanks,
Gelareh

1 parent b0a2aea