Commit a079550: Update answers.txt
1 parent 85b9de8 commit a079550

File tree: 1 file changed (+1, -4 lines)


code/answers.txt

Lines changed: 1 addition & 4 deletions
@@ -1,6 +1,3 @@
- Name: Dorin Keshales
- ID: 313298424
-
1. I got about the same accuracy with both models, around 85%-86.6%.
Sometimes the log-linear model even reached an accuracy of 87%.
In my opinion, when the linear model already achieves such high accuracy, there is not much left for an MLP with one hidden layer to improve. In other words, a linear model is sufficient in this case to solve the language identification task well.
@@ -12,4 +9,4 @@ In my opinion, the reason for lower percentage of accuracy with the letter-unigr

3. In each execution of train_mlp1, I got a different number of iterations needed to correctly solve the XOR problem. In my opinion, this is caused by the random initialisation of the weight matrices and bias vectors. Moreover, it is known that a perceptron does not guarantee that, after seeing an example, it will classify that example correctly the next time it sees it. That is another reason that can explain the differences between the runs.
In order to still answer the question, I used an average of 5 runs, which gives an approximate count of how many iterations it takes mlp1 to correctly solve the XOR problem.
- So, on an average of 5 runs, I was able to solve the xor problem in the 34th iteration.
+ So, on an average of 5 runs, I was able to solve the xor problem in the 34th iteration.
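The averaging procedure described above can be sketched as follows. This is a minimal illustration, not the course's actual `mlp1`/`train_mlp1` code: it assumes a one-hidden-layer MLP with a tanh hidden layer and a sigmoid output, trained by full-batch gradient descent, and it records the first iteration at which all four XOR points are classified correctly. The hidden size, learning rate, and initialisation scheme are assumptions chosen for the sketch.

```python
import numpy as np

def train_xor_once(hidden_dim=4, lr=0.5, max_iters=10000, rng=None):
    """Train a one-hidden-layer MLP (tanh hidden, sigmoid output) on XOR.
    Returns the first iteration at which all four points are classified
    correctly, or max_iters if that never happens."""
    rng = rng if rng is not None else np.random.default_rng()
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Random initialisation -- the source of the run-to-run variance
    # discussed in the answer above.
    W1 = rng.normal(0, 1, (2, hidden_dim))
    b1 = np.zeros(hidden_dim)
    W2 = rng.normal(0, 1, (hidden_dim, 1))
    b2 = np.zeros(1)

    for it in range(1, max_iters + 1):
        # Forward pass.
        h = np.tanh(X @ W1 + b1)
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid

        # Stop as soon as all four XOR points fall on the right side of 0.5.
        if np.all((out > 0.5) == (y > 0.5)):
            return it

        # Backward pass: cross-entropy loss + sigmoid gives gradient out - y.
        d_out = out - y
        dW2 = h.T @ d_out
        db2 = d_out.sum(axis=0)
        d_h = (d_out @ W2.T) * (1 - h ** 2)  # tanh derivative
        dW1 = X.T @ d_h
        db1 = d_h.sum(axis=0)

        W2 -= lr * dW2
        b2 -= lr * db2
        W1 -= lr * dW1
        b1 -= lr * db1
    return max_iters

rng = np.random.default_rng(0)
iters = [train_xor_once(rng=rng) for _ in range(5)]
print(f"iterations per run: {iters}, average: {np.mean(iters):.1f}")
```

Because each run starts from different random weights, the per-run iteration counts differ, which is exactly why averaging over several runs gives a more stable answer than any single run.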
