Hi, I am really impressed by your readable code. A simple question about the code, which may not be worth mentioning: I think training should stop when the PPL on the development set no longer improves, or when the error rate meets our requirement, rather than being based on the training set. If the current PPL is larger than the previous one, we should adjust the learning rate or make some other decision.
@fangyw Yes, RNNSharp already supports such a strategy, but we don't use it currently. If you want to enable this feature, you can check the return value of "rnn.ValidateNet(ValidationSet, iter)". If it is false, it means we could not get a better model on the validation corpus, so we can then update the learning rate.
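The strategy described above can be sketched as a small decision function: after each pass, compare the validation perplexity against the best seen so far, and decay the learning rate when it stops improving. This is a hedged, generic illustration in Python, not RNNSharp's actual code; the function name, the decay factor, and the stopping threshold are all assumptions, and ValidateNet's real semantics may differ.

```python
def update_on_validation(ppl, best_ppl, lr, decay=0.5, min_gain=1e-3):
    """Decide what to do after one validation pass.

    Returns (new_best_ppl, new_lr, stop). All names and constants here
    are illustrative, not taken from RNNSharp.
    """
    if ppl < best_ppl - min_gain:
        return ppl, lr, False        # validation improved: keep current lr
    new_lr = lr * decay              # no improvement: decay the learning rate
    return best_ppl, new_lr, new_lr < 1e-4  # stop once lr is exhausted

# Simulated validation perplexities over four iterations:
ppls = [120.0, 95.0, 96.5, 94.8]
best, lr = float("inf"), 0.1
for p in ppls:
    best, lr, stop = update_on_validation(p, best, lr)
    if stop:
        break
```

In this toy run the third iteration fails to improve, so the learning rate is halved, and the fourth iteration then produces a better model, so training continues.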