
Step size and gradient clipping for bias terms #209

Open

ErinGeorge wants to merge 1 commit into master

Conversation

ErinGeorge
I added the same processing to the bias-term updates for the word vectors that the other updates already receive. Without it, the eta and grad-clip parameters do not behave as described, and the loss function being minimized is not quite the one that appears in the original paper.

In personal experiments, this does not seem to affect the final output noticeably in most cases. It appears to matter only in certain edge cases where the original code fails to converge, such as when the co-occurrence matrix contains many entries between 0 and 1.0.
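For illustration, here is a minimal sketch of the kind of bias update being described: the bias gradient gets the same clipping and eta scaling as the vector gradients before the AdaGrad-style step. Variable names (fdiff, eta, grad_clip) follow the conventions in glove.c, but this is not the exact diff in this PR.

```c
#include <math.h>

/* Sketch only, not the exact patch: mirror the vector-term treatment
 * (gradient clipping, then scaling by eta) in the bias update before
 * the AdaGrad division by the accumulated squared gradient. */
static double clip(double x, double limit) {
    return fmin(fmax(x, -limit), limit);
}

static double update_bias(double bias, double fdiff, double eta,
                          double grad_clip, double *gradsq_bias) {
    double step = clip(fdiff, grad_clip) * eta;   /* clipped, eta-scaled gradient */
    bias -= step / sqrt(*gradsq_bias);            /* AdaGrad-style update */
    *gradsq_bias += step * step;                  /* accumulate squared gradient */
    return bias;
}
```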

Adds gradient clipping and step size for bias update terms.
@AngledLuffa
Contributor

What's kind of weird about this is that by missing the eta term in the original bias calculation, we've effectively made the learning rate for the bias 200x the default learning rate for the rest of the parameters. Our first couple of experiments rebuilding the English word vectors with and without this change suggest that the new learning rate is making the results worse. We'll dig into it some more and check whether there's a way to scale eta for the bias so that the vectors come out better.
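One purely hypothetical way to run that experiment, reusing the `update_bias` sketch above: pass a scaled eta for the bias terms only and sweep the scale factor, keeping eta unchanged for the vectors.

```c
/* Hypothetical knob, not part of this PR: a bias-specific multiplier on eta,
 * so the effective bias step size can be swept independently of the vectors. */
double bias_eta_scale = 10.0;  /* candidate value to sweep, e.g. 1, 10, 100 */
bias1 = update_bias(bias1, fdiff, eta * bias_eta_scale, grad_clip, &gradsq_bias1);
```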
