Hi! Had a quick question about the discrepancy between the input embeddings:
There are 50176 rows in this module's embedding matrix, but the tokenizer has only 50101 vocabulary items (https://huggingface.co/UFNLP/gatortron-base/raw/main/vocab.txt).
Is there a reason for this discrepancy? It forces us to hard-code the vocabulary size as a workaround, and we want to make sure we are still initializing correctly from gatortron.
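For context, here is roughly what our workaround looks like. This is a minimal sketch assuming the checkpoint loads through the standard transformers AutoModel/AutoTokenizer API; the exact model class may differ for this checkpoint:

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("UFNLP/gatortron-base")
tokenizer = AutoTokenizer.from_pretrained("UFNLP/gatortron-base")

# The embedding matrix has more rows than the tokenizer has tokens.
print(model.get_input_embeddings().weight.shape[0])  # 50176
print(len(tokenizer))                                # 50101

# Resize the embedding table to the tokenizer's vocabulary size;
# the 75 trailing rows (which no token should ever index) are dropped.
model.resize_token_embeddings(len(tokenizer))
```

We are assuming the extra 75 rows are safe to drop; please correct us if they actually matter.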
Otherwise, thank you so much for open-sourcing this! It is extremely helpful :)