Yeah, I'm trying to train with word2vec.
Word2vec vectors can be 100d, 200d, or 300d, i.e. a 1-D array with 100 values per word for a 100d model.
Can anyone help me figure out where I should change the dimension values?
For example, what values should be replaced in lines like these: self.embedding(input).view(1, 1, -1)
return torch.tensor(indexes, dtype=torch.long, device=device).view(-1, 1)
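For what it's worth, the dimension doesn't come from those two lines at all; it comes from the size of the embedding matrix. A minimal sketch of loading pretrained word2vec-style vectors into `nn.Embedding` (the vocabulary and random vectors below are stand-ins for what you would normally load with gensim; the variable names are my own, not from the tutorial):

```python
import torch
import torch.nn as nn

# Stand-in for real word2vec vectors (normally loaded from disk, e.g. with
# gensim's KeyedVectors); random here just to illustrate the shapes.
EMBED_DIM = 300  # must match the word2vec model's dimensionality
vocab = {"<sos>": 0, "<eos>": 1, "hello": 2, "world": 3}
pretrained = torch.randn(len(vocab), EMBED_DIM)

embedding = nn.Embedding(len(vocab), EMBED_DIM)
embedding.weight.data.copy_(pretrained)   # load the pretrained weights
embedding.weight.requires_grad = False    # optional: freeze the embeddings

# The GRU's input size must equal EMBED_DIM, since it consumes the
# embedded tokens directly:
gru = nn.GRU(EMBED_DIM, EMBED_DIM)

input_index = torch.tensor([[vocab["hello"]]])    # shape (1, 1)
embedded = embedding(input_index).view(1, 1, -1)  # shape (1, 1, 300)
output, hidden = gru(embedded, torch.zeros(1, 1, EMBED_DIM))
print(embedded.shape)  # torch.Size([1, 1, 300])
```

The `.view(1, 1, -1)` call stays exactly as it is: `-1` makes PyTorch infer the last dimension, so it automatically becomes 100, 200, or 300 depending on the embedding size.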
torchtext currently supports pretrained GloVe, FastText, and CharNGram embeddings. Other embeddings can be loaded using torchtext.vocab.Vectors. If anyone is interested, I can edit the tutorial to show how you could use those.
Hi,
Thank you for your tutorial! I tried to replace the embedding with pre-trained word embeddings such as word2vec; here is my code:
The word2vec model I'm using is 300-dimensional.
Do I need to change anything else in my Encoder?
Thank you!
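In case it helps: yes, the GRU's input size has to follow the embedding size. A minimal sketch of a tutorial-style encoder where the embedding dimension (300 for word2vec) is decoupled from `hidden_size` (class and parameter names are assumptions based on the seq2seq tutorial, not the poster's actual code):

```python
import torch
import torch.nn as nn

class EncoderRNN(nn.Module):
    def __init__(self, input_size, embed_dim, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(input_size, embed_dim)
        # The GRU input size must match embed_dim, not hidden_size.
        self.gru = nn.GRU(embed_dim, hidden_size)

    def forward(self, input, hidden):
        embedded = self.embedding(input).view(1, 1, -1)  # (1, 1, embed_dim)
        return self.gru(embedded, hidden)

    def init_hidden(self):
        return torch.zeros(1, 1, self.hidden_size)

# Example: 300-dim word2vec embeddings, 256-dim hidden state.
enc = EncoderRNN(input_size=1000, embed_dim=300, hidden_size=256)
out, h = enc(torch.tensor([5]), enc.init_hidden())
print(out.shape)  # torch.Size([1, 1, 256])
```

If the tutorial code reuses `hidden_size` for the embedding size, then setting `hidden_size=300` everywhere also works; the version above just makes the two sizes independent.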