
Train bilingual embeddings as described in our NAACL 2015 workshop paper "Bilingual Word Representations with Monolingual Quality in Mind". The code also retains all the functionality of word2vec, with added features and improved code clarity. See the README below for more info.


text2vec
Thang Luong @ 2014, <lmthang@stanford.edu>. With contributions from Hieu Pham <hyhieu@stanford.edu>.

This codebase is based on the word2vec code by Tomas Mikolov
https://code.google.com/p/word2vec/ with added functionalities:
 (a) Train multiple iterations
 (b) Save in/out vectors
 (c) wordsim/analogy evaluation
 (d) Automatically save a vocab file, and load it if one already exists.
 (e) More comments
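
For example, features (a) and (d) can be combined. The commands below are only a sketch: -save-vocab and -read-vocab are standard word2vec options, while the iteration flag (written here as -iter) and all file names are assumptions; check the binary's usage output for the exact option names in this codebase:
  ./word2vec -train data/train.txt -output vectors.txt -size 100 -iter 5 -save-vocab data/train.vocab
  ./word2vec -train data/train.txt -output vectors.txt -size 100 -iter 5 -read-vocab data/train.vocab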

Files & Directories:
(a) demo-skip.sh: test skip-gram model 
(b) demo-cbow.sh: test cbow model
(c) demo/: contains expected outputs of the above scripts
(d) wordsim / analogy: to evaluate trained embeddings on the word similarity
and analogy tasks.
(e) run_mono.sh: train mono models.
(f) cldc/, run_bi.sh, demo-bi.sh: related to the bilingual embedding models (ongoing work). To run these, you need to (see the sketch below):
  (i) go into cldc/ and run ant
  (ii) copy the CLDC data into cldc/data (ask Thang/Hieu for this data).
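  In shell terms, the two steps above amount to roughly the following (the data path is a placeholder; the data itself must come from the authors):
    cd cldc && ant                          # build the CLDC evaluation code
    cp -r /path/to/cldc-data/* cldc/data/   # copy in the CLDC data obtained from Thang/Hieu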

Notes:
If you don't have Matlab, modify demo-*.sh to set -eval 0 (instead of -eval 1).

Sample commands:
* Bi skipgram: ./run_bi.sh 1 tmp data/data.10k 50 1 5 4 10
* Bi CBOW: ./run_bi.sh 1 tmp data/data.10k 50 1 5 4 10 1

-------- Mikolov's README -------

Tools for computing distributed representations of words
---------------------------------------------------------

We provide an implementation of the Continuous Bag-of-Words (CBOW) and the Skip-gram model (SG), as well as several demo scripts.

Given a text corpus, the word2vec tool learns a vector for every word in the vocabulary using the Continuous
Bag-of-Words or the Skip-Gram neural network architectures. The user should specify the following:
 - desired vector dimensionality
 - the size of the context window for either the Skip-Gram or the Continuous Bag-of-Words model
 - training algorithm: hierarchical softmax and / or negative sampling
 - threshold for downsampling the frequent words 
 - number of threads to use
 - the format of the output word vector file (text or binary)

Usually, the other hyper-parameters such as the learning rate do not need to be tuned for different training sets. 
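
A representative invocation covering the options above might look like this (corpus and output names are placeholders; the flags are the standard word2vec ones used by the demo scripts):
  ./word2vec -train corpus.txt -output vectors.bin -size 200 -window 5 -cbow 1 -hs 0 -negative 5 -sample 1e-4 -threads 12 -binary 1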

The script demo-word.sh downloads a small (100MB) text corpus from the web, and trains a small word vector model. After the training
is finished, the user can interactively explore the similarity of the words.

More information about the scripts is provided at https://code.google.com/p/word2vec/
