TLDR; The authors train a multilingual Neural Machine Translation (NMT) system based on the Google NMT architecture by prepending a special 2[lang] token (e.g. 2fr) to the input sequence to specify the target language. They empirically evaluate the model on many-to-one, one-to-many, and many-to-many translation tasks and find evidence for shared representations across languages (an interlingua).
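
To make the mechanism concrete, here is a minimal sketch (in Python, not the authors' code) of how such a target-language token could be prepended during preprocessing; the function name and exact token format are illustrative assumptions:

```python
def add_target_token(source_sentence: str, target_lang: str) -> str:
    """Prepend a 2[lang] token (e.g. '2fr') so a single model
    can be steered toward different target languages."""
    return f"2{target_lang} {source_sentence}"

# The same English source can be routed to different target languages:
print(add_target_token("Hello, how are you?", "fr"))  # "2fr Hello, how are you?"
print(add_target_token("Hello, how are you?", "es"))  # "2es Hello, how are you?"
```

The appeal of this design is that it requires no architectural changes: the target language is just another token in the source vocabulary, so one model can serve all translation directions seen in training.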