We evaluate different word embedding methods on Turkish. The goal is to represent words in a high-dimensional space such that the positions of related words reflect their relationship. We compare word2vec, fastText, and ELMo on three Turkish corpora of different sizes. Word2vec operates at the word level and fastText at the character (subword) level; ELMo, unlike the other two, produces context-dependent representations. Our experiments show that fastText performs better on noun and verb inflection tasks, while word2vec performs better on semantic and syntactic analogy tasks. A simple bag-of-words model outperforms most of the trained word embedding models on text classification.
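To make the bag-of-words baseline concrete, the sketch below classifies a short text by cosine similarity between raw token-count vectors, with no trained embeddings involved. This is a minimal illustration of the baseline idea only, not the paper's actual classification setup; the toy documents, labels, and nearest-neighbour decision rule are all hypothetical.

```python
from collections import Counter
import math

def bow_vector(text):
    """Bag-of-words: token counts, ignoring word order and context."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy labeled "corpus" (hypothetical examples, not the paper's data).
train = [
    ("spor", "takim mac galibiyet gol takim"),
    ("ekonomi", "piyasa faiz enflasyon piyasa dolar"),
]

def classify(text):
    """Assign the label of the most similar training document,
    comparing plain count vectors rather than learned embeddings."""
    v = bow_vector(text)
    return max(train, key=lambda lt: cosine(v, bow_vector(lt[1])))[0]

print(classify("gol ve mac"))     # -> spor
print(classify("dolar ve faiz"))  # -> ekonomi
```

Because such a counting model already captures strong topical signal, it can be a surprisingly hard baseline for trained embeddings to beat on classification, which is consistent with the result reported above.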