Magdalena Biesialska, Marta R. Costa‐jussà
Cross-lingual word embeddings aim to bridge the gap between high-resource and low-resource languages by enabling the learning of multilingual word representations even without any direct bilingual signal. The lion's share of methods are projection-based approaches that map pre-trained embeddings into a shared latent space. These methods are mostly based on an orthogonal transformation, which assumes the language vector spaces to be isomorphic. However, this assumption does not necessarily hold, especially for morphologically-rich languages. In this paper, we propose a self-supervised method to refine the alignment of unsupervised bilingual word embeddings. The proposed model moves vectors of words and their corresponding translations closer to each other, and enforces length- and center-invariance, thus allowing for a better alignment of cross-lingual embeddings. The experimental results demonstrate the effectiveness of our approach, as in most cases it outperforms state-of-the-art methods in a bilingual lexicon induction task.
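To make the projection-based setup referenced in the abstract concrete, below is a minimal sketch of its standard ingredients: mean-centering (center-invariance), unit-length normalization (length-invariance), and an orthogonal Procrustes mapping between the two embedding spaces. This is not the paper's refinement method itself; the function names and the toy random data are illustrative assumptions.

```python
import numpy as np

def normalize(emb):
    # Center-invariance: subtract the mean embedding vector,
    # then length-invariance: scale each row to unit norm.
    emb = emb - emb.mean(axis=0, keepdims=True)
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

def orthogonal_map(src, tgt):
    # Orthogonal Procrustes: find orthogonal W minimizing
    # ||src @ W - tgt||_F, given row-aligned translation pairs.
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

# Toy example (assumed data): rows are paired word vectors
# for seed translations between the two languages.
rng = np.random.default_rng(0)
src = normalize(rng.standard_normal((100, 50)))
tgt = normalize(rng.standard_normal((100, 50)))
W = orthogonal_map(src, tgt)
aligned = src @ W  # source embeddings mapped into the target space
```

Because W is orthogonal, the mapping preserves distances and angles within the source space, which is exactly why the approach implicitly assumes the two vector spaces are isomorphic; a refinement step like the one the paper proposes aims to compensate where that assumption breaks down.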