Fuxue Li, Chuncheng Chi, Hong Yan, Beibei Liu, Mingzhi Shao
Transformer-based neural machine translation (NMT) has achieved state-of-the-art performance in machine translation. However, it relies on the availability of large parallel corpora; for low-resource language pairs, the amount of parallel data is insufficient, resulting in poor translation quality. To alleviate this issue, this paper proposes an efficient data augmentation (DA) method named STA. First, pseudo-parallel sentence pairs are generated by translating sentence trunks with a target-to-source NMT model. Then, two strategies are introduced to merge the original data and the pseudo-parallel corpus to augment the training set. Experimental results on simulated and real low-resource translation tasks show that the proposed method improves translation quality over a strong baseline and also outperforms other data augmentation methods. Moreover, STA can further improve translation quality when combined with back-translation using extra monolingual data.
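The augmentation procedure the abstract describes can be sketched as follows. This is a minimal illustration only: `translate_tgt2src` is a stand-in for a trained target-to-source NMT model, and the trunk-extraction heuristic (keeping the longest clause) is an assumption for demonstration, not the paper's actual trunk-identification procedure.

```python
def extract_trunk(sentence: str) -> str:
    """Illustrative heuristic (assumption): keep the longest comma-separated clause."""
    clauses = [c.strip() for c in sentence.split(",")]
    return max(clauses, key=len)

def translate_tgt2src(target_trunk: str) -> str:
    """Placeholder for a trained target-to-source NMT model (assumption)."""
    return f"<src-of:{target_trunk}>"

def generate_pseudo_pairs(target_sentences):
    """Back-translate sentence trunks to build pseudo-parallel pairs."""
    pairs = []
    for tgt in target_sentences:
        trunk = extract_trunk(tgt)
        src = translate_tgt2src(trunk)  # translate the trunk, not the full sentence
        pairs.append((src, trunk))
    return pairs

def merge_concat(original_pairs, pseudo_pairs):
    """One possible merging strategy (assumed): concatenate both corpora."""
    return original_pairs + pseudo_pairs

if __name__ == "__main__":
    tgt_corpus = ["The model improves quality, despite limited data."]
    pseudo = generate_pseudo_pairs(tgt_corpus)
    train = merge_concat([("src sent", "tgt sent")], pseudo)
    print(len(train))  # augmented training set size
```

In practice, the placeholder translator would be replaced by a real target-to-source model, and the merged set would be used to retrain the source-to-target system.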