Neural machine translation (NMT) is the current state-of-the-art approach for machine translation. However, NMT models require large amounts of training data, which makes NMT in low-resource scenarios a challenging problem. In this paper, we survey three categories of methods for low-resource NMT. First, data augmentation is the most direct solution, generating additional parallel corpora. Second, multilingual NMT models can improve performance on low-resource languages. Finally, multimodal NMT is especially useful because multimodal information is easy to acquire online. We also outline promising directions for future research.