JOURNAL ARTICLE

Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation?

Abstract

What can pre-trained multilingual sequence-to-sequence models like mBART contribute to translating low-resource languages? We conduct a thorough empirical experiment in 10 languages to ascertain this, considering five factors: (1) the amount of fine-tuning data, (2) the noise in the fine-tuning data, (3) the amount of pre-training data in the model, (4) the impact of domain mismatch, and (5) language typology. In addition to yielding several heuristics, the experiments form a framework for evaluating the data sensitivities of machine translation systems. While mBART is robust to domain differences, its translations for unseen and typologically distant languages remain below 3.0 BLEU. In answer to our title's question, mBART is not a low-resource panacea; we therefore encourage shifting the emphasis from new models to new data.
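The experiments described above amount to fine-tuning a pre-trained mBART checkpoint on parallel data of varying size, noise level, and domain, then measuring translation quality. The sketch below illustrates that general recipe using the Hugging Face `transformers` API; it is a minimal illustration, not the authors' experimental configuration, and the checkpoint name, language codes, placeholder data, and hyperparameters are all assumptions made for the example.

```python
# Minimal sketch (not the authors' setup): fine-tune a pre-trained mBART
# checkpoint on a tiny parallel corpus, then translate with it.
# Requires `torch` and a recent `transformers`; all names below are illustrative.
import torch
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_name = "facebook/mbart-large-cc25"  # mBART pre-trained on 25 languages
tokenizer = MBartTokenizer.from_pretrained(model_name, src_lang="en_XX", tgt_lang="si_LK")
model = MBartForConditionalGeneration.from_pretrained(model_name)

# Placeholder parallel data standing in for a low-resource fine-tuning corpus.
src_sentences = ["The weather is nice today."]
tgt_sentences = ["<reference translation in the target language>"]

# Tokenize source and target; `text_target` yields the `labels` tensor used
# for the cross-entropy fine-tuning loss (recent transformers versions).
batch = tokenizer(src_sentences, text_target=tgt_sentences,
                  padding=True, truncation=True, max_length=128,
                  return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

model.train()
for _ in range(3):  # a few illustrative gradient updates
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Translate: for mBART-25 the decoder is started with the target-language
# code so that generation comes out in the desired language.
model.eval()
generated = model.generate(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    decoder_start_token_id=tokenizer.lang_code_to_id["si_LK"],
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

In the paper, what varies across such runs is the amount and noisiness of the fine-tuning data, the domain match between fine-tuning and test data, and whether the target language was covered during pre-training; those factors drive the reported BLEU differences.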

Keywords:
Computer science, Machine translation, Natural language processing, Sequence-to-sequence models, Artificial intelligence, Heuristics, Typology, Low-resource languages, Language model, Linguistics
