JOURNAL ARTICLE

Multimodal Transformer for Multimodal Machine Translation

Abstract

Multimodal Machine Translation (MMT) aims to introduce information from other modalities, generally static images, to improve translation quality. Previous works propose various incorporation methods, but most of them do not consider the relative importance of the modalities: in MMT, treating text and images equally may encode too much irrelevant visual information and thereby introduce noise. In this paper, we propose a multimodal self-attention mechanism in the Transformer to address these issues. The proposed method learns the representations of images based on the text, which avoids encoding irrelevant information from images. Experiments and visualization analysis demonstrate that our model benefits from visual information and substantially outperforms previous works and competitive baselines in terms of various metrics.
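The core idea in the abstract, learning image representations conditioned on the text, can be read as an attention layer whose queries come only from the text tokens while keys and values span the concatenated text and image features. The following is a minimal sketch of that idea, not the paper's exact implementation: shapes, the single-head form, and the function name `multimodal_self_attention` are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multimodal_self_attention(text, image):
    """Scaled dot-product attention where queries come only from the
    text sequence, while keys/values span the concatenated text+image
    features, so image regions are weighted by their relevance to the text."""
    d_k = text.shape[-1]
    kv = np.concatenate([text, image], axis=0)      # (T+I, d) keys/values
    scores = text @ kv.T / np.sqrt(d_k)             # (T, T+I) text-side queries
    weights = softmax(scores, axis=-1)              # each text token attends over text+image
    return weights @ kv                             # (T, d) text-conditioned representations

rng = np.random.default_rng(0)
text = rng.standard_normal((5, 16))    # 5 text token embeddings
image = rng.standard_normal((3, 16))   # 3 image region features
out = multimodal_self_attention(text, image)
```

Because the output has the same length as the text sequence, irrelevant image regions can simply receive low attention weight instead of being carried forward as separate encoder states.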

Keywords:
Machine translation, Transformer, Visualization, Encoding

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 0
Citation Normalized Percentile: 0.41

Topics

Natural Language Processing Techniques
Physical Sciences →  Computer Science →  Artificial Intelligence
Generative Adversarial Networks and Image Synthesis
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition

Related Documents

JOURNAL ARTICLE

Multimodal Transformer for Multimodal Machine Translation

Shaowei Yao, Xiaojun Wan

Journal:   Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics Year: 2020
JOURNAL ARTICLE

Multimodal Machine Translation

Jiatong Liu

Journal:   IEEE Access Year: 2021 Pages: 1-1
JOURNAL ARTICLE

5. Multimodal Machine Translation

Hideki Nakayama

Journal:   The Journal of The Institute of Image Information and Television Engineers Year: 2018 Vol: 72 (9) Pages: 668-671