CONFERENCE PAPER

A Simple and Effective Positional Encoding for Transformers

Pu-Chin Chen, Henry Tsai, Srinadh Bhojanapalli, Hyung Won Chung, Yin-Wen Chang, Chun-Sung Ferng

Year: 2021 | Published in: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing | Pages: 2974-2988

Abstract

Transformer models are permutation equivariant. To supply the order and type information of the input tokens, position and segment embeddings are usually added to the input. Recent works have proposed variations of positional encodings, with relative position encodings achieving better performance. Our analysis shows that the gain actually comes from moving positional information from the input to the attention layer. Motivated by this, we introduce Decoupled Positional Attention for Transformers (DIET), a simple yet effective mechanism to encode position and segment information into Transformer models. The proposed method has faster training and inference time, while achieving competitive performance on the GLUE, XTREME and WMT benchmarks. We further generalize our method to long-range transformers and show performance gains.
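To make the abstract's core idea concrete, below is a minimal PyTorch sketch of positional information entering the model as an additive bias on the attention logits, decoupled from the content pathway, rather than as position embeddings added to the input. This is an illustrative sketch of the general mechanism, not the paper's implementation: the class name DecoupledPositionalAttention, the pos_bias parameter, and the choice of a single learned absolute position-to-position bias per head are assumptions made here; the paper studies several DIET variants.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledPositionalAttention(nn.Module):
    """Sketch of decoupled positional attention: position information is a
    learned additive bias on the attention logits, so no position embeddings
    are added to the token embeddings at the input."""

    def __init__(self, d_model: int, n_heads: int, max_len: int = 512):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Learned per-head position-to-position bias (an assumption for this
        # sketch; the paper explores several positional attention variants).
        self.pos_bias = nn.Parameter(torch.zeros(n_heads, max_len, max_len))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) -- note: no position embeddings added.
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, seq_len, d_head).
        q = q.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        # Content-content logits plus the decoupled positional bias.
        logits = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        logits = logits + self.pos_bias[:, :n, :n]
        attn = F.softmax(logits, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.out(y)
```

Because the positional bias does not depend on the token content, it can be computed once and reused across examples, which is consistent with the faster training and inference the abstract reports.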

Keywords:
Transformer, Computer science, ENCODE, Inference, Encoding (memory), Artificial intelligence, Algorithm, Engineering, Electrical engineering

Metrics

Cited By: 50
FWCI (Field-Weighted Citation Impact): 4.05
References: 32
Citation Normalized Percentile: 0.95 (in top 10%)

Topics

Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Natural Language Processing Techniques (Physical Sciences → Computer Science → Artificial Intelligence)
Multimodal Machine Learning Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)