Image captioning based on deep learning spans two major domains: computer vision and natural language processing. The Transformer architecture has achieved leading performance in natural language processing, and several studies have applied Transformers to the encoder and decoder of image captioning models, with results showing better performance than previous solutions. Positional encoding is an essential component of the Transformer. The Rotary Transformer (RoFormer) introduced Rotary Position Embedding (RoPE) and has achieved comparable or superior performance on various language modeling tasks, yet limited work has adapted the RoFormer architecture to image captioning. This study investigates the positional encoding of the Transformer architecture; our proposed model consists of a modified RoFormer as the encoder and BERT as the decoder. With extracted image features as inputs, together with several training tricks, our model achieves similar or better performance on the MSCOCO dataset compared with "CNN+RNN" models and regular Transformer solutions.
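To make the key mechanism concrete, the following is a minimal NumPy sketch of Rotary Position Embedding as described in the RoFormer paper: each consecutive pair of features is rotated by an angle that grows with the token position, so attention dot products become a function of the relative offset between positions. This is an illustrative re-implementation, not the code used in this study; the function name and the use of NumPy are our own choices.

```python
import numpy as np

def rotary_position_embedding(x, base=10000.0):
    """Apply RoPE to a sequence of vectors.

    x: array of shape (seq_len, dim), with dim even.
    Returns an array of the same shape where the pair
    (x[m, 2i], x[m, 2i+1]) is rotated by angle m * base**(-2i/dim).
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0, "feature dimension must be even"
    # Per-pair rotation frequencies, as in the RoFormer paper.
    freqs = base ** (-np.arange(0, dim, 2) / dim)      # (dim/2,)
    angles = np.outer(np.arange(seq_len), freqs)       # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                    # split into pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                 # 2-D rotation of
    out[:, 1::2] = x1 * sin + x2 * cos                 # each feature pair
    return out
```

The property that matters for attention is that the dot product between a rotated query at position m and a rotated key at position n depends only on the offset n - m, which is what lets the model encode relative positions without explicit position embeddings added to the inputs.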
Anisha Adhikari, Mahigya Dahal, Rudra Nepal, Priya Shilpakar