Generating natural sentences from images is a fundamental learning task for visual-semantic understanding in multimedia. In this paper, we propose to apply dual attention on pyramid image feature maps to fully explore the visual-semantic correlations and improve the quality of generated sentences. Specifically, by fully exploiting the contextual information provided by the hidden state of the RNN controller, the pyramid attention can better localize the visually indicative and semantically consistent regions in images. Meanwhile, the contextual information can help re-calibrate the importance of feature components by learning the channel-wise dependencies, improving the discriminative power of the visual features for better content description. We conducted comprehensive experiments on three well-known datasets: Flickr8K, Flickr30K, and MS COCO, and achieved impressive results in generating descriptive and fluent natural sentences from images. Using either convolutional visual features or the more informative bottom-up attention features, our composite captioning model achieves very promising performance in a single-model setting. The proposed pyramid attention and dual attention methods are highly modular and can be inserted into various image captioning models to further improve performance.
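The dual-attention idea described above combines two context-conditioned operations: spatial attention that weights image regions given the RNN hidden state, and channel attention that re-calibrates feature channels from the same state. The following is a minimal NumPy sketch of that combination; the function name `dual_attention`, the concatenation-based spatial scoring, and the sigmoid channel gate are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def dual_attention(features, hidden, W_s, W_c):
    """Sketch of context-conditioned dual attention.

    features: (R, D) array of R region features with D channels
    hidden:   (H,)   RNN controller hidden state (the context)
    W_s:      (D+H,) spatial scoring weights (assumed linear scorer)
    W_c:      (H, D) channel-gating weights (assumed linear gate)
    Returns the attended, channel-gated feature and the spatial weights.
    """
    R, D = features.shape
    # Spatial attention: score each region from [region feature; hidden state]
    scores = np.array(
        [np.concatenate([features[r], hidden]) @ W_s for r in range(R)]
    )
    alpha = softmax(scores)          # (R,) spatial attention weights
    context = alpha @ features       # (D,) attended visual feature

    # Channel attention: sigmoid gate per channel, driven by the hidden state
    gate = 1.0 / (1.0 + np.exp(-(hidden @ W_c)))  # (D,) channel weights
    return gate * context, alpha
```

In a full captioning model, this operation would be applied at each decoding step, with pyramid attention repeating the spatial part over feature maps at multiple scales before fusing the results.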