Yuanyuan Wang, Meng Liu, Xuemeng Song, Liqiang Nie
Video question answering, which aims to answer a natural language question about a given video, has become prevalent in the past few years. Despite remarkable progress, existing methods still suffer from insufficient comprehension of video content. To this end, we propose a spatial-temporal representative visual exploitation network for video question answering, which enhances video understanding by summarizing only representative visual information. To explore representative object-level information, we develop an adaptive attention mechanism based on uncertainty estimation. Meanwhile, to capture representative frame-level and clip-level visual information, we iteratively construct a much more compact set of representations in an expectation-maximization manner, discarding noisy information. Both quantitative and qualitative results on the NExT-QA, TGIF-QA, MSRVTT-QA, and MSVD-QA datasets demonstrate the superiority of our model over several state-of-the-art approaches.
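The abstract describes distilling frame- or clip-level features into a compact representative set via expectation-maximization iterations. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of that general idea: an E-step that soft-assigns features to a small set of bases, and an M-step that updates each basis as a responsibility-weighted mean. All names, sizes, and hyperparameters (`num_bases`, `num_iters`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def em_summarize(features, num_bases=4, num_iters=3, seed=0):
    """Illustrative EM-style summarization of (N, D) features into a
    compact (num_bases, D) representative set; noisy features receive
    low responsibilities and thus contribute little to the bases.
    NOTE: a sketch of the general technique, not the paper's method."""
    n, d = features.shape
    rng = np.random.default_rng(seed)
    # Initialize bases from randomly chosen features.
    bases = features[rng.choice(n, size=num_bases, replace=False)]
    for _ in range(num_iters):
        # E-step: soft-assign each feature to the bases (softmax over
        # dot-product similarities).
        logits = features @ bases.T                      # (N, K)
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        resp = np.exp(logits)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update each basis as the responsibility-weighted
        # mean of all features.
        weights = resp / (resp.sum(axis=0, keepdims=True) + 1e-8)  # (N, K)
        bases = weights.T @ features                     # (K, D)
    return bases
```

After a few iterations the bases converge toward cluster-like summaries of the feature set, giving a far smaller representation than the original N features.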