JOURNAL ARTICLE

UAT: Universal Attention Transformer for Video Captioning

Heeju Im, Yong Suk Choi

Year: 2022   Journal: Sensors   Vol: 22 (13)   Pages: 4817   Publisher: Multidisciplinary Digital Publishing Institute

Abstract

Video captioning via encoder–decoder structures is a successful approach to sentence generation. A standard way to improve performance is to use several feature extraction networks in the encoding process so that multiple kinds of visual features are obtained. Such feature extraction networks are kept weight-frozen and are based on convolutional neural networks (CNNs). However, these traditional feature extraction methods have two problems. First, because the feature extraction models are frozen, they cannot be trained further by backpropagating the loss obtained during video captioning training; in particular, this prevents the feature extraction models from learning more about spatial information. Second, using multiple CNNs further increases the complexity of the model. In addition, the authors of the Vision Transformer (ViT) pointed out an inductive bias of CNNs, namely the local receptive field. We therefore propose a full transformer structure trained end to end for video captioning to overcome these problems. As the feature extraction model we use a vision transformer (ViT), and we propose feature extraction gates (FEGs) to enrich the input of the captioning model through that extraction model. Additionally, we design a universal encoder attention (UEA) that takes the outputs of all encoder layers and performs self-attention over them. The UEA addresses the lack of information about the video's temporal relationships, since our method uses only the appearance feature. We evaluate our model against several recent models on two benchmark datasets, MSRVTT and MSVD, and show competitive performance. Although the proposed model performs captioning with only a single feature, in some cases it outperforms models that use several features.
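
The abstract describes two architectural ideas: feature extraction gates (FEGs) that enrich the ViT features fed to the captioning model, and a universal encoder attention (UEA) that runs self-attention over the outputs of every encoder layer rather than only the last one. The sketch below only illustrates the UEA idea in PyTorch under assumed sizes; the layer types (nn.TransformerEncoderLayer, nn.MultiheadAttention), the concatenation along the token axis, and the treatment of each frame's ViT feature as one token are assumptions for illustration, not the authors' exact formulation.

```python
# A minimal sketch (not the authors' exact implementation) of the universal
# encoder attention idea: keep the output of every encoder layer, stack the
# per-layer outputs, and apply one more self-attention pass over the stack so
# later stages can attend to all encoder depths at once.
import torch
import torch.nn as nn


class UniversalEncoderAttention(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        # extra self-attention applied across all per-layer outputs
        self.cross_layer_attn = nn.MultiheadAttention(
            d_model, n_heads, batch_first=True
        )

    def forward(self, frame_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (batch, n_frames, d_model), e.g. one ViT feature per frame
        outputs, x = [], frame_tokens
        for layer in self.layers:
            x = layer(x)          # standard encoder layer
            outputs.append(x)     # keep every layer's output, not just the last
        # concatenate layer outputs along the token axis:
        # (batch, n_layers * n_frames, d_model)
        stacked = torch.cat(outputs, dim=1)
        fused, _ = self.cross_layer_attn(stacked, stacked, stacked)
        return fused              # would serve as memory for the caption decoder


if __name__ == "__main__":
    uea = UniversalEncoderAttention()
    video = torch.randn(2, 16, 512)   # 2 clips, 16 frames, 512-dim features each
    print(uea(video).shape)           # torch.Size([2, 64, 512])
```

Concatenating along the token axis is only one plausible way to expose all layer outputs to a single attention pass; consult the paper for how the UEA actually combines them and how the FEGs gate the ViT features.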

Keywords:
Closed captioning; Computer science; Feature extraction; Transformer; Encoder; Artificial intelligence; Feature learning; Feature (linguistics); Backpropagation; Sentence; Pattern recognition (psychology); Artificial neural network; Speech recognition; Engineering

Metrics

Cited By: 7
FWCI (Field Weighted Citation Impact): 0.87
Refs: 42
Citation Normalized Percentile: 0.70


Topics

Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Advanced Image and Video Retrieval Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Human Pose and Action Recognition
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition

Related Documents

JOURNAL ARTICLE

Multimodal attention-based transformer for video captioning

M. Hemalatha, Charu Chandra

Journal: Applied Intelligence   Year: 2023   Vol: 53 (20)   Pages: 23349-23368
JOURNAL ARTICLE

Video captioning using transformer network

Mubashira I. Nechikkat, Bhagyasree V. Pattilikattil, Soumya Varma, Ajay James

Journal: AIP Conference Proceedings   Year: 2022   Vol: 2563   Pages: 050003
JOURNAL ARTICLE

Attention-Aligned Transformer for Image Captioning

Zhengcong Fei

Journal: Proceedings of the AAAI Conference on Artificial Intelligence   Year: 2022   Vol: 36 (1)   Pages: 607-615
JOURNAL ARTICLE

Captioning Transformer with Stacked Attention Modules

Xinxin Zhu, Lixiang Li, Jing Liu, Haipeng Peng, Xinxin Niu

Journal: Applied Sciences   Year: 2018   Vol: 8 (5)   Pages: 739