JOURNAL ARTICLE

Neural Sign Language Translation with SF-Transformer

Abstract

Popular approaches to sign language translation are based on a combination of CNNs and RNNs. Recently, the Transformer has also attracted researchers' attention and achieved success on this task. However, researchers usually focus only on the accuracy of their models while ignoring their practical application value. In this paper, we propose the SF-Transformer, a lightweight model based on the Encoder-Decoder architecture for sign language translation, which achieves new state-of-the-art performance on the Chinese Sign Language (CSL) dataset. We build our network from the 2D/3D convolution blocks of SF-Net and the Transformer's decoders. Benefiting from fewer parameters and a high degree of parallelization, our model trains and infers faster. We hope that our method can contribute to the practical application of sign language translation on low-computing devices such as mobile phones.
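
To make the architecture described in the abstract concrete, the following is a minimal PyTorch sketch of a convolutional video encoder feeding a Transformer decoder. It is not the authors' implementation: the SFBlock structure, layer counts, channel sizes, and pooling choices are illustrative assumptions; only the overall idea (2D/3D convolution blocks as the encoder, standard Transformer decoders on top) comes from the abstract.

import torch
import torch.nn as nn


class SFBlock(nn.Module):
    """Hypothetical spatio-temporal block mixing per-frame (2D) and cross-frame (3D) convolutions."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # A (1, 3, 3) kernel acts as a 2D convolution applied to every frame.
        self.conv2d = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # A (3, 3, 3) kernel mixes information across neighbouring frames.
        self.conv3d = nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1)
        self.pool = nn.MaxPool3d(kernel_size=(1, 2, 2))  # downsample space, keep time
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, frames, height, width)
        x = self.act(self.conv2d(x))
        x = self.act(self.conv3d(x))
        return self.pool(x)


class SFTransformerSketch(nn.Module):
    """Convolutional video encoder followed by a standard Transformer decoder.

    Positional encodings and other training details are omitted for brevity.
    """

    def __init__(self, vocab_size, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.encoder = nn.Sequential(SFBlock(3, 64), SFBlock(64, d_model))
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, video, tokens):
        # video: (batch, 3, frames, H, W); tokens: (batch, target_len) word indices
        feat = self.encoder(video)                      # (batch, d_model, T', H', W')
        feat = feat.mean(dim=(-1, -2)).transpose(1, 2)  # pool space -> (batch, T', d_model)
        # Causal mask so each target position only attends to earlier positions.
        tgt_len = tokens.size(1)
        mask = torch.triu(torch.full((tgt_len, tgt_len), float("-inf")), diagonal=1)
        hidden = self.decoder(self.embed(tokens), feat, tgt_mask=mask)
        return self.out(hidden)                         # (batch, target_len, vocab_size)


# Toy usage: a batch of two 16-frame clips decoded into 10-token sentences.
model = SFTransformerSketch(vocab_size=1000)
logits = model(torch.randn(2, 3, 16, 112, 112), torch.randint(0, 1000, (2, 10)))
print(logits.shape)  # torch.Size([2, 10, 1000])

In such a design the spatio-temporal convolutional features replace a recurrent encoder as the decoder's memory, which is what allows the decoding side to be parallelized during training.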

Keywords:
Computer science, Transformer, Inference, Machine translation, Sign language, Architecture, Encoder, Artificial intelligence, Speech recognition, Natural language processing, Engineering, Voltage, Electrical engineering, Linguistics

Metrics

Cited by: 6
FWCI (Field-Weighted Citation Impact): 0.89
References: 12
Citation Normalized Percentile: 0.72

Topics

Hand Gesture Recognition Systems
Physical Sciences →  Computer Science →  Human-Computer Interaction
Human Pose and Action Recognition
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Gait Recognition and Analysis
Physical Sciences →  Engineering →  Biomedical Engineering
