Abstract

Sign language recognition and translation bridge the communication gap between hearing-impaired and hearing people. Compared with sign language recognition (SLR), continuous sign language translation (CSLT) is closer to natural speaking habits and has greater practical value. However, continuous sentences pose challenges: individual signs carry insufficient information, and the short pauses between them are difficult to segment. To address these problems, this paper captures coarse-grained arm movement, fine-grained finger movement, and hand-rotation information through the MYO armband; an encoder-decoder model with an attention mechanism then translates in an end-to-end manner without segmentation. After a series of experiments, the best model achieved an accuracy of 94.1%.
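The abstract names an attention mechanism inside the encoder-decoder translator. As a minimal illustrative sketch (not the authors' exact model, whose architecture and parameters are not given here), dot-product attention scores each encoder time step against the current decoder state and returns a weighted context vector:

```python
import numpy as np

def attention_context(encoder_states, decoder_state):
    """Dot-product attention: weight each encoder time step by its
    relevance to the current decoder state, then return the softmax
    weights and the resulting weighted context vector."""
    scores = encoder_states @ decoder_state        # (T,) one score per time step
    weights = np.exp(scores - scores.max())        # numerically stable softmax
    weights /= weights.sum()
    context = weights @ encoder_states             # (d,) weighted sum of states
    return weights, context

# Toy example: 4 encoder time steps, hidden size 3 (illustrative values only)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])
s = np.array([2.0, 0.0, 0.0])
w, c = attention_context(H, s)
```

In a full CSLT model the encoder states would come from a recurrent network over the MYO signal stream, and the context vector would condition each decoded word, letting the decoder attend to the relevant span of the unsegmented input.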

Keywords:
Sign language; Sign language recognition; Continuous sign language translation; Machine translation; Artificial intelligence; Language model; Natural language processing

