JOURNAL ARTICLE

Dual-TBNet: Improving the Robustness of Speech Features via Dual-Transformer-BiLSTM for Speech Emotion Recognition

Zheng Liu, Xin Kang, Fuji Ren

Year: 2023 Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing Vol: 31 Pages: 2193-2203 Publisher: Institute of Electrical and Electronics Engineers

Abstract

Speech emotion recognition has long attracted extensive attention from researchers. In traditional feature fusion methods, the speech features come only from the dataset itself, and their weak robustness can easily lead to model overfitting. In addition, these methods often fuse features by simple concatenation, which loses speech information. In this paper, to address these problems and improve recognition accuracy, we use self-supervised learning to enhance the robustness of speech features and propose a feature fusion model (Dual-TBNet) that consists of two 1D convolutional layers, two Transformer modules, and two bidirectional long short-term memory (BiLSTM) modules. Our model uses 1D convolutions to take features of different segment lengths and dimension sizes as input, an attention mechanism to capture the correspondence between the two features, and a bidirectional time-series module to enhance the contextual information of the fused features. We designed a total of four fusion models to fuse five pre-trained features with acoustic features. In the comparison experiments, the Dual-TBNet model achieved a recognition accuracy and F1 score of 95.7% and 95.8% on the CASIA dataset, 66.7% and 65.6% on the eNTERFACE05 dataset, 64.8% and 64.9% on the IEMOCAP dataset, 84.1% and 84.3% on the EMO-DB dataset, and 83.3% and 82.1% on the SAVEE dataset. The Dual-TBNet model effectively fuses acoustic features of different lengths and dimensions with pre-trained features, enhancing the robustness of the features, and achieved the best performance.
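The abstract describes a dual-branch pipeline in which each feature stream (pre-trained and acoustic) passes through a 1D convolution, a Transformer module, and a BiLSTM before fusion. The following is a minimal shape-level sketch of that data flow; all layer sizes (MODEL_DIM, kernel width, padding) and the example input dimensions are illustrative assumptions, not values taken from the paper.

```python
# Shape-level sketch of the Dual-TBNet data flow described in the abstract.
# MODEL_DIM, kernel size, and the example inputs are assumptions for
# illustration; they are not the paper's hyperparameters.

MODEL_DIM = 256  # assumed common hidden size after the 1D convolutions

def conv1d_shape(seq_len, in_dim, out_dim=MODEL_DIM, kernel=3, stride=1, pad=1):
    """Output shape of a 1D convolution applied along the time axis."""
    out_len = (seq_len + 2 * pad - kernel) // stride + 1
    return (out_len, out_dim)

def transformer_shape(shape):
    """Self-attention preserves the (time, dim) shape of its input."""
    return shape

def bilstm_shape(shape, hidden=MODEL_DIM // 2):
    """A BiLSTM concatenates forward and backward hidden states."""
    seq_len, _ = shape
    return (seq_len, 2 * hidden)

def dual_tbnet_shapes(pretrained_shape, acoustic_shape):
    """Trace both branches: 1D conv -> Transformer -> BiLSTM."""
    branches = []
    for seq_len, dim in (pretrained_shape, acoustic_shape):
        s = conv1d_shape(seq_len, dim)
        s = transformer_shape(s)
        s = bilstm_shape(s)
        branches.append(s)
    # Both branches now share the same feature dimension, so the
    # attention-based fusion can align them without truncation.
    return branches

# e.g. wav2vec-style pre-trained features (149 frames x 768 dims)
# and MFCC-style acoustic features (300 frames x 40 dims)
print(dual_tbnet_shapes((149, 768), (300, 40)))
```

The point of the sketch is the one the abstract makes: the 1D convolutions let inputs of different segment lengths and dimension sizes enter the same fusion stage, since both branches emit sequences with a shared feature dimension.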

Keywords:
Computer science; Robustness; Overfitting; Speech recognition; Artificial intelligence; Pattern recognition; Feature fusion; Transformer; Hidden Markov model; Convolutional neural network; Artificial neural network

Metrics

Cited By: 49
FWCI (Field Weighted Citation Impact): 20.42
Refs: 72
Citation Normalized Percentile: 0.99 (in top 1%)


Topics

Emotion and Mood Recognition
Social Sciences →  Psychology →  Experimental and Cognitive Psychology
Speech and Audio Processing
Physical Sciences →  Computer Science →  Signal Processing
Music and Audio Processing
Physical Sciences →  Computer Science →  Signal Processing

Related Documents

BOOK-CHAPTER

Improving Speech Emotion Recognition by Fusing Pre-trained and Acoustic Features Using Transformer and BiLSTM

Zheng Liu, Xin Kang, Fuji Ren

IFIP Advances in Information and Communication Technology Year: 2022 Pages: 348-357
BOOK-CHAPTER

Improving Noise Robustness of Speech Emotion Recognition System

Łukasz Juszkiewicz

Studies in Computational Intelligence Year: 2013 Pages: 223-232
JOURNAL ARTICLE

Dual-Residual Transformer Network for Speech Recognition

Zhikui Duan, Guozhi Gao, Jiawei Chen, Shiren Li, Jinbiao Ruan, Guangguang Yang, Xinmei Yu

Journal: Journal of the Audio Engineering Society Year: 2022 Vol: 70 (10) Pages: 871-881
JOURNAL ARTICLE

Speech Emotion Recognition using Dual-Conv2D architecture

Souha Ayadi

Journal: PRZEGLĄD ELEKTROTECHNICZNY Year: 2024 Vol: 1 (6) Pages: 211-213