JOURNAL ARTICLE

Multistage information fusion for audio-visual speech recognition

Abstract

This paper investigates the information fusion problem in the context of audio-visual speech recognition. Existing approaches to audio-visual fusion typically address the problem in either the feature domain or the decision domain. We consider a hybrid approach that aims to combine the advantages of both the feature fusion and the decision fusion methodologies. We introduce a general formulation that facilitates information fusion at multiple stages, followed by an experimental study of a set of fusion schemes the framework allows. The proposed method is implemented in a real-time audio-visual speech recognition system and evaluated on connected digit recognition tasks under varying acoustic conditions. The results show that the multistage fusion system consistently achieves lower word error rates than the reference feature fusion and decision fusion systems. We further show that removing the audio-only channel from the multistage system causes only minimal degradation in recognition performance while noticeably reducing computational load.
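To make the two fusion styles contrasted in the abstract concrete, the following is a minimal illustrative sketch, not the paper's actual formulation: feature fusion concatenates synchronized audio and visual feature vectors into one observation, while decision fusion combines per-stream log-likelihood scores with reliability weights. The feature dimensions, scores, and weights below are invented for illustration only.

```python
import numpy as np

def feature_fusion(audio_feats, visual_feats):
    """Feature-level fusion: concatenate synchronized audio and
    visual feature vectors into a single observation vector."""
    return np.concatenate([audio_feats, visual_feats], axis=-1)

def decision_fusion(log_likelihoods, weights):
    """Decision-level fusion: weighted sum of per-stream
    log-likelihood scores for each word hypothesis."""
    return sum(w * ll for w, ll in zip(weights, log_likelihoods))

# Feature fusion on made-up dimensions (e.g. 39-d audio + 15-d visual).
audio_vec = np.zeros(39)
visual_vec = np.zeros(15)
fused_vec = feature_fusion(audio_vec, visual_vec)  # 54-d joint vector

# Decision fusion over two word hypotheses scored by three streams
# (audio-only, visual-only, and a feature-fused audio-visual stream);
# the weights stand in for stream reliabilities, e.g. tuned to the SNR.
ll_audio = np.array([-10.0, -12.0])
ll_visual = np.array([-11.0, -10.5])
ll_av = np.array([-9.0, -11.0])
combined = decision_fusion([ll_audio, ll_visual, ll_av], [0.4, 0.2, 0.4])
best = int(np.argmax(combined))  # index of the winning hypothesis
```

A multistage scheme in the spirit of the paper would apply both steps: feature-fused streams are scored alongside single-modality streams, and their scores are then merged at the decision stage.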

Keywords:
Speech recognition; Audio-visual speech recognition; Information fusion; Sensor fusion; Feature extraction; Word error rate; Acoustic model; Pattern recognition; Speech processing; Multimedia

Metrics

Cited By: 6
FWCI (Field Weighted Citation Impact): 0.65
Refs: 12
Citation Normalized Percentile: 0.70

Topics

All under Physical Sciences → Computer Science → Signal Processing:

Speech and Audio Processing
Blind Source Separation Techniques
Music and Audio Processing