BOOK-CHAPTER

Two-Level Bimodal Association for Audio-Visual Speech Recognition

Jong-Seok Lee, Touradj Ebrahimi

Year: 2009 | Lecture Notes in Computer Science | Pages: 133-144 | Publisher: Springer Science+Business Media

Abstract

This paper proposes a new method for bimodal information fusion in audio-visual speech recognition, in which cross-modal association is considered at two levels. First, the acoustic and visual data streams are combined at the feature level using canonical correlation analysis, which addresses audio-visual synchronization and exploits the cross-modal correlation. Second, the information streams are integrated at the decision level, allowing adaptive fusion of the streams according to the noise condition of the given speech datum. Experimental results demonstrate that the proposed method produces noise-robust recognition performance without a priori knowledge of the noise conditions of the speech data.
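The feature-level step described in the abstract relies on canonical correlation analysis (CCA) to relate the acoustic and visual streams. The following is a minimal numpy sketch of CCA-based feature fusion, not the authors' implementation: the synthetic data, dimensions, and regularization constant are illustrative assumptions, and the fused representation is simply the concatenation of the two projected streams.

```python
import numpy as np

def cca_projections(A, V, k, reg=1e-6):
    """Return k-dimensional CCA projection matrices for streams A and V.

    A: (n, da) audio features, V: (n, dv) visual features (rows are frames).
    reg is a small ridge term to keep the covariance matrices invertible.
    """
    A = A - A.mean(0)
    V = V - V.mean(0)
    n = A.shape[0]
    Caa = A.T @ A / (n - 1) + reg * np.eye(A.shape[1])
    Cvv = V.T @ V / (n - 1) + reg * np.eye(V.shape[1])
    Cav = A.T @ V / (n - 1)

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (C is symmetric PD).
        w, U = np.linalg.eigh(C)
        return U @ np.diag(1.0 / np.sqrt(w)) @ U.T

    # Canonical directions come from the SVD of the whitened cross-covariance.
    K = inv_sqrt(Caa) @ Cav @ inv_sqrt(Cvv)
    U, s, Vt = np.linalg.svd(K)
    Wa = inv_sqrt(Caa) @ U[:, :k]
    Wv = inv_sqrt(Cvv) @ Vt[:k].T
    return Wa, Wv, s[:k]  # s holds the canonical correlations, in decreasing order

# Illustrative synthetic streams sharing a 2-dimensional latent signal z.
rng = np.random.default_rng(0)
z = rng.normal(size=(400, 2))
A = np.hstack([z, rng.normal(size=(400, 3))]) + 0.05 * rng.normal(size=(400, 5))
V = np.hstack([z, rng.normal(size=(400, 4))]) + 0.05 * rng.normal(size=(400, 6))

Wa, Wv, corrs = cca_projections(A, V, k=2)
# Feature-level fusion: concatenate the two projected (correlated) streams.
fused = np.hstack([(A - A.mean(0)) @ Wa, (V - V.mean(0)) @ Wv])
```

In this sketch the projections map both modalities into a common correlated subspace before concatenation, which is the sense in which CCA handles the cross-modal correlation; the decision-level adaptive weighting described in the abstract would operate on classifier outputs and is not shown here.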

Keywords:
Computer science, canonical correlation, speech recognition, audio-visual, a priori and a posteriori, modal, feature, noise, correlation, artificial intelligence, association, pattern recognition, multimedia, image

Metrics

Cited By: 3
FWCI (Field Weighted Citation Impact): 0.00
References: 27
Citation Normalized Percentile: 0.16


Topics

Speech and Audio Processing (Physical Sciences → Computer Science → Signal Processing)
Music and Audio Processing (Physical Sciences → Computer Science → Signal Processing)
Blind Source Separation Techniques (Physical Sciences → Computer Science → Signal Processing)