JOURNAL ARTICLE

Auditory-visual speech recognition by hearing-impaired subjects: Consonant recognition, sentence recognition, and auditory-visual integration

Ken W. Grant, Brian E. Walden, Philip F. Seitz

Year: 1998 Journal: The Journal of the Acoustical Society of America Vol: 103 (5) Pages: 2677-2690 Publisher: Acoustical Society of America

Abstract

Factors leading to variability in auditory-visual (AV) speech recognition include the subject’s ability to extract auditory (A) and visual (V) signal-related cues, the integration of A and V cues, and the use of phonological, syntactic, and semantic context. In this study, measures of A, V, and AV recognition of medial consonants in isolated nonsense syllables and of words in sentences were obtained in a group of 29 hearing-impaired subjects. The test materials were presented in a background of speech-shaped noise at 0-dB signal-to-noise ratio. Most subjects achieved substantial AV benefit for both sets of materials relative to A-alone recognition performance. However, there was considerable variability in AV speech recognition both in terms of the overall recognition score achieved and in the amount of audiovisual gain. To account for this variability, consonant confusions were analyzed in terms of phonetic features to determine the degree of redundancy between A and V sources of information. In addition, a measure of integration ability was derived for each subject using recently developed models of AV integration. The results indicated that (1) AV feature reception was determined primarily by visual place cues and auditory voicing+manner cues, (2) the ability to integrate A and V consonant cues varied significantly across subjects, with better integrators achieving more AV benefit, and (3) significant intra-modality correlations were found between consonant measures and sentence measures, with AV consonant scores accounting for approximately 54% of the variability observed for AV sentence recognition. Integration modeling results suggested that speechreading and AV integration training could be useful for some individuals, potentially providing as much as 26% improvement in AV consonant recognition.

Keywords:
Speech recognition, Sentence recognition, Consonant recognition, Voicing, Context, Sensory cues, Speech perception, Psychology, Audiology, Perception, Vowels

Metrics

Cited By: 337
FWCI (Field Weighted Citation Impact): 8.57
Refs: 30
Citation Normalized Percentile: 0.98 (in top 1%)

Topics

Multisensory perception and integration
Social Sciences →  Psychology →  Experimental and Cognitive Psychology
Hearing Loss and Rehabilitation
Life Sciences →  Neuroscience →  Cognitive Neuroscience
Tactile and Sensory Interactions
Life Sciences →  Neuroscience →  Cognitive Neuroscience

Related Documents

JOURNAL ARTICLE

Sentence Recognition Performance in Visual-Only and Auditory-Visual Conditions by Normal and Hearing Impaired Adults

Jonghyun Kwak, Jae-Hee Choi, Hyunsook Jang

Journal: Eon'eo cheong'gag jang'ae yeon'gu / Communication Sciences & Disorders Year: 2019 Vol: 24 (4) Pages: 1077-1086
JOURNAL ARTICLE

Auditory, Visual, and Auditory-Visual Recognition of Consonants by Children with Normal and Impaired Hearing

Norman P. Erber

Journal: Journal of Speech and Hearing Research Year: 1972 Vol: 15 (2) Pages: 413-422
JOURNAL ARTICLE

Auditory and auditory-visual frequency-band importance functions for consonant recognition

Joshua G. W. Bernstein, Jonathan H. Venezia, Ken W. Grant

Journal: The Journal of the Acoustical Society of America Year: 2020 Vol: 147 (5) Pages: 3712-3727
JOURNAL ARTICLE

Auditory filter width and consonant recognition for hearing-impaired listeners

Judy R. Dubno, Donald D. Dirks

Journal: The Journal of the Acoustical Society of America Year: 1988 Vol: 83 (S1) Pages: S76-S76
JOURNAL ARTICLE

Auditory filter characteristics and consonant recognition for hearing-impaired listeners

Judy R. Dubno, Donald D. Dirks

Journal: The Journal of the Acoustical Society of America Year: 1989 Vol: 85 (4) Pages: 1666-1675