This study examined individual differences in how older adults with normal hearing (ONH) or hearing impairment (OHI) allocate auditory and cognitive resources during speech recognition in noise at equal levels of recognition. Associations between predictor variables and speech recognition were assessed across three datasets, each comprising 15–16 conditions involving temporally filtered speech. These datasets involved (1) degraded spectral cues, (2) competing speech-modulated noise, and (3) degraded spectral cues combined with speech-modulated noise. To minimize effects of audibility differences, speech was spectrally shaped according to each listener’s hearing thresholds. The extended Short-Time Objective Intelligibility metric was used to derive psychometric functions relating the acoustic degradation to speech recognition. From these functions, speech recognition thresholds (SRTs) were determined at 20%, 50%, and 80% recognition. A multiple regression dominance analysis, conducted separately for the ONH and OHI groups, determined the relative importance of auditory and cognitive predictor variables to speech recognition. For ONH participants, vocabulary knowledge was more strongly associated with speech recognition, whereas for OHI participants, speech glimpsing ability was more strongly associated with speech recognition. Combined with measures of working memory and hearing thresholds, these predictors accounted for 73% and 89% of the total variance for ONH and OHI, respectively, and generalized to other diverse measures of speech recognition.