Everyday environments are often noisy and can degrade temporal speech modulations. Furthermore, these environments produce large variability in speech recognition due to individual differences in auditory and cognitive processing. Older adults with normal hearing (ONH) or impaired hearing (OHI) completed three speech recognition experiments, comprising 15–16 measures of temporally filtered speech with (1) degraded spectral cues, (2) competing speech-modulated noise, and (3) degraded spectral cues combined with speech-modulated noise. Results were compared to other measures of speech-on-speech masking. Speech was spectrally shaped according to each listener’s hearing thresholds. Speech recognition thresholds (SRTs) were determined at 20%, 50%, and 80% correct recognition (SRT20, SRT50, and SRT80) and summarized across experiments as a single principal component. Measures of auditory and cognitive function were entered into a dominance analysis, conducted separately for ONH and OHI listeners, which determined the relative importance of each predictor across all combinations of the other predictors. Auditory and cognitive measures accounted for 72%–89% of the variance in speech recognition, with greater contributions from vocabulary knowledge for ONH listeners and from speech glimpsing abilities for OHI listeners. These results suggest that individual differences in auditory and cognitive abilities, as well as group differences in hearing function, contribute significantly to speech recognition in degraded auditory environments.
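The dominance analysis referenced above assigns each predictor an importance weight by averaging its incremental contribution to explained variance (R²) over every subset of the remaining predictors; the resulting general dominance weights sum to the full-model R². As a rough illustration of that logic only, and not the authors' actual analysis code, a minimal sketch in Python follows, with hypothetical variable names (a predictor matrix `X` of listeners by measures and an outcome `y` such as principal-component SRT scores):

```python
# Minimal dominance-analysis sketch. X (listeners x predictors) and y
# (outcome, e.g., principal-component SRT scores) are hypothetical.
from itertools import combinations

import numpy as np


def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an ordinary least-squares fit with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    total = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / total


def general_dominance(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Average each predictor's R^2 increment over all subsets of the others.

    The returned weights sum to the full-model R^2, giving each predictor
    its share of the explained variance.
    """
    p = X.shape[1]
    weights = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        size_means = []
        for size in range(p):  # subset sizes 0 .. p-1
            increments = []
            for subset in combinations(others, size):
                cols = list(subset)
                base = r_squared(X[:, cols], y) if cols else 0.0
                full = r_squared(X[:, cols + [j]], y)
                increments.append(full - base)
            # Average within each subset size, then across sizes.
            size_means.append(float(np.mean(increments)))
        weights[j] = float(np.mean(size_means))
    return weights
```

Because every subset of predictors is refit, the cost grows exponentially with the number of predictors, which is why dominance analysis is typically applied, as here, to a modest set of auditory and cognitive measures.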