Mark Steadman, Christian J. Sumner
Cochlear implants provide a degraded input to the auditory system. Despite this, cochlear implant users are able to discriminate speech sounds in quiet with an accuracy comparable to that of normal-hearing listeners. The neural basis of this phenomenon is not well understood. A set of vowel-consonant-vowel phoneme sequences, each produced by multiple talkers, was parametrically degraded using a noise vocoder. Neural responses were recorded in the guinea pig midbrain and cortex, and auditory nerve responses were generated using a computational model. The discriminability of these responses was quantified using a nearest-neighbor classifier. When envelope modulations were limited to 16 Hz, classifier performance was qualitatively similar to that of human listeners for all brain regions. However, in the auditory nerve and the midbrain, the preservation of high-rate envelope cues enabled near-perfect discrimination of speech tokens even for heavily spectrally degraded speech. High-rate envelope cues did not appear to increase the discriminability of auditory cortex responses. Thus, high-rate envelope cues, represented up to the midbrain, are useful for discriminating speech tokens; however, more consistent with perception, they do not contribute to the discriminability of cortical responses.
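The noise-vocoding procedure described in the abstract can be sketched as follows: the signal is split into a few frequency bands, the envelope of each band is extracted and low-passed (e.g. at 16 Hz, the modulation limit mentioned above), and each envelope then modulates band-limited noise before the bands are summed. This is a minimal illustrative sketch, assuming log-spaced band edges and SciPy Butterworth filters; the function name and parameter choices are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_bands=4, env_cutoff=16.0, fmin=100.0, fmax=8000.0):
    """Noise-vocode signal x (sample rate fs): replace the fine structure in
    each band with noise, keeping only the band envelope low-passed at
    env_cutoff Hz. Band edges, filter orders, etc. are illustrative."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(fmin, fmax, n_bands + 1)          # log-spaced edges
    lp = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    noise = rng.standard_normal(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        bp = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(bp, x)                          # analysis band
        env = sosfiltfilt(lp, np.abs(band))                # smoothed envelope
        env = np.clip(env, 0.0, None)                      # keep non-negative
        carrier = sosfiltfilt(bp, noise)                   # band-limited noise
        out += env * carrier
    return out
```

Lowering `env_cutoff` or `n_bands` parametrically degrades the speech, which is how vocoded stimuli of graded intelligibility are typically generated.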