Research on Sound, Neural Processing Could Help Deaf People Hear Amidst the Noise

West Lafayette, Ind. - A new understanding of how the inner ear processes the temporal structure of sound could someday improve how prosthetic hearing devices are designed to help people with profound hearing loss hear better in noisy places, according to new Purdue University research.

"Sound can be divided into fast and slow components, and today's cochlear implants provide only the slow varying components that help people with profound hearing loss hear conversations in quiet rooms, but don't allow them to hear as well in busy restaurants," said Michael G. Heinz, an associate professor of speech, language and hearing sciences who specializes in auditory neuroscience. "It has been thought that the fast varying sound components - which can't be provided with current cochlear implant technology - help to hear in noisy environments. Evidence for this idea has come from listening experiments that were interpreted based on the assumption that the fast and slow sound components could be separated within the ear.



[Photo caption: Research led by Michael G. Heinz, an associate professor of speech, language and hearing sciences who specializes in auditory neuroscience, shows how the inner ear processes the temporal structure of sound. These findings could someday improve how prosthetic hearing devices are designed to help people with profound hearing loss hear better in noisy places. The findings were published last month in The Journal of Neuroscience. (Purdue University photo/Andrew Hancock)]

"We decided to approach this problem by acknowledging that this separation is theoretically impossible to achieve but not impossible to deal with. We found that slowly varying neural components actually play the primary role in helping the brain understand speech in noisy environments. The critical fast varying acoustic components are actually transformed by the normal-hearing cochlea into slower neural components to ultimately help people hear better. Additional studies will be needed to explore how current cochlear implant technology can be adjusted to account for these cochlear transformations."

Heinz and Jayaganesh Swaminathan, a Purdue graduate and postdoctoral research associate at the Massachusetts Institute of Technology, analyzed how sounds picked up by normal-hearing ears are understood by the brain. Previous studies had evaluated perception in terms of the sound's acoustic waveform; focusing on neural processing in this study clarified how the fast- and slow-varying components each contribute to speech perception. The findings were published last month in The Journal of Neuroscience.

"Some have thought that one component can exist without the other, but now we know this is impossible to achieve in the ear, and this new knowledge can help scientists who are working to improve cochlear implant design," Swaminathan says.

A cochlear implant is a surgically implanted neural prosthesis used by more than 200,000 patients worldwide. The device helps deaf people whose cochleas lack functioning hair cells translate sound into neural responses: the implant's electrodes stimulate the auditory nerve fibers directly.

"But, perhaps cochlear implants are not delivering all of the useful information with their current stimulation strategies," Swaminathan says. "At this time, their design focuses on the slowly varying components in the acoustic waveform rather than what the slowly varying components look like in the neural responses of normal-hearing ears."

The researchers used a psychophysiological approach that quantitatively linked neural coding, predicted with a computational auditory-nerve model, to the perception of speech in noise measured in normal-hearing listeners. The same set of five specialized acoustic stimuli, produced by vocoders, was used for both, and listeners were asked to identify one of 16 consonants at varying levels of background noise.
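The study's actual linking analysis used a computational auditory-nerve model, which is well beyond a short example; but the flavor of quantitatively relating neural slow fluctuations to speech-in-noise conditions can be sketched with a stand-in "neural channel" (bandpass filter plus envelope). Everything here - the filter, the toy stimulus, the correlation metric - is an illustrative assumption:

```python
# Toy linking analysis: how similar is the slow "neural" envelope of noisy
# speech to that of clean speech, as a function of signal-to-noise ratio?
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def neural_envelope(x, fs, band=(900, 1100)):
    """Stand-in for one auditory-nerve channel: bandpass, then slow envelope."""
    sos = butter(4, list(band), btype="bandpass", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x)))

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
clean = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)

for snr_db in (20, 5, -5):
    noise = rng.standard_normal(len(t))
    noise *= np.linalg.norm(clean) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    rho = np.corrcoef(neural_envelope(clean, fs),
                      neural_envelope(clean + noise, fs))[0, 1]
    print(f"SNR {snr_db:>3} dB: neural envelope correlation {rho:.2f}")
```

As the noise grows, the correlation falls, mirroring (in spirit only) how the study related degraded neural envelope coding to declining consonant identification.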

"The key distinction in our results is that it was the neural slow-fluctuation cues that were shown to be important rather than the acoustic slow-component cues that cochlear implants provide," said Heinz, who also has a joint appointment in biomedical engineering. "This may sound like the same thing, but the slow neural components include the effects of fast to slow conversions that occur within the normal-hearing cochlea but do not occur in the damaged ears of cochlear implant patients. These results are promising because they provide insight into a possible way to provide the useful information from fast acoustic cues using the slow fluctuations that existing cochlear implant technology can provide."

Heinz and Swaminathan will continue studying how neural signal processing can improve our understanding of speech perception in noise, as well as how these findings can be used to improve cochlear implants.

This research was supported by the National Institutes of Health's National Institute on Deafness and Other Communication Disorders, the Purdue Research Foundation, and Weinberg funds from the Department of Speech, Language and Hearing Sciences.

Taken from www.purdue.edu/newsroom/research/2012/120327HeinzHearing.html