20Q: Hearing Aids - The Brain Connection
Kelly Tremblay, PhD
January 7, 2013

From the desk of Gus Mueller

In the world of diagnostic audiology, it long has been common to utilize objective measures to support our behavioral findings, or in some cases, replace behavioral testing. The immittance battery and the ABR have been with us since the early 1970s, and OAEs were added to the battery a decade or so later. In the world of amplification, however, our objective measures are limited. The verification tool of “How does that sound?” is as popular today as it was when body aids and vacuum tubes were the norm. Sure, we have probe-mic measures, but as has often been pointed out, “one can obtain a beautiful real-ear probe-mic match to prescriptive targets from a cadaver.” Knowing that the output of the hearing aid is correct in the ear canal is of course a necessary first step, but what we really want to know is if and how the brain is using the information.

For example, most hearing aids today utilize digital noise reduction, and it often has been suggested that this technology provides relaxed listening, perhaps reducing the cognitive load on the brain and freeing up resources for other tasks. Sounds like a good thing, but do we actually have objective brain-behavior studies to show that this is true? Is it even possible to do these studies?

What we need is someone to bring the findings of cutting-edge auditory research to the doorstep of clinicians. We have such a person with us this month here at 20Q, Kelly Tremblay, Ph.D. She’s a researcher who has also been a clinician for over 25 years and works with students as Professor and Head of Audiology at the University of Washington. Since completing her PhD in neuroscience at Northwestern University in 1997, she has made it her mission to apply neuroscience theory to clinical questions so that new approaches to rehabilitation can take place.

A recent example is her latest book series entitled Translational Perspectives in Auditory Neuroscience. In this engaging series, Dr. Tremblay assembled some of the top scholars in hearing science and had them write chapters that clinicians could relate to. The result is an impressive resource that all clinicians and scientists can learn from.

But back to hearing aids and this month’s 20Q article. We again see how Kelly is looking to the brain to answer important questions about the effects of amplification. She addresses the issue of whether brain measures should or should not be used for clinical assessment of hearing aid performance.

I suspect that some of Kelly’s comments in this 20Q article relate to a recent project of hers, in which she assembled a team of experts and published a special issue on Hearing Aids and the Brain in the International Journal of Otolaryngology. You’ll find some great articles in this special issue, many of which apply directly to clinical practice.

Gus Mueller, Ph.D.
Contributing Editor
January 2013

To browse the complete collection of 20Q with Gus Mueller articles, please visit www.audiologyonline.com/20Q

20Q: Hearing Aids - The Brain Connection

Kelly Tremblay, PhD

1. I’m intrigued by your title, so I have to ask: what exactly do you mean by the “brain connection”?

When you ask audiologists how we approach rehabilitation, the first thing that most of us will mention is the need for improved audibility through the use of hearing aids. That’s an essential first step. But what a person does with that sound can be a bit of a mystery. If we think of the brain as a CPU that controls what we do with sound, then it’s also important to study what the brain is doing with amplified sound. This CPU might help us to understand why one person will do really well with his or her hearing aids but another person won’t.

2. I agree it could be a “brain thing,” but do we really have ways to measure this?

There are so many ways the brain can be measured: fMRI, PET, MEG, and EEG. Not all can be used while people are wearing their hearing aids, and most are not going to find their way into the audiology clinic as part of a hearing aid fitting. But EEG, as you know, is versatile and clinically useful. You might recall that way back in 2006, in Gus’ old Page Ten column, I described a series of EEG/hearing aid studies that left us quite perplexed (Tremblay, 2006).

3. Not sure I remember that article, can you refresh my memory?

We used auditory evoked potentials to study the brain and we discovered that you can reliably record cortical evoked potentials such as the P1-N1-P2 while people are wearing their hearing aids (Tremblay, Kalstein, Billings, & Souza, 2006), and different speech sounds evoke different neural detection patterns (Tremblay, Billings, Friesen, & Souza, 2006). The perplexing part was that recording and interpreting evoked potentials while a person is wearing a hearing aid introduces issues that deviate from the typical evoked potential literature.
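For readers who like to see the mechanics, below is a minimal sketch of the signal averaging that underlies any P1-N1-P2 recording: the response to many repetitions of a stimulus is averaged so that the small, time-locked cortical response emerges from the much larger ongoing EEG. All values and component shapes are synthetic stand-ins, not data from the studies cited above.

```python
# Minimal sketch of evoked-potential averaging (synthetic data, hypothetical
# latencies/amplitudes). Real recordings would add filtering and artifact
# rejection, e.g., via a toolbox such as MNE-Python.
import numpy as np

fs = 1000                        # EEG sampling rate (Hz)
n_trials = 200                   # repeated stimulus presentations
t = np.arange(-0.1, 0.5, 1/fs)   # epoch window: -100 to +500 ms

def component(latency, amplitude, width=0.02):
    """Gaussian stand-in for one cortical component (microvolts)."""
    return amplitude * np.exp(-((t - latency) ** 2) / (2 * width ** 2))

# Hypothetical P1 (~50 ms), N1 (~100 ms), P2 (~180 ms)
true_response = component(0.05, 1.0) - component(0.10, 2.0) + component(0.18, 1.5)

rng = np.random.default_rng(0)
# Each single trial is the response buried in much larger background EEG
epochs = true_response + rng.normal(0, 10.0, size=(n_trials, t.size))

# Averaging attenuates activity not time-locked to the stimulus (~1/sqrt(N))
evoked = epochs.mean(axis=0)
print(f"N1 in the average: {evoked.min():.1f} uV "
      f"(single-trial noise SD was 10 uV)")
```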

4. Why did you want to use evoked potentials in combination with hearing aids in the first place?

That’s a good question. Our lab and others were motivated to study the effects of hearing aid amplification on the brain as a form of experience- or stimulation-related plasticity. We wanted to know if the brain changed within a specific time period after amplification for some individuals and not others, or if auditory training changed the way certain types of sounds were represented in the central auditory system. Information gained from this direction of science could help to explain why and how some people’s brains respond to rehabilitation.

Another use for brain measures could be during hearing aid fitting in difficult-to-test populations such as babies. Methods such as probe-microphone measures and aided speech recognition testing, commonly employed to verify and validate hearing aid intervention in older populations, are limited in their ability to meaningfully measure the effects of amplification beyond the tympanic membrane.

5. Makes sense, but regarding your last point, I’ve heard some comments against the use of brain measures when fitting babies with hearing aids.

I have too.  Some people argue that behavioral measures are the gold standard and can be quite helpful when prescribing a hearing aid. Speech scientists who measure the development of language in infants are often cited as examples of what can be done with young babies. So why add a brain measure that still doesn’t assure the baby can perceive the sound?  Others argue that a brain measure, signaling the neural detection of sound at a specific amplified intensity level, could be useful to pediatric clinicians when setting gain levels.

6. What do you think?

Well, we’re making progress, but I think there’s still room for debate. I think brain measures provide important information that will influence rehabilitation in the future, but I don’t personally think they are ready to be used in the clinic for assessment or outcome purposes. Nor do I think brain measures will replace the behavioral and electroacoustic measures that are currently used during hearing aid fittings. Collectively, however, all of these tools can provide complementary information.

To be able to use brain measures, we first need converging evidence that they can in fact provide information that is not attainable in any other way. Right now, no consistent picture is emerging to justify the expense and time of adding brain measures to our clinical service. To help move the field forward, I assembled a team of experts from different areas to contribute to a special issue of the International Journal of Otolaryngology entitled Hearing Aids and the Brain. Readers can find the published manuscripts on the journal’s website. I’ve been thinking about many of these articles, so you might hear me mention them as we go along.

So what do I think? Despite enthusiasm for this direction of research, and even commercially available systems being marketed to test evoked potentials while people are wearing their hearing aids (Munro, Purdy, Ahmed, Begum, & Dillon, 2011), a common conclusion in the published manuscripts is that we lack sufficient information to recommend that brain measures be used in the clinic, either for fitting hearing aids or for determining how people are making use of the amplified sound.

7. Can you give me an example?

Sure. Hot topics in hearing aid research include listening effort and cognition. If you’re a regular reader of 20Q, you know that there were two articles on that topic last year. A quick review goes something like this: when speech is degraded and distorted as a result of hearing loss, it is more difficult to understand, and therefore listeners have to focus more to understand it. This is known as ‘effortful listening’. Researchers at Queen’s University in Canada used fMRI to study how the brain processes distorted speech (Wild et al., 2012). They demonstrated that specific frontal regions of the brain were engaged when listeners were actively trying to understand distorted speech; these areas were not active when speech was easy to understand. Their results show that different brain areas are involved when trying to listen to distorted speech, which might in turn help us to understand why some people are more or less able to listen to distorted sounds, especially in competing background noise. But clinicians are not about to include fMRI in their everyday hearing aid fittings, so other, less time-intensive tools are being used to study effortful listening. One tool that is under investigation is pupillometry.

8. Pupillometry?  Sounds like something to do with eyes?

It certainly is. Pupillometry involves the measurement of pupil dilation during different listening conditions. Koelewijn and colleagues (2012) used measures of pupil dilation (which, of course, is also innervated by the brain) to assess listening effort under adverse conditions in normal-hearing adults. One of the advantages of pupillometry is the immediacy of the measurement, making it time efficient if it were shown to be of use to the clinician during hearing aid fittings. In this study, Koelewijn and colleagues were able to demonstrate a difference in the amount of pupil dilation among normal-hearing participants listening to speech masked by fluctuating noise versus a single talker. The authors suggest that the degree of pupil dilation may reflect the additional cognitive load resulting from the more difficult listening task. They further suggest that pupillometry may offer a viable objective measure of the benefits associated with specific hearing aid signal processing features such as digital noise reduction. As of yet, the ability to reliably measure these responses using different hearing aid technology and different types of hearing loss has not been established.
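To make the logic of such a study concrete, here is a minimal sketch of a pupillometry comparison, with entirely synthetic traces and hypothetical condition labels (not Koelewijn et al.’s actual conditions); real protocols add blink interpolation, strict baseline rules, and statistics.

```python
# Minimal pupillometry sketch: baseline-corrected pupil dilation compared
# across two listening conditions. All traces and labels are synthetic.
import numpy as np

fs = 60                              # typical eye-tracker rate (Hz)
t = np.arange(0, 6, 1/fs)            # 6-s trial: 1-s baseline, then sentence
rng = np.random.default_rng(1)

def simulate_trials(peak_dilation, n_trials=30):
    """Pupil diameter in mm: baseline + task-evoked dilation + noise."""
    dilation = peak_dilation * np.clip((t - 1.0) / 2.0, 0, 1)
    return 3.5 + dilation + rng.normal(0, 0.05, size=(n_trials, t.size))

conditions = {"easier listening": simulate_trials(0.10),
              "harder listening": simulate_trials(0.25)}

for label, trials in conditions.items():
    baseline = trials[:, t < 1.0].mean(axis=1, keepdims=True)  # per trial
    dilation = (trials - baseline)[:, t >= 1.0].mean()
    print(f"{label}: mean task-evoked dilation {dilation*1000:.0f} micrometers")
```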

9. Aren’t you making things more complicated than necessary? Why not just use speech measures to determine if a particular algorithm is best?

It sounds easy, yes, but as you might know, demonstrating that digital noise reduction algorithms in hearing aids improve speech recognition in noise has proven elusive. Anecdotal reports, however, suggest that such algorithms may provide non-speech-perception benefits such as improved ease of listening. Physiologic measures of stress such as pupillometry may therefore provide an objective measure of such non-speech benefits.

10. What other brain measures are possibly useful during the assessment and fitting of hearing aids?

Traditional click-evoked auditory brainstem responses were tried long ago to estimate unaided and aided thresholds. They proved unsuccessful, presumably because the short-duration signal interacted with the hearing aid circuitry in a way that introduced ringing and other artifacts (Gorga, Beauchaine, & Reiland, 1987). However, longer-duration stimuli are now being used to measure brainstem activity. For example, Anderson and Kraus (2012) provide some case studies to show that complex speech-evoked ABRs (also called the frequency following response) can be evoked when stimuli are presented through a hearing aid. An advantage of this technique is that it could be used to measure longitudinal changes in certain components of speech signals (e.g., fundamental frequency) over time, and it may be less sensitive to the signal-to-noise ratio (SNR) issues that are problematic when recording cortical evoked potentials (Billings, Tremblay, Stecker, & Tolin, 2009; Billings, Tremblay, & Miller, 2011). But we don’t yet know how specific hearing aid signal processing influences this type of ABR.
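As a toy illustration of why the FFR is attractive here, the sketch below pulls an F0 estimate out of a synthetic averaged brainstem response by locating the spectral peak in the voice-pitch range; the 100-Hz fundamental and the analysis window are hypothetical, and a real FFR would be averaged over thousands of sweeps.

```python
# Minimal sketch: estimating the fundamental frequency (F0) carried by a
# speech-evoked brainstem response (FFR) from its spectrum. Synthetic data.
import numpy as np

fs = 16000
t = np.arange(0, 0.2, 1/fs)           # 200-ms analysis window
rng = np.random.default_rng(2)

f0 = 100.0                            # talker's fundamental (Hz), hypothetical
ffr = (np.sin(2*np.pi*f0*t) + 0.4*np.sin(2*np.pi*2*f0*t)
       + rng.normal(0, 0.5, t.size))  # F0 + one harmonic + residual noise

spectrum = np.abs(np.fft.rfft(ffr * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1/fs)

# Search for the spectral peak in a plausible voice-pitch range (75-300 Hz)
band = (freqs >= 75) & (freqs <= 300)
estimated_f0 = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated F0 in the response: {estimated_f0:.1f} Hz")
```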

11. I’m not much of an expert on cortical potentials.  What do you mean by “SNR issues"?

Let me explain a little of what we are looking at. Cortical evoked potentials such as the P1-N1-P2 response are similar to ABR responses in that they are sensitive to stimulus level, increasing in latency and decreasing in amplitude as the intensity of the signal decreases. The P1-N1-P2 can be used to estimate hearing threshold, similar to the ABR, but suprathreshold testing can also be useful. We’ve shown that the P1-N1-P2 response reflects time-varying aspects (e.g., the envelope) of the stimulus used to evoke it (Tremblay, 2006a, b). This means you can use these cortical responses to determine whether the acoustic cues that differentiate two CV syllables (e.g., /see/ versus /she/) are being detected.
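Here is a minimal sketch of how those two quantities are typically measured: find the largest negativity inside a standard N1 search window and report its latency and amplitude. The two waveforms are synthetic, built to mimic the usual level effect (lower level, later and smaller N1) rather than taken from any study.

```python
# Minimal sketch: N1 latency/amplitude measurement at two stimulus levels.
# Waveforms are synthetic; the dB labels are hypothetical.
import numpy as np

fs = 1000
t = np.arange(0, 0.4, 1/fs)

def waveform(n1_latency, n1_amp):
    """Synthetic averaged response containing only an N1-like negativity."""
    return -n1_amp * np.exp(-((t - n1_latency) ** 2) / (2 * 0.02 ** 2))

responses = {"70 dB SPL": waveform(0.100, 3.0),
             "50 dB SPL": waveform(0.115, 1.8)}

window = (t >= 0.08) & (t <= 0.16)       # typical N1 search window
for level, w in responses.items():
    idx = np.argmin(w[window])           # N1 = largest negativity in window
    latency_ms = t[window][idx] * 1000
    print(f"{level}: N1 at {latency_ms:.0f} ms, {w[window][idx]:.1f} uV")
```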

12. Could these responses be used to determine if certain aspects of amplified sounds are being detected?

That’s what we had hoped for, but in a series of studies we found results that were less than ideal. It turns out that the hearing aid modifies sound in a way that affects brain responses and presents a conundrum. Billings et al. (2007; 2009) found that cortical responses do not reflect an increase in stimulus level when the change in level is provided by a hearing aid. That is, adding 20 dB of gain does not alter the latency and amplitude of P1-N1-P2 responses as one might expect based on unaided studies. These results raise caution about using cortical responses, because it appears that the hearing aid device is introducing variables that are not yet understood.

13. Interesting, but you still haven’t answered my question about SNR?

When we looked at the output of the hearing aid using a probe microphone, we saw that the noise floor and the stimulus level had increased, making the SNR fairly constant from unaided to aided conditions. Animal research (Phillips & Kelly, 1992) has shown that cortical neurons are sensitive to SNR rather than absolute signal level. We think that brain measures are therefore affected by aspects of hearing aid processing, perhaps microphone noise, that affect output SNR.  
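The arithmetic behind that point is worth spelling out. In the toy example below (illustrative numbers only), adding 20 dB of gain raises both the stimulus and the device’s noise floor, so the output SNR, which is what cortical neurons appear to track, is unchanged even though absolute level rises by the full gain.

```python
# Toy dB arithmetic behind the SNR point. All levels are hypothetical.
signal_in, noise_floor_in = 60.0, 30.0   # dB SPL at the microphone
gain = 20.0                              # hearing aid gain (dB)

signal_out = signal_in + gain
noise_out = noise_floor_in + gain        # device noise is amplified too

print(f"unaided SNR: {signal_in - noise_floor_in:.0f} dB")
print(f"aided SNR:   {signal_out - noise_out:.0f} dB")  # unchanged
# If cortical neurons track SNR rather than absolute level (Phillips &
# Kelly, 1992), the aided response would look much like the unaided one.
```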

14. Is this something that others have studied too?

Yes. In that special Hearing Aids and the Brain issue that I mentioned earlier, Jenstad, Stapells, and colleagues at the University of British Columbia looked to see if the results of Billings et al. (2007; 2009; 2011) were related to the type of hearing aid being used. They used three different types of hearing aids (one analog and two types of digital) and two different gain settings (20 dB and 40 dB) (Marynewich, Jenstad, & Stapells, 2012). None of the hearing aids resulted in a reliable increase in response amplitude relative to the unaided condition. They too concluded that P1-N1-P2 responses may not accurately reflect the amplified stimulus expected from the hearing aids, and that more research is needed before these measures can be considered for clinical use. In addition to altering SNR, they conclude that digital hearing aids alter the rise time of the stimuli, which also affects brain responses (Jenstad, Marynewich, & Stapells, 2012).

15. Does this mean that cortical evoked potentials should not be used to estimate aided thresholds?

Yes and no; there are some unresolved issues. Billings and colleagues (2012) compared these same P1-N1-P2 responses in people with simulated hearing loss and found that a physiological detection approach appears to be a reasonable use of aided cortical auditory evoked potentials (CAEPs), because these measures are sensitive to differences in detectability between an inaudible or barely audible signal and a suprathreshold signal. But things become more complicated when comparing suprathreshold responses that vary in intensity, because the hearing aid processing modifies the audible stimulus acoustics (e.g., SNR and onset characteristics).

16. If the brain is sensitive to some of the acoustic processing that is imposed by the hearing aid, could this be exploited in a positive way for clinicians?

Possibly. This is another direction of research, and something that has been investigated by Susan Scollie and her group at the University of Western Ontario (UWO). For example, Glista and colleagues (2012) set out to examine whether CAEPs are sensitive to different aspects of technology. They tested children with and without hearing loss to determine whether frequency compression hearing aid technology improved audibility of a 4 kHz tone burst, and whether that translated into improved detection of CAEP responses. Their results suggest that in some children, frequency compression processing improved audibility of specific frequencies, leading to increased rates of detectable cortical responses in hearing-impaired children. However, the CAEPs were not always present, even when sounds were clearly audible. These points reinforce the need for further research to assess the effects of hearing aid fine-tuning (e.g., changes to hearing aid gain and frequency shaping), or other aspects of hearing aid signal processing, on CAEPs.

17. So the problem is related to how CAEPs are measured?

To some extent, yes. Just as ABR protocols involve repeated presentations of clicks, CAEP protocols involve repeated presentations of a target phoneme, each preceded by an interstimulus interval (ISI) of silence. Easwar, Purcell, and Scollie (2012) studied this mode of presentation to determine how the presentation of isolated phonemes differs from everyday speech processing, where the same phoneme in running speech is likely to be preceded by other phonemes. Since nonlinear hearing aids continuously and rapidly adjust band-specific gains based on the acoustic input, the hearing aid may react differently to the same phoneme presented during aided CAEP testing than to that phoneme occurring in running speech.

The research group from UWO reported significant differences in hearing aid functioning between running-speech and isolated-phoneme contexts, along with considerable inter-hearing-aid variability. In some instances, the magnitude of these differences was large enough to affect calibration or the interpretation of group data. This may indicate the need to perform acoustic calibration for individual hearing aids in order to deliver well-defined CAEP stimuli.
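A simple way to see why context matters is to simulate a compressor whose gain depends on the recent level history. The one-band compressor below, with entirely hypothetical parameters, gives the same phoneme several dB more gain at onset when it follows silence (as in CAEP protocols) than when it follows ongoing speech. This is a sketch of the mechanism only, not of Easwar et al.’s measurements.

```python
# One-band WDRC sketch (hypothetical threshold/ratio/time constants) showing
# that a phoneme after silence is amplified differently than after speech.
import numpy as np

fs = 16000

def compressor_gain(level_db, threshold=45.0, ratio=3.0,
                    attack=0.005, release=0.100):
    """Per-sample gain track (dB) for an input level track (dB SPL)."""
    target = np.where(level_db > threshold,
                      -(level_db - threshold) * (1 - 1 / ratio), 0.0)
    gain, g = np.empty_like(level_db), 0.0
    for i, tg in enumerate(target):
        tau = attack if tg < g else release   # clamp fast, recover slowly
        alpha = np.exp(-1 / (tau * fs))
        g = alpha * g + (1 - alpha) * tg
        gain[i] = g
    return gain

n = int(0.1 * fs)                             # 100-ms segments
phoneme = np.full(n, 65.0)                    # target phoneme, 65 dB SPL
contexts = {"after silence (ISI)": np.full(n, 20.0),
            "after running speech": np.full(n, 62.0)}

for label, context in contexts.items():
    g = compressor_gain(np.concatenate([context, phoneme]))[n:]
    print(f"{label}: mean gain {g[:int(0.01*fs)].mean():.1f} dB "
          f"over the phoneme's first 10 ms")
```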

18. Acoustic calibration for hearing aids?

As I mentioned earlier, this means that CAEPs recorded to isolated speech sounds might not reflect what the brain is really doing with real-world running speech. Traditionally, in the field of hearing science, we focus on evoked potentials that are sensitive to the acoustics of sound, rather than on language processing using running speech. So considering alternative CAEP recording techniques, using different stimulus presentation methods rather than a single syllable repeated in isolation, could be a new direction of research. One step in this direction is the use of the acoustic change complex (ACC) through hearing aids (Tremblay et al., 2006). Here you can pair two sounds (e.g., /u/-/i/) to determine whether the brain detected a change from one vowel to another, or whether the temporal envelope (e.g., /she/ versus /see/) is being encoded at the level of the cortex. It is still not the same as running speech, but perceptual discrimination thresholds have been shown to closely estimate physiological discrimination thresholds (Won et al., 2011). With that said, Easwar et al. (2012) note that the output levels in different contexts may have implications for calibration and for the estimation of audibility based on CAEPs, and the variability observed across hearing aids could make it challenging to predict differences on an individual basis.
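For the curious, here is a minimal sketch of what an ACC-style stimulus looks like: two steady segments joined so that the only acoustic event is the change between them. Pure tones stand in for the two vowels here; actual studies use recorded speech, and the frequencies and durations are hypothetical.

```python
# Minimal sketch of an acoustic change complex (ACC) style stimulus: two
# steady segments concatenated so the only event is the change between them.
import numpy as np

fs = 16000
dur = 0.4                                   # each segment: 400 ms
t = np.arange(0, dur, 1/fs)

segment_a = np.sin(2 * np.pi * 300 * t)     # stand-in for /u/ (hypothetical)
segment_b = np.sin(2 * np.pi * 2300 * t)    # stand-in for /i/ (hypothetical)

# Short crossfade avoids a click that would itself evoke an onset response
ramp = np.linspace(0, 1, int(0.005 * fs))
segment_a[-ramp.size:] *= ramp[::-1]
segment_b[:ramp.size] *= ramp

stimulus = np.concatenate([segment_a, segment_b])
# The ACC is the cortical response time-locked to the change at t = 0.4 s,
# recorded and averaged just like an onset P1-N1-P2.
print(f"stimulus: {stimulus.size/fs:.1f} s, change at {dur:.1f} s")
```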

19. So we’re making progress, but it sounds like there are still no consistent guidelines for clinicians?

Sadly, that is true. But more and more researchers are joining this area of research, and I’m optimistic that we’ll continue to learn from each other. I think it’s exciting to look at brain measures as a potential source of new information, but like any other area of research, there are limits to neuroscience, too.

20. What are you hinting at?

Well, there are only so many ways we can measure the brain, and the information we obtain depends on how the tool is used. By this I mean we shouldn’t toss away our existing tools in the hope that neuroscience will solve everything. Take CAEPs, for example. Although the presence of a P1-N1-P2 response suggests the neural detection of sound at the level of the cortex, it does not say anything about the integrity of the brain as a whole or the ability to make use of this information. Moreover, as we’ve learned from the research in the special issue, when brain responses are absent or insensitive to a particular sound, it doesn’t necessarily mean there is faulty neural processing. It could mean that the interaction between hearing aid technology and brain recording techniques is confounding brain measurements.

For all of these reasons, brain responses can provide information, but this does not mean that a single brain response will predict hearing aid success. Contributions of hearing aid technology, cognitive status, listening effort, and the ability to ignore irrelevant stimuli will also help to inform clinicians about how patients are making use of the information provided by their hearing aids.

References

Anderson, S., & Kraus, N. (in press). The potential role of the cABR in assessment and management of hearing impairment. International Journal of Audiology.

Billings, C.J., Papesh, M.A., Penman, T.M., Baltzell, L.S., & Gallun, F.J. (2012). Clinical use of aided cortical auditory evoked potentials as a measure of physiological detection or physiological discrimination. International Journal of Otolaryngology, 2012. Article ID 365752, doi:10.1155/2012/365752.

Billings, C.J., Tremblay, K.L., & Miller, C.W. (2011). Aided cortical auditory evoked potentials in response to changes in hearing aid gain. International Journal of Audiology, 50(7), 459–467.

Billings, C.J., Tremblay, K.L., Souza, P.E., & Binns, M.A.  (2007). Effects of hearing aid amplification and stimulus intensity on cortical auditory evoked potentials. Audiology and Neurotology, 12(4), 234-246.

Billings, C.J., Tremblay, K.L., Stecker, G.C., & Tolin, W.M. (2009).  Human evoked cortical activity to signal-to-noise ratio and absolute signal level.  Hearing Research, 254(1-2), 15 - 24, doi: 10.1016/j.heares.2009.04.002.

Easwar, V., Purcell, D.W., & Scollie, S.D. (2012). Electroacoustic comparison of hearing aid output of phonemes in running speech versus isolation: Implications for aided cortical auditory evoked potentials testing. International Journal of Otolaryngology, 2012. Article ID 518202, doi: 10.1155/2012/518202.

Glista, D., Easwar, V., Purcell, D.W., & Scollie, S. (2012).  A pilot study on cortical auditory evoked potentials in children: Aided CAEPs reflect improved high-frequency audibility with frequency compression hearing aid technology. International Journal of Otolaryngology, 2012. Article ID 982894, doi:10.1155/2012/982894.

Gorga, M.P., Beauchaine, K.A., & Reiland, J.K. (1987). Comparison of onset and steady-state responses of hearing aids: Implications for use of the auditory brainstem response in the selection of hearing aids. Journal of Speech, Language and Hearing Research, 30, 130-136.

Jenstad, L.M., Marynewich, S., & Stapells, D.R. (2012). Slow cortical potentials and amplification—Part II: Acoustic measures. International Journal of Otolaryngology, 2012. Article ID 386542, doi:10.1155/2012/386542.

Koelewijn, T., Zekveld, A.A., Festen, J.M., Rönnberg, J., & Kramer, S.E. (2012). Processing load induced by informational masking is related to linguistic abilities. International Journal of Otolaryngology, 2012. Article ID 865731, doi:10.1155/2012/865731.

Marynewich, S., Jenstad, L.M., & Stapells, D.R. (2012).  Slow cortical potentials and amplification—Part I: N1-P2 measures. International Journal of Otolaryngology, 2012. Article ID 921513, doi:10.1155/2012/921513.

Munro, K.J., Purdy, S.C., Ahmed, S., Begum, R., & Dillon, H. (2011). Obligatory cortical auditory evoked potential waveform detection and differentiation using a commercially available clinical system: HEARLab. Ear and Hearing, 32, 782–786.

Phillips, D.P., & Kelly, J.B. (1992).  Effects of continuous noise maskers on tone-evoked potentials in cat primary auditory cortex. Cerebral Cortex, 2(2), 134–140.

Tremblay, K.L. (2006).  Hearing aids and the brain – what’s the connection?  Hearing Journal, 59(8), 10-17.

Tremblay, K.L., Billings, C.J., Friesen, L.M., & Souza, P.E. (2006). Neural representation of amplified speech sounds. Ear and Hearing, 27(2), 93–103.

Tremblay, K.L., Kalstein, L., Billings, C.J., & Souza, P.E. (2006).  The neural representation of consonant-vowel transitions in adults who wear hearing aids. Trends in Amplification, 10(3), 155–162.

Wild, C.J., Yusuf, A., Wilson, D.E., Peelle, J.E., Davis, M.H., & Johnsrude, I.S. (2012). Effortful listening: The processing of degraded speech depends critically on attention. Journal of Neuroscience, 32(40), 14010-14021.

Won, J.H., Clinard, C.G., Kwon, S., Dasika, V.K., Nie, K., Drennan, W.R.,...Rubinstein, J.T. (2011). Relationship between behavioral and physiological spectral-ripple discrimination. Journal of the Association for Research in Otolaryngology, 12(3), 375-393.

Cite this content as:

Tremblay, K. (2013, January). 20Q: Hearing aids - the brain connection. AudiologyOnline, Article #11538. Retrieved from https://www.audiologyonline.com/

 



Kelly Tremblay, PhD

University of Washington

Dr. Kelly Tremblay is a Professor at the University of Washington in Seattle, WA.  She is an audiologist and neuroscientist who studies the effects of aging and rehabilitation on the brain.


