Vanderbilt Audiology Journal Club: Recent Hearing Aid Innovations and Technology

Erin Picou, AuD, PhD
May 29, 2018

Learning Objectives

After this course, participants will be able to:

  • List new key journal articles on the topic of hearing aid technology.
  • Describe the purpose, methods and results of new key journal articles on hearing aid technology.
  • Explain some clinical takeaway points from new key journal articles on hearing aid technology.

Introduction and Overview

This course will review journal articles from 2017 that focus on:

  • Hearing aid technology and innovation
  • Noise reduction and directionality on listening effort
  • Visually-guided directionality on speech recognition
  • Individual differences in bilateral directionality benefit for speech recognition
  • How clinicians fit and fine-tune processing schemes

Each article review will follow this agenda:

  • What they asked
  • Background
  • Why it matters
  • What they did
  • What they found
  • Why is this important?
  • Does it matter clinically?

To begin, we will discuss two articles that examine noise reduction and directional microphones and how they affect listening effort. Next, we will turn to a study of a visually guided directional hearing aid system and speech recognition. We will then discuss an article on individual differences and potential candidacy issues for advanced directional technologies. Finally, as a bonus study, we will review an article surveying how clinicians fit and fine-tune hearing aids and processing schemes. I will use the format above to discuss each of the following studies.

Impact of Noise and Noise Reduction on Processing Effort: A Pupillometry Study (Wendt, Hietkamp, & Lunner, 2017)

What They Asked

In 2017, a group from Denmark asked whether noise reduction schemes can reduce processing effort during speech recognition. For the purposes of this study, the authors defined a noise reduction scheme as the combination of digital noise reduction and directional microphones.

Background

Speech understanding is important, and our patients with hearing loss often have difficulty understanding speech in noise. These difficulties are related not only to speech recognition, but also to the increased cognitive demands and listening effort required to understand that speech. Noise reduction algorithms can make listening in noise easier for the user. Digital noise reduction does a nice job of improving patient comfort, but because it does not change the instantaneous signal-to-noise ratio, we typically do not see improvements in speech recognition with digital noise reduction in hearing aids. Directional microphones, which are more sensitive to sounds coming from the front than the back, improve the signal-to-noise ratio and subsequently improve speech recognition when speech and noise are spatially separated.

Are these features helpful to the hearing aid user? We certainly have seen evidence that directional microphones and digital noise reduction can improve speech recognition. However, many of these studies use unfavorable signal-to-noise ratios; the authors cite a range of -10 to 0 dB. We also know that many of these noise reduction schemes work best at positive signal-to-noise ratios. In real environments, most people are not communicating at negative signal-to-noise ratios. Laboratory tests using positive signal-to-noise ratios might show ceiling effects, but a ceiling effect does not mean that our patients are not struggling: scoring high on a speech recognition test does not mean they are doing well in a noisy environment.

Researchers and clinicians are interested in listening or processing effort. Consider listening effort as the cognitive resources necessary for understanding speech, or how much brain power someone has to put forth while listening. We know that listening effort is affected by the environment: in a poor signal-to-noise ratio, individuals use more brain power or cognitive effort. Listening effort also increases with reduced audibility, and it is affected by listener characteristics (e.g., age, hearing loss); individuals who have hearing loss exhibit increased listening effort. Listening or processing effort can be measured in a variety of ways. The easiest is to ask, "How much effort did you put in?" or, "How much brain power did you use?" Such subjective ratings of effort have strengths and limitations. The limitations are that they are prone to recall bias, they can only be collected after listening is finished, and they rely on the listener's ability to accurately reflect on and report their own effort. We use reaction-time-based paradigms in our lab, and some groups use memory measures. Both have associated limitations and require assumptions about human cognitive capacity and cooperation.

Perceived listening effort aside, physiological measures are the main focus of this study. Pupillometry is a physiological measure that taps into the way our body changes when we are using more cognitive energy, or brain power. It provides an indirect measure of effort without requiring much cooperation from the patient, and it can be done in real time. This measure uses pupil diameter as an indication of processing effort: when we are thinking harder and putting more cognitive energy into a task, our pupils enlarge. We can quantify processing or listening effort by the size of the pupil, or by how long the pupil stays dilated, while someone is listening or after they are done listening. We know that hearing aids in general can reduce listening effort; studies have compared groups fit with hearing aids to peers with more normal hearing (Desjardins & Doherty, 2014; Wendt, Kollmeier, & Brand, 2015), and with hearing aids, the effects of hearing loss on listening effort can be quite small.
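
To make the measurement concrete, here is a minimal sketch of how baseline-relative pupil dilation is typically computed. This is not the authors' analysis code; the function name, the 100 Hz sampling rate, and the example trace are illustrative assumptions.

    import numpy as np

    def baseline_relative_dilation(pupil_mm, fs=100, baseline_s=1.0):
        """Express a pupil trace relative to its pre-stimulus baseline.

        pupil_mm: pupil diameter samples in mm, baseline period first.
        fs: sampling rate in Hz (assumed; eye trackers vary).
        baseline_s: seconds of pre-stimulus data averaged as the baseline.
        """
        pupil_mm = np.asarray(pupil_mm, dtype=float)
        baseline = pupil_mm[: int(fs * baseline_s)].mean()  # resting diameter = "zero"
        return pupil_mm - baseline  # positive values = dilation = more effort

    # Example: peak dilation during a sentence is a common effort index.
    trace = np.concatenate([np.full(100, 4.0),             # 1 s baseline at 4.0 mm
                            4.0 + 0.3 * np.hanning(300)])  # dilation while listening
    print(round(baseline_relative_dilation(trace).max(), 2))  # ~0.3 mm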

Hearing aids with digital noise reduction can reduce listening effort (Doherty & Desjardins, 2015; Sarampalis, Kalluri, Edwards, & Hafter, 2009), and directional microphones can further reduce listening effort (Picou, Moore, & Ricketts, 2017; Desjardins, 2016). These benefits have been related to an individual listener's cognitive ability (Rönnberg et al., 2013). It is therefore interesting to consider candidacy: can we predict who might benefit from these features in terms of listening effort? There is evidence to suggest that people with larger working memory capacities might benefit more from digital noise reduction (Ng, Rudner, Lunner, & Rönnberg, 2015). Ultimately, these individuals might struggle less in difficult listening situations.

Why It Matters

A clinician's goal is to make listening easier for our patients, so we want evidence that hearing aid features work in the environments patients are actually exposed to. We do not want to focus only on -10 to 0 dB signal-to-noise ratios; we also want to look at naturalistic signal-to-noise ratios. A recent study by Wu and his colleagues at Iowa, published in Ear and Hearing, found that when noise was present, the speech was most commonly 68 dB and the noise 60.5 dB, making the most common real-world signal-to-noise ratio 7.5 dB (Wu et al., 2017). That is quite different from the -10 to 0 dB SNR many other studies have used (Smeds, Wolters, & Rung, 2015).
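
The signal-to-noise ratio itself is simple level arithmetic; a quick check of the Wu et al. (2017) numbers:

    # SNR (dB) = speech level (dB) - noise level (dB)
    speech_db, noise_db = 68.0, 60.5
    print(speech_db - noise_db)  # 7.5 dB, the most common real-world SNR reported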

What They Did

The authors tested 24 listeners with mild to moderately-severe hearing loss. All were experienced bilateral hearing aid users, and the average age was 59 years. They measured everyone's cognitive abilities using a reading span task. In this task, participants listen to a sentence and judge whether it sounds semantically correct, answering "yes" or "no". For example, in response to the sentence, "The train sang a song," the participant would answer "No". After completing several semantic judgments, they were asked to recall either the initial or final words of the sentences. This tests working memory, because participants must hold words in memory while multitasking. The researchers measured pupil responses using eye-tracking goggles and associated recording equipment. A baseline pupil diameter, the individual's resting state, is represented as zero. When a sentence began, the pupil dilated; after the sentence concluded, pupil diameter returned to baseline. In this study, noise was present during the sentence presentations as well: speech was presented from a loudspeaker in front, and noise was presented from four loudspeakers to the sides and back of the listener.

Researchers conducted two experiments:

Experiment One. In the first experiment, the researchers examined the effect of noise reduction and directional microphones on speech recognition and listening effort, evaluated where speech recognition was 50% correct (L50) and 95% correct (L95). They were looking at challenging listening situations, but also at situations where the listener would be near ceiling. Individual differences in cognitive ability were also measured. At L50, sentence recognition was around 50% both with and without the noise reduction scheme; in other words, there was no significant improvement in recognition with the activation of the digital noise reduction and directional microphone. The pupil data tell a different story. Larger pupils mean more processing or listening effort, and the noise reduction scheme (noise reduction plus directional microphone) reduced pupil dilation in both the L50 and L95 conditions. Even though we do not see an improvement in sentence recognition, and even when listeners are at ceiling, noise reduction and directionality reduced listening effort.
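
For readers unfamiliar with L50 and L95, here is a minimal sketch of how such levels can be estimated by fitting a logistic psychometric function to percent-correct scores at several SNRs. The data points and starting values are hypothetical; this illustrates the general technique, not the authors' exact procedure.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(snr, midpoint, slope):
        """Psychometric function: proportion correct as a function of SNR (dB)."""
        return 1.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

    # Hypothetical pilot data: proportion of words correct at several SNRs.
    snrs = np.array([-6.0, -3.0, 0.0, 3.0, 6.0, 9.0])
    pcorr = np.array([0.10, 0.28, 0.55, 0.80, 0.93, 0.98])

    (midpoint, slope), _ = curve_fit(logistic, snrs, pcorr, p0=[0.0, 0.5])

    def level_for(p):
        """Invert the fit: the SNR at which proportion p is correct."""
        return midpoint - np.log(1.0 / p - 1.0) / slope

    print(f"L50 = {level_for(0.50):.1f} dB SNR, L95 = {level_for(0.95):.1f} dB SNR")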

Experiment Two. In the second experiment, the researchers used two commercially available hearing aids, evaluated at the 95% sentence recognition performance level. Hearing aid one was a relatively new device with fast-acting noise reduction and a sophisticated beamformer; hearing aid two was an older device with slow-acting noise reduction. As in the first experiment, the signal-to-noise ratio was set individually based on each participant's speech understanding ability, and everyone was tested with and without noise reduction. Sentence recognition performance was high, and there was a slight listening effort benefit for the newer, more advanced noise reduction and directional processing. Also, in the noise reduction conditions (both the 50% and the 95% correct), the researchers found a significant (although weak) relationship between working memory capacity and listening effort: people with larger working memory capacities had smaller pupils, or less listening effort. That trend did not hold for the no-noise-reduction conditions. This suggests that there might be candidacy issues, although these issues are not clearly defined and more research is needed.

Why is This Important?

This is further evidence supporting a dissociation between listening effort and speech recognition: we can see benefits of the technology on listening effort even when we do not see them in speech recognition, because people are already doing quite well. It is also evidence supporting pupillometry, because pupil dilation tracked task difficulty as expected; pupillometry appears to be a good measure of listening effort. Finally, we see confirmation of the benefits of noise reduction and directional microphones for speech recognition and also for processing or listening effort. Importantly, this study was conducted at realistic signal-to-noise ratios. On average, the L50 was at 1.3 dB SNR and the L95 at 7.1 dB SNR. I mentioned earlier that Wu and his group found that 7.5 dB was the most common real-world SNR, so these ecologically valid speech-in-noise levels are right on target.

Does it Matter Clinically?

Testing at these ecologically valid signal-to-noise ratios supports the clinical utility of these technologies and helps us translate our laboratory benefits to real-world listening experiences. Of course, with any research study, we have some lingering questions:

  • Would we expect these benefits in real-world situations and not just laboratory environments?
  • Are these benefits large enough for patients to notice?
  • Can we think about ways to individualize fitting of these technologies for patients based on their working memory capacity? I think we need to do more work to look at the candidacy issues.
  • Are we convinced that benefits are related to cognition and not just someone's emotional state? Pupillometry is used not only in studies of listening effort, but also in studies of emotion regulation and stress.

There is discussion about pupil dilation and listening effort and whether or not these changes are reflective of actual processing effort or someone's emotional response. This topic is a nice segue to the next study, which uses a different kind of physiology technique to measure listening effort.

Neurodynamic Evaluation of Hearing Aid Features Using EEG Correlates of Listening Effort (Bernarding, Strauss, Hannemann, Seidler, & Corona-Strauss, 2017)

What They Asked

The researchers in this study asked whether electroencephalography (EEG) can be used as an indication of listening effort, and whether a combination of directional microphones and digital noise reduction can further reduce listening effort.

Background

I have already mentioned some of the limitations of the existing methodologies to measure listening effort:
 
  • Subjective ratings are influenced by bias
  • Response time measures require some assumptions and also cooperation from patients
  • Pupillometry might be influenced by arousal or stress
  • With auditory evoked responses, stimuli must be of short duration and are prone to exogenous effects (e.g., noise levels, hearing aid settings)

The authors propose using the ongoing oscillatory activity of the EEG to measure and quantify neural phase synchronization. The theory is that higher neural phase synchronization occurs with increased attentional focus. Bernarding, Strauss, Hannemann, Seidler and Corona-Strauss (2017) studied this ongoing oscillatory activity: higher phase synchronization in the theta band reflects increased attentional modulation and higher cognitive effort. In low-effort conditions, the phase angles are uniformly distributed, with no coordinated activity. In high-effort conditions, the phase responses cluster, or entrain, near zero. That clustering is an indication of high effort, because the neurons are synchronized and working together, focusing attention (and potentially effort) on the listening task at hand.
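
As a rough illustration of the synchronization idea (a simplified stand-in for the authors' EEG measure, not their algorithm), circular statistics can quantify how tightly a set of phase angles clusters: the mean resultant length is near 0 when phases are uniformly distributed and near 1 when they are entrained at a common value.

    import numpy as np

    def phase_sync_index(phases_rad):
        """Mean resultant length: 0 = uniform phases, 1 = fully entrained."""
        return np.abs(np.mean(np.exp(1j * np.asarray(phases_rad))))

    rng = np.random.default_rng(0)
    low_effort = rng.uniform(-np.pi, np.pi, 1000)  # phases spread uniformly
    high_effort = rng.normal(0.0, 0.4, 1000)       # phases clustered near zero
    print(round(phase_sync_index(low_effort), 2))   # ~0.0
    print(round(phase_sync_index(high_effort), 2))  # ~0.9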

Why It Matters

Clinically, our patients report difficulties understanding speech in noise (e.g., speech intelligibility, listening effort, fatigue). If listening effort can be measured with EEG, and if directional microphones can reduce effort, we would have another tool. Furthermore, we would have more evidence supporting this technology to help our patients. Hearing aids and special features can reduce listening effort, but the existing methodologies have conceptual limitations.

What They Did

Researchers tested 14 participants, all experienced hearing aid users with mild to moderate bilateral hearing loss and an average age of 65 years. Participants were fit with commercial BTE devices with occluding domes and then tested in four hearing aid settings:
 
  • DSEstr – directional microphone with strong Wiener filter noise reduction
  • DSEmed – directional microphone with medium Wiener filter noise reduction
  • DSEoff – directional microphone with Wiener filter noise reduction off
  • ODM – omnidirectional microphone with Wiener filter noise reduction off

Condition One. Speech recognition was measured with the Oldenburg Sentence Test, in which participants repeat back sentences. The Oldenburg sentences all have the same structure of subject, verb, number, adjective, object (e.g., "Peter buys three red cups."). Speech was presented at 65 dB.

Condition Two. A story comprehension task, drawn from a validated German comprehension test, was also administered at 65 dB. Speech was presented from the front, and a combination of two noise signals (ISTS at 60 dB and cafeteria noise at 67 dB) originated from behind the listener. Sentence recognition and story comprehension were administered with each of the four hearing aid settings. The researchers also elicited subjective ratings of perceived listening effort and speech intelligibility, scored on a seven-point scale.

What They Found

In general, measured and rated speech intelligibility were better with directional microphones (participants repeated back more words correctly); performance and ratings were worse in the omnidirectional condition. Looking at listening effort, participants in the omnidirectional condition reported higher perceived listening effort and showed more neural phase synchronization than in the directional settings. There was not much difference between the different degrees of digital noise reduction.

They also found a significant, strong relationship between the EEG measure and subjective ratings of effort: the EEG scores were highly correlated with perceived effort in all hearing aid settings. Overall, the omnidirectional microphone resulted in both the highest perceived effort and the highest measured effort.

Why is This Important?

These findings support the use of EEG and neural phase entrainment or synchronization as a physiological measure of listening effort. Also, this research provides further evidence that directionality can improve speech intelligibility, as well as improve listening effort, both objectively (physiologic) and subjectively.

Does it Matter Clinically?

Although we've had directional microphones for decades, it's not clear that patients prefer them. This lack of clear preference has been attributed to small effects in natural environments, untrained ears, and patients not spending time in situations where directional technology is expected to help. Manufacturers, clinicians and researchers are interested in ways to further improve the SNR, with the hope that the resulting benefits will be noticed by patients in the real world, even without training. We are starting to see converging lines of evidence, across all of these different listening effort methodologies, that noise reduction and directional microphones can reduce listening effort.

With any research study, we have some lingering questions: 

  • Are there individual factors that might affect benefits or candidacy?
  • Are these changes large enough to matter to patients?
  • Would a patient notice the difference (i.e., would there be evidence of reduced effort) between directional and omni in natural, real-world situations? 

The Benefit of a Visually Guided Beamformer in a Dynamic Speech Task (Best, Roverud, Streeter, Mason, & Kidd, 2017)

What They Asked

Now we will switch our focus from listening effort to speech recognition. The next study, from a group at Boston University, asked whether a visually guided hearing aid can improve speech intelligibility in a dynamic, "real-world" communication setting.

Background

Directional microphones do a good job of improving speech recognition in spatially separated noise, particularly when the talker is in front and the noise comes from behind or the side. However, we know that our patients continue to have trouble understanding speech in noise. In response, manufacturers are developing more advanced directional microphones, which improve directivity, and subsequently the signal-to-noise ratio, by combining signals from multiple microphones. This can be done in a couple of ways: by using a bigger array with more microphones on a single hearing aid, or by combining information across a pair of hearing aids, as in a bilateral beamformer (e.g., Phonak Stereo Zoom). In all cases, the idea is an improved signal-to-noise ratio because information is combined across more microphones. The challenge with more microphones, particularly bilateral beamformers but even regular directional microphones, is steering: because they are more sensitive to sounds from the front than from other locations, they must be pointed at the target. Clinically, I tell our patients to point their nose at what they want to hear, to make sure the target gets the maximum sensitivity and the noise falls in the null. These authors have been working on a visually guided beamformer: instead of turning your nose toward what you are listening to, the beamformer tracks eye gaze, on the assumption that people look at what they are interested in hearing. They have had quite a bit of success with this approach.

Kidd et al. (2013) used an array of 16 microphones mounted on a headband, along with a head-mounted eye tracker; the array steers its point of maximum sensitivity based on eye gaze. This visually guided beamformer can improve the signal-to-noise ratio by 9 dB and speech recognition by 6-9 dB, which is a pretty large improvement. With traditional directional microphones, averaged across listening situations, the benefit might be 2-3 dB. The visually guided beamformer therefore has significant potential to improve speech recognition and the signal-to-noise ratio.
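
A minimal delay-and-sum sketch shows why adding microphones helps. This is illustrative only (the Kidd et al. array uses far more sophisticated processing): delays are chosen so that the look-direction signal adds coherently across microphones while noise from other directions averages down, improving the SNR roughly in proportion to the number of microphones.

    import numpy as np

    def delay_and_sum(mic_signals, delays_samples):
        """Align each microphone to the look direction, then average.

        mic_signals: (n_mics, n_samples) array.
        delays_samples: integer steering delay per mic, assumed precomputed
        from the array geometry and the desired look direction.
        """
        aligned = [np.roll(sig, -d) for sig, d in zip(mic_signals, delays_samples)]
        return np.mean(aligned, axis=0)  # target sums coherently; noise averages down

    rng = np.random.default_rng(1)
    target = np.sin(2 * np.pi * 0.01 * np.arange(2000))  # on-axis target (zero delay)
    mics = np.array([target + rng.normal(0, 1.0, 2000) for _ in range(16)])
    out = delay_and_sum(mics, [0] * 16)
    # Residual noise power drops ~16x (about 12 dB) with 16 microphones.
    print(round(np.var(mics[0] - target) / np.var(out - target), 1))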

Of course, these advanced beamformers have downsides. In a true bilateral beamformer that combines information from both hearing aids, the output is a single signal, and that single signal is sent to both hearing aids. You do not want to do that fully, because you lose binaural information: patients would not be able to tell where sounds were coming from; they would just be listening to a single signal. Performance might also be worse, because losing spatial cues causes trouble, particularly in dynamic listening environments with multiple talkers.

Why It Matters

Cue preservation is a way to reintroduce some of those binaural cues, a compromise between natural listening and the advanced beamformer. One approach is to reintroduce binaural cues with a head-related transfer function. Another is to beamform only at high frequencies: the low frequencies remain natural, carrying the natural binaural information, while the aggressive bilateral beamformer operates only in the high frequencies.
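
Here is a sketch of that crossover idea: keep the natural signal at each ear below the cutoff and substitute the beamformer output above it. The brick-wall FFT filter and the 800 Hz default cutoff are assumptions for illustration (real systems use proper filter banks), not the actual implementation from these studies.

    import numpy as np

    def cue_preserving_mix(natural_ear, beam_output, fs, cutoff_hz=800.0):
        """Natural low frequencies + beamformed high frequencies, per ear."""
        n = len(natural_ear)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        low = np.fft.rfft(natural_ear) * (freqs < cutoff_hz)    # binaural cues kept
        high = np.fft.rfft(beam_output) * (freqs >= cutoff_hz)  # SNR-improved band
        return np.fft.irfft(low + high, n)

    # Run once per ear with that ear's natural signal, so the low-frequency
    # interaural time and level differences survive the processing.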

Another study by the same researchers suggested that a cue-preserving, visually guided hearing aid can help when speech is in front and a competing talker is on either side. There is evidence that this visually guided, cue-preserving beamformer can help, but many typical situations involve more than a talker in front and on either side, and the configuration is likely to change from moment to moment; in a conversation with multiple partners, the source is not stationary. As I mentioned earlier, the microphones need to be steered. Traditionally, you steer by moving your head and pointing your nose at what you are interested in; these authors have had some success steering with eye movement instead.

What They Did

There were 14 young adults in this study: seven with normal hearing and seven with bilateral mild to severe sensorineural hearing loss. Three conditions were tested: BEAM (a simulated, visually guided hearing aid), KEMAR (the natural condition, without any processing or beamforming), and BEAMAR (a combination of the two, with natural KEMAR low frequencies and beamformed high frequencies). Everyone was tested in a booth with an eye tracker measuring where they were looking; the eye tracker updated the microphone output in real time, steering or guiding the directionality. Instead of a straightforward sentence or word recognition task, the researchers used a question-and-answer task, aiming for a natural communication environment that captures the challenges our patients face in the real world. The questions and answers were spoken by 22 talkers, half of whom were male, and half of the answers were correct. A participant might hear, "What day comes before Monday?" followed by the answer, "Sunday," and their task was to say whether the answer was true. Participants had a keypad with buttons labeled "correct" and "incorrect".

Two spatial conditions were tested: dynamic and fixed. In the dynamic condition, the questions and answers moved unpredictably from location to location. In the fixed condition, the question and answer always came from the same loudspeaker, so participants knew where the speech was coming from, making it easier to steer or guide with their eyes. Visual cues on top of the monitors confirmed that participants were looking in the right place. The maskers were secondary conversations at other locations throughout the test environment, and the target level was 55 dB. Various masker levels were used, and the researchers plotted performance across masker levels to find the signal-to-noise ratio at which participants scored 75% correct.
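
The 75%-correct point can then be read off by interpolating across the measured masker levels; a simplified sketch, with hypothetical masker levels and scores:

    import numpy as np

    # Percent correct measured at several masker levels (target fixed at 55 dB).
    masker_levels = np.array([65.0, 60.0, 55.0, 50.0, 45.0])
    snrs = 55.0 - masker_levels  # -10, -5, 0, 5, 10 dB SNR
    pcorr = np.array([0.35, 0.55, 0.72, 0.86, 0.95])

    # Interpolate performance vs. SNR to find the 75%-correct point
    # (np.interp needs ascending x values; pcorr ascends with snrs here).
    snr_75 = np.interp(0.75, pcorr, snrs)
    print(round(snr_75, 1))  # SNR supporting 75% correct, ~1.1 dB here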

What They Found

In the fixed condition, where participants knew the sound would come from the front or one of the sides, the BEAMAR was significantly better than all other conditions. In the dynamic condition, the worst performance was with the BEAM: with unpredictable talker locations, the BEAM hurt performance on this task for both groups, and there was no real benefit of the BEAMAR relative to natural unaided (KEMAR) listening.

Looking more closely at the fixed scenario, the results can be separated by talkers coming from the center versus from the side. In both cases, the BEAMAR produced the best performance for both groups. When the talker came from the side, the BEAM significantly impaired performance; when the talker came from the center, the BEAM did help relative to the KEMAR. The benefits of this technology are therefore specific to the location of the talker and to the predictability of the talker.

They also found a wide range of benefits for both microphone types. Earlier in the course, I mentioned candidacy, individual variability and individualization. The researchers tried to explain some of the variability in benefit using pure tone average (PTA). Although PTA was related to performance in general, it was not related to benefit with either directional technology; PTA was not a successful predictor. As you might expect, people were not as good at finding the location of the talker with their eyes when talker locations were unpredictable, but eye gaze error was independent of microphone condition: whether in KEMAR, the advanced directional, or the cue-preserving combination, errors were consistent, with larger eye gaze errors in the dynamic condition. Those eye gaze errors are why performance in the dynamic condition was not as good.

Why is This Important?

This is evidence that a cue-preserving, visually guided beamformer can improve speech recognition and comprehension for listeners with normal hearing and for listeners with hearing loss. Cue preservation in this case meant bilateral beamforming only in the high frequencies, confirming that natural low-frequency cues are important and helpful for listeners. The task itself is also interesting: we do not see many comprehension or discussion-type tasks in the literature, where the majority of studies use sentence recognition. It is nice to see the expected benefits emerge in a comprehension task, which is another important finding of this study.

Does it Matter Clinically?

The data support the importance of binaural low-frequency cues. When fitting bilateral beamforming, I think we want to stay away from full bilateral beamforming that sends the same signal to both ears. The idea of steering a microphone with your eyes is intriguing. Lingering questions remain after this study:

  • How well do these conditions reflect natural listening situations? Perhaps move some of these studies out of the lab and into the real world.
  • Will the benefits translate to a wearable device?
  • What candidacy factors do we need to consider when we think about who will benefit from this technology? In this study, the authors did not find pure tone average to be a significant predictor.

Speech Reception with Different Bilateral Directional Processing Schemes: The Influence of Binaural Hearing, Audiometric Asymmetry, and Acoustic Scenario (Neher, Wagener, & Latzel, 2017)

What They Asked

The next study asked whether we can identify candidates for bilateral directional processing schemes. This group looked at two general candidacy factors: symmetry of hearing thresholds below 2 kHz, and the binaural intelligibility level difference (BILD). Binaural cues are important and most helpful to people who have equal hearing in both ears. The researchers were also interested in how much patients benefit when listening with two ears instead of just one.

Background

Patients have trouble understanding speech in noise; all of the studies in this course address that clinical problem. We have talked about how beamformer microphones can improve the signal-to-noise ratio, but we do not yet have a good idea about candidacy. What is binaural hearing good for? The idea in this study was that if a patient does not have good binaural hearing, losing the binaural cues might not matter. Interaural timing differences are useful for frequencies below about 1.5 kHz, and interaural level differences are useful for higher frequencies. In the general population, we see considerable variability in binaural hearing abilities and in binaural speech recognition benefits. The binaural intelligibility level difference (BILD) is one way to measure binaural hearing; think of it as a proxy for binaural hearing ability. The idea is to measure the improvement in speech-in-noise recognition due to binaural processing: if you play speech in noise to one ear and then add the second ear, how much benefit does the person get? In this study, the BILD was measured using virtual acoustics as the difference between binaural and monaural listening.
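
In code form, the BILD is just a difference of speech reception thresholds (SRTs); the numbers below are hypothetical:

    # BILD (dB) = monaural SRT - binaural SRT: the SNR improvement gained
    # by adding the second ear. Thresholds here are made up for illustration.
    srt_monaural_db = -2.0   # SNR for 50% correct with one ear
    srt_binaural_db = -5.0   # SNR for 50% correct with both ears
    bild_db = srt_monaural_db - srt_binaural_db
    print(bild_db)  # 3.0 dB, a relatively large BILD by this study's 2-3 dB split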

Why It Matters

Of course, the primary objective is to improve outcomes for patients and, whenever possible, address their primary clinical complaints. Imagine how useful it would be to identify candidacy factors. Because the benefits of bilateral beamforming are variable, fitting this feature on all patients would be counterproductive; however, withholding it from all patients would be counterproductive too. People are variable and benefits are variable, so we would like to focus our efforts on those most likely to benefit, to both maximize benefits and limit downsides. First, we have to figure out who those people are.

What They Did

The participant pool included 39 individuals, most of them hearing aid users, with either symmetric or asymmetric hearing thresholds below 2 kHz. The two groups exhibited a similar spread of age, overall degree of hearing loss, and binaural masking level differences.

They used virtual acoustics to present the stimuli; the schematic in Figure 1 shows the loudspeaker setup they simulated. The Oldenburg sentence test was administered with maskers of 65 dB located at plus or minus 60 degrees, and the researchers also created a diffuse, cafeteria-like environment with noise coming from all around. In every case, the speech was presented from the front. Hearing aid processing was simulated on a computer linked to BTE devices, using the Master Hearing Aid research platform with directional processing. Participants were fit to NAL-RP, with further adjustments made for the headphone presentation. The hearing aid microphone conditions were as follows:

  • Pinna (two unilateral BTE devices with modest directivity above 1000 Hz; binaural cues available across the full frequency range)
  • Beamfull (information combined across the hearing aids, with a diotic output signal; no binaural cues available; SNR improvement maximized; 4.4 dB directivity improvement)
  • Beam > 800 Hz (pinna below 800 Hz; beamfull above 800 Hz; preserved low-frequency binaural cues; improved mid- and high-frequency SNR; 2.1 dB directivity improvement)
  • Beam < 2000 Hz (beamfull below 2000 Hz; pinna above 2000 Hz; improved low- and mid-frequency SNR; preserved high-frequency binaural cues; 2.1 dB directivity improvement)
  • Beambetter (beamfull, but with the signal presented to the ear with better speech-in-noise performance; 4.4 dB directivity improvement)

Figure 1. Schematic of stimuli presentation. 

What They Found

For the group with symmetric hearing loss, with the speech in front and the competing talkers at plus or minus 60 degrees, performance was worst with the Beamfull and Beambetter, the super-directional settings with no binaural cues available. This suggests that speech recognition in this competing-talker environment relied on having binaural cues. In fact, for this group, the best performance was with the Pinna condition, which has mild directionality and full binaural cues available. With the diffuse cafeteria noise, we see a different pattern of results: instead of the Pinna being best, with noise coming from all around, the Beamfull was best. The full advanced directional processing allowed the best performance.

The listeners with asymmetric hearing loss showed a somewhat different pattern. With the speech in front and the competing talkers to the sides, there were no differences across microphone conditions. In the diffuse cafeteria noise, performance in the Pinna condition was the worst, with a trend for the Beamfull to outperform the Pinna (minimal directionality) condition.

To summarize: comparing the two masker types, there was a trend for performance to be better with the diffuse noise than with the spatially separated speech maskers. For listeners with symmetric hearing loss, with the spatial maskers near the frontal speech target (conditions where binaural hearing would be expected to matter), the Beamfull and Beambetter did quite poorly and performance with the Pinna was best; in the diffuse cafeteria noise, the Beamfull was best and the Pinna worst, consistent with the importance of maximizing SNR in that situation. For listeners with asymmetric hearing loss, in contrast, there was no effect of microphone condition with the spatial maskers, suggesting that the presence or absence of binaural cues did not affect their performance. Because these listeners were less able to use binaural cues, the beamformers helped them in diffuse noise, especially when the beam was full or presented to the better ear, and conditions that relied on low-frequency binaural information yielded worse performance, suggesting they needed the additional SNR improvement instead.

Maskers at +/- 60 degrees:

  • Listeners with large BILDs (greater than 2-3 dB) benefited more from low-frequency binaural cues (Pinna; Beam > 800 Hz) than from greater directionality (Beamfull; Beambetter).
  • Listeners with small BILDs benefited more from greater directionality (Beamfull; Beambetter) than from low-frequency binaural cues (Pinna; Beam > 800 Hz).
  • In diffuse noise, maximal SNR improvement provided the most benefit, regardless of BILD.

Why is This Important?

Individualization of technologies based on patient characteristics can improve outcomes. The findings suggest that people with poor binaural hearing are not good candidates for beamforming technology in situations with spatial maskers, whereas everyone can benefit from beamforming in diffuse, cafeteria-like noise.
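
Taken together, the findings suggest a simple fitting heuristic. The sketch below distills that logic; the 2.5 dB threshold is an assumed midpoint of the study's 2-3 dB BILD split, not a validated clinical cutoff, and this is not a published protocol.

    def recommend_scheme(bild_db, diffuse_noise):
        """Pick a bilateral directional scheme from BILD and the acoustic scene."""
        if diffuse_noise:
            return "beamfull"            # maximal SNR improvement helped everyone
        if bild_db > 2.5:
            return "beam > 800 Hz"       # large BILD: preserve low-frequency binaural cues
        return "beamfull or beambetter"  # small BILD: prioritize directivity

    print(recommend_scheme(bild_db=3.5, diffuse_noise=False))  # beam > 800 Hz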

Does it Matter Clinically?

This does matter clinically, although the findings need continued investigation and replication with more listeners. Lingering questions remain:

  • Are the results repeatable and do they hold for a large population of clinical patients?
  • Will the relationships hold with clinically appropriate venting, where vent-transmitted sound is not directional?
  • Do the results translate to natural listening scenarios?
  • Will clinicians use these recommendations to fit or fine tune directionality?

Survey of Current Practice in the Fitting and Fine-Tuning of Common Signal-Processing Features in Hearing Aids for Adults (Anderson, Arehart, & Souza, 2017)

A group of researchers surveyed clinical audiologists with diverse experience about how they make clinical decisions when fitting signal-processing features for adults.

What They Found

In terms of the initial hearing aid fitting, a high percentage of providers use real-ear probe microphone measures to verify a prescriptive fitting method, in line with best practice guidelines from AAA; it is reassuring that best practices are prevalent among the clinicians surveyed. For most other features (e.g., time constants, noise suppression, feedback management, directional microphones), clinicians report using the manufacturer's first fit. The clinicians surveyed use their own expertise and patient feedback to make fine-tuning adjustments; the next most common bases for fine-tuning decisions are manufacturer software recommendations and information learned from articles, conferences, or manufacturer training. Very few clinicians use loudness measures, cognitive assessments, the TEN test, or the ANL test to fine-tune. These come up in the literature as potential candidacy measures, but it does not appear that they have been translated into the clinic.

Initial hearing aid fitting (frequency-specific gain):
  • 51% use a prescriptive fitting method
  • 35% use the manufacturer's first fit

Frequency lowering:
  • 40% use the manufacturer's first fit
  • 36% use their own expertise
  • 17% disable the feature

Summary and Conclusion

Audiologists fit and fine-tune using best practice recommendations and first fits (Anderson, Arehart, & Souza, 2018). Most audiologists are not currently using individualized loudness, ANL, or cognitive measures to fine-tune hearing aids. A collective long-term goal would be to research individualization and implement the findings in each of our clinics.
 
For bilateral beamformers, one promising area of individualization is binaural hearing (Neher, Wagener, & Latzel, 2017). In some situations, poor binaural hearing needs to be a candidacy consideration; however, in diffuse cafeteria noise, everyone benefited from this hearing aid feature.
 
Visually guided hearing aids can be beneficial in the laboratory, especially in combination with natural low-frequency binaural cues: directionality steered by eye gaze improved speech recognition, and the results highlight the importance of low-frequency binaural cues in advanced directional processing schemes (Best et al., 2017). Noise reduction technologies can also reduce listening effort. In pupillometry research with commercial hearing aids, a combination of directional microphones and digital noise reduction reduced pupil dilation (Wendt et al., 2017), and in electroencephalography research, directional microphones reduced neural phase synchronization (Bernarding et al., 2017). In essence, both digital noise reduction and directional microphones can reduce listening effort, and patients have the potential to benefit from these hearing aid technologies.

References

Anderson, M., Arehart, K., & Souza, P. (2018). Survey of current practice in the fitting and fine-tuning of common signal-processing features in hearing aids for adults. Journal of the American Academy of Audiology, 29, 118-124.

Bernarding, C., Strauss, D. J., Hannemann, R., Seidler, H., & Corona-Strauss, F. I. (2017). Neurodynamic evaluation of hearing aid features using EEG correlates of listening effort. Cognitive Neurodynamics, 11(3), 203-215.

Best, V., Roverud, E., Streeter, T., Mason, C. R., & Kidd Jr, G. (2017). The benefit of a visually guided beamformer in a dynamic speech task. Trends in Hearing, 21, 2331216517722304.

Desjardins, J. L., & Doherty, K. A. (2014). The effect of hearing aid noise reduction on listening effort in hearing-impaired adults. Ear and Hearing, 35(6), 600-610.

Desjardins, J. L. (2016). The effects of hearing aid directional microphone and noise reduction processing on listening effort in older adults with hearing loss. Journal of the American Academy of Audiology, 27(1), 29-41.

Doherty, K. A., & Desjardins, J. L. (2015). The benefit of amplification on auditory working memory function in middle-aged and young-older hearing impaired adults. Frontiers in Psychology, 6, 721.

Kidd Jr, G., Favrot, S., Desloge, J. G., Streeter, T. M., & Mason, C. R. (2013). Design and preliminary testing of a visually guided hearing aid. The Journal of the Acoustical Society of America, 133(3), EL202-EL207.

Neher, T., Wagener, K. C., & Latzel, M. (2017). Speech reception with different bilateral directional processing schemes: Influence of binaural hearing, audiometric asymmetry, and acoustic scenario. Hearing Research, 353, 36-48.

Ng, E. H. N., Rudner, M., Lunner, T., & Rönnberg, J. (2015). Noise reduction improves memory for target language speech in competing native but not foreign language speech. Ear and Hearing, 36(1), 82-91.

Picou, E. M., Moore, T. M., & Ricketts, T. A. (2017). The effects of directional processing on objective and subjective listening effort. Journal of Speech, Language, and Hearing Research, 60(1), 199-211.

Rönnberg, J., Lunner, T., Zekveld, A., Sörqvist, P., Danielsson, H., Lyxell, B., ... & Rudner, M. (2013). The ease of language understanding (ELU) model: theoretical, empirical, and clinical advances. Frontiers in Systems Neuroscience, 7, 31.

Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research, 52(5), 1230-1240.

Smeds, K., Wolters, F., & Rung, M. (2015). Estimation of signal-to-noise ratios in realistic sound scenarios. Journal of the American Academy of Audiology, 26(2), 183-196.

Wendt, D., Kollmeier, B., & Brand, T. (2015). How hearing impairment affects sentence comprehension: using eye fixations to investigate the duration of speech processing. Trends in Hearing, 19, 1-18.

Wendt, D., Hietkamp, R. K., & Lunner, T. (2017). Impact of noise and noise reduction on processing effort: A pupillometry study. Ear and Hearing, 38(6), 690-700.

Wu, Y. H., Stangl, E., Chipara, O., Hasan, S. S., Welhaven, A., & Oleson, J. (2017). Characteristics of real-world signal-to-noise ratios and speech listening situations of older adults with mild-to-moderate hearing loss. Ear and Hearing.

Citation

Picou, E. (2018, May). Vanderbilt audiology journal club: recent hearing aid innovations and technology. AudiologyOnline, Article 22772. Retrieved from https://www.audiologyonline.com

Erin Picou, AuD, PhD

Research Assistant Professor, Vanderbilt University Medical Center

Erin Picou, PhD, CCC-A,  is a research assistant professor in the Department of Hearing and Speech Sciences at Vanderbilt University Medical Center.   She has been working in the Dan Maddox Hearing Aid Research Laboratory since she was an AuD student.  After completing her PhD (also at Vanderbilt) she was hired to a research faculty position.  Her research interests are primarily related to hearing aid technologies for adults and children, with a specific focus on speech recognition, listening effort, and emotional responses to sound.  This work continues to be supported through a variety of industry and federal funding sources.  In addition to her research activities, Erin is involved with teaching and mentoring AuD students at Vanderbilt.  In addition, Erin is currently serving as an Associate Section Editor for Ear and Hearing and is on the editorial board for the American Journal of Audiology.


