
20Q: Hearing Aid Features - The Continuing Search for Patient Benefit
Ruth Bentler, PhD, H. Gustav Mueller, PhD
March 28, 2011

From the desk of Gus Mueller

Is it bigger than a bread box? Animal, vegetable or mineral? As you probably recall from your childhood, the 20 Questions format works pretty well as a car or parlor game. Over the years, I've found it works pretty well for audiology articles too. Welcome to the first installment of 20Q! Hopefully you'll find this feature to be an enjoyable way to read about the latest developments in audiology and hearing science. We have an impressive list of guest authors lined up, and our "Question Man" is champing at the bit.

 

My first writing experience using the 20Q format was in the fall of 1993. Initially, I wasn't so sure the concept would actually work, so not wanting to go down alone, I coerced my friend and colleague Ruth Bentler to co-author the piece with me. We had a quick meeting at O'Hare airport following an IHAFF gathering, outlined the article on a bar napkin, and "Measurements of TD: How Loud is Allowed?" was published in January of 1994.

 

It only seemed fitting, therefore, that Ruth, sans bar napkin, would join me for the launch of 20Q here at AudiologyOnline. The topic was easy to pick. There are only a handful of independent researchers around the U.S. who are critically evaluating all the new technology from different manufacturers, and Dr. Bentler is one of them. In this installment of 20Q we discuss the real and potential patient benefit related to three specific hearing aid features: directional technology, digital noise reduction, and frequency lowering schemes.

On occasion, Dr. Bentler's research has been categorized by some as being a bit "pesky." In case you've forgotten, a while back she was the one who reported that for understanding speech in background noise, modern hearing aids didn't do much better than an 1890s speaking tube. She was also the one who reported that you could alter someone's real-world self-report of hearing aid benefit, and his or her preference for a given hearing aid, simply by changing what you tell them about the product's signal processing, even when it's not true! I'm not saying she's become Pollyanna-esque, but this time we're not here to stir up trouble; just to report what the latest research is revealing about some popular hearing aid features and fitting concepts.

Gus Mueller

April 2011

To browse the complete collection of 20Q with Gus Mueller articles, please visit www.audiologyonline.com/20Q

 


20Q: Hearing Aid Features - The Continuing Search for Patient Benefit

 

 


1. It's good to see that the two of you are back answering questions about hearing aid selection and fitting. But do we have to talk about loudness perceptions again this time?

Mueller:
It's great to be back, and we're happy to talk about whatever you like. Let me remind you, it was you who was asking us all those questions about loudness over the years that led to our previous three 20 Question articles. So what do you have in mind for us this time?



2. Well, as always I'm having trouble keeping up with all the new technology. Does it all really work the way the manufacturer's rep says it does, and is there always patient benefit?

Mueller:
The short answers would be "maybe and probably not." But you've hit on a very broad category. Let's narrow it down to some specific features and maybe we can help. Keep in mind that for some of the new features there just isn't much supporting research evidence available, but we'll give it a shot.



3. Let's start with an "oldie but goodie," directional amplification. I sort of thought that directional was old news, but now I'm hearing about new research?

Bentler:
I know what you mean. Every time we finish a study related to directional microphone technology, I insist there is nothing important left to study. The words are barely uttered when another question arises, and so the search continues . . . for more truths! One could argue that since most hearing aids have directional microphones already, and since rarely is a directional mic worse than an omnidirectional mic (an old David Hawkins adage), it's hard to do any harm, so why not move on?



Mueller: But Ruth, you didn't move on, as I've noticed that you and Wu have published some articles on real-world directional hearing aid use just in the last year or so.



Bentler: This is true. We all know from the work of Brian Walden and others that with directional hearing aids, laboratory data often don't match real-world data in terms of performance and patient preference. For this reason, we undertook a study to try to quantify the impact of the listening environment and the availability of visual cues on successful use of this feature. Why is it that the benefits we see in clinical testing don't always translate to the patient's real-world listening?



4. I know I've certainly had patients who said they couldn't hear any differences when they switched to directional. Did you find out anything interesting?

Bentler:
We collected a lot of information with these studies, so you might have to pick through them to find all the main points (Wu & Bentler, 2010, Part 1 & 2). But, there were a few surprising findings I can tell you about. For one, even with a lot of training to understand what environments lead to more directional benefit, we found that many listeners don't find themselves in those optimal communication situations very often. When they do, the benefit of using the available visual cues often outweighs the benefit of the directional microphone, especially for the older adult. Our subjects also told us that even if the environment is well-suited for directional mic benefit, they liked the loudness afforded by the omni condition better! Since directional mics are used to reduce background noise (in the first place) and the frequency response of both modes of listening was matched in this study, that report from the patients was interesting.



5. So the real-ear frequency responses were matched, but the omnidirectional mic setting still "sounded louder?"

Bentler:
That's correct—not everyone said that, of course, but it was a common finding. I'm not entirely sure why that was happening, but it should give industry something to ponder, like perhaps increasing on-axis gain to compensate for the off-axis gain reduction when in the directional mode.



Mueller: I know, Ruth, you've already mentioned the "do no harm" adage, but I'd like to think that there should be a way to move directional technology into the "do some good" category on a more frequent basis. One factor, however, that seems to relate to real-world benefit is the instrument's signal classification system. Most hearing aids now have an automatic directional mode—that is, the hearing aid, not the user, decides when an omni or a specific directional pattern should be employed. I know why this is popular, as research has shown that as many as 50% of patients neglect to switch when they should. But with these automatic systems, a product could have the best directivity index (DI) in the world, but if the signal classification system messes things up, benefit might be minimal. And there are a ton of decisions to be made regarding this automatic activation: At what SPL level? Different levels for different inputs? Should the strength of the effect change with input SPL? What type of signal should prompt activation—noise alone, speech in noise, speech in quiet? How many conditions need to exist simultaneously? Bilaterally linked or ear-independent?
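Just to make those design decisions concrete, here is a minimal, purely hypothetical sketch (in Python) of how such an activation rule might be parameterized. The parameter names, threshold values, and decision logic below are illustrative assumptions only; they do not reflect any particular manufacturer's actual classifier.

```python
from dataclasses import dataclass

@dataclass
class AutoDirectionalConfig:
    """Hypothetical knobs an automatic directional system might expose
    (illustrative assumptions, not any manufacturer's actual design)."""
    activation_spl_db: float = 60.0   # overall input level required before switching
    trigger: str = "speech_in_noise"  # e.g., "noise", "speech_in_noise", "speech_in_quiet"
    min_conditions: int = 2           # how many detector cues must agree simultaneously
    bilaterally_linked: bool = True   # switch both ears together, or ear-independently

def should_go_directional(cfg: AutoDirectionalConfig,
                          input_spl_db: float,
                          detected_cues: set) -> bool:
    """Toy decision rule: the level criterion is met, the configured trigger class
    is among the detected cues, and enough cues agree at the same time."""
    return (input_spl_db >= cfg.activation_spl_db
            and cfg.trigger in detected_cues
            and len(detected_cues) >= cfg.min_conditions)

# Example: a 68 dB SPL input that the classifier labels as speech in babble noise
cfg = AutoDirectionalConfig()
print(should_go_directional(cfg, 68.0, {"speech_in_noise", "noise"}))  # True
```

Even this toy version makes the point: a fitting with an excellent DI can still disappoint if any one of these decisions is tuned poorly for the patient's actual listening environments.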



Bentler: Whoa. I hope I don't have to answer all those questions. I can say that Wu and I did not implement automatic switching in our studies, since we were trying to understand the preference for omnidirectional and directional modes in the real world, and an automatic switch would have blurred that distinction for us. As you point out, however, the automatic function of many of these features varies across manufacturers as to activation level, activation time, strength of effect, and so on.



6. Speaking of automatic changes in polar patterns, what's the deal with these new products that are designed to focus on speech from the sides or the back?

Mueller:
Well, as you know, historically directional technology has been geared toward improving the signal-to-noise ratio when the talker of interest is in front of the listener. Hearing aid users are in fact instructed to position themselves to maximize this condition. There are some situations, however, when the talker of interest is behind or off to the side, and it just isn't possible to turn your head—a common example would be talking to someone in the back seat of a car when you are driving. In recent years, manufacturers have developed products that will automatically switch to an anti-cardioid (reverse cardioid) pattern for these conditions. There also are unique polar patterns for speech from the side. The thought is that, with the switching capabilities available today and with good signal classification systems, it should be possible to improve the signal-to-noise ratio (SNR) regardless of the azimuth of the primary speech signal. Of course, all the old rules of distance, reverberation, noise source, etc. still apply. I think there currently are three different manufacturers who have a product that addresses this area.



Bentler: And now you're going to ask if they really work. These products have only been available for the last year or so, so research is just starting to emerge. We both have run some preliminary clinical studies with these instruments, however—Gus, do you want to go first?



Mueller: Sure. We recently conducted an efficacy lab study to determine 1) whether the classification system correctly implemented the anti-cardioid pattern, and 2) if so, whether there was improvement in speech understanding compared to omnidirectional or a traditional hypercardioid directional pattern (Mueller, Weber & Bellanova, 2011). Speech was presented from the back and noise from the front. The results were encouraging; using the Hearing in Noise Test (HINT) material we saw about a 5 dB Reception Threshold for Sentences (RTS) advantage for the anti-cardioid algorithm compared to omnidirectional, and all participants showed a significant degree of benefit, suggesting that the switching was occurring correctly. Of course, this was a fairly ideal listening environment, and more testing needs to be conducted in real-world conditions, which I understand, Ruth, you and your colleague Wu are going to tackle. Is that right?



Bentler: That is correct, but let me first say that we also did a preliminary lab study similar to yours, Gus, and the results were essentially identical. Now, here's the (sort of) real-world study that we're going to conduct, which should be fun. Since one of the advertised "use cases" for this type of directional processing is driving a car, we thought that we would work this task into the speech-in-noise testing. At the University of Iowa we have a unique driving simulator called SIREN (Simulator for Interdisciplinary Research in Ergonomics and Neuroscience). At first glance, this blue Saturn looks just like a normal car, except that it sits in the middle of a research facility at the University of Iowa Hospitals and Clinics. SIREN collects data on the behavior of drivers with a variety of health problems besides hearing loss, including Alzheimer's disease, Parkinson's disease, sleep deprivation, and other ailments. Sensors collect information from the simulator, including the positions of the steering wheel and the brake and accelerator pedals; that feedback is crucial for the researchers. We're just starting a study in which we'll compare these unique directional algorithms from different manufacturers using the SIREN.



7. I'll look forward to hearing about those findings. A final question on directional: I fit a lot of open-canal products - does it really matter if they are directional?

Mueller:
Probably. You say "open canal" (OC), so I'm going to assume you mean that. Sometimes people say open canal simply because they used a mini-BTE, a receiver-in-canal, or a thin tube, and the canal might not be all that open. But when the canal truly is open, you have two things working against good overall directivity: a reduction (leaking out) of the low frequencies (which is why you're fitting this style), and an open passage that allows all sounds to pass directly to the tympanic membrane. Because of these two factors, even with a good directional hearing aid, you won't see much directionality below 1500 Hz or so (Mueller & Ricketts, 2006). Regarding the directionality above 1500 Hz, it's going to depend on what product you use.



I know that Todd Ricketts of Vanderbilt has conducted KEMAR DI testing for dozens of OC products from all major manufacturers. I'm not sure if these data are published, but I do recall him reporting that some of these products have a DI as low as 0.5 dB or so, while the better-designed ones have a DI up around 2.0 to 2.5 dB (Ricketts, 2011). This shows that there does seem to be a difference among manufacturers, and also within manufacturers for different products. The average DI across manufacturers was around 1.5 dB (averaged across frequencies). Not great, but remember that in the omni mode the DI might be -1.0 dB or so, so there still is about a 2.5 dB advantage.
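To spell out that arithmetic, using Ricketts' approximate KEMAR averages as the assumed values, the net advantage is simply the difference between the two directivity indices:

```latex
\text{directional advantage} \approx DI_{\text{directional}} - DI_{\text{omni}}
                             \approx 1.5\,\text{dB} - (-1.0\,\text{dB}) = 2.5\,\text{dB}
```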



So, the answer to your question is . . . Yes, if you have a good product then the directional technology should make a noticeable difference for some listening situations even with an OC fitting. Keep in mind the benefit will not be as great as you would obtain with a more closed fitting with gain across all frequencies.



Bentler: Just to add to that, there has also been clinical behavioral research published, which showed a speech-recognition-in-noise benefit for directional technology in OC fittings, when compared to omnidirectional (Valente & Mispagel, 2008).



8. Okay, let's move on to digital noise reduction (DNR). Have we simply given up on hoping that it will provide a significant benefit in speech understanding?

Mueller:
I don't think anyone has given up, as you continue to see advances in the algorithms with each new product—think "baby steps." I can see why you might have gotten this perception, however, as you don't see as many publications anymore comparing speech intelligibility for "DNR-On" versus "DNR-Off." But we do continue to hear from most of our patients, even in controlled research studies, that they prefer DNR-On, so something good must be happening. We think that there are ways that DNR can indirectly improve speech "communication," and that's an area of DNR research that has been popular in recent years.



Bentler: Despite the consistent evidence over the past 12 years that DNR does little to enhance speech perception in adults or children, we recently have completed a study with pediatric listeners that offers some potential.



9. Potential? Regarding improved speech understanding for DNR?



Bentler:
Yes—that's the intriguing part, but let me tell you about the entire study. We had 60 participants (29 males, 31 females) with ages ranging from 5.5 to 10.7 years (mean of 8 years, 5 months). Of these, 50 had normal hearing; the other 10 children had hearing impairment. The latter group was recruited in order to look for trends similar to those we found with the normal-hearing listeners. We determined a priori that we would not have enough subjects, given all of the variation that hearing loss presents (age, hearing level and configuration, educational setting, etc.), to do an adequate statistical analysis, but we wanted to determine whether the trend was supported in this hearing-impaired group.



The results from the normal-hearing children were both surprising and encouraging. The children were assigned to one of two groups (25 each) prior to the data gathering: Max-DNR and Medium-DNR. Words presented in noise (CASPA 4.1; Boothroyd, 2006) were used to assess speech perception; six novel words, created with similar phonotactic probabilities and lexical neighborhoods, were used in a novel word learning paradigm (Stiles, Bentler & McGregor, 2008). We also measured ease of listening using a 6-point rating scale adapted from a pediatric pain rating scale (Wong & Baker, 1988). All stimuli were recorded through two premier-level hearing aids with the DNR feature off or on. The children listened to the stimuli in a calibrated sound field.



The results for the ease-of-listening measure were similar to nearly everything reported to date for adults: easier listening with DNR on. For the novel word learning task, DNR had no impact—a positive finding, of course, as some have suggested that DNR might have a negative effect on word learning. The most interesting outcome, however—are you listening?—was the finding that DNR significantly improved speech perception performance for these pediatric listeners! Our explanation? Just as adults report easier listening and better dual-processing when using the DNR feature, children may be better able to attend to the task when the cognitive overload (speech in noise) is reduced.



10. Interesting. I guess the message is that all these years, DNR researchers should have been using children as subjects! But back to adults. What exactly are these "other benefits" of DNR that I hear about?

Mueller:
There is not a lot of supporting research evidence, but there have been reports suggesting that DNR can lead to improved sound quality, easier listening, and reduced effort while listening to speech in background noise (Mueller & Ricketts, 2005). All good things, which indirectly could lead to improved speech understanding. Unfortunately, most of these findings were laboratory-based, so we can only assume that they transfer to actual everyday hearing aid use. But I know my partner here has looked at DNR benefits for real-world listening.



Bentler: A couple of years ago we conducted a double-blinded study of DNR effectiveness, and yes indeed, Gus, there was a real-world component (Bentler, Wu, Kettel & Hurtig, 2008). First, regarding the testing we carried out in our lab, we did not see an improvement in speech recognition, but we did find the usual positive effects: ease of listening and listening comfort. In the field study, the results of the outcome measures were pretty much the same for DNR-On versus DNR-Off. We did, however, see an improvement in (or rather, a reduction of) aversiveness for the DNR-On condition.



11. I've been hearing a lot lately regarding the notion that DNR can assist with cognitive processing. Is that something you also looked at?

Bentler:
Not in that study, but it is something that we're looking at in current research. As you may know, positive outcomes with DNR have been expanded to include better "dual-processing" when DNR is engaged. That is, we know that persons with hearing loss have more trouble communicating in noise, due in part to the cognitive demands of the situation. One recent study investigated the idea that DNR may reduce the cognitive effort needed to hear sentences in noise at various SNRs while performing a short-term memory or reaction-time task (Sarampalis, Kalluri, Edwards & Hafter, 2009). Their findings were interesting—DNR showed no benefit for speech recognition (as expected from nearly all previous studies), but it did lead to better performance on a word-memory task and quicker responses on a visual reaction-time task. The authors concluded that these results support the hypothesis that DNR reduces listening effort and frees up cognitive resources for other tasks.



Mueller: A similar, but slightly different, area where DNR has the potential to show benefit is related to mental fatigue. Ben Hornsby of Vanderbilt presented a paper on this at the most recent American Auditory Society meeting (Hornsby, 2011). We know that people with hearing loss often have to exert extra effort to make it through a long day of communication. This can lead to increased stress, tension, and fatigue, and possibly even a reduced quality of life. It could be that if DNR provided more "relaxed listening" throughout the day, stress and tension could be reduced. Researchers are looking at ways to measure this in the lab.



12. Let's move on to another technology I've been hearing about—frequency compression. What do you think of that?

Mueller:
Well first, it's probably best to talk about this type of processing using the more generic term, frequency lowering, which can be accomplished in different ways. Compression happens to be one of them. Frequency lowering refers to any of the current technologies that take high-frequency input signals—typically considered to be speech sounds—and deliver these sounds to a lower frequency region, with the intended goal of improved speech understanding (or, for children, perhaps improved speech development or production). The concept is not new—but the potential for success may be.



Bentler: Maybe you recall earlier schemes (e.g., vocoders, slowed playback) that were considered innovative trends at the time, but the current potential for success lies in the availability of the digital processing chip in most current hearing aids, which allows for real-time manipulation of the incoming signal. The two methods for accomplishing this manipulation are frequency transposition (Kuk, Keenan, Korhonen & Lau, 2009; Kuk, Peeters, Keenan, Jessen & Andersen, 2006) and frequency compression (Glista et al., 2009). Data are starting to emerge to guide us in this arena, although the value of this technology probably is different for the pediatric listener than for the adult listener.
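To give a rough feel for what frequency compression does to the signal, here is a minimal sketch in Python, assuming a simple log-frequency compression rule above a cutoff frequency (CF) with a compression ratio (CR). The rule, parameter names, and default values are illustrative assumptions only; actual algorithms and parameters differ across manufacturers, and frequency transposition instead shifts a high-frequency band downward and mixes it with the lower-frequency region.

```python
import math

def compress_frequency(f_in_hz: float, cf_hz: float = 2000.0, cr: float = 2.0) -> float:
    """Illustrative nonlinear frequency compression: components below the cutoff
    (CF) are left alone; components above it are remapped toward CF by the
    compression ratio (CR) on a log-frequency scale. Assumed values, not any
    product's defaults."""
    if f_in_hz <= cf_hz:
        return f_in_hz
    return cf_hz * math.exp(math.log(f_in_hz / cf_hz) / cr)

# A 6000 Hz /s/-like component would be delivered near 3464 Hz with CF = 2000 Hz, CR = 2
print(round(compress_frequency(6000.0)))  # 3464
```

The design intent in either approach is the same: move otherwise inaudible high-frequency speech cues into a region where the listener still has usable hearing.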



13. Why do you say that?

Bentler:
Let's start with the pediatric population. There have been reports of improved perception of sibilant sounds (/s/ and /sh/, for example) for pediatric listeners when these algorithms were used, and there is some evidence of improved production of the same sounds (Glista et al., 2009). With the pediatric population, we are often working with children who have never heard these high-frequency speech sounds. This is quite different from most adults we are fitting with hearing aids, who once had normal hearing.



Let me tell you a little bit about a large multi-site National Institutes of Health study that we are involved in. We are following close to 400 children with hearing loss, ages 1 to 6, for three years. Approximately 100 of these children were fitted with hearing aids with frequency compression, regardless of the degree or configuration of the hearing loss. Since an obvious weakness in some earlier studies of this technology was not allowing sufficient training of the listener with the novel sound of the amplification scheme, this large data set allows for a closer look at the bigger picture over the three years of data gathering: speech and language development (including vocabulary size), academic achievement, psychosocial development, and so on. To date, the children fitted with the frequency compression scheme cannot be differentiated from the children fit with more conventional technology.



14. So are you saying that in general, this technology has no benefit for children?

Bentler:
Not at all! One could interpret these findings in a more positive manner: the one thing we are starting to show is the strong correlation of audibility with speech and language development, and with all the things important in psychosocial development. It is probable that these children with the frequency compression technology are indistinguishable from the others because audibility is audibility, however it is accomplished!



15. So how about my adult patients—who would I consider to be a candidate for frequency lowering?

Mueller:
Well, Ruth has already mentioned the audibility issue, and this applies to adults too. What you would want to consider is whether the hearing loss in the high frequencies is so severe that it wouldn't be possible to make speech audible in the 3000-4000 Hz range with traditional amplification. Someone, perhaps, with relatively normal hearing in the low frequencies, a 40-50 dB hearing loss around 1500-2000 Hz, and then dropping to 90 dB or so in the 3000-4000 Hz range. Some people have suggested that we also consider frequency lowering when high-frequency cochlear dead regions are present, but it's pretty unlikely that that would have an impact on your decision.



But even when a patient meets this criterion, you still have to ask whether audibility in a different frequency region is better than no audibility at all. And of course, you'll have to verify that the lowered speech signals are indeed audible, which is an area of discussion in itself.
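As a purely hypothetical illustration of that candidacy rule of thumb (the thresholds, frequencies, and cutoff value below are illustrative assumptions, not a clinical protocol or any published fitting criterion), a screen for the audiometric profile described above might look like this:

```python
def possible_frequency_lowering_candidate(audiogram_db_hl: dict,
                                          severe_cutoff_db: float = 90.0) -> bool:
    """Toy screen for the profile described above: usable hearing through the mid
    frequencies, but thresholds of about 90 dB HL or worse at 3000-4000 Hz, where
    conventional amplification is unlikely to restore audibility. All values are
    illustrative assumptions only."""
    mid_ok = all(audiogram_db_hl.get(f, 999) <= 55 for f in (1500, 2000))
    highs_severe = all(audiogram_db_hl.get(f, 0) >= severe_cutoff_db for f in (3000, 4000))
    return mid_ok and highs_severe

# Example audiogram (frequency in Hz : threshold in dB HL)
example = {250: 10, 500: 15, 1000: 25, 1500: 45, 2000: 50, 3000: 90, 4000: 95}
print(possible_frequency_lowering_candidate(example))  # True
```

Even when such a screen flags a patient, the questions above still apply: is audibility in a lowered region better than none at all, and can the lowered cues be verified as audible?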



16. I'm not following what you said about dead regions. It's sort of a no-brainer not to amplify a dead region, isn't it?

Mueller:
That notion might sound reasonable, but it's not that simple. If we only consider high-frequency dead regions (the rules might be different when the dead region is in the low frequencies) for a downward-sloping hearing loss, we know from lab data that speech recognition often improves when audibility is added above the edge of the dead region. If you need some proof of this for subjects using real hearing aids, consider the findings from a well-designed study by Robyn Cox and colleagues (2009). These researchers studied 18 pairs of subjects matched for hearing loss (downward sloping) and age; one member of each pair had cochlear dead regions, as identified using the TEN test. The subjects in each group were fitted monaurally with two different gain prescriptions: a fitting to the NAL prescriptive targets, and a second program with approximately a 10 dB roll-off from the NAL targets in the high frequencies. Lab testing was conducted following real-world use of both settings. What did they find? For the lab testing, both the dead-region group and the control group benefited from the high frequencies for speech in quiet (CASPA) and speech in background noise (BKB-SIN). For the field ratings, only the dead-region group reported a significant advantage for the additional high-frequency gain of the NAL prescription, and 78% of the subjects in this group stated a preference for the NAL program; the primary reason was "improved clarity." It's only one study, but these findings tell me that it would be rather risky to routinely withhold audibility from these patients. Now of course, some of these individuals will have hearing loss of 90 dB or greater in the 3000-4000 Hz range, which is the issue we discussed earlier.



17. Good to know. So what about those patients you mentioned for whom we just can't obtain reasonable audibility in the high frequencies? Will they do better with frequency lowering?

Bentler:
Hard to say. O'Brien and colleagues from the NAL (2010) assessed horizontal localization errors, speech perception in noise, and self-reported benefit (using the Speech, Spatial and Qualities of Hearing Questionnaire). Half of the 24 subjects had eight weeks with the frequency compression scheme followed by eight weeks with the conventional scheme; the other half did the opposite. Their findings? For older adults, frequency compression neither harms nor helps front-back discrimination, speech recognition in noise, or satisfaction with amplification. The results of Simpson, Hersbach and McDermott (2006) are in general agreement with these findings. In contrast, Glista and her colleagues (2009) found that adults with more high-frequency loss showed benefit and preference for frequency compression.



18. That's not overly exciting. What have other studies with adults shown?

Bentler:
There have been a number of studies done in the past five years looking at outcomes with adults. The results are unclear, in that many of the studies did not indicate whether the technology actually provided improved audibility to the subjects (i.e., no verification outcomes). Also, different studies used different lengths of training and different, sometimes newly designed, tests of speech perception. Although it is difficult to compare these studies, the evidence for the usefulness of this technology for adults is not strong; but then again, adults can make that decision for themselves!



Our investigations at Iowa have shown similarly equivocal results. In one study we sought to determine whether frequency compression offered a better bimodal option for cochlear implant (CI) users compared to more conventional amplification. We looked at localization, speech perception in noise, and consonant and vowel perception in quiet. The subjects were fit to NAL-NL1 targets as far as was possible given the sloping configuration of the hearing loss. Frequency compression was turned on in one memory and off in the other. The subjects were instructed to switch back and forth on a daily basis for two months prior to returning for data gathering. Data logging indicated that they followed instructions (approximately 50% of the time spent in each mode), but the outcomes showed no advantage (or disadvantage) to using frequency lowering. For each outcome measure, there were several subjects who performed significantly better with one or the other technology, but those subjects were not unique with respect to degree or slope of hearing loss.



19. So is there a "bottom line" regarding the use of frequency lowering for adults?

Bentler:
In general, I'd say the bottom line is pretty much the same as for all emerging technology—follow the research and see where it leads you. In the meantime, as is the case with many new technologies, ask the adult patient to participate in the decision-making. In another study, with musically trained listeners, we found little reason to support (or not) the frequency compression option when looking at the standardized outcome measures, using paired comparisons and strength-of-vote tallies. Our subjects, however, were quite vocal about what was good or bad about the listening experiences, and we are subjecting those data to a more qualitative analysis. Since adults are generally top-down listeners, they rarely need all the phonemes to understand the message; they are able to make judgments based on sound quality and other attributes of the altered signal.



20. Well darn. My 20 Questions are finished and I had two or three more advanced features to ask about. Can we do this again down the road?

Mueller:
You bet. The way things change in the hearing aid world you'll probably have even more new questions next week!



About the Authors



Ruth Bentler, Ph.D. is Professor of Audiology in the Department of Communication Sciences and Disorders at the University of Iowa. She is internationally known for her research and publications in the area of modern hearing aid technology.



H. Gustav Mueller, Ph.D. is Professor of Hearing and Speech Science at Vanderbilt University. He is a Contributing Editor for AudiologyOnline, and an audiology consultant for Siemens Hearing Instruments. Dr. Mueller is an internationally known workshop lecturer and noted author.



References



Bentler, R., Wu, Y-H., Kettel, J. & Hurtig, R. (2008). Digital noise reduction: Outcomes from field and lab studies. International Journal of Audiology, 47(8), 447-460.


Boothroyd, A. (2006). Manual for CASPA 4.1 Computer Assisted Speech Perception Assessment. Copyright Arthur Boothroyd.

Cox, R.M., Johnson, J.A. & Alexander, G.C. (2009, May). High frequency dead regions: Implications for hearing aid fitting. Paper presented at the biennial meeting of the International Collegium of Rehabilitative Audiology, Oldenburg, Germany.


Glista, D., Scollie, S., Bagatto, M., Seewald, R., Parsa, V., & Johnson, A. (2009). Evaluation of nonlinear frequency compression: Clinical outcomes. International Journal of Audiology, 48(9), 632-644.


Hornsby, B.W. (2011, March). Effect of hearing aid use on mental fatigue. Paper presented at the annual meeting of the American Auditory Society, Scottsdale, AZ.


Kuk, F., Keenan, D., Korhonen, P., & Lau, C. C. (2009). Efficacy of linear frequency transposition on consonant identification in quiet and in noise. Journal of the American Academy of Audiology, 20(8), 465-479.


Kuk, F., Peeters, H., Keenan, D., Jessen, A., & Andersen, H. (2006). Linear frequency transposition: Extending the audibility of high-frequency information. Hearing Review, 13, 44-46.


Mueller, H.G., & Bentler, R.A. (1994). Measurements of TD: How loud is allowed? Hearing Journal, 47(1), 10, 42-44.


Mueller, H.G., & Ricketts, T.A. (2005). Digital noise reduction: Much ado about something? Hearing Journal, 58(1), 10-18.


Mueller, H.G., & Ricketts, T.A. (2006). Open-canal fittings: Ten take-home tips. Hearing Journal, 59(1), 24-37.


Mueller, H.G., Weber, J., & Bellanova, M. (2011). Clinical evaluation of a new hearing aid anti-cardioid directivity pattern. International Journal of Audiology, 50(4), 249-254.


O'Brien, A., Yeend, I., Hartley, L., Keidser, G., & Nyffeler, M. (2010). Evaluation of frequency compression and high-frequency directionality. Hearing Journal, 63(8), 32, 34-37.


Ricketts, T.A. (2011, April). Open fittings and other hearing aid features: Clinical tips. Paper presented at the annual meeting of the American Academy of Audiology, Chicago, IL.


Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research, 52(5), 1230-1240.


Simpson, A., Hersbach, A. A. & McDermott, H. J. (2006). Frequency-compression outcomes in listeners with steeply sloping audiograms. International Journal of Audiology, 45(11), 619-629.


Stiles, D.J., Bentler, R.A., & McGregor, K.K. (submitted). Effects of directional microphone on children's recognition and fast-mapping of speech presented from the rear azimuth.


Valente, M., & Mispagel, K.M. (2008). Unaided and aided performance with a directional open-fit hearing aid. International Journal of Audiology, 47(6), 329-336.


Wong, D.L., & Baker, C. (1988). Pain in children: Comparison of assessment scales. Pediatric Nursing, 14, 9-17.


Wu, Y-H., & Bentler, R.A. (2010). Impact of visual cues on directional benefit and preference: Part 1 - Laboratory tests. Ear and Hearing, 31(1), 22-34.


Wu, Y-H., & Bentler, R.A. (2010). Impact of visual cues on directional benefit and preference: Part 2 - Field tests. Ear and Hearing, 31(1), 35-46.

 

 




