


The Influence of Cognitive Factors on Outcomes with Frequency Lowering

Jennifer Schumacher, AuD
February 29, 2016
This article is sponsored by ReSound.

Learner Objectives

  • Readers will be able to define the types of frequency lowering features commercially available in hearing aids today.
  • Readers will be able to explain how a frequency lowering feature may interact with the cognitive abilities of patients.
  • Readers will be able to list the possible effects of frequency lowering on patient outcomes when determining candidacy. 

Introduction

Since frequency lowering technology became commercially available in modern digital hearing aids, researchers have set out to determine what benefits this technology could provide hearing-impaired patients. There is an abundance of conflicting evidence in the literature regarding possible benefits, or lack thereof, from frequency lowering technology (Simpson, Hersbach, & McDermott, 2005, 2006; Glista et al., 2009; Ching et al., 2013; Alexander, Kopun, & Stelmachowicz, 2014). At this point, what is clear is that frequency lowering is not a “one-size-fits-all” approach for hearing-impaired patients. Factors such as hearing thresholds, slope of hearing loss, age, frequency lowering settings, and experience with frequency lowering have all been found to influence outcomes with various implementations of the technology (McCreery, Venediktov, Coleman, & Leech, 2012; Alexander, 2013a). It should be noted, however, that the relationships between these factors and frequency lowering outcomes are not consistent across studies.

An additional factor, the cognitive abilities of the patient, and its influence on success with hearing aids has been a hot topic in our field in recent years (Akeroyd, 2008; Lunner, Rudner, & Rönnberg, 2009). We know that hearing involves complex processes that go beyond the peripheral auditory system, and that these abilities vary widely across listeners. The cognitive processes devoted to speech understanding can be thought of as a finite set of resources, with dedicated functions for processing speech and for storing running speech in short-term memory (Lunner et al., 2009). Because these resources are finite, an increased need for cognitive resources to process speech means a decrease in resources available for short-term memory storage (Lunner et al., 2009).

Tests of specific cognitive abilities can be used to quantify processing abilities in individual listeners. These test results can then be analyzed along with performance on auditory tasks, such as speech intelligibility or listening effort, while employing various hearing aid features or algorithms. So far, we have learned that there are interactions between the individual cognitive abilities of a listener, and the signal processing that occurs with use of a hearing aid feature (Lunner & Sundewall-Thoren, 2007; Cox & Xu, 2010; Sarampalis, Kalluri, Edwards, & Hafter, 2009). In particular, listeners who struggle more with performing working memory tasks have been shown to do more poorly when using hearing aid features that increase the processing load of the listener (Lunner & Sundewall-Thoren, 2007; Cox & Xu, 2010; Foo, Rudner, Ronnberg, & Lunner, 2007). 

This article will review evidence that suggests cognitive abilities can influence outcomes with frequency lowering, and the possible implications this may hold for hearing aid patients.

What is Frequency Lowering?

Frequency lowering algorithms modify an acoustic signal in order to provide audibility for high frequency sounds not audible via conventional amplification. This is accomplished by “moving” high frequency sounds to a lower frequency region where there is aidable hearing. The hope is that frequency lowering can lead to benefits in detection of high frequency sounds and speech understanding for the patient. The lowering is done using a variety of signal processing techniques available in current commercial hearing aids (Alexander, 2013a).

One of these signal processing techniques is frequency transposition. A range of high-frequency sounds above a cutoff frequency is identified. The algorithm does not amplify this region; instead, it transposes the peak signal within this area to a region one to two octaves below its original frequency. When the peak is transposed, the harmonic structure of the signal is maintained.
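The identify-and-shift behavior described above can be sketched in a few lines of code. The sketch below is purely illustrative: it operates on a simplified list of (frequency, magnitude) components rather than a real-time audio signal, and the 4 kHz cutoff and one-octave shift are assumed example values, not the settings of any commercial implementation.

```python
def transpose_components(components, cutoff_hz=4000.0, octaves_down=1):
    """Toy sketch of frequency transposition on a list of (frequency_hz,
    magnitude) pairs. The region above the cutoff is identified, and its
    peak component is moved down by a whole number of octaves. Halving a
    frequency preserves octave (harmonic) relationships, which is how the
    harmonic structure of the transposed peak is maintained."""
    below = [(f, m) for (f, m) in components if f <= cutoff_hz]
    above = [(f, m) for (f, m) in components if f > cutoff_hz]
    if not above:
        return below  # nothing above the cutoff to transpose
    # The high-frequency region itself is not amplified; only its peak
    # is re-inserted at a lower frequency.
    peak_f, peak_m = max(above, key=lambda fm: fm[1])
    return below + [(peak_f / (2 ** octaves_down), peak_m)]
```

For example, with components at 500, 5000, and 6000 Hz and a 4 kHz cutoff, the 500 Hz component passes through unchanged, while the strongest component above the cutoff (5000 Hz) is re-inserted one octave lower, at 2500 Hz.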

Frequency translation works in a manner similar to transposition. When the algorithm detects a sound in the high-frequency region, the peak is lowered to a frequency region below a cutoff frequency, in a way that allows the harmonic structure of the original signal to be maintained. However, unlike transposition, the full bandwidth of the hearing aid remains amplified, including the high-frequency region that was lowered. In a current implementation of the feature, the lowering is activated only in specific acoustic environments, such as those containing speech (Alexander, 2013a).

Frequency compression differs from both transposition and translation. Instead of pasting high-frequency sounds into a lower frequency region, it changes the spectral relationship between the input and output of a high-frequency region. This allows a greater range of high frequencies to be fitted into a narrower output range, similar to the way amplitude compression allows a greater range of intensities to be fitted to the hearing-impaired patient. All processing occurs above the cutoff frequency (in this case, the compression threshold).

Frequency compression can be further divided into linear and nonlinear strategies. Linear frequency compression maintains a constant relationship between the frequency of the input signal and the frequency of the compressed output. Nonlinear compression, on the other hand, compresses higher frequencies more than those frequencies closer to the compression threshold (Phonak, 2013).
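The difference between linear and nonlinear strategies can be sketched as two input-output frequency mappings. This is a simplified illustration under assumed example settings (a 1.5 kHz compression threshold and a 2:1 compression ratio, values within the ranges studied in the literature discussed in this article); actual commercial mappings differ in their details.

```python
def compress_linear(f_hz, ct_hz=1500.0, cr=2.0):
    """Linear frequency compression: a constant input-output slope of 1/CR
    above the compression threshold (CT); frequencies at or below CT are
    unchanged."""
    if f_hz <= ct_hz:
        return f_hz
    return ct_hz + (f_hz - ct_hz) / cr


def compress_nonlinear(f_hz, ct_hz=1500.0, cr=2.0):
    """Nonlinear frequency compression: the mapping is compressive on a
    log-frequency scale, so components far above CT are shifted down
    proportionally more than components just above it."""
    if f_hz <= ct_hz:
        return f_hz
    return ct_hz * (f_hz / ct_hz) ** (1.0 / cr)
```

With these example settings, a 6 kHz input maps to 3750 Hz under the linear rule but to 3000 Hz under the nonlinear rule, illustrating how nonlinear compression acts more strongly on the highest frequencies.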

Does Cognition Play a Role in Frequency Lowering Outcomes?

It is important to remember that frequency lowering is activated with the goal of improving audibility of high frequency sounds, but it does so at the cost of altering the frequency and intensity content of the original signal (Kates, Arehart, & Souza, 2013). This distortion of the signal may improve audibility, but it may also require more cognitive resources from the listener to process. Recall from the introduction that this increase in processing could reduce the resources available for short-term storage, or hinder those listeners who already require more resources to perform short-term memory tasks.

Under this theory, we might expect to see the following:

  1. Frequency lowering increases the processing resources needed for speech understanding to occur as compared to conventional amplification.
  2. As distortion to the signal increases (either due to environmental noise or more aggressive frequency compression settings), performance further decreases.
  3. Listeners with poorer working memory skills perform more poorly with frequency lowering than their peers with better working memory.

To examine possible interactions between frequency lowering and cognition, the authors of the following studies used a test of working memory, the reading span test (Daneman & Carpenter, 1980), to place participants into “low” and “high” ability groups. The participants all passed a screening for overall cognitive functioning, so the “low” and “high” labels referred to normal variation in working memory, not an indication of global cognitive functioning. Average performance on speech intelligibility or sound quality tasks, both with and without frequency compression, was then compared between the low and high working memory groups.

Arehart, Souza, Baca, and Kates (2013) measured the effect of working memory on understanding of frequency-compressed speech. The participants were older adults, divided into low and high working memory groups. Age and hearing thresholds did not differ significantly between the groups. The stimuli were speech materials processed using three compression thresholds (2, 1.5, and 1 kHz), three compression ratios (3:1, 2:1, and 1.5:1), and conventional amplification; all processing conditions were presented in quiet and in babble noise at five signal-to-noise ratios (-10, -5, 0, +5, and +10 dB SNR). On average, the participants performed more poorly on the speech intelligibility task as overall distortion increased (poorer SNR and more aggressive frequency compression). Age also factored into performance, with the oldest participants performing more poorly than the younger ones. Working memory played a role as well: the participants in the high working memory group were better at the task than those in the low group. This was especially true as the distortion in the stimuli increased; listeners with high working memory abilities were less susceptible than the low working memory group to the distortion caused by frequency lowering and babble noise.

Other research has shown no relationship between working memory and intelligibility of frequency-compressed speech (Ellis & Munro, 2013, 2015). The same factors that have been shown to contribute to variability in performance with frequency lowering may also contribute to the differences seen in these results. Ellis and Munro (2013) investigated working memory abilities and the benefit obtained from frequency compression in a speech-in-noise task. The authors used a fixed 1.6 kHz compression threshold with compression ratios of 2:1 and 3:1, similar to Arehart et al. (2013), but the participants in the study all had normal hearing. There was no effect of working memory on performance with frequency compression. The difference in results between this study and Arehart et al. (2013) can perhaps be explained by an interaction between hearing thresholds and cognitive abilities. Souza, Arehart, Shen, Anderson, and Kates (2015) analyzed the data from Arehart et al. (2013) and found an interaction between working memory and hearing thresholds: the listeners with the poorest working memory abilities and the worst hearing thresholds performed the most poorly when using frequency compression.

Ellis and Munro (2015) performed similar testing with hearing-impaired listeners. Unlike their first study (Ellis & Munro, 2013) or Arehart et al. (2013), the subjects were fitted with an individualized frequency compression setting. Many of the frequency compression settings were more conservative than those used in Arehart et al. (2013). Again, no effect of working memory on frequency compressed speech recognition was found. In this case, it is possible that the individualized, more conservative frequency compression settings minimized distortion from frequency compression, and did not tax cognitive resources enough for working memory differences to become an important factor in performance.

Cognitive factors do not appear to hold much influence on judgments of sound quality for frequency-compressed speech. Several studies have measured the relationships between sound quality ratings for frequency-compressed speech and various factors (Kates et al., 2013; Souza, Arehart, Kates, Croghan, & Gehani, 2013; Souza et al., 2015). In these studies, sound quality ratings were best predicted by the amount of distortion present in the frequency-compressed signal. More aggressive compression settings (a lower compression threshold, a larger compression ratio) led to both lower intelligibility scores and lower ratings of sound quality (Kates et al., 2013; Souza et al., 2013; Souza et al., 2015). When noise (an additional form of distortion) was added to speech, these conditions were rated more poorly than speech in quiet, and an interaction between the presence of noise and more aggressive compression settings was measured (Souza et al., 2015).

Cognitive abilities and frequency lowering were examined in a slightly different manner by Kokx-Ryan et al. (2015). Instead of measuring working memory capacity and using the score to categorize participants as having high or low working memory abilities, a direct measure of cognitive load was taken during a speech intelligibility task. Participants performed a speech-in-quiet and a speech-in-noise task. The amount of cognitive resources “left over” during the task was measured using a second, concurrent test of reaction time. Better performance on the secondary reaction time task would indicate more spare cognitive resources, suggesting that less effort was needed to complete the speech understanding task. The participants completed the dual task with frequency compression and with conventional amplification. Prior to testing, frequency compression settings had been optimized and verified for each listener, and the frequency compression settings that yielded the best consonant identification score for each individual were used for the dual task.

There was no effect of frequency compression measured on any portion of the task. Average speech intelligibility in quiet and in noise, and average reaction time in quiet and in noise were the same for frequency compression on and off. It would have been interesting if the participants had been asked to rate the amount of listening effort they expended while performing the dual task. Perhaps some effect could have been measured there, similar to Sarampalis et al. (2009), where no difference in speech intelligibility with noise reduction on or off was measured, but ratings of listening effort did show a positive effect from use of noise reduction.

Souza et al. (2015) included a non-linguistic measure of reaction time, and compared the scores to speech intelligibility performance with and without frequency compression. Just as in the Kokx-Ryan et al. (2015) study, no correlation was found between measured reaction time and speech intelligibility. Because the reaction time task in this study did not involve speech, it is difficult to determine whether its non-linguistic nature led to the lack of effect.

What Does This Mean for Patients?

We can take away two points from the reviewed studies. First, cognitive factors of the patient, not just demographic (age) or peripheral (hearing thresholds) factors, may need to be considered when addressing candidacy and feature settings for frequency lowering, at least in some cases (Kates et al., 2015). Secondly, it is important to minimize distortion from use of frequency lowering if the feature is to be used, or even in the design of the feature itself.

It would be beneficial for clinicians to include in the hearing examination a measure of speech-in-noise abilities, such as the QuickSIN (Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004), to measure the ability of patients to understand speech distorted by noise. QuickSIN scores were found to be the best predictor of intelligibility performance with frequency-compressed speech in Kates et al. (2013), even though the test was presented to the listeners without any amplification. Perhaps the QuickSIN can be thought of as a measure of “susceptibility” to distortion, whether from noise or from processing algorithms. Patients who perform poorly on the QuickSIN may also struggle to receive speech understanding benefit from frequency lowering.

Tools for measuring working memory abilities are not available clinically at this time. Clinicians can consider, however, adding screening tools for global cognitive functioning, such as the Mini Mental State Examination (MMSE; Folstein, Folstein, & McHugh, 1975). Although this screening tool does not measure working memory abilities, it can identify patients who may have overall deficits in cognitive function. For these patients, clinicians may want to be more mindful of what signal modifications they are using in the hearing aid, including frequency lowering.

In the future, it may be possible to use objective test measurements to determine which frequency lowering settings are optimal for a patient (Kirby & Brown, 2015). For now, it has been recommended that when setting frequency lowering parameters, the clinician should use the least amount of lowering needed to provide audibility of high frequency sounds, in order to maximize audible bandwidth (Alexander, 2013b) and minimize distortion (Stender & Groth, 2014). The findings from the studies discussed in this article support this recommendation. Recall that greater amounts of frequency compression were associated with poorer speech intelligibility and poorer ratings of sound quality (Arehart et al., 2013; Souza et al., 2013; Souza et al., 2015). Note that these findings do not suggest which frequency lowering parameters should be used in individual patients; that is for the clinician to decide on a case-by-case basis. It should also be mentioned that it is not possible to determine which individual settings provide audibility with minimal distortion for each patient without verification of the feature.

The goal of minimizing the distortion effects of frequency lowering has also guided the design of the feature itself. For instance, the ReSound Sound Shaper feature uses linear frequency compression, with the idea of maintaining the harmonic structure of the compressed speech sounds for minimal changes in sound quality (Haastrup, 2013). In addition, high frequency sounds are only compressed if they are determined to be speech. The fitting software will suggest activation of Sound Shaper if the patient’s degree and slope of high frequency hearing loss suggest possible benefit from the feature, but Sound Shaper is not activated unless the clinician manually chooses to do so. Fitting parameters for Sound Shaper were simplified to allow the clinician to individualize the frequency compression settings, but also to prevent inadvertent application of too much compression of high frequency sounds.

Conclusions

Our field is continuing to gain knowledge on how outcomes with hearing aid technology are influenced by many factors, including cognitive abilities. Frequency lowering features have become more common in today’s technology, and the possible benefits, and costs, which come with the use of the feature should be carefully weighed for each individual patient. 

References

Akeroyd, M. (2008). Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. International Journal of Audiology, 47, S53–S71.

Alexander, J.M. (2013a). Individual variability in recognition of frequency-lowered speech. Seminars in Hearing, 34, 86-109.

Alexander, J.M. (2013b). 20Q: The highs and lows of frequency lowering amplification. AudiologyOnline, Article #11772. Retrieved from: www.audiologyonline.com

Alexander, J.M., Kopun, J. G., & Stelmachowicz, P.G. (2014). Effects of frequency compression and frequency transposition on fricative and affricate perception in listeners with normal hearing and mild to moderate hearing loss. Ear & Hearing, 35, 519-532.

Arehart, K.H., Souza, P., Baca, R., & Kates, J.M. (2013). Working memory, age, and hearing loss: Susceptibility to hearing aid distortion. Ear & Hearing, 34, 251-260.

Ching, T. Y., Day, J., Zhang, V., Dillon, H., VanBuynder, P., Seeto, M., … Flynn, C. (2013). A randomized controlled trial of nonlinear frequency compression versus conventional processing in hearing aids: speech and language of children at three years of age. International Journal of Audiology, 52, S46-S54.

Cox, R.M., & Xu, J. (2010). Short and long compression release times: Speech understanding, real-world preferences, and association with cognitive ability. Journal of the American Academy of Audiology, 21, 121–138. 

Daneman, M., & Carpenter, P.A. (1980). Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior, 19, 450–466.

Ellis, R. J., & Munro, K. J. (2013). Does cognitive function predict frequency compressed speech recognition in listeners with normal hearing and normal cognition? International Journal of Audiology, 52, 14–22.

Ellis, R. J., & Munro, K. J. (2015). Predictors of aided speech recognition, with and without frequency compression, in older adults. International Journal of Audiology, 54, 467–475.

Folstein, M.F., Folstein, S.E., & McHugh, P.R. (1975). Mini-Mental State – Practical method for grading cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–198. 

Foo, C., Rudner, M., Rönnberg, J., & Lunner, T. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology, 18, 618–631.

Glista, D., Scollie, S., Bagatto, M., Seewald, R., Parsa, V., & Johnson, A. (2009). Evaluation of nonlinear frequency compression: Clinical outcomes. International Journal of Audiology, 48, 632–644.

Haastrup, A. (2013). Improving high frequency audibility with Sound Shaper. Retrieved from: https://www.resound.nl/~/media/DownloadLibrary/ReSound/Products/LiNX/White,-sp-,paper,-sp-,sound,-sp-,shaper.ashx.

Kates, J.M., Arehart, K.H., & Souza, P. (2013). Integrating cognitive and peripheral factors in predicting hearing-aid processing benefit. Journal of the Acoustical Society of America, 134, 4458–4469.

Killion, M. C., Niquette, P. A., Gudmundsen, G. I., Revit, L. J., & Banerjee, S. (2004). Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing impaired listeners. Journal of the Acoustical Society of America, 116, 2395–2405.

Kirby, B.J., & Brown, C.J. (2015). Effects of nonlinear frequency compression on ACC amplitude and listener performance. Ear & Hearing, 36, 261-270.

Kokx-Ryan, M., Cohen, J., Cord, M.T., Walden, T.C., Makashay, M.J., Sheffield, B.M., & Brungart, D.S. (2015). Benefits of nonlinear frequency compression in adult hearing aid users. Journal of the American Academy of Audiology, 26, 838-855.

Lunner, T., Rudner, M., & Rönnberg, J. (2009). Cognition and hearing aids. Scandinavian Journal of Psychology, 50, 395–403.

Lunner, T., & Sundewall-Thoren, E. (2007). Interactions between cognition, compression, and listening conditions: Effects on speech-in-noise performance in a two-channel hearing aid. Journal of the American Academy of Audiology, 18, 604–617.

McCreery, R. W., Venediktov, R. A., Coleman, J. J., & Leech, H. M. (2012). An evidence-based systematic review of frequency lowering in hearing aids for school-age children with hearing loss. American Journal of Audiology, 21, 313–328.

Phonak. (2013). SoundRecover: Background information from the field of audiology. Retrieved on December 17, 2015 at https://www.phonakpro.com/content/dam/phonakpro/gc_hq/en/resources/evidence/white_paper/documents/Compendium_No3_SoundRecover.pdf.

Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech Language Hearing Research, 52, 1230–1240.

Simpson, A., Hersbach, A. A., & McDermott, H. J. (2005). Improvements in speech perception with an experimental nonlinear frequency-compression hearing device. International Journal of Audiology, 44, 281-292.

Simpson, A., Hersbach, A. A., & McDermott, H. J. (2006). Frequency compression outcomes for listeners with steeply sloping audiograms. International Journal of Audiology, 45, 619-629.

Souza, P., Arehart, K.H., Shen, J., Anderson, M., & Kates, J.M. (2015). Working memory and intelligibility of hearing-aid processed speech. Frontiers in Psychology, 6, 1-14. doi: 10.3389/fpsyg.2015.00526.

Souza, P., Arehart, K.H., Kates, J.M., Croghan, N.B., & Gehani, N. (2013). Exploring the limits of frequency lowering. Journal of Speech Language Hearing Research, 56, 1349–1363.

Stender, T., & Groth, J. (2014). Evidence-based and practical considerations when fitting Sound Shaper for individual patients. Hearing Review, 24, 20-23.

Cite this Content as:

 Schumacher, J. (2016, March). The influence of cognitive factors on outcomes with frequency lowering. AudiologyOnline, Article 16425. Retrieved from https://www.audiologyonline.com.

 



Jennifer Schumacher, AuD

Audiologist, GN ReSound Global Audiology

Jennifer Schumacher, AuD is an audiologist in the GN ReSound Global Audiology group. She is involved in designing and conducting clinical research trials, analyzing data, supervising AuD students and presenting audiology information to students and clinicians. Her primary areas of interest include hearing aid technology, and using this technology to assist in providing patient-centered solutions.



