As audiologists, we are accustomed to categorizing hearing loss from mild to profound. Is severe loss just a point on a continuum, somewhere between moderate and profound, or are patients with severe hearing loss a special group with special needs? In this article, we discuss the characteristics and needs of listeners with severe loss, review recent data on this population, and recommend amplification choices.
Severe loss, in this article, does not refer to patients with relatively good low- and mid-frequency hearing that falls into the severe range only in the high frequencies. Instead, we refer to severe loss as hearing loss across the speech frequency range, from at least 500 to 3000 Hz. We also expand beyond the conventional 71-90 dB HL classification (Goodman, 1965) to discuss patients with hearing thresholds greater than 60 dB HL. The rationale is that whereas the 71-90 dB HL division is arbitrary, more recent data and physiological surveys suggest that 60 dB HL can serve as a rough division point between patients with primarily outer hair cell damage and those with both outer and inner hair cell damage (Killion & Niquette, 2000; Nelson & Hinojosa, 2006; Van Tasell, 1993).
Figure 1 shows air conduction thresholds (bone conduction thresholds are not shown but were within 10 dB of the air conduction threshold at each frequency tested) and frequency-specific uncomfortable loudness levels for a patient with severe bilateral sensorineural hearing loss. She is a 55-year-old woman who has worn hearing aids since age 5 and currently wears power behind-the-ear aids with directional microphones. As with most patients with severe loss, she demonstrates difficulty with monosyllabic word recognition in quiet and with sentence recognition in noise, and she has a reduced dynamic range.
Figure 1. Example audiogram for a patient with a severe loss. Left and right graphs show air conduction thresholds and uncomfortable loudness levels for each ear. Tables show admittance and speech discrimination scores.
Compared to the average hearing aid patient, patients with severe loss are more likely to:
- Have had longstanding or lifetime hearing loss (U.S. Department of Health and Human Services, 1994),
- Have had previous hearing aid experience,
- Have poor speech discrimination. Figure 2 shows results for a group of mild to moderately impaired (blue circles) and a group of severely impaired (red circles) listeners. On average, the severe listeners have lower NU-6 scores, but the range of performance is also very large. Whereas the mild to moderately impaired listeners scored 80% or better for monosyllables in quiet, the severe listeners' scores ranged from 10% to 100%! Note also that performance is not well predicted by degree of loss: some listeners with PTAs between 60 and 70 dB HL performed as poorly as listeners with PTAs close to 100 dB HL.
- Have problems hearing in noise. Figure 3 shows results for a group of mild to moderately impaired (blue circles) and a group of severely impaired (red circles) listeners. On average, the severe listeners have higher QuickSIN scores, but as with speech in quiet, the range of performance is very large. Recall that the QuickSIN score is expressed in dB signal-to-noise ratio, so a score of 10 dB means that a listener needs a +10 dB signal-to-noise ratio to reach threshold sentence recognition. Some of the severely impaired listeners needed signal-to-noise ratios as favorable as +20 dB, and these were not always the listeners with the worst hearing loss, which emphasizes the need to measure speech-in-noise ability as part of our test battery.
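To make the scoring concrete, consider a hypothetical pair of presentation levels, chosen only to keep the arithmetic plain. If the sentences are presented at 70 dB HL in background babble at 60 dB HL, the signal-to-noise ratio is

$$\mathrm{SNR} = L_{\text{speech}} - L_{\text{babble}} = 70 - 60 = +10\ \mathrm{dB},$$

which is just enough for a listener with a QuickSIN score of 10 dB to reach threshold (50%) key-word recognition; a listener scoring 20 dB would need the speech a full 20 dB above the babble before reaching the same criterion.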
Figure 2. Relationship between pure-tone average and ability to understand speech in quiet. Blue circles are listeners with mild to moderate loss; red circles are listeners with moderately severe to severe loss.
Figure 3. Relationship between pure-tone average and ability to understand speech in noise. Blue circles are listeners with mild to moderate loss; red circles are listeners with moderately severe to severe loss.
Psychoacoustics of severe hearing loss
The main consequence of the significant inner hair cell damage that underlies severe hearing loss is broadened auditory filters, which cause a loss of frequency selectivity (Moore, Vickers, Glasberg, & Baer, 1997; Rosen, Faulkner, & Moore, 1990). The broad filters are responsible for severely impaired listeners' reduced ability to separate two signals in the frequency domain and are a major contributor to difficulty hearing in background noise. Simply put, the broader the auditory filter, the more noise passes through it, and the more difficult it is to identify a signal in the presence of that noise. It has also been hypothesized that reduced frequency selectivity leads to a greater reliance on temporal cues and perhaps to greater susceptibility to distortion of those cues with amplification (Souza, Jenstad, & Folino, 2005).
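A simple calculation, based on the standard power-spectrum model of masking and using illustrative bandwidths rather than data from this article, shows why filter width matters. If the background noise has a flat spectrum level of $N_0$ (power per Hz), the noise power passed by an auditory filter with an equivalent rectangular bandwidth of $B$ Hz is approximately

$$P_{\text{noise}} \approx N_0 \times B,$$

so a filter broadened from, say, 150 Hz to 300 Hz admits twice the noise power, about $10\log_{10}(2) \approx 3$ dB more. The signal level within the filter is essentially unchanged, so the effective signal-to-noise ratio at that filter's output is roughly 3 dB poorer.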
There has also been recent work on identifying auditory dead regions, with most of this work done by Brian Moore and his colleagues. In a dead region, the inner hair cells (and/or their associated neurons) are sparse or absent, reducing the ability to use auditory information at those characteristic frequencies. It has been suggested that making amplified speech audible within a dead region is not useful and may even be detrimental to recognition. The probability of a dead region increases for thresholds above 70 dB HL, but even in that range the incidence is only about 60% (Vinay & Moore, 2007). Most diagnoses of dead regions have been in subjects with relatively better low-frequency hearing and steeply sloping hearing loss (Scollie, 2006). However, dead regions may also be present in subjects with flat severe hearing losses (Moore, Killen, & Munro, 2003).

Clinicians can test for dead regions with their audiometer (Moore, Glasberg, & Stone, 2004); however, it may not be possible to get conclusive results with a severe loss because of the maximum output limits of the audiometer. If dead region information is available, it can be included in clinical fitting decisions. For example, if tests indicate a dead region above 2 kHz, I might consider limiting aided gain above 2 kHz. However, this should be done cautiously, because it is unclear whether this is a viable strategy for all patients (Gordo & Martinelli Iorio, 2007; Mackersie, Crocker, & Davis, 2004). As we know, there is no "one size fits all" in hearing aids; therefore, any decision to limit gain in a frequency region should be made on an individual basis, carefully considering other audiometric factors as well as patient feedback. For example, patients with dead regions may report that the audiometer signal loses tonality and sounds like static above the edge frequency of the dead region.
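For reference, the diagnostic rule below comes from the TEN(HL) test materials (Moore, Glasberg, & Stone, 2004) rather than from the data discussed here. A dead region at a test frequency is suspected when the threshold measured in the threshold-equalizing noise satisfies both

$$T_{\text{masked}} \ge L_{\text{TEN}} + 10\ \text{dB} \quad \text{and} \quad T_{\text{masked}} \ge T_{\text{absolute}} + 10\ \text{dB},$$

that is, the masked threshold must be at least 10 dB above the nominal TEN level and at least 10 dB above the unmasked threshold. With a severe loss, meeting these conditions may require a TEN level the audiometer cannot deliver, which is one reason results are often inconclusive.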
Choosing appropriate amplification
The choice of a hearing aid for a listener with severe loss can be summarized simply, if not optimistically: these listeners need all the help they can get! This does not necessarily mean high-end hearing aid processing, but it does mean carefully selected amplification maintained in good working order, with appropriately fit earmolds, frequent tubing changes, and careful fitting adjustments and follow-up. Several features of digital hearing aids have been helpful to listeners with severe loss. Among them is digital feedback suppression, which allows up to 35 dB more gain than aids without digital feedback management (Chung, 2004). This is a clear advantage over acoustic modifications (tighter-fitting earmolds, smaller vents) to control feedback. In addition, digital amplifiers are physically smaller than analog amplifiers, allowing for smaller behind-the-ear power hearing aids. Digital technology also offers the potential for unconventional solutions to severe loss, including frequency transposition (Kuk, 2006), spectral sharpening (Oxenham, Simonson, Turicchia, & Sarpeshkar, 2007), or a combination of the two (Munoz, Nelson, Rutledge, & Gago, 1999).
Perhaps the biggest advantage of digital aids is the nearly universal use of multi-channel wide dynamic range compression (WDRC) amplification. Despite early cautions that listeners with severe loss would resist WDRC processing because it would not be sufficiently loud, audibility is a prerequisite to improving speech recognition, and multichannel WDRC offers the best opportunity to improve audibility across a range of speech input levels. This is illustrated in Figure 4, which shows a comparison of consonant recognition scores for listeners with severe loss (open circles) and mild to moderate loss (filled circles). Each listener was fit with a four-channel WDRC behind-the-ear hearing aid set to NAL-NL targets (condition WDRC) and with the same hearing aid adjusted for linear processing with output compression limiting, again set to NAL targets (condition CL). Scores falling on the diagonal indicate equivalent benefit for both processors. Note that for soft (50 dB SPL) speech, performance is better with WDRC (i.e., more points fall above the diagonal than below it), and the difference between processors is larger for the listeners with severe loss. Note also the group of listeners with low CL scores.
Figure 4. Comparison of consonant recognition scores with a behind-the-ear hearing aid set to either 4-channel wide dynamic range compression (WDRC) or linear amplification with output compression limiting (CL). Open circles show scores for listeners with severe loss; filled circles show scores for listeners with mild to moderate loss. Points falling on the diagonal indicate equivalent performance for both circuits.
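To make the contrast between the two conditions concrete, here is a minimal sketch of the static input-output behavior of a single WDRC channel versus linear gain with output compression limiting. The kneepoint, ratio, gains, and maximum output are hypothetical round numbers chosen for illustration, not the NAL targets used in the study, and a real multichannel aid would apply a rule like this independently in each frequency band.

```python
# Minimal sketch: static input-output behavior of one WDRC channel vs.
# linear gain with output compression limiting (CL).
# All parameter values below are illustrative, not fitted targets.

def wdrc_output(input_db, gain_below_kneepoint=35.0, kneepoint=45.0, ratio=2.0):
    """Output level (dB SPL) for a single WDRC channel.

    Below the kneepoint the channel is linear; above it, each 1-dB increase
    in input produces only 1/ratio dB more output.
    """
    if input_db <= kneepoint:
        return input_db + gain_below_kneepoint
    return kneepoint + gain_below_kneepoint + (input_db - kneepoint) / ratio


def linear_cl_output(input_db, gain=25.0, max_output=110.0):
    """Output level (dB SPL) for linear gain with output compression limiting."""
    return min(input_db + gain, max_output)


if __name__ == "__main__":
    for level in (50, 65, 80):  # soft, average, and loud speech inputs (dB SPL)
        print(f"{level} dB SPL in -> WDRC {wdrc_output(level):.1f}, "
              f"CL {linear_cl_output(level):.1f} dB SPL out")
```

With these illustrative numbers, soft (50 dB SPL) speech comes out roughly 7 dB higher under WDRC than under the linear setting, average speech is treated about the same, and loud speech receives less gain, so the full range of speech inputs is squeezed toward the listener's reduced dynamic range.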
Another useful feature is a directional microphone. For patients unwilling or unable to use a personal FM system with a remote microphone, directional microphones are our best option for improving the signal-to-noise ratio. Automatic and adaptive directional microphones are now readily available in high-power behind-the-ear hearing aids. Considering the speech-in-noise problems experienced by listeners with severe loss, ordering and programming a directional mode seems a simple choice. Ricketts and Hornsby (2006) measured the benefits of directional microphones for listeners with severe loss. Despite audibility predictions that suggested there might be reduced directional benefit in this population, they found average speech-recognition benefits of about 14% (compared to about 17% in previously published work with mild to moderately impaired listeners). In addition, Ricketts and Henry (2002) recommend equalizing the directional mode when fitting a severe loss so that low-frequency gain is not reduced. Some products do this automatically, but in other products you will have to manually adjust the frequency response within the directional mode.
Beyond audibility - is there evidence that distortion matters?
There is strong evidence that WDRC amplification provides an audibility benefit, and improved speech recognition, over more linear strategies for listeners with severe loss. But not all WDRC processing is equivalent. In most digital aids, clinicians can adjust the gain characteristics (compression ratio and compression threshold), attack and release times (sometimes separately for each compression channel), and crossover frequencies. More compression (i.e., higher compression ratios and/or shorter time constants) means greater distortion. This is acknowledged in some manufacturers' materials. The following is an excerpt from the fitting help file for a leading manufacturer's high-end product, referring to the use of a long attack and release time: "The big advantage of this behavior is, that the time structure of speech signals is not changed. (The time structure contains valuable information e.g. to distinguish different phonemes)."
However, more compression also means improved audibility (as more of the signal is brought into the dynamic range). In theory, the parameter choices we would make to maximize audibility for a severely impaired listener with a small dynamic range would be to use low compression kneepoints, high compression ratios, short attack and release times, and multiple channels, all of which could compromise natural temporal cues that are present in a speech signal. This is also noted by hearing aid manufacturers. For the same product as described above, the manufacturer states that:
"...some hearing impaired users have such a restricted residual dynamic range that loud parts of speech exceed their UCL, while soft consonants fall below their hearing threshold. In these cases it is necessary to compress the speech itself, i.e. reduce the gain for loud vowels and increase the gain for soft consonants. For this reason the syllabic compression works with attack and release times matched to speech."
Therefore, a balance between improved audibility and minimal distortion seems necessary. A classic illustration of the difficulty of striking this balance is provided by DeGennaro, Braida, and Durlach (1986), who fit listeners with severe loss with a fast-acting 16-channel compression system that improved audibility to varying extents over linear amplification. Despite the improved audibility, speech recognition was worse with multichannel compression. In cases where linear and WDRC processing provided similar audibility, Souza, Jenstad, and Folino (2005) found poorer recognition with a two- or three-channel WDRC system than with the linear aid. Like DeGennaro and colleagues, Souza et al. suggested that temporal distortion might play a role, based on specific consonant confusions consistent with distortion of temporal cues. If so, listeners with severe loss should perform more poorly with fast-acting WDRC, which we know alters temporal cues (Jenstad & Souza, 2005). Indeed, that is the case.
Figure 5 shows results for a group of severely impaired listeners fit with either fast-acting or slow-acting WDRC. Scores were significantly worse with the fast-acting compression at inputs of 50 and 65 dB SPL. The difference did not occur at 80 dB SPL, probably because at that input level the hearing aid response was dominated by activation of the output limiter, creating temporal distortion in both processors. A comparison group of mild to moderately impaired listeners (results not shown) showed no significant difference at any input level. Some individual patients did perform better with fast-acting than with slow-acting compression, probably because the audibility advantage it conferred outweighed the effects of any temporal distortion; this tended to be the case for listeners with the poorest thresholds. Alternatively, those individuals might have better spectral discrimination, or they may apply some other listening strategy that places less "weight" on temporal cues. Current research in our laboratory focuses on predicting the balance point (in terms of optimal hearing aid parameters) between audibility and distortion for an individual. For now, it seems reasonable to use multichannel WDRC, but with the lowest distortion (i.e., the lowest compression ratio and longest time constants) that will achieve acceptable audibility.
Figure 5. Nonsense syllable recognition with fast-acting (5 ms attack time, 100 ms release time) or slow-acting (500 ms attack time, 5 sec release time) compression for 22 listeners with severe hearing loss. Results are shown for 3 input levels: 50, 65, and 80 dB SPL. Performance is significantly poorer with the fast compression at 50 and 65 dB SPL. There was no difference in scores at 80 dB SPL. This is attributed to activation of the compression limiter, which temporally distorts the signal.
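To see why the time constants in Figure 5 matter, here is a minimal sketch of the level detector that drives a compressor's gain. Only the attack and release times are taken from the figure; the input levels, sampling rate, and one-pole detector form are illustrative simplifications rather than the processing used in the study.

```python
# Minimal sketch: the level detector that drives a compressor's gain,
# run with the fast and slow attack/release times from Figure 5.
# Input levels and sampling rate are illustrative, not from the study.

import math


def smoothed_level(levels_db, fs=1000.0, attack_ms=5.0, release_ms=100.0):
    """Track input level (dB) with separate attack and release time constants,
    as a simple one-pole compressor detector would."""
    a_att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    tracked = levels_db[0]
    out = []
    for x in levels_db:
        a = a_att if x > tracked else a_rel   # rising level -> attack constant
        tracked = a * tracked + (1.0 - a) * x
        out.append(tracked)
    return out


if __name__ == "__main__":
    # 200 ms of a "soft" consonant-like segment followed by 200 ms of a
    # "loud" vowel-like segment, in dB SPL, at a 1-kHz envelope sampling rate.
    envelope = [50.0] * 200 + [75.0] * 200

    fast = smoothed_level(envelope, attack_ms=5.0, release_ms=100.0)
    slow = smoothed_level(envelope, attack_ms=500.0, release_ms=5000.0)

    # 50 ms into the loud segment, the fast detector has essentially reached
    # 75 dB (so a compressor driven by it would already have reduced gain
    # within the syllable), while the slow detector has barely moved from
    # 50 dB, leaving the 25-dB consonant-vowel contrast nearly untouched.
    print("fast detector, 50 ms into vowel:", round(fast[250], 1), "dB")
    print("slow detector, 50 ms into vowel:", round(slow[250], 1), "dB")
```

A one-pole detector like this is a simplification of real compressor detectors, but it captures why shorter time constants translate into within-syllable gain changes and a flattened temporal envelope, whereas longer time constants leave the envelope largely intact.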
The double whammy: advanced age + severe hearing loss
There is limited information about the combination of age and severe loss. However, listeners with severe loss depend to a greater extent on temporal cues, and age also affects temporal discrimination (Pichora-Fuller & Souza, 2003); therefore, the combination may put these listeners at a particular disadvantage. Folino (2002) compared amplified consonant recognition for older (aged 72-88 years) and younger (aged 19-57 years) listeners with severe hearing loss. Scores were about 20% poorer for the older group. Because subject numbers were small, this study is suggestive rather than conclusive. Furthermore, more recent consonant recognition data from our laboratory with a larger set of severely impaired subjects did not show a strong effect of age. Still, it seems prudent for clinicians to stay alert to the possibility that age, combined with reduced processing of complex sounds, could be a problem for older listeners with severe loss.
Older adults who acquire severe hearing loss relatively late in life, either through progressively worsening thresholds over time or through sudden onset loss, have unique problems. They are more likely to suffer from age-related comorbid health conditions which make adjusting to the loss difficult (Brennan & Bally, 2007). They may experience grieving and depression, or they may withdraw from work, family, or social activities (Barlow, Turner, Hammond, & Gailey, 2007), especially when the loss is relatively sudden. Therefore, appropriate vocational, psychosocial, and/or cognitive referrals should be considered for this group.
The new world of cochlear implant options
Not so long ago, patients with severe loss were fit with large behind-the-ear aids, even body aids. Those with very poor aided discrimination simply had to accept that their best possible speech recognition might be well below what they had hoped for. I recall one of my patients, a long-time body aid wearer with minimal speech recognition, whose awareness of his own voice was so poor that he simply shouted everything as loud as he could. Today, such a patient would be evaluated for a cochlear implant. The decision to implant is based on three factors: whether implantation is appropriate given the patient's physical health, whether communication benefit will be greater with the implant than with a hearing aid, and whether the patient has the necessary psychological and family support to use and maintain the implant (American Speech-Language-Hearing Association, 2004).
In 1995, guidelines were expanded to allow implantation for adults with severe as well as profound hearing loss (National Institutes of Health, 1995). Current FDA guidelines allow implantation in adults with speech discrimination scores as high as 50-60% (American Speech-Language-Hearing Association, 2004). Newer configurations, such as bilateral or bimodal implantation to improve hearing in noise and sound localization (Firszt, Reeder, & Skinner, 2008), as well as hybrid implants that combine low-frequency acoustic cues with higher-frequency electrical cues (Turner, Gantz, & Reiss, 2008), have improved cochlear implant users' communication abilities and quality of life. The success of cochlear implants may have slightly reduced the demand for hearing aid fittings, but the majority of implant-eligible adults will not end up receiving implants, for a variety of reasons (Bird & Murray, 2008). As the audiologists who see these patients, we must be knowledgeable about cochlear implant candidacy, counsel appropriately, and make appropriate referrals.
Conclusions
In many ways, patients with severe hearing loss are the most interesting patients we see, calling upon our skills as clinicians to develop assistive strategies, provide counseling, and think more creatively than a "typical" hearing aid fitting requires. As clinicians, we understand that the end result of a hearing aid fitting is limited by the processing capability of the peripheral and central auditory system, and that few patients with severe sensorineural hearing loss will achieve high levels of recognition in complex listening situations. However, we can maximize success with carefully selected processing and features and the necessary follow-up adjustments. Patients with severe loss are also the best illustration of the complexities of the auditory system, and they remind us (yet again) that adding gain is not a simple solution to communication problems.
Acknowledgments
The author thanks Marc Brennan, Evelyn Davies-Venn and Rich Folino for their help obtaining the data shown here. This work was supported by the National Institutes of Health (R01 006014 and P30 DC 04661) and by the Bloedel Hearing Research Foundation.
References
American Speech-Language-Hearing Association. (2004). Cochlear implants [Technical report]. Rockville, MD.
Barlow, J. H., Turner, A. P., Hammond, C. L., & Gailey, L. (2007). Living with late deafness: Insight from between worlds. International Journal of Audiology, 46, 442-448.
Bird, P. A., & Murray, D. (2008). Cochlear implantation: A panacea for severe hearing loss? New Zealand Medical Journal, 121(1280), 4-6.
Brennan, M., & Bally, S. J. (2007). Psychosocial adaptation to dual sensory loss in middle and late adulthood. Trends in Amplification, 11(4), 281-300.
Chung, K. (2004). Challenges and recent developments in hearing aids. Part II. Feedback and occlusion effect reduction strategies, laser shell manufacturing processes, and other signal processing technologies. Trends Amplif, 8, 125-64.
Firszt, J., Reeder, R. M., & Skinner, M. W. (2008). Restoring hearing symmetry with two cochlear implants or one cochlear implant and a contralateral hearing aid. Journal of Rehabilitation Research and Development, 45(5), 749-768.
Folino, R. (2002). The effect of age and amplification type on speech recognition for individuals with severe sensorineural hearing loss. Unpublished master's thesis: University of Washington, Seattle.
Gordo, A., & Martinelli Iorio, M. (2007). Dead regions in the cochlea at high frequencies: Implications for the adaptation to hearing aids. Braz J Otorhinolaryngol 73, 299-307.
Jenstad, L., & Souza, P. (2005). Quantifying the effect of compression, hearing and release time on speech acoustics and intelligibility. J Speech Lang Hear Res, 48, 651-667.
Killion, M., & Niquette, P. (2000). What can the pure-tone audiogram tell us about a patient's SNR loss? Hear J 53, 46-53.
Kuk, F. (2006). Linear frequency transposition: Extending the audibility of high-frequency information. Hearing Review, October, www.hearingreview.com/issues/articles/2006-10_08.asp.
Mackersie, C., Crocker, T., & Davis, R. (2004). Limiting high-frequency gain in listeners with and without suspected cochlear dead regions. J Am Acad Audiol 15, 498-507.
Moore, B.C., Glasberg, B., & Stone, M. (2004). New version of the TEN test with calibrations in dB HL. Ear Hear 25, 478-487.
Moore, B.C., Killen, T., & Munro, K.J. (2003). Application of the TEN test to hearing-impaired teenagers with severe-to-profound hearing loss. Int J Audiol 42, 465-474.
Moore, B. C., Vickers, D. A., Glasberg, B. R., & Baer, T. (1997). Comparison of real and simulated hearing impairment in subjects with unilateral and bilateral cochlear hearing loss. Br J Audiol, 31(4), 227-245.
Munoz, C.M.A., Nelson, P.B., Rutledge, J.C., & Gago, A. (1999). Frequency lowering processing for listeners with significant hearing loss. Electronics, Circuits and Systems: Proceedings of ICECS '99, 2, 741-744.
National Institutes of Health. (1995). NIH consensus conference. Cochlear implants in adults and children. Journal of the American Medical Association, 274(24), 1955-1961.
Nelson, E.G., & Hinojosa, R. (2006). Presbycusis: A human temporal bone study of individuals with downward sloping audiometric patterns of hearing loss and review of the literature. Laryngoscope, 116, 1-12.
Oxenham, A. J., Simonson, A. M., Turicchia, L., & Sarpeshkar, R. (2007). Evaluation of companding-based spectral enhancement using simulated cochlear-implant processing. Journal of the Acoustical Society of America, 121(3), 1709-1716.
Pichora-Fuller, K., & Souza, P. (2003). Effects of aging on auditory processing of speech. Int J Audiol, 42 Suppl 2, 2S11-2S16.
Ricketts, T., & Henry, P. (2002). Low-frequency gain compensation in directional hearing aids. American Journal of Audiology, 11, 29-41.
Ricketts, T., & Hornsby, B. (2006). Directional hearing aid benefit in listeners with severe hearing loss. Int J Audiol, 45, 190-197.
Rosen, S., Faulkner, A., & Moore, B. (1990). Residual frequency selectivity in the profoundly hearing impaired listener. Br J Audiol, 4, 381-392.
Scollie, S. (2006). Diagnosis and treatment of severe high-frequency hearing loss. Proceedings of the Phonak Adult Care Conference, pp. 169-179. www.phonak.com/professional/informationpool.
Souza, P., Jenstad, L., & Folino, R. (2005). Using multichannel wide-dynamic range compression in severe hearing loss: Effects on speech recognition and quality. Ear Hear, 26, 120-131.
Turner, C., Gantz, B. J., & Reiss, L. (2008). Integration of acoustic and electrical hearing. J Rehabil Res Dev, 45(5), 769-778.
U.S. Department of Health and Human Services (1994). Vital and health statistics: Prevalence and characteristics of persons with hearing trouble: United States 1990-91. Hyattsville, MD: DHHS Publication No. [PHS] 94-1516.
Van Tasell, D. J. (1993). Hearing loss, speech, and hearing aids. J Speech Hear Res, 36(2), 228-244.
Vinay & Moore, B. (2007). Prevalence of dead regions in subjects with sensorineural hearing loss. Ear Hear 28, 231-241.