It is typical for audiologists to "first fit" hearing aids using the manufacturer's proprietary algorithm as a starting point. After several days of using the hearing aids, the patient returns to the clinic for some tweaking of the acoustic parameters based on their experiences with the devices in everyday listening situations. This is referred to by some as a "complaint-driven" approach because the fine-tuning is done to reduce or eliminate negative experiences the patient is having with the hearing aids. In addition, this complaint-driven approach is predicated on the assumption that patients can accurately describe their perception of the hearing aid's performance. In turn, the audiologist needs to be able to interpret the patient's description of the problem and make appropriate adjustments. Over the past few years, user-trainable algorithms have emerged as a possible alternative to this complaint-driven approach.
This article will review fitting and selection considerations surrounding trainability in modern hearing aids. Many of today's hearing aids allow the end user to train the device so that certain acoustic parameters can be tailored to the individual's listening preferences. Although each manufacturer implements trainable algorithms in a slightly different manner, all trainable hearing aids are designed to allow the patient to train the response of the device to a desired setting over a period of time.
A trainable hearing aid, which has been clinically available for at least three years, has controls that enable the patient to change the hearing aid settings in a parametric way in their own listening environments. While the hearing aid is being worn, the device collects and stores information about the patient-selected settings; at the same time, the hearing aid's signal classification system continues to classify the acoustic input of the listening environment (a conceptual sketch of this training process follows the list below). Hearing aids with training or learning capabilities are, in theory, convenient for patients to use because the hearing aid can be taught to remember preferred settings across different listening environments. According to Dillon et al. (2006), there are several other potential advantages of trainable hearing aids, including:
- The hearing aid's settings can be customized to the patient's preferences in real world listening situations.
- Fewer follow-up visits are needed to adjust or fine-tune the hearing aids.
- Patients can retrain their hearing aids if their listening needs change over time.
- Patients are likely to take more ownership of their success with the hearing aid because they are involved in the fitting process.
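To make the training concept above more concrete, the sketch below shows one simple way a device might learn environment-specific gain offsets: each time the user adjusts the volume, the stored offset for the current environment class is nudged toward the user's choice. This is a conceptual illustration only, not any manufacturer's actual algorithm; the class names, learning rate, and structure are hypothetical.

```python
from collections import defaultdict

class TrainableGainModel:
    """Conceptual sketch of environment-specific gain training.

    Hypothetical names and parameters; real devices differ by manufacturer.
    The device logs the user's gain adjustments, tagged with the class
    assigned by its signal classifier, and nudges the stored offset for
    that class toward the user's choice.
    """

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate           # how quickly training converges
        self.trained_offset_db = defaultdict(float)  # per-environment gain offset (dB)

    def log_adjustment(self, environment, user_offset_db):
        """Record a user volume adjustment made in a classified environment."""
        current = self.trained_offset_db[environment]
        # Move the stored offset a small step toward the user's selected offset.
        self.trained_offset_db[environment] = (
            current + self.learning_rate * (user_offset_db - current)
        )

    def applied_gain(self, prescribed_gain_db, environment):
        """Gain the device would apply: prescription plus trained offset."""
        return prescribed_gain_db + self.trained_offset_db[environment]


# Example: a user repeatedly turns gain down about 4 dB in noisy environments.
model = TrainableGainModel()
for _ in range(20):
    model.log_adjustment("speech_in_noise", -4.0)
print(round(model.trained_offset_db["speech_in_noise"], 1))  # about -3.5, converging toward -4.0
```

In this sketch, the trained setting gradually converges on the user's habitual adjustment for each classified environment, which is why the starting point and the consistency of the user's adjustments both matter.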
In order to get a better understanding of how trainable algorithms fit into current clinical practices, let's examine some of the philosophical underpinnings of how we select and fit hearing aids today.
The Rise of the Prescriptive Fitting Approach: The 1970s and 1980s
One of the essential components of a successful hearing aid fitting is achieving adequate gain settings. When adequate gain is obtained, speech intelligibility can often be maximized. Although hearing aid technology has evolved over the past 30 years, the main goal of providing gain sufficient to improve speech intelligibility has remained unchanged. Numerous prescriptive fitting targets have been developed, some as early as the 1940s, in an effort to establish adequate gain in a clinically efficient manner.
By the early 1970s, prescriptive procedures largely replaced the impractical comparative hearing aid selection method that had been used previously to select hearing aids. During this decade, several prescriptive fitting methods were developed, including POGO, Libby 1/3, and Berger. Despite some important differences, these prescriptive approaches were based primarily on estimates of average preferred levels as they attempted to optimize gain. Regardless of their underlying philosophy, all prescriptive fitting methods have one thing in common: They take at least one characteristic of hearing loss that can be measured on the individual, and use this known variable to calculate a gain target across the frequency range.
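To illustrate what such a threshold-based prescription involves, here is a minimal sketch in the spirit of the POGO half-gain rule (half the pure-tone threshold as insertion gain, with reductions in the low frequencies). The correction values shown are illustrative and should be checked against the published formula rather than used clinically.

```python
# Simplified sketch of a threshold-based gain prescription in the spirit of
# POGO (half-gain rule with low-frequency reductions). Correction values are
# illustrative approximations, not a clinical implementation.

POGO_CORRECTIONS_DB = {250: -10, 500: -5, 1000: 0, 2000: 0, 3000: 0, 4000: 0}

def prescribed_insertion_gain(audiogram_db_hl):
    """Map pure-tone thresholds (dB HL, keyed by frequency in Hz) to gain targets (dB)."""
    return {
        freq: 0.5 * threshold + POGO_CORRECTIONS_DB.get(freq, 0)
        for freq, threshold in audiogram_db_hl.items()
    }

# Example: a flat 50 dB HL loss prescribes about 25 dB of gain in the mid and
# high frequencies, with less gain in the lows.
print(prescribed_insertion_gain({250: 50, 500: 50, 1000: 50, 2000: 50, 4000: 50}))
```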
Perhaps the best-known prescriptive method developed in the 1970s was the National Acoustic Laboratories (NAL) target. Like other prescriptive fitting methods developed at the time, the original NAL procedure was intended for linear devices. According to the researchers at NAL, the aim of the NAL family of targets is to maximize speech intelligibility at the patient's preferred listening level. Intelligibility is maximized when all bands of speech are perceived to have the same loudness. This is referred to as a loudness equalization method.
Several studies published in the 1980s and 1990s, using linear analog hearing aids, suggested that preferred hearing aid gain differed from the target gain derived from a prescriptive formula (Clasen, Vesterager, & Parving, 1987; Liejon, Eriksson-Mangold, & Bech-Karlsen, 1984). Specifically, studies of gain used in everyday listening situations noted that preferred gain settings were considerably lower than those specified by a prescriptive procedure. Hence, despite the theoretical advantage of gain prescriptions, there was evidence suggesting that a single prescribed gain setting did not match preferred gain in everyday listening situations, as preferred listening levels depended on the listening environment of the individual. Cox & Alexander (1991) found that for the average hearing aid wearer, the range of available volume adjustments needed to be +/- 8 dB relative to the prescribed gain, and that preferred gain varied depending on the listening environment. These results are consistent with other, similar studies of preferred gain.
During this time, hearing aids with multiple memories became popular, in part because they were able to address gain preferences across various listening situations. Thus, one set of amplification characteristics might be used when listening in quiet, another for listening in noise, and yet another for listening to music. According to one published report (Keidser, Dillon, & Byrne, 1996), the proportion of subjects who preferred different frequency responses in different listening environments was above 80%.
The Proliferation of Proprietary Fitting Algorithms: Late 1990s and Early 2000s
In the years since many of these original preferred listening level or preferred gain studies were conducted, digital hearing aids utilizing wide dynamic range compression (WDRC) to improve audibility have become popular and have largely supplanted linear analog devices. Wide dynamic range compression is designed to provide progressively less gain as the input signal increases; thus, the amount of gain changes based on the intensity level of the listening environment. For the end user, WDRC instruments have the potential to provide the preferred loudness for various listening situations without the need for a manual volume control.
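A minimal sketch of WDRC's static input/output behavior may help clarify how the applied gain decreases as the input level increases. The kneepoint, gain, and compression ratio values below are arbitrary examples for illustration, not parameters from any particular product.

```python
# Minimal sketch of the static input/output behavior of wide dynamic range
# compression: full gain below the compression threshold (kneepoint), and
# progressively less gain above it. Parameter values are illustrative only.

def wdrc_gain_db(input_level_db_spl, gain_below_knee_db=30.0,
                 kneepoint_db_spl=50.0, compression_ratio=2.0):
    """Return the gain (dB) applied to a steady input at the given level."""
    if input_level_db_spl <= kneepoint_db_spl:
        return gain_below_knee_db
    # Above the kneepoint, each 1 dB increase in input yields only
    # 1/compression_ratio dB increase in output, so gain is reduced.
    excess = input_level_db_spl - kneepoint_db_spl
    return gain_below_knee_db - excess * (1.0 - 1.0 / compression_ratio)

for level in (40, 55, 65, 80):   # soft, average, raised, loud inputs
    print(level, "dB SPL in ->", round(wdrc_gain_db(level), 1), "dB gain")
```

With a 2:1 ratio, as in this example, gain falls by 0.5 dB for every 1 dB increase in input above the kneepoint, which is how WDRC keeps soft sounds audible while keeping louder environments comfortable.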
In order to keep pace with the increased use of multiple channels of compression and other advanced features, prescriptive fitting methods incorporating WDRC processing into their fitting formulae were developed. The two most commonly used independent prescriptive fitting targets, the Desired Sensation Level (DSL) [i/o] v5 and the NAL-NL1, were updated to accommodate nonlinear processing. Although the DSL [i/o] v5 and NAL-NL1 fitting philosophies do differ, a primary goal of each is to maximize the audibility of speech for hearing aids with WDRC. Although these current procedures were deemed to be logical starting points for prescriptive fits in typical listening situations (Keidser et al., 1996), the current clinical trend is the use of proprietary algorithms to fit hearing aids, rather than the research-validated NAL or DSL targets. This approach largely takes the fitting process out of the hands of the audiologist and places it with the manufacturer.
Several reports published in the early 2000s called into question the consistency of proprietary fitting formulas. Keidser, Brew, and Peck (2003) compared the proprietary first-fit algorithms from four manufacturers to the most current versions of the DSL and NAL prescriptive targets. Results indicated an approximately 10 dB difference in gain for average-level inputs. Similar findings were reported by Killion (2004), who found that the aided Speech Intelligibility Index (SII) fell below 50% for the majority of fittings using a proprietary first-fit target. Finally, Hawkins and Cook (2003) reported that the prescriptive gain displayed in the manufacturer's software may be quite different from what is present in the actual ear, as high-frequency gain can be reduced by as much as 10-15 dB when a proprietary first-fit target is used, compared to the NAL target. Despite the lack of evidence supporting the effectiveness of proprietary fitting targets, they remain popular today.
Individual Gain Preferences in the Real World
The use of one independently validated prescriptive fitting formula (NAL) to approximate preferred gain is supported by the evidence. According to one systematic evidence-based review of eleven studies pertaining to preferred gain using the NAL targets as a starting point, gain similar to or slightly less (-3 dB) than the NAL prescription was preferred (Mueller, 2005).
In the four years since this evidence-based review was published, other studies utilizing hearing aids with advanced features have yielded results similar to Mueller's findings. Keidser, Dillon, and Convery (2008) compared the collective preferred gain results of several published reports and found the mean deviation from the NAL prescribed target to be -4 dB. Keidser, O'Brien, Carter, McLelland, and Yeend (2008) examined the relationship between preferred gain and hearing aid user experience. On average, experienced users preferred 2.6 dB less gain overall than prescribed by NAL-NL1 for an average-level input. This finding is in good agreement with Convery, Keidser, and Dillon's (2005) systematic evidence-based review of gain preference over time, which found little difference in gain preferences between new and experienced hearing aid users.
Preferred Gain and Trainable Hearing Aids
According to the published reports cited above, preferred gain, on average, closely approximates prescribed NAL gain; however, there is considerable individual variability. Hornsby and Mueller (2008) examined changes in gain for 16 bilaterally fitted subjects who were encouraged to make extensive volume control adjustments. Although the mean change in preferred gain was approximately 1 to 2 dB, there was up to +/- 8 dB of individual deviation from the NAL target.
Given the significant individual variability in preferred gain, algorithms that allow the user to train the gain settings may give the user a convenient way to establish gain preferences in real-world listening conditions. Recently, several laboratory studies and at least one real-world study have explored the feasibility of trainable hearing aids. Dreschler, Keidser, Convery, and Dillon (2008) allowed twenty-four subjects with mild to moderate sensorineural hearing loss to fine-tune hearing aid responses in a laboratory study. They utilized simulated hearing aids with a 2:1 compression ratio matched to a prescribed NAL-RP gain target. Using four different gain and frequency response control configurations across six different listening conditions, subjects were asked to establish preferred listening levels. Results showed that subjects systematically reduced the overall gain by 4 to 8 dB, and the slope by 1 to 2 dB, relative to the prescribed NAL-RP starting point across all six listening conditions. With respect to the preferred slope, there was a tendency for subjects to select a slightly flatter slope relative to the prescribed NAL-RP target. This preference for reduced gain and a flatter slope may be due to the fact that the NAL-RP target is based on a speech input at a conversational level, whereas most of the stimuli used in this study were non-speech or speech with a raised vocal effort.
Other important findings from the Dreschler et al. (2008) study relate to the reproducibility of the preferred gain settings and to subjective judgments of the control configurations. Results showed that the subjectively preferred final setting for overall gain, relative to the NAL-RP prescription, consistently depended on the baseline gain at the start of the fine-tuning session. According to the authors, this may suggest that there is a range of equally good response shapes, and that subjects ceased making adjustments once they entered the nearest part of that range.
Subjects were given a questionnaire at the end of each test session, and most of them had no difficulty using any of the manually adjustable controllers to train their hearing aids. Furthermore, the majority of the subjects found that they were able to improve both sound quality and speech clarity by using the controls. In general, the findings from the Dreschler et al. (2008) study, conducted under laboratory conditions using a limited number of acoustic environments and a finite number of hearing aid parameters, indicated that subjects were able to generate reproducible results for various listening conditions. In addition, the gain response at baseline was found to strongly influence the preferred response.
Two other articles published in 2008 suggested that initial gain settings can influence the trained outcome. Keidser, Dillon, et al. (2008) investigated the effect of the baseline response shape when repeated adjustments are made in a listening environment, with the adjusted setting after each repetition becoming the baseline for the next adjustment. Although this procedure does not exactly replicate trainability, it does provide insight into the patient's ability to reach an optimum setting from different starting points when using a procedure resembling a training paradigm. Twenty-four experienced hearing aid users with symmetrical mild to moderate hearing loss participated in the study, listening to 37 real-life recordings through simulated hearing aids under laboratory conditions.
Two baseline responses were used, "flat" and "steep." The authors created these responses by applying gain "tilts" of +/- 4.8 dB/octave to the individual's NAL-RP response. Results of the Keidser, Dillon, et al. (2008) study suggest that, on average, the effect of the baseline response remains significant even after five adaptive adjustments of gain in which the baseline response changed from one adjustment to the next based on the previously selected response. These findings also suggest that the starting baseline response biases the final preferred gain settings.
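As a rough illustration of how such tilts modify a prescribed response, the sketch below adds a fixed dB-per-octave slope around an assumed 1 kHz pivot. The pivot frequency and the example gain values are assumptions for illustration only, not details taken from the study.

```python
import math

# Sketch of applying a spectral "tilt" to a prescribed frequency response,
# similar in spirit to the flat/steep baseline conditions described above.

def tilt_response(gain_by_freq_db, tilt_db_per_octave, pivot_hz=1000.0):
    """Add tilt_db_per_octave per octave (relative to the pivot) to each band's gain."""
    return {
        freq: gain + tilt_db_per_octave * math.log2(freq / pivot_hz)
        for freq, gain in gain_by_freq_db.items()
    }

baseline = {250: 5, 500: 12, 1000: 20, 2000: 28, 4000: 30}  # illustrative gain targets (dB)
print(tilt_response(baseline, +4.8))  # "steep" baseline: more high-frequency emphasis
print(tilt_response(baseline, -4.8))  # "flat" baseline: reduced high-frequency emphasis
```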
In another recent study, using hearing aids with advanced features such as digital noise reduction and automatic feedback cancellation activated as they would be in real-world listening situations, Mueller, Hornsby, and Weber (2008) fitted 22 participants under two different prescriptive gain conditions: volume control start-up at 6 dB above the NAL-NL1 target, and start-up at 6 dB below the NAL-NL1 target. Using a crossover design, the participants wore their hearing aids for 10 to 14 days in each condition. One of the key findings from this study was that the initial start-up gain significantly influenced the trained gain. The mean preferred gain for the +6 dB start-up condition was approximately 9 dB higher than the preferred gain for the -6 dB start-up condition. Even though there is considerable individual variability, the majority of patients tend to prefer gain settings centered around their initial starting point.
Mueller et al. (2008) also examined the relationship between initial programmed gain and overall satisfaction. Satisfaction ratings were obtained 10 to 14 days post-fitting for the groups starting 6 dB below the NAL-NL1 target and 6 dB above the NAL-NL1 target. Results indicated that the group starting 6 dB below the target was, on average, more satisfied (or less dissatisfied) than the group starting 6 dB above the target. This finding would support a fitting process that aims to maximize satisfaction with loudness rather than maximize audibility by first matching the NAL target.
Hearing Aid Microphone Preferences
Although numerous laboratory studies have shown better speech understanding in background noise for hearing aids with directional microphones compared to those with omnidirectional microphones, less consistent benefit has been obtained in user preference studies. For example, Walden et al. (2007) examined patient preferences for omnidirectional or directional processing in everyday listening. The omnidirectional mode tended to be preferred in relatively quiet listening or in the presence of background noise when the signal source was not located directly in front of the listener.
In a study examining microphone preferences across eleven different signal-to-noise ratios, Walden et al. (2005) found that directional microphones were preferred when the signal-to-noise ratio was between +3 and -6 dB; approximately 70% of the study participants preferred directional processing within this range. At more favorable signal-to-noise ratios (better than +3 dB), a significant number of study participants had no clear preference for either directional or omnidirectional microphone processing. Even at signal-to-noise ratios in which directional processing was preferred by most, nearly 20% of the participants still had no preference, suggesting substantial individual variability across listeners in noisy situations.
In order to investigate the consistency of microphone preferences across listeners, Walden et al. (2007) asked 40 participants to rate their microphone preferences for passages recorded in everyday listening situations that had first been evaluated by a small group of field raters. The field raters classified the various listening environments as "clear preference for omni," "clear preference for directional," or "no preference." After the listening environments were classified, passages were recorded through GN Resound Canta 770 D hearing aids in both the omnidirectional and directional modes. Seventy-two recorded passages were then presented in random order to the 40 participants, who were asked to state their preference after listening to each passage for 20 to 30 seconds.
Results showed that preferences for omnidirectional processing were highly robust across listeners and listening environments. That is, if the field raters preferred omnidirectional processing, the study participants were extremely likely to agree with that preference. On the other hand, the field raters' preference for directional processing was not consistently replicated by the study participants in the laboratory. In many cases, a no-preference rating was assigned to recorded samples from sites where the raters had preferred directional processing. Given these findings, it appears that clinicians cannot assume that different listeners will prefer directional processing in the same noisy listening conditions.
In another real-world study of microphone preference, Palmer, Bentler, and Mueller (2006) asked 49 participants to subjectively rate their preference for omnidirectional, fixed directional, or automatic-adaptive directional microphones. Overall preferences were roughly equally distributed across the three microphone conditions. Results of several studies of microphone preference (Walden et al., 2005; Palmer et al., 2006; Walden et al., 2007) suggest that there is a substantial amount of variability across listeners, especially when listening in noisy situations. Given these findings, user control and trainability of microphone settings have the potential to be a useful feature for a large number of patients. In clinical field trials of a premium product that combined a trainable algorithm with a remote control, most users found the feature easy to use and beneficial (Unitron Passport Field Trials, Spring, 2009).
Conclusions and Practical Considerations
Even though trainable hearing aids continue to become more sophisticated, there are several points of practical importance from this review article that clinicians can use when making hearing aid selection decisions.
- Although a number of scientifically defensible proprietary formulas developed by several manufacturers are in use today, they have not been validated by independent research. The published evidence supports the use of the NAL family of prescriptive "first fit" targets as a starting point for determining gain and output for all patients. In addition to prescribing a reasonable starting point for preferred gain, using the NAL formula for all fittings, rather than relying on each manufacturer's proprietary first-fit algorithm, contributes to a more consistent fit because the same prescriptive formula is used for every fitting. While many manufacturers allow clinicians to prescribe gain and output with either the NAL-NL1 or DSL [i/o] v5, only one major manufacturer (Unitron) uses the NAL-NL1 as its default "first fit" formula. Regardless of manufacturer preference, the use of an independently validated prescriptive method arguably gives more control to the audiologist when making hearing aid selection decisions.
Practical Point for the Busy Clinician: For the audiologist fitting hearing aids from more than one manufacturer, use the NAL or DSL formula as a starting point for programming initial gain and output. Relying on a single prescriptive formula allows the audiologist to know approximately how much gain each patient is walking out the door with after the initial fitting appointment. This is especially important for audiologists who choose not to use probe microphone equipment to verify the match to a prescriptive target.
- Several studies using trainable hearing aids indicate that preferred gain is influenced by the initial gain starting point. Furthermore, there is an ample body of research suggesting that proprietary fitting targets, which are commonly used to "first fit" many hearing aids, provide substantially less gain than both the DSL and NAL fitting targets. Given that users tend to train preferred gain around the starting point, many patients may never experience full audibility of soft and average speech if a proprietary "first fit" target is used and not properly verified with probe microphone measures. This is not necessarily negative if the clinician believes that a successful fitting is primarily based on keeping the patient happy. As the Mueller et al. (2008) study suggests, patients fit 6 dB below the NAL-NL1 target are more likely to be satisfied with loudness than patients fit 6 dB above the NAL-NL1 target. However, if a primary goal of the fitting is to improve speech intelligibility, then restoring audibility is of prime importance. Therefore, using an independently validated prescriptive target would be the obvious choice as an initial starting point for gain.
Practical Point for the Busy Clinician: How initial gain and output are programmed is largely a philosophical matter. If you believe that increased audibility, which leads to improved speech intelligibility, is of primary importance, matching a NAL or DSL target is a reasonable first step in the fitting process. On the other hand, if you believe that patient satisfaction with loudness is of primary importance, then an initial starting gain well below what NAL or DSL calls for would be an important first step in your fitting process.
- Considering the significant individual variability in listener preferences for gain and microphone strategy across a range of listening environments, virtually all patients are candidates for hearing aids that can be trained to an optimal setting. Research has shown that patients can make consistent judgments regarding the acoustic settings of their hearing aids.
Practical Point for the Busy Clinician: Even though approximating the NAL gain target for average-level inputs has been shown to be closely related to average preferred gain levels, individual differences in gain and microphone strategy preferences suggest that all patients are candidates for trainable algorithms. Once initial starting points for gain have been established and verified with probe microphone measures, patients should be encouraged to "train" their devices to reach an optimal setting. Several studies have shown that most patients will train the hearing aids no more than about +/- 6 dB from the initial starting point. Therefore, when patients return for a 1 to 2 week follow-up appointment after the initial fitting, you can expect most of them to have adjusted the overall gain by no more than about +/- 6 dB. Larger deviations in trained gain relative to the initial starting point may indicate that the patient does not have a good understanding of the feature. In these cases, clinicians should consider turning the trainability feature off.
- Numerous preferred gain and directional microphone preference studies demonstrate that there is a wide range of individual differences. Based on these findings, many patients may benefit from a more interactive fitting approach in which the patient can choose a preferred setting by using a remote control. Some combination of a remote control and trainable algorithms would allow patients to tailor their preferred settings over a relatively short period of time.
Practical Point for the Busy Clinician: Combining trainable algorithms with remote controls is akin to training wheels on a bicycle. Like the toddler who is mastering the bike with the use of temporary training wheels, the hearing aid user can master the settings of their hearing aids through the implementation of trainable algorithms and a remote control. Once the patient has trained the hearing aids in various listening situations, routine use of the remote control may be eliminated.
References
Clasen, T., Vesterager, V., & Parving, A. (1987). In-the-ear hearing aids. Scandinavian Audiology, 16, 195-200.
Cox, R., & Alexander, G. (1991). Preferred hearing aid gain in everyday environments. Ear and Hearing, 12(2), 123-126.
Convery, E., Keidser, G., & Dillon, H. (2005). A review and analysis: Does amplification experience have an effect on preferred gain over time? Australian and New Zealand Journal of Audiology, 27(1), 18-32.
Dillon, H., Zakis, J., McDermott, H., Keidser, G., Dreschler, W., & Convery, E. (2006). The trainable hearing aid: What will it do for clients and clinicians? The Hearing Journal, 59(4), 30-36.
Dreschler, W., Keidser, G., Convery, E., & Dillon, H. (2008). Client-based adjustments of hearing-aid gain: The effect of different control configurations. Ear and Hearing, 29(2), 214-227.
Hawkins, D.B., & Cook, J.A. (2003). Hearing aid software predictive gain values: How accurate are they? Hearing Journal, 56(26), 28, 32, 34.
Hornsby, B. & Mueller, H.G. (2008). User preference and reliability of bilateral hearing aid gain adjustments. Journal of the American Academy of Audiology, 19(2), 158-170.
Jerger, J. (2008). Editorial: Evidence-based practice and individual differences. Journal of the American Academy of Audiology, 19, 656.
Keidser, G., Dillon, H., & Byrne, D. (1996). Guidelines for fitting multiple memory hearing aids. Journal of the American Academy of Audiology, 7(6), 406-418.
Keidser, G., Brew, C., & Peck, A. (2003). Proprietary fitting algorithms compared with one another and with generic formulas. The Hearing Journal, 56(3), 28-38.
Keidser, G., O'Brien, A., Carter, L., McLelland, M., & Yeend, I. (2008). Variation in preferred gain with experience for hearing aid users. International Journal of Audiology, 47(10), 621-635.
Keidser, G., Dillon, H., & Convery, E. (2008). The effect of the base line response on self-adjustments of hearing aid gain. Journal of the Acoustical Society of America, 124(3), 1668-1681.
Killion, M. (2004). Myths about hearing aid benefit and satisfaction. Hearing Review, August. Retrieved September 1, 2009, from www.hearingreview.com
Liejon, A., Eriksson-Mangold, M., & Bech-Karlsen, A. (1984). Preferred hearing aid gain and bass-cut in relation to prescriptive fitting. Scandinavian Audiology, 13, 157-161.
Mueller, H.G. (2005). Fitting hearing aids to adults using prescriptive methods: An evidence-based review. Journal of the American Academy of Audiology, 16(7), 448-460.
Mueller, H.G., Hornsby, B.W., & Weber, J.E. (2008). Using trainable hearing aids to examine real-world preferred gain. Journal of the American Academy of Audiology, 19(10), 758-773.
Palmer, C., Bentler, R., & Mueller, H.G. (2006). Evaluation of a second-order directional microphone hearing aid II: Self-report outcomes. Journal of the American Academy of Audiology, 17(3), 190-201.
Unitron (2009, Spring). [Passport Field Trials]. Unpublished study.
Walden, B., Surr, R., Grant, K., Van Summers, W., Cord, M., & Dyrlund, O. (2005). Effect of signal-to-noise ratio on directional microphone benefit and preference. Journal of the American Academy of Audiology, 16(9), 662-676.
Walden, B., Surr, R., Cord, M., Grant, K., Summers, V., & Dittberner, A. (2007). The robustness of hearing aid microphone preference in everyday listening environments. Journal of the American Academy of Audiology, 18(5), 358-379.
Zakis, J., Dillon, H., & McDermott, H. (2007). The design and evaluation of a hearing aid with trainable amplification parameters. Ear and Hearing, 28(6), 812-830.