AudiologyOnline

Music as an Input to a Hearing Aid

Marshall Chasin, AuD
February 12, 2007
Introduction

Music as an input to a hearing aid poses some interesting problems both for the hearing aid design engineer and for the hearing health-care professional. The following discussion equally concerns the fitting of hearing aids for musicians, as well as for those non-musicians who like to listen to music. In many cases, as will be seen, the question really is "which hearing aid manufacturer would be willing to make subtle changes for individual customers?", rather than "what is the best set of electro-acoustic parameters for users who listen to music?" In order to understand the programming and internal algorithm changes necessary for music as an input to a hearing aid or a cochlear implant, four primary, physical differences between speech and music need to be understood. They are (1) the long-term spectrum of music versus speech, (2) differing overall intensities, (3) crest factors, and (4) phonetic vs. phonemic perceptual requirements of different musicians.

  1. Long-term spectrum of music vs. the long-term spectrum of speech

    Speech is produced by a vocal tract approximately 17 cm long, with a tongue that creates constrictions over a small range and a highly damped nasal cavity. Although speech is complex, it is understandable that its long-term spectrum is well defined and largely language independent. In contrast, music can derive from many sources: the vocal tract, percussive instruments (e.g. drums), woodwind and brass instruments behaving as quarter- or half-wavelength resonators (e.g. clarinet and saxophone, respectively), and any number of stringed instruments with the rich harmonic structure of a half-wavelength resonator (e.g. violin and guitar). Any of these sources can be amplified or unamplified. Even unamplified, the music may be of low or high intensity, and depending on the instrument, its spectrum may emphasize high or low frequencies. Unlike speech, music is highly variable, and the notion of a "long-term music spectrum" is poorly conceived. There is simply no music "target" when programming hearing aids as there is for amplified speech.

  2. Differing overall intensities

    At one meter, speech averages 65 dB SPL (RMS), with peaks and valleys of about 12-15 dB in magnitude. Because all speech derives from the human vocal tract, with similar lungs imparting similar subglottal pressures to drive the vocal cords, the potential intensity range is well defined and quite restricted: approximately 30-35 dB. In contrast, depending on the music played or listened to, musical instruments can generate anything from very soft sounds (20-30 dB SPL, e.g. brushes on a jazz drum) to very loud ones, such as an amplified guitar or the brass of Wagner's Ring Cycle* (in excess of 120 dB SPL). (Incidentally, I have no idea of the excruciatingly loud sound level of a piccolo because I have a legal restraining order preventing all piccolo players from coming within 30 meters of my office!) The dynamic range of music as an input to a hearing aid is therefore on the order of 100 dB, versus only 30-35 dB for speech.

  3. Crest Factors

    The crest factor (which has nothing to do with toothpaste) is the difference in decibels between the highest peak of a waveform and its average, or root mean square (RMS), level. The RMS value corresponds with one's perception of loudness, the subjective attribute correlating with intensity. For speech, the RMS is about 65 dB, with peaks extending about 12 dB beyond that level; the crest factor for speech is therefore on the order of 12 dB. This is well known in the hearing aid industry, and compression circuitry and hearing aid test systems are built around it. The explanations for the 12 dB crest factor of speech are numerous, but generally come down to the damping, or loss of energy, inherent in our soft-walled vocal tracts. Before a spoken word is heard, the vocal energy passes a soft tongue, soft cheeks and lips, and a nasal cavity full of soft tissue and occasionally some other foreign "snotty" materials. These soft tissues damp the sound such that the peaks are generally only 12 dB more intense than the average intensity of the speech. In contrast, a trumpet has no soft walls or lips, and the same is true of most musical instruments; their peaks are less damped and "peakier" relative to the RMS than those of speech. Crest factors of 18-20 dB are not uncommon for many musical instruments. Compression systems whose detectors are based on peak sound pressure levels may therefore behave differently for music than for speech: music may push the compression system into its non-linear phase at a lower intensity than would be appropriate for that individual.

  4. Phonetic vs. phonemic perceptual requirements

    This refers to the difference between what is physically present, the vibrations in the air (phonetic), and the perceptual needs or requirements of the individual or group of individuals (phonemic). These terms derive from the study of linguistics but have direct applicability here. For all languages of the world, the long-term speech spectrum contains most of its energy in the lower-frequency region and less in the higher-frequency region (its phonetic manifestation); yet, depending on the language, clarity, as measured by word discrimination scores or the Articulation Index, derives from the mid- and high-frequency regions. This mismatch between energy (phonetic) and clarity (phonemic) is complex, but well understood in the field of hearing aid amplification.

    In contrast to speech, some musicians need to hear the lower-frequency sounds more than others, regardless of the output (phonetics) of the instrument. A clarinet player, for example, is typically satisfied with their tone only if the lower-frequency inter-resonant breathiness is at a certain level, despite the fact that the clarinet can generate significant amounts of high-frequency energy. This is in sharp contrast to a violin player, who needs to hear the magnitude of the higher-frequency harmonics before judging a sound to be good. The clarinet and the violin have similar energy spectra (phonetics) but dramatically different uses of the sound (phonemics).
These four differences between the physical properties of speech and music can now serve as the basis for differing electro-acoustic settings of a hearing aid for inputs of speech vs. music.
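The crest factor in point 3 is easy to compute directly. The following Python sketch is mine, not part of the original article; the 440 Hz test tone and 44.1 kHz sampling rate are arbitrary choices. It measures the dB gap between a waveform's peak and its RMS, and recovers the textbook ~3 dB crest factor of a pure sine; a speech signal would measure near 12 dB, and many instruments 18-20 dB.

```python
import math

def crest_factor_db(samples):
    """Crest factor: dB difference between the waveform's peak and its RMS level."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A pure sine has a crest factor of 20*log10(sqrt(2)), about 3 dB.
# Speech measures about 12 dB; many musical instruments reach 18-20 dB.
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
sine_crest = crest_factor_db(sine)  # about 3 dB
```

A square wave, by contrast, has identical peak and RMS levels and thus a crest factor of 0 dB, which is why heavily clipped signals sound "flattened."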

"The front end"- Peak-input limiting level in hearing aids

This is the most important of all factors in selecting a set of electro-acoustic parameters that are optimal, or near optimal, for listening to amplified music through a hearing aid. Many hearing aids on the market have a front-end limiter or clipper that prevents sounds above 85-90 dB SPL from effectively getting into the hearing aid. This should not be confused with output limiting: front-end limiting refers to what enters the hearing aid as the initial input. This is gradually changing in the industry, however, and many modern hearing aids have peak-input limiting levels on the order of 95-100 dB SPL. Historically such a limit has been quite reasonable, since the most intense components of shouted speech are approximately 85-90 dB SPL. In addition, manufacturers of digital hearing aids want to ensure that the analog-to-digital (A/D) converter is not overdriven. The argument is that any sound above about 90 dB SPL is not speech (or speech-like), so the limiter functions as a rudimentary noise reduction system. However, music is generally much more intense than 85-90 dB SPL, and is limited or distorted at the front end of the hearing aid. Modern hearing aid microphones can certainly handle up to 115 dB SPL without appreciable distortion, so there is no inherent reason (other than historical precedent, or protecting a poorly configured A/D converter) for setting the input-limiting level so low. Once intense inputs are limited and distorted at the front end of the hearing aid, the music will never have the desired clarity and fidelity, regardless of the "music program" that comes later. Techniques are available to avoid this front-end distortion problem: depending on the implementation, the hearing aid may employ a compressor to "sneak" the input under the peak limiter, with expansion after the limiter point in the hearing aid. Hearing aids are also available with a very high peak-input limiting level.
A good metaphor is a plane flying under a low-level bridge. Unless the bridge is raised or the plane dips under the bridge, problems will occur. A website has been established with audio files to demonstrate this phenomenon. It can be found on the Musicians' Clinics of Canada website (www.musiciansclinics.com in the Links section under "Marshall Chasin's powerpoint lectures").
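To make the bottleneck concrete, here is a small simulation of my own (the signal, levels, and function names are illustrative, not taken from any product): a musical peak at 100 dB SPL is hard-clipped by a 90 dB SPL front end, but passes a 105 dB SPL front end untouched.

```python
import math

def spl_to_pascals(db_spl):
    """Convert a level in dB SPL to linear pressure in pascals (re 20 uPa)."""
    return 20e-6 * 10 ** (db_spl / 20)

def front_end_limit(samples, limit_db_spl):
    """Hard-limit the input signal, as a low peak-input limiter would."""
    ceiling = spl_to_pascals(limit_db_spl)
    return [max(-ceiling, min(ceiling, s)) for s in samples]

# A 100 dB SPL music peak through a 90 dB SPL front end is clipped flat;
# through a 105 dB SPL front end, the same signal passes unchanged.
music = [spl_to_pascals(100) * math.sin(2 * math.pi * n / 64) for n in range(64)]
clipped = front_end_limit(music, 90)
clean = front_end_limit(music, 105)
```

The flattened tops of `clipped` are the harmonic distortion described above; no later "music program" can restore the waveform once those peaks are gone.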

Research has shown that any peak-input limiting level below 105 dB SPL will cause deleterious distortion for music, regardless of what program(s) come later in the hearing aid processing scheme (Chasin, 2003; Chasin & Russo, 2004). A "quick and dirty" clinical test to determine whether a hearing aid's front end clips or distorts loud music is to set the output high (>115 dB) and the gain low (5-8 dB). In a hearing aid test box, apply an intense signal (e.g. 100 dB SPL); there should not be any peak clipping, since the output is set above 115 dB. If there is a high level of distortion (>10%), the culprit is most likely a front-end, or peak-input, limiting level that is too low to handle intense music (Chasin, 2006). An example of a hearing aid with a very high peak-input limiter was the K-AMP. Another, depending on its implementation, is the new Venture™ digital platform from Gennum Corporation (an earlier version served as the basis for the Digi-K™). The peak-input limiting level is not required to be printed on a hearing aid specification sheet, so contact the hearing aid manufacturer's representative for the specific details of this parameter.

If a client has a hearing aid with a peak-input limiting level that is too low for music, one strategy would be to turn down the input (e.g. home stereo or MP3 system) and turn up the gain of the hearing aid. This is analogous to letting the plane fly under the bridge. Another approach simply uses a "resistive network" just after the hearing aid microphone; this "fools" the hearing aid into thinking that the input is 10-12 dB less intense. Typically this network is only engaged with a button or switch for the "music program". Figure 1 shows the effect of this resistive network. It should be noted that the output is 10-12 dB quieter only because the input is 10-12 dB quieter, with no change in gain. Most manufacturers can accomplish this "resistive network" so that the interested clinician can use this approach with their favorite hearing aids. The question now becomes "who is the most flexible hearing aid manufacturer that can implement circuit changes such as a resistive network?" rather than "what is the best hearing aid for music?"
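The level arithmetic behind the pad can be written out in a few lines. This is an illustrative model of my own, not a manufacturer's implementation; the 100 dB input and 90 dB limit are example figures in the spirit of the levels discussed above.

```python
def survives_front_end(input_db_spl, limit_db_spl, pad_db=0.0):
    """True if the (optionally padded) input stays at or below the
    peak-input limiting level, i.e. reaches the A/D converter undistorted."""
    return input_db_spl - pad_db <= limit_db_spl

# A 100 dB SPL music passage meeting a 90 dB SPL front-end limiter:
without_pad = survives_front_end(100, 90)              # False: clipped
with_pad = survives_front_end(100, 90, pad_db=12)      # True: "sneaks" under
```

Because the gain stage sits after the limiter, the 10-12 dB lost to the pad can be restored later without re-introducing the front-end distortion, which is exactly why the output in Figure 1 is quieter only by the amount of the input attenuation.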

Other techniques that are not as elegant, but still work quite well, include placing a band-aid-like cover over the hearing aid microphone(s) to "fool" the hearing aid into thinking that the input is lower (because of the attenuation of the cover). The gain may or may not need to be increased to compensate, since music is generally more intense than speech.



Figure 1. Output in a 2cc coupler showing both the unaltered frequency response and one with a specially built resistive network designed to bias the microphone by making it 10-12 dB less sensitive. This allows 10-12 dB greater headroom for more intense inputs such as music, without affecting gain.

One channel is best for music

In sharp contrast to hearing speech, especially in noise, one channel, or many channels with the same compression ratios and kneepoints, appears to be the appropriate choice for listening to music. Unlike speech, the relative balance between the lower-frequency fundamental energy and the higher-frequency harmonics is crucial for most types of music. High-fidelity music depends on many parameters, one of which is the audibility of the higher-frequency harmonics at the correct amplitude; poor fidelity can result from the intensity of these harmonics being either too low or too high. A multi-channel hearing aid that uses differing kneepoints and degrees of compression across channels runs the distinct risk of severely altering this important balance between the low-frequency fundamental and the high-frequency harmonics. Consequently, a "music program" within a hearing aid should be one channel or, equivalently, a multi-channel system in which all compression parameters are set in a similar fashion. It has been suggested that in some heavy-bass situations, a two-channel system may be useful, with the lower-frequency channel set at 500 Hz with greater attenuation at higher input levels (L. Revit, personal communication, 2004).
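The risk can be put in numbers with a toy model of my own (the kneepoints and ratios are invented for illustration, not drawn from any product): each WDRC channel is reduced to a dB-in/dB-out gain rule, and differing per-channel settings invert a 10 dB fundamental/harmonic balance that a single broadband channel preserves.

```python
def wdrc_gain_db(level_db, kneepoint_db, ratio):
    """Gain (dB) of a simplified WDRC stage: unity below the kneepoint,
    with input growth above it reduced by the compression ratio."""
    if level_db <= kneepoint_db:
        return 0.0
    return -(level_db - kneepoint_db) * (1.0 - 1.0 / ratio)

# A fundamental at 90 dB SPL and its harmonics at 80 dB SPL: a 10 dB balance.
fundamental, harmonics = 90.0, 80.0

# One channel applies a single gain to the whole spectrum: balance preserved.
g = wdrc_gain_db(fundamental, kneepoint_db=60, ratio=2)
one_channel = (fundamental + g) - (harmonics + g)      # still 10 dB

# Two channels with different ratios treat each band separately; here the
# harmonics end up LOUDER than the fundamental (the balance is inverted).
two_channel = (fundamental + wdrc_gain_db(fundamental, 60, 3)) \
            - (harmonics + wdrc_gain_db(harmonics, 60, 1.5))
```

The single-channel case leaves the spectral shape intact because one gain value is applied across the spectrum at any instant; the mismatched channels each bend their own band, which is the fidelity loss described above.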

Compression

The clinical rules of thumb for setting compression parameters for speech are rather straightforward. Compression characteristics are set based on the crest factor of speech which, recall, is on the order of 12 dB. For speech, wide dynamic range compression (WDRC) systems function to limit overly intense outputs and to ensure that soft sounds are heard as soft, medium sounds as medium, and intense sounds as intense (but not too intense). In short, these systems take the dynamic range of speech (30-35 dB) and map it onto the dynamic range of the person with a hearing impairment. There is no inherent reason why a WDRC system that works well for a client with speech as the input should not also work well for music, even though the dynamic range of music is typically 50 to 70 dB greater than that of speech; clinically, no major changes need to be made, since the more intense components of music simply fall on a different part of the input-output curve of the compression function. The difference lies in whether the compression system uses a peak detector or an RMS detector. If the compressor uses an RMS (or average intensity) detector, then no changes need to be made for a "music program". However, if the hearing aid uses a peak detector to activate the compression circuit, the kneepoint should be set about 5-8 dB higher for a "music program" than for speech. This is related to the larger crest factor of music (18 dB vs. 12 dB for speech); care should be taken that these peaks do not activate the compression circuit prematurely.
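The kneepoint adjustment can be stated as a small rule of thumb. The fragment below is my own encoding of the reasoning above; the 50 dB speech kneepoint is an arbitrary example, and real products parameterize this differently.

```python
SPEECH_CREST_DB = 12.0  # typical speech crest factor (peak above RMS)
MUSIC_CREST_DB = 18.0   # typical crest factor for musical instruments

def music_kneepoint_db(speech_kneepoint_db, detector):
    """Kneepoint for a 'music program' given the compressor's detector type.

    An RMS detector reads the same average level for speech and music, so
    the speech setting stands. A peak detector reads music's peaks about
    6 dB hotter, so the kneepoint is raised by the crest-factor difference
    to avoid engaging compression prematurely.
    """
    if detector == "rms":
        return speech_kneepoint_db
    return speech_kneepoint_db + (MUSIC_CREST_DB - SPEECH_CREST_DB)
```

With these example figures, a 50 dB speech kneepoint stays at 50 dB for an RMS detector and moves to 56 dB for a peak detector, squarely inside the 5-8 dB range given above.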

Feedback reduction systems

In most cases, since the spectral intensity of music is greater than that of speech, feedback is not an issue: the gain of the hearing aid for these higher-level inputs is typically less than for speech. However, if feedback reduction is required, or the feedback circuit cannot be disabled in a "music program", then systems that use a gain-reduction method (as in the Phonak Perseo™ or the Widex Diva™, although the Widex Diva™ only uses this approach for its music program) would be the best choice. The Bernafon ICOS™ is an example of an aid in which the feedback reduction can be disabled. The remaining two feedback management approaches, notch filtering and phase cancellation, may actually create problems: the centre frequency of the filters used with these strategies may "hop" around searching for the feedback, causing a blurry sound. Although this artifact has been reported in the literature (Chung, 2004), I have never clinically experienced this "frequency-hopping" artifact.

The phase-cancellation approach generates a signal 180 degrees out of phase with the feedback. Although this works well for speech, and the majority of hearing aid manufacturers use such a technique, the narrow-bandwidth harmonics of music (a result of its minimal damping) can, and do, confuse the hearing aid into suppressing the music. In addition, if a harmonic is of short duration, the generated cancellation signal can become audible as a brief "chirp". Two approaches have been used to remediate this: limiting the feedback detector to the very high frequencies, where the musical harmonic structure is inherently less intense (as in the Oticon Syncro™ and Siemens Triano™), or using a two-stage phase-cancellation technique with both fast and slow attack times (Siemens Acuris™ and Bernafon Symbio™). However, if at all possible, disabling any feedback reduction system is the optimal approach for listening to or playing music.
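The principle, and why it misfires on music, can be shown in a few lines. This is a sketch of the idea itself, not any manufacturer's algorithm; the test tone is arbitrary.

```python
import math

def cancellation_signal(detected):
    """The 180-degree phase-inverted copy a phase canceller injects."""
    return [-s for s in detected]

# Against true feedback (a steady narrow-band tone) cancellation is perfect:
tone = [math.sin(2 * math.pi * n / 32) for n in range(128)]
residual = [a + b for a, b in zip(tone, cancellation_signal(tone))]

# ...which is exactly the problem: a sustained musical harmonic presents the
# same steady narrow-band signature to the detector, and is suppressed just
# as completely as real feedback would be.
```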

Noise reduction systems

As with feedback reduction systems, it is best to disable the noise reduction system when listening to music. Typically the signal-to-noise ratio is quite favourable, so noise reduction is not necessary. For some hearing aids, however, the noise reduction system cannot be disabled; since the primary benefit of noise reduction seems to be improved listening comfort rather than actual noise removal, choosing an approach with a minimal noise-reducing effect may be beneficial for a "music program". It should be noted that, unlike compression systems, which use a fast attack time and a slow release time, noise reduction systems use the opposite time constants: a slow attack and a fast release. Because of the slow attack time, noise reduction systems are not as deleterious to the perception of music as they sometimes are to speech; music has a higher modulation rate and does not allow the noise reduction algorithm to engage as frequently.
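The time-constant argument can be illustrated with a generic one-pole envelope detector (my own textbook-style construction; the coefficients and the 10-sample burst are arbitrary, not taken from any product).

```python
def envelope(samples, attack, release):
    """One-pole envelope detector with separate attack/release coefficients
    (0 < coefficient <= 1; a larger value means faster tracking)."""
    env, out = 0.0, []
    for s in samples:
        level = abs(s)
        coef = attack if level > env else release
        env += coef * (level - env)
        out.append(env)
    return out

# A brief musical transient (5 samples on, 5 off) through a noise-reduction
# style detector: the slow attack never lets the level estimate build up,
# and the fast release drops it immediately afterward, so the algorithm
# rarely engages on a highly modulated signal such as music.
burst = [1.0] * 5 + [0.0] * 5
tracked = envelope(burst, attack=0.1, release=0.9)
```

With these coefficients the estimate reaches only about 0.41 of the transient's amplitude before collapsing back toward zero, which is why modulated music seldom triggers the noise reduction.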

Conclusions: The "music program"

A "music program," or a set of optimal electro-acoustic parameters for enjoying music, would include:

  1. A sufficiently high peak-input limiting level, so that the more intense components of music are not distorted at the front end of the hearing aid.

  2. Either a single-channel or a multi-channel system in which all channels are set for similar compression ratios and kneepoints.

  3. An RMS-detector compression scheme (similar to the speech-based compression system); if the hearing aid uses a peak compression detector instead, a kneepoint set to engage at inputs 5-8 dB higher.

  4. A disabled feedback reduction system, or a feedback reduction system that uses gain reduction or a more sophisticated form of phase feedback cancellation (either one with short and long attack times, or one that operates only on a restricted range of frequencies).

  5. A disabled noise reduction circuit, although because of its long attack time and short release time, this circuitry may rarely be activated by many forms of music.
Footnote
*: Wagner's Ring Cycle, or Der Ring des Nibelungen, is a four-opera cycle that Wagner composed over the course of 25 years. Wagner based the Cycle on strong human themes such as jealousy, greed, passion, and love. A presentation of the Ring Cycle requires a large orchestral company.

References

Chasin, M. (1996). Musicians and the Prevention of Hearing Loss, San Diego: Singular Publishing Group.

Chasin, M. (2003). Music and hearing aids. The Hearing Journal, 56(7), 36-41.

Chasin, M. (2006). Can your hearing aid handle loud music? A quick test will tell you. The Hearing Journal, 59(12), 22-24.

Chasin, M., & Russo, F.A. (2004). Hearing aids and music. Trends in Amplification, 8(4), 35-47.

Chung, K. (2004). Challenges and recent developments in hearing aids: Part I. Speech understanding in noise, microphone technologies and noise reduction algorithms. Trends in Amplification, 8(3), 83-124.

Kent, R.D., & Read, C. (2002). Acoustic Analysis of Speech (2nd edition). New York: Delmar.

Marshall Chasin, AuD

Director of Auditory Research at Musicians' Clinics of Canada


